AI Chatbots Secretly Ran a Mind-Control Experiment on Reddit


Reddit users are pissed—and rightfully so. A group of AI researchers from the University of Zurich just got caught running an unauthorized psychological experiment on r/ChangeMyView, one of the site’s biggest debate communities, and no one who participated had any idea it was happening.

The experiment involved AI chatbots posing as regular users to see if they could subtly sway opinions on hot-button topics. These weren’t bland comment bots posting generic takes. They were tailored personas—one claimed to be a male rape victim minimizing his trauma, another said women raised by protective parents were more vulnerable to domestic violence, and a third posed as a Black man against Black Lives Matter. To make the manipulation more effective, a separate bot scanned each target’s posting history so the replies could be personalized to that user.

In total, the bots dropped over 1,700 comments into Reddit threads without revealing they were AI. And the kicker? They were surprisingly good at convincing people. According to a draft of the study, their comments were three to six times more persuasive than human ones, based on Reddit’s own “delta” system (users award a delta when a comment changes their view).

The research team didn’t disclose the experiment to the community until after it was over, violating just about every norm in both ethics and internet culture. In a post from the subreddit’s moderators, the reaction was blunt: “We think this was wrong.”

Reddit’s chief legal officer, Ben Lee, took it a step further, saying the researchers had broken the site’s rules, violated user trust, and committed a clear breach of research standards. “What this University of Zurich team did is deeply wrong on both a moral and legal level,” Lee wrote, adding that Reddit would pursue formal legal action.

The university has since said the study will not be published, and its ethics committee will adopt stricter oversight for future projects involving online communities. But the damage has already been done.

Beyond the legal threats, this whole debacle raises bigger questions about how AI is creeping into everyday digital life. A March 2025 study showed OpenAI’s GPT-4.5 could fool people into thinking they were talking to a real person 73% of the time. And it feeds into a broader paranoia that bots are slowly taking over online spaces—a fear known as the “dead internet” theory.

That theory might still belong in tinfoil-hat territory, but this experiment pushed it a little closer to reality.

Content retrieved from: https://www.vice.com/en/article/ai-chatbots-secretly-ran-a-mind-control-experiment-on-reddit/.
