Conspiracy theorists are building AI chatbots to spread their beliefs
Conspiracy theorists are creating and training their own artificial intelligence chatbots to help spread their extreme beliefs, even as tech companies grapple with fears that the technology is prompting delusions in some users.
Once created, these chatbots will tell users about the disproven link between vaccines and autism, and even help them try to convince others by writing social media posts and letters on their behalf.
With hundreds of millions of regular users around the world, generative AI products released by big tech companies have been the biggest story in technology over the past two years. Products like ChatGPT typically take the form of a chatbot: a lifelike conversational partner, trained on enormous amounts of data, that can emulate human conversation.
Major AI companies build guardrails into their products that limit how they respond to users. This includes publishing usage policies and designing their systems to avoid harmful uses, such as spreading lies or hateful speech.
Despite these guardrails, a growing number of reports describe people experiencing delusions or entering psychosis as they become dependent on these chatbots, often coming to believe that the AI they’ve been interacting with is sentient.
Ask a mainstream chatbot about a popular conspiracy theory and it will usually give a factual answer, or will sometimes decline to respond altogether. Research has even suggested chatbots could be used to decrease people’s belief in conspiracy theories.
However, the nature of the underlying technology — generative AI’s large language models — means chatbots can give unexpected answers, often responding differently to identical questions. While the exact corpus of data used to create most models is unknown, many appear to have been trained on public data from the internet, which includes conspiracy theory websites and content from social media platforms.
Just months after the first ChatGPT was released, users were already finding novel ways to “jailbreak” the nascent product into regurgitating conspiracy theories.
Media analysis company NewsGuard regularly tests popular chatbots and has found they will sometimes repeat misinformation or conspiracy theories, though more often than not they will debunk the claim.
Another group of people — those who already believe in conspiracy theories — are also finding ways to use the technology.
Many in online conspiracy communities are wary or even paranoid about AI, which fits into existing narratives about surveillance, censorship and control by governments and technology companies.
Some, however, have embraced the technology. People post transcripts or videos of conversations with chatbots that they claim show AI products “agreeing” or “proving” their conspiracy theory beliefs.
One Australian pseudolaw believer, also known as a sovereign citizen, triumphantly shared that ChatGPT “admitted” that Australia’s entire legal system was supposedly illegitimate because of a constitutional flaw.
“Conversation with ChatGPT that everyone should read,” they wrote, sharing a link to their full conversation with the chatbot. The chat log shows the person repeatedly prompting ChatGPT with claims until the bot eventually mirrors the user’s beliefs.
What’s new is that conspiracy theorists are taking this a step further, building their own custom chatbots by feeding them instructions and training materials.
Crikey found several examples of conspiracy theory-trained bots or “characters” on OpenAI’s ChatGPT and Meta’s AI platforms that have been shared with others.
On ChatGPT, a custom bot aimed at convincing parents not to vaccinate their children promises “full transparency about vaccines” and has suggested prompts like “what are the risks of vaccine ingredients”.
As well as sharing health misinformation and linking to non-credible sources, the bot proactively offers to help users write a social media post or a school vaccination opt-out letter. “I can help you write it in your tone,” it said.

Meta, meanwhile, hosts an AI made by user @vaccinesdontwork that suggests pseudoscientific ways to “detox” from things like electromagnetic frequencies.
Neither Meta nor OpenAI responded to a request for comment by deadline.
Content retrieved from: https://www.crikey.com.au/2025/06/17/conspiracy-theorists-building-ai-chatbots/.