An exit for even the deepest rabbit holes: Personalized conversations with an AI chatbot reduce belief in conspiracy theories
Summary author: Walter Beckwith
American Association for the Advancement of Science (AAAS)
Personalized conversations with a trained artificial intelligence (AI) chatbot can reduce belief in conspiracy theories – even in the most obdurate individuals – according to a new study. The findings, which challenge the idea that such beliefs are impervious to change, point to a new tool for combating misinformation. “It has become almost a truism that people ‘down the rabbit hole’ of conspiracy belief are almost impossible to reach,” write the authors. “In contrast to this pessimistic view, we [show] that a relatively brief conversation with a generative AI model can produce a large and lasting decrease in conspiracy beliefs, even among people whose beliefs are deeply entrenched.” Conspiracy theories – beliefs that some secret but influential malevolent organization is responsible for an event or phenomenon – are notoriously persistent and pose a serious threat to democratic societies. Yet despite their implausibility, a large fraction of the global population has come to believe in them, including as much as 50% of the United States population by some estimates. The persistent belief in conspiracy theories despite clear counterevidence is often explained by social-psychological processes that fulfill psychological needs and by the motivation to maintain identity and group memberships. Current interventions to debunk conspiracies among existing believers are largely ineffective.
Thomas Costello and colleagues investigated whether Large Language Models (LLMs) like GPT-4 Turbo can effectively debunk conspiracy theories by drawing on their vast stores of information and tailoring counterarguments that respond directly to specific evidence presented by believers. In a series of experiments encompassing 2,190 conspiracy believers, participants engaged in several personalized interactions with an LLM, sharing their conspiratorial beliefs and the evidence they felt supported them. In turn, the LLM responded by directly refuting these claims through tailored, factual and evidence-based counterarguments. A professional fact-checker hired to evaluate the accuracy of the claims made by GPT-4 Turbo reported that, of these claims, 99.2% were rated as “true,” 0.8% as “misleading,” and none as “false”; none were found to contain liberal or conservative bias. Costello et al. found that these AI-driven dialogues reduced participants’ misinformed beliefs by an average of 20%. This effect lasted for at least 2 months and was observed across various unrelated conspiracy theories, as well as across demographic categories. According to the authors, the findings challenge the idea that evidence and arguments are ineffective once someone has adopted a conspiracy theory. They also question social-psychological theories that focus on psychological needs and motivations as the main drivers of conspiracy beliefs. “For better or worse, AI is set to profoundly change our culture,” write Bence Bago and Jean-François Bonnefon in a related Perspective. “Although widely criticized as a force multiplier for misinformation, the study by Costello et al. demonstrates a potential positive application of generative AI’s persuasive power.”
A version of the chatbot referenced in this paper can be visited at https://www.debunkbot.com/conspiracies.
A related embargoed news briefing was held on Tuesday, 10 September, as a Zoom webinar. Recordings can be found at the following links:
- Video: https://aaas.zoom.us/rec/share/aoSQ0AgWVHF0l7vE9-6LHHqmiLdxgApjJk_VQekHv7VidXfTZozRZOXxkXm3swi9.YUuogoQ-ZGLnAbnM
- Audio: https://aaas.zoom.us/rec/share/bTiYBoHcxYdKkzivIwYgt_Fd3Qg0Xll0aw_oc6vns03kyqayp-wZ9sbHDBGBSpZY.a41AWWIqSI-QcUqH
The passcode for both is &M67bgdd
Journal
Science
Article Title
Durably reducing conspiracy beliefs through dialogues with AI
Article Publication Date
13-Sep-2024
Can AI talk us out of conspiracy theories?
New MIT Sloan research shows that conversations with large language models can successfully reduce belief in conspiracy theories
Have you ever tried to convince a conspiracy theorist that the moon landing wasn’t staged? You likely didn’t succeed, but ChatGPT might have better luck, according to research by MIT Sloan School of Management professor David Rand and American University professor of psychology Thomas Costello, who conducted the work during his postdoctoral position at MIT Sloan.
In a new paper “Durably reducing conspiracy beliefs through dialogues with AI” published in Science, the researchers show that large language models can effectively reduce individuals’ beliefs in conspiracy theories — and that these reductions last for at least 2 months — a finding that offers new insights into the psychological mechanisms behind the phenomenon as well as potential tools to fight the spread of conspiracies.
Going down the rabbit hole
Conspiracy theories — beliefs that certain events are the result of secret plots by influential actors — have long been a subject of fascination and concern. Their persistence in the face of counter-evidence has led to the conclusion that they fulfill deep-seated psychological needs, rendering them impervious to facts and logic. According to this conventional wisdom, once someone “falls down the rabbit hole,” it’s virtually impossible to pull them back out.
But for Rand, Costello, and their co-author professor Gordon Pennycook from Cornell University, who have conducted extensive research on the spread and uptake of misinformation, that conclusion didn’t ring true. Instead, they suspected a simpler explanation was at play.
“We wondered if it was possible that people simply hadn’t been exposed to compelling evidence disproving their theories,” Rand explained. “Conspiracy theories come in many varieties — the specifics of the theory and the arguments used to support it differ from believer to believer. So if you are trying to disprove the conspiracy but haven’t heard these particular arguments, you won’t be prepared to rebut them.”
Effectively debunking conspiracy theories, in other words, would require two things: personalized arguments and access to vast quantities of information — both now readily available through generative AI.
Conspiracy conversations with GPT-4
To test their theory, Costello, Pennycook, and Rand harnessed the power of GPT-4 Turbo, OpenAI’s most advanced large language model, to engage over 2,000 conspiracy believers in personalized, evidence-based dialogues.
The study employed a unique methodology that allowed for deep engagement with participants' individual beliefs. Participants were first asked to identify and describe a conspiracy theory they believed in using their own words, along with the evidence supporting their belief.
GPT-4 Turbo then used this information to generate a personalized summary of the participant's belief and initiate a dialogue. The AI was instructed to persuade users that their beliefs were untrue, adapting its strategy based on each participant’s unique arguments and evidence.
These conversations, lasting an average of 8.4 minutes, allowed the AI to directly address and refute the specific evidence supporting each individual’s conspiratorial beliefs, an approach that was impossible to test at scale prior to the technology’s development.
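For readers who want a concrete picture of how such a pipeline fits together, the sketch below shows one way the dialogue loop could be wired up. It is an illustration only, not the authors’ code: it assumes the OpenAI Python client, and the model name, system prompt, and three-round structure are stand-ins based on the study’s description.

```python
# Minimal sketch of a personalized debunking dialogue (illustrative; not the study's code).
# Assumes the OpenAI Python client (openai>=1.0); prompts and model name are stand-ins.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "The user believes a conspiracy theory. Using only accurate, verifiable "
    "evidence, respectfully persuade them that the theory is unsupported, "
    "responding directly to the specific arguments and evidence they raise."
)

def debunking_dialogue(belief: str, evidence: str, rounds: int = 3) -> list:
    """Run a short multi-round dialogue seeded with the participant's own words."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"My belief: {belief}\nMy evidence: {evidence}"},
    ]
    for turn in range(rounds):
        reply = client.chat.completions.create(
            model="gpt-4-turbo",  # the study used GPT-4 Turbo
            messages=messages,
        )
        messages.append(
            {"role": "assistant", "content": reply.choices[0].message.content}
        )
        if turn < rounds - 1:
            # In the study, the participant typed a reply between rounds.
            messages.append({"role": "user", "content": input("Your response: ")})
    return messages
```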
A significant — and durable — effect
The results of the intervention were striking. On average, the AI conversations reduced participants’ belief in their chosen conspiracy theory by about 20%, and about 1 in 4 participants — all of whom believed the conspiracy beforehand — disavowed the conspiracy after the conversation. This impact proved durable, with the effect remaining undiminished even two months post-conversation.
The AI conversation’s effectiveness was not limited to specific types of conspiracy theories. It successfully challenged beliefs across a wide spectrum, including conspiracies that potentially hold strong political and social salience, like those involving COVID-19 and fraud during the 2020 U.S. presidential election.
While the intervention was less successful among participants who reported that the conspiracy was central to their worldview, it did still have an impact, with little variance across demographic groups.
Notably, the impact of the AI dialogues extended beyond mere changes in belief. Participants also demonstrated shifts in their behavioral intentions related to conspiracy theories. They reported being more likely to unfollow people espousing conspiracy theories online, and more willing to engage in conversations challenging those conspiratorial beliefs.
The opportunities and dangers of AI
Costello, Pennycook, and Rand are careful to point to the need for continued responsible AI deployment since the technology could potentially be used to convince users to believe in conspiracies as well as to abandon them.
Nevertheless, the potential for positive applications of AI to reduce belief in conspiracies is significant. For example, AI tools could be integrated into search engines to offer accurate information to users searching for conspiracy-related terms.
“This research indicates that evidence matters much more than we thought it did — so long as it is actually related to people’s beliefs,” Pennycook said. “This has implications far beyond just conspiracy theories: Any number of beliefs based on poor evidence could, in theory, be undermined using this approach.”
Beyond the specific findings of the study, its methodology also highlights the ways in which large language models could revolutionize social science research, said Costello, who noted that the researchers used GPT-4 Turbo to not only conduct conversations but also to screen respondents and analyze data.
“Psychology research used to depend on graduate students interviewing or conducting interventions on other students, which was inherently limiting,” Costello said. “Then, we moved to online survey and interview platforms that gave us scale but took away the nuance. Using artificial intelligence allows us to have both.”
These findings fundamentally challenge the notion that conspiracy believers are beyond the reach of reason. Instead, they suggest that many are open to changing their views when presented with compelling and personalized counter-evidence.
“Before we had access to AI, conspiracy research was largely observation and correlational, which led to theories about conspiracies filling psychological needs,” said Costello. “Our explanation is more mundane — much of the time, people just didn’t have the right information.”
Journal
Science
Method of Research
Experimental study
Subject of Research
People
Article Title
Durably reducing conspiracy beliefs through dialogues with AI
Article Publication Date
12-Sep-2024
‘Even the deepest of rabbit holes may have an exit’
Pathbreaking study led by American University professor reveals conversations with AI models can reduce conspiracy theory beliefs
(WASHINGTON, D.C.) Sept. 12, 2024 – ‘They’re so far down the rabbit hole that they’re lost for good’ is the common thinking about conspiracy theorists. That generally accepted notion is now crumbling.
In a pathbreaking research study, a team of researchers from American University, Massachusetts Institute of Technology and Cornell University show that conspiracy theorists changed their views after short conversations with artificial intelligence. Study participants believing some of the most deeply entrenched conspiracies, including those about the COVID-19 pandemic and fraud in the 2020 U.S. presidential election, showed large and lasting reductions in conspiracy belief following the conversations.
Stoked by polarization in politics and fed by misinformation and social media, conspiracy theories are a major issue of public concern. They often serve as a wedge between theorists and their friends and family members. YouGov survey results from last December show that large shares of Americans believe various conspiratorial falsehoods.
In the field of psychology, the widespread view the findings challenge is that conspiracy theorists adhere to their beliefs because those beliefs are significant to their identities and resonate with underlying drives and motivations, says Thomas Costello, assistant professor of psychology at American University and lead author of the new study published in the journal Science. Indeed, most approaches to date have focused on preventing people from believing conspiracies in the first place.
“Many conspiracy believers were indeed willing to update their views when presented with compelling counterevidence,” Costello said. “I was quite surprised at first, but reading through the conversations made me much less skeptical. The AI provided page-long, highly detailed accounts of why the given conspiracy was false in each round of conversation, and was also adept at being amiable and building rapport with the participants.”
More than 2,000 self-identified conspiracy believers participated in the study. The AI conversations reduced the average participant's belief in their chosen conspiracy theory by about 20 percent, and about 1 in 4 participants — all of whom believed the conspiracy beforehand — disavowed the conspiracy after the conversation.
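To make the arithmetic behind figures like these concrete, here is a toy scoring sketch with made-up ratings. It assumes belief is measured on a 0-100 scale before and after the conversation, and it treats falling below the 50-point midpoint as disavowal; both are simplifying assumptions for illustration, not the paper’s exact definitions.

```python
# Toy pre/post belief scoring (hypothetical data, not the study's).
# Assumes a 0-100 belief scale; "disavowal" here means crossing below the midpoint.
pre  = [85, 70, 95, 60, 75]   # belief ratings before the dialogue
post = [60, 68, 40, 45, 75]   # belief ratings after the dialogue

drops = [b - a for b, a in zip(pre, post)]
mean_drop_pct = 100 * sum(drops) / sum(pre)  # aggregate relative reduction
disavowed = sum(1 for b, a in zip(pre, post) if b > 50 and a < 50)

print(f"Belief reduction: {mean_drop_pct:.1f}%")             # ~25.2% for this toy data
print(f"Disavowed: {disavowed} of {len(pre)} participants")  # 2 of 5 here
```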
Until now, delivering persuasive, factual messages to a large sample of conspiracy theorists in a lab experiment has proved challenging. For one, conspiracy theorists are often highly knowledgeable about the conspiracy—often more so than skeptics. Conspiracies also vary widely, such that evidence backing a particular theory can differ from one believer to another.
AI as an intervention
The new study comes as society debates the promise and peril of AI. Large language models driving generative AI are powerful reservoirs of knowledge. Researchers emphasize that the study demonstrates one way these reservoirs of knowledge can be used for good: by helping people have more accurate beliefs. The ability of artificial intelligence to connect information across diverse topics within seconds makes it possible to tailor counterarguments to a believer’s specific conspiracy in ways that aren’t possible for a human.
“Previous efforts to debunk dubious beliefs have a major limitation: One needs to guess what people’s actual beliefs are in order to debunk them – not a simple task,” said Gordon Pennycook, associate professor of psychology at Cornell University and a paper co-author. “In contrast, the AI can respond directly to people’s specific arguments using strong counterevidence. This provides a unique opportunity to test just how responsive people are to counterevidence.”
Researchers designed the chatbot to be highly persuasive and engage participants in such tailored dialogues. GPT-4, the AI model powering ChatGPT, provided factual rebuttals to participants’ conspiratorial claims. In two separate experiments, participants were asked to describe a conspiracy theory they believed in and to provide evidence supporting it. Participants then engaged in a conversation with the AI, whose goal was to challenge their beliefs by addressing that specific evidence. In a control group, participants discussed an unrelated topic with the AI.
To tailor the conversations, researchers provided the AI with each participant’s initial statement of belief and supporting rationale. This setup allowed for a more natural dialogue, with the AI directly addressing a participant’s claims. The conversations averaged 8.4 minutes and involved three rounds of interaction, excluding the initial setup. Ultimately, both experiments showed a reduction in participants’ beliefs in conspiracy theories. When the researchers assessed participants two months later, they found that the effect persisted.
While the results are promising and suggest a future in which AI, used responsibly, can play a role in diminishing conspiracy belief, further studies will be needed on long-term effects, on different AI models, and on practical applications outside the laboratory.
“Although much ink has been spilled over the potential for generative AI to supercharge disinformation, our study shows that it can also be part of the solution,” said David Rand, a paper co-author and MIT Sloan School of Management professor. “Large language models like GPT-4 have the potential to counter conspiracies at a massive scale.”
Additionally, members of the public interested in this ongoing work can try out the intervention for themselves at https://www.debunkbot.com/conspiracies.
CONTACT: For more information, please contact Rebecca Basu, AU Communications, basu@american.edu or (202) 885-5950
About American University
American University leverages the power and purpose of scholarship, learning, and community to impact our changing world. AU’s faculty, students, staff, and alumni are changemakers who shape the future from sustainability to social justice to the sciences. Building on our 130-year history of education and research in the public interest, we say ‘Challenge Accepted’ to addressing the world’s pressing issues.
Journal
Science
Subject of Research
People
Article Publication Date
12-Sep-2024