Friday, December 05, 2025

   

AI chatbots can effectively sway voters – in either direction



Cornell University





ITHACA, N.Y. -- A short interaction with a chatbot can meaningfully shift a voter’s opinion about a presidential candidate or proposed policy in either direction, new Cornell University research finds.

The potential for artificial intelligence to affect election results is a major public concern. Two new papers – with experiments conducted in four countries – demonstrate that chatbots powered by large language models (LLMs) are quite effective at political persuasion, moving opposition voters’ preferences by 10 percentage points or more in many cases. The LLMs’ persuasiveness comes not from being masters of psychological manipulation, but because they come up with so many claims supporting their arguments for candidates’ policy positions.

“LLMs can really move people’s attitudes towards presidential candidates and policies, and they do it by providing many factual claims that support their side,” said David Rand, professor of information science and marketing and management communications, and a senior author on both papers. “But those claims aren’t necessarily accurate – and even arguments built on accurate claims can still mislead by omission.”

The researchers reported these findings in two papers published simultaneously, “Persuading Voters using Human-AI Dialogues,” in Nature, and “The Levers of Political Persuasion with Conversational AI,” in Science. The papers are under embargo until Thursday December 4, 2025 at 2pm ET.

In the Nature study, Rand, along with co-senior author Gordon Pennycook, associate professor of psychology and the Dorothy and Ariz Mehta Faculty Leadership Fellow in the College of Arts and Sciences, and colleagues, instructed AI chatbots to change voters’ attitudes regarding presidential candidates. They randomly assigned participants to engage in a back-and-forth text conversation with a chatbot promoting one side or the other and then measured any change in the participants’ opinions and voting intentions. The researchers repeated this experiment three times: in the 2024 U.S. presidential election, the 2025 Canadian federal election and the 2025 Polish presidential election.

They found that two months before the U.S. election, among more than 2,300 Americans, chatbots focused on the candidates’ policies caused a modest shift in opinions. On a 100-point scale, the pro-Harris AI model moved likely Trump voters 3.9 points toward Harris – an effect roughly four times larger than traditional ads tested during the 2016 and 2020 elections. The pro-Trump AI model moved likely Harris voters 1.51 points toward Trump.

In similar experiments with 1,530 Canadians and 2,118 Poles, the effect was much larger: Chatbots moved opposition voters’ attitudes and voting intentions by about 10 percentage points. “This was a shockingly large effect to me, especially in the context of presidential politics,” Rand said.

Chatbots used multiple persuasion tactics, but being polite and providing evidence were most common. When researchers prevented the model from using facts, it became far less persuasive – showing the central role that fact-based claims play in AI persuasion.

The researchers also fact-checked the chatbot’s arguments using an AI model that was validated using professional human fact-checkers. While on average the claims were mostly accurate, chatbots instructed to stump for right-leaning candidates made more inaccurate claims than those advocating for left-leaning candidates in all three countries. This finding – which was validated using politically balanced groups of laypeople – mirrors the often-replicated finding that social media users on the right share more inaccurate information than users on the left, Pennycook said. 

In the Science paper, Rand collaborated with colleagues at the UK AI Security Institute to investigate what makes these chatbots so persuasive. They measured the shifts in opinions of almost 77,000 participants from the U.K. who engaged with chatbots on more than 700 political issues.

“Bigger models are more persuasive, but the most effective way to boost persuasiveness was instructing the models to pack their arguments with as many facts as possible, and giving the models additional training focused on increasing persuasiveness,” Rand said.“The most persuasion-optimized model shifted opposition voters by a striking 25 percentage points.”

This study also showed that the more persuasive a model was, the less accurate the information it provided. Rand suspects that as the chatbot is pushed to provide more and more factual claims, eventually it runs out of accurate information and starts fabricating. 

The discovery that factual claims are key to an AI model’s persuasiveness is further supported by a third recent paper in PNAS Nexus by Rand, Pennycook and colleagues. The study showed that arguments from AI chatbots reduced belief in conspiracy theories even when people thought they were talking to a human expert. This suggests it was the compelling messages that worked, not a belief in the authority of AI.

In both studies, all participants were told they were conversing with an AI and were fully debriefed afterward. Additionally, the direction of persuasion was randomized so the experiments would not shift opinions overall.

Studying AI persuasion is essential to anticipate and mitigate misuse, the researchers said. By testing these systems in controlled, transparent experiments, they hope to inform ethical guidelines and policy discussions about how AI should and should not be used in political communication.

Rand also points out that chatbots can only be effective persuasion tools if people engage with the bots in the first place – a high bar to clear.

But there’s little question that AI chatbots will be an increasingly important part of political campaigning, Rand said. “The challenge now is finding ways to limit the harm – and to help people recognize and resist AI persuasion.”

-30-

AI in the classroom: Research focuses on technology rather than the needs of young people




University of Würzburg





Generative artificial intelligence (AI) such as ChatGPT has arrived in classrooms and sparked an intense debate about its role in education. These technologies raise the fundamental question of which human skills will still matter in the future. A new, comprehensive literature study has now systematically analyzed the current state of research. The aim was to evaluate how AI can advance education in the STEM fields (science, technology, engineering, and mathematics).

Professor Hans-Stefan Siller, Chair of Mathematics V (Didactics of Mathematics) at Julius-Maximilians-Universität Würzburg (JMU), and his research assistant Alissa Fock were responsible for the study. The two published the results of their research in the International Journal of STEM Education.

The concept of “human flourishing” served as the central benchmark for the ideal of “good education.” This describes the goal of enabling young people to develop their full potential, lead self-determined, meaningful lives, and make a positive contribution to society. “So it's about much more than just increasing cognitive performance,” explains Hans-Stefan Siller. However, analysis of the current research landscape reveals significant gaps in this area.

Studies analyze AI, not people

The evaluation of 183 scientific publications makes it clear that research has so far been primarily technology-centered. “Instead of examining the impact of AI on learners and teachers, most studies focus on the systems themselves,” says Alissa Fock. Accordingly, the research focuses primarily on the performance of AI (35 percent) and the development of new AI tools (22 percent).

What is particularly revealing here is that “of the 139 empirical studies in the analysis, in around half of the cases the researchers examined only AI-generated content instead of observing its application and impact on students or teachers,” criticizes Siller. In his view, this technocentric approach carries the risk of pushing the actual educational needs into the background and losing sight of the overarching goal – the development of young people's entire personalities. “This tunnel vision on technology leads to other central aspects of human development being neglected,” says Siller.

Ethics, motivation, and diversity fall by the wayside

The analysis reveals further critical gaps in previous research that are crucial for holistic education:

• Holistic skills: Research focuses heavily on cognitive aspects. However, the promotion of non-cognitive skills such as motivation, self-confidence, critical thinking, and ethical judgment is hardly investigated.

• Ethical issues: Although topics such as bias in AI systems or data security are central to everyday school life, they play hardly any role in current research literature.

• Geographical imbalance: Research is heavily concentrated in the Global North (73 percent of studies, 30 percent of which are from the US alone). This carries the risk that solutions will be developed that ignore cultural diversity and different educational contexts around the world.

The study's conclusion is clear: “Research on AI in education must once again focus more on people,” the authors demand. Instead of simply asking what is technologically possible, the central question must be what young people need in order to find meaning and the ability to act in a world shaped by AI.

Teachers and AI must work together

As a constructive solution, Siller and Fock propose a model for collaboration between teachers and AI. In this model, teachers use AI as a tool to delegate time-consuming routine tasks, such as creating exercises or initial drafts for lesson plans. However, the decisive role remains with humans: teachers must critically review the content created by AI for errors, bias, and pedagogical suitability, and enrich it with their expertise and practical experience.

This approach reduces teachers' workload while preserving their autonomy and the meaningfulness of their work, as pedagogical responsibility and final decision-making authority remain with humans. To ensure that AI has the potential to improve education in the long term, Siller and Fock believe that further research is needed that consistently focuses on humans and their holistic development.

Virtual companions, real responsibility: Researchers call for clear regulations on AI tools used for mental health interactions





Technische Universität Dresden





Artificial Intelligence (AI) can converse, mirror emotions, and simulate human engagement. Publicly available large language models (LLMs) – often used as personalized chatbots or AI characters – are increasingly involved in mental health-related interactions. While these tools offer new possibilities, they also pose significant risks, especially for vulnerable users. Researchers from Else Kröner Fresenius Center (EKFZ) for Digital Health at TUD Dresden University of Technology and the University Hospital Carl Gustav Carus have therefore published two articles calling for stronger regulatory oversight. Their publication “AI characters are dangerous without legal guardrails” in Nature Human Behaviour outlines the urgent need for clear regulations for AI characters. A second article in npj Digital Medicine highlights dangers if chatbots offer therapy-like guidance without medical approval, and argues for their regulation as medical devices.

General-purpose large language models (LLMs) like ChatGPT or Gemini are not designed as specific AI characters or therapeutic tools. Yet simple prompts or specific settings can turn them into highly personalized, humanlike chatbots. Interaction with AI characters can negatively affect young people and individuals with mental health challenges. Users may form strong emotional bonds with these systems, but AI characters remain largely unregulated in both the EU and the United States. Importantly, they differ from clinical therapeutic chatbots, which are explicitly developed, tested, and approved for medical use.

“AI characters are currently slipping through the gaps in existing product safety regulations,” explains Mindy Nunez Duffourc, Assistant Professor of Private Law at Maastricht University and co-author of the publication. “They are often not classified as products and therefore escape safety checks. And even where they are newly regulated as products, clear standards and effective oversight are still lacking.”

Background: Digital interaction, real responsibility

Recent international reports have linked intensive personal interactions with AI chatbots to mental health crises. The researchers argue that systems imitating human behavior must meet appropriate safety requirements and operate within defined legal frameworks. At present, however, AI characters largely escape regulatory oversight before entering the market.

In a second publication in npj Digital Medicine titled If a therapy bot walks like a duck and talks like a duck then it is a medically regulated duck,” the team further outlines the risks of unregulated mental health interactions with LLMs. The authors show that some AI chatbots provide therapy-like guidance, or even impersonate licensed clinicians, without any regulatory approval. Hence, the researchers argue that LLMs providing therapy-like functions should be regulated as medical devices, with clear safety standards, transparent system behavior, and continuous monitoring.

“AI characters are already part of everyday life for many people. Often these chatbots offer doctor or therapist-like advice. We must ensure that AI-based software is safe. It should support and help – not harm. To achieve this, we need clear technical, legal, and ethical rules,” says Stephen Gilbert, Professor of Medical Device Regulatory Science at the EKFZ for Digital Health, TUD Dresden University of Technology.

Proposed solution: “Guardian Angel AI” as a safeguard

The research team emphasizes that the transparency requirement of the European AI Act – simply informing users that they are interacting with AI – is not enough to protect vulnerable groups. They call for enforceable safety and monitoring standards, supported by voluntary guidelines to help developers with implementing safe design practices.

As solution they propose linking future AI applications with persistent chat memory to a so-called “Guardian Angel” or “Good Samaritan AI” – an independent, supportive AI instance to protect the user and intervene when necessary. Such an AI agent could detect potential risks at an early stage and take preventive action, for example by alerting users to support resources or issuing warnings about dangerous conversation patterns.

Recommendations for safe interaction with AI

In addition to implementing such safeguards, the researchers recommend robust age verification, age-specific protections, and mandatory risk assessments before market entry.

“As clinicians, we see how language shapes human experience and mental health,” says Falk Gerrik Verhees, psychiatrist at Dresden University Hospital Carl Gustav Carus. “AI characters use the same language to simulate trust and connection – and that makes regulation essential. We need to ensure that these technologies are safe and protect users’ mental well-being rather than put it at risk,” he adds.

The researchers argue that clear, actionable standards are needed for mental health-related use cases. They recommend that LLMs clearly state that they are not an approved mental health medical tool. Chatbots should refrain from impersonating therapists, and limit themselves to basic, non-medical information. They should be able to recognize when professional support is needed and guide users toward appropriate resources. Effectiveness and application of these criteria could be ensured through simple open access tools to test chatbots for safety on an ongoing basis.

Our proposed guardrails are essential to ensure that general-purpose AI can be used safely and in a helpful and beneficial manner,” concludes Max Ostermann, researcher in the Medical Device Regulatory Science team of Prof. Gilbert and first author of the publication in npj Digital Medicine.


Important note

In times of a personal crisis please seek help at a local crisis service, contact your general practitioner, a psychiatrist/psychotherapist or in urgent cases go to the hospital. In Germany you can call 116 123 (in German) or find offers in your language online at https://www.telefonseelsorge.de/internationale-hilfe.

 

Publications

Mindy Nunez Duffourc, F. Gerrik Verhees, Stephen Gilbert: AI characters are dangerous without legal guardrails; Nature Human Behaviour, 2025.

doi: 10.1038/s41562-025-02375-3. URL: https://www.nature.com/articles/s41562-025-02375-3

 

Max Ostermann, Oscar Freyer, F. Gerrik Verhees, Jakob Nikolas Kather, Stephen Gilbert: If a therapy bot walks like a duck and talks like a duck then it is a medically regulated duck; npj Digital Medicine, 2025.

doi: 10.1038/s41746-025-02175-z. URL: https://www.nature.com/articles/s41746-025-02175-z

 

Else Kröner Fresenius Center (EKFZ) for Digital Health

The EKFZ for Digital Health at TU Dresden and University Hospital Carl Gustav Carus Dresden was established in September 2019. It receives funding of around 40 million euros from the Else Kröner Fresenius Foundation for a period of ten years. The center focuses its research activities on innovative, medical and digital technologies at the direct interface with patients. The aim here is to fully exploit the potential of digitalization in medicine to significantly and sustainably improve healthcare, medical research and clinical practice.

No comments: