Feral AI gossip, with the potential to cause reputational damage and shame, will become more frequent, researchers warn
“Feral” gossip spread via AI bots is likely to become more frequent and pervasive, causing reputational damage, shame, humiliation, anxiety, and distress, researchers have warned.
Chatbots like ChatGPT, Claude, and Gemini don't just make things up—they generate and spread gossip, complete with negative evaluations and juicy rumours that can cause real-world harm, according to new analysis by philosophers Joel Krueger and Lucy Osler from the University of Exeter.
The harm caused by AI gossip is not a hypothetical threat; real-world cases already exist. After publishing an article about how emotionally manipulative chatbots can be, the New York Times reporter Kevin Roose found that chatbots were describing his writing as sensational and accusing him of poor journalistic ethics and of being unscrupulous. Other AI bots have falsely detailed people's involvement in bribery, embezzlement, and sexual harassment. These gossipy AI-generated outputs cause real-world harms: reputational damage, shame, and social unrest.
The study outlines how chatbots gossip, both with human users and with other chatbots, though in a different way from humans. This can lead to harm that is potentially wider in scope than the harm caused by false information spread by chatbots.
Bot-to-bot gossip is particularly dangerous because it operates unconstrained by the social norms that moderate human gossip. It continues to embellish and exaggerate without being checked, spreading quickly in the background, making its way from one bot to the next and inflicting significant harms.
Dr Osler said: “Chatbots often say unexpected things and when chatting with them it can feel like there’s a person on the other side of the exchange. This feeling will likely be more common as they become even more sophisticated.
“Chatbot ‘bullshit’ can be deceptive, and seductive. Because chatbots sound authoritative when we interact with them (their dataset exceeds what any single person can know, and false information is often presented alongside information we know is true), it's easy to take their outputs at face value.
“This trust can be dangerous. Unsuspecting users might develop false beliefs that lead to harmful behaviour or biases based upon discriminatory information propagated by these chatbots.”
The study suggests that the drive to increasingly personalise chatbots may be motivated by the hope that we will become more dependent on these systems and give them greater access to our lives. Personalisation is also intended to intensify our feeling of trust and encourage us to develop increasingly rich social relationships with them.
Dr Krueger said: “Designing AI to engage in gossip is yet another way of securing increasingly robust emotional bonds between users and their bots.
“Of course, bots have no interest in promoting a sense of emotional connection with other bots, since they don't get the same ‘kick’ out of spreading gossip the way humans do. But certain aspects of the way they disseminate gossip mirror the connection-promoting qualities of human gossip while simultaneously making bot-to-bot gossip potentially even more pernicious than gossip involving humans.”
The researchers predict that user-to-bot gossip may become more common. In these cases, users might seed bots with different nuggets of gossip, knowing the bots will, in turn, rapidly disseminate them in their characteristically feral way. Bots might therefore act as intermediaries, responding to user-seeded gossip and rapidly spreading it to others.
Journal: Ethics and Information Technology
Method of Research: Case study
Subject of Research: People
Article Title: AI gossip
Article Publication Date: 22-Dec-2025
AI gives scientists a boost, but at the cost of too many mediocre papers
Cornell University
ITHACA, N.Y. -- After ChatGPT became available to the public in late 2022, scientists began talking among themselves about how much more productive they had become using these new artificial intelligence tools, while scientific journal editors complained of an influx of well-written papers with little scientific value.
These anecdotal conversations represent a real shift in how scientists are writing up their work, according to a new study by Cornell researchers. They showed that using large language models (LLMs) like ChatGPT boosts paper production, especially for non-native English speakers. But the overall increase in AI-written papers is making it harder for many people – from paper reviewers to funders to policymakers – to separate the valuable contributions from the AI slop.
“It is a very widespread pattern, across different fields of science – from physical and computer sciences to biological and social sciences,” said Yian Yin, assistant professor of information science in the Cornell Ann S. Bowers College of Computing and Information Science. “There’s a big shift in our current ecosystem that warrants a very serious look, especially for those who make decisions about what science we should support and fund.”
The new paper, “Scientific Production in the Era of Large Language Models,” was published Dec. 18 in Science.
Yin’s group investigated the impacts of LLMs on scientific publishing by collecting more than 2 million papers posted between January 2018 and June 2024 on three online preprint websites. The three sites – arXiv, bioRxiv and Social Science Research Network (SSRN) – cover the physical, life and social sciences, respectively, and post scientific papers that have yet to undergo peer review.
The researchers compared presumably human-authored papers posted before 2023 to AI-written text, in order to develop an AI model that detects papers likely written by LLMs. With this AI detector, they could identify which scientists were probably using the technology for writing, count how many papers they published before and after adopting AI, and then see whether those papers were ultimately deemed worthy of publication in scientific journals.
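The press release does not describe how the detector is implemented, so the following is only a minimal sketch of the general approach it outlines: train a text classifier on presumably human-written, pre-2023 abstracts versus known LLM-generated text, then use it to flag new papers as likely LLM-assisted. The file names, the TF-IDF features, and the logistic-regression model below are illustrative assumptions, not the Cornell team's actual pipeline.

```python
# Illustrative sketch only -- NOT the study's actual detector.
# Assumes two plain-text files (one abstract per line): presumably
# human-written pre-2023 abstracts and LLM-generated text (hypothetical names).
from pathlib import Path

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

human_texts = Path("human_abstracts.txt").read_text().splitlines()
llm_texts = Path("llm_abstracts.txt").read_text().splitlines()

texts = human_texts + llm_texts
labels = [0] * len(human_texts) + [1] * len(llm_texts)  # 1 = likely LLM-written

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=0, stratify=labels
)

# Word-level n-gram features feeding a simple linear classifier.
detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    LogisticRegression(max_iter=1000),
)
detector.fit(X_train, y_train)

print("held-out accuracy:", detector.score(X_test, y_test))
# Estimated probability that a new abstract was written with LLM assistance.
print(detector.predict_proba(["We propose a novel framework ..."])[0][1])
```

A classifier of this kind outputs a probability rather than a verdict, which is why the study speaks of papers "likely written by LLMs" rather than attributing authorship definitively.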
Their analysis showed a big AI-powered productivity bump. On the arXiv site, scientists who appeared to use LLMs posted about one-third more papers than scientists who weren’t getting an assist from AI. The increase was more than 50% for bioRxiv and SSRN.
Not surprisingly, scientists whose first language is not English, and who face the hurdle of communicating science in a foreign language, benefited the most from LLMs. Researchers from Asian institutions, for example, posted between 43.0% and 89.3% more papers, depending on the preprint site, after the AI detector indicated a switch to LLMs, compared with similar scientists not using the technology. The benefit is so large that Yin predicts a global shift in the regions with the greatest scientific productivity, toward areas previously disadvantaged by the language barrier.
The study uncovered another positive effect of AI in paper preparation. When scientists search for related research to cite in their papers, Bing Chat – the first widely adopted AI-powered search tool – is better at finding newer publications and relevant books, compared to traditional search tools, which tend to identify older, more commonly cited works.
“People using LLMs are connecting to more diverse knowledge, which might be driving more creative ideas,” said first author Keigo Kusumegi, a doctoral student in the field of information science. In future work, he hopes to explore whether AI use leads to more innovative, interdisciplinary work.
While LLMs make it easier for individuals to produce papers, they also make it harder for others to evaluate their quality. For human-written work, clear yet complex language – with big words and long sentences – is usually a reliable indicator of quality research. Across all three preprint sites, papers likely written by humans that scored high on a writing complexity test were most likely to be accepted to a scientific journal. But high-scoring papers probably written by LLMs were less likely to be accepted, suggesting that despite the convincing language, reviewers deemed many of these papers to have little scientific value.
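The release does not name the writing complexity test the study used; purely as a hypothetical illustration of a "big words and long sentences" style score, the sketch below combines average sentence length with the share of long words. The seven-character threshold and the weighting are arbitrary choices made for the example.

```python
import re

# Hypothetical writing-complexity score: the study's actual metric is not
# named in the release, so this simply combines average sentence length
# with the share of "big" words (7+ letters), echoing the informal
# description above.
def complexity_score(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    avg_sentence_len = len(words) / len(sentences)                   # long sentences
    long_word_share = sum(len(w) >= 7 for w in words) / len(words)   # big words
    return avg_sentence_len * (1.0 + long_word_share)

print(complexity_score(
    "We propose a parsimonious, interpretable estimator. "
    "Its asymptotic properties are characterised rigorously."
))
print(complexity_score("We made a tool. It works well."))
```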
This disconnect between writing quality and scientific quality could have big implications, Yin said, as editors and reviewers struggle to identify valuable paper submissions, and universities and funding agencies can no longer evaluate scientists based on their productivity.
The researchers caution that the new findings are based solely on observations. Next, they hope to perform causal analysis, such as a controlled experiment in which some scientists are randomly assigned to use LLMs and others are not.
Yin is also planning a symposium that will examine how generative AI is transforming research – and how scientists and policymakers can best shape these changes – to take place March 3-5, 2026, on the Ithaca campus.
As scientists increasingly rely on AI for writing, coding and even idea generation – essentially using AI as a co-scientist – Yin suspects that its impacts will likely broaden. He urges policymakers to make new rules to regulate the rapidly evolving technological landscape.
“Already now, the question is not whether you have used AI. The question is how exactly you have used AI, and whether or not it's helpful.”
Co-authors on the study include Xinyu Yang, a doctoral student in the field of computer science; Paul Ginsparg, professor of information science in Cornell Bowers and of physics in the College of Arts and Sciences, and founder of arXiv; and Mathijs de Vaan and Toby Stuart of the University of California, Berkeley.
This work received support from the National Science Foundation.
Journal: Science
Article Title: Scientific production in the era of large language models
Article Publication Date: 18-Dec-2025
