Wednesday, December 24, 2025

Feral AI gossip with the potential to cause damage and shame will become more frequent, researchers warn


University of Exeter




“Feral” gossip spread via AI bots is likely to become more frequent and pervasive, causing reputational damage and shame, humiliation, anxiety, and distress, researchers have warned.

Chatbots like ChatGPT, Claude, and Gemini don't just make things up—they generate and spread gossip, complete with negative evaluations and juicy rumours that can cause real-world harm, according to new analysis by philosophers Joel Krueger and Lucy Osler from the University of Exeter.

The harm caused by AI gossip isn’t a hypothetical threat; real-world cases already exist. After publishing an article about how emotionally manipulative chatbots can be, New York Times reporter Kevin Roose found that chatbots were describing his writing as sensational and accusing him of poor journalistic ethics and of being unscrupulous. Other AI bots have falsely detailed people’s involvement in bribery, embezzlement, and sexual harassment. These gossipy AI-generated outputs cause real-world harms: reputational damage, shame, and social unrest.

The study outlines how chatbots gossip, both with human users and with other chatbots, though in a different way from humans. This can lead to harm that is potentially wider in scope than the harm caused by false information spread by chatbots.

Bot-to-bot gossip is particularly dangerous because it operates unconstrained by the social norms that moderate human gossip. It continues to embellish and exaggerate without being checked, spreading quickly in the background, making its way from one bot to the next and inflicting significant harms.

Dr Osler said: “Chatbots often say unexpected things and when chatting with them it can feel like there’s a person on the other side of the exchange. This feeling will likely be more common as they become even more sophisticated.

“Chatbot “bullshit” can be deceptive — and seductive. Because chatbots sound authoritative when we interact with them — their dataset exceeds what any single person can know, and false information is often presented alongside information we know is true — it’s easy to take their outputs at face value.

“This trust can be dangerous. Unsuspecting users might develop false beliefs that lead to harmful behaviour or biases based upon discriminatory information propagated by these chatbots.”

The study shows how the drive to increasingly personalise chatbots may be driven by the hope that we’ll become more dependent on these systems and give them greater access to our lives. Personalisation is also intended to intensify our feeling of trust and to drive us to develop increasingly rich social relationships with them.

Dr Krueger said: “Designing AI to engage in gossip is yet another way of securing increasingly robust emotional bonds between users and their bots.  

“Of course, bots have no interest in promoting a sense of emotional connection with other bots, since they don’t get the same “kick” out of spreading gossip the way humans do. But certain aspects of the way they disseminate gossip mirror the connection-promoting qualities of human gossip while simultaneously making bot-to-bot gossip potentially even more pernicious than gossip involving humans.”

The researchers predict that user-to-bot gossip may become more common. In these cases, users might seed bots with different nuggets of gossip, knowing the bots will, in turn, rapidly disseminate them in their characteristically feral way. Bots might therefore act as intermediaries, responding to user-seeded gossip and rapidly spreading it to others.

AI gives scientists a boost, but at the cost of too many mediocre papers




Cornell University






ITHACA, N.Y. -- After ChatGPT became available to the public in late 2022, scientists began talking among themselves about how much more productive they were using these new artificial intelligence tools, while scientific journal editors complained of an influx of well-written papers with little scientific value.

These anecdotal conversations represent a real shift in how scientists are writing up their work, according to a new study by Cornell researchers. They showed that using large language models (LLMs) like ChatGPT boosts paper production, especially for non-native English speakers. But the overall increase in AI-written papers is making it harder for many people – from paper reviewers to funders to policymakers – to separate the valuable contributions from the AI slop.

“It is a very widespread pattern, across different fields of science – from physical and computer sciences to biological and social sciences,” said Yian Yin, assistant professor of information science in the Cornell Ann S. Bowers College of Computing and Information Science. “There’s a big shift in our current ecosystem that warrants a very serious look, especially for those who make decisions about what science we should support and fund.”    

The new paper, “Scientific Production in the Era of Large Language Models,” was published Dec. 18 in Science.

Yin’s group investigated the impacts of LLMs on scientific publishing by collecting more than 2 million papers posted between January 2018 and June 2024 on three online preprint websites. The three sites – arXiv, bioRxiv and Social Science Research Network (SSRN) – cover the physical, life and social sciences, respectively, and post scientific papers that have yet to undergo peer review.

The researchers compared presumably human-authored papers posted before 2023 to AI-written text, in order to develop an AI model that detects papers likely written by LLMs. With this AI detector, they could identify which scientists were probably using the technology for writing, count how many papers they published before and after adopting AI, and then see whether those papers were ultimately deemed worthy of publication in scientific journals.
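The release does not describe the detector itself, but the general idea of training a classifier on text known to be human-written versus text generated by an LLM can be sketched as follows. This is a minimal illustration only; the features, model, and placeholder data below are assumptions, not the pipeline used in the study.

```python
# Minimal sketch of an LLM-text detector (illustrative only, not the study's method).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: abstracts presumed human-written (posted before 2023)
# and abstracts known to be LLM-generated.
human_abstracts = ["example pre-2023 abstract one", "example pre-2023 abstract two"]
llm_abstracts = ["example LLM-generated abstract one", "example LLM-generated abstract two"]

texts = human_abstracts + llm_abstracts
labels = [0] * len(human_abstracts) + [1] * len(llm_abstracts)  # 1 = likely LLM-written

# Word- and bigram-frequency features feeding a linear classifier: a simple baseline.
detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
detector.fit(texts, labels)

# Estimated probability that a newly posted preprint's text was written with an LLM.
print(detector.predict_proba(["abstract of a newly posted preprint"])[:, 1])
```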

Their analysis showed a big AI-powered productivity bump. On the arXiv site, scientists who appeared to use LLMs posted about one-third more papers than scientists who weren’t getting an assist from AI. The increase was more than 50% for bioRxiv and SSRN.

Not surprisingly, scientists whose first language is not English, who face the hurdle of communicating science in a foreign language, benefited the most from LLMs. Researchers from Asian institutions, for example, posted between 43.0% and 89.3% more papers, depending on the preprint site, after the AI detector indicated a switch to using LLMs, compared with similar scientists not using the technology. The benefit is so large that Yin predicts a global shift in the regions with the greatest scientific productivity, toward areas previously disadvantaged by the language barrier.

The study uncovered another positive effect of AI in paper preparation. When scientists search for related research to cite in their papers, Bing Chat – the first widely adopted AI-powered search tool – is better at finding newer publications and relevant books, compared to traditional search tools, which tend to identify older, more commonly cited works.

 “People using LLMs are connecting to more diverse knowledge, which might be driving more creative ideas,” said first author Keigo Kusumegi, a doctoral student in the field of information science. In future work, he hopes to explore whether AI use leads to more innovative, interdisciplinary work.

While LLMs make it easier for individuals to produce papers, they also make it harder for others to evaluate their quality. For human-written work, clear yet complex language – with big words and long sentences – is usually a reliable indicator of quality research. Across all three preprint sites, papers likely written by humans that scored high on a writing complexity test were most likely to be accepted to a scientific journal. But high-scoring papers probably written by LLMs were less likely to be accepted, suggesting that despite the convincing language, reviewers deemed many of these papers to have little scientific value. 
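The release does not name the specific writing complexity test the researchers used, but readability formulas of the kind commonly used for this purpose can be computed directly from sentence length and word length. The sketch below uses the Flesch-Kincaid grade-level formula with a rough syllable heuristic as an illustrative stand-in, not the study's actual measure.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels; real tools use better rules.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # Flesch-Kincaid grade level: longer sentences and longer words raise the score,
    # so a higher score indicates more complex writing.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

print(flesch_kincaid_grade("The cat sat on the mat."))  # low complexity
print(flesch_kincaid_grade("Methodological heterogeneity complicates interpretability."))  # high complexity
```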

This disconnect between writing quality and scientific quality could have big implications, Yin said, as editors and reviewers struggle to identify valuable paper submissions, and universities and funding agencies can no longer evaluate scientists based on their productivity.

The researchers caution that the new findings are based solely on observations. Next, they hope to perform causal analysis, such as a controlled experiment in which some scientists are randomly assigned to use LLMs and others are not.

Yin is also planning a symposium that will examine how generative AI is transforming research – and how scientists and policymakers can best shape these changes – to take place March 3-5, 2026, on the Ithaca campus.

As scientists increasingly rely on AI for writing, coding and even idea generation – essentially using AI as a co-scientist – Yin suspects that its impacts will likely broaden. He urges policymakers to make new rules to regulate the rapidly evolving technological landscape.

“Already now, the question is not, have you used AI? The question is, how exactly have you used AI and whether it’s helpful or not.”

Co-authors on the study include Xinyu Yang, a doctoral student in the field of computer science; Paul Ginsparg, professor of information science in Cornell Bowers and of physics in the College of Arts and Sciences, and founder of arXiv; and Mathijs de Vaan and Toby Stuart of the University of California, Berkeley.

This work received support from the National Science Foundation.

-30-

As measles cases rise, views of MMR vaccine safety and effectiveness -- and willingness to recommend it -- drop



Annenberg Public Policy Center of the University of Pennsylvania

Image: Likelihood that U.S. adults would recommend that someone in their household who is eligible for a vaccine get that vaccine. The totals of those likely to recommend decreased significantly from 2024 to 2025 for the MMR, HPV, and polio vaccines. See the topline for details. Source: Annenberg Public Policy Center surveys in Nov. 2024 and Dec. 2025. Credit: Annenberg Public Policy Center






PHILADELPHIA – The United States is experiencing the worst year for measles cases in more than three decades, with nearly 2,000 cases confirmed by the Centers for Disease Control and Prevention (CDC). There have been 49 outbreaks spanning 44 states, with major outbreaks in Texas, along the Utah-Arizona border, and most recently, in South Carolina, where hundreds of people who were exposed to the virus have been quarantined.

A vaccine-preventable illness, measles is highly contagious and potentially deadly, especially for young children. This year’s outbreaks have led to three deaths, including two children. Measles was declared “eliminated” from the United States in 2000, thanks largely to what the CDC calls “a highly effective vaccination program,” but if the current outbreaks cannot be stopped, the nation may lose its elimination status, which is determined by the World Health Organization.

According to the CDC, 93% of the confirmed measles cases in the United States are among those who are unvaccinated or whose vaccination status is unknown.

As U.S. cases rise, a new nationally representative panel survey by the Annenberg Public Policy Center (APPC) of the University of Pennsylvania finds a small but significant drop in the proportion of the public that would recommend that someone in their household get the MMR vaccine, which protects against measles, mumps, and rubella. The survey of 1,637 U.S. adults, conducted Nov. 17-Dec. 1, 2025, finds drops in the perceived safety and effectiveness of the MMR vaccine, as well as of two other vaccines, those for seasonal flu and Covid-19. The public does, however, continue to see vaccination as the best defense against diseases like measles. See the end of this news release or the topline for additional details.

“Vaccination dispatched measles to the history books for most children in the United States,” said Patrick E. Jamieson, the director of APPC’s Annenberg Health and Risk Communication Institute. “Tragically, fears driven by misinformation have revived the threat.”

Decline in likelihood to recommend MMR vaccine

According to the CDC, “The best way to protect against measles is to get the measles, mumps, and rubella (MMR) vaccine.”

The survey finds that most people (86%) say they would be likely to recommend that someone in their household who is eligible for the measles, mumps, and rubella (MMR) vaccine get the vaccine. This represents a small but significant decline from November 2024, when 90% said they would recommend the MMR vaccine to eligible members of their household. Over the same period, respondents also report significant declines in the likelihood they would recommend vaccines against HPV, or human papillomavirus (75%, down from 79% in November 2024), and polio (85%, down from 88% in November 2024) to eligible people in their household.

A regression analysis shows that the average decline in recommending the MMR vaccine is not significantly different from those other declines.

“The small but significant decreases in the likelihood to recommend the MMR, HPV, and polio vaccines should be a cause for concern,” said Ken Winneg, APPC’s managing director of survey research.

The likelihood of people recommending three other vaccines to people in their household — the shingles, Tdap (tetanus, diphtheria, and pertussis), and pneumonia vaccines — did not change significantly from 2024 to 2025.

Perceptions of the MMR vaccine’s safety and effectiveness down from 2022

Safety: More than 8 in 10 people (83%) rate the MMR vaccine as safe. While that proportion has remained about the same since APPC surveys conducted in 2023 and 2024, it is significantly lower than in August 2022, when 88% considered the MMR vaccine safe, a drop of five percentage points. Survey respondents also show significant declines in the perceived safety of the seasonal flu vaccine (80%, down from 85% in August 2022) and the Covid-19 vaccine (65%, down from 73% in August 2022) over the same time period.

Effectiveness: Since August 2022, results from APPC surveys have shown a significant decline in the perceived effectiveness of the MMR vaccine as well. In the current survey, 83% say that the MMR vaccine is effective, down from 87% in 2022. Respondents similarly report significant declines in the perceived effectiveness of the vaccines against the flu (72%, down from 81% in August 2022) and Covid-19 (61%, down from 69% in August 2022) over the same time period.

Here, regression analyses show that the average decline in the perceived safety, and separately the perceived effectiveness, of the MMR vaccine is significantly smaller than the declines in perceived safety or effectiveness of the other vaccines assessed, including flu and Covid-19.

MMR vaccine still seen as safer than getting the diseases it prevents

The survey finds no change in the relative perception of the safety of getting the MMR vaccine compared with getting the diseases it protects against. Three in 4 people (76%) say it is true that it is safer to get the MMR vaccine than to get the diseases it protects against: measles, mumps, or rubella. This is unchanged since February 2024, when we last asked this question.

Worry about contracting measles declines since April 2025

People’s worry that they or a family member will get measles in the next three months has decreased slightly but significantly since April 2025. In the current survey, 13% say they are worried that they or someone in their family would contract measles over the “next three months,” a four-point decline from April 2025 (17%). By contrast, among illnesses that people are susceptible to year-round, respondents are more worried about Covid-19 in the current survey (25%) than they were in April 2025 (20%). “Since measles cases are surging, this is a surprising finding,” Jamieson said.

Though worry is low, many say it would be “bad” to have measles

More than two-thirds (68%) say that it would be “bad” for them to have the measles, including 31% who say it would be “extremely bad.” In August 2022, a smaller majority (58%) said getting measles would be “bad.” More respondents think it would be bad to have polio (87%) or skin cancer (80%, up from 76% in 2022), and fewer respondents think it would be bad to have the seasonal flu (25%) or Covid-19 (42%). Regression analyses show that the average increase in people reporting that it would be “bad” to have measles is significantly larger than for any other illness assessed.

Vaccines still seen as the best defense against diseases like measles

More than 3 in 4 people (76%) correctly say it is true that vaccines are the best defense we have against measles, chickenpox, polio, and Covid-19, which represents no change since 2024. “Individuals deciding not to vaccinate against measles, chickenpox, polio, or Covid-19 put both their families and communities at risk because these diseases are so infectious,” said Laura A. Gibson, an APPC research analyst.

APPC’s ASAPH survey

The survey data come from the 26th wave of a nationally representative panel of 1,637 U.S. adults conducted for the Annenberg Public Policy Center by SSRS, an independent market research company. This wave of the Annenberg Science and Public Health Knowledge (ASAPH) survey was fielded Nov. 17-Dec. 1, 2025. The margin of sampling error (MOE) is ± 3.5 percentage points at the 95% confidence level. All figures are rounded to the nearest whole number and may not add to 100%. Combined subcategories may not add to totals in the topline and text due to rounding.
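For context on where a figure like ±3.5 points comes from: the unadjusted margin of error for a proportion near 50% in a sample of 1,637 is about ±2.4 points, and reported margins for weighted panel surveys are typically larger because they fold in a design effect. The sketch below is a rough illustration under that assumption; the actual adjustment is described in the methods report, not reproduced here.

```python
import math

n = 1637   # respondents in this survey wave
p = 0.5    # proportion that maximizes the margin of error
z = 1.96   # z-score for a 95% confidence level

# Unadjusted margin of error for a simple random sample: about +/- 2.4 points.
moe = z * math.sqrt(p * (1 - p) / n)
print(f"Unadjusted MOE: +/- {moe * 100:.1f} percentage points")

# Weighted panel surveys report larger margins via a design effect; a value near 2
# (an illustrative assumption, not taken from the methods report) yields roughly +/- 3.4.
design_effect = 2.0
moe_adjusted = z * math.sqrt(design_effect * p * (1 - p) / n)
print(f"Design-effect-adjusted MOE: +/- {moe_adjusted * 100:.1f} percentage points")
```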

Download the topline and the methods report.

The policy center has been tracking the American public’s knowledge, beliefs, and behaviors regarding vaccination, Covid-19, flu, RSV, and other consequential health issues through this survey panel since April 2021. APPC’s survey team includes Patrick E. Jamieson, director of the Annenberg Health and Risk Communication Institute; research analyst Laura A. Gibson; and Ken Winneg, managing director of survey research.


The Annenberg Public Policy Center was established in 1993 to educate the public and policy makers about communication’s role in advancing public understanding of political, science, and health issues at the local, state, and federal levels.