AI reveals language links between Reddit groups for hate speech, psychiatric disorders
Findings could help inform efforts to counter hate speech and misinformation
PLOS
Image: Researchers assessed speech patterns of those participating in hate speech online.
Credit: Mika Baumeister, Unsplash (CC0, https://creativecommons.org/publicdomain/zero/1.0/)
A new analysis suggests that posts in hate speech communities on the social media website Reddit share speech-pattern similarities with posts in Reddit communities for certain psychiatric disorders. Dr. Andrew William Alexander and Dr. Hongbin Wang of Texas A&M University, U.S., present these findings July 29th in the open-access journal PLOS Digital Health.
The ubiquity of social media has raised concerns about its role in spreading hate speech and misinformation, potentially contributing to prejudice, discrimination and real-world violence. Prior research has uncovered associations between certain personality traits and the act of posting online hate speech or misinformation.
However, whether any associations exist between psychological wellbeing and online hate speech or misinformation has been unclear. To help clarify this, Alexander and Wang used artificial intelligence tools to analyze posts from 54 Reddit communities relevant to hate speech, misinformation, psychiatric disorders, or, for neutral comparison, none of those categories. Selected groups included r/ADHD, a community for discussing attention-deficit/hyperactivity disorder; r/NoNewNormal, a community dedicated to COVID-19 misinformation; and r/Incels, a community banned for hate speech.
The researchers used the large language model GPT-3 to convert thousands of posts from these communities into numerical representations capturing the posts’ underlying speech patterns. These representations, or “embeddings,” could then be analyzed through machine-learning techniques and a mathematical approach known as topological data analysis.
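The paper describes the full pipeline (GPT-3 embeddings followed by machine learning and topological data analysis); as a rough illustration of the embedding step only, the hedged Python sketch below obtains embeddings through the OpenAI embeddings API and compares communities by the cosine similarity of their average post vectors. The model name, example posts, and centroid comparison are illustrative stand-ins, not the authors’ method, and the topological analysis itself is not reproduced here.

```python
# Hypothetical sketch: embed posts and compare communities by the cosine
# similarity of their mean embeddings. Model and data are placeholders;
# the paper's actual pipeline (GPT-3 embeddings + ML + topological data
# analysis) is not reproduced here.
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder posts grouped by hypothetical community
posts_by_community = {
    "community_a": ["example post one", "example post two"],
    "community_b": ["another example post", "yet another post"],
}

def embed(texts):
    """Return one embedding vector per input text."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in resp.data])

# Represent each community by the centroid of its post embeddings
centroids = {name: embed(texts).mean(axis=0)
             for name, texts in posts_by_community.items()}

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

names = list(centroids)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        print(a, "vs", b, "->", round(cosine(centroids[a], centroids[b]), 3))
```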
This analysis showed that speech patterns in hate speech communities were similar to speech patterns in communities for complex post-traumatic stress disorder, and borderline, narcissistic and antisocial personality disorders. Links between misinformation and psychiatric disorders were less clear, but with some connections to anxiety disorders.
Importantly, these findings do not suggest that people with psychiatric disorders are more prone to hate speech or misinformation. For one thing, there was no way of knowing whether the analyzed posts were made by people actually diagnosed with those disorders. More research is needed to understand the links and explore such possibilities as hate speech communities mimicking speech patterns seen in psychiatric disorders.
The authors suggest their findings could help inform new strategies to combat online hate speech and misinformation, such as adapting elements of therapies developed for psychiatric disorders.
The authors add, “Our results show that the speech patterns of those participating in hate speech online have strong underlying similarities with those participating in communities for individuals with certain psychiatric disorders. Chief among these are the Cluster B personality disorders: Narcissistic Personality Disorder, Antisocial Personality Disorder, and Borderline Personality Disorder. These disorders are generally known for either lack of empathy/regard towards the wellbeing of others, or difficulties managing anger and relationships with others.”
Alexander notes, “While we looked for similarities between misinformation and psychiatric disorder speech patterns as well, the connections we found were far weaker. Besides a potential anxiety component, I think it is safe to say at this point in time that most people buying into or spreading misinformation are actually quite healthy from a psychiatric standpoint.”
Alexander concludes, “I want to emphasize that these results do not mean that individuals with psychiatric conditions are more likely to engage in hate speech. Instead, it suggests that people who engage in hate speech online tend to have similar speech patterns to those with cluster B personality disorders. It could be that the lack of empathy for others fostered by hate speech influences people over time and causes them to exhibit traits similar to those seen in Cluster B personality disorders, at least with regards to the target of their hate speech. While further studies would be needed to confirm this, I think it is a good indicator that exposing ourselves to these types of communities for long periods of time is not healthy and can make us less empathetic towards others.”
In your coverage, please use this URL to provide access to the freely available paper in PLOS Digital Health: http://plos.io/4028vQ5
Citation: Alexander AW, Wang H (2025) Topological data mapping of online hate speech, misinformation, and general mental health: A large language model based study. PLOS Digit Health 4(7): e0000935. https://doi.org/10.1371/journal.pdig.0000935
Author countries: United States
Funding: AWA was a Burroughs Wellcome Fund Scholar supported by a Burroughs Wellcome Fund Physician Scientist Institutional Award (G-1020069) to the Texas A&M University Academy of Physician Scientists (https://www.bwfund.org/funding-opportunities/biomedical-sciences/physician-scientist-institutional-award/grant-recipients/). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. HW received no specific funding for this work.
Journal
PLOS Digital Health
Method of Research
Computational simulation/modeling
National Science Foundation awards UC Davis $5 million for artificial intelligence hub
University of California - Davis
The National Science Foundation has awarded $5 million over five years to the University of California, Davis, to run the Artificial Intelligence Institutes Virtual Organization (AIVO), a community hub for new and existing AI institutes established by the federal government.
AIVO is part of a $100 million public-private investment in AI announced by NSF July 29.
“Artificial intelligence is key to strengthening our workforce and boosting U.S. competitiveness,” said Brian Stone, performing the duties of the NSF director, in a news release. “Through the National AI Research Institutes, we are turning cutting-edge ideas and research into real-world solutions and preparing Americans to lead in the technologies and jobs of the future.”
Until July 29, AIVO had been a virtual organization with NSF support, run by staff from the Artificial Intelligence Institute for Next Generation Food Systems (AIFS) at UC Davis. With the new investment, it will become an NSF-branded community hub.
AIVO began as an effort to coordinate activities among the original federal AI institutes, including AIFS, and then to share knowledge with new institutes as they were established, said Steve Brown, associate director of AIFS. It has expanded into a virtual hub that supports all the institutes, including organizing an annual summit for AI institutes’ leadership.
Under the new contract, AIVO will provide the events and venues that bring the AI Institutes’ personnel and other stakeholders together and create mechanisms for cross-institute connection. It will also foster the development of new public-private partnerships and promote positive interest in university-based AI research and the development and use of AI for societal good, Brown said.
AIVO also received $1.75 million in December 2024 from Google.org to support AI education in many forms, including AI curriculum for K-16 and workforce training, AI-assisted learning and summer programs in AI for high school teachers and students.
AIFS at UC Davis was one of the seven original AI institutes announced in August 2020. AIFS is funded by the U.S. Department of Agriculture’s National Institute of Food and Agriculture, while the overall AI Institutes program is led by NSF.
Brown University to lead national institute focused on intuitive, trustworthy AI assistants
Brown University
Image: A new institute, based at Brown and supported by a $20 million National Science Foundation grant, will convene researchers to guide development of a new generation of AI assistants for use in mental and behavioral health. Ellie Pavlick, an associate professor of computer science at Brown, will lead the effort.
Credit: Nick Dentamaro/Brown University
PROVIDENCE, R.I. [Brown University] — With a $20 million grant from the U.S. National Science Foundation, Brown University researchers will lead an artificial intelligence research institute aimed at developing a new generation of AI assistants capable of trustworthy, sensitive and context-aware interactions with people. Work to develop the advanced assistants is specifically motivated by the potential for use in mental and behavioral health, where trust and safety are of the utmost importance.
The AI Research Institute on Interaction for AI Assistants (ARIA) will combine research on human and machine cognition, with the goal of creating AI systems that are able to interpret a person’s unique behavioral needs and provide helpful feedback in real time. To understand what form such systems should take and how they could be safely and responsibly deployed, the institute will bring together experts from across the nation spanning computer science and machine learning, cognitive and behavioral science, law, philosophy and education.
Creating AI systems that can operate safely in a sensitive area like mental health care will require capabilities that extend well beyond those of even today’s most advanced chatbots and language models, according to Ellie Pavlick, an associate professor of computer science at Brown who will lead the ARIA collaboration.
“Any AI system that interacts with people, especially those who may be in states of distress or other vulnerable situations, needs a strong understanding of the human it’s interacting with, along with a deep causal understanding of the world and how the system’s own behavior affects that world,” Pavlick said. “At the same time, the system needs to be transparent about why it makes the recommendations that it does in order to build trust with the user. Mental health is a high stakes setting that embodies all the hardest problems facing AI today. That’s why we’re excited to tackle this and figure out what it takes to get these things absolutely right.”
That work will require deep collaboration across institutions, expertise and academic disciplines, Pavlick said. She and her colleagues have carefully assembled a nationwide collaboration to address these critical challenges in AI development.
“AI systems — particularly those brought to bear in sensitive areas of human health — require thoughtful development that combines technological advancement with a deep understanding of their societal implications,” said Brown University Provost Francis J. Doyle III. “Brown is well-positioned to lead this collaborative research, and I’m confident the work of ARIA’s scholars will produce scientific breakthroughs that will have a positive impact on the lives of countless people.”
ARIA is one of five national AI institutes that will share a total of $100 million in funding, the National Science Foundation announced on Tuesday, July 29, in partnership with Capital One and Intel. The public-private investment aligns with the White House AI Action Plan, a national initiative to sustain and enhance America's global AI leadership, the NSF noted.
“Artificial intelligence is key to strengthening our workforce and boosting U.S. competitiveness," said Brian Stone, who is performing the duties of the NSF director. "Through the National AI Research Institutes, we are turning cutting-edge ideas and research into real-world solutions and preparing Americans to lead in the technologies and jobs of the future."
ARIA’s research team includes experts from leading research institutions nationwide including Colby College; Dartmouth College; New York University; Carnegie Mellon University; the University of California, Berkeley; the University of California, San Diego; the University of New Mexico; the Santa Fe Institute; and Data and Society, a civil society organization in New York. The institute will draw on specialized expertise from Brown’s Data Science Institute and Carney Institute for Brain Science, Dartmouth’s Center for Technology and Behavioral Health, and Colby’s Davis Institute for AI.
Additional collaborators include SureStart, Google, the National Institutes of Health, Addiction Policy Forum, Community College of Rhode Island, and Clemson University. As part of its partnership with NSF, Capital One is contributing $1 million over five years to support ARIA’s research efforts.
“ARIA, in its very conception, incorporates some of the most important ideals of doing people- and community-centered research,” said Suresh Venkatasubramanian, a professor of computer science at Brown, director of Brown’s Center for Technological Responsibility, Reimagination and Redesign, and co-director of ARIA. “Our team has scholars who span multiple disciplines, deep engagement with stakeholders in the mental and behavioral health community, and cutting-edge expertise in doing sociotechnical research.”
ARIA’s work will also include a robust education and workforce development program spanning K-12 students through working professionals. The ARIA team will work with the Bootstrap program, a computer science curriculum developed at Brown, to support evidence-based practices for building new AI curricula and training for K-12 teachers. An initiative called the Building Bridges Summer Program will bring college and high school students from across the country to ARIA campuses to work on cutting-edge AI research.
New technologies for tomorrow, new insights for today
According to the National Institute of Mental Health, more than one in five Americans lives with a mood, anxiety or substance use disorder. There are effective treatments for these conditions, but high cost, lack of insurance, limited access to transportation and social stigma can all create barriers to effective care. AI has the potential to break through these barriers in a variety of ways, Pavlick says.
“There are still a lot of open questions about what a ‘good’ AI system for mental health support looks like,” Pavlick said. “We can imagine people wearing smartwatches or other devices that collect behavioral and biometric information, and having an AI system that uses that data to provide nudges or goal-oriented feedback. But there are obviously a lot of considerations about privacy, accuracy, personalization, safety and when to have a therapist in the loop. Part of the work of the institute will be to understand what forms this technology could take, which types of systems could work and which shouldn’t exist.”
The need for this work is urgent, according to Pavlick. New startups and existing companies are already developing AI apps and chatbots for mental health support, and evidence suggests that people often turn to ChatGPT and other chatbots for relationship advice and other information tied to mental well-being.
“The work we’ll be doing on trust, safety and responsible AI will hopefully address immediate safety concerns with these systems — for example, developing safeguards against responses that reinforce delusions or unempathetic responses that could increase someone’s distress,” Pavlick said. “We need short-term solutions to avoid harms from systems already in wide use, paired with long-term research to fix these problems where they originate.”
New and smarter AI systems will be required to help deliver the kind of trustworthy and context-aware feedback required for safe and effective mental health interventions. Today’s large language models generate text through statistical inference — predicting which words to use next based on prior words or user inputs. Unlike humans, they don’t have a mental model of the world around them, they don’t understand chains of cause and effect, and they have little intuition about the internal states of the people with whom they interact.
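As a concrete illustration of the statistical inference described above, the hedged Python sketch below uses the open GPT-2 model from the Hugging Face transformers library to print the most probable next tokens for a prompt. The model choice and the prompt are generic stand-ins for how any autoregressive language model works; they are not part of ARIA's systems.

```python
# Illustrative sketch: next-token prediction with an open model (GPT-2).
# A generic example of how LLMs choose the next word, not ARIA code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I have been feeling anxious lately, so I decided to"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Probability distribution over the token that would come next
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id)):>12s}  p={prob.item():.3f}")
```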
“There's a lot of work in cognitive science and neuroscience trying to understand how humans develop this kind of causal understanding of the world and of their own activities,” Pavlick said. “We’ll be adding to that work and thinking about how to endow AI systems with analogous abilities so that they can interact naturally and effectively with people.”
At the same time, the team will engage legal scholars, philosophers, education experts and others to better understand how such systems would fit into existing social and cultural infrastructure.
“You don't just want to take for granted that any system that you can build should exist, because not all of them will have a net benefit,” Pavlick said. “So we’ll be addressing questions about what systems should even be built and which should not.”
Ultimately, Pavlick says, developing smarter, more responsible AI will be a benefit not only in the mental health sphere, but in the course of AI development in general.
“We’re addressing this critical alignment question of how to build technology that is ultimately good for society,” she said. “These are extremely hard problems in AI in general that happen to have a particularly pointed use case in mental health. By working toward answers to these questions, we’ll work toward making AI that’s beneficial to all.”
With no need for sleep or food, AI-built ‘scientists’ get the job done quickly
In Virtual Lab project, CZ Biohub San Francisco researchers and collaborators assemble team of interdisciplinary AI agents that can solve complex research questions
Chan Zuckerberg Biohub
Image: The Virtual Lab comprises a team of AI scientists, guided by a human scientist, capable of carrying out complex scientific research.
Credit: Swanson, et al., Nature
Imagine you’re a molecular biologist wanting to launch a project seeking treatments for a newly emerging disease. You know you need the expertise of a virologist and an immunologist, plus a bioinformatics specialist to help analyze and generate insights from your data. But you lack the resources or connections to build a big multidisciplinary team.
Researchers at Chan Zuckerberg Biohub San Francisco and Stanford University now offer a novel solution to this dilemma: an AI-driven Virtual Lab through which a team of AI agents, each equipped with varied scientific expertise, can tackle sophisticated and open-ended scientific problems by formulating, refining, and carrying out a complex research strategy — these agents can even conduct virtual experiments, producing results that can be validated in real-life laboratories.
In a study published in Nature on July 29, 2025, co–senior authors John Pak of CZ Biohub SF and Stanford’s James Zou describe their Virtual Lab platform, in which a human user creates a “Principal Investigator” AI agent (the PI) that assembles and directs a team of additional AI agents emulating the specialized research roles seen in science labs. The human researcher proposes a scientific question, and then monitors meetings in which the PI agent exchanges ideas with the team of specialist agents to advance the research. The agents are run by a large language model (LLM), giving them scientific reasoning and decision-making capabilities.
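The release describes the Virtual Lab's structure only at a high level. The hedged Python sketch below shows one generic way such a PI-led agent meeting could be wired up around an LLM chat API: the role prompts, the model name, and the single-round meeting loop are assumptions for illustration, not the authors' actual implementation.

```python
# Hypothetical sketch of a PI-led multi-agent "meeting", loosely modeled on
# the Virtual Lab description. Roles, prompts, model choice, and the
# single-round loop are illustrative assumptions, not the published code.
from openai import OpenAI

client = OpenAI()          # assumes OPENAI_API_KEY is set
MODEL = "gpt-4o-mini"      # placeholder model choice

def ask(role_prompt, conversation):
    """Query the LLM as a given agent, showing it the transcript so far."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": role_prompt},
                  {"role": "user", "content": conversation}],
    )
    return resp.choices[0].message.content

agenda = ("Design nanobody candidates that bind the spike protein "
          "of new SARS-CoV-2 variants.")

pi_prompt = "You are the Principal Investigator agent. Set direction and synthesize input."
specialists = {
    "Immunologist": "You are an immunologist agent. Comment on antigen and epitope choices.",
    "Computational biologist": "You are a computational biologist agent. Propose modeling steps.",
}
critic_prompt = "You are a Scientific Critic agent. Ask probing questions and flag weak reasoning."

transcript = f"Agenda from the human researcher: {agenda}\n"
transcript += f"PI: {ask(pi_prompt, transcript)}\n"
for name, prompt in specialists.items():
    transcript += f"{name}: {ask(prompt, transcript)}\n"
transcript += f"Critic: {ask(critic_prompt, transcript)}\n"
transcript += f"PI (summary): {ask(pi_prompt, transcript + 'Summarize decisions and next steps.')}\n"

print(transcript)  # the human researcher reviews the full meeting transcript
```

Printing the full transcript at the end mirrors the point made later in the release: human collaborators can review the agents' discussion to see why particular decisions were made.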
The authors then used the Virtual Lab platform to investigate a timely research question: designing antibodies or Nanobodies to bind to the spike protein of new variants of the SARS-CoV-2 virus. After just a few days working together, the Virtual Lab team had designed and implemented an innovative computational pipeline, and had presented Pak and Zou with blueprints for dozens of binders, two of which showed particular promise against new SARS-CoV-2 strains when subsequently tested in Pak’s lab. The overall Virtual Lab study was led by Kyle Swanson, a Ph.D. student in Zou’s group.
“What was once this crazy science fiction idea is now a reality,” said Pak, group leader of the Biohub SF Protein Sciences Platform. “The AI agents came up with a pipeline that was quite creative. But at the same time, it wasn’t outrageous or nonsensical. It was very reasonable – and they were very fast.”
Zou is a pioneering AI researcher who has been recognized widely for breakthroughs in using AI for biomedical research, including winning the International Society of Computational Biology’s 2025 Overton Prize and being named in the New York Times’ 2024 Good Tech Awards for SyntheMol, an AI system that can design and validate novel antibiotics.
“This is the first demonstration of autonomous AI agents really solving a challenging research problem, from start to finish,” said Zou, an associate professor of biomedical data science who leads Stanford University’s AI for Health program and is also a CZ Biohub SF Investigator. “The AI agents made good decisions about complex problems and were able to quickly design dozens of protein candidates that we could then test in lab experiments.”
A fortuitous real-world meeting
It’s become increasingly common for human scientists to employ LLMs to help with science research, such as analyzing data, writing code, and even designing proteins. Zou and Pak’s Virtual Lab platform, however, is to their knowledge the first to apply multistep reasoning and interdisciplinary expertise to successfully address an open-ended research question.
Zou and Pak first met at one of the biweekly Biohub SF Investigator Program meetings. “I had just seen James give a talk at the previous Investigator meeting, where he said he wished he could do more experimental work,” Pak said. “So I decided to introduce myself.”
That conversation, in the spring of 2024, sparked a collaboration that drew on Zou’s AI expertise and Pak’s expertise in protein science.
In addition to the PI agent and specialist agents, their Virtual Lab platform includes a Scientific Critic agent, a generalist whose role is to ask probing questions and inject a dose of skepticism into the process. “We found the Critic to be quite essential, and it also reduced hallucinations,” Zou said.
While human researchers participated in AI scientists’ meetings and offered guidance at key moments, their words made up only about 1% of all conversations. The vast majority of discussions, decisions, and analyses were performed by the AI agents themselves.
In this study, the Virtual Lab team came up with 92 new “Nanobodies” (tiny proteins that work like antibodies), and experiments in Pak’s lab found that two bound to the so-called spike protein of recent SARS-CoV-2 variants, a significant enough finding that Pak expects to publish studies on them.
“You’d think there’d be no way AI agents talking together could propose something akin to what a human scientist would come up with, but we found here that they really can,” said Pak. “It’s pretty shocking.”
When asked if he’s worried about AI scientists replacing him, Pak says no. Instead, he thinks these new virtual collaborators will just enhance his work.
“This project opened the door for our Protein Science team to test a lot more well-conceived ideas very quickly,” he said. “The Virtual Lab gave us more work, in a sense, because it gave us more ideas to test. If AI can produce more testable hypotheses, that’s more work for everyone.”
The results, said Pak and Zou, not only demonstrate the potential benefits of human–AI collaborations but also highlight the importance of diverse perspectives in science. Even in these virtual settings, instructing agents to assume different roles and bring varying perspectives to the table resulted in better outcomes than one AI agent working alone, they said. And because the discussions result in a transcript that human team members can access and review, researchers can feel confident about why certain decisions were made and probe further if they have questions or concerns.
“The agents and the humans are all speaking the same language, so there’s nothing ‘black box’ about it, and the collaboration can progress very smoothly,” Pak said. “It was a really positive experience overall, and I feel pretty confident about applying the Virtual Lab in future research.”
Zou says the existing platform is designed for biomedical research questions, but modifications would allow it to be used in a much wider array of scientific disciplines.
“We’re demonstrating a new paradigm where AI is not just a tool we use for a specific step in our research, but it can actually be a primary driver of the whole process to generate discoveries,” said Zou. “It’s a big shift, and we’re excited to see how it helps us advance in all areas of research.”
Excerpts from a Virtual Lab team meeting, in which AI agents with unique roles discuss the antibody project.
About CZ Biohub San Francisco: A nonprofit biomedical research center founded in 2016, CZ Biohub SF is part of the CZ Biohub Network, a group of research institutes created and supported by the Chan Zuckerberg Initiative. CZ Biohub SF’s researchers, engineers, and data scientists, in collaboration with colleagues at our partner universities — Stanford University; the University of California, Berkeley; and the University of California, San Francisco — seek to understand the fundamental mechanisms underlying disease and develop new technologies that will lead to actionable diagnostics and effective therapies. Learn more at czbiohub.org/sf.
Journal
Nature
Article Title
The Virtual Lab of AI Agents Designs New SARS-CoV-2 Nanobodies
Article Publication Date
29-Jul-2025