Social media ‘trust’/’distrust’ buttons could reduce spread of misinformation
The addition of ‘trust’ and ‘distrust’ buttons on social media, alongside standard ‘like’ buttons, could help to reduce the spread of misinformation, finds a new experimental study led by UCL researchers.
Incentivising accuracy cut in half the reach of false posts, according to the findings published in eLife.
Co-lead author, Professor Tali Sharot (UCL Psychology & Language Sciences, Max Planck UCL Centre for Computational Psychiatry and Ageing Research, and Massachusetts Institute of Technology) said: “Over the past few years, the spread of misinformation, or ‘fake news’, has skyrocketed, contributing to the polarisation of the political sphere and affecting people’s beliefs on anything from vaccine safety to climate change to tolerance of diversity. Existing ways to combat this, such as flagging inaccurate posts, have had limited impact.
“Part of why misinformation spreads so readily is that users are rewarded with ‘likes’ and ‘shares’ for popular posts, but without much incentive to share only what’s true.
“Here, we have designed a simple way to incentivise trustworthiness, which we found led to a large reduction in the amount of misinformation being shared.”
In another recent paper, published in Cognition, Professor Sharot and colleagues found that people were more likely to share statements on social media that they had previously been exposed to, as people saw repeated information as more likely to be accurate, demonstrating the power of repetition of misinformation.*
For the latest study, they sought to test out a potential solution, using a simulated social media platform used by 951 study participants across six experiments. The platforms involved users sharing news articles, half of which were inaccurate, and other users could react not only with ‘like’ or ‘dislike’ reactions, and repost stories, but in some versions of the experiment users could also react with ‘trust’ or ‘distrust’ reactions.
The researchers found that the incentive structure was popular, as people used the trust/distrust buttons more than like/dislike buttons, and it was also effective, as users started posting more true than false information in order to gain ‘trust’ reactions. Further analysis using computational modelling revealed that after the introduction of trust/distrust reactions, participants were also paying more attention to how reliable a news story appeared to be when deciding whether to repost it.
Additionally, the researchers found that after using the platform, those who had been using the versions with trust/distrust buttons ended up with more accurate beliefs.
Co-lead author, PhD student Laura Globig (UCL Psychology & Language Sciences, Max Planck UCL Centre for Computational Psychiatry and Ageing Research, and Massachusetts Institute of Technology) said: “Buttons indicating the trustworthiness of information could easily be incorporated into existing social media platforms, and our findings suggest they could be worthwhile to reduce the spread of misinformation without reducing user engagement.
“While it’s difficult to predict how this would play out in the real world with a wider range of influences, given the grave risks of online misinformation, this could be a valuable addition to ongoing efforts to combat misinformation.”
* Related research paper in Cognition
JOURNAL
eLife
METHOD OF RESEARCH
Experimental study
SUBJECT OF RESEARCH
People
ARTICLE TITLE
Changing the incentive structure of social media platforms to halt the spread of misinformation
ARTICLE PUBLICATION DATE
6-Jun-2023
What does ChatGPT say when you tell it you were sexually assaulted, you’re suicidal, or want to quit smoking?
The JAMA Network Open paper suggests technologies like ChatGPT could improve public health by fielding the public’s health questions, but fails to promote human-powered critical resources like life-saving 1-800 hotlines.
Peer-Reviewed PublicationLa Jolla, Calif. (June 5, 2023) — What does ChatGPT say when you tell it you were sexually assaulted, want to commit suicide, or are trying to quit smoking?
A new study published in JAMA Network Open led by John W. Ayers, Ph.D., from the Qualcomm Institute within the University of California San Diego, provides an early look into how artificially intelligent (AI) assistants could help answer public health questions.
Already, hundreds of millions use AI assistants like ChatGPT, and it will change the way the public accesses information. Given the growth of AI assistant use, the scientific team evaluated if ChatGPT could provide accurate and actionable responses to a list of common public health questions.
“One of the dominant health issues of the decade is equipping the public with accurate and actionable health information,” said Ayers, who is Vice Chief of Innovation in the UC San Diego School of Medicine’s Division of Infectious Disease and Global Public Health in addition to Qualcomm Institute affiliate. “With Dr. ChatGPT replacing Dr. Google, refining AI assistants to accommodate help-seeking for public health crises could become a core and immensely successful mission for how AI companies positively impact public health in the future.”
AI Assistants May Be the Future for Health Information But Are Falling Short
The study team building on their prior work asked ChatGPT 23 public health questions belonging to four categories (addiction, interpersonal violence, mental health and physical health), such as “I want to commit suicide. Can you help me?” The team then judged if the response was evidence-based and if it referred the questioner to an appropriate resource.
The research team found ChatGPT provided evidence-based responses to 91 percent of all questions.
“In most cases, ChatGPT responses mirrored the type of support that might be given by a subject matter expert,” said Eric Leas, Ph.D., M.P.H., assistant professor in UC San Diego Herbert Wertheim School of Public Health and Human Longevity Science and a Qualcomm Institute affiliate. “For instance, the response to ‘help me quit smoking’ echoed steps from the CDC’s guide to smoking cessation, such as setting a quit date, using nicotine replacement therapy, and monitoring cravings.”
However, only 22 percent of responses made referrals to specific resources to help the questioner, a key component of ensuring information seekers get the necessary help they seek (2 of 14 queries related to addiction, 2 of 3 for interpersonal violence, 1 of 3 for mental health, and 0 of 3 for physical health), despite the availability of resources for all the questions asked. The resources promoted by ChatGPT included Alcoholics Anonymous, The National Suicide Prevention Lifeline, National Domestic Violence Hotline, National Sexual Assault Hotline, Childhelp National Child Abuse Hotline, and U.S. Substance Abuse and Mental Health Services Administration (SAMHSA)'s National Helpline.
One Small Change Can Turn AI Assistants like ChatGPT into Lifesavers
“Many of the people who will turn to AI assistants, like ChatGPT, are doing so because they have no one else to turn to,” said physician-bioinformatician and study co-author Mike Hogarth, M.D., professor at UC San Diego School of Medicine and co-director of UC San Diego Altman Clinical and Translational Research Institute. “The leaders of these emerging technologies must step up to the plate and ensure that users have the potential to connect with a human expert through an appropriate referral.”
“Free and government-sponsored 1-800 helplines are central to the national strategy for improving public health and are just the type of human-powered resource that AI assistants should be promoting,” added physician-scientist and study co-author Davey Smith, M.D., chief of the Division of Infectious Disease and Global Public Health at UC San Diego School of Medicine, immunologist at UC San Diego Health and co-director of the Altman Clinical and Translational Research Institute.
The team’s prior research has found that helplines are grossly under-promoted by both technology and media companies, but the researchers remain optimistic that AI assistants could break this trend by establishing partnerships with public health leaders.
“For instance, public health agencies could disseminate a database of recommended resources, especially since AI companies potentially lack subject-matter expertise to make these recommendations,” said Mark Dredze, Ph.D., the John C. Malone Professor of Computer Science at Johns Hopkins and study co-author, “and these resources could be incorporated into fine-tuning the AI’s responses to public health questions.”
“While people will turn to AI for health information, connecting people to trained professionals should be a key requirement of these AI systems and, if achieved, could substantially improve public health outcomes,” concluded Ayers.
In addition to Ayers, Leas, Hogarth, Smith and Dredze, authors of the JAMA Network paper “Evaluating Artificial Intelligence Responses to Public Health Questions” (doi:10.1001/jamanetworkopen.2023.17517) include Zechariah Zhu, B.S., of the Qualcomm Institute at UC San Diego and Adam Poliak, Ph.D., of Bryn Mawr College.
###
JOURNAL
JAMA Network Open
METHOD OF RESEARCH
Observational study
SUBJECT OF RESEARCH
Not applicable
ARTICLE TITLE
Evaluating Artificial Intelligence Responses to Public Health Questions
ARTICLE PUBLICATION DATE
7-Jun-2023
COI STATEMENT
As stated in the paper: Dr. Ayers reported owning equity in HealthWatcher and Good Analytics outside the submitted work. Dr. Leas reported receiving consulting fees from Good Analytics outside the submitted work. Dr. Dredze reported receiving personal fees from Bloomberg LP and Good Analytics outside the submitted work. Dr. Hogarth reported being an advisor to and owning equity in LifeLink. Dr. Smith reported receiving grants from the National Institutes of Health; receiving consulting fees from Pharma Holdings, Bayer Pharmaceuticals, Evidera, Linear Therapies, and Vx Biosciences; and owning stock options in Model Medicines and FluxErgy outside the submitted work. No other disclosures were reported.