AI perceived more negatively than climate science or science in general
Annenberg Public Policy Center of the University of Pennsylvania
ChatGPT was released to the public in late 2022, and the promise and perils of artificial intelligence (AI) have loomed large in the public consciousness ever since. Because perceptions of a new technology like AI can help shape how the technology is developed and used, it is important to understand what Americans think about AI – how positively or negatively they regard the technology, and what hopes and concerns they have about it.
In a new paper, researchers affiliated with the Annenberg Public Policy Center (APPC) of the University of Pennsylvania explore public perceptions of AI science and scientists, comparing those to perceptions of science and scientists in general, and perceptions of climate science and scientists in particular.
The researchers surveyed an empaneled national probability sample of U.S. adults about how they perceived these different scientific domains in terms of each of the “Factors Assessing Science’s Self-Presentation” (FASS) – a rubric that includes credibility, prudence, unbiasedness, self-correction, and benefit.
They found that people perceived AI scientists more negatively than climate scientists or scientists in general, and that this negativity is driven by concern about AI scientists’ prudence – specifically, the perception that AI science is causing unintended consequences. The researchers also examined whether these negative perceptions might simply reflect AI being new and unfamiliar, but found that public perceptions of AI science and scientists did not significantly improve from 2024 to 2025, even as AI became a more common presence in everyday life.
Perceptions of science are often influenced by political dynamics: Climate science has long suffered from partisan politicization and, after the Covid-19 pandemic, Republicans’ confidence in medical scientists and scientists in general declined. But the researchers found that perceptions of AI are less polarized than perceptions of science and climate science. “Our research suggests that AI has not been politicized in the U.S., at least not yet,” says lead author Dror Walter, an associate professor of digital communication at Georgia State University and an APPC distinguished research fellow.
Walter says that “identifying negative perceptions can help guide messaging about new science,” and that “the public unease about AI’s potential to create unintended consequences invites transparent, well-communicated ongoing assessment of the effectiveness of self- or governmental regulation of AI.”
“Public Perceptions of AI Science and Scientists Relatively More Negative but Less Politicized Than General and Climate Science” was published in PNAS Nexus on June 17, 2025, and co-authored by APPC distinguished research fellows Dror Walter, associate professor of digital communication at Georgia State University, and Yotam Ophir, associate professor of communication at the University at Buffalo, State University of New York; Patrick E. Jamieson, director of APPC’s Annenberg Health and Risk Communication Institute; and Kathleen Hall Jamieson, director of the Annenberg Public Policy Center.
Journal
PNAS Nexus
Method of Research
Survey
Subject of Research
People
Article Title
Public Perceptions of AI Science and Scientists Relatively More Negative but Less Politicized Than General and Climate Science
Article Publication Date
17-Jun-2025
Negative perception of scientists working on AI
A public survey indicates that Americans have negative opinions of scientists who work on AI. Dror Walter and colleagues collected opinions about scientists from thousands of US adults via the Annenberg Science and Public Health survey and compared the perceived credibility, prudence, unbiasedness, self-correction, and benefit of scientists working on AI with those of scientists in general and climate scientists in particular. Previous work has established that high scores on these dimensions predict support for science funding and science-consistent beliefs.

Respondents’ perceptions of scientists working on AI were the most negative of the three, a result driven by low scores on the “prudence” dimension, specifically the perception that AI science is causing unintended consequences. Political leanings and media consumption habits did not predict opinions about scientists working on AI to the same degree that these factors predict opinions about climate scientists, suggesting the field has not been politicized—at least not yet.

Perceptions of scientists working on AI were negative in both the 2024 and 2025 surveys and did not improve over time. The authors interpret this persistence as an indicator that the negativity is unlikely to be solely a moral panic prompted by the novelty of AI. According to the authors, the evident unease with AI science suggests that the public would welcome transparent information about the effectiveness of self- or governmental regulation of the emerging technology.
Journal
PNAS Nexus
Article Title
Public perceptions of AI science and scientists relatively more negative but less politicized than general and climate science
Article Publication Date
17-Jun-2025
Using AI to find persuasive public health messages and automate real-time campaigns
AI can help public health agencies in the quest to end HIV. The United States is pursuing an initiative to end the HIV epidemic by 2030. To achieve this goal, public health agencies and organizations must remind the public about how best to avoid transmitting and acquiring the virus. Public health campaigns are costly, their effectiveness is seldom systematically assessed, and no systematic methods have been developed to build health campaigns in real time.

Dolores Albarracin and colleagues collected public health messages about HIV prevention and testing from US federal agencies, non-profit organizations, and HIV/STI researchers posting on social media. AI was then used to classify those that were actionable, relevant to men who have sex with men, and effective. An online experiment with men who have sex with men and a field experiment with public health agencies showed that the classification model was successful in picking persuasive public health messages. Specifically, posts selected by the AI classifier were six times more likely to be selected for reposting by government and community agencies in US counties than general posts about HIV prevention—and the target audience expressed greater interest in sharing AI-selected posts online. According to the authors, community-based organizations can save time and money by using AI to select publicly available public health messages to repost, allowing the organizations to share messages about prevention and testing more often.
Journal
PNAS Nexus
Article Title
Living health-promotion campaigns for communities in the United States: Decentralized content extraction and sharing through AI
Article Publication Date
17-Jun-2025