UNIVERSITY PARK, Pa. — Congratulations. Reading this far into the story is a feat not many will accomplish, especially if it was shared on Facebook, according to a team led by Penn State researchers. In an analysis of more than 35 million public posts containing links that were shared extensively on the social media platform between 2017 and 2020, the researchers found that around 75% of the shares were made without the posters clicking the link first. Political content from both ends of the spectrum, moreover, was shared without clicking more often than politically neutral content.
The findings, which the researchers said suggest that social media users tend to merely read headlines and blurbs rather than fully engage with core content, appeared today (Nov. 19) in Nature Human Behaviour. While the data were limited to Facebook, the researchers said the findings likely apply to other social media platforms and help explain why misinformation can spread so quickly online.
“It was a big surprise to find out that more than 75% of the time, the links shared on Facebook were shared without the user clicking through first,” said corresponding author S. Shyam Sundar, Evan Pugh University Professor and the James P. Jimirro Professor of Media Effects at Penn State. “I had assumed that if someone shared something, they read and thought about it, that they’re supporting or even championing the content. You might expect that maybe a few people would occasionally share content without thinking it through, but for most shares to be like this? That was a surprising, very scary finding.”
Access to the Facebook data was granted via Social Science One, a research consortium hosted by Harvard University’s Institute for Quantitative Social Science focused on obtaining and sharing social and behavioral data responsibly and ethically. The data were provided in collaboration with Meta, Facebook’s parent company, and included user demographics and behaviors, such as a “political page affinity score.” This score was determined by external researchers identifying the pages users follow — like the accounts of media outlets and political figures. The researchers used the political page affinity score to assign users to one of five groups — very liberal, liberal, neutral, conservative and very conservative.
To determine the political content of shared links, the researchers in this study used machine learning, a form of artificial intelligence, to identify and classify political terms in the link content. They scored the content on a similar five-point political affinity scale, from very liberal to very conservative, based on how many times each affinity group shared the link.
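The five-point scoring described above can be illustrated as a share-weighted average across the affinity groups. This is a hypothetical sketch only; the function name, group scores and exact formula are assumptions for illustration, not the study's actual method.

```python
# Hypothetical sketch: score a link's political affinity on a scale from
# -2 (very liberal) to +2 (very conservative) as the mean of the affinity
# groups that shared it, weighted by share counts. The study's exact
# formula may differ.

GROUP_SCORES = {
    "very_liberal": -2, "liberal": -1, "neutral": 0,
    "conservative": 1, "very_conservative": 2,
}

def content_affinity(shares_by_group):
    """Weighted mean of group scores, weighted by how often each group shared."""
    total = sum(shares_by_group.values())
    if total == 0:
        return 0.0
    return sum(GROUP_SCORES[g] * n for g, n in shares_by_group.items()) / total

# A link shared mostly by conservative users scores to the right of center.
print(content_affinity({"very_liberal": 10, "liberal": 20, "neutral": 30,
                        "conservative": 200, "very_conservative": 140}))
# prints 1.1
```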
“We created this new variable of political affinity of content based on 35 million Facebook posts during election season across four years. This is a meaningful period for understanding macro-level patterns behind social media news sharing,” said co-author Eugene Cho Snyder, assistant professor of humanities and social sciences at the New Jersey Institute of Technology.
The team validated the political affinity of news domains, such as CNN or Fox, based on the media bias chart produced by AllSides, an independent company focused on helping people understand the biases of news content, and a ratings system developed by researchers at Northeastern University.
With these rating systems, the team manually sorted 8,000 links, first identifying them as political or non-political content. Then the researchers used this dataset to train an algorithm that assessed 35 million links shared more than 100 times on Facebook by users in the United States.
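The train-then-scale-up step above — hand-label a small set of links, fit a text classifier, then apply it to the full 35 million — can be sketched with a toy model. The study does not specify its algorithm or features; this multinomial naive Bayes over headline words is an illustrative stand-in, with made-up example headlines.

```python
# Toy stand-in for the classification step: hand-labeled links train a text
# classifier that is then applied to the rest. The study's actual model and
# features are not specified; this naive Bayes sketch is for illustration.
import math
from collections import Counter, defaultdict

def train(examples):
    """examples: list of (text, label). Returns (label counts, word counts, vocab)."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    vocab = {w for c in word_counts.values() for w in c}
    return label_counts, word_counts, vocab

def classify(model, text):
    """Pick the label with the highest smoothed log-probability for the text."""
    label_counts, word_counts, vocab = model
    total = sum(label_counts.values())
    best, best_lp = None, float("-inf")
    for label, n in label_counts.items():
        lp = math.log(n / total)  # class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.lower().split():
            lp += math.log((word_counts[label][w] + 1) / denom)  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = train([
    ("senate passes election reform bill", "political"),
    ("governor signs new voting law", "political"),
    ("local team wins championship game", "non-political"),
    ("new recipe for summer salad", "non-political"),
])
print(classify(model, "congress debates voting bill"))  # prints "political"
```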
“A pattern emerged that was confirmed at the level of individual links,” Snyder said. “The closer the political alignment of the content to the user — both liberal and conservative — the more it was shared without clicks. … They are simply forwarding things that seem on the surface to agree with their political ideology, not realizing that they may sometimes be sharing false information.”
The findings support the theory that many users superficially read news stories based just on headlines and blurbs, Sundar said, explaining that Meta also provided data from its third-party fact-checking service — which identified that 2,969 of the shared URLs linked to false content.
The researchers found that these links to false content were shared over 41 million times without being clicked. Of these shares, 76.94% came from conservative users and 14.25% from liberal users. The researchers explained that the vast majority — up to 82% — of the links to false information in the dataset originated from conservative news domains.
To cut down on sharing without clicking, Sundar said that social media platforms could introduce “friction” to slow the share, such as requiring people to acknowledge that they have read the full content prior to sharing.
“Superficial processing of headlines and blurbs can be dangerous if false data are being shared and not investigated,” Sundar said, explaining that social media users may feel that content has already been vetted by those in their network sharing it, but this work shows that is unlikely. “If platforms implement a warning that the content might be false and make users acknowledge the danger in doing so, that might help people think before sharing.”
This wouldn’t stop intentional misinformation campaigns, Sundar said, and individuals still have a responsibility to vet the content they share.
“Disinformation or misinformation campaigns aim to sow the seeds of doubt or dissent in a democracy — the scope of these efforts came to light in the 2016 and 2020 elections,” Sundar said. “If people are sharing without clicking, they’re potentially playing into the disinformation and unwittingly contributing to these campaigns staged by hostile adversaries attempting to sow division and distrust.”
So, why do people share without clicking in the first place?
“The reason this happens may be because people are just bombarded with information and are not stopping to think through it,” Sundar said. “In such an environment, misinformation has more of a chance of going viral. Hopefully, people will learn from our study and become more media literate, digitally savvy and, ultimately, more aware of what they are sharing.”
Other collaborators on this paper include Junjun Yin and Guangqing Chi, Penn State; Mengqi Liao, University of Georgia; and Jinping Wang, University of Florida.
The Social Science Research Council, New York, supported this research.
Journal
Nature Human Behaviour
Method of Research
Content analysis
Subject of Research
Not applicable
Article Title
Sharing without clicking on news in social media
###
A new study shows that Latinos who rely on Spanish-language social media for news are significantly more likely to believe false political narratives than those who consume English-language content. The research – published in PNAS Nexus and led by political scientists at the University of California San Diego and New York University – highlights growing concerns over misinformation targeting Spanish-speaking communities in the United States.
“Latino voters are heavily courted in U.S. elections, and there has been much speculation on the reasons behind their increase in Republican support in the 2024 Presidential contest. Understanding their news and information sources on social media, especially as it pertains to political misinformation, is an important factor to consider,” said Marisa Abrajano, the study’s corresponding author and a professor of political science at UC San Diego. “Our study, which we believe to be the largest of its kind to examine Latinos’ self-reported social media behaviors, finds that Spanish-speaking Latinos who access their news on social media are more vulnerable to political misinformation than those who use English-language social media.”
The research team, convened by NYU’s Center for Social Media and Politics (CSMaP), surveyed more than 1,100 Latino Facebook and Instagram users in the United States, offering participants a small monetary incentive to join the study; the sample included English-dominant, bilingual and Spanish-dominant respondents. The participants were tested on their belief in seven false political narratives, including the claims that Venezuela is intentionally sending criminals to the U.S., that the majority of Planned Parenthood clinics closed after Roe v. Wade was overturned, and that the COVID-19 vaccine makes breast milk dangerous to infants.
The results reveal that Latinos who use Spanish-language social media for their news were 11 to 20 percentage points more likely to believe these false stories than those who rely on English-language platforms. The relationship persisted even when controlling for factors such as the primary language spoken at home, and the findings remained robust after testing for acquiescence bias, the tendency of respondents to agree with survey statements regardless of their truth.
“While there’s been widespread concern about the prevalence of Spanish-language misinformation on social media, our study is the first to empirically demonstrate its impact on political knowledge among Latino communities in the United States,” said Jonathan Nagler, co-author of the paper and co-director of NYU’s CSMaP. “We’ve established a crucial link between the consumption of Spanish-language social media and a less informed electorate. This research fills a critical gap in our understanding of how misinformation affects different linguistic communities and highlights the urgent need for more robust fact-checking and content moderation in Spanish-language social media spaces.”
Additional insights on WhatsApp and YouTube
In a related study forthcoming in the journal Political Research Quarterly, Abrajano, Nagler and colleagues show that Latino online political engagement is very similar to that of non-Hispanic whites across major platforms like Facebook, Instagram, YouTube, and X, formerly Twitter.
WhatsApp, however, stands out as a unique space for Latino users, who engage in political conversations on the platform far more often than non-Hispanic whites. Latinos rely on WhatsApp as a daily source for sharing news, discussing politics, and staying updated, highlighting its importance in Latino political digital life.
This study, based on a survey of 2,326 U.S.-based Latinos and 769 non-Hispanic whites, also used digital trace data – information that reflects real online behaviors, such as which social media accounts people follow or what videos they watch. This data helps researchers understand not just what people self-report about their online behaviors but what they actually do.
Findings from the digital trace data showed that both Latinos and whites frequently turn to YouTube for political news, raising concerns about misinformation given YouTube’s challenges with content moderation.
Spanish-speaking Latinos were also found to engage frequently with Spanish-language political pages from Latin America, creating a unique cross-border information environment.
The combined research findings have serious implications for U.S. democracy, the authors conclude. Their work also highlights the need for additional research efforts on how Latino news consumption helps to explain their political attitudes and beliefs.
The research is part of CSMaP's Bilingual Election Monitor, a project supported by Craig Newmark Philanthropies, the John S. and James L. Knight Foundation, and NYU's Office of the Provost and Global Institute for Advanced Study.
In addition to Abrajano and Nagler, co-authors of the PNAS Nexus and PRQ studies are: Marianna Garcia from UC San Diego; Aaron Pope, formerly of CSMaP and now at the University of Copenhagen; Robert Vidigal, formerly of CSMaP and now at Vanderbilt University; and Joshua A. Tucker, co-director of CSMaP.
###
Subject of Research
People
Article Title
How reliance on Spanish-language social media predicts beliefs in false political narratives amongst Latinos
###
Content moderators are influenced by online misinformation
PNAS Nexus
Repeated exposure to lies online may influence the beliefs of professional content moderators, with consequences for online platforms. Hundreds of thousands of content moderators, typically based in non-Western countries, identify and weed out problematic and false content on social platforms. However, constant exposure to misinformation could convince some content moderators that false claims are true, in what is known as the “illusory truth effect.” Hause Lin and colleagues assessed the extent of this effect among professional content moderators in India and the Philippines and explored whether encouraging an accuracy mindset reduces the effect. The authors asked 199 content moderators to rate 16 COVID-19 news headlines, first on their interestingness and then, after a break, on their accuracy—along with 32 new COVID-19 news headlines. As predicted by the illusory truth effect, headlines seen for the second time were 7.1% more likely to be judged as accurate than non-repeated headlines. However, in a similar experiment in which content moderators were asked to rate accuracy first—thereby encouraging an accuracy mindset—repeated headlines were not rated as more accurate than new headlines. Parallel experiments with members of the public in India and the Philippines yielded similar results. According to the authors, the findings suggest that the illusory truth effect is not idiosyncratic to Western populations and that content moderators may become less effective over time as they are chronically exposed to falsehoods, which could compromise the safety and integrity of online platforms. Accuracy mindset prompts could help, the authors note.
Article Title
Accuracy prompts protect professional content moderators from the illusory truth effect
Article Publication Date
19-Nov-2024
COI Statement
Research by G.P. and D.G.R. has been funded by Meta and Google. TaskUs authors are employees of TaskUs. M.S. is an employee of TikTok and a former employee of TaskUs. D.S. is an employee of Google. G.P. was a Faculty Research Fellow at Google in 2022. D.G.R. is on the PNAS Nexus editorial board.