Op-Ed: Maybe polarizing social media was an even dumber idea than it looks
By Paul Wallis
November 20, 2024
DIGITAL JOURNAL
Elon Musk's X. — © AFP
The hordes of people leaving X are a sort of statistical obituary to years of propaganda. X is in big trouble, mainly because of its policies and algorithms and a bizarre relationship with infuriated advertisers.
Like most of American media, X instantly demoted itself to half of its own market share with its politics. It also became a servant, like FOX, to one side only. That’s not working out too well.
You’ll also notice from media coverage that nobody’s questioning the self-decapitation and disembowelment of X. The other glaring problem is that somehow this situation is now seen as normal.
It isn’t normal. People and advertisers are voting with their dollars and clicks.
Disgruntled X users are now heading to Bluesky in vast numbers. Millions of people have basically abandoned X. Bluesky looks a bit like early Twitter. Seems quite OK as an environment. I haven’t seen a rabid lunatic yet.
The new and huge problem is the quality of information vs communication. This is now an abyss of opposing information.
Unless there’s some middle ground, and the medium isn’t hopelessly biased, social media has just machine-gunned itself in the foot. Politics is not the sole interest of the world. Other things happen, too, y’know.
That’s where audience loss is likely to be fatal. What if you don’t want to read about The Adventures of Donny and Elon in Disneyland?
What if you want a broad and useful mix of your own interests, instead?
The sole and whole purpose of social media is communication. Reducing your content range to such a narrow focus means you inevitably lose users.
Nor is the Chicken Little approach to information exactly popular. Nobody listens to raving lunatics if they don’t have to.
The possible exception to that theory is screen-fed America. Markets and media businesses take a long time to change course. This market doesn’t eat solid food anymore. The child-psychology is pretty obvious. These sources will probably simply continue to produce pablum.
The total stagnation of American mass media was one thing. This social media situation is stagnation of real-time information as well. X has made itself useless to its users.
People obviously don’t like that. The move to Bluesky is self-defense. The social media market can blame itself for a newcomer just walking off with its customers.
Let’s talk “dumb”.
Billions of dollars of investment are now evaporating in a festering social media environment and those billions aren’t coming back.
The World’s Richest Sudden Instant Fan Boy doesn’t have the excuse of being geriatric or illiterate. He should have enough metrics to see the cliff coming.
The big money markets in the US are all blue. The black holes are all red, with the possible exception of Texas. Ignore those big blue markets at your peril. Lose those markets and you’re doing hillbilly scale dollars. These much smaller markets can’t deliver the same value. Advertisers and marketers know that.
The rest of the world is also reacting very negatively to the toxicity of social media. The hubris and hype are all American-generated. The rest of the world can easily ignore most of this garbage.
What if reality gets involved in this mess? Economic risks and social disintegration are circling like buzzards over America in huge numbers with dollar signs on them. All types of media should be building financial fallout shelters about now. Markets can evaporate overnight.
No amount of fake news hysteria and self-congratulation can make an impression on that situation. You need “Make America Solvent Again” to do that, and it looks like that’s not happening for the next four years.
How would you read a commercial market that is basically suicidal? Would you focus on customer retention in the afterlife? You might have to do that.
Social media needs a functional market to exist at all. If the money’s pulling out, the message couldn’t be clearer.
__________________________________________________
Disclaimer
The opinions expressed in this Op-Ed are those of the author. They do not purport to reflect the opinions or views of the Digital Journal or its members.
Social media users probably won’t read beyond this headline, researchers say
A new study of 35 million news links circulated on Facebook reports that more than 75% of the time they were shared without being clicked on and read
Penn State
UNIVERSITY PARK, Pa. — Congratulations. Reading this far into the story is a feat not many will accomplish, especially if shared on Facebook, according to a team led by Penn State researchers. In an analysis of more than 35 million public posts containing links that were shared extensively on the social media platform between 2017 and 2020, the researchers found that around 75% of the shares were made without the posters clicking the link first. Of these, political content from both ends of the spectrum was shared without clicking more often than politically neutral content.
The findings, which the researchers said suggest that social media users tend to merely read headlines and blurbs rather than fully engage with core content, appeared today (Nov. 19) in Nature Human Behaviour. While the data were limited to Facebook, the researchers said the findings could likely map to other social media platforms and help explain why misinformation can spread so quickly online.
“It was a big surprise to find out that more than 75% of the time, the links shared on Facebook were shared without the user clicking through first,” said corresponding author S. Shyam Sundar, Evan Pugh University Professor and the James P. Jimirro Professor of Media Effects at Penn State. “I had assumed that if someone shared something, they read and thought about it, that they’re supporting or even championing the content. You might expect that maybe a few people would occasionally share content without thinking it through, but for most shares to be like this? That was a surprising, very scary finding.”
Access to the Facebook data was granted via Social Science One, a research consortium hosted by Harvard University’s Institute for Quantitative Social Science focused on obtaining and sharing social and behavioral data responsibly and ethically. The data were provided in collaboration with Meta, Facebook’s parent company, and included user demographics and behaviors, such as a “political page affinity score.” This score was determined by external researchers identifying the pages users follow — like the accounts of media outlets and political figures. The researchers used the political page affinity score to assign users to one of five groups — very liberal, liberal, neutral, conservative and very conservative.
To determine the political content of shared links, the researchers in this study used machine learning, a form of artificial intelligence, to identify and classify political terms in the link content. They scored the content on a similar five-point political affinity scale, from very liberal to very conservative, based on how many times each affinity group shared the link.
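The share-weighted scoring described above can be sketched as follows. This is an illustrative reconstruction, not the study's actual code; the group names and the -2 to +2 numeric scale are assumptions chosen to mirror the five-point affinity groups mentioned in the text.

```python
# Illustrative sketch: score a link's political affinity as the
# share-weighted average of the affinity of the user groups that
# shared it, on a -2 (very liberal) to +2 (very conservative) scale.
# The scale and group labels are assumptions, not from the paper.

GROUP_SCORES = {
    "very_liberal": -2, "liberal": -1, "neutral": 0,
    "conservative": 1, "very_conservative": 2,
}

def link_affinity(shares_by_group):
    """Share-weighted mean affinity; shares_by_group maps group -> share count."""
    total = sum(shares_by_group.values())
    if total == 0:
        return 0.0
    return sum(GROUP_SCORES[g] * n for g, n in shares_by_group.items()) / total

# A link shared mostly by conservative-leaning groups scores right of 0:
score = link_affinity({"liberal": 100, "conservative": 300, "very_conservative": 100})
```

A link shared equally across all groups would score near zero; skew in who shares it pulls the score toward that side of the spectrum.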
“We created this new variable of political affinity of content based on 35 million Facebook posts during election season across four years. This is a meaningful period to understand macro-level patterns behind social media news sharing,” said co-author Eugene Cho Snyder, assistant professor of humanities and social sciences at New Jersey Institute of Technology.
The team validated the political affinity of news domains, such as CNN or Fox, based on the media bias chart produced by AllSides, an independent company focused on helping people understand the biases of news content, and a ratings system developed by researchers at Northeastern University.
With these rating systems, the team manually sorted 8,000 links, first identifying them as political or non-political content. Then the researchers used this dataset to train an algorithm that assessed 35 million links shared more than 100 times on Facebook by users in the United States.
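The train-then-classify setup described above — hand-label a small set, fit a text classifier, then apply it at scale — can be sketched in miniature. The paper does not specify the model, so this uses a toy pure-Python naive Bayes on hypothetical example headlines purely to show the shape of the pipeline.

```python
# Toy stand-in for the study's classifier (model unspecified in the paper):
# a tiny naive Bayes trained on a hypothetical hand-labeled set, then
# applied to unlabeled link text. All example texts are invented.
import math
from collections import Counter

def train_nb(texts, labels):
    """Return per-class word counts, vocabulary, and class document counts."""
    counts = {0: Counter(), 1: Counter()}
    class_n = Counter(labels)
    for text, y in zip(texts, labels):
        counts[y].update(text.split())
    vocab = set(counts[0]) | set(counts[1])
    return counts, vocab, class_n

def classify(text, counts, vocab, class_n):
    """Pick the class maximizing Laplace-smoothed log-likelihood + log prior."""
    best, best_score = None, -math.inf
    total_docs = sum(class_n.values())
    for y in (0, 1):
        total_words = sum(counts[y].values())
        score = math.log(class_n[y] / total_docs)
        for w in text.split():
            score += math.log((counts[y][w] + 1) / (total_words + len(vocab)))
        if score > best_score:
            best, best_score = y, score
    return best

# Hypothetical hand-labeled training set: 1 = political, 0 = non-political.
train_texts = ["senate passes election reform bill",
               "governor vetoes election tax bill",
               "easy weeknight pasta recipes",
               "local team wins championship game"]
train_labels = [1, 1, 0, 0]

model = train_nb(train_texts, train_labels)
label = classify("senate election bill vote", *model)  # classified as political (1)
```

In the study, the same idea scales up: roughly 8,000 manually sorted links train the model, which then labels the 35 million widely shared links automatically.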
“A pattern emerged that was confirmed at the level of individual links,” Snyder said. “The closer the political alignment of the content to the user — both liberal and conservative — the more it was shared without clicks. … They are simply forwarding things that seem on the surface to agree with their political ideology, not realizing that they may sometimes be sharing false information.”
The findings support the theory that many users superficially read news stories based just on headlines and blurbs, Sundar said, explaining that Meta also provided data from its third-party fact-checking service — which identified that 2,969 of the shared URLs linked to false content.
The researchers found that these links were shared over 41 million times without being clicked. Of these, 76.94% came from conservative users and 14.25% from liberal users. The researchers explained that the vast majority — up to 82% — of the links to false information in the dataset originated from conservative news domains.
To cut down on sharing without clicking, Sundar said that social media platforms could introduce “friction” to slow the share, such as requiring people to acknowledge that they have read the full content prior to sharing.
“Superficial processing of headlines and blurbs can be dangerous if false data are being shared and not investigated,” Sundar said, explaining that social media users may feel that content has already been vetted by those in their network sharing it, but this work shows that is unlikely. “If platforms implement a warning that the content might be false and make users acknowledge the danger in doing so, that might help people think before sharing.”
This wouldn’t stop intentional misinformation campaigns, Sundar said, and individuals still have a responsibility to vet the content they share.
“Disinformation or misinformation campaigns aim to sow the seeds of doubt or dissent in a democracy — the scope of these efforts came to light in the 2016 and 2020 elections,” Sundar said. “If people are sharing without clicking, they’re potentially playing into the disinformation and unwittingly contributing to these campaigns staged by hostile adversaries attempting to sow division and distrust.”
So, why do people share without clicking in the first place?
“The reason this happens may be because people are just bombarded with information and are not stopping to think through it,” Sundar said. “In such an environment, misinformation has more of a chance of going viral. Hopefully, people will learn from our study and become more media literate, digitally savvy and, ultimately, more aware of what they are sharing.”
Other collaborators on this paper include Junjun Yin and Guangqing Chi, Penn State; Mengqi Liao, University of Georgia; and Jinping Wang, University of Florida.
The Social Science Research Council, New York, supported this research.
Journal
Nature Human Behaviour
Method of Research
Content analysis
Subject of Research
Not applicable
Article Title
Sharing without clicking on news in social media
Spanish-language social media increases Latinos’ vulnerability to misinformation
University of California - San Diego
A new study shows that Latinos who rely on Spanish-language social media for news are significantly more likely to believe false political narratives than those who consume English-language content. The research – published in PNAS Nexus and led by political scientists at the University of California San Diego and New York University – highlights growing concerns over misinformation targeting Spanish-speaking communities in the United States.
“Latino voters are heavily courted in U.S. elections, and there has been much speculation on the reasons behind their increase in Republican support in the 2024 Presidential contest. Understanding their news and information sources on social media, especially as it pertains to political misinformation, is an important factor to consider,” said Marisa Abrajano, the study’s corresponding author and a professor of political science at UC San Diego. “Our study, which we believe to be the largest of its kind to examine Latinos’ self-reported social media behaviors, finds that Spanish-speaking Latinos who access their news on social media are more vulnerable to political misinformation than those who use English-language social media.”
The research team, convened by NYU’s Center for Social Media and Politics (CSMaP), surveyed more than 1,100 Latino Facebook and Instagram users in the United States. The team offered participants a small monetary incentive to join the study, and included English-dominant, bilingual and Spanish-dominant respondents. The participants were tested on their belief in seven false political narratives, including the claim that Venezuela is intentionally sending criminals to the U.S., the claim that the majority of Planned Parenthood clinics closed after Roe v. Wade was overturned, and the claim that the COVID-19 vaccine makes breast milk dangerous to infants.
The results reveal that Latinos who use Spanish-language social media for their news were 11 to 20 percentage points more likely to believe in these false stories compared to those who rely on English-language platforms. The relationship persisted even when controlling for factors such as the primary language spoken at home, and the findings remained robust even after testing for acquiescence bias, where respondents might agree with survey statements regardless of their truth.
“While there's been widespread concern about the prevalence of Spanish-language misinformation on social media, our study is the first to empirically demonstrate its impact on political knowledge among Latino communities in the United States,” said Jonathan Nagler, co-author of the paper and co-director of NYU's CSMaP. “We've established a crucial link between the consumption of Spanish-language social media and a less informed electorate. This research fills a critical gap in our understanding of how misinformation affects different linguistic communities and highlights the urgent need for more robust fact-checking and content moderation in Spanish-language social media spaces.”
Additional insights on WhatsApp and YouTube
In a related study forthcoming in the journal Political Research Quarterly, Abrajano, Nagler and colleagues show that Latino online political engagement is very similar to that of non-Hispanic whites across major platforms like Facebook, Instagram, YouTube, and X, formerly Twitter.
WhatsApp, however, stands out as a unique space for Latino users, who engage in political conversations on the platform far more often than non-Hispanic whites. Latinos rely on WhatsApp as a daily source for sharing news, discussing politics, and staying updated, highlighting its importance in Latino political digital life.
This study, based on a survey of 2,326 U.S.-based Latinos and 769 non-Hispanic whites, also used digital trace data – information that reflects real online behaviors, such as which social media accounts people follow or what videos they watch. This data helps researchers understand not just what people self-report about their online behaviors but what they actually do.
Findings from the digital trace data showed that both Latinos and whites frequently turn to YouTube for political news, raising concerns about misinformation given YouTube’s challenges with content moderation.
Spanish-speaking Latinos were also found to engage frequently with Spanish-language political pages from Latin America, creating a unique cross-border information environment.
The combined research findings have serious implications for U.S. democracy, the authors conclude. Their work also highlights the need for additional research efforts on how Latino news consumption helps to explain their political attitudes and beliefs.
The research is part of CSMaP's Bilingual Election Monitor, a project supported by Craig Newmark Philanthropies, the John S. and James L. Knight Foundation, and NYU's Office of the Provost and Global Institute for Advanced Study.
In addition to Abrajano and Nagler, co-authors of the PNAS Nexus and PRQ studies are: Marianna Garcia from UC San Diego; Aaron Pope, formerly of CSMaP and now at the University of Copenhagen; Robert Vidigal, formerly of CSMaP and now at Vanderbilt University; and Joshua A. Tucker, co-director of CSMaP.
###
Journal
PNAS Nexus
Method of Research
Survey
Subject of Research
People
Article Title
How reliance on Spanish-language social media predicts beliefs in false political narratives amongst Latinos
Content moderators are influenced by online misinformation
PNAS Nexus
Repeated exposure to lies online may influence the beliefs of professional content moderators, with consequences for online platforms. Hundreds of thousands of content moderators, typically based in non-Western countries, identify and weed out problematic and false content on social platforms. However, constant exposure to misinformation could convince some content moderators that false claims are true, in what is known as the “illusory truth effect.” Hause Lin and colleagues assessed the extent of this effect among professional content moderators in India and the Philippines and explored whether encouraging an accuracy mindset reduces the effect.

The authors asked 199 content moderators to rate 16 COVID-19 news headlines, first on their interestingness and then, after a break, on their accuracy—along with 32 new COVID-19 news headlines. As predicted by the illusory truth effect, headlines seen for the second time were 7.1% more likely to be judged as accurate than non-repeated headlines. However, in a similar experiment in which content moderators were asked to rate accuracy first—thereby encouraging an accuracy mindset—repeated headlines were not rated as more accurate than new headlines. Similar experiments with members of the public in India and the Philippines found similar effects.

According to the authors, the results suggest that the illusory truth effect is not idiosyncratic to Western populations, and that content moderators may become less effective over time due to being chronically exposed to falsehoods, which could compromise the safety and integrity of online platforms. Accuracy mindset prompts could help, the authors note.
Journal
PNAS Nexus
Article Title
Accuracy prompts protect professional content moderators from the illusory truth effect
Article Publication Date
19-Nov-2024
COI Statement
Research by G.P. and D.G.R. has been funded by Meta and Google. TaskUs authors are employees of TaskUs. M.S. is an employee of TikTok and was a former employee of TaskUs. D.S. is an employee of Google. G.P. was a Faculty Research Fellow at Google in 2022. D.G.R. is on the PNAS Nexus editorial board.