Friday, July 28, 2023

Special package: Social media and elections

Peer-Reviewed Publication

AMERICAN ASSOCIATION FOR THE ADVANCEMENT OF SCIENCE (AAAS)
Taking Facebook and Instagram users off the platforms’ default algorithmic feeds – which are designed to optimize the order in which content is presented – reduced the time users spent on the platforms, a new study by Andrew M. Guess et al. shows, but it did not change users’ political attitudes, knowledge, or offline behaviors. The study is one of three in this issue involving collaboration between Meta and academic researchers to study social media’s impact on elections. The package is accompanied by a Policy Forum by the project’s independent rapporteur.

The algorithms used by social media companies are largely opaque to users. As a result, there are various “folk theories” about how they work and disagreements about their effects. “The notion that such algorithms create political ‘filter bubbles’, foster polarization, exacerbate existing social inequalities, and enable the spread of disinformation has become rooted in the public consciousness,” write Guess and colleagues. Here, Guess and team sought to examine the effect of algorithmic feed-ranking systems on individuals’ political attitudes and behaviors related to the 2020 U.S. presidential election. Participants were recruited through survey invitations placed at the top of their Facebook and Instagram feeds in August 2020. In the three-month analysis that followed, the researchers compared participants in a control condition (in which no changes were made to the status quo algorithmic feeds) to participants in a treatment condition (in which participants saw chronological feeds, with the most recent posts appearing on top). The treatment group spent less time on the platforms, engaged less with content, and was exposed to more ideologically diverse content. However, this did not lead to detectable changes in downstream political attitudes, knowledge, or offline behavior, including survey-based measures of polarization and political participation. The authors offer several possible explanations for the disconnect between the large changes in online behavior caused by their treatment and the few discernible changes in political attitudes, including the length of the intervention period and the context in which the study was conducted (a highly polarized election campaign during which political conversations were occurring at relatively high frequencies).

In a second study in the same issue, a team also led by Guess examined the impact of reshared Facebook content – which constitutes more than one-fourth of the posts Facebook users see – on exposure to political news during the 2020 U.S. election. They report that suppressing reshared content substantially decreased the amount of political news – including news from untrustworthy sources – to which users were exposed but did not affect political opinions. Resharing content is a core feature of Facebook, and prior research suggests that reshared content may increase both political polarization and political knowledge. Guess et al. undertook a randomized controlled experiment during the 2020 U.S. presidential election to examine the effects of withholding reshared content. They compared a control group, for whom no changes were made to Facebook feeds, to a treatment group, for whom reshared content was removed from feeds. Removing reshared content decreased users’ clicks on partisan news links, the proportion of political news they saw, and their exposure to untrustworthy content. Despite these changes, however, the authors were not able to reliably detect shifts in users’ political attitudes or behaviors, with the exception of a decrease in news knowledge within the sample. “[T]hough reshares may have been a powerful mechanism for directing users’ attention and behavior on Facebook during the 2020 election campaign,” the authors conclude, “they had limited impact on politically relevant attitudes and offline behaviors.”

In a third study, Sandra González-Bailón and colleagues tackled the question of whether Facebook enables ideological segregation in political news consumption. They report that politically conservative users are much more segregated and encounter far more misinformation on the platform. “Our analyses highlight that Facebook… is substantially segregated ideologically—far more than previous research on internet news consumption based on browsing behavior has found,” write González-Bailón et al. People form beliefs from the political news they encounter, and there has been a vigorous debate about the role of the internet in shaping the policy-relevant information people see, with some suggesting that platforms like Facebook may facilitate “ideological segregation.” Using a sample of 208 million U.S. Facebook users during the 2020 U.S. presidential election, González-Bailón et al. examined the flow of political content at three stages: all content users could potentially see; content they actually did see on their feeds (selectively curated by Facebook's algorithms); and content they engaged with through clicks, reshares, or other reactions. The authors found that, compared with liberal users, politically conservative users were far more siloed in their news sources and were exposed to much more misinformation. Both algorithmic processes (data-driven and automated) and social amplification processes (related to choices made by users) played a part in this ideological segregation, which surfaced primarily in Facebook's Pages and Groups – areas policymakers may target to combat misinformation – rather than in content posted by friends. An important direction for further research, the authors say, is to understand how individuals discover and decide to follow or join Pages and Groups.

In a related Policy Forum, Mike Wagner, who was asked to serve as the project’s independent rapporteur in order to “increase the confidence that the public and the scholarly community would have in the project,” discusses results from these studies – the first publicly shared outcomes of the U.S. 2020 Election Project. Wagner, in this role, was charged with commenting on the nature of the collaboration between Meta and the outside academics and on whether the project might serve as a model for future industry–academy collaborations. He concludes that the team conducted “rigorous, carefully checked, transparent, ethical, and path-breaking studies.” He cites several strengths of the research approach, including the outside academics’ “control rights” over the papers – meaning that in the event of a dispute between the outside academics and the Meta researchers over how to interpret a finding or what to include in a research paper, the lead author – always an outside academic – would have the final decision. But although he finds the work trustworthy, he goes on to write, “I argue that the project is not a model for future industry–academy collaborations,” for various reasons. “Creative social media–academy-funder partnerships, or, more likely, government regulation and data-sharing requirements…that also provide privacy protections, as well as defined structures to encourage and protect industry-employed researchers to collaborate, are necessary to foster opportunities for path-breaking, comprehensive scholarship that does not require a social media platform’s permission,” he writes.

The package is accompanied by an introduction by Science Associate Editor Ekeoma Uzogara and an editorial by Science Editor-in-Chief Holden Thorp.

*** A related embargoed news briefing was held at 11:00 a.m. U.S. ET on Tuesday, 25 July, as a Zoom Webinar. Recordings are available here. Questions about Meta policy should be directed to press@meta.com.
