Friday, September 02, 2022

YouTube more likely to direct election-fraud videos to users already skeptical about 2020 election’s legitimacy

New study shows how site’s algorithms perpetuate existing misperceptions

Peer-Reviewed Publication

NEW YORK UNIVERSITY

YouTube was more likely to recommend videos about election fraud to users who were already skeptical about the legitimacy of the 2020 U.S. presidential election, shows a new study examining the impact of the site’s algorithms. 

The results of the research, published in the Journal of Online Trust and Safety, showed that the participants most skeptical of the election’s legitimacy were shown three times as many election-fraud-related videos as the least skeptical participants (roughly eight additional recommendations out of the approximately 400 videos suggested to each study participant). 

While the overall prevalence of these videos was low, the findings expose the consequences of a recommendation system that serves users the content they want. For those most concerned about possible election fraud, showing them related content creates a mechanism by which misinformation, disinformation, and conspiracies can find their way to the people most likely to believe them, the study’s authors observe. Importantly, these patterns reflect the independent influence of the algorithm on what real users are shown while using the platform.

“Our findings uncover the detrimental consequences of recommendation algorithms and cast doubt on the view that online information environments are solely determined by user choice,” says James Bisbee, who led the study as a postdoctoral researcher at New York University’s Center for Social Media and Politics (CSMaP).

Nearly two years after the 2020 presidential election, large numbers of Americans, particularly Republicans, don’t believe in the legitimacy of the outcome. 

“Roughly 70% of Republicans don’t see Biden as the legitimate winner,” despite “multiple recounts and audits that confirmed Joe Biden’s win,” the Poynter Institute’s PolitiFact wrote earlier this year.

While it’s well known that social media platforms, such as YouTube, direct content to users based on their search preferences, the consequences of this dynamic may not be fully understood. 

In the CSMaP study, the researchers sampled more than 300 Americans with YouTube accounts in November and December of 2020. Participants were asked how concerned they were about several forms of election fraud, including fraudulent ballots being counted, valid ballots being discarded, foreign governments interfering, and non-U.S. citizens voting, among other concerns. 
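To make the measurement concrete, here is a minimal sketch, in Python, of how such survey responses might be aggregated into a single skepticism score. The item names and the 1-to-5 concern scale are illustrative assumptions, not the paper's actual coding.

```python
# A minimal sketch of aggregating fraud-concern survey items into one
# skepticism index. Item names and the 1-5 Likert scale are assumptions
# for illustration; the study's actual coding may differ.

FRAUD_CONCERN_ITEMS = [
    "fraudulent_ballots_counted",
    "valid_ballots_discarded",
    "foreign_interference",
    "noncitizen_voting",
]

def skepticism_index(responses: dict[str, int]) -> float:
    """Average the 1-5 concern ratings into a single score."""
    return sum(responses[item] for item in FRAUD_CONCERN_ITEMS) / len(FRAUD_CONCERN_ITEMS)

# Example: a highly concerned respondent
print(skepticism_index({
    "fraudulent_ballots_counted": 5,
    "valid_ballots_discarded": 4,
    "foreign_interference": 5,
    "noncitizen_voting": 4,
}))  # 4.5
```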

These participants were then asked to install a browser extension that recorded the list of recommendations they were shown. They were then instructed to click on a randomly assigned YouTube video (the “seed” video), and then on one of the resulting recommendations, chosen according to a randomly assigned “traversal rule.” For example, users assigned to the “second traversal rule” were required to always click on the second video in the list of recommendations, regardless of its content. By restricting user behavior in these ways, the researchers were able to isolate the recommendation algorithm’s influence on what real users were shown in real time. 
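The traversal-rule procedure can be illustrated with a short sketch. Everything here is hypothetical scaffolding: the study collected recommendations through a browser extension, not through the stand-in `get_recommendations` function used below.

```python
# A minimal sketch of the "traversal rule" described above: starting from a
# seed video, always follow the recommendation at one fixed position.
# The get_recommendations callable is a hypothetical stand-in for whatever
# source supplies each video's recommendation list.

def traverse(seed_video_id: str, rule_position: int, depth: int,
             get_recommendations) -> list[str]:
    """Follow the recommendation at `rule_position` for `depth` steps,
    recording the video watched at each step."""
    path = [seed_video_id]
    current = seed_video_id
    for _ in range(depth):
        recs = get_recommendations(current)  # list of recommended video IDs
        if len(recs) <= rule_position:
            break                            # fewer recommendations than expected
        current = recs[rule_position]        # e.g., always the 2nd video (index 1)
        path.append(current)
    return path

# Toy usage with a stub recommender (three fixed recommendations per video):
fake_recs = lambda vid: [f"{vid}-rec{i}" for i in range(3)]
print(traverse("seed", rule_position=1, depth=2, get_recommendations=fake_recs))
# ['seed', 'seed-rec1', 'seed-rec1-rec1']
```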

The subjects then proceeded through a sequence of recommended videos, allowing the researchers to observe what the YouTube algorithm suggested to its users. Bisbee and his colleagues then compared the number of recommended videos about fraud in the 2020 U.S. presidential election received by participants who were more skeptical of the election’s legitimacy with the number received by less skeptical participants. Election skeptics were recommended an average of eight additional videos about possible fraud in the 2020 U.S. election relative to non-skeptical participants (12 vs. 4).
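The headline comparison reduces to a simple difference in group means, sketched below with toy data that mirrors the reported averages (12 vs. 4 fraud-related videos per participant). The data structure and the binary "skeptical" label are illustrative assumptions, not the paper's actual model.

```python
# A minimal sketch of the headline comparison: count election-fraud
# recommendations per participant, then compare group means.

from statistics import mean

def fraud_rec_gap(participants: list[dict]) -> float:
    """Difference in mean fraud-video counts between skeptics and non-skeptics."""
    skeptics = [p["fraud_recs"] for p in participants if p["skeptical"]]
    others = [p["fraud_recs"] for p in participants if not p["skeptical"]]
    return mean(skeptics) - mean(others)

# Toy data mirroring the reported averages (12 vs. 4 fraud-related videos):
sample = [
    {"skeptical": True, "fraud_recs": 12},
    {"skeptical": True, "fraud_recs": 12},
    {"skeptical": False, "fraud_recs": 4},
    {"skeptical": False, "fraud_recs": 4},
]
print(fraud_rec_gap(sample))  # 8.0 -- the "eight additional videos" gap
```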

“Many believe that automated recommendation algorithms have little influence on online ‘echo chambers’ in which users only see content that reaffirms their preexisting views,” observes Bisbee, now an assistant professor at Vanderbilt University. “Our study, however, suggests that YouTube’s recommendation algorithm was able to determine which users were more likely to be concerned about fraud in the 2020 U.S. presidential election and then suggested up to three times as many videos about election fraud to these users compared to those less concerned about election fraud. This highlights the need for further investigation into how opaque recommendation algorithms operate on an issue-by-issue basis.”

The paper’s other authors were Joshua A. Tucker and Jonathan Nagler, professors in NYU’s Department of Politics; Richard Bonneau, a professor in NYU’s Department of Biology and Courant Institute of Mathematical Sciences; Megan A. Brown, the senior research engineer at CSMaP; and Angela Lai, an NYU doctoral student. Tucker and Nagler are co-directors of CSMaP.

# # #

 
