Thursday, November 27, 2025

Platform-independent experiment shows tweaking X’s feed can alter political attitudes

Summary author: Walter Beckwith

American Association for the Advancement of Science (AAAS)

A new experiment using an AI-powered browser extension to reorder feeds on X (formerly Twitter), conducted independently of the platform’s own algorithm, shows that even small changes in exposure to hostile political content can measurably influence feelings toward opposing political parties – within days of exposure. The findings provide direct causal evidence of the impact of algorithmically controlled post ranking on a user’s social media feed.

Social media has become an important source of political information for many people worldwide. However, platform algorithms exert a powerful influence on what users encounter, subtly steering thoughts, emotions, and behaviors in poorly understood ways. Although many explanations for how these ranking algorithms affect us have been proposed, testing these theories has proven exceptionally difficult: platform operators alone control how their proprietary algorithms behave, and only they can experiment with different feed designs and evaluate their causal effects.

To sidestep these challenges, Tiziano Piccardi and colleagues developed a novel method that lets researchers reorder people’s social media feeds in real time as they browse, without permission from the platforms themselves. Piccardi et al. created a lightweight, non-intrusive browser extension, much like an ad blocker, that intercepts and reshapes X’s web feed in real time, using large language model-based classifiers to evaluate and reorder posts based on their content. This tool allowed the authors to systematically identify and vary how content expressing antidemocratic attitudes and partisan animosity (AAPA) appeared in a user’s feed, and to observe the effects under controlled experimental conditions.
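The release describes the pipeline only at a high level. As a rough illustration of the core reranking step (all names below are hypothetical, and the real system uses LLM classifiers rather than the keyword stub shown here), each post is scored for AAPA content and the feed is stable-sorted so higher-scoring posts sink, without removing anything:

```python
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    text: str


def aapa_score(post: Post) -> float:
    """Placeholder for the study's LLM-based classifier: return a score in
    [0, 1] estimating how strongly a post expresses antidemocratic attitudes
    or partisan animosity (AAPA). A real system would query an LLM; this
    stub just flags a few hostile phrases for illustration."""
    hostile_markers = ("jail them", "destroy the", "enemies of")
    text = post.text.lower()
    return 1.0 if any(marker in text for marker in hostile_markers) else 0.0


def downrank_aapa(feed: list[Post]) -> list[Post]:
    """Reorder (never remove) posts: a stable sort pushes high-AAPA posts
    lower while preserving the platform's original order among ties."""
    return sorted(feed, key=aapa_score)


feed = [
    Post("1", "Jail them all, the opposing party are enemies of the people"),
    Post("2", "New poll on the infrastructure bill"),
    Post("3", "Great turnout at the town hall today"),
]
reranked = downrank_aapa(feed)
# Hostile post "1" drops to the bottom; posts "2" and "3" keep their order.
```

A stable sort matters here: among posts the classifier treats identically, the platform’s original ranking is preserved, so the intervention changes only the dimension being studied.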

 

In a 10-day field experiment on X involving 1,256 participants, conducted during a volatile stretch of the 2024 U.S. presidential campaign, individuals were randomly assigned to feeds with heightened, reduced, or unchanged levels of AAPA content. Piccardi et al. found that, relative to the control group, reducing exposure to AAPA content made people feel warmer toward the opposing political party, shifting the baseline by more than 2 points on a 100-point scale; increasing exposure produced a comparable shift toward colder feelings. According to the authors, the observed effects are substantial, roughly comparable to three years’ worth of change in affective polarization, though it remains unknown whether they persist over time. What’s more, these shifts did not appear to fall disproportionately on any particular group of users. They also extended to emotional experience: participants reported changes in anger and sadness through brief in-feed surveys, demonstrating that algorithmically mediated exposure to political hostility can shape both affective polarization and moment-to-moment emotional responses during platform use.

 

“One study – or set of studies – will never be the final word on how social media affects political attitudes. What is true of Facebook might not be true of TikTok, and what was true of Twitter 4 years ago might not be relevant to X today,” write Jennifer Allen and Joshua Tucker in a related Perspective. “The way forward is to embrace creative research and to build methodologies that adapt to the current moment. Piccardi et al. present a viable tool for doing that.”

Social media research tool can lower political temperature. It could also lead to more user control over algorithms.

Stanford University

A new tool shows it is possible to turn down the partisan rancor in an X feed – without removing political posts and without the direct cooperation of the platform. 

The Stanford-led research, published in Science, also indicates that it may one day be possible to let users take control of their own social media algorithms.

A multidisciplinary team created a seamless, web-based tool that reorders content to move posts lower in a user’s feed when they contain antidemocratic attitudes and partisan animosity, such as advocating for violence or jailing supporters of the opposing party.

In an experiment using the tool with about 1,200 participants over 10 days during the 2024 election, those who had antidemocratic content downranked showed more positive views of the opposing party. The effect was also bipartisan, holding true for people who identified as liberals or conservatives. 

“Social media algorithms directly impact our lives, but until now, only the platforms had the ability to understand and shape them,” said Michael Bernstein, a professor of computer science in Stanford’s School of Engineering and the study’s senior author. “We have demonstrated an approach that lets researchers and end users have that power.” 

The tool could also open ways to create interventions that not only mitigate partisan animosity, but also promote greater social trust and healthier democratic discourse across party lines, added Bernstein, who is also a senior fellow at the Stanford Institute for Human-Centered Artificial Intelligence.

For this study, the team drew from previous sociology research from Stanford, identifying categories of antidemocratic attitudes and partisan animosity that can be threats to democracy. In addition to advocating for extreme measures against the opposing party, these attitudes include statements that show rejection of any bipartisan cooperation, skepticism of facts that favor the other party’s views, and a willingness to forgo democratic principles to help the favored party.

Preventing emotional hijacking

There is often an immediate, unavoidable emotional response to seeing this kind of content, said study co-author Jeanne Tsai.

“This polarizing content can just hijack their attention by making people feel bad the moment they see it,” said Tsai, a professor of psychology in the Stanford School of Humanities and Sciences.

The study brought together researchers from the University of Washington and Northeastern University, as well as Stanford, to tackle the problem from a range of disciplines, including computer science, psychology, information science, and communication.

The study’s first author, Tiziano Piccardi, a former postdoctoral fellow in Bernstein’s lab, created a browser extension coupled with a large language model that scans posts for these types of antidemocratic and extreme negative partisan sentiments. The tool then reorders the posts in a user’s X feed within seconds.

In separate experiments, the researchers then had groups of participants, who consented to have their feeds modified, browse X with this type of content downranked or upranked over 10 days, and compared their reactions to those of a control group. No posts were removed; the more incendiary political posts simply appeared lower or higher in participants’ content streams.

The impact on polarization was clear, said Piccardi, who is now an assistant professor of computer science at Johns Hopkins University. 

“When the participants were exposed to less of this content, they felt warmer toward the people of the opposing party,” he said. “When they were exposed to more, they felt colder.” 

Small change with a potentially big impact

Before and after the experiment, the researchers surveyed participants on their feelings toward the opposing party on a scale of 1 to 100. Among the participants who had the negative content downranked, their attitudes improved on average by two points – equivalent to the estimated change in attitudes that has occurred among the general U.S. population over a period of three years.
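The release reports only the raw averages. One standard way to express such a pre/post comparison against a control group is a difference-in-differences; the sketch below uses made-up illustrative numbers, not the study’s data:

```python
def mean(xs: list[float]) -> float:
    """Arithmetic mean of a non-empty list."""
    return sum(xs) / len(xs)


def treatment_effect(pre_t: list[float], post_t: list[float],
                     pre_c: list[float], post_c: list[float]) -> float:
    """Difference-in-differences on 0-100 feeling-thermometer ratings:
    (change in the treated group) minus (change in the control group)."""
    return (mean(post_t) - mean(pre_t)) - (mean(post_c) - mean(pre_c))


# Illustrative numbers only: the treated group warms by ~2.5 points
# while the control group drifts by ~0.5, for a net effect of ~2 points.
pre_treated = [40.0, 55.0, 30.0]
post_treated = [42.5, 57.5, 32.5]
pre_control = [45.0, 50.0, 35.0]
post_control = [45.5, 50.5, 35.5]

effect = treatment_effect(pre_treated, post_treated, pre_control, post_control)
# effect is approximately 2.0
```

Subtracting the control group’s change filters out drift that would have happened anyway (for example, campaign-season news) and isolates the shift attributable to the feed intervention.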

Previous studies on social media interventions to mitigate this kind of polarization have shown mixed results. Those interventions have also been rather blunt instruments, the researchers said, such as ranking posts chronologically or stopping social media use altogether.

This study shows that a more nuanced approach is possible and effective, Piccardi said. It can also give people more control over what they see, which might improve their social media experience overall: downranking this content decreased not only participants’ polarization but also their feelings of anger and sadness.

The researchers are now looking into other interventions using a similar method, including ones that aim to improve mental health. The team has also made the code of the current tool available, so other researchers and developers can use it to create their own ranking systems independent of a social media platform’s algorithm.

Media, sentiment, power: New study on discrimination by public authorities

University of Konstanz

In recent years, right-wing populist parties have achieved significant political success across nearly all Western democracies. With their growing political establishment, xenophobic attitudes have become normalized. While previous studies have primarily examined the effects of this development on voting behaviour, little is known about its wider social consequences. A new study by the Cluster of Excellence “The Politics of Inequality” at the University of Konstanz has therefore investigated how this normalization affects administrative practice in German job centres – in other words, concrete state decision-making on essential social benefits intended to ensure an adequate standard of living. The focus is on negative media coverage of people with a migration background and its potentially reinforcing influence on group-specific discrimination.

 

In an experiment, 1,400 case workers from 60 German job centres were shown fictitious newspaper articles about welfare fraud by Romanian nationals. They were then asked to decide on realistically designed but fictitious applications for basic income support. The result: after reading an article about alleged welfare fraud, case workers judged Romanian citizens’ requests for social benefits to be less credible, indicating group-specific discrimination. The effect was stronger in federal states where sceptical attitudes towards migration are particularly pronounced: in these regions, Romanian nationals were more likely to be treated differently from applicants with German nationality, even though both were equally eligible for social benefits. At the same time, an opposite effect appeared for foreign nationals who were not explicitly mentioned in the newspaper article: job centre staff reacted to their applications with less scepticism and, in part, greater willingness to help. Researchers refer to this form of unequal treatment as positive discrimination.

 

“Our results show that the administration is not a neutral space,” explains Gerald Schneider, professor of political science at the University of Konstanz and co-author of the study. “Where social stereotypes are strong and the media spreads negative images of migration, these attitudes can be directly reflected in the work of state authorities.” However, the phenomenon does not only occur in the parts of Germany where resentment towards people with a migration background is already widespread. Stefanie Rueß, postdoctoral researcher at Zeppelin University and corresponding author of the study, adds: “Negative headlines about migration subconsciously activate stereotypes that determine which of these groups of people are considered ‘suspicious’, ‘deserving of help’ or less ‘credible’. These subtle forms of discrimination can be harmful because they are more difficult to recognize and can impact further decisions. The media, social norms, and administrative decisions are closely intertwined.”

 

Jan Vogler, an associate professor of political science at Aarhus University in Denmark, emphasizes that the results could have far-reaching consequences for the relationship between the state and the specific population groups that are affected by discrimination: “If people feel that the state is discriminating against them, this can permanently shake their trust in public institutions. Subsequently, this may also negatively impact their general interactions with the state, manifesting across many different dimensions.” According to the authors, countermeasures could include targeted media literacy skills training, standardized decision-making processes as well as more balanced (regional) reporting on migration. Through such measures, the state can ensure that social benefits are allocated based on objective criteria rather than on regional sentiments.

 

 

Key facts:

  • Original publication: Rueß, S., Schneider, G., & Vogler, J. (2025): Illiberal Norms, Media Reporting, and Bureaucratic Discrimination: Evidence from State-Citizen Interactions in Germany. Comparative Political Studies.
  • Authors:
    • Stefanie Rueß is a postdoctoral researcher in the ERC project "DEMOLAW" at Zeppelin University Friedrichshafen and a former member of Gerald Schneider's research team.
    • Gerald Schneider is a professor of international politics and a principal investigator in the Cluster of Excellence "The Politics of Inequality" at the University of Konstanz.
    • Jan Vogler is an associate professor of political science at Aarhus University, Denmark. Until 2024, he was a junior professor at the University of Konstanz.
  • Methodology: Representative survey and experiment with 1,400 employees from 60 job centres in Germany (June – July 2023). First, the participants read fictitious newspaper articles on social fraud (control group: neutral article on digitalization). They then evaluated fictitious social benefit applications with varying characteristics (name, nationality, gender, etc.).
  • The Cluster of Excellence "The Politics of Inequality" at the University of Konstanz investigates the political causes and consequences of inequality from an interdisciplinary perspective. The research is dedicated to some of the most pressing issues of our time: Access to and distribution of (economic) resources, the global rise of populists, climate change, and unfairly distributed educational opportunities.
  • The study also relies on funding from the InRa network ("Institutions & Racism"), a large-scale research project by the Research Institute Social Cohesion (RISC) on behalf of the Federal Ministry of the Interior.
