Disinformation can reinforce polarization in society
The polarizing effects of disinformation endure even when faced with a powerful external shock
With over four billion people eligible to vote in elections, 2024 is the largest election year ever. At the same time, disinformation and polarization on social media pose unprecedented challenges to the democratic process. New research from Aalto University and the University of Helsinki investigated how real-world shocks affect online discussions, using the Ukraine war and Finland’s NATO accession to understand how disinformation reinforces polarization.
‘The potential for democratic political participation in the world is greater than ever,’ says Tuomas Ylä-Anttila, associate professor of political science at the University of Helsinki. ‘At the same time, the deliberate use of disinformation by those who want to disturb democratic processes and generate polarization poses a threat to democracy and societal stability. This threat is now widely recognized, not just by political scientists but also by organizations like the World Economic Forum.’
The research was a case study of how Russia's 2022 invasion of Ukraine affected discussions on NATO in the Finnish Twitter space immediately afterward. Finnish public opinion had long been split about joining NATO, with only around 20-30 percent in favour of joining the alliance. The Russian invasion led to a rapid convergence in favour of joining, which eventually led to Finland applying for membership. NATO and Russia are major themes in the campaigns for the Finnish presidential election, which will be held later this month.
The Russian invasion quickly depolarized NATO discussions in Finland but was unable to break a social bubble built on disinformation and conspiracy theories. These findings hold lessons for how disinformation will affect political campaigns elsewhere in today’s rapidly changing world.
‘By analysing retweeting patterns, we found three separate user groups before the invasion: a pro-NATO group, a left-wing anti-NATO group, and a conspiracy-charged anti-NATO group,’ says Yan Xia, a doctoral researcher at Aalto and lead author of the study. ‘After the invasion, the left-wing anti-NATO group members broke out of their retweeting bubble and connected with the pro-NATO group despite their difference in partisanship, while the conspiracy-charged anti-NATO group mostly remained a separate cluster.’
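In practice, this kind of group detection is usually done by building a retweet graph and clustering it. The following is a minimal, hypothetical Python sketch of that idea; the toy user names and the choice of Louvain community detection are illustrative assumptions, not the study's actual pipeline.

```python
# Hypothetical sketch of bubble detection on a retweet network.
# The user names and the choice of Louvain clustering are assumptions
# for illustration; they are not taken from the study itself.
import networkx as nx

# Each pair means "retweeter retweeted author" once.
retweets = [
    ("alice", "bob"), ("alice", "bob"), ("carol", "bob"),
    ("dave", "erin"), ("frank", "erin"), ("dave", "frank"),
]

# Build an undirected graph weighted by retweet frequency.
G = nx.Graph()
for retweeter, author in retweets:
    if G.has_edge(retweeter, author):
        G[retweeter][author]["weight"] += 1
    else:
        G.add_edge(retweeter, author, weight=1)

# Densely connected clusters in the retweet graph correspond to "bubbles".
communities = nx.community.louvain_communities(G, weight="weight", seed=42)
for i, group in enumerate(communities):
    print(f"group {i}: {sorted(group)}")
```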
The research revealed that the left-wing anti-NATO group and the pro-NATO group were bridged by a shared condemnation of Russia’s actions and shared democratic norms. The other anti-NATO group, mainly built around conspiracy theories and disinformation, consistently demonstrated a clear anti-NATO attitude.
Disinformation persists even under threat
‘An external threat can bridge partisan divides, but bubbles upheld by conspiracy theories and disinformation may persist even under dramatic external threats,’ says Ylä-Anttila. ‘These bubbles likely persist because people inside them have limited communication with anyone outside, which tends to reinforce their prior beliefs.’
According to Ylä-Anttila, this effect is not limited to Finnish NATO discussions.
‘People who have strong, non-mainstream opinions are often more likely to hold on to their beliefs. They’re more prone to confirmation bias, meaning that they’re more likely to disregard information that is contrary to their own beliefs,’ says Ylä-Anttila.
‘For democratic decision-making, it’s essential to note that these disinformation bubbles are a part of our political reality and various actors that benefit from them – such as the Kremlin propaganda machine – will most likely try to exploit them.’
How did the researchers measure users’ opinions of NATO and social bubbles?
The research team consisted of network scientists from Aalto University and political scientists from the University of Helsinki. While network analysis can reveal the structure of user interactions and how it changes over time, analysing the content uncovers how the discussion climate evolves and what arguments connect or distinguish opposing sides. Combining research methods and expertise from computer science and social science offers a more holistic view of the discussions and dynamics on social media.
‘Network science methods enable us to measure structural polarization in these discussions and automate the search for different bubbles and other structures,’ says Mikko Kivelä, assistant professor at Aalto University. ‘In comparison to surveys, our methods are especially interesting because we can follow all of these discussions accurately after they have happened. In this research project, we were able to study and compare the discussions right before and right after the Russian invasion. We’re able to directly follow public discourse and the political elites that engage in it online.’
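One simple, well-established way to quantify structural polarization in such a network is the E-I index: external ties minus internal ties, divided by all ties, where values near -1 indicate segregated, polarized bubbles. The sketch below is a hedged illustration that reuses the hypothetical `G` and `communities` from the earlier example; the study's own polarization measures may differ.

```python
# E-I index over the two largest detected groups: a standard, simple
# structural-polarization score (not necessarily the one used in the paper).
def ei_index(G, side_a, side_b):
    external = internal = 0
    for u, v in G.edges():
        if (u in side_a and v in side_b) or (u in side_b and v in side_a):
            external += 1  # tie crossing the divide
        elif (u in side_a and v in side_a) or (u in side_b and v in side_b):
            internal += 1  # tie within one side
    total = external + internal
    # -1 = fully segregated (maximally polarized), +1 = all ties cross sides.
    return (external - internal) / total if total else 0.0

sides = sorted(communities, key=len, reverse=True)[:2]
print(f"E-I index: {ei_index(G, sides[0], sides[1]):+.2f}")
```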
The research article was published in EPJ Data Science.
Research article:
Xia, Y., Gronow, A., Malkamäki, A. et al. The Russian invasion of Ukraine selectively depolarized the Finnish NATO discussion on Twitter. EPJ Data Sci. 13, 1 (2024). https://doi.org/10.1140/epjds/s13688-023-00441-2
Misinformation and irresponsible AI - experts forecast how technology may shape our near future
From misinformation and invisible cyber attacks, to irresponsible AI that could cause events involving multiple deaths, expert futurists have forecast how rapid technology changes may shape our world by 2040.
As the pace of computing technology surges ahead and systems become increasingly interlinked, it is vital to understand how these advances could impact the world in order to take steps to prevent the worst outcomes.
Using a Delphi study, a well-known forecasting technique, a team of cyber security researchers led by academics from Lancaster University interviewed 12 experts on the future of technology.
The experts ranged from chief technology officers in businesses, consultant futurists and a technology journalist to academic researchers. They were asked how particular technologies might develop and change our world over the next 15 years, up to 2040, what risks they might pose, and how to address the challenges that may arise.
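A Delphi study typically iterates: experts give forecasts, see the group's aggregated view, then revise. The sketch below illustrates only that numeric feedback loop, assuming a simple 1-5 agreement scale with invented scores; the study's expert interviews were of course richer than a single rating exercise.

```python
# Illustrative core of a Delphi feedback round; all scores are invented.
from statistics import median, quantiles

def round_summary(ratings):
    """Aggregate one round: group median plus interquartile range (IQR)."""
    q1, _, q3 = quantiles(ratings, n=4)
    return {"median": median(ratings), "iqr": q3 - q1}

# Round 1: initial agreement with a forecast statement (1-5 scale).
round1 = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5, 1, 4]
print("round 1:", round_summary(round1))  # summary is fed back to the panel

# Round 2: experts revise after seeing the group view. A narrowing IQR
# signals emerging consensus; a persistently wide IQR signals disagreement.
round2 = [4, 5, 4, 4, 3, 5, 4, 4, 4, 5, 2, 4]
print("round 2:", round_summary(round2))
```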
Most of the experts forecast exponential growth in Artificial Intelligence (AI) over the next 15 years, and many also expressed concern that corners could be cut in the development of safe AI. They felt that this corner-cutting could be driven by nation states seeking competitive advantage. Several of the experts even considered it possible that poorly implemented AI could lead to incidents involving many deaths, although other experts disagreed with this view.
Dr Charles Weir, Lecturer at Lancaster University’s School of Computing and Communications and lead researcher of the study, said: “Technology advances have brought, and will continue to bring, great benefits. We also know there are risks around some of these technologies, including AI, and where their development may go—everyone’s been discussing them—but the possible magnitude of some of the risks forecast by some of the experts was staggering.
“But by forecasting what potential risks lie just beyond the horizon we can take steps to avoid major problems.”
Another significant concern held by most of the experts involved in the study was that technology advances will make it easier for misinformation to spread. This has the potential to make it harder for people to tell the difference between truth and fiction - with ramifications for democracies.
Dr Weir said: “We are already seeing misinformation on social media networks, and its use by some nation states. The experts are forecasting that advances in technologies will make it much easier for people and bad actors to continue spreading misleading material by 2040.”
Other technologies were forecast to have less of an impact by 2040, including quantum computing, which the experts see as having effects over a much longer timeframe, and blockchain, which most of the experts dismissed as a likely source of major change.
The experts forecast that:
· By 2040, competition between nation states and big tech companies will lead to corners being cut in the development of safe AI
· Quantum computing will have limited impact by 2040
· By 2040 there will be ownership of public web assets, which will be identified and traded through digital tokens
· By 2040 it will be harder to distinguish truth from fiction because widely accessible AI will be able to generate dubious content at scale
· By 2040 it will be harder to distinguish accidents from criminal incidents due to the decentralised nature and complexity of systems
The forecasters also suggested ways to mitigate some of the concerns raised, including governments introducing safety principles for AI purchasing and new laws to regulate AI safety. In addition, universities could play a vital role by introducing courses that combine technical skills and legislation.
These forecasts will help policy makers and technology professionals make strategic decisions around developing and deploying novel computing technologies. They are outlined in the paper ‘Interlinked Computing in 2040: Safety, Truth, Ownership and Accountability’ which has been published by the peer-reviewed journal IEEE Computer.
The paper’s authors are: Charles Weir and Anna Dyson of Lancaster University; Olamide Jogunola and Katie Paxton-Fear of Manchester Metropolitan University; and Louise Dennis of Manchester University.