Saturday, October 26, 2024

Malicious social media bots increased significantly during the COVID-19 pandemic and continue to influence public health communication

Finnish Institute for Health and Welfare

The information environment in Finland during the coronavirus pandemic was exceptional and intense in many ways. The spread of disinformation and the number of actors involved reached unprecedented levels. The demand for accurate information was enormous, and the situation was constantly evolving. Information was disseminated through various channels. Official information played a crucial role, but at the same time, social media posed challenges in the fight against false and misleading information.

Malicious bots increased significantly during the pandemic. Bot activity – i.e. programs imitating human users – was particularly aggressive around the key corona measures, such as the major information campaigns on COVID-19 vaccinations and official guidelines. This was evident in a study that analyzed a total of 1.7 million COVID-19-related tweets posted on Twitter/X in Finland over the course of three years.

Bots accounted for 22 percent of the messages, whereas bots normally produce about 11 percent of the content on Twitter/X. Of the identified bot accounts, 36 percent (4,894) acted maliciously. These malicious accounts focused especially on spreading misinformation, i.e. incorrect information disseminated without deliberate intent to mislead. About a quarter (approx. 460,000) of all messages contained incorrect information, and roughly the same proportion expressed a negative attitude towards vaccines.
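Putting those reported figures together gives a rough sense of scale. The back-of-envelope sketch below derives totals that are only implied by the article's percentages; the derived counts are approximations, not numbers reported by the study:

```python
# Back-of-envelope check of the reported figures; the derived totals are
# approximations implied by the percentages above, not study outputs.
total_tweets = 1_700_000        # COVID-19 tweets analyzed over three years
bot_share = 0.22                # share of messages posted by bots
malicious_accounts = 4_894      # malicious bot accounts identified
malicious_share = 0.36          # share of bot accounts acting maliciously
misinfo_tweets = 460_000        # approx. messages with incorrect information

bot_tweets = total_tweets * bot_share                # ~374,000 bot messages
bot_accounts = malicious_accounts / malicious_share  # ~13,600 identified bot accounts
misinfo_share = misinfo_tweets / total_tweets        # ~0.27, i.e. "about a quarter"

print(f"Bot-authored messages: ~{bot_tweets:,.0f}")
print(f"Identified bot accounts: ~{bot_accounts:,.0f}")
print(f"Share of messages with misinformation: {misinfo_share:.0%}")
```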

According to the study, malicious bots used the Finnish Institute for Health and Welfare's (THL) Twitter account to intentionally spread disinformation, i.e. deliberately misleading information, but did not actually target THL itself. The bots increased the effectiveness and reach of their posts in various ways; for example, they mentioned other accounts in 94 percent of their tweets. The bots also proved to be adaptable; their messages varied according to the situation.

The study utilized the latest version of Botometer (4.0) to classify bot accounts, going beyond mere identification to differentiate between regular bots and COVID-19-specific malicious bots. This distinction is critical, as it reveals that traditional binary classifications of bots are insufficient. 
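To illustrate what this kind of scoring involves in practice, here is a minimal sketch using the public botometer Python client. It is not the study's actual pipeline: the credentials and handles are placeholders, the 0.8 CAP cutoff is an assumption for illustration, and the further split into regular versus malicious bots required additional, study-specific analysis.

```python
# Minimal sketch of account scoring with the public Botometer v4 client.
# Not the study's pipeline: keys, handles, and the 0.8 cutoff are
# placeholder assumptions for illustration only.
import botometer

rapidapi_key = "YOUR_RAPIDAPI_KEY"             # placeholder credential
twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",       # placeholder credentials
    "consumer_secret": "YOUR_CONSUMER_SECRET",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key=rapidapi_key,
    **twitter_app_auth,
)

accounts = ["@example_one", "@example_two"]    # hypothetical handles

for screen_name, result in bom.check_accounts_in(accounts):
    if "error" in result:      # e.g. deleted, suspended, or protected account
        continue
    # CAP = Complete Automation Probability; the 'universal' score is
    # language-independent, which matters for Finnish-language accounts.
    cap = result["cap"]["universal"]
    label = "likely bot" if cap >= 0.8 else "likely human"
    print(f"{screen_name}: CAP={cap:.2f} -> {label}")
```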

“The findings highlight how regular bots often align with governmental messaging, enhancing their credibility and influence, while malicious bots employ more aggressive and deceptive tactics. The malicious bots may amplify false narratives, manipulate public opinion, and create confusion by blurring the line between credible and noncredible sources,” says Senior Researcher Ali Unlu, the primary author of the study. 

Bot activity should be taken into account in public health communication

Malicious bots pose a persistent threat even after the pandemic's peak. They continue to spread misinformation, particularly concerning vaccines, by exploiting public fears and skepticism.

The research suggests that these bots could have long-term implications for public trust in health institutions and highlights the importance of developing more sophisticated tools for detecting and mitigating the influence of such bots.

“Public health agencies need to enhance their monitoring and response strategies. Our study suggests preemptive measures such as public education on bot activity and improved detection tools. It also calls for more action from social media platforms to curb clearly false information and verify account authenticity, which could significantly improve public trust and the effectiveness of public health communication,” says Lead Expert Tuukka Tammi from THL.

Non-English setting makes the research unique

Unlike most studies in this domain, which are predominantly in English, this research is one of the few that investigates social media bots in a non-English language, specifically Finnish. This unique focus allows for a detailed examination of external factors such as geographical dispersion and population diversity in Finland, providing valuable insights that are often overlooked in global studies.

“This study represents a significant contribution to understanding the complex role of bots in public health communication, particularly in the context of a global health crisis. It highlights the dual nature of bot activity — where regular bots can support public health efforts, while malicious bots pose a serious threat to public trust and the effectiveness of health messaging. The research provides a roadmap for future studies and public health strategies to combat the ongoing challenge of misinformation in the digital age,” concludes Professor of Practice Nitin Sawhney from Aalto University’s computer science department.

The study was conducted as part of the joint Crisis Narratives research project between Aalto University and THL, and was funded by the Research Council of Finland from 2020 to 2024.

AI-generated news is harder to understand

Ludwig-Maximilians-Universität München

Readers find automated news articles poorer than manually written texts in terms of word choice and use of numbers.

Traditionally crafted news articles are more comprehensible than articles produced with automation. This was the finding of an LMU study recently published in the journal Journalism: Theory, Practice, and Criticism. The research team at the Department of Media and Communication (IfKW) surveyed more than 3,000 online news consumers in the UK. Each of the respondents rated one of 24 texts, half of which had been produced with the help of automation and half of which had been manually written by journalists. “Overall, readers found the 12 automated articles significantly less comprehensible,” summarizes lead author Sina Thäsler-Kordonouri. This was despite the fact that the AI-generated articles had been sub-edited by journalists prior to publication.

Worse handling of numbers and word choice

According to the survey, one of the reasons for reader dissatisfaction was the word choice used in the AI texts. Readers complained that the AI-produced articles contained too many inappropriate, difficult, or unusual words and phrases. Furthermore, readers were significantly less satisfied with the way the automated articles handled numbers and data.

The deficiencies readers perceived in the automated articles’ handling of numbers and word choice partly explain why they were harder to understand, the researchers say. However, readers were equally satisfied with the automated and manually written articles in terms of the ‘character’ of the writing and their narrative structure and flow.

More human sub-editing required

Professor Neil Thurman, who led the project, suggests that “when creating and/or sub-editing automated news articles, journalists and technologists should aim to reduce the quantity of numbers, better explain words that readers are unlikely to understand, and increase the amount of language that helps readers picture what the story is about.”
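As a purely illustrative aside, and not a tool from the study, part of such a sub-editing pass could be mechanized. The sketch below flags number-dense sentences and long words that a human editor might prune or explain; its thresholds and regular expressions are arbitrary assumptions:

```python
# Illustrative sketch only (not from the study): flag sentences whose
# number density or long words a human sub-editor might want to revise.
# Thresholds and regexes are arbitrary assumptions for demonstration.
import re

def flag_sentences(text, max_numbers=2, max_word_len=12):
    """Yield (sentence, issues) pairs a sub-editor might want to revise."""
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        issues = []
        numbers = re.findall(r"\d[\d,.%]*", sentence)
        if len(numbers) > max_numbers:
            issues.append(f"{len(numbers)} numbers (consider pruning)")
        long_words = [w for w in re.findall(r"[A-Za-z'-]+", sentence)
                      if len(w) > max_word_len]
        if long_words:
            issues.append("long words to explain: " + ", ".join(long_words))
        if issues:
            yield sentence, issues

sample = ("Unemployment fell 0.3% to 4.1% in Q2, from 4.4% in Q1 and 4.8% "
          "in 2023, notwithstanding decelerating macroeconomic conditions.")
for sentence, issues in flag_sentences(sample):
    print(sentence, "->", issues)
```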

This study is the first both to investigate the relative comprehensibility of manual and automated news articles and to explore why a difference exists. “Our results indicate the importance not only of maintaining human involvement in the automated production of data-driven news content, but of refining it,” says Sina Thäsler-Kordonouri.
