Friday, April 24, 2020

Nearly 50% of Twitter Accounts Talking about Coronavirus Might Be Bots

Twitter is dealing with a pandemic of bots jamming the platform with misinformation about COVID-19.


By Tess Owen Apr 23 2020


Nearly half the “people” talking about the coronavirus pandemic on Twitter are not actually people, but bots, according to new research from Carnegie Mellon University.

And many of those bots are rapidly flooding Twitter with harmful, false story lines about the pandemic. Some, such as the theory that 5G towers cause COVID-19, have inspired real-world activity; others push state-sponsored propaganda from Russia and China that falsely claims the U.S. developed the coronavirus as a bioweapon or that American politicians are issuing “mandatory” lockdowns.

“We do see that a lot of bots are acting in ways that are consistent with the story lines that are coming out of Russia or China,” said Kathleen Carley, professor at Carnegie Mellon’s School of Computer Science’s Institute for Software Research.

Researchers there found that 45.5% of users tweeting about the coronavirus have the characteristics of bots, such as tweeting more frequently than is humanly possible, or appearing to be in one country and then another a few hours later.

Carley says that’s a massive jump from the 20% she’d expected based on previous analyses of bot activity around other major global news events and natural disasters.

The Carnegie Mellon team identified more than 100 false narratives relating to coronavirus worldwide, which they divided into six different categories: cures or preventative measures, weaponization of the virus, emergency responses, the nature of the virus (like children being immune to it), self-diagnosis methods, and feel-good stories, like dolphins returning to Venice’s canals.

They found the largest number of distinct narratives in the cures or preventative measures category: 77 in total. Carley said those ranged from the merely silly, like the claim that Corona beer cures coronavirus, to the outright deadly, like the claim that drinking bleach cures it, which was touted by the pro-Trump group QAnon. Disinformation in this category was also the most likely to travel internationally, Carley said.

Disinformation about the coronavirus erodes trust in institutions and makes the public less likely to comply with scientifically informed government measures needed to curb the spread of the virus, like lockdowns and social distancing.

"The real goal of those running these disinformation campaigns is about creating distrust in the overall ecosystem and institutions. It’s not so much about picking a side as it is about creating confusion and doubt and distrust of authority,” said Jevin West, who runs the Center for an Informed Public at the University of Washington.

Bot or not?

Carley and her team rely on a “bot hunter” tool that they developed, which uses artificial intelligence to process account information from users on Twitter to determine who is or who isn’t a bot.

The bot hunter looks at signals such as an account’s number of followers, what it tweets about, how often it tweets, the language it uses, the types of accounts it retweets, and its mentions network.
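
To make those signals concrete, here is a minimal, purely illustrative sketch of feature-based bot scoring. The feature names, thresholds, and weights below are assumptions for illustration only; the actual Carnegie Mellon tool is a trained machine-learning system, not a handful of hand-written rules.

```python
# Hypothetical sketch of feature-based bot scoring.
# All features, thresholds, and weights are illustrative assumptions,
# not the Carnegie Mellon "bot hunter" itself.
from dataclasses import dataclass


@dataclass
class AccountSnapshot:
    followers: int
    tweets_last_24h: int
    distinct_hours_active: int    # how many of the last 24 hours had activity
    retweet_fraction: float       # share of recent activity that is retweets
    countries_seen_last_24h: int  # locations inferred from profile/geotags


def bot_score(acct: AccountSnapshot) -> float:
    """Return a crude 0-1 score; higher means more bot-like."""
    score = 0.0
    if acct.tweets_last_24h > 144:        # more than ~6 tweets/hour, all day
        score += 0.35
    if acct.distinct_hours_active >= 22:  # an account that never sleeps
        score += 0.25
    if acct.retweet_fraction > 0.9:       # almost pure amplification
        score += 0.20
    if acct.countries_seen_last_24h > 1:  # "teleports" between countries
        score += 0.20
    return min(score, 1.0)


if __name__ == "__main__":
    suspicious = AccountSnapshot(
        followers=12,
        tweets_last_24h=400,
        distinct_hours_active=24,
        retweet_fraction=0.97,
        countries_seen_last_24h=3,
    )
    print(f"bot score: {bot_score(suspicious):.2f}")  # prints 1.00
```

A real detector would learn such weights from labeled accounts rather than hard-coding them, which is part of why the software has to keep evolving as the bots do.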

To analyze bot activity around the pandemic, the tool examined all tweets discussing coronavirus or COVID-19: about 67 million tweets between January 29 and March 4, and roughly 4 million tweets a day on average after that, from more than 12 million users.
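
As a purely hypothetical illustration of how such a dataset might be assembled, the sketch below filters a stream of tweets by pandemic-related keywords; the keyword list and the tweet fields are assumptions, not the study’s actual collection pipeline.

```python
# Hypothetical keyword filter for pandemic-related tweets.
# Keywords and tweet structure are assumptions for illustration only.
KEYWORDS = ("coronavirus", "covid-19", "covid19")


def mentions_pandemic(text: str) -> bool:
    lowered = text.lower()
    return any(keyword in lowered for keyword in KEYWORDS)


def filter_stream(tweets):
    """Yield only the tweets whose text mentions a pandemic keyword."""
    for tweet in tweets:
        if mentions_pandemic(tweet.get("text", "")):
            yield tweet


if __name__ == "__main__":
    sample = [
        {"user": "a", "text": "Stay home and stop the spread of COVID-19"},
        {"user": "b", "text": "Nice weather out today"},
    ]
    print([t["user"] for t in filter_stream(sample)])  # prints ['a']
```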

Carley’s findings, which will be laid out in an upcoming paper, are in keeping with reports that China and Russia have launched massive disinformation campaigns around the coronavirus pandemic directed at the U.S.

Reuters got their hands on a European Union document last month alleging that the Kremlin had mounted a “significant disinformation campaign” against the West with the goal of sowing panic and distrust.

In mid-March, the White House’s National Security Council had to put out an announcement via Twitter denying social media reports that President Donald Trump was about to lock down the entirety of the U.S. According to the New York Times, that narrative was pushed by Chinese agents. And earlier this month, the Justice Department said it was investigating coronavirus disinformation campaigns originating from China and Russia.

While some bots fit the profile of state-sponsored disinformation campaigns, Carley said it’s hard to say definitively where they came from or who made them. “We can’t prove attribution,” she said.

Removing tweets

A spokesperson for Twitter told VICE News that they’re “prioritizing the removal of COVID-19 content when it has a call to action that could potentially cause harm,” a policy that they adopted on March 18. Since then, they’ve removed more than 2,200 tweets.

“As we’ve said previously, we will not take enforcement action on every tweet that contains incomplete or disputed information about COVID-19,” the spokesperson said. “As we’ve doubled down on tech, our automated systems have challenged more than 3.4 million accounts that were targeting discussions around COVID-19 with spammy or manipulative behaviors.”

Carley has described bot detection as a game of cat-and-mouse: Bots are constantly becoming more sophisticated to evade social media crackdowns, and so the software needed to catch them has to be regularly updated.


At the moment, Twitter says they’re not seeing any kind of coordinated platform-manipulation effort with regard to the coronavirus. They also cautioned that not all bots are created equal, and that not all bots are bad; if they were, Twitter says, they’d be in violation of company policy.
