Wednesday, May 18, 2022

Livestreamed carnage: Tech's hard lessons from mass killings






BARBARA ORTUTAY, HALELUYA HADERO and MATT O'BRIEN
Tue, May 17, 2022

These days, mass shooters like the one now in custody for the Buffalo, New York, supermarket attack don’t stop at planning their assaults. They also create marketing plans while arranging to livestream their massacres on social platforms in hopes of fomenting more violence.

Sites like Twitter, Facebook and now the game-streaming platform Twitch have learned painful lessons from dealing with the violent videos that often accompany such shootings. But experts are calling for a broader discussion around livestreams, including whether they should exist at all, since once such videos go online, they're almost impossible to erase completely.

The self-described white supremacist gunman who police say killed 10 people, all of them Black, at a Buffalo supermarket Saturday had mounted a GoPro camera to his helmet to stream his assault live on Twitch, the video game streaming platform used by another shooter in 2019 who killed two people at a synagogue in Halle, Germany.

He had previously outlined his plan in a detailed but rambling set of online diary entries that were apparently posted publicly ahead of the attack, although it's not clear how many people might have seen them. His goal: to inspire copycats and spread his racist beliefs. After all, he was a copycat himself.

He decided against streaming on Facebook, as yet another mass shooter did when he killed 51 people at two mosques in Christchurch, New Zealand, three years ago. Unlike Twitch, Facebook requires users to sign up for an account in order to watch livestreams.

Still, not everything went according to plan. By most accounts the platforms responded more quickly to halt the spread of the Buffalo video than they did after the 2019 Christchurch shooting, said Megan Squire, a senior fellow and technology expert at the Southern Poverty Law Center.

Another Twitch user watching the live video likely flagged it to the attention of Twitch’s content moderators, she said, which would have helped Twitch pull down the stream less than two minutes after the first gunshots, according to a company spokesperson. Twitch has not said how the video was flagged. In a statement about the shooting Tuesday, the company expressed thanks “for the user reports that help us catch and remove harmful content in real time.”

“In this case, they did pretty well,” Squire said. “The fact that the video is so hard to find right now is proof of that.”

That was little consolation to family members of the victims. Celestine Chaney’s son, Wayne Jones, found out his mother had been killed when someone sent him a video screenshot from the livestream. Not long after, he saw the video itself.

“I didn’t find out, nobody knocked on my door like the usual process,” he said. “I found out in a Facebook picture that my mom was gunned down. Then I watched the video on social media.”

Danielle Simpson, the girlfriend of Chaney’s grandson, said she reported dozens of sites after the video kept appearing over and over in her Facebook feed and she worried that Chaney’s family would see them.

“I think I reported about 100 pages on Sunday because every time I got on Facebook it was either pictures or the video was right there,” she said. “You couldn’t escape it. There was nowhere you could go.”

In 2019, the Christchurch shooting was streamed live on Facebook for 17 minutes and quickly spread to other platforms. This time, the platforms generally seemed to coordinate better, particularly by sharing digital “signatures” of the video used to detect and remove copies.

But platform algorithms can have a harder time identifying a copycat video if someone has edited it. That's created problems, such as when some internet forum users remade the Buffalo video with twisted attempts at humor. Tech companies would have needed to use “more fancy algorithms” to detect those partial matches, Squire said.
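The “more fancy algorithms” Squire refers to are generally perceptual hashes, which produce similar fingerprints for visually similar frames even after a clip has been re-encoded, cropped or lightly edited. The sketch below is a simplified illustration of one such technique, difference hashing; it assumes the Pillow imaging library, the file names and review threshold are hypothetical, and the platforms' production matching systems are far more sophisticated and not public.

```python
# A minimal sketch (not any platform's actual system) of "difference hashing",
# one family of perceptual-hash techniques that can flag near-duplicate frames.
# Assumes the Pillow imaging library is installed.
from PIL import Image


def dhash(image_path: str, hash_size: int = 8) -> int:
    """Compute a difference hash by comparing adjacent pixel brightness values."""
    img = Image.open(image_path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count differing bits; a small distance suggests near-duplicate frames."""
    return bin(a ^ b).count("1")


# Hypothetical usage: compare a frame from an upload against a known bad frame.
# A small distance (e.g. <= 10 of 64 bits) would flag it for human review.
if hamming_distance(dhash("known_frame.jpg"), dhash("uploaded_frame.jpg")) <= 10:
    print("possible match - escalate to moderators")
```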

“It seems darker and more cynical,” she said of the attempts to spread the shooting video in recent days.

Twitch has more than 2.5 million viewers at any given moment; roughly 8 million content creators stream video on the platform each month, according to the company. The site uses a combination of user reports, algorithms and moderators to detect and remove any violence that occurs on the platform. The company said that it quickly removed the gunman’s stream, but hasn’t shared many details about what happened on Saturday — including whether the stream was reported or how many people watched the rampage live.

A Twitch spokesperson said the company shared the livestream with the Global Internet Forum to Counter Terrorism, a nonprofit group set up by tech companies to help others monitor their own platforms for rebroadcasts. But clips from the video still made their way to other platforms, including the site Streamable, where it was available for millions to view. A spokesperson for Hopin, the company that owns Streamable, said Monday that it's working to remove the videos and terminate the accounts of those who uploaded them.

Looking ahead, platforms may face future moderation complications from a Texas law — reinstated by an appellate court last week — that bans big social media companies from “censoring” users’ viewpoints. The shooter “had a very specific viewpoint” and the law is unclear enough to create a risk for platforms that moderate people like him, said Jeff Kosseff, an associate professor of cybersecurity law at the U.S. Naval Academy. “It really puts the finger on the scale of keeping up harmful content,” he said.

Some lawmakers have called for social media companies to further police their platforms following the gunman’s livestream. President Joe Biden did not bring up such calls during his remarks Tuesday in Buffalo.

Alexa Koenig, executive director of the Human Rights Center at the University of California, Berkeley, said there's been a shift in how tech companies are responding to such events. In particular, Koenig said, coordination between the companies to create fingerprint repositories for extremist videos so they can't be re-uploaded to other platforms “has been an incredibly important development.”

A Twitch spokesperson said the company will review how it responded to the gunman’s livestream.

Experts suggest that sites such as Twitch could exercise more control over who can livestream and when — for instance, by building in delays or whitelisting vetted users while banning rule violators. More broadly, Koenig said, “there’s also a general societal conversation that needs to happen around the utility of livestreaming and when it’s valuable, when it’s not, and how we put safe norms around how it’s used and what happens if you use it.”

Another option, of course, would be to end livestreaming altogether. But that's almost impossible to imagine given how much tech companies rely on livestreams to attract and keep users engaged in order to bring in money.

Free speech, Koenig said, is often the reason tech platforms give for allowing this form of technology — beyond the unspoken profit component. But that should be balanced "with rights to privacy and some of the other issues that arise in this instance,” Koenig said.

___

AP journalists Robert Bumsted and Carolyn Thompson contributed from Buffalo.

___

This story has been updated to clarify that all 10 of the people killed in the shooting were Black.

After Buffalo Shooting Video Spreads, Social Platforms Face Questions

In March 2019, before a gunman murdered 51 people at two mosques in Christchurch, New Zealand, he went live on Facebook to broadcast his attack. In October of that year, a man in Germany broadcast his own mass shooting live on Twitch, the Amazon-owned livestreaming site popular with gamers.

On Saturday, a gunman in Buffalo, New York, mounted a camera to his helmet and livestreamed on Twitch as he killed 10 people and injured three more at a grocery store in what authorities said was a racist attack. In a manifesto posted online, Payton S. Gendron, the 18-year-old whom authorities identified as the shooter, wrote that he had been inspired by the Christchurch gunman and others.

Twitch said it reacted swiftly to take down the video of the Buffalo shooting, removing the stream within two minutes of the start of the violence. But two minutes was enough time for the video to be shared elsewhere.


By Sunday, links to recordings of the video had circulated widely on other social platforms. A clip from the original video — which bore a watermark suggesting it had been recorded with free screen-recording software — was posted on a site called Streamable and viewed more than 3 million times before it was removed. And a link to that video was shared hundreds of times across Facebook and Twitter in the hours after the shooting.

Mass shootings — and live broadcasts — raise questions about the role and responsibility of social media sites in allowing violent and hateful content to proliferate. Many of the gunmen in the shootings have written that they developed their racist and antisemitic beliefs trawling online forums like Reddit and 4chan, and were spurred on by watching other shooters stream their attacks live.

“It’s a sad fact of the world that these kind of attacks are going to keep on happening, and the way that it works now is there’s a social media aspect as well,” said Evelyn Douek, a senior research fellow at Columbia University’s Knight First Amendment Institute who studies content moderation. “It’s totally inevitable and foreseeable these days. It’s just a matter of when.”

Questions about the responsibilities of social media sites are part of a broader debate over how aggressively platforms should moderate their content. That debate has escalated since Elon Musk, the CEO of Tesla, agreed to purchase Twitter and said he wants to make unfettered speech on the site a primary objective.

Social media and content moderation experts said Twitch’s quick response was the best that could reasonably be expected. But the fact that the response did not prevent the video of the attack from being spread widely on other sites also raises the issue of whether the ability to livestream should be so easily accessible.

“I’m impressed that they got it down in two minutes,” said Micah Schaffer, a consultant who has led trust and safety decisions at Snapchat and YouTube. “But if the feeling is that even that’s too much, then you really are at an impasse: Is it worth having this?”

In a statement, Angela Hession, Twitch’s vice president of trust and safety, said the site’s rapid action was a “very strong response time considering the challenges of live content moderation, and shows good progress.” Hession said the site was working with the Global Internet Forum to Counter Terrorism, a nonprofit coalition of social media sites, as well as other social platforms to prevent the spread of the video.

“In the end, we are all part of one internet, and we know by now that that content or behavior rarely — if ever — will stay contained on one platform,” she said.

In a document that appeared to be posted to the forum 4chan and the messaging platform Discord before the attack, Gendron explained why he had chosen to stream on Twitch, writing that “it was compatible with livestreaming for free and all people with the internet could watch and record.” (Discord said it was working with law enforcement to investigate.)

Twitch also allows anyone with an account to go live, unlike YouTube, which requires users to verify their accounts before livestreaming and to have at least 50 subscribers to stream from a mobile device.

“I think that livestreaming this attack gives me some motivation in the way that I know that some people will be cheering for me,” Gendron wrote.

He also said he had been inspired by Reddit, far-right sites like The Daily Stormer and the writings of Brenton Tarrant, the Christchurch shooter.

In remarks Saturday, Gov. Kathy Hochul of New York criticized social media platforms for their role in influencing Gendron’s racist beliefs and allowing video of his attack to circulate.

“This spreads like a virus,” Hochul said, demanding that social media executives evaluate their policies to ensure that “everything is being done that they can to make sure that this information is not spread.”

There may be no easy answers. Platforms like Facebook, Twitch and Twitter have made strides in recent years, the experts said, in removing violent content and videos faster. In the wake of the shooting in New Zealand, social platforms and countries around the world joined an initiative called the Christchurch Call to Action and agreed to work closely to combat terrorist and violent extremist content. One tool that social sites have used is a shared database of hashes, or digital footprints of images, that can flag inappropriate content and have it taken down quickly.
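Conceptually, such a hash-sharing database works like a shared blocklist: each participating platform computes a fingerprint of an uploaded file and checks it against fingerprints contributed by the others. The sketch below illustrates the idea with plain SHA-256 hashes; the real shared databases rely on perceptual hashes that survive re-encoding, and the names and entries here are hypothetical.

```python
# A simplified illustration of checking uploads against a shared hash list.
# Real systems (e.g. the GIFCT hash-sharing database) use perceptual hashes
# that tolerate re-encoding; plain SHA-256 only catches byte-identical copies.
import hashlib

# Hypothetical set of fingerprints shared across participating platforms.
SHARED_HASH_DATABASE: set[str] = set()


def fingerprint(path: str) -> str:
    """Hash the file in chunks so large video uploads fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def should_block(path: str) -> bool:
    """Flag an upload whose fingerprint matches known violating content."""
    return fingerprint(path) in SHARED_HASH_DATABASE
```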

But in this case, Douek said, Facebook seemed to have fallen short despite the hash system. Facebook posts that linked to the video posted on Streamable generated more than 43,000 interactions, according to CrowdTangle, a web analytics tool, and some posts were up for more than nine hours.

When users tried to flag the content as violating Facebook’s rules, which do not permit content that “glorifies violence,” they were told in some cases that the links did not run afoul of Facebook’s policies, according to screenshots viewed by The New York Times.

Facebook has since started to remove posts with links to the video, and a Facebook spokesperson said the posts do violate the platform’s rules. Asked why some users were notified that posts with links to the video did not violate its standards, the spokesperson did not have an answer.

Twitter had not removed many posts with links to the shooting video, and in several cases, the video had been uploaded directly to the platform. A company spokesperson initially said the site might remove some instances of the video or add a sensitive content warning, then later said Twitter would remove all videos related to the attack after the Times asked for clarification.

A spokesperson at Hopin, the video conferencing service that owns Streamable, said the platform was working to remove the video and delete the accounts of people who had uploaded it.

Removing violent content is “like trying to plug your fingers into leaks in a dam,” Douek said. “It’s going to be fundamentally really difficult to find stuff, especially at the speed that this stuff spreads now.”

© 2022 The New York Times Company


