Friday, October 23, 2020

French NGOs take Twitter to court for failing to moderate hate speech

CIVIL SOCIETY NOT THE STATE


Issued on: 19/10/2020

Twitter's logo displayed on a mobile phone on May 27, 2020, in Arlington, Virginia, USA. © Olivier Douliery, AFP

Text by: Sophie GORMAN


French NGOs took Twitter to court in Paris on Monday morning, accusing the social media giant of not doing enough to tackle hate speech online.

Four French NGOs – SOS Racisme, SOS Homophobie, the Union of Jewish Students of France and J'accuse – filed suit against Twitter on May 11. A Paris court began hearing the case on Monday before postponing further hearings until December 1; Twitter and the NGOs have agreed to take part in mediation ahead of the next session.


At the heart of the case is Twitter’s refusal to provide information on its moderation processes. The four NGOs filed suit to obtain this data.

Under France's new Avia Law against online hate speech, passed in May 2020, social network platforms are required to make public how they limit the dissemination of such content and how they respond to reports. They must, for example, disclose how many moderators they employ, where those moderators work and what training they have received. Twitter does not share this information.

Social media networks have come under renewed fire in France in recent days after the decapitation of a teacher was posted on Twitter. A photograph of the teacher's body, accompanied by a message claiming responsibility, was posted on the social network. It was also discovered on the assailant's phone, found near his body. France’s anti-terrorism prosecutor, Jean-François Ricard, confirmed on Saturday that the Twitter account belonged to the attacker.

The post was swiftly removed by Twitter, which also said it had suspended the account for violating its company policies.

Twitter’s low removal rate

In the European Commission’s fifth evaluation of its code of conduct on countering hate speech online, published in June 2020, Twitter ranked last for removing hate speech. The review covered a six-week period at the end of 2019: Facebook removed 87.6 percent of the flagged content and YouTube 79.7 percent, while Twitter took down only 35.9 percent.

So is Twitter the worst offender when it comes to content moderation? “Yes, if you only consider the top four (Instagram, Twitter, Facebook and YouTube) – but there are hundreds of other social media platforms,” said lawyer Philippe Coen, who founded the Respect Zone NGO to target cyber violence. “Twitter has, in fact, made many efforts to improve its moderation in recent months. It just needs to make many more.”

“Interestingly, the CEOs of all the main social media platforms are themselves asking for more clearly defined regulations on hate content. They cannot act without the supporting legislation. And there are many ways to fight cyberbullying other than just in court. You need to start with schools, companies and society at large. We don’t want to work against the social media platforms, we want to work with them.”

Twitter refused to comment on the case.

Is Twitter responsible for bullying?

As cyber violence has surged in recent years, there has also been a push to increase hosting providers' obligations to moderate content. Those obligations, however, remain only loosely defined in law.

Social networks are not currently liable for the content posted on them. They have the legal status of a host, which limits their legal responsibility for content published on their networks: they are only required to delete content once it has been reported, and only if it clearly breaches the law.

The question at issue in the courts is whether Twitter has neglected its legal responsibility to moderate content.

“The term negligence legally refers to a fault of imprudence, a breach of the duty of care or a lack of diligence,” said French information technology and data privacy lawyer Olivia Luzi, speaking to FRANCE 24. “Given the legal obligations currently imposed on platforms and, in reality, the enormous task of monitoring all content at the exact moment it appears online rather than from the moment it is reported, it is difficult to establish what constitutes negligence.”

“Twitter currently has reporting and removal measures in place that are in line with the European Commission's recommendations. Platforms must review the majority of reports within 24 hours and, if necessary, block access to the content,” said Luzi.

“This case against Twitter will affect all hosting providers and therefore all social media, particularly online newspapers and their comment sections,” said Luzi. “They can no longer systematically hide behind the great and beautiful principles of freedom of expression and tolerate social media tools being diverted from their purpose and used as vectors of hatred. It is up to these organisations to take the initiative to moderate, without necessarily being accused of censorship, and to help build a digital world that reflects the values they advocate.”

Defining hate speech

In September, the World Federation of Advertisers announced it had reached an agreement with Facebook, Twitter and YouTube. For the first time, they agreed on common definitions of content such as hate speech and aggression, established harmonised reporting standards across platforms and empowered external auditors to oversee the system, which will launch in the second half of 2021.

In July, an independent audit commissioned by Facebook itself accused the social network of failing to tackle hate speech and fake news. The auditors, along with civil rights groups such as the Anti-Defamation League, denounced it for putting free speech above all else.

A week ago, Facebook explicitly banned Holocaust denial for the first time.

The social network said its new policy prohibits "any content that denies or distorts the Holocaust". Facebook CEO Mark Zuckerberg wrote that he had "struggled with the tension" between free speech and banning such posts, but that "this is the right balance".

“This move by Facebook is a revolution, it surprised everyone,” said Coen. “It’s a long, long battle for all the social media platforms, though, and we are only at the beginning of it. We are working now to try to convince digital companies to include in their digital design the ideas of human dignity and respect, which have been completely forgotten by the architects of these platforms. These sites are designed to catch your money and your data, but not your decency.”
