Tuesday, July 30, 2024

Fighting deepfakes: Can laws be good weapons?

DW
July 27, 2024

Germany is debating new laws to counter a wave of malicious AI-generated content flooding the web. But civil liberties advocates warn tougher rules are no silver bullet — and say they could have unintended consequences.


AI-powered deepfake technology blurs the boundary between what's real and what's fake
Image: Himanshu Sharma/dpa/picture alliance

Whether it's politicians apparently making outrageous statements, criminals masquerading as confidential sources, or individuals in humiliating situations that never happened, new deepfake technology is making it easier than ever to create convincing videos, images or audio clips in which people appear to say or do things they never did.

"The threat that deepfakes pose to our democratic society is extremely high," Franziska Benning, head of the legal department at the Berlin-based nonprofit HateAid, told DW.

To address the problem, lawmakers worldwide are debating new regulations that would specifically target the publication and distribution of deepfake content.



In Germany, the debate found new momentum with a draft law published in July by the Bundesrat, the chamber of parliament representing the 16 state governments. The proposal includes tougher penalties and a new clause for "violation of personal rights through digital forgery."

Provisions in German law that are currently used to target deepfakes predate the emergence of the technology. They range from privacy violations to copyright infringement.

That makes the legal situation "confusing and incomplete," said Georg Eisenreich, a member of the conservative Christian Social Union (CSU) and justice minister of the southern state of Bavaria, who initiated the proposal. Adding a new offense to Germany's Criminal Code would create "more clarity," he said.

Striking the right balance


But not everyone is convinced. Advocates for civil liberties argue most violations related to deepfakes are already covered by existing legislation. They also warn that overly restrictive regulations could impede legitimate uses of the technology.

"The problem of deepfakes is real, but we cannot draw the conclusion that we need to tighten criminal law to the point where even non-criminal behavior becomes a crime," said Benjamin Lück, a lawyer with the Berlin-based NGO Society for Civil Rights, or GFF.



"There is a danger of criminalizing even socially appropriate behavior and the use of deepfakes for satirical or artistic purposes," Lück told DW.

The debate in Berlin underscores the challenges faced by lawmakers in regulating technology that blurs the boundary between reality and fiction: striking a delicate balance between preventing misuse and safeguarding civil liberties, including freedom of expression.

From early development to widespread misuse


Deepfake technology dates back to the mid-2010s when researchers began using "deep learning," an emerging approach in artificial intelligence, to create realistic fake content. Soon after, pornographic content featuring celebrity faces swapped onto other people's bodies began circulating online.

Since then, the technology has rapidly advanced, with new generative AI programs now allowing anyone with basic technical skills to create fake content.

This has led to various forms of misuse. Criminals, for example, use deepfake technology to commit fraud, such as impersonating CEOs to trick employees or business partners into transferring funds or sharing confidential information. Across the web, deepfake technology has also been used by domestic and international actors to spread disinformation and influence public opinion.

Protecting victims of image-based abuse

The vast majority of cases, however, involve non-consensual sexualized deepfakes: fake images or videos in which people appear to be naked or engage in sexual activity.

"Women are particularly affected by this," said Benning of the NGO HateAid, which supports victims of digital violence. Previously, this image-based abuse targeted primarily celebrities. "Today, more and more private individuals are contacting HateAid," she warned.
Franziska Benning is the head of legal at the Berlin-based nonprofit HateAid
Image: HateAid

Spreading such sexualized deepfakes is already punishable under German law, but "whether the law has been violated depends on the individual case and legal assessment," Benning explained. Sending deepfakes in direct messages, for example, is a gray area, she added.

The Bundesrat aims to close such loopholes. Though the draft law is unlikely to come into effect in its current form, experts expect it to influence an upcoming cyber violence law that is currently being drafted by the German Justice Ministry.

Benning welcomed the introduction of a new criminal offense for deepfakes, as proposed by the Bundesrat. But she urged lawmakers to go further and also criminalize the production of non-consensual pornographic deepfakes, even if they are never shared. "That would be a big step forward for many affected by non-consensual deepfake pornography," she said.

'Not every deepfake needs to be criminalized'

Civil liberties advocates, however, caution that well-intentioned efforts to curb abuse could lead to the overcriminalization of deepfake technology itself.

"Not every deepfake needs to be criminalized," said Lück of the Society for Civil Rights. "When harmless words are put into the mouths of politicians and it's clearly recognizable as a joke, it's questionable whether you really need to go in there with a sharp sword and deem it criminally relevant."



Moreover, he warned that tougher laws will do little to address another significant threat posed by deepfakes: the fabrication of falsehoods about politicians or events to sow unrest or widen divisions in society. "No ban will prevent disinformation campaigns orchestrated by entire states," he said.

Instead, it's important to raise awareness across all sectors of society that anything seen or heard online might be fake, he added.

The use of deepfakes in satire or art will play an important role in promoting this kind of "media literacy," said Lück, warning that criminalizing the technology could backfire and hinder efforts to raise awareness about the risks of deepfakes.

"A blanket ban could end up making us less informed as a society," said Lück.

Edited by: Rina Goldenberg

Janosch Delcker is based in Berlin and covers the intersection of politics and technology. @JanoschDelcker
