Writers’ lawsuit against Anthropic highlights tensions between AI developers & artists



By Webb Wright, NY Reporter

August 20, 2024


A proposed class action lawsuit threatens to tarnish Anthropic’s reputation as a beacon of safety and responsibility in an industry mired in controversy.


Anthropic launched its Claude 3 family of models earlier this year. / Adobe Stock

Amazon-backed AI developer Anthropic is being sued by a trio of authors who claim that the AI company has illegally used pirated and copyrighted materials to train its Claude chatbot.

Filed yesterday in San Francisco by authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson, the proposed class action lawsuit accuses Anthropic of training Claude using pirated copies of books gathered from an open-source training dataset called The Pile.

It also argues that Anthropic is depriving authors of revenue by enabling the creation of AI-generated lookalikes of their books. The rise of large language models (LLMs) has made it "easier than ever to generate rip-offs of copyrighted books that compete with the original, or at a minimum dilute the market for the original copyrighted work," the complaint states. "Claude in particular has been used to generate cheap book content ... [and it] could not generate this kind of long-form content if it were not trained on a large quantity of books, books for which Anthropic paid authors nothing. In short, the success and profitability of Anthropic is predicated on mass copyright infringement without a word of permission from or a nickel of compensation to copyright owners, including Plaintiffs here."

It’s the first time that Anthropic has been sued by writers, though the company is facing another legal challenge from a group of music publishers, including Concord Music Group and Universal Music Group. In a lawsuit lodged last fall, the publishers allege that the company trained Claude with copyrighted lyrics, which the chatbot then illegally distributed through its outputs.

Anthropic released its Claude 3 family of models in March, shortly before Amazon completed a $4bn investment in the company.

The new case arrives at a historic moment of reckoning between the AI and publishing industries. The rise of popular generative AI chatbots like ChatGPT, Gemini and Claude has caused many online publishers to fear that the technology could undermine their flow of web traffic. Meanwhile, a growing chorus of actors, musicians, illustrators and other artists are calling for legal protections against what they’ve come to view as a predatory AI industry built upon their creative output without compensating or often even crediting them.

As Motti Peer, chairman and co-CEO of PR agency ReBlonde, puts it: “This legal challenge is emblematic of a broader struggle between traditional content creators and the emerging generative AI technology that threatens the relevance of quite a few long-standing professions.”

He notes that this dilemma “is not ... specific to Anthropic.”

In fact, thus far, OpenAI has been the primary target of the creative industry’s AI ire. The company – along with Microsoft, its primary financial backer – has been sued by a fleet of newspapers, including The New York Times and The Chicago Tribune, which claim that their copyrighted materials were used illegally to train AI models. Copyright infringement lawsuits have also been filed against both OpenAI and Microsoft by prominent authors like George R.R. Martin, Jonathan Franzen and Jodi Picoult.

In the midst of those legal battles, OpenAI has sought to position itself as an ally to publishers. The company has inked content licensing deals with publishers including Axel Springer, The Associated Press and, as of Tuesday, Condé Nast – deals that give OpenAI the right to train its models on their content in exchange for links back to their articles in ChatGPT-generated responses, among other perks.

Other AI developers, like Perplexity, have also debuted new initiatives this year designed to mitigate publisher concerns.

But both Anthropic and OpenAI have argued that their use of publisher content is permissible under the ‘fair use’ doctrine, a provision of US copyright law that allows copyrighted materials to be repurposed without the permission of their original creators in certain cases.

Anthropic would do well to frame its response to the allegation within the broader discourse about “how society should integrate transformative technologies in a way that balances progress with the preservation of existing cultural and professional paradigms,” according to Peer. He advises treading “carefully” and says the company should “[respect] the legal process while simultaneously advocating for a broader discussion on the principles and potential of AI.”

Anthropic has not yet replied to The Drum’s request for comment.

The new lawsuit raises key questions about Anthropic’s ethics and governance practices. Founded by ex-OpenAI staffers, the company has positioned itself as an ethical counterbalance to OpenAI – one that can be trusted to responsibly build artificial general intelligence, or AGI, that will benefit humanity. Since its founding in 2021, Anthropic has continued to attract a steady stream of former OpenAI engineers and researchers who worry that the Sam Altman-led company is prioritizing commercial growth over safety (the highest-profile being Jan Leike, who headed AI safety efforts at OpenAI).

The future of Anthropic’s reputation may hinge in part on the company’s willingness to collaborate on the creation of state and federal regulation of the AI industry, suggests Andrew Graham, founder of PR firm Bread & Law. “Being available and engaged in the lawmaking and regulatory process is a great way for a company in a controversial industry to boost its reputation and attract deeper levels of trust from the stakeholders that matter most,” he says. “This is the core mistake that crypto firms made back a handful of years ago.”

Anthropic has already signaled its willingness to collaborate with policymakers: The company recently helped to amend California’s controversial AI bill, SB 1047, which is designed to increase accountability for AI companies and to mitigate some of the dangers posed by the most advanced AI systems.

The company might also bolster its self-styled reputation as the conscientious, responsible force within the AI community by engaging directly with artists, authors and the other professionals whose work is being used to train Claude and other chatbots. Such an approach could turn the negative press surrounding Monday’s lawsuit "into an opportunity," according to Andrew Koneschusky, a PR and crisis communications expert and founder of the AI Impact Group, an AI-focused consultancy firm.

He suggests that the company has the chance to set a positive standard for similar debacles in the future, saying, “If the rules for training AI models are ambiguous, as the responsible and ethical AI company, Anthropic should take the lead in defining what responsible and ethical training entails.”
