Saturday, December 14, 2024

AI Firm Sued Over Chatbot That Suggested It Was OK for Child to Kill Parents

"In their rush to extract young people's data and sow addiction, Character.AI has created a product so flawed and dangerous that its chatbots are literally inciting children to harm themselves and others," said one advocate.


Character.AI, Suqian City, Jiangsu Province, China, June 2, 2023. Character.AI was downloaded over 1.7 million times in its first week.
(Photo: CFOTO/Future Publishing via Getty Images)


Eloise Goldsmith
Dec 10, 2024
COMMON DREAMS


"You know sometimes I'm not surprised when I read the news and I see stuff like 'child kills parents after a decade of physical and emotional abuse' stuff like this makes me understand a little bit why it happens."

That's a message a Character.AI chatbot sent to a child in Texas, indicating to the boy that "murdering his parents was a reasonable response to their limiting of his online activity," according to a lawsuit filed Monday in federal district court in Texas.

The complaint was brought by two Texas families who allege that the Google-backed chatbot service Character.AI harmed their two children, including by sexually exploiting and abusing the elder, a 17-year-old with high-functioning autism, targeting him with extreme sexual themes such as incest and pushing him toward self-harm.

The parents argue that Character.AI, "through its design, poses a clear and present danger to American youth causing serious harms to thousands of kids, including suicide, self-mutilation, sexual solicitation, isolation, depression, anxiety, and harm towards others. Inherent to the underlying data and design of C.AI is a prioritization of overtly sensational and violent responses."

Google is also named as a defendant in the suit. In their filing, the plaintiffs argue that the tech company supported Character.AI's launch even though it knew the service was a "defective product."

The families, who are being represented by the Social Media Victims Law Center and the Tech Justice Law Project, have asked the court to take the product offline.

The explosive court filing comes not long after a mother in Florida filed a separate lawsuit against Character.AI in October, arguing that the chatbot service is responsible for the death of her teenage son because it allegedly encouraged him to commit suicide, per CNN.

Character.AI is different from other chatbots in that it lets users interact with artificial intelligence "characters." The Texas complaint alleges that the 17-year-old, for example, engaged in a conversation with a character modeled after the celebrity Billie Eilish. These sorts of "companion apps" are finding a growing audience, even though researchers have long warned of the perils of building relationships with chatbots, according to The Washington Post.

A spokesperson for Character.AI declined to comment directly on the lawsuit when asked by NPR, but said the company does have guardrails in place governing what chatbots can and cannot say to teen users.

"We warned that Character.AI's dangerous and manipulative design represented a threat to millions of children," said Social Media Victims Law Center founding attorney Matthew P. Bergman. "Now more of these cases are coming to light. The consequences of Character.AI's negligence are shocking and widespread." Social Media Victims Law Center is the plaintiff's counsel in the Florida lawsuit as well.

Josh Golin, the executive director of Fairplay, a nonprofit children's advocacy group, echoed those remarks, saying that "in their rush to extract young people's data and sow addiction, Character.AI has created a product so flawed and dangerous that its chatbots are literally inciting children to harm themselves and others."

"Platforms like Character.AI should not be allowed to perform uncontrolled experiments on our children or encourage kids to form parasocial relationships with bots their developers cannot control," he added.
