Wednesday, November 12, 2025

Artificial intelligence, wellness apps alone cannot solve mental health crisis


APA advisory offers warnings, guidance for integrating generative AI chatbots, wellness applications in mental health care



American Psychological Association





Emotional support is an increasingly common reason people turn to generative artificial intelligence chatbots and wellness applications, but these tools currently lack the scientific evidence and the necessary regulations to ensure users’ safety, according to a new health advisory by the American Psychological Association.

The APA Health Advisory on the Use of Generative AI Chatbots and Wellness Applications for Mental Health examined consumer-focused technologies that people are relying on for mental health advice and treatment, even though that is not their intended purpose. These tools are, however, easy to access and low in cost, making them an appealing option for people who struggle to find or afford care from licensed mental health providers.

“We are in the midst of a major mental health crisis that requires systemic solutions, not just technological stopgaps,” said APA CEO Arthur C. Evans Jr., PhD. “While chatbots seem readily available to offer users support and validation, the ability of these tools to safely guide someone experiencing crisis is limited and unpredictable.”

The advisory emphasizes that while technology has immense potential to help psychologists address the mental health crisis, it must not distract from the urgent need to fix the foundations of America’s mental health care system.

The report offers recommendations for the public, policymakers, tech companies, researchers, clinicians, parents, caregivers and other stakeholders to help them understand their role in a rapidly changing technology landscape so that the burden of navigating untested and unregulated digital spaces does not fall solely on users. Key recommendations include:

  • Due to the unpredictable nature of these technologies, do not use chatbots and wellness apps as a substitute for care from a qualified mental health professional
  • Prevent unhealthy relationships or dependencies between users and these technologies
  • Establish specific safeguards for children, teens and other vulnerable populations

“The development of AI technologies has outpaced our ability to fully understand their effects and capabilities. As a result, we are seeing reports of significant harm done to adolescents and other vulnerable populations,” Evans said. “For some, this can be life-threatening, underscoring the need for psychologists and psychological science to be involved at every stage of the development process.”

Even generative AI tools developed with high-quality psychological science and best practices lack sufficient evidence that they are effective or safe to use in mental health care, according to the advisory. Researchers must evaluate generative AI chatbots and wellness apps using randomized clinical trials and longitudinal studies that track outcomes over time. But to do so, tech companies and policymakers must commit to transparency about how these technologies are being created and used.

Calling current regulatory frameworks inadequate to address the reality of AI in mental health care, the advisory urges policymakers, particularly at the federal level, to:

  • Modernize regulations
  • Create evidence-based standards for each category of digital tool
  • Address gaps in Food and Drug Administration oversight
  • Promote legislation that prohibits AI chatbots from posing as licensed professionals
  • Enact comprehensive data privacy legislation and require “safe-by-default” settings

The advisory notes that many clinicians lack expertise in AI and urges professional groups and health systems to train them on AI, bias, data privacy and the responsible use of these tools in practice. Clinicians themselves should also follow available ethical guidance and proactively ask patients about their use of AI chatbots and wellness apps.

“Artificial intelligence will play a critical role in the future of health care, but it cannot fulfill that promise unless we also confront the long-standing challenges in mental health,” said Evans. “We must push for systemic reform to make care more affordable, accessible, and timely—and to ensure that human professionals are supported, not replaced, by AI.”

OpenAI loses song lyrics copyright case in German court

DW with AFP, dpa, Reuters
Nov 11, 2025

OpenAI lost a copyright infringement case in a lower German court for using popular song lyrics in its ChatGPT language model without paying royalties.

The German organization GEMA argued that producers of large language models, such as ChatGPT maker OpenAI, should pay licensing fees like other online companies that use intellectual property.

Large language models like ChatGPT infringe on German authors' rights laws if they use song lyrics in their responses without having paid license fees for them, a Munich court ruled on Tuesday.

Judge Elke Schwager of the Munich Regional Court I said that OpenAI, the US company behind ChatGPT, would have to pay damages for the unauthorized use. She did not specify a sum.

Both the claimant and a German journalists’ trade union said the case could have far-reaching implications for AI and large language models as well as for intellectual property and copyright law.

The verdict can be appealed.

"We do not agree with the verdict and are examining further steps," OpenAI said in response, adding that it respected intellectual property rights and was in negotiations with relevant organizations around the world.

What was the case about?


The lawsuit was brought by GEMA, a German association that defends authors’ rights.

Authors' rights law (or Urheberrecht in German) is separate from and not to be confused with the more commonly understood Anglo-American copyright law. It places more emphasis on the individual artist or author and considers the rights non-transferable, rather than the property of the owner of the content (like a publisher or record label).

GEMA used nine specific songs as examples for the purposes of the case, including titles like "Männer (Men)" by Herbert Grönemeyer, "In der Weihnachtsbäckerei (In the Christmas bakery)" by Rolf Zuckowski, and "Atemlos (Breathless)" originally by Kristina Bach and popularized more recently by Helene Fischer.

Although this case only concerned German law and usage, one of GEMA's lawyers claimed that Tuesday's ruling would prove groundbreaking for Europe as a whole, given that the applicable rules were "harmonized." He said he anticipated negotiations with companies like OpenAI on suitable licensing fees.

"We are of course extremely pleased that the chamber has ruled so clearly," GEMA lawyer Kai Welp told journalists. "The goal is not to remove anything from the market, but rather to receive appropriate compensation."


GEMA made international headlines about a decade ago with its restrictive approach to German music videos on YouTube, though a deal was ultimately reached to permit their publication on the platform.


Judge baffled by oversight from 'highly intelligent' defendants

While issuing her ruling, Judge Schwager said she was astonished that OpenAI had not taken heed of what she called a clear legal situation.

"We have highly intelligent defendants who have managed to create the most modern of technologies," Schwager said.

Anyone who created something and used outside content in doing so had to pay for that content or otherwise obtain permission, she said, finding that the current usage amounted to unlicensed distribution and reproduction.

"Authors' rights are protected intellectual property," Klager said. "And so it's clear that this is out of order."


On what grounds did OpenAI dispute the allegations?

Neither side disputed during the trial that the songs' lyrics had been used to "train" the fourth iteration of ChatGPT.

What was at issue was whether or not the lyrics had been actively stored in the large language model's database for future use.

OpenAI argued that ChatGPT did not store or copy specific training data, but rather reflected in its parameters what it had learned from its entire training dataset. It also argued that “outputs” from ChatGPT were generated only in response to user prompts, and that if anyone were responsible for their generation, it would be the users rather than OpenAI.

The court did not find it plausible that text matching the lyrics exactly or in large part had been generated by coincidence.

“Given the complexity and length of the song text, coincidence can be ruled out as the cause of the reproduction of the song lyrics,” the court wrote in a press release.

German journalists' union hints at wider implications

Several media organizations had also questioned the legality of large language models' training processes, with journalism among the sources used.

The chairman of one leading journalists' trade union, Mika Beuster of the DJV, called Tuesday's ruling "a partial victory for authors' rights."

"The training of AI models is intellectual property theft," Beuster said, arguing that journalists seeking compensation from companies like OpenAI would now have an improved legal position.


Edited by: Dmytro Hubenko

Mark Hallam is a news and current affairs writer and editor with DW since 2006. @marks_hallam
