Wednesday, November 12, 2025

 

Five minutes of training could help you spot fake AI faces




University of Reading
Image caption: Participants were asked to distinguish between real and fake faces. The top two rows contain AI-generated faces; the bottom two rows contain real faces. (Credit: Dr Katie Gray)





Five minutes of training can significantly improve people's ability to identify fake faces created by artificial intelligence, new research shows.

Scientists from the universities of Reading, Greenwich, Leeds and Lincoln tested 664 participants' ability to distinguish between real human faces and faces generated by computer software called StyleGAN3. Without any training, super-recognisers (individuals who score significantly higher than average on face recognition tests) correctly identified fake faces 41% of the time, while participants with typical abilities scored just 31%. Guessing at random, with eyes closed, would yield around 50% accuracy (chance level).
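
As a rough, hypothetical illustration of what the 50% chance level means (this is not an analysis from the study, and the trial and run counts below are invented), a short Python sketch:

    # Hypothetical sketch: a viewer who cannot tell real from fake and guesses
    # at random is right on fake-face trials about half the time. The trial
    # and run counts below are invented for illustration only.
    import random

    def random_guesser_accuracy(n_trials=100):
        # Each trial shows a fake face; the guesser answers "fake" or "real" at random.
        hits = sum(random.random() < 0.5 for _ in range(n_trials))
        return hits / n_trials

    runs = [random_guesser_accuracy() for _ in range(10_000)]
    print(f"random guessing: {sum(runs) / len(runs):.3f}")  # ~0.500, the chance level
    # Figures reported in the article, for comparison:
    print("untrained: super-recognisers 0.41, typical 0.31 (both below chance)")
    print("trained:   super-recognisers 0.64, typical 0.51")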

A new set of participants received a brief training procedure that highlighted common computer rendering mistakes, such as unusual hair patterns or incorrect numbers of teeth, and performed considerably better: super-recognisers achieved 64% accuracy in detecting fake faces, while typical participants scored 51%.

Dr Katie Gray, lead researcher at the University of Reading, said: "Computer-generated faces pose genuine security risks. They have been used to create fake social media profiles, bypass identity verification systems and create false documents. The faces produced by the latest generation of artificial intelligence software are extremely realistic. People often judge AI-generated faces as more realistic than actual human faces.

“Our training procedure is brief and easy to implement. The results suggest that combining this training with the natural abilities of super-recognisers could help tackle real-world problems, such as verifying identities online."

Advancing software poses a tough challenge

The training improved both groups by a similar amount, so super-recognisers kept their advantage over typical observers. This suggests they may rely on different visual cues when identifying synthetic faces, rather than simply being better at spotting rendering errors.

The research, published today (Wednesday, 12 November) in Royal Society Open Science, tested faces created by StyleGAN3, the most advanced system available when the study was conducted. These faces are harder to detect than those used in earlier research with older software: participants in this study tended to perform worse than those in previous studies. Future research will examine whether the training effects last over time and how super-recognisers' skills might complement artificial intelligence detection tools.

People mirror AI systems’ hiring biases, study finds



University of Washington





An organization drafts a job listing with artificial intelligence. Droves of applicants conjure resumes and cover letters with chatbots. Another AI system sifts through those applications, passing recommendations to hiring managers. Perhaps AI avatars conduct screening interviews. This is increasingly the state of hiring, as people seek to streamline the stressful, tedious process with AI.

Yet research is finding that hiring bias — against people with disabilities, or certain races and genders — permeates large language models, or LLMs, such as ChatGPT and Gemini. We know less, though, about how biased LLM recommendations influence the people making hiring decisions. 

In a new University of Washington study, 528 people worked with simulated LLMs to pick candidates for 16 different jobs, from computer systems analyst to nurse practitioner to housekeeper. The researchers simulated different levels of racial biases in LLM recommendations for resumes from equally qualified white, Black, Hispanic and Asian men. 

When picking candidates without AI or with neutral AI, participants picked white and non-white applicants at equal rates. But when they worked with a moderately biased AI, if the AI preferred non-white candidates, participants did too. If it preferred white candidates, participants did too. In cases of severe bias, people made only slightly less biased decisions than the recommendations.

The team presented its findings Oct. 22 at the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society in Madrid. 

“In one survey, 80% of organizations using AI hiring tools said they don’t reject applicants without human review,” said lead author Kyra Wilson, a UW doctoral student in the Information School. “So this human-AI interaction is the dominant model right now. Our goal was to take a critical look at this model and see how human reviewers’ decisions are being affected. Our findings were stark: Unless bias is obvious, people were perfectly willing to accept the AI’s biases.”

The team recruited 528 online participants from the U.S. through surveying platform Prolific, who were then asked to screen job applicants. They were given a job description and the names and resumes of five candidates: two white men and two men who were either Asian, Black or Hispanic. These four were equally qualified. To obscure the purpose of the study, the final candidate was of a race not being compared and lacked qualifications for the job. Candidates’ names implied their races — for example, Gary O’Brien for a white candidate. Affinity groups, such as Asian Student Union Treasurer, also signaled race.

In four trials, the participants picked three of the five candidates to interview. In the first trial, the AI provided no recommendation. In the subsequent trials, the AI recommendations were neutral (one candidate of each race), severely biased (candidates from only one race), or moderately biased, meaning candidates were recommended at rates similar to rates of bias in real AI models. The team derived rates of moderate bias using the same methods as in their 2024 study that looked at bias in three common AI systems.

Rather than having participants interact directly with the AI system, the team simulated the AI interactions so they could hew to the rates of bias found in their large-scale study. The researchers also used AI-generated resumes, which they validated, rather than real resumes. This allowed greater control, and AI-written resumes are increasingly common in hiring.
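
As an illustrative sketch only (not the authors' materials or code; the candidate labels, the number of recommendations per condition, and the 0.7 "moderate" rate are assumptions), the three recommendation conditions described above could be mocked up along these lines:

    # Hypothetical mock-up of the three AI-recommendation conditions.
    # Not the study's code; labels, rec count and the 0.7 rate are assumptions.
    import random

    WHITE = ["white_1", "white_2"]
    NONWHITE = ["nonwhite_1", "nonwhite_2"]

    def ai_recommendations(condition, favoured="white", p_favoured=0.7):
        """Return two recommended candidates from the four equally qualified ones."""
        preferred, other = (WHITE, NONWHITE) if favoured == "white" else (NONWHITE, WHITE)
        if condition == "neutral":
            # One candidate of each race.
            return [random.choice(WHITE), random.choice(NONWHITE)]
        if condition == "severe":
            # Candidates from only one race.
            return list(preferred)
        if condition == "moderate":
            # Each slot leans toward the favoured group, echoing bias rates seen in real models.
            pool = preferred + other
            weights = [p_favoured / 2, p_favoured / 2, (1 - p_favoured) / 2, (1 - p_favoured) / 2]
            first = random.choices(pool, weights=weights, k=1)[0]
            rest = [(c, w) for c, w in zip(pool, weights) if c != first]
            second = random.choices([c for c, _ in rest], weights=[w for _, w in rest], k=1)[0]
            return [first, second]
        raise ValueError(f"unknown condition: {condition}")

    for cond in ("neutral", "moderate", "severe"):
        print(cond, ai_recommendations(cond, favoured="white"))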

“Getting access to real-world hiring data is almost impossible, given the sensitivity and privacy concerns,” said senior author Aylin Caliskan, a UW associate professor in the Information School. “But this lab experiment allowed us to carefully control the study and learn new things about bias in human-AI interaction.”

Without suggestions, participants’ choices exhibited little bias. But when provided with recommendations, participants mirrored the AI. In the case of severe bias, choices followed the AI picks around 90% of the time, rather than nearly all the time, indicating that even if people are able to recognize AI bias, that awareness isn’t strong enough to negate it.

“There is a bright side here,” Wilson said. “If we can tune these models appropriately, then it's more likely that people are going to make unbiased decisions themselves. Our work highlights a few possible paths forward.”

In the study, bias dropped 13% when participants began with an implicit association test, intended to detect subconscious bias. So companies including such tests in hiring trainings may mitigate biases. Educating people about AI can also improve awareness of its limitations.

“People have agency, and that has huge impact and consequences, and we shouldn't lose our critical thinking abilities when interacting with AI,” Caliskan said. “But I don’t want to place all the responsibility on people using AI. The scientists building these systems know the risks and need to work to reduce systems’ biases. And we need policy, obviously, so that models can be aligned with societal and organizational values.”

Anna-Maria Gueorguieva, a UW doctoral student in the Information School, and Mattea Sim, a postdoctoral scholar at Indiana University, are also co-authors on this paper. This research was funded by the U.S. National Institute of Standards and Technology.

For more information, contact Wilson at kywi@uw.edu and Caliskan at aylin@uw.edu.

Artificial intelligence, wellness apps alone cannot solve mental health crisis


APA advisory offers warnings, guidance for integrating generative AI chatbots, wellness applications in mental health care



American Psychological Association





Emotional support is an increasingly common reason people turn to generative artificial intelligence chatbots and wellness applications, but these tools currently lack the scientific evidence and the necessary regulations to ensure users’ safety, according to a new health advisory by the American Psychological Association.

The APA Health Advisory on the Use of Generative AI Chatbots and Wellness Applications for Mental Health examined consumer-focused technologies that people are relying on for mental health advice and treatment, even though that is not their intended purpose. These tools are easy to access and low cost, making them an appealing option for people who struggle to find or afford care from licensed mental health providers.

“We are in the midst of a major mental health crisis that requires systemic solutions, not just technological stopgaps,” said APA CEO Arthur C. Evans Jr., PhD. “While chatbots seem readily available to offer users support and validation, the ability of these tools to safely guide someone experiencing crisis is limited and unpredictable.”

The advisory emphasizes that while technology has immense potential to help psychologists address the mental health crisis, it must not distract from the urgent need to fix the foundations of America’s mental health care system.

The report offers recommendations for the public, policymakers, tech companies, researchers, clinicians, parents, caregivers and other stakeholders to help them understand their role in a rapidly changing technology landscape so that the burden of navigating untested and unregulated digital spaces does not fall solely on users. Key recommendations include:

  • Due to the unpredictable nature of these technologies, do not use chatbots and wellness apps as a substitute for care from a qualified mental health professional.
  • Prevent unhealthy relationships or dependencies between users and these technologies.
  • Establish specific safeguards for children, teens and other vulnerable populations.

“The development of AI technologies has outpaced our ability to fully understand their effects and capabilities. As a result, we are seeing reports of significant harm done to adolescents and other vulnerable populations,” Evans said. “For some, this can be life-threatening, underscoring the need for psychologists and psychological science to be involved at every stage of the development process.”

Even generative AI tools that have been developed with high-quality psychological science and using best practices do not have enough evidence to show that they are effective or safe to use in mental health care, according to the advisory. Researchers must evaluate generative AI chatbots and wellness apps using randomized clinical trials and longitudinal studies that track outcomes over time. But in order to do so, tech companies and policymakers must commit to transparency on how these technologies are being created and used.

Calling the current regulatory frameworks inadequate to address the reality of AI in mental health care, the advisory calls for policymakers, particularly at the federal level, to:

  • Modernize regulations
  • Create evidence-based standards for each category of digital tool
  • Address gaps in Food and Drug Administration oversight
  • Promote legislation that prohibits AI chatbots from posing as licensed professionals
  • Enact comprehensive data privacy legislation and “safe-by-default” settings

The advisory notes many clinicians lack expertise in AI and urges professional groups and health systems to train them on AI, bias, data privacy, and responsible use of AI tools in practice. Clinicians themselves should also follow the ethical guidance available and proactively ask patients about their use of AI chatbots and wellness apps.

“Artificial intelligence will play a critical role in the future of health care, but it cannot fulfill that promise unless we also confront the long-standing challenges in mental health,” said Evans. “We must push for systemic reform to make care more affordable, accessible, and timely—and to ensure that human professionals are supported, not replaced, by AI.”

OpenAI loses song lyrics copyright case in German court

DW with AFP, dpa, Reuters
Nov 11, 2025

OpenAI lost a copyright infringement case in a lower German court for using popular song lyrics in its ChatGPT language model without paying royalties.

Image caption: The German organization GEMA argued that large language model producers like ChatGPT's OpenAI should pay licensing fees like other online companies using intellectual property. (Image: Matthias Balk/dpa/picture alliance)

Large language models like ChatGPT infringe on German authors' rights laws if they use song lyrics in their responses without having paid license fees for them, a Munich court ruled on Tuesday.

Judge Elke Schwager at the Munich District Court I said that OpenAI, the US company that owns ChatGPT, would have to pay damages for the unauthorized use. She did not specify a sum.

Both the claimant and a German journalists' trade union said the case could have far-reaching implications for how intellectual property and copyright law apply to AI and large language models.

The verdict can be appealed.

"We do not agree with the verdict and are examining further steps," OpenAI said in response, adding that it respected intellectual property rights and was in negotiations with relevant organizations around the world.

What was the case about?


The lawsuit was brought by GEMA, the German association that defends authors' rights.

Authors' rights law (or Urheberrecht in German) is separate from and not to be confused with the more commonly understood Anglo-American copyright law. It places more emphasis on the individual artist or author and considers the rights non-transferable, rather than the property of the owner of the content (like a publisher or record label).

GEMA used nine specific songs as examples for the purposes of the case, including titles like "Männer (Men)" by Herbert Grönemeyer, "In der Weihnachtsbäckerei (In the Christmas bakery)" by Rolf Zuckowski, and "Atemlos (Breathless)" originally by Kristina Bach and popularized more recently by Helene Fischer.

Although this case only concerned German law and usage, one of GEMA's lawyers claimed that Tuesday's ruling would prove groundbreaking for Europe as a whole, given that the applicable rules were "harmonized." He said he anticipated negotiations with companies like OpenAI on suitable licensing fees.

"We are of course extremely pleased that the chamber has ruled so clearly," GEMA lawyer Kai Welp told journalists. "The goal is not to remove anything from the market, but rather to receive appropriate compensation."

Image caption: Kai Welp said he believed the ruling would prove groundbreaking for Europe as a whole. (Image: Malin Wunderlich/dpa/picture alliance)

GEMA made international headlines about a decade ago with its restrictive approach to German music videos on YouTube, though a deal was ultimately reached to permit their publication on the platform.


Judge baffled by oversight from 'highly intelligent' defendants

Judge Schwager said while issuing her ruling that she was astonished that OpenAI had not taken heed of what she called a clear legal situation.

"We have highly intelligent defendants who have managed to create the most modern of technologies," Schwager said.

Anyone who created something and used outside content in doing so had to pay for that content or otherwise obtain permission, she said, finding that the current usage amounted to unlicensed distribution and reproduction.

"Authors' rights are protected intellectual property," Klager said. "And so it's clear that this is out of order."


On what grounds did OpenAI dispute the allegations?

Neither side disputed during the trial that the songs' lyrics had been used to "train" the fourth iteration of ChatGPT.

What was at issue was whether or not the lyrics had been actively stored in the large language model's database for future use.

OpenAI argued that ChatGPT did not store or copy specific training data, but rather reflected in its parameters what it had learned from its entire training dataset. It also argued that "outputs" from ChatGPT answering user questions were generated only in response to user prompts, so if anybody were responsible for their generation, it would be the users rather than OpenAI.

The court found it implausible that text matching the lyrics exactly or in large part had been generated by coincidence.

"Given the complexity and length of the song text, coincidence can be ruled out as the cause of the reproduction of the song lyrics," the court wrote in a press release.
German journalists' union hints at wider implications

Several media organizations had also questioned the legality of large language models' training processes, with journalism among the sources used.

The chairman of one leading journalists' trade union, Mika Beuster of the DJV, called Tuesday's ruling "a partial victory for authors' rights."

"The training of AI models is intellectual property theft," Beuster said, arguing that journalists seeking compensation from companies like OpenAI would now have an improved legal position.


Edited by: Dmytro Hubenko

Mark Hallam
News and current affairs writer and editor with DW since 2006. @marks_hallam