Tuesday, June 03, 2025

  

APA calls for guardrails and education to protect adolescent AI users



Report cites benefits and dangers of new technology



American Psychological Association




The effects of artificial intelligence on adolescents are nuanced and complex, according to a report from the American Psychological Association that calls on developers to prioritize features that protect young people from exploitation, manipulation and the erosion of real-world relationships.

“AI offers new efficiencies and opportunities, yet its deeper integration into daily life requires careful consideration to ensure that AI tools are safe, especially for adolescents,” according to the report, entitled “Artificial Intelligence and Adolescent Well-being: An APA Health Advisory.” “We urge all stakeholders to ensure youth safety is considered relatively early in the evolution of AI. It is critical that we do not repeat the same harmful mistakes made with social media.”

The report was written by an expert advisory panel and follows two earlier APA reports, on social media use in adolescence and on healthy video content recommendations.

The AI report notes that adolescence – which it defines as ages 10-25 – is a long developmental period and that age is “not a foolproof marker for maturity or psychological competence.” It is also a time of critical brain development, which argues for special safeguards aimed at younger users.

“Like social media, AI is neither inherently good nor bad,” said APA Chief of Psychology Mitch Prinstein, PhD, who spearheaded the report’s development. “But we have already seen instances where adolescents developed unhealthy and even dangerous ‘relationships’ with chatbots, for example. Some adolescents may not even know they are interacting with AI, which is why it is crucial that developers put guardrails in place now.”

The report makes a number of recommendations to ensure that adolescents can use AI safely. These include:

Ensuring there are healthy boundaries with simulated human relationships. Adolescents are less likely than adults to question the accuracy and intent of information when it is offered by a bot rather than a human.

Creating age-appropriate defaults in privacy settings, interaction limits and content. This will involve transparency, human oversight and support, and rigorous testing, according to the report.

Encouraging uses of AI that can promote healthy development. AI can assist in brainstorming, creating, summarizing and synthesizing information – all of which can make it easier for students to understand and retain key concepts, the report notes. But it is critical for students to be aware of AI’s limitations.

Limiting access to and engagement with harmful and inaccurate content. AI developers should build in protections to prevent adolescents’ exposure to harmful content.

Protecting adolescents’ data privacy and likenesses. This includes limiting the use of adolescents’ data for targeted advertising and the sale of their data to third parties.

The report also calls for comprehensive AI literacy education, integrating it into core curricula and developing national and state guidelines for literacy education.

“Many of these changes can be made immediately, by parents, educators and adolescents themselves,” Prinstein said. “Others will require more substantial changes by developers, policymakers and other technology professionals.”

In addition to the report, further resources are available at APA.org: guidance for parents on AI and keeping teens safe, and guidance for teens on AI literacy.


ChatGPT is useful for learning languages, but students’ critical thinking must be fostered when using it, according to a pioneering study by UPF



The research, which analyses how Chinese students use ChatGPT to learn Spanish, finds that the vast majority do not pose follow-up questions after obtaining the first response from the platform.



Universitat Pompeu Fabra - Barcelona





Given the growing number of people who turn to ChatGPT when studying a foreign language, pioneering research by UPF reveals both the potential and the shortcomings of learning a second language this way. According to the study, which analyses the use of ChatGPT by Chinese students learning Spanish, the platform helps them solve specific queries, especially about vocabulary, writing, and reading comprehension. However, its use is not embedded in a coherent, structured learning process, and students lack a critical view of the answers the tool provides. The authors therefore urge foreign language teachers to guide students toward more reflective and critical use of ChatGPT.

This is revealed in the first qualitative study in the world to examine how Chinese students use ChatGPT to learn Spanish, developed by the Research Group on Language Learning and Teaching (GR@EL) of the UPF Department of Translation and Language Sciences. The study was conducted by Shanshan Huang, a researcher with GR@EL, under the supervision of the group’s coordinator, Daniel Cassany. Both recently published an article on the subject in the Journal of China Computer-Assisted Language Learning.

To carry out the research, the team qualitatively examined the use of ChatGPT by 10 Chinese students learning Spanish over one week. Specifically, a total of 370 prompts (the instructions each user types into ChatGPT to obtain the desired information) were analysed in depth, along with the platform’s corresponding answers. The analysis was complemented by questionnaires administered to the students and by comments from the students’ own learning diaries.

The advantages of ChatGPT: a single window for solving all linguistic queries, one that adapts to each student’s needs

Regarding ChatGPT’s potential for learning languages, the study reveals that it allows students to get answers to a range of queries about the foreign language they are learning, in this case Spanish, from a single technological platform. For example, they can ask ChatGPT about vocabulary and spelling, instead of first consulting a digital dictionary and then a spell checker. Furthermore, the platform adapts to the profile and needs of each specific student, based on the type of interactions each user proposes.

On 9 out of 10 occasions, students do not pose follow-up questions after receiving their first answer from ChatGPT

However, the study warns that most students use ChatGPT uncritically: they usually do not pose follow-up questions after obtaining an initial response to their specific queries about the Spanish language. Of the 370 interactions analysed, 331 (89.45%) consisted of a single question and answer. The remaining 39 belonged to successive question-answer exchanges in which the student asked the tool for greater clarity and precision after receiving the initial response.
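As a quick check, these shares follow directly from the reported counts (a minimal sketch using only the figures above):

```python
# Quick check of the reported shares, using only the figures from the study.
total_prompts = 370
single_turn = 331                          # one question, one answer, no follow-up
multi_turn = total_prompts - single_turn   # prompts belonging to follow-up exchanges

print(f"single-turn: {single_turn / total_prompts:.2%}")  # 89.46% (reported as 89.45%)
print(f"multi-turn:  {multi_turn / total_prompts:.2%}")   # 10.54%
```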

Most queries deal with vocabulary, reading comprehension and writing, while queries about oral communication and grammar are marginal

The study also shows which topics the students’ queries address in the chat. Nearly 90% refer to vocabulary (36.22%), reading comprehension (26.76%) and writing in Spanish (26.49%). Only about one in ten concerns grammar, especially complex concepts, or oral expression.

The researchers suggest that this distribution of query topics may be explained by cultural and technological factors. On the one hand, the model for learning Spanish in China places less emphasis on oral communication than on writing and reading comprehension skills. On the other hand, version 3.5 of ChatGPT, which was used by the students who participated in the study, is more capable of generating and interpreting written texts than of interacting with users in conversation. Subsequent studies will need to analyse whether foreign language students take greater advantage of the next version of ChatGPT (GPT-4) to improve their oral communication skills.

Encouraging a new model of the student-AI-teacher relationship

In view of the results of the present study, the researchers stress that, beyond promoting students’ digital education, it is even more important to strengthen their critical thinking and self-learning skills. Foreign language teachers can play a fundamental role in guiding students to organize their learning step by step, supported by AI tools such as ChatGPT and approached with a critical eye. The UPF study recommends that teachers help students develop more effective prompts and encourage greater dialogue with ChatGPT to better exploit its capabilities, as in the sketch below. In short, the study advocates a new relationship model between teachers, AI tools and students that can strengthen and improve the learning process.
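To make the recommended follow-up dialogue concrete, it can be scripted against a chat-style API. The sketch below is illustrative only: it uses the OpenAI Python client, and the model name and prompts are assumptions for demonstration (the students in the study used the ChatGPT web interface, not the API):

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# Keep the full conversation so each follow-up builds on the previous answer,
# rather than the single question-answer pattern most students fell into.
messages = [
    {"role": "user",
     "content": "What does the Spanish word 'madrugar' mean? Give two example sentences."},
]
reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
messages.append({"role": "assistant", "content": reply.choices[0].message.content})

# The reflective follow-up question – the step 9 out of 10 students skipped.
messages.append({
    "role": "user",
    "content": "Is 'madrugar' formal or colloquial? When would a native speaker avoid it?",
})
reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(reply.choices[0].message.content)
```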

Reference article:

Huang, Shanshan, and Cassany, Daniel. “Spanish language learning in the AI era: AI as a scaffolding tool.” Journal of China Computer-Assisted Language Learning, 2025. https://doi.org/10.1515/jccall-2024-0026

Attachment theory: A new lens for understanding human-AI relationships



The researchers suggest that human-AI interactions have similarities to human-human relationships in terms of attachment anxiety and avoidance



Waseda University

[Image: Attachment theory as a tool to understand human-AI relationships – the study highlighted attachment anxiety and avoidance toward AI, elucidating human-AI interactions through a new lens. Credit: Fan Yang, Waseda University, Japan]





Artificial intelligence (AI) is ubiquitous in this era. As a result, human-AI interactions are becoming more frequent and complex, and this trend is expected to accelerate. Scientists have therefore made remarkable efforts to better understand human-AI relationships in terms of trust and companionship. However, these human-machine interactions may also be understood through attachment-related functions and experiences, concepts traditionally used to explain human interpersonal bonds.

In an innovative work incorporating two pilot studies and one formal study, a group of researchers from Waseda University, Japan, including Research Associate Fan Yang and Professor Atsushi Oshio from the Faculty of Letters, Arts and Sciences, used attachment theory to examine human-AI relationships. Their findings were published online in the journal Current Psychology on May 9, 2025.

Mr. Yang explains the motivation behind their research. “As researchers in attachment and social psychology, we have long been interested in how people form emotional bonds. In recent years, generative AI such as ChatGPT has become increasingly stronger and wiser, offering not only informational support but also a sense of security. These characteristics resemble what attachment theory describes as the basis for forming secure relationships. As people begin to interact with AI not just for problem-solving or learning, but also for emotional support and companionship, their emotional connection or security experience with AI demands attention. This research is our attempt to explore that possibility.”

Notably, the team developed a new self-report scale called the Experiences in Human-AI Relationships Scale, or EHARS, to measure attachment-related tendencies toward AI. They found that some individuals seek emotional support and guidance from AI, similar to how they interact with people. Nearly 75% of participants turned to AI for advice, while about 39% perceived AI as a constant, dependable presence.

This study differentiated two dimensions of human attachment to AI: anxiety and avoidance. An individual with high attachment anxiety toward AI needs emotional reassurance and harbors a fear of receiving inadequate responses from AI. In contrast, high attachment avoidance toward AI is characterized by discomfort with closeness and a consequent preference for emotional distance from AI.
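The release does not reproduce the EHARS items or scoring rules, but a two-dimension self-report scale of this kind is typically scored by averaging Likert responses within each subscale. The sketch below is purely hypothetical: the item numbers, item counts, and 7-point response format are assumptions, not the published scale.

```python
# Hypothetical scoring sketch for a two-dimension self-report scale like EHARS.
# Item numbers, item counts, and the 7-point Likert format are assumptions;
# the actual EHARS items and scoring rules are given in the published paper.

ANXIETY_ITEMS = [1, 3, 5, 7]    # hypothetical items tapping attachment anxiety toward AI
AVOIDANCE_ITEMS = [2, 4, 6, 8]  # hypothetical items tapping attachment avoidance toward AI

def subscale_mean(responses: dict[int, int], items: list[int]) -> float:
    """Average the 1-7 Likert responses belonging to one subscale."""
    return sum(responses[i] for i in items) / len(items)

# One participant's (invented) responses, keyed by item number.
responses = {1: 6, 2: 2, 3: 5, 4: 3, 5: 6, 6: 1, 7: 4, 8: 2}
print("anxiety:  ", subscale_mean(responses, ANXIETY_ITEMS))    # 5.25 -> higher anxiety
print("avoidance:", subscale_mean(responses, AVOIDANCE_ITEMS))  # 2.0  -> lower avoidance
```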

However, these findings do not mean that humans are currently forming genuine emotional attachments to AI. Rather, the study demonstrates that psychological frameworks used for human relationships may also apply to human-AI interactions. The present results can inform the ethical design of AI companions and mental health support tools. For instance, AI chatbots used in loneliness interventions or therapy apps could be tailored to different users’ emotional needs, providing more empathetic responses for users with high attachment anxiety or maintaining respectful distance for users with avoidant tendencies. The results also suggest a need for transparency in AI systems that simulate emotional relationships, such as romantic AI apps or caregiver robots, to prevent emotional overdependence or manipulation.

Furthermore, the proposed EHARS could be used by developers or psychologists to assess how people relate to AI emotionally and adjust AI interaction strategies accordingly.

“As AI becomes increasingly integrated into everyday life, people may begin to seek not only information but also emotional support from AI systems. Our research highlights the psychological dynamics behind these interactions and offers tools to assess emotional tendencies toward AI. Lastly, it promotes a better understanding of how humans connect with technology on a societal level, helping to guide policy and design practices that prioritize psychological well-being,” concludes Mr. Yang.

***

Reference

DOI: 10.1007/s12144-025-07917-6

Authors: Fan Yang¹ and Atsushi Oshio¹

Affiliations: ¹Faculty of Letters, Arts and Sciences, Waseda University, Japan

About Waseda University
Located in the heart of Tokyo, Waseda University is a leading private research university that has been dedicated to academic excellence, innovative research, and civic engagement at both the local and global levels since 1882. The University has produced many changemakers in its history, including nine prime ministers and many leaders in business, science and technology, literature, sports, and film. Waseda has strong collaborations with overseas research institutions and is committed to advancing cutting-edge research and developing leaders who can contribute to the resolution of complex, global social issues. The University has set a target of achieving a zero-carbon campus by 2032, in line with the Sustainable Development Goals (SDGs) adopted by the United Nations in 2015.

To learn more about Waseda University, visit https://www.waseda.jp/top/en

 

About Research Associate Fan Yang from Waseda University, Japan
Fan Yang has been a Research Associate and a Ph.D. student in psychology at the Graduate School of Letters, Arts and Sciences, Waseda University, Japan, since 2022. He is also an SNS Manager in the Committee on Public Information of the Japanese Psychological Association. His recent research interests include attachment and information processing, as well as attachment and personal growth. He has authored more than 10 papers in these fields.

Clairity becomes the first FDA-authorized AI platform for breast cancer prediction – historic milestone for women’s health



CLAIRITY BREAST provides clinicians with a first-in-class, novel platform for identifying future risk of breast cancer




LaVoieHealthScience

[Image: Clinical workflow with CLAIRITY BREAST – a first-in-class platform for identifying future five-year risk of breast cancer from a screening mammogram alone, now granted FDA De Novo authorization. Credit: https://clairity.com/]




Boston, MA, and Chicago, IL, June 2, 2025, 08:00 am EDT – Clairity, Inc., a digital health innovator advancing AI-driven healthcare solutions, has received U.S. Food and Drug Administration (FDA) De Novo authorization for CLAIRITY BREAST, a novel, image-based prognostic platform designed to predict five-year breast cancer risk from a routine screening mammogram. With this authorization, Clairity is planning to launch among leading health systems through 2025 – propelling a new era of precision medicine in breast cancer.

Each year, more than 2.3 million new cases of breast cancer are diagnosed worldwide[i], including over 370,000 cases in women in the United States[ii]. Early detection and risk reduction are powerful tools to save lives, but their most effective deployment depends on accurate risk assessment. Most risk assessment models rely heavily on age and family history to predict risk. However, 85% of women diagnosed with breast cancer have no family history, and nearly half have no identifiable risk factors[iii],[iv]. In addition, traditional risk models, built on data from predominantly European Caucasian women, have not generalized well to women of diverse racial and ethnic backgrounds[v].

CLAIRITY BREAST analyzes subtle imaging features on screening mammograms that correlate with future breast cancer risk, making early risk prediction feasible based on a screening mammogram alone. The result is a validated five-year risk score delivered to healthcare providers through existing clinical infrastructures, supporting more personalized follow-up care.
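The release does not describe the model internals. As a rough, hedged illustration of how an image-based risk score of this kind is often built (explicitly not Clairity's method), a convolutional backbone can map a preprocessed mammogram to features, with a small head producing a five-year risk probability:

```python
# Illustrative sketch only – NOT Clairity's method. A convolutional backbone
# maps a preprocessed mammogram to features; a small head turns the features
# into a probability interpreted as five-year risk.
import torch
import torch.nn as nn
from torchvision import models

class ImageRiskModel(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)  # stand-in feature extractor
        backbone.fc = nn.Identity()               # expose the 512-d feature vector
        self.backbone = backbone
        self.head = nn.Linear(512, 1)             # single logit for 5-year risk

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.head(self.backbone(x)))  # risk score in [0, 1]

model = ImageRiskModel().eval()
mammogram = torch.randn(1, 3, 224, 224)  # placeholder for a preprocessed image tensor
with torch.no_grad():
    print(f"predicted 5-year risk: {model(mammogram).item():.1%}")
```

A production system of this kind would be trained on longitudinal outcome data and calibrated against observed five-year incidence; the randomly initialized network above only illustrates the input-to-score plumbing.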

“For more than 60 years, mammograms have saved lives by detecting early-stage cancers. Now, advancements in AI and computer vision can uncover hidden clues in the mammograms – invisible to the human eye – to help predict future risk,” said Dr. Connie Lehman, Founder of Clairity, who is also a breast imaging specialist at Mass General Brigham. “By delivering validated, equitable risk assessments, we can help expand access to life-saving early detection and prevention for women everywhere.”

“Personalized, risk-based screening is critical to improving breast cancer outcomes, and AI tools offer us the best opportunity to fulfill that potential,” said Dr. Robert A. Smith, Senior Vice President of Early Cancer Detection Science at the American Cancer Society. “By integrating AI models that assess individual risk, we can better identify women at higher risk, and those who may benefit from supplemental screening methods, such as MRI, improving early detection and more effective prevention strategies.”

“Clairity’s FDA authorization is a turning point for more women to access the scientific advances of AI-driven cancer risk prediction,” said Larry Norton, Founding Scientific Director of the Breast Cancer Research Foundation. “Breast cancer is rising, especially among younger women, yet most risk models often miss those who will develop the disease. Now we can ensure more women get the right care at the right time.”

“What makes the availability of CLAIRITY BREAST a true sea change is that we’re now predicting risk of future cancer from patterns in breast tissue, in an otherwise normal screening, before it’s even there,” said Jeff Luber, CEO of Clairity. “CLAIRITY BREAST is designed to fit seamlessly into the current clinical infrastructure to help providers scale precision prevention – with the goal of reducing late-stage diagnoses, lowering costs, and saving more lives.”

The FDA De Novo authorization positions CLAIRITY BREAST as a first-in-class platform within the $63 billion global breast cancer prediction market, ushering in a new standard for personalized, risk-based screening and cancer prevention.

Be the first to know when it launches, and how you can get it: https://clairity.com/first-to-know/ 

About CLAIRITY BREAST

CLAIRITY BREAST, authorized under the name Allix5, is a mammography-based AI risk prediction platform that analyzes imaging data at the pixel level to identify individuals at elevated risk of future breast cancer. The AI model behind CLAIRITY BREAST was trained on millions of images and validated across more than 77,000 mammograms from five geographically distinct screening centers – including hospital-based and free-standing facilities – that collectively serve a diverse patient population, with validation anchored in five-year outcome data. To learn more about indications for use, visit: https://clairity.com/clairity-breast/   

Clairity’s first-in-class platform was designed to complement existing clinician workflows, making it uniquely positioned to address, at scale, the widespread shortfalls in breast cancer risk assessment and cancer prevention.

About Clairity

Founded in 2020 by Dr. Connie Lehman and headquartered in Boston, Massachusetts, Clairity, Inc. is transforming healthcare risk assessment through the power of artificial intelligence and deep learning. Backed by Santé Ventures and ACE Global Equity, Clairity's technology can uncover subtle patterns in routine images that are invisible to the human eye, enhancing risk prediction to empower clinicians and their patients with actionable, personalized insights. Clairity’s mission is to shift the standard of care from late-stage treatment to proactive prevention. To learn more, visit us at www.clairity.com | LinkedIn

References:

[i] https://acsjournals.onlinelibrary.wiley.com/doi/10.3322/caac.21834

[ii] https://www.cancer.org/content/dam/cancer-org/research/cancer-facts-and-statistics/annual-cancer-facts-and-figures/2025/2025-cancer-facts-and-figures-acs.pdf

[iii] https://www.who.int/news-room/fact-sheets/detail/breast-cancer

[iv] https://www.breastcancer.org/facts-statistics

[v] https://www.nejm.org/doi/full/10.1056/NEJMms2004740

Patients say “Yes...ish” to the use of AI in dentistry



As AI continues to be integrated into healthcare, a new multinational study involving Aarhus University sheds light on how dental patients really feel about its growing role in diagnostics. The verdict? Patients are cautiously optimistic



Aarhus University





As artificial intelligence (AI) continues to be integrated into healthcare, a new multinational study involving Aarhus University sheds light on how dental patients really feel about its growing role in diagnostics. The verdict? Patients are cautiously optimistic, welcoming the potential benefits of AI but drawing a firm line: humans must stay in charge.

Smile – you’re on camera. Most of us have probably tried resting our jaw in an X-ray machine while a highly trained dentist examines whether everything looks as it should. But with the rise of artificial intelligence (AI), it’s increasingly likely that it will be a computer, not a human, interpreting those images. A quiet revolution in dental care is underway. But how do patients actually feel about this shift? That’s exactly what researchers from Aarhus University set out to investigate.

The research explored patients’ attitudes toward the use of AI in reviewing dental imaging, an area of increasing adoption about which little is known from the patient’s perspective. “We saw a gap in the conversation,” explains Associate Professor Ruben Pauwels from the Department of Dentistry and Oral Health at Aarhus University. “Dentists and technologists are often the focus, but patients' voices matter if AI is to be successfully implemented.”

Overall, patients viewed AI as a useful diagnostic support tool that can enhance accuracy and efficiency. Yet the study revealed persistent concerns—especially around data privacy and the fear that AI might drive up healthcare costs rather than reduce them. Crucially, the overwhelming majority of participants insisted that AI should not operate without professional human oversight.

The study also highlighted cultural nuances among the six countries involved. Brazilian participants, for instance, were more open to AI replacing dentists in some situations—perhaps reflecting frustrations with long wait times and uneven care quality in the country’s health system.

The patients’ views actually mirror those of dental professionals. Previous research shows that dentists welcome AI’s potential but underscores the need for ethical safeguards and rigorous validation before full-scale adoption.

A tool – not a replacement 

According to Ruben Pauwels, the study shows that it is important to treat AI as an auxiliary tool in healthcare while remaining mindful that it cannot replace human expertise.

“Our findings also show how important it is to communicate clearly about when and how we use AI and actively seek out educational opportunities for both professionals and patients to understand AI capabilities and limits. Lastly, we have a responsibility to continuously evaluate and validate AI systems to ensure their reliability and effectiveness in clinical practice.”

Looking ahead, researchers expect public attitudes to evolve as AI becomes more common and better understood. And Aarhus University is already preparing: Starting in 2026, AI training will be part of the dental curriculum, and the team is developing communication tools to help clinics explain AI’s role to patients clearly and objectively.

The research - more information

  • Study type: Multinational cross-sectional observational survey
  • Collaborators at Aarhus University: Ruben Pauwels and Rubens Spin-Neto
  • International collaborators: Camila Tirapelli (University of São Paulo) and four other institutions
  • External funding: None
  • Conflicts of interest: None
  • Read more in the scientific paper:  https://doi.org/10.1093/dmfr/twaf018
