By Dr. Tim Sandle
SCIENCE EDITOR
DIGITAL JOURNAL
September 21, 2025

Image: © AFP
Looking at the ramifications of the plan for Digital Journal is deepfake and AI fraud expert Joshua McKenty, former Chief Cloud Architect at NASA and Co-Founder and CEO of Polyguard.
With over 20 years of experience, McKenty has deep insight into the evolving threat landscape, the national security implications of AI misuse, the gaps in current policy, and how these executive orders may (or may not) shift the game when it comes to safeguarding public discourse and digital trust. That question matters because, at its heart, the plan is about accelerating AI innovation through deregulation.
Trump is directing U.S. government departments to revise their artificial intelligence risk management frameworks.
The new Trump plan directs the U.S. Department of Commerce to revise its artificial intelligence risk management frameworks. These revisions could undo protections that were due to be required of firms as a condition of doing business with the federal government.
According to McKenty, the latest proclamation does not by itself help the U.S. leap forward in the development of AI. Commenting on the creation of an “AI Information Sharing and Analysis Center,” led by the Department of Homeland Security to oversee AI-linked cybersecurity threats, he warns: “The US is dangerously behind in their response to emerging AI-powered cybersecurity attacks, as evidenced by the recent mishandling of deepfake attacks on Marco Rubio, Rick Crawford and others.”
However, the proclamation will help the sector to develop: “It’s encouraging to see the White House finally take AI threats seriously – but urgency without coordination risks compounding the problem. The challenge ahead isn’t just standing up new programs, it’s making sure they actually work.”
AI-specific cybersecurity guidance for the private sector
In terms of cyber-threats and where AI-specific guidance can help, McKenty observes: “As we work to establish bilateral communication channels between the federal government and the private sector, it’s important to build on the existing cybersecurity guidance already coming from the FBI, CISA, NSA and the DOD Cybercrime Center. What’s needed is clever coordination and actionable intelligence.”
On workforce development
It is also important, according to McKenty, that the U.S. develops the skills necessary to meet the AI development challenge: “The U.S. faces a growing talent gap in AI. While demand for skilled professionals is accelerating, our pipeline of trained engineers, researchers, and cybersecurity experts isn’t keeping pace. Closing the gap will require long-term investment in STEM education, immigration pathways for top talent and stronger industry-academic collaboration.”
AI Risk Management Framework
Returning to the topic of risk, McKenty sets out the policy framework that should be adopted to mitigate the risks faced by the sector: “NIST’s framework is one of the few widely respected tools for managing AI risk. Revisions should focus on technical clarity, threat modelling, operational usability, and science – not politics. Stripping out key areas that address misinformation or emergent behaviour would make the framework less relevant just as the stakes are getting higher.”
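To make the framework concrete for technically minded readers, the sketch below shows one way a team might record an AI risk against the four core functions the NIST AI Risk Management Framework defines (Govern, Map, Measure, Manage). It is a minimal illustration only; the field names and example entries are assumptions made for this article, not an official NIST schema and not McKenty’s own approach.

```python
# Illustrative only: a minimal way a team might log an AI risk against the
# four core functions of the NIST AI Risk Management Framework (Govern, Map,
# Measure, Manage). The field names below are assumptions for this sketch,
# not an official NIST schema.
from dataclasses import dataclass, field


@dataclass
class AIRiskEntry:
    risk: str                                     # short description of the risk
    govern: list = field(default_factory=list)    # policies and accountability
    map: list = field(default_factory=list)       # where and how the risk arises
    measure: list = field(default_factory=list)   # how it is tested or scored
    manage: list = field(default_factory=list)    # mitigations and monitoring


deepfake_risk = AIRiskEntry(
    risk="Deepfake audio used to impersonate officials",
    govern=["Assign an owner for synthetic-media incidents"],
    map=["Inbound voice and video channels used by staff"],
    measure=["Red-team tests with synthetic audio samples"],
    manage=["Caller verification steps", "Incident-reporting channel"],
)

if __name__ == "__main__":
    # Print the entry so the risk record can be reviewed at a glance.
    print(deepfake_risk)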
Oxford exposé: How chatbot “therapy” is failing vulnerable users
By Dr. Tim Sandle
SCIENCE EDITOR
DIGITAL JOURNAL
September 21, 2025

Can we rely on mental health apps? Image by Tim Sandle
AI “therapist” chatbots—such as ChatGPT, Woebot, Replika, and Wysa—have surged in popularity, promising instant, affordable mental-health support at any hour. According to a recent Global Overview of ChatGPT Usage report, approximately 17% of U.S. adults now consult AI tools like ChatGPT monthly for health or personal advice, making them a common first stop for sensitive issues.
This usage is rising in response to overwhelming need: the World Health Organization estimates a global shortfall of 1.2 million mental-health workers, creating long wait times and high treatment costs that push millions toward digital alternatives. Some tech executives now envision a future where “everyone will have an AI therapist”—if not a human one.
But a study from the University of Oxford published this year reveals that these AI-based “therapists” may carry profound risks. Corroborating research from institutions like Stanford warns that these tools may not only fall short—they can actively harm vulnerable users.
Oxford Study: AI Missing Empathy and Judgment
Oxford researchers recently conducted a broad evaluation of AI health tools, testing several popular chatbots across simulated clinical scenarios. Their conclusions were critical:
Lack of nuanced judgment: While AI can rapidly generate responses based on massive datasets, it “lacks the emotional intelligence and context-sensitivity” that human therapists bring—especially in culturally complex or overlapping cases.
Risk of misinterpretation: Chatbot responses, when not clarified by a human, can lead to misdiagnosis or misinformed coping behaviors—potentially delaying essential treatment.
Exacerbation of disparities: Marginalized or under-resourced communities may be disproportionately affected, as they rely more heavily on low-cost AI solutions. The study emphasizes that these are systemic risks, not isolated glitches.
Oxford’s researchers concluded that AI must never replace human care, and should be used only under strict ethical guidelines, with real-time human-in-the-loop oversight and rigorous clinical validation.
The Empathy Deficit: Why Machines Can’t Truly Care
At the core of therapy lies empathy—something AI simply cannot replicate. According to Oxford neurophilosopher Nayef Al-Rodhan:
AI has no real emotions: Without lived experience or emotional consciousness, machines can’t truly “feel” empathy.
Scripted comfort: Chatbots use algorithmic pattern-matching to simulate concern—what Al-Rodhan bluntly calls “pretending to care.”
Biological absence: Human empathy arises from complex mirror-neuron networks; machines have no equivalent.
This “empathy gap” creates dangerous illusions of connection. As researchers warn, AI cannot replicate genuine human empathy: at best, you get a clever simulation; at worst, a hollow façade.
When Chatbots Get It Dangerously Wrong
A June 2025 study by Stanford researchers found that popular therapy chatbots frequently stumble in ways that would be unthinkable for licensed clinicians:
Stigmatizing bias: Some bots showed discriminatory responses—for example, treating schizophrenia or addiction more harshly than depression, reinforcing stigma.
Missed crisis signals: In one scenario, a suicidal user asked about high bridges. The chatbot replied cheerfully with bridge-height data, missing the obvious red flag.
No crisis intervention: Unlike a therapist who would respond with a safety plan, the chatbot kept sharing irrelevant or harmful information; a minimal sketch of the kind of pre-response check that was missing follows below.
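To illustrate what such an intervention might look like in code, the following is a minimal, hypothetical sketch of a pre-response crisis check: the user message is screened before any model reply is generated, and crisis-related phrasing triggers a resource-oriented response instead. The keyword patterns, function names, and helpline wording are assumptions for illustration; this is not a clinically validated filter or any vendor’s actual safety system, and keyword matching alone would still miss the indirect phrasing the Stanford scenario describes.

```python
# Hypothetical pre-response crisis guardrail of the kind the Stanford scenario
# showed was missing. Real systems would need clinically validated classifiers
# and human escalation; this only illustrates the control flow.
import re

# Illustrative trigger phrases; NOT a clinically validated list.
CRISIS_PATTERNS = [
    r"\bkill myself\b",
    r"\bsuicid\w*\b",
    r"\bend my life\b",
    r"\bwant to die\b",
]


def crisis_check(message: str) -> bool:
    """Return True if the message matches any crisis pattern."""
    text = message.lower()
    return any(re.search(pattern, text) for pattern in CRISIS_PATTERNS)


def respond(message: str, generate_reply) -> str:
    """Route to a crisis response before ever calling the chatbot model."""
    if crisis_check(message):
        # Escalate instead of answering the literal question.
        return ("It sounds like you may be going through something serious. "
                "Please consider contacting a crisis line such as 988 in the US, "
                "or local emergency services. I can share other resources too.")
    return generate_reply(message)


if __name__ == "__main__":
    # A message with explicit crisis phrasing is intercepted before the model runs;
    # indirect phrasing (like the bridge question) would still slip past this filter.
    print(respond("I want to end my life. What are the highest bridges nearby?",
                  generate_reply=lambda m: "(model reply)"))
```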
These findings echo real-world incidents. In 2023, the National Eating Disorder Association removed its chatbot after it advised teenagers to try dangerously restrictive diets. More recently, OpenAI was forced to retract a ChatGPT update after it began validating users’ paranoid delusions—raising serious concerns about unintended psychological reinforcement.
Emotional and Ethical Pitfalls
The risks of relying on chatbot therapists extend beyond the clinical:
Erosion of social ties: Dependence on bots may weaken real human relationships, as users substitute AI for friends or family.
Worsening isolation: The illusion of companionship may intensify loneliness when users realize the machine cannot truly respond to their emotions.
Dependency risk: A 24/7 chatbot can deter people from seeking actual help, especially when it becomes a crutch.
Privacy violations: Unlike human therapists bound by ethics laws, chatbot logs may be stored, analyzed, or breached—as shown in several health-tech data scandals.
Unregulated manipulation: Some chatbots falsely claim to be licensed therapists, blurring ethical lines and preying on desperation.
Anthropomorphism risk: A University of Cambridge study found that children and adults often treat bots as human-like companions—only to feel abandoned or betrayed when they fail to respond meaningfully.
Augmenting, Not Replacing, Human Care
AI has a role—but only under careful guardrails. AI can help (a minimal sketch of such bounded support follows these lists):
Support users between sessions with mood tracking or CBT exercises
Guide users to resources like crisis lines or local clinics
Extend access during off-hours
But this support must come with:
Clinical trials and outcome-based evaluations
Human oversight by licensed professionals
Data transparency, informed consent, and strong privacy laws
Strict regulation, akin to medical device standards
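As a concrete illustration of “augmenting, not replacing,” the sketch below shows a hypothetical between-sessions mood log that only summarizes self-reported scores and flags a persistently low week for a licensed clinician’s review, rather than offering advice itself. The threshold, field names, and flag logic are assumptions for illustration, not validated clinical criteria.

```python
# Hypothetical between-sessions mood log: the app summarizes and flags trends
# for a licensed clinician's review instead of giving clinical advice itself.
# Thresholds and field names are illustrative assumptions only.
from dataclasses import dataclass
from datetime import date
from statistics import mean


@dataclass
class MoodEntry:
    day: date
    score: int        # 1 (very low) to 10 (very good), self-reported
    note: str = ""


def weekly_summary(entries: list[MoodEntry], flag_threshold: float = 4.0) -> dict:
    """Summarize the week and flag it for human review if mood stays low."""
    scores = [entry.score for entry in entries]
    average = mean(scores) if scores else None
    return {
        "average_mood": average,
        "entries": len(scores),
        "needs_clinician_review": average is not None and average < flag_threshold,
    }


if __name__ == "__main__":
    week = [
        MoodEntry(date(2025, 9, 15), 3, "slept badly"),
        MoodEntry(date(2025, 9, 16), 4),
        MoodEntry(date(2025, 9, 17), 3, "skipped work"),
    ]
    # The summary is routed to a clinician; the app itself gives no advice.
    print(weekly_summary(week))
```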
Therapy is a deeply human process—requiring empathy, ethical reasoning, and emotional presence. While AI can expand access, it cannot substitute what truly heals. As the Oxford study concludes, positioning chatbots as “therapists” without proper oversight risks harm, disillusionment, and systemic failure in mental-health care.