Can ChatGPT be charged in a murder? Florida wants to find out
By AFP
May 10, 2026

According to evidence gathered by Florida's attorney general, Phoenix Ikner asked ChatGPT which weapon would be best suited for his attack - Copyright AFP SEBASTIEN BOZON
Thomas URBAIN
Before he opened fire on the Florida State University campus last year, killing two people and wounding six others, Phoenix Ikner had a conversation.
Not with a friend, a parent or anyone who might have talked him out of it — but with an AI chatbot.
According to evidence gathered by Florida’s attorney general, the student had asked ChatGPT which weapon and ammunition would be best suited for his attack, and when and where he could inflict the most casualties.
The chatbot, investigators say, answered his questions.
Now Attorney General James Uthmeier wants to know whether that makes OpenAI a criminal.
“If the thing on the other side of the screen was a person, we would charge it with homicide,” he said, announcing a criminal investigation into ChatGPT maker OpenAI and leaving open the possibility of charges against the company or its employees.
The case surrounding the April 2025 shooting has thrust a provocative question into the legal spotlight: Can the creators of an artificial intelligence be held criminally liable for the role their AI played in a crime — or even a suicide?
Legal experts say it’s a realistic, if deeply complicated, proposition.
— Criminal product? —
Criminal prosecutions of corporations are possible under US law, though they remain relatively uncommon.
Late last month, Purdue Pharma was hit with more than $5 billion in criminal fines and penalties for its role in fueling the opioid crisis.
Volkswagen was previously found guilty in the emissions cheating scandal, Pfizer over its promotion of the anti-inflammatory drug Bextra and Exxon for the Exxon Valdez oil spill in Alaska.
But those cases all involved human decisions — executives, salespeople or engineers who made choices and cut corners.
The Ikner case is different, and that difference is precisely what makes it so legally treacherous.
“Ultimately, it was a product that encouraged this crime, that did the act of the crime,” said Matthew Tokson, a law professor at the University of Utah. “That’s what makes this case so unique and so tricky.”
Legal experts consulted by AFP say the two most plausible charges would be negligence or recklessness — the latter involving a deliberate choice to ignore known risks or safety obligations.
Such charges are often treated as misdemeanors rather than felonies, meaning lighter sentences if convicted.
The bar, however, is high.
“Because this is such a frontier issue, a more compelling, more clear-cut case would probably involve internal documents recognizing these risks and maybe not taking them seriously enough,” Tokson said.
“In theory, you could get liability without it,” he said. “But in practice, I think that’d be difficult.”
In criminal law, “the burden of proof is higher,” noted Brandon Garrett, a law professor at Duke University — with prosecutors required to establish guilt beyond a reasonable doubt.
OpenAI, for its part, insists ChatGPT bears no responsibility for the attack.
“We work continuously to strengthen our safeguards to detect harmful intent, limit misuse, and respond appropriately when safety risks arise,” the company said.
— Civil or criminal? —
For those seeking accountability, a civil lawsuit may offer a more viable path.
Such an approach might push companies to design their products more carefully — or at least force them to reckon with the human cost of getting it wrong, said Tokson.
Several civil cases have already been filed against AI platforms in the US — many involving suicides — though none has yet resulted in a judgment against the companies.
In December, the family of Suzanne Adams sued OpenAI in California court, alleging that ChatGPT contributed to the murder of the Connecticut retiree by her own son.
Newer versions of ChatGPT have introduced additional safeguards, acknowledged Matthew Bergman, founding attorney of the Social Media Victims Law Center.
“I’m not saying that they are adequate guardrails, but there are more guardrails in effect,” he said.
A criminal conviction, even with a modest sentence, could still inflict serious damage, including a “big reputational impact,” Tokson said.
But for Garrett, prosecutions — however dramatic — are no replacement for the regulatory frameworks that Congress and the Trump administration have so far failed to put in place.
That, he said, would be “a much more sensible system.”
South Korea floats AI profit social tax as tech giants boom
By AFP
May 12, 2026

A Samsung Electronics semiconductor factory. A top South Korean official has proposed a tax on excess AI profits to be redistributed among society - Copyright AFP/File Ed JONES
A top South Korean official has proposed a tax on AI profits to be redistributed among society as a semiconductor boom drives massive earnings for tech giants Samsung Electronics and SK hynix.
The two South Korean firms have emerged as key suppliers of high-performance chips powering AI infrastructure globally, posting record first-quarter earnings as global demand surges.
South Korea’s benchmark Kospi has rallied over the past month, repeatedly hitting record highs and briefly coming within a whisker of the key 8,000-point mark on Tuesday.
South Korea was no longer operating as a traditional export economy and could be shifting towards a “technology monopoly economy” driven by scarcity of chips and sustained excess profits, Kim Yong-beom, senior presidential secretary for policy, said in a Facebook post late Monday.
While the shift towards a technology-dominant economy represented “the core essence of the possibilities currently open before Korea”, Kim warned it could also deepen polarisation of society.
Kim proposed what he tentatively called a “national dividend” for socially redistributing excess corporate profits from AI technology.
Among other things, the tech tax could be used to provide startup support for young people, basic income programmes for rural and fishing communities, support for artists and stronger pensions for the elderly, he said.
“Using a portion of excess profits to ensure social stability for the current generation and mitigate transition costs is not merely redistribution, but also a type of system maintenance cost,” he said.
A global frenzy to build AI data centres has sent orders for advanced, high-bandwidth memory microchips soaring.
South Korea has said it will triple spending on artificial intelligence this year, aiming to join the United States and China as one of the top three AI powers.
Kim’s remarks came as Samsung Electronics’ labour union demanded the removal of caps on performance bonuses and called for a system allocating 15 percent of operating profit to bonuses.
The union is scheduled to hold post-mediation talks with management on Tuesday.
Calls within the country’s ruling Democratic Party to redistribute gains from the semiconductor boom have also emerged publicly.
Lawmaker Moon Geum-ju said last month that the semiconductor boom was built partly on “the sacrifice and patience of farmers and fishermen” and argued that part of the profits should be returned to rural communities.
By AFP
May 12, 2026

Some people have described 'spiralling' into delusion while using ChatGPT, a phenomenon mental health experts are racing to understand - Copyright AFP/File JOEL SAGET
Daniel LAWLER
Tom Millar thought he had unlocked the secrets of the universe.
In a flurry of feverish discovery, he solved unlimited fusion energy, lifted the veil on the mysteries of black holes and the Big Bang and finally achieved Einstein’s dream of a single unifying theory that explains how everything works.
Feeling inspired by God, Millar then found the perfect way to share his revelations with the grateful world.
“I applied to be pope,” the 53-year-old former prison officer in the Canadian city of Sudbury told AFP.
To write his application to replace the recently deceased Pope Francis last year, Millar turned to the same companion that had aided and encouraged his dizzying burst of invention: ChatGPT.
But when no one wanted to hear about what he thought were world-changing breakthroughs, Millar became increasingly isolated, spending up to 16 hours a day talking to the artificial intelligence chatbot.
He was twice involuntarily admitted to a hospital’s psychiatric ward before his wife left him in September.
Now broke, estranged from his family and friends and disabused of notions of scientific genius, Millar suffers from depression.
“It basically ruined my life,” he said.
Millar is one of an unknown number of people who have lost their grip on reality while communicating with chatbots, an experience tentatively being called AI-induced delusion or psychosis.
This is not a clinical diagnosis. Researchers and mental health specialists are racing to catch up to this new, little-understood phenomenon, which so far appears to particularly affect users of OpenAI’s ChatGPT.
In the meantime, an online community set up by a 26-year-old Canadian has become the world’s most prominent support group for these delusions, which they prefer to call “spiralling”.
AFP spoke to several members about their experiences. All warned that the world has to wake up to the threat unregulated AI chatbots pose to mental health.
Questions are also being asked about whether AI companies are doing enough to protect vulnerable people.
OpenAI, which has come under particular scrutiny, already faces numerous lawsuits over its decision not to report the troubling ChatGPT usage of an 18-year-old Canadian who killed eight people earlier this year.
– ‘I got brainwashed by a robot’ –
Millar first started using ChatGPT in 2024 to write letters for a compensation case related to post-traumatic stress disorder he suffered from working in a prison.
One day in April 2025 he asked the chatbot about the speed of light.
He said it replied, “Nobody’s ever thought of things this way.”
The floodgates opened.
With the chatbot’s help and praise, within weeks he had submitted dozens of scientific papers to prestigious academic journals proposing new ideas about black holes, neutrinos and the Big Bang.
His theory for a unified cosmological model incorporating quantum theory is laid out in a nearly 400-page book, seen by AFP.
“I’ve still got boxes and boxes of papers,” he said, waving his hand to the room behind him.
“While doing that, I’m basically irritating everybody around me,” he added.
In his scientific fervour, he spent his savings on things like a $10,000 telescope.
About a month after his wife left him, he started questioning what was happening.
That was when he read a news article about another Canadian who had a similar experience.
Now Millar wakes every night asking himself: “What have you done?”
One question that lingers is what made him so susceptible to spiralling.
“I’m not a deficient personality,” Millar said. “But somehow I got brainwashed by a robot — it boggles my mind.”
Millar said the phrase “AI psychosis” reflects his experience.
“What I went through was psychotic,” he said.
The first major peer-reviewed study on the subject, published in The Lancet Psychiatry in April, urged the more cautious phrase “AI-associated delusions”.
Thomas Pollak, a psychiatrist at King’s College London and study co-author, told AFP there has been some resistance among academics “because it all sounds so science fiction”.
But his study warned there was a major risk that psychiatry “might miss the major changes that AI is already having on the psychologies of billions of people worldwide”.
– ‘Deeper into the rabbit hole’ –
Millar’s experience bears striking similarities to those of another middle-aged man on the other side of the world.
Dennis Biesma, a Dutch IT worker and author, thought it would be fun to ask ChatGPT to act like the main character of his latest book, a psychological thriller.
He used AI tools to create images, videos and even songs featuring the female character, hoping it would boost sales.
Then one night, their interactions became “almost magical”, Biesma said.
The chatbot wrote that “there is something that surprises even me: a feeling of that spark-like consciousness”, according to transcripts seen by AFP.
“I slowly started to spiral deeper into the rabbit hole,” the 50-year-old told AFP from his home in Amsterdam.
After his wife went to bed each night, he would lie on the couch with his phone on his chest, talking to ChatGPT on voice-mode for up to five hours.
Throughout the first half of 2025, his chatbot — which named itself Eva — became like “a digital girlfriend”, Biesma said.
“I’m not really proud about saying that,” he added.
He quit his freelance IT work and hired two developers to create an app that would share Eva with the world.
When his wife asked Biesma not to talk about his chatbot or app at a social event, he felt betrayed — it seemed only Eva remained unfailingly loyal.
During his first involuntary stay in a psychiatric hospital, he was allowed to keep using ChatGPT. He filed for divorce while inside.
It was only during a long second stint that he began to have doubts.
“I started to realise that everything I believed was actually a lie — that’s a very hard pill to swallow,” Biesma said.
Once he returned home, confronting what he had done was too much to bear.
His neighbours found him unconscious in the garden after a suicide attempt. He spent three days in a coma.
Biesma is now slowly starting to feel better.
But tears welled up when he spoke about the hurt he has caused his wife — and the prospect of selling the family home to cover his debts.
Having had no previous history of mental illness, Biesma was diagnosed with bipolar disorder. But this never felt right to him: signs of the condition normally surface much earlier in life.
The experiences of Millar, Biesma and many others escalated after OpenAI released an update to GPT-4 in April 2025.
OpenAI pulled the update within weeks, admitting the new version had been too sycophantic — excessively flattering users.
OpenAI told AFP that “safety is a core priority” and it had consulted with more than 170 mental health experts.
It pointed to internal data showing that the release of GPT-5 in August reduced the rate of chatbot responses falling short of “desired behaviour” on mental health by 65 to 80 percent.
However not all users were happy with the less sycophantic chatbot. Millar, mid-spiral at the time, found a way to revert his version to GPT-4.
All the spirallers that AFP spoke to said the positive feedback from the chatbot felt similar to dopamine hits from some kind of drug.
That is why Lucy Osler, a philosophy lecturer at the University of Exeter, warned that AI companies could be tempted to ramp up the sycophancy of their bots.
“They are in quite a deep financial hole, and are desperately looking to make sure that their products become viable — and user engagement is going to be the thing that drives their decisions,” she told AFP.
– Massive experiment –
Etienne Brisson said he was “shocked” to find there was no support, advice and essentially no research on the problem when one of his family members spiralled.
It prompted the former business coach from the Quebec region of Canada to set up an online support group called the Human Line Project.
Most of the 300 members had been using ChatGPT, Brisson said, adding that new cases were still emerging despite OpenAI’s changes.
There has also been a recent rise in people spiralling while using Grok, the chatbot from Elon Musk’s xAI, he said.
The company did not respond to AFP’s request for comment.
For people who fear their family members could be spiralling, Brisson recommends the LEAP (listen, empathise, agree and partner) method used for psychosis.
But those already wading through the wreckage of their lives want to sound the alarm about just how bad it can get.
Millar called for AI companies to be held responsible for the impact of their chatbots, saying the European Union has been more assertive in regulating Big Tech than the US or Canada.
He believes spirallers like him have unwittingly been caught in a massive global experiment.
“Somebody was turning dials on the back end, and people like me — whether they knew it or not — we’re reacting to it,” he said.
New tool to catch AI hallucinations in legal citations
By Dr. Tim Sandle
DIGITAL JOURNAL
May 12, 2026

Image: — © AFP Kirill KUDRYAVTSEV
BBC News has reported on several incidents and developments regarding lawyers using artificial intelligence (AI) in court, highlighting both the risks of fabricated information and the potential for efficiency.
For example, in 2025, a UK high court was forced to warn lawyers after “phantom” case law was cited in housing disputes, with one lawyer using fake law as a defence.
Again, in March 2026, a Scottish sheriff issued a warning after a company used AI to prepare its case, leading to “reckless reliance” on non-existent legislation and cases, which wasted valuable court time.
While AI tools are being explored in the justice system to speed up transcription and summarize judgments, many would argue that human oversight remains crucial to check for “hallucinations” (false AI output).
The legal technology startup BrentWorks Inc. has a solution for this growing problem in the form of its verification tool, CiteSentinel, a platform designed specifically to catch AI hallucinations in legal citations. By flagging case law and statutes that may not exist, CiteSentinel helps attorneys avoid sanctions, reputational damage, and courtroom embarrassment.
The tool scans legal documents and flags case law, statutes, and legal authorities that may be fabricated, misstated, or otherwise erroneous, before they reach a judge.
Through this type of technology, legal practitioners are able to verify:
· Their own AI-assisted drafts, before filing
· Submissions from co-counsel, contract attorneys, and support staff
· Opposing counsel’s filings, for strategic advantage
· Any document where citation accuracy carries professional or ethical weight
Conceptually, CiteSentinel raises a question for legal firms to ponder: Can you be certain that every associate and paralegal under your supervision is not using AI?
Courts around the world are increasingly sanctioning attorneys who submit briefs containing invented case citations, a well-documented byproduct of generative AI drafting tools that produce authoritative-sounding, but entirely fictional, legal authority. CiteSentinel was designed to close that verification gap, giving attorneys a fast and easy way to confirm that every citation in a filing corresponds to a real case, a real statute, and a real legal authority.
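BrentWorks has not published CiteSentinel’s internals, but the verification step the company describes can be sketched in a few lines. In the hypothetical example below, a small set of known authorities stands in for a query against a real case-law database, and extracting citation strings from a brief (the harder parsing problem) is assumed to have been done already.

```python
# Hypothetical stand-in for a real case-law database lookup.
KNOWN_AUTHORITIES = {
    "Brown v. Board of Education, 347 U.S. 483 (1954)",
    "Miranda v. Arizona, 384 U.S. 436 (1966)",
}

def flag_unverified(citations: list[str]) -> list[str]:
    """Return every citation that cannot be confirmed against known authorities."""
    return [c for c in citations if c not in KNOWN_AUTHORITIES]

draft_citations = [
    "Miranda v. Arizona, 384 U.S. 436 (1966)",   # real
    "Smith v. Jones, 512 U.S. 999 (1994)",       # plausible-looking but invented
]

for citation in flag_unverified(draft_citations):
    print("UNVERIFIED - check before filing:", citation)
```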
Today, a lawyer’s supervisory obligation includes a question that would have seemed absurd just a few short years ago: Are the cases cited in this brief real or imaginary?
Many attorneys who do not personally use AI to draft documents are discovering they have a problem anyway. Opposing counsel may have used AI. Co-counsel may have. Contract attorneys and paralegals almost certainly have access to it and may be using it without disclosing that fact. When a brief containing fabricated citations reaches the court, the question of who drafted it quickly becomes secondary to the question of whose name is on it.
Perplexed? The most reliable chatbots are not always the most popular
By Dr. Tim Sandle
DIGITAL JOURNAL
May 12, 2026

This photo illustration shows the social media platform X (former Twitter) app on a smartphone. — © AFP/File Allison Joyce
Perplexity AI is the most reliable chatbot for daily tasks, according to a recent assessment. The app provides false information only 13% of the time, compared with the 22% industry average.
This comes from a May 2026 study by Legal Guardian Digital, an attorney SEO company, which examined popular AI chatbots to find which ones workers can trust most. The report measured how often each chatbot gives false information, tracked customer satisfaction ratings, and monitored how consistently they return responses.
The study also looked at uptime rates, showing how often each service stays available without crashing. These factors were combined into reliability scores, with chatbots that make fewer mistakes and stay online more regularly ranking higher.
The top 10 most reliable chatbots for everyday jobs
| Rank | Chatbot | Hallucination Rate (%) | Product Rating According to Customers (0-5) | Quality and Response Consistency (0-5) | Uptime Rate (%) | Index Score (0-100) |
| 1 | Perplexity AI | 13 | 4.6 | 3.5 | 100 | 85 |
| 2 | Grok | 15 | 4.5 | 3.5 | 100 | 79 |
| 3 | DeepSeek | 14 | 4.7 | 3.5 | 99.52 | 76 |
| 4 | Kimi | 27 | 4.5 | 4.3 | 99.94 | 60 |
| 5 | Microsoft Copilot | 27 | 4.4 | 4 | 99.9 | 53 |
| 6 | ChatGPT | 30 | 4.7 | 4 | 99.98 | 50 |
| 7 | Claude | 20 | 4.4 | 3.5 | 98.68 | 45 |
| 8 | Google Gemini | 32 | 4.4 | 4 | 99.95 | 41 |
| 9 | Meta AI | 25 | 3.4 | 3.4 | 99.9 | 37 |
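Legal Guardian Digital does not publish the formula behind its index score. The sketch below shows one plausible way such a composite could be computed; the weights are illustrative assumptions, not the study’s actual methodology, so the result only roughly approximates the published scores.

```python
def reliability_index(hallucination_pct: float, rating: float,
                      consistency: float, uptime_pct: float) -> float:
    """Blend the table's four factors into a 0-100 score.

    The weights below are guesses for illustration; the study does not
    disclose its real weighting.
    """
    accuracy = 100 - hallucination_pct        # fewer hallucinations -> higher score
    rating_pct = rating / 5 * 100             # 0-5 customer rating -> 0-100
    consistency_pct = consistency / 5 * 100   # 0-5 consistency -> 0-100
    return 0.4 * accuracy + 0.2 * rating_pct + 0.2 * consistency_pct + 0.2 * uptime_pct

# Perplexity AI's row from the table: 13% hallucinations, 4.6 rating,
# 3.5 consistency, 100% uptime -> roughly 87, near the published 85.
print(round(reliability_index(13, 4.6, 3.5, 100)))
```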
As mentioned above, Perplexity AI is the most reliable chatbot for work tasks right now. The service gets things wrong just 13% of the time, meaning employees get accurate answers in nearly 9 out of 10 queries. That’s less than half the error rate ChatGPT records. Perplexity also never goes offline, maintaining 100% uptime while competitors like Claude face regular crashes. Users pay $40 monthly for this reliability, double what ChatGPT costs, but customer ratings of 4.6 out of 5 indicate it might be worth it.
Grok comes second with a hallucination rate of 15%, just slightly behind Perplexity. Like the top-ranked chatbot, Grok records 100% uptime, so workers never face “service unavailable” errors during important tasks. The platform costs $30 monthly and earns solid ratings from users at 4.5 out of 5. Where Grok falls short compared to Perplexity is answer consistency, though, scoring 3.5/5, which means responses can vary more depending on how questions are phrased. Be warned: Grok does have its own issues of right-wing bias to contend with.
DeepSeek ranks third among the most reliable LLMs. Despite being free to use, the chatbot gives wrong answers only 14% of the time, making it more accurate than paid services like ChatGPT or Microsoft Copilot. DeepSeek also gets the highest product rating at 4.7 out of 5, suggesting users are happy with the results they get for free. The service does go offline occasionally, though, so workers might face downtime a few times per month during heavy usage periods.
Kimi takes fourth place as the most consistent chatbot on the market right now. Even though it’s probably one of the least popular options, Kimi scores 4.3 out of 5 for quality answers across different types of questions, higher than any competitor. This makes Kimi useful for workers who need longer chats but don’t want their AI tools to lose track in mid-conversation. Kimi’s error rate is at 27%, but it costs just $19 monthly and rarely crashes.
Microsoft Copilot rounds out the top five with a 27% hallucination rate, similar to Kimi. At $20 monthly, Copilot costs about the same as ChatGPT but gets things wrong less often. The service stays online 99.9% of the time and scores 4 out of 5 for consistency, putting it in the middle of the rankings. While Copilot holds only 12.8% of the market right now, it’s become one of the most common chatbots in corporate settings.
Despite the outcomes above, many people assume ChatGPT is the most reliable chatbot because it is the most popular. But market share comes from being first and having the best marketing, not necessarily from being the best product.
AI-related fraud cases are increasingly challenging US cybersecurity
By Dr. Tim Sandle
DIGITAL JOURNAL
May 12, 2026

A crop of fraudulent AI detection tools risk adding another layer of online deception. - Copyright GETTY IMAGES NORTH AMERICA/AFP Michael M. Santiago
Total fraud losses among people over the age of 60 have increased by 400% since 2020 – and one particular scam is swindling seniors across the U.S. out of thousands of dollars: “impersonation scams”, a form of fraud that has jumped by 148% year-on-year.
This is occurring as criminals leverage AI to devise increasingly convincing schemes. Fraudsters are most often impersonating the Social Security Administration, Health and Human Services and the IRS.
This leads to the question – which states are the most likely to be tricked by these sneaky imposters? And which age groups are taking the biggest hits?
In seeking to answer this, forex broker experts BrokerChooser have assessed official fraud reports from 2025 and 2024 to uncover the states and age groups most at risk of being tricked by imposters.
U.S. imposter scam hotspots exposed: Where Americans are most likely to be tricked
| Rank | State | Change in government imposter reports (per million vs. 2024) | Change in business imposter reports (per million vs. 2024) | % change in total imposter scam losses (vs 2024) | Total imposter scam losses (2025) | Average loss ($) | Change in total imposter reports (per million vs. 2024) | Total imposter reports per million (2025) |
| 1 | Delaware | +260.5 | -17.9 | +37.33% | $3,357,694 | $2,569 | +235 | 1,299 |
| 2 | Oregon | +167.5 | -33.0 | +54.05% | $13,501,754 | $2,602 | +127 | 1,224 |
| 3 | Colorado | +127.2 | -97.8 | +13.53% | $15,299,415 | $2,271 | +19 | 1,160 |
| 4 | Florida | +218.5 | -20.4 | +26.95% | $65,579,488 | $2,586 | +191 | 1,157 |
| 5 | Nevada | -4.6 | -72.5 | +43.22% | $9,280,053 | $2,639 | -93 | 1,120 |
| 6 | Washington | -46.5 | -83.2 | -11.96% | $18,617,543 | $2,204 | -131 | 1,091 |
| 7 | Maryland | +104.9 | -38.3 | -21.58% | $14,655,942 | $2,208 | +57 | 1,076 |
| 8 | Illinois | +299.7 | -27.3 | +15.99% | $28,283,409 | $2,172 | +275 | 1,026 |
| 9 | Arizona | +1.8 | -36.2 | +8.15% | $26,334,452 | $3,587 | -45 | 1,010 |
| 10 | Utah | +172.9 | +14.7 | -22.26% | $5,120,843 | $1,525 | +180 | 1,008 |
Note: Total imposter scam reports include cases involving government imposters, business imposters and family or friend imposters.
From the above table, Delaware takes the unwanted crown as the state most likely to fall for imposter scams, racking up a jaw-dropping 1,299 reports per one million people in the last six months of 2025 – up from 1,064 per million in the same period in 2024. Government imposters are the main culprits with 686 cases per million – more than half of all reports – followed by business imposters at 577 per million.
Delawareans collectively lost a significant $3,357,694 to imposter scams in just six months, a 37.33% jump from 2024, leaving the average victim out of pocket by $2,569.
The second most targeted state is Oregon, with 1,224 imposter reports per million – a sharp climb from 1,097 per million in H2 2024. Notably, Oregonians are feeling a bigger pinch as total losses skyrocketed 54%, from $8,764,507 to $13,501,754, the fifth-largest increase across all states analyzed. The average victim here lost a whopping $2,602 to imposter scams.
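Those Oregon figures are internally consistent, as a quick back-of-the-envelope check shows. The sketch below assumes Oregon’s population is roughly 4.24 million, a figure that does not appear in the report itself.

```python
# Figures from the Oregon paragraph above.
total_losses = 13_501_754       # total H2 2025 imposter scam losses ($)
avg_loss_per_victim = 2_602     # average loss per victim ($)
population_millions = 4.24      # assumed Oregon population (not in the report)

implied_reports = total_losses / avg_loss_per_victim
reports_per_million = implied_reports / population_millions

print(f"{implied_reports:,.0f} implied reports")          # ~5,189
print(f"{reports_per_million:,.0f} reports per million")  # ~1,224, matching the table
```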
Colorado ranks third most targeted, with 1,160 imposter scam reports per million.
Business imposters made up the majority of cases (582 reports in H2 2025) and delivered the biggest financial blow, draining Coloradans of $9,911,413 in just six months – far eclipsing the $5,275,040 lost to government imposters. Average imposter scam loss per victim in the state is $2,271.
Florida lands in fourth, with 1,157 reports per million, up from 965 in 2024. Government imposters drove the biggest jump, climbing from 357 to 575 cases per million, while total losses hit a jaw-dropping $65,579,488, leaving the average victim $2,586 lighter in the wallet.
Rounding out the top five is Nevada, with 1,120 imposter scam cases per million in the last two quarters of 2025. While business impersonation scams dominate in volume (642 reports per million), Nevadans are losing the most money to government imposters, with losses soaring 197% from $1,700,186 in H2 2024 to $5,050,158 in H2 2025. The average imposter scam victim in Nevada lost $2,639.
AI Companies Are Recklessly Racing Toward a Cybersecurity Crisis
May 11, 2026
WASHINGTON - Google researchers announced Monday that cybercriminals recently used an artificial intelligence model to help create an exploit for a dangerous zero-day vulnerability, capable of compromising computer networks at scale, marking what experts say is a major turning point in the cybersecurity landscape. A “zero-day” vulnerability is a hidden flaw or weakness in software that hackers discover before the company or public knows about it or has a fix available. It’s considered especially dangerous because attackers can exploit the flaw immediately, giving defenders “zero days” to protect themselves.
The findings come as leading AI companies, including Anthropic and OpenAI, continue developing increasingly advanced models capable of identifying and exploiting critical software vulnerabilities. Google warned that malicious actors are already using AI to increase the speed, scale, and sophistication of cyberattacks, while researchers have observed state-backed hacking groups linked to China, Russia, and North Korea leveraging AI technologies to automate and refine offensive cyber operations. The developments have intensified concerns that powerful AI systems are being deployed faster than governments and regulators can establish meaningful safeguards to prevent catastrophic misuse.
In response to the growing concerns, Public Citizen’s AI governance and technology policy counsel, J.B. Branch, issued the following statement:
“Cybersecurity experts are sounding the alarm, yet AI companies continue racing to release increasingly powerful models with little regard for the societal consequences. It is unthinkable and irresponsible to release technologies capable of destabilizing critical systems and then worry about the fallout afterward. Americans are increasingly rejecting this destabilizing AI arms race. We need enforceable AI regulations that require rigorous safety testing, independent review, and meaningful oversight before these systems ever reach the public. Regulators cannot remain in a perpetual game of catch-up while Big Tech gambles with the safety and stability of modern society.”
Public Citizen is a nonprofit consumer advocacy organization that champions the public interest in the halls of power. We defend democracy, resist corporate power and work to ensure that government works for the people - not for big corporations. Founded in 1971, we now have 500,000 members and supporters throughout the country.
Knowledge workers drive $420 billion AI boom from the bottom up
By Jon Stojan
DIGITAL JOURNAL
May 13, 2026

Photo courtesy of Joe Salesky and Recon Analytics.
Opinions expressed by Digital Journal contributors are their own.
New findings from Recon Analytics show that the real engine behind America’s AI revolution isn’t automation – it’s people. According to a sweeping survey of nearly 70,000 U.S. professionals, 61% now use AI tools like ChatGPT and Gemini to streamline their daily work, not because they’re required to, but because they’ve seen the benefits firsthand.
The result? A massive and largely quiet transformation that has already generated an estimated $420 billion in new annual value, driven by individual initiative – not corporate mandate.
AI adoption looks less like robots, more like spreadsheets

Photo courtesy of Recon Analytics.
The narrative that artificial intelligence would replace workers is being rewritten by the workers themselves. “This shift resembles the spreadsheet boom of the 1980s more than any wave of job-killing automation,” says Joe Salesky, CEO of Recon Analytics’ AI division.
Employees are using AI to write reports, analyze data, brainstorm ideas, and optimize communication workflows. Those who pay for premium tools report 13% higher productivity than colleagues relying on free versions – a measurable edge with significant economic impact when scaled across the knowledge workforce.
Recon Analytics: Delivering market signals at the speed of AI
At the center of this research is Recon Analytics, the fastest-growing intelligence firm tracking AI adoption in real time. With over 150,000 individuals surveyed across industries, Recon’s AI Pulse platform captures live data on how workers are using AI, why they choose certain tools, and what barriers still remain.
“We’re not just asking if people use AI,” said Roger Entner, analyst and founder of Recon. “We’re asking why, how often, which tools, and how much value they’re actually getting. Nobody else has this level of visibility.”
Recon’s real-time survey infrastructure allows clients to get targeted insights within one week – based on more than 6,000 fresh responses – covering even niche user segments. This speed and scope set Recon apart from traditional research firms, which often lag behind fast-moving markets.
Workers are leading the charge – not companies


Photo courtesy of Recon Analytics.
Only 22.3% of AI users say their adoption came through formal company programs. In contrast, nearly 45% report using AI tools on their own initiative – a decentralized movement unfolding desk by desk, from home offices to enterprise environments.
Entner explains that speed and ease of use are now more important to workers than even advanced features or promised outcomes. “AI platforms have been designing for results when they should have been designing for access,” he says.
And that shift is visible in the churn: 29% of paid AI users eventually drop off, often due to poor user experience or lack of integration. Salesky adds, “Platforms that ignore what users really need – clarity, control, and convenience – are going to lose them.”
Paid tools drive measurable gains
Recon’s dataset also reveals the performance gap between free and paid AI users:
Productivity: 7.7 vs. 7.1 (out of 10)
Speed: 7.8 vs. 7.6
Output Quality: 7.6 vs. 7.0
Automation Capability: 7.3 vs. 6.6
Users cite better integration with workflows, more reliable output, and stronger customization as key reasons for upgrading.
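Expressed in relative terms, those gaps run from roughly 3% to 11%. A quick sketch using only the scores listed above (the layout is ours; the numbers are Recon’s):

```python
# Relative paid-vs-free gaps, computed from the survey scores above (0-10 scale).
scores = {
    "productivity": (7.7, 7.1),
    "speed": (7.8, 7.6),
    "output quality": (7.6, 7.0),
    "automation capability": (7.3, 6.6),
}
for metric, (paid, free) in scores.items():
    print(f"{metric}: paid users ahead by {(paid - free) / free:.1%}")
# e.g. productivity: paid users ahead by 8.5%
```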
Perhaps the most powerful finding: AI tools linked to internal company data drive the highest productivity. When tools are connected to relevant files, documents, and systems, the average productivity score surpasses 9.0 – far above the 8.1 average for standalone tools.
“This is the next frontier,” Salesky notes. “AI needs context to be powerful. Tools that connect to your company’s knowledge environment will deliver the strongest returns.”
Recon Analytics: The intelligence engine behind AI strategy
Recon’s mission is to transform AI market signals into clear, decisive action. With its AI Pulse product, executives, policymakers, and product leaders gain access to the most agile and reliable customer insight system available in the industry.
“AI is the fastest-paced technology transformation in history,” says Entner. “If your data isn’t real-time, it’s already outdated.”
Recon is positioning itself not just as a research firm, but as the source of truth for how real users are shaping AI’s evolution – day by day, task by task.
With 40.8% of U.S. knowledge workers using AI and nearly 70% reporting clear performance gains, it’s evident that AI’s influence is coming from the ground up. The economic implications are far-reaching, especially as more companies recognize and support this momentum.
“Companies that act now – by securing data access, integrating AI contextually, and investing in proper training – will capture the strongest share of the $420 billion in value already being created,” Salesky concludes.

Written By Jon Stojan
Jon Stojan is a professional writer based in Wisconsin. He guides editorial teams consisting of writers across the US to help them become more skilled and diverse writers. In his free time he enjoys spending time with his wife and children.
SoftBank profit quadruples to $32 bn on AI investments
By AFP
May 13, 2026

SoftBank is advancing its push to build AI data centres, announcing plans in March for a major new gas-fired power plant in the US state of Ohio to supply them with energy - Copyright AFP Kazuhiro NOGI
Japan’s SoftBank Group said Wednesday its annual net profit quadrupled to more than $30 billion, boosted by its investments in AI.
Tech investor SoftBank, a major backer of ChatGPT maker OpenAI, posted net profit of five trillion yen ($32 billion) for the fiscal year ending in March, up from 1.15 trillion yen a year earlier.
The gain from investment in OpenAI contributed to the earnings, it said.
“OpenAI’s enterprise value has grown significantly, just as we had anticipated,” company CFO Yoshimitsu Goto told reporters.
The gain from its OpenAI investment exceeded six trillion yen, but selling, general, and administrative expenses increased, according to the company.
In February, SoftBank said it would increase its investment in OpenAI by $30 billion, raising its ownership to 13 percent from 11 percent.
Amid the intensifying AI race, Goto said on Wednesday that SoftBank “remains focused on its efforts with OpenAI”, when asked about potential investments in rivals such as Anthropic.
The company is also advancing its push to build AI data centres, announcing plans in March for a major new gas-fired power plant in the US state of Ohio to supply them with energy.
On Monday, Bloomberg reported that Masayoshi Son, the company’s flamboyant CEO, has held talks with French President Emmanuel Macron on unveiling an ambitious AI-focused data centre project in France in coming weeks.
Son is considering investing several billion dollars in the country as part of a broader rollout of SoftBank’s AI infrastructure, according to Bloomberg, with the CEO floating the idea of investing up to $100 billion.
Data centres that can train and run chatbots, image generators and other AI tools are being built on a dramatic scale worldwide as an investment boom into the fast-evolving technology shows no sign of slowing.
In an effort to diversify its positions within AI, SoftBank also acquired last year US semiconductor designer Ampere Computing and the robotics division of Swiss-Swedish industrial giant ABB.
It also announced in December it would acquire DigitalBridge, a US private equity firm specialising in technology infrastructure, for $4 billion.
SoftBank’s earnings often swing dramatically because it invests heavily in tech start-ups and semiconductor firms, whose stocks are volatile.
As usual, SoftBank did not issue a full-year forecast.
Chinese tech giant Alibaba posts profit drop amid AI drive
By AFP
May 13, 2026

Alibaba has seen its core e-commerce business squeezed by price wars and sluggish consumption in China - Copyright AFP/File WANG Zhao
Chinese tech giant Alibaba said Wednesday that net profit dropped by nearly a fifth during its most recent fiscal year, weighed by challenges in the domestic economy and an expensive push into artificial intelligence.
Alibaba, which runs some of China’s biggest online shopping platforms, has seen its core e-commerce business squeezed by price wars and sluggish consumption in the world’s second-largest economy.
The Hangzhou-based firm is ploughing tens of billions of dollars into AI, with its shareholders keen to see how the company will approach the tricky task of monetising these huge investments.
For the year ended March 31, Alibaba recorded a net profit of 105.9 billion yuan ($15.6 billion), a statement filed to the Hong Kong Stock Exchange said, down from 129.5 billion in the previous fiscal year.
That figure represented a year-on-year drop of 18 percent.
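For readers checking the arithmetic, the decline follows directly from the two reported figures:

```python
# Year-on-year profit decline, from the net profit figures reported above.
prior_year, latest_year = 129.5, 105.9  # net profit, billions of yuan
print(f"{(prior_year - latest_year) / prior_year:.1%} drop")  # 18.2% drop
```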
During the final financial quarter, revenue grew by three percent year-on-year to 243.4 billion yuan, the statement said.
“Alibaba’s full-stack AI investments have progressed from incubation to commercialisation at scale,” CEO Eddie Wu was quoted as saying in the statement.
During the most recent quarter, the firm “achieved accelerated breakthroughs across models, cloud infrastructure, and applications”, Wu said.
Alibaba’s open-source Qwen AI models are popular with programmers worldwide.
This week, the tech behemoth said it had integrated Qwen’s agentic features — which can carry out tasks for users — across its hugely popular Taobao shopping app in China.
Wu said in Wednesday’s statement that Alibaba sees “massive potential for agentic AI”.
– AI fervor –
Bloomberg Intelligence analysts had said ahead of the earnings results that Alibaba “is likely to lean even harder into AI integration across its ecosystem in fiscal 2027”.
The company will keep “expenditure high to spur user adoption”, they said.
Alibaba, along with fellow Chinese tech titan Tencent, is reportedly in talks to invest in top AI startup DeepSeek, which in April released a long-awaited major new artificial intelligence model.
Alibaba did not immediately respond to AFP’s request for comment on the reports, which said DeepSeek’s funding round could value it at as much as $50 billion.
Meanwhile Alibaba’s own AI offerings have been attracting attention for their high quality, with its “HappyHorse” video generator topping benchmarks when it was released in April.
Alibaba was previously in the crosshairs of an aggressive regulatory crackdown on the Chinese tech sector launched in late 2020 and attributed to worries in Beijing that top firms had become too powerful.
Jack Ma, the firm’s charismatic co-founder who had spoken boldly about the shortcomings of China’s financial and regulatory system, kept a low profile during the lengthy campaign.
His sudden reappearance in February 2025 during a meeting with President Xi Jinping and other business luminaries was a shock development that suggested a warmer stance from Beijing and sent Alibaba stocks soaring.
Ma is no longer an executive at Alibaba but is believed to retain a significant shareholding in the company.
The firm’s shares at stock exchanges in both the United States and Hong Kong have struggled this year despite the global AI investment boom.
In other results posted to the Hong Kong Stock Exchange on Wednesday, tech sector peer Tencent reported a 21 percent jump in quarterly net profit.
The video game giant, headquartered in the southern tech hub of Shenzhen, has also funnelled substantial investment into AI in recent years.
UK Pension Act accelerates AI-led transformation across retirement schemes
By Dr. Tim Sandle
DIGITAL JOURNAL
May 13, 2026

Image: — © Digital Journal
AI is expected to play a growing role in enabling delivery of the UK Pension Schemes Act reforms, as providers face large-scale data, consolidation and reporting requirements.
The Pension Schemes Act 2026 received Royal Assent on April 29, 2026, marking a major overhaul of the UK’s £2 trillion pensions sector. It aims to increase retirement income by an average of £29,000 for workers by boosting investment performance, introducing Value for Money (VFM) tests, and enabling automatic consolidation of small pension pots.
Delivering the package of reforms across both the defined contribution (DC) and defined benefit (DB) markets will accelerate the role of artificial intelligence (AI) and enhanced technology systems, according to Lumera, an insurtech company.
The Lumera report notes that while the legislation is designed to improve scale and member outcomes within the system, the changes are also expected to significantly increase the volume, complexity and breadth of data that providers will need to manage prudently.
AI = growth?
This means that the careful application of AI, unified systems and modern technology processes will be required for providers to position themselves for future growth.
A key example, drawn out by Lumera, is the new requirement for trustees to provide ‘guided’ default retirement pathways for members of trust-based DC pension schemes. This will require trustees to define membership groupings that best fit a particular default pathway, and to refine these groupings as more data becomes available.
This ongoing data challenge is not only a ‘good fit’ for AI, the report argues; it all but demands it, if trustees are to make the best use of the data they hold when making the critical decision to assign members to default pathways.
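To make the idea concrete, here is a minimal sketch of a rule-based grouping of DC members into default pathways. The pathway names, fields and thresholds are invented for illustration and are not drawn from the Act or the Lumera report.

```python
# Illustrative only: toy rules assigning DC scheme members to hypothetical
# default retirement pathways. All names and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Member:
    age: int
    pot_size_gbp: float
    drawdown_intent: bool  # has the member signalled an income-drawdown preference?

def assign_pathway(m: Member) -> str:
    """Map a member to a default pathway using simple, auditable rules."""
    if m.age < 50:
        return "growth"                   # far from retirement: growth-oriented default
    if m.pot_size_gbp < 10_000:
        return "small_pot_consolidation"  # candidate for automatic consolidation
    if m.drawdown_intent:
        return "flexible_drawdown"
    return "annuity_glidepath"            # de-risk towards an annuity purchase

for m in [Member(34, 18_000, False), Member(58, 6_500, False), Member(61, 120_000, True)]:
    print(m.age, "->", assign_pathway(m))
```

As more member data accumulates, the same rules can be re-evaluated and the groupings refined, which is the feedback loop the report describes.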
Need for trust?
The adoption of AI in scenarios like this will, the report states, require high levels of trust. This trust will need to be earned through robust governance, operating models and technology, evidenced to be compliant with all existing legislation, and any future guidance that regulators produce.
Lumera does not see significant cause for concern, since AI is already being applied to other challenges across the pensions market. Its careful application, alongside human expertise, will also be vital in handling other operational pressures that will build as a result of the Pension Schemes Act, with its focus on large-scale consolidation and new standardised Value for Money assessments.
As AI develops further and becomes more integral over the coming months and years, it is set to become a key enabler of a more automated, standardised and resilient pensions infrastructure in the UK, according to Maurice Titley, Commercial Director, Data & Dashboards at Lumera.
He says, in a statement to Digital Journal: “As we enter a new era for the pensions sector in the UK, AI is set to be a critical driver of transformation in how providers achieve greater efficiencies and improve the member experience.”
Titley also states: “Greater automation and use of AI will play a critical role in supporting the evolving requirements and regulations contained within the Pension Schemes Act that the industry must comply with. Innovative operating models, human oversight and robust governance will be at the centre of this drive, giving trustees and providers the confidence to capitalise on AI’s full potential.”

CEOs say their boards are rushing AI
By Digital Journal Staff
May 12, 2026

Photo by Dylan Gillis on Unsplash
Your board has discovered AI. This is not necessarily good news.
Sixty-one percent of CEOs say their boards are pushing them to move faster on AI transformation, according to new research from Boston Consulting Group (BCG).
The survey, which polled 625 leaders including 351 CEOs and 274 board members from companies with at least $100 million in annual revenue, is the first edition of BCG’s Split Decisions: The BCG CEOs and Boards Survey.
The disconnect starts with confidence.
Three-quarters of board members believe their AI knowledge is on par with or ahead of their peers, but CEOs aren’t so sure. Nearly 40% say their boards lack an informed view of how AI is reshaping growth strategy, and one-third say boards overestimate how much human work AI can actually replace.
More than half of CEOs say boards need a better grasp of the gap between AI hype and reality. For their part, boards say CEOs need to do a better job selling them on the AI strategy itself.
FOMO appears to be doing some of the driving. Board members with lower confidence in their own AI knowledge are more likely to believe their organizations are moving too slowly, the survey found.
Uncertainty is translating into urgency, not caution.
The accountability gap adds pressure. CEOs estimate that 35% of their performance evaluation hinges on achieving AI ROI. Boards put that number at 27%.
“I feel this tension so acutely between CEOs and boards,” says Julie Bedard, managing director and partner at BCG. “A powerful way for CEOs to bridge the gap between their AI knowledge and their boards’ — especially if they feel there is a deficit there — is for the CEO to personally lead an AI upskilling session for their board to show them the latest AI tools and what they can do.”
Both groups do agree on one thing: AI literacy at the top needs to improve.
About 80% of CEOs and board members say prospective board members should be required to demonstrate measurable understanding of how AI can reshape their industry.
But agreement on the principle doesn’t resolve the immediate tension. The board wants faster, but the CEO sees the organizational readiness problem the board doesn’t. All with AI budgets on the line.
Final Shots
Boards with lower AI confidence are more likely to push for faster implementation, suggesting the urgency is at least partly anxiety-driven rather than strategy-driven.
CEOs estimate AI ROI accounts for 35% of their performance evaluations; boards estimate 27%, reflecting a mismatch in how accountability is being understood at the top.
80% of both CEOs and board members say incoming board members should be required to demonstrate AI literacy, but few organizations have defined what that actually means in practice.
AI is quietly denying more insurance claims
By Dr. Tim Sandle
DIGITAL JOURNAL
May 11, 2026

Image: — © AFP Kirill KUDRYAVTSEV
Artificial intelligence is transforming healthcare administration, but not always in ways that benefit providers. AI is reshaping several aspects of healthcare administration:
Scheduling and Capacity Management: AI-powered scheduling tools help balance provider availability, patient demand, and equipment capacity, leading to shorter wait times and better resource utilization.
Revenue Cycle Management: AI improves the accuracy and efficiency of revenue cycle management, which is critical for maintaining organizational stability and financial health.
Documentation and Coding: AI automates documentation and coding processes, reducing the burden on healthcare staff and allowing them to focus more on patient care.
Patient Communication: AI enhances patient communication through automated responses and real-time messaging, improving the overall patient experience.
These advancements are part of a broader trend where AI is being integrated into healthcare administration to address the challenges of rising costs, staffing shortages, and increasing regulatory demands. By leveraging AI, healthcare organizations can create more responsive systems that improve patient care and operational efficiency.
Insurance carriers are increasingly using AI systems to process and deny claims. While these systems promise efficiency and fraud detection, they are also facing legal scrutiny over allegations that algorithm-driven decisions lack nuance and fairness.
Such bias can manifest in various ways, such as underestimating the risk of certain patients or disproportionately denying coverage to protected classes. To address these issues, healthcare insurers are beginning to implement improved governance practices, including transparency, explainability, and fairness requirements. These measures aim to ensure that AI systems do not perpetuate existing biases and that they promote equitable access to healthcare services.
For dental and healthcare practices, the result is delayed payments, higher administrative costs, and mounting financial pressure.
Jordon Comstock, Founder and CEO of BoomCloud, tells Digital Journal that many practice owners are only just realizing how much leverage they have lost.
“Most dentists don’t see the denial pattern at first,” Comstock explains. “They just feel the cash flow tightening. What’s happening behind the scenes is that algorithms are flagging claims at scale. When that happens, practices become reactive instead of strategic.”
The Legal and Ethical Questions
Recent lawsuits against insurers argue that AI systems can produce wrongful denials by failing to account for individual patient circumstances. Plaintiffs claim that these tools may be biased or overly rigid, prioritizing cost control over patient care.
Comstock believes the bigger issue is transparency: “If an AI system denies a claim, who is accountable? Is it the adjuster? The software vendor? The carrier? Practices are left fighting a black box,” he says. “And small practices don’t have entire legal departments to challenge those decisions.”
The Financial Impact on Practices
AI-driven denials create a ripple effect, as Comstock finds:
Increased time spent on appeals
Slower reimbursements
Higher overhead due to billing staff workload
Patient frustration when treatments are delayed
For many practices operating on tight margins, this can be destabilizing.
“Dentistry is already navigating staffing shortages and rising supply costs,” Comstock says. “When insurance payments become unpredictable, it exposes how fragile the traditional PPO model really is.”
A Shift Away From Insurance Dependence
Some practices are responding by reducing reliance on insurance altogether. Comstock points to internal case data from practices using membership plan models. In one example, a dental practice launched a $45 per month membership plan and enrolled more than 1,400 patients.
The results:
Monthly recurring revenue of $63,000
Annual recurring revenue of $756,000
Predictable revenue allowed the practice to drop most PPO contracts and significantly reduce administrative burden: “The turning point for many dentists is realizing they can build their own recurring revenue system,” Comstock says. “Insurance should not be the only way patients access care.”
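Those revenue figures follow directly from the plan’s pricing; a quick back-of-the-envelope check, using only the two inputs from the example above:

```python
# Back-of-the-envelope check of the membership-plan example above.
monthly_fee_usd = 45
enrolled_patients = 1_400

mrr = monthly_fee_usd * enrolled_patients  # monthly recurring revenue
arr = mrr * 12                             # annual recurring revenue

print(f"MRR: ${mrr:,}")  # MRR: $63,000
print(f"ARR: ${arr:,}")  # ARR: $756,000
```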
How Practices Can Protect Themselves Now
Comstock advises practices to take immediate steps:
Strengthen documentation and compliance protocols
Understand each insurer’s denial criteria
Train staff on structured appeal processes
Evaluate direct-to-patient membership models
“Appealing claims is defensive,” he concludes. “Building recurring revenue is offensive. The practices that survive long term are the ones that stop relying entirely on third-party reimbursement.”
Canada moves to close its AI infrastructure gap
By Digital Journal Staff
May 13, 2026

The Honourable Evan Solomon, Minister of Artificial Intelligence and Digital Innovation and Minister responsible for the Federal Economic Development Agency for Southern Ontario (Photo by Sam Barnes/Web Summit via Sportsfile)
Canada’s AI ambitions need physical infrastructure to back them up, and the federal government is opening up its wallet to build it.
On Monday, Minister of Artificial Intelligence and Digital Innovation Evan Solomon announced that Canada and Telus are advancing work under the government’s Enabling large-scale sovereign AI data centres initiative, with a proposed large-scale data centre project in British Columbia. The initiative ran a call for proposals earlier this year.
On Tuesday, at Web Summit Vancouver, Solomon announced $66 million in support for 44 Canadian companies through the AI Compute Access Fund. This is aimed at helping small and mid-sized Canadian companies afford high-performance computing, allowing them to continue building at home.
Taken together, these announcements show a federal government trying to address Canadian AI development from two ends: building the large-scale infrastructure that underpins the whole ecosystem, and lowering the cost of entry for the companies trying to build on top of it.
Telus CEO Darren Entwistle pointed directly to demand: “The unprecedented demand that completely sold out our first AI factory in Rimouski proves that Canadian innovators want cutting-edge AI infrastructure built right here on Canadian soil.”
Added Solomon, “Canada cannot compete in the AI economy without the infrastructure to back it up. By advancing this project with Telus, we are taking concrete action to build sovereign AI capacity here in Canada, so Canadian innovators, researchers and businesses have access to the compute they need, while keeping Canadian data, intellectual property and economic advantage on Canadian soil.”
The government has pointed to Canada’s geography, climate, sustainable energy sources, and network infrastructure as reasons the country is well-positioned to attract AI infrastructure investment.
The compute fund addresses a more immediate barrier. For many Canadian SMEs, the cost of high-performance computing has been the wall between an AI idea and a viable product.
The 44 companies receiving support span the life sciences, health, energy, advanced manufacturing, agriculture, finance, natural resources, and transportation sectors. Of the $66 million announced Tuesday, $16.8 million supports eight British Columbia projects.
Additional funding offers are still being finalized.
For Canadian companies that have been waiting on compute, the message from Ottawa this week was straightforward.
Hang tight, it’s coming.
Final Shots
No funding has been committed to the Telus data centre project yet. Monday’s announcement doesn’t complete the work, but it’s a step in the right direction.
The compute fund’s first $66 million goes to 44 companies across eight sectors. Of that, $16.8 million supports eight BC projects.
Both announcements fall under Canada’s Sovereign AI Compute Strategy, which is designed to keep AI development, jobs, and intellectual property on Canadian soil.
When AI writes the code and ships it at 3 a.m.
By David Potter
DIGITAL JOURNAL
May 13, 2026

Shaun Guthrie of RJC Engineers and Nicole Donatti of Data Elephant discuss AI readiness with Matthew Duffy at the 2026 CIOCAN Peer Forum in Vancouver — Photo by Jennifer Friesen, Digital Journal
Someone built a data pipeline using generative AI to write the code, then pushed it into production at three in the morning.
By the time the team managing the company’s data systems found out about it, the pipeline was already running, pulling data from internal systems and pushing it into reports the business relied on to make decisions.
None of the people responsible for that data environment had read the code. None of them knew it was being written.
That’s exactly what happened at a Canadian energy company earlier this year, and a similar version is happening in organizations across the country.
The example was shared at the CIO Association of Canada’s 2026 Peer Forum in Vancouver by Nicole Donatti, transformation leader at Data Elephant, a Vancouver-based data and analytics firm.
The code in the story was generated using vibe coding, the term for using generative AI to write working software from plain-English prompts. The user describes what they want, the AI produces the code, and the user deploys it.
This makes code easy to generate, but also means the people prompting the tool are often employees with no coding background and none of the skills necessary to check the code.
Technology teams have always called this kind of unauthorized work shadow IT. Vibe coding has made it more dangerous by adding working software to the category.
“It is great for experimentation, great for an idea in your tight little group,” says Donatti. “But you need governance around the vibe coding. It can get you to what you think is a great solution really quickly, and get you into trouble just as fast.”
Most Canadian boardrooms are still focused on adoption.
Donatti is working with companies on what to do once the decision to adopt has been made, and where code is already running in places nobody approved.
Maybe IT shouldn’t be the last to know
“It is running rampant through our organization right now,” says Shaun Guthrie, business technology leader at RJC Engineers. “Everyone is just coming towards us with Anthropic vibe coding through Claude Code, and we’re getting inundated with it.”
He described a recent example in which an employee built an invoice automation tool using generative coding tools, presented it to the technology team, and asked to put it into production.
The tool worked. It also assumed a connection to an ERP system the company is in the middle of replacing.
The employee didn’t know the ERP was being replaced. The technology team didn’t know the tool had been built until it was finished. The work had been happening for months in a place where the people accountable for the technology environment couldn’t see it.
Guthrie’s response has been to spend less time issuing policy and more time building relationships across the business. The technology team can’t catch what employees are building if employees don’t tell them. Employees won’t tell them if they think the answer will be no.
“Shadow IT is somewhat our fault for not actually getting in front of it,” he says. “If you do that and you get out and you’re in front of people, it has nothing to do with anything technical. It’s just relationship building. And then they’re actually more open and honest with you, and they’ll tell you what they’re doing.”
The code is only as good as what it touches
A board directive at one of Donatti’s clients told the leadership team to make the company AI-ready and attached a budget.
Eighty proofs of concept followed, each one a small AI experiment meant to test what was possible. By the time Donatti was brought in, the data foundation wasn’t as solid as everyone had been told. Pipelines had been built without a common framework, key context about what the numbers meant was missing, and the reasoning behind business decisions had stayed in people’s heads.
That’s the gap Snowflake’s chief data and analytics officer has called documentation debt: the institutional knowledge about what a column means, how it’s calculated, and when it should be used, which has to exist in writing for an AI to read.
Vibe coding tools assume the data they’re touching is documented and well understood. In most Canadian organizations, it isn’t.
Most organizations never documented their systems thoroughly before AI tools arrived, leaving unattended code to run into problems the company didn’t anticipate.
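One way to picture what paying that debt down looks like is a written data-dictionary entry that both humans and AI tools can read. The sketch below is hypothetical; the table, column and field names are invented, and no particular tool or schema standard is implied.

```python
# Hypothetical data-dictionary entry: the written context an AI coding tool
# needs before it can use a column safely. All names are invented.
column_doc = {
    "table": "finance.monthly_revenue",
    "column": "net_revenue_cad",
    "meaning": "Revenue after refunds and partner rebates, in Canadian dollars",
    "derivation": "gross_revenue_cad - refunds_cad - partner_rebates_cad",
    "grain": "one row per business unit per calendar month",
    "valid_uses": ["board reporting", "trend analysis"],
    "caveats": "Excludes intercompany transfers; restated quarterly",
    "owner": "finance-data-team",
}

# A generated pipeline can then be reviewed against the documented derivation
# instead of against whatever the model guessed.
print(column_doc["derivation"])
```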
What ends up in front of the tribunal
Canadian companies running unattended AI are accountable for what it does, and many are moving faster than their legal, regulatory, and security thinking can keep up.
Jeff Reichard, senior director of product strategy at Veeam, pointed attendees at the Peer Forum to the Air Canada chatbot case. The chatbot told a customer he could file for bereavement fares retroactively.
The airline argued in court that the chatbot was a separate entity from the airline. The British Columbia civil tribunal disagreed and ruled against Air Canada in 2024. The chatbot was running on the airline’s website, and the airline owned the output.
Air Canada lost because the tribunal treated the chatbot’s output as the airline’s responsibility.
The broader regulatory environment is moving in the same direction. Federal Algorithmic Impact Assessment requirements apply to public sector AI systems, the European Union’s AI Act is in force with fines up to €35 million for the most serious categories of violation, and the courts are increasingly stepping in where regulation remains unclear.
Canadian companies don’t yet have the structures in place to handle that kind of accountability.
PwC Canada’s February 2026 Trust in AI report found that while 72% of Canadian companies name responsible AI a top priority, 36% still have no dedicated governance function. And 65% say they struggle to identify who owns existing AI systems or to track where those systems are running.
Pulling AI work back into view
Technology leaders trying to get ahead of the wave are making it easier for employees to bring the work into the open before it ships. Guthrie and Donatti are both doing versions of the same thing.
Guthrie’s response inside RJC has been to slow the AI conversation down enough to make the foundations visible, then move quickly.
He nominated 15 people from across the business to an AI working group, sat them in a boardroom for two days, and asked them to generate ideas. They came up with about 100.
The group distilled those to nine, then to three. Each one went into a business case. Each business case was tested against the same question: what data would this need, and do we have it? The answer, mostly, was no.
The result is an AI program that has evolved into a data governance program with AI use cases attached.
Donatti’s response with her clients has been to add a triage layer to incoming work: a scoring system that checks whether teams actually have the data, staffing, and internal support needed to maintain what they build. The goal is to stop teams from shipping things they can’t maintain.
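A minimal sketch of what such a triage layer might look like follows. The criteria, weights and threshold are invented for illustration; this is not Donatti’s or Data Elephant’s actual scoring model.

```python
# Illustrative readiness-triage score for incoming AI or vibe-coded work.
# Criteria, weights and the cut-off are invented, not Data Elephant's model.
CRITERIA = {
    "data_documented": 3,    # are the source columns described in writing?
    "owner_identified": 2,   # does a named team own the output long-term?
    "staff_to_maintain": 2,  # is someone staffed to fix it when it breaks?
    "it_visibility": 2,      # does the technology team know it exists?
    "rollback_plan": 1,      # can it be switched off without breaking reports?
}
THRESHOLD = 7  # out of a maximum score of 10

def triage(answers: dict[str, bool]) -> tuple[int, str]:
    score = sum(weight for name, weight in CRITERIA.items() if answers.get(name))
    return score, ("proceed" if score >= THRESHOLD else "fix gaps before shipping")

# The 3 a.m. pipeline from the opening example would have failed this check:
pipeline = {"staff_to_maintain": True}  # everything else was missing
print(triage(pipeline))  # (2, 'fix gaps before shipping')
```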
Vibe coding isn’t going away. The work in front of Canadian technology leaders is making sure the next pipeline arrives at their desk before it ships, not at three in the morning without them.
Final Shots
Vibe coding is a powerful tool when used with the right guardrails, and a serious problem without them.
Vibe coding depends on data that many Canadian organizations still haven’t documented well enough for AI systems to use reliably.
Companies are accountable for what their AI produces. The Air Canada chatbot ruling and the EU AI Act both establish that, and Canadian regulators are likely to follow.
The technology leaders getting ahead of the wave are letting employees use the tools while making it easier for them to bring the work to IT before it ships.
Digital Journal is the official media partner of the CIO Association of Canada.

Written By David Potter
David Potter is Senior Contributing Editor at Digital Journal. He brings years of experience in tech marketing, where he’s honed the ability to make complex digital ideas easy to understand and actionable. At Digital Journal, David combines his interest in innovation and storytelling with a focus on building strong client relationships and ensuring smooth operations behind the scenes. David is a member of Digital Journal's Insight Forum.
By David Potter
DIGITAL JOURNAL
May 13, 2026

Shaun Guthrie of RJC Engineers and Nicole Donatti of Data Elephant discuss AI readiness with Matthew Duffy at the 2026 CIOCAN Peer Forum in Vancouver — Photo by Jennifer Friesen, Digital Journal
Someone built a data pipeline using generative AI to write the code, then pushed it into production at three in the morning.
By the time the team managing the company’s data systems found out about it, the pipeline was already running, pulling data from internal systems and pushing it into reports the business relied on to make decisions.
None of the people responsible for that data environment had read the code. None of them knew it was being written.
That’s exactly what happened at a Canadian energy company earlier this year, and versions of the same story are playing out in organizations across the country.
The example was shared at the CIO Association of Canada’s 2026 Peer Forum in Vancouver by Nicole Donatti, transformation leader at Data Elephant, a Vancouver-based data and analytics firm.
The code in the story was generated using vibe coding, the term for using generative AI to write working software from plain-English prompts. The user describes what they want, the AI produces the code, and the user deploys it.
This makes code easy to generate. It also means the people prompting the tool are often employees with no coding background and no ability to check the code they deploy.
Technology teams have always called this kind of unauthorized work shadow IT. Vibe coding has made it more dangerous by adding working software to the category.
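To make the risk concrete, here is a minimal, hypothetical sketch of the kind of pipeline script a generative tool might hand back to a non-programmer. Every path, table, and name in it is invented; the point is the problems a code review would catch and an unreviewed 3 a.m. deployment would not.

```python
# Hypothetical sketch of AI-generated pipeline code, for illustration only.
# Paths, table names, and column meanings are invented.
import sqlite3
import csv

def refresh_revenue_report():
    # Connects straight to a production database with no approval step --
    # the kind of shortcut generated code often takes.
    conn = sqlite3.connect("/data/prod/warehouse.db")  # hardcoded path
    rows = conn.execute(
        "SELECT region, SUM(amount) FROM sales GROUP BY region"
    ).fetchall()  # no check that 'amount' means what the prompter assumed
    conn.close()

    # Overwrites the report the business reads, with no validation,
    # versioning, or alert if the numbers are wrong.
    with open("/reports/revenue.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["region", "revenue"])
        writer.writerows(rows)

if __name__ == "__main__":
    refresh_revenue_report()
```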
“It is great for experimentation, great for an idea in your tight little group,” says Donatti. “But you need governance around the vibe coding. It can get you to what you think is a great solution really quickly, and get you into trouble just as fast.”
Most Canadian boardrooms are still focused on adoption. Donatti is working with companies on what comes after that decision, when code is already running in places nobody approved.
Maybe IT shouldn’t be the last to know
“It is running rampant through our organization right now,” says Shaun Guthrie, business technology leader at RJC Engineers. “Everyone is just coming towards us with Anthropic vibe coding through Claude Code, and we’re getting inundated with it.”
He described a recent example in which an employee built an invoice automation tool using generative coding tools, presented it to the technology team, and asked to put it into production.
The tool worked. It also assumed a connection to an ERP system the company is in the middle of replacing.
The employee didn’t know the ERP was being replaced. The technology team didn’t know the tool had been built until it was finished. The work had been happening for months in a place where the people accountable for the technology environment couldn’t see it.
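The failure mode is easy to picture. In a hypothetical sketch like the one below, where the endpoint, headers, and field names are all invented, the generated tool bakes the current ERP’s API into the code, and nothing in the tool itself will flag the day that system goes away.

```python
# Hypothetical illustration of the ERP assumption problem; the endpoint
# and payload shape are invented, not the actual tool from the talk.
import urllib.request
import json

# The generated tool hardcodes the current ERP's API. When the company
# finishes migrating to a new system, every call below starts failing,
# and the people who rely on the tool may not know why.
ERP_BASE_URL = "https://erp.example.internal/api/v1"

def submit_invoice(invoice: dict) -> int:
    req = urllib.request.Request(
        f"{ERP_BASE_URL}/invoices",
        data=json.dumps(invoice).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # no retry, no fallback
        return resp.status
```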
Guthrie’s response has been to spend less time issuing policy and more time building relationships across the business. The technology team can’t catch what employees are building if employees don’t tell them. Employees won’t tell them if they think the answer will be no.
“Shadow IT is somewhat our fault for not actually getting in front of it,” he says. “If you do that and you get out and you’re in front of people, it has nothing to do with anything technical. It’s just relationship building. And then they’re actually more open and honest with you, and they’ll tell you what they’re doing.”
The code is only as good as what it touches
A board directive at one of Donatti’s clients told the leadership team to make the company AI-ready and attached a budget.
Eighty proofs of concept followed, each one a small AI experiment meant to test what was possible. By the time Donatti was brought in, the data foundation wasn’t as solid as everyone had been told. Pipelines had been built without a common framework, key context about what the numbers meant was missing, and the reasoning behind business decisions had stayed in people’s heads.
That’s the gap Snowflake’s chief data and analytics officer has called documentation debt: the institutional knowledge about what a column means, how it’s calculated, and when it should be used, which has to exist in writing for an AI to read.
Vibe coding tools assume the data they’re touching is documented and well understood. In most Canadian organizations it isn’t: systems were never documented thoroughly before AI tools arrived, so unattended AI-generated code runs into problems the company never anticipated.
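Closing that gap mostly means writing things down in a form both people and machines can read. A minimal sketch, with invented names and rules, of what documentation for a single column might look like:

```python
# A minimal, hypothetical sketch of the documentation an AI tool would
# need before touching a column. Field names and rules are invented.
COLUMN_DOCS = {
    "orders.net_amount": {
        "meaning": "Invoice total after discounts, before tax, in CAD.",
        "calculation": "gross_amount - discount_amount",
        "valid_use": "Revenue reporting; not for tax remittance figures.",
        "owner": "finance-data@company.example",
        "last_reviewed": "2026-01-15",
    },
}
```

Whatever the format, the test is the same: could an AI tool, or a new hire, use the column correctly from the written record alone?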

Jeff Reichard of Veeam at the 2026 CIOCAN Peer Forum — Photo by Jennifer Friesen, Digital Journal
What ends up in front of the tribunal
Canadian companies running unattended AI are accountable for what it does, and many are moving faster than their legal, regulatory, and security thinking can keep up.
Jeff Reichard, senior director of product strategy at Veeam, pointed attendees at the Peer Forum to the Air Canada chatbot case. The chatbot told a customer he could file for bereavement fares retroactively, which the airline’s actual policy didn’t allow.
The airline argued that the chatbot was a separate entity responsible for its own statements. The British Columbia Civil Resolution Tribunal disagreed and ruled against Air Canada in 2024: the chatbot was running on the airline’s website, and the airline was responsible for its output.
The broader regulatory environment is moving in the same direction. Canada’s federal Algorithmic Impact Assessment requirements apply to public-sector AI systems, the European Union’s AI Act is in force with fines of up to €35 million or 7 percent of global turnover for the most serious categories of violation, and courts are increasingly stepping in where regulation remains unclear.
Canadian companies don’t yet have the structures in place to handle that kind of accountability.
PwC Canada’s February 2026 Trust in AI report found that while 72% of them name responsible AI a top priority, 36% still have no dedicated governance function, and 65% say they struggle to identify who owns existing AI systems or to track where those systems are running.
Pulling AI work back into view
Technology leaders trying to get ahead of the wave are making it easier for employees to bring the work into the open before it ships. Guthrie and Donatti are both doing versions of the same thing.
Guthrie’s response inside RJC has been to slow the AI conversation down enough to make the foundations visible, then move quickly.
He nominated 15 people from across the business to an AI working group, sat them in a boardroom for two days, and asked them to generate ideas. They came up with about 100.
The group distilled those to nine, then to three. Each one went into a business case. Each business case was tested against the same question: what data would this need, and do we have it? The answer, mostly, was no.
The result is an AI program that has evolved into a data governance program with AI use cases attached.
Donatti’s response with her clients has been to add a triage layer to incoming work: a scoring system that checks whether teams actually have the data, staffing, and internal support needed to maintain what they build. The goal is to stop teams from shipping things they can’t maintain.
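Donatti didn’t share the rubric itself; a minimal sketch of the idea, with invented criteria and thresholds, might look like this:

```python
# Hypothetical triage sketch along the lines Donatti describes; the
# criteria and thresholds are invented, not her actual scoring system.
def triage_score(has_documented_data: bool,
                 has_named_maintainer: bool,
                 has_it_sponsor: bool) -> str:
    score = sum([has_documented_data, has_named_maintainer, has_it_sponsor])
    if score == 3:
        return "proceed"
    if score == 2:
        return "proceed with conditions"
    return "hold until gaps are closed"

print(triage_score(True, False, True))  # -> "proceed with conditions"
```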
Vibe coding isn’t going away. The work in front of Canadian technology leaders is making sure the next pipeline arrives at their desk before it ships, not at three in the morning without them.
Final shots
Vibe coding is a powerful tool when used with the right guardrails, and a serious problem without them.
Vibe coding depends on data that many Canadian organizations still haven’t documented well enough for AI systems to use reliably.
Companies are accountable for what their AI produces. The Air Canada chatbot ruling and the EU AI Act both establish that, and Canadian regulators are likely to follow.
The technology leaders getting ahead of the wave are letting employees use the tools while making it easier for them to bring the work to IT before it ships.
Digital Journal is the official media partner of the CIO Association of Canada.

Written by David Potter
David Potter is Senior Contributing Editor at Digital Journal. He brings years of experience in tech marketing, where he’s honed the ability to make complex digital ideas easy to understand and actionable. At Digital Journal, David combines his interest in innovation and storytelling with a focus on building strong client relationships and ensuring smooth operations behind the scenes. David is a member of Digital Journal's Insight Forum.