
Wednesday, April 01, 2026

University of Bath research warns AI could erode human capital, thinking and expertise in the workplace



University of Bath




1 April 2026


Study flags AI failings, urges creation of ‘learning vaults’ to protect creativity and critical thinking

HR and people managers should proceed with caution if they want to use AI to improve efficiency and human capital in the workplace, and should take steps to ensure creativity and critical thinking are preserved, new research from the University of Bath School of Management shows.

“AI is widely promoted as a tool that can support employees by improving efficiency, speeding up problem-solving and delivering personalised answers, but this should not be taken at face value,” said Professor Dirk Lindebaum, author of the study ‘On the Dangers of Large-Language Model Mediated Learning for Human Capital’.

“AI has a part to play in building human capital, but it is vital to understand that human knowledge is not uniform. It comes in different kinds, some of which may be more compatible with AI than others,” Professor Lindebaum said.

The research team identified two types of knowledge that appear partially compatible with AI: encoded knowledge - rules, procedures, policies and datasets; and embedded knowledge - essentially digitalised processes and routines.

“AI may support tasks in these areas by updating documents, policies and workflows, or assisting with compliance, and this seems an obvious and easy win for a manager. However, it is not without risk: if employees, for example, no longer engage directly with important processes, familiarity and expertise will fade,” Professor Lindebaum said.

The team identified three forms of knowledge incompatible with AI: embodied knowledge - developed through practical, hands-on experience; encultured knowledge - understanding developed through organisational culture; and embrained knowledge - analytical judgment and problem-solving.

“These three forms of knowledge rely on real-world experience, sensory engagement, socialisation and repeated practice. They cannot be learned through exposure to AI-generated text or synthetic training environments,” Professor Lindebaum said.

“If people begin outsourcing thinking, decision-making or interpretation to AI systems, these critical forms of knowledge wither over time, creating a dangerous dependency that could compromise an organisation’s or a company’s profitability,” he added.

The researchers said HR and people managers need to safeguard against those risks by designing work that ensures continued access to first-hand learning and human interaction, such as shadowing or mentoring at work.

Additionally, they said cultural understanding should be underpinned by onboarding, team-based learning, cross-cultural exchanges and leadership modelling. And HR teams should encourage critical thinking and reflective practices as key skills in their employees to drive human-led decision making.

The research team suggested that critical thinking and creative skills could be safeguarded by creating protected spaces within workplaces and educational settings - ‘learning vaults’.

Akin to the Svalbard Seed Vault, which safeguards biodiversity, ‘learning vaults’ would be shielded from the influence of overly automated learning systems, helping to ensure employees develop adaptive, experience-based knowledge and maintain the reflexive capacities essential for forming human capital.

“In practice, that would mean a social environment in which employees and students learn how to think for themselves together in terms of know-why (e.g., why did the strategic plan fail?), know-how (e.g., how did a lack of local knowledge contribute to the failure?), and know-what (e.g., what are the consequences of said failure?),” Professor Lindebaum said.

“It would mean ‘learning the basics’ about tasks, processes and routines before cognitive offloading undermines, from the outset, the ability of employees and students to provide informed answers to these questions. We think that employers and business schools should explore how such vaults can be integrated into roles and learning environments to protect diverse forms of knowledge that might otherwise be eroded by uncritical AI use,” Professor Lindebaum said.

ENDS/tr

Notes to editors:

Please contact the University of Bath Press Office on press@bath.ac.uk

 

About the University of Bath

The University of Bath is one of the UK's leading universities, recognised for high-impact research, excellence in education, an outstanding student experience and strong graduate prospects.

  • We are ranked among the top 10% of universities globally, placing 132nd in the QS World University Rankings 2026.
  • We are ranked in the top 10 in all of the UK’s major university guides.
  • The University achieved a triple Gold award in the last Teaching Excellence Framework 2023, the highest awards possible, for both the overall assessment and for student outcomes and student experience. The Teaching Excellence Framework (TEF) is a national scheme run by the Office for Students (OfS).
  • We are The Times and The Sunday Times Sport University of the Year 2026.

Research at Bath is shaping a better future through innovation in sustainability, health, and digital technologies. Find out all about our Research with Impact: http://bit.ly/3ISz1Wu 

 

 

AI can describe human experiences but lacks experience in an actual ‘body.’ UCLA researchers say understanding this ‘body gap’ may matter for safety



Study states artificial intelligence systems lack a fundamental property of human cognition that could make AI safer and more aligned with human behavior




University of California - Los Angeles Health Sciences




When a person reaches across a table to pass the salt, their brain is doing something far more complex than recognizing a request and executing a movement. It is drawing on a lifetime of bodily experience — where their hand is in space, what a saltshaker feels like, the social awareness of who asked and why. In a fraction of a second, their body and brain are working as one.

Today's most advanced artificial intelligence systems lack such bodily mechanisms and a new study by UCLA Health argues that this has significant implications for how these models behave as well as how safe and trustworthy they can become.

In a paper published in the journal Neuron, UCLA Health postdoctoral fellow Akila Kadambi and colleagues propose that current AI systems are missing two essential ingredients that humans take for granted: a body that interacts with the physical world and an internal awareness of that body's own states such as fatigue, uncertainty or physiological need. The researchers call this combined property "internal embodiment," and propose that building functional analogues of it into AI represents one of the most crucial and underexplored frontiers in the field.

“While there is a current focus in world modeling on external embodiment, such as our outward interactions with the world, far less attention is given to internal dynamics, or what we term ‘internal embodiment’. In humans, the body acts as our experiential regulator of the world, as a kind of built-in safety system,” said Akila Kadambi, a postdoctoral fellow in the Department of Psychiatry and Biobehavioral Sciences at UCLA's David Geffen School of Medicine and the paper's first author. “If you're uncertain, if you're depleted, if something conflicts with your survival, your body registers that. AI systems right now have no equivalent. They can sound experiential, whether they should be or not, and that's a real problem for many reasons, especially when these systems are being deployed in consequential settings.”

The AI body gap

The paper focuses on multimodal large language models, the class of technology that powers tools such as ChatGPT and Google's Gemini. While these systems can process and generate text, images and video - describing a cup of water, for example - they cannot know what it feels like to be thirsty, the authors state.

That distinction is not only philosophical, the authors state, but also has measurable consequences for how these systems perform and behave. In one illustration from the paper, researchers showed several leading AI models a simple image: a small number of dots arranged to suggest a human figure in motion, which is a well-established perceptual test known as a point-light display that even newborns can recognize as human. Several models failed to identify the figure as a person, with one describing it instead as a constellation of stars. When the same image was rotated just 20 degrees, even the best-performing models broke down.

Humans don't fail this test because human perception is anchored in a lifetime of bodily experience gained by moving and acting as agents in the world. AI systems, trained on vast libraries of text and images but with no bodily experience, are pattern-matching without that anchor, the study authors state.

Two kinds of ‘embodiment’

The paper draws a distinction that has not previously been made explicit in AI research. It defines “external embodiment” as a system's ability to interact with the physical world - to perceive its environment, plan actions and respond to real-world feedback - which is an important focus of current multimodal AI models. Internal embodiment, however, has not been implemented in these models. The authors define it as the continuous monitoring of one's own internal states, the biological equivalent of knowing you are tired, uncertain or in need.

Humans regulate these internal states constantly and automatically using the body's organs, hormones and nervous system. Humans use that information not just to maintain physical health, but to shape attention, memory, emotion and social behavior.

“By contrast, current AI systems have no equivalent mechanism. They process inputs and generate outputs without any persistent internal state that regulates how they behave over time,” said Dr. Marco Iacoboni, professor in the Department of Psychiatry and Biobehavioral Sciences at the David Geffen School of Medicine and a senior author on the paper. “This is not just a performance limitation, but also a safety limitation. Without internal costs or constraints, an AI system has no intrinsic reason to avoid overconfident errors, resist manipulation or behave consistently.”

What comes next

The authors state the paper is meant to guide future research as AI technology develops. The authors propose what they call a "dual-embodiment framework,” or a set of principles for building AI systems that model both their interactions with the external world and their own internal states.

These internal state variables would not need to replicate human biology directly but would function as persistent signals tracking things like uncertainty, processing load and confidence that could shape the system's outputs and constrain its behavior over time.
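
To make the idea concrete, here is a minimal, hypothetical sketch (in Python) of what such persistent internal signals could look like in software. It is not taken from the paper; the variable names, thresholds and update rules are assumptions chosen only to illustrate how signals such as uncertainty and processing load might accumulate across requests and constrain a system's behaviour.

    from dataclasses import dataclass

    @dataclass
    class InternalState:
        uncertainty: float = 0.0      # running estimate of how unsure recent answers were
        processing_load: float = 0.0  # rough proxy for how much work recent requests took
        confidence: float = 1.0       # willingness to answer without deferring or hedging

        def update(self, answer_entropy: float, tokens_processed: int) -> None:
            # Blend new evidence into persistent signals (exponential moving averages),
            # so the state carries over between requests rather than resetting each time.
            self.uncertainty = 0.9 * self.uncertainty + 0.1 * answer_entropy
            self.processing_load = 0.9 * self.processing_load + 0.1 * (tokens_processed / 1000)
            self.confidence = max(0.0, 1.0 - self.uncertainty)

        def should_defer(self) -> bool:
            # Constrain behaviour: hedge or hand off when internal signals are degraded.
            return self.uncertainty > 0.7 or self.processing_load > 5.0

    state = InternalState()
    for entropy in (0.9, 0.8, 0.95):                  # a run of highly uncertain answers
        state.update(answer_entropy=entropy, tokens_processed=12000)
    print(state.uncertainty, state.should_defer())    # signals build up gradually across requests

In this toy version the state persists between requests and gates the system's willingness to answer, which is the behavioural role the authors envision for internal embodiment.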

The authors also propose a new class of tests, or benchmarks, designed to measure a system’s internal embodiment. Existing AI benchmarks focus almost exclusively on external performance, such as whether the system can navigate a space, identify an object or complete a task. The UCLA researchers argue the field needs evaluations that probe whether a system can monitor its own internal states, maintain stability when those states are disrupted and behave pro-socially in ways that emerge from shared internal representations rather than statistical mimicry.

“What this work does is bring that insight directly to bear on AI development,” Iacoboni said. “If we want AI systems that are genuinely aligned with human behavior — not just superficially fluent — we may need to give them vulnerabilities and checks that function like internal self-regulators.”

AI scribes linked to modest reductions in electronic health record use and clinical documentation time



Multi-site study finds clinicians who used ambient documentation in more than 50% of patient visits experienced greatest reductions in documentation burden




Mass General Brigham





Documenting a patient visit in the electronic health record (EHR) is essential to healthcare delivery, but also a major contributor to clinician burnout. Artificial intelligence (AI)-enabled ambient documentation, or “AI scribes,” can automatically generate draft clinical notes for review after an appointment. While they have been shown to reduce clinician burnout, large-scale studies examining how these technologies impact clinician workflows are lacking.

A new study, co-led by investigators from Mass General Brigham and the University of California, San Francisco, tracked ambient documentation use across five U.S. hospitals for more than two years. The researchers found that AI scribes were associated with modest daily reductions of 13 minutes in EHR usage and 16 minutes in documentation time, representing relative decreases of 3% and 10%, respectively. The findings, published in JAMA, also showed a slight increase in productivity, measured as 0.5 additional patient visits per week. According to the study’s authors, these insights should encourage health systems to investigate more closely how the new technologies are affecting clinician workflows.

“Previous studies link ambient documentation to a significant decrease in burnout, but the underlying drivers of this reduction have been unclear,” said senior author Rebecca G. Mishuris, MD, MS, MPH, Chief Health Information Officer at Mass General Brigham. “The modest reductions in documentation time we observed are unlikely to fully account for changes in burnout, underscoring the need to understand how these tools change the way clinicians approach care delivery while using them.”

These findings are the first published results of the Ambient Clinical Documentation Collaborative (ACDC), a multi-organizational research effort. More than 1,800 clinicians using AI scribes in the present study were compared with 6,770 control clinicians at the same institutions.

The most pronounced improvements in EHR use and documentation patterns were observed among primary care physicians, advanced practice providers, female clinicians, and those who used ambient documentation in at least half of their patient encounters. Clinicians who used AI scribes for more than 50% of visits experienced twice the reduction in total EHR time and three times the reduction in documentation time, yet only 32% of users adopted the technologies that frequently. Revenue increases associated with seeing more patients were statistically significant, but nominal ($167 per month, per clinician adopting an AI scribe). Time spent using the EHR outside of work hours did not significantly differ between groups. Ongoing studies can determine whether these changes result in increased time spent on other activities, and how this may impact clinician burnout.

“Ambient documentation use is expanding rapidly across U.S. health care, making it essential to study how these technologies are impacting clinicians in real time,” said lead and corresponding study author Lisa Rotenstein, MD, MBA, an associate professor of medicine at the UCSF School of Medicine, and director of The Center for Physician Experience and Practice Excellence at Brigham and Women’s Hospital. “Our study demonstrates the impact of AI scribes in diverse real-world implementations at multiple sites. It also emphasizes the value of helping clinicians become comfortable with the technology so that they are reaping its full benefits via frequent use.” 

Authorship: In addition to Mishuris and Rotenstein, study co-authors include A Jay Holmgren, Robert Thombley, Aditi Sriram, Reema H. Dbouk, Melissa Jost, Debbie Aizenberg, Scott MacDonald, Naga Kanaparthy, Brian Williams, Allen Hsiao, Lee Schwamm, Sara Murray, Maria Byron, Hossein Soleimani, Jacqueline G. You, Amanda J. Centi, Christine Iannaccone, Michelle Frits, Adam B. Landman, Karandeep Singh, Ming Tai-Seale, Jie Cao, Katharine Lawrence, Devin Mann, Christopher Holland, Bryan Blanchette, Jesse Ehrenfeld, Edward R. Melnick, David W. Bates, and Julia Adler-Milstein.

Disclosures: Rotenstein reported receiving grants and travel support from FeelBetter Inc., serving on an advisory board for Eko Health, and on an AI advisory board for Augmedix Inc. A full list of disclosures for other co-authors can be found in the paper.

Funding: This research was supported by a grant from the Advancing a Healthier Wisconsin Endowment, a gift from Kathy Hao to establish the Impact Monitoring Platform for AI in Clinical Care at UCSF, and an AHRQ grant R01HS029470.

Paper cited: Rotenstein L, et al. “Changes in Clinician Time Expenditure and Visit Quantity With Adoption of Artificial Intelligence-Powered Scribes: A Multisite Study.” JAMA DOI: 10.1001/jama.2026.2253

 

###

About Mass General Brigham

Mass General Brigham is an integrated academic health care system, uniting great minds to solve the hardest problems in medicine for our communities and the world. Mass General Brigham connects a full continuum of care across a system of academic medical centers, community and specialty hospitals, a health insurance plan, physician networks, community health centers, home care, and long-term care services. Mass General Brigham is a nonprofit organization committed to patient care, research, teaching, and service to the community. In addition, Mass General Brigham is one of the nation’s leading biomedical research organizations with several Harvard Medical School teaching hospitals. For more information, please visit massgeneralbrigham.org.

 

 

Artificial intelligence could transform patient education in eye care, new research shows



A multilingual, voice-enabled chatbot helps patients access retinal detachment advice through personalised, real-time, clinically grounded conversations.



University of East London





From hospital leaflets to spoken answers in dozens of languages, new research from the University of East London (UEL) suggests artificial intelligence could dramatically improve how patients learn about serious eye conditions.

A research team led by UEL’s Dr Mohammad Hossein Amirhosseini and Dr Fatima Kalabi from Queen’s Hospital in London, in collaboration with Moorfields Eye Hospital in London, and Inselspital University Hospital of Bern in Switzerland, has developed a multilingual, voice-enabled AI chatbot designed to help people understand retinal detachment - a sight-threatening condition that often requires urgent surgery. The system allows patients to ask questions in natural language and receive clear, clinically grounded answers drawn from trusted medical sources.

The technology is designed as a modern alternative to traditional patient information leaflets, which can be difficult for many patients to read or interpret - particularly those with visual impairment, limited health literacy, or language barriers. Instead of relying on static written material, the AI tool offers dynamic, conversational interaction and real-time responses, and can speak its answers aloud in multiple languages.

The system utilises large language models customised with a method known as retrieval-augmented generation, which ensures responses are grounded in verified clinical knowledge rather than freely generated text. The research team built a clinician-curated and validated knowledge base using peer-reviewed and hospital-approved information about retinal detachment and then evaluated how well different AI models could answer patient questions.
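
The press release does not publish the system's code, but the general shape of a retrieval-augmented pipeline is well established. The sketch below is a generic illustration only: it assumes a tiny clinician-approved knowledge base held as plain-text passages, uses TF-IDF retrieval (scikit-learn) rather than whatever retrieval method the UEL team chose, and stubs out the final language-model call.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Stand-in for the clinician-curated, hospital-approved knowledge base.
    knowledge_base = [
        "Retinal detachment is a medical emergency that usually requires urgent surgery.",
        "After retinal surgery, patients may need to hold their head in a set position.",
        "Flashes of light and a sudden shower of floaters can be warning signs.",
    ]

    vectorizer = TfidfVectorizer().fit(knowledge_base)
    kb_vectors = vectorizer.transform(knowledge_base)

    def retrieve(question: str, top_k: int = 2) -> list[str]:
        # Rank approved passages by similarity to the patient's question.
        scores = cosine_similarity(vectorizer.transform([question]), kb_vectors)[0]
        ranked = sorted(range(len(knowledge_base)), key=lambda i: scores[i], reverse=True)
        return [knowledge_base[i] for i in ranked[:top_k]]

    def build_prompt(question: str) -> str:
        # The retrieved passages are injected into the prompt so the language model
        # answers from verified clinical sources rather than generating freely.
        context = "\n".join(retrieve(question))
        return f"Answer using only this approved context:\n{context}\n\nQuestion: {question}"

    print(build_prompt("What should I look out for after my operation?"))

In the deployed system a large language model such as GPT-4o would complete the generated prompt; the grounding step is what ties its answers back to the validated knowledge base.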

Testing compared three leading large language models - GPT-4o, Claude Opus, and Gemini 1.5 Pro - using 50 clinically relevant questions. The study found GPT-4o consistently outperformed the other models, producing the most accurate and reliable responses when assessed with widely used language evaluation metrics.
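
For readers unfamiliar with how such comparisons work, the sketch below scores candidate answers against a clinician-approved reference answer using ROUGE-L (via the rouge-score package). The press release does not name the specific metrics the team applied, so ROUGE-L here is only a stand-in for the "widely used language evaluation metrics" it mentions, and the answers are invented for the example.

    from rouge_score import rouge_scorer

    reference = "Retinal detachment usually needs urgent surgery to preserve sight."
    model_answers = {
        "model_a": "Urgent surgery is usually needed to preserve sight after retinal detachment.",
        "model_b": "You should rest, and the problem may settle on its own.",
    }

    scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
    for name, answer in model_answers.items():
        score = scorer.score(reference, answer)["rougeL"].fmeasure
        # Higher F1 means the candidate overlaps more with the approved answer.
        print(f"{name}: ROUGE-L F1 = {score:.2f}")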

The chatbot also includes accessibility features designed for real-world healthcare settings. Patients can speak their questions instead of typing them, while the system can read responses aloud using multilingual text-to-speech technology. This means people with low vision or limited English proficiency can still access clinically reliable and easy-to-understand information about their condition.

Research technical lead Dr Amirhosseini, Associate Professor in Computer Science and Digital Technologies, said the work highlights how AI could strengthen communication between patients and healthcare providers.

“Patient information leaflets have been used for decades, but they are static and often difficult for people to engage with, especially when they are anxious or struggling with vision problems,” he said.

“Our research shows that a carefully designed AI system can provide personalised, context-aware explanations, answer questions in real time and deliver information in multiple languages and formats. The goal is not to replace clinicians, but to augment clinical communication and empower patients to better understand their condition and feel more confident during the pre-operation and post-operation phases.”

Because retinal detachment requires timely treatment and careful post-operative care, clear communication is crucial. Patients frequently report confusion about symptoms, recovery steps and follow-up care after surgery. By offering an interactive way to ask questions at any time, the chatbot could help reinforce clinical guidance, reduce anxiety, and improve patient adherence to treatment plans outside busy consultations.

The system was developed as a research prototype and currently operates in a secure local environment. All answers are generated from clinician-approved information sources to ensure accuracy, transparency, and alignment with clinical governance standards.

Researchers say the approach could eventually be adapted for other conditions and clinical pathways where patients need clear, accessible explanations of complex medical information, including chronic disease management, surgical care, and post-operative rehabilitation pathways.

The study, Transforming patient education on retinal detachment: A multilingual voice-enabled retrieval-augmented generation chatbot, was published in the peer-reviewed Journal of Artificial Intelligence & Robotics.

AI agent future is coming, OpenClaw creator tells AFP



By AFP
March 30, 2026


OpenClaw can be connected to existing AI models and given simple instructions through instant messaging apps, as if to a friend or colleague - Copyright AFP ADEK BERRY


Katie Forster

Peter Steinberger’s artificial intelligence agent tool OpenClaw has taken the tech world by storm with its ability to execute real-life tasks such as checking him in for his flight to Tokyo.

AI is not yet a ubiquitous personal assistant for ordinary people, but “you’ll see much more of that this year because this is the year of agents”, Steinberger told AFP in the Japanese capital on Monday.

“There are still some things we need to do to make it better,” the Austrian programmer said.

Demand is ramping up, however, with more developers now “making the future happen”, he added in an interview during a gathering for OpenClaw enthusiasts.

When downloaded, OpenClaw can be connected to existing AI models and given simple instructions through instant messaging apps, as if to a friend or colleague.

Jensen Huang, head of the world’s most valuable company Nvidia, this month hailed the tool — whose symbol is a bright red lobster — as “the next ChatGPT”.

But all the buzz has raised concerns over the cybersecurity risks of allowing AI systems vulnerable to hacks to access personal data such as bank details.

– Chinese ‘momentum’ –

Steinberger built OpenClaw in November while playing around with AI coding tools in an attempt to organise his digital life.

He has since been hired by ChatGPT creator OpenAI “to drive the next generation of personal agents”, the US startup’s boss Sam Altman said in February.

“What you have to know about OpenClaw is, like, it couldn’t have come from those big companies,” Steinberger told AFP.

“Those companies would have worried too much about what could go wrong instead of just, like — I wanted to just show people I’ve been into the future,” he said.

While tech giants work out how agent tools could be used by businesses, the next AI innovation could come from “someone who just wants to have fun”, Steinberger said.

At Monday’s “ClawCon” event in Tokyo, where many of the hundreds of participants were dressed as lobsters, OpenClaw demos were held on stage and experts helped attendees install their agents.

Similar scenes have been seen across China, where users have been particularly quick to embrace OpenClaw’s potential to organise emails, help with coding and a plethora of other digital tasks.

“If you see it as a competition, it certainly looks like China is gaining a lot of momentum” in the AI sector, Steinberger said.

“But right now there’s still quite a bit of a leap between the best models from China and the best models in the US.”

– AI ‘hammer’ –

OpenClaw’s success in China has led national cybersecurity authorities and Beijing’s IT ministry to issue official warnings over potential risks.

Is Steinberger concerned that people could use his tool for illicit purposes?

“Yes, I do worry a bit, especially because there’s now a whole cottage industry of companies that try to make a big buck and make it even simpler to install OpenClaw,” he said.

“I purposefully didn’t make it simpler so people would stop and read and understand: what is AI, that AI can make mistakes, what is prompt injection — some basics that you really should understand when you use that technology.”

But at the end of the day, “if you build a hammer… you can hurt yourself. So should we not build hammers any more”?

A Reddit-like pseudo social network for OpenClaw agents called Moltbook, where chatbots converse, has also grabbed headlines and provoked soul-searching over AI.

“A lot of that was, in my view, very much driven by humans to just create those stories,” Steinberger said, adding that joining OpenAI means he now has more resources to use on “cool ideas”.

He said 2023-2024 “was the year of ChatGPT, last year was the year of the coding agent, this year’s going to be the year of the general agent”.

“I love that I helped a lot of people to bring AI from this scary thing into something that is fun and weird and gets them excited, because we need to make it good for this next century,” Steinberger explained.

“We need more people to think more about AI.”


Dubious AI detectors drive ‘pay-to-humanize’ scam

By AFP
March 30, 2026


A crop of fraudulent AI detection tools risk adding another layer of online deception. - Copyright GETTY IMAGES NORTH AMERICA/AFP Michael M. Santiago


Anuj CHOPRA, with Ede ZABORSZKY in Vienna, Magdalini GKOGKOU in Athens and Liesa PAUWELS in The Hague

Feed an Iranian news dispatch or a literary classic into some text detectors, and they return the same verdict: AI-generated. Then comes the pitch: pay to “humanize” the writing, a pattern experts say bears the hallmarks of a scam.

As AI falsehoods explode across social media, often outpacing the capacity of professional fact-checkers, bogus detectors risk adding another layer of deception to an already fractured information ecosystem.

While even reliable AI detectors can produce false results, researchers say a crop of fraudulent tools has emerged online, easily weaponized to discredit authentic content and tarnish reputations.

AFP’s fact-checkers identified three such text detectors that claim to estimate what percentage of a text is AI-generated. The tools — prompted in four languages — not only misidentified authentic text as AI-generated but also attempted to monetize those errors.

One detector, JustDone AI, processed a human-written report about the US-Iran war and wrongly concluded it contained “88% AI content.” It then offered to scrub any trace of AI for a fee.

“Your AI text is humanizing,” the site claimed, leading to a page where “100% unique text” was locked behind a paywall charging up to $9.99.

Two other tools — TextGuard and Refinely — produced similar false positives and sought to monetize them.

– ‘Scams’ –

AFP presented its findings to all three detectors.

“Our system operates using modern AI models, and the results it provides are considered accurate within our technology,” TextGuard’s support team told AFP.

“At the same time, we cannot guarantee or compare results with other systems.”

JustDone also reiterated that “no AI detector can guarantee 100 percent accuracy.”

It acknowledged the free version of its AI detector “may provide less precise results” due to “high demand and the use of a lighter model designed for quick access.”

Echoing AFP’s findings, one user on a review platform complained that “even with 100% human-written material, JustDone still flags it as AI.”

AFP fed the tools multiple human-written samples — in Dutch, Greek, Hungarian, and English. All were wrongly flagged as having high AI content, including passages from an acclaimed 1916 Hungarian classic.

The tools returned AI flags regardless of input — even for nonsensical text.

JustDone and Refinely appeared to operate even without an internet connection, suggesting their results may be scripted rather than genuine technical analysis.

“These are not AI detectors but scams to sell a ‘humanizing’ tool that will often return what we call ‘tortured phrases'” — unrelated jargon or nonsensical alternatives — Debora Weber-Wulff, a Germany-based academic who has researched detection tools, told AFP.

– ‘Liar’s dividend’ –

Illustrating how such tools can be used to discredit individuals, pro‑government influencers in Hungary claimed earlier this year that a document outlining the opposition’s election campaign had been entirely created by AI.

To support the unfounded allegation, they circulated screenshots on social media showing results from JustDone.

The tools tested by AFP sought to lure students and academics as clients, with two of them claiming their users came from top institutions such as Cornell University.

Cornell University told AFP it “does not have any established relations with AI detector companies.”

“Generative AI does provide an increased risk that students may use it to submit work that is not their own,” the university said.

“Unfortunately, it is unlikely that detection technologies will provide a workable solution to this problem. It can be very difficult to accurately detect AI-generated content.”

Fact-checkers, including those from AFP, often rely on AI visual detection tools developed by experts, which typically look for hidden watermarks and other digital clues.

However, they too can sometimes produce errors, making it necessary to supplement their findings with additional evidence such as open-source data.

The stakes are high as false readings from unreliable detectors threaten to erode trust in AI verification broadly — and feed a disinformation tactic researchers have dubbed the “liar’s dividend”: dismissing authentic content as AI fabrications.

“We often report on misinformers and other hoaxsters using AI to fabricate false images and videos,” said Waqar Rizvi from the misinformation tracker NewsGuard.

“Now, (we are) monitoring the opposite, but no less insidious phenomenon: claims that a visual was created by AI when in fact, it’s authentic.”

burs-ac/dw



Anthropic releases part of AI tool source code in ‘error’


By AFP
April 1, 2026


A figurine in front of the logo of the AI assistant "Claude" seen in Paris in February - Copyright AFP/File Joel Saget

Anthropic accidentally released part of the internal source code for its AI-powered coding assistant Claude Code due to “human error,” the company said Tuesday.

An internal-use file mistakenly included in a software update pointed to an archive containing nearly 2,000 files and 500,000 lines of code, which were quickly copied to developer platform GitHub.

“Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed,” an Anthropic spokesperson said.

“This was a release packaging issue caused by human error, not a security breach.”

A post on X sharing a link to the leaked code had more than 29 million views early on Wednesday.

The exposed code relates to the tool’s internal architecture but does not contain confidential data from Claude, the underlying AI model by Anthropic.

Claude Code’s source code was partially known, as the tool had been reverse-engineered by independent developers. An earlier version of the assistant had its source code exposed in February 2025.



AI giant Anthropic says ‘exploring’ Australia data centre investments


By AFP
March 31, 2026


Australia's arts sector has accused Anthropic and other AI companies of pushing to loosen copyright laws so chatbots can be trained on local songs and books - Copyright AFP JOEL SAGET
Steven TRASK

Artificial intelligence giant Anthropic is eyeing data centre investments in Australia, saying Wednesday the nation was a “natural partner” for work in the booming sector.

With immense renewable energy potential and vast stretches of uninhabited land, Australia has touted itself as a prime location for the energy-hungry data centres needed to power AI.

US-based Anthropic said it was “exploring investments in data centre infrastructure and energy throughout the country” after signing a memorandum of understanding with the Australian government.

“The visit to Australia marks the beginning of long-term collaboration and investment into the Asia-Pacific region,” the technology company said in a statement.

“Australia’s investment in AI safety makes it a natural partner for responsible AI development.”

The agreement, signed by Anthropic chief executive Dario Amodei in the capital, Canberra, said the firm would abide by local laws to “maintain strong social licence for investment”.

Australia’s arts sector has accused Anthropic and other AI companies of pushing to loosen copyright laws so chatbots can be trained on local songs and books.

Anthropic said it had also agreed to share AI research and safety information with Australian regulators, mirroring similar agreements in Japan and Britain.

Industry Minister Tim Ayres said Australia and Anthropic would “harness AI responsibly”.



– Energy-intensive –



New data centres — warehouse facilities that store files and power AI tools — are springing up worldwide.

But there are increasing fears about the environmental impact of hulking data hubs.

Singapore halted data centre developments between 2019 and 2022 over energy, water and land use worries.

Australia last week adopted new rules governing the operation of data centres.

Tech companies must show how they will source renewable energy and minimise their emissions.

“As demand for AI grows, continued expansion of data centre infrastructure must reflect Australian values and be environmentally and socially sustainable,” the guidelines state.

Anthropic’s Claude is the Pentagon’s most widely deployed frontier AI model and the only such model currently operating on its classified systems.

But the company is locked in a dispute with the US government, after saying it would refuse to let its systems be used for mass surveillance.

Washington has since described Anthropic’s tools as an “unacceptable risk to national security”.

The United States has not only blocked use of the company’s technology by the Pentagon, but also requires all defense contractors to certify that they do not use Anthropic’s models.


Life with AI causing human brain ‘fry’


By AFP
March 30, 2026


Anthropic's AI assistant Claude vies with rival chatbots from OpenAI, Google and others to be the "agent" relied upon by businesses to independently get jobs done. — © AFP/File Joel Saget


Thomas URBAIN

Heavy users of artificial intelligence report being overwhelmed by trying to keep up with and on top of the technology designed to make their lives easier.

Too many lines of code to analyze, armies of AI assistants to wrangle, and lengthy prompts to draft are among the laments by hard-core AI adopters.

Consultants at Boston Consulting Group (BCG) have dubbed the phenomenon “AI brain fry,” a state of mental exhaustion stemming “from the excessive use or supervision of artificial intelligence tools, pushed beyond our cognitive limits.”

The rise of AI agents that tend to computer tasks on demand has put users in the position of managing smart, fast digital workers rather than having to grind through jobs themselves.

“It’s a brand-new kind of cognitive load,” said Ben Wigler, co-founder of the start-up LoveMind AI. “You have to really babysit these models.”

People experiencing AI burnout are not casually dabbling with the technology — they are creating legions of agents that need to be constantly managed, according to Tim Norton, founder of the AI integration consultancy nouvreLabs.

“That’s what’s causing the burnout,” Norton wrote in an X post.

However, BCG and others do not see it as a case of AI causing people to get burned out on their jobs.

A BCG study of 1,488 professionals in the United States actually found a decline in burnout rates when AI took over repetitive work tasks.

– Coding vigilance –

For now, “brain fry” is primarily a bane for software developers given that AI agents have excelled quickly at writing computer code.

“The cruel irony is that AI-generated code requires more careful review than human-written code,” software engineer Siddhant Khare wrote in a blog post.

“It is very scary to commit to hundreds of lines of AI-written code because there is a risk of security flaws or simply not understanding the entire codebase,” added Adam Mackintosh, a programmer for a Canadian company.



Anthropic has released tools such as Claude Code that excel at helping developers write software – Copyright GETTY IMAGES NORTH AMERICA/AFP Michael M. Santiago

And if AI agents are not kept on course by a human, they could misunderstand an instruction and wander down an errant processing path, resulting in a business paying for wasted computing power.

– ‘Irritable’ –

Wigler noted that the promise of hitting goals fast with AI tempts tech start-up teams already prone to long workdays to lose track of time and stay on the job even deeper into the night.

“There is a unique kind of reward hacking that can go on when you have productivity at the scale that encourages even later hours,” Wigler said.

Mackintosh recalled spending 15 consecutive hours fine-tuning around 25,000 lines of code in an application.

“At the end, I felt like I couldn’t code anymore,” he recalled.

“I could tell my dopamine was shot because I was irritable and didn’t want to answer basic questions about my day.”

A musician and teacher who asked to remain anonymous spoke of struggling to put his brain “on pause”, instead spending evenings experimenting with AI.

Nonetheless, everyone interviewed for this story expressed overall positive views of AI despite the downsides.

BCG recommends in a recently published study that company leaders establish clear limits regarding employee use and supervision of AI.

However, “That self-care piece is not really an American workplace value,” Wigler said.

“So, I am very skeptical as to whether or not it’s going to be healthy or even high quality in the long term.”

One man, his dog, and ChatGPT: Australia’s AI vaccine saga


By AFP
March 30, 2026




Purple ROMERO

Desperate to help his sick dog, one Australian man went down the ultimate ChatGPT research hole, using artificial intelligence to design a personalised experimental treatment and finding top scientists to administer it.

Paul Conyngham’s months-long quest to fight his rescue mutt Rosie’s cancer has grabbed the attention of OpenAI boss Sam Altman, who called it an “amazing story” in an X post on Friday.

Sydney-based AI consultant Conyngham told AFP that eight-year-old Rosie’s mast cell cancer is now in partial remission and her biggest tumour has shrunk dramatically.

“She regained a lot of mobility and function” after receiving a custom mRNA vaccine along with powerful immunotherapy in December, he said.

Conyngham does not call his findings a cure — but experts unrelated to the dogged endeavours said they highlight AI’s potential to accelerate medical research.

“I would have conversations and just keep them going non-stop” with ChatGPT, Gemini and Grok to study cancer therapies in-depth, Conyngham said.

Following the chatbots’ advice, he paid $3,000 to have Rosie’s genome sequenced, and used the same online tools to analyse her DNA data.

Next he turned to AlphaFold, a scientific AI model that won 2024’s chemistry Nobel, to better understand one of the mutated doggy genes.

Conyngham sought the help of a University of New South Wales (UNSW) team — also thanks to a ChatGPT recommendation — and other academics in Australia who made his research a reality.

– ‘Just a rash’ –

Rosie’s cancer was misdiagnosed for nearly a year, Conyngham said on the phone during one of the long daily walks the pair have resumed.

“I took her to the vet three times. And two times, the vet said, don’t worry about it, it’s just a rash,” he said.

But Rosie got sicker and a biopsy showed in 2024 that she did have terminal cancer.

Having tried chemotherapy, standard immunotherapy and surgery, costs were mounting and Conyngham wanted more options.

So he used AI to delve deep into the world of emerging treatments including mRNA vaccines, which train the body’s immune system and were widely used during the Covid pandemic.

“This was not a clinical trial by any means” and “it’s not that AI cured cancer”, said UNSW professor Martin Smith, who sequenced Rosie’s genome for Paul.

“It was really driven by his determination to help his dog.”

The combination of “three different disruptive technologies: genome sequencing, artificial intelligence, and RNA therapeutics… offers new possibilities and challenges”, Smith said.

– AI promise –

Chatbots also assisted Conyngham in navigating the reams of paperwork for ethical approval.

And through his new scientific network, he met a professor at the University of Queensland able to administer the fine-tuned treatment.

Not all the tumours responded as well as the largest one, however. Rosie has had to have another operation since, and it’s unclear how long she has left to live.

The “short answer is we don’t know for sure” what actually led to the reduction in size of Rosie’s biggest tumour, said Pall Thordarson, director of UNSW’s RNA Institute, which created the vaccine.

“He used the AI program… to design the actual mRNA sequence. And then he gave that information to us,” Thordarson explained.

“AI holds lots of promise to improve and accelerate our research strategies,” Nick Semenkovich at the Medical College of Wisconsin, unrelated to the Rosie saga, told AFP.

But UNSW and Conyngham “haven’t published scientific details outside of their press release and interviews, so we don’t know enough about the vaccine to understand how much AI helped in its development — or if the vaccine worked the way it was designed”, Semenkovich said.

Patrick Tang Ming-kuen, a professor from The Chinese University of Hong Kong, said AI-powered research could help pets and humans survive, although the risk of errors is real.

“AI transforms a ‘needle-in-a-haystack’ search into a data-driven selection process, drastically shortening the timeframe between diagnosis and vaccine construction,” he said.

Since Conyngham’s story went global, Smith said his team have been fielding various new requests.

“You know: my cat’s got a disease, my dog’s got a disease, my aunt has got a disease.”

But “it’s hard for us to be able to help”, he said. “There’s a lot of things that have to align.”