AI agent future is coming, OpenClaw creator tells AFP
By AFP
March 30, 2026

OpenClaw can be connected to existing AI models and given simple instructions through instant messaging apps, as if to a friend or colleague - Copyright AFP ADEK BERRY
Katie Forster
Peter Steinberger’s artificial intelligence agent tool OpenClaw has taken the tech world by storm with its ability to execute real-life tasks such as checking him in for his flight to Tokyo.
AI is not yet a ubiquitous personal assistant for ordinary people, but “you’ll see much more of that this year because this is the year of agents”, Steinberger told AFP in the Japanese capital on Monday.
“There are still some things we need to do to make it better,” the Austrian programmer said.
Demand is ramping up, however, with more developers now “making the future happen”, he added in an interview during a gathering for OpenClaw enthusiasts.
When downloaded, OpenClaw can be connected to existing AI models and given simple instructions through instant messaging apps, as if to a friend or colleague.
Jensen Huang, head of the world’s most valuable company Nvidia, this month hailed the tool — whose symbol is a bright red lobster — as “the next ChatGPT”.
But all the buzz has raised concerns over the cybersecurity risks of allowing AI systems vulnerable to hacks to access personal data such as bank details.
– Chinese ‘momentum’ –
Steinberger built OpenClaw in November while playing around with AI coding tools in an attempt to organise his digital life.
He has since been hired by ChatGPT creator OpenAI “to drive the next generation of personal agents”, the US startup’s boss Sam Altman said in February.
“What you have to know about OpenClaw is, like, it couldn’t have come from those big companies,” Steinberger told AFP.
“Those companies would have worried too much about what could go wrong instead of just, like — I wanted to just show people I’ve been into the future,” he said.
While tech giants work out how agent tools could be used by businesses, the next AI innovation could come from “someone who just wants to have fun”, Steinberger said.
At Monday’s “ClawCon” event in Tokyo, where many of the hundreds of participants were dressed as lobsters, OpenClaw demos were held on stage and experts helped attendees install their agents.
Similar scenes have been seen across China, where users have been particularly quick to embrace OpenClaw’s potential to organise emails, help with coding and a plethora of other digital tasks.
“If you see it as a competition, it certainly looks like China is gaining a lot of momentum” in the AI sector, Steinberger said.
“But right now there’s still quite a bit of a leap between the best models from China and the best models in the US.”
– AI ‘hammer’ –
OpenClaw’s success in China has led national cybersecurity authorities and Beijing’s IT ministry to issue official warnings over potential risks.
Is Steinberger concerned that people could use his tool for illicit purposes?
“Yes, I do worry a bit, especially because there’s now a whole cottage industry of companies that try to make a big buck and make it even simpler to install OpenClaw,” he said.
“I purposefully didn’t make it simpler so people would stop and read and understand: what is AI, that AI can make mistakes, what is prompt injection — some basics that you really should understand when you use that technology.”
But at the end of the day, “if you build a hammer… you can hurt yourself. So should we not build hammers any more”?
A Reddit-like pseudo social network for OpenClaw agents called Moltbook, where chatbots converse, has also grabbed headlines and provoked soul-searching over AI.
“A lot of that was, in my view, very much driven by humans to just create those stories,” Steinberger said, adding that joining OpenAI means he now has more resources to use on “cool ideas”.
He said 2023-2024 “was the year of ChatGPT, last year was the year of the coding agent, this year’s going to be the year of the general agent”.
“I love that I helped a lot of people to bring AI from this scary thing into something that is fun and weird and gets them excited, because we need to make it good for this next century,” Steinberger explained.
“We need more people to think more about AI.”
Dubious AI detectors drive ‘pay-to-humanize’ scam
By AFP
March 30, 2026

A crop of fraudulent AI detection tools risk adding another layer of online deception. - Copyright GETTY IMAGES NORTH AMERICA/AFP Michael M. Santiago
Anuj CHOPRA, with Ede ZABORSZKY in Vienna, Magdalini GKOGKOU in Athens and Liesa PAUWELS in The Hague
Feed an Iranian news dispatch or a literary classic into some text detectors, and they return the same verdict: AI-generated. Then comes the pitch: pay to “humanize” the writing, a pattern experts say bears the hallmarks of a scam.
As AI falsehoods explode across social media, often outpacing the capacity of professional fact-checkers, bogus detectors risk adding another layer of deception to an already fractured information ecosystem.
While even reliable AI detectors can produce false results, researchers say a crop of fraudulent tools has emerged online, easily weaponized to discredit authentic content and tarnish reputations.
AFP’s fact-checkers identified three such text detectors that claim to estimate what percentage of a given text is AI-generated. The tools — prompted in four languages — not only misidentified authentic text as AI-generated but also attempted to monetize those errors.
One detector, JustDone AI, processed a human-written report about the US-Iran war and wrongly concluded it contained “88% AI content.” It then offered to scrub any trace of AI for a fee.
“Your AI text is humanizing,” the site claimed, leading to a page where “100% unique text” was locked behind a paywall charging up to $9.99.
Two other tools — TextGuard and Refinely — produced similar false positives and sought to monetize them.
– ‘Scams’ –
AFP presented its findings to all three detectors.
“Our system operates using modern AI models, and the results it provides are considered accurate within our technology,” TextGuard’s support team told AFP.
“At the same time, we cannot guarantee or compare results with other systems.”
JustDone also reiterated that “no AI detector can guarantee 100 percent accuracy.”
It acknowledged the free version of its AI detector “may provide less precise results” due to “high demand and the use of a lighter model designed for quick access.”
Echoing AFP’s findings, one user on a review platform complained that “even with 100% human-written material, JustDone still flags it as AI.”
AFP fed the tools multiple human-written samples — in Dutch, Greek, Hungarian, and English. All were wrongly flagged as having high AI content, including passages from an acclaimed 1916 Hungarian classic.
The tools returned AI flags regardless of input — even for nonsensical text.
JustDone and Refinely appeared to operate even without an internet connection, suggesting their results may be scripted rather than genuine technical analysis.
“These are not AI detectors but scams to sell a ‘humanizing’ tool that will often return what we call ‘tortured phrases’” — unrelated jargon or nonsensical alternatives — Debora Weber-Wulff, a Germany-based academic who has researched detection tools, told AFP.
– ‘Liar’s dividend’ –
Illustrating how such tools can be used to discredit individuals, pro‑government influencers in Hungary claimed earlier this year that a document outlining the opposition’s election campaign had been entirely created by AI.
To support the unfounded allegation, they circulated screenshots on social media showing results from JustDone.
The tools tested by AFP sought to lure students and academics as clients, with two of them claiming their users came from top institutions such as Cornell University.
Cornell University told AFP it “does not have any established relations with AI detector companies.”
“Generative AI does provide an increased risk that students may use it to submit work that is not their own,” the university said.
“Unfortunately, it is unlikely that detection technologies will provide a workable solution to this problem. It can be very difficult to accurately detect AI-generated content.”
Fact-checkers, including those from AFP, often rely on AI visual detection tools developed by experts, which typically look for hidden watermarks and other digital clues.
However, they too can sometimes produce errors, making it necessary to supplement their findings with additional evidence such as open-source data.
The stakes are high as false readings from unreliable detectors threaten to erode trust in AI verification broadly — and feed a disinformation tactic researchers have dubbed the “liar’s dividend”: dismissing authentic content as AI fabrications.
“We often report on misinformers and other hoaxsters using AI to fabricate false images and videos,” said Waqar Rizvi from the misinformation tracker NewsGuard.
“Now, (we are) monitoring the opposite, but no less insidious phenomenon: claims that a visual was created by AI when in fact, it’s authentic.”
burs-ac/dw
Anthropic releases part of AI tool source code in ‘error’
By AFP
April 1, 2026

A figurine in front of the logo of the AI assistant "Claude" seen in Paris in February - Copyright AFP/File Joel Saget
Anthropic accidentally released part of the internal source code for its AI-powered coding assistant Claude Code due to “human error,” the company said Tuesday.
An internal-use file mistakenly included in a software update pointed to an archive containing nearly 2,000 files and 500,000 lines of code, which were quickly copied to developer platform GitHub.
“Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed,” an Anthropic spokesperson said.
“This was a release packaging issue caused by human error, not a security breach.”
A post on X sharing a link to the leaked code had more than 29 million views early on Wednesday.
The exposed code relates to the tool’s internal architecture but does not contain confidential data from Claude, the underlying AI model by Anthropic.
Claude Code’s source code was already partially known, as the tool had been reverse-engineered by independent developers. An earlier version of the assistant had its source code exposed in February 2025.
AI giant Anthropic says ‘exploring’ Australia data centre investments
By AFP
March 31, 2026

Australia's arts sector has accused Anthropic and other AI companies of pushing to loosen copyright laws so chatbots can be trained on local songs and books - Copyright AFP JOEL SAGET
Steven TRASK
Artificial intelligence giant Anthropic is eyeing data centre investments in Australia, saying Wednesday the nation was a “natural partner” for work in the booming sector.
With immense renewable energy potential and vast stretches of uninhabited land, Australia has touted itself as a prime location for the power-hungry data centres needed to power AI.
US-based Anthropic said it was “exploring investments in data centre infrastructure and energy throughout the country” after signing a memorandum of understanding with the Australian government.
“The visit to Australia marks the beginning of long-term collaboration and investment into the Asia-Pacific region,” the technology company said in a statement.
“Australia’s investment in AI safety makes it a natural partner for responsible AI development.”
The agreement, signed by Anthropic chief executive Dario Amodei in the capital, Canberra, said the firm would abide by local laws to “maintain strong social licence for investment”.
Australia’s arts sector has accused Anthropic and other AI companies of pushing to loosen copyright laws so chatbots can be trained on local songs and books.
Anthropic said it had also agreed to share AI research and safety information with Australian regulators, mirroring similar agreements in Japan and Britain.
Industry Minister Tim Ayres said Australia and Anthropic would “harness AI responsibly”.
– Energy-intensive –
New data centres — warehouse facilities that store files and power AI tools — are springing up worldwide.
But there are increasing fears about the environmental impact of hulking data hubs.
Singapore halted data centre developments between 2019 and 2022 over energy, water and land use worries.
Australia last week adopted new rules governing the operation of data centres.
Tech companies must show how they will source renewable energy and minimise their emissions.
“As demand for AI grows, continued expansion of data centre infrastructure must reflect Australian values and be environmentally and socially sustainable,” the guidelines state.
Anthropic’s Claude is the Pentagon’s most widely-deployed frontier AI model and the only such model currently operating on its classified systems.
But the company is locked in a dispute with the US government, after saying it would refuse to let its systems be used for mass surveillance.
Washington has since described Anthropic’s tools as an “unacceptable risk to national security”.
The United States has not only blocked use of the company’s technology by the Pentagon, but also requires all defense contractors to certify that they do not use Anthropic’s models.
Life with AI causing human brain ‘fry’
By AFP
March 30, 2026

Anthropic's AI assistant Claude vies with rival chatbots from OpenAI, Google and others to be the "agent" relied upon by businesses to independently get jobs done. — © AFP/File Joel Saget
Thomas URBAIN
Heavy users of artificial intelligence report being overwhelmed by trying to keep up with and on top of the technology designed to make their lives easier.
Too many lines of code to analyze, armies of AI assistants to wrangle, and lengthy prompts to draft are among the laments by hard-core AI adopters.
Consultants at Boston Consulting Group (BCG) have dubbed the phenomenon “AI brain fry,” a state of mental exhaustion stemming “from the excessive use or supervision of artificial intelligence tools, pushed beyond our cognitive limits.”
The rise of AI agents that tend to computer tasks on demand has put users in the position of managing smart, fast digital workers rather than having to grind through jobs themselves.
“It’s a brand-new kind of cognitive load,” said Ben Wigler, co-founder of the start-up LoveMind AI. “You have to really babysit these models.”
People experiencing AI burnout are not casually dabbling with the technology — they are creating legions of agents that need to be constantly managed, according to Tim Norton, founder of the AI integration consultancy nouvreLabs.
“That’s what’s causing the burnout,” Norton wrote in an X post.
However, BCG and others do not see it as a case of AI causing people to get burned out on their jobs.
A BCG study of 1,488 professionals in the United States actually found a decline in burnout rates when AI took over repetitive work tasks.
– Coding vigilance –
For now, “brain fry” is primarily a bane for software developers given that AI agents have excelled quickly at writing computer code.
“The cruel irony is that AI-generated code requires more careful review than human-written code,” software engineer Siddhant Khare wrote in a blog post.
“It is very scary to commit to hundreds of lines of AI-written code because there is a risk of security flaws or simply not understanding the entire codebase,” added Adam Mackintosh, a programmer for a Canadian company.

Anthropic has released tools such as Claude Code that excel at helping developers write software – Copyright GETTY IMAGES NORTH AMERICA/AFP Michael M. Santiago
And if AI agents are not kept on course by a human, they could misunderstand an instruction and wander down an errant processing path, resulting in a business paying for wasted computing power.
– ‘Irritable’ –
Wigler noted that the promise of hitting goals fast with AI tempts tech start-up teams, already prone to long workdays, to lose track of time and work even deeper into the night.
“There is a unique kind of reward hacking that can go on when you have productivity at the scale that encourages even later hours,” Wigler said.
Mackintosh recalled spending 15 consecutive hours fine-tuning around 25,000 lines of code in an application.
“At the end, I felt like I couldn’t code anymore,” he recalled.
“I could tell my dopamine was shot because I was irritable and didn’t want to answer basic questions about my day.”
A musician and teacher who asked to remain anonymous spoke of struggling to put his brain “on pause”, instead spending evenings experimenting with AI.
Nonetheless, everyone interviewed for this story expressed overall positive views of AI despite the downsides.
BCG recommends in a recently published study that company leaders establish clear limits regarding employee use and supervision of AI.
However, “That self-care piece is not really an American workplace value,” Wigler said.
“So, I am very skeptical as to whether or not it’s going to be healthy or even high quality in the long term.”
One man, his dog, and ChatGPT: Australia’s AI vaccine saga
By AFP
March 30, 2026

Image: — © AFP/File SEBASTIEN BOZON
Purple ROMERO
Desperate to help his sick dog, one Australian man went down the ultimate ChatGPT research hole, using artificial intelligence to design a personalised experimental treatment and finding top scientists to administer it.
Paul Conyngham’s months-long quest to fight his rescue mutt Rosie’s cancer has grabbed the attention of OpenAI boss Sam Altman, who called it an “amazing story” in an X post on Friday.
Sydney-based AI consultant Conyngham told AFP that eight-year-old Rosie’s mast cell cancer is now in partial remission and her biggest tumour has shrunk dramatically.
“She regained a lot of mobility and function” after receiving a custom mRNA vaccine along with powerful immunotherapy in December, he said.
Conyngham does not call his findings a cure — but experts unrelated to the dogged endeavours said they highlight AI’s potential to accelerate medical research.
“I would have conversations and just keep them going non-stop” with ChatGPT, Gemini and Grok to study cancer therapies in-depth, Conyngham said.
Following the chatbots’ advice, he paid $3,000 to have Rosie’s genome sequenced, and used the same online tools to analyse her DNA data.
Next he turned to AlphaFold, a scientific AI model that won 2024’s chemistry Nobel, to better understand one of the mutated doggy genes.
Conyngham sought the help of a University of New South Wales (UNSW) team — also thanks to a ChatGPT recommendation — and other academics in Australia who made his research a reality.
– ‘Just a rash’ –
Rosie’s cancer was misdiagnosed for nearly a year, Conyngham said on the phone during one of the long daily walks the pair have resumed.
“I took her to the vet three times. And two times, the vet said, don’t worry about it, it’s just a rash,” he said.
But Rosie got sicker and a biopsy showed in 2024 that she did have terminal cancer.
Having tried chemotherapy, standard immunotherapy and surgery, costs were mounting and Conyngham wanted more options.
So he used AI to delve deep into the world of emerging treatments including mRNA vaccines, which train the body’s immune system and were widely used during the Covid pandemic.
“This was not a clinical trial by any means” and “it’s not that AI cured cancer”, said UNSW professor Martin Smith, who sequenced Rosie’s genome for Paul.
“It was really driven by his determination to help his dog.”
The combination of “three different disruptive technologies: genome sequencing, artificial intelligence, and RNA therapeutics… offers new possibilities and challenges”, Smith said.
– AI promise –
Chatbots also assisted Conyngham in navigating the reams of paperwork for ethical approval.
And through his new scientific network, he met a professor at the University of Queensland able to administer the fine-tuned treatment.
Not all the tumours responded as well as the largest one, however. Rosie has had to have another operation since, and it’s unclear how long she has left to live.
The “short answer is we don’t know for sure” what actually led to the reduction in size of Rosie’s biggest tumour, said Pall Thordarson, director of UNSW’s RNA institute which created the vaccine.
“He used the AI program… to design the actual mRNA sequence. And then he gave that information to us,” Thordarson explained.
“AI holds lots of promise to improve and accelerate our research strategies,” Nick Semenkovich at the Medical College of Wisconsin, unrelated to the Rosie saga, told AFP.
But UNSW and Conyngham “haven’t published scientific details outside of their press release and interviews, so we don’t know enough about the vaccine to understand how much AI helped in its development — or if the vaccine worked the way it was designed”, Semenkovich said.
Patrick Tang Ming-kuen, a professor from The Chinese University of Hong Kong, said AI-powered research could help pets and humans survive, although the risk of errors is real.
“AI transforms a ‘needle-in-a-haystack’ search into a data-driven selection process, drastically shortening the timeframe between diagnosis and vaccine construction,” he said.
Since Conyngham’s story went global, Smith said his team have been fielding various new requests.
“You know: my cat’s got a disease, my dog’s got a disease, my aunt has got a disease.”
But “it’s hard for us to be able to help”, he said. “There’s a lot of things that have to align.”