Sunday, March 29, 2026

Dubious AI detectors drive 'pay-to-humanize' scam

Washington (United States) (AFP) – Feed an Iranian news dispatch or a literary classic into some text detectors, and they return the same verdict: AI-generated. Then comes the pitch: pay to "humanize" the writing, a pattern experts say bears the hallmarks of a scam.


Issued on: 30/03/2026 - FRANCE24


A crop of fraudulent AI detection tools risks adding another layer of online deception. © Kirill KUDRYAVTSEV / AFP


As AI falsehoods explode across social media, often outpacing the capacity of professional fact-checkers, bogus detectors risk adding another layer of deception to an already fractured information ecosystem.

While even reliable AI detectors can produce false results, researchers say a crop of fraudulent tools has emerged online, easily weaponized to discredit authentic content and tarnish reputations.

AFP's fact-checkers identified three such text detectors that claim to estimate what percentage of a given text is AI-generated. The tools -- prompted in four languages -- not only misidentified authentic text as AI-generated but also attempted to monetize those errors.

One detector, JustDone AI, processed a human-written report about the US-Iran war and wrongly concluded it contained "88% AI content." It then offered to scrub any trace of AI for a fee.


"Your AI text is humanizing," the site claimed, leading to a page where "100% unique text" was locked behind a paywall charging up to $9.99.

Two other tools -- TextGuard and Refinely -- produced similar false positives and sought to monetize them.
'Scams'

AFP presented its findings to all three detectors.

"Our system operates using modern AI models, and the results it provides are considered accurate within our technology," TextGuard's support team told AFP.

"At the same time, we cannot guarantee or compare results with other systems."

JustDone also reiterated that "no AI detector can guarantee 100 percent accuracy."

It acknowledged the free version of its AI detector "may provide less precise results" due to "high demand and the use of a lighter model designed for quick access."

Echoing AFP's findings, one user on a review platform complained that "even with 100% human-written material, JustDone still flags it as AI."

AFP fed the tools multiple human-written samples -- in Dutch, Greek, Hungarian, and English. All were wrongly flagged as having high AI content, including passages from an acclaimed 1916 Hungarian classic.

The tools returned AI flags regardless of input -- even for nonsensical text.

JustDone and Refinely appeared to operate even without an internet connection, suggesting their results may be scripted rather than genuine technical analysis.

"These are not AI detectors but scams to sell a 'humanizing' tool that will often return what we call 'tortured phrases'" -- unrelated jargon or nonsensical alternatives -- Debora Weber-Wulff, a Germany-based academic who has researched detection tools, told AFP.
'Liar's dividend'

Illustrating how such tools can be used to discredit individuals, pro-government influencers in Hungary claimed earlier this year that a document outlining the opposition's election campaign had been entirely created by AI.

To support the unfounded allegation, they circulated screenshots on social media showing results from JustDone.

The tools tested by AFP sought to lure students and academics as clients, with two of them claiming their users came from top institutions such as Cornell University.

Cornell University told AFP it "does not have any established relations with AI detector companies."

"Generative AI does provide an increased risk that students may use it to submit work that is not their own," the university said.

"Unfortunately, it is unlikely that detection technologies will provide a workable solution to this problem. It can be very difficult to accurately detect AI-generated content."

Fact-checkers, including those from AFP, often rely on AI visual detection tools developed by experts, which typically look for hidden watermarks and other digital clues.

However, they too can sometimes produce errors, making it necessary to supplement their findings with additional evidence such as open-source data.

The stakes are high as false readings from unreliable detectors threaten to erode trust in AI verification broadly -- and feed a disinformation tactic researchers have dubbed the "liar's dividend": dismissing authentic content as AI fabrications.

"We often report on misinformers and other hoaxsters using AI to fabricate false images and videos," said Waqar Rizvi from the misinformation tracker NewsGuard.

"Now, (we are) monitoring the opposite, but no less insidious phenomenon: claims that a visual was created by AI when in fact, it's authentic."

burs-ac/dw

© 2026 AFP


Life with AI causing human brain 'fry'

New York (AFP) – Heavy users of artificial intelligence report being overwhelmed by the effort of keeping up with, and staying on top of, the very technology designed to make their lives easier.


Issued on: 30/03/2026 - FRANCE24

Too many lines of code to analyze, armies of AI assistants to wrangle, and lengthy prompts to draft are among the laments of hard-core AI adopters.

Consultants at Boston Consulting Group (BCG) have dubbed the phenomenon "AI brain fry," a state of mental exhaustion stemming "from the excessive use or supervision of artificial intelligence tools, pushed beyond our cognitive limits."

The rise of AI agents that tend to computer tasks on demand has put users in the position of managing smart, fast digital workers rather than having to grind through jobs themselves.

"It's a brand-new kind of cognitive load," said Ben Wigler, co-founder of the start-up LoveMind AI. "You have to really babysit these models."

People experiencing AI burnout are not casually dabbling with the technology -- they are creating legions of agents that need to be constantly managed, according to Tim Norton, founder of the AI integration consultancy nouvreLabs.

"That's what's causing the burnout," Norton wrote in an X post.

However, BCG and others do not see it as a case of AI causing people to get burned out on their jobs.

A BCG study of 1,488 professionals in the United States actually found a decline in burnout rates when AI took over repetitive work tasks.
Coding vigilance

For now, "brain fry" is primarily a bane for software developers given that AI agents have excelled quickly at writing computer code.

"The cruel irony is that AI-generated code requires more careful review than human-written code," software engineer Siddhant Khare wrote in a blog post.

"It is very scary to commit to hundreds of lines of AI-written code because there is a risk of security flaws or simply not understanding the entire codebase," added Adam Mackintosh, a programmer for a Canadian company.

And if AI agents are not kept on course by a human, they could misunderstand an instruction and wander down an errant processing path, resulting in a business paying for wasted computing power.
'Irritable'

Wigler noted that the promise of hitting goals fast with AI tempts tech start-up teams already prone to long workdays to lose track of time and stay on the job deeper into the night.

"There is a unique kind of reward hacking that can go on when you have productivity at the scale that encourages even later hours," Wigler said.

Mackintosh recalled spending 15 consecutive hours fine-tuning around 25,000 lines of code in an application.

"At the end, I felt like I couldn't code anymore," he recalled.

"I could tell my dopamine was shot because I was irritable and didn't want to answer basic questions about my day."

A musician and teacher who asked to remain anonymous spoke of struggling to put his brain "on pause", instead spending evenings experimenting with AI.

Nonetheless, everyone interviewed for this story expressed overall positive views of AI despite the downsides.

BCG recommends in a recently published study that company leaders establish clear limits regarding employee use and supervision of AI.

However, "That self-care piece is not really an American workplace value," Wigler said.

"So, I am very skeptical as to whether or not it's going to be healthy or even high quality in the long term."

© 2026 AFP
