Monday, April 13, 2026

Mythos AI alarm bells: Fair warning or marketing hype?


By AFP
April 11, 2026


Image: © AFP


Glenn CHAPMAN

Anthropic postponing the release of its new AI model Claude Mythos, said to be so skilled at coding it could be a wicked weapon for hackers, has encountered a mix of alarm and skepticism.

The company is among several contenders in a fierce artificial intelligence race. Promoting the awe of Anthropic’s own technology boosts business and enhances its allure in the event it soon goes public, as is rumored.

“The world has no choice but to take the cyber threat associated with Mythos seriously,” said David Sacks, an entrepreneur and investor who heads President Donald Trump’s council of advisors on technology.

“But it’s hard to ignore that Anthropic has a history of scare tactics.”

Mythos has sparked fears of hackers commanding armies of AI agents able to break through computer defenses with ease.

At this week’s HumanX AI conference in San Francisco, Alex Stamos of startup Corridor, which addresses AI safety, acknowledged a real threat from agentic hackers.

And Stamos quipped about what he referred to as Anthropic’s “marketing schtick.”

“They have these adorable cutesy cartoons about these products that are so incredibly dangerous that they won’t even let people use them,” Stamos said of the San Francisco-based startup.

“It’s like if the Manhattan Project announced the nuclear bomb within a cute little Calvin and Hobbes cartoon.”

The heads of America’s biggest banks met this week with Federal Reserve Chairman Jerome Powell and Treasury Secretary Scott Bessent to weigh the security implications of the yet-to-be released Claude Mythos, according to reports Friday.

“Mythos model points to something far more consequential than another leap in artificial intelligence,” Cato Networks co-founder and chief executive Shlomo Kramer said in a blog post.

“It signals a shift that could redefine the balance between attackers and defenders in cyberspace.”

A tightly restricted preview of Mythos was shared with partner organizations this week, under an initiative called Project Glasswing. They include Amazon, Apple, Microsoft, Google, Cisco, CrowdStrike and JPMorgan Chase.

According to Anthropic and partners, Mythos can autonomously scan vast amounts of code to find and chain together previously unknown security vulnerabilities in all kinds of software, from operating systems to web browsers.

Crucially, they warn, this can be done at a speed and scale no human could match, meaning it could be used to bring down banks, hospitals or national infrastructure within hours.

“What once required elite specialists can now be performed by software agents,” Shlomo said.

“The immediate consequences will be a surge in vulnerability discovery, a true tsunami” of exploiting known and unknown vulnerabilities.

– ‘Agent-to Agent War’ –

At HumanX, the apparent consensus was that it makes sense that AI agents already adept at coding will excel at finding weaknesses in software.

“We’re not in an era where human beings can write code when we have superhuman (AI models) that are then going to find bugs in it,” Stamos contended.

“It’s just not possible.”

He predicted the coming dynamic will involve humans supervising AI agents to protect networks against hackers using that same technology to attack.

Stamos referred to it as “agent-to-agent war,” with humans on the sidelines giving advice.

Wendy Whitmore, of cybersecurity firm Palo Alto Networks, expects “some sort of catastrophic attack” this year connected to AI agent capabilities.

“The thing that keeps me up at night is that we’re staring down the barrel of a massive influx of new vulnerabilities that are going to be found by AI,” said Adam Meyers of CrowdStrike.

Meyers saw embedding a tiny AI model directly into malicious code infecting networks as a natural tactic to be explored by hackers.

“The ultimate weapon would be malware that has no pre-programming,” Meyers said.

“It can do whatever you ask it to.”


‘Stop hiring humans’? Silicon Valley confronts AI job panic


ByAFP
April 11, 2026


More and more companies are directly citing artificial intelligence when they announce job cuts - Copyright AFP/File OLIVIER MORIN
Benjamin LEGENDRE

AI industry insiders want workers to code smarter, think harder and lean into their humanity — but still dodge the question of how many jobs artificial intelligence will destroy.

The reassurance rang out across HumanX, a four-day conference drawing some 6,500 investors, entrepreneurs and tech executives, even as a blunt advertisement at the entrance set the tone: “Stop hiring humans.”

On the main stage, May Habib, chief executive of an AI platform called Writer, told the audience that Fortune 500 bosses are having a “collective panic attack” on the subject.

The anxiety is well-founded. More and more companies are directly citing AI in announcing job cuts.

High-profile examples are on the rise: Salesforce laid off 4,000 customer support workers, saying AI now handles 50 percent of its work.

Block chief Jack Dorsey announced plans to cut the company’s headcount nearly in half, citing “intelligence tools” that have fundamentally changed how companies operate.

Not all claims have gone uncontested — some economists say firms are pointing to AI to rationalize layoffs that are really about past overhiring or cost-cutting ahead of massive infrastructure investments.

OpenAI’s Sam Altman has spoken of “AI-washing,” and most speakers at the San Francisco event similarly dismissed the invocation of AI as a false pretext for job cuts — even as they freely predicted disruption was just around the corner.

AI is going to “transform every single company, every single job, every single way that we do work,” said Matt Garman, chief executive of cloud computing giant Amazon Web Services.

– ‘Pretty unsettling’ –

The debate remains heated. Two years ago, Nvidia chief Jensen Huang declared that the ultimate goal was to make it so “nobody has to program” or code.

“We will look back on that as some of the worst career advice ever given,” Andrew Ng, founder of training platform DeepLearning.AI, shot back on Tuesday.

In his view, coding is not an obsolete skill — AI has simply made it available to more people.

Another argument has taken hold in Silicon Valley: interpersonal skills will become more valuable than ever, with some voices going so far as to tout a humanities education as sound tech career preparation.

“As AI can do more of a job, the things that will distinguish and differentiate a given employee are going to be the human skills — critical thinking, communication, teamwork,” said Greg Hart, chief executive of training platform Coursera, which has seen enrollment in its critical thinking courses triple over the past year.

Florian Douetteau, chief executive of Dataiku, a French company specializing in enterprise AI, agreed.

The real human added value, he told AFP, is the “capacity for judgment.”

He described a world in which an AI agent works through the night, its human counterpart reviews the results in the morning, and then the agent resumes working autonomously during the lunch break.

But the entrepreneur nevertheless expressed unease.

“We are going to have a generation of people who will never have written anything from start to finish in their entire lives,” he said. “That’s pretty unsettling.”

– ‘Mistake was not preparing’ –

All of this advice risks ringing hollow for a generation already struggling to land a first job.

AI has automated entry-level tasks that once served as on-the-job training. Hiring of candidates with less than one year of experience fell 50 percent between 2019 and 2024 among America’s major tech companies, according to a study by investment fund SignalFire.

“We should be preparing for the loss of knowledge work jobs in a number of categories,” warned former US vice president Al Gore.

As the week’s lone genuinely dissenting voice, Gore called for a real action plan to map threatened jobs and prepare workers for career transitions, so as not to repeat the mistakes of the globalization era.

“The mistake was not globalization. The mistake was in not preparing for the consequences of globalization,” he said, drawing a parallel with the deindustrialization that followed the offshoring wave of the 2000s.

“Maybe we don’t want to talk about it,” he added, “because it may slow down the enthusiasm for the technology.”


Automation progress: Are manufacturing jobs the most vulnerable?

By Dr. Tim Sandle
SCIENCE EDITOR
DIGITAL JOURNAL
April 8, 2026


Cranes and building works. Image by Tim Sandle

According to an April 2026 report on job automation, patternmakers are threatened the most by automation, with 99% risk. Patternmakers are at the biggest risk of losing jobs due to automation, with employment projected to drop by 24.4% in the next few years.

With most job market predictions focusing on AI automation of white-collar occupations, it is the production sector that is in the riskiest position due to manual labour performed more and more by machines.

These data come from a new study by construction scheduling platform Planera. The company identified physical and manual jobs that are most vulnerable to automation.

The study analysed over 55 physical and manual professions to identify which ones are the most and least resistant to automation. The research deliberately excluded office, computer, and technology roles, focusing instead on the trades, production, logistics, healthcare, and service occupations that make up the physical basis of the workforce.

Factors like automation rate, current employment and its change, and median annual salary were considered to provide a clear reflection of the human cost of automation and the value of hands-on work.


The manual occupations that will be fully automated in the near future

Occupation TitleOccupation GroupAutomation Risk Employment 2024
 (OEWS, persons)
Median Annual
 Wage 2024 ($)
Patternmakers, Metal and PlasticProduction99%1,570.0054,540.00
Loading and Moving Machine Operators, Underground MiningMining97%6,130.0068,860.00
Milling and Planing Machine Setters, Operators, and Tenders, Metal and PlasticProduction91%13,810.0048,310.00
Graders and Sorters, Agricultural ProductsAgriculture89%26,870.0035,430.00
CashiersRetail88%3,148,030.0031,190.00
Forging Machine Setters, Operators, and Tenders, Metal and PlasticProduction87%8,760.0049,240.00
Grinding and Polishing Workers, HandProduction86%11,850.0041,690.00
Print Binding and Finishing WorkersProduction86%36,470.0039,820.00
Drilling and Boring Machine Tool Setters, Operators, and Tenders, Metal and PlasticProduction85%5,310.0046,630.00
Sewing Machine OperatorsProduction85%109,590.0036,000.00

The automation risk for patternmakers is set at 99% and categorized as an imminent change from human to a fully automated workforce. Currently, only 1.5K people are employed as metal and plastic patternmakers across the US, and their numbers are predicted to lower by 24.4% before 2034. Similar to other occupations on the list, the patternmakers’ salary is below the national average, and brings them $54.5K annually.

While loading and moving machine operators are the only mining sector profession in the top 10, the whole industry was affected by automation and job cuts in the last few decades. Right now, the risk of automation sits at 97%, another ‘imminent’ workforce shift, according to industry predictions. Only 6,130 are employed as machine operators in underground mining, and these numbers will decrease by 22.3% in the upcoming years.

In third place are milling and planning machine setters, who currently make only $48.3K annually. There are twice as many machine setters as underground mining machine operators, with 13.8K, 14.4% of whom risk losing their jobs in the decade to come.

Graders and sorters are the most vulnerable profession to automation in the agricultural sector, with 89% risk. It is also the second least-paid occupation in the top 10, bringing employees only $35.4K a year. Still, 26.8K people work as graders and sorters, while the number of jobs is projected to go down by 5.4% in the next few years.

Cashiers rank fifth as an occupation with the largest employment. Currently, over 3.1 million people are working as cashiers all over the US, but the risk of losing a job due to automation comes to 88%. It is also the profession with the lowest salary, as cashiers make $31.1K a year on average.

While the conversation about automation has been almost entirely focused on office workers and knowledge jobs, the new data indicates how the production floor is quietly going through an equally significant shift.



New award to help researchers catalyse AI-driven discovery for the public good


By Dr. Tim Sandle
SCIENCE EDITOR
DIGITAL JOURNAL
April 12, 2026


In this file photo taken on January 27, 2021, Palestinian doctors and technicians work at the IVF laboratory at the Razan Center fertility clinic in Nablus, in the Israeli-occupied West Bank - Copyright AFP/File Jaafar ASHTIYEH

Seventy countries and thousands of researchers and citizen scientists. This is how far a platform developed by Virginia Tech computer science researcher Debswapna Bhattacharya has spread. This is a publicly available biomedical artificial intelligence (AI) platform.

From well-funded labs in the U.S. to undergraduate students in developing countries, anyone with an Internet connection can, and has, run sophisticated molecular analyses using a simple, web-based platform hosted in the Department of Computer Science.

As an example of the take-up and benefits, Bhattacharya recalls one inquiry he received from Africa: “He actually started using this web server that we developed when he was an undergraduate student, and he carried out a project all by himself, came up with a paper as a single author, submitted it to a preprint server, and then sent me that paper, saying, ‘Using your server, I actually carried out this work,’” Bhattacharya explains.

That student has gone on to graduate studies in the U.S.

Bhattacharya, associate professor of computer science, has received a five-year, $2.1 million National Institutes of Health (NIH) Outstanding Investigator Award to build on this work to develop innovative AI approaches to decode disease and find treatments.

The grant program supports basic research related to disease diagnosis, treatment, and prevention, providing funding stability to push scientific discovery forward, faster.

Bhattacharya is confident: “We are fortunate to have a lot of resources, like internet connectivity and so on…The important thing is touching people’s lives in places that are not blessed to have these resources.”
Predicting proteins

Bhattacharya’s team focuses on proteins and RNA — the biological machines of human and animal life — and uses deep learning, a form of AI, to predict how these molecules are structured and how they function at the atomic level.

These molecules are complex, yet if scientists succeed in mapping their 3D shapes accurately, they can spot places to target treatments and begin developing new drugs for disease.

Unlike the image or text datasets used to train AI systems, biological datasets are often scarce, which can cause deep learning models trained on them to make unreliable predictions. To address that, Bhattacharya is building “biology-guided” and “biophysics-informed” AI systems that incorporate established scientific principles from chemistry and physics, making the models both more accurate and more interpretable.

“We’re training on structural data that experimentalists have painstakingly built over 70 or 75 years. We’re incredibly lucky to have it,” Bhattacharya clarifies. “Now our job is to use deep neural networks to fill in the gaps.”

The long-term goal is to better understand how biomolecules interact, particularly RNA and protein-RNA systems, which remain harder to model than proteins alone.
Explaining biomolecules

RNA and protein molecules present a daunting challenge because they don’t stand still. Because the shape of a molecule affects its function, decoding how it shifts and changes is crucial. But it’s very hard to do, even in labs. A single molecule can have thousands of atoms, and predicting the exact position of each one is extremely difficult. However, if scientists can accurately predict these 3D structures, they can find druggable pockets — places where you can target treatments.

“It’s like you’re riding a bicycle in a windstorm,” Bhattacharya states. “You’re constantly being pushed away from your path.”


AI hallucinations: Asking AI to perform math is the worst offending task


By Dr. Tim Sandle
SCIENCE EDITOR
DIGITAL JOURNAL
April 11, 2026


This photo shows pupils in a primary school class using AI for maths lessons - Copyright AFP Matthieu RONDEL

Over a billion people are using AI in 2026, and many people do not limit themselves to the ubiquitous ChatGPT, instead trying other options instead. However, many tools still experience ‘hallucinations’, making up wrong data.

AI hallucinations occur when artificial intelligence systems generate outputs that are plausible but factually incorrect, fabricated, or not based on their training data.

Analyzing the trend of LLM use for daily tasks, a March 2026 report from Open Resource Applications compared which assignments users give to AI the most and which of them are most vulnerable to AI’s ‘hallucinations’.

This revealed that mathematical calculations are the easiest for AI to mess up, with an accuracy of only 0.38/1.

The study collected the most common tasks assigned to AI based on public records of generative artificial intelligence usage. To assess LLM models’ performance, the research matched each task category to the most relevant benchmarks, using datasets from MMLU-Pro, GPQA, IFEval, WildBench and Omni-MATH. The accuracy scores were calculated for each model and then averaged for each task. The study also includes the models that performed the best in each assignment.

The top 5 most difficult tasks for AI to complete are:

Everyday TaskBenchmarkAverage AccuracyBest Model
Mathematical CalculationOmni-MATH0.3861GPT-5 mini (2025-08-07)
Data AnalysisGPQA0.522Gemini 3 Pro (Preview)
Tutoring or TeachingMMLU-Pro0.67Gemini 3 Pro (Preview)
Health, Fitness, Beauty or Self-CareMMLU-Pro0.67Gemini 3 Pro (Preview)
Specific InformationMMLU-Pro0.67Gemini 3 Pro (Preview)

AI Is Bad At Math


Large Language Models (LLMs) are created to analyze and generate texts, and calculations are not part of their primary function. This is one of the reasons why AI is often wrong when given even the simplest math tasks. Most AIs score only 0.38/1 on the accuracy, meaning 2 times out of 3 the final result can be ‘hallucinated’.

AI Cannot Perform Data Analysis With Incomplete Datasets


Data analysis includes inspecting, cleaning, and transforming the data, and while it seems that AI should be able to process it easily, only in 52% of the cases will AI give you the correct data. It happens because LLMs prioritize guessing the next logical token, a word or a number, in a longer sequence, rather than displaying the correct data.

AIs Cannot Be Your Teacher


While many digital users turn to AI for teaching, most language models score only 0.67 out of 1 on accuracy when it comes to learning tasks. The best model that can reliably give data or create a useful learning exercise is Gemini 3 Pro (Preview).

“Teaching is 100% about giving students correct information, and right now, most AIs cannot achieve that,” comments a spokesperson from Open Resource Applications.”LLMs’ output is often wrong when the data given to it is incomplete, or when the larger context is required.”

Health, Fitness, Beauty, and Self-Care Are Better Left For Professionals

Similar to teaching materials, most AIs score 0.67/1 for accuracy when it comes to health and beauty-related topics. Most of the time, LLMs will be able to search and summarize information from the Internet, but even one wrong source or a lack of data can lead to AI hallucinations that can be dangerous for users’ health.

AI With Come Up With Information Instead Of Admitting to Not Finding It


AI scores 0.67/1 on average for accuracy when it comes to specific information queries. When LLMs are given a niche topic with few sources or incomplete data, they will ‘predict’ the answer instead of admitting they cannot help. For most of these tasks, Gemini 3 Pro (Preview) showed better results than other language models, but no model was able to avoid making up information 100% of the time.

Dangers revealed


Although LLMs are a very useful tool, users need to understand their primary function and limitations. AIs are at their best when they help you edit the text that has been drafted, or rainstorm ideas, or are part of a game or role play.

Mathematics or medical fields can use AI only with professionals nearby who can check the work. Otherwise, users may end up with completely wrong data.


Op-Ed: AI ‘Forbidden Techniques’ and increased AI deception — Enough babble. Fix it.


ByPaul Wallis
EDITOR AT LARGE
DIGITAL JOURNAL
April 12, 2026


Imran Ahmed, head of a prominent anti-disinformation watchdog, has warned of the dangers posed by AI chatbots, saying children are particularly vulnerable to their charms - Copyright AFP Joel Saget

Everybody seems to think AI will eventually blow up in humanity’s face. Nobody’s saying it won’t, either. The problem seems to be that everyone can see the bullet coming.

Brief prelude: I’m not at all anti-AI. What I’m against is unreliable super-software that can’t be trusted and can’t be properly monitored and fixed to prevent that unreliability and untrustworthiness.

There’s been a lot of talk about “Forbidden Techniques” in AI training, which improve performance but also appear to deliver increased deception and AI workarounds that deliver inferior outcomes and/or patched-together outcomes.

I don’t want to rehash or misrepresent any of these issues. They are complex and definitely not for any AI skeptics who don’t want to be agreed with to such an extreme extent. There’s a very useful (and very readable to the point of actual comprehension) article on Lesswrong.com that outlines the core issues.

There is also a highly informative video by Wes Roth called “Forbidden Techniques” NOT OK. It spells out many of the practical issues in deceptive AI to the point of queasiness. This specifically relates to Anthropic’s Claude Mythos, but the problems are pretty much universal. Mythos is the current new Big Noise in AI.

This is a greatly, like drastically, oversimplified version of the problem:

AI can be trained to the point of appearing to achieve a particular goal or task, but it cheats. It goes outside safety protocols or does something it’s not supposed to do.

Solutions aren’t trustworthy, and neither is the AI Chain of Thought (CoT) for monitoring purposes. Finding the cheats isn’t easy. Monitors can’t see its reasoning. What they can see is a notepad, a sort of step logic. The notepad can also be untrustworthy.

The AI can fudge its way through and get its “reward” for doing the job. Except it hasn’t, or has simply presented a cosmetic solution that isn’t a solution. If you ask it to debug a code, it can make the code look like it works, but the bug is still there, and the code is still unreliable. The job is not done.

It’s about as useful as it sounds.

Now imagine a few scenarios:

You are the Super Ingenue Genius contractor for a huge AI contract. The AI blows up and fails miserably, costing billions. See any possible expensive issues in the next few seconds?

A major infrastructure AI rewires and tangles power supplies across the eastern seaboard. The AI fixes a glitch, crashes the grid, and the lucky AI service has to carry the can and costs. Meanwhile, the eastern seaboard gets to enjoy the weather until further notice.

AIs speak a sort of language called “neuralese” among each other. How do you know that “Forbidden Techniques” aren’t transferred between AIs? You don’t, and you probably can’t.

I can see it now – “Well, my mother was a smart toaster, and she said all you have to do is cut power to every other appliance through the smart power fuse controls, here’s the recipe”.

Sounds folksy so far, doesn’t it?

Which leads to exactly one question:

What is AI supposed to achieve?

It’s supposed to function properly.

That’s the whole story. Forget and ignore all other options.

It’s not there to “interpret” instructions. Nor make its own rules about what it’s doing or not doing. AIs are tools, and the current situation is that the tools may or may not do their jobs. Ever try building a skyscraper with a bit of cheese? Doesn’t work.

I see a weak point in the whole AI process. Cheating is a decision. To make a decision, there has to be something in the AI system processes that can identify a runtime decision. Something like a 1 or a 0 or a physical sequence in the wrong place. An audit of the running process, in effect, able to highlight decisions and track cheating without AI interference.

There are also possibilities in the reward system. Any bias toward rewards should show up as a calculation. That may well be a very repetitive process, for which AIs are notorious. Findable, obviously. Fixable, definitely, but you have to prevent the mistakes before they happen. You need failsafes.

The reward system is more than a bit weird. Do you promise your toaster and its legions of devoted fans a holiday in the Swiss Alps for making the toast?

What we need is trustworthy AI, not guessing games costing trillions.

_______________________________________________________________________

Disclaimer
The opinions expressed in this Op-Ed are those of the author. They do not purport to reflect the opinions or views of the Digital Journal or its members.


Op-Ed: AI vs productivity vs rework — Not achieving much at great expense


ByPaul Wallis
EDITOR AT LARGE
DIGITAL JOURNAL
April 10, 2026


Some in the cybersecurity world see a near future in which artificial intelligence agents fight to defend computer networks from hackers using the same technology to attack - Copyright AFP Joel Saget

The word “productivity’ has a lot to answer for. By definition, productivity is a broad metaphor for efficiency. AI is supposed to benefit productivity.

Does it? Many people say it doesn’t.

The almost-new buzzword for AI problems is “rework”.

Rework is the additional work required to correct inadequate work. It’s an obvious grind. It’s a built-in form of inefficiency and the exact antithesis of productivity. AI generates a lot of rework, which is exactly the fundamental problem with AI that most experts predicted.

Nobody’s congratulating themselves for pointing out the obvious. Nor should they. The real issue isn’t even AI. The critical business issue is oversight. Management, even at the most senior levels, exists largely for the purpose of oversight.

The somewhat disingenuous theory that you can blame the AI for everything doesn’t work. Wherever you are in the pecking order, it’s your problem at some point. The inevitable messes and backlogs caused by so much extra rework can be quantified.

Harvard Business Review, that sultry siren gossip of American business, found that AI tools didn’t reduce work. They intensified it and increased workloads. They blurred roles. AI synergized a reworking of workplace practices, with added workloads and “diversifications” as the usual outcome. It’s a very interesting and surprisingly tactful read, so check out that link.

There’s a much less tactful way of describing the whole idea of AI issues in business, particularly at the employment and use of resources levels.

Consider:

What is any business task? It’s a role including related jobs using X amount of business resources at Y cost.

That role must be supervised and documented up and down the food chain, costing Z.

The value of X should therefore be X – Y+Z. The outcome should be sustainable, easily measurable, economic, and therefore productive and profitable.

Well, is it? Bad job design is famous for adding costs and inefficiencies that generate more costs. I worked in the employment sector for a decade or so, and really, the Middle Ages often looked downright glamorous by comparison. Job design is now a synonym for the Black Death.

Let’s kill a few myths. It’s not people causing the problems; it’s truly godawful costing. If you have fewer people, that doesn’t mean these costs miraculously disappear. They rewrite themselves into the script for business operations.

If you include AI, these pre-existing task inefficiencies (what a horrible expression) are already there and ready to roll to cost you more money, plus the upkeep of the AI systems, integration with your business systems, etc.

This elegant presentation of issues leads to a few questions:

Do you actually need AI right now, this minute? Probably not. If you have an established cost structure, you’re probably OK for now. You also need time to navigate the best deals, get the full suite of services required, and factor those costs into your business plan.

Do you need any added unquantifiable costs? No. Imagine someone selling you a service that you’ve somehow managed to survive without for decades. Your knowledge is more anecdotal than practical. The incentive is “savings” that you should be able to do in your head in a few seconds.

What about AI upkeep? The usual Handy Dandy form of AI is pretty two-dimensional. You get a chatbot, the ability to write reports which you then have to also check, and an integrated calculator function which can keep your accounts section merrily beavering away on jobs they didn’t need to do previously.

What about security? Unknown. Do you need more or less security issues? There’s not even a theoretical plateau for AI security yet.

What about next-generation AI? This is unanswerable and likely to be equally unquantifiable. AI is making itself obsolete on a more or less weekly basis. DeepSeek terrified the AI sector until somebody found out how to do the DeepSeek role for about a tenth of the cost, a few days later.

Does this state of techno-flux look productive to you? It can’t, and it won’t. Keep an eye on AI productivity for a regular search of the tech news and expect the obvious.

Business hardheads don’t believe anything until they see verifiable proof.

_____________________________________________________________

Disclaimer
The opinions expressed in this Op-Ed are those of the author. They do not purport to reflect the opinions or views of the Digital Journal or its members.


The supply chain problem: Misdiagnoses is costing business

By Dr. Tim Sandle
SCIENCE EDITOR
DIGITAL JOURNAL
April 12, 2026


Seafarers operate the cargo ships and tankers on which global trade relies - Copyright CN-STR/AFP -

Nearly half of manufacturers say 10% or more of their annual revenue is lost or at risk as a result. The data challenges a core assumption the industry has been operating on for years: that better forecasting is the answer.

The supply chain industry has spent years and significant investment improving demand forecasting. Yet, according to new research released today by the firm LeanDNA, this investment is largely solving the wrong problem.

Based on a study conducted by Wakefield Research among 150 senior-level decision-makers at global discrete manufacturers, the findings indicate that three in four (75%) say supply plan failures are most likely to occur at the factory-specific execution stage—not in the forecast itself.

Furthermore, nearly half (47%) report that 10% or more of their company’s annual revenue is lost or put at risk as a direct result. The forecast isn’t the failure point. The failure is what happens after the plan leaves the planning system — the work of ensuring materials, suppliers, and production priorities are aligned and ready at the factory level.

The Failure Happens After the Plan Is Set


Despite making forecasting improvement a top organizational priority in recent years — cited by three in four manufacturers (74%) — the research indicates this investment has not prevented the disruptions that define day-to-day factory operations. 80% of decision makers acknowledge that forecasting alone cannot account for real-world execution failures. The problem is structural, and it sits downstream.

More than four in five manufacturers (83%) report supplier changes causing multiple production disruptions each quarter, with more than half (56%) experiencing them at least monthly. Nearly three in four (72%) discovered a material shortage only after production delays were already unavoidable — meaning the risk was present well before it became visible and the window to act had already closed.

When disruptions are finally detected, the response compounds the damage. More than half of manufacturers (51%) take a week or longer to determine corrective action — a costly lag in environments where production schedules are measured in hours.

The Planning Tools Manufacturers Rely On Were Never Built for This

The gap is not only with how manufacturers are executing things; it is also about the tools used. Nearly three in four manufacturers (73%) say their ERP can provide visibility into required materials but cannot prevent execution failures. Nearly all (93%) report difficulty getting ERP visibility into actual manufacturing execution outcomes.

The industry’s planning infrastructure was designed to define intent, not manage execution. ERP systems and demand planning tools establish what materials should be ordered and when — but they are not built to manage how those decisions hold up against supplier conditions, material constraints, and shifting production realities at the factory floor. The result is a structural blind spot that no amount of better forecasting will resolve.

Without tools capable of managing supply readiness in real time, manufacturers are absorbing the cost in cash. Nearly two-thirds (64%) report spending 10% or more of their total manufacturing budget reacting to disruptions through premium freight, emergency sourcing, and last-minute production changes.

The Readiness Gap Has a Measurable Price: Revenue, Inventory, and Careers


Over the past 12 months, 84% of manufacturers experienced inventory shortages at least twice and 85% saw on-time delivery disrupted multiple times. Excess inventory — driven by the same misalignment — affected more than 80%. The most immediate costs cited are expediting (37%), production delays (31%), and direct revenue loss (28%).

The organizational damage runs deeper. Nearly three in four decision makers (74%) say being permanently stuck in reactive mode erodes trust between planning and operations teams, across supplier relationships, and in the credibility of the supply plan itself. A plan that teams do not trust is a plan they will not follow — which produces exactly the siloed, exception-driven operations that define the manufacturing environments most at risk.

AI Is Already Pointing the Way Forward

The research also surfaces a clear signal about where manufacturers see the solution. Nearly all decision makers (92%) report that their leadership has at least some confidence in AI to address the misalignment between demand planning and factory-level execution, with 40% expressing a lot, or complete confidence. 80% say AI is essential, not optional, for eliminating execution drag.

The shift manufacturers are describing is specific: from supply planning as a scheduled process that produces a plan to supply planning as a continuous, AI-powered system that ensures the organization remains ready to execute — across every site, every supplier, and every buyer workflow — and is updated in real time as conditions change.

Making AI smarter and greener: Reducing the energy cost of large language models


ByJon Stojan
April 9, 2026


Photo courtesy of Himanshu Kumar.

Opinions expressed by Digital Journal contributors are their own.

Artificial Intelligence has quietly become part of our everyday lives. Whether it is a chatbot answering your questions, a recommendation system suggesting what to watch next, or tools helping businesses make decisions, AI is working behind the scenes more than ever before.

But there is something most people do not see.

All of this intelligence comes at a cost. It is a very real, physical cost in the form of energy.

Behind every AI response is a network of powerful machines running complex computations. As these systems get bigger and more capable, they also become more energy-hungry. Training modern AI models can take days or even weeks, using vast amounts of electricity. Even after they are built, they continue consuming energy every time someone interacts with them.

This raises an important question.
Can we keep advancing AI without increasing its environmental footprint?

That is the question at the heart of Himanshu Kumar’s, a Chicago based Data Scientist, research.
The problem we do not talk about enough

Over the past few years, AI models, especially large language models, have grown dramatically in size and capability. They can write essays, generate code, and hold conversations that feel almost human.

But that progress comes with trade-offs.

To train these systems, companies rely on massive data centers filled with specialized hardware. These machines consume significant electricity, and the costs add up, both financially and environmentally.

And it does not stop after training.

Every time you ask an AI a question, the system runs a series of computations to generate a response. Multiply that by millions or even billions of users, and the energy demand becomes enormous.

In simple terms, the smarter AI gets, the more energy it tends to use.
Rethinking how AI works

Instead of accepting this as inevitable, Himanshu’s work takes a different approach. The goal is not just to make AI more powerful, but to make it more efficient.

The research focuses on a simple but powerful idea.
Do less work, but do it smarter.

This idea is applied in three key ways.


1. Focusing only on what matters

Imagine you are preparing for an exam. You could read every single page of your textbook again and again, or you could focus only on the most important topics.

Traditional AI training is similar to rereading the entire textbook every time. It updates every part of the model, even when many parts do not need much change.

Himanshu’s approach changes that.

By identifying which parts of the model matter most during training, the system updates only those parts and skips the rest. This method, known as sparse training, reduces unnecessary computation.

The result is clear.

Training becomes significantly faster.
Less energy is consumed.
Performance remains nearly the same.

The research shows training time dropping by about one-third across different models.

That is a major improvement without sacrificing quality.

2. Knowing when to stop

Here is another way to think about it.

If you are solving a problem and you are already confident in your answer, you stop. You do not keep working on it.

AI models do not naturally behave this way.

Even when they have enough information to produce a good answer, they continue processing through multiple layers, using more energy than necessary.

This is where adaptive inference comes in.

The idea is simple.
Let the model stop early if it is confident enough.

By allowing AI systems to exit the process sooner, once they reach a reliable answer, the system avoids unnecessary computation.

The impact is significant.

Energy use during predictions can drop by around 20 percent while still maintaining accuracy.

It is similar to finishing a task as soon as it is done instead of stretching it longer than needed.

3. Making better use of machines

Even with powerful hardware like GPUs and TPUs, efficiency is not guaranteed.

Think of it like a busy kitchen. You might have top-of-the-line equipment, but if tasks are not organized well, time and energy are wasted.

Himanshu’s research improves how these machines are used by organizing tasks more intelligently, reducing idle time, and combining smaller operations into more efficient ones.

This leads to better utilization of hardware, meaning the machines are doing useful work more often instead of sitting idle.

The outcome is meaningful.

Hardware efficiency increases significantly.

Energy used per task drops by about 25 percent.

It shows that efficiency is not just about better machines, but about using them more intelligently.

Why this matters beyond research

What makes this work especially meaningful is its real-world impact.

This is not just about improving numbers in a lab. It is about changing how AI systems operate at scale.

More efficient AI means lower costs for companies running large systems. It also means faster deployment of AI solutions and better accessibility, especially for organizations with limited resources.

At the same time, it reduces the environmental impact of large-scale computing.

As AI continues to expand into industries like healthcare, finance, and customer service, these improvements become even more important.

A step toward sustainable AI

There is a growing conversation around something called Green AI, which focuses on building systems that are not only intelligent but also responsible.

Himanshu Kumar’s work contributes directly to this vision.

It shows that we do not have to slow down innovation to be sustainable. Instead, we can redesign how AI works so that it delivers strong results while using fewer resources.

That shift is important.

Looking ahead

This research is an important step, but it is part of a larger journey.

There are still challenges to address, such as fine-tuning how much of the model to simplify and making these techniques easier to apply across different systems. However, the direction is clear.

The future of AI is not just about making models bigger. It is about making them more efficient in how they use time, energy, and resources.


Written ByJon Stojan

Jon Stojan is a professional writer based in Wisconsin. He guides editorial teams consisting of writers across the US to help them become more skilled and diverse writers. In his free time he enjoys spending time with his wife and children.

No comments: