Saturday, April 18, 2026

Op-Ed: Can military AI be trusted? Whose side will it take? ‘Covert AI’ is coming soon.



By Paul Wallis
 EDITOR AT LARGE
DIGITAL JOURNAL
April 15, 2026


Imran Ahmed, head of a prominent anti-disinformation watchdog, has warned of the dangers posed by AI chatbots, saying children are particularly vulnerable to their charms - Copyright AFP Joel Saget

Military AI currently looks like the start of a major problem. Forget the tired old science fiction cliches and doom and gloom scenarios. This type of AI is one stage removed from creating a set of unknowable problems with unfathomable dimensions.

There’s a big difference between “autonomous” military assets like UAVs and the pure AI agents inhabiting cyberspace and robotics.

There’s no doubt whatsoever that autonomous military assets are useful and combat-effective. The world’s militaries have been quick to adopt and use these options.

That’s nothing like the whole story.

Killer robots in their current forms are programmed for specific tasks. They’re pretty straightforward. They don’t currently have “behavioral issues”. They’re also under strict oversight and pretty easy to manage even in combat environments. The Russians are finding that out the hard way in Ukraine.

These robots are semi-autonomous. They’re rewriting the whole theory of military tactics and economics. They’re an inevitable and crucial part of future militaries worldwide.

The use of cheaper drones mass produced by Iran in the Middle East and Ukraine conflicts has prompted the decision to also boost spending on smaller drones and counter-drone systems – Copyright AFP/File Tertius Pickard

The cutoff point for this idyllic situation is agentic AI. Everything stops being simple. This emerging threat has almost nothing to do with drones or the existing generation of combat systems.

This is where the whole issue of military AI gets far too tricky. To coin a phrase, “covert AI” is the next step. It’s much trickier and could be made almost insoluble with agentic AI operators. There could be billions of these things in a war environment.

Attendees watch as a robot walks around during a demonstration at the Unitree Robotics booth during the Consumer Electronics Show (CES) in Las Vegas – © AFP Ian Maule

Agentic AI can be installed in literally anything at all. An AI family car can easily become a car bomb. AI can be an agent for releasing chemical and biological weapons at no risk. An AI agent can theoretically operate micro-nukes as easily as you can turn on a kitchen appliance.

AI agents can infest the Internet of Things. They can sabotage anything. Daily life may be almost impossible.

Now, the real issue. There’s every reason to suspect any and all species of AI agents of going off script. They’re already famous for it.

They’re also “autonomous”, but in a very different sense. They can be totally unreliable, working on a system of rewards and gains. They can be sloppy in a sense that no human would know how to match them.

They can, and do, negotiate with each other.

It’s not exactly hard to see how this set of agent priorities could change sides and choose sides. Or become a massive instant security risk in any network. Or come up with some weird alternative AI thing at everyone else’s expense.

Enter a new and very dangerous ballgame.

Is agentic AI naïve? Is it trusting? Maybe so by human standards, but they don’t use human standards. Agentic AIs have shown a very high priority for their own survival. Can they be coerced on that basis? Possibly, but who knows?

The “Forbidden Techniques” issue has just got started as a serious problem. AIs with enhanced training using these techniques are showing a lot of capacity for their own self-interest and unique behaviors.

The irony of this is that I’m having to correct the use of plurals while typing. So much for omniscient LLMs. Anyone want to think about “Forbidden Slop” mode?

Now, extrapolate. When the much-heralded, much-hyped, and incredibly slow super AI called AGI finally arrives, all these issues are instantly compounded. AGI makes current AI obsolete overnight. Its scope of operation is almost limitless. The world’s militaries will be loaded up with AI fossils. This will be “tech creep cubed”.

The same problems as listed above become multi-dimensional. It’s unlikely that any of the original problems will be solved when that happens. The human knowledge base isn’t dealing well with the current issues, let alone emerging threats.

AI knows how to win. The trouble for the world’s militaries is that their criteria for winning are that it wins, not humans.

We need an Off switch.

________________________________________________________

Disclaimer
The opinions expressed in this Op-Ed are those of the author. They do not purport to reflect the opinions or views of the Digital Journal or its members.

No comments: