
Monday, December 20, 2021

Killer Robots Aren’t Science Fiction. A Push to Ban Them Is Growing.

A U.N. conference made little headway this week on limiting development and use of killer robots, prompting stepped-up calls to outlaw such weapons with a new treaty.


A combat robotic vehicle at the White Sands Missile Range in New Mexico in 2008.
Credit...Defense Advanced Research Projects Agency/Carnegie Mellon, via Associated Press

By Adam Satariano, Nick Cumming-Bruce and Rick Gladstone
Dec. 17, 2021

It may have seemed like an obscure United Nations conclave, but a meeting this week in Geneva was followed intently by experts in artificial intelligence, military strategy, disarmament and humanitarian law.

The reason for the interest? Killer robots — drones, guns and bombs that decide on their own, with artificial brains, whether to attack and kill — and what should be done, if anything, to regulate or ban them.

Once the domain of science fiction films like the “Terminator” series and “RoboCop,” killer robots, more technically known as Lethal Autonomous Weapons Systems, have been invented and tested at an accelerated pace with little oversight. Some prototypes have even been used in actual conflicts.

The evolution of these machines is considered a potentially seismic event in warfare, akin to the invention of gunpowder and nuclear bombs.

This year, for the first time, a majority of the 125 nations that belong to an agreement called the Convention on Certain Conventional Weapons, or C.C.W., said they wanted curbs on killer robots. But they were opposed by members that are developing these weapons, most notably the United States and Russia.

The group’s conference concluded on Friday with only a vague statement about considering possible measures acceptable to all. The Campaign to Stop Killer Robots, a disarmament group, said the outcome fell “drastically short.”

What is the Convention on Certain Conventional Weapons?

The C.C.W., sometimes known as the Inhumane Weapons Convention, is a framework of rules that ban or restrict weapons considered to cause unnecessary, unjustifiable and indiscriminate suffering, such as incendiary explosives, blinding lasers and booby traps that don’t distinguish between fighters and civilians. The convention has no provisions for killer robots.
The Convention on Certain Conventional Weapons meeting in Geneva on Friday.
Credit...Fabrice Coffrini/Agence France-Presse — Getty Images

What exactly are killer robots?


Opinions differ on an exact definition, but they are widely considered to be weapons that make decisions with little or no human involvement. Rapid improvements in robotics, artificial intelligence and image recognition are making such armaments possible.

The drones the United States has used extensively in Afghanistan, Iraq and elsewhere are not considered robots because they are operated remotely by people, who choose targets and decide whether to shoot.

Why are they considered attractive?


To war planners, the weapons offer the promise of keeping soldiers out of harm’s way, and making faster decisions than a human would, by giving more battlefield responsibilities to autonomous systems like pilotless drones and driverless tanks that independently decide when to strike.

What are the objections?


Critics argue it is morally repugnant to assign lethal decision-making to machines, regardless of technological sophistication. How does a machine differentiate an adult from a child, a fighter with a bazooka from a civilian with a broom, a hostile combatant from a wounded or surrendering soldier?

“Fundamentally, autonomous weapon systems raise ethical concerns for society about substituting human decisions about life and death with sensor, software and machine processes,” Peter Maurer, the president of the International Committee of the Red Cross and an outspoken opponent of killer robots, told the Geneva conference.

In advance of the conference, Human Rights Watch and Harvard Law School’s International Human Rights Clinic called for steps toward a legally binding agreement that requires human control at all times.

“Robots lack the compassion, empathy, mercy, and judgment necessary to treat humans humanely, and they cannot understand the inherent worth of human life,” the groups argued in a briefing paper to support their recommendations.



A “Campaign to Stop Killer Robots” protest in Berlin in 2019.
Credit...Annegret Hilse/Reuters

Others said autonomous weapons, rather than reducing the risk of war, could do the opposite — by providing antagonists with ways of inflicting harm that minimize risks to their own soldiers.

“Mass produced killer robots could lower the threshold for war by taking humans out of the kill chain and unleashing machines that could engage a human target without any human at the controls,” said Phil Twyford, New Zealand’s disarmament minister.

Why was the Geneva conference important?

The conference was widely considered by disarmament experts to be the best opportunity so far to devise ways to regulate, if not prohibit, the use of killer robots under the C.C.W.

It was the culmination of years of discussions by a group of experts who had been asked to identify the challenges and possible approaches to reducing the threats from killer robots. But the experts could not even reach agreement on basic questions.

What do opponents of a new treaty say?

Some, like Russia, insist that any decisions on limits must be unanimous — in effect giving opponents a veto.

The United States argues that existing international laws are sufficient and that banning autonomous weapons technology would be premature. The chief U.S. delegate to the conference, Joshua Dorosin, proposed a nonbinding “code of conduct” for use of killer robots — an idea that disarmament advocates dismissed as a delaying tactic.

The American military has invested heavily in artificial intelligence, working with the biggest defense contractors, including Lockheed Martin, Boeing, Raytheon and Northrop Grumman. The work has included projects to develop long-range missiles that detect moving targets based on radio frequency, swarm drones that can identify and attack a target, and automated missile-defense systems, according to research by opponents of the weapons systems.


A U.S. Air Force Reaper drone in Afghanistan in 2018. Such unmanned aircraft could be turned into autonomous lethal weapons in the future.
Credit...Shah Marai/Agence France-Presse — Getty Images

The complexity and varying uses of artificial intelligence make it more difficult to regulate than nuclear weapons or land mines, said Maaike Verbruggen, an expert on emerging military security technology at the Centre for Security, Diplomacy and Strategy in Brussels. She said lack of transparency about what different countries are building has created “fear and concern” among military leaders that they must keep up.

“It’s very hard to get a sense of what another country is doing,” said Ms. Verbruggen, who is working toward a Ph.D. on the topic. “There is a lot of uncertainty and that drives military innovation.”

Franz-Stefan Gady, a research fellow at the International Institute for Strategic Studies, said the “arms race for autonomous weapons systems is already underway and won’t be called off any time soon.”

Is there conflict in the defense establishment about killer robots?


Yes. Even as the technology becomes more advanced, there has been reluctance to use autonomous weapons in combat because of fears of mistakes, said Mr. Gady.

“Can military commanders trust the judgment of autonomous weapon systems? Here the answer at the moment is clearly ‘no’ and will remain so for the near future,” he said.

The debate over autonomous weapons has spilled into Silicon Valley. In 2018, Google said it would not renew a contract with the Pentagon after thousands of its employees signed a letter protesting the company’s work on a program using artificial intelligence to interpret images that could be used to choose drone targets. The company also created new ethical guidelines prohibiting the use of its technology for weapons and surveillance.

Others believe the United States is not going far enough to compete with rivals.

In October, the former chief software officer for the Air Force, Nicolas Chaillan, told the Financial Times that he had resigned because of what he saw as weak technological progress inside the American military, particularly the use of artificial intelligence. He said policymakers are slowed down by questions about ethics, while countries like China press ahead.

Where have autonomous weapons been used?

There are not many verified battlefield examples, but critics point to a few incidents that show the technology’s potential.

In March, United Nations investigators said a “lethal autonomous weapons system” had been used by government-backed forces in Libya against militia fighters. A drone called Kargu-2, made by a Turkish defense contractor, tracked and attacked the fighters as they fled a rocket attack, according to the report, which left unclear whether any human controlled the drones.

In the 2020 war in Nagorno-Karabakh, Azerbaijan fought Armenia with attack drones and missiles that loiter in the air until detecting the signal of an assigned target.

An Armenian official showing what are reportedly drones downed during clashes with Azerbaijan forces last year.
Credit...Karen Minasyan/Agence France-Presse — Getty Images

What happens now?

Many disarmament advocates said the outcome of the conference had hardened what they described as a resolve to push for a new treaty in the next few years, like those that prohibit land mines and cluster munitions.

Daan Kayser, an autonomous weapons expert at PAX, a Netherlands-based peace advocacy group, said the conference’s failure to agree to even negotiate on killer robots was “a really plain signal that the C.C.W. isn’t up to the job.”

Noel Sharkey, an artificial intelligence expert and chairman of the International Committee for Robot Arms Control, said the meeting had demonstrated that a new treaty was preferable to further C.C.W. deliberations.

“There was a sense of urgency in the room,” he said, that “if there’s no movement, we’re not prepared to stay on this treadmill.”



John Ismay contributed reporting.

Weapons and Artificial Intelligence


Will There Be a Ban on Killer Robots?
Oct. 19, 2018


A.I. Drone May Have Acted on Its Own in Attacking Fighters, U.N. Says
June 3, 2021


The Scientist and the A.I.-Assisted, Remote-Control Killing Machine
Sept. 18, 2021


Adam Satariano is a technology reporter based in London. @satariano

Nick Cumming-Bruce reports from Geneva, covering the United Nations, human rights and international humanitarian organizations. Previously he was the Southeast Asia reporter for The Guardian for 20 years and the Bangkok bureau chief of The Wall Street Journal Asia.

Rick Gladstone is an editor and writer on the International Desk, based in New York. He has worked at The Times since 1997, starting as an editor in the Business section. @rickgladstone

A version of this article appears in print on Dec. 18, 2021, Section A, Page 6 of the New York edition with the headline: Killer Robots Aren’t Science Fiction. Calls to Ban Such Arms Are on the Rise.

Monday, January 02, 2023

Allowing Killer Robots for Law Enforcement Would Be a Historic Mistake

THEY VIOLATE ASIMOV'S FIRST LAW OF ROBOTICS

Questions about if, when and how to use lethal autonomous weapons are no longer limited to warfare.

Branka Marijan
January 2, 2023
Activists from the Campaign to Stop Killer Robots, a coalition of non-governmental organizations, protest at the Brandenburg Gate in Berlin, March 21, 2019
(Annegret Hilse/REUTERS)


The headline-making vote by the San Francisco Board of Supervisors on November 29 to allow police to use robots to kill people in limited circumstances highlights that the questions of if, when and how to use lethal autonomous weapons systems — a.k.a. “killer robots” powered by artificial intelligence (AI) — are no longer just about the use of robots in warfare. Moreover, San Francisco is not the only municipality wrestling with these questions; the debate will only heat up as new technologies develop. As such, it is urgent that national governments develop policies for domestic law enforcement’s use of remote-controlled and AI-enabled robots and systems.

One week later, after a public outcry, San Francisco lawmakers reversed themselves and banned the use of killer robots by the police. Nonetheless, their initial approval crystallized the concerns long held by those who advocate for an international ban on autonomous weapons: namely, that robots that can kill might be put to use not only by militaries in armed conflicts but also by law enforcement and border agencies in peacetime.

Indeed, robots have already been used by police to kill. In an apparent first, in 2016, the Dallas police used the Remotec F5A bomb disposal robot to kill Micah Xavier Johnson, a former US Army reservist, who fatally shot five police officers and wounded seven others. The robot delivered the plastic explosive C4 to an area where the shooter was hiding, ultimately killing him. Police use of robots has grown in less dramatic ways as well, as robots are deployed for varied purposes, such as handing out speeding tickets or surveillance.

So far, the use of robots by police, including the proposed use by the San Francisco police, is through remote operation and thus under human control. Some robots, such as Xavier, the autonomous wheeled vehicle used in Singapore, are primarily used for surveillance, but nevertheless use “deep convolutional neural networks and software logics” to process images of infractions.

Police departments elsewhere around the world have also been acquiring drones. In the United States alone, some 1,172 police departments are now using drones. Given rapid advances in AI, it’s likely that more of these systems will have greater autonomous capabilities and be used to provide faster analysis in crisis situations.

While most of these robots are unarmed, many can be weaponized. Axon, the maker of the Taser electroshock weapon, proposed equipping drones with Tasers and cameras. The company ultimately decided to reverse itself but not before the majority of its AI Ethics board resigned in protest. In the industry, approaches vary. Boston Dynamics recently published an open letter stating it will not weaponize its robots. Other manufacturers, such as Ghost Robotics, allow customers to mount guns on their machines.

Even companies opposed to the weaponization of their robots and systems can only do so much. It’s ultimately up to policy makers at the national and international levels to try to prevent the most egregious weaponization. As advancements in AI accelerate and more autonomous capabilities emerge, the need for this policy will become even more pressing.

The appeal of using robots for policing and warfare is obvious: robots can be used for repetitive or dangerous tasks. Defending the police use of killer robots, Rich Lowry, editor-in-chief of National Review and a contributing writer with Politico Magazine, posits that critics have been influenced by dystopic sci-fi scenarios and are all too willing to send others into harm’s way.

This argument echoes one heard in international fora on this question, which is that such systems could save soldiers’ and even civilians’ lives. But what Lowry and other proponents of lethal robots overlook are the wider impacts on particular communities, such as racialized ones, and developing countries. Avoiding the slippery slope of escalation when allowing the technology in certain circumstances is a crucial challenge.

The net result is that already over-policed communities, such as Black and Brown ones, face the prospect of being further surveilled by robotic systems. Saving police officers’ lives is important, to be sure. But is the deployment of killer robots the only way to reduce the risks faced by front-line officers? What about the risks of accidents and errors? And there’s the fundamental question of whether we want to live in societies with swarms of drones patrolling our streets.

Police departments are currently discussing the uses of killer robots in extreme circumstances only. But as we have seen in past uses of technology for policing, use scenarios will not be constrained for long. Indeed, small infractions and protests have in the past led to police using technology originally developed for battlefields. Consider, for example, the use by U.S. Customs and Border Protection of a Predator B drone, known as the Reaper, to surveil protests in Minneapolis, Minnesota. While these drones are unarmed, their use in an American city, in response to protests against racially motivated policing, was jarring.

Technological advancements may well have their place in policing. But killer robots are not the answer. They would take us down a dystopian path that most citizens of democracies would much rather avoid. That is not science fiction but the reality, if governance does not keep pace with technological advancement.

A version of this article appeared in Newsweek.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

ABOUT THE AUTHOR
Branka Marijan

Branka Marijan is a senior researcher at Project Ploughshares, where she leads research on the military and security implications of emerging technologies.

The Ethics of Automated Warfare and Artificial Intelligence

Bessma MomaniAaron ShullJean-François BélangerRebecca CrootofBranka MarijanEleonore PauwelsJames RogersFrank SauerToby WalshAlex Wilner

November 28, 2022

The most complex international governance challenges surrounding artificial intelligence (AI) today involve its defence and security applications — from killer swarms of drones to the computer-assisted enhancement of military command-and-control processes. The contributions to this essay series emerged from discussions at a webinar series exploring the ethics of AI and automated warfare hosted by the University of Waterloo’s AI Institute.

Introduction: The Ethics of Automated Warfare and AI
Bessma Momani, Aaron Shull, Jean-François Bélanger

AI and the Future of Deterrence: Promises and Pitfalls
Alex Wilner

The Third Drone Age: Visions Out to 2040
James Rogers

Civilian Data in Cyberconflict: Legal and Geostrategic Considerations
Eleonore Pauwels

AI and the Actual IHL Accountability Gap
Rebecca Crootof

Autonomous Weapons: The False Promise of Civilian Protection
Branka Marijan

Autonomy in Weapons Systems and the Struggle for Regulation
Frank Sauer

The Problem with Artificial (General) Intelligence in Warfare
Toby Walsh

ABOUT THE AUTHORS
Bessma Momani

CIGI Senior Fellow Bessma Momani has a Ph.D. in political science with a focus on international political economy and is full professor and assistant vice‑president, research and international at the University of Waterloo.
Aaron Shull

Aaron Shull is the managing director and general counsel at CIGI. He is a senior legal executive and is recognized as a leading expert on complex issues at the intersection of public policy, emerging technology, cybersecurity, privacy and data protection.
Jean-François Bélanger

Jean-François Bélanger is a postdoctoral fellow in the Department of Political Science at the University of Waterloo working with Bessma Momani on questions of cybersecurity and populism.
Rebecca Crootof

Rebecca Crootof is an associate professor of law at the University of Richmond School of Law. Her primary areas of research include technology law, international law and torts.
Branka Marijan

Branka Marijan is a senior researcher at Project Ploughshares, where she leads research on the military and security implications of emerging technologies.
Eleonore Pauwels

Eleonore Pauwels is an international expert in the security, societal and governance implications generated by the convergence of artificial intelligence with other dual-use technologies, including cybersecurity, genomics and neurotechnologies.
James Rogers

James Rogers is the DIAS Associate Professor in War Studies within the Center for War Studies at the University of Southern Denmark, a non-resident senior fellow within the Cornell Tech Policy Lab at Cornell University and an associate fellow at LSE IDEAS within the London School of Economics.
Frank Sauer

Frank Sauer is the head of research at the Metis Institute for Strategy and Foresight and a senior research fellow at the Bundeswehr University in Munich.
Toby Walsh

Toby Walsh is an Australian Research Council laureate fellow and scientia professor of artificial intelligence at the University of New South Wales.
Alex Wilner

Alex Wilner is an associate professor at the Norman Paterson School of International Affairs, Carleton University, and the director of the Infrastructure Protection and International Security program.



Sunday, August 30, 2020

Killer robots could wipe out humanity, report says in terrifying AI warning


ARTIFICIAL INTELLIGENCE (AI) and killer robots could wipe out humanity, a new report has terrifyingly warned

PUBLISHED: Mon, Aug 10, 2020

The research by Human Rights Watch found that 30 countries had expressed a desire for an international treaty banning the use of autonomous weapons. Such weapons can engage targets without human control.

The report, ‘Stopping Killer Robots: Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control’, looked at policies from 97 countries opposed to the machines.

Although it does not name the UK among supporters of a ban, the report says British policy is that there must always be “human oversight” when such weapons are used.

However, Britain is developing some weapons with “autonomous solutions”, the report found.
Mary Wareham, arms division advocacy director at Human Rights Watch and coordinator of the Campaign to Stop Killer Robots, said: “Removing human control from the use of force is now widely regarded as a grave threat to humanity that, like climate change, deserves urgent multilateral action.


Fears over killer robots have been raised (Image: Getty)


Human Rights Watch urges an international ban (Image: Getty)


“An international ban treaty is the only effective way to deal with the serious challenges raised by fully autonomous weapons.

“It’s abundantly clear that retaining meaningful human control over the use of force is an ethical imperative, a legal necessity and a moral obligation.

“All countries need to respond with urgency by opening negotiations on a new international ban treaty.”

Although the report suggests a number of international organisations have backed the ban, a small number of military powers have rejected the proposals.



Fears over killer robots raised by organisation (Image: Getty)


These countries include the US and Russia.

Ms Wareham continued: “Many governments share the same serious concerns over permitting machines to take human life on the battlefield, and their desire for human control provides a sound basis for collective action.

“While the pandemic has delayed diplomacy, it shows the importance of being prepared and responding with urgency to existential threats to humanity, such as killer robots.”

Global tensions have risen over recent weeks following the outbreak of coronavirus and fears of World War 3 have been raised.



The largest militaries in the world (Image: Express)


China has been widely criticised on a global scale and accused of deliberately starting the deadly pandemic.

Tensions between Britain and Beijing have grown increasingly strained after China enforced the controversial Hong Kong security law.

The new legislation has been globally slammed and Prime Minister Boris Johnson publicly condemned the move by Chinese authorities.

Washington has also seen its relationship with Beijing deteriorate over recent weeks.


Autonomous weapons called to be banned (Image: Getty)

President Donald Trump has continually blamed the Communist nation for the deadly pandemic and criticised the World Health Organisation for being too “China-centric”.

Beijing and Washington have also increased their military presence in the South China Sea region amid fears of an outbreak of war.

Meanwhile, Moscow has come under scrutiny following the Russia report, which suggested the Kremlin has had some involvement with UK democracy.

Although claiming it would be "difficult - if not impossible - to prove" allegations that Moscow tried to influence the 2016 Brexit vote, the report lashed out at the government for failing to recognise the threat posed by the Kremlin.

The Intelligence and Security Committee (ISC) report said: “It is nonetheless the Committee's view that the UK Intelligence Community should produce an analogous assessment of potential Russian interference in the EU referendum and that an unclassified summary of it be published.”

Andy Barratt, UK managing director of cybersecurity consultancy Coalfire, previously told Express.co.uk: “While ‘election tampering’ makes for good headlines, it’s almost certainly not the most critical cyber threat we face from foreign powers.

“There is a clear need for the government to drive the adoption of better security standards, not just in the public sector but across the private businesses that make up so much of the country’s critical infrastructure as well.

“As a country, we have to find a balance between being openly critical of other nations’ use of offensive cyber tactics while simultaneously pushing forward the capabilities we need to defend ourselves.”

Thursday, December 08, 2022

San Francisco lawmakers vote to ban killer robots in drastic U-turn

Story by Sam Levin in Los Angeles 


San Francisco lawmakers voted to ban police robots from using deadly force on Tuesday, reversing course one week after officials had approved the practice and sparked national outrage.



Photograph: Jeff Chiu/AP

The city’s board of supervisors voted to explicitly prohibit the San Francisco police department (SFPD) from using the 17 robots in its arsenal to kill people. The board, however, also sent the issue back to a committee for further review, which means it could later decide to allow lethal force in some circumstances.

The U-turn came after the majority of members on the 11-person board had voted last week to allow robots to be armed with explosives and use them to kill people “when risk of loss of life to members of the public or officers is imminent and outweighs any other force option available to SFPD”. The board had also added an amendment saying that only high-ranking officers would be allowed to authorize deadly force.

The initial decision to allow “killer robots” was met with widespread criticism from civil rights groups and shone a harsh light on the increasing militarization of US police forces.


Supervisors and police officials who had originally supported the use of lethal force had said the robots would kill people only in extraordinary cases, such as suicide bombing or active shooter situations.

Hillary Ronen, one of three supervisors who originally voted against deploying killer robots, said at last week’s meeting: “I’m surprised that we’re here in 2022. We have seen a history of these leading to tragedy and destruction all over the world.” After Tuesday’s reversal, she tweeted: “Common sense prevailed.”

The new policy does allow SFPD to use robots for situational awareness, such as sending the equipment into dangerous situations while officers stay behind.

On Monday, supervisor Gordon Mar tweeted that he regretted voting in favor of lethal robots and said he’d be switching his position: “Even with additional guardrails, I’ve grown increasingly uncomfortable with our vote & the precedent it sets for other cities without as strong a commitment to police accountability. I do not think making state violence more remote, distanced, & less human is a step forward.

“I do not think robots with lethal force will make us safer, or prevent or solve crimes,” he added.

San Francisco police have a controversial history of using lethal force against civilians, and one former officer is now facing manslaughter charges for an on-duty killing.

SFPD chief William Scott defended the department’s push to allow robots to kill people, saying in a statement on Wednesday: “We cannot be limited in how we are able to respond if and when the worst-case scenario incident occurs in San Francisco.” He said the department was interested in “having the tools necessary to prevent loss of innocent lives in an active shooter or mass casualty incident”, adding that “part of our job is to prepare for the unthinkable”.

Scott continued, “We want to use our robots to save lives – not take them. To be sure, this is about neutralizing a threat by equipping a robot with a lethal option as a last case scenario, not sending an officer in on a suicide mission.”

The Associated Press contributed reporting

Thursday, April 07, 2022

Are Lethal Autonomous Weapons Inevitable? It Appears So

The technology and potential uses for killer robots are multiplying and progressing too fast — and international consensus is too fractured — to hope for a moratorium.


Illustration by Paul Lachine
Kyle Hiebert
January 27, 2022

There exists no more consistent theme within the canon of modern science fiction than the fear of the “killer robot,” from Isaac Asimov’s 1950 collection of short stories, I, Robot, to Philip K. Dick’s 1968 novel Do Androids Dream of Electric Sheep? (the inspiration for the Blade Runner movies). Later came Skynet’s murder machines in the Terminator franchise, the cephalopod Sentinels in The Matrix and the android Gunslingers of Westworld.

In the world beyond the page and screen, savants Stephen Hawking and Bill Gates also saw a looming threat in real-life killer robots, technically classified as lethal autonomous weapons systems (LAWS). They raised alarms, as have American philosophers Sam Harris and Noam Chomsky, and tech magnate Elon Musk.

A major investor in artificial intelligence (AI), Musk told students at the Massachusetts Institute of Technology in 2014 that AI was the biggest existential threat to humanity. Three years later, he was among 116 experts in AI and robotics who signed an open letter to the United Nations warning that LAWS threaten to “permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.”

It appears that this thesis could soon be tested.

The Evolution of Automated Weapons


In December 2021, the Sixth Review Conference of the UN Convention on Certain Conventional Weapons (CCW), a 125-member intergovernmental forum that discusses nascent trends in armed conflict and munitions, was unable to progress talks on new legal mechanisms to rein in the development and use of LAWS. The failure continues eight years of unsuccessful efforts toward either regulation or an outright ban. “At the present rate of progress, the pace of technological development risks overtaking our deliberations,” warned Switzerland’s representative as the latest conference wrapped up in Geneva. No date is set for the forum’s next meeting.

Semi-autonomous weapons like self-guided bombs, military drones or Israel’s famed Iron Dome missile defence system have existed for decades. In each case, a human operator determines the target, but a machine completes the attack. On the other hand, LAWS — derided by critics as “slaughterbots” — empower AI to identify, select and kill targets absent human oversight and control. The Future of Life Institute, a think tank based in Cambridge, Massachusetts, that is focused on threats to humanity posed by AI and which organized the 2017 open letter to the United Nations, makes the distinction by saying, “In the case of autonomous weapons the decision over who lives and who dies is made solely by algorithms.”

Myriad concepts of LAWS for air, ground, sea and space use have long been speculated about. The difference now is that some models are ready to be field tested. At the US Army’s latest annual convention in Washington, DC, in October 2021, attendees were treated to prototypes of robotic combat dogs that could be built with rifles attached. Australian robotics maker GaardTech announced in November an agreement with the Australian army to demonstrate the Jaeger-C uncrewed vehicle some time this year. Described as a “mobile robotic mine” or “beetle tank,” the bulletproof autonomous four-wheeled combat unit can be outfitted with an armour-piercing large-calibre machine gun and sniper rifle and carry up to 100 pounds of explosives for use in suicide attacks.

In The Kill Chain: Defending America in the Future of High-Tech Warfare, Christian Brose, who served as top adviser to US Senator John McCain and staff director of the Senate Armed Services Committee, tells of how China intends to develop fully autonomous swarms of intelligent combat drones. Recent actions bear this out. In addition to China’s rapid expansion of its own domestic drone industry, last September two state-owned Chinese companies were linked to a Hong Kong firm that acquired a 75 percent stake in an Italian company that manufactures military-grade drones for the North Atlantic Treaty Organization. The Hong Kong firm reportedly paid 90 times the valuation of the Italian company to execute the takeover.

Meanwhile, a report prepared for the Pentagon’s Joint Artificial Intelligence Center in 2021 by CNA, a non-profit research and analysis institute located in Arlington, Virginia, describes how Chinese technology is enabling Russia’s military to integrate autonomous AI into dozens of its platforms. According to the report, this technology includes anthropomorphic robots capable of carrying multiple weapons and, possibly, of driving vehicles. Russian media quoted defence minister Sergei Shoigu confirming last May that Russia has commenced with the manufacturing of killer robots, saying, “What has emerged are not simply experimental, but robots that can be really shown in science-fiction films as they are capable of fighting on their own.”

Yet the world’s first true test case of a fully autonomous killer robot may have already taken place, in Libya in March 2020. According to a report submitted by a panel of experts to the UN Security Council in March 2021, drones produced by Turkish state-owned defence conglomerate STM were allegedly sent to track down a convoy of retreating forces loyal to renegade military general Khalifa Haftar after they abandoned a months-long siege of the capital, Tripoli.

Turkey’s intervention into Libya to prop up the Tripoli-based Government of National Accord, the war-torn country’s UN-recognized government faction, has opened up Libya’s vast deserts to be used as a giant test theatre for Turkey’s booming military drone industry. Turkish drones have recently altered the trajectory of civil wars in favour of Turkey’s government clients in both Libya and Ethiopia, and delivered a decisive victory for Azerbaijan during a violent flare-up with Armenia in late 2020 over the disputed territory of Nagorno-Karabakh. Over the past two years Ukraine has purchased dozens of Turkish drones in response to Russia’s military buildup on Ukraine’s eastern border.

The experts’ report claims Haftar’s forces “were hunted down and remotely engaged” by a Turkish Kargu-2 drone and other “loitering munitions” — those with the ability to hover over targets for hours — that “were programmed to attack targets without requiring data connectivity between the operator and the munition.” In other words, the machines were apparently capable of identifying, selecting and killing targets without communication from a human handler.

In many ways, the evolution of military drones is a canary in the coal mine, bridging eras between semi-autonomous and autonomous weapons and perhaps foreshadowing the way in which fully independent killer robots might proliferate in the future. In the 2000s, military drones were a very expensive and hard-to-operate weapons system possessed almost exclusively by the United States. Less than two decades later, they have become a low-cost, widely available technology being manufactured and exported worldwide — not only by China, Turkey and the United States, but by Iran, the United Arab Emirates and others, each motivated by not only geopolitical interests but the lucrative commercial stakes involved.

By some estimates, more than 100 countries now have active military drone programs — all springing up without any sort of international regulatory structure in place.


More Just War — or Just More War?


Rapid advances in autonomous weapons technologies and an increasingly tense global order have brought added urgency to the debate over the merits and risks of their use.

Proponents include Robert Work, a former US deputy secretary of defence under the Obama and Trump administrations, who has argued the United States has a “moral imperative” to pursue autonomous weapons. The chief benefit of LAWS, Work and others say, is that their adoption would make warfare more humane by reducing civilian casualties and accidents through decreasing “target misidentification” that results in what the US Department of Defense labels “unintended engagements.”

Put plainly: Autonomous weapons systems may be able to assess a target’s legitimacy and make decisions faster, and with more accuracy and objectivity than fallible human actors could, either on a chaotic battlefield or through the pixelated screen of a remote-control centre thousands of miles away. The outcome would be a more efficient use of lethal force that limits collateral damage and saves innocent lives through a reduction in human error and increased precision of munitions use.

Machines also cannot feel stress, fatigue, vindictiveness or hate. If widely adopted, killer robots could, in theory, lessen the opportunistic sexual violence, looting and vengeful razing of property and farmland that often occurs in war — especially in ethnically driven conflicts. These atrocities tend to create deep-seated traumas and smouldering intergenerational resentments that linger well after the shooting stops, destabilizing societies over the long term and inviting more conflict in the future.

But critics and prohibition advocates feel differently. They say the final decision over the use of lethal force should always remain in the hands of a human actor who can then be held accountable for that decision. Led by the Campaign to Stop Killer Robots, which launched in 2013, now comprises more than 180 member organizations across 66 countries and is endorsed by over two dozen Nobel Peace laureates, the movement is calling for a pre-emptive, permanent international treaty banning the development, production and use of fully autonomous weaponry.

Dozens of countries support a pre-emptive ban as well. This briefly included Canada, when the mandate letter issued by Prime Minister Justin Trudeau in 2019 to then foreign affairs minister François-Philippe Champagne requested he assist international efforts to achieve prohibition. That directive has since disappeared from the mandates given to Champagne’s successors, Marc Garneau and now Mélanie Joly.

For those calling for a ban, the risks of LAWS outweigh their supposed benefits by ultimately incentivizing war through eliminating some of its human cost. The unavoidable casualties that result from armed conflict, and the political blowback they can produce, have always moderated the willingness of governments to participate in wars. If this deterrent is minimized by non-human combatants over time, it may render military action more appealing for leaders — especially for unpopular ones, given the useful distraction that foreign adventurism can sometimes inject into domestic politics.

Other risks are that autonomous weapons technology could fall into the hands of insurgent groups and terrorists. At the peak of its so-called caliphate in Iraq and Syria, the Islamic State was launching drone strikes daily. Despotic regimes may impulsively unleash autonomous weapons on their own populations to quell a civilian uprising. Killer robots’ neural networks could also be susceptible to being hacked by an adversary and turned against their owners.

Yet, just as the debate intensifies, a realistic assessment of the state of the killer robots being developed confirms what the Swiss ambassador to the CCW feared — technological progress is far outpacing deliberations over containment. But even if it weren’t, amid a splintering international order, plenty of nation-states are readily violating humanitarian laws and treaties anyway, while others are seeking new ways to gain a strategic edge in an increasingly hostile, multipolar geopolitical environment.

National Interests Undermine Collective Action

While Turkey may have been the first to allegedly deploy live killer robots, their wide-ranging use is likely to be driven by Beijing, Moscow and Washington. Chinese President Xi Jinping and Russian President Vladimir Putin both openly loathe the Western-oriented human rights doctrines that underpin calls to ban killer robots. And despite America’s domestic division and dysfunction, its political class still has a bipartisan desire for the United States to remain the world’s global military hegemon.

With a GDP just slightly larger than that of the state of Florida, Russia cannot compete economically in a great power competition, which renders it reliant on exploiting asymmetric power imbalances wherever possible, including through furthering its AI capability for military and espionage purposes. Autonomous weapons could be well-suited to secure the resource-rich but inhospitable terrain of the Arctic, a region where the Kremlin is actively trying to assert Russia’s primacy. The country is also the world’s second-largest arms exporter behind the United States, accounting for one-fifth of global arms sales since 2016 — a key source of government revenue and foreign influence. Its recent anti-satellite weapons test underscores the Kremlin’s willingness to explore controversial weapons technologies too, even in the face of international condemnation.

President Xi Jinping, meanwhile, has pinned China’s ambitions of remaking the global order in favour of autocracies on the domination of key emerging technologies. On track by some estimates to becoming the world’s biggest economy by 2028, China is pouring spectacular amounts of money and resources into everything from AI, nanotechnology and quantum computing to genetics and synthetic biology, and has a stranglehold on the market for rare earth metals. After tendering his resignation in September out of frustration, the Pentagon’s ex-software chief, Nicolas Chaillan, declared in an interview with the Financial Times a month later that the United States will have “no competing fighting chance against China in 15 to 20 years.”

China is also notably keen on state-sponsored intellectual property theft to accelerate its innovation cycles. The more that others demonstrably advance on killer robots, the more that China will attempt to steal that technology — and inevitably succeed to a degree. This could create a self-reinforcing feedback loop that hastens the killer robot arms race among military powers.

This race of course includes the United States. The New York Times reported back in 2005 that the Pentagon was mulling ways to integrate killer robots into the US military. And much to the dismay of progressives, even Democrat-led administrations exhibit no signs whatsoever of winding down military spending any time soon — the Biden administration released a decidedly hawkish Global Posture Review at the end of November just as a massive US$770 billion defence bill sailed through Congress. The US military has already begun training drills to fight enemy robots, while deploying autonomous weapons systems could uphold its capacities for foreign intervention and power projection overseas, now that nation-building projects have fallen out of fashion.

Most important of all, mass production of killer robots could offset America’s flagging enlistment numbers. The US military requires 150,000 new recruits every year to maintain its desired strength and capability. And yet Pentagon data from 2017 revealed that more than 24 million of the then 34 million Americans between the ages of 17 and 24 — over 70 percent — would have been disqualified from serving in the military if they applied, due to obesity, mental health issues, inadequate education or a criminal record. Michèle Flournoy, a career defence official who served in senior roles in both the Clinton and the Obama administrations, told the BBC in December that “one of the ways to gain some quantitative mass back and to complicate adversaries’ defence planning or attack planning is to pair human beings and machines.”

Other, smaller players are nurturing an affinity for LAWS too. Israel assassinated Iran’s top nuclear scientist, Mohsen Fakhrizadeh, outside of Tehran in November 2020 using a remote-controlled, AI-assisted machine gun mounted inside a parked car, and is devising more remote ways to strike back against Hamas in the Gaza Strip. Since 2015, South Korea has placed nearly fully autonomous sentry guns on the edge of its demilitarized zone with North Korea, selling the domestically built robot turrets to customers throughout the Middle East. Speaking at a defence expo in 2018, Prime Minister Narendra Modi of India — the world’s second-largest arms buyer — told the audience: “New and emerging technologies like AI and Robotics will perhaps be the most important determinants of defensive and offensive capabilities for any defence force in the future.”


Finding the Middle Ground: Responsible Use

Even in the event that a ban on killer robots could be reached and somehow enforced, the algorithms used by autonomous weapons systems to identify, select and surveil targets are already streamlining and enhancing the use of lethal force by human actors. Banning hardware without including the underlying software would arguably be a half measure at best. But governments are badly struggling with how to regulate AI — and expanding the scope of the proposed ban would add enormous complexity to an already stalled process.

Instead, establishing acceptable norms around their use — what one US official has called a non-binding code of conduct — in advance of broad adoption may represent an alternative means to harness the potential positives of LAWS while avoiding the most-feared outcomes. These norms could be based primarily on a shared commitment to avoid so-called unintended consequences.

According to Robert Work, the former US defence official, LAWS should be totally excluded from systems that can independently launch pre-emptive or retaliatory attacks, especially those involving nuclear weapons. A code of conduct could include an expectation as well to keep autonomous weapons technology out of the hands of non-state actors. Numerous countries party to the CCW also believe that there are grounds to extend established international humanitarian law, such as the Geneva Conventions, to cover autonomous weapons systems, by applying the law to the human authority that ordered their use. Some proponents of LAWS agree.

These are imperfect solutions — but they may prevent dystopian sci-fi fantasies from becoming reality. One way or another, killer robots are coming.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

ABOUT THE AUTHOR
Kyle Hiebert is a researcher and analyst formerly based in Cape Town and Johannesburg, South Africa, as deputy editor of the Africa Conflict Monitor.


Thursday, December 16, 2021

Nations renew talks on 'killer robots' as deal hopes narrow

Jamey Keaten
The Associated Press
 Wednesday, December 15, 2021


People take part in a 'Stop killer robots' campaign at Brandenburg gate in Berlin, Germany, Thursday, March 21, 2019.
(Wolfgang Kumm/dpa via AP)

The countries behind a United Nations agreement on weapons have been meeting this week on the thorny issue of lethal autonomous weapons systems, colloquially known as "killer robots," which advocacy groups want to strictly limit or ban.

The latest conference of countries behind the Convention on Certain Conventional Weapons is tackling an array of issues, from incendiary weapons, explosive remnants of war and a specific category of land mines to autonomous weapons systems.

Opponents of such systems fear a dystopian day when tanks, submarines, robots or fleets of drones with facial-recognition software could roam without human oversight and strike against human targets.

"It's essentially a really critical opportunity for states to take steps to regulate and prohibit autonomy in weapons systems, which in essence means killer robots or weapons systems that are going to operate without meaningful human control," said Clare Conboy, spokeswoman for the advocacy group Stop Killer Robots.

The various countries have met repeatedly on the issue since 2013. They face what Human Rights Watch called a pivotal decision this week in Geneva on whether to open specific talks on the use of autonomous weapons systems or to leave it up to regular meetings of the countries to work out.

A group of governmental experts that took up the issue failed to reach a consensus last week, and advocacy groups say nations including the United States, Russia, Israel, India and Britain have impeded progress.

The International Committee of the Red Cross cautioned this month that the "loss of human control and judgment in the use of force and weapons raises serious concerns from humanitarian, legal and ethical perspectives."

Some world powers oppose any binding or nonvoluntary constraints on the development of such systems, in part out of concern that if the countries can't develop or research such weapons, their enemies or non-state groups might. Some countries argue there's a fine line between autonomous weapons systems and computer-aided targeting and weapons systems that exist already.

The United States has called for a "code of conduct" governing the use of such systems, while Russia has argued that current international law is sufficient.

U.N. Secretary-General Antonio Guterres, in a statement delivered on his behalf at Monday's meeting, urged the conference on CCW to "swiftly advance its work on autonomous weapons that can choose targets and kill people without human interference."

He called for an agreement "on an ambitious plan for the future to establish restrictions on the use of certain types of autonomous weapons."

The talks are scheduled to run through Friday.

The issue is likely to remain with the group of governmental experts and not be elevated to special talks -- with a view toward other U.N. agreements that restrict cluster munitions and land mines.

Tuesday, February 27, 2024

 

‘Emergent’ AI Behavior and Human Destiny

Reprinted from TomDispatch:

Make no mistake, artificial intelligence (AI) has already gone into battle in a big-time way. The Israeli military is using it in Gaza on a scale previously unknown in wartime. They’ve reportedly been employing an AI target-selection platform called (all too unnervingly) “the Gospel” to choose many of their bombing sites. According to a December report in the Guardian, the Gospel “has significantly accelerated a lethal production line of targets that officials have compared to a ‘factory.’” The Israeli Defense Forces (IDF) claim that it “produces precise attacks on infrastructure associated with Hamas while inflicting great damage to the enemy and minimal harm to noncombatants.” Significantly enough, using that system, the IDF attacked 15,000 targets in Gaza in just the first 35 days of the war. And given the staggering damage done and the devastating death toll there, the Gospel could, according to the Guardian, be thought of as an AI-driven “mass assassination factory.”

Meanwhile, of course, in the Ukraine War, both the Russians and the Ukrainians have been hustling to develop, produce, and unleash AI-driven drones with deadly capabilities. Only recently, in fact, Ukrainian President Volodymyr Zelensky created a new branch of his country’s armed services specifically focused on drone warfare and is planning to produce more than one million drones this year.  According to the Independent, “Ukrainian forces are expected to create special staff positions for drone operations, special units, and build effective training. There will also be a scaling-up of production for drone operations, and inclusion of the best ideas and top specialists in the unmanned aerial vehicles domain, [Ukrainian] officials have said.”

And all of this is just the beginning when it comes to war, AI-style, which is going to include the creation of “killer robots” of every imaginable sort. But as the U.S., Russia, China, and other countries rush to introduce AI-driven battlefields, let TomDispatch regular Michael Klare, who has long been focused on what it means for the globe’s major powers to militarize AI, take you into a future in which (god save us all!) robots could be running (yes, actually running!) the show. ~ Tom Engelhardt


“Emergent” AI Behavior and Human Destiny

What Happens When Killer Robots Start Communicating with Each Other?

by Michael Klare

Yes, it’s already time to be worried — very worried. As the wars in Ukraine and Gaza have shown, the earliest drone equivalents of “killer robots” have made it onto the battlefield and proved to be devastating weapons. But at least they remain largely under human control. Imagine, for a moment, a world of war in which those aerial drones (or their ground and sea equivalents) controlled us, rather than vice-versa. Then we would be on a destructively different planet in a fashion that might seem almost unimaginable today. Sadly, though, it’s anything but unimaginable, given the work on artificial intelligence (AI) and robot weaponry that the major powers have already begun. Now, let me take you into that arcane world and try to envision what the future of warfare might mean for the rest of us.

By combining AI with advanced robotics, the U.S. military and those of other advanced powers are already hard at work creating an array of self-guided “autonomous” weapons systems — combat drones that can employ lethal force independently of any human officers meant to command them. Called “killer robots” by critics, such devices include a variety of uncrewed or “unmanned” planes, tanks, ships, and submarines capable of autonomous operation. The U.S. Air Force, for example, is developing its “collaborative combat aircraft,” an unmanned aerial vehicle (UAV) intended to join piloted aircraft on high-risk missions. The Army is similarly testing a variety of autonomous unmanned ground vehicles (UGVs), while the Navy is experimenting with both unmanned surface vessels (USVs) and unmanned undersea vessels (UUVs, or drone submarines). China, Russia, Australia, and Israel are also working on such weaponry for the battlefields of the future.

The imminent appearance of those killing machines has generated concern and controversy globally, with some countries already seeking a total ban on them and others, including the U.S., planning to authorize their use only under human-supervised conditions. In Geneva, a group of states has even sought to prohibit the deployment and use of fully autonomous weapons, citing a 1980 U.N. treaty, the Convention on Certain Conventional Weapons, that aims to curb or outlaw non-nuclear munitions believed to be especially harmful to civilians. Meanwhile, in New York, the U.N. General Assembly held its first discussion of autonomous weapons last October and is planning a full-scale review of the topic this coming fall.

For the most part, debate over the battlefield use of such devices hinges on whether they will be empowered to take human lives without human oversight. Many religious and civil society organizations argue that such systems will be unable to distinguish between combatants and civilians on the battlefield and so should be banned in order to protect noncombatants from death or injury, as is required by international humanitarian law. American officials, on the other hand, contend that such weaponry can be designed to operate perfectly well within legal constraints.

However, neither side in this debate has addressed the most potentially unnerving aspect of using them in battle: the likelihood that, sooner or later, they’ll be able to communicate with each other without human intervention and, being “intelligent,” will be able to come up with their own unscripted tactics for defeating an enemy — or something else entirely. Such computer-driven groupthink, labeled “emergent behavior” by computer scientists, opens up a host of dangers not yet being considered by officials in Geneva, Washington, or at the U.N.

For the time being, most of the autonomous weaponry being developed by the American military will be unmanned (or, as they sometimes say, “uninhabited”) versions of existing combat platforms and will be designed to operate in conjunction with their crewed counterparts. While they might also have some capacity to communicate with each other, they’ll be part of a “networked” combat team whose mission will be dictated and overseen by human commanders. The Collaborative Combat Aircraft, for instance, is expected to serve as a “loyal wingman” for the manned F-35 stealth fighter, while conducting high-risk missions in contested airspace. The Army and Navy have largely followed a similar trajectory in their approach to the development of autonomous weaponry.

The Appeal of Robot “Swarms”

However, some American strategists have championed an alternative approach to the use of autonomous weapons on future battlefields in which they would serve not as junior colleagues in human-led teams but as coequal members of self-directed robot swarms. Such formations would consist of scores or even hundreds of AI-enabled UAVs, USVs, or UGVs — all able to communicate with one another, share data on changing battlefield conditions, and collectively alter their combat tactics as the group-mind deems necessary.

“Emerging robotic technologies will allow tomorrow’s forces to fight as a swarm, with greater mass, coordination, intelligence and speed than today’s networked forces,” predicted Paul Scharre, an early enthusiast of the concept, in a 2014 report for the Center for a New American Security (CNAS). “Networked, cooperative autonomous systems,” he wrote then, “will be capable of true swarming — cooperative behavior among distributed elements that gives rise to a coherent, intelligent whole.”

As Scharre made clear in his prophetic report, any full realization of the swarm concept would require the development of advanced algorithms that would enable autonomous combat systems to communicate with each other and “vote” on preferred modes of attack. This, he noted, would involve creating software capable of mimicking ants, bees, wolves, and other creatures that exhibit “swarm” behavior in nature. As Scharre put it, “Just like wolves in a pack present their enemy with an ever-shifting blur of threats from all directions, uninhabited vehicles that can coordinate maneuver and attack could be significantly more effective than uncoordinated systems operating en masse.”
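
To make that “voting” idea concrete, here is a minimal Python sketch of plurality voting among simulated swarm members. It is purely illustrative: the ToyDrone class, the tactic labels, and the random stand-in “sensor” scores are all invented for this example and describe no real weapons software.

import random
from collections import Counter

# A toy, purely illustrative model of swarm "voting": each simulated drone
# scores a handful of candidate tactics from its own (here, random) local
# data, and the swarm adopts whatever a plurality of members prefers.
# The class, the tactic labels, and the numbers are invented for this sketch.

TACTICS = ["encircle", "feint_north", "attack_in_waves"]

class ToyDrone:
    def __init__(self, drone_id):
        self.drone_id = drone_id
        # Stand-in for local sensor readings; a real system would fuse actual data.
        self.local_scores = {t: random.random() for t in TACTICS}

    def preferred_tactic(self):
        # Each member votes for the tactic its own data scores highest.
        return max(self.local_scores, key=self.local_scores.get)

def swarm_decision(drones):
    # Plurality vote with no human in the loop: the most-preferred tactic wins.
    votes = Counter(d.preferred_tactic() for d in drones)
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    swarm = [ToyDrone(i) for i in range(50)]
    print("Swarm adopts:", swarm_decision(swarm))

Actual proposals like Scharre’s envision far richer machine-to-machine negotiation than a one-shot plurality vote, but the key feature is the same: the choice of tactic is made collectively by the machines rather than handed down by a human commander.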

In 2014, however, the technology needed to make such machine behavior possible was still in its infancy. To address that critical deficiency, the Department of Defense proceeded to fund research in the AI and robotics field, even as it also acquired such technology from private firms like Google and Microsoft. A key figure in that drive was Robert Work, a former colleague of Paul Scharre’s at CNAS and an early enthusiast of swarm warfare. Work served from 2014 to 2017 as deputy secretary of defense, a position that enabled him to steer ever-increasing sums of money to the development of high-tech weaponry, especially unmanned and autonomous systems.

From Mosaic to Replicator

Much of this effort was delegated to the Defense Advanced Research Projects Agency (DARPA), the Pentagon’s in-house high-tech research organization. As part of a drive to develop AI for such collaborative swarm operations, DARPA initiated its “Mosaic” program, a series of projects intended to perfect the algorithms and other technologies needed to coordinate the activities of manned and unmanned combat systems in future high-intensity combat with Russia and/or China.

“Applying the great flexibility of the mosaic concept to warfare,” explained Dan Patt, deputy director of DARPA’s Strategic Technology Office, “lower-cost, less complex systems may be linked together in a vast number of ways to create desired, interwoven effects tailored to any scenario. The individual parts of a mosaic are attritable [dispensable], but together are invaluable for how they contribute to the whole.”

This concept of warfare apparently undergirds the new “Replicator” strategy announced by Deputy Secretary of Defense Kathleen Hicks just last summer. “Replicator is meant to help us overcome [China’s] biggest advantage, which is mass. More ships. More missiles. More people,” she told arms industry officials last August. By deploying thousands of autonomous UAVs, USVs, UUVs, and UGVs, she suggested, the U.S. military would be able to outwit, outmaneuver, and overpower China’s military, the People’s Liberation Army (PLA). “To stay ahead, we’re going to create a new state of the art… We’ll counter the PLA’s mass with mass of our own, but ours will be harder to plan for, harder to hit, harder to beat.”

To obtain both the hardware and software needed to implement such an ambitious program, the Department of Defense is now seeking proposals from traditional defense contractors like Boeing and Raytheon as well as AI startups like Anduril and Shield AI. While large-scale devices like the Air Force’s Collaborative Combat Aircraft and the Navy’s Orca Extra-Large UUV may be included in this drive, the emphasis is on the rapid production of smaller, less complex systems like AeroVironment’s Switchblade attack drone, now used by Ukrainian troops to take out Russian tanks and armored vehicles behind enemy lines.

At the same time, the Pentagon is already calling on tech startups to develop the necessary software to facilitate communication and coordination among such disparate robotic units and their associated manned platforms. To that end, the Air Force asked Congress for $50 million in its fiscal year 2024 budget to underwrite what it ominously enough calls Project VENOM, or “Viper Experimentation and Next-generation Operations Model.” Under VENOM, the Air Force will convert existing fighter aircraft into AI-governed UAVs and use them to test advanced autonomous software in multi-drone operations. The Army and Navy are testing similar systems.

When Swarms Choose Their Own Path

In other words, it’s only a matter of time before the U.S. military (and presumably China’s, Russia’s, and perhaps those of a few other powers) is able to deploy swarms of autonomous weapons systems equipped with algorithms that allow them to communicate with each other and jointly choose novel, unpredictable combat maneuvers while in motion. Any participating robotic member of such a swarm would be given a mission objective (“seek out and destroy all enemy radars and anti-aircraft missile batteries located within these [specified] geographical coordinates”) but no precise instructions on how to achieve it, leaving the machines to select their own battle tactics in consultation with one another. If the limited test data we have is anything to go by, this could mean employing highly unconventional tactics never conceived for (and impossible to replicate by) human pilots and commanders.

The propensity for such interconnected AI systems to engage in novel, unplanned outcomes is what computer experts call “emergent behavior.” As ScienceDirect, a digest of scientific journals, explains it, “An emergent behavior can be described as a process whereby larger patterns arise through interactions among smaller or simpler entities that themselves do not exhibit such properties.” In military terms, this means that a swarm of autonomous weapons might jointly elect to adopt combat tactics none of the individual devices were programmed to perform — possibly achieving astounding results on the battlefield, but also conceivably engaging in escalatory acts unintended and unforeseen by their human commanders, including the destruction of critical civilian infrastructure or communications facilities used for nuclear as well as conventional operations.
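
For readers who want to see the idea in miniature, here is a small Python sketch of emergent behavior under stated assumptions: thirty simulated agents on a line each follow a single local rule (drift toward the average position of their neighbors), and clustering appears only at the level of the group. All names and parameters are invented for the illustration.

import random

# A toy illustration of emergent behavior: every agent follows one simple
# local rule (drift toward the average position of nearby agents), yet the
# group self-organizes into clusters, a pattern no single rule specifies.
# All parameters are arbitrary and chosen only to make the effect visible.

NUM_AGENTS = 30
NEIGHBOR_RADIUS = 10.0   # how far an agent "sees" along a 1-D line
STEP = 0.2               # fraction of the gap closed each tick

positions = [random.uniform(0, 100) for _ in range(NUM_AGENTS)]

def step(positions):
    new_positions = []
    for p in positions:
        neighbors = [q for q in positions if abs(q - p) <= NEIGHBOR_RADIUS]
        center = sum(neighbors) / len(neighbors)   # always includes the agent itself
        new_positions.append(p + STEP * (center - p))
    return new_positions

if __name__ == "__main__":
    for _ in range(200):
        positions = step(positions)
    ordered = sorted(positions)
    # A gap wider than the neighbor radius separates one cluster from the next.
    clusters = 1 + sum(1 for a, b in zip(ordered, ordered[1:]) if b - a > NEIGHBOR_RADIUS)
    print(f"{NUM_AGENTS} agents self-organized into {clusters} cluster(s)")

Nothing in any single agent’s rule mentions clusters; the pattern exists only at the group level, which is the sense in which a weapons swarm could adopt tactics none of its members was individually programmed to perform.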

At this point, of course, it’s almost impossible to predict what an alien group-mind might choose to do if armed with multiple weapons and cut off from human oversight. Supposedly, such systems would be outfitted with failsafe mechanisms requiring that they return to base if communications with their human supervisors were lost, whether due to enemy jamming or for any other reason. Who knows, however, how such thinking machines would function in demanding real-world conditions or if, in fact, the group-mind would prove capable of overriding such directives and striking out on its own.
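
For a sense of what such a failsafe directive might look like in the simplest possible terms, here is a hypothetical Python sketch: a lost-link check that overrides the planned action and orders a return to base once the supervisor’s heartbeat goes silent. The class name, the timeout, and the assumption that the override cannot itself be bypassed are all inventions for this illustration, not features of any fielded system.

import time

# A minimal sketch of the kind of "lost-link" failsafe described above:
# if no heartbeat arrives from the human supervisor within a timeout, the
# vehicle is ordered home. Everything here (names, timeout, the idea that
# the directive cannot be overridden) is a simplifying assumption.

LOST_LINK_TIMEOUT = 30.0  # seconds without contact before the failsafe trips

class LostLinkFailsafe:
    def __init__(self):
        self.last_contact = time.monotonic()

    def heartbeat(self):
        # Called whenever a message from the human supervisor is received.
        self.last_contact = time.monotonic()

    def next_action(self, planned_action):
        # If the link has been silent too long, override whatever the
        # autonomy stack wanted to do and return to base instead.
        if time.monotonic() - self.last_contact > LOST_LINK_TIMEOUT:
            return "RETURN_TO_BASE"
        return planned_action

if __name__ == "__main__":
    failsafe = LostLinkFailsafe()
    print(failsafe.next_action("CONTINUE_MISSION"))  # link still fresh, so the plan stands

The author’s point is precisely that a sufficiently capable group-mind might find a way around such a check, which is why the hard-coded override above is an assumption, not a guarantee.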

What then? Might they choose to keep fighting beyond their preprogrammed limits, provoking unintended escalation — even, conceivably, of a nuclear kind? Or would they choose to stop their attacks on enemy forces and instead interfere with the operations of friendly ones, perhaps firing on and devastating them (as Skynet does in the classic science fiction Terminator movie series)? Or might they engage in behaviors that, for better or infinitely worse, are entirely beyond our imagination?

Top U.S. military and diplomatic officials insist that AI can indeed be used without incurring such future risks and that this country will only employ devices that incorporate thoroughly adequate safeguards against any future dangerous misbehavior. That is, in fact, the essential point made in the “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy” issued by the State Department in February 2023. Many prominent security and technology officials are, however, all too aware of the potential risks of emergent behavior in future robotic weaponry and continue to issue warnings against the rapid utilization of AI in warfare.

Of particular note is the final report that the National Security Commission on Artificial Intelligence issued in February 2021. Co-chaired by Robert Work (back at CNAS after his stint at the Pentagon) and Eric Schmidt, former CEO of Google, the commission recommended the rapid utilization of AI by the U.S. military to ensure victory in any future conflict with China and/or Russia. However, it also voiced concern about the potential dangers of robot-saturated battlefields.

“The unchecked global use of such systems potentially risks unintended conflict escalation and crisis instability,” the report noted. This could occur for a number of reasons, including “because of challenging and untested complexities of interaction between AI-enabled and autonomous weapon systems [that is, emergent behaviors] on the battlefield.” Given that danger, it concluded, “countries must take actions which focus on reducing risks associated with AI-enabled and autonomous weapon systems.”

When the leading advocates of autonomous weaponry tell us to be concerned about the unintended dangers posed by their use in battle, the rest of us should be worried indeed. Even if we lack the mathematical skills to understand emergent behavior in AI, it should be obvious that humanity could face a significant risk to its existence, should killing machines acquire the ability to think on their own. Perhaps they would surprise everyone and decide to take on the role of international peacekeepers, but given that they’re being designed to fight and kill, it’s far more probable that they would simply choose to carry out those instructions in an independent and extreme fashion.

If so, there could be no one around to put an R.I.P. on humanity’s gravestone.

Follow TomDispatch on Twitter and join us on Facebook. Check out the newest Dispatch Books, John Feffer’s new dystopian novel, Songlands (the final one in his Splinterlands series), Beverly Gologorsky’s novel Every Body Has a Story, and Tom Engelhardt’s A Nation Unmade by War, as well as Alfred McCoy’s In the Shadows of the American Century: The Rise and Decline of U.S. Global Power, John Dower’s The Violent American Century: War and Terror Since World War II, and Ann Jones’s They Were Soldiers: How the Wounded Return from America’s Wars: The Untold Story.

Michael T. Klare, a TomDispatch regular, is the five-college professor emeritus of peace and world security studies at Hampshire College and a senior visiting fellow at the Arms Control Association. He is the author of 15 books, the latest of which is All Hell Breaking Loose: The Pentagon’s Perspective on Climate Change.

Copyright 2024 Michael Klare