Showing posts sorted by relevance for query: AUTONOMOUS WEAPONS

Tuesday, January 04, 2022

Humanity's Final Arms Race: UN Fails to Agree on 'Killer Robot' Ban

The world should not repeat the catastrophic mistakes of the nuclear arms race. It should not sleepwalk into dystopia.

A robot distributes promotional literature calling for a ban on fully autonomous weapons in Parliament Square on April 23, 2013 in London, England. The 'Campaign to Stop Killer Robots' is calling for a pre-emptive ban on lethal robot weapons that could attack targets without human intervention. (Photo: Oli Scarff/Getty Images)


JAMES DAWES

December 30, 2021
 by The Conversation

Autonomous weapon systems—commonly known as killer robots—may have killed human beings for the first time ever last year, according to a recent United Nations Security Council report on the Libyan civil war. History could well identify this as the starting point of the next major arms race, one that has the potential to be humanity's final one.

The United Nations Convention on Certain Conventional Weapons debated the question of banning autonomous weapons at its once-every-five-years review meeting in Geneva Dec. 13-17, 2021, but didn't reach consensus on a ban. Established in 1983, the convention has been updated regularly to restrict some of the world's cruelest conventional weapons, including land mines, booby traps and incendiary weapons.

Autonomous weapon systems are robots with lethal weapons that can operate independently, selecting and attacking targets without a human weighing in on those decisions. Militaries around the world are investing heavily in autonomous weapons research and development. The U.S. alone budgeted US$18 billion for autonomous weapons between 2016 and 2020.

Meanwhile, human rights and humanitarian organizations are racing to establish regulations and prohibitions on such weapons development. Without such checks, foreign policy experts warn that disruptive autonomous weapons technologies will dangerously destabilize current nuclear strategies, both because they could radically change perceptions of strategic dominance, increasing the risk of preemptive attacks, and because they could be combined with chemical, biological, radiological and nuclear weapons themselves.

As a specialist in human rights with a focus on the weaponization of artificial intelligence, I find that autonomous weapons make the unsteady balances and fragmented safeguards of the nuclear world—for example, the U.S. president's minimally constrained authority to launch a strike—more unsteady and more fragmented. Given the pace of research and development in autonomous weapons, the U.N. meeting might have been the last chance to head off an arms race.
Lethal errors and black boxes

I see four primary dangers with autonomous weapons. The first is the problem of misidentification. When selecting a target, will autonomous weapons be able to distinguish between hostile soldiers and 12-year-olds playing with toy guns? Between civilians fleeing a conflict site and insurgents making a tactical retreat?


The problem here is not that machines will make such errors and humans won't. It's that the difference between human error and algorithmic error is like the difference between mailing a letter and tweeting. The scale, scope and speed of killer robot systems—ruled by one targeting algorithm, deployed across an entire continent—could make misidentifications by individual humans like a recent U.S. drone strike in Afghanistan seem like mere rounding errors by comparison.

Autonomous weapons expert Paul Scharre uses the metaphor of the runaway gun to explain the difference. A runaway gun is a defective machine gun that continues to fire after a trigger is released. The gun continues to fire until ammunition is depleted because, so to speak, the gun does not know it is making an error. Runaway guns are extremely dangerous, but fortunately they have human operators who can break the ammunition link or try to point the weapon in a safe direction. Autonomous weapons, by definition, have no such safeguard.

Importantly, weaponized AI need not even be defective to produce the runaway gun effect. As multiple studies on algorithmic errors across industries have shown, the very best algorithms—operating as designed—can generate internally correct outcomes that nonetheless spread terrible errors rapidly across populations.

For example, a neural net designed for use in Pittsburgh hospitals identified asthma as a risk-reducer in pneumonia cases; image recognition software used by Google identified Black people as gorillas; and a machine-learning tool used by Amazon to rank job candidates systematically assigned negative scores to women.
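
To make that mechanism concrete, here is a toy simulation in Python with scikit-learn (it is not the original Pittsburgh study's data or code, and every number is invented for illustration). The widely reported explanation of the pneumonia finding is confounding: asthma patients were routed to more aggressive care, that care lowered their observed mortality, and a model trained without the care variable "correctly" learned that asthma predicts lower risk.

    # Toy illustration only: invented numbers, not the original hospital data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 50_000

    asthma = rng.binomial(1, 0.2, n)  # 1 = patient has asthma
    # Hidden confounder: asthma patients are far more likely to receive intensive care.
    intensive_care = rng.binomial(1, np.where(asthma == 1, 0.95, 0.30))
    # "True" risk: asthma slightly raises mortality, intensive care strongly lowers it.
    p_death = 0.12 + 0.03 * asthma - 0.10 * intensive_care
    death = rng.binomial(1, p_death)

    # The model only sees asthma status; the care variable is missing from its data.
    model = LogisticRegression().fit(asthma.reshape(-1, 1), death)
    print("learned asthma coefficient:", round(float(model.coef_[0][0]), 3))
    # Negative coefficient: asthma looks protective, because the data really do show
    # lower mortality for asthmatics -- the model is "right" and still dangerous.

Nothing in this sketch is defective; the model faithfully reports what the confounded data show, which is the sense in which an algorithm "operating as designed" can still be systematically and dangerously wrong.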

The problem is not just that when AI systems err, they err in bulk. It is that when they err, their makers often don't know why they did and, therefore, how to correct them. The black box problem of AI makes it almost impossible to imagine morally responsible development of autonomous weapons systems.
The proliferation problems

The next two dangers are the problems of low-end and high-end proliferation. Let's start with the low end. The militaries developing autonomous weapons now are proceeding on the assumption that they will be able to contain and control the use of autonomous weapons. But if the history of weapons technology has taught the world anything, it's this: Weapons spread.

Market pressures could result in the creation and widespread sale of what can be thought of as the autonomous weapon equivalent of the Kalashnikov assault rifle: killer robots that are cheap, effective and almost impossible to contain as they circulate around the globe. "Kalashnikov" autonomous weapons could get into the hands of people outside of government control, including international and domestic terrorists.

The Kargu-2, made by a Turkish defense contractor, is a cross between a quadcopter drone and a bomb. It has artificial intelligence for finding and tracking targets, and might have been used autonomously in the Libyan civil war to attack people. 
Ministry of Defense of Ukraine, CC BY

High-end proliferation is just as bad, however. Nations could compete to develop increasingly devastating versions of autonomous weapons, including ones capable of mounting chemical, biological, radiological and nuclear arms. The moral dangers of escalating weapon lethality would be amplified by escalating weapon use.

High-end autonomous weapons are likely to lead to more frequent wars because they will decrease two of the primary forces that have historically prevented and shortened wars: concern for civilians abroad and concern for one's own soldiers. The weapons are likely to be equipped with expensive ethical governors designed to minimize collateral damage, using what U.N. Special Rapporteur Agnes Callamard has called the "myth of a surgical strike" to quell moral protests. Autonomous weapons will also reduce both the need for and risk to one's own soldiers, dramatically altering the cost-benefit analysis that nations undergo while launching and maintaining wars.

Asymmetric wars—that is, wars waged on the soil of nations that lack competing technology—are likely to become more common. Think about the global instability caused by Soviet and U.S. military interventions during the Cold War, from the first proxy war to the blowback experienced around the world today. Multiply that by every country currently aiming for high-end autonomous weapons.
Undermining the laws of war

Finally, autonomous weapons will undermine humanity's final stopgap against war crimes and atrocities: the international laws of war. These laws, codified in treaties reaching as far back as the 1864 Geneva Convention, are the international thin blue line separating war with honor from massacre. They are premised on the idea that people can be held accountable for their actions even during wartime, that the right to kill other soldiers during combat does not give the right to murder civilians. A prominent example of someone held to account is Slobodan Milosevic, former president of the Federal Republic of Yugoslavia, who was indicted on charges of crimes against humanity and war crimes by the U.N.'s International Criminal Tribunal for the Former Yugoslavia.

But how can autonomous weapons be held accountable? Who is to blame for a robot that commits war crimes? Who would be put on trial? The weapon? The soldier? The soldier's commanders? The corporation that made the weapon? Nongovernmental organizations and experts in international law worry that autonomous weapons will lead to a serious accountability gap.

To hold a soldier criminally responsible for deploying an autonomous weapon that commits war crimes, prosecutors would need to prove both actus reus and mens rea, Latin terms describing a guilty act and a guilty mind. This would be difficult as a matter of law, and possibly unjust as a matter of morality, given that autonomous weapons are inherently unpredictable. I believe the distance separating the soldier from the independent decisions made by autonomous weapons in rapidly evolving environments is simply too great.

The legal and moral challenge is not made easier by shifting the blame up the chain of command or back to the site of production. In a world without regulations that mandate meaningful human control of autonomous weapons, there will be war crimes with no war criminals to hold accountable. The structure of the laws of war, along with their deterrent value, will be significantly weakened.
A new global arms race

Imagine a world in which militaries, insurgent groups and international and domestic terrorists can deploy theoretically unlimited lethal force at theoretically zero risk at times and places of their choosing, with no resulting legal accountability. It is a world where the sort of unavoidable algorithmic errors that plague even tech giants like Amazon and Google can now lead to the elimination of whole cities.

In my view, the world should not repeat the catastrophic mistakes of the nuclear arms race. It should not sleepwalk into dystopia.

This is an updated version of an article originally published on September 29, 2021.
This work is licensed under a Creative Commons Attribution 4.0 International License



JAMES DAWES

James Dawes conducts research in human rights. He is the author of The Novel of Human Rights (Harvard University Press, 2018); Evil Men (Harvard University Press, 2013), winner of the International Human Rights Book Award; That the World May Know: Bearing Witness to Atrocity (Harvard University Press, 2007), Independent Publisher Book Award Finalist; and The Language of War (Harvard University Press, 2002).

Wednesday, August 18, 2021

Lethal autonomous weapons and World War III: it’s not too late to stop the rise of ‘killer robots’


The STM Kargu attack drone. STM


August 11, 2021 10.12pm EDT

Last year, according to a United Nations report published in March, Libyan government forces hunted down rebel forces using “lethal autonomous weapons systems” that were “programmed to attack targets without requiring data connectivity between the operator and the munition”. The deadly drones were Turkish-made quadcopters about the size of a dinner plate, capable of delivering a warhead weighing a kilogram or so.

Artificial intelligence researchers like me have been warning of the advent of such lethal autonomous weapons systems, which can make life-or-death decisions without human intervention, for years. A recent episode of 4 Corners reviewed this and many other risks posed by developments in AI.

Around 50 countries are meeting at the UN offices in Geneva this week in the latest attempt to hammer out a treaty to prevent the proliferation of these killer devices. History shows such treaties are needed, and that they can work.

The lesson of nuclear weapons

Scientists are pretty good at warning of the dangers facing the planet. Unfortunately, society is less good at paying attention.

In August 1945, the United States dropped atomic bombs on the Japanese cities of Hiroshima and Nagasaki, killing up to 200,000 civilians. Japan surrendered days later. The second world war was over, and the Cold War began.

Read more: World politics explainer: The atomic bombings of Hiroshima and Nagasaki

The world still lives today under the threat of nuclear destruction. On a dozen or so occasions since then, we have come within minutes of all-out nuclear war.

Well before the first test of a nuclear bomb, many scientists working on the Manhattan Project were concerned about such a future. A secret petition was sent to President Harry S. Truman in July 1945. It accurately predicted the future:

The development of atomic power will provide the nations with new means of destruction. The atomic bombs at our disposal represent only the first step in this direction, and there is almost no limit to the destructive power which will become available in the course of their future development. Thus a nation which sets the precedent of using these newly liberated forces of nature for purposes of destruction may have to bear the responsibility of opening the door to an era of devastation on an unimaginable scale.

If after this war a situation is allowed to develop in the world which permits rival powers to be in uncontrolled possession of these new means of destruction, the cities of the United States as well as the cities of other nations will be in continuous danger of sudden annihilation. All the resources of the United States, moral and material, may have to be mobilized to prevent the advent of such a world situation …

Billions of dollars have since been spent on nuclear arsenals that maintain the threat of mutually assured destruction, the “continuous danger of sudden annihilation” that the physicists warned about in July 1945.

A warning to the world


Six years ago, thousands of my colleagues issued a similar warning about a new threat. Only this time, the petition wasn’t secret. The world wasn’t at war. And the technologies weren’t being developed in secret. Nevertheless, they pose a similar threat to global stability.

Read more: Open letter: we must stop killer robots before they are built

The threat comes this time from artificial intelligence, and in particular the development of lethal autonomous weapons: weapons that can identify, track and destroy targets without human intervention. The media often like to call them “killer robots”.

Our open letter to the UN carried a stark warning.



The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable. The endpoint of such a technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.

Read more: World's deadliest inventor: Mikhail Kalashnikov and his AK-47

Strategically, autonomous weapons are a military dream. They let a military scale its operations unhindered by manpower constraints. One programmer can command hundreds of autonomous weapons. An army can take on the riskiest of missions without endangering its own soldiers.
Nightmare swarms

There are many reasons, however, why the military’s dream of lethal autonomous weapons will turn into a nightmare. First and foremost, there is a strong moral argument against killer robots. We give up an essential part of our humanity if we hand to a machine the decision of whether a person should live or die.

Beyond the moral arguments, there are many technical and legal reasons to be concerned about killer robots. One of the strongest is that they will revolutionise warfare. Autonomous weapons will be weapons of immense destruction.

Previously, if you wanted to do harm, you had to have an army of soldiers to wage war. You had to persuade this army to follow your orders. You had to train them, feed them and pay them. Now just one programmer could control hundreds of weapons.


Organised swarms of drones can produce dazzling lightshows - but similar technology could make a cheap and devastating weapon.
Yomiuri Shimbun / AP

In some ways lethal autonomous weapons are even more troubling than nuclear weapons. To build a nuclear bomb requires considerable technical sophistication. You need the resources of a nation state, skilled physicists and engineers, and access to scarce raw materials such as uranium and plutonium. As a result, nuclear weapons have not proliferated greatly.

Autonomous weapons require none of this, and if produced they will likely become cheap and plentiful. They will be perfect weapons of terror.

Can you imagine how terrifying it will be to be chased by a swarm of autonomous drones? Can you imagine such drones in the hands of terrorists and rogue states with no qualms about turning them on civilians? They will be an ideal weapon with which to suppress a civilian population. Unlike humans, they will not hesitate to commit atrocities, even genocide.
Time for a treaty

We stand at a crossroads on this issue. It needs to be seen as morally unacceptable for machines to decide who lives and who dies. And for the diplomats at the UN to negotiate a treaty limiting their use, just as we have treaties to limit chemical, biological and other weapons. In this way, we may be able to save ourselves and our children from this terrible future.

Author
Toby Walsh

Professor of AI at UNSW, Research Group Leader, UNSW
Disclosure statement

Toby Walsh is a Laureate Fellow and Scientia Professor of Artificial Intelligence at the University of New South Wales in Sydney, Australia. He is a Fellow of the Australian Academy of Science and author of the recent book, “2062: The World that AI Made” that explores the impact AI will have on society, including the impact on war.

Friday, June 05, 2020





FRANKEN SCIENCE

Genius Weapons: Artificial Intelligence, Autonomous Weaponry, and the Future of Warfare
by Louis A. Del Monte (Kindle Edition): https://tinyurl.com/y9hbar9d

A technology expert describes the ever-increasing role of artificial intelligence in weapons development, the ethical dilemmas these weapons pose, and the potential threat to humanity.

Artificial intelligence is playing an ever-increasing role in military weapon systems. Going beyond the bomb-carrying drones used in the Afghan war, the Pentagon is now in a race with China and Russia to develop "lethal autonomous weapon systems" (LAWS). In this eye-opening overview, a physicist, technology expert, and former Honeywell executive examines the advantages and the potential threats to humanity resulting from the deployment of completely autonomous weapon systems. Stressing the likelihood that these weapons will be available in the coming decades, the author raises key questions about how the world will be impacted. Though using robotic systems might lessen military casualties in a conflict, one major concern is: Should we allow machines to make life-and-death decisions in battle? Other areas of concern include the following: Who would be accountable for the actions of completely autonomous weapons--the programmer, the machine itself, or the country that deploys LAWS? When warfare becomes just a matter of technology, will war become more probable, edging humanity closer to annihilation? What if AI technology reaches a "singularity level" so that our weapons are controlled by an intelligence exceeding human intelligence?

Using vivid scenarios that immerse the reader in the ethical dilemmas and existential threats posed by lethal autonomous weapon systems, the book reveals that the dystopian visions of such movies as The Terminator and I, Robot may become a frightening reality in the near future. The author concludes with concrete recommendations, founded in historical precedent, to control this new arms race.

Review

""A highly readable and deeply researched exploration of one of the most chilling aspects of the development of artificial intelligence: the creation of intelligent, autonomous killing machines. In Louis A. Del Monte’s view, the multibillion dollar arms industry and longstanding rivalries among nations make the creation of autonomous weapons extremely likely. We must resist the allure of genius weapons, Del Monte argues, because they will almost inevitably lead to our extinction.”
―James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era

“For the second time in history, humanity is on the verge of creating weapons that might wipe us out entirely. Will we have the wisdom not to use them? No one can say, but this book will give you the facts you need to think about the issue intelligently.”
―J. Storrs Hall, author of Beyond AI: Creating the Conscience of the Machine

“This thought-provoking read provides insight into future world weapons scenarios we may face as technology rapidly advances. A call to arms for humanity, this book details the risks if we do not safeguard technology and weapon development.”
―Carl Hedberg, semiconductor sales and manufacturing management for thirty-seven years at Honeywell, Inc.


“In Genius Weapons, Del Monte provides a thorough and well-researched review of the history and development of ‘smart weapons’ and artificial intelligence. Then, using his background and imagination, he paints a very frightening picture of our possible future as these technologies converge. He challenges our understanding of warfare and outlines the surprising threat to mankind that may transpire in the not-too-distant future.”
―Anthony Hickl, PhD in materials science and former director of Project and Portfolio Management for New Products at Cargill

"We are already living the next war, which is increasingly being fought with the developing weapons that Del Monte writes about so engagingly.”
―Istvan Hargittai, Budapest University of Technology and Economics, author of Judging Edward Teller

“Del Monte explores a fascinating topic, which is of great importance to the world as the implementation of AI continues to grow. He points out the many applications of AI that can help humanity improve nearly every aspect of our lives―in medicine, business, finance, marketing, manufacturing, education, etc. He also raises important ethical concerns that arise with the use of AI in weapons systems and warfare. The book examines the difficulty of controlling these systems as they become more and more intelligent, someday becoming smarter than humans.”
―Edward Albers, retired semiconductor executive at Honeywell Analytics

About the Author

Louis A. Del Monte is an award-winning physicist, inventor, futurist, featured speaker, CEO of Del Monte and Associates, Inc., and high profile media personality. For over thirty years, he was a leader in the development of microelectronics and microelectromechanical systems (MEMS) for IBM and Honeywell. His patents and technology developments, currently used by Honeywell, IBM, and Samsung, are fundamental to the fabrication of integrated circuits and sensors. As a Honeywell Executive Director from 1982 to 2001, he led hundreds of physicists, engineers, and technology professionals engaged in integrated circuit and sensor technology development for both Department of Defense (DOD) and commercial applications. He is literally a man whose career has changed the way we work, play, and make war. Del Monte is the recipient of the H.W. Sweatt Award for scientific engineering achievement and the Lund Award for management excellence. He is the author of Nanoweapons, The Artificial Intelligence Revolution, How to Time Travel, and Unravelling the Universe's Mysteries. He has been quoted or has published articles in the Huffington Post, the Atlantic, Business Insider, American Security Today, Inc., and on CNBC.

Excerpt. © Reprinted by permission. All rights reserved.


Introduction

This book describes the ever-increasing role of artificial intelligence (AI) in warfare. Specifically, we will examine autonomous weapons, which will dominate the earlier portion of the twenty-first-century battlefield. Next, we will examine genius weapons, which will dominate the latter portion of the twenty-first-century battlefield. In both cases, we will discuss the ethical dilemmas these weapons pose and their potential threat to humanity.

Mention autonomous weapons and many will conjure images of Terminator robots and US Air Force drones. Although Terminator robots are still a fantasy, drones with autopilot capabilities are realities. However, for the present at least, it still requires a human to decide when a drone makes a kill. In other words, the drone is not autonomous. To be perfectly clear, the US Department of Defense defines an autonomous weapon system as “a weapon system(s) that, once activated, can select and engage targets without further intervention by a human operator.” These weapons are often termed, in military jargon, “fire and forget.”

In addition to the United States, nations like China and Russia are investing heavily in autonomous weapons. For example, Russia is fielding autonomous weapons to guard its ICBM bases. In 2014, according to Deputy Prime Minister Dmitry Rogozin, Russia intends to field “robotic systems that are fully integrated in the command and control system, capable not only to gather intelligence and to receive from the other components of the combat system, but also on their own strike.”

In 2015, Deputy Secretary of Defense Robert Work reported this grim reality during a national defense forum hosted by the Center for a New American Security. According to Work, “We know that China is already investing heavily in robotics and autonomy and the Russian Chief of General Staff [Valery Vasilevich] Gerasimov recently said that the Russian military is preparing to fight on a roboticized battlefield.” In fact, Work quoted Gerasimov as saying, “In the near future, it is possible that a complete roboticized unit will be created capable of independently conducting military operations.”

You may ask: What is the driving force behind autonomous weapons? There are two forces driving these weapons:

1. Technology: AI technology, which provides the intelligence of autonomous weapon systems (AWS), is advancing exponentially. Experts in AI predict autonomous weapons, which would select and engage targets without human intervention, will debut within years, not decades. Indeed, a limited number of autonomous weapons already exist. For now, they are the exception. In the future, they will dominate conflict.

2. Humanity: In 2016, the World Economic Forum (WEF) attendees were asked, “If your country was suddenly at war, would you rather be defended by the sons and daughters of your community, or an autonomous AI weapons system?” The majority, 55 percent, responded that they would prefer artificially intelligent (AI) soldiers. This result suggests a worldwide desire to have robots, sometimes referred to as “killer robots,” fight wars, rather than risking human lives.

The use of AI technology in warfare is not new. The first large-scale use of “smart bombs” by the United States during Operation Desert Storm in 1991 made it apparent that AI had the potential to change the nature of war. The word “smart” in this context means “artificially intelligent.” The world watched in awe as the United States demonstrated the surgical precision of smart bombs, which neutralized military targets and minimized collateral damage. In general, using autonomous weapon systems in conflict offers highly attractive advantages:

• Economic: Reducing costs and personnel.

• Operational: Increasing the speed of decision-making, reducing dependence on communications, reducing human errors.

• Security: Replacing or assisting humans in harm’s way.

• Humanitarian: Programming killer robots to respect the international humanitarian laws of war better than humans.

Even with these advantages, there are significant downsides. For example, when warfare becomes just a matter of technology, will it make engaging in war more attractive? No commanding officer has to write a letter to the mothers and fathers, wives and husbands, of a drone lost in battle. Politically, it is more palatable to report equipment losses than human casualties. In addition, a country with superior killer robots has both a military advantage and a psychological advantage. To understand this, let us examine the second question posed to attendees of the 2016 World Economic Forum: “If your country was suddenly at war, would you rather be invaded by the sons and daughters of your enemy, or an autonomous AI weapon system?” A significant majority, 66 percent, responded with a preference for human soldiers.

In May 2014, a Meeting of Experts on Lethal Autonomous Weapons Systems was held at the United Nations in Geneva to discuss the ethical dilemmas such weapon systems pose, such as:

• Can sophisticated computers replicate the human intuitive moral decision-making capacity?

• Is human intuitive moral perceptiveness ethically desirable? If the answer is yes, then the legitimate exercise of deadly force should always require human control.

• Who is responsible for the actions of a lethal autonomous weapon system? If the machine is following a programmed algorithm, is the programmer responsible? If the machine is able to learn and adapt, is the machine responsible? Is the operator or country that deploys LAWS (i.e., lethal autonomous weapon systems) responsible?

In general, there is a worldwide growing concern with regard to taking humans “out of the loop” in the use of legitimate lethal force.

Concurrently, though, AI technology continues its relentless exponential advancement. AI researchers predict there is a 50 percent probability that AI will equal human intelligence in the 2040 to 2050 timeframe. Those same experts predict that AI will greatly exceed the cognitive performance of humans in virtually all domains of interest as early as 2070, which is termed the “singularity.” Here are three important terms we will use in this book:

1. We can term a computer at the point of and after the singularity as “superintelligence,” as is common in the field of AI.

2. When referring to the class of computers with this level of AI, we will use the term “superintelligences.”

3. In addition, we can term weapons controlled by superintelligence as “genius weapons.”

Following the singularity, humanity will face superintelligences, computers that greatly exceed the cognitive performance of humans in virtually all domains of interest. This raises a question: How will superintelligences view humanity? Obviously, our history suggests we engage in devastating wars and release malicious computer viruses, both of which could adversely affect these machines. Will superintelligences view humanity as a threat to their existence? If the answer is yes, this raises another question: Should we give such machines military capabilities (i.e., create genius weapons) that they could potentially use against us?

A cursory view of AI suggests it is yielding numerous benefits. In fact, most of humanity perceives only the positive aspects of AI technology, like automotive navigation systems, Xbox games, and heart pacemakers. Mesmerized by AI technology, they fail to see the dark side. Nonetheless, there is a dark side. For example, the US military is deploying AI into almost every aspect of warfare, from Air Force drones to Navy torpedoes.

Humanity acquired the ability to destroy itself with the invention of the atom bomb. During the Cold War, the world lived in perpetual fear that the United States and the Union of Soviet Socialist Republics would engulf the world in a nuclear conflict. Although we came dangerously close to both intentional and unintentional nuclear holocaust on numerous occasions, the doctrine of “mutually assured destruction” (MAD) and human judgment kept the nuclear genie in the bottle. If we arm superintelligences with genius weapons, will they be able to replicate human judgment?

In 2008, experts surveyed at the Global Catastrophic Risk Conference at the University of Oxford suggested a 19 percent chance of human extinction by the end of this century, citing the top four most probable causes:

1. Molecular nanotechnology weapons: 5 percent probability

2. Superintelligent AI: 5 percent probability

3. Wars: 4 percent probability

4. Engineered pandemic: 2 percent probability

Currently, the United States, Russia, and China are relentlessly developing and deploying AI in lethal weapon systems. If we consider the Oxford assessment, this suggests that humanity is combining three of the four elements necessary to edge us closer to extinction.

This book will explore the science of AI, its applications in warfare, and the ethical dilemmas those applications pose. In addition, it will address the most important question facing humanity: Will it be possible to continually increase the AI capabilities of weapons without risking human extinction, especially as we move from smart weapons to genius weapons?

Monday, December 20, 2021

Killer Robots Aren’t Science Fiction. A Push to Ban Them Is Growing.

A U.N. conference made little headway this week on limiting development and use of killer robots, prompting stepped-up calls to outlaw such weapons with a new treaty.


A combat robotic vehicle at the White Sands Missile Range in New Mexico in 2008.
Credit: Defense Advanced Research Projects Agency/Carnegie Mellon, via Associated Press

By Adam Satariano, Nick Cumming-Bruce and Rick Gladstone
Dec. 17, 2021

It may have seemed like an obscure United Nations conclave, but a meeting this week in Geneva was followed intently by experts in artificial intelligence, military strategy, disarmament and humanitarian law.

The reason for the interest? Killer robots — drones, guns and bombs that decide on their own, with artificial brains, whether to attack and kill — and what should be done, if anything, to regulate or ban them.

Once the domain of science fiction films like the “Terminator” series and “RoboCop,” killer robots, more technically known as Lethal Autonomous Weapons Systems, have been invented and tested at an accelerated pace with little oversight. Some prototypes have even been used in actual conflicts.

The evolution of these machines is considered a potentially seismic event in warfare, akin to the invention of gunpowder and nuclear bombs.

This year, for the first time, a majority of the 125 nations that belong to an agreement called the Convention on Certain Conventional Weapons, or C.C.W., said they wanted curbs on killer robots. But they were opposed by members that are developing these weapons, most notably the United States and Russia.

The group’s conference concluded on Friday with only a vague statement about considering possible measures acceptable to all. The Campaign to Stop Killer Robots, a disarmament group, said the outcome fell “drastically short.”
What is the Convention on Certain Conventional Weapons?

The C.C.W., sometimes known as the Inhumane Weapons Convention, is a framework of rules that ban or restrict weapons considered to cause unnecessary, unjustifiable and indiscriminate suffering, such as incendiary explosives, blinding lasers and booby traps that don’t distinguish between fighters and civilians. The convention has no provisions for killer robots.
The Convention on Certain Conventional Weapons meeting in Geneva on Friday.
Credit: Fabrice Coffrini/Agence France-Presse — Getty Images

What exactly are killer robots?


Opinions differ on an exact definition, but they are widely considered to be weapons that make decisions with little or no human involvement. Rapid improvements in robotics, artificial intelligence and image recognition are making such armaments possible.

The drones the United States has used extensively in Afghanistan, Iraq and elsewhere are not considered robots because they are operated remotely by people, who choose targets and decide whether to shoot.

Why are they considered attractive?


To war planners, the weapons offer the promise of keeping soldiers out of harm’s way, and making faster decisions than a human would, by giving more battlefield responsibilities to autonomous systems like pilotless drones and driverless tanks that independently decide when to strike.

What are the objections?


Critics argue it is morally repugnant to assign lethal decision-making to machines, regardless of technological sophistication. How does a machine differentiate an adult from a child, a fighter with a bazooka from a civilian with a broom, a hostile combatant from a wounded or surrendering soldier?

“Fundamentally, autonomous weapon systems raise ethical concerns for society about substituting human decisions about life and death with sensor, software and machine processes,” Peter Maurer, the president of the International Committee of the Red Cross and an outspoken opponent of killer robots, told the Geneva conference.

In advance of the conference, Human Rights Watch and Harvard Law School’s International Human Rights Clinic called for steps toward a legally binding agreement that requires human control at all times.

“Robots lack the compassion, empathy, mercy, and judgment necessary to treat humans humanely, and they cannot understand the inherent worth of human life,” the groups argued in a briefing paper to support their recommendations.



A “Campaign to Stop Killer Robots” protest in Berlin in 2019.
Credit: Annegret Hilse/Reuters

Others said autonomous weapons, rather than reducing the risk of war, could do the opposite — by providing antagonists with ways of inflicting harm that minimize risks to their own soldiers.

“Mass produced killer robots could lower the threshold for war by taking humans out of the kill chain and unleashing machines that could engage a human target without any human at the controls,” said Phil Twyford, New Zealand’s disarmament minister.

Why was the Geneva conference important?

The conference was widely considered by disarmament experts to be the best opportunity so far to devise ways to regulate, if not prohibit, the use of killer robots under the C.C.W.

It was the culmination of years of discussions by a group of experts who had been asked to identify the challenges and possible approaches to reducing the threats from killer robots. But the experts could not even reach agreement on basic questions.
What do opponents of a new treaty say?

Some, like Russia, insist that any decisions on limits must be unanimous — in effect giving opponents a veto.

The United States argues that existing international laws are sufficient and that banning autonomous weapons technology would be premature. The chief U.S. delegate to the conference, Joshua Dorosin, proposed a nonbinding “code of conduct” for use of killer robots — an idea that disarmament advocates dismissed as a delaying tactic.

The American military has invested heavily in artificial intelligence, working with the biggest defense contractors, including Lockheed Martin, Boeing, Raytheon and Northrop Grumman. The work has included projects to develop long-range missiles that detect moving targets based on radio frequency, swarm drones that can identify and attack a target, and automated missile-defense systems, according to research by opponents of the weapons systems.


A U.S. Air Force Reaper drone in Afghanistan in 2018. Such unmanned aircraft could be turned into autonomous lethal weapons in the future.
Credit: Shah Marai/Agence France-Presse — Getty Images

The complexity and varying uses of artificial intelligence make it more difficult to regulate than nuclear weapons or land mines, said Maaike Verbruggen, an expert on emerging military security technology at the Centre for Security, Diplomacy and Strategy in Brussels. She said lack of transparency about what different countries are building has created “fear and concern” among military leaders that they must keep up.

“It’s very hard to get a sense of what another country is doing,” said Ms. Verbruggen, who is working toward a Ph.D. on the topic. “There is a lot of uncertainty and that drives military innovation.”

Franz-Stefan Gady, a research fellow at the International Institute for Strategic Studies, said the “arms race for autonomous weapons systems is already underway and won’t be called off any time soon.”

Is there conflict in the defense establishment about killer robots?


Yes. Even as the technology becomes more advanced, there has been reluctance to use autonomous weapons in combat because of fears of mistakes, said Mr. Gady.

“Can military commanders trust the judgment of autonomous weapon systems? Here the answer at the moment is clearly ‘no’ and will remain so for the near future,” he said.

The debate over autonomous weapons has spilled into Silicon Valley. In 2018, Google said it would not renew a contract with the Pentagon after thousands of its employees signed a letter protesting the company’s work on a program using artificial intelligence to interpret images that could be used to choose drone targets. The company also created new ethical guidelines prohibiting the use of its technology for weapons and surveillance.

Others believe the United States is not going far enough to compete with rivals.

In October, the former chief software officer for the Air Force, Nicolas Chaillan, told the Financial Times that he had resigned because of what he saw as weak technological progress inside the American military, particularly the use of artificial intelligence. He said policymakers are slowed down by questions about ethics, while countries like China press ahead.

Where have autonomous weapons been used?

There are not many verified battlefield examples, but critics point to a few incidents that show the technology’s potential.

In March, United Nations investigators said a “lethal autonomous weapons system” had been used by government-backed forces in Libya against militia fighters. A drone called Kargu-2, made by a Turkish defense contractor, tracked and attacked the fighters as they fled a rocket attack, according to the report, which left unclear whether any human controlled the drones.

In the 2020 war in Nagorno-Karabakh, Azerbaijan fought Armenia with attack drones and missiles that loiter in the air until detecting the signal of an assigned target.

An Armenian official showing what are reportedly drones downed during clashes with Azerbaijan forces last year.
Credit: Karen Minasyan/Agence France-Presse — Getty Images

What happens now?

Many disarmament advocates said the outcome of the conference had hardened what they described as a resolve to push for a new treaty in the next few years, like those that prohibit land mines and cluster munitions.

Daan Kayser, an autonomous weapons expert at PAX, a Netherlands-based peace advocacy group, said the conference’s failure to agree to even negotiate on killer robots was “a really plain signal that the C.C.W. isn’t up to the job.”

Noel Sharkey, an artificial intelligence expert and chairman of the International Committee for Robot Arms Control, said the meeting had demonstrated that a new treaty was preferable to further C.C.W. deliberations.

“There was a sense of urgency in the room,” he said, that “if there’s no movement, we’re not prepared to stay on this treadmill.”



John Ismay contributed reporting.



Adam Satariano is a technology reporter based in London. @satariano

Nick Cumming-Bruce reports from Geneva, covering the United Nations, human rights and international humanitarian organizations. Previously he was the Southeast Asia reporter for The Guardian for 20 years and the Bangkok bureau chief of The Wall Street Journal Asia.

Rick Gladstone is an editor and writer on the International Desk, based in New York. He has worked at The Times since 1997, starting as an editor in the Business section. @rickgladstone

A version of this article appears in print on Dec. 18, 2021, Section A, Page 6 of the New York edition with the headline: Killer Robots Aren’t Science Fiction. Calls to Ban Such Arms Are on the Rise.

Thursday, April 07, 2022

Are Lethal Autonomous Weapons Inevitable? It Appears So

The technology and potential uses for killer robots are multiplying and progressing too fast — and international consensus is too fractured — to hope for a moratorium.


Illustration by Paul Lachine
Kyle Hiebert
January 27, 2022

There exists no more consistent theme within the canon of modern science fiction than the fear of the “killer robot,” from Isaac Asimov’s 1950 collection of short stories, I, Robot, to Philip K. Dick’s 1968 novel Do Androids Dream of Electric Sheep? (the inspiration for the Blade Runner movies). Later came Skynet’s murder machines in the Terminator franchise, the cephalopod Sentinels in The Matrix and the android Gunslingers of Westworld.

In the world of imagination beyond the page and screen, savants Stephen Hawking and Bill Gates also saw a looming threat in real-life killer robots, technically classified as lethal autonomous weapons systems (LAWS). They raised alarms, as have American philosophers Sam Harris and Noam Chomsky, and tech magnate Elon Musk.

A major investor in artificial intelligence (AI), Musk told students at the Massachusetts Institute of Technology in 2014 that AI was the biggest existential threat to humanity. Three years later, he was among 116 experts in AI and robotics that signed an open letter to the United Nations warning that LAWS threaten to “permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.”

It appears that this thesis could soon be tested.

The Evolution of Automated Weapons


In December 2021, the Sixth Review Conference of the UN Convention on Certain Conventional Weapons (CCW), a 125-member intergovernmental forum that discusses nascent trends in armed conflict and munitions, was unable to progress talks on new legal mechanisms to rein in the development and use of LAWS. The failure continues eight years of unsuccessful efforts toward either regulation or an outright ban. “At the present rate of progress, the pace of technological development risks overtaking our deliberations,” warned Switzerland’s representative as the latest conference wrapped up in Geneva. No date is set for the forum’s next meeting.

Semi-autonomous weapons like self-guided bombs, military drones or Israel’s famed Iron Dome missile defence system have existed for decades. In each case, a human operator determines the target, but a machine completes the attack. On the other hand, LAWS — derided by critics as “slaughterbots” — empower AI to identify, select and kill targets absent human oversight and control. The Future of Life Institute, a think tank based in Cambridge, Massachusetts, that is focused on threats to humanity posed by AI and which organized the 2017 open letter to the United Nations, makes the distinction by saying, “In the case of autonomous weapons the decision over who lives and who dies is made solely by algorithms.”

Myriad concepts of LAWS for air, ground, sea and space use have long been speculated about. The difference now is that some models are ready to be field tested. At the US Army’s latest annual convention in Washington, DC, in October 2021, attendees were treated to prototypes of robotic combat dogs that could be built with rifles attached. Australian robotics maker GaardTech announced in November an agreement with the Australian army to demonstrate the Jaeger-C uncrewed vehicle some time this year. Described as a “mobile robotic mine” or “beetle tank,” the bulletproof autonomous four-wheeled combat unit can be outfitted with an armour-piercing large-calibre machine gun and sniper rifle and carry up to 100 pounds of explosives for use in suicide attacks.

In The Kill Chain: Defending America in the Future of High-Tech Warfare, Christian Brose, who served as top adviser to US Senator John McCain and staff director of the Senate Armed Services Committee, tells of how China intends to develop fully autonomous swarms of intelligent combat drones. Recent actions bear this out. In addition to China’s rapid expansion of its own domestic drone industry, last September two state-owned Chinese companies were linked to a Hong Kong firm that acquired a 75 percent stake in an Italian company that manufactures military-grade drones for the North Atlantic Treaty Organization. The Hong Kong firm reportedly paid 90 times the valuation of the Italian company to execute the takeover.

Meanwhile, a report prepared for the Pentagon’s Joint Artificial Intelligence Center in 2021 by CNA, a non-profit research and analysis institute located in Arlington, Virginia, describes how Chinese technology is enabling Russia’s military to integrate autonomous AI into dozens of its platforms. According to the report, this technology includes anthropomorphic robots capable of carrying multiple weapons and, possibly, of driving vehicles. Russian media quoted defence minister Sergei Shoigu confirming last May that Russia has commenced with the manufacturing of killer robots, saying, “What has emerged are not simply experimental, but robots that can be really shown in science-fiction films as they are capable of fighting on their own.”

Yet the world’s first true test case of a fully autonomous killer robot may have already taken place, in Libya in March 2020. According to a report submitted by a panel of experts to the UN Security Council in March 2021, drones produced by Turkish state-owned defence conglomerate STM were allegedly sent to track down a convoy of retreating forces loyal to renegade military general Khalifa Haftar after they abandoned a months-long siege of the capital, Tripoli.

Turkey’s intervention into Libya to prop up the Tripoli-based Government of National Accord, the war-torn country’s UN-recognized government faction, has opened up Libya’s vast deserts to be used as a giant test theatre for Turkey’s booming military drone industry. Turkish drones have recently altered the trajectory of civil wars in favour of Turkey’s government clients in both Libya and Ethiopia, and delivered a decisive victory for Azerbaijan during a violent flare-up with Armenia in late 2020 over the disputed territory of Nagorno-Karabakh. Over the past two years Ukraine has purchased dozens of Turkish drones in response to Russia’s military buildup on Ukraine’s eastern border.

The experts’ report claims Haftar’s forces “were hunted down and remotely engaged” by a Turkish Kargu-2 drone and other “loitering munitions” — those with the ability to hover over targets for hours — that “were programmed to attack targets without requiring data connectivity between the operator and the munition.” In other words, the machines were apparently capable of identifying, selecting and killing targets without communication from a human handler.

In many ways, the evolution of military drones is a canary in the coal mine, bridging eras between semi-autonomous and autonomous weapons and perhaps foreshadowing the way in which fully independent killer robots might proliferate in the future. In the 2000s, military drones were a very expensive and hard-to-operate weapons system possessed almost exclusively by the United States. Less than two decades later, they have become a low-cost, widely available technology being manufactured and exported worldwide — not only by China, Turkey and the United States, but by Iran, the United Arab Emirates and others, each motivated by not only geopolitical interests but the lucrative commercial stakes involved.

By some estimates, more than 100 countries now have active military drone programs — all springing up without any sort of international regulatory structure in place.

More Just War — or Just More War?


Rapid advances in autonomous weapons technologies and an increasingly tense global order have brought added urgency to the debate over the merits and risks of their use.

Proponents include Robert Work, a former US deputy secretary of defence under the Obama and Trump administrations, who has argued the United States has a “moral imperative” to pursue autonomous weapons. The chief benefit of LAWS, Work and others say, is that their adoption would make warfare more humane by reducing civilian casualties and accidents through decreasing “target misidentification” that results in what the US Department of Defense labels “unintended engagements.”

Put plainly: Autonomous weapons systems may be able to assess a target’s legitimacy and make decisions faster, and with more accuracy and objectivity than fallible human actors could, either on a chaotic battlefield or through the pixelated screen of a remote-control centre thousands of miles away. The outcome would be a more efficient use of lethal force that limits collateral damage and saves innocent lives through a reduction in human error and increased precision of munitions use.

Machines also cannot feel stress, fatigue, vindictiveness or hate. If widely adopted, killer robots could, in theory, lessen the opportunistic sexual violence, looting and vengeful razing of property and farmland that often occurs in war — especially in ethnically driven conflicts. These atrocities tend to create deep-seated traumas and smouldering intergenerational resentments that linger well after the shooting stops, destabilizing societies over the long term and inviting more conflict in the future.

But critics and prohibition advocates feel differently. They say the final decision over the use of lethal force should always remain in the hands of a human actor who can then be held accountable for that decision. Led by the Campaign to Stop Killer Robots, which launched in 2013 and now comprises more than 180 member organizations across 66 countries and is endorsed by over two dozen Nobel Peace laureates, the movement is calling for a pre-emptive, permanent international treaty banning the development, production and use of fully autonomous weaponry.

Dozens of countries support a pre-emptive ban as well. This briefly included Canada, when the mandate letter issued by Prime Minister Justin Trudeau in 2019 to then foreign affairs minister François-Philippe Champagne requested he assist international efforts to achieve prohibition. That directive has since disappeared from the mandates given to Champagne’s successors, Marc Garneau and now Mélanie Joly.

For those calling for a ban, the risks of LAWS outweigh their supposed benefits by ultimately incentivizing war through eliminating some of its human cost. The unavoidable casualties that result from armed conflict, and the political blowback they can produce, have always moderated the willingness of governments to participate in wars. If this deterrent is minimized by non-human combatants over time, it may render military action more appealing for leaders — especially for unpopular ones, given the useful distraction that foreign adventurism can sometimes inject into domestic politics.

Other risks are that autonomous weapons technology could fall into the hands of insurgent groups and terrorists. At the peak of its so-called caliphate in Iraq and Syria, the Islamic State was launching drone strikes daily. Despotic regimes may impulsively unleash autonomous weapons on their own populations to quell a civilian uprising. Killer robots’ neural networks could also be susceptible to being hacked by an adversary and turned against their owners.

Yet, just as the debate intensifies, a realistic assessment of the state of the killer robots being developed confirms what the Swiss ambassador to the CCW feared — technological progress is far outpacing deliberations over containment. But even if it weren’t, amid a splintering international order, plenty of nation-states are readily violating humanitarian laws and treaties anyway, while others are seeking new ways to gain a strategic edge in an increasingly hostile, multipolar geopolitical environment.
National Interests Undermine Collective Action

While Turkey may have been the first to allegedly deploy live killer robots, their wide-ranging use is likely to be driven by Beijing, Moscow and Washington. Chinese President Xi Jinping and Russian President Vladimir Putin both openly loathe the Western-oriented human rights doctrines that underpin calls to ban killer robots. And despite America’s domestic division and dysfunction, its political class still has a bipartisan desire for the United States to remain the world’s military hegemon.

With a GDP only slightly larger than that of the state of Florida, Russia cannot compete economically in a great power contest, which leaves it reliant on exploiting asymmetric advantages wherever possible, including by furthering its AI capability for military and espionage purposes. Autonomous weapons could be well suited to securing the resource-rich but inhospitable terrain of the Arctic, a region where the Kremlin is actively trying to assert Russia’s primacy. The country is also the world’s second-largest arms exporter behind the United States, accounting for one-fifth of global arms sales since 2016 — a key source of government revenue and foreign influence. Its recent anti-satellite weapons test underscores the Kremlin’s willingness to explore controversial weapons technologies, even in the face of international condemnation.

President Xi Jinping, meanwhile, has pinned China’s ambitions of remaking the global order in favour of autocracies on dominating key emerging technologies. On track by some estimates to become the world’s biggest economy by 2028, China is pouring spectacular amounts of money and resources into everything from AI, nanotechnology and quantum computing to genetics and synthetic biology, and has a stranglehold on the market for rare earth metals. After tendering his resignation in September out of frustration, the Pentagon’s ex-software chief, Nicolas Chaillan, told the Financial Times a month later that the United States will have “no competing fighting chance against China in 15 to 20 years.”

China is also notably keen on state-sponsored intellectual property theft to accelerate its innovation cycles. The more that others demonstrably advance on killer robots, the more that China will attempt to steal that technology — and inevitably succeed to a degree. This could create a self-reinforcing feedback loop that hastens the killer robot arms race among military powers.

This race of course includes the United States. The New York Times reported back in 2005 that the Pentagon was mulling ways to integrate killer robots into the US military. And much to the dismay of progressives, even Democrat-led administrations exhibit no signs whatsoever of winding down military spending any time soon — the Biden administration released a decidedly hawkish Global Posture Review at the end of November just as a massive US$770 billion defence bill sailed through Congress. The US military has already begun training drills to fight enemy robots, while deploying autonomous weapons systems could uphold its capacities for foreign intervention and power projection overseas, now that nation-building projects have fallen out of fashion.

Most important of all, mass production of killer robots could offset America’s flagging enlistment numbers. The US military requires 150,000 new recruits every year to maintain its desired strength and capability. And yet Pentagon data from 2017 revealed that more than 24 million of the then 34 million Americans between the ages of 17 and 24 — over 70 percent — would have been disqualified from serving in the military if they applied, due to obesity, mental health issues, inadequate education or a criminal record. Michèle Flournoy, a career defence official who served in senior roles in both the Clinton and the Obama administrations, told the BBC in December that “one of the ways to gain some quantitative mass back and to complicate adversaries’ defence planning or attack planning is to pair human beings and machines.”

Other, smaller players are nurturing an affinity for LAWS too. Israel assassinated Iran’s top nuclear scientist, Mohsen Fakhrizadeh, outside of Tehran in November 2020 using a remote-controlled, AI-assisted machine gun mounted inside a parked car, and is devising more remote ways to strike back against Hamas in the Gaza Strip. Since 2015, South Korea has placed nearly fully autonomous sentry guns on the edge of its demilitarized zone with North Korea, selling the domestically built robot turrets to customers throughout the Middle East. Speaking at a defence expo in 2018, Prime Minister Narendra Modi of India — the world’s second-largest arms buyer — told the audience: “New and emerging technologies like AI and Robotics will perhaps be the most important determinants of defensive and offensive capabilities for any defence force in the future.”

Finding the Middle Ground: Responsible Use

Even in the event that a ban on killer robots could be reached and somehow enforced, the algorithms used by autonomous weapons systems to identify, select and surveil targets are already streamlining and enhancing the use of lethal force by human actors. Banning hardware without including the underlying software would arguably be a half measure at best. But governments are badly struggling with how to regulate AI — and expanding the scope of the proposed ban would add enormous complexity to an already stalled process.

Instead, establishing acceptable norms around their use — what one US official has called a non-binding code of conduct — in advance of broad adoption may represent an alternative means to harness the potential positives of LAWS while avoiding the most-feared outcomes. These norms could be based primarily on a shared commitment to avoid so-called unintended consequences.

According to Robert Work, the former US defence official, LAWS should be totally excluded from systems that can independently launch pre-emptive or retaliatory attacks, especially those involving nuclear weapons. A code of conduct could also include an expectation to keep autonomous weapons technology out of the hands of non-state actors. Numerous countries party to the CCW also believe there are grounds to extend established international humanitarian law, such as the Geneva Conventions, to cover autonomous weapons systems by applying the law to the human authority that ordered their use. Some proponents of LAWS agree.

These are imperfect solutions — but they may prevent dystopian sci-fi fantasies from becoming reality. One way or another, killer robots are coming.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

ABOUT THE AUTHOR
Kyle Hiebert is a researcher and analyst formerly based in Cape Town and Johannesburg, South Africa, as deputy editor of the Africa Conflict Monitor.


Thursday, December 16, 2021

Nations renew talks on 'killer robots' as deal hopes narrow

Jamey Keaten
The Associated Press
 Wednesday, December 15, 2021


People take part in a 'Stop killer robots' campaign at Brandenburg gate in Berlin, Germany, Thursday, March 21, 2019.
(Wolfgang Kumm/dpa via AP)

The countries behind a United Nations agreement on weapons have been meeting this week on the thorny issue of lethal autonomous weapons systems, colloquially known as "killer robots," which advocacy groups want to strictly limit or ban.

The latest conference of countries behind the Convention on Certain Conventional Weapons is tackling an array of issues ranging from incendiary weapons, explosive remnants of war and a specific category of land mines to autonomous weapons systems.

Opponents of such systems fear a dystopian day when tanks, submarines, robots or fleets of drones with facial-recognition software could roam without human oversight and strike against human targets.

"It's essentially a really critical opportunity for states to take steps to regulate and prohibit autonomy in weapons systems, which in essence means killer robots or weapons systems that are going to operate without meaningful human control," said Clare Conboy, spokeswoman for the advocacy group Stop Killer Robots.

The various countries have met repeatedly on the issue since 2013. They face what Human Rights Watch called a pivotal decision this week in Geneva on whether to open specific talks on the use of autonomous weapons systems or to leave it up to regular meetings of the countries to work out.

A group of governmental experts that took up the issue failed to reach a consensus last week, and advocacy groups say nations including the United States, Russia, Israel, India and Britain have impeded progress.

The International Committee of the Red Cross cautioned this month that the "loss of human control and judgment in the use of force and weapons raises serious concerns from humanitarian, legal and ethical perspectives."

Some world powers oppose any binding or nonvoluntary constraints on the development of such systems, in part out of concern that if they can't develop or research such weapons, their enemies or non-state groups might. Some countries argue there's a fine line between autonomous weapons systems and the computer-aided targeting and weapons systems that already exist.

The United States has called for a "code of conduct" governing the use of such systems, while Russia has argued that current international law is sufficient.

U.N. Secretary-General Antonio Guterres, in a statement delivered on his behalf at Monday's meeting, urged the CCW conference to "swiftly advance its work on autonomous weapons that can choose targets and kill people without human interference."

He called for an agreement "on an ambitious plan for the future to establish restrictions on the use of certain types of autonomous weapons."

The talks are scheduled to run through Friday.

The issue is likely to remain with the group of governmental experts rather than be elevated to dedicated treaty negotiations of the kind that produced other U.N. agreements restricting cluster munitions and land mines.

Monday, June 07, 2021

Germany warns: AI arms race already underway

The world is entering a new era of warfare, with artificial intelligence taking center stage. AI is making militaries faster, smarter and more efficient. But if left unchecked, it threatens to destabilize the world.



'Loitering munitions' with a high degree of autonomy are already seeing action in conflict



An AI arms race is already underway. That's the blunt warning from Germany's foreign minister, Heiko Maas.

"We're right in the middle of it. That's the reality we have to deal with," Maas told DW, speaking in a new DW documentary, "Future Wars — and How to Prevent Them."

It's a reality at the heart of the struggle for supremacy between the world's greatest powers.

"This is a race that cuts across the military and the civilian fields," said Amandeep Singh Gill, former chair of the United Nations group of governmental experts on lethal autonomous weapons. "This is a multi-trillion dollar question."


Great powers pile in


This is apparent in a recent report from the United States' National Security Commission on Artificial Intelligence. It speaks of a "new warfighting paradigm" pitting "algorithms against algorithms," and urges massive investments "to continuously out-innovate potential adversaries."

And you can see it in China's latest five-year plan, which places AI at the center of a relentless ramp-up in research and development, while the People's Liberation Army girds for a future of what it calls "intelligentized warfare."

As Russian President Vladimir Putin put it as early as 2017, "whoever becomes the leader in this sphere will become the ruler of the world."

But it's not only great powers piling in.

Much further down the pecking order of global power, this new era is a battle-tested reality.


German Foreign Minister Heiko Maas: 'We have to forge international treaties on new weapons technologies'


Watershed war

In late 2020, as the world was consumed by the pandemic, festering tensions in the Caucasus erupted into war.

It looked like a textbook regional conflict, with Azerbaijan and Armenia fighting over the disputed region of Nagorno-Karabakh. But for those paying attention, this was a watershed in warfare.

"The really important aspect of the conflict in Nagorno-Karabakh, in my view, was the use of these loitering munitions, so-called 'kamikaze drones' — these pretty autonomous systems," said Ulrike Franke, an expert on drone warfare at the European Council on Foreign Relations.


'Loitering munitions' saw action in the 2020 Nagorno-Karabakh war


Bombs that loiter in the air

Advanced loitering munitions models are capable of a high degree of autonomy. Once launched, they fly to a defined target area, where they "loiter," scanning for targets — typically air defense systems.

Once they detect a target, they fly into it, destroying it on impact with an onboard payload of explosives; hence the nickname "kamikaze drones."

"They also had been used in some way or form before — but here, they really showed their usefulness," Franke explained. "It was shown how difficult it is to fight against these systems."

Research by the Center for Strategic and International Studies showed that Azerbaijan had a massive edge in loitering munitions, with more than 200 units of four sophisticated Israeli designs. Armenia had a single domestic model at its disposal.

Other militaries took note.

"Since the conflict, you could definitely see a certain uptick in interest in loitering munitions," said Franke. "We have seen more armed forces around the world acquiring or wanting to acquire these loitering munitions."

AI-driven swarm technology will soon hit the battlefield


Drone swarms and 'flash wars'


This is just the beginning. Looking ahead, AI-driven technologies such as swarming will come into military use — enabling many drones to operate together as a lethal whole.

"You could take out an air defense system, for example," said Martijn Rasser of the Center for a New American Security, a think tank based in Washington, D.C.

"You throw so much mass at it and so many numbers that the system is overwhelmed. This, of course, has a lot of tactical benefits on a battlefield," he told DW. "No surprise, a lot of countries are very interested in pursuing these types of capabilities."

The scale and speed of swarming open up the prospect of military clashes so rapid and complex that humans cannot follow them, further fueling an arms race dynamic.

As Ulrike Franke explained: "Some actors may be forced to adopt a certain level of autonomy, at least defensively, because human beings would not be able to deal with autonomous attacks as fast."

This critical factor of speed could even lead to wars that erupt out of nowhere, with autonomous systems reacting to each other in a spiral of escalation. "In the literature we call these 'flash wars'," Franke said, "an accidental military conflict that you didn't want."

Experts warn that AI-driven systems could lead to 'flash wars' erupting beyond human control


A move to 'stop killer robots'

Bonnie Docherty has made it her mission to prevent such a future. A Harvard Law School lecturer, she is an architect of the Campaign to Stop Killer Robots, an alliance of nongovernmental organizations demanding a global treaty to ban lethal autonomous weapons.

"The overarching obligation of the treaty should be to maintain meaningful human control over the use of force," Docherty told DW. "It should be a treaty that governs all weapons operating with autonomy that choose targets and fire on them based on sensor's inputs rather than human inputs."

The campaign has been focused on talks in Geneva under the umbrella of the UN Convention on Certain Conventional Weapons, which seeks to control weapons deemed to cause unjustifiable suffering.

It has been slow going. The process has yielded a set of "guiding principles," including that autonomous weapons be subject to international humanitarian law and that humans bear ultimate responsibility for their use. But these simply form a basis for further discussions.

Docherty fears that the consensus-bound Geneva process may be thwarted by powers that have no interest in a treaty.

"Russia has been particularly vehement in its objections," Docherty said.

But it's not alone. "Some of the other states developing autonomous weapon systems such as Israel, the US, the United Kingdom and others have certainly been unsupportive of a new treaty."

TECHNOLOGIES THAT REVOLUTIONIZED WARFARE
AI: 'Third revolution in warfare'
Over 100 AI experts have written to the UN asking it to ban lethal autonomous weapons — those that use AI to act independently. No so-called "killer robots" currently exist, but advances in artificial intelligence have made them a real possibility. Experts said these weapons could be "the third revolution in warfare," after gunpowder and nuclear arms.


Time for a rethink?


Docherty is calling for a new approach if the next round of Geneva talks due later this year makes no progress. She has proposed "an independent process, guided by states that actually are serious about this issue and willing to develop strong standards to regulate these weapon systems."

But many are wary of this idea. Germany's foreign minister has been a vocal proponent of a ban, but he does not support the Campaign to Stop Killer Robots.

"We don't reject it in substance — we're just saying that we want others to be included," Heiko Maas told DW. "Military powers that are technologically in a position not just to develop autonomous weapons but also to use them."

Maas does agree that a treaty must be the ultimate goal. "Just like we managed to do with nuclear weapons over many decades, we have to forge international treaties on new weapons technologies," he said. "They need to make clear that we agree that some developments that are technically possible are not acceptable and must be prohibited globally."

Germany's Heiko Maas: 'We're moving toward a situation with cyber or autonomous weapons where everyone can do as they please'

What next?

But for now, there is no consensus. For Franke, the best the world can hope for may be norms around how technologies are used. "You agree, for example, to use certain capabilities only in a defensive way, or only against machines rather than humans, or only in certain contexts," she said.

Even this will be a challenge. "Agreeing to that and then implementing that is just much harder than some of the old arms control agreements," she said.

And while diplomats tiptoe around these hurdles, the technology marches on.

"The world must take an interest in the fact that we're moving toward a situation with cyber or autonomous weapons where everyone can do as they please," said Maas. "We don't want that."


SEE KILLER ROBOTS IN MY GOTHIC CAPITALI$M
 The Horror Of Accumulation And The Commodification Of Humanity 

For more, watch the full documentary Future Wars on YouTube.