November 28, 2022
Heard on All Things Considered
ARI SHAPIRO
BRIANNA SCOTT
Law enforcement has used robots to investigate suspicious packages. Now, the San Francisco Board of Supervisors is considering a policy proposal that would allow SFPD's robots to use deadly force against a suspect.
Should robots working alongside law enforcement be used to deploy deadly force?
The San Francisco Board of Supervisors is weighing that question this week as it considers a policy proposal that would allow the San Francisco Police Department (SFPD) to use robots to apply deadly force against a suspect.
A new California law that took effect this year requires every municipality in the state to list and define the authorized uses of all military-grade equipment in its local law enforcement agencies.
The original draft of SFPD's policy was silent on the matter of robots.
Aaron Peskin, a member of the city's Board of Supervisors, added a line to SFPD's original draft policy that stated, "Robots shall not be used as a Use of Force against any person."
The SFPD crossed out that sentence with a red line and returned the draft.
Their altered proposal outlines that "robots will only be used as a deadly force option when risk of loss of life to members of the public or officers are imminent and outweigh any other force option available to the SFPD."
The SFPD currently has 12 functioning robots. They are remote-controlled and typically used to gain situational awareness and survey specific areas officers may not be able to reach. They are also used to investigate and defuse potential bombs, or to aid in hostage negotiations.
Peskin says much of the military-grade equipment sold to cities for police departments to use was issued by the federal government, but there's not a lot of regulation surrounding how robots are to be used. "It would be lovely if the federal government had instructions or guidance. Meanwhile, we are doing our best to get up to speed."
Some leading robot makers are pledging not to weaponize them
The idea of robots being legally allowed to kill has garnered some controversy. In October, a number of robotics companies – including Hyundai's Boston Dynamics – signed an open letter, saying that general purpose robots should not be weaponized.
Ryan Calo is a law and information science professor at the University of Washington who also studies robotics. He says he's long been concerned about the increasing militarization of police forces, but that police units across the country might be attracted to utilizing robots because "it permits officers to incapacitate a dangerous individual without putting themselves in harm's way."
Robots could also keep suspects safe, Calo points out. When officers use lethal force at their own discretion, often the justification is that the officer felt unsafe and perceived a threat. But he notes, "you send robots into a situation and there just isn't any reason to use lethal force because no one is actually endangered."
The first reported use of a robot by law enforcement to deliver deadly force in the United States was in 2016, when the Dallas Police Department used a bomb-disposal robot armed with an explosive device to kill a suspect who had shot and killed five police officers.
In a statement to the technology news site The Verge, SFPD Officer Eve Laokwansathitaya said, "SFPD does not have any sort of specific plan in place as the unusually dangerous or spontaneous operations where SFPD's need to deliver deadly force via robot would be a rare and exceptional circumstance."
Paul Scharre is the author of the book Army Of None: Autonomous Weapons And The Future Of War. He helped create the U.S. policy for autonomous weapons used in war.
Scharre notes there is an important distinction between how robots are used in the military versus law enforcement. For one, robots used by law enforcement are not autonomous, meaning they are still controlled by a human.
"For the military, they're used in combat against an enemy and the purpose of that is to kill the enemy. That is not and should not be the purpose for police forces," Scharre says. "They're there to protect citizens, and there may be situations where they need to use deadly force, but those should be absolutely a last resort."
What is concerning about SFPD's proposal, Scharre says, is that it doesn't seem to be well thought out.
"Once you've authorized this kind of use, it can be very hard to walk that back." He says that this proposal sets up a false choice between using a robot for deadly force or putting law enforcement officers at risk. Scharre suggests that robots could instead be sent in with a non-lethal weapon to incapacitate a person without endangering officers.
As someone who studies robotics, Ryan Calo says that the idea of 'killer robots' is a launchpad for a bigger discussion about our relationship to technology and AI.
When it comes to robots being out in the field, Calo thinks about what happens if the technology fails and a robot accidentally kills or injures a person.
"It becomes very difficult to disentangle who is responsible. Is it the people using the technology? Is it the people that design the technology?" Calo asks.
With people, we can unpack the social and cultural dynamics of a situation, something we can't do with a robot.
"They feel like entities to us in a way that other technology doesn't," Calo says. "And so when you have a robot in the mix, all of a sudden not only do you have this question about who is responsible, which humans, you also have this strong sense that the robot is a participant."
Even if robots could be used to keep humans safe, Calo raises one more question: "We have to ask ourselves do we want to be in a society where police kill people with robots? It feels so deeply dehumanizing and militaristic."
The San Francisco Board of Supervisors meets Tuesday to discuss how robots could be used by the SFPD.