San Francisco police are asking for permission to use killer robots in situations where their officers could find themselves in life-threatening danger. Demining and exploration robots would be repurposed to carry explosives or fire live rounds to kill a suspect.
Dallas, in the United States, early July 2016. A gunman killed five police officers and wounded seven others, along with two civilians. Surrounded by police for several hours, and after long, fruitless negotiations and a series of firefights, he was confronted with a robot loaded with explosives. The machine, a Northrop Grumman Andros robot designed for bomb-disposal teams and the army, placed an explosive charge near the shooter. Detonated remotely, it killed him.
This was the first time a robot had been used by police to eliminate a criminal. Since then, US law enforcement has considered the use of a robot reasonable in certain situations to protect the lives of officers. That is why the San Francisco police are now asking the city's Board of Supervisors for authorization to deploy robots. They would be used to neutralize suspects in cases where the risk of death for civilians or officers is considered significant. The San Francisco Police Department (SFPD) has even drafted its own rules of engagement for killer robots as part of its annual equipment request.
An overly vague doctrine of use
At first, the city administration asked the SFPD to revise its draft, because the doctrine could open the door to overly broad use of killer robots. The authorities then approved the document once it made clear that deployment would be limited to scenarios where it was the only option. The local police already have a dozen remote-controlled robots. They are mainly used to inspect potentially dangerous areas and to dispose of explosives. But, as happened in improvised fashion in Dallas in 2016, they can be diverted from their primary mission to carry an explosive charge. They can also be fitted with guns that fire blank rounds, a technique used to detonate certain explosives during disposal operations. Those same guns could very well be loaded with live ammunition.
Humans at the heart of the problem
On the military side, this type of weapon already exists and has been used in conflicts and special operations. The ethical issues revolve around autonomous variants of these robots and are debated every year at the UN. For now, the principle is that a human operator is still required to engage a target. But, with their autonomous capabilities, AI-powered robots could very well carry out the operation without any human intervention. The line is also thin, as shown by the case of Lanius, the Israeli combat drone from Elbit that Futura covered recently. Being autonomous, this drone could very well reach its target without an operator intervening.
Warfare does not answer to the same imperatives as the security of a city. In the case of the police, an operator would indeed be at the controls. But the San Francisco police's use of lethal robots remains problematic for another reason. The department is regularly condemned for excessive and indiscriminate use of force in its interventions. The SFPD has even gone to great lengths to cover up beatings by its officers. So authorizing those same officers to decide when to engage killer robots is understandably worrying for the city's residents and its administration.