Put a foot wrong in San Francisco and you might find yourself on the wrong end of a gun, pointed by a robot.
The Californian city has made global headlines by giving its police robots permission to shoot and potentially kill suspects.
The City’s Board of Supervisors has approved a new policy which lays out how the San Francisco Police Department (SFPD) can use its military-style weapons.
According to Mission Local, which reports on news in the City, the Chair of the Board, Aaron Peskin, initially tried to prevent what some see as a horrifying development. Peskin added his own words to the draft: “Robots shall not be used as a Use of Force against any person.”
The words were replaced with: “Robots will only be used as a deadly force option when risk of loss of life to members of the public or officers are imminent and outweigh any other force option available to SFPD.”
Astute observers remarked on how this development clashes with the now-famous Three Laws of Robotics created by the sci-fi author Isaac Asimov:
First Law
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Really? Before images of rampaging RoboCops come to mind, though, what we are talking about here is the option to arm any one of the SFPD’s 12 functioning robots, which are mainly used to defuse potential bombs. Picture tough shopping trolleys equipped with elaborate cameras. These can now be armed with a shotgun and are therefore able to kill a human being, should the need arise.
Bigger picture: What will worry local legislators, of course, is what comes next after these relatively basic robots. Some in the City will be anxious that carte blanche is not given to a future race of police robots administering summary justice on the streets.
But the hope that the human race will keep to the three Asimov rules is kind of quaint. They might be useful for vacuum robots and the like, but no military-grade robot will be built with such ideals in place.
We crossed the Rubicon some time ago when it comes to what robots will be asked to do in future conflicts; now our leaders will have to consider whether this technology is appropriate for use against civilians on City streets.