Everybody hates the idea of San Francisco’s killer robot police
In dystopian news: On Tuesday, San Francisco supervisors voted to give city police the authority to deploy remotely controlled robots, dog-size contraptions of whirring gears and levers on wheels, that can now be used to kill.
The proposal, first reported last week by Mission Local, would let police officers strap a bomb onto a robot’s mechanical arm, roll it into the vicinity of a targeted lawbreaker, and hit a button to detonate. That tactic was pioneered six years ago in Dallas, when police caught in an hours-long standoff with a sniper who had killed five officers ended it by deploying a robot carrying a pound of C-4 explosive. At the time, the move left experts stunned at such use of deadly force.
That moment spurred a raging debate over whether police should be given license to kill with such high-tech tools. San Francisco’s fleet of robots is currently employed for reconnaissance, rescue missions, and bomb disposal. According to the city’s proposal, police can command ground-based killing robots “when risk of loss of life to members of the public or officers is imminent and officers cannot subdue the threat after using alternative force options or de-escalation tactics.”
Although it passed the Board of Supervisors, the policy must still win a second vote at a meeting next week and then be greenlit by the mayor before becoming law.
The San Francisco Police Department reportedly said it won’t equip robots with firearms, but the proposal notably does not rule out the possibility. As written, it has already drawn criticism from other city officials, digital-rights watchdogs, and many corners of the internet. A letter from the San Francisco Public Defender’s Office called the policy “dehumanizing and militaristic,” while others argued that its proponents were weaponizing fear to ratchet up police power: At the city’s board meeting, an assistant police chief invoked the 2017 mass shooting at a Las Vegas music festival and claimed a robot could have shortened the ordeal.
Legally, it’s new territory. A clause in earlier drafts submitted for government review stated that “robots shall not be used as a Use of Force against any person”; in September, however, the police struck out that line. Poetically, the policy breaks legendary science fiction author Isaac Asimov’s first law of robotics: that a robot may not injure a human being.
However, the threat of killer robots isn’t new; it’s been alarming critics for years. Boston Dynamics, a spin-off of the Massachusetts Institute of Technology and designer of the famous robot dog Spot, landed in hot water after a “dog” it leased to the New York Police Department was deployed during a home-invasion response in the Bronx in 2021; months later, Spot’s job with the NYPD was scrapped. Today, Boston Dynamics is one of the few robot makers that vocally oppose the weaponization of their creations. In October, it signed an open letter stating that doing so “raises new risks of harm and serious ethical issues.” For one: Who’s at fault if a robot accidentally kills somebody?
Meanwhile, the Electronic Frontier Foundation (EFF), a public interest group, has argued that arming robots is a slippery slope, which would pave the way for “letting autonomous artificial intelligence determine whether or not to pull the trigger.”
Dystopian sci-fi becomes sci-fact
For Americans, who have seen plenty of Hollywood films about AI apocalypses, the optics are bleak. When Ghost Robotics, which markets to the U.S. Army, made headlines last October after one of its four-legged robots was showcased wielding a rifle at a conference in Washington, D.C., the backlash was swift. (Ghost Robotics, which did not make the rifle, said it was taking an agnostic stance on how its products are ultimately used.)
It doesn’t help that San Francisco law enforcement has amassed a growing arsenal over the years: A federal program has reportedly supplied it with grenade launchers, bayonets, armored vehicles, and camouflage uniforms. The city said its fleet of a dozen robots was partly purchased with federal dollars but did not come from military surplus, although one of its models, the QinetiQ TALON, is a popular war bot. In September, the SFPD was also granted the right to monitor live video feeds from private cameras owned by businesses and civilians, sparking fears of a growing surveillance state.
Now, San Francisco’s vote is the rare move that’s been scorned across the political spectrum. Left-leaning groups were quick to condemn the policy. “This is a spectacularly dangerous idea,” EFF policy analyst Matthew Guariglia told Fast Company. “Police technology goes through mission creep—meaning, equipment reserved only for specific or extreme circumstances ends up being used in increasingly everyday or casual ways. San Francisco’s policy would let police bring armed robots to every arrest, and every execution of a warrant to search a house or vehicle or device. Depending on how police choose to define the words ‘critical’ or ‘exigent,’ police might even bring armed robots to a protest.”
But right-wing media was no friendlier: On a Fox News segment, a Duke professor said it was “laughable” that the SFPD was taking ideas from the Terminator movie franchise.
Others on Twitter, meanwhile, have taken aim at the rush by police and media outlets alike to clarify that the robots would not be carrying machine guns or nuclear missiles, just ordinary explosives.
With the proposal up for another vote next week, time will tell whether the blowback has reached San Francisco’s ears.