The Future of Life Institute channels ‘Black Mirror’ in its short film warning about the danger of autonomous killer robots.
This article is part of the Motherboard Guide to Cinema, a semi-regular column exploring foreign and obscure speculative films.
On Friday, the Future of Life Institute—an AI watchdog organization that has made a name for itself through its campaign to stop killer robots—released a nightmarish short film imagining a future where smart drones kill.
The short is called ‘Slaughterbots,’ and it channels the near-future dystopias depicted in Black Mirror in an attempt to raise awareness about the dangers of autonomous weapons.
The film opens with a Silicon Valley CEO-type delivering a product presentation to a live audience a la Steve Jobs. The presentation seems innocuous enough at first—the presenter seems to be unveiling some new drone technology—but takes a dark turn when he demonstrates how these autonomous drones can slaughter humans like cattle by delivering “a shaped explosive” to the skull.
The audience eats it up, clapping and laughing along with the CEO as if they hadn’t witnessed anything more dangerous than the unveiling of the iPhone X. The CEO goes further, showing video of the tiny killer drone in action.
“Let’s watch what happens when the weapons make the decisions,” the CEO says, as the bot executes a number of people on the massive screen behind him. “Now trust me, these are all bad guys.”
What follows is a deeply unsettling portrait of a near future where these small weaponized drones use their onboard technologies—“cameras like you use for your social media apps, facial recognition like you have on your phones!”—to make autonomous decisions about who lives and who dies.
While nothing quite like the drones depicted in the film exists just yet, autonomous killer robots may not be as far away as they seem. Remotely piloted drones have been used on battlefields in the Middle East and Africa for over a decade; American hackers have mounted guns on commercial drones, and ISIS has used these same drones for aerial bombardment. Last year, a police robot used a bomb to kill a gunman in Dallas. Both Russia and the US are developing tanks that can be operated remotely or entirely autonomously.
So far, all of these technologies still depend on a human to pull the trigger and decide if the person on the other side of the drone’s camera will die. But it’s not inconceivable that these systems will become increasingly automated to the point where the machines themselves will make the decision.
The Future of Life Institute is calling for a preemptive ban on autonomous weapons to prevent the hellish future depicted in ‘Slaughterbots.’ So far its campaign is 20,000 signatures strong, and the organization finds itself in the company of scientists and tech leaders like Stephen Hawking and Elon Musk, who have signed letters to the UN calling for a ban on killer bots.
Given how today’s automated systems are liable to mistake a turtle for a rifle, one must hope this preemptive ban is successful.