Perhaps the most nightmarish, dystopian film of 2017 didn’t come from Hollywood. Autonomous weapons critics, led by a college professor, put together a horror show.
It’s a seven-minute video, a collaboration between University of California, Berkeley professor Stuart Russell and the Future of Life Institute, that shows a future in which palm-sized autonomous drones use facial recognition technology and on-board explosives to carry out untraceable massacres.
The film is the researchers’ latest attempt to build support for a global ban on autonomous weapon systems, which select and kill targets without meaningful human control.
They released the video to coincide with meetings the United Nations’ Convention on Certain Conventional Weapons is holding this week in Geneva, Switzerland, to discuss autonomous weapons.
“We have an opportunity to prevent the future you just saw, but the window to act is closing fast,” said Russell, an artificial intelligence professor, at the film’s conclusion. “Allowing machines to choose to kill humans will be devastating to our security and freedom.”
In the film, thousands of college students are killed in attacks at a dozen universities after drones swarm campuses. Some of the drones first attach to buildings, blowing holes in walls so other drones can enter and hunt down specific students. A similar scene unfolds at the U.S. Capitol, where a select group of senators is killed.
Such atrocities aren’t possible today, but given the technology’s trajectory, they will be in the future. The researchers warn that several powerful nations are moving toward autonomous weapons, and that if one nation deploys them, others may feel compelled to keep up, triggering a global arms race.
Because of these concerns, top artificial intelligence researchers have spent several years calling for a ban on autonomous weapons, which are sometimes called “killer robots.” The researchers warn that terrorists may one day be able to buy such drones and use them to kill in huge numbers with little effort.
“A $25 million order now buys this, enough to kill half a city,” a defense contractor in the film says as swarms of tiny drones fly out of a cargo plane.
The film marks a sensationalistic turn in how autonomous weapons critics have pushed for a ban. In the past, they relied on open letters and petitions written in measured academic language. In 2015, thousands of AI and robotics researchers joined prominent figures such as Elon Musk and Stephen Hawking in calling for a ban on offensive autonomous weapons. That letter spoke of “armed quadcopters”; this week’s video warns of “slaughterbots.”
Earlier this month, leading artificial intelligence researchers in Canada and Australia called on their governments to support a ban on lethal autonomous weapon systems.
This week’s new approach reflects the perceived gravity of the situation. This summer, a report from Harvard University’s Belfer Center warned that weapons using artificial intelligence could be as transformative as nuclear weapons.