It’s a serious question: Should all governments prohibit the use of killer robots?
Harvard Law School and Human Rights Watch published a report Thursday calling for a ban on “autonomous weapons” — before it’s too late.
It’s an attempt to stop us from building pistol-toting Terminators, smart vehicles with mounted machine guns, and self-piloted bomber drones.
Don’t laugh. This isn’t science fiction anymore.
Today’s high-tech warfare is mostly waged via remote-controlled machines. The U.S. military currently has MADSS, a 1,400-pound rover that carries gear and shoots a machine gun. It also has the Protector, a 1,000-pound rover that scans for bombs and fires a bazooka.
But militaries are already experimenting with automated systems. The Israeli “Iron Dome” system detects and shoots down incoming rockets. The “Phalanx CIWS” system used by U.S. naval combat ships does the same with a swiveling Gatling gun. The C-RAM system does the same on land, mounted on a truck.
Human Rights Watch, an organization that advocates for the fair treatment of people, says it’s only a matter of time before automation is used for attack. And that’s where the ethical problems appear.
Consider the difference between remote control and automation.
American military pilots today operate remote-controlled drones — like the MQ-1 Predator — from thousands of miles away, firing explosive missiles at unsuspecting human targets across the Middle East.
In theory, a human pilot could be prosecuted for murdering innocent people. But a “fully autonomous” machine is programmed to make decisions all on its own. You can’t jail a human for a robot’s self-determined actions, the report says.
“These weapons have the potential to commit criminal acts — unlawful acts that would constitute a crime if done with intent — for which no one could be held responsible,” the report says.
The present-day court system just isn’t built to handle this kind of issue, the report explains. The only recourse a victim might have is a civil lawsuit against the robot maker or programmer for failing to stop murderous actions that were “reasonably foreseeable.”
And that would only award a victim money. Without severe punishment, it does a poor job of deterring people from building such weapons in the future.
“Because these robots would be designed to kill, someone should be held legally and morally accountable for unlawful killings and other harms the weapons cause,” the report says.
Its answer? An international ban on the development, production and use of “fully autonomous” weapons.