The Navy is looking to expand its use of drones that are increasingly independent of direct human control, despite alarm among scientists and inventors over growing automation in the military.
In recent days, Pentagon officials and Navy leaders have spoken about the program and the push to develop more autonomous and intelligent unmanned systems.
Secretary of Defense Ash Carter in a speech earlier this month confirmed that the United States was developing “self-driving boats which can network together to do all kinds of missions, from fleet defense to close-in surveillance, without putting sailors at risk.”
And Rear Adm. Robert P. Girrier, the Navy’s director of Unmanned Warfare Systems, discussed the effort at a January event at the Center for Strategic and International Studies.
The effort, dubbed "human-machine teaming," involves unmanned vehicles that operate more independently than those piloted or closely supervised by human operators.
Girrier told the audience that the “technology is there” and that more autonomous drones would allow the United States “to achieve supremacy at a lower cost.”
Experts concur that the technology will soon be available.
“You’re going to see greater levels of autonomy, sooner rather than later, especially with underwater vehicles involved in anti-submarine warfare,” according to Konstantin Kakaes, a fellow at New America, a Washington-based think tank that has conducted extensive research into drones.
Kakaes told CNN that the ocean presented a rich environment for drones to function independently since communication with human operators is harder underwater and there are “no innocent bystanders to watch out for.”
The Navy's push comes as critics express increasing alarm at the further automation of drones, advances that have sparked fears of militaries fielding robots that can kill without accountability.
In July, a group of concerned scientists, researchers and academics, including theoretical physicist Stephen Hawking and billionaire entrepreneur Elon Musk, argued against the development of autonomous weapons systems. They warned of an artificial intelligence arms race and called for a "ban on offensive autonomous weapons beyond meaningful human control."
Air Force Gen. Paul Selva, the vice chairman of the Joint Chiefs of Staff, said the argument over allowing autonomous systems to conduct lethal attacks was worth having. "That's a debate we need to have," he told the Brookings Institution earlier this month, adding that it was necessary to "answer whether as humans we want to cross that line."
The Pentagon's Defense Science Board Task Force, however, pointed to the risks posed by the debate over the morality of such autonomy in a 2012 report, which stated: "This debate on functional morality has had unfortunate consequences. It increases distrust and [reduces] acceptance of unmanned systems."
Robin Murphy, a co-chair of the task force that produced the report, told CNN that because of the complexity of the technology involved, the issue risked being oversimplified, and that it was important to "educate the public."
She added that the debate was nothing new. "The academic community has been thinking about this problem for decades," Murphy told CNN.
Kakaes agreed. “These moral, ethical, legal questions are not fundamentally new,” he said, and issues of accountability and unmanned weapons can be traced all the way back to “homing torpedoes in World War II.”
Automation has made determining the responsible party a lot more complicated than simply identifying “the guy who pulled the trigger of an M-16,” he added.
Militaries are already fielding automated systems. Both the Israeli "Iron Dome" and the American Phalanx CIWS automatically detect and shoot down incoming missiles and rockets.
Girrier defended autonomous unmanned systems, saying that the rules of engagement governing when such systems could attack would be crafted by people in order to retain the "human command element."