AI: U.N. Might Ban ‘Terminator’-Style Killer Robots In 2017
Stephen Hawking isn’t the only person afraid that killer robots might one day rise up and wipe out humanity.
Countries around the world continue to invest in Terminator-style military robots to gain an advantage on the battlefield, and that is making the United Nations increasingly nervous.
In 2017, the world body is expected to consider a measure banning the use of deadly, artificially intelligent weapons capable of killing without meaningful human control.
The preemptive ban on Lethal Autonomous Weapons Systems (LAWS), the official U.N. designation for killer robots, would prohibit the research and development of weapon systems capable of acting without human intervention.
That’s good news for Stephen Goose, arms director of Human Rights Watch and co-founder of the Campaign to Stop Killer Robots, according to Seeker.
“In essence, they decided to move from the talk shop phase to the action phase, where they are expected to produce a concrete outcome.”
The decision by the U.N. comes after an agreement earlier this month by the 123 countries that are members of the Convention on Certain Conventional Weapons to begin formal discussions on killer robots that could soon appear on the battlefield.
The member nations also agreed to consider banning incendiary weapons, which are used to set fire to both people and buildings and have been responsible for large numbers of civilian deaths in Syria.
The United States is arguably leading the technological drive to develop artificially intelligent soldier machines that, unlike the human-looking Cylons depicted in Battlestar Galactica, could come in almost any form.
The deadly machines could be designed as tiny robots acting in swarms or as large autonomous craft operating without human control, Goose told Seeker.
“The key thing distinguishing a fully autonomous weapon from an ordinary conventional weapon, or even a semi-autonomous weapon like a drone, is that a human would no longer be deciding what or whom to target and when to pull the trigger.
“The weapon system itself, using artificial intelligence and sensors, would make those critical battlefield determinations. This would change the very nature of warfare, and not for the betterment of humankind.”
Members of the Campaign to Stop Killer Robots, a Washington-based coalition of think tanks, are concerned autonomous killer robots could phase humans out of the battlefield, with disastrous results.
There are currently no laws governing the use, behavior, or ethical conduct of autonomous killer robots, legal expert Peter Asaro told Motherboard.
“It’s unclear who, if anyone, could be held responsible if an autonomous weapon caused an atrocity. In order to commit a crime … there must be intention. Robots aren’t capable of intention in the legal sense, so cannot commit crimes or be held accountable.”
In 2017, the United Nations is expected to consider the many issues behind the development and deployment of soldier machines that are incapable of making ethical decisions on their own.
Any prohibition on robot soldiers would need to make it past Russia, which opposes the ban and holds veto power on the U.N. Security Council. At least 16 countries, including China, the U.S., and Russia, have invested heavily in developing killer machine technology.
Robotic soldiers could one day be deployed to the battlefield without the need for training, resupply, or support, making them ideal soldiers for use in a struggling economy.
Critics, however, argue that the use of killer robots could open the way for human rights abuses and violations of international law. Last year, Stephen Hawking joined Elon Musk in writing an open letter stressing the dangers of creating autonomous killer robots, and this month, nine Congressional Democrats penned a letter to the U.S. government urging a preemptive ban on the technology.
Do you think the United Nations should ban artificially intelligent robot soldiers?
[Featured Image by Sarunyu L/Shutterstock]