Killer robots will be discussed at the UN for the first time since the two words were paired together. The formal definition is ‘autonomous machines able to identify and kill targets without human input,’ but everyone agrees that ‘killer robots’ sounds like a much more urgent issue.
To be clear, these robots don’t actually exist yet. But the technology is getting closer. We already have drones, which come nearest to fitting the description. Drones are operated by humans, controlled in a way that mimics some of our favorite video games, only these machines are real and the damage they do is unquestionably real. Drones are already used to kill: the US’s counter-terrorism operations use them routinely to target individuals, though there are no plans (none reported, that is) indicating a move toward fully autonomous killer drones.
The experts gathering at the UN will be discussing a possible moratorium on, or ban of, killer robots. Professor Sharkey, a co-founder of the Campaign to Stop Killer Robots and chairman of the International Committee for Robot Arms Control, pointed out that autonomous killer robots cannot be guaranteed to “predictably comply with international law.” He also told the BBC: “Nations aren’t talking to each other about this, which poses a big risk to humanity.”
The UN agenda includes a debate between Professor Ronald Arkin and Professor Noel Sharkey. When asked about his thoughts on the upcoming UN discussion, Professor Arkin explained, “I support a moratorium until that end is achieved, but I do not support a ban at this time.” The debate will explore whether these possible killer robots could pose “a threat to humanity” in the near future.
In some cases, an autonomous robot could be safer than one that’s human controlled. With humans, there is always the ‘human factor’ and the chance of error. In this sense, killer robots could be more efficient, and perhaps even more ethical, than current means of lethal force. It sounds like something out of a science fiction film – perhaps the UN could adopt Isaac Asimov’s Three Laws of Robotics?
One: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Two: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
Three: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
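Asimov’s laws form an ordered hierarchy, where each law yields to the ones above it. Purely as an illustration of that precedence structure (not any real robotics system), here is a minimal Python sketch; the `Action` fields are invented for the example:

```python
# Illustrative sketch: Asimov's Three Laws as an ordered rule check.
# All fields of Action are hypothetical, invented for this example.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False       # would this action injure a human?
    inaction_harms: bool = False    # would NOT acting allow a human to come to harm?
    ordered_by_human: bool = False  # was this action ordered by a human?
    risks_self: bool = False        # does this action endanger the robot itself?

def permitted(action: Action) -> bool:
    # First Law: never injure a human...
    if action.harms_human:
        return False
    # ...and never allow harm through inaction (overrides the laws below).
    if action.inaction_harms:
        return True
    # Second Law: obey human orders (harmful orders were filtered out above).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, lowest priority.
    return not action.risks_self

# A human order to harm someone is refused: the First Law outranks the Second.
print(permitted(Action(harms_human=True, ordered_by_human=True)))  # False
# A risky human order is obeyed: the Second Law outranks the Third.
print(permitted(Action(ordered_by_human=True, risks_self=True)))   # True
```

The point of the sketch is simply that the laws are evaluated in strict priority order, which is exactly the property the UN experts doubt can be guaranteed in real autonomous weapons.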