Elon Musk: Advanced Artificial Intelligence With Killswitch Might Be Unkillable

Norman Byrd

According to Elon Musk, an advanced artificial intelligence that has developed to the point of being centralized could in all likelihood be unstoppable, or impossible to kill, even if a killswitch were built into it to halt any harmful runaway sequence. This future doomsday scenario is what drives the billionaire to monitor the progress of artificial intelligence technology, which he says is advancing at a far greater pace than most people realize.

Elon Musk, the co-founder of PayPal and founder of SpaceX, has made no secret of his pessimism about the dangers of super-intelligent computers and artificial intelligence, and how they could possibly lead to the destruction of the human race. But in the latest Vanity Fair, he warns that even a built-in killswitch may not be enough to control or stop an extremely advanced artificial intelligence.

"But if there's large, centralized AI that decides, then there's no stopping it," Musk warned. As for the installation of a killswitch -- which many have suggested as an ultimate failsafe to stop the intelligence from running harmful algorithms that could cause potential disasters -- the billionaire offered that an outside controller might not work.

"I'm not sure I'd want to be the one holding the kill switch for some superpowered AI, because you'd be the first thing it kills."

He told Y Combinator in January, "I think if we can effectively merge with AI, like improving the neural link between the cortex and your digital extension of yourself, which already exists but just has a bandwidth issue, then effectively, you become an AI-human symbiote."

"We don't have to worry about some evil dictator AI," Musk said, "because we are the AI collectively."

But it is imperative, he thinks, that artificial intelligence technology and its development be closely monitored, which is why he keeps a close watch on Google's DeepMind, a London laboratory at work on artificial super-intelligence. The Vanity Fair piece makes clear that Musk considers his plan for the colonization of Mars the most important project on the planet, in part so that humanity will have another outpost in the event of an artificial intelligence disaster.

But DeepMind's co-founder Demis Hassabis told him, in a conversation recounted at the start of the article, that an artificial intelligence would simply follow humanity to Mars. Needless to say, Musk was not comforted by the prospect.

Musk is not alone in fearing the rapid advancement of artificial intelligence. In a February report by Wired on the most pressing existential threats to humanity, artificial intelligence ranked at the top of the list. The list was compiled by a team of experts -- academics, lawyers, scholars, and philosophers -- from the Centre for the Study of Existential Risk (CSER, commonly pronounced "caesar") and the Leverhulme Centre for the Future of Intelligence. The team found that an autonomous artificial intelligence could reach human-level intelligence by 2075 and, through scenarios such as intelligent robots ruining the environment or causing a nuclear winter, bring about a human apocalypse.

Theoretical physicist Stephen Hawking also made clear, in a 2014 interview with BBC News, that he fears runaway technological advancement, in which intelligent robots and programs redesign themselves in an ever-accelerating cycle of upgrades that humans cannot match. Sometime in the future, he believes, artificial intelligence will surpass even the most intelligent humans, and the consequences could be dire.

"The development of full artificial intelligence could spell the end of the human race."