Stephen Hawking, one of Britain’s most famous living scientists, has said that efforts to create thinking machines pose a threat to our very existence. In fact, Hawking is not alone in this thinking. Numerous other scientists agree that artificial intelligence (AI) could be “more dangerous than nukes.”
According to the BBC, Hawking is concerned that artificial intelligence, unlike the human race, wouldn’t be limited by biological evolution. This leaves the human race open to being surpassed in intelligence by the very thing it created.
Hawking notes that the more basic forms of artificial intelligence can be very useful to the human race. However, he warns of a threshold: Hawking fears the day that humans create something that can match or surpass human intelligence.
“It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
Inventor Ray Kurzweil, director of engineering at Google, has also warned about the day that AI will surpass human intelligence. In fact, Kurzweil thinks the day is fast approaching. According to Fox News, Kurzweil predicts the day could come as early as 2045. Kurzweil even has a name for the point at which AI supersedes humans: “the Singularity.” Prominent billionaire entrepreneur Elon Musk also weighed in on AI, calling it “our biggest existential threat.”
Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.
— Elon Musk (@elonmusk) August 3, 2014
Rollo Carpenter, creator of Cleverbot, says there is no way to know for certain what would happen if AI outsmarted the human race. However, he is betting that it will be a positive force, not a negative one.
“We cannot quite know what will happen if a machine exceeds our own intelligence, so we can’t know if we’ll be infinitely helped by it, or ignored by it and sidelined, or conceivably destroyed by it.”
Not all scientists agree, however, that super AI would lead to the demise of the human race. Charlie Ortiz, head of AI at software company Nuance Communications, told Live Science that the concerns are “way overblown.”
“I don’t see any reason to think that as machines become more intelligent (which is not going to happen tomorrow) they would want to destroy us or do harm.”
Ortiz notes that the fears of AI stem from the idea that as a species (a man-made AI species in this case) becomes more intelligent, it becomes violent and controlling. However, Ortiz thinks a superintelligent artificial species would be more peaceful.
What do you think? Does artificial intelligence pose a risk great enough that it should be monitored heavily, or should scientists move forward with AI as quickly as possible for potential benefits to humanity?