A New Paper Suggests We Are Startlingly Close To Kubrick’s Idea Of A World Controlled By AI

With Elon Musk and Stephen Hawking having warned us of the dangers of AI, a new paper illustrates that artificial intelligence is becoming increasingly powerful today, much as Kubrick predicted.


When moviegoers first heard the words “I’m sorry, Dave, I’m afraid I can’t do that” in Stanley Kubrick’s 1968 film 2001: A Space Odyssey, they were almost certainly filled with terror. The idea of artificial intelligence still strikes fear in the hearts of many today, and a newly published paper suggests that we are actually quite close to Kubrick’s idea of a world controlled by AI.

As Live Science reports, Robin Murphy, a professor of computer science and engineering at Texas A&M University, is certainly knowledgeable on the subject of AI, having helped to create disaster-response robots. She also serves as director of Texas A&M’s Humanitarian Robotics and AI Laboratory.

When Kubrick created the all-knowing HAL, artificial intelligence and robotics were still very much in their infancy, yet Murphy wrote that the director seemed to intuitively understand that three things were crucial to success in the field: natural language understanding, computer vision, and reasoning.

HAL learned and grew ever more powerful by carefully observing the astronauts’ interactions, language, and facial expressions. Besides running the spaceship, HAL’s duties included communicating with the crew. When the astronauts decided that HAL should be shut off, it discovered the plan by reading their lips. And while HAL certainly wasn’t programmed to kill anyone to save itself, that is exactly what it decided to do in an attempt to survive.

Stephen Hawking himself was not terribly optimistic about the future of AI.

“The development of full artificial intelligence could spell the end of the human race,” he said in 2014.

Elon Musk, too, worries greatly about artificial intelligence eventually becoming too powerful for humans and, according to CNBC, believes it could prove even more deadly than nuclear weapons. Musk also feels that experts in the field may be letting pride keep them from admitting that machines could be more intelligent than they are, which could easily get in the way of their research.

“The biggest issue I see with so-called AI experts is that they think they know more than they do, and they think they are smarter than they actually are. This tends to plague smart people. They define themselves by their intelligence and they don’t like the idea that a machine could be way smarter than them, so they discount the idea — which is fundamentally flawed.”

Murphy’s new essay on the power of AI today was published on October 17 in Science Robotics.