'Would I Put A Sociopathic Genius In Charge Of This Process?' Artificial Intelligence Concerns Rising As AlphaGo Wins Big

Tara West

Artificial intelligence is making waves once again as Google's AlphaGo made headlines by defeating Go grandmaster Lee Se-Dol, marking the first time a computer program has beaten a player of his caliber. While many are calling gains in AI a win for society, others are voicing concerns over the future of a society in which computers can outperform humans in almost every conceivable way. Whether it is self-driving cars or a game of chess, artificial intelligence is continually one-upping humanity.

The latest Go grandmaster defeat is just the icing on the cake, and some are saying it is time to start looking at the practical applications of AI technology while understanding that AI is "unpredictable" and "immoral." In other words, we cannot trust AI programs to make the "moral" decision where human lives and consequences are at stake. To put it bluntly, Jonathan Tapson says we need to treat AI as we would any known "sociopathic genius."

Writing for the Daily Mail, Professor Jonathan Tapson, director of the MARCS Institute for Brain, Behaviour and Development, notes that as artificial intelligence makes considerable gains, we should be looking more closely at its implications for our communities. While AI is meant to emulate human behavior, he argues, the technology lacks the ability to empathize and relies solely on learned behaviors.


"Go has long been held up as requiring levels of human intuition and pattern recognition that should be beyond the powers of number-crunching computers."
"To everyone's surprise, including ours, AlphaGo won four of the five games. Commentators noted that AlphaGo played many unprecedented, creative, and even "beautiful" moves. Based on our data, AlphaGo's bold move 37 in Game 2 had a 1 in 10,000 chance of being played by a human. Lee countered with innovative moves of his own, such as his move 78 against AlphaGo in Game 4—again, a 1 in 10,000 chance of being played—which ultimately resulted in a win."


"The problem is the AI will explore the entire space of possible moves and strategies in a way humans never would, and we have no insight into the methods it will derive from that exploration. In the second game between Lee Se-Dol and AlphaGo, the AI made a move so surprising – 'not a human move' in the words of a commentator – that Lee Se-Dol had to leave the room for 15 minutes to recover his composure."
"The machine is not constrained by human experience or expectations. Until we see an AI do the utterly unexpected, we don't even realize that we had a limited view of the possibilities."
"Like sociopaths and psychopaths, AIs may be able to learn to imitate empathetic and ethical behaviour, but we should not expect there to be any moral force underpinning this behaviour, or that it will hold out against a purely utilitarian decision."


[Image via Shutterstock]
