Artificial intelligence is making waves once again after Google’s AlphaGo defeated Go grandmaster Lee Se-Dol, the first time a computer program has beaten a top professional at the game. While many are calling gains in AI a win for society, others are voicing concerns about a future in which computers can outperform humans at almost every conceivable task. Whether it is driving a car or playing a simple game of chess, artificial intelligence is continually one-upping humanity.
The latest Go grandmaster defeat is just the icing on the cake. As a result, some are saying it is time to look at the practical application of AI technology while understanding that AI is “unpredictable” and “immoral.” This means we cannot trust AI programs to make the “moral” decision when human life and consequences are at stake. To put it bluntly, Jonathan Tapson says we need to treat AI as we would any known “sociopathic genius.”
Writing for the Daily Mail, Jonathan Tapson, a professor and the director of the MARCS Institute for Brain, Behaviour and Development, notes that as artificial intelligence makes considerable gains, we should look more closely at its implications for our communities, understanding that while AI is meant to emulate human behavior, the technology lacks the ability to empathize and relies solely on learned behaviors.
Why AlphaGo’s dominance actually means that AI isn’t ready to replace managers https://t.co/sYpiNBCNFr
— Harvard Biz Review (@HarvardBiz) March 18, 2016
Concerns regarding AI are long-running, with Elon Musk notably stating that AI is the “greatest existential threat” to humanity at this time. However, with Google’s AlphaGo beating a Go grandmaster for the first time, the conversation is heating up once again. What is the big deal about AI beating a human at Go? People have long held that humans have an advantage in Go because it requires intuition, which computers lack. That theory was laid to rest when Google’s AlphaGo beat grandmaster Lee Se-Dol.
“Go has long been held up as requiring levels of human intuition and pattern recognition that should be beyond the powers of number-crunching computers.”
Google notes on their official blog that the win was a surprise not only to Go fans but even to the AlphaGo creators.
“To everyone’s surprise, including ours, AlphaGo won four of the five games. Commentators noted that AlphaGo played many unprecedented, creative, and even “beautiful” moves. Based on our data, AlphaGo’s bold move 37 in Game 2 had a 1 in 10,000 chance of being played by a human. Lee countered with innovative moves of his own, such as his move 78 against AlphaGo in Game 4—again, a 1 in 10,000 chance of being played—which ultimately resulted in a win.”
However, these unprecedented moves are exactly what concern Tapson. With moves that had just a one-in-10,000 chance of being played by a human, it is evident that AlphaGo deviated from traditional human play to master the game beyond what humanity has managed so far. In fact, one of AlphaGo’s moves was so surprising that Lee Se-Dol had to leave the room for 15 minutes to regain his composure before continuing, a move a commentator said would never have been made by a human player.
“The problem is the AI will explore the entire space of possible moves and strategies in a way humans never would, and we have no insight into the methods it will derive from that exploration. In the second game between Lee Se-Dol and AlphaGo, the AI made a move so surprising – ‘not a human move’ in the words of a commentator – that Lee Se-Dol had to leave the room for 15 minutes to recover his composure.”
On the surface, it may seem that the AI is simply taking a task humans perform, in this case playing Go, and doing it better. According to Tapson, however, there is more to it. The problem is that computers are not limited by human experiences or expectations and can see possibilities that humans may not even know exist.
“The machine is not constrained by human experience or expectations. Until we see an AI do the utterly unexpected, we don’t even realize that we had a limited view of the possibilities.”
For some tasks, such as playing chess or Go, this is a good thing. For others, such as decisions that require a moral compass, AI could make an “immoral” choice because it can only imitate emotions, not actually feel them.
“Like sociopaths and psychopaths, AIs may be able to learn to imitate empathetic and ethical behaviour, but we should not expect there to be any moral force underpinning this behaviour, or that it will hold out against a purely utilitarian decision.”
With AI unable to truly empathize, is it realistic to expect AI to perform human tasks to the standards of human morality? Tapson says there is an easy way to determine whether AI is better suited for a job than a human: ask yourself one simple question, “Would I put a sociopathic genius in charge of this process?” If the answer is no, artificial intelligence should be reconsidered.
What do you think about Jonathan Tapson’s commentary on the rise of artificial intelligence and the concerns about “immoral” AI tech?
[Image via Shutterstock]