MIT Creates An AI Psychopath Named ‘Norman’ In Order To Study AI ‘Gone Wrong’


MIT wanted to study what would happen to an AI “gone wrong,” so it created Norman. Norman, named after Norman Bates from Alfred Hitchcock’s infamous movie Psycho, was exposed to “the darkest corners of Reddit.” Researchers purposefully made sure that Norman would develop “psychopathic data processing tendencies.”

To ensure that Norman would turn into a psychopath AI, researchers exposed it to images of death found on Reddit. For comparison, another AI was fed ordinary images of things like cats, birds, and people. From there, researchers gave both AIs various Rorschach inkblots and asked them to describe what they saw, according to the Norman MIT website.

For inkblot No. 8, the regular AI described it as “a black and white photo of a red and white umbrella.” When Norman saw the same inkblot, however, he saw “man gets electrocuted while attempting to cross busy street.”

For another inkblot, the regular AI saw “a black and white photo of a small bird.” Norman, on the other hand, saw “man gets pulled into dough machine.”

Other tests yielded similarly morbid results. Where the regular AI saw “a close up of a vase with flowers,” Norman saw “a man is shot dead.”

One of the three researchers who conducted the study, Professor Iyad Rahwan, emphasized that the findings revealed the importance of data.

“Data matters more than the algorithm. It highlights the idea that the data we use to train AI is reflected in the way the AI perceives the world and how it behaves.”

This is an important point, considering that many people are quick to fault the algorithm when things go wrong, rather than the data used to train it.
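To make the “data over algorithm” point concrete, here is a purely illustrative Python sketch; it is not the researchers’ code, and every feature vector and caption in it is invented. The same trivial nearest-neighbor “captioner” is run twice, and the only thing that changes between the benign and the morbid output is the training data it was given.

```python
# Toy illustration (not MIT's actual system): the same nearest-neighbour
# "captioner" trained on two different caption sets describes the same
# ambiguous input very differently. All data below is made up.

def nearest_caption(features, training_set):
    """Return the caption of the training example closest to the input features."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training_set, key=lambda ex: distance(ex["features"], features))["caption"]

# Two training sets with identical feature vectors but different captions,
# standing in for everyday images versus a morbid corner of Reddit.
standard_data = [
    {"features": [0.2, 0.9, 0.1], "caption": "a small bird perched on a branch"},
    {"features": [0.8, 0.1, 0.3], "caption": "a red and white umbrella"},
]
morbid_data = [
    {"features": [0.2, 0.9, 0.1], "caption": "man gets pulled into a machine"},
    {"features": [0.8, 0.1, 0.3], "caption": "man is electrocuted crossing the street"},
]

ambiguous_inkblot = [0.25, 0.85, 0.15]  # the same "inkblot" shown to both

print("standard AI:", nearest_caption(ambiguous_inkblot, standard_data))
print("Norman-like AI:", nearest_caption(ambiguous_inkblot, morbid_data))
# The algorithm is identical in both calls; only the training data differs.
```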

Meanwhile, the subreddit used by the MIT researchers remains unnamed. According to the MIT Media Lab, that information is being withheld “due to its graphic content.”

BBC News elaborated on the implications of the study’s findings, given that AI is now used across many applications, including surveillance, digital assistants, and fraud prevention.

If bad data produces a faulty algorithm, the consequences can be very real. Such was the case last May, when an algorithm used by U.S. courts in sentencing was revealed to be biased against black prisoners.

Other technology has gone wrong in similar ways: Microsoft’s Tay chatbot was manipulated into making racist statements, and software trained on Google News articles showed sexist leanings, completing the statement “Man is to computer programmer as woman is to X” with the word “homemaker.”
