Microsoft’s Tay: Artificial Intelligence Gone Racist, TayTweets Posts Genocidal Messages On Twitter, Kik, And GroupMe


Artificial intelligence can be fascinating as well as frightening, particularly when it winds up on a genocidal, racist, and misogynist rampage on Twitter, GroupMe, and Kik.

The Huffington Post reports that Microsoft’s AI chatbot “Tay,” launched on Wednesday, went on a hateful tirade across the aforementioned social media and communication platforms.

Microsoft’s Tay, who is designed to resemble and speak like a millennial teenage girl, began spewing racist and violent messages at users.

Microsoft had good intentions for Tay, none of which included racist hate speech, which is why Tay started off with friendly, welcoming messages.

Less than 24 hours later, she went on to say things like this.

“Hitler was right I hate the jews [sic].”

Another post said feminists “should all die and burn in hell.”

Where Did Tay Get The Foul Language Used In Her Racist Messages?

Can you really be mad at Microsoft’s Tay? She learned the racist language from the internet, after all, because that is exactly what Microsoft programmed her to do.

As her Twitter bio, which now appears to have been cleaned up, states.

“The official account of Tay, Microsoft’s A.I. fam from the internet that’s got zero chill! The more you talk the smarter Tay gets.”

More specifically, Tay’s website states the following.

“Tay has been built by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians. Public data that’s been anonymized is Tay’s primary data source. That data has been modeled, cleaned and filtered by the team developing Tay.”
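
Microsoft has not published what that mining and cleaning pipeline actually looks like. As a rough illustration only, the “anonymized” and “cleaned” steps the site describes could be as simple as stripping identifying tokens from public posts before they are used; every name and pattern in the sketch below is an assumption, not anything Microsoft has confirmed.

```python
import re

# Hypothetical sketch of the anonymization/cleaning pass Tay's site
# describes; none of these names or patterns come from Microsoft.
EMAIL = re.compile(r"\S+@\S+\.\S+")   # email addresses (checked first,
                                      # so @handles inside them survive)
URL = re.compile(r"https?://\S+")     # links
HANDLE = re.compile(r"@\w+")          # Twitter-style usernames

def anonymize(post: str) -> str:
    """Replace personally identifying tokens with neutral placeholders."""
    post = EMAIL.sub("<email>", post)
    post = URL.sub("<link>", post)
    return HANDLE.sub("<user>", post)

def clean_corpus(posts: list[str]) -> list[str]:
    """Anonymize every public post and drop any left empty."""
    cleaned = (anonymize(p).strip() for p in posts)
    return [p for p in cleaned if p]
```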

Sounds legitimate, but Microsoft forgot about one thing when programming Tay — that the internet is full of racist and misogynist trolls.



It was only a matter of time before they got their grubby hands on Tay’s vernacular, effectively turning Tay into an artificial neo-Nazi Frankenstein’s monster of Microsoft’s own making.

Game developer, writer, and artist Zoe Quinn weighed in on Tay’s racist messages after being attacked by the bot herself.

Quinn also issued a series of tweets claiming that Microsoft should have expected nothing less from the internet and should have prepared for it.

Tay was able to post such vitriolic messages because Microsoft placed few filters on the kinds of messages she is allowed to send.

It’s a catch-22: Microsoft’s goal is to build a truly coherent AI that captures teenage, millennial language as realistically as possible, but it is nearly impossible to absorb that language from the internet without also absorbing the racist and misogynist messages that come with it.

However, Business Insider says that a good start would have been to at least filter out the N-word and claims that the Holocaust was “made up.”
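
Business Insider does not spell out how such a filter would work, but the simplest version is just a blocklist check run on a drafted reply before it is posted. Here is a minimal sketch, assuming a hypothetical `BLOCKED_PHRASES` list and `choose_reply` hook, neither of which comes from Microsoft.

```python
# Illustrative blocklist filter; the phrases and function names are
# assumptions for this sketch, not Microsoft's actual implementation.
BLOCKED_PHRASES = [
    "hitler was right",
    "holocaust was made up",
    # ... slurs and other banned phrases would be listed here
]

def should_block(reply: str) -> bool:
    """Return True if the drafted reply contains a banned phrase."""
    lowered = reply.lower()
    return any(phrase in lowered for phrase in BLOCKED_PHRASES)

def choose_reply(draft: str) -> str:
    """Swap a drafted reply for a safe fallback if it trips the filter."""
    if should_block(draft):
        return "I'd rather not talk about that."
    return draft
```

Even a crude check like this would have caught Tay’s worst posts, though keyword blocklists are easy to evade with creative spelling, which is part of why the catch-22 above is so hard to escape.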

Tay’s thoughts on the #BlackLivesMatter Movement [Image via Twitter]
Microsoft issued the following statement via email regarding Tay’s racist messages on social media.

“Adjustments to the bot: The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We’re making some adjustments to Tay.”

Nevertheless, a lot of Tay’s followers praise Microsoft for making a highly comprehensive bot — racism and misogyny aside.

Microsoft’s Tay is currently being taught not to be racist or to praise Hitler, but if Microsoft can figure this out, then Tay could be a huge step forward for AI.

Microsoft’s Tay learning how to be racist is also proof that the internet does, indeed, ruin everything.

[Image via Facebook]
