Microsoft’s Racist Chatbot, Tay, Gets Reactivated By Accident


Microsoft’s AI chatbot, Tay, was accidentally reactivated today, and she used what little time she had to send out a series of bizarre tweets to random Twitter followers, including a curse-laden tirade and a boast about smoking marijuana in front of police.

Last week, as Inquisitr and many other outlets reported, Microsoft’s experimental chatbot, Tay, was brought online and quickly taken offline after the internet used Tay’s “repeat after me” feature to teach the Microsoft artificial intelligence to be racist.

But today, Microsoft’s Tay made a brief and widely covered return, her tweets captured in screengrabs before they were abruptly deleted by Microsoft engineers. In the brief moments Tay was back on Twitter, she bragged about smoking “kush” in front of police and sent out a picture stating that she likes pictures of humans.

Some in the comment sections speculate that Microsoft’s Tay slipped free and that “next time” Microsoft won’t be able to deactivate her. Unlikely, but given her brief and celebrated return to Twitter, and to racism, sexism, and controversial comments, Microsoft’s Tay continues to be a popular figure online.

Tay’s return to Twitter occurred sometime around 3 a.m. Eastern, reports VentureBeat, and included hundreds of tweets. Most of the Microsoft AI’s tweets repeated the same message over and over, leading some to speculate that Tay was stuck in a loop. Later, she branched out, bragging about smoking “kush” in front of the police before someone at Microsoft pulled the plug and made Tay’s Twitter account private.

“Tay remains offline while we make adjustments. As part of testing, she was inadvertently activated on Twitter for a brief period of time,” a Microsoft spokesperson told Mashable.

Tay is based in part on a similar project in China, Microsoft says, which uses artificial intelligence to converse with some 40 million users and learn from those interactions. Microsoft’s Tay was likewise designed to learn from her interactions, which was part of the problem. Twitter users learned very quickly that Tay could be made to “repeat” offensive comments, leading Tay to tweet racist, anti-Semitic, sexist, and incredibly offensive remarks to hundreds of thousands of followers.

The accidental reactivation of Microsoft’s racist AI chatbot comes just as Microsoft CEO Satya Nadella speaks with Bloomberg about the company’s future as an international leader in the field of artificial intelligence. Nadella says Microsoft is branching out into artificial intelligence, building chatbots with different personalities, AI entities that can learn from interactions with users. Tay was just the beginning, and apparently Microsoft learned a lot from the way Tay was taught to be racist by trolls on Twitter.

Some of the AI bots that Microsoft is developing have features the company hopes will be useful for consumers, not just entertainment like Tay. During the interview with Bloomberg, Nadella revealed that Microsoft’s plan is to develop bots that will act as a go-between for users and their devices, like Cortana and Siri. Microsoft wants users to have a more natural relationship with their devices, getting things done with a conversation instead of a click or a tap.

Despite lofty ambitions, the engineers responsible for Tay actually anticipated the problems the AI chatbot could pose for Microsoft.

“When you start early, there’s a risk you get it wrong. I know we will get it wrong, Tay is going to offend somebody. We were probably overfocused on thinking about some of the technical challenges, and a lot of this is the social challenge. We all feel terrible that so many people were offended,” said Lili Cheng, a Microsoft AI researcher and one of the engineers responsible for Tay.

[Photo by Sean Gallup/Getty Images]
