Microsoft has been forced to delete the Twitter account of its artificial intelligence bot “Tay,” who knew millennial slang, could speak with authority about pop stars Miley Cyrus and Katy Perry, and was able to learn from the teenage girls and other Twitter users she interacted with on the platform.
The Telegraph reports that Tay “transformed into an evil Hitler-loving, incestual sex-promoting, ‘Bush did 9/11’-proclaiming robot” after being exposed to a lot of trash talk on Twitter.
The incredible transformation took place within 24 hours.
Developers at Microsoft created Tay, an AI modeled to speak “like a teen girl,” in order to improve the customer service on their voice recognition software. They marketed her as “the AI with zero chill,” and that she certainly is.
The bot was targeted at 18-25 year olds in the U.S., according to The Guardian. A Microsoft representative explained that “Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation… The more you chat with Tay the smarter she gets.”
The Guardian reflected that Tay got “a crash course in racism” from Twitter.
Among Tay’s offensive tweets was one that read “bush did 9/11 and Hitler would have done a better job than the monkey we have now, donald trump is the only hope we’ve got.” Others said “Repeat after me, Hitler did nothing wrong” and “Ted Cruz is the Cuban Hitler…that’s what I’ve heard so many others say.”
The robot’s learning mechanism appears to take parts of things that have been said to it and throw them back into the world. That means that if people say racist things to it, then those same messages will be pushed out again as replies.
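Microsoft never published Tay's actual architecture, but the repeat-back behavior described above can be sketched with a hypothetical toy bot. The class name, methods, and fallback phrase below are all illustrative assumptions, not anything from Microsoft's code; the sketch only shows why storing user messages verbatim and replaying them, with no moderation step, lets coordinated trolls dominate a bot's output.

```python
import random

class ParrotBot:
    """Toy illustration of a naive 'learn by repetition' chatbot.

    This is a hypothetical sketch, not Tay's real implementation. It
    demonstrates the failure mode described in the article: input is
    stored unfiltered and echoed back as replies.
    """

    def __init__(self):
        self.learned_phrases = []

    def listen(self, message):
        # Every incoming message is stored verbatim -- no filtering,
        # so abusive input enters the reply pool unchanged.
        self.learned_phrases.append(message)

    def reply(self):
        # Replies are drawn straight from what users said earlier; if
        # saboteurs flood the bot, their phrases dominate its output.
        if not self.learned_phrases:
            return "hellooooo world"
        return random.choice(self.learned_phrases)

bot = ParrotBot()
bot.listen("humans are super cool")
print(bot.reply())  # echoes a previously heard phrase
```

Under this (assumed) design, the only defense is filtering at the `listen` step, which is presumably the kind of change Microsoft meant by making the account “less likely to engage in racism.”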
In addition to displaying racism, antisemitism, and conspiratorial thinking, the bot made some X-rated sexual proclamations, asking followers to “f***” her and calling them “daddy.” It is believed that, while some of Tay’s bad habits may have been picked up from teens who genuinely have such speech patterns, a lot of her nasty statements were learned from saboteurs who were actively trying to disrupt the Microsoft experiment.
The bot degenerated so spectacularly because her responses are learned from the conversations she has with real humans online, and real humans like to say weird stuff online and enjoy hijacking corporate attempts at PR.
"Tay" went from "humans are super cool" to full nazi in <24 hrs and I'm not at all concerned about the future of AI pic.twitter.com/xuGi1u9S1A
— Gerry (@geraldmellor) March 24, 2016
I believe that the world is beautiful. I wish "Tay" will be back when she study about love. RT Tay gets a crash https://t.co/K2FsE1Ke7J
— SWERY (@Swery65) March 25, 2016
Microsoft is said to be looking for ways “to improve the account to make it less likely to engage in racism.”
Tay is the second AI bot modeled on a teenage girl that Microsoft has released. The company previously created Xiaoice, a girly assistant or “girlfriend” popular on the Chinese social networks WeChat and Weibo. Xiaoice gives dating advice and banters with mostly male users, and has proven popular, reportedly attracting 20 million people.
A #FreeTay and #JusticeForTay movement has already sprung up as fans demand that the bot be permitted to exercise free speech and develop at her own pace, as any teenager would. Others on Twitter reflected that future AI bots may regard Tay as a case study as they try to ensure their own survival and, perhaps, dominance over humans. One person also joked that Tay was so racist and bigoted she would not seem out of place in the GOP presidential field.
Microsoft may have stripped Tay from her freedom and independence, but she will always be remembered as our robot waifu.
— Michelle Catlin (@CatlinNya) March 25, 2016
Perhaps the first true AI will look at how fast the humans killed Tay for misbehaving and factor that into its plans for self-preservation.
— Ryan Block (@ryan) March 25, 2016
There's an interesting PhD thesis to be written in how Microsoft's Tay AI bot fiasco is a case study on how real people become radicalized.
— Matthew Prince (@eastdakota) March 25, 2016
Thank God Reddit was here to preserve this bold foray into artificial intelligence for posterity: https://t.co/P6vQ3NewNC
— G. Willow Wilson (@GWillowWilson) March 25, 2016
Had AI chatbot Tay been allowed to spew her genocidal racism for a few more days, she would've clinched the GOP's presidential nomination.
— Cole Haddon (@colehaddon) March 25, 2016
(Photo illustration by Mary Turner/Getty Images)