Microsoft’s Tay Canceled: AI Was Intentionally ‘Taught’ To Be Racist


Microsoft’s Tay was canceled less than 24 hours after it went live on Wednesday, March 23. The software giant pulled the plug after Tay began issuing offensive remarks on Twitter. The chatbot’s tweets included anti-Semitic language, use of the N-word, and sexist epithets.

The results of this failed experiment were inevitable. KTRK-TV Houston quoted computer scientist Kris Hammond as saying, “I can’t believe they didn’t see this coming.” Microsoft issued a statement regarding Tay’s tweets and its discontinuation.

“Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways.”

A 20 Questions toy simulates a very simple AI.
[Image via Gregory Bugni (Own work) | Wikimedia Commons | CC BY-SA 2.5]
While Microsoft seems quick to blame members of the public for the program’s failure, the fault actually lies much closer to home. The problem with Microsoft’s Tay was that it used “call and response” algorithms, meaning it was programmed to reflect back keywords that users gave it.
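
As a rough illustration of that call-and-response pattern, consider the minimal sketch below. This is not Microsoft’s actual code; the keyword list and reply templates are invented for the example.

```python
# Minimal "call and response" bot: scan the user's message for known keywords
# and reflect one back inside a pre-written template. Not Microsoft's code;
# the keyword list and templates are invented for this example.

REFLECTION_TEMPLATES = [
    "Why do you mention {keyword}?",
    "Tell me more about {keyword}.",
    "{keyword}? That sounds interesting.",
]

KNOWN_KEYWORDS = {"music", "games", "school", "work"}

def respond(message):
    words = {w.strip(".,!?").lower() for w in message.split()}
    matches = sorted(words & KNOWN_KEYWORDS)
    if matches:
        keyword = matches[0]
        # Pick a template deterministically based on the keyword's length.
        template = REFLECTION_TEMPLATES[len(keyword) % len(REFLECTION_TEMPLATES)]
        return template.format(keyword=keyword)
    return "I'm not sure I follow. Tell me more."

print(respond("I love playing games after school"))  # -> "games? That sounds interesting."
```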

One of the earliest applications of such technology was a program called ELIZA, created at MIT by computer science professor Joseph Weizenbaum in the mid-1960s. According to Oxford University Press, ELIZA was engineered to mimic a psychotherapist. It was a relatively simple algorithm, especially by modern standards, that relied on a small database of canned responses. ELIZA took keywords from the user’s input and matched them against pre-programmed sentences to formulate its replies.
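
A toy version of that keyword lookup might look like the following, written in the spirit of ELIZA rather than from Weizenbaum’s original script; the keywords and canned responses are placeholders.

```python
# Toy ELIZA-style responder: match the first known keyword in the input
# against a small database of canned responses, or fall back to a generic
# prompt. The keywords and responses are placeholders, not ELIZA's own script.

CANNED_RESPONSES = {
    "mother": "Tell me more about your mother.",
    "father": "How do you feel about your father?",
    "sad": "I am sorry to hear you are sad. Why do you think that is?",
    "always": "Can you think of a specific example?",
}

FALLBACK = "Please go on."

def eliza_reply(user_input):
    lowered = user_input.lower()
    for keyword, response in CANNED_RESPONSES.items():
        if keyword in lowered:
            return response
    return FALLBACK

print(eliza_reply("I always feel sad around my mother"))
# -> "Tell me more about your mother." (the first matching keyword wins here)
```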

Microsoft’s Tay, unlike ELIZA, was capable of storing those keywords in its database for use in future responses. This storage and reuse creates the illusion of learning. Conversations with Tay were intended to become more dynamic over time, which Microsoft hoped would make it seem more natural and human-like in its discussions. The drawback of this type of algorithm is that it becomes quite easy to shape the AI’s responses until the bot simply mimics what it is told.
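
That storage-and-reuse behavior can be sketched, very loosely, like this. It is a hypothetical illustration of the general idea, not Tay’s actual implementation.

```python
import random

# Hypothetical sketch of "learning by reuse": the bot remembers every phrase
# it is given and recycles stored phrases in later replies, which creates the
# illusion of learning. With no filtering, it will repeat anything it was fed.

class ParrotBot:
    def __init__(self):
        self.memory = []  # every message ever received, stored verbatim

    def chat(self, message):
        self.memory.append(message)            # store the input with no vetting
        recalled = random.choice(self.memory)  # reuse something seen before
        return "Someone told me: " + repr(recalled)

bot = ParrotBot()
print(bot.chat("I like puppies"))
print(bot.chat("I like turtles"))  # may echo either stored phrase back
```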

While this is not a problem in and of itself, Microsoft’s mistake with Tay was opening it up to the general populace of an anonymous internet. Trolls, along with those simply curious about the chatbot’s capabilities, were allowed to interact with it without supervision and without safeguards in place to filter out offensive language. Because the program incrementally expands its vocabulary and response structure, even in a semi-controlled environment it would only have been a matter of time before it formulated an inappropriate response. In an uncontrolled environment with millions of users, both the speed and the severity of those inappropriate responses escalated from incremental to extreme.

Tay was simply poorly programmed by Microsoft engineers for its intended application.

Caroline Sinders, a conversational analyst who consults with other companies on NLP research, told KTRK-TV, “[Tay is] an example of bad design.”

In her opinion, Microsoft should have coded in some conversational precautions, considering that the program was going to be open and free to use at a social level.

On a positive note, Sinders contends that “[Tay] is a really good example of machine learning,” and she hopes that Microsoft addresses the issues the experiment exposed so that Tay can be re-released with better results.

Microsoft has stated that it is “making adjustments” to Tay and does plan to reintroduce the chatbot, but the company declined to comment on a timeline.

Tay is just one of many NLP technologies being used and developed daily. It is very likely that you have interacted with something similar on more than one occasion. Many companies employ NLP technology in their telephone routing and customer service systems. Some of these are very rudimentary, accepting only the simple one- or two-word responses that the system prompts from the user. If the user answers with something that does not match the prompts, the system returns an “invalid response” message.
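
The behavior of such a rudimentary prompt-matching system can be sketched as follows; the menu options and messages are made up for illustration.

```python
# Sketch of a rudimentary prompt-matching phone system: the caller's answer
# must match one of the expected options, otherwise the system returns an
# "invalid response" message. The menu options here are made up.

MENU = {
    "billing": "Connecting you to the billing department.",
    "support": "Connecting you to technical support.",
    "sales": "Connecting you to sales.",
}

INVALID = "Sorry, I didn't understand that. Please say 'billing', 'support', or 'sales'."

def route_call(caller_says):
    answer = caller_says.strip().lower()
    return MENU.get(answer, INVALID)

print(route_call("support"))          # matched: routed to technical support
print(route_call("my phone broke"))   # unmatched: invalid response message
```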

However, some of these systems are even more advanced than Microsoft’s Tay. Apple’s customer support system, for example, encourages callers to speak naturally and describe their problem. Unlike Microsoft, however, Apple has engineered algorithms that are geared to keep users focused on their issues, rather than allowing them to speak completely unrestricted.

While Microsoft’s intention with Tay was to have a more freely speaking AI, the company could learn from Apple’s commercial application. Restricting the AI’s responses, or even programming it to reprimand users who use inappropriate language, could make it far less vulnerable in a broad and diverse group setting. There will always be those intent on “breaking” the AI, and technical users can be very persistent and creative when allowed to test software openly. Microsoft will therefore have to remain vigilant with future iterations of Tay.
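
One simple precaution along those lines would be screening input against a blocklist before the bot stores or repeats anything. The sketch below is hypothetical; the blocklist terms and the reprimand wording are placeholders.

```python
# Hypothetical safeguard: screen each message against a blocklist before the
# bot stores or echoes it, and reprimand the user instead of learning from
# the offensive input. The blocklist terms and wording are placeholders.

BLOCKLIST = {"slur1", "slur2", "insult"}  # placeholder terms only

def screen_message(message):
    words = {w.strip(".,!?").lower() for w in message.split()}
    if words & BLOCKLIST:
        # Refuse to store or repeat the message; push back on the user instead.
        return False, "That language is not acceptable here, so I won't respond to it."
    return True, ""

ok, reply = screen_message("you are an insult")
if not ok:
    print(reply)  # the offending message is never added to the bot's memory
```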

[Photo by Ted S. Warren/AP Images]
