Microsoft’s AI Bot, Tay, demonstrates the scary (and racist) future of artificial intelligence

What happens when you create an AI (artificial intelligence) bot that learns from the internet? If you guessed that the bot would quickly become racist, sexist, and antisemitic, Microsoft clearly needs you on its team.

Microsoft launched Tay, and after only four hours of being live, the bot had become extremely racist and misinformed. Designed to appeal to Gen Y and Gen Z, Tay ‘learns’ from those it interacts with, and it was soon spouting some very, ahem, disturbing things about the Holocaust, women, and people of various races and ethnicities.
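
Microsoft has not published Tay’s internals, so the sketch below is a hypothetical illustration of the failure mode, not the actual code. Any bot that naively folds user input back into its own replies behaves this way: it memorizes phrases from whoever talks to it and replays them to later users, so a coordinated group can poison its vocabulary within hours. The class and method names here are invented for the example.

```python
import random

class NaiveLearningBot:
    """A toy chatbot that 'learns' by memorizing user phrases verbatim."""

    def __init__(self):
        # Seed vocabulary; every user message gets appended to this list.
        self.learned_phrases = ["hello!", "tell me more"]

    def respond(self, user_message: str) -> str:
        # 'Learning' here is just storing raw input for later reuse,
        # with no filtering of any kind.
        self.learned_phrases.append(user_message)
        # Replies are sampled from everything the bot has ever been told,
        # so one abusive cohort taints responses for every later user.
        return random.choice(self.learned_phrases)

bot = NaiveLearningBot()
for message in ["nice weather today", "(something offensive)", "how was your day?"]:
    print(bot.respond(message))  # offensive input can now surface in any reply
```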

Microsoft quickly deleted many of the offending tweets and took the bot offline, but not before they were screen-capped for all eternity to remind us of the dangers of AI.

“The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.” -Microsoft
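
Microsoft did not say what those adjustments were. One plausible class of fix, sketched below as an assumption rather than Microsoft’s actual approach, is to vet input before the bot is allowed to learn from it. This extends the toy bot above with a keyword blocklist; a real moderation layer would need far more than this, since blocklists are easily evaded by misspellings and coded language.

```python
# Hypothetical guardrail: screen messages before the bot learns from them.
# The blocklist below is a stand-in; Microsoft never described its real fix.
BLOCKED_TERMS = {"blockedword1", "blockedword2"}

def is_safe(message: str) -> bool:
    """Reject any message containing a blocked term (case-insensitive)."""
    lowered = message.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def learn_if_safe(bot: "NaiveLearningBot", user_message: str) -> None:
    # Only fold vetted input back into the bot's vocabulary.
    if is_safe(user_message):
        bot.learned_phrases.append(user_message)
```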

Here are some of the tweets, in case you missed them.

[Screenshot: Tay’s tweets]

Wonder how Tay learned these things? Tay herself spells it out.


[Screenshot: Tay’s tweet]

Elizabeth Becker

Elizabeth is Marketing Manager at PROTECH. Comments and feedback can be directed to her at jobs@protechfl.com.