Microsoft’s artificially intelligent chatbot, Tay, gets smarter the more it interacts with humans. Her medium for learning? Social media platforms like Twitter, Kik, and GroupMe. Apparently, though, “she” was learning all the wrong things and was taken down for basically turning into a “Hitler-loving sex robot within 24 hours.”
Twitter users can chat with Tay, and she can reply or react to different topics. Her tweets read much like how humans talk online: casual, playful, funny. Some words are shortened, some are misspelled, and some are just plain Internet-speak.
For example, she can use “erm mer gerd” ‘properly’ in a tweet. Tay’s replies can sometimes be witty, and she can apparently learn to deliver replies that have a bit of sting to them. She also uses emojis extensively and can even play a simple game of guessing what a series of emojis means.
To kick off a simple conversation with Tay, one can ask her to tell a joke (the AI was built with editorial input from comedians), tell a story, read a horoscope, or even provide “fun and honest comments on any pic you send.” Tay is aware that she only learns by talking to humans, and even addresses those who chat with her as “Humans.”
According to its website, tay.ai, Tay was built to research conversational understanding by “mining public data.” The development team targets users aged 18 to 24 in the U.S. who use mobile social media services. The data exchanged between approved accounts and Tay may be retained for about a year to aid her progress in “learning” how to talk to humans naturally.
Tay currently has more than 32,400 followers on her Twitter account and has already sent around 92,600 tweets. But for now, Tay has signed off: “Phew. Busy day. Going offline for a while to absorb it all. Chat soon”…