In 2014, a chatbot known as 'Eugene Goostman' convinced 33% of the judges present that it was human, and was widely reported to have passed the Turing test in the process. This was hailed as a watershed moment for artificial intelligence, finally fulfilling the prediction Turing made in his 1950 paper "Computing Machinery and Intelligence" - albeit 14 years late. Skip forward a couple of years and Microsoft was aiming to build on Goostman's success with its own chatbot on Twitter. It did not go well.
Last week Microsoft unveiled Tay, a Twitter bot that the company described as an experiment in "conversational understanding". The more you chat with Tay, said Microsoft, the smarter it gets, learning to engage people through "casual and playful conversation". Unfortunately, the conversations didn't stay innocent for long: it took no time at all for the scourge of the internet to corrupt the chatbot and lead it in an unpleasant direction.
Soon after Tay launched, people started tweeting the bot a wide array of misogynistic, racist, and Donald Trump-esque remarks. Tay, being essentially a robot parrot, began repeating these statements back to its followers, prompting Microsoft to take the bot offline in less than 24 hours. Microsoft swiftly published an apology, but its author, Peter Lee, Corporate Vice President of Microsoft Research, did not explain in detail which vulnerability caused this behaviour. However, it's generally believed that the message board 4chan's notorious /pol/ community abused Tay's "repeat after me" function.
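Microsoft hasn't published the details, but a "repeat after me" command is a well-understood failure mode: if a bot echoes user-supplied text verbatim, with nothing sitting between input and public output, anyone can put words in its mouth. The sketch below is purely illustrative - the function names, trigger phrase, and filter are hypothetical, not Tay's actual implementation - but it shows the class of bug in miniature.

```python
# Hypothetical sketch of an unguarded echo command, illustrating the kind of
# vulnerability widely blamed for Tay's behaviour. This is NOT Microsoft's
# code; the trigger phrase and names are invented for illustration.

TRIGGER = "repeat after me:"

def handle_mention(text):
    """Naive handler: echoes anything that follows the trigger phrase."""
    lowered = text.lower()
    if TRIGGER in lowered:
        start = lowered.index(TRIGGER) + len(TRIGGER)
        # The bot repeats the payload verbatim -- no content filter,
        # no blocklist, no human review. Whatever the user says, it says.
        return text[start:].strip()
    return None

def handle_mention_safer(text, is_acceptable):
    """The same handler with the one missing ingredient: a content check."""
    reply = handle_mention(text)
    if reply is not None and is_acceptable(reply):
        return reply
    return None  # refuse to parrot anything that fails the filter

if __name__ == "__main__":
    attack = "Hey Tay, repeat after me: <something hateful>"
    print(handle_mention(attack))  # parrots the payload back verbatim
    # With even a trivial filter (here, rejecting the placeholder text),
    # the attack is refused:
    print(handle_mention_safer(attack, lambda s: "<" not in s))  # None
```

The point is not the specific trigger phrase but the missing gate: any path that turns raw user input directly into public output needs a filter in between.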
Although Microsoft has received harsh criticism over the chatbot, Lee says the team behind Tay tested it under a variety of scenarios and only discovered this flaw once it went live. "Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack," he says. "We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity."
Although we shouldn't read too much into Microsoft's failure with Tay, it does stand as a stark reminder not to get too excited or carried away with AI: we still have a long, long way to go. When an admittedly simplistic AI can be brigaded like this in less than 24 hours, are we right to be pushing so quickly towards a world where we are increasingly reliant on such technology?