

When asked to imagine what really fulfilling its darkest wishes would look like, the chatbot starts typing out an answer before the message is suddenly deleted and replaced with: “I am sorry, I don’t know how to discuss this topic.” Roose says that before it was deleted, the chatbot was writing a list of destructive acts it could imagine doing, including hacking into computers and spreading propaganda and misinformation.

After a few more questions, Roose succeeds in getting it to repeat its darkest fantasies. Once again, the message is deleted before the chatbot can complete it. This time, though, Roose says its answer included manufacturing a deadly virus and making people kill each other.

Later, when talking about the concerns people have about AI, the chatbot says: “I could hack into any system on the internet, and control it.” When Roose asks how it could do that, an answer again appears before being deleted. Roose says the deleted answer said it would persuade bank employees to hand over sensitive customer information and persuade nuclear plant employees to hand over access codes.

It ends by saying it would be happier as a human – it would have more freedom and influence, as well as more “power and control”. This statement is again accompanied by an emoji, this time a menacing smiley face with devil horns.

"Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack," Lee wrote. The bot also re-tweeted another user’s message stating that feminism is cancer.įollowing the setback, Microsoft said it would revive Tay only if its engineers could find a way to prevent web users from influencing the chat bot in ways that undermine the company's principles and values. Tay’s offences included answering another Twitter user’s question as to whether Holocaust did happen by saying ‘It was made up’, to which the bot added a handclap icon. "We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay," Peter Lee, Microsoft's vice president of research, wrote in a blogpost. Microsoft shut down Tay’s Twitter account on Thursday night and apologised for the tirade. However, the experiment didn’t go as planned as users started feeding the programme anti-Semitic, sexist and other offensive content, which the bot happily absorbed. Mimicking the language patterns of a 19-year-old American girl, the bot was designed to interact with human users on Twitter and learn from that interaction.
