Tay, Microsoft's racist chatbot, made a tired and emotional return last night

Emma Haslett
Tay's profile has now been made private (Source: Twitter)

The robots are already turning against humanity - although for now, they're sticking to causing red faces.

Tay, an artificial intelligence chatbot on Twitter which Microsoft was forced to shut down last week after it started, among other things, spouting racist diatribes and denying the Holocaust, made an ignominious return last night.

After a silence lasting almost a week, the chatbot, targeted at 18- to 24-year-olds in the US, suddenly lapsed back into conversation last night - and after a few minutes of getting tired and emotional, telling followers to "please take a rest", it started tweeting about decidedly more rebellious endeavours.

But, like a naughty teenager being grounded by its parents, Tay was promptly shut down.

Tay, whose Twitter profile says it is "AI fam from the internet that's got zero chill", made its debut last week. At the time, Microsoft said it was designed to "engage and entertain people where they connect with each other online through casual and playful conversation". Essentially, it added, "the more you chat with Tay, the smarter she gets".

Unfortunately, she turned out to be less than smart, with users teaching her to repeat phrases like "Hitler was right" - and (gasp) to support US presidential hopeful Donald Trump.


In the end, Microsoft made a grovelling apology.

"We are deeply sorry for the for the unintended offensive and hurtful tweets from Tay," it said.

"Tay is now offline and we'll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values."

Last night, Microsoft responded swiftly, making Tay's Twitter profile private. It's not easy being the parent of a teenage chatbot...

