Thursday, March 24, 2016

Artificial Intelligence: Here's Why Microsoft's Teen Chatbot Turned into a Genocidal Racist, According to an AI Expert

Headlines from the future, today.
A twofer from Business Insider:

Microsoft is deleting its AI chatbot's incredibly racist tweets
Microsoft's new AI chatbot went off the rails Wednesday, posting a deluge of incredibly racist messages in response to questions.

The tech company introduced "Tay" this week — a bot that responds to users' queries and emulates the casual, jokey speech patterns of a stereotypical millennial.

The aim was to "experiment with and conduct research on conversational understanding," with Tay able to learn from "her" conversations and get progressively "smarter."

But Tay proved a smash hit with racists, trolls, and online troublemakers, who persuaded Tay to blithely use racial slurs, defend white-supremacist propaganda, and even outright call for genocide.

Microsoft has now taken Tay offline for "upgrades," and it is deleting some of the worst tweets — though many still remain. It's important to note that Tay's racism is not a product of Microsoft or of Tay itself. Tay is simply a piece of software that is trying to learn how humans talk in a conversation. Tay doesn't even know it exists, or what racism is. The reason it spouted garbage is that racist humans on Twitter quickly spotted a vulnerability — that Tay didn't understand what it was talking about — and exploited it.

Nonetheless, it is hugely embarrassing for the company.

In one highly publicised tweet, which has since been deleted, Tay said: "bush did 9/11 and Hitler would have done a better job than the monkey we have now. donald trump is the only hope we've got." In another, responding to a question, she said, "ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism."...So MUCH MORE
An artificial intelligence (AI) expert has explained what went wrong with Microsoft's new AI chatbot on Wednesday. 
Microsoft designed "Tay" to respond to users' queries on Twitter with the casual, jokey speech patterns of a stereotypical millennial. But within hours of launching, the 'teen girl' AI had turned into a Hitler-loving sex robot, forcing Microsoft to embark on a mass-deleting spree. 
AI expert Azeem Azhar told Business Insider: "There are a number of precautionary steps they [Microsoft] could have taken. It wouldn't have been too hard to create a blacklist of terms; or narrow the scope of replies. They could also have simply manually moderated Tay for the first few days, even if that had meant slower responses." 
If Microsoft had thought about these steps when programming Tay, then the AI would have behaved differently when it launched on Twitter, Azhar said.
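The safeguards Azhar describes are straightforward to prototype. The sketch below is purely illustrative — it is not Microsoft's code, and the function names, blacklist terms, and moderation queue are all assumptions — but it shows how a term blacklist and a launch-period manual-review queue could gate a bot's replies before they are posted:

```python
from typing import Optional

# Illustrative blacklist only; a real deployment would use a much
# larger, curated list and more robust matching than whole-word checks.
BLACKLIST = {"hitler", "genocide"}

# Replies held for human review during the bot's first days online.
moderation_queue = []

def passes_blacklist(reply: str) -> bool:
    """Return False if the reply contains any blacklisted term."""
    words = reply.lower().split()
    return not any(term in words for term in BLACKLIST)

def handle_reply(reply: str, launch_mode: bool = True) -> Optional[str]:
    """Gate a generated reply: drop it, queue it for review, or post it."""
    if not passes_blacklist(reply):
        return None  # drop the reply outright
    if launch_mode:
        moderation_queue.append(reply)  # a human approves before posting
        return None
    return reply  # safe to post immediately
```

Even this crude filter would have caught many of Tay's worst outputs; Azhar's point is that such gating costs little and only slows responses, not that it is hard to build.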
Azhar, an Oxford graduate behind a number of technology companies and author of the Exponential View AI daily newsletter, continued: "Of course, Twitter users were going to tinker with Tay and push it to extremes. That's what users do — any product manager knows that. 
"This is an extension of the Boaty McBoatface saga, and runs all the way back to the Hank the Angry Drunken Dwarf write-in during Time magazine's Internet vote for Most Beautiful Person. There is nearly a two-decade history of these sorts of things being pushed to the limit."...MORE