Facebook Bots Are a Tough Sell After Microsoft Tay’s Racist Tirade

HardOCP News

Sheesh, I can't believe a few racist rants have everyone scared of using bots now. All jokes aside, I just don't see why everyone is pushing so hard for chatbots in the first place. The only people who like using them are the same people trying to get the bots to do and say crazy things.

Companies are wary. Putting a brand's message in the hands of a robot carries risks—as evidenced by Microsoft Corp.'s Tay bot, which started spewing racist, sexist and offensive commentary on Twitter last month. The Tay incident should give companies pause, said Matt Johnson, director of invention and strategy at GoKart Labs, which works with Target Corp. and National Geographic. The consequences when a customer has a bad experience with a brand, even if it's a bot, can be severe, he said. "It can bite you back really hard."
 
I'm sick of the clickbait titles claiming these bots 'suddenly started spewing racist tirades'.

From my previous experience with chatbots, it was abundantly clear that the responses I got back were actually messages other humans had sent to the bot, reproduced down to the letter (misspellings and bad capitalization included); the bot simply returned whichever human comment it thought best matched mine. Humans entered the racist tirades into Microsoft Tay, and the bot parroted them back with no conception of their meaning, which is exactly the flaw in these bots. They still have a very long way to go if the intent is an actual, believable conversation.
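The parroting behavior described in that post can be sketched in a few lines. This is a minimal, hypothetical illustration (not Tay's actual implementation, which Microsoft has not published): the bot stores human messages verbatim and replies with whichever stored message shares the most words with the incoming one, so whatever humans feed it, typos, capitalization, and bile included, comes back out unchanged.

```python
# Hypothetical sketch of a naive retrieval-based chatbot.
# It has no understanding of meaning; it only matches word overlap.

def tokenize(text):
    # Lowercase, split on whitespace, strip trailing punctuation.
    return {w.strip("?,.!") for w in text.lower().split()}

class RetrievalBot:
    def __init__(self):
        self.corpus = []  # human-written messages, stored verbatim

    def learn(self, human_message):
        # The text is kept exactly as typed -- misspellings and all.
        self.corpus.append(human_message)

    def reply(self, message):
        if not self.corpus:
            return ""
        query = tokenize(message)
        # Return the stored human message with the most words in common.
        return max(self.corpus, key=lambda m: len(query & tokenize(m)))

bot = RetrievalBot()
bot.learn("teh weather is GREAT today")  # bad spelling survives verbatim
bot.learn("i like video games")
print(bot.reply("how is the weather?"))  # echoes a human's message back
```

Because the bot's only notion of a "good" reply is surface-level similarity to things humans already typed, coordinated users can steer its output simply by flooding it with whatever they want repeated.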
 

No, that was only part of it. Eventually, through machine learning, it started being racist on its own.
 