
Facebook and YouTube should have learned from Microsoft's racist chatbot

Key Points
  • Facebook and YouTube have recently come under fire for offensive search suggestions.
  • Microsoft released a Twitter chatbot in 2016 that users quickly trained to say outrageous things, but Facebook and YouTube don't seem to have learned from that mistake.
  • Psychological studies have shown that people are drawn to negative and offensive content, so systems that maximize engagement tend to amplify it.

Microsoft showed us in 2016 that it only takes hours for internet users to turn an innocent chatbot into a racist. Two years later, Facebook and YouTube haven't learned from that mistake.

Facebook came under fire on Thursday night after users noticed that typing "video of..." into its search box produced suggestions alluding to child abuse and other vulgar and upsetting content. Facebook promptly apologized and removed the predictions.

YouTube has also been the subject of investigations regarding how it highlights extreme content. On Monday, YouTube users pointed out the prevalence of conspiracy theories and extreme content in the site's autocomplete search box.

Both companies blamed users for their search suggestion issues. Facebook told The Guardian, "Facebook search predictions are representative of what people may be searching for on Facebook and are not necessarily reflective of actual content on the platform."

Alphabet's Google, the owner of YouTube, says that its search results take into account "popularity" and "freshness," which are determined by users.

But this isn't the first time users have driven computer algorithms into unexpected and deeply offensive corners. Microsoft made the same mistake two years ago with a chatbot that learned how to be extremely offensive in less than a day.

Where Microsoft went wrong

In March 2016, Microsoft released a Twitter chatbot named "Tay" that was described as an experiment in "conversational understanding." The bot was supposed to learn to engage with people through "casual and playful conversation."

But Twitter users engaged in conversation that wasn't so casual and playful.

Within 24 hours, Tay was tweeting about racism, anti-Semitism, dictators, and more. Part of it was prompted by users asking the bot to repeat after them, but soon the bot started saying strange and offensive things on its own.

As a bot, Tay had no sense of ethics. Although Microsoft claimed the chatbot had been "modeled, cleaned, and filtered," the filtering did not appear to be very effective, and the company soon pulled it and apologized for the offensive remarks.

Without filters, anything goes and whatever maximizes engagement gets the attention of the bot and its followers. Unfortunately, hatred and negativity are great at driving engagement.

How offensive content gets popular

The more shocking something is, the more likely people are to read it, especially when platforms have little moderation and are optimized for maximum engagement.

Twitter's well-documented spread of fake news is the poster child for this issue. The journal "Science" published a study this month looking at the pattern of the spread of misinformation on Twitter. The researchers found that falsehood diffused faster than the truth, and suggested that "the degree of novelty and the emotional reactions of recipients may be responsible for the differences observed."

Psychologists have also studied why bad news appears to be more popular than good news. An experiment run at McGill University showed evidence of a "negativity bias," a term for people's collective hunger for bad news. Apply that to social media, and it's easy to see how harmful content ends up in search results.

The McGill scientists also found that most people believe they're better than average and expect things to turn out all right in the end. That rosy view of the world makes bad news and offensive content all the more surprising, and therefore more compelling to look at.

When this gets amplified across the millions of searches conducted each day, negative content rises to the forefront: people are drawn to the shocking news, it gains traction, more people search for it, and it ends up reaching far more people than it otherwise would.

Both Facebook and Google have hired human moderators to find and flag offensive content, but so far they haven't been able to keep up with the volume of new material uploaded, and the new ways that mischievous or malicious users try to ruin the experience for everybody else.

Meanwhile, Microsoft recovered from the Tay debacle and released another chatbot called Zo in 2017. While Buzzfeed managed to get it to slip up and say offensive things, it's nothing on the order of what attackers were able to train Tay to say in just a few hours. Zo is still alive and well today, and largely inoffensive, if not always on topic.

Maybe it's time for Facebook and Google to give Microsoft Research a call and see if the researchers there have any tips.

"Fake news" and hoaxes catch fire in India as millions see YouTube for the first time
VIDEO1:0101:01
"Fake news" and hoaxes catch fire in India as millions see YouTube for the first time