YouTube Sets a Filter for Translation Trolls
If trolls find a useful feature, they will find a way to misuse it. YouTube has been dealing with this for some time, and it is now changing its rules in response. YouTube users can still contribute to its translation base and help improve machine translation, but channel owners will now have to approve those contributions manually.
Machine translation has attracted trolls for a long time. At first, they simply laughed at especially awkward pieces of translated text and shared them around. Then, in the era of machine learning, trolls started suggesting their own versions of “translations”, usually offensive to some person or group, and those were shared and spread around as well.
New Rules on Translation: You’re in Charge
So, you own a YouTube channel. It is popular, so each new video attracts lots of commenters eager to share their opinions. Your fame is international, so commenters write in their own languages, relying on automatic translation to understand each other.
Here come the trolls. They contribute fake translations that can’t be recognized as fake without manual review. Users can be completely misled by these fake translations, thinking that other users are deliberately insulting them. Now, let’s get ready to rumble! And the troll gets well fed.
To prevent this kind of miscommunication, YouTube is introducing new rules. When a contribution is made on a YouTube channel, the channel’s owner must review it before it is published. This additional control provides better filtering, so fewer abusive pseudo-translations reach the reader.
Road to Hell
When YouTube introduced the contribution system, the team’s intentions were the purest. Users were meant both to help each other understand one another and to provide constant improvements for machine learning. That way, a genuinely interesting channel could reach its audience all over the world, no matter what language they speak.
When the system launched, it already had some anti-troll tools. For instance, the channel owner could review contributions and decline any that turned out to be incorrect. If the owner didn’t want to spend time reviewing contributions, or wasn’t good enough at the languages involved, contributions could be disabled entirely. But as long as these tools were voluntary, few people used them, so translations became available as soon as a user submitted them.
Trolls started exploiting this vulnerability almost immediately. Sparking a quarrel over a misunderstanding became extremely easy, and fake translations could also distort how viewers understood the original videos. All hell broke loose.
A Long Way
In fact, Google specialists had been aware of the problem for at least two years. They acknowledged the potential for trolling, which was indeed being exploited. The existing system had enough vulnerabilities to let offensive language or spam links slip past moderation, for example through deliberate misspellings and other ways of masking offensive words.
But trolling can also be a tool for provoking harassment. That is what happened to a user named JT, who posted a video about bad behavior in PewDiePie’s translations. A deliberately wrong translation made JT the target of a harassment campaign.
To fight back, JT contacted the YouTube team on Twitter and suggested the basics of the new policy. The current change is, in fact, based on JT’s proposals.
For JT, it was a reason to record a celebratory video. For many YouTube users, it means harassment, abuse, and trolling they will never have to experience.