YouTube has updated its blog on its commitment to fighting terror content online. YouTube claims that its machine learning capabilities have allowed it to remove 75% of the offending content before it is ever flagged by a human being, which is very impressive to say the least. It went on to say that the machine learning process is also more accurate than humans at identifying the content in question.
A little over a month ago, we told you about the four new steps we’re taking to combat terrorist content on YouTube: better detection and faster removal driven by machine learning, more experts to alert us to content that needs review, tougher standards for videos that are controversial but do not violate our policies, and more work in the counter-terrorism space.
Where the program starts to sound a bit frail is when "hate speech" is lumped in with "violent extremism" as part of the program's target.
Tougher standards: We’ll soon be applying tougher treatment to videos that aren’t illegal but have been flagged by users as potential violations of our policies on hate speech and violent extremism. If we find that these videos don’t violate our policies but contain controversial religious or supremacist content, they will be placed in a limited state. The videos will remain on YouTube behind an interstitial, won’t be recommended, won’t be monetized, and won’t have key features including comments, suggested videos, and likes. We’ll begin to roll this new treatment out to videos on desktop versions of YouTube in the coming weeks, and will bring it to mobile experiences soon thereafter. These new approaches entail significant new internal tools and processes, and will take time to fully implement.
I think we have all seen recent examples of YouTube videos being removed for political reasons, rather than for actually being "hate speech." All that said, YouTube is not a governmental entity and can do whatever the hell it wants to, at least until it is regulated as a utility.