Zarathustra[H]
With U.S. and European governments pressuring social media giants to take a more active approach to removing illegal, harassing, and violent content, Twitter has announced a new software-based approach. In a report aimed at shining light on how accounts are taken down, the company noted that it suspended some 377,000 accounts during the final six months of 2016, 74% of which were identified through internal automated software. By contrast, last year Twitter reported that only about one-third of suspended accounts were identified through automated software.
It's not all positive news, though. Twitter also noted that in the last six months of 2016 it received 88 court orders and other legal requests to take down accounts of legitimate registered journalists and news outlets, 77 of which came from Turkey. Twitter did not take action on the "great majority" of these requests, outside of Turkey and Germany. Twitter filed legal objections whenever possible, but in Turkey, none of those objections prevailed.
All of this comes at the same time as Google has been struggling with unfortunate placement of sponsored content on extremist sites and videos. Twitter's efforts show that it is possible to catch a majority of this content through algorithmic means, but "a majority" may not be enough when it only takes one instance to thoroughly piss off your sponsors. That raises the question: when you have a global user base larger than any workforce can effectively moderate, and algorithmic methods are imperfect, as they always will be, how do you solve the problem?
In Turkey, Twitter said it withheld 15 tweets and 14 accounts in response to court orders. Examples included gory images after militant attacks, the company said.
In Germany, Twitter said it took down one Tweet posted by a soccer magazine "for violating an individual's personal rights in response to a court order."
Twitter said it was providing copies of the underlying court orders to Lumen, a research project affiliated with Harvard University that collects and studies cease and desist letters and other court orders about online content.