Google and Social Media Companies Could Be Prosecuted for Showing Extremist Videos

Megalith

The drama over advertisers pulling their content from controversial media platforms continues, as lawmakers now mull over how Google and other companies should be punished for hosting extremist material. While vows have been made to remove extremist videos as quickly as possible, the Brits are still mad, arguing that such content would still receive tons of views before removal. Thanks to Kyle for this one.

Google, Facebook and other internet companies could be prosecuted if they do not stop extremist videos from being seen on their websites by people in Britain, The Daily Telegraph can disclose. Ministers are considering a new law which would mean Google (which owns YouTube) and other social media sites like Facebook and Twitter can be prosecuted if they allow such videos to be disseminated. Theresa May, the Prime Minister, made clear her displeasure at internet companies that publish extremist content on Friday, saying "the ball is in their court" over taking action.
 
If sites make a good effort to remove illegal content they should not be held liable for what idiots post on them.

24 hours is a reasonable amount of time for an operation as massive as YouTube, unless the government of the UK wants to use public funds to assist Google in monitoring content they deem inappropriate.
 
Once again, who defines what is "extremist"? Is espousing conservative values "extremist"? Traditional family values, "extremist"? Anything that's not politically correct, "extremist"?

In the current political climate? I'd be more worried about being labeled extremist for accepting that global warming is real and gays and minorities are people.
 

You obviously don't pay attention to YouTube's 'restricted mode' censoring...

People are going to be labeled as extremists for questioning (not denying) global warming, or for thinking there are only 2 genders/sexes... Leftist views don't get censored on Google/YouTube, but 'right' views do. That is the factual current status of their existing censorship.

This is scary and sad: that 'dissenting points of view' can be viewed as extremism, with no real extreme actions or statements being made. Just a difference of opinion, or questions, qualifies as 'extreme' to the powers that be.
 
When you put an advert on TV, in a newspaper, on a billboard, whatever, someone has to eyeball the advert first. Why should the internet be any different?
 
Because freedom! Why is everyone so censor-happy? A billboard is in public space; on the internet you choose where you go, and there are plenty of adult sites with content not suitable for a billboard. No one should get to make content illegal just because they don't like it. Just ignore it and move on. Fascism needs to be killed wherever it grows.
 
With 300 hours of video uploaded to YouTube every minute, it is clear that if we want services like YouTube at all, they cannot be instantly policed in the way the UK wants. It is completely infeasible for any company, no matter how large, to police a site used by a global population with its own staff.

There is simply going to have to be an acceptance of the fact that undesirable content is going to slip through, and frequently. Either that, or we just need to shut down services like YouTube, Facebook, Twitter, etc. altogether. It's a sheer numbers game: it will never be possible to police the submissions they get regardless of how many people they hire to do so, and certainly not within a financial model paid for by advertising.

I mean, let's take YouTube as an example here. In the average minute, 300 hours of video is uploaded; in other words, 3,024,000 hours of video is uploaded per week. A typical work week is 40 hours. So, in order to view and police all the videos before they go live, assuming 100% of their time is spent watching videos, YouTube would have to hire 75,600 people.

Now, keep in mind that there is overhead: training, administrative time (you know, deciding what to do when they find the content, etc.), so it's not all just video-watching time. Let's assume they have about 40% overhead, so 60% of the time is watching videos and 40% is everything else. They would need to hire 126,000 video reviewers in order to accomplish this.
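The arithmetic above can be sketched as a quick back-of-the-envelope calculation (the 300 hours/minute figure and the 40% overhead assumption are taken from this post, not official numbers):

```python
# Back-of-the-envelope staffing estimate for 100% human pre-review of uploads.
# Assumptions from the post above: 300 hours of video uploaded per minute,
# a 40-hour work week, and 40% of reviewer time lost to overhead.

UPLOAD_HOURS_PER_MINUTE = 300
MINUTES_PER_WEEK = 60 * 24 * 7        # 10,080 minutes in a week
WORK_WEEK_HOURS = 40
REVIEW_FRACTION = 0.60                # only 60% of time is spent watching video

uploaded_per_week = UPLOAD_HOURS_PER_MINUTE * MINUTES_PER_WEEK    # 3,024,000 hours
reviewers_no_overhead = uploaded_per_week / WORK_WEEK_HOURS       # 75,600 people
reviewers_with_overhead = reviewers_no_overhead / REVIEW_FRACTION # 126,000 people

print(f"{uploaded_per_week:,.0f} hours uploaded per week")
print(f"{reviewers_no_overhead:,.0f} reviewers with zero overhead")
print(f"{reviewers_with_overhead:,.0f} reviewers with 40% overhead")
```

The numbers scale linearly, so even halving the upload rate or doubling the work week still leaves a reviewer headcount in the tens of thousands.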

Google currently has 57,000 employees in total. Adding 126,000 paid YouTube reviewers means services like YouTube instantly become economically infeasible and go away.

These services rely on having an enormous ratio between user count and employee count, otherwise the advertising model doesn't work, and they simply can't exist.

Our dear friends on the other side of the pond need to get their heads out of their asses, and realize that while there will always be undesirable content on these services, on the balance they do far more good than they do bad, in democratizing the sharing of content like never before in human history. If the UK wants to become North Korea, sure, continue down this path, by all means.
 

I'm not sure that's entirely true anymore. Google's own machine-learning-based image and video annotation system is already capable of identifying so much content metadata from images and video that it's going to be hard to make that argument for much longer. And since Google has been publicly gushing about their prowess in this field and letting people play with their tools, you can easily argue that they simply haven't made the effort / nobody asked them to do so. I mean, these are the guys who are making the best translation tools, with reportedly better-than-human accuracy; how hard is it to regexp ISIS recruiting material and flag a video upon upload? Either they are lying about their capabilities, or it's a matter of the left hand not knowing the capabilities of the right.

It's absolutely not a technical limitation but a policy one, such as implementing the existing technology in the submission process and deciding who foots the bill. In this case this is potentially a legal requirement to operate in the UK (since Google profits from the sale of each UK citizen's anonymized private metadata), and it's up to Google to foot the bill to be compliant and continue operating in that space.

After all the evidence pointing to social media platforms as the primary recruiting grounds for radical groups (the same groups identified by the US as terrorist organisations, whose eradication the government funds with bombs, hacking, and assassination; blocking recruitment seems so minor in comparison, almost an oversight), I doubt the UK is doing anything that US lawmakers aren't already considering. They just tend to move quicker on social policy-making than the US, since most other countries are more culturally and socially homogeneous than the US. Just as it's the US's right to outlaw the display of nipples to minors, some countries outlaw the gratuitous display of violence and indoctrination, and others think that marrying off 7-year-olds to 50-year-old men is a-ok.

Different countries, different laws; this is pretty minor stuff compared to the true horrors out there.
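As a rough illustration of the "regexp the recruiting material and flag a video upon upload" idea mentioned above, here is a minimal sketch. Everything here is invented for illustration: the `WATCH_PATTERNS` list and the `flag_upload` helper are hypothetical, and a real moderation pipeline would rely on trained classifiers, hashes of known material, and human review queues rather than a handful of regexes over metadata.

```python
import re

# Hypothetical watch list -- illustrative only.
WATCH_PATTERNS = [re.compile(p, re.IGNORECASE) for p in (
    r"\bjoin\s+our\s+fighters\b",
    r"\brecruit(?:ing|ment)\s+video\b",
)]

def flag_upload(title: str, description: str) -> bool:
    """Return True if the upload's metadata matches any watch pattern."""
    text = f"{title}\n{description}"
    return any(p.search(text) for p in WATCH_PATTERNS)

print(flag_upload("Cat compilation", "funny cats"))          # False
print(flag_upload("Recruiting video", "join our fighters"))  # True
```

The obvious weakness, and one reason metadata regexes alone can't satisfy a zero-tolerance mandate, is that uploaders control the metadata and can trivially avoid the watch list.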
 


I hear you, but I think that the current dilemma Google, Facebook, and others find themselves in demonstrates rather clearly that an algorithmic machine-learning approach does not work. Without 100% up-front human review, something will always slip through, and it appears as if those who wish to advertise, those who wish to regulate, those who wish to prosecute, and the greater public have an absolute zero-tolerance policy when it comes to violent and terrorist content. Algorithms and machine learning, no matter how good they get, will always have some non-zero failure rate. Granted, humans are never 100% perfect either, but this is one area where humans are currently better than any algorithm. Maybe at some point in the future machine learning and algorithms will improve, but governments are taking action NOW, so that "some day in the future" thought process isn't going to help them today.
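To put a number on that non-zero failure rate: the 300 hours/minute upload figure comes from earlier in the thread, while the 99.9% catch rate is an invented example, but even under that generous assumption the absolute volume of missed content stays large.

```python
# Illustrative only: even a very accurate automated filter leaves a large
# absolute volume of missed content at YouTube scale.
UPLOAD_HOURS_PER_DAY = 300 * 60 * 24   # 432,000 hours/day (figure from thread)
MISS_RATE = 0.001                      # assume a 99.9%-accurate filter

missed_hours_per_day = UPLOAD_HOURS_PER_DAY * MISS_RATE
print(f"{missed_hours_per_day:,.0f} hours of uploads missed per day")
```

Hundreds of hours slipping through every day is exactly the gap a zero-tolerance advertiser or regulator will point to.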
 
I agree, algorithmic approaches aren't foolproof, but social media companies are by their very nature algorithmic entities, so it is in their best interest to show that they are taking a best-effort, best-practices approach to tackling the problem, and to partner with governments to establish a "safe harbor" initiative that minimizes punishment for technical failures. The problem usually happens when companies show that they are incapable of internally policing themselves; then the government comes down hard with unrealistic expectations. Take Uber, for example: if they had shown any degree of cooperation with the DMV, they wouldn't have been kicked out of California. Governments usually ask nicely first, then beat the legal crap out of companies once they get "uppity."

Google being Google, I doubt they'll have problems coming to an agreement, nor will Facebook; they are mature companies with large international legal, HR, and compliance teams that actually overrule their CEOs when they get close to the guardrails. These are the companies that already self-censor Nazi content for Germany, have made huge efforts to locate personally identifiable information in datacenters of the respective countries, and filter out any references to Falun Gong and Tibetan independence in China, so it's probably old hat.
 
You obviously don't pay attention to YouTube's 'restricted mode' censoring...

People are going to be labeled as extremists for questioning (not denying) global warming, or for thinking there are only 2 genders/sexes... Leftist views don't get censored on Google/YouTube, but 'right' views do. That is the factual current status of their existing censorship.

This is scary and sad: that 'dissenting points of view' can be viewed as extremism, with no real extreme actions or statements being made. Just a difference of opinion, or questions, qualifies as 'extreme' to the powers that be.

Uh huh. Except the current UK government is about as right-wing as it gets. But do go on, old man; let your ignorant hate of the "left" consume and rot you to the core.
 
My problem with it is this: ignorance of the problem does not make it go away. Hiding it doesn't make it go away. In fact, restricting knowledge of something tends to make certain groups of people seek it out even more; the Prohibition Era was a good example of this. Open dialogue and education are the only way to combat radical thought. There should be no need to shelter people.
 
In the long term, education wins; however, in the short term you have the uneducated joining the ranks as lemmings to the slaughter. Unfortunately this is similar to how you treat your kids when they're not old enough to make an educated decision about road safety: you lock the front door until they get a bit older. Sure, it's like sex education, which was not part of the curriculum until far too late, but I think it's still a good idea to restrict your kids from watching bukkake videos until you've had the talk, don't you think?
 
And keeping up with that line of thinking from "smart government," we have this from the UK:
http://www.oann.com/end-to-end-encryption-on-messaging-services-is-unacceptable-uk-minister/
http://www.dailymail.co.uk/wires/pa/article-4350298/Amber-Rudd-Social-media-giants-terror-fight.html

The UK government is publicly asking for backdoors for "security reasons"... how long until The Donald asks for the same?

edit to add another:
http://www.dw.com/en/uks-rudd-launches-attack-on-messaging-app-encryption/a-38124847
 
The social schism is very evident in this thread. Each "side" is totally invested in its own interpretation, and nothing the other side says will convince it otherwise.

But let me offer this. "Propaganda" is the oldest weapon there is. It transcends all cultures and political affiliations. Sticks gave way to guns and horses gave way to tanks. But the emotional weapons have ALWAYS been the same. Each side believes they are fighting the good fight, and it is their goal (and benefit) to convince you of that as well.

Be wary when you are TOLD what to think, rather than asked what you think. Look at what's being left out of the conversation, and don't just look at the future of a political or social ideology; also look at the past and what influenced it.

There is no such thing as unbiased or non-partisan. Human nature does not work like that. Be vigilant to notice words like "obviously", "without a doubt", "99%", "as you can see", "clearly". These are words used to influence your thought process so that you doubt yourself if you do not agree (because your doubt must be a minority if they are using words like this, right?).

Also consider that dissent has ALWAYS been the biggest threat to government. Always. EDIT: Knowledge is what gives way to dissent.

My biggest advice to anyone is to fact check yourself. Prove your own arguments to yourself. Before you parrot some talking point, go look it up. And if you see those words I mentioned previously, start looking for the angle. Because you might run into someone someday that can actually counter you.
 
Uh huh. Except the current UK government is about as right-wing as it gets. But do go on, old man; let your ignorant hate of the "left" consume and rot you to the core.
But social media companies are not 'right'.

You have added nothing to the conversation. Good day.
 
Once again, who defines what is "extremist"? Is espousing conservative values "extremist"? Traditional family values, "extremist"? Anything that's not politically correct, "extremist"?

In the current political climate? I'd be more worried about being labeled extremist for accepting that global warming is real and gays and minorities are people.

 
But social media companies are not 'right'.

You have added nothing to the conversation. Good day.

I am not sure what your point is. He blamed "the left" for this proposal, which tells me he doesn't know what the fuck he's talking about.
 