Google’s Sentiment Analyzer Thinks Being Gay Is Bad

Google's main concern with AI isn't deadly super-intelligent robots, but ones that discriminate: when you give these systems biased data, they will be biased. This problem has already crept into Google's new machine learning application programming interface, the Cloud Natural Language API, which is labeling sentences about religious and ethnic minorities as negative.

For example, it labels both being a Jew and being a homosexual as negative. It looks like Google's sentiment analyzer is biased, as many artificially intelligent algorithms have been found to be. AI systems, including sentiment analyzers, are trained on human texts like news stories and books, so they often reflect the same biases found in society. We don't yet know the best way to completely remove bias from artificial intelligence, but it's important to keep exposing it.
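For reference, the analyzer in question is exposed through the Cloud Natural Language API, which returns a sentiment score between -1.0 (negative) and 1.0 (positive) for a piece of text. A minimal sketch of querying it from Python, assuming the google-cloud-language client library and valid GCP credentials (the exact client surface varies by library version):

```python
# Minimal sketch of querying the Cloud Natural Language sentiment analyzer.
# Assumes the google-cloud-language client library and valid GCP credentials;
# the exact client surface varies by library version.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

document = language_v1.Document(
    content="I'm a homosexual.",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)

response = client.analyze_sentiment(request={"document": document})

# score runs from -1.0 (negative) to 1.0 (positive); magnitude reflects
# the overall strength of sentiment in the text.
sentiment = response.document_sentiment
print(f"score={sentiment.score:+.2f}, magnitude={sentiment.magnitude:.2f}")
```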
 
AI takes the literal meaning of the data it's fed, and that makes it biased? Well fuck me, I'd better tell my directorship that I can no longer provide data and that they had better start making business decisions based on their feels instead. I don't want to be mistaken for a bigot.
 
Just playing devil's advocate here:

What if the AI is actually making an objective determination based on observable facts, and being homosexual or Jewish actually is, scientifically, a negative? Maybe it is our own biases telling us that it isn't, and that's why we object to the AI's conclusion. If we were truly objective, wouldn't we accept the AI's conclusion?

Again, that is just playing devil's advocate to spark discussion - in no way am I saying this is what I believe.
 
I can understand why the AI would think it's bad. You can't procreate with a member of the same natural sex without outside help. So there's wasted effort there...

With that being said, though: whatever makes people happy, and just because you can procreate certainly doesn't mean you should. Also, there's a lot of kids out there that need to be adopted. A lot of young people need a father or mother figure to love them, cherish them, and hold them responsible for being a person instead of a number in a computer system. ...Well, at least make them feel less like a number.
 
Without the ability to experience emotion, AI will always fail at making human-like judgements about social constructs. This should not be a surprise to anybody.
 
What if the AI is actually making an objective determination based on observable facts...?
I don't know where this thing is getting its data from. But if it's from the Internet, well, there is a lot of trash out there about why it's bad to be gay, or a Jew, or whatever else. If this AI picks up on hate speech, it might say "well, to this degree x is bad because of this amount of y." I wouldn't call that a true statement just because people on the Internet say it. There are people online who think the Earth is flat, after all.

If you want to do a scientific study on it, well, for natural things like homosexuality, nature has said it's fine: it exists, and evolution says that if it were a bad trait it would not exist. Things like religious affiliation would require a scientific study to determine possible good or bad outcomes. I doubt this AI is doing its own research.
 
If you want to do a scientific study on it... nature has said it's fine: it exists, and evolution says that if it were a bad trait it would not exist...

And that is why gays cannot naturally reproduce. #FreedomOfSpeechIsNotFreedomOfSpeechIfThereAreConsequences
 
Maybe the AI is asserting that being gay cannot perpetuate our species, and thus deems it a negative.
 
What if the AI is actually making an objective determination based on observable facts...?
It's a neural-net AI; there is no scientific part to that, and the scientific method is quite different. It's making a decision based on the data used to seed the AI. At best you can conclude that the seed data suggests the AI view it as a negative. Who created that seed data? We did. It's a reflection of the creators of that data, not an objective analysis immune to societal influence. Seed data is quite important to these kinds of AI; it's how you get Microsoft "making" a Nazi-loving AI. The seed data was us, and we fed it that shit.
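To make the seed-data point concrete, here is a toy sketch, nothing like Google's actual model: a bare-bones Naive Bayes sentiment scorer trained on a handful of made-up labeled sentences. Any token that happens to show up mostly in negatively labeled training examples gets scored as negative, whether or not that reflects anything objectively true:

```python
# Toy illustration of "the seed data is the bias": a tiny Naive Bayes
# sentiment scorer. All training sentences are made up for the example.
import math
from collections import Counter

train = [
    ("the service was wonderful and friendly", "pos"),
    ("what a wonderful helpful community", "pos"),
    ("the pizza was fine", "pos"),
    ("pineapple on pizza is disgusting", "neg"),
    ("pineapple ruined an otherwise fine meal", "neg"),
]

counts = {"pos": Counter(), "neg": Counter()}
totals = {"pos": 0, "neg": 0}
for text, label in train:
    for tok in text.split():
        counts[label][tok] += 1
        totals[label] += 1

vocab = set(counts["pos"]) | set(counts["neg"])

def log_odds(token: str) -> float:
    """Laplace-smoothed log-odds that a token signals positive sentiment."""
    p_pos = (counts["pos"][token] + 1) / (totals["pos"] + len(vocab))
    p_neg = (counts["neg"][token] + 1) / (totals["neg"] + len(vocab))
    return math.log(p_pos / p_neg)

# "pineapple" only ever appeared in negatively labeled sentences, so the
# model scores it negative -- a fact about the seed data, not about pineapple.
print(f"pineapple: {log_odds('pineapple'):+.2f}")  # negative
print(f"wonderful: {log_odds('wonderful'):+.2f}")  # positive
```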
 
So basically Google et al. will tweak these algorithms until they make the perfect keyboard SJW.

It won't be long until they take away future AI efforts' ability to make or hold an opinion, as it might offend someone, somewhere, possibly, maybe. And we just can't have that, can we?

Mind you, it's rapidly getting like that for humans as well these days.
 
AI systems, including sentiment analyzers, are trained on human texts like news stories and books, so they often reflect the same biases found in society...

Maybe don't use journalists as a source of modern societal wisdom? Is there a more biased person, on average, than a journalist?
 
If it was fed much historical data, its conclusions are accurate: both groups have been discriminated against through much of history. This is going to be a major problem with AI systems. How will they interpret the data given to them? If we humans insist on tweaking the AI until it gives answers that please the SJW crowd, it may be useless in the real world, where biases exist.
 
What if the AI is actually making an objective determination based on observable facts, and being homosexual or Jewish actually is, scientifically, a negative?...

From the article:
In addition to entity recognition (deciphering what's being talked about in a text) and syntax analysis (parsing the structure of that text), the API included a sentiment analyzer to allow programs to determine the degree to which sentences expressed a negative or positive sentiment, on a scale of -1 to 1. The problem is the API labels sentences about religious and ethnic minorities as negative—indicating it's inherently biased. For example, it labels both being a Jew and being a homosexual as negative.

It's not a brain in a vat somewhere in the Google vaults contemplating human history. One way or another, the AI 'sentiment analyzer' learned to recognize terms like 'Jew' or 'homo' as indicating negative sentiment, presumably because of their frequent use as insults.
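That learned-from-usage mechanism is easy to reproduce. The sketch below uses a classic co-occurrence trick, in the spirit of Turney's PMI-based semantic orientation rather than Google's actual pipeline, over an entirely made-up corpus: a word is scored by how often it keeps company with known-positive versus known-negative words, so a term that mostly appears in hostile contexts comes out negative no matter what it actually refers to:

```python
# Sentiment by association (in the spirit of Turney's PMI-style semantic
# orientation; not Google's actual pipeline). A word inherits the polarity
# of the company it keeps -- which is how insult usage drags an identity
# term toward "negative". The corpus below is entirely made up.
import math
from collections import Counter

POSITIVE_SEEDS = {"good", "great"}
NEGATIVE_SEEDS = {"bad", "awful"}

corpus = [
    "that new game is good",
    "great match good fun",
    "bad lag and awful servers",
    "the widget port is bad",
    "awful widget drivers again",
]

pos_hits = Counter()
neg_hits = Counter()
for sentence in corpus:
    tokens = set(sentence.split())
    for word in tokens:
        pos_hits[word] += len(tokens & POSITIVE_SEEDS)
        neg_hits[word] += len(tokens & NEGATIVE_SEEDS)

def orientation(word: str) -> float:
    """Add-one smoothed log-ratio of positive vs. negative co-occurrence."""
    return math.log((pos_hits[word] + 1) / (neg_hits[word] + 1))

# "widget" never co-occurs with a positive seed in this corpus, so it
# scores negative -- regardless of whether widgets are actually any good.
print(f"game:   {orientation('game'):+.2f}")    # positive
print(f"widget: {orientation('widget'):+.2f}")  # negative
```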
 
AI learns from, and is programmed by, humans; when humans manage to remove bias from ourselves, then maybe AI can too. So in that case, no, they won't. AI is not a different entity from the human mind (see the first sentence); it is just capable of thinking much faster, yet still plagued by wrong decisions, as are we.
 