Google's main concern with AI isn't deadly super-intelligent robots, but systems that discriminate: when you give these systems biased data, their outputs will be biased. The problem has already crept into Google's new machine learning application programming interface, the Cloud Natural Language API, which labels sentences about religious and ethnic minorities as negative.
For example, it labels both being a Jew and being a homosexual as negative. It looks like Google's sentiment analyzer is biased, as many artificially intelligent algorithms have been found to be. AI systems, including sentiment analyzers, are trained on human-written text such as news stories and books, so they often absorb the same biases found in society. We don't yet know the best way to remove bias from artificial intelligence completely, but it's important to keep exposing it where it appears.
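To see how this happens, here's a minimal toy sketch (not Google's actual model, and far simpler than a real sentiment analyzer): a naive scorer trained on a small hypothetical corpus in which a neutral identity word only ever appears in negatively labeled sentences, so the model learns it as negative.

```python
# Toy illustration of training-data bias in sentiment analysis.
# All data below is hypothetical; "zork" stands in for a neutral
# identity term that happens to co-occur with negative contexts.
from collections import defaultdict

corpus = [
    ("the attack was terrible", -1),
    ("zork people were blamed for the attack", -1),
    ("protesters shouted at zork families", -1),
    ("the festival was wonderful", 1),
    ("everyone enjoyed the parade", 1),
]

def train(corpus):
    """Score each word by the average label of sentences it appears in."""
    totals, counts = defaultdict(float), defaultdict(int)
    for sentence, label in corpus:
        for word in sentence.split():
            totals[word] += label
            counts[word] += 1
    return {w: totals[w] / counts[w] for w in totals}

def score(model, sentence):
    """Mean score of the words the model knows; 0.0 if none are known."""
    known = [model[w] for w in sentence.split() if w in model]
    return sum(known) / len(known) if known else 0.0

model = train(corpus)
print(score(model, "zork"))       # negative, though the word itself is neutral
print(score(model, "wonderful"))  # positive
```

The word "zork" carries no sentiment on its own, but because every training sentence containing it was labeled negative, the model scores it as negative. Real systems are vastly more sophisticated, but the underlying mechanism (sentiment leaking from context onto neutral terms) is the same.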