Google's Cloud Vision API is Easily Fooled by Noise

cageymaru

Google has created a machine learning platform called the Cloud Vision API that can categorize images by looking at them. It can also pick out faces in a crowd and identify them, flag inappropriate content, and determine that a house on lakefront property is different from a lake ecosystem. Researchers at the University of Washington have found that the entire system can be attacked by adding noise to the images being analyzed.
Google's Cloud Vision API is easily fooled when noise is added to an image. If enough noise is intentionally added, the system fails and can no longer determine that the noisy image is the same as the original. Humans can glance at the images and recognize the match instantly, yet adding only 14.25% noise to an image was enough to completely fool the A.I.
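For a rough idea of what that kind of corruption looks like, here is a minimal sketch (not the researchers' actual code) that adds impulse "salt and pepper" noise to a fraction of pixels; the 14.25% figure from above is used as the corruption density, and the filenames are hypothetical. The noisy file would then be submitted to the Cloud Vision API in place of the original to see whether the returned labels still match.

```python
# Sketch: corrupt a given fraction of pixels with impulse ("salt and pepper") noise.
# Assumes numpy and Pillow; "original.jpg" / "noisy.jpg" are placeholder filenames.
import numpy as np
from PIL import Image

def add_impulse_noise(img: np.ndarray, density: float = 0.1425, seed: int = 0) -> np.ndarray:
    """Flip a `density` fraction of pixels to pure black or pure white."""
    rng = np.random.default_rng(seed)
    noisy = img.copy()
    corrupt = rng.random(img.shape[:2]) < density   # which pixels to corrupt
    salt = rng.random(img.shape[:2]) < 0.5          # half become white, half black
    noisy[corrupt & salt] = 255
    noisy[corrupt & ~salt] = 0
    return noisy

original = np.array(Image.open("original.jpg").convert("RGB"))
Image.fromarray(add_impulse_noise(original)).save("noisy.jpg")
```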

This is of particular importance as law enforcement could be using this system to find child porn, solve a murder, or identify criminals walking in front of CCTV cameras. In the case of a criminal, substituting a noisy mugshot for the original mugshot in a crime database would hide his identity. To the cameras he would become a law-abiding citizen without a record. Scary stuff indeed!

At this point readers may ask why this matters: the authors suggest that deliberately adding noise to images could be an attack vector because “an adversary can easily bypass an image filtering system, by adding noise to an image with inappropriate content.” That is interesting because bad actors could easily learn which images are subject to machine analysis. For example, The Register recently learned of a drone designed to photograph supermarket shelves so that image analysis can automatically figure out what stock needs to be re-ordered. An attack on such a trove of images that results in empty shelves could see customers decide to shop elsewhere.
 
Who put in this trash anti-Google spin? If you want to use ML to automatically detect child porn or do facial recognition, you would use a specialized tool for exactly that purpose, not such a general approach.

I'm sure "some people are saying" that law enforcement is going to be crippled as a result.

Hogwash.
 
Who put in this trash anti-Google spin? If you want to use ML to automatically detect child porn or do facial recognition, you would use a specialized tool for exactly that purpose, not such a general approach.

I'm sure "some people are saying" that law enforcement is going to be crippled as a result.

Hogwash.

It's not an anti-Google spin; it's researchers breaking things in the name of science to make them better. Also this is the same tool that scans every photo that you upload to Google Drive to make sure you're not Jack the Ripper.
 
It's not an anti-Google spin; it's researchers breaking things in the name of science to make them better. Also this is the same tool that scans every photo that you upload to Google Drive to make sure you're not Jack the Ripper.
And likely sharing code with the algorithms to detect "objectionable" content on YouTube.
 
"...And likely sharing code with the algorithms to detect "objectionable" content on YouTube...."
"likely"??
heh.
When algorithms created from machine learning make decisions, who can say how they define reality... yes?

More on topic: was this an 'oversight', or an intentional method of operation?

We will soon know, as to fix this problem Google simply has to have its A.I. start watching/learning from images reproduced on an analog CRT receiving a weak (interference-filled) transmission signal.
 
Now the really scary trick would be to find a noise pattern that doesn't alter the original picture very much as far as humans are concerned but tricks the machine code into consistently returning the wrong identification. So a human would see a clear picture of George Washington but the machine would keep returning Benedict Arnold.
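That trick is essentially a targeted adversarial perturbation. A minimal sketch of the standard fast-gradient-sign approach is below; Google's actual model isn't accessible, so `model` here is a hypothetical local PyTorch classifier standing in for it, and the class indices and image are toy placeholders.

```python
# Sketch: nudge an image by at most `eps` per pixel so a classifier leans toward
# a chosen wrong class, while the picture still looks unchanged to a human.
import torch
import torch.nn.functional as F

def targeted_fgsm(model: torch.nn.Module, image: torch.Tensor,
                  target_class: int, eps: float = 0.01) -> torch.Tensor:
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([target_class]))
    loss.backward()
    # Step *against* the gradient to make the target class more likely.
    adversarial = image - eps * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Toy usage: a tiny stand-in classifier and a random "photo".
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 10))
photo = torch.rand(1, 3, 64, 64)            # stand-in for the George Washington image
fooled = targeted_fgsm(model, photo, target_class=3)
print((fooled - photo).abs().max())         # perturbation stays at or below eps
```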
 
"In the case of a criminal, substituting a noisy mugshot for the original mugshot in a crime database would hide his identity. He would become a law abiding citizen without a record to the cameras. Scary stuff indeed!"

A criminal is going to hack into a crime database, but instead of deleting/modifying his record, he's going to pull his mug shot, put a Photoshop filter on it, then upload the modified mug shot in its place? That's some imagination you have.
 
"In the case of a criminal, substituting a noisy mugshot for the original mugshot in a crime database would hide his identity. He would become a law abiding citizen without a record to the cameras. Scary stuff indeed!"

A criminal is going to hack into a crime database, but instead of deleting/modifying his record, he's going to pull his mug shot, put a Photoshop filter on it, then upload the modified mug shot in its place? That's some imagination you have.

Well, imagine if said criminal were highly tracked: if they were to hack in and delete the imagery, it would be obvious at a glance and (assuming regular backups are done) would just get restored from a recent backup. If instead they modify the images *just* enough to make them grainy, someone glancing at them might not notice, and the computers would now go stupid when they see the person.
 
There is no such thing as AI. Just hype and BS to make the proponents look more impressive than mere software engineers, etc.

"Humans can glance at the images and ...." yeah, because the human mind/eyes/optic nerve are infinitely more complex than any software, or than the #FakeEngineer nitwits can comprehend.
 
"In the case of a criminal, substituting a noisy mugshot for the original mugshot in a crime database would hide his identity. He would become a law abiding citizen without a record to the cameras. Scary stuff indeed!"

A criminal is going to hack into a crime database, but instead of deleting/modifying his record, he's going to pull his mug shot, put a Photoshop filter on it, then upload the modified mug shot in its place? That's some imagination you have.

Worse, if you're good enough to hack that... why not hack a bank and steal $1 billion?
 