Google's Cloud Vision API is Easily Fooled by Noise

Discussion in '[H]ard|OCP Front Page News' started by cageymaru, Apr 19, 2017.

  1. cageymaru

    cageymaru [H]ard|News Staff Member

    Messages:
    15,500
    Joined:
    Apr 10, 2003
    Google has created a machine learning platform called the Cloud Vision API that can categorize images simply by looking at them. It can also pick out and identify faces in a crowd, flag inappropriate content, and determine that a house on lakefront property is different from a lake ecosystem. Researchers at the University of Washington have found that the entire system can be attacked by adding noise to the images it analyzes.
    Google's Cloud Vision API is easily fooled when noise is added to an image. If enough noise is intentionally added, the system fails and can no longer determine that the noisy image is the same as the original. Humans can glance at the two images and recognize the match instantly. Only 14.25% noise needed to be added to the image to completely fool the A.I.

    This is of particular importance, as law enforcement could be using this system to find child porn, solve murders, or identify criminals walking in front of CCTV cameras. In the case of a criminal, substituting a noisy mugshot for the original in a crime database would hide his identity; to the cameras, he would become a law-abiding citizen without a record. Scary stuff indeed!

    At this point readers may ask why this matters: the authors suggest that deliberately adding noise to images could be an attack vector because “an adversary can easily bypass an image filtering system, by adding noise to an image with inappropriate content.” That sounds plausible, because bad actors could easily learn that images are subject to machine analysis. For example, The Register recently learned of a drone designed to photograph supermarket shelves so that image analysis can automatically figure out what stock needs to be re-ordered. An attack on such a trove of images that results in empty shelves could see customers decide to shop elsewhere.
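
    To make the attack concrete, here is a minimal sketch of the kind of perturbation being described: corrupting roughly 14.25% of an image's pixels with impulse (salt-and-pepper) noise before the image reaches an analysis service such as the Cloud Vision API. This is an illustration only, assuming NumPy and Pillow; the file names are hypothetical and this is not the researchers' actual code.

        # Illustrative sketch only: impulse (salt-and-pepper) noise at ~14.25% density,
        # the level the researchers report was enough to fool the API.
        import numpy as np
        from PIL import Image

        def add_impulse_noise(path_in, path_out, density=0.1425, seed=0):
            rng = np.random.default_rng(seed)
            img = np.array(Image.open(path_in).convert("RGB"))
            hit = rng.random(img.shape[:2]) < density   # pick ~14.25% of pixels
            salt = rng.random(img.shape[:2]) < 0.5      # half forced white, half black
            img[hit & salt] = 255
            img[hit & ~salt] = 0
            Image.fromarray(img).save(path_out)

        # Hypothetical usage: the noisy copy would then be submitted in place of
        # the original to an image-analysis service.
        # add_impulse_noise("mugshot.png", "mugshot_noisy.png")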
     
  2. cia1413

    cia1413 n00bie

    Messages:
    51
    Joined:
    Aug 15, 2012
    Ok, so what would a real picture of biology be?
     
    MaZa likes this.
  3. Jamison Collins

    Jamison Collins [H]Lite

    Messages:
    70
    Joined:
    Dec 31, 2016
    Who put in this trash anti-Google spin? If you want to use ML to automatically detect child porn or do facial recognition, you would use a specialized tool built for exactly that purpose, not such a general approach.

    I'm sure "some people are saying" that law enforcement is going to be crippled as a result.

    Hogwash.
     
  4. cageymaru

    cageymaru [H]ard|News Staff Member

    Messages:
    15,500
    Joined:
    Apr 10, 2003
    It's not an anti-Google spin; it's researchers breaking things in the name of science to make them better. Also, this is the same tool that scans every photo you upload to Google Drive to make sure you're not Jack the Ripper.
     
  5. jardows

    jardows Limp Gawd

    Messages:
    444
    Joined:
    Jun 10, 2015
    And likely sharing code with the algorithms to detect "objectionable" content on YouTube.
     
  6. Silverlok

    Silverlok n00bie

    Messages:
    20
    Joined:
    Dec 15, 2013
    "...And likely sharing code with the algorithms to detect "objectionable" content on YouTube...."
    "likely"??
    heh.
    When algorithms created through machine learning make decisions, who can say how they define reality...yes?

    More on topic: was this an 'oversight', or an intentional method of operation?

    We will soon know, since to fix this problem Google only has to have its A.I. start watching/learning from images reproduced on an analog CRT receiving a weak (interference-filled) transmission signal.
     
  7. Dead Parrot

    Dead Parrot Gawd

    Messages:
    790
    Joined:
    Mar 4, 2013
    Now the really scary trick would be to find a noise pattern that doesn't alter the original picture very much as far as humans are concerned but tricks the machine code into consistently returning the wrong identification. So a human would see a clear picture of George Washington but the machine would keep returning Benedict Arnold.
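
    For what it's worth, that trick already has a name in the research literature: a targeted adversarial example. Here is a rough sketch of the idea, assuming PyTorch with a stand-in torchvision classifier (not Google's model, and not the Washington paper's method): nudge every pixel a tiny amount in the direction that makes the network more confident in a chosen wrong label, so the picture still looks unchanged to a person.

        # Sketch of a single-step targeted perturbation (FGSM-style), using a
        # generic pretrained classifier as a stand-in for the real system.
        import torch
        import torchvision.models as models
        import torchvision.transforms as T
        from PIL import Image

        model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
        preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
        # (ImageNet normalization omitted so pixel values stay in [0, 1].)

        def targeted_fgsm(image_path, target_class, epsilon=2 / 255):
            """Nudge pixels slightly toward a chosen (wrong) class."""
            x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
            x.requires_grad_(True)
            loss = torch.nn.functional.cross_entropy(
                model(x), torch.tensor([target_class]))
            loss.backward()
            # Stepping against the gradient of the target-class loss pushes the
            # prediction toward target_class; epsilon keeps the change tiny.
            return (x - epsilon * x.grad.sign()).clamp(0, 1).detach()

        # Hypothetical usage: bias the classifier toward ImageNet class 0.
        # adv = targeted_fgsm("portrait.jpg", target_class=0)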
     
    xX_Jack_Carver_Xx likes this.
  8. VooDooPC

    VooDooPC n00bie

    Messages:
    21
    Joined:
    Mar 1, 2016
    "In the case of a criminal, substituting a noisy mugshot for the original mugshot in a crime database would hide his identity. He would become a law abiding citizen without a record to the cameras. Scary stuff indeed!"

    A criminal is going to hack into a crime database, but instead of deleting/modifying his record, he's going to pull his mug shot, put a Photoshop filter on it, then upload the modified mug shot in its place? That's some imagination you have.
     
  9. Jaymzkerten

    Jaymzkerten Limp Gawd

    Messages:
    172
    Joined:
    Jul 30, 2012
    Well, imagine if said criminal were highly tracked: if they were to hack in and delete the imagery, it would be obvious at a glance and (assuming regular backups are done) it would just get restored from a recent backup. If instead they modified the images *just* enough to make them grainy, someone glancing at them might not notice, and the computers would now go stupid when they see the person.
     
  10. xX_Jack_Carver_Xx

    xX_Jack_Carver_Xx 2[H]4U

    Messages:
    2,514
    Joined:
    Jun 6, 2005
    There is no such thing as AI. Just hype and BS to make the proponents look more impressive than just mere software engineers, etc.

    "Humans can glance at the images and ...." yeah because the human mind/eyes/optic nerve are infinitely more complex than any software or the ability of the #FakeEngineer nit wits to comprehend.
     
  11. xX_Jack_Carver_Xx

    xX_Jack_Carver_Xx 2[H]4U

    Messages:
    2,514
    Joined:
    Jun 6, 2005
    Worse, if you're good enough to hack that... why not hack a bank and steal $1 billion?