Invisible Perturbations: Physical Adversarial Examples Exploiting the Rolling Shutter Effect

11/26/2020
by   Athena Sayles, et al.

Physical adversarial examples for camera-based computer vision have so far been achieved through visible artifacts: a sticker on a Stop sign, colorful borders around eyeglasses, or a 3D-printed object with a colorful texture. An implicit assumption here is that the perturbations must be visible so that a camera can sense them. By contrast, we contribute a procedure to generate, for the first time, physical adversarial examples that are invisible to human eyes. Rather than modifying the victim object with visible artifacts, we modify the light that illuminates the object. We demonstrate how an attacker can craft a modulated light signal that adversarially illuminates a scene and causes targeted misclassifications on a state-of-the-art ImageNet deep learning model. Concretely, we exploit the radiometric rolling shutter effect in commodity cameras to create precise striping patterns that appear on images. To human eyes, it appears as if the object is simply illuminated, but the camera creates an image with stripes that cause ML models to output the attacker-desired classification. We conduct a range of simulation and physical experiments with LEDs, demonstrating targeted attack success rates of up to 84%.
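The radiometric effect the abstract relies on can be illustrated with a short simulation. The sketch below is not the authors' attack pipeline; it only models why a rolling shutter turns temporal light modulation into spatial stripes: each image row is exposed during its own slightly offset time window, so a light flickering faster than the frame rate imprints a row-wise intensity pattern. The function name and the timing parameters (`row_readout_s`, `exposure_s`, `mod_freq_hz`, `depth`) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def rolling_shutter_stripes(image, mod_freq_hz, row_readout_s=30e-6,
                            exposure_s=1e-4, depth=0.5):
    """Simulate row-wise striping from a light source whose intensity
    is modulated as 1 + depth * sin(2*pi*f*t), captured by a
    rolling-shutter camera.

    Row r is exposed during [r * row_readout_s,
    r * row_readout_s + exposure_s]; its brightness scales with the
    average illumination over that window.  `image` is assumed to be
    an HxWxC uint8 array.
    """
    h = image.shape[0]
    rows = np.arange(h)
    t0 = rows * row_readout_s          # per-row exposure start times
    t1 = t0 + exposure_s               # per-row exposure end times
    w = 2 * np.pi * mod_freq_hz
    # Closed-form average of the modulation over each exposure window:
    # (1/T) * integral of (1 + depth*sin(w*t)) dt from t0 to t1.
    avg = 1 + depth * (np.cos(w * t0) - np.cos(w * t1)) / (w * exposure_s)
    striped = image.astype(np.float64) * avg[:, None, None]
    return np.clip(striped, 0, 255).astype(np.uint8)

# Example: a 1 kHz flicker imprints visible horizontal bands on a
# uniformly gray frame, even though the time-averaged illumination
# looks constant to a human observer.
frame = np.full((480, 640, 3), 128, dtype=np.uint8)
striped = rolling_shutter_stripes(frame, mod_freq_hz=1000.0)
```

In the attack described above, the modulation signal itself is what gets optimized, so that the stripes this effect induces act as a targeted adversarial perturbation for the classifier rather than as random banding.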


