Adversarial camera stickers: A physical camera-based attack on deep learning systems

03/21/2019
by Juncheng Li, et al.

Recent work has thoroughly documented the susceptibility of deep learning systems to adversarial examples, but most such instances directly manipulate the digital input to a classifier. Although a smaller line of work considers physical adversarial attacks, in all cases these involve manipulating the object of interest, e.g., putting a physical sticker on an object so that it is misclassified, or manufacturing an object specifically intended to be misclassified. In this work, we consider an alternative question: is it possible to fool deep classifiers, over all perceived objects of a certain type, by physically manipulating the camera itself? We show that this is indeed possible: by placing a carefully crafted and mainly-translucent sticker over the lens of a camera, one can create universal perturbations of the observed images that are inconspicuous, yet reliably misclassify target objects as a different (targeted) class. To accomplish this, we propose an iterative procedure that alternates between updating the attack perturbation (to make it adversarial for a given classifier) and updating the threat model itself (to ensure it is physically realizable). For example, we show that we can achieve physically-realizable attacks that fool ImageNet classifiers in a targeted fashion 49.6% of the time. This presents a new class of physically-realizable threat models to consider in the context of adversarially robust machine learning. Our demo video can be viewed at: https://youtu.be/wUVmL33Fx54
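To make the alternating optimization concrete, the sketch below shows roughly how the classifier-side half of such a procedure could look in PyTorch: the sticker is approximated as a few translucent, blurred colored dots alpha-blended over every input image, and the dot colors and opacities are optimized so that a fixed classifier assigns a chosen target label to all images. This is a minimal illustrative sketch under those assumptions, not the authors' released implementation; all names (dot_mask, apply_sticker, fit_perturbation, num_dots, and so on) are hypothetical, and the physical re-fitting step described in the paper is omitted.

```python
from itertools import cycle

import torch
import torch.nn.functional as F

def dot_mask(h, w, cx, cy, radius, device):
    # Soft circular alpha mask centered at (cx, cy); the smooth falloff
    # mimics the heavy blur of an out-of-focus dot sitting on the lens.
    ys = torch.arange(h, device=device, dtype=torch.float32).view(-1, 1)
    xs = torch.arange(w, device=device, dtype=torch.float32).view(1, -1)
    d2 = (ys - cy) ** 2 + (xs - cx) ** 2
    return torch.sigmoid((radius ** 2 - d2) / (0.25 * radius ** 2))

def apply_sticker(images, centers, radii, colors, alphas):
    # Alpha-blend each translucent dot over a batch of images (B, 3, H, W).
    out = images
    _, _, h, w = images.shape
    for (cx, cy), r, color, a in zip(centers, radii, colors, alphas):
        m = (dot_mask(h, w, cx, cy, r, images.device) * a).view(1, 1, h, w)
        out = (1 - m) * out + m * color.view(1, 3, 1, 1)
    return out.clamp(0, 1)

def fit_perturbation(model, loader, target_class, num_dots=5,
                     steps=200, device="cpu"):
    # Fixed dot geometry for simplicity; only the colors and opacities
    # are optimized here. The model's weights are never updated.
    centers = [torch.randint(0, 224, (2,)).tolist() for _ in range(num_dots)]
    radii = [40.0] * num_dots
    colors = torch.rand(num_dots, 3, device=device, requires_grad=True)
    alphas = torch.full((num_dots,), 0.3, device=device, requires_grad=True)
    opt = torch.optim.Adam([colors, alphas], lr=0.05)
    data = cycle(loader)
    for _ in range(steps):
        x, _ = next(data)
        x = x.to(device)
        logits = model(apply_sticker(x, centers, radii, colors, alphas))
        # Universal, targeted objective: push *every* image to target_class.
        target = torch.full((x.size(0),), target_class, device=device)
        loss = F.cross_entropy(logits, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            colors.clamp_(0, 1)
            alphas.clamp_(0.05, 0.5)   # keep the sticker translucent
    return centers, radii, colors.detach(), alphas.detach()
```

A targeted success rate could then be estimated by applying apply_sticker to held-out images and counting how often the classifier outputs target_class. The paper's full procedure additionally alternates this digital step with re-fitting the dot model to photographs of printed stickers, so the optimized pattern remains physically realizable.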

Related research
07/08/2016 · Adversarial examples in the physical world
Most existing machine learning classifiers are highly vulnerable to adve...

02/15/2021 · Universal Adversarial Examples and Perturbations for Quantum Classifiers
Quantum machine learning explores the interplay between machine learning...

10/16/2018 · Projecting Trouble: Light Based Adversarial Attacks on Deep Learning Classifiers
This work demonstrates a physical attack on a deep learning image classi...

12/15/2017 · Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
Deep learning models have achieved high performance on many tasks, and t...

03/02/2023 · AdvRain: Adversarial Raindrops to Attack Camera-based Smart Vision Systems
Vision-based perception modules are increasingly deployed in many applic...

10/07/2021 · One Thing to Fool them All: Generating Interpretable, Universal, and Physically-Realizable Adversarial Features
It is well understood that modern deep networks are vulnerable to advers...
