Adversarial examples in the physical world

07/08/2016
by Alexey Kurakin et al.

Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data that has been modified very slightly in a way intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not notice them at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to attack machine learning systems, even when the adversary has no access to the underlying model. Until now, all previous work has assumed a threat model in which the adversary can feed data directly into the machine learning classifier. This is not always the case for systems operating in the physical world, for example those using signals from cameras and other sensors as input. This paper shows that even in such physical-world scenarios, machine learning systems are vulnerable to adversarial examples. We demonstrate this by feeding adversarial images obtained from a cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. We find that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera.
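
The digital half of this attack is standard: craft a small perturbation that raises the classifier's loss, then test whether it survives printing and re-photographing. One common crafting method, and one the paper evaluates, is the fast gradient sign method (FGSM). Below is a minimal sketch, not the paper's exact pipeline: it assumes a pretrained torchvision Inception v3, images scaled to [0, 1], and omits the model's input normalization; the function name and usage values are illustrative.

# Minimal FGSM sketch (PyTorch). Assumes torchvision >= 0.13, images
# scaled to [0, 1]; input normalization is omitted for brevity.
import torch
import torch.nn.functional as F
from torchvision import models

# ImageNet Inception v3, the classifier family used in the paper's experiments.
model = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
model.eval()

def fgsm(model, x, y, eps):
    """Return an adversarial copy of x within an L-infinity ball of radius eps."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Take one step in the direction that increases the loss,
    # then clip back to the valid pixel range.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage: x is a (1, 3, 299, 299) image tensor in [0, 1],
# y its integer class label.
# x_adv = fgsm(model, x, y, eps=8 / 255)

In the paper's physical-world setup, an image like x_adv would then be printed, photographed with a cell-phone camera, and fed back to the classifier.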

Related research

06/21/2018
Detecting Adversarial Examples Based on Steganalysis
Deep Neural Networks (DNNs) have recently led to significant improvement...

08/17/2021
When Should You Defend Your Classifier – A Game-theoretical Analysis of Countermeasures against Adversarial Examples
Adversarial machine learning, i.e., increasing the robustness of machine...

10/18/2018
A Training-based Identification Approach to VIN Adversarial Examples
With the rapid development of Artificial Intelligence (AI), the problem ...

03/21/2019
Adversarial camera stickers: A physical camera-based attack on deep learning systems
Recent work has thoroughly documented the susceptibility of deep learnin...

02/06/2022
Pipe Overflow: Smashing Voice Authentication for Fun and Profit
Recent years have seen a surge of popularity of acoustics-enabled person...

10/27/2019
Understanding and Quantifying Adversarial Examples Existence in Linear Classification
State-of-art deep neural networks (DNN) are vulnerable to attacks by adv...
