Understanding Adversarial Examples Through Deep Neural Network's Response Surface and Uncertainty Regions

06/30/2021
by Juan Shu, et al.

Deep neural networks (DNNs) are popular models implemented in many systems to handle complex tasks such as image classification, object recognition, and natural language processing. Consequently, DNN structural vulnerabilities become part of the security vulnerabilities of those systems. In this paper we study the root cause of DNN adversarial examples. We examine the DNN response surface to understand its classification boundary. Our study reveals a structural problem in the DNN classification boundary that leads to adversarial examples. Existing attack algorithms can generate from a handful to a few hundred adversarial examples given one clean image. We show that there are infinitely many adversarial images given one clean sample, all within a small neighborhood of that sample. We then define DNN uncertainty regions and show that the transferability of adversarial examples is not universal. We also argue that generalization error, the large-sample theoretical guarantee established for DNNs, cannot adequately capture the phenomenon of adversarial examples. New theory is needed to measure DNN robustness.
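The abstract's claim that adversarial images are dense in a small neighborhood of a clean sample can be illustrated by randomly probing the response surface inside an L-infinity ball around that sample. The sketch below is not the paper's method, only a minimal Monte Carlo probe; the names `model`, `x`, and `true_label` are hypothetical placeholders for a trained PyTorch classifier, a clean input tensor scaled to [0, 1], and its ground-truth label, and the radius `eps` is an arbitrary illustrative choice.

```python
import torch

def probe_neighborhood(model, x, true_label, eps=0.03, n_samples=1000):
    """Sample random points in the L-infinity ball of radius eps around a
    clean input x and report the fraction that the model misclassifies."""
    model.eval()
    flips = 0
    with torch.no_grad():
        for _ in range(n_samples):
            # Uniform perturbation inside the eps-ball, clipped to the
            # valid pixel range so every probe is a legal image.
            delta = torch.empty_like(x).uniform_(-eps, eps)
            x_adv = (x + delta).clamp(0.0, 1.0)
            pred = model(x_adv.unsqueeze(0)).argmax(dim=1).item()
            flips += int(pred != true_label)
    # Fraction of the sampled neighborhood that crosses the boundary.
    return flips / n_samples
```

If the flip rate stays strictly positive as `n_samples` grows, the misclassified set appears to have positive volume inside the eps-ball, which is consistent with there being infinitely many adversarial images near a single clean sample rather than a handful of isolated points.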


Related research

05/28/2023 | Amplification trojan network: Attack deep neural networks by amplifying their inherent weakness
Recent works found that deep neural networks (DNNs) can be fooled by adv...

03/10/2019 | Uncertainty Propagation in Deep Neural Network Using Active Subspace
The inputs of deep neural network (DNN) from real-world data usually com...

11/14/2018 | Deep Neural Networks based Modrec: Some Results with Inter-Symbol Interference and Adversarial Examples
Recent successes and advances in Deep Neural Networks (DNN) in machine v...

07/21/2019 | ImageNet-trained deep neural network exhibits illusion-like response to the Scintillating Grid
Deep neural network (DNN) models for computer vision are now capable of ...

02/08/2020 | Attacking Optical Character Recognition (OCR) Systems with Adversarial Watermarks
Optical character recognition (OCR) is widely applied in real applicatio...

09/26/2019 | Adversarial ML Attack on Self Organizing Cellular Networks
Deep Neural Networks (DNN) have been widely adopted in self-organizing n...

08/21/2018 | zoNNscan : a boundary-entropy index for zone inspection of neural models
The training of deep neural network classifiers results in decision boun...
