A Review of Adversarial Attack and Defense for Classification Methods

11/18/2021
by Yao Li, et al.

Despite the efficiency and scalability of machine learning systems, recent studies have demonstrated that many classification methods, especially deep neural networks (DNNs), are vulnerable to adversarial examples, i.e., examples that are carefully crafted to fool a well-trained classification model while remaining indistinguishable from natural data to humans. This makes it potentially unsafe to apply DNNs or related methods in security-critical areas. Since this issue was first identified by Biggio et al. (2013) and Szegedy et al. (2014), much work has been done in this field, including the development of attack methods to generate adversarial examples and the construction of defense techniques to guard against such examples. This paper aims to introduce the topic and its latest developments to the statistical community, focusing primarily on the generation of adversarial examples and the defense against them. Computing code (in Python and R) used in the numerical experiments is publicly available for readers to explore the surveyed methods. The authors hope this paper will encourage more statisticians to work in this important and exciting field of generating and defending against adversarial examples.
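Many of the attack methods surveyed in work like this perturb an input along the gradient of the model's loss. As a concrete illustration, here is a minimal sketch of the Fast Gradient Sign Method (FGSM) of Goodfellow et al. (2015), one of the earliest such attacks; the toy model, tensors, and epsilon value below are illustrative assumptions, not code from the paper's released repository.

```python
# A minimal FGSM sketch (assumed setup, not the authors' released code).
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb input x so the classifier `model` is more likely to
    misclassify it, keeping the perturbation within an L-infinity
    ball of radius epsilon (so it stays visually indistinguishable)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip back
    # to the valid pixel range [0, 1].
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage on a toy classifier and a random "image":
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)    # stand-in for a natural example
y = torch.tensor([3])           # stand-in for its true label
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())  # perturbation stays within epsilon
```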


Related research

03/13/2021  Internal Wasserstein Distance for Adversarial Attack and Defense
Deep neural networks (DNNs) are vulnerable to adversarial examples that ...

09/30/2020  Erratum Concerning the Obfuscated Gradients Attack on Stochastic Activation Pruning
Stochastic Activation Pruning (SAP) (Dhillon et al., 2018) is a defense ...

11/06/2019  Reversible Adversarial Examples based on Reversible Image Transformation
Recent studies show that widely used deep neural networks (DNNs) are vul...

07/10/2020  Improved Detection of Adversarial Images Using Deep Neural Networks
Machine learning techniques are immensely deployed in both industry and ...

09/13/2018  Adversarial Examples: Opportunities and Challenges
With the advent of the era of artificial intelligence (AI), deep neural n...

12/21/2021  A Theoretical View of Linear Backpropagation and Its Convergence
Backpropagation is widely used for calculating gradients in deep neural ...

06/03/2019  A Surprising Density of Illusionable Natural Speech
Recent work on adversarial examples has demonstrated that most natural i...
