A Survey of Robust Adversarial Training in Pattern Recognition: Fundamental, Theory, and Methodologies

03/26/2022
by Zhuang Qian, et al.

In the last few decades, deep neural networks have achieved remarkable success in machine learning, computer vision, and pattern recognition. Recent studies, however, show that neural networks (both shallow and deep) can be easily fooled by certain imperceptibly perturbed input samples called adversarial examples. This security vulnerability has motivated a large body of research in recent years, since the widespread deployment of neural networks could introduce real-world threats. To address the robustness issue against adversarial examples, particularly in pattern recognition, robust adversarial training has become one mainstream approach, and various ideas, methods, and applications have flourished in the field. Yet a deep understanding of adversarial training, including its characteristics, interpretations, theories, and connections among different models, has remained elusive. In this paper, we present a comprehensive survey offering a systematic and structured investigation of robust adversarial training in pattern recognition. We start with fundamentals, including the definition, notation, and properties of adversarial examples. We then introduce a unified theoretical framework for defending against adversarial samples, namely robust adversarial training, with visualizations and interpretations of why adversarial training can lead to model robustness. Connections are also established between adversarial training and other traditional learning theories. After that, we summarize, review, and discuss various attack and defense/training methodologies in a structured way. Finally, we present analysis, outlook, and remarks on adversarial training.
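As background for the min-max formulation that robust adversarial training refers to, the sketch below illustrates one common instantiation, PGD-based adversarial training: an inner loop approximately maximizes the loss within an L_inf ball of radius eps around each input, and an outer step minimizes the loss on the perturbed batch. This is a minimal illustrative sketch, not the survey's own implementation; the PyTorch helper names (pgd_attack, adversarial_training_epoch) and the hyperparameters eps, alpha, and steps are assumed values chosen only for illustration.

# Illustrative sketch of PGD-based adversarial training (min-max training).
# Assumes image inputs scaled to [0, 1]; eps/alpha/steps are example values.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Inner maximization: projected gradient ascent within an L_inf ball."""
    x_adv = x.clone().detach()
    x_adv = x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)  # random start
    x_adv = torch.clamp(x_adv, 0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                      # ascent step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)    # project back into the eps-ball
            x_adv = torch.clamp(x_adv, 0.0, 1.0)                     # keep a valid image
    return x_adv.detach()

def adversarial_training_epoch(model, loader, optimizer, device="cpu"):
    """Outer minimization: standard gradient descent on adversarially perturbed batches."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()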


Related research

02/02/2021 - Recent Advances in Adversarial Training for Adversarial Robustness
Adversarial training is one of the most effective approaches defending a...

01/07/2021 - Robust Text CAPTCHAs Using Adversarial Examples
CAPTCHA (Completely Automated Public Turing test to tell Computers and H...

12/24/2020 - Exploring Adversarial Examples via Invertible Neural Networks
Adversarial examples (AEs) are images that can mislead deep neural netwo...

04/05/2018 - Unifying Bilateral Filtering and Adversarial Training for Robust Neural Networks
Recent analysis of deep neural networks has revealed their vulnerability...

09/11/2019 - Towards Noise-Robust Neural Networks via Progressive Adversarial Training
Adversarial examples, intentionally designed inputs tending to mislead d...

09/02/2021 - Regional Adversarial Training for Better Robust Generalization
Adversarial training (AT) has been demonstrated as one of the most promi...
