
Recent Advances in Understanding Adversarial Robustness of Deep Neural Networks

by Tao Bai, et al.

Adversarial examples are inevitable on the road to pervasive deployment of deep neural networks (DNNs). Imperceptible perturbations applied to natural samples can lead DNN-based classifiers to output wrong predictions with high confidence. It is therefore increasingly important to obtain models that are robust, i.e., resistant to adversarial examples. In this paper, we survey recent advances in understanding this intriguing property, adversarial robustness, from different perspectives. We first give preliminary definitions of adversarial attacks and robustness. We then review frequently used benchmarks and theoretically proven bounds for adversarial robustness. Next, we provide an overview of analyses of the correlations between adversarial robustness and other critical properties of DNN models. Lastly, we introduce recent arguments on the potential costs of adversarial training, which have attracted wide attention from the research community.
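To make the opening claim concrete, the following is a minimal sketch of how an imperceptible perturbation can flip a classifier's prediction, using the Fast Gradient Sign Method (FGSM) on a toy linear softmax model. The model, weights, and epsilon value are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Toy linear classifier: logits = W @ x + b (weights are illustrative only)
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))  # 3 classes, 4 input features
b = np.zeros(3)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm(x, y, eps):
    """Fast Gradient Sign Method: x' = x + eps * sign(dL/dx),
    where L is the cross-entropy loss for label y."""
    p = softmax(W @ x + b)
    one_hot = np.eye(3)[y]
    # For a linear model, the input gradient of cross-entropy is W^T (p - one_hot)
    grad_x = W.T @ (p - one_hot)
    return x + eps * np.sign(grad_x)

x = rng.normal(size=4)
y = int(np.argmax(softmax(W @ x + b)))  # take the model's own prediction as the label
x_adv = fgsm(x, y, eps=0.5)
print("clean prediction:", y)
print("perturbed prediction:", int(np.argmax(softmax(W @ x_adv + b))))
```

Because the loss is convex in the input for a linear model, stepping in the sign of the input gradient is guaranteed to increase the loss on the true label, even though each input coordinate moves by at most eps.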



