A Useful Taxonomy for Adversarial Robustness of Neural Networks

10/23/2019
by Leslie N. Smith, et al.

Adversarial attacks and defenses are currently active areas of research in the deep learning community. A recent review paper divided defense approaches into three categories: gradient masking, robust optimization, and adversarial example detection. We partition gradient masking and robust optimization differently: (1) increasing the intra-class compactness and inter-class separation of feature vectors improves adversarial robustness, and (2) marginalizing or removing non-robust image features also improves adversarial robustness. This reframing offers a fresh perspective that provides insight into the underlying factors that enable training more robust networks, and it can help inspire novel solutions. In addition, several papers in the adversarial defense literature claim there is a cost to adversarial robustness, or a trade-off between robustness and accuracy; under the proposed taxonomy, we hypothesize that this trade-off is not universal. We follow up on our taxonomy with several challenges to the deep learning research community that build on the connections and insights in this paper.
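To make category (1) concrete, the sketch below shows one common way to encourage intra-class compactness and inter-class separation of feature vectors: a center-loss-style auxiliary term added to the usual cross-entropy objective. This is a minimal PyTorch sketch, not the paper's own method; the `CenterLoss` class and the weighting factor `lambda_c` are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    """Auxiliary loss that pulls each feature vector toward a learned
    per-class center, encouraging intra-class compactness. Paired with
    cross-entropy, the softmax objective pushes class centers apart,
    encouraging inter-class separation."""

    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        # One learnable center per class in feature space.
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Mean squared distance between each feature and its class center.
        diffs = features - self.centers[labels]
        return diffs.pow(2).sum(dim=1).mean()

# Illustrative usage: combine with the standard classification loss.
# total_loss = F.cross_entropy(logits, labels) + lambda_c * center_loss(feats, labels)
```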


Related research

07/01/2020 · Opportunities and Challenges in Deep Learning Adversarial Robustness: A Survey
As we seek to deploy machine learning models beyond virtual and controll...

09/09/2020 · SoK: Certified Robustness for Deep Neural Networks
Great advancement in deep neural networks (DNNs) has led to state-of-the...

03/03/2020 · Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks
The field of defense strategies against adversarial attacks has signific...

02/15/2022 · Holistic Adversarial Robustness of Deep Learning Models
Adversarial robustness studies the worst-case performance of a machine l...

06/01/2023 · Adversarial Robustness in Unsupervised Machine Learning: A Systematic Review
As the adoption of machine learning models increases, ensuring robust mo...

03/24/2023 · Feature Separation and Recalibration for Adversarial Robustness
Deep neural networks are susceptible to adversarial attacks due to the a...
