Does Interpretability of Neural Networks Imply Adversarial Robustness?

12/07/2019
by Adam Noack, et al.

The success of deep neural networks is clouded by two issues that largely remain open to this day: the abundance of adversarial attacks that fool neural networks with small perturbations and the lack of interpretation for the predictions they make. Empirical evidence in the literature, as well as theoretical analysis on simple models, suggests these two seemingly disparate issues may actually be connected, as robust models tend to be more interpretable than non-robust models. In this paper, we provide evidence for the claim that this relationship is bidirectional: models that are forced to have interpretable gradients are more robust to adversarial examples than models trained in a standard manner. With further analysis and experiments, we identify two factors behind this phenomenon, namely the suppression of the gradient and the selective use of features guided by high-quality interpretations, which explain model behaviors under various regularization and target interpretation settings.
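The abstract refers to training models whose input gradients are regularized toward a target interpretation. The sketch below is a minimal, illustrative version of that idea (not the authors' exact method): alongside the usual cross-entropy loss, it penalizes the mismatch between the input gradient of the loss (a simple saliency map) and a given target interpretation, using double backpropagation so the penalty itself is differentiable. The model class, the target_saliency tensor, and the weight lambda_interp are all assumptions introduced for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleNet(nn.Module):
    """A small illustrative classifier (not from the paper)."""

    def __init__(self, in_dim=784, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.net(x)


def interpretation_regularized_loss(model, x, y, target_saliency, lambda_interp=1.0):
    """Cross-entropy plus a penalty pushing the input gradient of the loss
    toward a target interpretation (saliency map). Hypothetical helper."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)
    # Input gradient of the loss, kept in the graph (double backprop)
    # so the penalty can be optimized with respect to model parameters.
    grad = torch.autograd.grad(ce, x, create_graph=True)[0]
    interp_penalty = F.mse_loss(grad, target_saliency)
    return ce + lambda_interp * interp_penalty


if __name__ == "__main__":
    # Random data with illustrative shapes; a target of all zeros
    # corresponds to the "gradient suppression" extreme.
    model = SimpleNet()
    x = torch.randn(32, 784)
    y = torch.randint(0, 10, (32,))
    target = torch.zeros_like(x)
    loss = interpretation_regularized_loss(model, x, y, target)
    loss.backward()
    print(float(loss))
```

In this sketch, an all-zero target corresponds to the "suppression of the gradient" factor mentioned in the abstract, while a target derived from a high-quality interpretation (e.g. a saliency map from a robust model) would correspond to the "selective use of features" factor.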

Related research

Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients (11/26/2017)
Deep neural networks have proven remarkably effective at solving many cl...

Improved and Interpretable Defense to Transferred Adversarial Examples by Jacobian Norm with Selective Input Gradient Regularization (07/09/2022)
Deep neural networks (DNNs) are known to be vulnerable to adversarial ex...

Multitask Learning Strengthens Adversarial Robustness (07/14/2020)
Although deep networks achieve strong accuracy on a range of computer vi...

Proper Network Interpretability Helps Adversarial Robustness in Classification (06/26/2020)
Recent works have empirically shown that there exist adversarial example...

On the Connection Between Adversarial Robustness and Saliency Map Interpretability (05/10/2019)
Recent studies on the adversarial vulnerability of neural networks have ...

Toy Models of Superposition (09/21/2022)
Neural networks often pack many unrelated concepts into a single neuron ...

An Experimental Study of Semantic Continuity for Deep Learning Models (11/19/2020)
Deep learning models suffer from the problem of semantic discontinuity: ...
