
On the Connection Between Adversarial Robustness and Saliency Map Interpretability

by Christian Etmann et al.
University of Cambridge
Universität Bremen

Recent studies on the adversarial vulnerability of neural networks have shown that models trained to be more robust to adversarial attacks exhibit more interpretable saliency maps than their non-robust counterparts. We aim to quantify this behavior by considering the alignment between the input image and its saliency map. We hypothesize that as the distance to the decision boundary grows, so does the alignment. This connection is strictly true in the case of linear models. We confirm these theoretical findings with experiments on models trained with a local Lipschitz regularization and identify where the non-linear nature of neural networks weakens the relation.
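The linear case mentioned in the abstract can be made concrete in a few lines. The sketch below (illustrative, not the authors' code) uses a linear classifier f(x) = w·x: its saliency map (the input gradient) is simply w, and the alignment |⟨x, ∇f(x)⟩| / ‖∇f(x)‖ coincides exactly with the distance of x to the decision boundary w·x = 0.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=8)   # weights of a linear model f(x) = w.x
x = rng.normal(size=8)   # a flattened input "image"

# For a linear model, the saliency map (gradient of f w.r.t. x) is w itself.
saliency = w

# Alignment between input and saliency map, as in the abstract's hypothesis.
alignment = abs(x @ saliency) / np.linalg.norm(saliency)

# Distance of x to the decision boundary {z : w.z = 0}.
distance = abs(x @ w) / np.linalg.norm(w)

# In the linear case the two quantities agree exactly.
print(np.isclose(alignment, distance))
```

For non-linear networks the gradient varies with x, so this identity only holds approximately, which is where the paper's experiments locate the weakening of the relation.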


