
On the Connection Between Adversarial Robustness and Saliency Map Interpretability

05/10/2019
by Christian Etmann, et al.
University of Cambridge · Universität Bremen

Recent studies on the adversarial vulnerability of neural networks have shown that models trained to be more robust to adversarial attacks exhibit more interpretable saliency maps than their non-robust counterparts. We aim to quantify this behavior by considering the alignment between the input image and its saliency map. We hypothesize that as the distance to the decision boundary grows, so does the alignment. This connection is strictly true in the case of linear models. We confirm these theoretical findings with experiments based on models trained with a local Lipschitz regularization and identify where the non-linear nature of neural networks weakens the relation.
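To make the linear case concrete: for a binary linear classifier f(x) = w·x + b, the saliency map (the input gradient) is the constant weight vector w, so the alignment |⟨x, ∇f(x)⟩| / ‖∇f(x)‖ coincides with the distance to the decision boundary |f(x)| / ‖w‖ up to the bias term. The sketch below illustrates this with hypothetical, randomly drawn w, b, and x; it is not the authors' code.

```python
# Minimal sketch (illustrative only): for a linear model the saliency map is
# the constant gradient w, so alignment and distance to the decision boundary
# differ only by the bias term.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear model and input; dimensions and values are arbitrary.
w = rng.normal(size=784)          # weight vector
b = 0.1                           # bias
x = rng.normal(size=784)          # stand-in for a flattened input image

saliency = w                      # ∇_x f(x) = w for f(x) = w·x + b

alignment = abs(x @ saliency) / np.linalg.norm(saliency)
distance_to_boundary = abs(x @ w + b) / np.linalg.norm(w)

print(f"alignment            : {alignment:.4f}")
print(f"distance to boundary : {distance_to_boundary:.4f}")
# Up to the bias b, the two quantities coincide, which is the sense in which
# the connection is "strictly true" for linear models.
```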


Related research

06/14/2020 · On Saliency Maps and Adversarial Robustness
A very recent trend has emerged to couple the notion of interpretability...

02/22/2021 · Resilience of Bayesian Layer-Wise Explanations under Adversarial Attacks
We consider the problem of the stability of saliency-based explanations...

12/07/2019 · Does Interpretability of Neural Networks Imply Adversarial Robustness?
The success of deep neural networks is clouded by two issues that largel...

11/20/2019 · Analysis of Deep Networks for Monocular Depth Estimation Through Adversarial Attacks with Proposal of a Defense Method
In this paper, we consider adversarial attacks against a system of monoc...

11/16/2022 · Improving Interpretability via Regularization of Neural Activation Sensitivity
State-of-the-art deep neural networks (DNNs) are highly effective at tac...

06/03/2022 · Adversarial Attacks on Human Vision
This article presents an introduction to visual attention retargeting, i...

02/26/2022 · Adversarial robustness of sparse local Lipschitz predictors
This work studies the adversarial robustness of parametric functions com...