On Saliency Maps and Adversarial Robustness

06/14/2020
by Puneet Mangla, et al.

A very recent trend has emerged to couple the notions of interpretability and adversarial robustness, unlike earlier efforts which focused solely on good interpretations or on robustness against adversaries. Works have shown that adversarially trained models exhibit more interpretable saliency maps than their non-robust counterparts, and that this behavior can be quantified by considering the alignment between the input image and the saliency map. In this work, we provide a different perspective on this coupling and propose a method, Saliency based Adversarial training (SAT), that uses saliency maps to improve the adversarial robustness of a model. In particular, we show that using annotations such as bounding boxes and segmentation masks, already provided with a dataset, as weak saliency maps suffices to improve adversarial robustness with no additional effort to generate the perturbations themselves. Our empirical results on the CIFAR-10, CIFAR-100, Tiny ImageNet and Flower-17 datasets consistently corroborate our claim, showing improved adversarial robustness using our method. We also show how using finer and stronger saliency maps leads to more robust models, and how integrating SAT with existing adversarial training methods further boosts their performance.
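As a rough illustration of the idea, here is a minimal PyTorch sketch of one training step in which a weak saliency map (e.g. a segmentation mask shipped with the dataset) is converted into a fixed perturbation, so no attack needs to be run. The perturbation formula, the function name sat_training_step, and the epsilon budget are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of saliency-based adversarial training (SAT).
# Assumption (not taken from the abstract): the weak saliency map is
# used directly as the perturbation direction, scaled to an L-inf
# budget `epsilon`, in place of an attack-generated perturbation.
import torch
import torch.nn.functional as F

def sat_training_step(model, optimizer, x, y, saliency, epsilon=8 / 255):
    """One training step on saliency-perturbed inputs.

    x        : (B, C, H, W) input images in [0, 1]
    y        : (B,) integer class labels
    saliency : (B, 1, H, W) weak saliency map in [0, 1],
               e.g. a binary segmentation mask
    """
    # Center the map so salient pixels are pushed up and background
    # pixels down, then clip back to the valid image range. This
    # replaces the usual inner attack loop at zero extra cost.
    perturbation = epsilon * torch.sign(saliency - 0.5)
    x_pert = torch.clamp(x + perturbation, 0.0, 1.0)

    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_pert), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the perturbation is precomputed from annotations rather than from model gradients, each step costs the same as standard training; finer saliency maps simply change the direction of the fixed perturbation.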

