On the Benefits of Models with Perceptually-Aligned Gradients

05/04/2020
by Gunjan Aggarwal, et al.

Adversarially robust models have been shown to learn more robust and interpretable features than standard-trained models. As shown in [<cit.>], such robust models exhibit useful interpretable properties: their input gradients align perceptually well with images, and adding a large targeted adversarial perturbation produces an image resembling the target class. We perform experiments to show that interpretable, perceptually-aligned gradients are present even in models that are not highly robust to adversarial attacks. Specifically, we perform adversarial training with attacks of varying maximum-perturbation bounds. Adversarial training with a low maximum-perturbation bound yields models with interpretable features at only a slight drop in performance on clean samples. In this paper, we leverage models with interpretable, perceptually-aligned features and show that adversarial training with a low maximum-perturbation bound can improve model performance on zero-shot and weakly supervised localization tasks.
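To make the setup concrete, below is a minimal sketch of adversarial training with a small maximum-perturbation bound. It assumes a PyTorch image classifier with inputs in [0, 1] and uses an L-infinity PGD attack; the function names and the epsilon, step-size, and step-count settings are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch: adversarial training with a deliberately small L-inf bound.
# Assumes a PyTorch classifier over images scaled to [0, 1].
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=2/255, alpha=0.5/255, steps=7):
    """Untargeted L-inf PGD: perturb x within an eps-ball around the
    original image to maximize the classification loss."""
    x_adv = x.clone().detach()
    x_adv = x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)  # random start
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()        # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project back to eps-ball
            x_adv = x_adv.clamp(0, 1)                  # keep a valid image
    return x_adv.detach()

def train_epoch(model, loader, optimizer, eps=2/255):
    """One epoch of adversarial training: fit the model on PGD examples
    crafted with a small perturbation bound."""
    model.train()
    for x, y in loader:
        x_adv = pgd_attack(model, x, y, eps=eps)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```

The design choice this illustrates is the abstract's central one: with a small bound (e.g., 2/255 rather than the 8/255 common in robustness benchmarks), the model gives up little clean accuracy while still acquiring the perceptually-aligned gradients that the localization tasks exploit.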


