Quantifying the Preferential Direction of the Model Gradient in Adversarial Training With Projected Gradient Descent

09/10/2020
by Ricardo Bigolin Lanfredi, et al.

Adversarial training, especially projected gradient descent (PGD), has been the most successful approach for improving robustness against adversarial attacks. After adversarial training, gradients of models with respect to their inputs are meaningful and interpretable by humans. However, the concept of interpretability is not mathematically well established, making it difficult to evaluate quantitatively. We define interpretability as the alignment of the model gradient with the vector pointing toward the closest point of the support of the other class. We propose a method for measuring this alignment in binary classification problems, using generative adversarial model training to produce the smallest residual needed to change the class present in the image. We show that PGD-trained models are more interpretable than the baseline according to this definition, and that our metric yields higher alignment values than a competing metric formulation. We also show that enforcing this alignment increases the robustness of models trained without adversarial training.
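To make the alignment definition concrete, the following is a minimal sketch (not the authors' implementation) of how such a gradient-alignment score could be computed in PyTorch. The names model, x, and residual are hypothetical placeholders, and the residual pointing toward the closest point of the other class's support is assumed to be supplied externally, e.g. by a trained generative model, as described in the abstract.

    import torch
    import torch.nn.functional as F

    def gradient_alignment(model, x, residual):
        # Cosine similarity between the input gradient of a binary classifier's
        # logit and a residual vector pointing toward the closest point of the
        # other class's support (assumed given, e.g. from a generative model).
        # x and residual have the same shape, e.g. (N, C, H, W).
        x = x.clone().detach().requires_grad_(True)
        logit = model(x)                                   # one logit per example
        grad = torch.autograd.grad(logit.sum(), x)[0]      # d(logit)/dx
        return F.cosine_similarity(grad.flatten(1), residual.flatten(1), dim=1)

Averaging this per-example score over a test set would give a single scalar alignment value per model, which is the kind of quantity one could compare between a PGD-trained model and a baseline.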

