Quantifying the Preferential Direction of the Model Gradient in Adversarial Training With Projected Gradient Descent

09/10/2020
by Ricardo Bigolin Lanfredi, et al.

Adversarial training, especially projected gradient descent (PGD), has been the most successful approach for improving robustness against adversarial attacks. After adversarial training, the gradients of models with respect to their inputs are meaningful and can be interpreted by humans. However, the concept of interpretability is not mathematically well established, making it difficult to evaluate quantitatively. We define interpretability as the alignment of the model gradient with the vector pointing toward the closest point of the support of the other class. We propose a method for measuring this alignment in binary classification problems, using generative adversarial model training to produce the smallest residual needed to change the class present in the image. We show that PGD-trained models are more interpretable than the baseline according to this definition, and that our metric yields higher alignment values than a competing metric formulation. We also show that enforcing this alignment increases the robustness of models without adversarial training.
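In practice, the alignment described above can be measured as a cosine similarity between the model's input gradient and the residual that carries the image to the other class. The sketch below is a minimal PyTorch illustration under stated assumptions; the function name `gradient_alignment` and the assumption that `residual` is produced by a separately trained generative model are hypothetical and not the authors' released implementation.

```python
import torch
import torch.nn.functional as F


def gradient_alignment(model, x, y, residual):
    """Cosine similarity between the model's input gradient and a residual
    pointing from x toward the closest point of the support of the other class.

    Assumptions (illustrative, not the paper's code):
      - `model` maps an image batch to two-class logits,
      - `residual` has the same shape as `x` and comes from a separately
        trained generative adversarial model.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    # Gradient of the loss w.r.t. the input: the direction that most
    # increases the loss, i.e. moves the image toward the other class.
    (grad,) = torch.autograd.grad(loss, x)
    # Compare directions per example, ignoring magnitudes.
    return F.cosine_similarity(grad.flatten(1), residual.flatten(1), dim=1)
```

Values near 1 indicate that the model gradient points in the same direction as the class-changing residual; averaging this quantity over a test set would give a scalar alignment score in the spirit of the proposed metric.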
