Adversarial Unlearning: Reducing Confidence Along Adversarial Directions

06/03/2022
by Amrith Setlur et al.

Supervised learning methods trained with maximum likelihood objectives often overfit on training data. Most regularizers that prevent overfitting either increase confidence on additional examples (e.g., data augmentation, adversarial training) or reduce it on training data (e.g., label smoothing). In this work we propose a complementary regularization strategy that reduces confidence on self-generated examples. The method, which we call RCAD (Reducing Confidence along Adversarial Directions), aims to reduce confidence on out-of-distribution examples lying along directions adversarially chosen to increase training loss. In contrast to adversarial training, RCAD does not try to make the model robustly output the original label; rather, it regularizes the model to have reduced confidence on points generated using much larger perturbations than in conventional adversarial training. RCAD can be easily integrated into training pipelines with a few lines of code. Despite its simplicity, we find on many classification benchmarks that RCAD can be added to existing techniques (e.g., label smoothing, MixUp training) to increase test accuracy by 1-3% in absolute value, with more significant gains in the low-data regime. We also provide a theoretical analysis that helps to explain these benefits in simplified settings, showing that RCAD can provably help the model unlearn spurious features in the training data.
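The abstract describes the full recipe: take a large step along a direction chosen to increase each example's training loss, then penalize confident predictions on the resulting point rather than enforcing the original label. Below is a minimal PyTorch sketch of such an objective; the function name rcad_loss, the FGSM-style sign-of-gradient direction, and the step_size and lam defaults are illustrative assumptions, not the paper's exact parameterization.

```python
import torch
import torch.nn.functional as F

def rcad_loss(model, x, y, step_size=1.0, lam=0.1):
    """Sketch of an RCAD-style objective: cross-entropy on clean data
    plus an entropy-maximization term on self-generated points along
    adversarial directions. step_size is deliberately much larger than
    a typical adversarial-training epsilon; both defaults are
    illustrative, not values from the paper."""
    # Standard cross-entropy on the clean batch; inputs must track
    # gradients so we can compute the adversarial direction.
    x = x.clone().detach().requires_grad_(True)
    ce = F.cross_entropy(model(x), y)

    # Adversarial direction: any direction that increases the training
    # loss. The sign of the input gradient is used here as a simple,
    # shape-agnostic choice (an assumption).
    (grad,) = torch.autograd.grad(ce, x, retain_graph=True)
    x_tilde = (x + step_size * grad.sign()).detach()

    # Reduce confidence on the self-generated points: maximize the
    # predictive entropy, i.e. subtract it from the loss. No label is
    # enforced on x_tilde, unlike adversarial training.
    probs = F.softmax(model(x_tilde), dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1).mean()

    return ce - lam * entropy
```

In a training loop this would simply replace the usual cross-entropy loss, e.g. loss = rcad_loss(model, x, y); loss.backward(); optimizer.step().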

Related research

10/25/2019 · Label Smoothing and Logit Squeezing: A Replacement for Adversarial Training?
Adversarial training is one of the strongest defenses against adversaria...

03/05/2020 · Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization
Adversarial examples cause neural networks to produce incorrect outputs ...

10/27/2022 · Efficient and Effective Augmentation Strategy for Adversarial Training
Adversarial training of Deep Neural Networks is known to be significantl...

09/13/2022 · Adversarial Coreset Selection for Efficient Robust Training
Neural networks are vulnerable to adversarial attacks: adding well-craft...

04/28/2018 · Generalizing Across Domains via Cross-Gradient Training
We present CROSSGRAD, a method to use multi-domain training data to lear...

11/01/2022 · Maximum Likelihood Distillation for Robust Modulation Classification
Deep Neural Networks are being extensively used in communication systems...

06/10/2023 · Boosting Adversarial Robustness using Feature Level Stochastic Smoothing
Advances in adversarial defenses have led to a significant improvement i...
