DAFT: Distilling Adversarially Fine-tuned Models for Better OOD Generalization

08/19/2022
by   Anshul Nasery, et al.

We consider the problem of out-of-distribution (OOD) generalization, where the goal is to train a model that performs well on test distributions that differ from the training distribution. Deep learning models are known to be fragile to such shifts and can suffer large accuracy drops even on slightly different test distributions. We propose a new method, DAFT, based on the intuition that an adversarially robust combination of a large number of rich features should provide OOD robustness. Our method carefully distills the knowledge from a powerful teacher that learns several discriminative features using standard training while combining them using adversarial training. The standard adversarial training procedure is modified to produce teachers that can guide the student better. We evaluate DAFT on standard benchmarks in the DomainBed framework and demonstrate that DAFT achieves significant improvements over current state-of-the-art OOD generalization methods. DAFT consistently outperforms well-tuned ERM and distillation baselines by up to 6%, with more pronounced gains for smaller networks.
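The abstract does not give DAFT's exact training objective, but the distillation step it describes builds on the standard knowledge-distillation loss: the student is trained to match the teacher's temperature-softened output distribution via a KL-divergence term. Below is a minimal NumPy sketch of that loss; the function names, the temperature value, and the `T*T` scaling convention (from Hinton et al.'s formulation) are illustrative assumptions, not DAFT's published code.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax over a 1-D logit vector."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    This is the standard knowledge-distillation objective; the T*T
    factor keeps gradient magnitudes comparable across temperatures.
    """
    p = softmax(teacher_logits, T)   # teacher's soft targets
    q = softmax(student_logits, T)   # student's soft predictions
    return float(np.sum(p * (np.log(p) - np.log(q))) * T * T)
```

When the student's logits match the teacher's exactly, the loss is zero; any mismatch yields a positive penalty, so minimizing it pulls the student's output distribution toward the teacher's.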

