Improving Out-of-Distribution Generalization by Adversarial Training with Structured Priors

10/13/2022
by Qixun Wang, et al.

Deep models often fail to generalize in test domains whose data distribution differs from that of the training domain. Among the many approaches to this Out-of-Distribution (OOD) generalization problem, there has been growing interest in exploiting Adversarial Training (AT) to improve OOD performance. Recent work has revealed that the robust model obtained by sample-wise AT also transfers to biased test domains. In this paper, we empirically show that sample-wise AT yields only limited improvement in OOD performance. Specifically, we find that sample-wise AT maintains performance only at small perturbation scales, whereas Universal AT (UAT) is more robust to larger-scale perturbations. This suggests that adversarial perturbations with universal (low-dimensional) structures can enhance robustness against the large distribution shifts common in OOD scenarios. Inspired by this, we propose two AT variants with low-rank structures to train OOD-robust models. Extensive experiments on the DomainBed benchmark show that our proposed approaches outperform Empirical Risk Minimization (ERM) and sample-wise AT. Our code is available at https://github.com/NOVAglow646/NIPS22-MAT-and-LDAT-for-OOD.
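To make the contrast concrete, below is a minimal PyTorch sketch, not the authors' released MAT/LDAT implementation (see the repository above for that), showing per-sample PGD-style AT next to a universal perturbation with an explicit low-rank parameterization. The names sample_wise_at_loss, LowRankPerturbation, and the values of rank, eps, alpha, and steps are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def sample_wise_at_loss(model, x, y, eps=4/255, alpha=1/255, steps=3):
        """Per-sample AT: a fresh perturbation is found for every batch."""
        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(steps):
            loss = F.cross_entropy(model(x + delta), y)
            grad, = torch.autograd.grad(loss, delta)
            # PGD-style ascent step, kept inside an L-infinity ball of radius eps
            delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
            delta = delta.detach().requires_grad_(True)
        return F.cross_entropy(model(x + delta), y)

    class LowRankPerturbation(torch.nn.Module):
        """Universal perturbation delta = B @ A^T (rank at most `rank`),
        shared by all samples rather than computed per sample."""
        def __init__(self, h, w, rank=2):
            super().__init__()
            self.B = torch.nn.Parameter(1e-3 * torch.randn(h, rank))
            self.A = torch.nn.Parameter(1e-3 * torch.randn(w, rank))

        def forward(self, x, eps=8/255):
            delta = (self.B @ self.A.t()).clamp(-eps, eps)  # shape (h, w)
            return x + delta  # broadcasts over batch and channel dimensions

    # Alternating updates: ascend the loss w.r.t. the low-rank perturbation,
    # then descend w.r.t. the model weights (toy model and data for illustration).
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
    pert = LowRankPerturbation(h=32, w=32, rank=2)
    opt_model = torch.optim.SGD(model.parameters(), lr=0.01)
    opt_pert = torch.optim.SGD(pert.parameters(), lr=0.1)

    x = torch.randn(8, 3, 32, 32)
    y = torch.randint(0, 10, (8,))

    opt_pert.zero_grad()
    (-F.cross_entropy(model(pert(x)), y)).backward()  # maximize loss over delta
    opt_pert.step()

    opt_model.zero_grad()
    F.cross_entropy(model(pert(x)), y).backward()     # minimize loss over weights
    opt_model.step()

In practice one would alternate these two updates over the whole training set. The low-rank factorization is what makes the perturbation "structured": it is universal (shared across samples) and confined to a low-dimensional subspace, which is the structured prior the title refers to.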

