On the Connection between Invariant Learning and Adversarial Training for Out-of-Distribution Generalization

12/18/2022
by Shiji Xin, et al.

Despite impressive success on many tasks, deep learning models have been shown to rely on spurious features, which cause catastrophic failures when generalizing to out-of-distribution (OOD) data. Invariant Risk Minimization (IRM) was proposed to alleviate this issue by extracting domain-invariant features for OOD generalization. Nevertheless, recent work shows that IRM is effective only for certain types of distribution shift (e.g., correlation shift) and fails in other cases (e.g., diversity shift). Meanwhile, another line of work, Adversarial Training (AT), has shown better domain-transfer performance, suggesting that it is a promising candidate for extracting domain-invariant features. This paper investigates this possibility by exploring the similarity between the IRM and AT objectives. Inspired by this connection, we propose Domainwise Adversarial Training (DAT), an AT-inspired method that alleviates distribution shift via domain-specific perturbations. Extensive experiments show that DAT can effectively remove domain-varying features and improve OOD generalization under both correlation shift and diversity shift.
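The abstract describes DAT only at a high level. The sketch below illustrates one plausible reading of "domain-specific perturbations": each training domain receives a single shared FGSM-style perturbation (the sign of the mean input gradient over that domain), and the model is then updated on the perturbed batches. This is a minimal toy with logistic regression and hand-derived gradients; the function names, the choice of a per-domain FGSM step, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_loss_w(w, X, y):
    # Gradient of the mean logistic loss with respect to the weights.
    p = sigmoid(X @ w)
    return X.T @ (p - y) / len(y)

def grad_loss_x(w, X, y):
    # Per-sample gradient of the logistic loss with respect to the inputs:
    # d/dx [-y log p - (1-y) log(1-p)] = (p - y) * w
    p = sigmoid(X @ w)
    return np.outer(p - y, w)

def domainwise_adversarial_step(w, domains, eps=0.1, lr=0.5):
    """One hypothetical DAT step: compute ONE shared perturbation per
    domain (sign of the mean input gradient, scaled by eps), then take
    a gradient step on the union of the perturbed domain batches."""
    g_w = np.zeros_like(w)
    for X, y in domains:
        gx = grad_loss_x(w, X, y).mean(axis=0)  # one direction per domain
        delta = eps * np.sign(gx)               # domain-shared, sample-wise AT would use per-row signs
        g_w += grad_loss_w(w, X + delta, y)
    return w - lr * g_w / len(domains)
```

The design choice being illustrated is the contrast with standard AT: here the perturbation is shared by every sample in a domain, so it can only exploit directions that vary *across* domains, which is the intuition behind removing domain-varying features.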


Related research

- Improving the Generalization of Adversarial Training with Domain Adaptation (10/01/2018)
- Learning Diverse Representations for Fast Adaptation to Distribution Shift (06/12/2020)
- A Closer Look at Smoothness in Domain Adversarial Training (06/16/2022)
- Out-of-distribution Prediction with Invariant Risk Minimization: The Limitation and An Effective Fix (01/16/2021)
- QAGAN: Adversarial Approach To Learning Domain Invariant Language Features (06/24/2022)
- Improved OOD Generalization via Conditional Invariant Regularizer (07/14/2022)
- Exploring Optimal Substructure for Out-of-distribution Generalization via Feature-targeted Model Pruning (12/19/2022)
