Adaptive Domain Generalization via Online Disagreement Minimization

08/03/2022
by Xin Zhang, et al.

Deep neural networks suffer significant performance deterioration when there is a distribution shift between deployment and training. Domain Generalization (DG) aims to safely transfer a model to unseen target domains using only a set of source domains. Although various DG approaches have been proposed, a recent study, DomainBed, reveals that most of them do not beat simple Empirical Risk Minimization (ERM). To this end, we propose a general framework that is orthogonal to existing DG algorithms and consistently improves their performance. Unlike previous DG works that rely on a static source model in the hope that it generalizes universally, our proposed AdaODM adaptively modifies the source model at test time for each target domain. Specifically, we create multiple domain-specific classifiers on top of a shared domain-generic feature extractor. The feature extractor and classifiers are trained adversarially: the feature extractor embeds input samples into a domain-invariant space, while the classifiers capture distinct decision boundaries, each tied to a specific source domain. At test time, the distribution difference between the target and source domains can be effectively measured by the prediction disagreement among the source classifiers. By fine-tuning the source model to minimize this disagreement at test time, target-domain features are aligned to the invariant feature space. We verify AdaODM on two popular DG methods, ERM and CORAL, and four DG benchmarks: VLCS, PACS, OfficeHome, and TerraIncognita. The results show that AdaODM stably improves generalization to unseen domains and achieves state-of-the-art performance.
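The test-time adaptation described above hinges on a disagreement score computed across the source classifiers. The sketch below shows one plausible such score in NumPy: the mean pairwise squared L2 distance between the softmaxed outputs of the classifier heads. This is an illustrative assumption, not the paper's exact loss; the function names (`softmax`, `disagreement`) are made up here for clarity.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def disagreement(logits_per_head):
    """Mean pairwise squared L2 distance between the softmax outputs
    of the source-domain classifier heads.

    Zero when all heads agree on every sample; grows as target
    features drift away from the domain-invariant space, which is
    what makes it usable as a test-time fine-tuning objective.

    logits_per_head: list of (batch, num_classes) arrays, one per head.
    """
    probs = [softmax(l) for l in logits_per_head]
    total, pairs = 0.0, 0
    for i in range(len(probs)):
        for j in range(i + 1, len(probs)):
            total += np.mean(np.sum((probs[i] - probs[j]) ** 2, axis=-1))
            pairs += 1
    return total / pairs

# Heads that agree yield zero disagreement; conflicting heads do not.
agree = [np.array([[2.0, 0.0]]), np.array([[2.0, 0.0]])]
clash = [np.array([[2.0, 0.0]]), np.array([[0.0, 2.0]])]
print(disagreement(agree))      # 0.0
print(disagreement(clash) > 0)  # True
```

In the full method this scalar would be minimized by gradient steps on the model parameters at test time (e.g., with an autodiff framework), aligning target features with the space on which the source classifiers were trained.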

Related research

- 06/15/2018: Best sources forward: domain generalization through source-specific nets
- 02/17/2021: Domain Generalization Needs Stochastic Weight Averaging for Robustness on Domain Shifts
- 08/12/2023: ADRMX: Additive Disentanglement of Domain Features with Remix Loss
- 04/28/2021: Deep Domain Generalization with Feature-norm Network
- 01/31/2019: Episodic Training for Domain Generalization
- 09/29/2022: Learning Gradient-based Mixup towards Flatter Minima for Domain Generalization
- 06/01/2021: Adversarially Adaptive Normalization for Single Domain Generalization
