Diversity Boosted Learning for Domain Generalization with Large Number of Domains

07/28/2022
by Xi Leng, et al.

Machine learning algorithms that minimize the average training loss usually suffer from poor generalization performance because they greedily exploit correlations in the training data that are not stable under distributional shifts. This has inspired a range of work on domain generalization (DG), where a series of methods, such as Causal Matching and FISH, operate on pairs of domains. With n domains, these methods require O(n^2) pairwise domain operations, each of which is often highly expensive. Moreover, while a common objective in the DG literature is to learn representations invariant to domain-induced spurious correlations, we highlight the importance of also mitigating spurious correlations caused by objects. Based on the observation that diversity helps mitigate spurious correlations, we propose a Diversity boosted twO-level saMplIng framework (DOMI) that uses Determinantal Point Processes (DPPs) to efficiently sample the most informative subset from a large number of domains. We show that DOMI helps train models that are robust to spurious correlations from both the domain side and the object side, substantially enhancing the performance of backbone DG algorithms on the Rotated MNIST, Rotated Fashion MNIST, and iWildCam datasets.
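The core mechanism the abstract describes, using a DPP to pick a diverse, informative subset of domains, can be illustrated with a short sketch. Everything here is an assumption for illustration only: the `greedy_dpp_select` helper, the use of per-domain embedding vectors as DPP items, and the greedy MAP approximation are not taken from the paper; they merely show why a DPP kernel favors diverse subsets (similar items make the kernel submatrix near-singular, so its determinant, and hence the subset's probability, collapses).

```python
import numpy as np

def greedy_dpp_select(features, k):
    """Greedy MAP approximation of k-item DPP selection.

    features: (n, d) array, one embedding vector per domain (assumed input).
    Returns the indices of k domains chosen to maximize the log-determinant
    of the selected kernel submatrix, which rewards mutual diversity.
    """
    # L-ensemble kernel from cosine similarity of normalized embeddings:
    # highly similar domains produce near-duplicate rows in the submatrix,
    # driving its determinant toward zero, so the greedy step avoids them.
    X = features / np.linalg.norm(features, axis=1, keepdims=True)
    L = X @ X.T
    selected = []
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in range(len(L)):
            if i in selected:
                continue
            idx = selected + [i]
            sub = L[np.ix_(idx, idx)]
            # Small jitter keeps slogdet finite for near-singular submatrices.
            _, logdet = np.linalg.slogdet(sub + 1e-9 * np.eye(len(idx)))
            if logdet > best_gain:
                best, best_gain = i, logdet
        selected.append(best)
    return selected

# Usage: with two near-identical domains and one distinct domain,
# selecting k=2 picks the distinct domain rather than the duplicate.
feats = np.array([[1.0, 0.0], [0.99, 0.14], [0.0, 1.0]])
print(greedy_dpp_select(feats, 2))
```

Exact DPP sampling is more involved (eigendecomposition-based), but greedy MAP selection is a standard cheap surrogate and suffices to convey the diversity-boosting intuition behind sampling domains instead of enumerating all O(n^2) pairs.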

