Towards a Theoretical Framework of Out-of-Distribution Generalization

06/08/2021
by Haotian Ye, et al.

Generalization to out-of-distribution (OOD) data, or domain generalization, is one of the central problems in modern machine learning. Recently, there has been a surge of attempts to propose algorithms for OOD generalization, most of which build upon the idea of extracting invariant features. Although intuitively reasonable, theoretical understanding of what kind of invariance can guarantee OOD generalization remains limited, and generalization to arbitrary out-of-distribution data is clearly impossible. In this work, we take the first step towards rigorous and quantitative definitions of 1) what OOD is; and 2) what it means to say an OOD problem is learnable. We also introduce a new concept, the expansion function, which characterizes the extent to which variance is amplified in the test domains over the training domains, and therefore gives a quantitative meaning to invariant features. Based on these, we prove OOD generalization error bounds. It turns out that OOD generalization largely depends on the expansion function. As recently pointed out by Gulrajani and Lopez-Paz (2020), any OOD learning algorithm without a model selection module is incomplete. Our theory naturally induces a model selection criterion. Extensive experiments on benchmark OOD datasets demonstrate that our model selection criterion has a significant advantage over baselines.
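To make the model selection idea concrete: roughly, an expansion function is an increasing s with s(x) >= x that bounds how much a feature's cross-domain variation can grow from training to test domains, i.e. V(feature; test) <= s(V(feature; train)). The sketch below is a minimal, hypothetical illustration of a variation-based criterion, not the authors' implementation: it scores a candidate model by validation accuracy minus a penalty on the worst-case cross-domain variation of its (scalar) features, with variation measured as a symmetrized KL divergence between per-domain Gaussian fits. The helper names (sym_kl_gaussian, feature_variation, selection_score), the Gaussian approximation, and the trade-off weight lam are all illustrative assumptions.

```python
import numpy as np

def sym_kl_gaussian(x, y, eps=1e-6):
    """Symmetrized KL divergence between 1-D Gaussian fits of two samples.

    A crude stand-in for a distance between per-domain feature
    distributions; the theory is agnostic to the particular distance.
    """
    mu1, var1 = x.mean(), x.var() + eps
    mu2, var2 = y.mean(), y.var() + eps
    kl12 = 0.5 * (var1 / var2 + (mu1 - mu2) ** 2 / var2 - 1 + np.log(var2 / var1))
    kl21 = 0.5 * (var2 / var1 + (mu1 - mu2) ** 2 / var1 - 1 + np.log(var1 / var2))
    return kl12 + kl21

def feature_variation(feats_by_domain):
    """Worst-case pairwise distance of a feature across training domains."""
    domains = list(feats_by_domain.values())
    return max(
        sym_kl_gaussian(domains[i], domains[j])
        for i in range(len(domains))
        for j in range(i + 1, len(domains))
    )

def selection_score(val_acc, feats_by_domain, lam=1.0):
    """Hypothetical criterion: reward validation accuracy, penalize variation."""
    return val_acc - lam * feature_variation(feats_by_domain)

# Toy usage: two candidate feature extractors, three training domains.
rng = np.random.default_rng(0)
invariant = {d: rng.normal(0.0, 1.0, 500) for d in ["photo", "sketch", "cartoon"]}
spurious = {d: rng.normal(m, 1.0, 500)
            for d, m in [("photo", 0.0), ("sketch", 2.0), ("cartoon", -2.0)]}
print(selection_score(0.90, invariant))  # high: stable across domains
print(selection_score(0.92, spurious))   # penalized despite higher accuracy
```

On the toy data, the feature whose distribution drifts across training domains is heavily penalized even though its validation accuracy is nominally higher, which is the qualitative behavior a variation-based selection criterion is meant to enforce.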
