OoD-Bench: Benchmarking and Understanding Out-of-Distribution Generalization Datasets and Algorithms

06/07/2021
by Nanyang Ye, et al.

Deep learning has achieved tremendous success with independent and identically distributed (i.i.d.) data. However, the performance of neural networks often degrades drastically on out-of-distribution (OoD) data, i.e., when training and test data are sampled from different distributions. While a plethora of algorithms has been proposed to deal with OoD generalization, our understanding of the data used to train and evaluate these algorithms remains stagnant. In this work, we position existing datasets and algorithms from various seemingly unconnected research areas (e.g., domain generalization, stable learning, invariant risk minimization) within the same coherent picture. First, we identify and measure two distinct kinds of distribution shift that are ubiquitous across a range of datasets. Next, we compare various OoD generalization algorithms on a new benchmark dominated by these two kinds of distribution shift. Through extensive experiments, we show that existing OoD algorithms that outperform empirical risk minimization on one kind of distribution shift usually have limitations on the other. The new benchmark may serve as a strong foothold that future OoD generalization research can build on.
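For illustration, the minimal sketch below (an assumption made for this page, not the estimator used in the paper) shows one way to separate two kinds of shift of the flavour the abstract refers to, which the full text distinguishes as diversity shift and correlation shift: given discretized feature values observed in a training and a test environment, it measures (a) the probability mass carried by feature values that appear in only one environment and (b) how much label frequencies change on the feature values the two environments share. The function name, the discretization, and the specific weighting are all choices made for this example.

# Illustrative sketch only; not the paper's estimator.
from collections import Counter
import numpy as np

def shift_decomposition(z_train, y_train, z_test, y_test):
    # Empirical feature frequencies in each environment.
    p, q = Counter(z_train), Counter(z_test)
    n_p, n_q = len(z_train), len(z_test)
    shared = set(p) & set(q)

    # (a) "Diversity"-style shift: mass on features seen in only one environment.
    diversity = 0.5 * (sum(p[z] / n_p for z in set(p) - shared)
                       + sum(q[z] / n_q for z in set(q) - shared))

    # (b) "Correlation"-style shift: change in label frequencies on shared features,
    #     weighted by the geometric mean of the feature frequencies.
    def cond(zs, ys):
        joint, marg = Counter(zip(zs, ys)), Counter(zs)
        return lambda z, y: joint[(z, y)] / marg[z] if marg[z] else 0.0
    f_p, f_q = cond(z_train, y_train), cond(z_test, y_test)
    labels = set(y_train) | set(y_test)
    correlation = sum(
        0.5 * np.sqrt((p[z] / n_p) * (q[z] / n_q))
        * sum(abs(f_p(z, y) - f_q(z, y)) for y in labels)
        for z in shared)
    return diversity, correlation

# Toy usage: a colour feature whose correlation with the label flips at test time,
# plus a colour never seen during training.
div, cor = shift_decomposition(
    z_train=["red", "red", "green", "green"], y_train=[0, 0, 1, 1],
    z_test=["red", "green", "blue", "blue"],  y_test=[1, 0, 0, 1])
print(f"diversity-style shift: {div:.2f}, correlation-style shift: {cor:.2f}")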


Related research

04/17/2022 · NICO++: Towards Better Benchmarking for Domain Generalization
Despite the remarkable performance that modern deep neural networks have...

11/15/2022 · Empirical Study on Optimizer Selection for Out-of-Distribution Generalization
Modern deep learning systems are fragile and do not generalize well unde...

02/14/2022 · MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts and Training Conflicts
Understanding the performance of machine learning models across diverse ...

09/02/2022 · Back-to-Bones: Rediscovering the Role of Backbones in Domain Generalization
Domain Generalization (DG) studies the capability of a deep learning mod...

09/12/2023 · Towards Reliable Domain Generalization: A New Dataset and Evaluations
There are ubiquitous distribution shifts in the real world. However, dee...

07/12/2023 · Single Domain Generalization via Normalised Cross-correlation Based Convolutions
Deep learning techniques often perform poorly in the presence of domain ...

02/25/2021 · An Online Learning Approach to Interpolation and Extrapolation in Domain Generalization
A popular assumption for out-of-distribution generalization is that the ...
