Invariant Risk Minimization

07/05/2019
by Martin Arjovsky, et al.

We introduce Invariant Risk Minimization (IRM), a learning paradigm to estimate invariant correlations across multiple training distributions. To achieve this goal, IRM learns a data representation such that the optimal classifier, on top of that data representation, matches for all training distributions. Through theory and experiments, we show how the invariances learned by IRM relate to the causal structures governing the data and enable out-of-distribution generalization.
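To make the idea concrete, the IRMv1 variant proposed in the paper penalizes, for each training environment, the gradient of that environment's risk with respect to a fixed scalar "dummy" classifier w = 1.0 placed on top of the representation. A minimal numpy sketch under squared loss (the toy data, feature names, and the choice of squared loss are illustrative assumptions, not the paper's exact experimental setup):

```python
import numpy as np

def irm_penalty(phi, y):
    """IRMv1-style penalty for one environment under squared loss.

    phi : 1-D array of scalar representation outputs Phi(x)
    y   : 1-D array of regression targets
    The environment risk is R_e(w) = mean((w * phi - y)**2); the penalty
    is the squared gradient dR_e/dw evaluated at the dummy classifier w = 1.
    """
    grad = np.mean(2.0 * phi * (phi - y))  # dR_e/dw at w = 1
    return grad ** 2

rng = np.random.default_rng(0)

# Hypothetical toy environments: phi_a tracks the label (an invariant
# feature), while phi_b is pure noise, so the per-environment optimal
# classifiers on top of phi_b disagree and the penalty is large.
y_a = rng.normal(size=1000)
phi_a = y_a + 0.1 * rng.normal(size=1000)
y_b = rng.normal(size=1000)
phi_b = rng.normal(size=1000)

penalty_a = irm_penalty(phi_a, y_a)  # near zero: classifier w = 1 is near-optimal
penalty_b = irm_penalty(phi_b, y_b)  # large: w = 1 is far from optimal here
```

In the full method this penalty is summed over environments and added to the pooled empirical risk, pushing the learned representation toward features whose optimal classifier is the same in every training distribution.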


Related research

- 03/24/2021, Meta-Learned Invariant Risk Minimization
  Empirical Risk Minimization (ERM) based machine learning algorithms have...

- 02/24/2021, Nonlinear Invariant Risk Minimization: A Causal Approach
  Due to spurious correlations, machine learning systems often fail to gen...

- 04/22/2023, Towards Understanding Feature Learning in Out-of-Distribution Generalization
  A common explanation for the failure of out-of-distribution (OOD) genera...

- 01/16/2021, Out-of-distribution Prediction with Invariant Risk Minimization: The Limitation and An Effective Fix
  This work considers the out-of-distribution (OOD) prediction problem whe...

- 11/12/2020, Fairness and Robustness in Invariant Learning: A Case Study in Toxicity Classification
  Robustness is of central importance in machine learning and has given ri...

- 08/09/2023, Pareto Invariant Representation Learning for Multimedia Recommendation
  Multimedia recommendation involves personalized ranking tasks, where mul...

- 10/16/2021, Invariant Language Modeling
  Modern pretrained language models are critical components of NLP pipelin...
