
Does Invariant Risk Minimization Capture Invariance?

by Pritish Kamath et al.

We show that the Invariant Risk Minimization (IRM) formulation of Arjovsky et al. (2019) can fail to capture "natural" invariances, at least when used in its practical "linear" form, and even on very simple problems which directly follow the motivating examples for IRM. This can lead to worse generalization on new environments, even when compared to unconstrained ERM. The issue stems from a significant gap between the linear variant (as in their concrete method IRMv1) and the full non-linear IRM formulation. Additionally, even when capturing the "right" invariances, we show that it is possible for IRM to learn a sub-optimal predictor, due to the loss function not being invariant across environments. The issues arise even when measuring invariance on the population distributions, but are exacerbated by the fact that IRM is extremely fragile to sampling.
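The "linear" variant discussed above, IRMv1, relaxes the bi-level IRM objective into a single penalized loss: the ERM risk plus, per environment, the squared gradient of that environment's risk with respect to a scalar classifier weight fixed at 1.0. The sketch below illustrates this penalty for a linear predictor with squared loss on two synthetic environments where one feature's correlation with the label flips sign; the data-generating setup, `irmv1_objective`, and both candidate `beta` vectors are illustrative assumptions, not the authors' code.

```python
import numpy as np

def irmv1_objective(environments, beta, lam=1.0):
    """ERM risk plus the IRMv1 gradient penalty for a linear predictor.

    With squared loss and a scalar classifier w fixed at 1.0, the
    per-environment penalty is the squared derivative of the risk in w:
        R_e(w)        = E[(w * (x @ beta) - y)^2]
        dR_e/dw |_{w=1} = E[2 * (x @ beta) * (x @ beta - y)]
    """
    total_risk, total_penalty = 0.0, 0.0
    for x, y in environments:
        z = x @ beta                          # featurizer output
        residual = z - y
        total_risk += np.mean(residual ** 2)  # ERM term for this environment
        grad_w = np.mean(2.0 * z * residual)  # d(risk)/dw at w = 1
        total_penalty += grad_w ** 2          # IRMv1 invariance penalty
    return total_risk + lam * total_penalty

# Two toy environments: feature 0 is stably predictive, feature 1's
# correlation with y flips sign between environments (spurious).
rng = np.random.default_rng(0)

def make_env(corr, n=1000):
    y = rng.standard_normal(n)
    x = np.stack([y + 0.1 * rng.standard_normal(n),
                  corr * y + rng.standard_normal(n)], axis=1)
    return x, y

envs = [make_env(1.0), make_env(-1.0)]
beta_invariant = np.array([1.0, 0.0])  # uses only the stable feature
beta_spurious = np.array([0.5, 0.5])   # leans on the unstable feature
print(irmv1_objective(envs, beta_invariant) < irmv1_objective(envs, beta_spurious))
```

Under this toy setup the invariant predictor attains a lower penalized objective; the paper's point is that such clean separations can break down, so IRMv1 may still prefer a predictor relying on unstable features.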




Related research:

- On Invariance Penalties for Risk Minimization — The Invariant Risk Minimization (IRM) principle was first proposed by Ar...
- The Missing Invariance Principle Found – the Reciprocal Twin of Invariant Risk Minimization — Machine learning models often generalize poorly to out-of-distribution (...
- Pareto Invariant Risk Minimization — Despite the success of invariant risk minimization (IRM) in tackling the...
- A call for better unit testing for invariant risk minimisation — In this paper we present a controlled study on the linearized IRM framew...
- Iterative Feature Matching: Toward Provable Domain Generalization with Logarithmic Environments — Domain generalization aims at performing well on unseen test environment...
- The Risks of Invariant Risk Minimization — Invariant Causal Prediction (Peters et al., 2016) is a technique for out...
- Provable Domain Generalization via Invariant-Feature Subspace Recovery — Domain generalization asks for models trained on a set of training envir...