
Kernelized Heterogeneous Risk Minimization

by Jiashuo Liu, et al.
Tsinghua University

The ability to generalize under distributional shifts is essential for reliable machine learning, yet models optimized with empirical risk minimization usually fail on non-i.i.d. test data. Recently, invariant learning methods for out-of-distribution (OOD) generalization have proposed to find causally invariant relationships using multiple environments. However, modern datasets are frequently multi-sourced without explicit source labels, rendering many invariant learning methods inapplicable. In this paper, we propose the Kernelized Heterogeneous Risk Minimization (KerHRM) algorithm, which performs both latent heterogeneity exploration and invariant learning in kernel space, and then gives feedback to the original neural network by specifying an invariant gradient direction. We theoretically justify our algorithm and empirically validate its effectiveness with extensive experiments.
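The abstract describes a two-step idea: first explore latent heterogeneity (recover environment-like groups from pooled, unlabeled data), then learn a predictor whose relationship to the target is invariant across the recovered groups. The following NumPy sketch illustrates that loop under heavy simplifications: a linear model instead of a kernelized neural network, residual-sign clustering as a crude stand-in for kernel-space heterogeneity exploration, and an IRM-style cross-environment gradient-disagreement penalty as a stand-in for the paper's invariance criterion. All variable names and the clustering rule are assumptions for illustration, not the KerHRM algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pooled data from two latent environments (labels unobserved):
# y depends invariantly on x1, while x2 is spuriously correlated with y
# with a sign that flips across the two environments.
n = 200
x1 = rng.normal(size=2 * n)
y = x1 + 0.1 * rng.normal(size=2 * n)
x2 = np.concatenate([y[:n] + 0.5 * rng.normal(size=n),
                     -y[n:] + 0.5 * rng.normal(size=n)])
X = np.stack([x1, x2], axis=1)


def env_gradients(w, X, y, envs):
    """Per-environment gradients of the mean squared loss."""
    return [2 * X[envs == e].T @ (X[envs == e] @ w - y[envs == e]) / np.sum(envs == e)
            for e in np.unique(envs)]


# Step 1 (heterogeneity exploration, sketched): split examples by the
# residual sign of a pooled least-squares fit -- a crude proxy for
# recovering latent environments without source labels.
w_pool = np.linalg.lstsq(X, y, rcond=None)[0]
envs = ((X @ w_pool - y) > 0).astype(int)

# Step 2 (invariant learning, sketched): measure disagreement between
# per-environment gradients; driving this penalty to zero pushes the
# model toward a direction that is optimal in every inferred environment.
grads = env_gradients(w_pool, X, y, envs)
mean_grad = np.mean(grads, axis=0)
penalty = sum(np.sum((g - mean_grad) ** 2) for g in grads)
```

In a full pipeline, the penalty would be added to the pooled risk and the two steps alternated, with the invariant (mean) gradient direction fed back to the underlying network, as the abstract describes.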

