Understanding Why Generalized Reweighting Does Not Improve Over ERM

01/28/2022
by Runtian Zhai, et al.

Empirical risk minimization (ERM) is known in practice to be non-robust to distributional shift, where the training and test distributions differ. A suite of approaches, such as importance weighting and variants of distributionally robust optimization (DRO), has been proposed to solve this problem. But a line of recent work has empirically shown that these approaches do not significantly improve over ERM in real applications with distribution shift. The goal of this work is to obtain a comprehensive theoretical understanding of this intriguing phenomenon. We first posit the class of Generalized Reweighting (GRW) algorithms as a broad category of approaches that iteratively update model parameters based on iterative reweighting of the training samples. We show that when overparameterized models are trained under GRW, the resulting models are close to those obtained by ERM. We also show that adding small regularization that does not greatly affect the empirical training accuracy does not help. Together, our results show that the broad category of what we term GRW approaches is not able to achieve distributionally robust generalization. Our work thus has the following sobering takeaway: to make progress towards distributionally robust generalization, we either have to develop non-GRW approaches, or perhaps devise novel classification/regression loss functions that are adapted to the class of GRW approaches.
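
To make the GRW template concrete, here is a minimal sketch for linear regression with squared loss, under assumptions of ours rather than the paper's: at each step, per-sample weights are updated (the multiplicative, DRO-flavored rule below is one illustrative choice among many covered by GRW), and the model then takes a gradient step on the reweighted empirical risk. The function name grw_linear_regression and the parameters lr and eta are hypothetical; setting eta = 0 recovers uniformly weighted ERM.

```python
# A minimal sketch of the Generalized Reweighting (GRW) template for linear
# regression with squared loss. The multiplicative weight update used here is
# one illustrative (DRO-flavored) choice; GRW covers any scheme that
# iteratively reweights the training samples between gradient steps.
import numpy as np

def grw_linear_regression(X, y, steps=2000, lr=0.01, eta=0.0):
    """Gradient descent on a reweighted empirical risk.

    eta = 0 keeps the weights uniform, which is exactly ERM;
    eta > 0 upweights high-loss samples at every iteration.
    (Function and parameter names are illustrative, not from the paper.)
    """
    n, d = X.shape
    w = np.zeros(d)             # model parameters
    q = np.full(n, 1.0 / n)     # per-sample weights on the simplex
    for _ in range(steps):
        residual = X @ w - y                # per-sample prediction error
        losses = 0.5 * residual ** 2        # per-sample squared loss
        # GRW step 1: update the sample weights (illustrative rule),
        # then renormalize so they remain a probability distribution.
        q *= np.exp(eta * losses)
        q /= q.sum()
        # GRW step 2: gradient step on the q-weighted empirical risk.
        w -= lr * (X.T @ (q * residual))
    return w

# Toy comparison: ERM (eta = 0) vs. a reweighted variant (eta > 0).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)
print(np.linalg.norm(grw_linear_regression(X, y, eta=0.0)
                     - grw_linear_regression(X, y, eta=0.1)))
```

Note that this toy problem is not overparameterized; the paper's result concerns the overparameterized regime, where models trained under any GRW scheme end up close to the ERM model.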

Related research

04/09/2023 · Reweighted Mixup for Subpopulation Shift
Subpopulation shift exists widely in many real-world applications, which...

06/11/2021 · DORO: Distributional and Outlier Robust Optimization
Many machine learning tasks involve subpopulation shift where the testin...

05/15/2023 · Double-Weighting for Covariate Shift Adaptation
Supervised learning is often affected by a covariate shift in which the ...

09/19/2022 · UMIX: Improving Importance Weighting for Subpopulation Shift via Uncertainty-Aware Mixup
Subpopulation shift widely exists in many real-world machine learning ap...

08/17/2023 · Environment Diversification with Multi-head Neural Network for Invariant Learning
Neural networks are often trained with empirical risk minimization; howe...

04/13/2022 · Distributionally Robust Models with Parametric Likelihood Ratios
As machine learning models are deployed ever more broadly, it becomes in...

07/14/2022 · Improved OOD Generalization via Conditional Invariant Regularizer
Recently, generalization on out-of-distribution (OOD) data with correlat...
