On the nonlinear correlation of ML performance between data subpopulations

05/04/2023
by Weixin Liang et al.

Understanding the performance of machine learning (ML) models across diverse data distributions is critically important for reliable applications. Although recent empirical studies have posited a near-perfect linear correlation between in-distribution (ID) and out-of-distribution (OOD) accuracies, we show that this correlation is more nuanced under subpopulation shifts. Through experiments and analysis across a variety of datasets, models, and training epochs, we demonstrate that OOD performance often correlates nonlinearly with ID performance under subpopulation shifts. In contrast to prior studies that posited a linear performance correlation under distribution shift, we observe a "moon shape" correlation (a parabolic uptrend curve) between test performance on the majority subpopulation and on the minority subpopulation. This nonlinear correlation holds across model architectures, hyperparameters, training durations, and the degree of imbalance between subpopulations. Furthermore, we find that the nonlinearity of this "moon shape" is causally influenced by the degree of spurious correlation in the training data: our controlled experiments show that stronger spurious correlation in the training data produces a more nonlinear performance correlation. We provide complementary experimental and theoretical analyses of this phenomenon and discuss its implications for ML reliability and fairness. Our work highlights the importance of understanding the nonlinear effects of model improvements on performance across subpopulations and can inform the development of more equitable and responsible machine learning models.
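
As a concrete illustration of the kind of controlled experiment described in the abstract, the following is a minimal synthetic sketch (not the authors' code; the dataset sizes, the `rho` parameter, and the logistic-regression model are illustrative assumptions). A spurious binary feature agrees with the label on a fraction `rho` of the training data; tracking accuracy separately on the majority (spurious feature aligned) and minority (misaligned) test subpopulations across training epochs lets one look for the nonlinear "moon shape" trend rather than a linear one.

```python
# Hypothetical sketch of a subpopulation-shift experiment with a tunable
# spurious correlation. All parameter choices here are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, rho, noise=1.0):
    """Binary labels, one noisy core feature, and one spurious feature that
    matches the label with probability rho (rho > 0.5 => spurious correlation)."""
    y = rng.integers(0, 2, size=n)
    core = (2 * y - 1) + noise * rng.normal(size=n)           # weakly predictive core feature
    aligned = rng.random(n) < rho                              # majority-subpopulation mask
    spurious = np.where(aligned, 2 * y - 1, -(2 * y - 1)).astype(float)
    X = np.column_stack([core, spurious, np.ones(n)])          # bias term included
    return X, y, aligned

def accuracy(w, X, y):
    return float(np.mean((X @ w > 0) == y))

X_tr, y_tr, _ = make_data(5000, rho=0.95)           # strong spurious correlation in training
X_te, y_te, aligned_te = make_data(5000, rho=0.5)   # balanced test set: clean subpopulations

w = np.zeros(X_tr.shape[1])
lr, history = 0.1, []
for epoch in range(200):
    p = 1.0 / (1.0 + np.exp(-(X_tr @ w)))            # logistic regression, full-batch GD
    w -= lr * X_tr.T @ (p - y_tr) / len(y_tr)
    maj = accuracy(w, X_te[aligned_te], y_te[aligned_te])      # spurious feature agrees with label
    mino = accuracy(w, X_te[~aligned_te], y_te[~aligned_te])   # spurious feature disagrees
    history.append((maj, mino))

# Plotting minority accuracy against majority accuracy across these checkpoints
# (rather than against epoch) is what exposes a curved, "moon shape" trend.
for maj, mino in history[::40]:
    print(f"majority acc = {maj:.3f}   minority acc = {mino:.3f}")
```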


