Distributionally Robust Losses for Latent Covariate Mixtures

07/28/2020
by John Duchi, et al.

While modern large-scale datasets often consist of heterogeneous subpopulations—for example, multiple demographic groups or multiple text corpora—the standard practice of minimizing average loss fails to guarantee uniformly low losses across all subpopulations. We propose a convex procedure that controls the worst-case performance over all subpopulations of a given size. Our procedure comes with finite-sample (nonparametric) convergence guarantees on the worst-off subpopulation. Empirically, we observe on lexical similarity, wine quality, and recidivism prediction tasks that our worst-case procedure learns models that do well against unseen subpopulations.
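
The abstract's "worst-case performance over all subpopulations of a given size" can be made concrete with a generic worst-case objective: the average loss over the worst-off α-fraction of examples, which by the Rockafellar–Uryasev duality equals the minimum over η of η + (1/α)·E[(loss − η)₊]. The sketch below (the function name worst_case_subpopulation_loss is hypothetical) computes that plug-in quantity from per-example losses; it illustrates the flavor of a worst-case-subpopulation loss, not necessarily the exact procedure the paper proposes for latent covariate mixtures.

```python
import numpy as np

def worst_case_subpopulation_loss(losses, alpha):
    """Average loss over the worst-off alpha-fraction of examples.

    Uses the dual (CVaR) form  min_eta  eta + (1/alpha) * mean((loss - eta)_+),
    which is convex in eta and, when the per-example losses are convex in the
    model parameters, convex in those parameters as well.
    NOTE: a generic sketch of a worst-case subpopulation objective, not the
    paper's exact estimator for latent covariate mixtures.
    """
    losses = np.asarray(losses, dtype=float)
    # The dual objective is piecewise-linear and convex in eta, so its
    # minimum is attained at one of the observed loss values.
    etas = np.sort(losses)
    excess = np.maximum(losses[None, :] - etas[:, None], 0.0)
    objective = etas + excess.mean(axis=1) / alpha
    return objective.min()

# Example: heterogeneous per-example losses; the robust objective focuses on
# the worst 20% of examples rather than the overall average.
rng = np.random.default_rng(0)
losses = np.concatenate([rng.normal(0.2, 0.05, 800), rng.normal(1.0, 0.1, 200)])
print("average loss:       %.3f" % losses.mean())
print("worst 20%% subpop.:  %.3f" % worst_case_subpopulation_loss(losses, alpha=0.2))
```

Minimizing such a quantity over the model parameters (for example, by alternating gradient steps on the parameters and on η) trains against the worst subpopulation of relative size α rather than against the average case.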
