Loss Minimization through the Lens of Outcome Indistinguishability

10/16/2022
by Parikshit Gopalan, et al.

We present a new perspective on loss minimization and the recent notion of omniprediction through the lens of Outcome Indistinguishability. For a collection of losses and a hypothesis class, omniprediction requires that a predictor provide a loss-minimization guarantee simultaneously for every loss in the collection, compared to the best (loss-specific) hypothesis in the class. We present a generic template for learning predictors that satisfy a guarantee we call Loss Outcome Indistinguishability (Loss OI). For a set of statistical tests, based on a collection of losses and a hypothesis class, a predictor is Loss OI if it is indistinguishable (according to the tests) from Nature's true probabilities over outcomes. By design, Loss OI implies omniprediction in a direct and intuitive manner. We simplify Loss OI further, decomposing it into a calibration condition plus multiaccuracy for a class of functions derived from the losses and hypothesis class. By careful analysis of this class, we give efficient constructions of omnipredictors for interesting classes of loss functions, including non-convex losses.

This decomposition highlights the utility of a new multi-group fairness notion that we call calibrated multiaccuracy, which lies between multiaccuracy and multicalibration. We show that calibrated multiaccuracy implies Loss OI for the important set of convex losses arising from Generalized Linear Models, without requiring full multicalibration. For such losses, we show an equivalence between our computational notion of Loss OI and a geometric notion of indistinguishability, formulated as Pythagorean theorems in the associated Bregman divergence. We give an efficient algorithm for calibrated multiaccuracy with computational complexity comparable to that of multiaccuracy. In all, calibrated multiaccuracy offers an interesting tradeoff point between efficiency and generality in the omniprediction landscape.
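To make the guarantee concrete, here is the standard formulation from the omniprediction line of work, paraphrased rather than quoted from the paper: a predictor $p$ is an $(\mathcal{L}, \mathcal{C}, \epsilon)$-omnipredictor if for every loss $\ell \in \mathcal{L}$ there is a post-processing $k_\ell$ such that

$$\mathbb{E}\big[\ell\big(y, k_\ell(p(x))\big)\big] \;\le\; \min_{c \in \mathcal{C}} \mathbb{E}\big[\ell\big(y, c(x)\big)\big] + \epsilon,$$

and Loss OI asks that loss-based tests of the form $u_{\ell,c}(x, y) = \ell(y, c(x)) - \ell\big(y, k_\ell(p(x))\big)$ take approximately the same expectation whether outcomes $y$ are drawn from Nature's true distribution or from the predictor's own distribution $\mathrm{Ber}(p(x))$.

The following sketch illustrates the alternating pattern suggested by the decomposition of Loss OI into calibration plus multiaccuracy. It is a hedged illustration, not the authors' algorithm: the binning calibrator, the correlation-based audit, and the update rule `p + eta * corr * c(x)` are assumptions chosen for brevity.

```python
import numpy as np

def calibrate(p, y, n_bins=20):
    """Crude recalibration: replace each prediction with the empirical
    mean of y over its bin (stands in for a proper calibration step)."""
    bins = np.clip((p * n_bins).astype(int), 0, n_bins - 1)
    q = p.copy()
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            q[mask] = y[mask].mean()
    return q

def calibrated_multiaccuracy(x, y, tests, alpha=0.01, eta=0.5, max_iter=100):
    """Alternate recalibration with multiaccuracy-style corrections until
    no test in `tests` (functions x -> [-1, 1]) correlates with the
    residual y - p(x) by more than alpha."""
    p = np.full(len(y), y.mean())          # start from the base rate
    for _ in range(max_iter):
        p = calibrate(p, y)                # restore calibration
        residual = y - p
        corrs = [(c, float(np.mean(c(x) * residual))) for c in tests]
        c, corr = max(corrs, key=lambda t: abs(t[1]))
        if abs(corr) <= alpha:             # calibrated and multiaccurate
            break
        p = np.clip(p + eta * corr * c(x), 0.0, 1.0)  # correction step
    return p

# Toy usage on synthetic data with a small audit class.
rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 3))
y = (rng.random(1000) < 1 / (1 + np.exp(-x[:, 0]))).astype(float)
tests = [lambda x, j=j: np.tanh(x[:, j]) for j in range(3)]
p = calibrated_multiaccuracy(x, y, tests)
```

The intuition behind the alternation is that each correction step can break calibration, while each recalibration can only reduce squared error, so the loop makes progress toward both conditions at once; the abstract's claim is that this costs roughly the same as plain multiaccuracy.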

