Omnipredictors

09/11/2021
by Parikshit Gopalan, et al.

Loss minimization is a dominant paradigm in machine learning, where a predictor is trained to minimize some loss function that depends on an uncertain event (e.g., "will it rain tomorrow?"). Different loss functions imply different learning algorithms and, at times, very different predictors. While widespread and appealing, a clear drawback of this approach is that the loss function may not be known at the time of learning, requiring the algorithm to use a best-guess loss function. We suggest a rigorous new paradigm for loss minimization in machine learning where the loss function can be ignored at the time of learning and only be taken into account when deciding on an action. We introduce the notion of an (ℒ,𝒞)-omnipredictor, which can be used to optimize any loss in a family ℒ. Once the loss function is set, the outputs of the predictor can be post-processed (via a simple univariate, data-independent transformation of individual predictions) to do well compared with any hypothesis from the class 𝒞. The post-processing is essentially what one would perform if the outputs of the predictor were true probabilities of the uncertain events. In a sense, omnipredictors extract all the predictive power from the class 𝒞, irrespective of the loss function in ℒ. We show that such "loss-oblivious" learning is feasible through a connection to multicalibration, a notion introduced in the context of algorithmic fairness. In addition, we show how multicalibration can be viewed as a solution concept for agnostic boosting, shedding new light on past results. Finally, we transfer our insights back to the context of algorithmic fairness by providing omnipredictors for multi-group loss minimization.
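Concretely, the post-processing the abstract describes can be read off from the predicted probability alone: given a loss ℓ and a prediction p, act as if p were the true probability of the event and choose the action minimizing expected loss. Below is a minimal Python sketch of that step, not code from the paper; the function name `omnipredict_action`, the `loss(action, outcome)` signature, and the finite grid of candidate actions are all illustrative assumptions.

```python
import numpy as np

def omnipredict_action(p, loss, candidate_actions):
    """Pick the action minimizing expected loss, treating the predicted
    probability p as if it were the true probability of the event y = 1.

    This is the simple univariate, data-independent transformation from the
    abstract: it depends only on the loss and on the individual prediction p.
    """
    expected = [p * loss(t, 1) + (1 - p) * loss(t, 0) for t in candidate_actions]
    return candidate_actions[int(np.argmin(expected))]

# Example: squared loss over a grid of actions in [0, 1]. For squared loss the
# expected-loss minimizer is t = p, so the optimal post-processing is the identity.
squared = lambda t, y: (t - y) ** 2
grid = np.linspace(0.0, 1.0, 101)
print(omnipredict_action(0.3, squared, grid))      # 0.3

# Example: an asymmetric loss that penalizes over-prediction twice as much as
# under-prediction. The same prediction p = 0.3 is reused; only the
# post-processing changes, and it now picks a smaller action (0.0 on this grid).
asymmetric = lambda t, y: 2.0 * max(t - y, 0.0) + max(y - t, 0.0)
print(omnipredict_action(0.3, asymmetric, grid))   # 0.0
```

The key point the sketch illustrates is that the transformation never touches the data: it is a function of the loss and the single prediction p, which is what lets one predictor serve every loss in the family ℒ.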


Related research

- Omnipredictors for Constrained Optimization (09/15/2022): The notion of omnipredictors (Gopalan, Kalai, Reingold, Sharan and Wiede...
- Multi-group Agnostic PAC Learnability (05/20/2021): An agnostic PAC learning algorithm finds a predictor that is competitive...
- Characterizing notions of omniprediction via multicalibration (02/13/2023): A recent line of work shows that notions of multigroup fairness imply su...
- Loss minimization yields multicalibration for large neural networks (04/19/2023): Multicalibration is a notion of fairness that aims to provide accurate p...
- Loss Minimization through the Lens of Outcome Indistinguishability (10/16/2022): We present a new perspective on loss minimization and the recent notion ...
- The column measure and Gradient-Free Gradient Boosting (09/24/2019): Sparse model selection by structural risk minimization leads to a set of...
- Mean Absolute Directional Loss as a New Loss Function for Machine Learning Problems in Algorithmic Investment Strategies (09/19/2023): This paper investigates the issue of an adequate loss function in the op...
