Oracle Efficient Online Multicalibration and Omniprediction

07/18/2023
by Sumegha Garg et al.

A recent line of work has shown a surprising connection between multicalibration, a multi-group fairness notion, and omniprediction, a learning paradigm that provides simultaneous loss-minimization guarantees for a large family of loss functions. Prior work studies omniprediction in the batch setting. We initiate the study of omniprediction in the online adversarial setting. Although there exist algorithms for obtaining notions of multicalibration in the online adversarial setting, unlike batch algorithms, they work only for small finite classes of benchmark functions F, because they require enumerating every function f ∈ F at every round. In contrast, omniprediction is most interesting for learning-theoretic hypothesis classes F, which are generally continuously large. We develop a new online multicalibration algorithm that is well defined for infinite benchmark classes F and is oracle efficient (i.e., for any class F, the algorithm has the form of an efficient reduction to a no-regret learning algorithm for F). The result is the first efficient online omnipredictor — an oracle-efficient prediction algorithm that can be used to simultaneously obtain no-regret guarantees with respect to all Lipschitz convex loss functions. For the class F of linear functions, we show how to make our algorithm efficient in the worst case. We also show upper and lower bounds on the extent to which our rates can be improved: our oracle-efficient algorithm actually promises a stronger guarantee called swap-omniprediction, and we prove a lower bound showing that obtaining O(√T) bounds for swap-omniprediction is impossible in the online setting. On the other hand, we give a (non-oracle-efficient) algorithm which can obtain the optimal O(√T) omniprediction bounds without going through multicalibration, giving an information-theoretic separation between these two solution concepts.
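To make the "reduction to a no-regret learner" shape concrete, here is a minimal toy sketch of my own (not the authors' algorithm): multiplicative weights serves as the no-regret oracle over a small finite benchmark class F, and the learner's prediction at each round is the oracle's weighted combination of the benchmark functions. The benchmark class, data stream, and learning rate below are all illustrative assumptions; the paper's contribution is precisely that the oracle can be queried without enumerating F, so the reduction remains well defined for infinite classes.

```python
import math
import random

def multiplicative_weights_oracle(F, T, xs, ys, eta=0.1):
    """Toy no-regret oracle: multiplicative weights over a finite
    benchmark class F (a list of functions x -> [0, 1]).
    Returns the learner's prediction at each round."""
    weights = [1.0] * len(F)
    preds = []
    for t in range(T):
        total = sum(weights)
        # Prediction: weighted average of benchmark functions under
        # the oracle's current distribution over F.
        p = sum(w * f(xs[t]) for w, f in zip(weights, F)) / total
        preds.append(p)
        # Exponential-weights update under squared loss: functions
        # with smaller loss this round gain relative weight.
        for i, f in enumerate(F):
            loss = (f(xs[t]) - ys[t]) ** 2
            weights[i] *= math.exp(-eta * loss)
    return preds

# Tiny illustrative benchmark class and data stream.
F = [lambda x: 0.0, lambda x: 1.0, lambda x: x]
random.seed(0)
T = 200
xs = [random.random() for _ in range(T)]
ys = [1.0 if x > 0.5 else 0.0 for x in xs]

preds = multiplicative_weights_oracle(F, T, xs, ys)
avg_loss = sum((p - y) ** 2 for p, y in zip(preds, ys)) / T
best = min(sum((f(x) - y) ** 2 for x, y in zip(xs, ys)) / T for f in F)
print(avg_loss, best)  # average loss approaches the best benchmark's
```

The key design point mirrored here is that the learner interacts with F only through the oracle's weighted output, never by inspecting individual functions at prediction time; the paper replaces this finite enumeration-based oracle with any efficient no-regret algorithm for F.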


