Low-Degree Multicalibration

03/02/2022
by Parikshit Gopalan et al.

Introduced as a notion of algorithmic fairness, multicalibration has proved to be a powerful and versatile concept with implications far beyond its original intent. This stringent notion – that predictions be well-calibrated across a rich class of intersecting subpopulations – provides its strong guarantees at a cost: the computational and sample complexity of learning multicalibrated predictors are high, and grow exponentially with the number of class labels. In contrast, the relaxed notion of multiaccuracy can be achieved more efficiently, yet many of the most desirable properties of multicalibration cannot be guaranteed assuming multiaccuracy alone. This tension raises a key question: Can we learn predictors with multicalibration-style guarantees at a cost commensurate with multiaccuracy? In this work, we define and initiate the study of Low-Degree Multicalibration. Low-Degree Multicalibration defines a hierarchy of increasingly powerful multi-group fairness notions that spans multiaccuracy and the original formulation of multicalibration at the extremes. Our main technical contribution demonstrates that key properties of multicalibration, related to fairness and accuracy, actually manifest as low-degree properties. Importantly, we show that low-degree multicalibration can be significantly more efficient than full multicalibration. In the multi-class setting, the sample complexity to achieve low-degree multicalibration improves exponentially (in the number of classes) over full multicalibration. Our work presents compelling evidence that low-degree multicalibration represents a sweet spot, pairing computational and sample efficiency with strong fairness and accuracy guarantees.
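
As a rough guide to how this hierarchy sits between the two notions, the sketch below states the definitions as they are commonly given for a binary outcome. The symbols used here (a predictor p, outcome y, an audit class C of subpopulation functions, a tolerance alpha, and a bounded family W_k of degree-k polynomial weight functions) are illustrative shorthand rather than the paper's exact multi-class formulation; consult the full text for the precise statements.

% Hedged sketch of the standard definitions (binary outcome y in {0,1});
% the paper's multi-class versions and normalizations may differ.

% Multiaccuracy: predictions are unbiased against every audit function c in C.
\[
  \text{Multiaccuracy:}\qquad
  \bigl|\,\mathbb{E}\bigl[\,c(x)\,(y - p(x))\,\bigr]\,\bigr| \;\le\; \alpha
  \qquad \text{for all } c \in \mathcal{C}.
\]

% Full multicalibration: the same bias bound must hold on every level set of p,
% i.e., conditioned on the predicted value itself.
\[
  \text{Multicalibration:}\qquad
  \bigl|\,\mathbb{E}\bigl[\,c(x)\,(y - p(x)) \;\big|\; p(x) = v\,\bigr]\,\bigr| \;\le\; \alpha
  \qquad \text{for all } c \in \mathcal{C} \text{ and all values } v.
\]

% Degree-k multicalibration: replace conditioning on level sets by reweighting
% with bounded degree-k polynomial weight functions of the prediction.
\[
  \text{Degree-}k\text{ multicalibration:}\qquad
  \bigl|\,\mathbb{E}\bigl[\,c(x)\,w(p(x))\,(y - p(x))\,\bigr]\,\bigr| \;\le\; \alpha
  \qquad \text{for all } c \in \mathcal{C},\ w \in \mathcal{W}_k.
\]

In this sketch, constant weight functions recover multiaccuracy at the low end of the hierarchy, while allowing arbitrary bounded weight functions of the prediction recovers full multicalibration at the other extreme; the intermediate degrees are the low-degree notions studied in the paper.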

Related research

07/12/2021 · Calibrating Predictions to Decisions: A Novel Approach to Multi-Class Calibration
When facing uncertainty, decision-makers want predictions they can trust...

02/24/2020 · Learning Certified Individually Fair Representations
To effectively enforce fairness constraints one needs to define an appro...

12/14/2020 · Fair and Efficient Allocations under Lexicographic Preferences
Envy-freeness up to any good (EFX) provides a strong and intuitive guara...

04/19/2023 · Loss minimization yields multicalibration for large neural networks
Multicalibration is a notion of fairness that aims to provide accurate p...

03/07/2023 · Group conditional validity via multi-group learning
We consider the problem of distribution-free conformal prediction and th...

06/10/2018 · Identifiability in Gaussian Graphical Models
In high-dimensional graph learning problems, some topological properties...

11/22/2017 · Calibration for the (Computationally-Identifiable) Masses
As algorithms increasingly inform and influence decisions made about ind...
