Calibration for the (Computationally-Identifiable) Masses

11/22/2017
by   Úrsula Hébert-Johnson, et al.
As algorithms increasingly inform and influence decisions made about individuals, it becomes increasingly important to address concerns that these algorithms might be discriminatory. The output of an algorithm can be discriminatory for many reasons, most notably: (1) the data used to train the algorithm might be biased (in various ways) to favor certain populations over others; (2) the analysis of this training data might inadvertently or maliciously introduce biases that are not borne out in the data. This work focuses on the latter concern. We develop and study multicalibration, a new measure of algorithmic fairness that aims to mitigate concerns about discrimination introduced in the process of learning a predictor from data. Multicalibration guarantees accurate (calibrated) predictions for every subpopulation that can be identified within a specified class of computations. We think of this class as being quite rich; in particular, it can contain many overlapping subgroups of a protected group. We show that in many settings this strong notion of protection from discrimination is both attainable and aligned with the goal of obtaining accurate predictions. Along the way, we present new algorithms for learning a multicalibrated predictor, study the computational complexity of this task, and draw new connections to computational learning models such as agnostic learning.
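To make the core guarantee concrete, here is a minimal sketch (not the authors' algorithm) of what it means to *audit* a predictor for multicalibration: for every subgroup in a given collection, and within every bucket of predicted values, the mean prediction should match the mean outcome. The function name `max_calibration_violation` and the representation of subgroups as membership tests are illustrative assumptions, not notation from the paper.

```python
def max_calibration_violation(preds, labels, subgroups, num_buckets=10):
    """Audit a predictor for multicalibration-style violations.

    preds      -- list of predicted probabilities in [0, 1]
    labels     -- list of 0/1 outcomes, same length as preds
    subgroups  -- collection of membership tests: each maps an index i
                  to True if individual i belongs to that subgroup
    Returns the largest gap found, over all (subgroup, bucket) pairs,
    between the mean prediction and the mean outcome.
    """
    worst = 0.0
    for in_group in subgroups:
        for b in range(num_buckets):
            lo, hi = b / num_buckets, (b + 1) / num_buckets
            # individuals in this subgroup whose prediction lands in bucket b
            idx = [i for i, p in enumerate(preds)
                   if in_group(i) and lo <= p < hi]
            if not idx:
                continue
            mean_pred = sum(preds[i] for i in idx) / len(idx)
            mean_label = sum(labels[i] for i in idx) / len(idx)
            worst = max(worst, abs(mean_pred - mean_label))
    return worst
```

A predictor is (approximately) multicalibrated with respect to the collection `subgroups` when this maximum violation is small; the paper's learning algorithms can be viewed as repeatedly finding a violated (subgroup, bucket) pair and correcting the predictions on it. Note that the subgroups may overlap freely, which is exactly what distinguishes this guarantee from calibration on a few disjoint protected groups.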

