Confidence Intervals for Testing Disparate Impact in Fair Learning

07/17/2018
by   Philippe Besse, et al.

We provide the asymptotic distribution of the major indexes used in the statistical literature to quantify disparate treatment in machine learning. We aim to promote the use of confidence intervals when testing the so-called group disparate impact. We illustrate with examples the importance of using confidence intervals rather than a single point estimate.
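To make the idea concrete, here is a minimal sketch of an interval-based disparate impact test. It is not the paper's exact estimator: it assumes independent binomial selection counts in each group and uses a standard delta-method normal approximation on the log of the selection-rate ratio; the function name and inputs are hypothetical.

```python
import math

def disparate_impact_ci(sel_prot, n_prot, sel_ref, n_ref, z=1.96):
    """Point estimate and approximate confidence interval for the
    disparate impact ratio p_prot / p_ref.

    Uses a normal approximation on the log scale (delta method);
    z = 1.96 gives a ~95% interval. Assumes independent binomial
    samples in the two groups.
    """
    p1 = sel_prot / n_prot   # selection rate, protected group
    p0 = sel_ref / n_ref     # selection rate, reference group
    di = p1 / p0
    # Delta-method variance of log(p1 / p0) for two independent binomials
    se_log = math.sqrt((1 - p1) / (n_prot * p1) + (1 - p0) / (n_ref * p0))
    lo = di * math.exp(-z * se_log)
    hi = di * math.exp(z * se_log)
    return di, lo, hi

# Example: 40/100 selected in the protected group vs 60/100 in the reference
di, lo, hi = disparate_impact_ci(40, 100, 60, 100)
```

With these (made-up) counts the point estimate is about 0.67, which taken alone would fail the usual "80% rule" threshold, yet the interval contains 0.8, so the data do not clearly establish disparate impact; this is exactly the kind of conclusion a single value hides.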


research · 04/09/2019
Extreme value theory based confidence intervals for the parameters of a symmetric Lévy-stable distribution
We exploit the asymptotic normality of the extreme value theory (EVT) ba...

research · 12/23/2022
Simple Buehler-optimal confidence intervals on the average success probability of independent Bernoulli trials
One-sided confidence intervals are presented for the average of non-iden...

research · 07/11/2019
Statistical inference for piecewise normal distributions and stochastic variational inequalities
In this paper we first provide a method to compute confidence intervals ...

research · 11/29/2021
Confidence regions for univariate and multivariate data using permutation tests
Confidence intervals are central to statistical inference. We devise a m...

research · 05/31/2022
Confidence Intervals for Recursive Journal Impact Factors
We compute confidence intervals for recursive impact factors, that take ...

research · 06/05/2022
Inference for Interpretable Machine Learning: Fast, Model-Agnostic Confidence Intervals for Feature Importance
In order to trust machine learning for high-stakes problems, we need mod...

research · 01/30/2015
Confidence intervals for AB-test
AB-testing is a very popular technique in web companies since it makes i...
