Reliability Gaps Between Groups in COMPAS Dataset

08/29/2023
by Tim Räz, et al.

This paper investigates the inter-rater reliability of risk assessment instruments (RAIs). The main question is whether different, socially salient groups are affected differently by a lack of inter-rater reliability of RAIs, that is, whether rater mistakes affect different groups to different degrees. The question is investigated with a simulation study on the COMPAS dataset. A controlled degree of noise is injected into the input data of a predictive model; the noise can be interpreted as a synthetic rater that makes mistakes. The main finding is that there are systematic differences in output reliability between groups in the COMPAS dataset. The sign of the difference depends on the kind of inter-rater statistic that is used (Cohen's Kappa, Byrt's PABAK, ICC), and in particular on whether or not a correction for the prediction prevalences of the groups is applied.
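The simulation setup described in the abstract can be illustrated with a short sketch. The following Python snippet is a minimal, hypothetical illustration (not the authors' code): a model's predictions on clean inputs are compared with its predictions on noise-perturbed inputs, which play the role of a second, error-prone rater, and inter-rater statistics (Cohen's Kappa and Byrt's PABAK) are computed separately per group. The data, model, group indicator, and noise level are placeholders, not the COMPAS setup itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Placeholder data standing in for COMPAS features, outcomes, and group membership.
n, d = 5000, 6
X = rng.normal(size=(n, d))
group = rng.integers(0, 2, size=n)                       # two socially salient groups
y = (X @ rng.normal(size=d) + 0.5 * group + rng.normal(size=n) > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

def pabak(a, b):
    """Byrt's prevalence- and bias-adjusted kappa: 2 * observed agreement - 1."""
    return 2 * np.mean(a == b) - 1

# "Rater 1": predictions on clean inputs.
# "Rater 2": predictions on inputs with controlled Gaussian noise added,
# interpreted as a synthetic rater that makes mistakes.
sigma = 0.3
pred_clean = model.predict(X)
pred_noisy = model.predict(X + rng.normal(scale=sigma, size=X.shape))

for g in (0, 1):
    m = group == g
    print(f"group {g}: Cohen's kappa = {cohen_kappa_score(pred_clean[m], pred_noisy[m]):.3f}, "
          f"PABAK = {pabak(pred_clean[m], pred_noisy[m]):.3f}")
```

Because PABAK replaces Cohen's chance-agreement term with the value expected under equal prevalences (reducing to 2 * observed agreement - 1), the two statistics can rank groups differently when the groups' prediction prevalences differ, which is in line with the abstract's remark that the sign of the reliability gap depends on whether a prevalence correction is used.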


Related research

Noise Induces Loss Discrepancy Across Groups for Linear Regression (11/22/2019)
We study the effect of feature noise (measurement error) on the discrepa...

Inter-Rater Reliability is Individual Fairness (08/10/2023)
In this note, a connection between inter-rater reliability and individua...

Reservoir Computing with Noise (02/28/2023)
This paper investigates in detail the effects of noise on the performanc...

Measuring and Explaining the Inter-Cluster Reliability of Multidimensional Projections (07/16/2021)
We propose Steadiness and Cohesiveness, two novel metrics to measure the...

Measuring Changes in Disparity Gaps: An Application to Health Insurance (01/14/2022)
We propose a method for reporting how program evaluations reduce gaps be...
