Fairer and more accurate, but for whom?

06/30/2017
by Alexandra Chouldechova et al.

Complex statistical machine learning models are increasingly being used or considered for use in high-stakes decision-making pipelines in domains such as financial services, health care, criminal justice and human services. These models are often investigated as possible improvements over more classical tools such as regression models or human judgement. While the modeling approach may be new, the practice of using some form of risk assessment to inform decisions is not. When determining whether a new model should be adopted, it is therefore essential to be able to compare the proposed model to the existing approach across a range of task-relevant accuracy and fairness metrics. Looking at overall performance metrics, however, may be misleading. Even when two models have comparable overall performance, they may nevertheless disagree in their classifications on a considerable fraction of cases. In this paper we introduce a model comparison framework for automatically identifying subgroups in which the differences between models are most pronounced. Our primary focus is on identifying subgroups where the models differ in terms of fairness-related quantities such as racial or gender disparities. We present experimental results from a recidivism prediction task and a hypothetical lending example.
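The abstract does not specify how the subgroup-identification framework works, but the core idea of comparing two models' fairness-related disparities within candidate subgroups can be sketched as follows. This is only an illustration under assumed column names (`race`, `model_a`, `model_b`, etc.) and a simplified disparity measure (max minus min positive-prediction rate across groups), not the paper's actual method:

```python
import pandas as pd

def rank_subgroups_by_disparity_gap(df, pred_a, pred_b, group_col, subgroup_cols):
    """For each candidate subgroup (a value of one of subgroup_cols),
    compute the demographic disparity under each model -- here, the
    spread in positive-prediction rates across the protected groups --
    and rank subgroups by how much the two models' disparities differ.
    This is an illustrative sketch, not the paper's algorithm."""
    rows = []
    for col in subgroup_cols:
        for val in df[col].unique():
            sub = df[df[col] == val]
            # Skip subgroups where only one protected group is present.
            if sub[group_col].nunique() < 2:
                continue
            rates_a = sub.groupby(group_col)[pred_a].mean()
            rates_b = sub.groupby(group_col)[pred_b].mean()
            disp_a = rates_a.max() - rates_a.min()
            disp_b = rates_b.max() - rates_b.min()
            rows.append({"subgroup": f"{col}={val}",
                         "disparity_a": disp_a,
                         "disparity_b": disp_b,
                         "gap": abs(disp_a - disp_b)})
    return pd.DataFrame(rows).sort_values("gap", ascending=False)

# Toy example: two models agree overall but differ sharply for young defendants.
df = pd.DataFrame({
    "race":    ["b", "b", "w", "w", "b", "b", "w", "w"],
    "age":     ["young"] * 4 + ["old"] * 4,
    "model_a": [1, 1, 0, 0, 0, 0, 0, 0],
    "model_b": [0, 0, 0, 0, 0, 0, 0, 0],
})
ranked = rank_subgroups_by_disparity_gap(df, "model_a", "model_b", "race", ["age"])
print(ranked)
```

On this toy data the `age=young` subgroup surfaces first, since model A produces a large racial disparity there while model B produces none; this is the kind of localized disagreement that overall performance metrics would mask.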


