Designing Equitable Algorithms

02/17/2023
by Alex Chohlas-Wood, et al.

Predictive algorithms are now used to help distribute a large share of our society's resources and sanctions, such as healthcare, loans, criminal detentions, and tax audits. Under the right circumstances, these algorithms can improve the efficiency and equity of decision-making. At the same time, there is a danger that the algorithms themselves could entrench and exacerbate disparities, particularly along racial, ethnic, and gender lines. To help ensure their fairness, many researchers suggest that algorithms be subject to at least one of three constraints: (1) no use of legally protected features, such as race, ethnicity, and gender; (2) equal rates of "positive" decisions across groups; and (3) equal error rates across groups. Here we show that these constraints, while intuitively appealing, often worsen outcomes for individuals in marginalized groups, and can even leave all groups worse off. The inherent trade-off we identify between formal fairness constraints and welfare improvements – particularly for the marginalized – highlights the need for a more robust discussion on what it means for an algorithm to be "fair". We illustrate these ideas with examples from healthcare and the criminal-legal system, and make several proposals to help practitioners design more equitable algorithms.
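To make constraints (2) and (3) concrete, here is a minimal sketch (not drawn from the paper) of how the two group-based gaps could be measured on decision data. The groups, decisions, and outcomes below are entirely hypothetical toy values.

```python
def positive_rate(decisions):
    """Fraction of 'positive' decisions (e.g., loan approvals)."""
    return sum(decisions) / len(decisions)

def error_rate(decisions, outcomes):
    """Fraction of decisions that disagree with the true outcome."""
    return sum(d != y for d, y in zip(decisions, outcomes)) / len(decisions)

# Hypothetical binary decisions and ground-truth outcomes for two groups.
group_a = {"decisions": [1, 1, 0, 1, 0, 1], "outcomes": [1, 1, 0, 0, 0, 1]}
group_b = {"decisions": [1, 0, 0, 0, 1, 0], "outcomes": [1, 1, 0, 0, 1, 0]}

# Constraint (2): equal rates of positive decisions across groups.
parity_gap = abs(positive_rate(group_a["decisions"])
                 - positive_rate(group_b["decisions"]))

# Constraint (3): equal error rates across groups.
error_gap = abs(error_rate(group_a["decisions"], group_a["outcomes"])
                - error_rate(group_b["decisions"], group_b["outcomes"]))

print(f"positive-rate gap: {parity_gap:.2f}")  # prints 0.33
print(f"error-rate gap: {error_gap:.2f}")      # prints 0.00
```

Note that in this toy example the two constraints already diverge: the groups have identical error rates but very different positive-decision rates, illustrating why satisfying one formal criterion need not satisfy another.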


