Protecting the Protected Group: Circumventing Harmful Fairness

05/25/2019
by Omer Ben-Porat, et al.

Machine Learning (ML) algorithms shape our lives. Banks use them to determine whether we are good borrowers; IT companies delegate recruitment decisions to them; police departments apply ML to crime prediction; and judges base their verdicts on ML. However, real-world examples show that such automated decisions tend to discriminate against protected groups. This has attracted considerable attention both in the media and in the research community, and quite a few formal notions of fairness have been proposed, typically taking the form of constraints that a "fair" algorithm must satisfy. We focus on scenarios where fairness is imposed on a self-interested party (e.g., a bank that maximizes its revenue). We find that the disadvantaged protected group can be worse off after a fairness constraint is imposed. We introduce a family of Welfare-Equalizing fairness constraints that equalize the per-capita welfare of protected groups, and that include Demographic Parity and Equal Opportunity as particular cases. Within this family, we characterize conditions under which the fairness constraint helps the disadvantaged group. We also characterize the structure of the optimal Welfare-Equalizing classifier for the self-interested party, and provide an LP-based algorithm to compute it. Overall, our Welfare-Equalizing fairness approach provides a unified framework for discussing fairness in classification in the presence of a self-interested party.
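As a concrete illustration (not taken from the paper), the setting can be sketched as a linear program: a self-interested bank chooses per-score acceptance probabilities to maximize expected revenue, subject to a welfare-equalizing constraint, here instantiated as Demographic Parity, i.e., equal per-capita acceptance rates across two protected groups. All data, variable names, and the two-group/three-bin structure below are illustrative assumptions; the paper's LP-based algorithm for the optimal Welfare-Equalizing classifier is more general.

```python
# Minimal LP sketch of a revenue-maximizing classifier under a
# Demographic-Parity-style welfare-equalizing constraint.
# All numbers are illustrative assumptions, not data from the paper.

import numpy as np
from scipy.optimize import linprog

# For each group and score bin: number of applicants and the bank's
# expected revenue per accepted applicant (can be negative).
n_A = np.array([40.0, 30.0, 30.0])      # group A applicants per score bin
rev_A = np.array([-1.0, 0.5, 2.0])      # revenue per accepted A applicant
n_B = np.array([50.0, 30.0, 20.0])      # group B applicants per score bin
rev_B = np.array([-1.5, 0.2, 1.5])      # revenue per accepted B applicant

# Decision variables: acceptance probabilities in [0, 1],
# ordered as [p_A(bin 0..2), p_B(bin 0..2)].
# linprog minimizes, so negate the expected revenue.
c = -np.concatenate([n_A * rev_A, n_B * rev_B])

# Welfare-equalizing constraint (Demographic Parity instance):
# per-capita acceptance rate of group A equals that of group B.
A_eq = np.concatenate([n_A / n_A.sum(), -n_B / n_B.sum()]).reshape(1, -1)
b_eq = np.array([0.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 6, method="highs")
p_A, p_B = res.x[:3], res.x[3:]
print("acceptance probs A:", np.round(p_A, 3))
print("acceptance probs B:", np.round(p_B, 3))
print("per-capita acceptance A:", round(float(n_A @ p_A / n_A.sum()), 3),
      "B:", round(float(n_B @ p_B / n_B.sum()), 3))
print("bank revenue:", round(-res.fun, 3))
```

Comparing the constrained solution with the unconstrained one (drop A_eq/b_eq) shows how the fairness constraint reshapes acceptance decisions, and whether the disadvantaged group's per-capita welfare actually improves, which is the question the paper analyzes.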

Related research

04/14/2022 · On allocations that give intersecting groups their fair share
We consider item allocation to individual agents who have additive valua...

03/13/2021 · OmniFair: A Declarative System for Model-Agnostic Group Fairness in Machine Learning
Machine learning (ML) is increasingly being used to make decisions in ou...

02/17/2023 · Designing Equitable Algorithms
Predictive algorithms are now used to help distribute a large share of o...

08/07/2022 · Counterfactual Fairness Is Basically Demographic Parity
Making fair decisions is crucial to ethically implementing machine learn...

06/16/2022 · Active Fairness Auditing
The fast spreading adoption of machine learning (ML) by companies across...

07/12/2022 · A Conceptual Framework for Using Machine Learning to Support Child Welfare Decisions
Human services systems make key decisions that impact individuals in the...

10/22/2020 · The Pursuit of Algorithmic Fairness: On "Correcting" Algorithmic Unfairness in a Child Welfare Reunification Success Classifier
The algorithmic fairness of predictive analytic tools in the public sect...
