Ethical and Fairness Implications of Model Multiplicity

03/14/2022
by Kacper Sokol, et al.

While predictive models are a purely technological feat, they often operate in a social context in which benign engineering choices entail unexpected real-life consequences. Fairness, pertaining to both individuals and groups, is one such consideration; it surfaces when data capture protected characteristics of people who may be discriminated against on the basis of these attributes. This notion has predominantly been studied for a fixed predictive model, sometimes under different classification thresholds, with the aim of identifying and eradicating undesirable behaviour. Here we backtrack on this assumption and explore a novel definition of fairness in which individuals can be harmed when one predictor is chosen ad hoc from a group of equally well performing models, i.e., in view of model multiplicity. Since a person may be classified differently across models that are otherwise considered equivalent, this individual could argue for a model with a more favourable outcome, possibly causing others to be adversely affected. We introduce this scenario with a two-dimensional example based on linear classification; then investigate its analytical properties in a broader context; and finally present experimental results on data sets popular in fairness studies. Our findings suggest that such unfairness can be found in real-life situations and may be difficult to mitigate with technical measures alone, as doing so degrades certain metrics of predictive performance.
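To make the multiplicity scenario concrete, here is a minimal sketch, not the paper's code: it assumes scikit-learn, synthetic two-dimensional data, and an illustrative 1% accuracy tolerance for deeming models "equally well performing". It fits several linear classifiers that differ only in benign choices (regularisation strength, data subsampling) and counts the individuals whose predicted label depends on which of these near-equivalent models happens to be deployed.

```python
# Illustrative sketch of predictive multiplicity with linear models.
# Assumptions (not from the paper): synthetic data, logistic regression,
# and a 1% accuracy tolerance defining the set of "equivalent" models.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=2, n_informative=2,
                           n_redundant=0, class_sep=0.8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4,
                                          random_state=0)

# Fit a pool of linear classifiers differing only in benign
# engineering choices: regularisation strength and data subsampling.
models, rng = [], np.random.default_rng(0)
for C in (0.01, 0.1, 1.0, 10.0):
    idx = rng.choice(len(X_tr), size=int(0.8 * len(X_tr)), replace=False)
    models.append(LogisticRegression(C=C).fit(X_tr[idx], y_tr[idx]))

accs = np.array([m.score(X_te, y_te) for m in models])
preds = np.stack([m.predict(X_te) for m in models])  # (n_models, n_test)

# Keep only models within 1% of the best test accuracy: a group of
# "equally well performing" models in the sense of the abstract.
equiv = preds[accs >= accs.max() - 0.01]

# Individuals whose label flips depending on which equivalent model
# is chosen ad hoc -- the ambiguity underlying multiplicity-based
# unfairness.
ambiguous = (equiv.min(axis=0) != equiv.max(axis=0))
print(f"accuracies: {np.round(accs, 3)}")
print(f"{ambiguous.sum()} of {len(y_te)} test points classified "
      "inconsistently across equally good models")
```

Any point flagged as ambiguous could, in the paper's terms, argue for the equivalent model that favours it, which is why the abstract notes that mitigating such unfairness purely by technical means tends to degrade some metric of predictive performance.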


Related research

Equalised Odds is not Equal Individual Odds: Post-processing for Group and Individual Fairness (04/19/2023)
Group fairness is achieved by equalising prediction distributions betwee...

What Is Fairness? Implications For FairML (05/19/2022)
A growing body of literature in fairness-aware ML (fairML) aspires to mi...

FlipTest: Fairness Auditing via Optimal Transport (06/21/2019)
We present FlipTest, a black-box auditing technique for uncovering subgr...

Prune Responsibly (09/10/2020)
Irrespective of the specific definition of fairness in a machine learnin...

Auditing and Achieving Intersectional Fairness in Classification Problems (11/04/2019)
Machine learning algorithms are extensively used to make increasingly mo...

ACROCPoLis: A Descriptive Framework for Making Sense of Fairness (04/19/2023)
Fairness is central to the ethical and responsible development and use o...

Scruples: A Corpus of Community Ethical Judgments on 32,000 Real-Life Anecdotes (08/20/2020)
As AI systems become an increasing part of people's everyday lives, it b...
