Active Fairness Instead of Unawareness

09/14/2020
by Boris Ruf, et al.

The risk that AI systems may promote discrimination by reproducing and reinforcing unwanted bias in data has been broadly discussed in research and society. Many current legal standards demand the removal of sensitive attributes from data in order to achieve "fairness through unawareness". We argue that this approach is obsolete in the era of big data, where large datasets with highly correlated attributes are common. On the contrary, we propose the active use of sensitive attributes in order to observe and control any kind of discrimination, and thus arrive at fair results.
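To make the argument concrete, here is a minimal sketch in Python, not taken from the paper: all variable names, the synthetic data, and the chosen intervention (demographic parity measured and corrected via per-group thresholds) are illustrative assumptions. It shows why dropping a sensitive attribute fails when a correlated proxy remains, and how keeping the attribute makes disparity both observable and controllable.

```python
# Minimal illustrative sketch (not the authors' code): synthetic data showing
# (1) why removing a sensitive attribute fails when a correlated proxy remains,
# and (2) how keeping the attribute lets us measure and correct disparity.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical binary sensitive attribute (e.g. group membership).
group = rng.integers(0, 2, size=n)

# A non-sensitive feature highly correlated with the group (a "proxy").
proxy = group + rng.normal(0.0, 0.5, size=n)

# "Fairness through unawareness": even with `group` dropped, a trivial
# threshold on the proxy reconstructs it far better than chance.
recovered = (proxy > 0.5).astype(int)
print(f"group recovered from proxy alone: {(recovered == group).mean():.0%}")

# Score from some downstream model that (implicitly) relies on the proxy.
score = 0.3 * proxy + rng.normal(0.0, 0.2, size=n)
decision = (score > 0.4).astype(int)

# Active fairness, step 1: *observe* discrimination using the attribute.
rate0 = decision[group == 0].mean()
rate1 = decision[group == 1].mean()
print(f"positive rate: group 0 = {rate0:.2f}, group 1 = {rate1:.2f}")
print(f"demographic parity difference = {abs(rate1 - rate0):.2f}")

# Active fairness, step 2: *control* it, here with per-group thresholds
# chosen so both groups are accepted at the same overall rate.
target = decision.mean()
thr = {g: np.quantile(score[group == g], 1.0 - target) for g in (0, 1)}
adjusted = (score > np.where(group == 1, thr[1], thr[0])).astype(int)
print(f"adjusted rates: group 0 = {adjusted[group == 0].mean():.2f}, "
      f"group 1 = {adjusted[group == 1].mean():.2f}")
```

Demographic parity is only one possible fairness criterion; the same pattern applies to others such as equalized odds, but every one of them requires access to the sensitive attribute in order to be computed at all, which is the point of the abstract.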



Related research

04/29/2021
You Can Still Achieve Fairness Without Sensitive Attributes: Exploring Biases in Non-Sensitive Features
Though machine learning models are achieving great success, extensive s...

11/12/2018
Eliminating Latent Discrimination: Train Then Mask
How can we control for latent discrimination in predictive models? How c...

11/26/2018
AI Fairness for People with Disabilities: Point of View
We consider how fair treatment in society for people with disabilities m...

10/13/2017
Two-stage Algorithm for Fairness-aware Machine Learning
Algorithmic decision making process now affects many aspects of our live...

03/15/2020
Getting Fairness Right: Towards a Toolbox for Practitioners
The potential risk of AI systems unintentionally embedding and reproduci...

02/26/2020
Fairness-Aware Learning with Prejudice Free Representations
Machine learning models are extensively being used to make decisions tha...

07/20/2022
Mitigating Algorithmic Bias with Limited Annotations
Existing work on fairness modeling commonly assumes that sensitive attri...
