
Active Fairness Instead of Unawareness

by Boris Ruf, et al.

The possible risk that AI systems could promote discrimination by reproducing and enforcing unwanted bias in data has been broadly discussed in research and society. Many current legal standards demand the removal of sensitive attributes from data in order to achieve "fairness through unawareness". We argue that this approach is obsolete in the era of big data, where large datasets with highly correlated attributes are common. On the contrary, we propose the active use of sensitive attributes for the purpose of observing and controlling any kind of discrimination, thus leading to fair results.
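To illustrate the idea of actively using a sensitive attribute to *observe* disparity (rather than discarding it), here is a minimal sketch, not taken from the paper: it computes the demographic parity gap, i.e. the difference in positive-prediction rates between groups defined by the sensitive attribute. All function and variable names are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's method): the sensitive
# attribute is kept and used to measure disparity between groups.

def demographic_parity_gap(predictions, sensitive):
    """Absolute gap in positive-prediction rates across sensitive groups."""
    rates = {}
    for group in set(sensitive):
        idx = [i for i, s in enumerate(sensitive) if s == group]
        rates[group] = sum(predictions[i] for i in idx) / len(idx)
    return max(rates.values()) - min(rates.values())

# Example: binary model predictions for two groups "a" and "b"
preds = [1, 0, 1, 1, 0, 0, 1, 0]
sens  = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, sens))  # group "a": 3/4, group "b": 1/4 -> 0.5
```

A monitoring step like this is only possible when the sensitive attribute is retained; under "fairness through unawareness" the disparity would remain present in correlated features but could no longer be measured or controlled.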



