Fairness in Machine Learning

by Luca Oneto, et al.

Machine-learning-based systems are reaching society at large, touching many aspects of everyday life. This phenomenon has been accompanied by concerns about the ethical issues that may arise from the adoption of these technologies. ML fairness is a recently established area of machine learning that studies how to ensure that biases in the data and model inaccuracies do not lead to models that treat individuals unfavorably on the basis of characteristics such as race, gender, disabilities, and sexual or political orientation. In this manuscript, we discuss some of the limitations of current reasoning about fairness and of the methods that deal with it, and describe some work done by the authors to address them. More specifically, we show how causal Bayesian networks can play an important role in reasoning about and dealing with fairness, especially in complex unfairness scenarios. We describe how optimal transport theory can be used to develop methods that impose constraints on the full shapes of the distributions corresponding to different sensitive attributes, overcoming the limitation of most approaches, which approximate fairness desiderata by imposing constraints on lower-order moments or other functions of those distributions. We present a unified framework that encompasses methods able to deal with different settings and fairness criteria, and that enjoys strong theoretical guarantees. We introduce an approach to learning fair representations that can generalize to unseen tasks. Finally, we describe a technique that accounts for legal restrictions on the use of sensitive attributes.
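To make the optimal-transport idea concrete: instead of matching only the means of the prediction-score distributions of two sensitive groups, one can compare the full distributions via the empirical 1-Wasserstein distance and use it as a fairness penalty. The sketch below is illustrative only, assuming a quantile-based formulation; the function names are not from the paper.

```python
# Hedged sketch (not the authors' implementation): empirical 1-Wasserstein
# distance between the score distributions of two sensitive groups, which
# could serve as a penalty pushing the full distributions to coincide.

def empirical_quantile(sorted_xs, q):
    """Linear-interpolation quantile of a sorted sample, q in [0, 1]."""
    pos = q * (len(sorted_xs) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(sorted_xs) - 1)
    frac = pos - lo
    return sorted_xs[lo] * (1 - frac) + sorted_xs[hi] * frac

def wasserstein_1(xs, ys, grid=100):
    """Approximate W1 by integrating |F_x^{-1}(q) - F_y^{-1}(q)| over q."""
    xs, ys = sorted(xs), sorted(ys)
    qs = [(i + 0.5) / grid for i in range(grid)]
    return sum(abs(empirical_quantile(xs, q) - empirical_quantile(ys, q))
               for q in qs) / grid

# Toy example: model scores for two groups; a distribution-matching
# fairness constraint would drive this distance toward zero in training.
group_a = [0.2, 0.4, 0.6, 0.8]
group_b = [0.3, 0.5, 0.7, 0.9]
penalty = wasserstein_1(group_a, group_b)
```

Because `group_b` is `group_a` shifted by 0.1, the quantile functions differ by exactly 0.1 everywhere and the penalty evaluates to 0.1; a moment-based constraint on the means alone would detect the same gap here, but only the full-distribution distance also catches differences in spread or shape.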




