Can Fairness be Automated? Guidelines and Opportunities for Fairness-aware AutoML

by Hilde Weerts et al.

The field of automated machine learning (AutoML) introduces techniques that automate parts of the development of machine learning (ML) systems, accelerating the process and reducing barriers for novices. However, decisions derived from ML models can reproduce, amplify, or even introduce unfairness in our societies, causing harm to (groups of) individuals. In response, researchers have started to propose AutoML systems that jointly optimize fairness and predictive performance to mitigate fairness-related harm. However, fairness is a complex and inherently interdisciplinary subject, and solely posing it as an optimization problem can have adverse side effects. With this work, we aim to raise awareness among developers of AutoML systems about such limitations of fairness-aware AutoML, while also calling attention to the potential of AutoML as a tool for fairness research. We present a comprehensive overview of different ways in which fairness-related harm can arise and the ensuing implications for the design of fairness-aware AutoML. We conclude that while fairness cannot be automated, fairness-aware AutoML can play an important role in the toolbox of an ML practitioner. We highlight several open technical challenges for future work in this direction. Additionally, we advocate for the creation of more user-centered assistive systems designed to tackle challenges encountered in fairness work.
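The joint optimization of fairness and predictive performance mentioned above can be illustrated with a small sketch. The example below is a toy model selection loop over synthetic data (all data, the threshold search, and the trade-off weight are hypothetical, not taken from the paper): candidates are scored on accuracy minus a weighted demographic parity difference, the kind of scalarized objective a fairness-aware AutoML system might optimize.

```python
import numpy as np

# Toy setup (synthetic, for illustration only): a binary sensitive
# attribute, a model score correlated with it, and binary labels.
rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)                 # sensitive attribute (0/1)
score = rng.normal(loc=group * 0.5, size=n)   # model score, group-correlated
y = (score + rng.normal(scale=1.0, size=n) > 0.25).astype(int)

def evaluate(threshold):
    """Return (accuracy, demographic parity difference) for a threshold."""
    pred = (score > threshold).astype(int)
    acc = (pred == y).mean()
    # demographic parity difference: gap in positive prediction rates
    dpd = abs(pred[group == 0].mean() - pred[group == 1].mean())
    return acc, dpd

# Scalarized objective: trade accuracy against unfairness.
# The weight 2.0 is an arbitrary choice for this sketch.
best = max(np.linspace(-1.0, 1.0, 41),
           key=lambda t: evaluate(t)[0] - 2.0 * evaluate(t)[1])
acc, dpd = evaluate(best)
print(f"threshold={best:.2f} accuracy={acc:.2f} dp_difference={dpd:.2f}")
```

Collapsing fairness into a single penalty term like this is exactly the kind of simplification the paper cautions against; real fairness work also involves choices the optimizer cannot make, such as which metric and which groups matter.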

