Fair Bayesian Optimization

06/09/2020
by Valerio Perrone, et al.

Given the increasing importance of machine learning (ML) in our lives, algorithmic fairness techniques have been proposed to mitigate biases that can be amplified by ML. Commonly, these specialized techniques apply to a single family of ML models and a specific definition of fairness, limiting their effectiveness in practice. We introduce a general constrained Bayesian optimization (BO) framework to optimize the performance of any ML model while enforcing one or multiple fairness constraints. BO is a global optimization method that has been successfully applied to automatically tune the hyperparameters of ML models. We apply BO with fairness constraints to a range of popular models, including random forests, gradient boosting, and neural networks, showing that we can obtain accurate and fair solutions by acting solely on the hyperparameters. We also show empirically that our approach is competitive with specialized techniques that explicitly enforce fairness constraints during training, and outperforms preprocessing methods that learn unbiased representations of the input data. Moreover, our method can be used in synergy with such specialized fairness techniques to tune their hyperparameters. Finally, we study the relationship between hyperparameters and fairness of the generated model. We observe a correlation between regularization and unbiased models, explaining why acting on the hyperparameters leads to ML models that generalize well and are fair.
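
The core idea is to treat fairness as a constraint inside the BO loop: one surrogate models validation error, a second models the fairness violation, and the acquisition weights expected improvement by the probability of satisfying the constraint (constrained EI). The sketch below is a minimal illustration of this idea, assuming a synthetic dataset, the demographic parity difference as the fairness metric, a random-forest hyperparameter space, and a 0.05 constraint threshold; all of these choices are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch: constrained Bayesian optimization of hyperparameters
# under a demographic parity constraint. Dataset, search space, and the
# 0.05 threshold are illustrative assumptions.
import numpy as np
from scipy.stats import norm
from sklearn.ensemble import RandomForestClassifier
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)

# Synthetic data: a binary sensitive attribute correlated with the label.
n = 2000
sensitive = rng.binomial(1, 0.5, size=n)
X = rng.normal(size=(n, 5)) + 0.5 * sensitive[:, None]
y = (X[:, 0] + 0.8 * sensitive + rng.normal(scale=0.5, size=n) > 0.5).astype(int)
X_train, X_val, y_train, y_val, s_train, s_val = train_test_split(
    X, y, sensitive, random_state=0)

def evaluate(log_min_samples_leaf, max_depth):
    """Train at one hyperparameter setting; return
    (validation error, demographic parity difference)."""
    clf = RandomForestClassifier(
        n_estimators=100,
        min_samples_leaf=int(round(np.exp(log_min_samples_leaf))),
        max_depth=int(round(max_depth)),
        random_state=0,
    ).fit(X_train, y_train)
    pred = clf.predict(X_val)
    error = 1.0 - (pred == y_val).mean()
    # Demographic parity difference: |P(yhat=1 | s=1) - P(yhat=1 | s=0)|
    dpd = abs(pred[s_val == 1].mean() - pred[s_val == 0].mean())
    return error, dpd

eps = 0.05  # fairness constraint: demographic parity difference <= eps
bounds = np.array([[0.0, 5.0], [2.0, 20.0]])  # log(min_samples_leaf), max_depth

# Initial random design.
Xobs = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5, 2))
errs, dpds = map(np.array, zip(*[evaluate(*x) for x in Xobs]))

for _ in range(15):
    # One GP surrogate for the objective, one for the constraint.
    gp_err = GaussianProcessRegressor(Matern(nu=2.5), normalize_y=True).fit(Xobs, errs)
    gp_dpd = GaussianProcessRegressor(Matern(nu=2.5), normalize_y=True).fit(Xobs, dpds)

    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(500, 2))
    mu, sd = gp_err.predict(cand, return_std=True)
    mu_c, sd_c = gp_dpd.predict(cand, return_std=True)

    # Expected improvement over the best feasible observation so far.
    feasible = dpds <= eps
    best = errs[feasible].min() if feasible.any() else errs.min()
    z = (best - mu) / np.maximum(sd, 1e-9)
    ei = sd * (z * norm.cdf(z) + norm.pdf(z))
    # Probability of satisfying the fairness constraint under the GP.
    pof = norm.cdf((eps - mu_c) / np.maximum(sd_c, 1e-9))
    x_next = cand[np.argmax(ei * pof)]  # constrained EI

    e, d = evaluate(*x_next)
    Xobs = np.vstack([Xobs, x_next])
    errs = np.append(errs, e)
    dpds = np.append(dpds, d)

i = np.argmin(np.where(dpds <= eps, errs, np.inf))
print(f"best feasible error={errs[i]:.3f}, DPD={dpds[i]:.3f}")
```

Multiplying EI by the probability of feasibility steers the search toward hyperparameters that are both accurate and fair; multiple fairness constraints would each contribute an additional surrogate and feasibility factor.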

Related research

11/06/2020 · There is no trade-off: enforcing fairness can improve accuracy
One of the main barriers to the broader adoption of algorithmic fairness...

08/13/2022 · A Novel Regularization Approach to Fair ML
A number of methods have been introduced for the fair ML issue, most of ...

06/28/2017 · autoBagging: Learning to Rank Bagging Workflows with Metalearning
Machine Learning (ML) has been successfully applied to a wide range of d...

06/21/2021 · Regularization is all you Need: Simple Neural Nets can Excel on Tabular Data
Tabular datasets are the last "unconquered castle" for deep learning, wi...

03/31/2021 · Individually Fair Gradient Boosting
We consider the task of enforcing individual fairness in gradient boosti...

09/30/2022 · Shockwave: Fair and Efficient Cluster Scheduling for Dynamic Adaptation in Machine Learning
Dynamic adaptation has become an essential technique in accelerating dis...

02/25/2020 · Teaching the Old Dog New Tricks: Supervised Learning with Constraints
Methods for taking into account external knowledge in Machine Learning m...
