Software Engineering for Fairness: A Case Study with Hyperparameter Optimization

05/14/2019
by Joymallya Chakraborty, et al.

We assert that it is the ethical duty of software engineers to strive to reduce software discrimination. This paper discusses how that might be done. This is an important topic since machine learning software is increasingly being used to make decisions that affect people's lives. Potentially, the application of that software will result in fairer decisions because (unlike humans) machine learning software is not biased. However, recent results show that the software within many data mining packages exhibits "group discrimination"; i.e., its decisions are inappropriately affected by "protected attributes" (e.g., race, gender, age). There has been much prior work on validating the fairness of machine-learning models (by recognizing when such software discrimination exists). But after detection comes mitigation: what steps can ethical software engineers take to reduce discrimination in the software they produce? This paper shows that making fairness a goal during hyperparameter optimization can both (a) preserve the predictive power of a model learned from a data miner and (b) generate fairer results. To the best of our knowledge, this is the first application of hyperparameter optimization as a tool for software engineers to generate fairer software.
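To make the core idea concrete, below is a minimal sketch of fairness-aware hyperparameter search in Python. It is not the paper's actual experimental setup (the authors describe that in the full text); it assumes a scikit-learn LogisticRegression, a single tuned hyperparameter C, synthetic data with a binary protected attribute, and a simple combined score of held-out accuracy minus a group-fairness penalty (the gap in positive-prediction rates across groups, often called statistical parity difference). All of those choices are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data: two features, a binary protected attribute, and a label
# that is (unfairly) correlated with the protected attribute.
n = 2000
protected = rng.integers(0, 2, n)                 # hypothetical 0/1 protected group
x = rng.normal(size=(n, 2)) + protected[:, None]  # feature shift induces group bias
y = (x[:, 0] + 0.5 * protected + rng.normal(scale=0.5, size=n) > 1).astype(int)

X = np.column_stack([x, protected])
X_tr, X_te, y_tr, y_te, p_tr, p_te = train_test_split(
    X, y, protected, test_size=0.5, random_state=0)

def parity_difference(y_pred, prot):
    """Statistical parity difference: gap in positive-prediction rates."""
    return abs(y_pred[prot == 1].mean() - y_pred[prot == 0].mean())

best, best_score = None, -np.inf
for _ in range(50):
    # Randomly sample a hyperparameter configuration (here, just C).
    C = 10 ** rng.uniform(-3, 2)
    model = LogisticRegression(C=C, max_iter=1000).fit(X_tr, y_tr)
    y_pred = model.predict(X_te)
    acc = (y_pred == y_te).mean()
    fair = parity_difference(y_pred, p_te)
    # Score each configuration on predictive power *and* fairness,
    # so the search prefers configurations that are accurate and fair.
    score = acc - fair
    if score > best_score:
        best, best_score = (C, acc, fair), score

print("best C=%.4f  accuracy=%.3f  parity diff=%.3f" % best)
```

In practice one would tune more hyperparameters, use a proper multi-objective optimizer rather than random search, and report standard fairness metrics; the sketch only illustrates that fairness can enter the tuning objective just like accuracy.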

