Automated Directed Fairness Testing

07/02/2018
by Sakshi Udeshi, et al.

Fairness is a critical trait in decision making. As machine-learning models are increasingly used in sensitive application domains (e.g., education and employment) to make decisions, it is crucial that the decisions computed by such models are free of unintended bias. But how can we automatically validate the fairness of arbitrary machine-learning models? For a given machine-learning model and a set of sensitive input parameters, our AEQUITAS approach automatically discovers discriminatory inputs that highlight fairness violations. At the core of AEQUITAS are three novel strategies that employ probabilistic search over the input space with the objective of uncovering fairness violations. AEQUITAS leverages the inherent robustness property of common machine-learning models to design and implement scalable test-generation methodologies. An appealing feature of the generated test inputs is that they can be systematically added to the training set of the underlying model to improve its fairness. To this end, we design a fully automated module that is guaranteed to improve the fairness of the underlying model. We implemented AEQUITAS and evaluated it on six state-of-the-art classifiers, including a classifier that was designed with fairness constraints. We show that AEQUITAS effectively generates inputs that uncover fairness violations in all the subject classifiers and systematically improves the fairness of the respective models using the generated test inputs. In our evaluation, AEQUITAS generates up to 70% discriminatory inputs (with respect to the total number of inputs generated) and leverages these inputs to improve fairness by up to 94%.
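
The sketch below is not the authors' implementation; it is a minimal illustration, under assumed names (model, feature bounds, sensitive-attribute index), of the kind of probabilistic search the abstract describes: a global random search over the input domain to find "discriminatory inputs" (inputs whose prediction flips when only a sensitive attribute is changed), followed by a local search that perturbs known discriminatory inputs, exploiting the robustness intuition that nearby inputs are likely discriminatory as well.

```python
import numpy as np

def is_discriminatory(model, x, sensitive_idx, sensitive_values):
    """True if changing only the sensitive attribute changes the model's prediction."""
    base = model.predict(x.reshape(1, -1))[0]
    for v in sensitive_values:
        if v == x[sensitive_idx]:
            continue
        x_alt = x.copy()
        x_alt[sensitive_idx] = v
        if model.predict(x_alt.reshape(1, -1))[0] != base:
            return True
    return False

def global_search(model, bounds, sensitive_idx, sensitive_values, n_samples=1000, seed=0):
    """Uniform random sampling over the input domain to seed discriminatory inputs."""
    rng = np.random.default_rng(seed)
    found = []
    for _ in range(n_samples):
        x = np.array([rng.integers(lo, hi + 1) for lo, hi in bounds], dtype=float)
        if is_discriminatory(model, x, sensitive_idx, sensitive_values):
            found.append(x)
    return found

def local_search(model, start, bounds, sensitive_idx, sensitive_values, n_steps=200, seed=1):
    """Small perturbations around a known discriminatory input; robustness suggests
    neighbouring inputs are likely to be discriminatory too."""
    rng = np.random.default_rng(seed)
    found, x = [], start.copy()
    non_sensitive = [i for i in range(len(start)) if i != sensitive_idx]
    for _ in range(n_steps):
        i = rng.choice(non_sensitive)
        x_new = x.copy()
        x_new[i] = np.clip(x_new[i] + rng.choice([-1.0, 1.0]),
                           bounds[i][0], bounds[i][1])
        if is_discriminatory(model, x_new, sensitive_idx, sensitive_values):
            found.append(x_new)
            x = x_new
    return found
```

In the same spirit as the retraining step described above, the discriminatory inputs collected this way could be labeled and appended to the training set before refitting the model; the concrete retraining schedule and the three search strategies themselves are detailed in the full paper.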

research · 12/21/2021
A Pilot Study on Detecting Unfairness in Human Decisions With Machine Learning Algorithmic Bias Detection
Fairness in decision-making has been a long-standing issue in our societ...

research · 02/26/2019
Grammar Based Directed Testing of Machine Learning Systems
The massive progress of machine learning has seen its application over a...

research · 05/08/2023
Distribution-aware Fairness Test Generation
This work addresses how to validate group fairness in image recognition ...

research · 12/16/2022
Provable Fairness for Neural Network Models using Formal Verification
Machine learning models are increasingly deployed for critical decision-...

research · 09/03/2020
Fairness in the Eyes of the Data: Certifying Machine-Learning Models
We present a framework that allows to certify the fairness degree of a m...

research · 06/13/2022
Specifying and Testing k-Safety Properties for Machine-Learning Models
Machine-learning models are becoming increasingly prevalent in our lives...

research · 09/27/2022
Explainable Global Fairness Verification of Tree-Based Classifiers
We present a new approach to the global fairness verification of tree-ba...