Enhanced Fairness Testing via Generating Effective Initial Individual Discriminatory Instances

by Minghua Ma, et al.

Fairness testing aims to mitigate unintended discrimination in the decision-making processes of data-driven AI systems. Individual discrimination may occur when an AI model makes different decisions for two distinct individuals who are distinguishable solely by their protected attributes, such as age and race. Such instances reveal biased AI behaviour and are called Individual Discriminatory Instances (IDIs). In this paper, we propose an approach for selecting the initial seeds used to generate IDIs for fairness testing. Previous studies mainly used random initial seeds to this end. However, this phase is crucial, as these seeds are the basis of the follow-up IDI generation. We dub our proposed seed-selection approach I&D. It generates a large number of initial IDIs exhibiting great diversity, aiming to improve the overall performance of fairness testing. Our empirical study reveals that I&D produces a larger number of IDIs than four state-of-the-art seed-generation approaches, generating 1.68X more IDIs on average. Moreover, we use the IDIs produced by I&D to retrain machine learning models and find that this reduces the number of remaining IDIs by 29%, demonstrating that I&D is effective for improving model fairness.
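The abstract's core notion can be illustrated concretely: an input is an IDI if changing only its protected attribute flips the model's decision. Below is a minimal, hypothetical sketch of such a check; the feature layout, the `protected_index` convention, and the toy models are illustrative assumptions, not the paper's actual method or benchmarks.

```python
# Hypothetical sketch: detecting an Individual Discriminatory Instance (IDI).
# An input x is an IDI for a classifier if an otherwise-identical "twin"
# that differs only in the protected attribute receives a different decision.

def is_idi(predict, x, protected_index, protected_values):
    """Return True if flipping only the protected attribute of `x`
    changes the model's decision, i.e. `x` is an IDI."""
    original = predict(x)
    for value in protected_values:
        if value == x[protected_index]:
            continue
        twin = list(x)                      # copy: all other attributes unchanged
        twin[protected_index] = value       # vary only the protected attribute
        if predict(twin) != original:
            return True
    return False

# Toy biased "model": its decision depends on the protected attribute (index 2).
biased = lambda x: int(x[0] + x[1] > 1 and x[2] == 0)
# Toy fair "model": ignores the protected attribute entirely.
fair = lambda x: int(x[0] > 0)

print(is_idi(biased, [1, 1, 0], 2, [0, 1]))  # True: flipping x[2] flips the decision
print(is_idi(fair,   [1, 1, 0], 2, [0, 1]))  # False: decision is unaffected
```

Seed-selection approaches such as the one proposed here matter because search-based IDI generators start from such candidate inputs; diverse seeds that are already (or nearly) discriminatory give the search more to work with.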




