NeuronFair: Interpretable White-Box Fairness Testing through Biased Neuron Identification

by Haibin Zheng, et al.
HUAWEI Technologies Co., Ltd.
National University of Defense Technology
Zhejiang University
Zhejiang University of Technology

Deep neural networks (DNNs) have demonstrated superior performance in various domains. However, there is growing social concern about whether DNNs can produce reliable and fair decisions, especially when they are applied to sensitive domains involving the allocation of valuable resources, such as education, loans, and employment. It is crucial to conduct fairness testing before DNNs are deployed to such sensitive domains, i.e., to generate as many instances as possible that uncover fairness violations. However, existing testing methods are still limited in three aspects: interpretability, performance, and generalizability. To overcome these challenges, we propose NeuronFair, a new DNN fairness testing framework that differs from previous work in several key aspects: (1) interpretable - it quantitatively interprets DNNs' fairness violations in terms of the biased neurons behind the biased decisions; (2) effective - it uses the interpretation results to guide the generation of more diverse instances in less time; (3) generic - it can handle both structured and unstructured data. Extensive evaluations across 7 datasets and the corresponding DNNs demonstrate NeuronFair's superior performance. For instance, on structured datasets, it generates many more instances (×5.84) and saves more time (with an average speedup of 534.56%). In addition, the instances generated by NeuronFair can also be leveraged to improve the fairness of biased DNNs, which helps build more fair and trustworthy deep learning systems.
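The fairness violations the abstract refers to are individual discriminatory instances: inputs whose prediction changes when only a sensitive attribute is altered. The following is a minimal sketch of that check, assuming a toy linear classifier in place of the DNN under test and a hypothetical binary sensitive attribute at index `SENS_IDX`; NeuronFair's actual biased-neuron-guided search is not shown here.

```python
import numpy as np

# Hypothetical position of the binary sensitive attribute (e.g., gender).
SENS_IDX = 0

def predict(x, w, b):
    """Toy binary classifier standing in for the DNN under test."""
    return int(x @ w + b > 0)

def is_discriminatory(x, w, b):
    """An instance is individually discriminatory if flipping only the
    sensitive attribute flips the model's decision."""
    x_flipped = x.copy()
    x_flipped[SENS_IDX] = 1 - x_flipped[SENS_IDX]
    return predict(x, w, b) != predict(x_flipped, w, b)

# A weight vector that (unfairly) depends on the sensitive attribute.
w = np.array([2.0, 0.5, -0.3])
b = -1.0
x = np.array([0.0, 1.0, 1.0])
print(is_discriminatory(x, w, b))  # prints True: the decision flips with the attribute
```

A testing framework searches the input space for inputs where `is_discriminatory` holds; the found instances can then be used for retraining to mitigate the bias.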




