Information-Theoretic Testing and Debugging of Fairness Defects in Deep Neural Networks

04/09/2023
by   Verya Monjezi, et al.

Deep feedforward neural networks (DNNs) are increasingly deployed in socioeconomically critical decision-support software systems. DNNs are exceptionally good at finding minimal, sufficient statistical patterns within their training data. Consequently, DNNs may learn to encode decisions that amplify existing biases or introduce new ones, disadvantaging protected individuals or groups and potentially violating legal protections. While existing search-based software testing approaches have been effective at discovering fairness defects, they do not supplement these defects with debugging aids, such as severity scores and causal explanations, that are crucial to help developers triage defects and decide on the next course of action. Can we measure the severity of fairness defects in DNNs? Are these defects symptomatic of improper training, or do they merely reflect biases present in the training data? To answer such questions, we present DICE: an information-theoretic testing and debugging framework to discover and localize fairness defects in DNNs. The key goal of DICE is to assist software developers in triaging fairness defects by ordering them by severity. Toward this goal, we quantify fairness in terms of the protected information (in bits) used in decision making. A quantitative view of fairness defects not only helps in ordering these defects; our empirical evaluation shows that it also improves search efficiency because the resulting search space is smoother. Guided by this quantitative fairness measure, we present a causal debugging framework to localize inadequately trained layers and neurons responsible for fairness defects. Our experiments over ten DNNs, developed for socially critical tasks, show that DICE efficiently characterizes the amount of discrimination, effectively generates discriminatory instances, and localizes layers and neurons with significant biases.
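The abstract's quantitative view, measuring the protected information (in bits) that flows into decisions, can be illustrated with a simple plug-in mutual-information estimator over a model's observed decisions. This is a minimal sketch under stated assumptions: the function name and the empirical estimator are illustrative choices, not DICE's actual implementation.

```python
import math
from collections import Counter

def mutual_information_bits(protected, decisions):
    """Plug-in estimate of I(A; Y-hat) in bits between a protected
    attribute A and model decisions Y-hat, from paired observations.
    (Illustrative sketch; not the estimator used by DICE itself.)"""
    n = len(protected)
    joint = Counter(zip(protected, decisions))   # empirical joint counts
    pa = Counter(protected)                       # marginal counts of A
    py = Counter(decisions)                       # marginal counts of Y-hat
    mi = 0.0
    for (a, y), c in joint.items():
        p_ay = c / n
        # p_ay * log2( p_ay / (p_a * p_y) ), with counts cancelled into one ratio
        mi += p_ay * math.log2(p_ay * n * n / (pa[a] * py[y]))
    return mi

# Decisions that perfectly track the protected attribute leak 1 bit...
print(mutual_information_bits([0, 0, 1, 1], [0, 0, 1, 1]))  # 1.0
# ...while decisions independent of it leak 0 bits.
print(mutual_information_bits([0, 0, 1, 1], [0, 1, 0, 1]))  # 0.0
```

Under this view, a defect's severity is the number of bits leaked: 0 bits means the decision is statistically independent of the protected attribute, while log2(k) bits (for a k-valued attribute) means the decision fully reveals it, which is what makes defects comparable and orderable.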

