Simon Says: Evaluating and Mitigating Bias in Pruned Neural Networks with Knowledge Distillation

06/15/2021
by Cody Blakeney, et al.

In recent years, the ubiquitous deployment of AI has raised serious concerns about algorithmic bias, discrimination, and fairness. Compared to traditional forms of bias or discrimination caused by humans, algorithmic bias generated by AI is more abstract and unintuitive, and therefore more difficult to explain and mitigate. A clear gap exists in the current literature on evaluating and mitigating bias in pruned neural networks. In this work, we tackle the challenging issues of evaluating, mitigating, and explaining the bias induced by pruning. Our paper makes three contributions. First, we propose two simple yet effective metrics, Combined Error Variance (CEV) and Symmetric Distance Error (SDE), to quantitatively evaluate how well a pruned model avoids induced bias. Second, we demonstrate that knowledge distillation can mitigate induced bias in pruned neural networks, even with unbalanced datasets. Third, we show that model similarity correlates strongly with pruning-induced bias, which provides a powerful way to explain why bias occurs in pruned neural networks. Our code is available at https://github.com/codestar12/pruning-distilation-bias
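For context, the mitigation strategy evaluated here builds on standard knowledge distillation, in which the pruned (student) network is trained to match the softened output distribution of the original unpruned (teacher) network in addition to the ground-truth labels. The sketch below shows the usual Hinton-style distillation loss in PyTorch; the temperature, loss weighting, and toy inputs are illustrative assumptions, not the exact configuration used in the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.9):
    """Hinton-style knowledge distillation loss: a weighted sum of
    (1) KL divergence between temperature-softened teacher and student
    distributions and (2) ordinary cross-entropy on the hard labels.
    The KL term is scaled by T^2 so its gradient magnitude is
    comparable to the hard-label term."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=1)
    log_student = F.log_softmax(student_logits / temperature, dim=1)
    kd_term = F.kl_div(log_student, soft_targets,
                       reduction="batchmean") * temperature ** 2
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term

if __name__ == "__main__":
    # Toy example: batch of 8 samples, 10 classes.
    student_logits = torch.randn(8, 10)
    teacher_logits = torch.randn(8, 10)
    labels = torch.randint(0, 10, (8,))
    print(distillation_loss(student_logits, teacher_logits, labels))
```

In the pruning setting, the teacher is typically the dense pre-trained model and the student is its pruned copy, so the distillation signal pulls the pruned model's class-wise behavior back toward that of the original model.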

Related research

Measure Twice, Cut Once: Quantifying Bias and Fairness in Deep Neural Networks (10/08/2021)
Algorithmic bias is of increasing concern, both to the research communit...

Revisiting Intermediate Layer Distillation for Compressing Language Models: An Overfitting Perspective (02/03/2023)
Knowledge distillation (KD) is a highly promising method for mitigating ...

Backdoor for Debias: Mitigating Model Bias with Backdoor Attack-based Artificial Bias (03/01/2023)
With the swift advancement of deep learning, state-of-the-art algorithms...

Revisiting Distillation and Incremental Classifier Learning (07/08/2018)
One of the key differences between the learning mechanism of humans and ...

KD-Lib: A PyTorch library for Knowledge Distillation, Pruning and Quantization (11/30/2020)
In recent years, the growing size of neural networks has led to a vast a...

Mitigating Bias in Algorithmic Employment Screening: Evaluating Claims and Practices (06/21/2019)
There has been rapidly growing interest in the use of algorithms for emp...

Mitigating Algorithmic Bias with Limited Annotations (07/20/2022)
Existing work on fairness modeling commonly assumes that sensitive attri...
