Characterising Bias in Compressed Models

10/06/2020
by Sara Hooker, et al.

The popularity and widespread use of pruning and quantization are driven by the severe resource constraints of deploying deep neural networks to environments with strict latency, memory, and energy requirements. These techniques achieve high levels of compression with negligible impact on top-line metrics (top-1 and top-5 accuracy). However, overall accuracy hides disproportionately high errors on a small subset of examples; we call this subset Compression Identified Exemplars (CIE). We further establish that compression amplifies existing algorithmic bias on CIE examples: pruning disproportionately degrades performance on underrepresented features, which often coincide with considerations of fairness. Given that CIEs form a relatively small subset yet account for a disproportionate share of the model's errors, we propose using them as a human-in-the-loop auditing tool to surface a tractable subset of the dataset for further inspection or annotation by a domain expert. We provide qualitative and quantitative evidence that CIEs surface the most challenging examples in the data distribution for human-in-the-loop auditing.
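The abstract does not spell out how CIEs are surfaced, so the sketch below is a minimal illustration under one plausible reading: a CIE is an example on which the majority-vote prediction of a population of compressed models diverges from that of a population of uncompressed baselines. The function name `find_cies` and the population sizes are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def find_cies(baseline_preds: np.ndarray, compressed_preds: np.ndarray) -> np.ndarray:
    """Flag Compression Identified Exemplars (CIEs).

    baseline_preds:   integer array of shape (n_models, n_examples) holding
                      predicted labels from independently trained
                      uncompressed models.
    compressed_preds: same shape, from the matching pruned/quantized models.

    Returns a boolean mask over examples that is True where the modal
    (majority-vote) prediction of the two model populations disagrees.
    """
    n_classes = int(max(baseline_preds.max(), compressed_preds.max())) + 1

    def modal_label(preds: np.ndarray) -> np.ndarray:
        # Count votes per class for every example, then take the majority.
        counts = np.apply_along_axis(
            lambda col: np.bincount(col, minlength=n_classes), 0, preds)
        return counts.argmax(axis=0)  # shape: (n_examples,)

    return modal_label(baseline_preds) != modal_label(compressed_preds)


# Toy usage: 30 baseline and 30 pruned models scored on 10,000 examples.
rng = np.random.default_rng(0)
baseline = rng.integers(0, 10, size=(30, 10_000))
pruned = rng.integers(0, 10, size=(30, 10_000))
cie_mask = find_cies(baseline, pruned)
print(f"{cie_mask.mean():.1%} of examples flagged as CIEs")
```

Majority voting across independently trained models averages out run-to-run variance, so disagreement in the resulting mask is more plausibly attributable to compression itself than to random seed noise.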


Related research

08/26/2020: Estimating Example Difficulty using Variance of Gradients
In machine learning, a question of great interest is understanding what ...

04/25/2023: Bias in Pruned Vision Models: In-Depth Analysis and Countermeasures
Pruning - that is, setting a significant subset of the parameters of a n...

08/24/2022: Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning
We consider the problem of model compression for deep neural networks (D...

07/22/2022: FairGRAPE: Fairness-aware GRAdient Pruning mEthod for Face Attribute Classification
Existing pruning techniques preserve deep neural networks' overall abili...

11/01/2017: Efficient Inferencing of Compressed Deep Neural Networks
Large number of weights in deep neural networks makes the models difficu...

01/18/2021: Deep Compression of Neural Networks for Fault Detection on Tennessee Eastman Chemical Processes
Artificial neural network has achieved the state-of-art performance in f...
