FairPrune: Achieving Fairness Through Pruning for Dermatological Disease Diagnosis

by Yawen Wu, et al.

Many works have shown that deep learning-based medical image classification models can exhibit bias toward certain demographic attributes such as race, gender, and age. Existing bias mitigation methods primarily focus on learning debiased models, which does not necessarily guarantee that all sensitive information is removed and usually comes with considerable accuracy degradation for both privileged and unprivileged groups. To tackle this issue, we propose FairPrune, a method that achieves fairness by pruning. Conventionally, pruning is used to reduce model size for efficient inference. However, we show that pruning can also be a powerful tool for achieving fairness. Our observation is that during pruning, each parameter in the model has a different importance for each group's accuracy. By pruning parameters based on this importance difference, we can reduce the accuracy gap between the privileged and unprivileged groups to improve fairness without a large accuracy drop. To this end, we use the second derivative of the loss with respect to the parameters of a pre-trained model to quantify each parameter's importance to the model's accuracy on each group. Experiments on two skin lesion diagnosis datasets over multiple sensitive attributes demonstrate that our method can greatly improve fairness while keeping the average accuracy of both groups as high as possible.
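The pruning criterion described above can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation: it assumes diagonal second-derivative (Hessian) estimates of each group's loss are already available, uses the classic OBD-style saliency 0.5 * h * theta^2 per group, and combines them with a hypothetical trade-off knob `beta` (the function names, `beta`, and `prune_ratio` are all illustrative assumptions, not values from the paper).

```python
import numpy as np

def fairness_saliency(theta, h_priv, h_unpriv, beta=0.5):
    """Per-parameter importance combining both groups' second derivatives.

    OBD-style saliency for one group is 0.5 * h * theta**2, where h is the
    diagonal Hessian of that group's loss w.r.t. the parameters. The combined
    score down-weights parameters that matter mostly to the privileged group.
    (`beta` is a hypothetical trade-off knob, not a value from the paper.)
    """
    s_priv = 0.5 * h_priv * theta ** 2
    s_unpriv = 0.5 * h_unpriv * theta ** 2
    return s_unpriv - beta * s_priv

def prune_mask(theta, h_priv, h_unpriv, prune_ratio=0.3, beta=0.5):
    """Return a 0/1 mask that zeroes out the `prune_ratio` fraction of
    parameters with the lowest combined saliency, i.e. those least useful
    to the unprivileged group relative to the privileged group."""
    score = fairness_saliency(theta, h_priv, h_unpriv, beta)
    k = int(prune_ratio * theta.size)
    cutoff = np.partition(score, k)[k]  # (k+1)-th smallest score
    return (score >= cutoff).astype(theta.dtype)
```

A parameter whose saliency is large for the privileged group but near zero for the unprivileged group gets a strongly negative score and is pruned first, which is the mechanism by which pruning narrows the accuracy gap.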


CIRCLe: Color Invariant Representation Learning for Unbiased Classification of Skin Lesions

While deep learning based approaches have demonstrated expert-level perf...

FairGRAPE: Fairness-aware GRAdient Pruning mEthod for Face Attribute Classification

Existing pruning techniques preserve deep neural networks' overall abili...

Mitigating Calibration Bias Without Fixed Attribute Grouping for Improved Fairness in Medical Imaging Analysis

Trustworthy deployment of deep learning medical imaging models into real...

Prune Responsibly

Irrespective of the specific definition of fairness in a machine learnin...

Taking Advantage of Multitask Learning for Fair Classification

A central goal of algorithmic fairness is to reduce bias in automated de...

A Fair Loss Function for Network Pruning

Model pruning can enable the deployment of neural networks in environmen...

Properties Of Winning Tickets On Skin Lesion Classification

Skin cancer affects a large population every year – automated skin cance...
