
Prune Responsibly

by Michela Paganini, et al.

Irrespective of the specific definition of fairness in a machine learning application, pruning the underlying model affects it. We investigate and document the emergence and exacerbation of undesirable per-class performance imbalances, across tasks and architectures, for almost one million categories considered across over 100K image classification models that undergo a pruning process. We demonstrate the need for transparent reporting, inclusive of bias, fairness, and inclusion metrics, in real-life engineering decision-making around neural network pruning. In response to calls for population-aware quantitative evaluation of AI models, we present neural network pruning as a tangible application domain in which the ways that accuracy-efficiency trade-offs disproportionately affect underrepresented or outlier groups have historically been overlooked. We provide a simple, Pareto-based framework for inserting fairness considerations into value-based operating-point selection and for re-evaluating choices of pruning technique.
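The Pareto-based selection idea can be made concrete with a small sketch. The abstract does not specify the exact objectives the authors use, so the axes below (sparsity, overall accuracy, and worst-class accuracy as a fairness proxy) and all names are illustrative assumptions: an operating point survives only if no other point is at least as good on every objective and strictly better on one.

```python
# Illustrative sketch, not the paper's implementation: Pareto filtering of
# pruning operating points with a fairness objective alongside accuracy.
from dataclasses import dataclass
from typing import List


@dataclass
class OperatingPoint:
    sparsity: float          # fraction of weights pruned (efficiency objective)
    overall_acc: float       # aggregate accuracy
    worst_class_acc: float   # fairness proxy: accuracy of the worst-off class


def dominates(a: OperatingPoint, b: OperatingPoint) -> bool:
    """True if `a` is no worse than `b` on every objective and strictly better on at least one."""
    no_worse = (a.sparsity >= b.sparsity
                and a.overall_acc >= b.overall_acc
                and a.worst_class_acc >= b.worst_class_acc)
    strictly_better = (a.sparsity > b.sparsity
                       or a.overall_acc > b.overall_acc
                       or a.worst_class_acc > b.worst_class_acc)
    return no_worse and strictly_better


def pareto_front(points: List[OperatingPoint]) -> List[OperatingPoint]:
    """Keep only points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

Including worst-class accuracy as its own axis means a heavily pruned model with high aggregate accuracy can still be filtered out when another candidate serves its worst-off class better, which is the kind of fairness-aware trade-off the abstract argues should enter operating-point selection.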


Related research:

- Accuracy and Fairness Trade-offs in Machine Learning: A Stochastic Multi-Objective Approach — In the application of machine learning to real-life decision-making syst...
- Fairly Accurate: Learning Optimal Accuracy vs. Fairness Tradeoffs for Hate Speech Detection — Recent work has emphasized the importance of balancing competing objecti...
- FairPrune: Achieving Fairness Through Pruning for Dermatological Disease Diagnosis — Many works have shown that deep learning-based medical image classificat...
- Ethical and Fairness Implications of Model Multiplicity — While predictive models are a purely technological feat, they may operat...
- (Un)fairness in Post-operative Complication Prediction Models — With the current ongoing debate about fairness, explainability and trans...