Fairness Evaluation in Text Classification: Machine Learning Practitioner Perspectives of Individual and Group Fairness

03/01/2023
by Zahra Ashktorab, et al.

Mitigating algorithmic bias is a critical task in the development and deployment of machine learning models. While several toolkits exist to aid machine learning practitioners in addressing fairness issues, little is known about the strategies practitioners employ to evaluate model fairness or the factors that influence their assessments, particularly in text classification. Two common approaches to evaluating a model's fairness are group fairness and individual fairness. We conducted a study with machine learning practitioners (n=24) to understand the strategies they use to evaluate models. We find that the metrics presented to practitioners (group vs. individual fairness) affect which models they consider fair. Participants focused on the risks of underprediction and overprediction, as well as on model sensitivity to identity-token manipulations. We also uncover fairness assessment strategies that draw on personal experience and on how users form groups of identity tokens to test model fairness. Based on these findings, we provide recommendations for interactive tools for evaluating fairness in text classification.
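To make the contrast concrete, here is a minimal sketch of the two evaluation styles the abstract names: a group-fairness check that compares positive-prediction rates across identity groups, and an individual-fairness check that swaps identity tokens and tests whether the prediction stays stable. The toy classifier, group names, and example texts are hypothetical illustrations, not the models or data from the study.

```python
def classify(text: str) -> int:
    """Toy 'toxicity' classifier: flags any text containing 'stupid'.
    Hypothetical stand-in for a real text-classification model."""
    return int("stupid" in text.lower())


def group_positive_rates(texts_by_group: dict) -> dict:
    """Group fairness: positive-prediction rate per identity group
    (a demographic-parity-style comparison)."""
    return {
        group: sum(classify(t) for t in texts) / len(texts)
        for group, texts in texts_by_group.items()
    }


def is_token_stable(text: str, token_a: str, token_b: str) -> bool:
    """Individual fairness: does swapping one identity token for
    another leave the model's prediction unchanged?"""
    return classify(text) == classify(text.replace(token_a, token_b))


# Hypothetical evaluation data, split by identity group.
samples = {
    "group_a": ["you are stupid", "have a nice day"],
    "group_b": ["hello there", "good morning"],
}
rates = group_positive_rates(samples)   # {'group_a': 0.5, 'group_b': 0.0}
stable = is_token_stable("women are great", "women", "men")  # True
```

A large gap between per-group rates flags a group-fairness concern, while any `False` from the token-swap check flags sensitivity to identity-token manipulations of the kind participants probed.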

Related research:

- 05/22/2023: On Bias and Fairness in NLP: How to have a fairer text classification?
  In this paper, we provide a holistic analysis of the different sources o...

- 08/03/2021: Your fairness may vary: Group fairness of pretrained language models in toxic text classification
  We study the performance-fairness trade-off in more than a dozen fine-tu...

- 03/11/2020: Fairness by Explicability and Adversarial SHAP Learning
  The ability to understand and trust the fairness of model predictions, p...

- 09/27/2018: Counterfactual Fairness in Text Classification through Robustness
  In this paper, we study counterfactual fairness in text classification, ...

- 09/07/2023: TIDE: Textual Identity Detection for Evaluating and Augmenting Classification and Language Models
  Machine learning models can perpetuate unintended biases from unfair and...

- 04/18/2022: Trinary Tools for Continuously Valued Binary Classifiers
  Classification methods for binary (yes/no) tasks often produce a continu...

- 07/10/2020: Evaluating Fairness Using Permutation Tests
  Machine learning models are central to people's lives and impact society...
