Towards Procedural Fairness: Uncovering Biases in How a Toxic Language Classifier Uses Sentiment Information

10/19/2022
by Isar Nejadgholi, et al.

Previous work on the fairness of toxic language classifiers compares model outputs for inputs containing different identity terms but does not consider the impact of other important concepts present in the context. Here, besides identity terms, we take into account high-level latent features learned by the classifier and investigate the interaction between these features and identity terms. For a multi-class toxic language classifier, we leverage a concept-based explanation framework to calculate the sensitivity of the model to the concept of sentiment, which has previously been used as a salient feature for toxic language detection. Our results show that, although the classifier has learned the sentiment information as expected for some classes, this information is outweighed by the influence of identity terms as input features. This work is a step towards evaluating procedural fairness, where unfair processes lead to unfair outcomes. These findings can guide debiasing techniques to ensure that important concepts besides identity terms are well represented in training datasets.
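To make the notion of concept sensitivity concrete, below is a minimal sketch of a TCAV-style computation, a common concept-based explanation approach consistent with the framework described in the abstract. The data, the helper names, and the use of negative-sentiment texts as concept examples are illustrative assumptions, not the paper's actual implementation: a linear probe separates concept activations from random activations to obtain a concept activation vector (CAV), and the sensitivity score is the fraction of inputs whose class logit increases along that concept direction.

```python
# Hypothetical TCAV-style concept-sensitivity sketch (not the authors' code).
import numpy as np
from sklearn.linear_model import LogisticRegression


def concept_activation_vector(concept_acts, random_acts):
    """Train a linear probe separating concept examples from random ones;
    the CAV is the (unit-normalized) normal to its decision boundary."""
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    probe = LogisticRegression(max_iter=1000).fit(X, y)
    cav = probe.coef_[0]
    return cav / np.linalg.norm(cav)


def tcav_score(class_gradients, cav):
    """Fraction of inputs whose class logit increases when hidden activations
    move in the concept direction (positive directional derivative)."""
    directional_derivs = class_gradients @ cav
    return float((directional_derivs > 0).mean())


# Example with synthetic 768-dimensional activations and gradients.
rng = np.random.default_rng(0)
sentiment_acts = rng.normal(0.5, 1.0, size=(100, 768))  # e.g., negative-sentiment texts
random_acts = rng.normal(0.0, 1.0, size=(100, 768))      # random counterexamples
grads = rng.normal(size=(200, 768))                       # d(class logit) / d(activations)

cav = concept_activation_vector(sentiment_acts, random_acts)
print(f"Sensitivity of the class to the sentiment concept: {tcav_score(grads, cav):.2f}")
```

In practice, the activations and gradients would be extracted from a hidden layer of the toxic language classifier rather than sampled randomly; a score far from 0.5 indicates that the class prediction is systematically sensitive to the sentiment concept.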
