Challenges in Automated Debiasing for Toxic Language Detection

01/29/2021
by   Xuhui Zhou, et al.

Biased associations have been a challenge in the development of classifiers for detecting toxic language, hindering both fairness and accuracy. As potential solutions, we investigate recently introduced debiasing methods for text classification datasets and models, as applied to toxic language detection. Our focus is on lexical (e.g., swear words, slurs, identity mentions) and dialectal markers (specifically African American English). Our comprehensive experiments establish that existing methods are limited in their ability to prevent biased behavior in current toxicity detectors. We then propose an automatic, dialect-aware data correction method, as a proof-of-concept. Despite the use of synthetic labels, this method reduces dialectal associations with toxicity. Overall, our findings show that debiasing a model trained on biased toxic language data is not as effective as simply relabeling the data to remove existing biases.
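The abstract's central claim is that correcting labels in the data reduces dialectal bias more effectively than debiasing a model trained on the biased data. As a minimal, hypothetical illustration of the two quantities involved, the Python sketch below measures the association between estimated dialect (AAE) and toxicity labels via pointwise mutual information, and then performs a simple dialect-aware relabeling pass that produces synthetic labels. The helpers estimate_aae_prob and classifier_prob_toxic are assumed stand-ins for an off-the-shelf dialect estimator and a reference toxicity classifier; this is not the paper's actual pipeline.

# Hypothetical sketch: quantify dialect-toxicity association and relabel
# examples whose "toxic" label appears driven by dialect alone.
# estimate_aae_prob and classifier_prob_toxic are user-supplied stand-ins,
# NOT components of the paper's codebase.

import math
from typing import Callable, List, Tuple

def pmi_dialect_toxicity(data: List[Tuple[str, int]],
                         estimate_aae_prob: Callable[[str], float],
                         aae_threshold: float = 0.5) -> float:
    """PMI(dialect=AAE, label=toxic); values > 0 mean toxic labels
    co-occur with AAE-like text more often than chance."""
    n = len(data)
    aae_flags = [estimate_aae_prob(text) >= aae_threshold for text, _ in data]
    toxic_flags = [label == 1 for _, label in data]
    n_aae, n_toxic = sum(aae_flags), sum(toxic_flags)
    n_both = sum(a and t for a, t in zip(aae_flags, toxic_flags))
    if 0 in (n_aae, n_toxic, n_both):
        return 0.0
    return math.log2((n_both / n) / ((n_aae / n) * (n_toxic / n)))

def relabel_dialect_aware(data: List[Tuple[str, int]],
                          estimate_aae_prob: Callable[[str], float],
                          classifier_prob_toxic: Callable[[str], float],
                          aae_threshold: float = 0.5,
                          toxic_threshold: float = 0.5) -> List[Tuple[str, int]]:
    """Flip 'toxic' labels on AAE-like examples that the reference
    classifier scores as clearly non-toxic (synthetic labels)."""
    corrected = []
    for text, label in data:
        if (label == 1
                and estimate_aae_prob(text) >= aae_threshold
                and classifier_prob_toxic(text) < toxic_threshold):
            label = 0  # synthetic non-toxic label
        corrected.append((text, label))
    return corrected

Rerunning the PMI measurement on the relabeled data indicates whether the dialect-toxicity association has dropped, which is the kind of reduction the abstract reports for its dialect-aware correction method.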

Related Research

Power of Explanations: Towards automatic debiasing in hate speech detection (09/07/2022)
Hate speech detection is a common downstream application of natural lang...

Measuring Geographic Performance Disparities of Offensive Language Classifiers (09/15/2022)
Text classifiers are applied at scale in the form of one-size-fits-all s...

OSCaR: Orthogonal Subspace Correction and Rectification of Biases in Word Embeddings (06/30/2020)
Language representations are known to carry stereotypical biases and, as...

Towards Equal Gender Representation in the Annotations of Toxic Language Detection (06/04/2021)
Classifiers tend to propagate biases present in the data on which they a...

The Geometry of Distributed Representations for Better Alignment, Attenuated Bias, and Improved Interpretability (11/25/2020)
High-dimensional representations for words, text, images, knowledge grap...

Mitigating Biases in Toxic Language Detection through Invariant Rationalization (06/14/2021)
Automatic detection of toxic language plays an essential role in protect...

Demographics Should Not Be the Reason of Toxicity: Mitigating Discrimination in Text Classifications with Instance Weighting (04/29/2020)
With the recent proliferation of the use of text classifications, resear...

Code Repositories

Toxic_Debias

code for our EACL 2021 paper: "Challenges in Automated Debiasing for Toxic Language Detection" by Xuhui Zhou, Maarten Sap, Swabha Swayamdipta, Noah A. Smith and Yejin Choi

