The Authors Matter: Understanding and Mitigating Implicit Bias in Deep Text Classification

by Haochen Liu, et al.

Deep text classification models trained on human data can be biased. In particular, they produce biased outcomes for texts that explicitly include identity terms of certain demographic groups. We refer to this type of bias as explicit bias, which has been extensively studied. However, deep text classification models can also produce biased outcomes for texts written by authors of certain demographic groups. We refer to such bias as implicit bias, of which we still have a rather limited understanding. In this paper, we first demonstrate that implicit bias exists in different text classification tasks for different demographic groups. Then, we build a learning-based interpretation method to deepen our knowledge of implicit bias. Specifically, we verify that classifiers learn to make predictions based on language features that correlate with the demographic attributes of the authors. Next, we propose a framework, Debiased-TC, that trains deep text classifiers to base their predictions on the right features and consequently mitigates implicit bias. We conduct extensive experiments on three real-world datasets. The results show that text classification models trained under the proposed framework significantly outperform traditional models in terms of fairness, and also slightly in terms of classification performance.
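The abstract does not specify how implicit bias is measured. As a minimal sketch, one common way to quantify biased outcomes across author demographic groups is the gap in false positive rates between groups (an equalized-odds-style metric); the function names and toy data below are hypothetical, not the paper's actual method:

```python
# Sketch: quantify implicit bias as the gap in false positive rates (FPR)
# between author demographic groups. The paper's exact metric is not given
# in the abstract; an FPR gap is a common fairness measure for classifiers.

def false_positive_rate(y_true, y_pred):
    """FPR = FP / (FP + TN), computed over negative-class examples."""
    negatives = [(t, p) for t, p in zip(y_true, y_pred) if t == 0]
    if not negatives:
        return 0.0
    false_positives = sum(1 for _, p in negatives if p == 1)
    return false_positives / len(negatives)

def implicit_bias_gap(y_true, y_pred, author_group):
    """Difference between the highest and lowest group-wise FPR."""
    rates = {
        g: false_positive_rate(
            [t for t, a in zip(y_true, author_group) if a == g],
            [p for p, a in zip(y_pred, author_group) if a == g],
        )
        for g in set(author_group)
    }
    return max(rates.values()) - min(rates.values())

# Toy example: all texts are truly non-toxic (label 0), but the classifier
# flags texts by group "B" authors far more often than those by group "A".
y_true = [0, 0, 0, 0, 0, 0, 0, 0]
y_pred = [0, 0, 1, 0, 1, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(implicit_bias_gap(y_true, y_pred, groups))  # → 0.5 (FPR 0.75 vs 0.25)
```

A gap of zero would mean the classifier's error behavior is independent of author demographics on these examples; the debiasing framework described above aims to shrink such gaps without hurting classification accuracy.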

