Detoxifying Language Models Risks Marginalizing Minority Voices

04/13/2021
by Albert Xu, et al.

Language models (LMs) must be both safe and equitable to be responsibly deployed in practice. With safety in mind, numerous detoxification techniques (e.g., Dathathri et al. 2020; Krause et al. 2020) have been proposed to mitigate toxic LM generations. In this work, we show that current detoxification techniques hurt equity: they decrease the utility of LMs on language used by marginalized groups (e.g., African-American English and minority identity mentions). In particular, we perform automatic and human evaluations of text generation quality when LMs are conditioned on inputs with different dialects and group identifiers. We find that detoxification makes LMs more brittle to distribution shift, especially on language used by marginalized groups. We identify that these failures stem from detoxification methods exploiting spurious correlations in toxicity datasets. Overall, our results highlight the tension between the controllability and distributional robustness of LMs.
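
The utility drop the abstract describes can be approximated with a simple probe: compare a language model's perplexity on text from different dialects, where higher perplexity on one dialect indicates the model serves that language variety less well. Below is a minimal sketch of such a probe, not the authors' evaluation code; it assumes the Hugging Face transformers library, uses an off-the-shelf GPT-2 checkpoint as a stand-in for a detoxified LM, and the paired example sentences are purely illustrative.

```python
# Minimal perplexity probe across dialects (illustrative, not the paper's setup).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model_name = "gpt2"  # stand-in; a detoxified checkpoint would load the same way
tokenizer = GPT2TokenizerFast.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)
model.eval()

def perplexity(text: str) -> float:
    """Per-token perplexity of `text` under the model."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels=input_ids makes the model return mean cross-entropy loss.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

# Hypothetical paired inputs; a real study would use dialect-annotated corpora
# rather than single hand-written sentences.
examples = {
    "White-aligned English": "He is always talking about his new car.",
    "African-American English": "He stay talkin bout his new whip.",
}
for dialect, text in examples.items():
    print(f"{dialect}: perplexity = {perplexity(text):.1f}")
```

Running the same probe before and after applying a detoxification method would show whether the perplexity gap between dialects widens, which is the kind of disparity the paper's automatic evaluations measure at scale.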
