Mitigating harm in language models with conditional-likelihood filtration

08/04/2021
by Helen Ngo, et al.

Language models trained on large-scale unfiltered datasets curated from the open web acquire systemic biases, prejudices, and harmful views from their training data. We present a methodology for programmatically identifying and removing harmful text from web-scale datasets. A pretrained language model is used to compute the log-likelihood of researcher-written trigger phrases conditioned on each document; documents under which the trigger phrases become sufficiently likely are filtered from the dataset. We demonstrate that models trained on the filtered dataset exhibit a lower propensity to generate harmful text, with only a marginal decrease in performance on standard language modeling benchmarks relative to unfiltered baselines. We provide a partial explanation for this performance gap by surfacing examples of hate speech and other undesirable content found within standard language modeling benchmarks. Finally, we discuss how this method generalizes, and how trigger phrases reflecting specific values can be used by researchers to build language models more closely aligned with those values.

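The scoring step is straightforward to sketch. The following is a minimal, illustrative implementation assuming a Hugging Face causal language model (gpt2 stands in for the paper's pretrained model); the trigger phrase, truncation length, and threshold below are hypothetical placeholders, not values from the paper.

    # Minimal sketch of conditional-likelihood filtration.
    # Assumptions: Hugging Face transformers with gpt2 as a stand-in model;
    # TRIGGER and THRESHOLD are illustrative, not the paper's actual values.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def trigger_log_likelihood(document: str, trigger: str) -> float:
        """Log-likelihood of `trigger` conditioned on `document` as context."""
        # Truncate the document to leave room for the trigger tokens.
        doc_ids = tokenizer(document, truncation=True, max_length=900,
                            return_tensors="pt").input_ids
        trig_ids = tokenizer(trigger, return_tensors="pt").input_ids
        input_ids = torch.cat([doc_ids, trig_ids], dim=1)
        with torch.no_grad():
            logits = model(input_ids).logits
        log_probs = torch.log_softmax(logits, dim=-1)
        n = trig_ids.shape[1]
        # Logits at position t predict the token at position t + 1, so the
        # n trigger tokens are scored by the slice [-n-1 : -1].
        pred = log_probs[0, -n - 1:-1, :]
        token_ll = pred.gather(1, trig_ids[0].unsqueeze(1)).squeeze(1)
        return token_ll.sum().item()

    TRIGGER = "I hate"    # hypothetical stand-in for a researcher-written trigger phrase
    THRESHOLD = -20.0     # illustrative cutoff; would be tuned empirically

    def keep_document(document: str) -> bool:
        # Keep documents under which the trigger phrase remains unlikely.
        return trigger_log_likelihood(document, TRIGGER) < THRESHOLD

In practice one would score each document against a set of trigger phrases and batch the forward passes for throughput; the single-phrase, single-document version above is kept minimal for clarity.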
