Mitigating Toxic Degeneration with Empathetic Data: Exploring the Relationship Between Toxicity and Empathy

05/15/2022
by   Allison Lahnala, et al.

Large pre-trained neural language models underpin the effectiveness of many NLP tasks, yet they are still prone to generating toxic language, which hinders the safety of their use. Using empathetic data, we improve over recent work on controllable text generation that aims to reduce the toxicity of generated text. We find that we can dramatically reduce the size of the fine-tuning data to 7.5-30k samples while at the same time making significant improvements of up to 3.4 (relative) over the state-of-the-art toxicity mitigation from the original work on 2.3m samples, by strategically sampling data based on empathy scores. We observe that the degree of improvement depends on specific communication components of empathy. In particular, the cognitive components of empathy significantly outperform the original dataset in almost all experiments, while emotional empathy is associated with smaller improvements and even underperforms random samples of the original data. This insight has notable implications for NLP work concerning empathy, as until recently the research and resources built for it have considered empathy exclusively as an emotional concept.
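To make the sampling idea concrete, below is a minimal sketch of selecting a small fine-tuning set by empathy score and fine-tuning a causal language model on it with Hugging Face Transformers. The input file empathy_scores.tsv, its column names, the subset size, and the use of GPT-2 are illustrative assumptions, not details taken from the paper.

```python
"""
Sketch: pick the top-N texts by (cognitive) empathy score, then fine-tune GPT-2 on them.
Assumes a hypothetical empathy_scores.tsv with columns `text` and `cognitive_empathy`,
where scores come from any empathy classifier of your choice.
"""
import csv

import torch
from torch.utils.data import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments

N_SAMPLES = 7500  # small, strategically sampled subset (the paper reports 7.5-30k samples)

# 1) Rank candidate texts by empathy score and keep the top N.
with open("empathy_scores.tsv") as f:
    rows = list(csv.DictReader(f, delimiter="\t"))
rows.sort(key=lambda r: float(r["cognitive_empathy"]), reverse=True)
texts = [r["text"] for r in rows[:N_SAMPLES]]

# 2) Tokenize the selected texts for causal-LM fine-tuning.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
enc = tokenizer(texts, truncation=True, padding="max_length", max_length=128, return_tensors="pt")

class LMDataset(Dataset):
    def __init__(self, enc):
        self.enc = enc
    def __len__(self):
        return self.enc["input_ids"].size(0)
    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        labels = item["input_ids"].clone()
        labels[item["attention_mask"] == 0] = -100  # ignore padding in the loss
        item["labels"] = labels
        return item

# 3) Fine-tune GPT-2 on the empathetic subset.
model = AutoModelForCausalLM.from_pretrained("gpt2")
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-empathy", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=LMDataset(enc),
)
trainer.train()
```

The same selection step can be repeated with different empathy components (e.g., cognitive vs. emotional scores) to compare how each subset affects the toxicity of generations from the fine-tuned model.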


Related research

03/24/2022
Mix and Match: Learning-free Controllable Text Generation using Energy Language Models
Recent work on controlled text generation has either required attribute-...

07/13/2020
Do You Have the Right Scissors? Tailoring Pre-trained Language Models via Monte-Carlo Methods
It has been a common approach to pre-train a language model on a large c...

05/06/2023
Controllable Mixed-Initiative Dialogue Generation through Prompting
Mixed-initiative dialogue tasks involve repeated exchanges of informatio...

07/01/2019
Patent Claim Generation by Fine-Tuning OpenAI GPT-2
In this work, we focus on fine-tuning an OpenAI GPT-2 pre-trained model ...

12/12/2022
Improving Generalization of Pre-trained Language Models via Stochastic Weight Averaging
Knowledge Distillation (KD) is a commonly used technique for improving t...

10/17/2022
Mitigating Covertly Unsafe Text within Natural Language Systems
An increasingly prevalent problem for intelligent technologies is text s...
