The CRINGE Loss: Learning what language not to model

11/10/2022
by Leonard Adolphs, et al.

Standard language model training employs gold human documents or human-human interaction data, and treats all training data as positive examples. Growing evidence shows that even with very large amounts of positive training data, issues remain that can be alleviated with relatively small amounts of negative data – examples of what the model should not do. In this work, we propose a novel procedure to train with such data, called the CRINGE loss (ContRastive Iterative Negative GEneration). We show the effectiveness of this approach across three different experiments on the tasks of safe generation, contradiction avoidance, and open-domain dialogue. Our models outperform multiple strong baselines and are conceptually simple, easy to train, and easy to implement.
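To make the idea of training against negative examples concrete, the sketch below shows one way a token-level contrastive penalty on a negative sequence can be written in PyTorch. It is an illustrative sketch only, not the paper's exact formulation: the function name `cringe_style_loss`, the top-k sampling detail, and the hyperparameter `k` are assumptions made for this example.

```python
import torch
import torch.nn.functional as F

def cringe_style_loss(logits: torch.Tensor, neg_tokens: torch.Tensor, k: int = 5) -> torch.Tensor:
    """Illustrative contrastive loss on a negative sequence (sketch, not the paper's exact loss).

    logits:     (T, V) model logits at each position of the negative sequence
    neg_tokens: (T,)   token ids of the negative example at each position
    For every negative token, a "positive" alternative is sampled from the model's
    own top-k predictions, and the two are contrasted so probability mass is pushed
    away from the negative token.
    """
    T, _ = logits.shape
    idx = torch.arange(T, device=logits.device)
    neg_scores = logits[idx, neg_tokens]                 # score assigned to each negative token

    # Sample a plausible alternative token from the model's top-k at each position.
    topk_scores, _ = logits.topk(k, dim=-1)              # (T, k)
    probs = F.softmax(topk_scores, dim=-1)
    sampled = torch.multinomial(probs, num_samples=1).squeeze(-1)  # (T,)
    pos_scores = topk_scores[idx, sampled]

    # Binary contrast per position: prefer the sampled alternative over the negative token.
    pair = torch.stack([pos_scores, neg_scores], dim=-1)  # (T, 2)
    target = torch.zeros(T, dtype=torch.long, device=logits.device)
    return F.cross_entropy(pair, target)
```

In practice such a term would be combined with the standard cross-entropy loss on positive sequences, and, as the "Iterative" in the acronym suggests, applied over repeated rounds in which the model's own generations are labeled and fed back as new positive or negative data.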
