Understanding by Understanding Not: Modeling Negation in Language Models

05/07/2021
by Arian Hosseini, et al.

Negation is a core construction in natural language. Despite being very successful on many tasks, state-of-the-art pre-trained language models often handle negation incorrectly. To improve language models in this regard, we propose to augment the language modeling objective with an unlikelihood objective that is based on negated generic sentences from a raw text corpus. By training BERT with the resulting combined objective, we reduce the mean top-1 error rate to 4% on the negated LAMA dataset, and we also see some improvements on the negated NLI benchmarks.
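The combined objective can be illustrated with a minimal PyTorch-style sketch. This is not the authors' released code: the function name combined_loss, the alpha weight on the unlikelihood term, and the use of -100 as the ignore index for unmasked positions are assumptions made here for illustration.

    import torch
    import torch.nn.functional as F

    def combined_loss(logits_pos, targets_pos, logits_neg, targets_neg, alpha=1.0):
        # Likelihood term: the usual masked-LM cross-entropy on ordinary
        # (non-negated) masked sentences; -100 marks positions to ignore.
        ll_loss = F.cross_entropy(
            logits_pos.view(-1, logits_pos.size(-1)),
            targets_pos.view(-1),
            ignore_index=-100,
        )

        # Unlikelihood term: at masked positions of the negated sentence,
        # penalize -log(1 - p(original token)), i.e. push probability mass
        # away from the token that negation has made incorrect.
        probs_neg = F.softmax(logits_neg, dim=-1)
        mask = targets_neg != -100
        p_target = probs_neg.gather(
            -1, targets_neg.clamp(min=0).unsqueeze(-1)
        ).squeeze(-1)
        ul_loss = -torch.log(torch.clamp(1.0 - p_target[mask], min=1e-6)).mean()

        # alpha (assumed) balances the two terms.
        return ll_loss + alpha * ul_loss

The intuition is that the cross-entropy term preserves ordinary masked-language-model behaviour on raw text, while the unlikelihood term explicitly pushes probability mass away from the token that is no longer a correct completion once the sentence has been negated.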


