Measure More, Question More: Experimental Studies on Transformer-based Language Models and Complement Coercion

12/20/2022
by Yuling Gu, et al.

Transformer-based language models have shown strong performance on an array of natural language understanding tasks. However, how these models handle implicit meaning has remained largely unexplored. We investigate this question using the complement coercion phenomenon, which involves sentences like "The student finished the book about sailing," where the action "reading" is implicit. We compare LMs' surprisal estimates at critical sentence regions in sentences with and without implicit meaning. Effects associated with recovering implicit meaning appear at a critical region other than the one where the sentences minimally differ. Follow-up experiments that factor out potential confounds reveal different perspectives and yield a richer, more accurate picture.
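The measurement at the heart of the abstract is token-level surprisal, the negative log probability a language model assigns to each token given its preceding context. The sketch below is not the authors' code; it assumes GPT-2 via the Hugging Face transformers library, and the coercion/control sentence pair is illustrative rather than taken from the paper's stimuli. It shows how per-token surprisal can be computed and compared across a sentence with implicit meaning ("finished the book") and one without ("read the book").

import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Load a causal LM; the paper compares several LMs, GPT-2 here is only an example.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_surprisals(sentence):
    """Return (token, surprisal in bits) for every token after the first."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits            # shape: (1, seq_len, vocab_size)
    log_probs = torch.log_softmax(logits, dim=-1)
    out = []
    for i in range(1, ids.size(1)):
        # log p(token_i | tokens_<i), converted from nats to bits
        lp = log_probs[0, i - 1, ids[0, i]].item()
        out.append((tokenizer.decode(ids[0, i]), -lp / math.log(2)))
    return out

# Illustrative coercion / control pair (the paper's exact stimuli may differ):
for label, sent in [("coercion", "The student finished the book about sailing."),
                    ("control",  "The student read the book about sailing.")]:
    print(label)
    for tok, s in token_surprisals(sent):
        print(f"  {tok!r:>12}  {s:5.2f} bits")

Comparing the resulting surprisal profiles region by region (verb, object noun phrase, post-nominal material) is one way to look for the effects the abstract describes at regions other than where the two sentences minimally differ.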

