A fine-grained comparison of pragmatic language understanding in humans and language models

12/13/2022
by Jennifer Hu, et al.

Pragmatics is an essential part of communication, but it remains unclear what mechanisms underlie human pragmatic communication and whether NLP systems capture pragmatic language understanding. To investigate both questions, we perform a fine-grained comparison of language models and humans on seven pragmatic phenomena, using zero-shot prompting on an expert-curated set of English materials. We ask whether models (1) select pragmatic interpretations of speaker utterances, (2) exhibit error patterns similar to those of humans, and (3) rely on the same linguistic cues as humans to solve the tasks. We find that the largest models achieve high accuracy and match human error patterns: within incorrect responses, models favor the literal interpretation of an utterance over heuristic-based distractors. We also find evidence that models and humans are sensitive to similar linguistic cues. Our results suggest that even paradigmatic pragmatic phenomena may be solved without explicit representations of other agents' mental states, and that artificial models can be used to gain mechanistic insights into human pragmatic processing.
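As an illustrative sketch only (not the authors' actual materials or code), a zero-shot item of this kind might be framed as a multiple-choice prompt, where each candidate interpretation (pragmatic, literal, distractor) is a numbered option and the model's chosen answer is compared against the human-preferred pragmatic reading. The scenario, question, and options below are invented examples:

```python
# Hypothetical sketch of a zero-shot pragmatics item as a
# multiple-choice prompt; scenario and options are invented.

SCENARIO = (
    "Esther asked her friend how she did on the exam. "
    'Her friend replied, "Well, I didn\'t fail."'
)
QUESTION = "What did the friend mean?"
OPTIONS = {
    "pragmatic": "She did not do especially well on the exam.",
    "literal": "She did not fail the exam, and nothing more.",
    "distractor": "She failed the exam.",
}

def build_prompt(scenario: str, question: str, options: dict) -> str:
    """Assemble a zero-shot multiple-choice prompt string."""
    lines = [scenario, question]
    for i, text in enumerate(options.values(), start=1):
        lines.append(f"{i}. {text}")
    lines.append("Answer:")
    return "\n".join(lines)

prompt = build_prompt(SCENARIO, QUESTION, OPTIONS)
print(prompt)
```

In an actual evaluation, each option would be scored (e.g., by the model's log-probability of the option number given the prompt), and the argmax response compared to the pragmatic target; errors can then be broken down by whether the model chose the literal reading or a distractor.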



Related research

05/31/2023 · What does the Failure to Reason with "Respectively" in Zero/Few-Shot Settings Tell Us about Language Models?
Humans can effortlessly understand the coordinate structure of sentences...

09/12/2019 · Analyzing machine-learned representations: A natural language case study
As modern deep networks become more complex, and get closer to human-lik...

09/19/2022 · How to Adapt Pre-trained Vision-and-Language Models to a Text-only Input?
Current language models have been criticised for learning language from ...

05/28/2021 · Language Models Use Monotonicity to Assess NPI Licensing
We investigate the semantic knowledge of language models (LMs), focusing...

11/02/2020 · A Closer Look at Linguistic Knowledge in Masked Language Models: The Case of Relative Clauses in American English
Transformer-based language models achieve high performance on various ta...

09/13/2021 · The Grammar-Learning Trajectories of Neural Language Models
The learning trajectories of linguistic phenomena provide insight into t...

03/29/2022 · Image Retrieval from Contextual Descriptions
The ability to integrate context, including perceptual and temporal cues...
