Exploring Processing of Nested Dependencies in Neural-Network Language Models and Humans

06/19/2020
by Yair Lakretz et al.

Recursive processing in sentence comprehension is considered a hallmark of human linguistic abilities. However, its underlying neural mechanisms remain largely unknown. We studied whether a recurrent neural network with Long Short-Term Memory units can mimic a central aspect of human sentence processing, namely the handling of long-distance agreement dependencies. Although the network was solely trained to predict the next word in a large corpus, analysis showed the emergence of a small set of specialized units that successfully handled local and long-distance syntactic agreement for grammatical number. However, simulations showed that this mechanism does not support full recursion and fails with some long-range embedded dependencies. We tested the model's predictions in a behavioral experiment where humans detected violations in number agreement in sentences with systematic variations in the singular/plural status of multiple nouns, with or without embedding. Human and model error patterns were remarkably similar, showing that the model echoes various effects observed in human data. However, a key difference was that, with embedded long-range dependencies, humans remained above chance level, while the model's systematic errors brought it below chance. Overall, our study shows that exploring the ways in which modern artificial neural networks process sentences leads to precise and testable hypotheses about human linguistic performance.
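The agreement-violation paradigm described above can be illustrated with a small sketch: a language model is asked to assign probabilities to the next word after a sentence prefix, and it "succeeds" on a trial if the grammatically correct verb form receives more probability than the number-violating one. The `toy_lm` stand-in below is purely hypothetical (the paper used an LSTM trained on a large corpus); it tracks only the most recent noun's grammatical number, so it fails on long-distance dependencies with a mismatched "attractor" noun, the error pattern the study probes.

```python
# Sketch of a number-agreement evaluation for a next-word-prediction model.
# toy_lm is a hand-built stand-in, NOT the trained LSTM from the paper.

def score_agreement(model_probs, prefix, correct_verb, wrong_verb):
    """True if the model puts more probability mass on the grammatically
    correct verb form than on the number-violating one."""
    dist = model_probs(prefix)
    return dist.get(correct_verb, 0.0) > dist.get(wrong_verb, 0.0)

SINGULAR_NOUNS = {"key", "cabinet", "boy"}
PLURAL_NOUNS = {"keys", "cabinets", "boys"}

def toy_lm(prefix):
    """Toy 'language model': conditions the verb only on the grammatical
    number of the most recent noun, so intervening attractors mislead it."""
    last_number = "sg"
    for token in prefix.lower().split():
        if token in SINGULAR_NOUNS:
            last_number = "sg"
        elif token in PLURAL_NOUNS:
            last_number = "pl"
    return ({"is": 0.8, "are": 0.2} if last_number == "sg"
            else {"is": 0.2, "are": 0.8})

# Local agreement: the subject is the last noun, so the heuristic succeeds.
print(score_agreement(toy_lm, "the keys", "are", "is"))  # True

# Long-distance agreement with a singular attractor ("cabinet"): the
# last-noun heuristic fails, mimicking an agreement-attraction error.
print(score_agreement(toy_lm, "the keys to the cabinet", "are", "is"))  # False
```

A real evaluation would replace `toy_lm` with the trained network's softmax output and sweep the singular/plural status of each noun, as in the behavioral experiment described above.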

