Word Interdependence Exposes How LSTMs Compose Representations

04/27/2020
by Naomi Saphra et al.

Recent work in NLP shows that LSTM language models capture compositional structure in language data. For a closer look at how these representations are composed hierarchically, we present a novel measure of interdependence between word meanings in an LSTM, based on their interactions at the LSTM's internal gates. To explore how compositional representations arise over training, we conduct simple experiments on synthetic data, which illustrate our measure by showing how high interdependence can hurt generalization. These synthetic experiments also illustrate a specific hypothesis about how hierarchical structures are discovered over the course of training: parent constituents rely on effective representations of their children, rather than learning long-range relations independently. We further support this measure with experiments on English-language data, where interdependence is higher for word pairs that are more closely linked syntactically.
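To make the idea of interdependence concrete, here is a minimal, hedged sketch in PyTorch. It is not the paper's gate-level measure (which builds on contextual decomposition of the LSTM's internal gates); instead it approximates the same intuition by ablation: two words are interdependent to the extent that the effect of removing both of them on the LSTM's final state differs from the sum of the effects of removing each one alone. The toy vocabulary, model sizes, and the zero-embedding ablation are all illustrative assumptions.

```python
# Illustrative sketch only (not the paper's exact measure): approximate the
# interdependence of two words by comparing the joint effect of ablating both
# with the sum of their individual effects on the LSTM's final hidden state.
import torch
import torch.nn as nn

torch.manual_seed(0)

vocab_size, embed_dim, hidden_dim = 50, 16, 32   # toy sizes (assumed)
embed = nn.Embedding(vocab_size, embed_dim)
lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

@torch.no_grad()
def final_state(token_ids, masked_positions=()):
    """Run the LSTM, zeroing the embeddings at `masked_positions`."""
    x = embed(token_ids).clone()                  # (1, seq_len, embed_dim)
    if masked_positions:
        x[0, list(masked_positions)] = 0.0        # crude ablation of those words
    _, (h_n, _) = lstm(x)
    return h_n[-1, 0]                             # final hidden state, (hidden_dim,)

def interdependence(token_ids, i, j):
    """How non-additive are the effects of words i and j on the final state?"""
    full = final_state(token_ids)
    effect_i = full - final_state(token_ids, (i,))
    effect_j = full - final_state(token_ids, (j,))
    effect_ij = full - final_state(token_ids, (i, j))
    interaction = effect_ij - (effect_i + effect_j)
    return (interaction.norm() / effect_ij.norm()).item()

sentence = torch.randint(0, vocab_size, (1, 8))   # toy "sentence" of word ids
print(interdependence(sentence, 2, 3))            # an adjacent pair
print(interdependence(sentence, 1, 6))            # a distant pair
```

In this sketch, a ratio near zero means the two words contribute roughly additively, while a larger ratio means their joint contribution is not recoverable from their individual contributions; the paper's measure captures the same non-additivity, but via interactions at the gates rather than input ablation.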

Related research:

- LSTMs Compose (and Learn) Bottom-Up (10/06/2020)
- COGS: A Compositional Generalization Challenge Based on Semantic Interpretation (10/12/2020)
- Finding Function in Form: Compositional Character Models for Open Vocabulary Word Representation (08/09/2015)
- Recursive Neural Networks with Bottlenecks Diagnose (Non-)Compositionality (01/31/2023)
- An algebraic approach to translating Japanese (03/10/2023)
- Order-sensitive Shapley Values for Evaluating Conceptual Soundness of NLP Models (06/01/2022)
- Mapping the Timescale Organization of Neural Language Models (12/12/2020)
