Characterizing Verbatim Short-Term Memory in Neural Language Models

10/24/2022
by Kristijan Armeni, et al.

When a language model is trained to predict natural language sequences, its prediction at each moment depends on a representation of prior context. What kind of information about the prior context can language models retrieve? We tested whether language models could retrieve the exact words that occurred previously in a text. In our paradigm, language models (transformers and an LSTM) processed English text in which a list of nouns occurred twice. We operationalized retrieval as the reduction in surprisal from the first to the second list. We found that the transformers retrieved both the identity and ordering of nouns from the first list. Further, the transformers' retrieval was markedly enhanced when they were trained on a larger corpus and with greater model depth. Lastly, their ability to index prior tokens was dependent on learned attention patterns. In contrast, the LSTM exhibited less precise retrieval, which was limited to list-initial tokens and to short intervening texts. The LSTM's retrieval was not sensitive to the order of nouns and it improved when the list was semantically coherent. We conclude that transformers implemented something akin to a working memory system that could flexibly retrieve individual token representations across arbitrary delays; conversely, the LSTM maintained a coarser and more rapidly-decaying semantic gist of prior tokens, weighted toward the earliest items.
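
The retrieval measure described above (the reduction in surprisal from the first to the second occurrence of a noun list) can be illustrated with a short script. The following is a minimal sketch, not the authors' code: it uses GPT-2 from Hugging Face transformers, and the example text and noun list are invented for illustration; the paper's actual stimuli, models, and analysis pipeline differ.

```python
# Minimal sketch of the "repeat surprisal" idea: compute per-token surprisal
# with a pretrained language model on a text in which the same noun list
# occurs twice, then compare surprisal on the first vs. second occurrence.
# GPT-2 and the example sentence below are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# A noun list presented twice, separated by intervening text.
text = (
    "Mary wrote down a list of words: patience, notion, movie, tumor. "
    "After a pause, she read the list again: patience, notion, movie, tumor."
)

enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits          # shape: (1, seq_len, vocab_size)

# Surprisal of each token given its preceding context, in bits.
log_probs = torch.log_softmax(logits[0, :-1, :], dim=-1)
targets = enc["input_ids"][0, 1:]
surprisal = -log_probs[torch.arange(targets.numel()), targets] / torch.log(torch.tensor(2.0))

tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])[1:]
for tok, s in zip(tokens, surprisal.tolist()):
    print(f"{tok!r:>12}  {s:5.2f} bits")

# Retrieval, in the sense of the abstract, shows up as a large drop in
# surprisal on the second occurrence of the list nouns relative to the first.
```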
