How Do Neural Sequence Models Generalize? Local and Global Context Cues for Out-of-Distribution Prediction

11/04/2021
by Anthony Bau, et al.

After a neural sequence model encounters an unexpected token, can its behavior be predicted? We show that RNN and transformer language models exhibit structured, consistent generalization in out-of-distribution contexts. We begin by introducing two idealized models of generalization in next-word prediction: a local context model in which generalization is consistent with the last word observed, and a global context model in which generalization is consistent with the global structure of the input. In experiments in English, Finnish, Mandarin, and random regular languages, we demonstrate that neural language models interpolate between these two forms of generalization: their predictions are well-approximated by a log-linear combination of local and global predictive distributions. We then show that, in some languages, noise mediates the two forms of generalization: noise applied to input tokens encourages global generalization, while noise in history representations encourages local generalization. Finally, we offer a preliminary theoretical explanation of these results by proving that the observed interpolation behavior is expected in log-linear models with a particular feature correlation structure. These results help explain the effectiveness of two popular regularization schemes and show that aspects of sequence model generalization can be understood and controlled.
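
The log-linear interpolation described in the abstract can be sketched in a few lines. The snippet below is an illustrative example, not the authors' code: it assumes `p_local` and `p_global` are next-token probability distributions over the same vocabulary, and `lam` is a hypothetical interpolation weight analogous to the one the paper fits.

```python
import numpy as np

def log_linear_interpolate(p_local, p_global, lam):
    """Combine two next-word distributions log-linearly.

    p_local, p_global: probability vectors over the vocabulary.
    lam: weight in [0, 1]; 1.0 recovers the local-context model,
         0.0 the global-context model.
    """
    # Weighted sum in log space (small epsilon avoids log(0)).
    log_mix = lam * np.log(p_local + 1e-12) + (1 - lam) * np.log(p_global + 1e-12)
    # Exponentiate (shifted by the max for numerical stability) and renormalize.
    unnorm = np.exp(log_mix - log_mix.max())
    return unnorm / unnorm.sum()

# Toy example over a four-word vocabulary.
p_local = np.array([0.70, 0.10, 0.10, 0.10])   # prediction from the last word only
p_global = np.array([0.10, 0.60, 0.20, 0.10])  # prediction from the global structure
print(log_linear_interpolate(p_local, p_global, lam=0.5))
```

With `lam = 0.5` this is a renormalized geometric mean of the two distributions; sweeping `lam` between 0 and 1 traces out the family of mixtures against which the observed model predictions are compared.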

Related research

06/15/2021 · What Context Features Can Transformer Language Models Use?
Transformer-based language models benefit from conditioning on contexts ...

05/17/2023 · Token-wise Decomposition of Autoregressive Language Model Hidden States for Analyzing Model Predictions
While there is much recent interest in studying why Transformer-based la...

07/30/2021 · Structural Guidance for Transformer Language Models
Transformer-based language models pre-trained on large amounts of text d...

02/18/2016 · The Interaction of Memory and Attention in Novel Word Generalization: A Computational Investigation
People exhibit a tendency to generalize a novel noun to the basic-level ...

10/31/2022 · Hybrid CNN-Interpreter: Interpret local and global contexts for CNN-based Models
Convolutional neural network (CNN) models have seen advanced improvement...

05/30/2023 · What and How does In-Context Learning Learn? Bayesian Model Averaging, Parameterization, and Generalization
In this paper, we conduct a comprehensive study of In-Context Learning (...

09/04/2018 · Random Language Model: a path to principled complexity
Many complex generative systems use languages to create structured objec...
