Evidence of Meaning in Language Models Trained on Programs

05/18/2023
by Charles Jin, et al.

We present evidence that language models can learn meaning despite being trained only to perform next token prediction on text, specifically a corpus of programs. Each program is preceded by a specification in the form of (textual) input-output examples. Working with programs enables us to precisely define concepts relevant to meaning in language (e.g., correctness and semantics), making program synthesis well-suited as an intermediate testbed for characterizing the presence (or absence) of meaning in language models. We first train a Transformer model on the corpus of programs, then probe the trained model's hidden states as it completes a program given a specification. Despite providing no inductive bias toward learning the semantics of the language, we find that a linear probe is able to extract abstractions of both current and future program states from the model states. Moreover, there is a strong, statistically significant correlation between the accuracy of the probe and the model's ability to generate a program that implements the specification. To evaluate whether the semantics are represented in the model states rather than learned by the probe, we design a novel experimental procedure that intervenes on the semantics of the language while preserving the lexicon and syntax. We also demonstrate that the model learns to generate correct programs that are, on average, shorter than those in the training set, which is evidence that language model outputs may differ from the training distribution in semantically meaningful ways. In summary, this paper does not propose any new techniques for training language models, but develops an experimental framework for and provides insights into the acquisition and representation of (formal) meaning in language models.
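To make the probing setup concrete, here is a minimal, hypothetical sketch of a linear probe over cached hidden states. The names (`hidden_states`, `program_states`) and the random stand-in data are illustrative assumptions, not the authors' code or dataset; the idea is simply that if a linear map recovers an abstraction of the program state from the model's hidden state, that abstraction is linearly decodable from the model.

```python
# Hypothetical sketch of a linear probe, assuming hidden states have been
# cached from the trained Transformer as it generates each program token,
# paired with an abstracted program state label at that step.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(5000, 512))    # (tokens, hidden_dim), stand-in data
program_states = rng.integers(0, 4, size=5000)  # stand-in abstracted state labels

X_train, X_test, y_train, y_test = train_test_split(
    hidden_states, program_states, test_size=0.2, random_state=0
)

# The probe is deliberately simple: a linear classifier, so that any
# predictive power reflects structure already present in the hidden states.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)
print("probe accuracy:", accuracy_score(y_test, probe.predict(X_test)))
```

The probe's accuracy on held-out states is the quantity the abstract reports as correlating with the model's ability to generate programs that satisfy the specification.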
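The semantic intervention can also be illustrated with a toy sketch: keep the lexicon and syntax of a language fixed while evaluating programs under an alternative semantics. The two-token language and the `run` helper below are hypothetical stand-ins for the paper's actual procedure.

```python
# Hedged sketch: two interpreters that share a lexicon and syntax but
# assign different (here, flipped) meanings to the same tokens.
def run(program, semantics):
    state = 0
    for token in program.split():
        state = semantics[token](state)
    return state

original = {"inc": lambda s: s + 1, "dec": lambda s: s - 1}
flipped  = {"inc": lambda s: s - 1, "dec": lambda s: s + 1}  # intervened semantics

prog = "inc inc dec"
print(run(prog, original))  # 1 under the original semantics
print(run(prog, flipped))   # -1 under the intervened semantics
```

Because the surface form of every program is unchanged, any drop in probe accuracy under the intervened semantics is evidence that the probe was reading semantics out of the model states rather than learning them itself.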


