Language Models as Agent Models

12/03/2022
by Jacob Andreas

Language models (LMs) are trained on collections of documents, written by individual human agents to achieve specific goals in an outside world. During training, LMs have access only to text of these documents, with no direct evidence of the internal states of the agents that produced them – a fact often used to argue that LMs are incapable of modeling goal-directed aspects of human language production and comprehension. Can LMs trained on text learn anything at all about the relationship between language and use? I argue that LMs are models of intentional communication in a specific, narrow sense. When performing next word prediction given a textual context, an LM can infer and represent properties of an agent likely to have produced that context. These representations can in turn influence subsequent LM generation in the same way that agents' communicative intentions influence their language. I survey findings from the recent literature showing that – even in today's non-robust and error-prone models – LMs infer and use representations of fine-grained communicative intentions and more abstract beliefs and goals. Despite the limited nature of their training data, they can thus serve as building blocks for systems that communicate and act intentionally.
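One way to make the core claim concrete: if a context c was produced by an agent with latent state a (beliefs, goals, communicative intent), next-word prediction can be read as marginal inference, p(w | c) = Σ_a p(a | c) · p(w | a, c), so contexts that imply different agents should pull the LM toward different continuations. The sketch below is not from the paper; the model choice (GPT-2 via the Hugging Face transformers library), the prompts, and the helper function are all illustrative assumptions. It probes the idea by comparing next-token probabilities under two contexts that suggest speakers with different intents.

```python
# Hypothetical probe, not the paper's method: compare an LM's next-token
# distribution under contexts that imply agents with different goals.
# Assumes the Hugging Face `transformers` package and GPT-2 weights.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def next_token_probs(context, candidates):
    """Return P(first subword of candidate | context) for each candidate."""
    ids = tokenizer(context, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]  # scores for the next token
    probs = torch.softmax(logits, dim=-1)
    # Score each candidate's first subword token, with a leading space
    # so it matches GPT-2's word-initial tokenization.
    return {w: probs[tokenizer.encode(" " + w)[0]].item() for w in candidates}

# Two contexts whose implied speakers have different communicative intents;
# the shared suffix "the claim is" gets different continuations under each.
contexts = [
    "After checking the data twice, I must report that the claim is",
    "Buy now! Customers everywhere agree that the claim is",
]
for c in contexts:
    print(c, "->", next_token_probs(c, ["false", "true"]))
```

If the abstract's thesis holds even weakly, the probability mass on "false" versus "true" should shift between the two contexts, reflecting properties of the inferred author rather than anything explicit in the shared suffix.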

Related research

05/21/2023 · Augmenting Autotelic Agents with Large Language Models
Humans learn to master open-ended repertoires of skills by imagining and...

04/03/2023 · Measuring and Manipulating Knowledge Representations in Language Models
Neural language models (LMs) represent facts about the world described b...

05/25/2023 · Passive learning of active causal strategies in agents and language models
What can be learned about causality and experimentation from passive dat...

07/08/2022 · Automatic Exploration of Textual Environments with Language-Conditioned Autotelic Agents
In this extended abstract we discuss the opportunities and challenges of...

08/05/2022 · Meaning without reference in large language models
The widespread success of large language models (LLMs) has been met with...

07/20/2023 · Of Models and Tin Men – a behavioural economics study of principal-agent problems in AI alignment using large-language models
AI Alignment is often presented as an interaction between a single desig...

10/07/2020 · Toward Stance-based Personas for Opinionated Dialogues
In the context of chit-chat dialogues it has been shown that endowing sy...
