Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs

by Peter Hase et al.

Do language models have beliefs about the world? Dennett (1995) famously argues that even thermostats have beliefs, on the view that a belief is simply an informational state decoupled from any motivational state. In this paper, we discuss approaches to detecting when models have beliefs about the world, and we improve on methods for updating model beliefs to be more truthful, with a focus on methods based on learned optimizers or hypernetworks. Our main contributions include: (1) new metrics for evaluating belief-updating methods that focus on the logical consistency of beliefs, (2) a training objective for Sequential, Local, and Generalizing model updates (SLAG) that improves the performance of learned optimizers, and (3) the introduction of the belief graph, which is a new form of interface with language models that shows the interdependencies between model beliefs. Our experiments suggest that models possess belief-like qualities to only a limited extent, but update methods can both fix incorrect model beliefs and greatly improve their consistency. Although off-the-shelf optimizers are surprisingly strong belief-updating baselines, our learned optimizers can outperform them in more difficult settings than have been considered in past work. Code is available at
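To make the belief-graph idea concrete, here is a minimal, hypothetical sketch (not the authors' implementation): beliefs are propositions with truth values, and a directed edge A → B is drawn whenever updating the belief in A also changes the belief in B. A toy dict-based "model" with a hand-coded dependency stands in for a real language model, and `update_belief` stands in for a learned-optimizer update.

```python
# Hedged sketch, assuming a toy stand-in for a language model.
# An edge (a, b) in the belief graph means: flipping the model's
# belief in a also changes its belief in b.

from itertools import permutations

# Toy model state: proposition -> believed truth value.
model = {
    "A robin is a bird": True,
    "A robin can fly": True,
    "A robin lays eggs": True,
}

# Hand-coded dependency: flipping a belief propagates to its dependents
# (a real model's interdependencies would be measured, not declared).
dependents = {"A robin is a bird": ["A robin can fly"]}

def update_belief(state, prop, value):
    """Return a new state with prop set to value, propagating the
    change to dependents (stand-in for a learned-optimizer update)."""
    new = dict(state)
    new[prop] = value
    for dep in dependents.get(prop, []):
        new[dep] = value
    return new

def belief_graph(state, props):
    """Collect edges (a, b) such that updating a changes belief in b."""
    edges = set()
    for a, b in permutations(props, 2):
        updated = update_belief(state, a, not state[a])
        if updated[b] != state[b]:
            edges.add((a, b))
    return edges

edges = belief_graph(model, list(model))
print(edges)  # {('A robin is a bird', 'A robin can fly')}
```

With a real model, the same probing loop would query the model's confidence in each proposition before and after an update rather than reading a dict.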




