BeliefBank: Adding Memory to a Pre-Trained Language Model for a Systematic Notion of Belief

09/29/2021
by Nora Kassner, et al.

Although pretrained language models (PTLMs) contain significant amounts of world knowledge, they can still produce inconsistent answers to questions when probed, even after specialized training. As a result, it can be hard to identify what the model actually "believes" about the world, making it susceptible to inconsistent behavior and simple errors. Our goal is to reduce these problems. Our approach is to embed a PTLM in a broader system that also includes an evolving, symbolic memory of beliefs – a BeliefBank – that records but then may modify the raw PTLM answers. We describe two mechanisms to improve belief consistency in the overall system. First, a reasoning component – a weighted MaxSAT solver – revises beliefs that significantly clash with others. Second, a feedback component issues future queries to the PTLM using known beliefs as context. We show that, in a controlled experimental setting, these two mechanisms result in more consistent beliefs in the overall system, improving both the accuracy and consistency of its answers over time. This is significant as it is a first step towards PTLM-based architectures with a systematic notion of belief, enabling them to construct a more coherent picture of the world, and improve over time without model retraining.
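
To make the constraint-solving step concrete, here is a minimal sketch of belief revision cast as weighted MaxSAT, assuming PySAT's RC2 solver (pip install python-sat). The beliefs, confidence values, and the single constraint below are illustrative stand-ins, not the paper's actual data, and the paper also weights the constraints themselves, which this sketch simplifies to a hard clause.

# Minimal weighted-MaxSAT belief revision sketch (illustrative example).
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

# Candidate beliefs from the PTLM, with model confidences.
# Variable 1: "a swallow is a bird"; variable 2: "a swallow has wings".
beliefs = {1: ("a swallow is a bird", 0.9),
           2: ("a swallow has wings", 0.4)}

wcnf = WCNF()
# Soft clauses: prefer keeping each raw PTLM answer, weighted by its
# confidence (scaled to integers, since WCNF weights are integral).
for var, (_, conf) in beliefs.items():
    wcnf.append([var], weight=int(conf * 100))

# Hard clause encoding the constraint "bird(x) -> has_wings(x)",
# i.e. (NOT 1) OR 2 must hold in any revised belief set.
wcnf.append([-1, 2])

# Solve for the maximum-weight truth assignment consistent with the
# hard constraint; flipped literals are the revised beliefs.
with RC2(wcnf) as solver:
    model = solver.compute()
revised = {beliefs[abs(v)][0]: v > 0 for v in model if abs(v) in beliefs}
print(revised)  # both beliefs held true: flipping either would cost more

The second mechanism, feedback, needs no solver: once the BeliefBank holds revised beliefs, relevant ones can be prepended as context to future queries (e.g. "Context: a swallow is a bird. Question: does a swallow have wings?"), nudging the PTLM toward answers consistent with what the system already believes.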


