Enriching a Model's Notion of Belief using a Persistent Memory

04/16/2021
by Nora Kassner, et al.

Although pretrained language models (PTLMs) have been shown to contain significant amounts of world knowledge, they can still produce inconsistent answers to questions when probed, even after specialized training techniques are used to reduce inconsistency. As a result, it can be hard to identify what the model actually "believes" about the world. Our goal is to reduce this problem, so that systems are more globally consistent and accurate in their answers. Our approach is to add a memory component - a BeliefBank - that records a model's answers, and two mechanisms that use it to improve consistency among beliefs. First, a reasoning component - a weighted SAT solver - improves consistency by flipping answers that significantly clash with others. Second, a feedback component re-queries the model, this time using known beliefs as context. We show that, in a controlled experimental setting, these two mechanisms improve both accuracy and consistency. This is significant as it is a first step towards endowing models with an evolving memory, allowing them to construct a more coherent picture of the world.
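To make the two mechanisms concrete, here is a minimal, self-contained sketch (not the authors' code) of how a BeliefBank, a weighted constraint-repair step, and a feedback re-query might fit together. The constraint format, the penalty weight, and the `query_model` callable are illustrative assumptions, and a brute-force search over assignments stands in for the weighted SAT solver used in the paper.

```python
# Toy sketch of a BeliefBank (assumptions noted above; not the paper's implementation).
from itertools import product
from typing import Callable, Dict, List, Tuple

# Each belief maps a statement to (model's answer, model's confidence).
BeliefBank = Dict[str, Tuple[bool, float]]

# Implication constraints: (premise, conclusion, expected truth of conclusion),
# e.g. "a swallow is a bird" being True implies "a swallow has wings" is True.
Constraint = Tuple[str, str, bool]


def consistency_repair(bank: BeliefBank,
                       constraints: List[Constraint],
                       penalty: float = 10.0) -> BeliefBank:
    """Weighted-MaxSAT-style repair: pick the truth assignment that keeps as
    much model confidence as possible while paying a penalty for each violated
    constraint. Brute force, so only feasible for small belief sets."""
    statements = list(bank)
    best_assignment, best_score = None, float("-inf")
    for values in product([True, False], repeat=len(statements)):
        assignment = dict(zip(statements, values))
        # Reward agreement with the model's original answers, weighted by confidence.
        score = sum(conf for s, (ans, conf) in bank.items()
                    if assignment[s] == ans)
        # Penalise every violated implication.
        for premise, conclusion, expected in constraints:
            if assignment.get(premise) and assignment.get(conclusion) != expected:
                score -= penalty
        if score > best_score:
            best_assignment, best_score = assignment, score
    return {s: (best_assignment[s], conf) for s, (_, conf) in bank.items()}


def requery_with_context(question: str,
                         bank: BeliefBank,
                         query_model: Callable[[str], Tuple[bool, float]],
                         k: int = 3) -> Tuple[bool, float]:
    """Feedback mechanism: prepend the k most confident stored (true) beliefs
    to the question before asking the model again. `query_model` is a
    hypothetical stand-in for the underlying QA model."""
    context = ". ".join(s for s, (ans, _) in
                        sorted(bank.items(), key=lambda kv: -kv[1][1])[:k]
                        if ans)
    return query_model(f"{context}. {question}" if context else question)
```

As an illustration, given a bank {"a swallow is a bird": (True, 0.9), "a swallow has wings": (False, 0.6)} and the constraint ("a swallow is a bird", "a swallow has wings", True), the repair step flips the lower-confidence "has wings" answer to True, since keeping the clash would cost the larger penalty.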


Related research

09/29/2021 - BeliefBank: Adding Memory to a Pre-Trained Language Model for a Systematic Notion of Belief
  Although pretrained language models (PTLMs) contain significant amounts ...

05/23/2023 - Language Models with Rationality
  While large language models (LLMs) are proficient at question-answering ...

01/13/2023 - The moral authority of ChatGPT
  ChatGPT is not only fun to chat with, but it also searches information, ...

04/27/2022 - Towards Teachable Reasoning Systems
  Our goal is a teachable reasoning system for question-answering (QA), wh...

11/21/2022 - Enhancing Self-Consistency and Performance of Pre-Trained Language Models through Natural Language Inference
  While large pre-trained language models are powerful, their predictions ...

03/04/2021 - Consistent Answers of Aggregation Queries using SAT Solvers
  The framework of database repairs and consistent answers to queries is a...

04/27/2023 - Federated Prompting and Chain-of-Thought Reasoning for Improving LLMs Answering
  We investigate how to enhance answer precision in frequently asked quest...
