Entailer: Answering Questions with Faithful and Truthful Chains of Reasoning

10/21/2022
by Oyvind Tafjord, et al.

Our goal is a question-answering (QA) system that can show how its answers are implied by its own internal beliefs via a systematic chain of reasoning. Such a capability would allow better understanding of why a model produced the answer it did. Our approach is to recursively combine a trained backward-chaining model, capable of generating a set of premises entailing an answer hypothesis, with a verifier that checks that the model itself believes those premises (and the entailment itself) through self-querying. To our knowledge, this is the first system to generate multistep chains that are both faithful (the answer follows from the reasoning) and truthful (the chain reflects the system's own internal beliefs). In evaluation using two different datasets, users judge that a majority (70%+) of generated chains clearly show how an answer follows from a set of facts - substantially better than a high-performance baseline - while preserving answer accuracy. By materializing model beliefs that systematically support an answer, new opportunities arise for understanding the model's system of belief, and diagnosing and correcting its misunderstandings when an answer is wrong.
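
To make the recursive procedure concrete, below is a minimal Python sketch of the backward-chain-then-verify loop the abstract describes. It is an illustration, not the authors' released code: the confidence threshold, the helpers `believes_fact`, `believes_entailment`, and `generate_premises`, and the toy fact/rule tables are all assumptions standing in for the paper's trained models and self-querying step.

```python
# Minimal sketch (not the authors' code) of an Entailer-style loop:
# backward-chain from an answer hypothesis to candidate premises, then
# verify by self-querying that the model believes each premise and each
# entailment step. The tables and helpers below are illustrative stubs.
from dataclasses import dataclass, field
from typing import Optional

THRESHOLD = 0.9  # assumed confidence cutoff for accepting a belief

# Toy stand-ins for the model's internal beliefs and its rule generator.
KNOWN_FACTS = {"a magnet attracts iron", "a paperclip is made of iron"}
RULES = {"a magnet attracts a paperclip":
         [["a magnet attracts iron", "a paperclip is made of iron"]]}

def believes_fact(statement: str) -> float:
    """Stub for self-querying the model's belief in a single statement."""
    return 1.0 if statement in KNOWN_FACTS else 0.0

def believes_entailment(premises: list[str], hypothesis: str) -> float:
    """Stub for self-querying whether the model accepts the entailment step."""
    return 1.0  # the toy rules above are assumed valid

def generate_premises(hypothesis: str) -> list[list[str]]:
    """Stub for the backward-chaining model: candidate premise sets."""
    return RULES.get(hypothesis, [])

@dataclass
class ProofNode:
    hypothesis: str
    premises: list["ProofNode"] = field(default_factory=list)

def prove(hypothesis: str, depth: int = 0, max_depth: int = 2) -> Optional[ProofNode]:
    """Search for a chain of reasoning the model itself believes."""
    if believes_fact(hypothesis) >= THRESHOLD:
        return ProofNode(hypothesis)            # truthful leaf: a believed fact
    if depth >= max_depth:
        return None                             # give up on this branch
    for premises in generate_premises(hypothesis):
        if believes_entailment(premises, hypothesis) < THRESHOLD:
            continue                            # model distrusts this step
        subproofs = [prove(p, depth + 1, max_depth) for p in premises]
        if all(sub is not None for sub in subproofs):
            return ProofNode(hypothesis, subproofs)  # faithful, verified chain
    return None

print(prove("a magnet attracts a paperclip"))
```

The point the sketch captures is that a chain is returned only when every leaf is a fact the model believes and every entailment step is itself accepted through self-querying, which is what makes the resulting chain both faithful and truthful in the paper's sense.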

Related research

04/27/2022 · Towards Teachable Reasoning Systems
Our goal is a teachable reasoning system for question-answering (QA), wh...

05/23/2023 · Language Models with Rationality
While large language models (LLMs) are proficient at question-answering ...

04/25/2023 · Answering Questions by Meta-Reasoning over Multiple Chains of Thought
Modern systems for multi-hop question answering (QA) typically break que...

10/07/2019 · Multi-hop Question Answering via Reasoning Chains
Multi-hop question answering requires models to gather information from ...

04/17/2021 · Explaining Answers with Entailment Trees
Our goal, in the context of open-domain textual question-answering (QA),...

08/15/2023 · Forward-Backward Reasoning in Large Language Models for Verification
Chain-of-Thought (CoT) prompting has shown promising performance in vario...

04/06/2020 · Learning to Recover Reasoning Chains for Multi-Hop Question Answering via Cooperative Games
We propose the new problem of learning to recover reasoning chains from ...
