LAMBADA: Backward Chaining for Automated Reasoning in Natural Language

12/20/2022
by Seyed Mehran Kazemi, et al.

Remarkable progress has been made on automated reasoning with knowledge specified as unstructured natural text, by harnessing large language models (LMs) coupled with methods such as Chain-of-Thought prompting and Selection-Inference. These techniques search for proofs in the forward direction, from the axioms toward the conclusion, which suffers from a combinatorial explosion of the search space and thus high failure rates on problems requiring longer chains of reasoning. The classical automated reasoning literature has shown that reasoning in the backward direction (i.e., from the intended conclusion back to the set of axioms that support it) is significantly more efficient on proof-finding problems. We import this intuition into the LM setting and develop a backward chaining algorithm, which we call LAMBADA, that decomposes reasoning into four sub-modules, each of which can be implemented simply by few-shot prompted LM inference. We show that LAMBADA achieves massive accuracy boosts over state-of-the-art forward reasoning methods on two challenging logical reasoning datasets, particularly when deep and accurate proof chains are required.
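To make the contrast with forward search concrete, here is a minimal symbolic sketch of the backward-chaining control flow the abstract describes. It is an illustration, not the paper's implementation: in LAMBADA each sub-module is a few-shot prompted LM call over natural-language statements, whereas here the sub-modules (names and toy facts/rules are hypothetical) are stand-ins using exact string matching.

```python
# Backward chaining: start from the goal and recurse toward the axioms,
# so only rules whose conclusion matches the current goal are explored.

def fact_check(goal, facts):
    """Sub-module stand-in: is the goal directly supported by a known fact?"""
    return goal in facts

def rule_selection(goal, rules):
    """Sub-module stand-in: select rules whose conclusion matches the goal."""
    return [r for r in rules if r["conclusion"] == goal]

def goal_decomposition(rule):
    """Sub-module stand-in: break a selected rule into its premises,
    which become the new sub-goals to prove."""
    return rule["premises"]

def prove(goal, facts, rules, depth=4):
    """Depth-limited backward chaining from the goal to the axioms."""
    if depth == 0:
        return False
    if fact_check(goal, facts):
        return True
    for rule in rule_selection(goal, rules):
        subgoals = goal_decomposition(rule)
        if all(prove(g, facts, rules, depth - 1) for g in subgoals):
            return True
    return False

# Toy knowledge base (illustrative only).
facts = {"Fiona is red", "Fiona is kind"}
rules = [
    {"premises": ["Fiona is red", "Fiona is kind"],
     "conclusion": "Fiona is smart"},
    {"premises": ["Fiona is smart"],
     "conclusion": "Fiona is happy"},
]

print(prove("Fiona is happy", facts, rules))  # True: happy <- smart <- {red, kind}
```

Because the search is keyed on the goal at every step, the branching factor is the number of rules that conclude the current goal, rather than the number of all possible forward inferences from the axioms.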


