ProofWriter: Generating Implications, Proofs, and Abductive Statements over Natural Language

12/24/2020
by Oyvind Tafjord, et al.

Transformers have been shown to emulate logical deduction over natural language theories (logical rules expressed in natural language), reliably assigning true/false labels to candidate implications. However, their ability to generate implications of a theory has not yet been demonstrated, and methods for reconstructing proofs of answers are imperfect. In this work we show that a generative model, called ProofWriter, can reliably generate both implications of a theory and the natural language proof(s) that support them. In particular, iterating a 1-step implication generator results in proofs that are highly reliable and that represent actual model decisions (rather than post-hoc rationalizations). On the RuleTaker dataset, the accuracy of ProofWriter's proofs exceeds previous methods by +9% (absolute), and generalizes to proof depths unseen in training and to out-of-domain problems. We also show that generative techniques can perform a type of abduction with high precision: given a theory and an unprovable conclusion, identify a missing fact that allows the conclusion to be proved, along with a proof. These results significantly improve the viability of neural methods for systematically reasoning over natural language.
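
To illustrate the iterative strategy described above (repeatedly asking a 1-step implication generator for new conclusions and recording their supports until nothing new is produced), here is a minimal, self-contained Python sketch. The neural 1-step generator is replaced by a toy rule-matching stub so the loop is runnable on its own; the function names (one_step_implications, iterative_proofwriter, read_proof) and the example theory are illustrative assumptions, not the authors' code or API.

# Minimal sketch of iterated 1-step implication generation with proof readout.
# The toy stub one_step_implications stands in for the neural 1-step generator.

def one_step_implications(facts, rules):
    """Stand-in for the neural 1-step generator: return (conclusion, premises)
    pairs derivable from the current facts by a single rule application."""
    new = []
    for premises, conclusion in rules:
        if all(p in facts for p in premises) and conclusion not in facts:
            new.append((conclusion, premises))
    return new

def iterative_proofwriter(facts, rules, max_depth=10):
    """Iterate the 1-step generator to a fixpoint, recording the premises
    used for each derived fact so a proof can be read back afterwards."""
    facts = set(facts)
    proofs = {f: None for f in facts}          # None marks a given fact
    for _ in range(max_depth):
        new = one_step_implications(facts, rules)
        if not new:
            break
        for conclusion, premises in new:
            facts.add(conclusion)
            proofs.setdefault(conclusion, premises)
    return facts, proofs

def read_proof(goal, proofs):
    """Reconstruct a nested proof tree for `goal` from the recorded supports."""
    support = proofs.get(goal)
    if support is None:
        return goal                             # a given fact (or unproved)
    return (goal, [read_proof(p, proofs) for p in support])

# Toy theory in the spirit of RuleTaker examples (illustrative only).
facts = ["Bob is big."]
rules = [(("Bob is big.",), "Bob is strong."),
         (("Bob is strong.",), "Bob can lift the box.")]
derived, proofs = iterative_proofwriter(facts, rules)
print(read_proof("Bob can lift the box.", proofs))

Running the snippet prints a nested proof tree for the derived conclusion, mirroring how iterated 1-step generation yields proofs that reflect the model's actual intermediate decisions rather than a post-hoc rationalization.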

Related research

09/30/2020 · Measuring Systematic Generalization in Neural Proof Generation with Transformers
We are interested in understanding how well Transformer language models ...

12/11/2017 · Coqatoo: Generating Natural Language Versions of Coq Proofs
Due to their numerous advantages, formal proofs and proof assistants, su...

03/19/2022 · FaiRR: Faithful and Robust Deductive Reasoning over Natural Language
Transformers have been shown to be able to perform deductive reasoning o...

05/25/2022 · Generating Natural Language Proofs with Verifier-Guided Search
Deductive reasoning (drawing conclusions from assumptions) is a challeng...

04/18/2021 · Flexible Operations for Natural Language Deduction
An interpretable system for complex, open-domain reasoning needs an inte...

11/01/2022 · Natural Language Deduction with Incomplete Information
A growing body of work studies how to answer a question or verify a clai...

12/20/2022 · LAMBADA: Backward Chaining for Automated Reasoning in Natural Language
Remarkable progress has been made on automated reasoning with knowledge ...
