ProofWriter: Generating Implications, Proofs, and Abductive Statements over Natural Language

by Oyvind Tafjord, et al.

Transformers have been shown to emulate logical deduction over natural language theories (logical rules expressed in natural language), reliably assigning true/false labels to candidate implications. However, their ability to generate implications of a theory has not yet been demonstrated, and methods for reconstructing proofs of answers are imperfect. In this work we show that a generative model, called ProofWriter, can reliably generate both implications of a theory and the natural language proof(s) that support them. In particular, iterating a 1-step implication generator results in proofs that are highly reliable and represent actual model decisions (rather than post-hoc rationalizations). On the RuleTaker dataset, the accuracy of ProofWriter's proofs exceeds previous methods by +9% (absolute), and generalizes to proof depths unseen in training and to out-of-domain problems. We also show that generative techniques can perform a type of abduction with high precision: given a theory and an unprovable conclusion, identify a missing fact that allows the conclusion to be proved, along with a proof. These results significantly improve the viability of neural methods for systematically reasoning over natural language.
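The iterative strategy described above can be sketched in a few lines: repeatedly apply a 1-step implication generator to the growing set of facts until no new implications appear, recording one proof step per derived fact. In the paper the generator is a neural model; the toy rule matcher below (`one_step_implications`, with illustrative fact and rule names) is a stand-in for it, not the paper's API.

```python
# Hedged sketch of ProofWriter-style iterative 1-step inference.
# A toy rule matcher stands in for the neural implication generator;
# all names and example sentences here are illustrative assumptions.

def one_step_implications(facts, rules):
    """Return (new_fact, proof_step) pairs derivable in exactly one step."""
    new = []
    for premise, conclusion in rules:
        if premise in facts and conclusion not in facts:
            new.append((conclusion, f"{conclusion} <- {premise}"))
    return new

def iterate_to_closure(facts, rules):
    """Iterate the 1-step generator to a fixpoint, accumulating a
    proof step for every derived fact (so proofs reflect the actual
    derivation order, not a post-hoc rationalization)."""
    facts = set(facts)
    proofs = {}
    while True:
        new = one_step_implications(facts, rules)
        if not new:
            return facts, proofs
        for fact, step in new:
            facts.add(fact)
            proofs[fact] = step

# Usage: a depth-2 implication emerges after two iterations.
theory = {"Erin is big."}
rules = [("Erin is big.", "Erin is kind."),
         ("Erin is kind.", "Erin is nice.")]
closure, proofs = iterate_to_closure(theory, rules)
```

Chaining the per-step proof lines back from a derived fact reconstructs its full natural-language proof, which is why the iterated 1-step approach stays faithful at depths beyond those seen in training.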


Measuring Systematic Generalization in Neural Proof Generation with Transformers

We are interested in understanding how well Transformer language models ...

Coqatoo: Generating Natural Language Versions of Coq Proofs

Due to their numerous advantages, formal proofs and proof assistants, su...

FaiRR: Faithful and Robust Deductive Reasoning over Natural Language

Transformers have been shown to be able to perform deductive reasoning o...

Generating Natural Language Proofs with Verifier-Guided Search

Deductive reasoning (drawing conclusions from assumptions) is a challeng...

Flexible Operations for Natural Language Deduction

An interpretable system for complex, open-domain reasoning needs an inte...

Natural Language Deduction with Incomplete Information

A growing body of work studies how to answer a question or verify a clai...

LAMBADA: Backward Chaining for Automated Reasoning in Natural Language

Remarkable progress has been made on automated reasoning with knowledge ...