multiPRover
[NAACL 2021] PyTorch code of multiPRover: Generating Multiple Proofs for Improved Interpretability in Rule Reasoning
We focus on a type of linguistic formal reasoning where the goal is to reason over explicit knowledge in the form of natural language facts and rules (Clark et al., 2020). A recent work, named PRover (Saha et al., 2020), performs such reasoning by answering a question and also generating a proof graph that explains the answer. However, compositional reasoning is not always unique and there may be multiple ways of reaching the correct answer. Thus, in our work, we address a new and challenging problem of generating multiple proof graphs for reasoning over natural language rule-bases. Each proof provides a different rationale for the answer, thereby improving the interpretability of such reasoning systems. In order to jointly learn from all proof graphs and exploit the correlations between multiple proofs for a question, we pose this task as a set generation problem over structured output spaces where each proof is represented as a directed graph. We propose two variants of a proof-set generation model, multiPRover. Our first model, Multilabel-multiPRover, generates a set of proofs via multi-label classification and implicit conditioning between the proofs; while the second model, Iterative-multiPRover, generates proofs iteratively by explicitly conditioning on the previously generated proofs. Experiments on multiple synthetic, zero-shot, and human-paraphrased datasets reveal that both multiPRover models significantly outperform PRover on datasets containing multiple gold proofs. Iterative-multiPRover obtains state-of-the-art proof F1 in zero-shot scenarios where all examples have single correct proofs. It also generalizes better to questions requiring higher depths of reasoning where multiple proofs are more frequent. Our code and models are publicly available at https://github.com/swarnaHub/multiPRover
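To make the set-generation framing concrete, here is a minimal, hypothetical sketch (not the authors' code) of how a proof over a rule-base can be represented as a directed graph, and why a question's gold target is a *set* of such graphs: two independent fact–rule chains can each justify the same answer. All names (`Proof`, `fact1`, `rule1`, etc.) are illustrative assumptions.

```python
# Hypothetical illustration, not the released multiPRover code:
# a proof graph is a directed graph whose nodes are fact/rule
# identifiers and whose edges connect premises to conclusions.

from dataclasses import dataclass


@dataclass(frozen=True)
class Proof:
    """One proof: node ids plus directed (premise, conclusion) edges."""
    nodes: frozenset
    edges: frozenset


# Toy rule-base with two independent derivations of the same answer:
#   fact1: "Bob is big"     rule1: "big things are green"
#   fact2: "Bob is round"   rule2: "round things are green"
proof_a = Proof(frozenset({"fact1", "rule1"}),
                frozenset({("fact1", "rule1")}))
proof_b = Proof(frozenset({"fact2", "rule2"}),
                frozenset({("fact2", "rule2")}))

# The target is a set of proofs: order must not matter and duplicates
# must collapse, which is why each Proof is frozen (hashable).
proof_set = {proof_a, proof_b, proof_a}
assert len(proof_set) == 2
```

A set-generation model is then scored by matching its predicted proof graphs against this gold set (e.g., via proof F1), rather than against any single canonical proof.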