HOP, UNION, GENERATE: Explainable Multi-hop Reasoning without Rationale Supervision

05/23/2023
by Wenting Zhao, et al.

Explainable multi-hop question answering (QA) not only predicts answers but also identifies rationales, i.e., subsets of input sentences used to derive the answers. This problem has been extensively studied under the supervised setting, where both answer and rationale annotations are given. Because rationale annotations are expensive to collect and not always available, recent efforts have been devoted to developing methods that do not rely on supervision for rationales. However, such methods have limited capacity to model interactions between sentences, let alone to reason across multiple documents. This work proposes a principled, probabilistic approach for training explainable multi-hop QA systems without rationale supervision. Our approach performs multi-hop reasoning by explicitly modeling rationales as sets, enabling the model to capture interactions between documents and between sentences within a document. Experimental results show that our approach is more accurate at selecting rationales than previous methods, while achieving similar accuracy in predicting answers.
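To make the probabilistic framing concrete, the idea of treating rationales as latent *sets* can be sketched as marginalizing the answer probability over candidate sentence subsets: p(answer | question) = Σ_Z p(Z | question) · p(answer | Z). The sketch below is illustrative only and is not the paper's actual model; the scoring functions `score_set` and `answer_logprob_given` are hypothetical stand-ins for learned neural scorers, and real systems would not enumerate subsets exhaustively.

```python
import itertools
import math

def log_softmax(scores):
    # Numerically stable log-softmax over a list of raw scores.
    m = max(scores)
    lse = m + math.log(sum(math.exp(s - m) for s in scores))
    return [s - lse for s in scores]

def marginal_answer_logprob(num_sentences, score_set, answer_logprob_given, max_size=2):
    """log p(answer) = log Σ_Z p(Z) p(answer | Z), with Z ranging over
    sentence subsets up to max_size. Scoring sets (rather than individual
    sentences) is what lets the model capture cross-sentence interactions."""
    candidates = [frozenset(c)
                  for r in range(1, max_size + 1)
                  for c in itertools.combinations(range(num_sentences), r)]
    set_logps = log_softmax([score_set(z) for z in candidates])
    # log-sum-exp over all candidate rationale sets
    terms = [lp + answer_logprob_given(z) for lp, z in zip(set_logps, candidates)]
    m = max(terms)
    return m + math.log(sum(math.exp(t - m) for t in terms))

# Toy stand-in scorers (assumptions, not the paper's parameterization):
def score_set(idxs):
    return 2.0 if 1 in idxs else 0.0       # prefer sets containing sentence 1

def answer_logprob_given(idxs):
    return math.log(0.9) if 1 in idxs else math.log(0.1)

lp = marginal_answer_logprob(3, score_set, answer_logprob_given)
```

Because the answer likelihood is a sum over sets, gradients flow to the set scorer without any rationale labels: sets that explain the correct answer well receive higher probability, which is how rationale selection is learned indirectly.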

