Consistent Multi-Granular Rationale Extraction for Explainable Multi-hop Fact Verification

05/16/2023
by Jiasheng Si, et al.

The success of deep learning models on multi-hop fact verification has prompted researchers to understand the reasoning behind their veracity predictions. One possible way is erasure search: obtaining the rationale by entirely removing a subset of the input without compromising the veracity prediction. Although extensively explored, existing approaches fall within the scope of single-granular (token-level or sentence-level) explanation, which inevitably leads to explanation redundancy and inconsistency. To address these issues, this paper explores the viability of multi-granular rationale extraction with consistency and faithfulness for explainable multi-hop fact verification. In particular, given a pretrained veracity prediction model, a token-level explainer and a sentence-level explainer are trained simultaneously to obtain multi-granular rationales via differentiable masking. Meanwhile, three diagnostic properties (fidelity, consistency, salience) are introduced and applied to the training process to ensure that the extracted rationales are faithful and consistent. Experimental results on three multi-hop fact verification datasets show that the proposed approach outperforms state-of-the-art baselines.
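The core mechanism described above, two explainers producing soft masks at different granularities with a consistency constraint tying them together, can be illustrated with a minimal sketch. This is not the paper's implementation; the logits, the multiplicative tie between token and sentence masks, and the squared-error consistency penalty are illustrative assumptions standing in for the learned explainers and the full fidelity/salience objectives.

```python
import math

def sigmoid(x):
    # Soft gate in (0, 1), a stand-in for a differentiable mask.
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical toy input: 2 evidence sentences, 3 tokens each.
# In the actual model these logits would come from trained explainers.
token_logits = [[2.0, -1.0, 0.5], [-2.0, -1.5, -0.5]]
sent_logits = [1.5, -2.0]

token_mask = [[sigmoid(t) for t in row] for row in token_logits]
sent_mask = [sigmoid(s) for s in sent_logits]

# Hierarchical tie: a token can only survive if its sentence does,
# so token masks are gated by their sentence mask.
joint_mask = [[t * s for t in row] for row, s in zip(token_mask, sent_mask)]

# Consistency penalty (one possible form): each sentence mask should
# agree with its most salient token, averaged over sentences.
consistency = sum((s - max(row)) ** 2
                  for row, s in zip(token_mask, sent_mask)) / len(sent_mask)

# Sparsity surrogate: encourage compact rationales at both granularities
# (the fidelity and salience terms of the paper are omitted here).
n_tokens = sum(len(row) for row in joint_mask)
sparsity = (sum(t for row in joint_mask for t in row) / n_tokens
            + sum(sent_mask) / len(sent_mask))
```

At inference, thresholding `joint_mask` would yield the token-level rationale and thresholding `sent_mask` the sentence-level one, with the gating guaranteeing that no selected token falls outside a selected sentence.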


Related research

12/02/2022 · Exploring Faithful Rationale for Multi-hop Fact Verification via Salience-Aware Graph Learning
The opaqueness of the multi-hop fact verification model imposes imperati...

11/05/2020 · HoVer: A Dataset for Many-Hop Fact Extraction And Claim Verification
We introduce HoVer (HOppy VERification), a dataset for many-hop evidence...

05/23/2023 · HOP, UNION, GENERATE: Explainable Multi-hop Reasoning without Rationale Supervision
Explainable multi-hop question answering (QA) not only predicts answers ...

05/21/2019 · Answering while Summarizing: Multi-task Learning for Multi-hop QA with Evidence Extraction
Question answering (QA) using textual sources such as reading comprehens...

04/14/2021 · Is Multi-Hop Reasoning Really Explainable? Towards Benchmarking Reasoning Interpretability
Multi-hop reasoning has been widely studied in recent years to obtain mo...

12/28/2020 · Red Dragon AI at TextGraphs 2020 Shared Task: LIT: LSTM-Interleaved Transformer for Multi-Hop Explanation Ranking
Explainable question answering for science questions is a challenging ta...

08/05/2022 · Going Beyond Approximation: Encoding Constraints for Explainable Multi-hop Inference via Differentiable Combinatorial Solvers
Integer Linear Programming (ILP) provides a viable mechanism to encode e...
