Exploring Faithful Rationale for Multi-hop Fact Verification via Salience-Aware Graph Learning

12/02/2022
by   Jiasheng Si, et al.

The opaqueness of multi-hop fact verification models imposes imperative requirements for explainability. One feasible approach is to extract rationales, a subset of the input whose removal causes prediction performance to drop dramatically. Although explainable, most rationale extraction methods for multi-hop fact verification explore the semantic information within each piece of evidence individually, ignoring the topological interaction among different pieces of evidence. Intuitively, a faithful rationale carries complementary information that allows other rationales to be extracted through the multi-hop reasoning process. To tackle these disadvantages, we cast explainable multi-hop fact verification as subgraph extraction, which is solved with a graph convolutional network (GCN) equipped with salience-aware graph learning. Specifically, the GCN incorporates the topological interaction among multiple pieces of evidence to learn evidence representations. Meanwhile, to alleviate the influence of noisy evidence, a salience-aware graph perturbation is introduced into the message passing of the GCN. Moreover, a multi-task model with three diagnostic properties of rationales is carefully designed to improve the quality of explanations without any explicit rationale annotations. Experimental results on the FEVEROUS benchmark show significant gains over previous state-of-the-art methods for both rationale extraction and fact verification.
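To make the idea of salience-aware message passing concrete, the following is a minimal sketch (not the authors' implementation): a single GCN layer over claim/evidence nodes whose adjacency is perturbed by learned per-node salience scores, so that messages from noisy evidence are down-weighted. The class and module names (SalienceGCNLayer, salience_mlp) and the exact form of the perturbation are assumptions for illustration only.

```python
# Sketch: GCN layer with salience-aware graph perturbation (assumed form, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SalienceGCNLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        # Scores how salient each evidence node is; noisy evidence gets a low weight.
        self.salience_mlp = nn.Sequential(nn.Linear(in_dim, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (num_nodes, in_dim) evidence representations
        # adj: (num_nodes, num_nodes) adjacency over claim/evidence nodes
        salience = self.salience_mlp(x)                  # (num_nodes, 1)
        # Scale each edge by the salience of the sending node, so messages
        # originating from noisy evidence contribute less during aggregation.
        perturbed_adj = adj * salience.transpose(0, 1)   # broadcast over columns
        # Row-normalize so each node aggregates a weighted average of its neighbors.
        perturbed_adj = perturbed_adj / perturbed_adj.sum(dim=-1, keepdim=True).clamp(min=1e-8)
        return F.relu(self.linear(perturbed_adj @ x))


# Toy usage: 5 evidence nodes with 64-dim features on a fully connected graph.
if __name__ == "__main__":
    nodes = torch.randn(5, 64)
    adj = torch.ones(5, 5)
    layer = SalienceGCNLayer(64, 128)
    print(layer(nodes, adj).shape)  # torch.Size([5, 128])
```

In the paper's full model, such evidence representations would feed both the rationale (subgraph) extraction and the verdict prediction heads of the multi-task objective; this sketch covers only the salience-weighted aggregation step.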


