Claim-Dissector: An Interpretable Fact-Checking System with Joint Re-ranking and Veracity Prediction

07/28/2022
by Martin Fajcik, et al.

We present Claim-Dissector: a novel latent-variable model for fact-checking and fact-analysis which, given a claim and a set of retrieved provenances, jointly learns (i) which provenances are relevant to the claim and (ii) the veracity of the claim. We propose to disentangle the per-provenance relevance probability from its contribution to the final veracity probability in an interpretable way: the final veracity probability is proportional to a linear ensemble of per-provenance relevance probabilities. This makes it possible to identify clearly which sources contribute to the final probability, and to what extent. We show that our system achieves results on the FEVER dataset comparable to state-of-the-art two-stage systems typically used in traditional fact-checking pipelines, while often using significantly fewer parameters and less computation. Our analysis further shows that the proposed approach learns not only which provenances are relevant, but also which provenances support and which refute the claim, without direct supervision. This not only adds interpretability, but also allows claims with conflicting evidence to be detected automatically. Furthermore, we study whether our model can learn fine-grained relevance cues while using only coarse-grained supervision. We show that our model achieves competitive sentence recall while using only paragraph-level relevance supervision. Finally, moving towards the finest granularity of relevance, we show that our framework can identify relevance at the token level. To this end, we present a new benchmark focusing on token-level interpretability: humans annotate the tokens in relevant provenances that they considered essential when making their judgement, and we measure how similar these annotations are to the tokens our model focuses on. Our code and dataset will be released online.
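To make the decomposition concrete, the sketch below shows one way a veracity distribution proportional to a linear ensemble of per-provenance relevance probabilities could be computed. This is a minimal illustration under our own assumptions; the function name, the sigmoid relevance head, and the non-negative per-class weight matrix are hypothetical and not the paper's actual architecture.

```python
import torch

def veracity_from_relevance(relevance_logits, class_weights):
    # relevance_logits: (N,) one score per retrieved provenance (assumed shape)
    # class_weights:    (N, C) non-negative contribution of each provenance towards
    #                   each veracity class, e.g. SUPPORTS / REFUTES / NOT ENOUGH INFO
    relevance = torch.sigmoid(relevance_logits)   # per-provenance relevance probabilities
    ensemble = relevance @ class_weights          # (C,) linear ensemble of relevance probabilities
    veracity = ensemble / ensemble.sum()          # veracity distribution proportional to the ensemble
    return relevance, veracity

# Example: 6 provenances, 3 veracity classes
relevance, veracity = veracity_from_relevance(torch.randn(6), torch.rand(6, 3))
```

In such a factorization, the contribution of provenance i to class c is simply relevance[i] * class_weights[i, c], which is what lets the influence of each source on the final verdict be read off directly.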

Related research

Time-Aware Evidence Ranking for Fact-Checking (09/10/2020)
Truth can vary over time. Therefore, fact-checking decisions on claim ve...

WhatTheWikiFact: Fact-Checking Claims Against Wikipedia (04/16/2021)
The rise of Internet has made it a major source of information. Unfortun...

Automated Fact Checking in the News Room (04/03/2019)
Fact checking is an essential task in journalism; its importance has bee...

Focusing on Relevant Responses for Multi-modal Rumor Detection (06/18/2023)
In the absence of an authoritative statement about a rumor, people may e...

A Multi-Level Attention Model for Evidence-Based Fact Checking (06/02/2021)
Evidence-based fact checking aims to verify the truthfulness of a claim ...

Varifocal Question Generation for Fact-checking (10/22/2022)
Fact-checking requires retrieving evidence related to a claim under inve...

Tribrid: Stance Classification with Neural Inconsistency Detection (09/14/2021)
We study the problem of performing automatic stance classification on so...
