ExClaim: Explainable Neural Claim Verification Using Rationalization

01/21/2023
by Sai Gurrapu, et al.

With the advent of deep learning, text-generation language models have improved dramatically and can now produce text nearly indistinguishable from human writing. This enables rampant misinformation, since content can be created cheaply and distributed quickly. Automated claim verification methods exist to validate claims, but they lack foundational data and often rely on mainstream news as evidence sources, which can be strongly biased toward a specific agenda. Current claim verification methods use deep neural networks and complex algorithms to achieve high classification accuracy, but at the expense of model explainability: the models are black boxes whose decision-making process, and the steps taken to arrive at a final prediction, are obfuscated from the user. We introduce ExClaim, a novel approach that aims to provide an explainable claim verification system with foundational evidence. Inspired by the legal system, ExClaim leverages rationalization to produce a verdict for the claim and justifies that verdict with a natural language explanation (rationale) describing the model's decision-making process. ExClaim treats verdict classification as a question-answering problem and achieves an F1 score of 0.93. It also provides explanations for its subtasks to justify intermediate outcomes. Statistical and Explainable AI (XAI) evaluations are conducted to ensure valid and trustworthy results. Ensuring that claim verification systems are assured, rational, and explainable is an essential step toward improving human-AI trust and the accessibility of black-box systems.
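The abstract above outlines two core components: a verdict classifier over a claim-evidence pair, and a natural language rationale that justifies the verdict. Below is a minimal sketch of such a rationalized pipeline, assuming off-the-shelf Hugging Face models (roberta-large-mnli as a stand-in verdict classifier and google/flan-t5-base as a stand-in rationale generator) and an illustrative prompt format; these model choices, label mappings, and prompts are assumptions for illustration, not the components ExClaim actually uses.

```python
# Sketch of a rationalized claim-verification pipeline: predict a verdict
# for a (claim, evidence) pair, then generate a natural-language rationale
# that justifies it. All model/prompt choices here are illustrative
# assumptions, not ExClaim's actual architecture.
from transformers import pipeline

# NLI-style classifier stands in for the verdict module: given evidence
# (premise) and claim (hypothesis), it predicts entailment/contradiction/
# neutral, which we map onto supported/refuted/not-enough-info verdicts.
verdict_model = pipeline("text-classification", model="roberta-large-mnli")

# A general seq2seq model stands in for the rationale generator that
# explains the verdict in natural language.
rationale_model = pipeline("text2text-generation", model="google/flan-t5-base")

LABEL_TO_VERDICT = {
    "ENTAILMENT": "supported",
    "CONTRADICTION": "refuted",
    "NEUTRAL": "not enough info",
}

def verify(claim: str, evidence: str) -> dict:
    # RoBERTa encodes sentence pairs as "<s> A </s></s> B </s>", so the
    # double separator feeds (evidence, claim) as a premise-hypothesis pair.
    pred = verdict_model(f"{evidence} </s></s> {claim}")[0]
    verdict = LABEL_TO_VERDICT[pred["label"]]

    # Ask the generator to justify the verdict: this free-text output
    # plays the role of the rationale.
    prompt = (
        f"Claim: {claim}\nEvidence: {evidence}\n"
        f"The claim is {verdict}. Explain why in one sentence."
    )
    rationale = rationale_model(prompt, max_new_tokens=60)[0]["generated_text"]
    return {"verdict": verdict, "confidence": pred["score"], "rationale": rationale}

print(verify(
    "The Eiffel Tower is in Berlin.",
    "The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
))
```

Framing verification as entailment over (evidence, claim) pairs is one common approximation of the paper's question-answer formulation; the pairing of a discrete verdict with a generated free-text justification mirrors the verdict-plus-rationale structure ExClaim borrows from the legal system.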

Related research:

- DTCA: Decision Tree-based Co-Attention Networks for Explainable Claim Verification (04/28/2020)
- Towards Explainable Fact Checking (08/23/2021)
- Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence (03/15/2021)
- ProoFVer: Natural Logic Theorem Proving for Fact Verification (08/25/2021)
- Explaining the Deep Natural Language Processing by Mining Textual Interpretable Features (06/12/2021)
- SIDU: Similarity Difference and Uniqueness Method for Explainable AI (06/04/2020)
- Natural Language Deduction with Incomplete Information (11/01/2022)
