
ExClaim: Explainable Neural Claim Verification Using Rationalization

01/21/2023
by Sai Gurrapu, et al.
IEEE
Virginia Polytechnic Institute and State University

With the advent of deep learning, text generation language models have improved dramatically, producing text nearly indistinguishable from human-written text. This can lead to rampant misinformation because content can now be created cheaply and distributed quickly. Automated claim verification methods exist to validate claims, but they lack foundational data and often rely on mainstream news as evidence sources, which can be strongly biased toward a specific agenda. Current claim verification methods use deep neural network models and complex algorithms to achieve high classification accuracy, but at the expense of model explainability. The models are black boxes, and their decision-making process and the steps they take to arrive at a final prediction are hidden from the user. We introduce a novel claim verification approach, ExClaim, that attempts to provide an explainable claim verification system with foundational evidence. Inspired by the legal system, ExClaim leverages rationalization to provide a verdict for the claim and justifies the verdict through a natural language explanation (rationale) that describes the model's decision-making process. ExClaim treats the verdict classification task as a question-answer problem and achieves an F1 score of 0.93. It also provides explanations for its subtasks to justify the intermediate outcomes. Statistical and Explainable AI (XAI) evaluations are conducted to ensure valid and trustworthy outcomes. Ensuring claim verification systems are assured, rational, and explainable is an essential step toward improving Human-AI trust and the accessibility of black-box systems.
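To make the question-answer framing of verdict classification concrete, below is a minimal sketch (not the authors' code) of how a claim can be posed as a "question" against retrieved evidence as the "context", with a sequence-classification head producing the verdict. The checkpoint name and the three-way label set are assumptions for illustration; ExClaim's actual architecture, labels, and weights are not given in this abstract.

```python
# Hedged sketch: QA-style claim verification with a fine-tuned transformer.
# MODEL_NAME is a hypothetical fine-tuned checkpoint, and LABELS is an assumed
# verdict scheme (modeled on common fact-verification datasets like FEVER).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "your-finetuned-verdict-model"  # placeholder, not a real checkpoint
LABELS = ["SUPPORTED", "REFUTED", "NOT ENOUGH INFO"]  # assumed label set

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=len(LABELS)
)

def verify(claim: str, evidence: str) -> str:
    """Pair the claim (question) with evidence (context) and return a verdict."""
    # The tokenizer encodes the two segments as a single sentence pair,
    # mirroring the question/context input format of QA models.
    inputs = tokenizer(claim, evidence, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

print(verify(
    "The Eiffel Tower is located in Berlin.",
    "The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
))  # expected verdict under the assumed labels: REFUTED
```

A rationalization layer, as the abstract describes, would additionally generate a natural language explanation alongside this verdict; that generation step is omitted here since the abstract does not specify how it is implemented.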
