Extractive and Abstractive Explanations for Fact-Checking and Evaluation of News

04/27/2021
by Ashkan Kazemi, et al.

In this paper, we explore the construction of natural language explanations for news claims, with the goal of assisting fact-checking and news evaluation applications. We experiment with two methods: (1) an extractive method based on Biased TextRank, a resource-efficient unsupervised graph-based algorithm for content extraction; and (2) an abstractive method based on the GPT-2 language model. We perform comparative evaluations on two misinformation datasets in the political and health news domains, and find that the extractive method shows the most promise.
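To make the extractive method concrete, below is a minimal sketch of a Biased TextRank-style ranker. This is not the authors' implementation: it substitutes TF-IDF cosine similarity for the sentence representations used in the paper, and the function name biased_textrank and its parameters (damping, top_k) are illustrative. What it demonstrates is the core idea: run PageRank over a sentence-similarity graph with a teleport (bias) distribution weighted toward sentences similar to the claim.

    import networkx as nx
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def biased_textrank(sentences, claim, damping=0.85, top_k=3):
        """Rank sentences with a claim-biased random walk; return the top_k."""
        # Represent the claim and all sentences in a shared TF-IDF space.
        vectorizer = TfidfVectorizer()
        vectors = vectorizer.fit_transform([claim] + sentences)
        claim_vec, sent_vecs = vectors[0], vectors[1:]

        # Graph nodes are sentences; edge weights are pairwise similarities.
        sim = cosine_similarity(sent_vecs)
        n = len(sentences)
        graph = nx.Graph()
        graph.add_nodes_from(range(n))
        for i in range(n):
            for j in range(i + 1, n):
                if sim[i, j] > 0:
                    graph.add_edge(i, j, weight=float(sim[i, j]))

        # The "bias": the random walk teleports preferentially to sentences
        # similar to the claim (a small epsilon keeps the distribution valid).
        bias = cosine_similarity(claim_vec, sent_vecs)[0]
        personalization = {i: float(bias[i]) + 1e-6 for i in range(n)}

        scores = nx.pagerank(graph, alpha=damping,
                             personalization=personalization, weight="weight")
        top = sorted(scores, key=scores.get, reverse=True)[:top_k]
        return [sentences[i] for i in sorted(top)]  # keep document order

    # Hypothetical usage: extract a three-sentence explanation for a claim.
    # explanation = biased_textrank(article_sentences, claim)

With a uniform teleport distribution this reduces to plain TextRank; concentrating the distribution on claim-similar sentences is what steers the ranking toward explanation-relevant content.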


Related research

- 08/10/2020: Can We Spot the "Fake News" Before It Was Even Written?
  Given the recent proliferation of disinformation online, there has been ...

- 12/13/2021: Generating Fluent Fact Checking Explanations with Unsupervised Post-Editing
  Fact-checking systems have become important tools to verify fake and mis...

- 08/28/2023: Helping Fact-Checkers Identify Fake News Stories Shared through Images on WhatsApp
  WhatsApp has introduced a novel avenue for smartphone users to engage wi...

- 11/02/2020: Biased TextRank: Unsupervised Graph-Based Content Extraction
  We introduce Biased TextRank, a graph-based content extraction method in...

- 08/07/2023: XAI in Automated Fact-Checking? The Benefits Are Modest And There's No One-Explanation-Fits-All
  Fact-checking is a popular countermeasure against misinformation but the...

- 07/08/2019: CobWeb: A Research Prototype for Exploring User Bias in Political Fact-Checking
  The effect of user bias in fact-checking has not been explored extensive...
