The Fact Extraction and VERification (FEVER) Shared Task

11/27/2018
by James Thorne, et al.

We present the results of the first Fact Extraction and VERification (FEVER) Shared Task. The task challenged participants to classify whether human-written factoid claims could be Supported or Refuted using evidence retrieved from Wikipedia. We received entries from 23 competing teams, 19 of which scored higher than the previously published baseline. The best performing system achieved a FEVER score of 64.21%. In this paper, we present a description of the shared task and a summary of the submitted systems, highlighting commonalities and innovations among the participants' approaches.
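The FEVER score used to rank systems jointly rewards classification and evidence retrieval: a claim counts as correct only when the predicted label matches the gold label and, for Supported or Refuted claims, the submitted evidence contains at least one complete gold evidence set. Below is a minimal Python sketch of that scoring rule, not the organizers' official scorer; the function names, dict fields, and the five-sentence evidence cap are illustrative assumptions.

from typing import Dict, List, Set, Tuple

# Evidence sentences are identified by (page_title, sentence_index) pairs.
Evidence = Tuple[str, int]

def instance_is_correct(pred_label: str,
                        gold_label: str,
                        pred_evidence: List[Evidence],
                        gold_evidence_sets: List[Set[Evidence]],
                        max_evidence: int = 5) -> bool:
    # The label must match for any credit at all.
    if pred_label != gold_label:
        return False
    # NotEnoughInfo claims carry no evidence requirement.
    if gold_label == "NOT ENOUGH INFO":
        return True
    # For Supported/Refuted claims, the first max_evidence predicted
    # sentences must fully cover at least one gold evidence set.
    retrieved = set(pred_evidence[:max_evidence])
    return any(gold_set <= retrieved for gold_set in gold_evidence_sets)

def fever_score(predictions: List[Dict]) -> float:
    # Fraction of claims judged correct under the rule above.
    correct = sum(
        instance_is_correct(p["pred_label"], p["gold_label"],
                            p["pred_evidence"], p["gold_evidence_sets"])
        for p in predictions
    )
    return correct / len(predictions)

Plain label accuracy would credit a correct verdict reached with the wrong evidence; the FEVER score does not, which is why it sits at or below label accuracy for every system.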


Related Research

Adversarial attacks against Fact Extraction and VERification (03/13/2019)
This paper describes a baseline for the second iteration of the Fact Ext...

Evidence-based Factual Error Correction (06/02/2021)
This paper introduces the task of factual error correction: performing e...

UKP-Athene: Multi-Sentence Textual Entailment for Claim Verification (09/03/2018)
The Fact Extraction and VERification (FEVER) shared task was launched to...

Zero-shot Fact Verification by Claim Generation (05/31/2021)
Neural models for automated fact verification have achieved promising re...

BERT for Evidence Retrieval and Claim Verification (10/07/2019)
Motivated by the promising performance of pre-trained language models, w...

Graph Reasoning with Context-Aware Linearization for Interpretable Fact Extraction and Verification (09/25/2021)
This paper presents an end-to-end system for fact extraction and verific...

IMCI: Integrate Multi-view Contextual Information for Fact Extraction and Verification (08/30/2022)
With the rapid development of automatic fake news detection technology, ...
