SemEval-2017 Task 8: RumourEval: Determining rumour veracity and support for rumours

04/20/2017
by   Leon Derczynski, et al.

Media is full of false claims. Even Oxford Dictionaries named "post-truth" its Word of the Year for 2016. This makes it more important than ever to build systems that can identify the veracity of a story and the kind of discourse surrounding it. RumourEval is a SemEval shared task that aims to identify and handle rumours, and reactions to them, in text. We present an annotation scheme and a large dataset covering multiple topics, each with its own families of claims and replies; we use these to pose two concrete challenges, and we report the results achieved by participants on those challenges.


