
SemEval-2017 Task 8: RumourEval: Determining rumour veracity and support for rumours

by Leon Derczynski et al.
The University of Sheffield

Media is full of false claims. Oxford Dictionaries even named "post-truth" its word of the year for 2016. This makes it more important than ever to build systems that can identify the veracity of a story and the kind of discourse around it. RumourEval is a SemEval shared task that aims to identify and handle rumours, and reactions to them, in text. We present an annotation scheme and a large dataset covering multiple topics, each with its own families of claims and replies. We use these to pose two concrete challenges, and report the results achieved by participants on those challenges.



