Investigating Differences in Crowdsourced News Credibility Assessment: Raters, Tasks, and Expert Criteria

08/21/2020
by Md Momen Bhuiyan, et al.

Misinformation about critical issues such as climate change and vaccine safety is often amplified on online social and search platforms. Crowdsourcing content credibility assessment to laypeople has been proposed as one strategy to combat misinformation by attempting to replicate the assessments of experts at scale. In this work, we investigate news credibility assessments by crowds versus experts to understand when and how their ratings differ. We gather a dataset of over 4,000 credibility assessments from two crowd groups (journalism students and Upwork workers) and two expert groups (journalists and scientists) on a varied set of 50 news articles related to climate science, a topic with a widespread disconnect between public opinion and expert consensus. Examining the ratings, we find differences in performance due to the makeup of the crowd, such as rater demographics and political leaning, as well as the scope of the tasks the crowd is assigned to rate, such as the genre of the article and the partisanship of the publication. Finally, we find differences between expert assessments due to the differing criteria that journalism versus science experts apply. These differences may contribute to crowd discrepancies, but they also suggest a way to reduce the gap: designing crowd tasks tailored to specific expert criteria. From these findings, we outline future research directions for designing crowd processes tailored to specific crowds and types of content.
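The abstract does not specify how crowd and expert ratings are aggregated or compared. As a rough illustration of the kind of crowd-versus-expert comparison described above, the sketch below aggregates per-article ratings by group mean and measures rank agreement with a Spearman correlation. The 1-5 rating scale, group sizes, and the randomly generated ratings are assumptions for illustration only, not the study's actual data or method.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical ratings: rows = articles, columns = raters.
# Credibility on an assumed 1-5 scale; the paper's scale may differ.
rng = np.random.default_rng(0)
crowd = rng.integers(1, 6, size=(50, 20))    # e.g., 20 crowd raters
experts = rng.integers(1, 6, size=(50, 3))   # e.g., 3 expert raters

# Aggregate each group by its per-article mean rating.
crowd_mean = crowd.mean(axis=1)
expert_mean = experts.mean(axis=1)

# Rank correlation between the two aggregates: a higher rho means
# the crowd's ordering of articles tracks the experts' ordering.
rho, p = spearmanr(crowd_mean, expert_mean)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```

Subgroup comparisons like those in the paper (e.g., by rater political leaning or by article genre) would amount to recomputing the same correlation on the corresponding slices of the rating matrix.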

