
Crowdsourcing in the Absence of Ground Truth -- A Case Study

06/17/2019
by   Ramya Srinivasan, et al.
FUJITSU

Crowdsourcing constitutes an important aspect of human-in-the-loop learning for researchers across multiple disciplines such as AI, HCI, and social science. While using crowdsourced data for subjective tasks is not new, eliciting useful insights from such data remains challenging due to a variety of factors: the difficulty of the task, the personal prejudices of the human evaluators, a lack of clarity in the questions, and so on. In this paper, we consider one such subjective evaluation task, namely estimating the experienced emotions of distressed individuals who are conversing with a human listener on an online coaching platform. We explore strategies for aggregating the evaluators' choices and show that a simple voting consensus is as effective as an optimal aggregation method for the task considered. Intrigued by how an objective assessment would compare to the evaluators' subjective judgments, we also designed a machine learning algorithm to perform the same task. Interestingly, we observed that a machine learning algorithm not explicitly modeled to characterize the evaluators' subjectivity is as reliable as human evaluation in assessing the most dominant experienced emotions.
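The voting consensus mentioned above can be sketched as a plurality vote over each item's crowd labels. This is a minimal illustration, not the paper's implementation; the item identifiers and emotion labels below are hypothetical examples.

```python
from collections import Counter

def majority_vote(labels_per_item):
    """Aggregate crowd annotations by simple plurality voting.

    labels_per_item: dict mapping item id -> list of labels,
    one label per evaluator.
    Returns a dict mapping item id -> most frequent label
    (ties broken by first occurrence, per Counter semantics).
    """
    return {
        item: Counter(labels).most_common(1)[0][0]
        for item, labels in labels_per_item.items()
    }

# Hypothetical emotion annotations from three evaluators per utterance
annotations = {
    "utt1": ["sadness", "sadness", "anger"],
    "utt2": ["anxiety", "anxiety", "anxiety"],
}
print(majority_vote(annotations))  # -> {'utt1': 'sadness', 'utt2': 'anxiety'}
```

More elaborate aggregation schemes weight evaluators by estimated reliability; the paper's finding is that, for this task, such schemes did not outperform the simple vote above.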

