Deep Robust Subjective Visual Property Prediction in Crowdsourcing

03/10/2019
by   Qianqian Xu, et al.

The problem of estimating subjective visual properties (SVPs) of images (e.g., shoe A is more comfortable than shoe B) is attracting increasing attention. Because such properties are highly subjective, different annotators often interpret absolute rating scales differently, so recent work instead collects pairwise comparisons via crowdsourcing platforms. Crowdsourced data, however, usually contain outliers, which motivates a robust model for learning SVPs from noisy annotations. In this paper, we construct a deep SVP prediction model that not only detects annotation outliers more reliably but also enables learning from extremely sparse annotations. Specifically, we build a comparison multi-graph from the collected annotations, in which different labeling results correspond to edges with different directions between two vertices. We then propose a generalized deep probabilistic framework consisting of an SVP prediction module and an outlier modeling module that work collaboratively and are optimized jointly. Extensive experiments on several benchmark datasets demonstrate that our new approach achieves promising results.
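To make the core idea concrete, the following is a minimal sketch (not the paper's exact model) of robust score estimation from pairwise comparisons: each item i carries a latent score s_i, each annotation (i, j) carries an outlier term g_ij, and an annotation "i beats j" is modeled as sigmoid(s_i − s_j + g_ij). An L1 penalty with a proximal (soft-thresholding) step keeps most outlier terms at exactly zero, so only genuinely inconsistent annotations are absorbed as outliers. All function names and hyperparameters here are illustrative assumptions.

```python
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def fit_robust_scores(comparisons, n_items, lam=0.5, lr=0.1, epochs=500):
    """Fit latent scores s and sparse per-annotation outlier terms g.

    comparisons: list of (winner, loser) index pairs, one per annotation.
    lam: L1 strength on the outlier terms; lr: step size.
    """
    s = np.zeros(n_items)             # latent SVP scores
    g = np.zeros(len(comparisons))    # per-annotation outlier terms
    for _ in range(epochs):
        grad_s = np.zeros(n_items)
        grad_g = np.zeros(len(comparisons))
        for k, (i, j) in enumerate(comparisons):
            # probability that the annotation "i beats j" is observed
            p = sigmoid(s[i] - s[j] + g[k])
            grad_s[i] -= (1.0 - p)    # gradient of the negative log-likelihood
            grad_s[j] += (1.0 - p)
            grad_g[k] -= (1.0 - p)
        s -= lr * grad_s
        g -= lr * grad_g
        # proximal step: soft-threshold g so most outlier terms stay at zero
        g = np.sign(g) * np.maximum(np.abs(g) - lr * lam, 0.0)
        s -= s.mean()                 # remove translation invariance of scores
    return s, g


# Toy data: items 0 > 1 > 2, plus one contradictory (outlier) annotation (2, 0)
comps = [(0, 1), (0, 1), (1, 2), (1, 2), (0, 2), (2, 0)]
scores, outliers = fit_robust_scores(comps, n_items=3)
```

In this sketch the consistent annotations keep their outlier terms at zero, while the single contradictory comparison is explained by a nonzero outlier term rather than distorting the recovered score ordering. The paper's full framework replaces the fixed score vector with a deep network over image features, which is what allows prediction for unseen items and learning under extremely sparse annotations.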
