Application of Just-Noticeable Difference in Quality as Environment Suitability Test for Crowdsourcing Speech Quality Assessment Task

04/11/2020
by Babak Naderi, et al.

Crowdsourcing micro-task platforms facilitate subjective media quality assessment by providing access to a highly scalable, geographically distributed, and demographically diverse pool of crowd workers. These workers participate in the experiment remotely, from their own working environment and using their own hardware. In the case of speech quality assessment, preliminary work showed that environmental noise at the listener's side and the listening device (loudspeaker or headphone) significantly affect the perceived quality, and consequently the reliability and validity of subjective ratings. As a consequence, ITU-T Rec. P.808 specifies requirements for the listening environment of crowd workers assessing speech quality. In this paper, we propose a new Just-Noticeable Difference in Quality (JNDQ) test as a remote screening method for assessing the suitability of a work environment for participating in speech quality assessment tasks. In a laboratory experiment, participants performed this JNDQ test with different listening devices in different listening environments, including a silent room according to ITU-T Rec. P.800 and a simulated background noise scenario. The results show a significant impact of both the environment and the listening device on the JNDQ threshold. Thus, the combination of listening device and background noise needs to be screened in a crowdsourced speech quality test. We propose a minimum threshold on our JNDQ test as an easily applicable screening method for this purpose.
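The abstract does not spell out the JNDQ measurement procedure, so the following is only an illustrative sketch of how such a threshold test and screening rule could look, assuming a generic 2-down/1-up adaptive staircase over a hypothetical scalar degradation level (e.g., in dB). The `run_trial` callback, the step sizes, and the screening threshold value are all placeholders, not the paper's protocol.

```python
"""Illustrative sketch (not the paper's protocol): estimating a listener's
just-noticeable difference in quality (JNDQ) with a 2-down/1-up adaptive
staircase, then applying a pass/fail screening threshold."""

from statistics import mean
from typing import Callable


def estimate_jndq(
    run_trial: Callable[[float], bool],  # hypothetical: True if the listener detects the degradation
    start_level: float = 12.0,           # initial degradation level (placeholder dB value)
    step: float = 2.0,                   # staircase step size (placeholder)
    max_reversals: int = 8,
) -> float:
    """2-down/1-up staircase: the level decreases after two consecutive
    detections and increases after any miss, converging near the 70.7%
    detection point. Returns the mean level over the final reversals."""
    level, correct_streak, direction = start_level, 0, 0
    reversals: list[float] = []
    while len(reversals) < max_reversals:
        if run_trial(level):
            correct_streak += 1
            if correct_streak == 2:          # two in a row -> make the difference smaller
                correct_streak = 0
                if direction == +1:          # direction flipped: record a reversal
                    reversals.append(level)
                direction = -1
                level = max(level - step, 0.0)
        else:
            correct_streak = 0
            if direction == -1:              # direction flipped: record a reversal
                reversals.append(level)
            direction = +1
            level += step
    return mean(reversals[-6:])


def environment_is_suitable(jndq: float, threshold: float = 8.0) -> bool:
    """Screening rule: pass only if the worker's JNDQ is at or below a
    minimum threshold (placeholder value; the paper derives its own)."""
    return jndq <= threshold
```

In the paper's setting, the pass/fail decision would use the empirically derived minimum threshold rather than the placeholder above, and the trial stimuli would be speech samples degraded at controlled quality levels.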
