Comparison of remote experiments using crowdsourcing and laboratory experiments on speech intelligibility

04/17/2021 · by Ayako Yamamoto, et al.

Many subjective experiments have been performed to develop objective speech intelligibility measures, but the novel coronavirus outbreak has made it very difficult to conduct experiments in a laboratory. One solution is to perform remote testing using crowdsourcing; however, because the listening conditions cannot be controlled, it is unclear whether the results are entirely reliable. In this study, we compared speech intelligibility scores obtained in remote and laboratory experiments. The results showed that the mean and standard deviation (SD) of the speech reception threshold (SRT) were higher in the remote experiments than in the laboratory experiments. However, the variation in the SRTs across the speech-enhancement conditions was similar in both settings, implying that remote testing results may be as useful as laboratory experiments for developing an objective measure. We also show that practice-session scores correlate with the SRT values. Because practice sessions precede the main tests, these scores provide a priori information that can be used for data screening to reduce the variability of the SRT distribution.
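The screening idea described above can be sketched in a few lines: exclude listeners whose practice-session score falls below a threshold before computing SRT statistics. The function name, the example scores, and the threshold below are illustrative assumptions, not the authors' actual procedure.

```python
# Hypothetical sketch of practice-session-based data screening:
# keep only SRTs from listeners whose practice score meets a threshold.
# All data values and the threshold of 80% are illustrative assumptions.
from statistics import mean, stdev

def screen_listeners(practice_scores, srts, threshold):
    """Return the SRTs of listeners whose practice score >= threshold."""
    return [srt for score, srt in zip(practice_scores, srts) if score >= threshold]

# Illustrative data: practice scores (% correct) and SRTs (dB SNR)
practice = [95, 40, 88, 92, 35, 90]
srts = [-6.0, 2.5, -5.5, -6.2, 3.0, -5.8]

kept = screen_listeners(practice, srts, threshold=80)
print(f"all listeners:      SD = {stdev(srts):.2f} dB")
print(f"screened listeners: SD = {stdev(kept):.2f} dB")
```

In this toy example, dropping the two low-scoring listeners shrinks the SD of the SRT distribution, mirroring the screening effect the abstract suggests.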


