Comparison of Speech Representations for Automatic Quality Estimation in Multi-Speaker Text-to-Speech Synthesis

02/28/2020
by   Jennifer Williams, et al.

We aim to characterize how different speakers contribute to the perceived output quality of multi-speaker Text-to-Speech (TTS) synthesis. We automatically rate the quality of TTS using a neural network (NN) trained on human mean opinion score (MOS) ratings. First, we train and evaluate our NN model on 13 different TTS and voice conversion (VC) systems from the ASVspoof 2019 Logical Access (LA) Dataset. Since it is not known how best to represent speech for this task, we compare 8 different representations alongside MOSNet frame-based features. Our representations include image-based spectrogram features and x-vector embeddings that explicitly model different types of noise, such as T60 reverberation time. Our NN predicts MOS with a high correlation to human judgments, and we report both prediction correlation and error. A key finding is that the quality achieved for certain speakers seems consistent, regardless of the TTS or VC system. It is widely accepted that some speakers yield higher-quality TTS systems than others: our method provides an automatic way to identify such speakers. Finally, to see whether our quality prediction models generalize, we predict quality scores for synthetic speech from a separate multi-speaker TTS system trained on LibriTTS data, and conduct our own MOS listening test to compare human ratings with our NN predictions.
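The abstract reports prediction correlation and error against human MOS ratings. As a minimal sketch of how such an evaluation is typically computed (the rating values below are hypothetical, not from the paper), Pearson correlation and mean squared error between predicted and human MOS can be obtained with NumPy:

```python
import numpy as np

# Hypothetical human MOS ratings and NN predictions for five utterances.
human_mos = np.array([3.2, 4.1, 2.8, 3.9, 4.5])
predicted_mos = np.array([3.0, 4.3, 2.5, 3.7, 4.6])

# Pearson correlation: how well predictions track the ranking/trend
# of human judgments.
pearson_r = np.corrcoef(human_mos, predicted_mos)[0, 1]

# Mean squared error: how close predictions are in absolute terms.
mse = np.mean((human_mos - predicted_mos) ** 2)

print(f"Pearson r: {pearson_r:.3f}, MSE: {mse:.3f}")
```

A high correlation with low error, as the paper reports for its model, indicates the NN both ranks and scores utterances similarly to human listeners.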


