Multi-speaker Emotional Text-to-speech Synthesizer

12/07/2021
by Sungjae Cho et al.

We present a methodology for training our multi-speaker emotional text-to-speech synthesizer, which can express 7 different emotions for each of 10 speakers. All silences are removed from the audio samples prior to training, which makes our model learn faster. Curriculum learning is applied to train our model efficiently. The model is first trained on a large single-speaker neutral dataset, then on neutral speech from all speakers, and finally on emotional speech from all speakers. In each stage, training samples from each speaker-emotion pair have an equal probability of appearing in a mini-batch. Through this procedure, our model can synthesize speech for all targeted speakers and emotions. Our synthesized audio sets are available on our web page.
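
To make the two concrete preprocessing and sampling steps easier to follow, here is a minimal sketch (not the authors' released code) of how silences might be stripped with librosa and how mini-batches could be drawn so that every speaker-emotion pair is equally likely. The data structure `utterances_by_pair`, the function names, and the `top_db` threshold are hypothetical illustrations, not details taken from the paper.

```python
# Minimal sketch, assuming hypothetical names; not the authors' implementation.
import random
import numpy as np
import librosa

def remove_silences(wav_path, top_db=40):
    """Load an utterance and concatenate its non-silent intervals,
    removing leading, trailing, and internal silences."""
    audio, sr = librosa.load(wav_path, sr=None)
    intervals = librosa.effects.split(audio, top_db=top_db)  # non-silent regions
    trimmed = np.concatenate([audio[start:end] for start, end in intervals])
    return trimmed, sr

def sample_minibatch(utterances_by_pair, batch_size):
    """Draw a mini-batch in which every (speaker, emotion) pair is equally
    likely, regardless of how many utterances each pair contains."""
    pairs = list(utterances_by_pair.keys())
    batch = []
    for _ in range(batch_size):
        speaker, emotion = random.choice(pairs)  # uniform over pairs
        batch.append(random.choice(utterances_by_pair[(speaker, emotion)]))
    return batch
```

The same sampling routine can be reused in each curriculum stage by swapping the pool of utterances: first the large single-speaker neutral set, then all-speaker neutral speech, and finally all-speaker emotional speech.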

