Continuous Emotional Intensity Controllable Speech Synthesis using Semi-supervised Learning

11/11/2022
by   Yoori Oh, et al.

With the rapid development of speech synthesis systems, recent text-to-speech models can generate natural speech close to human speech, but they still have limitations in expressiveness. In particular, existing emotional speech synthesis models have demonstrated controllability by interpolating features with scaling parameters in an emotional latent space. However, the latent space these models produce makes continuous emotional intensity difficult to control because features such as emotion and speaker identity are entangled. In this paper, we propose a novel method to control the continuous intensity of emotions using semi-supervised learning. The model learns intermediate-intensity emotions from pseudo-labels generated from phoneme-level sequences of speech information. The embedding space built by the proposed model satisfies a uniform-grid geometry with an emotional basis. In addition, to improve the naturalness of intermediate emotional speech, a discriminator is applied to the generation of low-level features such as duration, pitch, and energy. Experimental results show that the proposed method is superior in both controllability and naturalness. Synthesized speech samples are available at https://tinyurl.com/34zaehh2
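The interpolation-with-scaling-parameter approach that the abstract contrasts against can be sketched as a linear blend between a neutral and an emotional embedding. This is a minimal illustration of that baseline idea, not the paper's semi-supervised method; the function name, embedding dimension, and example vectors are all hypothetical.

```python
import numpy as np

def interpolate_emotion(neutral: np.ndarray, emotional: np.ndarray,
                        alpha: float) -> np.ndarray:
    """Blend a neutral and an emotional embedding.

    alpha = 0.0 yields the neutral embedding; alpha = 1.0 yields
    the full-intensity emotional embedding. Values are clipped so
    the result stays on the segment between the two embeddings.
    (Hypothetical helper for illustration only.)
    """
    alpha = float(np.clip(alpha, 0.0, 1.0))
    return (1.0 - alpha) * neutral + alpha * emotional

# Hypothetical 4-dimensional embeddings for illustration.
neutral = np.zeros(4)
happy = np.array([1.0, 0.5, -0.2, 0.8])
half_intensity = interpolate_emotion(neutral, happy, 0.5)
```

As the abstract notes, such linear interpolation only controls intensity cleanly when the latent space is disentangled; with entangled emotion and speaker features, moving along this segment can also shift speaker characteristics.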

research
06/01/2023

EmoMix: Emotion Mixing via Diffusion Models for Emotional Speech Synthesis

There has been significant progress in emotional Text-To-Speech (TTS) sy...
research
05/05/2021

Exploring emotional prototypes in a high dimensional TTS latent space

Recent TTS systems are able to generate prosodically varied and realisti...
research
06/28/2023

EmoSpeech: Guiding FastSpeech2 Towards Emotional Text to Speech

State-of-the-art speech synthesis models try to get as close as possible...
research
03/02/2022

U-Singer: Multi-Singer Singing Voice Synthesizer that Controls Emotional Intensity

We propose U-Singer, the first multi-singer emotional singing voice synt...
research
09/22/2022

Controllable Accented Text-to-Speech Synthesis

Accented text-to-speech (TTS) synthesis seeks to generate speech with an...
research
07/30/2018

Deep Encoder-Decoder Models for Unsupervised Learning of Controllable Speech Synthesis

Generating versatile and appropriate synthetic speech requires control o...
research
10/27/2022

Explicit Intensity Control for Accented Text-to-speech

Accented text-to-speech (TTS) synthesis seeks to generate speech with an...
