EmoSpeech: Guiding FastSpeech2 Towards Emotional Text to Speech

06/28/2023
by Daria Diatlova, et al.

State-of-the-art speech synthesis models aim to get as close as possible to the human voice, so modelling emotions is an essential part of Text-To-Speech (TTS) research. In our work, we selected FastSpeech2 as the starting point and proposed a series of modifications for synthesizing emotional speech. According to both automatic and human evaluation, our model, EmoSpeech, surpasses existing models in MOS and in emotion recognition accuracy on generated speech. We provide a detailed ablation study of every extension to the FastSpeech2 architecture that forms EmoSpeech. Emotions are distributed unevenly across a text, and handling this is crucial for the perceived quality and intonation of synthesized speech. Our model addresses this with a conditioning mechanism that lets an emotion contribute to each phone with a varying intensity. Human assessment indicates that the proposed modifications generate audio with higher MOS and greater emotional expressiveness.
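The idea of letting an emotion contribute to each phone with a varying intensity can be sketched as a simple gating mechanism. The function below is an illustrative assumption, not the paper's actual conditioning module: it scores each phone's hidden state against an emotion embedding and adds the embedding back with a per-phone sigmoid gate.

```python
import numpy as np

def condition_on_emotion(phone_states, emotion_emb):
    """Illustrative sketch: add an emotion embedding to each phone state
    with a per-phone intensity gate (NOT the paper's actual module)."""
    d = phone_states.shape[-1]
    # Relevance score per phone: how strongly this phone should express the emotion.
    scores = phone_states @ emotion_emb / np.sqrt(d)      # shape (T,)
    # Sigmoid gate in (0, 1): each phone receives the emotion at its own intensity.
    intensity = 1.0 / (1.0 + np.exp(-scores))             # shape (T,)
    return phone_states + intensity[:, None] * emotion_emb

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))   # 5 phone hidden states, hidden dim 8
e = rng.normal(size=8)        # hypothetical emotion embedding, e.g. "happy"
out = condition_on_emotion(H, e)
```

In a trained model the score would come from learned projections rather than a raw dot product, but the shape of the computation is the same: one scalar intensity per phone, so emphasis varies across the utterance instead of being applied uniformly.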


Related research

06/01/2023  EmoMix: Emotion Mixing via Diffusion Models for Emotional Speech Synthesis
06/29/2016  Penambahan emosi menggunakan metode manipulasi prosodi untuk sistem text to speech bahasa Indonesia (Adding emotion using a prosody manipulation method for an Indonesian text-to-speech system)
08/30/2018  MES-P: an Emotional Tonal Speech Dataset in Mandarin Chinese with Distal and Proximal Labels
05/10/2022  Bridging the prosody GAP: Genetic Algorithm with People to efficiently sample emotional prosody
11/11/2022  Continuous Emotional Intensity Controllable Speech Synthesis using Semi-supervised Learning
03/02/2022  U-Singer: Multi-Singer Singing Voice Synthesizer that Controls Emotional Intensity
04/20/2023  Emotional Expression Detection in Spoken Language Employing Machine Learning Algorithms
