Audiovisual Speech Synthesis using Tacotron2

08/03/2020
by Ahmed Hussen Abdelaziz, et al.

Audiovisual speech synthesis is the problem of synthesizing a talking face while maximizing the coherence of the acoustic and visual speech. In this paper, we propose and compare two audiovisual speech synthesis systems for 3D face models. The first, AVTacotron2, is an end-to-end text-to-audiovisual speech synthesizer based on the Tacotron2 architecture. AVTacotron2 converts a sequence of phonemes representing the sentence to be synthesized into a sequence of acoustic features and the corresponding controllers of a face model. The output acoustic features condition a WaveRNN vocoder that reconstructs the speech waveform, and the output facial controllers are used to generate the corresponding video of the talking face. The second system is modular: acoustic speech is first synthesized from text using a traditional Tacotron2, and the reconstructed speech signal then drives the facial controls of the face model through an independently trained audio-to-facial-animation neural network. We further condition both the end-to-end and modular approaches on emotion embeddings that encode the prosody required to generate emotional audiovisual speech. We analyze the performance of the two systems and compare them to ground-truth videos using subjective evaluation tests. The end-to-end and modular systems synthesize close to human-like audiovisual speech, with mean opinion scores (MOS) of 4.1 and 3.9, respectively, compared to a MOS of 4.1 for ground truth generated from professionally recorded videos. While the end-to-end system achieves better overall quality, the modular approach is more flexible, and the qualities of its acoustic and visual speech synthesis are almost independent of each other.
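The abstract describes the end-to-end architecture only at a high level, so the following is a minimal, illustrative PyTorch sketch of its central idea: a shared Tacotron2-style decoder state is projected through two parallel output heads, one producing acoustic features for the WaveRNN vocoder and one producing face-model controllers, with an emotion embedding concatenated in to condition prosody. All names and dimensions here (AudiovisualHeads, N_FACE_CTRL, EMOTION_DIM, and so on) are assumptions for illustration, not details taken from the paper.

# Illustrative sketch only; layer names and sizes are assumptions,
# not the architecture reported in the paper.
import torch
import torch.nn as nn

N_MELS = 80          # acoustic features per frame (typical Tacotron2 setting)
N_FACE_CTRL = 51     # face-model controllers per frame (illustrative)
EMOTION_DIM = 16     # emotion embedding size (illustrative)
DECODER_DIM = 1024   # decoder hidden size (typical Tacotron2 setting)

class AudiovisualHeads(nn.Module):
    """Projects a shared decoder state into acoustic and visual streams."""

    def __init__(self):
        super().__init__()
        in_dim = DECODER_DIM + EMOTION_DIM
        self.mel_head = nn.Linear(in_dim, N_MELS)        # -> WaveRNN vocoder
        self.face_head = nn.Linear(in_dim, N_FACE_CTRL)  # -> 3D face model
        self.stop_head = nn.Linear(in_dim, 1)            # end-of-utterance gate

    def forward(self, decoder_state, emotion_emb):
        # decoder_state: (batch, T, DECODER_DIM); emotion_emb: (batch, EMOTION_DIM)
        emo = emotion_emb.unsqueeze(1).expand(-1, decoder_state.size(1), -1)
        h = torch.cat([decoder_state, emo], dim=-1)
        return self.mel_head(h), self.face_head(h), self.stop_head(h)

heads = AudiovisualHeads()
state = torch.randn(2, 100, DECODER_DIM)   # 100 decoder steps for 2 utterances
emotion = torch.randn(2, EMOTION_DIM)
mels, face_ctrl, stop = heads(state, emotion)
print(mels.shape, face_ctrl.shape)         # (2, 100, 80) and (2, 100, 51)

The modular alternative splits this into two networks: a standard Tacotron2 produces the acoustic features alone, and a separately trained audio-to-facial-animation model maps the synthesized speech to the face controllers. That decoupling is what makes the modular system's acoustic and visual quality largely independent, at the cost of the joint optimization the end-to-end model enjoys.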

Related research

06/25/2018
EMPHASIS: An Emotional Phoneme-based Acoustic Model for Speech Synthesis System
We present EMPHASIS, an emotional phoneme-based acoustic model for speec...

04/06/2020
Vocoder-Based Speech Synthesis from Silent Videos
Both acoustic and visual information influence human perception of speec...

04/03/2022
On incorporating social speaker characteristics in synthetic speech
In our previous work, we derived the acoustic features, that contribute ...

08/30/2019
Maximizing Mutual Information for Tacotron
End-to-end speech synthesis method such as Tacotron, Tacotron2 and Trans...

09/04/2019
DurIAN: Duration Informed Attention Network For Multimodal Synthesis
In this paper, we present a generic and robust multimodal synthesis syst...

03/09/2023
FaceXHuBERT: Text-less Speech-driven E(X)pressive 3D Facial Animation Synthesis Using Self-Supervised Speech Representation Learning
This paper presents FaceXHuBERT, a text-less speech-driven 3D facial ani...

08/01/2017
Improved Speech Reconstruction from Silent Video
Speechreading is the task of inferring phonetic information from visuall...
