
More than Words: In-the-Wild Visually-Driven Prosody for Text-to-Speech

by   Michael Hassid, et al.

In this paper we present VDTTS, a Visually-Driven Text-to-Speech model. Motivated by dubbing, VDTTS takes advantage of video frames as an additional input alongside text, and generates speech that matches the video signal. We demonstrate how this allows VDTTS to, unlike plain TTS models, generate speech that not only has prosodic variations like natural pauses and pitch, but is also synchronized to the input video. Experimentally, we show our model produces well synchronized outputs, approaching the video-speech synchronization quality of the ground-truth, on several challenging benchmarks including "in-the-wild" content from VoxCeleb2. We encourage the reader to view the demo videos demonstrating video-speech synchronization, robustness to speaker ID swapping, and prosody.



1 Introduction

Post-sync, or dubbing (in the film industry), is the process of re-recording dialogue by the original actor in a controlled environment after filming to improve audio quality. Sometimes a replacement actor is used instead of the original actor when a different voice is desired, as with Darth Vader's character in Star Wars [nyt].

Work in the area of automatic audio-visual dubbing often approaches the problem of generating content with synchronized video and speech by (1) applying a text-to-speech (TTS) system to produce audio from text, then (2) modifying the frames so that the face matches the audio [yang_large-scale_2020]. The second part of this approach is particularly difficult, as it requires generation of photorealistic video across arbitrary filming conditions.

In contrast, we extend the TTS setting to input not only text, but also facial video frames, producing speech that matches the facial movements of the input video. The result is audio that is not only synchronized to the video but also retains the original prosody, including pauses and pitch changes that can be inferred from the video signal, providing a key piece in producing high-quality dubbed videos.

Figure 1: When provided with text and video frames of a speaker, VDTTS generates speech with prosody that matches the video signal.

In this work, we present VDTTS, a visually-driven TTS model. Given text and corresponding video frames of a speaker speaking, our model is trained to generate the corresponding speech (see Fig. 1). As opposed to standard visual speech recognition models, which focus on the mouth region [shillingford2018large], we provide the full face to avoid potentially excluding information pertinent to the speaker’s delivery. This gives the model enough information to generate speech which not only matches the video but also recovers aspects of prosody, such as timing and emotion. Despite not being explicitly trained to generate speech that is synchronized to the input video, the learned model still does so.

Our model is comprised of four main components. Text and video encoders process the inputs, followed by a multi-source attention mechanism that connects these to a decoder that produces mel-spectrograms. A vocoder then produces waveforms from the mel-spectrograms.
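The dataflow of these four components can be sketched shape-wise in a few lines. Everything here (function names, dimensions, the mean-pooled stand-in for attention) is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # assumed hidden dimension; the paper's actual sizes are in its Appendix B

def video_encoder(frames):            # (T_v, H, W, C) -> (T_v, D)
    return rng.standard_normal((frames.shape[0], D))

def text_encoder(phonemes):           # (T_p,) -> (T_p, D)
    return rng.standard_normal((len(phonemes), D))

def decoder_step(prev_mel, video_h, text_h, speaker_emb):
    """One autoregressive step: attend over both encoder outputs, then
    predict an 80-bin mel frame (mean-pooling stands in for attention)."""
    ctx_v = video_h.mean(axis=0)      # placeholder for the video attention context
    ctx_t = text_h.mean(axis=0)       # placeholder for the text attention context
    x = np.concatenate([ctx_v, ctx_t, speaker_emb])
    return np.tanh(x[:80])            # stand-in for the post-net mel prediction

frames = rng.standard_normal((30, 128, 128, 3))   # roughly 1 s of video
phonemes = list("helo")
spk = rng.standard_normal(256)                    # 256-dim speaker embedding
mel = decoder_step(np.zeros(80), video_encoder(frames), text_encoder(phonemes), spk)
assert mel.shape == (80,)
```

A vocoder would then map the sequence of such mel frames to a waveform.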

We evaluate the performance of our method on GRID [cooke2006audio] as well as on challenging in-the-wild videos from VoxCeleb2 [chung2018voxceleb2]. To validate our design choices and training process, we also present an ablation study of key components of our method, model architecture, and training procedure.

Demo videos demonstrating video-speech synchronization, robustness to speaker ID swapping, and prosody are available on the project page; we encourage readers to take a look.

Our main contributions are that we:

  • present and evaluate a novel visual TTS model, trained on a wide variety of open-domain YouTube videos;

  • show it achieves state-of-the-art video-speech synchronization on GRID and VoxCeleb2 when presented with arbitrary unseen speakers; and

  • demonstrate that our method recovers aspects of prosody such as pauses and pitch while producing natural, human-like speech.

Figure 2: The overall architecture of our model. Colors: inputs: yellow, trainable: blue, frozen: purple, output: orange.

2 Related work

Text-to-speech (TTS)

engines, which generate natural-sounding speech from text, have seen dazzling progress in recent years. Methods have shifted from parametric models towards increasingly end-to-end neural networks [oord2016wavenet, wang2017tacotron]. This shift enabled TTS models to generate speech that sounds as natural as professional human speech [jia2021png]. Most approaches consist of three main components: an encoder that converts the input text into a sequence of hidden representations, a decoder that produces acoustic representations such as mel-spectrograms from these, and finally a vocoder that constructs waveforms from the acoustic representations.

Some methods, including Tacotron and Tacotron 2, use an attention-based autoregressive approach [wang2017tacotron, shen2018natural, jia2018transfer]; follow-up work such as FastSpeech [ren2019fastspeech, ren2020fastspeech2], Non-Attentive Tacotron (NAT) [shen2020non, jia2021png] and Parallel Tacotron [elias2020parallel, elias2021parallel] often replaces recurrent neural networks with transformers.

Extensive research has been conducted on how to invert mel-spectrograms back into waveforms; since a mel-spectrogram is a compressed audio representation, it is not generally invertible. For example, the seminal work of griffin1984signal proposes a simple least-squares approach, while modern approaches train models to learn task-specific mappings that can capture more of the audio signal. These include WaveNet as applied to Tacotron 2 [shen2018natural], MelGAN [oord2016wavenet, kumar2019melgan], and more recent work such as WaveGlow [prenger2019waveglow], which trains a flow-based conditional generative model, DiffWave [kong2020diffwave], which proposes a probabilistic model for conditional and unconditional waveform generation, and WaveGrad [chen2020wavegrad], which makes use of data density gradients to generate waveforms. In our work, we use the fully-convolutional SoundStream vocoder [zeghidour2021soundstream].

TTS prosody control

skerry-ryan_towards_2018 define prosody as “the variation in speech signals excluding phonetics, speaker identity, and channel effects.” Standard TTS approaches tend to be trained to produce neutral speech, due to the difficulty of modeling prosody.

Great efforts have been made towards transferring or controlling the prosody of TTS audio. wang_style_2018 created a style embedding by using a multi-headed attention module between the encoded input audio sequence and a set of global style tokens (GSTs). This module was trained jointly with the Tacotron model using the mel-spectrogram reconstruction loss. At inference time, they construct the style embedding from the text to enable style control, or from other audio for style transfer.

A Variational Auto-Encoder (VAE) latent representation of speaking style was used by [zhang_learning_2019]. At inference time, they alter speaking style by manipulating the latent embedding, or by obtaining it from a reference audio. hsu_hierarchical_2018 used a VAE to create two levels of hierarchical latent variables, the first representing attribute groups and the second representing more specific attribute configurations. This setup allows fine-grained control of the generated prosody, including accent, speaking rate, etc.


Speech generation from silent video

In this setup, a silent video is presented to a model that tries to generate speech consistent with the mouth movements, without providing text. Vid2Speech [ephrat2017vid2speech] uses a convolutional neural network (CNN) that generates an acoustic feature for each frame of a silent video. Lipper [kumar2019lipper] uses a closeup video of lips and produces text and speech, while [mira2021end] generates speech directly without a vocoder. prajwal2020learning propose a speaker-specific lip-to-speech model.

| Dataset | Utts. | Hrs. | Vocab. | Speaker IDs | Source |
|---|---|---|---|---|---|
| GRID [cooke2006audio] | 34K | 43 | 51 | 34 | Studio |
| LRS2 [chung2017lipreadingsentences] | 47K | 29 | 18K | - | BBC |
| LRS3 [afouras2018lrs3] | 32K | 30 | 17K | 3.8K | TED/TEDx |
| VoxCeleb2 [chung2018voxceleb2] | 1M | 2442 | ~35K* | 6.1K | YouTube |
| LSVSR [shillingford2018large, yang_large-scale_2020] | 3M | 3130 | 127K | ~464K | YouTube |
Table 1: Audio-visual speech dataset size comparison in terms of number of utterances, hours, and vocabulary. Numbers are shown before processing; the resulting number of utterances we use for VoxCeleb2 and LSVSR are smaller. In yang_large-scale_2020, LSVSR is called MLVD. (*VoxCeleb2 lacks transcripts, so we use an English-only automated transcription model [park2020improved] to produce transcripts for training purposes, also used for vocabulary size measurement in this table.)

Datasets

For our task, we require triplets consisting of a facial video, the corresponding speech audio, and a text transcript. The video and text are used as model inputs, whereas the speech audio is used as ground-truth for metrics and loss computation.

GRID is a standard dataset filmed under consistent conditions [cooke2006audio]. LRW [chung2016lip] and LRS2 [chung2017lipreadingsentences] are based on high-quality BBC television content, and LRS3 [afouras2018lrs3] is based on TED talks; however, these datasets are restricted to academic use only. VoxCeleb2 [chung2018voxceleb2] and LSVSR [shillingford2018large, yang_large-scale_2020], being based on open-domain YouTube data, contain the widest range of people, types of content, and words. A comparison of dataset size appears in Table 1.

In this work, we adopt GRID as a standard benchmark, and VoxCeleb2 and LSVSR due to their greater difficulty.

Automated dubbing

A common approach to automated dubbing is to generate or modify the video frames to match a given clip of audio speech [yang_large-scale_2020, lahiri2021lipsync3d, suwajanakorn2017synthesizing, kumar2017obamanet, kr2019towards, song2020everybody, fried2019text, kim2019neural, jha2019cross]. This wide and active area of research uses approaches that vary from conditional video generation, to retrieval, to 3D models. Unlike this line of work, we start from a fixed video and generate audio instead.

Recent work in visual TTS uses both text and video frames to train a TTS model, much like our approach. Two concurrent works [lu2021visualtts, hu2021neural] take this approach, the former using GRID and the latter using just LRS2. Unlike our work, these approaches explicitly constrain output signal length and attention weights to encourage synchronization.

3 Method

In this section, we describe the architecture of the proposed model and depict its components. Full architectural and training details are given in Appendix B and Appendix C respectively.


Fig. 2 illustrates the overall architecture of the VDTTS model. As shown, and similarly to [ding2020textual], the architecture consists of (1) a video encoder, (2) a text encoder, (3) a speaker encoder, (4) an autoregressive decoder with a multi-source attention mechanism, and (5) a vocoder. Training follows [ding2020textual] in using its combined loss.

Let $T_v$ and $T_p$ be the lengths of the input video frame and phoneme sequences respectively. Let $W$, $H$ and $C$ be the width, height and number of channels of the frames, $d_p$ the dimension of the phoneme embeddings, and $\mathcal{P}$ the set of phonemes.

We begin with an input pair composed of a source video frame sequence $V \in \mathbb{R}^{T_v \times W \times H \times C}$ and a sequence of phonemes $P \in \mathcal{P}^{T_p}$.

The video encoder receives the frame sequence $V$ as input, produces a hidden representation for each frame, and then concatenates these representations, i.e.,

$$E^v = \mathrm{VideoEncoder}(V) \in \mathbb{R}^{T_v \times d},$$

where $d$ is the hidden dimension of the model.

Similarly, the text encoder receives the source phonemes $P$ and produces a hidden representation,

$$E^p = \mathrm{TextEncoder}(P) \in \mathbb{R}^{T_p \times d}.$$

The speaker encoder maps a speaker to a 256-dimensional speaker embedding,

$$s = \mathrm{SpeakerEncoder}(\mathrm{speaker}) \in \mathbb{R}^{256}.$$

The autoregressive decoder receives as input the two hidden representations $E^v$ and $E^p$ and the speaker embedding $s$, and predicts the mel-spectrogram of the synthesized speech using the attention context,

$$\hat{Y} = \mathrm{Decoder}(E^v, E^p, s).$$

Finally, the predicted mel-spectrogram is transformed to a waveform using a frozen pretrained neural vocoder [zeghidour2021soundstream].

Video encoder

Our video encoder is inspired by VGG3D as in [shillingford2018large]. However, unlike their work and similar lipreading work, we use a full face crop instead of a mouth-only crop to avoid potentially excluding information that could be pertinent to prosody, such as facial expressions.

Text encoder

Our text encoder is derived from the text encoder of Tacotron 2 [shen2018natural]. Each phoneme is first mapped into a learned embedding space. The sequence of phoneme embeddings is then passed through a stack of convolution layers and a Bi-LSTM layer.

Speaker encoder

In order to enable our model to handle a multi-speaker environment, we use a frozen, pretrained speaker embedding model [wan2018generalized]. When the speaker ID is provided in the dataset, as for GRID and VoxCeleb2, we generate embeddings per utterance and average over all utterances associated with the speaker, normalizing the result to unit norm. For LSVSR the speaker identity is unavailable, so we compute the embedding per utterance. At test time, while we could use an arbitrary speaker embedding, we use the average speaker embedding over the audio clips from the target speaker so that the voice matches the speaker for comparison purposes. We encourage the reader to refer to the project page, where example videos demonstrate how VDTTS performs when speaker voice embeddings are swapped between different speakers.
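The per-speaker embedding construction described above can be sketched as follows. The function name is ours; only the average-then-unit-normalize behavior comes from the text:

```python
import numpy as np

def speaker_embedding(utterance_embeddings):
    """Average per-utterance embeddings for one speaker and normalize the
    result to unit norm, as done for GRID and VoxCeleb2 (name is illustrative)."""
    avg = np.mean(utterance_embeddings, axis=0)
    return avg / np.linalg.norm(avg)

utts = np.random.default_rng(1).standard_normal((10, 256))  # 10 utterances, 256-dim
emb = speaker_embedding(utts)
assert np.isclose(np.linalg.norm(emb), 1.0)
```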


Decoder

Our RNN-based autoregressive decoder is similar to the one proposed by [shen2018natural], and consists of four parts: a pre-net, a fully connected network reprojecting the previous decoder output onto a lower dimension before it is used as input for future time steps; an attention module, in our case the multi-source attention discussed later; an LSTM core; and a post-net which predicts the final mel-spectrogram output.

The decoder receives as input the output sequences of the video encoder and the text encoder, as well as the speaker embedding produced by the speaker encoder, and generates a mel-spectrogram of the speech signal. In contrast to [shen2018natural], which does not support speaker voice embeddings, we concatenate them to the output of the pre-net to enable our model to be used in a multi-speaker environment.

Multi-source attention
Figure 3: The Multi Source Attention Mechanism.

A multi-source attention mechanism, similar to that of Textual Echo Cancellation [ding2020textual], allows selecting which of the outputs of the encoders are passed to the decoder at each timestep.

The multi-source attention, as presented in Fig. 3, has an individual attention mechanism for each of the encoders, without weight sharing between them. At each timestep $t$, each attention module outputs an attention context,

$$c_t^{v} = \mathrm{Attention}^{v}(q_t, E^v), \qquad c_t^{p} = \mathrm{Attention}^{p}(q_t, E^p),$$

where $q_t$ is the output of the pre-net layer of the decoder at timestep $t$, and $E^v$, $E^p$ are the video and text encoder outputs.

The input to the decoder at timestep $t$ is the projection of the concatenation of the two contexts described above via a linear layer,

$$x_t = W \left[ c_t^{v} ; c_t^{p} \right] + b.$$

While [ding2020textual] aggregated the context vectors using summation, we found that concatenation followed by projection works better in our setting, as shown in Sec. 4.6.

We use a Gaussian mixture attention mechanism [graves2013generating] for both modalities (video and text), since it is a soft monotonic attention which is known to achieve better results for speech synthesis [he2019robust, skerry2018towards, polyak2019attention].
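A minimal numpy sketch of this mechanism follows. Parameter names are illustrative, the mixture is normalized for convenience (the exact parameterization in [graves2013generating] differs in details), and the random "memories" stand in for encoder outputs:

```python
import numpy as np

rng = np.random.default_rng(0)

def gmm_attention(prev_means, deltas, sigmas, weights, memory):
    """One step of soft-monotonic Gaussian-mixture attention: component
    means only move forward, since the step is passed through a softplus."""
    means = prev_means + np.log1p(np.exp(deltas))      # softplus keeps steps >= 0
    pos = np.arange(memory.shape[0])[None, :]          # memory positions (1, L)
    scores = (weights[:, None] *
              np.exp(-((pos - means[:, None]) ** 2)
                     / (2 * sigmas[:, None] ** 2))).sum(axis=0)
    alpha = scores / scores.sum()                      # normalized weights (L,)
    return alpha @ memory, means                       # context vector, new means

D = 8
video_mem = rng.standard_normal((30, D))               # stand-in video encoder output
text_mem = rng.standard_normal((12, D))                # stand-in text encoder output
ctx_v, _ = gmm_attention(np.zeros(2), np.ones(2), np.ones(2), np.ones(2) / 2, video_mem)
ctx_t, _ = gmm_attention(np.zeros(2), np.ones(2), np.ones(2), np.ones(2) / 2, text_mem)

# Multi-source attention: concatenate the two contexts and project them
# through a linear layer (rather than summing, as in [ding2020textual]).
W = rng.standard_normal((D, 2 * D))
decoder_input = W @ np.concatenate([ctx_v, ctx_t])
assert decoder_input.shape == (D,)
```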

Full architectural details appear in Appendix B.

4 Experiments

To evaluate the performance of the proposed video-enhanced TTS model, we conducted experiments on two very different public datasets: GRID [cooke2006audio] and VoxCeleb2 [chung2018voxceleb2]. GRID presents a controlled environment, allowing us to test our method on high-quality, studio-captured videos with a small vocabulary, in which the same speakers appear in both the train and test sets. VoxCeleb2, in contrast, is much more in-the-wild and therefore more diverse in terms of appearance (illumination, image quality, audio noise, face angles, etc.), and the speakers in its test set do not appear in the training set. This allows us to test the ability of the model to generalize to unseen speakers.

4.1 Evaluation Metrics

We objectively evaluate prosody accuracy, video-speech synchronization and word error rate (WER). We further evaluate synchronization subjectively with human ratings as described below.

Pitch (fundamental frequency, $F_0$) and voicing contours are computed using the output of the YIN pitch tracking algorithm [decheveigne2002yinalgo] with a fixed frame shift. When the predicted signal is too short, we pad it using domain-appropriate padding up to the length of the reference; when it is too long, we clip it.
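The pad-or-clip length matching can be written as a small helper; `pad_value` is our placeholder for the "domain-appropriate padding" (e.g. 0, meaning unvoiced, for a pitch contour):

```python
import numpy as np

def match_length(pred, ref_len, pad_value=0.0):
    """Pad a too-short predicted contour with pad_value, or clip a too-long
    one, so it matches the reference length."""
    if len(pred) < ref_len:
        return np.concatenate([pred, np.full(ref_len - len(pred), pad_value)])
    return pred[:ref_len]

assert len(match_length(np.ones(5), 8)) == 8    # padded up
assert len(match_length(np.ones(10), 8)) == 8   # clipped down
```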

In the remainder of this section we define and provide intuition for the metrics in the experimental section.

4.1.1 Mel Cepstral Distortion (MCD) [Kubichek93mcd]

is a mel-spectrogram distance measure defined as:

$$\mathrm{MCD} = \frac{1}{T} \sum_{t=1}^{T} \sqrt{\sum_{k=1}^{K} \left( c_{t,k} - \hat{c}_{t,k} \right)^2},$$

where $c_{t,k}$ and $\hat{c}_{t,k}$ are the $k$-th Mel-Frequency Cepstral Coefficient (MFCC) [tiwari2010mfcc] of the $t$-th frame from the reference and the predicted audio respectively. We sum the squared differences over the first $K$ MFCCs, skipping $c_{t,0}$ (the overall energy). MFCCs are computed using a short-time window and step size.
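Assuming per-frame MFCC matrices with matched frame counts, the metric can be sketched as follows (`k_first=13` is our illustrative choice, not a value from the paper):

```python
import numpy as np

def mcd(mfcc_ref, mfcc_pred, k_first=13):
    """Mel Cepstral Distortion over the first k_first MFCCs, skipping
    coefficient 0 (overall energy). Frame counts must already match."""
    diff = mfcc_ref[:, 1:k_first + 1] - mfcc_pred[:, 1:k_first + 1]
    return np.mean(np.sqrt((diff ** 2).sum(axis=1)))

ref = np.zeros((100, 20))           # 100 frames, 20 coefficients each
assert mcd(ref, ref) == 0.0         # identical inputs give zero distortion
```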

4.1.2 Pitch Metrics

We compute the following commonly used prosody metrics over the pitch and voicing sequences produced from the synthesized and the ground-truth waveforms [skerry2018towards, sisman2020overview].

F0 Frame Error (FFE) [chu2009reducing] measures the percentage of frames that either contain a pitch error or a voicing decision error:

$$\mathrm{FFE} = \frac{1}{T} \sum_{t=1}^{T} \mathbb{1}\!\left[ \left( \hat{v}_t \wedge v_t \wedge |\hat{p}_t - p_t| > 0.2\, p_t \right) \vee \left( \hat{v}_t \neq v_t \right) \right],$$

where $\hat{p}_t$, $p_t$ are the pitch and $\hat{v}_t$, $v_t$ are the voicing contours computed over the predicted and ground-truth audio.

Gross Pitch Error (GPE) [nakatani2008gpe] measures the percentage of frames where the pitch differed by more than 20%, among frames where voicing was present in both the predicted and reference audio:

$$\mathrm{GPE} = \frac{\sum_{t=1}^{T} \mathbb{1}\!\left[ |\hat{p}_t - p_t| > 0.2\, p_t \right] \, \mathbb{1}\!\left[ \hat{v}_t \wedge v_t \right]}{\sum_{t=1}^{T} \mathbb{1}\!\left[ \hat{v}_t \wedge v_t \right]},$$

where $\hat{p}_t$, $p_t$ are the pitch and $\hat{v}_t$, $v_t$ are the voicing contours computed over the predicted and ground-truth audio.

Voice Decision Error (VDE) [nakatani2008gpe] measures the proportion of frames where the predicted audio is voiced differently than the ground-truth:

$$\mathrm{VDE} = \frac{1}{T} \sum_{t=1}^{T} \mathbb{1}\!\left[ \hat{v}_t \neq v_t \right],$$

where $\hat{v}_t$, $v_t$ are the voicing contours computed over the predicted and ground-truth audio.
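The three pitch metrics share the same ingredients and can be computed together; the 20% tolerance is the conventional gross-pitch-error threshold:

```python
import numpy as np

def pitch_metrics(p_ref, v_ref, p_pred, v_pred, tol=0.2):
    """FFE, GPE and VDE from pitch (p) and boolean voicing (v) contours.
    tol=0.2 is the conventional 20% gross-pitch-error tolerance."""
    T = len(p_ref)
    both_voiced = v_ref & v_pred
    pitch_err = both_voiced & (np.abs(p_pred - p_ref) > tol * p_ref)
    voicing_err = v_ref != v_pred
    gpe = pitch_err.sum() / max(both_voiced.sum(), 1)
    vde = voicing_err.sum() / T
    ffe = (pitch_err | voicing_err).sum() / T
    return ffe, gpe, vde

p = np.array([100.0, 110.0, 0.0, 120.0])   # 0 marks an unvoiced frame
v = p > 0
ffe, gpe, vde = pitch_metrics(p, v, p, v)  # identical contours: no errors
assert ffe == 0.0 and gpe == 0.0 and vde == 0.0
```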

4.1.3 Lip Sync Error

We use Lip Sync Error - Confidence (LSE-C) and Lip Sync Error - Distance (LSE-D) [prajwal2020syncnet] to measure video-speech synchronization between the predicted audio and the video signal. The measurements are taken using a pretrained SyncNet model [chung16syncnet].

4.1.4 Word Error Rate (WER)

A TTS model is expected to produce an intelligible speech signal consistent with the input text. To measure this objectively, we measure WER as determined by an automatic speech recognition (ASR) model. To this end we use a state-of-the-art ASR model as proposed in [park2020improved], trained on the LibriSpeech [panayotov2015librispeech] training set. The recognizer was not altered or fine-tuned.
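WER itself is the word-level Levenshtein distance (substitutions, insertions, deletions) normalized by the reference length; a dependency-free sketch:

```python
def wer(ref_words, hyp_words):
    """Word error rate: Levenshtein distance over word sequences divided by
    the number of reference words."""
    # d[i][j] = edit distance between the first i ref words and j hyp words
    d = [[i + j if i * j == 0 else 0 for j in range(len(hyp_words) + 1)]
         for i in range(len(ref_words) + 1)]
    for i in range(1, len(ref_words) + 1):
        for j in range(1, len(hyp_words) + 1):
            sub = d[i - 1][j - 1] + (ref_words[i - 1] != hyp_words[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[-1][-1] / len(ref_words)

assert wer("the cat sat".split(), "the cat sat".split()) == 0.0
assert wer("the cat sat".split(), "the cat".split()) == 1 / 3   # one deletion
```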

Since LSVSR consists of open-ended content that is out-of-domain compared to the audiobooks in LibriSpeech, ASR may yield a high WER even on ground-truth audio. Thus, we only use the WER metric for relative comparison. In Appendix A, we compute the WER on predictions from a text-only TTS model trained on several datasets to establish a range of reasonable WERs; we confirm that a rather high WER is to be expected.

4.1.5 Video-speech sync Mean Opinion Score (MOS)

We measured video-speech synchronization quality on a 3-point Likert scale with a granularity of 0.5. Each rater is required to watch a video at least twice before rating it, and no rater rates more than 18 videos; each video is rated by 3 raters. Each evaluation was conducted independently; different models were not compared pairwise. The averaged MOS ratings are reported with a confidence interval.

In Sec. 4.4 we rate a total of 200 videos each containing a unique speaker, while in Sec. 4.3 we chose 5 clips per speaker resulting in a total of 165 videos.

| Model | MOS ↑ | LSE-C ↑ | LSE-D ↓ | WER ↓ | MCD ↓ | FFE ↓ | GPE ↓ | VDE ↓ |
|---|---|---|---|---|---|---|---|---|
| ground-truth [lu2021visualtts] | - | 7.68 | 6.87 | - | - | - | - | - |
| VisualTTS [lu2021visualtts] | - | 5.81 | 8.50 | - | - | - | - | - |
| ground-truth | 2.68 ± 0.04 | 7.24 | 6.73 | 26% | - | - | - | - |
| TTS-TextOnly [jia2021translatotron] | 1.51 ± 0.05 | 3.39 | 10.44 | 19% | 15.76 | 0.48 | 0.30 | 0.42 |
| VDTTS-LSVSR | 2.10 ± 0.06 | 5.85 | 7.93 | 55% | 12.81 | 0.37 | 0.21 | 0.32 |
| VDTTS-GRID | 2.55 ± 0.05 | 6.97 | 6.85 | 26% | 7.89 | 0.14 | 0.07 | 0.11 |

Table 2: GRID evaluation. This table shows our experiments on the GRID dataset. The top two rows present the numbers as they appear in VisualTTS [lu2021visualtts]. ground-truth shows the metrics as evaluated on the original speech/video. TTS-TextOnly shows the performance of a vanilla text-only TTS model, while VDTTS-LSVSR and VDTTS-GRID are our model when trained on LSVSR and GRID respectively. While VDTTS-GRID achieves the best overall performance, it is evident VDTTS-LSVSR generalizes well enough to the GRID dataset to outperform VisualTTS [lu2021visualtts]. See Sec. 4.1 for a detailed explanation of metrics; arrows indicate whether higher or lower is better.
| Model | MOS ↑ | LSE-C ↑ | LSE-D ↓ | WER ↓ | MCD ↓ | FFE ↓ | GPE ↓ | VDE ↓ |
|---|---|---|---|---|---|---|---|---|
| ground-truth | 2.79 ± 0.03 | 7.00 | 7.51 | - | - | - | - | - |
| TTS-TextOnly [jia2021translatotron] | 1.77 ± 0.05 | 1.82 | 12.44 | 4% | 14.67 | 0.59 | 0.38 | 0.42 |
| VDTTS-VoxCeleb2 | 2.50 ± 0.04 | 5.99 | 8.22 | 48% | 12.17 | 0.46 | 0.31 | 0.30 |
| VDTTS-LSVSR | 2.45 ± 0.04 | 5.92 | 8.25 | 25% | 11.58 | 0.47 | 0.33 | 0.30 |
Table 3: VoxCeleb2 evaluation. ground-truth shows the synchronization quality of the original VoxCeleb2 speech and video. TTS-TextOnly represents a vanilla text-only TTS model, while VDTTS-VoxCeleb2 and VDTTS-LSVSR are our model when trained on VoxCeleb2 and LSVSR respectively. By looking at the WER, it is evident VDTTS-VoxCeleb2 generates unintelligible results, while VDTTS-LSVSR generalizes well to VoxCeleb2 data and produces better quality overall. See Sec. 4.1 for a detailed explanation of metrics; arrows indicate if higher or lower is better.

4.2 Data preprocessing

Several preprocessing steps were conducted before training and evaluating our models, including audio filtering, face cropping and limiting example length.

We follow a methodology similar to the one first proposed by [shillingford2018large] for creating the LSVSR dataset. We limit the duration of all examples to the range of 1 to 6 seconds, and transcripts are filtered through a language classifier [salcianu2018compact] to include only English. We also remove utterances with fewer than one word per second on average, since they do not contain enough spoken content. We filter blurry clips and use a neural network [chung2017lip] to verify that the audio and video channels are aligned. Then, we apply a landmarker as in [schroff2015facenet], keep segments where the face yaw and pitch remain within a fixed range, and remove clips with an eye-to-eye width of less than 80 pixels. Using the extracted and smoothed landmarks, we discard minor lip movements and nonspeaking faces using a threshold filter. The landmarks are used to compute and apply an affine transformation (without skew) to obtain canonicalized faces. Audio is filtered [kavalerov2019universal] to reduce non-speech noise.

We use this methodology to collect a dataset similar to LSVSR [shillingford2018large], which serves as our in-the-wild training set, and also to preprocess our version of VoxCeleb2, only relaxing the maximal face angle to increase dataset size. Running the same processing on VoxCeleb2 produces the train and test examples we use. For GRID, which serves as our controlled environment, we do not filter the data, and only use the face cropping part of the aforementioned pipeline to generate model inputs.
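The scalar filters above (duration, language, words per second, eye-to-eye width) can be expressed directly as one predicate; the model-based checks (blur, audio-video alignment, face angle, lip movement) need trained models and are omitted from this sketch:

```python
def keep_example(duration_s, transcript, eye_to_eye_px, is_english):
    """Return True if an example passes the scalar filters: duration in
    [1, 6] s, English-only, >= 1 word/s on average, eye-to-eye >= 80 px."""
    words_per_s = len(transcript.split()) / duration_s
    return (1.0 <= duration_s <= 6.0 and is_english
            and words_per_s >= 1.0 and eye_to_eye_px >= 80)

assert keep_example(3.0, "hello there how are you", 100, True)
assert not keep_example(3.0, "hi", 100, True)                  # too few words/s
assert not keep_example(8.0, "a b c d e f g h i", 100, True)   # too long
```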

4.3 Controlled environment evaluation

In order to evaluate our method in a controlled environment we use the GRID dataset [cooke2006audio]. GRID is composed of studio video recordings of 33 speakers (originally 34; one speaker's data is corrupted). There are 1000 videos of each speaker, and in each video a sentence is spoken in a predetermined “GRID” format. The vocabulary of the dataset is relatively small, and all videos were captured in a controlled studio environment over a green screen with little head pose variation.

We compare VDTTS to the recent VisualTTS [lu2021visualtts] method using the same methodology reported by its authors. To that end, we take 100 random videos from each speaker as a test set. We use the remaining 900 examples per speaker as training data, and also for generating a lookup table containing the speaker embedding, averaged and normalized per speaker, as explained in Sec. 3. At test time we present our models with video frames alongside the transcript and the average speaker embedding.

We evaluate our method using the metrics mentioned in Sec. 4.1, and compare it to several baselines: (1) VisualTTS [lu2021visualtts]; (2) PnG NAT TTS zero-shot voice transferring model from [jia2021translatotron], a state-of-the-art TTS model trained on the LibriTTS [zen2019libritts] dataset, denoted as TTS-TextOnly; (3) our model when trained over LSVSR (see Sec. 4.2); and (4) our model trained on the GRID training set.

Unfortunately, VisualTTS [lu2021visualtts] did not provide their random train/test splits. Therefore, we report the original metrics as they appear in [lu2021visualtts] alongside the numbers we found over our test set. Luckily, the two are comparable, as can be seen by the two rows in Table 2 named ground-truth.

The results appear in Table 2. Observe that, when trained on GRID, our method outperforms all other methods in all metrics except WER. Moreover, our model trained on LSVSR, as we will see in a later section, achieves better video-speech synchronization than VisualTTS, which was trained on GRID, showing that our “in-the-wild” model generalizes to new domains and unseen speakers.

Figure 4: Qualitative examples. We present two examples (a) and (b) from the test set of VoxCeleb2 [chung2018voxceleb2]. Within each example, from top to bottom: input face images, ground-truth (GT) mel-spectrogram, mel-spectrogram output of VDTTS, mel-spectrogram output of a vanilla TTS model TTS-TextOnly, and two plots showing the normalized pitch (normalized by mean nonzero pitch, i.e. mean is only over voiced periods) of VDTTS and TTS-TextOnly compared to the ground-truth signal. For actual videos we refer the reader to the project page.

4.4 In-the-wild evaluation

In this section we evaluate VDTTS on in-the-wild data from the test set of VoxCeleb2 [chung2018voxceleb2], an open-source dataset of in-the-wild examples of people speaking, taken from YouTube. We preprocess the data as described in Sec. 4.2. Since this data is not transcribed, we augment it with transcripts automatically generated using [park2020improved], yielding high-quality, automatically transcribed test videos. We create a speaker embedding lookup table by averaging and normalizing the speaker voice embeddings from all examples of the same speaker.

As a baseline we again use the text-only TTS model from [jia2021translatotron], denoted in the table by TTS-TextOnly. The results are shown in Table 3.

Initially we trained our model on the train set of VoxCeleb2; we call this model VDTTS-VoxCeleb2. Unfortunately, as can be seen from its high WER of 48%, the model produced difficult-to-comprehend audio. We hypothesized that noisy automated transcripts were the culprit, so we trained the model on an alternative in-the-wild dataset with human-generated transcripts, LSVSR; we denote this model by VDTTS-LSVSR. As hypothesized, this led to a great improvement, reducing the WER to 25% while leaving most other metrics comparable. For more details refer to Appendix A.

For qualitative examples of VDTTS-LSVSR we refer the reader to Sec. 4.5.

| Model | LSE-C ↑ | LSE-D ↓ | WER ↓ | MCD ↓ | FFE ↓ | GPE ↓ | VDE ↓ |
|---|---|---|---|---|---|---|---|
| VDTTS | 5.92 | 8.25 | 25% | 11.58 | 0.47 | 0.33 | 0.30 |
| VDTTS-no-speaker-emb | 1.49 | 12.14 | 27% | 14.5 | 0.67 | 0.43 | 0.37 |
| VDTTS-small | 1.48 | 12.45 | 38% | 14 | 0.6 | 0.4 | 0.43 |
| VDTTS-sum | 5.74 | 8.47 | 28% | 12.22 | 0.46 | 0.29 | 0.31 |
| VDTTS-no-text | 5.90 | 8.28 | 98% | 12.99 | 0.53 | 0.35 | 0.35 |
| VDTTS-no-video | 1.44 | 12.62 | 27% | 14.36 | 0.58 | 0.34 | 0.47 |
Table 4: Ablation study, showing different variations of the VDTTS model and hence the contribution of these components to the performance of VDTTS. See Sec. 4.6 for a detailed explanation of the different models, and Sec. 4.1 for definitions of metrics. Arrows indicate if higher or lower is better.

4.5 Prosody using video

We selected two inference examples from the test set of VoxCeleb2 to showcase the unique strength of VDTTS, which we present in Fig. 4. In both examples, the video frames provide clues about the prosody and word timing. Such visual information is not available to the text-only TTS model, TTS-TextOnly [jia2021translatotron], to which we compare.

In the first example (see Fig. 4(a)), the speaker talks at a particular pace that results in periodic gaps in the ground-truth mel-spectrogram. The VDTTS model preserves this characteristic and generates mel-spectrograms that are much closer to the ground-truth than the ones generated by TTS-TextOnly without access to the video.

Similarly, in the second example (see Fig. 4(b)), the speaker takes long pauses between some of the words. This can be observed by looking at the gaps in the ground-truth mel-spectrogram. These pauses are captured by VDTTS and are reflected in the predicted result below, whereas the mel-spectrogram of TTS-TextOnly does not capture this aspect of the speaker’s rhythm.

We also plot charts to compare the pitch generated by each model to the ground-truth pitch. In both examples, the curve of VDTTS fits the ground-truth much better than the TTS-TextOnly curve, both in the alignment of speech and silence, and also in how the pitch changes over time.

To view the videos and other examples, we refer the reader to the project page.

4.6 Ablation

In this section we conduct an ablation study to better understand the contribution of our key design choices.

The results of the ablation study are presented in Table 4, using the following abbreviations for the models:

  • VDTTS-no-speaker-emb: VDTTS without a speaker embedding. Although unlikely, this version could possibly learn to compensate for the missing embedding using the person in the video.

  • VDTTS-small: a VDTTS model with smaller encoders and a smaller decoder, using a reduced hidden dimension as in [shen2018natural].

  • VDTTS-sum: VDTTS using summation (as in [ding2020textual]) instead of concatenation in the multi-source attention mechanism.

  • VDTTS-no-text: a VDTTS model without text input, which can be thought of as a silent-video-to-speech model.

  • VDTTS-no-video: a VDTTS model without video input, which can be thought of as a standard TTS model.

VDTTS-no-speaker-emb performs poorly on the video-speech synchronization metrics LSE-C and LSE-D, likely due to underfitting since the model is unable to infer the voice of the speaker using only the video.

Looking at VDTTS-small makes it evident that increasing the model size beyond what was originally suggested by ding2020textual is required.

Another interesting model is VDTTS-no-text, which has access only to the video frame input without any text. In terms of video-speech synchronization it is on par with the full model for LSE-C and LSE-D, but fails to produce words as can be seen by its high WER. Intriguingly, output from this model looks clearly synchronized, but sounds like English babbling, as can be seen in the project page. On one hand, this shows that the text input is necessary in order to produce intelligible content, and on the other hand it shows the video is sufficient for inferring synchronization and prosody without having access to the underlying text. Furthermore, in both this model and the full one, the synchronization is learnt without any explicit loss or constraint to encourage it, suggesting that the “easiest” solution for the model to learn is to infer prosody visually, rather than modeling it from the text.

5 Discussion and Future Work

In this paper we present VDTTS, a novel visually-driven TTS model that takes advantage of video frames as an input and generates speech with prosody that matches the video signal. Such a model can be used for post-sync or dubbing, producing speech synchronized to a sequence of video frames. Our method also naturally extends to other applications, such as enhancing low-quality speech in videos or restoring audio in captioned videos.

The model produces near ground-truth quality on the GRID dataset. On open-domain “in-the-wild” evaluations, the model produces well-synchronized outputs approaching the video-speech synchronization quality of the ground-truth. We show that VDTTS performs favorably compared to alternate approaches.

Intriguingly, VDTTS is able to produce video-synchronized speech without any explicit losses or constraints to encourage this, suggesting that added complexities such as synchronization losses or explicit synchronization modeling are unnecessary. Furthermore, we demonstrate that the text and speaker embedding supply the speech content, while the prosody is driven by the video signal. Our results also suggest that the “easiest” solution for the model to learn is to infer prosody visually, rather than modeling it from the text.

It remains to be seen how VDTTS will perform in generating video-synchronous audio for input text that differs from the original video signal, which would be a powerful tool for performing, for instance, translation dubbing without the need to modify facial video.


Appendix A Word error rate discussion

As explained in Sec. 4.4, our VoxCeleb2 transcripts are automatically generated and thus contain transcription errors. As a result, one can expect the WER for models trained on this data to be non-zero. To validate this hypothesis, that such noisy data leads to a non-zero WER, we trained a version of our model that accepts only text as input (without silent video), denoted TTS-our. TTS-our was trained twice: once on the LibriTTS [zen2019libritts] dataset, and a second time on our in-the-wild LSVSR dataset. As Table 5 shows, when trained on LibriTTS this model achieves a low WER of 7%, while the same model trained on the in-the-wild dataset gets a WER of 27%. This suggests that a WER in this region should be expected when using LSVSR.
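WER here is the standard word-level edit distance (substitutions, deletions, and insertions) normalized by the reference length. A minimal self-contained implementation:

```python
def wer(reference, hypothesis):
    """Word error rate: (substitutions + deletions + insertions) / #ref words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words (Levenshtein).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution / match
    return d[-1][-1] / len(ref)
```

Because the VoxCeleb2 references are themselves noisy ASR output, even a perfect synthesis-plus-recognition loop would show a non-zero score under this measure.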

That being said, we believe reporting WER is valuable as a sanity check for noisy datasets, especially when trying to capture more than just the words.

Training data WER
LibriTTS 7%
LSVSR 27%
Table 5: Comparison of WER on the VoxCeleb2 test set for our text-only TTS model (TTS-our) when trained on different datasets.

Appendix B Detailed model architecture

Video Encoder
Input size
Max pooling per layer (F, T, T, T, T)
Number of output channels per layer (64, 128, 256, 512, 512)
Stride per layer (2, 1, 1, 1, 1)
Kernel size (all layers) (3, 3, 3)
Activation (all layers) ReLU
Normalization (all layers) Group Norm

Text Encoder
Input size
Conv layers 3 layers, 2048 channels, 5×1 kernel with 1×1 stride
Activation (all conv layers) ReLU
Bi-LSTM 1024-dim per direction
Normalization (all conv layers) Batch Norm

Multi Source Attention
Attention input size
GMM attention (per source) 128-dim context
Linear projection Fully connected layer

Decoder
PreNet 2 fully connected layers with 256 neurons and ReLU act.
LSTM 2 layers, 1024-dim
Bi-LSTM 1024-dim per direction
PostNet 5 conv layers, 512 channels, 5×1 kernel with 1×1 stride, TanH act.
Normalization (all decoder layers) Batch Norm
Teacher forcing prob 1.0
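The video-encoder rows imply a specific spatial downsampling. A sketch of tracing one spatial dimension through the five conv layers, reading the (F, T, T, T, T) row as per-layer 2× max-pooling flags; the 128-pixel input side is a hypothetical value for illustration, and "same"-style padding of 1 is assumed:

```python
def conv_out(size, kernel=3, stride=1, padding=1):
    """Output length of one conv dimension (padding of 1 assumed)."""
    return (size + 2 * padding - kernel) // stride + 1

# Per-layer settings from the video-encoder table: stride 2 on the
# first layer only, 2x max pooling where the (F, T, ...) row reads T.
strides = (2, 1, 1, 1, 1)
pooling = (False, True, True, True, True)

side = 128  # hypothetical input crop side, not from the paper
for s, pool in zip(strides, pooling):
    side = conv_out(side, stride=s)
    if pool:
        side //= 2
```

Under these assumptions the spatial side shrinks by a factor of 32 (one stride-2 layer plus four pooling layers), which is the usual regime for producing a compact per-frame feature for the attention mechanism.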

Appendix C Training hyperparameters

Training
learning rate 0.0003
learning rate scheduler Linear Rampup with Exponential Decay
scheduler decay start 40k steps
scheduler decay end 300k steps
scheduler warm-up 400 steps
batch size 512

Optimizer Adam
Regularization L2 regularization factor 1e-06
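The schedule in the table can be sketched as follows; the final decay floor (`final_scale`) is an assumption for illustration, as the rate reached at the end of decay is not stated:

```python
def learning_rate(step, base_lr=3e-4, warmup=400,
                  decay_start=40_000, decay_end=300_000, final_scale=0.01):
    """Linear ramp-up, hold, then exponential decay, matching the
    schedule shape in the table above. final_scale is assumed."""
    if step < warmup:
        return base_lr * step / warmup          # linear ramp-up
    if step < decay_start:
        return base_lr                          # hold at the base rate
    if step >= decay_end:
        return base_lr * final_scale            # decay floor
    # Exponential interpolation between decay_start and decay_end.
    frac = (step - decay_start) / (decay_end - decay_start)
    return base_lr * final_scale ** frac
```

The rate ramps linearly to 0.0003 over the first 400 steps, holds until step 40k, then decays exponentially until step 300k.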