Learning Individual Speaking Styles for Accurate Lip to Speech Synthesis

05/17/2020
by K R Prajwal, et al.

Humans involuntarily tend to infer parts of a conversation from lip movements when the speech is absent or corrupted by external noise. In this work, we explore the task of lip to speech synthesis, i.e., learning to generate natural speech given only the lip movements of a speaker. Acknowledging the importance of contextual and speaker-specific cues for accurate lip-reading, we take a different path from existing works: we focus on learning accurate lip-sequence-to-speech mappings for individual speakers in unconstrained, large-vocabulary settings. To this end, we collect and release a large-scale benchmark dataset, the first of its kind, specifically to train and evaluate single-speaker lip to speech synthesis in natural settings. We propose a novel approach with key design choices that achieves accurate, natural lip to speech synthesis in such unconstrained scenarios for the first time. Extensive evaluation using quantitative and qualitative metrics as well as human studies shows that our method is four times more intelligible than previous works in this space. Please see our demo video for a quick overview of the paper, method, and qualitative results: https://www.youtube.com/watch?v=HziA-jmlk_4&feature=youtu.be
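At its core, the task described above is a sequence-to-sequence mapping from silent lip-video frames to acoustic frames (typically a mel-spectrogram, which a vocoder then converts to a waveform). The sketch below is purely illustrative and is not the paper's architecture: the frame sizes, embedding dimension, upsampling ratio, and the plain linear projections are all hypothetical stand-ins for a learned encoder-decoder, shown only to make the shapes of the mapping concrete.

```python
import numpy as np

# Illustrative shape-level sketch of lip-to-speech synthesis.
# NOTE: all sizes and the linear maps are assumptions, not the paper's model.

rng = np.random.default_rng(0)

T_VIDEO, H, W = 25, 48, 96   # 1 s of mouth crops at 25 fps (assumed sizes)
D_FEAT = 512                 # per-frame visual embedding size (assumed)
N_MELS = 80                  # mel-spectrogram bins (a common choice)
UPSAMPLE = 4                 # mel frames per video frame (assumed ratio)

def encode_frames(frames, w_enc):
    """Flatten each lip crop and project it to a per-frame visual embedding."""
    flat = frames.reshape(frames.shape[0], -1)   # (T, H*W)
    return flat @ w_enc                          # (T, D_FEAT)

def decode_to_mel(feats, w_dec, upsample):
    """Repeat features in time (audio runs at a higher frame rate than video)
    and project each upsampled feature to mel bins: a stand-in decoder."""
    up = np.repeat(feats, upsample, axis=0)      # (T*upsample, D_FEAT)
    return up @ w_dec                            # (T*upsample, N_MELS)

frames = rng.standard_normal((T_VIDEO, H, W))    # silent lip video (dummy data)
w_enc = rng.standard_normal((H * W, D_FEAT)) * 0.01
w_dec = rng.standard_normal((D_FEAT, N_MELS)) * 0.01

mel = decode_to_mel(encode_frames(frames, w_enc), w_dec, UPSAMPLE)
print(mel.shape)  # (100, 80): 100 mel frames predicted from 25 video frames
```

The key point the sketch conveys is the rate mismatch: one second of video (25 frames) must generate many more acoustic frames, which is why real models upsample in time before emitting spectrogram frames.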


Related research

- Show Me Your Face, And I'll Tell You How You Speak (06/28/2022): When we speak, the prosody and content of the speech can be inferred fro...
- Audio-Visual Speech Codecs: Rethinking Audio-Visual Speech Enhancement by Re-Synthesis (03/31/2022): Since facial actions such as lip movements contain significant informati...
- Lip-to-Speech Synthesis for Arbitrary Speakers in the Wild (09/01/2022): In this work, we address the problem of generating speech from silent li...
- The Conversation: Deep Audio-Visual Speech Enhancement (04/11/2018): Our goal is to isolate individual speakers from multi-talker simultaneou...
- AISHELL-4: An Open Source Dataset for Speech Enhancement, Separation, Recognition and Speaker Diarization in Conference Scenario (04/08/2021): In this paper, we present AISHELL-4, a sizable real-recorded Mandarin sp...
- Personalized One-Shot Lipreading for an ALS Patient (11/02/2021): Lipreading or visually recognizing speech from the mouth movements of a ...
- Reconstructing the Dynamic Directivity of Unconstrained Speech (09/09/2022): An accurate model of natural speech directivity is an important step tow...
