LumièreNet: Lecture Video Synthesis from Audio

07/04/2019
by   Byung-Hak Kim, et al.

We present LumièreNet, a simple, modular, fully deep-learning-based architecture that synthesizes high-quality, full-pose headshot lecture videos from an instructor's new audio narration of any length. Unlike prior work, LumièreNet is composed entirely of trainable neural network modules that learn the mapping from audio to video through intermediate, compact, and abstract pose-based latent codes. Our video demos are available at [22] and [23].
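The abstract describes a modular pipeline in which audio is first mapped to compact pose-based latent codes, which are then decoded into video frames. The sketch below illustrates that composition structure only; the module names, dimensions, and linear stand-ins for the trainable networks are illustrative assumptions, not the paper's actual models.

```python
import numpy as np

# Hypothetical sketch of LumièreNet's modular audio-to-video pipeline.
# The functions below are stand-ins for trainable neural modules; their
# names and internals are assumptions for illustration only.

def audio_to_pose_codes(audio_features: np.ndarray) -> np.ndarray:
    """Stand-in for the audio-to-latent module: one compact
    pose-based latent code per audio frame (dimension chosen arbitrarily)."""
    rng = np.random.default_rng(0)
    w = rng.standard_normal((audio_features.shape[1], 8))
    return np.tanh(audio_features @ w)

def pose_codes_to_frames(codes: np.ndarray, h: int = 4, w: int = 4) -> np.ndarray:
    """Stand-in for the image-generation module decoding latent codes
    into (tiny) video frames."""
    rng = np.random.default_rng(1)
    proj = rng.standard_normal((codes.shape[1], h * w))
    return (codes @ proj).reshape(-1, h, w)

def synthesize_video(audio_features: np.ndarray) -> np.ndarray:
    """End-to-end composition; because each module operates per frame,
    it accepts narration of any length."""
    return pose_codes_to_frames(audio_to_pose_codes(audio_features))

# Usage: 100 audio frames with 13 MFCC-like features each.
video = synthesize_video(np.zeros((100, 13)))
print(video.shape)  # (100, 4, 4)
```

The key design point the abstract emphasizes is that every stage is a trainable module, so the whole chain can be learned end to end rather than relying on hand-crafted intermediate representations.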
