CCVS: Context-aware Controllable Video Synthesis

07/16/2021
by   Guillaume Le Moing, et al.

This presentation introduces a self-supervised learning approach to the synthesis of new video clips from old ones, with several new key elements for improved spatial resolution and realism: it conditions the synthesis process on contextual information for temporal continuity and on ancillary information for fine control. The prediction model is doubly autoregressive: in the latent space of an autoencoder for forecasting, and in image space for updating contextual information, which is also used to enforce spatio-temporal consistency through a learnable optical flow module. Adversarial training of the autoencoder in both the appearance and temporal domains further improves the realism of its output. A quantizer inserted between the encoder and the transformer in charge of forecasting future frames in latent space (and its inverse inserted between the transformer and the decoder) adds even more flexibility: it affords simple mechanisms for handling multimodal ancillary information that controls the synthesis process (e.g., a few sample frames, an audio track, a trajectory in image space), and it accounts for the intrinsically uncertain nature of the future by allowing multiple predictions. Experiments with an implementation of the proposed approach give very good qualitative and quantitative results on multiple tasks and standard benchmarks.
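The latent-space autoregressive loop described above can be sketched as follows. This is a toy illustration under assumed names and shapes, not the authors' implementation: a linear stand-in plays the role of the convolutional encoder/decoder, a random codebook plays the role of the learned quantizer, and a fixed permutation stands in for the transformer that forecasts the next frame's discrete codes.

```python
import numpy as np

rng = np.random.default_rng(0)

K, D, N, P = 16, 8, 4 * 4, 3        # codebook size, latent dim, tokens per frame, pixel channels
codebook = rng.normal(size=(K, D))  # random stand-in for the learned codebook
W_enc = rng.normal(size=(P, D))     # toy linear "encoder"
W_dec = rng.normal(size=(D, P))     # toy linear "decoder"

def encode(frame):
    """Map an (N, P) frame to an (N, D) latent grid."""
    return frame @ W_enc

def quantize(latents):
    """Replace each latent vector by the index of its nearest codebook entry."""
    dist = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dist.argmin(axis=1)      # (N,) discrete codes

def predict_next_codes(past_codes):
    """Stand-in for the autoregressive transformer over code sequences;
    here just a fixed shift of the last frame's codes, purely illustrative."""
    return np.roll(past_codes[-1], 1)

def decode(codes):
    """Map discrete codes back to an (N, P) frame via the codebook."""
    return codebook[codes] @ W_dec

# One forecasting step over a short clip of past frames.
past_frames = [rng.normal(size=(N, P)) for _ in range(3)]
past_codes = [quantize(encode(f)) for f in past_frames]
next_codes = predict_next_codes(past_codes)
next_frame = decode(next_codes)
```

In the actual model, the predicted frame would be fed back through the image-space branch to update the contextual information before the next step, which is what makes the process doubly autoregressive.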

