Playable Video Generation

01/28/2021
by Willi Menapace, et al.

This paper introduces the unsupervised learning problem of playable video generation (PVG). In PVG, we aim to allow a user to control the generated video by selecting a discrete action at every time step, as if playing a video game. The difficulty of the task lies both in learning semantically consistent actions and in generating realistic videos conditioned on the user input. We propose a novel framework for PVG that is trained in a self-supervised manner on a large dataset of unlabelled videos. We employ an encoder-decoder architecture in which the predicted action labels act as a bottleneck. The network is constrained to learn a rich action space using a reconstruction loss on the generated video as the main driving loss. We demonstrate the effectiveness of the proposed approach on several datasets spanning a wide variety of environments. Further details, code, and examples are available on our project page: willi-menapace.github.io/playable-video-generation-website.
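
The sketch below illustrates the general idea described in the abstract: an action network infers a discrete action label from a pair of consecutive frames, that label is the only information passed to the generator alongside the current frame, and a reconstruction loss on the predicted next frame drives learning. This is a minimal illustration, not the authors' architecture (see their repository for that); the module sizes, the straight-through Gumbel-Softmax discretization, the MSE reconstruction loss, and all names here are assumptions made for the example.

```python
# Minimal sketch of a discrete-action bottleneck for playable video generation.
# All module shapes, names, and the Gumbel-Softmax trick are illustrative
# assumptions, not the paper's exact method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ActionNetwork(nn.Module):
    """Infers a discrete action label from two consecutive frames."""
    def __init__(self, num_actions=7, feat_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, num_actions),
        )

    def forward(self, frame_t, frame_tp1):
        logits = self.encoder(torch.cat([frame_t, frame_tp1], dim=1))
        # Discrete bottleneck: straight-through Gumbel-Softmax keeps the
        # action categorical while letting gradients flow during training.
        return F.gumbel_softmax(logits, tau=1.0, hard=True)

class Generator(nn.Module):
    """Predicts the next frame from the current frame and a discrete action."""
    def __init__(self, num_actions=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + num_actions, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, frame_t, action):
        b, _, h, w = frame_t.shape
        action_map = action.view(b, -1, 1, 1).expand(b, action.shape[1], h, w)
        return self.net(torch.cat([frame_t, action_map], dim=1))

def training_step(action_net, generator, frame_t, frame_tp1):
    # Self-supervised training: the action network labels each transition,
    # the generator reconstructs the true next frame, and the reconstruction
    # loss forces the discrete labels to capture meaningful actions.
    action = action_net(frame_t, frame_tp1)
    pred_tp1 = generator(frame_t, action)
    return F.mse_loss(pred_tp1, frame_tp1)

def play_step(generator, frame_t, user_action_idx, num_actions=7):
    # At inference time the action is no longer inferred from the future
    # frame but chosen interactively by the user, as in a video game.
    action = F.one_hot(torch.tensor([user_action_idx]), num_actions).float()
    return generator(frame_t, action)
```

In this setup the discrete action is the only channel through which information about the next frame reaches the generator, which is what pushes the learned action space to be both compact and semantically consistent.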

Related Research

10/21/2021 · LARNet: Latent Action Representation for Human Action Synthesis
11/23/2020 · Action Concept Grounding Network for Semantically-Consistent Video Generation
03/03/2022 · Playable Environments: Video Manipulation in Space and Time
04/14/2020 · Unsupervised Multimodal Video-to-Video Translation via Self-Supervised Learning
04/05/2019 · Point-to-Point Video Generation
07/17/2022 · Action-conditioned On-demand Motion Generation
06/27/2021 · Building a Video-and-Language Dataset with Human Actions for Multimodal Logical Inference

Code Repositories

PlayableVideoGeneration

Official PyTorch implementation of "Playable Video Generation"

