Hierarchical Video Generation from Orthogonal Information: Optical Flow and Texture

11/27/2017
by Katsunori Ohnishi, et al.

Learning to represent and generate videos from unlabeled data is a very challenging problem. To generate realistic videos, it is important not only that the appearance of each frame be realistic, but also that the motion be plausible and the appearance remain consistent over time. The generation process should therefore be divided according to these intrinsic difficulties. In this study, we focus on motion and appearance as two important orthogonal components of a video, and propose Flow-and-Texture-Generative Adversarial Networks (FTGAN), consisting of FlowGAN and TextureGAN. To avoid a huge annotation cost, we must explore a way to learn from unlabeled data; we therefore employ optical flow as the motion information for generating videos. FlowGAN generates optical flow, which captures only the edges and motion of the video to be generated. TextureGAN then specializes in adding texture to the optical flow produced by FlowGAN. This hierarchical approach yields more realistic videos with plausible motion and consistent appearance. Our experiments show that our model generates videos with more plausible motion and also achieves significantly improved performance on unsupervised action classification compared with previous GAN-based work. In addition, because our model generates videos from two independent sources of information, it can produce novel combinations of motion and appearance not seen in the training data, such as a video in which a person is doing sit-ups on a baseball field.
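The abstract describes the pipeline only at a high level. The sketch below is an illustrative assumption of how the two-stage, flow-then-texture generation could be wired together in PyTorch; the module names (FlowGenerator, TextureGenerator), layer sizes, and tensor shapes are hypothetical and not taken from the paper.

```python
# A minimal sketch of a hierarchical flow-then-texture generator in the
# spirit of FTGAN. All architectural details here are assumptions for
# illustration, not the authors' exact networks.
import torch
import torch.nn as nn

class FlowGenerator(nn.Module):
    """Stage 1 (FlowGAN-like): maps a latent code to an optical-flow
    volume of shape (N, 2, T, H, W) -- two channels for (dx, dy)."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose3d(z_dim, 256, kernel_size=(2, 4, 4)),
            nn.BatchNorm3d(256), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(256, 128, 4, stride=2, padding=1),
            nn.BatchNorm3d(128), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1),
            nn.BatchNorm3d(64), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(64, 2, 4, stride=2, padding=1),
            nn.Tanh(),  # flow values normalized to [-1, 1]
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1, 1))

class TextureGenerator(nn.Module):
    """Stage 2 (TextureGAN-like): conditions on the generated flow plus
    its own latent code and outputs an RGB video (N, 3, T, H, W)."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.z_dim = z_dim
        self.net = nn.Sequential(
            nn.Conv3d(2 + z_dim, 128, 3, padding=1),
            nn.BatchNorm3d(128), nn.ReLU(inplace=True),
            nn.Conv3d(128, 64, 3, padding=1),
            nn.BatchNorm3d(64), nn.ReLU(inplace=True),
            nn.Conv3d(64, 3, 3, padding=1),
            nn.Tanh(),  # RGB values in [-1, 1]
        )

    def forward(self, flow, z):
        # Broadcast the appearance code over space and time, then fuse
        # it channel-wise with the motion (flow) condition.
        n, _, t, h, w = flow.shape
        z_vol = z.view(n, self.z_dim, 1, 1, 1).expand(n, self.z_dim, t, h, w)
        return self.net(torch.cat([flow, z_vol], dim=1))

# Hierarchical sampling: motion first, then appearance on top of it.
flow_g, tex_g = FlowGenerator(), TextureGenerator()
z_motion = torch.randn(1, 100)   # controls the motion
z_texture = torch.randn(1, 100)  # controls the appearance
flow = flow_g(z_motion)          # (1, 2, 16, 32, 32)
video = tex_g(flow, z_texture)   # (1, 3, 16, 32, 32)
```

The key design point this sketch captures is that motion and appearance are sampled from two separate latent codes, which is what would allow recombining a motion with an appearance never paired in the training data.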

