DTVNet: Dynamic Time-lapse Video Generation via Single Still Image

08/11/2020
by Jiangning Zhang, et al.

This paper presents a novel end-to-end dynamic time-lapse video generation framework, named DTVNet, that generates diversified time-lapse videos from a single landscape image conditioned on normalized motion vectors. The proposed DTVNet consists of two submodules: an Optical Flow Encoder (OFE) and a Dynamic Video Generator (DVG). The OFE maps a sequence of optical flow maps to a normalized motion vector that encodes the motion information of the generated video. The DVG contains motion and content streams that learn from the motion vector and the single image respectively, as well as an encoder that learns shared content features and a decoder that constructs video frames with the corresponding motion. Specifically, the motion stream introduces multiple adaptive instance normalization (AdaIN) layers to integrate multi-level motion information that is processed by linear layers. At test time, videos with the same content but different motion can be generated from a single input image by varying the normalized motion vector. We further conduct experiments on the Sky Time-lapse dataset; the results demonstrate the superiority of our approach over state-of-the-art methods in generating high-quality and dynamic videos, as well as its ability to generate diverse videos with varied motion.
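To make the motion-stream idea concrete, below is a minimal PyTorch sketch of how AdaIN layers can inject a normalized motion vector into content features via linear layers. It is an illustrative approximation based only on the abstract: the module names, the 128-dimensional motion vector, and the layer sizes are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AdaIN(nn.Module):
    """Adaptive instance normalization: normalizes content features, then
    re-scales them with a per-channel (scale, bias) pair predicted from a
    motion vector by a linear layer."""
    def __init__(self, motion_dim, num_features):
        super().__init__()
        self.norm = nn.InstanceNorm2d(num_features, affine=False)
        self.affine = nn.Linear(motion_dim, num_features * 2)

    def forward(self, content, motion):
        # content: (B, C, H, W) feature map from the content stream
        # motion:  (B, motion_dim) normalized motion vector
        scale, bias = self.affine(motion).chunk(2, dim=1)
        scale = scale.unsqueeze(-1).unsqueeze(-1)
        bias = bias.unsqueeze(-1).unsqueeze(-1)
        return (1 + scale) * self.norm(content) + bias


class MotionModulatedBlock(nn.Module):
    """One hypothetical stage of the motion stream: a conv over shared
    content features followed by AdaIN conditioning on the motion vector."""
    def __init__(self, in_ch, out_ch, motion_dim):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.adain = AdaIN(motion_dim, out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, content, motion):
        return self.act(self.adain(self.conv(content), motion))


if __name__ == "__main__":
    motion_dim = 128                             # assumed motion-vector size
    block = MotionModulatedBlock(64, 64, motion_dim)
    content_feat = torch.randn(2, 64, 32, 32)    # shared content features from the encoder
    motion_vec = torch.randn(2, motion_dim)
    motion_vec = motion_vec / motion_vec.norm(dim=1, keepdim=True)  # normalize
    out = block(content_feat, motion_vec)
    print(out.shape)  # torch.Size([2, 64, 32, 32])
```

Feeding the same content features with different normalized motion vectors through such blocks is one plausible way the described framework could produce videos that share content but differ in motion.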


Related research

08/03/2023
MVFlow: Deep Optical Flow Estimation of Compressed Videos with Motion Vector Prior
In recent years, many deep learning-based methods have been proposed to ...

09/06/2021
Learning Fine-Grained Motion Embedding for Landscape Animation
In this paper we focus on landscape animation, which aims to generate ti...

04/14/2022
Human Identity-Preserved Motion Retargeting in Video Synthesis by Feature Disentanglement
Most motion retargeting methods in human action video synthesis decompos...

03/11/2019
Video Generation from Single Semantic Label Map
This paper proposes the novel task of video generation conditioned on a ...

05/12/2019
On Flow Profile Image for Video Representation
Video representation is a key challenge in many computer vision applicat...

12/21/2021
Continuous-Time Video Generation via Learning Motion Dynamics with Neural ODE
In order to perform unconditional video generation, we must learn the di...

04/18/2018
Superframes, A Temporal Video Segmentation
The goal of video segmentation is to turn video data into a set of concr...
