Video Imagination from a Single Image with Transformation Generation

06/13/2017
by Baoyang Chen, et al.

In this work, we focus on a challenging task: synthesizing multiple imaginary videos given a single image. The major difficulties are the high dimensionality of pixel space and the ambiguity of potential motions. To overcome these problems, we propose a new framework that produces imaginary videos by transformation generation. The generated transformations are applied to the original image in a novel volumetric merge network to reconstruct the frames of the imaginary video. By sampling different latent variables, our method can output different imaginary video samples. The framework is trained adversarially with unsupervised learning. For evaluation, we propose a new assessment metric, RIQA. In experiments, we test on three datasets ranging from synthetic data to natural scenes. Our framework achieves promising performance in image quality assessment, and visual inspection indicates that it can successfully generate diverse five-frame videos of acceptable perceptual quality.
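A minimal PyTorch sketch (not the authors' code) of the idea described above: a generator conditioned on the input image and a latent vector predicts one transformation per output frame, applies it to the image, and blends the transformed copy with the original using predicted per-pixel weights. For simplicity, affine warps and a two-way blend stand in for the paper's generated transformations and volumetric merge network; the class and parameter names, layer sizes, and the latent dimension of 100 are illustrative assumptions.

```python
# Illustrative sketch only; names, layer sizes, and the use of affine warps
# are assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImaginaryVideoGenerator(nn.Module):
    """Predict one transformation per future frame from (image, z), warp the
    input image with it, and merge the warped copy with the original via
    predicted per-pixel weights (a stand-in for the volumetric merge step)."""

    def __init__(self, num_frames=5, latent_dim=100):
        super().__init__()
        self.num_frames = num_frames
        # Image encoder: a small CNN that summarizes the input frame.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Transformation generator: one 2x3 affine matrix per output frame.
        self.to_theta = nn.Linear(64 + latent_dim, num_frames * 6)
        # Merge network: per-pixel blending weights between warped and original.
        self.merge = nn.Sequential(
            nn.Conv2d(6, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, image, z):
        b = image.size(0)
        code = torch.cat([self.encoder(image), z], dim=1)
        theta = self.to_theta(code).view(b, self.num_frames, 2, 3)
        frames = []
        for i in range(self.num_frames):
            grid = F.affine_grid(theta[:, i], image.size(), align_corners=False)
            warped = F.grid_sample(image, grid, align_corners=False)
            alpha = self.merge(torch.cat([image, warped], dim=1))   # (B, 1, H, W)
            frames.append(alpha * warped + (1 - alpha) * image)     # per-pixel merge
        return torch.stack(frames, dim=1)                           # (B, T, 3, H, W)

# Sampling different latent vectors yields different imaginary videos.
gen = ImaginaryVideoGenerator()
img = torch.rand(1, 3, 64, 64)
samples = [gen(img, torch.randn(1, 100)) for _ in range(2)]
print(samples[0].shape)  # torch.Size([1, 5, 3, 64, 64])
```

Only the generator side is sketched here; in the paper the framework is trained adversarially, with a discriminator on the reconstructed frames providing the unsupervised learning signal.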


