Semi-Recurrent CNN-based VAE-GAN for Sequential Data Generation

06/01/2018
by Mohammad Akbari, et al.

A semi-recurrent hybrid VAE-GAN model for generating sequential data is introduced. To capture the spatial correlation of the data within each frame of the generated sequence, CNNs are used in the encoder, generator, and discriminator. Each subsequent frame is sampled from the latent distribution obtained by encoding the previous frame, so the dependencies between frames are maintained. Two testing frameworks for synthesizing sequences with an arbitrary number of frames are also proposed. Promising experimental results on piano music generation indicate the potential of the proposed framework for modeling other sequential data such as video.
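To make the frame-to-frame sampling concrete, the sketch below illustrates the encode-sample-decode rollout described in the abstract: a CNN encoder maps the previous frame to a latent Gaussian, a latent code is sampled, and a CNN generator decodes it into the next frame. The module names (ConvEncoder, ConvGenerator, generate_sequence), layer sizes, and the assumed piano-roll frame shape are illustrative assumptions, not the paper's actual architecture; only the overall semi-recurrent dependency follows the text.

```python
# Minimal sketch (assumptions noted above), not the authors' implementation.
import torch
import torch.nn as nn

LATENT_DIM = 64
FRAME_CH, FRAME_H, FRAME_W = 1, 88, 16  # assumed piano-roll frame shape


class ConvEncoder(nn.Module):
    """CNN encoder mapping a frame to the parameters of a latent Gaussian."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(FRAME_CH, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        feat_dim = 64 * (FRAME_H // 4) * (FRAME_W // 4)
        self.fc_mu = nn.Linear(feat_dim, LATENT_DIM)
        self.fc_logvar = nn.Linear(feat_dim, LATENT_DIM)

    def forward(self, x):
        h = self.features(x)
        return self.fc_mu(h), self.fc_logvar(h)


class ConvGenerator(nn.Module):
    """CNN (transposed-convolution) generator mapping a latent code to a frame."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(LATENT_DIM, 64 * (FRAME_H // 4) * (FRAME_W // 4))
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, FRAME_CH, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, FRAME_H // 4, FRAME_W // 4)
        return self.deconv(h)


def reparameterize(mu, logvar):
    """Sample z ~ N(mu, sigma^2) via the reparameterization trick."""
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)


@torch.no_grad()
def generate_sequence(encoder, generator, first_frame, num_frames):
    """Roll out a sequence: encode the previous frame, sample a latent code,
    decode the next frame, and repeat (the semi-recurrent dependency)."""
    frames = [first_frame]
    for _ in range(num_frames - 1):
        mu, logvar = encoder(frames[-1])
        z = reparameterize(mu, logvar)
        frames.append(generator(z))
    return torch.stack(frames, dim=1)  # (batch, time, C, H, W)


if __name__ == "__main__":
    enc, gen = ConvEncoder(), ConvGenerator()
    seed = torch.rand(1, FRAME_CH, FRAME_H, FRAME_W)  # seed frame
    seq = generate_sequence(enc, gen, seed, num_frames=8)
    print(seq.shape)  # torch.Size([1, 8, 1, 88, 16])
```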
