Generating Image Sequence from Description with LSTM Conditional GAN

06/08/2018
by Xu Ouyang, et al.

Generating images from word descriptions is a challenging task. Generative adversarial networks (GANs) have been shown to generate realistic images of real-life objects. In this paper, we propose a new neural network architecture, the LSTM Conditional Generative Adversarial Network, to generate images of real-life objects. Our proposed model is trained on the Oxford-102 Flowers and Caltech-UCSD Birds-200-2011 datasets. We demonstrate that it produces better results, surpassing other state-of-the-art approaches.
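The abstract only names the architecture, so as a rough illustration of the core idea (not the paper's actual model), the sketch below encodes a word sequence with a small hand-rolled LSTM and conditions a toy generator on that encoding by concatenating it with a noise vector. All dimensions, parameter names, and the single-matrix "generator" are hypothetical choices for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_encode(embeddings, W, U, b, hidden):
    """Run a single-layer LSTM over a sequence of word embeddings
    and return the final hidden state as the text encoding."""
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in embeddings:
        z = W @ x + U @ h + b            # all four gate pre-activations at once
        i, f, o, g = np.split(z, 4)      # input, forget, output gates; candidate
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)       # update cell state
        h = o * np.tanh(c)               # new hidden state
    return h

# Toy dimensions (illustrative only, not taken from the paper).
embed_dim, hidden, noise_dim, img_pixels = 8, 16, 10, 64

# Random, untrained LSTM parameters; the four gates are stacked row-wise.
W = rng.normal(size=(4 * hidden, embed_dim)) * 0.1
U = rng.normal(size=(4 * hidden, hidden)) * 0.1
b = np.zeros(4 * hidden)

# A "description" of 5 words, stood in for by random embeddings.
words = rng.normal(size=(5, embed_dim))
text_code = lstm_encode(words, W, U, b, hidden)

# Conditional generation: noise z concatenated with the text encoding,
# mapped through a single linear layer and squashed to [-1, 1] pixel range.
z = rng.normal(size=noise_dim)
G = rng.normal(size=(img_pixels, noise_dim + hidden)) * 0.1
fake_image = np.tanh(G @ np.concatenate([z, text_code]))
```

In a real conditional GAN the generator would be a deep deconvolutional network and the discriminator would receive the same text encoding, so that it can reject images that do not match the description.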


Related research

10/19/2017 | StackGAN++: Realistic Image Synthesis with Stacked Generative Adversarial Networks
Although Generative Adversarial Networks (GANs) have shown remarkable su...

11/29/2021 | Generative Adversarial Networks with Conditional Neural Movement Primitives for An Interactive Generative Drawing Tool
Sketches are abstract representations of visual perception and visuospat...

06/29/2018 | Generate the corresponding Image from Text Description using Modified GAN-CLS Algorithm
Synthesizing images or texts automatically is a useful research area in ...

03/18/2021 | Impressions2Font: Generating Fonts by Specifying Impressions
Various fonts give us various impressions, which are often represented b...

05/15/2020 | Generative Adversarial Networks for photo to Hayao Miyazaki style cartoons
This paper takes on the problem of transferring the style of cartoon ima...

07/08/2020 | Words as Art Materials: Generating Paintings with Sequential GANs
Converting text descriptions into images using Generative Adversarial Ne...

08/09/2018 | User-Guided Deep Anime Line Art Colorization with Conditional Adversarial Networks
Scribble colors based line art colorization is a challenging computer vi...
