
Generating Image Sequence from Description with LSTM Conditional GAN

by Xu Ouyang, et al.
Illinois Institute of Technology

Generating images from word descriptions is a challenging task. Generative adversarial networks (GANs) have been shown to generate realistic images of real-life objects. In this paper, we propose a new neural network architecture, the LSTM Conditional Generative Adversarial Network, to generate images of real-life objects. Our proposed model is trained on the Oxford-102 Flowers and Caltech-UCSD Birds-200-2011 datasets. We demonstrate that our proposed model produces better results, surpassing other state-of-the-art approaches.
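The core idea of conditioning a GAN on a text description can be sketched as follows: an LSTM reads the word sequence, and its final hidden state is concatenated with a noise vector to form the generator's input. This is only a minimal numpy sketch of that conditioning step; all sizes and weights are hypothetical stand-ins, and the actual architecture in the paper (layer counts, embedding dimensions, training procedure) is not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes -- not taken from the paper.
EMB, HID, NOISE = 8, 16, 10

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; gate pre-activations stacked as [i, f, o, g]."""
    z = W @ x + U @ h + b
    i, f, o = (1.0 / (1.0 + np.exp(-z[k * HID:(k + 1) * HID]))
               for k in range(3))            # input, forget, output gates
    g = np.tanh(z[3 * HID:])                 # candidate cell update
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

# Random parameters stand in for trained weights.
W = rng.normal(size=(4 * HID, EMB)) * 0.1
U = rng.normal(size=(4 * HID, HID)) * 0.1
b = np.zeros(4 * HID)

# A word description as a sequence of (pretend) word embeddings.
words = rng.normal(size=(5, EMB))
h, c = np.zeros(HID), np.zeros(HID)
for x in words:
    h, c = lstm_step(x, h, c, W, U, b)

# Conditioning: concatenate the text code with a noise vector; this
# combined vector would be fed to the conditional GAN's generator.
z = rng.normal(size=NOISE)
gen_input = np.concatenate([h, z])
print(gen_input.shape)  # (26,)
```

The discriminator would receive the same text code alongside real or generated images, so that both networks are conditioned on the description.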



