
Generating Image Sequence from Description with LSTM Conditional GAN

06/08/2018
by Xu Ouyang, et al.
Illinois Institute of Technology

Generating images from word descriptions is a challenging task. Generative adversarial networks (GANs) have been shown to generate realistic images of real-life objects. In this paper, we propose a new neural network architecture, an LSTM Conditional Generative Adversarial Network, to generate images of real-life objects. Our proposed model is trained on the Oxford-102 Flowers and Caltech-UCSD Birds-200-2011 datasets. We demonstrate that our proposed model produces better results, surpassing other state-of-the-art approaches.
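The abstract only names the architecture, but the core idea of conditioning a GAN generator on an LSTM-encoded text description can be sketched as follows. This is a minimal illustration in PyTorch, not the authors' implementation: the vocabulary size, embedding and hidden dimensions, noise dimension, and the 64x64 DCGAN-style decoder are all assumptions chosen for the example.

```python
# Minimal sketch of an LSTM-conditioned GAN generator (illustrative, not the paper's code).
import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    """Encodes a tokenized description into a fixed-length condition vector via an LSTM."""
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def forward(self, tokens):
        _, (h_n, _) = self.lstm(self.embed(tokens))
        return h_n[-1]                              # (batch, hidden_dim)

class Generator(nn.Module):
    """Maps a noise vector concatenated with the text condition to a 64x64 RGB image."""
    def __init__(self, noise_dim=100, cond_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(noise_dim + cond_dim, 512, 4, 1, 0), nn.BatchNorm2d(512), nn.ReLU(True),
            nn.ConvTranspose2d(512, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, noise, cond):
        # Treat the concatenated vector as a 1x1 feature map and upsample to 64x64.
        x = torch.cat([noise, cond], dim=1).unsqueeze(-1).unsqueeze(-1)
        return self.net(x)                          # (batch, 3, 64, 64)

# Usage sketch: encode a batch of token ids, sample noise, generate images.
encoder, gen = TextEncoder(vocab_size=5000), Generator()
tokens = torch.randint(0, 5000, (4, 16))            # 4 descriptions, 16 tokens each
images = gen(torch.randn(4, 100), encoder(tokens))
```

In a full pipeline, a discriminator conditioned on the same text vector would score (image, description) pairs, and both networks would be trained adversarially on captioned datasets such as Oxford-102 Flowers or Caltech-UCSD Birds-200-2011.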


Related research

10/19/2017  StackGAN++: Realistic Image Synthesis with Stacked Generative Adversarial Networks
Although Generative Adversarial Networks (GANs) have shown remarkable su...

06/29/2018  Generate the corresponding Image from Text Description using Modified GAN-CLS Algorithm
Synthesizing images or texts automatically is a useful research area in ...

07/08/2020  Words as Art Materials: Generating Paintings with Sequential GANs
Converting text descriptions into images using Generative Adversarial Ne...

03/18/2021  Impressions2Font: Generating Fonts by Specifying Impressions
Various fonts give us various impressions, which are often represented b...

05/15/2020  Generative Adversarial Networks for photo to Hayao Miyazaki style cartoons
This paper takes on the problem of transferring the style of cartoon ima...

08/09/2018  User-Guided Deep Anime Line Art Colorization with Conditional Adversarial Networks
Scribble colors based line art colorization is a challenging computer vi...