Generative Adversarial Text to Image Synthesis

05/17/2016 ∙ by Scott Reed, et al.

Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image modeling, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions.
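
The abstract's key architectural idea is to condition a DCGAN-style generator on a learned text embedding: the description is encoded, compressed to a low-dimensional vector, concatenated with the noise input, and decoded into pixels. The PyTorch sketch below illustrates that conditioning scheme. The dimensions (a 100-d noise vector, a 1024-d text embedding, a 128-d projection, 64x64 output images) follow the paper, but the class name, the exact layer stack, and the random tensor standing in for a real char-CNN-RNN text encoder are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class TextConditionedGenerator(nn.Module):
    """Sketch of a text-conditioned DCGAN generator in the spirit of the
    paper: project the text embedding down, concatenate it with noise,
    and upsample to a 64x64 RGB image with transposed convolutions."""

    def __init__(self, z_dim=100, embed_dim=1024, proj_dim=128):
        super().__init__()
        # Compress the (assumed pretrained) text embedding to proj_dim.
        self.project = nn.Sequential(
            nn.Linear(embed_dim, proj_dim),
            nn.LeakyReLU(0.2, inplace=True),
        )
        # DCGAN-style decoder: 1x1 -> 4x4 -> 8x8 -> 16x16 -> 32x32 -> 64x64.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(z_dim + proj_dim, 512, 4, 1, 0, bias=False),
            nn.BatchNorm2d(512), nn.ReLU(True),
            nn.ConvTranspose2d(512, 256, 4, 2, 1, bias=False),
            nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),
            nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),
            nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1, bias=False),
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z, text_embedding):
        cond = self.project(text_embedding)
        # Fuse noise and text code, reshape to a 1x1 spatial map, decode.
        x = torch.cat([z, cond], dim=1).unsqueeze(-1).unsqueeze(-1)
        return self.decoder(x)

# Usage: generate one image from a (random stand-in) caption embedding.
generator = TextConditionedGenerator()
z = torch.randn(1, 100)       # noise vector
phi_t = torch.randn(1, 1024)  # would come from a text encoder in practice
image = generator(z, phi_t)   # shape: (1, 3, 64, 64)
```

In the full model the discriminator receives the same compressed embedding, replicated spatially and concatenated with its convolutional feature maps, so it can penalize images that look realistic but do not match the caption.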

Code Repositories

Text-to-Image-Synthesis

PyTorch implementation of the Generative Adversarial Text-to-Image Synthesis paper

anime-character-generation

Homework 3 for the MLDS course (summer 2017, NTU)

dcgan.label-to-image

Generative Adversarial Label to Image Synthesis

Text-To-Image-Synthesis

Generates an image from a caption