Photographic text-to-image synthesis is a significant problem in generative model research, which aims to learn a mapping from a semantic text space to a complex RGB image space. The task requires generated images to be not only realistic but also semantically consistent, i.e., they should preserve the specific object sketches and semantic details described in the text.
Recently, generative adversarial networks (GANs) have become the main solution to this task. Reed et al. address it with a GAN-based framework, but their method only generates low-resolution images and can barely produce vivid object details. Building on it, StackGAN stacks a second low-to-high resolution GAN to generate higher-resolution images, but this requires training two separate GANs. Later work bypasses the difficulty of learning a mapping from text to RGB images by treating it as a pixel-to-pixel translation problem: an arbitrary-style training image is re-rendered conditioned on a target description. However, its high-resolution synthesis capability is unclear. At present, training a generative model to map from a low-dimensional text space to a high-resolution image space in a fully end-to-end manner remains unsolved.
This paper focuses on two major difficulties of text-to-image synthesis with GANs. The first is balancing the convergence between generators and discriminators [11, 37], a common problem in GANs. The second is stably modeling the huge pixel space of high-resolution images while guaranteeing semantic consistency. An effective strategy for regularizing generators is critical to stabilize training and help capture complex image statistics.
In this paper, we propose a novel end-to-end method that can directly model high-resolution image statistics and generate photographic images (see Figure 1 bottom). The contributions are described as follows.
Our generator resembles a simple vanilla GAN, requiring neither multi-stage training with multiple internal text conditionings nor additional class-label supervision. To tackle the big leap from the text space to the image space, our insight is to leverage and regularize hierarchical representations with additional 'deep' adversarial constraints (see Figure 1 top). We introduce accompanying hierarchically-nested discriminators at multi-scale intermediate layers to play adversarial games and jointly encourage the generator to approach the real training data distribution. We also propose a new convolutional neural network (CNN) design for the generator that supports the accompanying discriminators more effectively. To guarantee image diversity and semantic consistency, we enforce discriminators at multiple side outputs of the generator to simultaneously differentiate real-and-fake image-text pairs as well as real-and-fake local image patches.
We validate the proposed method on three datasets: CUB birds, Oxford-102 flowers, and the large-scale MSCOCO. To complement existing evaluation metrics for generative models (e.g., the Inception score), we also introduce a new visual-semantic similarity metric that evaluates the alignment between generated images and the conditioning text, alleviating the need for expensive human evaluation. Extensive experimental results and analysis demonstrate the effectiveness of our method and significantly improved performance over the previous state of the art on all three evaluation metrics. All source code will be released.
2 Related Work
Deep generative models have attracted wide interest recently, including GANs [11, 33] and variational auto-encoders (VAEs). Substantial existing work investigates better usage of GANs for different applications, such as image synthesis [33, 38], (unpaired) pixel-to-pixel translation [15, 52], and medical applications [5, 50], among others [22, 45, 13, 46].
Text-to-image synthesis is an interesting application of GANs. Reed et al. are the first to introduce a method that generates plausible low-resolution images, together with a new strategy for image-text matching-aware adversarial training. Reed et al. further propose a generative adversarial what-where network (GAWWN) to enable location and content instructions in text-to-image synthesis. Zhang et al. propose a two-stage training strategy that is able to generate compelling images. Recently, Dong et al. propose to learn a joint embedding of images and text so as to re-render a prototype image conditioned on a target description. Cha et al. explore the use of a perceptual loss with a CNN pretrained on ImageNet, and Dash et al. make use of auxiliary classifiers to assist GAN training for text-to-image synthesis. Xu et al. present an attention-driven method to improve fine-grained details.
Learning a continuous mapping from a low-dimensional manifold to a complex real-image distribution is a long-standing problem. Although GANs have made significant progress, many difficulties remain unsolved, e.g., training instability and high-resolution generation. A wide range of methods has been proposed to address training instability, such as various training techniques [36, 2, 38, 31], regularization using extra knowledge (e.g., image labels or ImageNet-pretrained CNNs) [9, 22, 6], and different generator-discriminator combinations [28, 10, 13]. In contrast, our method shows a new way to unite generators and discriminators and does not require any extra knowledge apart from paired training text and images. In addition, the training difficulty increases significantly as the target image resolution increases.
To synthesize high-resolution images, cascaded networks are effective for decomposing an originally difficult task into multiple subtasks (Figure 2 A). Denton et al. train a cascade of GANs in a Laplacian pyramid framework (LAPGAN), using each to synthesize and refine image details and push up the output resolution stage by stage. StackGAN shares similar ideas with LAPGAN. Inspired by this strategy, Chen et al. present a cascaded refinement network to synthesize high-resolution scenes from semantic maps. Recently, Karras et al. propose progressive training of GANs, gradually adding symmetric generator and discriminator layers for high-resolution image generation (Figure 2 C). Compared with these strategies, which train low-to-high resolution GANs stage-by-stage or progressively, our method leverages mid-level representations to encourage the integration of multiple subtasks, making end-to-end high-resolution image synthesis in a single vanilla-like GAN possible.
Leveraging hierarchical representations of CNNs is an effective way to enhance implicit multi-scaling and ensembling for tasks such as image recognition [23, 49] and pixel or object classification [42, 26, 51]. Particularly, using deep supervision  at intermediate convolutional layers provides short error paths and increases the discriminativeness of feature representations. Our hierarchically-nested adversarial objective is inspired by the family of deeply-supervised CNNs.
3.1 Adversarial objective basics
In brief, a GAN consists of a generator G and a discriminator D, which are alternately trained to compete with each other: D is optimized to distinguish synthesized images from real ones, while G is trained to fool D by synthesizing fake images. Concretely, the optimal G and D can be obtained by playing the following two-player min-max game,

min_G max_D V(D, G) = E_{x~p_data}[log D(x)] + E_{z~p_z}[log(1 - D(G(z)))],   (1)

where x is a real image and z is a noise vector sampled from a prior p_z.
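To make the objective concrete, the following toy sketch (pure Python, with illustrative scores standing in for discriminator outputs; not tied to any actual network) evaluates the value function of Eq. (1) on finite samples:

```python
import math

def gan_value(d_real, d_fake):
    # Finite-sample value of the two-player game in Eq. (1):
    # E[log D(x)] + E[log(1 - D(G(z)))]; D maximizes it, G minimizes it.
    e_real = sum(math.log(p) for p in d_real) / len(d_real)
    e_fake = sum(math.log(1.0 - p) for p in d_fake) / len(d_fake)
    return e_real + e_fake

# A confident discriminator attains a higher value than a fooled one.
confident = gan_value([0.9, 0.9], [0.1, 0.1])   # D nearly perfect
fooled = gan_value([0.5, 0.5], [0.5, 0.5])      # D cannot tell real from fake
```

The fooled case, where D outputs 0.5 everywhere, is the equilibrium G pushes toward.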
3.2 Hierarchical-nested adversarial objectives
Numerous GAN methods have demonstrated ways to unite generators and discriminators for image synthesis; Figure 2 and Section 2 discuss some typical frameworks. Our method explores a new dimension of playing this adversarial game along the depth of the generator (Figure 2 D), integrating additional hierarchically-nested discriminators at intermediate layers of the generator. The proposed objectives act as regularizers on the hidden space of G, offer short paths for error signals, and help reduce training instability.
The proposed generator G is a CNN (defined in Section 3.4) that produces multiple side outputs,

X_1, …, X_s = G(t, z),

where t denotes a sentence embedding (generated by a pre-trained char-RNN text encoder) and z is a noise vector. X_1, …, X_{s-1} are images with gradually growing resolutions and X_s is the final output with the highest resolution.
For each side output X_i of the generator, a distinct discriminator D_i is used to compete with it. Our full min-max objective is therefore defined over all scales,

min_G max_{D_1,…,D_s} Σ_{i=1}^{s} V(D_i, G),   (2)

where Y_1, …, Y_s denote the training images at the corresponding scales. Compared with Eq. (1), our generator competes with multiple discriminators at different hierarchies (Figure 2 D), which jointly learn discriminative features at different contextual scales.
In principle, the lower-resolution side outputs learn semantically consistent image structures (e.g., object sketches, colors, and background), while the subsequent higher-resolution side outputs render fine-grained details. Since our method is trained end-to-end, the lower-resolution outputs can also fully utilize top-down knowledge from the discriminators at higher resolutions. As a result, we observe consistent image structures, colors, and styles in both the low- and high-resolution outputs. Our experiments demonstrate this advantage over StackGAN.
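As an illustration of the nested objective, the toy sketch below (NumPy, with random numbers standing in for discriminator scores; the function names are ours, not the paper's) sums a least-squares adversarial loss over several side-output scales, mirroring the summation over D_1, …, D_s in Eq. (2):

```python
import numpy as np

def lsgan_d_loss(d_real, d_fake):
    # Least-squares discriminator loss at one scale:
    # push scores on real images toward 1 and on fakes toward 0.
    return float(np.mean((d_real - 1.0) ** 2) + np.mean(d_fake ** 2))

def hierarchical_d_loss(real_scores, fake_scores):
    # Sum the per-scale losses, mirroring the summation over the
    # hierarchically-nested discriminators D_1, ..., D_s.
    return sum(lsgan_d_loss(r, f) for r, f in zip(real_scores, fake_scores))

# Toy scores from three discriminators at growing resolutions.
rng = np.random.default_rng(0)
real = [rng.uniform(0.8, 1.0, size=4) for _ in range(3)]
fake = [rng.uniform(0.0, 0.2, size=4) for _ in range(3)]
loss = hierarchical_d_loss(real, fake)
```

Each scale contributes its own error signal, which is what gives the intermediate generator layers a short path to gradients.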
3.3 Multi-purpose adversarial losses
Our generator produces resolution-growing side outputs that compose an image pyramid. We leverage this hierarchical property and let the adversarial losses capture hierarchical image statistics, with the goal of guaranteeing both semantic consistency and image fidelity.
To guarantee semantic consistency, we adopt the matching-aware pair loss of Reed et al. The discriminator takes image-text pairs as input and is trained to identify two types of errors: a real image with mismatched text, and a fake image with its conditioning text.
The pair loss is designed to guarantee global semantic consistency. However, it provides no explicit signal for the discriminator to differentiate real images from fake ones, and combining both tasks (generating realistic images and matching image content with text) into one network output complicates an already challenging learning task. Moreover, as the image resolution grows, it becomes harder for a global pair-loss discriminator to capture local fine-grained details (validated in our experiments). In addition, as pointed out in prior work, a single global discriminator may over-emphasize certain biased local features and produce artifacts.
To alleviate these issues and guarantee image fidelity, our solution is to add local adversarial image losses. We expect the low-resolution discriminators to focus on global structures and the high-resolution discriminators to focus on local image details. Each discriminator consists of two branches (see Section 3.4): one computes a single scalar value for the pair loss, and the other computes a 2D probability map R_i for the local image loss. For each scale i, we adjust the discriminator accordingly to tune the receptive field of each element of R_i, which differentiates whether the corresponding local image patch is real or fake. Local GAN losses are also used in pixel-to-pixel translation tasks [38, 52, 15]. Figure 3 illustrates how the hierarchically-nested discriminators compute the two losses on the generated image pyramid.
Full objective. Overall, our full min-max adversarial objective combines the pair loss and the local image loss at every scale, with the mean-square loss (instead of the conventional cross-entropy loss) as the adversarial criterion. The objective is minimized by G; in practice, G instead minimizes the mean-square distance between the discriminator outputs on generated samples and the real label, the standard non-saturating alternative. For the local image loss, the shape of the probability map R_i varies accordingly (see Figure 3), with the largest local receptive field corresponding to the global range. Each D_i has an image-loss branch and a pair-loss branch (the latter conditioned on the text embedding t); a matched image-text pair serves as a positive example and a mismatched image-text pair as a negative example.
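The two loss types can be sketched as follows. This is a simplified numerical illustration with hypothetical branch outputs, using the mean-square criterion described above rather than the paper's exact implementation:

```python
import numpy as np

def local_image_loss(prob_map, is_real):
    # Local adversarial image loss: each element of the 2D probability
    # map judges one image patch; mean-square error against a 0/1 target.
    target = 1.0 if is_real else 0.0
    return float(np.mean((prob_map - target) ** 2))

def pair_loss(score, matched):
    # Matching-aware pair loss on a single scalar image-text score.
    target = 1.0 if matched else 0.0
    return (score - target) ** 2

# Hypothetical outputs of one discriminator's two branches.
patch_map = np.full((5, 5), 0.9)   # local branch: 5x5 patch decisions
img_loss = local_image_loss(patch_map, is_real=True)
p_loss = pair_loss(0.8, matched=True)
```

Separating the two branches lets the patch map specialize in local realism while the scalar score handles image-text matching.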
Following the conditioning augmentation (CA) technique, instead of directly using the deterministic text embedding t, we sample a stochastic vector from a Gaussian distribution N(μ(t), Σ(t)), where the mean μ and covariance Σ are parameterized functions of t. We add the Kullback-Leibler divergence regularization term, D_KL(N(μ(t), Σ(t)) ‖ N(0, I)), to the GAN objective to prevent over-fitting and to force smooth sampling over the text embedding distribution.
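A minimal sketch of conditioning augmentation, assuming a diagonal Gaussian parameterized by a log-variance output (a common convention, not stated explicitly in the text):

```python
import numpy as np

def conditioning_augmentation(mu, log_var, rng):
    # Reparameterization trick: sample t_hat = mu + sigma * eps, eps ~ N(0, I),
    # so the sampling stays differentiable w.r.t. mu and log_var.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # KL( N(mu, diag(sigma^2)) || N(0, I) ), the regularizer added to the objective.
    return 0.5 * float(np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var))

rng = np.random.default_rng(1)
mu, log_var = np.zeros(4), np.zeros(4)    # sigma = 1 everywhere
sample = conditioning_augmentation(mu, log_var, rng)
kl = kl_to_standard_normal(mu, log_var)   # exactly 0 when mu = 0, sigma = 1
```

The KL term is zero only when the conditional Gaussian collapses onto the standard normal, which is what pulls the text codes toward a smooth, well-covered latent region.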
3.4 Architecture Design
Generator The generator is composed of three kinds of modules, termed k-repeat res-blocks, stretching layers, and linear compression layers. A single res-block in a k-repeat res-block is a modified residual block containing two convolutional (conv) layers (with batch normalization (BN) and ReLU); we remove the ReLU after the skip-addition of each residual block, with the intention of reducing sparse gradients. The stretching layer changes the feature map size and dimension: it simply contains a nearest-neighbor up-sampling layer followed by a conv layer with BN+ReLU. The linear compression layer is a single conv layer followed by a Tanh that directly compresses feature maps to the RGB space; we avoid any non-linear function in the compression layer that could impede the gradient signals. Starting from an embedding computed by CA and a trained embedding matrix, the generator uses k-repeat res-blocks connected by in-between stretching layers until the feature maps reach the target resolution; the number of res-blocks and stretching layers is determined by that resolution. At pre-defined side-output scales, we apply a compression layer to generate side-output images as inputs for the discriminators.
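The coarse-to-fine layout can be sketched as a resolution schedule. All concrete numbers below (starting resolution, channel counts, the halving rule) are illustrative assumptions, not the paper's configuration:

```python
def generator_schedule(start_res, start_channels, target_res, side_scales):
    # Sketch of the coarse-to-fine plan: each stretching layer doubles
    # the feature-map resolution (here also halving channels) until the
    # target resolution; a linear compression layer emits a side-output
    # image at each requested scale.
    res, ch, plan = start_res, start_channels, []
    while res < target_res:
        res *= 2
        ch = max(ch // 2, 8)
        plan.append((res, ch, res in side_scales))
    return plan

plan = generator_schedule(4, 512, 256, side_scales={64, 128, 256})
```

For these toy settings the plan walks 8 → 16 → 32 → 64 → 128 → 256, emitting side outputs at the last three scales.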
Discriminator The discriminator simply contains consecutive stride-2 conv layers with BN+LeakyReLU. Two branches are added on top of its upper layers. One branch is fully convolutional and produces a probability map (see Figure 3), classifying each location as real or fake. The other branch first concatenates the feature map with a spatially replicated, dimension-reduced text embedding, then uses a conv layer to fuse text and image features and another conv layer to classify the image-text pair as real or fake.
The optimization is similar to the standard alternative training strategy in GANs. Please refer to the supplementary material for more training and network details.
We denote our method as HDGAN, referring to High-Definition results and the idea of Hierarchically-nested Discriminators.
Dataset We evaluate our model on three widely used datasets. The CUB dataset contains 11,788 bird images belonging to 200 categories. The Oxford-102 dataset contains 8,189 flower images in 102 categories. Each image in both datasets is annotated with 10 descriptions. We pre-process and split the images of CUB and Oxford-102 following the same pipeline as [34, 47]. The COCO dataset contains 82,783 training images and 40,504 validation images, each with 5 text annotations. We use a pre-trained char-RNN text encoder to encode each sentence into a 1024-d text embedding vector.
Evaluation metric We use three quantitative metrics to evaluate our method. 1) The Inception score measures both the objectness and diversity of generated images; computing it requires an Inception model pre-trained on ImageNet. For CUB and Oxford-102, we use the Inception models fine-tuned on the training sets of the two datasets, respectively, provided by StackGAN. 2) The multi-scale structural similarity (MS-SSIM) metric is used for further validation: it tests the pair-wise similarity of generated images and can identify mode collapse reliably. A lower score indicates higher diversity of the generated images (i.e., less mode collapse).
3) Visual-semantic similarity The aforementioned metrics are widely used for evaluating standard GANs, but they cannot measure the alignment between generated images and the conditioning text, i.e., semantic consistency. Prior work resorts to human evaluation, but this procedure is expensive and difficult to conduct. To tackle this issue, we introduce a new measurement, namely visual-semantic similarity (VS similarity). The idea is to train a visual-semantic embedding model and use it to measure the distance between synthesized images and the input text. Let v denote an image feature vector extracted by an Inception model, and define the scoring function c(·, ·) as the cosine similarity between two embedded vectors. We train two mapping functions, f_v and f_t, which map image features and paired text embeddings into a common space, by minimizing the following bi-directional ranking loss:

Σ_{(v,t)} [ Σ_{t̄} max(0, α - c(f_v(v), f_t(t)) + c(f_v(v), f_t(t̄))) + Σ_{v̄} max(0, α - c(f_v(v), f_t(t)) + c(f_v(v̄), f_t(t))) ],
where α is the margin (set to 0.2), (v, t) is a ground-truth image-text pair, and t̄ and v̄ denote mismatched text and image samples. At test time, given a text embedding t and a generated image with feature vector v', the VS score is computed as c(f_v(v'), f_t(t)). A higher score indicates better semantic consistency.
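The scoring function and ranking loss might be sketched as below, assuming cosine similarity and treating f_v and f_t as identity mappings for brevity (in practice both are learned):

```python
import numpy as np

def cosine(u, v):
    # Cosine-similarity scoring function c(u, v).
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def bidirectional_ranking_loss(img, txt, img_neg, txt_neg, margin=0.2):
    # A matched pair should beat both a mismatched text and a
    # mismatched image by at least the margin, in both directions.
    pos = cosine(img, txt)
    return (max(0.0, margin - pos + cosine(img, txt_neg))
            + max(0.0, margin - pos + cosine(img_neg, txt)))

# A perfectly aligned pair with orthogonal negatives incurs zero loss.
img, txt = np.array([1.0, 0.0]), np.array([1.0, 0.0])
img_neg, txt_neg = np.array([0.0, 1.0]), np.array([0.0, 1.0])
loss = bidirectional_ranking_loss(img, txt, img_neg, txt_neg)
```

Once the hinge is satisfied by the margin in both directions, the loss is exactly zero, so training pressure concentrates on the pairs that are still confusable.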
This score was recently updated in the released source code.
4.1 Comparative Results
To validate the proposed HDGAN, we compare with GAN-INT-CLS, GAWWN, TAC-GAN, Progressive GAN, StackGAN, and its improved version StackGAN++ (StackGAN++ and Prog.GAN are two very recently released preprints; we acknowledge them as they also target high-resolution image generation). We compare with StackGAN in particular detail (its results are obtained from the provided models).
Table 1 compares the Inception scores. We follow the experimental settings of StackGAN to sample images for computing the score. HDGAN achieves significant improvements over the other methods; for example, it clearly outperforms both StackGAN and StackGAN++ on CUB. HDGAN achieves competitive results with TAC-GAN on Oxford-102, although TAC-GAN uses image labels to increase the discriminability of its generator while we use no extra knowledge. Figures 4 and 5 compare qualitative results with StackGAN on CUB and Oxford-102, respectively, showing more semantic details, more natural colors, and more complex object structures. Moreover, we qualitatively compare the diversity of samples conditioned on the same text (with random input noise) in Figure 7 left; HDGAN generates substantially more compelling samples.
Different from CUB and Oxford-102, COCO is a much more challenging dataset containing highly diverse natural scenes. Our method significantly outperforms StackGAN on it as well (Table 1). Figure 15 shows generated samples for several different scenes; please refer to the supplementary material for more results.
Furthermore, the right figure compares the multi-resolution Inception scores on CUB, where our results come from the side outputs of a single model. Our results substantially outperform those of StackGAN at every resolution, demonstrating that HDGAN better preserves semantically consistent information across all resolutions (as stated in Section 3.2). Figure 7 right validates this property qualitatively. In contrast, we observe that StackGAN's low-resolution and high-resolution images are sometimes visually inconsistent (see examples in Figures 4 and 5).
Table 2 compares the proposed visual-semantic similarity (VS) results on the three datasets. The scores of the ground-truth image-text pairs are also shown for reference. HDGAN achieves consistently better performance on both CUB and Oxford-102. These results demonstrate that HDGAN better captures the visual-semantic information in generated images.
Table 3 compares the MS-SSIM scores with StackGAN and Prog.GAN for bird image generation. StackGAN and HDGAN take text as input, so the generated images are separable by class. We randomly sample image pairs (400 per class) and compare the class-wise scores in the left figure: HDGAN outperforms StackGAN in the majority of classes and also has a lower standard deviation. Note that Prog.GAN uses a noise input rather than text, so we compare with it only as a general measure of image diversity. Following the procedure of Prog.GAN, we randomly sample image pairs from all generated samples (we use the bird images provided by Prog.GAN at https://github.com/tkarras/progressive_growing_of_gans; note that Prog.GAN is trained on the LSUN bird set, which contains millions of bird images) and show the results in Table 3 right. HDGAN outperforms both methods.
4.2 Style Transfer Using Sentence Interpolation
Ideally, a well-trained model generalizes to a smooth linear latent data manifold. To demonstrate this capability, we generate images using linearly interpolated embeddings between two source sentences. As shown in Figure 8, the generated images exhibit smooth style transformations and faithfully reflect the semantic details of the sentences. For example, the second row uses complicated sentences with detailed appearance descriptions (e.g., pointy beaks and black wings); our model still successfully captures these subtle features and tunes the bird's appearance smoothly.
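The interpolation itself is simple linear blending of the two sentence embeddings; a minimal sketch with toy vectors (the generator call that would render each intermediate embedding is omitted):

```python
import numpy as np

def interpolate_embeddings(t0, t1, steps):
    # Linear interpolation between two sentence embeddings; each
    # intermediate vector would be fed to the generator to render
    # one frame of the style transition.
    return [(1.0 - a) * t0 + a * t1 for a in np.linspace(0.0, 1.0, steps)]

t0 = np.array([1.0, 0.0, 0.0])   # embedding of the first sentence (toy)
t1 = np.array([0.0, 0.0, 1.0])   # embedding of the second sentence (toy)
path = interpolate_embeddings(t0, t1, steps=5)
```

Smoothly varying outputs along this path are evidence that the model has learned a continuous, rather than memorized, text-to-image mapping.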
4.3 Ablation Study and Discussion
Hierarchically-nested adversarial training Our hierarchically-nested discriminators regularize the layer representations at multiple scales. In Table 4, we demonstrate their effectiveness by removing subsets of the discriminators on both the CUB and COCO datasets. As can be seen, adding discriminators at more scales has positive effects, and using a discriminator at the lowest side-output scale is critical (compare the 64-256 and 128-256 cases). It remains uncertain whether adding even more discriminators, including at lower resolutions, would help; further validation will be conducted. StackGAN emphasizes the importance of using text embeddings not only at the input but also with intermediate features of the generator, reporting a large performance drop without doing so. Our method uses text embeddings only at the input, and our results strongly demonstrate the effectiveness of hierarchically-nested adversarial training in maintaining such semantic information and a high Inception score.
The local image loss We analyze the effectiveness of the proposed local adversarial image loss. Table 4 compares the case without it (denoted 'w/o local image loss'). The local image loss improves visual-semantic matching, as evidenced by a higher VS score; we hypothesize that the separate local image loss lets the pair loss focus more on learning semantic consistency. Furthermore, the local image loss helps generate more vivid image details. As demonstrated in Figure 9, although both models successfully capture the semantic details of the text, the 'w/ local' model renders the complex object structures described in the conditioning text more precisely.
Design principles StackGAN reports the failure of directly training a vanilla GAN to generate meaningful high-resolution images. We test this extreme case with our method by removing all nested discriminators except the last one. Our method still generates fairly meaningful results (first row of Table 4), which demonstrates the effectiveness of our proposed framework (see Section 3.4).
Initially, we tried sharing the top layers of the hierarchically-nested discriminators of HDGAN. The intuition is that all discriminators have the common goal of differentiating real from fake despite different scales, so sharing would reduce their inter-variance. However, we did not observe benefits from this mechanism, and our independent discriminators train fairly stably.
HDGAN has a very succinct framework compared with most existing methods, which [43, 3] add extra supervision on output images to 'inject' semantic information; this is shown to help improve the Inception score. However, it is unclear whether these strategies substantially improve visual quality, which is worth further study.
5 Conclusion

In this paper, we present a novel and effective method for generating images conditioned on text descriptions. We explore a new dimension of playing adversarial games along the depth of the generator via hierarchically-nested adversarial objectives, and adopt a multi-purpose adversarial loss to help render fine-grained image details. We also introduce a new evaluation metric for the semantic consistency between generated images and the conditioning text. Extensive experimental results demonstrate that our method, HDGAN, can generate high-resolution photographic images and performs significantly better than the existing state of the art on three public datasets.
-  J. L. Ba, J. R. Kiros, and G. E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
-  D. Berthelot, T. Schumm, and L. Metz. Began: Boundary equilibrium generative adversarial networks. arXiv preprint arXiv:1703.10717, 2017.
-  M. Cha, Y. Gwon, and H. T. Kung. Adversarial nets with perceptual losses for text-to-image synthesis. arXiv preprint arXiv:1708.09321, 2017.
-  Q. Chen and V. Koltun. Photographic image synthesis with cascaded refinement networks. ICCV, 2017.
-  P. Costa, A. Galdran, M. I. Meyer, M. D. Abràmoff, M. Niemeijer, A. M. Mendonça, and A. Campilho. Towards adversarial retinal image synthesis. arXiv preprint arXiv:1701.08974, 2017.
-  A. Dash, J. C. B. Gamboa, S. Ahmed, M. Z. Afzal, and M. Liwicki. Tac-gan-text conditioned auxiliary classifier generative adversarial network. arXiv preprint arXiv:1703.06412, 2017.
-  E. L. Denton, S. Chintala, R. Fergus, et al. Deep generative image models using a Laplacian pyramid of adversarial networks. In NIPS, pages 1486–1494, 2015.
-  H. Dong, S. Yu, C. Wu, and Y. Guo. Semantic image synthesis via adversarial learning. ICCV, 2017.
-  A. Dosovitskiy and T. Brox. Generating images with perceptual similarity metrics based on deep networks. In NIPS, 2016.
-  I. Durugkar, I. Gemp, and S. Mahadevan. Generative multi-adversarial networks. arXiv preprint arXiv:1611.01673, 2016.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014.
-  K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. In ECCV, 2016.
-  X. Huang, Y. Li, O. Poursaeed, J. Hopcroft, and S. Belongie. Stacked generative adversarial networks. CVPR, 2017.
-  S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
-  P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. CVPR, 2017.
-  J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV, 2016.
-  T. Karras, T. Aila, S. Laine, and J. Lehtinen. Progressive growing of gans for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196, 2017.
-  D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
-  D. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
-  R. Kiros, R. Salakhutdinov, and R. S. Zemel. Unifying visual-semantic embeddings with multimodal neural language models. arXiv preprint arXiv:1411.2539, 2014.
-  C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. CVPR, 2017.
-  C.-Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu. Deeply-supervised nets. In AISTATS, 2015.
-  T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft coco: Common objects in context. In ECCV, 2014.
-  M.-Y. Liu, T. Breuel, and J. Kautz. Unsupervised image-to-image translation networks. arXiv preprint arXiv:1703.00848, 2017.
-  J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
-  X. Mao, Q. Li, H. Xie, R. Y. Lau, Z. Wang, and S. P. Smolley. Least squares generative adversarial networks. arXiv preprint ArXiv:1611.04076, 2016.
-  L. Metz, B. Poole, D. Pfau, and J. Sohl-Dickstein. Unrolled generative adversarial networks. arXiv preprint arXiv:1611.02163, 2016.
-  T. D. Nguyen, T. Le, H. Vu, and D. Phung. Dual discriminator generative adversarial nets. In NIPS, 2017.
-  M.-E. Nilsback and A. Zisserman. Automated flower classification over a large number of classes. In ICVGIP, 2008.
-  A. Odena, C. Olah, and J. Shlens. Conditional image synthesis with auxiliary classifier gans. arXiv preprint arXiv:1610.09585, 2016.
-  A. v. d. Oord, N. Kalchbrenner, and K. Kavukcuoglu. Pixel recurrent neural networks. In ICML, 2016.
-  A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
-  S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee. Generative adversarial text to image synthesis. ICML, 2016.
-  S. E. Reed, Z. Akata, S. Mohan, S. Tenka, B. Schiele, and H. Lee. Learning what and where to draw. In NIPS, 2016.
-  T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training gans. In NIPS, 2016.
-  A. Shrivastava, T. Pfister, O. Tuzel, J. Susskind, W. Wang, and R. Webb. Learning from simulated and unsupervised images through adversarial training. CVPR, 2017.
-  C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, June 2015.
-  D. Ulyanov, A. Vedaldi, and V. Lempitsky. Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022, 2016.
-  P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona. Caltech-ucsd birds 200. 2010.
-  S. Xie and Z. Tu. Holistically-nested edge detection. In ICCV, 2015.
-  T. Xu, P. Zhang, Q. Huang, H. Zhang, Z. Gan, X. Huang, and X. He. Attngan: Fine-grained text to image generation with attentional generative adversarial networks. arXiv preprint arXiv:1711.10485, 2017.
-  F. Yu, A. Seff, Y. Zhang, S. Song, T. Funkhouser, and J. Xiao. Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015.
-  H. Zhang and K. Dana. Multi-style generative network for real-time transfer. arXiv preprint arXiv:1703.06953, 2017.
-  H. Zhang, V. Sindagi, and V. M. Patel. Image de-raining using a conditional generative adversarial network. arXiv preprint arXiv:1701.05957, 2017.
-  H. Zhang, T. Xu, H. Li, S. Zhang, X. Wang, X. Huang, and D. Metaxas. Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks. In ICCV, 2017.
-  H. Zhang, T. Xu, H. Li, S. Zhang, X. Wang, X. Huang, and D. Metaxas. Stackgan++: Text to photo-realistic image synthesis with stacked generative adversarial networks. arXiv preprint arXiv:1710.10916, 2017.
-  Z. Zhang, Y. Xie, F. Xing, M. Mcgough, and L. Yang. Mdnet: A semantically and visually interpretable medical image diagnosis network. In CVPR, 2017.
-  Z. Zhang, L. Yang, and Y. Zheng. Translating and segmenting multimodal medical volumes with cycle- and shape-consistency generative adversarial network. In CVPR, 2018.
-  H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia. Pyramid scene parsing network. In CVPR, 2017.
-  J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. ICCV, 2017.
6 Supplementary Material
6.1 Training and Architecture Details
The training procedure is similar to that of standard GANs, alternately updating the generator and the discriminators until convergence.
The Adam optimizer is used. The initial learning rate is set to 0.0002 and halved every 100 epochs (every 50 for COCO). The model is trained for 500 epochs in total (200 epochs for COCO). We configure side outputs at 4 different scales, and for the local image loss of these 4 side outputs we set the size of the local probability map at each scale accordingly. These numbers are not fine-tuned but set empirically; we believe better configurations remain to be explored.
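The step decay described above can be sketched as follows, using the stated 0.0002 base rate and 100-epoch period (the function name is ours):

```python
def learning_rate(epoch, base_lr=2e-4, decay_every=100):
    # Halve the base rate every `decay_every` epochs
    # (100 for CUB/Oxford-102, 50 for COCO).
    return base_lr * (0.5 ** (epoch // decay_every))
```

For example, the rate stays at 2e-4 through epoch 99 and drops to 1e-4 at epoch 100.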
All intermediate conv layers, except the ones specified in Section 3.4, use the same kernel size (with reflection padding). We also experimented with other normalization layers (i.e., instance normalization and layer normalization), since they are used by recent methods [52, 4], but the results were not satisfactory.
With respect to the generator, we use 1-repeat residual blocks up to the highest jointly trained resolution. The input of the generator is a small spatial tensor computed from the text embedding, and as the feature map resolution doubles, the number of feature maps is halved at most sizes. To generate the highest-resolution images, we first pre-train the generator at a lower resolution due to GPU memory limitations, then append a repeat res-block followed by a stretching layer to double the feature map size and a linear compression layer to generate the images. Since the lower-resolution output already captures the overall semantics and details, to speed up training and encourage the generator to maintain this information, we use an L1 reconstruction loss to 'self-regularize' the generator.
6.2 More Qualitative Results and Analysis
In this section, we demonstrate more sample results for the three datasets.
Figure 10 compares our results with StackGAN. For each input, 6 images are randomly sampled. Furthermore, we visualize zoomed-in samples compared with StackGAN in Figure 11. Our results demonstrate clearly better quality, fewer artifacts, and fewer abrupt pixel transitions.
Figure 13 shows the results on the CUB bird dataset, including all the outputs of a model at different resolutions. As can be observed in these two figures, our method can generate fairly vivid images with different poses, shapes, backgrounds, etc. Moreover, the images at different resolutions, which are side outputs of a single model, carry very consistent information, and more image details emerge as the resolution increases. Figure 14 shows the results on the Oxford-102 flower dataset: very detailed petals are generated with photographic colors and saturation.
Figure 15 shows some sampled results on the COCO dataset. COCO is much more challenging than the other two datasets since it contains natural images from a wide variety of scenes with hundreds of different object categories. As can be observed in the samples, our method can still generate semantically consistent images.
However, it is worth noting that although our method significantly improves over existing methods [47, 34] on COCO, generating fine-grained details of complex natural scenes with various objects is still challenging. Based on this study, we expect to address this problem further in future work.
Failure cases: Although the majority of test inputs lead to successful outputs (at least one good sample per input text), there are still observable failure cases. The major problems include obvious artifacts, minor semantic inconsistency (compared with the ground truth), and loss of basic object shapes. Figure 12 shows these failure cases. For a category-by-category comparison with StackGAN, please refer to Table 3 left (in the main paper).