Products at e-commerce websites are usually displayed through images from different views. Multi-view images provide straightforward and comprehensive product illustrations to potential buyers. However, such multi-view images are often expensive to produce in both time and cost, and thus are sometimes unavailable. For example, when one sees an image of a desired clothing item in a magazine, which provides only a single view, he/she has to imagine its look from other views. An automatic model that generates multi-view images from a single-view input is desired in such scenarios, with practical applications on e-commerce platforms as well as in photo/video editing and AR/VR. Given a single-view clothing image, we aim to generate images of the remaining views without requiring any extra information.
Although image generation is a challenging task due to the high dimensionality of images and the complex configuration and layout of image contents, some recent works have demonstrated good performance on realistic image generation, benefiting from advanced models like the Variational Autoencoder (VAE) and Generative Adversarial Networks (GANs). VAE combines variational inference with deep representation learning to learn a complex generative model and avoids the time-consuming sampling process. However, VAE usually fails to provide rich details in generated images. Another popular generative model, GANs, introduces a real-fake discriminator to supervise the learning of the generative model. Benefiting from the competition between discriminator and generator, GANs are advantageous in producing realistic details, but they usually introduce artifacts into the global appearance, especially when the image to be generated is large.
To tackle the challenging problem of generating multi-view images from a single-view observation, many approaches [1, 10, 32] first construct the 3D structure of the object and then generate the desired target-view images from that model, while other methods [18, 29, 33] learn the transformation between the input view and the target view by relocating pixels. However, those methods mainly synthesize rigid objects with simple textures, such as cars and chairs. The generation of deformable objects with rich details, such as clothes or the human body, has not been fully explored.
In this paper, we propose a novel image generation model named Variational GAN (VariGAN) that combines the strengths of variational inference and adversarial training. The proposed model overcomes the limitations of GANs in modeling global appearance by introducing internal variational inference into generative model learning. A low resolution (LR) image capturing the global appearance is first generated by variational inference. This process learns to draw the rough shape and colors of the image to be generated at a different view, conditioned on the given image. Given the generated LR image, VariGAN then performs adversarial learning to generate a realistic high resolution (HR) image by filling in richer details. Since the LR image only contains the basic contour of the target object in the desired view, the fine image generation module can focus on drawing details and rectifying defects in the LR image. See Fig. 1 for an illustration. Decomposing the complicated image generation process into these two complementary learning processes significantly simplifies learning and produces more realistic-looking multi-view images. Note that VariGAN is a generic model and can be applied to other image generation applications like style transfer; we would like to exploit these potential applications of VariGAN in the future.
The main contributions are summarized as follows:
To the best of our knowledge, we are the first to address the new problem of generating multi-view clothing images from a given clothing image of a certain view, which has both theoretical and practical significance.
We propose a novel VariGAN generation architecture for multi-view clothing image generation that adopts a new coarse-to-fine image generation strategy. The proposed model is effective in both capturing global appearance and drawing richer details consistent with the input conditioned image.
We apply our proposed model on the two largest clothing image datasets and demonstrate its superiority through comprehensive evaluations against other state-of-the-art methods. We will release the model and relevant code upon acceptance.
2 Related Work
Image generation has been a heated topic in recent years, and many approaches have been proposed with the emergence of deep learning techniques. The Variational Autoencoder (VAE) generates images based on probabilistic graphical models and is optimized by maximizing the lower bound of the data likelihood. Yan et al. proposed Attribute2Image, which generates images from visual attributes; they modeled an image as a composite of foreground and background and extended the VAE with disentangled latent variables. Gregor et al. proposed DRAW, which integrates an attention mechanism into the VAE to generate realistic images recurrently, patch by patch. Different from these generative parametric approaches, Generative Adversarial Networks (GANs), proposed by Goodfellow et al., introduce a generator and a discriminator. The generator is trained to generate images that confuse the discriminator, while the discriminator is trained to distinguish between real and fake samples. Since then, many GANs-based models have been proposed, including Conditional GANs, BiGANs [3, 5], Semi-supervised GANs, InfoGANs
and Auxiliary Classifier GANs. GANs have been used to generate images from labels, text [20, 31] and images [9, 19, 24, 30, 35, 34]. Our proposed model is also an image-conditioned GAN, with generation capability strengthened by variational inference.
Hinton et al. proposed a transforming auto-encoder to generate images with view variance. Rezende et al. introduced a general framework to learn 3D structures from 2D observations with a 3D-2D projection mechanism. Yan et al. proposed Perspective Transformer Nets to learn the projection transformation after reconstructing the 3D volume of the object. Wu et al. also proposed 3D-2D projection layers that enable learning 3D object structures from annotated 2D keypoints. They further proposed 3D-GAN, which generates 3D objects from a probabilistic space by leveraging recent advances in volumetric convolutional networks and generative adversarial nets. Zhou et al. proposed to synthesize novel views of the same object or scene by learning appearance flows. Most of these models are trained with target-view images or image pairs that can be generated by a graphics engine; therefore, in theory, there is an infinite amount of training data with the desired views. In our task, however, the training data is limited in both views and quantity, which greatly increases the difficulty of generating images of different views.
3 Proposed Method
3.1 Problem Setup
We first define the problem of generating multi-view images from a single-view input. Suppose we have a pre-defined set of view angles $\mathcal{V} = \{v_1, \ldots, v_K\}$, where each $v_i$ corresponds to one specific view, e.g., the front or side view. An object captured from view $v$ is denoted as $I_v$. Given the source image $I_v$, multi-view image generation aims to generate an image $I_u$ with a different view $u \in \mathcal{V}$, $u \neq v$.
3.2 Variational GANs
Standard GANs have been applied to generate images with desired properties based on the input. This type of model learns a generative model over the distribution of the desired images; sampling from it provides new images. Different from other generative models, GANs employ an extra discriminative model to supervise the generative learning process, instead of purely optimizing the generator to best explain the training data via Maximum Likelihood Estimation.
Specified for multi-view image generation, given an input conditioned image $I_v$ (captured at viewpoint $v$), the goal is to learn the distribution $p(I_u \mid I_v, u)$ for generating a new image $I_u$ of a different viewpoint $u$, from a labeled dataset $\{(I_v, I_u)\}$. Here $u$ is specified by the user as an input to the model.
The objective of GANs is defined as
$$\min_G \max_D \; \mathbb{E}_{I \sim p_{\mathrm{data}}}\big[\log D(I)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big], \qquad (1)$$
where the generator $G$ tries to generate realistic data from noise $z$ by minimizing its loss to fool the adversarial discriminator $D$, and $D$ tries to maximize its accuracy in discriminating real data from generated data.
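The two-player game above can be illustrated with a tiny numerical sketch. This is our own illustration in pure Python, not the paper's implementation; the scalar discriminator outputs are hypothetical:

```python
import math

def d_loss(d_real, d_fake):
    # The discriminator maximizes log D(x) + log(1 - D(G(z)));
    # equivalently it minimizes the negative of that sum.
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def g_loss(d_fake):
    # The generator minimizes log(1 - D(G(z))).
    return math.log(1.0 - d_fake)

# A discriminator that separates real (0.9) from fake (0.1) samples
# incurs a lower loss than an undecided one (0.5 / 0.5)...
assert d_loss(0.9, 0.1) < d_loss(0.5, 0.5)
# ...while the generator's loss decreases as it fools the discriminator.
assert g_loss(0.9) < g_loss(0.1)
```

In practice the non-saturating variant (maximizing $\log D(G(z))$ for the generator) is often preferred for stronger early gradients, but the minimax form above matches Eqn. (1).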
However, it is difficult to learn a generator that produces plausible images with high resolution, correct contours and rich details, because GANs are limited in capturing global appearance. To address this critical issue and generate more realistic images, the variational GANs (VariGANs) proposed in this work combine the strength of variational inference in modeling correct contours with that of adversarial learning in filling in realistic details. VariGANs decompose the generator into two components: one, $G_c$, generates a coarse image through variational inference, and the other, $G_f$, generates the final image with fine details based on the outcome of $G_c$. Formally, the objective of VariGANs is
$$\min_{G_f} \max_{D} \; \mathbb{E}\big[\log D(I_u, I_v)\big] + \mathbb{E}_{z}\big[\log\big(1 - D(G_f(G_c(I_v, u, z), I_v), I_v)\big)\big] - \mathcal{L}(G_c),$$
where $z$ is the random latent variable, $G_c$ is the coarse image generator, and $\mathcal{L}(G_c)$ is the variational lower bound elaborated below. This objective can be optimized by maximizing the variational lower bound for $G_c$, maximizing the discrimination accuracy of $D$, and minimizing the loss of $G_f$ against $D$. We elaborate the models of $G_c$, $G_f$ and $D$ in the following parts.
3.3 Coarse Image Generation
Given an input image $I_v$ with view $v$, a target view $u$, and a latent variable $z$, the coarse image generator $G_c$ learns the distribution $p_\theta(I_u \mid I_v, u)$ with a focus on modeling global appearance, where $\theta$ denotes the parameters of the coarse image generator. To alleviate the difficulty of directly optimizing this log-likelihood function and to avoid time-consuming sampling, we apply the variational Bayesian approach to optimize the lower bound of the log-likelihood $\log p_\theta(I_u \mid I_v, u)$, as proposed in [11, 22]. Specifically, an auxiliary distribution $q_\phi(z \mid I_u, I_v, u)$ is introduced to approximate the true posterior $p_\theta(z \mid I_u, I_v, u)$.
The conditional log-likelihood of the coarse image generator is
$$\log p_\theta(I_u \mid I_v, u) = \mathrm{KL}\big(q_\phi(z \mid I_u, I_v, u) \,\|\, p_\theta(z \mid I_u, I_v, u)\big) + \mathcal{L}(\theta, \phi; I_u, I_v, u),$$
where the variational lower bound is
$$\mathcal{L}(\theta, \phi; I_u, I_v, u) = -\mathrm{KL}\big(q_\phi(z \mid I_u, I_v, u) \,\|\, p_\theta(z)\big) + \mathbb{E}_{q_\phi}\big[\log p_\theta(I_u \mid I_v, u, z)\big]. \qquad (2)$$
The first term in Eqn. (2) is a regularization term that reduces the gap between the prior $p_\theta(z)$ and the proposal distribution $q_\phi(z \mid I_u, I_v, u)$. The second term is the log-likelihood of samples and is usually measured by a reconstruction loss, for which we use the $L_1$ loss in our model.
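When the prior is $\mathcal{N}(0, I)$ and the proposal is a diagonal Gaussian, as is standard in variational autoencoders, the regularization term of the lower bound has a closed form. A minimal sketch of that computation (our own illustration, not the paper's code):

```python
import math

def kl_to_std_normal(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ) in closed form:
    # 0.5 * sum( mu^2 + sigma^2 - 1 - log sigma^2 )
    return 0.5 * sum(m * m + math.exp(lv) - 1.0 - lv
                     for m, lv in zip(mu, logvar))

# When the proposal equals the prior, the regularization term vanishes.
assert kl_to_std_normal([0.0, 0.0], [0.0, 0.0]) == 0.0
# Any deviation of the proposal from the prior is penalized.
assert kl_to_std_normal([1.0, 0.0], [0.0, 0.0]) > 0.0
```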
3.4 Fine Image Generation
After obtaining the low resolution image $\widetilde{I}_u$ of the desired output, the fine image generation module learns another generator $G_f$ that maps the low resolution image to a high resolution image, conditioned on the input $I_v$. The generator is trained to produce images that cannot be distinguished from “real” images by an adversarial conditional discriminator $D$, which in turn is trained to detect the generator’s “fakes” as well as possible (see the objective in Sec. 3.2).
Since the multi-view image generator must not only fool the discriminator but also stay near the ground-truth target image of a different view, we also add an $L_1$ loss for the generator. The $L_1$ loss is chosen because it alleviates over-smoothing artifacts compared with the $L_2$ loss.
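The intuition behind preferring $L_1$ can be seen numerically: when a generator hedges between several plausible pixel values, squared error rewards the blurry average much more than absolute error does. A small pure-Python sketch (hypothetical pixel values, not from the paper):

```python
def l1_loss(pred, target):
    # Mean absolute error.
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def l2_loss(pred, target):
    # Mean squared error.
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

# A prediction that hedges halfway between two plausible pixel values
# is penalized mildly by L2 (sub-unit errors shrink when squared) but
# at full strength by L1, so L1 discourages such blurry averages.
pred, target = [0.5, 0.5], [0.0, 1.0]
assert l2_loss(pred, target) < l1_loss(pred, target)
```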
3.5 Network Architecture
The overall architecture of the proposed model in the training phase is illustrated in Fig. 2. It consists of three modules: the coarse image generator, the fine image generator, and the conditional discriminator. During training, the target view image $I_u$ and the conditioned image $I_v$ are passed through two siamese-like encoders to learn their respective representations, and the desired view angle $u$ is transformed into a vector by word embedding. The representations of $I_u$, $I_v$ and $u$ are combined to generate the latent variable $z$. During testing, however, there is no target image and no encoder for it; the latent variable is randomly sampled and combined with the representations of the conditioned image and the target view to generate the target-view LR image $\widetilde{I}_u$. After that, $\widetilde{I}_u$ and $I_v$ are sent to the fine image generator to produce the HR image $\hat{I}_u$. Similar to the coarse image generation module, the fine image generation module also contains two siamese-like encoders and a decoder; moreover, there are skip connections between mirrored layers in the encoder and decoder stacks. Through channel concatenation of the HR image and the conditioned image $I_v$, a conditional discriminator is adopted to distinguish whether the generated image is real or fake.
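The different treatment of the latent variable at training and test time can be sketched as follows. This is a simplified stand-alone illustration: the reparameterization form and the `dim` parameter are standard VAE conventions assumed here, not details taken from the paper:

```python
import math
import random

def sample_latent(dim, mu=None, logvar=None, rng=random):
    """Training: reparameterize z = mu + sigma * eps with eps ~ N(0, I)
    so the sampling step stays differentiable. Testing: no target image
    (hence no posterior) is available, so z is drawn from the prior."""
    if mu is None:
        # Test time: sample directly from the standard normal prior.
        return [rng.gauss(0.0, 1.0) for _ in range(dim)]
    # Training time: deterministic transform of mean/log-variance plus noise.
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, logvar)]

random.seed(0)
assert len(sample_latent(4)) == 4
# With a vanishing variance the reparameterized sample stays at the mean.
z = sample_latent(2, mu=[1.0, 2.0], logvar=[-1000.0, -1000.0])
assert z == [1.0, 2.0]
```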
Coarse Image Generator
The encoder of the coarse image generator contains several convolution layers that down-sample the input image to a compact tensor, on top of which a fully-connected layer transforms the tensor into a vector representation. The encoders for the target image and the conditioned image share weights. A word embedding layer embeds the target view into a low-dimensional vector. The representations of the target image, the conditioned image and the view embedding are combined and transformed into the latent variable. Then, the latent variable, together with the conditioned image representation and the view embedding, is passed through a series of de-convolutional layers to generate a low resolution image.
Fine Image Generator with Skips
Similar to the coarse image generation module, the fine image generator also contains two siamese-like encoders and a decoder. Each encoder consists of several convolutional layers that down-sample the image to a bottleneck tensor; several de-convolutional layers then up-sample the bottleneck tensor to the output resolution.
Since the mapping from a low resolution image to a high resolution image can be seen as a conditional image translation problem, the low and high resolution images differ only in surface appearance but are rendered under the same underlying structure; the shape information can thus be shared between the LR and HR images. Besides, the low-level information of the conditioned image also provides rich guidance when translating the LR image to the HR image. It is therefore desirable to shuttle these two kinds of information directly across the net. Inspired by the work of “U-Net” and image-to-image translation, we add skip connections between the LR image encoder and the HR image decoder, and simultaneously between the conditioned image encoder and the HR image decoder (see Fig. 3). With such skip connections, the decoder up-samples the encoded tensor to the high resolution image of the target view through several de-convolution layers.
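A dual-path skip connection is, mechanically, a channel-wise concatenation of same-resolution feature maps from the two encoders into the decoder. A shape-level sketch of that operation (the channel/spatial sizes below are hypothetical, chosen only to illustrate the bookkeeping):

```python
def skip_concat(decoder_feat, lr_skip, cond_skip):
    """Channel-wise concatenation of a decoder feature map with the
    mirrored LR-encoder and condition-encoder feature maps of the
    same spatial size. Shapes are (channels, height, width) tuples."""
    for skip in (lr_skip, cond_skip):
        assert skip[1:] == decoder_feat[1:], "skips must match spatially"
    channels = decoder_feat[0] + lr_skip[0] + cond_skip[0]
    return (channels, decoder_feat[1], decoder_feat[2])

# e.g. a 256-channel decoder map at 16x16 picks up two 256-channel skip
# maps, giving the next de-convolution layer 768 input channels.
assert skip_concat((256, 16, 16), (256, 16, 16), (256, 16, 16)) == (768, 16, 16)
```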
The generated high resolution image and the ground-truth target image are each concatenated with the conditioned image along the channel dimension to form the negative pair and the positive pair, respectively. These two kinds of image pairs are passed to the conditional discriminator to train the fine image generator adversarially.
3.6 Implementation Details
For the coarse image generator, each encoder network contains 6 convolution layers followed by 1 fully-connected layer (the convolution layers have 64, 128, 256, 256, 256 and 1024 channels with filter sizes of 5×5, 5×5, 5×5, 3×3, 3×3 and 4×4, respectively; the fully-connected layer has 1024 neurons), so the image representation is 1024-D and the view embedding is 64-D. The representations of the target image and the conditioned image and the embedding of the target view are concatenated and transformed into the latent variable by a fully-connected layer with 1024 neurons. The decoder network consists of 1 fully-connected layer with 256×8×8 neurons, followed by 6 de-convolutional layers with 2×2 up-sampling (with 256, 256, 256, 128, 64 and 3 channels and filter sizes of 3×3, 5×5, 5×5, 5×5, 5×5 and 5×5).
For the fine image generation module, each encoder network contains 7 convolution layers (with 64, 128, 256, 512, 512, 512 and 512 channels, filter size 4×4 and stride 2), so the bottleneck representation has 512 channels. The decoder network consists of 7 de-convolutional layers with 512, 512, 512, 256, 128, 64 and 3 channels, filter size 4×4 and stride 2. The conditional discriminator consists of 5 convolutional layers (with 64, 128, 256, 512 and 1 channel(s), filter size 4×4 and strides 2, 2, 2, 1 and 1). The output high resolution image is of size 128×128.
For training, we first train the coarse image generator for 500 epochs. Using the generated low resolution image and the conditioned image, we then iteratively train the fine image generator and the conditional discriminator for another 500 epochs. All networks are trained using ADAM solvers with batch size of 32 and an initial learning rate of 0.0003.
To validate the effectiveness of our proposed VariGAN model, we conduct extensive quantitative and qualitative evaluations on the MVC and DeepFashion datasets, which contain a huge number of clothing images from different views. We compare the performance of multi-view image generation with two state-of-the-art image generation models: conditional VAE (cVAE) and conditional GANs (cGANs).
The cVAE has a similar architecture to the coarse image generator in our VariGANs; it has one more convolution layer in the encoder and one more de-convolution layer in the decoder, and directly generates the HR image. The cGANs have one encoder network to encode the conditioned image and one word embedding layer to transform the view into a vector. The encoded conditioned image and the view embedding are concatenated and fed into the decoder to generate the HR image.
In addition to the comparison with state-of-the-art models, we conduct ablation studies to investigate the design and important components of our proposed VariGANs. We first conduct an experiment that replaces the variational inference with GANs. Second, we train our model without the dual-path U-Net in the fine image generation module to verify the role of the skip connections between the encoders and the decoder. We also conduct experiments without the $L_1$ loss to show the importance of this traditional loss for plausible image generation. Finally, we run VariGANs without the conditional discriminator to examine whether the channel concatenation of the generated image and the conditioned image benefits high resolution image generation. We now proceed to the evaluation benchmarks and experimental results.
4.1 Datasets and Evaluation Metrics
MVC (http://mvc-datasets.github.io) contains 36,323 clothing items. Most of them have 4 views: front, left side, right side and back side. DeepFashion (http://mmlab.ie.cuhk.edu.hk/projects/DeepFashion.html) contains 8,697 clothing items with 4 views: front, side (left or right), back and full body. Example images from the two datasets are shown in Fig. 4. The view and scale variance of images in DeepFashion is much higher than in MVC, and the total number of images in DeepFashion is also much smaller. Both the high variance and the limited number of training samples make multi-view image generation on DeepFashion considerably more difficult. To give a consistent task specification on the two datasets, we define the view set to contain the front, side and back views. We consider two generation goals: generating the side and back view images conditioned on the front view image, and generating the front and back view images from the side view image; these two scenarios are the most common in real life. We split the MVC dataset into a training set with 33,323 groups of images and a testing set with 3,000 groups, where each group contains the three views of a clothing item. The training set of DeepFashion consists of 7,897 groups of clothing images, and the testing set contains 800 groups.
In the previous literature on image generation, performance is usually evaluated by human subjects, which is subjective and time-consuming. Since ground-truth images of the target views are provided in the datasets, in our experiments we adopt the Structural Similarity (SSIM) index to measure the similarity between a generated image and the ground-truth image. We do not use the pixel-level mean square error as the evaluation metric since we focus on the quality of the generated clothing images (note that the similarity between a generated image and the ground truth essentially also measures the quality of the result). There are usually human models in the generated clothing images, so the images may present poses different from the ground truth (even at the same viewpoint), which makes pixel-wise mean square error unsuitable in our case. SSIM faithfully reflects the similarity of two images regardless of lighting conditions or small pose variance, since it models the perceived change in the structural information of the images. This evaluation metric is also widely used in many other image generation papers such as [30, 18]. The SSIM between two images $x$ and $y$ is defined as
$$\mathrm{SSIM}(x, y) = \frac{(2\mu_x \mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)},$$
where $x$ and $y$ are the generated image and the ground-truth image, $\mu$ and $\sigma^2$ denote the mean and variance of an image, $\sigma_{xy}$ is the covariance of $x$ and $y$, and $c_1$ and $c_2$ are two variables that stabilize the division and are determined by the dynamic range of the images.
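The SSIM formula can be sketched directly. Note this is a single-window (global) version for illustration; standard implementations, following Wang et al., average SSIM over local sliding windows, and the image values and constants below follow the common convention for a [0, 1] dynamic range:

```python
def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-window SSIM for flattened images with values in [0, 1]."""
    n = len(x)
    mu_x, mu_y = sum(x) / n, sum(y) / n
    var_x = sum((a - mu_x) ** 2 for a in x) / n
    var_y = sum((b - mu_y) ** 2 for b in y) / n
    cov = sum((a - mu_x) * (b - mu_y) for a, b in zip(x, y)) / n
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

img = [0.1, 0.4, 0.8, 0.9]
assert abs(ssim(img, img) - 1.0) < 1e-9       # identical images score 1
assert ssim(img, [0.9, 0.1, 0.2, 0.3]) < 1.0  # dissimilar images score lower
```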
We also adopt the Inception Score (IS) to evaluate the quality of the generated images:
$$\mathrm{IS} = \exp\big(\mathbb{E}_{x}\, \mathrm{KL}\big(p(y \mid x) \,\|\, p(y)\big)\big),$$
where $x$ denotes a generated image and $y$ is the label predicted by the Inception model. It is computed based on the assumption that a generated image of good quality should be diverse and meaningful, i.e., the divergence between the marginal distribution $p(y)$ and the conditional distribution $p(y \mid x)$ should be large.
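Given the class posteriors predicted by the Inception model for a batch of generated images, the score can be computed as follows (a minimal sketch with tiny two-class toy posteriors, not the full evaluation pipeline):

```python
import math

def inception_score(p_y_given_x):
    """IS = exp( E_x [ KL( p(y|x) || p(y) ) ] ), computed from a list of
    per-image class posteriors (each row sums to 1)."""
    n = len(p_y_given_x)
    k = len(p_y_given_x[0])
    # Marginal p(y): average of the conditionals over the batch.
    p_y = [sum(row[j] for row in p_y_given_x) / n for j in range(k)]
    mean_kl = sum(
        sum(p * math.log(p / p_y[j]) for j, p in enumerate(row) if p > 0)
        for row in p_y_given_x
    ) / n
    return math.exp(mean_kl)

sharp = [[1.0, 0.0], [0.0, 1.0]]   # confident AND diverse predictions
flat = [[0.5, 0.5], [0.5, 0.5]]    # uninformative predictions
assert inception_score(sharp) > inception_score(flat)
assert abs(inception_score(flat) - 1.0) < 1e-9  # minimum possible score
```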
4.2 Experimental Results and Analysis
| Method | SSIM (MVC) | SSIM (DeepFashion) | IS (MVC) | IS (DeepFashion) |
| --- | --- | --- | --- | --- |
| cVAE | 0.66 ± .09 | 0.58 ± .08 | 2.61 ± .06 | 2.35 ± .08 |
| cGANs | 0.69 ± .10 | 0.59 ± .08 | 3.45 ± .08 | 2.45 ± .10 |
| Ours | 0.70 ± .10 | 0.62 ± .08 | 3.69 ± .09 | 3.03 ± .20 |
We compare our results with two state-of-the-art methods, conditional VAE (cVAE) and conditional GANs (cGANs), on the MVC and DeepFashion datasets. The SSIM and Inception Scores for our proposed method and the compared methods are reported in Table 1. cVAE yields the worst SSIM and Inception Scores on both datasets, while cGANs improve both metrics over cVAE. Our proposed VariGANs further improves the performance on both datasets. The better SSIM and Inception Scores indicate that our method generates more realistic images conditioned on the single-view image and the target view.
We show representative examples generated by the state-of-the-art methods and our proposed method in Fig. 5. The samples generated by cVAE are blurry and the colors are incorrect, although cVAE correctly draws the general shape of the person and the clothes in the side view. The images generated by cGANs are more realistic with more details, but present severe artifacts; some of the generated images look unrealistic, such as the example in the second row. The low resolution image generated by the coarse image generator of our VariGAN model presents better shape and contour than cVAE, benefiting from a more reasonable target for this phase (i.e., only generating LR images). Besides, the generated LR image looks more natural than those generated by the baselines in terms of viewpoint, shape and contour. Finally, the fine image generator fills in correct colors and adds richer, realistic texture to the LR image.
We give more examples generated by our VariGANs in Fig. 6. The first two rows are from the MVC dataset and the others are from the DeepFashion dataset. The first and third rows show the side and back view images generated from the front view; given the side view image, the second and fourth rows show the generated front and back view images. Coarse images, fine images and ground-truth images are shown in each column of Fig. 6(a) and Fig. 6(b). The generated coarse images have the correct view with respect to the conditioned image, and details are added to the coarse images reasonably by our fine image generation module. The results also demonstrate that the generated images need not be identical to the ground-truth image: there may be pose variance in the generated images, as in the generated front view image of the second example. Note that our proposed model focuses on clothes generation and does not explicitly model the humans in the image. Besides, blocky artifacts can be observed in some examples; in the future, we will explore removing such artifacts by adopting more sophisticated models that learn to generate sharper details. Nevertheless, the current results present sufficient detail about novel views for users.
Visualization of the Feature Maps.
To provide deeper insight into the mechanism of multi-view image generation in our proposed model, we also visualize the feature maps of the first two convolution layers in the encoder of the coarse image generator and their corresponding de-convolution layers in the decoder (the last two), as shown in Fig. 7. The visualization demonstrates that the model learns how to change the view of different parts of the image: the generated feature maps effectively capture the transition of view angles and the contours from the other view.
4.3 Ablation Study
| Method | SSIM (MVC) | SSIM (DeepFashion) | IS (MVC) | IS (DeepFashion) |
| --- | --- | --- | --- | --- |
| Ours w/o V | 0.69 ± .11 | 0.59 ± .07 | 3.49 ± .08 | 2.72 ± .08 |
| Ours w/o U-Net | 0.56 ± .08 | 0.53 ± .07 | 3.04 ± .06 | 2.38 ± .07 |
| Ours w/o L1 | 0.58 ± .09 | 0.49 ± .06 | 3.23 ± .08 | 2.47 ± .06 |
| Ours w/o cDisc | 0.66 ± .09 | 0.55 ± .09 | 3.25 ± .15 | 2.56 ± .05 |
| Ours | 0.70 ± .10 | 0.62 ± .08 | 3.69 ± .09 | 3.03 ± .20 |
In this subsection, we analyze the effectiveness of the components of our proposed model on MVC and DeepFashion to further validate its design, by conducting the following experiments:
VariGANs w/o V. In this experiment, the variational inference is replaced by another GAN to investigate the role of variational inference in our proposed VariGANs.
VariGANs w/o U-Net. The LR image and the conditioned image go through the Siamese encoder in the fine image generator until a bottle-neck, and the outputs of the encoders are concatenated and fed into the decoder networks.
VariGANs w/o $L_1$ loss. This experiment verifies the importance of the traditional reconstruction loss in generating plausible images.
VariGANs w/o conditional discriminator. Only the generated HR images and ground truth images are sent to the discriminator separately.
We report the results of these experiments on MVC and DeepFashion in Table 2. Removing or replacing any component of our model lowers the SSIM and IS. We also illustrate the images generated by different variants of our VariGANs in Fig. 8. Conditioned on the LR image generated by GANs, the result in Fig. 8(b) displays a relatively good shape and correct texture; however, there are also missing parts, such as the left hand. The results generated by VariGANs without the dual-path U-Net have incomplete areas and unnatural colors, as shown in Fig. 8(c). Without the $L_1$ loss, the detailed texture is not learned well, such as the upper part of the clothes in Fig. 8(d). VariGANs without the conditional discriminator generates results (Fig. 8(e)) comparable to the full VariGANs (Fig. 8(f)), except for some smears.
In this paper, we proposed Variational Generative Adversarial Networks (VariGANs) for synthesizing realistic clothing images of different views from a single input image. The proposed method enhances GANs with variational inference and generates images in a coarse-to-fine manner. Specifically, given the input image of a certain view, the coarse image generator first generates the basic shape of the object in the target view; the fine image generator then fills details into the coarse image and corrects its defects. Extensive experiments show that our model generates more plausible results than state-of-the-art methods, and the ablation studies verify the importance of each component of the proposed VariGANs.
-  T. Chen, Z. Zhu, A. Shamir, S.-M. Hu, and D. Cohen-Or. 3-sweep: Extracting editable objects from a single photo. ACM Transactions on Graphics, 2013.
-  X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. arXiv:1606.03657, 2016.
-  J. Donahue, P. Krähenbühl, and T. Darrell. Adversarial feature learning. arXiv:1605.09782, 2016.
-  A. Dosovitskiy, J. T. Springenberg, M. Tatarchenko, and T. Brox. Learning to generate chairs, tables and cars with convolutional networks. In CVPR, 2015.
-  V. Dumoulin, I. Belghazi, B. Poole, O. Mastropietro, A. Lamb, M. Arjovsky, and A. Courville. Adversarially learned inference. arXiv:1606.00704, 2017.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014.
-  K. Gregor, I. Danihelka, A. Graves, D. J. Rezende, and D. Wierstra. Draw: A recurrent neural network for image generation. In ICML, 2015.
-  G. E. Hinton, A. Krizhevsky, and S. D. Wang. Transforming auto-encoders. In ICANN, 2011.
-  P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. arXiv:1611.07004, 2016.
-  N. Kholgade, T. Simon, A. Efros, and Y. Sheikh. 3d object manipulation in a single photograph using stock 3d models. ACM Transactions on Graphics, 2014.
-  D. P. Kingma and M. Welling. Auto-encoding variational bayes. In ICLR, 2014.
-  T. D. Kulkarni, W. Whitney, P. Kohli, and J. B. Tenenbaum. Deep convolutional inverse graphics network. In NIPS, 2015.
-  K.-H. Liu, T.-Y. Chen, and C.-S. Chen. Mvc: A dataset for view-invariant clothing retrieval and attribute prediction. In ICMR, 2016.
-  Z. Liu, P. Luo, S. Qiu, X. Wang, and X. Tang. Deepfashion: Powering robust clothes recognition and retrieval with rich annotations. In CVPR, 2016.
-  M. Mirza and S. Osindero. Conditional generative adversarial nets. arXiv:1411.1784, 2014.
-  A. Odena. Semi-supervised learning with generative adversarial networks. arXiv:1606.01583, 2016.
-  A. Odena, C. Olah, and J. Shlens. Conditional image synthesis with auxiliary classifier gans. arXiv:1610.09585, 2016.
-  E. Park, J. Yang, E. Yumer, D. Ceylan, and A. C. Berg. Transformation-grounded image generation network for novel 3d view synthesis. In CVPR, 2017.
-  D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros. Context encoders: Feature learning by inpainting. In CVPR, 2016.
-  S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee. Generative adversarial text-to-image synthesis. In ICML, 2016.
-  D. J. Rezende, S. M. A. Eslami, S. Mohamed, P. W. Battaglia, M. Jaderberg, and N. Heess. Unsupervised learning of 3d structure from images. In NIPS, 2016.
-  D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, 2014.
-  O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In MICCAI, 2015.
-  T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training gans. arXiv:1606.03498, 2016.
-  K. Sohn, H. Lee, and X. Yan. Learning structured output representation using deep conditional generative models. In NIPS, 2015.
-  Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, 2004.
-  J. Wu, T. Xue, J. J. Lim, Y. Tian, J. B. Tenenbaum, A. Torralba, and W. T. Freeman. Single image 3d interpreter network. In ECCV, 2016.
-  X. Yan, J. Yang, K. Sohn, and H. Lee. Attribute2image: Conditional image generation from visual attributes. In ECCV, 2016.
-  X. Yan, J. Yang, E. Yumer, Y. Guo, and H. Lee. Perspective transformer nets: Learning single-view 3d object reconstruction without 3d supervision. In NIPS, 2016.
-  D. Yoo, N. Kim, S. Park, A. S. Paek, and I. S. Kweon. Pixel-level domain transfer. arXiv:1603.07442, 2016.
-  H. Zhang, T. Xu, H. Li, S. Zhang, X. Huang, X. Wang, and D. Metaxas. Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks. arXiv:1612.03242, 2016.
-  Y. Zheng, X. Chen, M.-M. Cheng, K. Zhou, S.-M. Hu, and N. J. Mitra. Interactive images: cuboid proxies for smart image manipulation. ACM Transactions on Graphics, 2012.
-  T. Zhou, S. Tulsiani, W. Sun, J. Malik, and A. A. Efros. View synthesis by appearance flow. In ECCV, 2016.
-  Y. Zhou and T. L. Berg. Learning temporal transformations from time-lapse videos. In ECCV, 2016.
-  J.-Y. Zhu, P. Krähenbühl, E. Shechtman, and A. A. Efros. Generative visual manipulation on the natural image manifold. In ECCV, 2016.