Generative image modeling is a long-standing goal of computer vision. Unconditional generative models set out to learn functions that generate the entire image distribution given a finite number of training samples. Generative Adversarial Networks (GANs) are a recently introduced technique for image generative modeling. They are used extensively for image generation owing to: i) their ability to train effective unconditional image generators; ii) being almost the only method for unsupervised image translation between domains (but see NAM [17]).
Along with their obvious advantages, GANs have critical disadvantages: i) GANs are very hard to train; this is expressed in a very erratic progression of training, sudden run collapses, and extreme sensitivity to hyper-parameters. ii) GANs suffer from mode dropping - the modeling of only some, but not all, of the modes of the target distribution. The birthday paradox can be used to measure the extent of mode dropping: the number of modes modeled by a generator can be estimated by generating a fixed number of images and counting the number of repeated images (collisions). Empirical evaluation of GANs found that the number of modes is significantly lower than the number in the training distribution.
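For intuition, the birthday-paradox estimate can be sketched in a few lines. The helper below is a hypothetical toy (a discrete "generator" with a known number of modes), not the published protocol; it only illustrates that the batch size at which collisions appear grows as the square root of the support size.

```python
import numpy as np

def estimate_num_modes(sample_fn, max_batch=256, trials=20, seed=0):
    """Birthday-paradox support-size estimate (illustrative sketch).

    Doubles the batch size until duplicate samples appear in most trials;
    the number of modes is then roughly batch_size ** 2.
    """
    rng = np.random.default_rng(seed)
    n = 2
    while n <= max_batch:
        collisions = sum(
            len(set(sample_fn(rng, n))) < n  # a duplicate sample appeared
            for _ in range(trials)
        )
        if collisions > trials // 2:
            return n * n  # birthday bound: support ~ n^2
        n *= 2
    return max_batch ** 2  # no consistent collisions: support is large

# toy "generator" with exactly 100 discrete modes (hypothetical stand-in)
def toy_generator(rng, n):
    return tuple(rng.integers(0, 100, size=n))
```

With 100 true modes, collisions start to dominate around batch sizes of 10-20, so the estimate lands within a small factor of the truth.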
The disadvantages of GANs gave rise to research into non-adversarial alternatives for training generative models. GLO and IMLE are two such methods. GLO, introduced by Bojanowski et al., embeds the training images in a low-dimensional space so that they are reconstructed when the embedding is passed through a jointly trained deep generator. The advantages of GLO are: i) encoding the entire distribution without mode dropping; ii) a learned latent space that corresponds to semantic image properties, i.e. Euclidean distances between latent codes correspond to semantically meaningful differences. A critical disadvantage of GLO is that there is no principled way to sample new images from it. Although the authors recommend fitting a Gaussian to the latent codes of the training images, this does not result in high-quality image synthesis.
IMLE was proposed by Li and Malik for training generative models by sampling a large number of latent codes from an arbitrary distribution, mapping each to the image domain using a trained generator, and ensuring that for every training image there exists a generated image near it. IMLE is trivial to sample from and does not suffer from mode dropping. Like other nearest-neighbor methods, IMLE is sensitive to the exact metric used, particularly given that the training set is finite. Recall that while the classic Cover-Hart result tells us that asymptotically the error rate of the nearest neighbor classifier is within a factor of 2 of the Bayes risk, with a finite set of exemplars better choices of metric give better classifier performance. When trained directly on image pixels using an $\ell_2$ loss, IMLE synthesizes blurry images.
In this work, we present a new technique, Generative Latent Nearest Neighbors (GLANN), which is able to train generative models of comparable or better quality than GANs. Our method overcomes the metric problem of IMLE by first embedding the training images using GLO. The attractive linear properties of the latent space induced by GLO allow the Euclidean metric to be semantically meaningful in the latent space. We train an IMLE-based model to map between an arbitrary noise distribution and the GLO latent space. The GLO generator can then map the generated latent codes to pixel space, thus generating an image. GLANN enjoys the best of both IMLE and GLO: easy sampling, modeling of the entire distribution, stable training, and sharp image synthesis. A schema of our approach is presented in Fig. 1.
We quantitatively evaluate our method using established protocols and find that it significantly outperforms other non-adversarial methods, while usually being better than or competitive with current GAN-based models. GLANN is also able to achieve promising results on high-resolution image generation and 3D generation. Finally, we show that GLANN-trained models are the first to perform truly non-adversarial unsupervised image translation.
2 Previous Work
Generative Modeling: Generative modeling of images is a long-standing problem of wide applicability. Early approaches included mixtures of Gaussians (GMMs). Such methods were very limited in image resolution and quality. Since the advent of deep learning, deep methods have continually been used for image generative modeling. Early attempts included Deep Belief Networks (DBNs); DBNs, however, were rather tricky to train and did not scale to high resolutions. Variational Autoencoders (VAEs), introduced by Kingma and Welling, were a significant breakthrough in deep generative modeling. VAEs are able to generate images from the Gaussian distribution by making a variational approximation. This scheme was followed by multiple works, including the recent Wasserstein Autoencoder. Although VAEs are relatively simple to train and have solid theoretical foundations, they generally do not generate sharp images. This is partially due to restrictive assumptions such as a unimodal prior and the requirement for an encoder.
Several other non-adversarial training paradigms exist: generative invertible flows, which were recently extended to high resolution but at prohibitive computational cost; and autoregressive image models, e.g. PixelRNN/PixelCNN, in which pixels are modeled sequentially. Autoregressive models are computationally expensive and underperform adversarial methods, although they are the state of the art in audio generation (e.g. WaveNet).
Adversarial Generative Models: Generative Adversarial Networks (GANs) were first introduced by Goodfellow et al. and are the state-of-the-art method for training generative models. A basic discussion of GANs was given in Sec. 1. GANs have shown a remarkable ability for image generation, but suffer from difficult training and mode dropping. Many methods were proposed for improving GANs, e.g. changing the loss function (e.g. Wasserstein GAN) or regularizing the discriminator to be Lipschitz by clipping, gradient regularization [11, 26], or spectral normalization. GAN training was shown to scale to high resolutions using engineering tricks and careful hyper-parameter selection.
Evaluation of Generative Models: Evaluating generative models is challenging. Early works evaluated generative models using probabilistic criteria. More recent generative models (particularly GANs) are not amenable to such evaluation. GAN generations have traditionally been evaluated by visual inspection of a handful of examples or by a user study. More recently, more principled evaluation protocols have emerged. Inception Scores (IS), which take into account both diversity and quality, were first introduced by Salimans et al. FID scores were more recently introduced to overcome major flaws of the IS protocol. Very recently, a method for generative evaluation which is able to capture both precision and recall was introduced by Sajjadi et al.
Due to the hyperparameter sensitivity of GANs, a large-scale study of the performance of different GANs and VAEs was carried out by Lucic et al. over a large search space of 100 different hyperparameter configurations, establishing a common baseline for evaluation.
Non-Adversarial Methods: The disadvantages of GANs motivated research into GAN alternatives. GLO, a recently introduced encoder-less generative model which uses a non-adversarial loss function, achieves better results than VAEs. Due to the lack of a good sampling procedure, however, it does not outperform GANs (see Sec. 3.1). IMLE, a method related to ICP, was also introduced for training unconditional generative models; however, due to computational challenges and the choice of metric, it also does not outperform GANs. Chen and Koltun presented a non-adversarial method for supervised image mapping, which in some cases was found to be competitive with adversarial methods. Hoshen and Wolf introduced an ICP-based method for unsupervised word translation which contains no adversarial training. However, this method is not currently able to generate high-quality images. They also presented a non-adversarial method, NAM [15, 16, 13], for unsupervised image mapping. The method relies on having access to a strong unconditional model of the target domain, which is typically trained using GANs.
3 Our method
In this section we present a method - GLANN - for synthesizing high-quality images without using GANs.
3.1 Latent factorization and GLO

Classical methods often factorize a set of data points $\{x_i\}$ via the following decomposition:

$$x_i = W z_i$$

where $z_i$ is a latent code describing $x_i$, and $W$ is a set of weights. Such factorization is poorly constrained and is typically accompanied by additional constraints such as low rank, positivity (NMF), or sparsity. Both $W$ and $\{z_i\}$ are optimized directly, e.g. by alternating least squares or SVD. The resulting $\{z_i\}$ are latent vectors that embed the data in a lower-dimensional and typically better-behaved space. It is often found that attributes become linear operations in the latent space.
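As a toy instance of such a factorization, the following sketch (assuming synthetic rank-5 data) recovers the weights and latent codes with a truncated SVD:

```python
import numpy as np

# Toy illustration: factorize data points x_i ~ W z_i with a rank-k SVD.
# Assumed setup: 200 points in 50-D lying exactly in a 5-D linear subspace.
rng = np.random.default_rng(0)
k = 5
W_true = rng.normal(size=(50, k))
Z_true = rng.normal(size=(k, 200))
X = W_true @ Z_true                      # data matrix, columns are x_i

U, s, Vt = np.linalg.svd(X, full_matrices=False)
W = U[:, :k] * s[:k]                     # learned "weights"
Z = Vt[:k]                               # latent codes z_i (columns)

recon_err = np.linalg.norm(X - W @ Z) / np.linalg.norm(X)
```

Since the data is exactly rank 5, the truncated SVD reconstructs it to numerical precision; real image data is only approximately low-rank, which is why deeper generators are needed.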
GLO is a recently introduced deep method which differs from the above in three aspects: i) constraining all latent vectors to lie on a unit sphere or a unit ball; ii) replacing the linear matrix $W$ by a deep CNN generator $G()$, which is more suitable for modeling images; iii) using a Laplacian pyramid loss function (although we find that a VGG perceptual loss works better).
The GLO optimization objective is written in Eq. 2:

$$\arg\min_{G, \{z_i\}} \sum_i \ell\left(G(z_i), x_i\right) \quad s.t. \quad \|z_i\| = 1 \quad (2)$$

Bojanowski et al. implement $\ell$ as a Laplacian pyramid loss. All weights are trained by SGD (including the generator weights and a latent vector $z_i$ per training image $x_i$). After training, the result is a generator $G$ and a latent embedding $z_i$ of each training image $x_i$.
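A minimal sketch of this joint optimization, under strong simplifying assumptions not made in the paper (a linear "generator", a plain squared loss in place of the Laplacian pyramid loss, and full-batch gradient descent instead of SGD), illustrates the alternating updates and the unit-sphere projection:

```python
import numpy as np

# Toy GLO-style joint optimization (sketch only; assumptions as stated above).
rng = np.random.default_rng(0)
n, d_img, d_lat = 64, 20, 4
X = rng.normal(size=(n, d_img))                 # training "images" x_i
A = rng.normal(size=(d_lat, d_img)) * 0.1       # generator parameters
Z = rng.normal(size=(n, d_lat))                 # one latent z_i per image
Z /= np.linalg.norm(Z, axis=1, keepdims=True)   # start on the unit sphere

init_loss = np.mean((Z @ A - X) ** 2)
for _ in range(2000):
    R = Z @ A - X                               # residual G(z_i) - x_i
    A -= 0.2 * Z.T @ R / n                      # update generator weights
    Z -= 0.02 * R @ A.T                         # update latent codes
    Z /= np.linalg.norm(Z, axis=1, keepdims=True)  # unit-sphere constraint
final_loss = np.mean((Z @ A - X) ** 2)
```

The key point is that both the generator parameters and the per-image latent codes receive gradient updates, with the latents re-projected to the sphere after every step.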
3.2 IMLE

IMLE is a recent non-adversarial technique that maps between distributions using a maximum-likelihood criterion. Each epoch of IMLE consists of the following stages: i) a large set of random latent codes $\{e_j\}$ is sampled from a normal distribution; ii) the latent codes are mapped by the generator, resulting in images $\{G(e_j)\}$; iii) for each training example $x_i$, the nearest generated image is found such that:

$$j(i) = \arg\min_j \|x_i - G(e_j)\|^2$$

iv) $G$ is optimized using the nearest neighbors as approximate correspondences:

$$\min_G \sum_i \|x_i - G(e_{j(i)})\|^2$$

This procedure is repeated until the convergence of $G$.
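The four stages above can be sketched as follows; the linear generator and squared-error metric are stand-ins for the deep generator and metric actually used:

```python
import numpy as np

# One IMLE epoch in miniature (assumptions: linear generator G(e) = e @ A,
# squared-error metric, plain gradient step).
rng = np.random.default_rng(0)
n, d_img, d_noise, m = 32, 10, 3, 256
X = rng.normal(size=(n, d_img))                 # training examples x_i
A = rng.normal(size=(d_noise, d_img)) * 0.1     # generator parameters

def imle_epoch(A, lr=0.1):
    E = rng.normal(size=(m, d_noise))           # i) sample latent codes e_j
    G = E @ A                                   # ii) map codes to "images"
    d2 = ((X[:, None, :] - G[None, :, :]) ** 2).sum(-1)
    j = d2.argmin(axis=1)                       # iii) nearest generation per x_i
    Em = E[j]                                   # iv) matched codes e_{j(i)}
    A = A - lr * Em.T @ (Em @ A - X) / n        #     gradient step on G
    return A, d2.min(axis=1).mean()             # mean matching distance

A, loss0 = imle_epoch(A)
for _ in range(50):
    A, loss = imle_epoch(A)
```

Note the cost structure: each epoch pays for generating $m$ samples and an $n \times m$ nearest-neighbor search, which is what makes IMLE expensive in pixel space.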
3.3 Limitations of GLO and IMLE
The main limitation of GLO is that the generator is not trained to sample from any known distribution, i.e. the distribution of $z_i$ is unknown and we cannot directly sample from it. When sampling latent variables from a normal distribution, or when fitting a Gaussian to the training-set latent codes (as advocated by Bojanowski et al.), the generations obtained are usually of much lower quality than those of GANs. This prevents GLO from being competitive with GANs.
Although sampling from an IMLE-trained generator is trivial, the training is not: a good metric might not be known, and the nearest-neighbor computation and feature extraction for each random noise generation are costly. IMLE typically results in blurry image synthesis.
3.4 GLANN: Generative Latent Nearest Neighbor
We present a method - GLANN - that overcomes the weaknesses of both GLO and IMLE. GLANN consists of two stages: i) embedding the high-dimensional image space into a "well-behaved" latent space using GLO; ii) mapping between an arbitrary distribution (typically a multi-dimensional normal distribution) and the low-dimensional latent space using IMLE.
3.4.1 Stage 1: Latent embedding
Images are high-dimensional and distances between them in pixel space might not be meaningful. This makes IMLE and the use of simple metric functions such as $\ell_1$ or $\ell_2$ less effective in pixel space. In some cases perceptual features may be found under which distances make sense; however, they are high-dimensional and expensive to compute.
Instead, our method first embeds the training images in a low-dimensional space using GLO. Differently from the original GLO algorithm, we use a VGG perceptual loss function. The optimization objective is written in Eq. 6:

$$\arg\min_{G, \{z_i\}} \sum_i \ell_{percep}\left(G(z_i), x_i\right) \quad (6)$$
All parameters are optimized directly by SGD. By the end of training, the training images are embedded by the low-dimensional latent codes $\{z_i\}$. The latent space enjoys convenient properties such as linearity. A significant benefit of this space is that a Euclidean metric in it typically yields more semantically meaningful results than one in raw image pixels.
3.4.2 Stage 2: Sampling from the latent space
GLO replaced the problem of sampling from image pixels by the problem of sampling from the latent space, without offering an effective sampling algorithm. Although the original paper suggests fitting a Gaussian to the training latent vectors $\{z_i\}$, this typically does not result in good generations. Instead, we propose learning a mapping $T$ from a distribution from which sampling is trivial (e.g. multivariate normal) to the empirical latent code distribution, using IMLE.
At the beginning of each epoch, we sample a set of random noise codes $\{e_j\}$ from the noise distribution. Each of the codes is mapped to the latent space using the mapping function $T$:

$$\tilde{z}_j = T(e_j)$$
During the epoch, our method iteratively samples a minibatch of latent codes from the set computed in the previous stage. For each latent code $z_i$, we find the nearest mapped noise vector (using a Euclidean distance metric):

$$j(i) = \arg\min_j \|z_i - \tilde{z}_j\|^2$$

The approximate matches can now be used for finetuning the mapping function $T$:

$$\min_T \sum_i \|z_i - T(e_{j(i)})\|^2$$
This procedure is repeated until the convergence of $T$. It was shown theoretically by Li and Malik that the method achieves a form of maximum likelihood estimation.
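A toy sketch of this stage, assuming a linear mapping and synthetic 2-D points standing in for the GLO latent codes, shows how cheap the nearest-neighbor fit is in the low-dimensional latent space:

```python
import numpy as np

# Stage-2 sketch (assumptions: linear T, synthetic stand-in latent codes).
rng = np.random.default_rng(0)
Zc = rng.normal(size=(100, 2)) @ np.array([[2.0, 0.3], [0.0, 0.5]])
T = rng.normal(size=(2, 2)) * 0.1               # mapping parameters

def fit_epoch(T, lr=0.1, m=500):
    E = rng.normal(size=(m, 2))                 # sample noise codes e_j
    Zt = E @ T                                  # mapped codes T(e_j)
    d2 = ((Zc[:, None] - Zt[None]) ** 2).sum(-1)
    j = d2.argmin(axis=1)                       # nearest mapped code per z_i
    T = T - lr * E[j].T @ (E[j] @ T - Zc) / len(Zc)
    return T, d2.min(axis=1).mean()             # mean matching distance

T, loss0 = fit_epoch(T)
for _ in range(100):
    T, loss = fit_epoch(T)
```

Because the matching happens in the low-dimensional latent space rather than in pixel space, each epoch is inexpensive and the plain Euclidean metric is appropriate.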
3.4.3 Sampling new images
Synthesizing new images is now a simple task: we first sample a noise vector $e$ from the multivariate normal distribution. The new sample is mapped to the latent code space:

$$z = T(e)$$
By our previous optimization, $T$ was trained such that the latent code $z$ lies close to the data manifold. We can therefore project the latent code to image space using the GLO-trained generator $G$:

$$I = G(T(e))$$
$I$ will appear to come from the distribution of the training images.
It is also possible to invert this transformation by optimizing for the noise vector $e$ given an image $I$:

$$\arg\min_e \ell\left(G(T(e)), I\right)$$
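Sampling and inversion can be sketched end-to-end; the linear stand-ins for the trained mapping and generator are assumptions for illustration only (the real models are deep networks):

```python
import numpy as np

# End-to-end sampling and inversion sketch with stand-in linear maps.
rng = np.random.default_rng(0)
T = np.array([[1.0, 0.2], [0.0, 0.5]])          # noise -> latent (stand-in)
G = rng.normal(size=(2, 8))                     # latent -> "image" (stand-in)

e = rng.normal(size=2)                          # e ~ N(0, I)
z = e @ T                                       # latent code z = T(e)
img = z @ G                                     # generated "image" I = G(z)

# Inversion: recover a noise vector for a given image by gradient descent
# on ||G(T(e')) - I||^2.
M = T @ G                                       # composed linear map
e_hat = np.zeros(2)
for _ in range(10000):
    e_hat -= 0.01 * (e_hat @ M - img) @ M.T     # gradient step on the noise
recon_err = np.linalg.norm(e_hat @ M - img)
```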
Table 1: FID comparison per dataset between MM GAN, NS GAN, LSGAN, WGAN, BEGAN, VAE, GLO, and our method.
4 Experiments

To evaluate the performance of our proposed method, we perform quantitative and qualitative experiments comparing our method against established baselines.
4.1 Quantitative Image Generation Results
In order to compare the quality of our results against representative adversarial methods, we evaluate our method using the protocol established by Lucic et al. This protocol fixes the architecture of all generative models to be that of InfoGAN. It evaluates representative adversarial models (DCGAN, LSGAN, NSGAN, WGAN, WGAN-GP, DRAGAN, BEGAN) and a single non-adversarial model (VAE). In that study, significant computational resources are used to evaluate the performance of each method over a set of 100 hyper-parameter settings, e.g. learning rate, regularization, and presence of batch norm.
Finding good evaluation metrics for generative models is an active research area. Lucic et al. argue that the previously used Inception Score (IS) is not a good evaluation metric, as the maximal IS score is obtained by synthesizing a single image from every class. Instead, they advocate using the Frechet Inception Distance (FID). FID measures the similarity of the distributions of real and generated images in two steps: i) running the Inception network as a feature extractor to embed each of the real and generated images; ii) fitting a multivariate Gaussian to the real and generated embeddings separately, to yield means $\mu_r, \mu_g$ and covariances $\Sigma_r, \Sigma_g$ for the real and generated distributions respectively. The FID score is then computed as in Eq. 10:

$$FID = \|\mu_r - \mu_g\|_2^2 + Tr\left(\Sigma_r + \Sigma_g - 2\left(\Sigma_r \Sigma_g\right)^{\frac{1}{2}}\right) \quad (10)$$
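Eq. 10 can be computed directly from two sets of embeddings. The sketch below obtains the trace of the matrix square root from the eigenvalues of the covariance product (which has a real, non-negative spectrum up to numerical noise), and uses random stand-in embeddings rather than Inception features:

```python
import numpy as np

def fid(real, gen):
    """FID between two embedding sets, per Eq. 10 (sketch)."""
    mu_r, mu_g = real.mean(axis=0), gen.mean(axis=0)
    cov_r = np.cov(real, rowvar=False)
    cov_g = np.cov(gen, rowvar=False)
    # Tr((cov_r cov_g)^(1/2)) = sum of square roots of its eigenvalues
    eig = np.linalg.eigvals(cov_r @ cov_g)
    tr_sqrt = np.sqrt(np.clip(eig.real, 0.0, None)).sum()
    return ((mu_r - mu_g) ** 2).sum() + np.trace(cov_r + cov_g) - 2.0 * tr_sqrt

rng = np.random.default_rng(0)
a = rng.normal(size=(5000, 4))                  # "real" embeddings
b = rng.normal(size=(5000, 4))                  # same distribution: FID near 0
c = rng.normal(loc=2.0, size=(5000, 4))         # mean-shifted: FID near 16
```

Identical distributions give a score near zero, while a shift of the means by 2 in each of 4 dimensions contributes roughly $4 \times 2^2 = 16$ to the score.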
Lucic et al. evaluate the baselines on standard public datasets: MNIST, Fashion-MNIST, CIFAR10 and CelebA. MNIST, Fashion-MNIST and CIFAR10 each contain 50k training images and 10k validation images. MNIST and Fashion-MNIST images are $28 \times 28$ grayscale, while CIFAR10 images are $32 \times 32$ color.
For a fair comparison of our method, we use the same generator architecture used by Lucic et al. for our GLO model. We do not have a discriminator; instead, we use a VGG perceptual loss. Also differently from the methods tested by Lucic et al., we train an additional network $T$ for IMLE sampling, mapping from the noise space to the latent space. In our implementation, $T$ has two dense hidden layers with ReLU activations and BatchNorm. GLANN actually uses fewer parameters than the baselines by not using a discriminator. Our method was trained with Adam, using the highest learning rates that allowed convergence for the mapping network, the latent codes (lower for CelebA), and the generator. The GLO stage was trained for a fixed number of epochs with the learning rate decayed every 50 epochs, after which the mapping network was trained.
Tab. 1 presents a comparison of the FID achieved by our method and those reported by Lucic et al. We removed DRAGAN and WGAN-GP for space considerations (as other methods exhibited similar performance). The results for GLO were obtained by fitting a Gaussian to the learned latent codes (as suggested by its authors).
On Fashion and CIFAR10, our method significantly outperforms all baselines, despite using just a single hyper-parameter setting. Our method is competitive on MNIST, although it does not reach the top performance; as most methods performed very well on this task, we do not think it has much discriminative power. We found that a few other methods outperformed ours in terms of FID on CelebA, due to checkerboard patterns in our generated images. This is a well-known phenomenon of deconvolutional architectures, which are now considered outdated. In Sec. 4.3, we show high-quality CelebA-HQ facial images generated by our method when trained using modern architectures.
Our method always significantly outperforms the VAE and GLO baselines, which are strong representatives of non-adversarial methods. One of the main messages of the Lucic et al. study was that GAN methods require a significant hyperparameter search to achieve good performance. Our method was shown to be very stable and achieved strong performance (top on two datasets) with a fixed hyperparameter setting. An extensive hyperparameter search can potentially further increase the performance of our method; we leave this to future work.
4.2 Evaluation of Precision and Recall
FID is effective at measuring precision, but not recall. We therefore also use the evaluation metric recently presented by Sajjadi et al., which they name PRD. PRD first embeds an equal number of generated and real images using the Inception network. All image embeddings (real and generated) are concatenated and clustered into bins. Histograms $P(\omega), Q(\omega)$ are computed for the number of images in each cluster from the real and generated data respectively. The precision $\alpha(\lambda)$ and recall $\beta(\lambda)$ are defined:

$$\alpha(\lambda) = \sum_{\omega} \min\left(\lambda P(\omega), Q(\omega)\right), \qquad \beta(\lambda) = \sum_{\omega} \min\left(P(\omega), \frac{Q(\omega)}{\lambda}\right)$$
The set of pairs $\{(\alpha(\lambda), \beta(\lambda))\}$ forms the precision-recall curve (the threshold $\lambda$ is sampled from an equiangular grid). The precision-recall curve is summarized by a variation of the $F_\beta$ score:

$$F_\beta = \left(1 + \beta^2\right) \frac{p \cdot r}{\beta^2 p + r}$$

which is able to assign greater importance to precision or recall. Specifically, $(F_8, F_{1/8})$ are used for capturing (recall, precision).
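The definitions above transcribe directly into code; the two toy histograms below are an assumed example in which the generator drops half of the real modes:

```python
import numpy as np

def prd_curve(P, Q, num_angles=201):
    """PRD precision/recall from cluster histograms P (real), Q (generated)."""
    angles = np.linspace(1e-4, np.pi / 2 - 1e-4, num_angles)
    lams = np.tan(angles)                        # equiangular grid of thresholds
    alpha = np.array([np.minimum(l * P, Q).sum() for l in lams])  # precision
    beta = np.array([np.minimum(P, Q / l).sum() for l in lams])   # recall
    return alpha, beta

def f_beta(p, r, beta=8.0):
    """F_beta summary of a single precision/recall pair."""
    return (1.0 + beta**2) * p * r / (beta**2 * p + r + 1e-12)

P = np.array([0.25, 0.25, 0.25, 0.25])   # real images spread over 4 bins
Q = np.array([0.50, 0.50, 0.00, 0.00])   # generator covers only 2 bins
alpha, beta = prd_curve(P, Q)
```

In this mode-dropping example the maximal precision stays at 1 (every generated sample is realistic), while the maximal recall drops to 0.5 (half the real modes are never generated).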
The exact numerical precision-recall values are not available in the PRD paper, which instead provides scatter plots with the $(F_8, F_{1/8})$ pairs of all models trained in the study of Lucic et al. We computed $(F_8, F_{1/8})$ for the models trained using our method as described in the previous section, using the authors' code. For ease of comparison, we overlay our scores over the provided scatter plots. The scores for GLO with sampling by fitting a Gaussian to the learned latent codes were much worse than ours on all four datasets (MNIST, Fashion, CIFAR10 and CelebA).
From Fig. 2 we can observe that our method generally performs better than or competitively with GANs on both precision and recall. On MNIST, our method and the best GAN method achieved near-perfect precision-recall. On Fashion, our method achieved near-perfect precision-recall while the best GAN method lagged behind. On CIFAR10, the performance of our method was also convincingly better than the best GAN model. On CelebA, our method performed well but did not achieve the top performance, due to the checkerboard issue described in Sec. 4.1. Overall, the performance of our method is typically better than or equal to the baselines examined; this is even more impressive in view of the baselines having been exhaustively tested over 100 hyperparameter configurations. We also note that our method outperformed VAEs and GLO very convincingly. This provides evidence that our method is far superior to other generator-based non-adversarial models.
4.3 Qualitative Image Generation Results
We provide qualitative comparisons between our method and the GAN models evaluated by Sajjadi et al. We also show promising results on high-resolution image generation.
As mentioned above, Sajjadi et al. evaluated different generative models in terms of precision and recall. They provided visual examples of their best-performing model (marked B) for each of the datasets evaluated. In Fig. 3, we provide a visual comparison between random samples generated by our model (without cherry-picking) and their reported results.
We can observe that on MNIST and Fashion-MNIST our method and the best GAN method performed very well. The visual examples are diverse and of high visual quality.
On the CIFAR10 dataset, we can observe that our examples are more realistic than those generated by the best GAN model trained by Lucic et al. On CelebA, our generated images are very realistic, with many fewer failed generations. Our generated images do suffer from some pixelization (discussed in Sec. 4.1). We note that GANs can generate very high-quality faces (e.g. PGGAN); however, it appears that for the small architecture used by Lucic et al. and Sajjadi et al., GANs do not generate particularly high-quality facial images.
To evaluate the performance of our method on higher-resolution images, we trained our method on the CelebA-HQ dataset. We used the network architecture from Mescheder et al., with appropriate choices of channel width, latent code dimensionality, and noise dimension. Separate learning rates were used for the latent codes, the generator, and the noise-to-latent-code mapping function. We trained for 250 epochs, with the learning rates decayed every 10 epochs.
We show some examples of interpolation between two randomly sampled noises in Fig. 4. Several observations can be made from the figures: i) Our model is able to generate very high quality images at high resolutions. ii) The smooth interpolations illustrate that our model generalizes well to unseen images.
To show the ability of our method to scale to even higher resolutions, we present two interpolations in Fig. 5, although we note that not all interpolations at such high resolution were successful.
4.4 ModelNet Chair 3D Generation
4.5 Non-Adversarial Unsupervised Image Translation
As generative models are trained in order to be used in downstream tasks, we propose to evaluate generative models by the downstream task of cross-domain unsupervised mapping. NAM was proposed by Hoshen and Wolf for unsupervised domain mapping. The method relies on having a strong unconditional generative model of the output image domain; stronger generative models perform better at this task. This required [16, 13] to use GAN-based unconditional generators. We evaluated our model using the quantitative benchmarks presented in the NAM paper. On the three benchmark tasks, our model achieved results similar to those obtained using the GAN-based unconditional models (although the SVHN result is a bit lower here). GLANN is therefore the first model able to achieve fully unsupervised image translation without the use of GANs.
5 Discussion

Loss function: In this work, we replaced the standard adversarial loss function with a perceptual loss. In practice we use ImageNet-trained VGG features. Zhang et al. claimed that self-supervised perceptual losses work no worse than ImageNet-trained features. It is therefore likely that our method would achieve similar performance with self-supervised perceptual losses.
Higher resolution: The increase in resolution was enabled by a simple modification of the loss function: the perceptual loss was calculated both on the original images and on a bilinearly subsampled version of the image. Going up to higher resolutions simply requires more subsampling levels. Research into more sophisticated perceptual losses will probably yield further improvements in synthesis quality.
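The multi-scale construction can be sketched as follows; average pooling and a squared error are assumed stand-ins for the bilinear subsampling and the VGG perceptual loss:

```python
import numpy as np

def subsample(img):
    """2x2 average pooling (stand-in for bilinear subsampling)."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    x = img[:h, :w]
    return (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2]) / 4.0

def multiscale_loss(a, b, levels=3):
    """Sum a per-scale loss over the original and subsampled images."""
    total = 0.0
    for _ in range(levels):
        total += np.mean((a - b) ** 2)           # loss at the current scale
        a, b = subsample(a), subsample(b)        # go one level coarser
    return total

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 64))
y = x + 0.1 * rng.normal(size=(64, 64))
```

Adding a level simply extends the loop, which matches the statement that higher resolutions only require more subsampling levels.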
Other modalities: In this work we focused on image synthesis. We believe that our method can extend to many other modalities, particularly 3D and video. The simplicity of the procedure and its robustness to hyperparameters make application to other modalities much simpler than with GANs. We showed some evidence for this assertion in Sec. 4.4. One research task for future work is finding good perceptual loss functions for domains outside 2D images.
6 Conclusions

In this paper we introduced a novel non-adversarial method for training generative models. Our method combines ideas from GLO and IMLE and overcomes the weaknesses of both. When compared on established benchmarks, our method outperformed the most common GAN models, which underwent exhaustive hyperparameter tuning. Our method is robust and simple to train and achieves excellent results. As future work, we plan to extend this work to higher resolutions and new modalities such as video and 3D.
References

-  M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein gan. In ICLR, 2017.
-  S. Arora and Y. Zhang. Do gans actually learn the distribution? an empirical study. arXiv preprint arXiv:1706.08224, 2017.
-  S. Barratt and R. Sharma. A note on the inception score. arXiv preprint arXiv:1801.01973, 2018.
-  Y. Bengio, G. Mesnil, Y. Dauphin, and S. Rifai. Better mixing via deep representations. In ICML, pages 552–560, 2013.
-  P. Bojanowski, A. Joulin, D. Lopez-Paz, and A. Szlam. Optimizing the latent space of generative networks. In ICML, 2018.
-  Q. Chen and V. Koltun. Photographic image synthesis with cascaded refinement networks. ICCV, 2017.
-  X. Chen, X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In NIPS. 2016.
-  T. Cover and P. Hart. Nearest neighbor pattern classification. IEEE transactions on information theory, 1967.
-  L. Dinh, D. Krueger, and Y. Bengio. Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, pages 2672–2680, 2014.
-  I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville. Improved training of wasserstein gans. In NIPS, 2017.
-  M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Advances in Neural Information Processing Systems, pages 6626–6637, 2017.
-  Y. Hoshen. Non-adversarial mapping with vaes. In NIPS, 2018.
-  Y. Hoshen and L. Wolf. An iterative closest point method for unsupervised word translation. arXiv preprint arXiv:1801.06126, 2018.
-  Y. Hoshen and L. Wolf. Nam - unsupervised cross-domain image mapping without cycles or gans. In ICLR Workshop, 2018.
-  Y. Hoshen and L. Wolf. Nam: Non-adversarial unsupervised domain mapping. In ECCV, 2018.
-  P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In CVPR, 2017.
-  T. Karras, T. Aila, S. Laine, and J. Lehtinen. Progressive growing of gans for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196, 2017.
-  D. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
-  D. P. Kingma and P. Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. arXiv preprint arXiv:1807.03039, 2018.
-  D. P. Kingma and M. Welling. Auto-encoding variational bayes. In ICLR, 2014.
-  A. Krizhevsky. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
-  Y. LeCun and C. Cortes. MNIST handwritten digit database. 2010.
-  K. Li and J. Malik. Implicit maximum likelihood estimation. arXiv preprint arXiv:1809.09087, 2018.
-  M. Lucic, K. Kurach, M. Michalski, S. Gelly, and O. Bousquet. Are gans created equal? a large-scale study. arXiv preprint arXiv:1711.10337, 2017.
-  L. Mescheder, S. Nowozin, and A. Geiger. Which training methods for gans do actually converge? In ICML, 2018.
-  T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida. Spectral normalization for generative adversarial networks. In ICLR, 2018.
-  A. Odena, V. Dumoulin, and C. Olah. Deconvolution and checkerboard artifacts. Distill, 1(10):e3, 2016.
-  A. v. d. Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. Senior, and K. Kavukcuoglu. Wavenet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016.
-  A. v. d. Oord, N. Kalchbrenner, and K. Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016.
-  M. S. Sajjadi, O. Bachem, M. Lucic, O. Bousquet, and S. Gelly. Assessing generative models via precision and recall. arXiv preprint arXiv:1806.00035, 2018.
-  T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training gans. In Advances in Neural Information Processing Systems, pages 2234–2242, 2016.
-  K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. ICLR, 2015.
-  I. Tolstikhin, O. Bousquet, S. Gelly, and B. Schoelkopf. Wasserstein auto-encoders. In ICLR, 2018.
-  J. Wu, C. Zhang, T. Xue, W. T. Freeman, and J. B. Tenenbaum. Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling. In NIPS, 2016.
-  Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao. 3d shapenets: A deep representation for volumetric shapes. In CVPR, 2015.
-  H. Xiao, K. Rasul, and R. Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017.
-  S. Yang, P. Luo, C. C. Loy, and X. Tang. From facial parts responses to face detection: A deep learning approach. In ICCV, pages 3676–3684, 2015.
-  H. Zhang, I. Goodfellow, D. Metaxas, and A. Odena. Self-attention generative adversarial networks. arXiv preprint arXiv:1805.08318, 2018.
-  R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang. The unreasonable effectiveness of deep features as a perceptual metric. arXiv preprint arXiv:1801.03924, 2018.
-  D. Zoran and Y. Weiss. From learning models of natural image patches to whole image restoration. In ICCV, 2011.