1 Introduction
Generative image modeling is a longstanding goal for computer vision. Unconditional generative models aim to learn functions that generate the entire image distribution given a finite number of training samples. Generative Adversarial Networks (GANs)
[10] are a recently introduced technique for generative image modeling. They are used extensively for image generation owing to: i) training effective unconditional image generators; ii) being almost the only method for unsupervised image translation between domains (but see NAM [16]); iii) being an effective perceptual image loss function (e.g. Pix2Pix [17]). Along with their obvious advantages, GANs have critical disadvantages: i) GANs are very hard to train; this is expressed by a very erratic progression of training, sudden run collapses, and extreme sensitivity to hyperparameters. ii) GANs suffer from mode dropping: the modeling of only some, but not all, the modes of the target distribution. The birthday paradox can be used to measure the extent of mode dropping [2]: the number of modes modeled by a generator can be estimated by generating a fixed number of images and counting the number of repeated images. Empirical evaluation of GANs found that the number of modes is significantly lower than the number in the training distribution.
The disadvantages of GANs gave rise to research into non-adversarial alternatives for training generative models. GLO [5] and IMLE [24] are two such methods. GLO, introduced by Bojanowski et al., embeds the training images in a low-dimensional space, so that they are reconstructed when the embedding is passed through a jointly trained deep generator. The advantages of GLO are: i) encoding the entire distribution without mode dropping; ii) a learned latent space that corresponds to semantic image properties, i.e. Euclidean distances between latent codes correspond to semantically meaningful differences. A critical disadvantage of GLO is that there is no principled way to sample new images from it. Although the authors recommended fitting a Gaussian to the latent codes of the training images, this does not result in high-quality image synthesis.
IMLE was proposed by Li and Malik [24] for training generative models by sampling a large number of latent codes from an arbitrary distribution, mapping each to the image domain using a trained generator, and ensuring that for every training image there exists a generated image which is near to it. IMLE is trivial to sample from and does not suffer from mode dropping. Like other nearest neighbor methods, IMLE is sensitive to the exact metric used, particularly given that the training set is finite. Recall that while the classic Cover-Hart result [8] tells us that asymptotically the error rate of the nearest neighbor classifier is within a factor of 2 of the Bayes risk, when we use a finite set of exemplars better choices of metric give better classifier performance. When trained directly on image pixels using an $L_2$ loss, IMLE synthesizes blurry images.
In this work, we present a new technique, Generative Latent Nearest Neighbors (GLANN), which is able to train generative models of comparable or better quality to GANs. Our method overcomes the metric problem of IMLE by first embedding the training images using GLO. The attractive linear properties of the latent space induced by GLO allow the Euclidean metric to be semantically meaningful in the latent space. We train an IMLE-based model to map between an arbitrary noise distribution and the GLO latent space. The GLO generator can then map the generated latent codes to pixel space, thus generating an image. Our method, GLANN, enjoys the best of both IMLE and GLO: easy sampling, modeling of the entire distribution, stable training and sharp image synthesis. A schema of our approach is presented in Fig. 1.
We quantitatively evaluate our method using established protocols and find that it significantly outperforms other non-adversarial methods, while usually being better than or competitive with current GAN-based models. GLANN is also able to achieve promising results on high-resolution image generation and 3D generation. Finally, we show that GLANN-trained models are the first to perform truly non-adversarial unsupervised image translation.
2 Previous Work
Generative Modeling: Generative modeling of images is a longstanding problem of wide applicability. Early approaches included mixtures of Gaussians (GMMs) [41]. Such methods were very limited in image resolution and quality. Since the introduction of deep learning, deep methods have continually been used for generative image modeling. Early attempts included Deep Belief Networks (DBNs) (e.g. [4]). DBNs, however, were rather tricky to train and did not scale to high resolutions. Variational Autoencoders (VAEs) [21], introduced by Kingma and Welling, are a significant breakthrough in deep generative modeling. VAEs are able to generate images from the Gaussian distribution by making a variational approximation. This scheme was followed by multiple works, including the recent Wasserstein Autoencoder [34]. Although VAEs are relatively simple to train and have solid theoretical foundations, they generally do not generate sharp images. This is partially due to restrictive assumptions such as a unimodal prior, and to the requirement for an encoder. Several other non-adversarial training paradigms exist: generative invertible flows [9], which were recently extended to high resolution [20] but at prohibitive computational cost, and autoregressive image models, e.g. PixelRNN/PixelCNN [30]
, where pixels are modeled sequentially. Autoregressive models are computationally expensive and underperform adversarial methods, although they are the state of the art in audio generation (e.g. WaveNet [29]).
Adversarial Generative Models: Generative Adversarial Networks (GANs) were first introduced by Goodfellow et al. [10] and are the state-of-the-art method for training generative models. A basic discussion of GANs was given in Sec. 1. GANs have shown a remarkable ability for image generation, but suffer from difficult training and mode dropping. Many methods were proposed for improving GANs, e.g. changing the loss function (e.g. Wasserstein GAN [1]) or regularizing the discriminator to be Lipschitz by: clipping [1], gradient regularization [11, 26] or spectral normalization [27]. GAN training was shown to scale to high resolutions [39] using engineering tricks and careful hyperparameter selection.
Evaluation of Generative Models: Evaluation of generative models is challenging. Early works evaluated generative models using probabilistic criteria (e.g. [41]). More recent generative models (particularly GANs) are not amenable to such evaluation. GAN generations have traditionally been evaluated using visual inspection of a handful of examples or by a user study. More recently, more principled evaluation protocols have emerged. Inception Scores (IS) which take into account both diversity and quality were first introduced by [32]. FID scores [12] were more recently introduced to overcome major flaws of the IS protocol [3]
. Very recently, a method for generative model evaluation which is able to capture both precision and recall was introduced by Sajjadi et al. [31]. Due to the hyperparameter sensitivity of GANs, a large-scale study of the performance of different GANs and VAEs was carried out by Lucic et al. [25] over a large search space of 100 different hyperparameter settings, establishing a common baseline for evaluation.
Non-Adversarial Methods: The disadvantages of GANs motivated research into GAN alternatives. GLO [5], a recently introduced encoder-less generative model which uses a non-adversarial loss function, achieves better results than VAEs. Due to the lack of a good sampling procedure, it does not outperform GANs (see Sec. 3.1). IMLE [24], a method related to ICP, was also introduced for training unconditional generative models; however, due to computational challenges and the choice of metric, it also does not outperform GANs. Chen and Koltun [6] presented a non-adversarial method for supervised image mapping, which in some cases was found to be competitive with adversarial methods. Hoshen and Wolf introduced an ICP-based method [14] for unsupervised word translation which contains no adversarial training. However, this method is not currently able to generate high-quality images. They also presented a non-adversarial method, NAM [15, 16, 13], for unsupervised image mapping. NAM relies on access to a strong unconditional model of the target domain, which is typically trained using GANs.
3 Our method
In this section we present our method, GLANN, for synthesizing high-quality images without using GANs.
3.1 GLO
Classical methods often factorize a set of data points $\{x_i\}$ via the following decomposition:

$x_i = W z_i$  (1)

where $z_i$ is a latent code describing $x_i$, and $W$ is a set of weights. Such factorization is poorly constrained and is typically accompanied by other constraints such as low-rank, positivity (NMF) or sparsity. Both $W$ and $\{z_i\}$ are optimized directly, e.g. by alternating least squares or SVD. The resulting $z_i$ are latent vectors that embed the data in a lower-dimensional and typically better-behaved space. It is often found that attributes become linear operations in the latent space.
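As a toy illustration (not from the paper), such a direct optimization can be carried out by alternating least squares; the function and variable names below are hypothetical:

```python
import numpy as np

def als_factorize(X, dim, n_iters=10, seed=0):
    """Alternating least squares for X ~= W @ Z.
    X: (d, n) data matrix; the columns of Z are the latent codes z_i."""
    rng = np.random.default_rng(seed)
    d, n = X.shape
    W = rng.standard_normal((d, dim))
    Z = rng.standard_normal((dim, n))
    for _ in range(n_iters):
        # Fix Z and solve the least-squares problem for W, then vice versa.
        W = X @ np.linalg.pinv(Z)
        Z = np.linalg.pinv(W) @ X
    return W, Z

# Toy check: an exactly rank-2 matrix is recovered almost perfectly.
rng = np.random.default_rng(1)
X = rng.standard_normal((10, 2)) @ rng.standard_normal((2, 30))
W, Z = als_factorize(X, dim=2)
err = np.linalg.norm(X - W @ Z) / np.linalg.norm(X)
```

On exactly low-rank data each alternating step is an exact least-squares solve, so the relative reconstruction error collapses to machine precision within a few iterations.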
GLO [5] is a recently introduced deep method, which differs from the above in three aspects: i) constraining all latent vectors to lie on a unit sphere or in a unit ball; ii) replacing the linear matrix $W$ by a deep CNN generator $G$, which is more suitable for modeling images; iii) using a Laplacian pyramid loss function (though we find that a VGG [33] perceptual loss works better).
The GLO optimization objective is written in Eq. 2:

$\min_{G, \{z_i\}} \sum_i \ell(G(z_i), x_i)$  (2)

Bojanowski et al. [5] implement $\ell$ as a Laplacian pyramid loss. All weights are trained by SGD (including the generator weights of $G$ and a latent vector $z_i$ per training image $x_i$). After training, the result is a generator $G$ and a latent embedding $z_i$ of each training image $x_i$.
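Under strong simplifications (a linear generator standing in for the deep CNN, plain squared error standing in for the Laplacian pyramid loss), the joint SGD over generator weights and per-image latent codes can be sketched as follows; all names are hypothetical:

```python
import numpy as np

def glo_train(X, dim=2, lr=0.1, epochs=1000, seed=0):
    """Minimal GLO sketch: jointly fit a linear generator G(z) = G @ z and a
    free latent code z_i per training image by gradient descent on squared
    error, keeping every code inside the unit ball as in GLO."""
    rng = np.random.default_rng(seed)
    d, n = X.shape
    G = rng.standard_normal((d, dim)) * 0.5
    Z = rng.standard_normal((dim, n)) * 0.5
    for _ in range(epochs):
        R = G @ Z - X                    # reconstruction residuals
        G -= lr * (R @ Z.T) / n          # gradient step on generator weights
        Z -= lr * (G.T @ R) / n          # gradient step on the latent codes
        norms = np.maximum(np.linalg.norm(Z, axis=0), 1.0)
        Z /= norms                       # project codes back into the unit ball
    return G, Z

rng = np.random.default_rng(1)
X = rng.standard_normal((10, 2)) @ rng.standard_normal((2, 30))  # rank-2 toy data
G, Z = glo_train(X)
rec_err = np.linalg.norm(G @ Z - X) / np.linalg.norm(X)
```

The essential point the sketch preserves is that the codes $z_i$ are free optimization variables, one per training image, updated jointly with the generator and then projected to satisfy the unit-ball constraint.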
3.2 IMLE
IMLE [24] is a recent non-adversarial technique that maps between distributions using a maximum likelihood criterion. Each epoch of IMLE consists of the following stages: i) a large set of random latent codes $\{e_j\}$ is sampled from a normal distribution; ii) the latent codes are mapped by the generator, resulting in images $\{G(e_j)\}$; iii) for each training example $x_i$, the nearest generated image is found such that: $j_i = \arg\min_j \|x_i - G(e_j)\|^2$; iv) $G$ is optimized using the nearest neighbors as approximate correspondences: $\min_G \sum_i \|x_i - G(e_{j_i})\|^2$. This procedure is repeated until the convergence of $G$.
3.3 Limitations of GLO and IMLE
The main limitation of GLO is that the generator is not trained to sample from any known distribution, i.e. the distribution of the latent codes $z_i$ is unknown and we cannot directly sample from it. When sampling latent variables from a normal distribution, or when fitting a Gaussian to the training set latent codes (as advocated in [5]), the generations obtained are usually of much lower quality than GANs. This prevents GLO from being competitive with GANs.
Although sampling from an IMLE-trained generator is trivial, the training is not: a good metric might not be known, and the nearest neighbor computation and feature extraction for each random noise generation are costly. IMLE typically results in blurry image synthesis.
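For concreteness, one IMLE epoch (steps i-iv of Sec. 3.2) can be sketched in a toy setting with a linear generator and an L2 metric; this also makes the per-epoch nearest-neighbor cost visible, since every epoch materializes an n x m distance matrix. All names and dimensions are hypothetical:

```python
import numpy as np

def imle_epoch(X, G, n_samples=300, lr=0.2, rng=None):
    """One IMLE epoch with a linear generator G(e) = G @ e.
    X: (d, n) training data; G: (d, k) generator weights.
    i) sample latent codes; ii) generate; iii) find the nearest generated
    sample per training point; iv) gradient step on G using the matches."""
    rng = rng or np.random.default_rng(0)
    k = G.shape[1]
    E = rng.standard_normal((k, n_samples))               # i) e_j ~ N(0, I)
    S = G @ E                                             # ii) generated samples
    d2 = ((X[:, :, None] - S[:, None, :]) ** 2).sum(0)    # iii) (n, m) distances
    nn = d2.argmin(axis=1)                                # index j_i per x_i
    E_nn = E[:, nn]                                       # matched latent codes
    R = G @ E_nn - X
    G = G - lr * (R @ E_nn.T) / X.shape[1]                # iv) update generator
    return G, d2.min(axis=1).mean()

rng = np.random.default_rng(1)
X = rng.standard_normal((2, 40)) * 2.0    # toy 2-D "images"
G = rng.standard_normal((2, 3)) * 0.1
first = last = None
for epoch in range(60):
    G, mean_nn = imle_epoch(X, G, rng=rng)
    first = mean_nn if first is None else first
    last = mean_nn
```

The mean nearest-neighbor distance shrinks as the generator's samples spread over the data, which is the maximum-likelihood-style objective IMLE optimizes.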
3.4 GLANN: Generative Latent Nearest Neighbor
We present a method, GLANN, that overcomes the weaknesses of both GLO and IMLE. GLANN consists of two stages: i) embedding the high-dimensional image space into a "well-behaved" latent space using GLO; ii) mapping between an arbitrary distribution (typically a multidimensional normal distribution) and the low-dimensional latent space using IMLE.
3.4.1 Stage 1: Latent embedding
Images are high-dimensional and distances between them in pixel space might not be meaningful. This makes IMLE and the use of simple metric functions such as $L_1$ or $L_2$ less effective in pixel space. In some cases perceptual features may be found under which distances make sense; however, they are high-dimensional and expensive to compute.
Instead, our method first embeds the training images in a low-dimensional space using GLO. Differently from the original GLO algorithm, we use a VGG perceptual loss function $\ell_{VGG}$. The optimization objective is written in Eq. 3:

$\min_{G, \{z_i\}} \sum_i \ell_{VGG}(G(z_i), x_i)$  (3)
All parameters are optimized directly by SGD. By the end of training, the training images are embedded by the low-dimensional latent codes $\{z_i\}$. The latent space enjoys convenient properties such as linearity. A significant benefit of this space is that a Euclidean metric in it typically yields more semantically meaningful results than one on raw image pixels.
3.4.2 Stage 2: Sampling from the latent space
GLO replaced the problem of sampling from image pixels by the problem of sampling from the latent space, without offering an effective sampling algorithm. Although the original paper suggests fitting a Gaussian to the training latent vectors $\{z_i\}$, this typically does not result in good generations. Instead, we propose learning a mapping $T$ from a distribution from which sampling is trivial (e.g. multivariate normal) to the empirical latent code distribution, using IMLE.
At the beginning of each epoch, we sample a set of random noise codes $\{e_j\}$ from the noise distribution. Each one of the codes is mapped using the mapping function $T$ to the latent space:

$\tilde{z}_j = T(e_j)$  (4)
During the epoch, our method iteratively samples a minibatch of latent codes from the set $\{z_i\}$ computed in the previous stage. For each latent code $z_i$, we find the nearest neighbor mapped noise vector (using a Euclidean distance metric):

$j_i = \arg\min_j \|z_i - T(e_j)\|^2$  (5)
The approximate matches can now be used for fine-tuning the mapping function $T$:

$\min_T \sum_i \|z_i - T(e_{j_i})\|^2$  (6)
This procedure is repeated until the convergence of $T$. It was shown theoretically by Li and Malik [24] that the method achieves a form of maximum likelihood estimation.
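A minimal sketch of this Stage-2 loop (Eqs. 4-6), with a linear map standing in for the small mapping network and the fine-tuning step of Eq. 6 solved exactly by least squares rather than by SGD; all names are hypothetical:

```python
import numpy as np

def fit_latent_mapping(Z, noise_dim=8, n_samples=400, epochs=30, seed=0):
    """Stage-2 sketch: learn a mapping T from noise e ~ N(0, I) to the GLO
    latent codes z_i via IMLE. Z: (zdim, n), columns are the GLO codes."""
    rng = np.random.default_rng(seed)
    zdim, n = Z.shape
    T = rng.standard_normal((zdim, noise_dim)) * 0.01
    for _ in range(epochs):
        E = rng.standard_normal((noise_dim, n_samples))
        Zt = T @ E                                           # Eq. 4: z~_j = T(e_j)
        d2 = ((Z[:, :, None] - Zt[:, None, :]) ** 2).sum(0)  # (n, m) distances
        nn = d2.argmin(axis=1)                               # Eq. 5: matches j_i
        T = Z @ np.linalg.pinv(E[:, nn])                     # Eq. 6 (exact LS fit)
    return T

rng = np.random.default_rng(2)
Z = rng.standard_normal((3, 50))
Z /= np.maximum(np.linalg.norm(Z, axis=0), 1.0)   # codes inside the unit ball
T = fit_latent_mapping(Z)
# Fresh noise should now map close to the empirical code distribution.
E_new = rng.standard_normal((8, 1000))
d2 = ((Z[:, :, None] - (T @ E_new)[:, None, :]) ** 2).sum(0)
coverage = d2.min(axis=1).mean()   # mean distance^2 from each code to nearest sample
```

After fitting, fresh noise samples land close to the empirical latent code distribution, which is exactly the property Stage 2 is after.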
3.4.3 Sampling new images
Synthesizing new images is now a simple task: we first sample a noise vector $e$ from the multivariate normal distribution $\mathcal{N}(0, I)$. The new sample is mapped to the latent code space:

$z = T(e)$  (7)
By our previous optimization, $T$ was trained such that the latent code $z = T(e)$ lies close to the data manifold. We can therefore project the latent code to image space using our GLO-trained generator $G$:

$I = G(T(e))$  (8)
The image $I = G(T(e))$ will appear to come from the distribution of the input images $\{x_i\}$.
It is also possible to invert this transformation by optimizing for the noise vector $e$ given an image $I$:

$\arg\min_e \ell(G(T(e)), I)$  (9)
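The full sampling path (Eqs. 7-8) and the inversion of Eq. 9 can be sketched end-to-end with linear stand-ins for the trained mapping $T$ and generator $G$; all names and dimensions are hypothetical:

```python
import numpy as np

# Hypothetical trained components: a linear mapping T (noise -> latent) and a
# linear generator Gmat (latent -> "pixels") stand in for the trained networks.
rng = np.random.default_rng(0)
noise_dim, zdim, pix = 4, 3, 16
T = rng.standard_normal((zdim, noise_dim))
Gmat = rng.standard_normal((pix, zdim))

def sample_image(rng):
    e = rng.standard_normal(noise_dim)   # e ~ N(0, I)
    z = T @ e                            # z = T(e)        (Eq. 7)
    return Gmat @ z                      # I = G(T(e))     (Eq. 8)

def invert(img, steps=10000):
    """Eq. 9 sketch: recover a noise vector for a given image by gradient
    descent on the squared reconstruction error."""
    M = Gmat @ T                          # composed linear "generator"
    lr = 1.0 / np.linalg.norm(M, 2) ** 2  # stable step size for this quadratic
    e = np.zeros(noise_dim)
    for _ in range(steps):
        e -= lr * M.T @ (M @ e - img)
    return e

img = sample_image(rng)
e_hat = invert(img)
res = np.linalg.norm(Gmat @ (T @ e_hat) - img) / np.linalg.norm(img)
```

Because the toy image was itself generated by the model, gradient descent drives the relative reconstruction residual toward zero; with the real deep networks, Eq. 9 is the analogous non-convex optimization.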
[Table 1: FID scores (lower is better) on MNIST, Fashion, Cifar10 and CelebA for adversarial methods (MM GAN, NS GAN, LSGAN, WGAN, BEGAN) and non-adversarial methods (VAE, GLO, Ours). Most numeric entries were lost in extraction; one surviving entry: MNIST, 6.7 ± 0.4.]
4 Experiments
To evaluate the performance of our proposed method, we perform quantitative and qualitative experiments comparing our method against established baselines.
4.1 Quantitative Image Generation Results
In order to compare the quality of our results against representative adversarial methods, we evaluate our method using the protocol established by Lucic et al. [25]. This protocol fixes the architecture of all generative models to be that of InfoGAN [7]. They evaluate representative adversarial models (DCGAN, LSGAN, NSGAN, WGAN, WGAN-GP, DRAGAN, BEGAN) and a single non-adversarial model (VAE). In [25], significant computational resources are used to evaluate the performance of each method over a set of 100 hyperparameter settings, e.g. learning rate, regularization, presence of batch norm etc.
Finding good evaluation metrics for generative models is an active research area. Lucic et al. argue that the previously used Inception Score (IS) is not a good evaluation metric, as the maximal IS score is obtained by synthesizing a single image from every class. Instead, they advocate using the Fréchet Inception Distance (FID) [12]. FID measures the similarity of the distributions of real and generated images in two steps: i) running the Inception network as a feature extractor to embed each of the real and generated images; ii) fitting a multivariate Gaussian to the real and generated embeddings separately, to yield means $\mu_r$, $\mu_g$ and covariances $\Sigma_r$, $\Sigma_g$ for the real and generated distributions respectively. The FID score is then computed as in Eq. 10:

$FID = \|\mu_r - \mu_g\|_2^2 + \mathrm{Tr}\left(\Sigma_r + \Sigma_g - 2(\Sigma_r \Sigma_g)^{1/2}\right)$  (10)
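As a sketch of Eq. 10 under a simplifying diagonal-covariance assumption (the full score requires a matrix square root of $\Sigma_r \Sigma_g$, e.g. via scipy.linalg.sqrtm; the diagonal version keeps the sketch dependency-free), FID can be computed from feature matrices as:

```python
import numpy as np

def fid_diagonal(real_feats, gen_feats):
    """FID (Eq. 10) under a diagonal-covariance simplification:
    FID = ||mu_r - mu_g||^2 + sum(s_r + s_g - 2*sqrt(s_r * s_g)),
    where s_r, s_g are per-dimension feature variances.
    Inputs: (num_samples, feat_dim) embedding matrices."""
    mu_r, mu_g = real_feats.mean(0), gen_feats.mean(0)
    s_r, s_g = real_feats.var(0), gen_feats.var(0)
    return np.sum((mu_r - mu_g) ** 2) + np.sum(s_r + s_g - 2 * np.sqrt(s_r * s_g))

rng = np.random.default_rng(0)
a = rng.standard_normal((2000, 8))         # stand-in "real" embeddings
b = rng.standard_normal((2000, 8)) + 1.0   # "generated" embeddings, shifted mean
same = fid_diagonal(a, a)                  # identical sets: score is 0
shifted = fid_diagonal(a, b)               # mean shift of 1 in each of 8 dims
```

The shifted case scores roughly 8 (the squared mean shift summed over dimensions), illustrating how FID penalizes distribution mismatch.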
Lucic et al. evaluate the baselines on standard public datasets: MNIST [23], Fashion-MNIST [37], CIFAR-10 [22] and CelebA [38]. MNIST and Fashion-MNIST contain 60k training and 10k validation 28x28 grayscale images, while CIFAR-10 contains 50k training and 10k validation 32x32 color images.
For a fair comparison of our method, we use the same generator architecture used by Lucic et al. for our GLO model. We do not have a discriminator; instead, we use a VGG perceptual loss. Also differently from the methods tested by Lucic et al., we train an additional network $T$ for IMLE sampling, mapping from the noise space to the latent space. In our implementation, $T$ has two dense layers with ReLU and BatchNorm. GLANN actually uses fewer parameters than the baselines by not using a discriminator. Our method was trained with ADAM [19]. We used the highest learning rates that allowed convergence for the mapping network and the latent codes (lower for CelebA), with the generator learning rate set lower than the latent code rate; the GLO learning rates were decayed every 50 epochs. Tab. 1 presents a comparison of the FID achieved by our method and those reported by Lucic et al. We removed DRAGAN and WGAN-GP for space considerations (and as the other methods exhibited similar performance). The results for GLO were obtained by fitting a Gaussian to the learned latent codes (as suggested in [5]).
On Fashion and CIFAR10, our method significantly outperforms all baselines, despite using just a single hyperparameter setting. Our method is competitive on MNIST, although it does not reach the top performance. As most methods performed very well on this task, we do not think it has much discriminative power. We found that a few other methods outperformed ours in terms of FID on CelebA, due to checkerboard patterns in our generated images. This is a well-known phenomenon of deconvolutional architectures [28], which are now considered outdated. In Sec. 4.3, we show high-quality CelebA-HQ facial images generated by our method when trained using modern architectures.
Our method always significantly outperforms the VAE and GLO baselines, which are strong representatives of non-adversarial methods. One of the main messages in [25] was that GAN methods require a significant hyperparameter search to achieve good performance. Our method was shown to be very stable and achieved strong performance (top on two datasets) with a fixed hyperparameter setting. An extensive hyperparameter search can potentially further increase the performance of our method; we leave this to future work.
4.2 Evaluation of Precision and Recall
FID is effective at measuring precision, but not recall. We therefore also opt for the evaluation metric recently presented by Sajjadi et al. [31], which they name PRD. PRD first embeds an equal number of generated and real images using the Inception network. All image embeddings (real and generated) are concatenated and clustered into $B$ bins. Histograms $P(b)$, $Q(b)$ are computed over the number of images in each cluster from the real and generated data respectively. The precision $\alpha(\lambda)$ and recall $\beta(\lambda)$ are defined:
$\alpha(\lambda) = \sum_b \min(\lambda P(b), Q(b))$  (11)

$\beta(\lambda) = \sum_b \min(P(b), Q(b)/\lambda)$  (12)
The set of pairs $(\alpha(\lambda), \beta(\lambda))$ forms the precision-recall curve (the threshold $\lambda$ is sampled from an equiangular grid). The precision-recall curve is summarized by a variation of the $F_1$ score: $F_\beta$, which is able to assign greater importance to precision or recall. Specifically, $(F_8, F_{1/8})$ are used for capturing (recall, precision).
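A sketch of the PRD computation from cluster histograms, following the definitions above; the histogram values and the mode-dropping toy example are invented for illustration:

```python
import numpy as np

def prd_curve(P, Q, n_angles=1001):
    """PRD sketch (Sajjadi et al. [31]): P and Q are cluster-count histograms
    of the real and generated embeddings. For each threshold lambda on an
    equiangular grid, alpha(lambda) = sum_b min(lambda*P(b), Q(b)) and
    beta(lambda) = sum_b min(P(b), Q(b)/lambda) (Eqs. 11-12)."""
    P = P / P.sum()
    Q = Q / Q.sum()
    angles = np.linspace(1e-6, np.pi / 2 - 1e-6, n_angles)
    lams = np.tan(angles)                 # equiangular grid of thresholds
    alpha = np.array([np.minimum(l * P, Q).sum() for l in lams])
    beta = np.array([np.minimum(P, Q / l).sum() for l in lams])
    return alpha, beta

def max_f_beta(alpha, beta, b):
    """Summarize a PRD curve by the maximal F_b score; (F_8, F_{1/8})
    emphasize recall and precision respectively."""
    f = (1 + b * b) * alpha * beta / np.maximum(b * b * alpha + beta, 1e-12)
    return f.max()

# Toy mode-dropping generator: real mass covers 4 clusters, generated only 2.
P = np.array([1.0, 1.0, 1.0, 1.0])
Q = np.array([2.0, 2.0, 0.0, 0.0])
alpha, beta = prd_curve(P, Q)
f8, f18 = max_f_beta(alpha, beta, 8.0), max_f_beta(alpha, beta, 0.125)
```

In this toy example the generator covers only half the real clusters, so the precision-weighted $F_{1/8}$ stays near 1 while the recall-weighted $F_8$ drops to about 0.5, exactly the asymmetry the pair of scores is designed to expose.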
The exact numerical precision-recall values are not available in [31]; they do provide scatter plots with the $(F_8, F_{1/8})$ pairs of all the models trained in [25]. We computed $(F_8, F_{1/8})$ for the models trained using our method as described in the previous section. The scores were computed using the authors' code. For ease of comparison, we overlay our scores over the scatter plots provided in [31]. The corresponding scores for GLO, with sampling by fitting a Gaussian to the learned latent codes (as suggested in [5]), were much worse on all four datasets.
From Fig. 2 we can observe that our method generally performs better than or competitively with GANs on both precision and recall. On MNIST our method and the best GAN method achieved near-perfect precision-recall. On Fashion our method achieved near-perfect precision-recall while the best GAN method lagged behind. On CIFAR10 the performance of our method was also convincingly better than the best GAN model. On CelebA, our method performed well but did not achieve the top performance, due to the checkerboard issue described in Sec. 4.1. Overall, the performance of our method is typically better than or equal to the baselines examined; this is even more impressive given that the baselines were exhaustively tested over 100 hyperparameter configurations. We also note that our method outperformed VAEs and GLO very convincingly. This provides evidence that our method is far superior to other generator-based non-adversarial models.
[Figure 3: visual comparison of samples generated by IMLE, GLO, GAN and Ours.]
4.3 Qualitative Image Generation Results
We provide qualitative comparisons between our method and the GAN models evaluated by Sajjadi et al. [31]. We also show promising results on high-resolution image generation.
As mentioned above, Sajjadi et al. [31] evaluated different generative models in terms of precision and recall. They provided visual examples of their best performing model (marked as B) for each of the datasets evaluated. In Fig. 3, we provide a visual comparison between random samples generated by our model (without cherry picking) vs. their reported results.
We can observe that on MNIST and FashionMNIST our method and the best GAN method performed very well. The visual examples are diverse and of high visual quality.
On the CIFAR10 dataset, we can observe that our examples are more realistic than those generated by the best GAN model trained by [25]. On CelebA our generated images are very realistic, with many fewer failed generations. Our generated images do suffer from some pixelization (discussed in Sec. 4.1). We note that GANs can generate very high-quality faces (e.g. PGGAN [18]); however, it appears that for the small architecture used by Lucic et al. and Sajjadi et al., GANs do not generate particularly high-quality facial images.
To evaluate the performance of our method on higher-resolution images, we trained our method on the CelebA-HQ dataset. We used the network architecture from Mescheder et al. [26], with separate learning rates for the latent codes, the generator and the noise-to-latent-code mapping function. We trained for 250 epochs, with the learning rates decayed every 10 epochs.
We show some examples of interpolation between two randomly sampled noise vectors in Fig. 4. Several observations can be made from the figures: i) our model is able to generate very high-quality images at high resolutions; ii) the smooth interpolations illustrate that our model generalizes well to unseen images.
To show the ability of our method to scale to even higher resolutions, we present two interpolations at such a resolution in Fig. 5, although we note that not all interpolations at such high resolution were successful.
4.4 ModelNet Chair 3D Generation
4.5 NonAdversarial Unsupervised Image Translation
As generative models are trained in order to be used in downstream tasks, we propose to evaluate generative models by the downstream task of cross-domain unsupervised mapping. NAM [16] was proposed by Hoshen and Wolf for unsupervised domain mapping. The method relies on having a strong unconditional generative model of the output image domain; stronger generative models perform better at this task. This required [16, 13] to use GAN-based unconditional generators. We evaluated our model using the quantitative benchmarks presented in [16]. Our model achieved scores similar to those obtained using the GAN-based unconditional models (although the SVHN result is a bit lower here). GLANN is therefore the first model able to achieve fully unsupervised image translation without the use of GANs.
5 Discussion
Loss function: In this work, we replaced the standard adversarial loss function with a perceptual loss. In practice we use ImageNet-trained VGG features. Zhang et al. [40] claimed that self-supervised perceptual losses work no worse than the ImageNet-trained features. It is therefore likely that our method would achieve similar performance with self-supervised perceptual losses.
Higher resolutions: The increase in resolution was enabled by a simple modification of the loss function: the perceptual loss was calculated both on the original images and on a bilinearly subsampled version of the image. Going up to higher resolutions simply requires more subsampling levels. Research into more sophisticated perceptual losses will probably yield further improvements in synthesis quality.
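A sketch of this multi-scale construction, with 2x average pooling approximating bilinear subsampling and a plain squared error standing in for the VGG perceptual loss:

```python
import numpy as np

def downsample2x(img):
    """2x subsampling by 2x2 average pooling (a common approximation of
    bilinear downsampling for even-sized single-channel images)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def multiscale_loss(x, y, n_scales=3):
    """Accumulate a per-scale loss on the original images and on repeatedly
    subsampled versions; squared error stands in for the perceptual loss."""
    total = 0.0
    for _ in range(n_scales):
        total += np.mean((x - y) ** 2)
        x, y = downsample2x(x), downsample2x(y)
    return total

x, y = np.zeros((16, 16)), np.ones((16, 16))
loss = multiscale_loss(x, y)   # 3.0: unit squared error at each of three scales
```

Each extra subsampling level adds one more term to the sum, which is all that is needed to apply the same loss at a higher starting resolution.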
Other modalities: In this work we focused on image synthesis. We believe that our method can extend to many other modalities, particularly 3D and video. The simplicity of the procedure and its robustness to hyperparameters make application to other modalities much simpler than with GANs. We showed some evidence for this assertion in Sec. 4.4. One research task for future work is finding good perceptual loss functions for domains outside 2D images.
6 Conclusions
In this paper we introduced a novel non-adversarial method for training generative models. Our method combines ideas from GLO and IMLE and overcomes the weaknesses of both methods. When compared on established benchmarks, our method outperformed the most common GAN models, which underwent exhaustive hyperparameter tuning. Our method is robust and simple to train and achieves excellent results. As future work, we plan to extend this work to higher resolutions and new modalities such as video and 3D.
References
 [1] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein gan. In ICLR, 2017.
 [2] S. Arora and Y. Zhang. Do gans actually learn the distribution? an empirical study. arXiv preprint arXiv:1706.08224, 2017.
 [3] S. Barratt and R. Sharma. A note on the inception score. arXiv preprint arXiv:1801.01973, 2018.

 [4] Y. Bengio, G. Mesnil, Y. Dauphin, and S. Rifai. Better mixing via deep representations. In International Conference on Machine Learning, pages 552–560, 2013.
 [5] P. Bojanowski, A. Joulin, D. Lopez-Paz, and A. Szlam. Optimizing the latent space of generative networks. In ICML, 2018.
 [6] Q. Chen and V. Koltun. Photographic image synthesis with cascaded refinement networks. ICCV, 2017.
 [7] X. Chen, X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In NIPS. 2016.
 [8] T. Cover and P. Hart. Nearest neighbor pattern classification. IEEE transactions on information theory, 1967.
 [9] L. Dinh, D. Krueger, and Y. Bengio. Nice: Nonlinear independent components estimation. arXiv preprint arXiv:1410.8516, 2014.
 [10] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, pages 2672–2680, 2014.
 [11] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville. Improved training of wasserstein gans. In NIPS, 2017.
 [12] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter. Gans trained by a two timescale update rule converge to a local nash equilibrium. In Advances in Neural Information Processing Systems, pages 6626–6637, 2017.
 [13] Y. Hoshen. Nonadversarial mapping with vaes. In NIPS, 2018.
 [14] Y. Hoshen and L. Wolf. An iterative closest point method for unsupervised word translation. arXiv preprint arXiv:1801.06126, 2018.
 [15] Y. Hoshen and L. Wolf. NAM: unsupervised cross-domain image mapping without cycles or GANs. In ICLR Workshop, 2018.
 [16] Y. Hoshen and L. Wolf. Nam: Nonadversarial unsupervised domain mapping. In ECCV, 2018.

 [17] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In CVPR, 2017.
 [18] T. Karras, T. Aila, S. Laine, and J. Lehtinen. Progressive growing of gans for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196, 2017.
 [19] D. Kingma and J. Ba. Adam: A method for stochastic optimization. In The International Conference on Learning Representations (ICLR), 2015.
 [20] D. P. Kingma and P. Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. arXiv preprint arXiv:1807.03039, 2018.
 [21] D. P. Kingma and M. Welling. Autoencoding variational bayes. In ICLR, 2014.
 [22] A. Krizhevsky. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
 [23] Y. LeCun and C. Cortes. MNIST handwritten digit database. 2010.
 [24] K. Li and J. Malik. Implicit maximum likelihood estimation. arXiv preprint arXiv:1809.09087, 2018.
 [25] M. Lucic, K. Kurach, M. Michalski, S. Gelly, and O. Bousquet. Are gans created equal? a largescale study. arXiv preprint arXiv:1711.10337, 2017.
 [26] L. Mescheder, S. Nowozin, and A. Geiger. Which training methods for GANs do actually converge? In International Conference on Machine Learning (ICML), 2018.
 [27] T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida. Spectral normalization for generative adversarial networks. In ICLR, 2018.
 [28] A. Odena, V. Dumoulin, and C. Olah. Deconvolution and checkerboard artifacts. Distill, 1(10):e3, 2016.
 [29] A. v. d. Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. Senior, and K. Kavukcuoglu. Wavenet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016.
 [30] A. v. d. Oord, N. Kalchbrenner, and K. Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016.
 [31] M. S. Sajjadi, O. Bachem, M. Lucic, O. Bousquet, and S. Gelly. Assessing generative models via precision and recall. arXiv preprint arXiv:1806.00035, 2018.
 [32] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training gans. In Advances in Neural Information Processing Systems, pages 2234–2242, 2016.
 [33] K. Simonyan and A. Zisserman. Very deep convolutional networks for largescale image recognition. ICLR, 2015.
 [34] I. Tolstikhin, O. Bousquet, S. Gelly, and B. Schoelkopf. Wasserstein autoencoders. In ICLR, 2018.
 [35] J. Wu, C. Zhang, T. Xue, W. T. Freeman, and J. B. Tenenbaum. Learning a probabilistic latent space of object shapes via 3d generativeadversarial modeling. In NIPS, 2016.

 [36] Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao. 3d shapenets: A deep representation for volumetric shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015.
 [37] H. Xiao, K. Rasul, and R. Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017.

 [38] S. Yang, P. Luo, C. C. Loy, and X. Tang. From facial parts responses to face detection: A deep learning approach. In ICCV, pages 3676–3684, 2015.
 [39] H. Zhang, I. Goodfellow, D. Metaxas, and A. Odena. Self-attention generative adversarial networks. arXiv preprint arXiv:1805.08318, 2018.
 [40] R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang. The unreasonable effectiveness of deep features as a perceptual metric. arXiv preprint arXiv:1801.03924, 2018.
 [41] D. Zoran and Y. Weiss. From learning models of natural image patches to whole image restoration. In ICCV, 2011.