Non-Adversarial Image Synthesis with Generative Latent Nearest Neighbors

12/21/2018 ∙ by Yedid Hoshen, et al.

Unconditional image generation has recently been dominated by generative adversarial networks (GANs). GAN methods train a generator which regresses images from random noise vectors, as well as a discriminator that attempts to differentiate between the generated images and a training set of real images. GANs have shown amazing results at generating realistic-looking images. Despite their success, GANs suffer from critical drawbacks, including unstable training and mode-dropping. The weaknesses of GANs have motivated research into alternatives, including variational auto-encoders (VAEs), latent embedding learning methods (e.g. GLO) and nearest-neighbor based implicit maximum likelihood estimation (IMLE). At present, however, GANs still significantly outperform these alternative methods for image generation. In this work, we present a novel method - Generative Latent Nearest Neighbors (GLANN) - for training generative models without adversarial training. GLANN combines the strengths of IMLE and GLO in a way that overcomes the main drawbacks of each method. Consequently, GLANN generates images that are far better than those of GLO and IMLE. Our method does not suffer from the mode collapse which plagues GAN training, and is much more stable. Qualitative results show that GLANN outperforms a baseline consisting of 800 GANs and VAEs on commonly used datasets. Our models are also shown to be effective for training truly non-adversarial unsupervised image translation.


1 Introduction

Generative image modeling is a long-standing goal for computer vision. Unconditional generative models set out to learn functions that generate the entire image distribution given a finite number of training samples. Generative Adversarial Networks (GANs) [10] are a recently introduced technique for image generative modeling. They are used extensively for image generation owing to: i) training effective unconditional image generators, ii) being almost the only method for unsupervised image translation between domains (but see NAM [16]), and iii) being an effective perceptual image loss function (e.g. Pix2Pix [17]).

Along with their obvious advantages, GANs have critical disadvantages: i) GANs are very hard to train. This is expressed by a very erratic progression of training, sudden run collapses, and extreme sensitivity to hyper-parameters. ii) GANs suffer from mode-dropping: the modeling of only some, but not all, of the modes of the target distribution. The birthday paradox can be used to measure the extent of mode dropping [2]: the number of modes modeled by a generator can be estimated by generating a fixed number of images and counting the number of repeated images. Empirical evaluation of GANs found that the number of modes is significantly lower than the number in the training distribution.
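As a concrete illustration, the following minimal PyTorch sketch counts near-duplicate pairs among a batch of generated samples. The generator handle, the sample budget and the near-duplicate threshold tau are hypothetical placeholders; in [2] duplicates are confirmed by visual inspection rather than a fixed pixel-distance threshold.

```python
import torch

def birthday_mode_check(generator, noise_dim, n=1024, tau=1e-3):
    """Count near-duplicate pairs among n generated samples.

    By the birthday paradox, observing duplicates among n samples
    suggests the generator's support is on the order of n**2 modes
    or fewer.
    """
    with torch.no_grad():
        e = torch.randn(n, noise_dim)
        imgs = generator(e).flatten(start_dim=1)
    d = torch.cdist(imgs, imgs)  # pairwise L2 distances between samples
    dup_pairs = torch.triu((d < tau).int(), diagonal=1).sum().item()
    return dup_pairs
```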

The disadvantages of GANs gave rise to research into non-adversarial alternatives for training generative models. GLO [5] and IMLE [24] are two such methods. GLO, introduced by Bojanowski et al., embeds the training images in a low-dimensional space, so that they are reconstructed when the embedding is passed through a jointly trained deep generator. The advantages of GLO are: i) encoding the entire distribution without mode dropping, and ii) a learned latent space that corresponds to semantic image properties, i.e. Euclidean distances between latent codes correspond to semantically meaningful differences. A critical disadvantage of GLO is that there is no principled way to sample new images from it. Although the authors recommend fitting a Gaussian to the latent codes of the training images, this does not result in high-quality image synthesis.

Figure 1: An illustration of our architecture: a random noise vector e is sampled and mapped by the mapping function T to yield a latent code z = T(e). The latent code is projected by the generator G to yield the image I = G(z).

IMLE was proposed by Li and Malik [24] for training generative models by sampling a large number of latent codes from an arbitrary distribution, mapping each to the image domain using a trained generator, and ensuring that for every training image there exists a generated image which is near to it. IMLE is trivial to sample from and does not suffer from mode-dropping. Like other nearest-neighbor methods, IMLE is sensitive to the exact metric used, particularly given that the training set is finite. Recall that while the classic Cover-Hart result [8] tells us that asymptotically the error rate of the nearest neighbor classifier is within a factor of 2 of the Bayes risk, when we use a finite set of exemplars better choices of metric give us better classifier performance. When trained directly on image pixels using an L2 loss, IMLE synthesizes blurry images.

In this work, we present a new technique, Generative Latent Nearest Neighbors (GLANN), which is able to train generative models of comparable or better quality to GANs. Our method overcomes the metric problem of IMLE by first embedding the training images using GLO. The attractive linear properties of the latent space induced by GLO allow the Euclidean metric to be semantically meaningful in the latent space Z. We train an IMLE-based model to map between an arbitrary noise distribution E and the GLO latent space Z. The GLO generator can then map the generated latent codes to pixel space, thus generating an image. Our method GLANN enjoys the best of both IMLE and GLO: easy sampling, modeling the entire distribution, stable training and sharp image synthesis. A schema of our approach is presented in Fig. 1.

We quantitatively evaluate our method using established protocols and find that it significantly outperforms other non-adversarial methods, while being usually better or competitive with current GAN based models. GLANN is also able to achieve promising results on high-resolution image generation and 3D generation. Finally, we show that GLANN-trained models are the first to perform truly non-adversarial unsupervised image translation.

2 Previous Work

Generative Modeling: Generative modeling of images is a long-standing problem of wide applicability. Early approaches included mixtures of Gaussian models (GMMs) [41]. Such methods were very limited in image resolution and quality. Since the introduction of deep learning, deep methods have continually been used for image generative models. Early attempts included Deep Belief Networks (DBNs) (e.g. [4]). DBNs, however, were rather tricky to train and did not scale to high resolutions. Variational Autoencoders (VAEs) [21], introduced by Kingma and Welling, are a significant breakthrough in deep generative modeling. VAEs are able to generate images from the Gaussian distribution by making a variational approximation. This scheme was followed by multiple works, including the recent Wasserstein Autoencoder [34]. Although VAEs are relatively simple to train and have solid theoretical foundations, they generally do not generate sharp images. This is partially due to restrictive assumptions such as a unimodal prior and the requirement for an encoder.

Several other non-adversarial training paradigms exist: generative invertible flows [9], which were recently extended to high resolution [20] but at prohibitive computational cost; and autoregressive image models, e.g. PixelRNN/PixelCNN [30], where pixels are modeled sequentially. Autoregressive models are computationally expensive and underperform adversarial methods, although they are the state of the art in audio generation (e.g. WaveNet [29]).

Adversarial Generative Models: Generative Adversarial Networks (GANs) were first introduced by Goodfellow et al. [10] and are the state-of-the-art method for training generative models. A basic discussion of GANs was given in Sec. 1. GANs have shown a remarkable ability for image generation, but suffer from difficult training and mode dropping. Many methods have been proposed for improving GANs, e.g. changing the loss function (Wasserstein GAN [1]) or regularizing the discriminator to be Lipschitz by clipping [1], gradient regularization [11, 26] or spectral normalization [27]. GAN training was shown to scale to high resolutions [39] using engineering tricks and careful hyper-parameter selection.

Evaluation of Generative Models: Evaluation of generative models is challenging. Early works evaluated generative models using probabilistic criteria (e.g. [41]). More recent generative models (particularly GANs) are not amenable to such evaluation. GAN generations have traditionally been evaluated by visual inspection of a handful of examples or by a user study. Recently, more principled evaluation protocols have emerged. Inception Scores (IS), which take into account both diversity and quality, were first introduced by [32]. FID scores [12] were more recently introduced to overcome major flaws of the IS protocol [3]. Very recently, a method for generative evaluation which is able to capture both precision and recall was introduced by Sajjadi et al. [31]. Due to the hyperparameter sensitivity of GANs, a large-scale study of the performance of different GANs and VAEs was carried out by Lucic et al. [25] over a large search space of 100 different hyperparameter configurations, establishing a common baseline for evaluation.

Non-Adversarial Methods: The disadvantages of GANs motivated research into GAN alternatives. GLO [5], a recently introduced encoder-less generative model which uses a non-adversarial loss function, achieves better results than VAEs. Due to the lack of a good sampling procedure, it does not outperform GANs (see Sec. 3.1). IMLE [24], a method related to ICP, was also introduced for training unconditional generative models; however, due to computational challenges and the choice of metric, it also does not outperform GANs. Chen and Koltun [6] presented a non-adversarial method for supervised image mapping, which in some cases was found to be competitive with adversarial methods. Hoshen and Wolf introduced an ICP-based method [14] for unsupervised word translation which contains no adversarial training; however, this method is not currently able to generate high-quality images. They also presented a non-adversarial method, NAM [15, 16, 13], for unsupervised image mapping. The method relies on having access to a strong unconditional model of the target domain, which is typically trained using GANs.

3 Our method

In this section we present a method - GLANN - for synthesizing high-quality images without using GANs.

3.1 GLO

Classical methods often factorize a set of data points {x_i} via the following decomposition:

x_i = W z_i    (1)

where z_i is a latent code describing x_i, and W is a set of weights. Such factorization is poorly constrained and is typically accompanied by other constraints such as low rank, positivity (NMF) or sparsity. Both W and {z_i} are optimized directly, e.g. by alternating least squares or SVD. The resulting z_i are latent vectors that embed the data in a lower-dimensional and typically better-behaved space. It is often found that attributes become linear operations in the latent space.

GLO [5] is a recently introduced deep method which differs from the above in three aspects: i) constraining all latent vectors z_i to lie on a unit sphere or in a unit ball; ii) replacing the linear matrix W by a deep CNN generator G, which is more suitable for modeling images; iii) using a Laplacian pyramid loss function (but we find that a VGG [33] perceptual loss works better).

The GLO optimization objective is written in Eq. 2:

\arg\min_{G, \{z_i\}} \sum_i \ell(G(z_i), x_i)    (2)

Bojanowski et al. [5] implement the loss as a Laplacian pyramid. All weights are trained by SGD (including the generator weights and a latent vector z_i per training image x_i). After training, the result is a generator G and a latent embedding z_i of each training image x_i.
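For concreteness, a minimal PyTorch sketch of this joint optimization follows. The generator module, the reconstruction loss loss_fn (a Laplacian pyramid loss in [5], a VGG perceptual loss in our setting), and defaults such as the batch size and epoch count are illustrative placeholders supplied by the caller.

```python
import torch

def train_glo(generator, loss_fn, images, code_dim, epochs=25, lr=0.01):
    # One learnable latent code per training image, optimized jointly
    # with the generator weights (Eq. 2).
    n = images.shape[0]
    z = torch.randn(n, code_dim).requires_grad_(True)
    opt = torch.optim.SGD([z, *generator.parameters()], lr=lr)
    for _ in range(epochs):
        for idx in torch.randperm(n).split(64):
            opt.zero_grad()
            loss = loss_fn(generator(z[idx]), images[idx])
            loss.backward()
            opt.step()
            with torch.no_grad():
                # Project the updated codes back into the unit ball.
                z[idx] /= z[idx].norm(dim=1, keepdim=True).clamp(min=1.0)
    return z.detach()  # the learned embedding of the training images
```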

3.2 IMLE

IMLE [24] is a recent non-adversarial technique that maps between distributions using a maximum likelihood criterion. Each epoch of IMLE consists of the following stages: i) M random latent codes e_1, ..., e_M are sampled from a normal distribution; ii) the latent codes are mapped by the generator, resulting in images G(e_1), ..., G(e_M); iii) for each training example x_i, the nearest generated image is found, such that j_i = \arg\min_j \| G(e_j) - x_i \|^2; iv) G is optimized using the nearest neighbors as approximate correspondences, G = \arg\min_G \sum_i \| G(e_{j_i}) - x_i \|^2. This procedure is repeated until the convergence of G.
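A minimal PyTorch sketch of one such epoch is given below. The generator, the optimizer and the sample budget m are illustrative placeholders, and the nearest-neighbor search is done in one shot here rather than in the memory-friendly chunks a real implementation would use.

```python
import torch

def imle_epoch(generator, opt, images, noise_dim, m=10000):
    reals = images.flatten(start_dim=1)
    with torch.no_grad():
        e = torch.randn(m, noise_dim)                      # stage i
        fakes = generator(e).flatten(start_dim=1)          # stage ii
        nn_idx = torch.cdist(reals, fakes).argmin(dim=1)   # stage iii
    opt.zero_grad()                                        # stage iv
    loss = ((generator(e[nn_idx]) - images) ** 2).mean()   # L2 in pixel space
    loss.backward()
    opt.step()
    return loss.item()
```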

3.3 Limitations of GLO and IMLE

The main limitation of GLO is that the generator is not trained to sample from any known distribution, i.e. the distribution of the latent codes z is unknown and we cannot directly sample from it. When sampling latent variables from a normal distribution or when fitting a Gaussian to the training-set latent codes (as advocated in [5]), generations that are of much lower quality than GANs are usually obtained. This prevents GLO from being competitive with GANs.

Although sampling from an IMLE-trained generator is trivial, training is not: a good metric might not be known, and the nearest-neighbor computation and feature extraction for each random noise generation are costly. IMLE typically results in blurry image synthesis.

3.4 GLANN: Generative Latent Nearest Neighbor

We present a method - GLANN - that overcomes the weaknesses of both GLO and IMLE. GLANN consists of two stages: i) embedding the high-dimensional image space into a "well-behaved" latent space using GLO; ii) mapping between an arbitrary distribution (typically a multi-dimensional normal distribution) and the low-dimensional latent space using IMLE.

3.4.1 Stage 1: Latent embedding

Images are high-dimensional and distances between them in pixel space might not be meaningful. This makes IMLE and the use of simple metric functions such as L1 or L2 less effective in pixel space. In some cases perceptual features may be found under which distances make sense; however, they are high-dimensional and expensive to compute.

Instead, our method first embeds the training images in a low-dimensional space using GLO. Differently from the GLO algorithm, we use a VGG perceptual loss function. The optimization objective is written in Eq. 3:

\arg\min_{G, \{z_i\}} \sum_i \ell_{percep}(G(z_i), x_i)    (3)

All parameters are optimized directly by SGD. By the end of training, the training images are embedded by the low-dimensional latent codes {z_i}. The latent space Z enjoys convenient properties such as linearity. A significant benefit of this space is that a Euclidean metric in Z can typically yield more semantically meaningful results than one over raw image pixels.

3.4.2 Stage 2: Sampling from the latent space

GLO replaced the problem of sampling from image pixels by the problem of sampling from Z, without offering an effective sampling algorithm. Although the original paper suggests fitting a Gaussian to the training latent vectors {z_i}, this typically does not result in good generations. Instead, we propose learning a mapping T from a distribution E from which sampling is trivial (e.g. multivariate normal) to the empirical latent code distribution, using IMLE.

At the beginning of each epoch, we sample a set of random noise codes e_1, ..., e_M from the noise distribution. Each one of the codes is mapped using the mapping function T to the latent space:

z_j = T(e_j)    (4)

During the epoch, our method iteratively samples a minibatch of latent codes from the set {z_i} computed in Stage 1. For each latent code z_i, we find the nearest-neighbor mapped noise vector (using a Euclidean distance metric):

j_i = \arg\min_j \| T(e_j) - z_i \|^2    (5)

The approximate matches can now be used for finetuning the mapping function T:

T = \arg\min_T \sum_i \| T(e_{j_i}) - z_i \|^2    (6)

This procedure is repeated until the convergence of T. It was shown theoretically by Li and Malik [24] that this method achieves a form of maximum likelihood estimation.
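A minimal PyTorch sketch of this stage follows, under the same caveats as before: the sample budget, batch size and optimizer settings are illustrative placeholders.

```python
import torch

def fit_mapping(T, glo_codes, noise_dim, epochs=20, m=10000, lr=1e-3):
    opt = torch.optim.Adam(T.parameters(), lr=lr)
    for _ in range(epochs):
        e = torch.randn(m, noise_dim)  # noise codes for this epoch
        with torch.no_grad():
            # Eq. 5: nearest mapped noise vector for every GLO latent code.
            nn_idx = torch.cdist(glo_codes, T(e)).argmin(dim=1)
        for idx in torch.randperm(len(glo_codes)).split(256):
            opt.zero_grad()
            # Eq. 6: pull each matched T(e_j) towards its GLO code.
            loss = ((T(e[nn_idx[idx]]) - glo_codes[idx]) ** 2).mean()
            loss.backward()
            opt.step()
    return T
```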

3.4.3 Sampling new images

Synthesizing new images is now a simple task: we first sample a noise vector e from the multivariate normal distribution. The new sample is mapped to the latent code space:

z = T(e)    (7)

By our previous optimization, T was trained such that the latent code z lies close to the data manifold. We can therefore project the latent code to image space by our GLO-trained generator G:

I = G(z) = G(T(e))    (8)

I will appear to come from the distribution of the input images {x_i}.

It is also possible to invert this transformation by optimizing for the noise vector e given an image I:

e^* = \arg\min_e \ell(G(T(e)), I)    (9)
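Both directions reduce to a few lines of PyTorch. In this sketch T and G stand for the trained mapping and generator, and the inversion loss defaults to a plain L2 reconstruction (a perceptual loss can be substituted); the step count and learning rate are illustrative.

```python
import torch

def sample_images(T, G, noise_dim, n=16):
    # Eqs. 7-8: noise -> latent code -> image.
    with torch.no_grad():
        return G(T(torch.randn(n, noise_dim)))

def invert_image(T, G, image, noise_dim, steps=500, lr=0.1):
    # Eq. 9: find the noise vector whose generation best reconstructs `image`.
    e = torch.randn(1, noise_dim, requires_grad=True)
    opt = torch.optim.Adam([e], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((G(T(e)) - image) ** 2).mean()
        loss.backward()
        opt.step()
    return e.detach()
```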
Table 1: Quality of generation measured by FID (lower is better) on MNIST, Fashion, CIFAR10 and CelebA, comparing adversarial models (MM GAN, NS GAN, LSGAN, WGAN, BEGAN) against non-adversarial models (VAE, GLO, Ours).

4 Experiments

To evaluate the performance of our proposed method, we perform quantitative and qualitative experiments comparing our method against established baselines.

4.1 Quantitative Image Generation Results

In order to compare the quality of our results against representative adversarial methods, we evaluate our method using the protocol established by Lucic et al. [25]. This protocol fixes the architecture of all generative models to be that of InfoGAN [7]. It evaluates representative adversarial models (DCGAN, LSGAN, NSGAN, WGAN, WGAN-GP, DRAGAN, BEGAN) and a single non-adversarial model (VAE). In [25], significant computational resources are used to evaluate the performance of each method over a set of 100 hyper-parameter settings, e.g. learning rate, regularization and presence of batch norm.

Finding good evaluation metrics for generative models is an active research area. Lucic et al. argue that the previously used Inception Score (IS) is not a good evaluation metric, as the maximal IS score is obtained by synthesizing a single image from every class. Instead, they advocate using the Frechet Inception Distance (FID) [12]. FID measures the similarity of the distributions of real and generated images in two steps: i) running the Inception network as a feature extractor to embed each of the real and generated images; ii) fitting a multivariate Gaussian to the real and generated embeddings separately, to yield means \mu_r, \mu_g and covariances \Sigma_r, \Sigma_g for the real and generated distributions respectively. The FID score is then computed as in Eq. 10:

FID = \| \mu_r - \mu_g \|^2 + \mathrm{Tr}(\Sigma_r + \Sigma_g - 2 (\Sigma_r \Sigma_g)^{1/2})    (10)
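Given Inception embeddings of the two image sets, the score is straightforward to compute; a NumPy/SciPy sketch:

```python
import numpy as np
from scipy import linalg

def fid(real_feats, gen_feats):
    # real_feats, gen_feats: (N, d) arrays of Inception embeddings.
    mu_r, mu_g = real_feats.mean(axis=0), gen_feats.mean(axis=0)
    sigma_r = np.cov(real_feats, rowvar=False)
    sigma_g = np.cov(gen_feats, rowvar=False)
    covmean = linalg.sqrtm(sigma_r @ sigma_g)  # matrix square root (Eq. 10)
    if np.iscomplexobj(covmean):               # discard tiny imaginary parts
        covmean = covmean.real
    return float(((mu_r - mu_g) ** 2).sum()
                 + np.trace(sigma_r + sigma_g - 2 * covmean))
```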

Lucic et al. evaluate the baselines on standard public datasets: MNIST [23], Fashion-MNIST [37], CIFAR10 [22] and CelebA [38]. MNIST, Fashion-MNIST and CIFAR10 contain 50k training images and 10k validation images. MNIST and Fashion are 28x28 grayscale, while CIFAR10 is 32x32 color.

For a fair comparison with these baselines, we use the same generator architecture used by Lucic et al. for our GLO model. We do not have a discriminator; instead, we use a VGG perceptual loss. Also differently from the methods tested by Lucic et al., we train an additional network T for IMLE sampling from the noise space to the latent space. In our implementation, T has two dense layers with ReLU and BatchNorm. GLANN actually uses fewer parameters than the baselines by not using a discriminator. Our method was trained with ADAM [19]. For each component (the mapping network, the latent codes and the generator) we used the highest learning rate that allowed convergence, with a lower latent-code rate for CelebA and the generator rate set relative to the latent-code rate. GLO training ran for a fixed number of epochs with the learning rate decayed every 50 epochs, followed by training of the mapping network.
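The mapping network itself is tiny; a PyTorch sketch is given below. The hidden width (elided above) is an assumed placeholder, not the tuned value from our experiments.

```python
import torch.nn as nn

def make_mapping(noise_dim, code_dim, hidden=256):
    # Two dense layers with ReLU and BatchNorm, as described above.
    # hidden=256 is a placeholder for the tuned hyperparameter.
    return nn.Sequential(
        nn.Linear(noise_dim, hidden),
        nn.BatchNorm1d(hidden),
        nn.ReLU(),
        nn.Linear(hidden, code_dim),
    )
```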

Tab. 1 presents a comparison of the FID achieved by our method and those reported by Lucic et al. We removed DRAGAN and WGAN-GP for space considerations (their performance was similar to that of the other methods). The results for GLO were obtained by fitting a Gaussian to the learned latent codes (as suggested in [5]).

On Fashion and CIFAR10, our method significantly outperforms all baselines, despite using just a single hyper-parameter setting. Our method is competitive on MNIST, although it does not reach the top performance; as most methods performed very well on this task, we do not think it has much discriminative power. We found that a few other methods outperformed ours in terms of FID on CelebA, due to checkerboard patterns in our generated images. This is a well-known phenomenon of deconvolutional architectures [28], which are now considered outdated. In Sec. 4.3, we show high-quality CelebA-HQ facial images generated by our method when trained using modern architectures.

Our method always significantly outperforms the VAE and GLO baselines, which are strong representatives of non-adversarial methods. One of the main messages of [25] was that GAN methods require a significant hyperparameter search to achieve good performance. Our method was shown to be very stable, achieving strong performance (top on two datasets) with a fixed hyperparameter setting. An extensive hyperparameter search can potentially further increase the performance of our method; we leave this to future work.

Figure 2: Precision-recall measured by (F_8, F_1/8) for 4 datasets: MNIST, Fashion, CIFAR10 and CelebA. The plots were reported by [31]. We mark the results of our model for each dataset by a star on the relevant plot.

4.2 Evaluation of Precision and Recall

FID is effective at measuring precision, but not recall. We therefore also opt for the evaluation metric recently presented by Sajjadi et al. [31], which they name PRD. PRD first embeds an equal number of generated and real images using the Inception network. All image embeddings (real and generated) are concatenated and clustered into B bins. Histograms P(\omega), Q(\omega) are computed over the number of images in each cluster \omega from the real and generated data respectively. Precision (\alpha) and recall (\beta) are defined:

\alpha(\lambda) = \sum_\omega \min(\lambda P(\omega), Q(\omega))    (11)
\beta(\lambda) = \sum_\omega \min(P(\omega), Q(\omega) / \lambda)    (12)

The set of pairs (\alpha(\lambda), \beta(\lambda)) forms the precision-recall curve (the threshold \lambda is sampled from an equiangular grid). The precision-recall curve is summarized by a variation of the F_1 score, F_\beta, which is able to assign greater importance to precision or recall. Specifically, (F_8, F_1/8) are used for capturing (recall, precision).
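A sketch of the PRD computation (NumPy/scikit-learn) follows; the bin count and grid size are illustrative defaults rather than the exact values used in [31].

```python
import numpy as np
from sklearn.cluster import KMeans

def prd_curve(real_emb, gen_emb, bins=20, n_angles=1001):
    # Cluster the pooled embeddings, then histogram each set over the bins.
    labels = KMeans(n_clusters=bins, n_init=10).fit_predict(
        np.concatenate([real_emb, gen_emb]))
    P = np.bincount(labels[:len(real_emb)], minlength=bins) / len(real_emb)
    Q = np.bincount(labels[len(real_emb):], minlength=bins) / len(gen_emb)
    # Sweep lambda over an equiangular grid (Eqs. 11-12).
    lam = np.tan(np.linspace(1e-6, np.pi / 2 - 1e-6, n_angles))[:, None]
    alpha = np.minimum(lam * P, Q).sum(axis=1)   # precision
    beta = np.minimum(P, Q / lam).sum(axis=1)    # recall
    return alpha, beta
```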

The exact numerical precision-recall values are not provided in [31]; instead, they provide scatter plots with the (F_8, F_1/8) pairs of all models trained in [25]. We computed (F_8, F_1/8) for the models trained using our method as described in the previous section, using the authors' code. For ease of comparison, we overlay our scores over the scatter plots provided in [31] (Fig. 2). The scores for GLO with sampling by fitting a Gaussian to the learned latent codes (as suggested in [5]) were much worse on all four datasets.

From Fig. 2 we can observe that our method generally performs better than or competitively with GANs in terms of both precision and recall. On MNIST, our method and the best GAN method achieved near-perfect precision-recall. On Fashion, our method achieved near-perfect precision-recall while the best GAN method lagged behind. On CIFAR10, the performance of our method was also convincingly better than that of the best GAN model. On CelebA, our method performed well but did not achieve the top performance, due to the checkerboard issue described in Sec. 4.1. Overall, the performance of our method is typically better than or equal to the baselines examined; this is even more impressive in view of the baselines being exhaustively tested over 100 hyperparameter configurations. We also note that our method outperformed VAEs and GLO very convincingly. This provides evidence that our method is far superior to other generator-based non-adversarial models.

Figure 3: Comparison of synthesis by IMLE [24], GLO [5], GAN [25] and our method (columns: IMLE, GLO, GAN, Ours). First row: MNIST, second row: Fashion, third row: CIFAR10, last row: CelebA64. The missing IMLE images were not reported in [24]. The GAN results are taken from [25], corresponding to the best of the generative models evaluated there by the precision-recall metric.
Figure 4: Interpolation on CelebA-HQ at 256x256 resolution. The rightmost and leftmost images are generated from randomly sampled noise vectors. The interpolations are smooth and of high visual quality.
Figure 5: Interpolation on CelebA-HQ at 1024x1024 resolution.

4.3 Qualitative Image Generation Results

We provide qualitative comparisons between our method and the GAN models evaluated by Sajjadi et al. [31]. We also show promising results on high-resolution image generation.

As mentioned above, Sajjadi et al. [31] evaluated different generative models in terms of precision and recall. They provided visual examples of their best performing model (marked as B) for each of the datasets evaluated. In Fig. 3, we provide a visual comparison between random samples generated by our model (without cherry picking) vs. their reported results.

We can observe that on MNIST and Fashion-MNIST our method and the best GAN method performed very well. The visual examples are diverse and of high visual quality.

On the CIFAR10 dataset, we can observe that our examples are more realistic than those generated by the best GAN model trained by [25]. On CelebA our generated images are very realistic, with many fewer failed generations. Our generated images do suffer from some pixelization (discussed in Sec. 4.1). We note that GANs can generate very high-quality faces (e.g. PGGAN [18]); however, it appears that for the small architecture used by Lucic et al. and Sajjadi et al., GANs do not generate particularly high-quality facial images.

To evaluate the performance of our method on higher-resolution images, we trained our method on the CelebA-HQ dataset at 256x256 resolution, using the network architecture from Mescheder et al. [26]. We trained with separate learning rates for the latent codes, the generator and the noise-to-latent mapping function, again choosing the highest rates that allowed convergence. We trained for 250 epochs, with the learning rate decayed every 10 epochs.

We show some examples of interpolation between two randomly sampled noise vectors in Fig. 4. Several observations can be made from the figures: i) our model is able to generate very high-quality images at high resolutions; ii) the smooth interpolations illustrate that our model generalizes well to unseen images.

To show the ability of our method to scale to 1024x1024, we present two interpolations at this resolution in Fig. 5, although we note that not all interpolations at such a high resolution were successful.

4.4 ModelNet Chair 3D Generation

Figure 6: Examples of 3D chairs generated by GLANN

To further illustrate the scope of GLANN, we present preliminary results for 3D generation on the Chairs category of ModelNet [36]. The generator follows the 3DGAN architecture from [35]. GLANN was trained with ADAM and a reconstruction loss. Some GLANN-generated 3D samples are presented in Fig. 6.

4.5 Non-Adversarial Unsupervised Image Translation

As generative models are trained in order to be used in downstream tasks, we propose to evaluate generative models by the downstream task of cross-domain unsupervised mapping. NAM [16] was proposed by Hoshen and Wolf for unsupervised domain mapping. The method relies on having a strong unconditional generative model of the output image domain, and stronger generative models perform better at this task. This required [16, 13] to use GAN-based unconditional generators. We evaluated our model using the quantitative benchmarks presented in [16]. The results are similar to those obtained using the GAN-based unconditional models (although the SVHN benchmark scores a bit lower here). GLANN is therefore the first model able to achieve fully unsupervised image translation without the use of GANs.

5 Discussion

Loss function: In this work, we replaced the standard adversarial loss function by a perceptual loss. In practice we use ImageNet-trained VGG features. Zhang et al. [40] claimed that self-supervised perceptual losses work no worse than ImageNet-trained features. It is therefore likely that our method would perform similarly with self-supervised perceptual losses.

Higher resolution: The increase in resolution from 64x64 to 256x256 or 1024x1024 was enabled by a simple modification of the loss function: the perceptual loss was calculated both on the original images and on a bilinearly subsampled version of the image. Going up to higher resolutions simply requires more subsampling levels. Research into more sophisticated perceptual losses will probably yield further improvements in synthesis quality.
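A sketch of this multi-scale loss, where percep stands in for the single-scale VGG perceptual loss and the level count is a caller-chosen parameter:

```python
import torch.nn.functional as F

def multiscale_perceptual_loss(percep, generated, target, levels=2):
    # Sum the perceptual loss over the original images and bilinearly
    # subsampled versions; more levels support higher output resolutions.
    loss = percep(generated, target)
    for _ in range(levels):
        generated = F.interpolate(generated, scale_factor=0.5,
                                  mode="bilinear", align_corners=False)
        target = F.interpolate(target, scale_factor=0.5,
                               mode="bilinear", align_corners=False)
        loss = loss + percep(generated, target)
    return loss
```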

Other modalities: In this work we focused on image synthesis. We believe that our method can extend to many other modalities, particularly 3D and video. The simplicity of the procedure and its robustness to hyperparameters make application to other modalities much simpler than with GANs. We showed some evidence for this assertion in Sec. 4.4. One research task for future work is finding good perceptual loss functions for domains outside 2D images.

6 Conclusions

In this paper we introduced a novel non-adversarial method for training generative models. Our method combines ideas from GLO and IMLE and overcomes the weaknesses of both methods. When compared on established benchmarks, our method outperformed the most common GAN models, which underwent exhaustive hyperparameter tuning. Our method is robust and simple to train and achieves excellent results. As future work, we plan to extend this work to higher resolutions and new modalities such as video and 3D.

References