GDPP
Improved generator loss that reduces mode collapse and improves the quality of generated samples.
Generative models have proven to be an outstanding tool for representing high-dimensional probability distributions and generating realistic-looking images. A fundamental characteristic of generative models is their ability to produce multimodal outputs. However, while training, they are often susceptible to mode collapse, meaning that the model maps the input noise to only a few modes of the true data distribution. In this paper, we draw inspiration from the Determinantal Point Process (DPP) to devise a generative model that alleviates mode collapse while producing higher-quality samples. DPP is an elegant probabilistic measure used to model negative correlations within a subset and hence quantify its diversity. We use the DPP kernel to model the diversity in real data as well as in synthetic data. Then, we devise a generation penalty term that encourages the generator to synthesize data with diversity similar to that of real data. In contrast to previous state-of-the-art generative models, which tend to use additional trainable parameters or complex training paradigms, our method does not change the original training scheme. Embedded in adversarial training and in a variational autoencoder, our Generative DPP approach shows consistent resistance to mode collapse on a wide variety of synthetic data and natural image datasets, including MNIST, CIFAR-10, and CelebA, while outperforming state-of-the-art methods in data efficiency, convergence time, and generation quality. Our code is publicly available.
Deep generative models have gained great research interest in recent years as a powerful framework for representing high-dimensional data in an unsupervised fashion. Among many generative approaches, Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) and Variational Auto-Encoders (VAEs) (Kingma & Welling, 2013) are among the most prominent for synthesizing realistic images. They consist of two networks: a generator (decoder) and a discriminator (encoder), where the generator attempts to map latent codes to fake data points that simulate the distribution of real data. Nevertheless, in the process of learning multimodal complex distributions, both models may converge to a trivial solution where the generator learns to produce only a few modes exclusively, which is referred to as mode collapse.
To address this, we propose using Determinantal Point Processes (DPPs) to model the diversity within data samples. DPP is a probabilistic model that has mainly been adopted for solving subset selection problems with diversity constraints (Kulesza & Taskar, 2011)
, such as video and document summarization. In such cases, representative sampling requires quantifying the diversity of all $2^N$ possible subsets, where $N$ is the size of the ground set. This renders DPP sampling from true data computationally inefficient in the generation domain. The key idea of our work is to model the diversity within real and fake data throughout the training process using DPP kernels, which adds an insignificant computational overhead. Then, we encourage the generator to produce samples with a diversity distribution similar to that of the true data by backpropagating our proposed DPP-inspired metric through the generator. In this way, the generator explicitly learns to cover more modes of the real distribution without significant overhead.
Recent approaches have tackled mode collapse in one of two ways: (1) modifying the learning of the system to reach a better convergence point (e.g., Metz et al., 2017; Gulrajani et al., 2017); or (2) explicitly enforcing the models to capture diverse modes or to map back to the true-data distribution (e.g., Srivastava et al., 2017; Che et al., 2017). Here we focus on a relaxed version of the latter: we use the same learning paradigm as standard generators and add a penalty term to the objective function. The advantage of this approach is that it avoids adding extra trainable parameters to the framework while maintaining the same backpropagation steps as the default learning paradigm. Thus, our model converges faster to a fair equilibrium point where the generator imitates the diversity of the true-data distribution and produces higher-quality generations.
Contribution. We introduce a new penalty term, which we denote the Generative Determinantal Point Processes (GDPP) loss. Our loss only assumes access to a generator $G$ and a feature extraction function $\phi(\cdot)$. The loss encourages the generator to diversify generated samples to match the diversity of real data, as illustrated in Fig. 1. This criterion can be considered a complement to the original generation loss, which attempts to learn a distribution indistinguishable from the true-data distribution without explicitly enforcing diversity. We assess the performance of GDPP on three different synthetic data environments, while also verifying its advantage on three real-world image datasets. Our approach consistently outperforms several state-of-the-art approaches with more complex learning paradigms in terms of alleviating mode collapse and generation quality.
Among many existing generation frameworks, GANs tend to synthesize the highest-quality generations; however, they are harder to optimize due to unstable training dynamics. Here, we discuss a few generic approaches addressing mode collapse, with an emphasis on GANs, categorized by how they alleviate mode collapse.
The works of (Donahue et al., 2017; Dumoulin et al., 2017) are among the earliest methods that proposed learning a reconstruction network besides the generative network. Adding this extra network to the framework aims at reversing the action of the generator by mapping from data to noise. Likelihood-free variational inference (LFVI) (Tran et al., 2017) merges this concept with learning implicit densities using hierarchical Bayesian modeling. Ultimately, VEEGAN (Srivastava et al., 2017) used the same concept, but without basing the reconstruction loss on the discriminator. This has the advantage of isolating the generation process from the discriminator's sensitivity to any of the modes. Along similar lines, (Che et al., 2017) proposed several ways of regularizing the objective of adversarial learning, including a geometric metric regularizer, a mode regularizer, and manifold-diffusion training. In particular, mode regularization has shown potential for alleviating mode collapse and stabilizing training.
InfoGAN (Chen et al., 2016) proposes an information-theoretic extension of GANs that obtains disentangled representations of data through latent-code reconstitution via a penalty term in its objective. InfoGAN includes an autoencoder over the latent codes; however, it was shown to have stability problems similar to the standard GAN and requires empirical stabilization tricks. The Unrolled-GAN of (Metz et al., 2017) proposes a novel objective that updates the generator with respect to the unrolled optimization of the discriminator. This allows training to be adjusted between using the current and the optimal discriminator in the generator's objective, which has been shown to improve the generator training process and to reduce mode collapse. The Generalized LS-GAN of (Edraki & Qi, 2018) defines a pullback operator to map generated samples to the data manifold. With a similar philosophy, BourGAN (Xiao et al., 2018) draws samples from a mixture of Gaussians instead of a single Gaussian, yet without any specific enforcement to diversify samples. Finally, improving on the Wasserstein GAN of (Arjovsky et al., 2017), WGAN-GP (Gulrajani et al., 2017) introduces a gradient penalty that is employed in state-of-the-art systems (Karras et al., 2018).
A popular way to reduce mode collapse is to use multiple generator networks to provide better coverage of the true data distribution. (Liu & Tuzel, 2016) propose using two generators with shared parameters to learn the joint data distribution. The two generators are trained independently on two domains to ensure diverse generation; however, sharing the parameters guides both generators to a similar subspace. (Durugkar et al., 2017) propose the related idea of an ensemble of multiple discriminators, which was shown to produce better-quality samples. Recently, (Ghosh et al., 2018) proposed MAD-GAN, a multi-agent GAN architecture incorporating multiple generators and one discriminator. Along with distinguishing real from fake samples, the discriminator also learns to identify the generator that synthesized a given fake sample. Training such a system forces different generators to learn unique modes, which helps cover more data modes. The D2GAN of (Nguyen et al., 2017) improves diversity within GANs at the additional cost of training two discriminators. The mixed-batch GAN approach of (Lucas et al., 2018) instead introduces a permutation-invariant architecture for the discriminator, which doubles the number of parameters. In contrast to these approaches, our GDPP-GAN does not require any extra trainable parameters, which results in faster training and less susceptibility to overfitting.
Finally, we also mention PacGAN (Lin et al., 2018), which modifies the discriminator input by concatenating samples to better capture the diversity within real data. Nevertheless, such an approach is subject to memory and computational constraints as a result of the significant increase in effective batch size. Additionally, spectral normalization strategies have recently been proposed in (Miyato et al., 2018) and SAGAN (Zhang et al., 2018) to further stabilize training. We note that these strategies are orthogonal to our contribution and could be used in conjunction with ours to further improve the training stability of generative models.
DPP is a probabilistic measure that was introduced in quantum physics (Macchi, 1975) to model the Gauss-Poisson and 'fermion' processes, and was later extensively studied in random matrix theory, e.g., (Hough et al., 2006). It provides a tractable and efficient means to capture negative correlation with respect to a similarity measure, which in turn can be used to quantify the diversity within a subset. As pointed out by (Gong et al., 2014), DPP is agnostic to the order of items within subsets. Hence, it can be used to model data that is randomly sampled from a distribution, such as mini-batches sampled from training data.
A point process $\mathcal{P}$ on a ground set $\mathcal{Y} = \{1, \ldots, N\}$ is a probability measure on the power set $2^{\mathcal{Y}}$, where $N$ is the size of the ground set. A point process is called determinantal if, given a random subset $Y$ drawn according to $\mathcal{P}$, we have for every $A \subseteq Y$:
(1)  $P(A \subseteq Y) = \det(K_A)$
for some symmetric similarity kernel $K$, where $K_A$ is the similarity kernel of subset $A$. $K$ must be a real positive semidefinite matrix with all eigenvalues between 0 and 1, since it represents a probabilistic measure and all of its principal minors must be non-negative.
$K$ is often referred to as the marginal kernel because it contains all the information needed to compute the probability of any subset being included in $Y$. $K_A \equiv [K_{ij}]_{i,j \in A}$ denotes the submatrix of $K$ indexed by the elements of $A$. Hence, the marginal probability of including one element $i$ is $P(i \in Y) = K_{ii}$, and of two elements $i$ and $j$ it is $P(\{i, j\} \subseteq Y) = K_{ii}K_{jj} - K_{ij}^2$. A large value of $K_{ij}$ reduces the likelihood of both elements appearing together in a diverse subset.
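As a concrete illustration of these marginals, the sketch below evaluates them on a small hypothetical marginal kernel; all values are made up for the example.

```python
import numpy as np

# Toy marginal kernel K for a ground set of 3 items (hypothetical values).
# K must be real, symmetric, PSD, with all eigenvalues in [0, 1].
K = np.array([[0.5, 0.3, 0.0],
              [0.3, 0.5, 0.0],
              [0.0, 0.0, 0.4]])

# Marginal probability of a single element i: P(i in Y) = K_ii
p_0 = K[0, 0]                                     # 0.5

# Marginal probability of a pair {i, j}: determinant of the 2x2
# submatrix K_A, i.e. K_ii * K_jj - K_ij^2
p_01 = np.linalg.det(K[np.ix_([0, 1], [0, 1])])   # 0.25 - 0.09 = 0.16

# The uncorrelated pair {0, 2} is more likely to co-occur than the
# similar pair {0, 1}: a large K_ij suppresses co-occurrence.
p_02 = np.linalg.det(K[np.ix_([0, 2], [0, 2])])   # 0.20
```

Note how the determinant directly implements the negative correlation: the more similar two items are (larger off-diagonal entry), the smaller the probability of sampling both.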
(Kulesza & Taskar, 2010) proposed decomposing the kernel $L$ as a Gram matrix:
(2)  $L_{ij} = q_i \, \phi_i^{\top} \phi_j \, q_j$
where $q_i \geq 0$ can be seen as a quality score of item $i$ in the ground set $\mathcal{Y}$, while $\phi_i \in \mathbb{R}^D$ with $\|\phi_i\|_2 = 1$ is used as an $\ell_2$-normalized feature vector of the item. In this manner, $\phi_i^{\top}\phi_j \in [-1, 1]$ is evaluated as a "normalized similarity" between items $i$ and $j$ of $\mathcal{Y}$, and the kernel $L$ is guaranteed to be a real positive semidefinite matrix. Moreover, $\det(L) = \prod_i \lambda_i$, where $\lambda_i$ is the $i$-th eigenvalue of $L$, and $\lambda_i \geq 0$ since the kernel is positive semidefinite. Hence, we may visualize that DPP models diverse representations of data because the determinant of $L$ corresponds to a volume in $\mathbb{R}^D$, which is equivalent to the product of the data variances (i.e., the eigenvalues).
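A minimal sketch of this decomposition, with made-up quality scores and random $\ell_2$-normalized features, confirming that the determinant of the Gram-form kernel equals the product of its eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground set of 3 items with D = 3 features each,
# rows l2-normalized as Eq. (2) assumes.
phi = rng.normal(size=(3, 3))
phi /= np.linalg.norm(phi, axis=1, keepdims=True)

q = np.array([1.0, 0.8, 0.6])                  # per-item quality scores
L = (q[:, None] * (phi @ phi.T)) * q[None, :]  # L_ij = q_i <phi_i, phi_j> q_j

# det(L) is the product of the eigenvalues: the squared volume spanned
# by the scaled feature vectors, i.e. a measure of the set's diversity.
eigvals = np.linalg.eigvalsh(L)
assert np.allclose(np.prod(eigvals), np.linalg.det(L))
```

If two feature vectors become nearly parallel, one eigenvalue approaches zero and the determinant (the spanned volume) collapses, which is exactly the sense in which $\det(L)$ quantifies diversity.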
DPP in the literature: DPP has proven to be a valuable tool for enforcing diversity in problems such as document summarization (e.g., Kulesza & Taskar, 2011; Hong & Nenkova, 2014), pose estimation (e.g., Gupta, 2015), and video summarization (e.g., Gong et al., 2014; Mahasseni et al., 2017). For instance, (Zhang et al., 2016) proposed learning the two parameters of Eq. 2 from spatiotemporal features of a video to quantify the diversity of the kernel for summarization. Recently, (Hsiao & Grauman, 2018) proposed using DPP to automatically create capsule wardrobes, i.e., assemble a minimal set of items that provide maximal mix-and-match outfits given an inventory of candidate garments.
Our GDPP loss encourages the generator to sample fake data with diversity similar to that of real data. The key challenge is to model the diversity within real and fake data. We discussed in Sec. 3 how DPP can be used to quantify the diversity within a discrete data distribution. Unlike subset selection problems (e.g., document/video summarization), in the generation domain we are not merely interested in increasing diversity within generated samples. Only increasing sample diversity would result in samples that are far apart in the generation domain, but not necessarily representative of real-data diversity. Instead, we aim to generate samples that imitate the diversity of real data. Thus, we construct a DPP kernel for both the real data and the generated samples at every iteration of the training process, as shown in Fig. 2. Then, we encourage the generator to synthesize samples whose diversity kernel is similar to that of the training data. To simplify learning the kernels, we match the eigenvalues and eigenvectors of the fake-data DPP kernel with their counterparts in the real-data DPP kernel. Eigenvalues and eigenvectors capture the manifold structure of both real and fake data, which renders the optimization more feasible. Fig. 1 illustrates pairing the two kernels by matching their high-dimensional eigen-manifolds.
During training, a generative model produces a batch of samples $S_B = \{G(z_1), \ldots, G(z_B)\}$, where $B$ is the batch size and $z_i$ is a noise vector input to the generator $G$. At every iteration, we also have a batch of samples $D_B$ drawn from the true distribution. Our aim is to produce an $S_B$ that is probabilistically sampled following the DPP kernel of $D_B$, which satisfies:
(3)  $S_B \sim \mathcal{P}_{L_{D_B}}$
such that $S_B$ is a random variable representing a fake subset drawn with a generative point process $\mathcal{P}$, and $L_{D_B}$ is the DPP kernel of a real subset indexed by $B$.
To construct $L_{D_B}$ and $L_{S_B}$, we use the kernel decomposition in Eq. 2. However, since both true and fake samples are drawn randomly with no quality criteria, it is safe to assume uniform quality scores, $q_i = 1$. Thus, we construct the kernels as $L_{D_B} = \phi(D_B)\,\phi(D_B)^{\top}$ and $L_{S_B} = \phi(S_B)\,\phi(S_B)^{\top}$, where $\phi(D_B)$ and $\phi(S_B)$ are the feature representations extracted by the feature extraction function $\phi(\cdot)$.
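This kernel construction can be sketched as follows; the random arrays are stand-ins for discriminator features of real and fake batches, and the shapes are assumed for illustration:

```python
import numpy as np

def dpp_kernel(feats):
    """Construct a DPP kernel from a (batch, dim) feature matrix with
    unit quality scores (q = 1): L = phi phi^T, symmetric PSD by design."""
    return feats @ feats.T

# Hypothetical stand-ins for phi(D_B) and phi(S_B):
# a batch of 8 samples with 16-dimensional features.
rng = np.random.default_rng(1)
phi_real = rng.normal(size=(8, 16))
phi_fake = rng.normal(size=(8, 16))

L_real = dpp_kernel(phi_real)
L_fake = dpp_kernel(phi_fake)
```

Because the kernels are Gram matrices, positive semidefiniteness holds by construction, so no extra projection step is needed before the eigendecomposition used by the loss.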
Our aim is to learn a fake diversity kernel $L_{S_B}$ close to the real diversity kernel $L_{D_B}$. Nonetheless, matching two kernels directly is an unconstrained optimization problem, as pointed out by (Li et al., 2009). Instead, we match the kernels through their major characteristics: eigenvalues and eigenvectors. This scales the matching problem down to regressing the magnitudes of the eigenvalues and the orientations of the eigenvectors. Hence, our GDPP loss is composed of two components, a diversity magnitude loss $\mathcal{L}_m$ and a diversity structure loss $\mathcal{L}_s$, as follows:
(4)  $\mathcal{L}^{g}_{DPP} = \mathcal{L}_m + \mathcal{L}_s = \sum_i \big\| \lambda^i_{real} - \lambda^i_{fake} \big\|_2 \;-\; \sum_i \hat{\lambda}^i_{real} \cos\!\big(v^i_{real}, v^i_{fake}\big)$
where $\lambda^i_{real}$ and $\lambda^i_{fake}$ are the $i$-th eigenvalues of $L_{D_B}$ and $L_{S_B}$ respectively, and $v^i_{real}$ and $v^i_{fake}$ are the corresponding eigenvectors.
Finally, we account for outlier structures by using the min-max normalized version of the real eigenvalues, $\hat{\lambda}^i_{real}$, to scale the cosine similarity between the eigenvectors $v^i_{real}$ and $v^i_{fake}$. This alleviates the effect of noisy structures that intrinsically occur within the real-data distribution or within the learning process.
Integrating the GDPP loss with GANs. As a primary benchmark, we integrate our GDPP loss with GANs. Since our aim is to avoid adding any extra trainable parameters, we utilize features extracted by the discriminator: we use the hidden activations before its last layer as our feature extraction function $\phi(\cdot)$. We apply normalization to the obtained features, which guarantees constructing a positive semidefinite kernel according to Eq. 2. We finally integrate $\mathcal{L}^{g}_{DPP}$ into the GAN objective by modifying only the generator loss of the standard adversarial loss (Goodfellow et al., 2014) as follows:
(5)  $\mathcal{L}_{g} = -\,\mathbb{E}_{z \sim p_z}\big[\log D(G(z))\big] + \mathcal{L}^{g}_{DPP}$
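Putting the pieces together, the penalty term can be sketched as below. This is our reading of Eq. 4, not the authors' reference implementation: we use squared eigenvalue differences and $1 - |\cos|$ for the structure term (which also absorbs the sign ambiguity of eigenvectors); the exact weighting in the official code may differ.

```python
import numpy as np

def gdpp_loss(phi_real, phi_fake):
    """Sketch of the GDPP loss: match eigenvalue magnitudes and
    eigenvalue-scaled eigenvector orientations of the two DPP kernels."""
    L_real = phi_real @ phi_real.T
    L_fake = phi_fake @ phi_fake.T
    lam_r, v_r = np.linalg.eigh(L_real)
    lam_f, v_f = np.linalg.eigh(L_fake)

    # Diversity magnitude loss L_m: distance between eigenvalues.
    loss_m = np.sum((lam_r - lam_f) ** 2)

    # Min-max normalize the real eigenvalues to down-weight noisy,
    # low-energy structures of the real data.
    lam_hat = (lam_r - lam_r.min()) / (lam_r.max() - lam_r.min() + 1e-12)

    # Diversity structure loss L_s: scaled cosine alignment of the
    # paired eigenvectors (|cos| ignores eigenvector sign flips).
    cos = np.abs(np.sum(v_r * v_f, axis=0))
    loss_s = np.sum(lam_hat * (1.0 - cos))
    return loss_m + loss_s
```

In an actual GAN, `phi_real` and `phi_fake` would be the discriminator's penultimate activations and the loss would be added to the generator objective, so gradients flow only through `phi_fake`.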
Integrating the GDPP loss with VAEs. A key property of our loss is its generality to any generative model; we show this by also embedding it within VAEs. A VAE consists of an encoder network $q_\theta(z \mid x)$, where $x$ is an input training batch and $z$ is sampled from a normal distribution parametrized by the encoder outputs $\sigma$ and $\mu$, representing respectively the standard deviation and the mean of the distribution. Additionally, the VAE has a decoder network $p_\psi(x \mid z)$ which reconstructs $x$. We use the final hidden activations of the encoder as our feature extraction function $\phi(\cdot)$. Given $z$ sampled from a normal distribution $\mathcal{N}(0, I)$, the decoder is used to generate the fake batch $S_B$, while the real batch $D_B$ is randomly sampled from the training data. Finally, we compute $\mathcal{L}^{g}_{DPP}$ as in Eq. 4, rendering the GDPP-VAE loss:
(6)  $\mathcal{L}_{GDPP\text{-}VAE} = \mathcal{L}_{VAE} + \mathcal{L}^{g}_{DPP}$
Table 1. Modes captured and percentage of high-quality samples on synthetic data (averaged over five runs).

                                    2D Ring              2D Grid               1200D Synthetic
                                    Modes     % High-    Modes     % High-     Modes     % High-
                                    (Max 8)   Quality    (Max 25)  Quality     (Max 10)  Quality
GAN (Goodfellow et al., 2014)       1         99.3       3.3       0.5         1.6       2.0
ALI (Dumoulin et al., 2017)         2.8       0.13       15.8      1.6         3         5.4
Unrolled GAN (Metz et al., 2017)    7.6       35.6       23.6      16.0        0         0.0
VEEGAN (Srivastava et al., 2017)    8.0       52.9       24.6      40.0        5.5       28.3
WGAN-GP (Gulrajani et al., 2017)    6.8       59.6       24.2      28.7        6.4       29.5
GDPP-GAN                            8.0       71.7       24.8      68.5        7.4       48.3
[Figure 3: Generations on the 2D Ring (top row, panels a-f) and 2D Grid (bottom row, panels g-l) for GAN, ALI, Unrolled-GAN, VEEGAN, WGAN-GP, and GDPP-GAN.]
In our experiments, we evaluate generation based on two criteria: mode collapse and the quality of generated samples. Due to the intractability of log-likelihood estimation, this evaluation is non-trivial for real data. Therefore, we start by analyzing performance on synthetic data, where these criteria can be measured accurately. Then, we demonstrate the effectiveness of our method on real data using standard evaluation metrics. The same architecture is used for all methods, and hyperparameters were tuned separately for each approach to achieve its best performance (see Appendix A for details).
Mode collapse and the quality of generations can be explicitly evaluated on synthetic data since the true distribution is well-defined. In this section, we evaluate the performance of the methods on mixtures of Gaussians with known mode locations and distribution (see Appendix B for details). We use the same architecture for all models, namely the one used by (Metz et al., 2017) and (Srivastava et al., 2017). We note that the first four rows in Table 1 are obtained from (Srivastava et al., 2017), since we use the same architecture and training paradigm. Fig. 3 illustrates the behavior of each method on the 2D Ring and Grid data. As shown for the vanilla GAN in the 2D Ring example (Fig. 3a), it generates the highest-quality samples but captures only a single mode. At the other extreme, WGAN-GP on the 2D Grid (Fig. 3k) captures almost all modes of the true distribution, but only because it generates highly scattered samples that do not precisely depict the true distribution. GDPP-GAN (Fig. 3f,l) creates a precise representation of the true data distribution, reflecting that the method learned an accurate structure manifold.
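For concreteness, the 2D Ring and Grid mixtures can be sampled roughly as follows; the radius, grid spacing, and standard deviations here are assumed values for illustration, not the exact settings of Appendix B.

```python
import numpy as np

def sample_2d_ring(n, n_modes=8, radius=1.0, std=0.01, seed=0):
    """2D Ring: mixture of 8 Gaussians equally spaced on a circle."""
    rng = np.random.default_rng(seed)
    angles = 2 * np.pi * rng.integers(0, n_modes, size=n) / n_modes
    centers = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return centers + std * rng.normal(size=(n, 2))

def sample_2d_grid(n, side=5, spacing=2.0, std=0.05, seed=0):
    """2D Grid: mixture of 25 Gaussians arranged on a 5x5 grid."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, side * side, size=n)
    centers = spacing * np.stack([idx // side, idx % side], axis=1).astype(float)
    return centers + std * rng.normal(size=(n, 2))
```

Because the mode centers and standard deviations are known, recovered modes and sample quality can be measured exactly on this data.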
Table 2. Ablation of the GDPP loss components on 2D Ring and Grid data.

                                          2D Ring             2D Grid
                                          Modes     % High-   Modes     % High-
                                          (Max 8)   Quality   (Max 25)  Quality
Exact determinant                         8         82.9      12.6      21.7
Only diversity magnitude (L_m)            8         67.0      20.4      15.9
Only diversity structure (L_s)            8         65.2      18.2      35.2
GDPP with unnormalized structure term     7.2       81.2      20.6      68.8
Final GDPP loss (L_m + L_s)               8         71.7      24.8      68.5
Performance evaluation: At every iteration, we sample fake points from the generator and real points from the given distribution. Mode collapse is quantified by the number of real modes recovered in the fake data, and generation quality is quantified by the percentage of high-quality samples. A generated sample is counted as high-quality if it falls within three standard deviations of a mode for the 2D Ring or Grid, and within ten standard deviations for the 1200D data. We train all models for 25K iterations, except VEEGAN, which needs 100K iterations to converge properly. At inference time, we generate 2,500 samples from each trained model and measure both metrics. We report the numbers averaged over five runs with different random initializations in Table 1. GDPP-GAN clearly outperforms all other methods; for instance, on the most challenging 1200D dataset, designed to mimic a natural data distribution, it brings a 63% relative improvement in high-quality samples and 15% in mode detection over its best competitor, WGAN-GP. Finally, we show that our method is robust to random initialization in Appendix C.1.
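The two metrics can be sketched as below; treating a mode as recovered when at least one high-quality sample is assigned to it is our assumption about the counting rule.

```python
import numpy as np

def eval_synthetic(samples, centers, std, k=3.0):
    """Count recovered modes and % high-quality samples. A sample is
    high-quality if it lies within k standard deviations of some mode
    (k = 3 for the 2D data, k = 10 for the 1200D data); a mode counts
    as recovered if a high-quality sample is nearest to it."""
    d = np.linalg.norm(samples[:, None, :] - centers[None, :, :], axis=2)
    hq = d.min(axis=1) <= k * std
    modes = np.unique(d.argmin(axis=1)[hq]).size
    return modes, 100.0 * hq.mean()
```

Applied to 2,500 generated samples against the known mixture centers, this yields the two columns reported per dataset in Table 1.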
Ablation study: We run a study on the 2D Ring and Grid data to show the individual effect of each component of our loss. As shown in Table 2, optimizing the determinant directly increases diversity, generating the highest-quality samples. This works best on the 2D Ring since its true data distribution can be represented by a repulsion model. However, for more complex data such as the 2D Grid, optimizing the determinant fails because it aims at repelling fake samples from each other rather than representing the real manifold structure. Using GDPP with an unnormalized structure term is prone to learning outliers caused by the inherent noise within the data. Scaling the structure loss by the true-data eigenvalues, by contrast, disentangles the noise from the prominent structure and better models the data diversity.
Data efficiency: We evaluate the amount of training data needed by each method to reach the same local optimum, as evaluated by our two metrics on both the 2D Ring and Grid data. Since the true data is sampled from a mixture of Gaussians, we can generate training data of unbounded size; we therefore quantify the amount of training data by the batch size while fixing the number of backpropagation steps. In this experiment (Fig. 5), we run all methods for the same number of iterations (25,000) and vary the batch size. On the 2D Ring, WGAN-GP tends to capture higher-quality samples with less data. On the 2D Grid, GDPP-GAN performs on par with the other methods for small amounts of data, yet significantly outperforms them on the quality of generated samples once trained on enough data.
Table 3. Performance on Stacked-MNIST and CIFAR-10.

                                     Stacked-MNIST                  CIFAR-10
                                     #Modes (Max 1000)   KL div.   Inception score   IvO
DCGAN (Radford et al., 2016)         427                 3.163     5.26 ± 0.13       0.0911
DeLiGAN (Gurumurthy et al., 2017)    767                 1.249     5.68 ± 0.09       0.0896
Unrolled-GAN (Metz et al., 2017)     817                 1.430     5.43 ± 0.21       0.0898
Reg-GAN (Che et al., 2017)           955                 0.925     5.91 ± 0.08       0.0903
WGAN (Arjovsky et al., 2017)         961                 0.140     5.44 ± 0.06       0.0891
WGAN-GP (Gulrajani et al., 2017)     995                 0.148     6.27 ± 0.13       0.0891
GDPP-GAN (Ours)                      1000                0.135     6.58 ± 0.10       0.0883
VAE (Kingma & Welling, 2013)         341                 2.409     1.19 ± 0.02       0.543
GDPP-VAE (Ours)                      623                 1.328     1.32 ± 0.03       0.203
Table 4. Average running time per training iteration on CIFAR-10 (seconds).

DCGAN    Unrolled-GAN   VEEGAN   Reg-GAN   WGAN     WGAN-GP   GDPP-GAN
0.0674   0.2467         0.1978   0.1357    0.1747   0.4331    0.0746
Time efficiency: To analyze time efficiency, we explore two primary aspects: convergence rate and physical running time. First, to find out which method converges fastest, we fix the batch size at 512 and vary the number of training iterations for all models (Fig. 5). On the 2D Ring, only VEEGAN captures a higher number of modes earlier than GDPP-GAN; however, its samples are of much lower quality than those generated by GDPP-GAN. On the 2D Grid, GDPP-GAN performs on par with Unrolled-GAN for the first 5,000 iterations while the others fall behind; afterwards, our method significantly outperforms all methods in both the number of captured modes and the quality of generated samples. Second, we compare the physical running time of all methods given the same data and number of iterations. To obtain reliable results, we run the methods on CIFAR-10 instead of the synthetic data, since the latter has an insignificant running time. We compute the average running time of an iteration across 1,000 iterations over five different runs of each method. Table 4 shows that GDPP-GAN has a negligible computational overhead beyond DCGAN, rendering it the fastest improved-GAN approach. We elaborate on the runtime analysis and conduct additional experiments exploring the computational overhead in Appendix C.3.
We run real-image generation experiments on three datasets: Stacked-MNIST, CIFAR-10, and CelebA. For the first two, we use the experimental setting of (Gulrajani et al., 2017) and (Metz et al., 2017); we also investigate the robustness of our method under the more challenging setting proposed by (Srivastava et al., 2017) in Appendix C.2. For CelebA, we use the experimental setting of (Karras et al., 2018). In our evaluation, we focus on comparing with state-of-the-art methods that modify the original adversarial loss. Nevertheless, most baselines can be deemed orthogonal to our contribution and could enhance generation if integrated with our approach. Finally, we show that our loss is generic to any generative model by incorporating it within the Variational Auto-Encoder (VAE) of (Kingma & Welling, 2013) in Table 3. Appendix D shows qualitative examples from several models and baselines.
Stacked-MNIST is a variant of MNIST (LeCun, 1998) designed to increase the number of discrete modes in the data. The data is synthesized by stacking three randomly sampled MNIST digits along the color channel, resulting in a 28×28×3 image. Stacked-MNIST thus has 1,000 discrete modes, corresponding to the number of possible triplets of digits. Following (Gulrajani et al., 2017), we generate 50,000 images that are later used to train the networks. We train all models for 15,000 iterations, except DCGAN and Unrolled-GAN, which need 30,000 iterations to converge to a reasonable local optimum.
We follow (Srivastava et al., 2017) in evaluating the number of recovered modes and the divergence between the true and fake distributions. We sample 26,000 fake images from each model and identify the mode of each generated image using the classifier mentioned in (Che et al., 2017), which is trained on the standard MNIST dataset to classify each channel of the fake sample. The quality of samples is evaluated by computing the KL divergence between the generated label distribution and the training label distribution. As shown in Table 3, GDPP-GAN captures all modes and generates a fake distribution with the lowest KL divergence from the true distribution. Moreover, when applied to the VAE, our loss doubles the number of modes captured (from 341 to 623) and cuts the KL divergence in half (from 2.4 to 1.3). Lastly, we follow (Richardson & Weiss, 2018) in assessing the severity of mode collapse by computing the number of statistically different bins on MNIST in Appendix C.4.
We evaluate the methods on CIFAR-10 after training all models for 100K iterations. Unlike Stacked-MNIST, the modes of this dataset are intractable, so we follow (Metz et al., 2017) and (Srivastava et al., 2017) in using two different metrics: Inception Score (Salimans et al., 2016) for generation quality and Inference-via-Optimization (IvO) for diversity. As shown in Table 3, GDPP-GAN consistently outperforms all other methods on both metrics. Furthermore, applying GDPP to the VAE reduces IvO by 63%. We note, however, that both VAE Inception Scores are considerably low, which was also observed by (Shmelkov et al., 2018) when applying a VAE to CIFAR-10.
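The KL-divergence evaluation can be sketched as follows, where the inputs are classifier-predicted mode indices for generated and real images; the smoothing constant is an assumption to guard against empty bins.

```python
import numpy as np

def label_kl(gen_labels, true_labels, n_modes=1000, eps=1e-10):
    """KL divergence between the discrete mode histograms of generated
    and true labels (modes are the classifier-predicted digit triplets)."""
    p = np.bincount(gen_labels, minlength=n_modes) / len(gen_labels)
    q = np.bincount(true_labels, minlength=n_modes) / len(true_labels)
    mask = p > 0  # 0 * log(0) contributes nothing
    return float(np.sum(p[mask] * np.log(p[mask] / (q[mask] + eps))))
```

A generator that covers all 1,000 triplets with the right frequencies drives this quantity toward zero, which is the behavior reported for GDPP-GAN in Table 3.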
Inference-via-Optimization (Metz et al., 2017) assesses the severity of mode collapse by comparing real images with their nearest generated images; in the case of mode collapse, there are some real images for which this distance is large. We measure this metric by sampling a real image $x$ from the test set of real data. Then, we optimize the loss between $x$ and a generated image $G(z)$ by modifying the noise vector $z$. If a method attains a low MSE, it can be assumed to capture more modes than methods that attain a higher MSE. Fig. 6 presents some real images with their nearest optimized generations.
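A toy sketch of IvO: given a generator G (here a trivial stand-in function), we search for the z minimizing the reconstruction MSE. A real implementation would backpropagate through the network; finite differences are used here only to keep the sketch self-contained.

```python
import numpy as np

def inference_via_optimization(G, x, z_dim, steps=200, lr=0.1, seed=0):
    """IvO sketch: find the noise vector z minimizing ||x - G(z)||^2
    and return the final MSE as the mode-coverage score for x."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=z_dim)

    def mse(v):
        return float(np.mean((x - G(v)) ** 2))

    eps = 1e-4
    for _ in range(steps):
        # Central-difference gradient w.r.t. each coordinate of z.
        grad = np.zeros_like(z)
        for i in range(z_dim):
            dz = np.zeros_like(z)
            dz[i] = eps
            grad[i] = (mse(z + dz) - mse(z - dz)) / (2 * eps)
        z -= lr * grad
    return mse(z)
```

Averaging this final MSE over many real test images gives the IvO numbers of Table 3: a collapsed generator leaves some images unreachable, inflating the average.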
We also assess training stability by calculating the Inception Score at different stages of training on CIFAR-10 (Fig. 7). Evidently, DCGAN has the least stable training, with high variation. However, merely adding the GDPP penalty term to the generator loss lets the model generate high-quality images earliest in training, with a stable increase.
Finally, to evaluate the performance of our loss in large-scale adversarial training, we embed the GDPP loss in Progressive-Growing GANs (Karras et al., 2018). We train the models for 40K iterations at 4 scales and for 200K iterations at 5 scales. On a large-scale dataset such as CelebA (Liu et al., 2018), it is harder to stabilize the training of DCGAN; in fact, DCGAN is only able to produce reasonable results at the first scale but not the second, due to the high-resolution requirement. For this reason, we embed our loss in the WGAN-GP paradigm instead of DCGAN; WGAN-GP is likewise orthogonal to our loss.
Unlike CIFAR-10, CelebA does not resemble ImageNet, because it contains only faces rather than natural scenes or objects. Therefore, using a model trained on ImageNet as a basis for evaluation (i.e., the Inception Score) would cause inaccurate recognition. On the other hand, IvO was shown to be fooled into producing blurry images out of the optimization on high-resolution datasets such as CelebA (Srivastava et al., 2017). Therefore, we follow (Karras et al., 2018) in evaluating performance on CelebA using the Sliced Wasserstein Distance (SWD) (Peyré et al., 2017). A small Wasserstein distance indicates that the distributions of patches are similar, which entails that real and fake images appear similar in both appearance and variation at the given spatial resolution. Accordingly, the SWD metric can evaluate the quality of images as well as the severity of mode collapse on large-scale datasets such as CelebA. Table 5 shows the average and minimum SWD across the last 10K training iterations; we chose this time frame because it shows a saturation in training loss for all competing methods.

Table 5. Average and minimum SWD on CelebA over the last 10K training iterations.

40K iterations, 4 scales     Avg. SWD   Min. SWD
Training Data                0.0033
DCGAN                        0.0906     0.0241
WGAN-GP                      0.0186     0.0115
GDPP-GAN                     0.0163     0.0075

200K iterations, 5 scales    Avg. SWD   Min. SWD
Training Data                0.0023
WGAN-GP                      0.0197     0.0095
GDPP-GAN                     0.0181     0.0088
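The core of the SWD metric can be sketched as below. The full procedure of (Karras et al., 2018) operates on descriptors of 7×7 patches from a Laplacian pyramid; this simplified sketch keeps only the random-projection core and assumes equally sized, already-flattened descriptor sets.

```python
import numpy as np

def sliced_wasserstein(x, y, n_proj=64, seed=0):
    """Approximate SWD: average 1-D Wasserstein distance between random
    unit-direction projections of two equally sized descriptor sets
    x, y of shape (n_samples, dim)."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(x.shape[1], n_proj))
    dirs /= np.linalg.norm(dirs, axis=0, keepdims=True)
    # Sorting the projections aligns the two 1-D empirical distributions.
    px = np.sort(x @ dirs, axis=0)
    py = np.sort(y @ dirs, axis=0)
    return float(np.mean(np.abs(px - py)))
```

Identical patch statistics give a distance of zero, while a shift in appearance or a collapse in variation shows up in the sorted 1-D projections, which is why the metric captures both quality and mode collapse.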
In this work, we introduced a novel criterion that trains generative networks to capture a diversity similar to that of the true data by utilizing Determinantal Point Processes (DPPs). We apply our criterion to generative adversarial training and to the Variational Auto-Encoder by learning a kernel via features extracted from the discriminator/encoder. We then train the generator by optimizing a loss between the eigenvalues and eigenvectors of the fake and real kernels, encouraging the generator to simulate the diversity of real data. Our GDPP framework accumulates many desirable properties: it requires no extra trainable parameters and operates in an unsupervised setting, yet it consistently outperforms state-of-the-art methods on a battery of synthetic data and real image datasets, as measured by generation quality and resistance to mode collapse. Furthermore, GDPP-GANs exhibit stabilized adversarial training and have been shown to be time- and data-efficient compared to state-of-the-art approaches. Moreover, the GDPP criterion is architecture- and model-agnostic, allowing it to be embedded in variants of generative models such as adversarial feature learning and conditional GANs.