1 Introduction
Advances in deep learning have resulted in state-of-the-art generative models for a wide variety of data generation tasks. Generative methods map points sampled from a low-dimensional latent space with a known distribution to points in a high-dimensional space whose distribution matches that of real data. In particular, generative adversarial networks (GANs)
(Goodfellow et al., 2014) have shown successful applications in super-resolution (Ledig et al., 2017), image-to-image translation (Isola et al., 2017; Zhu et al., 2017), text-to-image synthesis (Reed et al., 2016), image inpainting (Pathak et al., 2016), image manipulation (Zhu et al., 2016), synthetic data generation (Shrivastava et al., 2017), and domain adaptation (Tzeng et al., 2017). A GAN architecture consists of a generator G and a discriminator D. The generator maps low-dimensional latent points z, drawn from a prior distribution p(z), to points in the high-dimensional data space. The latent-space distribution p(z) is typically chosen to be a normal or uniform distribution. The goal of the generator
is to produce data that are perceptually indistinguishable from real data, while the discriminator is trained to distinguish 'fake' (generated) data from 'real' data. The two networks are trained adversarially, and at the end of training the generator learns to generate data with a distribution similar to the real one. One natural question for generative models is how to model the latent space effectively so as to generate diverse and varied output. Interpolating between samples in the latent space can lead to semantic interpolation in image space (Radford et al., 2016). Interpolation can help transfer certain semantic features of one image to another. Successful interpolation also shows that GANs do not simply overfit or memorize the training set, but generate novel output. Interpolation has been shown to disentangle factors of variation in the latent space, with many applications (Liu et al., 2018a; Kumar & Chellappa, 2018; Liu et al., 2018b; Yin et al., 2017).
Imposing a parametric structure on the latent space can cause a distributional mismatch, where the distribution of interpolated points does not match the prior. This mismatch degrades the quality of outputs generated from interpolated points (White, 2016). Previous research has proposed various parametric models to fix this problem (White, 2016; Kilcher et al., 2018; Agustsson et al., 2019). One finding of prior work (Leśniak et al., 2019) is that a Cauchy-distributed prior solves the distributional mismatch problem. However, the Cauchy is a peculiar distribution, with undefined moments and heavy tails, which means that during inference there will always be some undesirable outputs (as acknowledged in (Leśniak et al., 2019)) due to latent vectors being sampled from these tails.
In this paper, we propose the use of nonparametric priors to address the aforementioned issues. The advantage of a nonparametric prior is that we make no modeling assumptions and instead propose a general optimization approach to determine the prior for the task at hand. In particular, our contributions are as follows:

We analyze the distribution mismatch problem in latent-space interpolation using basic probability tools, and derive a nonparametric approach to search for a prior that addresses it.

We present algorithms to solve for the prior using off-the-shelf optimizers, and show that the obtained priors have interesting multi-lobe structures with fast-decaying tails, resulting in a midpoint distribution that is close to the prior.

We show that the resulting nonparametric prior yields better quality and diversity in generated output, with no additional training data and no added architectural complexity.
More broadly, our approach is a general and flexible method for imposing other constraints on latent-space statistics. Our goal is not to outperform all the latest developments in generative models, but to show that our proposed standalone formulation can boost performance with no added training or architectural modifications.
2 Background and Related Work
Generative Adversarial Network: As described in Section 1, a GAN consists of two components, a generator G and a discriminator D, which are adversarially trained against one another until the generator can map latent-space points to a high-dimensional distribution that the discriminator cannot distinguish from true data samples. Formally, this can be expressed as the min-max game in (1), which the generator tries to minimize and the discriminator tries to maximize (Goodfellow et al., 2014):
min_G max_D V(D, G) = E_{x∼p_data(x)}[log D(x)] + E_{z∼p(z)}[log(1 − D(G(z)))]   (1)
where V(D, G) is the objective function, x denotes real data points sampled from the true distribution p_data, and z denotes points sampled from the latent-space distribution p(z). If GAN training is stable and a Nash equilibrium is achieved, the generator learns to generate samples similar to the true distribution. In general, GAN training is not always stable, so several methods have been introduced to improve it (Salimans et al., 2016; Arjovsky & Bottou, 2017)
. These include different divergences and loss functions (Nowozin et al., 2016; Arjovsky et al., 2017; Gulrajani et al., 2017). Several other works improve generated image quality (Dai et al., 2017; Zhang et al., 2017) or resolution (Denton et al., 2015; Karras et al., 2018).
Interpolation: For any two given latent-space points z1 and z2, a linearly interpolated point is given by z_λ = (1 − λ)z1 + λz2 for some λ ∈ [0, 1]. It has been shown that GANs can generate novel outputs via linear interpolation, and as the line is traversed (λ from 0 to 1), the output images smoothly transition from one to the other without visual artifacts (Radford et al., 2016). The same work showed that vector arithmetic in the latent space has corresponding semantic meaning in the output space; e.g., the latent-space arithmetic "man with glasses" − "man without glasses" + "woman without glasses" generates an image of a woman wearing glasses (cf. Fig. 7 of (Radford et al., 2016)).
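As a concrete illustration, linear interpolation between two latent vectors is a one-liner; the generator that would consume the resulting latent points is omitted here, since any trained model applies:

```python
import numpy as np

def lerp(z1, z2, num_steps=8):
    """Linearly interpolate between latent vectors z1 and z2.

    Returns an array of shape (num_steps, d): row 0 is z1 (lambda = 0) and
    the last row is z2 (lambda = 1). Feeding each row to a trained generator
    yields the smooth semantic transitions reported by Radford et al.
    """
    lams = np.linspace(0.0, 1.0, num_steps)[:, None]
    return (1.0 - lams) * z1 + lams * z2

z1, z2 = np.random.randn(100), np.random.randn(100)
path = lerp(z1, z2)   # 8 latent points along the segment from z1 to z2
```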
Distribution Mismatch of Interpolated Points: Interpolation, while semantically meaningful, presents the challenge of ensuring that all interpolated points preserve the same data quality (for images, visual quality). Most GANs use a simple parametric distribution, such as the normal or uniform, as the prior over the latent space. However, either choice causes the distribution of interpolated points to match neither the normal nor the uniform distribution, as observed by (Kilcher et al., 2018). We replicate this argument below for the sake of exposition, since it is the core problem we tackle in this paper.
Let z1, z2 ∼ N(0, I_d) be two points in the latent space of the GAN's generator, and let z_λ = (1 − λ)z1 + λz2 be a linearly interpolated point. Note that z_λ ∼ N(0, ((1 − λ)² + λ²) I_d). We are interested in when the distribution of z_λ matches the prior, which among finite-moment distributions holds only for delta functions (we prove this statement formally in Section 3). The worst case, where the distribution of z_λ differs most from the prior, occurs at λ = 1/2, i.e., at the midpoint z_{1/2} = (z1 + z2)/2. Analyzing the squared Euclidean norms of z and z_{1/2} gives the following (Kilcher et al., 2018):
‖z‖² ∼ χ²_d,   ‖z_{1/2}‖² = ¼ ‖z1 + z2‖² ∼ ½ χ²_d   (2)
where d is the dimension of the latent space and χ²_d is the chi-squared distribution with d degrees of freedom. According to (2), the GAN is trained on latent points whose squared norm follows a χ²_d distribution, whereas the midpoint's squared norm follows a ½ χ²_d distribution. Clearly, there is a distribution mismatch between the points at which the GAN is trained to generate realistic samples and the midpoints at which we want to interpolate, and this mismatch becomes worse as the latent-space dimension increases. Finally, it has been shown that the normal distribution in high dimensions forms a 'soap bubble' structure, i.e., most of its probability mass concentrates in an annulus around the mean rather than near the mean itself
(Hall et al., 2005). This implies that interpolations traversing near the origin of the latent space suffer degraded output fidelity/quality, which has been confirmed for GANs in (White, 2016). A similar distribution-mismatch argument can be made for the uniform distribution. We note that we are not the first to propose a solution to this problem. Several prior approaches exist, based either on new interpolation schemes or on new prior distributions, different from the normal or uniform, that suffer less distribution mismatch. White (2016) proposes spherical linear interpolation, which follows the geodesic curve on a hypersphere to avoid interpolating near the origin (and thereby minimize distribution mismatch) when latent points are sampled uniformly on a sphere of finite radius. However, this interpolation loses semantic meaning if the path becomes too long and passes through unnecessary images, as noted in (Kilcher et al., 2018). Similar to spherical interpolation, (Agustsson et al., 2019) propose a normalized interpolation; it inherits the same issue of not being the shortest path.
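For reference, spherical linear interpolation (slerp) in the style of White (2016) can be sketched as follows; this is a standard formulation, not the authors' implementation:

```python
import numpy as np

def slerp(z1, z2, t):
    """Spherical linear interpolation between latent vectors z1 and z2.

    Follows the great-circle arc instead of the chord, so interpolants keep
    roughly the same norm as the endpoints and avoid the low-density region
    near the origin.
    """
    cos_theta = np.dot(z1, z2) / (np.linalg.norm(z1) * np.linalg.norm(z2))
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    if np.isclose(theta, 0.0):                 # nearly parallel: fall back to lerp
        return (1.0 - t) * z1 + t * z2
    return (np.sin((1.0 - t) * theta) * z1 + np.sin(t * theta) * z2) / np.sin(theta)

z1 = np.array([1.0, 0.0])
z2 = np.array([0.0, 1.0])
mid = slerp(z1, z2, 0.5)                       # stays on the unit circle
```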
Other approaches have attempted to define new prior distributions so that interpolated points have low distribution mismatch. This is similar to the method we employ in this paper, except that ours is nonparametric. Kilcher et al. propose a new prior distribution defined as follows:
z = √γ · v/‖v‖,   γ ∼ Γ(d/2, θ)   (3)
where d is the dimension of the latent space, Γ(d/2, θ) denotes the Gamma distribution (with shape d/2 and a scale θ chosen as in (Kilcher et al., 2018)), and v is a latent vector (Kilcher et al., 2018). Latent spaces defined using this prior distribution suffer much less from midpoint distribution mismatch. Further work by Leśniak et al. showed that the Cauchy distribution induces a midpoint distribution that is the same as the prior itself (Leśniak et al., 2019). However, the Cauchy distribution has undefined moments, which makes analysis difficult, and is heavy-tailed, which can lead to undesirable outputs.
3 Design of nonparametric priors for GANs
The primary motivation for designing nonparametric priors is that, in order to obtain a prior whose distribution of midpoints is close to the prior itself, we need to optimize an appropriate cost over the space of density functions. This optimization is easily done in the nonparametric case and requires rather few assumptions. Our use of the term nonparametric stems from classical density estimation approaches, rather than the more modern usage in Bayesian nonparametrics.
Let p be the chosen prior distribution with density f. Let z1 and z2 be two samples drawn from p, and let an interpolated point be given by z_λ = (1 − λ)z1 + λz2, for λ ∈ [0, 1]. The precise relation between the distribution of this interpolated point and f is given analytically as follows.
Property 1:
If z1 and z2 are two independent samples drawn from a density f, then the density of z_λ = (1 − λ)z1 + λz2, for λ ∈ (0, 1), is given by f_{z_λ}(x) = (1/(1 − λ)) f(x/(1 − λ)) ∗ (1/λ) f(x/λ), where ∗ denotes linear convolution.
Proof:
The proof is a direct application of the following two standard results from probability theory.

R1: If X and Y are two independent random variables with density functions f_X and f_Y, then the density of their sum is the linear convolution f_{X+Y} = f_X ∗ f_Y.

R2: If a random variable X has density f_X, then for a ≠ 0 the density of aX is f_{aX}(x) = (1/|a|) f_X(x/a).

Apply R2 to (1 − λ)z1 and λz2 separately, then convolve the results using R1. QED.
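Property 1 is easy to verify numerically: the histogram of interpolated samples should match the convolution of the two scaled densities. A Monte-Carlo sanity check, using a uniform prior purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, n = 0.3, 200_000
z1 = rng.uniform(-1.0, 1.0, n)
z2 = rng.uniform(-1.0, 1.0, n)

# Densities of (1-lam)*z1 and lam*z2 on a common grid of spacing h (R2).
edges = np.linspace(-1.0, 1.0, 81)
h = edges[1] - edges[0]
centers = 0.5 * (edges[:-1] + edges[1:])
fa, _ = np.histogram((1.0 - lam) * z1, bins=edges, density=True)
fb, _ = np.histogram(lam * z2, bins=edges, density=True)

# Property 1: density of the interpolant = convolution of the scaled densities (R1).
f_conv = np.convolve(fa, fb) * h
x_conv = np.linspace(2 * centers[0], 2 * centers[-1], 2 * len(centers) - 1)

# Empirical density of the interpolated samples, compared at matching points.
emp_edges = np.linspace(-2.0, 2.0, 161)
f_emp, _ = np.histogram((1.0 - lam) * z1 + lam * z2, bins=emp_edges, density=True)
x_emp = 0.5 * (emp_edges[:-1] + emp_edges[1:])
err = np.mean(np.abs(f_emp - np.interp(x_emp, x_conv, f_conv)))
```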
Following from here, the distribution mismatch problem can be expressed as the search for a prior density f such that the density of any interpolated point equals f. That is, we would like to satisfy:
f_{z_λ}(x) = f(x) for all x   (4)
where λ ∈ [0, 1]. The only distributions we are aware of that satisfy this condition are the Cauchy (undefined moments, heavy-tailed) and delta functions (zero variance).
Property 2:
The only density functions with finite moments that satisfy condition (4) are delta functions.
Proof:
A density function that satisfies condition (4) must also satisfy equality of all moments of the left- and right-side densities. By applying this specifically to the equality of variances, we show that the only solution among finite-moment densities is a delta function. The following two results come in handy.

R3: If X and Y are two independent random variables with variances σ²_X and σ²_Y, then Var(X + Y) = σ²_X + σ²_Y.

R4: If a random variable X has variance σ², then for a scalar a, Var(aX) = a²σ².

If (4) holds, the variance of the prior must equal the variance of any intermediate point, i.e., σ² = ((1 − λ)² + λ²)σ². Since (1 − λ)² + λ² < 1 for λ ∈ (0, 1), if σ² is finite, equality holds if and only if σ² = 0. This implies that f is a delta function. QED.
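The variance identity at the heart of this proof is easy to verify by simulation; Gaussian samples serve here only as a convenient finite-variance example:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, sigma2 = 0.5, 4.0
z1 = rng.normal(0.0, np.sqrt(sigma2), 500_000)
z2 = rng.normal(0.0, np.sqrt(sigma2), 500_000)

# Var((1-lam) z1 + lam z2) = ((1-lam)^2 + lam^2) sigma^2, strictly below
# sigma^2 for any lam in (0, 1) -- hence only sigma^2 = 0 satisfies (4).
var_interp = np.var((1.0 - lam) * z1 + lam * z2)
var_pred = ((1.0 - lam) ** 2 + lam ** 2) * sigma2
```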
The search for a density function that satisfies (4) exactly is thus not meaningful in the context of generative models, because a delta-function prior implies constant output. The Cauchy, on the other hand, is too specific a choice: its undefined moments make it impossible to impose any additional constraints on latent-space statistics, and its heavy tails may cause undesirable outputs for samples drawn from them.
What if we relax condition (4), so that we seek not exact equality but closeness of the left and right sides? The next section shows that this relaxed search results in a problem that can be solved using standard off-the-shelf function minimizers, and that the resulting distributions have many useful properties. Letting f_{1/2} denote the density of the midpoint z_{1/2}, we seek to minimize some form of distance or divergence between f_{1/2} and f among densities with finite variance. This optimization problem is defined in the next section.
3.1 Searching for the optimal prior distribution
As mentioned earlier, instead of enforcing exact equality as in condition (4), we would like to minimize the discrepancy between the left and right sides. A natural choice is to minimize the KL divergence between f_{z_λ} and f. Ideally, one would minimize this over the entire range of λ, i.e.,
f* = argmin_f ∫₀¹ D(f_{z_λ}, f) dλ   (5)
where D is the chosen divergence/distance between densities. However, this is likely an intractable problem due to the integration over λ. To make it tractable, we observe that for a given λ, the mean of z_λ is the same as the mean of z, while the variance of z_λ goes as ((1 − λ)² + λ²)σ². Thus, for interpolation problems, where λ ∈ [0, 1], the largest discrepancy in variance between z_λ and z occurs at λ = 1/2. We therefore suggest that, for interpolation, minimizing the worst-case (midpoint) error is sufficient.
3.2 Optimization problem for interpolation priors
For interpolation priors, we minimize the KL divergence between the midpoint distribution p_{1/2} and the prior p. To create a tractable problem, we restrict the density to a compact domain; without loss of generality, we choose it to be [−1, 1]. We discretize the domain with sufficient fineness into n bins, so the distribution is represented discretely by probabilities p_i at the bin centers x_i. The optimization problem now becomes:
min_p D(p_{1/2}, p)   s.t.   Σ_i p_i = 1,  p_i ≥ 0   (6)
In (6), D is a divergence/distance function between p_{1/2} and p. We use the KL divergence: it is a natural choice, and we find it produces smoother distributions than the ℓ2 distance. Without a variance constraint, the solution of (6) is simply a discrete delta function, which we would like to avoid. The variance constraint can be expressed as a quadratic term in the p_i's, giving:
min_p KL(p_{1/2} ‖ p)   s.t.   Σ_i p_i = 1,  p_i ≥ 0,  Σ_i p_i x_i² − (Σ_i p_i x_i)² = σ₀²   (7)
where p_{1/2} is the pmf of the midpoint induced by p. The KL divergence is in general convex on the space of density functions, but here p_{1/2} and p are related to each other, so our objective function is not convex. We solve (7) using fmincon in Matlab, which uses an interior-point algorithm with barrier functions. The solution from fmincon may only be locally optimal, yet we find the obtained solution is quite robust to large variations in initialization. We also note that using the KL divergence in either direction, or their sum, as the objective in (7) gives the same result. We run fmincon with the interior-point algorithm and suitably large limits on function evaluations and iterations, and we select the target variance σ₀² that provides the best FID score (Heusel et al., 2017). Figure 1 shows the trace of the optimizer cost under these settings; we observe convergence to a local minimum within a modest number of iterations.
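The paper solves (7) with Matlab's fmincon; purely as an illustration, the same discretized program can be sketched with scipy's SLSQP solver. The bin count, initialization, and target variance below are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from scipy.optimize import minimize

n = 40                                           # illustrative bin count
edges = np.linspace(-1.0, 1.0, n + 1)
x = 0.5 * (edges[:-1] + edges[1:])               # bin centers x_i

def midpoint_pmf(p):
    """pmf of (z1 + z2)/2 for z1, z2 i.i.d. from p, rebinned onto the grid."""
    mids = np.add.outer(x, x) / 2.0
    w = np.outer(p, p)
    q, _ = np.histogram(mids.ravel(), bins=edges, weights=w.ravel())
    return q

def kl(q, p, eps=1e-12):
    return float(np.sum(q * np.log((q + eps) / (p + eps))))

p0 = np.full(n, 1.0 / n)                         # uniform initialization (feasible)
sigma2 = float(p0 @ x**2)                        # target variance: that of p0
cons = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},                      # simplex
    {"type": "eq", "fun": lambda p: p @ x**2 - (p @ x) ** 2 - sigma2},   # variance
]
res = minimize(lambda p: kl(midpoint_pmf(p), p), p0, method="SLSQP",
               bounds=[(1e-9, 1.0)] * n, constraints=cons,
               options={"maxiter": 150, "ftol": 1e-10})
p_star = res.x / res.x.sum()
```

Here p_star plays the role of the nonparametric prior; with a finer grid and a tuned target variance one would expect the multi-lobe shape described below, though the exact shape depends on those choices and on the solver.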
Remarks on the shape of the obtained distribution:
Here we make brief remarks on the shape of the obtained distribution. First, the exact shape varies slightly each time we run the solver, which is expected for a non-convex problem. However, all obtained solutions share the same general shape: a large main lobe, apparent symmetry, and small but significant side lobes, as shown more clearly in Figure 2. We did not impose any symmetry condition during optimization, yet symmetric solutions emerged under different initializations.
Dependence on initialization:
We initialized our solver with a uniform density, delta functions centered at different locations, and truncated Gaussians with varying means and standard deviations. In all cases, the final solution converges to a shape very similar to that shown in Figure 2. Further, all obtained solutions perform equally well in the final evaluation of GAN output quality.
Role of side lobes:
We are not aware of any simple parametric distribution that describes the shape seen in Figure 2, except perhaps a Gaussian mixture model; even then, the side lobes are intricate and not simply Gaussian-like. The existence of these side lobes seems to strike a balance between the fast tail decay of distributions like the Gamma and the heavy tail of the Cauchy. It is almost as if the obtained shape fuses the best properties of the two classes of distributions, enabling good-quality GAN output while minimizing the divergence to the interpolated samples.
Continuous samples from discretized density:
At first glance it may appear that, since we discretize the domain while solving (7), our prior can generate only discrete samples. This is easily dealt with as follows. A sample from the discretized density is really an index corresponding to a bin center, but the bin itself has nonzero width determined by how finely we partition [−1, 1]. From the corresponding bin, we simply generate a uniform random variable restricted to the width of the bin. This approach implicitly corresponds to sampling from the continuous density constructed by zeroth-order interpolation of the obtained discrete one. One can be more sophisticated, but the sampling algorithm would no longer be as simple; we find the approach described above quite sufficient in practice.
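A minimal sketch of this two-stage sampler; the grid size and the test density below are illustrative, not the paper's settings:

```python
import numpy as np

def sample_continuous(p, edges, size, seed=None):
    """Draw continuous samples from a discretized density: pick bin i with
    probability p[i], then draw uniformly within that bin's width
    (equivalent to zeroth-order interpolation of the discrete density)."""
    rng = np.random.default_rng(seed)
    p = np.asarray(p, dtype=float)
    idx = rng.choice(len(p), size=size, p=p / p.sum())
    return rng.uniform(edges[idx], edges[idx + 1])

edges = np.linspace(-1.0, 1.0, 101)
centers = 0.5 * (edges[:-1] + edges[1:])
p = np.exp(-0.5 * (centers / 0.3) ** 2)    # any unnormalized discrete density works
z = sample_continuous(p, edges, size=10_000, seed=0)
```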
Quantification of midpoint mismatch:
Table 1 shows the actual KL divergence between the prior and its midpoint distribution. For the normal and uniform distributions, the midpoint distribution diverges strongly from the prior, whereas the distribution obtained by solving (7) has a much lower divergence from its midpoint. Note that (7) only minimizes the distribution mismatch in the one-dimensional case. The idea is that if the 1-D distribution is close to its midpoint distribution, then the divergence between the corresponding Euclidean-norm distributions will remain low even in higher dimensions.
Figure 3 shows the midpoint distribution mismatch for different priors in the one-dimensional case. Figure 4 shows the Euclidean-norm distributions of prior points and midpoints for different dimensions, estimated from a large set of samples; for the midpoints, we sample two sets of points and compute the Euclidean norms of the corresponding midpoints. For the normal distribution, at low dimensions the midpoint distribution overlaps well with the prior distribution, but as the dimension increases the two distributions start to diverge, until there is almost no overlap between them. We observe a similar trend for the uniform distribution. In contrast, for our prior the Euclidean-norm distributions of the prior and midpoints overlap well even in higher dimensions. In Figure 4 we can also see that our nonparametric distribution brings the Euclidean-norm distribution much closer to the origin than the normal and uniform priors do.
Table 1: KL divergence between each prior and its midpoint distribution.

Distribution              KL divergence
Uniform                   0.3065
Normal                    0.1544
Proposed nonparametric    0.0075
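The growing norm mismatch for the Gaussian prior is easy to reproduce numerically; a simple Monte-Carlo check, independent of any GAN:

```python
import numpy as np

rng = np.random.default_rng(2)
num = 100_000
for d in (2, 100):
    z = rng.standard_normal((num, d))          # prior samples
    z1 = rng.standard_normal((num, d))
    z2 = rng.standard_normal((num, d))
    mid = 0.5 * (z1 + z2)                      # midpoint samples
    # E||z||^2 = d (chi-squared, d dof) while E||mid||^2 = d/2: a 2x gap at
    # every d, and the relative spread shrinks like 1/sqrt(d), so the two
    # norm distributions separate completely as d grows.
    print(d, np.sum(z**2, axis=1).mean(), np.sum(mid**2, axis=1).mean())
```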
4 Experiments and Results
Datasets, models, and baselines:
To validate the effectiveness of the proposed approach, we train the standard DCGAN model (Radford et al., 2016) on four different datasets: a) CelebA (Liu et al., 2015), b) CIFAR-10 (Krizhevsky & Hinton, 2009), c) LSUN Bedroom, and d) LSUN Kitchen (Yu et al., 2015). We train for the same number of epochs and keep all training hyperparameters the same across all cases. We train each model three times and report the average scores. Details of the network architecture and the training method are provided in the supplemental material. We compare our proposed nonparametric prior against standard priors (normal, uniform) and against priors designed to minimize midpoint distribution mismatch: the Gamma (Kilcher et al., 2018) and Cauchy (Leśniak et al., 2019) priors, using the same parameters as suggested in the corresponding references.
Qualitative tests:
In Figure 5, we show the effect of interpolating through the origin in a high-dimensional latent space for different priors on the CelebA dataset. Here, we interpolate between two random points such that the interpolation line passes through the origin. As in (Kilcher et al., 2018), we observe that with standard priors like the normal, in high latent dimensions the GAN generates non-realistic images around the origin; note the difference in quality of the images at the center of the panels (the space around the origin). With our nonparametric distribution obtained by solving (7), the GAN generates more realistic images around the origin even at higher dimensions. Leśniak et al. (Leśniak et al., 2019) pointed out that if a GAN is trained for more epochs, it learns to fill the space around the origin even with standard priors; we observe a similar trend with the normal and uniform priors. However, with our nonparametric prior the GAN learns to fill the space around the origin much earlier in training. We also present qualitative comparisons on the LSUN Bedroom dataset in Figure 6, comparing against the standard priors and the priors proposed in (Kilcher et al., 2018) and (Leśniak et al., 2019), and highlighting the favorable performance of the proposed approach. While it is difficult to perceptually judge whether we outperform the other priors, we do obtain competitive visual quality with a conceptually general approach. In the supplemental material, we show additional qualitative results for the LSUN Bedroom/Kitchen and CIFAR-10 datasets. We note that the Cauchy distribution had difficulty converging on these datasets, exhibiting possible mode collapse and instability during GAN training.
Quantitative evaluation:
For quantitative analysis, we use the Inception Score (IS) (Salimans et al., 2016) and the Fréchet Inception Distance (FID) (Heusel et al., 2017), the standard metrics for evaluating GAN performance. The Inception Score correlates with the visual quality of the generated images: the higher, the better. However, recent studies suggest that the IS does not compare the statistics of the generated data with those of real-world data (Heusel et al., 2017; Zhang et al., 2018), and thus is not always a reliable indicator of visual quality. The FID overcomes this drawback by comparing the statistics of generated and real data in a deep feature space; the lower the FID, the better. As Tables 2 to 6 show, our nonparametric prior achieves better FID scores on both the prior and the midpoint by a clear margin. In terms of IS, we are the best in most cases, and when we are not, we come quite close to the best-performing prior.
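For reference, the FID is the Fréchet distance between two Gaussians fitted to feature statistics of real and generated images, FID = ‖μ1 − μ2‖² + Tr(Σ1 + Σ2 − 2(Σ1 Σ2)^{1/2}). A direct implementation of that formula; the Inception feature extractor that produces the means and covariances is outside this sketch:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(mu1, cov1, mu2, cov2):
    """Frechet distance between N(mu1, cov1) and N(mu2, cov2) -- the quantity
    FID reports when both Gaussians are fit to Inception features."""
    covmean = sqrtm(cov1 @ cov2)
    if np.iscomplexobj(covmean):          # drop tiny imaginary numerical residue
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean))

# 1-D check: N(0, 1) vs N(1, 4) gives (0-1)^2 + (1 + 4 - 2*2) = 2
fid_1d = frechet_distance(np.zeros(1), np.eye(1), np.ones(1), 4.0 * np.eye(1))
```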
To compute the IS and FID, we sample a large set of points from the prior and estimate the scores on the corresponding generated images. For midpoints, we sample two sets of points from the prior and generate images from the corresponding average points. Results are summarized in Tables 2 to 6 for the different datasets. Table 2 compares our nonparametric prior with the other priors on the CelebA dataset. The nonparametric prior outperforms all other priors on both metrics, with a better FID score on both the prior and the midpoint. As expected, the IS and FID scores for the Cauchy are almost identical for the prior and the midpoint. With the Gamma, the midpoint scores are slightly better than the prior scores, since the Gamma distribution is highly dense around the origin. Table 3 shows the IS and FID for the prior and the midpoint at a higher latent-space dimension. Our nonparametric distribution again performs best in all cases. Its scores are almost the same as in Table 2, indicating robustness to the increase in latent-space dimension. The Cauchy prior sometimes leads to mode collapse during training, as indicated by its poor FID scores. From Tables 2 and 3, we also note that the midpoint IS and FID scores worsen relative to the prior point as the latent-space dimension increases.
Table 2: Inception Score (IS, higher is better) and FID (lower is better) at the prior and midpoint on CelebA.

Distribution    IS (Prior)  IS (Midpoint)  FID (Prior)  FID (Midpoint)
Uniform         1.843       1.369          24.055       40.371
Normal          1.805       1.371          26.173       42.136
Gamma           1.776       1.618          29.912       28.608
Cauchy          1.625       1.628          59.601       60.128
Nonparametric   1.933       1.681          17.735       19.115
Table 3: IS and FID at the prior and midpoint on CelebA, at a higher latent-space dimension.

Distribution    IS (Prior)  IS (Midpoint)  FID (Prior)  FID (Midpoint)
Uniform         1.908       1.407          25.586       44.837
Normal          1.857       1.434          25.035       43.596
Gamma           1.738       1.608          33.816       32.241
Cauchy          1.734       1.743          86.286       86.278
Nonparametric   1.973       1.636          14.953       19.322
Table 4: IS and FID at the prior and midpoint on CIFAR-10.

Distribution    IS (Prior)  IS (Midpoint)  FID (Prior)  FID (Midpoint)
Uniform         6.411       5.204          43.501       76.913
Normal          6.836       5.656          39.235       65.525
Gamma           6.449       6.798          48.334       39.262
Cauchy          2.972       2.964          180.37       180.40
Nonparametric   6.871       6.809          34.803       37.112
Table 5: IS and FID at the prior and midpoint on LSUN Bedroom.

Distribution    IS (Prior)  IS (Midpoint)  FID (Prior)  FID (Midpoint)
Uniform         2.969       2.649          42.998       76.412
Normal          2.812       2.591          64.682       108.49
Gamma           2.930       2.808          162.44       161.37
Cauchy          3.148       3.149          97.057       97.109
Nonparametric   3.028       2.769          27.857       31.472
Table 6: IS and FID at the prior and midpoint on LSUN Kitchen.

Distribution    IS (Prior)  IS (Midpoint)  FID (Prior)  FID (Midpoint)
Uniform         2.656       2.549          40.041       51.119
Normal          2.844       2.867          39.909       53.448
Gamma           2.183       2.147          181.81       187.00
Cauchy          1.182       1.179          242.27       242.87
Nonparametric   3.109       3.031          33.194       35.074
Table 4 shows the IS and FID scores for CIFAR-10. With our nonparametric prior, the GAN performs better than with the other priors on both the prior and the midpoint FID by a clear margin. As on CelebA, we observe that training with the Cauchy prior is highly unstable. Tables 5 and 6 compare our nonparametric prior with the other priors on the LSUN Bedroom and LSUN Kitchen datasets, where it again achieves the best FID scores. The Gamma and Cauchy priors perform worse on both the prior and midpoint FID, and often lead to mode collapse during training. Note that the LSUN datasets have larger variation in images, and also larger training sets, than CelebA; the nonparametric prior performs best in both cases as measured by FID on both the prior and the midpoint, showing its benefits on large, varied datasets. A few salient observations from the results:

The quantitative results across datasets show that standard priors like the normal and uniform perform well at the prior point but poorly at the midpoint.

The Gamma and Cauchy priors do not perform consistently across datasets: in some cases they are the best, but when they are not, their performance can be far from the best.

The nonparametric distribution is far more consistent: it is either the best or close to the best performer on all four datasets.
5 Conclusions
In this paper, we propose a general approach to solving the distribution mismatch problem for interpolation in GANs, and show its qualitative and quantitative effectiveness over standard priors. Our proposed method is often the best, and when it is not, it comes quite close to the best-performing technique. Our goal is not necessarily to outperform all other GANs, but to advocate the use of nontrivial priors, which can improve image quality without any additional training data or architectural complexity. An interesting avenue for future work is to extend this approach to extrapolation problems, or to impose other statistical or physically motivated constraints on latent spaces.
Acknowledgements
This work was supported by an Intel HVMRC grant. PT was supported by ARO grant number W911NF-17-1-0293. SJ was jointly supported by the Herberger Research Initiative in the Herberger Institute for Design and the Arts (HIDA) and the Fulton Schools of Engineering (FSE) at Arizona State University.
References
 Agustsson et al. (2019) Agustsson, E., Sage, A., Timofte, R., and Van Gool, L. Optimal transport maps for distribution preserving operations on latent spaces of generative models. In International Conference on Learning Representations (ICLR), 2019.
 Arjovsky & Bottou (2017) Arjovsky, M. and Bottou, L. Towards principled methods for training generative adversarial networks. In International Conference on Learning Representations (ICLR), 2017.
 Arjovsky et al. (2017) Arjovsky, M., Chintala, S., and Bottou, L. Wasserstein GAN. In International Conference on Machine Learning (ICML), 2017.

 Dai et al. (2017) Dai, B., Fidler, S., Urtasun, R., and Lin, D. Towards diverse and natural image descriptions via a conditional GAN. In IEEE International Conference on Computer Vision (ICCV), pp. 2989–2998, 2017.
 Denton et al. (2015) Denton, E. L., Chintala, S., Fergus, R., et al. Deep generative image models using a Laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems, pp. 1486–1494, 2015.
 Goodfellow et al. (2014) Goodfellow, I., PougetAbadie, J., Mirza, M., Xu, B., WardeFarley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680, 2014.
 Gulrajani et al. (2017) Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., and Courville, A. C. Improved training of Wasserstein GANs. In Advances in Neural Information Processing Systems, pp. 5767–5777, 2017.
 Hall et al. (2005) Hall, P., Marron, J. S., and Neeman, A. Geometric representation of high dimension, low sample size data. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(3):427–444, 2005.
 Heusel et al. (2017) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems, pp. 6626–6637, 2017.

 Isola et al. (2017) Isola, P., Zhu, J.-Y., Zhou, T., and Efros, A. A. Image-to-image translation with conditional adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5967–5976, 2017.
 Karras et al. (2018) Karras, T., Aila, T., Laine, S., and Lehtinen, J. Progressive growing of GANs for improved quality, stability, and variation. In International Conference on Learning Representations (ICLR), 2018.
 Kilcher et al. (2018) Kilcher, Y., Lucchi, A., and Hofmann, T. Semantic interpolation in implicit models. In International Conference on Learning Representations (ICLR), 2018.
 Krizhevsky & Hinton (2009) Krizhevsky, A. and Hinton, G. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
 Kumar & Chellappa (2018) Kumar, A. and Chellappa, R. Disentangling 3D Pose in A Dendritic CNN for Unconstrained 2D Face Alignment. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 430–439, 2018.
 Ledig et al. (2017) Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A. P., Tejani, A., Totz, J., Wang, Z., et al. Photo-realistic single image super-resolution using a generative adversarial network. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, pp. 4, 2017.
 Leśniak et al. (2019) Leśniak, D., Sieradzki, I., and Podolak, I. Distribution-interpolation trade-off in generative models. In International Conference on Learning Representations (ICLR), 2019.
 Liu et al. (2018a) Liu, Y., Wei, F., Shao, J., Sheng, L., Yan, J., and Wang, X. Exploring disentangled feature representation beyond face identification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2080–2089, 2018a.
 Liu et al. (2018b) Liu, Y., Yeh, Y., Fu, T., Wang, S., Chiu, W., and Wang, Y. F. Detach and Adapt: Learning Cross-Domain Disentangled Deep Representation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8867–8876, 2018b.
 Liu et al. (2015) Liu, Z., Luo, P., Wang, X., and Tang, X. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3730–3738, 2015.
 Nowozin et al. (2016) Nowozin, S., Cseke, B., and Tomioka, R. f-GAN: Training generative neural samplers using variational divergence minimization. In Advances in Neural Information Processing Systems, pp. 271–279, 2016.
 Pathak et al. (2016) Pathak, D., Krahenbuhl, P., Donahue, J., Darrell, T., and Efros, A. A. Context encoders: Feature learning by inpainting. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2536–2544, 2016.
 Radford et al. (2016) Radford, A., Metz, L., and Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. In International Conference on Learning Representations (ICLR), 2016.
 Reed et al. (2016) Reed, S., Akata, Z., Yan, X., Logeswaran, L., Schiele, B., and Lee, H. Generative adversarial text to image synthesis. In 33rd International Conference on Machine Learning, pp. 1060–1069, 2016.
 Salimans et al. (2016) Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., and Chen, X. Improved techniques for training GANs. In Advances in Neural Information Processing Systems, pp. 2234–2242, 2016.
 Shrivastava et al. (2017) Shrivastava, A., Pfister, T., Tuzel, O., Susskind, J., Wang, W., and Webb, R. Learning from simulated and unsupervised images through adversarial training. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, pp. 5, 2017.
 Tzeng et al. (2017) Tzeng, E., Hoffman, J., Saenko, K., and Darrell, T. Adversarial discriminative domain adaptation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 1, pp. 4, 2017.
 White (2016) White, T. Sampling generative networks. arXiv preprint arXiv:1609.04468, 2016.
 Yin et al. (2017) Yin, W., Fu, Y., Sigal, L., and Xue, X. Semi-latent GAN: Learning to generate and modify facial images from attributes. arXiv preprint arXiv:1704.02166, 2017.
 Yu et al. (2015) Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., and Xiao, J. LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015.
 Zhang et al. (2017) Zhang, H., Xu, T., Li, H., Zhang, S., Wang, X., Huang, X., and Metaxas, D. N. StackGAN: Text to photo-realistic image synthesis with stacked generative adversarial networks. In IEEE International Conference on Computer Vision (ICCV), pp. 5907–5915, 2017.
 Zhang et al. (2018) Zhang, H., Goodfellow, I., Metaxas, D., and Odena, A. Selfattention generative adversarial networks. arXiv preprint arXiv:1805.08318, 2018.
 Zhu et al. (2016) Zhu, J.-Y., Krähenbühl, P., Shechtman, E., and Efros, A. A. Generative visual manipulation on the natural image manifold. In European Conference on Computer Vision (ECCV), pp. 597–613. Springer, 2016.
 Zhu et al. (2017) Zhu, J.-Y., Park, T., Isola, P., and Efros, A. A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In IEEE International Conference on Computer Vision (ICCV), 2017.