One common approach is to use gradient-based learning to maximize the likelihood of the data under the model distribution $p_f$. We typically place two important constraints on $f$ in order to make this possible: (1) the existence of an explicit inverse $f^{-1}$, and (2) the existence of a simple procedure by which we can evaluate the Jacobian determinant $\det J$. Oftentimes, $f$ is instead constructed as a map from observation space to latent space in order to make it convenient to evaluate $p_f(x)$ for a given observation $x$; this is the operation on which we place the greatest demand for throughput during training. Each choice corresponds to making one of the two operations – generating samples or evaluating $p_f(x)$ – convenient, and the other inconvenient. Regardless of the choice, both operations are required, and so both constraints are made to hold in practice. For the sake of simplicity, we regard $f$ as a map from latent space to observation space in this presentation.
Much of the work in deep probabilistic modeling is concerned with allowing $f$ to be flexible enough to capture intricate latent structure, while simultaneously ensuring that both conditions hold. We can dichotomize current approaches based on how the second constraint – the existence of a simple procedure to evaluate $\det J$ – is satisfied. The first approach makes $f$ autoregressive by appropriately masking the weights of each layer. This induces a lower-triangular structure in the Jacobian, since each component of the model's output is made to depend only on the previous ones. We can then rapidly evaluate the log-determinant term involved in likelihood computation by accumulating the diagonal elements of the Jacobian.
Research into autoregressive modeling dates back several decades, and we note only some recent developments. Germain et al. (2015) describe an autoregressive autoencoder for density estimation. Kingma et al. (2016) synthesize autoregressive modeling with normalizing flows (Rezende and Mohamed, 2015) for variational inference, and Papamakarios et al. (2017) make further improvements. van den Oord et al. (2016c) apply this idea to image generation, with follow-up work (Dinh et al., 2016; van den Oord et al., 2016b) that exploits parallelism using masked convolutions. van den Oord et al. (2016a) do the same for audio generation, and van den Oord et al. (2017) introduce strategies to improve efficiency. We refer the reader to Jang (2018) for an excellent overview of these works in more detail.
The second approach involves choosing the layers of $f$ to be transformations, not necessarily autoregressive, for which explicit expressions for the Jacobian are still available. We can then evaluate the log-determinant term by accumulating the layerwise contributions in accordance with the chain rule, using a procedure analogous to backpropagation. Rezende and Mohamed (2015) introduced this idea to variational inference, and recent work, including Berg et al. (2018) and Tomczak and Welling (2016), describes new types of such transformations.
Both approaches must invariably compromise on model flexibility. An efficient method for differentiating implicit densities that do not fulfill these constraints would enrich the current toolset for probabilistic modeling. Wu et al. (2016) advocate using annealed importance sampling (Neal, 2001) for evaluating implicit densities, but it is not clear how this approach could be used to obtain gradients. Very recent work (Li and Turner, 2017) uses Stein's identity to cast gradient computation for implicit densities as a sparse recovery problem. Our approach, which we call spectral backpropagation, harnesses the capabilities of modern automatic differentiation (Abadi et al., 2016; Paszke et al., 2017) by directly backpropagating through an approximation for the spectral density of the metric $J^\top J$, where $J$ is the Jacobian of $f$.
We make the first steps toward demonstrating the viability of this approach by minimizing $D_{\mathrm{KL}}(p_f \,\|\, \hat p)$ and $D_{\mathrm{KL}}(\hat p \,\|\, p_f)$, where $p_f$ is the implicit density of a non-invertible Wide ResNet (Zagoruyko and Komodakis, 2016), on a set of test problems. Having done so, we then turn our attention to characterizing the behavior of the generator in GANs (Goodfellow et al., 2014), using a series of computational studies made possible by spectral backpropagation. Our purpose in conducting these studies is twofold. Firstly, we show that our approach is suitable for application to high-dimensional problems. Secondly, we identify two novel properties of generators:
1. The existence of adversarial perturbations for classification models (Szegedy et al., 2013) is paralleled by the existence of aberrant, nonsensical outputs to which the generator density assigns very high likelihood.
2. The eigenvectors of the metric $J^\top J$ induced by the generator over latent space correspond to meaningful, quasi-disentangled explanatory factors. Perturbing latent variables along these eigenvectors allows us to quantify the extent to which the generator makes use of latent space.
We hope that these observations will contribute to an improved understanding of how well generators are able to capture the latent structure of the underlying data-generating process.
2.1 Generalizing the Change of Variable Theorem
We begin by revisiting the geometric intuition behind the usual change of variable theorem. First, we consider a rectangle in $\mathbb{R}^2$ with vertices given in clockwise order, starting from the bottom-left vertex. To determine its area, we compute its side lengths $\ell_1$ and $\ell_2$ and write $A = \ell_1 \ell_2$. Now suppose we are given a parallelepiped in $\mathbb{R}^3$ whose sides are described by the vectors $a$, $b$, and $c$. Its volume is given by the triple product $V = |a \cdot (b \times c)|$, where $\times$ and $\cdot$ denote cross product and inner product, respectively. This triple product can be rewritten as

$$V = \left| \det \begin{pmatrix} a & b & c \end{pmatrix} \right|,$$

which we can generalize to compute the volume of a parallelepiped in $\mathbb{R}^n$:

$$V = \left| \det \begin{pmatrix} v_1 & \cdots & v_n \end{pmatrix} \right|.$$

If we regard the vertices as observations in $\mathbb{R}^n$, the change of variable theorem can be understood as the differential analog of this formula. To wit, we suppose that $f$ is a diffeomorphism, and denote by $J$ the Jacobian of its output with respect to its input. Now, $|\det J|$ becomes the infinitesimal volume element determined by $f$. For an observation $x$, the change of variable theorem says that we can compute

$$p_f(x) = p(z) \, |\det J|^{-1},$$

where we set $z = f^{-1}(x)$.
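As a quick numerical sanity check of this formula, consider a linear map $f(z) = Az$, for which the Jacobian is the constant matrix $A$. The density of $x = Az$ under $z \sim \mathcal{N}(0, I)$ is $\mathcal{N}(0, AA^\top)$, and should match $p(z)\,|\det A|^{-1}$. The sketch below (all names are illustrative) confirms this:

```python
import numpy as np

# Numeric check of p_f(x) = p(z) / |det J| for the linear map f(z) = A z,
# where the Jacobian J equals A everywhere.
rng = np.random.default_rng(0)
A = np.array([[2.0, 0.3],
              [0.0, 0.5]])
z = rng.standard_normal(2)
x = A @ z

def log_normal(v, cov):
    # Log-density of a zero-mean multivariate normal with covariance cov.
    d = v.size
    return -0.5 * (d * np.log(2 * np.pi) + np.log(np.linalg.det(cov))
                   + v @ np.linalg.solve(cov, v))

lhs = log_normal(x, A @ A.T)                               # density of x directly
rhs = log_normal(z, np.eye(2)) - np.log(abs(np.linalg.det(A)))  # change of variable
```

The two quantities agree to floating-point precision, which is exactly the statement of the change of variable theorem for this special case.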
A $k$-dimensional parallelepiped in $\mathbb{R}^n$ requires $k$ vectors $v_1, \ldots, v_k$ to specify its sides. When $k < n$, its volume is given by the more general formula (Hanson, 1994)

$$V = \sqrt{\det(V^\top V)}, \qquad V = \begin{pmatrix} v_1 & \cdots & v_k \end{pmatrix}.$$
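The Gram-determinant formula is easy to exercise directly; the minimal numpy sketch below (function name illustrative) computes the volume of a $k$-dimensional parallelepiped in $\mathbb{R}^n$ from its side vectors:

```python
import numpy as np

def parallelepiped_volume(V):
    # Volume of the k-dimensional parallelepiped in R^n spanned by the
    # columns of V (an n x k matrix), via vol = sqrt(det(V^T V)).
    G = V.T @ V  # k x k Gram matrix of the side vectors
    return np.sqrt(np.linalg.det(G))

# A unit square embedded in R^3 has 2-dimensional volume (area) 1.
V = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
area = parallelepiped_volume(V)
```

When $k = n$ the formula reduces to $|\det V|$, since $\sqrt{\det(V^\top V)} = \sqrt{(\det V)^2}$.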
The corresponding analog of the change of variable theorem is known in the context of geometric measure theory as the smooth coarea formula (Krantz and Parks, 2008). When $f$ is a diffeomorphism between manifolds, it says that

$$p_f(x) = p(z) \, \det(J^\top J)^{-1/2}, \qquad (1)$$

where we set $z = f^{-1}(x)$ as before, and define $G = J^\top J$ to be the metric induced by $f$ over the latent manifold. In many cases of interest, such as in GANs, the function $f$ is not necessarily injective. Application of the coarea formula would then require us to evaluate an inner integral over the preimage $f^{-1}(x)$, rather than over a singleton. We ignore this technicality and apply Equation 1 anyway.
The change of variable theorem gives us access to the implicit density $p_f$ in the form of the spectral density of $J^\top J$. Indeed, the Lie identity $\log \det A = \operatorname{tr} \log A$ allows us to express the log-likelihood corresponding to Equation 1 as

$$\log p_f(x) = \log p(z) - \tfrac{1}{2} \operatorname{tr} \log (J^\top J). \qquad (2)$$

We focus on the factor involving $J^\top J$ on the RHS, which can be written as

$$\operatorname{tr} \log (J^\top J) = \sum_{\lambda \in \sigma(J^\top J)} \log \lambda = d \, \mathbb{E}_{\lambda \sim p_\sigma}[\log \lambda],$$

where $\sigma(\cdot)$ denotes the spectrum, $d$ the latent dimension, and

$$p_\sigma(\lambda) = \frac{1}{d} \sum_{\lambda_i \in \sigma(J^\top J)} \delta(\lambda - \lambda_i)$$

the delta distribution over the eigenvalues in the spectrum. We let $\theta$ denote the parameters of $f$, and assume that $p_\sigma$ is independent of $\theta$. Now, differentiating Equation 2 with respect to $\theta$ gives

$$\nabla_\theta \log p_f(x) = \nabla_\theta \log p(z) - \tfrac{d}{2} \, \mathbb{E}_{\lambda \sim p_\sigma}[\nabla_\theta \log \lambda]. \qquad (3)$$
Equation 3 allows us to formulate gradient computation for implicit densities as a variant of stochastic backpropagation (Kingma and Welling, 2013; Rezende et al., 2014), in which the base distribution for the expectation is the spectral density of $J^\top J$ rather than a normal distribution.
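The identity underlying Equation 2, namely $\log \det A = \operatorname{tr} \log A = \sum_i \log \lambda_i$, is straightforward to confirm numerically for a positive definite matrix of the form $J^\top J$; the snippet below is only a sanity check, with illustrative names:

```python
import numpy as np

# Check that log det A equals the sum of log eigenvalues for A = J^T J,
# i.e. d times the mean of log(lambda) under the (discrete) spectral density.
rng = np.random.default_rng(0)
J = rng.standard_normal((4, 3))
A = J.T @ J + 0.1 * np.eye(3)  # small shift keeps A strictly positive definite

eigvals = np.linalg.eigvalsh(A)        # eigenvalues of a symmetric matrix
sign, logdet = np.linalg.slogdet(A)    # numerically stable log-determinant
spectral_sum = np.sum(np.log(eigvals)) # tr(log A) via the spectrum
```

Both `logdet` and `spectral_sum` equal $d \cdot \mathbb{E}_{\lambda \sim p_\sigma}[\log \lambda]$ with $d = 3$ here.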
2.2 An Estimator for Spectral Backpropagation
To obtain an estimator for Equation 3, we turn to the thriving literature on stochastic approximation of spectral sums. These methods estimate quantities of the form $\operatorname{tr}(h(A))$, where $A$ is a large or implicitly-defined matrix, by accessing $A$ using only matrix-vector products. In our case, $A = J^\top J$ and $h = \log$, and the products involving $A$ can be evaluated rapidly using automatic differentiation. We make no attempt to conduct a comprehensive survey, but note that among the most promising recent approaches are those described by Han et al. (2017), Boutsidis et al. (2017), Ubaru et al. (2017), and Fitzsimons et al. (2017).
We focus on the approaches of Boutsidis et al. (2017) and Han et al. (2017), which work on the basis of polynomial interpolation. Given a function $h$, these methods construct an order-$m$ approximating polynomial $p_m$ to $h$, given by

$$p_m(x) = \sum_{j=0}^{m} c_j \, b_j(x),$$

where the $c_j$ are scalar coefficients and the $b_j$ are basis polynomials. The main difference between the two approaches is the choice of approximating polynomial. Boutsidis et al. (2017) use Taylor polynomials, for which $b_j(x) = x^j$ and

$$c_j = \frac{h^{(j)}(0)}{j!},$$

where we use the superscript $(j)$ to denote iterated differentiation. On the other hand, Han et al. (2017) use Chebyshev polynomials $b_j = T_j$. These are defined by the recurrence relation

$$T_0(x) = 1, \quad T_1(x) = x, \quad T_{j+1}(x) = 2x\,T_j(x) - T_{j-1}(x). \qquad (4)$$

The coefficients for the Chebyshev interpolant are computed at the Chebyshev nodes $x_k = \cos\!\big(\pi (k + \tfrac{1}{2}) / (m + 1)\big)$, and are given by

$$c_j = \frac{2 - \delta_{j0}}{m+1} \sum_{k=0}^{m} h(x_k)\, T_j(x_k),$$

where $\delta_{j0}$ denotes the Kronecker delta.
Now suppose that we are given a matrix $B$ such that $\sigma(B) \subset [-1, 1]$. After having made a choice for the construction of $p_m$, we can use the approximation

$$\operatorname{tr}(h(B)) \approx \operatorname{tr}(p_m(B)) = \sum_{j=0}^{m} c_j \operatorname{tr}(b_j(B)). \qquad (5)$$

This reduces the problem of estimating the spectral sum to computing the traces $\operatorname{tr}(b_j(B))$ for all $j = 0, \ldots, m$.
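To make the recipe concrete, the sketch below builds the Chebyshev interpolation coefficients at the Chebyshev nodes and accumulates $\sum_j c_j \operatorname{tr}(T_j(B))$ with the matrix form of the recurrence. All names are illustrative, and we use $h = \exp$ rather than $\log$ purely so that no spectral shift is needed for the demo:

```python
import numpy as np

def cheb_coeffs(h, m):
    # Coefficients of the degree-m Chebyshev interpolant of h on [-1, 1],
    # evaluated at the nodes x_k = cos(pi (k + 1/2) / (m + 1)).
    k = np.arange(m + 1)
    x = np.cos(np.pi * (k + 0.5) / (m + 1))
    T = np.cos(np.outer(np.arange(m + 1), np.arccos(x)))  # T[j, k] = T_j(x_k)
    c = (2.0 / (m + 1)) * T @ h(x)
    c[0] /= 2.0  # the j = 0 coefficient is halved
    return c

# Small symmetric test matrix with spectrum well inside [-1, 1].
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
S = M + M.T
B = 0.4 * S / np.linalg.norm(S, 2)  # scale spectrum into [-0.4, 0.4]

m = 10
c = cheb_coeffs(np.exp, m)
# Accumulate tr(T_j(B)) with the three-term recurrence applied to matrices.
T_prev, T_cur = np.eye(5), B
approx = c[0] * np.trace(T_prev) + c[1] * np.trace(T_cur)
for j in range(2, m + 1):
    T_prev, T_cur = T_cur, 2 * B @ T_cur - T_prev
    approx += c[j] * np.trace(T_cur)

exact = np.sum(np.exp(np.linalg.eigvalsh(B)))  # tr(exp(B)) via eigenvalues
```

For a smooth function like $\exp$, the degree-10 interpolant already matches the exact spectral sum to many digits; in the setting of this paper the explicit matrices above would be replaced by matrix-vector products.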
Two issues remain in applying this approximation. The first is that both constructions restrict the spectrum: the Chebyshev interpolant is defined on $[-1, 1]$, and the Taylor series converges only near its expansion point. In our case, $A = J^\top J$, which can be an arbitrary positive definite matrix. To address this issue, we define $B = \varphi(A)$, where $\varphi$ is an affine map chosen so that $\sigma(B)$ lies in the required interval. Now we set $\tilde h = h \circ \varphi^{-1}$, so that

$$\tilde h(B) = h(A). \qquad (6)$$

We stress that while $B$ is defined using $A$, the coefficients $c_j$ are computed using $\tilde h$. With these definitions in hand, we can write

$$\operatorname{tr}(h(A)) = \operatorname{tr}(\tilde h(B)). \qquad (7)$$
Han et al. (2017) require spectral bounds $a$ and $b$, so that $\sigma(A) \subset [a, b]$, and set

$$\varphi(A) = \frac{2A - (a + b)I}{b - a},$$

which maps $\sigma(A)$ into $[-1, 1]$. After using Equation 7 with a Chebyshev approximation for $\tilde h$ to obtain an estimate of $\operatorname{tr}(\tilde h(B))$, we compute

$$\log \det A = \operatorname{tr}(\log A) = \operatorname{tr}(\tilde h(B)) \approx \sum_{j=0}^{m} c_j \operatorname{tr}(T_j(B)).$$
Boutsidis et al. (2017) instead define $B = I - A / \alpha$, where $\alpha$ is an upper bound on the largest eigenvalue of $A$, and write

$$\log \det A = d \log \alpha + \operatorname{tr}(\log(I - B)).$$

This time, we set $\tilde h(x) = \log(1 - x)$ and use Equation 7 with a Taylor approximation for $\tilde h$ to obtain an estimate of $\operatorname{tr}(\tilde h(B))$. Then, we compute

$$\log \det A \approx d \log \alpha - \sum_{j=1}^{m} \frac{\operatorname{tr}(B^j)}{j}.$$
We can easily obtain an accurate upper bound $\alpha$ on the largest eigenvalue using the power method. The lower bound $a$ is fixed to a small, predetermined constant in our work.
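A minimal sketch of the power method follows; names and defaults are illustrative. The Rayleigh quotient converges to the largest eigenvalue from below, so in practice one would inflate the converged estimate slightly (e.g. by a few percent) to make it a safe upper bound:

```python
import numpy as np

def power_method(matvec, dim, iters=100, seed=0):
    # Estimate the largest eigenvalue of a symmetric matrix accessed only
    # through matrix-vector products, via repeated normalization.
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = matvec(v)
        v = w / np.linalg.norm(w)
    return v @ matvec(v)  # Rayleigh quotient of the converged iterate

# Demo: the dominant eigenvalue of diag(0.5, 2, 7) is 7.
A = np.diag([0.5, 2.0, 7.0])
est = power_method(lambda v: A @ v, 3)
```

In the paper's setting, `matvec` would be implemented with automatic differentiation, so that $A = J^\top J$ is never formed explicitly.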
The second issue is that deterministically evaluating the terms in Equation 5 requires us to compute matrix powers of $B$. Thankfully, we can drastically reduce the computational cost and approximate these terms using only matrix-vector products. This is made possible by the stochastic trace estimator introduced by Hutchinson (1990):

$$\operatorname{tr}(M) \approx \frac{1}{N} \sum_{i=1}^{N} v_i^\top M v_i.$$

When the distribution for the probe vectors $v_i$ has zero mean and identity covariance, the estimate is unbiased. We use the Rademacher distribution, which samples the components of $v_i$ uniformly from $\{-1, +1\}$. We refer the reader to Avron and Toledo (2011) for a detailed study on the variance of this estimator.
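A minimal sketch of the estimator with Rademacher probes (names illustrative). Note that for a diagonal matrix every probe gives $v^\top M v = \sum_j M_{jj} v_j^2 = \operatorname{tr}(M)$ exactly, which makes a crisp test case:

```python
import numpy as np

def hutchinson_trace(matvec, dim, num_probes=200, seed=0):
    # tr(M) ~ (1/N) sum_i v_i^T M v_i with Rademacher probes whose
    # components are drawn uniformly from {-1, +1}.
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(num_probes):
        v = rng.choice([-1.0, 1.0], size=dim)
        total += v @ matvec(v)
    return total / num_probes

# Demo: for diagonal M the Rademacher estimator has zero variance.
A = np.diag([1.0, 2.0, 3.0])
est = hutchinson_trace(lambda v: A @ v, 3, num_probes=10)
```

For non-diagonal matrices the estimate fluctuates around the true trace, with variance governed by the off-diagonal mass of $M$ (Avron and Toledo, 2011).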
We first describe how the trace estimator is applied when a Taylor approximation is used to construct $p_m$. In this case, we have

$$\operatorname{tr}(\tilde h(B)) \approx \frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{m} c_j \, v_i^\top B^j v_i.$$

The inner summands are evaluated using the recursion $w_0 = v_i$ and $w_j = B w_{j-1}$ for $j \geq 1$, so that $v_i^\top B^j v_i = v_i^\top w_j$. It follows that the number of matrix-vector products involved in the approximation increases linearly with respect to the order $m$ of the approximating polynomial. The same idea allows us to accumulate the traces for the Chebyshev approximation, based on Equation 4. The resulting procedure is given in Algorithm 1; it is our computational workhorse for evaluating the log-likelihood in Equation 2 and estimating the gradient in Equation 3.
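Putting the pieces together, here is a hedged sketch of a Boutsidis-style stochastic log-determinant estimator combining the Taylor recursion with Hutchinson probes. This is not the authors' Algorithm 1; all names and defaults are illustrative:

```python
import numpy as np

def logdet_taylor(matvec, dim, alpha, order=30, num_probes=50, seed=0):
    # log det A = dim * log(alpha) + tr(log(I - C)),  with C = I - A / alpha,
    # tr(log(I - C)) = -sum_{j>=1} tr(C^j) / j  (Taylor series of log(1 - x)),
    # each tr(C^j) estimated by Rademacher probes via w_0 = v, w_j = C w_{j-1}.
    rng = np.random.default_rng(seed)
    acc = 0.0
    for _ in range(num_probes):
        v = rng.choice([-1.0, 1.0], size=dim)
        w = v.copy()
        for j in range(1, order + 1):
            w = w - matvec(w) / alpha  # w <- C w, using only products with A
            acc -= (v @ w) / j
    return dim * np.log(alpha) + acc / num_probes

# Demo on a small diagonal matrix, where the probe variance vanishes.
A = np.diag([0.5, 1.0, 2.0])
est = logdet_taylor(lambda v: A @ v, 3, alpha=4.0, order=200, num_probes=10)
exact = np.sum(np.log(np.diag(A)))  # ground-truth log-determinant
```

The only knobs are the polynomial order, the probe count, and the spectral bound $\alpha$; accuracy degrades when $\alpha$ is loose, since the series for $\log(1 - x)$ then converges slowly.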
3 Learning Implicit Densities
Figure 1: Subfigures (b) and (c) show the model samples superimposed over contour plots of the corresponding ground-truth test energies. Each epoch corresponds to a fixed number of iterations. We see in (a) that the relative error for the approximation to the log determinant typically stays below 30%, except toward the end of training for the last two test energies. At this point, samples from these two models begin to drift away from the origin, as shown in (c).
Suppose that we are tasked with matching a given data distribution $\hat p$ with the implicit density $p_f$ of the model $f$. Two approaches for learning $f$ are available, and the choice of which to use depends on the type of access we have to the data distribution $\hat p$. The first approach – minimizing $D_{\mathrm{KL}}(p_f \,\|\, \hat p)$ – is applicable when we know how to evaluate the likelihood of $\hat p$, but are not necessarily able to sample from it. The second approach – minimizing $D_{\mathrm{KL}}(\hat p \,\|\, p_f)$ – is applicable when we are able to sample from $\hat p$, but are not necessarily able to evaluate its likelihood. We show that spectral backpropagation can be used in both cases, even when neither of the two conditions described in Section 1 holds.
All of the examples considered here are densities over $\mathbb{R}^2$. We match them by transforming a prior given by a spherical normal distribution. Our choice for the architecture of $f$ is a Wide ResNet comprised of four residual blocks. Each block is a three-layer bottleneck from $\mathbb{R}^2$ to $\mathbb{R}^2$ whose hidden layer size is 32. All layers are equipped with biases, and use LeakyReLU activations. We compute the gradient updates using a batch size of 64, and apply the updates using Adam (Kingma and Ba, 2014) with a fixed step size and all other parameters kept at their default values.
To compute the gradient update given by Equation 3, we use Algorithm 1 with the same polynomial order for all experiments; the number of probe vectors is chosen separately for each of the two objectives. In order to monitor the accuracy of the approximation for the likelihood, we compute the approximate log-likelihood at each iteration, and evaluate the ground-truth likelihood in accordance with Equation 2. We define the relative error of the approximation with respect to the ground-truth log-likelihood as the magnitude of their difference divided by the magnitude of the ground truth, provided that this quotient is not too large. This definition of relative error avoids numerical problems when the ground-truth log-likelihood is close to zero.
We begin by considering the first approach, in which we seek to minimize $D_{\mathrm{KL}}(p_f \,\|\, \hat p)$. This objective requires that we be able to sample from $p_f$, so we choose $f$ to be a map from latent space to observation space. The results are shown in Figure 1. To prevent the optimization from making $p_f$ collapse as it attempts to model the infinite support of $\hat p$, we found it helpful to incorporate a regularizer penalizing the spectral norm $\|J\|_2$ into the objectives for the third and fourth test energies. To implement this regularizer, we simply backpropagate through the estimate of the largest eigenvalue that is already produced by Algorithm 1. We use the same regularization weight in both cases. Despite the use of this regularizer, we find that continuing to train the models for these last two test energies causes the samples to drift away from the origin (see Figure 1(c)). We have not made any attempt to address this behavior. Finally, we note that since $D_{\mathrm{KL}}(p_f \,\|\, \hat p)$ is bounded from below by the negative log-normalization constant of $\hat p$, it can become negative when $\hat p$ is unnormalized. We see that this happens for all four examples.
In the second approach, we seek to minimize $D_{\mathrm{KL}}(\hat p \,\|\, p_f)$. This objective requires that we be able to evaluate the likelihood of $p_f$, so we choose $f$ to be a map from observation space to latent space. The results are shown in Figure 2. We note that minimizing $D_{\mathrm{KL}}(\hat p \,\|\, p_f)$ is ill-posed when $\hat p$ is unnormalized. In this scenario, the model distribution can match $\hat p$ over its support while also assigning mass outside of it. We see that this expected behavior manifests in both examples.
4 Evaluating GAN Likelihoods
For our explorations involving GANs, we train a series of DCGAN (Radford et al., 2015) models on rescaled versions of the CelebA (Liu et al., 2015) and LSUN Bedroom datasets. We vary model capacity in terms of the base feature map count multiplier for the DCGAN architecture. The generator and discriminator have five layers each, and use translated LeakyReLU activations (Xiang and Li, 2017). To stabilize training, we use weight normalization with fixed scale factors in the discriminator (Salimans et al., 2016). Our prior is a spherical normal distribution over $\mathbb{R}^k$, where $k$ is the size of the embedding space. All models were trained with a batch size of 32, using RMSProp with a fixed step size and decay factor. We present results from two of these models in Figure 3.
We apply spectral backpropagation to explore the effect of perturbing a given latent variable $z$ to maximize likelihood under the generator distribution $p_g$. This is readily accomplished by noting that the same procedure used to evaluate Equation 3 can also be used to obtain gradients with respect to $z$. The results are shown in Figure 4. Intuitively, we might expect the outputs to be transformed in such a way that they gravitate toward modes of the dataset. But this is not what happens. Instead, the outputs are transformed into highly aberrant, out-of-distribution examples that nonetheless attain very high likelihood. As optimization proceeds, the metric $J^\top J$ also becomes increasingly ill-conditioned. This shows that likelihood for generators need not correspond to intuitive notions of visual plausibility.
5 Uncovering Latent Explanatory Factors
The generator in GANs is well-known for organizing latent space such that semantic features can be transferred by means of algebraic operations over latent variables (Radford et al., 2015). This suggests the existence of a systematic organization of latent space, but perhaps one that cannot be globally characterized in terms of a handful of simple explanatory factors. We instead explore whether local changes in latent space can be characterized in this way. Since the metric $J^\top J$ describes local change in the generator's output, it is natural to consider the effect of perturbations along its eigenvectors. To this end, we fix 12 trial embeddings in latent space, and compare the effect of perturbations along random directions to perturbations along these eigenvectors. The random directions are obtained by sampling from a spherical normal distribution. We show the results in Figure 5.
We can see that eigenvectors with dominant eigenvalues, especially the principal eigenvector, often result in the most drastic changes. Furthermore, these changes are not only semantically meaningful, but also tend to make modifications to distinct attributes of the image. To see this more clearly, we consider the top two rows of Figure 5(g). Movement along the first two eigenvectors changes hair length and facial orientation; movement along the third eigenvector decreases the length of the bangs; movement along the fourth and fifth eigenvectors changes background color; and movement along the sixth and seventh eigenvectors changes hair color.
Inspecting the two columns (c), (e), (g) and (d), (f), (h) in Figure 5 suggests that larger values of the latent dimension $k$ may encourage the generator to capture more explanatory factors, possibly at the price of decreased sample quality. We would like to explore the effect of varying $k$ and the detection threshold on the number of such factors. To do this, we fix a sample of latent variables $z_1, \ldots, z_n$. For each $z_i$, we define the relative change

$$r_i(v) = \frac{\| g(z_i + \varepsilon v) - g(z_i) \|}{\mathbb{E}_u \| g(z_i + \varepsilon u) - g(z_i) \|}$$

for every eigenvector $v$ of $J^\top J$, where the expectation in the denominator is taken over random directions $u$. The quantity $r_i(v)$ measures the pixelwise change resulting from a perturbation along an eigenvector, relative to the change we expect from a random perturbation. Finally, we define

$$d(\tau) = \frac{1}{n} \sum_{i=1}^{n} \sum_{v} \mathbb{1}[r_i(v) > \tau],$$

where $\mathbb{1}$ is the indicator function. This quantity measures the average number of eigenvectors for which the relative change is greater than the threshold $\tau$. As such, it can be regarded as an effective measure of dimensionality for latent space. We explore the effect of varying $k$ and $\tau$ on $d(\tau)$ in Figure 6.
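The measure just described can be sketched as follows. Everything here is illustrative rather than the authors' implementation: the function names, the finite-difference Jacobian (suitable only for this toy setting), and the choice of perturbation size are all assumptions:

```python
import numpy as np

def effective_dimension(g, zs, eps=0.1, tau=1.0, num_random=32, seed=0):
    # For each latent point z, compare the output change along each eigenvector
    # of the metric J^T J to the average change along random unit directions,
    # then count how many eigenvectors exceed the threshold tau.
    rng = np.random.default_rng(seed)
    counts = []
    for z in zs:
        d = z.size
        # Finite-difference Jacobian of g at z (toy substitute for autodiff).
        J = np.stack([(g(z + eps * e) - g(z)) / eps for e in np.eye(d)], axis=1)
        _, eigvecs = np.linalg.eigh(J.T @ J)
        # Baseline: expected change along random unit directions.
        rand = rng.standard_normal((num_random, d))
        rand /= np.linalg.norm(rand, axis=1, keepdims=True)
        base = np.mean([np.linalg.norm(g(z + eps * u) - g(z)) for u in rand])
        ratios = [np.linalg.norm(g(z + eps * v) - g(z)) / base
                  for v in eigvecs.T]
        counts.append(np.sum(np.array(ratios) > tau))
    return float(np.mean(counts))

# Toy check: a linear map that stretches one axis strongly should have
# exactly one dominant explanatory direction.
def g(z):
    return np.array([10.0 * z[0], 0.1 * z[1]])

zs = [np.zeros(2), np.ones(2)]
dim_est = effective_dimension(g, zs)
```

For a real generator, the Jacobian products would instead come from automatic differentiation, and the eigenvectors from the same spectral machinery used elsewhere in the paper.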
Current approaches for probabilistic modeling attempt to satisfy two goals that are fundamentally at odds with one another: fulfillment of the two constraints described in Section 1, and model flexibility. In this work, we develop a computational tool that aims to expand the scope of probabilistic modeling to functions that do not satisfy these constraints. We make the first steps toward demonstrating the feasibility of this approach by minimizing divergences in far greater generality than what has been attempted before. Finally, we uncover surprising facts about the organization of latent space for GANs that we hope will contribute to an improved understanding of how effectively they capture underlying latent structure.
- Abadi et al.  Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. Tensorflow: A system for large-scale machine learning. In OSDI, volume 16, pages 265–283, 2016.
- Avron and Toledo  Haim Avron and Sivan Toledo. Randomized algorithms for estimating the trace of an implicit symmetric positive semi-definite matrix. Journal of the ACM (JACM), 58(2):8, 2011.
- Berg et al.  Rianne van den Berg, Leonard Hasenclever, Jakub M Tomczak, and Max Welling. Sylvester normalizing flows for variational inference. arXiv preprint arXiv:1803.05649, 2018.
- Boutsidis et al.  Christos Boutsidis, Petros Drineas, Prabhanjan Kambadur, Eugenia-Maria Kontopoulou, and Anastasios Zouzias. A randomized algorithm for approximating the log determinant of a symmetric positive definite matrix. Linear Algebra and its Applications, 533:95–117, 2017.
- Dinh et al.  Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real nvp. arXiv preprint arXiv:1605.08803, 2016.
- Fitzsimons et al.  Jack Fitzsimons, Diego Granziol, Kurt Cutajar, Michael Osborne, Maurizio Filippone, and Stephen Roberts. Entropic trace estimates for log determinants. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 323–338. Springer, 2017.
- Germain et al.  Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. Made: Masked autoencoder for distribution estimation. In International Conference on Machine Learning, pages 881–889, 2015.
- Goodfellow et al.  Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014.
- Han et al.  Insu Han, Dmitry Malioutov, Haim Avron, and Jinwoo Shin. Approximating spectral sums of large-scale matrices using stochastic chebyshev approximations. SIAM Journal on Scientific Computing, 39(4):A1558–A1585, 2017.
- Hanson  Andrew J. Hanson. Geometry for n-dimensional graphics. In Graphics Gems IV, pages 149–170. Academic Press Professional, Inc., San Diego, CA, USA, 1994. ISBN 0-12-336155-9. URL http://dl.acm.org/citation.cfm?id=180895.180909.
- Hutchinson  Michael F Hutchinson. A stochastic estimator of the trace of the influence matrix for laplacian smoothing splines. Communications in Statistics-Simulation and Computation, 19(2):433–450, 1990.
- Jang  Eric Jang. Normalizing flows tutorial, part 2: Modern normalizing flows, 2018. URL https://blog.evjang.com/2018/01/nf2.html.
- Kingma and Ba  Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
- Kingma and Welling  Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
- Kingma et al.  Diederik P Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improved variational inference with inverse autoregressive flow. In Advances in Neural Information Processing Systems, pages 4743–4751, 2016.
- Krantz and Parks  Steven Krantz and Harold Parks. Analytical tools: The area formula, the coarea formula, and poincaré inequalities. Geometric Integration Theory, pages 1–33, 2008.
- Li and Turner  Yingzhen Li and Richard E Turner. Gradient estimators for implicit models. arXiv preprint arXiv:1705.07107, 2017.
- Liu et al.  Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), 2015.
- Neal  Radford M Neal. Annealed importance sampling. Statistics and computing, 11(2):125–139, 2001.
- Papamakarios et al.  George Papamakarios, Iain Murray, and Theo Pavlakou. Masked autoregressive flow for density estimation. In Advances in Neural Information Processing Systems, pages 2335–2344, 2017.
- Paszke et al.  Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. 2017.
- Radford et al.  Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
- Rezende and Mohamed  Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. arXiv preprint arXiv:1505.05770, 2015.
- Rezende et al.  Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.
- Salimans et al.  Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In Advances in Neural Information Processing Systems, pages 2234–2242, 2016.
- Szegedy et al.  Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
- Tomczak and Welling  Jakub M Tomczak and Max Welling. Improving variational auto-encoders using householder flow. arXiv preprint arXiv:1611.09630, 2016.
- Ubaru et al.  Shashanka Ubaru, Jie Chen, and Yousef Saad. Fast estimation of tr(f(a)) via stochastic lanczos quadrature. SIAM Journal on Matrix Analysis and Applications, 38(4):1075–1099, 2017.
- van den Oord et al. [2016a] Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016a.
- van den Oord et al. [2016b] Aaron van den Oord, Nal Kalchbrenner, Lasse Espeholt, Oriol Vinyals, Alex Graves, et al. Conditional image generation with pixelcnn decoders. In Advances in Neural Information Processing Systems, pages 4790–4798, 2016b.
- van den Oord et al. [2016c] Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016c.
- van den Oord et al.  Aaron van den Oord, Yazhe Li, Igor Babuschkin, Karen Simonyan, Oriol Vinyals, Koray Kavukcuoglu, George van den Driessche, Edward Lockhart, Luis C Cobo, Florian Stimberg, et al. Parallel wavenet: Fast high-fidelity speech synthesis. arXiv preprint arXiv:1711.10433, 2017.
- Wu et al.  Yuhuai Wu, Yuri Burda, Ruslan Salakhutdinov, and Roger Grosse. On the quantitative analysis of decoder-based generative models. arXiv preprint arXiv:1611.04273, 2016.
- Xiang and Li  Sitao Xiang and Hao Li. On the effects of batch and weight normalization in generative adversarial networks. stat, 1050:22, 2017.
- Zagoruyko and Komodakis  Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.