1 Introduction
Modelling is fundamental to the success of technological innovations for artificial intelligence. A powerful model learns a useful representation of the observations for a specified prediction task, and generalises to unknown instances that follow similar generative mechanics. A well-established area of machine learning research focuses on developing
prescribed probabilistic models (Diggle & Gratton, 1984), where learning is based on evaluating the probability of observations under the model. Implicit probabilistic models, on the other hand, are defined by a stochastic procedure that allows for direct generation of samples, but not for the evaluation of model probabilities. These are omnipresent in scientific and engineering research involving data analysis, for instance ecology, climate science and geography, where simulators are used to fit real-world observations to produce forecasting results. Within the machine learning community there has been recent interest in a specific type of implicit model, generative adversarial networks (GANs) (Goodfellow et al., 2014), which have been shown to be one of the most successful approaches to image and text generation
(Radford et al., 2016; Yu et al., 2017; Arjovsky et al., 2017; Berthelot et al., 2017). Very recently, implicit distributions have also been considered as approximate posterior distributions for Bayesian inference, e.g. see
Liu & Feng (2016); Wang & Liu (2016); Li & Liu (2016); Karaletsos (2016); Mescheder et al. (2017); Huszár (2017); Li et al. (2017); Tran et al. (2017). These examples demonstrate the superior flexibility of implicit models, which provide highly expressive means of modelling complex data structures.

Whilst prescribed probabilistic models can be learned by standard (approximate) maximum likelihood or Bayesian inference, implicit probabilistic models require substantially more severe approximations due to the intractability of the model distribution. Many existing approaches first approximate the model distribution or optimisation objective function and then use those approximations to learn the associated parameters. However, for any finite number of data points there exists an infinite number of functions, with arbitrarily diverse gradients, that can approximate perfectly the objective function at the training datapoints, and optimising such approximations can lead to unstable training and poor results. Recent research on GANs, where the issue is highly prevalent, suggests that restricting the representational power of the discriminator is effective in stabilising training (e.g. see Arjovsky et al., 2017; Kodali et al., 2017). However, such restrictions often introduce undesirable biases, responsible for problems such as mode collapse in the context of GANs, and the underestimation of uncertainty in variational inference methods (Turner & Sahani, 2011).
In this paper we explore approximating the derivative of the log density, known as the score function, as an alternative method for training implicit models. An accurate approximation of the score function then allows the application of many well-studied algorithms, such as maximum likelihood, maximum entropy estimation, variational inference and gradient-based MCMC, to implicit models. Concretely, our contributions include:

the Stein gradient estimator, a novel generalisation of the score matching gradient estimator (Hyvärinen, 2005), that includes both parametric and non-parametric forms;

a comparison of the proposed estimator with the score matching and the KDE plug-in estimators on performing gradient-free MCMC, meta-learning of approximate posterior samplers for Bayesian neural networks, and entropy-based regularisation of GANs.
2 Learning implicit probabilistic models
Given a dataset $\mathcal{D}$ containing i.i.d. samples we would like to learn a probabilistic model $p(x)$ for the underlying data distribution. In the case of implicit models, $p(x)$ is defined by a generative process. For example, to generate images, one might define a generative model that consists of sampling randomly a latent variable $z \sim p_0(z)$ and then defining $x = f_\theta(z)$. Here $f_\theta$ is a function parametrised by $\theta$, usually a deep neural network or a simulator. We assume $f_\theta$ to be differentiable w.r.t. $\theta$. An extension to this scenario is presented by conditional implicit models, where the addition of a supervision signal $y$, such as an image label, allows us to define a conditional distribution $p(x|y)$ implicitly by the transformation $x = f_\theta(z, y)$. A related methodology, wild variational inference (Liu & Feng, 2016; Li & Liu, 2016), assumes a tractable joint density $p(x, z)$, but uses implicit proposal distributions $q(z|x)$ to approximate an intractable exact posterior $p(z|x)$. Here the approximate posterior
can likewise be represented by a deep neural network, but also by a truncated Markov chain, such as that given by Langevin dynamics with learnable step-size.
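To make the setup concrete, the following sketch draws samples from a toy implicit model in NumPy. The two-layer tanh network playing the role of $f_\theta$ is a hypothetical stand-in for whatever deep generator or simulator is used in practice; the point is that sampling is trivial while the density the process induces is unavailable.

```python
import numpy as np

def implicit_sampler(theta, n_samples, rng):
    """Draw x = f_theta(z), z ~ N(0, I): sampling is direct, densities are not.

    Here f_theta is a small two-layer tanh network, a hypothetical stand-in
    for the deep generator or simulator used in practice.
    """
    W1, b1, W2, b2 = theta
    z = rng.standard_normal((n_samples, W1.shape[0]))  # latent draws
    h = np.tanh(z @ W1 + b1)                           # hidden layer
    return h @ W2 + b2                                 # samples x = f_theta(z)

rng = np.random.default_rng(0)
theta = (rng.standard_normal((4, 16)), np.zeros(16),
         rng.standard_normal((16, 2)), np.zeros(2))
X = implicit_sampler(theta, 100, rng)  # 100 samples; p(x) itself is intractable
```

Evaluating (or differentiating) the log density of such samples is exactly the intractability the rest of the paper works around.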
Whilst providing extreme flexibility and expressive power, the intractability of density evaluation also brings serious optimisation issues for implicit models. This is because many learning algorithms, e.g. maximum likelihood estimation (MLE), rely on minimising a distance/divergence/discrepancy measure between the model and the data distribution, which often requires evaluating the model density (c.f. Ranganath et al., 2016; Liu & Feng, 2016). Thus good approximations of the optimisation procedure are the key to learning implicit models that can describe complex data structure. In the context of GANs, the Jensen-Shannon divergence is approximated by a variational lower-bound represented by a discriminator (Barber & Agakov, 2003; Goodfellow et al., 2014). Related work for wild variational inference (Li & Liu, 2016; Mescheder et al., 2017; Huszár, 2017; Tran et al., 2017) uses a GAN-based technique to construct a density ratio estimator for $q(z|x)/p_0(z)$ (Sugiyama et al., 2009, 2012; Uehara et al., 2016; Mohamed & Lakshminarayanan, 2016) and then approximates the KL-divergence term in the variational lower-bound:
$$\mathrm{KL}[q(z|x) \,\|\, p_0(z)] = \mathbb{E}_{q(z|x)}\left[ \log \frac{q(z|x)}{p_0(z)} \right] \approx \mathbb{E}_{q(z|x)}\left[ \log \hat{r}(z) \right], \qquad \hat{r}(z) \approx \frac{q(z|x)}{p_0(z)}. \quad (1)$$
In addition, Li & Liu (2016) and Mescheder et al. (2017) exploit the additive structure of the KL-divergence and suggest discriminating between $q$ and an auxiliary distribution that is close to $q$, making the density ratio estimation more accurate. Nevertheless all these algorithms involve a minimax optimisation, and the current practice of gradient-based optimisation is notoriously unstable.
The stabilisation of GAN training is itself a recent trend of related research (e.g. see Salimans et al., 2016; Arjovsky et al., 2017). However, as gradient-based optimisation only interacts with gradients, there is no need to use a discriminator if an accurate approximation to the intractable gradients could be obtained. As an example, consider a variational inference task with the approximate posterior defined by sampling $\epsilon \sim q_0(\epsilon)$ and setting $z = f_\phi(\epsilon)$. Notice that the variational lower-bound can be rewritten as
$$\mathcal{L}(q) = \mathbb{E}_{q_\phi(z|x)}\left[ \log p(x, z) \right] + \mathbb{H}[q_\phi(z|x)], \quad (2)$$
so the gradient w.r.t. the variational parameters $\phi$ can be computed as a sum of the path gradient of the first term (i.e. $\mathbb{E}_{\epsilon}[\nabla_\phi f_\phi(\epsilon)^{\mathrm{T}} \nabla_z \log p(x, z)|_{z = f_\phi(\epsilon)}]$) and the gradient of the entropy term $\nabla_\phi \mathbb{H}[q_\phi(z|x)]$. Expanding the latter, we have
$$\nabla_\phi \mathbb{H}[q_\phi(z|x)] = -\nabla_\phi \mathbb{E}_{q_0(\epsilon)}\left[ \log q_\phi(f_\phi(\epsilon)|x) \right] = -\mathbb{E}_{q_0(\epsilon)}\left[ \nabla_\phi \log q_\phi(z|x)\big|_{z=f_\phi(\epsilon)} + \nabla_\phi f_\phi(\epsilon)^{\mathrm{T}} \nabla_z \log q_\phi(z|x)\big|_{z=f_\phi(\epsilon)} \right], \quad (3)$$
in which the first term in the last line is zero (Roeder et al., 2017). As we typically assume the tractability of $\nabla_\phi f_\phi(\epsilon)$, an accurate approximation to $\nabla_z \log q_\phi(z|x)$ would remove the requirement of discriminators, speed up the learning, and potentially obtain a better model. Many gradient approximation techniques exist (Stone, 1985; Fan & Gijbels, 1996; Zhou & Wolfe, 2000; De Brabanter et al., 2013)
, and in particular, in the next section we will review kernel-based methods such as kernel density estimation (Singh, 1977) and score matching (Hyvärinen, 2005) in more detail, and motivate the main contribution of the paper.
3 Gradient approximation with the Stein gradient estimator
We propose the Stein gradient estimator
as a novel generalisation of the score matching gradient estimator. Before presenting it we first set up the notation. Column vectors and matrices are boldfaced. The random variable under consideration is $x \in \mathcal{X}$, with $\mathcal{X} = \mathbb{R}^d$ if not specifically mentioned. To avoid misleading notation we use the distribution $q(x)$ to derive the gradient approximations for general cases. As Monte Carlo methods are heavily used for implicit models, in the rest of the paper we mainly consider approximating the gradients $\nabla_{x^k} \log q(x^k)$ for samples $x^k \sim q(x)$, $k = 1, \dots, K$. We use $x^k_j$ to denote the $j$-th element of the $k$-th sample $x^k$. We also denote the matrix form of the collected gradients as $G := \left( \nabla_{x^1} \log q(x^1), \dots, \nabla_{x^K} \log q(x^K) \right)^{\mathrm{T}} \in \mathbb{R}^{K \times d}$, and its approximation $\hat{G}$ with rows $\hat{g}(x^k)^{\mathrm{T}}$ for some estimator $\hat{g}(\cdot) \approx \nabla_x \log q(\cdot)$.
3.1 Stein gradient estimator: inverting Stein’s identity
We start by introducing Stein’s identity, which was first developed for Gaussian random variables (Stein, 1972, 1981) and then extended to general cases (Gorham & Mackey, 2015; Liu et al., 2016). Let $h: \mathbb{R}^d \to \mathbb{R}^{d'}$ be a differentiable multivariate test function which maps $x$ to a column vector $h(x) = (h_1(x), \dots, h_{d'}(x))^{\mathrm{T}}$. We further assume the boundary condition for $h$:
$$q(x)\, h(x) \big|_{\partial \mathcal{X}} = \mathbf{0}, \quad \text{or} \quad \lim_{x \to \infty} q(x)\, h(x) = \mathbf{0} \;\; \text{if} \;\; \mathcal{X} = \mathbb{R}^d. \quad (4)$$
This condition holds for almost any test function if $q$ has sufficiently fast-decaying tails (e.g. Gaussian tails). Now we introduce Stein’s identity (Stein, 1981; Gorham & Mackey, 2015; Liu et al., 2016):
$$\mathbb{E}_{q}\left[ h(x) \nabla_x \log q(x)^{\mathrm{T}} + \nabla_x h(x) \right] = \mathbf{0}, \quad (5)$$
in which the gradient matrix term is $\nabla_x h(x) = \left( \nabla_x h_1(x), \dots, \nabla_x h_{d'}(x) \right)^{\mathrm{T}} \in \mathbb{R}^{d' \times d}$. This identity can be proved using integration by parts: for the $i$-th row of the matrix $h(x) \nabla_x \log q(x)^{\mathrm{T}}$, we have
$$\mathbb{E}_{q}\left[ h_i(x) \nabla_x \log q(x)^{\mathrm{T}} \right] = \int h_i(x) \nabla_x q(x)^{\mathrm{T}} \mathrm{d}x = q(x)\, h_i(x) \big|_{\partial\mathcal{X}} - \int q(x) \nabla_x h_i(x)^{\mathrm{T}} \mathrm{d}x = -\mathbb{E}_q\left[ \nabla_x h_i(x)^{\mathrm{T}} \right]. \quad (6)$$
Observing that the gradient term $\nabla_x \log q(x)$ of interest appears in Stein’s identity (5), we propose the Stein gradient estimator by inverting Stein’s identity. As the expectation in (5) is intractable, we further approximate it with Monte Carlo (MC):

$$-\frac{1}{K}\sum_{k=1}^{K} \nabla_{x^k} h(x^k) + \text{err} = \frac{1}{K}\sum_{k=1}^{K} h(x^k) \nabla_{x^k} \log q(x^k)^{\mathrm{T}}, \quad (7)$$

with $\text{err} \in \mathbb{R}^{d' \times d}$ the random error due to MC approximation, which has mean $\mathbf{0}$ and vanishes as $K \to +\infty$. Now by temporarily denoting $H := \left( h(x^1), \dots, h(x^K) \right) \in \mathbb{R}^{d' \times K}$ and $\overline{\nabla} := \frac{1}{K}\sum_{k=1}^K \nabla_{x^k} h(x^k)$, equation (7) can be rewritten as $-\overline{\nabla} + \text{err} = \frac{1}{K} H G$.
Thus we consider a ridge regression method (i.e. adding an $\ell_2$ regulariser) to estimate $G$:

$$\hat{G}_V^{\text{Stein}} := \arg\min_{\hat{G} \in \mathbb{R}^{K \times d}} \left\| \overline{\nabla} + \frac{1}{K} H \hat{G} \right\|_F^2 + \frac{\eta}{K^2} \| \hat{G} \|_F^2, \quad (8)$$

with $\| \cdot \|_F$ the Frobenius norm of a matrix and $\eta \geq 0$. Simple calculation shows that

$$\hat{G}_V^{\text{Stein}} = -(\mathbf{K} + \eta \mathbf{I})^{-1} \langle \nabla, \mathbf{K} \rangle, \quad (9)$$

where $\mathbf{K} := H^{\mathrm{T}} H$ with entries $\mathbf{K}_{ij} = \mathcal{K}(x^i, x^j) := h(x^i)^{\mathrm{T}} h(x^j)$, and $\langle \nabla, \mathbf{K} \rangle_{ij} := \sum_{k=1}^K \nabla_{x^k_j} \mathcal{K}(x^i, x^k)$. One can show that the RBF kernel satisfies Stein’s identity (Liu et al., 2016). In this case $h(x) = \mathcal{K}(x, \cdot)$, $d' = +\infty$, and by the reproducing kernel property (Berlinet & Thomas-Agnan, 2011), $h(x)^{\mathrm{T}} h(x') = \mathcal{K}(x, x')$.
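A minimal NumPy sketch of the estimator (9) for the RBF kernel follows. The median-heuristic bandwidth and the value of $\eta$ are illustrative defaults rather than prescriptions from the paper:

```python
import numpy as np

def stein_gradient_estimator(X, eta=0.1, sigma=None):
    """Non-parametric Stein estimate of grad_x log q at the samples X (K x d).

    Implements G_hat = -(K + eta I)^{-1} <grad, K> for the RBF kernel
    K(x, x') = exp(-||x - x'||^2 / (2 sigma^2)).
    """
    K, d = X.shape
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    if sigma is None:
        # median heuristic for the bandwidth (a common default, an assumption here)
        sigma = np.sqrt(0.5 * np.median(sq[sq > 0]))
    Kmat = np.exp(-sq / (2 * sigma ** 2))
    # <grad, K>_{ij} = sum_k d/dx_j^k K(x^i, x^k) = sum_k K_ik (x^i_j - x^k_j) / sigma^2
    grad_K = (Kmat.sum(axis=1, keepdims=True) * X - Kmat @ X) / sigma ** 2
    return -np.linalg.solve(Kmat + eta * np.eye(K), grad_K)

# Sanity check on a standard Gaussian, whose true score is -x:
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
G = stein_gradient_estimator(X)
```

On such Gaussian samples the estimate closely tracks the exact gradient $-x$, degrading mainly at the low-density tails where few samples are available.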
3.2 Stein gradient estimator minimises the kernelised Stein discrepancy
In this section we derive the Stein gradient estimator again, but from a divergence/discrepancy minimisation perspective. Stein’s method also provides a tool for checking whether two distributions $q_1$ and $q_2$ are identical. If the test function set $\mathcal{H}$ is sufficiently rich, then one can define a Stein discrepancy measure by
$$\mathcal{S}(q_1, q_2) := \sup_{h \in \mathcal{H}} \; \mathbb{E}_{x \sim q_2}\left[ \nabla_x \log q_1(x)^{\mathrm{T}} h(x) + \langle \nabla_x, h(x) \rangle \right], \quad (10)$$
see Gorham & Mackey (2015) for an example derivation. When $\mathcal{H}$ is defined as a unit ball in an RKHS induced by a kernel $\mathcal{K}(x, \cdot)$, Liu et al. (2016) and Chwialkowski et al. (2016) showed that the supremum in (10) can be analytically obtained as (with the shorthand $g_i(x)$ for $\nabla_x \log q_i(x)$):
$$\mathcal{S}^2(q_1, q_2) = \mathbb{E}_{x, x' \sim q_2}\left[ \left( g_1(x) - g_2(x) \right)^{\mathrm{T}} \mathcal{K}(x, x') \left( g_1(x') - g_2(x') \right) \right], \quad (11)$$
which is also named the kernelised Stein discrepancy (KSD). Chwialkowski et al. (2016) showed that for universal kernels satisfying the boundary condition, KSD is indeed a discrepancy measure: $\mathcal{S}^2(q_1, q_2) = 0 \Leftrightarrow q_1 = q_2$. Gorham & Mackey (2017) further characterised the power of KSD for detecting non-convergence cases. Furthermore, if the kernel is twice differentiable, then using the same technique as used to derive (16) one can compute KSD by
$$\mathcal{S}^2(q_1, q_2) = \mathbb{E}_{x, x' \sim q_2}\left[ g_1(x)^{\mathrm{T}} \mathcal{K}(x, x') g_1(x') + g_1(x)^{\mathrm{T}} \nabla_{x'} \mathcal{K}(x, x') + \nabla_x \mathcal{K}(x, x')^{\mathrm{T}} g_1(x') + \mathrm{Tr}\left( \nabla_x \nabla_{x'} \mathcal{K}(x, x') \right) \right]. \quad (12)$$
In practice KSD is estimated with samples $\{x^k\}_{k=1}^K \sim q_2$, and simple derivations show that the V-statistic of KSD can be reformulated as $\mathcal{S}_V^2(q, \hat{q}) = \frac{1}{K^2} \mathrm{Tr}\left( \hat{G} \hat{G}^{\mathrm{T}} \mathbf{K} + 2 \hat{G} \langle \nabla, \mathbf{K} \rangle^{\mathrm{T}} \right) + C$, with $C$ a constant independent of $\hat{G}$. Thus the error term in (8) is equivalent to the V-statistic of KSD if $h(x) = \mathcal{K}(x, \cdot)$, and we have the following:
Theorem 1.
$\hat{G}_V^{\text{Stein}}$ defined in (9) is the solution of the following KSD V-statistic minimisation problem:
$$\hat{G}_V^{\text{Stein}} = \arg\min_{\hat{G} \in \mathbb{R}^{K \times d}} \; \mathcal{S}_V^2(q, \hat{q}) + \frac{\eta}{K^2} \| \hat{G} \|_F^2. \quad (13)$$
One can also minimise the U-statistic of KSD to obtain gradient approximations; a full derivation, including the optimal solution, can be found in the appendix. In experiments we use the V-statistic solutions and leave comparisons between these methods to future work.
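For reference, the expansion (12) can be turned into a direct V-statistic estimate given samples and a candidate score function. The sketch below assumes an RBF kernel with a fixed, illustrative bandwidth:

```python
import numpy as np

def ksd_v_statistic(X, g_hat, sigma=1.0):
    """V-statistic estimate of the kernelised Stein discrepancy.

    X: (K, d) samples; g_hat: (K, d) candidate scores at X.
    The four terms follow the expansion in (12) for the RBF kernel.
    """
    K, d = X.shape
    diff = X[:, None, :] - X[None, :, :]           # diff[i, j] = x^i - x^j
    sq = np.sum(diff ** 2, axis=-1)
    Kmat = np.exp(-sq / (2 * sigma ** 2))
    t1 = Kmat * (g_hat @ g_hat.T)                   # g(x)^T K(x,x') g(x')
    gdiff = np.einsum('id,ijd->ij', g_hat, diff)    # g(x^i)^T (x^i - x^j)
    t2 = Kmat * gdiff / sigma ** 2                  # g(x)^T grad_{x'} K
    gdiff2 = np.einsum('jd,ijd->ij', g_hat, diff)   # g(x^j)^T (x^i - x^j)
    t3 = -Kmat * gdiff2 / sigma ** 2                # grad_x K^T g(x')
    t4 = Kmat * (d / sigma ** 2 - sq / sigma ** 4)  # Tr(grad_x grad_{x'} K)
    return np.sum(t1 + t2 + t3 + t4) / K ** 2

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 2))
ksd_true = ksd_v_statistic(X, -X)   # exact score of N(0, I)
ksd_wrong = ksd_v_statistic(X, X)   # sign-flipped score
```

Plugging in the exact score of the sampling distribution drives the statistic towards zero, while a wrong score inflates it, which is the property the minimisation in Theorem 1 exploits.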
3.3 Comparisons to existing kernel-based gradient estimators
There exist other gradient estimators that do not require explicit evaluations of $\nabla_x \log q(x)$, e.g. the denoising auto-encoder (DAE) (Vincent et al., 2008; Vincent, 2011; Alain & Bengio, 2014) which, with infinitesimal noise, also provides an estimate of $\nabla_x \log q(x)$ at convergence. However, applying such gradient estimators results in a double-loop optimisation procedure since the gradient approximation is repeatedly required for fitting implicit distributions, which can be significantly slower than the proposed approach. Therefore we focus on “quick and dirty” approximations and only include comparisons to kernel-based gradient estimators in the following.
3.3.1 KDE gradient estimator: plug-in estimator with density estimation
A naive approach for gradient approximation would first estimate the intractable density $q(x)$ (up to a constant), then approximate the exact gradient by $\nabla_x \log \hat{q}(x)$. Specifically, Singh (1977) considered kernel density estimation (KDE), $\hat{q}(x) \propto \frac{1}{K} \sum_{k=1}^K \mathcal{K}(x, x^k)$, then differentiated through the KDE estimate to obtain the gradient estimator:
$$\hat{G}^{\text{KDE}}_{ij} = \sum_{k} \nabla_{x^i_j} \mathcal{K}(x^i, x^k) \Big/ \sum_{k} \mathcal{K}(x^i, x^k). \quad (14)$$
Interestingly for translation-invariant kernels $\mathcal{K}(x, x') = \mathcal{K}(x - x')$ the KDE gradient estimator (14) can be rewritten as $\hat{G}^{\text{KDE}} = -\mathrm{diag}(\mathbf{K}\mathbf{1})^{-1} \langle \nabla, \mathbf{K} \rangle$. Inspecting and comparing it with the Stein gradient estimator (9), one might notice that the Stein method uses the full kernel matrix as the pre-conditioner, while the KDE method computes an averaged “kernel similarity” for the denominator. We conjecture that this difference is key to the superior performance of the Stein gradient estimator when compared to the KDE gradient estimator (see later experiments). The KDE method only collects the similarity information between $x^k$ and the other samples to form an estimate of $\nabla_{x^k} \log q(x^k)$, whereas for the Stein gradient estimator, the kernel similarities between all pairs of samples are also incorporated. Thus it is reasonable to conjecture that the Stein method can be more sample-efficient, which also implies higher accuracy when the same number of samples are collected.
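The KDE estimator (14) admits an equally short sketch, which makes the difference in pre-conditioning visible: only a row-sum normalisation appears, rather than the kernel matrix inverse of the Stein estimator. The RBF kernel and fixed bandwidth below are illustrative choices:

```python
import numpy as np

def kde_gradient_estimator(X, sigma=1.0):
    """Plug-in KDE estimate of grad_x log q at the samples X (K x d).

    grad log q_hat(x^i) = sum_k grad_{x^i} K(x^i, x^k) / sum_k K(x^i, x^k),
    with an RBF kernel: the pre-conditioner is only the kernel row sum
    (contrast with the matrix inverse in the Stein case).
    """
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    Kmat = np.exp(-sq / (2 * sigma ** 2))
    row = Kmat.sum(axis=1, keepdims=True)
    # numerator: sum_k K_ik (x^k - x^i) / sigma^2
    return (Kmat @ X - row * X) / (sigma ** 2 * row)

rng = np.random.default_rng(2)
X = rng.standard_normal((300, 2))
G = kde_gradient_estimator(X, sigma=1.0)
```

On standard Gaussian samples the estimate points in the right direction ($-x$) but is noticeably shrunk by the kernel smoothing, consistent with the discussion above.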
3.3.2 Score matching gradient estimator: minimising MSE
The KDE gradient estimator performs indirect approximation of the gradient via density estimation, which can be inaccurate. An alternative approach directly approximates the gradient by minimising the expected $\ell_2$ error w.r.t. the approximation $\hat{g}(x) \approx \nabla_x \log q(x)$:
$$\mathcal{F}(\hat{g}) := \mathbb{E}_{q}\left[ \| \hat{g}(x) - \nabla_x \log q(x) \|_2^2 \right]. \quad (15)$$
It has been shown in Hyvärinen (2005) that this objective can be reformulated as
$$\mathcal{F}(\hat{g}) = \mathbb{E}_{q}\left[ \| \hat{g}(x) \|_2^2 + 2 \langle \nabla, \hat{g}(x) \rangle \right] + C, \qquad \langle \nabla, \hat{g}(x) \rangle := \sum_{j=1}^d \nabla_{x_j} \hat{g}_j(x). \quad (16)$$
The key insight here is again the usage of integration by parts: after expanding the $\ell_2$ loss objective, the cross term $\mathbb{E}_q[\hat{g}(x)^{\mathrm{T}} \nabla_x \log q(x)]$ can be rewritten as $-\mathbb{E}_q[\langle \nabla, \hat{g}(x) \rangle]$, if assuming the boundary condition (4) for $\hat{g}$ (see (6)). The optimum of (16) is referred to as the score matching gradient estimator. The objective (15) is also called the Fisher divergence (Johnson, 2004), which is a special case of KSD (11) obtained by selecting the Dirac kernel $\mathcal{K}(x, x') = \delta(x - x')$. Thus the Stein gradient estimator can be viewed as a generalisation of the score matching estimator.
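The integration-by-parts step can be checked numerically. Below, the identity "(15) equals (16) plus a constant" is verified on a 1-D standard Gaussian with the toy candidate $\hat{g}(x) = a x$ (an arbitrary, illustrative choice), using a simple Riemann sum:

```python
import numpy as np

# Numerically verify that the Fisher divergence (15) equals the score
# matching objective (16) plus a constant, on q = N(0, 1) with the
# illustrative candidate gradient g_hat(x) = a * x.
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
q = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)
score = -x                                # exact d/dx log q(x)
a = 0.7
g_hat = a * x
g_hat_prime = a * np.ones_like(x)         # d/dx g_hat(x)

lhs = np.sum(q * (g_hat - score) ** 2) * dx            # objective (15)
rhs = np.sum(q * (g_hat ** 2 + 2 * g_hat_prime)) * dx  # objective (16) minus C
C = np.sum(q * score ** 2) * dx                        # C = E_q[(d/dx log q)^2]
# Analytically lhs = (a + 1)^2 here, and lhs = rhs + C up to quadrature error.
```

Both sides agree to quadrature precision, confirming that minimising (16) is equivalent to minimising the (intractable) Fisher divergence (15).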
The comparison between the two estimators is more complicated. Certainly by the Cauchy-Schwarz inequality the Fisher divergence is stronger than KSD in terms of detecting convergence (Liu et al., 2016). However it is difficult to perform direct gradient estimation by minimising the Fisher divergence, since (i) the Dirac kernel is non-differentiable so that it is impossible to rewrite the divergence in a similar form to (12), and (ii) the transformation to (16) involves computing $\nabla_x \hat{g}(x)$. So one needs to propose a parametric approximation to $\hat{g}$ and then optimise the associated parameters accordingly, and indeed Sasaki et al. (2014) and Strathmann et al. (2015) derived a parametric solution by first approximating the log density up to a constant as $\log \hat{q}(x) := \sum_{k=1}^K a_k \mathcal{K}(x, x^k) + C$, then minimising (16) to obtain the coefficients $\hat{a}_k$ and constructing the gradient estimator as
$$\hat{g}^{\text{score}}(x) := \sum_{k=1}^K \hat{a}_k \nabla_x \mathcal{K}(x, x^k). \quad (17)$$
Therefore the usage of parametric estimation can potentially remove the advantage of using a stronger divergence. Conversely, the proposed Stein gradient estimator (9) is non-parametric in that it directly optimises over the function values evaluated at the locations $\{x^k\}_{k=1}^K$. This brings two key advantages over the score matching gradient estimator: (i) it removes the approximation error due to the use of a restricted family of parametric approximations and thus can be potentially more accurate; (ii) it has a much simpler and ubiquitous form that applies to any kernel satisfying the boundary condition, whereas the score matching estimator requires tedious derivations for different kernels repeatedly (see appendix).
In terms of computation speed, since in most cases the computation of the score matching gradient estimator also involves kernel matrix inversions, both estimators are of the same order of complexity, which is $\mathcal{O}(K^2 d + K^3)$ (kernel matrix computation plus inversion). Low-rank approximations such as the Nyström method (Smola & Schölkopf, 2000; Williams & Seeger, 2001) can enable speed-up, but this is not investigated in the paper. Again we note here that kernel-based gradient estimators can still be faster than e.g. the DAE estimator since no double-loop optimisation is required. Certainly it is possible to apply early-stopping for the inner-loop DAE fitting. However the resulting gradient approximation might be very poor, which leads to unstable training and poorly fitted implicit distributions.
3.4 Adding predictive power
Though providing potentially more accurate approximations, the non-parametric estimator (9) has no predictive power as described so far. Crucially, many tasks in machine learning require predicting gradient functions at samples drawn from distributions other than $q$; for example, in MLE $q$ corresponds to the model distribution, which is learned using samples from the data distribution instead. To address this issue, we derive two predictive estimators, one generalised from the non-parametric estimator and the other fitted by minimising KSD using parametric approximations.
Predictions using the non-parametric estimator.
Let us consider an unseen datum $y$. If $y$ is sampled from the distribution $q$, then one can also apply the non-parametric estimator (9) for gradient approximation, given the observed data $X = \{x^1, \dots, x^K\} \sim q$. Concretely, writing $\hat{g}(y) \approx \nabla_y \log q(y)$, the non-parametric Stein gradient estimator computed on $X \cup \{y\}$ is obtained by applying (9) with the kernel matrix and $\langle \nabla, \mathbf{K} \rangle$ evaluated on the augmented set,
with $\nabla_y \mathcal{K}(\cdot, y)$ denoting the matrix with rows $\nabla_y \mathcal{K}(x^k, y)$, which only differentiates through the second argument. Then we demonstrate in the appendix that, by simple matrix calculations and assuming a translation-invariant kernel, we have (with column vector $\mathbf{1} \in \mathbb{R}^K$):
(18)  
In practice one would store the computed gradient estimate $\hat{G}$, the kernel matrix inverse $(\mathbf{K} + \eta \mathbf{I})^{-1}$ and the samples $\{x^k\}_{k=1}^K$ as the “parameters” of the predictive estimator. For a new observation $y$ in general, one can “pretend” $y$ is a sample from $q$ and apply the above estimator as well. The approximation quality depends on the similarity between $y$ and the samples $\{x^k\}$, and we conjecture here that this similarity measure, if it can be quantified, is closely related to the KSD.
Fitting a parametric estimator using KSD.
The non-parametric predictive estimator could be computationally demanding. Setting aside the cost of fitting the “parameters”, in prediction the time complexity for the non-parametric estimator is $\mathcal{O}(K^2 + Kd)$. Also storing the “parameters” needs $\mathcal{O}(K^2 + Kd)$ memory for $\hat{G}$, $(\mathbf{K} + \eta \mathbf{I})^{-1}$ and $\{x^k\}_{k=1}^K$. These costs make the non-parametric estimator undesirable for high-dimensional data, since in order to obtain accurate predictions it often requires the number of samples $K$ to scale with the dimensionality $d$ as well. To address this, one can also minimise the KSD using parametric approximations, in a similar way as used to derive the score matching estimator in Section 3.3.2. More precisely, we define a parametric approximation in a similar fashion as (17), and in the appendix we show that if the RBF kernel is used for both the KSD and the parametric approximation, then the linear coefficients $a = (a_1, \dots, a_K)^{\mathrm{T}}$ can be calculated analytically:
(19)
with the “Gram matrix” whose entries are derived in the appendix. Then for an unseen observation $y$ the gradient approximation returns $\hat{g}(y) = \sum_{k=1}^K a_k \nabla_y \mathcal{K}(y, x^k)$. In this case one only maintains the $K$ linear coefficients and computes a linear combination in prediction, which takes $\mathcal{O}(K)$ memory (in addition to the stored samples) and $\mathcal{O}(Kd)$ time, and therefore is computationally cheaper than the non-parametric prediction model (27).
4 Applications
We present some case studies that apply the gradient estimators to implicit models. Detailed settings (architecture, learning rate, etc.) are presented in the appendix. Implementation is released at https://github.com/YingzhenLi/SteinGrad.
4.1 Synthetic example: Hamiltonian flow with approximate gradients
We first consider a simple synthetic example to demonstrate the accuracy of the proposed gradient estimator. More precisely we consider the kernel-induced Hamiltonian flow (not an exact sampler) (Strathmann et al., 2015) on a 2-dimensional banana-shaped target distribution. The approximate Hamiltonian flow is constructed using the same operator as in Hamiltonian Monte Carlo (HMC) (Neal et al., 2011), except that the exact score function $\nabla_x \log p(x)$ is replaced by the approximate gradients. We still use the exact target density to compute the rejection step, as we mainly focus on testing the accuracy of the gradient estimators. We test both versions of the predictive Stein gradient estimator (see Section 3.4) since we require the particles of parallel chains to be independent of each other. We fit the gradient estimators on training datapoints sampled from the target density. The bandwidth of the RBF kernel is computed by the median heuristic and scaled up by a constant factor. All three methods are simulated for the same number of iterations, share the same initial locations that are constructed by target distribution samples plus Gaussian noise of standard deviation 2.0, and the results are averaged over 200 parallel chains.
We visualise the samples and some MCMC statistics in Figure 2. In general all the resulting Hamiltonian flows are HMC-like, which gives us confidence that the gradient estimators extrapolate reasonably well at unseen locations. However all of these methods have trouble exploring the extremes, because at those locations there are very few or even no training datapoints. Indeed we found it necessary to use large (but not too large) bandwidths, in order both to allow exploration of those extremes, and to ensure that the corresponding test function is not too smooth. In terms of quantitative metrics, the acceptance rates are reasonably high for all the gradient estimators, and the KSD estimates (across chains) as a measure of sample quality are also close to those computed on HMC samples. The returned estimates of the test statistics are close to zero, which is the ground-truth value. We found that the non-parametric Stein gradient estimator is more sensitive to hyper-parameters of the dynamics, e.g. the step-size of each HMC step. We believe a careful selection of the kernel (e.g. one with long tails) and a better search for the hyper-parameters (for both the kernel and the dynamics) can further improve the sample quality and the chain mixing time, but this is not investigated here.
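For readers who want to reproduce the flavour of this experiment, the following sketch implements one HMC transition in which the leapfrog integrator consumes a pluggable (possibly approximate) score function, while the accept/reject step uses the exact density, as in the experiment above. The step-size and leapfrog count are illustrative:

```python
import numpy as np

def hmc_step(x, logp, grad_logp, eps=0.2, n_leapfrog=5, rng=None):
    """One HMC transition where grad_logp may be an *approximate* score.

    The exact log density logp is still used for the accept/reject step;
    only the leapfrog dynamics use the (estimated) gradient.
    """
    if rng is None:
        rng = np.random.default_rng()
    p0 = rng.standard_normal(x.shape)
    x_new, p = x.copy(), p0 + 0.5 * eps * grad_logp(x)
    for i in range(n_leapfrog):
        x_new = x_new + eps * p
        if i < n_leapfrog - 1:
            p = p + eps * grad_logp(x_new)
    p = p + 0.5 * eps * grad_logp(x_new)
    log_accept = logp(x_new) - logp(x) + 0.5 * (p0 @ p0 - p @ p)
    return x_new if np.log(rng.uniform()) < log_accept else x

# With the exact score of N(0, 1) the chain recovers the target moments;
# in the experiment grad_logp would instead be a fitted gradient estimator.
rng = np.random.default_rng(3)
x = np.zeros(1)
samples = []
for _ in range(3000):
    x = hmc_step(x, lambda v: -0.5 * float(v @ v), lambda v: -v, rng=rng)
    samples.append(x[0])
samples = np.asarray(samples)
```

Replacing the exact-score lambda with a fitted estimator yields the approximate Hamiltonian flow studied above.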
4.2 Meta-learning of approximate posterior samplers for Bayesian NNs
One of the recent focuses of meta-learning has been on learning optimisers for training deep neural networks, e.g. see Andrychowicz et al. (2016). Could analogous goals be achieved for approximate inference? In this section we attempt to learn an approximate posterior sampler for Bayesian neural networks (Bayesian NNs, BNNs) that generalises to unseen datasets and architectures. A more detailed introduction to Bayesian neural networks is included in the appendix; in a nutshell, we consider a binary classification task with a prior $p(\theta)$ over the network weights $\theta$ and likelihood $p(y | x, \theta)$ defined by the network output. After observing the training data $\mathcal{D}$, we first obtain the approximate posterior $q_\phi(\theta) \approx p(\theta | \mathcal{D})$, then approximate the predictive distribution for a new observation as $p(y^* | x^*, \mathcal{D}) \approx \frac{1}{K} \sum_{k=1}^K p(y^* | x^*, \theta^k)$ with $\theta^k \sim q_\phi(\theta)$. In this task we define an implicit approximate posterior distribution $q_\phi(\theta)$ as the following stochastic normalising flow (Rezende & Mohamed, 2015): given the current location $\theta_t$ and the mini-batch data, the update for the next step is
$$\theta_{t+1} = \theta_t + \zeta \, \Delta_\phi(\theta_t, \nabla_t) + \sigma_\phi(\theta_t, \nabla_t) \odot \epsilon_t, \qquad \epsilon_t \sim \mathcal{N}(\mathbf{0}, \mathbf{I}), \quad (20)$$
where $\nabla_t$ denotes a stochastic estimate of the gradient of the log joint density at $\theta_t$ computed on the mini-batch.
The coordinates of the noise standard deviation $\sigma_\phi$ and the moving direction $\Delta_\phi$ are parametrised by a coordinate-wise neural network. If properly trained, this neural network will learn the best combination of the current location and gradient information, and produce approximate posterior samples efficiently on different probabilistic modelling tasks. Here we propose using the variational inference objective (2) computed on the samples $\{\theta^k\}$ to learn the variational parameters $\phi$. Since in this case the gradient of the log joint distribution can be computed analytically, we only approximate the gradient of the entropy term as in (3), with the exact score function replaced by the presented gradient estimators. We report the results using the non-parametric Stein gradient estimator as we found it works better than the parametric version. The RBF kernel is applied for gradient estimation, with the hyper-parameters determined by a grid search on the bandwidth and the regularisation parameter $\eta$.

We briefly describe the test protocol. We take from the UCI repository (Lichman, 2013) six binary classification datasets (australian, breast, crabs, ionosphere, pima, sonar), train an approximate sampler on crabs with a small neural network that has one 20-unit hidden layer with ReLU activation, and generalise to the remaining datasets with a bigger network that has 50 hidden units and uses sigmoid activation. We use ionosphere as the validation set for hyper-parameter tuning. The remaining 4 datasets are further split into a 40% training subset for simulating samples from the approximate sampler, and a 60% test subset for evaluating the sampler’s performance.
Figure 3 presents the (negative) test log-likelihood (LL), classification error, and an estimate of the KSD U-statistic (with data sub-sampling) over 5 splits of each test dataset. Besides the gradient estimators we also compare with two baselines: an approximate posterior sampler trained by maximum a posteriori (MAP), and stochastic gradient Langevin dynamics (SGLD) (Welling & Teh, 2011) evaluated on the test datasets directly. In summary, SGLD returns the best results in the KSD metric. The Stein approach performs equally well or a little better than SGLD in terms of test-LL and test error. The KDE method is slightly worse and is close to MAP, indicating that the KDE estimator does not provide a very informative gradient for the entropy term. Surprisingly the score matching estimator produces considerably worse results (except for the breast dataset), even after carefully tuning the bandwidth and the regularisation parameter. Future work should investigate the usage of advanced recurrent neural networks such as an LSTM (Hochreiter & Schmidhuber, 1997), which is expected to return better performance.
4.3 Towards addressing mode collapse in GANs using entropy regularisation
GANs are notoriously difficult to train in practice. Besides the instability of gradient-based minimax optimisation, which has been partially addressed by many recent proposals (Salimans et al., 2016; Arjovsky et al., 2017; Berthelot et al., 2017), they also suffer from mode collapse. We propose adding an entropy regulariser to the GAN generator loss. Concretely, assume the generative model $p_\theta(x)$ is implicitly defined by $x = f_\theta(z)$, $z \sim p_0(z)$; then the generator’s loss is defined by
$$\tilde{\mathcal{J}}_{\text{gen}}(\theta) := \mathcal{J}_{\text{gen}}(\theta) - \alpha \mathbb{H}[p_\theta(x)], \quad (21)$$
where $\mathcal{J}_{\text{gen}}(\theta)$ is the original loss function for the generator from any GAN algorithm and $\alpha \geq 0$ is a hyper-parameter. In practice (the gradient of) (21) is estimated using Monte Carlo.
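To see how an estimated score enters the generator update, recall from (3) that under the reparameterisation $x = f_\theta(z)$ the entropy gradient reduces to $\nabla_\theta \mathbb{H}[p_\theta] = -\mathbb{E}[\hat{g}(x)\, \partial x / \partial\theta]$ once an estimate $\hat{g} \approx \nabla_x \log p_\theta$ is available. The sketch below uses a deliberately simple linear generator $x = A z$, for which the entropy has the closed form $\mathrm{const} + \log|\det A|$, and passes the exact Gaussian score as a stand-in for the Stein estimator:

```python
import numpy as np

def entropy_grad_wrt_A(A, Z, score_fn):
    """Monte Carlo estimate of grad_A H[p_A] for the linear generator x = A z,
    via the chain rule grad_A H = -E[ score(x) z^T ].

    In the GAN setting score_fn would be a Stein (or KDE / score matching)
    gradient estimator fitted on the generated batch; here it is a placeholder.
    """
    X = Z @ A.T
    S = score_fn(X)              # estimated grad_x log p at the samples
    return -S.T @ Z / len(Z)     # MC estimate of -E[score(x) z^T]

# Closed-form check: for z ~ N(0, I), x = A z is Gaussian with
# H[p_A] = const + log|det A|, so grad_A H = A^{-T}. With A = I and the
# exact Gaussian score (-x), the estimate should be close to the identity.
rng = np.random.default_rng(4)
Z = rng.standard_normal((5000, 2))
G = entropy_grad_wrt_A(np.eye(2), Z, lambda X: -X)
```

In the regularised loss (21) one would subtract $\alpha$ times this quantity from the generator gradient; for deep generators the same chain rule is applied via automatic differentiation rather than the explicit outer product used here.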
We empirically investigate the entropy regularisation idea on the very recently proposed boundary equilibrium GAN (BEGAN) (Berthelot et al., 2017) method using (continuous) MNIST, and we refer to the appendix for the detailed mathematical setup. In this case the non-parametric V-statistic Stein gradient estimator is used. We use a convolutional generative network and a convolutional auto-encoder, and select the hyper-parameters of BEGAN as detailed in the appendix. The Epanechnikov kernel is used as the pixel values lie in a unit interval (see appendix for the expression of the score matching estimator), and to ensure the boundary condition we clip the pixel values away from the boundary of that interval. The generated images are visualised in Figure 5. BEGAN without the entropy regularisation fails to generate diverse samples even when trained with learning rate decay. The other three images clearly demonstrate the benefit of the entropy regularisation technique, with the Stein approach obtaining the highest diversity without compromising visual quality.
We further consider four metrics to assess the trained models quantitatively. First 500 samples are generated for each trained model, then we compute their nearest neighbours in the training set using the $\ell_2$ distance, and obtain a probability vector $\bar{p}$ by averaging over these neighbour images’ label vectors. In Figure 5 we depict the entropy of $\bar{p}$ (top left), the averaged distance to the nearest neighbour (top right), and the difference between the largest and smallest elements of $\bar{p}$ (bottom right). The error bars are obtained from 5 independent runs. These results demonstrate that the Stein approach performs significantly better than the other two, in that it learns a better generative model not only faster but also in a more stable way. Interestingly the KDE approach achieves the lowest average distance to nearest neighbours, possibly because it tends to memorise training examples. We next train a fully connected network on MNIST that achieves 98.16% test accuracy, and compute on the generated images an empirical estimate of the inception score (Salimans et al., 2016) using this classifier (bottom left panel). A high inception score indicates that the generated images tend to be both realistic-looking and diverse, and again the Stein approach outperforms the others on this metric by a large margin.
Concerning computation speed, all three methods are of the same order: 10.20s/epoch for KDE, 10.85s/epoch for Score, and 10.30s/epoch for Stein.
All the methods are timed on a machine with an NVIDIA GeForce GTX TITAN X GPU. This is because in the experiments the complexity terms are dominated by the kernel computations required by all three methods. Also for comparison, the original BEGAN method without entropy regularisation runs for 9.05s/epoch. Therefore the main computation cost is dominated by the optimisation of the discriminator/generator, and the proposed entropy regularisation can be applied to many GAN frameworks with little computational burden.
5 Conclusions and future work
We have presented the Stein gradient estimator as a novel generalisation of the score matching gradient estimator. With a focus on learning implicit models, we have empirically demonstrated the efficacy of the proposed estimator by showing how it opens the door to a range of novel learning tasks: gradient-free MCMC, meta-learning for approximate inference, and unsupervised learning for image generation. Future work will expand the understanding of gradient estimators in both theoretical and practical aspects. Theoretical development will compare the V-statistic and U-statistic Stein gradient estimators and formalise consistency proofs. Practical work will improve the sample efficiency of kernel estimators in high dimensions and develop fast yet accurate approximations to matrix inversion. It is also interesting to investigate applications of gradient approximation methods to training implicit generative models without the help of discriminators. Finally, it remains an open question how to generalise the Stein gradient estimator to non-kernel settings and discrete distributions.
Acknowledgement
We thank Marton Havasi, Jiri Hron, David Janz, Qiang Liu, Maria Lomeli, Cuong Viet Nguyen and Mark Rowland for their comments and help on the manuscript. We also acknowledge the anonymous reviewers for their reviews. Yingzhen Li thanks the Schlumberger Foundation FFTF fellowship. Richard E. Turner thanks Google and EPSRC grants EP/M026957/1 and EP/L000776/1.
References
 Alain & Bengio (2014) Guillaume Alain and Yoshua Bengio. What regularized auto-encoders learn from the data-generating distribution. The Journal of Machine Learning Research, 15(1):3563–3593, 2014.
 Andrychowicz et al. (2016) Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. In Advances in Neural Information Processing Systems, pp. 3981–3989, 2016.
 Arjovsky et al. (2017) Martin Arjovsky, Soumith Chintala, and Leon Bottou. Wasserstein gan. arXiv preprint arXiv:1701.07875, 2017.
 Barber & Agakov (2003) David Barber and Felix V Agakov. The im algorithm: A variational approach to information maximization. In NIPS, pp. 201–208, 2003.
 Berlinet & Thomas-Agnan (2011) Alain Berlinet and Christine Thomas-Agnan. Reproducing kernel Hilbert spaces in probability and statistics. Springer Science & Business Media, 2011.
 Berthelot et al. (2017) David Berthelot, Tom Schumm, and Luke Metz. Began: Boundary equilibrium generative adversarial networks. arXiv preprint arXiv:1703.10717, 2017.
 Chwialkowski et al. (2016) Kacper Chwialkowski, Heiko Strathmann, and Arthur Gretton. A kernel test of goodness of fit. In Proceedings of The 33rd International Conference on Machine Learning, pp. 2606–2615, 2016.
 De Brabanter et al. (2013) Kris De Brabanter, Jos De Brabanter, Bart De Moor, and Irène Gijbels. Derivative estimation with local polynomial fitting. The Journal of Machine Learning Research, 14(1):281–301, 2013.
 Diggle & Gratton (1984) Peter J Diggle and Richard J Gratton. Monte Carlo methods of inference for implicit statistical models. Journal of the Royal Statistical Society. Series B (Methodological), pp. 193–227, 1984.
 Fan & Gijbels (1996) Jianqing Fan and Irène Gijbels. Local polynomial modelling and its applications. Chapman & Hall, 1996.
 Goodfellow et al. (2014) Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
 Gorham & Mackey (2015) Jackson Gorham and Lester Mackey. Measuring sample quality with Stein's method. In NIPS, 2015.
 Gorham & Mackey (2017) Jackson Gorham and Lester Mackey. Measuring sample quality with kernels. In ICML, 2017.

 Hinton (2002) Geoffrey E Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771–1800, 2002.
 Hochreiter & Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
 Huszár (2017) Ferenc Huszár. Variational inference using implicit distributions. arXiv preprint arXiv:1702.08235, 2017.
 Hyvärinen (2005) Aapo Hyvärinen. Estimation of nonnormalized statistical models by score matching. Journal of Machine Learning Research, 6(Apr):695–709, 2005.

 Hyvärinen (2006) Aapo Hyvärinen. Consistency of pseudo-likelihood estimation of fully visible Boltzmann machines. Neural Computation, 18(10):2283–2292, 2006.
 Johnson (2004) Oliver Thomas Johnson. Information theory and the central limit theorem. World Scientific, 2004.
 Karaletsos (2016) Theofanis Karaletsos. Adversarial message passing for graphical models. arXiv preprint arXiv:1612.05048, 2016.
 Kingma & Ba (2015) Diederick P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), 2015.
 Kodali et al. (2017) Naveen Kodali, Jacob Abernethy, James Hays, and Zsolt Kira. How to train your DRAGAN. arXiv preprint arXiv:1705.07215, 2017.
 Li & Liu (2016) Yingzhen Li and Qiang Liu. Wild variational approximations. NIPS workshop on advances in approximate Bayesian inference, 2016.
 Li et al. (2017) Yingzhen Li, Richard E Turner, and Qiang Liu. Approximate inference with amortised MCMC. arXiv preprint arXiv:1702.08343, 2017.
 Lichman (2013) M. Lichman. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml.
 Liu & Feng (2016) Qiang Liu and Yihao Feng. Two methods for wild variational inference. arXiv preprint arXiv:1612.00081, 2016.
 Liu et al. (2016) Qiang Liu, Jason D Lee, and Michael I Jordan. A kernelized Stein discrepancy for goodness-of-fit tests and model evaluation. In ICML, 2016.
 Lyu (2009) Siwei Lyu. Interpretation and generalization of score matching. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pp. 359–366. AUAI Press, 2009.

 Marlin et al. (2010) Benjamin Marlin, Kevin Swersky, Bo Chen, and Nando de Freitas. Inductive principles for restricted Boltzmann machine learning. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 509–516, 2010.
 Mescheder et al. (2017) Lars Mescheder, Sebastian Nowozin, and Andreas Geiger. Adversarial variational Bayes: Unifying variational autoencoders and generative adversarial networks. arXiv preprint arXiv:1701.04722, 2017.
 Mohamed & Lakshminarayanan (2016) Shakir Mohamed and Balaji Lakshminarayanan. Learning in implicit generative models. arXiv preprint arXiv:1610.03483, 2016.
 Neal et al. (2011) Radford M Neal et al. MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, 2:113–162, 2011.
 Radford et al. (2016) Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016.
 Ranganath et al. (2016) Rajesh Ranganath, Jaan Altosaar, Dustin Tran, and David M. Blei. Operator variational inference. In NIPS, 2016.
 Rezende & Mohamed (2015) Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. In ICML, 2015.
 Roeder et al. (2017) Geoffrey Roeder, Yuhuai Wu, and David Duvenaud. Sticking the landing: An asymptotically zerovariance gradient estimator for variational inference. arXiv preprint arXiv:1703.09194, 2017.
 Salimans et al. (2016) Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In NIPS, 2016.
 Särelä & Valpola (2005) Jaakko Särelä and Harri Valpola. Denoising source separation. Journal of machine learning research, 6(Mar):233–272, 2005.
 Sasaki et al. (2014) Hiroaki Sasaki, Aapo Hyvärinen, and Masashi Sugiyama. Clustering via mode seeking by direct estimation of the gradient of a log-density. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 19–34. Springer, 2014.
 Singh (1977) Radhey S Singh. Improvement on some known nonparametric uniformly consistent estimators of derivatives of a density. The Annals of Statistics, pp. 394–399, 1977.
 Smola & Schölkopf (2000) Alex J Smola and Bernhard Schölkopf. Sparse greedy matrix approximation for machine learning. In Proceedings of the Seventeenth International Conference on Machine Learning, pp. 911–918. Morgan Kaufmann Publishers Inc., 2000.

 Sønderby et al. (2017) Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszár. Amortised MAP inference for image super-resolution. In ICLR, 2017.
 Stein (1972) Charles Stein. A bound for the error in the normal approximation to the distribution of a sum of dependent random variables. In Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability, Volume 2: Probability Theory, pp. 583–602, 1972.
 Stein (1981) Charles M Stein. Estimation of the mean of a multivariate normal distribution. The Annals of Statistics, pp. 1135–1151, 1981.
 Stone (1985) Charles J Stone. Additive regression and other nonparametric models. The Annals of Statistics, pp. 689–705, 1985.
 Strathmann et al. (2015) Heiko Strathmann, Dino Sejdinovic, Samuel Livingstone, Zoltan Szabo, and Arthur Gretton. Gradient-free Hamiltonian Monte Carlo with efficient kernel exponential families. In Advances in Neural Information Processing Systems, pp. 955–963, 2015.
 Sugiyama et al. (2009) Masashi Sugiyama, Takafumi Kanamori, Taiji Suzuki, Shohei Hido, Jun Sese, Ichiro Takeuchi, and Liwei Wang. A density-ratio framework for statistical data processing. Information and Media Technologies, 4(4):962–987, 2009.
 Sugiyama et al. (2012) Masashi Sugiyama, Taiji Suzuki, and Takafumi Kanamori. Density-ratio matching under the Bregman divergence: A unified framework of density-ratio estimation. Annals of the Institute of Statistical Mathematics, 64(5):1009–1044, 2012.
 Tran et al. (2017) Dustin Tran, Rajesh Ranganath, and David M Blei. Deep and hierarchical implicit models. arXiv preprint arXiv:1702.08896, 2017.
 Turner & Sahani (2011) R. E. Turner and M. Sahani. Two problems with variational expectation maximisation for timeseries models. In D. Barber, T. Cemgil, and S. Chiappa (eds.), Bayesian Time series models, chapter 5, pp. 109–130. Cambridge University Press, 2011.
 Uehara et al. (2016) Masatoshi Uehara, Issei Sato, Masahiro Suzuki, Kotaro Nakayama, and Yutaka Matsuo. Generative adversarial nets from a density ratio estimation perspective. arXiv preprint arXiv:1610.02920, 2016.

 Vincent (2011) Pascal Vincent. A connection between score matching and denoising autoencoders. Neural Computation, 23(7):1661–1674, 2011.
 Vincent et al. (2008) Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, pp. 1096–1103. ACM, 2008.
 Wang & Liu (2016) Dilin Wang and Qiang Liu. Learning to draw samples: With application to amortized MLE for generative adversarial learning. arXiv preprint arXiv:1611.01722, 2016.
 Welling & Teh (2011) Max Welling and Yee W Teh. Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 681–688, 2011.
 Williams & Seeger (2001) Christopher KI Williams and Matthias Seeger. Using the Nyström method to speed up kernel machines. In Advances in Neural Information Processing Systems, pp. 682–688, 2001.
 Yu et al. (2017) Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. SeqGAN: Sequence generative adversarial nets with policy gradient. In Thirty-First AAAI Conference on Artificial Intelligence, 2017.
 Zhou & Wolfe (2000) Shanggang Zhou and Douglas A Wolfe. On derivative estimation in spline regression. Statistica Sinica, pp. 93–108, 2000.
Appendix A Score matching estimator: remarks and derivations
In this section we provide further discussion of, and analytical solutions for, the score matching estimator. More specifically, we derive the linear coefficients for the case of the Epanechnikov kernel.
A.1 Some remarks on score matching
Remark. It has been shown in Särelä & Valpola (2005) and Alain & Bengio (2014) that denoising autoencoders (DAEs) (Vincent et al., 2008), once trained, can be used to compute the score function approximately. Briefly speaking, a DAE learns to reconstruct a datum $x$ from a corrupted input $\tilde{x} = x + \sigma\epsilon$, $\epsilon \sim \mathcal{N}(0, I)$, by minimising the mean squared error. Then the optimal DAE $m^*(\cdot)$ can be used to approximate the score function as $\nabla_x \log p(x) \approx (m^*(x) - x)/\sigma^2$. Sønderby et al. (2017) applied this idea to train an implicit model for image super-resolution, providing promising results in some metrics. However, applying similar ideas to variational inference can be computationally expensive, because estimating the score function is a subroutine of VI that is repeatedly invoked. Therefore in the paper we deploy kernel machines that allow analytical solutions to the score matching estimator, in order to avoid double-loop optimisation.
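To see why the DAE output approximates the score, consider a toy Gaussian case where the optimal DAE is available in closed form (a minimal numpy sketch of our own, not the paper's experimental setup): for data $x \sim \mathcal{N}(0, s^2)$ corrupted with Gaussian noise of variance $\sigma^2$, the MSE-optimal reconstruction is the posterior mean $m^*(\tilde{x}) = \frac{s^2}{s^2+\sigma^2}\tilde{x}$, so $(m^*(x) - x)/\sigma^2 = -x/(s^2+\sigma^2)$, which is exactly the score of the corrupted marginal and approaches $\nabla_x \log p(x) = -x/s^2$ as $\sigma \to 0$.

```python
import numpy as np

s2, sigma2 = 4.0, 0.01   # data variance s^2 and corruption noise variance sigma^2

def dae_optimal(x_noisy):
    # MSE-optimal reconstruction E[x_clean | x_noisy] for this Gaussian toy model
    return (s2 / (s2 + sigma2)) * x_noisy

def score_from_dae(x):
    # DAE-based score approximation (Alain & Bengio, 2014): (m*(x) - x) / sigma^2
    return (dae_optimal(x) - x) / sigma2

x = np.linspace(-3.0, 3.0, 7)
approx = score_from_dae(x)
exact = -x / s2               # true score of N(0, s^2)
print(np.max(np.abs(approx - exact)))   # small for small sigma^2
```

The example also makes the cost argument concrete: a learned $m^*$ must be re-trained whenever the distribution being differentiated changes, which is what makes the DAE route expensive inside a VI loop.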
Remark. As a side note, score matching can also be used to learn the parameters of an unnormalised density. In this case the target distribution would be the data distribution, and the model is often a Boltzmann distribution with an intractable partition function. As a parameter estimation technique, score matching is also related to contrastive divergence (Hinton, 2002), pseudo-likelihood estimation (Hyvärinen, 2006), and DAEs (Vincent, 2011; Alain & Bengio, 2014). Generalisations of score matching methods are also presented in e.g. Lyu (2009); Marlin et al. (2010).
A.2 The RBF kernel case
The derivations for the RBF kernel case are referred to Strathmann et al. (2015), and for completeness we include the final solutions here. Assume the parametric approximation is defined as $\log \hat{q}(x) = \sum_{k=1}^{K} a_k \mathcal{K}(x, x^k) + C$, where the RBF kernel $\mathcal{K}(x, x') = \exp\left(-\frac{1}{2\sigma^2}\|x - x'\|_2^2\right)$ uses bandwidth parameter $\sigma$. Then the optimal solution of the coefficients is $\hat{a}_{\text{score}} = (\Sigma + \eta I)^{-1} v$, where the matrix $\Sigma$ and the vector $v$ are assembled from the kernel matrix and its first and second derivatives following Strathmann et al. (2015).
A.3 The Epanechnikov kernel case
The Epanechnikov kernel is defined as $\mathcal{K}(x, x') = \frac{1}{d} \sum_{i=1}^{d} (1 - (x_i - x'_i)^2)$, where the first and second order gradients w.r.t. $x_i$ are

$\nabla_{x_i} \mathcal{K}(x, x') = -\frac{2}{d}(x_i - x'_i), \quad \nabla^2_{x_i} \mathcal{K}(x, x') = -\frac{2}{d}.$ (22)

Thus the score matching objective with $\log \hat{q}(x) = \sum_{k=1}^{K} a_k \mathcal{K}(x, x^k) + C$ is reduced to

$\mathcal{J}(a) = \frac{2}{d^2 K} \sum_{j=1}^{K} a^\top \Sigma_j a - 2 a^\top \mathbf{1},$

with the matrix elements

$(\Sigma_j)_{kl} = (x^j - x^k)^\top (x^j - x^l) = \frac{d}{2} \left( 1 - \mathcal{K}(x^j, x^k) - \mathcal{K}(x^j, x^l) + \mathcal{K}(x^k, x^l) \right).$

Define the "gram matrix" $\mathbb{K}_{ij} = \mathcal{K}(x^i, x^j)$; we write the matrix form of $\bar{\Sigma} := \frac{1}{K} \sum_{j=1}^{K} \Sigma_j$ as

$\bar{\Sigma} = \frac{d}{2} \left( \mathbf{1}\mathbf{1}^\top + \mathbb{K} - \bar{k}\mathbf{1}^\top - \mathbf{1}\bar{k}^\top \right), \quad \bar{k} := \frac{1}{K} \mathbb{K} \mathbf{1}.$

Thus with an $\ell_2$ regulariser $\eta \|a\|_2^2$, the fitted coefficients are

$\hat{a}_{\text{score}} = \left( \frac{2}{d^2} \bar{\Sigma} + \eta I \right)^{-1} \mathbf{1}.$
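The closed-form solution for the Epanechnikov kernel can be sanity-checked numerically. The sketch below is our own reconstruction (all variable names hypothetical): it assumes the kernel $\mathcal{K}(x,x') = \frac{1}{d}\sum_i(1-(x_i-x'_i)^2)$ and the model $\log\hat{q}(x) = \sum_k a_k \mathcal{K}(x,x^k) + C$, builds the score matching objective directly from the kernel gradients, and solves the resulting ridge system.

```python
import numpy as np

rng = np.random.default_rng(0)
K_n, d, eta = 8, 3, 0.1                     # sample count, dimension, l2 strength
X = rng.uniform(-0.3, 0.3, size=(K_n, d))   # keep |x_i - x'_i| < 1 so the kernel stays positive

def epan(a, b):
    # Epanechnikov kernel K(x, x') = (1/d) sum_i (1 - (x_i - x'_i)^2)
    return float(np.mean(1.0 - (a - b) ** 2))

G = np.array([[epan(X[i], X[j]) for j in range(K_n)] for i in range(K_n)])  # gram matrix

def J(a):
    # score matching objective + l2 regulariser, built from the kernel gradients:
    # (1/K) sum_j [ 0.5 ||grad_x log q(x^j)||^2 + trace of Hessian ] + eta ||a||^2
    total = 0.0
    for j in range(K_n):
        grad = sum(a[k] * (-2.0 / d) * (X[j] - X[k]) for k in range(K_n))
        total += 0.5 * grad @ grad - 2.0 * a.sum()   # each 2nd derivative is -2/d, summed over d dims
    return total / K_n + eta * a @ a

# closed form: a_hat = ((2/d^2) * Sigma_bar + eta I)^(-1) 1,
# with Sigma_bar the average over j of (Sigma_j)_{kl} = (x^j - x^k)^T (x^j - x^l)
Sigma_bar = np.mean([(X[j] - X) @ (X[j] - X).T for j in range(K_n)], axis=0)
a_hat = np.linalg.solve((2.0 / d ** 2) * Sigma_bar + eta * np.eye(K_n), np.ones(K_n))
print(J(a_hat))
```

The objective is a convex quadratic in $a$ (its Hessian is a sum of a PSD matrix and $2\eta I$), so the linear solve above is its unique global minimiser; perturbing `a_hat` in any direction should only increase `J`.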
Appendix B Stein gradient estimator: derivations
B.1 Direct minimisation of KSD V-statistic and U-statistic
The V-statistic of KSD is the following: given samples $x^k \sim q$, $k = 1, ..., K$, and recalling $\mathbb{K}_{jl} = \mathcal{K}(x^j, x^l)$:

$S_V^2(q, \hat{q}) = \frac{1}{K^2} \sum_{j=1}^{K} \sum_{l=1}^{K} \left[ \hat{g}(x^j)^\top \mathcal{K}(x^j, x^l) \hat{g}(x^l) + \hat{g}(x^j)^\top \nabla_{x^l} \mathcal{K}(x^j, x^l) + \nabla_{x^j} \mathcal{K}(x^j, x^l)^\top \hat{g}(x^l) + \mathrm{Tr}\left( \nabla_{x^j} \nabla_{x^l} \mathcal{K}(x^j, x^l) \right) \right].$ (23)

The last term will be ignored as it does not depend on the approximation $\hat{g}$. Using the matrix notations defined in the main text, readers can verify that the V-statistic can be computed as

$S_V^2(q, \hat{q}) = \frac{1}{K^2} \left[ \mathrm{Tr}(\hat{G} \hat{G}^\top \mathbb{K}) + 2\, \mathrm{Tr}(\hat{G} \langle \nabla, \mathbb{K} \rangle^\top) \right] + C.$ (24)

Using the cyclic invariance of the matrix trace leads to the desired result in the main text. The U-statistic of KSD removes the terms indexed by $j = l$ in (23), for which the matrix form is

$S_U^2(q, \hat{q}) = \frac{1}{K(K-1)} \left[ \mathrm{Tr}(\hat{G} \hat{G}^\top (\mathbb{K} - \mathrm{diag}(\mathbb{K}))) + 2\, \mathrm{Tr}(\hat{G} (\langle \nabla, \mathbb{K} \rangle - \nabla \mathrm{diag}(\mathbb{K}))^\top) \right] + C,$ (25)

with the $j$th row of $\nabla \mathrm{diag}(\mathbb{K})$ defined as $\nabla_{x^j} \mathcal{K}(x^j, x^j)$. For most translation invariant kernels this extra term $\nabla \mathrm{diag}(\mathbb{K}) = 0$, thus the optimal solution of $\hat{G}$ obtained by minimising the KSD U-statistic is

$\hat{G}_U^{\text{Stein}} = -(\mathbb{K} - \mathrm{diag}(\mathbb{K}) + \eta I)^{-1} \langle \nabla, \mathbb{K} \rangle.$ (26)
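For concreteness, the V- and U-statistic Stein gradient estimators, $\hat{G}_V = -(\mathbb{K} + \eta I)^{-1}\langle\nabla,\mathbb{K}\rangle$ and $\hat{G}_U = -(\mathbb{K} - \mathrm{diag}(\mathbb{K}) + \eta I)^{-1}\langle\nabla,\mathbb{K}\rangle$, can be sketched in a few lines of numpy with an RBF kernel. This is a hypothetical illustration with our own variable names; the median-heuristic bandwidth and $\eta = 0.1$ are our choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)
K_n, d, eta = 100, 2, 0.1
X = rng.standard_normal((K_n, d))     # samples from q = N(0, I); the true score is -x

# RBF kernel with median-heuristic bandwidth (our choice)
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
sigma2 = np.median(sq) / 2.0
Kmat = np.exp(-sq / (2.0 * sigma2))

# <grad, K>_{ij} = sum_k d/dx^k_j K(x^i, x^k) = sum_k K_ik (x^i_j - x^k_j) / sigma^2
gradK = (Kmat[:, :, None] * (X[:, None, :] - X[None, :, :])).sum(axis=1) / sigma2

# V-statistic and U-statistic Stein gradient estimators
G_v = -np.linalg.solve(Kmat + eta * np.eye(K_n), gradK)
G_u = -np.linalg.solve(Kmat - np.diag(np.diag(Kmat)) + eta * np.eye(K_n), gradK)

# compare against the exact score of N(0, I)
true_score = -X
cos = (G_v * true_score).sum(-1) / (
    np.linalg.norm(G_v, axis=-1) * np.linalg.norm(true_score, axis=-1) + 1e-12)
print(cos.mean())
```

Since $\hat{G}_V$ is the global minimiser of the ridge-regularised quadratic objective, the objective value at $\hat{G}_V$ is never larger than at $\hat{G} = 0$, which gives a cheap correctness check.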
B.2 Deriving the non-parametric predictive estimator

Let us consider an unseen datum $y$. If $y$ is sampled from the distribution $q$, then one can also apply the non-parametric estimator (9) for gradient approximations, given the observed data $X = \{x^1, ..., x^K\}$. Concretely, writing $\hat{g}(y) \approx \nabla_y \log q(y)$ and applying the V-statistic estimator to the augmented dataset $\{y\} \cup X$, the non-parametric Stein gradient estimator is

$\begin{pmatrix} \hat{g}(y)^\top \\ \hat{G} \end{pmatrix} = -(\mathbb{K}^* + \eta I)^{-1} \begin{pmatrix} \nabla_y \mathcal{K}(y, y) + \sum_{k=1}^{K} \nabla_{x^k} \mathcal{K}(y, x^k) \\ \nabla_y \mathcal{K}(\cdot, y) + \langle \nabla, \mathbb{K} \rangle \end{pmatrix},$

with $\mathbb{K}^*$ the kernel matrix computed on $\{y\} \cup X$, and $\nabla_y \mathcal{K}(\cdot, y)$ denoting the $K \times d$ matrix with rows $\nabla_y \mathcal{K}(x^k, y)$, where $\nabla_y$ only differentiates through the second argument. Thus by simple block matrix calculations, we have:

$\hat{g}(y)^\top = -\left( \mathcal{K}(y, y) + \eta - \mathbb{K}_{yX} (\mathbb{K} + \eta I)^{-1} \mathbb{K}_{Xy} \right)^{-1} \left[ \nabla_y \mathcal{K}(y, y) + \sum_{k=1}^{K} \nabla_{x^k} \mathcal{K}(y, x^k) - \mathbb{K}_{yX} (\mathbb{K} + \eta I)^{-1} \left( \nabla_y \mathcal{K}(\cdot, y) + \langle \nabla, \mathbb{K} \rangle \right) \right].$ (27)

For translation invariant kernels, typically $\nabla_y \mathcal{K}(y, y) = 0$, and more conveniently, $\nabla_{x^k} \mathcal{K}(y, x^k) = -\nabla_y \mathcal{K}(x^k, y)$. Thus equation (27) can be further simplified to (with column vector $\mathbf{1} \in \mathbb{R}^{K}$)

$\hat{g}(y)^\top = \left( \mathcal{K}(y, y) + \eta - \mathbb{K}_{yX} (\mathbb{K} + \eta I)^{-1} \mathbb{K}_{Xy} \right)^{-1} \left[ \mathbf{1}^\top \nabla_y \mathcal{K}(\cdot, y) + \mathbb{K}_{yX} (\mathbb{K} + \eta I)^{-1} \left( \nabla_y \mathcal{K}(\cdot, y) + \langle \nabla, \mathbb{K} \rangle \right) \right].$ (28)
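The block-inverse derivation can be checked numerically: the out-of-sample Schur-complement formula should coincide with applying the V-statistic estimator to the augmented set $\{y\} \cup X$ and reading off the row for $y$. Below is a numpy sketch of that check (RBF kernel; all variable names ours):

```python
import numpy as np

rng = np.random.default_rng(2)
K_n, d, eta, sigma2 = 20, 3, 0.1, 1.0
X = rng.standard_normal((K_n, d))
y = rng.standard_normal(d)

def rbf(a, b):
    return float(np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma2)))

def grad2(a, b):
    # gradient of K(a, b) w.r.t. the SECOND argument b
    return rbf(a, b) * (a - b) / sigma2

def stein_v(Z):
    # non-parametric V-statistic Stein gradient estimator on samples Z
    n = len(Z)
    Km = np.array([[rbf(Z[i], Z[j]) for j in range(n)] for i in range(n)])
    gK = np.array([sum(grad2(Z[i], Z[k]) for k in range(n)) for i in range(n)])
    return -np.linalg.solve(Km + eta * np.eye(n), gK)

# estimator applied to the augmented set {y} + X; the first row is g(y)
g_aug = stein_v(np.vstack([y[None, :], X]))[0]

# out-of-sample formula via the Schur complement of the augmented kernel matrix
Km = np.array([[rbf(X[i], X[j]) for j in range(K_n)] for i in range(K_n)])
gK = np.array([sum(grad2(X[i], X[k]) for k in range(K_n)) for i in range(K_n)])
Kyx = np.array([rbf(y, X[i]) for i in range(K_n)])
D = np.array([grad2(X[i], y) for i in range(K_n)])   # rows: grad_y K(x^i, y)
A = np.linalg.solve(Km + eta * np.eye(K_n), np.column_stack([Kyx, D + gK]))
schur = rbf(y, y) + eta - Kyx @ A[:, 0]
g_pred = (np.ones(K_n) @ D + Kyx @ A[:, 1:]) / schur
print(np.max(np.abs(g_pred - g_aug)))
```

Note that the Schur-complement form reuses the solve against the fixed matrix on the training samples, so predicting gradients at new points avoids refactorising the full augmented system.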