1 Introduction
Hamiltonian Monte Carlo (HMC) uses local geometric information provided by the log-posterior gradient to explore the high posterior density regions of the parameter space [10]. Compared to the Metropolis-Hastings random walk algorithm, HMC has high acceptance probability and low sample autocorrelation even when the parameter space is high-dimensional. That said, the advantages of HMC come at a computational cost that limits its application to smaller data sets. The gradient calculation involves the entire data set and scales linearly with the number of observations. Since HMC must calculate the gradient multiple times within every single step, performing HMC on millions of observations requires an enormous computational budget. Allowing HMC to scale to large data sets would help tackle the double challenge of big data and big models.
There have been two main approaches to scaling HMC to larger data sets. The first is stochastic gradient HMC, which calculates the gradient on subsets of the data. [12] implemented a stochastic gradient version of Langevin dynamics, which may be viewed as single-step HMC. [3] introduced stochastic gradient HMC with "friction" to counterbalance the inherently noisy gradient. However, these methods may not be optimal because subsampling substantially reduces the acceptance probability of HMC [2].
The second approach relies on a surrogate function whose gradient is less expensive to calculate. [11] and [7] used Gaussian processes (GPs) to produce satisfactory results in lower dimensions. However, training a GP is itself computationally expensive, and training points must be chosen with great care. More recently, [13] implemented a neural network surrogate with random bases. It outperforms the GP surrogate in their experiments but fails in parameter spaces of moderate dimensionality.
In this paper, we develop a third approach, neural network gradient HMC (NNgHMC), which uses a neural network to approximate the gradient directly instead of using it as a surrogate. We also train all the neural network weights through backpropagation rather than using random weights as in [13]. Compared to existing methods, our proposed approach can emulate Hamiltonian flows accurately even as dimensionality increases. In Section 3, we present the details of our method and a proof of convergence. Section 4 includes experiments that validate our method and comparisons with previous methods on synthetic and real data.

2 Background
2.1 Hamiltonian Monte Carlo
Let p(x | θ) denote a probabilistic model with θ as its corresponding parameter. We also make θ a random variable by giving the parameter a prior distribution π(θ). The integration constant of the posterior distribution

(1) p(θ | x) = p(x | θ) π(θ) / ∫ p(x | θ) π(θ) dθ

is usually analytically intractable, but the distribution can be numerically simulated using MCMC. The Metropolis-Hastings (MH) algorithm constructs a Markov chain that randomly proposes a new state θ* from the current state θ based on a transition distribution q(θ* | θ) and moves from θ to θ* with probability min{1, [p(θ* | x) q(θ | θ*)] / [p(θ | x) q(θ* | θ)]}. Unfortunately, in a higher dimensional space, the probability of randomly moving to a higher density region drops dramatically, so the MH algorithm has trouble exploring the posterior efficiently in higher dimensions.

The idea of HMC is to explore a frictionless landscape induced by a potential energy function U(θ) and a kinetic energy function K(r), where the potential energy is proportional to the negative log posterior. HMC introduces an auxiliary Gaussian momentum r, and K(r) is the negative log density of r. Potential energy tends to convert to kinetic energy, so θ will likely move to a position with higher posterior density. More formally, the Hamiltonian system is defined by the following equations:

(2) H(θ, r) = U(θ) + K(r), where U(θ) = -log p(θ | x) and K(r) = (1/2) rᵀ M⁻¹ r,

(3) dθ/dt = ∂H/∂r = M⁻¹ r,

(4) dr/dt = -∂H/∂θ = -∇U(θ).
In theory, convergence of HMC is guaranteed by the time reversibility of the Hamiltonian dynamics, which, in turn, renders the Markov chain transitions reversible, thus ensuring detailed balance. By conservation of the Hamiltonian, HMC has acceptance probability 1 and can travel arbitrarily long trajectories along energy level contours. In practice, the Hamiltonian dynamics are simulated with the leapfrog algorithm, which introduces numerical error. To ensure convergence to the posterior, a Metropolis correction step is applied at the end of each trajectory.
Within each simulated trajectory, the leapfrog algorithm iterates back and forth between Equations (3) and (4), the latter of which features a summation over the log-likelihood evaluated at each separate data point. For large data sets, this repeated evaluation of the gradient becomes infeasible. In Section 3, we show how to greatly speed up HMC using neural network approximations to this gradient term, but first we introduce an important predecessor to our method: the surrogate HMC class of algorithms.
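The leapfrog scheme of Equations (3) and (4) can be sketched in a few lines. The following minimal example (function names and the standard-Gaussian test target are illustrative, and the mass matrix is taken to be the identity) shows the near-conservation of the Hamiltonian that gives HMC its high acceptance probability:

```python
import numpy as np

def leapfrog(theta, r, grad_U, eps, L):
    """Simulate one leapfrog trajectory of L steps with step size eps.
    grad_U(theta) returns the gradient of the potential energy U."""
    theta, r = theta.copy(), r.copy()
    r -= 0.5 * eps * grad_U(theta)      # initial half step for the momentum
    for _ in range(L - 1):
        theta += eps * r                # full step for the position
        r -= eps * grad_U(theta)        # full step for the momentum
    theta += eps * r
    r -= 0.5 * eps * grad_U(theta)      # final half step for the momentum
    return theta, r

# Standard Gaussian target: U(theta) = theta' theta / 2, identity mass matrix.
grad_U = lambda th: th
H = lambda th, r: 0.5 * (th @ th) + 0.5 * (r @ r)
theta0, r0 = np.array([1.0, -0.5]), np.array([0.8, 0.2])
theta1, r1 = leapfrog(theta0, r0, grad_U, eps=0.01, L=100)
print(H(theta1, r1) - H(theta0, r0))  # near zero: the Hamiltonian is almost conserved
```

Note that each of the 100 steps calls `grad_U`; with the true posterior gradient this call touches every observation, which is the cost the rest of the paper attacks.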
2.2 Surrogate HMC
Two methods for approximating the log-posterior in the HMC context have already been advanced: the first uses Gaussian process regression to model the log-posterior; the second uses a neural network. We refer to the latter as neural network surrogate HMC (NNsHMC). It is natural that both models would be used in such a capacity: Cybenko [4] showed that neural networks provide universal function approximation, and Neal [9] showed that certain probabilistic neural networks converge to Gaussian processes as the number of hidden units goes to infinity. In this section, we focus on NNsHMC, since it is more closely related to our method (Section 3).
NNsHMC approximates the potential energy U(θ) with a neural network surrogate Ũ(θ) and uses ∇Ũ(θ) during leapfrog steps. The surrogate neural network has one hidden layer with softplus activation:

(5) Ũ(θ) = w₂ᵀ log(1 + exp(W₁θ + b₁)) + b₂,

where W₁, w₂ and b₁, b₂ are weight matrices and bias vectors, respectively. Under this formulation, one can explicitly calculate the gradient

(6) ∇Ũ(θ) = W₁ᵀ (σ(W₁θ + b₁) ⊙ w₂), with σ(z) = 1/(1 + exp(-z)),

and represent ∇Ũ with another neural network, which is just the backpropagation graph of Ũ. Therefore, we can view the neural network surrogate as using a constrained network with tied weights and local connections to approximate the gradient.
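The explicit gradient of a softplus surrogate can be verified numerically. This sketch (dimensions and random weights are illustrative) computes the surrogate energy, its hand-derived gradient, and checks the two against finite differences:

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 5, 20                                  # parameter and hidden sizes (illustrative)
W1, b1 = rng.normal(size=(h, d)), rng.normal(size=h)
w2, b2 = rng.normal(size=h), 0.3

softplus = lambda x: np.log1p(np.exp(x))
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def U_tilde(theta):
    """One-hidden-layer softplus surrogate for the potential energy."""
    return w2 @ softplus(W1 @ theta + b1) + b2

def grad_U_tilde(theta):
    # Backpropagation through the surrogate: the derivative of softplus is sigmoid.
    return W1.T @ (sigmoid(W1 @ theta + b1) * w2)

theta = rng.normal(size=d)
g = grad_U_tilde(theta)
# Finite-difference check of the explicit gradient.
eps = 1e-6
fd = np.array([(U_tilde(theta + eps * e) - U_tilde(theta - eps * e)) / (2 * eps)
               for e in np.eye(d)])
print(np.max(np.abs(g - fd)))  # small: the explicit gradient matches
```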
To train the neural network, Zhang et al. [13] use an extreme learning machine (ELM) [5]. ELM is a simple algorithm that randomly projects the input to the hidden layer and only trains the weights from the hidden layer to the output. Random projection is widely used in machine learning, but backpropagation is the "default" training method for most neural networks, with its optimality theoretically explained by Baldi et al. [1]. Moreover, since the goal is to improve computational efficiency, we want to make the surrogate neural network as small as possible. From this point of view, the large hidden layers often seen in ELMs are less than optimal.

3 Neural network gradient HMC
In contrast to previous work, NNgHMC does not use a surrogate function for U(θ) but instead fits a neural network to approximate ∇U(θ) directly with backpropagation. Training data for the neural network are collected during the early period of HMC, shortly after convergence. Once the approximate gradient is learned, the algorithm is exactly the same as classical HMC, but with the neural network gradient replacing ∇U(θ). Details are given in Algorithm 1.
One benefit of our method appears as early as the data collection process. Since we approximate the gradient ∇U(θ) and not the potential U(θ) itself, we can collect more training data faster: surrogate HMC must reach the end of a leapfrog trajectory before obtaining a single (functional evaluation) training sample, whereas the same leapfrog trajectory yields a new (gradient evaluation) training sample at each leapfrog step, and the number of such steps in a single trajectory can range into the hundreds.
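The core of the method is an ordinary multi-output regression from θ to ∇U(θ). The sketch below trains a small one-hidden-layer network with plain full-batch gradient descent on simulated (θ, ∇U(θ)) pairs; the toy Gaussian target, network size, learning rate, and hand-rolled training loop are illustrative stand-ins for the Keras/Adam setup used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy target: zero-mean Gaussian, U(theta) = 0.5 * theta' P theta, grad U = P theta.
P = np.array([[2.0, 0.3], [0.3, 1.0]])
grad_U = lambda th: th @ P.T

# "Burn-in" training pairs (theta_i, grad U(theta_i)) as NNgHMC would collect.
X = rng.normal(size=(2000, 2))
Y = grad_U(X)

# One-hidden-layer network trained by full-batch gradient descent.
h, lr = 32, 0.05
W1 = rng.normal(size=(2, h)) * 0.5
b1 = np.zeros(h)
W2 = rng.normal(size=(h, 2)) * 0.5
b2 = np.zeros(2)
for _ in range(3000):
    Z = np.tanh(X @ W1 + b1)
    pred = Z @ W2 + b2
    err = pred - Y                       # dLoss/dpred for MSE (up to a constant)
    gW2, gb2 = Z.T @ err / len(X), err.mean(0)
    dZ = (err @ W2.T) * (1 - Z ** 2)     # backprop through tanh
    gW1, gb1 = X.T @ dZ / len(X), dZ.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

approx = np.tanh(X @ W1 + b1) @ W2 + b2
rmse = np.sqrt(np.mean((approx - Y) ** 2))
print(rmse)  # well below the raw gradient scale: the field is learned
```

Once trained, the forward pass `tanh(theta @ W1 + b1) @ W2 + b2` replaces the full-data gradient inside every leapfrog step.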
Suppose that there are N data points and that the parameter space is d-dimensional. In this case, gradient calculations involve d partial derivatives

(7) ∂U/∂θⱼ = -Σ_{i=1}^{N} ∂ log p(xᵢ | θ)/∂θⱼ - ∂ log π(θ)/∂θⱼ, j = 1, …, d,

each of which involves a summation over the N data points. On the other hand, the cost of a forward pass in a shallow neural network is proportional only to the hidden layer size h. Once the neural network is trained on burn-in samples, posterior sampling with the approximated gradient is orders of magnitude faster.
Although the neural network gradient approximation ∇Ũ(θ) is not the same as ∇U(θ), the method nonetheless samples from the true posterior. If one were able to simulate the Hamiltonian system directly, i.e., without numerical integration, then all the benefits of HMC would be preserved in the limit, as the approximate gradient field approaches the true gradient field to arbitrary degree. On the other hand, the NNgHMC transition kernel, characterized by the approximate-gradient leapfrog integrator combined with the Metropolis accept-reject step, leaves the posterior distribution invariant. We formalize the relevant results here and defer proofs to the appendix.
An important litmus test for the validity of our method is that it should leave the Hamiltonian invariant in the limit as step sizes and gradient approximation errors approach zero. In turn, this result implies high acceptance probabilities when the system is simulated numerically and the gradient approximations are good.
Proposition 1.
When the system induced by the approximate gradient field is simulated directly, changes in the Hamiltonian converge in probability to 0 as the approximate gradient converges pointwise to the true gradient. That is, for a sequence of approximate gradient fields ∇Ũₙ converging to the true gradient field ∇U, the change in Hamiltonian values ΔHₙ satisfies

(8) ΔHₙ → 0 in probability as n → ∞.
Proof.
Following [4], assume we are able to construct a sequence of approximate gradients ∇Ũₙ satisfying

(9) sup_{θ ∈ B_R} ||∇Ũₙ(θ) - ∇U(θ)|| < εₙ, with εₙ → 0,

where B_R is the ball around the origin of radius R. In this case, the vector field given by the approximate gradient induces a new system of equations:

(10) dθ/dt = M⁻¹ r, dr/dt = -∇Ũₙ(θ).

Then it follows that

(11) dH/dt = (∂H/∂θ)ᵀ dθ/dt + (∂H/∂r)ᵀ dr/dt = ∇U(θ)ᵀ M⁻¹ r - rᵀ M⁻¹ ∇Ũₙ(θ) = rᵀ M⁻¹ (∇U(θ) - ∇Ũₙ(θ)).

This last line implies that dH/dt is O(εₙ), and hence that ΔHₙ is O(εₙ) over any finite trajectory. ∎
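Proposition 1 can be checked numerically: simulate the perturbed system with a very small step size (standing in for direct simulation) and watch the Hamiltonian error shrink as the gradient error does. The sinusoidal perturbation field and the standard-Gaussian target below are illustrative choices:

```python
import numpy as np

def leapfrog(theta, r, grad, eps, L):
    """Near-exact simulation of the flow via leapfrog with a tiny step size."""
    r = r - 0.5 * eps * grad(theta)
    for _ in range(L - 1):
        theta = theta + eps * r
        r = r - eps * grad(theta)
    theta = theta + eps * r
    r = r - 0.5 * eps * grad(theta)
    return theta, r

grad_U = lambda th: th                           # standard Gaussian potential
H = lambda th, r: 0.5 * (th @ th) + 0.5 * (r @ r)  # true Hamiltonian
theta0, r0 = np.array([1.0, -0.5]), np.array([0.3, 0.8])

dH = []
for delta in [0.1, 0.01, 0.001]:
    # Approximate field whose sup-norm error is at most delta.
    grad_approx = lambda th, d=delta: grad_U(th) + d * np.sin(th)
    th1, r1 = leapfrog(theta0, r0, grad_approx, eps=0.001, L=1000)
    dH.append(abs(H(th1, r1) - H(theta0, r0)))
print(dH)  # decreasing as the gradient error shrinks
```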
We note that Proposition 1 is a local result, and local deviations from the true Hamiltonian flow will in general accrue to larger global deviations. While this may seem disconcerting, NNgHMC maintains remarkably high acceptance rates in practice. To help understand why this is the case, we present local and global error analyses for the dynamics of the ordinary differential equation initial value problem

(12) dz/dt = f(z), z(0) = z₀,

approximated with a function f̃. These results are then related back to NNgHMC by specifying z = (θ, r) and

(13) f(z) = (M⁻¹ r, -∇U(θ)), f̃(z) = (M⁻¹ r, -∇Ũ(θ)).

The general form of the following proofs follows Section 2.1.2 of [8].
Proposition 2.
(Local error bounds) Let z₀ be the initial value, let z(h) be the value of the exact, true trajectory after traveling for time h, and let ẑ₁ = z₀ + h f̃(z₀) be the value of the computed trajectory using Euler's method applied to the approximated gradient field. Finally, assume that the exact solution is twice continuously differentiable. Then the local error has the following bounds:

(14) ||ẑ₁ - z(h)|| ≤ h ε(z₀) + O(h²),

where ε(z₀) = ||f̃(z₀) - f(z₀)|| measures the difference between the true, exact field and the approximated field at the point z₀.
Proof.
The proof follows from the Taylor expansion of z(h) and the definition of ẑ₁:

(15) ẑ₁ = z₀ + h f̃(z₀), z(h) = z₀ + h f(z₀) + (h²/2) z″(τ),

where τ ∈ [0, h]. The result follows from the triangle inequality. ∎
From the above result, it follows that the local error rate approaches the error rate of Euler's method using the true gradient field as ε goes to 0. The same approach can be used to obtain global error bounds.
Proposition 3.
(Global error bounds) We adopt the same notation as above, with the addition of the error at iteration n, eₙ = ẑₙ - z(nh), where ẑₙ is the value after n Euler updates using the approximate gradient field. Also, let ε = sup_z ||f̃(z) - f(z)||. Again we assume that the exact solution is twice continuously differentiable, and we further assume that f is Lipschitz with constant L. Then the following bounds on eₙ hold:

(16) ||eₙ|| ≤ (ε + Ch)(e^{Lnh} - 1)/L,

where C bounds ||z″||/2 and e₀ = 0.
Proof.
The proof proceeds by recursion. Assume that we have obtained eₙ. Letting τₙ ∈ [nh, (n+1)h], a Taylor expansion gives:

(17) eₙ₊₁ = ẑₙ₊₁ - z((n+1)h) = eₙ + h (f̃(ẑₙ) - f(z(nh))) - (h²/2) z″(τₙ).

But z″ is continuous by assumption, so we can bound ||z″||/2 on the closed interval by a constant C. Furthermore, the Lipschitz assumption combined with the triangle inequality gives:

(18) ||f̃(ẑₙ) - f(z(nh))|| ≤ ||f̃(ẑₙ) - f(ẑₙ)|| + ||f(ẑₙ) - f(z(nh))|| ≤ ε + L ||eₙ||, so ||eₙ₊₁|| ≤ (1 + hL) ||eₙ|| + hε + Ch².

Next we make use of the following recursion relationship:

(19) aₙ₊₁ ≤ (1 + δ) aₙ + β together with a₀ = 0 implies aₙ ≤ (β/δ)((1 + δ)ⁿ - 1),

for δ = hL and β = hε + Ch². Noting that (1 + hL)ⁿ ≤ e^{Lnh} gives

(20) ||eₙ|| ≤ (hε + Ch²)(e^{Lnh} - 1)/(hL) = (ε + Ch)(e^{Lnh} - 1)/L,

and the result follows. ∎
The above result suggests that the usual numerical error caused by a large Lipschitz constant L can overpower the effects of the gradient approximation error ε.
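The error decomposition of Propositions 2 and 3 can be observed empirically. In the sketch below (a toy linear flow with an artificial constant field perturbation, both illustrative), the Euler endpoint error is dominated by the field error ε until the perturbation vanishes, after which only the O(h) Euler error remains:

```python
import numpy as np

# Exact flow dz/dt = -z with z(0) = 1, so z(T) = exp(-T).
T = 1.0

def euler(f, z0, h, n):
    """n Euler steps of size h applied to the (possibly approximate) field f."""
    z = z0
    for _ in range(n):
        z = z + h * f(z)
    return z

errs = []
for pert in [0.1, 0.01, 0.0]:
    f_approx = lambda z, p=pert: -z + p      # field error of size at most pert
    n = 1000
    h = T / n
    errs.append(abs(euler(f_approx, 1.0, h, n) - np.exp(-T)))
print(errs)  # shrinks with the field error, down to pure Euler error
```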
The preservation of volume entailed by both the theoretical Hamiltonian flow and the leapfrog integrator is important for HMC. The latter fact implies that there is no need for Jacobian corrections within the accept-reject step. It turns out that the NNgHMC dynamics also preserve volume, both for direct and for leapfrog simulation.
Lemma 1.
Both for infinitesimal and finite step sizes, the NNgHMC trajectory preserves volume.
Proof.
For the finite case, the leapfrog algorithm iterates between shear transformations and so preserves volume [10]. For the case of direct simulation, we use the fact that the vector field induced by the approximate gradient has zero divergence (Liouville's Theorem, [10]). We use the notation of Proposition 1, but drop the subscript n for the sake of readability:

(21) Σᵢ [∂/∂θᵢ (dθᵢ/dt) + ∂/∂rᵢ (drᵢ/dt)] = Σᵢ [∂(M⁻¹r)ᵢ/∂θᵢ - ∂(∇Ũ(θ))ᵢ/∂rᵢ] = 0,

since dθ/dt depends only on r and dr/dt depends only on θ. ∎
Not only does the NNgHMC trajectory preserve volume, it is reversible as well. This fact is shown in the proof of Theorem 1.
Theorem 1.
The NNgHMC transition kernel leaves the canonical distribution invariant.
Proof.
Since leapfrog integration preserves volume and since the Metropolis acceptance probability is the same as for classical HMC, all we need to show is that the leapfrog integration is reversible. This follows in exactly the same way as for HMC, despite the use of an approximate gradient field to generate the dynamics:

(22) r(t + ε/2) = r(t) - (ε/2) ∇Ũ(θ(t)), θ(t + ε) = θ(t) + ε M⁻¹ r(t + ε/2), r(t + ε) = r(t + ε/2) - (ε/2) ∇Ũ(θ(t + ε)).

These are the same equations as in [10], except with ∇Ũ replacing ∇U. Hence, just as in [10], the NNgHMC leapfrog equations are symmetric and thus reversible: to reverse a sequence of leapfrog dynamics, negate r, take the same number of steps, and negate r again. It follows that the NNgHMC transition kernel leaves the canonical distribution invariant and is an asymptotically exact method for sampling from the posterior distribution. ∎
Regardless of the accuracy of the neural network gradient approximation, following the leapfrog-simulated Hamiltonian proposal scheme recovers the true posterior distribution when combined with a Metropolis-Hastings correction. If the gradient approximation is "bad," NNgHMC degenerates to a random walk algorithm. If the gradient approximation is "close enough," NNgHMC behaves just like standard HMC, operating on energy level contours at a fraction of the computational cost. The neural network gradient approximation can be controlled with two tuning parameters, hidden layer size and training time, in addition to the number of leapfrog steps and the step size.
The neural network architecture is fixed to have one hidden layer, and the number of units has to be predetermined. Neural network training time can either be set to some number of epochs or depend on a stopping criterion (typically based on the change in the loss function between epochs). Since there is no noise (error) in the gradient, overfitting is not a concern; the hidden layer size and training time can be relatively large.
Given sufficient training data, the neural network will be able to accurately approximate the gradient field. The important question is: how much training data should be collected? To address this, we propose a training schedule that consists of a start point, an end point, and a rate for gradient data collection. For example, one may wish to run an HMC chain to draw 5000 samples in total. A training schedule could train the neural network every 200 draws between the 400th and 1000th draws. After the neural network is trained each time, one would use the approximated gradient to sample for some iterations. If the acceptance probability is similar to that of standard HMC, one would stop the training schedule and complete the entire chain with NNgHMC. Otherwise, standard HMC would be used to sample the remaining draws. Since training the neural network and using it to sample is computationally much cheaper than standard HMC, the training schedule adds little overhead even if the neural network gradient approximation fails.
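The schedule logic above can be sketched as a control loop. Everything below the comment line is a stub (the acceptance model, thresholds, and function names are illustrative, not from the paper); the point is the start/end/rate bookkeeping and the switch criterion:

```python
# Schematic of the proposed training schedule with stubbed samplers.
def hmc_step(store):              # stand-in for one standard HMC draw
    store.append("grad")          # each draw contributes gradient training pairs
    return 1.0

def nng_acceptance(n_train):      # stand-in: approximation improves with data
    return min(0.9, n_train / 800)

total, start, end, every = 5000, 400, 1000, 200
target_ap = 0.6                   # "similar to standard HMC"
train_store, draws, switched = [], 0, None
while draws < total:
    hmc_step(train_store)
    draws += 1
    at_checkpoint = start <= draws <= end and (draws - start) % every == 0
    if at_checkpoint:
        # Train the network on train_store here, then trial-run NNgHMC.
        if nng_acceptance(len(train_store)) >= target_ap:
            switched = draws      # finish the chain with cheap NNgHMC draws
            break
print(switched)  # first checkpoint where the approximation was good enough
```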
4 Experiments
In this section, we demonstrate the merits of the proposed method, accuracy and scalability, through a variety of experiments. The accuracy of the gradient approximation is reflected by an acceptance probability similar to that of standard HMC using the true gradient. Compared to draws from stochastic gradient HMC, the draws produced by the proposed method are much more similar to standard HMC draws. Scalability means better performance as both data size and dimensionality increase. The performance metric is effective sample size (ESS) adjusted by CPU time. ESS estimates the number of "independent" samples among n draws by taking the correlation ρₖ between samples at lag k into account:

ESS = n / (1 + 2 Σ_{k=1}^{∞} ρₖ).

The previous surrogate approach fails as dimensionality grows, while the proposed method continues to work well. Lastly, speed evaluation is done on three real data sets, and the proposed method consistently beats standard HMC even when the time to collect training data and train the neural network is included.
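A minimal ESS estimator of this form can be written directly from the definition, truncating the autocorrelation sum once it becomes negligible (the truncation threshold and the AR(1) sanity check below are illustrative choices, not the paper's exact estimator):

```python
import numpy as np

def ess(x, max_lag=500):
    """ESS = n / (1 + 2 * sum_k rho_k), truncated once rho_k is negligible."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()
    var = x @ x / n
    s = 0.0
    for k in range(1, max_lag):
        rho = (x[:-k] @ x[k:]) / (n * var)   # lag-k autocorrelation
        if rho < 0.05:
            break
        s += rho
    return n / (1 + 2 * s)

# Sanity check on an AR(1) chain: theoretical ESS is about n * (1 - phi) / (1 + phi).
rng = np.random.default_rng(2)
phi, n = 0.7, 20000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()
print(ess(x), n * (1 - phi) / (1 + phi))  # same order of magnitude
```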
Our proposed NNgHMC method is implemented in Keras and uses the default Adam optimizer [6] during training. All experiments are performed on a 3.4 GHz Intel Quad-Core CPU, and our code is available at github.com/linggeli7/hamiltonian.

4.1 Distributions with challenging gradient fields
The banana-shaped distribution in two dimensions can be sampled using the following unnormalized density

(23) p(θ₁, θ₂) ∝ exp(-θ₁²/(2σ₁²) - (θ₂ + Bθ₁² - 100B)²/(2σ₂²)),

where σ₁ and σ₂ control the scale in θ₁, θ₂ space and B determines the curvature. For HMC, the energy function is set to be U(θ) = -log p(θ₁, θ₂), and its gradient can be easily calculated. Standard HMC is used to sample 5000 draws with acceptance probability 0.58. Gradient values collected during the first 1000 draws are then used to train a neural network. With the same tuning parameters, NNgHMC is used to sample 5000 draws with acceptance probability 0.57. Figure 2 compares standard HMC and NNgHMC draws, the true and approximated gradient fields, and two long simulated leapfrog trajectories using both. The neural network learns the distorted gradient field accurately, and NNgHMC completely recovers the banana shape.
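As a concrete sketch, a common parameterization of the banana-shaped energy and its hand-derived gradient can be checked against finite differences (the parameter values for the scales and curvature below are illustrative):

```python
import numpy as np

s1, s2, B = 10.0, 1.0, 0.03   # illustrative scale and curvature values

def U(th):
    """Energy U = -log p for a banana-shaped density."""
    t1, t2 = th
    return t1 ** 2 / (2 * s1 ** 2) + (t2 + B * t1 ** 2 - 100 * B) ** 2 / (2 * s2 ** 2)

def grad_U(th):
    t1, t2 = th
    a = (t2 + B * t1 ** 2 - 100 * B) / s2 ** 2   # shared inner-term derivative
    return np.array([t1 / s1 ** 2 + a * 2 * B * t1, a])

# Finite-difference check of the hand-derived gradient.
th = np.array([1.5, -0.7])
eps = 1e-6
fd = np.array([(U(th + eps * e) - U(th - eps * e)) / (2 * eps) for e in np.eye(2)])
print(np.max(np.abs(grad_U(th) - fd)))  # small
```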
Next, we illustrate the proposed method on a multivariate Gaussian distribution with ill-conditioned covariance. The distribution is given by N(0, Σ), where Σ is a diagonal matrix with smallest value 1, largest value 100, and other values uniformly drawn between 1 and 100. As the distribution has very disparate scales in different dimensions, HMC needs accurate gradient information to move accordingly. For HMC, the leapfrog step size is set to 0.5 and the number of steps is set to 100 so that the acceptance probability is around 0.7. If the step size is too big, HMC misses the high density region in the narrowest dimension. Without a sufficiently long trajectory, HMC fails to explore the elongated tails in the widest dimension.

We collect sample gradients until 50 iterations after convergence to train the neural network. With the same tuning parameters as standard HMC, NNgHMC has acceptance probability around 0.5. Despite the slightly lower acceptance probability, as shown in Figure 3, NNgHMC converges to the true posterior just as standard HMC does. With more training data, the neural network would learn the gradient field more accurately, and NNgHMC would attain an acceptance probability similar to standard HMC.
4.2 200-dimensional Bayesian logistic regression
Next, we demonstrate the scalability of the proposed method on logistic regression with simulated data. The design matrix X has rows drawn from a 200-dimensional multivariate Gaussian distribution with mean zero and covariance Σ. The regression coefficients β are drawn independently from a Gaussian distribution. Given X and β, the binary response vector y is generated from a Bernoulli distribution with success probabilities logit⁻¹(Xβ). HMC makes 1000 draws in 300 seconds. 4000 training points and gradients, which come from 200 draws after convergence, are used for neural network training. With the same tuning parameters, NNgHMC makes 1000 draws in just 40 seconds at a comparable acceptance probability. HMC yields 1.5 effective samples per second, while NNgHMC yields 6.75 effective draws per second on average. The improvement in the ratio of effective sample size to CPU time is considerable and will only increase as the size of the data set grows.
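The whole-data gradient that NNgHMC's network replaces is, for logistic regression with a Gaussian prior, a single pass over all N observations. A minimal sketch (data sizes and the N(0, s²I) prior are illustrative), checked against finite differences of the energy:

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, s2 = 1000, 20, 1.0                     # illustrative sizes and prior variance
X = rng.normal(size=(n, d))
beta_true = rng.normal(size=d)
y = rng.random(n) < 1 / (1 + np.exp(-X @ beta_true))   # Bernoulli(sigmoid(X beta))

def U(beta):
    """Energy: negative log-likelihood plus negative log N(0, s2 I) prior."""
    z = X @ beta
    return np.sum(np.log1p(np.exp(z)) - y * z) + 0.5 * beta @ beta / s2

def grad_U(beta):
    p = 1 / (1 + np.exp(-X @ beta))
    return -X.T @ (y - p) + beta / s2        # the O(n) sum HMC repeats every step

beta = rng.normal(size=d) * 0.1
eps = 1e-6
fd = np.array([(U(beta + eps * e) - U(beta - eps * e)) / (2 * eps) for e in np.eye(d)])
print(np.max(np.abs(grad_U(beta) - fd)))  # small
```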
The choice of prior plays an important role in Bayesian inference, and it is common to fit models with different priors for sensitivity analysis. The gradient of the energy function ∇U(θ) is equal to the sum of the gradient of the negative log-likelihood and the gradient of the negative log prior. As the proposed method provides an accurate approximation of ∇U(θ) under a prior π₁, adding ∇log π₁(θ) - ∇log π₂(θ) yields an approximation of the energy gradient under a new prior π₂. In this case, NNgHMC can sample from the new posterior much faster than HMC without additional training. Figure 4 compares the NNgHMC and HMC posteriors.

Remark 1. While there are no fixed rules on the size of hidden layers, non-generative models typically have larger hidden layers than output layers. With input and output dimensions both being 200, a large hidden layer would lead to many total units, which is computationally expensive, while a network with a hidden layer half that size has half as many total units but is not nearly as expressive. Here we use eight disjoint hidden layers of size 50 to approximate 25-dimensional blocks of the gradient, cutting the number of total units down to 90,000. Figure 5 compares the training losses of these three networks.
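The prior-swapping identity is a one-line correction to the trained network's output. In this sketch, the exact gradient under the first prior stands in for the trained network (all densities and the swapped Gaussian priors are illustrative):

```python
import numpy as np

# If g_tilde approximates the energy gradient under prior pi1, then
# g_tilde - grad(-log pi1) + grad(-log pi2) approximates it under pi2.
grad_nll = lambda th: 2.0 * th         # some negative log-likelihood gradient
g_prior1 = lambda th: th / 1.0         # -grad log N(0, 1) prior
g_prior2 = lambda th: th / 4.0         # -grad log N(0, 4) prior

g_tilde = lambda th: grad_nll(th) + g_prior1(th)        # "trained" under prior 1
g_new = lambda th: g_tilde(th) - g_prior1(th) + g_prior2(th)

th = np.array([0.5, -2.0])
print(g_new(th), grad_nll(th) + g_prior2(th))  # identical by construction
```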
4.3 Lowdimensional models with expensive gradients
In this section, we evaluate our method using two models that involve costly gradient evaluations in spite of their typically low dimensions. First, we focus on the generalized autoregressive conditional heteroskedasticity (GARCH) model, a common econometric model that expresses the variance at time t as a function of previous observations and variances. The general GARCH(p, q) model is given by

(24) yₜ ~ N(0, σₜ²),

(25) σₜ² = α₀ + Σ_{i=1}^{q} αᵢ yₜ₋ᵢ² + Σ_{j=1}^{p} βⱼ σₜ₋ⱼ².

Conditioning on the first observations, the likelihood is the product of Gaussian densities. The likelihood and gradient calculation for GARCH models can be slow, as it has to be done iteratively and scales with the number of observations. As shown in Figure 6, 1000 observations are generated with a GARCH model and used as data for comparing standard HMC and NNgHMC. Truncated uninformative Gaussian priors are used because of GARCH stationarity constraints. 10000 draws are taken with standard HMC, and gradient values collected between 1000 and 2000 iterations are used for training. Training a neural network with hidden layer size 50 takes 5 seconds. With fixed tuning parameters, standard HMC and NNgHMC both have close to 0.7 acceptance probability, but the latter is more computationally efficient (Table 1).
Method  AP  ESS  CPU time  Median ESS/s  Speedup 
Standard  0.72  (99, 261, 424)  436s  0.60  1 
NNg  0.70  (116, 176, 303)  59s  2.98  4.98 
AP: acceptance probability
ESS: effective sample size (min, median, max) after removing 10% burn-in
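The sequential unrolling that makes GARCH gradients expensive is visible in the likelihood itself. A GARCH(1,1) sketch (the sample-variance initialization and the parameter values are common illustrative choices, not necessarily the paper's):

```python
import numpy as np

def garch_nll(y, alpha0, alpha1, beta1):
    """GARCH(1,1) negative log-likelihood: the sigma_t^2 recursion must be
    unrolled sequentially, so every evaluation scales with len(y)."""
    n = len(y)
    sig2 = np.empty(n)
    sig2[0] = np.var(y)                        # common initialization choice
    for t in range(1, n):
        sig2[t] = alpha0 + alpha1 * y[t - 1] ** 2 + beta1 * sig2[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi * sig2) + y ** 2 / sig2)

# Simulate from a stationary GARCH(1,1) and check that the true parameters
# score better than a badly misspecified alternative.
rng = np.random.default_rng(4)
a0, a1, b1 = 0.1, 0.2, 0.7                     # stationary: a1 + b1 < 1
n = 1000
y, s2 = np.empty(n), a0 / (1 - a1 - b1)        # start at the unconditional variance
for t in range(n):
    y[t] = np.sqrt(s2) * rng.normal()
    s2 = a0 + a1 * y[t] ** 2 + b1 * s2
print(garch_nll(y, a0, a1, b1) < garch_nll(y, 0.5, 0.05, 0.1))  # True
```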
Gaussian process regression is computationally expensive because the covariance matrix is n × n, and inverting it requires O(n³) computation. Here we consider a Gaussian process regression model with the Matérn kernel:

(26) y = f(X) + ε, f ~ GP(0, k),

(27) k(d) = σ² (2^{1-ν}/Γ(ν)) (√(2ν) d/ℓ)^ν K_ν(√(2ν) d/ℓ),

where d is the Euclidean distance between two observations xᵢ and xⱼ, and the length scale ℓ and smoothness ν are the hyperparameters. Γ denotes the gamma function, and K_ν is the modified Bessel function of the second kind. Here ν is fixed at 1.5 to limit Gaussian process draws to once-differentiable functions. It is common to add white noise σₙ² to the covariance matrix for numerical stability. Therefore, the second free hyperparameter besides ℓ is σₙ². Diffuse log-normal priors are used for the hyperparameters.

The data matrix is drawn from a multivariate Gaussian with mean zero and identity covariance. The response is obtained by mapping the data through a polynomial pattern and adding noise. 10000 draws are sampled using standard HMC; the acceptance probability is 0.83, but sampling is very time consuming. Gradients collected during the first 1000 draws are then used to train a neural network. Using the same tuning parameters, NNgHMC can sample 10000 draws in a much shorter time with the same acceptance probability (Table 2). Figure 7 compares the standard HMC and NNgHMC posteriors; Figure 8 compares Gaussian process model posterior draws along one particular direction.
Method  AP  ESS  CPU time  Median ESS/s  Speedup 
Standard  0.83  (5135, 5754, 7635)  1834s  3.14  1 
NNg  0.84  (4606, 6172, 7741)  50s  123.4  39.3 
AP: acceptance probability
ESS: effective sample size (min, median, max) after removing 10% burn-in
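For ν = 3/2, the Matérn kernel has the closed form k(d) = σ²(1 + √3 d/ℓ) exp(-√3 d/ℓ), which avoids the Bessel function entirely. A minimal sketch building the jittered covariance matrix (parameter values illustrative) and confirming it is symmetric positive definite:

```python
import numpy as np

def matern32(X, ell, sigma2=1.0, noise=1e-4):
    """Matern nu = 3/2 covariance with white-noise jitter on the diagonal."""
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))  # pairwise distances
    a = np.sqrt(3.0) * d / ell
    K = sigma2 * (1.0 + a) * np.exp(-a)
    return K + noise * np.eye(len(X))

rng = np.random.default_rng(5)
X = rng.normal(size=(50, 3))
K = matern32(X, ell=1.2)
# The jittered kernel matrix must admit a Cholesky factorization.
chol = np.linalg.cholesky(K)
print(np.min(np.linalg.eigvalsh(K)) > 0)  # True
```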
4.4 Comparison with stochastic gradient HMC
Naïve stochastic gradient HMC using minibatches of data is problematic, as the noisy gradient can push the sampler away from the target region. A more recent, advanced stochastic gradient method adds a friction term and is shown to sample from the true posterior asymptotically. The formulation of SGHMC [3] is given by:

(28) dθ = M⁻¹ r dt,

(29) dr = -∇Û(θ) dt - C M⁻¹ r dt + N(0, 2(C - B̂) dt),

where ∇Û(θ) is the minibatch estimate of ∇U(θ), B̂ accounts for the noise added to the gradient by subsampling, and C is the friction term. In practice, the friction term is set arbitrarily.

To further improve speed, SGHMC does not perform a Metropolis-Hastings correction and uses very small step sizes. The SGHMC posterior depends on the choice of step size; however, a priori, one would not know the optimal step size. Here we show that while SGHMC provides a fast approximation of the true posterior when data are abundant, the SGHMC posterior may not be suitable for inference.
In our experiment, we use the Cover Type data from the UCI machine learning repository. We run standard HMC for 4000 iterations. We also run SGHMC for 4000 iterations with default parameters and a range of step sizes.
Figure 9 illustrates the main issue with SGHMC. For these two marginal distributions, the SGHMC posteriors have roughly the same location but completely different shapes. On the other hand, NNgHMC posteriors agree with the standard HMC posteriors almost exactly.
Another comparison with SGHMC is performed with a Metropolis-Hastings correction. Here the subsampled data size is 5000, and the simulated trajectory is kept shorter so that less gradient noise compounds. While SGHMC is still faster, the quality of its samples is inferior to the proposed method, as indicated by the lower ESS in Table 3 and the less-mixed trace plot in Figure 10. Overall, NNgHMC still outperforms SGHMC in terms of median ESS per second.
Method  AP  ESS  CPU time  Median ESS/s  Speedup 
Standard  0.80  (73, 143, 10000)  3147s  0.05  1 
NNg  0.67  (57, 186, 7174)  710s  0.26  5.77 
SG  0.33  (49, 59, 246)  357s  0.17  3.64 
AP: acceptance probability
ESS: effective sample size (min, median, max) after removing 10% burn-in
4.5 Comparison with Gaussian process surrogate
We now compare our method to the Gaussian process surrogate approach with the squared exponential kernel, parametrized by a length scale ℓ. We also add white noise σₙ² to the covariance matrix. The squared exponential kernel, the default choice, is infinitely differentiable, and the derivative of a GP with this kernel is another Gaussian process. Given observations X and y, we can explicitly write down the mean of the derivative at a new point x*:

(30) E[∇f(x*)] = ∇_{x*} k(x*, X) (K + σₙ² I)⁻¹ y.

Here we estimate both the length scale in the squared exponential kernel and the white noise parameter jointly with maximum likelihood. The estimation requires inverting the observed covariance matrix, which is O(n³), where n is the number of observations.
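The derivative-of-the-mean formula can be verified directly: for the squared exponential kernel, ∇_{x*} k(x*, xᵢ) = -k(x*, xᵢ)(x* - xᵢ)/ℓ². A sketch on toy data (sizes, length scale, and noise level are illustrative), checked against finite differences of the posterior mean:

```python
import numpy as np

rng = np.random.default_rng(6)
n, d, ell, noise = 40, 3, 1.0, 0.1
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

# Squared exponential kernel between two sets of points.
k = lambda A, B: np.exp(-((A[:, None, :] - B[None, :, :]) ** 2).sum(-1) / (2 * ell ** 2))
alpha = np.linalg.solve(k(X, X) + noise * np.eye(n), y)   # (K + sigma_n^2 I)^{-1} y

def mean(xs):
    """GP posterior mean at a single point xs."""
    return k(xs[None, :], X)[0] @ alpha

def grad_mean(xs):
    kx = k(xs[None, :], X)[0]                        # k(x*, x_i)
    dk = -(xs[None, :] - X) / ell ** 2 * kx[:, None]  # d k(x*, x_i) / d x*
    return dk.T @ alpha

xs = rng.normal(size=d)
eps = 1e-6
fd = np.array([(mean(xs + eps * e) - mean(xs - eps * e)) / (2 * eps) for e in np.eye(d)])
print(np.max(np.abs(grad_mean(xs) - fd)))  # small
```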
We compare with the GP surrogate method on multivariate Gaussian distributions with covariance Σ, with the dimension varying from 10 to 40. For each Gaussian, we generate 500, 1000, or 2000 training data points to train the GP surrogate and the neural network. The neural network has 100 hidden units and is trained for 10 epochs. After training, both methods are used to draw 1000 samples.
Table 4 compares acceptance probability for the two methods. We can see that the neural network predicted gradient provides better approximation overall than the gradient of GP as indicated by higher acceptance probability. This advantage is more pronounced as dimensionality increases.
Method  Dimension  Training: 500  1000  2000
Gaussian process  10  0.65  0.61  0.57
  20  0.64  0.65  0.62
  40  0.31  0.32  0.32
Neural network gradient  10  0.95  0.96  0.97
  20  0.82  0.87  0.91
  40  0.61  0.75  0.87
4.6 Speed evaluation on real data
Similar to other surrogate methods, NNgHMC has three stages: training data collection, training, and sampling. We have demonstrated that a neural network can provide an accurate approximation of the gradient; however, the effectiveness of our method still needs to be evaluated in terms of time. If the neural network required too much training data, it would not reduce computation time. Here we first run standard HMC to draw a desired number of samples (10000) and record the time as a benchmark. Then we collect different amounts of training data (10%, 15%, and 20% of the total number of draws) and use NNgHMC to draw the remaining samples. The time to collect training data and train the neural network is included for NNgHMC. As shown in Table 5, 10% training data is sufficient for the neural network to learn the gradient and gives the most speedup. While adding more training data increases the quality of the gradient approximation, the computational cost outweighs the benefit of higher acceptance probability.
Method  AP  ESS  CPU time  Median ESS/s  Speedup 
Online News Popularity ()  
Standard  0.77  (777, 2021, 5929)  3607s  0.66  1 
NNg (10%)  0.61  (605, 1416, 4865)  502s  2.82  4.27 
NNg (15%)  0.64  (620, 1382, 5500)  678s  2.04  3.09 
NNg (20%)  0.68  (700, 1731, 5397)  854s  2.03  3.08 
Census Income ()  
Standard  0.84  (6306, 9390, 10000)  1796s  5.23  1 
NNg (10%)  0.60  (4023, 6024, 7156)  564s  10.68  2.04 
NNg (15%)  0.68  (4617, 7511, 9201)  656s  11.45  2.19 
NNg (20%)  0.76  (5036, 7558, 8696)  740s  10.21  1.95 
Dota2 Games Results ()  
Standard  0.75  (1677, 5519, 8621)  20760s  0.27  1 
NNg (10%)  0.59  (1446, 4197, 6442)  2903s  1.45  5.44 
NNg (15%)  0.70  (1901, 4865, 7600)  3911s  1.24  4.59 
NNg (20%)  0.74  (2432, 5744, 8860)  4992s  1.15  4.26 
AP: acceptance probability
ESS: effective sample size (min, median, max) after removing 10% burn-in
5 Discussion
Whereas HMC is helpful for computing large Bayesian models, its repeated gradient evaluations become overly costly for big data analysis. We have presented a method that circumvents the costly gradient evaluations, not by subsampling data batches but by learning an approximate gradient that is functionally free of the data. We find that multi-output, feedforward neural networks are ripe for this application: NNgHMC is able to handle models of comparatively large dimensionality.
The NNgHMC algorithm is an important paradigm shift away from the class of surrogate function approximate HMC algorithms, but this shift leaves many open questions. Much work is needed to extend NNgHMC to an online, adaptive methodology: what measures of approximation error will be useful criteria for ending the training regime of the algorithm, and are there benefits to iterating between training and sampling regimes? Are there valid second-order extensions to the NNgHMC algorithm à la Riemannian HMC? Finally, and most interestingly, can the representational power of deep neural networks be leveraged for more accurate approximations to the Hamiltonian flow?
References
 [1] Pierre Baldi and Peter Sadowski. A theory of local learning, the learning channel, and the optimality of backpropagation. Neural Networks, 83:51–74, 2016.
[2] MJ Betancourt. The fundamental incompatibility of Hamiltonian Monte Carlo and data subsampling. arXiv preprint arXiv:1502.01510, 2015.
[3] Tianqi Chen, Emily Fox, and Carlos Guestrin. Stochastic gradient Hamiltonian Monte Carlo. In International Conference on Machine Learning, pages 1683–1691, 2014.

[4] George Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals, and Systems (MCSS), 2(4):303–314, 1989.
[5] Guang-Bin Huang, Qin-Yu Zhu, and Chee-Kheong Siew. Extreme learning machine: a new learning scheme of feedforward neural networks. In Proceedings of the 2004 IEEE International Joint Conference on Neural Networks, volume 2, pages 985–990. IEEE, 2004.
 [6] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

[7] Shiwei Lan, Tan Bui-Thanh, Mike Christie, and Mark Girolami. Emulation of higher-order tensors in manifold Monte Carlo methods for Bayesian inverse problems. Journal of Computational Physics, 308:81–101, 2016.
[8] Benedict Leimkuhler and Sebastian Reich. Simulating Hamiltonian Dynamics, volume 14. Cambridge University Press, 2004.
 [9] Radford M Neal. Bayesian learning for neural networks, volume 118. Springer Science & Business Media, 2012.
[10] Radford M Neal et al. MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, 2:113–162, 2011.
[11] Carl Edward Rasmussen, JM Bernardo, MJ Bayarri, JO Berger, AP Dawid, D Heckerman, AFM Smith, and M West. Gaussian processes to speed up hybrid Monte Carlo for expensive Bayesian integrals. In Bayesian Statistics 7, pages 651–659, 2003.
[12] Max Welling and Yee W Teh. Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 681–688, 2011.
[13] Cheng Zhang, Babak Shahbaba, and Hongkai Zhao. Hamiltonian Monte Carlo acceleration using surrogate functions with random bases. Statistics and Computing, pages 1–18, 2015.