Combining deep learning with Bayesian uncertainty estimation has the potential to fit flexible and scalable models that are resistant to overfitting (MacKay, 1992b; Neal, 1995; Hinton & Van Camp, 1993). Stochastic variational inference is especially appealing because it closely resembles ordinary backprop (Graves, 2011; Blundell et al., 2015), but such methods typically impose restrictive factorization assumptions on the approximate posterior, such as fully independent weights. There have been attempts to fit more expressive approximating distributions which capture correlations such as matrix-variate Gaussians (Louizos & Welling, 2016; Sun et al., 2017) or multiplicative normalizing flows (Louizos & Welling, 2017), but fitting such models can be expensive without further approximations.
In this work, we introduce and exploit a surprising connection between natural gradient descent (Amari, 1998) and variational inference. In particular, several approximate natural gradient optimizers have been proposed which fit tractable approximations to the Fisher matrix to gradients sampled during training (Kingma & Ba, 2014; Martens & Grosse, 2015). While these procedures were described as natural gradient descent on the weights using an approximate Fisher matrix, we reinterpret these algorithms as natural gradient on a variational posterior using the exact Fisher matrix. Both the weight updates and the Fisher matrix estimation can be seen as natural gradient ascent on a unified evidence lower bound (ELBO), analogously to how Neal and Hinton (Neal & Hinton, 1998)
interpreted the E and M steps of Expectation-Maximization (E-M) as coordinate ascent on a single objective.
Using this insight, we give an alternative training method for variational Bayesian neural networks. For a factorial Gaussian posterior, it corresponds to a diagonal natural gradient method with weight noise, and matches the performance of Bayes By Backprop (Blundell et al., 2015), but converges faster. We also present noisy K-FAC, an efficient and GPU-friendly method for fitting a full matrix-variate Gaussian posterior, using a variant of Kronecker-Factored Approximate Curvature (K-FAC) (Martens & Grosse, 2015) with correlated weight noise.
2.1 Variational Inference for Bayesian Neural Networks
Given a dataset $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^N$, a Bayesian neural net (BNN) is defined in terms of a prior $p(w)$ on the weights, as well as the likelihood $p(\mathcal{D} \mid w)$. Variational Bayesian methods (Hinton & Van Camp, 1993; Graves, 2011; Blundell et al., 2015) attempt to fit an approximate posterior $q(w)$ to maximize the evidence lower bound (ELBO):

$$\mathcal{L}[q] = \mathbb{E}_{q}\big[\log p(\mathcal{D} \mid w)\big] - \lambda\, \mathrm{D}_{\mathrm{KL}}\big(q(w) \,\|\, p(w)\big) \quad (1)$$

where $\lambda$ is a regularization parameter and $\phi$ are the parameters of the variational posterior $q(w; \phi)$. Proper Bayesian inference corresponds to $\lambda = 1$, but other values may work better in practice on some problems.
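As a concrete illustration, the ELBO in eq. (1) can be estimated by Monte Carlo for a fully factorized Gaussian posterior with a spherical Gaussian prior. The sketch below is our own illustrative code, not from the paper; the toy log-likelihood and all variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kl(mu_q, var_q, var_p):
    """KL( N(mu_q, diag(var_q)) || N(0, var_p * I) ), summed over weights."""
    return 0.5 * np.sum(var_q / var_p + mu_q**2 / var_p
                        - 1.0 - np.log(var_q / var_p))

def elbo_estimate(log_lik, mu, var, lam=1.0, prior_var=1.0, n_samples=64):
    """Monte Carlo estimate of E_q[log p(D|w)] - lam * KL(q || p)."""
    eps = rng.standard_normal((n_samples, mu.size))
    samples = mu + np.sqrt(var) * eps          # reparameterized draws w ~ q
    exp_ll = np.mean([log_lik(w) for w in samples])
    return exp_ll - lam * gaussian_kl(mu, var, prior_var)

# Toy likelihood (a stand-in for a BNN's data term): log p(D|w) = -0.5 ||w - 1||^2
log_lik = lambda w: -0.5 * np.sum((w - 1.0) ** 2)
mu, var = np.zeros(3), 0.1 * np.ones(3)
print(elbo_estimate(log_lik, mu, var))
```

Setting `lam=1.0` recovers proper Bayesian inference; other values reweight the KL term as described above.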
2.2 Gradient Estimators for Gaussian Distributions
To optimize the ELBO, we must estimate the derivative of eq. (1) w.r.t. the variational parameters $\phi$. The standard approach uses the pathwise derivative estimator, also known as the reparameterization trick (Williams, 1992; Blundell et al., 2015; Kingma & Welling, 2013; Rezende et al., 2014): a sample $w \sim q$ is written as $w = \mu + \Sigma^{1/2}\epsilon$ with $\epsilon \sim \mathcal{N}(0, I)$, and the gradient is backpropagated through this transformation. However, in the case of a Gaussian distribution with parameters $(\mu, \Sigma)$, there is another estimator given by Opper & Archambeau (2009):

$$\nabla_{\mu}\,\mathbb{E}_{q}[f(w)] = \mathbb{E}_{q}\big[\nabla_{w} f(w)\big], \qquad \nabla_{\Sigma}\,\mathbb{E}_{q}[f(w)] = \tfrac{1}{2}\,\mathbb{E}_{q}\big[\nabla^{2}_{w} f(w)\big] \quad (2)$$

which are due to Bonnet (1964) and Price (1958), respectively. Both identities can be proved through integration by parts. In the case of a Gaussian distribution, eq. (2) is equivalent to the pathwise derivative estimator for $\mu$.
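The Price identity in eq. (2) can be checked numerically against the pathwise estimator on a simple test function. This is our own self-contained check with the toy function $f(w) = w^4$, for which the exact derivative is $\partial\,\mathbb{E}[w^4]/\partial\sigma^2 = 6(\mu^2 + \sigma^2)$.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.0, 0.5
n = 500_000
eps = rng.standard_normal(n)
w = mu + sigma * eps                      # reparameterized samples w ~ N(mu, sigma^2)

# Price (1958): d/d(sigma^2) E[f(w)] = 0.5 * E[f''(w)], with f(w) = w^4
price = 0.5 * np.mean(12.0 * w**2)

# Pathwise estimator: d/d(sigma) E[f] = E[f'(w) * eps]; the chain rule
# converts it to d/d(sigma^2) via division by 2*sigma.
pathwise = np.mean(4.0 * w**3 * eps) / (2.0 * sigma)

analytic = 6.0 * (mu**2 + sigma**2)       # exact derivative for f(w) = w^4
print(price, pathwise, analytic)          # all close to 7.5
```

Both estimators agree with the analytic value up to Monte Carlo error, illustrating the equivalence noted above.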
2.3 Natural Gradient
Natural gradient descent is a second-order optimization method originally proposed by Amari (1997). There are two variants of natural gradient commonly used in machine learning, which do not have standard names, but which we refer to as natural gradient for point estimation (NGPE) and natural gradient for variational inference (NGVI).
In natural gradient for point estimation (NGPE), we assume the neural network computes a predictive distribution $p(y \mid x; w)$ and we wish to maximize a cost function $h(w)$, which may be the data log-likelihood. The natural gradient is the direction of steepest ascent in the Fisher information norm, and is given by $\tilde{\nabla} h = \mathbf{F}^{-1} \nabla_{w} h$, where $\mathbf{F} = \mathrm{Cov}_{x,y}\big[\nabla_{w} \log p(y \mid x; w)\big]$, and the covariance is with respect to $x$ sampled from the data distribution and $y$ sampled from the model's predictions. NGPE is typically justified as a way to speed up optimization; see Martens (2014) for a comprehensive overview.
We now describe natural gradient for variational inference (NGVI) in the context of BNNs. We wish to fit the parameters $\phi$ of a variational posterior $q(w; \phi)$ to maximize the ELBO (eq. (1)). Analogously to the point estimation setting, the natural gradient is defined as $\tilde{\nabla}_{\phi} \mathcal{L} = \mathbf{F}^{-1} \nabla_{\phi} \mathcal{L}$; but in this case, $\mathbf{F}$ is the Fisher matrix of $q$, i.e. $\mathbf{F} = \mathrm{Cov}_{w \sim q}\big[\nabla_{\phi} \log q(w; \phi)\big]$. Note that in contrast with point estimation, $\mathbf{F}$ is a metric on $\phi$, rather than $w$, and its definition does not directly involve the data. Interestingly, because $q$ is chosen to be tractable, the natural gradient can be computed exactly, and in many cases it is even simpler than the ordinary gradient.
In general, NGPE and NGVI need not behave similarly; however, in Section 3, we show that in the case of Gaussian variational posteriors, the two are closely related.
2.4 Kronecker-Factored Approximate Curvature
As modern neural networks may contain millions of parameters, computing and storing the exact Fisher matrix and its inverse is impractical. Kronecker-factored approximate curvature (K-FAC) (Martens & Grosse, 2015) uses a Kronecker-factored approximation to the Fisher to perform efficient approximate natural gradient updates. Consider the $l$-th layer in the neural network, whose input activations are $a_{l-1}$, weight matrix is $W_l$, and output is $s_l$; we have $s_l = W_l a_{l-1}$. For simplicity, we define the following additional notation:

$$\mathcal{D}v = \nabla_{v} \log p(y \mid x; w), \qquad g_l = \mathcal{D}s_l$$
Therefore, the weight gradient is $\mathcal{D}W_l = g_l a_{l-1}^{\top}$. With this gradient formula, K-FAC decouples this layer's Fisher matrix by approximating $a_{l-1}$ and $g_l$ as statistically independent:

$$\mathbf{F}_l = \mathbb{E}\big[\mathrm{vec}(\mathcal{D}W_l)\,\mathrm{vec}(\mathcal{D}W_l)^{\top}\big] \approx \mathbb{E}\big[a_{l-1} a_{l-1}^{\top}\big] \otimes \mathbb{E}\big[g_l g_l^{\top}\big] = \mathbf{A}_{l-1} \otimes \mathbf{S}_l \quad (3)$$
Further, assuming between-layer independence, the whole Fisher matrix can be approximated as block diagonal, consisting of layerwise Fisher matrices $\mathbf{F}_l$. Decoupling $\mathbf{F}_l$ into $\mathbf{A}_{l-1}$ and $\mathbf{S}_l$ not only avoids the quadratic storage cost of the exact Fisher, but also enables tractable computation of the approximate natural gradient:

$$\mathbf{F}_l^{-1} \mathrm{vec}(\mathcal{D}W_l) = \big(\mathbf{A}_{l-1}^{-1} \otimes \mathbf{S}_l^{-1}\big)\, \mathrm{vec}(\mathcal{D}W_l) = \mathrm{vec}\big(\mathbf{S}_l^{-1}\, \mathcal{D}W_l\, \mathbf{A}_{l-1}^{-1}\big) \quad (4)$$
As shown by eq. (4), computing the natural gradient using K-FAC requires only transformations of matrices comparable in size to $W_l$, making it very efficient.
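The Kronecker identity behind eq. (4) is easy to verify numerically: inverting the full Kronecker product agrees with two small per-factor solves. This sketch uses our own random SPD factors and the column-stacking vec convention, for which $(\mathbf{A} \otimes \mathbf{S})\,\mathrm{vec}(G) = \mathrm{vec}(\mathbf{S} G \mathbf{A}^{\top})$.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spd(n):
    B = rng.standard_normal((n, n))
    return B @ B.T + n * np.eye(n)        # symmetric positive definite

A = random_spd(3)                         # input-activation factor (n_in x n_in)
S = random_spd(4)                         # pre-activation-gradient factor (n_out x n_out)
G = rng.standard_normal((4, 3))           # weight gradient DW, shape (n_out, n_in)

# Naive: invert the full (n_in * n_out)-dimensional Kronecker-factored Fisher.
vecG = G.reshape(-1, order="F")           # column-stacking vec
naive = np.linalg.solve(np.kron(A, S), vecG)

# K-FAC: two solves against the small factors, vec(S^{-1} G A^{-1}).
kfac = np.linalg.solve(S, G) @ np.linalg.inv(A)

assert np.allclose(naive, kfac.reshape(-1, order="F"))
```

The naive solve costs cubic time in the number of weights of the layer, while the factored form only ever touches matrices the size of the activations and pre-activations.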
3 Variational Inference using Noisy Natural Gradient
In this section, we draw a surprising relationship between natural gradient for point estimation (NGPE) of the weights of a neural net, and natural gradient for variational inference (NGVI) of a Gaussian posterior. (These terms are explained in Section 2.3.) In particular, we show that the NGVI updates can be approximated with a variant of NGPE with adaptive weight noise which we term Noisy Natural Gradient (NNG). This insight allows us to train variational posteriors with a variety of structures using noisy versions of existing optimization algorithms (see Figure 1).
In NGVI, our goal is to maximize the ELBO (eq. (1)) with respect to the parameters of a variational posterior distribution $q(w; \phi)$. We assume $q$ is a multivariate Gaussian parameterized by $\phi = (\mu, \Sigma)$. Building on eq. (2), we determine the natural gradient of the ELBO with respect to $\mu$ and the precision matrix $\Lambda = \Sigma^{-1}$ (see supplement for details):

$$\tilde{\nabla}_{\mu} \mathcal{L} = \Lambda^{-1}\, \mathbb{E}_{q}\big[\nabla_{w} \log p(\mathcal{D} \mid w) + \lambda \nabla_{w} \log p(w)\big] \quad (5)$$
$$\tilde{\nabla}_{\Lambda} \mathcal{L} = -\mathbb{E}_{q}\big[\nabla^{2}_{w} \log p(\mathcal{D} \mid w)\big] - \lambda \nabla^{2}_{w} \log p(w) - \lambda \Lambda$$
We make several observations. First, the term inside the expectation in eq. (5) is the gradient for MAP estimation of $w$. Second, the update for $\mu$ is preconditioned by $\Lambda^{-1}$, which encourages faster movement in directions of higher posterior uncertainty. Finally, the fixed point equation for $\Lambda$ is given by

$$\Lambda = -\tfrac{1}{\lambda}\,\mathbb{E}_{q}\big[\nabla^{2}_{w} \log p(\mathcal{D} \mid w)\big] - \nabla^{2}_{w} \log p(w) \quad (6)$$
Hence, if $\lambda = 1$, $\Lambda$ will tend towards the expected Hessian of $-\log\big(p(\mathcal{D} \mid w)\,p(w)\big)$, so the update rule for $\mu$ will somewhat resemble a Newton-Raphson update. For simplicity, we further assume a spherical Gaussian prior $p(w) = \mathcal{N}(0, \eta I)$; then $\nabla^{2}_{w} \log p(w) = -\eta^{-1} I$. In each iteration, we sample $w \sim q(w; \phi)$ and a mini-batch of training examples, and apply a stochastic natural gradient update based on eq. (5):

$$\mu \leftarrow \mu + \alpha\, \Lambda^{-1}\big(\nabla_{w} \log p(\mathcal{D} \mid w) - \lambda \eta^{-1} w\big) \quad (7)$$
$$\Lambda \leftarrow (1 - \lambda\beta)\,\Lambda - \beta\, \nabla^{2}_{w} \log p(\mathcal{D} \mid w) + \lambda\beta\eta^{-1} I$$
where $\alpha$ and $\beta$ are separate learning rates for $\mu$ and $\Lambda$, and $N$ is the number of training examples. Roughly speaking, the update rule for $\Lambda$ corresponds to an exponential moving average of the Hessian, and the update rule for $\mu$ is a stochastic Newton step using $\Lambda$.
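On a model whose log-likelihood is exactly quadratic, these updates can be checked end to end: for Bayesian linear regression with $\lambda = 1$, the fixed point of $\Lambda$ is the exact posterior precision, and $\mu$ settles near the exact posterior mean. The code below is our own toy instantiation; the problem sizes, learning rates, and variable names are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Bayesian linear regression: log p(D|w) = -0.5 ||X w - y||^2, prior N(0, eta*I).
N, d, eta = 50, 3, 1.0
X = rng.standard_normal((N, d))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(N)

# Exact posterior, for reference.
Lambda_star = X.T @ X + np.eye(d) / eta
mu_star = np.linalg.solve(Lambda_star, X.T @ y)

alpha, beta, lam = 0.05, 0.1, 1.0
mu, Lambda = np.zeros(d), np.eye(d)
for _ in range(3000):
    w = rng.multivariate_normal(mu, np.linalg.inv(Lambda))  # sample w ~ q
    grad = X.T @ (y - X @ w) - lam * w / eta                # MAP gradient at w
    hess = -(X.T @ X)                                       # Hessian of log p(D|w)
    mu = mu + alpha * np.linalg.solve(Lambda, grad)
    Lambda = (1 - lam * beta) * Lambda + beta * (-hess) + lam * beta / eta * np.eye(d)

print(np.linalg.norm(mu - mu_star))   # small: mu hovers near the posterior mean
```

Here the Hessian is available exactly, so no Fisher approximation is needed; for a neural network, `hess` would be replaced by a tractable approximation, as discussed next.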
This update rule has two problems. First, the log-likelihood Hessian may be hard to compute, and is undefined at some points for neural nets which use not-everywhere-differentiable activation functions such as ReLU. Second, if the negative log-likelihood is non-convex (as is the case for neural networks), the Hessian could have negative eigenvalues, so the update may result in a $\Lambda$ which is not positive semidefinite. We circumvent both of these problems by approximating the negative log-likelihood Hessian with the NGPE Fisher matrix $\mathbf{F} = \mathbb{E}\big[\mathcal{D}w\,\mathcal{D}w^{\top}\big]$:

$$\Lambda \leftarrow (1 - \lambda\beta)\,\Lambda + \beta\,\mathcal{D}w\,\mathcal{D}w^{\top} + \lambda\beta\eta^{-1} I \quad (8)$$
This approximation guarantees that $\Lambda$ is positive semidefinite, and it allows for tractable approximations such as K-FAC (see below). In the context of BNNs, approximating the log-likelihood Hessian with the Fisher was first proposed by Graves (2011), so we refer to it as the Graves approximation. In the case where the output layer of the network represents the natural parameters of an exponential family distribution (as is typical in regression or classification), the Graves approximation can be justified in terms of the generalized Gauss-Newton approximation to the Hessian; see Martens (2014) for details.¹

¹ Eq. (8) leaves ambiguous what distribution the gradients are sampled from. Throughout our experiments, we sample the targets from the model's predictions, as done in K-FAC (Martens & Grosse, 2015). The resulting $\mathbf{F}$ is known as the true Fisher. The alternative is to use the SGD gradients, giving the empirical Fisher. The true Fisher is a better approximation to the Hessian (Martens, 2014).
3.1 Simplifying the Update Rules
We have now derived a stochastic natural gradient update rule for Gaussian variational posteriors. In this section, we rewrite the update rules in order to disentangle hyperparameters and highlight relationships with NGPE. First, if the prior variance $\eta$ is fixed,² then $\Lambda$ is a damped version of a moving average $\bar{\mathbf{F}}$ of the Fisher matrix, $\Lambda = \tfrac{N}{\lambda}\big(\bar{\mathbf{F}} + \tfrac{\lambda}{N\eta} I\big)$, and we can rewrite the update eq. (8):

$$\bar{\mathbf{F}} \leftarrow (1 - \tilde{\beta})\,\bar{\mathbf{F}} + \tilde{\beta}\,\mathcal{D}w\,\mathcal{D}w^{\top} \quad (9)$$

² For simplicity, we assume the prior is a spherical Gaussian and its variance is fixed. Otherwise, we can keep an exponential moving average of the prior Hessian.
In eq. (9), we avoid an awkward interaction between the KL weight $\lambda$ and the learning rates by writing the update rules in terms of alternative learning rates $\tilde{\alpha} = \alpha\lambda$ and $\tilde{\beta} = \beta\lambda$. We also rewrite the update rule for $\mu$:

$$\mu \leftarrow \mu + \tilde{\alpha}\,\big(\bar{\mathbf{F}} + \tfrac{\lambda}{N\eta} I\big)^{-1}\Big(\nabla_{w} \log p(y \mid x; w) - \tfrac{\lambda}{N\eta}\,w\Big) \quad (10)$$
Observe that if $\mu$ is viewed as a point estimate of the weights, this update rule resembles NGPE with an exponential moving average of the Fisher matrix. The differences are that the Fisher matrix $\bar{\mathbf{F}}$ is damped by adding $\tfrac{\lambda}{N\eta} I$, and that the weights are sampled from $q(w; \phi)$, which is a Gaussian with covariance $\Sigma = \tfrac{\lambda}{N}\big(\bar{\mathbf{F}} + \tfrac{\lambda}{N\eta} I\big)^{-1}$. Because our update rule so closely resembles NGPE with correlated weight noise, we refer to this method as Noisy Natural Gradient (NNG).
Interestingly, in second-order optimization, it is very common to dampen the updates by adding a multiple of the identity matrix to the curvature before inversion in order to compensate for errors in the quadratic approximation to the cost. NNG automatically achieves this effect, with the strength of the damping being $\tfrac{\lambda}{N\eta}$; we refer to this as intrinsic damping. In practice, it may be advantageous to add additional extrinsic damping for purposes of stability.
3.3 Fitting Fully Factorized Gaussian Posteriors with Noisy Adam
The discussion so far has concerned NGVI updates for a full covariance Gaussian posterior. Unfortunately, the number of parameters needed to represent a full covariance Gaussian is quadratic in the number of weights; since the weight count can be in the millions even for a relatively small network, representing a full covariance Gaussian is impractical. There has been much work on tractable approximations to second-order optimization. In the context of NNG, imposing structure on $\bar{\mathbf{F}}$ also imposes structure on the form of the variational posterior. We now discuss two kinds of structure one can impose.
For a fully factorized Gaussian posterior, $\bar{\mathbf{F}}$ is diagonal, so the moving average tracks the squared gradients coordinate-wise. These update rules are similar in spirit to methods such as Adam, but with the addition of adaptive weight noise. We note that these update rules also differ from Adam in some details: (1) Adam keeps exponential moving averages of the gradients, which is equivalent to momentum, and (2) Adam applies a square root to the entries of $\bar{\mathbf{F}}$ in the denominator. We define noisy Adam by adding a momentum term to be consistent with Adam. We regard difference (2) as inessential: the choice of squaring or division may affect optimization performance, but it does not change the fixed points; i.e., both fit the same functional form of the variational posterior using the same variational objective. The full procedure is given in Alg. 1.
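A minimal sketch of a noisy-Adam-style loop on a toy problem appears below. This is our simplification (no bias correction, a hand-built toy gradient, and invented variable names); the precise algorithmic details should be taken from the paper's Alg. 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-example log-likelihood gradient: each example pulls w toward target t.
t = np.array([5.0, -3.0])
grad_loglik = lambda w: -(w - t)

N, eta, lam = 100.0, 1.0, 1.0             # dataset size, prior variance, KL weight
gamma = lam / (N * eta)                   # intrinsic damping / weight decay
alpha, beta1, beta2 = 0.01, 0.9, 0.99

mu, m, f = np.zeros(2), np.zeros(2), np.zeros(2)
mu_sum = np.zeros(2)
for i in range(5000):
    sigma2 = lam / (N * (f + gamma))      # per-weight posterior variance
    w = mu + np.sqrt(sigma2) * rng.standard_normal(2)   # adaptive weight noise
    g = grad_loglik(w)
    f = beta2 * f + (1 - beta2) * g * g   # EMA of squared grads ~ diagonal Fisher
    m = beta1 * m + (1 - beta1) * (g - gamma * w)       # momentum on MAP gradient
    mu = mu + alpha * m / (f + gamma)     # division, no square root
    if i >= 4000:
        mu_sum += mu                      # average late iterates to smooth noise
mu_avg = mu_sum / 1000
print(mu_avg)                             # close to t, shrunk slightly by the prior
```

The update divides by the damped Fisher estimate rather than its square root, matching the natural gradient interpretation discussed above.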
3.4 Fitting Matrix Variate Gaussian Posteriors with Noisy K-FAC
There has been much interest in fitting BNNs with matrix-variate Gaussian (MVG) posteriors³ in order to compactly capture posterior correlations between different weights (Louizos & Welling, 2016; Sun et al., 2017). Let $W_l$ denote the weights for one layer of a fully connected network. An MVG distribution is a Gaussian distribution whose covariance is a Kronecker product, i.e. $\mathrm{vec}(W_l) \sim \mathcal{N}\big(\mathrm{vec}(M_l),\, \Sigma_2 \otimes \Sigma_1\big)$. MVGs are potentially powerful due to their compact representation⁴ of posterior covariances between weights. However, fitting MVG posteriors is difficult, since computing the gradients and enforcing the positive semidefinite constraint for $\Sigma_1$ and $\Sigma_2$ typically requires expensive matrix operations such as inversion. Therefore, existing methods for fitting MVG posteriors typically impose additional structure such as diagonal covariance (Louizos & Welling, 2016) or products of Householder transformations (Sun et al., 2017) to ensure efficient updates.

³ When we refer to a BNN with an "MVG posterior", we mean that the weights in different layers are independent, and the weights for each layer follow an MVG distribution.
⁴ If $W_l$ is of size $m \times n$, then the MVG covariance requires approximately $m^2 + n^2$ parameters to represent, in contrast with a full covariance matrix over $\mathrm{vec}(W_l)$, which would require $m^2 n^2$.
We observe that K-FAC (Martens & Grosse, 2015) uses a Kronecker-factored approximation to the Fisher matrix for each layer's weights, as in eq. (3). By plugging this approximation into eq. (9), we obtain an MVG posterior. In more detail, each block obeys the Kronecker factorization $\mathbf{F}_l = \mathbf{A}_{l-1} \otimes \mathbf{S}_l$, where $\mathbf{A}_{l-1} = \mathbb{E}\big[a_{l-1} a_{l-1}^{\top}\big]$ and $\mathbf{S}_l = \mathbb{E}\big[g_l g_l^{\top}\big]$ are the covariance matrices of the activations and the pre-activation gradients, respectively. K-FAC estimates $\mathbf{A}_{l-1}$ and $\mathbf{S}_l$ online using exponential moving averages which, conveniently for our purposes, are closely analogous to the exponential moving averages defining $\bar{\mathbf{F}}$ in eq. (9):

$$\bar{\mathbf{A}}_{l-1} \leftarrow (1 - \tilde{\beta})\,\bar{\mathbf{A}}_{l-1} + \tilde{\beta}\, a_{l-1} a_{l-1}^{\top}, \qquad \bar{\mathbf{S}}_l \leftarrow (1 - \tilde{\beta})\,\bar{\mathbf{S}}_l + \tilde{\beta}\, g_l g_l^{\top}$$
Because these factors are estimated from empirical covariances, they (and hence also $\bar{\mathbf{F}}_l$) are automatically positive semidefinite.
Plugging the above formulas into eq. (9) does not quite yield an MVG posterior due to the addition of the prior Hessian. In general, there may be no compact representation of $\Lambda_l$. However, for spherical Gaussian priors,⁵ we can approximate $\Lambda_l$ using a trick proposed by Martens & Grosse (2015) in the context of damping, which distributes the damping term $\tfrac{\lambda}{N\eta} I$ across the two Kronecker factors. In this way, the covariance decomposes as the Kronecker product of two terms:

$$\Sigma_l = \tfrac{\lambda}{N}\,\big(\bar{\mathbf{A}}_{l-1} + \pi_l \sqrt{\tfrac{\lambda}{N\eta}}\, I\big)^{-1} \otimes \big(\bar{\mathbf{S}}_l + \tfrac{1}{\pi_l} \sqrt{\tfrac{\lambda}{N\eta}}\, I\big)^{-1}$$

⁵ We consider spherical Gaussian priors for simplicity, but this trick can be extended to any prior whose Hessian is Kronecker-factored, such as group sparsity.
This factorization corresponds to a matrix-variate Gaussian posterior, where the scalar $\tfrac{\lambda}{N}$ is arbitrarily assigned to the first factor. We refer to this BNN training method as noisy K-FAC. The full algorithm is given as Alg. 2.
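Sampling from an MVG posterior never requires forming the full covariance: with Cholesky factors of the two small covariance factors, $W = M + L_U E L_V^{\top}$ with $E$ i.i.d. standard normal has $\mathrm{vec}(W) \sim \mathcal{N}(\mathrm{vec}(M),\, V \otimes U)$ under the column-stacking vec convention. The check below, using our own random stand-in factors, verifies this empirically.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spd(n):
    B = rng.standard_normal((n, n))
    return B @ B.T + n * np.eye(n)

# Small stand-ins for the damped Kronecker factors of one layer.
S_l, A_l = random_spd(2), random_spd(2)
U = np.linalg.inv(S_l)                     # row (output-side) covariance factor
V = np.linalg.inv(A_l)                     # column (input-side) covariance factor
L_u, L_v = np.linalg.cholesky(U), np.linalg.cholesky(V)

M = np.zeros((2, 2))                       # posterior mean of the weights
n = 100_000
E = rng.standard_normal((n, 2, 2))
W = M + L_u @ E @ L_v.T                    # batched: each sample is M + L_u E L_v^T

vecW = W.transpose(0, 2, 1).reshape(n, 4)  # column-stacking vec of each sample
emp = vecW.T @ vecW / n                    # empirical covariance (mean is zero)
assert np.allclose(emp, np.kron(V, U), atol=0.05)
```

Each weight sample thus costs only two small matrix multiplications, which is what makes correlated weight noise practical at scale.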
Table 1: Averaged test RMSE and test log-likelihood on the UCI regression benchmarks.
3.5 Block Tridiagonal Covariance
Both the fully factorized and MVG posteriors assumed independence between layers. However, in practice the weights in different layers can be tightly coupled. To better capture these dependencies, we propose to approximate $\mathbf{F}$ using the block tridiagonal approximation from Martens & Grosse (2015). The resulting posterior covariance is block tridiagonal, so it accounts for dependencies between adjacent layers. The noisy version of block tridiagonal K-FAC is completely analogous to the block diagonal version, but since the approximation is rather complicated, we refer the reader to Martens & Grosse (2015) for details.
4 Related Work
Variational inference was first applied to neural networks by Peterson (1987) and Hinton & Van Camp (1993). More recently, Graves (2011) proposed a practical method for variational inference with fully factorized Gaussian posteriors which used a simple (but biased) gradient estimator. Improving on that work, Blundell et al. (2015) proposed an unbiased gradient estimator using the reparameterization trick of Kingma & Welling (2013). Kingma et al. (2015) observed that the variance of the stochastic gradients can be significantly reduced by the local reparameterization trick, where global uncertainty in the weights is translated into local uncertainty in the activations.
There has also been much work on modeling the correlations between weights using more complex Gaussian variational posteriors. Louizos & Welling (2016) introduced the matrix-variate Gaussian posterior as well as a Gaussian process approximation. Sun et al. (2017) decomposed the correlations of a matrix-variate Gaussian posterior into unitary transformations and a factorial Gaussian. Inspired by the idea of normalizing flows in latent variable models (Rezende & Mohamed, 2015), Louizos & Welling (2017) applied normalizing flows to auxiliary latent variables to produce more flexible approximate posteriors.
Since natural gradient was proposed by Amari (1998), there has been much work on tractable approximations. Hoffman et al. (2013) observed that for exponential family posteriors, the exact natural gradient could be tractably computed using stochastic versions of variational Bayes E-M updates. Martens & Grosse (2015) proposed K-FAC for performing efficient natural gradient optimization in deep neural networks. Following on that work, K-FAC has been adopted in many tasks (Grosse & Martens, 2016; Wu et al., 2017) to gain optimization benefits, and was shown to be amenable to distributed computation (Ba et al., 2017).
Khan et al. (2017a) independently derived a stochastic Newton update similar to eq. (5). Their focus was on variational optimization (VO) (Staines & Barber, 2012), which can be seen as a special case of variational inference obtained by omitting the KL term, and they derived only the diagonal version (see Section 3.3). Assuming a Gaussian variational distribution, NNG can be extended to VO by modifying the Fisher update to keep a running sum of the individual Fisher matrices.
5 Experiments

In this section, we conducted a series of experiments to investigate the following questions: (1) How does noisy natural gradient (NNG) compare with existing methods in terms of prediction performance? (2) Is NNG able to scale to large datasets and modern-size convolutional neural networks? (3) Can NNG achieve better uncertainty estimates? (4) Does it enable more efficient exploration in active learning and reinforcement learning?
Our methods with a full-covariance multivariate Gaussian, a fully factorized Gaussian, a matrix-variate Gaussian, and a block-tridiagonal posterior are denoted NNG-full, NNG-FFG (noisy Adam), NNG-MVG (noisy K-FAC), and NNG-BlkTri, respectively.
We first experimented with regression datasets from the UCI collection (Asuncion & Newman, 2007). All experiments used networks with one hidden layer unless stated otherwise. We compared our method with Bayes By Backprop (BBB) (Blundell et al., 2015) and probabilistic backpropagation (PBP) with a factorial Gaussian posterior (Hernández-Lobato & Adams, 2015). The results for PBP_MV (Sun et al., 2017) and VMG (Louizos & Welling, 2016) can be found in the supplement.
Following previous work (Hernández-Lobato & Adams, 2015; Louizos & Welling, 2016), we report the standard metrics, including root mean square error (RMSE) and test log-likelihood. The results are summarized in Table 1. As the results show, our NNG-FFG performed similarly to BBB (Blundell et al., 2015), indicating that the Graves approximation did not cause a performance hit. Our NNG-MVG method achieved substantially better RMSE and log-likelihoods than BBB and PBP due to the more flexible posterior. Moreover, NNG-MVG outperforms PBP_MV (Sun et al., 2017) on all datasets other than Yacht and Year, even though PBP_MV also uses a matrix-variate Gaussian posterior.
Table 2: Test accuracy on CIFAR-10 under different regularization settings, where D denotes data augmentation and B denotes Batch Normalization. We leave N/A for BBB and noisy Adam with BN, since they are extremely unstable and work only with a very small $\lambda$.
To evaluate the scalability of our method to large networks, we applied noisy K-FAC to a modified version of the VGG16⁶ network (Simonyan & Zisserman, 2014) and tested it on the CIFAR-10 benchmark (Krizhevsky, 2009). It is straightforward to incorporate noisy K-FAC into convolutional layers using Kronecker Factors for Convolution (Grosse & Martens, 2016). We compared our method to SGD with momentum, K-FAC, and BBB in terms of test accuracy. Results are shown in Table 2. Noisy K-FAC achieves the highest accuracy in all configurations except the one where both data augmentation and Batch Normalization (BN) (Ioffe & Szegedy, 2015) are used. When no extra regularization is used, noisy K-FAC shows a gain of more than 3% (85.52% versus 82.39%).

⁶ The detailed network architecture is 32-32-M-64-64-M-128-128-128-M-256-256-256-M-256-256-256-M-FC10, where each number represents the number of filters in a convolutional layer, and M denotes max-pooling.
We observed that point estimates tend to make poorly calibrated predictions, as shown in Figure 2. By contrast, models trained with noisy K-FAC are well-calibrated (i.e. the bars align roughly along the diagonal), which benefits interpretability.
We note that noisy K-FAC imposes a weight decay term intrinsically. To check that this by itself doesn’t explain the performance gains, we modified K-FAC to use weight decay of the same magnitude. K-FAC with this weight decay setting achieves 83.51% accuracy. However, as shown in Table 2, noisy K-FAC achieves 85.52%, demonstrating the importance of adaptive weight noise.
5.3 Active Learning
One particularly promising application of uncertainty estimation is guiding an agent's exploration toward the parts of a space with which it is least familiar. We evaluated our BNN algorithms in two instances of this general approach: active learning, and intrinsic motivation for reinforcement learning. The next two sections present experiments in these two domains, respectively.
In the simplest active learning setting (Settles, 2010), an algorithm is given a set of unlabeled examples and, in each round, chooses one unlabeled example to have labeled. A classic Bayesian approach to active learning is the information gain criterion (MacKay, 1992a), which in each step attempts to achieve the maximum reduction in posterior entropy. Under the assumption of i.i.d. Gaussian noise, this is equivalent to choosing the unlabeled example with the largest predictive variance.
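Under i.i.d. Gaussian noise, the information gain criterion reduces to a one-line selection rule. The sketch below (our own illustrative code, not from the paper) estimates each unlabeled point's posterior predictive variance from Monte Carlo samples of a BNN's predictions and queries the argmax.

```python
import numpy as np

def select_query(pred_samples):
    """pred_samples: array of shape (n_mc_samples, n_unlabeled), one row per
    sampled weight vector. Returns the index of the unlabeled example with the
    largest posterior predictive variance, which maximizes information gain
    under i.i.d. Gaussian observation noise."""
    variances = pred_samples.var(axis=0)
    return int(np.argmax(variances))

# Three unlabeled points; the model is most uncertain about the second one.
preds = np.array([[1.0,  0.0, 2.0],
                  [1.1,  3.0, 2.1],
                  [0.9, -3.0, 1.9]])
print(select_query(preds))   # -> 1
```

In practice the rows would come from weights sampled from the fitted variational posterior (e.g. NNG-MVG), one forward pass per sample.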
We first investigated how accurately each of the algorithms could estimate predictive variances. In each trial, we randomly selected 20 labeled training examples and 100 unlabeled examples; we then computed each algorithm's posterior predictive variances for the unlabeled examples. 10 independent trials were run. As is common practice, we treated the predictive variance of HMC as the "ground truth" predictive variance. Table 4 reports the average and standard error of the Pearson correlations between the predictive variances of each algorithm and those of HMC. On all of the datasets, our two methods NNG-MVG and NNG-BlkTri match the HMC predictive variances significantly better than the other approaches, and NNG-BlkTri consistently matches them slightly better than NNG-MVG due to the more flexible variational posterior.
Next, we evaluated the performance of all methods on active learning, following the protocol of Hernández-Lobato & Adams (2015). As a control, we evaluated each algorithm with labeled examples selected uniformly at random; this is denoted with the _R suffix. Active learning results are denoted with the _A suffix. The average test RMSE for all methods is reported in Table 3. These results show that NNG-MVG_A performs better than NNG-MVG_R on most datasets and is closer to HMC_A than PBP_A and NNG-FFG_A are. However, we note that better predictive variance estimates do not reliably yield better active learning results, and in fact, active learning methods sometimes perform worse than random. Therefore, while information gain is a useful criterion for benchmarking purposes, it is important to explore other uncertainty-based active learning criteria.
5.4 Reinforcement Learning
We next experimented with using uncertainty to provide intrinsic motivation in reinforcement learning. Houthooft et al. (2016) proposed Variational Information Maximizing Exploration (VIME), which encourages the agent to seek novelty through an information gain criterion. VIME involves training a separate BNN to predict the dynamics, i.e. to model the distribution $p(s_{t+1} \mid s_t, a_t; \theta)$. With the idea that surprising states lead to larger updates to the dynamics network, the reward function is augmented with an "intrinsic term" corresponding to the information gain for the BNN. If the history of the agent up until time step $t$ is denoted as $\xi_t = \{s_1, a_1, \ldots, s_t\}$, then the modified reward can be written in the following form:

$$r'(s_t, a_t, s_{t+1}) = r(s_t, a_t) + \eta_r\, \mathrm{D}_{\mathrm{KL}}\big(p(\theta \mid \xi_t, a_t, s_{t+1}) \,\|\, p(\theta \mid \xi_t)\big)$$

where $\eta_r$ is a hyperparameter controlling the strength of the intrinsic bonus.
In the above formulation, the true posterior is generally intractable. Houthooft et al. (2016) approximated it using Bayes by Backprop (BBB) (Blundell et al., 2015). We experimented with replacing the fully factorized posterior with our NNG-MVG model.
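The intrinsic bonus is just a KL divergence between the dynamics BNN's posterior before and after updating on the new transition. For a fully factorized Gaussian posterior (as in the original VIME) it has a closed form; the sketch below is our own, and the bonus scale `eta_r` is a name we introduce for illustration.

```python
import numpy as np

def kl_diag_gauss(mu_new, var_new, mu_old, var_old):
    """KL( N(mu_new, diag(var_new)) || N(mu_old, diag(var_old)) )."""
    return 0.5 * np.sum(np.log(var_old / var_new)
                        + (var_new + (mu_new - mu_old) ** 2) / var_old - 1.0)

def augmented_reward(r_ext, mu_new, var_new, mu_old, var_old, eta_r=0.1):
    """External reward plus the information-gain bonus for the dynamics BNN."""
    return r_ext + eta_r * kl_diag_gauss(mu_new, var_new, mu_old, var_old)

mu, var = np.zeros(4), np.ones(4)
# An unsurprising transition leaves the posterior unchanged: no bonus.
print(augmented_reward(1.0, mu, var, mu, var))    # -> 1.0
```

With an MVG posterior as in NNG-MVG, the KL between the before and after posteriors is likewise available in closed form from the Kronecker factors.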
Following the experimental setup of Houthooft et al. (2016), we tested our method in three continuous control tasks and sparsified the rewards (see supplement for details). We compared our NNG-MVG dynamics model with a Gaussian noise baseline, as well as the original VIME formulation using BBB. All experiments used TRPO to optimize the policy itself (Schulman et al., 2015).
Performance is measured by the average return (under the original MDP's rewards, not including the intrinsic term) at each iteration. Figure 3 shows the performance results on the three tasks. Consistent with Houthooft et al. (2016), we observed that the Gaussian noise baseline completely breaks down and rarely achieves the goal, while VIME significantly improved performance. Replacing the dynamics network with NNG-MVG further improved the exploration efficiency on all three tasks. Since the policy search algorithm was shared between all three conditions, we attribute this improvement to the improved uncertainty modeling by the dynamics network.
6 Conclusion

We drew a surprising connection between natural gradient ascent for point estimation and for variational inference. We exploited this connection to derive simple variational BNN training procedures which can be instantiated as noisy versions of widely used optimization algorithms for point estimation. This let us efficiently fit MVG variational posteriors, which capture correlations between different weights. Our variational BNNs with MVG posteriors matched the predictive variances of HMC much better than fully factorized posteriors, and led to more efficient exploration in the settings of active learning and reinforcement learning with intrinsic motivation.
GZ was supported by an NSERC Discovery Grant, and SS was supported by a Connaught New Researcher Award and a Connaught Fellowship. We thank Emtiyaz Khan and Mark van der Wilk for helpful discussions.
- Amari (1997) Amari, Shun-ichi. Neural learning in structured parameter spaces: natural Riemannian gradient. In Advances in neural information processing systems, pp. 127–133, 1997.
- Amari (1998) Amari, Shun-Ichi. Natural gradient works efficiently in learning. Neural computation, 10(2):251–276, 1998.
- Asuncion & Newman (2007) Asuncion, Arthur and Newman, David. UCI machine learning repository, 2007.
- Ba et al. (2017) Ba, Jimmy, Martens, James, and Grosse, Roger. Distributed second-order optimization using kronecker-factored approximations. In International Conference on Learning Representations, 2017.
- Blundell et al. (2015) Blundell, Charles, Cornebise, Julien, Kavukcuoglu, Koray, and Wierstra, Daan. Weight uncertainty in neural networks. arXiv preprint arXiv:1505.05424, 2015.
- Bonnet (1964) Bonnet, Georges. Transformations des signaux aléatoires a travers les systemes non linéaires sans mémoire. Annals of Telecommunications, 19(9):203–220, 1964.
- Duchi et al. (2011) Duchi, John, Hazan, Elad, and Singer, Yoram. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, 2011.
- Graves (2011) Graves, Alex. Practical variational inference for neural networks. In Advances in Neural Information Processing Systems, pp. 2348–2356, 2011.
- Grosse & Martens (2016) Grosse, Roger and Martens, James. A kronecker-factored approximate fisher matrix for convolution layers. In International Conference on Machine Learning, pp. 573–582, 2016.
- Guo et al. (2017) Guo, Chuan, Pleiss, Geoff, Sun, Yu, and Weinberger, Kilian Q. On calibration of modern neural networks. arXiv preprint arXiv:1706.04599, 2017.
- Hernández-Lobato & Adams (2015) Hernández-Lobato, José Miguel and Adams, Ryan. Probabilistic backpropagation for scalable learning of bayesian neural networks. In International Conference on Machine Learning, pp. 1861–1869, 2015.
- Hinton & Van Camp (1993) Hinton, Geoffrey E and Van Camp, Drew. Keeping the neural networks simple by minimizing the description length of the weights. In Proceedings of the Sixth Annual Conference on Computational Learning Theory, pp. 5–13. ACM, 1993.
- Hoffman et al. (2013) Hoffman, Matthew D, Blei, David M, Wang, Chong, and Paisley, John. Stochastic variational inference. The Journal of Machine Learning Research, 14(1):1303–1347, 2013.
- Houthooft et al. (2016) Houthooft, Rein, Chen, Xi, Duan, Yan, Schulman, John, De Turck, Filip, and Abbeel, Pieter. Vime: Variational information maximizing exploration. In Advances in Neural Information Processing Systems, pp. 1109–1117, 2016.
- Ioffe & Szegedy (2015) Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International conference on machine learning, pp. 448–456, 2015.
- Khan et al. (2017a) Khan, Mohammad Emtiyaz, Lin, Wu, Tangkaratt, Voot, Liu, Zouzhu, and Nielsen, Didrik. Variational adaptive-Newton method for explorative learning. arXiv preprint arXiv:1711.05560, 2017a.
- Khan et al. (2017b) Khan, Mohammad Emtiyaz, Liu, Zuozhu, Tangkaratt, Voot, and Gal, Yarin. Vprop: Variational inference using rmsprop. arXiv preprint arXiv:1712.01038, 2017b.
- Kingma & Ba (2014) Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
- Kingma & Welling (2013) Kingma, Diederik P and Welling, Max. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
- Kingma et al. (2015) Kingma, Diederik P, Salimans, Tim, and Welling, Max. Variational dropout and the local reparameterization trick. In Advances in Neural Information Processing Systems, pp. 2575–2583, 2015.
- Krizhevsky (2009) Krizhevsky, Alex. Learning multiple layers of features from tiny images. 2009.
- Louizos & Welling (2016) Louizos, Christos and Welling, Max. Structured and efficient variational deep learning with matrix gaussian posteriors. In International Conference on Machine Learning, pp. 1708–1716, 2016.
- Louizos & Welling (2017) Louizos, Christos and Welling, Max. Multiplicative normalizing flows for variational bayesian neural networks. arXiv preprint arXiv:1703.01961, 2017.
- MacKay (1992a) MacKay, David JC. Information-based objective functions for active data selection. Neural computation, 4(4):590–604, 1992a.
- MacKay (1992b) MacKay, David JC. A practical Bayesian framework for backpropagation networks. Neural Computation, 4:448–472, 1992b.
- Martens (2014) Martens, James. New insights and perspectives on the natural gradient method. arXiv preprint arXiv:1412.1193, 2014.
- Martens & Grosse (2015) Martens, James and Grosse, Roger. Optimizing neural networks with kronecker-factored approximate curvature. In International Conference on Machine Learning, pp. 2408–2417, 2015.
- Neal (1995) Neal, Radford M. Bayesian Learning for Neural Networks. PhD thesis, University of Toronto, 1995.
- Neal & Hinton (1998) Neal, Radford M and Hinton, Geoffrey E. A view of the em algorithm that justifies incremental, sparse, and other variants. In Learning in graphical models, pp. 355–368. Springer, 1998.
- Neal et al. (2011) Neal, Radford M et al. MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, 2(11), 2011.
- Niculescu-Mizil & Caruana (2005) Niculescu-Mizil, Alexandru and Caruana, Rich. Predicting good probabilities with supervised learning. In Proceedings of the 22nd international conference on Machine learning, pp. 625–632. ACM, 2005.
- Opper & Archambeau (2009) Opper, Manfred and Archambeau, Cédric. The variational gaussian approximation revisited. Neural computation, 21(3):786–792, 2009.
- Peterson (1987) Peterson, Carsten. A mean field theory learning algorithm for neural networks. Complex systems, 1:995–1019, 1987.
- Price (1958) Price, Robert. A useful theorem for nonlinear devices having gaussian inputs. IRE Transactions on Information Theory, 4(2):69–72, 1958.
- Rezende & Mohamed (2015) Rezende, Danilo Jimenez and Mohamed, Shakir. Variational inference with normalizing flows. arXiv preprint arXiv:1505.05770, 2015.
- Rezende et al. (2014) Rezende, Danilo Jimenez, Mohamed, Shakir, and Wierstra, Daan. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.
- Schulman et al. (2015) Schulman, John, Levine, Sergey, Abbeel, Pieter, Jordan, Michael, and Moritz, Philipp. Trust region policy optimization. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pp. 1889–1897, 2015.
- Settles (2010) Settles, Burr. Active learning literature survey. University of Wisconsin, Madison, 52(55-66):11, 2010.
- Simonyan & Zisserman (2014) Simonyan, Karen and Zisserman, Andrew. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
- Staines & Barber (2012) Staines, Joe and Barber, David. Variational optimization. arXiv preprint arXiv:1212.4507, 2012.
- Sun et al. (2017) Sun, Shengyang, Chen, Changyou, and Carin, Lawrence. Learning structured weight uncertainty in bayesian neural networks. In Artificial Intelligence and Statistics, pp. 1283–1292, 2017.
- Williams (1992) Williams, Ronald J. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256, 1992.
- Wu et al. (2017) Wu, Yuhuai, Mansimov, Elman, Liao, Shun, Grosse, Roger, and Ba, Jimmy. Scalable trust-region method for deep reinforcement learning using kronecker-factored approximation. arXiv preprint arXiv:1708.05144, 2017.
Appendix A Natural Gradient for Multivariate Gaussian
Suppose we have a model parameterized by $\theta$, which lives in a subspace $\mathcal{S}$ (such as the set of symmetric matrices). The natural gradient is motivated in terms of a trust region optimization problem: finding the optimal update $\delta\theta$ in a neighborhood of $\theta$ defined by KL divergence,
$$\max_{\delta\theta}\; \nabla_\theta \mathcal{L}^\top \delta\theta \quad \text{s.t.} \quad \mathrm{D}_{\mathrm{KL}}\!\left(p_\theta \,\|\, p_{\theta + \delta\theta}\right) \approx \tfrac{1}{2}\,\delta\theta^\top \mathbf{F}\,\delta\theta \le \epsilon.$$
Then the optimal solution to this optimization problem is given by $\delta\theta \propto \mathbf{F}^{-1}\nabla_\theta\mathcal{L}$, i.e. the update $\theta \leftarrow \theta + \alpha\,\mathbf{F}^{-1}\nabla_\theta\mathcal{L}$. Here $\mathbf{F} = \mathbb{E}\!\left[\nabla_\theta \log p_\theta\,\nabla_\theta \log p_\theta^\top\right]$ is the Fisher matrix and $\alpha$ is the learning rate. Note that $p_\theta$ and $\mathcal{L}$ are defined only for $\theta \in \mathcal{S}$, but these can be extended to the full space however we wish without changing the optimal solution.
Now assume the model is parameterized by a multivariate Gaussian $(\mu, \Sigma)$. The KL divergence between $\mathcal{N}(\mu, \Sigma)$ and $\mathcal{N}(\mu + \delta\mu,\; \Sigma + \delta\Sigma)$ is:
$$\mathrm{D}_{\mathrm{KL}} = \tfrac{1}{2}\left[\operatorname{tr}\!\big((\Sigma + \delta\Sigma)^{-1}\Sigma\big) + \delta\mu^\top(\Sigma + \delta\Sigma)^{-1}\delta\mu - k + \ln\frac{\det(\Sigma + \delta\Sigma)}{\det\Sigma}\right].$$
Hence, the Fisher matrices w.r.t. $\mu$ and $\operatorname{vec}(\Sigma)$ are
$$\mathbf{F}_\mu = \Sigma^{-1}, \qquad \mathbf{F}_\Sigma = \tfrac{1}{2}\,\Sigma^{-1} \otimes \Sigma^{-1}.$$
Then, by the property of the vec-operator, $\operatorname{vec}(AXB) = (B^\top \otimes A)\operatorname{vec}(X)$, we get the natural gradient update for the mean:
$$\tilde{\nabla}_\mu \mathcal{L} = \mathbf{F}_\mu^{-1}\nabla_\mu\mathcal{L} = \Sigma\,\nabla_\mu\mathcal{L}.$$
An analogous derivation gives us $\tilde{\nabla}_\Sigma \mathcal{L} = 2\,\Sigma\,\nabla_\Sigma\mathcal{L}\,\Sigma$. Considering the precision $\Lambda = \Sigma^{-1}$, we have $\nabla_\Lambda \mathcal{L} = -\Sigma\,\nabla_\Sigma\mathcal{L}\,\Sigma$, which gives us the convenient formulas
$$\tilde{\nabla}_\mu \mathcal{L} = \Sigma\,\nabla_\mu\mathcal{L}, \qquad \tilde{\nabla}_\Sigma \mathcal{L} = -2\,\nabla_\Lambda\mathcal{L}.$$
Recall that in variational inference, the gradients of the ELBO with respect to $\mu$ and $\Sigma$ are given by (Opper & Archambeau, 2009):
$$\nabla_\mu \mathcal{L} = \mathbb{E}_q\!\left[\nabla_w \log p(\mathcal{D}, w)\right], \qquad \nabla_\Sigma \mathcal{L} = \tfrac{1}{2}\,\mathbb{E}_q\!\left[\nabla^2_w \log p(\mathcal{D}, w)\right] + \tfrac{1}{2}\,\Sigma^{-1}.$$
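As a quick numerical sanity check of the identities above (illustrative code, not part of the original derivation), we can verify by Monte Carlo that the Fisher matrix of a Gaussian with respect to its mean is $\Sigma^{-1}$, so that the natural gradient for the mean is $\Sigma\,\nabla_\mu\mathcal{L}$:

```python
import numpy as np

# Sanity check (not from the paper): for N(mu, Sigma), the Fisher matrix
# w.r.t. mu is Sigma^{-1}, so the natural gradient is Sigma * grad_mu.
rng = np.random.default_rng(0)

mu = np.array([0.5, -1.0])
Sigma = np.array([[2.0, 0.3], [0.3, 1.0]])
Sigma_inv = np.linalg.inv(Sigma)

# Monte Carlo estimate of F_mu = E[(d/dmu log p)(d/dmu log p)^T],
# using the score d/dmu log N(x; mu, Sigma) = Sigma^{-1} (x - mu).
samples = rng.multivariate_normal(mu, Sigma, size=200_000)
scores = (samples - mu) @ Sigma_inv          # each row: Sigma^{-1}(x - mu)
F_mu = scores.T @ scores / len(samples)

print(np.max(np.abs(F_mu - Sigma_inv)))      # should be close to 0
```

The estimated Fisher matches $\Sigma^{-1}$ up to Monte Carlo noise, confirming the closed form used in the natural gradient updates.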
Appendix B Matrix Variate Gaussian
Recently, the Matrix Variate Gaussian (MVG) distribution has also been used in Bayesian neural networks (Louizos & Welling, 2016; Sun et al., 2017). A matrix variate Gaussian $\mathcal{MN}(\mathbf{M}, \Sigma_1, \Sigma_2)$ models a Gaussian distribution over a matrix $\mathbf{W} \in \mathbb{R}^{n \times p}$:
$$p(\mathbf{W}) = \frac{\exp\!\left(-\tfrac{1}{2}\operatorname{tr}\!\left[\Sigma_2^{-1}(\mathbf{W} - \mathbf{M})^\top\,\Sigma_1^{-1}(\mathbf{W} - \mathbf{M})\right]\right)}{(2\pi)^{np/2}\,|\Sigma_1|^{p/2}\,|\Sigma_2|^{n/2}},$$
in which $\mathbf{M}$ is the mean, $\Sigma_1$ is the covariance matrix among rows, and $\Sigma_2$ is the covariance matrix among columns. Both $\Sigma_1$ and $\Sigma_2$ must be positive definite to be valid covariance matrices. In terms of the ordinary Gaussian distribution, the vectorization of $\mathbf{W}$ follows a multivariate Gaussian whose covariance matrix is the Kronecker product of $\Sigma_2$ and $\Sigma_1$: $\operatorname{vec}(\mathbf{W}) \sim \mathcal{N}\!\left(\operatorname{vec}(\mathbf{M}),\, \Sigma_2 \otimes \Sigma_1\right)$.
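The Kronecker-product covariance of $\operatorname{vec}(\mathbf{W})$ can be verified numerically with a small sketch (illustrative, not from the paper): sampling $\mathbf{W} = A Z B^\top$ with i.i.d. standard normal $Z$ yields row covariance $\Sigma_1 = AA^\top$ and column covariance $\Sigma_2 = BB^\top$.

```python
import numpy as np

# Sanity check (not from the paper): if W = A Z B^T with Z i.i.d. N(0,1),
# Sigma1 = A A^T (rows) and Sigma2 = B B^T (columns), then under the
# column-stacking vec convention, Cov(vec(W)) = Sigma2 ⊗ Sigma1.
rng = np.random.default_rng(0)
n, p = 2, 3

A = rng.standard_normal((n, n)); Sigma1 = A @ A.T
B = rng.standard_normal((p, p)); Sigma2 = B @ B.T

Z = rng.standard_normal((500_000, n, p))
W = A @ Z @ B.T                                    # zero-mean MVG samples

vecW = W.transpose(0, 2, 1).reshape(len(W), -1)    # column-stacking vec
cov = vecW.T @ vecW / len(W)                       # empirical covariance

kron = np.kron(Sigma2, Sigma1)
print(np.max(np.abs(cov - kron)))                  # should be close to 0
```

This uses $\operatorname{vec}(AZB^\top) = (B \otimes A)\operatorname{vec}(Z)$, so the covariance is $(B \otimes A)(B \otimes A)^\top = \Sigma_2 \otimes \Sigma_1$.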
Appendix C Implementation Details
C.1 Regression Implementation Details
The datasets were randomly split into training and test sets, with 90% of the data for training and the remainder for testing. To reduce randomness, we repeated the splitting process 20 times (except for the two largest datasets, Year and Protein, where we repeated it 5 times and once, respectively). For all datasets except the two largest, we used neural networks with 50 hidden units; for the two largest, we used 100 hidden units. Besides, we also introduced a Gamma prior on the precision $\tau$ of the Gaussian likelihood and included its posterior in the variational objective. With a variational posterior $q(\tau) = \mathrm{Gam}(\alpha, \beta)$, the expected log-likelihood can be computed as
$$\mathbb{E}_q\!\left[\log \mathcal{N}\!\left(y;\, f(x),\, \tau^{-1}\right)\right] = \tfrac{1}{2}\big(\psi(\alpha) - \log\beta - \log 2\pi\big) - \frac{\alpha}{2\beta}\,\big(y - f(x)\big)^2,$$
where $\psi$ denotes the digamma function. Therefore, the ELBO can be computed as
$$\mathcal{L} = \mathbb{E}_q\!\left[\log p(y \mid x, w, \tau)\right] - \mathrm{D}_{\mathrm{KL}}\!\left(q(w, \tau)\,\|\,p(w, \tau)\right).$$
With the ELBO as above, we can directly compute the gradients with respect to the variational parameters using automatic differentiation.
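The closed-form expected log-likelihood under the Gamma posterior can be checked by Monte Carlo; the following sketch (with illustrative values of $\alpha$, $\beta$, $y$, and $f(x)$, not those used in the experiments) relies on the standard identities $\mathbb{E}[\log\tau] = \psi(\alpha) - \log\beta$ and $\mathbb{E}[\tau] = \alpha/\beta$:

```python
import numpy as np
from scipy.special import digamma

# Sanity check (illustrative values, not from the experiments): compare the
# closed-form expected Gaussian log-likelihood under q(tau) = Gam(alpha, beta)
# (rate parameterization) with a Monte Carlo estimate.
rng = np.random.default_rng(0)
alpha, beta = 6.0, 6.0      # hypothetical posterior parameters
y, f = 1.3, 0.9             # hypothetical target and network output

closed_form = 0.5 * (digamma(alpha) - np.log(beta) - np.log(2 * np.pi)) \
    - alpha / (2 * beta) * (y - f) ** 2

taus = rng.gamma(alpha, 1.0 / beta, size=1_000_000)
monte_carlo = np.mean(0.5 * np.log(taus) - 0.5 * np.log(2 * np.pi)
                      - 0.5 * taus * (y - f) ** 2)

print(closed_form, monte_carlo)   # the two estimates should agree closely
```

This confirms that the expected likelihood term of the ELBO can be evaluated analytically, with no sampling of $\tau$ needed during training.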
In training, the input features and training targets were normalized to zero mean and unit variance; the normalization on the targets was removed at test time. For each dataset, the remaining hyperparameters were kept fixed unless stated otherwise. We used a batch size of 10 for the 5 small datasets with fewer than 2,000 data points, 500 for Year, and 100 for the other four. Besides, we decayed the learning rate during the second half of the training epochs.
C.2 Classification Implementation Details
Throughout the classification experiments, we used the VGG16 architecture, but reduced the number of filters in each convolutional layer by half.
In training, we adopted the learning rate selection strategy of Ba et al. (2016). In particular, given a parameter update vector $\mathbf{v}$, the KL divergence between the predictive distributions before and after the update is approximated by the squared Fisher norm:
$$\mathrm{D}_{\mathrm{KL}} \approx \tfrac{1}{2}\,\mathbf{v}^\top\mathbf{F}\,\mathbf{v}.$$
Observe that choosing a step size of $\eta$ will produce an update with squared Fisher norm $\eta^2\,\mathbf{v}^\top\mathbf{F}\,\mathbf{v}$. Motivated by the idea of a trust region, we chose $\eta$ in each iteration such that the squared Fisher norm is at most some value $c$:
$$\eta = \min\!\left(\eta_{\max},\; \sqrt{\frac{c}{\mathbf{v}^\top\mathbf{F}\,\mathbf{v}}}\right).$$
We used an exponential decay schedule $c_t = c_0\,\zeta^t$, where $c_0$ and $\zeta$ were tunable parameters ($c_0$ is 0.001 or 0.01 for noisy K-FAC in our CIFAR-10 experiments when models were trained without/with Batch Normalization (Ioffe & Szegedy, 2015), and 0.0001 for noisy Adam; $\zeta$ is 0.95), and $t$ was incremented periodically (every epoch in our CIFAR-10 experiments). In practice, computing $\mathbf{v}^\top\mathbf{F}\,\mathbf{v}$ exactly involves curvature-vector products after each update, which introduces significant computational overhead, so we instead used the approximate Fisher that we used to compute the natural gradient. The maximum step size $\eta_{\max}$ was held fixed throughout training.
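The step-size rule above can be sketched in a few lines (a minimal sketch of the rule as stated, not the paper's actual implementation; the matrices below are hypothetical):

```python
import numpy as np

# Trust-region step-size selection: pick the largest eta <= eta_max such that
# the squared Fisher norm eta^2 * v^T F v stays below the radius c.
def select_step_size(v, F, c, eta_max):
    fisher_norm_sq = float(v @ F @ v)        # v^T F v
    if fisher_norm_sq <= 0.0:
        return eta_max
    return min(eta_max, np.sqrt(c / fisher_norm_sq))

# Illustrative values (hypothetical, not from the experiments):
F = np.diag([4.0, 1.0])
v = np.array([1.0, 2.0])                     # v^T F v = 8
eta = select_step_size(v, F, c=0.02, eta_max=1.0)
print(eta)                                   # sqrt(0.02 / 8) = 0.05
```

In practice $\mathbf{F}$ would be the Kronecker-factored approximate Fisher, so the quadratic form is evaluated without forming the full matrix.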
To reduce the computational overhead that K-FAC (and noisy K-FAC) incurs from updating and inverting the approximate Fisher matrix, we performed these updates only periodically rather than every iteration. This means our curvature statistics are somewhat more stale, but we found that it did not significantly affect per-iteration optimization performance. The remaining hyperparameter was set to 0.01 and 0.003 for noisy K-FAC and noisy Adam, respectively.
We noticed that it was favorable to tune the regularization parameter $\lambda$ and the prior variance. We used a smaller regularization parameter when data augmentation was adopted. We speculate that using data augmentation effectively yields more training examples (larger $N$), so it is reasonable to use a smaller $\lambda$. Moreover, we adjusted the prior variance when models were trained without Batch Normalization.
C.3 Active Learning Implementation Details
Following the experimental protocol of PBP (Hernández-Lobato & Adams, 2015), we split each dataset into training and test sets with 20 and 100 data points, respectively. All remaining data were placed in the pool set. In all experiments, we used a neural network with one hidden layer and 10 hidden units.
After fitting our model on the training data, we evaluated its performance on the test data and then added one data point from the pool set into the training set. The selection criterion was equivalent to choosing the point with the highest predictive variance. This process was repeated 10 times; that is, we collected 9 points from the pool set (no point is added after the final evaluation). For each iteration, we re-trained the whole model from scratch.
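The acquisition step can be sketched as follows (a minimal sketch with a hypothetical model interface; `predict_samples` stands in for posterior predictive sampling and is not from the paper):

```python
import numpy as np

# Acquisition sketch: score every pool point by predictive variance across
# sampled weights and pick the highest-variance point to label next.
def acquire(predict_samples, x_pool):
    # predict_samples(x) -> array of shape (num_weight_samples, num_points)
    preds = predict_samples(x_pool)
    variances = preds.var(axis=0)
    return int(np.argmax(variances))

# Toy stand-in for posterior predictive sampling: point 1 has by far the
# largest predictive spread, so it should be selected.
rng = np.random.default_rng(0)
def predict_samples(x):
    noise = np.array([0.1, 2.0, 0.5])        # per-point predictive spread
    return x + noise * rng.standard_normal((100, len(x)))

idx = acquire(predict_samples, np.zeros(3))
print(idx)
```

The selected index is then moved from the pool set into the training set before the model is re-trained from scratch.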
Beyond that, as uncertainty estimation is of fundamental importance in active learning, we also evaluated the uncertainty estimates of our method directly, measured by the Pearson correlation of the predictive variances with those of HMC (Neal et al., 2011). Recall that the Pearson correlation,
$$\rho_{X,Y} = \frac{\operatorname{cov}(X, Y)}{\sigma_X\,\sigma_Y},$$
is a measure of linear correlation between two variables $X$ and $Y$. It ranges from $-1$ to $1$, with larger values representing stronger correlation. We compared several algorithms, including PBP, NNG-FFG, NNG-MVG, and NNG-BlkTri.
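For concreteness, the metric can be computed directly from its definition (the variance values below are hypothetical, for illustration only):

```python
import numpy as np

# Pearson correlation rho = cov(X, Y) / (sigma_X * sigma_Y), computed from
# two sets of per-point predictive variances.
def pearson(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

var_model = [0.10, 0.25, 0.40, 0.30]   # hypothetical predictive variances
var_hmc   = [0.12, 0.20, 0.45, 0.28]   # hypothetical HMC reference values
print(pearson(var_model, var_hmc))
```

A value near 1 indicates that the model ranks its uncertain points similarly to the HMC reference, which is what the comparison in this section measures.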
We trained NNG-FFG, NNG-MVG, and NNG-BlkTri for 20,000 epochs, PBP for 40 epochs, and HMC with 20 chains of 100,000 iterations each. For all models, we used 1,000 sampled weights for prediction on the test set, from which we computed each model's predictive variance for every data point. Finally, we computed the Pearson correlation between each model and HMC in terms of predictive variance. In all experiments, no extra damping term was used.
C.4 Reinforcement Learning Implementation Details
In all three tasks (CartPoleSwingup, MountainCar, and DoublePendulum), we used a one-layer Bayesian neural network with 32 hidden units for both BBB and NNG-MVG, with the rectified linear unit (ReLU) as the activation function. The number of samples drawn from the variational posterior was fixed to 10 during training. For TRPO, the batch size was set to 5,000 and the replay pool had a fixed size of 100,000 samples. In both BBB and NNG-MVG, the dynamics model was updated in each epoch with 500 iterations and a batch size of 10. For the policy network, a one-layer network with 32 tanh units was used.
For all three tasks, we sparsified the rewards in the following way: a reward of $+1$ is given in CartPoleSwingup when $\cos\theta$ exceeds a threshold, with $\theta$ the pole angle; when the car escapes the valley in MountainCar; and when the distance from the target falls below a threshold in DoublePendulum.
To derive the intrinsic reward of Houthooft et al. (2016), we only need to analyze a single layer, since we assume layer-wise independence in NNG-MVG. The intrinsic reward for each layer is given by the KL divergence between the variational posteriors before and after the update (note: below, $\mathcal{L}$ is the ELBO with the old posterior as prior):
$$\mathrm{D}_{\mathrm{KL}}\!\left(q_{\text{new}}\,\|\,q_{\text{old}}\right) \approx \tfrac{\lambda^2}{2}\left[\nabla_\mu\mathcal{L}^\top\,\mathbf{F}_\mu^{-1}\,\nabla_\mu\mathcal{L} + \operatorname{vec}(\nabla_\Sigma\mathcal{L})^\top\,\mathbf{F}_\Sigma^{-1}\operatorname{vec}(\nabla_\Sigma\mathcal{L})\right], \quad (28)$$
where $\lambda$ is the step size. As shown in eq. (16), the Fisher matrix for $\mu$ is given by $\mathbf{F}_\mu = \Sigma^{-1}$, thus the first term in eq. (28) is easy to compute by exploiting the Kronecker structure. However, $\mathbf{F}_\Sigma = \tfrac{1}{2}\,\Sigma^{-1}\otimes\Sigma^{-1}$, where $\Sigma$ itself is a gigantic matrix, which makes computation of the second term intractable. Fortunately, the approximate variational posterior is a matrix variate Gaussian whose covariance is a Kronecker product, i.e. $\Sigma = \Sigma_2 \otimes \Sigma_1$, where $\Sigma_1$ is of size $n \times n$ and $\Sigma_2$ of size $p \times p$.
Using $\operatorname{vec}(AXB) = (B^\top \otimes A)\operatorname{vec}(X)$ and substituting $\Sigma_2 \otimes \Sigma_1$ for $\Sigma$, we obtain an identity that re-expresses the quadratic form in eq. (28) in terms of the block Fisher matrix over $(\Sigma_1, \Sigma_2)$, whose diagonal Fisher blocks are
$$\mathbf{F}_{\Sigma_1} = \tfrac{1}{2}\,\Sigma_1^{-1}\otimes\Sigma_1^{-1} \quad \text{and} \quad \mathbf{F}_{\Sigma_2} = \tfrac{1}{2}\,\Sigma_2^{-1}\otimes\Sigma_2^{-1}. \quad (29)$$
By further ignoring the off-diagonal blocks in eq. (29), we can decompose the second term of eq. (28) into two terms:
$$\operatorname{vec}(\nabla_{\Sigma_1}\mathcal{L})^\top\,\mathbf{F}_{\Sigma_1}^{-1}\operatorname{vec}(\nabla_{\Sigma_1}\mathcal{L}) + \operatorname{vec}(\nabla_{\Sigma_2}\mathcal{L})^\top\,\mathbf{F}_{\Sigma_2}^{-1}\operatorname{vec}(\nabla_{\Sigma_2}\mathcal{L}).$$
Now, each term can be computed efficiently, since $\Sigma_1$ and $\Sigma_2$ are small matrices.
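The key computational saving is that a quadratic form under a Kronecker-factored matrix never requires materializing the full $np \times np$ matrix. A small numerical check of this identity (illustrative, not the paper's code):

```python
import numpy as np

# Check the Kronecker identity
#   vec(V)^T (Sigma2 ⊗ Sigma1)^{-1} vec(V) = tr(Sigma2^{-1} V^T Sigma1^{-1} V),
# so the quadratic form only needs solves against the small factor matrices.
rng = np.random.default_rng(0)
n, p = 4, 3

A = rng.standard_normal((n, n)); Sigma1 = A @ A.T + n * np.eye(n)
B = rng.standard_normal((p, p)); Sigma2 = B @ B.T + p * np.eye(p)
V = rng.standard_normal((n, p))

vecV = V.T.reshape(-1)                       # column-stacking vec
big = np.kron(Sigma2, Sigma1)                # the np x np matrix, for reference
direct = vecV @ np.linalg.solve(big, vecV)

# Same quantity via the small factors only:
small = np.trace(np.linalg.solve(Sigma2, V.T) @ np.linalg.solve(Sigma1, V))
print(direct, small)                         # the two values agree
```

This reduces the cost from $O(n^3 p^3)$ for the naive inverse to $O(n^3 + p^3)$ for the factor solves, which is what makes the second term of the intrinsic reward tractable.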
Appendix D Additional Results
We also ran PBP_MV (Sun et al., 2017) and VMG (Louizos & Welling, 2016) on regression datasets from the UCI collection (Asuncion & Newman, 2007). Results are shown in Table 5. Note that VMG introduces pseudo input-output pairs to enhance the flexibility of the posterior distribution.
[Table 5: test RMSE and test log-likelihood on the UCI regression datasets]
While optimization was not the primary focus of this work, we compared NNG with the baseline BBB (Blundell et al., 2015) in terms of convergence. Training curves for two regression datasets are shown in Figure 4. We found that NNG-FFG converged in fewer iterations than BBB while leveling off to similar ELBO values, even though our BBB implementation used Adam and hence itself exploited diagonal curvature. Furthermore, despite its increased flexibility and larger number of parameters, NNG-MVG converged in roughly half as many iterations, while at the same time surpassing BBB by a significant margin in terms of the ELBO.