Noisy Natural Gradient as Variational Inference

12/06/2017 · by Guodong Zhang et al.

Combining the flexibility of deep learning with Bayesian uncertainty estimation has long been a goal in our field, and many modern approaches are based on variational Bayes. Unfortunately, one is forced to choose between overly simplistic variational families (e.g. fully factorized) or expensive and complicated inference procedures. We show that natural gradient ascent with adaptive weight noise can be interpreted as fitting a variational posterior to maximize the evidence lower bound (ELBO). This insight allows us to train full covariance, fully factorized, and matrix variate Gaussian variational posteriors using noisy versions of natural gradient, Adam, and K-FAC, respectively. On standard regression benchmarks, our noisy K-FAC algorithm makes better predictions and matches HMC's predictive variances better than existing methods. Its improved uncertainty estimates lead to more efficient exploration in the settings of active learning and intrinsic motivation for reinforcement learning.


1 Introduction

Combining deep learning with Bayesian uncertainty estimation has the potential to fit flexible and scalable models that are resistant to overfitting (MacKay, 1992b; Neal, 1995; Hinton & Van Camp, 1993). Stochastic variational inference is especially appealing because it closely resembles ordinary backprop (Graves, 2011; Blundell et al., 2015), but such methods typically impose restrictive factorization assumptions on the approximate posterior, such as fully independent weights. There have been attempts to fit more expressive approximating distributions which capture correlations such as matrix-variate Gaussians (Louizos & Welling, 2016; Sun et al., 2017) or multiplicative normalizing flows (Louizos & Welling, 2017), but fitting such models can be expensive without further approximations.

In this work, we introduce and exploit a surprising connection between natural gradient descent (Amari, 1998) and variational inference. In particular, several approximate natural gradient optimizers have been proposed which fit tractable approximations to the Fisher matrix to gradients sampled during training (Kingma & Ba, 2014; Martens & Grosse, 2015). While these procedures were described as natural gradient descent on the weights using an approximate Fisher matrix, we reinterpret these algorithms as natural gradient on a variational posterior using the exact Fisher matrix. Both the weight updates and the Fisher matrix estimation can be seen as natural gradient ascent on a unified evidence lower bound (ELBO), analogously to how Neal and Hinton (Neal & Hinton, 1998) interpreted the E and M steps of Expectation-Maximization (E-M) as coordinate ascent on a single objective.

Using this insight, we give an alternative training method for variational Bayesian neural networks. For a factorial Gaussian posterior, it corresponds to a diagonal natural gradient method with weight noise, and matches the performance of Bayes By Backprop (Blundell et al., 2015), but converges faster. We also present noisy K-FAC, an efficient and GPU-friendly method for fitting a full matrix-variate Gaussian posterior, using a variant of Kronecker-Factored Approximate Curvature (K-FAC) (Martens & Grosse, 2015) with correlated weight noise.

Figure 1: Normalized precision matrices for Gaussian variational posteriors trained using noisy natural gradient: (a) fully factorized, (b) matrix-variate, (c) block tridiagonal, (d) full covariance. We used a network with 2 hidden layers of 15 units each, trained on the Boston housing dataset.

2 Background

2.1 Variational Inference for Bayesian Neural Networks

Given a dataset D = {(x_i, y_i)}_{i=1}^N, a Bayesian neural net (BNN) is defined in terms of a prior p(w) on the weights w, as well as the likelihood p(D | w). Variational Bayesian methods (Hinton & Van Camp, 1993; Graves, 2011; Blundell et al., 2015) attempt to fit an approximate posterior q_φ(w) to maximize the evidence lower bound (ELBO):

L[q] = E_q[ log p(D | w) ] − λ D_KL( q(w) ∥ p(w) )    (1)

where λ is a regularization parameter and φ are the parameters of the variational posterior. Proper Bayesian inference corresponds to λ = 1, but other values may work better in practice on some problems.
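To make the objective concrete, the following is a minimal NumPy sketch of a Monte Carlo estimate of eq. (1) for a fully factorized Gaussian posterior over the weights of a toy linear regression model (a stand-in for a BNN). The function name mc_elbo, the fixed likelihood noise variance, and the toy data are our own illustrative choices, not part of the paper.

```python
import numpy as np

def mc_elbo(X, y, mu, log_sigma, lam=1.0, prior_var=1.0, n_samples=5, noise_var=0.1):
    """Monte Carlo estimate of eq. (1) for a fully factorized Gaussian posterior
    q(w) = N(mu, diag(sigma^2)) over the weights of a toy linear model y = Xw + eps."""
    N = X.shape[0]
    sigma = np.exp(log_sigma)
    expected_ll = 0.0
    for _ in range(n_samples):
        w = mu + sigma * np.random.randn(*mu.shape)   # reparameterization: w = mu + sigma * eps
        resid = y - X @ w
        expected_ll += (-0.5 * np.sum(resid ** 2) / noise_var
                        - 0.5 * N * np.log(2 * np.pi * noise_var))
    expected_ll /= n_samples
    # KL(q || p) between factorized Gaussians, with prior p(w) = N(0, prior_var * I)
    kl = 0.5 * np.sum(sigma ** 2 / prior_var + mu ** 2 / prior_var
                      - 1.0 - 2.0 * log_sigma + np.log(prior_var))
    return expected_ll - lam * kl

# toy usage
np.random.seed(0)
X = np.random.randn(50, 3)
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * np.random.randn(50)
mu, log_sigma = np.zeros(3), np.log(0.1) * np.ones(3)
print(mc_elbo(X, y, mu, log_sigma))
```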

2.2 Gradient Estimators for Gaussian Distribution

To optimize the ELBO, we must estimate the derivative of eq. (1) w.r.t. the variational parameters φ. The standard approach uses the pathwise derivative estimator, also known as the reparameterization trick (Williams, 1992; Blundell et al., 2015; Kingma & Welling, 2013; Rezende et al., 2014). However, in the case of a Gaussian distribution with parameters (μ, Σ), there is another estimator given by Opper & Archambeau (2009):

∇_μ E_q[ f(w) ] = E_q[ ∇_w f(w) ],    ∇_Σ E_q[ f(w) ] = ½ E_q[ ∇²_w f(w) ]    (2)

which are due to Bonnet (1964) and Price (1958), respectively. Both equations can be proved through integration by parts. In the case of a Gaussian distribution, eq. (2) is equivalent to the pathwise derivative estimator for μ.
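As a sanity check on the second identity in eq. (2), the short sketch below compares the Price estimator ½ E_q[f''(w)] against a finite-difference derivative of E_q[f(w)] with respect to the variance, for a toy scalar function of our own choosing (not from the paper).

```python
import numpy as np

np.random.seed(0)
mu, var = 1.5, 0.49                        # q(w) = N(mu, var)
f   = lambda w: np.sin(w) + 0.1 * w ** 2   # toy scalar function
d2f = lambda w: -np.sin(w) + 0.2           # its second derivative

z = np.random.randn(400_000)
w = mu + np.sqrt(var) * z

# Price (1958): d/dSigma E_q[f(w)] = 0.5 * E_q[f''(w)]  (second identity in eq. (2))
price = 0.5 * d2f(w).mean()

# Finite-difference derivative of E_q[f] w.r.t. the variance, reusing the base samples z
eps = 1e-3
fd = (f(mu + np.sqrt(var + eps) * z).mean()
      - f(mu + np.sqrt(var - eps) * z).mean()) / (2 * eps)

print(price, fd)   # the two estimates should agree to a few decimal places
```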

2.3 Natural Gradient

Natural gradient descent is a second-order optimization method originally proposed by Amari (1997). There are two variants of natural gradient commonly used in machine learning, which do not have standard names, but which we refer to as natural gradient for point estimation (NGPE) and natural gradient for variational inference (NGVI).

In natural gradient for point estimation (NGPE), we assume the neural network computes a predictive distribution p(y | x, w) and we wish to maximize a cost function h(w), which may be the data log-likelihood. The natural gradient is the direction of steepest ascent in the Fisher information norm, and is given by ∇̃_w h = F^{-1} ∇_w h, where F = Cov_{x,y}[ ∇_w log p(y | x, w) ], and the covariance is with respect to x sampled from the data distribution and y sampled from the model's predictions. NGPE is typically justified as a way to speed up optimization; see Martens (2014) for a comprehensive overview.

We now describe natural gradient for variational inference (NGVI) in the context of BNNs. We wish to fit the parameters φ of a variational posterior q_φ(w) to maximize the ELBO (eq. (1)). Analogously to the point estimation setting, the natural gradient is defined as ∇̃_φ L = F^{-1} ∇_φ L; but in this case, F is the Fisher matrix of q_φ, i.e. F = Cov_{w∼q_φ}[ ∇_φ log q_φ(w) ]. Note that in contrast with point estimation, F is a metric on φ, rather than on w, and its definition doesn't directly involve the data. Interestingly, because q is chosen to be tractable, the natural gradient can be computed exactly, and in many cases it is even simpler than the ordinary gradient.

In general, NGPE and NGVI need not behave similarly; however, in Section 3, we show that in the case of Gaussian variational posteriors, the two are closely related.

2.4 Kronecker-Factored Approximate Curvature

As modern neural networks may contain millions of parameters, computing and storing the exact Fisher matrix and its inverse is impractical. Kronecker-factored approximate curvature (K-FAC) (Martens & Grosse, 2015) uses a Kronecker-factored approximation to the Fisher to perform efficient approximate natural gradient updates. Consider the l-th layer of the neural network, whose input activations are a_{l−1}, weight matrix W_l, and output s_l, so that s_l = W_l a_{l−1}. For simplicity, we define the additional notation Dv = ∇_v log p(y | x, w) for the log-likelihood gradient with respect to any variable v.

Therefore, the weight gradient is DW_l = Ds_l a_{l−1}^T. With this gradient formula, K-FAC decouples this layer's Fisher matrix by approximating a_{l−1} and Ds_l as independent:

F_l = E[ vec(DW_l) vec(DW_l)^T ] = E[ a_{l−1} a_{l−1}^T ⊗ Ds_l Ds_l^T ]
    ≈ E[ a_{l−1} a_{l−1}^T ] ⊗ E[ Ds_l Ds_l^T ] = A_{l−1} ⊗ S_l =: F̃_l    (3)

Further, assuming between-layer independence, the whole Fisher matrix is approximated as block diagonal, consisting of the layerwise Fisher matrices F̃_l. Decoupling F̃_l into A_{l−1} and S_l not only avoids the quadratic storage cost of the exact Fisher, but also enables tractable computation of the approximate natural gradient:

F̃_l^{-1} vec(∇_{W_l} h) = ( A_{l−1}^{-1} ⊗ S_l^{-1} ) vec(∇_{W_l} h) = vec[ S_l^{-1} (∇_{W_l} h) A_{l−1}^{-1} ]    (4)

As shown by eq. (4), computing the approximate natural gradient using K-FAC only involves matrix transformations comparable in size to W_l, making it very efficient.
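The following is a small NumPy sketch of the layerwise transformation in eq. (4). The helper name kfac_natural_gradient and the toy statistics are ours; a real K-FAC implementation accumulates A and S over minibatches with moving averages rather than from a single example.

```python
import numpy as np

def kfac_natural_gradient(grad_W, A, S, damping=1e-3):
    """Approximate natural gradient for one layer, eq. (4):
    F_l ~= A_{l-1} (x) S_l, so F_l^{-1} vec(G) = vec(S_l^{-1} G A_{l-1}^{-1}).
    A: covariance of input activations a_{l-1}; S: covariance of pre-activation gradients Ds_l."""
    n_out, n_in = grad_W.shape
    A_d = A + damping * np.eye(n_in)
    S_d = S + damping * np.eye(n_out)
    return np.linalg.solve(S_d, grad_W) @ np.linalg.inv(A_d)

# toy usage: statistics from a single (a, Ds) pair
np.random.seed(0)
n_in, n_out = 4, 3
a, ds = np.random.randn(n_in), np.random.randn(n_out)
A = np.outer(a, a) + 0.1 * np.eye(n_in)
S = np.outer(ds, ds) + 0.1 * np.eye(n_out)
G = np.outer(ds, a)                        # DW_l = Ds_l a_{l-1}^T
print(kfac_natural_gradient(G, A, S))
```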

3 Variational Inference using Noisy Natural Gradient

In this section, we draw a surprising relationship between natural gradient for point estimation (NGPE) of the weights of a neural net, and natural gradient for variational inference (NGVI) of a Gaussian posterior. (These terms are explained in Section 2.3.) In particular, we show that the NGVI updates can be approximated with a variant of NGPE with adaptive weight noise which we term Noisy Natural Gradient (NNG). This insight allows us to train variational posteriors with a variety of structures using noisy versions of existing optimization algorithms (see Figure 1).

In NGVI, our goal is to maximize the ELBO (eq. (1)) with respect to the parameters φ of a variational posterior distribution q_φ(w). We assume q is a multivariate Gaussian parameterized by φ = (μ, Λ), i.e. q(w) = N(w; μ, Λ^{-1}), where Λ is the precision matrix. Building on eq. (2), we determine the natural gradient of the ELBO with respect to μ and the precision matrix Λ (see supplement for details):

∇̃_μ L = Λ^{-1} E_q[ ∇_w log p(D | w) + λ ∇_w log p(w) ]
∇̃_Λ L = −E_q[ ∇²_w log p(D | w) + λ ∇²_w log p(w) ] − λ Λ    (5)

We make several observations. First, the term inside the expectation in eq. (5) is the gradient for MAP estimation of w. Second, the update for μ is preconditioned by Λ^{-1}, which encourages faster movement in directions of higher posterior uncertainty. Finally, the fixed point equation for Λ is given by

Λ = −(1/λ) E_q[ ∇²_w log p(D | w) ] − E_q[ ∇²_w log p(w) ]    (6)

Hence, if λ = 1, Λ will tend towards the expected Hessian of −log p(D, w), so the update rule for μ will somewhat resemble a Newton-Raphson update. For simplicity, we further assume a spherical Gaussian prior p(w) = N(0, η I); then ∇²_w log p(w) = −η^{-1} I. In each iteration, we sample (x, y) ∼ D and w ∼ q and apply a stochastic natural gradient update based on eq. (5):

μ ← μ + α̃ Λ^{-1} [ ∇_w log p(y | x, w) − (λ/(Nη)) w ]
Λ ← (1 − λβ̃/N) Λ − β̃ [ ∇²_w log p(y | x, w) − (λ/(Nη)) I ]    (7)

where α̃ and β̃ are separate learning rates for μ and Λ, and N is the number of training examples. Roughly speaking, the update rule for Λ corresponds to an exponential moving average of the Hessian, and the update rule for μ is a stochastic Newton step using Λ.

This update rule has two problems. First, the log-likelihood Hessian ∇²_w log p(y | x, w) may be hard to compute, and is undefined at some points for neural nets which use not-everywhere-differentiable activation functions such as ReLU. Second, if the negative log-likelihood is non-convex (as is the case for neural networks), the Hessian could have negative eigenvalues, so the update may result in a Λ which is not positive semidefinite. We circumvent both of these problems by approximating the negative log-likelihood Hessian with the NGPE Fisher matrix Dw Dw^T, where Dw = ∇_w log p(y | x, w):

Λ ← (1 − λβ̃/N) Λ + β̃ [ Dw Dw^T + (λ/(Nη)) I ]    (8)

This approximation guarantees that Λ is positive semidefinite, and it allows for tractable approximations such as K-FAC (see below). In the context of BNNs, approximating the log-likelihood Hessian with the Fisher was first proposed by Graves (2011), so we refer to it as the Graves approximation. In the case where the output layer of the network represents the natural parameters of an exponential family distribution (as is typical in regression or classification), the Graves approximation can be justified in terms of the generalized Gauss-Newton approximation to the Hessian; see Martens (2014) for details.[1]

[1] Eq. (8) leaves ambiguous what distribution the targets are sampled from when computing the gradients. Throughout our experiments, we sample the targets from the model's predictions, as done in K-FAC (Martens & Grosse, 2015). The resulting matrix is known as the true Fisher. The alternative is to use the SGD gradients, giving the empirical Fisher. The true Fisher is a better approximation to the Hessian (Martens, 2014).

3.1 Simplifying the Update Rules

We have now derived a stochastic natural gradient update rule for Gaussian variational posteriors. In this section, we rewrite the update rules in order to disentangle hyperparameters and highlight relationships with NGPE. First, if the prior variance η is fixed,[2] then Λ is a damped version of N/λ times a moving average F̄ of the Fisher matrix, and we can rewrite the update eq. (8) as:

Λ = (N/λ) [ F̄ + (λ/(Nη)) I ],    F̄ ← (1 − β) F̄ + β Dw Dw^T    (9)

In eq. (9), we avoid an awkward interaction between the KL weight λ and the learning rates by writing the update rules in terms of alternative learning rates α = λα̃/N and β = λβ̃/N. We also rewrite the update rule for μ:

μ ← μ + α [ F̄ + (λ/(Nη)) I ]^{-1} [ ∇_w log p(y | x, w) − (λ/(Nη)) w ]    (10)

Observe that if μ is viewed as a point estimate of the weights, this update rule resembles NGPE with an exponential moving average of the Fisher matrix. The differences are that the Fisher matrix F̄ is damped by adding (λ/(Nη)) I, and that the weights are sampled from q(w) = N(μ, Λ^{-1}), which is a Gaussian with covariance Σ = (λ/N) [ F̄ + (λ/(Nη)) I ]^{-1}. Because our update rule so closely resembles NGPE with correlated weight noise, we refer to this method as Noisy Natural Gradient (NNG).

[2] For simplicity, we assume the prior is a spherical Gaussian and its variance is fixed. Otherwise, we can keep an exponential moving average of the prior Hessian.
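As an illustration of eqs. (9)-(10), here is a minimal sketch of the full-covariance NNG update applied to a toy Bayesian linear regression problem. It stores the d × d matrix F̄ explicitly, so it is only meant for very small d; the function and variable names are ours, not the paper's.

```python
import numpy as np

def nng_full_step(mu, F_bar, grad_fn, N, lam=1.0, eta=1.0, alpha=1e-2, beta=1e-2):
    """One noisy natural gradient step with a full-covariance Gaussian posterior,
    following eqs. (9)-(10). grad_fn(w) returns grad_w log p(y|x, w) for one example.
    Stores the d x d matrix F_bar explicitly, so only suitable for tiny models."""
    d = mu.shape[0]
    gamma_in = lam / (N * eta)                        # intrinsic damping lambda / (N eta)
    Sigma = (lam / N) * np.linalg.inv(F_bar + gamma_in * np.eye(d))
    w = np.random.multivariate_normal(mu, Sigma)      # sample weights from q(w)
    g = grad_fn(w)
    F_bar = (1 - beta) * F_bar + beta * np.outer(g, g)                   # eq. (9): EMA of the Fisher
    v = g - gamma_in * w
    mu = mu + alpha * np.linalg.solve(F_bar + gamma_in * np.eye(d), v)   # eq. (10)
    return mu, F_bar

# toy usage: Bayesian linear regression, one random training example per step
np.random.seed(0)
d, N = 3, 100
X = np.random.randn(N, d)
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * np.random.randn(N)
mu, F_bar = np.zeros(d), np.eye(d)
for _ in range(500):
    i = np.random.randint(N)
    grad_fn = lambda w, i=i: (y[i] - X[i] @ w) * X[i] / 0.1   # Gaussian log-likelihood gradient
    mu, F_bar = nng_full_step(mu, F_bar, grad_fn, N)
print(mu)
```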

3.2 Damping

Interestingly, in second-order optimization it is very common to dampen the updates by adding a multiple of the identity matrix to the curvature before inversion, in order to compensate for errors in the quadratic approximation to the cost. NNG automatically achieves this effect, with the strength of the damping being λ/(Nη); we refer to this as intrinsic damping. In practice, it may be advantageous to add additional extrinsic damping for purposes of stability.

3.3 Fitting Fully Factorized Gaussian Posteriors with Noisy Adam

The discussion so far has concerned NGVI updates for a full covariance Gaussian posterior. Unfortunately, the number of parameters needed to represent a full covariance Gaussian is of order d², where d is the number of weights. Since d can be in the millions even for a relatively small network, representing a full covariance Gaussian is impractical. There has been much work on tractable approximations to second-order optimization. In the context of NNG, imposing structure on Λ also imposes structure on the form of the variational posterior. We now discuss two kinds of structure one can impose.

Perhaps the simplest approach is to approximate F̄ with a diagonal matrix diag(f̄), as done by Adagrad (Duchi et al., 2011) and Adam (Kingma & Ba, 2014). For our NNG approach, this yields the following updates (all operations are elementwise):

w ∼ N( μ, (λ/N) diag(f̄ + λ/(Nη))^{-1} )
f̄ ← (1 − β) f̄ + β ( ∇_w log p(y | x, w) )²
μ ← μ + α [ ∇_w log p(y | x, w) − (λ/(Nη)) w ] / [ f̄ + λ/(Nη) ]    (11)

These update rules are similar in spirit to methods such as Adam, but with the addition of adaptive weight noise. We note that they also differ from Adam in some details: (1) Adam keeps exponential moving averages of the gradients, which is equivalent to momentum, and (2) Adam applies the square root to the entries of f̄ in the denominator. We define noisy Adam by adding a momentum term, to be consistent with Adam. We regard difference (2) as inessential: whether or not one takes the square root may affect optimization performance, but it does not change the fixed points, i.e. both variants fit the same functional form of the variational posterior using the same variational objective. The full procedure is given in Alg. 1.
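The sketch below shows one update in the spirit of Alg. 1 for a flattened weight vector; hyperparameter defaults and names are our own, and it omits Adam-style bias correction for brevity.

```python
import numpy as np

def noisy_adam_step(mu, m, f, grad_fn, N, alpha=1e-3, beta1=0.9, beta2=0.999,
                    lam=1.0, eta=1.0, gamma_ex=0.0):
    """One update in the spirit of Alg. 1 (noisy Adam); all operations are elementwise.
    grad_fn(w) should return grad_w log p(y|x, w) for the current minibatch."""
    gamma_in = lam / (N * eta)                      # intrinsic damping
    gamma = gamma_in + gamma_ex                     # total damping
    sigma2 = lam / (N * (f + gamma_in))             # diagonal posterior variance, eq. (11)
    w = mu + np.sqrt(sigma2) * np.random.randn(*mu.shape)   # sample weights with adaptive noise
    g = grad_fn(w)
    v = g - gamma_in * w                            # gradient with intrinsic weight decay
    m = beta1 * m + (1 - beta1) * v                 # momentum (bias correction omitted)
    f = beta2 * f + (1 - beta2) * g ** 2            # EMA of the diagonal Fisher
    mu = mu + alpha * m / (f + gamma)               # no square root, unlike standard Adam
    return mu, m, f
```

With f initialized to zero, the initial weight-noise variance equals the prior variance η, and it shrinks as curvature accumulates in f.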

3.4 Fitting Matrix Variate Gaussian Posteriors with Noisy K-FAC

There has been much interest in fitting BNNs with matrix-variate Gaussian (MVG) posteriors[3] in order to compactly capture posterior correlations between different weights (Louizos & Welling, 2016; Sun et al., 2017). Let W denote the weights for one layer of a fully connected network. An MVG distribution is a Gaussian distribution over W whose covariance is a Kronecker product, i.e. cov(vec(W)) = Σ_2 ⊗ Σ_1. MVGs are potentially powerful due to their compact representation[4] of posterior covariances between weights. However, fitting MVG posteriors is difficult, since computing the gradients and enforcing the positive semidefinite constraint for Σ_1 and Σ_2 typically requires expensive matrix operations such as inversion. Therefore, existing methods for fitting MVG posteriors typically impose additional structure such as diagonal covariance (Louizos & Welling, 2016) or products of Householder transformations (Sun et al., 2017) to ensure efficient updates.

[3] When we refer to a BNN with an "MVG posterior", we mean that the weights in different layers are independent, and the weights for each layer follow an MVG distribution.
[4] If W is of size n × p, then the MVG covariance requires roughly n² + p² parameters to represent, in contrast with a full covariance matrix over vec(W), which would require n²p².

We observe that K-FAC (Martens & Grosse, 2015) uses a Kronecker-factored approximation to the Fisher matrix for each layer's weights, as in eq. (3). By plugging this approximation into eq. (9), we obtain an MVG posterior. In more detail, each block obeys the Kronecker factorization F̄_l = A_{l−1} ⊗ S_l, where A_{l−1} and S_l are the covariance matrices of the activations and the pre-activation gradients, respectively. K-FAC estimates A_{l−1} and S_l online using exponential moving averages which, conveniently for our purposes, are closely analogous to the exponential moving average defining F̄ in eq. (9):

A_{l−1} ← (1 − β) A_{l−1} + β a_{l−1} a_{l−1}^T
S_l ← (1 − β) S_l + β Ds_l Ds_l^T    (12)

Conveniently, because these factors are estimated from the empirical covariances, they (and hence also F̄_l) are automatically positive semidefinite.

Plugging the above formulas into eq. (9) does not quite yield an MVG posterior due to the addition of the prior Hessian. In general, there may be no compact representation of Λ_l. However, for spherical Gaussian priors,[5] we can approximate Λ_l using a trick proposed by Martens & Grosse (2015) in the context of damping, splitting the intrinsic damping λ/(Nη) between the two factors. In this way, the covariance decomposes as the Kronecker product of two terms:

Σ_l = (λ/N) (A_{l−1}^γ)^{-1} ⊗ (S_l^γ)^{-1},
where A_{l−1}^γ = A_{l−1} + (1/π_l) √(λ/(Nη)) I and S_l^γ = S_l + π_l √(λ/(Nη)) I    (13)

This factorization corresponds to a matrix-variate Gaussian posterior MN(W_l; M_l, (λ/N)(S_l^γ)^{-1}, (A_{l−1}^γ)^{-1}), where the λ/N factor is arbitrarily assigned to the first (row) covariance factor. We refer to this BNN training method as noisy K-FAC. The full algorithm is given as Alg. 2.

[5] We consider spherical Gaussian priors for simplicity, but this trick can be extended to any prior whose Hessian is Kronecker-factored, such as group sparsity.
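To make the structure of the noisy K-FAC posterior concrete, here is a minimal per-layer sketch: sampling weights from the matrix-variate Gaussian of eq. (13) and applying one mean update preconditioned by the damped Kronecker factors. The damping split factor π_l follows Martens & Grosse (2015); the function names, the single-example statistics, and the exact placement of the λ/N factor are our own simplifications rather than the paper's implementation.

```python
import numpy as np

def sample_mvg_weights(M, A_inv, S_inv, lam, N):
    """Sample W from the MVG posterior of eq. (13): cov(vec(W)) = (lam/N) A_inv (x) S_inv."""
    n_out, n_in = M.shape
    Ls = np.linalg.cholesky((lam / N) * S_inv)      # row-covariance factor (pre-activation side)
    La = np.linalg.cholesky(A_inv)                  # column-covariance factor (activation side)
    return M + Ls @ np.random.randn(n_out, n_in) @ La.T

def noisy_kfac_layer_step(M, A, S, a, ds, N, lam=1.0, eta=1.0, beta=0.95, alpha=1e-2):
    """One noisy K-FAC step for a single fully connected layer, following eqs. (12)-(13).
    a: input activations; ds: backpropagated pre-activation gradients (single example here)."""
    gamma_in = lam / (N * eta)
    # damping split between the two factors (Martens & Grosse, 2015)
    pi = np.sqrt((np.trace(A) / A.shape[0]) / (np.trace(S) / S.shape[0]))
    A_d = A + (1.0 / pi) * np.sqrt(gamma_in) * np.eye(A.shape[0])
    S_d = S + pi * np.sqrt(gamma_in) * np.eye(S.shape[0])
    A_d_inv, S_d_inv = np.linalg.inv(A_d), np.linalg.inv(S_d)
    W = sample_mvg_weights(M, A_d_inv, S_d_inv, lam, N)
    # eq. (12): exponential moving averages of the Kronecker factors
    A = beta * A + (1 - beta) * np.outer(a, a)
    S = beta * S + (1 - beta) * np.outer(ds, ds)
    # preconditioned update of the posterior mean, with intrinsic weight decay
    G = np.outer(ds, a) - gamma_in * W              # DW = Ds a^T, minus the prior term
    M = M + alpha * S_d_inv @ G @ A_d_inv
    return M, A, S, W
```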

Test RMSE Test log-likelihood
Dataset BBB PBP NNG-FFG NNG-MVG BBB PBP NNG-FFG NNG-MVG
Boston      3.171±0.149  3.014±0.180  3.031±0.155  2.742±0.125  -2.602±0.031  -2.574±0.089  -2.558±0.032  -2.446±0.029
Concrete    5.678±0.087  5.667±0.093  5.613±0.113  5.019±0.127  -3.149±0.018  -3.161±0.019  -3.145±0.023  -3.039±0.025
Energy      0.565±0.018  1.804±0.048  0.839±0.046  0.485±0.023  -1.500±0.006  -2.042±0.019  -1.629±0.020  -1.421±0.005
Kin8nm      0.080±0.001  0.098±0.001  0.079±0.001  0.076±0.001   1.111±0.007   0.896±0.006   1.112±0.008   1.148±0.007
Naval       0.000±0.000  0.006±0.000  0.001±0.000  0.000±0.000   6.143±0.032   3.731±0.006   6.231±0.041   7.079±0.034
Pow. Plant  4.023±0.036  4.124±0.035  4.002±0.039  3.886±0.041  -2.807±0.010  -2.837±0.009  -2.803±0.010  -2.776±0.011
Protein     4.321±0.017  4.732±0.013  4.380±0.016  4.097±0.009  -2.882±0.004  -2.973±0.003  -2.896±0.004  -2.836±0.002
Wine        0.643±0.012  0.635±0.008  0.644±0.011  0.637±0.011  -0.977±0.017  -0.968±0.014  -0.976±0.016  -0.969±0.014
Yacht       1.174±0.086  1.015±0.054  1.289±0.069  0.979±0.077  -2.408±0.007  -1.634±0.016  -2.412±0.006  -2.316±0.006
Year        9.076±NA     8.879±NA     9.071±NA     8.885±NA     -3.614±NA     -3.603±NA     -3.620±NA     -3.595±NA
Table 1: Averaged test RMSE and log-likelihood for the regression benchmarks.
Require: α: stepsize
Require: β_1, β_2: exponential decay rates for updating μ and the Fisher f
Require: λ: KL weighting, η: prior variance, γ_ex: extrinsic damping term
  m ← 0, f ← 0, k ← 0
  Calculate the intrinsic damping term γ_in = λ/(Nη), total damping term γ = γ_in + γ_ex
  while stopping criterion not met do
     k ← k + 1
     w ∼ N( μ, (λ/N) diag(f + γ_in)^{-1} )     (Sample weights with adaptive noise)
     v ← ∇_w log p(y | x, w) − γ_in · w
     m ← β_1 · m + (1 − β_1) · v     (Update momentum)
     f ← β_2 · f + (1 − β_2) · ( ∇_w log p(y | x, w) )²
     m̂ ← m / (1 − β_1^k)
     μ ← μ + α · m̂ / (f + γ)     (Update parameters)
  end while
Algorithm 1 Noisy Adam. Differences from standard Adam are shown in blue.
Require: α: stepsize
Require: β: exponential moving average parameter for the covariance factors
Require: λ: KL weighting, η: prior variance, γ_ex: extrinsic damping term
Require: stats and inverse update intervals T_stats and T_inv
  k ← 0 and initialize {M_l}, {A_{l−1}}, {S_l}
  Calculate the intrinsic damping term γ_in = λ/(Nη), total damping term γ = γ_in + γ_ex
  while stopping criterion not met do
     k ← k + 1
     W_l ∼ MN( M_l, (λ/N)(S_l^γ)^{-1}, (A_{l−1}^γ)^{-1} ) for each layer l     (Sample weights)
     if k ≡ 0 (mod T_stats) then
        Update the factors {A_{l−1}, S_l} using eq. (12)
     end if
     if k ≡ 0 (mod T_inv) then
        Calculate the inverses {(A_{l−1}^γ)^{-1}, (S_l^γ)^{-1}} using eq. (13)
     end if
     V_l ← ∇_{W_l} log p(y | x, w) − γ_in · W_l
     M_l ← M_l + α (S_l^γ)^{-1} V_l (A_{l−1}^γ)^{-1}     (Update parameters)
  end while
Algorithm 2 Noisy K-FAC. Subscript l denotes layers, w_l = vec(W_l), and μ_l = vec(M_l). We assume zero momentum for simplicity. Differences from standard K-FAC are shown in blue.

3.5 Block Tridiagonal Covariance

Both the fully factorized and MVG posteriors assumed independence between layers. However, in practice the weights in different layers can be tightly coupled. To better capture these dependencies, we propose to approximate F̄ using the block tridiagonal approximation from Martens & Grosse (2015). The resulting posterior covariance is block tridiagonal, so it accounts for dependencies between adjacent layers. The noisy version of block tridiagonal K-FAC is completely analogous to the block diagonal version, but since the approximation is rather complicated, we refer the reader to Martens & Grosse (2015) for details.

4 Related Work

Variational inference was first applied to neural networks by Peterson (1987) and Hinton & Van Camp (1993). More recently, Graves (2011) proposed a practical method for variational inference with fully factorized Gaussian posteriors which used a simple (but biased) gradient estimator. Improving on that work, Blundell et al. (2015) proposed an unbiased gradient estimator using the reparameterization trick of Kingma & Welling (2013). Kingma et al. (2015) observed that the variance of stochastic gradients can be significantly reduced by the local reparameterization trick, where global uncertainty in the weights is translated into local uncertainty in the activations.

There has also been much work on modeling the correlations between weights using more complex Gaussian variational posteriors. Louizos & Welling (2016) introduced the matrix variate Gaussian posterior, as well as a Gaussian process approximation. Sun et al. (2017) decoupled the correlations of a matrix variate Gaussian posterior into unitary transformations and a factorial Gaussian. Inspired by the idea of normalizing flows in latent variable models (Rezende & Mohamed, 2015), Louizos & Welling (2017) applied normalizing flows to auxiliary latent variables to produce more flexible approximate posteriors.

Since natural gradient was proposed by Amari (1998), there has been much work on tractable approximations. Hoffman et al. (2013) observed that for exponential family posteriors, the exact natural gradient could be tractably computed using stochastic versions of variational Bayes E-M updates. Martens & Grosse (2015) proposed K-FAC for performing efficient natural gradient optimization in deep neural networks. Following on that work, K-FAC has been adopted in many tasks (Grosse & Martens, 2016; Wu et al., 2017) to gain optimization benefits, and was shown to be amenable to distributed computation (Ba et al., 2017).

Khan et al. (2017a) independently derived a stochastic Newton update similar to eq. (5). Their focus was on variational optimization (VO) (Staines & Barber, 2012), which can be seen as a special case of variational inference obtained by omitting the KL term, and they derived only the diagonal version of the approximation (see Section 3.3). Assuming the variational distribution is Gaussian, we can extend NNG to VO simply by modifying the Fisher update rule to keep a running sum of the individual Fisher matrices.

Concurrently, Khan et al. (2017b) also found the relationship between natural gradient and variational inference, and derived Vprop by adding weight noise to RMSprop, which essentially resembles noisy Adam.

5 Experiments

In this section, we conducted a series of experiments to investigate the following questions: (1) How does noisy natural gradient (NNG) compare with existing methods in terms of prediction performance? (2) Is NNG able to scale to large datasets and modern-size convolutional neural networks? (3) Can NNG achieve better uncertainty estimates? (4) Does it enable more efficient exploration in active learning and reinforcement learning?

Our method with full-covariance multivariate Gaussian, fully factorized Gaussian, matrix-variate Gaussian, and block-tridiagonal posteriors is denoted NNG-full, NNG-FFG (noisy Adam), NNG-MVG (noisy K-FAC), and NNG-BlkTri, respectively.

5.1 Regression

We first experimented with regression datasets from the UCI collection (Asuncion & Newman, 2007). All experiments used networks with one hidden layer unless stated otherwise. We compared our method with Bayes By Backprop (BBB) (Blundell et al., 2015) and probabilistic backpropagation (PBP) with a factorial Gaussian posterior (Hernández-Lobato & Adams, 2015). The results for PBP_MV (Sun et al., 2017) and VMG (Louizos & Welling, 2016) can be found in the supplement.

Following previous work (Hernández-Lobato & Adams, 2015; Louizos & Welling, 2016), we report the standard metrics: root mean square error (RMSE) and test log-likelihood. The results are summarized in Table 1. Our NNG-FFG performed similarly to BBB (Blundell et al., 2015), indicating that the Graves approximation did not cause a performance hit. Our NNG-MVG method achieved substantially better RMSE and log-likelihoods than BBB and PBP due to the more flexible posterior. Moreover, NNG-MVG outperforms PBP_MV (Sun et al., 2017) on all datasets other than Yacht and Year, even though PBP_MV also uses a matrix variate Gaussian posterior.

                         Test Accuracy (%)
Method        Network    –        D        B        D + B
SGD           VGG16      81.79    88.35    85.75    91.39
KFAC          VGG16      82.39    88.89    86.86    92.13
BBB           VGG16      82.82    88.31    N/A      N/A
Noisy-Adam    VGG16      82.68    88.23    N/A      N/A
Noisy-KFAC    VGG16      85.52    89.35    88.22    92.01
Table 2: Classification accuracy on CIFAR10 with modified VGG16. [D] denotes data augmentation (horizontal flip and random crop), [B] denotes Batch Normalization, and "–" denotes neither. We leave [N/A] for BBB and noisy Adam with BN since they are extremely unstable and work only with a very small λ.
Dataset PBP_R PBP_A NNG-FFG_R NNG-FFG_A NNG-MVG_R NNG-MVG_A HMC_R HMC_A
Boston      6.716±0.500   5.480±0.175   5.911±0.250   5.435±0.132   5.831±0.177   5.220±0.132   5.750±0.222   5.156±0.150
Concrete    12.417±0.392  11.894±0.254  12.583±0.168  12.563±0.142  12.301±0.203  11.671±0.175  10.564±0.198  11.484±0.191
Energy      3.743±0.121   3.399±0.064   4.011±0.087   3.761±0.068   3.635±0.084   3.211±0.076   3.264±0.067   3.118±0.062
Kin8nm      0.259±0.006   0.254±0.005   0.246±0.004   0.252±0.003   0.243±0.003   0.244±0.003   0.226±0.004   0.223±0.003
Naval       0.015±0.000   0.016±0.000   0.013±0.000   0.013±0.000   0.010±0.000   0.009±0.000   0.013±0.000   0.012±0.000
Pow. Plant  5.312±0.108   5.068±0.082   5.812±0.119   5.423±0.111   5.377±0.133   4.974±0.078   5.229±0.097   4.800±0.074
Wine        0.945±0.044   0.809±0.011   0.730±0.011   0.748±0.008   0.752±0.014   0.746±0.009   0.740±0.011   0.749±0.010
Yacht       5.388±0.339   4.508±0.158   7.381±0.309   6.583±0.264   7.192±0.280   6.371±0.204   4.644±0.237   3.211±0.120
Table 3: Average test RMSE in active learning.

5.2 Classification

To evaluate the scalability of our method to large networks, we applied noisy K-FAC to a modified version of the VGG16 network (Simonyan & Zisserman, 2014)[6] and tested it on the CIFAR10 benchmark (Krizhevsky, 2009). It is straightforward to incorporate noisy K-FAC into convolutional layers by using the Kronecker Factors for Convolution approximation of Grosse & Martens (2016). We compared our method to SGD with momentum, K-FAC, and BBB in terms of test accuracy. Results are shown in Table 2. Noisy K-FAC achieves the highest accuracy in all configurations except the one where both data augmentation and Batch Normalization (BN) (Ioffe & Szegedy, 2015) are used. When no extra regularization is used, noisy K-FAC shows a gain of 3% (85.52% versus 82.39%).

[6] The detailed network architecture is 32-32-M-64-64-M-128-128-128-M-256-256-256-M-256-256-256-M-FC10, where each number represents the number of filters in a convolutional layer, and M denotes max-pooling.

We observed that point estimates tend to make poorly calibrated predictions, as shown in Figure 2. By contrast, models trained with noisy K-FAC are well-calibrated (i.e. the bars align roughly along the diagonal), which benefits interpretability.

We note that noisy K-FAC imposes a weight decay term intrinsically. To check that this by itself doesn’t explain the performance gains, we modified K-FAC to use weight decay of the same magnitude. K-FAC with this weight decay setting achieves 83.51% accuracy. However, as shown in Table 2, noisy K-FAC achieves 85.52%, demonstrating the importance of adaptive weight noise.

Figure 2: Reliability diagrams (Niculescu-Mizil & Caruana, 2005; Guo et al., 2017) for K-FAC (left) and noisy K-FAC (right) on CIFAR10. Reliability diagrams show accuracy as a function of confidence. Models trained without BN (top) and with BN (bottom). ECE = Expected Calibration Error (Guo et al., 2017); smaller is better.
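For reference, ECE can be computed as in the following sketch (our own minimal implementation of the metric from Guo et al. (2017), with dummy inputs):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected Calibration Error: bin predictions by confidence and average
    |accuracy - confidence| weighted by the fraction of examples in each bin."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.sum() == 0:
            continue
        acc = correct[mask].mean()
        conf = confidences[mask].mean()
        ece += (mask.sum() / n) * abs(acc - conf)
    return ece

# usage with dummy predictions (10 classes, 1000 examples)
np.random.seed(0)
probs = np.random.dirichlet(np.ones(10), size=1000)   # fake softmax outputs
labels = np.random.randint(10, size=1000)
conf = probs.max(axis=1)
correct = (probs.argmax(axis=1) == labels).astype(float)
print(expected_calibration_error(conf, correct))
```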

5.3 Active Learning

One particularly promising application of uncertainty estimation is to guide an agent's exploration towards the parts of a space with which it is most unfamiliar. We have evaluated our BNN algorithms in two instances of this general approach: active learning, and intrinsic motivation for reinforcement learning. The next two sections present experiments in these two domains, respectively.

In the simplest active learning setting (Settles, 2010), an algorithm is given a set of unlabeled examples and, in each round, chooses one unlabeled example to have labeled. A classic Bayesian approach to active learning is the information gain criterion (MacKay, 1992a), which in each step attempts to achieve the maximum reduction in posterior entropy. Under the assumption of i.i.d. Gaussian noise, this is equivalent to choosing the unlabeled example with the largest predictive variance.
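In code, the acquisition step amounts to scoring each pool point by its Monte Carlo predictive variance, as in this sketch (the sampler and data here are toy placeholders):

```python
import numpy as np

def select_query(predict_samples, X_pool):
    """Information-gain acquisition for regression under i.i.d. Gaussian noise:
    choose the pool point with the largest predictive variance.
    predict_samples(X) returns an array of shape (n_mc, n_points), one row per
    weight sample drawn from the (approximate) posterior."""
    preds = predict_samples(X_pool)
    return int(np.argmax(preds.var(axis=0)))

# toy usage: a fake "posterior" over linear functions of the input
np.random.seed(0)
X_pool = np.random.randn(100, 5)
W_samples = np.random.randn(50, 5)                 # 50 weight samples of a toy linear model
predict_samples = lambda X: W_samples @ X.T        # shape (50, n_points)
print(select_query(predict_samples, X_pool))
```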

Dataset PBP NNG-FFG NNG-MVG NNG-BlkTri
Boston      0.761±0.032  0.718±0.035  0.891±0.021  0.889±0.024
Concrete    0.817±0.028  0.811±0.028  0.913±0.010  0.922±0.006
Energy      0.471±0.076  0.438±0.075  0.617±0.087  0.646±0.088
Kin8nm      0.587±0.021  0.659±0.015  0.731±0.021  0.759±0.023
Naval       0.270±0.098  0.321±0.087  0.596±0.073  0.598±0.070
Pow. Plant  0.509±0.068  0.618±0.050  0.829±0.020  0.853±0.020
Wine        0.883±0.042  0.918±0.014  0.957±0.009  0.964±0.006
Yacht       0.620±0.053  0.597±0.063  0.717±0.072  0.727±0.070
Table 4: Pearson correlation of each algorithm’s predictive variances with those of HMC.
Figure 3: Performance on (a) CartPoleSwingup, (b) MountainCar, and (c) DoublePendulum of [TRPO] the TRPO baseline with Gaussian control noise, [TRPO+BBB] the VIME baseline with a BBB dynamics network, and [TRPO+NNG-MVG] VIME with an NNG-MVG dynamics network (ours). The darker-colored lines represent the median performance over 10 different random seeds, while the shaded areas show the interquartile range.

We first investigated how accurately each of the algorithms could estimate predictive variances. In each trial, we randomly selected 20 labeled training examples and 100 unlabeled examples; we then computed each algorithm's posterior predictive variances for the unlabeled examples. 10 independent trials were run. As is common practice, we treated the predictive variance of HMC as the "ground truth" predictive variance. Table 4 reports the average and standard error of Pearson correlations between the predictive variances of each algorithm and those of HMC. In all of the datasets, our two methods NNG-MVG and NNG-BlkTri match the HMC predictive variances significantly better than the other approaches, and NNG-BlkTri consistently matches them slightly better than NNG-MVG due to the more flexible variational posterior.

Next, we evaluated the performance of all methods on active learning, following the protocol of Hernández-Lobato & Adams (2015). As a control, we evaluated each algorithm with labeled examples selected uniformly at random; this is denoted with the _R suffix. Active learning results are denoted with the _A suffix. The average test RMSE for all methods is reported in Table 3. These results show that NNG-MVG_A performs better than NNG-MVG_R on most datasets and is closer to HMC_A than PBP_A and NNG-FFG_A are. However, we note that better predictive variance estimates do not reliably yield better active learning results, and in fact, active learning methods sometimes perform worse than random. Therefore, while information gain is a useful criterion for benchmarking purposes, it is important to explore other uncertainty-based active learning criteria.

5.4 Reinforcement Learning

We next experimented with using uncertainty to provide intrinsic motivation in reinforcement learning. Houthooft et al. (2016) proposed Variational Information Maximizing Exploration (VIME), which encouraged the agent to seek novelty through an information gain criterion. VIME involves training a separate BNN to predict the dynamics, i.e. to model the distribution p(s_{t+1} | s_t, a_t). With the idea that surprising states lead to larger updates to the dynamics network, the reward function was augmented with an "intrinsic term" corresponding to the information gain for the BNN. If the history of the agent up until time step t is denoted as ξ_t = {s_1, a_1, ..., s_t}, then the modified reward can be written in the following form:

r'(s_t, a_t, s_{t+1}) = r(s_t, a_t) + κ D_KL[ q(w | ξ_t, a_t, s_{t+1}) ∥ q(w | ξ_t) ]    (14)

where κ weights the intrinsic term against the external reward.

In the above formulation, the true posterior is generally intractable. Houthooft et al. (2016) approximated it using Bayes by Backprop (BBB) (Blundell et al., 2015). We experimented with replacing the fully factorized posterior with our NNG-MVG model.
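For intuition, the sketch below computes an information-gain bonus of the form in eq. (14) for the special case of a fully factorized Gaussian posterior over the dynamics network's weights, where the KL divergence has a closed form; NNG-MVG instead uses the Kronecker-factored expressions derived in the supplement. The function name and the bonus weight are our own choices.

```python
import numpy as np

def intrinsic_reward(mu_old, var_old, mu_new, var_new, weight=1e-3):
    """Information-gain bonus in the spirit of eq. (14): KL between the dynamics BNN's
    posterior after and before updating on (s_t, a_t, s_{t+1}), shown for the fully
    factorized Gaussian case where the KL has a simple closed form."""
    kl = 0.5 * np.sum(np.log(var_old / var_new)
                      + (var_new + (mu_new - mu_old) ** 2) / var_old
                      - 1.0)
    return weight * kl

# toy usage with a 4-weight "dynamics network"
mu_old, var_old = np.zeros(4), np.full(4, 0.1)
mu_new, var_new = np.array([0.02, -0.01, 0.0, 0.03]), np.full(4, 0.09)
print(intrinsic_reward(mu_old, var_old, mu_new, var_new))
```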

Following the experimental setup of Houthooft et al. (2016), we tested our method in three continuous control tasks and sparsified the rewards (see supplement for details). We compared our NNG-MVG dynamics model with a Gaussian noise baseline, as well as the original VIME formulation using BBB. All experiments used TRPO to optimize the policy itself (Schulman et al., 2015).

Performance is measured by the average return (under the original MDP's rewards, not including the intrinsic term) at each iteration. Figure 3 shows the performance results on the three tasks. Consistently with Houthooft et al. (2016), we observed that the Gaussian noise baseline completely breaks down and rarely achieves the goal, while VIME significantly improved the performance. However, replacing the dynamics network with NNG-MVG considerably improved the exploration efficiency on all three tasks. Since the policy search algorithm was shared between all three conditions, we attribute this improvement to the improved uncertainty modeling by the dynamics network.

6 Conclusion

We drew a surprising connection between natural gradient ascent for point estimation and for variational inference. We exploited this connection to derive surprisingly simple variational BNN training procedures which can be instantiated as noisy versions of widely used optimization algorithms for point estimation. This let us efficiently fit MVG variational posteriors, which capture correlations between different weights. Our variational BNNs with MVG posteriors matched the predictive variances of HMC much better than fully factorized posteriors, and led to more efficient exploration in the settings of active learning and reinforcement learning with intrinsic motivation.

Acknowledgements

GZ was supported by an NSERC Discovery Grant, and SS was supported by a Connaught New Researcher Award and a Connaught Fellowship. We thank Emtiyaz Khan and Mark van der Wilk for helpful discussions.

References

Appendix A Natural Gradient for Multivariate Gaussian

Suppose we have a model p_θ parameterized by θ, which lives in a subspace S (such as the set of symmetric matrices). The natural gradient is motivated in terms of a trust region optimization problem: find the optimal update δ in a neighborhood of θ defined by the KL divergence,

max_δ h(θ + δ)   s.t.   D_KL( p_θ ∥ p_{θ+δ} ) ≤ ε

To first order, the optimal solution to this optimization problem is given by δ ∝ F^{-1} ∇_θ h, i.e. the update θ ← θ + α F^{-1} ∇_θ h. Here F is the Fisher matrix and α is the learning rate. Note that ∇_θ h and F are defined only for θ ∈ S, but these can be extended to the full space however we wish without changing the optimal solution.

Now assume the model is parameterized as a multivariate Gaussian N(μ, Σ). The KL divergence between N(μ, Σ) and N(μ + dμ, Σ + dΣ) is, to second order,

D_KL ≈ ½ dμ^T Σ^{-1} dμ + ¼ tr( Σ^{-1} dΣ Σ^{-1} dΣ )    (15)

Hence, the Fisher matrices w.r.t. μ and vec(Σ) are

F_μ = Σ^{-1},    F_Σ = ½ Σ^{-1} ⊗ Σ^{-1}    (16)

Then, by the property of the vec-operator, (A ⊗ B) vec(C) = vec(B C A^T), we get the natural gradient update for μ:

∇̃_μ L = F_μ^{-1} ∇_μ L = Σ ∇_μ L    (17)

An analogous derivation gives us ∇̃_Σ L = 2 Σ (∇_Σ L) Σ. Considering Λ = Σ^{-1}, we have dΛ = −Σ^{-1} dΣ Σ^{-1}, which gives us the convenient formulas

∇̃_μ L = Σ ∇_μ L,    ∇̃_Λ L = −2 ∇_Σ L    (18)

Recall that in variational inference, the gradients of the ELBO with respect to μ and Σ are given by

∇_μ L = E_q[ ∇_w log p(D | w) + λ ∇_w log p(w) ]
∇_Σ L = ½ E_q[ ∇²_w log p(D | w) + λ ∇²_w log p(w) ] + (λ/2) Λ    (19)

Based on eq. (19) and eq. (18), the natural gradient is given by:

∇̃_μ L = Λ^{-1} E_q[ ∇_w log p(D | w) + λ ∇_w log p(w) ]
∇̃_Λ L = −E_q[ ∇²_w log p(D | w) + λ ∇²_w log p(w) ] − λ Λ    (20)

Appendix B Matrix Variate Gaussian

Recently, the matrix variate Gaussian (MVG) distribution has also been used in Bayesian neural networks (Louizos & Welling, 2016; Sun et al., 2017). A matrix variate Gaussian models a Gaussian distribution over a matrix W ∈ R^{n×p}:

MN(W; M, U, V) = exp( −½ tr[ V^{-1} (W − M)^T U^{-1} (W − M) ] ) / ( (2π)^{np/2} |V|^{n/2} |U|^{p/2} )    (21)

in which M is the mean, U is the covariance matrix among the rows, and V is the covariance matrix among the columns. Both U and V must be positive definite in order to be covariance matrices. In terms of the ordinary multivariate Gaussian, the vectorization of W follows a Gaussian distribution whose covariance matrix is the Kronecker product of V and U:

vec(W) ∼ N( vec(M), V ⊗ U )    (22)

Appendix C Implementation Details

C.1 Regression Implementation Details

The datasets were randomly split into training and test sets, with 90% of the data for training and the remainder for testing. To reduce randomness, we repeated the splitting process 20 times (except for the two largest datasets, Protein and Year, where we repeated it 5 times and once, respectively). For all datasets except the two largest ones, we used neural networks with 50 hidden units; for the two largest, we used 100 hidden units. Besides, we also introduced a Gamma prior p(τ) = Gamma(τ; a_0, b_0) for the precision τ of the Gaussian likelihood, and included its posterior in the variational objective. The variational posterior we used is q(τ) = Gamma(τ; a, b); the expected log-likelihood of a single example can then be computed as

E_{q(w) q(τ)}[ log p(y | x, w, τ) ] = ½ ( ψ(a) − log b ) − ½ log 2π − (a / 2b) E_{q(w)}[ (y − f(x, w))² ]    (23)

where ψ represents the digamma function. Therefore, the ELBO can be computed as

L = E_{q(w) q(τ)}[ log p(D | w, τ) ] − λ D_KL( q(w) q(τ) ∥ p(w) p(τ) )    (24)

With the ELBO as above, we can directly compute the gradients with respect to the variational parameters using automatic differentiation.

In training, the input features and training targets were normalized to have zero mean and unit variance. We removed the normalization on the targets at test time. For each dataset, we used fixed settings of λ and η unless stated otherwise. We set the batch size to 10 for the 5 small datasets with fewer than 2000 data points, 500 for Year, and 100 for the other four. Besides, we decayed the learning rate during the second half of training.

C.2 Classification Implementation Details

Throughout the classification experiments, we used the VGG16 architecture, but reduced the number of filters in each convolutional layer by half.

In training, we adopted the learning rate selection strategy of Ba et al. (2017). In particular, given a parameter update vector v, the KL divergence between the predictive distributions before and after the update is approximately given by the Fisher norm:

D_KL[ p(y | x, w) ∥ p(y | x, w + v) ] ≈ ½ v^T F v    (25)

Observe that choosing a step size of α will produce an update with squared Fisher norm α² v^T F v. Motivated by the idea of a trust region, we chose α in each iteration such that the squared Fisher norm is at most some value c:

α = min( α_max, √( c / (v^T F v) ) )    (26)

We used an exponential decay schedule for the threshold, c_k = c_0 ζ^k, where c_0 and ζ were tunable parameters (for noisy K-FAC in our CIFAR-10 experiments, c_0 is 0.001 or 0.01 when models were trained without/with Batch Normalization (Ioffe & Szegedy, 2015), and ζ is 0.95; c_0 is 0.0001 for noisy Adam), and k was incremented periodically (every epoch in our CIFAR-10 experiments). In practice, computing v^T F v exactly involves curvature-vector products after each update, which introduces significant computational overhead, so we instead used the approximate Fisher that we use to compute the natural gradient. The maximum step size α_max was held fixed.
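The step-size rule of eq. (26) can be implemented in a few lines; the sketch below uses a diagonal Fisher approximation in place of the exact curvature-vector product, mirroring the approximation described above (names and values are illustrative only):

```python
import numpy as np

def trust_region_step_size(v, Fv, c, max_step):
    """Step-size selection in the spirit of eq. (26): scale the update v so that its
    squared Fisher norm step^2 * v^T F v stays below the threshold c.
    Fv is an (approximate) Fisher-vector product F v."""
    sq_norm = float(v @ Fv)
    if sq_norm <= 0.0:
        return max_step
    return min(max_step, float(np.sqrt(c / sq_norm)))

# usage with a diagonal Fisher approximation standing in for the exact curvature
np.random.seed(0)
v = np.random.randn(10)                      # proposed parameter update
f_diag = np.abs(np.random.randn(10)) + 1e-8  # approximate diagonal Fisher
print(trust_region_step_size(v, f_diag * v, c=1e-3, max_step=0.1))
```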

To reduce the computational overhead of K-FAC (and noisy K-FAC) introduced by updating the approximate Fisher matrix and inverting it, we set the statistics and inverse update intervals T_stats and T_inv to values larger than 1. This means our curvature statistics are somewhat more stale, but we found that it didn't significantly affect per-iteration optimization performance. One remaining hyperparameter was set to 0.01 and 0.003 for noisy K-FAC and noisy Adam, respectively.

We noticed that it was favorable to tune the regularization parameter λ and the prior variance η. We used a smaller regularization parameter when data augmentation was adopted; we speculate that data augmentation effectively provides more training examples (larger N), so it is reasonable to use a smaller λ. Moreover, we used a different setting of η when models were trained without Batch Normalization.

C.3 Active Learning Implementation Details

Following the experimental protocol of PBP (Hernández-Lobato & Adams, 2015), we split each dataset into training and test sets with 20 and 100 data points, respectively. All remaining data points were included in the pool set. In all experiments, we used a neural network with one hidden layer and 10 hidden units.

After fitting our model on the training data, we evaluated its performance on the test data and then added one data point from the pool set into the training set. The selection was based on the information gain criterion, which is equivalent to choosing the point with the highest predictive variance. This process was repeated 10 times; that is, we collected 9 points from the pool set. For each iteration, we re-trained the whole model from scratch.

Beyond that, as uncertainty estimation is of fundamental importance in active learning, we also performed experiments to evaluate the uncertainty estimates of our method directly, measured by the Pearson correlation of the predictive variances compared to HMC (Neal et al., 2011). Recall that the Pearson correlation,

ρ_{X,Y} = cov(X, Y) / (σ_X σ_Y)    (27)

is a measure of the linear correlation between two variables X and Y. The Pearson correlation ranges from −1 to 1, with larger values representing stronger correlation. We compared several algorithms, including PBP, NNG-FFG, NNG-MVG, and NNG-BlkTri.

We trained NNG-FFG, NNG-MVG and NNG-BlkTri for 20000 epochs, PBP for 40 epochs, and HMC with 20 chains of 100000 iterations each. For all models, we used 1000 sampled weight configurations for predicting on the test set, from which we computed the model's predictive variance for every data point in the test set. Finally, we computed the Pearson correlation between each model and HMC in terms of predictive variance. In all experiments, we used no extra damping term.

C.4 Reinforcement Learning Implementation Details

In all three tasks, CartPoleSwingup, MountainCar and DoublePendulum, we used a one-layer Bayesian neural network with 32 hidden units for both BBB and NNG-MVG, with the rectified linear unit (ReLU) as the activation function. The number of samples drawn from the variational posterior was fixed to 10 during training. For TRPO, the batch size was set to 5000 and the replay pool holds a fixed number of 100,000 samples. For both BBB and NNG-MVG, the dynamics model was updated in each epoch with 500 iterations and a batch size of 10. For the policy network, a one-layer network with 32 tanh units was used.

For all three tasks, we sparsified the rewards in the following way. A reward is given in CartPoleSwingup only when the pole angle is close to upright; in MountainCar only when the car escapes the valley; and in DoublePendulum only when the end effector is within a small distance of the target.

To derive the intrinsic reward in Houthooft et al. (2016), we just need to analyze a single layer, since we assume layer-wise independence in NNG-MVG. The intrinsic reward for each layer is the KL divergence between the variational posterior after and before an update (note: below, L is the ELBO with q(w | ξ_t) as the prior), which for a natural gradient step of size α can be approximated as

D_KL[ q(w; μ + αΔμ, Σ + αΔΣ) ∥ q(w; μ, Σ) ] ≈ (α²/2) [ Δμ^T F_μ Δμ + vec(ΔΣ)^T F_Σ vec(ΔΣ) ]    (28)

where Δμ and ΔΣ denote the natural gradient updates of μ and Σ, and α is the step size. As shown in eq. (16), the Fisher matrix for μ is given by F_μ = Σ^{-1} = Λ; thus the first term in eq. (28) is easy to compute by exploiting the Kronecker structure. However, F_Σ = ½ Σ^{-1} ⊗ Σ^{-1}, where Σ itself is a gigantic matrix, which makes computation of the second term intractable. Fortunately, the approximate variational posterior is a matrix variate Gaussian whose covariance is a Kronecker product, i.e. Σ = Σ_2 ⊗ Σ_1, where Σ_1 is of size n × n and Σ_2 is of size p × p.

Using F_Σ = ½ Σ^{-1} ⊗ Σ^{-1} and substituting Σ with Σ_2 ⊗ Σ_1, we get the following identity for the quadratic form in eq. (28):

vec(dΣ)^T F_Σ vec(dΣ) = ½ tr[ (Σ^{-1} dΣ)² ]
  = ½ [ n tr((Σ_2^{-1} dΣ_2)²) + 2 tr(Σ_1^{-1} dΣ_1) tr(Σ_2^{-1} dΣ_2) + p tr((Σ_1^{-1} dΣ_1)²) ]    (29)

where the diagonal blocks correspond to the Fisher matrices of the two factors,

F_{Σ_1} = (p/2) Σ_1^{-1} ⊗ Σ_1^{-1}   and   F_{Σ_2} = (n/2) Σ_2^{-1} ⊗ Σ_2^{-1}    (30)

By further ignoring the off-diagonal block in eq. (29), we can decompose the second term of eq. (28) into two terms,

½ vec(ΔΣ_1)^T F_{Σ_1} vec(ΔΣ_1) = (p/4) tr[ (Σ_1^{-1} ΔΣ_1)² ]    (31)

and

½ vec(ΔΣ_2)^T F_{Σ_2} vec(ΔΣ_2) = (n/4) tr[ (Σ_2^{-1} ΔΣ_2)² ]    (32)

Now, each term can be computed efficiently since Σ_1 and Σ_2 are small matrices.

Appendix D Additional Results

Figure 4: Training curves for all three methods. For each method, we tuned the learning rate for updating the posterior mean. Note that BBB and NNG-FFG use the same form of variational posterior q(w), while NNG-MVG uses a more flexible distribution.

We also ran PBP_MV (Sun et al., 2017) and VMG (Louizos & Welling, 2016) on the regression datasets from the UCI collection (Asuncion & Newman, 2007). Results are shown in Table 5. Note that VMG introduces pseudo input-output pairs to enhance the flexibility of the posterior distribution.

Test RMSE Test log-likelihood
Dataset PBP_MV VMG PBP_MV VMG
Boston      3.137±0.155  2.810±0.110  -2.666±0.081  -2.540±0.080
Concrete    5.397±0.130  4.700±0.140  -3.059±0.029  -2.980±0.030
Energy      0.556±0.016  1.160±0.030  -1.151±0.016  -1.450±0.030
Kin8nm      0.088±0.001  0.080±0.001   1.053±0.012   1.140±0.010
Naval       0.002±0.000  0.000±0.000   4.935±0.051   5.840±0.000
Pow. Plant  4.030±0.036  3.880±0.030  -2.830±0.008  -2.780±0.010
Protein     4.490±0.012  4.140±0.010  -2.917±0.003  -2.840±0.000
Wine        0.641±0.006  0.610±0.010  -0.969±0.013  -0.930±0.020
Yacht       0.676±0.054  0.770±0.060  -1.024±0.025  -1.290±0.020
Year        9.450±NA     8.780±NA     -3.392±NA     -3.589±NA
Table 5: Averaged test RMSE and log-likelihood for the regression benchmarks.

While optimization was not the primary focus of this work, we compared NNG with the baseline BBB (Blundell et al., 2015) in terms of convergence. Training curves for two regression datasets are shown in Figure 4. We found that NNG-FFG trained in fewer iterations than BBB, while leveling off to similar ELBO values, even though our BBB implementation used Adam and hence itself exploited diagonal curvature. Furthermore, despite the increased flexibility and larger number of parameters, NNG-MVG took roughly half as many iterations to converge, while at the same time surpassing BBB by a significant margin in terms of the ELBO.