1 Background
There are several existing approaches for preventing catastrophic forgetting in regular (non-Bayesian) neural networks (NNs) by constructing a regularization term from the parameters of previous tasks, such as Elastic Weight Consolidation (EWC) [1] and Synaptic Intelligence (SI) [2]. In the Bayesian setting, Variational Continual Learning (VCL) [3] proposes a framework that makes use of Variational Inference (VI).
$$\mathcal{L}_t\big(q_t(\theta)\big) = \mathbb{E}_{q_t(\theta)}\big[\log p(\mathcal{D}_t\,|\,\theta)\big] - \mathrm{KL}\big(q_t(\theta)\,\|\,q_{t-1}(\theta)\big) \tag{1}$$
The objective function is as in Equation 1, where $t$ is the index of tasks, $q_t(\theta)$ represents the approximated posterior of the parameters $\theta$ of task $t$, and $\mathcal{D}_t$ is the data of task $t$. This is the same as the objective of conventional VI except that the prior is the posterior from the previous task, which produces regularization through the Kullback–Leibler (KL) divergence between the parameter distributions of the current and previous tasks. In addition, VCL [3] proposes a predictive model trained on coresets of seen tasks to perform prediction for those tasks, where each coreset consists of data samples drawn from the dataset of the corresponding task and held out from that task's training data.
$$\tilde{q}_t(\theta) = \arg\max_{q}\;\mathbb{E}_{q(\theta)}\big[\log p(C_t\,|\,\theta)\big] - \mathrm{KL}\big(q(\theta)\,\|\,q_t(\theta)\big) \tag{2}$$
As shown in Equation 2, $C_t$ represents the collection of coresets at task $t$ and $q_t$ is the optimal posterior obtained by Equation 1. VCL shows promising performance compared with EWC [1] and SI [2], which demonstrates the effectiveness of Bayesian approaches to continual learning.
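To make the objective concrete, the following is a minimal sketch of Equation 1 for a mean-field Gaussian posterior over a flat parameter vector, written with PyTorch distributions; the function and variable names (e.g. `log_likelihood`, `prev_mu`) are illustrative assumptions rather than the implementation used in the experiments.

```python
import torch
from torch.distributions import Normal, kl_divergence

def vcl_objective(mu, log_sigma, prev_mu, prev_log_sigma, log_likelihood, n_mc=10):
    """Negative of Equation 1: -E_{q_t}[log p(D_t | theta)] + KL(q_t || q_{t-1})."""
    q_t = Normal(mu, log_sigma.exp())               # current approximate posterior q_t(theta)
    q_prev = Normal(prev_mu, prev_log_sigma.exp())  # previous task's posterior, used as the prior
    theta = q_t.rsample((n_mc,))                    # reparameterised Monte Carlo samples
    expected_ll = torch.stack([log_likelihood(th) for th in theta]).mean()
    kl = kl_divergence(q_t, q_prev).sum()           # KL between current and previous posteriors
    return kl - expected_ll                         # minimise the negative of Equation 1
```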
2 Facilitating Bayesian continual learning by natural gradients
In order to prevent catastrophic forgetting in Bayesian continual learning, we would prefer the posterior of a new task to stay as close as possible to the posterior of the previous task. Conventional gradient methods give the direction of steepest descent of the parameters in Euclidean space, so a small change in the parameters might cause a large difference between the distributions. We posit that natural gradient methods may be a better choice than conventional gradient descent. The natural gradient is defined as the direction of steepest descent in Riemannian rather than Euclidean space, which means the natural gradient prefers the smallest change in terms of distribution while optimizing some objective function [4]. The formulation is as below:
$$\hat{\nabla}_\theta\mathcal{L} = F_\theta^{-1}\,\nabla_\theta\mathcal{L} \tag{3}$$
where $F_\theta$ is the Fisher information of $\theta$.
2.1 Natural gradients of the exponential family
Specifically, when the posterior of a parameter $\theta$ is from the exponential family, we can write it in the form $p(\theta\,|\,\eta) = h(\theta)\exp\{\eta^\top T(\theta) - A(\eta)\}$, where $h(\theta)$ is the base measure, $A(\eta)$ is the log-normalizer, $\eta$ is the natural parameter and $T(\theta)$ are the sufficient statistics. Then the Fisher information of $\eta$ is the covariance of the sufficient statistics, which is the second derivative of $A(\eta)$ [5]:
$$F_\eta = \mathrm{Cov}_\eta\big[T(\theta)\big] = \nabla^2_\eta A(\eta) \tag{4}$$
In this case, the natural gradient of $\eta$ is the transformation of the Euclidean gradient by the precision matrix of the sufficient statistics $T(\theta)$, i.e. $\hat{\nabla}_\eta\mathcal{L} = \mathrm{Cov}_\eta[T(\theta)]^{-1}\nabla_\eta\mathcal{L}$.
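As a concrete illustration (a standard computation, not one stated in the original text), a univariate Gaussian weight can be written in this exponential-family form, and its Fisher information in the natural parameterization is the covariance of the sufficient statistics:
$$\mathcal{N}(\theta\,|\,\mu,\sigma^2)=\underbrace{\tfrac{1}{\sqrt{2\pi}}}_{h(\theta)}\exp\Big\{\underbrace{\big(\tfrac{\mu}{\sigma^2},\,-\tfrac{1}{2\sigma^2}\big)}_{\eta^\top}\underbrace{\big(\theta,\ \theta^2\big)^\top}_{T(\theta)}-\underbrace{\big(\tfrac{\mu^2}{2\sigma^2}+\log\sigma\big)}_{A(\eta)}\Big\},\qquad
F_\eta=\mathrm{Cov}\big[T(\theta)\big]=\begin{pmatrix}\sigma^2 & 2\mu\sigma^2\\ 2\mu\sigma^2 & 2\sigma^4+4\mu^2\sigma^2\end{pmatrix}.$$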
2.2 Gaussian natural gradients and the Adam optimizer
In the simplest (and most common) formulation of Bayesian Neural Networks (BNNs), the weights are drawn from Gaussian distributions, with a mean-field factorization which assumes that the weights are independent. Hence, we have an approximate posterior for each weight $q(w_i) = \mathcal{N}(w_i\,|\,\mu_i, \sigma_i^2)$, where $\mu_i$ and $\sigma_i$ are the parameters to be optimized, and their Fisher information has an analytic form:
$$F_{\mu_i} = \frac{1}{\sigma_i^2}, \qquad F_{\sigma_i} = \frac{2}{\sigma_i^2} \tag{5}$$
Consequently, the natural gradient of the mean of the posterior can be computed as follows, where $\mathcal{L}$ represents the objective (loss) function:
$$\hat{g}_{\mu_i} \;\equiv\; \hat{\nabla}_{\mu_i}\mathcal{L} \;=\; \sigma_i^2\,\nabla_{\mu_i}\mathcal{L} \tag{6}$$
Equation 6 indicates that a small $\sigma_i^2$ can cause the magnitude of the natural gradient to be much reduced. In addition, BNNs usually need very small variances at initialization to obtain good performance at prediction time, which brings difficulties in tuning learning rates when applying vanilla Stochastic Gradient Descent (SGD) to this Gaussian Natural Gradient (GNG). As shown in Figures 1 and 2 in the supplementary materials, the scale of the variance at initialization changes the magnitude of GNG. Meanwhile, Adam optimization [6] provides a method to ignore the scale of the gradients in the update steps, which can compensate for this drawback of GNG. More precisely, Adam uses the second moment of the gradients to reduce the variance of the update steps:
$$\Delta\theta_k = -\alpha_k\,\frac{\bar{g}_k}{\sqrt{\overline{g^2_k}}} \tag{7}$$
where $k$ is the index of update steps, $\bar{\cdot}$ denotes averaging over update steps, and $\alpha_k$ is the adaptive learning rate at step $k$. Considering the first and second moments of the Gaussian natural gradient $\hat{g}_{\mu,k} = \sigma^2_k\, g_{\mu,k}$,
$$\bar{\hat{g}}_{\mu,k} = \overline{\sigma^2_k\, g_{\mu,k}}, \qquad \overline{\hat{g}^2_{\mu,k}} = \overline{\sigma^4_k\, g^2_{\mu,k}} \tag{8}$$
We can see that only when $\sigma^2_k$ is independent of $g_{\mu,k}$ and effectively constant over the averaged steps are the updates of GNG equal to the updates by Euclidean gradients in the Adam optimizer. It also shows that a larger variance of $\hat{g}_{\mu,k}$ over the update steps will result in smaller updates when applying Adam optimization to GNG.
We show a comparison between different gradient descent algorithms in the supplementary materials. More experimental results are shown in Section 4.
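For illustration, the following is a minimal sketch of how the Gaussian natural gradient of Equation 6 can be fed into an off-the-shelf Adam optimizer; it assumes a mean-field Gaussian posterior parameterized by `mu` and `log_sigma` and a hypothetical `loss_fn` returning the negative of Equation 1, and it is not the implementation used in the experiments.

```python
import torch

mu = torch.zeros(100, requires_grad=True)
log_sigma = torch.full((100,), -3.0, requires_grad=True)   # small initial variances
optimizer = torch.optim.Adam([mu, log_sigma], lr=1e-3)

def gng_adam_step(loss_fn):
    optimizer.zero_grad()
    loss = loss_fn(mu, log_sigma)
    loss.backward()
    with torch.no_grad():
        # Equation 6: natural gradient of the mean is the Euclidean gradient scaled by sigma^2.
        mu.grad.mul_(log_sigma.exp() ** 2)
    optimizer.step()   # Adam then rescales by its running second moment, as in Equation 7
    return loss.item()
```

Only the gradient of the mean is transformed here, matching Equation 6; scaling by $\sigma^2$ before the Adam step is exactly the interaction analysed in Equations 7 and 8.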
In non-Bayesian models, natural gradients may have problems with the Adam optimizer because there is no posterior defined. The distribution measured in the natural gradient is then the conditional probability $p(y\,|\,x,\theta)$ [4], and the loss function is usually $\mathcal{L} = -\log p(y\,|\,x,\theta)$. In this case the natural gradient of $\theta$ becomes:
$$\hat{\nabla}_\theta\mathcal{L} = F_\theta^{-1}\,\nabla_\theta\mathcal{L}, \qquad F_\theta = \mathbb{E}\big[\nabla_\theta\log p(y\,|\,x,\theta)\,\nabla_\theta\log p(y\,|\,x,\theta)^\top\big] = \mathbb{E}\big[g_\theta\, g_\theta^\top\big] \tag{9}$$
If we apply this to the Adam optimizer, which means replacing $g_k$ in Equation 7 by $\hat{g}_k = F_\theta^{-1} g_k$, the normalization by gradient moments is duplicated and involves the fourth moment of the gradient, which is undesirable for both Adam optimization and natural gradients. One example is EWC [1], which uses Fisher information to construct the penalty for changing previous parameters; hence it has a similar form to Adam, and in our experience it works worse with Adam than with vanilla SGD. However, this is not the case for Bayesian models, where Equation 9 does not hold: the parameter $\theta$ has its posterior $q(\theta)$, the loss function is optimized w.r.t. the parameters of $q(\theta)$, and the Fisher information of $q(\theta)$ is not the second moment of the loss gradients in common cases.
3 Facilitating Bayesian Continual Learning with Stein Gradients
In the context of continual learning, “coresets” are small collections of data samples from every learned task, used for task revisiting when learning a new task [3]. The motivation is to retain summarized information about the data distribution of learned tasks, so that this information can be used to construct an optimization objective that prevents parameters from drifting too far away from the solution space of old tasks while learning a new task. Typically, the memory cost of coresets increases with the number of tasks; hence, we would prefer the size of each coreset to be as small as possible. There are existing approaches to Bayesian coreset construction for scalable machine learning [7, 8]; the idea is to find a sparse weighted subset of the data to approximate the likelihood over the whole dataset. In their problem setting the coreset construction is also crucial to the posterior approximation, and the computational cost is at least $\mathcal{O}(MN)$ [8], where $M$ is the coreset size and $N$ is the dataset size. In Bayesian continual learning, the coreset construction does not play a role in the posterior approximation of a task. For example, we can construct coresets without knowing the posterior, e.g. random coresets or K-center coresets [3]
. However, the information of a learned task is contained not only in its data samples but also in its trained parameters, so we consider constructing coresets using our approximated posteriors, yet without interfering with the usual Bayesian inference procedure.
3.1 Stein gradients
Stein gradients [9] can be used to generate samples from a known distribution. Suppose we have a set of samples $\{x^l_i\}_{i=1}^M$ representing an empirical distribution $q_l(x)$, and we update them iteratively to move closer to samples from the conditional distribution $p(x\,|\,\mathcal{D})$ by $x^{l+1}_i = x^l_i + \epsilon\,\phi^*(x^l_i)$, where
$$\phi^* = \arg\max_{\phi\in\mathcal{F}}\Big\{-\frac{d}{d\epsilon}\,\mathrm{KL}\big(q_{[\epsilon\phi]}\,\big\|\,p\big)\Big\} = \arg\max_{\phi\in\mathcal{F}}\;\mathbb{E}_{x\sim q}\big[\mathcal{A}_p\,\phi(x)\big] \tag{10}$$
$\phi^*$ is chosen to decrease the KL-divergence between $q$ and $p$ in the steepest direction; $\mathcal{F}$ is chosen to be the unit ball of a Reproducing Kernel Hilbert Space (RKHS) to give a closed-form update of the samples; and $\mathcal{A}_p$ is the Stein operator. Thus, the Stein gradient can be computed by:
$$\phi^*(x_i) = \frac{1}{M}\sum_{j=1}^{M}\Big[k(x_j, x_i)\,\nabla_{x_j}\log p(x_j\,|\,\mathcal{D}) + \nabla_{x_j} k(x_j, x_i)\Big] \tag{11}$$
where $k(\cdot,\cdot)$ is the kernel of the RKHS.
In the mean-field BNN model introduced in Section 2.2, we can simply approximate the target $p(x_j\,|\,\mathcal{D})$ in Equation 11 through the approximate posterior, i.e. replace $p(x_j\,|\,\mathcal{D})$ by $\mathbb{E}_{q(\theta)}\big[p(x_j\,|\,\theta)\big]$. The computational complexity of the Stein gradient method is $\mathcal{O}(M^2)$, which is significantly cheaper than $\mathcal{O}(MN)$ when $M \ll N$.
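For concreteness, the following is a small NumPy sketch of the update direction in Equation 11 with an RBF kernel; `grad_log_p`, which returns $\nabla_x \log p(x\,|\,\mathcal{D})$ (or its approximation under the BNN posterior), is a placeholder supplied by the user, and the bandwidth heuristic follows [9] only in spirit.

```python
import numpy as np

def stein_gradient(x, grad_log_p, bandwidth=None):
    """Stein update direction (Equation 11) for samples x of shape (M, d)."""
    M = x.shape[0]
    sq_dists = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)   # (M, M) pairwise squared distances
    if bandwidth is None:                                               # median-type heuristic
        bandwidth = np.median(sq_dists) / np.log(M + 1) + 1e-8
    k = np.exp(-sq_dists / bandwidth)                                   # RBF kernel k(x_j, x_i)
    score = grad_log_p(x)                                               # (M, d)
    attraction = k.T @ score                                            # sum_j k(x_j, x_i) grad log p(x_j)
    repulsion = (k.sum(axis=0)[:, None] * x - k.T @ x) * (2.0 / bandwidth)  # sum_j grad_{x_j} k(x_j, x_i)
    return (attraction + repulsion) / M

# Usage sketch: move the coreset samples towards the target with step size eps.
# x = x + eps * stein_gradient(x, grad_log_p)
```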
4 Experiments
We tested GNG with Adam within the framework of VCL on permuted MNIST [1], split MNIST [2], and split fashion-MNIST [10] tasks. We applied a BNN with two hidden layers, each with 100 hidden units; all split tasks were tested using multi-head models [2]. The results are displayed in Figure 1 (left column); the error bars are from 5 runs with different random seeds. In the permuted MNIST task, GNG with Adam outperforms standalone Adam. There is no significant difference in the split tasks. More details and further analysis can be found in the supplementary material.

For the experiments with coresets, we tested two different usages of coresets. In the first, we use coresets to train a predictive model as introduced in [3] (Equation 2); in the second, we add a regret loss from the coresets to the objective function, which does not require a separate predictive model:
$$\widetilde{\mathcal{L}}_t\big(q_t(\theta)\big) = \mathcal{L}_t\big(q_t(\theta)\big) + \mathbb{E}_{q_t(\theta)}\big[\log p(C_{t-1}\,|\,\theta)\big] \tag{12}$$
where $\mathcal{L}_t$ is the VCL objective of Equation 1 and the second term in Equation 12 is the regret loss constructed from the coresets $C_{t-1}$ of previous tasks. We applied an RBF kernel to the Stein gradients in the same manner as described in [9] and tested the Stein coresets in both permuted and split tasks, comparing them with random and K-center coresets. The coreset size per task is the same as that used in [3] for both permuted MNIST and the split tasks. The results are shown in Figure 1 (right column). The regret usage of coresets gives better performance in general, and Stein coresets also outperform the other two types of coresets in most cases.
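A minimal sketch of the regret usage (Equation 12), building on the `vcl_objective` sketch from Section 1; `coreset_log_likelihood` is a hypothetical function returning the expected log-likelihood of one coreset under the current posterior, not a function from the original implementation.

```python
def vcl_objective_with_regret(mu, log_sigma, prev_mu, prev_log_sigma,
                              log_likelihood, coresets, coreset_log_likelihood):
    """Negative of Equation 12: the VCL loss plus a regret term from previous coresets."""
    loss = vcl_objective(mu, log_sigma, prev_mu, prev_log_sigma, log_likelihood)
    for c in coresets:   # coresets of previously seen tasks, C_{t-1}
        loss = loss - coreset_log_likelihood(mu, log_sigma, c)
    return loss
```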
References

[1] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, page 201611835, 2017.

[2] Friedemann Zenke, Ben Poole, and Surya Ganguli. Continual learning through synaptic intelligence. In International Conference on Machine Learning, pages 3987–3995, 2017.

[3] Cuong V. Nguyen, Yingzhen Li, Thang D. Bui, and Richard E. Turner. Variational continual learning. In International Conference on Learning Representations, 2018.

[4] Razvan Pascanu and Yoshua Bengio. Revisiting natural gradient for deep networks. In International Conference on Learning Representations, 2014.

[5] Matthew D. Hoffman, David M. Blei, Chong Wang, and John Paisley. Stochastic variational inference. The Journal of Machine Learning Research, 14(1):1303–1347, 2013.

[6] Diederik P. Kingma and Jimmy Lei Ba. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations, 2014.

[7] Jonathan Huggins, Trevor Campbell, and Tamara Broderick. Coresets for scalable Bayesian logistic regression. In Advances in Neural Information Processing Systems, pages 4080–4088, 2016.

[8] Trevor Campbell and Tamara Broderick. Bayesian coreset construction via greedy iterative geodesic ascent. arXiv preprint arXiv:1802.01737, 2018.

[9] Qiang Liu and Dilin Wang. Stein variational gradient descent: A general purpose Bayesian inference algorithm. In Advances in Neural Information Processing Systems, pages 2378–2386, 2016.

[10] Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017.

[11] Christos Louizos, Karen Ullrich, and Max Welling. Bayesian compression for deep learning. In Advances in Neural Information Processing Systems, pages 3288–3298, 2017.
Appendix A Comparing different gradient descent algorithms
Figure caption: updating trajectories of the parameters of a 1-dimensional Bayesian linear regression model in continual learning. The contour depicts the average MSE over the tasks seen so far, the cross mark indicates the position of the true parameters of each task, and different colours represent different tasks.

Figures 3 and 2 demonstrate how the optimization method and the scale of the variance affect parameter updates in a 1-dimensional Bayesian linear regression model for continual learning. In Figure 1(d) the update steps are smaller than in Figure 2(d), even though the scale of the variance is larger; this is because a larger initial value of $\sigma$ results in a larger variance of the gradients (see the difference between Figure 1(a) and Figure 2(a), and between Figure 1(b) and Figure 2(b)), and consequently a larger second moment as well, meaning that the step size of GNG decreases according to Equations 7 and 8 in the main text. In general, GNG shows lower variance in parameter updates, and it works better with Adam than with SGD.
Appendix B Further analysis of Gaussian Natural Gradients and Adam experiments
As any one model has limited capacity and each task contains some different information, the ideal case for continual learning is that each new task shares as much information as possible with previous tasks while occupying as little extra capacity within the neural network as possible. This is analogous to model compression [11], but one key difference is that we want more free parameters rather than parameters that are set to zero. For example, suppose there are two independent parameters $\theta_1$ and $\theta_2$ in a model and the log-likelihood of the current task factorizes as:
$$\log p(\mathcal{D}_t\,|\,\theta_1,\theta_2) = \log p(\mathcal{D}_t\,|\,\theta_1) + \log p(\mathcal{D}_t\,|\,\theta_2) - \log p(\mathcal{D}_t) \tag{13}$$
If $\theta_2$ is absolutely free for this task, it indicates the following conditional probability is a constant w.r.t. $\theta_2$:
$$p(\mathcal{D}_t\,|\,\theta_2) = p(\mathcal{D}_t) \tag{14}$$
This would require
$$\frac{\partial \log p(\mathcal{D}_t\,|\,\theta_1,\theta_2)}{\partial \theta_2} = \frac{\partial \log p(\mathcal{D}_t\,|\,\theta_2)}{\partial \theta_2} = 0 \tag{15}$$
Therefore, $\theta_2$ is free to move: no matter what value $\theta_2$ is set to in future tasks, it will not affect the loss of previously learned tasks. In realistic situations, $\theta_2$ is very unlikely to be absolutely free. However, it is feasible to maximize the entropy of $q(\theta_2)$, with larger entropy indicating more freedom for $\theta_2$. For instance, minimizing the KL divergence in Equation 1 includes maximizing the entropy of the parameters:
$$\mathrm{KL}\big(q_t(\theta)\,\|\,q_{t-1}(\theta)\big) = -H\big(q_t(\theta)\big) - \mathbb{E}_{q_t(\theta)}\big[\log q_{t-1}(\theta)\big] \tag{16}$$
where $H(\cdot)$ denotes the entropy.
On the other hand, it is undesirable to change parameters with lower entropy instead of those with higher entropy while learning a new task, since it could cause a larger loss on previous tasks.
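As a standard worked identity (not taken from the original text), for a single Gaussian weight the KL term in Equation 16 expands as
$$\mathrm{KL}\big(\mathcal{N}(\mu_t,\sigma_t^2)\,\big\|\,\mathcal{N}(\mu_{t-1},\sigma_{t-1}^2)\big)
= \log\frac{\sigma_{t-1}}{\sigma_t} + \frac{\sigma_t^2 + (\mu_t-\mu_{t-1})^2}{2\,\sigma_{t-1}^2} - \frac{1}{2},
\qquad H\big(\mathcal{N}(\mu_t,\sigma_t^2)\big) = \tfrac{1}{2}\log\big(2\pi e\,\sigma_t^2\big),$$
where the $-\log\sigma_t$ contribution is exactly the negative entropy term of Equation 16, so keeping the KL small discourages shrinking the variance more than the data requires.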
The entropy of a Gaussian distribution is determined by its variance alone. In this sense, a larger decrease in variance indicates a larger decrease in entropy. To understand why GNG works better on permuted MNIST tasks, we visualize how the variances of the weights change in Figure 4, where all values are normalized as below:
$$\hat{\sigma}^2_{i,t} = \frac{\sigma^2_{i,t}}{\max_j \sigma^2_{j,1}} \tag{17}$$
where $\max_j \sigma^2_{j,1}$ is the maximal variance of the first task, $i$ indexes the weights and $t$ the tasks. When the variance of parameters is decreased by learning a new task, the entropy of the model is decreased as well. We can think of this as new information being written into the model, so as the model learns more tasks, the variances of more parameters shrink, as shown in Figure 4.
In an ideal case, a parameter with larger variance should be chosen preferentially for writing new information, to avoid erasing information of previous tasks. Therefore, it would be preferable if the dark colour were spread more evenly in later tasks in Figure 4, and Adam + GNG appears to have this property for the permuted MNIST task (Figure 3(a)). However, there is no notable difference caused by GNG for the split MNIST tasks (Figure 3(b)), which is consistent with their performance in terms of average accuracy over tasks. The underlying reason needs further investigation.