A number of problems in artificial intelligence involve continual learning over very large datasets or even data streams, which have become ubiquitous in application domains such as life-long learning, computer vision, natural language processing, bioinformatics and robotics. As these big-data problems demand richer models, novel online algorithms are needed that scale efficiently in both accuracy and computational resources. Recent work has shown that complex models require regularization to avoid local minima and overfitting, even when data is abundant.
The aim of our work is to formulate an efficient online learning approach that leverages the advantages of stochastic gradient descent (SGD) and Bayesian filtering. Many learning tasks can be cast as optimization problems; SGD is simple, scalable and enjoys strong theoretical guarantees on convex problems [4, 5], but tends to overfit if not properly regularized. Bayesian filtering methods, on the other hand, track belief distributions over the optimal parameters to avoid overfitting, but are typically computationally prohibitive for rich models. To combine these two approaches, we had to address two questions.
The first question is how to update a global belief distribution over optimal parameters from a local update prescribed by SGD. In other words, rather than calculating the posterior using a likelihood function, we take gradients as directly specifying the velocity field—or flow field—of belief updates. As shown in Figure 1, this can be viewed as tracking an ensemble of models under the dynamics induced by the expected prediction error. We answer this question by using the principle of minimum information discrimination (MID)  to choose the most conservative posterior belief that is consistent with the gradient measurement and assumptions on the flow field.
The second question is how to generate predictions without integrating over the parameter uncertainty. Calculating the optimal belief update would require estimating the expected SGD update at every point in parameter space, which is prohibitive. To overcome this problem, a natural choice is to use Thompson sampling, i.e. sampling parameters from the posterior according to the probability of their being optimal. As is well known in the Bayesian optimization literature, optimizing the parameters of an unknown and possibly non-convex error function requires dealing with the exploration-exploitation trade-off, which is precisely where Thompson sampling has been shown to outperform most state-of-the-art methods.
Here, we illustrate this modelling approach by deriving the update rules for Gaussian belief distributions over the optimal parameter. By assuming that observations generate linear flow fields, we furthermore show that the resulting update rules have closed-form solutions. Because of this, the resulting learning algorithms are online, permitting training examples to be discarded once they have been used.
2 Gaussian Belief Flows
We focus on prediction tasks with parameterized models, each denoted by $f(x; w)$ with inputs $x$ and parameters $w$. At each round $t$, the algorithm maintains a belief distribution $p_t(w)$ over the optimal parameters. Here we choose $p_t(w)$ to be represented by a $d$-dimensional Gaussian with mean $\mu_t$ and covariance $\Sigma_t$:
$$ p_t(w) = N(w;\, \mu_t, \Sigma_t). $$
In each round, the algorithm samples a parameter vector $w$ from the distribution $p_t$. The components of $w$ are then used as parameters in a classification or regression machine for a given input $x$. For example, we can consider logistic regression, where $w$ holds the parameters of the logistic function. Or we can use deep networks, where $w$ specifies the synaptic weights of the different layers of the network. Without loss of generality, this yields a prediction $\hat{y} = f(x; w)$ for the particular input $x$.
A supervised output signal $y$ is also provided, and we wish to minimize the loss $\ell(y, \hat{y})$ between the true output and the predicted output.
To improve the loss on the current example, SGD then updates the parameter as
$$ w' = w - \eta\, \nabla_w \ell(y, \hat{y}), \qquad (1) $$
where $\eta$ is a learning rate which may vary as the number of rounds increases. For the case of a multilayer perceptron, this gradient can be efficiently computed by the well-known backpropagation algorithm in a single backward pass.
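A single round of this scheme can be sketched as follows; for illustration we assume a linear model with squared loss and a diagonal Gaussian belief, which are our choices rather than the paper's:

```python
import random

random.seed(1)

def sgd_round(mu, sigma, x, y, lr=0.1):
    """One round: sample parameters from the Gaussian belief, predict,
    and apply one SGD step to the sampled parameter vector."""
    # Thompson step: draw w ~ N(mu, diag(sigma^2)).
    w = [random.gauss(m, s) for m, s in zip(mu, sigma)]
    # Illustrative model: linear prediction with squared loss.
    y_hat = sum(wi * xi for wi, xi in zip(w, x))
    # Gradient of 0.5 * (y_hat - y)^2 with respect to w.
    grad = [(y_hat - y) * xi for xi in x]
    # SGD update (1): w' = w - lr * grad.
    w_new = [wi - lr * gi for wi, gi in zip(w, grad)]
    return w, w_new

w, w_new = sgd_round(mu=[0.0, 0.0], sigma=[1.0, 1.0], x=[1.0, -1.0], y=1.0)
```

The pair $(w, w')$ produced by such a round is exactly the input to the belief update derived next.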
2.1 Full Flow
Knowing that the sampled parameter vector $w$ needs to be modified to $w'$, how do we choose the posterior $p_{t+1}$? To answer this question, we assume that the update results from a linear flow field, i.e. each point in parameter space is updated as $w \mapsto A w + b$,
where $A$ is an affine transformation matrix and $b$ is a translational offset. The main advantage of a linear flow is that Gaussian distributions remain Gaussian under such a field. In particular, we choose the flow parameters to match the SGD update (1):
$$ A w + b = w'. \qquad (2) $$
Under this linear flow, the new belief distribution is again Gaussian, with the mean shifted to $\mu_{t+1} = A \mu_t + b$ and the covariance scaled to $\Sigma_{t+1} = A \Sigma_t A^T$. There are many potential $A$ and $b$ which satisfy the flow constraint in (2), so we need a way to regularize the ensuing distribution $p_{t+1}$
. We utilize the Kullback-Leibler divergence, minimizing
$$ D_{KL}(p_{t+1} \,\|\, p_t) \qquad (3) $$
subject to the constraint $A w + b = w'$. This is an application of the principle of MID, an extension of the maximum entropy principle, which governs the updates of exponential family distributions. The next theorem (see Appendix for the proof) shows that this problem has a closed-form solution.
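The closure property used here, that a Gaussian pushed through the affine flow $w \mapsto Aw + b$ is again Gaussian with mean $A\mu + b$ and covariance $A\Sigma A^T$, can be verified empirically; the 2-D numbers below are arbitrary illustrations, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary 2-D belief N(mu, Sigma) and affine flow w -> A w + b.
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
A = np.array([[0.9, 0.2],
              [0.0, 1.1]])
b = np.array([0.3, -0.1])

# Push samples of the prior through the flow ...
flowed = rng.multivariate_normal(mu, Sigma, size=200_000) @ A.T + b

# ... and compare against the predicted Gaussian N(A mu + b, A Sigma A^T).
mu_pred = A @ mu + b
Sigma_pred = A @ Sigma @ A.T
print(np.abs(flowed.mean(axis=0) - mu_pred).max())   # close to zero
print(np.abs(np.cov(flowed.T) - Sigma_pred).max())   # close to zero
```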
Let $\Sigma_t = U D U^T$ be the eigendecomposition of the covariance matrix. Let $x = D^{-1/2} U^T (w - \mu_t)$ and $y = D^{-1/2} U^T (w' - \mu_t)$ be the transformed (whitened) differences between the sampled parameter vector and the mean, before and after the update respectively, expressed in terms of the 2-D basis spanned by the unitary orthogonal vectors $e_1$ and $e_2$. Then, the solution to (3) is
where the 2-D transformation matrix is given by
. The hyperparameters of the posterior distribution are then equal to
There are actually four discrete solutions to (3), one for each combination of signs. If we assume that the SGD learning rate is small, then $A \approx I$. In this case, we should choose the solution connected to the identity, obtained by selecting the positive signs.
2.2 Diagonal and Spherical Flows
We now consider two special cases: flows with diagonal and spherical transformation matrices, i.e. of the form $A = \mathrm{diag}(a)$ and $A = sR$ respectively, where $\mathrm{diag}(a)$ denotes the square diagonal matrix with the elements of the vector $a$ on the main diagonal, $s$ is a scalar and $R$ is a unitary (rotation) matrix. We match these flow types with multivariate Gaussians having diagonal and spherical covariance matrices. Let $p_t$ be the prior and $p_{t+1}$ be the posterior at round $t$, and let subindices denote vector components in the following.
For the diagonal case, the multivariate distribution factorizes into univariate Gaussians that can be updated independently under flow fields of the form $w_i \mapsto a_i w_i + b_i$.
Under these constraints, it can be shown (see Appendix) that the optimal transformation is given by
$$ a_i = \frac{x_i y_i + \sqrt{x_i^2 y_i^2 + 4\,(1 + x_i^2)}}{2\,(1 + x_i^2)}, \qquad b_i = w_i' - a_i w_i, $$
where $x_i = (w_i - \mu_i)/\sigma_i$ and $y_i = (w_i' - \mu_i)/\sigma_i$ are the $i$-th components of the normalized sampled parameter before and after updating. Similarly to the general case, we choose the solution closer to the identity when the learning rate is small by picking the positive root. The posterior hyperparameters
$$ \mu_i' = a_i \mu_i + b_i, \qquad \sigma_i' = a_i \sigma_i \qquad (4) $$
are the new mean and standard deviation of the $i$-th component.
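As a concrete sketch, the per-component update reads as follows in code; under the KL objective described above, a short derivation gives the quadratic condition $a^2(1+x^2) - axy - 1 = 0$ per component, and the sketch uses its positive (near-identity) root:

```python
import math

def flow_1d(mu, sigma, w, w_new):
    """Per-component belief-flow update: the affine map a*w + b matching the
    SGD move w -> w_new that minimally perturbs N(mu, sigma^2) in KL."""
    x = (w - mu) / sigma       # normalized sampled weight, before update
    y = (w_new - mu) / sigma   # ... and after the update
    # Positive (near-identity) root of a^2 (1 + x^2) - a x y - 1 = 0.
    a = (x * y + math.sqrt(x * x * y * y + 4.0 * (1.0 + x * x))) / (2.0 * (1.0 + x * x))
    b = w_new - a * w
    return a * mu + b, a * sigma   # posterior mean and standard deviation

# Moving the sample towards the mean shrinks the variance ...
mu1, s1 = flow_1d(0.0, 1.0, 1.0, 0.9)
# ... while moving it away from the mean expands it.
mu2, s2 = flow_1d(0.0, 1.0, 1.0, 1.1)
assert s1 < 1.0 < s2
```

When the SGD step is zero ($w' = w$), the map reduces to the identity and the belief is left unchanged, as expected.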
For the spherical case, the flow field that preserves the spherical distribution is of the form $A = sR$, where $s$ is a scalar and $R$ is a unitary matrix such that $R(w - \mu_t)$ and $(w' - \mu_t)$ are collinear; that is, $A$ rotates and scales $(w - \mu_t)$ to align it to $(w' - \mu_t)$. The update is
$$ \mu_{t+1} = w' - sR\,(w - \mu_t), \qquad \sigma_{t+1} = s\,\sigma_t, $$
that is, similar to (4) but with an isotropic scaling factor $s$. The optimal scaling factor is then given by
$$ s = \frac{x y + \sqrt{x^2 y^2 + 4 d\,(d + x^2)}}{2\,(d + x^2)}, $$
where $x = \|w - \mu_t\|/\sigma_t$ and $y = \|w' - \mu_t\|/\sigma_t$, and the positive root is chosen as before for the near-identity transformation.
2.3 Non-Expansive Flows
The previously derived update rules allow flow fields to be expansive, producing posterior distributions with larger differential entropy than the prior. Such flow fields are needed when the error landscape is dynamic, e.g. when the data is nonstationary. However, for faster convergence with stationary distributions, it is desirable to restrict updates to non-expansive flows. Such flow fields are obtained by limiting the singular values of the transformation matrix $A$ to values smaller than or equal to one.
The pseudocode of a typical gradient-based online learning procedure is listed in Algorithm 1. For a $d$-dimensional multivariate Gaussian with a diagonal or spherical covariance matrix, the update has time complexity $O(d)$, while for an unconstrained covariance matrix a naive implementation performing a spectral decomposition in each iteration takes $O(d^3)$. The complexity of the unconstrained covariance implementation can be reduced using low-rank techniques. Numerically, it is important to maintain the positive semidefiniteness of the covariance matrix. One simple way to achieve this is to constrain its eigenvalues to be larger than a predefined minimum.
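Both safeguards, the non-expansive restriction and the eigenvalue floor, reduce to simple spectral clipping. A minimal sketch (the matrices and the floor of 1e-6 are arbitrary choices of ours):

```python
import numpy as np

def make_non_expansive(A):
    """Project a flow matrix onto the non-expansive set by clipping its
    singular values at one, so the posterior entropy cannot grow."""
    U, s, Vt = np.linalg.svd(A)
    return U @ np.diag(np.minimum(s, 1.0)) @ Vt

def floor_eigenvalues(Sigma, eps=1e-6):
    """Keep a covariance matrix positive definite by flooring its
    eigenvalues at eps, guarding against numerical error."""
    vals, vecs = np.linalg.eigh((Sigma + Sigma.T) / 2.0)
    return vecs @ np.diag(np.maximum(vals, eps)) @ vecs.T

A = make_non_expansive(np.array([[1.5, 0.0], [0.0, 0.5]]))       # -> diag(1.0, 0.5)
Sigma = floor_eigenvalues(np.array([[1.0, 0.999], [0.999, 1.0]]))
```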
The full optimal transformation will include rotations in the 2-D subspace spanned by the sampled and learned parameter vectors. In contrast, the diagonal transformation will only scale and translate each basis direction independently, and the spherical transformation acts on the radial parameter only. Since rotations allow for more flexible transformations, the full flow update will keep the belief distribution relatively unchanged, while the diagonal and spherical flows will force the belief distribution to shift and compress more. The update that is better at converging to an optimal set of weights is problem dependent. The three update types are shown in Figure 2a.
3.2 Update Rule
To strengthen the intuition, we briefly illustrate the non-trivial effect that the update rule has on the belief distribution. Figure 2b shows the values of the posterior hyperparameters of a one-dimensional standard Gaussian as a function of $x$ and $y$, that is, the differences between the sampled weight and the center of the prior Gaussian before and after the update.
Roughly, there are three regimes, which depend on the displacement
. First, when the displacement moves towards the prior mean without crossing it, the variance decreases. This occurs when $0 \le y/x < 1$, that is, below the diagonal in the first quadrant and above the diagonal in the third quadrant. Second, if the displacement crosses the prior mean, then the flow field is mainly explained in terms of a linear translation of the mean. This corresponds to the second and fourth quadrants. Finally, when the displacement moves away from the prior mean, the posterior mean follows the flow and the variance increases. This corresponds to the regions where $y/x > 1$, i.e. above the diagonal in the first quadrant and below the diagonal in the third quadrant.
Note that the diagonal $y = x$ leaves the two hyperparameters unchanged, and that the vertical axis $x = 0$ does not lead to a change of the variance. Figure 2c illustrates the change of the prior into a posterior belief. The left panel shows the update of a sampled weight equal to the standard deviation, and the right panel shows the update for a sampled weight equal to one-fifth of the standard deviation. Here, it is seen that if the sampled weight is closer to the mean, then the update is reflected in a mean shift with less change in the variance.
3.3 Pseudo Datasets
The belief updates can be related to Bayesian updates through (pseudo) datapoints that would yield the same posterior. Since a belief flow update ensures that the posterior stays within the Gaussian family, it is natural to relate it to Bayesian estimation of an unknown mean (but known covariance) under the self-conjugate Gaussian family. More precisely, let $N(\mu_t, \Sigma_t)$ and $N(\mu_{t+1}, \Sigma_{t+1})$ denote the prior and the posterior of a belief flow update. Then, it is easily verified that this is equivalent to conditioning the prior on a pseudo datapoint $\tilde{x}$ with likelihood function $N(\tilde{x};\, w, \Lambda^{-1})$, where
$$ \Lambda = \Sigma_{t+1}^{-1} - \Sigma_t^{-1}, \qquad \tilde{x} = \Lambda^{-1}\!\left( \Sigma_{t+1}^{-1} \mu_{t+1} - \Sigma_t^{-1} \mu_t \right). $$
The matrix $\Lambda$ is symmetric but not necessarily positive semidefinite unless we use non-expansive flows. A negative eigenvalue of $\Lambda$ then indicates an increase of the variance along the direction of its eigenvector. From a Bayesian point of view, this implies that the pseudo datapoint was removed (or forgotten), rather than added, along that direction. We have already encountered this case in the 1-D setting of Section 3.2, where the posterior variance increases due to a displacement that points away from the mean.
Figure 3 shows the pseudo dataset for a sequence of updates of a spherical belief flow. The temporal dynamics of the belief distribution can be thought of as driven by the addition (subtraction) of datapoints that attract (repel) the mean, shown in black, by relocating the center of mass. In the figure, blue and red correspond to added and subtracted datapoints respectively, and the circular areas indicate their precision $\lambda_t$, where $\lambda_t$ is the unique eigenvalue of $\Lambda_t$ at round $t$. The convergence of the belief distribution to a particular point can thus be analyzed in terms of the convergence of the precision-weighted average of the pseudo datapoints to that point, together with the accumulation of precision.
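In one dimension the equivalence is easy to check numerically: recover the pseudo observation and its precision from a prior-posterior pair, then verify that the standard conjugate Gaussian update reproduces the posterior. A sketch with made-up numbers:

```python
def pseudo_point(mu0, var0, mu1, var1):
    """Recover the pseudo datapoint (value, precision) whose conjugate
    Gaussian update turns the prior N(mu0, var0) into N(mu1, var1)."""
    lam = 1.0 / var1 - 1.0 / var0        # precision; negative if variance grew
    x = (mu1 / var1 - mu0 / var0) / lam  # pseudo observation
    return x, lam

def condition(mu0, var0, x, lam):
    """Standard conjugate update of a Gaussian prior on a Gaussian
    observation x with precision lam."""
    prec = 1.0 / var0 + lam
    return (mu0 / var0 + lam * x) / prec, 1.0 / prec

x, lam = pseudo_point(0.0, 1.0, 0.5, 0.8)   # shrinking variance: lam > 0
mu1, var1 = condition(0.0, 1.0, x, lam)     # recovers (0.5, 0.8)
```

An expansive update, e.g. a posterior variance of 1.5, yields a negative precision, matching the interpretation of a removed (forgotten) datapoint.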
4 Empirical Evaluation
We evaluated the diagonal variant of Gaussian belief flows (BFLO) on a variety of classification tasks by training logistic regressors and multilayer neural networks. These results were compared to several baseline methods, most importantly stochastic gradient descent (SGD). Our focus in these experiments was not to show better performance than existing learning approaches, but rather to illustrate the effect of different regularization schemes over SGD. Specifically, we were interested in the transient and steady-state regimes of the classifiers, to measure the online and generalization properties respectively.
4.1 Logistic Regression
For this model, we evaluated Gaussian belief flows (BFLO) on several binary classification datasets and compared its performance to three learning algorithms: AROW, stochastic gradient descent (SGD) and Bayesian Langevin dynamics (BLANG). With the exception of AROW, which combines large margin training and confidence weighting, these are all gradient-based learning algorithms. The algorithms were used to train a logistic regressor described as follows. The probability of the output $y \in \{0, 1\}$ given the corresponding input $x$ is modelled as $P(y = 1 \mid x) = \sigma(w^T x)$,
where $w$ is a parameter vector and $\sigma(z) = 1/(1 + e^{-z})$ is the logistic sigmoid. When used in combination with the binary KL-divergence as the error function (equivalently, one can use the binary cross-entropy), the error gradients become
$$ \nabla_w \ell = \left( \sigma(w^T x) - y \right) x. $$
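For concreteness, this gradient can be sketched and validated against a finite-difference approximation of the cross-entropy loss; all numerical values below are made up:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def logistic_grad(w, x, y):
    """Gradient of the binary cross-entropy of a logistic regressor,
    (sigmoid(w . x) - y) * x."""
    y_hat = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    return [(y_hat - y) * xi for xi in x]

def loss(w, x, y):
    """Binary cross-entropy -y log(y_hat) - (1 - y) log(1 - y_hat)."""
    y_hat = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    return -y * math.log(y_hat) - (1.0 - y) * math.log(1.0 - y_hat)

# Finite-difference check of the first gradient component.
w, x, y, eps = [0.2, -0.4], [1.5, 0.7], 1.0, 1e-6
num = (loss([w[0] + eps, w[1]], x, y) - loss([w[0] - eps, w[1]], x, y)) / (2 * eps)
assert abs(num - logistic_grad(w, x, y)[0]) < 1e-6
```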
We measured the online and generalization performance in terms of the number of mistakes made in a single pass through 80% of the data and the average classification error without updating on the remaining 20% respectively. Additionally, to test robustness to noise, we repeated the experiments, but inverting the labels on 20% of the training examples (the evaluation was still done against the true labels).
To test the performance in the online setting, we selected well-known binary classification datasets having a large number of instances, summarized as follows. MUSHROOM: physical characteristics of mushrooms, to be classified as edible or poisonous. This UCI dataset contains 8124 instances with 22 categorical attributes each, expanded to a total of 112 binary features (http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html). COVTYPE: 581,012 forest instances described by 54 cartographic variables that have to be classified into 2 groups of forest cover types. IJCNN: the first task of the IJCNN 2001 Challenge. We took the winner's preprocessed dataset, and balanced the classes by using only a subset of 27,130 instances of 22 features. EEG: a nonstationary time series containing a recording of 14,980 samples of 14 EEG channels. The task is to discriminate between the eye-open and eye-closed state (https://archive.ics.uci.edu/ml/datasets/EEG+Eye+State). A9A: derived from UCI Adult, this dataset consists of 32,561 datapoints with 123 features from census data, where the aim is to predict whether the income exceeds a given threshold.
[Table 1: online and final classification error in %.]
We used grid search to choose simple experimental parameters that gave good results for SGD, and used them on all datasets and gradient-based algorithms to isolate the effect of regularization. In particular, we avoided using popular tricks such as “heavy ball”/momentum , minibatches, and scheduling of the learning rate. SGD and BLANG were initialized with parameters drawn from and BFLO with a prior equal to , where . The learning rate was kept fixed at . For the AROW parameter we chose . With the exception of EEG, all datasets were shuffled at the beginning of a run.
Table 1 summarizes our experimental results averaged over 10 runs. The online classification error is given with its standard error. The last column lists the mean rank (out of 4, over all datasets), with 1 indicating an algorithm attaining the best classification performance. BFLO falls short in convergence speed due to its exploratory behavior in the beginning, but then outperforms the other classifiers in its generalization ability. Compared to the other methods, it comes closest to the performance of BLANG, both in the transient and in the steady-state regime, and in its remarkable robustness to noise. This may be because both methods use Monte Carlo samples of the posterior to generate predictions.
4.2 Feedforward Neural Networks
This experiment investigates the application of Belief Flows to learn the parameters of a more complex learning machine—in this case, a feedforward neural network with one hidden layer. We compared the online performance of Gaussian belief flows (BFLO) to two learning methods: plain SGD and SGD with dropout . As before, our aim was to isolate the effect of the regularization methods on the online and test performance by choosing a simple experimental setup with shared global parameters, avoiding architecture-specific optimization tricks.
We tested the learning algorithms on the well-known MNIST handwritten digit recognition task (http://yann.lecun.com/exdb/mnist/index.html, abbreviated here as BASIC), plus two variations derived in : RANDOM: MNIST digits with a random background, where each random pixel value was drawn uniformly; IMAGES: a patch from a black and white image was used as the background for the digit image. All datasets contained 62,000 grayscale 28×28-pixel images (totalling 784 features). We split each dataset into an online training set containing 80% and a test set having 20% of the examples.
[Table 2: online and final classification error in %.]
To attain converging learning curves in the (single-pass) online setting, we chose a modest architecture of 784 inputs, 200 hidden units and 10 outputs with aggressive updates. All units have a logistic sigmoid activation function, and error gradients were evaluated on the binary Kullback-Leibler divergence averaged over the outputs. At the beginning of each run, the examples were shuffled and the training was initialized with weights either drawn independently from a normal distribution (SGD and dropout) or by setting the prior accordingly (BFLO). Throughout the online learning phase, the learning rate was kept fixed and we applied 5 update iterations to each example before discarding it.
We report the results in Table 2, averaged over 5 runs. As the level of noise increased, all the classifiers declined in performance, with BASIC being the easiest and RANDOM being the hardest to learn. SGD behaved particularly poorly relative to the regularized learners, which have built-in mechanisms to avoid local minima. BFLO attained the lowest error rates both online and in generalization. Interestingly, it copes better with the RANDOM than with the IMAGES dataset. This could be because Monte Carlo sampling smooths the gradients within the neighborhood.
5 Discussion and Future Work
Our experiments indicate that Belief Flow methods are suited for difficult online learning problems where robustness is a concern. Gaussian belief flows may be related to ensemble learning methods in conjunction with locally quadratic cost functions. A continuous ensemble of predictors trained under gradient descent follows a linear velocity field when the objective function is a quadratic form. Under such flow fields, the Gaussian family is a natural choice for modelling the ensemble density, since it is invariant to linear velocity fields. An iteration of the update rule then infers the dynamics of the whole ensemble from a single sample by conservatively estimating the ensemble motion in terms of the Kullback-Leibler divergence.
Following the Bayesian rationale, the resulting predictor is an ensemble rather than a single member. Any strategy deciding which member to use in a given round must deal with the exploration-exploitation dilemma; that is, striking a compromise between minimizing the prediction error and trying out new predictions to further increase knowledge. This is necessary to avoid local minima; here we sample a predictor according to the probability of it being the optimal one, a strategy known as Thompson sampling [7, 23, 24]. Maintaining a dynamic belief in this manner allows the method to outperform algorithms that maintain only a single estimate in complex learning scenarios.
The basic scheme presented here can be extended in many ways. Some possibilities include the application of Gaussian belief flows in conjunction with kernel functions and gradient acceleration techniques, and in closed-loop setups such as in active learning. Furthermore, linear flow fields can be generalized by using more complex probabilistic models suitable for other classes of flow fields following the same information-theoretic framework outlined in our work. Finally, future theoretical work includes analyzing regret bounds, and a more in-depth investigation of the relation between stochastic approximation, reinforcement learning and Bayesian methods that are synthesized in a belief flow model.
6 Proof of Theorem 1
The KL divergence is given by
$$ D_{KL}(p_{t+1} \,\|\, p_t) = \int p_{t+1}(w) \log \frac{p_{t+1}(w)}{p_t(w)} \, dw. $$
For Gaussians in $\mathbb{R}^d$, this divergence is
$$ D_{KL} = \frac{1}{2}\left[ \mathrm{tr}\!\left( \Sigma_t^{-1} \Sigma_{t+1} \right) + (\mu_t - \mu_{t+1})^T \Sigma_t^{-1} (\mu_t - \mu_{t+1}) - d + \log \frac{\det \Sigma_t}{\det \Sigma_{t+1}} \right]. $$
The constraint on the flow implies
$$ b = w' - A w, $$
so that $\mu_{t+1} = w' - A (w - \mu_t)$ and $\Sigma_{t+1} = A \Sigma_t A^T$. The optimization can thus be written in terms of the matrix $A$ alone:
We first note that minimizing the KL-divergence implies that the transformation is full-rank, since collapsing the rank of the covariance matrix would lead to infinite divergence. Taking the derivative with respect to $A$ yields
which can be rewritten as
We first see that if $w' = w$, then $A = I$ is the solution, for which the flow field is invariant. The expression can be simplified if we consider the diagonalization of the covariance matrix $\Sigma_t = U D U^T$.
In that case, we transform the variables:
So, in terms of these transformed variables, the optimal condition becomes:
The general solution can be found by considering the 2-D basis spanned by the vectors and , with unit vectors and .
The optimal matrix is just the identity in the directions orthogonal to this 2-D subspace. Then the optimality condition in (12) can be written in terms of the matrix
To solve this quadratic matrix equation, we first note that symmetry of the matrices implies that the solution must satisfy: . This means that the solution must be of the restricted form:
Then in terms of the unknowns , , and , we have the following three equations:
Fortunately, we can determine the solution analytically in closed form:
The quadratic formula then yields:
Putting these together, we get that the optimal transformation matrix is:
We see that there are actually four discrete solutions for the optimal transformation. First, one of the two roots (one positive, one negative) can be chosen for the first unknown, and then the signs of the remaining two can be swapped. However, if we consider the situation where the updated parameter $w'$ is not far from $w$, then we should choose the solution connected to the identity matrix. This means selecting the positive root and ensuring that the diagonal terms of the transformation are positive. The full matrix solution can be expressed in terms of this as:
The parameters for the new belief distribution in the next round are then given in terms of this transformation:
This concludes our proof.
6.1 Special Cases
The optimal solution for the diagonal flow is obtained as a special case of (15) where the transformation matrix is diagonal and the 2-D rotation reduces to the identity. Let $\sigma_i^2$ be the variance of $p_t$ along the $i$-th component. If we rescale the parameter components as $x_i = (w_i - \mu_i)/\sigma_i$ and $y_i = (w_i' - \mu_i)/\sigma_i$, then the optimal scaling parameter is given by
$$ a_i = \frac{x_i y_i + \sqrt{x_i^2 y_i^2 + 4\,(1 + x_i^2)}}{2\,(1 + x_i^2)}. $$
For the spherical case, we begin by noting that the covariance and the transformation matrices are all isotropic, i.e. of the form $sR$, where $s$ is a scalar and $R$ is unitary. Because of this, the multivariate problem effectively reduces to a univariate problem, where $R$ first rotates $(w - \mu_t)$ to align it to $(w' - \mu_t)$ and $s$ then scales and translates the distribution along this axis.
-  M. Welling and Y.-W. Teh, “Bayesian Learning via Stochastic Gradient Langevin Dynamics,” in ICML, 2011.
-  H. Robbins and S. Monro, “A Stochastic Approximation Method,” Annals of Mathematical Statistics, vol. 22, no. 3, pp. 400–407, 1951.
-  S. Särkkä, Bayesian Filtering and Smoothing, ser. Institute of Mathematical Statistics Textbooks. Cambridge University Press, 2013.
-  L. Bottou, “Large-scale machine learning with stochastic gradient descent,” in Proceedings of COMPSTAT’2010. Springer, 2010, pp. 177–186.
-  F. Bach and E. Moulines, “Non-Asymptotic Analysis of Stochastic Approximation Algorithms for Machine Learning,” in NIPS, 2011.
-  S. Kullback, Information Theory and Statistics. New York: Wiley, 1959.
-  W. Thompson, “On the likelihood that one unknown probability exceeds another in view of the evidence of two samples,” Biometrika, vol. 25, no. 3/4, pp. 285–294, 1933.
-  J. Mockus, “Bayesian Approach to Global Optimization: Theory and Applications,” 2013.
-  O. Chapelle and L. Li, “An Empirical Evaluation of Thompson Sampling,” in NIPS, 2011.
-  K. Crammer and D. D. Lee, “Learning via Gaussian Herding,” in NIPS, 2010, pp. 451–459.
-  E. Jaynes, “Information Theory and Statistical Mechanics,” Physical Review. Series II, vol. 106, no. 4, pp. 620–630, 1957.
-  I. Csiszár and P. Shields, Information Theory and Statistics: A Tutorial. Now Publishers Inc, 2004.
-  J. R. Bunch, C. P. Nielsen, and D. C. Sorensen, “Rank-one modification of the symmetric eigenproblem,” Numerische Mathematik, vol. 31, no. 1, pp. 31–48, 1978.
-  K. Crammer, A. Kulesza, and M. Dredze, “Adaptive Regularization of Weighted Vectors,” in NIPS, 2009.
-  R. Collobert, S. Bengio, and Y. Bengio, “A parallel mixture of SVMs for very large scale problems,” Neural Computation, vol. 14, no. 05, pp. 1105–1114, 2002.
-  D. Prokhorov, “IJCNN 2001 neural network competition,” 2001.
-  C.-C. Chang and C.-J. Lin, “IJCNN 2001 challenge: Generalization ability and text decoding,” in IJCNN. IEEE, 2001.
-  C.-J. Lin, R. C. Weng, and S. S. Keerthi, “Trust region Newton method for large-scale logistic regression,” Journal of Machine Learning Research, vol. 9, pp. 627–650, 2008.
-  B. T. Polyak, “Some methods of speeding up the convergence of iteration methods,” Zh. Vychisl. Mat. Mat. Fiz., vol. 4, no. 5, pp. 791–803, 1964.
-  G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov, “Improving neural networks by preventing co-adaptation of feature detectors,” arXiv:1207.0580, 2012.
-  H. Larochelle, D. Erhan, A. Courville, J. Bergstra, and Y. Bengio, “An empirical evaluation of deep architectures on problems with many factors of variation,” in ICML, 2007.
-  R. Sutton and A. Barto, Reinforcement Learning: An Introduction. Cambridge, MA: MIT Press, 1998.
-  M. Strens, “A Bayesian framework for reinforcement learning,” in ICML, 2000.
-  P. A. Ortega and D. A. Braun, “A minimum relative entropy principle for learning and acting,” Journal of Artificial Intelligence Research, vol. 38, pp. 475–511, 2010.