
Kalman Gradient Descent: Adaptive Variance Reduction in Stochastic Optimization

by James Vuckovic, et al.

We introduce Kalman Gradient Descent, a stochastic optimization algorithm that uses Kalman filtering to adaptively reduce gradient variance in stochastic gradient descent by filtering the gradient estimates. We present both a theoretical analysis of convergence in a non-convex setting and experimental results which demonstrate improved performance on a variety of machine learning problems, including neural networks and black box variational inference. We also present a distributed version of our algorithm that enables large-dimensional optimization, and we extend our algorithm to SGD with momentum and RMSProp.





1 Introduction

Stochastic optimization is an essential component of most state-of-the-art machine learning techniques. Sources of stochasticity in machine learning optimization include handling large datasets, approximating expectations, and modelling uncertain dynamic environments. The seminal work of Robbins & Monro (1985) showed that, under certain conditions, it is possible to use gradient-based optimization in the presence of randomness. However, it is well-known that gradient randomness has an adverse effect on the performance of stochastic gradient descent (SGD) Wang et al. (2013). As a result, the construction of methods to reduce gradient noise is an active field of research Wang et al. (2013); Mandt & Blei (2014); Grathwohl et al. (2017); Roeder et al. (2017).

Below, we propose a method using the celebrated Kalman filter Kalman (1960) to reduce gradient variance in stochastic optimization in a way that is independent of the application area. Moreover, our method can be combined with existing (possibly application-specific) gradient variance-reduction methods. The specific contributions of this paper are:

  • A novel framework for performing linear filtering of stochastic gradient estimates in SGD;

  • Analysis of the asymptotic properties of the filter and a proof of convergence for the resulting optimization algorithm;

  • Extensions of the proposed framework to modern optimization algorithms, and analysis of asymptotic filter properties in these cases;

  • A novel, distributed variant of these algorithms to deal with high-dimensional optimization;

  • Experiments comparing our algorithm to traditional methods across different areas of machine learning, demonstrating improved performance.

The remainder of this paper is organized as follows: In Section 2, we set up the stochastic gradient descent algorithm as a linear system and construct the Kalman filter for this setup. In Section 3 we provide a theoretical analysis of the filter and the proposed optimization algorithm. In Section 4 we show how to extend this work and analysis to SGD with momentum and RMSProp, and propose a distributed variant of the algorithm suitable for large-scale optimization. In Section 5 we connect our method to other areas of the machine learning literature, and in Section 6 we apply these techniques to a variety of problems in machine learning. Finally, we discuss our conclusions in Section 7.

2 Problem Setup

We consider the problem

    min_{x ∈ ℝ^d} f(x),

where f : ℝ^d → ℝ is assumed to be at least differentiable, using stochastic gradient methods. Following the notation in Bottou et al. (2018), we will assume that we cannot directly observe ∇f, but instead we can evaluate a function g(x, ξ) with E_ξ[g(x, ξ)] = ∇f(x), where ξ is an ℝ^m-valued random variable.

In other words, in our setup we cannot compute the gradient directly, but we can obtain a stochastic, unbiased estimate of the gradient.

Define a discrete-time stochastic process (x_k)_{k≥0} by

    x_{k+1} = x_k − α_k g(x_k, ξ_k),        (1)

where (ξ_k) is a sequence of i.i.d. realizations of ξ, (α_k) is a sequence of non-negative scalar stepsizes, and x_0 is arbitrary. This is a stochastic approximation of the true gradient descent dynamics x_{k+1} = x_k − α_k ∇f(x_k). We will investigate how to set up this equation as a discrete-time, stochastic, linear dynamical system, and then apply linear optimal filtering to (1).
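The update (1) can be sketched in a few lines. The toy below uses an assumed quadratic objective and Gaussian gradient noise (both our illustrative choices, not taken from the paper) to show the Robbins–Monro stepsize schedule at work:

```python
import numpy as np

# Toy instance of the stochastic update (1): an assumed quadratic objective
# f(x) = 0.5 * ||x||^2 (true gradient: x) and an assumed Gaussian-noise
# oracle g(x, xi) = x + xi, so that E[g(x, xi)] = grad f(x).
rng = np.random.default_rng(0)

def g(x):
    return x + rng.normal(scale=0.5, size=x.shape)  # unbiased gradient estimate

x = np.array([5.0, -3.0])          # arbitrary x_0
for k in range(1, 2001):
    alpha = 1.0 / k                # stepsizes with sum = inf, sum of squares < inf
    x = x - alpha * g(x)           # x_{k+1} = x_k - alpha_k * g(x_k, xi_k)

print(np.linalg.norm(x))           # the iterates approach the minimizer at 0
```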

2.1 Linear Dynamics

The update (1) is a linear equation in the variables x_k and g(x_k, ξ_k). We will set up this equation in such a way that it can be filtered with linear filtering methods. Hence consider the discrete-time, stochastic, linear time-varying (LTV) system

    [ x_{k+1} ]   [ I  −α_k I ] [ x_k ]
    [ g_{k+1} ] = [ 0     I   ] [ g_k ] + w_k,        (2)

where w_k ~ N(0, Q_k) with Q_k ⪰ 0. When working with block matrices as above, we use the convention that each block is d × d, here and henceforth. Here, g_k represents a hidden estimate of the gradient ∇f(x_k), not necessarily equal to the noisy observation g(x_k, ξ_k) as above. This state-space setup is commonly called the “local linear model” in time-series filtering Särkkä (2013).

For the purposes of filtering, we must include a set of measurement equations for (2). We propose

    y_k = [ 0  I ] (x_k, g_k) + v_k,        (3)

with v_k ~ N(0, R_k), where R_k ≻ 0. We will use this measurement equation to model the stochastic gradient estimate by

    y_k = g(x_k, ξ_k),

where the hidden state g_k will be estimated by the Kalman filter. In this way, g(x_k, ξ_k) is implicitly modelled as the true gradient plus Gaussian noise.

By making the following identifications

    z_k = (x_k, g_k),   A_k = [ I  −α_k I ; 0  I ],   H = [ 0  I ],

we will often abbreviate the system (2),(3) to

    z_{k+1} = A_k z_k + w_k,    y_k = H z_k + v_k.        (6)

It is important to note that the trajectories in (6) are not uniquely determined given the observations (y_k). This is due to the absence of the initial value x_0 in our measurement equation. In reality, we will always know x_0 (it is required by the algorithm), so we must simply modify the measurement model to reflect this by treating x_0 as an exactly observed quantity at k = 0 and using (3) otherwise.

2.2 Kalman Filtering

We will now develop the Kalman filter for the LTV stochastic dynamical system (6). We would like to compute the conditional estimate ẑ_k = E[z_k | y_1, …, y_k] at every timestep k. The Kalman filter is a deterministic LTV system for computing ẑ_k in terms of ẑ_{k−1} and the observation y_k. The filter also propagates the covariance matrix P_k of the estimation error z_k − ẑ_k, and similarly for the one-step prediction.

The Kalman filter consists of a set of auxiliary (deterministic) LTV dynamics for ẑ_k and P_k Jazwinski (2007):

    ẑ_{k|k−1} = A_{k−1} ẑ_{k−1},                          (7)
    P_{k|k−1} = A_{k−1} P_{k−1} A_{k−1}ᵀ + Q_{k−1},       (8)
    S_k = H P_{k|k−1} Hᵀ + R_k,                           (9)
    K_k = P_{k|k−1} Hᵀ S_k⁻¹,                             (10)
    ẑ_k = ẑ_{k|k−1} + K_k (y_k − H ẑ_{k|k−1}),            (11)
    P_k = (I − K_k H) P_{k|k−1},                          (12)

with initial values ẑ_0 and P_0. It is well-known that the Kalman filter dynamics produce the optimal (i.e. minimum-variance) linear estimate of z_k Jazwinski (2007).

Writing ẑ_k = (x̂_k, ĝ_k), ĝ_k is then the optimal linear estimate of g_k given the observations y_1, …, y_k. Let us rewrite ĝ_{k+1} using equations (7)-(12), noting that ĝ_k = H ẑ_k, by multiplying (11) by H to obtain

    ĝ_{k+1} = (I − Λ_k) ĝ_k + Λ_k y_{k+1},    where Λ_k := H K_{k+1}.

If R_k is a uniformly bounded, positive definite matrix, it is easy to see that Λ_k is a bounded positive definite matrix, i.e. there exist constants 0 < λ ≤ λ̄ < 1 s.t. λI ⪯ Λ_k ⪯ λ̄I. Intuitively, Λ_k adapts depending on the uncertainty of the estimate ĝ_k relative to the measurement uncertainty R_k.

Hence we see that ĝ_{k+1} is an exponentially smoothed version of the measurements y_k, where Λ_k is an adaptive smoothing matrix. We will use this estimate as a “better” approximation for ∇f(x_k) than g(x_k, ξ_k) in (1). Writing v_k := ĝ_k (not to be confused with the measurement noise term from before), we will study the properties of the update

    x_{k+1} = x_k − α_k v_k,        (19)

which we call the Kalman gradient descent (KGD) dynamics.
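The exponential-smoothing interpretation is easiest to see in a scalar toy. Holding the smoothing factor constant (the filter instead adapts it), the recursion is just an exponential moving average of the noisy measurements; the function name and values below are purely illustrative:

```python
# Scalar illustration of the adaptive-smoothing view: with a constant
# smoothing factor lam in (0, 1), the filtered gradient estimate is an
# exponential moving average of the noisy gradient measurements.
def smooth(measurements, lam):
    est = measurements[0]
    out = [est]
    for m in measurements[1:]:
        est = (1 - lam) * est + lam * m   # g_hat <- (1 - lam) * g_hat + lam * y
        out.append(est)
    return out

noisy = [1.0, 3.0, -1.0, 2.0, 1.5]
print(smooth(noisy, 0.3))   # each entry moves 30% of the way toward the new measurement
```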

  Initialize the Kalman filter with ẑ_0, P_0.
  for k = 0, 1, 2, … do
     Assemble the observation y_k = g(x_k, ξ_k).
     Increment the Kalman filter with (A_k, y_k) to obtain ẑ_{k+1}, P_{k+1}.
     Extract v_{k+1} = ĝ_{k+1} from ẑ_{k+1} and set x_{k+1} = x_k − α_k v_{k+1}.
  end for
Algorithm 1 Kalman Gradient Descent
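As a concrete (and deliberately simplified) illustration of the idea behind Algorithm 1, the sketch below filters a scalar noisy gradient with a random-walk Kalman model before each descent step. The gradient-only state, noise variances, and stepsize are all our assumptions; the paper's algorithm filters the joint state with the full LTV model:

```python
import numpy as np

# Simplified sketch of filtered gradient descent on an assumed 1-D quadratic
# f(x) = 0.5 * x**2. The gradient alone is modelled as a random walk and the
# noisy gradient measurement is Kalman-filtered before the descent step.
rng = np.random.default_rng(1)

x = 4.0
g_hat, P = 0.0, 1.0        # filter state estimate and its variance
Q, R = 1e-2, 0.25          # assumed process / measurement noise variances
alpha = 0.1

for _ in range(500):
    g_meas = x + rng.normal(scale=0.5)   # noisy gradient of f at x
    # Kalman predict/update for the scalar gradient state
    P = P + Q
    K = P / (P + R)                      # Kalman gain (adaptive smoothing factor)
    g_hat = g_hat + K * (g_meas - g_hat)
    P = (1 - K) * P
    x = x - alpha * g_hat                # descend along the filtered gradient

print(abs(x))                            # settles near the minimizer at 0
```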

It is important to note that this setup is not equivalent to the “heavy-ball” momentum method, even in the case that the adaptive smoothing matrix is a constant scalar. See the remarks in Appendix A.3 for a calculation showing that the two methods cannot be made equivalent by a change of parameters.

3 Analysis

The analysis of the trajectories defined by (19) is broken into two components: first, we study the filter asymptotics (stability, convergence, and robustness); second, we study the convergence of (19) to a stationary point of the objective.

3.1 Filtering

By using the Kalman filter estimate instead of the raw stochastic gradient (or indeed the true gradient), we lose any a priori guarantees on the behaviour of the gradient estimate (e.g. boundedness). Since these guarantees are usually required for subsequent analysis, we must show that the filtered estimate is, in fact, well-behaved. More precisely, we will show that the linear dynamics are stable, have bounded error (in the mean-square sense), and are insensitive to mis-specified initial conditions.

The general conditions under which these good properties hold are well-studied in LTV filtering. We defer the majority of these domain-specific details to Appendix A.1 and Jazwinski (2007), while stating the results in terms of our stochastic optimization setup below. We will need the following definition.

Definition 1 (Stability).

Let z_{k+1} = A_k z_k + u_k be an arbitrary controlled LTV system in ℝⁿ. Let Φ(k, k₀) be the solution operator for the homogeneous equation z_{k+1} = A_k z_k. Such a system is called internally asymptotically stable if there exist c > 0 and ρ ∈ (0, 1) s.t.

    ‖Φ(k, k₀)‖ ≤ c ρ^{k − k₀}

for any k ≥ k₀. A system which is internally asymptotically stable is also BIBO stable (bounded-input bounded-output stable), in the sense that a sequence of bounded inputs will produce a sequence of bounded states.

We can rewrite the state estimate dynamics as

    ẑ_{k+1} = (I − K_{k+1} H) A_k ẑ_k + K_{k+1} y_{k+1},        (20)

which is a controlled linear system with inputs K_{k+1} y_{k+1}. Hence, when we refer to BIBO stability of the Kalman filter, we are saying that (20) is BIBO stable.

We will use a partial ordering on real symmetric matrices by saying that A ≻ B iff A − B is positive definite, and A ⪰ B iff A − B is positive semidefinite. In the sequel, we will also maintain the following assumption on the stepsizes (α_k), which is the same as in Robbins & Monro (1985).

Assumption 1.

(α_k) is non-increasing, Σ_k α_k = ∞, and Σ_k α_k² < ∞.

Theorem 1 (Filter Asymptotics).

Suppose Assumption 1 holds and that ẑ_k and P_k are governed by the Kalman filter equations (7)-(12). Then:

  1. (Stability) The filtered dynamics (20) are internally asymptotically stable, hence BIBO stable;

  2. (Bounded Error Variance) If there exist constants q, r > 0 s.t. Q_k ⪯ qI and R_k ⪯ rI, then the error covariance P_k is uniformly bounded;

  3. (Robustness) Let ẑ_k, ẑ′_k be two solutions to the Kalman filter dynamics with initial conditions ẑ_0, ẑ′_0 respectively. Then ‖ẑ_k − ẑ′_k‖ → 0 as k → ∞.

Proof. Using Lemma 3 in Appendix A.1, apply Theorems 3, 4, and 5 respectively. ∎

3.2 Optimization

We now study the convergence properties of the KGD dynamics (19). We first assume some conditions on the objective and the gradient noise, then prove convergence. In the sequel, E will denote expectation w.r.t. the joint distribution of all random variables that appear inside it.

Assumption 2.

The objective function f is three-times continuously differentiable, with uniformly bounded 1st, 2nd, and 3rd derivatives. In particular, ∇f is Lipschitz with some constant L > 0.

Assumption 3 (Bottou et al. (2018)).

The random variables ξ_k satisfy the following properties:

  1. ∇f(x)ᵀ E_ξ[g(x, ξ)] ≥ μ ‖∇f(x)‖² for some constant μ > 0 and all x;

  2. E_ξ[‖g(x, ξ)‖²] ≤ M + M_G ‖∇f(x)‖² for constants M, M_G ≥ 0.

Theorem 2 (Convergence of KGD).

Assume that Assumptions 1, 2, and 3 hold. Then

    lim inf_{k→∞} E[‖∇f(x_k)‖²] = 0,

where x_k evolves according to (19).

Proof. This follows from Corollary 1 and Proposition 2 in Appendix A.2. ∎

The proof of this result follows the same steps as Bottou et al. (2018), Theorem 4.10, but in our case we must account for the fact that the smoothed estimate is not a true “descent direction”. However, if the gradient varies “slowly enough”, then the smoothing error grows sufficiently slowly to allow for convergence. In practice, we see that the benefits of reduced variance greatly outweigh the drawbacks of using an approximate direction of descent.

3.3 Scalability

The main drawback of this algorithm is that the Kalman filter requires a series of matrix multiplications and inversions. In most implementations, these are O(d³) for a d × d matrix Cormen et al. (2009). Moreover, we require O(d²) extra space to store these matrices. Fortunately, there are GPU-accelerated Kalman filtering algorithms available Huang et al. (2011) which improve this bottleneck. Also, in Section 4, we will introduce a distributed version of Algorithm 1 that specifically deals with the issue of high dimensionality and enables horizontal scaling of the KGD algorithm in addition to the vertical scaling described here.

4 Extensions

We consider two types of extensions to the KGD algorithm: extending the setup to momentum and RMSProp, and a distributed version of KGD that addresses the scalability concerns of high-dimensional matrix operations.

4.1 Momentum & RMSProp

We study extensions of the KGD filtering setup to two modern optimization algorithms: SGD with momentum Qian (1999) and RMSProp Tieleman & Hinton (2012).

Consider the momentum update Qian (1999)

    v_{k+1} = β v_k + g(x_k, ξ_k),
    x_{k+1} = x_k − α_k v_{k+1},        (21)

with β ∈ (0, 1) and v_0 = 0. Rewriting these dynamics in the style of (2)-(3) (and including the additive noise terms as before), we have an LTV representation of the momentum update which can be filtered:

    [ x_{k+1} ]   [ I  −α_k β I  −α_k I ] [ x_k ]
    [ v_{k+1} ] = [ 0      β I       I  ] [ v_k ] + w_k,        (22)
    [ g_{k+1} ]   [ 0       0        I  ] [ g_k ]

    y_k = [ 0  0  I ] (x_k, v_k, g_k) + e_k.        (23)
In a similar fashion to momentum, consider the RMSProp update Tieleman & Hinton (2012)

    s_{k+1} = γ s_k + (1 − γ) g(x_k, ξ_k)²,
    x_{k+1} = x_k − α_k g(x_k, ξ_k) / √(s_{k+1} + ε),        (24)

with γ ∈ (0, 1), which we rewrite (writing D_k := diag(s_{k+1} + ε)^{−1/2}) as

    [ x_{k+1} ]   [ I  −α_k D_k ] [ x_k ]
    [ g_{k+1} ] = [ 0      I    ] [ g_k ] + w_k,        (25)

    y_k = [ 0  I ] (x_k, g_k) + e_k.        (26)

It is important to note that (25) does not correspond exactly to a realistic setup, because we assume the true gradient is used to construct the transition matrix, whereas in practice we will only have access to stochastic estimates of these quantities via g(x_k, ξ_k). Dealing in detail with random transition matrices is beyond the scope of this investigation. In the experiments below, we have always used whichever gradient estimate was provided to the optimization algorithm (i.e. g(x_k, ξ_k)) to construct the transition matrix.

Proposition 1.

Assume that Assumptions 1 and 3 hold. If the momentum parameter in (22) and the decay parameter in (25) are constant, then the Kalman filter dynamics for (22)-(23) and (25)-(26) are stable, have bounded error, and are robust in the sense of Theorem 1.

Proof. Use Lemma 4 in Appendix A.1 to apply Theorems 3, 4, and 5 respectively. ∎

We see that the KGD algorithm can be easily adapted to include these more sophisticated updates, with the filtering step being adjusted according to the linear state-space model being used. In fact, the principle of pre-filtering gradients before using them in optimization is applicable to most optimization algorithms, such as AdaGrad Duchi et al. (2011) or Adam Kingma & Ba (2014).

4.2 Distributed KGD

In this section, we present a distributed version of KGD that specifically targets issues with high-dimensional matrix operations. Indeed, as pointed out in Section 3, the Kalman filter uses matrix multiplications and inversions which have a cost of O(d³) for d × d matrices Cormen et al. (2009). This makes dimensionality a very real concern, since machine learning applications may use hundreds of thousands or millions of parameters.

To combat this, we propose a “divide and conquer” variant of KGD (which applies mutatis mutandis to the variations developed above) that splits the parameter vector of dimension d into N sub-vectors of dimension d/N and runs separate synchronous optimizers on each sub-vector. This technique enables KGD to scale horizontally on a single machine or across several machines. See Algorithm 2 for a precise description.

  Split x_0 into sub-vectors x_0^(1), …, x_0^(N).
  Initialize N Kalman filters with ẑ_0^(i), P_0^(i).
  for k = 0, 1, 2, … do
     Split g(x_k, ξ_k) into g_k^(1), …, g_k^(N).
     for i = 1, …, N do
        Assemble the observation y_k^(i) = g_k^(i).
        Increment Kalman filter i to obtain ẑ_{k+1}^(i).
        Extract ĝ_{k+1}^(i) from ẑ_{k+1}^(i) and update x_{k+1}^(i).
     end for
     Combine x_{k+1}^(1), …, x_{k+1}^(N) into x_{k+1}.
  end for
Algorithm 2 Distributed Kalman Gradient Descent
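The split/combine bookkeeping in Algorithm 2 reduces to array slicing; a minimal sketch (the names and dimensions are ours):

```python
import numpy as np

# Sketch of the "divide and conquer" bookkeeping: a d-dimensional parameter
# vector is split into N sub-vectors, each handled by an independent filter
# and optimizer that only touches (d/N)-sized matrices.
def split(v, n):
    return np.split(v, n)           # requires d divisible by n

def combine(blocks):
    return np.concatenate(blocks)

d, n = 8, 4
x = np.arange(float(d))
blocks = split(x, n)
assert all(b.shape == (d // n,) for b in blocks)
print(combine(blocks))              # round-trips to the original vector
```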

Assuming d is divisible by N for simplicity, we write d′ = d/N. In the ordinary case we have cost O(d³), and in the distributed case we have cost N · O((d′)³) = O(d³/N²). This is a speedup of N², where N is typically large.

This speedup is balanced by two facts: first, in practice the quality of the gradient filter is decreased by the sub-vector approximation; and second, there are constant-factor overheads from operating many independent filtering optimizers. Hence, a good strategy is to find the largest sub-vector dimension which produces acceptable runtime for the matrix operations, and then apply Algorithm 2.
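The cost comparison can be checked numerically; the dimensions below are illustrative choices of ours, not figures from the experiments:

```python
# Per-step matrix-operation cost model for the distributed variant: one
# d-dimensional filter costs on the order of d**3, while N independent
# (d/N)-dimensional filters cost N * (d/N)**3 = d**3 / N**2 in total,
# an N**2-fold reduction (constant factors ignored).
def cost_single(d):
    return d ** 3

def cost_distributed(d, n):
    return n * (d // n) ** 3

d, n = 1000, 10
print(cost_single(d) // cost_distributed(d, n))   # -> 100, i.e. n**2
```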

While considerable speedups are available from this technique when used on a single machine due to the reduced dimension (e.g. the experiment in Section 6.4), it is also clear that Distributed KGD is amenable to a synchronous implementation on multiple cores or machines. Combined with GPU-accelerated Kalman filter implementations as described in Section 3, this distributed KGD framework is a potential candidate for large-scale optimization.

5 Related Work

The modern gradient-based stochastic optimization landscape has several key developments beyond pure gradient descent. These include momentum methods Qian (1999); Nesterov (1983); Tseng (1998), AdaGrad Duchi et al. (2011), RMSProp Tieleman & Hinton (2012), and Adam Kingma & Ba (2014). In particular, exponential moving averages are used by both KGD and Adam, though we use an adaptive scaling matrix in KGD instead of a constant factor (as in Adam) to control the averaging. Recently, a general framework for adaptive methods using exponential moving averages (which includes Adam) was presented in Reddi et al. (2018); KGD fits into this framework as well.

There have been a few previous examples of Kalman filters being applied to stochastic gradient descent Bittner & Pronzato (2004); Patel (2016); Akyildiz et al. (2018). In Bittner & Pronzato (2004), the authors instead develop dynamics for the gradient and an approximation of the Hessian, omitting the state variable. In Patel (2016), the authors specialize to the case of large-scale linear regression. Stopping rules for optimization algorithms using Kalman filters are derived in these cases. In Akyildiz et al. (2018), the incremental proximal method (IPM) is linked to the Kalman filter.

There is an important connection between KGD and meta-learning and adaptive optimization Schraudolph (1999); Andrychowicz et al. (2016). In Schraudolph (1999), the authors propose a set of auxiliary dynamics for the stepsize, which is treated as a vector of individual parameters. More recently, in Andrychowicz et al. (2016), the authors propose an update of the form x_{k+1} = x_k + F(∇f(x_k)), where the function F is represented by a recurrent neural network and is learned during the optimization.

Our method can be considered a type of meta-learning similar to Schraudolph (1999), in which the meta-learning dynamics are those of the Kalman filter. Our setup also relates to Andrychowicz et al. (2016) by restricting the function F to be linear (albeit with a different loss function that promotes approximation rather than direct optimization). In this context, the KGD algorithm learns an optimizer in a recursive, closed form that is gradient-free.

Lastly, we note that variance reduction techniques in SGD are an active domain of research, see Wang et al. (2013). Techniques such as control variates Wang et al. (2013); Grathwohl et al. (2017) and domain-specific reduced-variance gradient estimators Roeder et al. (2017) could be layered on top of our techniques for enhanced performance.

6 Experiments

To study the various properties of the family of KGD algorithms proposed thus far, we conducted several experiments. The first is a simple optimization problem to study in detail the behaviour of the KGD dynamics, the second is a Bayesian parameter inference problem using Black Box Variational Inference (BBVI), the third is a small-scale neural network regression problem, and the last is larger-scale distributed KGD training of an MLP MNIST classifier. Experiments 2-4 are based on some excellent examples of use-cases for Autograd Maclaurin et al. (2018).¹

¹ Software implementations of all variations of KGD, as well as these experiments, can be found in the accompanying code repository.

6.1 2D Stochastic Optimization

We tested our filtered optimization technique on a two-dimensional optimization problem. Specifically, we minimized a two-dimensional test function (27) with gradient descent (1), gradient descent with momentum (21), and RMSProp (24) using:

  1. The true gradient ∇f;

  2. A noisy gradient of the form g = ∇f + ξ with additive Gaussian noise ξ; and

  3. The Kalman-filtered noisy gradient ĝ.

The function is approximately “bowl-shaped” and has many suboptimal local minima which can trap the optimization dynamics. The results for the SGD dynamics are in Figure 1, with the momentum and RMSProp results in Figure 5 of Appendix B.

In Figure 1, we see that the noiseless dynamics did indeed get trapped in a local minimum. For the noisy gradient, the gradient noise actually had a “hill-climbing” effect, but ultimately this too was trapped by a local minimum. We see that the filtered dynamics were able to avoid the “bad” local minima of the first two versions and overall performed much better in terms of minimization and stability.

Figure 1: Optimization of the function in (27) via the ordinary SGD dynamics. The stepsize was held constant and the algorithm was run for a fixed number of iterations.

6.2 Parameter Fitting with Black Box Variational Inference

In this experiment, we approximated a two-dimensional target distribution with a two-dimensional Gaussian which has a diagonal covariance. Specifically, we approximated a non-Gaussian target density built from the log-normal p.d.f. by the variational family N(μ, Σ), where μ ∈ ℝ² and Σ is restricted to be diagonal.

We used Black Box Variational Inference (BBVI) Ranganath et al. (2013) to optimize a stochastic estimate of the evidence lower bound (ELBO)

    L(μ, Σ) = E_q[log p(z) − log q(z; μ, Σ)].

This objective function was maximized over (μ, Σ), with the gradients coming from backpropagation via the reparameterization trick Kingma et al. (2015). The gradient is necessarily stochastic, since it is a Monte Carlo approximation of an expectation. In our experiment, we used the extreme case of a single sample from q (i.e. one Monte Carlo draw) to approximate the expectation.
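The single-sample estimator regime can be sketched for a toy expectation. The integrand h and the parameter values below are our own illustrative choices, not those of the experiment:

```python
import numpy as np

# Sketch of a single-sample reparameterization-trick gradient for a scalar
# Gaussian q(z; mu, sigma): z = mu + sigma * eps with eps ~ N(0, 1). For the
# toy integrand h(z) = z**2, E_q[h] = mu**2 + sigma**2, so the true gradient
# w.r.t. mu is 2 * mu; each one-sample estimate 2 * z is unbiased but noisy,
# which is exactly the regime where filtering the gradients helps.
rng = np.random.default_rng(2)
mu, sigma = 1.0, 0.5

def grad_sample():
    eps = rng.normal()
    z = mu + sigma * eps
    return 2.0 * z                  # d/dmu of h(mu + sigma * eps) for h(z) = z**2

estimates = [grad_sample() for _ in range(20000)]
print(np.mean(estimates))           # concentrates near the true value 2 * mu = 2.0
```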

In Figure 2, we see a comparison between unfiltered and filtered results for the BBVI problem above. We used the RMSProp dynamics (24) to maximize the ELBO. The results illustrate the negative effect that high gradient variance can have on optimization performance.

Of course, one could improve the variance of the unfiltered dynamics by using more samples to estimate the objective; in this case, the performance of the filtered and unfiltered algorithms is similar. However, when a large number of samples is unavailable or impractical, filtering is a good way to improve performance.

Figure 2: Using BBVI to fit a multivariate normal approximation to a non-Gaussian target. The algorithm uses the reparameterization trick Kingma et al. (2015) and the automatic differentiation Python package Autograd Maclaurin et al. (2018) to maximize the evidence lower bound (ELBO) of the variational objective. The ELBO is estimated using a single sample from the variational approximation. The algorithm is run using RMSProp.

6.3 Minibatch Neural Network Training

In this experiment, we trained a simple multi-layer perceptron (MLP) to perform a 1-d regression task, using minibatches to approximate the loss function. In particular, we used samples from a fixed target function which were corrupted by additive Gaussian noise and unequally sampled, and whose inputs were scaled and shifted. We used 80 data points and a batch size of 8, with batches randomly sampled from the whole data set. We also tested two architectures, with layer sizes (1,4,4,1) and (1,20,20,1), to study the effect of dimensionality on the performance of the algorithm. The former is a 33-dimensional problem, and the latter is 481-dimensional.

In Figure 3, we see the improvement that filtering contributes to the optimization performance. In both cases, the filtered algorithm reached a local minimum in significantly fewer iterations (approx. 7x fewer for the small network and 2x fewer for the large network). Both algorithms converged to similar log-posteriors and remained stable at their minima.

(a) (b)
Figure 3: Comparison of MLP regression performance. All examples use 1000 iterations, a batch size of 8 from a sample size of 80, and a constant stepsize. (a): Using layer sizes (1,4,4,1). (b): Using layer sizes (1,20,20,1).

6.4 MNIST Classifier

We studied the use of the Distributed KGD algorithm from Section 4 to classify MNIST digits LeCun (1998) using the full-size dataset. We used a vanilla MLP classifier with layer sizes (784, 10, 10, 10). These small sizes were chosen to fit the computational limitations of the testing environment. Even with this relatively small network, training represents a 7,980-dimensional problem, over 10x the size of the problem in Section 6.3. Hence, the Distributed KGD algorithm (Algorithm 2) was required for this task to be completed in reasonable time.

We compared the regular and filtered Distributed RMSProp in two different regimes: small-batch (batch-size 32) and large-batch (batch-size 256). This allowed us to compare the performance of these algorithms in the presence of high- and low-noise gradients respectively. See Figure 4 for the results.

We see that the Distributed KGD algorithm has equal optimization performance in the presence of low gradient variance, but significantly outperforms in the high-variance regime. In the former case, both algorithms achieve approx. 0.9 out-of-sample accuracy, which is reasonable considering the small size of the network. In the high-variance regime, the filtered optimization’s accuracy plateaus significantly higher (approx 0.75) than the unfiltered version (approx 0.65) for the same number of steps. This suggests that the high variance in the gradient estimate caused the unfiltered optimization to converge to a worse local minimum in terms of accuracy. This behaviour can also be seen in Figure 1.

(a) (b)
Figure 4: Comparison of MNIST classification performance. Both tests use a stepsize of 0.001 to train a (784, 10, 10, 10) network. The filtered version uses a fixed sub-vector size. (a): Batch-size 32 with 2 epochs (3,800 steps). (b): Batch-size 256 with 8 epochs (1,880 steps).

7 Conclusions

In this work, we have shown how to achieve superior performance in a variety of stochastic optimization problems through the application of Kalman filters to stochastic gradient descent. We provided a theoretical justification for the properties of these new stochastic optimization algorithms, and proposed methods for dealing with high-dimensional problems. We also demonstrated this algorithm’s superior per-iteration efficiency on a variety of optimization and machine learning tasks.


Acknowledgements

We would like to thank Serdar Yüksel and Graeme Baker for their time and insightful feedback during the development of this paper.


References

  • Akyildiz, Ö.D., Elvira, V. & Míguez, J. (2018). The incremental proximal method: A probabilistic perspective. 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4279–4283.
  • Andrychowicz, M., Denil, M., Gomez, S., Hoffman, M.W., Pfau, D., Schaul, T. & De Freitas, N. (2016). Learning to learn by gradient descent by gradient descent. Advances in Neural Information Processing Systems, pp. 3981–3989.
  • Bertsekas, D.P. & Tsitsiklis, J.N. (2000). Gradient convergence in gradient methods with errors. SIAM Journal on Optimization, 10(3), 627–642.
  • Bittner, B. & Pronzato, L. (2004). Kalman filtering in stochastic gradient algorithms: construction of a stopping rule. 2004 IEEE International Conference on Acoustics, Speech, and Signal Processing, pp. ii–709.
  • Bottou, L., Curtis, F.E. & Nocedal, J. (2018). Optimization methods for large-scale machine learning. SIAM Review, 60(2), 223–311.
  • Cormen, T.H., Leiserson, C.E., Rivest, R.L. & Stein, C. (2009). Introduction to Algorithms (3rd ed.). The MIT Press.
  • Driver, B.K. (2004). Analysis tools with examples.
  • Duchi, J., Hazan, E. & Singer, Y. (2011). Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul), 2121–2159.
  • Grathwohl, W., Choi, D., Wu, Y., Roeder, G. & Duvenaud, D. (2017). Backpropagation through the void: Optimizing control variates for black-box gradient estimation. arXiv preprint arXiv:1711.00123.
  • Huang, M.Y., Wei, S.C., Huang, B. & Chang, Y.L. (2011). Accelerating the Kalman filter on a GPU. 2011 IEEE 17th International Conference on Parallel and Distributed Systems, pp. 1016–1020.
  • Jazwinski, A. (2007). Stochastic Processes and Filtering Theory. Dover Publications, Incorporated.
  • Kalman, R.E. (1960). A new approach to linear filtering and prediction problems. Transactions of the ASME–Journal of Basic Engineering, 82(Series D), 35–45.
  • Kingma, D.P. & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  • Kingma, D.P., Salimans, T. & Welling, M. (2015). Variational dropout and the local reparameterization trick. Advances in Neural Information Processing Systems, pp. 2575–2583.
  • Lax, P. (2014). Linear Algebra and Its Applications. Wiley.
  • LeCun, Y. (1998). The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/.
  • Maclaurin, D., Duvenaud, D., Johnson, M. & Townsend, J. (2018). Autograd.
  • Mandt, S. & Blei, D. (2014). Smoothed gradients for stochastic variational inference. Advances in Neural Information Processing Systems 27, pp. 2438–2446. Curran Associates, Inc.
  • Nesterov, Y.E. (1983). A method for solving the convex programming problem with convergence rate O(1/k²). Dokl. Akad. Nauk SSSR, 269, 543–547.
  • Patel, V. (2016). Kalman-based stochastic gradient method with stop condition and insensitivity to conditioning. SIAM Journal on Optimization, 26(4), 2620–2648.
  • Qian, N. (1999). On the momentum term in gradient descent learning algorithms. Neural Networks, 12(1), 145–151.
  • Ranganath, R., Gerrish, S. & Blei, D.M. (2013). Black box variational inference. arXiv preprint arXiv:1401.0118.
  • Reddi, S.J., Kale, S. & Kumar, S. (2018). On the convergence of Adam and beyond.
  • Robbins, H. & Monro, S. (1985). A stochastic approximation method. Herbert Robbins Selected Papers, pp. 102–109. Springer.
  • Roeder, G., Wu, Y. & Duvenaud, D.K. (2017). Sticking the landing: Simple, lower-variance gradient estimators for variational inference. Advances in Neural Information Processing Systems, pp. 6925–6934.
  • Särkkä, S. (2013). Bayesian Filtering and Smoothing (Vol. 3). Cambridge University Press.
  • Schraudolph, N.N. (1999). Local gain adaptation in stochastic gradient descent.
  • Tieleman, T. & Hinton, G. (2012). Lecture 6.5—RMSProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning.
  • Tseng, P. (1998). An incremental gradient(-projection) method with momentum term and adaptive stepsize rule. SIAM Journal on Optimization, 8(2), 506–531.
  • van Handel, R. (2010). Nonlinear filtering and systems theory. Proceedings of the 19th International Symposium on Mathematical Theory of Networks and Systems (MTNS, semi-plenary paper).
  • Wang, C., Chen, X., Smola, A.J. & Xing, E.P. (2013). Variance reduction for stochastic gradient optimization. Advances in Neural Information Processing Systems, pp. 181–189.

Appendix A Proofs

A.1 Filtering Proofs

The Kalman filtering dynamics form a discrete-time LTV dynamical system, hence to study this system we will use the language and tools of LTV systems theory. In particular, the results below establish the asymptotic properties of the Kalman filter for a general discrete-time LTV dynamical system

x_{k+1} = A_k x_k + w_k,    y_k = H_k x_k + v_k,    (28)

where w_k ∼ N(0, Q_k) and v_k ∼ N(0, R_k), with Q_k and R_k positive definite matrices.

Definition 2.

For (28), the observability matrix is

I(k, k−N) = Σ_{i=k−N}^{k} Φ(i, k−N)^T H_i^T R_i^{−1} H_i Φ(i, k−N),

and the controllability matrix is

C(k, k−N) = Σ_{i=k−N}^{k−1} Φ(k, i+1) Q_i Φ(k, i+1)^T,

where Φ(i, j) is the state transition matrix of (28), i.e. the solution to the homogeneous state equation, Φ(i, j) = A_{i−1} ⋯ A_j with Φ(j, j) = I.
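As a concrete illustration of these Gramians, the following sketch computes them for a toy discrete-time LTV system; the matrices A_k, H_k, Q_k, R_k, the dimensions, and the window are arbitrary stand-ins, not the KGD dynamics.

```python
import numpy as np

def phi(A_list, i, j):
    """Forward state transition matrix: Phi(i, j) = A_{i-1} @ ... @ A_j, Phi(j, j) = I."""
    out = np.eye(A_list[0].shape[0])
    for k in range(j, i):
        out = A_list[k] @ out
    return out

def observability_gramian(A_list, H_list, R_list, j, k):
    """Sum_{i=j}^{k} Phi(i,j)^T H_i^T R_i^{-1} H_i Phi(i,j)."""
    n = A_list[0].shape[0]
    G = np.zeros((n, n))
    for i in range(j, k + 1):
        P = phi(A_list, i, j)
        G += P.T @ H_list[i].T @ np.linalg.inv(R_list[i]) @ H_list[i] @ P
    return G

def controllability_gramian(A_list, Q_list, j, k):
    """Sum_{i=j}^{k-1} Phi(k, i+1) Q_i Phi(k, i+1)^T."""
    n = A_list[0].shape[0]
    G = np.zeros((n, n))
    for i in range(j, k):
        P = phi(A_list, k, i + 1)
        G += P @ Q_list[i] @ P.T
    return G

# Toy LTV system: near-identity dynamics, full observation.
rng = np.random.default_rng(0)
n, T = 3, 10
A = [np.eye(n) + 0.01 * rng.standard_normal((n, n)) for _ in range(T)]
H = [np.eye(n) for _ in range(T + 1)]
Q = [0.1 * np.eye(n) for _ in range(T)]
R = [0.5 * np.eye(n) for _ in range(T + 1)]

O = observability_gramian(A, H, R, 0, T)
C = controllability_gramian(A, Q, 0, T)
print(np.linalg.eigvalsh(O).min() > 0, np.linalg.eigvalsh(C).min() > 0)
```

With full-rank observation at every step, both Gramians are sums of positive-definite terms, so both minimum eigenvalues are positive, which is the kind of uniform lower bound Definition 3 asks for.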

Definition 3.

The Kalman filtered dynamics of (28) are uniformly completely observable if there exist N ≥ 0 and α, β > 0 s.t.

α I ⪯ I(k, k−N) ⪯ β I for all k ≥ N,

and uniformly completely controllable if there exist N ≥ 0 and α, β > 0 s.t.

α I ⪯ C(k, k−N) ⪯ β I for all k ≥ N.

Note that stability of the system’s internal dynamics does not necessarily imply stability of the filter van Handel (2010). The theorems below, which are taken from Jazwinski (2007), will be used to establish Theorem 1 in terms of uniform complete observability and controllability.

Theorem 3 (Kalman Filter Stability).

If (28) is uniformly completely observable and uniformly completely controllable, then the discrete-time Kalman filter is uniformly asymptotically stable.


See Jazwinski (2007) Theorem 7.4 p.240. ∎

Theorem 4 (Kalman Filter Error Boundedness).

If (28) is uniformly completely observable and uniformly completely controllable, and if the initial covariance satisfies P_0 ≻ 0, then the error covariance of the discrete-time Kalman filter is uniformly bounded: there exist α, β > 0 s.t. α I ⪯ P_k ⪯ β I for all k ≥ N.
See Jazwinski (2007) Lemmas 7.1 & 7.2 p.234. ∎

Theorem 5 (Kalman Filter Robustness).

Suppose (28) is uniformly completely observable and uniformly completely controllable, and let be two solutions to the Kalman filter dynamics with initial conditions resp. Then s.t.

as .


See Jazwinski (2007) Theorem 7.5 p.242. ∎
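As a numerical illustration of this robustness property, the sketch below iterates the Kalman filter covariance recursion from two very different initializations on a toy system (all matrices are arbitrary stand-ins) and observes that the two covariance sequences merge:

```python
import numpy as np

def riccati_step(P, A, H, Q, R):
    """One predict/update cycle of the Kalman filter error covariance."""
    Pm = A @ P @ A.T + Q                      # predicted covariance
    S = H @ Pm @ H.T + R                      # innovation covariance
    K = Pm @ H.T @ np.linalg.inv(S)           # Kalman gain
    return (np.eye(P.shape[0]) - K @ H) @ Pm  # updated covariance

n = 2
A = np.array([[1.0, 0.1], [0.0, 0.9]])
H = np.eye(n)
Q, R = 0.1 * np.eye(n), 0.5 * np.eye(n)

# Two filters that differ only in their initial covariance.
P1, P2 = 1e-3 * np.eye(n), 100.0 * np.eye(n)
for _ in range(50):
    P1, P2 = riccati_step(P1, A, H, Q, R), riccati_step(P2, A, H, Q, R)

print(np.linalg.norm(P1 - P2))
```

The printed difference is negligibly small: the two solutions forget their initializations exponentially fast, which is exactly the content of Theorem 5. For an unobservable or uncontrollable system, the difference need not decay.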

With the above conditions for the stability of the Kalman filter in hand, we will show that Assumption 1 implies uniform complete observability and controllability for (6), and that this implies Theorem 1.

Lemma 1.

Let M(t) be an n × n, real, positive-definite matrix-valued function. Suppose that det M(t) ≥ c > 0 for every t, and suppose that there exists β > 0 such that λ_i(t) ≤ β for each i and every t, where λ_1(t), …, λ_n(t) are the eigenvalues of M(t). Then λ_i(t) ≥ c β^{1−n} > 0 for each i and every t.

Assume WLOG that λ_1(t) ≤ ⋯ ≤ λ_n(t). Note that each λ_i(t) > 0 and also that det M(t) = ∏_i λ_i(t) ≥ c. Then consider making λ_1(t) the smallest possible for any choice of the other eigenvalues under the constraint that λ_i(t) ≤ β. This obviously occurs when λ_2(t) = ⋯ = λ_n(t) = β. In this case,

λ_1(t) = det M(t) / β^{n−1} ≥ c β^{1−n}.

Hence λ_i(t) ≥ c β^{1−n} > 0. ∎
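The quantitative content of Lemma 1 — a determinant lower bound c plus a spectral upper bound β force the smallest eigenvalue to be at least c·β^{1−n} — can be sanity-checked numerically on random positive-definite matrices (the sizes and scales below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
for _ in range(100):
    X = rng.standard_normal((n, n))
    M = X @ X.T + 0.1 * np.eye(n)       # random symmetric positive-definite matrix
    lam = np.linalg.eigvalsh(M)         # eigenvalues in ascending order
    c = np.linalg.det(M)                # det M = product of the eigenvalues
    beta = lam[-1]                      # a valid upper bound on the spectrum
    # Lemma 1: lambda_min >= c / beta^(n-1), up to floating-point slack
    assert lam[0] >= c / beta ** (n - 1) - 1e-9
print("Lemma 1 bound holds on all samples")
```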

Lemma 2.
  1. For an invertible matrix X and symmetric matrices A ⪯ B, we have X^T A X ⪯ X^T B X.

  2. Suppose that A, B are two positive definite matrices s.t. A ⪯ B. Then we have B^{−1} ⪯ A^{−1}.

  3. For a square, invertible matrix X, (X^T A X)^{−1} = X^{−1} A^{−1} (X^{−1})^T.

  1. Lax (2014) Ch 10.1 p.146.

  2. Lax (2014) Theorem 2, Ch 10.1 p.146.

  3. Easy to directly verify. ∎
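Two of these matrix-order facts — congruence preserves the positive-semidefinite order, and inversion reverses it — are easy to verify numerically (all matrices below are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
X, Y, T = (rng.standard_normal((n, n)) for _ in range(3))
A = X @ X.T + np.eye(n)   # positive definite
B = A + Y @ Y.T           # A <= B in the positive-semidefinite order

# Inversion reverses the order: A <= B implies B^{-1} <= A^{-1}.
d1 = np.linalg.eigvalsh(np.linalg.inv(A) - np.linalg.inv(B)).min()

# Congruence preserves the order: A <= B implies T^T A T <= T^T B T.
d2 = np.linalg.eigvalsh(T.T @ B @ T - T.T @ A @ T).min()

print(d1 >= -1e-9, d2 >= -1e-9)  # both differences are (numerically) PSD
```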

Lemma 3 (KGD Observability and Controllability).

Under Assumption 1, the KGD dynamics (6) are uniformly completely observable and controllable.


This proof is organized into three steps: first, we prove uniform complete controllability; then we extend the machinery of the complete controllability argument; and finally we apply this extended machinery to prove uniform complete observability.

  1. We will take . Then , in the definition above, hence we have

    It suffices to prove the case when , and to prove the uniform boundedness of the second term. We will first show that has an upper bound which does not depend on for an arbitrary . We have that

    hence writing and using , we have that

    where by assumption. To show the existence of a uniform lower bound, we will appeal to Lemma 1 to show that the spectrum of is uniformly lower-bounded, and this implies the required matrix inequality. Indeed, so that


    and since we have just shown a uniform upper matrix bound (which implies a uniformly upper-bounded spectrum), Lemma 1 implies there is a uniform lower matrix bound which is .

  2. Using the multiplicative property from Lemma 2(a), we can iterate the result for the controllability matrix by conjugating with to see that is also uniformly bounded above for a fixed . The fact that we are conjugating by matrices whose determinants are all 1 allows us to use Lemma 1 again to conclude the existence of a uniform lower bound on . Now,


    by Lemma 2(c). Then Lemma 2(b) implies the existence of uniform bounds on .

  3. Now, for the observability matrix, consider for a fixed

    We will also assume that . Thus, by ensuring that at least once every N timesteps, one of the matrices in the sum is of the form (30), and hence the sum is uniformly bounded above and below. The other terms in the sum are not definite, since is rank-deficient at those times, but they are positive semi-definite and uniformly bounded; hence the system is uniformly completely observable.

Note that since N is arbitrary, we can make it very large. In practice, N can be larger than the total number of timesteps of a run of the algorithm. This is why we have omitted this technical detail from the setup in Section 2.

Lemma 4 (Extended KGD Observability and Controllability).

Under Assumptions 1 and 2, if the coefficients in the updates (22) and (25) are constant, then the dynamics for (22)–(23) and (25)–(26) are uniformly completely controllable and observable.


We need to prove a version of Lemma 3 for these dynamics. That lemma used the facts that the transition matrices are uniformly bounded in norm and that their determinants are constant; the latter follows immediately by inspection, since the transition matrix is upper triangular. For the uniform upper bound, consider first momentum: for we have