In signal processing and machine learning, it is often of interest to solve unconstrained optimization problems of the form
$$\theta^\star = \arg\min_{\theta \in \mathbb{R}^d} f(\theta) = \arg\min_{\theta \in \mathbb{R}^d} \sum_{i=1}^{n} f_i(\theta), \quad (1)$$
where $f$ is the cost function, built up from the addition of a large number, $n$, of terms given by the functions $f_i$. The parameter to be optimized, $\theta$, is a real vector of dimension $d$.
The setting of Eq. (1) typically arises when one has a large number of independent observations. When $n$ is large, it is not possible to use classical first- and second-order optimization algorithms, either because gradients are too expensive to compute or Hessian matrices are impossible to store. Stochastic optimization methods have emerged as a powerful solution to this problem. Among many methods, stochastic gradient descent (SGD), proposed in [1], has gained significant popularity due to its simplicity and superior performance. SGD uses a randomly chosen subset of the data to obtain a noisy but unbiased estimate of the true gradient. The well-known difficulty with this procedure is that one has to tune its step-size carefully to prevent divergence, and a significant amount of work has been devoted to this topic; e.g., see [2, 3, 4].
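To make the procedure concrete, the following sketch (our own illustration, not from the paper; the synthetic problem, step-size, and iteration count are arbitrary choices) runs SGD on a least-squares instance of Eq. (1):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares instance of Eq. (1): f(theta) = sum_i f_i(theta),
# with f_i(theta) = 0.5 * (y_i - x_i^T theta)^2.
n, d = 1000, 3
X = rng.normal(size=(n, d))
theta_true = np.array([1.0, -2.0, 0.5])
y = X @ theta_true + 0.1 * rng.normal(size=n)

def sgd(X, y, step=0.01, n_iters=5000):
    theta = np.zeros(X.shape[1])
    for _ in range(n_iters):
        i = rng.integers(len(y))                 # uniformly sampled component
        grad_i = -(y[i] - X[i] @ theta) * X[i]   # noisy but unbiased gradient
        theta = theta - step * grad_i
    return theta

theta_hat = sgd(X, y)
```

With a constant step-size the iterates hover in a noise ball around the minimizer; a decreasing schedule, as in [1], would be needed for exact convergence.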
A popular alternative to SGD algorithms is the class of incremental proximal methods (IPMs) [5]. These methods utilize proximal operators in an online fashion, i.e., they minimize a single component (or a mini-batch of components) of the cost function in Eq. (1) by using a regularizer that depends on the value taken at the previous iteration. Although proximal operators are straightforward to obtain analytically in the linear case, they are not easy to obtain for nonlinear problems. In these cases, every proximal step requires an iterative numerical solver, which makes the IPM computationally disadvantageous compared to SGD.
In this paper, we develop a probabilistic interpretation of the IPM for large-scale problems. This approach enables us to obtain purely recursive IPM-type optimizers in the form of approximate filtering algorithms. To attain this goal, we first highlight the relationship between the Kalman filter and the IPM and show that these two algorithms essentially result in very similar update rules for the linear case. Then we discuss how this idea can be extended to nonlinear optimization problems.
Our work is related to an emerging class of algorithms called probabilistic numerical methods [6]. These techniques aim at developing probabilistic models of numerical algorithms, which lead to procedures that explicitly tackle the uncertainties inherent to many numerical problems. Also, there is a body of related work on the use of stochastic filters for nonlinear optimization. In [7] and [8], the extended Kalman filter (EKF) is viewed as an incremental Gauss-Newton method. In [9], the author shows that the optimal filter for a linear-Gaussian model can be seen as a stochastic approximation [1] method. More recently, in [10], quasi-Newton algorithms are derived as autoregressive filters. In [11], the authors derive the filter for linear regression by obtaining a scalar step-size from the posterior covariance. In [12], the author provides a Kalman-based SGD method which is similar to the algorithm in our linear filtering derivation.
2 Proximal Operators as Bayes Updates
Proximal algorithms have become popular in the signal processing, machine learning, and optimization literature; see [13] for a review from a signal processing perspective and [14] for a thorough review from an optimization perspective. These algorithms utilize proximal operators to move towards the minimum of a cost function. A proximal step can be seen as an implicit gradient step [14]. Although they are very general, implementing proximal operators is not straightforward, and this is seen as the main limitation of these methods.
Consider the proximal operator of a function $f$, defined as
$$\mathrm{prox}_{\lambda f}(\theta') = \arg\min_{\theta} \left[ f(\theta) + \frac{1}{2\lambda} d(\theta, \theta') \right],$$
where $d$ is a proper distance and $\lambda > 0$ is a regularization parameter. This definition leads to the interpretation of proximal methods as majorization-minimization schemes when $\theta'$ is replaced by the current estimate $\theta_{k-1}$. Unfortunately, proximal operators are not realizable for general $f$ and $d$. Now, if one considers a probabilistic model¹
$$p(\theta) \propto \exp\left( -\frac{1}{2\lambda} d(\theta, \theta') \right), \qquad p(y \mid \theta) \propto \exp\left( -f(\theta) \right),$$
where $y$ is the observation implicit in the cost function $f$, we can recover the proximal operator as a maximum-a-posteriori (MAP) estimate, i.e.,
$$\mathrm{prox}_{\lambda f}(\theta') = \arg\max_{\theta} p(\theta \mid y).$$
¹Throughout the paper, $p$ denotes a probability density function (pdf) and the notation is argument-wise; e.g., $p(\theta)$ denotes the pdf of $\theta$ and $p(y)$ denotes the pdf of another random variable $y$. The density $p(y \mid \theta)$ is the conditional pdf of $y$ given $\theta$.
Although it seems a straightforward observation, this fact has important implications. Specifically, it means that the family of Bayesian numerical methods can be used to implement proximal operators. Moreover, instead of aiming at the MAP estimate, one can estimate the full posterior pdf $p(\theta \mid y)$, which can be used to quantify the uncertainty of the estimate provided by the optimizer, whereas the proximal operator has no notion of uncertainty over the solution it provides.
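This equivalence is easy to check numerically. The following one-dimensional sketch (our own toy example, with $f(\theta) = \frac{1}{2}(y - \theta)^2$ and $d(\theta, \theta') = (\theta - \theta')^2$) evaluates both the proximal objective and the log-posterior on a grid:

```python
import numpy as np

# Toy setup: f(theta) = 0.5*(y - theta)^2, d(theta, t0) = (theta - t0)^2.
y, t0, lam = 2.0, 0.0, 0.5
grid = np.linspace(-5.0, 5.0, 100001)

# Proximal operator: minimize f(theta) + (1/(2*lam)) * d(theta, t0).
cost = 0.5 * (y - grid) ** 2 + (grid - t0) ** 2 / (2.0 * lam)
prox = grid[np.argmin(cost)]

# MAP estimate: maximize log p(theta|y) = -f(theta) - d(theta, t0)/(2*lam) + const.
log_post = -0.5 * (y - grid) ** 2 - (grid - t0) ** 2 / (2.0 * lam)
map_est = grid[np.argmax(log_post)]
```

Both searches pick the same grid point, which matches the closed-form minimizer $\lambda y / (\lambda + 1) = 2/3$ for these values.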
Next, we obtain the proximal operator for the linear regression case explicitly. The following derivation relies on the well-known Bayesian interpretation of the regularizer as a Gaussian prior; see, e.g., [15]. Nevertheless, we find it useful to state it in the proximal setting.
Consider the proximal operator for the function $f(\theta) = \frac{1}{2} \|y - X\theta\|^2$ with the proximal term $d(\theta, \theta_{k-1}) = (\theta - \theta_{k-1})^\top V^{-1} (\theta - \theta_{k-1})$,
$$\theta_k = \arg\min_{\theta} \left[ \frac{1}{2} \|y - X\theta\|^2 + \frac{1}{2\lambda} (\theta - \theta_{k-1})^\top V^{-1} (\theta - \theta_{k-1}) \right],$$
where $y \in \mathbb{R}^m$ and $X \in \mathbb{R}^{m \times d}$, and $V$ is a symmetric positive-definite matrix. Then, $\theta_k$ is given by
$$\theta_k = \theta_{k-1} + \lambda V X^\top \left( \lambda X V X^\top + I \right)^{-1} \left( y - X\theta_{k-1} \right). \quad (2)$$
Next, we consider the probability model
$$p(\theta) = \mathcal{N}(\theta; \theta_{k-1}, \lambda V), \quad (3)$$
$$p(y \mid \theta) = \mathcal{N}(y; X\theta, I), \quad (4)$$
where $\mathcal{N}(x; \mu, \Sigma)$ denotes the Gaussian pdf of the vector $x$ with mean $\mu$ and covariance matrix $\Sigma$. Given this model, the posterior pdf can be written as
$$p(\theta \mid y) = \mathcal{N}(\theta; \theta_k, \Sigma_k), \quad (5)$$
with
$$\theta_k = \theta_{k-1} + \lambda V X^\top \left( \lambda X V X^\top + I \right)^{-1} \left( y - X\theta_{k-1} \right), \quad (6)$$
$$\Sigma_k = \lambda V - \lambda V X^\top \left( \lambda X V X^\top + I \right)^{-1} X \lambda V. \quad (7)$$
One can see that the mean in Eq. (6) exactly coincides with the update of Eq. (2). As a byproduct of the Bayesian update, we get the covariance matrix in Eq. (7) as a measure of the uncertainty of our estimate.
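The agreement can also be verified numerically in the multivariate case. The sketch below (our own check, with arbitrary problem sizes) solves the proximal problem via its normal equations and compares the result against the Gaussian posterior mean of Eq. (6):

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, lam = 5, 3, 0.7
X = rng.normal(size=(m, d))
y = rng.normal(size=m)
theta_prev = rng.normal(size=d)
A = rng.normal(size=(d, d))
V = A @ A.T + np.eye(d)            # symmetric positive-definite proximal matrix

# Proximal operator of f(theta) = 0.5*||y - X theta||^2 via the normal equations.
Vinv = np.linalg.inv(V)
theta_prox = np.linalg.solve(X.T @ X + Vinv / lam,
                             X.T @ y + Vinv @ theta_prev / lam)

# Posterior mean under prior N(theta_prev, lam*V) and likelihood N(y; X theta, I).
P = lam * V
S = X @ P @ X.T + np.eye(m)        # innovation covariance
theta_post = theta_prev + P @ X.T @ np.linalg.solve(S, y - X @ theta_prev)
```

The two vectors coincide up to numerical precision, which is exactly the content of the lemma.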
In the next section, we apply this interpretation to the family of incremental proximal methods in order to get an online and probabilistic optimizer. Perhaps not surprisingly, this algorithm is related to the Kalman filter for the linear case. Then we discuss its extension to nonlinear optimization problems.
3 Incremental Proximal Methods
IPMs are a class of algorithms that aim at solving problems of the form of Eq. (1) by using only a single component of the cost at each iteration $k$. Given the estimate of the minimum $\theta_{k-1}$, the IPM generates the next estimate by solving the following problem:
$$\theta_k = \arg\min_{\theta} \left[ f_k(\theta) + \frac{1}{2\lambda} (\theta - \theta_{k-1})^\top V^{-1} (\theta - \theta_{k-1}) \right], \quad (8)$$
which uses only the most recent “observation”. Let us abuse the notation a bit and let $f_k$ stand for $f_{i_k}$, where $i_k$ is sampled uniformly at random from the index set $\{1, \ldots, n\}$ (the same as for SGD algorithms). The matrix $V$ is usually selected as the identity matrix, but other choices, depending on the geometry of the problem, are possible. When the problem in Eq. (8) is solvable, it is argued that it leads to more stable algorithms than the SGD (see [5] for a discussion on convergence).
3.1 The IPM as a Kalman filter for linear regression
Given an observation vector $y \in \mathbb{R}^n$ and a feature matrix $X \in \mathbb{R}^{d \times n}$, the linear regression problem consists in fitting a vector $\theta \in \mathbb{R}^d$ which roughly satisfies $y \approx X^\top \theta$. Formally, the problem can be framed as the minimization of $f(\theta) = \frac{1}{2} \|y - X^\top \theta\|^2$. When $n$ is large, solving this problem analytically becomes unfeasible. Hence, stochastic optimization methods are often applied. We note that, in this setting, $f(\theta) = \sum_{k=1}^{n} f_k(\theta)$ and each component can be written as $f_k(\theta) = \frac{1}{2} (y_k - x_k^\top \theta)^2$, where $y_k$ is a single observation and $x_k$ is a column of the feature matrix $X$. Then, as shown in Section 2, the incremental proximal iteration can be written as
$$\theta_k = \theta_{k-1} + \frac{\lambda V x_k \left( y_k - x_k^\top \theta_{k-1} \right)}{1 + \lambda x_k^\top V x_k}. \quad (9)$$
For a proper Bayesian interpretation, we would like to obtain Eq. (9) as a recursive posterior-mean update in a (Gaussian) probabilistic model. In this case, the probabilistic updates yield a similar, but different, algorithm. Notice that, in the probabilistic interpretation of Section 2, $\Sigma_k$ in Eq. (7) denotes the posterior covariance. However, in Eq. (9), the matrix $V$ is kept constant through the iterations, as there is no way to update it. In the online optimization literature, some algorithms have been proposed to update this matrix, which amounts to updating the proximal term, such as AdaGrad [2]. As we show below, in the probabilistic approach, the matrix $V$ is naturally updated as the posterior covariance matrix.
Let us consider the model
$$p(\theta) = \mathcal{N}(\theta; \theta_0, V_0), \qquad p(y_k \mid \theta) = \mathcal{N}(y_k; x_k^\top \theta, \lambda^{-1}), \quad (10)$$
for which the recursive posterior-mean and posterior-covariance updates are
$$\theta_k = \theta_{k-1} + \frac{V_{k-1} x_k \left( y_k - x_k^\top \theta_{k-1} \right)}{\lambda^{-1} + x_k^\top V_{k-1} x_k}, \qquad V_k = V_{k-1} - \frac{V_{k-1} x_k x_k^\top V_{k-1}}{\lambda^{-1} + x_k^\top V_{k-1} x_k}.$$
The relationship between Eqs. (9) and (10) is evident: the posterior-mean update has exactly the form of Eq. (9) with $V_{k-1}$ in place of the fixed matrix $V$. This means that we obtain a way to update the proximal term in Eq. (8) in a principled way and with an intuitive meaning ($V_k$ quantifies the uncertainty about the solution $\theta_k$).
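The following sketch (our own synthetic illustration; data sizes, $\lambda$, and the noise level are arbitrary choices) runs the fixed-$V$ iteration of Eq. (9) next to the Kalman-type recursion that updates the covariance:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic streaming linear regression: y_k = x_k^T theta_true + noise.
n, d = 500, 2
X = rng.normal(size=(n, d))
theta_true = np.array([0.7, -1.3])
y = X @ theta_true + 0.05 * rng.normal(size=n)

lam = 1.0

# Incremental proximal iteration, Eq. (9): the matrix V stays fixed.
theta_ipm, V = np.zeros(d), np.eye(d)
for xk, yk in zip(X, y):
    theta_ipm = theta_ipm + lam * (V @ xk) * (yk - xk @ theta_ipm) \
        / (1.0 + lam * xk @ V @ xk)

# Kalman recursion: V_k is updated as the posterior covariance.
theta_kf, Vk = np.zeros(d), np.eye(d)
for xk, yk in zip(X, y):
    s = 1.0 / lam + xk @ Vk @ xk          # innovation variance
    gain = Vk @ xk / s
    theta_kf = theta_kf + gain * (yk - xk @ theta_kf)
    Vk = Vk - np.outer(gain, xk @ Vk)
```

Both iterations converge here, but only the Kalman recursion shrinks its matrix, which automatically dampens later updates.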
3.2 Extended Kalman filter as an IPM for nonlinear regression
In this section, we consider a nonlinear regression problem. Given observations $\{y_k\}_{k=1}^{n}$, we would like to obtain $\theta$ such that $y_k \approx g(x_k, \theta)$, where $g$ is a nonlinear function of $\theta$. Since the $x_k$'s are fixed, we set $g_k(\theta) = g(x_k, \theta)$ for notational conciseness. Note that
$$f_k(\theta) = \frac{1}{2} \left( y_k - g_k(\theta) \right)^2. \quad (11)$$
Then, we would like to solve the problem
$$\min_{\theta} \sum_{k=1}^{n} \frac{1}{2} \left( y_k - g_k(\theta) \right)^2. \quad (12)$$
The incremental proximal step for this problem is
$$\theta_k = \arg\min_{\theta} \left[ \frac{1}{2} \left( y_k - g_k(\theta) \right)^2 + \frac{1}{2\lambda} (\theta - \theta_{k-1})^\top V^{-1} (\theta - \theta_{k-1}) \right] \quad (13)$$
for each iteration $k$. Because this proximal step is intractable in general, the typical choice for problems like Eq. (12) is the SGD. In what follows, we propose the use of EKF recursions as one-step approximations of this proximal operator.
To this end, let us consider the probabilistic model
$$p(\theta) = \mathcal{N}(\theta; \theta_0, V_0), \qquad p(y_k \mid \theta) = \mathcal{N}(y_k; g_k(\theta), \lambda^{-1}). \quad (14)$$
Since the model is nonlinear, using the EKF is a natural way to solve the regression problem. Let us denote $h_k = \nabla g_k(\theta_{k-1})$. Then, the EKF recursions can be written as
$$\theta_k = \theta_{k-1} + \frac{V_{k-1} h_k \left( y_k - g_k(\theta_{k-1}) \right)}{\lambda^{-1} + h_k^\top V_{k-1} h_k}, \qquad V_k = V_{k-1} - \frac{V_{k-1} h_k h_k^\top V_{k-1}}{\lambda^{-1} + h_k^\top V_{k-1} h_k}. \quad (15)$$
Note that these updates are different from the ones that would be obtained from a naïve linearization of $g_k$ followed by the derivation of the IPM. In that case, we would have the term $h_k^\top V_{k-1} h_k$ alone in the denominator, instead of $\lambda^{-1} + h_k^\top V_{k-1} h_k$, which does not ensure numerically stable updates.
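A compact realization of the recursions in Eq. (15) for a sigmoid observation model is sketched below (our own synthetic setup; the particular $g$, noise level, and sample size are illustrative assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)

# Synthetic nonlinear regression: y_k = g(x_k, theta_true) + noise.
d, n = 2, 2000
theta_true = np.array([2.0, -1.0])
X = rng.normal(size=(n, d))
R = 0.01                                   # observation noise variance (1/lambda)
y = sigmoid(X @ theta_true) + np.sqrt(R) * rng.normal(size=n)

theta, V = np.zeros(d), np.eye(d)
for xk, yk in zip(X, y):
    g = sigmoid(xk @ theta)
    h = g * (1.0 - g) * xk                 # h_k = gradient of g_k at theta_{k-1}
    s = R + h @ V @ h                      # denominator of Eq. (15)
    gain = V @ h / s
    theta = theta + gain * (yk - g)        # posterior-mean update
    V = V - np.outer(gain, h @ V)          # posterior-covariance update
```

Each pass through the loop is one EKF step, i.e., one approximate realization of the proximal step in Eq. (13); the trace of V decreases monotonically, dampening the updates.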
3.3 Nonstationary optimization
Throughout our discussion, we have kept the prior static, meaning that $\theta$ is assumed to be random but not changing with time. While this assumption is convenient when the cost function is fixed, it does not hold in many realistic settings. For this reason, the authors of [3] consider what they call nonstationary losses, where the cost function also changes with time. Tackling such a scenario is straightforward from our perspective, as one only needs to modify the algorithm slightly in order to obtain a dynamic variant. In particular, in addition to the update step, one needs to employ a prediction step, according to the assumed dynamics of the parameter. One can model the degree of nonstationarity by modifying the dynamical model over $\theta$, and filtering algorithms extend to such settings very naturally. We leave the detailed investigation of this aspect for future work.
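As a sketch of this idea (our own illustration; the random-walk dynamics and all constants are assumptions), adding a prediction step that inflates the covariance lets the same update track a drifting parameter:

```python
import numpy as np

rng = np.random.default_rng(3)

d, n = 2, 1000
q = 1e-4                                  # assumed random-walk variance per step
Q = q * np.eye(d)
R = 0.01                                  # observation noise variance
theta_true = np.array([1.0, -1.0])

theta, V = np.zeros(d), np.eye(d)
errs = []
for _ in range(n):
    theta_true = theta_true + np.sqrt(q) * rng.normal(size=d)  # drifting target
    xk = rng.normal(size=d)
    yk = xk @ theta_true + np.sqrt(R) * rng.normal()

    V = V + Q                             # prediction step: inflate uncertainty
    s = R + xk @ V @ xk                   # update step as in the static case
    gain = V @ xk / s
    theta = theta + gain * (yk - xk @ theta)
    V = V - np.outer(gain, xk @ V)
    errs.append(np.linalg.norm(theta - theta_true))
```

Because V no longer collapses to zero, the filter keeps a nonvanishing gain and follows the moving minimizer.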
4 Numerical Results
In this section, we investigate two algorithms on a simple problem of fitting a sigmoid function. The first algorithm, which we refer to as the approximate nonlinear IPM, consists of applying a standard iterative solver for each subiteration, since the nonlinear problem of Eq. (13) is not solvable in general. The second algorithm is the EKF, as explained in Section 3.2. The model used in the experiment is of the form of Eq. (14) with
$$g(x_k, \theta) = \frac{1}{1 + \exp(-x_k^\top \theta)},$$
where $x_k$ denotes the inputs and $\theta$ is the parameter vector. Recall that, by choosing such a model, we aim at solving a problem of the form given in Eq. (12). We set the value of the parameter $\lambda$ while generating the data, and we use the same value in the algorithms. Also, the initial value $\theta_0$ of the approximate IPM is set randomly, while the proximal matrix $V$ is the identity, denoted $I$. Similarly, the prior of the EKF is initialized with $p(\theta) = \mathcal{N}(\theta; \theta_0, V_0)$, where $V_0 = I$.
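The inner solver of the approximate nonlinear IPM can be sketched as follows (our own illustration: plain gradient descent on the sub-problem of Eq. (13) with a sigmoid model; the inner-loop settings are arbitrary assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def approx_prox_step(theta_prev, xk, yk, lam=1.0, inner_iters=50, inner_step=0.1):
    """Approximate the incremental proximal step of Eq. (13) by inner gradient
    descent on 0.5*(y_k - g_k(theta))^2 + (1/(2*lam))*||theta - theta_prev||^2."""
    theta = theta_prev.copy()
    for _ in range(inner_iters):
        g = sigmoid(xk @ theta)
        grad = -(yk - g) * g * (1.0 - g) * xk + (theta - theta_prev) / lam
        theta = theta - inner_step * grad
    return theta

# One step on a synthetic datum; with lam = 1 this matches sub_objective below.
rng = np.random.default_rng(0)
xk = rng.normal(size=2)
yk = 0.8
theta_prev = np.zeros(2)

def sub_objective(theta):
    return 0.5 * (yk - sigmoid(xk @ theta)) ** 2 \
        + 0.5 * np.sum((theta - theta_prev) ** 2)

theta_new = approx_prox_step(theta_prev, xk, yk)
```

Each outer iteration of the approximate IPM calls this inner loop once; the inner gradient descent can only decrease the sub-objective of Eq. (13), but it never shrinks the proximal matrix, which is the source of the instability discussed next.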
Figure 1(a) shows that the approximate IPM suffers from numerical instability as the parameter estimate gets close to the actual minimum. One reason for this instability is that, in proximal-type algorithms, there is no natural mechanism to reduce the size of the steps taken by the algorithm. A natural remedy would be to update the proximal matrix so that it dampens the updates as the number of iterations increases, in a similar way to decreasing the step-size of the SGD. The EKF employs exactly this strategy in a natural way. This fact can be seen from Fig. 1(b)-(c), where the diagonal and off-diagonal entries of the covariance matrix are plotted, respectively. It is evident that the entries of this matrix converge to zero, meaning that the update (15) eventually converges to some point in the parameter space.
5 Conclusions
In this work, we have developed a probabilistic perspective on proximal and incremental proximal methods. We have shown that a probabilistic setting can provide a systematic way to derive algorithms when this is not possible from the classical perspective. In particular, within an online setup, we have argued that the use of filtering algorithms corresponds to employing an IPM-type scheme for optimization. However, filtering algorithms have natural dampening mechanisms for the parameter updates, as they refine their uncertainty over the solution iteratively.
This line of work can be pushed forward in a number of different directions. First, different Kalman filters can be used in a similar way to obtain more advanced optimization schemes for more complicated problems. Among the candidates, the unscented Kalman filter (UKF) and the ensemble Kalman filter (EnKF) can be useful to tackle high-dimensional, possibly time-varying, optimization problems. Second, this approach can be extended beyond quadratic functions by exploiting the relationship between exponential families and Bregman divergences [19]. When the likelihood belongs to the exponential family, the cost function can in general be expressed as a Bregman divergence. In that case, since the Gaussianity assumption is violated, one needs to resort to more sophisticated numerical algorithms, such as particle filters [20] or other advanced filtering methods.
Ö. D. A. and J. M. acknowledge the support of Ministerio de Economía y Competitividad of Spain (TEC2015-69868-C2-1-R ADVENTURE), the Office of Naval Research Global (N62909-15-1-2011), and the regional government of Madrid (program CASICAM-CM S2013/ICE-2845). V. E. acknowledges support from the Agence Nationale de la Recherche of France under PISCES project (ANR-17-CE40-0031-01).
References
[1] H. Robbins and S. Monro, “A stochastic approximation method,” Annals of Mathematical Statistics, vol. 22, pp. 400–407, 1951.
[2] J. Duchi, E. Hazan, and Y. Singer, “Adaptive subgradient methods for online learning and stochastic optimization,” Journal of Machine Learning Research, vol. 12, pp. 2121–2159, 2011.
[3] T. Schaul, S. Zhang, and Y. LeCun, “No more pesky learning rates,” in International Conference on Machine Learning, 2013, pp. 343–351.
[4] M. Mahsereci and P. Hennig, “Probabilistic line searches for stochastic optimization,” in Advances in Neural Information Processing Systems, 2015, pp. 181–189.
[5] D. P. Bertsekas, “Incremental proximal methods for large scale convex optimization,” Mathematical Programming, vol. 129, no. 2, pp. 163–195, 2011.
[6] P. Hennig, M. A. Osborne, and M. Girolami, “Probabilistic numerics and uncertainty in computations,” Proceedings of the Royal Society A, vol. 471, 2015.
[7] D. P. Bertsekas, “Incremental least squares methods and the extended Kalman filter,” SIAM Journal on Optimization, vol. 6, no. 3, pp. 807–822, 1996.
[8] B. M. Bell and F. W. Cathey, “The iterated Kalman filter update as a Gauss-Newton method,” IEEE Transactions on Automatic Control, vol. 38, no. 2, pp. 294–297, 1993.
[9] Y. C. Ho, “On the stochastic approximation method and optimal filtering theory,” Journal of Mathematical Analysis and Applications, vol. 6, no. 1, pp. 152–154, 1963.
[10] P. Hennig and M. Kiefel, “Quasi-Newton methods: A new direction,” Journal of Machine Learning Research, vol. 14, no. 1, pp. 843–865, 2013.
[11] J. Fernandez-Bes, V. Elvira, and S. Van Vaerenbergh, “A probabilistic least-mean-squares filter,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2015, pp. 2199–2203.
[12] V. Patel, “Kalman-based stochastic gradient method with stop condition and insensitivity to conditioning,” SIAM Journal on Optimization, vol. 26, no. 4, pp. 2620–2648, 2016.
[13] P. L. Combettes and J.-C. Pesquet, “Proximal splitting methods in signal processing,” in Fixed-Point Algorithms for Inverse Problems in Science and Engineering, pp. 185–212, Springer, 2011.
[14] N. Parikh and S. Boyd, “Proximal algorithms,” Foundations and Trends in Optimization, vol. 1, no. 3, pp. 127–239, 2014.
[15] R. Gribonval, “Should penalized least squares regression be interpreted as maximum a posteriori estimation?,” IEEE Transactions on Signal Processing, vol. 59, no. 5, pp. 2405–2410, 2011.
[16] C. M. Bishop, Pattern Recognition and Machine Learning, Springer-Verlag, New York, NY, 2006.
[17] B. D. O. Anderson and J. B. Moore, Optimal Filtering, Prentice-Hall, Englewood Cliffs, NJ, 1979.
[18] S. Särkkä, Bayesian Filtering and Smoothing, Cambridge University Press, 2013.
[19] A. Banerjee, S. Merugu, I. S. Dhillon, and J. Ghosh, “Clustering with Bregman divergences,” Journal of Machine Learning Research, vol. 6, pp. 1705–1749, 2005.
[20] P. M. Djuric, J. H. Kotecha, J. Zhang, Y. Huang, T. Ghirmai, M. F. Bugallo, and J. Miguez, “Particle filtering,” IEEE Signal Processing Magazine, vol. 20, no. 5, pp. 19–38, 2003.