I Problem Formulation
Most online learning algorithms compute an estimate at each time instant by recursively updating the prior estimate using the data observed at that same instant. We consider in this work a general mapping (i.e., learning rule) of the form:
where the mapping carries the prior iterate to the updated iterate using the newly observed data. Throughout this manuscript, we allow the mapping to be stochastic and time-varying due to the potentially time-varying distribution of the streaming data. One popular instance of this recursion is the stochastic gradient algorithm:
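For concreteness, a minimal Python sketch of one such stochastic-gradient recursion follows; the quadratic loss, the function names, and the step-size value are illustrative assumptions made here, not taken from the text:

```python
import numpy as np

def sgd_mapping(w_prev, data, grad, mu=0.05):
    """One application of the time-varying mapping: the new iterate is obtained
    from the prior iterate using the data observed at the current instant."""
    return w_prev - mu * grad(w_prev, data)

# Illustrative loss: squared error 0.5*(y - h^T w)^2 with gradient -(y - h^T w)*h
def ls_grad(w, data):
    h, y = data
    return -(y - h @ w) * h

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])          # minimizer of the stochastic risk
w = np.zeros(2)
for _ in range(2000):
    h = rng.standard_normal(2)
    y = h @ w_true + 0.01 * rng.standard_normal()
    w = sgd_mapping(w, (h, y), ls_grad)
print(np.round(w, 2))                   # the iterate approaches w_true
```

With stationary data, repeated application of the mapping drives the iterate toward the minimizer; the non-stationary case, where the minimizer itself drifts, is the subject of the tracking analysis below.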
which can be used to estimate the minimizer of stochastic risks of the form:
where the subscript on the risk allows for the possibility of the minimizer drifting with time due to changes in the distribution of the streaming data. Of course, description (1) captures many more algorithm variations besides the stochastic gradient algorithm (2), such as proximal [18, 10], empirical, variance-reduced [15, 12], distributed [9, 2, 1, 5], and second-order constructions. We restrict ourselves in this work to the important class of mappings that satisfy the following mean-square contractive property. We illustrate later by means of examples that several popular learning mappings already satisfy this condition. [Mean-square contraction] We say that a mapping is “mean-square contractive” around a “mean-square fixed-point” if, for any iterates generated by the mapping, it holds that:
with a contraction factor strictly smaller than one. In general, the fixed-point, the rate of contraction, and the additive term will be functions of the distribution of the data, and are hence allowed to be time-varying to account for non-stationarity. ∎ We refer to this point as the “mean-square fixed-point” of the mapping, since applying the mapping at it yields, in light of (4):
and hence the fixed-point is approximately preserved by the mapping, in the mean-square sense, whenever the additive term is small.
If the mapping happens to be deterministic and the additive term vanishes, we can drop the term, as well as the expectation, and recover after taking square roots:
which corresponds to the traditional definition of a contractive mapping. As we shall show, a number of stochastic algorithms are mean-square contractive, allowing our exposition to cover them all. In the case of the stochastic gradient descent algorithm (2), the fixed-point will correspond to the minimizer of (3), in which case the two can be used interchangeably. In general, however, such as for the decentralized strategies (21)–(22) listed further ahead, we will need to make a subtle distinction.
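As an illustration of the mean-square contraction property (4), the following Python experiment estimates the one-step contraction of an LMS-type update over an ensemble of perturbed iterates; the distributions, step-size, and noise level are assumptions made for the sake of the example:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, d, trials = 0.05, 2, 20000
w_star = np.array([1.0, -2.0])                      # fixed-point of the mapping
w_prev = w_star + rng.standard_normal((trials, d))  # ensemble of prior iterates

# One LMS step per ensemble member: w = w_prev + mu * (y - h^T w_prev) * h
h = rng.standard_normal((trials, d))
y = h @ w_star + 0.1 * rng.standard_normal(trials)  # linear model with noise
err = y - np.sum(h * w_prev, axis=1)
w = w_prev + mu * err[:, None] * h

msd_prev = np.mean(np.sum((w_prev - w_star) ** 2, axis=1))
msd_new = np.mean(np.sum((w - w_star) ** 2, axis=1))
print(msd_new / msd_prev < 1.0)  # True: mean-square error contracts on average
```

The empirical ratio of the two mean-square deviations stays below one, consistent with a contraction factor strictly smaller than one plus a small additive term due to the measurement noise.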
In addition to the stochastic nature of the mapping, resulting from its dependence on the streaming data, we allow the mapping to be time-varying due to drifts in the data distribution, which result in a drift of the fixed-point over time (this explains the time subscript attached to the fixed-point). Relations similar to (4) frequently appear as intermediate results in the performance analysis of stochastic algorithms in stationary environments, although stationarity is not necessary for establishing (4). By establishing a general tracking result for mean-square contractive mappings, and subsequently appealing to prior results establishing (4), we can recover known results, and also establish some new results on the tracking performance of stochastic learners for general loss functions.
I-A Related Works
The tracking performance of adaptive filters, focusing primarily on mean-square-error designs, is fairly well established (see, e.g., [4, 16]). In the decentralized setting, though generally restricted to deterministic optimization with exact gradients, the tracking performance of primal and primal-dual algorithms has been studied in [8, 21, 19]. In the stochastic setting, the tracking performance of the diffusion strategy has been established, and a federated learning architecture has been considered as well. The purpose of this work is to establish a unified tracking analysis for the broad class of mean-square contractive mappings, which includes many algorithms as special cases and will allow us to efficiently derive new tracking results as well.
II Tracking Analysis
II-A Non-Stationary Environments
We consider a time-varying environment, where the fixed-point evolves according to some random-walk model. Such models are prevalent in the study of non-stationary effects. [Random Walk] We assume that the mean-square fixed-point of the mapping (1) evolves according to a random walk:
where the drift term is independent of the past iterates. We will allow the drift to be non-stationary, with potentially non-zero mean, and only require a global bound on its second-order moment. ∎ Note that, by allowing the drift to be non-stationary with non-zero mean, the assumption is more relaxed than what is typically assumed in the adaptive filtering literature [16, 20]. On the other hand, by only imposing a bound on the second-order moment of the drift, rather than on its norm with probability one, condition (7) is also more relaxed than in related works on deterministic dynamic optimization. Introducing the deviation of the iterate from the drifting fixed-point and using (4), we have:
where in the marked step we used Jensen’s inequality along with Assumption II-A.
If the drift happens to be zero-mean and independent of the past iterates, the inequality can be sharpened by avoiding the use of Jensen’s inequality in (II-A) and instead appealing to this independence. This results in:
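In symbols, the zero-mean argument runs as follows. The notation $w_i$ for the iterate, $w_i^\star$ for the fixed-point, $q_i$ for the drift with $\mathbb{E}\|q_i\|^2 \le \sigma_q^2$, and $\gamma^2, \sigma^2$ for the contraction parameters is introduced here purely for illustration:

```latex
% Cross terms vanish because q_i is zero-mean and independent of the past,
% using w_i^\star = w_{i-1}^\star + q_i:
\mathbb{E}\|w_{i-1} - w_i^\star\|^2
  = \mathbb{E}\|w_{i-1} - w_{i-1}^\star - q_i\|^2
  = \mathbb{E}\|w_{i-1} - w_{i-1}^\star\|^2 + \mathbb{E}\|q_i\|^2
% Applying the mean-square contraction property (4) around w_i^\star:
\mathbb{E}\|w_i - w_i^\star\|^2
  \le \gamma^2\,\mathbb{E}\|w_{i-1} - w_{i-1}^\star\|^2 + \gamma^2\sigma_q^2 + \sigma^2
```

Under the stated assumptions this reproduces the structure of the tighter zero-mean recursion, with the squared drift magnitude entering additively.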
In order to continue with the analysis, we assume the following. [Global bounds] The rate of contraction as well as the driving term are bounded from above uniformly over time. ∎ As we will see in Section III-A, Assumption II-A generalizes conditions typically imposed in the study of adaptive filters in non-stationary environments. After iterating (II-A) and (9), we arrive at the next result. [Tracking performance] Suppose the mapping is mean-square contractive according to Definition I. Then, we have:
In the case when the drift is zero-mean for all times, we have the tighter relation:
We note that in steady-state the transient terms vanish exponentially, and we are left with a drift term proportional to the drift magnitude and a second term proportional to the driving term. Furthermore, we note that the non-stationary result (11) can be obtained from the stationary result by merely adding the drift term.
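The steady-state behavior described above can be checked numerically by iterating the scalar form of the one-step bound; the values of the contraction factor and driving constant below are arbitrary illustrative choices:

```python
# Iterate x_i <= gamma2 * x_{i-1} + c, the scalar form of the one-step bound,
# where x_i stands for the mean-square error and c collects the drift and
# gradient-noise contributions per step.
gamma2, c = 0.9, 0.05
x = 10.0                      # large initial error
for _ in range(300):
    x = gamma2 * x + c
print(round(x, 6))            # settles at the geometric-series limit c / (1 - gamma2)
```

The initial error decays as a geometric sequence, while the accumulated driving terms form a geometric series converging to the steady-state level c / (1 - gamma2), mirroring the transient and steady-state terms in the theorem.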
III Application to Learning Algorithms
We now show how Theorem II-A can be used to recover the tracking performance of several well-known algorithms under the random walk model (7). We begin by re-deriving and generalizing some known tracking results to illustrate the implications of Assumption II-A and verify Theorem II-A, and then proceed to derive new tracking results for the multitask diffusion algorithm [1, 13, 14].
III-A Least-Mean-Square (LMS) Algorithm
For illustration purposes, we begin with the least-mean-square algorithm, which takes the form:
where the data arises from the linear model:
where the regressors form an independent sequence and the measurement noise is independent of them. As is standard in the study of the transient behavior of adaptive filters (see, e.g., [16, Part V]), we subtract (12) from the drifting model, take squares and expectations to obtain:
Examination of the resulting rate and driving terms shows that the LMS algorithm (12) satisfies Assumption II-A whenever the moments of the regressors and measurement noise are time-invariant (or bounded). This does not restrict the drift of the objective, and the measurements will, of course, be non-stationary as a result. This assumption is also consistent with the modeling conditions typically applied when studying the tracking performance of adaptive filters [16, Eq. (20.16)]. Assuming stationarity of the regressors and measurement noise, we find for small step-sizes:
Hence, we have from (11):
The result is consistent with [16, Lemma 21.1], with an additional factor appearing in (17) since we are considering here the mean-square deviation around the drifting minimizer, rather than the excess mean-square error studied in [16, Lemma 21.1]. When the drift term is no longer zero-mean, we can bound:
and find from (10):
We observe that the drift penalty incurred when the drift has non-zero mean is significantly larger than in the zero-mean case. This is to be expected, as the cumulative effect of the drift in the recursive relation (7) no longer averages to zero when its mean is non-zero.
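The gap between the two drift penalties can be observed in a small simulation. The following Python sketch tracks a random-walk target with LMS, once with zero-mean drift and once with an added constant drift bias; all parameter values and names are illustrative assumptions:

```python
import numpy as np

def lms_steady_msd(bias, sigma_q=1e-3, mu=0.02, d=5, T=20000, seed=3):
    """Steady-state mean-square deviation of LMS tracking a random-walk target (7)."""
    rng = np.random.default_rng(seed)
    w_star, w = np.zeros(d), np.zeros(d)
    direction = np.ones(d) / np.sqrt(d)   # fixed direction for the drift bias
    msd = []
    for _ in range(T):
        w_star = w_star + sigma_q * rng.standard_normal(d) + bias * direction
        h = rng.standard_normal(d)
        y = h @ w_star + 0.1 * rng.standard_normal()
        w = w + mu * (y - h @ w) * h      # LMS update (12)
        msd.append(np.sum((w - w_star) ** 2))
    return float(np.mean(msd[T // 2:]))   # average over the steady-state portion

zero_mean, biased = lms_steady_msd(0.0), lms_steady_msd(1e-3)
print(biased > zero_mean)  # True: a non-zero drift mean incurs a larger penalty
```

Even a drift bias of the same order as the random-walk increments produces a visibly larger steady-state deviation, consistent with the discussion above.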
III-B Decentralized Stochastic Optimization
We now consider the problem of general decentralized stochastic optimization. We associate with each agent a cost:
for pursuing the minimizer of the aggregate cost:
where the weighting vector denotes the right Perron eigenvector associated with the left-stochastic combination matrix. If we collect the iterates and data across the agents, the diffusion recursion (21)–(22) can be viewed as an instance of (1). Note that by setting the number of agents to one, we recover ordinary centralized stochastic gradient descent (2), and as such the results in this section apply to that case as well. We impose the following standard assumptions on the costs as well as the stochastic gradient approximations. [Bounded Hessian] Each cost is twice-differentiable with Hessian bounded from above and below for all times. ∎ Note that this condition ensures that each cost is strongly-convex with Lipschitz gradients, and that the respective parameters are bounded independently of time. Independence of the bounds on problem parameters over time is common in the study of optimization algorithms in non-stationary and dynamic environments [20, 6] and will ensure that Assumption II-A is satisfied. We additionally assume that the objectives of the agents do not drift too far apart. [Bounded Disagreement] The distance between the local minimizers of any two agents is bounded independently of time, i.e.:
for all pairs of agents and all times. ∎ We also make the following common assumption on the quality of the gradient estimate. [Gradient noise] The stochastic gradient approximates the true gradient of (20) sufficiently well, i.e.:
where the conditioning is on the filtration of random variables up to the previous time instant, for all times and some constants independent of time. ∎ It has already been established that the diffusion recursion (21)–(22) is a mean-square contractive mapping according to Definition I in stationary environments [2, Eq. (58)]. In order to recover tracking performance through Theorem II-A, we need to ensure that the rate of contraction and driving term can be bounded independently of time, i.e., that Assumption II-A holds under conditions (23)–(24). [Tracking performance of diffusion] The diffusion algorithm (21)–(22) is mean-square contractive, with a rate of contraction determined by the step-size and the second-largest-magnitude eigenvalue of the combination matrix, and a driving term involving problem-independent constants. The fixed-point from Definition I is, in light of (28), within a small neighborhood of the minimizer of (23). The tracking performance is given by:
When the drift is zero-mean, we have:
∎ When the gradient approximation is exact, the gradient-noise term vanishes and (29) aligns with the result of [6, Remark 1], where deterministic dynamic optimization with exact gradients is considered. On the other hand, when the drift is zero-mean, (30) recovers [20, Eq. (80)] up to problem-independent factors.
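A minimal sketch of the adapt-then-combine structure of (21)–(22) in Python; the quadratic costs, the fully connected uniform combination matrix, and all names are assumptions made for illustration:

```python
import numpy as np

def diffusion_step(W, grads, A, mu):
    """One adapt-then-combine diffusion step.
    W:     K x d matrix of agent iterates
    A:     K x K left-stochastic combination matrix (columns sum to one)
    grads: list of per-agent (stochastic) gradient functions."""
    K = W.shape[0]
    Psi = np.stack([W[k] - mu * grads[k](W[k]) for k in range(K)])  # adapt (21)
    return A.T @ Psi                                                # combine (22)

# Illustrative quadratic costs J_k(w) = 0.5 * ||w - t_k||^2
targets = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
K, d = targets.shape
grads = [lambda w, t=targets[k]: w - t for k in range(K)]
A = np.full((K, K), 1.0 / K)        # fully connected, uniform weights
W = np.zeros((K, d))
for _ in range(3000):
    W = diffusion_step(W, grads, A, mu=0.05)
print(np.round(W[0], 2))            # every agent approaches the aggregate minimizer
```

With a single agent and identity combination matrix, the step reduces to the centralized stochastic gradient update (2); with the uniform doubly stochastic matrix above, all agents converge to the minimizer of the aggregate cost, here the mean of the targets.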
III-C Multitask Decentralized Learning
where the smoothness regularizer involves the weighted Laplacian matrix associated with the graph adjacency matrix. The formulation (31), in contrast to (23), does not force each agent in the network to reach consensus, and instead allows for the independent minimization of each local cost subject to a coupling smoothness regularizer. We refer the reader to [13, 14] for a more detailed motivation for minimizing (31) instead of (23), and will focus here on the tracking performance of the resulting algorithm. A solution to (31) can be pursued via the multitask strategy [1, 13]:
where the combination weights are non-zero only between neighboring agents. Comparing the diffusion strategy (21)–(22) to the multitask strategy (32)–(33), we note a structural similarity, with the subtle difference that the combination weights in (33), in contrast to those in (22), are not constant and depend on the step-size and regularization parameter. The multitask diffusion strategy (32)–(33) has also been shown to be mean-square contractive [13, Eq. (54)] and hence we can verify Assumption II-A and appeal to Theorem II-A to infer its tracking performance. [Tracking performance of multitask diffusion] The multitask diffusion algorithm (32)–(33) is mean-square contractive, with a rate of contraction and driving term involving problem-independent constants. The fixed-point from Definition I is, in light of (34), within a small neighborhood of the minimizer of (31). The tracking performance is hence given by:
When the drift is zero-mean, we have:
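Since the combination weights in (33) depend on the step-size and regularization parameter, one plausible parameterization can be sketched as follows; this is a hypothetical construction consistent with a Laplacian-type regularizer, not the exact expression from [13]:

```python
import numpy as np

def multitask_weights(C, mu, eta):
    """Step-size- and regularization-dependent combination matrix: off-diagonal
    weights mu*eta*c_lk between neighbors, diagonal chosen so columns sum to one."""
    A = mu * eta * C.astype(float)
    np.fill_diagonal(A, 0.0)                 # no self-loop contribution from C
    np.fill_diagonal(A, 1.0 - A.sum(axis=0)) # left-stochastic: columns sum to one
    return A

C = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])  # chain-graph adjacency
A = multitask_weights(C, mu=0.1, eta=1.0)
print(A.sum(axis=0))  # columns sum to one
```

As the product of step-size and regularization parameter shrinks, the matrix approaches the identity and the agents decouple into independent learners; larger regularization enforces more smoothing across neighbors, which is the qualitative behavior the multitask strategy is designed for.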
IV Simulation Results
IV-A Tracking Multitask Problems
We consider a network of agents connected by a randomly generated graph. Each agent observes feature vectors and labels [17, Appendix G]. The collection of initial hyperplanes is generated to be smooth over the graph using the procedure of [13, Sec. VI] and subsequently subjected to a common drift term. Performance is displayed in Fig. 1. We observe that an optimal step-size choice exists for both drift rates, with smaller drifts allowing for smaller step-sizes, resulting in a smaller effect of the gradient noise and overall better tracking performance. The trends align with the results of Corollary III-C.
IV-B Illustration of Theorem II-A in the Presence of Drift Bias
We next verify one of the main conclusions of Theorem II-A, namely that the dominant term in the expressions for tracking performance deteriorates from the zero-mean case (Eq. (30)) to the larger penalty of the non-zero-mean case (Eq. (29)). We consider a collection of agents observing independent data originating from a common linear model according to (13), subjected to a drift term following the random-walk model (7). All agents construct local least-squares cost functions and pursue the drifting model by means of the resulting diffusion strategy (21)–(22). The tracking performance in both the zero-mean and biased drift settings for various choices of the step-size parameter is displayed in Fig. 2.
-  (2014) Multitask diffusion adaptation over networks. IEEE Transactions on Signal Processing 62 (16), pp. 4129–4144.
-  (2013) Distributed Pareto optimization via diffusion strategies. IEEE Journal of Selected Topics in Signal Processing 7 (2), pp. 205–220.
-  (2020) Dynamic federated learning. Available as arXiv:2002.08782.
-  (2014) Adaptive Filter Theory. Pearson.
-  (2014) Communication-efficient distributed dual coordinate ascent. In Proc. International Conference on Neural Information Processing Systems, Montreal, Canada, pp. 3068–3076.
-  (2020) Can primal methods outperform primal-dual methods in decentralized dynamic optimization? Available as arXiv:2003.00816.
-  (1989) Introductory Functional Analysis with Applications. John Wiley & Sons.
-  (2014) Decentralized dynamic optimization through the alternating direction method of multipliers. IEEE Transactions on Signal Processing 62 (5), pp. 1185–1197.
-  (2009) Distributed subgradient methods for multi-agent optimization. IEEE Transactions on Automatic Control 54 (1), pp. 48–61.
-  (2013) Proximal algorithms. Foundations and Trends in Optimization 1 (3), pp. 127–239.
-  (1997) Introduction to Optimization. Optimization Software.
-  (2019) Stabilized SVRG: simple variance reduction for nonconvex optimization. Available as arXiv:1905.00529.
-  (2018) Learning over multitask graphs - Part I: Stability analysis. Available as arXiv:1805.08535.
-  (2020) Multitask learning over graphs. Submitted for publication, available as arXiv:2001.02112.
-  (2016) Stochastic variance reduction for nonconvex optimization. In Proc. ICML, New York, NY, USA, pp. 314–323.
-  (2008) Adaptive Filters. John Wiley & Sons.
-  Adaptation, learning, and optimization over networks. Foundations and Trends in Machine Learning 7 (4-5), pp. 311–801.
-  (2011) Convergence rates of inexact proximal-gradient methods for convex optimization. In Proc. Advances in Neural Information Processing Systems, Granada, Spain, pp. 1458–1466.
-  (2017) Decentralized prediction-correction methods for networked time-varying convex optimization. IEEE Transactions on Automatic Control 62 (11), pp. 5724–5738.
-  (2013) On distributed online classification in the midst of concept drifts. Neurocomputing 112, pp. 138–152.
-  (2016) Distributed dynamic optimization over directed graphs. In Proc. IEEE Conference on Decision and Control (CDC), Las Vegas, USA, pp. 245–250.
-  (2016) On the convergence of decentralized gradient descent. SIAM Journal on Optimization 26 (3), pp. 1835–1854.