# Logarithmic Regret for Online Control

We study optimal regret bounds for control in linear dynamical systems under adversarially changing strongly convex cost functions, given knowledge of the transition dynamics. This includes several well-studied and fundamental frameworks such as the Kalman filter and the linear quadratic regulator. State-of-the-art methods achieve regret which scales as O(√T), where T is the time horizon. We show that the optimal regret in this setting can be significantly smaller, scaling as O(polylog(T)). This regret bound is achieved by two different efficient iterative methods: online gradient descent and online natural gradient.


## 1 Introduction

Algorithms for regret minimization typically attain one of two performance guarantees. For general convex losses, regret scales as the square root of the number of iterations, and this is tight. However, if the loss functions exhibit more curvature, such as quadratic loss functions, there exist algorithms that attain poly-logarithmic regret. This distinction is also known as "fast rates" in statistical estimation.

Despite their ubiquitous use in online learning and statistical estimation, logarithmic regret algorithms are almost non-existent in control of dynamical systems. This can be attributed to fundamental challenges in computing the optimal controller in the presence of noise.

Time-varying cost functions in dynamical systems can be used to model unpredictable dynamic resource constraints, and the tracking of a desired sequence of exogenous states. However, with changing (even strongly) convex loss functions, the optimal controller for a linear dynamical system is not immediately computable via a convex program. For the special case of quadratic loss, some previous works [9] remedy the situation by taking a semi-definite relaxation, and thereby obtain a controller which has provable guarantees on regret and computational requirements. However, this semi-definite relaxation reduces the problem to regret minimization over linear costs, and removes the curvature which is necessary to obtain logarithmic regret.

In this paper we give the first efficient poly-logarithmic regret algorithms for controlling a linear dynamical system with noise in the dynamics (i.e. the standard model). Our results apply to general convex loss functions that are strongly convex, and not only to quadratics.

| Reference | Noise | Regret | Loss functions |
| --------- | ----- | ------ | -------------- |
| here | stochastic | polylog(T) | strongly convex |

### 1.1 Our Results

The setting we consider is a linear dynamical system, a continuous state Markov decision process with linear transitions, described by the following equation:

 xt+1=Axt+But+wt. (1.1)

Here x_t is the state of the system, u_t is the action (or control) taken by the controller, and w_t is the noise. In each round t, the learner outputs an action u_t upon observing the state x_t and incurs a cost of c_t(x_t, u_t), where c_t is convex. The objective is to choose a sequence of adaptive controls so that the total cost incurred is minimized.
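To fix intuition, the interaction protocol can be sketched in a few lines of code; the dynamics, controller, and quadratic cost below are illustrative assumptions, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(0)
dx, du, T = 3, 2, 50          # hypothetical dimensions and horizon

A = 0.5 * np.eye(dx)          # toy stable dynamics (assumption)
B = rng.standard_normal((dx, du)) * 0.1
K = np.zeros((du, dx))        # a trivially stabilizing controller here

x = np.zeros(dx)
total_cost = 0.0
for t in range(T):
    u = -K @ x                            # linear state-feedback action
    total_cost += float(x @ x + u @ u)    # a simple quadratic cost c_t
    w = rng.uniform(-0.1, 0.1, size=dx)   # bounded zero-mean noise w_t
    x = A @ x + B @ u + w                 # x_{t+1} = A x_t + B u_t + w_t

print(round(total_cost, 4))
```

The learner only observes x_t before choosing u_t; the regret question is how its accumulated cost compares to the best linear controller in hindsight.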

The approach taken by [9] and other previous works is to use a semi-definite relaxation for the controller. However, this removes the properties associated with the curvature of the loss functions, by reducing the problem to an instance of online linear optimization. It is known that without curvature, O(√T) regret bounds are tight (see [13]).

Therefore we take a different approach, initiated by [4]. We consider controllers that depend on the previous noise terms, and take the form u_t = −Kx_t + ∑_{i=1}^H M^{[i−1]} w_{t−i}. While this resulting convex relaxation does not remove the curvature of the loss functions altogether, it results in an overparametrized representation of the controller, and it is not a priori clear that the loss functions are strongly convex with respect to this parameterization. We demonstrate the appropriate conditions on the linear dynamical system under which strong convexity is retained.

Henceforth we present two methods that attain poly-logarithmic regret. They differ in terms of the regret bounds they afford and the computational cost of their execution. The online gradient descent (OGD) update requires only gradient computation and update, whereas the online natural gradient (ONG) update, in addition, requires the computation of the preconditioner, which is the expected Gram matrix of the Jacobian, denoted E[J⊤J], and its inverse. However, the natural gradient update admits an instance-dependent upper bound on the regret, which, while being at least as good as the regret bound on OGD, offers better performance guarantees on benign instances (see Corollary 4.5, for example).

| Algorithm | Update rule (simplified) | Applicability |
| --------- | ------------------------ | ------------- |
| OGD | M_{t+1} = Π_M(M_t − η_t ∇f_t(M_t)) | diagonally strongly stable K |
| ONG | [M_{t+1}]_vec = Π_M([M_t]_vec − η_t (E[J⊤J])^{−1} ∇_{[M_t]_vec} f_t(M_t)) | strongly stable K |

### 1.2 Related Work

For a survey of linear dynamical systems (LDS), as well as learning, prediction and control problems, see [17]. Recently, there has been a renewed interest in learning dynamical systems in the machine learning literature. For fully-observable systems, sample complexity and regret bounds for control (under Gaussian noise) were obtained in [3, 10, 2]. The technique of spectral filtering for learning and open-loop control of partially observable systems was introduced and studied in [15, 7, 14]. Provable control in the Gaussian noise setting via the policy gradient method was also studied in [11].

The closest work to ours is that of [1] and [9], aimed at controlling LDS with adversarial loss functions. The authors in [1] obtain a logarithmic regret algorithm for changing quadratic costs (with a fixed Hessian), but for dynamical systems that are noise-free. In contrast, our results apply to the full (noisy) LDS setting, which presents the main challenges as discussed before. Cohen et al. [9] consider changing quadratic costs with stochastic noise and achieve a √T regret bound.

We make extensive use of techniques from online learning [8, 16, 13]. Of particular interest to our study is the setting of online learning with memory [5]. We also build upon the recent control work of [4], who use online learning techniques and convex relaxation to obtain provable bounds for LDS with adversarial perturbations.

## 2 Problem Setting

We consider a linear dynamical system as defined in (1.1) with costs c_t(x_t, u_t), where c_t is strongly convex. In this paper we assume that the noise w_t is a random variable generated independently at every time step. For any algorithm A, we attribute a cost defined as

 J_T(A) = E_{{w_t}}[ ∑_{t=1}^T c_t(x_t, u_t) ],

where x_{t+1} = Ax_t + Bu_t + w_t, and E_{{w_t}} represents the expectation over the entire noise sequence. For the rest of the paper we will drop the subscript from the expectation, as the noise is the only source of randomness. Overloading notation, we shall use J_T(K) to denote the cost of a linear controller K which chooses the action as u_t = −Kx_t.

##### Assumptions.

In the paper we assume that x_1 = 0 (this is only for convenience of presentation; the case with a bounded x_1 can be handled similarly), as well as the following conditions.

###### Assumption 2.1.

We assume that ∥B∥ ≤ κ_B. Furthermore, the perturbation introduced per time step is bounded, i.i.d., and zero-mean with lower bounded covariance, i.e.

 ∀t: w_t ∼ D_w, E[w_t] = 0, E[w_t w_t⊤] ⪰ σ²I and ∥w_t∥ ≤ W.

While we make the assumption that the noise vectors are bounded with probability 1, we can generalize to the case of sub-Gaussian noise by conditioning on the event that none of the noise vectors are ever large. This can be done at the expense of another multiplicative log(T) factor in the regret. Furthermore, we assume the following.

###### Assumption 2.2.

The costs c_t(x, u) are α-strongly convex. Further, as long as it is guaranteed that ∥x∥, ∥u∥ ≤ D, it holds that

 ∥∇_x c_t(x, u)∥, ∥∇_u c_t(x, u)∥ ≤ GD.

The class of linear controllers we work with are defined as follows.

###### Definition 2.3 (Diagonal Strong Stability).

Given dynamics (A, B), a linear policy/matrix K is (κ, γ)-diagonal strongly stable for real numbers κ ≥ 1, γ < 1, if there exists a complex diagonal matrix L and a non-singular complex matrix H, such that A − BK = HLH^{−1} and the following conditions are met:

1. The spectral norm of L is strictly smaller than one, i.e., ∥L∥ ≤ 1 − γ.

2. The controller and the transforming matrices are bounded, i.e., ∥K∥ ≤ κ and ∥H∥, ∥H^{−1}∥ ≤ κ.

The notion of strong stability was introduced by [9]. Both strong stability and diagonal strong stability are quantitative measures of the classical notion of stabilizing controllers (a controller K is stabilizing if the spectral radius of A − BK is strictly smaller than one) that permit a discussion of non-asymptotic regret bounds. We note that an analogous notion for quantification of open-loop stability appears in the work of [14].

On the generality of the diagonal strong stability notion, the following comment may be made: while not all matrices are complex diagonalizable, an exhaustive characterization of complex diagonalizable matrices is the existence of n linearly independent eigenvectors; for the latter, it suffices, but is not necessary, that a matrix has n distinct eigenvalues (see [18]). It may be observed that almost all matrices admit distinct eigenvalues, and hence are complex diagonalizable, in that the complement set has zero measure. By this discussion, almost all stabilizing controllers are diagonal strongly stable for some κ, γ. The astute reader may note the departure here from the more general notion of strong stability, in that all stabilizing controllers are strongly stable for some choice of parameters.
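As a concrete illustration, parameters (κ, γ) witnessing diagonal strong stability can be extracted numerically from an eigendecomposition of A − BK; the system and controller below are toy assumptions:

```python
import numpy as np

def diag_strong_stability_params(A, B, K):
    """Return (kappa, gamma) certifying diagonal strong stability of K
    for dynamics (A, B), via an eigendecomposition of A - B K.
    Assumes A - B K is diagonalizable (true for almost all matrices)."""
    M = A - B @ K
    eigvals, H = np.linalg.eig(M)          # M = H L H^{-1}, L diagonal
    Hinv = np.linalg.inv(H)
    gamma = 1.0 - np.max(np.abs(eigvals))  # ||L|| <= 1 - gamma
    kappa = max(np.linalg.norm(K, 2),
                np.linalg.norm(H, 2),
                np.linalg.norm(Hinv, 2), 1.0)
    return kappa, gamma

A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.eye(2)
K = np.array([[0.4, 0.0], [0.0, 0.5]])    # hypothetical controller
kappa, gamma = diag_strong_stability_params(A, B, K)
print(gamma > 0)   # K stabilizes: spectral radius of A - BK below one
```

Here A − BK has distinct eigenvalues, so the eigendecomposition exists; a defective A − BK would require the more general strong-stability notion.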

##### Regret Formulation.

Let K = {K : K is (κ, γ)-diagonal strongly stable}. For an algorithm A, the notion of regret we consider is pseudo-regret, i.e. the sub-optimality of its cost with respect to the cost of the best linear controller, i.e.,

 Regret = J_T(A) − min_{K∈K} J_T(K).

## 3 Preliminaries

##### Notation.

We reserve the letters x, y for states and u, v for actions. We denote by d_x, d_u the dimensionality of the state and the control space respectively. Let d = max(d_x, d_u). We reserve capital letters A, B, K, M for matrices associated with the system and the policy. Other capital letters are reserved for universal constants in the paper. We use the shorthand M_{1:H} to denote a subsequence {M_1, …, M_H}. For any matrix M, define [M]_vec to be a flattening of the matrix where we stack the columns upon each other. Further, for a collection of matrices M = {M^{[0]}, …, M^{[H−1]}}, let [M]_vec be the flattening defined by stacking the flattenings of the M^{[i]} upon each other. We use ∥·∥ to denote the matrix induced norm. The rest of this section provides a recap of the relevant definitions and concepts introduced in [4].

### 3.1 Reference Policy Class

For the rest of the paper, we fix a (κ, γ)-diagonally strongly stable matrix K (treated as fixed and not a parameter). Note that this can be any such matrix, and it can be computed via a semi-definite feasibility program [9] given the knowledge of the dynamics, before the start of the game. We work with the following class of policies.

###### Definition 3.1 (Disturbance-Action Policy).

A disturbance-action policy M = {M^{[0]}, …, M^{[H−1]}}, for horizon H ≥ 1, is defined as the policy which at every time t, chooses the recommended action u_t at a state x_t, defined (the state x_t is completely determined given w_0, …, w_{t−1}; hence, the use of x_t only serves to ease the burden of presentation) as

 u_t(M) ≜ −Kx_t + ∑_{i=1}^H M^{[i−1]} w_{t−i}.

For notational convenience, here it may be considered that w_i = 0 for all i < 0.

The policy applies a linear transformation to the disturbances observed in the past H steps. Since x_t is a linear function of the past disturbances under a linear controller, formulating the policy this way can be seen as a relaxation of the class of linear policies. Note that K is a fixed matrix and is not part of the parameterization of the policy. As was established in [4] (and we include the proof for completeness), with the appropriate choice of parameters, superimposing such a K onto the policy class allows it to approximate any linear policy, in terms of the total cost suffered, with a finite horizon parameter H.

We refer to the policy played at time t as M_t = {M_t^{[i]}}, where the subscript t refers to the time index and the superscript [i] refers to the action of M_t on w_{t−i−1}. Note that such a policy can be executed because w_{t−1} is perfectly determined on the specification of x_t as w_{t−1} = x_t − Ax_{t−1} − Bu_{t−1}.
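A minimal sketch of executing a disturbance-action policy (Definition 3.1); all concrete values here are hypothetical:

```python
import numpy as np

def dac_action(K, M, x_t, past_w):
    """Disturbance-action control (Definition 3.1):
    u_t = -K x_t + sum_{i=1}^{H} M[i-1] @ w_{t-i}.
    `past_w[i-1]` holds w_{t-i}; M is a list of H matrices (du x dx)."""
    H = len(M)
    u = -K @ x_t
    for i in range(1, H + 1):
        u = u + M[i - 1] @ past_w[i - 1]
    return u

dx, du, H = 2, 2, 3
K = 0.3 * np.eye(du)                      # fixed stabilizing matrix (toy)
M = [0.1 * np.eye(du) for _ in range(H)]  # hypothetical policy parameters
x_t = np.array([1.0, -1.0])
past_w = [np.array([0.1, 0.0])] * H       # w_{t-1}, ..., w_{t-H}
print(dac_action(K, M, x_t, past_w))
```

In the online algorithm, the past disturbances are recovered from observed states via w_{t−1} = x_t − Ax_{t−1} − Bu_{t−1}.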

### 3.2 Evolution of State

This section describes the evolution of the state of the linear dynamical system under a non-stationary policy composed of a sequence of policies, where at each time t the policy is specified by M_t = {M_t^{[i]}}_{i=0}^{H−1}. We will use M_{0:T} to denote such a non-stationary policy. The following definitions ease the burden of notation.

1. Define ~A ≜ A − BK. The matrix ~A shall be helpful in describing the evolution of the state, starting from a non-zero state, in the absence of disturbances.

2. For any sequence of matrices M_{0:H}, define Ψ_i as a linear function that describes the effect of the disturbance w_{t−i} on the state, formally defined below.

###### Definition 3.2.

For any sequence of matrices M_{0:H}, define the disturbance-state transfer matrix Ψ_i, for i ≤ 2H, to be a function of the H + 1 matrix inputs M_{0:H}, defined as

 Ψ_i(M_{0:H}) ≜ ~A^i 1_{i≤H} + ∑_{j=0}^H ~A^j B M_{H−j}^{[i−j−1]} 1_{i−j∈[1,H]}.

It will be important to note that Ψ_i is a linear function of its arguments M_{0:H}.
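This linearity is easy to check numerically; the sketch below implements Definition 3.2 for a toy system (all concrete values are assumptions):

```python
import numpy as np

def psi(i, Ms, A_tilde, B, H):
    """Disturbance-state transfer matrix Psi_i (Definition 3.2).
    Ms[k] is the policy M_k, itself a list of H matrices M_k^{[0..H-1]};
    Ms holds the entries M_0, ..., M_H."""
    dx = A_tilde.shape[0]
    out = np.linalg.matrix_power(A_tilde, i) if i <= H else np.zeros((dx, dx))
    for j in range(H + 1):
        if 1 <= i - j <= H:
            out = out + (np.linalg.matrix_power(A_tilde, j)
                         @ B @ Ms[H - j][i - j - 1])
    return out

dx, H = 2, 3
A_tilde = 0.5 * np.eye(dx)                 # A - BK for a toy system
B = np.eye(dx)
Ms = [[0.1 * np.eye(dx) for _ in range(H)] for _ in range(H + 1)]

# Psi_i is affine in the noise-free term and linear in the M-argument:
# doubling M doubles the M-dependent part.
P1 = psi(2, Ms, A_tilde, B, H)
P2 = psi(2, [[2 * m for m in Mk] for Mk in Ms], A_tilde, B, H)
P0 = psi(2, [[0 * m for m in Mk] for Mk in Ms], A_tilde, B, H)
print(np.allclose(P2 - P1, P1 - P0))       # linear about the zero policy
```

The `1_{i≤H}` term is the controller-free propagation ~A^i, while the sum collects how each past policy matrix routes the disturbance into the state.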

### 3.3 Surrogate State and Surrogate Cost

This section introduces a couple of definitions required to describe our main algorithm. In essence, they describe a notion of state and action, and the expected cost, if the system evolved solely under the past steps of a non-stationary policy.

###### Definition 3.3 (Surrogate State & Surrogate Action).

Given a sequence of matrices M_{0:H+1} and 2H independent invocations of the random variable w, given by {w_k}_{k=0}^{2H−1}, define the following random variables denoting the surrogate state and the surrogate action:

 y(M_{0:H}) = ∑_{i=0}^{2H} Ψ_i(M_{0:H}) w_{2H−i−1},  v(M_{0:H+1}) = −K y(M_{0:H}) + ∑_{i=1}^H M_{H+1}^{[i−1]} w_{2H−i}.

When M is the same across all arguments, we compress the notation to y(M) and v(M) respectively.

###### Definition 3.4 (Surrogate Cost).

Define the surrogate cost function to be the cost associated with the surrogate state and the surrogate action defined above, i.e.,

 ft(M0:H+1)=E[ct(y(M0:H),v(M0:H+1))].

When M is the same across all arguments, we compress the notation to f_t(M).

###### Definition 3.5 (Jacobian).

Let z(M) = (y(M), v(M)). Since y(M), v(M) are random linear functions of M, z can be reparameterized as z = J[M]_vec (up to an M-independent term), where J is a random matrix, which derives its randomness from the random perturbations {w_k}.

### 3.4 OCO with Memory

We now describe the setting of online convex optimization with memory, introduced in [5]. In this setting, at every step t, an online player chooses some point x_t ∈ K ⊂ R^d; a loss function f_t : K^{H+1} → R is then revealed, and the learner suffers a loss of f_t(x_{t−H:t}). We assume a certain coordinate-wise Lipschitz regularity on f_t of the form: for any j ∈ {0, …, H} and any x_{0:H}, ~x_j ∈ K,

 |f_t(x_{0:j−1}, x_j, x_{j+1:H}) − f_t(x_{0:j−1}, ~x_j, x_{j+1:H})| ≤ L ∥x_j − ~x_j∥.  (3.1)

In addition, we define ~f_t(x) = f_t(x, …, x), and we let

 Gf=supt∈{0,…,T},x∈K∥∇ft(x)∥, D=supx,y∈K∥x−y∥. (3.2)

The resulting goal is to minimize the policy regret [6], which is defined as

 PolicyRegret=T∑t=Hft(xt−H:t)−minx∈KT∑t=Hft(x).

## 4 Algorithms & Statement of Results

The two variants of our method are spelled out in Algorithm 1. Theorems 4.1 and 4.3 provide the main guarantees for the two algorithms.

###### Theorem 4.1 (Online Gradient Update).

Suppose Algorithm 1 (Online Gradient Update) is executed with K being any (κ, γ)-diagonal strongly stable matrix and H = Θ(log T), on an LDS satisfying Assumption 2.1, with control costs satisfying Assumption 2.2. Then, it holds true that

 J_T(A) − min_{K∈K} J_T(K) ≤ ~O((G²W⁴)/(ασ²) · log⁷(T)).

The above result leverages the following lemma, which shows that the surrogate cost f_t is strongly convex with respect to its argument [M]_vec. Note that strong convexity of the cost functions c_t over the state-action space does not by itself imply strong convexity of the surrogate cost over the space of controllers M. This is because, in the surrogate cost f_t, c_t is applied to (y(M), v(M)), which are themselves linear functions of M; for any fixed noise realization, this linear map is necessarily column-rank-deficient. To observe this, note that it maps from a space of dimensionality Hd_ud_x to one of dimensionality d_x + d_u. The lemma below, which forms the core of our analysis, shows that this degeneracy does not occur in expectation, using the inherent stochastic nature of the dynamical system.

###### Lemma 4.2.

If the cost functions c_t are α-strongly convex, K is a (κ, γ)-diagonal strongly stable matrix, and Assumption 2.1 is met, then the idealized functions ~f_t are λ-strongly convex with respect to [M]_vec, where

 λ = (ασ²γ²)/(36κ¹⁰).

We present the proof of simpler instances, including a one-dimensional version of the lemma, in Section 8, as they present the core ideas without the tedious notation necessitated by the general setting. We provide the general proof in Section D of the Appendix.

###### Theorem 4.3 (Online Natural Gradient Update).

Suppose Algorithm 1 (Online Natural Gradient Update) is executed with H = Θ(log T), on an LDS satisfying Assumption 2.1, with control costs satisfying Assumption 2.2. Then, it holds true that

 J_T(A) − min_{K∈K} J_T(K) ≤ ~O((GW²)/(αμ) · log⁷(T)),  where μ^{−1} ≜ max_{M∈M} ∥(E[J⊤J])^{−1} ∇_{[M]_vec} f_t(M)∥.

In Theorem 4.3, the regret guarantee depends on an instance-dependent parameter μ, which is a measure of the hardness of the problem. First, we note that the proof of Lemma 4.2 establishes that the Gram matrix of the Jacobian (Definition 3.5) is strictly positive definite, and hence we recover the logarithmic regret guarantee achieved by the Online Gradient Descent update, with the constants preserved.

###### Corollary 4.4.

In addition to the assumptions in Theorem 4.3, if K is a (κ, γ)-diagonal strongly stable matrix, then for the natural gradient update,

 J_T(A) − min_{K∈K} J_T(K) ≤ ~O((G²W⁴)/(ασ²) · log⁷(T)).
###### Proof.

The conclusion follows from Lemma 5.2 and Lemma 8.1, which is the core component in the proof of Lemma 4.2, showing that E[J⊤J] ⪰ (γ²σ²)/(36κ¹⁰) · I. ∎

Secondly, we note that, being instance-dependent, the guarantee the Natural Gradient update offers can potentially be stronger than that of the Online Gradient method. A case in point is the following corollary involving spherically symmetric quadratic costs, in which case the Natural Gradient update yields a regret guarantee under demonstrably more general conditions, in that the bound does not depend on the minimum eigenvalue σ² of the covariance of the disturbances w_t, unlike the one OGD affords. (A more thorough analysis of the improvement in this case shows a further multiplicative gain. Moreover, Theorem 4.3 and Corollary 4.4 hold more generally under strong stability of the comparator class and K, as opposed to diagonal strong stability.)

###### Corollary 4.5.

Under the assumptions of Theorem 4.3, if the cost functions are of the form c_t(x, u) = β_t(∥x∥² + ∥u∥²), where {β_t} is an adversarially chosen sequence of scalars bounded by β, and K is chosen to be a (κ, γ)-diagonal strongly stable matrix, then the natural gradient update guarantees

 J_T(A) − min_{K∈K} J_T(K) ≤ ~O((β²W²)/α · log⁷(T)).
###### Proof.

It suffices to note that for such costs, ∇²_{[M]_vec} f_t(M) = 2β_t E[J⊤J], which yields the claimed bound on μ^{−1}. ∎

## 5 Reduction to Low Regret with Memory

The next lemma is a condensation of the results from [4], which we present in this form to highlight the reduction to OCO with memory. It shows that achieving low policy regret on the memory-based functions f_t is sufficient to ensure low regret on the overall dynamical system. Since the proof is essentially provided by [4], we provide it in the Appendix for completeness. Define

 M ≜ {M = {M^{[0]}, …, M^{[H−1]}} : ∥M^{[i−1]}∥ ≤ κ³κ_B(1 − γ)^i}.
###### Lemma 5.1.

Let the dynamical system satisfy Assumption 2.1 and let K be any (κ, γ)-diagonal strongly stable matrix. Consider a sequence of cost functions c_t satisfying Assumption 2.2 and a sequence of policies M_0, …, M_T ∈ M satisfying

 PolicyRegret = ∑_{t=0}^T f_t(M_{t−H−1:t}) − min_{M∈M} ∑_{t=0}^T f_t(M) ≤ R(T)

for some function R(T), with f_t as defined in Definition 3.4. Let A be an online algorithm that plays the non-stationary controller sequence {M_t}. Then, as long as H is chosen to be larger than Θ(log T), we have that

 J(A) − min_{K*∈K} J(K*) ≤ R(T) + O(GW² log(T)).

Here, the constants suppressed in the O(·) notation contain polynomial factors in κ_B, κ, γ^{−1}, d.

###### Lemma 5.2.

The function f_t as defined in Definition 3.4 is coordinate-wise L-Lipschitz, and the norm of its gradient is bounded by G_f, where

 L = 2DGWκ_Bκ³/γ,  G_f ≤ GDWHd(H + 2κ_Bκ³/γ),
 where D ≜ Wκ²(1 + Hκ_B²κ³) / (γ(1 − κ²(1−γ)^{H+1})) + κ_Bκ³W/γ.

The proof of this lemma is identical to the analogous lemma in [4] and hence is omitted.

## 6 Analysis for Online Gradient Descent

In the setting of Online Convex Optimization with memory, as shown by [5], running a memory-based OGD bounds the policy regret, as in the following theorem.

###### Theorem 6.1.

Consider the OCO with memory setting defined in Section 3.4. Let the f_t be Lipschitz loss functions with memory such that the unary specializations ~f_t are λ-strongly convex, and let L and G_f be as defined in (3.1) and (3.2). Then, there exists an algorithm which generates a sequence {x_t} such that

 ∑_{t=H}^T f_t(x_{t−H:t}) − min_{x∈K} ∑_{t=H}^T ~f_t(x) ≤ (G_f² + LH²G_f)/λ · (1 + log(T)).

We provide the requisite algorithm and the proof of the above theorem in the Appendix.
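The following sketch illustrates the memory-based OGD idea behind Theorem 6.1 on a toy scalar problem with quadratic (hence 1-strongly convex) unary losses; the targets, loss shape, and step-size schedule are illustrative assumptions:

```python
import numpy as np

# Memory-based OGD sketch (Section 3.4 setting): play x_t, suffer
# f_t(x_{t-H}, ..., x_t), update on the unary loss ~f_t(x) = f_t(x, ..., x)
# with step size eta_t = 1 / (lambda * t).
H, T, lam = 2, 2000, 1.0
targets = np.sin(0.01 * np.arange(T))            # toy drifting targets

def f(xs, b):
    """Memory loss: average quadratic distance of the last points to b."""
    return 0.5 * sum((x - b) ** 2 for x in xs) / len(xs)

xs, x = [], 0.0
loss_alg = 0.0
for t in range(1, T + 1):
    xs.append(x)
    if len(xs) > H + 1:
        xs.pop(0)                                # keep x_{t-H}, ..., x_t
    b = targets[t - 1]
    loss_alg += f(xs, b)
    x -= (1.0 / (lam * t)) * (x - b)             # OGD step; grad ~f_t(x) = x - b

best = float(np.mean(targets))                   # best fixed point in hindsight
loss_best = sum(f([best] * (H + 1), b) for b in targets)
print(loss_alg - loss_best < 0.05 * T)           # policy regret is sublinear
```

The 1/(λt) schedule keeps consecutive iterates close, which is what controls the gap between the memory loss f_t(x_{t−H:t}) and the unary loss ~f_t(x_t) in the proof.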

##### Specialization to the Control Setting:

We now combine the above bound with the reduction described in Section 5.

###### Proof of Theorem 4.1.

Setting H = Θ(log T), Theorem 6.1, in conjunction with Lemma 5.2, implies the stated bound on the policy regret. An invocation of Lemma 5.1 now suffices to conclude the proof of the claim. ∎

## 7 Analysis for Online Natural Gradient Descent

In this section, we consider structured loss functions of the form f_t(M_{0:H+1}) = E[c_t(z)], where z = J[M_{0:H+1}]_vec, J is a random matrix, and the c_t's are adversarially chosen strongly convex loss functions. In a similar vein, define ~f_t(M) to be the specialization of f_t when input the same argument M in every slot. Define G_f = max_{t, M∈M} ∥∇_{[M]_vec} f_t(M)∥.

The following lemma provides an upper bound on the regret, as well as on the norm of the movement of the iterates at every round, for the Online Natural Gradient update (Algorithm 1).

###### Lemma 7.1.

For α-strongly convex f_t, if the iterates are chosen as per the update rule

 [M_{t+1}]_vec = Π_M([M_t]_vec − η_t (E[J⊤J])^{−1} ∇_{[M_t]_vec} f_t(M_t))

with a decreasing step size of η_t = (αt)^{−1}, it holds that

 ∑_{t=1}^T f_t(M_t) − min_{M*∈M} ∑_{t=1}^T f_t(M*) ≤ (2α)^{−1} max_{M∈M} ∥∇_{[M]_vec} f_t(M)∥²_{(E[J⊤J])^{−1}} log T.

Moreover, the norm of the movement of consecutive iterates is bounded for all t as

 ∥[M_{t+1}]_vec − [M_t]_vec∥ ≤ (αt)^{−1} max_{M∈M} ∥(E[J⊤J])^{−1} ∇_{[M]_vec} f_t(M)∥.
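The update rule can be sketched as follows; the preconditioner, the quadratic loss, and the omission of the projection Π_M are simplifying assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
# Preconditioner: the expected Gram matrix E[J^T J] of a random Jacobian,
# estimated here by Monte Carlo (a toy stand-in for the control setting).
Js = rng.standard_normal((200, 6, d))
EJJ = np.mean([J.T @ J for J in Js], axis=0)
EJJ_inv = np.linalg.inv(EJJ)

def ong_step(m_vec, grad, t, alpha=1.0):
    """One online natural gradient step (Lemma 7.1), eta_t = 1/(alpha*t);
    the projection onto the constraint set M is assumed inactive and omitted."""
    return m_vec - (1.0 / (alpha * t)) * (EJJ_inv @ grad)

m = np.ones(d)
for t in range(1, 100):
    grad = EJJ @ (m - 2.0)   # gradient of the loss 0.5 (m-2)^T EJJ (m-2)
    m = ong_step(m, grad, t)
print(np.allclose(m, 2.0, atol=1e-2))
```

Because the preconditioner cancels the curvature of this structured quadratic, a single unit step already lands on the minimizer, which is the mechanism behind the instance-dependent bound.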

The following theorem now bounds the total policy regret for the online game with memory.

###### Theorem 7.2.

In the setting described in this section, let the ~f_t be α-strongly convex, and let the f_t satisfy equation (3.1) with constant L and have gradients bounded by G_f. Then, the online natural gradient update generates a sequence {M_t} such that

 ∑_{t=H}^T f_t(M_{t−H:t}) − min_{M∈M} ∑_{t=H}^T ~f_t(M) ≤ (max_{M∈M} ∥∇_{[M]_vec} f_t(M)∥²_{(E[J⊤J])^{−1}} + LH²G_f) · α^{−1} (1 + log(T)).
###### Proof of Theorem 7.2.

We know by (3.1) that, for any t ≥ H,

 |f_t(M_{t−H:t}) − ~f_t(M_t)| ≤ L ∑_{j=1}^H ∥[M_t]_vec − [M_{t−j}]_vec∥ ≤ L ∑_{j=1}^H ∑_{l=1}^j ∥[M_{t−l+1}]_vec − [M_{t−l}]_vec∥
  ≤ L ∑_{j=1}^H ∑_{l=1}^j η_{t−l} max_{M∈M} ∥(E[J⊤J])^{−1} ∇_{[M]_vec} f_t(M)∥
  ≤ LH² η_{t−H} max_{M∈M} ∥(E[J⊤J])^{−1} ∇_{[M]_vec} f_t(M)∥,

and so we have that

 |∑_{t=H}^T f_t(M_{t−H:t}) − ∑_{t=H}^T ~f_t(M_t)| ≤ LH²G_f/α · (1 + log(T)).

The result follows by invoking Lemma 7.1. ∎

##### Specialization to the Control Setting:

We now combine the above bound with the reduction described in Section 5.

###### Proof of Theorem 4.3.

First observe that . Setting , Theorem 7.2, in conjunction with Lemma 5.2, imply the stated bound on policy regret. An invocation of Lemma 5.1 suffices to conclude the proof of the claim. ∎

## 8 Proof of Strong Convexity in simpler cases

In this section we illustrate the proof of strong convexity of the function ~f_t with respect to [M]_vec, i.e. Lemma 4.2, in two settings:

1. The case when K = 0 is a diagonal strongly stable policy.

2. A specialization of Lemma 4.2 to one-dimensional state and one-dimensional control.

The latter case highlights the difficulty caused in the proof by choosing a non-zero K, and presents the main ideas of the proof without the tedious tensor notation necessary for the general case.

We will need some definitions and preliminaries that are outlined below. By definition we have that f_t(M) = E[c_t(y(M), v(M))]. Since we know that c_t is strongly convex, we have that

 ∇²f_t(M) = E_{{w_k}_{k=0}^{2H−1}}[∇²c_t(y(M), v(M))] ⪰ α E_{{w_k}_{k=0}^{2H−1}}[J_y⊤J_y + J_v⊤J_v].

We remind the reader that J_y, J_v are random matrices dependent on the noise vectors {w_k}. In each of the above cases, we will demonstrate the truth of the following lemma, which implies Lemma 4.2.

###### Lemma 8.1.

If Assumption 2.1 is satisfied and K is chosen to be a (κ, γ)-diagonal strongly stable matrix, then the following holds:

 E_{{w_k}_{k=0}^{2H−1}}[J_y⊤J_y + J_v⊤J_v] ⪰ (γ²σ²)/(36κ¹⁰) · I.

To analyze J_v, we will need to rearrange the definition of v(M) to make the dependence on each individual noise term explicit. To this end, consider the following definition for all k ∈ {0, 1, …, H}:

 ~v_k(M) ≜ ∑_{i=1}^H M^{[i−1]} w_{2H−i−k}.

Under this definition it follows that

 y(M) = ∑_{k=1}^H (A − BK)^{k−1} B ~v_k(M) + ∑_{k=1}^H (A − BK)^{k−1} w_{2H−k},
 v(M) = −K y(M) + ~v_0(M).

From the above definitions, J_y may be characterized in terms of the Jacobians of the ~v_k with respect to M, which we define for the rest of the section as J_{~v_k}. Defining [M]_vec as the stacking of the rows of each M^{[i]} vertically, i.e. the stacking of the columns of each (M^{[i]})⊤, it can be observed that for all k,

 J_{~v_k} = ∂~v_k(M)/∂[M]_vec = [I_{d_u} ⊗ w⊤_{2H−k−1}  I_{d_u} ⊗ w⊤_{2H−k−2}  …  I_{d_u} ⊗ w⊤_{H−k}],

where d_u is the dimension of the controls. We are now ready to analyze the two simpler cases. Further on in the section, we drop the subscripts from the expectations for brevity.

### 8.1 Proof of Lemma 8.1: K=0

In this section we assume that K = 0 is a (κ, γ)-diagonal strongly stable policy for the system. By definition, we then have v(M) = ~v_0(M). One may conclude the proof with the following observation:

 E[J_y⊤J_y + J_v⊤J_v] ⪰ E[J_v⊤J_v] = E[J_{~v_0}⊤J_{~v_0}] = I_{d_u} ⊗ Σ ⪰ σ²I.
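This identity is easy to confirm by simulation; the sketch below checks the one-dimensional analogue E[J_{~v_0}⊤J_{~v_0}] = σ²I_H by Monte Carlo (dimensions and the Gaussian noise law are toy assumptions):

```python
import numpy as np

# Monte Carlo check: in one dimension, J_{~v_0} is the row
# [w_{2H-1}, ..., w_H], so its expected Gram matrix is sigma^2 * I_H
# for i.i.d. zero-mean noise of variance sigma^2.
rng = np.random.default_rng(2)
H, sigma, n = 4, 0.5, 200_000
W = rng.normal(0.0, sigma, size=(n, H))   # n draws of (w_H, ..., w_{2H-1})
est = W.T @ W / n                          # empirical E[J^T J]
print(np.allclose(est, sigma**2 * np.eye(H), atol=0.01))
```

The positive-definiteness of this Gram matrix is exactly what rescues the strong convexity that the rank-deficient Jacobian destroys pointwise.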

### 8.2 Proof of Lemma 8.1: 1-dimensional case

Note that in the one-dimensional case, the policy M given by {M^{[i]}}_{i=0}^{H−1} is an H-dimensional vector, with each M^{[i]} being a scalar. Furthermore, the ~v_k(M) are scalars, and hence their Jacobians with respect to M are vectors. In particular, we have that

 J_{~v_k} = ∂~v_k(M)/∂M = [w_{2H−k−1}  w_{2H−k−2}  …  w_{H−k}].

Therefore, using the fact that E[w_i w_j] = 0 for i ≠ j and E[w_i²] = σ², it can be observed that for any k₁, k₂ ∈ {0, …, H}, we have that

 E[J_{~v_{k₁}}⊤ J_{~v_{k₂}}] = T_{k₁−k₂} · σ²,  (8.1)

where T_m is defined as an H × H matrix with [T_m]_{ij} = 1 if and only if j − i = m and 0 otherwise. This in particular immediately gives us that

 E[J_y⊤J_y] = (∑_{k₁=1}^H ∑_{k₂=1}^H T_{k₁−k₂} · (A − BK)^{k₁−1+k₂−1}) · B²σ² ≜ G · B² · σ²,  (8.2)
 E[J_{~v_0}⊤J_y] = (∑_{k=1}^H T_{−k} (A − BK)^{k−1}) · Bσ² ≜ Y · B · σ².  (8.3)

First, we prove a few spectral properties of the matrices G and Y defined above. From Gershgorin's circle theorem, and the fact that K is (κ, γ)-diagonal strongly stable (so that |A − BK| ≤ 1 − γ), we have

 ∥Y∥ ≤ ∑_{k=1}^H |A − BK|^{k−1} ≤ γ^{−1}.  (8.4)

The spectral properties of G summarized in the lemma below form the core of our analysis.

###### Lemma 8.2.

G is a symmetric positive definite matrix. In particular,

 G ⪰ (1/4) · I.

Now consider the following identity, which follows from the respective definitions:

 E[J_v⊤J_v] = K² · E[J_y⊤J_y] − K · E[J_y⊤J_{~v_0}] − K · E[J_{~v_0}⊤J_y] + E[J_{~v_0}⊤J_{~v_0}].

Now E[J_y⊤J_y + J_v⊤J_v] = σ²(F + B² · G), where F ≜ K²B² · G − KB · (Y + Y⊤) + I ⪰ 0. To prove Lemma 8.1, it suffices to show that for every vector m of appropriate dimension, we have that

 m⊤(F + B² · G)m ≥ γ²∥m∥²/(36κ¹⁰).

To prove the above we will consider two cases. The first case is when |B| ≥ γ/(3κ). Noting that F ⪰ 0, in this case Lemma 8.2 immediately implies that

 m⊤(F + B² · G)m ≥ m⊤(B² · G)m ≥ (1/4) · (γ²/(9κ²)) · ∥m∥² = γ²∥m∥²/(36κ²) ≥ γ²∥m∥²/(36κ¹⁰).

In the second case (when |B| ≤ γ/(3κ)), since G ⪰ 0, we have F + B²·G ⪰ I − BK·(Y + Y⊤); moreover, (8.4) implies ∥BK·(Y + Y⊤)∥ ≤ (γ/(3κ)) · κ · (2/γ) = 2/3, and therefore

 m⊤(F + B² · G)m ≥ m⊤(I − BK · (Y + Y⊤))m ≥ (1/3)∥m∥² ≥ γ²∥m∥²/(36κ¹⁰).

#### 8.2.1 Proof of Lemma 8.2

Define the following matrix for any complex number ψ:

 G(ψ) = ∑_{k₁=1}^H ∑_{k₂=1}^H T_{k₁−k₂} (ψ†)^{k₁−1} ψ^{k₂−1}.

Note that the matrix G in Lemma 8.2 is equal to G(A − BK). The following lemma provides a lower bound on the spectrum of the matrix G(ψ). The lemma treats the more general case of complex ψ, which, while unnecessary in the one-dimensional setting, aids the multi-dimensional case. A special case of this lemma was proven in [12], and we follow a similar approach relying on the inverse of such matrices.

###### Lemma 8.3.

Let ψ be a complex number such that |ψ| < 1. Furthermore, let T_m be defined as an H × H matrix with [T_m]_{ij} = 1 if and only if j − i = m and 0 otherwise. Define the matrix G(ψ) as

 G(ψ) = ∑_{k₁=1}^H ∑_{k₂=1}^H T_{k₁−k₂} (ψ†)^{k₁−1} ψ^{k₂−1}.

We have that

 G(ψ) ⪰ (1/4) · I_H.
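The bound is easy to spot-check numerically by assembling G(ψ) from its defining double sum; this is a sanity check under the stated convention for T_m, not a proof:

```python
import numpy as np

def G_of_psi(psi, H):
    """G(psi) = sum_{k1,k2} T_{k1-k2} * conj(psi)^(k1-1) * psi^(k2-1),
    where [T_m]_{ij} = 1 iff j - i = m (the m-th diagonal)."""
    G = np.zeros((H, H), dtype=complex)
    for k1 in range(1, H + 1):
        for k2 in range(1, H + 1):
            G += np.eye(H, k=k1 - k2) * np.conj(psi) ** (k1 - 1) * psi ** (k2 - 1)
    return G

# Spot-check the bound G(psi) >= (1/4) I_H for a few |psi| < 1.
ok = True
for psi in [0.0, 0.5, -0.9, 0.7 * np.exp(1.0j)]:
    lam_min = np.linalg.eigvalsh(G_of_psi(psi, H=8)).min()
    ok = ok and (lam_min >= 0.25 - 1e-6)
print(ok)
```

G(ψ) is Hermitian by construction, so `eigvalsh` applies; the 1/4 constant is approached as |ψ| → 1.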

#### 8.2.2 Proof of Lemma 8.3

###### Proof of Lemma 8.3.

The following definitions help us express the matrix G(ψ) in a more convenient form. For any complex number ψ such that |ψ| < 1 and any h ∈ [H], define

 S_ψ(h) = ∑_{i=1}^h |ψ|^{2(i−1)} = (1 − |ψ|^{2h})/(1 − |ψ|²).

With the above definition, the entries of G(ψ) can be expressed in the following manner:

 [G(ψ)]_{ij} = S_ψ(H − |i − j|) · (ψ†)^{j−i} if j ≥ i,
 [G(ψ)]_{ij} = ψ^{i−j} · S_ψ(H − |i − j|) if i ≥ j.
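These entrywise expressions can be cross-checked against the defining double sum numerically; the values of ψ and H below are arbitrary test choices:

```python
import numpy as np

# Cross-check the closed-form entries of G(psi) against its defining
# double sum, with S_psi(h) = (1 - |psi|^{2h}) / (1 - |psi|^2).
psi, H = 0.6 * np.exp(0.5j), 6

# Double-sum definition; [T_m]_{ij} = 1 iff j - i = m.
G = np.zeros((H, H), dtype=complex)
for k1 in range(1, H + 1):
    for k2 in range(1, H + 1):
        G += np.eye(H, k=k1 - k2) * np.conj(psi) ** (k1 - 1) * psi ** (k2 - 1)

def S(h):
    a = abs(psi) ** 2
    return (1 - a ** h) / (1 - a)

# Entrywise formula from the text.
G2 = np.empty((H, H), dtype=complex)
for i in range(H):
    for j in range(H):
        G2[i, j] = (S(H - abs(i - j)) * np.conj(psi) ** (j - i) if j >= i
                    else psi ** (i - j) * S(H - abs(i - j)))
print(np.allclose(G, G2))
```

The factor S_ψ(H − |i − j|) counts the |ψ|² geometric weights of the index pairs (k₁, k₂) on the (j − i)-th diagonal, truncated by the finite horizon H.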

We analytically compute the inverse of the matrix G(ψ) below and bound its spectral norm.

###### Claim 8.4.

The inverse of G(ψ) has the following form.

 [G(ψ)]−