# Dynamic Online Gradient Descent with Improved Query Complexity: A Theoretical Revisit

We provide a new theoretical analysis framework to investigate online gradient descent in the dynamic environment. Compared with previous work, the new framework recovers the state-of-the-art dynamic regret but does not require extra gradient queries at every iteration. Specifically, when the functions are α-strongly convex and β-smooth, previous work requires O(κ) gradient queries per iteration, with κ = β/α, to achieve the state-of-the-art dynamic regret. Our framework shows that the query complexity can be improved to O(1), which does not depend on κ. The improvement is significant for ill-conditioned problems, because their objective functions usually have a large κ.




## 1 Introduction

Online Gradient Descent (OGD) has drawn much attention in the community of machine learning

Zhu and Xu (2015); Hazan and Seshadhri (2007); Hall and Willett (2015); Shalev-Shwartz (2012); Garber (2018); Bedi et al. (2018). It is widely used in various applications such as online recommendation Song et al. (2008) and search ranking Moon et al. (2010). Generally, OGD is formulated as a game between a learner and an adversary. At the $t$-th round of the game, the learner submits $x_t$ from the feasible set $\mathcal{X}$, and the adversary selects a function $f_t$. Then, the function $f_t$ is returned to the learner, who incurs the loss $f_t(x_t)$.

Recently, there has been a surge of interest in analyzing OGD by using the dynamic regret Zinkevich (2003); Mokhtari et al. (2016); Yang et al. (2016); Lei et al. (2017). The dynamic regret is usually defined as

$$R^*_T = \sum_{t=1}^{T} f_t(x_t) - \sum_{t=1}^{T} f_t(x^*_t), \qquad (1)$$

where $x^*_t \in \operatorname{Argmin}_{x\in\mathcal{X}} f_t(x)$. Unfortunately, it is well known that a sublinear dynamic regret bound cannot be achieved in the worst case Zinkevich (2003), because the functions may change arbitrarily in the dynamic environment. It is, however, possible to upper bound the dynamic regret in terms of certain regularities of the comparator sequence. Those regularities are usually defined as the path length Mokhtari et al. (2016); Yang et al. (2016):

$$P^*_T := P(x^*_1, \ldots, x^*_T) = \sum_{t=2}^{T} \|x^*_t - x^*_{t-1}\|,$$

or squared path length Zhang et al. (2017):

$$S^*_T := S(x^*_1, \ldots, x^*_T) = \sum_{t=2}^{T} \|x^*_t - x^*_{t-1}\|^2.$$

They capture the cumulative Euclidean norm, or squared Euclidean norm, of the differences between successive comparators. When all the functions are α-strongly convex and β-smooth, the dynamic regret is bounded by $O(P^*_T)$ Mokhtari et al. (2016). When the local variations are small, $S^*_T$ is much smaller than $P^*_T$. Thus, the state-of-the-art dynamic regret of OGD is improved to $O(\min\{P^*_T, S^*_T\})$ Zhang et al. (2017).
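For concreteness (an illustration of ours, not part of the original analysis), the two regularities can be computed from a comparator sequence as follows; the slowly drifting sequence `xs` is hypothetical:

```python
import numpy as np

def path_length(comparators):
    """P*_T: sum of Euclidean distances between successive comparators."""
    diffs = np.diff(np.asarray(comparators, dtype=float), axis=0)
    return float(np.linalg.norm(diffs, axis=1).sum())

def squared_path_length(comparators):
    """S*_T: sum of squared Euclidean distances between successive comparators."""
    diffs = np.diff(np.asarray(comparators, dtype=float), axis=0)
    return float((np.linalg.norm(diffs, axis=1) ** 2).sum())

# A slowly drifting comparator sequence: small per-step motion makes
# S*_T much smaller than P*_T, which is exactly when the
# O(min{P*_T, S*_T}) bound is the more favorable one.
xs = [np.array([t * 0.01, 0.0]) for t in range(101)]  # 100 steps of length 0.01
print(path_length(xs))          # 1.0  (100 * 0.01)
print(squared_path_length(xs))  # 0.01 (100 * 0.0001)
```

With a per-step drift of 0.01, the squared path length is two orders of magnitude smaller than the path length, illustrating why small local variations favor the $S^*_T$-based bound.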

But, to achieve the state-of-the-art dynamic regret, i.e., $O(\min\{P^*_T, S^*_T\})$, the variant of OGD in Zhang et al. (2017) has to query $O(\kappa)$ gradients at every iteration. Here, $\kappa = \beta/\alpha$ represents the condition number of the β-smooth and α-strongly convex objective function $f_t$. For a large $\kappa$, the extremely large query complexity makes the method impractical in the online setting. In this paper, we investigate the basic online gradient descent and provide a new theoretical analysis framework. Using the new framework, we show that the $O(\min\{P^*_T, S^*_T\})$ dynamic regret can be achieved with $O(1)$, instead of the $O(\kappa)$ gradient queries required in Zhang et al. (2017). The main theoretical results are briefly outlined in Table 1.

The improvement of the query complexity is vitally important for ill-conditioned problems ('ill-conditioned' may be called 'ill-posed' or 'badly posed' in some literature) Tarantola (2004); Hansen et al. (2006); Marroquin et al. (1987), whose objective function usually has a large condition number $\kappa$. Let us take the image deblurring problem as an example Hansen et al. (2006). Suppose we have a blurred image $b$, which is modeled by an unknown real image $x$ and a blurring matrix $A$, that is, $b = Ax$. Here, $A$ is usually a non-singular matrix with a large condition number. We want to recover the real image $x$ from the blurred image $b$, that is, solve $\min_x \|Ax - b\|^2$. Compared with the method in Zhang et al. (2017), our new analysis framework shows that OGD is good enough, and the required number of gradient queries can be reduced by multiple orders of magnitude.
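As a small numerical sketch of this point (our own; the tridiagonal averaging kernel is made up for illustration), one can measure the condition number of a simple 1D blurring operator:

```python
import numpy as np

# A hypothetical 1D blurring operator A: each blurred pixel is a local
# average of its neighbors (kernel [0.25, 0.5, 0.25]). Such averaging
# operators are near-singular, so kappa(A) grows quickly with the size.
def blur_matrix(n):
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 0.5
        if i > 0:
            A[i, i - 1] = 0.25
        if i < n - 1:
            A[i, i + 1] = 0.25
    return A

A = blur_matrix(50)
kappa = np.linalg.cond(A)
print(round(kappa))  # already on the order of 10^3 at n = 50

# Recovering x from b = A x amounts to minimizing ||A x - b||^2, whose
# condition number is kappa(A)^2 -- even worse for gradient methods.
```

Even this tiny $50 \times 50$ operator has $\kappa$ in the thousands; realistic blurring matrices are far larger, which is why an $O(\kappa)$ per-round query cost is prohibitive.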

The paper is organized as follows. Section 2 reviews the related work. Section 3 presents the preliminaries. Section 4 presents our theoretical analysis framework. Section 5 presents the improved bounds of regret and query complexity for the strongly convex case. Section 6 concludes the paper.

## 2 Related work

### 2.1 Regrets of OGD in the static environment.

Online gradient descent in the static environment has been extensively investigated over the last ten years. Sublinear static regrets for smooth or strongly convex functions have been obtained in many works Shalev-Shwartz (2012); Hazan (2016); Duchi et al. (2011); Zinkevich (2003). Specifically, when $f_t$ is strongly convex, the regret of online gradient descent is $O(\log T)$ Hazan (2016). When $f_t$ is convex but not strongly convex, the regret of online gradient descent is $O(\sqrt{T})$ Hazan (2016).

### 2.2 Regrets of OGD in the dynamic environment.

When all the functions are strongly convex and smooth, the dynamic regret of OGD is $O(P^*_T)$ Mokhtari et al. (2016); Yang et al. (2016). If OGD queries $O(\kappa)$ gradients at every iteration, the dynamic regret can be improved to $O(\min\{P^*_T, S^*_T\})$ Zhang et al. (2017). But our analysis framework shows that $O(1)$ gradient queries per iteration are enough to obtain the $O(\min\{P^*_T, S^*_T\})$ dynamic regret. Additionally, there are some other regularities, including the functional variation Zhu and Xu (2015); Besbes et al. (2015) and the gradient variation Chiang et al. (2012). Those regularities measure different aspects of the variation in the dynamic environment. Since they are not directly comparable, some researchers consider bounding the dynamic regret by a mixed regularity Jadbabaie et al. (2015). Extending our theoretical framework to different regularities is an interesting avenue for future work.

Besides, the newly proposed theoretical analysis framework is inspired by Joulani et al. (2017), which provides a theoretical analysis framework for the static environment; our framework, in contrast, works in the dynamic environment.

## 3 Preliminaries

### 3.1 Notations and assumptions

We use the following notation.

• The bold lower-case letters, e.g., $\mathbf{x}$, represent vectors. The normal letters, e.g., $T$, represent scalars.

• $\eta_t$ represents the learning rate of Algorithm 1 at the $t$-th iteration, and $\eta_{\min} := \min_{1\le t\le T}\eta_t$.

• The condition number is defined by $\kappa = \beta/\alpha$ for any β-smooth and α-strongly convex function $f$.

• $\|\cdot\|$ represents the $\ell_2$ norm of a vector.

• $\Pi_{\mathcal{X}}(\cdot)$ represents the projection onto a set $\mathcal{X}$.

• $\mathcal{X}^*_t$ represents the minimizer set at the $t$-th iteration.

• The Bregman divergence is defined by $B_f(x, y) = f(x) - f(y) - \langle \nabla f(y), x - y\rangle$ for any function $f$.

In the paper, functions are assumed to be convex and smooth (defined as follows).

###### Definition 1 (β smoothness).

A function $f$ is β-smooth if, for any $x$ and $y$, we have $f(x) \le f(y) + \langle \nabla f(y), x - y\rangle + \frac{\beta}{2}\|x - y\|^2$.

If the function $f$ is β-smooth, then according to the definition of the Bregman divergence, $B_f(x, y) \le \frac{\beta}{2}\|x - y\|^2$ holds for any $x$ and $y$. The other assumptions used in the paper are presented as follows.
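This inequality can be sanity-checked numerically; the following snippet (our own illustration, using a toy quadratic whose Hessian $H$ we make up) verifies $B_f(x, y) \le \frac{\beta}{2}\|x - y\|^2$ at random points:

```python
import numpy as np

# Illustrative check: for a beta-smooth f, the Bregman divergence
# satisfies 0 <= B_f(x, y) <= (beta/2) * ||x - y||^2.
H = np.array([[2.0, 0.5], [0.5, 1.0]])      # Hessian of a toy quadratic
f = lambda x: 0.5 * x @ H @ x
grad = lambda x: H @ x
beta = np.linalg.eigvalsh(H).max()          # smoothness constant = lambda_max(H)

def bregman(f, grad, x, y):
    """B_f(x, y) = f(x) - f(y) - <grad f(y), x - y>."""
    return f(x) - f(y) - grad(y) @ (x - y)

rng = np.random.default_rng(0)
for _ in range(100):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    assert bregman(f, grad, x, y) <= beta / 2 * np.dot(x - y, x - y) + 1e-12
print("smoothness bound holds")
```

For a quadratic, $B_f(x, y) = \frac{1}{2}(x-y)^\top H (x-y)$ exactly, so the bound holds with $\beta = \lambda_{\max}(H)$.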

###### Assumption 1 (α strong convexity).

For any $t$, the function $f_t$ is α-strongly convex. That is, for any $x$ and $y$, $f_t(x) \ge f_t(y) + \langle \nabla f_t(y), x - y\rangle + \frac{\alpha}{2}\|x - y\|^2$.

###### Assumption 2 (Boundedness of gradients).

We assume $\|\nabla f_t(x)\| \le G$ for any $x \in \mathcal{X}$ and any $t$.

###### Assumption 3 (Boundedness of the domain of x).

We assume $\|x - y\| \le D$ for any $x, y \in \mathcal{X}$.


The above assumptions, i.e., Assumptions 1-3, are basic assumptions widely used in previous research Shalev-Shwartz (2012); Hazan (2016); Duchi et al. (2011); Zinkevich (2003). Additionally, we make the following assumption, which allows the environment to change within a range. It is a mild assumption for many tasks such as time-series prediction Kuznetsov and Mohri (2016); Anava et al. (2013), traffic forecasting Buch et al. (2011), time-varying medical image analysis Wang et al. (2008); Lee and Shen (2009), and online recommendation Chang et al. (2017).

###### Assumption 4 (Boundedness of variations in the dynamic environment.).

For any $t$ and any $1 \le i \le t$, when $\|x^*_{t+1} - x^*_t\| \neq 0$, there exists a constant $V > 0$ such that $\|x^*_{t+1-i} - x^*_{t-i}\| \le V \|x^*_{t+1} - x^*_t\|$.

### 3.2 Algorithm

Recall the OGD algorithm. At the $t$-th iteration, it submits $x_t$ and receives the loss function $f_t$. Querying the gradient $\nabla f_t(x_t)$, it updates $x_t$ by the projected gradient descent step. The details are presented in Algorithm 1.
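Since Algorithm 1 is not reproduced in this excerpt, the following sketch (our own, assuming a Euclidean-ball feasible set and hypothetical drifting quadratic losses) shows its one-gradient-query-per-round structure:

```python
import numpy as np

def project_ball(x, radius=1.0):
    # Euclidean projection onto {x : ||x|| <= radius}; a stand-in for Pi_X.
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def ogd(grad_fns, x1, eta, radius=1.0):
    """Basic OGD as we read Algorithm 1: at round t, submit x_t, query ONE
    gradient of f_t at x_t, then take a projected gradient step."""
    x = np.asarray(x1, dtype=float)
    iterates = [x]
    for grad in grad_fns:
        x = project_ball(x - eta * grad(x), radius)
        iterates.append(x)
    return iterates

# Drifting strongly convex losses f_t(x) = ||x - c_t||^2, with the
# minimizer c_t moving slowly along a circle (a made-up environment).
T = 200
centers = [np.array([0.5 * np.cos(0.02 * t), 0.5 * np.sin(0.02 * t)]) for t in range(T)]
grads = [lambda x, c=c: 2.0 * (x - c) for c in centers]
xs = ogd(grads, x1=np.zeros(2), eta=0.3)

# The iterate tracks the moving minimizer c_t up to a small lag.
print(np.linalg.norm(xs[-1] - centers[-1]) < 0.1)  # True
```

Because each loss moves only slightly per round, a single projected gradient step per round keeps the iterate within a small constant distance of the current minimizer.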

Compared with the state-of-the-art method, i.e., Algorithm 2 (OMGD), OGD requires only one gradient query per iteration, while Algorithm 2 requires $O(\kappa)$ gradient queries. When $\kappa$ is large, the query complexity of Algorithm 2 is much higher than that of OGD. Our new theoretical analysis framework shows that OGD is good enough to recover the state-of-the-art dynamic regret yielded by OMGD, with only $O(1)$ gradient queries instead of the $O(\kappa)$ queries required by OMGD.
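To make the query-complexity gap concrete, here is a sketch (our own reading of the inner loop of OMGD in Zhang et al. (2017); the exact step sizes and stopping rule there may differ) of one OMGD round, which spends on the order of $\kappa$ gradient queries where OGD spends one:

```python
import numpy as np

def omgd_round(grad, x, eta, K):
    """One round of OMGD as we understand it: K inner gradient-descent
    steps on the SAME loss f_t, hence K gradient queries per round
    (K is chosen on the order of the condition number kappa)."""
    queries = 0
    for _ in range(K):
        x = x - eta * grad(x)
        queries += 1
    return x, queries

# Toy strongly convex quadratic with condition number kappa = beta/alpha.
alpha, beta = 0.01, 10.0
kappa = beta / alpha          # 1000: only mildly ill-conditioned, yet costly
grad = lambda x: np.array([beta * x[0], alpha * x[1]])

x = np.array([1.0, 1.0])
x, queries = omgd_round(grad, x, eta=1.0 / beta, K=int(np.ceil(kappa)))
print(queries)  # 1000 gradient queries for a single round -- versus 1 for OGD
```

Even at $\kappa = 10^3$, one OMGD round costs a thousand gradient evaluations; for the ill-conditioned problems discussed in the introduction, the gap is larger still.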

## 4 A new theoretical analysis framework

In this section, we first provide a modular analysis framework that does not depend on assumptions on the functions. Then, equipped with the strong convexity assumption, it yields specific results.

### 4.1 High-level thought

Our original goal is equivalent to investigating whether the basic OGD, i.e., Algorithm 1, can obtain the state-of-the-art dynamic regret, i.e., $O(\min\{P^*_T, S^*_T\})$. Using a divide-and-control strategy, we divide the dynamic regret of OGD into two parts.

1. The first part, denoted by $R^o_T$, is caused by the online setting in the dynamic environment. It does not depend on the strong convexity assumption on the functions $f_t$.

2. The second part, denoted by $R^m_T$, is due to the projected gradient descent step in Algorithm 1. It depends on assumptions on the functions $f_t$, such as convexity or strong convexity.

In this paper, our first contribution is to provide an upper bound of $R^o_T$ without the strong convexity assumption on $f_t$. Then, benefiting from the rich theoretical tools of static optimization, we successfully bound $R^m_T$ by using the strong convexity assumption on $f_t$.

### 4.2 Meta framework

Generally, the dynamic regret of OGD is bounded as follows.

###### Theorem 1.

For any learning rates $\eta_t > 0$ in Algorithm 1, the dynamic regret of OGD defined in (1) is bounded by

$$R^*_T \le R^o_T + R^m_T,$$

where

$$R^o_T := \sum_{t=1}^{T} \frac{1}{2\eta_t}\left(-\|x^*_t - x_{t+1}\|^2 + \|x^*_t - x_t\|^2\right)$$

and

$$R^m_T := \sum_{t=1}^{T} \frac{1}{\eta_t}\Big(-B_{\eta_t f_t}(x^*_t, x_t) + \eta_t\big(f_t(x_t) - f_t(x_{t+1})\big)\Big) + \sum_{t=1}^{T} \frac{\beta\eta_t - 1}{2\eta_t}\|x_{t+1} - x_t\|^2.$$

In Theorem 1, $R^o_T$ represents the regret due to the online setting, and $R^m_T$ represents the regret due to the projected gradient descent updating step in Algorithm 1.

###### Remark 1.

Note that the upper bound of $R^m_T$ depends on the strong convexity assumption on the functions $f_t$.

###### Theorem 2.

Use Assumption 4, and set the learning rates $\eta_t$ in Algorithm 1 to be non-increasing. Denote $\eta_{\min} := \min_{1\le t\le T}\eta_t$, and let $\rho$ be the contraction factor from Lemma 1. The regret due to the online setting, i.e., $R^o_T$, is bounded by

$$R^o_T \le \frac{1-\rho+2\rho V}{2\eta_{\min}(1-\rho)} S^*_T + \frac{1}{2\eta_1}\|x^*_1 - x_1\|^2 + \frac{1}{2}\sum_{t=1}^{T-1}\left(\frac{1}{\eta_{t+1}} - \frac{1}{\eta_t}\right)\|x^*_{t+1} - x_{t+1}\|^2.$$
###### Remark 2.

Note that this upper bound of $R^o_T$ does not depend on the strong convexity assumption on $f_t$. It still holds for merely convex functions $f_t$.

###### Lemma 1 (Appeared in Proposition 2 in Mokhtari et al. (2016)).

Use Assumption 1. Let $\eta \le \frac{1}{\beta}$ and $x^+ = \Pi_{\mathcal{X}}(x - \eta\nabla f(x))$. Denote $x^* = \operatorname{argmin}_{x\in\mathcal{X}} f(x)$. If $f$ is α-strongly convex and β-smooth, we have $\|x^+ - x^*\| \le \rho\,\|x - x^*\|$ for some $\rho < 1$.

According to Lemma 1, when the $f_t$'s are strongly convex, $\rho < 1$ (see Lemma 1). When the $f_t$'s are just convex, the step is merely non-expansive (that is, $\rho = 1$). Recall that $R^m_T$ depends on the strong convexity assumption on the $f_t$'s. Equipped with Lemma 1, we find that as long as $R^m_T$ is further bounded, we are able to provide an upper bound for the dynamic regret.
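Lemma 1's contraction can be illustrated numerically; the snippet below (our own, with a made-up diagonal quadratic where $\alpha = 1$ and $\beta = 4$, and an unconstrained domain so the projection is the identity) checks that one gradient step with $\eta = 1/\beta$ contracts the distance to the minimizer:

```python
import numpy as np

# Numerical illustration of the contraction in Lemma 1: one (projected)
# gradient step on a strongly convex, smooth f shrinks the distance to
# its minimizer x* by a factor rho < 1.
H = np.diag([4.0, 1.0])                  # alpha = 1, beta = 4
xstar = np.array([0.3, -0.2])
grad = lambda x: H @ (x - xstar)

eta = 1.0 / 4.0                           # eta <= 1/beta
# Contraction factor for THIS quadratic: spectral radius of I - eta*H.
rho = max(abs(1 - eta * 4.0), abs(1 - eta * 1.0))  # = 0.75

rng = np.random.default_rng(1)
for _ in range(100):
    x = xstar + rng.standard_normal(2)
    x_plus = x - eta * grad(x)            # projection omitted: X = R^2 here
    assert np.linalg.norm(x_plus - xstar) <= rho * np.linalg.norm(x - xstar) + 1e-12
print("contraction verified")
```

For a merely convex quadratic (a zero eigenvalue in $H$), the factor degrades to $\rho = 1$, matching the discussion above.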

## 5 Improved query complexity for strongly convex ft

When all the $f_t$'s are β-smooth and α-strongly convex, the dynamic regret of our method, OGD, is upper bounded by the following theorem.

###### Theorem 3.

Use Assumptions 1, 2, 3 and 4. Setting $\eta_t = \frac{1}{2(\beta + \beta^2/\alpha)}$ in Algorithm 1, we bound the dynamic regret of OGD as

$$R^*_T \le \min\{J_1, J_2\},$$

where

$$J_1 = \frac{(1-\rho+2\rho V)\big(\beta + \frac{\beta^2}{\alpha}\big)}{1-\rho} S^*_T + \Big(\beta + \frac{\beta^2}{\alpha}\Big)\|x^*_1 - x_1\|^2 + \frac{1}{2\big(\beta + \frac{\beta^2}{\alpha}\big)}\sum_{t=1}^{T}\|\nabla f_t(x^*_t)\|^2 \lesssim S^*_T + \sum_{t=1}^{T}\|\nabla f_t(x^*_t)\|^2,$$

and

 J2= G∥∥x1−x∗1∥∥1−ρP∗T+G1−ρ≲P∗T.
###### Corollary 1.

Suppose $\nabla f_t(x^*_t) = 0$ for every $t$. According to Theorem 3, the dynamic regret of OGD is bounded by

$$R^*_T \le \min\{J_1, J_2\} \lesssim \min\{P^*_T, S^*_T\},$$

where $J_1$ and $J_2$ are defined in Theorem 3.

###### Proof.

Recall Assumption 3, and we have $\|x^*_1 - x_1\| \le D$. When $\nabla f_t(x^*_t) = 0$ for every $t$, we have $J_1 \lesssim S^*_T$. Similarly, we have $J_2 \lesssim P^*_T$. Thus, we finally obtain

$$R^*_T \le \min\{J_1, J_2\} \lesssim \min\{P^*_T, S^*_T\}.$$

This completes the proof. ∎

Recall the previous method, i.e., Algorithm 2. Its dynamic regret bound has been established in prior work, and we restate it as follows.

###### Lemma 2 (Appeared in Theorem 3 and Corollary 4 in Zhang et al. (2017).).

Use Assumptions 1, 2, and 3, and choose the learning rate in Algorithm 2 as in Zhang et al. (2017). Denote the dynamic regret of Algorithm 2 by $\tilde{R}^*_T$. Then $\tilde{R}^*_T$ is bounded by

$$\tilde{R}^*_T \le \min\{J_3, J_4\},$$

where

$$J_3 = 2G P^*_T + 2G\|x_1 - x^*_1\| \lesssim P^*_T, \qquad J_4 \lesssim S^*_T + \sum_{t=1}^{T}\|\nabla f_t(x^*_t)\|^2.$$

Furthermore, suppose $\nabla f_t(x^*_t) = 0$ for every $t$; we thus have $\tilde{R}^*_T \lesssim \min\{P^*_T, S^*_T\}$.

Compared with Lemma 2, our new result achieves the same regret bound. But OGD, i.e., Algorithm 1, requires only one gradient query per iteration, which does not depend on $\kappa$, and thus outperforms Algorithm 2 by reducing the query complexity significantly. The following remarks highlight the advantages of our analysis framework.

###### Remark 3.

Our analysis framework achieves the state-of-the-art dynamic regret presented in Zhang et al. (2017) up to a constant factor, and improves upon the dynamic regret presented in Mokhtari et al. (2016).

###### Remark 4.

Our analysis framework shows that $O(1)$ gradient queries per iteration are enough to achieve the state-of-the-art dynamic regret, while Zhang et al. (2017) requires $O(\kappa)$ gradient queries per iteration.

## 6 Conclusion

We provide a new theoretical analysis framework to analyze the regret and query complexity of OGD in the dynamic environment. Compared with previous work, our framework achieves the state-of-the-art dynamic regret and improves the required number of gradient queries per iteration to $O(1)$.

## Proof of theorems.

Proof of Theorem 1:

###### Proof.
$$R^*_T = \sum_{t=1}^{T} \frac{1}{\eta_t}\big(\eta_t f_t(x_t) - \eta_t f_t(x^*_t)\big) = \sum_{t=1}^{T} \frac{1}{\eta_t}\Big(\underbrace{\langle \eta_t\nabla f_t(x_t),\, x_{t+1} - x^*_t\rangle}_{I_1} - B_{\eta_t f_t}(x^*_t, x_t)\Big) + \sum_{t=1}^{T} \frac{1}{\eta_t}\underbrace{\langle \eta_t\nabla f_t(x_t),\, x_t - x_{t+1}\rangle}_{I_2}. \qquad (2)$$

Now, we begin to bound $I_1$. According to Lemma 4, we obtain

$$I_1 \le \frac{1}{2}\big(-\|x^*_t - x_{t+1}\|^2 + \|x^*_t - x_t\|^2 - \|x_{t+1} - x_t\|^2\big). \qquad (3)$$

After that, we begin to bound $I_2$.

$$I_2 = \langle \eta_t\nabla f_t(x_t),\, x_t - x_{t+1}\rangle = \eta_t f_t(x_t) - \eta_t f_t(x_{t+1}) + \eta_t B_{f_t}(x_{t+1}, x_t) \le \eta_t\big(f_t(x_t) - f_t(x_{t+1})\big) + \frac{\beta\eta_t}{2}\|x_{t+1} - x_t\|^2. \qquad (4)$$

The last inequality holds because all the $f_t$'s are β-smooth. Substituting (3) and (4) into (2), we complete the proof. ∎

Proof of Theorem 2:

###### Proof.

Expanding the squared norm (the law of cosines) and applying the Cauchy-Schwarz inequality, we have

$$-\|x^*_t - x_{t+1}\|^2 + \|x^*_{t+1} - x_{t+1}\|^2 \le 2\|x^*_{t+1} - x^*_t\|\,\|x_{t+1} - x^*_{t+1}\| - \|x^*_{t+1} - x^*_t\|^2. \qquad (5)$$
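Inequality (5) follows from expanding $\|x^*_t - x_{t+1}\|^2 = \|(x^*_t - x^*_{t+1}) + (x^*_{t+1} - x_{t+1})\|^2$ and applying Cauchy-Schwarz; a quick numerical check (our own) with random vectors:

```python
import numpy as np

# Numerical check of inequality (5): with a = x*_t, b = x*_{t+1}, c = x_{t+1},
#   -||a - c||^2 + ||b - c||^2 <= 2 ||b - a|| * ||c - b|| - ||b - a||^2,
# which follows from expanding ||a - c||^2 = ||(a - b) + (b - c)||^2
# and bounding the cross term by Cauchy-Schwarz.
rng = np.random.default_rng(2)
for _ in range(1000):
    a, b, c = rng.standard_normal((3, 4))
    lhs = -np.dot(a - c, a - c) + np.dot(b - c, b - c)
    rhs = 2 * np.linalg.norm(b - a) * np.linalg.norm(c - b) - np.dot(b - a, b - a)
    assert lhs <= rhs + 1e-9
print("inequality (5) verified")
```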

According to Lemma 1, if $f_t$ is convex and β-smooth, $\|x_{t+1} - x^*_t\| \le \rho\,\|x_t - x^*_t\|$ holds for some $\rho \le 1$. Specifically, $\rho < 1$ holds when $f_t$ is strongly convex, and $\rho = 1$ holds when $f_t$ is just convex. We thus have

$$2\|x^*_{t+1} - x^*_t\|\,\|x_{t+1} - x^*_{t+1}\| - \|x^*_{t+1} - x^*_t\|^2 \ge -\rho^2\|x_t - x^*_t\|^2 + \|x^*_{t+1} - x_{t+1}\|^2.$$

Let $A_t = \|x_t - x^*_t\|$ and $M_t = \|x^*_t - x^*_{t-1}\|$, and we thus have

$$2A_{t+1}M_{t+1} - M^2_{t+1} \ge A^2_{t+1} - \rho^2 A^2_t,$$

that is, $A_{t+1} - M_{t+1} \le \rho A_t$. Thus, we have

$$A_{t+1} - M_{t+1} \le \rho A_t, \quad \rho A_t - \rho M_t \le \rho^2 A_{t-1}, \quad \cdots, \quad \rho^{t-1}A_2 - \rho^{t-1}M_2 \le \rho^t A_1.$$

Summing up, we obtain

$$A_{t+1} \le \rho^t A_1 + \big(M_{t+1} + \rho M_t + \cdots + \rho^{t-1}M_2\big) = \rho^t\|x_1 - x^*_1\| + \sum_{i=2}^{t+1}\rho^{t+1-i}\|x^*_i - x^*_{i-1}\| \overset{\text{①}}{=} \sum_{i=1}^{t+1}\rho^{t+1-i}\|x^*_i - x^*_{i-1}\| = \|x^*_{t+1} - x^*_t\| + \sum_{i=1}^{t}\rho^i\|x^*_{t+1-i} - x^*_{t-i}\|. \qquad (6)$$

① holds by letting $x^*_0 := x_1$.

Substituting (6) into (5), we obtain

$$-\|x^*_t - x_{t+1}\|^2 + \|x^*_{t+1} - x_{t+1}\|^2 \le 2\|x^*_{t+1} - x^*_t\| A_{t+1} - \|x^*_{t+1} - x^*_t\|^2 \le \|x^*_{t+1} - x^*_t\|^2 + 2\|x^*_{t+1} - x^*_t\|\sum_{i=1}^{t}\rho^i\|x^*_{t+1-i} - x^*_{t-i}\|. \qquad (7)$$

Case 1. When $\|x^*_{t+1} - x^*_t\| \neq 0$, according to (7), we have

$$-\|x^*_t - x_{t+1}\|^2 + \|x^*_{t+1} - x_{t+1}\|^2 \overset{\text{①}}{\le} \|x^*_{t+1} - x^*_t\|^2 + 2V\|x^*_{t+1} - x^*_t\|^2\sum_{i=1}^{t}\rho^i \le \|x^*_{t+1} - x^*_t\|^2 + \frac{2\rho V}{1-\rho}\|x^*_{t+1} - x^*_t\|^2 = \frac{1-\rho+2\rho V}{1-\rho}\|x^*_{t+1} - x^*_t\|^2. \qquad (8)$$

① holds according to Assumption 4.

Case 2. When $\|x^*_{t+1} - x^*_t\| = 0$, we have

$$-\|x^*_t - x_{t+1}\|^2 + \|x^*_{t+1} - x_{t+1}\|^2 = 0 = \frac{1-\rho+2\rho V}{1-\rho}\|x^*_{t+1} - x^*_t\|^2. \qquad (9)$$

Combining Case 1 and Case 2, we obtain

$$-\|x^*_t - x_{t+1}\|^2 + \|x^*_{t+1} - x_{t+1}\|^2 \le \frac{1-\rho+2\rho V}{1-\rho}\|x^*_{t+1} - x^*_t\|^2. \qquad (10)$$

Thus, we obtain

$$2R^o_T = \sum_{t=1}^{T}\frac{1}{\eta_t}\big(-\|x^*_t - x_{t+1}\|^2 + \|x^*_t - x_t\|^2\big) = \sum_{t=1}^{T-1}\frac{1}{\eta_t}\big(-\|x^*_t - x_{t+1}\|^2 + \|x^*_{t+1} - x_{t+1}\|^2\big) + \sum_{t=1}^{T-1}\Big(\frac{1}{\eta_{t+1}} - \frac{1}{\eta_t}\Big)\|x^*_{t+1} - x_{t+1}\|^2 + \frac{1}{\eta_1}\|x^*_1 - x_1\|^2 - \frac{1}{\eta_T}\|x^*_T - x_{T+1}\|^2$$

$$\le \sum_{t=1}^{T-1}\frac{1}{\eta_t}\big(-\|x^*_t - x_{t+1}\|^2 + \|x^*_{t+1} - x_{t+1}\|^2\big) + \frac{1}{\eta_1}\|x^*_1 - x_1\|^2 + \sum_{t=1}^{T-1}\Big(\frac{1}{\eta_{t+1}} - \frac{1}{\eta_t}\Big)\|x^*_{t+1} - x_{t+1}\|^2$$

$$\overset{\text{①}}{\le} \frac{1-\rho+2\rho V}{\eta_{\min}(1-\rho)}S^*_T + \sum_{t=1}^{T-1}\Big(\frac{1}{\eta_{t+1}} - \frac{1}{\eta_t}\Big)\|x^*_{t+1} - x_{t+1}\|^2 + \frac{1}{\eta_1}\|x^*_1 - x_1\|^2.$$

Here, $\eta_{\min} = \min_{1\le t\le T}\eta_t$. ① holds due to (10). Dividing both sides by 2, we complete the proof. ∎

Proof of Theorem 3:

###### Proof.

When the function $f_t$ is α-strongly convex, we have

$$B_{f_t}(x^*_t, x_t) \ge \frac{\alpha}{2}\|x^*_t - x_t\|^2. \qquad (11)$$

Substituting (11) into Theorem 1, we obtain

$$R^*_T \le \sum_{t=1}^{T}\frac{1}{2\eta_t}\big(-\|x^*_t - x_{t+1}\|^2 + \|x^*_t - x_t\|^2\big) - \sum_{t=1}^{T}\frac{\alpha}{2}\|x^*_t - x_t\|^2 + \sum_{t=1}^{T}\frac{\beta\eta_t - 1}{2\eta_t}\|x_{t+1} - x_t\|^2 + \sum_{t=1}^{T}\frac{1}{\eta_t}\Big(\eta_t\big(f_t(x_t) - f_t(x_{t+1})\big)\Big)$$

$$\overset{\text{①}}{\le} \sum_{t=1}^{T}\frac{1}{2\eta_t}\big(-\|x^*_t - x_{t+1}\|^2 + \|x^*_t - x_t\|^2\big) + \sum_{t=1}^{T}\frac{\eta_t\big(\beta + \frac{1}{2\eta_t} + \frac{\beta^2}{\alpha}\big) - 1}{2\eta_t}\|x_{t+1} - x_t\|^2 + \sum_{t=1}^{T}\eta_t\|\nabla f_t(x^*_t)\|^2$$

$$\overset{\text{②}}{\le} \sum_{t=1}^{T}\frac{1}{2\eta_t}\big(-\|x^*_t - x_{t+1}\|^2 + \|x^*_t - x_t\|^2\big) + \sum_{t=1}^{T}\eta_t\|\nabla f_t(x^*_t)\|^2$$

$$\le \frac{(1-\rho+2\rho V)\big(\beta + \frac{\beta^2}{\alpha}\big)}{1-\rho}S^*_T + \Big(\beta + \frac{\beta^2}{\alpha}\Big)\|x^*_1 - x_1\|^2 + \frac{1}{2\big(\beta + \frac{\beta^2}{\alpha}\big)}\sum_{t=1}^{T}\|\nabla f_t(x^*_t)\|^2.$$

① holds due to (16) in Lemma 5. ② holds because $\eta_t \le \frac{1}{2(\beta + \beta^2/\alpha)}$ for every $t$. The last inequality holds due to Theorem 2.

Combining with Lemma 6, we finally complete the proof. ∎

## Proof of lemmas.

###### Lemma 3.

Denote $h(x) = \langle \nabla f_t(x_t), x - x_t\rangle + \frac{1}{2\eta_t}\|x - x_t\|^2$. If $x_{t+1} = \Pi_{\mathcal{X}}(x_t - \eta_t\nabla f_t(x_t))$, we have

$$x_{t+1} \in \operatorname{Argmin}_{x\in\mathcal{X}} h(x).$$
###### Proof.

Consider the following convex optimization problem

$$\min_{x\in\mathcal{X}} h(x). \qquad (12)$$

Denote the optimum set by $\mathcal{X}^*$; that is, for any $x^* \in \mathcal{X}^*$, $h(x^*) \le h(x)$ holds for all $x \in \mathcal{X}$.

According to the first-order optimality condition Boyd and Vandenberghe (2004), we have, for any $z \in \mathcal{X}$ and any $x^* \in \mathcal{X}^*$,

$$0 \le \langle \nabla h(x^*),\, z - x^*\rangle$$