# Faster Rates for Convex-Concave Games

We consider the use of no-regret algorithms to compute equilibria for particular classes of convex-concave games. While standard regret bounds would lead to convergence rates on the order of $O(T^{-1/2})$, recent work RS13,SALS15 has established $O(1/T)$ rates by taking advantage of a particular class of optimistic prediction algorithms. In this work we go further, showing that for a particular class of games one achieves an $O(1/T^2)$ rate, and we show how this applies to the Frank-Wolfe method and recovers a similar bound D15. We also show that such no-regret techniques can even achieve a linear rate, $O(\exp(-T))$, for equilibrium computation under additional curvature assumptions.

## 1 Introduction

A large number of core problems in statistics, optimization, and machine learning can be framed as the solution of a two-player zero-sum game. Linear programs, for example, can be viewed as a competition between a feasibility player, who proposes a candidate solution, and a constraint player that aims to check for feasibility violations Adler2013. Boosting freund1999adaptive can be viewed as the competition between an agent that selects hard distributions and a weak learning oracle that aims to overcome such challenges freund1996game. The hugely popular technique of Generative Adversarial Networks (GANs) goodfellow2014generative, which produces implicit generative models from unlabelled data, has been framed in terms of a repeated game, with a distribution player aiming to produce realistic samples and a discriminative player that seeks to distinguish real from fake.

While many vanilla supervised learning problems reduce to finding the minimum of an objective function over some constraint set, tasks that require the search for a saddle point—that is, a min-max solution of some convex-concave payoff function $g(x,y)$—don’t easily lend themselves to standard optimization protocols such as gradient descent, Newton’s method, etc. It is not clear, for example, whether successive iterates should even increase or decrease the payoff $g$. This issue has been noticed in the training of GANs, for example, where the standard update method is a simultaneous gradient descent procedure, and many practitioners have raised concerns about cycling.

On the other hand, what has emerged as a very popular and widely-used trick is the following: simulate a pair of online learning algorithms, each competing in the game with the objective of minimizing regret, and return the time-averaged sequence of actions taken by the players as an approximate solution. The method of applying no-regret learning strategies to find equilibria in zero-sum games was explored in freund1999adaptive, yet the idea goes back at least as far as work by blackwell1956analog and hannan1957approximation. This methodology has several major benefits, which include the following. First, this framework “decouples” the optimization into two parallel routines that have very little communication overhead. Second, the use of no-regret learning is ideal in this scenario, as most of the guarantees for such algorithms are robust to even adversarial environments. Third, one is able to bound the approximation error of the returned saddle point simply in terms of the total regret of the two players. Finally, several surprising recent results have suggested that this parallel online learning methodology leads to even stronger guarantees than what the naïve theory would tell you. In short, whereas the typical no-regret analysis would lead to an approximation error of $O(T^{-1/2})$ after $T$ iterations, the use of optimistic learning strategies CJ12 can be shown to guarantee $O(1/T)$ convergence; this technique was developed by RS13 and further expanded by SALS15.
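To make the trick concrete, here is a small self-contained sketch (our own illustration, not from the paper; the game, step size, and horizon are illustrative choices) of two multiplicative-weights learners playing a zero-sum matrix game, with the time-averaged strategies serving as the approximate equilibrium:

```python
import numpy as np

def solve_zero_sum(A, T=4000, eta=0.05):
    """Run multiplicative-weights updates for both players of the matrix
    game min_x max_y x^T A y and return the time-averaged strategies."""
    n, m = A.shape
    x, y = np.ones(n) / n, np.ones(m) / m
    x_avg, y_avg = np.zeros(n), np.zeros(m)
    for _ in range(T):
        x_avg += x / T
        y_avg += y / T
        x = x * np.exp(-eta * (A @ y))   # x-player sees losses A y
        x /= x.sum()
        y = y * np.exp(eta * (A.T @ x))  # y-player sees gains A^T x
        y /= y.sum()
    return x_avg, y_avg

# Matching pennies: the unique equilibrium is (1/2, 1/2) for both players.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x_bar, y_bar = solve_zero_sum(A)
# Exploitability (duality gap) of the averaged strategies:
gap = np.max(A.T @ x_bar) - np.min(A @ y_bar)
```

The gap is bounded by the sum of the two players' average regrets, here $O(1/\sqrt{T})$; the rest of the paper is about driving such gaps down much faster.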

In this work we go further, showing that even faster rates are achievable for some specific cases of saddle-point problems. In particular:

1. AW17 observed that the optimization method known as Frank-Wolfe is simply an instance of the above no-regret framework for solving a particular convex-concave game, leading to a rate of $O(1/T)$. In this work we further analyze the Frank-Wolfe game, and show that when the objective function and constraint set have additional structure, and both algorithms use optimistic learning procedures, then we can achieve a rate of $O(1/T^2)$. This generalizes a result of D15 who proved a similar convergence rate for Frank-Wolfe.

2. Additionally, we show that when the game payoff function is suitably curved in both inputs—i.e. it is strongly-convex-concave and smooth—then we can use no-regret dynamics to achieve a linear rate, with the error decaying as $O(\exp(-T))$. Applying our technique to the Frank-Wolfe game we are able to recover the linear rate results of LP66,DR70 and [Dunn(1979)].

A notable aspect of our work is the combination of several key algorithmic techniques. First, our Frank-Wolfe result relies on regularization using the squared gauge function, allowing the learner to need only a single linear optimization call on each round. Second, we introduce a notion of weighted regret minimization, and our rates depend on the careful selection of the weight schedule as well as a careful analysis of what has been called Optimistic FollowTheRegularizedLeader. Third, our linear convergence rate leans on a trick developed recently by L17 that generates an adaptive weighting scheme based on the norm of the observed gradients.

### 1.1 Preliminaries

We first provide some definitions that are used in this paper. Let $f$ be some function.

A vector $v$ is a subgradient of $f$ at $x$ if, for any $y$, $f(y) \ge f(x) + \langle v, y - x \rangle$.

$f$ is $L$-smooth w.r.t. a norm $\|\cdot\|$ if $f$ is everywhere differentiable and, for any $x, y$, $f(y) \le f(x) + \langle \nabla f(x), y - x \rangle + \frac{L}{2}\|y - x\|^2$. An equivalent definition of smoothness is that $f$ has Lipschitz continuous gradient, i.e., $\|\nabla f(x) - \nabla f(y)\|_* \le L \|x - y\|$.

$f$ is $\beta$-strongly convex w.r.t. a norm $\|\cdot\|$ if, for any $x, y$, $f(y) \ge f(x) + \langle \nabla f(x), y - x \rangle + \frac{\beta}{2}\|y - x\|^2$ for some constant $\beta > 0$.

For a convex function $f$, its Fenchel conjugate is $f^*(y) := \sup_x \{\langle x, y \rangle - f(x)\}$. Note that if $f$ is convex then so is its conjugate $f^*$, since it is defined as a supremum over linear functions of $y$ ([Boyd(2004)]). Moreover, when the function $f$ is strictly convex and the above supremum is attained, we have that $\nabla f^*(y) = \arg\max_x \{\langle x, y \rangle - f(x)\}$. Furthermore, the biconjugate $f^{**}$ equals $f$ if and only if $f$ is closed and convex. It is known that $f$ is $\beta$-strongly convex w.r.t. $\|\cdot\|$ if and only if $f^*$ is $\frac{1}{\beta}$-strongly smooth with respect to the dual norm $\|\cdot\|_*$ ([Kakade et al.(2009)Kakade, Shalev-shwartz, and Tewari]), assuming that $f$ is a closed and convex function.

A convex set $\mathcal{K}$ is a $\lambda$-strongly convex set w.r.t. a norm $\|\cdot\|$ if, for any $x, y \in \mathcal{K}$ and any $\gamma \in [0,1]$, the ball centered at $\gamma x + (1-\gamma)y$ with radius $\gamma(1-\gamma)\frac{\lambda}{2}\|x - y\|^2$ is included in $\mathcal{K}$. For examples of strongly-convex sets, we refer the readers to D15.

Let $\mathcal{K}$ be any closed convex set which contains the origin. Then the gauge function of $\mathcal{K}$ is $\gamma_{\mathcal{K}}(x) := \inf\{c \ge 0 : x \in c\mathcal{K}\}$. One can show that the gauge function is a convex function (e.g. [Rockafellar(1996)]). It is known that several closed convex sets can lead to the same gauge function ([Bach(2013)]). But if a closed convex set contains the origin, then its gauge function is unique and one has $\mathcal{K} = \{x : \gamma_{\mathcal{K}}(x) \le 1\}$.

Next we provide a characterization of sets based on their gauge function. [$\lambda$-Gauge set] Let $\mathcal{K}$ be a closed convex set which contains the origin. We say that $\mathcal{K}$ is $\lambda$-Gauge if its squared gauge function, $\gamma_{\mathcal{K}}^2$, is $\lambda$-strongly-convex. This property captures a wide class of constraints, among them $\ell_p$ balls, Schatten balls, and the Group ball. We refer the reader to Appendix B for more details. Curiously, all of these Gauge sets are also known to be strongly-convex. We conjecture that strong-convexity and the Gauge property are equivalent.
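As a quick numeric illustration (our own example, using the Euclidean ball of radius $r$, for which $\gamma_{\mathcal{K}}(x) = \|x\|_2 / r$), one can check the strong-convexity inequality for the squared gauge directly; here it holds with equality, since the squared gauge of a Euclidean ball is a quadratic:

```python
import numpy as np

def gauge_l2_ball(x, r=1.0):
    # Gauge of K = {x : ||x||_2 <= r}: smallest c >= 0 with x in c*K.
    return np.linalg.norm(x) / r

def sq_gauge(x, r=1.0):
    return gauge_l2_ball(x, r) ** 2

rng = np.random.default_rng(0)
r = 2.0
beta = 2.0 / r ** 2                # strong-convexity modulus of gamma_K^2
x, y = rng.normal(size=3), rng.normal(size=3)
grad_x = 2.0 * x / r ** 2          # gradient of ||x||^2 / r^2 at x
lhs = sq_gauge(y, r)
rhs = sq_gauge(x, r) + grad_x @ (y - x) + 0.5 * beta * np.sum((y - x) ** 2)
# Strong convexity requires lhs >= rhs; for this quadratic, lhs == rhs.
```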

## 2 Minimizing Regret to Solve Games

Let us now turn our attention to a now-classical trick: using sequential no-regret learning strategies to find equilibria in zero-sum games.

### 2.1 Weighted Regret Minimization

We begin by briefly defining the standard online learning setup. We imagine a learner who must make a sequence of decisions, selecting at each round $t$ a point $x_t$ that lies within a convex and compact decision space $\mathcal{X}$. After selecting $x_t$ she is charged $\ell_t(x_t)$ for her action, where $\ell_t$ is the loss function in round $t$, and she proceeds to the next round. Typically it is assumed that when the learner selects $x_t$ in round $t$, she has observed all loss functions $\ell_1, \ldots, \ell_{t-1}$ up to, but not including, time $t$. However, we will also consider learners that are prescient, i.e. that can choose $x_t$ with knowledge of the loss functions up to and including time $t$.

The standard objective for adversarial online learning is the regret, defined as the learner’s cumulative loss over the sequence, discounted by the loss of the best fixed action in hindsight. However, for the purposes of this paper we consider a generalized notion which we call the weighted regret, where every time period has an importance weight that can differ from round to round. More precisely, we assume that the learning process is characterized by a sequence of weights $\alpha_1, \ldots, \alpha_T$, where $\alpha_t > 0$ for every $t$. Now we define the weighted regret according to

$$\alpha\text{-Reg} := \sum_{t=1}^T \alpha_t \ell_t(x_t) - \min_{x \in \mathcal{X}} \sum_{t=1}^T \alpha_t \ell_t(x).$$

(Note that when we drop the $\alpha$ prefix, this implies that $\alpha_t = 1$ for all $t$.) The sequence of $\alpha_t$’s can be arbitrary, and indeed we will consider scenarios under which these weights can be selected in an online fashion, according to the observed loss sequence. The learners also observe $\alpha_t$ at the end of each round. Throughout the paper we will use $A_t$ to denote the cumulative sum $\alpha_1 + \cdots + \alpha_t$, and of particular importance will be the weighted average regret $\overline{\alpha\text{-Reg}} := \alpha\text{-Reg}/A_T$.
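In code, the weighted regret is a direct translation of the definition; the following sketch (our own, with the comparator minimum taken over a finite grid rather than all of $\mathcal{X}$) computes it for a toy one-dimensional loss sequence:

```python
import numpy as np

def weighted_regret(losses, iterates, weights, candidates):
    """Weighted regret: sum_t alpha_t * l_t(x_t) minus the best weighted
    cumulative loss over a finite set of comparator points."""
    incurred = sum(a * l(x) for a, l, x in zip(weights, losses, iterates))
    best = min(sum(a * l(c) for a, l in zip(weights, losses)) for c in candidates)
    return incurred - best

# Two quadratic losses, a learner stuck at 0, weights (1, 2).
losses = [lambda x: (x - 1.0) ** 2, lambda x: (x - 2.0) ** 2]
reg = weighted_regret(losses, iterates=[0.0, 0.0], weights=[1.0, 2.0],
                      candidates=np.linspace(0.0, 3.0, 3001))
# Best fixed comparator is x = 5/3, so reg is (1 + 2*4) - 2/3 = 25/3.
```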

### 2.2 Algorithms

In this section we present several of the classical, and a few more recent, algorithms with well-established regret guarantees. For the most part, we present these algorithms in unweighted form, without reference to the weight sequence $\{\alpha_t\}$. In later sections we specify their weighted counterparts more precisely.

One of the most well-known online learning strategies is known as FollowTheRegularizedLeader (FTRL), in which the decision point is chosen as the “best” point over the previous loss functions, with an additional regularization penalty according to some convex function $R$. Precisely, given a parameter $\eta > 0$, the learner chooses on round $t$ the point

$$x_t = \arg\min_{x \in \mathcal{K}} \Big\{ \eta \sum_{s=1}^{t-1} \ell_s(x) + R(x) \Big\}. \tag{1}$$

For convenience, let $\nabla_t$ be the gradient $\nabla \ell_t(x_t)$. If we assume that $R$ is a strongly convex function with respect to some norm $\|\cdot\|$, then a well-known regret analysis grants the following bound:

$$\text{Reg}_T \le \frac{D}{\eta} + \frac{\eta}{2} \sum_{t=1}^T \|\nabla_t\|_*^2, \tag{2}$$

where $D = \max_{x \in \mathcal{K}} R(x)$. With an appropriately-tuned $\eta$, one achieves $\text{Reg}_T = O(\sqrt{T})$, which is sublinear as long as the gradients have bounded norm. See, e.g., [Shalev-Shwartz et al.(2012), Hazan(2014), Rakhlin and Sridharan(2016)] for further details on this analysis.
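The bound (2) can also be verified numerically. Below is a sketch (our own instantiation, not code from the paper) of FTRL with regularizer $R(x) = \|x\|^2/2$ over the unit Euclidean ball on linear losses, where the argmin in (1) has a closed form: project the scaled negative gradient sum onto the ball.

```python
import numpy as np

def ftrl_l2(grads, eta):
    """FTRL with R(x) = ||x||^2 / 2 over the unit Euclidean ball on linear
    losses l_t(x) = <g_t, x>: the round-t iterate is the projection of
    -eta * (g_1 + ... + g_{t-1}) onto the ball."""
    G = np.zeros_like(grads[0])
    xs = []
    for g in grads:
        x = -eta * G
        n = np.linalg.norm(x)
        if n > 1.0:
            x = x / n
        xs.append(x)
        G = G + g
    return xs

rng = np.random.default_rng(1)
T = 200
grads = [rng.normal(size=5) for _ in range(T)]
grads = [g / np.linalg.norm(g) for g in grads]       # unit-norm gradients
D = 0.5                                              # max of R over the ball
eta = np.sqrt(2 * D / T)
xs = ftrl_l2(grads, eta)
incurred = sum(float(g @ x) for g, x in zip(grads, xs))
best = -np.linalg.norm(np.sum(grads, axis=0))        # best fixed point in hindsight
regret = incurred - best
bound = D / eta + (eta / 2) * sum(float(g @ g) for g in grads)
```

Running this, `regret` stays below `bound`, as (2) promises.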

The FollowTheLeader (FTL) strategy minimizes the objective (1), but without the regularization penalty; i.e. $x_t = \arg\min_{x \in \mathcal{K}} \sum_{s=1}^{t-1} \ell_s(x)$. Another way to formalize this is to consider $R \equiv 0$. Given that the above bound has a $\frac{D}{\eta}$ term, it is clear we can not simply apply the same analysis as for FollowTheRegularizedLeader, and indeed one can find examples where linear regret is unavoidable [Cesa-Bianchi and Lugosi(2006), Shalev-Shwartz et al.(2012)]. On the other hand, it has been shown that a strong regret guarantee is achievable even without regularization, as long as the sequence of loss functions is strongly convex. In particular, [Kakade and Shalev-Shwartz(2009)] show the following result:

[Corollary 1 from [Kakade and Shalev-Shwartz(2009)]] Let $\ell_1, \ldots, \ell_T$ be a sequence of functions such that for all $t$, $\ell_t$ is $\sigma_t$-strongly convex. Assume that the FTL algorithm runs on this sequence and for each $t$, let $v_t$ be in $\partial \ell_t(x_t)$. Then,

$$\sum_{t=1}^T \ell_t(x_t) - \min_x \sum_{t=1}^T \ell_t(x) \le \frac{1}{2} \sum_{t=1}^T \frac{\|v_t\|^2}{\sum_{\tau=1}^t \sigma_\tau}. \tag{3}$$

Furthermore, let $\sigma_t = \sigma$ for all $t$ and assume that $\|v_t\| \le G$ for all $t$. Then, the regret is bounded by $\frac{G^2}{2\sigma}(1 + \log T)$.

In the context of solving zero-sum games, the online learning framework allows for one of the two players to be prescient, so she has access to one additional loss function before selecting her $x_t$. In such a case it is much easier to achieve low regret, and we present three standard prescient algorithms: BestResponse, which plays $x_t = \arg\min_{x \in \mathcal{K}} \ell_t(x)$; BeTheLeader, which plays $x_t = \arg\min_{x \in \mathcal{K}} \sum_{s=1}^{t} \ell_s(x)$; and BeTheRegularizedLeader, which plays $x_t = \arg\min_{x \in \mathcal{K}} \{\eta \sum_{s=1}^{t} \ell_s(x) + R(x)\}$.

Indeed it is easy to show that, for the first two of these prescient strategies, one easily obtains non-positive regret kalai2005efficient. The regret of BeTheRegularizedLeader is no more than $D/\eta$. We also consider optimistic algorithms, which we discuss in Appendix A.

#### Gauge Function FTRL.

While the analysis of FollowTheRegularizedLeader is natural and leads to a simple intuitive bound (2), it requires solving a non-linear optimization problem on each round even when the loss functions are themselves linear – a very common scenario. From a computational perspective, it is often impractical to solve the FollowTheRegularizedLeader objective. Nevertheless, in many scenarios a (computationally feasible) linear optimization oracle is at hand. In such instances, much attention has been focused on perturbed versions of FollowTheLeader, where one solves the unregularized optimization problem but with a linear noise term added to the objective; there is much work analyzing these algorithms and we refer the reader to kalai2005efficient,cesa2006prediction,abernethy2014online among many others. The main downside of such randomized approaches is that they have good expected regret but suffer in variance, which makes them less suitable in various reductions.

In this work, we introduce a family of FollowTheRegularizedLeader algorithms that rely solely on a linear oracle, and we believe this is a novel approach to online linear optimization problems. The restriction we require is that the regularizer is chosen as the squared gauge function of the decision set of the learner. Here we will assume (one can reduce any arbitrary convex loss to the linear loss case, since by convexity $\ell_t(x_t) - \ell_t(x) \le \langle \nabla \ell_t(x_t), x_t - x \rangle$; see [Shalev-Shwartz et al.(2012), Hazan(2014), Rakhlin and Sridharan(2016)]) that for every $t$, $\ell_t(x) = \langle g_t, x \rangle$ for some vector $g_t$; hence the objective (1) reduces to

$$x_t = \arg\min_{x \in \mathcal{K}} \; \eta \langle L_{t-1}, x \rangle + \gamma_{\mathcal{K}}^2(x), \tag{7}$$

where $L_{t-1} := \sum_{s=1}^{t-1} g_s$. Denote $\mathrm{bndry}(\mathcal{K})$ as the boundary of the constraint set $\mathcal{K}$. We can reparameterize the above optimization by observing that any point $x \in \mathcal{K}$ can be written as $x = \rho z$ where $z \in \mathrm{bndry}(\mathcal{K})$ and $\rho \in [0,1]$. Hence we have

$$\min_{\rho \in [0,1]} \; \min_{z \in \mathrm{bndry}(\mathcal{K})} \eta \langle L_{t-1}, \rho z \rangle + \gamma_{\mathcal{K}}^2(\rho z) \;=\; \min_{\rho \in [0,1]} \Big( \min_{z \in \mathrm{bndry}(\mathcal{K})} \eta \langle L_{t-1}, z \rangle \Big) \rho + \rho^2. \tag{8}$$

We are able to remove the dependence on the gauge function since it is homogeneous, $\gamma_{\mathcal{K}}(\rho z) = \rho\,\gamma_{\mathcal{K}}(z)$, and $\gamma_{\mathcal{K}}$ is identically 1 on the boundary of $\mathcal{K}$. The inner minimization reduces to the linear optimization $z_t = \arg\min_{z \in \mathcal{K}} \langle L_{t-1}, z \rangle$, and, writing $c := \eta \langle L_{t-1}, z_t \rangle$, the optimal $\rho$ is obtained by minimizing $c\rho + \rho^2$ over $[0,1]$, namely $\rho^* = \min\{1, \max\{0, -c/2\}\}$.
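The resulting update is only a few lines of code; the following sketch (our own, checked against the unit Euclidean ball, where the gauge-regularized objective can also be minimized directly) performs one gauge-regularized FTRL step with a single linear-oracle call:

```python
import numpy as np

def gauge_ftrl_step(L_prev, eta, lin_oracle):
    """One FTRL step with the squared-gauge regularizer, via (8):
    write x = rho * z with z on the boundary of K and rho in [0, 1]."""
    z = lin_oracle(L_prev)              # z_t = argmin_{z in K} <L_prev, z>
    c = eta * float(L_prev @ z)         # value of the inner linear problem
    rho = min(max(-c / 2.0, 0.0), 1.0)  # argmin_{rho in [0,1]} c*rho + rho^2
    return rho * z

# Sanity check on the unit Euclidean ball: the linear oracle is -L/||L||,
# the squared gauge is ||x||^2, and the direct minimizer of
# eta*<L, x> + ||x||^2 over the ball is the projection of -eta*L/2.
def l2_ball_oracle(L):
    return -L / np.linalg.norm(L)

L_prev = np.array([0.3, -0.4])
x = gauge_ftrl_step(L_prev, eta=1.0, lin_oracle=l2_ball_oracle)
# Here ||L_prev|| = 0.5, so x should equal -L_prev / 2.
```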

### 2.3 Solving zero-sum convex-concave games

Let us now apply the tools described above to the problem of solving a particular class of zero-sum games; these are often referred to as convex-concave saddle point problems. Assume we have convex and compact sets $\mathcal{X}$, $\mathcal{Y}$, known as the action spaces for the two players. We are given a convex-concave payoff function $g : \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$; that is, $g(\cdot, y)$ is convex in its first argument for every fixed $y$, and $g(x, \cdot)$ is concave in its second argument for every fixed $x$. We say that a pair $(\hat{x}, \hat{y})$ is an $\epsilon$-equilibrium for $g$ if $\sup_{y \in \mathcal{Y}} g(\hat{x}, y) - \inf_{x \in \mathcal{X}} g(x, \hat{y}) \le \epsilon$.

The celebrated minimax theorem, first proven by von Neumann for a simple class of biaffine payoff functions v1928theorie,neumann1944theory and generalized by sion1958general and others, states that there exist 0-equilibria for convex-concave games under reasonably weak conditions. Another way to state this is that $\inf_{x \in \mathcal{X}} \sup_{y \in \mathcal{Y}} g(x, y) = \sup_{y \in \mathcal{Y}} \inf_{x \in \mathcal{X}} g(x, y)$, and we tend to call this quantity $V^*$, the value of the game $g$.

The method of computing an $\epsilon$-equilibrium using a pair of no-regret algorithms is reasonably straightforward, although here we will emphasize the use of weighted regret, which has been much less common in the literature. Algorithm 1 describes a basic template used throughout the paper.

Assume that a convex-concave game payoff $g$ and a $T$-length sequence $\alpha_1, \ldots, \alpha_T$ are given. Assume that we run Algorithm 1 using no-regret procedures for the $x$- and $y$-players, and the $\alpha$-weighted average regret of each is $\epsilon_x$ and $\epsilon_y$, respectively. Then the output $(\bar{x}_\alpha, \bar{y}_\alpha)$ is an $\epsilon$-equilibrium for $g$, with $\epsilon = \epsilon_x + \epsilon_y$. The theorem can be restated in terms of $V^*$, where we get the following “$V^*$-sandwich”:

$$V^* - \epsilon \;\le\; \inf_{x \in \mathcal{X}} g(x, \bar{y}_\alpha) \;\le\; V^* \;\le\; \sup_{y \in \mathcal{Y}} g(\bar{x}_\alpha, y) \;\le\; V^* + \epsilon. \tag{9}$$

But the key insight is that the regret analysis leads immediately to a convergence rate for the algorithmic template presented above. We provide the proof of this theorem in Appendix C.

### 2.4 Application: the Frank-Wolfe Algorithm

We can tie the above set of tools together with an illustrative application, describing a natural connection to the Frank-Wolfe (FW) method frank1956algorithm for constrained optimization. The ideas presented here summarize the work of AW17, but in Section 3 we significantly strengthen the result for a special case.

We have a convex set $\mathcal{K}$, an $L$-smooth convex function $f$, and some initial point $w_0 \in \mathcal{K}$. The FW algorithm makes repeated calls to a linear optimization oracle over $\mathcal{K}$, followed by a convex averaging step:

$$\text{(linear opt)} \quad v_t = \arg\min_{v \in \mathcal{K}} \langle v, \nabla f(w_{t-1}) \rangle; \qquad \text{(update)} \quad w_t = (1 - \eta_t) w_{t-1} + \eta_t v_t,$$

where the parameter $\eta_t \in [0,1]$ is a learning rate, and following the standard analysis one sets $\eta_t = \frac{2}{t+1}$. A well-known result is that $f(w_T) - \min_{w \in \mathcal{K}} f(w) = O\big(\frac{LD^2}{T}\big)$, where $D$ is the diameter of $\mathcal{K}$.
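For reference, the two-step recursion above is just a few lines of code; this sketch (our own toy instance, minimizing $\|w - b\|^2$ over the unit $\ell_1$ ball, whose linear oracle returns a signed vertex) uses the standard step size $\eta_t = 2/(t+1)$:

```python
import numpy as np

def frank_wolfe(grad, lin_oracle, w0, T):
    """Frank-Wolfe: one linear-oracle call and one convex-averaging
    step per round, with the standard step size eta_t = 2/(t+1)."""
    w = w0.copy()
    for t in range(1, T + 1):
        v = lin_oracle(grad(w))
        eta = 2.0 / (t + 1)
        w = (1 - eta) * w + eta * v
    return w

# Toy instance: f(w) = ||w - b||^2 over the unit l1 ball (b is feasible,
# so the minimizer is b itself).
b = np.array([0.6, 0.2])
grad_f = lambda w: 2.0 * (w - b)

def l1_oracle(g):
    # argmin over the l1 ball of <v, g>: a signed standard basis vector.
    v = np.zeros_like(g)
    i = int(np.argmax(np.abs(g)))
    v[i] = -np.sign(g[i])
    return v

w = frank_wolfe(grad_f, l1_oracle, np.zeros(2), T=500)
```

With $T = 500$ the final iterate sits within the $O(LD^2/T)$ suboptimality predicted by the classical analysis.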

Let us leverage Theorem 2.3 to obtain a convergence rate from a no-regret perspective. With a brief inspection, one can verify that FW is indeed a special case of Algorithm 1, assuming that (a) the game payoff is $g(x, y) = f^*(x) - \langle x, y \rangle$, where $f^*$ is the Fenchel conjugate of $f$; (b) the weight sequence is $\alpha_t = t$; (c) the $x$-player and $y$-player employ FollowTheLeader and BestResponse, respectively; we output the final iterate $\bar{y}_\alpha = w_T$. We refer to AW17 for a thorough exposition, but it is striking that this use of Algorithm 1 leads to Frank-Wolfe even up to the precise choice of learning rate $\eta_t = \frac{2}{t+1}$.

As we have reframed FW in terms of our repeated game, we can now appeal to our main theorem to obtain a convergence rate. We must first observe, using the duality of Fenchel conjugation, that

$$V^* = \sup_y \inf_x g(x, y) = -\Big( \inf_y \sup_x \; \langle x, y \rangle - f^*(x) \Big) = -\inf_y f(y). \tag{10}$$

Using (9) and the above equality, we can obtain $f(\bar{y}_\alpha) - \inf_{y \in \mathcal{K}} f(y) \le \epsilon_x + \epsilon_y$.

The convergence rate of FW thus boils down to bounding the regret of the two players. We note first that the $y$-player is prescient and employs BestResponse, hence we conclude that $\epsilon_y \le 0$. The $x$-player on the other hand will suffer the $\alpha$-weighted regret of FollowTheLeader. But notice, critically, that the choice of payoff happens to be strongly convex in $x$, as $L$-smoothness of $f$ implies $\frac{1}{L}$-strong convexity of $f^*$. We may thus use Lemma 2.2 to obtain:

$$\overline{\alpha\text{-Reg}}^x \;\overset{\text{Lemma 2.2}}{\le}\; \frac{1}{2 A_T} \sum_{t=1}^T \frac{\alpha_t^2 D^2}{\sum_{\tau=1}^t \alpha_\tau (1/L)} \;\le\; O\Big( \sum_{\tau=1}^T L D^2 \Big/ A_T \Big) \;=\; O\Big( \frac{L D^2}{T} \Big),$$

where we use the fact that the $x$-player observes an $(\alpha_t / L)$-strongly convex function, $\alpha_t \ell_t$, and that $\|v_t\|$ in Lemma 2.2 is at most $\alpha_t D$, where $D$ is the diameter of $\mathcal{K}$. We conclude by noting that the $\log T$ term, which tends to arise from the regret of online strongly convex optimization, was removed by carefully selecting the sequence of weights $\alpha_t = t$.

## 3 Fast convergence in the FW game

In this section, we introduce a new FW-like algorithm that achieves an $O(1/T^2)$ convergence rate on $\lambda$-Gauge sets accessed using a linear optimization oracle. The design and analysis are based on a reweighting scheme and Optimistic-FTRL, taking advantage of recent tools developed for fast rates in solving games CJ12,RS13,SALS15.

In Theorem 3.1 we give an instantiation of Algorithm 1 that finds an approximate saddle point for the FW game $g(x, y) = f^*(x) - \langle x, y \rangle$. In this instantiation the $x$-player plays Optimistic-FTL and the $y$-player plays BeTheRegularizedLeader. With an appropriate weighting, the weighted regret guarantees of these two algorithms imply that we can find an $\epsilon$-approximate saddle point solution of the FW game in $O(1/\sqrt{\epsilon})$ rounds. Recalling that $f(\bar{y}_\alpha) - \min_{y} f(y) \le \epsilon_x + \epsilon_y$, this immediately translates to a convergence rate of $O(1/T^2)$ for the problem $\min_{w \in \mathcal{K}} f(w)$.

The algorithm that we describe in Theorem 3.1 does not immediately yield a FW-like algorithm—in general, we may not be able to compute the $y$-player’s BeTheRegularizedLeader iterates using only a linear optimization oracle. However, if the $y$-player uses the squared gauge function of $\mathcal{K}$ as a regularizer, then the iterates are computable using a linear optimization oracle, as shown in Section 2.2. This fact immediately implies that for $\lambda$-Gauge sets, and upon choosing the squared gauge function as regularizer, Algorithm 2 instantiates a projection-free procedure which provides a convergence rate of $O(1/T^2)$ for the problem $\min_{w \in \mathcal{K}} f(w)$ (see Corollary 3.1). In Appendix G, we discuss how to get a faster rate than $O(1/T)$ for arbitrary convex sets if BeThePerturbedLeader rather than BeTheRegularizedLeader is used by the $y$-player in the FW game.

### 3.1 Solving the FW game with Optimistic-FTL and BeTheRegularizedLeader

In this section, we present our algorithm for finding $\epsilon$-saddle point solutions to the FW game. We instantiate Algorithm 1 using the FW objective $g(x, y) = f^*(x) - \langle x, y \rangle$, where we assume $f$ is $L$-smooth and $\sigma$-strongly convex. The $x$-player plays Optimistic-FTL and the $y$-player plays BeTheRegularizedLeader.

Assume that we instantiate Algorithm 1 with the FW game $g$, weight sequence $\alpha_t = t$, and the following strategies for the players. The $x$-player plays Optimistic-FTL:

$$x_t = \arg\min_{x \in \mathcal{X}} \; \sum_{s=1}^{t-1} \alpha_s \ell_s(x) + m_t(x) \quad \text{with} \quad m_t(x) = \alpha_t \ell_{t-1}(x), \tag{11}$$

where $\ell_s(x) := f^*(x) - \langle x, y_s \rangle$, and the $y$-player plays BeTheRegularizedLeader:

$$y_t = \arg\min_{y \in \mathcal{Y}} \; \sum_{s=1}^{t} \alpha_s h_s(y) + \frac{1}{\eta} R(y), \tag{12}$$

with a $\beta$-strongly-convex regularizer $R$ and $h_s(y) := -g(x_s, y)$. Then the output of Algorithm 1 is an $\epsilon$-approximate saddle point solution to the FW game, where $\epsilon = O(1/T^2)$.

Now recall that for the FW setting, we are interested in $y$-players that may only employ a linear optimization oracle. In general it is impossible to solve Equation (12) with a constant number of calls to such an oracle in each round. Nevertheless, recall that for $\lambda$-Gauge sets, choosing $R = \gamma_{\mathcal{K}}^2$ induces a $\lambda$-strongly-convex regularizer, while enabling us to solve Equation (12) with a single call to the linear oracle, as shown in Equation (8). The proof of Theorem 3.1 shows that the $x$-player’s strategy is the gradient of $f$ at the point $\bar{y}_{\alpha', 1:t-1}$, where $\alpha'$ is a weight vector such that $\alpha'_s = \alpha_s$ for $s \le t-2$ and $\alpha'_{t-1} = \alpha_{t-1} + \alpha_t$, and $\bar{y}_{\alpha', 1:t-1}$ is the $\alpha'$-weighted average of $y_1, \ldots, y_{t-1}$ (see Equation 18). This leads to Algorithm 2 and Corollary 3.1.

We get the following corollary of the above theorem. The full proof is in Appendix F.

Let $f$ be $L$-smooth and $\sigma$-strongly-convex, and assume that $\mathcal{K}$ is a $\lambda$-Gauge set. Let $R = \gamma_{\mathcal{K}}^2$. Then the output of Algorithm 2 is an $O(1/T^2)$-approximate optimal solution to the optimization problem $\min_{w \in \mathcal{K}} f(w)$. Moreover, Algorithm 2 only requires a single linear optimization oracle call in each round.

### 3.2 Proof of Theorem 3.1

[Proof of Theorem 3.1] In the FW game, we observe that the loss functions seen by the $x$-player are $\frac{1}{L}$-strongly convex, since the function $f$ is $L$-smooth, which implies that $f^*$ is $\frac{1}{L}$-strongly convex.

The $x$-player chooses $x_t$ based on Optimistic-FTL: $x_t = \arg\min_{x \in \mathcal{X}} \sum_{s=1}^{t-1} \alpha_s \ell_s(x) + m_t(x)$, where $m_t(x) = \alpha_t \ell_{t-1}(x)$. To analyze the regret of the $x$-player, let us first denote the update of the standard FollowTheLeader as

$$z_t = \arg\min_x \sum_{s=1}^{t-1} \alpha_s \ell_s(x). \tag{13}$$

Denote $x^* := \arg\min_x \sum_{t=1}^T \alpha_t \ell_t(x)$ (the following analysis actually holds for any $x^* \in \mathcal{X}$). Now we are going to analyze the $\alpha$-weighted regret of the $x$-player, which is

$$
\begin{aligned}
\alpha\text{-Reg}^x &:= \sum_{t=1}^T \alpha_t \ell_t(x_t) - \alpha_t \ell_t(x^*) \\
&= \sum_{t=1}^T \big[ \alpha_t \ell_t(x_t) - \alpha_t \ell_t(z_{t+1}) - m_t(x_t) + m_t(z_{t+1}) \big] + \sum_{t=1}^T \big[ m_t(x_t) - m_t(z_{t+1}) \big] \\
&\qquad + \sum_{t=1}^T \big[ \alpha_t \ell_t(z_{t+1}) - \alpha_t \ell_t(x^*) \big] \\
&\le \sum_{t=1}^T \langle \alpha_t \nabla \ell_t(x_t) - \nabla m_t(z_{t+1}), \, x_t - z_{t+1} \rangle + \sum_{t=1}^T \big[ m_t(x_t) - m_t(z_{t+1}) \big] \\
&\qquad + \sum_{t=1}^T \big[ \alpha_t \ell_t(z_{t+1}) - \alpha_t \ell_t(x^*) \big] - \sum_{t=1}^T \frac{\alpha_t}{L} \|x_t - z_{t+1}\|^2,
\end{aligned} \tag{14}
$$

where the last inequality uses strong convexity of $\ell_t$, so that

$$\alpha_t \ell_t(x_t) - \alpha_t \ell_t(z_{t+1}) \le \langle \alpha_t \nabla \ell_t(x_t), x_t - z_{t+1} \rangle - \frac{\alpha_t}{2L} \|x_t - z_{t+1}\|^2, \tag{15}$$

and that

$$-m_t(x_t) + m_t(z_{t+1}) \le \langle \nabla m_t(z_{t+1}), z_{t+1} - x_t \rangle - \frac{\alpha_t}{2L} \|x_t - z_{t+1}\|^2. \tag{16}$$

There are three sums in (14). Note that the second sum should be small because the update for $x_t$ “exploits” the hint $m_t$ while the update for $z_{t+1}$ does not. The third sum is the regret of BeTheLeader, which is non-positive. In Lemma D, we show that the second and third sums in (14) are in total non-positive. For the proof, please see Appendix D.

Since $m_t(x) = \alpha_t \ell_{t-1}(x)$, each term in the first sum in (14) can be bounded by

$$
\begin{aligned}
\langle \alpha_t \nabla \ell_t(x_t) - \nabla m_t(z_{t+1}), \, x_t - z_{t+1} \rangle
&= \alpha_t \langle \nabla \ell_t(x_t) - \nabla \ell_{t-1}(z_{t+1}), \, x_t - z_{t+1} \rangle \\
&= \alpha_t \langle -y_t + \nabla f^*(x_t) + y_{t-1} - \nabla f^*(z_{t+1}), \, x_t - z_{t+1} \rangle \\
&\le \alpha_t \Big( \|y_t - y_{t-1}\|_* \|x_t - z_{t+1}\| + \frac{1}{\sigma} \|x_t - z_{t+1}\|^2 \Big),
\end{aligned} \tag{17}
$$

where the last inequality uses Hölder’s inequality and the fact that $f$ is $\sigma$-strongly convex, so that $f^*$ is $\frac{1}{\sigma}$-smooth. Let us analyze $\|x_t - z_{t+1}\|$. Note that, by Fenchel conjugacy, $z_{t+1} = \nabla f(\bar{y}_{\alpha, 1:t})$, where $\bar{y}_{\alpha, 1:t}$ is the $\alpha$-weighted average of $y_1, \ldots, y_t$. For notational simplicity, let us define a new weight vector $\alpha'$, where $\alpha'_s = \alpha_s$ for $s \le t-2$ and $\alpha'_{t-1} = \alpha_{t-1} + \alpha_t$. Similarly, for $x_t$, we have

$$x_t := \arg\min_x \Big\{ \alpha_t \big( -y_{t-1}^\top x + f^*(x) \big) + \sum_{s=1}^{t-1} \alpha_s \big( -y_s^\top x + f^*(x) \big) \Big\} = \nabla f(\bar{y}_{\alpha', 1:t-1}), \tag{18}$$

where $\bar{y}_{\alpha', 1:t-1}$ is the $\alpha'$-weighted average of $y_1, \ldots, y_{t-1}$. According to (18),

$$
\begin{aligned}
\|x_t - z_{t+1}\|^2 &= \|\nabla f(\bar{y}_{\alpha, 1:t}) - \nabla f(\bar{y}_{\alpha', 1:t-1})\|^2 \le L^2 \|\bar{y}_{\alpha, 1:t} - \bar{y}_{\alpha', 1:t-1}\|^2 \\
&= \frac{L^2}{A_t^2} \Big\| \sum_{s=1}^t \alpha_s y_s - \sum_{s=1}^{t-1} \alpha'_s y_s \Big\|^2 = \frac{L^2}{A_t^2} \big\| \alpha_{t-1} y_{t-1} + \alpha_t y_t - \alpha'_{t-1} y_{t-1} \big\|^2 \\
&= \frac{L^2}{A_t^2} \|\alpha_t (y_{t-1} - y_t)\|^2 = \Big( \frac{\alpha_t L}{A_t} \Big)^2 \|y_{t-1} - y_t\|^2.
\end{aligned} \tag{19}
$$

Combining (17) and (19), we get

$$
\begin{aligned}
\langle \alpha_t \nabla \ell_t(x_t) - \nabla m_t(z_{t+1}), \, x_t - z_{t+1} \rangle &\le \alpha_t \Big( \|y_t - y_{t-1}\|_* \|x_t - z_{t+1}\| + \frac{1}{\sigma} \|x_t - z_{t+1}\|^2 \Big) \\
&\le \alpha_t \Big( \frac{\alpha_t L}{A_t} \|y_t - y_{t-1}\|^2 + \frac{1}{\sigma} \Big( \frac{\alpha_t L}{A_t} \Big)^2 \|y_t - y_{t-1}\|^2 \Big).
\end{aligned} \tag{20}
$$

Therefore, we have shown that the first sum in (14) is bounded by

$$\sum_{t=1}^T \alpha_t \Big( \frac{\alpha_t L}{A_t} \|y_t - y_{t-1}\|^2 + \frac{1}{\sigma} \Big( \frac{\alpha_t L}{A_t} \Big)^2 \|y_t - y_{t-1}\|^2 \Big). \tag{21}$$

By (14), (21), and Lemma D, we get the upper bound on the regret of the $x$-player,

$$\alpha\text{-Reg}^x \le \sum_{t=1}^T \alpha_t \Big( \frac{\alpha_t L}{A_t} \|y_t - y_{t-1}\|^2 + \frac{1}{\sigma} \Big( \frac{\alpha_t L}{A_t} \Big)^2 \|y_t - y_{t-1}\|^2 \Big). \tag{22}$$

Now let us switch to analyzing the regret of the $y$-player, defined as $\alpha\text{-Reg}^y := \max_{y \in \mathcal{Y}} \sum_{t=1}^T \alpha_t g(x_t, y) - \sum_{t=1}^T \alpha_t g(x_t, y_t)$, which equals $\sum_{t=1}^T \alpha_t h_t(y_t) - \min_{y \in \mathcal{Y}} \sum_{t=1}^T \alpha_t h_t(y)$ for $h_t(y) := -g(x_t, y)$. This means that the $y$-player actually observes a linear loss (up to a constant) in each round $t$, due to the fact that the $y$-player plays after the $x$-player plays. We can reinterpret BeTheRegularizedLeader as Optimistic-FTRL ([Syrgkanis et al.(2015)Syrgkanis, Agarwal, Luo, and Schapire]) when the learner is fully informed as to the loss function for the current round. That is, we may write the update as $y_t = \arg\min_{y \in \mathcal{Y}} \big\langle \sum_{s=1}^{t-1} \theta_s + m_t, \, y \big\rangle + \frac{1}{\eta} R(y)$, where $m_t = \theta_t$ denotes the loss vector of round $t$, and $R$ is $\beta$-strongly convex with respect to a norm $\|\cdot\|$ on $\mathcal{Y}$.

For loss vectors $\theta_1, \ldots, \theta_T$, Appendix E gives the regret of Optimistic-FTRL as

$$\sum_{t=1}^T \langle y_t - y^*, \theta_t \rangle \le \frac{R(y^*) - R(z_1) - \frac{\beta}{2} \Big( \sum_{t=1}^T \|y_t - z_t\|^2 + \sum_{t=1}^T \|y_t - z_{t+1}\|^2 \Big)}{\eta} + \sum_{t=1}^T \frac{\eta}{\beta} \|\theta_t - m_t\|_*^2, \tag{23}$$

where $z_t$ is the FTRL update, defined as $z_t = \arg\min_{y \in \mathcal{Y}} \big\langle \sum_{s=1}^{t-1} \theta_s, \, y \big\rangle + \frac{1}{\eta} R(y)$, while $y_t$ is the Optimistic-FTRL update, defined as $y_t = \arg\min_{y \in \mathcal{Y}} \big\langle \sum_{s=1}^{t-1} \theta_s + m_t, \, y \big\rangle + \frac{1}{\eta} R(y)$. We prove this bound in Appendix E.

Since in our case $m_t = \theta_t$, we get the bound on the regret of the $y$-player in the FW game,

$$\alpha\text{-Reg}^y \le \frac{R(y^*) - R(z_1) - \frac{\beta}{2} \sum_{t=1}^T \|y_t - y_{t-1}\|^2}{\eta}. \tag{24}$$

Combining (22) and (24), we get

$$
\begin{aligned}
\alpha\text{-Reg}^x + \alpha\text{-Reg}^y \le\; & \sum_{t=1}^T \alpha_t \Big( \frac{\alpha_t L}{A_t} \|y_t - y_{t-1}\|^2 + \frac{1}{\sigma} \Big( \frac{\alpha_t L}{A_t} \Big)^2 \|y_t - y_{t-1}\|^2 \Big) \\
& + \frac{R(y^*) - R(z_1) - \frac{\beta}{2} \sum_{t=1}^T \|y_t - y_{t-1}\|^2}{\eta}.
\end{aligned} \tag{25}
$$