# Accelerating Optimization via Adaptive Prediction

We present a powerful general framework for designing data-dependent optimization algorithms, building upon and unifying recent techniques in adaptive regularization, optimistic gradient predictions, and problem-dependent randomization. We first present a series of new regret guarantees that hold at any time and under very minimal assumptions, and then show how different relaxations recover existing algorithms, both basic as well as more recent sophisticated ones. Finally, we show how combining adaptivity, optimism, and problem-dependent randomization can guide the design of algorithms that benefit from more favorable guarantees than recent state-of-the-art methods.


## 1 Introduction

Online convex optimization algorithms represent key tools in modern machine learning. These are flexible algorithms used for solving a variety of optimization problems in classification, regression, ranking and probabilistic inference. These algorithms typically process one sample at a time with an update per iteration that is often computationally cheap and easy to implement. As a result, they can be substantially more efficient both in time and space than standard batch learning algorithms, which often have optimization costs that are prohibitive for very large data sets.

In the standard scenario of online convex optimization (Zinkevich, 2003), at each round $t$, the learner selects a point $x_t$ out of a compact convex set $\mathcal{K}$ and incurs loss $f_t(x_t)$, where $f_t$ is a convex function defined over $\mathcal{K}$. The learner's objective is to find an algorithm $\mathcal{A}$ that minimizes the regret with respect to a fixed point $x^*$:

$$\mathrm{Reg}_T(\mathcal{A}, x^*) = \sum_{t=1}^T f_t(x_t) - f_t(x^*),$$

that is, the difference between the learner's cumulative loss and the loss in hindsight incurred by $x^*$, or, with respect to the loss of the best $x^*$ in $\mathcal{K}$, $\mathrm{Reg}_T(\mathcal{A}) = \sup_{x^* \in \mathcal{K}} \mathrm{Reg}_T(\mathcal{A}, x^*)$. We will assume only that the learner has access to the gradient, or to an element of the sub-gradient, of the loss functions $f_t$, but that the loss functions $f_t$ can be arbitrarily singular and flat, e.g., not necessarily strongly convex or strongly smooth. This is the most general setup of convex optimization in the full-information setting. It can be applied to standard convex optimization and online learning tasks, as well as to many optimization problems in machine learning such as those of SVMs, logistic regression, and ridge regression. Favorable bounds in online convex optimization can also be translated into strong learning guarantees in the standard scenario of batch supervised learning using online-to-batch conversion guarantees (Littlestone, 1989; Cesa-Bianchi et al., 2004; Mohri et al., 2012).

In the scenario of online convex optimization just presented, minimax optimal rates can be achieved by standard algorithms such as online gradient descent (Zinkevich, 2003). However, general minimax optimal rates may be too conservative. Recently, adaptive regularization methods have been introduced for standard descent methods to achieve tighter data-dependent regret bounds (see Bartlett et al., 2007; Duchi et al., 2010; McMahan and Streeter, 2010; McMahan, 2014; Orabona et al., 2013). Specifically, in the "AdaGrad" framework of (Duchi et al., 2010), there exists a sequence of convex functions $(r_t)_{t \ge 0}$ such that the update $x_{t+1} = \operatorname{argmin}_{x \in \mathcal{K}} g_{1:t} \cdot x + r_{0:t}(x)$ yields regret:

$$\mathrm{Reg}_T(\mathcal{A}, x) \le \sqrt{2}\, \max_t \|x - x_t\|_\infty \sum_{i=1}^n \sqrt{\sum_{t=1}^T |g_{t,i}|^2},$$

where $g_t \in \partial f_t(x_t)$ is an element of the subgradient of $f_t$ at $x_t$, $g_{1:t} = \sum_{s=1}^t g_s$, and $\mathcal{B}_r(\cdot, \cdot)$ is the Bregman divergence defined using the convex function $r$. This upper bound on the regret has been shown to be within a factor $\sqrt{2}$ of the optimal a posteriori regret:

$$\sqrt{n \inf_{s \succeq 0,\, \langle \mathbf{1}, s \rangle \le n} \sum_{t=1}^T \|g_t\|^2_{\operatorname{diag}(s)^{-1}}}.$$

Note, however, that this upper bound on the regret can still be very large, even if the functions $f_t$ admit some favorable properties (e.g., $f_t$ linear). This is because the dependence is directly on the norms of the subgradients $g_t$.

An alternative line of research has been investigated by a series of recent publications that have analyzed online learning in "slowly-varying" scenarios (Hazan and Kale, 2009; Chiang et al., 2012; Rakhlin and Sridharan, 2013; Chiang et al., 2013). In the framework of (Rakhlin and Sridharan, 2013), if $R$ is a self-concordant function, $\|\cdot\|_{\nabla^2 R(x_t)}$ is the semi-norm induced by its Hessian at the point $x_t$ (the norm induced by a symmetric positive definite (SPD) matrix $A$ is defined for any $x$ by $\|x\|_A = \sqrt{x^\top A x}$), and $\tilde g_t$ is a "prediction" of a time-$t$ subgradient $g_t$ based on information up to time $t-1$, then one can obtain regret bounds of the following form:

$$\mathrm{Reg}_T(\mathcal{A}, x) \le \frac{1}{\eta} R(x) + 2\eta \sum_{t=1}^T \|g_t - \tilde g_t\|^2_{\nabla^2 R(x_t), *}.$$

Here, $\|\cdot\|_{A,*}$ denotes the dual norm of $\|\cdot\|_A$: for any $y$, $\|y\|_{A,*} = \sup_{\|x\|_A \le 1} y \cdot x$. This guarantee can be very favorable in the optimistic case where $\tilde g_t \approx g_t$ for all $t$. Nevertheless, it admits the drawback that much less control is available over the induced norm, since it is difficult to predict, for a given self-concordant function $R$, the behavior of its Hessian at the points selected by an algorithm. Moreover, there is no guarantee of "near-optimality" with respect to an optimal a posteriori regularization, as there is for the adaptive algorithm.

This paper presents a powerful general framework for designing online convex optimization algorithms combining adaptive regularization and optimistic gradient prediction which helps address several of the issues just pointed out. Our framework builds upon and unifies recent techniques in adaptive regularization, optimistic gradient predictions, and problem-dependent randomization. In Section 2, we describe a series of adaptive and optimistic algorithms for which we prove strong regret guarantees, including a new Adaptive and Optimistic Follow-the-Regularized-Leader (AO-FTRL) algorithm (Section 2.1) and a more general version of this algorithm with composite terms (Section 2.3). These new regret guarantees hold at any time and under very minimal assumptions. We also show how different relaxations recover both basic existing algorithms as well as more recent sophisticated ones. In a specific application, we will also show how a certain choice of regularization functions will produce an optimistic regret bound that is also nearly a posteriori optimal, combining the two different desirable properties mentioned above. Lastly, in Section 3, we further combine adaptivity and optimism with problem-dependent randomization to devise algorithms benefitting from more favorable guarantees than recent state-of-the-art methods.

### 2.1 AO-FTRL algorithm

In view of the discussion in the previous section, we present an adaptive and optimistic version of the Follow-the-Regularized-Leader (FTRL) family of algorithms. In each round of standard FTRL, a point is chosen that is the minimizer of the average linearized loss incurred plus a regularization term. In our new version of FTRL, we will find a minimizer of not only the average loss incurred, but also a prediction of the next round’s loss. In addition, we will define a dynamic time-varying sequence of regularization functions that can be used to optimize against this new loss term. Algorithm 1 shows the pseudocode of our Adaptive and Optimistic Follow-the-Regularized-Leader (AO-FTRL) algorithm.
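To make the update concrete, here is a minimal sketch of AO-FTRL for the simple case of quadratic proximal regularizers $r_t(x) = \frac{\sigma_t}{2}\|x - x_t\|^2$ on a box, with the martingale prediction $\tilde g_{t+1} = g_t$: the update $x_{t+1} = \operatorname{argmin}_{x \in \mathcal{K}} (g_{1:t} + \tilde g_{t+1}) \cdot x + r_{0:t}(x)$ then has a closed form followed by clipping. The function names and the constant regularizer weight are illustrative assumptions of this sketch, not the paper's pseudocode.

```python
import numpy as np

def ao_ftrl_prox(grads_fn, T, lo, hi, sigma=1.0):
    """Sketch of AO-FTRL with quadratic proximal regularizers
    r_t(x) = (sigma/2) * ||x - x_t||^2 on the box [lo, hi], and the
    martingale gradient prediction g~_{t+1} = g_t."""
    n = len(lo)
    x = np.clip(np.zeros(n), lo, hi)   # x_1
    g_sum = np.zeros(n)                # g_{1:t}
    w_sum = np.zeros(n)                # sum_s sigma * x_s (proximal centers)
    s_sum = 0.0                        # sum_s sigma
    xs = []
    for t in range(1, T + 1):
        xs.append(x.copy())
        g = grads_fn(t, x)             # (sub)gradient g_t at x_t
        g_sum += g
        w_sum += sigma * x
        s_sum += sigma
        g_pred = g                     # martingale prediction g~_{t+1} = g_t
        # x_{t+1} = argmin_{x in box} (g_{1:t} + g~_{t+1}).x + r_{0:t}(x)
        x = np.clip((w_sum - (g_sum + g_pred)) / s_sum, lo, hi)
    return np.array(xs)
```

On smooth, slowly varying losses the prediction term cancels most of the incoming gradient, so the iterates settle quickly; the point of the sketch is the shape of the update, not the tuning of $\sigma_t$.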

The following result provides a regret guarantee for the algorithm when one uses proximal regularizers, i.e. functions $r_t$ such that $\operatorname{argmin}_{x \in \mathcal{K}} r_t(x) = x_t$.

###### Theorem 1 (AO-FTRL-Prox).

Let $(r_t)_{t \ge 0}$ be a sequence of proximal non-negative functions, and let $\tilde g_t$ be the learner's estimate of $g_t$ given the history of functions $f_1, \ldots, f_{t-1}$ and points $x_1, \ldots, x_{t-1}$. Assume further that the function $h_{0:t} \colon x \mapsto (g_{1:t} + \tilde g_{t+1}) \cdot x + r_{0:t}(x)$ is 1-strongly convex with respect to some norm $\|\cdot\|_{(t)}$ (i.e. $r_{0:t}$ is 1-strongly convex with respect to $\|\cdot\|_{(t)}$). Then, the following regret bound holds for AO-FTRL (Algorithm 1):

$$\mathrm{Reg}_T(\text{AO-FTRL}, x) = \sum_{t=1}^T f_t(x_t) - f_t(x) \le r_{0:T}(x) + \sum_{t=1}^T \|g_t - \tilde g_t\|^2_{(t),*}.$$
###### Proof.

Recall that $x_{t+1} = \operatorname{argmin}_x (g_{1:t} + \tilde g_{t+1}) \cdot x + r_{0:t}(x)$, and let $y_t = \operatorname{argmin}_x g_{1:t} \cdot x + r_{0:t}(x)$. Then, by convexity, the following inequality holds:

$$\sum_{t=1}^T f_t(x_t) - f_t(x) \le \sum_{t=1}^T g_t \cdot (x_t - x) = \sum_{t=1}^T (g_t - \tilde g_t) \cdot (x_t - y_t) + \tilde g_t \cdot (x_t - y_t) + g_t \cdot (y_t - x).$$

Now, we first prove by induction on $T$ that for all $x$ the following inequality holds:

$$\sum_{t=1}^T \tilde g_t \cdot (x_t - y_t) + g_t \cdot y_t \le \sum_{t=1}^T g_t \cdot x + r_{0:T}(x).$$

For $T = 1$, the inequality follows from the definitions of $x_1$ and $y_1$. Now, suppose the inequality holds at iteration $T$. Then, we can write:

$$\begin{aligned}
\sum_{t=1}^{T+1} \tilde g_t \cdot (x_t - y_t) + g_t \cdot y_t
&= \Bigl[\sum_{t=1}^{T} \tilde g_t \cdot (x_t - y_t) + g_t \cdot y_t\Bigr] + \tilde g_{T+1} \cdot (x_{T+1} - y_{T+1}) + g_{T+1} \cdot y_{T+1} \\
&\le \Bigl[\sum_{t=1}^{T} g_t \cdot x_{T+1} + r_{0:T}(x_{T+1})\Bigr] + \tilde g_{T+1} \cdot (x_{T+1} - y_{T+1}) + g_{T+1} \cdot y_{T+1}
&& \text{(induction hypothesis for } x = x_{T+1}\text{)} \\
&\le \bigl[(g_{1:T} + \tilde g_{T+1}) \cdot x_{T+1} + r_{0:T+1}(x_{T+1})\bigr] + \tilde g_{T+1} \cdot (-y_{T+1}) + g_{T+1} \cdot y_{T+1}
&& \text{(since } r_t \ge 0 \ \forall t\text{)} \\
&\le \bigl[(g_{1:T} + \tilde g_{T+1}) \cdot y_{T+1} + r_{0:T+1}(y_{T+1})\bigr] + \tilde g_{T+1} \cdot (-y_{T+1}) + g_{T+1} \cdot y_{T+1}
&& \text{(by definition of } x_{T+1}\text{)} \\
&\le g_{1:T+1} \cdot y + r_{0:T+1}(y) \quad \text{for any } y.
&& \text{(by definition of } y_{T+1}\text{)}
\end{aligned}$$

Thus, we have $\sum_{t=1}^T \tilde g_t \cdot (x_t - y_t) + g_t \cdot (y_t - x) \le r_{0:T}(x)$, and it suffices to bound $\sum_{t=1}^T (g_t - \tilde g_t) \cdot (x_t - y_t)$. Notice that, by duality, one can immediately write $(g_t - \tilde g_t) \cdot (x_t - y_t) \le \|g_t - \tilde g_t\|_{(t),*} \|x_t - y_t\|_{(t)}$. To bound $\|x_t - y_t\|_{(t)}$ in terms of the gradient prediction error, recall first that since $r_t$ is proximal and $\operatorname{argmin}_x r_t(x) = x_t$, we have, with $h_{0:t-1}(x) = (g_{1:t-1} + \tilde g_t) \cdot x + r_{0:t-1}(x)$,

$$x_t = \operatorname{argmin}_x h_{0:t-1}(x) + r_t(x), \qquad y_t = \operatorname{argmin}_x h_{0:t-1}(x) + r_t(x) + (g_t - \tilde g_t) \cdot x.$$

The fact that $r_{0:t}$ is 1-strongly convex with respect to the norm $\|\cdot\|_{(t)}$ implies that $h_{0:t-1} + r_t$ is as well. In particular, it is 1-strongly convex at the points $x_t$ and $y_t$. But this then implies that the conjugate function $(h_{0:t-1} + r_t)^*$ is 1-strongly smooth on the image of the gradient, including at $x_t$ and $y_t$ (see Lemma 1 in the appendix or (Rockafellar, 1970) for a general reference), which means that $\|x_t - y_t\|_{(t)} \le \|g_t - \tilde g_t\|_{(t),*}$. Since $(g_t - \tilde g_t) \cdot (x_t - y_t) \le \|g_t - \tilde g_t\|_{(t),*} \|x_t - y_t\|_{(t)}$, we have that $\sum_{t=1}^T (g_t - \tilde g_t) \cdot (x_t - y_t) \le \sum_{t=1}^T \|g_t - \tilde g_t\|^2_{(t),*}$. ∎

The regret bound just presented can be vastly superior to those of the adaptive methods of (Duchi et al., 2010), (McMahan and Streeter, 2010), and others. For instance, one common choice of gradient prediction is $\tilde g_t = g_{t-1}$, so that for slowly varying gradients (e.g. nearly "flat" functions), $\|g_t - \tilde g_t\| \approx 0$ while $\|g_t\|$ itself may remain large. Moreover, for reasonable gradient predictions, $\|g_t - \tilde g_t\| \le 2 \max_t \|g_t\|$ generally, so that in the worst case, Algorithm 1's regret will be at most a factor of two more than that of standard methods. At the same time, the use of non-self-concordant regularization allows one to control the induced norm in the regret bound more explicitly, as well as to provide more efficient updates than those of (Rakhlin and Sridharan, 2013). Section 2.2.1 presents an upgraded version of online gradient descent as an example, where our choice of regularization allows our algorithm to accelerate as the gradient predictions become more accurate.

Note that the assumption of strong convexity of is not a significant constraint, as any quadratic or entropic regularizer from the standard mirror descent algorithms will satisfy this property.

Moreover, if the loss functions themselves are strongly convex, then one can exploit this curvature in place of explicit regularization and still get a favorable induced norm $\|\cdot\|_{(t)}$. If the gradients and gradient predictions are uniformly bounded, this recovers the worst-case regret bounds. At the same time, Algorithm 1 would also still retain the potentially highly favorable data-dependent and optimistic regret bound.

Steinhardt and Liang (2014) also studied adaptivity and optimism in online learning, in the context of mirror descent-type algorithms. If, in the proof above, we assume their condition:

$$r^*_{0:t+1}(-\eta g_{1:t}) \le r^*_{0:t}\bigl(-\eta (g_{1:t} - \tilde g_t)\bigr) - \eta\, x_t^\top (g_t - \tilde g_t),$$

then we obtain a comparable regret bound. Our algorithm, however, is generally easier to use, since its guarantee holds for any sequence of regularization functions and does not require verifying that condition.

In some cases, it may be preferable to use non-proximal adaptive regularization. Since non-adaptive non-proximal FTRL corresponds to dual averaging, this scenario arises, for instance, when one wishes to use regularizers such as the negative entropy to derive algorithms from the Exponentiated Gradient (EG) family (see (Shalev-Shwartz, 2012) for background). We thus present the following theorem for this family of algorithms: Adaptive Optimistic Follow-the-Regularized-Leader - General version (AO-FTRL-Gen).

###### Theorem 2 (AO-FTRL-Gen).

Let $(r_t)_{t \ge 0}$ be a sequence of non-negative functions, and let $\tilde g_t$ be the learner's estimate of $g_t$ given the history of functions $f_1, \ldots, f_{t-1}$ and points $x_1, \ldots, x_{t-1}$. Assume further that the function $h_{0:t} \colon x \mapsto (g_{1:t} + \tilde g_{t+1}) \cdot x + r_{0:t}(x)$ is 1-strongly convex with respect to some norm $\|\cdot\|_{(t)}$ (i.e. $r_{0:t}$ is 1-strongly convex with respect to $\|\cdot\|_{(t)}$). Then, the following regret bound holds for AO-FTRL (Algorithm 1):

$$\sum_{t=1}^T f_t(x_t) - f_t(x) \le r_{0:T-1}(x) + \sum_{t=1}^T \|g_t - \tilde g_t\|^2_{(t-1),*}.$$

Due to space constraints, the proof of this theorem, as well as the proofs of all further results in the remainder of Section 2, are presented in Appendix 5.

As in the case of proximal regularization, Algorithm 1 applied to general regularizers still admits the same benefits over the standard adaptive algorithms. In particular, the above algorithm is an easy upgrade over any dual averaging algorithm. Section 2.2.2 illustrates one such example for the Exponentiated Gradient algorithm.

###### Corollary 1.

With suitable choices of the parameters in Theorem 3, the following regret bounds can be recovered:

1. Adaptive FTRL-Prox of (McMahan, 2014) (up to a constant factor of 2), by setting $\tilde g_t = 0$.

3. Optimistic FTRL of (Rakhlin and Sridharan, 2013), by setting $r_{0:t} = \frac{1}{\eta} R$, where $\eta > 0$ and $R$ is a self-concordant function.

### 2.2 Applications

###### Corollary 2 (Ao-Gd).

Let $\mathcal{K} \subset \mathbb{R}^n$ be an $n$-dimensional rectangle with side lengths $R_i$, and denote $\Delta_{s,i} = \sqrt{\sum_{q=1}^s (g_{q,i} - \tilde g_{q,i})^2}$. Set

$$r_{0:t}(x) = \sum_{i=1}^n \sum_{s=1}^t \frac{\Delta_{s,i} - \Delta_{s-1,i}}{2 R_i}\, (x_i - x_{s,i})^2.$$

Then, if we use the martingale-type gradient prediction $\tilde g_t = g_{t-1}$, the following regret bound holds:

$$\mathrm{Reg}_T(\text{AO-GD}, x) \le 4 \sum_{i=1}^n R_i \sqrt{\sum_{t=1}^T (g_{t,i} - g_{t-1,i})^2}.$$

Moreover, this regret bound is nearly equal to the optimal a posteriori regret bound:

$$\sum_{i=1}^n R_i \sqrt{\sum_{t=1}^T (g_{t,i} - g_{t-1,i})^2} \le \max_i R_i \sqrt{n \inf_{s \succeq 0,\, \langle s, \mathbf{1} \rangle \le n} \sum_{t=1}^T \|g_t - g_{t-1}\|^2_{\operatorname{diag}(s)^{-1}}}.$$

Notice that the regularization function is smaller when the gradient predictions are more accurate. Thus, if we interpret our regularization as an implicit learning rate, our algorithm uses a larger learning rate and accelerates as our gradient predictions become more accurate. This is in stark contrast to other adaptive regularization methods, such as AdaGrad, where learning rates are simply inversely proportional to the norms of the gradients.

Moreover, since the regularization function decomposes over the coordinates, this acceleration can occur on a per-coordinate basis. If our gradient predictions are more accurate in some coordinates than others, then our algorithm will be able to adapt accordingly. Under the simple martingale prediction scheme, this means that our algorithm will be able to adapt well when only certain coordinates of the gradient are slowly-varying, even if the entire gradient is not.

In terms of computation, the AO-GD update can be executed in time linear in the dimension (the same as for standard gradient descent). Moreover, since the gradient prediction is simply the last gradient received, the algorithm also does not require much more storage than the standard gradient descent algorithm. However, as we mentioned in the general case, the regret bound here can be significantly more favorable than the standard bound of online gradient descent, or even its adaptive variants.
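The per-coordinate accelerating behavior described above can be sketched as follows, assuming box constraints, the prediction $\tilde g_t = g_{t-1}$, and per-coordinate regularizer weights proportional to $(\Delta_{t,i} - \Delta_{t-1,i})/R_i$ with $\Delta_{t,i} = \sqrt{\sum_{s \le t} (g_{s,i} - g_{s-1,i})^2}$. The helper names and the exact constants are assumptions of this sketch.

```python
import numpy as np

def ao_gd(grads_fn, T, lo, hi):
    """Sketch of AO-GD: adaptive + optimistic FTRL on the box [lo, hi].
    Per-coordinate weights grow only when the gradient prediction
    g~_t = g_{t-1} is wrong, so accurate predictions keep the implicit
    learning rate large."""
    n = len(lo)
    R = hi - lo                         # side lengths R_i
    x = (lo + hi) / 2.0
    g_sum = np.zeros(n)
    g_prev = np.zeros(n)
    sq_err = np.zeros(n)                # sum_s (g_{s,i} - g_{s-1,i})^2
    w_sum = np.zeros(n)                 # sum_s sigma_{s,i} * x_{s,i}
    s_sum = np.zeros(n)                 # sum_s sigma_{s,i}
    xs = []
    for t in range(T):
        xs.append(x.copy())
        g = grads_fn(t, x)
        delta_old = np.sqrt(sq_err)
        sq_err += (g - g_prev) ** 2
        sigma = (np.sqrt(sq_err) - delta_old) / R   # new per-coordinate weight
        g_sum += g
        w_sum += sigma * x
        s_sum += sigma
        g_prev = g                                   # prediction g~_{t+1} = g_t
        with np.errstate(divide="ignore", invalid="ignore"):
            x_new = (w_sum - (g_sum + g_prev)) / s_sum
        x = np.clip(np.where(s_sum > 0, x_new, x), lo, hi)
    return np.array(xs)
```

Note how, for perfectly predictable (here constant) gradients, the regularizer stops growing and the iterate sweeps to the optimal boundary point in a single step: accurate predictions translate into an effectively larger learning rate.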

###### Corollary 3 (Ao-Eg).

Let $\mathcal{K}$ be the $n$-dimensional simplex and $\varphi(x) = \sum_{i=1}^n x_i \log x_i$ the negative entropy. Assume that $\|g_t\|_\infty^2 \le C$ for all $t$, and set

$$r_{0:t}(x) = \sqrt{\frac{2\bigl(C + \sum_{s=1}^t \|g_s - \tilde g_s\|_\infty^2\bigr)}{\log(n)}}\, \bigl(\varphi(x) + \log(n)\bigr).$$

Then, if we use the martingale-type gradient prediction $\tilde g_t = g_{t-1}$, the following regret bound holds:

$$\mathrm{Reg}_T(\text{AO-EG}, x) \le 2 \sqrt{2 \log(n) \Bigl(C + \sum_{t=1}^{T-1} \|g_t - g_{t-1}\|_\infty^2\Bigr)}.$$

The above algorithm admits the same advantages over its predecessors as the AO-GD algorithm. Moreover, observe that this bound holds at any time and does not require the tuning of any learning rate. Steinhardt and Liang (2014) also introduce a similar algorithm for EG, one that could actually be more favorable if the optimal a posteriori learning rate is known in advance.
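In closed form, entropic FTRL on the simplex is a softmax over the optimistically corrected cumulative gradients, which makes an AO-EG sketch quite short. The scale schedule below mirrors the adaptive regularizer above; the function names and the handling of the constant $C$ are assumptions of this sketch.

```python
import numpy as np

def ao_eg(grads_fn, T, n, C=1.0):
    """Sketch of AO-EG on the simplex: entropic FTRL with an adaptive
    scale eta_t = sqrt(2 * (C + sum_s ||g_s - g~_s||_inf^2) / log n)
    and the martingale prediction g~_t = g_{t-1}."""
    x = np.full(n, 1.0 / n)
    g_sum = np.zeros(n)
    g_prev = np.zeros(n)
    err = 0.0
    xs = []
    for t in range(T):
        xs.append(x.copy())
        g = grads_fn(t, x)
        err += np.max(np.abs(g - g_prev)) ** 2   # prediction error, inf-norm
        g_prev = g
        g_sum += g
        eta = np.sqrt(2.0 * (C + err) / np.log(n))
        z = -(g_sum + g_prev) / eta              # optimism: add g~_{t+1} = g_t
        z -= z.max()                             # numerical stability
        x = np.exp(z) / np.exp(z).sum()          # closed-form softmax update
    return np.array(xs)
```

For slowly varying gradients the error term, and hence the scale $\eta_t$, stops growing, so no learning-rate schedule needs to be tuned by hand.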

In some cases, we may wish to impose some regularization on our original optimization problem to ensure properties such as generalization (e.g. the $\ell_2$-norm in SVM) or sparsity (e.g. the $\ell_1$-norm in Lasso). This "composite term" can be treated directly by modifying the regularization in our FTRL update. However, if we wish for the regularization penalty to appear in the regret expression but do not wish to linearize it (which could mitigate effects such as sparsity), then some extra care needs to be taken.

We modify Algorithm 1 to obtain Algorithm 2, and we provide accompanying regret bounds for both proximal and general regularization functions. In each theorem, we give a pair of regret bounds, depending on whether the learner considers the composite term as an additional part of the loss.

All proofs are provided in Appendix 5.

###### Theorem 3 (CAO-FTRL-Prox).

Let $(r_t)_{t \ge 0}$ be a sequence of proximal non-negative functions, such that $\operatorname{argmin}_x r_t(x) = x_t$, and let $\tilde g_t$ be the learner's estimate of $g_t$ given the history of functions $f_1, \ldots, f_{t-1}$ and points $x_1, \ldots, x_{t-1}$. Let $(\psi_t)_{t \ge 1}$ be a sequence of non-negative convex functions. Assume further that the function $h_{0:t} \colon x \mapsto (g_{1:t} + \tilde g_{t+1}) \cdot x + r_{0:t}(x) + \psi_{1:t}(x)$ is 1-strongly convex with respect to some norm $\|\cdot\|_{(t)}$. Then the following regret bounds hold for CAO-FTRL (Algorithm 2):

$$\sum_{t=1}^T f_t(x_t) - f_t(x) \le \psi_{1:T-1}(x) + r_{0:T-1}(x) + \sum_{t=1}^T \|g_t - \tilde g_t\|^2_{(t-1),*}$$

$$\sum_{t=1}^T [f_t(x_t) + \psi_t(x_t)] - [f_t(x) + \psi_t(x)] \le r_{0:T}(x) + \sum_{t=1}^T \|g_t - \tilde g_t\|^2_{(t),*}.$$

Notice that if we don’t consider the composite term as part of our loss, then our regret bound resembles the form of AO-FTRL-Gen. This is in spite of the fact that we are using proximal adaptive regularization. On the other hand, if the composite term is part of our loss, then our regret bound resembles the one using AO-FTRL-Prox.

###### Theorem 4 (CAO-FTRL-Gen).

Let $(r_t)_{t \ge 0}$ be a sequence of non-negative functions, and let $\tilde g_t$ be the learner's estimate of $g_t$ given the history of functions $f_1, \ldots, f_{t-1}$ and points $x_1, \ldots, x_{t-1}$. Let $(\psi_t)_{t \ge 1}$ be a sequence of non-negative convex functions. Assume further that the function $h_{0:t} \colon x \mapsto (g_{1:t} + \tilde g_{t+1}) \cdot x + r_{0:t}(x) + \psi_{1:t}(x)$ is 1-strongly convex with respect to some norm $\|\cdot\|_{(t)}$. Then, the following regret bounds hold for CAO-FTRL (Algorithm 2):

$$\sum_{t=1}^T f_t(x_t) - f_t(x) \le \psi_{1:T-1}(x) + r_{0:T-1}(x) + \sum_{t=1}^T \|g_t - \tilde g_t\|^2_{(t-1),*}$$

$$\sum_{t=1}^T [f_t(x_t) + \psi_t(x_t)] - [f_t(x) + \psi_t(x)] \le r_{0:T-1}(x) + \sum_{t=1}^T \|g_t - \tilde g_t\|^2_{(t),*}.$$

We now generalize the scenario to that of stochastic online convex optimization, where, instead of exact subgradient elements $g_t \in \partial f_t(x_t)$, we receive only estimates. Specifically, we assume access to a sequence of vectors of the form $\hat g_t$, where $\mathbb{E}[\hat g_t \mid x_1, \ldots, x_t] = g_t \in \partial f_t(x_t)$. This extension is in fact well-documented in the literature (see (Shalev-Shwartz, 2012) for a reference), and the extension of our adaptive and optimistic variant follows accordingly. For completeness, we provide the proofs of the following theorems in Appendix 8.
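The only change relative to the exact setting is that the algorithm consumes unbiased gradient estimates $\hat g_t$ with $\mathbb{E}[\hat g_t] = g_t$. A trivial oracle of this form, assuming Gaussian noise purely for illustration:

```python
import numpy as np

def noisy_gradient(g, sigma, rng):
    """Unbiased stochastic gradient oracle: returns g plus zero-mean
    Gaussian noise, so E[g_hat] = g as the stochastic setting requires."""
    return g + rng.normal(0.0, sigma, size=g.shape)
```

Any unbiased estimator works here; the noise model only affects the size of the $\|\hat g_t - \tilde g_t\|^2$ terms in the bounds.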

###### Theorem 5 (CAOS-FTRL-Prox).

Let $(r_t)_{t \ge 0}$ be a sequence of proximal non-negative functions, such that $\operatorname{argmin}_x r_t(x) = x_t$, and let $\tilde g_t$ be the learner's estimate of $\hat g_t$ given the history of noisy gradients $\hat g_1, \ldots, \hat g_{t-1}$ and points $x_1, \ldots, x_{t-1}$. Let $(\psi_t)_{t \ge 1}$ be a sequence of non-negative convex functions. Assume further that the function $h_{0:t} \colon x \mapsto (\hat g_{1:t} + \tilde g_{t+1}) \cdot x + r_{0:t}(x) + \psi_{1:t}(x)$ is 1-strongly convex with respect to some norm $\|\cdot\|_{(t)}$. Then, the update of Algorithm 3 yields the following regret bounds:

$$\mathbb{E}\left[\sum_{t=1}^T f_t(x_t) - f_t(x)\right] \le \mathbb{E}\left[\psi_{1:T-1}(x) + r_{0:T-1}(x) + \sum_{t=1}^T \|\hat g_t - \tilde g_t\|^2_{(t-1),*}\right]$$

$$\mathbb{E}\left[\sum_{t=1}^T f_t(x_t) + \psi_t(x_t) - f_t(x) - \alpha_t \psi_t(x)\right] \le \mathbb{E}\left[r_{0:T}(x) + \sum_{t=1}^T \|\hat g_t - \tilde g_t\|^2_{(t),*}\right].$$
###### Theorem 6 (CAOS-FTRL-Gen).

Let $(r_t)_{t \ge 0}$ be a sequence of non-negative functions, and let $\tilde g_t$ be the learner's estimate of $\hat g_t$ given the history of noisy gradients $\hat g_1, \ldots, \hat g_{t-1}$ and points $x_1, \ldots, x_{t-1}$. Let $(\psi_t)_{t \ge 1}$ be a sequence of non-negative convex functions. Assume furthermore that the function $h_{0:t} \colon x \mapsto (\hat g_{1:t} + \tilde g_{t+1}) \cdot x + r_{0:t}(x) + \psi_{1:t}(x)$ is 1-strongly convex with respect to some norm $\|\cdot\|_{(t)}$. Then, the update of Algorithm 3 yields the regret bounds:

$$\mathbb{E}\left[\sum_{t=1}^T f_t(x_t) - f_t(x)\right] \le \mathbb{E}\left[\psi_{1:T-1}(x) + r_{0:T-1}(x) + \sum_{t=1}^T \|\hat g_t - \tilde g_t\|^2_{(t-1),*}\right]$$

$$\mathbb{E}\left[\sum_{t=1}^T f_t(x_t) + \psi_t(x_t) - f_t(x) - \psi_t(x)\right] \le \mathbb{E}\left[r_{0:T-1}(x) + \sum_{t=1}^T \|\hat g_t - \tilde g_t\|^2_{(t-1),*}\right].$$

The algorithm above enjoys the same advantages over its non-adaptive or non-optimistic predecessors. Moreover, the choices of the adaptive regularizers and gradient predictions now also depend on the randomness of the gradients received. While masked in the above regret bounds, this interplay will come up explicitly in the following two examples, where we, as the learner, impose randomness into the problem.

### 3.2 Applications

#### 3.2.1 Randomized Coordinate Descent with Adaptive Probabilities

Randomized coordinate descent is a method that is often used for very large-scale problems where it is impossible to compute and/or store entire gradients at each step. It is also effective for directly enforcing sparsity in a solution since the support of the final point cannot be larger than the number of updates introduced.

The standard randomized coordinate descent update is to choose a coordinate uniformly at random (see e.g. (Shalev-Shwartz and Tewari, 2011)). Nesterov (2012) analyzed random coordinate descent in the context of loss functions with higher regularity and showed that one can attain better bounds by using non-uniform probabilities.

In the randomized coordinate descent framework, at each round $t$ we specify a distribution $p_t$ over the $n$ coordinates and pick a coordinate $i_t$ randomly according to this distribution. From here, we then construct an unbiased estimate of an element of the subgradient: $\hat g_t = \frac{(g_t \cdot e_{i_t})\, e_{i_t}}{p_{t,i_t}}$. This technique is common in the online learning literature, particularly in the context of the multi-armed bandit problem (see e.g. (Cesa-Bianchi and Lugosi, 2006) for more information).
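The importance-weighted estimate just described can be sketched as follows (the helper name is hypothetical); dividing by $p_{t,i_t}$ is exactly what makes the single-coordinate sample unbiased for the full subgradient.

```python
import numpy as np

def coordinate_estimate(g, p, rng):
    """Sample i ~ p and return (g_i / p_i) e_i, an unbiased
    estimate of the full (sub)gradient g."""
    i = rng.choice(len(g), p=p)
    est = np.zeros_like(g)
    est[i] = g[i] / p[i]   # importance weight 1/p_i restores unbiasedness
    return est
```

Averaging many such estimates recovers $g$, while each individual estimate touches, and therefore updates, only one coordinate.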

The following theorem can be derived by applying Theorem 5 to the gradient estimates just constructed. We provide a proof in Appendix 9.

###### Theorem 7 (Cao-Rcd).

Assume that $\mathcal{K}$ is an $n$-dimensional rectangle with side lengths $R_i$. Let $i_t$ be a random variable sampled according to the distribution $p_t$, and let

$$\hat g_t = \frac{(g_t \cdot e_{i_t})\, e_{i_t}}{p_{t,i_t}}, \qquad \hat{\tilde g}_t = \frac{(\tilde g_t \cdot e_{i_t})\, e_{i_t}}{p_{t,i_t}},$$

be the estimated gradient and estimated gradient prediction. Denote $\Delta_{s,i} = \sqrt{\sum_{q=1}^s (\hat g_{q,i} - \hat{\tilde g}_{q,i})^2}$, and let

$$r_{0:t}(x) = \sum_{i=1}^n \sum_{s=1}^t \frac{\Delta_{s,i} - \Delta_{s-1,i}}{2 R_i}\, (x_i - x_{s,i})^2$$

be the adaptive regularization. Then, the regret of the algorithm can be bounded by:

$$\mathbb{E}\left[\sum_{t=1}^T f_t(x_t) + \alpha_t \psi(x_t) - f_t(x) - \alpha_t \psi(x)\right] \le 4 \sum_{i=1}^n R_i \sqrt{\sum_{t=1}^T \mathbb{E}\left[\frac{(g_{t,i} - \tilde g_{t,i})^2}{p_{t,i}}\right]}.$$

In general, we do not have access to an element of the subgradient before we sample according to $p_t$. However, if we assume that we have some per-coordinate upper bound on an element of the subgradient that is uniform in time, i.e. $|g_{t,i}| \le L_i$ for all $t$, then we can use the fact that $\mathbb{E}\bigl[(g_{t,i} - \tilde g_{t,i})^2 / p_{t,i}\bigr] \le L_i^2 / p_{t,i}$ when $\tilde g_t = 0$ to motivate setting $\tilde g_t = 0$ and $p_{t,i} \propto (R_i L_i)^{2/3}$ (by computing the optimal distribution). This yields the following regret bound.

###### Corollary 4 (CAO-RCD-Lipschitz).

Assume that at any time $t$ the following per-coordinate Lipschitz bounds hold on the loss function: $|g_{t,i}| \le L_i$ for all $i$. Set

$$p_{t,i} = \frac{(R_i L_i)^{2/3}}{\sum_{j=1}^n (R_j L_j)^{2/3}}$$

as the probability distribution at time $t$, and set $\tilde g_t = 0$. Then, the regret of the algorithm can be bounded as follows:

$$\mathbb{E}\left[\sum_{t=1}^T f_t(x_t) + \alpha_t \psi(x_t) - f_t(x) - \alpha_t \psi(x)\right] \le 2\sqrt{T}\left(\sum_{i=1}^n (R_i L_i)^{2/3}\right)^{3/2}.$$

An application of Hölder's inequality reveals that this bound is strictly smaller than the bound one would obtain from randomized coordinate descent using the uniform distribution. Moreover, the algorithm above still enjoys the intermediate data-dependent bound of Theorem 7.

Notice the similarity between the sampling distribution generated here and the one suggested by (Nesterov, 2012). However, Nesterov assumed higher regularity in his algorithm (i.e. coordinate-wise Lipschitz-continuous gradients) and generated his probabilities from there. In our setting, we only need Lipschitz-continuous loss functions. It should be noted that (Afkanpour et al., 2013) also proposed an importance-sampling-based approach to randomized coordinate descent for the specific setting of multiple kernel learning. In their setting, they propose updating the sampling distribution at each point in time instead of using uniform-in-time Lipschitz constants, which comes with a natural computational tradeoff. Moreover, the introduction of adaptive per-coordinate learning rates in our algorithm allows for tighter regret bounds in terms of the Lipschitz constants.
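For reference, minimizing the bound's dependence $\sum_i R_i L_i / \sqrt{p_i}$ over distributions $p$ with a Lagrange multiplier gives $p_i \propto (R_i L_i)^{2/3}$, the distribution used in Corollary 4. A small sketch (function name assumed):

```python
import numpy as np

def lipschitz_sampling(R, L):
    """Distribution minimizing sum_i R_i L_i / sqrt(p_i) subject to
    sum_i p_i = 1; the Lagrangian condition gives p_i ∝ (R_i L_i)^(2/3)."""
    w = (np.asarray(R) * np.asarray(L)) ** (2.0 / 3.0)
    return w / w.sum()
```

By construction, this distribution never does worse than uniform sampling on the objective above, and it concentrates samples on coordinates with large ranges or large Lipschitz constants.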

We can also derive the analogous mini-batch update:

###### Corollary 5 (CAO-RCD-Lipschitz-Mini-Batch).

Assume that $\mathcal{K}$ is an $n$-dimensional rectangle. Let $\{\pi_1, \ldots, \pi_k\}$ be a partition of the coordinates, and assume that the following Lipschitz condition holds on the partition: $\|g_{t,\pi_j}\| \le L_j$ for all $t$ and $j$.

Define . Set