# Direct Runge-Kutta Discretization Achieves Acceleration

We study gradient-based optimization methods obtained by directly discretizing a second-order ordinary differential equation (ODE) related to the continuous limit of Nesterov's accelerated gradient. When the function is smooth enough, we show that acceleration can be achieved by a stable discretization of the ODE using standard Runge-Kutta integrators. Specifically, we prove that under Lipschitz-gradient, convexity, and order-(s+2) differentiability assumptions, the sequence of iterates generated by discretizing the proposed second-order ODE converges to the optimal solution at a rate of O(N^{-2s/(s+1)}), where s is the order of the Runge-Kutta numerical integrator. By increasing s, the convergence rate of our method approaches the optimal rate of O(N^{-2}). Furthermore, we introduce a new local flatness condition on the objective, according to which rates even faster than O(N^{-2}) can be achieved with low-order integrators and only gradient information. Notably, this flatness condition is satisfied by several standard loss functions used in machine learning, and it may be of broader independent interest. We provide numerical experiments that verify the theoretical rates predicted by our results.


## 1 Introduction

In this paper, we study accelerated first-order optimization algorithms for the problem

 \min_{x\in\mathbb{R}^d} f(x), \qquad (1)

where is convex and sufficiently smooth. A classical method for solving (1) is gradient descent (Gd), which displays a sub-optimal convergence rate of —i.e., the gap between Gd and the optimal value decreases to zero at the rate of . Nesterov’s seminal accelerated gradient method (Nesterov, 1983) matches the oracle lower bound of  (Nemirovskii et al., 1983), and is thus a central result in the theory of convex optimization. However, ever since its introduction, acceleration has remained somewhat mysterious, especially because Nesterov’s original derivation relies on elegant but unintuitive algebraic arguments. This lack of understanding has spurred a variety of recent attempts to uncover the rationale behind the phenomenon of acceleration (Allen-Zhu and Orecchia, 2014; Bubeck et al., 2015; Lessard et al., 2016; Hu and Lessard, 2017; Scieur et al., 2016; Fazlyab et al., 2017). We pursue instead an approach to Nag (and accelerated methods in general) via a continuous-time perspective. This view was recently studied by Su et al. (2014), who showed that the continuous limit of Nag is a second order ODE describing a physical system with vanishing friction; Wibisono et al. (2016) generalized this idea and proposed a class of ODEs by minimizing Bregman Lagrangians. Although these works succeed in providing a richer understanding of Nesterov’s scheme via its continuous time ODE, they fail to provide a general discretization procedure that generates provably convergent accelerated methods. In contrast, we introduce a second-order ODE that generates an accelerated first-order method for smooth functions if we simply discretize it using any Runge-Kutta numerical integrator and choose a suitable step size.

### 1.1 Summary of results

Assuming that the objective function is convex and sufficiently smooth, we establish the following:

• We propose a second-order ODE, and show that the sequence of iterates generated by discretizing it using a Runge-Kutta integrator converges to the optimal solution at the rate O(N^{-2s/(s+1)}), where s is the order of the integrator. By using a more precise numerical integrator (i.e., a larger s), this rate approaches the optimal rate O(N^{-2}).

• We introduce a new local flatness condition for the objective function (Assumption 1), under which Runge-Kutta discretization obtains convergence rates even faster than O(N^{-2}), without requiring high-order integrators. In particular, we show that if the objective is locally flat around a minimum, by using only gradient information we can obtain a convergence rate of O(N^{-ps/(s+1)}), where p quantifies the degree of local flatness. Acceleration due to local flatness may seem counterintuitive at first, but our analysis reveals why it helps.

To the best of our knowledge, this work presents the first direct discretization of an ODE that yields accelerated gradient methods (direct in the sense that we discretize the ODE with known numerical integration schemes, without resorting to reverse engineering Nag's updates). Unlike Betancourt et al. (2018), who study symplecticity and consider variational integrators, and Scieur et al. (2017), who study consistency of integrators, we focus on the order of integrators (see §2.1). We argue that the stability inherent to the ODE and order conditions on the integrators suffice to achieve acceleration.

### 1.2 Additional related work

Several works (Alvarez, 2000; Attouch et al., 2000; Bruck Jr, 1975; Attouch and Cominetti, 1996) have studied the asymptotic behavior of solutions to dissipative dynamical systems. However, these works retain a theoretical focus as they remain in the continuous time domain and do not discuss the key issue, namely, stability of discretization. Other works, such as (Krichene et al., 2015), study the counterpart of Su et al. (2014)'s work for mirror descent algorithms and achieve acceleration via Nesterov's technique. Diakonikolas and Orecchia (2017) propose a framework to analyze first-order mirror descent algorithms by studying ODEs derived from duality gaps. Also, Raginsky and Bouvrie (2012) obtain nonasymptotic rates for continuous-time mirror descent in a stochastic setting. A textbook treatment of numerical integration is given in (Hairer et al., 2006); some of our proofs build on material from Chapters 3 and 9. (Isaacson and Keller, 1994) and (West, 2004) also provide nice introductions to numerical analysis.

## 2 Problem setup and background

Throughout the paper we assume that the objective f is convex and sufficiently smooth. Our key result rests on two assumptions introduced below. The first is a local flatness condition on f around a minimum; the second requires f to have bounded higher order derivatives. These assumptions are sufficient to achieve acceleration simply by discretizing suitable ODEs, without resorting either to reverse engineering to obtain discretizations or to other more involved integration mechanisms. We will require our assumptions to hold on a suitable subset of R^d. Let x0 be the initial point of our proposed iterative algorithm. First consider the sublevel set

 S := \{x \in \mathbb{R}^d \mid f(x) - f(x^*) \le \exp(1)\big(f(x_0) - f(x^*) + \|x_0 - x^*\|^2\big) + 1\}, \qquad (2)

where x^* is a minimum of (1). Later we will show that the sequence of iterates obtained from discretizing a suitable ODE never escapes this sublevel set. Thus, the assumptions that we introduce need to hold only within a subset of R^d. Let this subset be defined as

 A := \{x \in \mathbb{R}^d \mid \exists\, x' \in S,\ \|x - x'\| \le 1\}, \qquad (3)

that is, the set of points within unit distance of the initial sublevel set (2). The choice of unit distance is arbitrary, and one can scale it to any desired constant.

###### Assumption 1.

There exists an integer p ≥ 2 and a positive constant L such that for any point x ∈ A and all indices 1 ≤ i ≤ p − 1, we have the lower bound

 f(x) - f(x^*) \ge \tfrac{1}{L}\,\big\|\nabla^{(i)} f(x)\big\|^{\frac{p}{p-i}}, \qquad (4)

where x^* minimizes f, and ∥∇^{(i)}f(x)∥ denotes the operator norm of the tensor ∇^{(i)}f(x).

Assumption 1 bounds high order derivatives by function suboptimality, so that these derivatives vanish as the suboptimality converges to zero. Thus, it quantifies the flatness of the objective around a minimum. (One could view this as an error bound condition that reverses the gradient-based upper bounds on suboptimality stipulated by the Polyak-Łojasiewicz condition (Lojasiewicz, 1965; Attouch et al., 2010).) When p = 2, Assumption 1 is slightly weaker than the usual Lipschitz-continuity of gradients (see Example 1) typically assumed in the analysis of first-order methods, including Nag. If we further know that the objective's Taylor expansion around an optimum does not have low order terms, then p would be the degree of the first nonzero term.

###### Example 1.

Let f be convex with L-Lipschitz continuous gradients, i.e., ∥∇f(x) − ∇f(y)∥ ≤ L∥x − y∥ for all x, y. Then, for any x, y we have

 f(x) \ge f(y) + \langle\nabla f(y),\, x - y\rangle + \tfrac{1}{2L}\|\nabla f(x) - \nabla f(y)\|^2.

In particular, for y = x^*, an optimum point, we have ∇f(x^*) = 0, and thus f(x) − f(x^*) ≥ (1/2L)∥∇f(x)∥², which is nothing but inequality (4) for p = 2 and i = 1, with constant 2L.

###### Example 2.

Consider the p-norm regression problem: f(x) = ∥Ax − b∥_p^p, for an even integer p ≥ 2. If b is in the range of A (so that the optimal value is zero), then f satisfies inequality (4) with this p, and L depends on p and the operator norm of A.
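As a quick illustration of Example 2 (a sketch we add here, not part of the paper): in one dimension with f(x) = x^4, the flatness inequality (4) can be checked numerically on a grid for p = 4 and each derivative order i = 1, 2, 3, provided the constant L is large enough. The grid and the value of L below are illustrative choices.

```python
# One-dimensional sanity check of Example 2: for f(x) = x^4 we verify the
# flatness condition (4), f(x) - f(x*) >= (1/L) |f^(i)(x)|^(p/(p-i)), with
# p = 4 and minimizer x* = 0.
f = lambda x: x ** 4
derivs = [lambda x: 4 * x ** 3, lambda x: 12 * x ** 2, lambda x: 24 * x]

p, L = 4, 4e5  # L = 4e5 suffices here; the i = 3 term needs L >= 24^4 = 331776
ok = all(
    f(x) >= (1 / L) * abs(d(x)) ** (p / (p - i))
    for x in [k / 10 - 1 for k in range(21)]  # grid on [-1, 1]
    for i, d in enumerate(derivs, start=1)
)
print(ok)  # True: all three derivative orders satisfy (4) on the grid
```

Since every term in the check scales like x^4, the same constant works on any bounded set after rescaling.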

Logistic loss satisfies a slightly different version of Assumption 1 because its minimum can be at infinity. We will explain this point in more detail in Section 3.1. Next, we introduce our second assumption that adds additional restrictions on differentiability and bounds the growth of derivatives.

###### Assumption 2.

There exists an integer s and a constant M > 0 such that f is order s + 2 differentiable. Furthermore, for any x ∈ A, the following operator norm bounds hold:

 \|\nabla^{(i)} f(x)\| \le M, \quad \text{for } i = p,\, p+1,\, \dots,\, s+2. \qquad (5)

When the sublevel sets of f are compact, the set A is also compact; as a result, the bound (5) on high order derivatives is implied by continuity. In addition, an ℓ_p loss of the form ∥Ax − b∥_p^p also satisfies (5).

### 2.1 Runge-Kutta integrators

Before moving onto our new results (§3) let us briefly recall explicit Runge-Kutta (RK) integrators used in our work. For a more in depth discussion please see the textbook (Hairer et al., 2006).

###### Definition 1.

Given a dynamical system ẏ = F(y), let the current point be y0 and the step size be h. An explicit S-stage Runge-Kutta method generates the next step via the following update:

 g_i = y_0 + h \sum_{j=1}^{i-1} a_{ij} F(g_j), \qquad \Phi_h(y_0) = y_0 + h \sum_{i=1}^{S} b_i F(g_i), \qquad (6)

where a_{ij} and b_i are suitable coefficients defined by the integrator; Φ_h(y0) is the estimate of the state after time step h, while the g_i (for i = 1, …, S) are a few neighboring points at which the gradient information is evaluated.

By combining the gradients at several evaluation points, the integrator can achieve higher precision by matching Taylor expansion coefficients. Let φ_h(y0) be the true solution at time h to the ODE with initial condition y0; we say that an integrator has order s if its discretization error shrinks as

 \|\Phi_h(y_0) - \varphi_h(y_0)\| = O(h^{s+1}), \quad \text{as } h \to 0. \qquad (7)

In general, RK methods offer a powerful class of numerical integrators, encompassing several basic schemes. The explicit Euler method, defined by Φ_h(y0) = y0 + hF(y0), is an explicit RK method of order 1, while the midpoint method is of order 2. Some high-order RK methods are summarized in (Verner, 1996). An order-4 RK method requires 4 stages, i.e., 4 gradient evaluations, while an order-9 method requires 16 stages.
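To make Definition 1 concrete, here is a minimal implementation of the update (6) driven by a Butcher tableau (our own illustrative sketch, not from the paper). On the test problem y' = −y it reproduces the expected one-step error behavior: the order-2 midpoint method is far more accurate per step than order-1 explicit Euler.

```python
import math

def rk_step(F, y0, h, a, b):
    """One explicit Runge-Kutta step (6): a[i][j] and b[i] form the tableau."""
    S = len(b)
    g = [y0]                       # first stage g_1 = y0
    for i in range(1, S):
        g.append(y0 + h * sum(a[i][j] * F(g[j]) for j in range(i)))
    return y0 + h * sum(b[i] * F(g[i]) for i in range(S))

F = lambda y: -y                   # test problem y' = -y, exact flow e^{-h} y0

# Explicit Euler (order 1) and the midpoint method (order 2) as tableaux;
# the first row of `a` is unused padding for the first stage.
euler = ([[0.0]], [1.0])
midpoint = ([[0.0], [0.5]], [0.0, 1.0])

y0, h = 1.0, 0.1
exact = math.exp(-h) * y0
err1 = abs(rk_step(F, y0, h, *euler) - exact)
err2 = abs(rk_step(F, y0, h, *midpoint) - exact)
print(err1, err2)  # the order-2 method has a much smaller one-step error
```

Halving h would shrink err1 by roughly 2x and err2 by roughly 4x, matching the O(h^{s+1}) local error in (7).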

## 3 Main results

In this section, we introduce a second-order ODE and use explicit RK integrators to generate iterates that converge to the optimal solution at a rate faster than O(t^{-2}) (where t denotes the time variable in the ODE). A central outcome of our result is that, at least for objective functions that are smooth enough, it is not the integrator type that is the key ingredient of acceleration; rather, a careful analysis of the dynamics with a more powerful Lyapunov function achieves the desired result. More specifically, we will show that by carefully exploiting boundedness of higher order derivatives, we can achieve both stability and acceleration at the same time. We start with Nesterov's accelerated gradient (Nag) method, defined by the updates

 x_k = y_{k-1} - h\nabla f(y_{k-1}), \qquad y_k = x_k + \tfrac{k-1}{k+2}(x_k - x_{k-1}). \qquad (8)

Su et al. (2014) showed that the iteration (8) in the limit is equivalent to the following ODE

 \ddot{x}(t) + \tfrac{3}{t}\dot{x}(t) + \nabla f(x(t)) = 0, \quad \text{where } \dot{x} = \tfrac{dx}{dt}, \qquad (9)

when one drives the step size to zero. It can further be shown that in the continuous domain the function value decreases at the rate O(1/t^2) along the trajectories of the ODE. This convergence rate can be accelerated to an arbitrary rate in continuous time via time dilation as in (Wibisono et al., 2016). In particular, the solution to

 \ddot{x}(t) + \tfrac{p+1}{t}\dot{x}(t) + p^2 t^{p-2}\nabla f(x(t)) = 0, \qquad (10)

has a convergence rate of O(1/t^p). When p > 2, Wibisono et al. (2016) proposed rate-matching algorithms that utilize higher order derivatives (e.g., Hessians). In this work, we focus purely on first-order methods and study the stability of discretizing the ODE directly. Though deriving the ODE from the algorithm is a solved problem, deriving the update of Nag or any other accelerated method by directly discretizing an ODE is not. As stated in (Wibisono et al., 2016), explicit Euler discretization of the ODE in (9) may not lead to a stable algorithm. Recently, Betancourt et al. (2018) observed empirically that Verlet integration is stable, and suggested that the stability relates to the symplectic property of Verlet integration. However, in our proof, we found that the order condition of Verlet integration suffices to achieve acceleration. Though symplectic integrators are known to preserve modified Hamiltonians for dynamical systems, we were unable to leverage this property for dissipative systems such as (11). This principal point of departure from previous works underlies Algorithm 1, which solves (1) by discretizing the following ODE with an order-s integrator:

 \ddot{x}(t) + \tfrac{2p+1}{t}\dot{x}(t) + p^2 t^{p-2}\nabla f(x(t)) = 0. \qquad (11)

The solution to (11) exists and is unique when t > 0. This claim follows from local Lipschitzness of ∇f and is discussed in more detail in Appendix A.2 of Wibisono et al. (2016).

We further highlight that the ODE in (11) can also be written as the dynamical system

 \dot{y} = F(y) = \begin{bmatrix} -\tfrac{2p+1}{t} v - p^2 t^{p-2}\nabla f(x) \\ v \\ 1 \end{bmatrix}, \quad \text{where } y = [v;\, x;\, t], \qquad (12)

We have augmented the state with time to obtain an autonomous system, which can be readily solved numerically with a Runge-Kutta integrator as in Algorithm 1. To avoid the singularity at t = 0, Algorithm 1 discretizes the ODE starting from t0 = 1 with initial condition [v(1); x(1); 1] = [0; x0; 1]. The choice t0 = 1 can be replaced by any arbitrary positive constant. Notice that the ODE in (11) is slightly different from the one in (10); it has a coefficient (2p+1)/t for ẋ instead of (p+1)/t. This modification is crucial for our analysis via Lyapunov functions (more details in Section 4 and Appendix A). The parameter p in the ODE (11) is set to be the same as the constant p in Assumption 1 to achieve the best theoretical upper bound by balancing stability and acceleration. In particular, the larger p is, the faster the system evolves. Hence, the numerical integrator requires smaller step sizes to stabilize the process, but a smaller step size increases the number of iterations needed to achieve a target accuracy. This tension is alleviated by Assumption 1: the larger p is, the flatter the function is around its stationary points. In other words, Assumption 1 implies that as the iterates approach a minimum, the high order derivatives of the function, in addition to the gradient, also converge to zero. Consequently, the trajectory slows down around the optimum and we can stably discretize the process with a large enough step size. This intuition ultimately translates into our main result.
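The following sketch shows one way Algorithm 1 can be realized (our reading of it, with illustrative problem data, not the paper's experimental setup): the augmented system (12) with p = 2 is integrated by the classical order-4 Runge-Kutta method on a small quadratic objective.

```python
import numpy as np

# Illustrative sketch of Algorithm 1: discretize (12) with classical RK4.
A = np.diag([1.0, 2.0, 3.0, 4.0, 5.0])
b = np.ones(5)
x_star = np.linalg.solve(A, b)          # minimizer of f
f = lambda x: np.sum((A @ x - b) ** 2)  # f(x) = ||Ax - b||^2, f(x_star) = 0
grad = lambda x: 2 * A.T @ (A @ x - b)

p = 2  # ODE parameter; p = 2 matches Assumption 1 for quadratics

def F(y):
    """Vector field of the augmented system (12), with state y = (v, x, t)."""
    v, x, t = y
    return (-(2 * p + 1) / t * v - p**2 * t ** (p - 2) * grad(x), v, 1.0)

def rk4_step(y, h):
    """One step of the classical order-4 Runge-Kutta integrator."""
    shift = lambda y, c, k: tuple(yi + c * ki for yi, ki in zip(y, k))
    k1 = F(y)
    k2 = F(shift(y, h / 2, k1))
    k3 = F(shift(y, h / 2, k2))
    k4 = F(shift(y, h, k3))
    total = tuple(a1 + 2 * a2 + 2 * a3 + a4 for a1, a2, a3, a4 in zip(k1, k2, k3, k4))
    return shift(y, h / 6, total)

y = (np.zeros(5), np.zeros(5), 1.0)  # v(1) = 0, x(1) = x0 = 0; t0 = 1 avoids t = 0
h = 0.05                             # step size chosen empirically for stability
for _ in range(2000):
    y = rk4_step(y, h)

print(f(y[1]))  # suboptimality f(x_N) - f(x_star)
```

With this step size the run is stable and the final suboptimality is small, consistent with the O(t^{-2}) continuous-time rate at the final time t ≈ 1 + Nh.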

###### Theorem 1.

(Main Result) Consider the second-order ODE in (11). Suppose that the function f is convex and Assumptions 1 and 2 are satisfied. Further, let s be the order of the Runge-Kutta integrator used in Algorithm 1, N be the total number of iterations, and x0 be the initial point. Also, let E0 := f(x0) − f(x^*) + ∥x0 − x^*∥² + 1. Then, there exists a constant C1 such that if we set the step size as h = C1 N^{-1/(s+1)}, the iterate x_N generated after running Algorithm 1 for N iterations satisfies the inequality

 f(x_N) - f(x^*) \le C_2 E_0 \Big[\tfrac{(L+M+1)E_0}{N^{s/(s+1)}}\Big]^p = O\big(N^{-\frac{ps}{s+1}}\big), \qquad (13)

where the constants C1 and C2 only depend on p, s, and the Runge-Kutta integrator. Since each iteration consumes a constant number of gradient evaluations, f(x_N) − f(x^*) also converges as O(N^{-ps/(s+1)}) with respect to the number of gradient evaluations. Note that for commonly used Runge-Kutta integrators, the number of stages grows at most modestly with the order s.

The proof of this theorem is quite involved; we provide a sketch in Section 4, deferring the detailed technical steps to the appendix. We do not need to know the constant C1 exactly in order to set the step size h; replacing C1 by any smaller positive constant leads to the same polynomial rate. Theorem 1 indicates that if the objective has bounded high order derivatives and satisfies the flatness condition in Assumption 1 with parameter p, then discretizing the ODE in (11) with a high order integrator results in an algorithm that converges to the optimal solution at a rate close to O(N^{-p}). In the following corollaries, we highlight two special instances of Theorem 1.

###### Corollary 2.

If the function f is convex with L-Lipschitz gradients and is order 4 differentiable, then simulating the ODE (11) for p = 2 with a numerical integrator of order s = 2 for N iterations results in the suboptimality bound

 f(x_N) - f(x^*) \le \frac{C_2\,\big(f(x_0) - f(x^*) + \|x_0 - x^*\|^2 + 1\big)^3 (L+M+1)^2}{N^{4/3}}.

Note that higher order differentiability allows one to use a higher order integrator, which leads to the optimal rate O(N^{-2}) in the limit. The next example is based on a high order polynomial, namely the 4-norm raised to the fourth power.

###### Corollary 3.

Consider the objective function f(x) = ∥Ax − b∥_4^4. Simulating the ODE (11) for p = 4 with a numerical integrator of order s = 4 for N iterations results in the suboptimality bound

 f(x_N) - f(x^*) \le \frac{C_2\,\big(f(x_0) - f(x^*) + \|x_0 - x^*\|^2 + 1\big)^5 (L+M+1)^4}{N^{16/5}}.

### 3.1 Logistic loss

Discretizing logistic loss does not fit exactly into the setting of Theorem 1 due to the nonexistence of a finite minimizer x^*. This potentially causes two problems. First, Assumption 1 is not well defined. Second, the constant E0 in Theorem 1 is not well defined. We explain in this section how we can modify our analysis to admit logistic loss by utilizing the structure of its high order derivatives. The first problem can be resolved by replacing f(x^*) by inf_x f(x) in Assumption 1; then, the logistic loss satisfies Assumption 1 with arbitrary integer p. To approach the second problem, we replace x^* by a point x̃ that satisfies the following relaxed inequalities. For some ε1, ε2, ε3 > 0 we have

 \langle x - \tilde{x},\, \nabla f(x)\rangle \ge f(x) - f(\tilde{x}) - \epsilon_1, \qquad (14)
 f(x) - f(\tilde{x}) \ge \tfrac{1}{L}\big\|\nabla^{(i)} f(x)\big\|^{\frac{p}{p-i}} - \epsilon_2, \qquad f(\tilde{x}) - \inf_{x\in\mathbb{R}^d} f(x) \le \epsilon_3. \qquad (15)

As the inequalities are relaxed, there exists a vector x̃ that satisfies the above conditions. If we follow the original proof and balance the additional error terms by picking ε1, ε2, ε3 carefully, we obtain

###### Corollary 4.

(Informal) If the objective is the logistic loss, then discretizing the ODE (11) with an order-s numerical integrator for N iterations with a suitable step size results in a convergence rate of roughly O(N^{-ps/(s+1)}) for any fixed integer p.

## 4 Proof of Theorem 1

We prove Theorem 1 as follows. First (Proposition 5), we show that the suboptimality along the continuous trajectory of the ODE (11) converges to zero sufficiently fast. Second (Proposition 6), we bound the discretization error ∥Φ_h(y_k) − φ_h(y_k)∥, which measures the distance between the point generated by discretizing the ODE and the true continuous solution. Finally (Proposition 7), a bound on this error along with continuity of the Lyapunov function (16) implies that the suboptimality of the discretized sequence of points also converges to zero quickly. Central to our proof is the choice of a Lyapunov function used to quantify progress. We propose in particular the Lyapunov function E defined as

 E([v;\, x;\, t]) := \tfrac{t^2}{4p^2}\|v\|^2 + \big\|x + \tfrac{t}{2p}v - x^*\big\|^2 + t^p\big(f(x) - f(x^*)\big). \qquad (16)

The Lyapunov function (16) is similar to the ones used by Wibisono et al. (2016); Su et al. (2014), except for the extra term (t²/4p²)∥v∥². This term allows us to bound ∥v∥ by (2p/t)√E(y). This dependency is crucial for us to achieve the desired bound (see Lemma 11 for more details). We begin our analysis with Proposition 5, which shows that the function E is non-increasing with time, i.e., Ė(y(t)) ≤ 0. This monotonicity then implies that both t^p(f(x(t)) − f(x^*)) and (t²/4p²)∥v(t)∥² are bounded above by the constant E(y(t0)). The bound on the former provides a convergence rate of O(t^{-p}) on the suboptimality f(x(t)) − f(x^*); in conjunction with Assumption 1, it further leads to an upper bound on the derivatives of the function.

###### Proposition 5 (Monotonicity of E).

Consider y(t) = [v(t); x(t); t] as a trajectory of the dynamical system (12). Let the Lyapunov function E be defined by (16). Then, along any trajectory y(t), the time derivative Ė is non-positive; more precisely,

 \dot{E}(y) \le -\tfrac{t}{p}\|v\|^2. \qquad (17)

The proof of this proposition follows from convexity and (11); we defer the details to Appendix A. Next, to bound the Lyapunov function for numerical solutions, we need to bound the distance between points in the discretized and continuous trajectories. As in Section 2.1, for the dynamical system (12), let Φ_h(y) denote the solution generated by a numerical integrator starting at point y with step size h. Similarly, let φ_h(y) be the corresponding true solution to the ODE. An ideal numerical integrator would satisfy Φ_h(y) = φ_h(y); however, due to discretization error there is always a difference between Φ_h(y) and φ_h(y), determined by the order of the integrator as in (7). Let {y_k} be the sequence of points generated by the numerical integrator, that is, y_{k+1} = Φ_h(y_k). In the following proposition, we derive an upper bound on the resulting discretization error ∥Φ_h(y_k) − φ_h(y_k)∥.
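Proposition 5 is easy to probe numerically. The sketch below (ours, with a one-dimensional quadratic as an illustrative objective) integrates (12) with a fine RK4 step and checks that the Lyapunov function (16) never increases along the trajectory, up to tiny integration error.

```python
import numpy as np

# Numerical sanity check of Proposition 5: the Lyapunov function (16) is
# non-increasing along a (finely discretized) trajectory of (12).
p = 2
f = lambda x: x * x                   # f(x) = x^2, minimized at x_star = 0
grad = lambda x: 2.0 * x
x_star = 0.0

def E(y):
    """Lyapunov function (16) for y = [v; x; t]."""
    v, x, t = y
    return (t**2 / (4 * p**2)) * v**2 \
        + (x + t / (2 * p) * v - x_star) ** 2 \
        + t**p * (f(x) - f(x_star))

def F(y):
    v, x, t = y
    return np.array([-(2 * p + 1) / t * v - p**2 * t ** (p - 2) * grad(x), v, 1.0])

y = np.array([0.0, 1.0, 1.0])         # v(1) = 0, x(1) = 1, t0 = 1
h = 1e-3
vals = [E(y)]
for _ in range(20000):                # integrate from t = 1 to t = 21
    k1 = F(y); k2 = F(y + h / 2 * k1); k3 = F(y + h / 2 * k2); k4 = F(y + h * k3)
    y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    vals.append(E(y))

largest_increase = max(np.diff(vals))
print(largest_increase)  # non-positive up to tiny numerical error
```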

###### Proposition 6 (Discretization error).

Let y_k = [v_k; x_k; t_k] be the current state of the dynamical system defined in (12). Suppose x_k belongs to the set S defined in (2). If we use a Runge-Kutta integrator of order s to discretize the ODE for a single step with a sufficiently small step size h, then

 \|\Phi_h(y_k) - \varphi_h(y_k)\| \le C' h^{s+1}(M+L+1)\Big[\big(1 + E(y_k)\big)^{s+1} t_k + h\big(1 + E(y_k)\big)^{s+2} t_k\Big], \qquad (18)

where the constant C′ only depends on p, s, and the integrator.

The proof of Proposition 6 is the most challenging part in proving Theorem 1. Details may be found in Appendix B. The key step is to bound the high order derivatives of the numerical and true solutions with respect to the step size. To do so, we first bound the high order derivative tensors using Assumption 1 and Proposition 5 within a region of radius R. By carefully selecting R, we can show that for a reasonably small h, both Φ_h(y_k) and φ_h(y_k) are constrained in the region. Second, we need to compute the high order derivatives of F as a function of y, which is bounded in the region of radius R. As shown in Appendix E, the expressions for higher derivatives become quite complicated as the order increases. We approach this complexity by using the notation of elementary differentials (see Appendix E) adopted from (Hairer et al., 2006); we then induct on the order of the derivatives to bound the higher order derivatives. The flatness assumption (Assumption 1) provides bounds on the operator norm of high order derivatives relative to the objective function suboptimality, and hence proves crucial in completing the inductive step. By the conclusion in Proposition 6 and continuity of the Lyapunov function E, we conclude that the value of E at a discretized point is close to its continuous counterpart. Using this observation, we expect that the Lyapunov function values for the points generated by the discretized ODE do not increase significantly. We formally prove this key claim in the following proposition.

###### Proposition 7.

Consider the dynamical system defined in (12) and the Lyapunov function E defined in (16). Let y0 be the initial state of the dynamical system and y_N be the final point generated by a Runge-Kutta integrator of order s after N iterations. Further, suppose that Assumptions 1 and 2 are satisfied. Then, there exists a constant C1 determined by p, s, and the numerical integrator, such that if the step size satisfies h ≤ C1 N^{-1/(s+1)}, then we have

 E(y_N) \le \exp(1)\, E(y_0) + 1. \qquad (19)

Please see Appendix C for a proof of this claim. Proposition 7 shows that the value of the Lyapunov function at the point y_N is bounded above by a constant that depends on the initial value E(y0). Hence, if the step size satisfies the required condition in Proposition 7, we can see that

 f(x_N) - f(x^*) \le \frac{E(y_N)}{t_N^p} \le \frac{e\,E(y_0) + 1}{(1 + Nh)^p}. \qquad (20)

The first inequality in (20) follows from the definition of the Lyapunov function (16). Replacing the step size h in (20) by the choice used in Proposition 7 yields

 f(x_N) - f(x^*) \le \frac{(L+M+1)^p\,\big(e\,E(y_0) + 1\big)^{p+1}\,\tilde{C}}{N^{\frac{ps}{s+1}}}, \qquad (21)

and the claim of Theorem 1 follows. Note: the dependency of the step size on the order of the integrator suggests that an integrator of higher order allows for a larger step size, and therefore a faster convergence rate.

## 5 Numerical experiments

In this section, we perform a series of numerical experiments to study the performance of the proposed scheme for minimizing convex functions through the direct discretization (DD) of the ODE in (11), and compare it with gradient descent (GD) as well as Nesterov's accelerated gradient (NAG). All figures in this section are on log-log scale. For each method tested, we empirically choose the largest step size subject to the algorithm remaining stable in the first 1000 iterations.

### 5.1 Quadratic functions

We now verify our theoretical results by minimizing a quadratic convex function of the form f(x) = ∥Ax − b∥² by simulating the ODE in (11) for the case p = 2, i.e.,

 \ddot{x}(t) + \tfrac{5}{t}\dot{x}(t) + 4\nabla f(x(t)) = 0,

where A and b are randomly generated. The first five entries of b take one value and the remaining entries the other; rows of A are generated by a multivariate Gaussian distribution conditioned on the corresponding entry of b, so the data is linearly separable. Note that the quadratic objective satisfies the condition in Assumption 1 with p = 2. It is also clear that it satisfies the condition in Assumption 2 regarding the bounds on higher order derivatives. Convergence paths of GD, NAG, and the proposed DD procedure for minimizing the quadratic function are demonstrated in Figure 1(a). For the proposed method we consider integrators of different orders s. Observe that GD eventually attains a linear rate since the function is strongly convex around the optimal solution. NAG displays local acceleration close to the optimal point, as mentioned in (Su et al., 2014; Attouch and Peypouquet, 2016). For DD, if we simulate the ODE with an integrator of order 1, the algorithm is eventually unstable. This result is consistent with the claim in (Wibisono et al., 2016) and with our theorem, which requires the step size to scale as N^{-1/(s+1)}. Notice that using a higher order integrator leads to a stable algorithm. Our theoretical results suggest that the convergence rate should be worse than O(N^{-2}) and that one can approach this rate by making s sufficiently large. However, as shown in Figure 1(a), in practice even a low-order integrator achieves a convergence rate of roughly O(N^{-2}). Hence, our theoretical convergence rate in Theorem 1 might be conservative.

We also compare the performances of these algorithms when they are used to minimize

 f([x_1, x_2]) = \|Ax_1 - b\|^2 + \|Cx_2 - d\|_4^4. \qquad (22)

Matrix C and vector d are generated similarly to A and b. The result is shown in Figure 1(b). As expected, we note that GD no longer converges linearly, but the other methods converge at the same rate as in Figure 1(a).

### 5.2 Decoupling ODE coefficients with the objective

Throughout this paper, we assumed that the constant p in (11) is the same as the one in Assumption 1, to attain the best theoretical upper bounds. In this experiment, however, we empirically explore the convergence rate of discretizing the ODE

 \ddot{x}(t) + \tfrac{2q+1}{t}\dot{x}(t) + q^2 t^{q-2}\nabla f(x(t)) = 0,

when q ≠ p. In particular, we use the same quadratic objective as in the previous section. This objective satisfies Assumption 1 with p = 2. We simulate the ODE with different values of q using the same numerical integrator with the same step size. Figure 2 summarizes the experimental results. We observe that when q > p, the algorithm diverges. Even though the suboptimality along the continuous trajectory would converge at a rate of O(t^{-q}), the discretized sequence cannot beat the oracle lower bound of O(N^{-2}).

### 5.3 Beyond Nesterov’s acceleration

In this section, we discretize ODEs with objective functions that satisfy Assumption 1 with p > 2. For all ODE discretization algorithms, we use an order-2 RK integrator that calls the gradient oracle twice per iteration. Then we run all algorithms for N iterations and show the results in Figure 3. As shown in Example 2, the objective

 f(x) = \|Ax - b\|_4^4 \qquad (23)

satisfies Assumption 1 for p = 4. By Theorem 1, if we set p = 4 in the ODE, we can achieve a convergence rate close to O(N^{-8/3}). We run the experiments with different values of the ODE parameter and summarize the results in Figure 3(a). Note that when the ODE parameter exceeds 2, the convergence of direct discretization methods is faster than NAG. Interestingly, even beyond the theoretically justified range, the discretization is still stable, with a convergence rate roughly matching the prediction. This suggests that our theorem may be conservative. We then simulate the ODE for the objective function

 f(x) = \sum_{i=1}^{10} \log\big(1 + e^{-w_i^T x}\big),

for a dataset of linearly separable points w_i. The data points are generated in the same way as in Section 5.1. As shown in Section 3.1, the logistic loss satisfies Assumption 1 for arbitrary p. As shown in Figure 3(b), the objective decreases faster for larger p; this verifies Corollary 4.

## 6 Discussion

This paper specifies sufficient conditions for stably discretizing an ODE to obtain accelerated first-order (i.e., purely gradient based) methods. Our analysis allows for the design of optimization methods via direct discretization using Runge-Kutta integrators based on the flatness of objective functions. Complementing the existing studies that derive ODEs from optimization methods, we show that one can prove convergence rates of an optimization algorithm by leveraging properties of its ODE representation. We hope that this perspective will lead to more general results. In addition, we identified a new condition in Assumption 1 that quantifies the local flatness of convex functions. At first, this condition may appear counterintuitive, because gradient descent actually converges fast when the objective is not flat, and progress slows down if the gradient vanishes close to the minimum. However, when we discretize the ODE, trajectories with vanishing gradients oscillate slowly, and hence allow stable discretization with large step sizes, which ultimately allows us to achieve acceleration. We think this high-level idea, possibly as embodied by Assumption 1, could be more broadly used in analyzing and designing other optimization methods. Based on the above two points, this paper contains both positive and negative messages for the recent trend of ODE interpretations of optimization methods. On one hand, it shows that with careful analysis, discretizing an ODE can preserve some of its trajectory properties. On the other hand, our proof suggests that nontrivial additional conditions might be required to ensure stable discretization. Hence, designing an ODE with nice properties in the continuous domain does not guarantee the existence of a practical optimization algorithm.
Although our paper answers a fundamental question regarding the possibility of obtaining accelerated gradient methods by directly discretizing second-order ODEs (instead of reverse engineering Nesterov-like constructions), it does not fully explain acceleration. First, unlike Nesterov's accelerated gradient method, which only requires first-order differentiability, our results require the objective function to be (s+2)-times differentiable (where s is the order of the integrator). Indeed, the precision of numerical integrators increases with their order only when the function is sufficiently differentiable; this property inherently limits our analysis. Second, while we achieve the O(N^{-2s/(s+1)}) convergence rate, some of the constants in our bound are loose (e.g., for the squared loss and logistic regression they are quadratic in quantities in which Nesterov's method is only linear). Achieving the optimal dependence on the initial error, the diameter of the domain, and the constants L and M requires further investigation.
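For illustration only, the overall recipe can be sketched in a few lines: discretize the second-order ODE in (24) with a standard Runge-Kutta integrator and run it as an optimization method. In the sketch below, the objective f(x) = 0.5 x², the exponent p = 2, the initial time t0 = 1, the step size, and the use of classical RK4 are all illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def grad_f(x):
    return x  # gradient of the test objective f(x) = 0.5 * x^2

def ode_field(t, y, p=2):
    # First-order form of the second-order ODE (24):
    # x' = v,  v' = -((2p+1)/t) v - p^2 t^(p-2) grad f(x)
    x, v = y
    return np.array([v, -(2 * p + 1) / t * v - p ** 2 * t ** (p - 2) * grad_f(x)])

def rk4_step(t, y, h):
    # One classical fourth-order Runge-Kutta step
    k1 = ode_field(t, y)
    k2 = ode_field(t + h / 2, y + h / 2 * k1)
    k3 = ode_field(t + h / 2, y + h / 2 * k2)
    k4 = ode_field(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

t, h = 1.0, 0.05          # start at t0 = 1 since the ODE is singular at t = 0
y = np.array([2.0, 0.0])  # x(1) = 2, v(1) = 0
for _ in range(200):
    y = rk4_step(t, y, h)
    t += h
print(abs(y[0]))  # distance of the iterate to the minimizer x* = 0
```

With a stable step size, the iterate approaches the minimizer, which is the behavior the theory predicts for sufficiently smooth objectives.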

## Acknowledgement

AJ and SS acknowledge support in part from DARPA FunLoL, DARPA Lagrange; AJ also acknowledges support from an ONR Basic Research Challenge Program, and SS acknowledges support from NSF-IIS-1409802.

## Appendix A Proof of Proposition 5

According to the dynamical system in (12) we can write

 \dot{x} = v, \qquad \ddot{x} = \dot{v} = -\frac{2p+1}{t}\,v - p^2 t^{p-2} \nabla f(x). (24)

Using these definitions we can show that

 \dot{E} = \frac{t^2}{4p^2}\langle 2v, \dot{v}\rangle + \frac{2t}{4p^2}\langle v, v\rangle + 2\Big\langle x + \frac{t}{2p}v - x^*,\ \dot{x} + \frac{\dot{x}}{2p} + \frac{t}{2p}\ddot{x}\Big\rangle + t^p\langle\nabla f(x), \dot{x}\rangle + p\,t^{p-1}\big(f(x) - f(x^*)\big)
    = \frac{2t^2}{4p^2}\Big\langle \dot{x},\ \ddot{x} + \frac{2p+1}{t}\dot{x}\Big\rangle - \frac{2t^2}{4p^2}\Big\langle \dot{x},\ \frac{2p}{t}\dot{x}\Big\rangle + \frac{2t}{2p}\Big\langle x + \frac{t}{2p}\dot{x} - x^*,\ \ddot{x} + \frac{2p+1}{t}\dot{x}\Big\rangle + t^p\langle\nabla f(x), \dot{x}\rangle + p\,t^{p-1}\big(f(x) - f(x^*)\big)
    = \frac{t^2}{2p^2}\big\langle \dot{x},\ -p^2 t^{p-2}\nabla f\big\rangle - \frac{t}{p}\|\dot{x}\|^2 + \frac{t}{p}\Big\langle x + \frac{t}{2p}\dot{x} - x^*,\ -p^2 t^{p-2}\nabla f\Big\rangle + t^p\langle\nabla f(x), \dot{x}\rangle + p\,t^{p-1}\big(f(x) - f(x^*)\big)
    = -\frac{t}{p}\|\dot{x}\|^2 + p\,t^{p-1}\big(f(x) - f(x^*)\big) - p\,t^{p-1}\langle x - x^*, \nabla f\rangle
    \le -\frac{t}{p}\|\dot{x}\|^2. (25)

The equalities follow from rearranging terms and using (11). The last inequality holds due to convexity.
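The monotonicity established in (25) can be verified numerically: the sketch below integrates the ODE accurately with small RK4 steps and checks that the Lyapunov function E never increases along the trajectory. The choices p = 2, f(x) = 0.5 x², x* = 0, and the initial condition are illustrative assumptions.

```python
import numpy as np

def grad_f(x):
    return x  # gradient of the test objective f(x) = 0.5 * x^2

def energy(t, x, v, p=2):
    # E = t^2/(4p^2) ||v||^2 + ||x + t/(2p) v - x*||^2 + t^p (f(x) - f(x*)),
    # specialized to x* = 0 and f(x) = 0.5 x^2
    return (t ** 2 / (4 * p ** 2)) * v ** 2 + (x + t / (2 * p) * v) ** 2 \
        + t ** p * 0.5 * x ** 2

def field(t, y, p=2):
    x, v = y
    return np.array([v, -(2 * p + 1) / t * v - p ** 2 * t ** (p - 2) * grad_f(x)])

# Integrate accurately (tiny steps) and record E along the trajectory
t, h, y = 1.0, 0.001, np.array([1.5, 0.0])
energies = [energy(t, y[0], y[1])]
for _ in range(5000):
    k1 = field(t, y)
    k2 = field(t + h / 2, y + h / 2 * k1)
    k3 = field(t + h / 2, y + h / 2 * k2)
    k4 = field(t + h, y + h * k3)
    y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += h
    energies.append(energy(t, y[0], y[1]))

print(max(np.diff(energies)))  # largest increment: non-positive up to roundoff
```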

## Appendix B Proof of Proposition 6 (Discretization Error)

In this section, we aim to bound the difference between the true solution defined by the ODE and the point generated by the integrator, i.e., ‖Φ_h(y_k) − φ_h(y_k)‖. Since the integrator has order s, this difference should be proportional to h^{s+1}. Here, we formally derive an upper bound of that order on the discretization error. We start by introducing some notation. Given a vector y = [v; x; t], we define the following projection operators

 \pi_x(y) = x \in \mathbb{R}^d, \quad \pi_v(y) = v \in \mathbb{R}^d, \quad \pi_t(y) = t \in \mathbb{R}, \quad \pi_{v,x}(y) = [v; x] \in \mathbb{R}^{2d}. (26)

We also define the ball B(x_c, R) with center x_c and radius R as

 B(x_c, R) = \{x \in \mathbb{R}^d \mid \|x - x_c\| \le R\}, (27)

and define the set U_{R,0.2}(y_c) as

 U_{R,0.2}(y_c) = \{y = [v; x; t] \mid \|v - v_c\| \le R,\ \|x - x_c\| \le R,\ |t - t_c| \le 0.2\}. (28)

In the following lemma, we show that if we start from the point y_c = [v_c; x_c; t_c] and choose a sufficiently small step size, the true solution defined by the ODE and the point generated by the integrator remain in the set U_{R,0.2}(y_c).

###### Lemma 8.

Let y_c = [v_c; x_c; t_c] with t_c > 0. Suppose that x_c ∈ S (defined in (3)), and hence Assumptions 1 and 2 are satisfied. If the step size satisfies h ≤ min{0.2, R t_c/(C(E(y_c)+1)(L+M+1))}, then the true solution defined by the ODE and the point generated by the integrator remain in the set U_{R,0.2}(y_c), i.e.,

 \varphi_h(y_c) \in U_{R,0.2}(y_c), \qquad \Phi_h(y_c) \in U_{R,0.2}(y_c), (29)

where C is a constant determined by the Runge-Kutta integrator. In addition, the intermediate points g_i defined in Definition 1 also belong to the set U_{R,0.2}(y_c).

###### Proof.

Note that the last coordinate of the field satisfies π_t F(y) = 1, so the t-coordinate of the flow obeys π_t φ_h(y_c) = t_c + h. Clearly, when h ≤ 0.2,

 \pi_t \varphi_h(y_c) - \pi_t(y_c) = h \le 0.2. (30)

Similarly, for any integrator that is at least of order 1,

 \pi_t \Phi_h(y_c) - \pi_t(y_c) = h \le 0.2. (31)

Therefore, we only need to focus on bounding the remaining coordinates. By Lemma 10, we have that when y ∈ U_{R,0.2}(y_c),

 \|\pi_{v,x} F(y)\| \le \frac{C\,(E(y_c)+1)(L+M+1)}{t_c}. (32)

By Definition 1,

 g_i = y_k + h \sum_{j=1}^{i-1} a_{ij} F(g_j), \qquad \Phi_h(y_k) = y_k + h \sum_{i=0}^{s-1} b_i F(g_i).
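In code, this stage recursion takes the following generic shape; the sketch below is an illustration in which the Butcher tableau (a_{ij}, b_i) shown is the classical RK4 one and the test field F is an assumption, not the paper's augmented field.

```python
import numpy as np

def explicit_rk_step(F, y, h, A, b):
    # Stages g_i = y + h * sum_j a_ij F(g_j), then
    # Phi_h(y) = y + h * sum_i b_i F(g_i), mirroring Definition 1
    Fg = []  # cached stage evaluations F(g_i)
    for i in range(len(b)):
        g_i = y + h * sum(A[i][j] * Fg[j] for j in range(i))  # empty sum = 0 for i = 0
        Fg.append(F(g_i))
    return y + h * sum(b_i * fg for b_i, fg in zip(b, Fg))

# Classical RK4 Butcher tableau (order 4)
A = [[0, 0, 0, 0],
     [0.5, 0, 0, 0],
     [0, 0.5, 0, 0],
     [0, 0, 1, 0]]
b = [1 / 6, 1 / 3, 1 / 3, 1 / 6]

# Sanity check on the autonomous field y' = -y: one step from y = 1
y1 = explicit_rk_step(lambda y: -y, np.array([1.0]), 0.1, A, b)
print(y1[0])  # close to exp(-0.1)
```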

Let C' be a constant depending on the coefficients a_{ij} and b_i of the integrator. Combining these stage formulas with the bound (32), we have that when h satisfies the step-size condition of the lemma,

 g_i \in U_{R,0.2}(y_c), \qquad \Phi_h(y_c) \in U_{R,0.2}(y_c). (33)

By the fundamental theorem of calculus, we have that

 \varphi_h(y_c) = y_c + \int_0^h F(\varphi_t(y_c))\, dt. (34)

Rearranging and applying the Cauchy–Schwarz inequality, we get

 \big\|\pi_{v,x}[\varphi_h(y_c) - y_c]\big\| \le \int_0^h \|\pi_{v,x} F(\varphi_t(y_c))\|\, dt. (35)

By the mean value theorem and a proof by contradiction, we can show that when h satisfies the step-size condition of the lemma,

 \int_0^h \|\pi_{v,x} F(\varphi_t(y_c))\|\, dt \le R. (36)

In particular, suppose that ∫_0^h ‖π_{v,x} F(φ_t(y_c))‖ dt > R. Since the integral is continuous in its upper limit, there exists h' ∈ (0, h) such that ∫_0^{h'} ‖π_{v,x} F(φ_t(y_c))‖ dt = R, and hence φ_t(y_c) ∈ U_{R,0.2}(y_c) for all t ≤ h'. By the mean value theorem, there then exists t' ∈ (0, h') such that ‖π_{v,x} F(φ_{t'}(y_c))‖ ≥ R/h' > C(E(y_c)+1)(L+M+1)/t_c, which contradicts Lemma 10. Therefore, we have proved that

 \varphi_h(y_c) \in U_{R,0.2}(y_c). (37)

The result in Lemma 8 shows that φ_h(y_c) and Φ_h(y_c) remain in the set U_{R,0.2}(y_c). In addition, we can bound the norms of the higher-order derivatives of F on U_{R,0.2}(y_c) by Lemma 9. Since Φ_h is built from compositions of F, we can show in Lemma 11 that the derivatives of φ_h and Φ_h with respect to h are bounded above by

 \Big\|\frac{\partial^q \varphi_h(y_c)}{\partial h^q}\Big\| \le \frac{C_0\,[E(y_c)+1]^q (L+M+1)^q}{t_c}, (38)

and

 \Big\|\frac{\partial^q \Phi_h(y_c)}{\partial h^q}\Big\| \le C_1\,[1+E(y_c)]^q (L+M+1)^q + \frac{C_2\, h\, [1+E(y_c)]^{q+1} (L+M+1)^{q+1}}{t_c}. (39)

Since the integrator has order s, we can write

 \frac{\partial^i}{\partial h^i}\Big[\Phi_h(y_k) - \varphi_h(y_k)\Big]\Big|_{h=0} = 0 \quad \text{for } i = 1, \dots, s. (40)
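The vanishing of these first s derivatives at h = 0 is exactly what makes the one-step (local) error scale as h^{s+1}. As a quick numerical sanity check (with an assumed smooth test field, not the paper's augmented field), for classical RK4 (order s = 4) halving the step should shrink the local error by roughly 2^5 = 32:

```python
import numpy as np

def F(y):
    x, v = y
    return np.array([v, -np.sin(x)])  # a smooth nonlinear test field

def rk4(y, h):
    # One classical RK4 step (order 4)
    k1 = F(y)
    k2 = F(y + h / 2 * k1)
    k3 = F(y + h / 2 * k2)
    k4 = F(y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def flow(y, h, n=1000):
    # High-accuracy stand-in for the true flow phi_h: many tiny substeps
    for _ in range(n):
        y = rk4(y, h / n)
    return y

y0 = np.array([1.0, 0.3])
errs = [np.linalg.norm(rk4(y0, h) - flow(y0, h)) for h in (0.2, 0.1)]
print(errs[0] / errs[1])  # roughly 2^5 = 32 for a local error of order h^5
```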

Therefore, the difference between the true solution defined by the ODE and the point generated by the integrator can be upper bounded by

 ∥Φh(yc)−φh(yc)∥≤(∥∥∥∂s+1φ