 # A Conservation Law Method in Optimization

We propose algorithms for finding local minima in nonconvex optimization, and for obtaining global minima to some degree, based on Newton's second law without friction. With the key observation that the velocity is observable and controllable during the motion, the algorithms simulate Newton's second law without friction via the symplectic Euler scheme. From an intuitive analysis of the analytical solution, we give a theoretical analysis of the high-speed convergence of the proposed algorithm. Finally, we present experiments on strongly convex, non-strongly convex, and nonconvex functions in high dimension.


## 1 Introduction

Non-convex optimization is the dominant algorithmic technique behind many state-of-the-art results in machine learning, computer vision, natural language processing and reinforcement learning. Finding a global minimizer of a non-convex optimization problem is NP-hard. Instead, local search methods, built on techniques from convex optimization, have become increasingly important. Formally, the problem of unconstrained optimization is stated in general terms as that of finding the minimum value that a function attains over Euclidean space, i.e.

$$\min_{x\in\mathbb{R}^n} f(x).$$

Numerous methods and algorithms have been proposed to solve the minimization problem, notably gradient methods, Newton’s methods, trust-region method, ellipsoid method and interior-point method [25, 19, 31, 16, 5, 6].

First-order optimization algorithms are the most popular algorithms to perform optimization and by far the most common way to optimize neural networks, since the second-order information obtained is supremely expensive. The simplest and earliest method for minimizing a convex function

$$x_{k+1} = x_k - h\nabla f(x_k), \qquad \text{any initial point } x_0. \tag{1.1}$$
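As a concrete illustration, scheme (1.1) can be sketched in a few lines. The function name, tolerance and test problem below are our own choices for illustration, not from the paper:

```python
import numpy as np

def gradient_descent(grad_f, x0, h=0.1, tol=1e-8, max_iter=100000):
    """Scheme (1.1): x_{k+1} = x_k - h * grad f(x_k), from any x0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:   # stop once the gradient is tiny
            break
        x = x - h * g
    return x

# f(x) = 0.5 * ||x||^2 has gradient x and unique minimizer 0.
x_min = gradient_descent(lambda x: x, np.array([3.0, -4.0]))
```

With step size $h = 0.1$ each iterate contracts by a factor $0.9$ on this toy problem, so the iterates converge to the origin geometrically.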

There are two significant improvements of the gradient method that speed up convergence. One is the momentum method, known as the Polyak heavy-ball method, first proposed by Polyak, i.e.,

$$x_{k+1} = x_k - h\nabla f(x_k) + \gamma_k\left(x_k - x_{k-1}\right), \qquad \text{any initial point } x_0. \tag{1.2}$$

Let $\kappa = \mu/L$ be the condition number used here, i.e., the ratio of the smallest eigenvalue $\mu$ to the largest eigenvalue $L$ of the Hessian at the local minimum. The momentum method speeds up the local convergence rate from $O\!\left((1-\kappa)^k\right)$ to $O\!\left((1-\sqrt{\kappa})^k\right)$. The other is the celebrated Nesterov accelerated gradient method, first proposed by Nesterov and later given an improved version [20, 19], i.e.

$$\begin{cases} y_{k+1} = x_k - \dfrac{1}{L}\nabla f(x_k) \\ x_{k+1} = y_{k+1} + \gamma_k\left(y_{k+1} - y_k\right) \end{cases}\qquad \text{any initial point } x_0 = y_0, \tag{1.3}$$

where the parameter is set as

$$\gamma_k = \frac{\alpha_k(1-\alpha_k)}{\alpha_k^2+\alpha_{k+1}} \qquad\text{and}\qquad \alpha_{k+1}^2 = (1-\alpha_{k+1})\,\alpha_k^2 + \alpha_{k+1}\kappa.$$

The scheme devised by Nesterov not only enjoys local convergence for strongly convex functions, but is also a globally convergent scheme: it improves the rate from $O\!\left((1-\kappa)^k\right)$ to $O\!\left((1-\sqrt{\kappa})^k\right)$ for strongly convex functions, and from $O(1/k)$ to $O(1/k^2)$ for non-strongly convex functions.

Although Nesterov's accelerated gradient method involves an intricate algebraic trick, the three methods above can be considered from their continuous-time limits [24, 27, 29, 30] to obtain physical intuition. In other words, the three methods can be regarded as discrete schemes for solving ODEs. The gradient method (1.1) corresponds to

$$\begin{cases} \dot{x} = -\nabla f(x) \\ x(0) = x_0, \end{cases} \tag{1.4}$$

and the momentum method and the Nesterov accelerated gradient method correspond to

$$\begin{cases} \ddot{x} + \gamma_t\dot{x} + \nabla f(x) = 0 \\ x(0) = x_0, \quad \dot{x}(0) = 0, \end{cases} \tag{1.5}$$

which differ only in the setting of the friction parameter $\gamma_t$. The two ODEs (1.4) and (1.5) carry significant physical intuition. The ODE (1.4) is the governing equation for potential flow, corresponding to the phenomenon of water falling from a height along the gradient direction. Its infinitesimal generalization corresponds to heat conduction in nature. Hence, the gradient method (1.1) can be viewed as a computational implementation simulating this natural phenomenon. The ODE (1.5) is the governing equation for the motion of a heavy ball with friction. Its infinitesimal generalization corresponds to chord vibration in nature. Hence, the momentum method (1.2) and the Nesterov accelerated gradient method (1.3) can be viewed as updated implementations that exploit the setting of the friction parameter $\gamma_t$.

Furthermore, we can view the three methods above as ways of dissipating energy implemented in the computer. The unknown objective function in a black-box model can be viewed as the potential energy. Hence, the initial energy is the gap between the potential value at an arbitrary initial position $x_0$ and the minimum value. The total energy is the sum of the kinetic energy and the potential energy. The key observation in this paper is that the kinetic energy, or the velocity, is an observable and controllable variable in the optimization process. In other words, we can compare the velocities at every step to look for local minima during the computation, or reset them to zero in order to artificially dissipate energy.

Let us first introduce the governing equation of motion in a conservative force field, which we use in this paper for comparison:

$$\begin{cases} \ddot{x} = -\nabla f(x) \\ x(0) = x_0, \quad \dot{x}(0) = 0. \end{cases} \tag{1.6}$$

The concept of phase space, developed in the late 19th century, usually comprises all possible values of the position and momentum variables. The governing equation of motion in a conservative force field (1.6) can be rewritten as

$$\begin{cases} \dot{x} = v \\ \dot{v} = -\nabla f(x) \\ x(0) = x_0, \quad v(0) = 0. \end{cases} \tag{1.7}$$

In this paper, we implement our discrete strategy, exploiting the observability and controllability of the velocity (or the kinetic energy) and artificially dissipating energy, in two directions:

• To look for local minima of a non-convex function, or global minima of a convex function, the kinetic energy (the norm of the velocity) is compared with that of the previous step; it is reset to zero as soon as it no longer increases.

• To look for global minima of a non-convex function, a large initial velocity is imposed at an arbitrary initial position $x_0$. A ball is simulated with (1.7), and the local maxima of the kinetic energy are recorded to discern how many local minima exist along the trajectory. The strategy above is then applied to find the smallest of these local minima.

To implement this idea in practice, we use a scheme from the numerical treatment of Hamiltonian systems, the symplectic Euler method. We remark that a more accurate alternative for practice is the Störmer-Verlet method.
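A single step of the symplectic Euler method for (1.7) can be sketched as follows (the helper name and the harmonic-oscillator check are our own illustration). The hallmark of the scheme is that the total energy stays bounded near its initial value instead of drifting:

```python
import numpy as np

def symplectic_euler_step(x, v, grad_f, h):
    """One symplectic Euler step for x' = v, v' = -grad f(x):
    update the velocity first, then the position with the NEW velocity."""
    v_new = v - h * grad_f(x)
    x_new = x + h * v_new
    return x_new, v_new

# Harmonic oscillator f(x) = x^2 / 2: the energy 0.5*v^2 + 0.5*x^2
# oscillates within O(h) of its initial value 0.5, with no drift.
x, v = 1.0, 0.0
grad = lambda x: x
for _ in range(1000):
    x, v = symplectic_euler_step(x, v, grad, 0.01)
energy = 0.5 * v**2 + 0.5 * x**2
```

This near-conservation over long times is exactly the property that makes the velocity a trustworthy observable in the algorithms below.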

### 1.1 An Analytical Demonstration For Intuition

For a simple 1-D function with ill-conditioned Hessian, $f(x) = \frac{1}{200}x^2$, with initial position $x_0 = 1000$, the solution and the function value along the solution of (1.4) are

$$x(t) = x_0 e^{-\frac{t}{100}} \tag{1.8}$$

$$f(x(t)) = \frac{1}{200}x_0^2 e^{-\frac{t}{50}}. \tag{1.9}$$

The solution and the function value along the solution of (1.5) with the optimal friction parameter $\gamma_t = \frac{1}{5}$ are

$$x(t) = x_0\left(1+\frac{t}{10}\right)e^{-\frac{t}{10}} \tag{1.10}$$

$$f(x(t)) = \frac{1}{200}x_0^2\left(1+\frac{t}{10}\right)^2 e^{-\frac{t}{5}}. \tag{1.11}$$

The solution and the function value along the solution of (1.7) are

$$x(t) = x_0\cos\left(\frac{t}{10}\right) \quad\text{and}\quad v(t) = -\frac{x_0}{10}\sin\left(\frac{t}{10}\right) \tag{1.12}$$

$$f(x(t)) = \frac{1}{200}x_0^2\cos^2\left(\frac{t}{10}\right), \tag{1.13}$$

stopped at the point where $|v|$ attains its maximum. The function values from (1.9), (1.11) and (1.13), with (1.13) stopped where $|v|$ attains its maximum, are compared in Figure 1.

Figure 1: Minimizing $f(x) = \frac{1}{200}x^2$ by the analytical solutions (1.9), (1.11) and (1.13), with (1.13) stopped where $|v|$ attains its maximum, starting from $x_0 = 1000$ with numerical step size $\Delta t = 0.01$.

From the analytical solution for a locally convex quadratic function with maximum eigenvalue $L$ and minimum eigenvalue $\mu$, the step size is in general bounded by $h \sim \frac{1}{\sqrt{L}}$ for the momentum method and the Nesterov accelerated gradient method; hence a simple estimate of the number of iterations is approximately

$$n \sim \frac{\pi}{2}\sqrt{\frac{L}{\mu}},$$

so the number of iterations is proportional to the reciprocal of the square root of the minimal eigenvalue $\mu$, which is essentially different from the convergence rates of the gradient method and the momentum method.

The rest of the paper is organized as follows. Section 2 summarizes relevant existing work. In Section 3, we propose the artificially dissipating energy algorithm, the energy conservation algorithm and the combined algorithm based on the symplectic Euler scheme, and remark on a second-order scheme, the Störmer-Verlet scheme. In Section 4, we give a local theoretical analysis of the high-speed convergence. In Section 5, we present experimental results for the proposed algorithms on strongly convex, non-strongly convex and nonconvex functions in high dimension. Section 6 offers some perspectives on the proposed algorithms and two adventurous ideas based on the evolution of Newton's second law: fluid and quantum.

## 2 Related Work

The history of the gradient method for convex optimization dates back to the time of Euler and Lagrange. Since it requires only first-order information and is therefore relatively cheap to compute, this simplest and earliest method is still active in machine learning and nonconvex optimization, as in the recent work [8, 1, 13, 10]. The natural speedup algorithms are the momentum method first proposed by Polyak, and the Nesterov accelerated gradient method together with its improved version. An acceleration algorithm similar to the Nesterov accelerated gradient method, named FISTA, was designed to solve composite problems. A related comprehensive work has also been proposed.

The original momentum method, named the Polyak heavy-ball method, was studied from the viewpoint of ODEs, which contain extremely rich physical intuition and mathematical theory. An extremely important application in machine learning is backpropagation learning with momentum. Based on the ODE viewpoint, much understanding of, and many applications for, the momentum method and the Nesterov accelerated gradient method have been proposed. A well-designed random initialization with a momentum parameter has been proposed to train both DNNs and RNNs. A seminal work provides deep insight from ODEs into the intuition behind the Nesterov scheme. The momentum method has also been understood from a variational perspective, and from Lyapunov analysis. From the stability theory of ODEs, it has been shown that the gradient method converges to local minima in the sense of almost everywhere. Analyzing and designing iterative optimization algorithms built on integral quadratic constraints from robust control theory has also been proposed.

Actually, the “high momentum” phenomenon was first observed for a restarting adaptive accelerating algorithm, and a restarting scheme has also been proposed elsewhere. However, both works use the restarting scheme as an auxiliary tool to accelerate algorithms based on friction. With the concept of phase space from mechanics, we observe that the kinetic energy, or velocity, is a controllable and usable parameter for finding local minima. Without a friction term, we can still find local minima using only the velocity. Based on this view, the proposed algorithm is easy to implement and amenable to theoretical analysis. Meanwhile, the idea generalizes to nonconvex optimization for detecting local minima along the trajectory of the particle.

## 3 Symplectic Scheme and Algorithms

In this section, we use the first-order symplectic Euler scheme for numerically solving a Hamiltonian system,

$$\begin{cases} x_{k+1} = x_k + h\,v_{k+1} \\ v_{k+1} = v_k - h\,\nabla f(x_k), \end{cases} \tag{3.1}$$

to propose the corresponding artificially dissipating energy algorithm to find the global minima of a convex function, or local minima of a non-convex function. Then, by the observability of the velocity, we propose the energy conservation algorithm for detecting local minima along the trajectory. Finally, we propose a combined algorithm to find the best among several local minima.
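The dissipating idea can be sketched as follows. This is a minimal simplification of ours, not a verbatim reproduction of Algorithm 1: we integrate (3.1) and zero the velocity whenever its norm stops increasing, with tolerances of our own choosing:

```python
import numpy as np

def dissipate_energy(grad_f, x0, h=0.1, eps=1e-6, max_iter=100000):
    """Sketch of the artificially-dissipating-energy idea: run the
    frictionless symplectic Euler dynamics (3.1) and reset v to zero
    as soon as the speed stops increasing, discarding kinetic energy
    near the bottom of the current valley."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < eps:
            break
        v_new = v - h * g
        if np.linalg.norm(v_new) < np.linalg.norm(v):
            v_new = np.zeros_like(x)   # artificially dissipate energy
        v = v_new
        x = x + h * v
    return x

# Convex quadratic f(x) = 0.5 * ||x||^2 with gradient x: the iterates
# should be driven to the minimizer at the origin.
x_min = dissipate_energy(lambda x: x, np.array([3.0, 4.0]))
```

After a reset the position is frozen for one step, so the next gradient kick restarts the motion; each accelerate-and-reset phase shrinks the distance to the minimizer by a large factor.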

###### Remark 3.1.

In all the algorithms below, the symplectic Euler scheme can be replaced by the Störmer-Verlet scheme, i.e.

$$\begin{cases} v_{k+1/2} = v_k - \dfrac{h}{2}\nabla f(x_k) \\ x_{k+1} = x_k + h\,v_{k+1/2} \\ v_{k+1} = v_{k+1/2} - \dfrac{h}{2}\nabla f(x_{k+1}), \end{cases} \tag{3.2}$$

which performs better than the symplectic Euler scheme even with double the step size, and keeps the left-right symmetry of the Hamiltonian system. The Störmer-Verlet scheme is the natural discretization of the second-order ODE,

$$x_{k+1} - 2x_k + x_{k-1} = -h^2\nabla f(x_k), \tag{3.3}$$

which is known as the leap-frog scheme in PDEs. We remark that the discrete scheme (3.3) differs from the finite-difference approximation by the forward Euler method used to analyze the stability of second-order ODEs, since the momentum term there is biased.
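A sketch of one Störmer-Verlet step (3.2), with a harmonic-oscillator check of our own illustrating its improved energy behavior:

```python
import numpy as np

def stormer_verlet_step(x, v, grad_f, h):
    """One Stormer-Verlet (leapfrog) step for x' = v, v' = -grad f(x):
    half velocity kick, full position drift, half velocity kick.
    Second order in h and time-symmetric."""
    v_half = v - 0.5 * h * grad_f(x)
    x_new = x + h * v_half
    v_new = v_half - 0.5 * h * grad_f(x_new)
    return x_new, v_new

# Harmonic oscillator f(x) = x^2 / 2: the energy 0.5*v^2 + 0.5*x^2
# now stays within O(h^2) of its initial value 0.5.
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = stormer_verlet_step(x, v, lambda x: x, 0.01)
energy = 0.5 * v**2 + 0.5 * x**2
```

The symmetric kick-drift-kick structure is what raises the order of accuracy from one to two without extra gradient evaluations per step (the trailing half kick can be fused with the leading one of the next step).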

### 3.1 The Artifically Dissipating Energy Algorithm

Firstly, the artificially dissipating energy algorithm based on (3.1) is proposed as below.

###### Remark 3.2.

In the actual Algorithm 1, the function values in the corresponding lines need not be computed inside the while loop, in order to speed up the computation.

#### 3.1.1 A Simple Example For Illustration

Here, we use a simple convex quadratic function with an ill-conditioned Hessian for illustration:

$$f(x_1,x_2) = \frac{1}{2}\left(x_1^2 + \alpha x_2^2\right), \tag{3.4}$$

whose maximum eigenvalue is $L = 1$ and minimum eigenvalue is $\mu = \alpha$. Hence the scale of the step size for (3.4) is

$$h \sim \frac{1}{\sqrt{L}} = 1.$$

In Figure 2, we compare the convergence rates of the gradient method, the momentum method, the Nesterov accelerated gradient method and the artificially dissipating energy method with common step sizes $h = 0.1$ and $h = 0.5$, using the optimal friction parameter for the momentum method and the Nesterov accelerated gradient method. A further comparison with the optimal step size for each method is shown in Figure 3.

Figure 2: Minimizing the function (3.4) by the artificially dissipating energy algorithm compared with the gradient method, momentum method and Nesterov accelerated gradient method, with stopping criterion $\epsilon = 10^{-6}$. Step size: left, $h = 0.1$; right, $h = 0.5$.

Figure 3: Minimizing the function (3.4) by the artificially dissipating energy algorithm compared with the gradient method, momentum method and Nesterov accelerated gradient method, with stopping criterion $\epsilon = 10^{-6}$. Coefficient $\alpha$: left, $\alpha = 10^{-5}$; right, $\alpha = 10^{-6}$.

Beyond the convergence rates, we also examine the trajectories. Since the trajectories of all four methods are very narrow for the ill-conditioned function (3.4), we use a relatively well-conditioned instance to display them in Figure 4.

Figure 4: Trajectories of the gradient method, momentum method, Nesterov accelerated gradient method and artificially dissipating energy method for the function (3.4) with $\alpha = 0.1$.

Figure 4 clearly shows that the gradient correction decreases the oscillation compared with the momentum method. An even clearer observation is that the artificially dissipating energy method shares a property with the other three methods by the law of nature: once the trajectory approaches the local minimum in one dimension, it does not leave it very far. Moreover, Figures 2 and 3 show the more rapid convergence of the artificially dissipating energy method.

### 3.2 Energy Conservation Algorithm For Detecting Local Minima

Here, the energy conservation algorithm based on (3.1) is proposed as below.

###### Remark 3.3.

In Algorithm 2, we can set the initial velocity so that the total energy is large enough to climb over some high peaks. As in Algorithm 1, the function value need not be computed inside the while loop, in order to speed up the computation.
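The detection idea can be sketched as follows (again a simplification of ours, not Algorithm 2 verbatim): integrate (3.1) with enough initial energy and record the positions where the kinetic energy peaks, since along the trajectory the kinetic energy is maximal exactly where the potential is locally minimal:

```python
import numpy as np

def detect_minima(grad_f, x0, v0, h=0.01, n_steps=2000):
    """Sketch of the energy-conservation idea in 1-D: run the
    frictionless symplectic Euler dynamics and record positions where
    the kinetic energy attains a local maximum, i.e. candidate
    neighborhoods of local minima along the trajectory."""
    x, v = float(x0), float(v0)
    candidates = []
    prev_ke, rising = 0.0, False
    for _ in range(n_steps):
        v = v - h * grad_f(x)
        x = x + h * v
        ke = 0.5 * v * v
        if rising and ke < prev_ke:
            candidates.append(x)   # kinetic energy just peaked here
        rising = ke > prev_ke
        prev_ke = ke
    return candidates

# Double well f(x) = (x^2 - 1)^2 / 4 with grad = x^3 - x: starting at
# rest from x = -2, the ball has enough energy to cross the barrier,
# and the kinetic-energy peaks cluster near the two minima x = -1, +1.
cands = detect_minima(lambda x: x**3 - x, x0=-2.0, v0=0.0)
```

Because the total energy is conserved, the kinetic energy equals the initial energy minus $f(x)$, so its local maxima mark the valley bottoms visited by the trajectory.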

#### 3.2.1 The Simple Example For Illustration

Here, we use the following non-convex function for illustration:

$$f(x) = \begin{cases} 2\cos(x), & x\in[0,2\pi] \\ \cos(x)+1, & x\in[2\pi,4\pi] \\ 3\cos(x)-1, & x\in[4\pi,6\pi], \end{cases} \tag{3.5}$$

which is second-order smooth but not third-order smooth. The maximum eigenvalue can be calculated as

$$\max_{x\in[0,6\pi]}\left|f''(x)\right| = 3,$$

and the step size is set accordingly ($h = 0.1$ and $h = 0.3$ in Figure 5). Figure 5 illustrates how Algorithm 2 simulates the trajectory and finds the local minima.

Figure 5: Left: step size $h = 0.1$ with 180 iterations. Right: step size $h = 0.3$ with 61 iterations.

Another 2-D potential function is

$$f(x_1,x_2) = \frac{1}{2}\left[(x_1-4)^2 + (x_2-4)^2 + 8\sin(x_1+2x_2)\right], \tag{3.6}$$

which is a smooth function on its domain. The maximum eigenvalue of the Hessian can be bounded as

$$\max_{x}\left|\lambda\left(\nabla^2 f(x)\right)\right| \ge 16,$$

and the step size is set as $h = 0.1$. Figure 6 illustrates how Algorithm 2 simulates the trajectory and finds the local minima.

Figure 6: The common step size is $h = 0.1$. Left: initial position $(2,0)$ with 23 iterations. Right: initial position $(0,4)$ with 62 iterations.
###### Remark 3.4.

We point out that the energy conservation algorithm for detecting local minima along the trajectory cannot detect saddle points, in the sense of almost everywhere, since a saddle point of the original function is also a saddle point of the energy function. The proof is fully analogous to the existing one.

### 3.3 Combined Algorithm

Finally, we propose the comprehensive algorithm combining the artificially dissipating energy algorithm (algorithm 1) and the energy conservation algorithm (algorithm 2) to find global minima.

###### Remark 3.5.

We remark that the combined algorithm (Algorithm 3) cannot guarantee finding the global minimum if the trajectory from the initial position is not ergodic. The local minima tracked depend on the trajectory. However, the computation time and precision of the proposed algorithm are far better than those of gradient methods with large-scale sampling. Our proposed algorithm makes finding the global minimum possible in practice.

## 4 An Asymptotic Analysis for The Phenomena of Local High-Speed Convergence

In this section, we analyze the phenomenon of high-speed convergence shown in Figures 1, 2 and 3. Without loss of generality, we substitute the translation $y = x - x^{\star}$ (where $x^{\star}$ is the local minimum) into (3.1), which yields

$$\begin{cases} y_{k+1} = y_k + h\,v_{k+1} \\ v_{k+1} = v_k - h\,\nabla f(x^{\star} + y_k), \end{cases} \tag{4.1}$$

the locally linearized scheme of which is given as below,

$$\begin{cases} y_{k+1} = y_k + h\,v_{k+1} \\ v_{k+1} = v_k - h\,\nabla^2 f(x^{\star})\,y_k. \end{cases} \tag{4.2}$$
###### Remark 4.1.

The local linearized analysis is based on the stability theorem in finite dimensions, the invariant stable manifold theorem and the Hartman-Grobman linearization theorem. This idea was first used to estimate the local convergence of the momentum method, and later to exclude the possibility of convergence to saddle points. However, the two theorems above belong to the qualitative theory of ODEs. Hence, the linearized scheme (4.2) is only a local approximation of the original scheme (4.1).

### 4.1 Some Lemmas For The Linearized Scheme

Let $A$ denote the positive-semidefinite symmetric matrix $\nabla^2 f(x^{\star})$ in (4.2).

###### Lemma 4.2.

The numerical scheme, shown as below

$$\begin{pmatrix} y_{k+1} \\ v_{k+1} \end{pmatrix} = \begin{pmatrix} I - h^2 A & hI \\ -hA & I \end{pmatrix}\begin{pmatrix} y_k \\ v_k \end{pmatrix} \tag{4.3}$$

is equivalent to the linearized symplectic Euler scheme (4.2), where we denote the linear transformation by

$$M = \begin{pmatrix} I - h^2 A & hI \\ -hA & I \end{pmatrix}. \tag{4.4}$$
###### Proof.
$$\begin{pmatrix} I & -hI \\ 0 & I \end{pmatrix}\begin{pmatrix} y_{k+1} \\ v_{k+1} \end{pmatrix} = \begin{pmatrix} I & 0 \\ -hA & I \end{pmatrix}\begin{pmatrix} y_k \\ v_k \end{pmatrix} \iff \begin{pmatrix} y_{k+1} \\ v_{k+1} \end{pmatrix} = \begin{pmatrix} I - h^2 A & hI \\ -hA & I \end{pmatrix}\begin{pmatrix} y_k \\ v_k \end{pmatrix}. \qquad\blacksquare$$

###### Lemma 4.3.

For the matrix $M$ in (4.4), there exists an orthogonal transformation $U$ such that

$$U^T M U = \begin{pmatrix} T_1 & & & \\ & T_2 & & \\ & & \ddots & \\ & & & T_n \end{pmatrix}, \tag{4.5}$$

where each $T_i$ is a $2\times 2$ matrix of the form

$$T_i = \begin{pmatrix} 1-\omega_i^2 h^2 & h \\ -\omega_i^2 h & 1 \end{pmatrix}, \tag{4.6}$$

and $\omega_i^2$ is an eigenvalue of the matrix $A$.

###### Proof.

Let $\Lambda$ be the diagonal matrix of the eigenvalues of $A$:

$$\Lambda = \begin{pmatrix} \omega_1^2 & & & \\ & \omega_2^2 & & \\ & & \ddots & \\ & & & \omega_n^2 \end{pmatrix}.$$

Since $A$ is positive definite and symmetric, there exists an orthogonal matrix $U_1$ such that

$$U_1^T A U_1 = \Lambda.$$

Let $\Pi$ be the permutation matrix satisfying

$$\Pi_{i,j} = \begin{cases} 1, & j \text{ odd},\ i = \frac{j+1}{2} \\ 1, & j \text{ even},\ i = n + \frac{j}{2} \\ 0, & \text{otherwise,} \end{cases}$$

where $i$ is the row index and $j$ is the column index. Then, letting $U = \operatorname{diag}(U_1, U_1)\,\Pi$, we have by conjugation

$$U^T M U = \Pi^T \begin{pmatrix} U_1^T & \\ & U_1^T \end{pmatrix}\begin{pmatrix} I - h^2 A & hI \\ -hA & I \end{pmatrix}\begin{pmatrix} U_1 & \\ & U_1 \end{pmatrix}\Pi = \Pi^T\begin{pmatrix} I - h^2\Lambda & hI \\ -h\Lambda & I \end{pmatrix}\Pi = \begin{pmatrix} T_1 & & & \\ & T_2 & & \\ & & \ddots & \\ & & & T_n \end{pmatrix}. \qquad\blacksquare$$

From Lemma 4.3, equation (4.3) can be written in the equivalent form

$$\begin{pmatrix} (U_1^T y)_{k+1,i} \\ (U_1^T v)_{k+1,i} \end{pmatrix} = T_i\begin{pmatrix} (U_1^T y)_{k,i} \\ (U_1^T v)_{k,i} \end{pmatrix} = \begin{pmatrix} 1-\omega_i^2 h^2 & h \\ -\omega_i^2 h & 1 \end{pmatrix}\begin{pmatrix} (U_1^T y)_{k,i} \\ (U_1^T v)_{k,i} \end{pmatrix}, \tag{4.7}$$

where $i = 1,\ldots,n$.

###### Lemma 4.4.

For any step size $h$ satisfying $h\omega_i < 2$, the eigenvalues of the matrix $T_i$ are complex with absolute value $1$.

###### Proof.

For $h\omega_i < 2$, we have

$$\left|\lambda I - T_i\right| = 0 \iff \lambda_{1,2} = 1 - \frac{h^2\omega_i^2}{2} \pm i\,h\omega_i\sqrt{1-\frac{h^2\omega_i^2}{4}},$$

and $|\lambda_{1,2}| = 1$ since $\det T_i = 1$. ∎

Let $\theta_i$ and $\phi_i$, for $i = 1,\ldots,n$, be new coordinate variables defined by

$$\begin{cases} \cos\theta_i = 1 - \dfrac{h^2\omega_i^2}{2} \\[4pt] \sin\theta_i = h\omega_i\sqrt{1-\dfrac{h^2\omega_i^2}{4}}, \end{cases}\qquad \begin{cases} \cos\phi_i = \dfrac{h\omega_i}{2} \\[4pt] \sin\phi_i = \sqrt{1-\dfrac{h^2\omega_i^2}{4}}. \end{cases} \tag{4.8}$$

In order for $\theta_i$ and $\phi_i$ to lie in $\left(0,\frac{\pi}{2}\right]$, we need to shrink the step-size condition $h\omega_i < 2$ to $h\omega_i \le \sqrt{2}$.
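The claim of Lemma 4.4 is easy to check numerically. The following snippet (our own sanity check, not part of the paper) verifies that both eigenvalues of $T_i$ are complex conjugates lying on the unit circle when $h\omega_i < 2$:

```python
import numpy as np

def eigvals_T(h, w):
    """Eigenvalues of T_i = [[1 - h^2 w^2, h], [-h w^2, 1]] from (4.6),
    where w stands for omega_i. Note det(T_i) = 1."""
    T = np.array([[1.0 - h**2 * w**2, h],
                  [-h * w**2, 1.0]])
    return np.linalg.eigvals(T)

lams = eigvals_T(h=0.5, w=1.0)   # h * w = 0.5 < 2
```

Since the determinant is exactly $1$ and the eigenvalues form a conjugate pair, their product forces $|\lambda_{1,2}| = 1$, matching the closed form in the proof.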

###### Lemma 4.5.

With the new coordinates in (4.8), for $h\omega_i \le \sqrt{2}$ we have

$$2\phi_i + \theta_i = \pi \tag{4.9}$$

and

$$\begin{cases} \sin\theta_i = \sin(2\phi_i) = h\omega_i\sin\phi_i \\ \sin(3\phi_i) = -\left(1-h^2\omega_i^2\right)\sin\phi_i. \end{cases} \tag{4.10}$$
###### Proof.

Using the sum formulas for trigonometric functions, we have

$$\sin(\theta_i+\phi_i) = \sin\theta_i\cos\phi_i + \cos\theta_i\sin\phi_i = h\omega_i\sqrt{1-\frac{h^2\omega_i^2}{4}}\cdot\frac{h\omega_i}{2} + \left(1-\frac{h^2\omega_i^2}{2}\right)\sqrt{1-\frac{h^2\omega_i^2}{4}} = \sqrt{1-\frac{h^2\omega_i^2}{4}} = \sin\phi_i.$$

Since $\theta_i,\phi_i\in\left(0,\frac{\pi}{2}\right]$, we have $\theta_i+\phi_i \ne \phi_i$, so we obtain

$$\theta_i + \phi_i = \pi - \phi_i \iff \theta_i = \pi - 2\phi_i,$$

and with the coordinate transformation (4.8), we have

$$\sin\theta_i = h\omega_i\sin\phi_i \iff \sin(2\phi_i) = h\omega_i\sin\phi_i.$$

Next, using the sum formulas again,

$$\sin(\theta_i-\phi_i) = \sin\theta_i\cos\phi_i - \cos\theta_i\sin\phi_i = h\omega_i\sqrt{1-\frac{h^2\omega_i^2}{4}}\cdot\frac{h\omega_i}{2} - \left(1-\frac{h^2\omega_i^2}{2}\right)\sqrt{1-\frac{h^2\omega_i^2}{4}} = \left(h^2\omega_i^2-1\right)\sqrt{1-\frac{h^2\omega_i^2}{4}} = -\left(1-h^2\omega_i^2\right)\sin\phi_i,$$

and with $\theta_i - \phi_i = \pi - 3\phi_i$, we have

$$\sin(3\phi_i) = -\left(1-h^2\omega_i^2\right)\sin\phi_i. \qquad\blacksquare$$

###### Lemma 4.6.

With the new coordinates in (4.8), the matrix $T_i$ ($i = 1,\ldots,n$) in (4.6) can be expressed as

$$T_i = \frac{1}{\omega_i\left(e^{-i\phi_i}-e^{i\phi_i}\right)}\begin{pmatrix} 1 & 1 \\ \omega_i e^{i\phi_i} & \omega_i e^{-i\phi_i} \end{pmatrix}\begin{pmatrix} e^{i\theta_i} & 0 \\ 0 & e^{-i\theta_i} \end{pmatrix}\begin{pmatrix} \omega_i e^{-i\phi_i} & -1 \\ -\omega_i e^{i\phi_i} & 1 \end{pmatrix}. \tag{4.11}$$

###### Proof.

For the coordinate transformation in (4.8), we have

$$T_i\begin{pmatrix} 1 \\ \omega_i e^{i\phi_i} \end{pmatrix} = e^{i\theta_i}\begin{pmatrix} 1 \\ \omega_i e^{i\phi_i} \end{pmatrix}\qquad\text{and}\qquad T_i\begin{pmatrix} 1 \\ \omega_i e^{-i\phi_i} \end{pmatrix} = e^{-i\theta_i}\begin{pmatrix} 1 \\ \omega_i e^{-i\phi_i} \end{pmatrix}.$$

Hence, (4.11) is proved. ∎

### 4.2 The Asymptotic Analysis

###### Theorem 4.7.

Let the initial values be $y_0$ and $v_0 = 0$. After the first $k$ steps without resetting the velocity, the iterative solution of (4.2), in the equivalent form (4.7), is

$$\begin{pmatrix} (U_1^T y)_{k,i} \\ (U_1^T v)_{k,i} \end{pmatrix} = T_i^k\begin{pmatrix} (U_1^T y)_{0,i} \\ (U_1^T v)_{0,i} \end{pmatrix} = \begin{pmatrix} -\dfrac{\sin(k\theta_i-\phi_i)}{\sin\phi_i} & \dfrac{\sin(k\theta_i)}{\omega_i\sin\phi_i} \\[6pt] -\dfrac{\omega_i\sin(k\theta_i)}{\sin\phi_i} & \dfrac{\sin(k\theta_i+\phi_i)}{\sin\phi_i} \end{pmatrix}\begin{pmatrix} (U_1^T y)_{0,i} \\ (U_1^T v)_{0,i} \end{pmatrix}. \tag{4.12}$$
###### Proof.

With Lemma 4.6 and the coordinate transformation (4.8), we have

$$T_i^k = \frac{1}{\omega_i\left(e^{-i\phi_i}-e^{i\phi_i}\right)}\begin{pmatrix} 1 & 1 \\ \omega_i e^{i\phi_i} & \omega_i e^{-i\phi_i} \end{pmatrix}\begin{pmatrix} e^{i\theta_i} & 0 \\ 0 & e^{-i\theta_i} \end{pmatrix}^{k}\begin{pmatrix} \omega_i e^{-i\phi_i} & -1 \\ -\omega_i e^{i\phi_i} & 1 \end{pmatrix} = \frac{1}{\omega_i\left(e^{-i\phi_i}-e^{i\phi_i}\right)}\begin{pmatrix} 1 & 1 \\ \omega_i e^{i\phi_i} & \omega_i e^{-i\phi_i} \end{pmatrix}\begin{pmatrix} \omega_i e^{i(k\theta_i-\phi_i)} & -e^{ik\theta_i} \\ -\omega_i e^{-i(k\theta_i-\phi_i)} & e^{-ik\theta_i} \end{pmatrix} = \begin{pmatrix} -\dfrac{\sin(k\theta_i-\phi_i)}{\sin\phi_i} & \dfrac{\sin(k\theta_i)}{\omega_i\sin\phi_i} \\[6pt] -\dfrac{\omega_i\sin(k\theta_i)}{\sin\phi_i} & \dfrac{\sin(k\theta_i+\phi_i)}{\sin\phi_i} \end{pmatrix}.$$

The proof is complete. ∎
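The closed form (4.12) can also be checked numerically against direct matrix powers. This is a sanity check of our own, with arbitrarily chosen values of $h$, $\omega$ and $k$:

```python
import numpy as np

# Check (4.12): with cos(theta) = 1 - h^2 w^2 / 2 and cos(phi) = h w / 2
# (w standing for omega_i), the k-th power of
#   T = [[1 - h^2 w^2, h], [-h w^2, 1]]
# should equal
#   [[-sin(k*th - ph), sin(k*th)/w], [-w*sin(k*th), sin(k*th + ph)]] / sin(ph).
h, w, k = 0.3, 1.0, 7
theta = np.arccos(1 - h**2 * w**2 / 2)
phi = np.arccos(h * w / 2)
T = np.array([[1 - h**2 * w**2, h],
              [-h * w**2, 1.0]])
Tk = np.linalg.matrix_power(T, k)
closed = np.array([[-np.sin(k * theta - phi), np.sin(k * theta) / w],
                   [-w * np.sin(k * theta), np.sin(k * theta + phi)]]) / np.sin(phi)
```

The agreement holds for every $k$, since (4.12) is just the diagonalization of Lemma 4.6 raised to the $k$-th power.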

Comparing (4.12) and (4.7) at $k = 1$, we obtain

$$-\frac{\sin(\theta_i-\phi_i)}{\sin\phi_i} = 1 - h^2\omega_i^2.$$

With the initial value $v_0 = 0$, the initial value for (4.7) is $\left((U_1^T y)_{0,i},\, 0\right)$. To ensure that the numerical (iterative) solution behaves like the analytical solution, we need to keep the step size within the range $h\omega_i \le \sqrt{2}$.

###### Remark 4.8.

Here, the behavior is similar in spirit to the role of the step size in the gradient method. A suitably small step size (of order $1/L$) ensures the global convergence of the gradient method, and it also ensures the uniqueness of the discrete trajectory, which is the analogue of the existence and uniqueness of the solution of the corresponding ODE. In other words, such a step size preserves the properties of the solution of the ODE, the continuous-limit version. A global existence theorem for the solution of the gradient system has been proved in the literature.

For well-conditioned eigenvalues of the Hessian, every method (the gradient method, the momentum method, the Nesterov accelerated gradient method and the artificially dissipating energy method) shows a good convergence rate in the experiments. However, for our artificially dissipating energy method, since trigonometric functions appear in (4.12), we cannot give a rigorous mathematical proof of the convergence rate; such a proof would be a very welcome contribution. Here, we propose a theoretical approximation for the ill-conditioned case, that is, for the directions with small eigenvalues $\omega_i^2$.

###### Assumption A1.

If the step size for (4.2) satisfies $h\omega_i \ll 1$ for an ill-conditioned eigenvalue $\omega_i^2$, the coordinate variables can be approximated, consistently with the analytical solution, by

$$\theta_i = h\omega_i \qquad\text{and}\qquad \phi_i = \frac{\pi}{2}. \tag{4.13}$$

With Assumption A1, the iterative solution (4.12) can be rewritten as

$$\begin{pmatrix} (U_1^T y)_{k,i} \\ (U_1^T v)_{k,i} \end{pmatrix} = \begin{pmatrix} \cos(kh\omega_i) & \dfrac{\sin(kh\omega_i)}{\omega_i} \\[6pt] -\omega_i\sin(kh\omega_i) & \cos(kh\omega_i) \end{pmatrix}\begin{pmatrix} (U_1^T y)_{0,i} \\ (U_1^T v)_{0,i} \end{pmatrix}. \tag{4.14}$$
###### Theorem 4.9.

For every ill-conditioned eigen-direction, with initial condition $v_0 = 0$, if Algorithm 1 resets the velocity at the $k$-th step, then there exists an eigenvalue $\omega_i^2$ such that

$$k\omega_i h \ge \frac{\pi}{2}.$$

###### Proof.

When $v_0 = 0$, the initial value in (4.14) is $\left((U_1^T y)_{0,i},\, 0\right)$. Then the norm of $U_1^T v$ can be written in the analytical form

$$\left\|U_1^T v\right\| = \sqrt{\sum_{i=1}^{n}\omega_i^2\left(U_1^T y\right)_{0,i}^2\sin^2(kh\omega_i)}\,;$$

if there were no $i$ with $kh\omega_i \ge \frac{\pi}{2}$, then $\left\|U_1^T v\right\|$ would still increase as $k$ increases, and the velocity would not be reset. ∎

For $k$ such that $kh\omega_i$ approaches $\frac{\pi}{2}$, we have

$$\frac{\left|(U_1^T y)_{k+1,i}\right|}{\left|(U_1^T y)_{k,i}\right|} = \frac{\cos((k+1)h\omega_i)}{\cos(kh\omega_i)} = e^{\ln\cos((k+1)h\omega_i) - \ln\cos(kh\omega_i)} = e^{-\tan(\xi)\,h\omega_i}, \tag{4.15}$$

where $\xi \in \left(kh\omega_i,\,(k+1)h\omega_i\right)$. Hence, as $kh\omega_i$ approaches $\frac{\pi}{2}$, $(U_1^T y)_{k,i}$ approaches $0$ at a linear rate, and the contraction coefficient itself decays as $\xi \to \frac{\pi}{2}$. With the Laurent expansion of $\tan\xi$ at $\xi = \frac{\pi}{2}$, i.e.,

$$\tan\xi = -\frac{1}{\xi-\frac{\pi}{2}} + \frac{1}{3}\left(\xi-\frac{\pi}{2}\right) + \frac{1}{45}\left(\xi-\frac{\pi}{2}\right)^3 + O\!\left(\left(\xi-\frac{\pi}{2}\right)^5\right),$$

the coefficient has the approximate formula

$$e^{-\tan(\xi)\,h\omega_i} \approx e^{\frac{h\omega_i}{\xi-\frac{\pi}{2}}} \le \left(\frac{\pi}{2}-\xi\right)^n,$$

where $n$ is an arbitrarily large real number, for $\xi$ sufficiently close to $\frac{\pi}{2}$.

## 5 Experimental Demonstration

In this section, we apply the artificially dissipating energy algorithm (Algorithm 1), the energy conservation algorithm (Algorithm 2) and the combined algorithm (Algorithm 3) to high-dimensional data, comparing them with the gradient method, the momentum method and the Nesterov accelerated gradient method.

### 5.1 Strongly Convex Function

Here, we investigate the artificially dissipating energy algorithm (Algorithm 1) on strongly convex functions, compared with the gradient method, the momentum method and the Nesterov accelerated gradient method (strongly convex case), using the quadratic function

$$f(x) = \frac{1}{2}x^T A x + b^T x, \tag{5.1}$$

where $A$ is a symmetric positive-definite matrix. The two cases are as follows:

(a) The generative matrix $A$ is a random positive-definite matrix with eigenvalues in a prescribed range and one prescribed small eigenvalue. The generative vector $b$ follows an i.i.d. Gaussian distribution.

(b) The generative matrix $A$ is the well-known worst-case example in Nesterov's book, i.e.,

$$A = \begin{pmatrix} 2 & -1 & & & \\ -1 & 2 & -1 & & \\ & -1 & 2 & \ddots & \\ & & \ddots & \ddots & -1 \\ & & & -1 & 2 \end{pmatrix};$$

the eigenvalues of the matrix are

$$\lambda_k = 2 - 2\cos\left(\frac{k\pi}{n+1}\right) = 4\sin^2\left(\frac{k\pi}{2(n+1)}\right),$$

where $n$ is the dimension of the matrix. The eigenvectors can be obtained from Chebyshev polynomials of the second kind. We take $n = 1000$ and $b$ the zero vector. Hence, the smallest eigenvalue is approximately

$$\lambda_1 = 4\sin^2\left(\frac{\pi}{2(n+1)}\right) \approx \frac{\pi^2}{1001^2} \approx 10^{-5}.$$

Figure 7: Left: case (a) with initial point $x_0 = 0$. Right: case (b) with initial point $x_0 = 1000$.
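The spectrum of this tridiagonal matrix can be verified numerically against the closed form. This is our own check, using a smaller dimension for speed:

```python
import numpy as np

# Nesterov's worst-case tridiagonal matrix and its spectrum:
# lambda_k = 2 - 2*cos(k*pi/(n+1)) = 4*sin^2(k*pi/(2(n+1))).
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
computed = np.linalg.eigvalsh(A)            # ascending order
ks = np.arange(1, n + 1)
closed_form = 4 * np.sin(ks * np.pi / (2 * (n + 1)))**2
```

The smallest eigenvalue scales like $\pi^2/(n+1)^2$, which is what makes this family severely ill-conditioned as the dimension grows.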

### 5.2 Non-Strongly Convex Function

Here, we investigate the artificially dissipating energy algorithm (Algorithm 1) on a non-strongly convex function, compared with the gradient method and the Nesterov accelerated gradient method (non-strongly convex case), using the log-sum-exp function

$$f(x) = \rho\log\left[\sum_{i=1}^{n}\exp\left(\frac{\langle a_i,x\rangle - b_i}{\rho}\right)\right], \tag{5.2}$$

where $A$ is the matrix with rows $a_i^T$, $b$ is the vector with components $b_i$, and $\rho$ is a smoothing parameter. In the experiment for (5.2), the entries of the matrix $A$ and the vector $b$ are generated i.i.d. Gaussian, for the parameters $\rho = 5$ and $\rho = 10$.

Figure 8: Convergence from the initial point $x_0 = 0$. Left: $\rho = 5$; right: $\rho = 10$.
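The objective (5.2) can be evaluated stably with the usual max-shift trick. The following sketch is our own, with a toy instance where $A$ is the identity, illustrating that for small $\rho$ the value approaches the hard maximum that (5.2) smooths:

```python
import numpy as np

def log_sum_exp(x, A, b, rho):
    """Objective (5.2): rho * log(sum_i exp((<a_i, x> - b_i) / rho)),
    computed with the standard max-shift for numerical stability."""
    z = (A @ x - b) / rho
    m = np.max(z)
    return rho * (m + np.log(np.sum(np.exp(z - m))))

# Toy instance A = I, b = 0: for small rho the value approaches
# max_i x_i, here 3.
val = log_sum_exp(np.array([1.0, 2.0, 3.0]), np.eye(3), np.zeros(3), rho=0.01)
```

Without the shift by the maximum, the exponentials overflow for small $\rho$, which is why the shifted form is the standard way to evaluate this objective.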

### 5.3 Non-convex Function

For nonconvex functions, we exploit classical test functions, known as artificial landscapes, to evaluate the general performance and precision of optimization algorithms. In this paper, we apply our algorithms to the Styblinski-Tang function and the Shekel function, which are recorded in the virtual library of simulation experiments (https://www.sfu.ca/~ssurjano/index.html). Firstly, we investigate the Styblinski-Tang function, i.e.

$$f(x) = \frac{1}{2}\sum_{i=1}^{d}\left(x_i^4 - 16x_i^2 + 5x_i\right), \tag{5.3}$$

to demonstrate the general performance of Algorithm 2 in tracking the number of local minima, and then to find the local minima by Algorithm 3.

Figure 9: Detecting the number of local minima of the 2-D Styblinski-Tang function by Algorithm 3 with step size $h = 0.01$. The red points are recorded by Algorithm 2 and the blue points are the local minima found by Algorithm 1. Left: initial position $(5,5)$; right: initial position $(-5,5)$.
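For reference, the function (5.3) is straightforward to evaluate. The snippet below, a check of our own, confirms the known global minimum value of about $-39.166$ per dimension near $x_i = -2.903534$:

```python
import numpy as np

def styblinski_tang(x):
    """Styblinski-Tang function (5.3)."""
    x = np.asarray(x, dtype=float)
    return 0.5 * np.sum(x**4 - 16 * x**2 + 5 * x)

# Evaluate at the known minimizer coordinate x_i = -2.903534 in 2-D;
# the value is about -39.166 per dimension, so about -78.332 here.
val = styblinski_tang([-2.903534, -2.903534])
```

Because the function is separable, the 2-D landscape has four local minima (one per sign pattern of the 1-D minimizers), which is what Algorithm 2 is asked to detect in Figure 9.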

For the essentially 1-D nonconvex Styblinski-Tang function in high dimension, we implement Algorithm 3 to obtain the precision of the global minimum, as below. The computed global minimum and its position are shown in Table 1. The true global minimum is attained at $x_i = -2.903534$ with value approximately $-39.166$ per dimension.

Furthermore, we extend the numerical experiments from the Styblinski-Tang function to the more complex Shekel function

$$f(x) = -\sum_{i=1}^{m}\left(\sum_{j=1}^{4}\left(x_j - C_{ji}\right)^2 + \beta_i\right)^{-1}, \tag{5.4}$$

where

$$\beta = \frac{1}{10}\left(1,2,2,4,4,6,3,7,5,5\right)^T$$

and

$$C = \begin{pmatrix} 4.0 & 1.0 & 8.0 & 6.0 & 3.0 & 2.0 & 5.0 & 8.0 & 6.0 & 7.0 \\ 4.0 & 1.0 & 8.0 & 6.0 & 7.0 & 9.0 & 3.0 & 1.0 & 2.0 & 3.6 \\ 4.0 & 1.0 & 8.0 & 6.0 & 3.0 & 2.0 & 5.0 & 8.0 & 6.0 & 7.0 \\ 4.0 & 1.0 & 8.0 & 6.0 & 7.0 & 9.0 & 3.0 & 1.0 & 2.0 & 3.6 \end{pmatrix}.$$
(i) Case: the global minimum is attained near $(4,4,4,4)$.

(a) From the given initial position, the experimental results with the chosen step size and number of iterations are as below.

Detected positions (Algorithm 2):

$$\begin{pmatrix} 7.9879 & 6.0136 & 3.8525 & 6.2914 & 2.7818 \\ 7.9958 & 5.9553 & 3.9196 & 6.2432 & 6.7434 \\ 7.9879 & 6.0136 & 3.8525 & 6.2914 & 2.7818 \\ 7.9958 & 5.9553 & 3.9196 & 6.2432 & 6.7434 \end{pmatrix}$$

Detected values:

$$\left(-5.0932,\ -2.6551,\ -6.5387,\ -1.6356,\ -1.7262\right)$$

Final positions (Algorithm 1):

$$\begin{pmatrix} 7.9996 & 5.9987 & 4.0000 & 5.9987 & 3.0018 \\ 7.9996 & 6.0003 & 4.0001 & 6.0003 & 6.9983 \\ 7.9996 & 5.9987 & 4.0000 & 5.9987 & 3.0018 \\ 7.9996 & 6.0003 & 4.0001 & 6.0003 & 6.9983 \end{pmatrix}$$

Final values:

(−5.1008, …