Strong convergence of an inertial Tseng's extragradient algorithm for pseudomonotone variational inequalities with applications to optimal control problems

07/23/2020
by   Bing Tan, et al.

We investigate an inertial viscosity-type Tseng's extragradient algorithm with a new step size for solving pseudomonotone variational inequality problems in real Hilbert spaces. A strong convergence theorem for the algorithm is obtained without prior knowledge of the Lipschitz constant of the operator and without any additional projections. Several computational tests are carried out to demonstrate the reliability and benefits of the algorithm and to compare it with existing ones. Moreover, our algorithm is also applied to solve the variational inequality problem that appears in optimal control problems. The algorithm presented in this paper improves on several known results in the literature.


1 Introduction

The goal of this study is to investigate a fast iterative method for finding a solution of the variational inequality problem (in short, VIP). Throughout this paper, we always assume that $H$ is a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and induced norm $\|\cdot\|$, and that $C$ is a nonempty, closed and convex subset of $H$. The problem under investigation reads as follows:

(VIP) find $x^{*} \in C$ such that $\langle Ax^{*}, x - x^{*} \rangle \geq 0$ for all $x \in C$,

where $A: H \to H$ is a nonlinear mapping. We denote the solution set of (VIP) by $VI(C, A)$.

Variational inequalities are powerful tools and models in applied mathematics and play an essential role in the social sciences, optimization, economics, transportation, mathematical programming, engineering mechanics, and other fields (see, for instance, QA ; SYVS ; AIY ). In recent decades, various effective solution methods have been investigated and developed to solve problems of type (VIP); see, e.g., Cho2 ; SIna ; tanjnca and the references therein. It should be pointed out that these approaches usually require the mapping $A$ to satisfy some monotonicity property. In this paper, we assume that the mapping $A$ associated with (VIP) is pseudomonotone (see the definition below), a class strictly broader than that of monotone mappings.

Let us review some nonlinear mappings from nonlinear analysis for further use. For all elements $x, y \in H$, recall that a mapping $A: H \to H$ is said to be:

  1. $\gamma$-strongly monotone if there is a positive number $\gamma$ such that $\langle Ax - Ay, x - y \rangle \geq \gamma \|x - y\|^{2}$;

  2. $\gamma$-inverse strongly monotone if there is a positive number $\gamma$ such that $\langle Ax - Ay, x - y \rangle \geq \gamma \|Ax - Ay\|^{2}$;

  3. monotone if $\langle Ax - Ay, x - y \rangle \geq 0$;

  4. $\gamma$-strongly pseudomonotone if there is a positive number $\gamma$ such that $\langle Ax, y - x \rangle \geq 0 \implies \langle Ay, y - x \rangle \geq \gamma \|x - y\|^{2}$;

  5. pseudomonotone if $\langle Ax, y - x \rangle \geq 0 \implies \langle Ay, y - x \rangle \geq 0$;

  6. $L$-Lipschitz continuous if there is $L > 0$ such that $\|Ax - Ay\| \leq L \|x - y\|$;

  7. sequentially weakly continuous if, for any sequence $\{x_{n}\}$ that converges weakly to a point $x$, $\{Ax_{n}\}$ converges weakly to $Ax$.

It can be easily checked that the following relations hold: $(1) \Rightarrow (3) \Rightarrow (5)$ and $(1) \Rightarrow (4) \Rightarrow (5)$. Note that the converse implications are generally false. Recall that the mapping $P_{C}$ is called the metric projection from $H$ onto $C$ if, for each $x \in H$, there is a unique nearest point in $C$, represented by $P_{C}x$, such that $\|x - P_{C}x\| \leq \|x - y\|$ for all $y \in C$.

The oldest and simplest projection method for solving variational inequality problems is the projected gradient method, which reads as follows:

(PGM) $x_{n+1} = P_{C}(x_{n} - \lambda A x_{n}), \quad n \geq 1,$

where $P_{C}$ denotes the metric projection from $H$ onto $C$, the mapping $A$ is $L$-Lipschitz continuous and $\gamma$-strongly monotone, and the step size $\lambda \in (0, 2\gamma/L^{2})$. The iterative sequence $\{x_{n}\}$ defined by (PGM) then converges to a solution of (VIP) provided that $VI(C, A)$ is nonempty. It should be noted that the iterative sequence generated by (PGM) does not necessarily converge when the mapping $A$ is "only" monotone. Recently, Malitsky PRGM introduced a projected reflected gradient method, which can be viewed as an improvement of (PGM). Indeed, the sequence generated by this method reads as follows:

(PRGM) $x_{n+1} = P_{C}\big(x_{n} - \lambda A(2x_{n} - x_{n-1})\big), \quad n \geq 1.$

He proved that the sequence created by the iterative scheme (PRGM) converges weakly to a solution of (VIP) when the mapping $A$ is monotone. Further extensions of (PRGM) can be found in PRGM1 ; PRGM2 .
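To make the two updates concrete, here is a minimal Python sketch of (PGM) and (PRGM); the operator `A`, the projector `proj_C`, the step size `lam` and the iteration budget are placeholders supplied by the caller, not quantities fixed by the paper.

```python
def pgm(A, proj_C, x0, lam, iters=1000):
    """Projected gradient method: x_{n+1} = P_C(x_n - lam * A(x_n))."""
    x = x0
    for _ in range(iters):
        x = proj_C(x - lam * A(x))
    return x

def prgm(A, proj_C, x0, lam, iters=1000):
    """Projected reflected gradient method:
    x_{n+1} = P_C(x_n - lam * A(2*x_n - x_{n-1}))."""
    x_prev = x = x0
    for _ in range(iters):
        x_prev, x = x, proj_C(x - lam * A(2 * x - x_prev))
    return x
```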

In much of the research on solving variational inequalities governed by pseudomonotone and Lipschitz continuous operators, the most commonly used algorithm is the extragradient method (see EGM ) and its variants. Indeed, Korpelevich proposed the extragradient method (EGM) in EGM to find solutions of saddle point problems in finite-dimensional spaces. The details of EGM are described as follows:

(EGM) $\begin{cases} y_{n} = P_{C}(x_{n} - \lambda A x_{n}), \\ x_{n+1} = P_{C}(x_{n} - \lambda A y_{n}), \end{cases}$

where the mapping $A$ is $L$-Lipschitz continuous and monotone, and the fixed step size $\lambda \in (0, 1/L)$. Under the condition $VI(C, A) \neq \emptyset$, the iterative sequence defined by (EGM) converges weakly to an element of $VI(C, A)$. In the past few decades, EGM has been considered and extended by many authors for solving (VIP) in infinite-dimensional spaces; see, e.g., SILD ; SLD ; tanarxiv and the references therein. Recently, Vuong EGMpVIP extended EGM to solve pseudomonotone variational inequalities in Hilbert spaces and proved that the iterative sequence constructed by the algorithm converges weakly to a solution of (VIP). On the other hand, it is not easy to compute the projection onto a general closed convex set $C$, especially when $C$ has a complex structure. Note that in the extragradient method, two projections onto the closed convex set $C$ need to be computed in each iteration, which may severely affect the computational performance of the algorithm.
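As an illustration, here is a minimal Python sketch of (EGM); for an $L$-Lipschitz monotone `A`, the fixed step `lam` should be taken in $(0, 1/L)$.

```python
def egm(A, proj_C, x0, lam, iters=1000):
    """Korpelevich extragradient method: two projections onto C per iteration."""
    x = x0
    for _ in range(iters):
        y = proj_C(x - lam * A(x))   # extrapolation (prediction) step
        x = proj_C(x - lam * A(y))   # correction step
    return x
```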

Next, we introduce two types of methods that enhance the numerical efficiency of EGM. The first approach is Tseng's extragradient method (referred to as TEGM, also known as the forward-backward-forward method) proposed by Tseng tseng . The advantage of this method is that the projection onto the feasible set needs to be computed only once in each iteration. More precisely, TEGM is expressed as follows:

(TEGM) $\begin{cases} y_{n} = P_{C}(x_{n} - \lambda A x_{n}), \\ x_{n+1} = y_{n} - \lambda (A y_{n} - A x_{n}), \end{cases}$

where the mapping $A$ is $L$-Lipschitz continuous and monotone, and the fixed step size $\lambda \in (0, 1/L)$. The iterative sequence formulated by (TEGM) then converges to a solution of (VIP) provided that $VI(C, A)$ is nonempty. Very recently, Bot, Csetnek and Vuong in their recent work BotpVIP proposed a Tseng's forward-backward-forward algorithm for solving pseudomonotone variational inequalities in Hilbert spaces and performed an asymptotic analysis of the generated trajectories. The second method is the subgradient extragradient method (SEGM) proposed by Censor, Gibali and Reich SEGM . It can be regarded as a modification of EGM: the second projection in (EGM) is replaced by a projection onto a half-space. SEGM is calculated as follows:

(SEGM) $\begin{cases} y_{n} = P_{C}(x_{n} - \lambda A x_{n}), \\ T_{n} = \{ x \in H : \langle x_{n} - \lambda A x_{n} - y_{n}, x - y_{n} \rangle \leq 0 \}, \\ x_{n+1} = P_{T_{n}}(x_{n} - \lambda A y_{n}), \end{cases}$

where the mapping $A$ is $L$-Lipschitz continuous and monotone, and the fixed step size $\lambda \in (0, 1/L)$. SEGM converges not only for monotone variational inequalities (see CGR1 ) but also for pseudomonotone ones (see CGR2 ; TSI ).
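The difference between the two remedies is easy to see in code; below is a hedged Python sketch of (TEGM) and (SEGM), where the half-space projection in SEGM is evaluated in closed form.

```python
import numpy as np

def tegm(A, proj_C, x0, lam, iters=1000):
    """Tseng's forward-backward-forward method: one projection onto C."""
    x = x0
    for _ in range(iters):
        Ax = A(x)
        y = proj_C(x - lam * Ax)
        x = y - lam * (A(y) - Ax)    # explicit correction, no second projection
    return x

def segm(A, proj_C, x0, lam, iters=1000):
    """Subgradient extragradient method: second projection onto a half-space."""
    x = x0
    for _ in range(iters):
        Ax = A(x)
        y = proj_C(x - lam * Ax)
        u = x - lam * Ax - y         # T_n = {z : <u, z - y> <= 0}
        v = x - lam * A(y)
        # closed-form projection of v onto the half-space T_n
        x = v - max(np.dot(u, v - y), 0.0) / max(np.dot(u, u), 1e-16) * u
    return x
```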

It is worth mentioning that (EGM), (TEGM) and (SEGM) are only weakly convergent in infinite-dimensional Hilbert spaces. Some practical problems that occur in the fields of image processing, quantum mechanics, medical imaging and machine learning need to be modeled and analyzed in infinite-dimensional spaces. Therefore, strong convergence results are preferable to weak convergence results in infinite-dimensional spaces. Recently, Thong and Vuong MaTEGM introduced a modified Mann-type Tseng's extragradient method to solve the (VIP) involving a pseudomonotone mapping in Hilbert spaces. Their method uses an Armijo-like line search to remove the dependence on the Lipschitz constant of the mapping $A$. Indeed, the proposed algorithm is stated as follows:

(MaTEGM) $\begin{cases} y_{n} = P_{C}(x_{n} - \lambda_{n} A x_{n}), \\ z_{n} = y_{n} - \lambda_{n}(A y_{n} - A x_{n}), \\ x_{n+1} = (1 - \alpha_{n} - \beta_{n}) x_{n} + \beta_{n} z_{n}, \end{cases}$

where the mapping $A$ is pseudomonotone, sequentially weakly continuous on $C$ and uniformly continuous on bounded subsets of $H$; $\{\alpha_{n}\}$ and $\{\beta_{n}\}$ are two real positive sequences in $(0, 1)$ such that $\{\beta_{n}\} \subset (a, 1 - \alpha_{n})$ for some $a > 0$ and $\lim_{n\to\infty} \alpha_{n} = 0$, $\sum_{n=1}^{\infty} \alpha_{n} = \infty$; and the step size $\lambda_{n} = \gamma l^{m_{n}}$, where $m_{n}$ is the smallest non-negative integer satisfying $\gamma l^{m_{n}} \|A x_{n} - A y_{n}\| \leq \mu \|x_{n} - y_{n}\|$ ($\gamma > 0$, $l \in (0, 1)$, $\mu \in (0, 1)$). They showed that the iteration scheme formed by (MaTEGM) converges strongly to an element $u \in VI(C, A)$, where $u = P_{VI(C, A)}(0)$.
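For concreteness, here is a sketch of the Armijo-like search behind the step size $\lambda_{n} = \gamma l^{m_{n}}$ as reconstructed above; the default constants and the trial cap are illustrative choices of ours, not values from MaTEGM.

```python
import numpy as np

def armijo_stepsize(A, proj_C, x, gamma=1.0, l=0.5, mu=0.5, max_trials=50):
    """Return lam = gamma * l**m with m the smallest non-negative integer
    such that lam * ||A(x) - A(y)|| <= mu * ||x - y||, y = P_C(x - lam*A(x))."""
    Ax = A(x)
    lam = gamma
    for _ in range(max_trials):
        y = proj_C(x - lam * Ax)
        if lam * np.linalg.norm(A(y) - Ax) <= mu * np.linalg.norm(x - y):
            break
        lam *= l                     # shrink the trial step and retry
    return lam, y
```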

To accelerate the convergence rate of the algorithms, in 1964, Polyak inertial considered the second-order dynamical system $\ddot{x}(t) + \gamma \dot{x}(t) + \nabla f(x(t)) = 0$, where $\gamma > 0$, $\nabla f$ represents the gradient of $f$, and $\dot{x}(t)$ and $\ddot{x}(t)$ denote the first and second derivatives of $x$ at $t$, respectively. This dynamical system is called the Heavy Ball with Friction (HBF).

Next, we consider a discretization of this dynamical system (HBF), that is,

$\frac{x_{n+1} - 2x_{n} + x_{n-1}}{h^{2}} + \gamma \frac{x_{n} - x_{n-1}}{h} + \nabla f(x_{n}) = 0.$

Through a direct calculation, we get the following form:

$x_{n+1} = x_{n} + \beta (x_{n} - x_{n-1}) - \alpha \nabla f(x_{n}),$

where $\beta = 1 - \gamma h$ and $\alpha = h^{2}$. This can be considered as the following two-step iteration scheme:

$y_{n} = x_{n} + \beta (x_{n} - x_{n-1}), \qquad x_{n+1} = y_{n} - \alpha \nabla f(x_{n}).$

This iteration is now called the inertial extrapolation algorithm, and the term $\beta (x_{n} - x_{n-1})$ is referred to as the inertial (extrapolation) term. In recent years, inertial techniques have attracted extensive research in the optimization community as an acceleration device. Many scholars have built various fast numerical algorithms by employing inertial techniques. These algorithms have shown advantages in theory and in computational experiments and have been successfully applied to many problems; see, for instance, FISTA ; GHjfpta ; zhoucoam and the references therein.
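In code, inertial extrapolation costs only one extra vector operation per iteration; here is a minimal sketch for smooth minimization with a user-supplied gradient `grad_f` (with $\alpha = h^{2}$ and $\beta = 1 - \gamma h$ as above):

```python
def inertial_gradient(grad_f, x0, alpha, beta, iters=1000):
    """Inertial (heavy-ball) extrapolation:
    y_n = x_n + beta*(x_n - x_{n-1});  x_{n+1} = y_n - alpha*grad_f(x_n)."""
    x_prev = x = x0
    for _ in range(iters):
        y = x + beta * (x - x_prev)           # extrapolation point
        x_prev, x = x, y - alpha * grad_f(x)  # gradient evaluated at x_n
    return x
```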

Very recently, inspired by the inertial method, the SEGM and the viscosity method, Thong, Hieu and Rassias ViSEGM presented a viscosity-type inertial subgradient extragradient algorithm to solve pseudomonotone (VIP) in Hilbert spaces. The algorithm is of the form:

(ViSEGM) $\begin{cases} w_{n} = x_{n} + \theta_{n}(x_{n} - x_{n-1}), \\ y_{n} = P_{C}(w_{n} - \lambda_{n} A w_{n}), \\ T_{n} = \{ x \in H : \langle w_{n} - \lambda_{n} A w_{n} - y_{n}, x - y_{n} \rangle \leq 0 \}, \\ z_{n} = P_{T_{n}}(w_{n} - \lambda_{n} A y_{n}), \\ x_{n+1} = \alpha_{n} f(x_{n}) + (1 - \alpha_{n}) z_{n}, \end{cases}$

where the mapping $A$ is pseudomonotone, $L$-Lipschitz continuous and sequentially weakly continuous on $C$, and the step sizes $\{\lambda_{n}\}$ and the inertia parameters $\{\theta_{n}\}$ are updated in the following ways:

$\lambda_{n+1} = \begin{cases} \min\left\{ \frac{\mu \|w_{n} - y_{n}\|}{\|A w_{n} - A y_{n}\|}, \lambda_{n} \right\} & \text{if } A w_{n} \neq A y_{n}, \\ \lambda_{n} & \text{otherwise}, \end{cases} \qquad \theta_{n} = \begin{cases} \min\left\{ \frac{\epsilon_{n}}{\|x_{n} - x_{n-1}\|}, \theta \right\} & \text{if } x_{n} \neq x_{n-1}, \\ \theta & \text{otherwise}. \end{cases}$

Note that Algorithm (ViSEGM) uses a simple step size rule, which is generated through some computations with previously known information in each iteration. Therefore, it can work well without prior knowledge of the Lipschitz constant of the mapping $A$. They confirmed the strong convergence of (ViSEGM) under mild assumptions on the cost mapping and the parameters.
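Both update rules are essentially free to compute from quantities already available in the iteration; here is a sketch, with function names of our own choosing:

```python
import numpy as np

def next_stepsize(lam, mu, w, y, Aw, Ay):
    """lam_{n+1} = min(mu*||w - y|| / ||Aw - Ay||, lam) if Aw != Ay, else lam."""
    denom = np.linalg.norm(Aw - Ay)
    return min(mu * np.linalg.norm(w - y) / denom, lam) if denom > 0 else lam

def inertia_parameter(theta, eps_n, x, x_prev):
    """theta_n = min(eps_n / ||x_n - x_{n-1}||, theta) if x_n != x_{n-1}, else theta."""
    diff = np.linalg.norm(x - x_prev)
    return min(eps_n / diff, theta) if diff > 0 else theta
```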

Motivated and stimulated by the above works, we introduce a new inertial Tseng's extragradient algorithm with a new step size for solving the pseudomonotone (VIP) in Hilbert spaces. The advantages of our algorithm are: (1) only one projection onto the feasible set needs to be computed in each iteration; (2) it does not require prior knowledge of the Lipschitz constant of the cost mapping; (3) the inertial term gives it a faster convergence speed. Under mild assumptions, we establish a strong convergence theorem for the suggested algorithm. Lastly, some computational tests in finite- and infinite-dimensional settings are presented to verify our theoretical results. Furthermore, our algorithm is also applied to solve optimal control problems. Our algorithm improves some existing results MaTEGM ; ViSEGM ; THna ; YLNA ; FQOPT2020 .

The paper is organized as follows. Some essential definitions and technical lemmas are given in the next section. In Section 3, we propose an algorithm and analyze its convergence. Some computational tests and applications verifying our theoretical results are presented in Section 4. Finally, the paper ends with a brief summary.

2 Preliminaries

Let $C$ be a nonempty, closed and convex subset of a real Hilbert space $H$. The weak convergence and strong convergence of $\{x_{n}\}$ to $x$ are represented by $x_{n} \rightharpoonup x$ and $x_{n} \to x$, respectively. For each $x, y \in H$ and $\alpha \in \mathbb{R}$, we have the following facts:

  1. $\|x + y\|^{2} \leq \|x\|^{2} + 2\langle y, x + y \rangle$;

  2. $\|\alpha x + (1 - \alpha) y\|^{2} = \alpha \|x\|^{2} + (1 - \alpha) \|y\|^{2} - \alpha (1 - \alpha) \|x - y\|^{2}$.

It is known that $P_{C}$ has the following basic properties:

  • $\langle x - P_{C}x, y - P_{C}x \rangle \leq 0$ for all $x \in H$ and $y \in C$;

  • $\|P_{C}x - P_{C}y\|^{2} \leq \langle P_{C}x - P_{C}y, x - y \rangle$ for all $x, y \in H$;

  • $\|P_{C}x - y\|^{2} \leq \|x - y\|^{2} - \|x - P_{C}x\|^{2}$ for all $x \in H$ and $y \in C$.

We give some explicit formulas for computing projections onto special feasible sets.

  1. The projection of $x$ onto a half-space $H_{u,v} = \{y : \langle u, y \rangle \leq v\}$ (with $u \neq 0$) is given by $P_{H_{u,v}}(x) = x - \max\{(\langle u, x \rangle - v)/\|u\|^{2}, 0\}\, u$.

  2. The projection of $x$ onto a box $K = \{y : a \leq y \leq b\}$ is given by $(P_{K}(x))_{i} = \min\{\max\{x_{i}, a_{i}\}, b_{i}\}$.

  3. The projection of $x$ onto a ball $B(p, q) = \{y : \|y - p\| \leq q\}$ is given by $P_{B(p,q)}(x) = p + \frac{q}{\max\{\|x - p\|, q\}}(x - p)$.
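All three formulas are one-liners in NumPy; here is a sketch (the argument names mirror the sets above):

```python
import numpy as np

def proj_halfspace(x, u, v):
    """Projection onto the half-space {y : <u, y> <= v} (u != 0)."""
    return x - max(np.dot(u, x) - v, 0.0) / np.dot(u, u) * u

def proj_box(x, a, b):
    """Componentwise projection onto the box {y : a <= y <= b}."""
    return np.clip(x, a, b)

def proj_ball(x, p, q):
    """Projection onto the closed ball {y : ||y - p|| <= q}."""
    return p + q / max(np.linalg.norm(x - p), q) * (x - p)
```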

The following lemmas play an important role in our proof.

Lemma 2.1 (Cy1992).

Assume that $C$ is a nonempty, closed and convex subset of a real Hilbert space $H$ and the operator $A: C \to H$ is continuous and pseudomonotone. Then, $x^{*} \in C$ is a solution of (VIP) if and only if $\langle Ax, x - x^{*} \rangle \geq 0$ for all $x \in C$.

Lemma 2.2 (Sy2012).

Let $\{a_{n}\}$ be a sequence of nonnegative real numbers, $\{b_{n}\}$ be a sequence of real numbers, and $\{\alpha_{n}\}$ be a sequence in $(0, 1)$ such that $\sum_{n=1}^{\infty} \alpha_{n} = \infty$. Assume that

$a_{n+1} \leq (1 - \alpha_{n}) a_{n} + \alpha_{n} b_{n}, \quad n \geq 1.$

If $\limsup_{k\to\infty} b_{n_{k}} \leq 0$ for every subsequence $\{a_{n_{k}}\}$ of $\{a_{n}\}$ satisfying $\liminf_{k\to\infty} (a_{n_{k}+1} - a_{n_{k}}) \geq 0$, then $\lim_{n\to\infty} a_{n} = 0$.

3 Main results

In this section, we present a self-adaptive inertial viscosity-type Tseng's extragradient algorithm, which combines the inertial method, the viscosity method and Tseng's extragradient method. The major benefit of this algorithm is that the step size is updated automatically at each iteration without any line search procedure. Moreover, our iterative scheme needs to compute the projection only once in each iteration. Before stating our main result, we impose the following five assumptions.

  (a) The feasible set $C$ is nonempty, closed and convex.

  (b) The solution set of the (VIP) is nonempty, that is, $VI(C, A) \neq \emptyset$.

  (c) The mapping $A: H \to H$ is pseudomonotone and $L$-Lipschitz continuous on $H$, and sequentially weakly continuous on $C$.

  (d) The mapping $f: H \to H$ is $\rho$-contractive with $\rho \in [0, 1)$.

  (e) The positive sequence $\{\epsilon_{n}\}$ satisfies $\lim_{n\to\infty} \epsilon_{n}/\alpha_{n} = 0$, where $\{\alpha_{n}\} \subset (0, 1)$ is such that $\lim_{n\to\infty} \alpha_{n} = 0$ and $\sum_{n=1}^{\infty} \alpha_{n} = \infty$.

Now, we can state the details of the iterative method. Our algorithm is described as follows.

  Initialization: Given $\theta > 0$, $\lambda_{1} > 0$, $\mu \in (0, 1)$. Let $x_{0}, x_{1} \in H$ be two initial points.
  Iterative Steps: Calculate the next iteration point $x_{n+1}$ as follows:
$\begin{cases} w_{n} = x_{n} + \theta_{n}(x_{n} - x_{n-1}), \\ y_{n} = P_{C}(w_{n} - \lambda_{n} A w_{n}), \\ z_{n} = y_{n} - \lambda_{n}(A y_{n} - A w_{n}), \\ x_{n+1} = \alpha_{n} f(x_{n}) + (1 - \alpha_{n}) z_{n}, \end{cases}$
where $\{\theta_{n}\}$ and $\{\lambda_{n}\}$ are updated by (3.1) and (3.2), respectively.
(3.1) $\theta_{n} = \begin{cases} \min\left\{ \frac{\epsilon_{n}}{\|x_{n} - x_{n-1}\|}, \theta \right\} & \text{if } x_{n} \neq x_{n-1}, \\ \theta & \text{otherwise}; \end{cases}$
(3.2) $\lambda_{n+1} = \begin{cases} \min\left\{ \frac{\mu \|w_{n} - y_{n}\|}{\|A w_{n} - A y_{n}\|}, \lambda_{n} \right\} & \text{if } A w_{n} \neq A y_{n}, \\ \lambda_{n} & \text{otherwise}. \end{cases}$
Algorithm 1 Self-adaptive inertial viscosity-type Tseng's extragradient algorithm
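For illustration, here is a minimal NumPy sketch of Algorithm 1 under the reconstruction above; the callables `A`, `f`, `proj_C` and the sample sequences `alpha_n = 1/(n+1)`, `eps_n = alpha_n/(n+1)` (which satisfy Assumption (e)) are our own illustrative choices, not values prescribed by the paper.

```python
import numpy as np

def inertial_viscosity_tseng(A, f, proj_C, x0, x1,
                             theta=0.5, lam=1.0, mu=0.4, iters=500):
    """Sketch of the self-adaptive inertial viscosity-type Tseng's method."""
    x_prev, x = x0, x1
    for n in range(1, iters + 1):
        alpha_n = 1.0 / (n + 1)          # sample alpha_n: -> 0, sum = inf
        eps_n = alpha_n / (n + 1)        # sample eps_n: eps_n/alpha_n -> 0
        # (3.1): inertia parameter
        diff = np.linalg.norm(x - x_prev)
        theta_n = min(eps_n / diff, theta) if diff > 0 else theta
        # Step 1: inertial extrapolation
        w = x + theta_n * (x - x_prev)
        Aw = A(w)
        # Step 2: forward-backward step (the only projection onto C)
        y = proj_C(w - lam * Aw)
        Ay = A(y)
        # Step 3: Tseng forward correction
        z = y - lam * (Ay - Aw)
        # (3.2): self-adaptive step size for the next iteration
        denom = np.linalg.norm(Aw - Ay)
        if denom > 0:
            lam = min(mu * np.linalg.norm(w - y) / denom, lam)
        # Step 4: viscosity step
        x_prev, x = x, alpha_n * f(x) + (1 - alpha_n) * z
    return x
```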
Remark 3.1.

It follows from (3.1) and Assumption (e) that

$\lim_{n\to\infty} \frac{\theta_{n}}{\alpha_{n}} \|x_{n} - x_{n-1}\| = 0.$

Indeed, we obtain $\theta_{n} \|x_{n} - x_{n-1}\| \leq \epsilon_{n}$ for all $n$, which together with $\lim_{n\to\infty} \epsilon_{n}/\alpha_{n} = 0$ yields

$\lim_{n\to\infty} \frac{\theta_{n}}{\alpha_{n}} \|x_{n} - x_{n-1}\| \leq \lim_{n\to\infty} \frac{\epsilon_{n}}{\alpha_{n}} = 0.$

Lemma 3.1.

The sequence $\{\lambda_{n}\}$ formulated by (3.2) is nonincreasing and $\lim_{n\to\infty} \lambda_{n} = \lambda \geq \min\{\lambda_{1}, \mu/L\}$.

Proof.

On account of (3.2), we have $\lambda_{n+1} \leq \lambda_{n}$ for all $n$. Hence, $\{\lambda_{n}\}$ is nonincreasing. Moreover, we get that $\|A w_{n} - A y_{n}\| \leq L \|w_{n} - y_{n}\|$ by means of $A$ being $L$-Lipschitz continuous. Thus,

$\frac{\mu \|w_{n} - y_{n}\|}{\|A w_{n} - A y_{n}\|} \geq \frac{\mu}{L} \quad \text{if } A w_{n} \neq A y_{n},$

which together with (3.2) implies that $\lambda_{n} \geq \min\{\lambda_{1}, \mu/L\}$. Therefore, $\lim_{n\to\infty} \lambda_{n}$ exists since the sequence $\{\lambda_{n}\}$ is lower bounded and nonincreasing. ∎

The following lemmas play a significant role in the convergence proof of our algorithm.

Lemma 3.2.

Suppose that Assumptions (a)–(c) hold. Let $\{w_{n}\}$ and $\{y_{n}\}$ be two sequences formulated by Algorithm 1. If there exists a subsequence $\{w_{n_{k}}\}$ of $\{w_{n}\}$ convergent weakly to $z \in H$ and $\lim_{k\to\infty} \|w_{n_{k}} - y_{n_{k}}\| = 0$, then $z \in VI(C, A)$.

Proof.

From the property of projection and $y_{n_{k}} = P_{C}(w_{n_{k}} - \lambda_{n_{k}} A w_{n_{k}})$, we have

$\langle w_{n_{k}} - \lambda_{n_{k}} A w_{n_{k}} - y_{n_{k}}, x - y_{n_{k}} \rangle \leq 0, \quad \forall x \in C,$

which can be written as follows:

$\frac{1}{\lambda_{n_{k}}} \langle w_{n_{k}} - y_{n_{k}}, x - y_{n_{k}} \rangle \leq \langle A w_{n_{k}}, x - y_{n_{k}} \rangle, \quad \forall x \in C.$

Through a direct calculation, we get

(3.3) $\frac{1}{\lambda_{n_{k}}} \langle w_{n_{k}} - y_{n_{k}}, x - y_{n_{k}} \rangle + \langle A w_{n_{k}}, y_{n_{k}} - w_{n_{k}} \rangle \leq \langle A w_{n_{k}}, x - w_{n_{k}} \rangle, \quad \forall x \in C.$

We have that $\{w_{n_{k}}\}$ is bounded since it converges weakly to $z \in H$. Then, from the Lipschitz continuity of $A$ and $\lim_{k\to\infty} \|w_{n_{k}} - y_{n_{k}}\| = 0$, we obtain that $\{A w_{n_{k}}\}$ and $\{y_{n_{k}}\}$ are also bounded. Since $\lambda_{n_{k}} \geq \min\{\lambda_{1}, \mu/L\} > 0$, one concludes from (3.3) that

(3.4) $\liminf_{k\to\infty} \langle A w_{n_{k}}, x - w_{n_{k}} \rangle \geq 0, \quad \forall x \in C.$

Moreover, one has

(3.5) $\langle A y_{n_{k}}, x - y_{n_{k}} \rangle = \langle A y_{n_{k}} - A w_{n_{k}}, x - w_{n_{k}} \rangle + \langle A w_{n_{k}}, x - w_{n_{k}} \rangle + \langle A y_{n_{k}}, w_{n_{k}} - y_{n_{k}} \rangle, \quad \forall x \in C.$

Since $\lim_{k\to\infty} \|w_{n_{k}} - y_{n_{k}}\| = 0$ and $A$ is Lipschitz continuous, we get $\lim_{k\to\infty} \|A w_{n_{k}} - A y_{n_{k}}\| = 0$. This together with (3.4) and (3.5) yields that $\liminf_{k\to\infty} \langle A y_{n_{k}}, x - y_{n_{k}} \rangle \geq 0$ for all $x \in C$.

Next, we select a decreasing sequence of positive numbers $\{\epsilon_{k}\}$ such that $\epsilon_{k} \to 0$ as $k \to \infty$. For any $k$, we denote by $N_{k}$ the smallest positive integer such that

(3.6) $\langle A y_{n_{j}}, x - y_{n_{j}} \rangle + \epsilon_{k} \geq 0, \quad \forall j \geq N_{k}.$

It can be easily seen that the sequence $\{N_{k}\}$ is increasing because $\{\epsilon_{k}\}$ is decreasing. Moreover, for any $k$, from $\{y_{N_{k}}\} \subset C$, we can assume $A y_{N_{k}} \neq 0$ (otherwise, $y_{N_{k}}$ is a solution) and set $v_{N_{k}} = A y_{N_{k}} / \|A y_{N_{k}}\|^{2}$. Then, we get $\langle A y_{N_{k}}, v_{N_{k}} \rangle = 1$ for each $k$. Now, we can deduce from (3.6) that $\langle A y_{N_{k}}, x + \epsilon_{k} v_{N_{k}} - y_{N_{k}} \rangle \geq 0$ for all $x \in C$. According to the fact that $A$ is pseudomonotone on $H$, we can show that

$\langle A(x + \epsilon_{k} v_{N_{k}}), x + \epsilon_{k} v_{N_{k}} - y_{N_{k}} \rangle \geq 0,$

which further yields that

(3.7) $\langle A x, x - y_{N_{k}} \rangle \geq \langle A x - A(x + \epsilon_{k} v_{N_{k}}), x + \epsilon_{k} v_{N_{k}} - y_{N_{k}} \rangle - \epsilon_{k} \langle A x, v_{N_{k}} \rangle, \quad \forall x \in C.$

Now, we prove that $\lim_{k\to\infty} \epsilon_{k} v_{N_{k}} = 0$. We get that $y_{n_{k}} \rightharpoonup z$ since $w_{n_{k}} \rightharpoonup z$ and $\lim_{k\to\infty} \|w_{n_{k}} - y_{n_{k}}\| = 0$. From $\{y_{n}\} \subset C$, we have $z \in C$. In view of the fact that $A$ is sequentially weakly continuous on $C$, one has that $\{A y_{n_{k}}\}$ converges weakly to $A z$. One assumes that $A z \neq 0$ (otherwise, $z$ is a solution). According to the fact that the norm mapping is sequentially weakly lower semicontinuous, we obtain $0 < \|A z\| \leq \liminf_{k\to\infty} \|A y_{n_{k}}\|$. Using $\{y_{N_{k}}\} \subset \{y_{n_{k}}\}$ and $\epsilon_{k} \to 0$ as $k \to \infty$, we have

$0 \leq \limsup_{k\to\infty} \|\epsilon_{k} v_{N_{k}}\| = \limsup_{k\to\infty} \frac{\epsilon_{k}}{\|A y_{N_{k}}\|} \leq \frac{\limsup_{k\to\infty} \epsilon_{k}}{\liminf_{k\to\infty} \|A y_{n_{k}}\|} = 0.$

That is, $\lim_{k\to\infty} \epsilon_{k} v_{N_{k}} = 0$. Thus, from the facts that $A$ is Lipschitz continuous, the sequences $\{y_{N_{k}}\}$ and $\{v_{N_{k}}\}$ are bounded, and $\lim_{k\to\infty} \epsilon_{k} v_{N_{k}} = 0$, we can conclude from (3.7) that $\liminf_{k\to\infty} \langle A x, x - y_{N_{k}} \rangle \geq 0$ for all $x \in C$. Therefore,

$\langle A x, x - z \rangle = \lim_{k\to\infty} \langle A x, x - y_{N_{k}} \rangle \geq 0, \quad \forall x \in C.$

Consequently, we observe that $z \in VI(C, A)$ by Lemma 2.1. This completes the proof. ∎

Remark 3.2.

If $A$ is monotone, then the sequential weak continuity of $A$ is not required; see DSC .

Lemma 3.3.

Suppose that Assumptions (a)–(c) hold. Let the sequences $\{w_{n}\}$, $\{y_{n}\}$ and $\{z_{n}\}$ be formulated by Algorithm 1. Then, we have

$\|z_{n} - p\|^{2} \leq \|w_{n} - p\|^{2} - \left(1 - \mu^{2} \frac{\lambda_{n}^{2}}{\lambda_{n+1}^{2}}\right) \|w_{n} - y_{n}\|^{2}, \quad \forall p \in VI(C, A),$

and

$\|z_{n} - y_{n}\| \leq \mu \frac{\lambda_{n}}{\lambda_{n+1}} \|w_{n} - y_{n}\|.$

Proof.

First, using the definition of $\lambda_{n+1}$, one obtains

(3.8) $\|A w_{n} - A y_{n}\| \leq \frac{\mu}{\lambda_{n+1}} \|w_{n} - y_{n}\|, \quad \forall n \geq 1.$

Indeed, if $A w_{n} = A y_{n}$, then (3.8) clearly holds. Otherwise, it follows from (3.2) that

$\lambda_{n+1} = \min\left\{ \frac{\mu \|w_{n} - y_{n}\|}{\|A w_{n} - A y_{n}\|}, \lambda_{n} \right\} \leq \frac{\mu \|w_{n} - y_{n}\|}{\|A w_{n} - A y_{n}\|}.$

Consequently, we have

$\|A w_{n} - A y_{n}\| \leq \frac{\mu}{\lambda_{n+1}} \|w_{n} - y_{n}\|.$

Therefore, inequality (3.8) holds both when $A w_{n} = A y_{n}$ and when $A w_{n} \neq A y_{n}$. From the definition of $z_{n}$, one sees that

(3.9) $\|z_{n} - p\|^{2} = \|y_{n} - p\|^{2} + \lambda_{n}^{2} \|A y_{n} - A w_{n}\|^{2} - 2\lambda_{n} \langle y_{n} - p, A y_{n} - A w_{n} \rangle.$

Since $y_{n} = P_{C}(w_{n} - \lambda_{n} A w_{n})$ and $p \in C$, using the property of projection, we obtain

$\langle w_{n} - \lambda_{n} A w_{n} - y_{n}, y_{n} - p \rangle \geq 0,$

or equivalently

(3.10) $\|y_{n} - p\|^{2} \leq \|w_{n} - p\|^{2} - \|w_{n} - y_{n}\|^{2} + 2\lambda_{n} \langle A w_{n}, p - y_{n} \rangle.$

From (3.8), (3.9) and (3.10), we have

(3.11) $\|z_{n} - p\|^{2} \leq \|w_{n} - p\|^{2} - \left(1 - \mu^{2} \frac{\lambda_{n}^{2}}{\lambda_{n+1}^{2}}\right) \|w_{n} - y_{n}\|^{2} - 2\lambda_{n} \langle A y_{n}, y_{n} - p \rangle.$

From $p \in VI(C, A)$ and $y_{n} \in C$, one has $\langle A p, y_{n} - p \rangle \geq 0$. Using the pseudomonotonicity of $A$, we get

(3.12) $\langle A y_{n}, y_{n} - p \rangle \geq 0.$

Combining (3.11) and (3.12), we can show that

$\|z_{n} - p\|^{2} \leq \|w_{n} - p\|^{2} - \left(1 - \mu^{2} \frac{\lambda_{n}^{2}}{\lambda_{n+1}^{2}}\right) \|w_{n} - y_{n}\|^{2}.$

According to the definition of $z_{n}$ and (3.8), we obtain

$\|z_{n} - y_{n}\| = \lambda_{n} \|A y_{n} - A w_{n}\| \leq \mu \frac{\lambda_{n}}{\lambda_{n+1}} \|w_{n} - y_{n}\|.$

This completes the proof of Lemma 3.3. ∎

Theorem 3.1.

Suppose that Assumptions (a)–(e) hold. Then the iterative sequence $\{x_{n}\}$ formulated by Algorithm 1 converges to $x^{*} \in VI(C, A)$ in norm, where $x^{*} = P_{VI(C, A)} f(x^{*})$.

Proof.

Claim 1. The sequence $\{x_{n}\}$ is bounded. According to Lemma 3.1, we get that

$\lim_{n\to\infty} \left(1 - \mu^{2} \frac{\lambda_{n}^{2}}{\lambda_{n+1}^{2}}\right) = 1 - \mu^{2} > 0.$

Therefore, there is a constant $n_{0} \in \mathbb{N}$ that satisfies $1 - \mu^{2} \lambda_{n}^{2}/\lambda_{n+1}^{2} > 0$ for all $n \geq n_{0}$. From Lemma 3.3, one has

(3.13) $\|z_{n} - p\| \leq \|w_{n} - p\|, \quad \forall n \geq n_{0}.$

By the definition of $w_{n}$, one sees that

(3.14) $\|w_{n} - p\| \leq \|x_{n} - p\| + \alpha_{n} \cdot \frac{\theta_{n}}{\alpha_{n}} \|x_{n} - x_{n-1}\|.$

From Remark 3.1, one gets $\lim_{n\to\infty} (\theta_{n}/\alpha_{n}) \|x_{n} - x_{n-1}\| = 0$. Thus, there is a constant $M_{1} > 0$ that satisfies

(3.15) $\frac{\theta_{n}}{\alpha_{n}} \|x_{n} - x_{n-1}\| \leq M_{1}, \quad \forall n \geq 1.$

Using (3.13), (3.14) and (3.15), we obtain

(3.16) $\|z_{n} - p\| \leq \|w_{n} - p\| \leq \|x_{n} - p\| + \alpha_{n} M_{1}, \quad \forall n \geq n_{0}.$

Using the definition of $x_{n+1}$ and (3.16), we have