1 Introduction
The goal of this study is to investigate a fast iterative method for finding a solution of the variational inequality problem (in short, VIP). Throughout this paper, $H$ denotes a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and induced norm $\|\cdot\|$, and $C$ is a nonempty closed convex subset of $H$. The problem under consideration reads as follows:
(VIP)  find $x^* \in C$ such that $\langle Ax^*, x - x^* \rangle \ge 0$ for all $x \in C$,
where $A\colon H \to H$ is a nonlinear mapping. We denote the solution set of (VIP) by $\Omega$.
Variational inequalities are powerful tools and models in applied mathematics and play an essential role in optimization, economics, transportation, mathematical programming, engineering mechanics, and other fields (see, for instance, QA ; SYVS ; AIY ). In the last decades, various effective solution methods have been investigated and developed to solve problems of type (VIP); see, e.g., Cho2 ; SIna ; tanjnca and the references therein. It should be pointed out that these approaches usually require the mapping $A$ to satisfy certain monotonicity properties. In this paper, we consider the case where the mapping $A$ associated with (VIP) is pseudomonotone (see the definition below), which is a broader class than that of monotone mappings.
Let us review some classes of nonlinear mappings for further use. For all $x, y \in H$, a mapping $A\colon H \to H$ is said to be:

strongly monotone if there is a positive number $\gamma$ such that
$\langle Ax - Ay, x - y \rangle \ge \gamma \|x - y\|^2$;

inverse strongly monotone if there is a positive number $\gamma$ such that
$\langle Ax - Ay, x - y \rangle \ge \gamma \|Ax - Ay\|^2$;

monotone if
$\langle Ax - Ay, x - y \rangle \ge 0$;

strongly pseudomonotone if there is a positive number $\gamma$ such that
$\langle Ax, y - x \rangle \ge 0 \Longrightarrow \langle Ay, y - x \rangle \ge \gamma \|x - y\|^2$;

pseudomonotone if
$\langle Ax, y - x \rangle \ge 0 \Longrightarrow \langle Ay, y - x \rangle \ge 0$;

Lipschitz continuous if there is $L > 0$ such that
$\|Ax - Ay\| \le L \|x - y\|$;

sequentially weakly continuous if, for each sequence $\{x_n\}$ that converges weakly to a point $x$, the sequence $\{Ax_n\}$ converges weakly to $Ax$.
It can easily be checked that the following implications hold: strongly monotone $\Rightarrow$ monotone $\Rightarrow$ pseudomonotone, and strongly monotone $\Rightarrow$ strongly pseudomonotone $\Rightarrow$ pseudomonotone. Note that the converse implications are generally false. Recall that a mapping $P_C\colon H \to C$ is called the metric projection from $H$ onto $C$ if, for each $x \in H$, there is a unique nearest point of $C$, represented by $P_C x$, such that $\|x - P_C x\| \le \|x - y\|$ for all $y \in C$.
The oldest and simplest projection approach to solving variational inequality problems is the projected-gradient method, which reads as follows:
(PGM)  $x_{n+1} = P_C(x_n - \lambda A x_n), \quad n \ge 0,$
where $P_C$ represents the metric projection onto $C$, the mapping $A$ is $L$-Lipschitz continuous and $\gamma$-strongly monotone, and the step size $\lambda \in (0, 2\gamma/L^2)$. Then the iterative sequence defined by (PGM) converges to the solution of (VIP) provided that the solution set is nonempty. It should be noted that the iterative sequence generated by (PGM) does not necessarily converge when the mapping $A$ is “only” monotone. Recently, Malitsky PRGM introduced a projected reflected gradient method, which can be viewed as an improvement of (PGM). Indeed, the sequence generated by this method is as follows:
(PRGM)  $x_{n+1} = P_C\big(x_n - \lambda A(2x_n - x_{n-1})\big), \quad n \ge 1.$
He proved that the sequence created by the iterative scheme (PRGM) converges weakly to a solution of (VIP) when the mapping $A$ is monotone and Lipschitz continuous. Further extensions of (PRGM) can be found in PRGM1 ; PRGM2 .
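In a finite-dimensional setting, the two projection schemes above can be sketched in a few lines. The operator, feasible set and step sizes below are illustrative choices of ours (not taken from the cited works); the affine operator is strongly monotone, so both methods converge.

```python
import numpy as np

def pgm(A, proj, x0, lam, iters=500):
    """Projected-gradient method: x_{n+1} = P_C(x_n - lam * A(x_n))."""
    x = x0
    for _ in range(iters):
        x = proj(x - lam * A(x))
    return x

def prgm(A, proj, x0, lam, iters=2000):
    """Projected reflected gradient method:
    x_{n+1} = P_C(x_n - lam * A(2*x_n - x_{n-1}))."""
    x_prev = x = x0
    for _ in range(iters):
        x_next = proj(x - lam * A(2.0 * x - x_prev))
        x_prev, x = x, x_next
    return x

# Illustrative strongly monotone affine operator A(x) = M x + q on a box.
M = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-1.0, -2.0])
A = lambda x: M @ x + q
proj = lambda x: np.clip(x, 0.0, 10.0)   # P_C for C = [0, 10]^2

x_star = pgm(A, proj, np.zeros(2), lam=0.2)
# At a solution the fixed-point residual ||x - P_C(x - lam*A(x))|| vanishes.
residual = np.linalg.norm(x_star - proj(x_star - 0.2 * A(x_star)))
```

Both iterations have the solution of (VIP) as their unique fixed point, so the fixed-point residual is a convenient stopping criterion.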
In much of the research on solving variational inequalities governed by pseudomonotone and Lipschitz continuous operators, the most commonly used algorithm is the extragradient method (see EGM ) and its variants. Indeed, Korpelevich proposed the extragradient method (EGM) in EGM to find solutions of saddle point problems in finite-dimensional spaces. The details of EGM are as follows:
(EGM)  $y_n = P_C(x_n - \lambda A x_n)$,  $x_{n+1} = P_C(x_n - \lambda A y_n)$,
where the mapping $A$ is $L$-Lipschitz continuous and monotone and the fixed step size $\lambda \in (0, 1/L)$. Provided that the solution set is nonempty, the iterative sequence defined by (EGM) converges to an element of it. In the past few decades, EGM has been studied and extended by many authors for solving (VIP) in infinite-dimensional spaces; see, e.g., SILD ; SLD ; tanarxiv and the references therein. Recently, Vuong EGMpVIP extended EGM to solve pseudomonotone variational inequalities in Hilbert spaces, and proved that the iterative sequence constructed by the algorithm converges weakly to a solution of (VIP). On the other hand, it is not easy to compute the projection onto a general closed convex set $C$, especially when $C$ has a complex structure. Note that the extragradient method requires two projections onto the closed convex set $C$ in each iteration, which may severely affect the computational performance of the algorithm.
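A short sketch of (EGM) follows. The rotation operator below is a standard illustrative example: it is monotone and $1$-Lipschitz but not strongly monotone, the plain gradient step spirals away from the solution $x^* = 0$, while the extragradient iteration converges (all concrete choices here are ours, for illustration only).

```python
import numpy as np

def extragradient(A, proj, x0, lam, iters=1000):
    """Korpelevich's extragradient method: two projections per iteration."""
    x = x0
    for _ in range(iters):
        y = proj(x - lam * A(x))   # prediction step
        x = proj(x - lam * A(y))   # correction step
    return x

# Monotone (but not strongly monotone) rotation operator; solution x* = 0.
R = np.array([[0.0, -1.0], [1.0, 0.0]])
A = lambda x: R @ x
proj = lambda x: x                 # C = H = R^2: the projection is the identity

x_eg = extragradient(A, proj, np.array([1.0, 1.0]), lam=0.5)

# The plain gradient step x <- x - lam * A(x) diverges on the same problem.
z = np.array([1.0, 1.0])
for _ in range(1000):
    z = z - 0.5 * A(z)
```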
Next, we introduce two types of methods that enhance the numerical efficiency of EGM. The first is Tseng's extragradient method (referred to as TEGM, also known as the forward-backward-forward method) proposed by Tseng tseng . The advantage of this method is that the projection onto the feasible set needs to be computed only once per iteration. More precisely, TEGM is expressed as follows:
(TEGM)  $y_n = P_C(x_n - \lambda A x_n)$,  $x_{n+1} = y_n - \lambda (A y_n - A x_n)$,
where the mapping $A$ is $L$-Lipschitz continuous and monotone and the fixed step size $\lambda \in (0, 1/L)$. Then the iterative sequence generated by (TEGM) converges to a solution of (VIP) provided that the solution set is nonempty. Very recently, Bot, Csetnek and Vuong BotpVIP proposed a Tseng-type forward-backward-forward algorithm for solving pseudomonotone variational inequalities in Hilbert spaces and performed an asymptotic analysis of the generated trajectories. The second method is the subgradient extragradient method (SEGM) proposed by Censor, Gibali and Reich
SEGM . This can be regarded as a modification of EGM in which the second projection in (EGM) is replaced by a projection onto a half-space. SEGM is computed as follows:
(SEGM)  $y_n = P_C(x_n - \lambda A x_n)$,  $T_n = \{x \in H : \langle x_n - \lambda A x_n - y_n,\, x - y_n \rangle \le 0\}$,  $x_{n+1} = P_{T_n}(x_n - \lambda A y_n)$,
where the mapping $A$ is $L$-Lipschitz continuous and monotone and the fixed step size $\lambda \in (0, 1/L)$. SEGM converges not only for monotone variational inequalities (see CGR1 ), but also for pseudomonotone ones (see CGR2 ; TSI ).
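Both one-projection schemes can be sketched as follows; the monotone affine operator and box constraint below are illustrative choices of ours with known solution $x^* = (1, 1)$, not examples from the cited works. Note that the projection onto the half-space $T_n$ admits the simple closed form used in `segm`.

```python
import numpy as np

def tseng(A, proj, x0, lam, iters=3000):
    """Tseng's forward-backward-forward method: one projection per iteration.
    y_n = P_C(x_n - lam*A(x_n)),  x_{n+1} = y_n - lam*(A(y_n) - A(x_n))."""
    x = x0
    for _ in range(iters):
        y = proj(x - lam * A(x))
        x = y - lam * (A(y) - A(x))
    return x

def segm(A, proj, x0, lam, iters=3000):
    """Subgradient extragradient method: the second projection of EGM is
    replaced by an explicit projection onto the half-space
    T_n = {x : <x_n - lam*A(x_n) - y_n, x - y_n> <= 0}."""
    x = x0
    for _ in range(iters):
        u = x - lam * A(x)
        y = proj(u)
        v = u - y                        # outward normal of T_n (may be 0)
        z = x - lam * A(y)
        t = np.dot(v, z - y)
        if t > 0.0:                      # z lies outside T_n: project onto it
            z = z - (t / np.dot(v, v)) * v
        x = z
    return x

# Monotone affine operator A(x) = R x + q on the box [0, 10]^2; x* = (1, 1).
R = np.array([[0.0, -1.0], [1.0, 0.0]])
q = np.array([1.0, -1.0])
A = lambda x: R @ x + q
proj = lambda x: np.clip(x, 0.0, 10.0)

x_t = tseng(A, proj, np.array([3.0, 2.0]), lam=0.5)
x_s = segm(A, proj, np.array([3.0, 2.0]), lam=0.5)
```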
It is worth mentioning that (EGM), (TEGM) and (SEGM) are only weakly convergent in infinite-dimensional Hilbert spaces. Many practical problems occurring in image processing, quantum mechanics, medical imaging and machine learning need to be modeled and analyzed in infinite-dimensional spaces. Therefore, strong convergence results are preferable to weak convergence results in infinite-dimensional spaces. Recently, Thong and Vuong
MaTEGM introduced a modified Mann-type Tseng's extragradient method to solve the (VIP) involving a pseudomonotone mapping in Hilbert spaces. Their method uses an Armijo-like line search to eliminate the reliance on the Lipschitz constant of the mapping $A$. Indeed, the proposed algorithm is stated as follows:
(MaTEGM)
where the mapping $A$ is pseudomonotone, sequentially weakly continuous on $C$ and uniformly continuous on bounded subsets of $H$; $\{\alpha_n\}$, $\{\beta_n\}$ are two positive real sequences in $(0, 1)$ such that $\beta_n \ge a$ for some $a > 0$ and $\lim_{n \to \infty} \alpha_n = 0$, $\sum_{n=1}^{\infty} \alpha_n = \infty$; and $\lambda_n = \gamma l^{m_n}$, where $m_n$ is the smallest nonnegative integer satisfying the Armijo-type condition $\lambda_n \|A x_n - A y_n\| \le \mu \|x_n - y_n\|$ ($\gamma > 0$, $l \in (0, 1)$, $\mu \in (0, 1)$). They showed that the iteration scheme formed by (MaTEGM) converges strongly to an element of $\Omega$.
To accelerate the convergence rate of such algorithms, Polyak inertial considered in 1964 the second-order dynamical system $\ddot{x}(t) + \gamma \dot{x}(t) + \nabla f(x(t)) = 0$, where $\gamma > 0$, $\nabla f$ represents the gradient of $f$, and $\dot{x}(t)$ and $\ddot{x}(t)$ denote the first and second derivatives of $x$ at $t$, respectively. This dynamical system is called the Heavy Ball with Friction (HBF).
Next, we consider the discretization of this dynamical system (HBF), that is,
$\frac{x_{n+1} - 2x_n + x_{n-1}}{h^2} + \gamma \frac{x_n - x_{n-1}}{h} + \nabla f(x_n) = 0, \quad n \ge 1.$
Through a direct calculation, we can get the following form:
$x_{n+1} = x_n + \theta (x_n - x_{n-1}) - \lambda \nabla f(x_n),$
where $\theta = 1 - \gamma h$ and $\lambda = h^2$. This can be considered as the following two-step iteration scheme:
$y_n = x_n + \theta (x_n - x_{n-1}), \qquad x_{n+1} = y_n - \lambda \nabla f(x_n).$
This iteration is now called the inertial extrapolation algorithm, and the term $\theta (x_n - x_{n-1})$ is referred to as the extrapolation term. In recent years, inertial techniques have attracted extensive research in the optimization community as an acceleration device. Many scholars have built various fast numerical algorithms by employing inertial techniques. These algorithms have shown advantages in theory and in computational experiments and have been successfully applied to many problems; see, for instance, FISTA ; GHjfpta ; zhoucoam and the references therein.
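The acceleration effect can be illustrated on an ill-conditioned quadratic: with the classical heavy-ball tuning, the inertial iteration reduces the error far faster than plain gradient descent (the objective and parameter choices below are illustrative):

```python
import numpy as np

def inertial_gd(grad, x0, lam, theta, iters=200):
    """Inertial extrapolation (heavy-ball) iteration:
    y_n     = x_n + theta * (x_n - x_{n-1})   # extrapolation point
    x_{n+1} = y_n - lam * grad(x_n)
    Setting theta = 0 recovers plain gradient descent."""
    x_prev = x = x0
    for _ in range(iters):
        y = x + theta * (x - x_prev)
        x_prev, x = x, y - lam * grad(x)
    return x

# f(x) = 0.5 * x^T diag(100, 1) x, so mu = 1, L = 100, minimizer x* = 0.
D = np.array([100.0, 1.0])
grad = lambda x: D * x
x0 = np.array([1.0, 1.0])

# Classical heavy-ball tuning for a quadratic with the given mu and L.
sL, smu = np.sqrt(100.0), np.sqrt(1.0)
lam = 4.0 / (sL + smu) ** 2
theta = ((sL - smu) / (sL + smu)) ** 2

x_hb = inertial_gd(grad, x0, lam, theta)                  # with inertia
x_gd = inertial_gd(grad, x0, lam=1.0 / 100.0, theta=0.0)  # plain GD
```

After the same number of iterations, the inertial iterate is markedly closer to the minimizer than the plain gradient iterate.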
Very recently, inspired by the inertial method, the SEGM and the viscosity method, Thong, Hieu and Rassias ViSEGM presented a viscosity-type inertial subgradient extragradient algorithm to solve the pseudomonotone (VIP) in Hilbert spaces. The algorithm is of the following form:
(ViSEGM)  $w_n = x_n + \theta_n (x_n - x_{n-1})$,  $y_n = P_C(w_n - \lambda_n A w_n)$,  $T_n = \{x \in H : \langle w_n - \lambda_n A w_n - y_n,\, x - y_n \rangle \le 0\}$,  $z_n = P_{T_n}(w_n - \lambda_n A y_n)$,  $x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n) z_n$,
where the mapping $A$ is pseudomonotone, Lipschitz continuous and sequentially weakly continuous on $H$, $f$ is a contraction, and the step sizes are updated in the following way:
$\lambda_{n+1} = \min\left\{ \frac{\mu \|w_n - y_n\|}{\|A w_n - A y_n\|},\, \lambda_n \right\}$ if $A w_n \ne A y_n$, and $\lambda_{n+1} = \lambda_n$ otherwise.
Note that Algorithm (ViSEGM) uses a simple step size rule, which is generated from previously known information in each iteration. Therefore, it works well without prior knowledge of the Lipschitz constant of the mapping $A$. They confirmed the strong convergence of (ViSEGM) under mild assumptions on the cost mapping and the parameters.
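A step size rule of this kind can be sketched as follows. For brevity, we attach the rule to a Tseng-type update rather than reproduce (ViSEGM) in full; the factor `mu` and the concrete operator in the usage below are illustrative choices. The generated step sizes are nonincreasing and bounded away from zero, so no Lipschitz constant is needed in advance.

```python
import numpy as np

def adaptive_tseng(A, proj, x0, lam0=1.0, mu=0.5, iters=3000):
    """Tseng-type iteration with the self-adaptive step size
    lam_{n+1} = min(mu * ||x_n - y_n|| / ||A(x_n) - A(y_n)||, lam_n),
    computed from already-known quantities only."""
    x, lam = x0, lam0
    for _ in range(iters):
        Ax = A(x)
        y = proj(x - lam * Ax)
        Ay = A(y)
        x_new = y - lam * (Ay - Ax)
        dA = np.linalg.norm(Ax - Ay)
        if dA > 0.0:                  # keep lam unchanged when A(x) = A(y)
            lam = min(mu * np.linalg.norm(x - y) / dA, lam)
        x = x_new
    return x, lam

# Illustrative monotone 1-Lipschitz operator: the step sizes settle at mu/L.
R = np.array([[0.0, -1.0], [1.0, 0.0]])
q = np.array([1.0, -1.0])
A = lambda x: R @ x + q
proj = lambda x: np.clip(x, 0.0, 10.0)
x_ad, lam_ad = adaptive_tseng(A, proj, np.array([3.0, 2.0]))
```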
Motivated and stimulated by the above works, we introduce a new inertial Tseng's extragradient algorithm with a new step size for solving the pseudomonotone (VIP) in Hilbert spaces. The advantages of our algorithm are: (1) only one projection onto the feasible set needs to be computed in each iteration; (2) it does not require prior knowledge of the Lipschitz constant of the cost mapping; (3) the added inertial term yields a faster convergence speed. Under mild assumptions, we establish a strong convergence theorem for the suggested algorithm. Finally, some computational tests in finite- and infinite-dimensional spaces are presented to verify our theoretical results. Furthermore, our algorithm is also applied to solve optimal control problems. Our algorithm improves some existing results MaTEGM ; ViSEGM ; THna ; YLNA ; FQOPT2020 .
The rest of the paper is organized as follows. Some essential definitions and technical lemmas are given in the next section. In Section 3, we propose our algorithm and analyze its convergence. Some computational tests and applications that verify our theoretical results are presented in Section 4. Finally, the paper ends with a brief summary.
2 Preliminaries
Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. The weak convergence and strong convergence of $\{x_n\}$ to $x$ are represented by $x_n \rightharpoonup x$ and $x_n \to x$, respectively. For each $x, y \in H$ and $\alpha \in \mathbb{R}$, we have the following facts:

$\|x + y\|^2 \le \|x\|^2 + 2 \langle y, x + y \rangle$;

$\|\alpha x + (1 - \alpha) y\|^2 = \alpha \|x\|^2 + (1 - \alpha) \|y\|^2 - \alpha (1 - \alpha) \|x - y\|^2$.
It is known that $P_C$ has the following basic properties:

$\langle x - P_C x,\, y - P_C x \rangle \le 0$ for all $x \in H$ and $y \in C$;

$\|P_C x - P_C y\|^2 \le \langle P_C x - P_C y,\, x - y \rangle$ for all $x, y \in H$;

$\|P_C x - y\|^2 \le \|x - y\|^2 - \|x - P_C x\|^2$ for all $x \in H$ and $y \in C$.
We give some explicit formulas to compute projections onto special feasible sets.

The projection of $x$ onto a half-space $H_{u,v} = \{z : \langle u, z \rangle \le v\}$ is given by
$P_{H_{u,v}}(x) = x - \max\left\{ \frac{\langle u, x \rangle - v}{\|u\|^2},\, 0 \right\} u;$

The projection of $x$ onto a box $\mathrm{Box}[a, b] = \{z : a \le z \le b\}$ is given by
$\big(P_{\mathrm{Box}[a,b]}(x)\big)_i = \min\{b_i, \max\{x_i, a_i\}\};$

The projection of $x$ onto a ball $B[p, r] = \{z : \|z - p\| \le r\}$ is given by
$P_{B[p,r]}(x) = p + \frac{r}{\max\{\|x - p\|, r\}} (x - p).$
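For reference, the three formulas can be implemented directly; this is a small self-contained sketch in $\mathbb{R}^m$:

```python
import numpy as np

def proj_halfspace(x, u, v):
    """Projection onto the half-space {z : <u, z> <= v} (u != 0)."""
    t = (np.dot(u, x) - v) / np.dot(u, u)
    return x - max(t, 0.0) * u

def proj_box(x, a, b):
    """Componentwise projection onto the box {z : a <= z <= b}."""
    return np.minimum(b, np.maximum(a, x))

def proj_ball(x, p, r):
    """Projection onto the closed ball {z : ||z - p|| <= r}."""
    d = np.linalg.norm(x - p)
    return p + (r / max(d, r)) * (x - p)
```

Points already inside the set are left unchanged by each formula, as a projection must.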
The following lemmas play an important role in our proof.
Lemma 2.1 (Cy1992 ).
Assume that $C$ is a nonempty closed convex subset of a real Hilbert space $H$. Let the operator $A\colon C \to H$ be continuous and pseudomonotone. Then, $x^*$ is a solution of (VIP) if and only if $\langle Ax, x - x^* \rangle \ge 0$ for all $x \in C$.
Lemma 2.2 (Sy2012 ).
Let $\{a_n\}$ be a sequence of nonnegative real numbers, $\{b_n\}$ be a sequence of real numbers, and $\{\alpha_n\}$ be a sequence in $(0, 1)$ such that $\sum_{n=1}^{\infty} \alpha_n = \infty$. Assume that
$a_{n+1} \le (1 - \alpha_n) a_n + \alpha_n b_n, \quad n \ge 1.$
If $\limsup_{k \to \infty} b_{n_k} \le 0$ for every subsequence $\{a_{n_k}\}$ of $\{a_n\}$ satisfying $\liminf_{k \to \infty} (a_{n_k + 1} - a_{n_k}) \ge 0$, then $\lim_{n \to \infty} a_n = 0$.
3 Main results
In this section, we present a self-adaptive inertial viscosity-type Tseng's extragradient algorithm, which combines the inertial method, the viscosity method and Tseng's extragradient method. The major benefit of this algorithm is that the step size is updated automatically at each iteration without any line search procedure. Moreover, our iterative scheme needs to compute the projection only once in each iteration. Before stating our main result, we assume that our algorithm satisfies the following five assumptions.

The feasible set $C$ is closed, convex and nonempty.

The solution set of the (VIP) is nonempty, that is, $\Omega \ne \emptyset$.

The mapping $A\colon H \to H$ is pseudomonotone and $L$-Lipschitz continuous on $H$, and sequentially weakly continuous on $C$.

The mapping $f\colon H \to H$ is contractive with constant $\rho \in [0, 1)$.

The positive sequence $\{\epsilon_n\}$ satisfies $\lim_{n \to \infty} \frac{\epsilon_n}{\alpha_n} = 0$, where $\{\alpha_n\} \subset (0, 1)$ is such that $\lim_{n \to \infty} \alpha_n = 0$ and $\sum_{n=1}^{\infty} \alpha_n = \infty$.
Now, we can state the details of the iterative method. Our algorithm is described as follows.
Remark 3.1.
Lemma 3.1.
The sequence $\{\lambda_n\}$ formulated by (3.2) is nonincreasing and $\lim_{n \to \infty} \lambda_n = \lambda \ge \min\{\lambda_1, \mu/L\}$.
Proof.
The following lemmas have a significant part to play in the convergence proof of our algorithm.
Lemma 3.2.
Proof.
From the property of the projection and the definition of $y_n$, we have
which can be written as follows
Through a direct calculation, we get
(3.3) 
We have that $\{x_n\}$ is bounded since it converges weakly to $z$. Then, from the Lipschitz continuity of $A$ and the definition of $y_n$, we obtain that $\{y_n\}$ and $\{Ax_n\}$ are also bounded. Since $\lambda_n \to \lambda > 0$, one concludes from (3.3) that
(3.4) 
Moreover, one has
(3.5) 
Since $\|x_n - y_n\| \to 0$ and $A$ is Lipschitz continuous, we get $\|Ax_n - Ay_n\| \to 0$. This together with (3.4) and (3.5) yields that $\liminf_{n \to \infty} \langle Ay_n, x - y_n \rangle \ge 0$ for all $x \in C$.
Next, we select a decreasing sequence $\{\epsilon_k\}$ of positive numbers such that $\epsilon_k \to 0$ as $k \to \infty$. For each $k$, we denote by $N_k$ the smallest positive integer such that
(3.6) 
It can easily be seen that the sequence $\{N_k\}$ is increasing because $\{\epsilon_k\}$ is decreasing. Moreover, for each $k$, we can assume $Ay_{N_k} \ne 0$ (otherwise, $y_{N_k}$ is a solution) and set $v_{N_k} = \frac{Ay_{N_k}}{\|Ay_{N_k}\|^2}$. Then, we get $\langle Ay_{N_k}, v_{N_k} \rangle = 1$. Now, we can deduce from (3.6) that $\langle Ay_{N_k}, x + \epsilon_k v_{N_k} - y_{N_k} \rangle \ge 0$. According to the fact that $A$ is pseudomonotone, we can show that
which further yields that
(3.7) 
Now, we prove that $\lim_{k \to \infty} \epsilon_k v_{N_k} = 0$. We get that $y_{N_k} \rightharpoonup z$ since $x_{n_k} \rightharpoonup z$ and $\|x_n - y_n\| \to 0$. In view of the fact that $A$ is sequentially weakly continuous on $C$, one has that $\{Ay_{N_k}\}$ converges weakly to $Az$. One assumes that $Az \ne 0$ (otherwise, $z$ is a solution). According to the fact that the norm mapping is sequentially weakly lower semicontinuous, we obtain $0 < \|Az\| \le \liminf_{k \to \infty} \|Ay_{N_k}\|$. Using $y_{N_k} \rightharpoonup z$ and $\epsilon_k \to 0$ as $k \to \infty$, we have
$\limsup_{k \to \infty} \|\epsilon_k v_{N_k}\| = \limsup_{k \to \infty} \frac{\epsilon_k}{\|Ay_{N_k}\|} \le \frac{\lim_{k \to \infty} \epsilon_k}{\liminf_{k \to \infty} \|Ay_{N_k}\|} = 0.$
That is, $\lim_{k \to \infty} \epsilon_k v_{N_k} = 0$. Thus, from the facts that $A$ is Lipschitz continuous, the sequences $\{y_{N_k}\}$ and $\{v_{N_k}\}$ are bounded, and $\lim_{k \to \infty} \epsilon_k v_{N_k} = 0$, we can conclude from (3.7) that $\liminf_{k \to \infty} \langle Ax, x - y_{N_k} \rangle \ge 0$. Therefore,
Consequently, we observe that $z \in \Omega$ by Lemma 2.1. This completes the proof. ∎
Remark 3.2.
If $A$ is monotone, then it does not need to satisfy sequential weak continuity; see DSC .
Lemma 3.3.
Proof.
First, using the definition of $\lambda_{n+1}$, one obtains
(3.8) 
Indeed, if $\lambda_{n+1} = \lambda_n$, then (3.8) clearly holds. Otherwise, it follows from (3.2) that
Consequently, we have
Therefore, inequality (3.8) holds in both cases. From the definition of the iterates, one sees that
(3.9)  
Since , using the property of projection, we obtain
or equivalently
(3.10) 
From (3.8), (3.9) and (3.10), we have
(3.11)  
From $x^* \in \Omega$ and $y_n \in C$, one has $\langle Ax^*, y_n - x^* \rangle \ge 0$. Using the pseudomonotonicity of $A$, we get
(3.12) 
Combining (3.11) and (3.12), we can show that
According to the definition of $x_{n+1}$ and (3.8), we obtain
This completes the proof of Lemma 3.3. ∎
Theorem 3.1.
Proof.
Claim 1. The sequence $\{x_n\}$ is bounded. According to Lemma 3.1, we get that $\lambda_n \to \lambda > 0$, and hence there is a constant $M_1 > 0$ such that the estimate in Lemma 3.3 holds for all sufficiently large $n$. From Lemma 3.3, one has
(3.13) 
By the definition of $x_{n+1}$, one sees that
(3.14)  
From Remark 3.1, one gets $\frac{\theta_n}{\alpha_n} \|x_n - x_{n-1}\| \to 0$ as $n \to \infty$. Thus, there is a constant $M_2 > 0$ that satisfies
(3.15) 
Using (3.13), (3.14) and (3.15), we obtain
(3.16) 
Using the definition of $x_{n+1}$ and (3.16), we have