## 1 Introduction

In this paper, given a function $f : \mathbb{R}^m \times \mathbb{R}^n \to \mathbb{R}$, we consider finding a saddle point of the problem

(1) $\min_{x \in \mathbb{R}^m} \max_{y \in \mathbb{R}^n} f(x, y),$

where a saddle point of Problem (1) is defined as a pair $(x^*, y^*)$ that satisfies

$f(x^*, y) \le f(x^*, y^*) \le f(x, y^*)$

for all $x \in \mathbb{R}^m$, $y \in \mathbb{R}^n$. Throughout the paper, we assume that the function $f$ is convex-concave, i.e., $f(\cdot, y)$ is convex for all $y \in \mathbb{R}^n$ and $f(x, \cdot)$ is concave for all $x \in \mathbb{R}^m$. This problem appears in several areas, including zero-sum games [Basar & Olsder, 1999], robust optimization [Ben-Tal et al., 2009], robust control [Hast et al., 2013],

and more recently in machine learning in the context of generative adversarial networks (GANs); see [Goodfellow et al., 2014] for an introduction to GANs and [Arjovsky et al., 2017] for the formulation of Wasserstein GANs.

Our focus in this paper is on the convergence rate of discrete-time gradient-based optimization algorithms for finding a saddle point of Problem (1). In particular, we focus on the Extra-gradient (EG) and Optimistic Gradient Descent Ascent (OGDA) methods because of their widespread use in GAN training (see [Daskalakis et al., 2018; Liang & Stokes, 2018]). EG is a classical method for saddle point problems introduced by Korpelevich [1976]. Its convergence rate for the constrained convex-concave setting was first established by Nemirovski [2004] under the assumption that the feasible set is convex and compact.¹ Monteiro & Svaiter [2010] established a similar convergence rate for EG without assuming compactness of the feasible set by using a new termination criterion that relies on the enlargement of the subdifferential of the objective function defined in [Burachik et al., 1997]. The iteration complexity of OGDA for the convex-concave case has not been studied previously.

¹The result in [Nemirovski, 2004] shows a convergence rate of $\mathcal{O}(1/k)$ for the mirror-prox algorithm, which specializes to the EG method in the Euclidean case.

In this paper, we provide a unified convergence analysis framework for establishing a sublinear convergence rate of $\mathcal{O}(1/k)$ for both OGDA and EG for convex-concave saddle point problems. Our analysis holds for unconstrained problems and does not require boundedness of the feasible set, and it establishes rate results in terms of function value differences as in [Nemirovski, 2004] (suitably redefined for an unconstrained feasible set; see Section 5). Therefore, we obtain convergence of the EG method for unconstrained problems without using the modified termination (error) criterion proposed in [Monteiro & Svaiter, 2010]. Our result for OGDA is also novel and provides the first convergence guarantee for OGDA in the general convex-concave setting. The key idea of our approach is to view both OGDA and EG iterates as approximations of the iterates of the proximal point method, which was first introduced by Martinet [1970] and later studied by Rockafellar [1976]. The idea of interpreting OGDA and EG as approximations of the proximal point method was first studied by Mokhtari et al. [2019] for analyzing OGDA and EG in bilinear and strongly convex-strongly concave problems.

More specifically, we first consider a proximal point method with error and establish key properties of its iterates. We then focus on OGDA as an approximation of the proximal point method and use this connection to show that the iterates of OGDA remain in a compact set. We incorporate this result to prove a sublinear convergence rate of $\mathcal{O}(1/k)$ for the averaged iterates generated by the OGDA update. We next consider EG, where two gradient pairs are used in each iteration: one to compute a midpoint and the other to find the new iterate using the gradient at the midpoint. Our first step again is to show boundedness of the iterates generated by EG. We then approximate the evaluation of the midpoints using a proximal point method and use this approximation to establish a convergence rate for the averaged iterates generated by EG.

### Related Work

The convergence properties of OGDA were recently studied by Daskalakis et al. [2018], who showed convergence of the iterates to a neighborhood of the solution when the objective function is bilinear, i.e., $f(x, y) = x^\top A y$. Liang & Stokes [2018] used a dynamical system approach to prove linear convergence of the OGDA method for the special case when $f(x, y) = x^\top A y$ and the matrix $A$ is square and full rank. They also presented a linear convergence rate for the vanilla Gradient Descent Ascent (GDA) method when the objective function is strongly convex-strongly concave.² Recently, Gidel et al. [2018] considered a variant of the EG method, relating it to OGDA updates, and showed a linear convergence rate for the corresponding EG iterates in the case where $f$ is strongly convex-strongly concave (though without showing the convergence rate for the OGDA iterates). Mokhtari et al. [2019] also established a linear convergence rate for OGDA via a proximal point approximation approach when $f$ is strongly convex-strongly concave or bilinear. Optimistic gradient methods have also been studied in the context of convex online learning [Chiang et al., 2012; Rakhlin & Sridharan, 2013a, b].

²When we state that $f$ is strongly convex-strongly concave, it means that $f(\cdot, y)$ is strongly convex for all $y$ and $f(x, \cdot)$ is strongly concave for all $x$.

Nedić & Ozdaglar [2009] analyzed the (sub)Gradient Descent Ascent (GDA) algorithm for convex-concave saddle point problems when the (sub)gradients are bounded over the constraint set, and they showed a convergence rate of $\mathcal{O}(1/\sqrt{k})$.

Chambolle & Pock [2011] studied Problem (1) for the case where the coupling term in the objective function is bilinear, i.e., $f(x, y) = g(x) + x^\top A y - h(y)$, where $g$ and $h$ are convex functions. They proposed a proximal-point-based algorithm which converges at an $\mathcal{O}(1/k)$ rate and further showed linear convergence when the functions $g$ and $h$ are strongly convex. Chen et al. [2014] proposed an accelerated variant of this algorithm when $g$ is smooth and established an optimal rate of $\mathcal{O}(L_g/k^2 + L_A/k)$, where $L_g$ and $L_A$ are the smoothness parameter of $g$ and the norm of the linear operator $A$, respectively. When the functions $g$ and $h$ are strongly convex, primal-dual gradient-type methods converge linearly, as shown in [Chen & Rockafellar, 1997; Bauschke et al., 2011]. Further, Du & Hu [2018] showed that GDA achieves a linear convergence rate when $g$ is convex and $h$ is strongly convex.

For the case where $f$ is strongly concave with respect to $y$ but possibly nonconvex with respect to $x$, Sanjabi et al. [2018] provided convergence to a first-order stationary point using an algorithm that requires running multiple updates with respect to $y$ at each step.

Notation. Lowercase boldface $\mathbf{v}$ denotes a vector and uppercase boldface $\mathbf{A}$ denotes a matrix. We use $\|v\|$ to denote the Euclidean norm of vector $v$. Given a multi-input function $f(x, y)$, its gradients with respect to $x$ and $y$ at the point $(x_0, y_0)$ are denoted by $\nabla_x f(x_0, y_0)$ and $\nabla_y f(x_0, y_0)$, respectively.

## 2 Preliminaries

In this section we present properties and notations used in our results.

###### Definition 1.

A function $\phi : \mathbb{R}^n \to \mathbb{R}$ is $L$-smooth if it has $L$-Lipschitz continuous gradients on $\mathbb{R}^n$, i.e., for any $x, \hat{x} \in \mathbb{R}^n$, we have

$\|\nabla \phi(x) - \nabla \phi(\hat{x})\| \le L \|x - \hat{x}\|.$

###### Definition 2.

A continuously differentiable function $\phi : \mathbb{R}^n \to \mathbb{R}$ is convex on $\mathbb{R}^n$ if for any $x, \hat{x} \in \mathbb{R}^n$, we have

$\phi(\hat{x}) \ge \phi(x) + \nabla \phi(x)^\top (\hat{x} - x).$

Further, $\phi$ is concave if $-\phi$ is convex.

###### Definition 3.

The pair $(x^*, y^*)$ is a saddle point of a convex-concave function $f(x, y)$ if for any $x \in \mathbb{R}^m$ and $y \in \mathbb{R}^n$, we have

$f(x^*, y) \le f(x^*, y^*) \le f(x, y^*).$

Throughout the paper, we will assume that the following conditions are satisfied.

###### Assumption 1.

The function $f(x, y)$ is continuously differentiable in $x$ and $y$. Further, it is convex in $x$ and concave in $y$.

###### Assumption 2.

The gradient $\nabla_x f(x, y)$ is Lipschitz continuous in $(x, y)$, and the gradient $\nabla_y f(x, y)$ is Lipschitz continuous in $(x, y)$. Further, we define $L$ as the maximum of the Lipschitz constants of the gradients of $f$ with respect to $x$ and $y$.

###### Assumption 3.

The solution set $\mathcal{Z}^*$, defined as

(2) $\mathcal{Z}^* := \left\{ (x^*, y^*) \ \middle|\ f(x^*, y) \le f(x^*, y^*) \le f(x, y^*), \ \forall x \in \mathbb{R}^m, \ y \in \mathbb{R}^n \right\},$

is nonempty.

In the following sections, we present and analyze three different iterative algorithms for solving the saddle point problem introduced in (1). The iterates of these algorithms are denoted by $(x_k, y_k)$. We denote the average (ergodic) iterates by $(\hat{x}_N, \hat{y}_N)$, defined as follows:

(3) $\hat{x}_N := \frac{1}{N} \sum_{k=1}^{N} x_k, \qquad \hat{y}_N := \frac{1}{N} \sum_{k=1}^{N} y_k.$

In our convergence analysis, we use a variational inequality approach in which we define the vector $z = [x; y] \in \mathbb{R}^{m+n}$ as our decision variable and define the operator $F : \mathbb{R}^{m+n} \to \mathbb{R}^{m+n}$ as

(4) $F(z) := \begin{bmatrix} \nabla_x f(x, y) \\ -\nabla_y f(x, y) \end{bmatrix}.$
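To make the operator concrete, the following sketch instantiates $F$ for an assumed convex-concave quadratic $f(x, y) = \tfrac{1}{2}\|x\|^2 + x^\top A y - \tfrac{1}{2}\|y\|^2$ (an illustrative choice, not an example from the paper) and numerically checks the monotonicity property stated in Lemma 1 below.

```python
import numpy as np

# Illustrative convex-concave quadratic (assumed, not from the paper):
#   f(x, y) = 0.5*||x||^2 + x^T A y - 0.5*||y||^2
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

def F(z):
    """Operator F(z) = [grad_x f(x, y); -grad_y f(x, y)] with z = [x; y]."""
    x, y = z[:3], z[3:]
    grad_x = x + A @ y      # nabla_x f(x, y)
    grad_y = A.T @ x - y    # nabla_y f(x, y)
    return np.concatenate([grad_x, -grad_y])

# Monotonicity: (F(z) - F(z'))^T (z - z') >= 0 for all z, z'
z, zp = rng.standard_normal(6), rng.standard_normal(6)
assert (F(z) - F(zp)) @ (z - zp) >= 0.0
```

For this particular $f$, the inner product equals $\|x - x'\|^2 + \|y - y'\|^2$: the bilinear part of $F$ is skew-symmetric and drops out, which is why monotonicity holds for every pair of points.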

In the following lemma we characterize the properties of the operator $F$ in (4) when the conditions in Assumptions 1 and 2 are satisfied. We would like to emphasize that the following lemma is well-known – see, e.g., [Nemirovski, 2004] – and we state it for completeness.

###### Lemma 1.

Suppose Assumptions 1 and 2 hold. Then the operator $F$ defined in (4) is monotone, i.e., $(F(z) - F(\hat{z}))^\top (z - \hat{z}) \ge 0$ for all $z, \hat{z} \in \mathbb{R}^{m+n}$, and Lipschitz continuous with constant $L$. Moreover, if $z^* = [x^*; y^*]$ is a saddle point of Problem (1), then $F(z^*) = 0$.

According to Lemma 1, when $f$ is convex-concave and smooth, the operator $F$ defined in (4) is monotone and Lipschitz. The third result in Lemma 1 shows that any saddle point of Problem (1) satisfies the first-order optimality condition for the operator $F$.

Before moving to the main part of the paper, we prove the following lemma that we will use later in the analysis of OGDA and EG.

###### Lemma 2.

Let $F$ be the operator defined in (4), and suppose Assumption 1 holds. Then for any $z = [x; y]$ and $z' = [x'; y']$ in $\mathbb{R}^{m+n}$, we have

$F(z)^\top (z - z') \ \ge\ f(x, y') - f(x', y).$

###### Proof.

Based on the definition of the operator $F$, we can write

(6) $\nabla_x f(x, y)^\top (x - x') \ \ge\ f(x, y) - f(x', y),$

where the inequality holds due to the fact that $f$ is convex in $x$. We can similarly use concavity of $f$ with respect to $y$ to show that

(7) $-\nabla_y f(x, y)^\top (y - y') \ \ge\ f(x, y') - f(x, y).$

By combining inequalities (6) and (7), we obtain that

$F(z)^\top (z - z') = \nabla_x f(x, y)^\top (x - x') - \nabla_y f(x, y)^\top (y - y') \ \ge\ f(x, y') - f(x', y),$

and the proof is complete. ∎
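The inequality of Lemma 2 can also be verified numerically; the sketch below uses an assumed convex-concave quadratic as an illustration (the function and dimensions are arbitrary choices, not from the paper).

```python
import numpy as np

# Assumed convex-concave quadratic for illustration:
#   f(x, y) = 0.5*||x||^2 + x^T A y - 0.5*||y||^2
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))

def f(x, y):
    return 0.5 * x @ x + x @ A @ y - 0.5 * y @ y

def F(z):
    x, y = z[:3], z[3:]
    return np.concatenate([x + A @ y, -(A.T @ x - y)])

# Lemma 2: F(z)^T (z - z') >= f(x, y') - f(x', y) for z = [x; y], z' = [x'; y']
for _ in range(100):
    z, zp = rng.standard_normal(6), rng.standard_normal(6)
    x, y, xp, yp = z[:3], z[3:], zp[:3], zp[3:]
    assert F(z) @ (z - zp) >= f(x, yp) - f(xp, y) - 1e-9
```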

## 3 Proximal point method with error

One of the classical algorithms studied for solving the saddle point problem in (1) is the proximal point method, introduced by Martinet [1970] and studied by Rockafellar [1976], which generates a sequence of iterates according to the updates

(8) $x_{k+1} = x_k - \eta \nabla_x f(x_{k+1}, y_{k+1}), \qquad y_{k+1} = y_k + \eta \nabla_y f(x_{k+1}, y_{k+1}).$

It is well-known that the proximal point method achieves a sublinear rate of $\mathcal{O}(1/k)$, where $k$ is the number of iterations, for convex minimization (see [Güler, 1991, 1992]). We present the convergence analysis of the proximal point method for convex-concave saddle point problems in the following theorem.

###### Theorem 1.

Let $(\hat{x}_N, \hat{y}_N)$ be the averaged iterates defined in (3) for the proximal point method in (8). If Assumptions 1 and 3 hold, then for any saddle point $(x^*, y^*)$ of Problem (1) we have

$f(\hat{x}_N, y^*) - f(x^*, \hat{y}_N) \ \le\ \frac{\|x_0 - x^*\|^2 + \|y_0 - y^*\|^2}{2 \eta N}.$

###### Proof.

See Section 7. ∎

The result in Theorem 1 shows that by following the update of the proximal point method, the gap between the function value of the average iterates and the function value at a saddle point of Problem (1) approaches zero at a sublinear rate of $\mathcal{O}(1/k)$.
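For intuition, the implicit update (8) can be carried out exactly when $f$ is bilinear. The sketch below assumes the toy objective $f(x, y) = x^\top A y$ (an illustrative choice, not from the paper), for which the update reduces to one linear solve per iteration.

```python
import numpy as np

# Toy bilinear objective f(x, y) = x^T A y (assumed for illustration).
# Then F(z) = M z with M = [[0, A], [-A^T, 0]], and the implicit step
#   z_{k+1} = z_k - eta * F(z_{k+1})   <=>   (I + eta*M) z_{k+1} = z_k.
A = np.eye(2)
M = np.block([[np.zeros((2, 2)), A],
              [-A.T, np.zeros((2, 2))]])

eta = 1.0
z = np.ones(4)  # z = [x; y]; the saddle point is z* = 0
for _ in range(20):
    z = np.linalg.solve(np.eye(4) + eta * M, z)

# since M is skew-symmetric, each solve contracts ||z|| toward the saddle point
assert np.linalg.norm(z) < 1e-2
```

Because $M$ is skew-symmetric, the eigenvalues of $I + \eta M$ have modulus $\sqrt{1 + \eta^2 \sigma^2} > 1$, so the implicit step strictly contracts the distance to the saddle point, mirroring the stability of the proximal point method.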

We aim to prove similar convergence properties for OGDA and EG using the fact that these two methods can be interpreted as approximate versions of the proximal point method. To do so, let us first rewrite the proximal point update as

(10) $z_{k+1} = z_k - \eta F(z_{k+1}),$

where $z_k = [x_k; y_k]$ and the operator $F$ is defined in (4). In the following lemma, we derive a result for a general form of the proximal point update with an additional error, which we will use later in the analysis of EG and OGDA.

###### Lemma 3.

Consider the sequence of iterates $\{z_k\}$ generated by the following update:

(11) $z_{k+1} = z_k - \eta \left( F(z_{k+1}) - \varepsilon_k \right),$

where $F$ is a monotone and Lipschitz continuous operator, $\varepsilon_k$ is an arbitrary vector, and $\eta$ is a positive constant. Then for any $z \in \mathbb{R}^{m+n}$ and for each iteration $k$, we have

(12) $F(z_{k+1})^\top (z_{k+1} - z) \ \le\ \frac{1}{2\eta} \left( \|z_k - z\|^2 - \|z_{k+1} - z\|^2 - \|z_k - z_{k+1}\|^2 \right) + \varepsilon_k^\top (z_{k+1} - z).$

###### Proof.

According to the update in (11), we can show that for any we have

(13) |

Now add and subtract the inner product to the right hand side and regroup the terms to obtain

(14) |

Replace with to obtain

(15) |

On rearranging the terms, we get the following inequality

(16) |

and the proof is complete. ∎

## 4 Optimistic Gradient Descent Ascent

In this section, we focus on analyzing the performance of optimistic gradient descent ascent (OGDA) for solving a general smooth convex-concave saddle point problem. It has been shown that the OGDA method recovers the convergence rate of the proximal point method for both strongly convex-strongly concave and bilinear problems (see [Mokhtari et al., 2019]). However, its convergence rate for the general smooth convex-concave case has not been established. In this section, we aim to close this gap and derive a convergence rate of $\mathcal{O}(1/k)$ for OGDA, which matches the convergence rate of the proximal point method shown in Theorem 1.

Given a positive stepsize $\eta$, the update of OGDA for the iterates $x_k$ and $y_k$ can be written as

(17) $x_{k+1} = x_k - 2\eta \nabla_x f(x_k, y_k) + \eta \nabla_x f(x_{k-1}, y_{k-1}), \qquad y_{k+1} = y_k + 2\eta \nabla_y f(x_k, y_k) - \eta \nabla_y f(x_{k-1}, y_{k-1}).$

The main difference between the updates of OGDA in (17) and the gradient descent ascent (GDA) method is the additional "momentum" terms $\eta (\nabla_x f(x_k, y_k) - \nabla_x f(x_{k-1}, y_{k-1}))$ and $\eta (\nabla_y f(x_k, y_k) - \nabla_y f(x_{k-1}, y_{k-1}))$. These additional terms make the update of OGDA a better approximation of the proximal point update than that of GDA; for more details, we refer readers to Proposition 1 in [Mokhtari et al., 2019].
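As a minimal sketch (with an assumed toy problem and arbitrary stepsize, not taken from the paper), the OGDA update (17) can be run on the bilinear objective $f(x, y) = x^\top y$, whose unique saddle point is the origin:

```python
import numpy as np

# f(x, y) = x^T y, so grad_x f = y and grad_y f = x; saddle point at (0, 0).
eta = 0.1                            # illustrative stepsize
x, y = np.ones(2), np.ones(2)
x_prev, y_prev = x.copy(), y.copy()  # initialize z_{-1} = z_0
xs, ys = [], []

for _ in range(2000):
    gx, gx_prev = y, y_prev          # nabla_x f at z_k and z_{k-1}
    gy, gy_prev = x, x_prev          # nabla_y f at z_k and z_{k-1}
    x_prev, y_prev = x, y
    x = x - 2 * eta * gx + eta * gx_prev   # OGDA descent step in x
    y = y + 2 * eta * gy - eta * gy_prev   # OGDA ascent step in y
    xs.append(x); ys.append(y)

# ergodic (time-averaged) iterates, as in (3)
x_avg, y_avg = np.mean(xs, axis=0), np.mean(ys, axis=0)
assert np.linalg.norm(np.concatenate([x, y])) < 1e-2
```

On this bilinear problem, plain GDA with the same stepsize spirals away from the origin, while the momentum correction pulls the OGDA iterates (and their averages) toward the saddle point.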

To establish the convergence rate of OGDA for convex-concave problems, we first illustrate the connection between the updates of proximal point and OGDA. Note that based on the definitions of the vector $z_k = [x_k; y_k]$ and the operator $F$, we can rewrite the update of the OGDA algorithm at iteration $k$ as

(18) $z_{k+1} = z_k - 2\eta F(z_k) + \eta F(z_{k-1}).$

Considering this expression, we can also write the update of OGDA as an approximation of the proximal point method, i.e.,

(19) $z_{k+1} = z_k - \eta \left( F(z_{k+1}) - \varepsilon_k \right),$

where the error vector $\varepsilon_k$ is given by

(20) $\varepsilon_k = F(z_{k+1}) - 2 F(z_k) + F(z_{k-1}).$

Therefore, OGDA can be considered as an approximation of the proximal point method with the error defined in (20).

To derive the convergence rate of OGDA for the unconstrained problem in (1), we first use the result in Lemma 3 to show that the iterates generated by OGDA stay within a closed and bounded set.

###### Lemma 4.

###### Proof.

Recall the result of Lemma 3 in (12). As mentioned above, when we interpret OGDA as an approximation of the proximal point method, the error vector $\varepsilon_k$ is the one given in (20). Applying this substitution in (12) leads to

(22) |

Now add and subtract the inner product to the right hand side to obtain

(23) |

Note that can be upper bounded by

(24) |

where the second inequality holds due to Lipschitz continuity of the operator $F$ and the last inequality holds due to Young’s inequality. Now replace the corresponding term in (23) by its upper bound in (24) to obtain

(25) |

where the second inequality follows from the condition on the stepsize $\eta$. On taking the sum over iterations, we obtain that

(26) |

Set $z = z^* = [x^*; y^*]$, a solution of (1). Then, we obtain

(27) |

Note that each term of the sum on the left is nonnegative due to monotonicity of $F$, and therefore the sum is also nonnegative. Using these observations, we can write that

(28) |

Therefore, we can write that

(29) |

Regrouping the terms implies that

(30) |

Using the condition on the stepsize, it follows that for any iterate we have

(31) |

and the claim follows. ∎

According to Lemma 4, the sequence of iterates generated by OGDA stays within a closed and bounded convex set. We use this result to prove a sublinear convergence rate of $\mathcal{O}(1/k)$ for the average iterates generated by OGDA for smooth and convex-concave saddle point problems.

###### Theorem 2.

Consider the optimistic gradient descent ascent (OGDA) method introduced in (17). Further, recall the definition of the time-averaged iterates in (3) and the compact convex set in (21). If Assumptions 1-3 hold and the stepsize $\eta$ satisfies the required condition, then the iterates generated by OGDA satisfy

(32) |

where and .

###### Proof.

Note that the result in (32) also implies a rate in terms of the function values at the averaged iterates, as we show in the following corollary.

###### Corollary 1.

Suppose the conditions in Theorem 2 are satisfied. Then, the iterates generated by OGDA satisfy

###### Proof.

First note that both and are nonnegative. To verify note that and (since ). Further, note that belongs to the set . Hence, it yields . Also, we can show that . Therefore, . ∎

The result in Corollary 1 shows that the averaged iterates generated by OGDA converge to a saddle point of Problem (1) at a sublinear rate of $\mathcal{O}(1/k)$ when the objective function is smooth and convex-concave. To the best of our knowledge, this is the first non-asymptotic complexity bound for OGDA in the convex-concave setting. Moreover, note that without any extra gradient computation, i.e., computing only one gradient per iteration with respect to $x$ and one with respect to $y$, OGDA recovers the convergence rate of the proximal point method.

## 5 Extragradient Method

In this section, we focus on the extra-gradient (EG) method for solving the unconstrained min-max problem in (1). We show that by interpreting EG as an approximation of the proximal point method, it is possible to establish a convergence rate of $\mathcal{O}(1/k)$ through a simple and short analysis.

Consider the update of EG, in which we first compute a set of midpoint iterates $(x_{k+1/2}, y_{k+1/2})$ by following the update of gradient descent-ascent, i.e.,

(37) $x_{k+1/2} = x_k - \eta \nabla_x f(x_k, y_k), \qquad y_{k+1/2} = y_k + \eta \nabla_y f(x_k, y_k).$

Then, we compute the next iterates $(x_{k+1}, y_{k+1})$ using the gradients at the midpoints, i.e.,

(38) $x_{k+1} = x_k - \eta \nabla_x f(x_{k+1/2}, y_{k+1/2}), \qquad y_{k+1} = y_k + \eta \nabla_y f(x_{k+1/2}, y_{k+1/2}).$
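A minimal sketch of the two-step update (37)-(38) on the toy bilinear objective $f(x, y) = x^\top y$ (an assumed example with an arbitrary stepsize, not from the paper):

```python
import numpy as np

# f(x, y) = x^T y: grad_x f = y, grad_y f = x; saddle point at (0, 0).
eta = 0.1                    # illustrative stepsize
x, y = np.ones(2), np.ones(2)
xs, ys = [], []

for _ in range(2000):
    # (37): midpoint via a plain gradient descent-ascent step from (x_k, y_k)
    x_mid, y_mid = x - eta * y, y + eta * x
    # (38): update from (x_k, y_k) using the gradients evaluated at the midpoint
    x, y = x - eta * y_mid, y + eta * x_mid
    xs.append(x); ys.append(y)

# ergodic averages, as in (3)
x_avg, y_avg = np.mean(xs, axis=0), np.mean(ys, axis=0)
assert np.linalg.norm(np.concatenate([x, y])) < 1e-2
```

Note that each EG iteration uses two gradient evaluations, one at the current point and one at the midpoint, whereas OGDA reuses the previous gradient and needs only one new evaluation per iteration.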

We aim to show that EG, similar to OGDA, can be analyzed for convex-concave problems by considering it as an approximation of the proximal point method. To do so, let us use the notation $z_k = [x_k; y_k]$ and $z_{k+1/2} = [x_{k+1/2}; y_{k+1/2}]$ to write the update of EG as

(39) $z_{k+1/2} = z_k - \eta F(z_k), \qquad z_{k+1} = z_k - \eta F(z_{k+1/2}).$

We first use this notation to show that the iterates generated by EG stay within a closed and bounded convex set for all iterations $k$.
