In this paper, we focus on the following smooth saddle point problem
$$\min_{x \in \mathbb{R}^{d_x}} \max_{y \in \mathbb{R}^{d_y}} f(x, y), \tag{1}$$
where $f$ is strongly-convex in $x$ and strongly-concave in $y$. We aim to find the saddle point $(x^*, y^*)$ such that
$$f(x^*, y) \le f(x^*, y^*) \le f(x, y^*)$$
for all $x \in \mathbb{R}^{d_x}$ and $y \in \mathbb{R}^{d_y}$. This formulation covers many scenarios, including game theory [2, 36], AUC maximization [14, 40], robust optimization [3, 13, 33], and empirical risk minimization [11].
There are a great number of first-order optimization algorithms for solving problem (1), including the extragradient method [18, 35], optimistic gradient descent ascent, the proximal point method, and dual extrapolation. These algorithms iterate with a first-order oracle and achieve linear convergence. Lin et al. and Wang and Li used Catalyst acceleration to reduce the complexity for unbalanced saddle point problems, nearly matching the lower bound for first-order algorithms [25, 41] under specific assumptions. Compared with first-order methods, second-order methods usually enjoy superior convergence in numerical optimization. Huang et al. extended the cubic regularized Newton (CRN) method [23, 22] to solve saddle point problem (1), which has quadratic local convergence. However, each iteration of CRN requires accessing the exact Hessian matrix and solving the corresponding linear systems. These steps incur cubic time complexity in the problem dimension, which is too expensive for high dimensional problems.
Quasi-Newton methods [6, 5, 34, 4, 9] are popular ways to avoid accessing the exact second-order information used in standard Newton methods. They approximate the Hessian matrix based on the Broyden family updating formulas, which significantly reduces the computational cost. These algorithms are well studied for convex optimization. The famous quasi-Newton methods, including the Davidon-Fletcher-Powell (DFP) method [9, 12], the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method [6, 5, 34] and the symmetric rank-1 (SR1) method [4, 9], enjoy local superlinear convergence [27, 7, 10] when the objective function is strongly-convex. Recently, Rodomanov and Nesterov [29, 30, 31] proposed greedy variants of quasi-Newton methods, which first achieve non-asymptotic superlinear convergence. Later, Lin et al. established a better convergence rate which is condition-number-free. Jin and Mokhtari and Ye et al. showed that the non-asymptotic superlinear convergence rate also holds for the classical DFP, BFGS and SR1 methods.
In this paper, we study quasi-Newton methods for the saddle point problem (1). Since the Hessian matrix of our objective function is indefinite, the existing Broyden family update formulas and their convergence analysis cannot be applied directly. To overcome this issue, we propose a variant of the greedy quasi-Newton framework for saddle point problems, which approximates the square of the Hessian matrix during the iteration. Our theoretical analysis characterizes the convergence rate by the Euclidean distance to the saddle point, rather than the weighted norm of the gradient used in convex optimization [29, 30, 31, 20, 39, 17]. We summarize the theoretical results for the proposed algorithms in Table 1. The local convergence behavior of all of the algorithms has two periods. The first period consists of iterations with a linear convergence rate. The second one enjoys superlinear convergence:
For general Broyden family methods, we establish an explicit superlinear rate.
For the BFGS method and the SR1 method, we establish a faster explicit rate, which is condition-number-free.
Additionally, our ideas can also be used for solving general non-linear equations.
In Section 2, we introduce the notation and preliminaries used throughout this paper. In Section 3, we first propose greedy quasi-Newton methods for the quadratic saddle point problem, which enjoy local superlinear convergence, and then extend them to solve general strongly-convex-strongly-concave saddle point problems. In Section 4, we show that our theory can also be applied to solving more general non-linear equations and give the corresponding convergence analysis. We conclude our work in Section 5. All proofs are deferred to the appendix.
Table 1: Summary of the convergence results for the proposed algorithms.

| Algorithms | Upper bound of the superlinear convergence rate |
| --- | --- |
| Broyden family (Algorithm 5) | |
| BFGS/SR1 (Algorithm 6/7) | |
2 Notation and Preliminaries
We use $\|\cdot\|$ to denote the spectral norm of a matrix and the Euclidean norm of a vector, respectively. We denote the standard basis of $\mathbb{R}^d$ by $\{e_1, \dots, e_d\}$ and let $I$ be the identity matrix. The trace of a square matrix $A$ is denoted by $\mathrm{tr}(A)$. Given two positive definite matrices $A$ and $B$, we define their inner product as $\langle A, B \rangle := \mathrm{tr}(AB)$.
We introduce the following notation to measure how well a matrix $G$ approximates a matrix $A$:
If we further suppose , it holds that
by Rodomanov and Nesterov .
Using the notation of problem (1), we let $z := (x, y) \in \mathbb{R}^d$ with $d := d_x + d_y$, and denote the gradient and Hessian matrix of $f$ at $z$ as $g(z) := \nabla f(z)$ and $H(z) := \nabla^2 f(z)$, respectively.
We suppose the saddle point problem (1) satisfies the following assumptions.
Assumption 2.1. The objective function $f$ is twice differentiable and has $L$-Lipschitz continuous gradient and $L_2$-Lipschitz continuous Hessian, i.e., there exist constants $L > 0$ and $L_2 > 0$ such that
$$\|g(z) - g(z')\| \le L \|z - z'\| \quad \text{and} \quad \|H(z) - H(z')\| \le L_2 \|z - z'\|$$
for any $z, z' \in \mathbb{R}^d$.
Assumption 2.2. The objective function $f$ is twice differentiable, $\mu$-strongly-convex in $x$ and $\mu$-strongly-concave in $y$, i.e., there exists $\mu > 0$ such that $\nabla_{xx}^2 f(x, y) \succeq \mu I$ and $\nabla_{yy}^2 f(x, y) \preceq -\mu I$ for any $x \in \mathbb{R}^{d_x}$ and $y \in \mathbb{R}^{d_y}$.
Note that inequality (4) implies the spectral norm of the Hessian matrix can be upper bounded, that is, $\|H(z)\| \le L$ for all $z \in \mathbb{R}^d$. Additionally, the condition number of the objective function is defined as $\kappa := L / \mu$.
3 Quasi-Newton Methods for Saddle Point Problems
The update rule of standard Newton's method for solving problem (1) can be written as
$$z_{k+1} = z_k - H(z_k)^{-1} g(z_k). \tag{7}$$
This iteration scheme has quadratic local convergence, but solving the linear system (7) takes $\mathcal{O}(d^3)$ time complexity. For convex minimization, quasi-Newton methods including BFGS/SR1 [6, 5, 34, 4, 9] and their variants [20, 39, 29, 30] focus on approximating the Hessian and reduce the computational cost to $\mathcal{O}(d^2)$ per iteration. However, all of these algorithms and their convergence analysis are based on the assumption that the Hessian matrix is positive definite, which is not suitable for our saddle point problems since $H(z)$ is indefinite.
We introduce the auxiliary matrix defined as the square of the Hessian, that is, $H(z)^2 = H(z)H(z)$.
The following lemma shows that $H(z)^2$ is always positive definite.
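To make this concrete, the following small NumPy check (our illustration, not part of the paper) uses the toy quadratic $f(x, y) = \tfrac{1}{2}x^2 + 4xy - \tfrac{1}{2}y^2$: its Hessian is indefinite, while the square of the Hessian is positive definite.

```python
import numpy as np

# Hessian of the toy saddle objective f(x, y) = 0.5*x^2 + 4*x*y - 0.5*y^2,
# which is strongly convex in x and strongly concave in y.
H = np.array([[1.0, 4.0],
              [4.0, -1.0]])

print("eigenvalues of H  :", np.linalg.eigvalsh(H))      # one negative, one positive
print("eigenvalues of H^2:", np.linalg.eigvalsh(H @ H))  # all strictly positive
```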
Hence, we can reformulate the update of Newton's method (7) as
$$z_{k+1} = z_k - \left(H(z_k)^2\right)^{-1} H(z_k)\, g(z_k). \tag{9}$$
It is then natural to characterize the second-order information by estimating the auxiliary matrix $H(z)^2$, rather than the indefinite Hessian $H(z)$. If we have obtained a symmetric positive definite matrix $G$ as an estimator of $H(z_k)^2$, the update rule (9) can be approximated by
$$z_{k+1} = z_k - G^{-1} H(z_k)\, g(z_k). \tag{10}$$
The remainder of this section introduces several strategies to construct $G$, resulting in quasi-Newton methods for saddle point problems with local superlinear convergence. We point out that implementing iteration (10) does not require constructing the Hessian matrix explicitly, since we are only interested in the Hessian-vector product $H(z_k)\, g(z_k)$, which can be computed efficiently [26, 32].
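For illustration only (this is our sketch, not the paper's implementation), one step of iteration (10) can be carried out from a gradient oracle alone, approximating the Hessian-vector product $H(z_k)g(z_k)$ by a central finite difference; automatic differentiation would serve the same purpose. Here `grad` and the symmetric positive definite estimator `G` are assumed to be supplied by the user.

```python
import numpy as np

def hessian_vector_product(grad, z, v, eps=1e-6):
    """Approximate H(z) @ v by a central finite difference of the gradient."""
    return (grad(z + eps * v) - grad(z - eps * v)) / (2.0 * eps)

def quasi_newton_step(grad, z, G):
    """One step of iteration (10): z_+ = z - G^{-1} H(z) g(z),
    where G is a symmetric positive definite estimator of H(z)^2."""
    g = grad(z)
    Hg = hessian_vector_product(grad, z, g)
    return z - np.linalg.solve(G, Hg)

# Toy saddle f(x, y) = 0.5*x^2 + 4*x*y - 0.5*y^2 with its gradient below.
grad = lambda z: np.array([z[0] + 4.0 * z[1], 4.0 * z[0] - z[1]])
z_next = quasi_newton_step(grad, np.array([1.0, -2.0]), 20.0 * np.eye(2))
print(z_next)
```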
3.1 The Broyden Family Updates
We first introduce the Broyden family [24, Section 6.3] of quasi-Newton updates for approximating a positive definite matrix $A$ by using the information of the current estimator $G$.
Suppose two positive definite matrices $A$ and $G$ satisfy $A \preceq G$. For any $u \in \mathbb{R}^d$ and $\tau \in [0, 1]$, if $Gu = Au$, we define $\mathrm{Broyd}_\tau(G, A, u) := G$. Otherwise, we define
$$\mathrm{Broyd}_\tau(G, A, u) := \tau \left[ G - \frac{A u u^\top G + G u u^\top A}{u^\top A u} + \left( \frac{u^\top G u}{u^\top A u} + 1 \right) \frac{A u u^\top A}{u^\top A u} \right] + (1 - \tau) \left[ G - \frac{(G - A) u u^\top (G - A)}{u^\top (G - A) u} \right]. \tag{11}$$
Different choices of the parameter $\tau$ in formula (11) recover several popular quasi-Newton updates (a code sketch of both updates follows the list):
For $\tau = \frac{u^\top A u}{u^\top G u}$, it corresponds to the BFGS update
$$\mathrm{BFGS}(G, A, u) := G - \frac{G u u^\top G}{u^\top G u} + \frac{A u u^\top A}{u^\top A u}. \tag{12}$$
For $\tau = 0$, it corresponds to the SR1 update
$$\mathrm{SR1}(G, A, u) := G - \frac{(G - A) u u^\top (G - A)}{u^\top (G - A) u}. \tag{13}$$
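For concreteness, the updates (12) and (13) translate directly into NumPy as below (a sketch based on our reconstruction of the formulas above; `G` is the current estimator, `A` the target matrix, and `u` the chosen direction).

```python
import numpy as np

def bfgs_update(G, A, u):
    """BFGS update (12): replace the curvature of G along u by that of A."""
    Gu, Au = G @ u, A @ u
    return G - np.outer(Gu, Gu) / (u @ Gu) + np.outer(Au, Au) / (u @ Au)

def sr1_update(G, A, u):
    """SR1 update (13): symmetric rank-one correction of G towards A."""
    Ru = (G - A) @ u
    denom = u @ Ru
    if abs(denom) < 1e-12:   # G u = A u, so no update is needed
        return G.copy()
    return G - np.outer(Ru, Ru) / denom
```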
The general Broyden family update in Definition 3.2 has the following properties.
Lemma 3.3 ([29, Lemma 2.1 and Lemma 2.2]).
Suppose two positive definite matrices $A$ and $G$ satisfy $A \preceq G \preceq \eta A$ for some $\eta \ge 1$. Then for any $u \in \mathbb{R}^d$ and $\tau \in [0, 1]$, we have
$$A \preceq \mathrm{Broyd}_\tau(G, A, u) \preceq \eta A.$$
Additionally, for any , we have
We first introduce the greedy update method, which chooses the direction $u$ as follows:
$$\bar{u}_A(G) := \mathop{\arg\max}_{u \in \{e_1, \dots, e_d\}} \frac{u^\top G u}{u^\top A u}. \tag{15}$$
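Assuming the greedy rule (15) maximizes the ratio $u^\top G u / u^\top A u$ over the standard basis, it only needs the diagonals of $G$ and $A$; a minimal sketch:

```python
import numpy as np

def greedy_direction(G, A):
    """Return the basis vector e_i maximizing (e_i' G e_i) / (e_i' A e_i)."""
    i = int(np.argmax(np.diag(G) / np.diag(A)))
    u = np.zeros(G.shape[0])
    u[i] = 1.0
    return u
```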
Lemma 3.4 ([29, Theorem 2.5]).
Suppose two positive definite matrices $A$ and $G$ satisfy $A \preceq G$. Let $G_+ := \mathrm{Broyd}_\tau(G, A, \bar{u}_A(G))$, then for any $\tau \in [0, 1]$, we have
For the specific Broyden family updates BFGS and SR1 shown in (12) and (13), we can replace (15) by a scaled greedy direction, which leads to a better convergence result. Concretely, for the BFGS method, we first find an upper triangular matrix $U$ such that $G = U^\top U$. This step can be implemented with $\mathcal{O}(d^2)$ complexity [20, Proposition 15]. We present the subroutine for factorizing $G$ in Algorithm 1 and give its detailed implementation in Appendix B.
Then we use this scaled direction for the BFGS update, where
For the SR1 method, we choose the direction by
Lemma 3.5 ([20, Theorem 2.6]).
Suppose two positive definite matrices $A$ and $G$ satisfy $A \preceq G$. Let the direction be chosen as above, where $U$ is an upper triangular matrix such that $G = U^\top U$. Then we have
The effect of the SR1 update can be characterized by the following measure:
Lemma 3.6 ([20, Theorem 2.3]).
Suppose two positive definite matrices satisfy . Let , then we have
3.2 Algorithms for Quadratic Saddle Point Problems
Then we consider solving the quadratic saddle point problem of the form
$$\min_{x \in \mathbb{R}^{d_x}} \max_{y \in \mathbb{R}^{d_y}} f(x, y) := \frac{1}{2} z^\top A z + b^\top z,$$
where $z = (x, y)$, $A \in \mathbb{R}^{d \times d}$ is symmetric and $b \in \mathbb{R}^d$. We suppose $A$ can be partitioned as
$$A = \begin{bmatrix} A_{xx} & A_{xy} \\ A_{xy}^\top & A_{yy} \end{bmatrix},$$
where the sub-matrices $A_{xx}$, $A_{xy}$ and $A_{yy}$ are such that $f$ satisfies Assumptions 2.1 and 2.2, i.e., $A_{xx} \succeq \mu I$, $A_{yy} \preceq -\mu I$ and $\|A\| \le L$. Recall that the condition number is defined as $\kappa = L / \mu$. Using the notation introduced in Section 2, we have $g(z) = Az + b$ and $H(z) = A$ for all $z \in \mathbb{R}^d$.
We present the detailed procedures of the greedy quasi-Newton methods for the quadratic saddle point problem with the Broyden family update, the BFGS update and the SR1 update in Algorithms 2, 3 and 4 respectively. For our convergence analysis, we define $r_k$ as the Euclidean distance from $z_k$ to the saddle point $z^*$, that is, $r_k := \|z_k - z^*\|$.
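To illustrate the quadratic case, the following self-contained sketch (our simplified rendering, not necessarily the paper's exact Algorithm 3) runs iteration (10) with $g(z) = Az + b$ and $H(z) = A$, greedily updating the estimator toward $A^2$ with the BFGS update (12), and prints the Euclidean distance $r_k$ at every step.

```python
import numpy as np

def greedy_bfgs_quadratic_saddle(A, b, z0, iters=30):
    """Greedy BFGS sketch for the quadratic saddle problem with gradient
    g(z) = A z + b and Hessian H(z) = A, so the target matrix is A^2."""
    d = A.shape[0]
    A2 = A @ A
    G = (np.linalg.norm(A, 2) ** 2) * np.eye(d)   # G_0 = L^2 I, so that A^2 <= G_0
    z = np.asarray(z0, dtype=float).copy()
    z_star = np.linalg.solve(A, -b)               # exact saddle point, for reference only
    for k in range(iters):
        # Quasi-Newton step (10): z_+ = z - G^{-1} H(z) g(z) = z - G^{-1} A (A z + b).
        z = z - np.linalg.solve(G, A @ (A @ z + b))
        # Greedy coordinate: maximize (e_i' G e_i) / (e_i' A^2 e_i).
        i = int(np.argmax(np.diag(G) / np.diag(A2)))
        u = np.zeros(d); u[i] = 1.0
        # BFGS update (12) of G towards A^2 along u.
        Gu, Au = G @ u, A2 @ u
        G = G - np.outer(Gu, Gu) / (u @ Gu) + np.outer(Au, Au) / (u @ Au)
        print(f"k = {k:2d}   r_k = {np.linalg.norm(z - z_star):.3e}")
    return z

# A small example: strongly convex in x (A_xx > 0) and strongly concave in y (A_yy < 0).
A = np.array([[2.0, 1.0],
              [1.0, -3.0]])
b = np.array([1.0, -1.0])
greedy_bfgs_quadratic_saddle(A, b, np.array([3.0, 3.0]))
```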
The definition of $r_k$ in this paper is different from the measure used in convex optimization [29, 20] (in a later section, we will see that this measure is suitable for the convergence analysis of quasi-Newton methods for saddle point problems), but it satisfies a similar property as follows.
The next theorem states that the assumptions of Lemma 3.7 always hold, which means $r_k$ converges to $0$ linearly.
Combining the results of Theorems 3.8 and 3.9, we obtain a two-stage convergence behavior; that is, the algorithm has global linear convergence and local superlinear convergence. The formal description is summarized as follows.
3.3 Algorithms for General Saddle Point Problems
In this section, we consider the general saddle point problem
$$\min_{x \in \mathbb{R}^{d_x}} \max_{y \in \mathbb{R}^{d_y}} f(x, y).$$
The key idea of designing quasi-Newton methods for saddle point problems is to characterize the second-order information by approximating the auxiliary matrix $H(z)^2$. Note that we have assumed the Hessian of $f$ is Lipschitz continuous and bounded in Assumptions 2.1 and 2.2, which means the auxiliary matrix operator $H(\cdot)^2$ is also Lipschitz continuous.
Let and be a positive definite matrix such that
for some . We additionally define and , then we have
for all , and .
The relationship (32) implies that it is reasonable to establish the algorithms with the update rule (33). Similarly, we can also achieve the analogous updates for the specific BFGS and SR1 methods. Combining iteration (10) with (33), we propose quasi-Newton methods for general strongly-convex-strongly-concave saddle point problems. The details are shown in Algorithms 5, 6 and 7 for the greedy Broyden family, BFGS and SR1 updates respectively.
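As a simplified sketch of the general case (our illustration, which omits any correction of the estimator for the drift of the Hessian between iterations and is therefore not the paper's exact Algorithms 5-7), one can maintain a symmetric positive definite estimator of $H(z_k)^2$ and combine iteration (10) with the greedy BFGS update. Here `grad` and `hess` are user-supplied oracles.

```python
import numpy as np

def greedy_bfgs_saddle(grad, hess, z0, iters=30):
    """Simplified greedy quasi-Newton sketch for a general strongly-convex-
    strongly-concave f: keep an SPD estimator G of H(z)^2 and run iteration (10).
    Any correction of G for the change of the Hessian along the path is omitted."""
    d = z0.shape[0]
    z = np.asarray(z0, dtype=float).copy()
    G = (np.linalg.norm(hess(z), 2) ** 2) * np.eye(d)    # ensures H(z_0)^2 <= G_0
    for _ in range(iters):
        H = hess(z)
        z = z - np.linalg.solve(G, H @ grad(z))          # iteration (10)
        # Greedy BFGS update of G towards the new target H(z)^2.
        A2 = hess(z) @ hess(z)
        i = int(np.argmax(np.diag(G) / np.diag(A2)))
        u = np.zeros(d); u[i] = 1.0
        Gu, Au = G @ u, A2 @ u
        G = G - np.outer(Gu, Gu) / (u @ Gu) + np.outer(Au, Au) / (u @ Au)
    return z

# Example: f(x, y) = 0.5*x^2 + 0.25*x^4 + x*y - 0.5*y^2 - 0.25*y^4,
# which is strongly convex in x and strongly concave in y.
grad = lambda z: np.array([z[0] + z[0]**3 + z[1], z[0] - z[1] - z[1]**3])
hess = lambda z: np.array([[1.0 + 3.0 * z[0]**2, 1.0],
                           [1.0, -1.0 - 3.0 * z[1]**2]])
print(greedy_bfgs_saddle(grad, hess, np.array([1.0, 1.0])))
```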
3.3.2 Convergence Analysis
We now turn to building the convergence guarantees for the algorithms proposed in Section 3.3.1. We start with the following lemma, which is useful for the further analysis.
We still use the Euclidean distance $r_k = \|z_k - z^*\|$ for the analysis and establish the relationship between $r_{k+1}$ and $r_k$, which is shown in Lemma 3.14.
Rodomanov and Nesterov [29, Lemma 4.3] derived a result similar to Lemma 3.14 for minimizing a strongly-convex function, but their result depends on a different measure. (The original notation of Rodomanov and Nesterov considers minimizing a strongly-convex function and establishes the convergence result in terms of a weighted gradient norm; to avoid ambiguity, we use our own notation to describe their work in this paper.) Note that our algorithms are based on the iteration rule (10), that is, $z_{k+1} = z_k - G_k^{-1} H(z_k)\, g(z_k)$.
Compared with quasi-Newton methods for convex optimization, there exists an additional Hessian factor $H(z_k)$ between $G_k^{-1}$ and $g(z_k)$, which makes a convergence analysis based on the weighted gradient norm difficult. Fortunately, we find that directly using the Euclidean distance makes the analysis go through smoothly.
For further analysis, we also denote
Then we analyze how this measure changes after one iteration to show the local superlinear convergence of the greedy Broyden family method (Algorithm 5) and the greedy BFGS method (Algorithm 6). Recall that the measure defined in (39) quantifies how well the matrix $G_k$ approximates the auxiliary matrix $H(z_k)^2$.
The analysis for the greedy SR1 method is based on constructing a quantity such that