A Universal Algorithm for Variational Inequalities Adaptive to Smoothness and Noise

02/05/2019
by   Francis Bach, et al.
Inria
4

We consider variational inequalities coming from monotone operators, a setting that includes convex minimization and convex-concave saddle-point problems. We assume an access to potentially noisy unbiased values of the monotone operators and assess convergence through a compatible gap function which corresponds to the standard optimality criteria in the aforementioned subcases. We present a universal algorithm for these inequalities based on the Mirror-Prox algorithm. Concretely, our algorithm simultaneously achieves the optimal rates for the smooth/non-smooth, and noisy/noiseless settings. This is done without any prior knowledge of these properties, and in the general set-up of arbitrary norms and compatible Bregman divergences. For convex minimization and convex-concave saddle-point problems, this leads to new adaptive algorithms. Our method relies on a novel yet simple adaptive choice of the step-size, which can be seen as the appropriate extension of AdaGrad to handle constrained problems.

READ FULL TEXT VIEW PDF

page 1

page 2

page 3

page 4

10/15/2020

Adaptive and Universal Single-gradient Algorithms for Variational Inequalities

Variational inequalities with monotone operators capture many problems o...
10/30/2019

UniXGrad: A Universal, Adaptive Algorithm with Optimal Guarantees for Constrained Optimization

We propose a novel adaptive, accelerated algorithm for the stochastic co...
11/23/2020

Geometry-Aware Universal Mirror-Prox

Mirror-prox (MP) is a well-known algorithm to solve variational inequali...
07/09/2020

Higher-order methods for convex-concave min-max optimization and monotone variational inequalities

We provide improved convergence rates for constrained convex-concave min...
05/12/2022

Optimal Methods for Higher-Order Smooth Monotone Variational Inequalities

In this work, we present new simple and optimal algorithms for solving t...
06/17/2022

Optimal Extragradient-Based Bilinearly-Coupled Saddle-Point Optimization

We consider the smooth convex-concave bilinearly-coupled saddle-point pr...
10/25/2016

Frank-Wolfe Algorithms for Saddle Point Problems

We extend the Frank-Wolfe (FW) optimization algorithm to solve constrain...

1 Introduction

Variational inequalities are a classical and general framework to encompass a wide variety of optimization problems such as convex minimization and convex-concave saddle-point problems, which are ubiquitous in machine learning and optimization

(Nemirovski, 2004; Juditsky et al., 2011; Juditsky and Nemirovski, 2016). Given a convex subset of , these inequalities are often defined from a monotone operator (which we will assume single-valued for simplicity), such that for any , . The goal is then to find a strong solution to the variational inequality, that is, such that

(1)

For convex minimization problems, the operator is simply the subgradient operator, while for convex-concave saddle-point problems, the operator is composed of the subgradient with respect to the primal variable, and the negative supergradient with respect to the dual variables (see a detailed description in Section 2.3). In these two classical cases, solving the variational inequality corresponds to the usual notion of solution for these two problems. While our main motivation is to have a unique framework for these two subcases, the variational inequality framework is more general (see e.g. Nemirovski (2004) and references therein).

In this paper we are interested in algorithms to solve the inequality in Eq. (1), while only accessing an oracle for for any given

, or only an unbiased estimate of

. We also assume that we may efficiently project onto the set (which we assume compact throughout this paper) using Bregman divergences. In terms of complexity bounds, this problem is by now well-understood with matching upper and lower bounds in a variety of situations. In particular the notion of smoothness (i.e., Lipschitz-continuity of vs. simply assuming that is bounded) and the presence of noise are the two important factors influencing the convergence rates. For example, the “Mirror-Prox” algorithm of Nemirovski (2004) and Juditsky et al. (2011), given the correct step-size (that depends heavily on the properties of the problem, see Section 2), attains the following bounds:

  • For non-smooth problems where the operator (and its unbiased estimates) is bounded by , the rate is attained after iterations, where is the proper notion of diameter for the set .

  • For smooth problems with

    -Lipschitz operators, and a noise variance of

    , the convergence rate is .

These rates are actually optimal for this class of problems333The class of problems indeed includes convex optimization with lower bounds in  (Nemirovskii and Yudin, 1983) and bilinear saddle-point problems with lower bound in  (Nemirovsky, 1992).. However, practitioners may not know in which class their problem lies or know all the required constants needed for running the algorithms. Thus universal (sometimes called adaptive) algorithms are needed to leverage the potentially unknown properties of an optimization problem. Moreover, locally, the problem could be smoother or less noisy than globally, and thus classical algorithms would not benefit from extra local speed-ups.

In this paper we make the following contributions:

  • We present a universal algorithm for variational inequalities based on the Mirror-Prox algorithm, for both deterministic and stochastic settings. Our method employs a simple adaptive choice of the step-size that leads to optimal rates for smooth and non-smooth variational inequalities. Our algorithm does not require prior knowledge regarding the smoothness or noise properties of the problem.

  • This is done in the general set-up of arbitrary norms and compatible Bregman divergences.

  • For convex minimization and convex-concave saddle-point problems, this leads to new adaptive algorithms. In particular, our new adaptive method can be seen as extension of AdaGrad (McMahan and Streeter, 2010; Duchi et al., 2011), that is more appropriate to handling constrained problems.

On the technical side, our work combines the Mirror-Prox method with a novel adaptive learning rate rule inspired by online learning techniques such as AdaGrad (McMahan and Streeter, 2010; Duchi et al., 2011), and optimistic OGD (Chiang et al., 2012; Rakhlin and Sridharan, 2013).

Related work.

Algorithms for solving variational inequalities date back to Korpelevich (1976) who was the first to suggest the extragradient method. The key idea behind this method is the following: in each round  we make two updates. First, we take a gradient step from the current iterate , which leads to a point . Then, instead of applying another gradient step starting in , we go back to and take a step using the gradient of , which leads to .

The work of Korpelevich (1976) was followed by Korpelevich (1983); Noor (2003), who further explored the asymptotic behaviour of such extragradient-like algorithms. The seminal work of Nemirovski (2004) was the first to establish non-asymptotic convergence guarantees of such a method, establishing a rate of for smooth problems. Nemirovski’s method named Mirror-Prox was further explored by Juditsky et al. (2011), who analyze the stochastic setting, and present a Mirror-Prox version that obtains a rate of , where is the variance of the noise terms. It is also known that in the non-smooth case, Mirror-Prox obtains a rate of (Juditsky and Nemirovski, 2011). Note that the Mirror-Prox versions that we have mentioned so far require prior knowledge about the smoothness/non-smoothness and on the noise properties of the problem (i.e., ), in order to obtain the optimal bounds for each case444For the special case of bi-linear saddle-point problems, Juditsky et al. (2013) designed an algorithm that is adaptive to noise, but not to non-smoothness (which is irrelevant for bi-linear problems).. Conversely, our method obtains these optimal rates without any such prior knowledge. Note that Yurtsever et al. (2015); Dvurechensky et al. (2018)

devise universal methods to solve variational inequalities that adapt to the smoothness of the problem. Nevertheless, these methods build on a line search technique that is inappropriate for handling noisy problems. Moreover, these methods require a predefined accuracy parameter as an input, which requires careful hyperparameter tuning.

In the past years there have been several works on universal methods for convex optimization (which is a particular case of the variational inequalities framework). Nesterov (2015) designed a universal method that obtains the optimal convergence rates of and for smooth/non-smooth optimization, without any prior knowledge of the smoothness. Yet, this method builds on a line search technique that is inappropriate to handling noisy problems. Moreover, it also requires a predefined accuracy parameter as an input, which requires careful tuning.

Levy (2017) designed alternative universal methods for convex minimization that do not require line search, yet these methods obtain a rate of rather than the accelerated rate for smooth objectives. Moreover, their results for the smooth case only holds for unconstrained problems. The same also applies to the well known AdaGrad method (McMahan and Streeter, 2010; Duchi et al., 2011). Recently, Levy et al. (2018) have presented a universal method that obtains the optimal rates for smooth/non-smooth and noisy/noiseless settings, without any prior knowledge of these properties. Nevertheless, their results for the smooth case are only valid in the unconstrained setting. Finally, note that these convex optimization methods are usually not directly applicable to the more general variational inequality framework.

Methods for solving convex-concave zero-sum games or saddle-point problems (another particular case of the variational inequality framework) were explored by the online learning community. The seminal work of Freund et al. (1999) has shown how to employ regret minimization algorithms to solve such games at a rate of . While the Mirror-Prox method solves such games at a faster rate of , it requires communication between the players. Interestingly, Daskalakis et al. (2011) have shown how to achieve a rate of without communication. Finally, Rakhlin and Sridharan (2013) have provided a much simpler algorithm that obtains the same guarantees.

2 Variational Inequalities and Gap Functions

Here we present our general framework of variational inequalities with monotone operators, and introduce the notion of associated convex gap function. In Section 2.2 and 2.3, we show how this framework captures the settings of convex optimization, as well as convex-concave minimax games.

Preliminaries.

Let be a general norm and be its dual norm. A function is -strongly convex over a convex set , if for any and any , a subgradient of at ,

A function is -smooth over if,  Also, for a convex differentiable function , we define its Bregman divergence as follows,

Note that is always non-negative. For more properties, see, e.g., Nemirovskii and Yudin (1983) and references therein.

2.1 Gap functions

We are considering a monotone operator from to , which is single-valued for simplicity555That is, each is mapped to a single ; we could easily extend to the multi-valued setting (Bauschke and Combettes, 2011), at the expense of more cumbersome notations.. Formally, a monotone operator satisfies,

And we are usually looking for a strong solution of the variational inequality, that satisfies

When is monotone, as discussed by Juditsky and Nemirovski (2016), a strong solution is also a weak solution, that is, . Note that we do not use directly the monotonicity property of ; we only use the existence of a compatible gap function with respect to , which is an adapted notion of merit function to characterize convergence, that we define in Def. 2.1. We show below that this definition captures the settings of convex optimization and convex-concave games.

We thus assume that we are given a convex set , as well as a gap function . For a given solution , we define its duality gap as follows,

(2)

We assume to have an access to an oracle for , i.e., upon querying this oracle with , we receive . Our goal is to find a solution such that its duality gap is (approximately) zero. We also consider a stochastic setting (similarly to Juditsky et al. (2011)), where our goal is to provide guarantees on the expected duality gap. Next we present the central definition of this paper:

Definition 2.1 (Compatible gap function).

Let be a convex set, and let , such that is convex with respect to its first argument. We say that the function is a gap function compatible with the monotone operator if,

and is a solution of Eq. (1) if and only if .

Note that given the notion of solution to the variational inequality in Eq. (1), the function is a good candidate for , but it is not convex in in general and thus Jensen’s inequality cannot be applied.

Assumptions on .

Throughout this paper we will assume there exists a bound on the magnitude of (and all of its unbiased estimates), i.e.,

We will sometimes consider the extra assumption that is -smooth w.r.t. a given norm , i.e.,

where is the dual norm of . Note that we define the notion of smoothness for functions , as well as to monotone operators . These two different notions coincide when is the gradient of (see Sec. 2.2).

Next we show that the setting that we described in the section (see Def. 2.1) captures two important settings, namely convex optimization and convex-concave zero-sum games.

2.2 Convex Optimization

Assume that is a convex set, and is convex over . In the convex optimization setting our goal is to minimize , i.e.,

We assume that we may query (sub)gradients of . Next we show how this setting is captured by the variational inequality setting. Let us define a gap function and an operator as follows,

Then by the (sub)gradient inequality for convex functions, it immediately follows that is a compatible gap function with respect to . Also, it is clear that is convex with respect to . Finally, note that the duality gap in this case is the natural sub-optimality measure, i.e.,

Moreover, if is -smooth w.r.t. a norm , then is smooth with respect to the same norm.

2.3 Convex-Concave Zero-sum Games

Let , where is convex in and concave in , and are compact convex sets. The convex-concave zero-sum game induced by is defined as follows,

The performance measure for such games is the duality gap which is defined as,

(3)

The duality gap is always non-negative, and we seek an (approximate) equilibrium, i.e., a point such that .

This setting can be classically described as a variational inequality problem. Let us denote,

For any , define a gap function and an operator , as follows,

It is immediate to show that this gap function, , induces the duality gap appearing in Eq. (3), i.e., . Also, from the convex-concavity of it immediately follows that is convex in . The next lemma from Nemirovski (2004) shows that is a gap function compatible with (for completeness we provide its proof in Appendix A.1).

Lemma 2.1.

The following applies for any :

Mirror Map for Zero-sum Games.

In this work, our variational inequality method employs a mirror-map over . For the case of zero-zum games , and we usually have separate mirror-map terms, , and . Juditsky and Nemirovski (2011) have found a way to appropriately define a mirror-map over using . We hereby describe it.

Assume that the separate mirror-maps are -strongly convex w.r.t. norms and , and let and be the respective dual norms. Also, define , and similarly define . Juditsky and Nemirovski (2011) suggest to employ,

and to define,

(4)

In this case is -strongly-convex w.r.t. . Also, the dual norm of in this case is,

(5)
Smooth Zero-sum Games.

It can be shown that if the gradient mapping , and are Lipschitz-continuous with respect to both and , then the monotone operator defined through is also smooth. Concretely, let , and be norms over and , and let , and be their respective dual norms. Juditsky and Nemirovski (2011) show that if the following holds ,

Then it can be shown that

where , and are defined in Equations (4) and (5), and,

3 Universal Mirror-Prox

This section presents our variational inequality algorithm. We first introduce the optimistic-OGD algorithm of Rakhlin and Sridharan (2013), and present its guarantees. Then we show how to adapt this algorithm together with a novel learning rate rule in order to solve variational inequalities in a universal manner. Concretely, we present an algorithm that, without any prior knowledge regarding the problem’s smoothness, obtains a rate of for smooth problems (Thm. 3.1), and an rate for non-smooth problems (Thm. 3.2). Our algorithm can be seen as an adaptive version of the Mirror-Prox method (Nemirovski, 2004).

We provide a proof sketch of Thm. 3.1 in Section 3.3. The full proofs are deferred to the Appendix.

3.1 Optimistic OGD

Here we introduce the optimistic online gradient descent (OGD) algorithm of Rakhlin and Sridharan (2013). This algorithm applies to the online linear optimization setting that can be described as a sequential game over rounds between a learner and an adversary. In each round ,

  • the learner picks a decision point ,

  • the adversary picks a loss vector

    ,

  • the learner incurs a loss of , and gets to view as a feedback.

The performance measure for the learner is the regret which is defined as follows,

and we are usually interested in learning algorithms that ensure a regret which is sublinear in .

Hint Vectors.

Rakhlin and Sridharan (2013) assume that in addition to viewing the loss sequence , the learner may access a sequence of “hint vectors” . Meaning that in each round , prior to choosing , the player gets to view a “hint vector” . In the case where the hints are good predictions for the loss vectors, i.e., , Rakhlin and Sridharan (2013) show that this could be exploited to provide improved regret guarantees. Concretely, they suggest to use the following optimistic OGD method:  Choose , and ,

(6)

where is a -strongly-convex function over w.r.t. a given norm , and is the Bregman divergence of . The following guarantees for optimistic OGD hold, assuming that the learning rate sequence is non-increasing (see proof in Appendix C.1) ,

Lemma 3.1 (Rakhlin and Sridharan (2013)).
(7)

where , and is the dual norm of  .

3.2 Universal Mirror-Prox

Here we describe a new adaptive scheme for the learning rate of the above mentioned optimistic OGD. Then we show that applying this adaptive scheme to solving variational inequalities yields an algorithm that adapts to smoothness and noise.

A new Adaptive Scheme.

Rakhlin and Sridharan (2013) suggest to apply the following learning rate scheme inside optimistic OGD (Equation (6)),

They show by employing this rule with a version of optimistic OGD yields an algorithm that solves zero-sum matrix games at a fast rate, without any communication between the players. While the Mirror-Prox algorithm (Nemirovski, 2004) achieves such a fast rate, it requires both players to communicate their iterates to each other in every round.

Our goal here is different. We would like to adapt to the smoothness and noise of the objective, while allowing players to communicate. To do so, we suggest to use the following adaptive scheme,

(8)

with the same definition of the diameter as in Lemma 3.1, and is an arbitrary constant. Note that the best choice for is a tight upper bound on the dual norms of the ’s and ’s, which we denote here by , i.e., . Nevertheless, even if we still achieve convergence guarantees that scales with

In this work we assume to know , yet we do not assume any prior knowledge of .

Finally, note that for any ; this immediately follows by the next lemma.

Lemma 3.2.

Let be a bound on the dual norms of . Then the above holds for , that are used in Optimistic OGD (Eq. (6)),

Solving variational inequalities.

So far we have described the online setting where the loss and hint vectors may change arbitrarily. Here we focus on the case where there exists a gap function that is compatible with a given monotone operator (see Definition 2.1). Recall that in this setting our goal is to minimize the duality gap induced by . To do so, we choose and in each round as follows,

(9)

These choices correspond to the extragradient (Korpelevich, 1976) and to Mirror-Prox (Nemirovski, 2004) methods.

In Alg. 1 we present our universal Mirror-Prox algorithm for solving variational inequalities. This algorithm combines the Mirror-Prox algorithm (i.e., combining Eq. (9) inside the optimistic OGD of Eq. (6)), together with the new adaptive scheme that we propose in Eq. (8).

  Input: #Iterations , , learning rate as in Eq. (8)
  for   do
     Set
     Update:
  end for
  Output:
Algorithm 1 Universal Mirror-Prox
Intuition.

Before stating the guarantees of Alg. 1, let us give some intuition behind the learning rate that we suggest in Eq. (8). Note that the original Mirror-Prox algorithm employs two extreme learning rates for the non-smooth and smooth cases. In the smooth case the learning rate is constant, i.e., , and in the non-smooth case it is decaying, i.e., . Next we show how our adaptive learning rate seems to implicitly adapts to the smoothness of the problem.

For simplicity, let us focus on the convex optimization setting, where our goal is to minimize a convex function , and therefore . Also assume we use . In this case, optimistic OGD (Eq. (6)) is simply, , and , where is the orthogonal projection onto . Now, let , and let us imagine two situations:
(i) If is non-smooth around , then the norms of the gradients are not decaying as we approach , and in this case the ’s are lower bounded by some constant along all rounds. This implies that will be proportional to .
(ii) Imagine that is smooth around . If in addition , this intuitively implies that the magnitudes and go to zero as we approach , and therefore ’s will also go to zero. This intuitively means that tends to a constant when tends to infinity. However, note that this behaviour can also be achieved by using an AdaGrad-like (Duchi et al., 2011) learning rate rule, i.e., . The reason that we employ the more complicated learning rate of Eq. (8) is in order to handle the case where is smooth yet . In this case, the norms of and will not decay as we approach ; nevertheless the norms of ’s will intuitively go to zero, implying tends to a constant. Thus, in a sense, our new learning rate rule can be seen as the appropriate adaptation of AdaGrad to the constrained case.

Guarantees.

We are now ready to state our guarantees. We show that when the monotone operator is smooth, then we minimize the duality gap in Eq. (2) at a fast rate of . Conversely, when the monotone operator is non-smooth, then we obtain a rate of . This is achieved without any prior knowledge regarding the smoothness of . The next result addresses the smooth case (we provide a proof sketch in Sec. 3.3; the full proof appears in App. A.3),

Theorem 3.1.

Assume that is -smooth, and -bounded. Then Alg. 1 used with the learning rate of Eq. (8) implies the following bound,

Recall that measures the quality of our prior knowledge regarding the actual bound on the norms of . Next we present our guarantees for the non-smooth case.

Theorem 3.2.

Assume that is -bounded. Alg. 1 used with the learning rate of Eq. (8) implies,

Up to logarithmic terms, we recover the results from Juditsky et al. (2011), with a potential extra factor , which is equal to if we know a bound on the norms of the values of (but we do not require this value to obtain the correct dependence in ). The proof of Thm. 3.2 appears in App. A.4.

3.3 Proof Sketch of Theorem 3.1

Proof.

We shall require the following simple identity (see Rakhlin and Sridharan (2013)),

Using the above with , together with , and using the -smoothness of gives,

Combining this inside the regret bound of Eq. (7) and re-arranging we obtain,

Regret (10)

where we have used, .
Let us define , and divide the last term of the regret as follows,

where in the second line we use which holds for ; implying that . Plugging the above back into Eq. (10) we obtain,

Regret (11)

Next we bound terms and above. To bound we will require the following lemma,

Lemma.

For any non-negative numbers , and , the following holds:

Recalling that   (see Eq. (8)), and also recalling that we can use the above lemma to bound term ,

(12)

where we have used the definition of which implies .

Bounding term :

In the full proof (Appendix A.3) we show that .

Conclusion:

Combining the bounds on and into Eq. (11) and using implies,

where we used the definition . Combining the above with the definition of and using Jensen’s inequality (recall is convex in its first argument), as well as with the fact that is a compatible gap function w.r.t.  concludes the proof. ∎

4 Stochastic Setting

In this section we present the stochastic variational inequality setting. Then we show that using the exact same universal Mirror-Prox algorithm (Alg. 1) that we have presented in the previous section, enables to provide the optimal guarantees for the stochastic setting. This is done without any prior knowledge regarding the smoothness or the stochasticity of the problem.

Setting.

The stochastic setting is similar to the deterministic setting that we have described in Sec. 2. The only difference is that we do not have an access to the exact values of . Instead, we assume that when querying a point we receive an unbiased noisy estimate of the exact monotone mapping . More formally, we have an access to an oracle , such that for any we have,

We also assume to have a bound on the dual norms of , i.e., almost surely, We are now ready to state our guarantees. Up to logarithmic terms, we recover the results from Juditsky et al. (2011) with an universal algorithm that does not need the knowledge of the various regularity constants. The first results regards the non-smooth noisy case.

Theorem 4.1.

Assume that we receive unbiased (noisy) estimates instead of inside Alg. 1. Then Alg. 1 used with the learning rate of Eq. (8) ensures the following,

Next we further assume a bound on the variance of , i.e., but we do not assume any prior knowledge of . The next theorem regards the smooth noisy case.

Theorem 4.2.

Assume that is -smooth, and assume that we receive unbiased (noisy) estimates instead of inside Alg. 1. Then Alg. 1 used with the learning rate of Eq. (8) ensures the following,

4.1 Proof Sketch of Theorem 4.1

Proof.

Let us denote by the noisy estimates of . Following the exact steps as in the proof of Theorem 3.2 implies the following holds w.p. 1,

Recalling the definition of , and using Jensen’s inequality implies that for any ,

(13)

where we denote, And clearly is a martingale difference sequence. Let . Taking and taking expectation over Eq. (4.1) gives,

To establish the proof we are left to show that . This is challenging since

, by its definition, is a random variable that may depend on

, implying that is not zero-mean. Nevertheless, we are able to make use of the martingale difference property of in order to bound . This is done using the following proposition,

Proposition.

Let be a convex set, and be a -strongly-convex function w.r.t. a norm over . Also assume that