Distributed forward-backward (half) forward algorithms for generalized Nash equilibrium seeking

10/30/2019 · by Barbara Franci, et al.

We present two distributed algorithms for the computation of a generalized Nash equilibrium in monotone games. The first algorithm follows from a forward-backward-forward operator splitting, while the second, which requires the pseudo-gradient mapping of the game to be cocoercive, follows from the forward-backward-half-forward operator splitting. Finally, we compare them with the distributed, preconditioned, forward-backward algorithm via numerical experiments.







I Introduction

Generalized Nash equilibrium problems have been widely studied in the literature [1, 2, 3, 4]. Such a strong interest is motivated by numerous applications ranging from economics to engineering [5, 6]. In a generalized Nash equilibrium problem (GNEP), each agent seeks to minimize its own cost function, under some coupled feasibility constraints. Both the cost function and the constraints depend on the strategies chosen by the other agents. Due to the presence of these shared constraints, the search for generalized Nash equilibria is usually quite a challenging task.

For the computation of GNE’s, diverse algorithms have been proposed, both distributed [7, 8], and semi-decentralized [4, 9]. When dealing with coupling constraints, a common principle is the focus on a special class of equilibria, to reflect some notion of fairness among the agents. This class is known as variational equilibria, see [10] or [4] for a survey. The attractive feature of variational equilibria is their deep relation with variational inequalities, and in turn with monotone inclusions, which allows one to exploit fixed-point iterations [11, 3], and tailor them to multi-agent equilibrium problems. A recent breakthrough along these lines is the distributed, preconditioned forward-backward (FB) algorithm conceived in [8] for strongly-monotone games. The key lesson from [8] is that the FB method cannot be directly applied to equilibrium problems with structure, such as GNEP’s; thus, a suitable preconditioning is necessary. From a technical perspective, the FB operator splitting requires the pseudo-gradient mapping of the game to be strongly monotone, or cocoercive, which is not always the case in monotone games, even with linear cost functions [12].

Motivated by these observations, in this paper, we propose two distributed algorithms based on operator splitting for computing a variational GNE in (cocoercive) monotone games. Specifically, without the additional assumption of a strongly monotone pseudo-gradient mapping, we propose a distributed forward-backward-forward (FBF) algorithm [13]. Second, under the assumption of a cocoercive pseudo-gradient mapping, we present a distributed forward-backward-half-forward (FBHF) algorithm [14]. Both our algorithms are fully distributed in the sense that each agent needs to know only its local cost function and its local feasible set, and there is no central coordinator that updates and broadcasts the dual variables. The latter is the main difference with semi-decentralized schemes for aggregative games [15, 9]. Moreover, our algorithms do not need a preconditioning procedure. Our main technical results are thus to show global convergence of these two algorithms to a variational GNE of the (strongly) monotone game, for suitable choices of the step sizes.

We emphasize that, compared with the FB and the FBHF algorithms, the FBF requires weaker assumptions to guarantee convergence, namely, mere (non-strong) monotonicity of the pseudo-gradient mapping. Computationally speaking, the main drawback of the FBF algorithm is that, at each iteration, it requires two evaluations of the pseudo-gradient mapping, which means that the agents should communicate at least twice per iteration. Compared with the FBF algorithm, our second proposal, the FBHF algorithm, is instead faster at each iteration, since it requires only one evaluation of the pseudo-gradient mapping. The FBHF algorithm is guaranteed to converge under the same strong-monotonicity assumption as the preconditioned FB proposed in [8]. From a computational perspective, the FBHF and the FB should perform similarly. In our numerical simulations, the FBHF algorithm shows faster convergence. Indeed, the convergence analysis shows that it can tolerate slightly larger step sizes.

II Notation

$\mathbb{R}$ indicates the set of real numbers and $\bar{\mathbb{R}} := \mathbb{R} \cup \{+\infty\}$. $\mathbf{0}$ and $\mathbf{1}$ are respectively the vectors of all zeros and all ones. The Euclidean inner product and norm are indicated with $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$, respectively. Let $\Phi$ be a symmetric, positive definite matrix, $\Phi \succ 0$. The induced inner product is $\langle x, y \rangle_\Phi := \langle \Phi x, y \rangle$, and the associated norm is $\|\cdot\|_\Phi$. We call $\mathcal{H}_\Phi$ the Hilbert space with norm $\|\cdot\|_\Phi$. Given a set $C \subseteq \mathbb{R}^n$, the normal cone mapping is defined as the operator $N_C(x) := \{v \in \mathbb{R}^n : \sup_{z \in C} \langle v, z - x \rangle \le 0\}$ if $x \in C$, and $N_C(x) := \varnothing$ if $x \notin C$. The identity mapping is denoted by $\mathrm{Id}$. Given a set-valued operator $A : \mathbb{R}^n \rightrightarrows \mathbb{R}^n$, the graph of $A$ is the set $\mathrm{gra}(A) := \{(x, u) : u \in A(x)\}$. The set of zeros is $\mathrm{zer}(A) := \{x : 0 \in A(x)\}$. The resolvent of a maximally monotone operator $A$ is the map $J_A := (\mathrm{Id} + A)^{-1}$, which is single-valued and firmly nonexpansive. Let $f : \mathbb{R}^n \to \bar{\mathbb{R}}$ be a proper, lower semi-continuous, convex function. We denote the subdifferential as the maximal monotone operator $\partial f : x \mapsto \{v \in \mathbb{R}^n : f(z) \ge f(x) + \langle v, z - x \rangle, \ \forall z\}$. The proximal operator is defined as $\mathrm{prox}_f(x) := \mathrm{argmin}_u \{ f(u) + \tfrac{1}{2}\|u - x\|^2 \}$. For a linear operator $T$, the operator norm is defined as $\|T\| := \sup_{\|x\| \le 1} \|Tx\|$.
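As a concrete illustration of the notation above (our example, not from the paper): for the normal cone $N_C$ of a closed convex set $C$, the resolvent $J_{N_C} = (\mathrm{Id} + N_C)^{-1}$ is exactly the Euclidean projection onto $C$, and it is firmly nonexpansive. A minimal numerical sketch for a box set:

```python
import numpy as np

# Illustrative sketch (not from the paper): the resolvent of the normal
# cone N_C of a closed convex set C is the Euclidean projection onto C.
# Here C is the box [0, 1]^n, for which the projection is a clip.

def resolvent_normal_cone_box(x, lo=0.0, hi=1.0):
    """J_{N_C}(x) = proj_C(x) for the box C = [lo, hi]^n."""
    return np.clip(x, lo, hi)

x = np.array([-2.0, 0.3, 5.0])
y = np.array([1.0, 0.5, -1.0])
Jx, Jy = resolvent_normal_cone_box(x), resolvent_normal_cone_box(y)

# Firm nonexpansiveness: ||Jx - Jy||^2 <= <Jx - Jy, x - y>.
assert np.dot(Jx - Jy, Jx - Jy) <= np.dot(Jx - Jy, x - y) + 1e-12
```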

III Mathematical Setup: The Monotone Game and Variational Generalized Nash Equilibria

We consider a game with $N$ agents, where each agent $i \in \mathcal{I} := \{1, \dots, N\}$ should choose an action $x_i \in \mathbb{R}^{n_i}$ from its local decision set $\Omega_i \subseteq \mathbb{R}^{n_i}$. Let us define the product space $\Omega := \Omega_1 \times \dots \times \Omega_N$ and $n := \sum_{i=1}^N n_i$. Each agent has a local cost function of the form

$$J_i(x_i, x_{-i}) := f_i(x_i, x_{-i}) + g_i(x_i), \qquad (1)$$

where $x_{-i}$ is the vector of all decision variables except for $x_i$. The function in (1) has the typical splitting into smooth and non-smooth parts. We assume that the non-smooth part is captured by the function $g_i$, which can model not only a local cost, but also local constraints.

Standing Assumption 1 (Local cost)

For each $i \in \mathcal{I}$, the function $g_i$ in (1) is lower semicontinuous and convex. For each $i \in \mathcal{I}$, $\Omega_i$ is a closed set.

By convexity of $g_i$, its domain is convex as well. A classical example for the local cost function is the indicator function of the local feasible set, i.e., $g_i = \iota_{\Omega_i}$, where $\iota_{\Omega_i}(x_i) = 0$ if $x_i \in \Omega_i$, and $+\infty$ otherwise. Other examples are regularizer functions, that promote sparsity, and penalty functions, as used in statistics and signal processing [16].
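A hedged sketch of the two examples just mentioned (our code, not the paper's): if $g_i$ is the indicator of $\Omega_i$, its proximal operator is the projection onto $\Omega_i$; if $g_i$ is a sparsity-promoting $\ell_1$ regularizer, its proximal operator is componentwise soft-thresholding.

```python
import numpy as np

# Sketch (ours): two common choices for the non-smooth local cost g_i.
# Indicator of a box -> prox is a projection (clip);
# lam * ||.||_1      -> prox is componentwise soft-thresholding.

def prox_indicator_box(v, lo, hi):
    """prox of the indicator of the box [lo, hi]^n: projection."""
    return np.clip(v, lo, hi)

def prox_l1(v, lam):
    """prox of lam*||.||_1: soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

v = np.array([1.5, -0.2, 0.7])
assert np.allclose(prox_indicator_box(v, 0.0, 1.0), [1.0, 0.0, 0.7])
assert np.allclose(prox_l1(v, 0.5), [1.0, 0.0, 0.2])
```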

For the function $f_i$ in (1), we assume convexity and differentiability, as usual in the GNEP literature [4].

Standing Assumption 2 (Local convexity)

For each $i \in \mathcal{I}$ and for all $x_{-i}$, the function $f_i(\cdot, x_{-i})$ in (1) is convex and continuously differentiable.

Next, we introduce the shared constraints that couple the actions of the agents and that we assume to be affine. Specifically, we define the collective feasible set

$$\boldsymbol{X} := \{ x \in \Omega : A x \le b \}, \qquad (2)$$

where $A := [A_1, \dots, A_N] \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$. Effectively, each matrix $A_i \in \mathbb{R}^{m \times n_i}$ defines how agent $i$ is involved in the coupling constraints, thus we consider it to be private information of agent $i$. Then, for each agent $i$, given the strategies of all other agents $x_{-i}$, the feasible decision set is

$$\mathcal{X}_i(x_{-i}) := \{ y \in \Omega_i : A_i y \le b - \textstyle\sum_{j \ne i} A_j x_j \}. \qquad (3)$$
In order to perform a primal-dual analysis later on, let us assume convexity and regularity.

Standing Assumption 3

(Constraint qualification) The set $\boldsymbol{X}$ in (2) satisfies Slater’s constraint qualification.

We are now ready to formalize the solution concept adopted in this paper. Specifically, the aim of each agent $i$ is to solve its local optimization problem, i.e.,

$$\forall i \in \mathcal{I} : \quad \min_{x_i \in \mathbb{R}^{n_i}} \ J_i(x_i, x_{-i}) \quad \text{s.t.} \quad x_i \in \mathcal{X}_i(x_{-i}). \qquad (4)$$
Namely, each agent seeks to find his best possible decision, given the decisions of the other agents. Thus, the solution concept for such a competitive scenario is the generalized Nash equilibrium [1], [4].

Definition 1

(Generalized Nash equilibrium) A collective strategy $x^*$ is a generalized Nash equilibrium of the game in (4) if, for all $i \in \mathcal{I}$,
$$J_i(x_i^*, x_{-i}^*) \le \inf \{ J_i(y, x_{-i}^*) : y \in \mathcal{X}_i(x_{-i}^*) \}.$$

In other words, a GNE is a set of decision variables where no agent can decrease its cost by unilaterally deviating from its own strategy.

Next, to decouple the coupling constraints, we rewrite the local optimization problems via a primal-dual analysis. For each agent $i$, given the strategies of the other agents $x_{-i}$, we define its Lagrangian function as

$$L_i(x, \lambda_i) := J_i(x_i, x_{-i}) + \lambda_i^\top (A x - b), \qquad (5)$$

where $\lambda_i \in \mathbb{R}^m_{\ge 0}$ is the dual variable associated with the coupling constraints $A x \le b$.

Under our constraint qualification, the Karush–Kuhn–Tucker (KKT) theorem ensures the existence of a pair $(x_i^*, \lambda_i^*)$, which depends on $x_{-i}^*$, such that the following inclusions hold:

$$\begin{cases} 0 \in \partial_{x_i} J_i(x_i^*, x_{-i}^*) + A_i^\top \lambda_i^* \\ 0 \in N_{\mathbb{R}^m_{\ge 0}}(\lambda_i^*) - (A x^* - b). \end{cases} \qquad (6)$$

We recall that the stationarity conditions and the complementary slackness conditions can be efficiently written as a parallel inclusion. Thanks to the sum rule of the subgradient for Lipschitz continuous functions [17, §1.8], we can write the subgradient of agent $i$ as $\partial_{x_i} J_i(x_i, x_{-i}) = \nabla_{x_i} f_i(x_i, x_{-i}) + \partial g_i(x_i)$. Then, (6) can be equivalently written as

$$\begin{cases} 0 \in \nabla_{x_i} f_i(x_i^*, x_{-i}^*) + \partial g_i(x_i^*) + A_i^\top \lambda_i^* \\ 0 \in N_{\mathbb{R}^m_{\ge 0}}(\lambda_i^*) - (A x^* - b). \end{cases} \qquad (7)$$
We conclude the section by postulating a standard assumption for GNEP’s [4] and inclusion problems in general [11], that is, the monotonicity and Lipschitz continuity of the mapping that collects the partial gradients $\nabla_{x_i} f_i$.

Standing Assumption 4 (Monotonicity)

The mapping

$$F(x) := \mathrm{col}\big( \nabla_{x_1} f_1(x_1, x_{-1}), \dots, \nabla_{x_N} f_N(x_N, x_{-N}) \big) \qquad (8)$$

is monotone, i.e., for all $x, y \in \mathbb{R}^n$,
$$\langle F(x) - F(y), x - y \rangle \ge 0,$$
and $\ell$-Lipschitz continuous, $\ell > 0$, i.e., for all $x, y \in \mathbb{R}^n$,
$$\| F(x) - F(y) \| \le \ell \, \| x - y \|.$$
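For intuition (our example, not part of the paper): a linear pseudo-gradient mapping of the form F(x) = Mx is monotone if and only if the symmetric part of M is positive semidefinite, and it is Lipschitz continuous with constant equal to the spectral norm of M. A quick numerical check:

```python
import numpy as np

# Sketch (ours): for a linear pseudo-gradient F(x) = M x, monotonicity
# is equivalent to M + M^T being positive semidefinite, and F is
# Lipschitz with constant ||M|| (the spectral norm). The skew part of
# M contributes nothing to <F(x) - F(y), x - y>.

rng = np.random.default_rng(0)
S = rng.standard_normal((4, 4))
skew = np.zeros((4, 4))
skew[0, 1], skew[1, 0] = 1.0, -1.0
M = S @ S.T + skew                      # PSD symmetric part + skew part

F = lambda x: M @ x
ell = np.linalg.norm(M, 2)              # Lipschitz constant of F

x, y = rng.standard_normal(4), rng.standard_normal(4)
assert (F(x) - F(y)) @ (x - y) >= -1e-12                          # monotone
assert np.linalg.norm(F(x) - F(y)) <= ell * np.linalg.norm(x - y) + 1e-12
```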

Among all possible GNE of the game, we follow the traditional approach and focus on the subset of so-called variational GNE (v-GNE) [4, Def. 3.10], namely, primal strategies that solve the KKT systems in (7) with the same Lagrange multiplier $\lambda^* = \lambda_1^* = \dots = \lambda_N^*$ [18, Th. 3.1], [19, Th. 3.1]:

$$\begin{cases} 0 \in F(x^*) + \partial g_1(x_1^*) \times \dots \times \partial g_N(x_N^*) + A^\top \lambda^* \\ 0 \in N_{\mathbb{R}^m_{\ge 0}}(\lambda^*) - (A x^* - b). \end{cases} \qquad (9)$$
IV Distributed Generalized Nash equilibrium seeking via Operator Splitting

In this section, we present the proposed distributed algorithms. We allow each agent to have information on its own local problem data only. We let each agent $i$ control its local decision $x_i$, a local copy $\lambda_i$ of the dual variable, as well as a local auxiliary variable $z_i$ used to enforce consensus of the dual variables. Since each cost function depends on the decision variables of other agents, we indicate with $\mathcal{N}_i^J$ the set of agents $j$ such that $f_i$ depends explicitly on $x_j$.

We also let the agents exchange information about their local dual variables via a communication graph $\mathcal{G}^\lambda = (\mathcal{I}, \mathcal{E})$. Specifically, we consider an undirected graph with vertex set $\mathcal{I}$ and edge set $\mathcal{E}$ representing the exchange of information among agents on the private dual variables. An edge $(i, j) \in \mathcal{E}$ is present if agent $i$ can receive $\lambda_j$ from agent $j$. The set of neighbours of agent $i$ in the graph is $\mathcal{N}_i^\lambda := \{ j : (i, j) \in \mathcal{E} \}$. We characterize the communication graph by a weighted adjacency matrix $W = [w_{ij}] \in \mathbb{R}^{N \times N}$. To each active edge $(i, j)$ in the communication graph we attach a weight $w_{ij} > 0$, otherwise we set $w_{ij} = 0$.

Standing Assumption 5 (Graph connectivity)

The communication graph is undirected and connected.

Given this assumption, it follows that the weighted adjacency matrix $W$ is symmetric and irreducible. Define the weighted Laplacian as the matrix $L := D - W$, where $D := \mathrm{diag}(d_1, \dots, d_N)$ and $d_i := \sum_{j=1}^N w_{ij}$ is the weighted degree of agent $i$. It holds that $L = L^\top$ and that, given Standing Assumption 5, $L$ is positive semi-definite with real eigenvalues $0 = \lambda_1(L) < \lambda_2(L) \le \dots \le \lambda_N(L)$. Moreover, given the maximum weighted degree of the graph, $d^* := \max_i d_i$, it holds that $\| L \| \le 2 d^*$. We define the tensorized Laplacian as the matrix $\boldsymbol{L} := L \otimes I_m$, for which $\| \boldsymbol{L} \| = \| L \|$.
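The graph quantities above can be checked numerically; the following is our own small sketch (the specific graph is an arbitrary example, not the one used in the paper).

```python
import numpy as np

# Sketch (ours): a symmetric weighted adjacency matrix W of a connected
# undirected graph, its weighted Laplacian L = D - W with
# D = diag(W @ 1), and checks that 1 is in the kernel of L and that L
# is positive semidefinite, as stated in the text.

W = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])            # complete graph on 3 nodes
L = np.diag(W.sum(axis=1)) - W          # weighted Laplacian

eigvals = np.linalg.eigvalsh(L)         # ascending real eigenvalues
ones = np.ones(3)
assert np.allclose(L @ ones, 0.0)       # L @ 1 = 0
assert eigvals[0] > -1e-9               # positive semidefinite
assert eigvals[1] > 1e-9                # connected: 0 is a simple eigenvalue
```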

Let and define


Let us also define the operator , and the set-valued operator


Let us summarize the properties of the operators above.

Lemma 1

The following statements hold:

  • is maximally monotone and -Lipschitz continuous.

  • is maximally monotone and -Lipschitz continuous.

  • is maximally monotone and -Lipschitz continuous.

  • is maximally monotone.

(i) The operator is maximally monotone being the direct sum of the maximally monotone operator and the gradient of the convex function [11, Prop. 20.23]. Furthermore, given and , it holds that


showing that is -Lipschitz continuous.
(ii) The operator

is skew-symmetric, and therefore maximally monotone

[11, Cor. 20.28]. By a computation similar to (12), it can be shown that is -Lipschitz, with a constant depending on the matrices and .
(iii) The operator is maximally monotone since it is the sum of two maximally monotone operators [11, Prop. 20.23]. It is Lipschitz continuous, being the sum of Lipschitz continuous operators.
(iv) The operator is maximally monotone by [8, Lem. 5]. It follows that the sum is maximally monotone [11, Prop. 20.23].

The following result holds for monotone operators and it will be recalled later on.

Lemma 2

Let $\Phi \succ 0$ and let $A$ be a monotone operator; then $\Phi^{-1} A$ is monotone in the Hilbert space $\mathcal{H}_\Phi$.

It follows from the definition of the inner product $\langle \cdot, \cdot \rangle_\Phi$: for all $x, y$,
$$\langle \Phi^{-1} A x - \Phi^{-1} A y, x - y \rangle_\Phi = \langle A x - A y, x - y \rangle \ge 0.$$

Now, given the operators , and as in (10) and (11), we show that the zeros of the sum are v-GNE of the game in (4).

Theorem 1

The set is the set of v-GNE of the game in (4). It holds that , thus the game in (4) has a GNE.

Let . Existence follows from [11, Prop. 23.36]. To show that elements of are v-GNE, we proceed as follows. Let , i.e. . Writing out this condition explicitly gives

The second condition implies that for some . Thus, for all . Summing the third condition over all the agents gives the complementary slackness condition. Therefore, the pair is a v-GNE.

From now on, the triplet $(x^k, z^k, \lambda^k)$ defines the state variable of the distributed algorithms we describe later on. The following notation is used: $x^k$ is the action profile at iteration $k$, $z^k$ indicates the auxiliary consensus-enforcing variable and $\lambda^k$ is the local dual variable.

IV-A Forward-backward operator splitting

The aim of this section is to revisit the distributed, preconditioned forward-backward (FB) splitting algorithm of [8] for the computation of a v-GNE, see Algorithm 1.

Initialization: and
Iteration : Agent
() Receives for for , then updates

() Receives for , then updates

Algorithm 1 Preconditioned Forward Backward

Given , the FB algorithm can be written as fixed-point iteration of the form



and is the preconditioning matrix defined as


The matrices


collect the step sizes of the primal, the auxiliary and the dual variables, respectively. By choosing the step sizes appropriately, the preconditioning matrix can be made positive definite [7]. The FB algorithm is known to converge to a solution of a monotone inclusion when both operators are maximally monotone and the single-valued operator is cocoercive [11, Thm. 26.14]. Thus, the pseudo-gradient mapping in (8) should satisfy the following assumption.

Assumption 1 (Strong monotonicity)

There exists $\mu > 0$ such that, for all $x, y \in \mathbb{R}^n$,
$$\langle F(x) - F(y), x - y \rangle \ge \mu \, \| x - y \|^2.$$

To ensure the cocoercivity condition, we refer to the following result.

Lemma 3

[8, Lem. 5 and Lem. 7] Let and as in (8) satisfy Assumption 1. Then, the following hold:

  • is -cocoercive with .

  • is -cocoercive with .

We recall that convergence to a v-GNE has been demonstrated in [8, Th. 3], if the step sizes in (14) are chosen small enough [8, Lem. 6].
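To fix ideas, the forward-backward update can be sketched in its simplest centralized form (our simplified example; the paper's Algorithm 1 is the preconditioned, distributed version): one forward step along the cocoercive single-valued operator, followed by one backward (resolvent) step.

```python
import numpy as np

# Generic forward-backward sketch (ours, simplified): find a zero of
# A + B via x^{k+1} = J_{gamma A}(x^k - gamma B(x^k)). Here
# B = grad of f(x) = 0.5*||x - c||^2 (1-cocoercive) and A = N_C for
# the box C = [0, 1]^n, so J_{gamma A} is the projection onto C and
# the unique zero of A + B is proj_C(c).

c = np.array([2.0, -1.0, 0.4])
B = lambda x: x - c                    # cocoercive forward operator
J = lambda x: np.clip(x, 0.0, 1.0)     # backward step: resolvent of N_C

x, gamma = np.zeros(3), 0.5            # step size in (0, 2*beta), beta = 1
for _ in range(100):
    x = J(x - gamma * B(x))            # forward, then backward

assert np.allclose(x, [1.0, 0.0, 0.4])  # = proj_C(c)
```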

IV-B Forward-backward-forward splitting

In this section, we propose our distributed forward-backward-forward (FBF) scheme, Algorithm 2.

Initialization: and
Iteration : Agent
() Receives for , and for then updates

() Receives for , and for then updates

Algorithm 2 Distributed Forward Backward Forward

In compact form, the FBF algorithm generates two sequences as follows:


In (16), is a block-diagonal matrix that contains the step sizes:


with , and being diagonal matrices as in (15).

We recall that is single-valued, maximally monotone and Lipschitz continuous by Lemma 1. Each iteration differs from the scheme in (13) by one additional forward step and the fact that the resolvent is now defined in terms of the maximal monotone operator only. Writing the coordinates as and , the iterates explicitly read as Algorithm 2.

FBF operates on the splitting and it can be compactly written as the fixed-point iteration


where the mapping is defined as


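The mechanics of Tseng's forward-backward-forward update can be illustrated on a toy monotone operator (our simplified, centralized example; Algorithm 2 is the distributed version). With a merely monotone, Lipschitz operator such as a rotation, the plain forward iteration diverges, while the FBF correction step restores convergence for a small enough step size.

```python
import numpy as np

# Sketch (ours, simplified) of Tseng's FBF: with B monotone and
# ell-Lipschitz but NOT cocoercive, e.g. the skew rotation
# B(x) = [[0, 1], [-1, 0]] x, iterate
#   x_bar = J_{gamma A}(x - gamma*B(x))      (here A = 0, so J = Id)
#   x_new = x_bar + gamma*(B(x) - B(x_bar))
# which converges to zer(B) = {0} for gamma < 1/ell.

R = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = lambda x: R @ x
gamma = 0.5                                   # < 1/ell, with ell = 1

x = np.array([1.0, 1.0])
for _ in range(200):
    x_bar = x - gamma * B(x)                  # forward + (trivial) backward
    x = x_bar + gamma * (B(x) - B(x_bar))     # second forward (correction)

assert np.linalg.norm(x) < 1e-6               # FBF finds the zero

# The plain forward iteration on the same B blows up instead.
y = np.array([1.0, 1.0])
for _ in range(200):
    y = y - gamma * B(y)
assert np.linalg.norm(y) > 1e3
```

Note how each FBF iteration evaluates B twice, mirroring the two communication rounds per iteration discussed below for Algorithm 2.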
We are now ready to prove the convergence of our proposed Algorithm 2 to a v-GNE of the game in (4).

Assumption 2

, with as in (17) and being the Lipschitz constant of as in Lemma 1.

Theorem 2

Let Assumption 2 hold. The sequence generated by Algorithm 2 converges to , thus the primal variable converges to a v-GNE of the game in (4).

The fixed-point iteration in (18) with as in (19) can be derived from (16) by substituting in the second line. Then, writing explicitly the iterations of (16) and solving for and we obtain Algorithm 2. Therefore, Algorithm 2 is the fixed-point iteration in (18). Then, the sequence generated by Algorithm 2 converges to a v-GNE by [11, Th. 26.17] and [13, Th. 3.4], since is monotone by Lemma 2 and is maximally monotone by Lemma 1.

We emphasize that Algorithm 2 does not require strong monotonicity (Assumption 1) of the pseudo-gradient mapping in (8). Moreover, we note that the FBF algorithm requires two evaluations of the individual gradients. In the formulation of the algorithm, this means that we have to compute the operator twice per iteration. At the level of the individual agents, this means that we need two communication rounds per iteration in order to exchange the necessary information. Compared with the FB algorithm, the non-strong monotonicity assumption comes at the price of increased communications at each iteration.

IV-C Forward-backward-half-forward splitting

Should the strong monotonicity condition (Assumption 1) be satisfied, an alternative to the FB is the forward-backward-half-forward (FBHF) operator splitting, developed in [14]. Let us then propose our second GNE seeking algorithm, that is, the distributed FBHF in Algorithm 3.

Initialization: and
Iteration : Agent
() Receives for , and for then updates

() Receives and for then updates

Algorithm 3 Distributed Forward Backward Half Forward

In compact form, the FBHF algorithm reads as the iteration


We note that the iterates of FBHF are similar to those of the FBF, but the second forward step requires the operator only. More simply, we can write the FBHF as




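The FBHF mechanics can be sketched on a toy problem (our simplified, centralized example; Algorithm 3 is the distributed version): the single-valued part is split into a cocoercive operator and a monotone Lipschitz operator, and only the latter is re-evaluated in the second, "half" forward step.

```python
import numpy as np

# Sketch (ours, simplified) of FBHF: split the single-valued part into
# a cocoercive B1 and a monotone Lipschitz B2; only B2 is re-evaluated
# in the half-forward step. Here A = 0, B1(x) = x - c (1-cocoercive)
# and B2(x) = R x with R a rotation, so the unique zero of B1 + B2 is
# x* = (I + R)^{-1} c.

R = np.array([[0.0, 1.0], [-1.0, 0.0]])
c = np.array([1.0, 2.0])
B1 = lambda x: x - c
B2 = lambda x: R @ x
gamma = 0.4                                   # small enough step size

x = np.zeros(2)
for _ in range(100):
    x_bar = x - gamma * (B1(x) + B2(x))       # full forward (+ trivial backward)
    x = x_bar + gamma * (B2(x) - B2(x_bar))   # half forward: only B2 again

assert np.allclose(x, np.linalg.solve(np.eye(2) + R, c))
```

Per iteration, B1 is evaluated once and only the cheap Lipschitz part B2 twice, which is the computational advantage over FBF noted in the text.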
Also in this case, we have a bound on the admissible step sizes.

Assumption 3

, with as in Lemma 3 and as in Lemma 1.

We note that in Assumption 3, the step sizes in can be chosen larger compared to those in Assumption 2, since the upper bound is related to the Lipschitz constant of the operator , not of as for the FBF (Assumption 2). A similar comparison can be done with respect to the FB algorithm. Intuitively, larger step sizes should be beneficial in terms of convergence speed.

We can now establish our convergence result for the FBHF algorithm.

Theorem 3

Let Assumptions 1 and 3 hold. The sequence generated by Algorithm 3 converges to , thus the primal variable converges to a v-GNE of the game in (4).

The fixed-point iteration in (21) with as in (22) corresponds to the scheme in (20) using the definition of . Expanding the iterations in (20) with as in (17) and solving for and we obtain exactly the steps in Algorithm 3. Therefore, Algorithm 3 is the fixed-point iteration in (21), whose convergence is guaranteed by [14, Th. 2.3] because is cocoercive by Lemma 3.

V Case study and numerical simulations

We consider a networked Cournot game with market capacity constraints [8], with $N$ companies that operate over a set of $m$ markets. Each company $i$ decides the quantity $x_i$ of product to deliver in the markets it is connected with. Each company has a local cost function $c_i(x_i)$ related to the production process. Each market has a bounded capacity, so that the collective constraints are given by $A x \le b$, where $A = [A_1, \dots, A_N]$ and $A_i$ specifies in which markets company $i$ participates. Each market has a price, collected in the mapping $p : \mathbb{R}^m \to \mathbb{R}^m$, which is supposed to be a linear function. The cost function of each agent then reads as the production cost minus the revenue, $J_i(x_i, x_{-i}) = c_i(x_i) - p(Ax)^\top A_i x_i$. Clearly, if $c_i$ is strongly convex with Lipschitz continuous gradient and the prices are linear, the pseudo-gradient mapping of the game is strongly monotone.

V-A Numerical example

As a numerical setting, we consider a set of 20 companies and 7 markets, similarly to [8]. Each company has a local constraint where each component of is randomly drawn from . The maximal capacity of each market is , randomly drawn from . The local cost function of company is , where indicates the component of . For all , is randomly drawn from , and the components of are randomly drawn from . Notice that is strongly convex with Lipschitz continuous gradient.
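A data-generation sketch for this experiment follows. The sampling intervals in the paragraph above were lost in extraction, so every numeric range below is our placeholder assumption, not the paper's; only the counts (20 companies, 7 markets) come from the text.

```python
import numpy as np

# Sketch (ours): random Cournot data in the spirit of the setting above.
# All numeric intervals are assumed placeholders, not the paper's values.

rng = np.random.default_rng(1)
N, m = 20, 7                                  # companies and markets (from the text)

n_i = rng.integers(1, m + 1, size=N)          # markets each company serves (assumed)
A = [np.eye(m)[:, rng.choice(m, k, replace=False)]  # participation matrices A_i
     for k in n_i]
b = rng.uniform(1.0, 2.0, size=m)             # market capacities (assumed interval)

# Strongly convex quadratic production costs c_i(x_i) = q_i*||x_i||^2 + r_i' x_i.
q = rng.uniform(1.0, 8.0, size=N)             # curvature (assumed interval)
r = [rng.uniform(0.1, 1.0, size=k) for k in n_i]

assert all(A[i].shape == (m, n_i[i]) for i in range(N))
assert all(qi > 0 for qi in q)                # each c_i is strongly convex
```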

The price is taken as a linear function where each component of is randomly drawn from while the entries of are randomly drawn from . Recall that the cost function of company is influenced by the variables of the agents selling in the same market. Such information can be retrieved from the network graph depicted in Fig. 4. The communication graph for the dual variables is a cycle graph with the addition of two further edges.

As local cost functions $g_i$, we use indicator functions; in this way, the proximal step is a projection onto the local constraint sets.

The aim of these simulations is to compare the proposed schemes. The step sizes are taken differently for every algorithm. In particular, we take , and as in [8, Lem. 6], , and