I Introduction
Generalized games model the interaction between selfish decision makers, or agents, that aim at optimizing their individual, but interdependent, objective functions, subject to shared constraints. This competitive scenario has received increasing attention with the spread of networked systems, due to the numerous engineering applications, including demand response in competitive markets [1], demand-side management in the smart grid [2], charging/discharging of electric vehicles [3] and radio communication [4]. From a game-theoretic perspective, the challenge is to assign the agents behavioral rules that eventually ensure the attainment of an equilibrium. In fact, a recent part of the literature focuses on designing distributed algorithms to seek a generalized Nash equilibrium (GNE), a joint action from which no agent has an interest to unilaterally deviate [5], [6], [7], [8]. In the cited works, the computational effort is partitioned among the agents, but under the assumption that each of them has access to the decisions of all the competitors (or to an aggregation value, in the case of aggregative games). Such a hypothesis, referred to as full-decision information, requires the presence of a central coordinator that can communicate with all the agents, and might be impractical in some domains [9], [10]. One example is the Nash-Cournot competition model described in [11], where the profit of each of a group of firms depends not only on its own production, but also on the total supply, a quantity not directly accessible by any of the firms. A solution is offered by fully-distributed algorithms that rely on peer-to-peer communication only. Specifically, we consider the so-called partial-decision information scenario, where the agents agree on sharing their strategies with some neighbors over a network; based on the exchanged information, they can estimate and eventually reconstruct the actions of all the competitors.
The partial-decision information setup has been studied only recently. A number of approaches have been proposed for non-generalized games (i.e., in the absence of coupling constraints) [11], [12], [13], [14]. Instead, fewer works deal with the presence of shared constraints, even though this is a significant extension, which arises naturally when the agents compete for common resources [5, §2]. For example, in the Nash-Cournot model described above, the overall production of the firms is bounded by the market capacity. Of particular interest for this paper is the technique in [15], where the GNE problem is reformulated as that of finding a zero of a monotone operator. Indeed, the operator-theoretic approach is very elegant and convenient: several splitting methods are already well established to solve monotone inclusions, and the properties of fixed-point iterations are well understood [16, §26], thus providing a unified framework to design algorithms and study their convergence. For instance, a fully-distributed method for aggregative games with affine coupling constraints is proposed in [17], based on a preconditioned forward-backward splitting [16, §26.5]. The authors of [18] exploit results on fixed-point iterations with errors [19] to solve generalized aggregative games on time-varying networks. All the aforementioned formulations resort to (projected) gradient and consensus dynamics, and are single-layer (i.e., they require a fixed number of communications per iteration). As a drawback, due to the partial-decision information assumption, theoretical guarantees are obtained only for small (or vanishing) step sizes, which significantly affects the speed of convergence. Alternatively, the work [20] presents a proximal-point algorithm (PPA) to solve (merely monotone) GNE problems, possibly under partial information, but it requires an increasing number of communications at each step. Similarly, double-layer proximal best-response dynamics are designed in [21] for stochastic games.
However, the extensive communication required may be a performance bottleneck, both when a large number of iterations is needed to converge and when the agents have to send information multiple times per time step. In fact, the communication time can overwhelm the time spent on useful local processing; e.g., this is a common problem in parallel computing [22]. Even neglecting the time lost in transmission, sending large volumes of data over wireless networks results in an increased energy cost.
Contributions: To improve speed and efficiency, we design fast, single-layer, fixed-step, fully-distributed algorithms to solve GNE problems with affine coupling constraints, in a partial-decision information scenario. Our contributions are summarized as follows:


We derive a novel GNE seeking preconditioned proximal-point algorithm (PPPA), with convergence guarantees under strong monotonicity and Lipschitz continuity of the game mapping. Convergence holds even if the proximal operator is computed inexactly (with summable errors). Our analysis relies on fixed-point iterations and exploits a restricted monotonicity property. Thanks to the use of a novel preconditioning matrix, our algorithm is fully-distributed and requires only one communication per iteration. To the best of our knowledge, our scheme is the first non-gradient-based, single-layer (G)NE seeking method for the partial-decision information setup (§III-IV);

We tailor our method to efficiently solve aggregative games. Specifically, we design a single-layer GNE seeking PPPA where the agents only keep and exchange an estimate of the aggregative value, instead of an estimate of all the other agents' actions (§V);

Via numerical simulations, we compare our approach to the pseudogradient method in [15], which is the only other known fully-distributed, single-layer, fixed-step GNE seeking algorithm (excluding that in [17], for the special class of aggregative games). Our simulations show that our PPPA significantly outperforms the method in [15] in terms of the number of iterations needed to converge, thus considerably reducing the communication burden, at the price of locally solving a strongly convex optimization problem, rather than performing a projection, at each time step. Moreover, our scheme only requires one communication per iteration, instead of two (§VII).
Basic notation: $\mathbb{N}$ denotes the set of natural numbers, including $0$. $\mathbb{R}$ ($\mathbb{R}_{\geq 0}$) is the set of (nonnegative) real numbers. $\mathbf{1}_n$ ($\mathbf{0}_n$) denotes the vector of dimension $n$ with all elements equal to $1$ ($0$); $I_n$ denotes the identity matrix of dimension $n$; the subscripts might be omitted when there is no ambiguity. For a matrix $A$, its transpose is $A^\top$; $[A]_{i,j}$ represents the element on the $i$-th row and $j$-th column. $\mathrm{null}(A)$ and $\mathrm{range}(A)$ are the null space and image of $A$, respectively. $\otimes$ denotes the Kronecker product. $\sigma_{\max}(A)$ is the largest singular value of $A$; $\|A\|_\infty$ is the maximum of the absolute row sums of $A$. $A \succ 0$ stands for a symmetric positive definite matrix. Given $P \succ 0$, $\langle x, y \rangle_P := x^\top P y$ denotes the $P$-induced inner product of the vectors $x$ and $y$, and $\|x\|_P := \sqrt{x^\top P x}$ denotes the $P$-induced norm of the vector $x$; we omit the subscript if $P = I$. If $A$ is symmetric, $\lambda_{\min}(A) =: \lambda_1(A) \leq \dots \leq \lambda_n(A) := \lambda_{\max}(A)$ denote its eigenvalues. $\mathrm{diag}(A_1, \dots, A_N)$ denotes the block diagonal matrix with $A_1, \dots, A_N$ on its diagonal. Given $N$ vectors $x_1, \dots, x_N$, $\mathrm{col}(x_1, \dots, x_N) := [x_1^\top, \dots, x_N^\top]^\top$. For a differentiable function $g$, $\nabla g$ denotes its gradient. $\ell_1$ is the set of absolutely summable sequences.

Operator-theoretic background: For a function $\psi : \mathbb{R}^n \to \overline{\mathbb{R}} := \mathbb{R} \cup \{\infty\}$, $\mathrm{dom}(\psi) := \{x \in \mathbb{R}^n \mid \psi(x) < \infty\}$. The mapping $\iota_S : \mathbb{R}^n \to \overline{\mathbb{R}}$ denotes the indicator function for the set $S \subseteq \mathbb{R}^n$, i.e., $\iota_S(x) = 0$ if $x \in S$, $\infty$ otherwise. A set-valued mapping (or operator) $\mathcal{A} : \mathbb{R}^n \rightrightarrows \mathbb{R}^n$ is characterized by its graph $\mathrm{gra}(\mathcal{A}) := \{(x, u) \mid u \in \mathcal{A}(x)\}$. $\mathrm{dom}(\mathcal{A})$, $\mathrm{fix}(\mathcal{A})$ and $\mathrm{zer}(\mathcal{A})$ denote the domain, set of fixed points and set of zeros of $\mathcal{A}$, respectively. $\mathcal{A}^{-1}$ denotes the inverse operator of $\mathcal{A}$, defined through its graph as $\mathrm{gra}(\mathcal{A}^{-1}) := \{(u, x) \mid (x, u) \in \mathrm{gra}(\mathcal{A})\}$. $\mathcal{A}$ is ($\mu$-strongly) monotone if $\langle u - v, x - y \rangle \geq 0$ ($\geq \mu \|x - y\|^2$) for all $(x, u), (y, v) \in \mathrm{gra}(\mathcal{A})$. $\mathrm{Id}$ denotes the identity operator. For a function $\psi$, $\partial \psi$ denotes its subdifferential operator, defined as $\partial \psi(x) := \{v \in \mathbb{R}^n \mid \psi(z) \geq \psi(x) + \langle v, z - x \rangle, \ \forall z \in \mathrm{dom}(\psi)\}$; if $\psi$ is differentiable and convex, its subdifferential operator is its gradient. $\mathrm{N}_S$ denotes the normal cone operator for the set $S$, i.e., $\mathrm{N}_S(x) = \emptyset$ if $x \notin S$, $\{v \mid \sup_{z \in S} \langle v, z - x \rangle \leq 0\}$ otherwise. If $S$ is closed and convex, it holds that $\partial \iota_S = \mathrm{N}_S$, and $(\mathrm{Id} + \mathrm{N}_S)^{-1} = \mathrm{proj}_S$ is the Euclidean projection onto the set $S$. $\mathrm{J}_{\mathcal{A}} := (\mathrm{Id} + \mathcal{A})^{-1}$ denotes the resolvent operator of $\mathcal{A}$. A single-valued operator $T : \mathbb{R}^n \to \mathbb{R}^n$ is firmly nonexpansive if $\|T(x) - T(y)\|^2 \leq \|x - y\|^2 - \|(\mathrm{Id} - T)(x) - (\mathrm{Id} - T)(y)\|^2$ for all $x, y \in \mathbb{R}^n$; it is firmly quasinonexpansive if $\|T(x) - y\|^2 \leq \|x - y\|^2 - \|T(x) - x\|^2$ for all $x \in \mathbb{R}^n$ and all $y \in \mathrm{fix}(T)$.
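As a quick numerical illustration of the identity $(\mathrm{Id} + \mathrm{N}_S)^{-1} = \mathrm{proj}_S$ recalled above, the following sketch (our own toy example, not from the paper; the box $S = [0,1]^3$ and the test point are assumed) verifies that the projection $p$ of a point $x$ satisfies the resolvent inclusion $x - p \in \mathrm{N}_S(p)$:

```python
import numpy as np

# Toy example: for the box S = [0, 1]^3, the resolvent of the normal cone
# operator, (Id + N_S)^{-1}, is the Euclidean projection onto S.
def proj_box(x, lo=0.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

x = np.array([-0.5, 0.3, 1.7])
p = proj_box(x)
# p solves the resolvent inclusion x - p in N_S(p): the residual points
# outward of S on active faces and vanishes in the interior.
r = x - p
assert np.allclose(p, [0.0, 0.3, 1.0])
assert r[0] <= 0 and abs(r[1]) < 1e-12 and r[2] >= 0
```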
II Mathematical setup
We consider a set of $N$ agents, $\mathcal{I} := \{1, \dots, N\}$, where each agent $i \in \mathcal{I}$ shall choose its decision variable (i.e., strategy) $x_i$ from its local decision set $\Omega_i \subseteq \mathbb{R}^{n_i}$. Let $x := \mathrm{col}((x_i)_{i \in \mathcal{I}}) \in \Omega$ denote the stacked vector of all the agents' decisions, with $\Omega := \Omega_1 \times \dots \times \Omega_N \subseteq \mathbb{R}^n$ the overall action space and $n := \sum_{i=1}^N n_i$. The goal of each agent $i \in \mathcal{I}$ is to minimize its objective function $J_i(x_i, x_{-i})$, which depends on both the local variable $x_i$ and on the decision variables of the other agents $x_{-i} := \mathrm{col}((x_j)_{j \in \mathcal{I} \setminus \{i\}})$.
Furthermore, the feasible decision set of each agent depends also on the actions of the other agents via affine coupling constraints. Specifically, the overall feasible set is

$\mathcal{X} := \Omega \cap \{ x \in \mathbb{R}^n \mid A x \leq b \},$   (1)

where $A := [A_1, \dots, A_N]$ and $b := \sum_{i=1}^N b_i$, with $A_i \in \mathbb{R}^{m \times n_i}$ and $b_i \in \mathbb{R}^m$ being local data. The game is then represented by the interdependent optimization problems:

$\forall i \in \mathcal{I}: \quad \min_{x_i \in \mathbb{R}^{n_i}} \ J_i(x_i, x_{-i}) \quad \text{s.t. } (x_i, x_{-i}) \in \mathcal{X}.$   (2)
The technical problem we consider here is the computation of a GNE, as formalized next.
Definition 1
A collective strategy $x^* = \mathrm{col}((x_i^*)_{i \in \mathcal{I}})$ is a generalized Nash equilibrium if, for all $i \in \mathcal{I}$, $J_i(x_i^*, x_{-i}^*) \leq \inf \{ J_i(y, x_{-i}^*) \mid (y, x_{-i}^*) \in \mathcal{X} \}$.
Next, we postulate some common regularity and convexity assumptions for the constraint sets and cost functions, see, e.g., [15, Ass. 1], [24, Ass. 1].
Standing Assumption 1
For each $i \in \mathcal{I}$, the set $\Omega_i$ is nonempty, closed and convex; $\mathcal{X}$ is nonempty and satisfies Slater's constraint qualification; $J_i$ is continuous and the function $J_i(\cdot, x_{-i})$ is convex and continuously differentiable for every $x_{-i}$.
Among all the possible GNE, we focus on the subclass of variational GNE (vGNE) [5, Def. 3.11], which enjoy important structural properties, such as “economic fairness”. (Informally speaking, a vGNE is a GNE where the cost of the common limitations is fairly shared: at a vGNE, the marginal loss due to the presence of the coupling constraints is the same for each agent. For an overview on vGNE, please refer to [5], [24].) The vGNE are so called because they coincide with the solutions of a variational inequality. (For an operator $\Psi$ and a set $S$, the variational inequality VI$(\Psi, S)$ is the problem of finding a vector $\omega^* \in S$ such that $\langle \Psi(\omega^*), \omega - \omega^* \rangle \geq 0$ for all $\omega \in S$ [25, Def. 1.1.1].) Specifically, the vGNE are the solutions of VI$(F, \mathcal{X})$, where $F$ is the pseudogradient mapping of the game:

$F(x) := \mathrm{col}\big( (\nabla_{x_i} J_i(x_i, x_{-i}))_{i \in \mathcal{I}} \big).$   (3)
Under Standing Assumption 1, $x^*$ is a vGNE of the game in (2) if and only if there exists a dual variable $\lambda^* \in \mathbb{R}^m_{\geq 0}$ such that the following KKT conditions are satisfied [5, Th. 4.8]:

$\begin{cases} \mathbf{0}_n \in F(x^*) + A^\top \lambda^* + \mathrm{N}_\Omega(x^*) \\ \mathbf{0}_m \in -(A x^* - b) + \mathrm{N}_{\mathbb{R}^m_{\geq 0}}(\lambda^*). \end{cases}$   (4)
A sufficient condition for the existence of a unique vGNE for the game in (2) is the strong monotonicity of the pseudogradient [25, Th. 2.3.3], as postulated next. This assumption is always used for (G)NE seeking under partial-decision information with fixed step sizes, e.g., in [13, Ass. 2], [15, Ass. 3] (while it is sometimes replaced by strict monotonicity and compactness of the feasible set when allowing for vanishing step sizes [11, Ass. 2]). It implies strong convexity of the functions $J_i(\cdot, x_{-i})$ for every fixed $x_{-i}$, but not necessarily (strong) convexity of $J_i$ in the full argument.
Standing Assumption 2
The pseudogradient mapping $F$ in (3) is $\mu$-strongly monotone and $\theta_0$-Lipschitz continuous, for some $\mu, \theta_0 > 0$: for any pair $x, y \in \mathbb{R}^n$, $\langle x - y, F(x) - F(y) \rangle \geq \mu \|x - y\|^2$ and $\|F(x) - F(y)\| \leq \theta_0 \|x - y\|$.
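For intuition, both constants in this assumption admit closed forms when the costs are quadratic, so that the pseudogradient is affine, $F(x) = Mx + c$. The snippet below (a hypothetical two-player example; $M$ and $c$ are our own choices, not from the paper) computes the strong monotonicity constant as the smallest eigenvalue of the symmetrized $M$ and the Lipschitz constant as $\sigma_{\max}(M)$, and spot-checks the two inequalities:

```python
import numpy as np

# Hypothetical quadratic game: the pseudogradient is affine, F(x) = M x + c.
# Then F is mu-strongly monotone with mu = lambda_min((M + M^T)/2) (when this
# is positive) and Lipschitz with constant ell = sigma_max(M).
M = np.array([[2.0, 0.5],
              [0.3, 1.5]])
c = np.array([1.0, -1.0])
F = lambda x: M @ x + c

mu = np.linalg.eigvalsh((M + M.T) / 2).min()
ell = np.linalg.svd(M, compute_uv=False).max()
assert mu > 0  # Standing Assumption 2 holds for this game

# Spot-check the two defining inequalities on random pairs of points.
rng = np.random.default_rng(0)
for _ in range(100):
    x, y = rng.normal(size=2), rng.normal(size=2)
    d = x - y
    assert (F(x) - F(y)) @ d >= mu * (d @ d) - 1e-9
    assert np.linalg.norm(F(x) - F(y)) <= ell * np.linalg.norm(d) + 1e-9
```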
III Fully-distributed equilibrium seeking
In this section, we present an algorithm to seek a GNE of the game in (2) in a fully-distributed way. Specifically, each agent $i$ only knows its own cost function $J_i$ and feasible set $\Omega_i$, and a portion of the coupling constraints, namely $(A_i, b_i)$. Moreover, agent $i$ does not have full knowledge of $x_{-i}$, and only relies on the information exchanged locally with some neighbors over an undirected communication network $\mathcal{G}(\mathcal{I}, \mathcal{E})$. The unordered pair $(i, j)$ belongs to the set of edges $\mathcal{E}$ if and only if agents $i$ and $j$ can mutually exchange information. We denote: $W$ the weighted symmetric adjacency matrix of $\mathcal{G}$, with $w_{i,j} := [W]_{i,j} > 0$ if $(i, j) \in \mathcal{E}$, $[W]_{i,j} = 0$ otherwise, and the convention $[W]_{i,i} = 0$ for all $i \in \mathcal{I}$; $L := D - W$ the weighted symmetric Laplacian matrix of $\mathcal{G}$, where $D$ is the degree matrix of $\mathcal{G}$, i.e., the diagonal matrix with $[D]_{i,i} = \sum_{j=1}^N w_{i,j}$, for all $i \in \mathcal{I}$; $\mathcal{N}_i := \{ j \mid (i, j) \in \mathcal{E} \}$ the set of neighbors of agent $i$. Moreover, we label the edges $e_1, \dots, e_E$, where $E := |\mathcal{E}|$ is the cardinality of the edge set $\mathcal{E}$, and we assign to each edge an arbitrary orientation. We denote the weighted incidence matrix as $V \in \mathbb{R}^{E \times N}$, where $[V]_{e,i} = \sqrt{w_{i,j}}$ if $i$ is the output vertex of edge $e = (i, j)$, $[V]_{e,i} = -\sqrt{w_{i,j}}$ if $i$ is the input vertex of edge $e = (j, i)$, $[V]_{e,i} = 0$ otherwise. It holds that $L = V^\top V$; moreover, $\mathrm{null}(V) = \mathrm{null}(L) = \{\kappa \mathbf{1}_N \mid \kappa \in \mathbb{R}\}$ under the following connectedness assumption [26, Ch. 8].
Standing Assumption 3
The communication graph is undirected and connected.
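The graph-theoretic identities recalled above (Laplacian $L = D - W$, weighted incidence $V$ with $L = V^\top V$, and a one-dimensional kernel of $L$ for a connected graph) can be checked numerically; the sketch below assumes a 4-node path graph with unit weights, our own toy choice:

```python
import numpy as np

# Assumed toy graph (4-node path, unit weights): Laplacian L = D - W and
# oriented incidence matrix V with entries +-sqrt(w_ij), so that L = V^T V.
N = 4
edges = [(0, 1), (1, 2), (2, 3)]           # arbitrary orientation per edge
W = np.zeros((N, N))
for i, j in edges:
    W[i, j] = W[j, i] = 1.0                # unit weights
D = np.diag(W.sum(axis=1))                 # degree matrix
L = D - W                                  # weighted Laplacian

V = np.zeros((len(edges), N))
for e, (i, j) in enumerate(edges):
    V[e, i], V[e, j] = 1.0, -1.0           # sqrt(weight), opposite signs
assert np.allclose(L, V.T @ V)

# Connectedness: the kernel of L is spanned by the all-ones vector, so the
# second-smallest eigenvalue is strictly positive.
eigvals = np.linalg.eigvalsh(L)
assert abs(eigvals[0]) < 1e-12 and eigvals[1] > 0
assert np.allclose(L @ np.ones(N), 0)
```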
In the partial-decision information scenario, to cope with the lack of knowledge, each agent keeps an estimate of all other agents' actions [27], [28], [15]. We denote $\mathbf{x}^i := \mathrm{col}((\mathbf{x}^i_j)_{j \in \mathcal{I}}) \in \mathbb{R}^n$, where $\mathbf{x}^i_i := x_i$ and $\mathbf{x}^i_j$ is agent $i$'s estimate of agent $j$'s action, for all $j \neq i$; let also $\mathbf{x}^i_{-i} := \mathrm{col}((\mathbf{x}^i_j)_{j \in \mathcal{I} \setminus \{i\}})$. Moreover, each agent keeps an estimate $\lambda_i \in \mathbb{R}^m_{\geq 0}$ of the dual variable and an auxiliary variable $z_i \in \mathbb{R}^m$. Our proposed dynamics are summarized in Algorithm 1, where the global parameter and the local step sizes have to be chosen appropriately (see §IV). We note that in Algorithm 1 the agents evaluate their cost functions at their local estimates, not at the actual collective strategy.
Initialization: For all $i \in \mathcal{I}$, set $x_i^0 \in \Omega_i$, $\mathbf{x}^{i,0}_{-i} \in \mathbb{R}^{n - n_i}$, $z_i^0 = \mathbf{0}_m$, $\lambda_i^0 \in \mathbb{R}^m_{\geq 0}$.




Communication: The agents exchange the variables with their neighbors.
Each agent does:
Distributed averaging:
Local variables update:

In steady state, the agents should agree on their estimates, i.e., $\mathbf{x}^i = \mathbf{x}^j$, for all $i, j \in \mathcal{I}$. This motivates the presence of consensual terms for both primal and dual variables. We denote $\mathbf{E}_q := \{ \mathbf{1}_N \otimes y \mid y \in \mathbb{R}^q \}$ the consensual space of dimension $q$ and $\mathbf{E}_q^\perp$ its orthogonal complement, for any integer $q > 0$. Specifically, $\mathbf{E}_n$ is the estimate consensus subspace and $\mathbf{E}_m$ is the dual variable consensus subspace.
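The structure of the consensus subspace can be sketched numerically. In the toy example below (dimensions $N = 3$, $n = 2$ are our own assumptions), a stacked vector is consensual exactly when all agent copies coincide, and the orthogonal projection onto the consensus subspace simply averages the copies:

```python
import numpy as np

# Toy dimensions (N = 3 agents, n = 2): a stacked vector lies in the consensus
# subspace E_n = { 1_N kron y : y in R^n } iff all agent copies coincide; the
# orthogonal projector onto E_n is (1/N)(1_N 1_N^T kron I_n).
N, n = 3, 2
P_E = np.kron(np.ones((N, N)) / N, np.eye(n))

y = np.array([1.0, -2.0])
consensual = np.kron(np.ones(N), y)
assert np.allclose(P_E @ consensual, consensual)        # E_n is invariant

v = np.arange(6, dtype=float)                           # generic stacked vector
avg = v.reshape(N, n).mean(axis=0)
assert np.allclose(P_E @ v, np.kron(np.ones(N), avg))   # projection = averaging
```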
IV Derivation and convergence analysis
In this section, we derive Algorithm 1 as a PPPA and show its convergence by leveraging a restricted monotonicity property. Before going into details, we need some definitions. We denote $\mathbf{x} := \mathrm{col}((\mathbf{x}^i)_{i \in \mathcal{I}}) \in \mathbb{R}^{Nn}$. Besides, let us define, as in [15, Eq. 13-14], for all $i \in \mathcal{I}$,

$\mathcal{R}_i := \big[\, \mathbf{0}_{n_i \times n_{<i}} \;\; I_{n_i} \;\; \mathbf{0}_{n_i \times n_{>i}} \,\big],$   (5a)
$\mathcal{S}_i := \begin{bmatrix} I_{n_{<i}} & \mathbf{0}_{n_{<i} \times n_i} & \mathbf{0}_{n_{<i} \times n_{>i}} \\ \mathbf{0}_{n_{>i} \times n_{<i}} & \mathbf{0}_{n_{>i} \times n_i} & I_{n_{>i}} \end{bmatrix},$   (5b)

where $n_{<i} := \sum_{j < i} n_j$, $n_{>i} := \sum_{j > i} n_j$. In simple terms, $\mathcal{R}_i$ selects the $i$-th ($n_i$-dimensional) component from an $n$-dimensional vector, while $\mathcal{S}_i$ removes it. Thus, $\mathcal{R}_i \mathbf{x}^i = \mathbf{x}^i_i = x_i$ and $\mathcal{S}_i \mathbf{x}^i = \mathbf{x}^i_{-i}$. We define $\mathcal{R} := \mathrm{diag}((\mathcal{R}_i)_{i \in \mathcal{I}})$, $\mathcal{S} := \mathrm{diag}((\mathcal{S}_i)_{i \in \mathcal{I}})$. It follows that $\mathcal{R}\mathbf{x} = \mathrm{col}((x_i)_{i \in \mathcal{I}}) = x$ and $\mathcal{S}\mathbf{x} = \mathrm{col}((\mathbf{x}^i_{-i})_{i \in \mathcal{I}})$. Moreover, we have that

$\mathcal{R}_i^\top \mathcal{R}_i + \mathcal{S}_i^\top \mathcal{S}_i = I_n.$   (6)
We define the extended pseudogradient mapping as

$\mathbf{F}(\mathbf{x}) := \mathrm{col}\big( (\nabla_{x_i} J_i(x_i, \mathbf{x}^i_{-i}))_{i \in \mathcal{I}} \big),$   (7)
and the operators
(8)  
(9) 
where is a fixed design parameter, , with , , , , , , and .
The following lemma relates the unique vGNE of the game in (2) to the zeros of the operator defined in (9). The proof is analogous to that of [15, Th. 1] or of Lemma 10 in §V, and hence it is omitted.
Lemma 1
The following statements hold:

If , then and is the vGNE of the game in (2).

.
IV-A Derivation of the algorithm
Lemma 1 is fundamental, because it allows us to recast the GNE problem as that of computing a zero of the mapping in (9). In turn, this can be efficiently done by applying standard operator-splitting methods [16, §26-28]. By following this approach, fully-distributed GNE seeking dynamics were developed by the authors of [15], [18]. In effect, in this section we show that Algorithm 1 is also an instance of the proximal-point algorithm (PPA) [16, Th. 23.41], applied to seek a zero of the (suitably preconditioned) operator in (9).
Nonetheless, technical difficulties arise because of the partial-decision information setup. Specifically, the operator in (9) is not monotone in general, not even under strong monotonicity of the pseudogradient mapping, i.e., Standing Assumption 2. This is due to the fact that, in the extended pseudogradient in (7), the partial gradient is evaluated at the local estimate $\mathbf{x}^i_{-i}$, and not at the actual value $x_{-i}$. Only when the estimates belong to the consensus subspace, i.e., $\mathbf{x} = \mathbf{1}_N \otimes x$ (namely, the estimates of each agent coincide with the actual value of $x$), do we have that $\mathbf{F}(\mathbf{x}) = F(x)$.
We remark that many operator-theoretic properties are not guaranteed for the resolvent of a nonmonotone operator $\mathcal{A}$. By definition, it still holds that $\mathrm{fix}(\mathrm{J}_{\mathcal{A}}) = \mathrm{zer}(\mathcal{A})$, but $\mathrm{J}_{\mathcal{A}}$ may have a limited domain, or fail to be single-valued. In this general case, we write the PPA as
(10) 
which is well defined only if every iterate belongs to $\mathrm{dom}(\mathrm{J}_{\mathcal{A}})$.
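To fix ideas, the iteration in (10) can be sketched for a simple monotone operator whose resolvent is available in closed form. In the hypothetical example below (our own choice, not from the paper), the operator is the gradient of a strongly convex quadratic, $f(x) = \tfrac{1}{2}x^\top Q x - b^\top x$ with $Q \succ 0$, so the resolvent is $(I + Q)^{-1}(x + b)$ and the iterates converge to the unique zero $Q^{-1}b$:

```python
import numpy as np

# Proximal-point iteration for A = grad f, f(x) = 0.5 x^T Q x - b^T x (Q > 0).
# The resolvent J_A = (Id + A)^{-1} has the closed form (I + Q)^{-1}(x + b).
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
J = np.linalg.inv(np.eye(2) + Q)

x = np.zeros(2)
for _ in range(200):
    x = J @ (x + b)                 # x^{k+1} = J_A(x^k)

x_star = np.linalg.solve(Q, b)      # unique zero of A
assert np.allclose(x, x_star, atol=1e-8)
```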
Next, we show that Algorithm 1 is obtained by applying the iteration in (10) to the operator , where
(11) 
is called the preconditioning matrix, and the step sizes in Algorithm 1 have to be chosen such that the matrix in (11) is positive definite. In this case, it also holds that the zeros of the preconditioned operator coincide with those of the operator in (9). Sufficient conditions that ensure positive definiteness are provided in the next lemma, which follows from Gershgorin's circle theorem.
Lemma 2
The matrix in (11) is positive definite if and, for all , , .
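Gershgorin's circle theorem yields a cheap sufficient test for positive definiteness that only inspects rows: every eigenvalue of a symmetric matrix lies in a disc centered at a diagonal entry, with radius equal to the corresponding off-diagonal absolute row sum. A minimal sketch (the matrix below is an arbitrary stand-in, not the actual preconditioning matrix in (11)):

```python
import numpy as np

# Gershgorin-based sufficient test for positive definiteness of a symmetric
# matrix: strict diagonal dominance with positive diagonal implies Phi > 0.
def gershgorin_pd(Phi):
    Phi = np.asarray(Phi)
    radii = np.sum(np.abs(Phi), axis=1) - np.abs(np.diag(Phi))
    return bool(np.all(np.diag(Phi) - radii > 0))

Phi = np.array([[ 4.0, -1.0,  0.5],
                [-1.0,  3.0, -0.5],
                [ 0.5, -0.5,  2.0]])
assert gershgorin_pd(Phi)                    # sufficient condition holds
assert np.linalg.eigvalsh(Phi).min() > 0     # and Phi is indeed PD
```

Note that the test is only sufficient: a positive definite matrix may fail it, but a matrix passing it (with positive diagonal) is always positive definite.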
In the following, we always assume that the step sizes in Algorithm 1 are chosen such that the matrix in (11) is positive definite. Then, we are able to formulate the following result.
Lemma 3
Proof. By definition of inverse operator we have that
(13) 
In turn, the first inclusion in (13) can be split into two components by left-multiplying both sides by the selection matrices defined in (5). By exploiting standard properties of the selection matrices, we get
Therefore, since the zeros of the subdifferential of a (strongly) convex function coincide with its minima (unique minimum) [16, Th. 16.3], (13) can be rewritten as
(14) 
The conclusion follows by defining the local auxiliary variables kept by each agent via a suitable change of variables, provided that the initial auxiliary variables are chosen consistently; the latter is ensured by the initialization in Algorithm 1.
Remark 2
The preconditioning matrix in (11) is designed to make the system of inclusions in (13) block triangular, i.e., to remove the dual and auxiliary terms from the first inclusion, and the primal terms from the second: in this way, each update depends only on variables already computed within the same iteration. This ensures that the resulting iteration can be computed by the agents in a fully-distributed fashion. Furthermore, the change of variables reduces the number of auxiliary variables and decouples the dual update in (14) from the graph structure.
IV-B Convergence analysis
The convergence of Algorithm 1 cannot be inferred from standard results for the PPA, because the operator in (9) (or its preconditioned version) is not monotone in general. The loss of monotonicity is the main technical difficulty that arises when studying (G)NE seeking under partial-decision information, and it is due to the fact that the extended pseudogradient in (7) is very rarely monotone in cases of interest (see Appendix D). However, a restricted strong monotonicity property holds for the operator in (8), which was exploited, e.g., in [29], [15], [27]. Analogously, we make use of a restricted monotonicity property of the operator in (9), which can be guaranteed for any game satisfying Standing Assumptions 1-3, without additional hypotheses, as formalized in the next two statements.
Lemma 4 ([30, Lemma 3])
The extended pseudogradient mapping in (7) is $\theta$-Lipschitz continuous, for some $\theta > 0$: for any $\mathbf{x}, \mathbf{y} \in \mathbb{R}^{Nn}$, $\|\mathbf{F}(\mathbf{x}) - \mathbf{F}(\mathbf{y})\| \leq \theta \|\mathbf{x} - \mathbf{y}\|$.
Lemma 5
Let
(15) 
If , then is restricted monotone with respect to : for any and any such that , it holds that
Proof. The operator in (9) is the sum of three operators: the second is monotone by the properties of normal cones [16, Th. 20.25]; the third is a linear skew-symmetric operator, hence monotone [16, Ex. 20.35]. By Lemma 1, the zeros of the operator lie in the consensus subspace; hence, by [15, Lemma 3], the first term is restricted monotone with respect to the set of zeros, and the conclusion follows readily.

Moreover, the preconditioned operator retains this restricted monotonicity, in the space induced by the inner product weighted by the preconditioning matrix in (11).
Lemma 6
Let be as in (15) and assume that is chosen. Then is restricted monotone, with respect to , in the induced space: for all and all such that , it holds that
Proof. By definition, . Hence the restricted monotonicity in Lemma 5 reads as
Based on the restricted monotonicity property in Lemma 6, in the remainder of the section we show that the iteration in (12) converges to a zero of the operator in (9). Our analysis is based on an existing result on iterations of firmly quasinonexpansive (FQNE) operators, which is reported next for readability.
Lemma 7 ([16, Prop. 4.2, Th. 4.3])
Let be a finite dimensional Hilbert space equipped with a scalar product , and be a firmly quasinonexpansive operator, such that , . Let be a sequence in , and a sequence in such that . Let and set:
(16) 
Then the following statements hold:

.

.

Suppose that every cluster point of belongs to . Then, converges to a point in .
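A minimal instance of the iteration in (16), under assumed data of our own: the operator is the orthogonal projection onto the line $\{x : x_1 = x_2\}$ (firmly nonexpansive, hence FQNE), all relaxation parameters equal $1$, and the perturbations are absolutely summable; the iterates settle on a fixed point of the operator:

```python
import numpy as np

# Inexact relaxed iteration x^{k+1} = x^k + lam_k (T x^k + e_k - x^k) with
# T = projection onto span{(1,1)} (firmly nonexpansive), lam_k = 1, and a
# summable error sequence e_k.
P = 0.5 * np.ones((2, 2))          # orthogonal projector onto span{(1, 1)}
x = np.array([3.0, -1.0])
for k in range(500):
    e = 0.5 ** k * np.ones(2)      # ||e_k|| is summable
    x = x + 1.0 * (P @ x + e - x)

# The iterates converge to a fixed point of T, i.e., a point with x1 = x2.
assert abs(x[0] - x[1]) < 1e-10
```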
We already noted in Remark 3 that the resolvent of the preconditioned operator is single-valued, with full domain. However, to be able to apply the previous lemma to the iteration in (12), we still need the following two lemmas.
Lemma 8
Let the parameter be chosen as in (15). Then, the resolvent of the preconditioned operator is firmly quasinonexpansive in the induced norm, i.e., for every point and every one of its fixed points, it holds that
Proof. Denote and recall that . By definition of resolvent and by definition of inverse operator, it follows that , and . Therefore the restricted monotonicity property in Lemma 6 reads as
In turn, the last inequality is equivalent to the firm quasinonexpansiveness of the resolvent [16, Prop. 4.2(iv)].
Lemma 9
The resolvent of the preconditioned operator in (12) is continuous.
Proof. See Appendix A.
Theorem 1
Proof. By Lemma 3, we can equivalently study the convergence of the iteration in (12). In turn, (12) can be rewritten in the form (16), since the resolvent is firmly quasinonexpansive with respect to the induced norm by Lemma 8 and it has full domain by Remark 3. Moreover, by definition of resolvent, its fixed points coincide with the zeros of the operator in (9), and hence, by Lemma 1, with the vGNE. The sequence generated by (12) is bounded by Lemma 7(i); therefore, it admits at least one cluster point, attained along a nondecreasing diverging subsequence. Since the difference between consecutive iterates vanishes by Lemma 7(ii), by the continuity of the resolvent in Lemma 9 every cluster point is a fixed point of the resolvent. Therefore, all the cluster points belong to the set of fixed points, and the convergence of (12) to an equilibrium follows by Lemma 7(iii). The conclusion follows by Lemma 1.
Remark 4
Algorithm 1 requires each agent to solve an optimization problem to compute its update at each iteration. However, from Lemma 7 and by inspection of the proof of Theorem 1, it is evident that the convergence result in Theorem 1 still holds if an approximation is used in place of the exact solution of the optimization problem in Algorithm 1, provided that the errors with respect to the exact solution are absolutely summable. Furthermore, the optimization problems are strongly convex, hence they can be efficiently solved via iterative algorithms.
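This robustness to summable errors can be illustrated on a toy proximal-point iteration (again a hypothetical quadratic example of our own, not the actual algorithm): the exact resolvent step is perturbed by a geometrically vanishing error, and the iterates still converge to the exact zero:

```python
import numpy as np

# Inexact proximal-point iteration for A(x) = Q x - b with Q > 0: the exact
# resolvent J_A(x) = (I + Q)^{-1}(x + b) is perturbed by a summable error.
rng = np.random.default_rng(1)
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
J = np.linalg.inv(np.eye(2) + Q)

x = np.zeros(2)
for k in range(300):
    eps = 0.5 ** k * rng.normal(size=2)   # ||eps_k|| is absolutely summable
    x = J @ (x + b) + eps                 # inexact proximal step

# Convergence to the unique zero Q^{-1} b is preserved.
assert np.allclose(x, np.linalg.solve(Q, b), atol=1e-6)
```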
V Aggregative games
In aggregative games, $n_i = n'$ for all $i \in \mathcal{I}$ (hence $n = N n'$) and the cost function of each agent depends only on its local decision and on the value of the average strategy $\mathrm{avg}(x) := \frac{1}{N} \sum_{i=1}^N x_i$. Therefore, for each $i \in \mathcal{I}$, there is a function $f_i$ such that the original cost function in (2) can be written as