GNE problems arise in several engineering applications, including demand-side management in the smart grid, charging/discharging of electric vehicles, formation control, and communication networks. In these examples, multiple selfish decision makers, or agents, aim at optimizing their individual, yet inter-dependent, objective functions, subject to shared constraints.
From a game-theoretic perspective, the goal is to design distributed GNE seeking algorithms, using the local information available to each agent.
Moreover, in the cyber-physical systems framework, games are often played by agents with their own dynamics. In this case, the “strategy” of each agent consists of the output of a dynamical system,
and controllers have to be conceived to steer the physical processes to a Nash equilibrium, while ensuring closed-loop stability.
Therefore, it is advantageous to consider continuous-time schemes, for which control-theoretic properties are more easily unraveled.
Literature review: A variety of algorithms have been proposed to seek a GNE in a distributed way. A recent part of the literature focuses on aggregative games, for which the cost of each agent depends on the other agents’ strategies only via an aggregation function. These works refer to (aggregative) games played in a full-information setting, where each agent can access the decisions of all the competitors (or the aggregate value), for example in the presence of a central coordinator that broadcasts the data to the network. Nevertheless, in many applications, the existence of a node with bidirectional communication with all the agents may be impractical, and the agents can only rely on local information. One solution is offered by payoff-based methods, which are decentralized, but require the agents to measure their cost functions. Alternatively, in this paper, we assume that the agents agree on sharing some information with their neighbors. Each agent keeps an estimate of all the competitors’ actions and asymptotically reconstructs their true values, by exploiting the data exchanged over the network. Such a partial-decision information scenario has been investigated for games without coupling constraints, resorting to (projected) gradient and consensus dynamics, both in discrete time and in continuous time. Of major interest for this paper is the method in , where a nonlinear averaging integral controller is used to tune the communication weights online. The advantage is to guarantee convergence to a NE without requiring the knowledge of any global parameter or the use of a constant, high-enough gain, which is the solution proposed in . Fewer works deal with generalized games. A double-layer algorithm was presented in . Remarkably, Pavel in  derived a single-timescale, fixed-step-size GNE learning algorithm, by leveraging an elegant operator-splitting approach. The authors of  addressed aggregative games with equality constraints, via a continuous-time design. Moreover, all the results mentioned above consider static or single-integrator agents only. Distributively driving a network of more complex physical systems to game-theoretic solutions is still a relatively unexplored problem. With regard to aggregative games, a proportional-integral feedback algorithm was developed in  to seek a NE in networks of passive nonlinear second-order systems; in , continuous-time gradient-based controllers were introduced for some classes of nonlinear dynamics with uncertainties. The authors of  addressed generally coupled cost games played by linear agents, via an extremum-seeking approach. NE problems arising in systems of multi-integrator agents were studied in .
However, none of the references cited above considers generalized games. Despite the scarcity of results, the presence of coupling constraints is a significant extension that arises naturally in a variety of fields, when the agents share some common resource or limitation [12, §2].
Contributions: Motivated by the above, in this paper we investigate continuous-time GNE seeking for networks of multi-integrator agents. We consider games with affine coupling constraints, played under partial-decision information. Specifically:
We introduce two primal-dual projected-gradient controllers, for the case of single-integrator agents. The first is a continuous-time version of the algorithm in . It employs a constant gain, whose choice requires the knowledge of the algebraic connectivity of the communication graph and of the Lipschitz and strong monotonicity constants of the game mapping. To relax this condition, we present a novel distributed averaging integral controller, which extends the solution of  to generalized games. In particular, the adaptive weights in place of the fixed global gain allow for a fully-decentralized tuning, which does not need any non-local information. For both algorithms, we prove convergence of primal and dual variables, under strong monotonicity and Lipschitz continuity of the game mapping. We are not aware of any other continuous-time GNE seeking scheme, for generally coupled-cost games, whose convergence is guaranteed under such mild assumptions. (§3)
We propose a controller, with dynamic gains, specifically designed for generalized aggregative games. The agents keep and exchange an estimate of the aggregate value only, thus reducing communication and computation cost. With respect to , we can also handle inequality constraints. Furthermore, our algorithm requires no knowledge of global parameters and virtually no tuning. (§4)
We show how all of our controllers can be adapted to learn a GNE in games with shared constraints played by multi-integrator agents. To the best of our knowledge, we are the first to address generalized games with higher-order dynamical agents. Besides, the use of adaptive weights still ensures convergence without any a priori information on the game. (§5)
Some preliminary results of this paper have been submitted in , where algorithms with adaptive gains and aggregative games are not considered.
Basic notation: $\mathbb{R}$ ($\mathbb{R}_{\geq 0}$) denotes the set of (nonnegative) real numbers. For a differentiable function $f$, $\nabla f$ is its gradient. $\mathbf{0}$ ($\mathbf{1}$) denotes a matrix/vector with all elements equal to $0$ ($1$); to improve clarity, we may add the dimension of these matrices/vectors as a subscript. $I_n$ denotes the identity matrix of dimension $n$. $A^\top$ and $\sigma_{\max}(A)$ denote the transpose and the largest singular value of a matrix $A$, respectively. If $A \in \mathbb{R}^{n \times n}$ is symmetric, $\lambda_{\min}(A) =: \lambda_1(A) \leq \dots \leq \lambda_n(A) := \lambda_{\max}(A)$ denote its eigenvalues. $A \succ 0$ stands for a symmetric positive definite matrix $A$. $\mathrm{diag}(A_1, \dots, A_N)$ denotes the block diagonal matrix with the matrices $A_1, \dots, A_N$ on its diagonal. $A \otimes B$ denotes the Kronecker product of the matrices $A$ and $B$. For $x, y \in \mathbb{R}^n$, $\langle x, y \rangle := x^\top y$ and $\|x\|$ denote the Euclidean inner product and norm, respectively. Given $N$ vectors $x_1, \dots, x_N$, we may denote $x := \mathrm{col}(x_1, \dots, x_N) := [x_1^\top, \dots, x_N^\top]^\top$ and, for each $i \in \{1, \dots, N\}$, $x_{-i} := \mathrm{col}(x_1, \dots, x_{i-1}, x_{i+1}, \dots, x_N)$.
Operator-theoretic definitions: A mapping $F : \mathbb{R}^n \to \mathbb{R}^n$ is monotone ($\mu$-strongly monotone, with $\mu > 0$) if, for all $x, y \in \mathbb{R}^n$, $\langle F(x) - F(y), x - y \rangle \geq 0$ ($\geq \mu \|x - y\|^2$). A mapping $F$ is $\theta$-Lipschitz continuous, with $\theta > 0$, if, for all $x, y \in \mathbb{R}^n$, $\|F(x) - F(y)\| \leq \theta \|x - y\|$. $\bar{S}$ denotes the closure of a set $S$. Given a closed convex set $\Omega \subseteq \mathbb{R}^n$, the mapping $P_\Omega : \mathbb{R}^n \to \Omega$ denotes the projection onto $\Omega$, i.e., $P_\Omega(x) := \operatorname{argmin}_{y \in \Omega} \|x - y\|$. The set-valued mapping $N_\Omega : \mathbb{R}^n \rightrightarrows \mathbb{R}^n$ denotes the normal cone operator for the set $\Omega$, i.e., $N_\Omega(x) := \{ v \in \mathbb{R}^n \mid \sup_{z \in \Omega} \langle v, z - x \rangle \leq 0 \}$ if $x \in \Omega$, $\varnothing$ otherwise. The tangent cone operator of $\Omega$ is defined as $T_\Omega(x) := \overline{\bigcup_{\delta > 0} \tfrac{1}{\delta}(\Omega - x)}$, for $x \in \Omega$. $\Pi_\Omega(x, v) := P_{T_\Omega(x)}(v)$ denotes the projection of $v$ on the tangent cone of $\Omega$ at $x$. By Moreau’s Decomposition Theorem [1, Th. 6.30], it holds that $v = P_{T_\Omega(x)}(v) + P_{N_\Omega(x)}(v)$ and $\langle P_{T_\Omega(x)}(v), P_{N_\Omega(x)}(v) \rangle = 0$, for any $x \in \Omega$ and $v \in \mathbb{R}^n$.
For any nonempty closed convex set $\Omega \subseteq \mathbb{R}^n$, any $x \in \Omega$ and any $v \in \mathbb{R}^n$, it holds that
$$\langle v - \Pi_\Omega(x, v), \, z - x \rangle \leq 0, \quad \forall z \in \Omega.$$
Thus, if $\Pi_\Omega(x, v) = \mathbf{0}$, then $\langle v, z - x \rangle \leq 0$, for all $z \in \Omega$.
Proof. By Moreau’s theorem, $v - \Pi_\Omega(x, v) = P_{N_\Omega(x)}(v)$, hence, for any $z \in \Omega$, $\langle v - \Pi_\Omega(x, v), z - x \rangle = \langle P_{N_\Omega(x)}(v), z - x \rangle \leq 0$, by definition of the normal cone.
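Moreau’s decomposition can be checked numerically on a simple instance; the sketch below uses a (hypothetical) box $\Omega = [0,1]^3$ at a boundary point, where the projection onto the tangent cone amounts to zeroing the components that push outward at an active bound. The tangent- and normal-cone components of any direction are orthogonal and sum back to the original vector.

```python
import numpy as np

def proj_tangent_box(x, v, lo=0.0, hi=1.0):
    # Projection onto the tangent cone of the box [lo, hi]^n at x:
    # components pushing outward at an active bound are zeroed.
    w = v.copy()
    w[(x <= lo) & (v < 0)] = 0.0   # cannot decrease at the lower bound
    w[(x >= hi) & (v > 0)] = 0.0   # cannot increase at the upper bound
    return w

x = np.array([0.0, 0.5, 1.0])      # boundary point of the box [0, 1]^3
v = np.array([-2.0, 3.0, 4.0])     # arbitrary direction

t = proj_tangent_box(x, v)         # tangent-cone component
n = v - t                          # by Moreau, the normal-cone component
assert np.isclose(t @ n, 0.0)      # the two components are orthogonal
assert np.allclose(t + n, v)       # and they decompose v
```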
2 Mathematical Background
We consider a group of $N$ noncooperative agents $\mathcal{I} := \{1, \dots, N\}$, where each agent $i \in \mathcal{I}$ shall choose its decision variable (i.e., strategy) $x_i$ from its local decision set $\Omega_i \subseteq \mathbb{R}^{n_i}$. Let $x := \mathrm{col}(x_1, \dots, x_N) \in \Omega$ denote the stacked vector of all the agents’ decisions, $\Omega := \Omega_1 \times \dots \times \Omega_N \subseteq \mathbb{R}^n$ the overall action space, and $n := \sum_{i=1}^N n_i$. Moreover, let $x_{-i}$ denote the collective strategy of all the agents, except that of agent $i$. The goal of each agent $i \in \mathcal{I}$ is to minimize its objective function $J_i(x_i, x_{-i})$, which depends on both the local strategy $x_i$ and on the decision variables of the other agents $x_{-i}$. Furthermore, we address generalized games, where the coupling among the agents arises also via their feasible decision sets. In particular, we consider affine coupling constraints; thus, the overall feasible set is
$$\mathcal{X} := \Omega \cap \{ x \in \mathbb{R}^n \mid Ax \leq b \}, \quad (1)$$
where $A := [A_1, \dots, A_N] \in \mathbb{R}^{m \times n}$ and $b := \sum_{i=1}^N b_i \in \mathbb{R}^m$, with $A_i \in \mathbb{R}^{m \times n_i}$ and $b_i \in \mathbb{R}^m$ being local data. The game is then represented by the $N$ inter-dependent optimization problems
$$\forall i \in \mathcal{I}: \quad \min_{x_i \in \mathbb{R}^{n_i}} \; J_i(x_i, x_{-i}) \quad \text{s.t.} \quad (x_i, x_{-i}) \in \mathcal{X}. \quad (2)$$
In this paper, we consider the problem of computing a GNE, as formalized next.
A collective strategy $x^* = \mathrm{col}(x_1^*, \dots, x_N^*) \in \mathcal{X}$ is a generalized Nash equilibrium if, for all $i \in \mathcal{I}$,
$$J_i(x_i^*, x_{-i}^*) \leq \inf\{ J_i(y, x_{-i}^*) \mid (y, x_{-i}^*) \in \mathcal{X} \}.$$
Next, we formulate standard convexity and regularity assumptions for the constraint sets and cost functions.
Standing Assumption 1.
For each $i \in \mathcal{I}$, the set $\Omega_i$ is non-empty, closed and convex; $\mathcal{X}$ is non-empty and satisfies Slater’s constraint qualification; $J_i$ is continuously differentiable and the function $J_i(\cdot, x_{-i})$ is convex for every $x_{-i}$.
Moreover, among all the possible GNE, we focus on the important subclass of v-GNE [12, Def. 3.11]. Under the previous assumption, $x^*$ is a v-GNE of the game in (2) if and only if there exists a dual variable $\lambda^* \in \mathbb{R}^m_{\geq 0}$ such that the following KKT conditions are satisfied [12, Th. 4.8]:
where $F$ is the pseudo-gradient mapping of the game:
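Under the standing notation, the KKT system and the pseudo-gradient take the following standard form (a reconstruction consistent with [12, Th. 4.8], not a verbatim quote; the symbol choices $x^*$, $\lambda^*$, $F$ are those used throughout):

```latex
\begin{aligned}
\mathbf{0} &\in \nabla_{x_i} J_i(x_i^*, x_{-i}^*) + N_{\Omega_i}(x_i^*) + A_i^\top \lambda^*,
   \quad \forall i \in \mathcal{I}, \\
\mathbf{0} &\in -(A x^* - b) + N_{\mathbb{R}^m_{\geq 0}}(\lambda^*), \\
F(x) &:= \operatorname{col}\big( \nabla_{x_1} J_1(x_1, x_{-1}), \ldots,
   \nabla_{x_N} J_N(x_N, x_{-N}) \big).
\end{aligned}
```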
A sufficient condition for the existence of a unique v-GNE is the strong monotonicity of the pseudo-gradient [13, Th. 2.3.3], as postulated next. This assumption was used, e.g., in [17, Ass. 2], [4, Ass. 3], [8, Ass. 4].
Standing Assumption 2.
The pseudo-gradient mapping $F$ in (4) is $\mu$-strongly monotone and $\theta$-Lipschitz continuous, for some $\mu > 0$, $\theta > 0$.
3 Distributed generalized Nash equilibrium seeking
In this section, we consider the game in (2), where each agent $i \in \mathcal{I}$ is associated with the following single-integrator dynamics: $\dot{x}_i = u_i$. (5)
Our aim is to design the inputs $u_i$ to seek a v-GNE in a fully distributed way. Specifically, each agent $i$ only knows its own cost function $J_i$ and its local feasible data $\Omega_i$, $A_i$, $b_i$. Besides, agent $i$ does not have full knowledge of $x_{-i}$, and only relies on the information exchanged locally with its neighbors over a communication network $\mathcal{G}(\mathcal{I}, \mathcal{E})$. The unordered pair $(i, j)$ belongs to the set of edges $\mathcal{E}$ if and only if agents $i$ and $j$ can exchange information. We denote by $W$ the symmetric adjacency matrix of $\mathcal{G}$, with entry $w_{ij} > 0$ if $(i, j) \in \mathcal{E}$, $w_{ij} = 0$ otherwise; by $L$ the symmetric Laplacian matrix of $\mathcal{G}$; and by $\mathcal{N}_i$ the set of neighbors of agent $i$.
Standing Assumption 3.
The communication graph is undirected and connected.
In the remainder of the section, we present two dynamic controllers to asymptotically drive the system in (5) towards a v-GNE, in a fully-distributed fashion.
3.1 Distributed generalized Nash equilibrium seeking algorithm with constant gain
Our first algorithm is the continuous-time counterpart of [23, Alg. ]. To cope with partial-decision information, each agent keeps an estimate of all the other agents’ actions. We denote $\boldsymbol{x}^i := \mathrm{col}(x^i_1, \dots, x^i_N)$, where $x^i_i := x_i$ and $x^i_j$ is agent $i$’s estimate of agent $j$’s action, for all $j \neq i$. Moreover, each agent keeps an estimate $\lambda_i \in \mathbb{R}^m_{\geq 0}$ of the dual variable, and an auxiliary variable $z_i \in \mathbb{R}^m$ to allow for distributed consensus of the multiplier estimates. Our proposed dynamics are summarized in Algorithm 1, where $c > 0$ is a global constant parameter and the initial conditions can be chosen arbitrarily.
For all :
We note that the agents exchange the variables $(\boldsymbol{x}^i, \lambda_i, z_i)$ with their neighbors only; therefore, the controller can be implemented distributedly. In steady state, the agents should agree on their estimates, i.e., $\boldsymbol{x}^i = \boldsymbol{x}^j$ and $\lambda_i = \lambda_j$, for all $i, j \in \mathcal{I}$. This motivates the presence of consensual terms for both primal and dual variables. We denote by $\boldsymbol{E}_q := \{ \mathbf{1}_N \otimes y \mid y \in \mathbb{R}^q \}$ the consensual space of dimension $q$, for an integer $q > 0$, and by $\boldsymbol{E}_q^\perp$ its orthogonal complement. Specifically, $\boldsymbol{E}_n$ is the estimate consensus subspace and $\boldsymbol{E}_m$ is the multiplier consensus subspace.
To write the closed-loop system in compact form, let us define, as in [23, Eq. 13-14], for all $i \in \mathcal{I}$,
$$\mathcal{R}_i := \begin{bmatrix} \mathbf{0}_{n_i \times n_{<i}} & I_{n_i} & \mathbf{0}_{n_i \times n_{>i}} \end{bmatrix}, \qquad \mathcal{S}_i := \begin{bmatrix} I_{n_{<i}} & \mathbf{0}_{n_{<i} \times n_i} & \mathbf{0}_{n_{<i} \times n_{>i}} \\ \mathbf{0}_{n_{>i} \times n_{<i}} & \mathbf{0}_{n_{>i} \times n_i} & I_{n_{>i}} \end{bmatrix},$$
where $n_{<i} := \sum_{j < i} n_j$, $n_{>i} := \sum_{j > i} n_j$. We note that $\mathcal{R}_i$ selects the $i$-th $n_i$-dimensional component from an $n$-dimensional vector, while $\mathcal{S}_i$ removes it. Thus, $\mathcal{R}_i \boldsymbol{x}^i = x^i_i = x_i$ and $\mathcal{S}_i \boldsymbol{x}^i = \boldsymbol{x}^i_{-i}$. We define $\mathcal{R} := \mathrm{diag}(\mathcal{R}_1, \dots, \mathcal{R}_N)$ and $\mathcal{S} := \mathrm{diag}(\mathcal{S}_1, \dots, \mathcal{S}_N)$. It follows that $\mathcal{R}\boldsymbol{x} = \mathrm{col}(x^1_1, \dots, x^N_N)$ and $\mathcal{S}\boldsymbol{x} = \mathrm{col}(\boldsymbol{x}^1_{-1}, \dots, \boldsymbol{x}^N_{-N})$. Moreover, we have that $\mathcal{R}_i^\top \mathcal{R}_i + \mathcal{S}_i^\top \mathcal{S}_i = I_n$, i.e., $\boldsymbol{x}^i = \mathcal{R}_i^\top x_i + \mathcal{S}_i^\top \boldsymbol{x}^i_{-i}$.
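The select/remove decomposition can be sketched numerically; the toy below assumes, for simplicity, equal action dimensions (all data made up), and builds the selection matrices by slicing the identity.

```python
import numpy as np

N, n_i = 3, 2                 # 3 agents, each with a 2-dimensional action
n = N * n_i
I = np.eye(n)

def R(i):                     # selects agent i's block from an n-vector
    return I[i * n_i:(i + 1) * n_i, :]

def S(i):                     # removes agent i's block
    return np.delete(I, slice(i * n_i, (i + 1) * n_i), axis=0)

x = np.arange(n, dtype=float)
i = 1
x_i  = R(i) @ x               # agent i's own component
x_mi = S(i) @ x               # all the other agents' components
assert np.allclose(R(i) @ R(i).T, np.eye(n_i))
assert np.allclose(R(i).T @ x_i + S(i).T @ x_mi, x)   # exact reassembly
```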
Let $\boldsymbol{x} := \mathrm{col}(\boldsymbol{x}^1, \dots, \boldsymbol{x}^N)$, $\boldsymbol{\lambda} := \mathrm{col}(\lambda_1, \dots, \lambda_N)$, $\boldsymbol{z} := \mathrm{col}(z_1, \dots, z_N)$, and, for any integer $q > 0$, $\boldsymbol{L}_q := L \otimes I_q$. Furthermore, we define the extended pseudo-gradient mapping $\boldsymbol{F}$ as:
$$\boldsymbol{F}(\boldsymbol{x}) := \mathrm{col}\big( \nabla_{x_1} J_1(x_1, \boldsymbol{x}^1_{-1}), \dots, \nabla_{x_N} J_N(x_N, \boldsymbol{x}^N_{-N}) \big). \quad (8)$$
Then, the closed-loop system, in compact form, reads as
We remark that, in Algorithm 1, each agent evaluates the gradient of its cost function at its local estimate, not at the actual collective strategy. In fact, only when the estimates belong to the consensus space, i.e., $\boldsymbol{x} \in \boldsymbol{E}_n$ (in the case of full information, for example), do we have that $\boldsymbol{F}(\boldsymbol{x}) = F(x)$. It follows that the operator $\boldsymbol{F}$ is not necessarily monotone, not even if the pseudo-gradient in (4) is strongly monotone (Standing Assumption 2). This is the main technical difficulty that arises when studying NE seeking under partial-decision information. To deal with this complication, cocoercivity of the extended pseudo-gradient (on the augmented estimate space) is sometimes postulated [30, Ass. 4], [27, Ass. 5]. However, this is a limiting assumption, which does not hold in general [23, Rem. 6]. Instead, our analysis is based on a weaker restricted monotonicity property, which can be guaranteed for any game satisfying Standing Assumptions 1-3, without additional hypotheses, as formalized in the next two statements.
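For intuition on the role of strong monotonicity: in the full-information case, with no constraints and no estimates, the dynamics reduce to the gradient play $\dot{x} = -F(x)$, which converges to the unique NE. A minimal forward-Euler sketch with hypothetical quadratic-game data (this is a sanity check, not the paper's controller):

```python
import numpy as np

# Strongly monotone affine pseudo-gradient F(x) = A x + b (made-up data);
# the unique NE is the zero of F.
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, -2.0])
x_star = np.linalg.solve(A, -b)     # F(x*) = 0 at the NE

x, dt = np.zeros(2), 0.01
for _ in range(5000):
    x = x - dt * (A @ x + b)        # Euler step of xdot = -F(x)

assert np.linalg.norm(x - x_star) < 1e-6   # gradient play has converged
```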
Lemma 3.
The extended pseudo-gradient mapping $\boldsymbol{F}$ in (8) is $\theta_0$-Lipschitz continuous, for some $\theta_0 > 0$: for any $\boldsymbol{x}, \boldsymbol{y} \in \mathbb{R}^{Nn}$, $\|\boldsymbol{F}(\boldsymbol{x}) - \boldsymbol{F}(\boldsymbol{y})\| \leq \theta_0 \|\boldsymbol{x} - \boldsymbol{y}\|$.
Proof. See Appendix A
Lemma 4 ([23, Lem. 3]).
For any , for any and any , it holds that and also that
The restricted strong monotonicity property in the previous statement is not new in the context of games played under partial-decision information; see, e.g., , , . By leveraging Lemma 4, we next show the convergence of the dynamics in (9) to a v-GNE. For brevity of notation, let us define, in the remainder of the paper, the set
Proof. See Appendix E
3.2 Distributed generalized Nash equilibrium seeking algorithm with adaptive gains
The dynamic controller proposed in the previous subsection allows the agents to seek a v-GNE in a fully distributed way, provided that the global fixed gain $c$ is chosen high enough, as in Theorem 1. However, selecting a gain that ensures convergence requires global knowledge about the graph $\mathcal{G}$, i.e., its algebraic connectivity, and about the game mapping, i.e., the strong monotonicity and Lipschitz constants. These parameters are unlikely to be available locally in a network system, where the cost function of each agent is private. To overcome this limitation and enhance the scalability of the design, the authors of  proposed a controller for the integrator systems in (5), where the gain is tuned online, thus relaxing the need for global information, for games without coupling constraints. In this section, we extend their result to the GNE problem, i.e., to games with shared constraints.
Our proposed controller is given in Algorithm 2. For all $i \in \mathcal{I}$, $k_i$ is the adaptive gain of agent $i$, $\gamma_i > 0$ is a constant local parameter, and the initial conditions can be chosen arbitrarily.
For all :
We can rewrite the overall closed-loop, in compact form, as
where , , , , .
Proof. See Appendix B
The following result is analogous to Lemma 4. The proof relies on the decomposition of the estimate space along the consensus subspace $\boldsymbol{E}_n$, where the pseudo-gradient is strongly monotone, and the disagreement subspace $\boldsymbol{E}_n^\perp$, where the Laplacian term is strongly monotone.
For any and , for any and any , it holds that and also that
Proof. See Appendix C Building on this property, we can now present the main result of this section.
Proof. See Appendix D
Algorithm 2 allows for a fully uncoupled tuning. Specifically, each agent can choose locally the initial conditions and the rate , independently of the other agents and without any need for communication or knowledge of global parameters. Compared to Algorithm 1, the agents exchange some extra information, namely the variables .
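The adaptive-gain mechanism can be illustrated on plain consensus (a toy sketch under our own simplifying assumptions, not the controller of Algorithm 2): the gain increases only while disagreement persists, so no global bound on the required gain needs to be known in advance.

```python
import numpy as np

# Adaptive-gain consensus: xdot = -k L x, kdot = gamma * x' L x.
# The quadratic form x' L x measures disagreement, so k grows
# exactly until consensus is reached (all data made up).
L = np.array([[ 1.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])   # Laplacian of a path graph
x = np.array([1.0, 0.0, -2.0])
k, gamma, dt = 0.0, 1.0, 0.01

for _ in range(20000):               # forward Euler over t in [0, 200]
    k += dt * gamma * (x @ L @ x)    # gain grows with disagreement
    x -= dt * k * (L @ x)            # weighted consensus dynamics

assert np.std(x) < 1e-6              # the agents reach consensus
assert 0 < k < 50                    # the gain settles to a finite value
```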
4 Distributed generalized Nash equilibrium seeking for aggregative games
In this section, we focus on aggregative games. We assume that $n_i = \bar{n}$ for all $i \in \mathcal{I}$ (hence $n = N\bar{n}$). In (average) aggregative games, the cost function of each agent depends on the local decision and on the value of the average strategy, i.e., $\mathrm{avg}(x) := \frac{1}{N}\sum_{j=1}^{N} x_j$. It follows that, for each $i \in \mathcal{I}$, there is a function $f_i$ such that the original cost function in (2) can be written as
$$J_i(x_i, x_{-i}) = f_i(x_i, \mathrm{avg}(x)).$$
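As a concrete (entirely made-up) instance, a Cournot-like cost depends on the opponents only through the average strategy, so permuting the other agents' actions leaves each agent's cost unchanged:

```python
import numpy as np

# Hypothetical aggregative cost: quadratic local term plus a
# Cournot-like price that depends only on the average strategy.
a_, b_ = 10.0, 1.0

def cost(i, x):
    avg = x.mean()                          # aggregate value
    return 0.5 * x[i] ** 2 + x[i] * (b_ * avg - a_)

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 2.0])               # opponents of agent 0 permuted
# agent 0's cost depends on the others only via the average:
assert np.isclose(cost(0, x), cost(0, y))
```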
Since an aggregative game is only a particular instance of the game in (2), all the considerations on the existence and uniqueness of a v-GNE and equivalence with the KKT conditions in (3) are still valid.
Moreover, Algorithms 1-2 could still be used to drive a system of single integrators (5) towards a v-GNE. This would require each agent to keep (and exchange) an estimate of all the other agents’ actions, i.e., a vector of $n$ components. In practice, however, the cost of each agent is only a function of the aggregate value $\mathrm{avg}(x)$, whose dimension is independent of the number of agents. To reduce the communication and computation burden, in this section we introduce two distributed controllers, scalable with the number of agents, specifically designed to seek a v-GNE in aggregative games.
Our proposed dynamics are obtained by adapting Algorithms 1, 2 to exploit the aggregative structure of the game, and are illustrated in Algorithms 3, 4, respectively. Since the agents rely on local information only, they do not have access to the actual value of the average strategy. Therefore, we equip each agent with an auxiliary variable, which is a local estimate of the average strategy. Each agent aims at asymptotically reconstructing the true aggregate value, based on the information received from its neighbors. We use the notation
For all :
For all :
Let , . Furthermore, let us define the extended pseudo-gradient mapping as
The mapping in (15) is -Lipschitz continuous, for some : for any , Therefore, the mapping is -Lipschitz continuous, for some , for all .
Proof. It follows from Lemma 3, by observing that .
We note that, in Algorithms 3, 4, each agent evaluates the gradient of its cost function at its local estimate of the average strategy. Only if all the estimates coincide with the actual average value can we conclude that the extended mapping coincides with the pseudo-gradient, as in (4).
The convergence analysis of the dynamics in (16), (17) towards a v-GNE makes use of an invariance property of the systems, namely that the mean of the agents’ estimates equals the true average strategy along any trajectory, provided that the initial conditions are chosen appropriately. In fact, it is crucial to initialize each agent’s estimate at its own action, which ensures this invariance. Indeed, the dynamics in (16b) or (17b) can be regarded as a continuous-time dynamic tracking  for the time-varying average strategy. By leveraging this invariance property, we obtain a refinement of the restricted strong monotonicity properties in Lemmas 4, 6, as demonstrated next.
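The tracking invariance can be checked numerically on a static signal (a toy sketch; the Laplacian and the data are made up, but the initialization of the estimates at the agents' own values mirrors the condition above):

```python
import numpy as np

# Dynamic average tracking, static case: with s(0) = x(0), the dynamics
# sdot = -L s preserve sum(s) = sum(x) (since 1'L = 0), so each s_i
# converges to the network average of x.
L = np.array([[ 1.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])   # path graph Laplacian
rng = np.random.default_rng(0)
x = rng.standard_normal(3)           # static local signals
s = x.copy()                         # crucial initialization s(0) = x(0)

dt = 0.01
for _ in range(5000):                # forward Euler over t in [0, 50]
    s -= dt * (L @ s)

assert np.isclose(s.sum(), x.sum())                       # invariant sum
assert np.allclose(s, np.mean(x) * np.ones(3), atol=1e-6) # avg tracked
```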
Lemma 8 ([16, Lemma 4]).
For any , for any such that and any such that , it holds that , and also that
For any and , for any such that and any such that , it holds that , and also that
Proof. See Appendix F
We are now ready to prove the main results of this section.
Proof. See Appendix H.
For any initial condition in  such that , the system in (17) has a unique Carathéodory solution, which belongs to  for all . The solution converges to an equilibrium , with ,