I Introduction
Aggregative game theory [2] is a mathematical framework for modeling interdependent optimal decision-making problems among noncooperative agents, where the decision of each agent is affected by some aggregate effect of all the agents. Motivated by application domains where this aggregative feature arises, e.g. demand-side management [3] and network congestion control, equilibrium seeking in aggregative games is currently an active research area.
Existence and uniqueness of (Nash) equilibria in (aggregative) games have been comprehensively studied, especially in close connection with variational inequalities [4], [5, §12]. Distributed and semi-decentralized algorithms [6, 7], [8, 9, 10] have been proposed as discrete-time dynamics that converge to an equilibrium of the game, e.g. a Nash or aggregative equilibrium, under appropriate technical assumptions and sufficient conditions on the problem data. Specifically, one can characterize the desired equilibria as the zeros of a monotone operator, e.g. via the concatenation of interdependent Karush–Kuhn–Tucker operators, and formulate an equivalent fixed-point problem, to be solved via discrete-time dynamics with guaranteed global asymptotic convergence [9, 10].
Within the literature on equilibrium seeking for aggregative games with coupling constraints, the available solution methods are discrete-time algorithms, for which tuning the step size is typically a hard task. Therefore, in this paper, we address the aggregative equilibrium computation problem via continuous-time dynamics. Inspired by passivity arguments [11], our original contribution is to provide simple, primal-dual integral dynamics for the computation of generalized aggregative and Nash equilibria via semi-decentralized dynamics.
To handle both local and global constraints, we propose equilibrium seeking dynamics that are characterized as a projected dynamical system [12], whose solutions are intended as locally absolutely continuous functions. Thus, we exploit invariance arguments for differential inclusions with maximally monotone set-valued right-hand side, and apply them to our primal-dual projected dynamics [13, 14]. From the technical perspective, our main contribution is to prove global asymptotic convergence of the proposed dynamics to a generalized (primal-dual) equilibrium of the aggregative game, under mild assumptions on the problem data, namely, local convexity of the cost functions, convexity of the constraints, and strict monotonicity of the pseudogradient mapping. Compared to our preliminary contribution [1], in this paper we consider aggregative games with coupling constraints, propose primal-dual dynamics, and discuss convergence to both generalized aggregative equilibria and generalized Nash equilibria.
The paper is organized as follows. We introduce and mathematically characterize the problem setup in Section II. We propose the equilibrium seeking dynamics and present the main result in Section III. Technical discussions and corollaries are in Section IV. The proofs are given in the Appendix.
Notation and definitions
$\mathbf{1}$ ($\mathbf{0}$) denotes a matrix/vector with all elements equal to $1$ ($0$); $\otimes$ denotes the Kronecker product. Given vectors $x_1, \ldots, x_N \in \mathbb{R}^n$, we define $\boldsymbol{x} := \operatorname{col}(x_1, \ldots, x_N) = [x_1^\top, \ldots, x_N^\top]^\top$ and $\operatorname{avg}(\boldsymbol{x}) := \frac{1}{N} \sum_{i=1}^{N} x_i$. Let the set $C \subseteq \mathbb{R}^n$ be nonempty. The symbol $\partial C$ denotes the boundary of $C$, and the mapping $\iota_C$ denotes the indicator function, i.e., $\iota_C(x) = 0$ if $x \in C$, $\iota_C(x) = \infty$ otherwise. The set-valued mapping $\mathrm{N}_C : \mathbb{R}^n \rightrightarrows \mathbb{R}^n$ denotes the normal cone operator, i.e., $\mathrm{N}_C(x) = \{ v \in \mathbb{R}^n \mid \sup_{z \in C} v^\top (z - x) \leq 0 \}$ if $x \in C$, $\mathrm{N}_C(x) = \varnothing$ otherwise. The set-valued mapping $T_C : \mathbb{R}^n \rightrightarrows \mathbb{R}^n$ denotes the tangent cone operator. The mapping $\operatorname{proj}_C : \mathbb{R}^n \to C$ denotes the projection operator, $\operatorname{proj}_C(x) := \operatorname{argmin}_{z \in C} \| z - x \|$; $\Pi_C(x, v) := \operatorname{proj}_{T_C(x)}(v)$ denotes the projection of the vector $v$ onto the tangent cone of $C$ at $x$. For a function $f : \mathbb{R}^n \to \overline{\mathbb{R}}$, $\operatorname{dom}(f) := \{ x \mid f(x) < \infty \}$; $\partial f$ denotes its subdifferential set-valued mapping, defined as $\partial f(x) := \{ v \mid f(z) \geq f(x) + v^\top (z - x) \ \forall z \in \operatorname{dom}(f) \}$; if $f$ is differentiable at $x$, then $\partial f(x) = \{ \nabla f(x) \}$. Given a closed convex set $C \subseteq \mathbb{R}^n$ and a single-valued mapping $F : C \to \mathbb{R}^n$, the variational inequality problem, denoted by VI$(C, F)$, is the problem to find $x^\star \in C$ such that $\inf_{z \in C} (z - x^\star)^\top F(x^\star) \geq 0$.
II Mathematical background: Aggregative games and variational equilibria
II-A Jointly-convex aggregative games with coupling constraints
A jointly-convex aggregative game with coupling constraints is denoted by a triplet , where is the index set of the decision makers, or agents, is an ordered set of cost functions, and is an ordered set of set-valued mappings that represent coupled constraint sets. For each , we assume an affine structure for the set :
for some set and matrices .
In aggregative games, the aim of each agent is to minimize an objective function that depends on its local decision variable and on the average of the decision variables of all agents, i.e., . Formally, a jointly-convex aggregative game with coupling constraints represents the following collection of interdependent optimization problems:
(1) 
where .
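In a typical instance, the collection of problems in (1) can be sketched as follows, with illustrative symbols (decision variables $x_i$, costs $f_i$, local sets $\Omega_i$, coupling data $A_i$, $b$) that need not match the paper's exact notation:

```latex
\forall i \in \mathcal{I} = \{1, \ldots, N\}: \quad
\begin{cases}
\min\limits_{x_i \in \mathbb{R}^n} & f_i\!\left( x_i,\ \tfrac{1}{N} \textstyle\sum_{j=1}^{N} x_j \right) \\[1mm]
\ \text{s.t.} & x_i \in \mathcal{X}_i(x_{-i}) := \left\{ y \in \Omega_i \ \middle|\ A_i y \leq b - \textstyle\sum_{j \neq i} A_j x_j \right\}
\end{cases}
```

The affine structure of the coupled constraint set matches the description given above: a local set $\Omega_i$ intersected with a shared affine coupling constraint.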
The following standing assumption holds throughout the paper.
Standing Assumption 1
Continuity, compactness, convexity. The objective functions are continuous. The sets are nonempty, compact and convex. The set , where , is nonempty and satisfies Slater’s constraint qualification. For all , and , the function is continuously differentiable and convex.
II-B Generalized aggregative equilibrium and strictly-monotone pseudogradient mapping
Our aim is to design continuous-time, semi-decentralized dynamics that asymptotically converge to a generalized aggregative equilibrium, which is a set of decision variables such that each is optimal given the average among all the decision variables and the coupling constraints.
Definition 1
Generalized aggregative equilibrium. A set of decision variables is a generalized aggregative equilibrium of the game in (1) if, for all ,
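With illustrative symbols (equilibrium profile $x^\star$, average $\bar{x}^\star := \frac{1}{N}\sum_j x_j^\star$; not necessarily the paper's exact notation), the optimality condition in Definition 1 can be sketched as: for all $i$,

```latex
f_i\!\left( x_i^\star,\ \bar{x}^\star \right)
\;\leq\; \inf \Big\{\, f_i\!\left( y,\ \bar{x}^\star \right) \;\Big|\; y \in \mathcal{X}_i(x_{-i}^\star) \,\Big\}
```

The distinctive aggregative feature is that the average is kept fixed at $\bar{x}^\star$ also in the comparison term; this is precisely what distinguishes a GAE from a GNE, as discussed later in Remark 3.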
A fundamental mapping in game theory is the so-called pseudo-subdifferential mapping, which in our setup with continuously differentiable functions, hence single-valued subdifferentials, is a pseudogradient mapping. Since we are interested in generalized aggregative equilibria, rather than generalized Nash equilibria, and in semi-decentralized equilibrium seeking dynamics, we adopt the following definition of (semi-extended) pseudogradient mapping:
(2) 
where is a free design parameter, and is a control variable. Throughout the paper, we assume that the pseudogradient mapping in (2) is strictly monotone; see the discussion in Section IV-A for sufficient conditions on the problem data.
Standing Assumption 2
Strictly-monotone pseudogradient mapping. The pseudogradient mapping in (2) is strictly monotone on , i.e., for all such that ,
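Standing Assumption 2 can be probed numerically. The sketch below uses a hypothetical affine pseudogradient $F(x) = Qx + c$ (all symbols and data are our own, purely illustrative): $F$ is strictly monotone exactly when the symmetric part of $Q$ is positive definite, and the code samples random pairs to check the inequality $\langle F(x) - F(y), x - y \rangle > 0$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical affine pseudogradient F(x) = Q x + c; it is strictly
# monotone iff the symmetric part of Q is positive definite (here the
# symmetric part is diag(2, 1.5), so the property holds).
Q = np.array([[2.0, 0.5],
              [-0.5, 1.5]])
c = np.array([1.0, -1.0])

def F(x):
    return Q @ x + c

def strictly_monotone_on_samples(F, dim, n_pairs=1000):
    """Check <F(x) - F(y), x - y> > 0 on random pairs x != y."""
    for _ in range(n_pairs):
        x, y = rng.normal(size=dim), rng.normal(size=dim)
        if np.allclose(x, y):
            continue
        if (F(x) - F(y)) @ (x - y) <= 0:
            return False
    return True

print(strictly_monotone_on_samples(F, dim=2))
```

A sampled check of this kind can only falsify, not certify, strict monotonicity; a certificate is given by the Gershgorin-type condition discussed in Section IV-A.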
II-C Operator-theoretic characterization
To decouple the coupling constraints of the game in (1), we adopt duality theory for equilibrium problems. We start from the definition of the Lagrangian functions, , one for each agent :
(3) 
where is a dual variable. Then, for each , we introduce the Karush–Kuhn–Tucker (KKT) system:
(4) 
where are the dual variables, one vector for each agent , associated with the coupling constraint, and represents the complementarity condition. Note that in (4), the first two equations are equivalent to . We use the former formulation to recover a semi-decentralized solution algorithm later on. Next, we follow the steps in [10] and focus on the class of variational generalized aggregative equilibria, i.e., generalized aggregative equilibria that satisfy the KKT system in (4) with equal dual variables, for all . In our definition of variational generalized aggregative equilibrium, the connection with the solutions to the KKT system is inspired by [15, Th. 9, Def. 3]. We refer to [15, §5] for the relevant properties of variational (generalized Nash) equilibria.
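With illustrative symbols (local set $\Omega_i$, coupling data $A_i$, $b$, dual variable $\lambda_i$; not necessarily the paper's exact notation), a KKT system of the form (4) typically reads, for each agent $i$:

```latex
\begin{cases}
\mathbf{0} \;\in\; \nabla_{x_i} f_i\!\left( x_i, \bar{x} \right) + \mathrm{N}_{\Omega_i}(x_i) + A_i^{\top} \lambda_i \\[1mm]
\mathbf{0} \;\leq\; \lambda_i \;\perp\; -\big( \textstyle\sum_{j=1}^{N} A_j x_j - b \big) \;\geq\; \mathbf{0}
\end{cases}
```

Here $\perp$ denotes the complementarity condition: the dual variable and the coupling-constraint slack are componentwise nonnegative and orthogonal.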
Definition 2
Existence and uniqueness of the vGAE follow from the standing assumptions, due to the connection with variational inequalities; the proof is analogous to [5, Prop. 12.11]. By introducing the dual variable , we have extended the space of the decision variables of the aggregative game. It then follows that the extended version of the pseudogradient mapping,
(5) 
has a fundamental role in the operator-theoretic characterization of the equilibria. Specifically, we show in the following that the solution of the KKT system is a zero of a (maximally) monotone operator that contains the extended pseudogradient mapping in (5), and that such a zero generates a vGAE.
Lemma 1
Operator-theoretic characterization. The following statements are equivalent:

is a vGAE of the game in (1);

, for some .
III Continuous-time integral dynamics for generalized aggregative equilibrium seeking
To asymptotically reach the vGAE, we consider the following continuous-time integral dynamics:
(6) 
where is a free gain parameter.
Equivalently, in collective projected-vector form, the dynamics in (6) read as
(7) 
Remark 2
Semi-decentralized structure. The computation and information exchange in (6) are semi-decentralized: each agent performs decentralized computations, namely, projected-pseudogradient steps, and does not exchange information with the other agents. A central control unit, which does not participate in the game, collects aggregative information, and , and broadcasts two signals, and , to the agents playing the aggregative game. In turn, the dynamics of the broadcast signal are driven by the average of all the decision variables, , while the dynamics of the signal are driven by the coupling-constraint violation, . The semi-decentralized structure removes the need for the agents to exchange truthful information with one another.
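To illustrate the semi-decentralized scheme, the following sketch simulates, by forward Euler, projected primal-dual dynamics of the form (6)-(7) for a hypothetical two-agent game with quadratic costs, box constraints, and one affine coupling constraint. All symbols, gains, and data are illustrative choices of ours, not the paper's.

```python
import numpy as np

# Toy game: costs f_i(x_i, s) = 0.5*x_i**2 + a*s*x_i with s = avg(x),
# local boxes x_i in [-5, 5], coupling constraint x_1 + x_2 <= b.
N, a, b = 2, 0.5, -1.0
lo, hi = -5.0, 5.0
dt, steps = 0.01, 20000

def proj_tangent_box(x, v):
    """Project the velocity v onto the tangent cone of [lo, hi] at x."""
    v = np.where((x <= lo) & (v < 0), 0.0, v)
    v = np.where((x >= hi) & (v > 0), 0.0, v)
    return v

x = np.zeros(N)   # local decisions
u = 0.0           # broadcast signal tracking the average
lam = 0.0         # dual variable of the coupling constraint

for _ in range(steps):
    # decentralized projected-pseudogradient step of each agent
    x_dot = proj_tangent_box(x, -(x + a * u + lam))
    # central unit: integral tracking of the average ...
    u_dot = x.mean() - u
    # ... and projected dual ascent on the coupling-constraint violation
    lam_dot = x.sum() - b if lam > 0 else max(x.sum() - b, 0.0)
    x, u, lam = x + dt * x_dot, u + dt * u_dot, max(lam + dt * lam_dot, 0.0)

# KKT conditions of this toy game give x_i = -0.5 and lambda = 0.75.
print(np.round(x, 3), round(u, 3), round(lam, 3))
```

The coupling constraint is active at the equilibrium of this toy instance, so the dual variable settles at a strictly positive value, and the broadcast signal converges to the equilibrium average.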
First, in view of Lemma 1, we show that the primal part of an equilibrium of the dynamics in (7) is a vGAE.
Lemma 2
The following statements are equivalent:

is an equilibrium for the dynamics in (7);

.
In view of Lemma 2, we can directly analyze the convergence of the projected dynamics in (7) to an equilibrium. Let us introduce a quadratic function, , which is used later on to obtain a Lyapunov function.
Lemma 3
Consider the function
(8) 
where are arbitrary vectors in . It holds that
(9) 
where stands for the right-hand side in (7).
We are now ready to establish our main global asymptotic convergence result. The proof, given in Appendix A, is based on invariance arguments for differential inclusions with maximally monotone set-valued right-hand side.
Theorem 1
Global asymptotic convergence to variational generalized aggregative equilibrium. Let be the vGAE of the game in (1). For any initial condition , there exists a unique solution to (7) starting from , which is a locally absolutely continuous function satisfying (7) almost everywhere, remains in , is bounded for all time, and converges to , a Lyapunov stable equilibrium of (7).
IV Technical discussions
IV-A On the strict monotonicity of the pseudogradient
In this subsection, let us consider a separable structure for the cost functions, i.e.,
(10) 
for some convex functions and matrices . The next result provides a sufficient condition on the problem data such that the pseudogradient mapping in (2) is strictly monotone (Standing Assumption 2).
Proposition 1
By (10), the pseudogradient mapping in (2) is
hence its subdifferential reads as
It follows from [4, Prop. 2.3.2] that the pseudogradient is strictly monotone if its subdifferential is positive definite, i.e., since is strongly convex,
By the Gershgorin circle theorem, the latter is true if , which is implied by (11).
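The diagonal-dominance step can be reproduced numerically. In the sketch below, the matrix M is a stand-in for the subdifferential (our own illustrative data): strict row diagonal dominance with positive diagonal entries places every Gershgorin disc of the symmetric part in the open right half-plane, which certifies positive definiteness.

```python
import numpy as np

def gershgorin_pd(M):
    """Sufficient check: every Gershgorin disc of the symmetric part of M
    lies in the open right half-plane, i.e. strict diagonal dominance
    with positive diagonal entries."""
    S = 0.5 * (M + M.T)
    radii = np.sum(np.abs(S), axis=1) - np.abs(np.diag(S))
    return bool(np.all(np.diag(S) - radii > 0))

# Hypothetical subdifferential-like matrix: strong local curvature on the
# diagonal, small aggregative coupling off the diagonal.
M = np.array([[3.0, 0.4, 0.4],
              [0.4, 2.5, 0.4],
              [0.4, 0.4, 2.0]])

print(gershgorin_pd(M))                         # Gershgorin certificate
print(bool(np.all(np.linalg.eigvalsh(M) > 0)))  # eigenvalues confirm PD
```

The Gershgorin test is conservative: it can fail on matrices that are nevertheless positive definite, which is consistent with it being only a sufficient condition here.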
The sufficient condition in (11) extends that in [1, Prop. 1] to the case of heterogeneous matrices . In turn, it is less restrictive than the sufficient condition in [17, Th. 2]. The inequality condition in (11) becomes less restrictive as grows, which is desirable for a large number of agents [1, §IV].
IV-B On separable convex coupling constraints
Let us discuss the setup with separable, nonlinear yet convex, coupling constraints, i.e., of the form
(12) 
where the functions are convex and continuously differentiable, and the set is nonempty and satisfies Slater’s constraint qualification. To recover affine coupling constraints, the optimization problems of the agents can be rewritten with auxiliary decision variables as
(13) 
where the set is compact and convex. Now, if is the GAE of the original game with coupling constraints as in (12), then the pair , with for all , is a GAE of the game in (13). Conversely, let be a GAE of the game in (13). If the coupling constraint is inactive at the equilibrium, , then it is unnecessary and is a GAE of the original game; if the coupling constraint is active, then
Therefore, the pair is a GAE of the game in (13), and in turn is a GAE of the original game.
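The auxiliary-variable reformulation in (13) can be sketched as follows, with illustrative symbols ($\sigma_i$ the auxiliary variable of agent $i$, $\tilde{\Omega}_i$ the enlarged local set, $c$ the right-hand side of the separable coupling constraint $\sum_j g_j(x_j) \leq c$ in (12); not necessarily the paper's exact notation):

```latex
\forall i \in \mathcal{I}: \quad
\begin{cases}
\min\limits_{x_i,\ \sigma_i} & f_i\!\left( x_i, \bar{x} \right) \\[1mm]
\ \text{s.t.} & g_i(x_i) \leq \sigma_i, \quad (x_i, \sigma_i) \in \tilde{\Omega}_i \\[1mm]
& \textstyle\sum_{j=1}^{N} \sigma_j \leq c
\end{cases}
```

The last line is the affine coupling constraint on the auxiliary variables, which replaces the nonlinear convex constraint of the original game and thus recovers the setup of Section II.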
IV-C On generalized Nash equilibria
We recall that a Nash equilibrium is a set of strategies where each is optimal given the other strategies, as formalized next.
Definition 3
Generalized Nash equilibrium. A set of decision variables is a generalized Nash equilibrium (GNE) of the game in (1) if, for all ,
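With illustrative symbols (equilibrium profile $x^\star$; not necessarily the paper's exact notation), the GNE optimality condition of Definition 3 lets the deviation $y$ enter the average as well:

```latex
f_i\!\left( x_i^\star,\ \tfrac{1}{N}\big( x_i^\star + \textstyle\sum_{j \neq i} x_j^\star \big) \right)
\;\leq\; \inf \Big\{\, f_i\!\left( y,\ \tfrac{1}{N}\big( y + \textstyle\sum_{j \neq i} x_j^\star \big) \right) \;\Big|\; y \in \mathcal{X}_i(x_{-i}^\star) \,\Big\}
```

Unlike in the GAE condition, the second argument of the cost is not frozen at the equilibrium average but varies with the agent's own deviation $y$.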
Remark 3
A GNE in Definition 3 differs from a GAE in Definition 1, since in the latter each decision variable is optimal given the average of the decision variables of all agents, which enters as the second argument of the cost functions. We refer to [18, 19] for a comparison between aggregative/mean-field equilibria and Nash equilibria.
If we aim to compute a GNE, rather than a GAE, then the definition of the pseudogradient mapping must be changed to
(14) 
since, for each agent , the variable enters as the local decision variable in both the first and the second argument of the cost function . Analogously to (7), possible continuous-time generalized Nash equilibrium seeking dynamics are
(15) 
Convergence of the above dynamics to a variational GNE (vGNE) then follows if the pseudogradient mapping in (14) is strictly monotone.
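The difference between the pseudogradient mappings in (2) and (14) can be seen on a separable quadratic cost. The sketch below (illustrative cost and data, with hypothetical parameter $a$) shows that the Nash pseudogradient carries an extra $1/N$ term, coming from differentiating the average with respect to the agent's own decision.

```python
import numpy as np

# Hypothetical separable quadratic cost
#   f_i(x_i, s) = 0.5 * x_i**2 + a * s * x_i,   s = (1/N) * sum_j x_j.
# GAE pseudogradient: the average enters as a frozen second argument;
# GNE pseudogradient: x_i is also differentiated through the average.
N, a = 3, 0.5
x = np.array([1.0, -2.0, 0.5])
s = x.mean()

F_gae = x + a * s                 # average treated as an external signal
F_gne = x + a * s + (a / N) * x   # extra term: d s / d x_i = 1/N

print(F_gae)
print(F_gne)
```

The gap between the two mappings shrinks as $N$ grows, which is consistent with the known closeness of aggregative/mean-field and Nash equilibria for large populations.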
Corollary 1
Global asymptotic convergence to generalized Nash equilibrium. Let be the vGNE of the game in (1). Assume that the mapping in (14) is strictly monotone on . For any initial condition , there exists a unique solution to (15) starting from , which is a locally absolutely continuous function satisfying (15) almost everywhere, remains in , is bounded for all time, and converges to , a Lyapunov stable equilibrium of (15).
Analogously to Proposition 1, in the case of separable cost functions as in (10), we provide sufficient conditions on the problem data such that the pseudogradient mapping in (15) is strictly monotone.
Proposition 2
Since , by the proof of Proposition 1, we shall have
Thus, by the Gershgorin circle theorem, the latter is true if , which is implied by (16).
V Conclusion and outlook
In aggregative games with affine coupling constraints, continuous-time integral dynamics with semi-decentralized computation and information exchange can ensure global asymptotic convergence to generalized aggregative or Nash equilibria, under mild regularity and strict monotonicity assumptions. Future research will focus on continuous-time distributed averaged-integral dynamics in multi-agent network games with coupling constraints.
Appendix A: Proofs
For ease of notation, next, we use , , and .
Proof of Lemma 1
Proof of Lemma 2
By Moreau’s decomposition theorem,
and . The proof then follows immediately.
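Moreau's decomposition theorem, used above, can be checked numerically on the simplest cone: for $K = \mathbb{R}^n_+$, whose polar cone is $K^\circ = \mathbb{R}^n_-$, every vector splits orthogonally into its two cone projections.

```python
import numpy as np

# Moreau's decomposition for the closed convex cone K = R^n_+ (polar
# cone K° = R^n_-): every v satisfies
#   v = proj_K(v) + proj_{K°}(v),  with  <proj_K(v), proj_{K°}(v)> = 0.
v = np.array([1.5, -2.0, 0.0, 3.0, -0.1])

p_K = np.maximum(v, 0.0)      # projection onto the cone
p_polar = np.minimum(v, 0.0)  # projection onto the polar cone

print(np.allclose(p_K + p_polar, v))
print(p_K @ p_polar == 0.0)
```

In the proof, the same decomposition is applied with the tangent cone and the normal cone at a point of a closed convex set, which are polar to each other.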
Proof of Lemma 3
The proof follows the steps of [11, Proof of Lemma 6]. Since , for all vectors , by Moreau’s decomposition theorem, we have that
By definition of the normal cone , we have that
and in turn
(17) 
With similar arguments, we can show that
(18) 
Proof of Theorem 1
The dynamics in (7) represent a projected dynamical system with discontinuous right-hand side [20]. The proof uses invariance arguments for differential inclusions with maximally monotone right-hand side [13]. First, we note that in (5) is continuous and monotone. Then, we consider a zero of (Lemma 1), , and, bearing in mind Lemma 3, define the Lyapunov function . We show next that
(19) 
(20) 
therefore, we have , where stands for the right-hand side of (20). By Lemma 3, we immediately obtain (19):
Consequently, we have that
by the monotonicity of . We conclude that is nonincreasing along the trajectories of (7). By radial unboundedness of , for any initial condition , the corresponding solution is bounded, and therefore the associated limit set is nonempty, compact, invariant and attractive. Moreover, by definition of the limit set, is constant on . Thus, any solution with initial condition in must satisfy , that is, is contained in the set of points satisfying . We then study the set . For all , it holds:
(21) 
By Lemma 1, we have that for some , hence and . Therefore, for all ,
(22) 
Now, we observe that and in turn
(23) 
The last inequality holds because, by Standing Assumption 2, and, by the definition of normal cone, . Thus, we obtain
(24) 
From (24), due to Standing Assumption 2, we conclude that and . From (22) and (24), we obtain , hence . The latter implies for all , i.e., , or, equivalently, . The latter, together with the identity established before, yields that is a zero of , hence an equilibrium of (7); this concludes the characterization of .
We finally show that convergence is to an equilibrium point of (7). By Lemma 4 in Appendix B, the solution to (7) coincides with the solution to , where the right-hand side of the differential inclusion is maximally monotone by Remark 1. We can then apply [21, Ch. 3, Sec. 2, Th. 1], [13, Th. 2.2, (C1), (C3)], to conclude that every equilibrium of (7) is Lyapunov stable and that, if the solution has a limit point at an equilibrium, then the solution converges to that equilibrium. Now, from the arguments in the first part of the proof, the nonempty and invariant limit set is contained in . Since points of are equilibria of (7), the limit set is a singleton containing an equilibrium, to which the solution converges. This concludes the proof.
Appendix B: Projected dynamical systems
We consider a generic projected dynamical system
(25) 
where is a nonempty, closed and convex set. The dynamic behavior of (25) is well-studied for continuous, hypomonotone mappings .
Definition 4
Hypomonotonicity. A mapping is hypomonotone if there exists such that
for all .
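Definition 4 can be probed numerically. The sketch below (our own illustrative example) checks the hypomonotonicity inequality $\langle F(x) - F(y), x - y \rangle \geq -\mu \| x - y \|^2$ on random pairs for the mapping $F(x) = -x$, which is not monotone but is hypomonotone with constant $\mu = 1$.

```python
import numpy as np

rng = np.random.default_rng(1)

def hypomonotone_on_samples(F, mu, dim, n_pairs=1000):
    """Check <F(x) - F(y), x - y> >= -mu * ||x - y||**2 on random pairs."""
    for _ in range(n_pairs):
        x, y = rng.normal(size=dim), rng.normal(size=dim)
        d = x - y
        if (F(x) - F(y)) @ d < -mu * (d @ d) - 1e-12:
            return False
    return True

# F(x) = -x is NOT monotone, but it is hypomonotone with mu = 1, since
# <F(x) - F(y), x - y> = -||x - y||**2 >= -1 * ||x - y||**2.
F = lambda x: -x
print(hypomonotone_on_samples(F, mu=1.0, dim=3))   # holds with mu = 1
print(hypomonotone_on_samples(F, mu=0.5, dim=3))   # fails for smaller mu
```

In particular, any Lipschitz continuous mapping is hypomonotone with constant equal to its Lipschitz constant, so hypomonotonicity is a mild requirement on the right-hand side of (25).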