# Gradient Play in n-Cluster Games with Zero-Order Information

We study a distributed approach for seeking a Nash equilibrium in n-cluster games with strictly monotone mappings. Each player within each cluster has access to the current value of her own smooth local cost function, estimated by a zero-order oracle at some query point. We assume that the agents are able to communicate with their neighbors in the same cluster over some undirected graph. The goal of the agents in a cluster is to minimize their collective cost. This cost depends, however, on the actions of agents from other clusters; thus, a game between the clusters is to be solved. We present a distributed gradient play algorithm for determining a Nash equilibrium in this game. The algorithm takes the communication settings and the zero-order information under consideration into account. We prove almost sure convergence of this algorithm to a Nash equilibrium given appropriate estimations of the local cost functions' gradients.


## I Introduction

Distributed optimization and game theory provide powerful frameworks for optimization problems arising in multi-agent systems. In generic distributed optimization problems, the cost functions of the agents are distributed across the network, meaning that each agent has only partial information about the whole optimization problem which is to be solved. Game-theoretic problems arise in such networks when the agents do not cooperate with each other and the cost functions of these non-cooperative agents are coupled by the decisions of all agents in the system. The applications of game-theoretic and distributed optimization approaches include, for example, electricity markets, power systems, flow control problems, and communication networks [11, 12, 6].

On the other hand, cooperation and competition coexist in many practical situations, such as cloud computing, hierarchical optimization in the Smart Grid, and adversarial networks [3, 4, 8]. A body of recent work has been devoted to the analysis of non-cooperative games and distributed optimization problems in terms of a single model called n-cluster games [16, 19, 17, 18, 20, 5]. In such n-cluster games, each cluster corresponds to a player whose goal is to minimize her own cost function. However, the clusters in this game are not the actual decision-makers, as the optimization of the cluster's objective is controlled by the agents belonging to the corresponding cluster. Each such agent has her own local cost function, which is available only to this agent but depends on the joint actions of agents in all clusters. The cluster's objective, in turn, is the sum of the local cost functions of the agents within the cluster. Therefore, in such models, each agent intends to find a strategy that achieves a Nash equilibrium in the resulting n-cluster game, which is a stable state minimizing each cluster's cost function in response to the actions of the agents from the other clusters.

Continuous-time algorithms for distributed Nash equilibrium seeking in multi-cluster games were proposed in [19, 17, 18]. The paper [17] solves an unconstrained multi-cluster game by using gradient-based algorithms, whereas the works [18] and [19] propose gradient-free algorithms, based on zero-order information, for seeking Nash and generalized Nash equilibria, respectively. In the discrete-time domain, the work [5] presents a leader-follower based algorithm, which can solve unconstrained multi-cluster games with a linear convergence rate. The authors in [20] extend this result to the case of a leaderless architecture. Both papers [5, 20] prove linear convergence in games with strongly monotone mappings and first-order information, meaning that agents can calculate the gradients of their cost functions and use this information to update their states. In contrast to that, the work [16] deals with a gradient-free approach to cluster games. However, the gradient estimations are constructed in such a way that only convergence to a neighborhood of the equilibrium can be guaranteed. Moreover, these estimations are obtained by using two query points, for which extra coordination between the agents is required.

Motivated by the relevance of n-cluster game models in many engineering applications, we present a discrete-time distributed procedure to seek Nash equilibria in n-cluster games with zero-order information. We consider settings where agents can communicate with their direct neighbors within the corresponding cluster over some undirected graph. However, in many practical situations the agents do not know the functional form of their objectives and can only access the current values of their objective functions at some query point. Such situations arise, for example, in electricity markets with unknown price functions [15]. In such cases, the information structure is referred to as a zero-order oracle. Our work focuses on zero-order oracle information settings and, thus, assumes agents to have no access to the analytical form of their cost functions and gradients. The agents instead construct their local query points and get the corresponding cost values from the oracle. Based on these values, the agents estimate their local gradients to be able to follow the step of the gradient play procedure. We formulate sufficient conditions and provide a concrete example of how to estimate the gradients to guarantee the almost sure convergence of the resulting algorithm to Nash equilibria in n-cluster games with strictly monotone game mappings. To the best of our knowledge, we present the first algorithm solving n-cluster games with a zero-order oracle and the corresponding one-point gradient estimations.

The paper is organized as follows. In Section II we formulate the n-cluster game with undirected communication topology in each cluster and zero-order oracle information. Section III introduces the gradient play algorithm, which is based on one-point gradient estimations. The convergence result is presented in Section III as well. Section IV provides an example of query points and gradient estimations which guarantee convergence of the algorithm discussed in Section III. Section V presents some simulation results. Finally, Section VI concludes the paper.

Notations. The set {1, …, n} is denoted by [n]. For any function f: ℝ^d → ℝ, ∇_k f(x) is the partial derivative taken with respect to the kth coordinate of the vector variable x ∈ ℝ^d. We consider the real normed space E, which is the space of real d-dimensional vectors, i.e. E = ℝ^d. We use (u, v) to denote the inner product in E and ∥·∥ to denote the Euclidean norm induced by the standard dot product in E. Any mapping M: E → E is said to be strictly monotone on Q ⊆ E if (M(u) − M(v), u − v) > 0 for any u, v ∈ Q with u ≠ v. We use B_r(p) to denote the ball of radius r centered at p and S to denote the unit sphere centered at 0. We use P_Q[p] to denote the projection of p onto a set Q. The mathematical expectation of a random value ξ is denoted by E{ξ}. We use the big-O notation: a function f(x) is O(g(x)) as x → a if ∥f(x)∥ ≤ K∥g(x)∥ in the limit x → a for some positive constant K.

## II Nash Equilibrium Seeking

### II-A Problem Formulation

We consider a non-cooperative game between n clusters. Each cluster i ∈ [n] itself consists of n_i agents. Let J_i^j and Ω_i^j ⊆ ℝ denote respectively the cost function and the feasible action set of the agent j in the cluster i. (All results below are applicable for games with different dimensions of the action sets Ω_i^j. The one-dimensional case is considered for the sake of notation simplicity.) We denote the joint action set of the agents in the cluster i by Ω_i = Ω_i^1 × ⋯ × Ω_i^{n_i}. Each function J_i^j(x_i, x_{−i}), j ∈ [n_i], depends on x_i ∈ ℝ^{n_i}, which represents the joint action of the agents within the cluster i, and x_{−i}, denoting the joint action of the agents from all clusters except for the cluster i. The cooperative cost function in the cluster i is, thus, J_i(x_i, x_{−i}) = (1/n_i) Σ_{j=1}^{n_i} J_i^j(x_i, x_{−i}).
We assume that the agents within each cluster i can interact over an undirected communication graph G_i = (V_i, E_i). The set of nodes V_i is the set of the agents in the cluster i, and the set of undirected arcs E_i is such that (j, l) ∈ E_i if and only if (l, j) ∈ E_i, i.e. there is a bidirectional communication link between j and l, over which information in the form of a message can be sent from the agent j to the agent l and vice versa in the cluster i.
However, there is no explicit communication between the clusters. Instead, we consider the following zero-order information structure in the system: no agent has access to the analytical form of any cost function, including her own. Each agent can only observe the value of her local cost function given any joint action of all agents in the system. Formally, given a joint action x = (x_1, …, x_n), each agent j ∈ [n_i], i ∈ [n], receives the value J_i^j(x) from a zero-order oracle. In particular, no agent has or receives any information about gradients.
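To make this information structure concrete, the following sketch models such an oracle for a toy game with two clusters of one agent each and scalar actions. The class name, interface, and cost functions are illustrative assumptions, not the paper's notation:

```python
import numpy as np

class ZeroOrderOracle:
    """Reveals only cost values J_i^j(x); the analytical form stays hidden."""

    def __init__(self, cost_fns):
        self._cost_fns = cost_fns    # cost_fns[i][j]: cost of agent j in cluster i

    def query(self, i, j, x):
        """Return the value of agent (i, j)'s cost at the joint action x."""
        return self._cost_fns[i][j](x)

# Toy game: two clusters with one agent each, scalar actions x = (x_1, x_2).
costs = {
    0: {0: lambda x: (x[0] - 1.0) ** 2 + x[0] * x[1]},
    1: {0: lambda x: (x[1] + 2.0) ** 2 - x[0] * x[1]},
}
oracle = ZeroOrderOracle(costs)
print(oracle.query(0, 0, np.array([0.5, 1.0])))  # prints 0.75
```

The agents only ever call `query`; everything else about the costs remains unknown to them, which is exactly the constraint the algorithm below must work under.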

Let us denote the game between the clusters introduced above by Γ. We make the following assumptions regarding the game Γ:

###### Assumption 1.

The n-cluster game under consideration is strictly convex. Namely, for all i ∈ [n], the set Ω_i is convex and each cost function J_i^j(x_i, x_{−i}), j ∈ [n_i], is continuously differentiable in x_i for each fixed x_{−i}. Moreover, the game mapping, which is defined as

 F(x) ≜ [∇_1 J_1(x_1, x_{−1}), …, ∇_n J_n(x_n, x_{−n})]^T (1)

is strictly monotone on Ω = Ω_1 × ⋯ × Ω_n.
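For intuition, strict monotonicity of a game mapping of the form (1) can be checked numerically on a toy two-cluster game with scalar actions; the cost functions below are illustrative assumptions, not from the paper:

```python
import numpy as np

# Toy game mapping (1) for J_1(x) = (x_1 - 1)^2 + x_1 x_2 and
# J_2(x) = (x_2 + 2)^2 - x_1 x_2 (illustrative costs):
def F(x):
    return np.array([2.0 * (x[0] - 1.0) + x[1],    # gradient of J_1 w.r.t. x_1
                     2.0 * (x[1] + 2.0) - x[0]])   # gradient of J_2 w.r.t. x_2

# Strict monotonicity: (F(u) - F(v), u - v) > 0 for all u != v. Here the
# symmetric part of the Jacobian is 2I, so the inner product is 2||u - v||^2.
rng = np.random.default_rng(0)
ok = all((F(u) - F(v)) @ (u - v) > 0.0
         for u, v in (rng.normal(size=(2, 2)) for _ in range(1000)))
print(ok)  # True
```

Note the coupling terms x_1 x_2 and −x_1 x_2 cancel in the symmetric part of the Jacobian, which is what makes the mapping monotone even though neither cost is convex in the joint action.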

###### Assumption 2.

Each function J_i^j(x_i, x_{−i}) is Lipschitz continuous on Ω.

###### Assumption 3.

The action sets Ω_i^j, j ∈ [n_i], i ∈ [n], are compact. Moreover, for each Ω_i there exists a so-called safety ball B_r(p) ⊆ Ω_i with r > 0 and p ∈ Ω_i. (Existence of the safety ball is required to construct feasible points for the costs' gradient estimations in the zero-order settings under consideration; see [1].)

The assumptions above are standard in the literature on both game-theoretic and zero-order optimization [1]. Finally, we make the following assumption on the communication graph, which guarantees sufficient information "mixing" in the network within each cluster.

###### Assumption 4.

The underlying undirected communication graph G_i is connected for all i ∈ [n]. The associated non-negative mixing matrix W_i = [w_{jl}^i] ∈ ℝ^{n_i × n_i} defines the weights on the undirected arcs such that w_{jl}^i > 0 if and only if (j, l) ∈ E_i, and W_i is doubly stochastic, i.e. Σ_{l=1}^{n_i} w_{jl}^i = 1 and Σ_{j=1}^{n_i} w_{jl}^i = 1.
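A standard way to construct a mixing matrix with these properties is via Metropolis weights. The sketch below is one possible construction under the stated assumptions, not a choice prescribed by the paper:

```python
import numpy as np

# Metropolis weights on an undirected connected graph yield a symmetric,
# doubly stochastic W with w_jl > 0 exactly on the edges (a common
# construction in distributed optimization; other choices also work).
def metropolis_weights(n, edges):
    deg = [0] * n
    for j, l in edges:
        deg[j] += 1
        deg[l] += 1
    W = np.zeros((n, n))
    for j, l in edges:
        w = 1.0 / (1 + max(deg[j], deg[l]))
        W[j, l] = W[l, j] = w
    for j in range(n):
        W[j, j] = 1.0 - W[j].sum()   # self-weight absorbs the remainder
    return W

# Path graph on 3 agents in one cluster: 0 -- 1 -- 2.
W = metropolis_weights(3, [(0, 1), (1, 2)])
print(W)  # rows and columns each sum to 1
```

Because each agent only needs the degrees of its direct neighbors to compute its row of W, this construction itself is distributed, which fits the per-cluster communication model.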

One of the stable solutions in any game corresponds to a Nash equilibrium defined below.

###### Definition 1.

A vector x* = (x_1*, …, x_n*) ∈ Ω is called a Nash equilibrium if for any i ∈ [n] and x_i ∈ Ω_i

 J_i(x_i^*, x_{−i}^*) ≤ J_i(x_i, x_{−i}^*).

In this work, we are interested in distributed seeking of a Nash equilibrium in any game Γ with the information structure described above and for which Assumptions 1–4 hold.

### II-B Existence and Uniqueness of the Nash Equilibrium

In this subsection, we demonstrate the existence of the Nash equilibrium in Γ under Assumptions 1 and 3. For this purpose we recall the results connecting Nash equilibria and solutions of variational inequalities from [9].

###### Definition 2.

Consider a set Y ⊆ ℝ^d and a mapping M: Y → ℝ^d. A solution to the variational inequality problem VI(Y, M) is a vector y* ∈ Y such that (M(y*), y − y*) ≥ 0 for any y ∈ Y.

The following theorem is a well-known result on the connection between Nash equilibria in games and solutions of a corresponding variational inequality (see Corollary 1.4.2 in [9]).

###### Theorem 1.

Consider a non-cooperative game Γ. Suppose that the action sets of the players are closed and convex, and the cost functions are continuously differentiable and convex in x_i for every fixed x_{−i} on the interior of the joint action set Ω. Then, a vector x* ∈ Ω is a Nash equilibrium in Γ if and only if x* solves VI(Ω, F), where F is the game mapping defined by (1).

Next, we formulate the result guaranteeing existence and uniqueness of the solution to VI(Y, M) in the case of a strictly monotone mapping (see Corollary 2.2.5 and Proposition 2.3.3 in [9]).

###### Theorem 2.

Given the problem VI(Y, M), suppose that Y is compact and the mapping M is strictly monotone. Then, a solution to VI(Y, M) exists and is unique.

Taking into account Theorems 1 and 2, we obtain the following result.

###### Theorem 3.

Let Γ be a game for which Assumptions 1 and 3 hold. Then, there exists a unique Nash equilibrium x* in Γ. Moreover, this Nash equilibrium is the solution of VI(Ω, F), where F is the game mapping (see (1)).

Thus, if Assumptions 1 and 3 hold, we can guarantee existence and uniqueness of the Nash equilibrium in the game under consideration and use the corresponding variational inequality in the analysis of the optimization procedure presented below.
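As a sanity check, the variational characterization in Theorem 3 can be verified numerically on a toy two-cluster game with scalar actions; the costs J_1 = (x_1 − 1)² + x_1x_2, J_2 = (x_2 + 2)² − x_1x_2 and the box constraints are illustrative assumptions:

```python
import numpy as np

# Toy game mapping: F(x) = [2(x_1 - 1) + x_2, 2(x_2 + 2) - x_1],
# with actions constrained to the box Omega = [-5, 5]^2 (illustrative).
def F(x):
    return np.array([2.0 * (x[0] - 1.0) + x[1], 2.0 * (x[1] + 2.0) - x[0]])

x_star = np.array([1.6, -1.2])          # interior point solving F(x*) = 0

# The VI condition of Theorem 3: (F(x*), y - x*) >= 0 for every y in Omega.
# Since x* is interior with F(x*) = 0, the inequality holds with equality.
rng = np.random.default_rng(2)
ys = rng.uniform(-5.0, 5.0, size=(1000, 2))
print(bool(np.all((ys - x_star) @ F(x_star) >= -1e-9)))  # True
```

Here the equilibrium lies in the interior of Ω, so solving the VI reduces to solving F(x*) = 0; with an active constraint, the inner product condition would hold with strict inequality for directions pointing into the feasible set.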

## III Main Results

### III-A Zero-order gradient play between clusters

To deal with the zero-order information available to the agents and local state exchanges within the clusters, we assume that each agent j from the cluster i maintains a local variable

 x_i^{(j)} = [x_i^{(j)1}, ⋯, x_i^{(j)(j−1)}, x_i^j, x_i^{(j)(j+1)}, ⋯, x_i^{(j)n_i}]^T ∈ Ω_i, (2)

which is her estimation of the joint action of the agents from her cluster i. Here, x_i^{(j)l} is player j's estimate of x_i^l, and x_i^j is the action of the agent j from the cluster i. The goal of the agents within each cluster is to update their local variables in such a way that the joint action x(t) = (x_1(t), …, x_n(t)), with x_i = [x_i^1, …, x_i^{n_i}]^T, converges to the Nash equilibrium in the game between the clusters as time runs. To let the agents achieve this goal, we aim to adapt the standard projected gradient play approach to the cluster game with zero-order information.

At this point we assume each agent j ∈ [n_i], i ∈ [n], based on her local estimation x_i^{(j)}, constructs a feasible query point ^x_i^{(j)} ∈ Ω_i and sends it to the oracle. As a reply from the oracle, the agent receives the value J_i^j(^x_i^{(j)}, ^~x_{−i}). The vector ^~x_{−i} here corresponds to the point obtained by some combination of the query vectors sent by the agents from the other clusters. Formally,

 ^~x_{−i} = (^x_1^{(j_1)}, …, ^x_{i−1}^{(j_{i−1})}, ^x_{i+1}^{(j_{i+1})}, …, ^x_n^{(j_n)}), (3)

where j_k denotes some agent from the cluster k, k ≠ i. Further, each agent j ∈ [n_i], i ∈ [n], uses the received value to obtain a random estimation d_i^j of her local cost's gradient at the point (x_i^{(j)}, ~x_{−i}), where

 ~x_{−i} = (x_1^{(j_1)}, …, x_{i−1}^{(j_{i−1})}, x_{i+1}^{(j_{i+1})}, …, x_n^{(j_n)}) (4)

corresponds to the local estimations of the other agents (one for each cluster different from i) based on which the query points are obtained. Thus, ^~x_{−i} is the query-point counterpart of ~x_{−i}. As d_i^j is an estimation of ∇_i J_i^j(x_i^{(j)}, ~x_{−i}), we represent this vector by the following decomposition:

 d_i^j = ∇_i J_i^j(x_i^{(j)}, ~x_{−i}) + e_i^j, (5)

where e_i^j is a random vector reflecting the inaccuracy of the obtained estimation, i.e. the estimation error vector. Note that for the joint query point the oracle is free to choose any combination of the local queries defined in (3).
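Concrete estimators are deferred to Section IV. For intuition, a generic one-point randomized estimator of the kind used in zero-order optimization (cf. [1]) can be sketched as follows; the sampling scheme and smoothing radius σ are illustrative choices, not the paper's construction:

```python
import numpy as np

# One-point gradient estimator: with u uniform on the unit sphere in R^d,
# (d / sigma) * J(x + sigma * u) * u is an unbiased estimate of the gradient
# of a smoothed version of J, built from a single oracle query.
def one_point_gradient(J, x, sigma, rng):
    d = x.size
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)          # uniform random direction on the unit sphere
    return (d / sigma) * J(x + sigma * u) * u

# Sanity check on J(x) = ||x||^2, whose true gradient at x is 2x.
rng = np.random.default_rng(1)
x = np.array([1.0, -0.5])
est = np.mean([one_point_gradient(lambda y: y @ y, x, 0.1, rng)
               for _ in range(200000)], axis=0)
print(est)  # averages to approximately the true gradient [2.0, -1.0]
```

The single-sample variance of such an estimator scales like 1/σ², which is exactly why the error terms e_i^j must be balanced against the step size, as formalized in Assumption 5 below; this is also where the safety ball of Assumption 3 matters, since the perturbed query point x + σu must remain feasible.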

Now we are ready to formulate the gradient play between the clusters. Starting with an arbitrary x_i^{(j)}(0) ∈ Ω_i, each agent updates the local estimation vector x_i^{(j)}(t), j ∈ [n_i], i ∈ [n], as follows:

 x_i^{(j)}(t+1) = P_{Ω_i}{ Σ_{l=1}^{n_i} w_{jl}^i x_i^{(l)}(t) − α_t d_i^j(t) }, (6)

where the time-dependent parameter α_t corresponds to the step size.
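To illustrate the update (6), the sketch below runs the gradient play on a toy game with two clusters of two agents each, replacing the oracle-based estimates d_i^j with exact gradients (zero estimation error e_i^j); all cost functions and parameters are illustrative assumptions, not the paper's examples:

```python
import numpy as np

# Two clusters, cluster actions in R^2, Omega_i = [-5, 5]^2. Local costs:
#   cluster 0: J^1 = ||x_0 - a||^2,  J^2 = <x_0, x_1>
#   cluster 1: J^1 = ||x_1 - b||^2,  J^2 = -<x_0, x_1>
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
grads = {  # grads[i][j](xi, xo): gradient of J_i^j w.r.t. the cluster action xi
    0: [lambda xi, xo: 2.0 * (xi - a), lambda xi, xo: xo],
    1: [lambda xi, xo: 2.0 * (xi - b), lambda xi, xo: -xo],
}
W = np.array([[0.5, 0.5], [0.5, 0.5]])          # doubly stochastic mixing matrix

x = {i: [np.zeros(2), np.zeros(2)] for i in (0, 1)}   # x[i][j]: agent (i,j)'s estimate
for t in range(5000):
    alpha = 1.0 / (t + 10)                      # diminishing step size
    new_x = {}
    for i in (0, 1):
        other = x[1 - i][0]        # stand-in for the cross-cluster point (4):
        new_x[i] = []              # always take agent 0's estimate
        for j in (0, 1):
            mix = W[j, 0] * x[i][0] + W[j, 1] * x[i][1]
            d = grads[i][j](x[i][j], other)     # exact gradient stands in for d_i^j
            new_x[i].append(np.clip(mix - alpha * d, -5.0, 5.0))  # projection on Omega_i
    x = new_x

print(x[0][0], x[1][0])  # near the Nash equilibrium x* = ((0.8, -0.4), (0.4, 0.8))
```

Each agent mixes the in-cluster estimates, takes a gradient step on its own local cost only, and projects; the consensus averaging is what makes the averaged iterates follow the cluster cost J_i even though no single agent knows it.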

Let F_t be the σ-algebra generated by the estimations d_i^j(k), j ∈ [n_i], i ∈ [n], k ≤ t, up to time t. Let x̄_i(t) = (1/n_i) Σ_{j=1}^{n_i} x_i^{(j)}(t) be the running average of the agents' estimation vectors within the cluster i. The following proposition describes the behavior of the consensus errors ∥x_i^{(j)}(t) − x̄_i(t)∥ in the long run.

###### Proposition 1.

Let Assumptions 3 and 4 hold and x_i^{(j)}(t), j ∈ [n_i], i ∈ [n], be updated according to (6). Then for all i ∈ [n] and j ∈ [n_i],

1. if lim_{t→∞} α_t ∥d_i^j(t)∥ = 0 almost surely, then lim_{t→∞} ∥x_i^{(j)}(t) − x̄_i(t)∥ = 0 almost surely;

2. if Σ_{t=0}^∞ α_t² ∥d_i^j(t)∥² < ∞ almost surely, then Σ_{t=0}^∞ α_t ∥x_i^{(j)}(t) − x̄_i(t)∥ < ∞ almost surely.

###### Proof.

Follows from Lemma 8 in [7]. (The proof can be repeated up to (37) in [7]; the inequality (37) and the analysis afterward stay valid in terms of the conditional expectation.)

In view of the proposition above, and to be able to analyze the behavior of the algorithm by means of the running averages x̄_i(t), we make the following assumption on the balance between the step size α_t and the error terms e_i^j.

###### Assumption 5.

The step size α_t and the error terms e_i^j(t) are such that

 Σ_{t=0}^∞ α_t = ∞,  Σ_{t=0}^∞ α_t² < ∞,
 Σ_{t=0}^∞ α_t E{∥e_i^j(t)∥ | F_t} < ∞ almost surely,
 Σ_{t=0}^∞ α_t² E{∥e_i^j(t)∥² | F_t} < ∞ almost surely.
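For example, step sizes of the form α_t = (t+1)^{−a} with a ∈ (1/2, 1] satisfy the first two conditions; the quick numeric check below illustrates this for a = 0.6 (the remaining two conditions depend on the gradient estimator and are addressed in Section IV):

```python
# Step sizes alpha_t = (t+1)^(-a): for a in (1/2, 1], the sum of alpha_t
# diverges while the sum of alpha_t^2 converges (integral test).
def partial_sums(a, N):
    s1 = sum((t + 1) ** (-a) for t in range(N))          # partial sum of alpha_t
    s2 = sum((t + 1) ** (-2 * a) for t in range(N))      # partial sum of alpha_t^2
    return s1, s2

s1_small, _ = partial_sums(0.6, 10**4)
s1_large, s2_large = partial_sums(0.6, 10**6)
print(s1_small, s1_large, s2_large)
# s1 keeps growing with N (divergence), while s2 stays below the
# integral-test bound 1 + 1/(2*0.6 - 1) = 6 for every N.
```

The tradeoff in choosing a is standard: values closer to 1/2 keep the steps large longer (faster consensus), while values closer to 1 suppress the estimation noise more aggressively.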

In Section IV we shed light on how the gradients can be sampled to guarantee fulfillment of Assumption 5. With Proposition 1 in place, we are ready to prove the main result formulated in the theorem below.

###### Theorem 4.

Let Assumptions 1–5 hold and x_i^{(j)}(t), j ∈ [n_i], i ∈ [n], be updated according to (6). Then the joint action x(t) converges almost surely to the unique Nash equilibrium x* in the game Γ, i.e. lim_{t→∞} x(t) = x* almost surely.

###### Proof.

Let x* be the unique Nash equilibrium in the game Γ (see Theorem 3). We proceed with estimating the distance between x_i^{(j)}(t+1) and x_i^*. Let v_i^j(t) = Σ_{l=1}^{n_i} w_{jl}^i x_i^{(l)}(t). As x_i^* ∈ Ω_i, we can use the non-expansion property of the projection operator to conclude that almost surely (a.s.) (in the following discussion the big-O notation is defined under the limit t → ∞; see Notations)

 ∥x_i^{(j)}(t+1) − x_i^*∥² ≤ ∥v_i^j(t) − α_t d_i^j(t) − x_i^*∥² (7)
 = ∥v_i^j(t) − x_i^*∥² − 2α_t (d_i^j(t), v_i^j(t) − x_i^*) + α_t² ∥d_i^j(t)∥²
 = ∥v_i^j(t) − x_i^*∥² − 2α_t (∇_i J_i^j(x_i^{(j)}(t), ~x_{−i}(t)), v_i^j(t) − x_i^*) − 2α_t (e_i^j(t), v_i^j(t) − x_i^*) + O(α_t²(1 + ∥e_i^j(t)∥²))
 ≤ ∥v_i^j(t) − x_i^*∥² − 2α_t (∇_i J_i^j(x_i^{(j)}(t), ~x_{−i}(t)), v_i^j(t) − x_i^*) + O(α_t ∥e_i^j(t)∥) + O(α_t²(1 + ∥e_i^j(t)∥²)),

where in the last equality we used (5), which implies that a.s.

 ∥d_i^j(t)∥² ≤ 2(∥∇_i J_i^j(x_i^{(j)}(t), ~x_{−i}(t))∥² + ∥e_i^j(t)∥²)

and, thus, ∥d_i^j(t)∥² = O(1 + ∥e_i^j(t)∥²) a.s. (see Assumptions 1 and 3), whereas in the last inequality we used the Cauchy–Schwarz inequality, implying

 −(e_i^j(t), v_i^j(t) − x_i^*) ≤ ∥e_i^j(t)∥ ∥v_i^j(t) − x_i^*∥ a.s.,

and Assumption 3, implying almost sure boundedness of ∥v_i^j(t) − x_i^*∥. We focus now on the terms ∥v_i^j(t) − x_i^*∥² and (∇_i J_i^j(x_i^{(j)}(t), ~x_{−i}(t)), v_i^j(t) − x_i^*). Due to Assumption 4, we have that a.s.

 ∥v_i^j(t) − x_i^*∥² = ∥Σ_{l=1}^{n_i} w_{jl}^i x_i^{(l)}(t) − x_i^*∥² ≤ Σ_{l=1}^{n_i} w_{jl}^i ∥x_i^{(l)}(t) − x_i^*∥².

And, as Σ_{j=1}^{n_i} w_{jl}^i = 1, we obtain that a.s.

 Σ_{j=1}^{n_i} ∥v_i^j(t) − x_i^*∥² ≤ Σ_{l=1}^{n_i} (Σ_{j=1}^{n_i} w_{jl}^i) ∥x_i^{(l)}(t) − x_i^*∥² (16)
 = Σ_{l=1}^{n_i} ∥x_i^{(l)}(t) − x_i^*∥².

Next,

 (∇_i J_i^j(x_i^{(j)}(t), ~x_{−i}(t)), v_i^j(t) − x_i^*) (18)
 = (∇_i J_i^j(x_i^{(j)}(t), ~x_{−i}(t)), v_i^j(t) − x_i^*)
 − (∇_i J_i^j(x_i^{(j)}(t), ~x_{−i}(t)), x̄_i(t) − x_i^*)
 + (∇_i J_i^j(x_i^{(j)}(t), ~x_{−i}(t)), x̄_i(t) − x_i^*)
 − (∇_i J_i^j(x̄_i(t), x̄_{−i}(t)), x̄_i(t) − x_i^*)
 + (∇_i J_i^j(x̄_i(t), x̄_{−i}(t)), x̄_i(t) − x_i^*),

where x̄_{−i}(t) is the joint running average of the agents' local variables over all clusters except for the cluster i (see more details in (4)). Thus, by applying the Cauchy–Schwarz inequality to (18), we get

 −(∇_i J_i^j(x_i^{(j)}(t), ~x_{−i}(t)), v_i^j(t) − x_i^*)
 ≤ ∥∇_i J_i^j(x_i^{(j)}(t), ~x_{−i}(t))∥ ∥v_i^j(t) − x̄_i(t)∥
 + ∥∇_i J_i^j(x_i^{(j)}(t), ~x_{−i}(t)) − ∇_i J_i^j(x̄_i(t), x̄_{−i}(t))∥ ∥x̄_i(t) − x_i^*∥
 − (∇_i J_i^j(x̄_i(t), x̄_{−i}(t)), x̄_i(t) − x_i^*) a.s.

Taking into account the almost sure boundedness of ∥∇_i J_i^j∥ and ∥x̄_i(t) − x_i^*∥ (see Assumptions 1 and 3) and Assumption 2, we conclude that

 −(∇_i J_i^j(x_i^{(j)}(t), ~x_{−i}(t)), v_i^j(t) − x_i^*)
 ≤ O(∥v_i^j(t) − x̄_i(t)∥)
 + O(∥x_i^{(j)}(t) − x̄_i(t)∥ + ∥~x_{−i}(t) − x̄_{−i}(t)∥)
 − (∇_i J_i^j(x̄_i(t), x̄_{−i}(t)), x̄_i(t) − x_i^*).

Thus, we get from (7)

 ∥x_i^{(j)}(t+1) − x_i^*∥² ≤ ∥v_i^j(t) − x_i^*∥² (33)
 − 2α_t (∇_i J_i^j(x̄_i(t), x̄_{−i}(t)), x̄_i(t) − x_i^*)
 + 2α_t O(∥v_i^j(t) − x̄_i(t)∥)
 + 2α_t O(∥x_i^{(j)}(t) − x̄_i(t)∥ + ∥~x_{−i}(t) − x̄_{−i}(t)∥)
 + O(α_t ∥e_i^j(t)∥) + O(α_t²(1 + ∥e_i^j(t)∥²)) a.s.

Analogously to (16)

 Σ_{j=1}^{n_i} ∥v_i^j(t) − x̄_i(t)∥ ≤ Σ_{l=1}^{n_i} ∥x_i^{(l)}(t) − x̄_i(t)∥.

Therefore, by averaging both sides of (33) over j and taking the conditional expectation with respect to F_t (below we use the notation E_t{·} = E{· | F_t}), we obtain that a.s.

 (1/n_i) Σ_{j=1}^{n_i} E_t{∥x_i^{(j)}(t+1) − x_i^*∥²} ≤ (1/n_i) Σ_{j=1}^{n_i} ∥x_i^{(j)}(t) − x_i^*∥² (38)
 − 2α_t (1/n_i) Σ_{j=1}^{n_i} (∇_i J_i^j(x̄_i(t), x̄_{−i}(t)), x̄_i(t) − x_i^*)
 + O((1/n_i) Σ_{j=1}^{n_i} α_t ∥x_i^{(j)}(t) − x̄_i(t)∥) + O(α_t ∥~x_{−i}(t) − x̄_{−i}(t)∥)
 + (1/n_i) Σ_{j=1}^{n_i} O(α_t E_t{∥e_i^j(t)∥}) + (1/n_i) Σ_{j=1}^{n_i} O(α_t²(1 + E_t{∥e_i^j(t)∥²}))
 = (1/n_i) Σ_{j=1}^{n_i} ∥x_i^{(j)}(t) − x_i^*∥² − 2α_t (∇_i J_i(x̄_i(t), x̄_{−i}(t)), x̄_i(t) − x_i^*) + h_i(t),

where

 h_i(t) = O((1/n_i) Σ_{j=1}^{n_i} α_t ∥x_i^{(j)}(t) − x̄_i(t)∥)
 + O(α_t ∥~x_{−i}(t) − x̄_{−i}(t)∥)
 + (1/n_i) Σ_{j=1}^{n_i} O(α_t E_t{∥e_i^j(t)∥})
 + (1/n_i) Σ_{j=1}^{n_i} O(α_t²(1 + E_t{∥e_i^j(t)∥²})).

By taking into account Proposition 1 2) and the definition of ~x_{−i}(t) (see (4)), we conclude that a.s.

 Σ_{t=0}^∞ O((1/n_i) Σ_{j=1}^{n_i} α_t ∥x_i^{(j)}(t) − x̄_i(t)∥) < ∞,
 Σ_{t=0}^∞ O(α_t ∥~x_{−i}(t) − x̄_{−i}(t)∥) < ∞.

Moreover, due to Assumption 5,

 Σ_{t=0}^∞ (1/n_i) Σ_{j=1}^{n_i} O(α_t E_t{∥e_i^j(t)∥}) < ∞,
 Σ_{t=0}^∞ (1/n_i) Σ_{j=1}^{n_i} O(α_t²(1 + E_t{∥e_i^j(t)∥²})) < ∞

almost surely. Thus,

 Σ_{t=0}^∞ h_i(t) < ∞ a.s. for all i ∈ [n]. (45)

Next, let us introduce the vector u(t) with ∥u(t)∥² = Σ_{i=1}^n (1/n_i) Σ_{j=1}^{n_i} ∥x_i^{(j)}(t) − x_i^*∥². Therefore, summing (38) over i implies

 E_t{∥u(t+1)∥²} ≤ ∥u(t)∥² − 2α_t (F(x̄(t)), x̄(t) − x*) + Σ_{i=1}^n h_i(t) (46)
 ≤ ∥u(t)∥² − 2α_t (F(x̄(t)) − F(x*), x̄(t) − x*) + Σ_{i=1}^n h_i(t),

where in the last inequality we used the fact that x* is the Nash equilibrium in Γ and, thus, (F(x*), x̄(t) − x*) ≥ 0 a.s. for all t (see Theorem 1). Due to the strict monotonicity of the mapping F (see Assumption 1), which implies

 (F(¯x(t))−F<