# Asynchronous and time-varying proximal type dynamics in multi-agent network games

In this paper, we study proximal type dynamics in the context of noncooperative multi-agent network games. These dynamics arise in several applications, since they describe distributed decision making in multi-agent networks, e.g., in opinion dynamics, distributed model fitting and network information fusion, where the goal of each agent is to seek an equilibrium using local information only. We analyse several variants of this class of games, providing convergence results, or designing equilibrium seeking algorithms when the original dynamics fail to converge. For games subject only to local constraints, we consider both synchronous and asynchronous dynamics, as well as time-varying communication networks. For games additionally subject to coupling constraints, we design an equilibrium seeking algorithm converging to a special class of game equilibria. Finally, we validate the theoretical results via numerical simulations on opinion dynamics and distributed model fitting.


## I Introduction

### I-A Motivation: Multi-agent decision making over networks

Multi-agent decision making over networks is currently a vibrant research area in the systems-and-control community, with applications in several relevant domains, such as smart grids [1, 2], traffic and information networks [3], social networks [4, 5], consensus and flocking groups [6], and robotic [7] and sensor networks [8, 9]. The main benefit that each decision maker (in short, agent) gains from distributed computation and communication is that it keeps its own data private and exchanges information with selected agents only. Essentially, in networked multi-agent systems, the states (or the decisions) of the agents evolve as a result of local decision making, e.g., local constrained optimization, and distributed communication with some neighboring agents, via a communication network usually modelled by a graph. Typically, the aim of the agents is to reach a collective equilibrium state, namely, one where no agent can benefit from unilaterally changing its state.

### I-B Literature overview: Multi-agent optimization and multi-agent network games

Multi-agent dynamics over networks embody the natural extension of distributed optimization and of equilibrium seeking problems in network games. In the past decade, this field has drawn attention from several research communities, leading to a wide range of results. Some examples of constrained convex optimization problems, subject to homogeneous constraint sets, can be found in [10], where uniformly bounded subgradients and a complete communication graph with uniform weights are considered, while in [11] the cost functions are assumed to be differentiable with Lipschitz continuous and uniformly bounded gradients.

Solutions for noncooperative games over networks subject to convex compact local constraints have also been developed in [12], using strongly convex quadratic cost functions and time-invariant communication graphs; in [13, 14], using differentiable cost functions with Lipschitz continuous gradients, strictly convex cost functions, and undirected, possibly time-varying, communication graphs; and in [7], where the communication is ruled by a, possibly time-varying, digraph and the cost functions are assumed to be convex and lower semi-continuous. Some recent works developed algorithms to solve games over networks subject to asynchronous updates of the agents: in [15] and [16], the game is subject to affine coupling constraints and the cost functions are differentiable, while the communication graph is assumed to be undirected.

To the best of our knowledge, the only work that extensively focuses on multi-agent network games is [2], which considers general local convex costs with quadratic proximal terms, and both time-invariant and time-varying communication graphs, subject to technical restrictions.

This paper aims at presenting and solving new problems, not yet considered in the context of multi-agent network games, and at generalizing available results, as highlighted in the next section.

### I-C Contribution of the paper

Within the literature on multi-agent network games, the most relevant works for our purposes are [12, 2] and [7], the last being the starting point of this paper. Next, we highlight the main contributions of our work with respect to the aforementioned literature, and to these works in particular:

• We prove that a row stochastic adjacency matrix with self-loops, describing a strongly connected graph, is an averaged operator. More precisely, we show that the averagedness holds in the Hilbert space weighted by the diagonal matrix whose entries are the elements of the left Perron-Frobenius eigenvector of that matrix. This is a general result that we exploit for our proximal type dynamics.

• We establish the convergence of synchronous, asynchronous and time-varying dynamics in multi-agent network games, under the weak assumption of a communication network described by a row stochastic adjacency matrix. This weak assumption leads to several technical challenges that are not present in [2], where the matrix is always assumed to be doubly stochastic, and that in [7] are solved by modifying the original dynamics. We emphasize that asynchronous dynamics are not considered in [2, 7].

• We develop a semi-decentralized algorithm, called Prox-GNWE, seeking a normalized generalized network equilibrium, i.e., converging to a solution of the game subject to affine coupling constraints. Compared to the algorithm developed in [2, Eq. 28–31], Prox-GNWE requires fewer communications between the agents and can be applied to games with a wider class of communication networks.

## II Notation

### II-A Basic notation

The sets of real, positive, and non-negative numbers are denoted by $\mathbb{R}$, $\mathbb{R}_{>0}$ and $\mathbb{R}_{\geq 0}$, respectively; $\overline{\mathbb{R}} := \mathbb{R} \cup \{\infty\}$. The set of natural numbers is denoted by $\mathbb{N}$. For a square matrix $A$, its transpose is denoted by $A^{\top}$, $[A]_i$ denotes the $i$-th row of the matrix, and $[A]_{ij}$ the element in the $i$-th row and $j$-th column. $A \succ 0$ ($A \succcurlyeq 0$) stands for a symmetric and positive definite (semidefinite) matrix, while $>$ ($\geq$) describes an element-wise inequality. $A \otimes B$ is the Kronecker product of the matrices $A$ and $B$. The identity matrix is denoted by $I_n \in \mathbb{R}^{n \times n}$. $\mathbf{0}$ ($\mathbf{1}$) represents the vector/matrix with only $0$ ($1$) elements. For $x_1, \dots, x_N \in \mathbb{R}^n$, the collective vector is denoted by $x := \operatorname{col}(x_1, \dots, x_N) = [x_1^{\top}, \dots, x_N^{\top}]^{\top}$. Given the matrices $A_1, \dots, A_N$, $\operatorname{diag}(A_1, \dots, A_N)$ denotes the block-diagonal matrix with $A_1, \dots, A_N$ on the diagonal. The null space of a matrix $A$ is denoted by $\ker(A)$. The Cartesian product of the sets $\Omega_i$, $i \in \{1, \dots, N\}$, is described by $\prod_{i=1}^{N} \Omega_i$. Given two vectors $x, y \in \mathbb{R}^n$ and a symmetric and positive definite matrix $P$, the weighted inner product and norm are denoted by $\langle x, y \rangle_P := x^{\top} P\, y$ and $\|x\|_P := \sqrt{x^{\top} P\, x}$, respectively; the induced matrix norm is denoted by $\|A\|_P$. The real $n$-dimensional Hilbert space obtained by endowing $\mathbb{R}^n$ with the product $\langle \cdot, \cdot \rangle_P$ is denoted by $\mathcal{H}_P$.

### II-B Operator-theoretic notations and definitions

The identity operator is denoted by $\mathrm{Id}$. The indicator function $\iota_C : \mathbb{R}^n \to \{0, \infty\}$ of a set $C \subseteq \mathbb{R}^n$ is defined as $\iota_C(x) := 0$ if $x \in C$; $\infty$ otherwise. The set-valued mapping $N_C : \mathbb{R}^n \rightrightarrows \mathbb{R}^n$ stands for the normal cone to the set $C$, that is, $N_C(x) := \{v \in \mathbb{R}^n \mid \sup_{z \in C} v^{\top}(z - x) \leq 0\}$ if $x \in C$, and $\varnothing$ otherwise. The graph of a set-valued mapping $\mathcal{A}$ is $\operatorname{gra}(\mathcal{A}) := \{(x, u) \mid u \in \mathcal{A}(x)\}$. For a function $\psi : \mathbb{R}^n \to \overline{\mathbb{R}}$, define $\operatorname{dom}(\psi) := \{x \in \mathbb{R}^n \mid \psi(x) < \infty\}$ and its subdifferential set-valued mapping $\partial \psi : \operatorname{dom}(\psi) \rightrightarrows \mathbb{R}^n$, $\partial \psi(x) := \{v \in \mathbb{R}^n \mid \psi(z) \geq \psi(x) + v^{\top}(z - x), \ \forall z \in \operatorname{dom}(\psi)\}$. The projection operator onto a closed set $S \subseteq \mathbb{R}^n$ is defined as $\operatorname{proj}_S(x) := \operatorname{argmin}_{z \in S} \|z - x\|$. The proximal operator is defined by $\operatorname{prox}_{\psi}(x) := \operatorname{argmin}_{z \in \mathbb{R}^n} \psi(z) + \tfrac{1}{2}\|z - x\|^2$. A set-valued mapping $\mathcal{A}$ is $\ell$-Lipschitz continuous, with $\ell > 0$, if $\|u - v\| \leq \ell\,\|x - y\|$ for all $(x, u), (y, v) \in \operatorname{gra}(\mathcal{A})$; $\mathcal{A}$ is (strictly) monotone if $\langle u - v, x - y \rangle \geq (>)\, 0$ holds for all $(x, u), (y, v) \in \operatorname{gra}(\mathcal{A})$, and maximally monotone if there is no monotone operator whose graph strictly contains $\operatorname{gra}(\mathcal{A})$; $\mathcal{A}$ is $\eta$-strongly monotone if $\langle u - v, x - y \rangle \geq \eta\,\|x - y\|^2$ for all $(x, u), (y, v) \in \operatorname{gra}(\mathcal{A})$. $J_{\mathcal{A}} := (\mathrm{Id} + \mathcal{A})^{-1}$ denotes the resolvent mapping of $\mathcal{A}$. $\operatorname{fix}(\mathcal{A})$ and $\operatorname{zer}(\mathcal{A})$ denote the sets of fixed points and of zeros of $\mathcal{A}$, respectively. An operator $T$ is $\alpha$-averaged ($\alpha$-AVG) in $\mathcal{H}_P$, with $\alpha \in (0, 1)$, if $\|T(x) - T(y)\|_P^2 \leq \|x - y\|_P^2 - \frac{1 - \alpha}{\alpha}\,\|(\mathrm{Id} - T)(x) - (\mathrm{Id} - T)(y)\|_P^2$, for all $x, y$; $T$ is nonexpansive (NE) if $1$-AVG; $T$ is firmly nonexpansive (FNE) if $\tfrac{1}{2}$-AVG; $T$ is $\beta$-cocoercive if $\beta T$ is $\tfrac{1}{2}$-AVG (i.e., FNE).
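To make the proximal operator and the firm nonexpansiveness property concrete, the following minimal Python sketch (an illustration, not part of the paper's development) evaluates the proximal operator in closed form for the scalar function $\psi(x) = \gamma|x|$, the well-known soft-thresholding operator, and checks the FNE inequality numerically.

```python
import numpy as np

# Illustrative example (not from the paper): the proximal operator of
# psi(x) = gamma*|x| has the closed form known as soft-thresholding.
def prox_abs(x, gamma=1.0):
    """prox_{gamma|.|}(x) = argmin_z gamma*|z| + 0.5*(z - x)**2."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

# A proximal operator is firmly nonexpansive (1/2-AVG), i.e.,
# ||prox(x) - prox(y)||^2 <= <prox(x) - prox(y), x - y>.
x, y = 3.0, -0.4
px, py = prox_abs(x), prox_abs(y)
assert (px - py) ** 2 <= (px - py) * (x - y)
```

The same check can be repeated for any pair of points, since the FNE inequality holds globally for proximal operators of convex functions.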

## III Mathematical setup and problem formulation

We consider a set $\mathcal{N} := \{1, \dots, N\}$ of agents (or players), where the state (or strategy) of each agent $i \in \mathcal{N}$ is denoted by $x_i \in \mathbb{R}^n$. The set $\Omega_i \subseteq \mathbb{R}^n$ represents all the feasible states of agent $i$, hence it is used to model the local constraints of each agent. Throughout the paper, we assume compactness and convexity of the local constraint sets $\Omega_i$.

###### Standing Assumption 1 (Convexity)

For each $i \in \mathcal{N}$, the set $\Omega_i \subset \mathbb{R}^n$ is non-empty, compact and convex.

We consider rational (or myopic) agents, namely, each agent $i$ aims at minimizing a local cost function $g_i$, which we assume convex and with the following structure.

###### Standing Assumption 2 (Proximal cost functions)

For each $i \in \mathcal{N}$, the strictly-convex function $g_i$ is defined by

$$g_i(x_i, z) := f_i(x_i) + \iota_{\Omega_i}(x_i) + \tfrac{1}{2}\,\|x_i - z\|^2, \tag{1}$$

where $f_i : \mathbb{R}^n \to \overline{\mathbb{R}}$ is a lower semi-continuous and convex function.

We emphasize that Standing Assumption 2 requires neither the differentiability of the local cost function, nor the Lipschitz continuity and boundedness of its gradient. In (1), the function $f_i$ is local to agent $i$ and models the local objective that the player would pursue if no coupling between agents were present. The quadratic term penalizes the distance between the state of agent $i$ and a given $z$. This term is referred to in the literature as regularization (see [17, Ch. 27]), since it leads to a strictly convex $g_i(\cdot, z)$, even though $f_i$ is only lower semi-continuous, see [17, Th. 27.23].
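As an illustration of the best response induced by the cost in (1), consider the hypothetical choice $f_i(y) = \frac{q}{2}\|y - t\|^2$ (a quadratic tracking term) with a box constraint set; both choices are assumptions for this sketch, not the paper's general setting. Since the resulting problem is separable, the constrained minimizer is the component-wise clipping of the unconstrained one.

```python
import numpy as np

def best_response(z, t, q, lo, hi):
    """Best response for the cost in (1) under the illustrative choices
    f_i(y) = (q/2)*||y - t||^2 and Omega_i = [lo, hi]^n (assumptions).
    The unconstrained minimizer of (q/2)||y-t||^2 + (1/2)||y-z||^2 is
    (q*t + z)/(1 + q); by separability, clipping it to the box gives
    the exact constrained argmin."""
    return np.clip((q * t + z) / (1.0 + q), lo, hi)

z = np.zeros(2)          # weighted neighbor average
t = np.ones(2)           # local target of f_i
y = best_response(z, t, q=1.0, lo=-0.3, hi=0.3)   # -> [0.3, 0.3]
```

Here the unconstrained minimizer is $[0.5, 0.5]$, which the box constraint clips to $[0.3, 0.3]$.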

We assume that the agents can communicate through a network structure, described by a weighted digraph. Let us represent the communication links between the agents by a weighted adjacency matrix $A := [a_{i,j}] \in \mathbb{R}^{N \times N}_{\geq 0}$. For all $i, j \in \mathcal{N}$, $a_{i,j}$ denotes the weight that agent $i$ assigns to the state of agent $j$. If $a_{i,j} = 0$, then the state of agent $i$ is independent of that of agent $j$. The set of agents with whom agent $i$ communicates is denoted by $\mathcal{N}_i := \{j \in \mathcal{N} \mid a_{i,j} > 0\}$. The following assumption formalizes the communication network via a digraph and the associated adjacency matrix.

###### Standing Assumption 3 (Row stochasticity and self-loops)

The communication graph is strongly connected. The matrix $A$ is row stochastic, i.e., $a_{i,j} \geq 0$, for all $i, j \in \mathcal{N}$, and $\sum_{j=1}^{N} a_{i,j} = 1$, for all $i \in \mathcal{N}$. Moreover, $A$ has strictly-positive diagonal elements, i.e., $a_{i,i} > 0$ for all $i \in \mathcal{N}$.

In our setup, the variable $z$ in (1) represents the average state among the neighbors of agent $i$, weighted through the adjacency matrix, i.e., $z = \sum_{j=1}^{N} a_{i,j} x_j$. Therefore, the cost function of agent $i$ is $g_i\big(x_i, \sum_{j=1}^{N} a_{i,j} x_j\big)$. Note that a coupling between the agents emerges in the local cost function, due to the dependence on the other agents' strategies. The problem just described can be naturally formalized as a noncooperative network game between the $N$ players, i.e.,

$$\forall i \in \mathcal{N}: \quad \begin{cases} \operatorname*{argmin}_{y \in \mathbb{R}^n} \; f_i(y) + \tfrac{1}{2}\big\|y - \sum_{j=1}^{N} a_{i,j} x_j\big\|^2 \\ \text{s.t. } y \in \Omega_i. \end{cases} \tag{2}$$

We consider rational agents that, at each time instant $k \in \mathbb{N}$, given the states of their neighbors, update their states/strategies according to the following myopic dynamics:

$$x_i(k+1) = \operatorname*{argmin}_{y \in \Omega_i}\; g_i\Big(y, \textstyle\sum_{j=1}^{N} a_{i,j}\,x_j(k)\Big). \tag{3}$$

These dynamics are relatively simple, yet they arise in diverse research areas. Next, we recall some papers in the literature where the dynamics studied are special cases of (3).

1. Opinion dynamics: In [18], the authors study the Friedkin-Johnsen model, which is an extension of DeGroot's model [19]. The update rule in [18, Eq. 1] is effectively the best response of a game with cost functions equal to

$$g_i(x_i, z) := \tfrac{1 - \mu_i}{\mu_i}\,\|x_i - x_i(0)\|^2 + \iota_{[0,1]^n}(x_i) + \|x_i - z\|^2, \tag{4}$$

where $\mu_i \in (0, 1]$ quantifies the (inverse) stubbornness of player $i$. Thus, [18, Eq. 1] is a special case of (3).
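Minimizing the cost in (4) over $x_i$ (with the box constraint inactive) yields exactly the Friedkin-Johnsen update $x_i(k+1) = (1 - \mu_i)\,x_i(0) + \mu_i \sum_j a_{i,j}\,x_j(k)$. A minimal Python sketch of the synchronous dynamics, with purely illustrative numbers, is:

```python
import numpy as np

def fj_step(x, x0, mu, A):
    """One synchronous Friedkin-Johnsen step, i.e., the best response of
    the game with cost (4): x_i(k+1) = (1-mu_i)*x_i(0) + mu_i*(A x(k))_i.
    x, x0: (N,) opinions; mu: (N,) in (0,1]; A: (N,N) row stochastic."""
    return (1.0 - mu) * x0 + mu * (A @ x)

# A strongly connected 3-agent network (all numbers are illustrative).
A = np.array([[0.5, 0.25, 0.25],
              [0.3, 0.4,  0.3 ],
              [0.2, 0.3,  0.5 ]])
x0 = np.array([0.0, 0.5, 1.0])     # initial (prejudice) opinions
mu = np.array([0.6, 0.8, 0.7])     # susceptibilities; 1 - mu_i = stubbornness
x = x0.copy()
for _ in range(200):
    x = fj_step(x, x0, mu, A)
# With partially stubborn agents the opinions settle without consensus.
```

Since $\mu_i < 1$ for all agents, the fixed point cannot be a consensus unless all initial opinions coincide, which illustrates the persistent disagreement typical of this model.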

2. Distributed model fitting: One of the most common tasks in machine learning is model fitting, and in fact several algorithms have been proposed in the literature, e.g., [20, 21]. The idea is to identify the parameters $x$ of a linear model $Ax = b$, where $A$ and $b$ are obtained via experimental data. If there is a large amount of data, i.e., $A$ is a tall matrix, then the distributed counterparts of these algorithms are presented in [22, Sec. 8.2]. In particular, [22, Eq. 8.3] can be rewritten as a constrained version of (3). The cost function is defined by

$$g_i(x_i, z) := \ell_i(A_i x_i - b_i) + r(z),$$

where $A_i$ and $b_i$ represent the $i$-th block of available data, and $x_i$ is the local estimate of the parameters of the model. Here, $\ell_i$ is a loss function and $r$ is the regularization function. Finally, the arising game is subject to the constraint that, at the equilibrium, the local estimates coincide. In Section VI-C, we describe in detail a possible implementation of this type of algorithm.

3. Constrained consensus: if the cost functions of the game in (3) are chosen with $f_i = 0$ for all $i \in \mathcal{N}$, then we retrieve the projected consensus algorithm studied in [10, Eq. 3] to solve the problem of constrained consensus. To achieve convergence to consensus, the additional assumption that the local constraint sets have a non-empty intersection, i.e., $\bigcap_{i \in \mathcal{N}} \Omega_i \neq \varnothing$, is required, see [10, Ass. 1].
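As an illustration of the distributed model-fitting instance above (item 2), the following Python sketch runs the proximal best-response iteration with quadratic losses $\ell_i(\cdot) = \frac{1}{2}\|\cdot\|^2$, no regularizer, and uniform averaging weights; all numerical choices are illustrative assumptions. Each local update then has the closed form $x_i^{+} = (A_i^{\top}A_i + I)^{-1}(A_i^{\top}b_i + z_i)$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N, m = 3, 4, 10               # parameter dim, number of agents, samples per agent
x_true = rng.normal(size=n)      # ground-truth model parameters
A_loc = [rng.normal(size=(m, n)) for _ in range(N)]
b_loc = [Ai @ x_true for Ai in A_loc]   # noiseless local data blocks

W = np.full((N, N), 1.0 / N)     # row stochastic mixing matrix (illustrative)

def local_prox(Ai, bi, z):
    """argmin_x 0.5*||Ai x - bi||^2 + 0.5*||x - z||^2, in closed form."""
    return np.linalg.solve(Ai.T @ Ai + np.eye(Ai.shape[1]), Ai.T @ bi + z)

X = np.zeros((N, n))             # row i is agent i's local estimate
for _ in range(300):
    Z = W @ X                    # weighted neighbor averaging
    X = np.vstack([local_prox(A_loc[i], b_loc[i], Z[i]) for i in range(N)])
# With noiseless data, the local estimates settle at the true parameters.
```

In this noiseless instance the true parameter vector is the unique fixed point of the iteration, so all local estimates agree with it at convergence.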

Next, we introduce the concept of equilibrium of interest in this paper. Informally, we say that a collective vector is an equilibrium of the dynamics (3), and of the game in (2), if no agent can decrease its local cost function by unilaterally changing its strategy to another feasible one. We formalize this concept in the following definition.

###### Definition 1 (Network equilibrium [2, Def. 1])

The collective vector $\bar{x}$ is a network equilibrium (NWE) if, for every $i \in \mathcal{N}$,

$$\bar{x}_i = \operatorname*{argmin}_{y \in \mathbb{R}^n}\; g_i\Big(y, \textstyle\sum_{j=1}^{N} a_{i,j}\,\bar{x}_j\Big). \tag{5}$$

We note that if there are no self-loops in the adjacency matrix, i.e., $a_{i,i} = 0$ for all $i \in \mathcal{N}$, then an NWE corresponds to a Nash equilibrium [2, Remark 1].

In the next section, we study the convergence of the dynamics in (3) to an NWE for a population of myopic agents. In particular, when the dynamics do not converge, we propose alternative ones to recover convergence.

## IV Proximal dynamics

In this section, we study three different types of unconstrained proximal dynamics, namely synchronous, asynchronous and time-varying. While for the former two we can prove convergence to an NWE, the time-varying dynamics do not ensure convergence in general. Thus, we propose a modified version of the dynamics with the same equilibria as the original game.

### IV-A Unconstrained dynamics

As a first step, we rephrase the dynamics in a more compact form by means of the proximity operator. In fact, the dynamics in (3), with cost function as in (1), are equivalent to the following proximal dynamics, where $\bar{f}_i := f_i + \iota_{\Omega_i}$: for all $i \in \mathcal{N}$,

$$x_i(k+1) = \operatorname{prox}_{\bar{f}_i}\Big(\textstyle\sum_{j=1}^{N} a_{i,j}\,x_j(k)\Big), \quad \forall k \in \mathbb{N}. \tag{6}$$

In compact form, they read as

$$x(k+1) = \operatorname{prox}_f\big(\mathbf{A}\,x(k)\big), \tag{7}$$

where the matrix $\mathbf{A} := A \otimes I_n$ represents the interactions among the agents, and the mapping $\operatorname{prox}_f$ is a block-diagonal proximal operator, i.e.,

$$\operatorname{prox}_f\!\left(\begin{bmatrix} z_1 \\ \vdots \\ z_N \end{bmatrix}\right) := \begin{bmatrix} \operatorname{prox}_{\bar{f}_1}(z_1) \\ \vdots \\ \operatorname{prox}_{\bar{f}_N}(z_N) \end{bmatrix}. \tag{8}$$
###### Remark 1

The definition of NWE can be equivalently recast as a fixed-point condition for the mapping in (7). In fact, a collective vector $\bar{x}$ is an NWE if and only if $\bar{x} = \operatorname{prox}_f(\mathbf{A}\,\bar{x})$. Under Standing Assumptions 1 and 2, $\operatorname{fix}(\operatorname{prox}_f \circ \mathbf{A})$ is non-empty [23, Th. 4.1.5], i.e., there always exists an NWE of the game, thus the convergence problem is well posed.
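As a simple instance of the fixed-point iteration (7), consider the constrained consensus case ($f_i = 0$, so $\operatorname{prox}_{\bar{f}_i} = \operatorname{proj}_{\Omega_i}$) with scalar states and interval constraints; the specific intervals and weights below are illustrative assumptions.

```python
import numpy as np

# Projected consensus: the special case of (7) with f_i = 0, so that
# the proximal step reduces to a projection onto each local set.
bounds = [(-1.0, 0.6), (0.4, 2.0), (0.0, 0.6), (0.4, 1.5)]  # intersection [0.4, 0.6]
A = np.array([[0.4, 0.2, 0.2, 0.2],
              [0.2, 0.4, 0.2, 0.2],
              [0.2, 0.2, 0.4, 0.2],
              [0.2, 0.2, 0.2, 0.4]])   # doubly stochastic, with self-loops

x = np.array([-1.0, 2.0, 0.0, 1.5])
for _ in range(500):
    z = A @ x                                            # neighbor averaging
    x = np.array([np.clip(z[i], *bounds[i]) for i in range(4)])
# The states reach consensus on a point of the intersection of the sets.
```

Since the intersection of the intervals is non-empty, the iteration converges to a common value inside it, as predicted by the constrained consensus results recalled above.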

The following lemma is the cornerstone used to prove the convergence of the dynamics in (7) to an NWE. In particular, it shows that a row stochastic matrix $A$ satisfying Standing Assumption 3 is an AVG operator in the Hilbert space weighted by a diagonal matrix $Q$, whose diagonal entries are the elements of the left Perron-Frobenius (PF) eigenvector of $A$, i.e., a vector $q$ s.t. $q^{\top} A = q^{\top}$. In the remainder of the paper, we always consider the normalized PF eigenvector, i.e., $\mathbf{1}^{\top} q = 1$.

###### Lemma 1 (Averageness and left PF eigenvector)

Let Standing Assumption 3 hold true, i.e., let $A$ be row stochastic, let $a_{\min} > 0$ denote its smallest diagonal element, and let $q$ denote its left PF eigenvector. Then, the following statements hold:

1. $A$ is $(1 - a_{\min})$-AVG in $\mathcal{H}_Q$, with $Q := \operatorname{diag}(q)$;

2. The operator $\operatorname{prox}_f \circ\, \mathbf{A}$ is $\tfrac{1}{1 + a_{\min}}$-AVG in $\mathcal{H}_{Q \otimes I_n}$.

If $A$ is doubly-stochastic, the statements hold with $q = \tfrac{1}{N}\mathbf{1}$.

See Appendix -C.

Now, we are ready to present the first result, namely, the global convergence of the proximal dynamics in (7) to an NWE of the game in (2).

###### Theorem 1 (Convergence of proximal dynamics)

For any $x(0) \in \mathbb{R}^{nN}$, the sequence $(x(k))_{k \in \mathbb{N}}$ generated by the proximal dynamics in (7) converges to an NWE of (2).

See Appendix -C.

###### Remark 2

Theorem 1 extends [2, Th. 1], where the matrix $A$ is assumed doubly-stochastic [2, Ass. 1, Prop. 2]. In this case, $\tfrac{1}{N}\mathbf{1}$ is the left PF eigenvector of $A$ and the matrix $Q$ can be chosen as a scaled identity matrix, see Lemma 1.

###### Remark 3

In [2, Sec. VIII-A], the authors study, via simulations, an application of the game in (2) to opinion dynamics. In particular, they conjecture the convergence of the dynamics in the case of a row stochastic weighted adjacency matrix. Theorem 1 theoretically supports the convergence of this class of dynamics.

The presence of self-loops in the communication graph (Standing Assumption 3) is critical for the convergence of the dynamics in (7). Next, we present a simple example of a two-player game in which the dynamics fail to converge due to the lack of self-loops.

###### Example 1

Consider the two-player game in which the state of each agent is scalar and $f_i = 0$, for $i \in \{1, 2\}$. The set $\Omega_i$ is an arbitrarily big compact subset of $\mathbb{R}$, on which the dynamics of the agents are invariant. The communication network is described by the doubly-stochastic adjacency matrix $A = \left[\begin{smallmatrix} 0 & 1 \\ 1 & 0 \end{smallmatrix}\right]$, defining a strongly connected graph without self-loops. In this example, the dynamics in (7) reduce to $x(k+1) = A\,x(k)$, i.e., the two states are swapped at every iteration. Hence, convergence does not take place for any $x(0)$ with $x_1(0) \neq x_2(0)$.
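The oscillation in Example 1 can be verified directly; the sketch below (with $f_i = 0$, so the proximal step is the identity on the invariant set) exhibits the period-2 behavior.

```python
import numpy as np

# Example 1: no self-loops, so Standing Assumption 3 fails. With f_i = 0,
# the proximal step is the identity on the invariant set, and (7) just
# swaps the two scalar states at every iteration: a period-2 oscillation.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
x = np.array([1.0, -1.0])    # any x with x1 != x2
x1 = A @ x                   # states swapped
x2 = A @ x1                  # back to the start: no convergence
```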

If $a_{i,i} = 0$ for some $i \in \mathcal{N}$, i.e., the self-loop requirement in Standing Assumption 3 is not satisfied, then convergence can be restored by relaxing the dynamics in (3) via the so-called Krasnoselskii iteration [24, Sec. 5.2],

$$x(k+1) = (1 - \alpha)\,x(k) + \alpha\operatorname{prox}_f\big(\mathbf{A}\,x(k)\big), \tag{9}$$

where $\alpha \in (0, 1)$. These new dynamics share the same fixed points as (7), which correspond to the NWE of the original game in (2).
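Applied to Example 1, the Krasnoselskii iteration (9) replaces the swap map by $(1-\alpha)I + \alpha A$, whose eigenvalues are $\{1,\, 1-2\alpha\}$, so the period-2 mode is damped for any $\alpha \in (0, 1)$; a minimal sketch:

```python
import numpy as np

# Krasnoselskii relaxation (9) applied to Example 1: the relaxed map
# (1-alpha)*Id + alpha*A has eigenvalues {1, 1-2*alpha}, so for any
# alpha in (0,1) the period-2 mode is damped and the states converge.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
alpha = 0.5
x = np.array([1.0, -1.0])
for _ in range(50):
    x = (1 - alpha) * x + alpha * (A @ x)
# x converges to (0, 0), the average of the initial states.
```

Here the average of the initial conditions spans the eigenspace of the eigenvalue $1$, hence it is the limit of the relaxed iteration.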

###### Corollary 1

For any $x(0) \in \mathbb{R}^{nN}$ and any $\alpha \in (0, 1)$, the sequence generated by the dynamics in (9) converges to an NWE of the game in (2).

See Appendix -C.

### IV-B Asynchronous unconstrained dynamics

The dynamics introduced in (7) assume that all the agents update their strategies synchronously. Here, we study the more realistic case in which the agents behave asynchronously, namely, perform their local updates at different time instants. Thus, at each time instant $k$, only one agent, denoted by $i_k$, updates its state according to (6), while the others keep their states unchanged, i.e.,

$$x_i(k+1) = \begin{cases} \operatorname{prox}_{\bar{f}_i}\big(\sum_{j=1}^{N} a_{i,j}\,x_j(k)\big), & \text{if } i = i_k, \\ x_i(k), & \text{otherwise.} \end{cases} \tag{10}$$

Next, we derive a compact form for the dynamics above. Define $E_i \in \mathbb{R}^{N \times N}$ as the matrix of all zeros except for $[E_i]_{i,i} = 1$, and also $\mathbf{E}_i := E_i \otimes I_n$. Then, we define the set $\mathcal{E} := \{\mathbf{E}_1, \dots, \mathbf{E}_N\}$ as the collection of these matrices. At each time instant $k$, the choice of the agent that performs the update is modelled via an i.i.d. random variable $\zeta^k$, taking values in $\mathcal{E}$. If $\zeta^k = \mathbf{E}_i$, it means that agent $i$ updates at time $k$, while the others do not change their strategies. Given a discrete probability distribution $(p_1, \dots, p_N)$, we define $\mathbb{P}[\zeta^k = \mathbf{E}_i] = p_i > 0$, for all $i \in \mathcal{N}$. With this notation in mind, the dynamics in (7) are modified to model asynchronous updates,

$$x(k+1) = x(k) + \zeta^k\big(\operatorname{prox}_f(\mathbf{A}\,x(k)) - x(k)\big). \tag{11}$$

We remark that the dynamics in (11) represent the natural asynchronous counterpart of those in (7). In fact, the update above is equivalent to the one in (6) for the agent $i_k$ active at time $k$.
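A minimal Python sketch of the randomized rule (10)-(11), with an illustrative three-agent network, $f_i = 0$, box constraints (both assumptions), and no delays:

```python
import numpy as np

rng = np.random.default_rng(1)

# Asynchronous updates (10)-(11), no delays: at each tick, one agent,
# drawn i.i.d. uniformly, replaces its state with its best response.
# Illustrative setting: f_i = 0 and box constraints, so the best
# response is a projected weighted neighbor average.
A = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
lo, hi = -1.0, 1.0
x = np.array([1.0, -1.0, 0.5])
for _ in range(2000):
    i = rng.integers(3)               # active agent i_k
    x[i] = np.clip(A[i] @ x, lo, hi)  # local update; others keep x_j(k)
# Almost surely the states approach a common value, an NWE of this game.
```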

Hereafter, we assume that each agent has a public and private memory. If the player is not performing an update, the strategies stored in the two memories coincide. During an update, instead, the public memory stores the strategy of the agent before the update has started, while in the private one there is the value that is modified during the computations. When the update is completed, the value in the public memory is overwritten by the private one. This assumption ensures that all the reads of the public memory of each agent , performed by each neighbor , are always consistent, see [25, Sec. 1.2] for technical details.

We consider the case in which the computation time of an update is not negligible; therefore, the strategies that agent $i$ reads from its neighbors can be outdated by several time instants. The maximum delay is assumed uniformly upper bounded.

###### Assumption 4 (Bounded maximum delay)

The delays are uniformly upper bounded, i.e., $\sup_{k \in \mathbb{N}} \varphi(k) \leq \bar{\varphi}$, for some $\bar{\varphi} \in \mathbb{N}$.

The dynamics describing the asynchronous update with delays can be then cast in a compact form as

$$x(k+1) = x(k) + \zeta^k\big(\operatorname{prox}_f(\mathbf{A}\,\hat{x}(k)) - \hat{x}(k)\big), \tag{12}$$

where $\hat{x}(k)$ is the vector of possibly delayed strategies. Note that each agent always has access to the updated value of its own strategy, i.e., $\hat{x}_i(k) = x_i(k)$, for all $i \in \mathcal{N}$. We stress that the dynamics in (12) coincide with (11) when no delay is present, i.e., if $\hat{x}(k) = x(k)$.

The following theorem establishes the almost sure convergence of (12) to an NWE of the game in (2), when the maximum delay is small enough.

###### Theorem 2 (Convergence of asynchronous dynamics)

Let Assumption 4 hold true and let the maximum delay $\bar{\varphi}$ satisfy the upper bound in (13). Then, for any $x(0)$, the sequence generated by (12) converges almost surely to some $\bar{x} \in \operatorname{fix}(\operatorname{prox}_f \circ \mathbf{A})$, namely, an NWE of the game in (2).

See Appendix -D.

If the maximum delay does not satisfy (13), then the convergence of the dynamics in (12) is not guaranteed. In this case, convergence can be restored by introducing a time-varying scaling factor $\psi_k$ in the dynamics:

$$x(k+1) = x(k) + \psi_k\,\zeta^k\big(\operatorname{prox}_f(\mathbf{A}\,\hat{x}(k)) - \hat{x}(k)\big). \tag{14}$$

The introduction of this scaling factor, always smaller than $1$, leads to a slower convergence of (14) with respect to (12), since it implies a smaller step size in the update. The next theorem proves that the modified dynamics converge if the scaling factor is chosen small enough.

###### Theorem 3

Let Assumption 4 hold true and let the scaling factor satisfy $0 < \psi_k \leq \bar{\psi}$, with $\bar{\psi} \in (0, 1)$ small enough. Then, for any $x(0)$, the sequence generated by (14) converges almost surely to some $\bar{x} \in \operatorname{fix}(\operatorname{prox}_f \circ \mathbf{A})$, namely, an NWE of the game in (2).

See Appendix -D.

### IV-C Time-varying unconstrained dynamics

A challenging problem related to the dynamics in (7) is studying their convergence when the communication network is time-varying (i.e., the associated adjacency matrix is time dependent). In this case, the update rule in (7) becomes

$$x(k+1) = \operatorname{prox}_f\big(\mathbf{A}(k)\,x(k)\big), \tag{15}$$

where $\mathbf{A}(k) := A(k) \otimes I_n$ and $A(k)$ is the adjacency matrix at time instant $k$.

Next, we assume persistent stochasticity of the sequence $(A(k))_{k \in \mathbb{N}}$, which can be seen as the time-varying counterpart of Standing Assumption 3. Similar assumptions can be found in several other works on time-varying graphs, e.g., [26, Ass. 1], [2, Ass. 4, 5], [10, Ass. 2, 3].

###### Assumption 5 (Persistent row stochasticity and self-loops)

For all $k \in \mathbb{N}$, the adjacency matrix $A(k)$ is row stochastic and describes a strongly connected graph. Furthermore, there exists $\varepsilon > 0$ such that, for all $k \in \mathbb{N}$, the matrix $A(k)$ satisfies $\min_{i \in \mathcal{N}} a_{i,i}(k) \geq \varepsilon$.

The concept of NWE in Definition 1 is bound to the particular communication network considered. In the case of a time-varying communication topology, we focus on a different class of equilibria, namely, those invariant with respect to changes in the communication topology.

###### Definition 2 (Persistent NWE [2, Ass. 3])

A collective vector $\bar{x}$ is a persistent NWE (p-NWE) of (15) if

$$\bar{x} = \operatorname{prox}_f\big(\mathbf{A}(k)\,\bar{x}\big), \quad \forall k \in \mathbb{N}. \tag{16}$$

Next, we assume the existence of a p-NWE.

###### Assumption 6 (Existence of a p-NWE)

The set of p-NWE of (15) is non-empty, i.e., $\Xi := \bigcap_{k \in \mathbb{N}} \operatorname{fix}\big(\operatorname{prox}_f \circ \mathbf{A}(k)\big) \neq \varnothing$.

We note that if the operators $\operatorname{prox}_{\bar{f}_1}, \dots, \operatorname{prox}_{\bar{f}_N}$ have at least one common fixed point, our convergence problem boils down to the setup studied in [27]. In this case, [27, Th. 2] can be applied to the dynamics in (15) to prove convergence to a p-NWE, $\bar{x} = \mathbf{1} \otimes \tilde{x}$, where $\tilde{x}$ is a common fixed point of the proximal operators $\operatorname{prox}_{\bar{f}_i}$'s. However, if this additional assumption is not met, then convergence is not directly guaranteed. In fact, even though at each time instant $k$ the update in (15) describes converging dynamics, the convergence of the overall time-varying dynamics is not guaranteed for every switching signal, see e.g. [28, Part II].

Since $A(k)$ satisfies Standing Assumption 3 for all $k \in \mathbb{N}$, we can apply the same reasoning adopted in the proof of Theorem 1 to show that every mapping $\operatorname{prox}_f \circ \mathbf{A}(k)$ is AVG in a particular space $\mathcal{H}_{\mathbf{Q}(k)}$, where in general $\mathbf{Q}(k)$ is different at different time instants. A priori, there is no common space in which the AVG property holds for all the mappings, thus we cannot infer the convergence of the dynamics in (15) under arbitrary switching signals. In some particular cases, a common space can be found, e.g., if all the adjacency matrices are doubly stochastic, then the time-varying dynamics converge in the space $\mathcal{H}_I$, see [2, Th. 3].

To address the more general case with row stochastic adjacency matrices, we propose modified dynamics with convergence guarantees to a p-NWE, for every switching sequence, i.e.,

$$x(k+1) = \operatorname{prox}_f\big(\big[\mathbf{I} + \mathbf{Q}(k)\,(\mathbf{A}(k) - \mathbf{I})\big]\,x(k)\big). \tag{17}$$

These new dynamics are obtained by replacing $\mathbf{A}(k)$ in (15) with $\mathbf{I} + \mathbf{Q}(k)(\mathbf{A}(k) - \mathbf{I})$, where $\mathbf{Q}(k)$ is chosen as in Theorem 1. Remarkably, this key modification makes the resulting operators averaged in the same space, i.e., $\mathcal{H}_I$, for all $A(k)$ satisfying Standing Assumption 3, as explained in Appendix -E. Moreover, the change of dynamics does not lead to extra communications between the agents, since the matrix $\mathbf{Q}(k)$ is block diagonal.

The following theorem represents the main result of this section and shows that the modified dynamics in (17), subject to arbitrary switching of communication topology, converge to a p-NWE of the game in (15), for any initial condition.

###### Theorem 4 (Convergence of time-varying dynamics)

Let Assumptions 5 and 6 hold true. Then, for any $x(0)$, the sequence generated by (17) converges to a point $\bar{x} \in \Xi$, with $\Xi$ as in (16), namely, a p-NWE of (15).

See Appendix -E.

We clarify that, in general, the computation of the left PF eigenvector associated to each $A(k)$ requires global information on the communication network. Therefore, this solution is suitable for the case of switching within a finite set of adjacency matrices, for which the associated matrices $Q(k)$ can be computed offline.

Nevertheless, for some network structures the PF eigenvector is known, or it can be explicitly computed locally. For example, if the matrix $A$ is symmetric, hence doubly-stochastic, then $q = \tfrac{1}{N}\mathbf{1}$; moreover, if each agent knows the weights that its neighbours assign to the information it communicates, then the $i$-th component of $q$ can be computed locally [29, Prop. 1.68]. Finally, if each agent has the same out- and in-degree, denoted by $d_i$, and the weights in the adjacency matrix are chosen as $a_{i,j} = 1/d_i$, then the left PF eigenvector is proportional to $\operatorname{col}(d_1, \dots, d_N)$. In other words, in this case each agent must only know its out-degree to compute its component of $q$.
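When no such structure is available, the left PF eigenvector can be computed offline by power iteration on $A^{\top}$, as in the following sketch (the matrix is an illustrative example):

```python
import numpy as np

def left_pf_eigenvector(A, iters=1000):
    """Normalized left Perron-Frobenius eigenvector of a row stochastic,
    strongly connected matrix with self-loops: q^T A = q^T, sum(q) = 1.
    Computed here by power iteration on A^T (a centralized, offline
    procedure; only special structures admit a local computation)."""
    q = np.full(A.shape[0], 1.0 / A.shape[0])
    for _ in range(iters):
        q = A.T @ q
        q /= q.sum()
    return q

# Illustrative row stochastic matrix with self-loops, strongly connected.
A = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.4, 0.0, 0.6]])
q = left_pf_eigenvector(A)    # satisfies q @ A == q up to round-off
```

Self-loops make the chain aperiodic, so the power iteration converges geometrically to the unique normalized PF eigenvector.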

## V Proximal dynamics under coupling constraints

### V-A Problem formulation

In this section, we consider a more general setup in which the agents are subject not only to local constraints, but also to affine separable coupling constraints. Thus, let us consider the collective feasible decision set

$$\mathcal{X} := \Omega \cap \{x \in \mathbb{R}^{nN} \mid Cx \leq c\}, \tag{18}$$

where $C \in \mathbb{R}^{M \times nN}$ and $c \in \mathbb{R}^{M}$. For every agent $i$, the set of points satisfying the coupling constraints reads as

$$\mathcal{X}_i(x_{-i}) := \Big\{ y \in \mathbb{R}^n \;\Big|\; C_i\,y + \textstyle\sum_{j=1,\,j\neq i}^{N} C_j\,x_j \leq c \Big\}. \tag{19}$$
###### Standing Assumption 7

For all $i \in \mathcal{N}$, the local feasible decision set $\mathcal{X}_i(x_{-i})$ satisfies Slater's condition.

The original game formulation in (2) changes to account also for the coupling constraints, i.e.,

$$\forall i \in \mathcal{N}: \quad \begin{cases} \operatorname*{argmin}_{y \in \mathbb{R}^n} \; g_i\big(y, \sum_{j=1}^{N} a_{i,j} x_j\big) \\ \text{s.t. } y \in \mathcal{X}_i(x_{-i}). \end{cases} \tag{20}$$

Hence, the corresponding myopic dynamics read as

$$x_i(k+1) = \operatorname*{argmin}_{y \in \mathcal{X}_i(x_{-i}(k))}\; g_i\Big(y, \textstyle\sum_{j=1}^{N} a_{i,j}\,x_j(k)\Big). \tag{21}$$

The concept of NWE (Definition 1) can be naturally extended for the case of network games with coupling constraints, as formalized next.

###### Definition 3 (Generalized Network Equilibrium)

A collective vector $\bar{x}$ is a generalized network equilibrium (GNWE) for the game in (20) if, for every $i \in \mathcal{N}$,

$$\bar{x}_i = \operatorname*{argmin}_{y \in \mathcal{X}_i(\bar{x}_{-i})}\; g_i\Big(y, \textstyle\sum_{j=1}^{N} a_{i,j}\,\bar{x}_j\Big).$$

The following example shows that the dynamics in (3) fail to converge even for simple coupling constraints.

###### Example 2

Consider a 2-player game, defined as in (20), where, for $i \in \{1, 2\}$, the state $x_i$ is scalar and the local feasible decision set is defined by the coupling constraint. The game is jointly convex, since the collective feasible decision set is a convex set. The parallel myopic best response dynamics are described in closed form by the discrete-time linear system

$$\begin{bmatrix} x_1(k+1) \\ x_2(k+1) \end{bmatrix} = \begin{bmatrix} 0 & -1 \\ -1 & 0 \end{bmatrix} \begin{bmatrix} x_1(k) \\ x_2(k) \end{bmatrix}, \tag{22}$$

which is not globally convergent, e.g., consider any $x(0)$ with $x_1(0) = x_2(0) \neq 0$.
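The non-convergence in Example 2 is immediate from the eigenvalues $\{1, -1\}$ of the matrix in (22); a two-step check:

```python
import numpy as np

# Example 2: the closed-loop matrix in (22) has eigenvalues {1, -1};
# the -1 eigenvalue sustains a period-2 oscillation, so the parallel
# myopic best responses do not converge globally.
M = np.array([[ 0.0, -1.0],
              [-1.0,  0.0]])
x = np.array([1.0, 1.0])     # eigenvector of the eigenvalue -1
x1 = M @ x                   # (-1, -1)
x2 = M @ x1                  # (1, 1): back to the start
```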

The myopic dynamics of the agents may thus fail to converge; hence, we rephrase them into analogous pseudo-collaborative ones. In fact, the players aim to minimize their local cost functions while, at the same time, coordinating with the other agents to satisfy the coupling constraints. Toward this aim, we first dualize the problem and transform it into an auxiliary (extended) network game [30, Ch. 3]. Finally, we design a semi-decentralized iterative algorithm that ensures the convergence of these dynamics to a GNWE.

Let us now introduce the dual variable $\lambda_i \in \mathbb{R}^{M}_{\geq 0}$ for each player $i \in \mathcal{N}$, and define the concept of extended network equilibrium arising from this new problem structure.

###### Definition 4 (Extended Network Equilibrium)

The pair $(\bar{x}, \bar{\lambda})$, with $\bar{\lambda} := \operatorname{col}(\bar{\lambda}_1, \dots, \bar{\lambda}_N)$, is an Extended Network Equilibrium (ENWE) for the game in (20) if, for every $i \in \mathcal{N}$, it satisfies

$$\begin{aligned} \bar{x}_i &= \operatorname*{argmin}_{y \in \mathbb{R}^n}\; g_i\Big(y, \textstyle\sum_{j=1}^{N} a_{i,j}\,\bar{x}_j\Big) + \bar{\lambda}_i^{\top} C_i\, y, &&\text{(23a)} \\ \bar{\lambda}_i &= \operatorname*{argmin}_{\xi \in \mathbb{R}^{M}_{\geq 0}}\; -\xi^{\top}\big(C\bar{x} - c\big). &&\text{(23b)} \end{aligned}$$

To find an ENWE of the game in (20), we assume the presence of a central coordinator. It broadcasts to all agents an auxiliary variable $\sigma \in \mathbb{R}^{M}_{\geq 0}$ that each agent uses to compute its local dual variable. Namely, every agent $i$ applies a scaling factor $\alpha_i > 0$ to $\sigma$ to attain its dual variable, i.e., $\lambda_i = \alpha_i\,\sigma$. The values of the scaling factors represent a split of the burden that the agents experience to satisfy the constraints, hence $\sum_{i=1}^{N} \alpha_i = 1$. These game equilibrium problems were studied for the first time in the seminal work of Rosen [31], where the author introduces the notion of normalized equilibrium. We specialize this equilibrium concept for the problem at hand, introducing the notion of normalized extended network equilibrium (n-ENWE).

###### Definition 5 (normalized-ENWE)

The pair $(\bar{x}, \bar{\sigma})$ is a normalized Extended Network Equilibrium (n-ENWE) for the game in (20) if, for all $i \in \mathcal{N}$, it satisfies

$$\begin{aligned} \bar{x}_i &= \operatorname*{argmin}_{y \in \mathbb{R}^n}\; g_i\Big(y, \textstyle\sum_{j=1}^{N} a_{i,j}\,\bar{x}_j\Big) + \alpha_i\,\bar{\sigma}^{\top} C_i\, y, &&\text{(24a)} \\ \bar{\sigma} &= \operatorname*{argmin}_{\varsigma \in \mathbb{R}^{M}_{\geq 0}}\; -\varsigma^{\top}\big(C\bar{x} - c\big), &&\text{(24b)} \end{aligned}$$

where $\bar{\lambda}_i = \alpha_i\,\bar{\sigma}$, for all $i \in \mathcal{N}$.

From (24a)–(24b), one can see that the class of n-ENWE is a particular instance of ENWE and, at the same time, a generalization of the case in which all the agents adopt the same dual variable, i.e., $\alpha_i = 1/N$ for all $i \in \mathcal{N}$. This latter case is widely studied in the literature, since it describes a fair split between the agents of the burden of satisfying the constraints [32, 33].

Next, we recast the n-ENWE described by the two inclusions (24a)–(24b) as a fixed point of a suitable mapping. In fact, (24a) can be equivalently rewritten as $\bar{x} = \operatorname{prox}_f\big(\mathbf{A}\,\bar{x} - \Lambda C^{\top}\bar{\sigma}\big)$, where $\Lambda := \operatorname{diag}(\alpha_1, \dots, \alpha_N) \otimes I_n$, while (24b) holds true if and only if $\bar{\sigma} = \operatorname{proj}_{\mathbb{R}^{M}_{\geq 0}}\big(\bar{\sigma} + C\bar{x} - c\big)$.

To combine (24a) and (24b) into a single operator, we first introduce two mappings, i.e.,

$$\mathcal{R} := \operatorname{diag}\big(\operatorname{prox}_f, \; \operatorname{proj}_{\mathbb{R}^{M}_{\geq 0}}\big) \tag{25}$$

and the affine mapping $\mathcal{G}$ defined as

$$\mathcal{G}(\cdot) := G\,\cdot - \begin{bmatrix} 0 \\ c \end{bmatrix}, \quad \text{with } G := \begin{bmatrix} \mathbf{A} & -\Lambda C^{\top} \\ C & I \end{bmatrix}. \tag{26}$$

The composition of these two operators provides a compact characterization of an n-ENWE for the game in (20). In fact, a point $\bar{\omega} := \operatorname{col}(\bar{x}, \bar{\sigma})$ is an n-ENWE if and only if $\bar{\omega} \in \operatorname{fix}(\mathcal{R} \circ \mathcal{G})$; indeed, the explicit computation of $\mathcal{R}(\mathcal{G}(\bar{\omega}))$ leads directly to (24a) and (24b).

The following lemmas show that an n-ENWE is also a GNWE, by proving that a fixed point of $\mathcal{R} \circ \mathcal{G}$ defines a GNWE.

###### Lemma 2 (GNWE as fixed point)

Let Standing Assumption 7 hold true. Then, the following statements are equivalent:

1. $\bar{x}$ is a GNWE for the game in (20);

2. there exists $\bar{\sigma} \in \mathbb{R}^{M}_{\geq 0}$ such that $\operatorname{col}(\bar{x}, \bar{\sigma}) \in \operatorname{fix}(\mathcal{R} \circ \mathcal{G})$.

###### Lemma 3 (Existence of a GNWE)

Let Standing Assumption 7 hold true. Then, there always exists a GNWE for the game in (20).

The two lemmas above are direct consequences of [2, Lem. 2] and [2, Prop. 2], respectively, hence the proofs are omitted.

### V-B Algorithm derivation (Prox-GNWE)

In this section, we derive an iterative algorithm, named Prox-GNWE, seeking an n-ENWE of the game in (20). The set of fixed points of the operator $\mathcal{R}\circ\mathcal{G}$ can be expressed as the set of zeros of other suitable operators, as formalized next.

###### Lemma 4 ([24, Prop. 26.1 (iv)])

Let , with . Then,

Several algorithms are available in the literature to find a zero of the sum of two monotone operators, the most common being presented in [17, Ch. 26]. We opted for a variant of the preconditioned proximal-point algorithm (PPP) that allows for a non-self-adjoint preconditioning [34, Eq. 4.18]. It results in a proximal type update with inertia that resembles the original myopic dynamics studied. The resulting algorithm, here called Prox-GNWE, is described in (27a) – (27d), where a compact notation is adopted for all the variables introduced. It is composed of three main steps: a local gradient descent (27a), performed by each agent, a gradient ascent (27b), performed by the coordinator, and finally an inertia step (27c) – (27d).

The parameters of Prox-GNWE are chosen such that the following inequalities are satisfied:

$$r_i - a_{i,i} < \delta_i^{-1} \leq \gamma^{-1} - r_i - a_{i,i}, \quad \forall i \in \mathcal{N}, \tag{28}$$
$$p_j < \beta \leq \gamma^{-1} - p_j,$$

where

$$r_i := \tfrac{1}{2}\sum_{\substack{j=1 \\ j\neq i}}^{N}\big(a_{i,j} + a_{j,i}\big) + (1+\alpha_i)\,\|C_i^\top\|_\infty, \qquad p_j := (1+\alpha_i)\max_{j\in\{1,\dots,N\}}\|C_j^\top\|_\infty. \tag{29}$$

We note that, if $\gamma$ is chosen small enough, then the right inequalities in (28) always hold. On the other hand, $\gamma$ is the step size of the algorithm, thus a smaller value can affect the convergence speed. The formal derivation of these bounds is reported in Appendix -F.
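The quantities in (29) and the admissibility of a parameter choice against (28) are easy to check programmatically. The sketch below uses invented weights $a_{i,j}$, scaling factors $\alpha_i$ and constraint blocks $C_i$, and reads the second formula in (29) as using the largest scaling factor (our own interpretation of the printed index); it only illustrates how a feasible choice of $\delta_i$ and $\beta$ can be automated for a given small $\gamma$.

```python
import numpy as np

# Invented problem data, for illustration only.
N = 3
A = np.array([[0.0, 0.2, 0.1],
              [0.2, 0.0, 0.3],
              [0.1, 0.3, 0.0]])          # weights a_{i,j}
alpha = np.array([0.5, 0.3, 0.2])        # scaling factors alpha_i
C_blocks = [np.array([[1.0], [0.5]]),
            np.array([[1.0], [0.0]]),
            np.array([[0.0], [1.0]])]    # M x 1 blocks C_i (M = 2 constraints)

# r_i as in (29): half the symmetrized off-diagonal weights plus the norm term.
r = np.zeros(N)
for i in range(N):
    off = 0.5 * sum(A[i, j] + A[j, i] for j in range(N) if j != i)
    r[i] = off + (1 + alpha[i]) * np.linalg.norm(C_blocks[i].T, ord=np.inf)

# p as in (29), with the largest alpha_i (an assumption on the printed formula).
p = (1 + alpha.max()) * max(np.linalg.norm(Ci.T, ord=np.inf) for Ci in C_blocks)

gamma = 0.05                             # small step size: right inequalities hold
a_diag = np.diag(A)
# Pick delta_i^{-1} and beta at the midpoints of the intervals in (28).
delta_inv = (r - a_diag + (1 / gamma - r - a_diag)) / 2
beta = (p + (1 / gamma - p)) / 2
print(r, p, delta_inv, beta)
```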

We also note that (27a) – (27d) require two communications between an agent and its neighbours: one to obtain the values needed in (27a), and a second to gather the updated quantities of all neighbours in (27c). Finally, we conclude the section by establishing global convergence of the sequence generated by Prox-GNWE to a GNWE of the game in (20).

###### Theorem 5 (Convergence Prox-GNWE)

Set the remaining parameters, for all $i \in \mathcal{N}$, as in Theorem 1, and choose $\gamma$, $\delta_i$ and $\beta$ satisfying (28). Then, for any initial condition, the sequence generated by (27a) – (27d) converges to a GNWE of the game in (20).

See Appendix -G.

## Vi Numerical simulations

### Vi-a Synchronous/asynchronous Friedkin and Johnsen model

As mentioned in Section III, the problem studied in this paper arises often in the field of opinion dynamics. In this first simulation, we consider the standard Friedkin and Johnsen model, introduced in [35]. The strategy of each player represents its opinion on a set of independent topics at each time instant. An opinion is represented by a value in a bounded interval: the opinion attains the upper bound if the agent completely agrees with the topic, and the lower bound if it completely disagrees. Each agent is stubborn with respect to its initial opinion, and a stubbornness parameter defines how much its opinion is bound to it: at one extreme of the parameter range the player is a follower, at the other it is fully stubborn. In the following, we present the evolution of the synchronous and asynchronous dynamics (7) and (12), respectively, where the cost function is as in (4).
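A minimal synchronous implementation of the Friedkin and Johnsen update can clarify the role of the stubbornness parameter. The sketch below assumes the classical form $x(k+1) = \Lambda W x(k) + (I - \Lambda) x(0)$, with a row-stochastic influence matrix $W$ and stubbornness weights $\lambda_i \in [0,1]$ of our own choosing; it is not the exact instance simulated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 8                                   # number of agents (illustrative)
W = rng.random((N, N))
W /= W.sum(axis=1, keepdims=True)       # row-stochastic influence weights
lam = np.full(N, 0.5)                   # lambda_i = 1: follower, lambda_i = 0: fully stubborn
x0 = rng.random(N)                      # random initial opinions in [0, 1]

x = x0.copy()
for _ in range(200):                    # synchronous Friedkin-Johnsen iterations
    x = lam * (W @ x) + (1 - lam) * x0

# The limit solves (I - diag(lam) W) x = (I - diag(lam)) x0.
x_star = np.linalg.solve(np.eye(N) - np.diag(lam) @ W, (1 - lam) * x0)
print(np.max(np.abs(x - x_star)))
```

Since every update is a convex combination of current and initial opinions, the iterates remain in the opinion range, and the half-stubborn choice makes the map a contraction, so the iteration converges geometrically to the closed-form limit.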

We considered a population of agents discussing a number of independent topics. The communication network and the weights that each agent assigns to its neighbours are randomly drawn, with the only constraint of satisfying Assumption 3. The initial condition is also a random vector. Half of the players are rather stubborn, while the remaining ones are more inclined to follow the opinion of the others. For the asynchronous dynamics, we consider four scenarios.

1. There is no delay in the information and the probability of update is uniform among the players.

2. There is no delayed information and the update probability is defined according to a random probability vector. In particular, we consider the case of a very skewed distribution, to highlight the differences with respect to (A1) and (A3).

3. We consider a uniform probability of update and a maximum delay of two time instants. The value of the maximum delay is chosen in order to fulfil condition (13).

4. We consider the same setup as in (A3), but with a larger maximum delay. Notice that the theoretical results do not guarantee the convergence of these dynamics.
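The asynchronous scenarios above can be reproduced with a simple randomized scheme: at every tick one agent, drawn according to the update probabilities, recomputes its opinion using neighbour values that may be outdated by at most $D$ steps. The sketch below is only a schematic of the delayed update, with an assumed Friedkin and Johnsen cost, uniform probabilities and $D = 2$, mirroring (A1)/(A3); it is not the paper's exact dynamics (12).

```python
import numpy as np

rng = np.random.default_rng(1)

N, D = 8, 2                              # agents and maximum delay (illustrative)
W = rng.random((N, N))
W /= W.sum(axis=1, keepdims=True)        # row-stochastic influence weights
lam = np.full(N, 0.5)                    # stubbornness parameters
x0 = rng.random(N)

history = [x0.copy()]                    # past iterates, for delayed reads
x = x0.copy()
for _ in range(5000):
    i = rng.integers(N)                  # uniform update probability
    # read a state that is outdated by a random amount in {0, ..., D}
    delayed = history[-1 - rng.integers(0, D + 1)] if len(history) > D else history[0]
    # agent i updates with the (possibly) outdated neighbour opinions
    x[i] = lam[i] * (W[i] @ delayed) + (1 - lam[i]) * x0[i]
    history.append(x.copy())
    history = history[-(D + 1):]         # keep only the last D + 1 iterates

x_star = np.linalg.solve(np.eye(N) - np.diag(lam) @ W, (1 - lam) * x0)
print(np.max(np.abs(x - x_star)))
```

Because each local update is a max-norm contraction and the delay is bounded, the randomized iteration still approaches the same fixed point as the synchronous dynamics, consistently with the bounded-delay condition (13).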

In Figure 1 it can be seen that the convergence of the synchronous dynamics and of the asynchronous ones in scenarios (A2) and (A3) is similar. The delay affects the convergence speed if it surpasses the theoretical upper bound in (13), as in (A4). It is worth noticing that even a large delay does not seem to lead to instability, although it can produce a sequence that does not converge monotonically to the equilibrium; see the zoom in Fig. 1. The slowest convergence is obtained in the case of a non-uniform probability of update, i.e., (A2). From the simulations, it seems that a uniform update probability always produces the fastest convergence. In fact, the presence of agents updating with a low probability produces plateaus in the convergence of the dynamics.

### Vi-B Time-varying DeGroot model with bounded confidence

In this section, we consider a modified, time-varying version of the DeGroot model [19], in which each agent is willing to change its opinion only up to a certain extent. This model can describe the presence of extremists in a discussion. In the DeGroot model, the cost function of the game in (2) reduces to a pure consensus term for all agents. We consider agents discussing a single topic, hence the state of each agent is a scalar. The agents belong to three different categories, which we call positive extremists, negative extremists and neutralists. This classification is based on the local feasible decision set.

• Positive (negative) extremists: the agents agree (disagree) with the topic and are not open to drastically changing their opinion, so their feasible set is an interval close to the upper (lower) end of the opinion range.

• Neutralists: the agents do not have a strong opinion on the topic, so their opinions can vary over the whole opinion range.

Our setup is composed of two negative extremists (agents 1 and 2), four neutralists (agents 3–6) and two positive extremists (agents 7 and 8). We assume that, at each time instant, the agents can switch between three different communication network topologies, depicted in Figure 2.

The dynamics in (17) can be rewritten, for a single agent $i$, as

$$x_i(k+1) = \operatorname{proj}_{\Omega_i}\Big((1 - q_i(k))\,x_i(k) + q_i(k)\,[A(k)]_i\, x(k)\Big).$$

From this formulation, it is clear how the modification of the original dynamics (15) results in an inertia of the players to change their opinion.
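A compact way to visualise these dynamics is to iterate the projected update above on a fixed topology. In the sketch below, the opinion range, the interval constraints of the extremists and neutralists, the chain-shaped weight matrix and the inertia $q_i = 0.5$ are all our own assumptions; the paper's simulation additionally switches between the three topologies of Figure 2.

```python
import numpy as np

N = 8
# Assumed local feasible sets Omega_i = [lo_i, hi_i]:
# negative extremists (agents 1-2), neutralists (3-6), positive extremists (7-8).
lo = np.array([0.00, 0.00, 0.0, 0.0, 0.0, 0.0, 0.75, 0.75])
hi = np.array([0.25, 0.25, 1.0, 1.0, 1.0, 1.0, 1.00, 1.00])

# Row-stochastic weights on a chain: self-weight 0.4, rest split among neighbours.
A = np.zeros((N, N))
for i in range(N):
    nbrs = [j for j in (i - 1, i + 1) if 0 <= j < N]
    A[i, i] = 0.4
    for j in nbrs:
        A[i, j] = 0.6 / len(nbrs)

q = 0.5                                  # inertia of the players
x = np.array([0.0, 0.0, 0.5, 0.5, 0.5, 0.5, 1.0, 1.0])   # initial opinions
for _ in range(5000):
    # projected update with inertia, as in the display above (static topology)
    x = np.clip((1 - q) * x + q * (A @ x), lo, hi)

print(np.round(x, 2))
```

With this topology the extremists settle on the boundaries of their feasible sets while the neutralists interpolate linearly between them, qualitatively matching the equilibrium pattern reported below.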

We have verified numerically that Assumption 6 is satisfied and that the unique p-NWE of the game is

$$\bar{x} = [0.25,\; 0.25,\; 0.44,\; 0.58,\; 0.49,\; 0.41,\; 0.75,\; 0.75].$$