1 Introduction
Nash equilibrium problems (NEPs) have been widely studied since their first formulation [1]. In a NEP, a set of agents interacts with the aim of minimizing their cost functions. Many results are available concerning existence and uniqueness of an equilibrium, as well as methodologies to compute one [2, 3]. In this case, the interactions between the agents are expressed only through the cost functions. To gain generality and realism, coupling constraints have been considered as well, starting with the introduction of so-called generalized Nash equilibrium problems (GNEPs) [4]. This class of games has been recently studied by the multi-agent system and control community [5, 6, 7, 8, 9, 10]. One main reason for this interest is the range of possible applications, from economics to engineering and operations research [6, 11].
The defining feature of GNEPs is that each agent seeks to minimize its own cost function under some joint feasibility constraints. Namely, both the cost function and the constraints depend on the strategies chosen by the other agents. Consequently, the search for a generalized Nash equilibrium (GNE) is usually very difficult.
Similarly to NEPs, a number of results are available concerning algorithms and methodologies to find an equilibrium in a GNEP [12, 13]. In the deterministic case, many algorithms are available to find a solution, both distributed and semi-decentralized [5, 14, 15]. Among the possible methods to reach an equilibrium, an elegant approach is to seek a solution of the associated variational inequality (VI) [13].
To recast the problem as a VI, the Karush–Kuhn–Tucker (KKT) conditions are considered and the problem is rewritten as a monotone inclusion. Such a problem can then be solved via operator splitting techniques. Among others, we focus on the forward–backward (FB) splitting, which leads to one of the fastest and simplest algorithms available [16].
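As a sketch, the FB iteration for a VI over a convex set alternates a gradient (forward) step with a projection (backward) step. The affine mapping and the box constraint below are illustrative toy choices, not the operators of the algorithm studied in this paper:

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n (the backward step)."""
    return np.clip(x, lo, hi)

def forward_backward(F, x0, gamma, lo, hi, iters=2000):
    """Iterate x <- proj_C(x - gamma * F(x)): forward (gradient) step,
    then backward (resolvent/projection) step."""
    x = x0.copy()
    for _ in range(iters):
        x = project_box(x - gamma * F(x), lo, hi)
    return x

# Toy strongly monotone affine mapping F(x) = M x + q (illustrative data).
M = np.array([[2.0, 0.5], [0.5, 1.0]])
q = np.array([-1.0, -1.0])
F = lambda x: M @ x + q

x_star = forward_backward(F, np.zeros(2), gamma=0.3, lo=0.0, hi=10.0)
# At a VI solution the natural residual x - proj_C(x - F(x)) vanishes.
residual = np.linalg.norm(x_star - project_box(x_star - F(x_star), 0.0, 10.0))
```

The step size here is chosen below twice the cocoercivity constant of the toy mapping, which is what guarantees convergence of the scheme.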
The downside of the FB scheme is that, when applied to GNEPs, it is not distributed. In a game-theoretic setup, it is desirable (and more realistic) to consider distributed algorithms, in the sense that each agent should only know its local cost function and its local constraints. For this reason, preconditioning has been recently introduced in [5]. In [17], we propose a preliminary extension of this method to the stochastic case.
A stochastic NEP (SNEP) is a NEP where the cost functions are expected value functions. Such problems arise when there is uncertainty, expressed through a random variable with an unknown distribution. Unfortunately, SNEPs have not been studied as much as their deterministic counterpart, even though many practical problems must be modelled with uncertainty. For instance, in transportation systems, a possible source of uncertainty is the drivers' perception of travel time
[18]; in electricity markets, companies produce energy without knowing the demand exactly in advance [19]. Similarly to the deterministic case, if we also consider shared constraints, the problem can be modelled as a stochastic GNEP (SGNEP) [20]. For instance, any networked Cournot game with market capacity constraints and uncertainty in the demand can be modelled in this way [21, 22]. Due to their wide applicability, SGNEPs have received considerable attention from the control community as well [23, 24, 25]. A SGNEP is a GNEP with expected value cost functions. Indeed, if the distribution of the random variable is known, the expected value formulation can be solved with any standard technique for deterministic variational inequalities. However, the pseudogradient is usually not directly accessible, for instance due to the excessive computation needed to evaluate the expected value. For this reason, in many situations, the search for a solution of a stochastic VI (SVI) relies on samples of the random variable. Essentially, two main methodologies are available: sample average approximation (SAA) and stochastic approximation (SA). In the SAA approach, we replace the expected value with the average over an increasing number of samples of the random variable. This approach is practical when a large amount of data is available as, for instance, in Monte Carlo simulations or machine learning
[26, 27]. In the SA approach, each agent samples only one realization of the random variable. This approach is less computationally expensive but, not surprisingly, it usually requires stronger assumptions on the mappings involved [24, 28, 29]. One of the very first SA formulations for a stochastic FB algorithm was given in [30], under the assumption of strong monotonicity and Lipschitz continuity of the mapping involved. In [31], instead, convergence is proved under cocoercivity and uniform monotonicity. To weaken the assumptions, algorithms more involved than the FB have been proposed and studied in the literature. For instance, in a recent paper [26], the authors propose a forward-backward-forward (FBF) algorithm that converges to a solution under the assumption of a pseudomonotone pseudogradient mapping, but it requires two costly evaluations of the pseudogradient per iteration. Alternatively, under the same assumptions, one can use the extragradient (EG) method proposed in [27], which takes two projection steps per iteration and can therefore be slow. In short, weaker assumptions come at the price of higher computational complexity and slower algorithms.
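The difference between the two sampling schemes can be illustrated on a toy mapping; the sampled function and the distribution below are our own illustrative choices, not taken from any referenced work:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pseudogradient with uncertainty: F(x, xi) = xi * x with xi ~ N(2, 1),
# so the expected-value mapping is E[F(x, xi)] = 2 * x.
def sampled_map(x, xi):
    return xi * x

x = 1.5
exact = 2.0 * x

# SAA: average over a large batch of i.i.d. samples (data-rich setting).
batch = rng.normal(2.0, 1.0, size=10_000)
saa_estimate = sampled_map(x, batch).mean()

# SA: a single sample per evaluation -- cheap but noisy, which is why SA
# schemes typically need stronger assumptions on the mappings involved.
sa_estimate = sampled_map(x, rng.normal(2.0, 1.0))
```

The SAA estimate concentrates around the exact value as the batch grows, while the single-sample SA estimate retains the full variance of the noise.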
In this paper, we present a FB algorithm and prove its convergence both for SGNEPs and SNEPs. In particular, our main contributions are the following:

We present the first preconditioned FB algorithm for SGNEPs with nonsmooth cost functions and prove its almost sure convergence to an equilibrium.

For SNEPs, the FB algorithm converges almost surely under restricted cocoercivity or restricted strict monotonicity, also with the SA scheme (Section 6).
Our assumptions are weaker when compared to the current literature on FB algorithms. Indeed, the preconditioning technique is presented in [5, 14] under strong monotonicity, while FB algorithms for stochastic VIs converge almost surely for cocoercive and uniformly monotone [31] or strongly monotone operators [30, 32]. Moreover, compared to the FBF and EG algorithms for merely monotone operators, our algorithm shows faster convergence both in the number of iterations and in computational time.
We point out that, in both cases, we suppose the pseudogradient to be Lipschitz continuous, but knowledge of the Lipschitz constant is not necessary. This is remarkable since computing the Lipschitz constant can be challenging in a distributed setup.
A preliminary study related to this work has been submitted for publication [17]. In that paper, we considered a SGNEP and built a preconditioned FB algorithm with damping. The algorithm is guaranteed to reach a SGNE if the pseudogradient mapping is strongly monotone, and its convergence follows directly from [31]. Here we show in detail that the uniform monotonicity assumption taken in [31] can be replaced with assumptions related to specific properties of the selected approximation schemes. Moreover, restricted cocoercivity suffices for the analysis and to ensure convergence.
2 Notation and preliminaries
We use Standing Assumptions to postulate technical conditions that implicitly hold throughout the paper, while plain Assumptions are postulated only where explicitly invoked.
denotes the set of real numbers and . denotes the standard inner product and represents the associated Euclidean norm. We indicate a (symmetric and) positive definite matrix , i.e., , with . Given a matrix , we denote the induced inner product, . The associated induced norm, , is defined as . indicates the Kronecker product between matrices and .
indicates the vector with
entries all equal to . Given vectors , is the resolvent of the operator , where indicates the identity operator. The set of fixed points of is . For a closed set , the mapping denotes the projection onto , i.e., . The residual mapping is, in general, defined as . Given a proper, lower semicontinuous, convex function , the subdifferential is the operator . The proximal operator is defined as . is the indicator function of the set , i.e., if and otherwise. The set-valued mapping denotes the normal cone operator for the set , i.e., if , otherwise.
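Two of these operators admit simple closed forms that are worth recording numerically; the box set and the weighted l1 penalty below are illustrative choices:

```python
import numpy as np

# The prox of the indicator function of a closed convex set C reduces to the
# Euclidean projection onto C; the prox of lam * ||.||_1 is componentwise
# soft-thresholding. Minimal closed-form sketches:

def prox_indicator_box(x, lo, hi):
    """prox of the indicator of C = [lo, hi]^n: the projection (clipping)."""
    return np.clip(x, lo, hi)

def prox_l1(x, lam):
    """prox of lam * ||.||_1: soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = np.array([-2.0, 0.3, 1.7])
p_box = prox_indicator_box(x, 0.0, 1.0)   # -> [0.0, 0.3, 1.0]
p_l1 = prox_l1(x, 0.5)                    # -> [-1.5, 0.0, 1.2]
```

The first identity is the one used later to replace the proximal step with a projection when the nonsmooth local cost is an indicator function.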
We now recall some basic properties of operators [13]. First, we recall that is Lipschitz continuous if, for some
Definition 1 (Monotone operators).
A mapping is:

(strictly) monotone if for all

cocoercive with , if for all

firmly nonexpansive if for all
We use the adjective “restricted” if a property holds for all .
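As a quick numerical sanity check of Definition 1, cocoercivity of a gradient mapping can be verified on random point pairs; the quadratic mapping below is an illustrative choice of ours:

```python
import numpy as np

rng = np.random.default_rng(1)

# The gradient of a convex quadratic, F(x) = M x with M symmetric positive
# semidefinite, is cocoercive with constant beta = 1 / lambda_max(M). We
# check the defining inequality <F(x)-F(y), x-y> >= beta ||F(x)-F(y)||^2
# on random pairs of points.
M = np.array([[2.0, 0.0], [0.0, 3.0]])
F = lambda x: M @ x
beta = 1.0 / np.linalg.eigvalsh(M).max()

holds = all(
    (F(x) - F(y)) @ (x - y) >= beta * np.linalg.norm(F(x) - F(y)) ** 2 - 1e-9
    for x, y in (rng.normal(size=(2, 2)) for _ in range(100))
)
```

Cocoercivity implies both monotonicity and Lipschitz continuity, which is why it is the key property used in the FB convergence analysis.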
3 Mathematical background: Generalized Nash equilibrium problems
We consider a set of noncooperative agents, each of them choosing its strategy from its local decision set . The aim of each agent is to minimize its local cost function within its feasible strategy set. We call the decisions of all the agents with the exception of and set . The local cost function of agent is a function that depends on both the local variable and the decision of the other agents . The cost function has the form
(1) 
which presents the typical splitting into smooth and nonsmooth parts. We assume that the nonsmooth part is represented by . We note that it can model not only a local cost, but also local constraints via an indicator function, e.g., .
Standing Assumption 1 (Local cost).
For each , the function in (1) is lower semicontinuous and convex and is nonempty, compact and convex.
Standing Assumption 2 (Cost functions convexity).
For each and the function is convex and continuously differentiable.
Furthermore, we consider a game with affine shared constraints . Therefore, the feasible decision set of each agent is denoted by the set-valued mapping :
(2) 
where and . The set represents the local decision set for agent , while the matrix defines how agent is involved in the coupling constraints. The collective feasible set can be written as
(3) 
where and .
Standing Assumption 3 (Constraints qualification).
The set satisfies Slater’s constraint qualification.
The aim of each agent , given the decision variables of the other agents , is to choose a strategy that solves its local optimization problem, i.e.,
(4) 
When the optimization problems are simultaneously solved, the solution concept that we are seeking is that of generalized Nash equilibrium.
Definition 2.
A Generalized Nash equilibrium (GNE) is a collective strategy such that, for all ,
In other words, a GNE is a set of strategies such that no agent can decrease its cost function by unilaterally deviating from its decision. While, under Standing Assumptions 1–3, existence of a GNE of the game is guaranteed by [12, Section 4.1], uniqueness does not hold in general [12, Section 4.3].
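The "no profitable unilateral deviation" property can be illustrated numerically on a hypothetical two-player quadratic game without coupling constraints (costs and equilibrium below are our own toy construction):

```python
import numpy as np

# Hypothetical 2-player quadratic game with symmetric costs
# J_i(x_i, x_j) = x_i**2 + x_i * x_j - x_i, giving the best response
# x_i = (1 - x_j) / 2 and the unique NE x1 = x2 = 1/3.
def J(xi, xj):
    return xi ** 2 + xi * xj - xi

x1 = x2 = 1.0 / 3.0

# At the equilibrium, no unilateral deviation lowers either player's cost:
deviations = np.linspace(-1.0, 1.0, 201)
no_profitable_dev = all(
    J(x1, x2) <= J(d, x2) + 1e-12 and J(x2, x1) <= J(d, x1) + 1e-12
    for d in deviations
)
```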
Among all possible Nash equilibria, we focus on those that correspond to the solution set of an appropriate variational inequality. To this end, let the (pseudo) gradient of the cost function be
(5) 
and let
(6) 
Formally, the variational inequality problem, , is [16, Definition 26.19]: find such that
(7) 
Remark 1.
When Standing Assumptions 1–3 hold, any solution of in (7) is a GNE of the game in (4), while the converse does not hold in general. Indeed, there may be Nash equilibria that are not solutions of the VI [33, Proposition 12.7]. The GNEs that are also solutions of the associated VI are called variational equilibria (vGNE).
The GNEP can be recast as a monotone inclusion, namely, the problem of finding a zero of a set-valued monotone operator. To this end, we characterize a GNE of the game in terms of the Karush–Kuhn–Tucker (KKT) conditions of the coupled optimization problems in (4). A set of strategies is then a GNE if and only if the corresponding KKT conditions are satisfied [12, Theorem 4.6]. Moreover, since the variational problem is a minimization problem, appropriate KKT conditions hold in this case as well. The interest in such conditions is due to [34, Theorem 3.1], which provides a criterion to select the GNEs that are also solutions of the VI. Specifically, under Standing Assumptions 1–3, [34, Theorem 3.1] establishes that the variational equilibria are exactly those points at which the shared constraints have the same dual variable for all agents. We refer to [35, Theorem 3.1] for a more general result.
Remark 2.
In [5], the authors propose a distributed preconditioned FB algorithm, reported here as Algorithm 1. Inspired by this work, in the following sections we describe SGNEPs and propose the stochastic counterpart of Algorithm 1 (Algorithm 2). More details on the preconditioning procedure and on the derivation of the algorithm are given in Section 4.2.
Remark 3.
When the local cost function is the indicator function, we can use the projection on the local feasible set , instead of the proximal operator [16, Example 12.25].
Initialization: and
Iteration : Agent
(1): Receives for , for , then updates
(2): Receives for , then updates
4 Stochastic generalized Nash equilibrium problems
4.1 Stochastic equilibrium problem formulation
In this section we describe the stochastic counterpart of GNEPs (SGNEPs). By a SGNEP we mean a GNEP where the cost functions are expected value functions. As in the deterministic case, we consider a set of agents with strategies . Each agent seeks to minimize its local cost function within its feasible strategy set, which satisfies Standing Assumption 1. The local cost function of agent is defined as
(8) 
for some measurable function . We suppose that the cost functions in (8) satisfy Standing Assumptions 1 and 2. The uncertainty is expressed through the random variable where
is the probability space. The cost function depends on both the local variable
, the decision of the other agents and the random variable . denotes the mathematical expectation with respect to the distribution of the random variable (from now on, we use instead of and instead of ). We assume that is well defined for all feasible . Since we consider a SGNEP, the feasible decision set of each agent is denoted by the set-valued mapping as in (2) and the collective feasible set is as in (3). We suppose that there is no uncertainty in the constraints and that Standing Assumptions 1 and 3 are satisfied.
The aim of each agent , given the decision variables of the other agents , is to choose a strategy that solves its local optimization problem, i.e.,
(9) 
We aim to compute a stochastic Generalized Nash equilibrium (SGNE) as in Definition 2 but with expected value cost functions, that is, a collective strategy such that for all
(10) 
To guarantee the existence of a SGNE, we make further assumptions on the cost function.
Standing Assumption 4 (Cost functions measurability).
For each and for each , the function is convex, Lipschitz continuous, and continuously differentiable. The function is measurable and for each , the Lipschitz constant is integrable in .
While, under Standing Assumptions 1–4, existence of a SGNE of the game is guaranteed by [23, Section 3.1], uniqueness does not hold in general [23, Section 3.2].
Since, as in the deterministic case, we seek a vGNE, we here study the associated stochastic variational inequality (SVI). The (pseudo) gradient mapping is given, in this case, by
(11) 
The possibility to exchange the expected value and the gradient is guaranteed by Standing Assumption 4. The associated SVI reads as
(12) 
where is defined as in (6).
The stochastic vGNE (vSGNE) of the game in (9) is defined as the solution of the in (12) where is described in (11) and is defined in (3).
In what follows, we recast the SGNEP into a monotone inclusion. For each agent , the Lagrangian function is defined as
where is the Lagrangian dual variable associated with the coupling constraints. The set of strategies is a SGNE if and only if the following KKT conditions are satisfied:
(13) 
Similarly, we can use the KKT conditions to characterize the variational problem, studying the Lagrangian function associated to the SVI. Since is a solution of if and only if
the associated KKT optimality conditions read as
(14) 
As exploited in [34, Theorem 3.1] and [35, Theorem 3.1], we seek a vSGNE, that is, an equilibrium at which the dual variables reach consensus.
4.2 Stochastic preconditioned forwardbackward algorithm
We now describe the details of the preconditioning procedure that leads to the distributed iterations presented in Algorithm 2.
Initialization: and
Iteration : Agent
(1): Receives for all for then updates:
(2): Receives for all then updates:
We suppose that each agent only knows its local data, i.e., , and . Moreover, each player has access to a pool of samples of the random variable and is able to compute, given the actions of the other players , (or an approximation, as exploited later in this section).
Since the cost function is affected by the other agents' strategies, we call the set of agents interacting with . Specifically, if the function explicitly depends on .
A local copy of the dual variable is shared through the dual-variable graph . Along with the dual variable, the agents share a copy of the auxiliary variable . The role of is to force consensus, since this is the configuration that we seek; deeper insight into this variable is given later in this section. The set of edges is given by: if player can receive from player . The neighbouring agents in form the set for all . In this way, each agent controls its own decision variable, a local copy of the dual variable and of the auxiliary variable, and has access to the other agents' variables through the graphs.
Standing Assumption 5 (Graph connectivity).
The dualvariable graph is undirected and connected.
We call the weighted adjacency matrix of . Then, letting and , the associated Laplacian is the matrix . Moreover, it follows from Standing Assumption 5 that . Standing Assumption 5 is important to reach consensus on the dual variables.
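For a concrete (hypothetical) example, the Laplacian of a 4-agent ring graph satisfies the properties used here — consensus vectors lie in its kernel and connectivity makes that kernel one-dimensional:

```python
import numpy as np

# Hypothetical 4-agent dual-variable graph (an undirected ring) with 0/1
# weights. The Laplacian is L = D - W, with D the degree matrix.
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
D = np.diag(W.sum(axis=1))
Lap = D - W

# Connectivity implies ker(L) = span(1): L @ 1 = 0 and the second-smallest
# eigenvalue (the algebraic connectivity) is strictly positive.
eigs = np.sort(np.linalg.eigvalsh(Lap))
```

This is exactly why a Laplacian constraint on the stacked dual variables enforces their consensus.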
Rewriting the KKT conditions in (14) in compact form as
(15) 
where is a set-valued mapping, it follows that the vSGNE correspond to the zeros of the mapping .
In the remaining part of this section, we split into the sum of two operators and that satisfy specific properties. The advantage of this technique is that the zeros of the mapping correspond to the fixed points of a specific operator depending on both and , as exploited in [14, 5]. Such a scheme is known as forward–backward (FB) splitting [16, Section 26.5]. Indeed, it holds that, for any matrix , if and only if
Specifically, the operator can be written as a summation of the two operators
(16)  
Therefore, finding a solution of the variational SGNEP translates into finding a pair such that .
To impose consensus on the dual variables, the authors of [5] proposed the Laplacian constraint . For this reason, to preserve monotonicity, we expand the two operators and in (16) by introducing the auxiliary variable . Let be the Laplacian of and set . Let us define and ; similarly, we define of suitable dimensions. Then, let us define
(17)  
To ensure that the zeros of correspond to the zeros of the operator in (15), we take the following assumption.
Assumption 1 (Restricted cocoercivity).
is restricted cocoercive, with .
Then, the following result holds.
Lemma 1.
Proof.
See Appendix 10. ∎
The two operators and in (17) have the following properties.
Lemma 2.
Proof.
See Appendix 10. ∎
Since the expected value can be hard to compute, as the distribution of the random variable is unknown, we take an approximation of the pseudogradient. At this stage, the type of approximation used is not important; therefore, in what follows, we replace with
(18) 
where is an approximation of the expected value mapping in (11) given some realization of the random vector .
The fixed point problem, given , now reads as
(19) 
and suggests the stochastic FB algorithm
(20) 
where represents the backward step and is the forward step.
By expanding (20), we obtain the distributed FB steps in Algorithm 2 with as in (18), as in (17) and
(21) 
and similarly we define and of suitable dimensions. We note that is symmetric and such that is easy to compute and the iterations are sequential [14]. If we were to use the traditional FB algorithm with , we would have to compute the resolvent of , which involves the constraint matrix and the Laplacian and therefore cannot be evaluated in a distributed way. With as in (21), we overcome this problem [5].
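The computational point can be sketched in a simplified setting: with a diagonal preconditioner, the backward step decouples into per-agent projections, so each agent runs its own update with a local step size. The affine operator, box set and step sizes below are illustrative assumptions, not the operators of the actual algorithm:

```python
import numpy as np

# Toy sketch: FB for 0 in N_C(x) + B(x), with B monotone affine and
# Phi = diag(1/alpha_1, ..., 1/alpha_n). The resolvent of the normal-cone
# part in the Phi-metric is still the (componentwise) projection, so each
# coordinate/agent i uses its own step size alpha_i.
def preconditioned_fb(B, alphas, lo, hi, x0, iters=3000):
    x = x0.copy()
    for _ in range(iters):
        x = np.clip(x - alphas * B(x), lo, hi)  # Phi^{-1}-scaled forward step
    return x

M = np.array([[3.0, 1.0], [1.0, 2.0]])
q = np.array([-2.0, -2.0])
B = lambda x: M @ x + q
alphas = np.array([0.2, 0.3])               # heterogeneous local step sizes

x_star = preconditioned_fb(B, alphas, 0.0, 5.0, np.zeros(2))
residual = np.linalg.norm(x_star - np.clip(x_star - B(x_star), 0.0, 5.0))
```

The same idea, with a block-structured rather than diagonal preconditioner, is what makes the backward step of Algorithm 2 computable by each agent from local data.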
5 Convergence analysis with sample average approximation
We now state sufficient conditions for the convergence of Algorithm 2 to a vSGNE. Note that Algorithm 2 involves an approximation of but does not specify which one; indeed, the preconditioning can be done independently of the approximation scheme. For the convergence analysis, however, we consider the sample average approximation (SAA) scheme.
We assume the decision maker to have access to an increasing number of samples of the random variable and to be able to compute an approximation of of the form
(22)  
where , for all , and is an i.i.d. sequence of random variables drawn from .
Approximations of the form (22) are very common in Monte Carlo simulation approaches, machine learning and computational statistics [26].
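The behaviour of the SAA error as the batch size grows can be sketched on an illustrative one-dimensional mapping of our choosing:

```python
import numpy as np

rng = np.random.default_rng(7)

# SAA error for the toy mapping F(x, xi) = xi * x with E[xi] = 2:
# averaging over a batch of size N drives the error to zero like 1/sqrt(N).
def saa_error(x, N):
    xi = rng.normal(2.0, 1.0, size=N)
    return abs(xi.mean() * x - 2.0 * x)

x = 1.0
# Average the error over 50 repetitions for each batch size.
errors = [np.mean([saa_error(x, N) for _ in range(50)])
          for N in (10, 1000, 100_000)]
```

With an increasing batch-size sequence, the per-iteration approximation error therefore vanishes along the run, which is the mechanism the convergence proof relies on.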
For let us introduce the approximation error
(23) 
Remark 4.
Since there is no uncertainty in the constraints
where is the operator with approximation as in (22) and .
Standing Assumption 6 (Zero mean error).
For all ,
To guarantee that is positive definite and to obtain convergence, the step size sequence can be taken constant but it should satisfy some bounds [5, Lemma 6].
Assumption 2 (Bounded step sizes).
The step size sequence is such that, given , for every agent
where indicates the entry of the matrix . Moreover,
where is the cocoercivity constant of as in Lemma 2.
The number of samples to be taken for the SAA must satisfy some conditions as well.
Assumption 3 (Increasing batch size).
The batch size sequence is such that, for some ,
This assumption implies that
is summable, which is a standard assumption in SAA schemes. It is often used in combination with the forthcoming variance reduction assumption to control the stochastic error
[26, 27].

Assumption 4 (Variance reduction).
There exist , and a measurable locally bounded function such that for all
(24) 
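The batch-size growth in Assumption 3 can be sketched numerically (the constants c and a below are illustrative): a polynomial rate with exponent greater than one makes the reciprocal batch sizes summable, which is what the variance-reduction argument uses:

```python
import numpy as np

# Batch-size rule sketch: N_k = ceil(c * (k + 1)**a) with a > 1 implies
# sum_k 1/N_k < infinity, so the accumulated stochastic error is summable.
c, a = 1.0, 1.5
k = np.arange(1, 100_001)
terms = 1.0 / np.ceil(c * (k + 1) ** a)
partial = np.cumsum(terms)
# The partial sums flatten out (convergent series), unlike the harmonic
# case a = 1, whose partial sums diverge logarithmically.
```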
Remark 5.
For simplicity of presentation, let us consider a stronger assumption instead of (24), namely, for all
(25) 
for some . In the literature, (25) is known as uniform bounded variance. Assumption 4 is more natural when the feasible set is unbounded, and it is always satisfied when the mapping is Carathéodory and Lipschitz continuous [26, Ex. 3.1]. Since we are in a game-theoretic setup where the feasible set is bounded, we can use (25) as a variance control assumption.
We are now ready to state the convergence result for the SAA case.
Theorem 1.
Proof.
See Appendix 12. ∎
6 Stochastic Nash equilibrium problem
6.1 Stochastic Nash equilibrium recap
In this section we consider a SNEP, that is, a SGNEP with expected value cost functions but without shared constraints.
We consider a set of noncooperative agents, each choosing its strategy from its local decision set , which satisfies Standing Assumption 1. The local cost function of agent is defined as in (8).
Standing Assumptions 1, 2 and 4 hold also in this case. The aim of each agent , given the decision variables of the other agents , is to choose a strategy that solves its local optimization problem, i.e.,
(26) 
As a solution, we aim to compute a stochastic Nash equilibrium (SNE), that is, a collective strategy such that for all
We note that, compared to Definition 2 and Equation (10), here we consider only local constraints.
Also in this case, we study the associated stochastic variational inequality (SVI) given by
(27) 
where is defined in (11) and as in (6). As mentioned in Remark 2, being a NE is necessary and sufficient for being a solution of the .
The stochastic variational equilibrium (vSNE) of the game in (26) is defined as a solution of the SVI in (27), where is described in (11). The distributed FB algorithm that we propose is presented in Algorithm 3.
Initialization:
Iteration : Agent receives for all , then updates:
6.2 Convergence for restricted cocoercive pseudogradients
If the restricted cocoercivity assumption holds for the pseudogradient (Assumption 1) and enough samples are available, one can use Algorithm 2 with and the SAA scheme as in (22). Moreover, in this case, it is also possible to use the stochastic approximation (SA) scheme.
In this case, we approximate with only one realization of the random variable ; therefore, the approximation is formally defined as
(28)  
where is a collection of i.i.d. random variables drawn from .
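A one-dimensional sketch of the SA scheme with diminishing steps (the mapping, the distribution and the step-size rule below are illustrative assumptions, not the setting of Algorithm 3):

```python
import numpy as np

rng = np.random.default_rng(3)

# SA sketch: one sample per iteration with vanishing step sizes gamma_k
# (sum gamma_k = inf, sum gamma_k**2 < inf). Toy problem: find the zero of
# the expected mapping E[xi] * (x - 1) = 2 * (x - 1) with xi ~ N(2, 1);
# the solution is x* = 1.
x = 5.0
xis = rng.normal(2.0, 1.0, size=200_000)    # one realization per iteration
for k, xi in enumerate(xis, start=1):
    gamma = 1.0 / (k + 10.0)                # diminishing step size
    x -= gamma * xi * (x - 1.0)
```

The vanishing step sizes average out the sampling noise over the iterations, which is the role of the step-size conditions stated next.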
Before stating the convergence result, we state some further assumptions. With a little abuse of notation, the approximation error is defined as
We suppose that it satisfies Standing Assumption 6. Moreover, in the SA scheme, there are assumptions on the step size sequence as well. In particular, here we let the step size sequence be diminishing. This assumption is standard in the literature and has the role of controlling the stochastic error [24, 36].
Assumption 5 (Vanishing step sizes).
The step size sequence is such that
Assumption 6 (Bounded step sizes).
The step size sequence is such that where is the cocoercivity constant of as in Assumption 1.
We can now state our convergence result.
Theorem 2.
Proof.
See Appendix 13. ∎
6.3 Convergence for restricted strictly monotone mappings
If Assumption 1 is replaced with restricted strict monotonicity, it is still possible to prove convergence; we analyze this case in the remainder of this section.
Assumption 7.
is restricted strictly monotone at