On The Convergence of a Nash Seeking Algorithm with Stochastic State Dependent Payoff

09/30/2012 ∙ by A. F. Hanif, et al.

Distributed strategic learning has been gaining attention in recent years. As systems become distributed, finding Nash equilibria in a distributed fashion is becoming more important for various applications. In this paper, we develop a distributed strategic learning framework for seeking Nash equilibria under stochastic state-dependent payoff functions. We extend the work of Krstic et al. in [1] to the case of stochastic state-dependent payoff functions. We develop an iterative distributed algorithm for Nash seeking and examine its convergence to a limiting trajectory defined by an Ordinary Differential Equation (ODE). We show convergence of our proposed algorithm for vanishing step size and provide an error bound for fixed step size. Finally, we conduct a stability analysis and apply the proposed scheme to a generic wireless network. We also present numerical results which corroborate our claims.


1 Introduction

In this paper we consider a fully distributed system of non-cooperative nodes, which can be modeled as a non-cooperative game for Nash seeking. Consider a distributed system with N nodes or agents that interact with one another, each with a payoff/utility/reward to maximize. The decision or action of each node has an impact on the rewards of the other nodes, which makes the problem challenging in general. In such systems each node has access only to a numerical value of its utility/reward at each time, and a bird's-eye view of the system may not be possible because the system is too complicated or is constantly changing. Let a_j(t) be the action of node j at time t. The numerical value of the utility of this node is given by r̃_j(t) = R_j(s(t), a(t)) + η_j(t), where η_j(t) represents noise, R_j is the payoff function of node j, s(t) ∈ S is the state, S is compact, and a(t) = (a_1(t), …, a_N(t)) is the action vector containing the actions of all nodes at time t.

Figure 1 shows the system model, where we have several interacting nodes. The rewards are interdependent because the nodes interact with one another. The only assumption we make here is the existence of a local solution. Each of these nodes has access to the numerical value of its respective reward and needs to implement a scheme to select an action that maximizes its utility. The above scenario can be interpreted as an interactive game. In this paper we explore learning in such games, which is synonymous with designing distributed iterative algorithms that converge to a Nash equilibrium.

Figure 1: Nodes interacting with each other through a dynamic environment

Different approaches, mainly based on the gradient descent or ascent method [3], have been developed to achieve a local optimum (or a global optimum in some special cases, e.g., concavity of the payoff) of the distributed optimization problem. The method of gradient ascent is also called the steepest ascent method: it starts at a point a(0) and, as many times as needed, moves from a(t) to a(t+1) along the line extending from a(t) in the direction of the local gradient ∇_{a_j} R_j(a(t)). This gives the iterative scheme a_j(t+1) = a_j(t) + λ_j ∇_{a_j} R_j(a(t)), where λ_j is a learning rate/step size. For the applicability of this algorithm, it is necessary to have access to the value of the gradient at each time t. The action can be positive and upper bounded by a certain maximum value for some engineering applications. Thus, the component a_j needs to be projected onto the domain A_j, which leads to a projected gradient descent or ascent algorithm: a_j(t+1) = proj_{A_j}(a_j(t) + λ_j ∇_{a_j} R_j(a(t))), where proj denotes the projection operator. At each time t, node j needs to observe/compute the gradient term ∇_{a_j} R_j(a(t)). Use of the aforementioned gradient-based method requires knowledge of (i) the system state, (ii) the actions of the others and their states, and/or (iii) the mathematical structure (closed-form expression) of the payoff function. As we can see, it will be difficult for node j to compute the gradient if the expression of the payoff function is unknown and/or if the states and actions of the other nodes are not observed, as the gradient depends on the actions and states of the others.
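For concreteness, the following sketch (not from the paper) shows the projected gradient ascent baseline described above, assuming a hypothetical oracle grad(j, a) that returns the partial derivative of R_j with respect to a_j; the point of the remainder of the paper is that such an oracle is usually unavailable.

import numpy as np

def projected_gradient_ascent(grad, a0, a_max, lam=0.01, T=1000):
    """Baseline scheme: every node follows its own payoff gradient and
    projects the action back onto [0, a_max]. Requires full gradient access."""
    a = np.array(a0, dtype=float)
    for t in range(T):
        g = np.array([grad(j, a) for j in range(len(a))])  # oracle: needs states/actions of all nodes
        a = np.clip(a + lam * g, 0.0, a_max)               # projection onto the feasible domain
    return a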

There are several methods for Nash equilibrium seeking where we only have access to the numerical value of the payoff function at each time and not to its gradient (e.g., functions that cannot be differentiated in closed form, or unknown functions). Some of them are detailed below.

Stochastic gradient ascent feeds back the (possibly noisy) numerical value of the gradient of the reward function of node j to that node. This supposes in advance that a noisy gradient can be computed or is available at each node. Note that if the numerical values of the gradients of the payoffs are not known by the players, this scheme cannot be used. In [4] a projected stochastic gradient based algorithm is presented. A distributed asynchronous stochastic gradient optimization algorithm is presented in [5]. Incremental subgradient methods for non-differentiable optimization are discussed in [6]. A distributed optimization algorithm for sensor networks is presented in [7]. Interested readers are referred to a survey by Bertsekas [8] on incremental gradient, subgradient, and proximal methods for convex optimization. In [9] the authors present stochastic extremum seeking with applications to mobile sensor networks.

Krstic et al. have in recent years contributed greatly to the field of non-model-based extremum seeking. In [1], the authors propose a Nash seeking algorithm for games with continuous action spaces. They propose a fully distributed learning algorithm that requires only a measurement of the numerical value of the payoff. Their scheme is based on sinus perturbation (i.e., deterministic perturbation instead of stochastic perturbation) of the payoff function in continuous time. However, a discrete-time learning scheme with sinus perturbations is not examined in [1]. In [10] the extremum seeking algorithm with sinusoidal perturbations for non-model-based systems is extended and modified to the case of i.i.d. noisy measurements and vanishing sinus perturbation, and almost sure convergence to equilibrium is proved. Sinus perturbation based extremum seeking for state-independent noisy measurements is presented in [11]. Krstic et al. [2] have recently extended the Nash seeking scheme to stochastic non-sinusoidal perturbations. In this paper we extend the work in [1] to the case of stochastic state-dependent payoff functions and use deterministic perturbations for Nash seeking. One can easily see the difference between this paper and the previous works [10, 11]: there, the noise associated with the measurement is i.i.d., which does not hold in practice, especially in engineering applications where the noise is in general time-correlated. In our case, we consider a stochastic state-dependent payoff function and our problem can be written in Robbins-Monro form with a Markovian (correlated) noise (this will become clearer in the next sections), i.e., the associated noise is stochastic and state-dependent, which is different from the case of i.i.d. noise.

Although stochastic estimation techniques do estimate the gradient, they introduce a level of uncertainty; to avoid this, it is possible to use sinus perturbation instead of stochastic perturbation. This is particularly helpful when one node is trying to follow the actions of the other nodes in a certain application.

1.1 Contribution

In this paper, we propose a discrete-time learning algorithm, using sinus perturbation, for continuous action games where each node has only a numerical realization of the payoff at each time. We therefore extend the classical Nash seeking with sinus perturbation method [1] to the case of discrete time and stochastic state-dependent payoff functions. We prove that our algorithm converges locally to a state-independent Nash equilibrium in Theorem 1 for vanishing step size and provide an error bound in Theorem 2 for fixed step size. Note that since the payoff function may not necessarily be concave, finding a global optimum at affordable complexity can be difficult in general, even in the deterministic case (fixed state) with a known closed-form expression of the payoff. We also give the convergence time for the sinus framework in Corollary 1. In this paper we analyze and prove that the algorithm converges to a limiting ODE, and we provide the convergence time and the error bound between our discrete-time algorithm and the ODE.

The proofs of the theorems are given in Appendix A.

1.2 Structure of the paper

The remainder of this paper is organized as follows. Section 2 provides the proposed distributed stochastic learning algorithm. The performance analysis of the proposed algorithm (convergence to the ODE, error bounds) is presented in Section 3. A numerical example with convergence plots is provided in Section 4. Section 5 concludes the paper. The appendix contains the proofs.

1.3 Notations

We summarize some of the notations in Table 1.

Symbol Meaning
𝒩 set of nodes
A_j set of choices of node j
S state space
R_j payoff of node j
a_j(t) decision of node j at time t
E expectation operator
∇ gradient operator
Table 1: Summary of Notations

2 Problem Formulation and Proposed Algorithm

Let there be N distributed nodes, each with a payoff function R_j(s(t), a(t)) at time t, which is used to formulate the following robust problem: for every node j,

(1)   sup_{a_j ∈ A_j} E_s[ R_j(s, a_j, a_{-j}) ]

A solution to problem (1) is called a state-independent equilibrium solution.

Definition 1 (Nash Equilibrium (state-independent))

A strategy profile a* = (a_1*, …, a_N*) is a (state-independent) Nash equilibrium point if, for every node j,

(2)   E_s[ R_j(s, a_j*, a_{-j}*) ] ≥ E_s[ R_j(s, a_j, a_{-j}*) ]   for all a_j ∈ A_j,

where E_s denotes the mathematical expectation over the state.

Definition 2 (Nash Equilibrium (state-dependent))

We define a state-dependent strategy of node j as a mapping ã_j from the state space S to the action space A_j. The set of state-dependent strategies of node j is denoted Ã_j.

A profile ã* = (ã_1*, …, ã_N*) is a (state-dependent) Nash equilibrium point if, for every node j,

(3)   E_s[ R_j(s, ã_j*(s), ã_{-j}*(s)) ] ≥ E_s[ R_j(s, ã_j(s), ã_{-j}*(s)) ]   for all ã_j ∈ Ã_j.

Here we define a_{-j} = (a_1, …, a_{j-1}, a_{j+1}, …, a_N), the actions of all nodes other than j. We assume that node j has access to its realized payoff at each time t, but the closed-form expression of R_j is unknown to node j. A solution to the above problem is a state-independent equilibrium in the sense that no node has an incentive to change its action when the other nodes keep their choices. It is well known that equilibria can differ from global optima; the gap between the worst equilibrium and the global maximizer is captured by the so-called price of anarchy. Thus, the solution obtained by our method can be suboptimal with respect to maximizing the sum of all the payoffs. We study the local stability of the stochastic algorithm.

The robust game is defined as follows: 𝒩 = {1, …, N} is the set of nodes, A_j is the action space of node j, S is the state space of the whole system, and each payoff R_j is a smooth function of the state and the actions. It should be mentioned here for clarity that the decisions are taken in a decentralized fashion by each node.

Games with uncertain payoffs are called robust games. Since the state can be stochastic, we get a robust game. Here we focus on the analysis of the so-called expected robust game, i.e., the game with payoffs E_s[R_j(s, a)]. A (state-independent) Nash equilibrium point [14] of the above robust game is a strategy profile such that no node can improve its payoff by unilateral deviation; see Definitions 1 and 2.

Since the current state is not observed by the nodes, it would be difficult to implement a state-dependent strategy. Our goal is to design a learning algorithm for a state-independent equilibrium as given in Definition 1. In what follows we assume that we are in a setting where the above problem has at least one isolated state-independent equilibrium solution. More details on the existence of equilibria can be found in Theorem 3 in [15].

2.1 Learning algorithm

Suppose that each node j is able to observe a numerical value r̃_j(t) of its payoff at time t, where a_j(t) is the action of node j at time t and â_j(t) is an intermediary variable. The constants b_j, Ω_j and φ_j represent the amplitude, frequency and phase of the sinus perturbation signal b_j sin(Ω_j t + φ_j), and r̃_j(t) represents the payoff at time t. The learning algorithm is presented in Algorithm 1 and is explained below. At each time instant t, each node j updates its action a_j(t) by adding the sinus perturbation b_j sin(Ω_j t + φ_j) to the intermediary variable â_j(t) using equation (4), and plays the resulting action. Then, each node gets a realization of the payoff from the dynamic environment at time t, which is used to compute â_j(t+1) using equation (5). The action is then updated using equation (4). This procedure is repeated over the time window t ∈ {0, 1, …, T}.

The algorithm is in discrete time and is given by

(4)   a_j(t) = â_j(t) + b_j sin(Ω_j t + φ_j)
(5)   â_j(t+1) = â_j(t) + γ_t b_j sin(Ω_j t + φ_j) r̃_j(t)

where γ_t is the learning rate and r̃_j(t) is the realized payoff of node j at time t.

For almost sure convergence, it is usual to consider a vanishing step size or learning rate such as γ_t = γ_0 / (t+1)^β. However, a constant learning rate could be more appropriate in some regimes. The exponent β belongs to (1/2, 1].

1:  Each node j: initialize â_j(0) and transmit the initial action a_j(0)
2:  Repeat
3:  Calculate action a_j(t) according to Equation (4)
4:  Perform action a_j(t)
5:  Observe the realized payoff r̃_j(t)
6:  Update â_j(t+1) using Equation (5)
7:  until horizon T
Algorithm 1 Distributed learning algorithm
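A minimal sketch of Algorithm 1 in Python (not from the paper) is given below, assuming the updates take the sinus-perturbation form sketched in (4)-(5); the payoff callable, the step-size schedule and all parameter values are illustrative assumptions.

import numpy as np

def nash_seeking(payoff, N, T=20000, gamma0=0.5, beta=0.6, b=0.05, seed=0):
    """Each node perturbs an intermediary variable with its own sinusoid,
    plays the perturbed action, observes only a numerical payoff, and
    correlates that payoff with its own perturbation to update the variable."""
    rng = np.random.default_rng(seed)
    Omega = 1.0 + np.arange(N)                      # distinct frequencies, one per node
    phi = rng.uniform(0.0, 2.0 * np.pi, N)          # phases
    a_hat = np.zeros(N)                             # intermediary variables
    for t in range(T):
        gamma = gamma0 / (t + 1) ** beta            # vanishing step size (A1-type choice)
        pert = b * np.sin(Omega * t + phi)          # sinus perturbations
        a = a_hat + pert                            # actions actually played, cf. (4)
        r = payoff(t, a)                            # vector of realized numerical payoffs
        a_hat = a_hat + gamma * pert * r            # correlation update, cf. (5)
    return a_hat

Here payoff(t, a) stands in for the environment feedback r̃(t); distinct frequencies Ω_j are used so that each node effectively correlates the payoff with its own perturbation only.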
Remark 1 (Learning Scheme in Discrete Time)

As we will show in subsection 3.1, the difference equation (4) can be seen as a discretized version of the learning scheme presented in [1], but for games with state-dependent payoff functions, i.e., robust games.

It should be mentioned here for clarity that the action of each node is scalar.

2.2 Interpretation of the proposed algorithm

In some sense our algorithm is trying to estimate the gradient of the expected payoff E_s[R_j(s, a)], but we do not have access to the function itself, only to its numerical value. The following equation illustrates the significance of each variable and constant in the algorithm:

(6)   â_j(t+1) = â_j(t) + γ_t b_j sin(Ω_j t + φ_j) r̃_j(t)

The learning rate γ_t can be constant or variable depending on the requirements of the algorithm and on system limitations. The perturbation amplitude b_j is a small number, and the frequency and phase can also be varied for fine tuning. Rewriting the above equation we get

(7)   ( â_j(t+1) - â_j(t) ) / γ_t = b_j sin(Ω_j t + φ_j) r̃_j(t)

For vanishing step size, as t → ∞ and γ_t → 0, the trajectory of the above algorithm coincides with the trajectory of the ODE in equation (18).
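Why correlating the payoff with the node's own sinus perturbation acts as a gradient estimate can be seen from the standard extremum-seeking averaging argument; the following is a sketch in our notation for a single node and small b_j (it is not the paper's derivation). A first-order Taylor expansion gives

b_j sin(Ω_j t) R_j( â_j + b_j sin(Ω_j t) ) ≈ b_j sin(Ω_j t) R_j(â_j) + b_j² sin²(Ω_j t) ∂_{a_j} R_j(â_j).

Averaging over one period of the perturbation, the first term vanishes and the average of sin² is 1/2, so the mean drift of the update is (b_j²/2) ∂_{a_j} R_j(â_j), i.e., a scaled gradient ascent step obtained without ever computing a derivative.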

3 Main results

In this section we present the convergence results as introduced in the contribution section.

We introduce the following assumptions, which will be used step by step (we do not use A1 and A2 simultaneously).

Assumption 1 (A1: Vanishing learning rate)

γ_t > 0, ∑_{t≥0} γ_t = +∞ and ∑_{t≥0} γ_t² < +∞. There exists an initial point sufficiently close to a local maximizer. The reason for A1 is that γ_t represents the step size of the algorithm: the sum ∑_t γ_t must diverge so that the algorithm can traverse all of (discrete) time, and the condition ∑_t γ_t² < +∞ ensures a bound on the cumulative noise error. The last assumption is used for a local stability analysis.
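As a quick illustrative check (this particular schedule is our choice, not prescribed by the paper), the harmonic step size satisfies A1:

γ_t = 1/(t+1),   ∑_{t≥0} 1/(t+1) = +∞,   ∑_{t≥0} 1/(t+1)² = π²/6 < +∞.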

Assumption 2 (A2: Constant learning rate)

The learning rate is constant, γ_t = γ > 0, and the resulting noise sequence is uniformly integrable.

Assumption 3 (A3: Existence of a local maximizer)

For each node j, ∂_{a_j} E_s[R_j(s, a*)] = 0 and ∂²_{a_j a_j} E_s[R_j(s, a*)] < 0. These two conditions tell us that a_j* is a local maximizer of a_j ↦ E_s[R_j(s, a_j, a_{-j}*)].

Assumption 4 (A4: Diagonal Dominance)

The expected payoff has a Hessian that is diagonally dominant at a*, i.e., the condition displayed below holds. Note that A4 implies that the Hessian of the expected payoff is invertible at a*. This assumption is weaker than in the classical extremum seeking algorithm because the Hessian of R_j(s, ·) does not need to be invertible for each state s.
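In our notation, the diagonal dominance condition presumably takes the standard form

| ∂²_{a_j a_j} E_s[R_j(s, a*)] |  >  ∑_{k ≠ j} | ∂²_{a_j a_k} E_s[R_j(s, a*)] |   for all j.

A strictly diagonally dominant matrix is nonsingular (Levy-Desplanques theorem), which is why A4 implies invertibility of the Hessian at a*.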

We assume R_j(s, a) is integrable with respect to the state distribution so that the expectation is finite.

3.1 Convergence to ODE

Stochastic approximation

First we need to show that our proposed algorithm converges to the respective ODE almost surely. We use a dynamical system viewpoint and stochastic approximation methods to analyze our learning algorithm. The idea consists of finding the asymptotic pseudo-trajectory of the algorithm via an ordinary differential equation (ODE). To do so, we use the framework initiated by Robbins and Monro [16] or [17]; see [18, 12] for recent developments. The works in [18, 12] allow us to find the limiting trajectory of the learning algorithm.

Our scheme can be written as the update (5). We now rewrite this update in Robbins-Monro [16] form: the new iterate equals the current iterate plus the step size γ_t times the sum of a drift term (the conditional expectation of the increment) and a martingale difference noise term.

Since our payoff is Lebesgue integrable with respect to the state distribution, the expectation of the payoff function is finite. The noise term is clearly adapted to the filtration F_t generated by the random variables observed up to time t and by the initial law of the state, and it has zero conditional mean. Thus, it is a martingale difference sequence.
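One plausible way to write the scheme explicitly in Robbins-Monro form, in our notation (the paper's exact decomposition may differ), is

â_j(t+1) = â_j(t) + γ_t [ h_j(t, â(t)) + M_j(t+1) ],
h_j(t, â) = b_j sin(Ω_j t + φ_j) E_s[ R_j(s, â + b sin(Ω t + φ)) ],
M_j(t+1) = b_j sin(Ω_j t + φ_j) ( r̃_j(t) - E_s[ R_j(s, a(t)) ] ),

so that E[ M_j(t+1) | F_t ] = 0, i.e., M_j is a martingale difference sequence with respect to F_t.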

Theorem 1 (Variable Learning Rate)

Under Assumption A1, the learning algorithm converges almost surely to the trajectory of the associated non-autonomous (mean) ODE.

The gap between the interpolated version of the algorithm and the solution of the ODE is bounded by a quantity which vanishes, where ā(·) is the interpolated version of the algorithm, a^{ODE}(·) is the solution of the ODE starting from the same initial point, L is the Lipschitz constant of the ODE and T is the time window. The constants are specified below.

In order to calculate the bound we need to define a few terms which are helpful in obtaining a compact form of the bound.

(8)
(9)
(10)

To prove that the learning algorithm (the discrete-time scheme) converges to the ODE, we need to verify the conditions of Borkar [12], Chapter 2, Lemma 1.

This is an important result, as it gives a bound on the error between our algorithm and the corresponding ODE.

Theorem 2 (Fixed Learning Rate)

Under Assumption A2, the learning algorithm converges in distribution, as the constant step size γ tends to zero, to the trajectory of a non-autonomous system given by

(12)
(13)

Moreover, the error gap is controlled by the constant step size γ: as γ converges to zero, the algorithm converges (in distribution) to the ODE.

The advantage of Theorem 2 compared to Theorem 1 is the convergence time: the number of iterations required to reach a fixed time is smaller with a constant learning rate than with a vanishing learning rate. However, the convergence notion under constant step size is weaker (it is in distribution) compared to the almost sure convergence with vanishing learning rate. So there is a tradeoff between almost sure convergence and convergence time.

Let δ(t) = a^{ODE}(t) - a* be the gap between the ODE solution and the isolated equilibrium at time t.

Theorem 3 (Exponential Stability)

Assume A3-A4 and Remarks 3 and 4 hold. Then, there exist constants c₁, c₂ > 0 and b̄ > 0 such that, for all amplitudes b_j ∈ (0, b̄) and all t ≥ 0, if the initial gap ‖δ(0)‖ is sufficiently small, then for all time t

(14)   ‖δ(t)‖ ≤ c₁ ‖δ(0)‖ e^{-c₂ t} + O(max_j b_j),

where the constants c₁ and c₂ are given by

(15)

[Sketch of Proof of Theorem 3]

The local stability proof of Theorem 3 follows the steps in [13].

From the above inequality it is clear that, as time goes to infinity, the first term of the bound vanishes exponentially and the remaining error is bounded by a term proportional to the amplitude of the sinus perturbation. This means that the solution of the ODE converges locally exponentially to the state-independent equilibrium action, provided the initial point is sufficiently close.

Definition 3 (ε-Nash equilibrium payoff point)

An ε-Nash equilibrium point in state-independent strategies is a strategy profile such that no node can improve its payoff by more than ε by unilateral deviation.

Definition 4 (ε-close Nash equilibrium strategy point)

An ε-close Nash equilibrium point in state-independent strategies is a strategy profile whose Euclidean distance to a Nash equilibrium is less than ε.

An ε-close Nash equilibrium point is an approximate Nash point with a precision of at most ε.

It is not difficult to see that, for Lipschitz continuous payoff functions, an ε-close Nash equilibrium is an approximate Nash equilibrium with precision proportional to Lε, where L is the Lipschitz constant.
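A sketch of the standard argument (the constants may differ from the paper's statement): if ‖a - a*‖ ≤ ε, the expected payoffs are L-Lipschitz and a* is a Nash equilibrium, then for any unilateral deviation a_j′ of node j,

E_s[R_j(s, a_j′, a_{-j})] - E_s[R_j(s, a_j, a_{-j})]
  ≤ ( E_s[R_j(s, a_j′, a_{-j}*)] - E_s[R_j(s, a*)] ) + 2L‖a - a*‖
  ≤ 0 + 2Lε,

so no node can gain more than 2Lε by deviating.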

The next corollary shows that one can get an ε-close Nash equilibrium in finite time.

Corollary 1 (Convergence Time)

Assume A3-A4 and Remarks 3 and 4 hold. Then, the ODE reaches an ε-close neighborhood of a Nash equilibrium in at most T_ε time units, where T_ε = (1/c₂) ln( c₁ ‖δ(0)‖ / ε ), for ε larger than the residual term due to the perturbation amplitude.

[Sketch of Proof for Corollary 1] The proof follows from the inequality (14) in Theorem 3.

Corollary 2 (Convergence to the ODE)

Under Assumptions A1, A3 and A4, the following inequality holds almost surely:

‖ā(t) - a*‖ ≤ ‖ā(t) - a^{ODE}(t)‖ + ‖a^{ODE}(t) - a*‖,

where the right-hand side is bounded by the constant

(16)

[Proof of Corollary 2] The proof uses the triangle inequality above. By Theorem 1, the first term (the gap between the interpolated algorithm and the ODE) vanishes; by Theorem 3, the second term satisfies the exponential bound (14). Combining the two, one arrives at the announced result.

The constants in equations (15) and (16) depend on the number of players and the dimension of the action space.

3.2 Convergence of the stochastic ODE

In this subsection we study the stochastic ODE given by

(17)   a_j(t) = â_j(t) + b_j sin(Ω_j t + φ_j)
(18)   (d/dt) â_j(t) = b_j sin(Ω_j t + φ_j) R_j(s(t), a(t))

where R_j(s(t), a(t)) is the realization of the state-dependent payoff at time t. We assume the state process s(t) is ergodic, so that time averages of the payoff converge to the expectation E_s[R_j(s, a)].

In particular, the asymptotic drifts of the deterministic ODE and the stochastic ODE are the same. Hence, the following theorem follows:

Theorem 4 (Almost sure exponential stability)

The stochastic Algorithm 1 converges asymptotically almost surely to the solution of the stochastic ODE in equation (18).

Since the state process is ergodic, we can apply the stochastic averaging theorem from [2] to get the announced result.

4 Numerical Example: A Generic Wireless Network with Interference

The distributed optimization problem considered in this paper and the developed approach are general and can be used in many application domains. As an application of the above framework, we consider the problem of power control in wireless networks in order to better illustrate our contribution. Consider an interference channel composed of N transmitter-receiver pairs, as shown in Figure 2. Each transmitter communicates with its corresponding receiver and incurs interference on the other receivers. Each receiver feeds back a numerical value of the payoff to its corresponding transmitter.

The problem is composed of N transmitter-receiver pairs; all of them use the same frequency and thus generate interference onto each other. Each transmitter-receiver pair therefore has its own payoff/reward/utility function that necessarily depends on the interference exerted by the other pairs/nodes. Since the wireless channel is time-varying, as is the interference, the objective is necessarily to optimize the payoff functions of all the nodes in the long run (e.g., on average). The payoff function of node j at time t is denoted by R_j(H(t), p(t)), where H(t) represents an N × N matrix containing the channel coefficients at time t, h_{ij}(t) represents the channel coefficient between transmitter i and receiver j, and p(t) represents the vector containing the transmit powers of the N transmitter-receiver pairs. The most common technique used to obtain a local maximum of the nodes' payoff functions is the gradient-based descent or ascent method.

Remark 2
Table 2 maps the general notation to the wireless application.

General    Application Description
R_j(s(t), a(t))    utility/payoff of transmitter j at time t
a_j(t)             action/transmit power p_j(t) of transmitter j at time t
s(t)               state/channel gain h_{ij}(t) between transmitter i and receiver j at time t
Table 2: Equivalent Notations for Wireless

Figure 2: Interference Channel Model

In Section 3, we proved that our proposed algorithm converges to a Nash equilibrium for any payoff function satisfying the assumptions of Section 3.1. In order to show numerically that our algorithm converges to the Nash equilibrium, we run it for a simple payoff function. In parallel, we obtain the Nash equilibrium analytically and compare the convergence point of our algorithm to it. We therefore choose a simple payoff function for which the equilibrium can be obtained analytically.

The payoff function of node j at time t then has the following form: an achievable-rate term for user j minus a linear transmission cost, where B represents the bandwidth available for transmission and λ is the unit cost of transmission. It is assumed that a user knows neither the structure of the function nor the law of the channel state. For the above payoff function, the assumptions A3-A4 and Remarks 3 and 4 hold provided a condition on the system parameters is satisfied; please see the appendix for more details.

The problem here is to maximize the payoff function, which is stated as follows: find p* such that, for each user j, the power p_j* maximizes the expected payoff given the powers of the other users. Note that when the unit cost λ is too large, the payoff of user j is negative and the minimum power is a solution to the above problem. For the remainder, we assume that this is not the case.

The channel h_{ij}(t) is time-varying and is generated using an independent and identically distributed complex Gaussian channel model with normalized variance. The thermal noise is assumed to be zero-mean Gaussian with variance σ², also normalized.

We consider the following simulation settings for the above wireless model with two nodes. The numerical setting could be tuned in order to make the convergence slower or faster, with some other tradeoff; due to space limitations, further discussion on how to select these parameters is omitted. The intermediary variables â_1(0) and â_2(0) represent the starting points of the algorithm, λ is the penalty for interference, B is the bandwidth, and the variance of the noise is normalized. Figure 3 shows the average transmit power trajectories of the algorithm for the two nodes. The dotted line represents the analytically computed Nash equilibrium. As can be seen from the plots, the system converges to the Nash equilibrium p*.

Figure 3: Power evolution (discrete time)
Figure 4: Payoff evolution (discrete time)
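To make the example easier to reproduce, a self-contained simulation sketch is given below. The payoff form (Shannon-type rate minus a linear power cost), the Rayleigh fading model and all numerical values are assumptions chosen for illustration; they are not claimed to be the paper's exact settings.

import numpy as np

def payoff(p, H, B=1.0, lam=1.0, sigma2=1.0):
    """Assumed payoff: B*log2(1 + SINR_j) - lam*p_j for each pair j."""
    g = np.abs(H) ** 2                          # channel gains, g[i, j]: tx i -> rx j
    direct = np.diag(g) * p                     # useful received power at each receiver
    interference = g.T @ p - direct             # interference received at each receiver
    sinr = direct / (sigma2 + interference)
    return B * np.log2(1.0 + sinr) - lam * p

def run(N=2, T=50000, b=0.05, gamma0=0.5, beta=0.6, seed=0):
    rng = np.random.default_rng(seed)
    Omega = 1.0 + np.arange(N)                  # distinct perturbation frequencies
    phi = rng.uniform(0.0, 2.0 * np.pi, N)
    p_hat = 0.5 * np.ones(N)                    # illustrative starting powers
    traj = np.empty((T, N))
    for t in range(T):
        gamma = gamma0 / (t + 1) ** beta        # vanishing step size
        pert = b * np.sin(Omega * t + phi)
        p = np.clip(p_hat + pert, 1e-3, None)   # keep transmit powers positive
        H = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
        r = payoff(p, H)                        # numerical payoff feedback only
        p_hat = p_hat + gamma * pert * r        # sinus-perturbation update
        traj[t] = p
    return traj

traj = run()
print("time-averaged powers over the last 10% of iterations:", traj[-5000:].mean(axis=0))

Plotting the columns of traj against the iteration index reproduces the kind of power-evolution curves shown in Figure 3.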

The example we discussed is only one of the possible types of applications where our proposed algorithm can be implemented.

Consider, for example, payoffs based on the goodput, or on the probability that the goodput exceeds a small threshold, where the threshold is a small value and the probability is taken over the random channel. The goodput represents the ratio of correctly received information bits to the number of transmitted bits. In wireless communications the channel is constantly changing due to various physical phenomena, interference from other sources and changes in the environment. It is hard to obtain a closed-form expression for the goodput due to the complexity of the transmitter and receiver and to unknown parameters. In practice, at each time t, the receiver therefore has a numerical value of the goodput, but no closed-form expression for the rate/goodput is available, especially for advanced coding schemes (e.g., turbo codes). The second payoff involves an outage probability, which also depends on the goodput; its gradient is notoriously hard to compute without knowledge of the channel and interference statistics (probability distribution functions) and a closed-form expression of the goodput. Our scheme can be particularly helpful in such scenarios.

The price/design parameter λ inside the reward function can be tuned such that the solution of the distributed robust extremum problem coincides with a global optimizer chosen by the system designer. The parameter λ can be the same for all nodes, or each node can have its own λ_j. Let a^{opt} represent the optimal action or set of actions to be performed by each node to maximize the designer's objective. It is possible to set λ such that the equilibrium coincides with a^{opt}; λ could be a scalar or a vector depending on the system size and the application. To be able to effectively make the equilibrium equal to a^{opt}, we need enough degrees of freedom in the system. However, such tuning is not possible in general.

5 Concluding remarks

Work Presented:

In this paper we have presented a Nash seeking algorithm which is able to find a local Nash equilibrium using just the numerical value of the stochastic state-dependent payoff function at each discrete time sample. We proved the convergence of our algorithm to a limiting ODE. We have also provided the error bound for the algorithm and the convergence time needed to reach a close neighborhood of the Nash equilibrium. A numerical example for a generic wireless network is provided for illustration. The convergence bounds achieved by our method depend on the step size and the perturbation amplitude.

New Class of Functions:

In this work we introduced a new class of state-dependent payoff functions inspired by wireless system applications. These kinds of functions are, however, more general and appear in other application areas.

Achievable Bounds:

As is clear from the results in Theorem 1, the convergence bound depends on an exponential term and on the amplitude of the sinus perturbation. As the amplitude becomes smaller, the error bound also vanishes. In contrast, the bound for the standard stochastic subgradient method depends only on the step size.

Global Analysis:

All the work considered in this paper, including that of Krstic et al., addresses local stability. Our work is an extension of theirs and likewise establishes local stability. Future work will focus on the extension to global stability of the Nash equilibrium for both deterministic and stochastic payoff functions.

Multidimensional Aspect:

The presented work has been studied for a scalar reward and a scalar action per node. The scalar scenario has several applications in wireless (as in the aforementioned example) and sensor networks, and numerous examples can be considered. A possible extension of this work is in the direction of vector actions, where each user is able to perform multiple actions based on multiple rewards.

Appendix A Convergence Theorems

A.1 Variable Step Size: Proof of Theorem 1

Theorem 1 states that, under Assumption A1, the learning algorithm converges almost surely to the trajectory of the associated non-autonomous (mean) ODE.

The proof follows in several steps.

  • The first step provides conditions for Lipschitz continuity of the expected payoff, which are given in Lemma 1. From Lemma 2 we have that the expected payoff a ↦ E_s[R_j(s, a)] is Lipschitz over the domain.

  • Second step: the learning rates are chosen such that they satisfy Assumption A1.

  • Third step: we check the noise conditions.

Lemma 1

Let the state-dependent payoff R_j(s, ·) be Lipschitz in the action with a state-dependent Lipschitz constant that is integrable over the state;

then the mapping a ↦ E_s[R_j(s, a)] is Lipschitz, with Lipschitz constant equal to the expectation of the state-dependent constant.

[Proof of Lemma 1] Suppose that R_j(s, ·) is Lipschitz with Lipschitz constant L_j(s). Then, by Jensen's inequality, the gap between the expected payoffs at two action profiles is bounded by the expected gap between the payoffs themselves, and hence by E_s[L_j(s)] times the distance between the profiles. By the integrability condition, E_s[L_j(s)] is finite.

This completes the proof.
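In our notation, the computation behind this proof is presumably the following (a sketch):

‖ E_s[R_j(s, a)] - E_s[R_j(s, a′)] ‖
  = ‖ E_s[ R_j(s, a) - R_j(s, a′) ] ‖
  ≤ E_s[ ‖ R_j(s, a) - R_j(s, a′) ‖ ]        (Jensen's inequality for the norm)
  ≤ E_s[ L_j(s) ] ‖ a - a′ ‖,

so the expected payoff is Lipschitz with constant L̄_j = E_s[L_j(s)], which is finite by the integrability condition.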

Remark 3
  • Note that, under the conditions of Lemma 1, the expected payoff vector is Lipschitz continuous.

  • If the action space is a compact set and the payoff is continuous, then the expected payoff is Lipschitz [in particular, the integrability condition above is not needed].

We prove the above remark by reductio ad absurdum. For the second statement of Remark 3 we use a compactness and continuity argument, starting from the Bolzano-Weierstrass theorem, from which it follows that any continuous map over a compact set attains at least one maximum.