1 Introduction
In multiagent systems, the ability to learn is important for an agent to adaptively adjust its behavior in response to coexisting agents and unknown environments in order to optimize its performance. Multiagent learning algorithms have received extensive investigation in the literature, and many learning strategies busoniu2008comprehensive ; matignon2012independent ; bloembergen2015evolutionary ; Zhang2017FMRQ have been proposed to facilitate coordination among agents.
The multiagent learning criteria proposed in WOLFPHC require that an agent should be able to converge to a stationary policy against some class of opponents (convergence) and to the best-response policy against any stationary opponent (rationality). If both agents adopt a rational learning strategy in the context of repeated games and their strategies converge, then they will converge to a Nash equilibrium of the stage game. Indeed, convergence to Nash equilibrium has been the most commonly accepted goal in the multiagent learning literature. To date, a number of gradient-ascent based multiagent learning algorithms singh2000nash ; WOLFPHC ; Abdallah2008MRL ; zhang2010multi have been proposed towards converging to Nash equilibrium, with improved convergence performance and more relaxed assumptions (less information required). In the same direction, another well-studied family of multiagent learning strategies is based on reinforcement learning (e.g., Q-learning Qlearning ). Representative examples include distributed Q-learning in cooperative games lauerRiedmiller , minimax Q-learning in zero-sum games minmaxQ , Nash Q-learning in general-sum games hu2003nash , and other extensions littman2001friend ; busoniu2008comprehensive , to name just a few.
Table 1: The Prisoner's Dilemma (PD) game (Agent 1's payoff / Agent 2's payoff).

                              Agent 2's actions
                              C        D
Agent 1's actions    C        3/3      0/5
                     D        5/0      1/1
All the aforementioned learning strategies pursue convergence to Nash equilibrium under self-play; however, the Nash equilibrium solution may be undesirable in many scenarios. One well-known example is the prisoner's dilemma (PD) game shown in Table 1. By converging to the Nash equilibrium (D, D), both agents obtain a payoff of 1, while they could have obtained a much higher payoff of 3 by coordinating on the non-equilibrium outcome (C, C). In situations like the PD game, converging to the socially optimal outcome, i.e., the outcome maximizing the total reward of all players, under self-play would be preferable. To address this issue, one natural modification for a gradient-ascent learner is to update its policy along the direction of maximizing the sum of all agents' expected payoffs instead of its own. However, in an open environment, the agents are usually designed by different parties and may not have the incentive to follow the strategy we design. The above way of updating would be easily exploited and taken advantage of by (equilibrium-driven) self-interested agents. Thus it would be highly desirable if an agent could converge to socially optimal outcomes under self-play while converging to Nash equilibrium against self-interested agents to avoid being exploited.
In this paper, we propose a new gradient-ascent based algorithm (SAIGA) which augments the basic gradient ascent algorithm by incorporating "social awareness" into the policy update process. Social awareness means that an agent tries to optimize the social outcome as well as its own. A SAIGA agent holds a social attitude reflecting its socially-aware degree, which is adjusted adaptively based on the relative performance between itself and its opponent. A SAIGA agent seeks to update its policy in the direction of increasing its overall payoff, defined as the average of its individual payoff and the social payoff, weighted by its socially-aware degree. We theoretically show that for a wide range of games (e.g., symmetric games), the dynamics of SAIGA under self-play exhibits linear characteristics. For general-sum games, it may exhibit nonlinear dynamics, which can still be analyzed numerically. The learning dynamics in two representative games (the prisoner's dilemma game and the coordination game, representing symmetric and asymmetric games respectively) are analyzed in detail. Like previous theoretical multiagent learning algorithms, SAIGA also requires the additional assumption of knowing the opponent's policy and the game structure.
To relax the above assumption, we then propose a practical gradient-ascent based multiagent learning strategy, called Socially-aware Policy Gradient Ascent (SAPGA). SAPGA relaxes the above assumptions by estimating the performance of itself and of its opponent using Q-learning techniques. We empirically evaluate its performance in different types of benchmark games, and simulation results show that a SAPGA agent outperforms previous learning strategies in terms of maximizing the social welfare and the Nash product of the agents. Besides, SAPGA is also shown to be robust against individually rational opponents, converging to Nash equilibrium solutions in that case.
The remainder of the paper is organized as follows. Section 2 reviews related work on gradient-ascent reinforcement learning algorithms. Section 3 reviews normal-form games and the basic gradient ascent approach. Section 4 introduces the SAIGA algorithm and analyzes its learning dynamics theoretically. Section 5 presents the practical multiagent learning algorithm SAPGA in detail. In Section 6, we extensively evaluate the performance of SAPGA under various benchmark games. Lastly, we conclude the paper and point out future directions in Section 7.
2 Related Works
The first gradient-ascent multiagent reinforcement learning algorithm is Infinitesimal Gradient Ascent (IGA singh2000nash ), in which each learner updates its policy in the gradient direction of its expected payoff. The purpose of IGA is to lead agents to converge to a Nash equilibrium in a two-player two-action normal-form game. It has been proved that IGA agents converge to a Nash equilibrium or, if their strategies do not converge, that their average payoffs nevertheless converge to the average payoffs of a Nash equilibrium. Soon after, Zinkevich et al. Zinkevich2003Online proposed an algorithm called Generalized Infinitesimal Gradient Ascent (GIGA), which extends IGA to games with an arbitrary number of actions.
Both IGA and GIGA can be combined with the Win or Learn Fast (WoLF) heuristic in order to improve performance in stochastic games (WoLF-IGA WOLFPHC , WoLF-GIGA Bowling2004Convergence ). The intuition behind the WoLF principle is that an agent should adapt quickly when it performs worse than expected, whereas it should maintain its current strategy when it receives a payoff better than expected. By altering the learning rate according to the WoLF principle, a rational algorithm can be made convergent. The shortcoming of WoLF-IGA and WoLF-GIGA is that they require a reference policy, i.e., an estimate of the Nash equilibrium strategies and the corresponding payoffs. To this end, Banerjee et al. Banerjee2003Adaptive propose an alternative criterion to WoLF-IGA, named Policy Dynamics based WoLF (PDWoLF), that can be accurately computed and guarantees convergence. The Weighted Policy Learner (WPL Abdallah2008MRL ) is another variation of IGA that also modulates the learning rate, but does not require a reference policy. Both WoLF and WPL are designed to guarantee convergence in stochastic repeated games. Another direction for extending IGA is to modify the learned value functions. Zhang et al. zhang2010multi propose a gradient-based learning algorithm that adjusts the expected payoff function of IGA, named Gradient Ascent with Policy Prediction (IGA-PP). The algorithm is designed for games with two agents. The key idea behind it is that a player adjusts its strategy in response to the forecasted strategy of the other player, instead of the current one. It has been proved that, in two-player, two-action, general-sum matrix games, IGA-PP in self-play, or against IGA, leads the players' strategies to converge to a Nash equilibrium. Like other MARL algorithms, besides the common assumptions, this algorithm additionally requires that a player knows the other player's strategy and current strategy gradient (or payoff matrix) so that it can forecast the other player's strategy.
All the aforementioned learning strategies pursue convergence to Nash equilibria. In contrast, in this work, we seek to incorporate social awareness into GA-based strategy updates and aim at improving the social welfare of the players under self-play rather than pursuing Nash equilibrium solutions. Meanwhile, individually rational behavior is employed when playing against a selfish agent. A similar idea of adaptively behaving differently against different opponents was also employed in previous algorithms littman2001friend ; conitzer2007awesome ; powers2005learning ; chakraborty2014multiagent . However, the existing works focus on maximizing an agent's individual payoff against different opponents in different types of games, and do not directly take into consideration the goal of maximizing social welfare (e.g., cooperating in the prisoner's dilemma game).
3 Background
In this section we introduce the necessary background for our contribution. First, we give an overview of the relevant game-theoretic definitions. Then a brief review of gradient-ascent based MARL (GA-MARL) algorithms is given.
3.1 Game theory
Game theory provides a framework for modeling agents' interactions, which was used by previous researchers to analyze the convergence properties of MARL algorithms singh2000nash ; WOLFPHC ; Abdallah2008MRL ; zhang2010multi . A game specifies, in a compact and simple manner, how the payoff of an agent depends on the other agents' actions. A (normal-form) game is defined by the tuple $(n, A_1, \dots, A_n, R_1, \dots, R_n)$, where $n$ is the number of players in the game, $A_i$ is the set of actions available to agent $i$, and $R_i$ is the reward (payoff) function of agent $i$, defined over the joint action executed by all agents. If the game has only two agents, then it is convenient to define their reward functions as a payoff matrix as follows,

$R_i = \begin{pmatrix} r^i_{11} & r^i_{12} \\ r^i_{21} & r^i_{22} \end{pmatrix}$

where $i \in \{1, 2\}$. Each element $r^i_{jk}$ in the matrix represents the payoff received by agent $i$ if agent $i$ plays action $j$ and its opponent plays action $k$.
A policy (or strategy) of an agent $i$ is denoted by $\pi_i: A_i \rightarrow [0,1]$, which maps its actions to probabilities. The probability of choosing an action $a_j$ according to policy $\pi_i$ is $\pi_i(a_j)$. A policy is deterministic or pure if the probability of playing one action is 1 while the probability of playing every other action is 0; otherwise the policy is stochastic or mixed. The joint policy of all agents is the collection of the individual agents' policies, defined as $\pi = (\pi_1, \dots, \pi_n)$. For convenience, the joint policy is usually expressed as $\pi = (\pi_i, \pi_{-i})$, where $\pi_{-i}$ is the collection of the policies of all agents other than agent $i$. The expected payoff of an agent is defined as its reward averaged over the joint policy: if the agents follow a joint policy $\pi$, then the expected payoff of agent $i$ is $V_i(\pi) = \sum_{a \in A} \pi(a) R_i(a)$, where $A = A_1 \times \dots \times A_n$ and $\pi(a) = \prod_j \pi_j(a_j)$.
The goal of each agent is to find a policy that maximizes its expected payoff. Ideally, we want all agents to reach the equilibrium that maximizes their individual payoffs. However, when agents do not communicate and/or are not cooperative, reaching a globally optimal equilibrium is not always attainable. An alternative goal is converging to a Nash equilibrium (NE), which is by definition a local maximum across agents. A joint strategy $\pi^*$ is called a Nash equilibrium (NE) if no player can obtain a better expected payoff by changing its current strategy unilaterally. Formally, $\pi^*$ is an NE iff for every agent $i$ and every policy $\pi_i$: $V_i(\pi_i^*, \pi_{-i}^*) \geq V_i(\pi_i, \pi_{-i}^*)$. An NE is pure if all its constituent policies are pure; otherwise the NE is called mixed or stochastic. Any game has at least one Nash equilibrium, but may not have any pure equilibrium.
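The expected-payoff and Nash-equilibrium definitions above can be made concrete with a small numerical sketch on the PD game of Table 1 (the helper function and its name are ours, for illustration only):

```python
# Hypothetical helper computing V_i(pi) for a two-player two-action game.
def expected_payoff(R, p1, p2):
    """Expected payoff: sum over joint actions of p1[j] * p2[k] * R[j][k]."""
    return sum(p1[j] * p2[k] * R[j][k]
               for j in range(2) for k in range(2))

# PD payoffs from Table 1: action 0 = C, action 1 = D.
R1 = [[3, 0], [5, 1]]   # agent 1's payoffs
R2 = [[3, 5], [0, 1]]   # agent 2's payoffs

# (D, D) is the pure Nash equilibrium: deviating unilaterally cannot help.
dd = expected_payoff(R1, [0, 1], [0, 1])       # mutual defection
deviate = expected_payoff(R1, [1, 0], [0, 1])  # unilateral C against D
assert dd >= deviate

# Mutual cooperation yields the higher, socially optimal payoff.
cc = expected_payoff(R1, [1, 0], [1, 0])
print(dd, cc)   # 1 3
```

The assertion checks exactly the unilateral-deviation condition in the NE definition.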
In the next subsection, we introduce the Gradient Ascent based MARL algorithm (GA-MARL), together with a brief review of its dynamics analysis.
3.2 Gradient Ascent (GA) MARL Algorithms
Gradient-ascent MARL algorithms (GA-MARL) learn a stochastic policy by directly following the expected reward gradient. The ability to learn a stochastic policy is particularly important when the world is not fully observable or has a competitive nature. The basic GA-MARL algorithm whose dynamics have been analyzed is Infinitesimal Gradient Ascent (IGA singh2000nash ). When a game is repeatedly played, an IGA player updates its strategy towards maximizing its expected payoff. A player employing a GA-based algorithm updates its policy in the direction of its expected reward gradient, as illustrated by the following equations,
$\tilde{\pi}_i^{t+1} = \pi_i^t + \eta \, \frac{\partial V_i(\pi^t)}{\partial \pi_i}$ (1)

$\pi_i^{t+1} = \Pi\big[\tilde{\pi}_i^{t+1}\big]$ (2)

where the parameter $\eta$ is the gradient step size, and $\Pi$ is the projection function mapping the input value to the valid probability range $[0,1]$, used to prevent the gradient from moving the strategy out of the valid probability space. Formally, we have,

$\Pi[x] = \operatorname{argmin}_{z \in [0,1]} |x - z|$ (3)
Singh, Kearns, and Mansour singh2000nash examined the dynamics of gradient ascent in two-player, two-action, iterated matrix games. We can represent this problem as two matrices,

$R_1 = \begin{pmatrix} r_{11} & r_{12} \\ r_{21} & r_{22} \end{pmatrix}, \qquad R_2 = \begin{pmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{pmatrix}$

where $r_{jk}$ (resp. $c_{jk}$) is the payoff to player 1 (resp. player 2) when player 1 plays action $j$ and player 2 plays action $k$. We refer to the joint policy of the two players at time $t$ by their probabilities of choosing the first action, $(\alpha_1(t), \alpha_2(t))$, where $\alpha_i \in [0,1]$ is the policy of player $i$. The time index $t$ will be omitted when it does not affect clarity (for example, when we are considering only one point in time). Then, for the two-player two-action case, the GA-based updating in Equations 1 and 2 simplifies, with the gradients given by,

$\frac{\partial V_1}{\partial \alpha_1} = \alpha_2 u_1 + (r_{12} - r_{22}), \qquad \frac{\partial V_2}{\partial \alpha_2} = \alpha_1 u_2 + (c_{21} - c_{22})$ (4)

where $u_1 = r_{11} + r_{22} - r_{12} - r_{21}$ and $u_2 = c_{11} + c_{22} - c_{12} - c_{21}$.
In the case of an infinitesimal gradient step size ($\eta \rightarrow 0$), the learning dynamics of the players can be modeled as a system of differential equations, i.e., $\dot{\alpha}_1 = \partial V_1 / \partial \alpha_1$ and $\dot{\alpha}_2 = \partial V_2 / \partial \alpha_2$, which can be analyzed using dynamical systems theory Coddington1955Theory . It is proved that the agents converge to a Nash equilibrium or, if the strategies themselves do not converge, that their average payoffs nevertheless converge to the average payoffs of a Nash equilibrium singh2000nash .
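The projected gradient dynamics can be simulated with a finite step size standing in for the infinitesimal limit. The following sketch (payoffs from the PD game of Table 1; variable names are ours) follows the gradient of Equation (4) and clips back to the probability simplex:

```python
# Finite-step simulation of the IGA dynamics on the PD game.
R = [[3, 0], [5, 1]]   # row player's payoffs (action 0 = C, action 1 = D)
C = [[3, 5], [0, 1]]   # column player's payoffs

u1 = R[0][0] + R[1][1] - R[0][1] - R[1][0]
u2 = C[0][0] + C[1][1] - C[0][1] - C[1][0]

def clip(x):
    """Projection of Equation (3): map back into [0, 1]."""
    return min(1.0, max(0.0, x))

a1, a2 = 0.5, 0.5       # probabilities of the first action (cooperate)
eta = 0.01
for _ in range(5000):
    g1 = a2 * u1 + (R[0][1] - R[1][1])   # dV1/da1, cf. Equation (4)
    g2 = a1 * u2 + (C[1][0] - C[1][1])   # dV2/da2
    a1, a2 = clip(a1 + eta * g1), clip(a2 + eta * g2)

print(round(a1, 3), round(a2, 3))   # 0.0 0.0: convergence to mutual defection
```

For the PD game both gradients are strictly negative, so the trajectory reaches the pure Nash equilibrium (D, D), illustrating the convergence result cited above.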
Combined with Q-learning Watkins1989Learning , researchers proposed a practical learning algorithm, the policy hill-climbing algorithm (PHC) WOLFPHC , which is a simple extension of IGA and is shown in Table 1.
The algorithm performs hill-climbing in the space of mixed policies, which is similar to gradient ascent but does not require as much knowledge. Q-values are maintained just as in normal Q-learning. In addition, the algorithm maintains the current mixed policy. The policy is improved by increasing the probability of selecting the highest-valued action according to a learning rate $\delta$. After that, the policy is mapped back to the valid probability space. This technique, like Q-learning, is rational and converges to an optimal policy if the other players play stationary strategies. With a suitable exploration policy, the Q-values converge to their optimal values, and the policy converges to a policy that is greedy with respect to them, and therefore to a best response. PHC is rational and places no limit on the number of agents or actions.
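The PHC idea described above can be sketched for a stateless two-action game as follows (a minimal illustration; parameter names and values such as `alpha` and `delta` are our choices, not the paper's exact pseudocode):

```python
import random

def phc_update(Q, pi, a, r, alpha=0.1, delta=0.01):
    """One PHC step: Q-learning plus a small move toward the greedy action."""
    Q[a] += alpha * (r - Q[a])                    # ordinary Q-learning update
    greedy = max(range(len(Q)), key=lambda i: Q[i])
    for i in range(len(pi)):                      # hill-climb toward greedy
        step = delta if i == greedy else -delta / (len(pi) - 1)
        pi[i] = min(1.0, max(0.0, pi[i] + step))  # project back to [0, 1]
    s = sum(pi)
    for i in range(len(pi)):
        pi[i] /= s                                # renormalize the policy
    return Q, pi

# Against a stationary environment where action 1 always pays more,
# the policy climbs to the best response, illustrating rationality.
random.seed(0)
Q, pi = [0.0, 0.0], [0.5, 0.5]
for _ in range(600):
    a = 0 if random.random() < pi[0] else 1
    phc_update(Q, pi, a, 1.0 if a == 1 else 0.0)
print(pi[1] > 0.9)   # True
```

The demo loop plays the role of the "stationary opponent" case: the Q-value of the better action dominates, and the policy mass migrates to it at rate `delta` per round.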
4 Sociallyaware Infinitesimal Gradient Ascent (SAIGA)
In daily life, people do not always behave as purely individually rational entities seeking Nash equilibrium solutions. For example, when two human subjects play a PD game, mutual cooperation is frequently observed. Similar phenomena have also been observed in extensive human-subject experiments on games such as the Public Goods game Hauert2003Prisone and the Ultimatum game Alvard2004The , in which human subjects usually obtain much higher payoffs through mutual cooperation than by pursuing Nash equilibrium solutions. Transformed into a computational model, this indicates that an agent may not only update its policy in the direction of maximizing its own payoff, but may also take the other's payoff into consideration. We call this type of agent a socially-aware agent.
In this paper, we incorporate social awareness into the gradient-ascent based learning algorithm. In this way, apart from learning to maximize its individual payoff, an agent is also equipped with social awareness so that it can (1) reach mutually cooperative solutions when facing other socially-aware agents (self-play); and (2) behave in a purely individually rational manner when others are purely rational.
Specifically, each SAIGA agent $i$ distinguishes two types of expected payoffs, namely $V_i^{ind}$ and $V_i^{soc}$. The payoffs $V_i^{ind}(\pi)$ and $V_i^{soc}(\pi)$ represent, respectively, the individual and the social payoff (the average payoff of all agents) that agent $i$ perceives under the joint strategy $\pi$. The payoff $V_i^{ind}$ follows the same definition as in IGA, and the payoff $V_i^{soc}$ is defined as the average of the individual payoffs of all agents,

$V_i^{soc}(\pi) = \frac{1}{n} \sum_{j=1}^{n} V_j^{ind}(\pi)$ (5)
Each agent adopts a social attitude $w_i \in [0,1]$ to reflect its socially-aware degree. The social attitude intuitively models an agent's degree of social friendliness towards others. Specifically, it is used as the weighting factor adjusting the relative importance between $V_i^{ind}$ and $V_i^{soc}$, and agent $i$'s overall expected payoff is defined as follows,

$V_i^{ov}(\pi) = (1 - w_i)\, V_i^{ind}(\pi) + w_i\, V_i^{soc}(\pi)$ (6)
Each agent updates its strategy in the direction of maximizing the value of $V_i^{ov}$. Formally, we have,

$\pi_i^{t+1} = \Pi\Big[\pi_i^t + \eta \, \frac{\partial V_i^{ov}(\pi^t)}{\partial \pi_i}\Big]$ (7)

where the parameter $\eta$ is the gradient step size. If $w_i = 0$, the agent seeks to maximize its individual payoff only, which reduces to traditional gradient-ascent updating; if $w_i = 1$, the agent seeks to maximize the sum of the payoffs of both players.
Finally, each agent $i$'s socially-aware degree $w_i$ is adaptively adjusted in response to the relative values of $V_i^{ind}$ and $V_i^{soc}$ as follows. During each round, if player $i$'s own expected payoff $V_i^{ind}$ exceeds the value of $V_i^{soc}$, then player $i$ increases its social attitude $w_i$ (i.e., it becomes more social-friendly because it perceives itself to be earning more than the average). Conversely, if $V_i^{ind}$ is less than $V_i^{soc}$, then the agent tends to care more about its own interest by decreasing the value of $w_i$. Formally,

$w_i^{t+1} = \Pi\Big[w_i^t + \lambda \big(V_i^{ind}(\pi^t) - V_i^{soc}(\pi^t)\big)\Big]$ (8)

where the parameter $\lambda$ is the learning rate of $w_i$.
4.1 Theoretical Modeling and Analysis of SAIGA
An important aspect of understanding the behavior of a multiagent learning algorithm is theoretically modeling and analyzing its underlying dynamics tuyls2003selection ; rodrigues2009dynamic ; bloembergen2015evolutionary . In this section, we first show that the learning dynamics of SAIGA under self-play can be modeled as a system of differential equations. To simplify the analysis, we consider only two-player two-action games.
Based on the adjustment rules in Equations (7) and (8), the learning dynamics of a SAIGA agent can be modeled as the set of equations in (9). For ease of exposition, we concentrate on the unconstrained update equations obtained by removing the policy projection function, which does not affect our qualitative analytical results: any trajectory with linear (nonlinear) characteristics without constraints remains linear (nonlinear) when the boundary is enforced.

$\Delta \alpha_i = \eta \, \frac{\partial V_i^{ov}(\alpha, w)}{\partial \alpha_i}, \qquad \Delta w_i = \lambda \big(V_i^{ind}(\alpha) - V_i^{soc}(\alpha)\big)$ (9)
Substituting $V_i^{ov}$ and $V_i^{soc}$ by their definitions (Equations 4 and 5), the learning dynamics of two SAIGA agents can be expressed as follows,

$\Delta \alpha_1 = \eta \Big[\big(1 - \tfrac{w_1}{2}\big)\big(\alpha_2 u_1 + d_1\big) + \tfrac{w_1}{2}\big(\alpha_2 u_2 + e_1\big)\Big]$
$\Delta \alpha_2 = \eta \Big[\big(1 - \tfrac{w_2}{2}\big)\big(\alpha_1 u_2 + d_2\big) + \tfrac{w_2}{2}\big(\alpha_1 u_1 + e_2\big)\Big]$
$\Delta w_1 = \tfrac{\lambda}{2}\big(V_1(\alpha_1, \alpha_2) - V_2(\alpha_1, \alpha_2)\big) = -\Delta w_2$ (10)

where $u_1 = r_{11} + r_{22} - r_{12} - r_{21}$, $u_2 = c_{11} + c_{22} - c_{12} - c_{21}$, $d_1 = r_{12} - r_{22}$, $d_2 = c_{21} - c_{22}$, $e_1 = c_{12} - c_{22}$, and $e_2 = r_{21} - r_{22}$.
As $\eta \rightarrow 0$ and $\lambda \rightarrow 0$, it is straightforward to show that the above difference equations become differential equations. Thus the unconstrained dynamics of the strategy pair $(\alpha_1, \alpha_2)$ and the social attitudes $(w_1, w_2)$ as a function of time are modeled by the following system of differential equations:

$\dot{\alpha}_1 = \eta \Big[\big(1 - \tfrac{w_1}{2}\big)\big(\alpha_2 u_1 + d_1\big) + \tfrac{w_1}{2}\big(\alpha_2 u_2 + e_1\big)\Big]$
$\dot{\alpha}_2 = \eta \Big[\big(1 - \tfrac{w_2}{2}\big)\big(\alpha_1 u_2 + d_2\big) + \tfrac{w_2}{2}\big(\alpha_1 u_1 + e_2\big)\Big]$
$\dot{w}_1 = \tfrac{\lambda}{2}\big(V_1 - V_2\big), \qquad \dot{w}_2 = \tfrac{\lambda}{2}\big(V_2 - V_1\big)$ (11)

where $V_1$ and $V_2$ are the expected payoffs of the two players under $(\alpha_1, \alpha_2)$.
Based on the above theoretical modeling, next we analyze the learning dynamics of SAIGA qualitatively as follows.
Theorem 4.1
SAIGA has nonlinear dynamics when $u_1 \neq u_2$, $r_{11} \neq c_{11}$, or $r_{22} \neq c_{22}$.
Proof
: From the differential equations in (11), it is straightforward to verify that the dynamics of SAIGA learners are nonlinear under these conditions, due to the product terms $w_1 \alpha_2$, $w_2 \alpha_1$, and $\alpha_1 \alpha_2$ appearing in the equations.
Since SAIGA's dynamics are nonlinear in general, we usually cannot obtain a closed-form solution, but we can still resort to solving the equations numerically to obtain useful insight into the system's dynamics. Moreover, a wide range of important games fall into the category in which the system of equations becomes linear. This allows us to use dynamical systems theory to systematically analyze the underlying dynamics of SAIGA.
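The numerical route can be illustrated by forward-Euler integration of the self-play dynamics on the PD game (step sizes and the initial social attitude below are our own choices for illustration):

```python
# Euler integration of the clipped SAIGA self-play dynamics on the PD game.
R = [[3, 0], [5, 1]]
C = [[3, 5], [0, 1]]

def clip(x):
    return min(1.0, max(0.0, x))

def V(M, a1, a2):
    return (a1 * a2 * M[0][0] + a1 * (1 - a2) * M[0][1]
            + (1 - a1) * a2 * M[1][0] + (1 - a1) * (1 - a2) * M[1][1])

u1 = R[0][0] + R[1][1] - R[0][1] - R[1][0]
u2 = C[0][0] + C[1][1] - C[0][1] - C[1][0]

a1 = a2 = 0.5
w1 = w2 = 0.9          # both players start highly socially-aware
eta = lam = 0.01
for _ in range(20000):
    g1i = a2 * u1 + (R[0][1] - R[1][1])   # dV1/da1
    g1o = a2 * u2 + (C[0][1] - C[1][1])   # dV2/da1
    g2i = a1 * u2 + (C[1][0] - C[1][1])   # dV2/da2
    g2o = a1 * u1 + (R[1][0] - R[1][1])   # dV1/da2
    d1 = (1 - w1) * g1i + w1 * (g1i + g1o) / 2
    d2 = (1 - w2) * g2i + w2 * (g2i + g2o) / 2
    v1, v2 = V(R, a1, a2), V(C, a1, a2)
    a1, a2 = clip(a1 + eta * d1), clip(a2 + eta * d2)
    w1 = clip(w1 + lam * (v1 - (v1 + v2) / 2))
    w2 = clip(w2 + lam * (v2 - (v1 + v2) / 2))

print(round(a1, 2), round(a2, 2))   # 1.0 1.0: mutual cooperation
```

With both social attitudes starting at 0.9, the trajectory settles at mutual cooperation, in contrast to the plain IGA dynamics on the same game.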
Theorem 4.2
SAIGA has linear dynamics when the game itself is symmetric.
Proof
: A two-player two-action symmetric game can be represented in the general form of Table 2. It is straightforward to check that such a game satisfies the linearity conditions, given that $r_{11} = c_{11} = a$, $r_{22} = c_{22} = d$, and $u_1 = u_2 = a + d - b - c$. Thus the theorem holds.
Table 2: The general form of a two-player two-action symmetric game (Agent 1's payoff / Agent 2's payoff).

                              Agent 2's actions
                              action 1   action 2
Agent 1's actions  action 1   a/a        b/c
                   action 2   c/b        d/d
4.2 Dynamics Analysis of SAIGA
The previous section analyzed the dynamics of SAIGA in a qualitative manner. In this section, we provide a detailed analysis of SAIGA's learning dynamics. We first establish a generalized conclusion for symmetric games, and then analyze the symmetric circumstances in two representative games: the Prisoner's Dilemma game and the Symmetric Coordination game. For asymmetric circumstances, because of the complexity of nonlinear analysis, we focus only on the general coordination game (Table 3). Specifically, we analyze SAIGA's learning dynamics in these games by identifying the existing equilibrium points, which provides useful insight into SAIGA's dynamics.
For symmetric games, we have the following conclusion,
Theorem 4.3
The dynamics of the SAIGA algorithm under self-play in a symmetric game have three types of equilibrium points:

1. pure-strategy equilibria with $(\alpha_1, \alpha_2) = (0, 0)$ or $(1, 1)$, for social attitudes $(w_1, w_2)$ satisfying the corresponding threshold conditions;
2. pure-strategy equilibria with $(\alpha_1, \alpha_2) = (0, 1)$ or $(1, 0)$, for social attitudes satisfying the corresponding threshold conditions;
3. interior equilibria with $\alpha_1 = \alpha_2$.

The first and second types of equilibrium points are stable, while the last is not. We say an equilibrium point is stable if, once the strategy starts "close enough" to the equilibrium (within some distance $\delta$ of it), it remains "close enough" to the equilibrium point forever.
Proof
: Following the system of differential equations in (11), we can express the dynamics of SAIGA in a symmetric game as follows:

$\dot{\alpha}_1 = \eta \big[\alpha_2 u + (b - d) + \tfrac{w_1}{2}(c - b)\big]$
$\dot{\alpha}_2 = \eta \big[\alpha_1 u + (b - d) + \tfrac{w_2}{2}(c - b)\big]$
$\dot{w}_1 = \tfrac{\lambda}{2}(b - c)(\alpha_1 - \alpha_2), \qquad \dot{w}_2 = -\dot{w}_1$ (12)

where $u = a + d - b - c$ and $a, b, c, d$ are the payoffs in Table 2.
We start by proving the last type of equilibrium points. If there exists an interior equilibrium, then $\dot{\alpha}_1 = \dot{\alpha}_2 = 0$ and $\dot{w}_1 = \dot{w}_2 = 0$. Solving the above equations yields $\alpha_1 = \alpha_2$ and $w_1 = w_2$, so such a point is an equilibrium. Its stability can be verified using the theory of nonlinear dynamics shilnikov2001methods . By expressing the unconstrained update differential equations in the form $\dot{x} = A x + b$, we have
After calculating matrix $A$'s eigenvalues, we find that one of them has a positive real part; hence the equilibrium is not stable. Next we turn to the cases in which the equilibria lie on the boundary. In these cases, we need to put the projection function back. For the equilibrium $(0, 0, w_1, w_2)$, the stated conditions together with the unconstrained update differential equations (12) imply $\dot{w}_1 = \dot{w}_2 = 0$, so $w_1$ and $w_2$ remain unchanged; moreover, the projected derivatives of $\alpha_1$ and $\alpha_2$ are non-positive, so both remain at 0, and the point is an equilibrium.
Moreover, by continuity there exists a neighborhood of the equilibrium within which the derivatives of $\alpha_1$ and $\alpha_2$ remain negative, so the strategies are driven back and stabilize at 0; the social attitudes also remain stable, and thus the equilibrium is stable.
The case $(1, 1, w_1, w_2)$ can be proved similarly and is omitted here.
For the remaining boundary cases, e.g., $(0, 1, w_1, w_2)$, if such a point were an equilibrium with $w_i$ in the interior, the unconstrained update differential equations (12) would give $\dot{w}_i \neq 0$, which means $w_i$ would keep changing until $w_i = 0$ or $w_i = 1$. If such a boundary point is an equilibrium, substituting into Equations (12) yields the corresponding conditions. The other cases are analogous, and thus we omit them here.
The stability of the second type of equilibria can be proved in the same way as for the first type, and is omitted here.
From Theorem 4.3, we know that there are three types of equilibria when both players play the SAIGA policy, of which only the first and second types are stable. Besides, all equilibria of the first two types are pure strategies, i.e., the probability of selecting action 1 for each agent equals 0 or 1. Notably, the ranges of the social attitudes $w_i$ in these three types of equilibria may overlap, so that the final convergence of the algorithm also depends on the value of $w_i$. Next we concentrate on two representative symmetric games in detail: the Prisoner's Dilemma (PD) game and the Symmetric Coordination game.
The Prisoner's Dilemma (PD) game is a symmetric game whose parameters satisfy $c > a > d > b$ (in the notation of Table 2). Combining this with Theorem 4.3, we have the following conclusion,
Corollary 1
The dynamics of the SAIGA algorithm in the Prisoner's Dilemma (PD) game have two types of stable equilibrium points:

1. $(\alpha_1, \alpha_2, w_1, w_2) = (1, 1, w_1, w_2)$, if $w_i \geq \frac{2(c - a)}{c - b}$ for $i \in \{1, 2\}$;

2. $(\alpha_1, \alpha_2, w_1, w_2) = (0, 0, w_1, w_2)$, if $w_i \leq \frac{2(d - b)}{c - b}$ for $i \in \{1, 2\}$.
Proof
: Because the PD game is symmetric, we can use the conclusions of Theorem 4.3 directly. From Theorem 4.3, the candidate stable equilibrium points are of two types: the symmetric pure-strategy profiles $(0, 0)$ and $(1, 1)$, under the corresponding conditions on $w_1$ and $w_2$, and the asymmetric pure-strategy profiles $(0, 1)$ and $(1, 0)$, under their corresponding conditions.
For the first type of equilibrium, substituting the PD conditions $c > a > d > b$ into the formulas above, we have: if $w_i$ exceeds the corresponding threshold, then $(1, 1, w_1, w_2)$ is a stable equilibrium; if $w_i$ is below the corresponding threshold, then $(0, 0, w_1, w_2)$ is a stable equilibrium.
For the second type of equilibrium, taking the PD conditions into consideration, we find that the required conditions conflict with each other, which means that no equilibrium of this type exists in the Prisoner's Dilemma (PD) game.
Intuitively, for a PD game, Corollary 1 tells us that if both SAIGA players are initially sufficiently social-friendly (the value of $w$ is larger than a certain threshold), then they will always converge to mutual cooperation, $(C, C)$. In other words, given that the value of $w$ exceeds a certain threshold, the strategy point $(\alpha_1, \alpha_2) = (1, 1)$ in the strategy space is asymptotically stable. If both players start with a low socially-aware degree ($w$ smaller than a certain threshold), then they will always converge to mutual defection, $(D, D)$. For the remaining cases, there exist an infinite number of equilibrium points in between the above two extremes, none of which is stable, meaning that the learning dynamics will never converge to those equilibrium points.
Next we turn to the dynamics of SAIGA in the Symmetric Coordination game. The general form of a coordination game is shown in Table 3. From the table, we can see that the coordination game is asymmetric if any of the following conditions is met: $R \neq r$, $P \neq p$, or $(S, T) \neq (s, t)$. We analyze a simplified game first, i.e., the Symmetric Coordination game; the general coordination game will be analyzed later. Similar to the analysis of Theorem 4.3, we have,
Table 3: The general form of a coordination game (Agent 1's payoff / Agent 2's payoff).

                              Agent 2's actions
                              1          2
Agent 1's actions    1        R/r        S/t
                     2        T/s        P/p
Corollary 2
The dynamics of the SAIGA algorithm in a symmetric coordination game have two types of stable equilibrium points:

1. $(\alpha_1, \alpha_2, w_1, w_2) = (1, 1, w_1, w_2)$, with $w_1, w_2$ satisfying the corresponding threshold condition of Theorem 4.3;

2. $(\alpha_1, \alpha_2, w_1, w_2) = (0, 0, w_1, w_2)$, with $w_1, w_2$ satisfying the corresponding threshold condition of Theorem 4.3.
Proof
: The proof is the same as that of Theorem 4.3, and thus we omit it here.
Intuitively, for a Symmetric Coordination game, Corollary 2 shows that there are two types of stable equilibria when both players play the SAIGA policy, which means the players eventually converge to the action pair $(1, 1)$ or $(2, 2)$, i.e., the Nash equilibria of the Symmetric Coordination game. Besides, because the final convergence of the algorithm depends on the combined effect of $w$ and $\alpha$, we cannot give a theoretical characterization of the conditions under which the algorithm converges to the socially optimal outcome of a symmetric coordination game. In fact, the experimental simulations in the following section show that SAIGA converges to the socially optimal outcome with high probability.
Now we turn to the asymmetric case. As mentioned before, SAIGA under an asymmetric game may have nonlinear dynamics, which makes theoretical analysis much harder. For this reason, we analyze only the general coordination game, a typical asymmetric game.
Theorem 4.4
The dynamics of the SAIGA algorithm in a general coordination game have three types of equilibrium points:

1. $(\alpha_1, \alpha_2) = (1, 1)$, with the social attitudes $(w_1, w_2)$ lying in ranges determined by the payoff parameters;

2. $(\alpha_1, \alpha_2) = (0, 0)$, with the social attitudes $(w_1, w_2)$ lying in analogous ranges;

3. interior (non-boundary) equilibrium points.
The first and second types of equilibrium points are stable, while the last type of (non-boundary) equilibrium points is not.
Proof
: Following the system of differential equations in (11), we can express the dynamics of SAIGA in the general coordination game as follows:

$\dot{\alpha}_1 = \eta \big[\big(1 - \tfrac{w_1}{2}\big)\big(\alpha_2 u_1 + S - P\big) + \tfrac{w_1}{2}\big(\alpha_2 u_2 + t - p\big)\big]$
$\dot{\alpha}_2 = \eta \big[\big(1 - \tfrac{w_2}{2}\big)\big(\alpha_1 u_2 + s - p\big) + \tfrac{w_2}{2}\big(\alpha_1 u_1 + T - P\big)\big]$
$\dot{w}_1 = \tfrac{\lambda}{2}\big(V_1 - V_2\big), \qquad \dot{w}_2 = -\dot{w}_1$ (13)

where $u_1 = R + P - S - T$, $u_2 = r + p - t - s$, and $V_1, V_2$ are the players' expected payoffs. We can see that the dynamics of the coordination game are nonlinear in general. We start by proving the last type of equilibrium points first:
If there exists an interior equilibrium, then $\dot{\alpha}_1 = \dot{\alpha}_2 = 0$ and $\dot{w}_1 = \dot{w}_2 = 0$. Linearizing the unconstrained update differential equations into the form $\dot{x} = A x + b$ at such a point, we obtain the Jacobian matrix $A$, whose entries are functions of the payoffs and of the equilibrium values of $\alpha_i$ and $w_i$. Without loss of generality, we set $\eta = \lambda$.
After calculating matrix $A$'s eigenvalues in Matlab, we find one eigenvalue equal to 0, one with positive real part, one with negative real part, and one close to 0. Since there exists an eigenvalue with positive real part, the equilibrium is not stable shilnikov2001methods .
Next we turn to prove the first type of equilibrium. In this case, we need to put the projection function back since we are dealing with boundary cases.
For the case $(1, 1, w_1, w_2)$, the stated conditions imply that the projected derivatives of $\alpha_1$ and $\alpha_2$ are non-negative, so $\alpha_1$ and $\alpha_2$ remain at 1; moreover, since $\alpha_1 = \alpha_2$, we have $\dot{w}_1 = \dot{w}_2 = 0$, so $w_1$ and $w_2$ remain unchanged. According to the continuity theorem of differential equations Coddington1955Theory , this is a stable equilibrium. The case $(0, 0, w_1, w_2)$ can be proved similarly, which is omitted here.
For the boundary cases of the social attitudes, analogous reasoning applies: the stated conditions keep the projected derivatives pointing toward the boundary, and by the continuity theorem of differential equations the corresponding point is a stable equilibrium. The stability of the second type of equilibrium points can be proved similarly, which is omitted here.
5 A Practical Algorithm
In SAIGA, each agent needs to know the policies of others and the payoff function, which are usually not available before a repeated game starts. Based on the idea of SAIGA, we relax these assumptions and propose a practical multiagent learning algorithm called Socially-Aware Policy Gradient Ascent (SAPGA). The overall flow of SAPGA is shown in Algorithm 2. In SAPGA, each agent only needs to observe the payoffs of both agents at the end of each round.
Recall that in SAIGA, agent $i$'s policy (the probability of selecting each action) is updated based on the partial derivative of the expected value $V_i^{ov}$, while the social attitude $w_i$ is adjusted according to the relative values of $V_i^{ind}$ and $V_i^{soc}$. In SAPGA, we instead estimate these values using Q-values, which are updated from the immediate payoffs received during repeated interactions. Specifically, each agent keeps a record of the Q-value of each action, both for its own payoff and for the average payoff of all agents (Step 5). Both Q-values are updated following the Q-learning update rule at the end of each round (Step 5). Then the overall Q-value of each action is calculated as the average of the two, weighted by the social attitude $w_i$ (Step 5). The policy update strategy is the same as in Table 1 (Step 6). Finally, the social attitude of agent $i$ is updated in Step 7: the values of $V_i^{ind}$ and $V_i^{soc}$ are estimated from the current policy and Q-values, and the updating direction of $w_i$ is given by the difference between them. Note that a SAPGA player needs to know only its own reward and the average reward of all agents in each interaction. Knowing the average reward of a group is a reasonable assumption in many realistic scenarios, such as elections and voting.
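The steps above can be sketched as a small SAPGA-style learner for a stateless two-action game. This is an illustrative reading of the algorithm, not the paper's exact pseudocode; class, attribute, and parameter names (`Qi`, `Qs`, `alpha`, `delta`, `lam`) are our own:

```python
import random

class SAPGA:
    """Sketch of a socially-aware policy-gradient learner (2 actions)."""
    def __init__(self, n_actions=2, alpha=0.1, delta=0.02, lam=0.02):
        self.Qi = [0.0] * n_actions   # Q-values of the agent's own reward
        self.Qs = [0.0] * n_actions   # Q-values of the average reward
        self.pi = [1.0 / n_actions] * n_actions
        self.w = 0.5                  # social attitude
        self.alpha, self.delta, self.lam = alpha, delta, lam

    def act(self):
        return 0 if random.random() < self.pi[0] else 1

    def update(self, a, r_own, r_avg):
        # Step 5: Q-learning updates for individual and social estimates.
        self.Qi[a] += self.alpha * (r_own - self.Qi[a])
        self.Qs[a] += self.alpha * (r_avg - self.Qs[a])
        # Step 5: overall Q-value weighted by the social attitude w.
        Qo = [(1 - self.w) * qi + self.w * qs
              for qi, qs in zip(self.Qi, self.Qs)]
        # Step 6: PHC-style move toward the overall-greedy action.
        greedy = max(range(len(Qo)), key=lambda i: Qo[i])
        for i in range(len(self.pi)):
            step = self.delta if i == greedy else -self.delta
            self.pi[i] = min(1.0, max(0.0, self.pi[i] + step))
        s = sum(self.pi)
        self.pi = [p / s for p in self.pi]
        # Step 7: adjust w by the estimated individual-vs-social difference.
        vi = sum(p * q for p, q in zip(self.pi, self.Qi))
        vs = sum(p * q for p, q in zip(self.pi, self.Qs))
        self.w = min(1.0, max(0.0, self.w + self.lam * (vi - vs)))
```

Only `r_own` and `r_avg` are consumed per round, matching the observation assumption stated above.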
6 Experimental Evaluation
This section is divided into three parts. Subsection 6.1 compares SAIGA and SAPGA in simulations of different types of two-agent, two-action, general-sum games. Subsection 6.2 presents the experimental results for the 2x2 benchmark games, specifically the performance of converging to socially optimal outcomes and of playing against selfish agents. Subsection 6.3 presents the experimental results for games with multiple agents, i.e., the public goods game Andreoni1998Partners .
6.1 Simulation comparison of SAIGA and SAPGA
We start the performance evaluation by analyzing the learning performance of SAPGA in two-player two-action repeated games. In general, a two-player two-action game can be classified into three categories tuyls2006evolutionary :

1. $(r^1_{11} - r^1_{21})(r^1_{12} - r^1_{22}) > 0$ or $(r^2_{11} - r^2_{12})(r^2_{21} - r^2_{22}) > 0$. In this case, at least one player has a dominant strategy and thus the game has only one pure strategy NE.

2. $(r^1_{11} - r^1_{21})(r^1_{12} - r^1_{22}) < 0$, $(r^2_{11} - r^2_{12})(r^2_{21} - r^2_{22}) < 0$, and $(r^1_{11} - r^1_{21})(r^2_{11} - r^2_{12}) > 0$. In this case, there are two pure strategy NEs and one mixed strategy NE.

3. $(r^1_{11} - r^1_{21})(r^1_{12} - r^1_{22}) < 0$, $(r^2_{11} - r^2_{12})(r^2_{21} - r^2_{22}) < 0$, and $(r^1_{11} - r^1_{21})(r^2_{11} - r^2_{12}) < 0$. In this case, there exists only one mixed strategy NE.

Here $r^i_{jk}$ is the payoff of player $i$ when player 1 takes action $j$ and player 2 takes action $k$. We select one representative game from each category for illustration.
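The classification just described can be expressed as a small checker (the dominance-test conditions follow the standard taxonomy of 2x2 games; the function and its name are ours):

```python
def classify(R, C):
    """Classify a 2x2 game; R[j][k], C[j][k] are the two players' payoffs
    when player 1 plays action j and player 2 plays action k."""
    d1 = (R[0][0] - R[1][0]) * (R[0][1] - R[1][1])   # player 1 dominance test
    d2 = (C[0][0] - C[0][1]) * (C[1][0] - C[1][1])   # player 2 dominance test
    if d1 > 0 or d2 > 0:
        return 1          # a dominant strategy exists: one pure NE
    if (R[0][0] - R[1][0]) * (C[0][0] - C[0][1]) > 0:
        return 2          # two pure NEs plus one mixed NE
    return 3              # only one mixed NE

print(classify([[3, 0], [5, 1]], [[3, 5], [0, 1]]))   # 1: the PD game
```

A coordination game falls into category 2 and matching pennies into category 3 under these tests.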
6.1.1 Category 1
For category 1, we consider the PD game shown in Table 1. In this game, both players have the dominant strategy $D$, and $(D, D)$ is the only pure strategy NE, while there also exists a socially optimal outcome $(C, C)$ under which both players obtain higher payoffs.
Figure 1(a) shows the learning dynamics of the practical SAPGA algorithm playing the PD game. The x-axis represents player 1's probability of playing action $C$ and the y