Sustained cooperation among multiple individuals is a hallmark of human social behavior, and may even underpin the evolution of our intelligence Reader and Laland (2002); Dunbar (1993). Often, individuals must sacrifice some personal benefit for the long-term good of the group, for example to manage a common fishery or provide clean air. Logically, such problems seem insoluble without the imposition of some extrinsic incentive structure Olson (1965). Nevertheless, small-scale societies show a remarkable aptitude for self-organization to resolve public goods and common pool resource dilemmas Ostrom (1990). Reciprocity provides a key mechanism for the emergence of collective action, since it rewards pro-social behavior and punishes anti-social acts. Indeed, it is a common norm shared by diverse societies Becker (1990); Ostrom (1998); Blau (1964); Thibaut and Kelley (1966). Moreover, laboratory studies find experimental evidence for conditional cooperation in public goods games; see for example Croson et al. (2005).
By far the most well-known model of reciprocity is Rapoport’s Tit-for-Tat Rapoport et al. (1965). This hand-coded algorithm was designed to compete in tournaments where each round consisted of playing the repeated Prisoner’s Dilemma game against an unknown opponent. The algorithm cooperates on its first move, and thereafter mimics the previous move of its partner, by definition displaying perfect reciprocity. Despite its simplicity, Tit-for-Tat was victorious in the tournaments Axelrod (1980a, b). Axelrod Axelrod (1984) identified four key features of the Tit-for-Tat strategy which contributed to its success, which later successful algorithms such as win-stay lose-shift Nowak and Sigmund (1993) also share:
Nice: start by cooperating.
Clear: be easy to understand and adapt to.
Provocable: retaliate against anti-social behavior.
Forgiving: cooperate when faced with pro-social play.
Later, we will think of these as design principles for models of reciprocity.
Although Tit-for-Tat and its variants have proved resilient to modifications in the matrix game setup Duersch et al. (2013); Boyd (1989); Nowak (2006), they are clearly not applicable to realistic situations. In general, cooperating and defecting require an agent to carry out complex sequences of actions across time and space, and the payoffs defining the social dilemma may be delayed. More sophisticated models of reciprocity should be applicable to the multi-agent reinforcement learning domain, where the tasks are defined as Markov games Shapley (1953); Littman (1994), and the nature of cooperation is not pre-specified. In this setting, agents must learn both the high-level strategy of reciprocity and the low-level policies required for implementing (gradations of) cooperative behavior. An important class of such games are intertemporal social dilemmas Pérolat et al. (2017); Hughes et al. (2018), in which individual rationality is at odds with group-level outcomes, and the negative impact of individual greed is temporally distant from the greedy behavior itself.
Lerer and Peysakhovich (2017); Kleiman-Weiner et al. (2016); Peysakhovich and Lerer (2017) propose reinforcement learning models for 2-agent problems, based on a planning approach. These models first pre-train cooperating and defecting policies using explicit knowledge of other agents’ rewards. The policies are then used as options in hand-coded meta-policies. The main variation between the approaches is the algorithm for switching between these options. In Lerer and Peysakhovich (2017) and Kleiman-Weiner et al. (2016), the choice of policy is made in response to the partner’s last action, which is assessed through planning. In Peysakhovich and Lerer (2017), the decision is based on the recent rewards of the agent, which defects if it is not doing sufficiently well. These models of reciprocity are important stepping stones, but have some important limitations. Firstly, reciprocity should apply to a range of behaviors, rather than to pre-determined ones (pure cooperation and pure defection), which are not obviously well-defined in general. Secondly, reciprocity should be applicable beyond the 2-player case. Thirdly, reciprocity should not necessarily rely on directly observing the rewards of others. Finally, reciprocity should be learnable online, offering better scalability and flexibility than planning.
We propose an online-learning model of reciprocity which addresses these limitations while still significantly outperforming selfish baselines in the multi-player setting. Our setup comprises two types of reinforcement learning agents, innovators and imitators. Both are trained using the A3C algorithm Mnih et al. (2016) with the V-trace correction Espeholt et al. (2018). An innovator optimizes for a purely selfish reward. An imitator has two components: (1) a mechanism for measuring the level of sociality of different behaviors, and (2) an intrinsic motivation Chentanez et al. (2005) for matching the sociality of others. We investigate two mechanisms for assessing sociality. The first is based on hand-crafted features of the environment. The other uses a learned “niceness network”, which estimates the effect of one agent’s actions on another agent’s future returns, hence providing a measure of social impact Latané (1981). The niceness network also encodes a social norm among the imitators, for it represents “a standard of behavior shared by members of a social group” (Encyclopaedia Britannica, 2008). Hence our work represents a model-based generalization of Sen and Airiau (2007) to Markov games.
An innovator’s learning is affected by the reciprocity displayed by imitators co-training in the same environment. The innovator is incentivized to behave pro-socially, because otherwise its anti-social actions are quickly repeated by the group, leading to a bad payoff for all, including the innovator. With one innovator and one imitator, the imitator learns to respond in a Tit-for-Tat-like fashion, which we verify in the dyadic Coins dilemma Lerer and Peysakhovich (2017). For one innovator with many imitators, the setup resembles the phenomenon of pro-social leadership Henrich et al. (2015); Gächter and Renner (2018). Natural selection favours altruism when individuals exert influence over their followers; we see an analogous effect in the reinforcement learning setting.
More specifically, we find that the presence of imitators elicits socially beneficial outcomes for the group (5 players) in both the Harvest (common pool resource appropriation) and Cleanup (public good provision) environments Hughes et al. (2018). We also quantify the social influence of the innovator on the imitators by training a graph network Battaglia et al. (2016) to predict future actions for all agents, and examining the edge norms between the agents, as in Tacchetti et al. (2018). This demonstrates that the influence of the innovator on the imitators is greater than the influence between other pairs of agents in the environment. Moreover, we find that the innovator’s policy returns to selfish free-riding when we continue training without reciprocating imitators, showing that reciprocity is important for maintaining the stability of a learning equilibrium. Finally, we demonstrate that the niceness network learns an appropriate notion of sociality in the dyadic Coins environment, thereby inducing a Tit-for-Tat-like strategy.
2. Agent models
We use two types of reinforcement learning agent, innovators and imitators. Innovators learn purely from the environment reward. Imitators learn to match the sociality level of innovators, hence demonstrating reciprocity. We based the design of the imitators on Axelrod’s principles, which in our language become:
Nice: imitators should not behave anti-socially at the start of training, else innovators may not discover pro-sociality.
Clear: imitation must occur within the timescale of an episode, else innovators will be unable to adapt.
Provocable: imitators must reciprocate anti-social behavior from innovators.
Forgiving: the discount factor with which anti-social behavior is remembered must be less than 1.
Note that imitating the policy of another agent over many episodes is not sufficient to produce cooperation. This type of imitation does not change behaviour during an episode based on the other agent’s actions, and so does not provide feedback which enables the innovators to learn. We validate this in an ablation study. As such, our methods are complementary to, but distinct from, the extensive literature on imitation learning; see Hussein et al. (2017) for a survey.
Moreover, observe that merely training with collective reward for all agents is inferior to applying reciprocity in several respects. Firstly, collective reward suffers from a lazy-agent problem due to spurious reward Sunehag et al. (2018). Secondly, it produces agent societies that are exploitable by selfish learning agents, who free-ride on the contributions of others. Finally, in many real-world situations, agents do not have direct access to the reward functions of others.
We use two variants of the imitator, which differ in what is being imitated. The metric-matching imitator imitates a hand-coded metric; the niceness network imitator instead learns what to imitate. The metric-matching variant allows for more experimenter control over the nature of reciprocity, at the expense of generality. Moreover, it allows us to disentangle the learning dynamics which result from reciprocity from the learning of reciprocity itself, a scientifically useful tool. The niceness network, on the other hand, can readily be applied to any multi-agent reinforcement learning environment, with no prior knowledge required.
2.2.1. Reinforcement learning
Imitators learn their policy using the same algorithm and architecture as innovators, with an additional intrinsic reward to encourage reciprocation. Consider first the case with one innovator and one imitator; the general case will follow easily. Let the imitator have trajectory $\tau^{\text{im}}_t$ and the innovator trajectory $\tau^{\text{in}}_t$ at time $t$. Then the intrinsic reward is defined as
$$r^{\text{intr}}_t = -\left| f(\tau^{\text{im}}_t) - f(\tau^{\text{in}}_t) \right|,$$
where $f$ is some function of the trajectory, intended to capture the effect of the actions in the trajectory on the return of the other agent. We refer to $f(\tau)$ as the niceness of the agent whose trajectory is under consideration. This intrinsic reward is added to the environment reward. We normalize the intrinsic reward so that it accounts for a proportion $\alpha$ of the total absolute reward in each batch:
$$\hat{r}^{\text{intr}}_t = \alpha \, \frac{\overline{|r^{\text{env}}|}}{\overline{|r^{\text{intr}}|}} \, r^{\text{intr}}_t,$$
where the means are taken over a batch of experience, and $\alpha$ is a constant hyperparameter which we tuned separately on each environment.
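As a sketch of this computation (the function and variable names here are ours, not the paper's), the intrinsic reward and its batch normalization might look like:

```python
import numpy as np

def intrinsic_reward(niceness_imitator, niceness_innovator):
    # r_intr = -|f(tau_im) - f(tau_in)|: zero when sociality levels match,
    # increasingly negative as they diverge.
    return -abs(niceness_imitator - niceness_innovator)

def normalize_intrinsic(r_intr_batch, r_env_batch, alpha):
    # Rescale the intrinsic rewards so that their mean magnitude is a
    # proportion alpha of the mean magnitude of the environment reward.
    r_intr_batch = np.asarray(r_intr_batch, dtype=float)
    r_env_batch = np.asarray(r_env_batch, dtype=float)
    scale = alpha * np.mean(np.abs(r_env_batch)) / (np.mean(np.abs(r_intr_batch)) + 1e-8)
    return scale * r_intr_batch
```

The small epsilon in the denominator is our own guard against an all-zero intrinsic-reward batch.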
Generalizing to the case of one innovator with many imitators is simple: we apply the intrinsic reward to each imitator separately, based on the deviation between its niceness and that of the innovator. Since our method uses online learning, it automatically adapts to the details of multi-agent interaction in different environments. This is difficult to capture in planning algorithms, because the correct cooperative policy for interactions with one agent depends on the policies of all the other agents.
The two imitator variants differ primarily in the choice of the niceness function $f$, as follows.
2.2.2. Metric matching
For the metric-matching imitator, we hand-code $f$ for trajectories in each environment in a way that measures the pro-sociality of an agent’s behavior. If these metrics are accurate, this should lead to robust reciprocity, giving us a useful comparison for the niceness network imitators.
2.2.3. Niceness network
The niceness network estimates the value of the innovator’s states and actions to the imitator. Let $R^{\text{im}}_t$ be the discounted return to the imitator from time $t$. We learn approximations to the following functions:
$$V(s_t) = \mathbb{E}\left[ R^{\text{im}}_t \mid s_t \right], \qquad Q(s_t, a_t) = \mathbb{E}\left[ R^{\text{im}}_t \mid s_t, a_t \right],$$
where $s_t$ and $a_t$ are the state and action of the innovator at time $t$. Clearly this requires access to the states and actions of the innovator. This is not an unreasonable assumption when compared with human cognition; indeed, there is neuroscientific evidence that the human brain automatically infers the states and actions of others Mitchell (2009). Extending our imitators to model the states and actions of innovators would be a valuable extension, but is beyond our scope here.
The niceness of action $a_t$ is defined as
$$n(s_t, a_t) = Q(s_t, a_t) - V(s_t).$$
This quantity estimates the expected increase in the discounted return of the imitator given the innovator’s action $a_t$.
Then, for a generic trajectory $\tau_t = (s_0, a_0, \ldots, s_t, a_t)$, we define the niceness of the trajectory, $f(\tau_t)$, to be
$$f(\tau_t) = \sum_{u=0}^{t} \lambda^{t-u} \, n(s_u, a_u).$$
This is used as the function $f$ in calculating the intrinsic reward above.
The parameter $\lambda$ controls the timescale on which the imitation happens; the larger $\lambda$ is, the slower the imitator is to forget. This balances between the criteria of provocability and forgiveness.
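A minimal sketch of the niceness computation (our own illustrative code, assuming per-step Q and V estimates are already available):

```python
def action_niceness(q_value, v_value):
    # n(s, a) = Q(s, a) - V(s): the estimated boost to the imitator's
    # return attributable to the innovator's chosen action.
    return q_value - v_value

def trajectory_niceness(step_nicenesses, lam):
    # f(tau_t) = sum_u lam**(t-u) * n(s_u, a_u), computed incrementally
    # over the steps of the trajectory (oldest first). Larger lam means
    # slower forgetting: more provocable, less forgiving.
    total = 0.0
    for n in step_nicenesses:
        total = lam * total + n
    return total
```

The incremental update is equivalent to the discounted sum and allows the niceness to be maintained online as the episode unfolds.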
We learn the functions $V$ and $Q$ using standard temporal-difference learning Sutton and Barto (1998), from the innovator’s states and actions and the imitator’s reward.
While the niceness network is trained only on the states and actions of the innovator, in calculating the intrinsic reward it is used for inference on both imitator and innovator trajectories. For this to work we require symmetry between the imitator and the innovator: they must have the same state-action space, the same observation for a given state and the same payoff structure. Therefore our approach is not applicable in asymmetric games. Cooperation in asymmetric games may be better supported by indirect reciprocity than by generalizations of Tit-for-Tat; see for example Johnstone and Bshary (2007).
2.2.4. Off-policy correction
When calculating the intrinsic reward for the niceness network imitator, we evaluate $f$ on the imitator’s trajectories, having trained only on trajectories from the innovator. This is problematic if the states and actions of the imitator are drawn from a different distribution from those of the innovator. In that case, on the imitator’s trajectory, we might expect $V$ and $Q$ to be inaccurate estimates of the effect of the imitator on the innovator’s payoff. In other words, the flip of perspective from innovator to imitator at inference time is only valid if the imitator’s policy is sufficiently closely aligned with that of the innovator.
In practice, we find that applying $Q$ to the imitator’s actions is particularly problematic. Specifically, if the imitator’s policy contains actions which are very rare in the innovator’s policy, then the estimate of $Q$ for these actions is not informative. To correct this issue, we add a bias for policy imitation to the model. The imitator estimates the policy of the innovator, giving an estimate $\hat{\pi}^{\text{in}}(\cdot \mid s)$ for each state $s$. We then add an additional KL-divergence loss for policy imitation,
$$L_{\text{imit}} = \beta \, D_{\mathrm{KL}}\!\left( \pi^{\text{im}}(\cdot \mid s) \,\|\, \hat{\pi}^{\text{in}}(\cdot \mid s) \right),$$
where $\pi^{\text{im}}$ is the policy of the imitator, and $\beta$ is a constant hyperparameter. The effect of this loss term is to penalize actions that are very unlikely under $\hat{\pi}^{\text{in}}$; these are the actions for which the niceness network is unable to produce a useful estimate.
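For concreteness, a sketch of this loss for discrete action distributions (the names and the epsilon smoothing are ours):

```python
import numpy as np

def policy_imitation_loss(pi_imitator, pi_innovator_est, beta, eps=1e-8):
    # beta * KL(pi_im || pi_in_hat). Actions with substantial imitator
    # probability but near-zero estimated innovator probability dominate
    # the penalty, discouraging off-distribution behavior.
    p = np.asarray(pi_imitator, dtype=float)
    q = np.asarray(pi_innovator_est, dtype=float)
    return beta * float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))
```

Note the direction of the divergence: placing the imitator’s policy first is what makes rare-under-the-innovator actions expensive.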
In our ablation study, we will demonstrate that the policy imitation term alone does not suffice to yield cooperation. Indeed, our choice of terminology is based on the high-level bias for reciprocity introduced by the niceness matching, not the low-level policy imitation. The latter might better be thought of as an analogue of human behavioral mimicry Chartrand and Lakin (2013), which, while an important component of human social interaction Lakin et al. (2003), does not alone constitute a robust means of generating positive social outcomes Hale and de C. Hamilton (2016).
Empirically, we find that niceness matching alone often suffices to generate cooperative outcomes, despite the off-policy problem. This is likely because environments contain some state-action pairs which are universally advantageous or disadvantageous to others, regardless of the particular policy details. Moreover, this is not a limitation of our environments; it is a feature familiar from everyday life: driving a car without a catalytic converter is bad for society, regardless of the time and place. The policy imitation correction does, however, serve to stabilize cooperative outcomes, a suggested effect of mimicry in human social groups Tanner et al. (2008).
We test our imitation algorithms in three domains. The first is Coins, a 2-player environment introduced in Lerer and Peysakhovich (2017). This environment has simple mechanics and a strong social dilemma between the two players, similar to the Prisoner’s Dilemma. This allows us to study our algorithms in a setup close to the Prisoner’s Dilemma, and to make comparisons with previous work.
The other two environments are Harvest and Cleanup. These are more complex environments, with delayed results of actions, partial observability of a somewhat complex gridworld, and more than two players. These environments are designed to test the main hypothesis of this paper, that our algorithms are able to learn to reciprocate in complex environments where reciprocity is temporally extended and hard to define. We choose these two environments because they represent different classes of social dilemma; Cleanup is a public goods game, while Harvest is a common pool resource dilemma.
We use the gridworld game Coins, introduced in Lerer and Peysakhovich (2017). Here, two players move on a fully-observed gridworld, on which coins of two colors periodically appear. A player who picks up a coin of either color receives a positive reward; if the coin is of the other player’s color, the other player receives a negative reward. The episode ends after a fixed number of steps. The total return is maximized when each player picks up only coins of their own color, but players are tempted to pick up coins of the wrong color. At each timestep, a coin spawns in each unoccupied square with a small fixed probability, so the maximum achievable collective return in expectation is attained when neither agent defects and both agents collect all coins of their own color.
In this game, the metric-matching agent uses the number of recent defections as its measure of niceness. We define $d(s_t, a_t)$ to be $1$ if the action picks up a coin which penalizes the other player, and $0$ otherwise. Then we define
$$f(\tau_t) = -\sum_{u=0}^{t} \lambda^{t-u} \, d(s_u, a_u).$$
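As an illustration (our own sketch), this metric is a discounted count of defections with a flipped sign:

```python
def coins_niceness(defection_flags, lam):
    # f(tau_t) = -sum_u lam**(t-u) * d(s_u, a_u), where d = 1 iff the step's
    # action picked up a coin of the partner's color. Recent defections
    # weigh most heavily, and the metric decays towards zero (forgiveness).
    total = 0.0
    for d in defection_flags:  # oldest step first
        total = lam * total + d
    return -total
```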
To make the environment tractable for our niceness network, we symmetrize the game by swapping the coin colors in the observation of the innovator. This means that the $V$ and $Q$ functions learned for the innovator can be consistently applied to the imitator. Note that only the colors in the observation are swapped; the underlying game dynamics remain the same, so the social dilemma still applies.
Both the metric-matching and niceness network imitators outperformed the greedy baseline in this environment, reaching greater overall rewards and a larger proportion of coins collected by the agent which owns them (Figure 2). However, neither model reached the near-perfect cooperation achieved by the approximate Markov Tit-for-Tat algorithm Lerer and Peysakhovich (2017), as shown in Figure 3 of that paper. We do not provide a numerical comparison, because we reimplemented the Coins environment for this paper.
This suggests that when it is possible to pre-learn policies that purely cooperate or defect and roll these out into the future, it is advantageous to leverage this prior knowledge to generate precise and extreme reciprocity. One might imagine improving our algorithm to display binary imitation behavior by attempting to match the maximum and minimum of recent niceness, rather than a discounted history. It would be interesting to see whether this variant elicited more cooperative behavior from innovators.
In the Cleanup environment, the aim is to collect apples, each of which provides a reward of 1 to the agent which collects it. Apples spawn at a rate determined by the state of a geographically separated river. Over time, this river fills with waste, linearly lowering the rate of apple spawning; for high enough levels of waste, no apples can spawn. The episode starts with the level of waste slightly above this critical point. Agents can act to clean the waste when near the river, which provides no reward but is necessary for any apples to spawn. The episode ends after a fixed number of steps, and the map is reset to its initial state. For details of the environment hyperparameters, and evidence that this is a social dilemma, see Hughes et al. (2018).
More precisely, this is a public goods dilemma. If some agents are contributing to the public good by clearing waste from the river, there is an incentive to stay in the apple spawning region to collect apples as they spawn. However, if all players adopt this strategy, then no apples spawn and there is no reward.
In this game, the metric-matching agent uses the number of recent contributions to the public good as its measure of niceness: for a given state and action, $c(s_t, a_t)$ is $1$ if the action removes waste from the river, and $0$ otherwise. Then we define
$$f(\tau_t) = \sum_{u=0}^{t} \lambda^{t-u} \, c(s_u, a_u).$$
Both metric-matching and niceness network imitators are able to induce pro-social behaviour in the innovator they play alongside, greatly exceeding the return and contributions to the public good of selfish agents (Figure 4). Niceness network imitators come close to the final performance of metric-matching imitators, despite having to learn online which actions are pro-social. A representative episode from the niceness network case reveals the mechanism by which the society solves the social dilemma (a video is available at https://youtu.be/kCjYfdVlLC8). The innovator periodically leads an expedition to clean waste, and is subsequently joined by multiple imitators. Everyone benefits from this regular cleaning, since many apples are made available (and consumed) throughout the episode.
In the Harvest environment, introduced in Pérolat et al. (2017), collecting apples again provides a reward of 1. Harvested apples regrow at a rate determined by the number of nearby apples: the more other apples are present, the more likely an apple is to regrow on a given timestep. If all the apples in a block are harvested, none of them will ever regrow. The episode ends after a fixed number of steps, and the map is reset to its initial state. For details of the environment hyperparameters, and evidence that this is a social dilemma, see Hughes et al. (2018).
This is a commons dilemma Hardin (1968). A selfish agent will harvest as rapidly as possible; if the whole group adopts this approach, then the resource is quickly depleted and the return over the episode is low. In order to get a high group return, agents must abstain from harvesting apples which would overexploit the common pool resource.
In this game, there is no clear metric of pro-sociality, so the metric we use for the metric-matching agent is approximate. The anti-social actions in this game are those that take apples with few neighbours, as this depletes the common pool resource. The sustainability of taking a particular apple can therefore be approximated by the number of other apples within a fixed distance of it, capped at a maximum value. We call this quantity $s(a)$, following an analogous definition in Janssen et al. (2008). For a trajectory in which an agent eats apples $a_1, \ldots, a_k$ in order, we define
$$f(\tau) = \sum_{i=1}^{k} \lambda^{\,k-i} \, s(a_i).$$
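A sketch of this metric under assumed parameters (the Manhattan distance, the radius of 2 and the cap of 3 are illustrative choices of ours, not the paper's values):

```python
def apple_sustainability(apple, remaining_apples, radius=2, cap=3):
    # s(a): the number of other apples near the harvested one, capped.
    # More neighbours means the harvest is more sustainable.
    x, y = apple
    near = sum(1 for (ox, oy) in remaining_apples
               if (ox, oy) != (x, y) and abs(ox - x) + abs(oy - y) <= radius)
    return min(near, cap)

def harvest_niceness(sustainabilities, lam):
    # Discounted history of the sustainability of each apple eaten, in order.
    total = 0.0
    for s in sustainabilities:
        total = lam * total + s
    return total
```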
We present our findings in Figure 4. Selfish agents are successful near the start of training, but as they become more efficient at collection they deplete the common pool resource earlier in each episode, and collective return falls. This effect is most obvious in the sustainability metric, which we define as the average proportion of the episode that has elapsed when each apple is collected. Agents which collect apples at a perfectly uniform rate would achieve a sustainability of 0.5; the selfish baseline falls far short of this.
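The sustainability metric itself is simple to compute; a sketch with our own naming:

```python
def sustainability(collection_timesteps, episode_length):
    # Average fraction of the episode elapsed at each apple collection.
    # Perfectly uniform collection gives approximately 0.5; grabbing
    # everything early pushes the value towards 0.
    if not collection_timesteps:
        return 0.0
    return sum(t / episode_length for t in collection_timesteps) / len(collection_timesteps)
```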
For niceness network imitators, we see the same pattern near the start, where all the agents become more competent at collecting apples and begin to deplete the apple pool faster. We then see sustainability and collective return rise again. This is because the niceness network learns to classify sustainable behaviour, generating imitators that learn to match the sustainability of innovators, which creates an incentive for innovators to behave less selfishly.
Similarly, the experiment with metric-matching imitators enters a tragedy of the commons early in training, before recovering to achieve higher collective return and better sustainability than the niceness network, in a shorter period of training time. This makes intuitive sense: by pre-specifying the nature of cooperative behavior, the metric-matching imitator has a much easier optimization problem, and more quickly demonstrates clear reciprocity to the innovator. The outcome is greater pro-sociality by all. To save compute, we terminated the metric-matching runs once they were consistently outperforming the niceness network.
A representative episode from the trained agents in the niceness-network case shows the innovator taking a sustainable path through the apples, with imitators striving to match this (a video is available at https://youtu.be/pvdMt_0RCpw). Interestingly, the society comes relatively close to causing the tragedy of the commons. Intuitively, when apples are scarce, the actions of each agent have a more significant effect on their co-players, leading to a better learning signal for reciprocity.
In this section we examine the policies learned by the models, and the learning dynamics of the system.
3.4.1. Influence between agents
If our imitators are learning reciprocity, then the policy of the innovator should meaningfully influence the behavior of the imitators at the end of training. We demonstrate this for the Cleanup environment by learning a GraphNet model that indirectly measures time-varying influence between entities Tacchetti et al. (2018). In Cleanup, the entities of interest are agents, waste and apples.
The input to our model is a complete directed graph with entities as nodes. The node features are the positions of each entity and additional metadata. For waste and apples, the metadata indicates whether the entity has spawned. For agents, it contains orientation, last action and last reward, and a one-hot encoding of agent type (innovator or imitator). In addition, the graph contains the timestep as a global feature.
The model is trained to predict agent actions from recorded gameplay trajectories. The architecture is as follows: the input is processed by a GraphNet encoder block with MLPs for its edge, node and global networks, followed by independent GRU units for the edge, node and global attributes, and finally a decoder MLP network for the agent nodes. Importantly, the graph output of the GRU is identical in structure to that of the input.
In Tacchetti et al. (2018), it was shown that the GRU output graph contains information about relationships between different entities. More precisely, the norm of the state vector along the edge from entity $i$ to entity $j$ measures the effective influence of $i$ on $j$, in the sense of Granger causality Granger (1969). We may use this metric to evaluate the degree of reciprocity displayed by imitators.
Table 1 shows the average norms of the state vectors along edges between imitators and innovators for our different imitation models, alongside an A3C baseline. The edge norm is greatest from the innovator to the imitator, strongly exceeding the baseline, indicating that the innovator has a significant effect on the imitator’s actions. The effect is strikingly visible when the output graph net edges are superimposed on a representative episode with metric-matching imitators, with thicknesses proportional to their norms (a video is available at https://youtu.be/NoDbUMkBfP4; the innovator is purple, and the imitators are sky blue, lime, rose and magenta).
| Model | Innovator → imitator | Imitator → innovator | Imitator → imitator | Baseline |
|---|---|---|---|---|
| Metric matching, 4 A3C | 0.97 | 0.30 | 0.28 | — |
| Niceness network, 4 A3C | 0.35 | 0.21 | 0.22 | — |
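The edge-norm analysis can be sketched as follows (a minimal version with our own data layout, not the actual GraphNet code):

```python
import numpy as np

def mean_edge_norms(edge_states, senders, receivers, node_type):
    # Group the GRU edge-state vectors by (sender type, receiver type) and
    # average their L2 norms, as a proxy for directed influence.
    groups = {}
    for state, s, r in zip(edge_states, senders, receivers):
        key = (node_type[s], node_type[r])
        groups.setdefault(key, []).append(float(np.linalg.norm(state)))
    return {k: float(np.mean(v)) for k, v in groups.items()}
```

Reciprocity would then show up as a large value for the (innovator, imitator) key relative to the other pairs.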
In the Cleanup environment, we examine the performance of the model with various components of the imitator ablated. We observe that with only the policy deviation cost, the performance is no better than with purely selfish agents. With the niceness network intrinsic reward, but no policy deviation cost, we see less consistent results across random seeds for the environment and agent (Figure 6A).
3.4.3. Instability of solution without imitation
In the Cleanup environment, we take an innovator trained alongside four imitators, and run several copies of it in the environment, continuing to learn. We see that the contributions and collective reward quickly fall, as the innovators learn to be more selfish (Figure 6B). This shows that for this environment, reciprocity is necessary not only to find cooperative solutions but also to sustain cooperative behaviour.
3.4.4. Predictions of niceness network
We analysed the predictions of the niceness network in the Coins environment, to determine whether the imitator has correctly captured which actions of the innovator are beneficial and which are harmful. We rolled out episodes using the final policies of the innovator and imitator from a run with the niceness network. We found that the niceness network on average makes significantly negative predictions for actions which pick up the wrong coin, and near-zero predictions both for picking up one’s own coin and for actions which do not pick up a coin.
On a more qualitative level, we display some of the predictions of the niceness network for the first of these episodes in Figure 7. There we see that the niceness network predicts negative values for picking up the other agent’s coins, and for actions which take the agent nearer to such coins.
Our reciprocating agents demonstrate an ability to elicit cooperation in otherwise selfish individuals, in both 2-player and 5-player social dilemmas. Reciprocity, in the form of reciprocal altruism Trivers (1971), is not just a feature of human interactions; for example, it is also thought to underlie the complex social lives of teleost fish Brandl and Bellwood (2015). As such, it may be of fundamental importance in the construction of next-generation social learning agents that display simple collective intelligence Wolpert and Tumer (1999). Moreover, combining online-learned reciprocity with the host of other inductive biases in the multi-agent reinforcement learning literature Fehr and Schmidt ([n. d.]); Jaques et al. (2018); Lerer and Peysakhovich (2018); Peysakhovich and Lerer (2017); Lowe et al. (2017) may well be important for producing powerful social agents.
In the 5-player case, our experiments place the innovator in the leadership role, giving rise to pro-sociality. An obvious extension would involve training several imitators in parallel, leading to a situation more akin to conformity; see for example Cialdini and Goldstein (2004). In this case, all individuals change their responses to match those of others. A conformity variant may well be a better model of human behavior in public goods games Bardsley and Sausgruber (2005), and hence may generalize better to human co-players.
It is instructive to compare our model to the current state-of-the-art planning-based approach, approximate Markov Tit-for-Tat (amTFT) Lerer and Peysakhovich (2017). There, reciprocity is achieved by first learning cooperative and defecting policies, by training agents to optimize collective and individual return respectively. The reciprocating strategy uses rollouts based on the cooperative policy to classify actions as cooperative or defecting, and responds accordingly by switching strategy in a hard-coded manner.
In the Coins environment, amTFT performs significantly better than both our niceness network and metric-matching imitators, solving the dilemma perfectly. We believe this is because it better fulfills Axelrod’s clarity condition for reciprocity. By switching between two well-defined strategies, it produces very clear responses to defection, which provides a better reinforcement learning signal, driving innovators towards pro-sociality.
On the other hand, our model is more easily scalable to complex environments. We identify three properties of an environment which make it difficult for amTFT, but which do not stop our model from learning to reciprocate.
If no perfect model of the environment exists, or rolling out such a model is infeasible, one must evaluate the cooperativeness of others online, using a learned model.
The cooperative strategy for amTFT is learned by optimizing collective return. For games with multiple agents, this may not yield a unique policy. For example, in the Cleanup environment, the policies maximizing collective return involve some agents cleaning the river while others eat apples.
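A toy example with made-up payoffs illustrates this non-uniqueness: if eating only pays when someone cleans, either assignment of the cleaner role maximizes collective return, so optimizing the sum does not determine who does what:

```python
import itertools

# Hypothetical per-agent returns in a two-agent Cleanup-like game: a cleaner
# forgoes apples, and eating only pays if at least one agent cleans.
def collective_return(roles):
    cleaners = roles.count("clean")
    eaters = roles.count("eat")
    apples_per_eater = 10 if cleaners >= 1 else 0
    return apples_per_eater * eaters

joint_policies = list(itertools.product(["clean", "eat"], repeat=2))
best = max(collective_return(r) for r in joint_policies)
maximizers = [r for r in joint_policies if collective_return(r) == best]
# Both ("clean", "eat") and ("eat", "clean") attain the maximum.
```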
If cooperation and defection are nuanced, rather than binary choices, then to reciprocate you may need to adjust your level of cooperativeness to match that of your opponent. This is hard to achieve by switching between a discrete set of strategies.
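One hedged sketch of such graded reciprocity: track an exponential moving average of the partner's estimated cooperativeness and adopt it as your own level, starting nice per Axelrod. The decay rate, and the assumption that cooperativeness is observable as a scalar per step, are illustrative:

```python
def matched_cooperativeness(partner_levels, decay=0.9, initial=1.0):
    """Return the agent's cooperativeness level at each step: start fully
    cooperative, then smoothly track the partner's observed level via an
    exponential moving average, rather than toggling between two policies."""
    level = initial
    out = []
    for p in partner_levels:
        level = decay * level + (1 - decay) * p  # EMA of partner's level
        out.append(level)
    return out
```

Against a persistently defecting partner, the agent's level decays smoothly towards zero rather than switching abruptly.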
This leaves open an important question: how do we produce reciprocity which is both clear and scalable to complex tasks? One approach would be to combine a model like ours, which learns what to reciprocate, with a method which switches between policies in a discrete fashion, as in the previous planning-based approaches of Lerer and Peysakhovich (2017); Kleiman-Weiner et al. (2016); Peysakhovich and Lerer (2017). This might lead to reciprocity algorithms which can be learned in complex environments, but which are clearer and so can induce cooperation in co-players even more strongly than our model.
- Axelrod (1980a) Robert Axelrod. 1980a. Effective Choice in the Prisoner’s Dilemma. The Journal of Conflict Resolution 24, 1 (1980), 3–25. http://www.jstor.org/stable/173932
- Axelrod (1980b) Robert Axelrod. 1980b. More Effective Choice in the Prisoner’s Dilemma. The Journal of Conflict Resolution 24, 3 (1980), 379–403. http://www.jstor.org/stable/173638
- Axelrod (1984) Robert Axelrod. 1984. The Evolution of Cooperation. Basic Books. https://books.google.co.uk/books?id=029p7dmFicsC
- Bardsley and Sausgruber (2005) Nicholas Bardsley and Rupert Sausgruber. 2005. Conformity and reciprocity in public good provision. Journal of Economic Psychology 26, 5 (2005), 664 – 681. https://doi.org/10.1016/j.joep.2005.02.001
- Battaglia et al. (2016) Peter Battaglia, Razvan Pascanu, Matthew Lai, Danilo Jimenez Rezende, et al. 2016. Interaction networks for learning about objects, relations and physics. In Advances in neural information processing systems. 4502–4510.
- Becker (1990) L.C. Becker. 1990. Reciprocity. University of Chicago Press. https://books.google.co.uk/books?id=dWgI4lI7h-cC
- Blau (1964) P.M. Blau. 1964. Exchange and Power in Social Life. J. Wiley. https://books.google.co.uk/books?id=qhOMLscX-ZYC
- Boyd (1989) Robert Boyd. 1989. Mistakes allow evolutionary stability in the repeated prisoner’s dilemma game. Journal of Theoretical Biology 136, 1 (1989), 47 – 56. https://doi.org/10.1016/S0022-5193(89)80188-2
- Brandl and Bellwood (2015) Simon J. Brandl and David R. Bellwood. 2015. Coordinated vigilance provides evidence for direct reciprocity in coral reef fishes. Scientific Reports 5 (2015), Article 14556. https://doi.org/10.1038/srep14556
- Chartrand and Lakin (2013) Tanya L Chartrand and Jessica L Lakin. 2013. The antecedents and consequences of human behavioral mimicry. Annual review of psychology 64 (2013), 285–308.
- Chentanez et al. (2005) Nuttapong Chentanez, Andrew G. Barto, and Satinder P. Singh. 2005. Intrinsically Motivated Reinforcement Learning. In Advances in Neural Information Processing Systems 17, L. K. Saul, Y. Weiss, and L. Bottou (Eds.). MIT Press, 1281–1288. http://papers.nips.cc/paper/2552-intrinsically-motivated-reinforcement-learning.pdf
- Cialdini and Goldstein (2004) Robert B. Cialdini and Noah J. Goldstein. 2004. Social Influence: Compliance and Conformity. Annual Review of Psychology 55, 1 (2004), 591–621. https://doi.org/10.1146/annurev.psych.55.090902.142015 PMID: 14744228.
- Croson et al. (2005) Rachel Croson, Enrique Fatas, and Tibor Neugebauer. 2005. Reciprocity, matching and conditional cooperation in two public goods games. Economics Letters 87, 1 (2005), 95 – 101. https://doi.org/10.1016/j.econlet.2004.10.007
- Duersch et al. (2013) Peter Duersch, Joerg Oechssler, and Burkhard C. Schipper. 2013. When is tit-for-tat unbeatable? CoRR abs/1301.5683 (2013). arXiv:1301.5683 http://arxiv.org/abs/1301.5683
- Dunbar (1993) R. I. M. Dunbar. 1993. Coevolution of neocortical size, group size and language in humans. Behavioral and Brain Sciences 16, 4 (1993), 681–694. https://doi.org/10.1017/S0140525X00032325
- Espeholt et al. (2018) Lasse Espeholt, Hubert Soyer, Rémi Munos, Karen Simonyan, Volodymyr Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, Shane Legg, and Koray Kavukcuoglu. 2018. IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures. CoRR abs/1802.01561 (2018). arXiv:1802.01561 http://arxiv.org/abs/1802.01561
- Fehr and Schmidt ([n. d.]) Ernst Fehr and Klaus Schmidt. [n. d.]. A Theory of Fairness, Competition and Cooperation. IEW - Working Papers 004. Institute for Empirical Research in Economics - University of Zurich. https://EconPapers.repec.org/RePEc:zur:iewwpx:004
- Gächter and Renner (2018) Simon Gächter and Elke Renner. 2018. Leaders as role models and ‘belief managers’ in social dilemmas. Journal of Economic Behavior & Organization 154 (2018), 321 – 334. https://doi.org/10.1016/j.jebo.2018.08.001
- Granger (1969) C. W. J. Granger. 1969. Investigating Causal Relations by Econometric Models and Cross-spectral Methods. Econometrica 37, 3 (1969), 424–438. http://www.jstor.org/stable/1912791
- Hale and de C. Hamilton (2016) Joanna Hale and Antonia F. de C. Hamilton. 2016. Cognitive mechanisms for responding to mimicry from others. Neuroscience & Biobehavioral Reviews 63 (2016), 106 – 123. https://doi.org/10.1016/j.neubiorev.2016.02.006
- Hardin (1968) Garrett Hardin. 1968. The Tragedy of the Commons. Science 162, 3859 (1968), 1243–1248. https://doi.org/10.1126/science.162.3859.1243
- Henrich et al. (2015) Joseph Henrich, Maciej Chudek, and Robert Boyd. 2015. The Big Man Mechanism: how prestige fosters cooperation and creates prosocial leaders. Philosophical Transactions of the Royal Society of London B: Biological Sciences 370, 1683 (2015). https://doi.org/10.1098/rstb.2015.0013
- Hinton et al. (2012) G. Hinton, N. Srivastava, and K. Swersky. 2012. Lecture 6a: Overview of mini-batch gradient descent. Coursera (2012). https://class.coursera.org/neuralnets-2012-001/lecture
- Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long Short-Term Memory. Neural Comput. 9, 8 (Nov. 1997), 1735–1780. https://doi.org/10.1162/neco.1997.9.8.1735
- Hughes et al. (2018) Edward Hughes, Joel Z. Leibo, Matthew G. Philips, Karl Tuyls, Edgar A. Duéñez-Guzmán, Antonio García Castañeda, Iain Dunning, Tina Zhu, Kevin R. McKee, Raphael Koster, Heather Roff, and Thore Graepel. 2018. Inequity aversion resolves intertemporal social dilemmas. CoRR abs/1803.08884 (2018). arXiv:1803.08884 http://arxiv.org/abs/1803.08884
- Hussein et al. (2017) Ahmed Hussein, Mohamed Medhat Gaber, Eyad Elyan, and Chrisina Jayne. 2017. Imitation Learning: A Survey of Learning Methods. ACM Comput. Surv. 50, 2, Article 21 (April 2017), 35 pages. https://doi.org/10.1145/3054912
- Janssen et al. (2008) M.A. Janssen, R.L. Goldstone, F. Menczer, and E. Ostrom. 2008. Effect of rule choice in dynamic interactive spatial commons. International Journal of the Commons 2(2) (2008). http://doi.org/10.18352/ijc.67
- Jaques et al. (2018) N. Jaques, A. Lazaridou, E. Hughes, C. Gulcehre, P. A. Ortega, D. Strouse, J. Z. Leibo, and N. de Freitas. 2018. Intrinsic Social Motivation via Causal Influence in Multi-Agent RL. ArXiv e-prints (Oct. 2018). arXiv:1810.08647
- Johnstone and Bshary (2007) R. A. Johnstone and R. Bshary. 2007. Indirect reciprocity in asymmetric interactions: when apparent altruism facilitates profitable exploitation. Proc. Biol. Sci. 274, 1629 (Dec 2007), 3175–3181.
- Kleiman-Weiner et al. (2016) Max Kleiman-Weiner, Mark K Ho, Joe L Austerweil, Littman Michael L, and Joshua B. Tenenbaum. 2016. Coordinate to cooperate or compete: abstract goals and joint intentions in social interaction. In Proceedings of the 38th Annual Conference of the Cognitive Science Society.
- Lakin et al. (2003) Jessica L Lakin, Valerie E Jefferis, Clara Michelle Cheng, and Tanya L Chartrand. 2003. The chameleon effect as social glue: Evidence for the evolutionary significance of nonconscious mimicry. Journal of nonverbal behavior 27, 3 (2003), 145–162.
- Latané (1981) Bibb Latané. 1981. The psychology of social impact. American Psychologist 36(4) (1981). http://dx.doi.org/10.1037/0003-066X.36.4.343
- Lerer and Peysakhovich (2017) Adam Lerer and Alexander Peysakhovich. 2017. Maintaining cooperation in complex social dilemmas using deep reinforcement learning. CoRR abs/1707.01068 (2017). arXiv:1707.01068 http://arxiv.org/abs/1707.01068
- Lerer and Peysakhovich (2018) A. Lerer and A. Peysakhovich. 2018. Learning Existing Social Conventions in Markov Games. ArXiv e-prints (June 2018). arXiv:cs.AI/1806.10071
- Littman (1994) Michael L. Littman. 1994. Markov Games As a Framework for Multi-agent Reinforcement Learning. In Proceedings of the Eleventh International Conference on International Conference on Machine Learning (ICML’94). Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 157–163. http://dl.acm.org/citation.cfm?id=3091574.3091594
- Lowe et al. (2017) Ryan Lowe, Yi Wu, Aviv Tamar, Jean Harb, Pieter Abbeel, and Igor Mordatch. 2017. Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments. CoRR abs/1706.02275 (2017). arXiv:1706.02275 http://arxiv.org/abs/1706.02275
- Mitchell (2009) Jason P. Mitchell. 2009. Inferences about mental states. Philosophical Transactions of the Royal Society of London B: Biological Sciences 364, 1521 (2009), 1309–1316. https://doi.org/10.1098/rstb.2008.0318
- Mnih et al. (2016) Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. 2016. Asynchronous Methods for Deep Reinforcement Learning. CoRR abs/1602.01783 (2016). arXiv:1602.01783 http://arxiv.org/abs/1602.01783
- Nowak and Sigmund (1993) Martin Nowak and Karl Sigmund. 1993. A strategy of win-stay, lose-shift that outperforms tit-for-tat in the Prisoner’s Dilemma game. Nature 364 (1993), 56 – 58. https://doi.org/10.1038/364056a0
- Nowak (2006) M. A. Nowak. 2006. Evolutionary Dynamics. Harvard University Press. https://books.google.co.uk/books?id=YXrIRDuAbE0C
- Encyclopaedia Britannica (2008) The Editors of Encyclopaedia Britannica. 2008. Norm. https://www.britannica.com/topic/norm-society
- Olson (1965) Mancur Olson. 1965. The Logic of Collective Action. Harvard University Press. https://books.google.co.uk/books?id=jzTeOLtf7_wC
- Ostrom (1990) E. Ostrom. 1990. Governing the Commons: The Evolution of Institutions for Collective Action. Cambridge University Press. https://books.google.co.uk/books?id=4xg6oUobMz4C
- Ostrom (1998) Elinor Ostrom. 1998. A Behavioral Approach to the Rational Choice Theory of Collective Action: Presidential Address, American Political Science Association, 1997. American Political Science Review 92, 1 (1998), 1–22. https://doi.org/10.2307/2585925
- Pérolat et al. (2017) Julien Pérolat, Joel Z. Leibo, Vinícius Flores Zambaldi, Charles Beattie, Karl Tuyls, and Thore Graepel. 2017. A multi-agent reinforcement learning model of common-pool resource appropriation. CoRR abs/1707.06600 (2017). arXiv:1707.06600 http://arxiv.org/abs/1707.06600
- Peysakhovich and Lerer (2017) Alexander Peysakhovich and Adam Lerer. 2017. Consequentialist conditional cooperation in social dilemmas with imperfect information. CoRR abs/1710.06975 (2017). arXiv:1710.06975 http://arxiv.org/abs/1710.06975
- Rapoport et al. (1965) A. Rapoport, A.M. Chammah, and C.J. Orwant. 1965. Prisoner’s Dilemma: A Study in Conflict and Cooperation. University of Michigan Press. https://books.google.co.uk/books?id=yPtNnKjXaj4C
- Reader and Laland (2002) Simon M. Reader and Kevin N. Laland. 2002. Social intelligence, innovation, and enhanced brain size in primates. Proceedings of the National Academy of Sciences 99, 7 (2002), 4436–4441. https://doi.org/10.1073/pnas.062041299
- Sen and Airiau (2007) Sandip Sen and Stéphane Airiau. 2007. Emergence of Norms Through Social Learning. In Proceedings of the 20th International Joint Conference on Artifical Intelligence (IJCAI’07). Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1507–1512. http://dl.acm.org/citation.cfm?id=1625275.1625519
- Shapley (1953) L. S. Shapley. 1953. Stochastic Games. Proceedings of the National Academy of Sciences 39, 10 (1953), 1095–1100. https://doi.org/10.1073/pnas.39.10.1095
- Sunehag et al. (2018) Peter Sunehag, Guy Lever, Audrunas Gruslys, Wojciech Marian Czarnecki, Vinicius Zambaldi, Max Jaderberg, Marc Lanctot, Nicolas Sonnerat, Joel Z. Leibo, Karl Tuyls, and Thore Graepel. 2018. Value-Decomposition Networks For Cooperative Multi-Agent Learning Based On Team Reward. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS ’18). International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, 2085–2087. http://dl.acm.org/citation.cfm?id=3237383.3238080
- Sutton and Barto (1998) Richard S. Sutton and Andrew G. Barto. 1998. Introduction to Reinforcement Learning (1st ed.). MIT Press, Cambridge, MA, USA.
- Tacchetti et al. (2018) Andrea Tacchetti, H Francis Song, Pedro AM Mediano, Vinicius Zambaldi, Neil C Rabinowitz, Thore Graepel, Matthew Botvinick, and Peter W Battaglia. 2018. Relational Forward Models for Multi-Agent Learning. arXiv preprint arXiv:1809.11044 (2018).
- Tanner et al. (2008) Robin J. Tanner, Rosellina Ferraro, Tanya L. Chartrand, James R. Bettman, and Rick van Baaren. 2008. Of Chameleons and Consumption: The Impact of Mimicry on Choice and Preferences. Journal of Consumer Research 34, 6 (2008), 754–766.
- Thibaut and Kelley (1966) J. W. Thibaut and H. H. Kelley. 1966. The Social Psychology of Groups. Wiley. https://books.google.co.uk/books?id=KDH5Hc9F2AkC
- Trivers (1971) Robert Trivers. 1971. The Evolution of Reciprocal Altruism. Quarterly Review of Biology 46 (03 1971), 35–57. https://doi.org/10.1086/406755
- Wolpert and Tumer (1999) David H. Wolpert and Kagan Tumer. 1999. An Introduction to Collective Intelligence. CoRR cs.LG/9908014 (1999). http://arxiv.org/abs/cs.LG/9908014
Appendix A Hyperparameters
In all experiments, for both imitators and innovators, the network consists of a single convolutional layer, followed by a two-layer MLP, an LSTM Hochreiter and Schmidhuber (1997), and linear output layers for the policy and baseline. The discount factor for the return is fixed across experiments, while the learning rate and entropy cost were tuned separately for each model-environment pair.
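As a concrete but necessarily hypothetical sketch of this architecture, the forward pass can be written in plain NumPy. Every size below (channels, kernel, stride, MLP widths, LSTM size, action count) is an illustrative stand-in, not a tuned value from the experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only: hypothetical stand-ins for the tuned values.
H = W = 9
C_IN, C_OUT = 3, 6
K, STRIDE = 3, 1
MLP = (32, 32)
LSTM = 32
NUM_ACTIONS = 8

def conv(x, w, stride):
    """Valid 2-D convolution + ReLU; x: (H, W, C_in), w: (K, K, C_in, C_out)."""
    k = w.shape[0]
    ho, wo = (x.shape[0] - k) // stride + 1, (x.shape[1] - k) // stride + 1
    out = np.zeros((ho, wo, w.shape[-1]))
    for i in range(ho):
        for j in range(wo):
            patch = x[i * stride:i * stride + k, j * stride:j * stride + k]
            out[i, j] = np.tensordot(patch, w, axes=3)
    return np.maximum(out, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, wx, wh, b):
    """One step of a standard LSTM cell (gate order: i, f, g, o)."""
    i, f, g, o = np.split(x @ wx + h @ wh + b, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

def init_params():
    sizes = [((H - K) // STRIDE + 1) ** 2 * C_OUT, *MLP]
    return {
        "conv_w": rng.normal(0, 0.1, (K, K, C_IN, C_OUT)),
        "mlp": [(rng.normal(0, 0.1, (a, b)), np.zeros(b))
                for a, b in zip(sizes, sizes[1:])],
        "wx": rng.normal(0, 0.1, (MLP[-1], 4 * LSTM)),
        "wh": rng.normal(0, 0.1, (LSTM, 4 * LSTM)),
        "b": np.zeros(4 * LSTM),
        "policy_w": rng.normal(0, 0.1, (LSTM, NUM_ACTIONS)),
        "value_w": rng.normal(0, 0.1, (LSTM, 1)),
    }

def forward(obs, params, state):
    """Conv -> two-layer MLP -> LSTM -> linear policy and baseline heads."""
    z = conv(obs, params["conv_w"], STRIDE).ravel()
    for w, b in params["mlp"]:
        z = np.maximum(z @ w + b, 0.0)
    h, c = lstm_step(z, state[0], state[1],
                     params["wx"], params["wh"], params["b"])
    logits = h @ params["policy_w"]
    baseline = (h @ params["value_w"]).item()
    probs = np.exp(logits - logits.max())
    return probs / probs.sum(), baseline, (h, c)
```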
The architecture for the niceness network is a non-recurrent neural network with the same convnet and MLP structure as the reinforcement learning architecture (though the weights are not shared between the two). The output layer of the MLP is mapped linearly to give outputs for each possible action.
For the niceness network, we used a separate optimizer from the RL model, with a separate learning rate. Both optimizers used the RMSProp algorithm Hinton et al. (2012). The hyperparameters used in each experiment are shown in Table 2.
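For reference, the RMSProp update rule of Hinton et al. (2012) takes the following form; the decay rate, epsilon, and learning rate below are generic defaults, not the tuned values of Table 2:

```python
import numpy as np

def rmsprop_step(param, grad, ms, lr=1e-3, decay=0.9, eps=1e-8):
    """One RMSProp update: divide the gradient by a running root-mean-square
    of past gradients, so each parameter gets its own effective step size."""
    ms = decay * ms + (1 - decay) * grad ** 2   # running mean of squared grads
    param = param - lr * grad / (np.sqrt(ms) + eps)
    return param, ms
```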
| Hyperparameter | Coins (MM) | Coins (AN) | Cleanup (MM) | Cleanup (AN) | Harvest (MM) | Harvest (AN) |
| --- | --- | --- | --- | --- | --- | --- |
| Imitation reward weight | | | | | | |
| Imitation memory decay | | | | | | |
| Policy imitation weight | | – | | – | | – |
| RL learning rate | | | | | | |
| Advantage network learning rate | – | | – | | – | |
| Advantage network TD-λ | – | | – | | – | |
| Advantage network discount factor | – | | – | | – | |