Gradual learning supports cooperation in spatial prisoner's dilemma game

09/26/2019
by   Attila Szolnoki, et al.

According to the standard imitation protocol, a less successful player adopts the strategy of the more successful one faithfully for future success. This is the cornerstone of evolutionary game theory, which explores the vitality of competing strategies in different social dilemma situations. In the present work we explore the possible consequences of two slightly modified imitation protocols, namely exaggerated and gradual learning rules. In the former case a learner not only adopts but also enlarges the strategy change in the hope of a higher income. In the latter case a cautious learner does not adopt the alternative behavior precisely, but takes only a smaller step toward the other's strategy during the updating process. Evidently, both scenarios assume that the players' propensity to cooperate may vary gradually between zero (always defect) and one (always cooperate), where these extreme states represent the traditional two-strategy social dilemma. We have observed that while the usage of exaggerated learning has no particular consequence on the final state, gradual learning can support cooperation significantly. The latter protocol mitigates the invasion speeds of both main strategies, but the decline of the successful defector invasion is more significant, hence the biased impact of the modified microscopic rule on invasion processes explains our observations.


1 Introduction

There are several update rules used in evolutionary game theory where different strategies compete within the framework of a social dilemma Szabó and Fáth (2007); Perc and Szolnoki (2010); Fotouhi et al. (2019); Takeshue (2019); Cheng et al. (2019); Liu et al. (2019a). In the case of the birth-death protocol, for instance, a player is chosen from the entire population proportionally to fitness and its offspring replaces a randomly chosen neighbor Ohtsuki and Nowak (2006). Alternatively, in death-birth updating an actor is randomly chosen to die and the surrounding neighbors compete for the available site proportionally to their fitness. In best-response dynamics the frequency of a strategy increases only if it is the best response to the actual state Matsui (1992). Further possibilities are the so-called win-stay-lose-shift move Nowak and Sigmund (1993); Chen et al. (2008); Amaral et al. (2016); Fu et al. (2019) and the aspiration-driven update, where a player’s willingness to change depends on a predefined threshold value Liu et al. (2011); Wang and Perc (2010); Chen et al. (2017); Wu et al. (2018); Liu et al. (2019b); Zhang et al. (2019).

Besides these, the most frequently used microscopic dynamics is the celebrated imitation rule, where a player may adopt the strategy of a neighbor depending on their payoff difference. The popularity of the latter protocol is based on the fact that not only biological systems but also human behavior can be modeled realistically by this rule Fu et al. (2011); Naber et al. (2013); Zhang et al. (2014). One of the inspiring observations of previous studies was that the evolutionary outcome may depend sensitively on the applied microscopic dynamics Roca et al. (2009); Du et al. (2010); Szolnoki and Chen (2018a); Hadjichrysanthou et al. (2011); Chica et al. (2019); Takesue (2019); Szolnoki and Danku (2018).

Motivated by this phenomenon, in the present work we study how the final solutions change if we slightly modify the standard imitation rule. This question was also inspired by the fact that accurate imitation of a partner's strategy is not necessarily a realistic approach in several cases. For example, one can easily imagine a situation when a player who witnesses the success of a competitor does not simply copy the other's attitude, but tries to surpass the other's behavior for an even higher income. But we can also give an example of the opposite approach: a careful player is reluctant to execute a drastic change from the original strategy and instead makes just a minimal step toward the targeted strategy during the update process.

The above-mentioned modified imitation scenarios are studied in the framework of a spatial prisoner's dilemma game where cooperator and defector strategies compete. According to the dilemma, the latter strategy provides a higher individual income for a player independently of the other's choice, while if both competitors simultaneously chose the former strategy they would gain a higher mutual income.

Evidently, adjusting the strategy change in different ways requires a multi-strategy system where players can choose from more than two options. This can easily be done if we allow actors to choose intermediate levels of cooperation by assuming a continuum of strategies between the two extremes Frean (1996). For easier numerical treatment we propose a spatial prisoner's dilemma game with discrete strategies in order to reveal the possible consequences of inaccurate imitation protocols. Hence, instead of an infinite number of continuous strategies, we introduce intermediate discrete strategies between the always-defect and always-cooperate strategies. In this setup the above described protocols are feasible because the players' strategies can change gradually.

In the next section we provide the details of our model, which is followed by the results section where we present all observed consequences of the modified imitation rules. In the last section we discuss their potential implications and also outline some future research directions.

2 Gradual propensity level for cooperation

Starting from the traditional spatial prisoner's dilemma, we assume that players are arranged on a spatial graph where every player interacts with its neighbors and gains payoff according to the prisoner's dilemma parametrization. Initially we present results obtained on a square grid with periodic boundary conditions, but the key observations are also verified on other graphs, including a highly irregular scale-free network and a random graph that exhibits small-world character.

Without loss of generality we assume that mutual cooperation pays the reward R = 1 for both players, while a defector gains the temptation value b > 1 against a cooperator, whose sucker's payoff is S = 0. Finally, mutual punishment leads to P = 0 individual income. As was demonstrated earlier, this so-called weak prisoner's dilemma parametrization captures faithfully the key character of a social dilemma Nowak and May (1992).
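The weak prisoner's dilemma payoffs described above can be sketched in a few lines; this is an illustrative helper, not the authors' code, with the standard weak-PD values R = 1, T = b, S = P = 0:

```python
def weak_pd_payoff(my_coop: bool, other_coop: bool, b: float) -> float:
    """Payoff of the focal player in the weak prisoner's dilemma:
    reward R = 1 for mutual cooperation, temptation T = b > 1 for
    defecting against a cooperator, sucker's payoff S = 0 and
    mutual punishment P = 0."""
    if my_coop and other_coop:
        return 1.0   # R
    if not my_coop and other_coop:
        return b     # T
    return 0.0       # S or P
```

Note that b is the only free payoff parameter, which is why the results below are reported as a function of the temptation value alone.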

In the traditional setup the imitation dynamics during an elementary Monte Carlo step is the following. First, a randomly selected player x with strategy s_x acquires its payoff Π_x by playing the game with all its neighbors as determined by the interaction graph. After, a neighbor y is chosen randomly, whose payoff Π_y is calculated similarly. Last, player x adopts the strategy s_y from player y with a probability determined by the Fermi function Γ = 1 / {1 + exp[(Π_x − Π_y)/K]}, where K determines the uncertainty of proper imitation Szabó and Fáth (2007). Evidently, if K is high, the imitation happens almost randomly, while in the K → 0 limit the update is deterministic and happens only if Π_y exceeds Π_x. According to the standard simulation technique, each full Monte Carlo step gives a chance for every player to change its strategy once on average.
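The Fermi imitation probability is straightforward to implement; a minimal sketch (illustrative naming, not the authors' code):

```python
import math

def fermi_prob(payoff_x: float, payoff_y: float, K: float) -> float:
    """Probability that player x adopts player y's strategy:
    1 / (1 + exp((Pi_x - Pi_y) / K)).  A large K makes imitation
    nearly random; as K -> 0 only more successful neighbors are
    imitated."""
    return 1.0 / (1.0 + math.exp((payoff_x - payoff_y) / K))
```

For equal payoffs the probability is exactly 1/2, while at small K the function approaches a step function in the payoff difference.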


Figure 1: Coarse-grained profile to characterize players’ propensity to cooperate. Here we use N + 2 discrete strategies where the N intermediate strategies cover the whole (0, 1) interval uniformly. In particular, a player represented by p = 0 always defects, while a player belonging to the p = 1 class always cooperates. In the case of an intermediate strategy, a player from the i-th class cooperates with a probability taken from the i-th sub-interval.

In our present work there are two significant differences from the traditional model. First, the proposed learning protocols can only be executed if we assume that players' propensities to cooperate with others go beyond the traditional two-state setup. In particular, we consider that beside the unconditional cooperator and unconditional defector states a player may cooperate with a certain probability p. Instead of an infinite number of continuous strategies we assume that players can be classified into N discrete classes. As Fig. 1 illustrates, beside the traditional p = 0 (always defect) and p = 1 (always cooperate) cases we introduce N additional strategies distributed uniformly in the (0, 1) interval. For example, when a player adopts a strategy belonging to the i-th intermediate class then it cooperates with others with a probability p_x taken from the i-th sub-interval, where i = 1, …, N. Importantly, if p_x and p_y are from the same interval then players x and y are considered to belong to the same strategy class.
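The classification of a cooperation probability into one of the N + 2 classes can be sketched as follows; the half-open interval convention ((i − 1)/N, i/N] is an assumption consistent with the description above, not a detail confirmed by the text:

```python
import math

def strategy_class(p: float, N: int) -> int:
    """Map a cooperation probability p in [0, 1] onto one of N + 2
    classes: class 0 is always-defect (p = 0), class N + 1 is
    always-cooperate (p = 1), and classes 1..N split (0, 1) into N
    uniform sub-intervals ((i - 1) / N, i / N] (assumed convention)."""
    if p <= 0.0:
        return 0
    if p >= 1.0:
        return N + 1
    return min(math.ceil(p * N), N)
```

Two players whose p values fall into the same sub-interval are mapped to the same class, exactly as required by the model.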

In the following we study two conceptually different imitation protocols. In the first case, when a strategy update is executed, a player x, whose strategy belongs to the i-th class, does not imitate the strategy of player y, belonging to the j-th class, directly. Instead, player x's new strategy belongs to the (j+1)-th class if j > i, or to the (j−1)-th class when j < i. This protocol describes a situation when a learner tries to go beyond the ”master” by exceeding the latter's strategy choice. In this way the learner hopes for an even higher income from the strategy update. Evidently, if player y already holds an extreme strategy then player x simply adopts it: x accepts the always-cooperate strategy when y is an unconditional cooperator, and the always-defect strategy when y is an unconditional defector. As an important technical detail, when a player adopts an intermediate strategy class then its new p value is generated randomly from the corresponding interval. In this way we can avoid that the initial distribution of p values influences the final evolutionary outcome artificially.

According to the alternative imitation protocol the learner player is more careful. Practically it means that when player x changes strategy in response to the success of player y, then player x's new strategy belongs to the (i+1)-th class if j > i, or to the (i−1)-th class if j < i. In other words, player x is reluctant to change its attitude and follows player y only in a minimal way. Similarly to the previous protocol, when an update is executed then a new p value is generated from the new interval. We must note that in both cases the N = 0 limit gives back the traditional two-strategy model, while for large N values the strategy change can be modified smoothly.
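The two update rules differ only in which class the learner lands in; a minimal sketch of both (illustrative helper names, not the authors' code), using classes 0..N+1 as above:

```python
import random

def exaggerated_target(i: int, j: int, N: int) -> int:
    """Learner in class i overshoots the teacher's class j by one step,
    clamped to the extreme classes 0 and N + 1."""
    if j > i:
        return min(j + 1, N + 1)
    if j < i:
        return max(j - 1, 0)
    return i

def gradual_target(i: int, j: int) -> int:
    """Cautious learner moves only one class toward the teacher."""
    return i + 1 if j > i else i - 1 if j < i else i

def new_propensity(cls: int, N: int, rng: random.Random) -> float:
    """Redraw the cooperation probability uniformly from the adopted
    class interval, so the initial p distribution cannot bias the
    outcome (assumed interval convention ((cls - 1)/N, cls/N])."""
    if cls == 0:
        return 0.0
    if cls == N + 1:
        return 1.0
    return rng.uniform((cls - 1) / N, cls / N)
```

Note how the exaggerated rule jumps past the teacher's class while the gradual rule moves only one step from the learner's own class; this single difference produces the strikingly different outcomes reported below.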

For the sake of an appropriate comparison with the results of the traditional model obtained previously we use the same noise level K, but our key observations remain intact for other noise values Szabó et al. (2005); Szolnoki et al. (2008a). When square lattices were used, the linear size was varied over a broad range. In the case of the random graph the same degree distribution was introduced for proper comparison Szabó et al. (2004). Similarly, for the scale-free graph we kept the same average degree, and the typical system size was comparable to that of the random graph. Importantly, in the scale-free network case we applied degree-normalized payoff values to avoid the additional effect of the highly heterogeneous interaction graph Santos and Pacheco (2005). As was shown previously for the traditional two-strategy model, when payoff values are normalized by the degree of each player then the average cooperation level decreases drastically, becomes comparable to the values obtained for regular graphs, and cooperators cannot survive above a critical temptation value Szolnoki et al. (2008b); Masuda (2007); Tomassini et al. (2007).
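The pieces above can be combined into a compact Monte Carlo sketch of the gradual-learning model. This is an illustration only, not the authors' simulation code: the parameter values, the von Neumann neighborhood, and the use of the class-interval midpoint (instead of a redrawn random value) for intermediate cooperation probabilities are simplifying assumptions.

```python
import math, random

def run_gradual_pd(L=10, N=10, b=1.05, K=0.1, mcs=20, seed=1):
    """Spatial weak prisoner's dilemma on an L x L periodic square
    lattice.  Each site holds a class in 0..N+1 and cooperates with
    the corresponding probability; updates combine the Fermi rule
    with the gradual (one-step) learning protocol.  Returns the
    average cooperation propensity after mcs Monte Carlo steps."""
    rng = random.Random(seed)
    # initial state: only the two extreme strategies are present
    cls = [[rng.choice([0, N + 1]) for _ in range(L)] for _ in range(L)]

    def p_of(c):  # cooperation probability (interval midpoint, simplified)
        return 0.0 if c == 0 else 1.0 if c == N + 1 else (c - 0.5) / N

    def payoff(x, y):
        """Payoff of site (x, y) against its four neighbors; each game
        round draws fresh stochastic moves (R = 1, T = b, S = P = 0)."""
        total = 0.0
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = (x + dx) % L, (y + dy) % L
            me = rng.random() < p_of(cls[x][y])
            other = rng.random() < p_of(cls[nx][ny])
            if me and other:
                total += 1.0      # reward
            elif not me and other:
                total += b        # temptation
        return total

    for _ in range(mcs * L * L):  # one MC step = L*L elementary updates
        x, y = rng.randrange(L), rng.randrange(L)
        dx, dy = rng.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
        nx, ny = (x + dx) % L, (y + dy) % L
        px, py = payoff(x, y), payoff(nx, ny)
        if rng.random() < 1.0 / (1.0 + math.exp((px - py) / K)):
            i, j = cls[x][y], cls[nx][ny]   # gradual step toward teacher
            cls[x][y] = i + 1 if j > i else i - 1 if j < i else i
    return sum(p_of(c) for row in cls for c in row) / (L * L)
```

Such a toy run is far too small for quantitative results, but it reproduces the qualitative mechanism: starting from extreme strategies only, intermediate classes appear and the population drifts through the propensity profile.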

3 Results

Before presenting the results of the modified imitation protocols, it is important to stress that the introduction of multiple strategies to the original model, where not just the extreme but also intermediate strategies compete, does not modify the evolutionary outcome. If a learner player follows the teacher's strategy accurately during the imitation process then the system will gradually terminate onto the classic two-strategy state where the intermediate strategies become redundant. This phenomenon is nicely illustrated in the animation provided in acc. Here the initial state contains 22 strategies and a high temptation value is applied. Different strategies are denoted by different shades of red and blue as indicated on the top of Fig. 2. Accordingly, deep red marks the always-defect strategy, while deep blue marks the always-cooperate strategy. When the evolution is launched a coarsening process starts and the intermediate strategies die out eventually. Finally only the two extreme strategies survive, forming a stable coexistence and providing a cooperation level in agreement with the result of the traditional model Szabó et al. (2005). Evidently, if the temptation value is increased then the fraction of cooperators decays, and they cannot survive above a critical temptation.

In the following we compare the consequences of the modified imitation protocols by using the same parameter values. Figure 2 highlights that the evolutionary outcomes are strikingly different. When the strategy change is enlarged we started the evolution from a state where only the middle strategies are present. This case is illustrated in the top row from panel (a) to (c). As panel (b) shows, the extreme strategies emerge at a very early stage of the evolution. Due to the character of the imitation rule the intermediate strategies drift toward the edges of the profile interval, leaving only the extreme strategies alive. At such a high temptation level, however, the two-strategy model terminates into a full defector state, as shown in panel (c). The related animation, starting from the prepared initial state and ending in the final absorbing state, can be seen in exa. Based on this, the final conclusion can be summarized as follows. Because of the specific feature of the exaggerated imitation rule, intermediate strategies cannot coexist long and players gradually adopt the extreme strategies. From this point the final outcome is basically determined by the relation of the always-defect and always-cooperate strategies. In other words, the modification from accurate imitation to exaggerated strategy change does not alter notably the results observed for the traditional two-strategy model.

Figure 2: Front propagation at a high temptation value by using the suggested imitation protocols. Strategies are represented by different colors from deep red (full defection) to deep blue (full cooperation) as indicated. Top panels from (a) to (c) show how the system evolves from two intermediate strategies in the case of the exaggerated learning protocol. When we launch the evolution from the mentioned prepared initial state the extreme strategies appear very soon and the system evolves toward the traditional two-strategy model. Here, due to the high temptation value, full defectors prevail. Bottom panels from (d) to (f) show the evolution when gradual learning is applied. Here the initial state contains only the extreme strategies, but the system gradually evolves into a homogeneous state where the players' willingness to cooperate is significant. The system size is the same for both cases.

The impact of the gradual imitation rule, however, is significantly different. This case is illustrated in the bottom row of Fig. 2. Initially, as plotted in panel (d), only the extreme strategies are present. When the evolution is launched, intermediate strategies emerge and gradually invade the homogeneous extreme domains. We observed that the full-defector island is more sensitive and shrinks faster than the cooperator domain. This stage is illustrated in panel (e). Interestingly, the intermediate strategies invade each other, too, and finally only a single strategy survives, which provides a significant average cooperation level for the whole population. The final state is shown in panel (f) and the related animation of the whole process can be seen in gra.

One may argue that the final destination may depend sensitively on the initial state. Of course this effect is relevant when the available initial strategies are limited and span only a finite interval of the whole profile. For instance, if we apply the initial state used in panel (a) of Fig. 2 then the system is unable to leave these strategies because of the character of the gradual imitation rule. But this effect becomes irrelevant if the extreme strategies are present initially, no matter that other strategies are absent at first. This phenomenon is illustrated in Fig. 3, where we plotted the time evolution of the strategy distribution for two different cases. In the first case, plotted in panel (a), all possible strategies are present uniformly in the initial state. Later, strategies die out gradually, first those which are close to the extreme strategies. Finally, only a single strategy survives and the system terminates onto an absorbing state.

Figure 3: Time evolution of strategy distributions started from a uniform state, shown in panel (a), and from a two-state initial state where only the extreme strategies are present, shown in panel (b). In both cases the gradual imitation rule is applied and the system terminates into a homogeneous state. The linear size of the square grid and the temptation value are the same for both cases.

When the evolution is launched from the state containing only the extreme strategies then the system also terminates into an absorbing state where only a single strategy prevails. This process is shown in Fig. 3(b), where the number of available strategies gradually increases and, after a transient period, the evolution of the strategy distribution becomes similar to the above discussed case.

Naturally, the fixation interval, that is, which strategy survives in the end, may depend slightly on the system size. This is true especially for small system sizes and large N values. This effect is illustrated in Fig. 4, where we plotted the average of the final cooperation levels in dependence on the system size for both previously mentioned initial states.


Figure 4: Average cooperation level of final states in dependence on the system size when the evolution is launched from a uniform and from a two-strategy initial state on a square lattice. This plot suggests that the large-size limit can be reached more easily when only the extreme strategies are present in the initial state. The same parameter values as in Fig. 3 were applied. Results are averaged over 200 independent runs.

Here we have averaged the cooperation levels of the final states over several independent runs by using the same system size at a fixed temptation value, where the initial state was either the uniform or the two-strategy state of extreme strategies. This comparison suggests that albeit the fixation values tend to the same level in the large-size limit, the convergence in the two-strategy case is much faster. Practically, in the latter case we can obtain the values valid in the large-size limit already at moderate linear sizes.

In the following we present our key observation about the impact of the gradual learning protocol obtained on the square-lattice interaction graph. As Fig. 2 already signaled, the application of this strategy update rule can elevate the cooperation level in a population. This effect is confirmed by Fig. 5, where we summarized the cooperation level in dependence on the temptation parameter for different N values. These plots suggest that gradual learning is beneficial for cooperator strategies and the supporting mechanism becomes more visible for large N values. Since a large N involves just a tiny step toward the teacher's strategy, we can conclude that the more careful the strategy change applied, the stronger the cooperation-supporting effect reached. To estimate this remarkable effect properly, in this plot we have also marked by an arrow the critical temptation value where cooperators die out in the traditional two-strategy model.


Figure 5: Cooperation level in dependence on the temptation parameter on the square lattice for different values of N as indicated in the legend. The arrow indicates the highest temptation value where cooperators can survive in the traditional two-strategy model. Lines are just guides to the eye.

We have also checked the robustness of this effect on other kinds of interaction graphs. Leaving the translation-invariant symmetry of lattice structures, we first study a random graph which exhibits small-world character. To allow a fair comparison with the results of the square lattice we used the same degree distribution for each node Szabó et al. (2004). Figure 6, where our results are summarized, suggests that a similarly strong cooperation-supporting effect can be observed, hence our observation is not restricted to lattice structures but can also be detected for random graphs.


Figure 6: Cooperation level in dependence on the temptation parameter on a random graph where we used the same degree number as for the square lattice. The applied N values are marked in the legend. Lines are just guides to the eye. Results are averaged over 20 independent runs.

Last, we present our results obtained for scale-free interaction graphs. This topology represents a separate class of graphs where the degree distribution of nodes is highly heterogeneous Barabási and Albert (1999). To avoid additional effects, here we keep the average degree level and normalize the players' payoff by their degree value. In this way we can detect the pure impact of the gradual learning protocol on the cooperation level without being disturbed by the predominant impact of accumulated payoff values on a strongly heterogeneous graph Santos and Pacheco (2005). The results, summarized in Fig. 7, highlight that the heterogeneous degree distribution does not block the positive impact of gradual learning. On the contrary, the highest improvement can be detected here. For example, cooperators survive in the whole temptation interval already for a moderate N, where learning is less gradual. Furthermore, players remain mostly in the cooperator state even for high temptation values in the large-N limit.


Figure 7: Cooperation level in dependence on the temptation parameter on scale-free networks for different values of N as indicated in the legend. Lines are just guides to the eye. Results are averaged over 50 independent runs.

Previous works revealed the highly asymmetric way of strategy propagation for cooperator and defector strategies in spatial systems Szabó and Fáth (2007); Szolnoki et al. (2008b); Santos and Pacheco (2006); Gómez-Gardeñes et al. (2007). While defectors invade their neighborhood fast and easily, the alternative strategy needs to build up a ”phalanx” of cooperators Sigmund (2010), which is a slower process. Since the gradual learning update rule may modify these dynamical processes, it is instructive to explore its direct consequences on the mentioned propagation courses.

For this reason we used a prepared initial state where only the extreme defector and cooperator strategies form separate domains, and monitored the front propagation speed by measuring the domain growth of the dominant strategy. We have repeated it at different N values by using two characteristic temptation values: in the traditional model the lower one provides a clear dominance for cooperators, while the higher one ensures that defectors prevail. Using these as references we can measure how the mentioned domains grow for different N values. As we already noted, the usage of different N means a different level of the gradual learning protocol. In this way we can compare directly how the introduction of intermediate strategies modifies the invasion process.


Figure 8: Invasion speed of the dominating strategies, compared to the value measured in the classic two-strategy model, in dependence on the number of inner competing strategies. During the imitation process players apply gradual learning. At the lower temptation value cooperation prevails, while at the higher one defection prevails when the competition is released.

In Figure 8 we compare how the speed of invasion changes for the two characteristic temptation values as we increase the number of intermediate states. Note that N = 0 corresponds to the traditional two-strategy model, therefore the plotted quantity represents the relative change of the invasion speed. As expected, the introduction of gradual learning decelerates the propagation process in both cases. In other words, both the propagation of the successful defector state and that of the successful cooperator state slow down. However, there is a significant difference, because the modified learning protocol hinders the propagation of defection more efficiently. This biased impact on propagation explains why a strategy-neutral update rule has a cooperator-supporting net consequence on the competition of strategies.
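The relative invasion speed of Fig. 8 can be estimated from simulation snapshots; the sketch below (illustrative helper names, not the authors' measurement code) fits the domain-boundary position against time with a least-squares slope and normalizes it by the two-strategy reference:

```python
def invasion_speed(front_positions, dt=1.0):
    """Least-squares slope of front position versus time: a simple
    estimate of how fast the dominant strategy's domain grows."""
    n = len(front_positions)
    ts = [k * dt for k in range(n)]
    mt = sum(ts) / n
    mx = sum(front_positions) / n
    num = sum((t - mt) * (x - mx) for t, x in zip(ts, front_positions))
    den = sum((t - mt) ** 2 for t in ts)
    return num / den

def relative_speed(speed_at_N, speed_two_strategy):
    """Invasion speed at a given N normalized by the N = 0
    (two-strategy) reference, as plotted in Fig. 8."""
    return speed_at_N / speed_two_strategy
```

A ratio below one for both temptation values, with a smaller ratio for the defector-dominant case, would reproduce the biased deceleration described above.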

This argument is in nice agreement with our previous observations, because the fastest invasion of the defector state was previously reported on heterogeneous graphs. Since the gradual learning protocol blocks this propagation efficiently, the largest increment in the cooperation level can be expected for such interaction graphs.

4 Conclusion

Motivated by previous reports about the microscopic-dynamics-sensitive outcomes of evolutionary processes, in the present work we have explored the possible consequences of modified imitation protocols. Instead of accurate adoption we have introduced exaggerated and gradual imitation rules. While in the former protocol a learner enlarges the strategy change between the learner's and the master's strategies, in the latter protocol the learner makes only a minimal step toward the master's strategy during the update process. The former choice can be interpreted as the learner wanting an even higher payoff than the one collected by the master player. The latter protocol mimics the situation when the learner is more careful and is reluctant to make a drastic strategy change. These protocols can only be executed if we introduce a multi-strategy system where intermediate levels of cooperation are also possible.

We have observed that the exaggerated update protocol plays no particular role in the evolutionary process and eventually the system terminates onto the traditional two-strategy model where only always-defect and always-cooperate players are present. The gradual learning protocol, however, may influence the evolution significantly. If we make the strategy change tiny by introducing a large number of intermediate strategies then the cooperation level can be elevated remarkably. We note that a conceptually similar observation was made in Jiménez et al. (2009), where the authors introduced a weighted average as the way to update investments.

We have demonstrated that the cooperation-supporting mechanism of gradual learning can be explained by its biased impact on the strategy propagation process. Indeed, gradual learning decelerates the invasion of both main strategies, but the blocking of defector invasion is more effective. In this way a strategy-neutral microscopic rule has a biased impact on the competition of strategies. We note that this observation fits nicely the general concept that a neutral intervention may have a non-trivial consequence on strategy evolution Szolnoki et al. (2009); Fu et al. (2019); Szolnoki and Perc (2014); Liu et al. (2019c); Shao et al. (2019); Szolnoki and Chen (2018b); Xu et al. (2019).

Our observations are robust and remain valid for different types of interaction graphs. No matter whether a translation-invariant lattice, a random graph, or a highly heterogeneous topology is used, the positive consequence of the gradual learning protocol is remarkable. We note that a similar cooperation-supporting effect is expected on interdependent or multilayer networks Wang et al. (2012); Boccaletti et al. (2014); Jiang et al. (2015); Allen et al. (2018); Yang et al. (2019). This expectation is based on previous results which already highlighted the decisive role of the propagation process in such complex networks Szolnoki and Perc (2013); de Arruda et al. (2018).

This research was supported by the Hungarian National Research Fund (Grant K-120785) and by the National Natural Science Foundation of China (Grants No. 61976048 and No. 61503062).

References

  • Szabó and Fáth (2007) G. Szabó, G. Fáth, Evolutionary games on graphs, Phys. Rep. 446 (2007) 97–216.
  • Perc and Szolnoki (2010) M. Perc, A. Szolnoki, Coevolutionary games – a mini review, BioSystems 99 (2010) 109–125.
  • Fotouhi et al. (2019) B. Fotouhi, N. Momeni, B. Allen, M. A. Nowak, Evolution of cooperation on large networks with community structure, J. R. Soc. Interface 16 (2019) 20180677.
  • Takeshue (2019) H. Takeshue, Roles of mutation rate and co-existence of multiple strategy updating rules in evolutionary prisoner’s dilemma games, EPL 126 (2019) 58001.
  • Cheng et al. (2019) F. Cheng, T. Chen, Q. Chen, Matching donations based on social capital in Internet crowdfunding can promote cooperation, Physica A 531 (2019) 121766.
  • Liu et al. (2019a) D. Liu, C. Huang, Q. Dai, H. Li, Positive correlation between strategy persistence and teaching ability promotes cooperation in evolutionary prisoner’s dilemma games, Physica A 520 (2019a) 267–274.
  • Ohtsuki and Nowak (2006) H. Ohtsuki, M. A. Nowak, The replicator equation on graphs, J. Theor. Biol. 243 (2006) 86–97.
  • Matsui (1992) A. Matsui, Best Response Dynamics and Socially Stable Strategies, J. Econ. Theor. 57 (1992) 343–362.
  • Nowak and Sigmund (1993) M. A. Nowak, K. Sigmund, A strategy of win-stay, lose-shift that outperforms tit-for-tat in the Prisoner’s Dilemma game, Nature 364 (1993) 56–58.
  • Chen et al. (2008) X.-J. Chen, F. Fu, L. Wang, Promoting cooperation by local contribution under stochastic win-stay-lose-shift mechanism, Physica A 387 (2008) 5609–5615.
  • Amaral et al. (2016) M. A. Amaral, L. Wardil, M. Perc, J. K. L. da Silva, Stochastic win-stay-lose-shift strategy with dynamic aspirations in evolutionary social dilemmas, Phys. Rev. E 94 (2016) 032317.
  • Fu et al. (2019) M. Fu, W. Guo, L. Cheng, S. Huang, D. Chen, History loyalty-based reward promotes cooperation in the spatial public goods game, Physica A 525 (2019) 1323–1329.
  • Liu et al. (2011) Y. Liu, X. Chen, L. Wang, B. Li, W. Zhang, H. Wang, Aspiration-based learning promotes cooperation in spatial prisoner’s dilemma games, EPL 94 (2011) 60002.
  • Wang and Perc (2010) Z. Wang, M. Perc, Aspiring to the fittest and promotion of cooperation in the prisoner’s dilemma game, Phys. Rev. E 82 (2010) 021115.
  • Chen et al. (2017) Y.-S. Chen, H.-X. Yang, W.-Z. Guo, Aspiration-induced dormancy promotes cooperation in the spatial Prisoner’s Dilemma game, Physica A 469 (2017) 625–630.
  • Wu et al. (2018) T. Wu, F. Fu, L. Wang, Coevolutionary dynamics of aspiration and strategy in spatial repeated public goods games, New J. Phys. 20 (2018) 063007.
  • Liu et al. (2019b) R.-R. Liu, C.-X. Jia, Z. Rong, Effects of enhancement level on evolutionary public goods game with payoff aspirations, Applied Mathematics and Computation 350 (2019b) 242–248.
  • Zhang et al. (2019) L. Zhang, C. Huang, H. Li, Q. Dai, Aspiration-dependent strategy persistence promotes cooperation in spatial prisoner’s dilemma game, EPL 126 (2019) 18001.
  • Fu et al. (2011) F. Fu, D. I. Rosenbloom, L. Wang, M. A. Nowak, Imitation dynamics of vaccination behaviour on social networks, Proc. R. Soc. B 278 (2011) 42–49.
  • Naber et al. (2013) M. Naber, M. V. Pashkam, K. Nakayama, Unintended imitation affects success in a competitive game, Proc. Natl. Acad. Sci. 110 (2013) 20046–20050.
  • Zhang et al. (2014) B. Zhang, C. Li, H. De Silva, P. Bednarik, K. Sigmund, The evolution of sanctioning institutions: an experimental approach to the social contract, Exp. Econ. 17 (2014) 285–303.
  • Roca et al. (2009) C. P. Roca, J. A. Cuesta, A. Sánchez, Evolutionary game theory: Temporal and spatial effects beyond replicator dynamics, Phys. Life Rev. 6 (2009) 208–249.
  • Du et al. (2010) W.-B. Du, X.-B. Cao, R.-R. Liu, C.-X. Jia, The effect of a history-fitness-based updating rule on evolutionary games, Int. J. Mod. Phys. C 21 (2010) 1433–1442.
  • Szolnoki and Chen (2018a) A. Szolnoki, X. Chen, Competition and partnership between conformity and payoff-based imitations in social dilemmas, New J. Phys. 20 (2018a) 093008.
  • Hadjichrysanthou et al. (2011) C. Hadjichrysanthou, M. Broom, J. Rychtář, Evolutionary Games on Star Graphs Under Various Updating Rules, Dyn. Games Appl. 1 (2011) 386–407.
  • Chica et al. (2019) M. Chica, R. Chiong, J. J. Ramasco, H. Abbass, Effects of update rules on networked N-player trust game dynamics, Commun. Nonlin. Sci. Numer. Sim. 79 (2019) 104870.
  • Takesue (2019) H. Takesue, Effects of updating rules on the coevolving prisoner’s dilemma, Physica A 513 (2019) 399–408.
  • Szolnoki and Danku (2018) A. Szolnoki, Z. Danku, Dynamic-sensitive cooperation in the presence of multiple strategy updating rules, Physica A 511 (2018) 371–377.
  • Frean (1996) M. Frean, The Evolution of Degrees of Cooperation, J. Theor. Biol. 182 (1996) 549–559.
  • Nowak and May (1992) M. A. Nowak, R. M. May, Evolutionary Games and Spatial Chaos, Nature 359 (1992) 826–829.
  • Szabó et al. (2005) G. Szabó, J. Vukov, A. Szolnoki, Phase diagrams for an evolutionary prisoner’s dilemma game on two-dimensional lattices, Phys. Rev. E 72 (2005) 047107.
  • Szolnoki et al. (2008a) A. Szolnoki, M. Perc, G. Szabó, Diversity of reproduction rate supports cooperation in the prisoner’s dilemma game in complex networks, Eur. Phys. J. B 61 (2008a) 505–509.
  • Szabó et al. (2004) G. Szabó, A. Szolnoki, R. Izsák, Rock-scissors-paper game on regular small-world networks, J. Phys. A: Math. Gen. 37 (2004) 2599–2609.
  • Santos and Pacheco (2005) F. C. Santos, J. M. Pacheco, Scale-free networks provide a unifying framework for the emergence of cooperation, Phys. Rev. Lett. 95 (2005) 098104.
  • Szolnoki et al. (2008b) A. Szolnoki, M. Perc, Z. Danku, Towards effective payoffs in the prisoner’s dilemma game on scale-free networks, Physica A 387 (2008b) 2075–2082.
  • Masuda (2007) N. Masuda, Participation costs dismiss the advantage of heterogeneous networks in evolution of cooperation, Proc. R. Soc. B 274 (2007) 1815–1821.
  • Tomassini et al. (2007) M. Tomassini, L. Luthi, E. Pestelacci, Social dilemmas and cooperation in complex networks, Int. J. Mod. Phys. C 18 (2007) 1173–1185.
  • acc (????) https://dx.doi.org/10.6084/m9.figshare.9771626
  • exa (????) https://dx.doi.org/10.6084/m9.figshare.9771659
  • gra (????) https://dx.doi.org/10.6084/m9.figshare.9771671
  • Barabási and Albert (1999) A.-L. Barabási, R. Albert, Emergence of scaling in random networks, Science 286 (1999) 509–512.
  • Santos and Pacheco (2006) F. C. Santos, J. M. Pacheco, A New Route to the Evolution of Cooperation, J. Evol. Biol. 19 (2006) 726–733.
  • Gómez-Gardeñes et al. (2007) J. Gómez-Gardeñes, M. Campillo, L. M. Floría, Y. Moreno, Dynamical Organization of Cooperation in Complex Networks, Phys. Rev. Lett. 98 (2007) 108103.
  • Sigmund (2010) K. Sigmund, The Calculus of Selfishness, Princeton University Press, Princeton, NJ, 2010.
  • Jiménez et al. (2009) R. Jiménez, H. Lugo, M. San Miguel, Gradual learning and the evolution of cooperation in the spatial Continuous Prisoner’s Dilemma, Eur. Phys. J. B 71 (2009) 273–280.
  • Szolnoki et al. (2009) A. Szolnoki, M. Perc, G. Szabó, H.-U. Stark, Impact of aging on the evolution of cooperation in the spatial prisoner’s dilemma game, Phys. Rev. E 80 (2009) 021901.
  • Szolnoki and Perc (2014) A. Szolnoki, M. Perc, Coevolutionary success-driven multigames, EPL 108 (2014) 28004.
  • Liu et al. (2019c) Y. Liu, C. Yang, K. Huang, Z. Wang, Swarm intelligence inspired cooperation promotion and symmetry breaking in interdependent networked game, Chaos 29 (2019c) 043101.
  • Shao et al. (2019) Y. Shao, X. Wang, F. Fu, Evolutionary dynamics of group cooperation with asymmetrical environmental feedback, EPL 126 (2019) 40005.
  • Szolnoki and Chen (2018b) A. Szolnoki, X. Chen, Reciprocity-based cooperative phalanx maintained by overconfident players, Phys. Rev. E 98 (2018b) 022309.
  • Xu et al. (2019) Z. Xu, R. Li, L. Zhang, The role of memory in human strategy updating in optional public goods game, Chaos 29 (2019) 043128.
  • Wang et al. (2012) Z. Wang, A. Szolnoki, M. Perc, Evolution of public cooperation on interdependent networks: The impact of biased utility functions, EPL 97 (2012) 48001.
  • Boccaletti et al. (2014) S. Boccaletti, G. Bianconi, R. Criado, C. del Genio, J. Gómez-Gardeñes, M. Romance, I. Sendiña-Nadal, Z. Wang, M. Zanin, The structure and dynamics of multilayer networks, Phys. Rep. 544 (2014) 1–122.
  • Jiang et al. (2015) L.-L. Jiang, W.-J. Li, Z. Wang, Multiple effect of social influence on cooperation in interdependent network games, Sci. Rep. 5 (2015) 14657.
  • Allen et al. (2018) J. M. Allen, A. C. Skeldon, R. B. Hoyle, Social influence preserves cooperative strategies in the conditional cooperator public goods game on a multiplex network, Phys. Rev. E 98 (2018) 062305.
  • Yang et al. (2019) G. Yang, C. Zhu, W. Zhang, Adaptive and probabilistic strategy evolution in dynamical networks, Physica A 518 (2019) 99–110.
  • Szolnoki and Perc (2013) A. Szolnoki, M. Perc, Information sharing promotes prosocial behaviour, New J. Phys. 15 (2013) 053010.
  • de Arruda et al. (2018) G. F. de Arruda, F. A. Rodrigues, Y. Moreno, Fundamentals of spreading processes in single and multilayer complex networks, Phys. Rep. 756 (2018) 1–59.