The self-organizing impact of averaged payoffs on the evolution of cooperation

07/05/2021
by A. Szolnoki, et al.

According to the fundamental principle of evolutionary game theory, the more successful strategy in a population should spread. Hence, during a strategy imitation process a player compares its own payoff to the payoff held by a competing strategy. This information, however, is not always accurate. To avoid ambiguity, a learner may therefore decide to collect more reliable statistics by averaging the payoff values of its opponents in the neighborhood and make a decision afterwards. This simple alteration of the standard microscopic protocol significantly improves the cooperation level in a population. Furthermore, the positive impact can be strengthened by increasing the role of the environment and the size of the evaluation circle. The mechanism behind this improvement is a self-organizing process that reveals the detrimental consequence of defector aggregation, a consequence that remains partly hidden during face-to-face comparisons. Notably, the reported phenomenon is not limited to lattice populations but remains valid for systems described by irregular interaction networks.
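
The averaged-payoff update described above can be illustrated with a short simulation sketch. The code below is a minimal, hypothetical Python implementation assuming a weak prisoner's dilemma (R=1, P=S=0, T=b) on a square lattice with a Fermi imitation rule; the lattice size, parameter values (b, K), von Neumann neighborhood, and function names are illustrative assumptions, not specifications from the paper. The only departure from the standard face-to-face protocol is that the learner compares its payoff to the average payoff of the neighbors holding the competing strategy rather than to the payoff of a single opponent.

```python
import numpy as np

# Sketch of an averaged-payoff imitation update on a square lattice.
# Assumes a weak prisoner's dilemma (R=1, P=S=0, T=b) and a Fermi rule;
# all parameter values here are illustrative, not taken from the paper.

L = 50          # linear lattice size
b = 1.05        # temptation to defect
K = 0.1         # noise in the Fermi imitation rule
rng = np.random.default_rng(0)

# 1 = cooperator, 0 = defector
strategy = rng.integers(0, 2, size=(L, L))

def neighbors(x, y):
    """Von Neumann neighborhood with periodic boundaries."""
    return [((x + 1) % L, y), ((x - 1) % L, y),
            (x, (y + 1) % L), (x, (y - 1) % L)]

def payoff(x, y):
    """Accumulated payoff of site (x, y) from games with its four neighbors."""
    s = strategy[x, y]
    total = 0.0
    for nx, ny in neighbors(x, y):
        if s == 1 and strategy[nx, ny] == 1:
            total += 1.0        # mutual cooperation: R = 1
        elif s == 0 and strategy[nx, ny] == 1:
            total += b          # defector exploits cooperator: T = b
        # C vs D (S = 0) and D vs D (P = 0) contribute nothing
    return total

def monte_carlo_step():
    for _ in range(L * L):
        x, y = rng.integers(0, L, size=2)
        # pick a random neighbor whose strategy may be imitated
        mx, my = neighbors(x, y)[rng.integers(0, 4)]
        if strategy[x, y] == strategy[mx, my]:
            continue
        # averaged-payoff variant: instead of the model's own payoff, use the
        # average payoff of all neighbors holding the competing strategy
        rivals = [(nx, ny) for nx, ny in neighbors(x, y)
                  if strategy[nx, ny] == strategy[mx, my]]
        avg_rival_payoff = np.mean([payoff(nx, ny) for nx, ny in rivals])
        # Fermi imitation probability
        p = 1.0 / (1.0 + np.exp((payoff(x, y) - avg_rival_payoff) / K))
        if rng.random() < p:
            strategy[x, y] = strategy[mx, my]

for step in range(200):
    monte_carlo_step()

print("cooperator fraction:", strategy.mean())
```

Tracking the cooperator fraction over many such Monte Carlo steps, and comparing it against a run where `avg_rival_payoff` is replaced by the single model's payoff, would reproduce the kind of comparison the abstract refers to.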
