
Selfish Algorithm and Emergence of Collective Intelligence

We propose a model for demonstrating the spontaneous emergence of collective intelligent behavior from selfish individual agents. Agents' behavior is modeled using our proposed selfish algorithm (SA) with three learning mechanisms: reinforced learning (SAL), trust (SAT) and connection (SAC). Each of these mechanisms provides a distinctly different way an agent can increase the individual benefit accrued through playing the prisoner's dilemma game (PDG) with other agents. The SA provides a generalization of the self-organized temporal criticality (SOTC) model and shows that self-interested individuals can simultaneously produce maximum social benefit from their decisions. The mechanisms in the SA are tuned by the internal dynamics alone, without a pre-established network structure. Our results demonstrate the emergence of mutual cooperation, the emergence of dynamic networks, and the adaptation and resilience of social systems after perturbations. The implications and applications of the SA are discussed.

I Introduction

In this paper we address the problem of resolving the paradox of cooperative behavior emerging in situations in which individuals are assumed to act solely in their own self-interest, that is, selfishly. This particular paradox has a long history and has been widely studied in sociology and in the cognitive sciences. Game theory has been among the leading descriptors of normative behavior in this regard. In particular, the Prisoner's Dilemma Game (PDG) has been a leading metaphor for the study of the evolution of cooperative behavior in populations in which selfishness is substantially rewarded in the short-term nowak1993strategy; gonzalezetal2015. Herein we integrate this behavior into a recently developed model of self-organized behavior west17, but before we become immersed in the formalization of our proposed model, we examine the behavior of interest from a qualitative perspective.

It is nearly two centuries since Lloyd lloyd33 introduced the tragedy of the commons scenario to highlight the limitations of human rationality in a social context. We subsequently update this scenario to modern situations in which individuals, with self-centered interests, must share a common resource. If each individual acts selfishly, without regard for the others sharing the resource, the unavoidable result will be the depletion of resources and subsequent tragedy. There is still much to be gained in reviewing the original scenario before adapting it to our present concerns. The 'commons' is an old-English term for a land area whose use is open to the public. Lloyd considered a pasture to be a commons, shared by a number of cattle herders, each of whom retains as many cattle as possible. External factors such as wars, disease, poaching and so on act to keep the number of men and beasts well below the maximum level that can be safely supported by the commons. The dynamic response to these disruptions is robust, but eventually social stability ensues. However, instead of enhancing the well-being of the situation, the stability inevitably leads to tragedy. How does this counterintuitive development arise?

In Lloyd's argument, the assumption is made that all herdsmen act rationally. On the socially stable commons, rationality leads to a herdsman acting in his own self-interest in order to maximize the gain made from grazing and selling his cattle. He reasons that, on the downside, growing his herd incurs certain costs, the purchase price and the incremental cost of overgrazing, by adding another animal to his herd. On the upside, the overgrazing cost is shared by all the herdsmen on the commons. Consequently, since he does not share with others the profit of selling this additional animal, it makes economic sense for him to add the animal to his herd. The same argument holds for each further animal he adds. This same conclusion is independently reached by each of the other herdsmen on the commons, since they are equally rational, resulting in tragedy: with the growing herds the grazing capacity of the commons will inevitably be destroyed over time. The herdsmen are trapped in a rational system that logically compels them to increase, without limit, the size of their herds, and to do this in the face of limited resources. Disaster and failure are the end point of this logical sequence in which all participants act in their own self-interest.

One can find a number of weak points in this arcane argument, which we pose here as questions. Do people always (or ever) behave strictly rationally? Do individuals ever act independently of what others in their community are doing? Do people typically completely disregard the effects of their actions on the behavior of others? We subsequently address each of these questions and others in a more general context. For the moment it is sufficient to point out that the "straw man" of the tragedy of the commons is an example of linear logical thinking, and its deadly flaw is the unrealistic way it is applied to resolving paradoxes in our complex world.

The Altruism Paradox

Altruism is one concept that was missing from Lloyd's tragedy of the commons discussion, but which seems to be no less important than the notion of selfishness, which was foundational to his argument. Moreover, if selfishness is entailed by rationality in decision making, as he maintained, then altruism must be realized through an irrational mechanism in decision making. In point of fact, the competition between the two aspects of human decision making, the rational and the irrational ariely08; kahneman2011thinking, may well save the commons from ruin.

Let us consider the altruism paradox (AP), which identifies a self-contradictory condition regarding the characteristics of species. This dates back to Darwin's recognition that some individuals in a number of species act in a manner that, although helpful to other members of the species, may jeopardize their own survival. Yet this property is often characteristic of that species. He also identified such altruism as contradicting his theory of evolution by natural selection darwin71. Darwin also proposed a resolution to this problem by speculating that natural selection is not restricted to the lowest element of the social group, the individual, but can occur at all levels of a biological hierarchy, which constitutes multilevel selection theory wilson07.

As discussed by West et al. west19b, the theory of sociobiology was developed in the last century for the purpose of, at least in part, explaining how and why Darwin's theory of biological evolution is compatible with sociology. Wilson and Wilson wilson07 were able to demonstrate the convergence of scientific consensus on the use of multilevel selection theory to resolve the AP in sociobiology. Rand and Nowak rand2013human emphasize that natural selection suppresses the development of cooperation unless it is balanced by specific counter mechanisms. Five such mechanisms found in the literature are: multilevel selection, spatial selection, direct reciprocity, indirect reciprocity and kin selection. In their paper they discuss models of these mechanisms, as well as the empirical evidence supporting their existence in situations where people cooperate, in the context of evolutionary game theory. The resolution of such fundamental problems as the AP may be identified at the birth of each of the scientific disciplines: new perspectives are designed to set aside large domains of science thought to be irrelevant within the narrow confines of the new discipline. The complexity of a given phenomenon can therefore be measured by the number of different disciplines interwoven to describe its behavior.

Prisoner’s Dilemma Game

The PDG dates back to the early development of game theory RapoportChammah65, and is a familiar mathematical formulation of the essential elements of many social situations involving cooperative behavior. PDGs are generally represented with a payoff matrix that provides payoffs according to the actions of two players (see Table 1). When both players cooperate, each of them gains the payoff R, and when both players defect, each of them gains P. If in a social group agents i and j play the PDG, when i defects and j cooperates, i gains the payoff T and j gains the payoff S, and when the decisions are switched, so too are the payoffs. The constraints on the payoffs in the PDG are T > R > P > S and 2R > T + S. The temptation to defect is established by setting the condition T > R.

The dilemma arises from the fact that although it is clear that for a social group (in the short-term) and for an individual (in the long-term) the optimal mutual action is for both to cooperate, each individual is tempted to defect because that decision elicits the higher immediate reward for the defecting individual. But, assuming the other player also acts to selfishly maximize her own benefit, the pair will end up in a defector-defector situation, having the minimum payoff for both players. How do individuals realize that cooperation is mutually beneficial in the long-term? This question has been answered by many researchers, at various levels of inquiry, involving pairs of agents gonzalezetal2015; Moisanetal2018, as well as larger social networks nowak1993strategy. Research suggests that, at the pair level, people dynamically adjust their actions according to their observations of each other's actions and outcomes; at the complex dynamic network or societal level, this same research suggests that the emergence of cooperation may be explained by network reciprocity, whereby individuals play primarily with those agents with whom they are already connected in a network structure. The demonstration of how social networks and structured populations with explicit connections foster cooperation was introduced by Nowak and May nowak1992evolutionary. Alternative models based on network reciprocity assume agents in a network play the PDG only with those agents to whom they have specific links. Agents act by copying the strategy of the richest neighbor, basing their decisions on the observation of the others' payoffs. Thus, network reciprocity depends on the existence of a network structure (an already predefined set of links among agents) and on the awareness of the behavior and payoffs of interconnected agents.

                 Player j: C     Player j: D
Player i: C      (R, R)          (S, T)
Player i: D      (T, S)          (P, P)
Table 1: The general payoffs of the PDG. The first value of each pair is the payoff of agent i and the second value is the payoff of agent j.

Empirical evidence of emergence of cooperation

Past research has supported the conclusion that the survival of cooperators requires the observation of the actions and/or outcomes of others and the existence of predefined connections (links) among agents. Indeed, empirical work suggests that the survival and growth of cooperation within the social group depends on the level of information available to each agent MartinGonJuvLeb2014. The less information about other agents that is available, the more difficult it is for cooperative behavior to emerge RapoportChammah65; MartinGonJuvLeb2014. On the other hand, other experiments suggest that humans do not consider the payoffs to others when making their decisions, and that a network structure does not influence the final cooperative outcome fischbacher2001people. In fact, in many aspects of life we influence others through our choices and the choices of others affect us, but we are not necessarily aware of the exact actions and rewards received by others that have affected us. For example, when a member of society avoids air travel in order to reduce their carbon footprint, they might not be able to observe whether others are reducing their air travel as well, yet they rely on the decisions others make, which influence the community as a whole. Thus, it is difficult to explain how behaviors can be self-perpetuating even when the source of influence is unknown MartinGonJuvLeb2014.

These empirical observations support an important hypothesis emerging from the work presented herein. Our hypothesis is that mutual cooperation emerges and survives even when the social group consists exclusively of selfish agents and there is no conscious awareness of the payoffs to other agents. Moreover, a network structure can emerge dynamically from the connections formed and guided by individual selfishness. Note that this is the dynamics of a network, as distinct from the more familiar dynamics on a network. The dynamics on a network assumes a static network structure, as in evolutionary game theory, whereupon the strengths of the links between agents may change, but the agents sit at the nodes of the network and interact with the same nearest neighbors. On the other hand, the dynamics of a network makes no such assumption; here the dynamics consist of the formation and dissolution of links between any two agents within the social group.

II Contributions

Herein, we aim to clarify how collective intelligence emerges without explicit knowledge of the actions and outcomes of others, and in the absence of a predefined network structure linking agents within a society. We introduce an algorithm (the Selfish Algorithm, SA) to demonstrate that collective intelligence can emerge and survive among agents in a social group, out of selfishness. We construct a computational model of the SA and generate simulations of the evolution of mutual cooperation and of networks out of selfishness.

First, the model provides a resolution of the altruism paradox (AP) and shows how agents create a self-organized critical group out of individual selfish behavior, simultaneously maximizing the benefit of the individual and of the whole group. Second, the model demonstrates how adaptation can naturally emerge from the same mechanisms in the model. Adaptation is an important property of living things, which allows them to respond to a changing environment in a way that optimizes their performance and survivability.

We studied four systems governed by different learning mechanisms proposed by the SA, where the behavior of the agents is modeled using three modes of learning: reinforced learning (SAL), trust (SAT), and connection (SAC). The first system studied has only the SAL mechanism active. The other three systems are the combination of SAL and SAT (SALT), of SAL and SAC (SALC), and the combination of all the mechanisms (SALTC). Next, we tested the sensitivity of the collective behavior that emerged from these systems to changes in the social makeup by exchanging a fraction of the agents for zealots at a given time. A zealot is an agent that will not change its decision regardless of the payoff. The comparison of the mutual cooperation of the systems versus the number of zealots is a measure of resilience, or robustness (i.e., the resistance of the collective behavior to perturbations).

The computational results show that these systems can be ranked from strongest to weakest in their resilience as SALTC, SALC, SALT, SAL. The SALC and SALTC systems have a high resilience because using SAC in the decision-making process enables agents to learn to avoid pairing with the zealots. This way of pairing is different from the popular mechanism of preferential attachment barabasi1999emergence, which uses rules according to which an agent has a higher chance of linking with other agents that already have many links, those being agents with a substantial reputation. In contrast, the SA demonstrates that such chances to connect to other agents emerge dynamically, according to the experienced benefit that the other agent brings to the individual's own benefit (SAC). This high adaptability of the SA shows that each agent can be observed to act as a local leader, making decisions based on changes in its environment toward maximizing its own benefit; at the same time, the local decisions made by agents also affect the choices taken by the system as a whole, leading it to its optimal state.

Network reciprocity and survival of cooperators

Figure 1: The colors on the left and right panels show the average of the Ratio of Mutual Cooperation (RMC) at time 100, when each agent played the PDG 100 times with its eight nearest neighbors and updated its decision by imitating the decision of the richest neighbor (including itself). The agents were located on a regular two-dimensional lattice of size 10×10 (left panel) and 30×30 (right panel). The horizontal axis of the panels shows the degree of temptation to cheat experienced by the agents and the vertical axis is the initial ratio of cooperators on the lattice. An ensemble of 100 realizations was used to evaluate the average at time 100.
Figure 2: Left panel: The time evolution of the Ratio of Mutual Cooperation (RMC) for agents located on a regular two-dimensional lattice with 10×10 (black curve) and 30×30 (red curve) agents. Initially 75% of agents were randomly picked to be cooperators; at each time step agents played the PDG with their 8 nearest neighbors and imitated the decision of their richest neighbor (including themselves). Right panel: The time evolution of the RMC for agents under the same conditions as in the left panel, except that at each time step two agents are picked randomly to play the PDG (no predefined connections). The curves are averaged over 100 realizations.

To put our contributions in perspective, let us highlight the main differences between the SA model and those of previous studies. First, the works following the strategy of the Nowak-May agents nowak1992evolutionary typically compare their individual payoff with the payoffs of the other agents when making their next decision. One way to realize this strategy is to adopt the decision of the most successful neighbor. In contrast, all the updates in the SA are based on the last two payoffs to the self-same individual. In other words, agents do not rely on information about the other members of society to determine their evolution. SA agents are connected to other agents only indirectly, through their local payoffs in time. The most recent payoff to the individual is the result of its interaction with its most recent partner, but its previous payoff, used for tuning its tendencies (presented below in Equation 1), might be the result of playing with a different partner. Second, the Nowak-May agents nowak1992evolutionary are assumed to be located at the nodes of a lattice and to be connected by links, as they were also assumed to be in the decision making model (DMM) west19b. Because they capture social interactions, such lattice networks are regularly used to model complex networks and have historically been assumed to mimic the behavior of real social systems. These studies emphasized the important role of network complexity in sustaining, or promoting, cooperation tomassini2007social. This was accomplished without explicitly introducing a self-organizing mechanism for the network formation itself.

To show the importance of the specific connection of agents on a lattice in the Nowak and May nowak1992evolutionary model, we replicated their simulation work. In this simulation the agents are located on a regular two-dimensional lattice, each having eight nearest neighbors with whom to play the PDG. There is an agent located at each node of the lattice, and each updates its decision at each round of the computation by imitating the decision of the richest agent in its neighborhood (including itself). Figure 1 shows our replication of their work. The colors on the panels indicate the average of the Ratio of Mutual Cooperation (RMC) sustained between the agents located on the 10×10 lattice (left panel) and on the 30×30 lattice (right panel) after each agent played 100 times with its eight nearest neighbors. The yellow areas correspond to a high RMC, which occurs for a low temptation to cheat (i.e., to select the Defect action in the PDG) and a high initial number of cooperators. The yellow area is more extended in the case of agents located on the larger lattice of size 30×30 (right panel).

To explicitly show the importance of Nowak and May's assumption of the lattice structure for the survival of mutual cooperation, we ran their model and compared the time evolution of the RMC with the case where the agents were paired up randomly. The left panel of Figure 2 shows the ensemble average of 100 simulations for 100 agents (black curve) and 900 agents (red curve) connected on a two-dimensional lattice. At each time round, each agent plays the PDG, with a low temptation to cheat, with all eight of its neighbors. Initially 75% of the agents were cooperators, and the RMC evolved and was sustained in both cases. The right panel of Figure 2 shows the results of the same experiment, but selecting the interacting pairs randomly (no lattice structure is assumed for the agents). As shown in the figure, the RMC vanishes in the absence of the network structure.
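For readers who wish to reproduce this comparison, a minimal sketch of the lattice version of the imitation dynamics is given below. It assumes periodic boundaries, synchronous updates, and a simplified payoff set (R = 1, T = 1.5, S = P = 0); these are illustrative choices, not necessarily the exact settings behind Figures 1 and 2.

```python
import numpy as np

rng = np.random.default_rng(0)

def play_round(strat, T=1.5, R=1.0, P=0.0, S=0.0):
    """One synchronous Nowak-May round on an L x L lattice (periodic boundaries assumed).

    strat: boolean array, True = cooperator, False = defector.
    Each site accumulates its payoff against its eight nearest neighbors and then
    imitates the richest agent in its 3 x 3 Moore neighborhood (including itself).
    """
    payoff = np.zeros_like(strat, dtype=float)
    shifts = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
    for di, dj in shifts:
        nb = np.roll(np.roll(strat, di, axis=0), dj, axis=1)
        payoff += np.where(strat, np.where(nb, R, S), np.where(nb, T, P))
    best_pay = payoff.copy()
    best_strat = strat.copy()
    for di, dj in shifts:
        nb_pay = np.roll(np.roll(payoff, di, axis=0), dj, axis=1)
        nb_strat = np.roll(np.roll(strat, di, axis=0), dj, axis=1)
        better = nb_pay > best_pay          # ties broken by keeping the current best
        best_pay = np.where(better, nb_pay, best_pay)
        best_strat = np.where(better, nb_strat, best_strat)
    return best_strat

L, rounds, init_coop = 30, 100, 0.75
strat = rng.random((L, L)) < init_coop
for _ in range(rounds):
    strat = play_round(strat)
# A crude summary; the paper's RMC would instead count cooperator-cooperator neighbor pairs.
print("final fraction of cooperators:", strat.mean())
```

Replacing the neighborhood imitation with randomly chosen partners, as in the right panel of Figure 2, amounts to removing the lattice structure from this loop.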

As we demonstrate next, the SA does not rely on a lattice network; yet mutual cooperation is subsequently shown to emerge and to survive induced perturbations. In fact, we show that network reciprocity can be interpreted as a byproduct of the SA, emerging out of the selfishness of the agents.

III The Selfish Algorithm (SA)

The selfish algorithm (SA) proposed in this research belongs to a family of agent-based models in modern evolutionary game theory adami2016evolutionary. Specifically, the SA represents a generalization of a model introduced by Mahmoodi et al. west17, which couples a decision making model (DMM) with evolutionary game theory in a social context. This earlier work led to the self-organized temporal criticality (SOTC) concept and the notion that a social group could and would adjust its behavior to achieve consensus, interpreted as a form of group intelligence, as had been suggested by the collective behavior observed in animal studies couzin2007collective. The SOTC agents, like Nowak and May's agents nowak1992evolutionary, assume a pre-existing network between the agents, which use the DMM (belonging to the Ising universality class of models for decision making li2018ising). The SA overcomes this limitation, resulting in a general model for the emergence of collective intelligence and of a complex dynamic network.

The general notion that the SA adopted from Mahmoodi et al. west17 is that an agent makes a decision according to the change in a cumulative tendency (i.e., preference). The change in this cumulative tendency, Δ, is a function of the last two payoffs of agent i, who played with agents j and k:

\Delta_i(t) = \chi \, \frac{\Pi_i(t) - \Pi_i(t')}{\Pi_i(t) + \Pi_i(t')}    (1)

where χ is a positive number that represents the sensitivity of the agent to its last two payoffs. The quantity \Pi_i(t) is the payoff to agent i, who played with agent j at time t, and \Pi_i(t') is the payoff to agent i, who played with agent k at the earlier time t'. We used a single fixed value of χ in the SA and assumed \Delta_i(t) = 0 when the denominator of Eq. (1) is zero.
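A minimal sketch of this update rule, written against the reconstruction of Eq. (1) above; the function names, the default value of chi, and the 0-1000 clipping bounds (quoted later in the simulation section) are used here purely for illustration.

```python
def tendency_change(payoff_now, payoff_prev, chi=1.0):
    """Eq. (1): chi * (payoff_now - payoff_prev) / (payoff_now + payoff_prev).
    Returns 0 when the denominator vanishes, as stated in the text."""
    denom = payoff_now + payoff_prev
    if denom == 0:
        return 0.0
    return chi * (payoff_now - payoff_prev) / denom

def update_tendency(tendency, payoff_now, payoff_prev, chi=1.0, lo=0.0, hi=1000.0):
    """Add the change to a cumulative tendency and clip it to the stated bounds."""
    return min(hi, max(lo, tendency + tendency_change(payoff_now, payoff_prev, chi)))
```

For example, update_tendency(500.0, 1.0, 1.5) lowers the tendency by 0.2 chi, because the most recent payoff is smaller than the previous one.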

A simplified version of the SA was previously introduced elsewhere mahmoodi19, and an abbreviated version of the algorithm and its mathematical details are included in Appendix A. In the first cycle of the SA, a pair of randomly selected agents i and j "agree" to play. Only one pair of agents plays at each time cycle. Each agent of a pair engages in three decisions; each decision is represented in a learning mechanism (a cumulative propensity lever, ranging from 0 to 1) by which the agent can gain information about the social environment: learning from its own past outcomes (Selfish Algorithm Learning, SAL), trusting the decision of other agents (Selfish Algorithm Trust, SAT), and making social connections that are beneficial (Selfish Algorithm Connection, SAC). Each of these levers is updated according to the agent's self-interest (selfishness): a decision is made according to the state of each lever, and the lever is updated according to the change in cumulative tendency formalized in Eq. (1).

In the SAL mechanism, an agent decides to cooperate (C) or defect (D) while playing the PDG with the paired partner, according to the state of the cumulative propensity of playing C or D. The cumulative propensity of the agent to pick C or D increases (or decreases) if its payoff increased (or decreased) with respect to its previous payoff, as per Eq. (1). The updated cumulative tendency to play C or D with agent j is used the next time agent i is paired with agent j. Our simulation results show that the SAL mechanism attracts the agents toward mutual cooperation.

In the SAT decision, an agent decides whether to rely on the decision of its partner, instead of using its own decision made with SAL, according to the state of the cumulative propensity of "trusting" the other's decision or not. Each agent can tune this propensity to rely on its partner's decision according to Eq. (1). The agent increases (or decreases) its tendency to trust its partner's decision if doing so increased (or decreased) its payoff with respect to its previous payoff. Our simulation results show that the SALT mechanism (both SAL and SAT active) amplifies the mutual cooperation between the agents regardless of the value of the incentive to cheat g.

Finally, SAC is the decision of an agent to pick the partner with whom to play in each round. Each agent can tune its propensity of selecting its partner according to Eq. (1). An agent increases (or decreases) its tendency to play with the same partner if the payoff received after playing with that partner is higher (or lower) than the agent's own previous payoff. Our simulation results show that a network of connections emerges over time between agents, and that this network amplifies the mutual cooperation between agents.
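To make the interplay of the levers concrete, the sketch below runs one simplified SAL+SAT cycle for a pair of agents that has already been selected to play (the pairing rule itself is spelled out and sketched in Appendix A). The dictionary-based agent state, the helper names, and the payoff values R = 1, S = P = 0, T = 1 + g are illustrative assumptions; the trust and connection levers are read but not updated here, so this is a sketch of the decision flow rather than the authors' full implementation.

```python
import random

def delta(p_now, p_prev, chi=1.0):
    """Eq. (1): chi * (p_now - p_prev) / (p_now + p_prev), with 0/0 taken as 0."""
    s = p_now + p_prev
    return 0.0 if s == 0 else chi * (p_now - p_prev) / s

def pd_payoffs(act_i, act_j, g=0.5):
    """Payoffs for actions 'C'/'D' with R = 1, S = P = 0, T = 1 + g (inferred from the text)."""
    table = {('C', 'C'): (1.0, 1.0), ('C', 'D'): (0.0, 1.0 + g),
             ('D', 'C'): (1.0 + g, 0.0), ('D', 'D'): (0.0, 0.0)}
    return table[(act_i, act_j)]

def sal_choice(agent, partner):
    """SAL: draw C or D from the cumulative C/D tendencies kept for this partner."""
    c, d = agent['coop'][partner]
    p_c = c / (c + d) if (c + d) > 0 else 0.5
    return 'C' if random.random() < p_c else 'D'

def sat_final_action(agent, partner, own_action, partner_action):
    """SAT: with the trust propensity, execute the partner's SAL choice instead of one's own."""
    t, n = agent['trust'][partner]
    p_t = t / (t + n) if (t + n) > 0 else 0.0
    return partner_action if random.random() < p_t else own_action

def play_cycle(agents, i, j, g=0.5, chi=1.0):
    """One simplified SAL+SAT cycle for an already-paired (i, j); only the C/D lever is updated."""
    a_i, a_j = sal_choice(agents[i], j), sal_choice(agents[j], i)
    act_i = sat_final_action(agents[i], j, a_i, a_j)
    act_j = sat_final_action(agents[j], i, a_j, a_i)
    pay_i, pay_j = pd_payoffs(act_i, act_j, g)
    for me, other, act, pay in ((i, j, act_i, pay_i), (j, i, act_j, pay_j)):
        d = delta(pay, agents[me]['last_payoff'], chi)     # compare with the previous payoff
        idx = 0 if act == 'C' else 1                       # reinforce the action actually taken
        lever = agents[me]['coop'][other]
        lever[idx] = min(1000.0, max(0.0, lever[idx] + d))
        agents[me]['last_payoff'] = pay
```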

IV Simulation Results

We studied a 20-agent system and set the parameters for the calculations as follows: initially all the agents were defectors, had a payoff of zero, a propensity of 0.9 to remain a defector, a propensity of 0 to trust the partner's decision, and an equal propensity of 1/(20−1) to connect with each of the other agents. We set the sensitivity coefficient χ to a fixed positive value. The parameter χ controls the magnitude of the changes of the cumulative tendencies at each update; we set the maximum and minimum of the cumulative tendencies to 1000 and 0, respectively. As a boundary condition, if an updated cumulative tendency goes above 1000 (or below 0), it is set back to 1000 (or 0). A smaller χ, relative to the maximum value of the cumulative tendency, slows down the learning process, but it does not qualitatively change the behavior; it merely rescales the dynamical properties of the system. The simulation times are picked to be sufficiently long to show the asymptotic behavior of the systems.
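A minimal sketch of these initial conditions, using the same dictionary layout as the cycle sketch in Section III; only the initial propensities (0.9 to defect, 0 to trust, 1/(20−1) to connect) are taken from the text, while the absolute scale of the tendency values and the variable names are illustrative assumptions.

```python
def init_agents(n=20):
    """Initial state: all defectors, zero payoff, propensity 0.9 to remain a defector,
    zero trust, and a uniform 1/(n-1) propensity to connect with every other agent.
    Only the ratios of the tendency pairs come from the text; their absolute scale
    (and the 0-1000 cap mentioned there) is an illustrative choice."""
    agents = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        agents.append({
            'last_payoff': 0.0,
            'coop': {j: [0.1, 0.9] for j in others},    # [C tendency, D tendency]
            'trust': {j: [0.0, 1.0] for j in others},   # [trust, do-not-trust]
            'connect': {j: 1.0 for j in others},        # uniform, so each propensity is 1/(n-1)
        })
    return agents
```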

The payoff matrix used in the simulations is shown in Table 2, as suggested by Gintis gintis2009bounds: R = 1, S = P = 0, and T = 1 + g. So, the maximum possible value of T is 2. We selected a large value of the temptation to cheat g (unless otherwise explicitly mentioned), which provides a strong incentive to defect. This simple payoff matrix has the spirit of the PDG and allows us to emphasize the main properties of the SA model.

                 Player j: C     Player j: D
Player i: C      (1, 1)          (0, 1+g)
Player i: D      (1+g, 0)        (0, 0)
Table 2: The payoffs of the PDG used in the simulations. The first value of each pair is the payoff of agent i and the second value is the payoff of agent j.

IV.1 Emergence of mutual cooperation by SAL

Figure 3: Each curve shows the time evolution of the Ratio of Mutual Cooperation (RMC) for 20 agents that used SAL and played the PDG, with five values of the temptation to cheat g corresponding to the black, red, blue, green and purple curves, respectively.
Figure 4: The curves show the time evolution of the propensity of agent 1 to play C with agent 2 (red curve) and the propensity of agent 2 to play C with agent 1 (black curve). Agents 1 and 2 are among 20 agents using SAL to update their decisions, for two different values of the temptation to cheat g (left and right panels).

In this section we demonstrate that the learning mechanism of SAL attracts agents playing the PDG to a state of mutual cooperation. Figure 3 shows the time evolution of the RMC in simulations where the 20 agents, randomly partnered in each cycle, used only the SAL mechanism to update their decisions. The RMC increases from zero and asymptotically saturates to a value that depends on the value of the temptation to cheat g. The smaller the value of g, the higher the saturated value and the greater the size of the collective cooperation. Using SAL, each agent learns, through social interactions with the other agents, to modify its tendency to play C or D. These agents are connected through their payoffs: each agent compares its recent payoff with its previous one, which, with high probability, was earned by playing with a different agent. This mutual interaction between the selfish agents leads them to form an intelligent group that learns the advantage of mutual cooperation in a freely connected environment whose dynamics are represented by the SA.

To explain how the collective cooperation emerges, we looked into the evolution of the propensities to cooperate between two selected agents (among the 20). Figure 4 depicts the time evolution of the propensity of two agents to pick cooperation (C) in the PDG: the propensity of agent 1 to cooperate with agent 2 (drawn in red), and the propensity of agent 2 to cooperate with agent 1 (drawn in black). Generally, we observe a high correlation between the propensities to cooperate of the two agents.

The left and right panels present these propensities for two different values of the temptation to defect g; the correlation between the propensities of the two agents is 0.65 in one case and 0.84 in the other. The correlation between the agents is the result of the learning process between them. Let us assume g = 0.5 and that agent 1 played C and agent 2 played D at time t, which results in payoffs of 0 and 1.5 for them, respectively. Comparing its payoff with its previous payoff, earned playing with another agent (= 1.5, 0, 1 or 0), agent 1 would change its cumulative tendency to play C the next time it is randomly paired with player 2 by −χ, 0, −χ, or 0, respectively. This means that agent 1 reacts to the defective behavior of agent 2 by tending to behave as D, and consequently agent 2 would not continue to enjoy the advantage of a cooperative environment. On the other hand, agent 2 would change its cumulative tendency to play D with agent 1 by 0, χ, 0.2χ, or χ. This means agent 2 would like to play D the next time it pairs with agent 1. So, both agents learn to play D with one another, leading to the coordination state DD, where both get a payoff of 0. Such pairs compare this payoff (= 0) with their previous payoff, earned playing with another agent, and would change their cumulative tendency to play D by −χ, 0, −χ, or 0, which shifts their future decisions toward the coordination state CC. Once there, these agents would change their tendency to play C by −0.2χ, χ, 0, or χ, which favors their staying as CC toward one another. However, because of the occasional negative change (here −0.2χ; its magnitude depends on the value of g) in the cumulative tendency to play C, there is always a chance for agents to play D for a while, before they are pulled back by other agents to behave as C. This creates a dynamic equilibrium between the CC and DD states and defines the level of emergent mutual cooperation observed in Figure 3.

Thus, the dual dynamic interaction, based on self-interest, between each agent and its environment causes the emergence of mutual cooperation between the agents who play the PDG and use SAL to update their decisions. Note that in human experiments, learning only from one's own outcomes, without awareness of the partners' outcomes, did not lead to mutual cooperation MartinGonJuvLeb2014, as the rational agents of the SA can achieve. A high correlation between the propensities of pairs of agents using SAL means high coordination between their decisions, which can also occur if the "Trust" mechanism is active between the agents. Trust lowers the intermediate pairings and leads the agents to coordinate and converge on the same decision. In the next section we show that combining SAL with SAT amplifies the RMC between the agents. Our current experimental work Gonzalezetalinprep also confirms that adding the trust mechanism in decision making experiments with human pairs helped them to realize the advantage of mutual cooperation.
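The bookkeeping in this walkthrough can be checked mechanically. The short script below recomputes the tendency changes for the g = 0.5 example with the reconstructed Eq. (1); the candidate previous payoffs are the four values quoted above, and chi is set to 1 so the outputs are in units of chi.

```python
def delta(p_now, p_prev, chi=1.0):
    """Eq. (1): chi * (p_now - p_prev) / (p_now + p_prev), with 0/0 taken as 0."""
    s = p_now + p_prev
    return 0.0 if s == 0 else chi * (p_now - p_prev) / s

previous = [1.5, 0.0, 1.0, 0.0]            # possible earlier payoffs quoted in the text
print([delta(0.0, p) for p in previous])   # agent 1 after C vs D (payoff 0):   [-1.0, 0.0, -1.0, 0.0]
print([delta(1.5, p) for p in previous])   # agent 2 after D vs C (payoff 1.5): [0.0, 1.0, 0.2, 1.0]
print([delta(1.0, p) for p in previous])   # either agent after CC (payoff 1):  [-0.2, 1.0, 0.0, 1.0]
```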

IV.2 Enhancement of Mutual Cooperation by SAT; Trust as a Dynamic Decision

Figure 5: Each curve shows the time evolution of the Ratio of Mutual Cooperation (RMC) for 20 agents using SALT and playing the PDG, with five values of the temptation to cheat g corresponding to the black, red, blue, green, and purple curves, respectively.
Figure 6: The solid curves show the time evolution of the ratios of CC and DD pairs for agents that played the PDG and used SALT to make decisions. The dashed curves show the corresponding ratios for agents that used SAL to make decisions.

Figure 5 shows the enhancing effect on the emergence of collective cooperation when each agent is allowed to decide whether or not to "Trust", that is, to rely on, the decision made by the paired agent (SAT).

The SAT mechanism also decreases the time it takes for agents to realize the benefit of mutual cooperation. In Figure 6 we compare the ratio of CC partners in a group of 20 agents playing the PDG when using the SAL model (dashed lines) and when using the SALT model (solid lines). It is apparent that, because of the trust mechanism, agents coordinate more often than they do without it and consequently avoid the formation of CD pairs. We also plot the ratio of DD pairs for the two systems. These curves show that SALT agents learn to select CC pairs over DD pairs more readily than do SAL agents, thereby pushing the RMC up to approximately 0.9. The average correlation between the propensities of two agents to play C with one another is about 0.92, which is a sign of high coordination. We highlight the difference between trust as used in the literature and our dynamic trust model in Appendix C.

IV.3 Emergence of network reciprocity from SALC and SALTC

By activating the ability of the agents to make decisions about the social connections that are beneficial to themselves (SAC), we expect an even larger and faster increase in collective cooperation. The Connection mechanism allows each agent to select a partner that has helped the agent to increase its payoff with respect to its previous payoff.

We demonstrate the increase in the RMC for a model without the Trust mechanism (SALC) and a model with the Trust mechanism (SALTC). Figure 7 shows the increase in the RMC of the agents using SALC (left panel) and using SALTC (right panel). When comparing the model behavior to that of the agents without the connection mechanism (paired randomly) in Figure 3, we observe that the dependence on the temptation to cheat g is weaker. For example, at the end of the simulations the average RMC for the agents using SALC is about 0.84, 0.80, 0.75, 0.65 and 0.62 for the five values of g, respectively, whereas for the agents using SAL the average RMC for the same conditions is about 0.76, 0.64, 0.58, 0.52 and 0.45, respectively. On the right panel of Figure 7, we observe that using the Learning, Trust, and Connection mechanisms in conjunction (SALTC), the level of collective cooperation is the highest, with the least dependence on the temptation to cheat.

To show that reciprocity emerged between the agents using the SAC mechanism, we plotted in Figure 8 the propensities of making connections between a typical agent (agent 1) and the other 19 agents, where agents used SALC (left panel) or SALTC (right panel) to update their decisions. In both cases, these figures show that agent 1 developed a preferential partner and learned to play most of the time with one particular agent among the others.

Figure 7: Each curve shows the time evolution of the Ratio of Mutual Cooperation (RMC) for 20 agents that used SALC (left panel) or SALTC (right panel) and played the PDG, with five values of the temptation to cheat g corresponding to the black, red, blue, green and purple curves, respectively.
Figure 8: Left and right panels show the time evolution of the propensity of pairings between agent 1 and the other 19 agents when agents used SALC or SALTC, respectively, with a fixed temptation to cheat g.

The manner in which the network develops over time is schematically depicted in Figure 9. This figure shows the recorded connection propensities between the 20 agents at three time periods. The intensity of the lines between pairs shows the magnitude of the propensity of one agent to connect to the other, and the direction indicates which agent holds that propensity and toward which agent it is directed.

Figure 9: Emergence of reciprocity from the dynamics of the network. From left to right, the panels show snapshots of the propensities of connections between 20 agents playing the PDG and updating their decisions based on the SA, at three successive times. The intensity of the lines between pairs represents the magnitude of the propensity of one agent toward another, and the direction indicates which agent holds that propensity and toward which agent it is directed. The colors of the nodes represent the state of the agents at that time, red as defector and green as cooperator.

The colors of the nodes represent the state of the agents at that time, red as defector and green as cooperator. The figure shows the connections among the agents at an early time (left panel), passing through an intermediate state (middle panel), and after reaching dynamic equilibrium (right panel). The preferential connections forming the dynamic network emerge over time and are based on the perception of the benefit that an agent receives from the other agents with whom it interacts. Some connections become stronger whereas others become weaker, according to the SAC mechanism. In mahmoodi19 we showed that the SA creates a complex temporal network with an inverse power law (IPL) probability density function (PDF) of the propensity of making connections between agents. The IPL PDF is very different from a Poisson PDF, the latter having an exponentially diminishing probability at large propensity values. Consequently, the propensity of forming a link between partners is much greater for the IPL PDF, and the SA forms a much more complex network than a Poisson PDF would.

In the next section we study the adaptability of social systems ruled by different steps of the SA. We disrupt these systems by changing some agents into zealots and tracking the changes among the remaining agents in response to the zealots.

IV.4 Complex adaptation of a selfish organization

To investigate the dynamics of the agents we disrupt the stable behavior pattern emerging from the social group by fixing a fraction of the agents to be zealots and calculating the response of the remaining agents. A zealot is an agent whose properties are: zero tendency to be a Cooperator; zero trust toward other agents; and a uniform tendency to play the PDG with the other agents. This divides the system into two subsystems: the fraction of agents that continue to evolve based on SAL, SALT, SALC, or SALTC (the evolving subsystem) and the agents turned into zealots (the zealot subsystem). We investigate how the remaining agents of the evolving subsystem adapt to the new environment under the various modes of learning. The degree to which these agents can sustain their mutual cooperation in the presence of the zealots is a measure of their resilience; its complement is a measure of their fragility, a fundamental property of complex networks subject to perturbation.
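A minimal sketch of the perturbation itself, in the same illustrative agent layout used earlier; which agents are switched, and how a zealot's levers are frozen, are assumptions consistent with the description above rather than the authors' code.

```python
def make_zealots(agents, zealot_ids):
    """Turn the listed agents into zealots: zero tendency to cooperate, zero trust,
    and a uniform tendency to play with every other agent. The 'frozen' flag marks
    them so the update step can skip them, i.e. they defect forever regardless of payoff."""
    for i in zealot_ids:
        others = list(agents[i]['coop'])
        agents[i]['coop'] = {j: [0.0, 1.0] for j in others}    # always defect
        agents[i]['trust'] = {j: [0.0, 1.0] for j in others}   # never trust
        agents[i]['connect'] = {j: 1.0 for j in others}        # uniform pairing tendency
        agents[i]['frozen'] = True
    return agents
```

The cycle sketch of Section III would then check the frozen flag before applying any tendency update, so that only the evolving subsystem continues to learn.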

To track the behavioral changes of the evolving agents in the presence of the zealots, we study the chance of an event happening within a given cycle. This could be the chance of finding the pairings Cooperation-Cooperation, Trust-Trust, etc. In previous sections we used the ratio of events, which was useful because there was no perturbation in the systems to detect. To evaluate the chance of an event occurring we used ensemble averages over many realizations of each simulation.

The blue and orange curves in the panels of Figure 10 show the Chance of Mutual Cooperation (CMC) within the full group of 20 agents and within the subsystem of the 10 agents who were not exchanged with zealots after the switching time. The top-left panel shows these quantities for the agents that used SAL to update their decisions, before and after 10 of them were replaced by zealots at the switching time. There is a drop in the CMC because of the inevitable pairings between the SAL agents and the zealots. The CMC among the 10 agents who were not switched to zealots increased after their interactions with the zealots, despite the overall decrease of the CMC in the system. In other words, the agents of the unswitched subgroup improved their mutual cooperation from about 0.1 to about 0.25. This is because when an evolving agent is randomly paired with a zealot it, with high probability, ends up as a sucker and receives the minimum payoff of zero. The agent uses this payoff as the measure on which to base its next decision. When an evolving agent is then paired with another agent of the same subsystem, with high chance (because of their past experience of playing together) it plays C, and because of the low payoff it received previously playing with a zealot, it still increases its tendency to play C with this agent.

The top-right panel in Figure 10 depicts the CMC for the whole group and for the unswitched subsystem of agents that used SALT to make decisions, before and after the switching time. Although SALT greatly increased the CMC level, it failed to sustain that level due to the influence of the zealots. After the switching time only the agents of the unswitched subsystem contributed to the mutual cooperation.

The bottom-left panel in the figure depicts the advantage of adding SAC to the SAL mechanism (SALC) in sustaining the CMC of the system after the switching time. This panel shows that after the switching time the CMC of the unswitched subsystem increases, as the only contribution to mutual cooperation, and saturates at about 0.85. The bottom-right panel of the figure shows the CMC for agents that use SALTC (the entire algorithm) to update their decisions. The panel shows a very high resilience of the system even after this massive number of SA agents switched to zealots at the switching time. Similar to the system governed by SALC, there is a drop in the CMC, but the agents sustain the level of CMC at about 0.95. This is because the SA agents of this system learned to disconnect themselves from the zealots using the SAC mechanism, which greatly increased the robustness of the emerging mutual cooperation. Notice that after the switching time all the mutual cooperation occurs within the unswitched subsystem.

Figure 10: The blue curves show the time evolution of the Chance of Mutual Cooperation (CMC) (the chance of CC occurring among the 20 agents at time t) and the orange curves show the CMC among the 10 agents who were not forced to become zealots. In the top-left panel the agents used SAL and in the top-right panel SALT to update their decisions, with 10 of the agents forced to become zealots at the switching time. In the bottom-left panel the agents used SALC and in the bottom-right panel SALTC to update their decisions, again with 10 of the agents forced to become zealots at the switching time.

Figure 11 summarizes the influence that a given fraction of zealots within the group has under the different learning mechanisms. The figure shows the saturated value of the CMC of the agents, using different steps of the SA for decision making, after a fraction f of the agents turned into zealots at the switching time. The more adaptable the system, the higher the saturation value of the CMC that can be achieved by the remaining fraction (1−f) of agents. This provides us with a measure of the resilience of the system. The solid curve is the saturated CMC for the agents using SAL as a function of the fraction f of the agents who switched to zealots at the switching time. For f < 0.4 (8 zealots) the system retains a CMC slightly above 0.4. Beyond this fraction of zealots there is an exponential decay of the CMC. In this situation the agents learn to modify their decisions (C or D) depending on whether their partner is a zealot or another agent using SAL to make its decisions.

The dashed curve in the figure is the saturated value of the CMC of the agents using SALT, after the switching time. Introducing the SAT learning improves the resilience of the system, for small fractions of zealots, above that of SAL alone. However, beyond a certain fraction the calculation converges with the earlier one, indicating that the additional learning mechanism of trust ceases to be of value beyond a specific level of zealotry.

The two top curves of Figure 11 are the saturated CMC for the agents using SALC (dotted curve) and SALTC (dot-dash curve) to make decisions, after the switching time. These two curves show a substantial improvement in the system's resilience to an increase in the fraction of zealots. Said differently, the robustness of the system to perturbations is insensitive to the number of zealots, that is, until the zealots constitute the majority of the group membership. The SAC learning in the decision making gives the SA agents the ability to avoid pairing with the zealots. Once the fraction of zealots exceeds 0.6 the resilience drops rapidly. The saturated CMC is highest for SALTC, where all three learning levels (decision, trust, connection) are active, giving the SA agents maximum adaptability, that is, the flexibility to change in order to maximize their self-interest. This realization of self-interest improves the payoff of the whole system as well.

Notice that as the number of zealots increases, it takes longer for agents to agree to play, but when they do play, they do so with a high propensity to cooperate with each other. In other words, increasing the number of zealots slows down the response of the system.

Figure 11: Effect of the fraction of agents turned into zealots at the switching time (x axis) on the saturated value of the Chance of Mutual Cooperation (CMC) of the remaining agents, which evolved using different steps of the SA: SAL (solid curve), SALT (dashed curve), SALC (dotted curve) and SALTC (dot-dash curve).

In Appendix B we show complementary analyses of the evolution of the propensity of an agent to interact with other agents after zealots are introduced, as well as the snapshots of the propensities of connections among the 20 agents.

V Discussion and Implications of Results

The Selfish Algorithm provides a demonstration that the benefit to an individual within a social group need not be achieved at the cost of diminishing the overall benefit to the group. Each individual within the group may act out of self-interest, but the collective effect can be quite different from what is obtained by the naive linear logical extrapolation typically made in tragedy-of-the-commons arguments lloyd33. In fact, the SA demonstrates how robust cooperation can emerge out of the selfishness of the individual members of a system, improving the performance of each agent as well as that of the overall group and thereby resolving the altruism paradox.

A collective group intelligence emerges from the SA calculations, one based on the self-interest of the individual members of the group. This collective intelligence grows spontaneously in the absence of explicit awareness on the part of individuals of the outcomes of the behavior of other members of the group. Perhaps more importantly, the collective intelligence develops without assuming a pre-existing network structure to support the collective behavior, and without assuming knowledge of information about the actions or outcomes of other members of the society.

As demonstrated in this research, the collective intelligence entailed by the SA unfolds as a consequence of three learning mechanisms:

SAL supports reinforced learning based on the selfishness of agents who play the PDG. An agent chooses C or D in its play with other agents; agents influence one another through their payoffs, resulting in the emergence of mutual cooperation that reveals a collective intelligence.

SAT tunes the propensity of an agent to imitate the strategy of its partner. The strength of the tuning depends on whether trusting has been beneficial or detrimental to the agent itself. The role of SAT is to assist agents in achieving coordination, which results in a reduction in the formation of CD pairs. This reduction makes it easier for the system to learn to select CC over DD over time.

SAC tunes the propensity of an agent to connect to other agents based on how they have improved its own benefit.

We have studied the advantage of each of the learning mechanisms in creating mutual cooperation and thereby facilitating the growth of group intelligence. We also studied the adaptation and resilience of the emergent intelligence by replacing some of the agents with zealots and examining the system's response. The control of the dynamics in the SA is internal and, guided by the self-interest of the agents, spontaneously directs the system to robust self-organization. The most robust systems use the SAC mechanism to make decisions, which increases the complexity of the system by forming a dynamic complex network with a high propensity for partnering, thereby providing the agents with the ability to learn to shun the zealots. These complex temporal pairings that emerge from the SA can be seen as a source of the network reciprocity introduced by Nowak and May nowak1992evolutionary.

One advantage of the SA is its simplicity, which enables us to introduce additional behavioral mechanisms into the dynamics, such as deception or memory. This makes it possible to create algorithms for different self-organizing systems, such as those used in voting, or systems in which leadership emerges anywhere and everywhere as the need for it arises, leading to socio-technical systems that achieve optimum leadership activities in organizations and make teams more effective at achieving their goals. The model also has promise in the information and communication technology (ICT) field of sensor and sensor-array design. Future research will explore applications in detecting deception in human-human or human-machine interactions, in anticipating threats, and in providing leaders with near real-time advice about the emerging dynamics of a given situation.

The predictions in this work are entirely simulation-based; however, a number of them are being tested experimentally. Experimental work (paper in preparation) confirms the emergence of mutual cooperation between human pairs playing the PDG when they are given the option to trust their partner's decision, with no explicit awareness of the payoffs to other agents. Additional experiments testing our hypothesis that the collective cooperation that emerges from the SA can survive perturbations are in progress.

Acknowledgments

This research was supported by the Army Research Office, Network Science Program, Award Number: W911NF1710431.

References

  • (1) Nowak M, Sigmund K, A strategy of win-stay, lose-shift that outperforms tit-for-tat in the Prisoner’s Dilemma game, Nature, 364, 56–58 (1993).
  • (2) Gonzalez C, Ben-Asher N, Martin JM, Dutt V, A Cognitive Model of Dynamic Cooperation With Varied Interdependency Information, Cognitive Science, 39, 457-495 (2015).
  • (3) Mahmoodi K, West BJ, Grigolini P, Self-organizing complex networks: individual versus global rules, Frontiers in physiology, 8, 478 (2017).
  • (4) Lloyd WF, Two Lectures on the Checks to Population: Delivered Before the University of Oxford, in Michaelmas Term 1832, JH Parker (1833).
  • (5) Ariely D, Predictably irrational: The hidden forces that shape our decisions, HarperCollins (2008).
  • (6) Kahneman D, Thinking, fast and slow, Macmillan (2011).
  • (7) Darwin C, The Origin of the Species and the Descent of Man, Modern Library, NY (1871).
  • (8) Wilson DS, Wilson EO, Rethinking the theoretical foundation of sociobiology, The Quarterly Review of Biology, 4, 327–348 (2007).
  • (9) West BJ, Mahmoodi K, Grigolini P, Empirical Paradox, Complexity Thinking and Generating New Kinds of Knowledge, Cambridge Scholars Publishing (2019).
  • (10) Rand DG, Nowak MA, Human cooperation, Trends in Cognitive Sciences, 17, 413–425 (2013).
  • (11) Rapoport A, Chammah AM, Prisoner's Dilemma: A Study in Conflict and Cooperation, University of Michigan Press (1965).
  • (12) Moisan F, ten Brincke R, Murphy RO, Gonzalez C, Not all Prisoner’s Dilemma games are equal: Incentives, social preferences, and cooperation, Decision, 5, 306–322 (2018).
  • (13) Nowak MA, May RM, Evolutionary games and spatial chaos, Nature, 359, 826 (1992).
  • (14) Martin JM, Gonzalez C, Juvina I, Lebiere C, A Description-Experience Gap in Social Interactions: Information about Interdependence and Its Effects on Cooperation, Journal of Behavioral Decision Making, 27, 349-362 (2014).
  • (15) Fischbacher U, Gächter S, Fehr E, Are people conditionally cooperative? Evidence from a public goods experiment, Economics Letters, 71, 397–404 (2001).
  • (16) Barabási AL, Albert R, Emergence of scaling in random networks, Science, 286, 509–512 (1999).
  • (17) Tomassini M, Pestelacci E, Luthi L, Social dilemmas and cooperation in complex networks, International Journal of Modern Physics C, 18, 1173–1185 (2007).
  • (18) Adami C, Schossau J, Hintze A, Evolutionary game theory using agent-based methods, Physics of life reviews, 19, 1–26 (2016).
  • (19) Couzin I, Collective minds, Nature, 445, 715 (2007).
  • (20) Li C, Liu F, Li P, Ising model of user behavior decision in network rumor propagation, Discrete Dynamics in Nature and Society, 2018, (2018).
  • (21) Mahmoodi K, Gonzalez C, Emergence of collective cooperation and networks from selfish-trust and selfish-connections, Cognitive Science (The 41st Annual Meeting of the Cognitive Science Society), https://mindmodeling.org/cogsci2019/papers/0392/0392.pdf, 2254–2260 (2019).
  • (22) Gintis H, The Bounds of Reason: Game Theory and the Unification of the Behavioral Sciences-Revised Edition, Princeton University Press (2014).
  • (23) Gonzalez C, Mahmoodi K, Singh K, Emergence of Collective Cooperation in the Absence of Interdependency Information, (in preparation) 2020.
  • (24) West BJ, Nature’s Patterns and the Fractional Calculus, Walter de Gruyter GmbH & Co KG (2017).
  • (25) Bettencourt LMA, The origins of scaling in cities, Science, 340, 1438–1441 (2013).

Appendix A: Details of the algorithm

The following steps are executed during a single cycle, initiated at time $t$, of the algorithm depicted in Figure 12.

1. Selfish algorithm - connection (SAC)

Agents $i$ and $j$ are picked at random. At time $t$, agent $i$ has a propensity $\Pi_{i,j}(t)$ [Eq. (2)] to play with agent $j$; this propensity falls in the interval $(0,1)$ and is determined by $C_{i,j}(t)$, the cumulative tendency for agent $i$ to select agent $j$ to play at time $t$. This cumulative tendency changes at step 7, according to the last two payoffs received by agent $i$.

At the same time, agent $j$ has a propensity $\Pi_{j,i}(t)$, given by Eq. (2) with the indices interchanged, to play with agent $i$, which also falls in the interval $(0,1)$. Agents $i$ and $j$ partner if two numbers $r_{1}$ and $r_{2}$, chosen at random from the interval $(0,1)$, satisfy the inequalities $r_{1}<\Pi_{i,j}(t)$ and $r_{2}<\Pi_{j,i}(t)$. If both inequalities are not satisfied, another two agents are selected at random, and this is repeated until the inequalities are satisfied and the two agents "agree" to play a PDG. In the flowchart, every call for a random number returns a single uniformly distributed random number in the interval $(0,1)$.
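As a concrete illustration, the following Python sketch implements this pairing rule. Since Eq. (2) is not reproduced above, the sketch assumes the simplest mapping consistent with the text, namely that the propensity of $i$ toward $j$ is the bounded cumulative tendency itself; the names pick_pair and C are illustrative, not the notation of the paper.

import random

def pick_pair(C, n_agents):
    """Propose random pairs (i, j) until both agents 'agree' to play.

    C is an n_agents x n_agents table of cumulative tendencies, assumed here
    to be kept inside (0, 1); the propensity of i to play with j is taken to
    be C[i][j] (an assumption -- Eq. (2) of the paper defines the actual map).
    """
    while True:
        i, j = random.sample(range(n_agents), 2)   # two distinct agents
        r1, r2 = random.random(), random.random()  # uniform numbers in (0, 1)
        if r1 < C[i][j] and r2 < C[j][i]:          # both inequalities hold
            return i, j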

2. Selfish algorithm - learning (SAL)

Agent $i$, playing with agent $j$, initially selects an action, $C$ (cooperate) or $D$ (defect), using SAL. The agent has the propensity $\Pi^{C}_{i,j}(t)$ [Eq. (3)] to pick $C$ and the propensity $\Pi^{D}_{i,j}(t)$ [Eq. (4)] to pick $D$ as its next potential decision. Note that the sum of these two terms is one. The quantities $A^{C}_{i,j}(t)$ and $A^{D}_{i,j}(t)$ are the cumulative tendencies for agent $i$ playing with agent $j$, at time $t$, for the choice $C$ or $D$, respectively. These cumulative tendencies change at step 6, based on the last two payoffs of agent $i$.

To decide on an action, a random number $r$ is drawn from the interval $(0,1)$. If $r<\Pi^{C}_{i,j}(t)$, the next decision of agent $i$ is $C$; otherwise it is $D$. The same reasoning applies to agent $j$.
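A minimal sketch of this action-selection step is given below. The normalized-ratio form of the propensities is an assumption (Eqs. (3) and (4) are not reproduced above); it is chosen only because it makes the two propensities sum to one, as stated in the text. The names choose_action, A_C and A_D are illustrative.

import random

def choose_action(A_C, A_D):
    """Pick 'C' or 'D' for one (i, j) pair from its two cumulative tendencies.

    Assumed form: propensity to cooperate = A_C / (A_C + A_D), with A_C and
    A_D positive, so that the two propensities sum to one.
    """
    p_cooperate = A_C / (A_C + A_D)
    return 'C' if random.random() < p_cooperate else 'D'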

3. Selfish algorithm - trust (SAT)

Instead of executing the decision determined by SAL in step 2, agent $i$ has a propensity to trust the decision made by agent $j$, which was also determined using SAL in step 2. The propensity for agent $i$ to trust the decision of agent $j$ is $\Pi^{T}_{i,j}(t)$ [Eq. (5)]. Here again, if a random number $r$, chosen from the interval $(0,1)$, is less than $\Pi^{T}_{i,j}(t)$, then trust is established. The quantities $T_{i,j}(t)$ and $N_{i,j}(t)$ are the cumulative tendencies for agent $i$ to execute the choice of agent $j$ at time $t$, or to not rely on trust and to execute its own choice based on SAL, respectively. These cumulative tendencies are updated in step 5, based on the last two payoffs of agent $i$.
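The trust step can be sketched in the same spirit; here T and N stand for the two cumulative tendencies (to trust and not to trust), and the normalized-ratio form of Eq. (5) is again an assumption rather than the paper's formula.

import random

def trusts_partner(T, N):
    """Return True if agent i executes agent j's SAL decision (trust),
    False if agent i falls back on its own SAL choice.

    Assumed form of Eq. (5): propensity to trust = T / (T + N), T and N positive.
    """
    return random.random() < T / (T + N)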

4. Evaluating Own Payoffs

After agents $i$ and $j$ have executed their actions, $C$ or $D$, their respective payoffs are evaluated using the payoff matrix of the PDG.

5. Update of the cumulative tendencies of SAT

If agent $i$ used SAT when playing with agent $j$, then the cumulative tendencies $T_{i,j}(t)$ and $N_{i,j}(t)$ are updated, according to the last two payoffs of agent $i$, for the next time agents $i$ and $j$ partner. Similarly, if agent $i$ did not use SAT but relied on its own SAL decision, then $T_{i,j}(t)$ and $N_{i,j}(t)$ are again updated, according to the last two payoffs of agent $i$, for the next time agents $i$ and $j$ partner. The same updates are applied for agent $j$.

6. Update of the cumulative tendencies of SAL

Step 6 is only active if the agent did not use SAT at step 3. If agent $i$ played $C$ with agent $j$, then the cumulative tendencies $A^{C}_{i,j}(t)$ and $A^{D}_{i,j}(t)$ are updated, according to the last two payoffs of agent $i$, for the next time agents $i$ and $j$ partner. If agent $i$ played $D$ with agent $j$, the corresponding updates are applied, again for the next time agents $i$ and $j$ partner. The same updates are applied for agent $j$.

7. Update of the cumulative tendency of SAC

In this step the cumulative tendency to play with a specific agent changes. If agent $i$ played with agent $j$, then the cumulative tendency $C_{i,j}(t)$ of pairing with agent $j$ is updated according to the last two payoffs received by agent $i$ (see step 1). The same happens for agent $j$.

As a boundary condition for steps 5, 6 and 7, if an updated cumulative tendency exceeds its defined maximum (or falls below its defined minimum), it is set back to the maximum (minimum) value.
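All three update steps follow the same pattern: the cumulative tendency behind the decision just taken is nudged up or down according to agent $i$'s last two payoffs and then clipped to its allowed range. The sketch below shows that pattern; the increment chi, the bounds, and the direction of the payoff comparison (reinforce when the latest payoff is at least as large as the previous one) are assumptions, not the paper's exact rule.

def update_tendency(value, last_payoff, previous_payoff,
                    chi=0.05, lower=0.0, upper=1.0):
    """Reinforce or weaken one cumulative tendency (for SAT, SAL or SAC).

    Assumed rule: if agent i's most recent payoff is at least as large as the
    one before it, the tendency that produced the decision increases by chi,
    otherwise it decreases by chi.  The result is clipped to [lower, upper],
    which plays the role of the boundary condition of steps 5-7.
    """
    if last_payoff >= previous_payoff:
        value += chi
    else:
        value -= chi
    return min(upper, max(lower, value))   # boundary condition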

Figure 12: Flowchart of the SA. The letters "Y" and "N" stand for "Yes" and "No", respectively.

Appendix B: Effect of zealots on the dynamics of the SA

Figure 13 shows the time evolution of the propensity of agent 1 to pair with each of the other 19 agents, in a system of 20 agents, for two SA decision mechanisms (left and right panels, respectively), up to and beyond the time at which 10 of the 19 other agents turned into zealots. The figure shows that, before that time, agent 1 mostly played with the agent corresponding to the purple curve (left panel) or the green curve (right panel). After these agents switched to zealots, agent 1 decreased its propensity to pair with them and switched to the agent corresponding to the blue curve, which is not a zealot.

Figure 13: Time evolution of the propensity of agent 1 to pair with each of the other 19 agents, for the two decision mechanisms of the left and right panels, before and after 10 of the 19 other agents turned into zealots.
Figure 14: Complex adaptation in the dynamic network that emerges from the SA. From left to right, the panels show snapshots of the propensities of connection between the 20 agents at three successive times. The agents played the PDG and updated their decisions with the SA, except agent 16 (red node in the left panel), which turned into a zealot and kept defecting thereafter. The intensity of a line between a pair of agents represents the magnitude of the propensity of one agent toward the other, and the direction indicates which agent the propensity belongs to and toward which agent it points. The colors of the nodes represent the states of the agents at that time: red for defector, green for cooperator.

Figure 14 shows snapshots of the propensities of connection between 20 agents, which used the SA to make decisions, before and after one of them (the red node in the left panel) turned into a zealot. The figure shows that the network is intelligent and able to isolate the zealot, in order to sustain the highest level of mutual cooperation possible.

Appendix C: Effect of trust (fixed chance of trust) on the evolution of decisions

The aim of this section is to highlight the difference between trust as used in the literature and our dynamic trust. We introduced "trust" and "not trust" as a "decision" (whereas trust, in its usual sense, means not making a decision), and we connected trust to a reinforcement-learning mechanism to make it adaptable. To show the difference, here we determine the effect of a fixed chance of trust between the agents on the dynamics of decisions when no reinforcement learning is active, which is to say that imitation through a fixed chance of trust is the only mechanism by which a decision can change. We investigated the dynamics of the decisions made by social groups consisting of 10, 20, or 30 agents. These agents randomly partner at each time step and either retain their decision or, with a fixed chance, exchange it for their partner's decision (trusting). In the left panel of Figure 15, the average time for the agents to reach consensus is plotted against the chance of trust, i.e., the probability that an agent takes its partner's decision as its own next decision, for each of the three group sizes. The figure shows a broad, flat minimum: an optimal range of trust for which the agents reach consensus fastest; this consensus time increases with the size of the network.
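A minimal simulation of this fixed-trust baseline is sketched below: agents hold states +1 or -1, are randomly paired at every time step, and adopt their partner's previous state with a fixed probability trust; the returned consensus time is the quantity plotted in Figure 15. The function name, the even group size, and the synchronous update are illustrative choices, not taken from the paper.

import random

def time_to_consensus(n_agents, trust, max_steps=100_000):
    """Fixed-trust dynamics of Appendix C (illustrative implementation).

    At each step all agents are randomly paired; every agent keeps its state
    (+1 or -1) or, with fixed probability `trust`, copies its partner's
    previous state.  Returns the first time at which all agents agree.
    """
    state = [random.choice([-1, 1]) for _ in range(n_agents)]
    for t in range(max_steps):
        if abs(sum(state)) == n_agents:          # all +1 or all -1: consensus
            return t
        order = list(range(n_agents))
        random.shuffle(order)                    # random pairing of agents
        new_state = state[:]
        for k in range(0, n_agents - 1, 2):
            i, j = order[k], order[k + 1]
            if random.random() < trust:
                new_state[i] = state[j]          # i imitates j
            if random.random() < trust:
                new_state[j] = state[i]          # j imitates i
        state = new_state
    return max_steps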

Figure 15: Left panel: the black, red and blue curves show the average time to reach consensus (the time at which all agents are in the +1 state or all in the -1 state) versus the magnitude of the trust between them, for systems with 10, 20, and 30 agents, respectively. At each time step an agent either keeps its previous decision or imitates the decision of its partner with the fixed propensity of trust. Right panel: dependence of the average time to consensus on the number of agents N, for a 0.1 (black), 0.5 (red) and 0.9 chance of trusting the decision of the partner. The fitted power-law exponents are 2.37, 2.28 and 2.29, respectively. The curves are ensemble averages over 100 realizations.

It is interesting to note what happens to the evolution of mutual cooperation when one of the agents becomes a zealot, that is, an agent whose decision remains the same at all times: the network rapidly relaxes to the value held by the zealot. The conclusion is that this network is highly sensitive to this small perturbation, to a degree that depends on the chance of trust. In other words, a system that learns only through fixed trust is not significantly resilient (it is not robust).

The right panel of Figure 15 indicates that the average time for a group to reach consensus from a random initial state is a monotonically increasing function of the group size N. The average time to reach consensus satisfies an allometry equation west2017nature, $Y = aX^{b}$, where Y is the functionality of the system, X is a measure of system size, and the empirical parameters a and b are determined from data. Allometry relations (ARs) have only recently been applied to social phenomena bettencourt2013origins, and in those applications the scaling has always been superlinear (b > 1). Bettencourt bettencourt2013origins points out that the superlinear scaling reflects unique social characteristics with no equivalent in biology, where AR scaling is invariably sublinear (b < 1): knowledge spillover in urban settings drives the growth of functionality, so that wages, income, gross domestic product, bank deposits, as well as rates of invention, all scale superlinearly with city size.
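The exponent b of such an allometry relation can be estimated from simulated consensus times by a straight-line fit in log-log coordinates, as in the short sketch below; the data values are placeholders for illustration only, not results from the paper.

import numpy as np

# Placeholder data: group sizes N and average times to consensus T(N).
N = np.array([10, 20, 30, 40])
T = np.array([120.0, 580.0, 1450.0, 2800.0])   # illustrative values only

# Fit log T = log a + b log N; b > 1 indicates superlinear scaling.
b, log_a = np.polyfit(np.log(N), np.log(T), 1)
print(f"allometry exponent b = {b:.2f}, prefactor a = {np.exp(log_a):.2f}")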

In this paper, by contrast, the agents were allowed to learn through a variety of modalities and to find their individual optimum values of trust toward other agents. We showed that this dynamic trust, acting in concert with the other learning mechanisms, leads to robust mutual cooperation.