1 Introduction
The Moran process [14] models antagonism between two species whose critical difference in terms of adaptation is their relative fitness: a resident has relative fitness 1 and a mutant relative fitness $r > 0$. Many settings in Evolutionary Game Theory consider fitness as a measure of reproductive success; for examples see [15, 7, 3].
A generalization of the Moran process by Lieberman et al. [10] considered the situation where the reproduction of individuals depends on some given structure, i.e. a directed graph. This model gave rise to an extensive line of works in Computer Science, initiated by Mertzios et al. in [12]. In this work we further extend the model of [10] to capture the situation where, instead of one given underlying graph, each species has its own graph that determines the way it spreads its offspring. As we will show, due to the process' restrictions, only one species eventually remains in the population. Our setting is by definition an interaction between two players (species), each of whom wants to maximize the probability of occupying the whole population.
This strategic interaction is described by a 1-sum bimatrix game, where each player (resident or mutant) has all the strongly connected digraphs on $n$ nodes as her pure strategies. The resident's payoff is the extinction probability and the mutant's payoff is the fixation probability. The general question that interests us is: what are the pure Nash equilibria of this game (if any)? To gain a better understanding of the behaviour of the competing graphs, we investigate the best responses of the resident to the clique graph of the mutant.
This model and question are motivated by many interesting problems from various, seemingly unrelated scientific areas. Some of them are: idea/rumor spreading, where the probability of spreading depends on the kind of idea/rumor; computer networks, where the probability that a message/malware will cover a set of terminals depends on the message/malware; and also the spread of mutations, where the probability of a mutation occupying the whole population of cells depends on the mutation. Using the latter application as an analogue for the rest, we give the following example to elaborate on the natural meaning of this process.
Imagine a population of identical somatic resident cells (e.g. biological tissue) that carry out a specific function (e.g. an organ). The cells connect with each other in a certain way; i.e., when a cell reproduces it replaces another from a specified set of candidates, that is, the set of cells connected to it. Reproduction here is the replication of the genetic code to the descendant, i.e. the hardwired commands which determine how well the cell will adapt to its environment, what its chances of reproduction are and which candidate cells it will be able to reproduce on.
The changes in the information carried by the genetic code, i.e. mutations, give or take away survival or reproductive abilities. A bad case of mutation is a cancer cell whose genes force it to reproduce relentlessly, whereas a good one could be a cell with enhanced functionality. A mutation can affect the cell's ability to adapt to the environment, which translates to chances of reproduction, and/or change the set of candidates in the population that will pay the price for its reproduction.
Now back to our population of resident cells which, as we said, connect with each other in a particular way. After many reproductions a mutant version of such a cell shows up due to replication mistakes, environmental conditions, etc. This mutant has the ability to reproduce at a different rate, and also to be connected to a set of cells different from that of its resident version. For the sake of argument, we study the most pessimistic case: our mutant is an extremely aggressive type of cancer with increased reproduction rate and maximum unpredictability; it can replicate on any other cell, and do so faster than a resident cell. We consider the following motivating question: Supposing this single mutant will appear at some point in time on a cell chosen uniformly at random, what is the best structure (network) of our resident cells such that the probability of the mutant taking over the whole population is minimized?
The above process that we informally described captures the real-life process remarkably well. As a matter of fact, a mutation that affects the aforementioned characteristics in a real population of somatic cells occurs rarely compared to the time it needs to conquer the population or go extinct. Therefore, it is extremely rare for a second mutation to occur before the first one has reached one of those two outcomes, and this allows us to study only one type of mutant per process. In addition, apart from a different reproduction rate, a mutation can lead to a different "expansionary policy" of the cell, something that has been overlooked so far.
2 Definitions
Each of the population's individuals is represented by a label and can have one of two possible types: R (resident) and M (mutant). We denote the set of nodes by $V$, with $|V| = n$, and the set of resident (mutant) edges by $E_R$ ($E_M$). The node connections are represented by directed edges; a node $u$ has a type R (M) directed edge $(u,v)_R$ ($(u,v)_M$) towards node $v$ if and only if, when $u$ is chosen and is of type R (M), it can reproduce on $v$ with positive probability. The aforementioned components define two directed graphs: the resident graph $G_R = (V, E_R)$ and the mutant graph $G_M = (V, E_M)$. A node's type determines its fitness; residents have relative fitness 1, while mutants have relative fitness $r > 0$.
Our process works as follows: We start with the whole population as residents, except for one node which is selected uniformly at random to be mutant. We consider discrete time, and in each time-step an individual is picked with probability proportional to its fitness, and copies itself on an individual connected to it in the corresponding graph ($G_R$ or $G_M$) with probability determined by the (weight of the) connection. The probability of $u$ (given that it is chosen) reproducing on $v$ when $u$ is resident (mutant) is by definition equal to some weight $r_{uv}$ ($m_{uv}$), thus $\sum_{v} r_{uv} = \sum_{v} m_{uv} = 1$ for every $u \in V$. For $G_R$, every edge $(u,v)$ has weight $r_{uv} > 0$ if $(u,v) \in E_R$, and $r_{uv} = 0$ otherwise; similarly for $G_M$. For each graph we then define the weight matrices $W_R = [r_{uv}]_{u,v}$ and $W_M = [m_{uv}]_{u,v}$, which contain all the information of the two graphs' structure. After each time-step three outcomes can occur: (i) a node is added to the mutant set $S$, (ii) a node is deleted from $S$, or (iii) $S$ remains the same. If both graphs are strongly connected, the process ends with probability 1 when either $S = \emptyset$ (extinction) or $S = V$ (fixation). An example is shown in Figure 1.
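To make the dynamics concrete, the following is a minimal Monte Carlo sketch of one run of this two-graph process. It is an illustration only (all function and variable names are our own, not code from the cited works), assuming the weight-matrix representation described above: row $u$ of $W_R$ ($W_M$) gives the reproduction probabilities of node $u$ as a resident (mutant).

```python
import random

def run_once(W_R, W_M, r, start, rng):
    """One trajectory of the two-graph Moran process.

    W_R, W_M: row-stochastic weight matrices (lists of lists).
    Returns True on fixation (all mutants), False on extinction."""
    n = len(W_R)
    mutant = [False] * n
    mutant[start] = True
    k = 1
    while 0 < k < n:
        # pick an individual with probability proportional to its fitness
        fitness = [r if mutant[u] else 1.0 for u in range(n)]
        u = rng.choices(range(n), weights=fitness)[0]
        # it reproduces through its own graph: G_M if mutant, G_R if resident
        v = rng.choices(range(n), weights=(W_M[u] if mutant[u] else W_R[u]))[0]
        if mutant[v] != mutant[u]:
            k += 1 if mutant[u] else -1
            mutant[v] = mutant[u]
    return k == n

def estimate_fixation(W_R, W_M, r, trials=1000, seed=0):
    """Fraction of runs that fixate, with the initial mutant placed u.a.r."""
    rng = random.Random(seed)
    n = len(W_R)
    return sum(run_once(W_R, W_M, r, rng.randrange(n), rng)
               for _ in range(trials)) / trials
```

With both graphs strongly connected, every run terminates with probability 1; when both graphs are the clique, the estimate should approach the Moran fixation probability defined below.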
We denote by $f_S$ the probability of fixation given that we start with the mutant set $S$. We define the fixation probability to be $f = \frac{1}{n} \sum_{u \in V} f_{\{u\}}$ for a fixed relative fitness $r$. We also define the extinction probability to be equal to $1 - f$. In the case of only one graph $G$ (i.e. $G_R = G_M = G$), which has been the standard setting so far, the point of reference for a graph's behaviour is the fixation probability of the complete graph (called the Moran fixation probability) $\rho = \frac{1 - 1/r}{1 - 1/r^n}$. $G$ is an amplifier of selection if $r > 1$ and $f > \rho$, or $r < 1$ and $f < \rho$, because it favors advantageous mutants and discourages disadvantageous ones. $G$ is a suppressor of selection if $r > 1$ and $f < \rho$, or $r < 1$ and $f > \rho$, because it discourages advantageous mutants and favors disadvantageous ones.
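For reference, the standard Moran fixation probability $\rho = \frac{1 - 1/r}{1 - 1/r^n}$ can be evaluated as follows (a small helper of our own; the $r = 1$ case is taken as the limit $1/n$):

```python
def moran_fixation(r, n):
    """Moran fixation probability rho = (1 - 1/r) / (1 - 1/r^n) on the clique K_n."""
    if r == 1:
        return 1.0 / n  # limit of the formula as r -> 1
    return (1.0 - 1.0 / r) / (1.0 - r ** (-n))
```

For large $n$ and $r > 1$, $\rho$ approaches $1 - 1/r$, while for $r < 1$ it vanishes; this is the baseline against which amplifiers and suppressors are judged.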
An undirected graph is a graph for which $(u,v) \in E$ if and only if $(v,u) \in E$. An unweighted graph is a graph with the property that, for every $u \in V$: $r_{uv} = 1/\deg(u)$ for every $v$ with an incoming edge from $u$, where $\deg(u)$ is the out-degree of node $u$. In the sequel we will abuse the term undirected graph to refer to an undirected unweighted graph.
In what follows we will use special names to refer to some specific graph classes. The following graphs have $n$ vertices, which we omit from the notation for simplicity.
- $K$, as a shorthand for the Clique or complete graph $K_n$.
- $ST$, as a shorthand for the Undirected Star graph $ST_n$.
- $C$, as a shorthand for the Undirected Cycle or 2-regular graph $C_n$.
- $C^k$, as a shorthand for the Circulant graph $C_n^k$ for even $k$. Briefly, this subclass of circulant graphs is defined as follows: for even degree $k$, the graph $C_n^k$ (see Fig. 2) has vertex set $V = \{0, 1, \dots, n-1\}$, and each vertex $i$ is connected to the vertices $i \pm 1, i \pm 2, \dots, i \pm k/2 \pmod{n}$.
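Assuming the adjacency rule just described, the row-stochastic weight matrix of an unweighted $C_n^k$ can be sketched as follows (function name ours):

```python
def circulant_weights(n, k):
    """Row-stochastic weight matrix of the unweighted circulant graph C_n^k:
    vertex i is adjacent to i +- 1, ..., i +- k/2 (mod n), each with weight 1/k."""
    assert k % 2 == 0 and 0 < k < n
    W = []
    for i in range(n):
        nbrs = {(i + d) % n for d in range(1, k // 2 + 1)}
        nbrs |= {(i - d) % n for d in range(1, k // 2 + 1)}
        W.append([1.0 / k if j in nbrs else 0.0 for j in range(n)])
    return W
```

In particular, $C_n^2$ is exactly the undirected cycle $C_n$.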
By "Resident Graph vs Mutant Graph" we refer to the process with resident graph $G_R$ and mutant graph $G_M$, and by $f(G_R \text{ vs } G_M)$ we refer to the fixation probability of that process.
We note that in this paper we are interested in the asymptotic behavior of the fixation probability in the case where the population size $n$ is large. Therefore, we employ the standard asymptotic notation with respect to $n$; in particular, $r$ is almost always treated as a variable independent of $n$. Furthermore, in the rest of the paper, by $K$ and $ST$ we mean the graph classes $\{K_n\}_{n \in \mathbb{N}}$ and $\{ST_n\}_{n \in \mathbb{N}}$ respectively, and we will omit the index $n$ since we only care about the fixation probability when $n \to \infty$.
3 Our Results
In this paper, we introduce and study for the first time a generalization of the model of [10] by assuming that different types of individuals perceive the population through different graphs defined on the same vertex set, namely $G_R$ for residents and $G_M$ for mutants. In this model, we study the fixation probability, i.e. the probability that eventually only mutants remain in the population, for various pairs of graphs.
In particular, in Section 5 we initially prove a tight upper bound (Theorem 5.1) on the fixation probability for the general case of an arbitrary pair of digraphs. Next, we prove a generalization of the Isothermal Theorem of [10], which provides sufficient conditions for a pair of graphs to have fixation probability equal to that of a clique pair, namely $\rho = \frac{1 - 1/r}{1 - 1/r^n}$; this corresponds to the absorption probability of a simple birth-death process with forward bias $r$. It is worth noting that it is easy to find small counterexamples of pairs of graphs for which at least one of the two conditions of Theorem 5.2 does not hold and yet the fixation probability is equal to $\rho$; hence the conditions are sufficient but not necessary.
In Section 6 we give a 2-player strategic game view of the process, where player payoffs correspond to fixation and/or extinction probabilities. In this setting, we give an extensive study of the fixation probability when one of the two underlying graphs is complete, providing several insightful results. In particular, we prove that the fixation probability when the mutant graph is the clique on $n$ vertices (i.e. $K$) and the resident graph is the undirected star on $n$ vertices (i.e. $ST$) tends to 1 as the number of vertices grows, for any constant $r > 1$. By using a translation result (Lemma 1), we can derive an upper bound on the fixation probability when the two graphs are exchanged. However, using a direct proof, in Theorem 6.2 we show that in fact the fixation probability of $K$ vs $ST$ is exponentially small in $n$, for any constant $r$. In Theorem 6.4, we also provide a lower bound on the fixation probability in the special case where the resident graph is any undirected graph and the mutant graph is a clique.
Furthermore, in Subsection 6.3, we find bounds on the fixation probability when the mutant graph is the clique and the resident graph belongs to various classes of regular graphs. In particular, we upper bound the fixation probability when the mutant graph is the clique and the resident graph is the undirected cycle, for any constant $r$; a looser lower bound holds for smaller values of $r$. This in particular implies that the undirected cycle is quite resistant to the clique. Then, we analyze the fixation probability by replacing the undirected cycle with 3 increasingly denser circulant graphs, and we find that the denser the graph, the smaller the fitness $r$ required to achieve the same asymptotic lower bound. We also find that the asymptotic upper bound stays the same when the resident graphs become denser with constant degree, but it changes when the degree grows with $n$. In addition, by running simulations (which we do not analyse here) for the case where the resident graph is the strongest known suppressor, i.e. the one in [5], and the mutant graph is the clique, we get fixation probability significantly greater than $\rho$ for a range of population sizes and fitness values. All of our results seem to indicate that the clique is the most beneficial graph (in terms of player payoff in the game-theoretic formulation). However, we leave this fact as an open problem for future research.
Finally, in Section 7 we consider the problem of efficiently approximating the fixation probability in our model. We point out that Theorem 6.2 implies that the fixation probability cannot be approximated via a method similar to that of [2]. However, when we restrict the mutant graph to be complete, we prove a polynomial (in $n$) upper bound on the absorption time of the generalized Moran process when $r$ is sufficiently large with respect to the maximum ratio of degrees of adjacent nodes in the resident graph. The latter allows us to give a fully polynomial randomized approximation scheme (FPRAS) for the problem of computing the fixation probability in this case.
4 Previous Work
So far the bibliography consists of works that consider the same structure for both residents and mutants. This 1-graph setting was initiated by P.A.P. Moran [14], where the case of the complete graph was examined. Many years later, the setting was extended to structured populations on general directed graphs by Lieberman et al. [10]. They introduced the notions of amplifiers and suppressors of selection, a categorization of graphs based on the comparison of their fixation probabilities with that of the complete graph. They also found a sufficient condition (in fact [4] corrects the claim in [10] that the condition is also necessary) for a digraph to have the fixation probability of the complete graph, but a necessary condition is yet to be found.
Since the generalized 1-graph model of [10] was proposed, a great number of works have tried to answer some very intriguing questions in this framework. One of them is the following: which are the best unweighted amplifiers and suppressors that exist? Díaz et al. [2] give upper and lower bounds on the fixation probability of strongly connected digraphs for $r > 1$, and they show that there is no positive polynomial lower bound when $0 < r < 1$. An interesting problem that was set in [10] is whether there are graph families that are strong amplifiers or strong suppressors of selection, i.e. families of graphs with fixation probability tending to 1 or to 0, respectively, as the order of the graph tends to infinity and for $r > 1$. Galanis et al. [4] find an infinite family of strongly-amplifying directed graphs, namely the "megastar", whose fixation probability was later proved to be optimal up to logarithmic factors [6].
While the search for optimal directed strong amplifiers was still on, a restricted version of the problem had been drawing a lot of attention: what are the tight bounds on the fixation probability of undirected graphs? The lower bound in the undirected case remained the same, but the upper bound was significantly improved by Mertzios et al. [13] when $r$ is independent of $n$. It was again improved by Giakkoupis [5], and finally by Goldberg et al. [6], who also find a graph showing that their bound is tight. While the general belief was that there are no undirected strong suppressors, Giakkoupis [5] showed that there is a class of graphs with fixation probability tending to 0, opening the way for a potentially optimal strong suppressor to be discovered.
Extensions of [10] where the interaction between individuals includes a bimatrix game have also been studied. Ohtsuki et al. in [16] considered the generalized Moran process with two distinct graphs, where one of them determines the possible pairs that will play a bimatrix game and yield a total payoff for each individual, and the other determines which individual will be replaced by the process in each step. Two similar settings, where a bimatrix game determines the individuals' fitness, were studied by Ibsen-Jensen et al. in [8]. In that work they prove NP-completeness and #P-completeness of the computation of the fixation probabilities for each setting.
5 Markov Chain Abstraction and the Generalized Isothermal Theorem
The generalized process with two graphs that we propose can be modelled as an absorbing Markov chain [15]. The states of the chain are the possible mutant sets ($2^n$ different mutant sets), and there are two absorbing states, namely $\emptyset$ and $V$. In this setting, the fixation probability is the average absorption probability to $V$, starting from a state with one mutant. Since our Markov chain contains only two absorbing states, the sum of the fixation and extinction probabilities is equal to 1.
Transition probabilities. In the sequel we will denote by $S$ the mutant set and by $\bar{S}$ the set $V \setminus S$. We can easily deduce the boundary conditions from the definition: $f_\emptyset = 0$ and $f_V = 1$. For any other arbitrary state $S$ of the process we have:

$$f_S = \sum_{u \in S} \sum_{v \in \bar{S}} \frac{r\, m_{uv}}{F(S)}\, f_{S \cup \{v\}} + \sum_{u \in \bar{S}} \sum_{v \in S} \frac{r_{uv}}{F(S)}\, f_{S \setminus \{v\}} + \Bigg(1 - \sum_{u \in S} \sum_{v \in \bar{S}} \frac{r\, m_{uv}}{F(S)} - \sum_{u \in \bar{S}} \sum_{v \in S} \frac{r_{uv}}{F(S)}\Bigg) f_S \qquad (1)$$

where $F(S) = r|S| + (n - |S|)$ is the total fitness of the population in state $S$. By eliminating self-loops, we get

$$f_S = \frac{r \sum_{u \in S} \sum_{v \in \bar{S}} m_{uv}\, f_{S \cup \{v\}} + \sum_{u \in \bar{S}} \sum_{v \in S} r_{uv}\, f_{S \setminus \{v\}}}{r \sum_{u \in S} \sum_{v \in \bar{S}} m_{uv} + \sum_{u \in \bar{S}} \sum_{v \in S} r_{uv}} \qquad (2)$$
We should note here that, in the general case, the fixation probability can be computed by solving a system of linear equations using this latter relation. However, bounds are usually easier to find, and special cases of resident and mutant graphs may admit efficient exact solutions.
Using the above Markov chain abstraction and stochastic domination arguments we can prove the following general upper bound on the fixation probability:
Theorem 5.1
For any pair of digraphs $G_R$ and $G_M$ on $n$ nodes, the fixation probability is upper bounded by $1 - \frac{1}{n + r}$, for any $r > 0$. This bound is tight for $r$ independent of $n$.
Proof
We refer to the proof of Lemma 4 of [2], as our proof is essentially the same. Briefly, we find an upper bound on the fixation probability of a relaxed Moran process that favors the mutants, in which we assume that fixation is achieved as soon as two mutants appear in the population. In their work the resident and mutant graphs are the same and undirected, but this does not change the probabilities of the first mutant, placed u.a.r., being extinct or replicated in our model. Finally, we note that this result is tight by Theorem 6.1.
We now prove a generalization of the Isothermal Theorem of [10].
Theorem 5.2 (Generalized Isothermal Theorem)
Let $G_R = (V, E_R)$, $G_M = (V, E_M)$ be two directed graphs with vertex set $V$ and edge sets $E_R$ and $E_M$ respectively. The generalized Moran process with 2 graphs as described above has the Moran fixation probability $\rho$ if:
1. $\sum_{u \in V} r_{uv} = \sum_{u \in V} m_{uv} = 1$ for every $v \in V$, that is, $W_R$ and $W_M$ are doubly stochastic, i.e. $G_R$ and $G_M$ are isothermal (actually, one of them being isothermal is redundant, as it follows from the other together with the second condition), and
2. for every pair of nodes $u, v \in V$: $r_{uv} + r_{vu} = m_{uv} + m_{vu}$.
Proof
It suffices to show that in every state $S$ of the Markov chain of the process with $0 < |S| < n$ mutants, the probability to go to a state with $|S|+1$ mutants is $r$ times the probability to go to a state with $|S|-1$ mutants (ch. 6 in [15]). In our setting, by (2) these probabilities are proportional to $r \sum_{u \in S} \sum_{v \in \bar{S}} m_{uv}$ and $\sum_{u \in \bar{S}} \sum_{v \in S} r_{uv}$ respectively. So, to establish the theorem, it suffices to show that its hypotheses hold if and only if relation (3) holds:

$$\sum_{u \in S} \sum_{v \in \bar{S}} m_{uv} = \sum_{u \in \bar{S}} \sum_{v \in S} r_{uv} \quad \text{for every } S \text{ with } 0 < |S| < n. \qquad (3)$$

Consider all the states where only one node $w$ is resident, i.e. $S = V \setminus \{w\}$. Then from relation (3) we get the following set of equations that must hold:

$$\sum_{u \neq w} m_{uw} = \sum_{v \neq w} r_{wv} = 1 \quad \text{for every } w \in V, \qquad (4)$$

i.e. $W_M$ is doubly stochastic. Similarly, for all the states where only one node $w$ is mutant, i.e. $S = \{w\}$, we get from relation (3):

$$\sum_{u \neq w} r_{uw} = \sum_{v \neq w} m_{wv} = 1 \quad \text{for every } w \in V, \qquad (5)$$

i.e. $W_R$ is doubly stochastic. Now, for general $S$, the two parts of (3) can be written as:

$$\sum_{u \in S} \sum_{v \in \bar{S}} m_{uv} = \sum_{u \in S} \Big(1 - \sum_{v \in S} m_{uv}\Big) = |S| - \sum_{u \in S} \sum_{v \in S} m_{uv} \qquad (6)$$

$$\sum_{u \in \bar{S}} \sum_{v \in S} r_{uv} = \sum_{v \in S} \Big(\sum_{u \in V} r_{uv} - \sum_{u \in S} r_{uv}\Big) = |S| - \sum_{u \in S} \sum_{v \in S} r_{uv}, \qquad (7)$$

where the last equality in (7) uses (5). Hence, given (4) and (5), relation (3) is equivalent to:

$$\sum_{u \in S} \sum_{v \in S} m_{uv} = \sum_{u \in S} \sum_{v \in S} r_{uv} \quad \text{for every } S. \qquad (8)$$

Now, consider all the states where only two nodes $u$ and $v$ are resident, i.e. $S = V \setminus \{u, v\}$. Then from relation (8), together with (4) and (5), we get the following set of relations that must hold:

$$m_{uv} + m_{vu} = r_{uv} + r_{vu} \quad \text{for every pair } u, v \in V. \qquad (9)$$

To prove the other direction of the equivalence, we show that the sets of relations (4), (5), (9) suffice to make (3) true. If (9) is true, then (8) is obviously true, by summing (9) over all pairs of nodes in $S$. And, by using (4) and (5), the right-hand sides of (6) and (7) are then equal, thus (3) is true.
Observe that when $G_R = G_M$ we recover the isothermal theorem for the special case of the generalized Moran process that has been studied so far.
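As a sanity check, the two hypotheses of the generalized Isothermal Theorem (both weight matrices doubly stochastic, and $r_{uv} + r_{vu} = m_{uv} + m_{vu}$ for every pair $u, v$) are cheap to verify numerically. A sketch, with all names ours; note that the undirected cycle paired with a biased directed cycle passes the check even though $G_R \neq G_M$:

```python
def doubly_stochastic(W, tol=1e-9):
    n = len(W)
    rows = all(abs(sum(W[u]) - 1.0) < tol for u in range(n))
    cols = all(abs(sum(W[u][v] for u in range(n)) - 1.0) < tol for v in range(n))
    return rows and cols

def isothermal_pair(W_R, W_M, tol=1e-9):
    """Check the sufficient conditions of the generalized Isothermal Theorem."""
    n = len(W_R)
    pairwise = all(abs(W_R[u][v] + W_R[v][u] - W_M[u][v] - W_M[v][u]) < tol
                   for u in range(n) for v in range(n))
    return doubly_stochastic(W_R, tol) and doubly_stochastic(W_M, tol) and pairwise

def cycle(n, forward=0.5):
    """Weight matrix of a cycle: each node sends weight `forward` clockwise
    and 1 - `forward` counterclockwise; forward=0.5 is the undirected cycle."""
    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        W[i][(i + 1) % n] = forward
        W[i][(i - 1) % n] = 1.0 - forward
    return W
```

For example, `isothermal_pair(cycle(5), cycle(5, 0.9))` holds, so by the theorem that resident/mutant pair has fixation probability exactly $\rho$, whereas pairing the cycle with the star fails the check.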
6 A Strategic Game View
In this section we study the aforementioned process from a game-theoretic point of view. Consider the strategic game with 2 players, residents (type R) and mutants (type M), so the player set is $\{R, M\}$. The action set of a player consists of all possible strongly connected graphs that she can construct with the available vertex set $V$ (we assume strong connectivity in order to avoid problematic cases where there is neither fixation nor extinction). The payoff for the residents (player R) is the probability of extinction, and the payoff for the mutants (player M) is the probability of fixation. Of course, the sum of payoffs equals 1, so the game can be reduced to a zero-sum game.
The natural question that emerges is: what are the pure Nash equilibria of this game (if any)? For example, for fixed $r > 1$, if we only consider two actions for every player, namely the graphs $K$ and $ST$, then from our results from Subsection 6.1 we get the payoffs for the pairs $(ST, K)$ and $(K, ST)$, and from [15, 1] we get those for $(K, K)$ and $(ST, ST)$. The resulting bimatrix game has a pure Nash equilibrium, namely $(K, K)$. Trying to understand better the behaviour of the two conflicting graphs, we put some pairs of them to the test. The main question we ask in this work is: what is the best response graph of the residents to the Clique graph of the mutants? In the sequel, we will use the abbreviations plR and plM for the resident and the mutant population, respectively.
In the proofs of this paper we shall use the following fact from [15]:
Fact 1
In a birth-death process with state space $\{0, 1, \dots, n\}$, absorbing states $0$ and $n$, and backward bias at state $i$ equal to $\gamma_i = \frac{p_{i,i-1}}{p_{i,i+1}}$, the probability of absorption at $n$, given that we start at state $k$, is
$$\frac{1 + \sum_{j=1}^{k-1} \prod_{i=1}^{j} \gamma_i}{1 + \sum_{j=1}^{n-1} \prod_{i=1}^{j} \gamma_i}.$$
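In code, Fact 1 reads as follows (a direct transcription; the list and function names are ours, and `gamma` holds $\gamma_1, \dots, \gamma_{n-1}$ in order):

```python
from itertools import accumulate
from operator import mul

def absorb_at_n(gamma, k):
    """P(absorb at n | start at k) for a birth-death chain on {0, ..., n},
    where gamma[j] is the backward bias at state j+1."""
    prods = list(accumulate(gamma, mul))   # prods[j-1] = gamma_1 * ... * gamma_j
    num = 1.0 + sum(prods[:k - 1])         # sum over j = 1 .. k-1
    den = 1.0 + sum(prods)                 # sum over j = 1 .. n-1
    return num / den
```

With constant bias $\gamma_i = 1/r$ and $k = 1$ this recovers the Moran fixation probability $\rho$, and with $\gamma_i = 1$ it gives the simple random-walk value $k/n$.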
6.1 Star vs Clique
The following result implies (since its lower bound tends to 1 as $n \to \infty$) that when the mutant graph is complete and the resident graph is the undirected star, the fixation probability tends to 1 as $n$ goes to infinity.
Theorem 6.1
If plR has the $ST$ graph and plM has the $K$ graph, then for $r > 1$ the payoff of plM (the fixation probability) is lower bounded by $\frac{n-1}{n} \left(1 - \frac{1}{(r-1)(n-2)}\right)$.
Proof
We will find a lower bound on the fixation probability of our process $P$ by finding the fixation probability of a process $P'$ that is dominated by (has at most the fixation probability of) $P$. Here is $P'$: Take the undirected star graph $ST$ for the residents and the clique graph $K$ for the mutants. We start with a single mutant on a node chosen uniformly at random from the vertex set. If that node is the central one of $ST$, then at the next time-step it is attacked by a resident with probability 1 and the process ends with the residents occupying the vertex set. If the initial mutant node is a leaf, then the process continues with the following restriction: whenever a mutant node is selected to reproduce on the central node of $ST$, it instead reproduces on itself, unless all leaves of $ST$ are mutants. $P'$ can be modelled as the following Markov chain:
In Figure 3 we denote by $(c, \ell)$ the state of process $P'$ that has $c \in \{0, 1\}$ mutants at the center of $ST$ (the star graph) and $\ell$ mutants at the leaves of $ST$. We also denote by $f_{\mathrm{center}}$ the fixation probability given that the initial mutant node of process $P'$ is the center of $ST$, and by $f_{\mathrm{leaf}}$ the fixation probability given that the initial mutant node is a leaf of $ST$. Now, the exact fixation probability of process $P'$ is:
$$f(P') = \frac{1}{n} f_{\mathrm{center}} + \frac{n-1}{n} f_{\mathrm{leaf}} = \frac{n-1}{n} f_{\mathrm{leaf}},$$
since $f_{\mathrm{center}} = 0$ by construction. Now, for a state $(0, \ell)$ where $1 \leq \ell \leq n-2$, the probability of going to state $(0, \ell+1)$ in the next step is:
$$p_{\ell, \ell+1} = \frac{r\ell}{F(\ell)} \cdot \frac{n-1-\ell}{n-1},$$
where $F(\ell) = r\ell + n - \ell$ is the total fitness; for $\ell = n-1$, the probability that the mutants occupy the center (i.e. reach fixation) in the next step is $\frac{r}{F(n-1)}$. For a state $(0, \ell)$ where $1 \leq \ell \leq n-1$, the probability of going to state $(0, \ell-1)$ in the next step is:
$$p_{\ell, \ell-1} = \frac{1}{F(\ell)} \cdot \frac{\ell}{n-1},$$
and the probability of remaining in state $(0, \ell)$ is the complementary probability. In our case, where we want the fixation probability given that we start from state $(0,1)$, by using Fact 1, we get the following:
$$f_{\mathrm{leaf}} = \frac{1}{1 + \sum_{j=1}^{n-1} \prod_{i=1}^{j} \gamma_i} \qquad (10)$$
From the transition probabilities of our Markov chain, we can see that:
$$\gamma_\ell = \frac{p_{\ell,\ell-1}}{p_{\ell,\ell+1}} = \frac{1}{r(n-1-\ell)} \ \text{ for } 1 \leq \ell \leq n-2, \quad \text{and} \quad \gamma_{n-1} = \frac{1}{r}.$$
So, from (10), and since $\prod_{i=1}^{j} \gamma_i \leq \frac{1}{(n-2)\, r^j}$ for every $1 \leq j \leq n-1$, we get:
$$f_{\mathrm{leaf}} \geq \frac{1}{1 + \frac{1}{n-2} \sum_{j=1}^{n-1} r^{-j}} \geq \frac{1}{1 + \frac{1}{(r-1)(n-2)}} \geq 1 - \frac{1}{(r-1)(n-2)},$$
and for the required fixation probability we get:
$$f(P') = \frac{n-1}{n} f_{\mathrm{leaf}} \geq \frac{n-1}{n} \left(1 - \frac{1}{(r-1)(n-2)}\right).$$
This completes the proof of Theorem 6.1.
It is worth noting that, since the game we defined in Section 6 is 1-sum, we immediately get upper (resp. lower) bounds on the payoff of plR, given lower (resp. upper) bounds on the payoff of plM.
Now we give the following lemma, which connects the fixation probability of a process with given relative fitness, resident graph, and mutant graph to the fixation probability of a "mirror" process in which the roles of residents and mutants are exchanged.
Lemma 1
$f(r, G_R \text{ vs } G_M) \leq 1 - f(1/r, G_M \text{ vs } G_R)$.
Proof
We denote by $f_S(r, G_R, G_M)$ the probability of fixation when our population has a set of mutants $S$ with relative fitness $r$, resident graph $G_R$ and mutant graph $G_M$. We first prove the following:
Claim
$f_S(r, G_R, G_M) = 1 - f_{V \setminus S}(1/r, G_M, G_R)$.
Proof
The probability of fixation for a mutant set $S$ with mutant graph $G_M$ is the same as the probability of extinction of the resident set $V \setminus S$, i.e. one minus the probability of the set $V \setminus S$ conquering the graph. Thus, if we exchange the labels of residents and mutants (and rescale fitness by $1/r$), the relative fitness of the new residents is 1 and the relative fitness of the new mutants is $1/r$, the new resident graph is $G_M$, the new mutant graph is $G_R$, and the new mutant set is $V \setminus S$.
We can now prove Lemma 1 as follows: By the above Claim we have $f_{\{u\}}(r, G_R, G_M) = 1 - f_{V \setminus \{u\}}(1/r, G_M, G_R)$ for every $u \in V$. Since $f_{V \setminus \{u\}}(1/r, G_M, G_R) \geq f_{\{w\}}(1/r, G_M, G_R)$ for every $u, w \in V$, we get that $f_{\{u\}}(r, G_R, G_M) \leq 1 - f(1/r, G_M \text{ vs } G_R)$. Averaging over all nodes $u \in V$ we get the required inequality.
This result easily provides an upper bound on the fixation probability of a given process when a lower bound on the fixation probability is known for its "mirror" process. For example, using Theorem 6.1 and Lemma 1 we get an upper bound on the fixation probability of $K$ vs $ST$; this immediately implies that the probability of fixation in this case tends to 0. However, as we subsequently explain, a more precise bound is necessary to reveal the approximation restrictions of the particular process.
Theorem 6.2
If plR has the $K$ graph and plM has the $ST$ graph, then for any constant $r$ the payoff of plM (the fixation probability) is upper bounded by $\frac{r^{n-1}}{(n-2)!}$.
Proof
In order to show this, we give a pair of graphs that yields fixation probability upper bounded by an exponentially small function of $n$. Take the Clique graph $K$ for the residents and the Undirected Star graph $ST$ for the mutants; we will call this process $P$. We will find an upper bound on its fixation probability by considering the following process $P''$ that favors the mutants. Here is $P''$: Take the aforementioned graphs. We start with a single mutant on the central node of $ST$. If a mutant is selected to reproduce on a mutant, it reproduces according to the exact same rules of $P$. If a resident is selected to reproduce on a resident, it also reproduces according to the exact same rules of $P$. If a resident is selected to reproduce on a mutant, it reproduces according to the exact same rules of $P$, unless that mutant is the central one; in that case the resident reproduces on itself, unless all leaves of $ST$ are residents.
The corresponding Markov chain has $n+1$ states. A state is denoted by $i$, where $i$ is the number of mutants, and the only absorbing states are $0$ and $n$. For state $1$ (only the center is mutant), the probability of going to state $0$ in the next step is:
$$p_{1,0} = \frac{1}{F(1)},$$
where $F(i) = ri + n - i$ is the total fitness. For a state $i$, where $2 \leq i \leq n-1$, the probability of going to state $i-1$ in the next step is:
$$p_{i,i-1} = \frac{n-i}{F(i)} \cdot \frac{i-1}{n-1}.$$
For a state $i$, where $1 \leq i \leq n-1$, the probability of going to state $i+1$ in the next step is:
$$p_{i,i+1} = \frac{r}{F(i)} \cdot \frac{n-i}{n-1},$$
and the probability of staying in state $i$ in the next step is the complementary probability. In our case, where we want the fixation probability given that we start from state $1$, by using Fact 1 we get the following:
$$f(P'') = \frac{1}{1 + \sum_{j=1}^{n-1} \prod_{i=1}^{j} \gamma_i} \qquad (11)$$
From the transition probabilities of our Markov chain, we can see that:
$$\gamma_1 = \frac{p_{1,0}}{p_{1,2}} = \frac{1}{r} \quad \text{and} \quad \gamma_i = \frac{p_{i,i-1}}{p_{i,i+1}} = \frac{i-1}{r} \ \text{ for } 2 \leq i \leq n-1.$$
So, from (11) we get:
$$f(P) \leq f(P'') = \frac{1}{1 + \sum_{j=1}^{n-1} \frac{(j-1)!}{r^j}} \leq \frac{r^{n-1}}{(n-2)!}.$$
This completes the proof of Theorem 6.2.
This bound shows that not only does there exist a graph that suppresses selection against the $ST$ (which is an amplifier in the 1-graph setting), but it does so with great success. In fact, for any mutant with constant relative fitness $r$, arbitrarily large, its fixation probability is less than exponentially small.
In view of the above, the following result implies that the fixation probability in our model cannot be approximated via a method similar to that of [2].
Theorem 6.3 (Bounds on the 2-graphs Moran process)
There is a pair of graphs such that the fixation probability is at most $c^{-n}$, for some constant $c > 1$, when the relative fitness $r$ is constant. Furthermore, there is a pair of graphs such that the fixation probability is at least $1 - O(1/n)$, for constant $r > 1$.
6.2 Arbitrary Undirected Graphs vs Clique
The following result is a lower bound on the fixation probability.
Theorem 6.4
When plR has an undirected graph for which $\frac{\deg(u)}{\deg(v)} \leq \delta$ for every edge $(u,v)$, and plM has the $K$ graph, the payoff of plM (the fixation probability) is lower bounded by $\left(1 + \sum_{j=1}^{n-1} \left(\frac{\delta(n-1)}{r}\right)^j \frac{(n-1-j)!}{(n-1)!}\right)^{-1}$, for any $r > 0$. In particular, for $r$ sufficiently larger than $\delta$ (e.g. $r > e\delta$), the lower bound tends to $1 - \frac{\delta}{r}$ as $n \to \infty$.
Proof
Notice that, given that the number of mutants at a time-step is $i$, the probability that a resident becomes mutant is $\frac{ri}{F(i)} \cdot \frac{n-i}{n-1}$, and the probability that a mutant becomes resident is upper bounded by $\frac{i\delta}{F(i)}$, where $F(i) = ri + n - i$ and $\delta$ is the maximum ratio of degrees of adjacent nodes in $G_R$. That is because the maximum possible number of resident-to-mutant edges in $G_R$ at a step with $i$ mutants is achieved when either every mutant has edges in $G_R$ only towards residents, or every resident has edges in $G_R$ only towards mutants; and the most extreme case is when every one of the $i$ mutant nodes has sum of weights of incoming edges equal to the maximum ratio of degrees of adjacent nodes in $G_R$, i.e. $\delta$.
This means that the number of mutants in our given process $P$ of an undirected graph vs Clique stochastically dominates a birth-death process that is described by the following Markov chain: a state is denoted by $i$, where $i$ is the number of mutants on the vertex set, and the only absorbing states are $0$ and $n$. Using Fact 1, we get: $f(P) \geq \frac{1}{1 + \sum_{j=1}^{n-1} \prod_{i=1}^{j} \gamma_i}$, where $\gamma_i$ is the backward bias at state $i$. From the aforementioned transition probabilities of our Markov chain we have:
$$\gamma_i \leq \frac{i\delta}{F(i)} \cdot \frac{F(i)(n-1)}{r i (n-i)} = \frac{\delta(n-1)}{r(n-i)}.$$
Now we can calculate a lower bound on the fixation probability of $P$ using the fact that $\prod_{i=1}^{j} \gamma_i \leq \left(\frac{\delta(n-1)}{r}\right)^j \frac{(n-1-j)!}{(n-1)!}$:
$$f(P) \geq \frac{1}{1 + \sum_{j=1}^{n-1} \left(\frac{\delta(n-1)}{r}\right)^j \frac{(n-1-j)!}{(n-1)!}}.$$
From the theorem above it follows that if $G_R$ is undirected regular, then the fixation probability of $G_R$ vs $K$ is lower bounded by the above expression with $\delta = 1$, which for $r > e$ tends to $1 - \frac{1}{r}$ as $n \to \infty$, i.e. to the limit of $\rho$ (defined in Section 2).
Also, by Lemma 1 and the above theorem, when $G_R = K$, $G_M$ is an undirected graph with $\frac{\deg(u)}{\deg(v)} \leq \delta$ for every edge $(u,v)$, and the relative fitness satisfies $r < \frac{1}{e\delta}$, the upper bound on the fixation probability tends to $\delta r$ as $n \to \infty$.
6.3 Circulant Graphs vs Clique
In this subsection we give bounds on the fixation probability of $C^k$ vs $K$. We first prove the following result, which gives an upper bound on the fixation probability when the resident graph is the $C^k$ graph as described in Section 2 and the mutant graph is the complete graph on $n$ vertices.
Theorem 6.5
When mutants have the graph, if residents have a graph and , then the payoff of plM (fixation probability) is upper bounded by for and for . In particular, for constant the upper bound tends to . If , then the upper bound is , for , where is a function of such that and . The bound improves as is picked closer to and, in particular, for it tends to .
Proof
We will bound from above the payoff of plM (i.e. the fixation probability) in our process $P$ by finding the fixation probability of a process $P'$ that dominates (has at least the fixation probability of) $P$. The dominating process is the least favorable for the residents. Here is $P'$: Take the $C^k$ graph for the residents, as defined in Section 2 (in the more general case, where the number of its vertices does not concern us), and the clique graph for the mutants. We start with a single mutant on a node (w.l.o.g. we give it label $0$) chosen uniformly at random from the vertex set. Throughout the process, if a resident is selected to reproduce on a resident, it reproduces according to the exact same rules of $P$. If a mutant is selected to reproduce on a mutant, it reproduces according to the exact same rules of $P$. However, if a mutant is selected to reproduce on a resident, it obeys the following restriction: it can only reproduce on a resident that is connected to the maximum number of mutants possible (equiprobably among such residents, though this does not really matter due to the symmetry of the produced population). If a resident is selected to reproduce on a mutant when the number of mutants is $i$, then the last among the $i$ mutants that was inserted becomes resident, thus preserving the minimality of the probability of the residents hitting the mutants (see Figure 4).
It is easy to see that process $P'$ allocates the mutants in a chain-like formation that allows residents to "hit" the mutants with the smallest possible number of resident edges. In other words, if we consider the mutant set $S$ and the resident set $V \setminus S$, in every step of the process the number of resident edges on the cut of $(S, V \setminus S)$ is minimum. This process is the worst the residents could deal with.
Due to the symmetry that our process brings to the population instances, the corresponding Markov chain has $n+1$ states, as every state with the same number of mutants can be reduced to a single one. A state is denoted by $i$, where $i$ is the number of mutants, and the only absorbing states are $0$ and $n$. After careful calculations we get that, for a state $i$, where $1 \leq i \leq n-1$, the probability of going to state $i+1$ in the next step is:
the probability of going to state in the next step is:
and the probability of staying to state in the next step is: . In our case, where we want the fixation probability given that we start from state , by using Fact 1 we get the following:
(12) 
If $k$ is constant: from the transition probabilities of our Markov chain, we can see that:
So, from (12) we get: