1 Introduction
In this work we present faster Monte-Carlo algorithms for approximating the fixation probability of the fundamental Moran process on population structures with symmetric interactions. We start with a description of the problem.
Evolutionary dynamics
Evolutionary dynamics act on populations, where the composition of the population changes over time due to mutation and selection. Mutation generates new types and selection changes the relative abundance of different types. A fundamental concept in evolutionary dynamics is the fixation probability of a new mutant [5, 10, 12, 13]: Consider a population of n resident individuals, each with fitness value 1. A single mutant with non-negative relative fitness value r is introduced in the population as the initialization step. Intuitively, the fitness represents the reproductive strength. In the classical Moran process the following birth-death stochastic steps are repeated: At each time step, one individual is chosen at random proportionally to its fitness to reproduce, and one other individual is chosen uniformly at random for death. The offspring of the reproducing individual replaces the dead individual. This stochastic process continues until either all individuals are mutants or all individuals are residents. The fixation probability is the probability that the mutants take over the population, which means all individuals are mutants. A standard calculation shows that the fixation probability is given by (1 − 1/r)/(1 − 1/r^n). The correlation between the relative fitness r of the mutant and the fixation probability is a measure of the effect of natural selection. The rate of evolution, which is the rate at which subsequent mutations accumulate in the population, is proportional to the fixation probability, the mutation rate, and the population size n. Hence fixation probability is a fundamental concept in evolution.
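For concreteness, the classical well-mixed Moran process just described can be simulated directly. The sketch below is our own illustration (not from the paper): residents are assumed to have fitness 1 and the mutant relative fitness r, and the empirical fixation frequency can be checked against the standard closed-form expression.

```python
import random

def moran_fixation(n, r, rng):
    """Simulate one run of the well-mixed Moran process.

    n: population size; r: relative fitness of the mutant type.
    Returns True iff the single initial mutant fixates.
    """
    mutants = 1  # a single mutant is introduced at initialization
    while 0 < mutants < n:
        # Pick the reproducer at random, proportionally to fitness.
        total = r * mutants + (n - mutants)
        birth_is_mutant = rng.random() < r * mutants / total
        # Pick one *other* individual uniformly at random for death;
        # in a well-mixed population only the counts matter.
        if birth_is_mutant:
            dies_mutant = rng.random() < (mutants - 1) / (n - 1)
        else:
            dies_mutant = rng.random() < mutants / (n - 1)
        mutants += (1 if birth_is_mutant else 0) - (1 if dies_mutant else 0)
    return mutants == n

# Empirical fixation frequency; for r > 1 it should approach
# (1 - 1/r) / (1 - 1/r**n) as the number of runs grows.
runs = 2000
freq = sum(moran_fixation(10, 2.0, random.Random(i)) for i in range(runs)) / runs
```

Steps in which the birth and death individuals have the same type leave the count unchanged; these are exactly the "ineffective" steps discarded later in the paper.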
Evolutionary graph theory
While the basic Moran process takes place in a well-mixed population (all individuals interact uniformly with all others), a fundamental extension is to study the process on population structures. Evolutionary graph theory studies this phenomenon. The individuals of the population occupy the nodes of a connected graph. The links (edges) determine who interacts with whom. In the birth-death step, the individual to die is chosen uniformly at random among the neighbors of the reproducing individual. Evolutionary graph theory describes evolutionary dynamics in spatially structured populations where interactions and competition occur mainly among neighbors in physical space [11, 2, 6, 17]. Undirected graphs represent population structures where the interactions are symmetric, whereas directed graphs allow for asymmetric interactions. The fixation probability depends on the population structure [11, 1, 7, 3]. Thus, the fundamental computational problem in evolutionary graph theory is as follows: given a population structure (i.e., a graph), the relative fitness r, and an approximation parameter, compute an approximation of the fixation probability.
MonteCarlo algorithms
A particularly important class of algorithms for biologists is Monte-Carlo algorithms, because they are simple and easy to interpret. A Monte-Carlo algorithm for the Moran process simulates the process repeatedly and obtains an approximation of the fixation probability from the statistics. Hence, the basic question we address in this work is simple Monte-Carlo algorithms for approximating the fixation probability. It was shown in [16] that simple simulation can take exponential time on directed graphs, and thus we focus on undirected graphs. The main previous algorithmic result in this area [4] presents a polynomial-time Monte-Carlo algorithm for undirected graphs when the fitness is given in unary. The main result of [4] shows that for undirected graphs it suffices to run each simulation for polynomially many steps.
Our contributions
In this work our main contributions are as follows:

Faster algorithm for undirected graphs First, we present a simple modification: instead of simulating each step, we discard ineffective steps, where no node changes type (i.e., either residents replace residents, or mutants replace mutants). We then show that the number of effective steps is concentrated around the expected number of effective steps. Sampling an effective step is, however, more complicated than sampling an arbitrary step. We therefore present an efficient algorithm for sampling effective steps, which requires O(m) preprocessing and then O(Δ) time per sample, where m is the number of edges and Δ is the maximum degree. Combining all our results we obtain faster polynomial-time Monte-Carlo algorithms: our algorithms are always faster than the previous algorithm by at least a polynomial factor (up to constants), and remain polynomial even if the fitness is given in binary. We present a comparison in Table 1, for constant fitness (since the previous algorithm is not polynomial-time for fitness given in binary). For a detailed comparison see Table 2 in the Appendix.
Table 1: Comparison with previous work, for constant fitness. We denote by n, Δ, T, and ε the number of nodes, the maximum degree, the random variable for the fixation time, and the approximation factor, respectively. The rows compare, for all steps versus effective steps: the number of steps in expectation, the concentration bounds, the cost of sampling a step, and the fixation algorithm. The results in the column "All steps" are from [4], except that we present the dependency on Δ, which was considered as n in [4]. The results in the column "Effective steps" are the results of this paper.
Lower bounds We also present lower bounds showing that the upper bound on the expected number of effective steps we present is asymptotically tight for undirected graphs.
Related complexity result
While in this work we consider evolutionary graph theory, a related problem is evolutionary games on graphs (which study the problem of frequency-dependent selection). The approximation problem for evolutionary games on graphs is considerably harder (e.g., PSPACE-completeness results have been established) [9].
Technical contributions
Note that for the problem we consider, the goal is not to design complicated efficient algorithms, but simple algorithms that are efficient. By simple, we mean algorithms related to the process itself, as biologists understand and interpret the Moran process well. Our main technical contribution is a simple and intuitive idea, namely to discard ineffective steps, and we show that this simple modification leads to significantly faster algorithms. We show a gain of one factor due to the effective steps, then lose a smaller factor due to sampling, and our other improvements are due to better concentration results. We also present an interesting family of graphs for the lower-bound examples. Technical proofs omitted due to lack of space are in the Appendix.
2 Moran process on graphs
Connected graph and type function
We consider the population structure represented as a connected graph. There is a connected graph , of nodes and edges, and two types . The two types represent residents and mutants, and in the technical exposition we refer to them as and for elegant notation. We say that a node is a successor of a node if . The graph is undirected if for all we also have , otherwise it is directed. There is a type function mapping each node to a type . Each type is in turn associated with a positive integer , the type’s fitness denoting the corresponding reproductive strength. Without loss of generality, we will assume that , for some number (the process we consider does not change under scaling, and denotes relative fitness). Let be the total fitness. For a node let be the degree of in . Also, let be the maximum degree of a node. For a type and type function , let be the nodes mapped to by . Given a type and a node , let denote the following function: and otherwise.
Moran process on graphs
We consider the following classical Moran birthdeath process where a dynamic evolution step of the process changes a type function from to as follows:

First a node is picked at random with probability proportional to , i.e., each node has probability of being picked equal to .

Next, a successor of is picked uniformly at random.

The type of is then changed to . In other words, .
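The dynamic evolution step above can be sketched directly in code. The following is our own minimal illustration (the names `adj`, `type_of`, and `fitness` are ours): `adj` maps each node to its list of successors, and one step mutates the type function in place.

```python
import random

def dynamic_evolution_step(adj, type_of, fitness, rng):
    """One birth-death step of the Moran process on a graph.

    adj: dict mapping node -> list of successors (neighbors).
    type_of: dict mapping node -> type; mutated in place.
    fitness: dict mapping type -> positive fitness value.
    """
    nodes = list(adj)
    weights = [fitness[type_of[v]] for v in nodes]
    # 1. Pick a node at random, proportionally to its fitness.
    v = rng.choices(nodes, weights=weights, k=1)[0]
    # 2. Pick a successor of v uniformly at random.
    u = rng.choice(adj[v])
    # 3. The type of u is changed to the type of v.
    type_of[u] = type_of[v]

# Example: run to fixation on a 4-cycle with one mutant of fitness 2.
rng = random.Random(0)
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
type_of = {0: 'm', 1: 'r', 2: 'r', 3: 'r'}
fitness = {'r': 1, 'm': 2}
while len(set(type_of.values())) > 1:
    dynamic_evolution_step(adj, type_of, fitness, rng)
fixated_type = next(iter(set(type_of.values())))
```

Note that a step may pick a successor of the same type, in which case the type function does not change; such steps are the ineffective steps discarded in Section 3.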
Fixation
A type fixates in a type function if the function maps all nodes to that type. Given a type function, repeated applications of the dynamic evolution step generate a sequence of type functions. Note that if a type has fixated in some type function of the sequence, then it has also fixated in all subsequent ones. We say that a process has a given fixation time if the type function at that time has fixated but the preceding one has not. We say that an initial type function has a given fixation probability for a type if that is the probability that the type eventually fixates (over the probability measure on sequences generated by repeated applications of the dynamic evolution step).
Basic questions
We consider the following basic questions:

Fixation problem Given a type , what is the fixation probability of averaged over the initial type functions with a single node mapping to ?

Extinction problem Given a type , what is the fixation probability of averaged over the initial type functions with a single node not mapping to ?

Generalized fixation problem Given a graph, a type, and a type function, what is the fixation probability of the type in the graph, when the initial type function is the given one?
Remark 1.
Note that in the neutral case, when the two types have equal fitness, the fixation problem and the extinction problem have simple closed-form answers. Hence, in the rest of the paper we will consider the non-neutral case. Also, to keep the presentation focused, in the main article we will consider fixation and extinction of one of the types. In the Appendix we also present another algorithm for the extinction of the other type.
Results
We will focus on undirected graphs. For undirected graphs, we will give new FPRASs (fully polynomial-time randomized approximation schemes) for the fixation and the extinction problem, and a polynomial-time algorithm for an additive approximation of the generalized fixation problem. Previous FPRASs exist for the fixation and extinction problems [4]. Our upper bounds are better by at least a polynomial factor (in most cases a larger one) and always polynomial, whereas the previous algorithms are not polynomial-time for fitness given in binary.
3 Discarding ineffective steps
We consider undirected graphs. Previous work by Díaz et al. [4] showed that the expected number of dynamic evolution steps till fixation is polynomial, and then used this to give a polynomial-time Monte-Carlo algorithm. Our goal is to improve the quite high polynomial time complexity, while still giving a Monte-Carlo algorithm. To achieve this we define the notion of effective steps.
Effective steps
A dynamic evolution step, which changes the type function from to , is effective if (and ineffective otherwise). The idea is that steps in which no node changes type (because the two nodes selected in the dynamic evolution step already had the same type) can be discarded, without changing which type fixates/gets eliminated.
Two challenges
The two challenges are as follows:

Number of steps The first challenge is to establish that the expected number of effective steps is asymptotically smaller than the expected number of all steps. We will establish a factor-Δ improvement (recall that Δ is the maximum degree).

Sampling Sampling an effective step is harder than sampling a normal step. Thus it is not clear a priori that considering effective steps leads to a faster algorithm. We consider the problem of efficiently sampling an effective step in Section 5, where we show that it can be done quickly after preprocessing.
Notation
For a type function , let be the subset of successors of , such that iff . Also, let .
Modified dynamic evolution step
Formally, we consider the following modified dynamic evolution step (that changes the type function from to and assumes that does not map all nodes to the same type):

First a node is picked at random with probability proportional to , i.e., each node has probability of being picked equal to .

Next, a successor of is picked uniformly at random among .

The type of is then changed to , i.e., .
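A naive implementation of the modified dynamic evolution step simply rescans the graph at every step (the efficient data structure of Section 5 avoids this). In this sketch (ours, with our own names), a node is weighted by its fitness times the fraction of its successors having the opposite type, matching the description above, and the replaced node is drawn uniformly among the differing successors.

```python
import random

def effective_step(adj, type_of, fitness, rng):
    """One *effective* step: the ordinary birth-death step conditioned
    on a node actually changing type (naive rescanning sampler).

    A node v is picked with probability proportional to
    fitness(type(v)) * |S(v)| / deg(v), where S(v) is the set of
    successors of v of the opposite type; then u is uniform in S(v).
    """
    nodes = [v for v in adj if any(type_of[u] != type_of[v] for u in adj[v])]
    weights = [
        fitness[type_of[v]]
        * sum(type_of[u] != type_of[v] for u in adj[v]) / len(adj[v])
        for v in nodes
    ]
    v = rng.choices(nodes, weights=weights, k=1)[0]
    u = rng.choice([w for w in adj[v] if type_of[w] != type_of[v]])
    type_of[u] = type_of[v]

# On a path 0-1-2, every effective step changes exactly one node's type.
rng = random.Random(1)
adj = {0: [1], 1: [0, 2], 2: [1]}
type_of = {0: 'm', 1: 'r', 2: 'r'}
steps = 0
while len(set(type_of.values())) > 1:
    effective_step(adj, type_of, {'r': 1, 'm': 2}, rng)
    steps += 1
```

The node weights are exactly the (unnormalized) probabilities, under the original step, of picking that node and then a differing successor, which is the content of Lemma 1 below.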
In the following lemma we show that the modified dynamic evolution step corresponds to the dynamic evolution step except for discarding steps in which no change was made.
Lemma 1.
Fix any type function such that neither type has fixated. Let (resp., ) be the next type function under dynamic evolution step (resp., modified dynamic evolution step). Then, and for all type functions we have: .
Potential function
Similar to [4] we consider the potential function (recall that is the set of nodes of type ). We now lower bound the expected difference in potential per modified evolutionary step.
Lemma 2.
Let be a type function such that neither type has fixated. Apply a modified dynamic evolution step on to obtain . Then,
Proof.
Observe that differs from for exactly one node . More precisely, let be the node picked in line 1 of the modified dynamic evolution step and let be the node picked in line 2. Then, . The probability to select is . The probability to then pick is .
We have that

If (and thus, since it got picked ), then .

If (and thus, since it got picked ), then .
Below we use the following notations:
Thus,
Using that the graph is undirected we get,
where . Note that in the second equality we use that for two numbers , their product is equal to . By definition of , we have
Thus, we see that , as desired. This completes the proof. ∎
Lemma 3.
Let for some number . Let be a type function such that neither type has fixated. Apply a modified dynamic evolution step on to obtain . The probability that is at least (otherwise, ).
Proof.
Consider any type function . Let be the number of edges , such that . We will argue that the total weight of nodes of type , denoted , is at least and that the total weight of nodes of type , denoted , is at most . We see that as follows:
using that and in the inequality. Also,
using that and in the inequality. We see that we thus have a probability of at least to pick a node of type . Because we are using effective steps, picking a member of type will increment the number of that type (and decrement the number of the other type). ∎
Lemma 4.
Consider an upper bound T, for each starting type function, on the expected number of (effective) steps to fixation. Then, for any starting type function and any integer k, the probability that fixation requires more than 2kT (effective) steps is at most 2^{-k}.
Proof.
By Markov’s inequality, after 2T (effective) steps the Moran process fixates with probability at least 1/2, irrespective of the initial type function. We now split the steps into blocks of length 2T. In every block, by the preceding argument, there is a probability of at least 1/2 to fixate in some step of that block, given that the process has not fixated before that block. Thus, for any integer k, the probability to not fixate before the end of block k, which happens at step 2kT, is at most 2^{-k}. ∎
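Writing T for the upper bound on the expected number of (effective) steps and k for the number of blocks of length 2T, the chaining behind the proof can be written out as:

```latex
\Pr[\text{not fixated after } 2kT \text{ steps}]
  \;=\; \prod_{i=1}^{k} \Pr[\text{no fixation in block } i \mid \text{none in blocks } 1,\dots,i-1]
  \;\le\; \left(\tfrac{1}{2}\right)^{k} \;=\; 2^{-k},
```

where each factor is at most 1/2 because, conditioned on the history, the expected number of remaining steps is still at most T, so by Markov's inequality the probability of needing more than 2T further steps is at most 1/2.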
We now present the main theorem of this section, which we obtain using the above lemmas, and techniques from [4].
Theorem 5.
Let and be the two types, such that . Let be the maximum degree. Let be the number of nodes of type in the initial type function. The following assertions hold:

Bounds dependent on

Expected steps The process requires at most effective steps in expectation, before fixation is reached.

Probability For any integer , after effective steps, the probability that the process has not fixated is at most , irrespective of the initial type function.


Bounds independent on

Expected steps The process requires at most effective steps in expectation, before fixation is reached.

Probability For any integer , after effective steps, the probability that the process has not fixated is at most , irrespective of the initial type function.


Bounds for

Expected steps The process requires at most effective steps in expectation, before fixation is reached.

Probability For any integer , after effective steps, the probability that the process has not fixated is at most , irrespective of the initial type function.

4 Lower bound for undirected graphs
In this section, we will argue that our bound on the expected number of effective steps is essentially tight, for fixed .
We construct our lower bound graph , for given , (sufficiently large), but fixed , as follows. We will argue that fixation of takes effective steps, if there are initially exactly members of type . For simplicity, we consider and (it is easy to see using similar techniques that for lines, where , the expected fixation time is ; basically, is going to fixate with pr. , using a proof like Lemma 6, and converting the nodes of type takes at least effective steps). There are two parts to the graph: a line of nodes and a stars-on-a-cycle graph of . There is one edge from one of the stars in the stars-on-a-cycle graph to the line. More formally, the graph is as follows: Let . There are nodes , such that is connected to and for . Also, is connected to . The nodes are the centers of the stars in the stars-on-a-cycle graph. For each , such that , the node is connected to a set of leaves . The set forms the stars-on-a-cycle graph. Note that is only connected to and in the stars-on-a-cycle graph. We have that the stars-on-a-cycle graph consists of nodes. There are also nodes , such that node is connected to and for . The nodes form the line and consist of nodes. The node is connected to . There is an illustration of in Figure 1.
We first argue that if at least one of is initially of type , then with pr. lower bounded by a number depending only on , type fixates (note that and thus, even if there is only a single node of type initially placed uniformly at random, it is in with pr. ).
Lemma 6.
With pr. above if at least one of is initially of type , then fixates.
The proof is based on applying the gambler’s ruin twice: once to show that the pr. that eventually becomes all is above (it is nearly in fact), and once to show that if is at some point all , then the pr. that fixates is exponentially small with base and exponent . See the appendix for the proof.
Whenever a node of , for some , changes type, we say that a leaf-step occurred. We will next consider the pr. that an effective step is a leaf-step.
Lemma 7.
The pr. that an effective step is a leaf-step is at most .
The proof is quite direct and considers the probability that a leaf gets selected for reproduction over a center node in the stars-on-a-cycle graph. See the appendix for the proof.
We are now ready for the theorem.
Theorem 8.
Let be some fixed constant. Consider (the maximum degree of the graph), (sufficiently large), and some such that . Then, if there are initially members of type placed uniformly at random, the expected fixation time of is above effective steps.
Proof.
Even if , we have that with pr. at least , the lone node of type is initially in . If so, by Lemma 6, type is going to fixate with pr. at least . Note that even for , at least nodes of the graph are in (i.e., the leaves of the stars-on-a-cycle graph). In expectation nodes of are thus initially of type . For fixation of to occur, we must thus make that many leaf-steps. Any effective step is a leaf-step with pr. at most by Lemma 7. Hence, with pr. ( is the probability that at least one node of type is in and is a lower bound on the fixation probability if a node of is of type ) we must make effective steps before fixation in expectation, implying that the expected fixation time is at least effective steps. ∎
5 Sampling an effective step
In this section, we consider the problem of sampling an effective step. It is quite straightforward to do so naively. We will present a data structure that, after preprocessing, can sample and update the distribution quickly. For this result we assume that a uniformly random number can be selected between 1 and any given number in constant time, a model that was also implicitly assumed in previous works [4]. (The construction of [4] was to store a list for each of the two types and then first decide which type would be selected in this step (based on the fitness and the number of nodes of the different types) and then pick a random such node. This works when all nodes of a type have the same weight, but does not generalize to the case when each node can have a distinct weight based on the node's successors, as here.)
Remark 2.
If we consider a weaker model, which requires constant time for each random bit, then we need random bits in expectation and additional amortized time, using a similar data structure (i.e., a total of amortized time in expectation). The argument for the weaker model is presented in the Appendix. In this more restrictive model [4] would use time per step for sampling.
Sketch of data structure
We first sketch a list data structure that supports (1) inserting elements; (2) removing elements; and (3) finding a random element, such that each operation takes (amortized or expected) constant time. The idea, based on dynamic arrays, is as follows:

Insertion Inserting elements takes amortized time in a dynamic array, using the standard construction.

Deletion Deleting an element is handled by changing the corresponding entry to a null value and then rebuilding the array, without the null values, if more than half the entries have been deleted since the last rebuild. Again, this takes amortized constant time.

Find random element Repeatedly pick a uniformly random entry. If it is not null, then output it. Since the array is at least half full, this takes at most 2 attempts in expectation and thus expected constant time.
At all times we keep a doubly linked list of empty slots, to find a slot for insertion in constant time.
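The sketch above can be made concrete as follows. This is our own toy illustration: the full data structure additionally tracks each element's positions with doubly linked lists, so that slot indices never go stale across rebuilds; here this caveat is only noted in a comment.

```python
import random

class RandomList:
    """Dynamic-array list with O(1) amortized insert/delete and O(1)
    expected uniform sampling: deletions mark slots as None, and the
    array is rebuilt once more than half of its entries are None.
    """

    def __init__(self):
        self.slots = []  # entries, possibly None
        self.free = []   # indices of empty slots (stands in for the linked list)
        self.live = 0    # number of non-None entries

    def insert(self, x):
        """Insert x; returns the slot index (needed for deletion)."""
        if self.free:
            i = self.free.pop()
            self.slots[i] = x
        else:
            i = len(self.slots)
            self.slots.append(x)
        self.live += 1
        return i

    def delete(self, i):
        """Delete the element in slot i; rebuild if over half empty."""
        self.slots[i] = None
        self.free.append(i)
        self.live -= 1
        if self.live * 2 < len(self.slots):
            # Rebuild without the null values. Previously returned
            # indices are invalidated here; the full data structure
            # avoids this by tracking positions in doubly linked lists.
            self.slots = [x for x in self.slots if x is not None]
            self.free = []

    def sample(self, rng):
        """Uniformly random live element; <= 2 attempts in expectation."""
        while True:
            x = self.slots[rng.randrange(len(self.slots))]
            if x is not None:
                return x

rng = random.Random(0)
lst = RandomList()
idx = [lst.insert(v) for v in "abcdef"]
lst.delete(idx[0])  # remove 'a'
samples = {lst.sample(rng) for _ in range(200)}
```

Since the array is kept at least half full, the rejection loop in `sample` terminates quickly, and the rebuild cost amortizes against the deletions that triggered it.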
Data structure
The idea is then as follows. We have such list data structures, one for each pair of type and degree. We also have a weight associated to each list, which is the sum of the weights of all nodes in the list, according to the modified dynamic evolution step. When the current type function is , we represent each node as follows: The corresponding list data structure contains copies of (and keeps track of the locations in a doubly linked list). Each node also keeps track of , using another list data structure. It is easy to construct the initial data structure in time (note: ).
Updating the data structure
We can then update the data structure when the current type function changes to (all updates have that form for some and ), by removing from the list data structure containing it and adding it to . Note that if we removed copies of from we add to . Also, we update each neighbor of (by deleting or adding a copy to , depending on whether ). We also keep the weight corresponding to each list updated and for all nodes . This takes at most data structure insertions or deletions, and thus amortized time in total.
Sampling an effective step
Let be the current type function. First, pick a random list among the lists, proportional to their weight. Then pick a random node from . Then pick a node at random in . This takes time in expectation.
Remark 3.
Observe that picking a random list among the lists, proportionally to their weight, takes linear time to do naively: e.g., consider some ordering of the lists and let each list's prefix weight be the total weight of that list and all earlier ones (we keep this updated so it can be found in constant time). Pick a random number between 1 and the total weight of all the lists (assumed to be doable in constant time). Iterate over the lists in order and, when looking at a list, check whether the random number is at most its prefix weight. If so, pick that list; otherwise continue to the next list. By making a binary, balanced tree over the lists (similar to what is used for the more restrictive model, see the Appendix), the time for this step can be brought down further; however, the naive approach suffices for our application, because updates already require comparable time.
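The naive weighted selection described in the remark can be sketched as follows. This is our illustration: it recomputes the prefix sums on the fly rather than caching them as the remark suggests, which does not change the linear-scan idea.

```python
import random

def pick_list(weights, rng):
    """Pick an index proportionally to weights by a linear scan:
    draw a number in (0, total weight] and walk the prefix sums.
    """
    total = sum(weights)
    x = rng.random() * total
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if x <= acc:
            return i
    return len(weights) - 1  # guard against floating-point round-off

# Sanity check: frequencies should roughly follow the weights 1:2:3.
rng = random.Random(0)
counts = [0, 0, 0]
for _ in range(3000):
    counts[pick_list([1.0, 2.0, 3.0], rng)] += 1
```

A balanced binary tree over the lists, with each internal node storing the total weight of its subtree, replaces the scan by a root-to-leaf descent, which is the faster variant mentioned in the remark.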
This leads to the following theorem.
Theorem 9.
An effective step can be sampled in (amortized and expected) time after preprocessing, if a uniformly random integer between and , for any , can be found in constant time.
6 Algorithms for approximating fixation probability
We present the algorithms for solving the fixation, extinction, and generalized fixation problems.
The Metasimulation algorithm
Similar to [4], the algorithms instantiate the following meta-simulation algorithm, which takes as input a distribution over initial type functions, a type, and two natural numbers (the number of simulations and the maximum number of steps per simulation):
(Algorithm: MetaSimulation)
Basic principle of simulation
Note that the meta-simulation algorithm uses time (by Theorem 9). In essence, the algorithm runs simulations of the process and terminates with “Simulation took too long” iff some simulation took over steps. Hence, whenever the algorithm returns a number it is the mean of binary random variables, each equal to 1 with probability , where is the event that fixates and is the event that fixation happens before effective steps, when the initial type function is picked according to (we note that the conditional part was overlooked in [4]; moreover, instead of all steps we consider only effective steps). By ensuring that is high enough and that the approximation is tight enough (basically, that is high enough), we can use as an approximation of , as shown in the following lemma.
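The behavior just described can be sketched as follows. This is our reconstruction, not the paper's pseudocode; the callback names and the toy random-walk instance at the end are purely illustrative.

```python
import random

def meta_simulation(sample_initial, simulate_step, target_fixated,
                    other_fixated, u, z, rng):
    """Run u simulations of at most z (effective) steps each and return
    the fraction in which the target type fixated, or None to signal
    "Simulation took too long" (some simulation exceeded z steps).
    """
    hits = 0
    for _ in range(u):
        state = sample_initial(rng)
        for _ in range(z):
            if target_fixated(state):
                hits += 1
                break
            if other_fixated(state):
                break
            simulate_step(state, rng)
        else:
            return None  # some simulation exceeded z steps
    return hits / u

# Toy instance: a biased +1/-1 random walk on {0,...,5}, absorbing at
# the endpoints, standing in for the number of mutants.
est = meta_simulation(
    sample_initial=lambda rng: [1],
    simulate_step=lambda s, rng: s.__setitem__(0, s[0] + (1 if rng.random() < 0.7 else -1)),
    target_fixated=lambda s: s[0] == 5,
    other_fixated=lambda s: s[0] == 0,
    u=500, z=10_000, rng=random.Random(0),
)
```

The returned mean estimates the fixation probability conditioned on fixation within z steps; the paper's analysis picks z large enough that this conditioning changes the answer only negligibly.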
Lemma 10.
Let be given. Let be a pair of events and a number, such that and that . Then
The value of :
Consider some fixed value of . The value of is basically picked so high that (so that we can apply Lemma 10) and such that, after taking a union bound over the trials, we have less than some constant probability of stopping. The right value of is thus sensitive to , but in all cases at most , because of Theorem 5. More precisely, we let
Algorithm
We consider the fixation problem for . Algorithm is as follows:

Return MetaSimulation(,,,), for .
Algorithm
We consider the extinction problem for . Algorithm is as follows:

Let be the uniform distribution over the type functions where exactly one node is .

Return MetaSimulation(,,,), for .
Algorithm
We consider the problem of (additively) approximating the fixation probability given some type function and type . Algorithm is as follows:

Let be the distribution that assigns to .

Return MetaSimulation(,,,), for .
Theorem 11.
Let be a connected undirected graph of nodes with maximum degree , divided into two types of nodes , such that . Given , let and . Consider the running times:

Fixation (resp. Extinction) problem for Algorithm (resp. ) is an FPRAS algorithm, with running time (resp. ), that with probability at least outputs a number in , where is the solution of the fixation (resp. extinction) problem for .

Generalized fixation problem Given an initial type function and a type , there is an (additive approximation) algorithm, , with running time , that with probability at least outputs a number in , where is the solution of the generalized fixation problem given and .
Remark 4.
There exists no known FPRAS for the generalized fixation problem, and since the fixation probability might be exponentially small, such an algorithm might not exist. (It is exponentially small for fixation of , even in the Moran process (that is, when the graph is complete), when initially there is 1 node of type .)
Alternative algorithm for extinction for
We also present an alternative algorithm for extinction for when is large. This is completely different from the techniques of [4]. The alternative algorithm is based on the following result, where we show for large that is a good approximation of the extinction probability for , and thus the algorithm is polynomial even for large given in binary.
Theorem 12.
Consider an undirected graph and consider the extinction problem for on . If , then , where is the solution of the extinction problem for .
Proof sketch
We present a proof sketch, and details are in the Appendix. We have two cases:

By [4, Lemma 4], we have . Thus, , as desired, since .

On the other hand, the probability of fixation for in the first effective step is at most (we show this in Lemma 20 in the Appendix). The probability that fixation happens for after the first effective step is at most because of the following reason: By Lemma 3, the probability of increasing the number of members of is at most
and otherwise it decrements. We then model the problem as a Markov chain
with state space corresponding to the number of members of , using as the probability to decrease the current state. The starting state is state 2 (after the first effective step, if fixation did not happen, then the number of members of is 2). Using that , we see that the probability of absorption in state of from state 2 is less than . Hence, is at most and is thus less than .
∎
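The absorption bound used in the second case is the classical gambler's-ruin computation. The sketch below (ours, with an illustrative down-step probability) evaluates the closed-form absorption probability for a birth-death chain with constant step probabilities.

```python
def hit_top_before_zero(i, n, p_down):
    """Probability that a birth-death chain on {0,...,n}, moving down
    with probability p_down and up with probability 1 - p_down at every
    interior state, reaches n before 0 when started at state i
    (the classical gambler's-ruin formula).
    """
    q = 1.0 - p_down
    if p_down == q:  # unbiased walk: linear interpolation
        return i / n
    rho = p_down / q
    return (1.0 - rho ** i) / (1.0 - rho ** n)

# As in the proof sketch: starting from state 2 with a strong downward
# drift, absorption at the top state n is very unlikely.
prob = hit_top_before_zero(2, 50, p_down=0.9)
```

With a constant downward bias the absorption probability at the top decays geometrically in n, which is why fixation after the first effective step contributes so little in the case analysis above.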
Remark 5.
While Theorem 12 is for undirected graphs, a variant (with larger and which requires the computation of the pr. that goes extinct in the first step) can be established even for directed graphs, see the Appendix.
Concluding remarks
In this work we present faster Monte-Carlo algorithms for approximating the fixation probability for undirected graphs (see Remark 6 in the Appendix for a detailed comparison). An interesting open question is whether the fixation probability can be approximated in polynomial time for directed graphs.
References
 [1] B. Adlam, K. Chatterjee, and M. Nowak. Amplifiers of selection. In Proc. R. Soc. A, volume 471, page 20150114. The Royal Society, 2015.
 [2] F. Débarre, C. Hauert, and M. Doebeli. Social evolution in structured populations. Nature Communications, 2014.
 [3] J. Díaz, L. A. Goldberg, G. B. Mertzios, D. Richerby, M. Serna, and P. G. Spirakis. On the fixation probability of superstars. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Science, 469(2156), 2013.
 [4] J. Díaz, L. A. Goldberg, G. B. Mertzios, D. Richerby, M. Serna, and P. G. Spirakis. Approximating Fixation Probabilities in the Generalized Moran Process. Algorithmica, 69(1):78–91, 2014 (Conference version SODA 2012).
 [5] W. Ewens. Mathematical Population Genetics 1: I. Theoretical Introduction. Interdisciplinary Applied Mathematics. Springer, 2004.
 [6] M. Frean, P. B. Rainey, and A. Traulsen. The effect of population structure on the rate of evolution. Proceedings of the Royal Society B: Biological Sciences, 280(1762), 2013.
 [7] A. Galanis, A. Göbel, L. A. Goldberg, J. Lapinskas, and D. Richerby. Amplifiers for the Moran Process. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016), volume 55, pages 62:1–62:13, 2016.
 [8] A. L. Hill, D. G. Rand, M. A. Nowak, and N. A. Christakis. Emotions as infectious diseases in a large social network: the SISa model. Proceedings of the Royal Society B: Biological Sciences, 277:3827–3835, 2010.
 [9] R. IbsenJensen, K. Chatterjee, and M. A. Nowak. Computational complexity of ecological and evolutionary spatial dynamics. Proceedings of the National Academy of Sciences, 112(51):15636–15641, 2015.
 [10] S. Karlin and H. M. Taylor. A First Course in Stochastic Processes, Second Edition. Academic Press, 2 edition, Apr. 1975.
 [11] E. Lieberman, C. Hauert, and M. A. Nowak. Evolutionary dynamics on graphs. Nature, 433(7023):312–316, Jan. 2005.
 [12] P. A. P. Moran. The Statistical Processes of Evolutionary Theory. Oxford University Press, Oxford, 1962.
 [13] M. A. Nowak. Evolutionary Dynamics: Exploring the Equations of Life. Harvard University Press, 2006.
 [14] M. A. Nowak and R. M. May. Evolutionary games and spatial chaos. Nature, 359:826, 1992.
 [15] H. Ohtsuki, C. Hauert, E. Lieberman, and M. A. Nowak. A simple rule for the evolution of cooperation on graphs and social networks. Nature, 441:502–505, 2006.
 [16] M. Serna, D. Richerby, L. A. Goldberg, and J. Díaz. Absorption time of the Moran process. Random Structures & Algorithms, 48(1):137–159, 2016.

 [17] P. Shakarian, P. Roos, and A. Johnson. A review of evolutionary graph theory with applications to game theory. Biosystems, 107(2):66–80, 2012.
Appendix
7 Details of Section 3
Lemma 1.
Fix any type function such that neither type has fixated. Let (resp., ) be the next type function under dynamic evolution step (resp., modified dynamic evolution step). Then, and for all type functions we have: .
Proof.
Let be as in the lemma statement. First note that since is connected and no type has fixated in . Therefore, there must be some edge , such that . There is a positive probability that and is selected by the unmodified evolution step, in which case . Thus .
We consider the node selection in the two cases:

Let be the node picked in the first part of the unmodified dynamic evolution step and be the node picked in the second part.

Similarly, let be the node picked in the first part of the modified dynamic evolution step and be the node picked in the second part.
Observe that . We will argue that for all , we have that
This clearly implies the lemma statement, since and , by the last part of the unmodified and modified dynamic evolution step. If , then , because which contradicts that . Also, if , then , because (note because was picked from ), again contradicting that .
We therefore only need to consider that and . The probability to pick and then pick in an unmodified dynamic evolution step is , for any and especially the ones for which . Hence, . Thus, also, . We also have that
where the second equality comes from that is the set of nodes such that and . Thus,
The probability to pick and then pick in a modified dynamic evolution step, for some is
Hence,
This completes the proof of the lemma. ∎
Next, the proof of Theorem 5.
Theorem 5.
Let and be the two types, such that . Let be the maximum degree. Let be the number of nodes of type in the initial type function. The following assertions hold:

Bounds dependent on

Expected steps The process requires at most effective steps in expectation, before fixation is reached.

Probability For any integer , after effective steps, the probability that the process has not fixated is at most , irrespective of the initial type function.


Bounds independent on

Expected steps The process requires at most effective steps in expectation, before fixation is reached.

Probability For any integer , after effective steps, the probability that the process has not fixated is at most , irrespective of the initial type function.


Bounds for

Expected steps The process requires at most effective steps in expectation, before fixation is reached.

Probability For any integer , after effective steps, the probability that the process has not fixated is at most , irrespective of the initial type function.

Proof.
Observe that if , then fixation has been reached in 0 (effective) steps. Thus assume . We first argue about the first item of each case (i.e., about expected steps) and then about the second item (i.e., probability) of each case.
Expected steps of first item
In every step, for , the potential increases by , unless fixation has been achieved, in expectation by Lemma 2. Let