A probabilistic interpretation of replicator-mutator dynamics

12/21/2017 · Ömer Deniz Akyıldız, et al. · Universidad Carlos III de Madrid

In this note, we investigate the relationship between probabilistic updating mechanisms and discrete-time replicator-mutator dynamics. We consider the recently shown connection between Bayesian updating and replicator dynamics and extend it to the replicator-mutator dynamics by considering prediction and filtering recursions in hidden Markov models (HMMs). We show that it is possible to understand the evolution of the frequency vector of a population under the replicator-mutator equation as a posterior predictive inference procedure in an HMM. This view enables us to derive a natural dual version of the replicator-mutator equation, which corresponds to updating the filtering distribution. Finally, we conclude by discussing the implications of this interpretation and some comments on recent discussions about evolution and learning.







1 Introduction

Recently, it has been shown that there is a connection between Bayesian updating and replicator dynamics [1, 2]. If the frequency distribution of a population at a given time is interpreted as a probability vector, then evolutionary dynamics models correspond to updating these probabilities. This, in turn, can be seen as updating probability distributions over time via Bayesian updating, as shown in [1]. In this note, we first review the proposed relationship in detail to set up the context. We then briefly extend the previous results to the replicator-mutator case, where there is mutation dynamics in addition to replication. Next, we give a filtering recursion which we call the mutator-replicator equation. Finally, we discuss some implications of seeing evolutionary processes as probabilistic updating mechanisms.

Notation. Throughout, $x_i^t$ will denote the frequency of type $i$ at time $t$ and $x^t$ will denote the whole distribution (a vector, in this case) at time $t$. The fitness function for type $i$ is denoted with $f_i$. The average fitness will be given by $\bar{f}(x^t) = \sum_{i=1}^K x_i^t f_i$. We will denote a generic probability measure over a discrete state space with $\pi_t$, to denote the distribution of the population at time $t$, and $g_t$ will be the likelihood. We assume there are $K$ competing types. We define a state-space $\mathsf{X} = \{1, \ldots, K\}$. To highlight the correspondence, we note that $\pi_t(i) = x_i^t$ and $g_t(i) \propto f_i$ (the argument will depend on the context). Relevant notation will be introduced further when needed.

2 Evolutionary dynamics as Bayesian inference

2.1 Replicator dynamics as Bayesian updating

Consider the discrete-time replicator dynamics, which is defined as [3],

$$x_i^{t+1} = x_i^t \frac{f_i}{\bar{f}(x^t)}, \qquad (1)$$

where $x_i^t$ is the frequency of the population of the $i$th type, $f_i$ is the fitness function of the $i$th type, and $\bar{f}(x^t)$ is the mean fitness given by,

$$\bar{f}(x^t) = \sum_{j=1}^K x_j^t f_j.$$

Next, let us define a random variable $X_t$ defined on $\mathsf{X}$, where $\mathsf{X} = \{1, \ldots, K\}$. This random variable models the probability that a single individual belongs to a certain type, as the frequency can be interpreted this way [4]. Therefore, we write $\pi_t(i) = \mathbb{P}(X_t = i)$.¹

¹Note that, with a slight abuse of notation, $\pi_t(i)$ and $x_i^t$ denote the same quantity for each $i$.

Assume that, at time $t$, we have $\pi_t$ describing the frequencies of species. Given the likelihood (or the fitness potential) $g_t$, we can update this probability distribution via Bayes' rule as [1],

$$\pi_{t+1}(i) = \frac{g_t(i)\, \pi_t(i)}{\sum_{j=1}^K g_t(j)\, \pi_t(j)}. \qquad (2)$$

It is possible to see that (1) and (2) describe the exact same relationship [1, 2]. For this to work, we need to put $g_t(i) = f_i$, i.e., define a likelihood that depends on the whole distribution of the previous time. This interpretation of the replicator dynamics as Bayesian updating was pointed out by [1, 2]. The replicator equation is more general, since the likelihood in the Bayesian context does not depend on the whole distribution [1].
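As a quick numerical illustration of this correspondence (with hypothetical frequencies and fitness values), one step of the replicator dynamics (1) can be checked against one Bayesian update (2) with the likelihood set to the fitness values:

```python
import numpy as np

# Hypothetical setup: K = 3 types with frequencies x^t and fitnesses f_i.
x = np.array([0.5, 0.3, 0.2])   # frequency vector x^t (sums to 1)
f = np.array([1.0, 2.0, 0.5])   # fitness values f_i

# Replicator step (1): x_i^{t+1} = x_i^t f_i / mean fitness
x_rep = x * f / np.dot(x, f)

# Bayes update (2): posterior(i) ∝ prior(i) * likelihood(i), with g(i) = f_i
posterior = x * f
posterior /= posterior.sum()

print(np.allclose(x_rep, posterior))  # True: the two updates coincide
```

The normalizing constant of the Bayesian update is exactly the mean fitness, which is the observation that makes the two recursions identical.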

2.2 Replicator-mutator dynamics as Bayesian inference

If the replicator dynamics is Bayesian updating, it is natural to expect that there must be a dynamic Bayesian version for the replicator-mutator dynamics. As a straightforward dynamic extension of Bayesian updating, it is tempting to consider the prediction and filtering recursions (the latter is known as the optimal Bayesian filter [5], see [6] for an accessible treatment) to see what they mean in the evolutionary dynamics context.

To begin with, we recall that the replicator-mutator equation, which has received significant attention and is widely used, can be described as [7, 8],

$$x_i^{t+1} = \frac{\sum_{j=1}^K x_j^t f_j Q_{ji}}{\bar{f}(x^t)}, \qquad (3)$$

where $Q = (Q_{ji})$ is a transition matrix, i.e., $\sum_{i=1}^K Q_{ji} = 1$ for every $j$. Now consider a Markov chain $(X_t)_{t \geq 0}$ with transition matrix $Q$, i.e.,

$$\mathbb{P}(X_{t+1} = i \mid X_t = j) = Q_{ji}.$$

Our aim is to come up with a probabilistic interpretation of (3) as an update of the conditional distributions of $X_t$ given a sequence of "observations".

We derive the replicator-mutator equation as a probabilistic update by putting $g_t(i) = f_i$. Then, in the probabilistic setup, we get,

$$\pi_{t+1}(i) = \frac{\sum_{j=1}^K Q_{ji}\, g_t(j)\, \pi_t(j)}{\sum_{j=1}^K g_t(j)\, \pi_t(j)}. \qquad (4)$$

We can immediately recognize this recursion as the prediction recursion of a hidden Markov model (HMM)² [9]. That is, $\pi_{t+1}$ is the predictive distribution at time $t+1$ given all the data up to time $t$. The recursion (4) is a map acting on the space of probability distributions, and it maps the predictive distribution of time $t$ to the predictive distribution of time $t+1$. So the replicator-mutator dynamics can be thought of as employing Bayesian prediction in an HMM with a likelihood that depends on the whole probability distribution of the previous time. Observations in this setting are implicit in the fitness functions, as the values of the fitness functions can be reinterpreted as evaluations of the likelihood with an implicit data sequence. The posterior predictive distribution over hidden states exactly coincides with the frequencies of the population given by the replicator-mutator equation.

²This might be easier to see from the recursion over a continuous space and with an explicit observation sequence $y_{1:t}$. Consider the likelihood $g_t(x_t) = p(y_t \mid x_t)$ and the transition density $q(x_{t+1} \mid x_t)$. Then, the following holds,

$$p(x_{t+1} \mid y_{1:t}) = \frac{\int q(x_{t+1} \mid x_t)\, g_t(x_t)\, p(x_t \mid y_{1:t-1})\, \mathrm{d}x_t}{\int g_t(x_t)\, p(x_t \mid y_{1:t-1})\, \mathrm{d}x_t},$$

as the density in the numerator can be written as $p(x_{t+1}, y_t \mid y_{1:t-1})$ and the density in the denominator can be written as $p(y_t \mid y_{1:t-1})$. Note that, for this to hold, $g_t$ need not be a probability density. See the concluding sections for the meaning of conditioning on $y_{1:t}$ in this context.
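As a numerical check of this identification (with a hypothetical fitness vector and mutation matrix), one replicator-mutator step of (3) can be computed both directly and as a Bayes-then-predict step of an HMM:

```python
import numpy as np

# Hypothetical setup: K = 3 types, fitnesses f, mutation matrix Q (rows sum to 1).
x = np.array([0.5, 0.3, 0.2])           # frequencies x^t
f = np.array([1.0, 2.0, 0.5])           # fitness values (likelihood evaluations)
Q = np.array([[0.9, 0.05, 0.05],        # Q[j, i] = probability that type j
              [0.1, 0.8,  0.1 ],        # mutates into type i
              [0.2, 0.2,  0.6 ]])

# Replicator-mutator step (3): x_i^{t+1} = sum_j x_j f_j Q_{ji} / mean fitness
x_next = (x * f) @ Q / np.dot(x, f)

# HMM prediction step (4): condition on the implicit observation
# (multiply by the likelihood and normalize), then apply the transition matrix
filtered = x * f
filtered /= filtered.sum()
predicted = filtered @ Q

print(np.allclose(x_next, predicted))   # True: the same map on the simplex
```

Note that `filtered @ Q` computes $\sum_j Q_{ji}\,\pi(j)$ for each $i$, so the two code paths implement the same recursion in a different factorization.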

From replicator-mutator to mutator-replicator

Now, as an obvious next step, we can investigate the filtering recursion. To keep the notation similar, let us denote the filtering distribution with $\tilde{\pi}_t$ and the corresponding population vector with $\tilde{x}^t$. Then the filtering recursion can be written as,

$$\tilde{\pi}_{t+1}(i) = \frac{g_{t+1}(i) \sum_{j=1}^K Q_{ji}\, \tilde{\pi}_t(j)}{\sum_{k=1}^K g_{t+1}(k) \sum_{j=1}^K Q_{jk}\, \tilde{\pi}_t(j)}. \qquad (5)$$

In terms of the relevant literature, we can rewrite the filtering recursion as,

$$\tilde{x}_i^{t+1} = \frac{f_i\, \hat{x}_i^{t+1}}{\sum_{k=1}^K f_k\, \hat{x}_k^{t+1}}, \qquad (6)$$

where $\hat{x}^{t+1}$ is defined component-wise with $\hat{x}_i^{t+1} = \sum_{j=1}^K Q_{ji}\, \tilde{x}_j^t$ (which is actually the predictive distribution in the Bayesian sense, as the notation suggests – see above). This equation is different from the replicator-mutator equation. We refer to it as the mutator-replicator equation, and it has a natural interpretation related to the replicator-mutator equation. We have shown that Eq. (3) can be interpreted as a prediction recursion (that is, the most recent distribution of the population after the last replication-mutation steps, but before the next replication step). Similarly, Eq. (6) can be interpreted as the distribution of the population after the last mutation-replication steps, but before the next mutation step, hence the name mutator-replicator. These two equations are complementary to each other in a very natural sense, and it would be interesting to see whether the latter recursion could be useful in the study of evolutionary dynamics.
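The dual order of operations is easy to see in code. A minimal sketch of one mutator-replicator step (6), with the same hypothetical fitness vector and mutation matrix as above: mutate first, then reweight by fitness.

```python
import numpy as np

# Hypothetical setup: frequencies, fitnesses, and a mutation matrix.
x = np.array([0.5, 0.3, 0.2])           # filtering-side frequencies x~^t
f = np.array([1.0, 2.0, 0.5])           # fitness values
Q = np.array([[0.9, 0.05, 0.05],
              [0.1, 0.8,  0.1 ],
              [0.2, 0.2,  0.6 ]])

x_hat = x @ Q                           # mutation step: x_hat_i = sum_j Q_{ji} x_j
x_filt = f * x_hat                      # replication step: reweight by fitness
x_filt /= x_filt.sum()                  # normalize back onto the simplex

print(np.isclose(x_filt.sum(), 1.0))    # True: a valid frequency vector
```

Compared with the replicator-mutator step, only the order of the mutation and reweighting operations is swapped; the intermediate vector `x_hat` is the predictive distribution of the text.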

3 Discussion

The interaction between the two fields can be productive if this view is taken to its natural conclusion. In the evolutionary dynamics context, there are several versions of the replicator-mutator equation, forming a rich family of tools aimed at modeling different mechanisms. One can try to make sense of some of those from a probabilistic modeling perspective to uncover the underlying probabilistic structure. Conversely, tools of computational Bayesian statistics can be used to analyze these models, by taking an inference view on the problem or by transferring already well-known theoretical results to understand the dynamics of the models in evolutionary dynamics from a different perspective. As an example, if we compute the product of the normalizing constants in (4), i.e., $\prod_{s=1}^{t} \sum_{j=1}^K g_s(j)\, \pi_s(j)$, this coincides with the marginal likelihood in Bayesian computation, which is also called the model evidence. This is a quantity which enables us to rank different models. It could be fruitful to think about its applications in the evolutionary dynamics context, e.g., on whether it can be used to test different mutation mechanisms against each other given a specific fitness landscape.
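This model-comparison idea can be sketched numerically. The following is a hypothetical example (all fitness values and mutation matrices are invented for illustration): accumulating the per-step normalizers of the prediction recursion gives a log-evidence that can rank two candidate mutation matrices on the same fitness landscape.

```python
import numpy as np

# Hypothetical fitness landscape and two candidate mutation mechanisms.
f = np.array([1.0, 2.0, 0.5])
Q_a = np.array([[0.9, 0.05, 0.05],      # mostly faithful replication
                [0.1, 0.8,  0.1 ],
                [0.2, 0.2,  0.6 ]])
Q_b = np.full((3, 3), 1.0 / 3.0)        # uniform (maximally noisy) mutation

def log_evidence(Q, f, steps=20):
    """Run the replicator-mutator (prediction) recursion and accumulate
    the log of each step's normalizer, sum_j f_j pi_t(j) (the mean fitness)."""
    pi = np.full(len(f), 1.0 / len(f))  # uniform initial frequencies
    log_z = 0.0
    for _ in range(steps):
        z = np.dot(pi, f)               # mean fitness = per-step normalizer
        log_z += np.log(z)
        pi = (pi * f) @ Q / z           # replicator-mutator step (3)
    return log_z

print(log_evidence(Q_a, f), log_evidence(Q_b, f))
```

The mutation matrix with the higher accumulated log-normalizer is the one under which the implicit observation sequence is more probable, which is how evidence-based model comparison would operate in this setting.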

We remark that this short note only proposes that some models of evolution can be understood as dynamic Bayesian updating mechanisms. This is also related to recent discussions about how evolution can learn, see e.g. [10]. While it is true that, under this particular replicator-mutator (or mutator-replicator) model, the distribution of the population is conditioned on all the previous evaluations (which can be regarded as implicit observations from the environment), it does not follow immediately that evolution can utilize past information entirely, as a learning algorithm would. Given the dynamic view here, one can argue that evolution works more like a tracking algorithm rather than a learning one.³ For many practical dynamic models, the posterior probability distribution (the filter) tends to forget the past exponentially fast under mild conditions [11], which means that only the most recent environment evaluations, the recent state of the system, and the recent mutations might be relevant, rather than the entire past. Although there is an abstract possibility that the mutation mechanisms can evolve themselves, it still does not imply that evolution can learn beyond adaptation to the current structure of the environment. Mathematically, if we can define a mutation matrix $Q_\theta$ with a parameter vector $\theta$, which parameterizes the mutation mechanism, then by generating variants of $\theta$ and a nested structure (running slightly different mutation mechanisms for each subgroup), the parameter vector can be adapted to the environment (e.g., see [12] for such an algorithm for continuous state-space models within the Monte Carlo framework). From a statistical perspective, this would correspond to adapting the parameters of the transition model of an HMM.⁴

³By learning, we mean parameter estimation (or fitting) in a statistical model (either maximum-likelihood or Bayesian).

⁴There is some evidence that such a mechanism exists [13]. However, even if there is such a mechanism, it can still be regarded as some form of tracking (as real-time adaptation) rather than learning as we know it.

4 Conclusions

We believe that a probabilistic view of the replicator-mutator equation can help unify well-known ideas from probabilistic modeling and evolutionary dynamics, which in turn can help merge the well-known computational tools of these two separate lines of research. The new tools emerging from this relationship can lead to a number of fruitful ideas and help the development of more elaborate models of evolutionary phenomena.


  • [1] Cosma Rohilla Shalizi. Dynamics of Bayesian updating with dependent data and misspecified models. Electronic Journal of Statistics, 3:1039–1074, 2009.
  • [2] Marc Harper. The replicator equation as an inference dynamic. arXiv:0911.1763, 2009.
  • [3] Josef Hofbauer and Karl Sigmund. Evolutionary games and population dynamics. Cambridge University Press, 1998.
  • [4] Andrey Nikolaevich Kolmogorov. Foundations of the theory of probability. 1950.
  • [5] Brian D.O. Anderson and John B. Moore. Optimal filtering. Englewood Cliffs, N.J. Prentice Hall, 1979.
  • [6] Simo Särkkä. Bayesian filtering and smoothing. Cambridge University Press, 2013.
  • [7] Karen M Page and Martin A Nowak. Unifying evolutionary dynamics. Journal of theoretical biology, 219(1):93–98, 2002.
  • [8] Marc Harper and Dashiell EA Fryer. Stability of evolutionary dynamics on time scales. arXiv:1210.5539, 2012.
  • [9] Ramon van Handel. Hidden Markov models. Unpublished notes, 2008.
  • [10] Richard A Watson and Eörs Szathmáry. How can evolution learn? Trends in ecology & evolution, 31(2):147–157, 2016.
  • [11] Olivier Cappé, Eric Moulines, and Tobias Ryden. Inference in Hidden Markov Models. Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2005.
  • [12] Dan Crisan and Joaquin Miguez. Nested particle filters for online parameter estimation in discrete-time state-space Markov models. Bernoulli (to appear).
  • [13] Ryan M Hull, Cristina Cruz, Carmen V Jack, and Jonathan Houseley. Environmental change drives accelerated adaptation through stimulated copy number variation. PLoS biology, 15(6):e2001333, 2017.