# The Moran Genealogy Process

We give a novel representation of the Moran Genealogy Process, a continuous-time Markov process on the space of size-n genealogies with the demography of the classical Moran process. We derive the generator and unique stationary distribution of the process and establish its uniform ergodicity. In particular, we show that any initial distribution converges exponentially to the probability measure identical to that of the Kingman coalescent. We go on to show that one-time sampling projects this stationary distribution onto a smaller-size version of itself. Next, we extend the Moran genealogy process to include sampling through time. This allows us to define the Sampled Moran Genealogy Process, another Markov process on the space of genealogies. We derive exact conditional and unconditional probability distributions for this process under the assumption of stationarity, and an expression for the likelihood of any sequence of genealogies it generates. This leads to some interesting observations pertinent to existing phylodynamic methods in the literature.


## 1. Introduction

The Moran process (Moran, 1958) plays an important role in the theory of population genetics and is intimately related to Kingman’s (1982a; 1982b; 1982c) coalescent process, itself a foundational component of modern population genetics, phylogenetics, and phylodynamics (Hudson, 1991; Donnelly & Tavare, 1995; Stephens & Donnelly, 2000; Rosenberg & Nordborg, 2002; Ewens, 2004; Volz et al., 2009; Rasmussen et al., 2011). Kingman formulated the coalescent as a backward-in-time Markov process whereby genealogical lineages randomly coalesce with one another. He made explicit connections with the classical Wright-Fisher and Moran models, and connections exist with a far broader collection of population genetics models (Ewens, 2004; Etheridge, 2011).

The literature on coalescent theory is often focused on models whereby frequencies of alleles change within a population according to some idealized stochastic process (Hein, 2005; Durrett, 2008; Wakeley, 2008). Here, we refocus the discussion onto the evolution of the genealogy that, at any given time, describes the full set of relationships among the members of a population alive at that time. That is, we consider stochastic processes that take values in the space of labelled genealogies with real-valued branch lengths. In particular, in § 2, we give a formal definition of the Moran Genealogy Process and, in our first set of results, show by direct forward-in-time calculation, and without any need for large population-size or sparse sampling assumptions, that this process is uniformly ergodic. In § 3, we show that its limiting stationary distribution is identical to that of the Kingman coalescent. Though this first set of results is unsurprising in itself, the proofs are novel and of some interest, since they are strictly forward-looking and therefore shed light on the genealogical process from a new direction.

Our second major set of results concerns the effect of sequential, asynchronous sampling of the Moran Genealogy Process. In § 5, we formally define the Sampled Moran Genealogy Process and derive novel expressions for its probability distribution. In particular, we derive an expression for the marginal distribution of the genealogy that relates a sequence of samples of any length. In doing so, we allow for both coalescence events and events wherein one sample is a direct descendant of an earlier sample. Our results, again, are exact and hold for all population sizes and sampling rates. Curiously, comparison of our expressions with comparable ones from the phylodynamics literature reveals interesting discrepancies, with implications for phylodynamic inference methods currently in use. Based on an examination of the proofs, we anticipate that these results will generalize to a broad class of birth-death processes.

## 2. The Moran Genealogy Process

The name of P. A. P. Moran has been associated with a number of related stochastic processes arising in population genetics. These processes share the common features that they involve a finite population of asexual individuals who reproduce and die stochastically in continuous time. The population size is kept deterministically constant by requiring that each reproduction event is coincident with a death event. Such models are closely related to the coalescent of Kingman (1982a, b, c), which plays a prominent role in population genetics and phylogenetics. We explore these connections from a new angle by defining the Moran Genealogy Process (MGP), a stochastic process on the space of genealogies. In common with other models bearing Moran’s name, we make the assumptions that (a) the process is a continuous-time Markov process, with a constant event rate, and (b) at each event, one asexual individual gives birth and another dies, so that the population size remains constant. At any particular time, the state of the MGP is a genealogy, i.e., a tree with branch lengths, that relates the living members of the population via their ancestry. This consists of links between living individuals and those past individuals who are the most recent common ancestors of sets of currently living individuals. As living individuals reproduce and die, the genealogy grows at its leading edge; it dissolves at its trailing edge, as ancestors are “forgotten”. Fig. 1 illustrates; see also the animations in the online appendix.

As we shall see, the genealogies of the MGP have two aspects, one discrete, the other continuous. The discrete aspect, encoding the topological evolution of the genealogies, evolves as a jump process: at each event, a new branch appears at the leading edge and an internal node is dropped. The continuous aspect, tracking the quantitative dynamics of branch lengths, evolves as the latter grow continuously at the leading edge of the tree and jump at event times as internal nodes are dropped. We can study the discrete process without reference to the continuous one. Accordingly, we will first define a Markov chain that represents the discrete aspect. We will then examine the full MGP as an extension of the discrete chain.

Because the MGP is Markov, the waiting times between birth-death events are exponentially distributed and, as mentioned, the event rate is constant. For convenience in calculations, we take this rate to be $\binom{n}{2}$; we will later rescale time to obtain more general expressions.

### 2a. The Moran genealogy game

Although genealogies are naturally represented using trees, these representations are not unique, and it can be challenging to reason about their properties. Accordingly, we map the MGP onto a parlor game for which intuition is more readily available. In particular, the Moran genealogy process in a population of size $n$ is equivalent to the following parlor game for $n$ players.

#### Equipment

We have $n$ black balls, numbered $1,\dots,n$, and $n$ green balls, each one of which is inscribed with the name of one of the $n$ players (we assume the names are unique). Each player receives a slate, the green ball bearing his or her name, and a randomly chosen black ball. We arrange $n$ seats in a row and number them $0$ through $n-1$. We also have a clock which, when started, runs for a random, exponentially distributed, amount of time and then stops. Again, it will be convenient for the time being to assume that the stopping times are exponentially distributed with rate parameter $\binom{n}{2}$.

#### Setup

To begin the game, an arbitrarily chosen player takes the first seat (seat $0$). Upon her slate, she writes “$-\infty$”. The remaining players are then seated sequentially, from left to right, in arbitrary order. As each successive player takes a seat, she exchanges her green ball for a randomly selected black ball held by one of the already-seated players. She then writes a real number upon the slate; the only constraint on her choice is that it be at least as large as the number on the slate of the player to her left yet less than zero. Thus, the player taking seat $j$ encounters the following situation. The $j$ players already seated hold among themselves $j$ green balls and $j$ black balls. The player to be seated therefore has $j$ choices as to the black ball she will exchange for her green ball. Once all $n$ players are seated, the leftmost player holds two green balls and the rightmost, two black balls; the other players may hold balls of either color. Each player’s green ball (other than that of the leftmost player) is held by a player seated to her left.

#### Play

Play proceeds in rounds. Each round begins when the clock is started. When it stops, two black balls are chosen at random (without replacement). The player—call him X—holding the first black ball, stands up. Player X exchanges his second ball for the green ball bearing his name. This will always be held by a player seated to his left. All players to the right of X then shift one seat to the left, leaving the rightmost seat empty. Player X is said to have been “killed”. Next, the player, Y, holding the second randomly-selected black ball, trades it for X’s green ball. Player X now takes the rightmost seat and writes the current time upon his slate. Player Y is said to have “given birth” and player X to have been “reborn”. We sometimes refer to the conjoined birth and death events as a single “Moran event”. Note that, since the player in seat $0$ never holds a black ball, she can never be killed and therefore remains in this seat throughout the game.
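The rules of the game are straightforward to mechanize. The following sketch (our own illustration in Python, not part of the original construction) simulates the ball-passing dynamics directly and checks two structural invariants: the leftmost player never holds a black ball, and after each Moran event the rightmost player holds two black balls. For a population of size 3 there are only three topological states, and the simulation also illustrates the uniform limiting behavior established below.

```python
import random
from collections import Counter

def play_moran_game(n, steps, seed=0):
    """Simulate the topological (ball-passing) part of the Moran genealogy
    game with n players; returns one canonical state per Moran event."""
    rng = random.Random(seed)
    # equipment: player p holds the green ball ('g', p) and black ball ('b', p)
    holdings = {p: {('g', p), ('b', p)} for p in range(n)}
    seats = [0]                              # player 0 takes the leftmost seat
    for p in range(1, n):
        # the newcomer trades her green ball for a black ball held by a
        # randomly chosen already-seated player
        blacks = [(q, b) for q in seats for b in holdings[q] if b[0] == 'b']
        q, b = rng.choice(blacks)
        holdings[q].remove(b); holdings[q].add(('g', p))
        holdings[p].remove(('g', p)); holdings[p].add(b)
        seats.append(p)

    def holder(ball):
        return next(p for p in seats if ball in holdings[p])

    def canonical():
        # ignore player names: identify each green ball by the seat index
        # of the player it names
        seat_of = {p: i for i, p in enumerate(seats)}
        canon = lambda b: b if b[0] == 'b' else ('g', seat_of[b[1]])
        return tuple(frozenset(map(canon, holdings[p])) for p in seats[1:])

    visited = []
    for _ in range(steps):
        u, v = rng.sample([('b', i) for i in range(n)], 2)
        X = holder(u)                       # holder of the first ball is killed
        other = next(b for b in holdings[X] if b != u)
        W = holder(('g', X))                # always seated to X's left
        holdings[W].remove(('g', X)); holdings[W].add(other)
        holdings[X].remove(other); holdings[X].add(('g', X))
        seats.remove(X)                     # players to X's right shift left
        Y = holder(v)                       # holder of the second ball births
        holdings[Y].remove(v); holdings[Y].add(('g', X))
        holdings[X].remove(('g', X)); holdings[X].add(v)
        seats.append(X)                     # X is reborn in the rightmost seat
        # invariants: seat 0 holds no black ball; the rightmost holds two
        assert all(b[0] == 'g' for b in holdings[seats[0]])
        assert all(b[0] == 'b' for b in holdings[seats[-1]])
        visited.append(canonical())
    return visited

# for n = 3 there are three topological states, visited about equally often
counts = Counter(play_moran_game(3, 30000))
assert len(counts) == 3
assert all(abs(c / 30000 - 1 / 3) < 0.05 for c in counts.values())
```

The canonicalization step anticipates the observation below that nothing depends on the players’ names.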

#### Relation to the Moran genealogy process (MGP)

The correspondence between the MGP and the Moran genealogy game is as follows. The seats numbered $1,\dots,n-1$ (that is, all but the leftmost seat) correspond to the time-ordered internal branch points, seat number $1$ being the root, i.e., the most recent common ancestor of all extant individuals. The black balls correspond to individuals in the extant population (i.e., the tips of the genealogical tree): if a player holds a black ball, one individual in the extant population has this player as her most recent ancestor shared with someone else in the extant population. The green balls record the topology of the tree: each player holding a green ball is the immediate parent of the player named on that ball. Fig. 1 illustrates.

The MGP is a continuous-time process on tip-labeled trees with branch lengths and unordered descendants. The topological structure of a tree at time $t$ may be represented by the list $w = (w_1,\dots,w_{n-1})$, where each $w_j$ is the (unordered) pair of balls held by the player in seat $j$. Let $\mathcal{W}_n$ be the finite set of all such states. One can count the number of distinct arrangements to compute the size of $\mathcal{W}_n$. It will become clear that nothing depends on the names of the players or their order. Accordingly, we ignore the identities of the players entirely in the counting. We see that there are $\binom{n}{2}$ choices for the two black balls held by the player in seat $n-1$. The player in seat $n-2$ now has $n-1$ balls to choose from: the remaining $n-2$ black balls plus the green ball with the name of the player in seat $n-1$. Continuing to work backward, for each $j$, the player in seat $j$ has $\binom{j+1}{2}$ choices for a pair of balls. Hence, one has $|\mathcal{W}_n| = \prod_{j=1}^{n-1}\binom{j+1}{2} = \frac{n!\,(n-1)!}{2^{n-1}}$.
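The count just given is easy to confirm computationally. The following sketch (ours, for illustration) multiplies out the per-seat choices and compares with the closed form:

```python
from math import comb, factorial

def n_states(n):
    """Size of the set of topological states: the player in seat j can hold
    any of binom(j+1, 2) pairs of balls, for j = 1, ..., n-1."""
    count = 1
    for j in range(1, n):
        count *= comb(j + 1, 2)
    return count

for n in range(2, 12):
    # closed form: n! (n-1)! / 2^(n-1)
    assert n_states(n) == factorial(n) * factorial(n - 1) // 2 ** (n - 1)
```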

The number written on a player’s slate is the time at which that player was most recently seated. Let $\tau(t) = (\tau_1,\dots,\tau_{n-1})$ be the vector of numbers on the slates at time $t$, ordered left to right (the leftmost slate, which reads $-\infty$, is ignored). Let $s = (s_1,\dots,s_{n-1})$, where $s_j \coloneqq \tau_{j+1}-\tau_j$ for $j < n-1$, and $s_{n-1} \coloneqq t - \tau_{n-1}$. Then the $s_j$ are the durations of the coalescent intervals, i.e., the intervals between successive branch points when the latter are ordered in time. The state space of the MGP is defined to be $\mathcal{X}_n \coloneqq \mathcal{W}_n \times \mathcal{S}_n$, where $\mathcal{S}_n \coloneqq [0,\infty)^{n-1}$, and the MGP itself can be written $X(t) = (W(t), S(t))$, for $t \ge 0$.

### 2b. Limiting distribution of W

With the definitions above, it should be clear that both $W$ and $X = (W,S)$ are Markov processes, though $S$ alone is not. Here, we show that the unique, limiting distribution of the $W$ chain is the uniform distribution on $\mathcal{W}_n$. It is easy to verify that $W$ is irreducible, aperiodic, and recurrent, whence it follows that it has a unique, limiting invariant distribution. The proof that this distribution is uniform closely follows the reasoning of Aldous (1999) and is in most respects identical to that found in Harding (1971) and Gernhard (2008).

The key is to retrace the steps of the last-seated player, counting the number of states that can immediately precede a given state. For example, suppose, as in Fig. 2, player d is in the rightmost seat (seat $n-1$). It is clear that, immediately before sitting down in seat $n-1$, player d received one of the two balls she holds from the player currently holding green ball d (in Fig. 2C, for example, this is player b). There are thus two possibilities at this stage. Next, she might have come from any of the seats $1,\dots,n-1$ (including the rightmost one). If she came from seat $j$, then the players seated to the left of seat $j$ collectively hold the green balls bearing their own names. Therefore, there are $j$ balls (not all of which need be black) held by these players that might have been the one player d exchanged for the ball bearing her name. Thus, there are $2\sum_{j=1}^{n-1} j = n(n-1)$ configurations which might have preceded the current one.

To formalize this reasoning, we first make some definitions. For $v \in \{1,\dots,n\}$, let $\Upsilon_v(w)$ be the configuration resulting from the “killing” of the player who holds ball $v$. For $u \in \{1,\dots,n\}$, let $\Phi_u(w)$ be the configuration resulting from the player holding ball $u$ having “given birth”. Thus, $\Phi_u\Upsilon_v(w)$ is the result of the Moran event following the random choice of the ordered pair of black balls $(v,u)$. Note that if $\Phi_u\Upsilon_v(w) = w'$, then $\{u,v\} = w'_{n-1}$, i.e., $u$ and $v$ are the balls held by the rightmost player immediately after the rearrangement. Let $M(w,w')$ be the $w$ to $w'$ transition probability. Then, supposing $w \ne w'$,

$$M(w,w') \;=\; \sum_{\substack{u,v=1\\ u\neq v}}^{n}\frac{\mathbb{I}_{\Phi_u\Upsilon_v(w)=w'}}{n(n-1)} \;=\; \frac{\mathbb{I}_{\Phi_a\Upsilon_b(w)=w'}+\mathbb{I}_{\Phi_b\Upsilon_a(w)=w'}}{n(n-1)}, \tag{1}$$

where $\mathbb{I}_C$ denotes the indicator function for the condition $C$, and $\{a,b\} = w'_{n-1}$. One easily verifies that $M$ is stochastic, i.e., $\sum_{w'} M(w,w') = 1$ for each $w$. In fact, $M$ is doubly stochastic, i.e., $\sum_{w} M(w,w') = 1$ for each $w'$. To see this, note that, by the reasoning of the last paragraph, for each $w'$, there are exactly $n(n-1)$ pairs consisting of a state $w$ and an ordered pair $(u,v)$ of black balls for which $\Phi_u\Upsilon_v(w) = w'$, and every such pair has $\{u,v\} = w'_{n-1}$. We summarize with the following propositions.

###### Proposition 1.

Let $q_n$ be the uniform probability distribution on $\mathcal{W}_n$. That is, for each $w \in \mathcal{W}_n$,

$$q_n(w) \coloneqq \frac{2^{n-1}}{n!\,(n-1)!}.$$

Then $q_n$ is the unique, limiting, stationary distribution of the $W$ process described above.

###### Proposition 2.

Suppose $w, w' \in \mathcal{W}_n$. For $j = 1,\dots,n-1$, let $M_j(w,w')$ be the probability that, in the move from $w$ to $w'$, the first black ball selected is held by the player in seat $j$. Clearly, $M = \sum_{j=1}^{n-1} M_j$. Moreover,

$$\sum_{w\in\mathcal{W}_n} M_j(w,w') = \frac{j}{\binom{n}{2}} \qquad\text{and}\qquad \sum_{w'\in\mathcal{W}_n} M_j(w,w') = \frac{B_j(w)}{n},$$

where $B_j(w)$ denotes the number of black balls in $w_j$, i.e., the number of black balls held by the player in seat $j$.

###### Proof.

Note that, by the counting argument above, $\sum_{w\in\mathcal{W}_n} M_j(w,w') = j/\binom{n}{2}$. Furthermore, for each $w$, the probability that the player in seat $j$ is killed is just $B_j(w)/n$. ∎

###### Corollary 3.

Under stationarity,

$$\mathbb{E}\!\left[\frac{B_j(w)}{n}\right] = \frac{j}{\binom{n}{2}}.$$

### 2c. Ergodicity

We establish the stability properties of the Moran genealogy process by studying its resolvent. To this end, let $T_k$ be the $k$-th jump time of a unit-rate Poisson process on $[0,\infty)$. The resolvent of $X$ is the discrete-time chain $Y_k \coloneqq X(T_k)$, $k = 1, 2, \dots$, resulting from observing $X$ at the times $T_k$. Let $\mathrm{d}s$ and $\mathrm{d}w$ represent Lebesgue measure on $\mathcal{S}_n$ and counting measure on $\mathcal{W}_n$, respectively. Define the measure $\eta$ on $\mathcal{X}_n$ by

$$\eta(\mathrm{d}w\,\mathrm{d}s) \coloneqq \frac{1}{2^{n-1}}\exp\!\left[-\left(1+\tbinom{n}{2}\right)(s_1+\cdots+s_{n-1})\right]\mathrm{d}w\,\mathrm{d}s. \tag{2}$$

To be clear, Eq. 2 defines $\eta$ as a product measure, where the $w$-component is counting measure and the $s$-component is absolutely continuous with respect to Lebesgue measure.

Now suppose $x \in \mathcal{X}_n$ is an arbitrary state and $E$ is a measurable subset of $\mathcal{X}_n$. Note that for any $(w,s) \in E$, there is a sample path that leads directly from $x$ to $(w,s)$ in precisely $s_1+\cdots+s_{n-1}$ units of time and with exactly $n-1$ events transpiring at intervals $s_1,\dots,s_{n-1}$. At each event, there is at least one choice of a pair of black balls that can be made. Hence the probability associated with any of these paths is at least

$$\frac{1}{2^{n-1}}\exp\!\left[-\tbinom{n}{2}\sum_{m=1}^{n-1}s_m\right]\mathrm{d}s.$$

The probability density for the observation interval $T_k - T_{k-1}$ to equal $s_1+\cdots+s_{n-1}$ is $e^{-(s_1+\cdots+s_{n-1})}$. Summing over all $(w,s)$ in $E$ gives

$$\mathbb{P}\!\left[Y_k \in E \mid Y_{k-1} = x\right] \ge \eta(E), \tag{3}$$

independent of $x$ and $k$. Though we do not use it, in fact the inequality is strict since, although the constructed paths are almost surely those which reach $E$ in minimal time, there are many others that arrive by more circuitous routes. Since Eq. 3 holds for all $x$, the full state space $\mathcal{X}_n$ is said to be a petite set. It follows from Theorem 16.2.2 of Meyn & Tweedie (2009) that $Y$ is uniformly ergodic and therefore, a fortiori, that $Y$ possesses a unique stationary probability distribution, $\pi_n$. Since the invariant measures of $X$ and $Y$ coincide, it follows that $X$ has the same property. We determine the form of this invariant measure in § 3.

For the MGP $X$, we define the probability transition function in the usual fashion (e.g., Feller, 1957; Meyn & Tweedie, 2009). Specifically, let

$$P^t(x,E) \coloneqq \mathbb{P}\!\left[X(t) \in E \mid X(0) = x\right], \tag{4}$$

for $t \ge 0$, $x \in \mathcal{X}_n$, and measurable $E \subseteq \mathcal{X}_n$. For each $t$ and $x$, $P^t(x,\cdot)$ is a measure, while for each measurable $E$, $P^t(\cdot,E)$ is a measurable function from $\mathcal{X}_n$ to $[0,1]$.

Define the family of operators, $K_t$, $t \ge 0$, by

$$(K_t f)(x) \coloneqq \mathbb{E}\!\left[f(X(t)) \mid X(0) = x\right] = \int_{\mathcal{X}_n} P^t(x,\mathrm{d}x')\,f(x'), \tag{5}$$

for bounded, measurable $f : \mathcal{X}_n \to \mathbb{R}$ and $x \in \mathcal{X}_n$. With this definition, $\{K_t\}_{t \ge 0}$ is a Markov semigroup.

We can now state

###### Theorem 4.

The Moran genealogy process is uniformly ergodic. In particular, there are constants $R < \infty$ and $\rho < 1$ such that

$$\left\|P^t(x,\cdot) - \pi_n\right\|_{TV} \le R\,\rho^t$$

for all $t \ge 0$ and all $x \in \mathcal{X}_n$. Here, the norm on measures is the total variation norm, defined by

$$\|\mu\|_{TV} \coloneqq \sup_{|f| \le 1}|\mu(f)|.$$

Moreover, if $\|\cdot\|$ denotes the operator norm, then with the same $R$ and $\rho$ as above, we have $\|K_t - \pi_n\| \le R\,\rho^t$.

###### Proof.

We verify that $X$ trivially satisfies a drift condition. Down et al. (1995, Theorem 5.2) show that if, for some petite set $C$, some function $V \ge 1$, some $b < \infty$, and some $\beta > 0$, one has $LV \le -\beta V + b\,\mathbb{I}_C$ for every $x \in \mathcal{X}_n$, then one can conclude that $X$ is $V$-uniformly ergodic. We take $C = \mathcal{X}_n$, $V \equiv 1$, and $\beta = b = 1$. We conclude that $X$ is uniformly ergodic in the sense of Down et al. (1995): this is equivalent to the first statement in the theorem.

Finally, it is easy to see that

$$\|K_t f - \pi_n(f)\|_{\infty} \le R\,\rho^t\,\|f\|_{\infty}.$$

The second statement in the theorem then follows immediately from the definition of the operator norm. ∎

### 2d. Infinitesimal generator

With the Markov semigroup defined by Eq. 5, we have

$$(K_t f)(w,s) = \int_{\mathcal{W}_n\times\mathcal{S}_n} P(t,w,s,w',s')\,f(w',s')\,\mathrm{d}w'\,\mathrm{d}s' + o(t),$$

as $t \downarrow 0$, where

$$P(t,w,s,w',s') \coloneqq e^{-\binom{n}{2}t}\,Q(t,s,w,s',w') + \left(1-e^{-\binom{n}{2}t}\right)\sum_{j=1}^{n-1} M_j(w,w')\,R_j(s,s'), \tag{6}$$

$$Q(t,s,w,s',w') \coloneqq \delta(s_{n-1}+t-s'_{n-1})\cdot\prod_{j=1}^{n-2}\delta(s_j-s'_j)\cdot\mathbb{I}_{w=w'},\quad\text{and} \tag{7}$$

$$R_j(s,s') \coloneqq \prod_{k=1}^{j-2}\delta(s_k-s'_k)\cdot\delta(s_{j-1}+s_j-s'_{j-1})\cdot\prod_{k=j}^{n-2}\delta(s_{k+1}-s'_k)\cdot\delta(s'_{n-1}). \tag{8}$$

Here, $\delta$ is the Dirac delta function. The $Q$ term encodes changes in $s$ that occur between birth-death events: the $(n-1)$-st coalescent interval grows with time, while the other intervals remain fixed and the topology remains unchanged. The terms in the sum of Eq. 6 encode the changes in $(w,s)$ that occur when the selected black ball is held by the player in seat $j$. At such an event, $w$ jumps to $w'$ with probability $M_j(w,w')$, the $(j-1)$-st coalescent interval subsumes the $j$-th, while the $k$-th interval takes the value of the $(k+1)$-st for $j \le k \le n-2$. Moreover, the $(n-1)$-st interval is set to zero.

We compute the infinitesimal generator, $L$, as the linear operator satisfying

$$\lim_{t\downarrow 0}\frac{K_t f - f}{t} = Lf$$

for all $f$ in the domain of $L$. This is easily done, and we obtain the following, which we state without proof.

###### Proposition 5.

The infinitesimal generator of the MGP is the linear operator $L$ defined by

$$(Lf)(w,s) = \int_{\mathcal{W}_n\times\mathcal{S}_n} L(w,s,w',s')\,f(w',s')\,\mathrm{d}w'\,\mathrm{d}s', \tag{9}$$

whenever $f$ lies in the domain of $L$. The kernel, $L(w,s,w',s')$, is given by

$$L(w,s,w',s') \coloneqq \left(\delta'(s_{n-1}-s'_{n-1}) - \tbinom{n}{2}\,\delta(s_{n-1}-s'_{n-1})\right)\prod_{k=1}^{n-2}\delta(s_k-s'_k)\,\mathbb{I}_{w=w'} + \tbinom{n}{2}\sum_{j=1}^{n-1} M_j(w,w')\,R_j(s,s'). \tag{10}$$

Here, the symbol $\delta'$ refers to the derivative of the Dirac delta function.

### 2e. Kolmogorov backward equation

For bounded, measurable $f$, let $u_f(t,w,s) \coloneqq (K_t f)(w,s)$. Note that

$$\frac{\partial u_f}{\partial t}(t,w,s) = \lim_{\Delta t\downarrow 0}\frac{K_{t+\Delta t}f(w,s) - K_t f(w,s)}{\Delta t} = \lim_{\Delta t\downarrow 0}\frac{K_{\Delta t}-\mathrm{Id}}{\Delta t}\,K_t f(w,s) = L u_f(t,w,s) = \int_{\mathcal{W}_n\times\mathcal{S}_n} L(w,s,w',s')\,u_f(t,w',s')\,\mathrm{d}w'\,\mathrm{d}s'.$$

Applying Eq. 10, we obtain the Kolmogorov backward equation

$$\begin{aligned}\frac{\partial u_f}{\partial t}(t,w,s) = \frac{\partial u_f}{\partial s_{n-1}}(t,w,s) - \tbinom{n}{2}\,u_f(t,w,s) &+ \tbinom{n}{2}\sum_{w'\in\mathcal{W}_n} M_1(w,w')\,u_f(t,w',s_2,\dots,s_{n-1},0)\\ &+ \tbinom{n}{2}\sum_{j=2}^{n-1}\sum_{w'\in\mathcal{W}_n} M_j(w,w')\,u_f(t,w',s_1,\dots,s_{j-2},s_{j-1}+s_j,s_{j+1},\dots,s_{n-1},0).\end{aligned} \tag{11}$$

Together with the initial condition, $u_f(0,w,s) = f(w,s)$, Eq. 11 determines the Markov semigroup $\{K_t\}$.

## 3. The stationary Moran Genealogy Process

The invariant measure, $\pi_n$, of the MGP is characterized by the fact that it is annihilated by the generator, i.e., $\pi_n L = 0$. We seek a separable measure, $\pi_n(\mathrm{d}w\,\mathrm{d}s) = p_0(w)\prod_{k=1}^{n-1}p_k(s_k)\,\mathrm{d}w\,\mathrm{d}s$. Operating with $\pi_n$ on $L$ involves integrating over all possible genealogies $(w,s)$:

$$\begin{aligned}\pi_n L &= \int_{\mathcal{W}_n\times\mathcal{S}_n}\pi_n(\mathrm{d}w\,\mathrm{d}s)\,L(w,s,w',s')\\ &= -\left(p'_{n-1}(s'_{n-1}) + \tbinom{n}{2}\,p_{n-1}(s'_{n-1})\right)\prod_{k=1}^{n-2}p_k(s'_k)\,p_0(w')\\ &\quad+ \tbinom{n}{2}\sum_{j=1}^{n-1}\sum_{w}p_0(w)\,M_j(w,w')\prod_{k=1}^{j-2}p_k(s'_k)\cdot A_{j-1}(s'_{j-1})\cdot\prod_{k=j}^{n-2}p_{k+1}(s'_k)\cdot\delta(s'_{n-1}) = 0.\end{aligned} \tag{12}$$

Here, $p_0$ is a probability distribution on $\mathcal{W}_n$, each $p_k$ is a probability density on $[0,\infty)$, and $A_j$ is defined by

$$A_j(s) \coloneqq \int_0^s p_j(t)\,p_{j+1}(s-t)\,\mathrm{d}t. \tag{13}$$

Integrating out all the $s'_k$ in Eq. 12, we obtain the matrix equation

$$p_0(w') = \sum_{w}p_0(w)\sum_{j=1}^{n-1}M_j(w,w') = \sum_{w}p_0(w)\,M(w,w'),$$

which is just the expression of the requirement that $p_0$ be the stationary distribution of the $W$ process, which we have already determined: indeed, Proposition 1 states that $p_0 = q_n$.

To find the other factors of $\pi_n$, we divide both sides of Eq. 12 by $p_0(w') = q_n(w')$ and, after dropping the primes, which are no longer needed, we have

$$-\left(p'_{n-1}(s_{n-1}) + \tbinom{n}{2}\,p_{n-1}(s_{n-1})\right)\prod_{k=1}^{n-2}p_k(s_k) + \sum_{j=1}^{n-1}j\prod_{k=1}^{j-2}p_k(s_k)\cdot A_{j-1}(s_{j-1})\cdot\prod_{k=j}^{n-2}p_{k+1}(s_k)\cdot\delta(s_{n-1}) = 0. \tag{14}$$

Note that, in passing from Eq. 12 to Eq. 14, we have applied Proposition 2.

Since Eq. 14 holds for all $s \in \mathcal{S}_n$, we must have

$$p'_{n-1}(s_{n-1}) + \tbinom{n}{2}\,p_{n-1}(s_{n-1}) = 0,$$

for $s_{n-1} > 0$, whence

$$p_{n-1}(s) = \tbinom{n}{2}\,e^{-\binom{n}{2}s}. \tag{15}$$

Now, we integrate Eq. 14 over $s_{n-1}$, which yields

$$-\tbinom{n}{2}\prod_{k=1}^{n-2}p_k(s_k) + \sum_{j=1}^{n-1}j\prod_{k=1}^{j-2}p_k(s_k)\cdot A_{j-1}(s_{j-1})\cdot\prod_{k=j}^{n-2}p_{k+1}(s_k) = 0. \tag{16}$$

Notice that each term in the sum of Eq. 16 contains a product of $n-2$ factors, each of which is a probability density over a different one of the variables $s_1,\dots,s_{n-2}$. Consequently, by integrating over all $s_k$, $k \ne m$, we obtain an expression for the marginal density of $s_m$:

$$p_{m+1}(s_m) + \frac{2}{m}\,A_m(s_m) - \left(1+\frac{2}{m}\right)p_m(s_m) = 0, \tag{17}$$

which holds for $1 \le m \le n-2$.
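Eq. 17 can be checked numerically once the exponential forms of the $p_m$ are in hand. The following sketch (our own verification, in Python) evaluates the left-hand side of Eq. 17 with $p_m(s) = \binom{m+1}{2}e^{-\binom{m+1}{2}s}$ and the resulting closed form of $A_m$:

```python
from math import comb, exp

def p(m, s):
    """Candidate density of the m-th coalescent interval: Exp(binom(m+1, 2))."""
    rate = comb(m + 1, 2)
    return rate * exp(-rate * s)

def A(m, s):
    """Convolution A_m = p_m * p_{m+1} of Eq. 13, in closed form."""
    a, b = comb(m + 1, 2), comb(m + 2, 2)
    return a * b * (exp(-a * s) - exp(-b * s)) / (b - a)

# Eq. 17: p_{m+1}(s) + (2/m) A_m(s) - (1 + 2/m) p_m(s) = 0
for m in range(1, 8):
    for s in (0.1, 0.5, 1.3, 2.7):
        lhs = p(m + 1, s) + (2 / m) * A(m, s) - (1 + 2 / m) * p(m, s)
        assert abs(lhs) < 1e-9, (m, s, lhs)
```

The residual vanishes to floating-point precision, as the induction below shows it must.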

We establish, by reverse induction on $m$, that $p_m(s) = \binom{m+1}{2}e^{-\binom{m+1}{2}s}$ for $1 \le m \le n-1$. In Eq. 15, we have already shown the result for $m = n-1$. Applying the operator $\frac{\mathrm{d}}{\mathrm{d}s} + \binom{m+2}{2}$ to both sides of Eq. 17, and noting that $A'_m = \binom{m+2}{2}\,(p_m - A_m)$ by the induction hypothesis, yields

$$\left(\frac{\mathrm{d}}{\mathrm{d}s}+\tbinom{m+2}{2}\right)p_{m+1} + \frac{2}{m}\tbinom{m+2}{2}\,p_m - \left(1+\frac{2}{m}\right)\left(p'_m + \tbinom{m+2}{2}\,p_m\right) = 0. \tag{18}$$

By the induction hypothesis, the first term of Eq. 18 vanishes and the second term partially cancels the third, so that we are left with

$$p'_m + \frac{m}{m+2}\tbinom{m+2}{2}\,p_m = 0.$$

The result follows. This establishes

###### Theorem 6.

The unique invariant probability measure, $\pi_n$, of the Moran genealogy process of size $n$ is given by

$$\pi_n(\mathrm{d}w\,\mathrm{d}s) = \exp\!\left(-\sum_{j=1}^{n-1}\tbinom{j+1}{2}\,s_j\right)\mathrm{d}w\,\mathrm{d}s,$$

where $\mathrm{d}w$ is the counting measure on $\mathcal{W}_n$ and $\mathrm{d}s$ is Lebesgue measure on $\mathcal{S}_n$. (Note that $q_n(w)\prod_{j=1}^{n-1}\binom{j+1}{2} = 1$, so no separate normalizing constant appears.)

Up to this point, we have, for convenience, assumed that the base event rate of the MGP is $\binom{n}{2}$. If instead, it proceeds at an arbitrary constant rate, $\mu$, a simple rescaling of time gives us the following

###### Corollary 7.

If the MGP of size $n$ proceeds with event rate $\mu$, then its unique invariant probability measure is given by

$$\pi_n(\mathrm{d}w\,\mathrm{d}s) = \left(\frac{\mu}{\binom{n}{2}}\right)^{n-1}\exp\!\left(-\sum_{j=1}^{n-1}\tbinom{j+1}{2}\,\frac{\mu}{\binom{n}{2}}\,s_j\right)\mathrm{d}w\,\mathrm{d}s,$$

where $\mathrm{d}w$ is the counting measure on $\mathcal{W}_n$ and $\mathrm{d}s$ is Lebesgue measure on $\mathcal{S}_n$.

Thus, the unique limiting stationary measure of the Moran Genealogy Process is identical to the probability measure of the Kingman (1982a) coalescent. Although this result is unsurprising, the proof, which is strictly forward-looking and makes no appeal to exchangeability, sheds some light onto the relationship between the Moran process and the Kingman coalescent.
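One concrete consequence of this identification: under the stationary measure with base event rate $\binom{n}{2}$, the expected time back to the most recent common ancestor is $\sum_{j=1}^{n-1}1/\binom{j+1}{2} = 2(1-1/n)$, exactly as for the Kingman coalescent. A quick exact-arithmetic check (ours, in Python):

```python
from fractions import Fraction
from math import comb

def expected_tmrca(n):
    """E[time to MRCA] under the stationary measure: the j-th coalescent
    interval is exponential with rate binom(j+1, 2) (Theorem 6)."""
    return sum(Fraction(1, comb(j + 1, 2)) for j in range(1, n))

for n in range(2, 50):
    assert expected_tmrca(n) == 2 - Fraction(2, n)
```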

## 4. Synchronous sampling

We now ask about the probability of sampling a given genealogy. Specifically, we imagine that at a given time, we sample $k < n$ individuals from the population at random. What is the genealogy linking these individuals?

We can represent the process of sampling a subgenealogy of size $k$ in terms of the Moran genealogy game as follows. Specifically, we perform the following iterative procedure. (1) The player holding the highest-numbered black ball stands up. (2) He exchanges his second ball for the green ball bearing his name. (3) The standing player is dismissed, all players seated to his right shift one seat to the left, and the rightmost chair is removed. At each iteration of this procedure, we remove the highest-numbered black ball from play and dismiss one player. Thus after $n-k$ steps, the configuration of the balls held by the remaining players, and the numbers written on their slates, together determine the sampled genealogy. Each step in this procedure kills one player, sequentially applying the function $\Upsilon_u$ (defined in § 2B) for $u = n, n-1, \dots, k+1$. Since $\Upsilon_u\Upsilon_v = \Upsilon_v\Upsilon_u$ for all $u$, $v$, the result would be the same were we to kill players by announcing a random sequence of $n-k$ black balls, provided we then replaced the remaining black balls with those numbered $1,\dots,k$.

For each $n > 1$, the drawing of a sample of size $n-1$ from a genealogy of size $n$, as just described, defines a random map from $\mathcal{X}_n$ to $\mathcal{X}_{n-1}$. The selection of a sample of size $k$ is then the $(n-k)$-fold composition of such maps. We risk no confusion in speaking also of the corresponding action on the $w$-component alone.

The removal of each successive player is itself equivalent to an application of the deterministic function $\Upsilon_a$ (defined in § 2B) for some randomly selected black ball $a$. For $w \in \mathcal{W}_n$ and $w' \in \mathcal{W}_{n-1}$, let $H_n(w,w')$ be the probability that the removal carries $w$ into $w'$, and denote by $a$ the unique black ball for which $\Upsilon_a(w) = w'$. Then

$$H_n(w,w') = \frac{1}{n}\sum_{u=1}^{n}\mathbb{I}_{\Upsilon_u(w)=w'} = \frac{\mathbb{I}_{\Upsilon_a(w)=w'}}{n}.$$

A counting argument similar to that employed in § 2B shows that

$$\sum_{w\in\mathcal{W}_n}H_n(w,w') = \tbinom{n}{2}, \quad\text{for all } w'\in\mathcal{W}_{n-1}. \tag{19}$$

As in Proposition 2, we decompose $H_n$ into levels, writing $H_n = \sum_{j=1}^{n-1}H_{nj}$, where

$$H_{nj}(w,w') \coloneqq \frac{\mathbb{I}_{\Upsilon_a(w)=w'\;\&\;a\in w_j}}{n}. \tag{20}$$

Again, simple counting arguments establish that

$$\sum_{w\in\mathcal{W}_n}H_{nj}(w,w') = j \qquad\text{and}\qquad \sum_{w'\in\mathcal{W}_{n-1}}H_{nj}(w,w') = \frac{B_j(w)}{n}, \tag{21}$$

where $B_j(w)$ is as defined in Proposition 2.

What is the action of the sampling operation on probability measures? Given any probability measure $\nu$ on $\mathcal{X}_n$ and any measurable set $E \subseteq \mathcal{X}_{n-1}$, define

$$\Omega_n\nu(E) \coloneqq \int_E\int_{\mathcal{X}_n}\nu(\mathrm{d}w\,\mathrm{d}s)\sum_{j=1}^{n-1}H_{nj}(w,w')\,R_{nj}(s,s')\,\mathrm{d}w'\,\mathrm{d}s'. \tag{22}$$

Here, $R_{nj}$ is defined in a manner similar to Eq. 8, by

$$R_{nj}(s,s') \coloneqq \prod_{k=1}^{j-2}\delta(s_k-s'_k)\cdot\delta(s_{j-1}+s_j-s'_{j-1})\cdot\prod_{k=j}^{n-2}\delta(s_{k+1}-s'_k),$$

for $s \in \mathcal{S}_n$, $s' \in \mathcal{S}_{n-1}$. Applying Eq. 22 to the stationary measure, $\pi_n$, of the size-$n$ MGP (Theorem 6), we obtain

$$\Omega_n\pi_n(\mathrm{d}w'\,\mathrm{d}s') = I(w',s')\,\mathrm{d}w'\,\mathrm{d}s',$$

where the density $I$ satisfies

$$\begin{aligned} I(w',s') &= \sum_{j=1}^{n-1}\int_{\mathcal{W}_n\times\mathcal{S}_n}q_n(w)\,H_{nj}(w,w')\,R_{nj}(s,s')\prod_{m=1}^{n-1}p_m(s_m)\,\mathrm{d}w\,\mathrm{d}s\\ &= \sum_{j=1}^{n-1}\left(\sum_{w\in\mathcal{W}_n}q_n(w)\,H_{nj}(w,w')\right)\left(\int_{\mathcal{S}_n}R_{nj}(s,s')\prod_{m=1}^{n-1}p_m(s_m)\,\mathrm{d}s\right)\\ &= \sum_{j=1}^{n-1}\frac{j\,2^{n-1}}{n!\,(n-1)!}\prod_{m=1}^{j-2}p_m(s'_m)\cdot A_{j-1}(s'_{j-1})\cdot\prod_{m=j}^{n-2}p_{m+1}(s'_m). \end{aligned}\tag{23}$$

As before, $p_m(s) = \binom{m+1}{2}e^{-\binom{m+1}{2}s}$, and, from Eq. 13,

$$A_{j-1}(s) = \tbinom{j}{2}\tbinom{j+1}{2}\,\frac{\exp\!\left(-\binom{j}{2}s\right) - \exp\!\left(-\binom{j+1}{2}s\right)}{j}.$$
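This closed form can be checked against a direct quadrature of the convolution integral in Eq. 13. A verification sketch (ours; simple trapezoidal integration in Python):

```python
from math import comb, exp

def p(m, s):  # density of the m-th coalescent interval, Exp(binom(m+1, 2))
    rate = comb(m + 1, 2)
    return rate * exp(-rate * s)

def A_closed(j, s):
    """Closed form for A_{j-1}; note binom(j+1, 2) - binom(j, 2) = j."""
    a, b = comb(j, 2), comb(j + 1, 2)
    return a * b * (exp(-a * s) - exp(-b * s)) / j

def A_quadrature(j, s, steps=20000):
    """Trapezoidal approximation of Eq. 13: integral of p_{j-1}(t) p_j(s-t)."""
    h = s / steps
    f = lambda t: p(j - 1, t) * p(j, s - t)
    return h * (0.5 * (f(0.0) + f(s)) + sum(f(i * h) for i in range(1, steps)))

for j in (2, 3, 5):
    for s in (0.2, 0.8, 1.5):
        assert abs(A_closed(j, s) - A_quadrature(j, s)) < 1e-4
```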

Substituting these expressions into Eq. 23 and doing some routine algebra gives

$$I(w,s) = \exp\!\left(-\sum_{j=1}^{n-2}\tbinom{j+1}{2}\,s_j\right),$$

which implies that $\Omega_n\pi_n = \pi_{n-1}$. Iterating this result $n-k$ times establishes

###### Theorem 8.

Let $X$ be the stationary Moran genealogy process of size $n$ and event rate $\mu$. For $1 \le k \le n$, consider the corresponding size-$k$ sampled process. Then the marginal probability distribution of the sampled genealogy on $\mathcal{X}_k$ is given by the measure

$$\left(\frac{\mu}{\binom{n}{2}}\right)^{k-1}\exp\!\left(-\sum_{j=1}^{k-1}\tbinom{j+1}{2}\,\frac{\mu}{\binom{n}{2}}\,s_j\right)\mathrm{d}w\,\mathrm{d}s,$$

where, as before, $\mathrm{d}w$ is the counting measure on $\mathcal{W}_k$ and $\mathrm{d}s$ is Lebesgue measure on $\mathcal{S}_k$.

Theorem 8 implies that the genealogy of a synchronous sample contains no information on the population size unless the event rate is known, and vice versa. To render these parameters independently identifiable, it is necessary to sample asynchronously.
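The projection identity $\Omega_n\pi_n = \pi_{n-1}$ at the heart of this section can also be confirmed numerically, by evaluating the final sum of Eq. 23 with the closed forms of $p_m$ and $A_{j-1}$ and comparing against the stationary density of the size-$(n-1)$ process. In the sketch below (ours, not from the original), the $j = 1$ term carries no $A$ factor, since in that case the oldest coalescent interval is simply dropped:

```python
from math import comb, exp, factorial, prod

def p(m, s):  # Exp(binom(m+1, 2)) density of the m-th coalescent interval
    rate = comb(m + 1, 2)
    return rate * exp(-rate * s)

def A(m, s):  # A_m = p_m * p_{m+1} in closed form (Eq. 13)
    a, b = comb(m + 1, 2), comb(m + 2, 2)
    return a * b * (exp(-a * s) - exp(-b * s)) / (b - a)

def sampled_density(n, s):
    """Final sum of Eq. 23 at s = (s'_1, ..., s'_{n-2})."""
    c = 2 ** (n - 1) / (factorial(n) * factorial(n - 1))
    total = c * prod(p(m + 1, s[m - 1]) for m in range(1, n - 1))  # j = 1
    for j in range(2, n):
        term = prod(p(m, s[m - 1]) for m in range(1, j - 1))
        term *= A(j - 1, s[j - 2])
        term *= prod(p(m + 1, s[m - 1]) for m in range(j, n - 1))
        total += c * j * term
    return total

def stationary_density(s):
    """Density of Theorem 6 for a genealogy with len(s) coalescent intervals."""
    return exp(-sum(comb(m + 2, 2) * s[m] for m in range(len(s))))

for n in (3, 4, 5, 6):
    s = tuple(0.1 * m + 0.3 for m in range(n - 2))
    assert abs(sampled_density(n, s) - stationary_density(s)) < 1e-12
```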

## 5. The Sampled Moran Genealogy Process

### 5a. Moran genealogy game with asynchronous sampling

We now turn to the situation where sampling occurs asynchronously, resulting in a sequence of genealogies whose probabilistic properties we wish to understand. Accordingly, we add some new rules to our parlor game.

#### Players and equipment

Before beginning the game, a finite or infinite increasing sequence of times $t_1 < t_2 < \cdots$ is chosen arbitrarily. At each of these times, a single sample will be taken from the population. In addition to the $n$ players of the Moran genealogy game, we must have two players for each of the samples. The equipment is the same as that used in the Moran genealogy game, but in addition, there is one red and one blue ball for each sample. Each one of the pair of players that will represent a sample receives a slate and a green ball bearing her own (unique) name. One of these players also takes a blue ball while the other takes a red ball. The red-ball holders and blue-ball holders are arranged in parallel queues.

#### Setup and play

The setup for the game with sampling is identical to that for the original game. Play, too, proceeds as before. However, play stops at each of the pre-selected sampling times. At sampling time $t_k$, the following maneuvers occur. (1) Two more seats are placed to the right of the rightmost seat. (2) The number of a randomly chosen black ball is called out. (3) The player holding the corresponding black ball exchanges it for the green ball bearing the name, call it A, of the next red-ball holder in the queue. (4) A takes the first of the seats just placed. (5) The next blue-ball holder, call her B, in the queue exchanges her green ball for the red ball held by A. (6) B takes the second of the new seats. (7) Both A and B record the current time ($t_k$) on their slates. Thus, after a sampling event, player B sits in the rightmost seat, holding one blue and one red ball. Player A sits one seat to the left, holding B’s green ball and the randomly selected black ball. An animation depicting a typical simulation of the MGP with sampling can be found in the online appendix.

### 5b. Sampled Moran Genealogy Process

#### Stationarity

While the definition of the Moran genealogy process with sampling, given above, makes sense in the absence of any stationarity assumptions, the results below will depend on the assumption that the underlying MGP is stationary. Accordingly, from this point forward, we assume that, prior to the first sample, the state of the MGP is a random draw from the stationary distribution (Corollary 7).

#### Pruning

We are naturally interested in the genealogies that express the relationships among only the sampled lineages. Accordingly, we define the pruned genealogy at time $t_k$ to be the genealogy obtained by performing the following pruning procedure. Immediately following sampling time $t_k$, we take a snapshot of the game tableau. We then sequentially dismiss all players holding black balls, as described in § 4. As we noted before, the order in which the players are dismissed does not matter. Next, each player holding a blue ball consults the player to her immediate left. Let X be the name of the player with the blue ball and Y that of the player to her left. If the times recorded on their slates match, it is almost surely the case that Y holds the green ball inscribed with X’s name. In this case, X trades her red ball for Y’s other ball. Y now trades the green ball bearing X’s name for the green ball bearing his own name, stands up, and is dismissed. All players from X rightward shift one seat to the left. If the slates of X and Y do not match, no action is taken.

Following the pruning, the tableau is reset according to the snapshot and play proceeds to the next sampling time. It is readily checked that one can also obtain the pruned genealogy at time $t_{k-1}$ from that at time $t_k$ as follows. The rightmost player in the latter, call him A, always holds both a blue and a red ball. Let B be the player holding the green ball with A’s name. Let A exchange his red ball for his own green ball, stand up, and be dismissed. Now if B, who has just received this red ball, holds a green ball as well, she exchanges it for her own green ball, stands up, and is dismissed. If she holds a blue ball in addition to the newly received red one, she remains seated. The fact that one can obtain the earlier pruned genealogy from the later one in this way implies that one can dispense with the snapshots and obtain