Markovian Transition Counting Processes: An Alternative to Markov Modulated Poisson Processes

Stochastic models for performance analysis, optimization and control of queues hinge on a multitude of alternatives for input point processes. In the case of bursty traffic, one very popular model is the Markov Modulated Poisson Process (MMPP); however, it is not the only option. Here we introduce an alternative that we call the Markovian transition counting process (MTCP). The latter is a point process counting the number of transitions of a finite continuous-time Markov chain. For a given MTCP, one can establish an MMPP with the same first and second moments of counts. In this paper, we show the other direction by establishing a duality, in terms of first and second moments of counts, between MTCPs and a rich class of MMPPs which we refer to as slow MMPPs (the modulation is slower than the events). Such a duality confirms the applicability of the MTCP as an alternative to the MMPP, which is superior when it comes to moment matching and finding the important measures of the inter-event process. We illustrate the use of this equivalence in a simple queueing example, showing that the MTCP is a comparable and competitive model for performance analysis.


1 Introduction

The term “bursty traffic” is often used in networking, traffic engineering and service systems to describe arrival patterns where at certain times there are numerous frequent arrivals and at other times there are none. While to a naive observer the standard Poisson process may also appear to exhibit such behavior, when referring to “bursty traffic” one typically means that the process is even “more bursty” than the Poisson process. There have been attempts to quantify and characterize the “level of burstiness” of a process, as in [17] and later in [12], but there is no fully agreed upon definition. Instead, one often relies on specific models such as the Markov modulated Poisson process (MMPP) [8]. The latter is a popular “bursty traffic” model because it has an asymptotic variance rate greater than unity (exceeding that of the Poisson process) and is also believed to have a squared coefficient of variation greater than unity [2]. MMPPs can easily be integrated within stochastic models as well as network approximation schemes [15]. Hence, to date, they have served as an attractive model for bursty traffic.

While the MMPP is popular, one may wish to find alternative models with comparable characteristics. One general option is to search within the class of Markovian arrival processes (MAPs), of which the MMPP is a special case; see e.g. [3], Ch. XI. In doing so, it may be interesting to find models that agree with certain attributes of a specified MMPP. Having such an alternative set of models allows one to (i) compare different (yet similar) models; (ii) use the alternative models as ensembles, similar to ensemble learning in a machine-learning context; (iii) improve network decomposition schemes (such as [15]). In view of (i), (ii) and (iii), we present a simple and straightforward alternative to MMPPs: Markovian transition counting processes (MTCPs). Our focus is on second order equivalence between a broad class of MMPPs that we call slow MMPPs and our MTCPs. From a modelling perspective and from the perspective of (i), (ii) and (iii), we argue that MTCP models can be just as useful as MMPPs. This paper aims to illustrate that.

In general, treating MAPs as stationary often yields a useful mathematical perspective which matches scenarios when there is no known dependence on time. In describing a point process we use N(t) to denote the number of events during (0, t] and further use {T_n} to denote the sequence of inter-event times. Two notions of stationarity are useful in this respect. Roughly, a MAP is time-stationary if the distribution of the number of events within a given interval does not depend on the location of the interval; that is, if N(t+s) − N(s) is distributed as N(t) for any non-negative t and s. A MAP is event-stationary if the joint distribution of (T_{n_1+m}, …, T_{n_k+m}) is the same as that of (T_{n_1}, …, T_{n_k}) for any integer sequence of indices n_1, …, n_k and any integer shift m. For a given model of a point process, one may often consider either the event-stationary or the time-stationary case. The probability laws of both cases agree in the case of the Poisson process. However, this is not true in general.

A common way to parameterise MAPs is by considering the generator, Q, of an irreducible finite-state CTMC and setting Q = C + D. Roughly speaking, the matrix C determines the state transition rates that are not associated with event counts and the matrix D determines the state transition rates that are associated with event counts. Here we consider two classes of MAPs as alternative models: the Markov modulated Poisson process (MMPP), which is a natural generalization of the Poisson process and has a diagonal matrix D, and the Markovian transition counting process (MTCP), which has a diagonal matrix C while the diagonal elements of D are all zeros.
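The C/D split can be made concrete in a few lines of code. The following sketch uses a hypothetical two-state generator and hypothetical event rates; it only illustrates how the two classes carve up the same generator.

```python
import numpy as np

# Hypothetical generator Q of a two-state background CTMC.
Q = np.array([[-1.0, 1.0],
              [ 2.0, -2.0]])

# MMPP: D is diagonal (Poisson rate per phase); C = Q - D.
lam = np.array([5.0, 3.0])          # hypothetical event rates
D_mmpp = np.diag(lam)
C_mmpp = Q - D_mmpp

# MTCP: every transition is counted, so D is the off-diagonal part of Q
# and C is its diagonal part.
D_mtcp = Q - np.diag(np.diag(Q))
C_mtcp = np.diag(np.diag(Q))

# In both cases C + D is a valid generator (rows sum to 0).
assert np.allclose((C_mmpp + D_mmpp).sum(axis=1), 0)
assert np.allclose((C_mtcp + D_mtcp).sum(axis=1), 0)
```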

When observing the counting process, N(t), in the MMPP setting there is no indication of jump times in the background CTMC; in contrast, in the MTCP setting, the background CTMC jumps exactly when N(t) is incremented. This potentially makes MTCPs more attractive. We can consider the jump chain of an MTCP as a standard discrete-time hidden Markov model (HMM), where the observations are the exponentially distributed sojourn times in states; this allows us to apply the forward-backward algorithm for MTCP parameter estimation. Note that treating general MMPPs as standard HMMs requires more complicated manipulations; see for instance [20].

A further virtue of MTCPs is that having a diagonal matrix C makes certain algebraic quantities regarding the inter-event time process easier to compute (typical quantities involve terms such as (−C)^{−k}1 for some integer k, or e^{Ct} for real t). One of the state-of-the-art fitting tools for bursty data traces is the KPC-toolbox, see [6]. The KPC-toolbox is based on the Kronecker product composition (KPC) technique to generate MAPs with predefined moments, autocorrelations, and higher order statistics of inter-event times. This technique exploits the fact that the Kronecker product of MAPs is a MAP when at least one of them has a diagonal matrix C; see [5] for more details. The accurate fitting results of this method are another motivation to carry on this research on relations between MMPPs and MTCPs.

In [16], the authors show that for a given MTCP, one can find an MMPP with the same first and second moments of the counting process. But can we do the converse? Here, we consider this converse problem and show that for a wide class of MMPPs, it is possible to find MTCPs that match them exactly in terms of the mean and variance of N(t) in the time-stationary case.

In an earlier paper [1], we handled this problem only for the case of two-state MMPPs; here, using matrix analytic methods, we generalise the solution to a p-state MMPP (p ≥ 2). Further, for comparison, we apply moment matching to approximate the inter-event process of an MMPP with its associated MTCP. The numerical results of comparing the performance measures of the MMPP/M/1 queue with the approximating MTCP/M/1 queue confirm that MTCPs can be considered as an alternative model to MMPPs. In doing so, we compare two MTCP-based alternatives. The first is based on a detailed method of moments fit and the second is based on our simple approximate fit. We see that both methods are comparable.

The remainder of the paper is structured as follows. In Section 2 we overview and summarise the MAP results used in this paper. In Section 3 we focus on the relationships between MTCPs and MMPPs. Specifically, we present our main results for second order equivalence. We show numerical results in Section 4 and conclude in Section 5. Note that in this paper, bold Latin letters are used for column vectors. Vectors of probabilities are row vectors and are denoted by bold Greek letters.

2 Markovian Arrival Processes

A Markovian arrival process (MAP) is a mathematical model based on a Markov chain, used for modelling events occurring over time. A MAP of order p (MAP_p) is generated by a two-dimensional Markov process {(N(t), X(t)), t ≥ 0} on the state space ℕ₀ × {1, …, p}. The counting process N(t) counts the number of “events” in (0, t], with N(0) = 0. The phase process X(t) is an irreducible CTMC with state space {1, …, p}, initial distribution η and generator matrix Q. A MAP is characterised by the parameters (C, D), where the matrix C has negative diagonal elements and non-negative off-diagonal elements, and records the phase transition rates which are not associated with an event. The matrix D has non-negative elements and records the phase transition rates associated with an event (increase of N(t) by 1). Moreover, we have Q = C + D. More details are in [3] (Chapter XI) and [11] (Chapter 2).

Since X(t) is assumed irreducible and finite, it has a unique stationary distribution π satisfying πQ = 0′, π1 = 1, where 0 is a column vector of 0’s and 1 is a column vector of 1’s. A MAP with parameters (C, D) is time-stationary if η = π. In the time-stationary case, we have (see [3]):

 E[N(t)] = πD1 t, (1)
 Var(N(t)) = {πD1 + 2πD D♯_Q D1} t − 2πD D♯_Q D♯_Q(t) D1, (2)
 E[N³(t)] = 6πD ∫₀ᵗ ∫₀ᵘ e^{Q(u−s)} D (1πD1 s + D♯_Q(s) D1) ds du, (3)

where D♯_Q(t) is the transient deviation matrix,

 D♯_Q(t) = ∫₀ᵗ (e^{Qu} − 1π) du, (4)

and D♯_Q is the deviation matrix defined by the following formula

 D♯_Q = lim_{t→∞} D♯_Q(t) = ∫₀^∞ (e^{Qu} − 1π) du. (5)

Note that in some sources such as [3], the variance formula (2) is presented in terms of the fundamental matrix Z = (1π − Q)^{−1}. For a given generator matrix Q, the relation between Z and the deviation matrix is D♯_Q = Z − 1π, see [7]. The deviation matrix has the following properties:

 D♯_Q 1 = 0, πD♯_Q = 0, and D♯_Q Q = Q D♯_Q = (1π − I), (6)

where the last one follows from the fact that 1πQ = 0 and lim_{u→∞} e^{Qu} = 1π. The relation between the deviation matrix and the transient deviation matrix with the same generator matrix is given by:

 D♯_Q(t) = D♯_Q (I − e^{Qt}). (7)
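Formulas (1), (2), (5) and (7) are straightforward to evaluate numerically. The sketch below (Python with NumPy/SciPy assumed available; the generator and rates are hypothetical) computes the mean and variance of counts; as a sanity check, an MMPP whose phases all share the same rate is a Poisson process, for which Var(N(t)) = E[N(t)] = λt.

```python
import numpy as np
from scipy.linalg import expm

def stationary(Q):
    # Solve pi Q = 0, pi 1 = 1 (least squares on the augmented system).
    p = Q.shape[0]
    A = np.vstack([Q.T, np.ones(p)])
    b = np.zeros(p + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

def count_moments(Q, D, t):
    # Mean and variance of the time-stationary counts, Eqs. (1)-(2),
    # with the deviation matrix from Eq. (5) and the transient one from Eq. (7).
    p = Q.shape[0]; one = np.ones(p)
    pi = stationary(Q)
    Pi = np.outer(one, pi)
    Dh = np.linalg.inv(Pi - Q) - Pi              # deviation matrix D#_Q
    Dh_t = Dh @ (np.eye(p) - expm(Q * t))        # transient deviation, Eq. (7)
    mean = pi @ D @ one * t
    var = (pi @ D @ one + 2 * pi @ D @ Dh @ D @ one) * t \
          - 2 * pi @ D @ Dh @ Dh_t @ D @ one
    return mean, var

# Sanity check: an MMPP with equal rates in all phases is a Poisson
# process, for which Var(N(t)) = E[N(t)] = lambda t.
Q = np.array([[-1.0, 1.0],
              [ 2.0, -2.0]])
D = np.diag([3.0, 3.0])
m, v = count_moments(Q, D, 5.0)
assert np.isclose(m, 15.0) and np.isclose(v, 15.0)
```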

Of further interest is the embedded discrete-time Markov chain at event times, with irreducible stochastic matrix P = (−C)^{−1}D and stationary distribution α, satisfying αP = α and α1 = 1. The MAP is event-stationary if there is an event at time 0 and the phase is then distributed according to α. For an event-stationary MAP, the (generic) inter-event time T_n is phase-type distributed, and thus has k-th moment:

 M_k = E[T_n^k] = k! α(−C)^{−k}1 = k! (1/λ*) (−πC)(−C)^{−k}1 = (−1)^{k+1} k! (1/λ*) π(C^{−1})^{k−1}1, (8)

where λ* = πD1. Further, joint moments are given by (see [13] or [14])

 E[T_i^k T_j^l] = k! l! α(−C)^{−k} P^{j−i} (−C)^{−l} 1, j > i,

and when the covariance is normalised, we have the lag-j autocorrelation function

 ρ_j = Cov(T_0, T_j)/Var(T_0) = (E[T_0 T_j] − M_1²)/(M_2 − M_1²). (9)
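Equations (8)-(9) translate directly into code. The sketch below (hypothetical rates; NumPy assumed) computes M_1, M_2 and ρ_j for a MAP given (C, D); for the degenerate one-phase case, a Poisson process, the inter-event times are i.i.d. exponential, so ρ_j = 0.

```python
import numpy as np

def interevent_stats(C, D, j):
    # M_1, M_2 and the lag-j autocorrelation of the event-stationary
    # inter-event times, Eqs. (8)-(9), with P = (-C)^{-1} D.
    p = C.shape[0]; one = np.ones(p)
    Q = C + D
    A = np.vstack([Q.T, np.ones(p)]); b = np.zeros(p + 1); b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    alpha = (pi @ D) / (pi @ D @ one)     # alpha = (-pi C)/lambda*
    Ci = np.linalg.inv(-C)
    M1 = alpha @ Ci @ one
    M2 = 2 * alpha @ Ci @ Ci @ one
    P = Ci @ D
    ET0Tj = alpha @ Ci @ np.linalg.matrix_power(P, j) @ Ci @ one
    rho_j = (ET0Tj - M1**2) / (M2 - M1**2)
    return M1, M2, rho_j

# One-phase sanity check: a Poisson process has i.i.d. exponential
# inter-event times, so M1 = 1/lambda and rho_j = 0.
M1, M2, r = interevent_stats(np.array([[-4.0]]), np.array([[4.0]]), 3)
assert np.isclose(M1, 0.25) and np.isclose(r, 0.0)
```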

The most well known type of MAP is the Markov modulated Poisson process (MMPP). The MMPP can be considered as an arrival process which consists of a finite number of different Poisson processes, modulated by a Markov process. In other words, the MMPP is a particular case of doubly stochastic Poisson processes whose arrival rate is directed by transitions of a finite-state CTMC. So, in the standard MAP parameterization, D is a diagonal matrix with entries λ_1, …, λ_p. In practice, MMPPs are well suited to modelling bursty traffic. Examples of bursty processes arise in queues exhibiting rush-hour traffic, in call centers, and at traffic-light intersections.

Another example of a MAP is the Markov transition counting process (MTCP), which is a point process counting every transition of an irreducible finite-state CTMC. One can consider MMPPs and MTCPs as two extreme examples of MAPs. When there is an event in an MMPP, it never corresponds to a transition in the phase process, while when there is an event in an MTCP, it always corresponds to a transition in the phase process. Therefore, the matrix C of an MTCP, which records the phase transitions with no event, is diagonal and the diagonal elements of the matrix D are all zero. For an early reference that analyses both the MMPP and the MTCP (without using these terms, as such), see [19].

It is easy to verify that the MTCP is a natural generalisation of a hyperexponential renewal process (H_p-renewal process). Consider an MTCP where the sojourn time in each phase i (of the background CTMC) has an exponential distribution with parameter λ_i, for i = 1, …, p. The representation of such an MTCP as a MAP is C = diag(−λ_1, …, −λ_p), where the matrix D has non-negative elements, zero diagonal, and row sums D1 = (λ_1, …, λ_p)′. Therefore, the marginal density of inter-event times is a mixture of exponential densities, which is the density function of an H_p distribution. So, in an MTCP, times between events are H_p-distributed. But they are not necessarily independent. We also mention that the MTCP is a special case of the Markov switched Poisson process (MSPP). The latter is a MAP where C is diagonal; however, D does not necessarily have a zero diagonal (see [11]). Further analysis dealing with MMPPs and MSPPs can be found in [2].

3 Matching MTCPs and MMPPs

In this section, we show relations between MTCPs and MMPPs. Proposition 3.2 of [16] implies that every MTCP has an associated MMPP with the same first two moments. We present this proposition in an alternative form here:

Proposition 3.1.

[16] Let N̄(t) be the counting process of a time-stationary MTCP. Then there is an MMPP, with counting process Ñ(t), such that their first and second moments are matched. That is, for all t ≥ 0,

 E[Ñ^k(t)] = E[N̄^k(t)], for k = 1, 2.
Proof.

Assume that the event intensity matrix of the MTCP is given by D̄ = Q − diag(Q), where Q is the generator matrix of the background CTMC and diag(Q) is the diagonal matrix formed by the diagonal of Q. We can construct an MMPP with the same background CTMC by setting D̃ = −diag(Q). From (1) and (2), if we show that πD̃ = πD̄ and D̃1 = D̄1, these processes have the same first two moments and the proof is completed. Since πQ = 0′ and Q1 = 0, we have πD̄ = −π diag(Q) = πD̃ and D̄1 = −diag(Q)1 = D̃1, and the result follows. ∎

The proof shows that in order to construct an MMPP matching the first two moments of an MTCP with the same generator matrix Q, we need to set D̃ = diag(λ_1, …, λ_p) and C̃ = Q − D̃, where λ_i = −q_ii is the total transition rate out of phase i.
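This construction is easy to verify numerically. The sketch below (hypothetical three-state generator; NumPy/SciPy assumed) builds an MTCP and its matching MMPP and checks that the means and variances of counts from (1)-(2) coincide for several horizons.

```python
import numpy as np
from scipy.linalg import expm

def mean_var(Q, D, t):
    # Eqs. (1)-(2) for a time-stationary MAP with generator Q = C + D.
    p = Q.shape[0]; one = np.ones(p)
    A = np.vstack([Q.T, np.ones(p)]); b = np.zeros(p + 1); b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    Pi = np.outer(one, pi)
    Dh = np.linalg.inv(Pi - Q) - Pi              # deviation matrix, Eq. (5)
    Dh_t = Dh @ (np.eye(p) - expm(Q * t))        # transient deviation, Eq. (7)
    mean = pi @ D @ one * t
    var = (pi @ D @ one + 2 * pi @ D @ Dh @ D @ one) * t \
          - 2 * pi @ D @ Dh @ Dh_t @ D @ one
    return mean, var

# Hypothetical MTCP: count every transition of the chain with generator Q.
Q = np.array([[-3.0, 2.0, 1.0],
              [1.0, -4.0, 3.0],
              [2.0, 2.0, -4.0]])
D_mtcp = Q - np.diag(np.diag(Q))
# Matching MMPP on the same background chain: Poisson rate -q_ii in phase i.
D_mmpp = np.diag(-np.diag(Q))

for t in (0.5, 2.0, 10.0):
    assert np.allclose(mean_var(Q, D_mtcp, t), mean_var(Q, D_mmpp, t))
```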

Now the question is: can we construct an MTCP whose first two moments match those of a given MMPP? Based on the proof of the above proposition, the answer is positive for the special case of MMPPs where λ_i = −q_ii, i.e. the event rate in each phase equals the total transition rate out of that phase. But this is a very restricted case since it does not leave any freedom in choosing D̃. We now show that for each instance of a class of MMPPs which we call “slow MMPPs”, there is an associated MTCP whose counting process exhibits the same first and second moments.

Definition 3.2.

A slow Markov modulated Poisson process (slow MMPP) is an MMPP where the event rate in any phase is at least the total rate of leaving that phase, i.e. λ_i ≥ S_i, where S_i = Σ_{j≠i} q_ij.

We can associate an MTCP to any slow MMPP, as illustrated in Figure 1 for the case of p = 2. As the figure shows, if the transition rate matrix and the event intensity matrix of the slow MMPP are given by

 Q̃ = [ −q_12   q_12
        q_21   −q_21 ],  D̃ = [ λ_1  0
                                0    λ_2 ], (10)

then the corresponding matrices for the associated MTCP are

 Q̄ = [ −λ_1        λ_1−q_12   q_12        0
        λ_1−q_12   −λ_1        0           q_12
        q_21        0          −λ_2        λ_2−q_21
        0           q_21       λ_2−q_21   −λ_2 ],  D̄ = Q̄ − diag(Q̄). (11)

We can generalise this construction from p = 2 to an arbitrary p. Here, given a slow MMPP, we construct an MTCP with the transition rate and the event intensity matrices

 Q̄ = [ Λ̄_1   H_12  ⋯  H_1p
        H_21  Λ̄_2   ⋯  H_2p
        ⋮      ⋮     ⋱  ⋮
        H_p1  H_p2  ⋯  Λ̄_p ],  D̄ = [ D̄_1   H_12  ⋯  H_1p
                                       H_21  D̄_2   ⋯  H_2p
                                       ⋮      ⋮     ⋱  ⋮
                                       H_p1  H_p2  ⋯  D̄_p ], (12)

where, for i, j = 1, …, p with i ≠ j, the 2 × 2 blocks Λ̄_i, D̄_i and H_ij are given by

 Λ̄_i = [ −λ_i       λ_i−S_i
         λ_i−S_i   −λ_i ],  D̄_i = [ 0         λ_i−S_i
                                     λ_i−S_i  0 ],  H_ij = [ q_ij  0
                                                             0     q_ij ],

where we use the notation S_i = Σ_{j≠i} q_ij. Notice that D̄ = Q̄ − diag(Q̄), so the construction indeed defines an MTCP. We prove that for an arbitrary order p (p ≥ 2), the counting processes of the initial time-stationary slow MMPP and its associated MTCP exhibit the same first and second moments.
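A small sketch of the construction (12) using Kronecker products (hypothetical slow-MMPP parameters; NumPy assumed). Besides checking that Q̄ is a valid generator, it checks the stationary distribution of the associated MTCP, which splits the stationary mass of each phase equally between its two copies.

```python
import numpy as np

def associated_mtcp(Qt, lam):
    # Build the 2p-state MTCP (Q_bar, D_bar) associated with a slow MMPP
    # (Q_tilde, diag(lam)) as in Eq. (12): phase i is split into two copies
    # joined at rate lam_i - S_i, where S_i is the exit rate of phase i.
    S = -np.diag(Qt)
    assert np.all(lam >= S), "slow-MMPP condition lam_i >= S_i violated"
    swap = np.array([[0.0, 1.0], [1.0, 0.0]])
    H = Qt - np.diag(np.diag(Qt))                  # off-diagonal rates q_ij
    Qb = np.kron(H, np.eye(2)) - np.kron(np.diag(lam), np.eye(2)) \
         + np.kron(np.diag(lam - S), swap)
    return Qb, Qb - np.diag(np.diag(Qb))           # D_bar is the off-diagonal part

def stat(Q):
    p = Q.shape[0]
    A = np.vstack([Q.T, np.ones(p)]); b = np.zeros(p + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Hypothetical slow MMPP.
Qt = np.array([[-1.0, 1.0], [2.0, -2.0]])
lam = np.array([5.0, 3.0])
Qb, Db = associated_mtcp(Qt, lam)

assert np.allclose(Qb.sum(axis=1), 0)                        # valid generator
assert np.allclose(stat(Qb), np.kron(stat(Qt), [0.5, 0.5]))  # split stationary mass
```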

In order to compare properties of the MMPP and its associated MTCP, we construct a MAP with the same counting process as the MMPP. This is done so that we compare processes with the same number of phases, and it works by coupling the events of the phase process of the MMPP: each phase i is split into two copies, i and i′, and an event triggers a transition from i to i′ or vice versa. Figure 2 shows this for the case of p = 2. This is nothing but a common modelling way to describe “self-transitions” in a CTMC. Note though that the resulting MAP is not an MTCP.

We denote the phase transition matrix of the resulting MAP by Q̃ and its event intensity matrix by D̃. Then, for the case of p = 2, we have:

 Q̃ = [ −(λ_1+q_12)   λ_1           q_12          0
        λ_1           −(λ_1+q_12)   0             q_12
        q_21          0             −(λ_2+q_21)   λ_2
        0             q_21          λ_2           −(λ_2+q_21) ], (13)

 D̃ = [ 0    λ_1  0    0
        λ_1  0    0    0
        0    0    0    λ_2
        0    0    λ_2  0 ]. (14)

Carrying out this process for an arbitrary value of p, the transition rate matrix and the event intensity matrix are given by

 Q̃ = [ Λ̃_1   H_12  ⋯  H_1p
        H_21  Λ̃_2   ⋯  H_2p
        ⋮      ⋮     ⋱  ⋮
        H_p1  H_p2  ⋯  Λ̃_p ],  D̃ = [ D̃_1  0    ⋯  0
                                       0    D̃_2  ⋯  0
                                       ⋮    ⋮    ⋱  ⋮
                                       0    0    ⋯  D̃_p ], (15)

where Λ̃_i and D̃_i are 2 × 2 matrices given by

 Λ̃_i = [ −(λ_i+S_i)  λ_i
         λ_i          −(λ_i+S_i) ],  D̃_i = [ 0    λ_i
                                              λ_i  0 ].

Now, we can compare this MAP with the associated MTCP. Define the p × p transition rate matrix G by G_ij = q_ij for i ≠ j and G_ii = −S_i, denote a p-dimensional column vector of ones by 1_p, and denote the 2-dimensional column vector of ones by 1_2. Then, we have the following lemma.

Lemma 3.3.

For all :

1. For any :          ,

and                          ,

where denotes the Kronecker product.

2. For any :           ,

and                           .

3. Both Q̃ and Q̄ have the same stationary distribution, which can be written as π ⊗ (1/2, 1/2), where π is the stationary distribution of the p-state modulating generator.

4. , where .

5. .

6. .

7. , and .

8. , and .

9. .

10. .

Proof.

The proof is in the Appendix. ∎

Given the above lemma, we can prove that for a given slow MMPP there exists an associated MTCP with the same first two moments of counts (but not the same third moments). This proves the converse of Proposition 3.1 for the class of slow MMPPs.

Theorem 3.4.

Let Ñ(t) and N̄(t) be the counting processes of a time-stationary slow MMPP and its associated MTCP, respectively. Then these processes have the same first and second moments. That is, for all t ≥ 0,

 E[Ñ^k(t)] = E[N̄^k(t)], for k = 1, 2.

Further, Ñ(t) and N̄(t) have different third moments.

A proof for the case p = 2 appears in [1]; here we present a general proof.

Proof.

First, note that from (1) and part (iii) of Lemma 3.3 we have E[Ñ(t)] = E[N̄(t)]. Then, for the variance, the proof is straightforward by using (2) and parts (iii), (vi) and (vii) of Lemma 3.3.

For the third moment, from parts (iii), (iv) and (i) of Lemma 3.3, we see that the first term in the integral of Eq. (3) is the same for both processes. The second term in the integral of (3) for the MTCP can be written as

 ∫₀ˢ ϑ diag(λ) (I_p⊗1′_2) e^{Q̄(u−s)} D̄ (e^{Q̄v} − 1π̄)(I_p⊗1_2) λ dv
  = ∫₀ˢ ϑ diag(λ) ((I_p⊗1′_2) e^{Q̄(u−s)} D̄ e^{Q̄v} (I_p⊗1_2) − (I_p⊗1′_2) e^{Q̄(u−s)} D̄ 1π̄ (I_p⊗1_2)) λ dv,

where we applied parts (iii) and (iv) of Lemma 3.3 and the definition of the transient deviation matrix (4). By using parts (i) and (iv) of Lemma 3.3, we see that the second term of this integral is the same for the MMPP and its associated MTCP.

For the first term, by using the definition of the matrix exponential and putting aside the common terms and scalars, we are left with terms of the form

 (I_p⊗1′_2) Q̄^k D̄ Q̄^l (I_p⊗1_2), ∀k, l ≥ 0. (16)

Part (i) of Lemma 3.3 allows us to reduce the above term. If we rewrite D̄ and D̃ from (12) and (15) in terms of Kronecker products, we have

 D̄ = Q̄ + diag(λ)⊗I_2,  D̃ = diag(λ) ⊗ [ 0  1
                                          1  0 ]. (17)

It is easy to verify that 1′_2 [ 0 1 ; 1 0 ] 1_2 = 1′_2 I_2 1_2 = 2. Therefore,

 (I_p⊗1′_2) D̃ (I_p⊗1_2) = (I_p⊗1′_2)(diag(λ)⊗I_2)(I_p⊗1_2). (18)

Comparing the above equation with (17) implies that the third moments are not the same and the MTCP has the following extra term

 ∫₀ˢ ϑ diag(λ) G^k (I_p⊗1′_2) Q̄ (I_p⊗1_2) G^l λ dv, (19)

which cannot be zero unless the process is a Poisson process (see Remark 3.5 below). ∎

Note that Theorem 3.4 only holds for slow MMPPs. Otherwise, the construction of the associated MTCP does not yield a valid MAP, due to negative elements (λ_i − S_i < 0) in the matrices Q̄ and D̄.
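Theorem 3.4 can be checked numerically by evaluating (1)-(2) for a slow MMPP and for its associated MTCP from (12). The sketch below (hypothetical parameters; NumPy/SciPy assumed) does so for several horizons t.

```python
import numpy as np
from scipy.linalg import expm

def mean_var(Q, D, t):
    # Eqs. (1)-(2) for a time-stationary MAP with generator Q = C + D.
    p = Q.shape[0]; one = np.ones(p)
    A = np.vstack([Q.T, np.ones(p)]); b = np.zeros(p + 1); b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    Pi = np.outer(one, pi)
    Dh = np.linalg.inv(Pi - Q) - Pi              # deviation matrix, Eq. (5)
    Dh_t = Dh @ (np.eye(p) - expm(Q * t))        # transient deviation, Eq. (7)
    mean = pi @ D @ one * t
    var = (pi @ D @ one + 2 * pi @ D @ Dh @ D @ one) * t \
          - 2 * pi @ D @ Dh @ Dh_t @ D @ one
    return mean, var

# Hypothetical slow MMPP: lam_i >= S_i in every phase.
Qt = np.array([[-1.0, 1.0], [2.0, -2.0]])
lam = np.array([5.0, 3.0])

# Associated MTCP, Eq. (12), written with Kronecker products.
S = -np.diag(Qt)
swap = np.array([[0.0, 1.0], [1.0, 0.0]])
Qb = np.kron(Qt - np.diag(np.diag(Qt)), np.eye(2)) \
     - np.kron(np.diag(lam), np.eye(2)) + np.kron(np.diag(lam - S), swap)
Db = Qb - np.diag(np.diag(Qb))

# First and second moments of counts agree at every horizon t.
for t in (0.5, 2.0, 10.0):
    m1, v1 = mean_var(Qt, np.diag(lam), t)
    m2, v2 = mean_var(Qb, Db, t)
    assert np.isclose(m1, m2) and np.isclose(v1, v2)
```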

Remark 3.5.

Only in the case of a Poisson process are the third moments of the slow MMPP and its associated MTCP the same. This result comes from the fact that (19) is equal to zero only for Poisson processes:

 ϑ diag(λ) G^k (I_p⊗1′_2) Q̄ (I_p⊗1_2) G^l λ = 0
  ⇔ ϑ diag(λ) G^{k+1+l} λ = 0, ∀k, l ≥ 0,
  ⇔ ϑ diag(λ) G^n λ = 0, ∀n ≥ 1,

where the first step holds by using part (i) of Lemma 3.3 and the fact that (I_p⊗1′_2) Q̄ (I_p⊗1_2) = 2G. Since ϑ is the stationary distribution of G, the last equality holds when Gλ = 0 or ϑ diag(λ)G = 0, and (since G is irreducible) either condition implies λ = c1_p for some constant c, which is the case for the Poisson process. For more details on when a general MAP is Poisson, see [4].

Further, for any initial distribution (not restricting to the time-stationary case), we have the following proposition.

Proposition 3.6.

Let Ñ(t) and N̄(t) be the counting processes of a slow MMPP and its associated MTCP, respectively. Then these processes have the same first moment but not necessarily the same second moment.

Proof.

First consider the generating function of the counting process of a MAP, which is given by (see Chapter XI of [3])

 F_ij(s,t) = Σ_{n≥0} s^n P(N(t) = n, X(t) = j | X(0) = i).

Moreover, F(s,0) = I and F(1,t) = e^{Qt}. Therefore, we have

 ∂/∂t F(s,t) = (C + Ds) F(s,t). (20)

If we put M(t) = ∂/∂s F(s,t)|_{s=1}, then M(0) = 0 and

 E_η[N(t)] = ηM(t)1. (21)

From (20) and F(1,t) = e^{Qt}, we have M′(t) = QM(t) + De^{Qt}. The solution of this differential equation is

 M(t) = ∫₀ᵗ e^{Qu} D e^{Q(t−u)} du.

So, from (21), the first moment of a non-stationary MAP in terms of the transient deviation matrix is given by

 E_η[N(t)] = ηM(t)1 = ηD♯_Q(t)D1 + πD1 t. (22)

By using parts (iii) and (vi) of Lemma 3.3, we see that the first moments of both the MTCP and MMPP counting processes in the non-stationary case are the same.
To show that the second moments are different, we show that the y-intercepts of their asymptotic variance curves are different. For a non-stationary MAP, Var(N(t)) has an asymptote with y-intercept given by (see [10])

 b̄_η = −2πD D♯_Q D♯_Q D1_p − 2πD1_p η(D♯_Q)² D1_p − (ηD♯_Q D1_p)² + 2ηD♯_Q D D♯_Q D1_p.

From parts (v), (vi) and (vii) of Lemma 3.3, we see that the first three terms of the above y-intercept are the same for an MMPP and its associated MTCP. For the last term, consider an initial distribution of the form η = ϑ ⊗ (1/2, 1/2). Then, by using the definition of the deviation matrix (5) and part (i) of Lemma 3.3, this term can be written for both processes as a difference of two integral terms.

Note that from part (vi) of Lemma 3.3, the right term is the same for both processes. However, the different event intensity matrices of slow MMPPs and their corresponding MTCPs result in different left terms: the left term involves (I_p⊗1′_2)D(I_p⊗1_2), so from (17) and (18) we can conclude that this term is different for a slow MMPP and its associated MTCP. ∎

Note that from the proof of the last part of the above proposition, for having the same second moment we need the initial distribution to either satisfy ηD♯_Q = 0 (which holds for the time-stationary distribution π) or to satisfy a structural condition that turns out to be impossible. Since the left null space of D♯_Q is spanned by π, we conclude that for having the same second moment, we must have η = π.

4 Illustration on Matching the Inter-Event Time Process

We now present a numerical illustration of second order equivalence, focusing on comparing MMPPs and MTCPs for the purpose of modelling exogenous processes in stochastic models. We take the viewpoint of considering a queue with an MMPP arrival process and approximating it by a queue with an MTCP arrival process. We denote by c² the squared coefficient of variation of the event-stationary inter-event time, and by d² the limiting index of dispersion of counts, see [9]. Namely:

 c² = Var(T_n)/E²[T_n], d² := lim_{t→∞} Var(N(t))/E[N(t)]. (23)
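Both indices in (23) can be computed in closed form from (C, D): c² from the inter-event moments in (8), and d² from the linear coefficient of the variance in (2). A sketch (NumPy assumed; for a Poisson process both indices equal one):

```python
import numpy as np

def c2_d2(C, D):
    # c^2 from the inter-event moments, Eq. (8); d^2 from the linear
    # coefficient of Var(N(t)) in Eq. (2).
    p = C.shape[0]; one = np.ones(p)
    Q = C + D
    A = np.vstack([Q.T, np.ones(p)]); b = np.zeros(p + 1); b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    Pi = np.outer(one, pi)
    Dh = np.linalg.inv(Pi - Q) - Pi              # deviation matrix, Eq. (5)
    lam_star = pi @ D @ one
    alpha = (pi @ D) / lam_star
    Ci = np.linalg.inv(-C)
    M1 = alpha @ Ci @ one
    M2 = 2 * alpha @ Ci @ Ci @ one
    c2 = M2 / M1**2 - 1
    d2 = 1 + 2 * (pi @ D @ Dh @ D @ one) / lam_star
    return c2, d2

# For a Poisson process both indices equal one.
c2, d2 = c2_d2(np.array([[-2.0]]), np.array([[2.0]]))
assert np.isclose(c2, 1.0) and np.isclose(d2, 1.0)
```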

As a base, consider an MMPP with parameters , , . Applying the formulas in Section 2 for the matrices in (10) results in , , , , , and . Say that we now wish to approximate this process by an MTCP. How do we choose the parameters of this MTCP? For this we compare two alternatives. In the first, we seek an MTCP that closely resembles the MMPP via moments and lag-j autocorrelations; such a method was put forward in [5]. We refer to this alternative as the optimisation method. The second alternative is to use the approximation based on Theorem 3.4. We refer to this alternative as second order equivalence (SOE).

As we show in the comparative results below, the optimization method is only slightly better than SOE, with a negligible difference for typical applications. However, keep in mind that carrying out an approximation via our SOE is a straightforward task, while the optimization method requires heavier computational machinery.

For the optimization method, we calculate the first 50 autocorrelations of the MMPP from (9) and minimise the function

 f(x) = Σ_{k=1}^{50} (ϱ_k − ϱ̂_k(x))²,

where ϱ_k is the lag-k autocorrelation of the MMPP and ϱ̂_k(x) is the lag-k autocorrelation of an MTCP specified by the vector x = (x_1, …, x_12) and having matrices

 D = [ 0     x_1   x_2   x_3
       x_4   0     x_5   x_6
       x_7   x_8   0     x_9
       x_10  x_11  x_12  0 ],  C = diag(−D1_4). (24)

Then, applying the constrained nonlinear minimisation of the “fmincon” solver in MATLAB, we find the vector x that minimizes f. This minimisation is subject to

 M̂_1(x) = M_1, and ĉ²(x), M̂_3(x), ϱ̂_1(x), ϱ̂_2(x), ϱ̂_3(x)

being constrained to their corresponding values for the MMPP, within a prescribed tolerance.

Since this is generally a non-convex optimization problem, we experiment with randomly selected initial values for the “fmincon” solver and then keep the best solution found.
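For readers outside MATLAB, here is a sketch of the same idea using scipy.optimize.minimize in place of fmincon, with a 2-state MTCP and an arbitrary stand-in target autocorrelation vector for brevity; all rates and targets are hypothetical, and the moment constraints are omitted.

```python
import numpy as np
from scipy.optimize import minimize

def rho(x, j, p=2):
    # Lag-j autocorrelation, Eq. (9), of a p-state MTCP whose off-diagonal
    # D entries are the vector x (row-major order, as in Eq. (24)).
    D = np.zeros((p, p)); D[~np.eye(p, dtype=bool)] = x
    C = -np.diag(D.sum(axis=1))
    Q = C + D
    A = np.vstack([Q.T, np.ones(p)]); b = np.zeros(p + 1); b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    one = np.ones(p)
    alpha = (pi @ D) / (pi @ D @ one)
    Ci = np.linalg.inv(-C)
    M1 = alpha @ Ci @ one
    M2 = 2 * alpha @ Ci @ Ci @ one
    ET = alpha @ Ci @ np.linalg.matrix_power(Ci @ D, j) @ Ci @ one
    return (ET - M1**2) / (M2 - M1**2)

# Stand-in target autocorrelations (hypothetical).
target = np.array([0.05, 0.03, 0.02])
f = lambda x: sum((target[k] - rho(x, k + 1))**2 for k in range(len(target)))

x0 = np.array([1.0, 2.0])
res = minimize(f, x0, method="L-BFGS-B", bounds=[(1e-3, None)] * 2)
assert res.fun <= f(x0) + 1e-9      # the solver does not increase the objective
```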

Using the parameters of the resulting MTCP, the D matrices for the optimization method and for SOE are respectively given by

 D_opt = [ 0        12.3519  1.0078   0.9110
           9.4306   0        1.2491   3.4953
           3.3339   2.9924   0        39.2952
           3.4383   3.5185   39.4102  0 ],  D_SOE = [ 0   5   5   0
                                                       5   0   0   5
                                                       5   0   0   35
                                                       0   5   35  0 ].

Note that, as opposed to D_opt, the matrix D_SOE is obtained immediately from Eq. (11). In Table 1, we present a comparison of the moments and first three autocorrelation values for an MMPP and its corresponding MTCP. The MTCP is obtained through both the optimisation method (based on the inter-event process fitting presented in [5]) and our SOE method. As the table shows, the MTCP obtained from our SOE has the same first moment as the MTCP resulting from the optimisation procedure. This comes from the fact that the first moments of the time-stationary counting process and the event-stationary inter-event process determine one another (λ* = 1/M_1).

As we expect, the value of d², which relates to the counting process, is exactly the same for the given MMPP and the corresponding MTCP resulting from our SOE method. Based on Table 1, the MMPP and the MTCP resulting from the optimisation method have comparable moments and autocorrelations (their differences are negligible). But, as expected, this is not the case for the MTCP obtained from our SOE method.

To see the effect on an example stochastic model, we now consider a queueing system amenable to matrix analytic analysis. Assume that we have an MMPP/M/1 queue and we wish to approximate it by an MTCP/M/1 queue. We consider the MTCP obtained through the optimization method and through the SOE method and compare how each of these arrival process approximations behaves in terms of queue performance analysis.

Figure 2(a) demonstrates the differences between the mean queue length curves of the MMPP/M/1 queue (green curve) and the MTCP/M/1 queue, where the MTCP is obtained either from the optimisation method (red curve) or from our SOE (blue curve). Note that the workload is varied by changing the mean of the service time. Furthermore, the resulting mean queue lengths in Figure 2(a) are calculated by considering the MAP/M/1 queue as a QBD and then using the SMC solver; see [1] for more details. The same method is used to obtain the corresponding measures for Figure 2(b) and Figure 2(c).

Figure 2(b) presents the proportional error of the mean queue length of a steady-state MMPP/M/1 queue when the MMPP is approximated by an MTCP. This MTCP is obtained from either the optimization method (red curve) or our proposed SOE (blue curve). As the figure shows, this error starts at zero, increases with the workload up to a small maximum, and then decreases again. Hence, while the MTCP resulting from the optimisation procedure yields results closer to the original model, the proportional error of approximating with the MTCP resulting from the (simpler) SOE is negligible too. To reinforce this observation, we consider the proportional error in another important performance measure, the variance of the queue length, when approximating an MMPP/M/1 queue with an MTCP/M/1 queue. Figure 2(c) demonstrates this proportional error; the blue curve is obtained by considering the SOE of Section 3 and the red curve by considering the optimisation procedure for finding the corresponding MTCP. In both cases, the workload is varied by changing the service rate. As Figure 2(c) shows, the proportional error of the variance with our SOE also remains small.

Therefore, given the simplicity of finding the associated MTCP suggested in Section 3 and the negligible proportional errors in the mean and variance of the queue length, we suggest that our method is suitable for approximating an MMPP with an MTCP in queueing applications.

5 Conclusion and Open Problems

For the two extreme examples of MAPs, the MMPP (where no event is associated with a transition of the underlying CTMC) and the MTCP (where every transition of the underlying CTMC is accompanied by an event), we have shown that the first two moments of counts for a large class of MMPPs (slow MMPPs) and their associated MTCPs are the same. Therefore, from a modelling point of view, one can construct a second-order equivalent MMPP from a given MTCP, and for a given slow MMPP there is an MTCP with the same first two moments of counts. A question that arises is whether a similar relation holds for the whole class of MMPPs. In other words, can we find a relation between non-slow MMPPs and MTCPs (or perhaps another class of MAPs)?

We also carried out parameter matching for these MAPs. We considered the parameters of a slow MMPP and used the nonlinear optimisation procedure presented in [5] to estimate the parameters of an MTCP with the same first moments and autocorrelations of the inter-event time process. Comparing the results of this method with the parameters resulting from matching the first two moments of counts, in the context of the queueing model of [1], shows that the differences between these approaches are negligible. There are papers related to fitting data with a MAP; see for instance [18] and references therein. But, as time-homogeneity of the data trace is a significant assumption in available fitting methods, a question that arises for future work is: what is the best fitting method if the real data trace is non-homogeneous?

References

• [1] Asanjarani A. and Nazarathy Y. A queueing approximation of MMPP/PH/1. Queueing Theory and Network Applications, pages 41–51, Springer (2016)
• [2] Asanjarani A. and Nazarathy Y. Stationary Markovian Arrival Processes: Results and Open Problems. arXiv preprint arXiv:1905.01736 (2019)
• [3] Asmussen S. Applied Probability and Queues, volume 51. Springer (2003)
• [4] Bean N.G. and Green D.A. When is a MAP Poisson? Mathematical and Computer Modelling, 31(10-12):31–46 (2000)
• [5] Casale G., Zhang E.Z., and Smirni E. Trace data characterization and fitting for Markov modeling. Performance Evaluation, 67(2):61–79 (2010)
• [6] Casale G., Zhang E.Z., and Smirni E. KPC-Toolbox: Best recipes for automatic trace fitting using Markovian Arrival Processes, Performance Evaluation, 67(9):873–896 (2010)
• [7] Coolen-Schrijner P. and Van Doorn E.A. The deviation matrix of a continuous-time Markov chain. Probability in the Engineering and Informational Sciences, 16(03):351–366 (2002)
• [8] Fischer W. and Meier-Hellstern K. The Markov-modulated Poisson process (MMPP) cookbook. Performance Evaluation, 18(2):149–171 (1993)
• [9] Gusella R. Characterizing the variability of arrival processes with indexes of dispersion. Selected Areas in Communications, IEEE Journal on, 9(2):203–211 (1991)
• [10] Hautphenne S., Kerner Y., Nazarathy Y., and Taylor P. The second order terms of the variance curves for some queueing output processes. arXiv preprint arXiv:1311.0069 (2013)
• [11] He Q.M. Fundamentals of Matrix-Analytic Methods. Springer (2014)
• [12] He Q.M. and Neuts M.F. On episodic queues, SIAM Journal on Matrix Analysis and Applications, 18(1): 223–248, SIAM (1997)
• [13] Horváth A., Horváth G., and Telek M. A joint moments based analysis of networks of MAP/MAP/1 queues. Performance Evaluation, 67(9):759–778 (2010)
• [14] Horváth G. Matching marginal moments and lag autocorrelations with MAPs. In Proceedings of the 7th International Conference on Performance Evaluation Methodologies and Tools, pages 59–68. ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering) (2013)
• [15] Kim S. Modeling cross correlation in three-moment four-parameter decomposition approximation of queueing networks. Operations research, 59(2): 480–497, INFORMS (2011)
• [16] Nazarathy Y. and Weiss G. The asymptotic variance rate of the output process of finite capacity birth-death queues. Queueing Systems, 59(2):135–156 (2008)
• [17] Neuts M.F. The burstiness of point processes, Stochastic Models, 9(3):445–466. Taylor & Francis (1993)
• [18] Okamura H. and Dohi T. Fitting phase-type distributions and Markovian arrival processes: Algorithms and tools. In Principles of Performance and Reliability Modeling and Evaluation, pages 49–75. Springer (2016)
• [19] Rudemo M. Point processes generated by transitions of Markov chains. Advances in Applied Probability, pages 262–286 (1973)
• [20] Scott S.L. and Smyth P. The Markov modulated Poisson process and Markov Poisson cascade with applications to web traffic data. Bayesian statistics, 7: 671–680 (2003)