1 Introduction
Computational modeling of dynamic processes on complex networks is a thriving research area [1, 2, 3]. Arguably, the most common formalism for spreading processes is the continuous-time SIS model and its variants [4, 5, 6]. Generally speaking, an underlying contact network specifies the connectivity between nodes (i.e., agents), and each agent occupies one of several mutually exclusive (local) states (or compartments). In the well-known SIS model, these states are susceptible (S) and infected (I). Infected nodes can recover (become susceptible again) and propagate their infection to neighboring susceptible nodes.
SIS-type models have proven extremely useful for analyzing and predicting the spread of opinions, rumors, and memes in online social networks [7, 8], as well as neural activity [9, 10], the spread of malware [11], and blackouts in financial institutions [12, 13].
Previous research has focused mostly on models where the probability of an event (e.g., infection or recovery) happening in the next (infinitesimal) time unit is constant, i.e., independent of the time the agent has already spent in its current state. We call such agents memoryless and the corresponding stochastic process Markovian. The semantics of such a model can be described using a so-called (discrete-state) continuous-time Markov chain (CTMC). One particularly important consequence of the memoryless property is that the random time until an agent changes its state, either because of an interaction with another agent or because of a spontaneous transition, follows an exponential distribution. The distribution of this residence time is parameterized by an (interaction-specific) rate [4]. Each agent has an associated event-modulated Poisson process whose rate depends on the agent's state and the states of its neighbors [14]. For instance, the infection of an agent increases the rate at which its susceptible neighbors switch to the infected state.
However, exponentially distributed residence times are an unrealistic assumption in many real-world systems. This holds in particular for the spread of epidemics [15, 16, 17, 18, 19], for the diffusion of opinions in online social networks [20, 21], and for the inter-spike times of neurons [22], as empirical results show. Assuming time delays that follow non-exponential distributions, however, complicates the analysis of such processes and typically only allows Monte Carlo simulations, which suffer from high computational costs. Recently, the Laplace-Gillespie algorithm (LGA) for the simulation of non-Markovian dynamics has been introduced by Masuda and Rocha in [14]. It is based on the non-Markovian Gillespie algorithm (nMGA) by Boguñá et al. [23] and minimizes the cost of sampling inter-event times. However, both methods require a computationally expensive update of an agent's neighborhood in each simulation step, which renders them inefficient for large-scale networks. In the context of Markovian processes on networks, it has recently been shown that rejection-based simulation can overcome this limitation [24, 25, 26].
Here, we extend the idea of rejection-based simulation to non-Markovian networked systems, proposing REDSim, a rejection-based, event-driven simulation approach. Specifically, we combine three ideas to obtain an efficient simulation of non-Markovian processes: (i) we express the distributions of inter-event times through time-varying instantaneous rates, (ii) we sample events based on an over-approximation of these rates and compensate via a rejection step, and (iii) we use a priority queue to sample the next event. The combination of these elements reduces the time complexity of each simulation step. Specifically, if an agent changes its state, no update of the rates of neighboring agents is necessary. This comes at the cost of rejection events that counterbalance the missing information about the neighborhood. However, using a priority queue renders the computational burden of each rejection event very small.
The remainder of the paper is organized as follows: We describe our framework for non-Markovian dynamics in Section 2 and provide a review of previous simulation approaches in Section 3. Next, we propose our rejection-based simulation algorithm in Section 4. Section 5 presents numerical results, and we conclude our work in Section 6.
2 Multi-Agent Model
This section introduces the underlying formalism to express agent-based dynamics on networks. Let be an undirected, finite graph without self-loops, called the contact network. Nodes are also referred to as agents.
2.0.1 Network State.
The current state of a network is described by two functions:

assigns to each agent a local state , where is a finite set of local states (e.g., for the SIS model);

, describes the residence time of each agent, i.e. the time since the last change of state of the agent.
We say that an agent fires when it changes its state and refer to the remaining time until it fires as its time delay. The neighborhood state of an agent is a multiset containing the states of all neighboring agents together with their respective residence times:
The set of all possible neighborhood states of all agents in a given system is denoted by .
2.0.2 Network Dynamics.
The dynamics of the network is described by assigning to each agent two functions and :

defines the instantaneous rate of , i.e. if , then the probability that fires in the next infinitesimal time interval is ;

determines the state probabilities when a transition occurs. Here, denotes the set of all probability distributions over . Hence if , then, when agent fires, the next local state is with probability .
Note that we do not consider pathological behavior here, e.g. where is defined in such a way that an infinite number of simulation steps is possible in finite time.
A multiagent network model is completely specified by a tuple , where denotes a function that assigns to each node an initial state.
2.0.3 Example.
In the classical SIS model we have and and are the same for all agents, i.e.,
Here, denote the infection and recovery rate constants, respectively. Note that the infection rate is proportional to the number of infected neighbors, whereas the rate of recovery is independent of neighboring agents. Moreover, maps a state to one iff and to zero otherwise. The model is Markovian as neither nor depends on the residence time of any agent.
2.1 Semantics
We will specify the semantics of a multiagent model by describing a stochastic simulation algorithm that generates trajectories of the system. It is based on a race condition among agents: each agent picks a random time until it will fire, but only the one with the shortest time delay wins and changes its state.
2.1.1 Time Delay Density.
Assume that is the time increment of the algorithm. We define for each the effective rate as
describes the neighborhood state of in time units, assuming that all agents remain in their current state. Next, we assume that for each node , the probability density of the (non-negative) time delay is , i.e. is the density of firing after time units. Leveraging the theory of renewal processes [27], we find the relationship
(1) 
We assume to be zero if the denominator is zero. Note that using this equation, we can derive rate functions from a given time delay distribution (e.g. uniform, log-normal, gamma, and so on). If it cannot be derived analytically, it can be computed numerically.
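Eq. (1) is the hazard-rate relation familiar from renewal theory: the instantaneous rate is the delay density divided by the survival probability. The following Python sketch (a numerical illustration with names of our own choosing, not the paper's Julia code) evaluates this relation for two delay distributions:

```python
import math

def hazard(pdf, cdf, tau):
    """Instantaneous rate: density of firing at tau divided by the survival
    probability; defined as zero whenever the denominator is zero."""
    denom = 1.0 - cdf(tau)
    return pdf(tau) / denom if denom > 0 else 0.0

# Exponential delay with rate 2.0: the hazard is constant at 2.0.
rate = 2.0
exp_pdf = lambda t: rate * math.exp(-rate * t)
exp_cdf = lambda t: 1.0 - math.exp(-rate * t)

# Uniform delay on [0, b]: the hazard 1/(b - tau) diverges as tau approaches b.
b = 1.0
uni_pdf = lambda t: 1.0 / b if 0.0 <= t <= b else 0.0
uni_cdf = lambda t: min(max(t / b, 0.0), 1.0)

print(hazard(exp_pdf, exp_cdf, 0.3))  # 2.0, independent of tau
print(hazard(uni_pdf, uni_cdf, 0.9))  # 10.0, growing toward the bound
```

The constant hazard of the exponential case and the diverging hazard of the uniform case mirror the examples discussed next.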
For example, a constant rate function corresponds to an exponential time delay distribution with rate . Fig. 5 (b) illustrates the rate function when is the uniform distribution on .
2.1.2 Sampling Time Delays.
The effective rate allows us to sample the time delay after which agent fires, using the inversion method. First, we sample an exponential random variate with rate , then we integrate to find such that
(2) 
In general, it is possible to precompute the integral [28], but its parameterization (on states, residence times, etc.) renders this difficult.
Another viable approach is rejection sampling. Assume that we have such that for all . We start with . In each step, we sample an exponentially distributed random variate with rate and set . We accept with probability . Otherwise, we reject it and repeat the process. If a reasonable over-approximation can be constructed, this is typically much faster than the integral approach in (2).
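This rejection scheme is the classical thinning method for inhomogeneous point processes. A minimal Python sketch (our own helper, assuming a constant upper bound on the rate):

```python
import random

def sample_delay(rate, rate_bound, rng):
    """Sample a time delay for the time-varying rate function rate(tau) by
    thinning: propose increments from the bounding exponential and accept a
    candidate tau with probability rate(tau) / rate_bound."""
    assert rate_bound > 0.0
    tau = 0.0
    while True:
        tau += rng.expovariate(rate_bound)  # candidate from the bounding process
        if rng.random() < rate(tau) / rate_bound:
            return tau

# Sanity check: a constant rate of 2.0 (bounded by 3.0) must reproduce an
# exponential delay with mean 1/2, regardless of the rejections.
rng = random.Random(42)
samples = [sample_delay(lambda t: 2.0, 3.0, rng) for _ in range(20000)]
print(sum(samples) / len(samples))  # close to 0.5
```

The tighter the bound, the fewer proposals are discarded before one is accepted.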
2.1.3 Naïve Simulation Algorithm.
The following simulation algorithm generates statistically correct trajectories of the model. It starts by initializing the global clock and setting for all . The algorithm repeatedly performs simulation steps until a predefined time horizon or some other stopping criterion is reached. Each simulation step is as follows:

Generate a random time delay (candidate) for each agent using . Identify agent with the smallest time delay .

Pick the next state for according to and set . Set and ().

Set to update the global clock and go to Line 1.
Note that this algorithm is very inefficient as it requires an expensive iteration over all agents and sampling of time delays in each step.
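For concreteness, the naïve algorithm can be written down directly. The sketch below (Python, our own illustration rather than the paper's implementation) instantiates it for the Markovian SIS special case, where residence times do not influence the rates:

```python
import random

INFECTION_RATE, RECOVERY_RATE = 0.6, 1.0  # illustrative SIS rate constants

def sis_delay(agent, state, graph, rng):
    """Candidate time delay for one agent; exponential because this example is Markovian."""
    if state[agent] == "I":
        return rng.expovariate(RECOVERY_RATE)
    infected = sum(1 for v in graph[agent] if state[v] == "I")
    return rng.expovariate(INFECTION_RATE * infected) if infected else float("inf")

def naive_simulation(graph, state, horizon, seed=0):
    rng = random.Random(seed)
    t = 0.0
    while True:
        # Step 1: sample a candidate delay for every agent; the smallest wins the race.
        delays = {u: sis_delay(u, state, graph, rng) for u in graph}
        winner = min(delays, key=delays.get)
        if delays[winner] == float("inf") or t + delays[winner] > horizon:
            return state
        # Step 2: the winning agent fires (SIS transitions are deterministic flips).
        state[winner] = "S" if state[winner] == "I" else "I"
        # Step 3: advance the global clock and repeat.
        t += delays[winner]

# Ring of four agents, one initially infected.
graph = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
final = naive_simulation(graph, {0: "I", 1: "S", 2: "S", 3: "S"}, horizon=5.0)
print(final)
```

Each step touches every agent, which is exactly the inefficiency the following sections address.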
3 Previous Simulation Approaches
Most recent work on non-Markovian dynamics focuses on the mathematical modeling of such processes [29, 30, 31, 32, 33]. In particular, research has focused on how specific distributions (e.g. constant recovery times) alter the properties of epidemic spreading, such as the epidemic threshold (see [3, 4] for an overview). However, only a few approaches are known for the simulation of non-Markovian dynamics [23, 14]. We briefly review them below.
3.1 NonMarkovian Gillespie Algorithm
Boguñá et al. propose a direct generalization of the Gillespie algorithm to non-Markovian systems, nMGA, which is statistically exact but computationally expensive [23]. The algorithm is conceptually similar to our baseline in Section 2.1 but computes the time delay using so-called survival functions. An agent's survival function determines the probability that its time delay is larger than a certain time . The joint survival function of all agents determines the probability that all time delays are larger than , which can be used to sample the next event time.
The drawback of the nMGA is that it is necessary to iterate over all agents in each step in order to construct their joint survival function. As a fast approximation, the authors suggest using only the current instantaneous rate at (i.e., ) and assuming that all rates remain constant until the next event. This is correct in the limit of infinitely many agents, because as the number of agents approaches infinity, the time until the next firing of any agent approaches zero.
3.2 LaplaceGillespie Algorithm
The LGA, introduced by Masuda and Rocha in [14], aims at reducing the computational cost of finding the next event time compared to the nMGA, while remaining statistically correct. It assumes that the time delay distributions can be expressed as a weighted average of exponential distributions
where is a PDF over the rate . This formulation of , while very elegant, limits the applicability to cases where the corresponding survival function is completely monotone [14], which restricts the set of possible inter-event time distributions.
The LGA has two advantages. Firstly, we can sample by first sampling according to and then, instead of the numerical integration in Eq. (2), computing , where is uniformly distributed on . Secondly, we can assume that the sampled for a particular agent remains constant until one of its neighbors fires. Thus, in each step, it is only necessary to update the rates of the neighbors of the firing agent, and not those of all agents.
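As an illustration of the first advantage (in Python, with parameters of our own choosing): drawing the rate from a gamma density and then a delay from the corresponding exponential yields, after mixing, a Lomax (Pareto type II) delay distribution, whose survival function is completely monotone, and no numerical integration is needed.

```python
import math
import random

def lga_delay(rng, shape=3.0, scale=1.0):
    """LGA-style sampling: draw a rate lam from a gamma density, then return
    tau = -ln(u) / lam. Mixing over lam gives a Lomax delay distribution
    with survival (1 + scale * tau) ** (-shape)."""
    lam = rng.gammavariate(shape, scale)
    return -math.log(1.0 - rng.random()) / lam

rng = random.Random(7)
taus = [lga_delay(rng) for _ in range(50000)]
print(sum(taus) / len(taus))  # close to 1 / (shape - 1) = 0.5
```

Each delay costs two random variates and a logarithm, independently of the network.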
4 Our Method
Rejection sampling for the efficient simulation of Markovian stochastic processes on complex networks has been proposed recently [24, 25, 26, 34], but not for the non-Markovian case, where arbitrary distributions for the inter-event times are considered.
Here, we propose the REDSim algorithm for the generation of statistically correct simulations of non-Markovian network models as described in Section 2. The main idea of REDSim is to rely on rejection sampling to reduce the computational cost, making it unnecessary to update the rates of the neighbors of a firing agent. Independently of that, rejection sampling can also be utilized to sample without numerical integration.
4.1 Rate OverApproximation
Recall that expresses how the instantaneous rate of changes over time, assuming that no neighboring agent changes its state. A key ingredient of our method is now , which upper-bounds the instantaneous rate of , assuming that all neighbors are allowed to freely change their state as often as possible. That is, at all times, is an upper bound of , taking into consideration all possible states of the neighborhood.
Consider again the Markovian SIS example. The recovery of an infected node does not depend on the agent's neighborhood at all. Its rate is always , which is a trivial upper bound. A susceptible node becomes infected with rate times the number of infected neighbors. Thus, the instantaneous infection rate of an agent can be bounded by , where is the degree of . Upper bounds may also be constant or depend on time. Consider, for example, a recovery time that is uniformly distributed on . In this case, approaches infinity (cf. Fig. 5b), making a constant upper bound impossible. For multi-agent models, a time-dependent upper bound always exists, since we can compute the maximal instantaneous rate w.r.t. all reachable neighborhood states.
4.2 The REDSim Algorithm
For a given multi-agent model specification and given upper bounds , we propose a statistically exact simulation algorithm, which is based on two basic data structures:
 Labeled Graph

A graph represents the contact network and each agent (node) is annotated with its current state and , the time point of its last state change.
 Event Queue

The event queue stores the list of future events, where an event is a tuple . Here, is the agent that fires, is the prospective absolute time point of firing, and is an over-approximation of the true effective rate (at time point ). The queue is sorted w.r.t. and initialized by generating one event per agent.
A global clock, , keeps track of the elapsed time since the simulation started. We use instead of to avoid updates for all agents after each event (i.e., ). We perform simulation steps until some termination criterion is fulfilled; each step is as follows:

Take the first event from the event queue and update .

Evaluate the true instantaneous rate of at the current system state.

With probability , reject the firing and go to Line 5.

Randomly choose the next state of according to the distribution
. If : set and . 
Generate a new event for agent and push it to the event queue.

Go to Line 1.
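The steps above can be sketched compactly with a binary heap. The Python fragment below is our own illustration for the Markovian SIS special case (constant upper bounds, so new delays come from exponentials, and the re-queueing after a rejection is trivially correct thanks to memorylessness); it is not the paper's Julia implementation.

```python
import heapq
import random

INFECTION_RATE, RECOVERY_RATE = 0.6, 1.0  # illustrative SIS rate constants

def true_rate(u, state, graph):
    """Exact instantaneous rate of agent u; inspects the neighborhood lazily."""
    if state[u] == "I":
        return RECOVERY_RATE
    return INFECTION_RATE * sum(1 for v in graph[u] if state[v] == "I")

def rate_bound(u, state, graph):
    """Neighborhood-independent upper bound: pretend all neighbors are infected."""
    return RECOVERY_RATE if state[u] == "I" else INFECTION_RATE * len(graph[u])

def new_event(u, state, graph, t, rng):
    b = rate_bound(u, state, graph)
    delay = rng.expovariate(b) if b > 0 else float("inf")
    return (t + delay, u, b)

def redsim(graph, state, horizon, seed=0):
    rng = random.Random(seed)
    queue = [new_event(u, state, graph, 0.0, rng) for u in graph]
    heapq.heapify(queue)
    while queue:
        t, u, bound = heapq.heappop(queue)          # 1. earliest event
        if t > horizon:
            return state
        lam = true_rate(u, state, graph)            # 2. evaluate the exact rate
        if rng.random() < lam / bound:              # 3. accept; otherwise a rejection event
            state[u] = "S" if state[u] == "I" else "I"  # 4. fire (SIS flips are deterministic)
        heapq.heappush(queue, new_event(u, state, graph, t, rng))  # 5. re-queue u only
    return state

graph = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(redsim(graph, {0: "I", 1: "S", 2: "S"}, horizon=3.0))
```

Note that only the firing agent is re-queued in step 5; the events of its neighbors stay untouched, which is the point of the over-approximated bounds.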
The correctness of REDSim can be shown similarly to [26, 24] (see also the proof sketch in Appendix A). Note that in all approaches, evaluating an agent's instantaneous rate is linear in the number of its neighbors. In previous approaches, the rate has to be updated for all neighbors of a firing agent; in REDSim, only the rate of the firing agent has to be updated. The key asset of REDSim is that, due to the over-approximation of the rate function, we do not need to update the neighborhood of the firing agent , even though the neighbors' respective rates might change as a result of the event. We provide a more detailed analysis of the time complexity of REDSim in Appendix B.
4.2.1 Event Generation.
To generate new events in Line 5, we sample a random time delay and set . To sample according to the over-approximated rate , we either use the integration approach of Eq. (2) or sample directly from an exponential distribution with a constant upper-bounded rate (cf. Fig. 5d).
To sample from an exponential distribution, we need an upper bound that is constant in time for all . Hence, we simply set and sample from an exponential distribution with rate . Otherwise, when a constant upper bound either does not exist or is infeasible to construct, we use numerical integration over (see Eq. (2)) and set . Alternatively, when has the required form (cf. Section 3), we can even use an LGA-like approach to sample [14] (and also set ).
4.2.2 Discussion.
We expect REDSim to perform poorly only in some special cases, where either the construction of an upper bound is numerically too expensive or the difference between the upper bound and the actual average rate is very large, which would render the number of rejection events too high.
It is easy to extend REDSim to different types of non-Markovian behavior. For instance, we might keep track of the number of firings of an agent and parameterize and accordingly to generate the behavior of self-exciting point processes or to cause correlated firings among agents [36, 37].
Note that we can turn the method into a rejection-free approach by generating a new event for and all of its neighbors in Line 5, while taking the new state of into consideration (see also Appendix A).
5 Case Studies
We demonstrate the effectiveness of our approach on classical epidemic-type processes and synthetically generated networks following the configuration model with a truncated power-law degree distribution [38]. That is, for . We use (a smaller corresponds to a larger average degree). The implementation is written in Julia and publicly available at github.com/gerritgr/nonmarkoviansimulation. As a baseline for comparison, we use the rejection-free variant of the algorithm in which neighbors are updated after an event (as described at the end of Section 4.2). The evaluation was performed on a 2017 MacBook Pro with a 3.1 GHz Intel Core i5 CPU, and results are shown in Fig. 8.
5.0.1 SIS Model.
We consider an SIS model (with and as defined above), but infected nodes become less infectious over time. That is, the rate at which an infected agent with residence time attacks its susceptible neighbors is for . This shifts the exponential distribution to the left. We upper-bound the infection rate of an agent with degree by , which is constant in time. Thus, we sample using an exponential distribution. The time until an infected agent recovers is, independent of its neighborhood, uniformly distributed in (similar to [39]). Hence, we can sample it directly. We start with infected agents.
5.0.2 Voter Model.
The voter model describes the competition between two opinions: agents in state A switch to B and vice versa (i.e., is deterministic). The time until an agent switches follows a Weibull distribution (similar to [23, 40]):
where we set if and if . We let the fraction of opposing neighbors modulate , i.e., , where denotes the number of neighbors currently in state B and is the degree of agent (and analogously for A). Hence, the instantaneous rate depends on the current residence time and the states of the neighboring agents. To get an upper bound for the rate, we set and obtain . We use numerical integration to sample in order to show that REDSim also performs well in the case of this more costly sampling. We start with of the agents in each state.
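In this setup, the instantaneous rate is the Weibull hazard scaled by the fraction of opposing neighbors, and setting that fraction to one gives the neighborhood-independent upper bound. A small Python check (parameter values are our own, not those used in the experiments):

```python
def weibull_hazard(tau, alpha, beta, m=1.0):
    """Weibull hazard (alpha / beta) * (tau / beta) ** (alpha - 1), modulated
    by the fraction m of opposing neighbors; m = 1 yields the upper bound."""
    return m * (alpha / beta) * (tau / beta) ** (alpha - 1.0)

alpha, beta = 2.0, 1.0        # illustrative shape and scale
degree, opposing = 10, 3      # agent with 10 neighbors, 3 holding the opposite opinion
rate = weibull_hazard(0.5, alpha, beta, m=opposing / degree)
bound = weibull_hazard(0.5, alpha, beta)
print(rate, bound)            # the bound dominates the modulated rate
```

Since the modulation factor lies in [0, 1], the bound holds at every residence time, which is all the rejection step requires.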
5.0.3 Discussion.
Our results provide strong evidence for the usefulness of rejection sampling for non-Markovian simulation. As expected, we find that the number of interconnections (edges) and the number of agents influence the runtime behavior. Especially for REDSim, the number of edges proves to be much more relevant than the number of agents alone. Our method consistently outperforms the baseline by up to several orders of magnitude. The gain (REDSim speed divided by baseline speed) ranges from ( nodes, voter model, ) to ( nodes, SIS model, ).
We expect the baseline algorithm to be comparable with the LGA, as both of them only update the rates of the relevant agents after an event. Moreover, in the SIS model, sampling the next event times is very cheap. However, a detailed statistical comparison remains to be performed (both case studies could not straightforwardly be simulated with the LGA due to its constraints on the time delays). Note that, when the LGA is applicable, its key asset, the fast sampling of time delays, can also be used in REDSim. We also tested an nMGA-like implementation in which rates are considered to remain constant until the next event. However, even though it is only approximate, this method was slower than the baseline.
Note that the SIS model is somewhat unfavorable for REDSim, as it generates a large number of rejection events when only a small fraction of agents is infected. Consider, for instance, an agent with many neighbors of which only a few are infected. The over-approximation essentially assumes that all neighbors are infected when sampling the next event time (and, in addition, over-approximates the rate of each individual neighbor), leading to a high rejection probability. Nevertheless, the low computational cost of each rejection event outweighs this.
6 Conclusions
We presented a rejection-based algorithm for the simulation of non-Markovian agent models on networks. The key advantage of the rejection-based approach is that, in each simulation step, it is no longer necessary to update the rates of neighboring agents. This greatly reduces the time complexity of each step compared to previous approaches and makes our method viable for the simulation of dynamical processes on real-world networks. As future work, we plan to automate the computation of the over-approximation and to investigate correlated time delays [41, 14] and self-exciting point processes [36, 37].
6.0.1 Acknowledgements.
We thank Guillaume St-Onge for helpful comments on non-Markovian dynamics. This research has been partially funded by the German Research Council (DFG) as part of the Collaborative Research Center Methods and Tools for Understanding and Controlling Privacy.
References
 [1] AlbertLászló Barabási. Network science. Cambridge university press, 2016.
 [2] John Goutsias and Garrett Jenkinson. Markovian dynamics on complex reaction networks. Physics Reports, 529(2):199–264, 2013.
 [3] Romualdo PastorSatorras, Claudio Castellano, Piet Van Mieghem, and Alessandro Vespignani. Epidemic processes in complex networks. Reviews of modern physics, 87(3):925, 2015.
 [4] István Z. Kiss, J. C. Miller, and P. L. Simon. Mathematics of epidemics on networks. Forthcoming in Springer TAM series, 2016.
 [5] Mason Porter and James Gleeson. Dynamical systems on networks: A tutorial, volume 4. Springer, 2016.
 [6] Helena Sofia Rodrigues. Application of sir epidemiological model: new trends. arXiv preprint arXiv:1611.02565, 2016.
 [7] Maksim Kitsak, Lazaros K Gallos, Shlomo Havlin, Fredrik Liljeros, Lev Muchnik, H Eugene Stanley, and Hernán A Makse. Identification of influential spreaders in complex networks. Nature physics, 6(11):888, 2010.
 [8] Laijun Zhao, Jiajia Wang, Yucheng Chen, Qin Wang, Jingjing Cheng, and Hongxin Cui. Sihr rumor spreading model in social networks. Physica A: Statistical Mechanics and its Applications, 391(7):2444–2453, 2012.
 [9] AV Goltsev, FV De Abreu, SN Dorogovtsev, and JFF Mendes. Stochastic cellular automata model of neural networks. Physical Review E, 81(6):061921, 2010.
 [10] Jil Meier, X Zhou, Arjan Hillebrand, Prejaas Tewarie, Cornelis J Stam, and Piet Van Mieghem. The epidemic spreading model and the direction of information flow in brain networks. NeuroImage, 152:639–646, 2017.
 [11] Chenquan Gan, Xiaofan Yang, Wanping Liu, Qingyi Zhu, and Xulong Zhang. Propagation of computer virus under human intervention: a dynamical model. Discrete Dynamics in Nature and Society, 2012, 2012.
 [12] Robert M May and Nimalan Arinaminpathy. Systemic risk: the dynamics of model banking systems. Journal of the Royal Society Interface, 7(46):823–838, 2009.
 [13] Robert Peckham. Contagion: epidemiological models and financial crises. Journal of Public Health, 36(1):13–17, 2013.
 [14] Naoki Masuda and Luis EC Rocha. A gillespie algorithm for nonmarkovian stochastic processes. SIAM Review, 60(1):95–115, 2018.
 [15] Alun L Lloyd. Realistic distributions of infectious periods in epidemic models: changing patterns of persistence and dynamics. Theoretical population biology, 60(1):59–71, 2001.
 [16] GL Yang. Empirical study of a nonmarkovian epidemic model. Mathematical Biosciences, 14(12):65–84, 1972.
 [17] SP Blythe and RM Anderson. Variable infectiousness in HIV transmission models. Mathematical Medicine and Biology: A Journal of the IMA, 5(3):181–200, 1988.
 [18] T. D. Hollingsworth, R. M. Anderson, and C. Fraser. HIV-1 transmission, by stage of infection. The Journal of Infectious Diseases, 198(5):687–693, 2008.
 [19] Z Feng and HR Thieme. Endemic models for the spread of infectious diseases with arbitrarily distributed disease stages i: General theory. SIAM J. Appl. Math, 61(3):803–833, 2000.
 [20] AlbertLaszlo Barabasi. The origin of bursts and heavy tails in human dynamics. Nature, 435(7039):207, 2005.
 [21] Alexei Vázquez, Joao Gama Oliveira, Zoltán Dezsö, KwangIl Goh, Imre Kondor, and AlbertLászló Barabási. Modeling bursts and heavy tails in human dynamics. Physical Review E, 73(3):036127, 2006.
 [22] William R Softky and Christof Koch. The highly irregular firing of cortical cells is inconsistent with temporal integration of random epsps. Journal of Neuroscience, 13(1):334–350, 1993.
 [23] Marian Boguná, Luis F Lafuerza, Raúl Toral, and M Ángeles Serrano. Simulating nonmarkovian stochastic processes. Physical Review E, 90(4):042108, 2014.
 [24] Wesley Cota and Silvio C Ferreira. Optimized gillespie algorithms for the simulation of markovian epidemic processes on large and heterogeneous networks. Computer Physics Communications, 219:303–312, 2017.
 [25] Guillaume StOnge, JeanGabriel Young, Laurent HébertDufresne, and Louis J Dubé. Efficient sampling of spreading processes on complex networks using a composition and rejection algorithm. arXiv preprint arXiv:1808.05859, 2018.
 [26] Gerrit Großmann and Verena Wolf. Rejectionbased simulation of stochastic spreading processes on complex networks. In International Workshop on Hybrid Systems Biology, pages 63–79. Springer, 2019.
 [27] David Roxbee Cox. Renewal theory. 1962.
 [28] Raghu Pasupathy. Generating homogeneous poisson processes. Wiley encyclopedia of operations research and management science, 2010.
 [29] I. Z Kiss, G. Röst, and Z. Vizi. Generalization of pairwise models to nonmarkovian epidemics on networks. Physical review letters, 115(7):078701, 2015.
 [30] Lorenzo Pellis, Thomas House, and Matt J Keeling. Exact and approximate moment closures for nonmarkovian network epidemics. Journal of Theoretical Biology, 382:160–177, 2015.
 [31] HangHyun Jo, Juan I Perotti, Kimmo Kaski, and János Kertész. Analytically solvable model of spreading dynamics with nonpoissonian processes. Physical Review X, 4(1):011041, 2014.
 [32] N Sherborne, JC Miller, KB Blyuss, and IZ Kiss. Meanfield models for nonmarkovian epidemics on networks: from edgebased compartmental to pairwise models. arXiv preprint arXiv:1611.04030, 2016.
 [33] Michele Starnini, James P Gleeson, and Marián Boguñá. Equivalence between nonmarkovian and markovian dynamics in epidemic spreading processes. Physical review letters, 118(12):128301, 2017.
 [34] C. L Vestergaard and M. Génois. Temporal gillespie algorithm: Fast simulation of contagion processes on timevarying networks. PLoS computational biology, 11(10):e1004579, 2015.
 [35] Gerrit Großmann, Luca Bortolussi, and Verena Wolf. Rejectionbased simulation of nonmarkovian agents on complex networks. arXiv preprint arXiv:2878114, 2016.
 [36] Yosihiko Ogata. On lewis’ simulation method for point processes. IEEE Transactions on Information Theory, 27(1):23–31, 1981.
 [37] Angelos Dassios, Hongbiao Zhao, et al. Exact simulation of hawkes process with exponentially decaying intensity. Electronic Communications, 18, 2013.
 [38] B K Fosdick, D B Larremore, J Nishimura, and J Ugander. Configuring random graph models with fixed degree sequences. SIAM Review, 60(2):315–355, 2018.
 [39] Gergely Röst, Zsolt Vizi, and István Z Kiss. Impact of nonmarkovian recovery on network epidemics. In BIOMAT 2015: International Symposium on Mathematical and Computational Biology, pages 40–53. World Scientific, 2016.
 [40] P Van Mieghem and R Van de Bovenkamp. Nonmarkovian infection spread dramatically alters the susceptibleinfectedsusceptible epidemic threshold in networks. Physical review letters, 110(10):108701, 2013.
 [41] HangHyun Jo, ByoungHwa Lee, Takayuki Hiraoka, and WooSung Jung. Copulabased algorithm for generating bursty time series. arXiv preprint arXiv:1904.08795, 2019.
Appendix A Correctness
First, consider the rejectionfree version of the algorithm:

Take the first event from the event queue and update .

Evaluate the true instantaneous rate of at the current system state.

With probability , reject the firing and go to Line 5.

Randomly choose the next state of according to the distribution
. If : set and . 
Generate a new event for agent and push it to the event queue.

For each neighbor of : Remove the event corresponding to from the queue and generate a new event (taking the new state of into account).

Go to Line 1.
Rejection events are not necessary in this version of the algorithm because all events in the queue are generated using the real rate and are therefore consistent with the current system state. It is easy to see that the rejection-free version is a direct event-driven implementation of the naïve simulation algorithm, which specifies the semantics in Section 2.1.3. The correspondence between Gillespie approaches and event-driven simulation has been exploited in the literature, for instance in [4]. Thus, it is sufficient to show that the rejection-free version and REDSim (Section 4.2) are statistically equivalent.
We do this with the following trick: we modify and into and , respectively. When we simulate the rejection-free algorithm with these, it admits exactly the same behavior as REDSim. The key to this is the so-called shadow process [24, 26]. A shadow process does not change the state of the corresponding agent but still fires with a certain rate. It is conceptually similar to a self-loop in a Markov chain. In the end, we can interpret the rejection events not as rejections, but as the statistically necessary applications of the shadow process.
Here, we consider the case where a constant upper bound exists for all . That is, for all reachable . The case of a time-dependent upper bound is, however, analogous. Now, for each , we define the shadow process as
Consequently, for all :
The only thing remaining is to define such that the shadow process really has no influence on the system state. To this end, we simply trigger a null event (or self-loop) with a probability proportional to how much of is induced by the shadow process. Formally,
Note that, firstly, the model specifications with and with are equivalent, because has no actual effect on the system. Secondly, simulating the rejection-free algorithm with directly yields REDSim. In particular, the rejection events have the same likelihood as the shadow process being chosen in . Moreover, updating the rates of all neighbors is redundant because all rates remain at . Whatever the change in is after an event, the shadow process balances it out, such that actually remains constant.
For the case that an upper bound does not exist, we can still consider the limit case of . In particular, we truncate all rate functions at and find that, as approaches infinity, the simulated model approaches the real model.
Appendix B TimeComplexity
Next, we discuss how the runtime of REDSim scales with the size of the underlying contact network (and the number of agents). Assume that a binary heap is used to implement the event queue and that the graph structure is implemented using a hash map. Each step starts by popping an element from the queue, which is logarithmic in the number of queued events. Next, we compute . To this end, we have to look up all neighbors of in the graph structure and iterate over them. We also have to look up all states and residence times. This step has linear time complexity in the number of neighbors. More precisely, lookups in the hash maps have constant time complexity on average and are linear in the number of agents in the worst case. Computing the rejection probability has constant time complexity. In the case of a real event, we update and . Again, this has constant time complexity on average. Generating a new event does not depend on the neighborhood of an agent; pushing it to the queue is again logarithmic in the queue size. Note that this step can still be somewhat expensive when it requires integration to sample , but not in an asymptotic sense. Thus, a step in the simulation is linear in the number of neighbors of the agent under consideration, plus a logarithmic term for the heap operations.
In contrast, previous methods require that, after each event, the rate of each neighbor of the firing agent is recomputed. The rate of such a neighbor , however, depends on the whole neighborhood of . Hence, it is necessary to iterate over all neighbors of every single neighbor of .