Optimal data structures for stochastic driven simulations

by Federico D'Ambrosio et al.
Utrecht University

Simulations where we have some prior information on the probability distribution of possible outcomes are common in many fields of science (physics, chemistry, biochemistry, etc.). Optimal data structures that allow dynamic updates without regenerating the whole structure are crucial for both efficient and accurate results, especially at large scale. In this paper, we describe three different methods: the Binary Tree, the Rejection algorithm and the Composition-Rejection algorithm. We analyze the expected time to extract and update an outcome, for each of the studied methods, for different distributions of the rates of outcomes. We give both a theoretical analysis and an experimental verification in a real-life setting, and we show which methods give the best expected times in which settings.






1 Introduction

Stochastic driven simulations can be seen as a board game: at every step we are presented with multiple possible outcomes, each with a probability determined by the rules of the game. In a board game there is a small number of possible outcomes, with probabilities that can be written as fractions with the same small denominator; thus, usually, one of the outcomes is selected with a physical random number generator, a die, whose behavior can easily be simulated by an algorithm fed with a pseudo-random number. In a stochastic driven simulation the rules are more complex: the number of possible outcomes can be very large, the probabilities of the different outcomes can differ from each other, even by orders of magnitude, and these probabilities can change during the simulation. Therefore we have to look for different methods for selecting a possible outcome. For each outcome, the probability that it is selected must be proportional to its rate, i.e., equal to its rate divided by the sum of the rates of all outcomes.

The problem we study in this paper is the following: given a set of N possible outcomes, each with a rate r_i, where the distribution of the rates is known (in particular their maximum and minimum possible values), we are looking for an optimal method to randomly select an outcome and to update the rate of one or more possible outcomes.

Such discrete event selection is a central ingredient in stochastic driven simulations in different fields of science (physics, molecular biology, chemistry, etc.). Our particular research was triggered by the study of a two-dimensional copper-on-copper model that requires the selection of an event out of z·N possible outcomes (with z the coordination number and N the number of atoms) and, after each move, the update of the rates of multiple possible outcomes [1].

This problem has been studied in the static setting, i.e. where rates are fixed before runtime. In 1974 Walker [9, 10] gave an optimal solution for the static setting, now known as the Alias method. We briefly describe the two existing static solutions.

1.1 Array of cumulative rate

We have an array of size N. Each entry represents one possible outcome; stored in position i is the cumulative rate up to that outcome, i.e., if the rate of outcome j is r_j, then the entry is r_1 + r_2 + … + r_i. In order to extract a random outcome, a random number between zero and the total rate is generated and we find the first entry of the array that is larger than this random number.

Constructing the array can clearly be done in O(N) time. Selecting an outcome can be done with binary search in O(log N) time. This is a commonly used technique, often employed when dealing with a small number of possible outcomes, but unfortunately it does not in general allow efficient updates.
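As a concrete illustration, this technique can be sketched as follows (a minimal example of ours, not the authors' code; the function names are assumptions): the cumulative array is built in O(N), and std::upper_bound performs the O(log N) binary search for the first entry larger than the random number.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Build the array of cumulative rates: entry i holds r_1 + ... + r_{i+1}. O(N).
std::vector<double> buildCumulative(const std::vector<double>& rates) {
    std::vector<double> cum(rates.size());
    double sum = 0.0;
    for (std::size_t i = 0; i < rates.size(); ++i) {
        sum += rates[i];
        cum[i] = sum;  // cumulative rate up to outcome i
    }
    return cum;
}

// Given u uniform in [0, totalRate), return the index of the first entry of
// the cumulative array larger than u. O(log N) via binary search.
std::size_t extractOutcome(const std::vector<double>& cum, double u) {
    return std::upper_bound(cum.begin(), cum.end(), u) - cum.begin();
}
```

For example, with rates {1, 2, 3} the cumulative array is {1, 3, 6}, and a draw of u = 2.5 selects outcome 1, whose slice of the total interval is (1, 3].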

1.2 Alias method

The Alias method, proposed by Walker in 1974 [9, 10], is an ingenious solution whose worst-case and expected extraction times are both independent of the number of possible outcomes and of the distribution of rates.

Two tables of size N are built, a rate table and an alias table, in O(N) time. Each entry i of the rate table represents the share of the total rate assigned to outcome i, capped at R/N (with R the total rate), while the remainders are represented as entries of the alias table. To generate a random outcome, we randomly select an index and then, according to the rate stored in the rate table, we choose either the outcome given by the index or its alias. This requires constant time in any case. Unfortunately, there is no efficient way to update the tables except to rebuild them after every update.
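For concreteness, the construction can be sketched as follows (our own hedged sketch of Walker's method in the common probability/alias-table formulation; all identifiers are ours): slots are scaled so the average slot has size 1, then each underfull slot is topped up by an overfull outcome, which becomes its alias.

```cpp
#include <random>
#include <vector>

struct AliasTable {
    std::vector<double> prob;  // rate table: probability of keeping slot i
    std::vector<int> alias;    // alias table: fallback outcome for slot i
};

AliasTable buildAlias(const std::vector<double>& rates) {
    int n = (int)rates.size();
    double total = 0.0;
    for (double r : rates) total += r;
    AliasTable t{std::vector<double>(n), std::vector<int>(n)};
    std::vector<double> scaled(n);
    std::vector<int> small, large;
    for (int i = 0; i < n; ++i) {
        scaled[i] = rates[i] * n / total;  // average slot size becomes 1
        (scaled[i] < 1.0 ? small : large).push_back(i);
    }
    while (!small.empty() && !large.empty()) {
        int s = small.back(); small.pop_back();
        int l = large.back(); large.pop_back();
        t.prob[s] = scaled[s];
        t.alias[s] = l;                    // remainder of slot s is given to l
        scaled[l] -= 1.0 - scaled[s];
        (scaled[l] < 1.0 ? small : large).push_back(l);
    }
    while (!large.empty()) { t.prob[large.back()] = 1.0; large.pop_back(); }
    while (!small.empty()) { t.prob[small.back()] = 1.0; small.pop_back(); }
    return t;
}

// O(1) extraction: pick a slot uniformly, then keep it or take its alias.
int sample(const AliasTable& t, std::mt19937& gen) {
    std::uniform_int_distribution<int> slot(0, (int)t.prob.size() - 1);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    int i = slot(gen);
    return u(gen) < t.prob[i] ? i : t.alias[i];
}
```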

1.3 Matias-Vitter-Ni method

While the previous solutions assume that rates do not change during the simulation, in many applications, in particular in stochastic driven simulations, this is not the case; hence we will assume that an arbitrary number of rates can change between outcome selections. A theoretical solution to the problem was given in 2003 by Matias et al. [7]. This method allows the extraction of a random item in O(log* N) expected time and the update of an arbitrary item in O(2^(log* N)) time, which can be reduced to O(log* N) amortized expected time, without any hypothesis on how the rates are distributed.

Unfortunately, the method of Matias et al. is very complex to implement. At this moment, we are not aware of any existing implementation, and thus this method is probably only of theoretical interest.

1.4 Our contribution

Under the assumptions previously described, we present three methods that solve the dynamic discrete event selection problem: a nearly-complete binary tree, where the possible outcomes are leaves; an array that stores every possible outcome, from which candidates are accepted or rejected according to their rate; and a method similar to the previous one, but where outcomes are grouped according to their rate in order to reduce the number of rejections required.

We choose to focus on rates instead of probabilities, not only because a rate is a more useful quantity in stochastic driven simulations, but mainly because rates do not need to be normalized; this is very useful for updates, as a single outcome can become more or less probable without influencing the rates of the others.

We will discuss their performance both through theoretical analysis and empirical results from a simulation, showing which method is recommended in which situation.

2 Methods

In this section we present the three methods we have studied.

2.1 Binary Tree

Figure 1: Representation of the data structure of the Binary Tree algorithm.

Our first method uses the well known concept of the nearly complete binary tree [2]. For the implementation of this tree, we derive inspiration from the array implementation of the heap data structure. See [3] for more background.

A tree of depth d is a nearly complete binary tree when the following conditions hold:

  • every leaf in the tree is either at level d or d−1;

  • each node has at most two children;

  • the nodes at depth d are as far left as possible. More rigorously: if a node v at depth d−1 has a left child, then every node at depth d−1 to the left of v has two children. If a node at depth d−1 has a right child, then it also has a left child.

Since the number of operations required for both extracting and updating an outcome is proportional to the depth of the corresponding leaf, this structure minimizes the mean depth. It would be possible to optimize it further and place more probable outcomes at a smaller depth, similar to Huffman coding [3, 5], but such a structure would not allow for fast updates, given that we would often have to move leaves around after an update.

Every node in the Tree has the following variables:

  • payload, a pointer or index to the relative outcome (a null pointer for internal nodes);

  • rate, if the node is a leaf, this variable stores the rate of the corresponding outcome; for internal nodes, it stores the sum of the rates of all the leaves of which it is an ancestor;

  • weight, the number of leaves under it;

  • parent, a pointer to the parent node (a null pointer for the root);

  • isLeaf, a boolean variable set as true if the node is a leaf, false otherwise;

  • left, right, pointers to the two child nodes, if present.

The Tree also has the following variables:

  • N, the number of leaves in the Tree;

  • totalRate, the sum of the rates of all the outcomes in the Tree;

  • root, a pointer to the root of the Tree;

  • touched, a list of nodes that require an update;

  • lastLeaf, a pointer to the last leaf in the Tree.

2.1.1 Building the Tree

If we are given a set of N possible outcomes, each with its rate r_i, it is possible to build the Tree in linear time. First the nodes are created and then connected with a subroutine called connectNodes, a pre-order tree traversal algorithm [6] that makes use of the heap properties [3]. Finally a second subroutine, a post-order tree traversal algorithm [6] called firstUpdate, is called to update in linear time the rate and the weight of the internal nodes.

It can be shown that the number of operations required is proportional to N, and therefore the Tree can be built in O(N) time.

2.1.2 Updating an outcome

Updating the rate of an outcome requires changing the rate variable of its related leaf and, in order to maintain internal coherence, calling a subroutine named updateTree. Multiple updates can be made at the same time by adding the affected leaves to the touched list before calling updateTree at the end.

Whenever we make a change on any of the leaves, we lose internal consistency in the Tree: weights and rates of the internal nodes will not automatically reflect the change. There is no need to update the entire Tree, so only the nodes that are an ancestor of an updated leaf are updated by recomputing rate and weight. We store the nodes that require an update in the list touched.

The number of operations required to update a single leaf is proportional to its level; therefore, for m updated leaves,

  T = Σ_{i=1}^{m} l_i ≤ m ⌈log₂ N⌉,

where l_i is the level of leaf i, m is the number of leaves to update and N is the number of leaves in the Tree. We have used the property that a leaf can only be at the penultimate (⌊log₂ N⌋) or ultimate (⌈log₂ N⌉) level. The update of a single node is clearly O(1), so we can update m leaves in O(m log N) time, or a single one in O(log N) time.

2.1.3 Adding and deleting an outcome

An impossible outcome, i.e. an outcome with rate zero, still occupies a leaf in the Tree, increasing the number of nodes and therefore the time required for any operation. Therefore it might be useful not to include such outcomes in the Tree: we need a way to delete impossible outcomes and to add outcomes that become possible.

In order to add an outcome, a node is created and added in the correct place. To delete an outcome we use the lastLeaf variable to replace it with the last leaf, so that we do not leave a hole in the Tree. The last leaf can only be a right child: once moved, its sibling replaces its parent in order to maintain the nearly complete binary tree property.

After one or more deletions it will be necessary to find the new last leaf, with an algorithm similar to the one used to add a new leaf.

Both adding and deleting a node require constant time. Finding the new last leaf and updating the Tree (with the updateTree subroutine) require O(log N′) time, with N′ the new number of leaves. Therefore adding or removing m outcomes from a set of N requires O(m log N) time.

2.1.4 Extracting an outcome

In order to extract an outcome, we start from root and go down the Tree until we get to a leaf. A single random number between 0 and totalRate is generated; if it is smaller or equal to the sum of the rates on the left side of the Tree, i.e. the rate of the left child of root, we move to that side. Otherwise, we move to the right side, after subtracting the total rates of the left side, to guarantee that the random number is between zero and the rate of the remaining Tree. We repeat that at every node until we reach a leaf.

The number of operations required to reach a leaf is proportional to its depth, therefore extraction takes O(log N) time. It is not hard to see that the probability of selecting an outcome is proportional to its stated rate.
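The extraction and update procedures above can be sketched compactly with an implicit, array-based tree in heap layout (a simplified illustration of ours; the paper's implementation uses explicit nodes with pointers, payloads and weights): internal nodes store the sum of their children's rates, updates walk up the root path, and extraction walks down splitting the random number against the left subtree's total rate.

```cpp
#include <vector>

struct SumTree {
    int n;                     // number of outcomes (leaves)
    std::vector<double> node;  // node[1] is the root; leaves are node[n..2n-1]

    explicit SumTree(const std::vector<double>& rates)
        : n((int)rates.size()), node(2 * rates.size()) {
        for (int i = 0; i < n; ++i) node[n + i] = rates[i];
        for (int i = n - 1; i >= 1; --i)             // firstUpdate analogue
            node[i] = node[2 * i] + node[2 * i + 1];
    }

    double totalRate() const { return node[1]; }

    // O(log N): change one leaf, then recompute sums on the root path.
    void update(int i, double r) {
        int v = n + i;
        node[v] = r;
        for (v /= 2; v >= 1; v /= 2) node[v] = node[2 * v] + node[2 * v + 1];
    }

    // O(log N): walk down, going left if u falls in the left subtree's rate.
    int extract(double u) const {
        int v = 1;
        while (v < n) {
            if (u <= node[2 * v]) v = 2 * v;
            else { u -= node[2 * v]; v = 2 * v + 1; }
        }
        return v - n;  // index of the selected outcome
    }
};
```

Calling extract with a random real in [0, totalRate()) selects outcome i with probability proportional to its rate.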

2.2 The Rejection algorithm

Figure 2: Representation of the data structure of the Rejection algorithm.

We now describe the Rejection algorithm. The data structure used in this method is very simple: we have an array of length N or, if the number of outcomes is not bounded in advance, a vector, with an entry for each outcome, retaining its rate as a variable. We assume that there is a maximum possible rate, called maxRate. A position in the vector is randomly selected; we then extract a random number between 0 and maxRate: if this value is smaller than or equal to the rate of the selected outcome, the outcome is accepted. This is repeated until we accept an outcome.

The most important feature of this algorithm is that both extraction and update times do not depend on the number of possible outcomes, but only on the distribution of the outcomes' rates.

The data structure has the following variables:

  • outcomes, an array of outcome objects;

  • N, the number of possible outcomes;

  • maxRate, the largest possible rate;

  • last, the index of the last element in outcomes.

The outcome class is very simple:

  • payload, a pointer or an index that identifies the outcome;

  • rate, the rate of the outcome;

  • id, its position in the outcomes array.

2.2.1 Adding and deleting an outcome

Contrary to the Tree, there is no difference between building the structure all at once and adding the outcomes one by one; therefore we build it by repeated addition. As with updates, this operation is very straightforward: the new outcome is appended to the end of the vector in constant time. Populating the data structure with N outcomes therefore takes O(N) time.

To delete an outcome, we overwrite it with the last non-empty element of the array. The deletion of an element requires constant time, therefore deleting m elements requires O(m) time.

2.2.2 Updating an outcome

The simplicity of this structure gives us a very simple update procedure: when an outcome is updated, we only need to set its rate to the new value. If an outcome becomes impossible (i.e. its rate becomes equal to zero), it is deleted.

To update an outcome s to the rate r:

  1. If (r == 0 and s.rate > 0): delete(s);

  2. Else if (r > 0 and s.rate == 0): add(s);

  3. s.rate = r.

Every update is clearly O(1), assuming that the operations on the vector can be performed in constant time, so the update of m elements is done in O(m) time.

2.2.3 Extracting an outcome

To extract a random outcome, we randomly select a position: if the rate of the related outcome is higher than a random number between 0 and maxRate, the outcome is accepted; otherwise we repeat the procedure.

The extraction time, as previously pointed out, is determined by the number of repetitions required. Therefore the expected time is

  E[T] ∝ 1/P,

where P is the probability of accepting an outcome in a single trial. This is the probability that a value extracted from a uniform distribution over [0, max] is smaller than a value r extracted following the distribution of rates ρ(r). In the physical world, all real values between the minimum and the maximum rate can be reached: the distribution is continuous. Therefore, without any loss of generality, it is possible to compute P through integration, assuming r bounded between a known maximum and minimum:

  P = ∫_min^max ρ(r) (r/max) dr = ⟨r⟩/max.

As we can see, it has no relation to the number of possible outcomes, but only to the distribution of rates.
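A minimal sketch of the whole structure (ours, with hypothetical names; the paper's version also stores payloads and ids) makes the constant-time operations and the accept/reject loop explicit:

```cpp
#include <random>
#include <vector>

struct Rejection {
    std::vector<double> rate;  // rate[i] is the rate of outcome i
    double maxRate;            // assumed upper bound on any rate

    explicit Rejection(double maxR) : maxRate(maxR) {}

    int add(double r) { rate.push_back(r); return (int)rate.size() - 1; }  // O(1)

    void update(int i, double r) { rate[i] = r; }  // O(1)

    // O(1): overwrite with the last element so no hole is left.
    void remove(int i) { rate[i] = rate.back(); rate.pop_back(); }

    // Expected repetitions = maxRate / <r>, independent of N.
    int extract(std::mt19937& gen) const {
        std::uniform_int_distribution<int> pick(0, (int)rate.size() - 1);
        std::uniform_real_distribution<double> u(0.0, maxRate);
        while (true) {
            int i = pick(gen);
            if (u(gen) <= rate[i]) return i;  // accept
        }
    }
};
```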

2.3 The Composition-Rejection algorithm

Figure 3: Representation of the data structure of the Composition-Rejection algorithm.

The Rejection algorithm’s efficiency is determined by how many repetitions are needed before an outcome is accepted. If the rates are spread over a large interval, this number can increase quite rapidly. This can be prevented by grouping outcomes with similar rates (Composition) and performing the Rejection algorithm within a group. While this adds an overhead to every action performed on the outcomes, it can be advantageous in some situations.

It is possible to implement this idea in different ways. We opted for grouping outcomes in the following way: for every i ≥ 0, we have a group g_i with all outcomes with rates

  max/c^(i+1) < r ≤ max/c^i,

where max is an upper bound on the rates in the structure and c > 1 is a constant.
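Under this grouping rule, the group of an outcome can be computed directly from its rate (a small sketch of ours; the function name is an assumption):

```cpp
#include <cmath>

// Group i holds rates in (max/c^(i+1), max/c^i]; assumes 0 < r <= maxRate
// and c > 1. The index is log_c(maxRate / r), truncated downward.
int groupIndex(double r, double maxRate, double c) {
    return (int)std::floor(std::log(maxRate / r) / std::log(c));
}
```

For example, with maxRate = 8 and c = 2, a rate of 3 lands in group 1, whose range is (2, 4].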

Our data structure has the following variables:

  • groups, a vector of Rejection structures ("groups");

  • c, the constant setting the width of the rate interval inside each group;

  • max, the upper bound on the rates;

  • totalRate, the sum of all the rates in the whole structure.

To the outcome class we add a groupId variable, the position of its group in the groups vector; to each group we add a sumRate variable, the sum of the rates of all the outcomes inside that group.

2.3.1 Adding and deleting an outcome

Similarly to the Rejection algorithm, the structure is populated by adding the outcomes one by one; after having identified the correct group, the outcome is added following the algorithm presented in 2.2.1.

This algorithm has the same complexity as the one required by the Rejection method, i.e. it uses O(1) time per addition, while populating the data structure with N elements uses O(N) time.

The deletion of an outcome follows the same path and requires constant time for every deletion.

2.3.2 Updating an outcome

An outcome can be updated by simply following the algorithm in 2.2.2, unless the new rate would place the outcome in another group; in that case we have to delete it from the previous group, by overwriting it with the last placed outcome, and add it to the new one.

In either case, the update is done in constant time, assuming that the operations on the vector can be performed in constant time; therefore an update of m elements is O(m).

2.3.3 Extracting an outcome

The extraction of an outcome is done in two phases: first we select a group, each group having a probability proportional to the sum of the rates of all outcomes in the group, and then we perform an extraction of an outcome from that group with the algorithm from Section 2.2.3. As we will discuss later, we expect the first groups to contain a large fraction of the total rate; this depends on the distribution of the rates of the outcomes. For some distributions, our algorithm gives expected constant time; the method of grouping can easily be customized when the distribution of rates is different.

The expected time required is given by the sum of the time required to select a group and the time required to accept an outcome inside the group.

The probability of selecting a group is given by the sum of the rates inside it, divided by the sum of the rates inside all groups:

  P_i = (Σ_{s ∈ g_i} r_s) / totalRate,

where the sum runs over the n_i outcomes inside the group g_i. Since the probability distribution of rates is known, we can also compute this value as the fraction of the total rate inside the range of the group g_i:

  P_i = (1/⟨r⟩) ∫_{max/c^(i+1)}^{max/c^i} r ρ(r) dr,

where ρ(r) is the distribution of rates, which we defined as a continuous quantity and therefore requires integration, and ⟨r⟩ is the mean rate. The time required to select a group is proportional to its position in the list, therefore:

  E[T_selection] = Σ_i (i + 1) P_i.

Once we have selected a group, the expected number of repetitions required to accept an outcome, similarly to the analysis in Section 2.2.3, is

  E[T_acceptance,i] = (max/c^i) / ⟨r⟩_i,

with ⟨r⟩_i the mean rate inside group g_i, which has a dependency on the group index i; since every rate in g_i is larger than max/c^(i+1), this expected number of repetitions is at most c. Therefore the total expected time is

  E[T] = Σ_{i=0}^{G−1} P_i ( (i + 1) + (max/c^i)/⟨r⟩_i ),

with G the depth of the structure, i.e. how many groups there are. This can be bounded as

  E[T] ≤ Σ_{i=0}^{G−1} (i + 1) P_i + c.
2.4 Distributions

As we saw, the performance of both the Rejection and the Composition-Rejection methods has a strong dependence on the distribution of the rates. Without making an assumption on how the rates are distributed, we cannot infer much information on the expected time of the algorithms. In this work we analyze the performance of the algorithms for two particular distributions that are both simple and have a clear relevance in the simulation of dynamic systems. The same analysis can easily be repeated for other distributions.

  • A uniform distribution: a random variable is said to be uniformly distributed over the interval [min, max] if its probability density function is given by

      ρ(r) = 1/(max − min) for min ≤ r ≤ max, and 0 otherwise.

  • A log-uniform distribution: a random variable is said to be log-uniformly distributed over the interval [min, max] if its probability density function is given by:

      ρ(r) = 1/(r ln(max/min)) for min ≤ r ≤ max, and 0 otherwise.

    More intuitively, we can think of the log-uniform distribution as generated by taking the exponential of a uniform distribution over the interval [ln min, ln max].
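The intuition above translates directly into a sampler (a sketch of ours; the function name is an assumption): draw uniformly on the log scale, then exponentiate.

```cpp
#include <cmath>
#include <random>

// Draw a log-uniformly distributed rate over [minR, maxR] as the
// exponential of a uniform draw over [ln minR, ln maxR].
double sampleLogUniform(double minR, double maxR, std::mt19937& gen) {
    std::uniform_real_distribution<double> u(std::log(minR), std::log(maxR));
    return std::exp(u(gen));
}
```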

3 Analytical results

In this section, we present the expected time of an extraction for the Rejection method and the Composition-Rejection method, for the two distributions we have introduced.

The results can be summarized as follows.

Theorem 1 (Rejection method).

The expected time of extraction of an outcome with the Rejection method is O(1) when the rates are uniformly distributed over [min, max], and O(log(max/min)) when the rates are log-uniformly distributed over [min, max] with min ≪ max.

Theorem 2 (Composition-Rejection method).

The expected time of extraction of an outcome with the Composition-Rejection method with group constant c is bounded by a constant that depends only on c, and not on the number of outcomes or on the ratio max/min, both when the rates are uniformly distributed over [min, max] and when the rates are log-uniformly distributed over [min, max].

Corollary 3 (Optimal value of c).

The expected time of extraction of an outcome with the Composition-Rejection method when the rates are uniformly distributed over [min, max] has a minimum for a specific finite value of the group constant c.


See Appendix A.4. ∎

4 Computational results

4.1 Simulation

We have measured the performance of the three methods for extracting a random outcome and for updating the rate of an arbitrary outcome. These measurements have been taken on randomly generated outcomes from both distributions, for a wide range of values of the number of outcomes N and of the ratio max/min.

Since we are interested in real-life performance, we have opted for measuring the computational time required for these actions instead of counting the number of operations. To reduce variability, we take the mean value and standard deviation over many repetitions of the experiment, calculated with Welford’s method [11].
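The running mean and variance update of Welford’s method [11] can be sketched as follows (our own illustration; variable names are ours):

```cpp
#include <cmath>

// One-pass running mean and variance (Welford). m2 accumulates the
// sum of squared deviations from the current mean.
struct Welford {
    long long n = 0;
    double mean = 0.0, m2 = 0.0;

    void push(double x) {
        ++n;
        double delta = x - mean;
        mean += delta / n;
        m2 += delta * (x - mean);  // note: uses the updated mean
    }
    double variance() const { return n > 1 ? m2 / (n - 1) : 0.0; }
};
```

Unlike the naive two-pass formula, this update is numerically stable and never requires storing the individual timing samples.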

Random numbers are generated with the mt19937 random number generator found in the random library and its related distributions. Time is measured with the high_resolution_clock of the chrono library. Both libraries were introduced with the C++11 standard. The code is compiled with clang 4.0.1-6 and executed on the following system:

Processor: Intel(R) Core(TM) i5-6500 CPU @ 3.20GHz, 4 cores
Memory: 16307MB
Operating System: Ubuntu 17.10 (64 bit)
SSD: Kingston SV300S3

4.2 Update

Figure 4: Update time for the three methods. Since there is no dependency on the distribution of rates, we show the data for the uniform distribution.

We first discuss the experimental results concerning updates of the rates. For a small number of outcomes, the Tree and Rejection methods require a comparable amount of time, while the Composition-Rejection method has a larger overhead. For a larger number of possible outcomes, both the Rejection and the Composition-Rejection methods present a clear advantage in update performance, being more than four times as fast as the Tree.

4.3 Extraction

(a) Log-uniform distribution
(b) Uniform distribution
Figure 5: Extraction of a random element: optimal regions for the three methods.

We now discuss the experimental results concerning the extraction of a random outcome with the three methods. Applying quadratic discriminant analysis [4] to the extraction times, we show the region where each method is best for the log-uniform distribution (Figure 5(a)) and the uniform distribution (Figure 5(b)). As for the update times, the Tree method is advantageous for small numbers of outcomes. For larger numbers, it is preferable to choose one of the other methods; which one depends on the distribution of the rates.

(a) Log-uniform distribution
(b) Uniform distribution
Figure 6: Extraction time for different values of the ratio max/min.

If the rates follow the log-uniform distribution, the extraction time is comparable between the two methods only for a small range of the ratio max/min, after which the Rejection method dramatically slows down, as we can see in Figure 6(a). The two methods give comparable performance when the rates are distributed uniformly (Figure 6(b)).

5 Conclusions

Our analytical analysis assumes that the data structures can be accessed in constant time. This holds in most real-life applications, up to the moment where the cache is filled. Beyond this point, access time depends on the number of possible outcomes for all the methods we described. It is a topic of further research to study the dependency of outcome-selection algorithms on memory use in experimental settings, aiming at algorithms whose running time does not increase, or increases only very slightly, due to memory effects when the number of possible outcomes becomes very large.

In the analysis of our methods, we assumed that the number of possible outcomes is sufficiently small that the size of the cache does not influence the running time. For this setting, we have presented three methods that solve the dynamic discrete event selection problem. Based on both a theoretical analysis and empirical results from a simulation, we have shown that the Tree method is generally optimal for small numbers of possible outcomes, while the choice between the Rejection and the Composition-Rejection methods has to be made taking into account both the distribution of rates and the trade-off between update and extraction performance.


We want to thank Emanuele Degani for discussions and help with the statistical analysis.


  • [1] Gerard T. Barkema and Mark E. J. Newman. Monte Carlo simulations in surface science. In Monte Carlo Methods in Statistical Physics, chapter 11. Oxford University Press, 1999.
  • [2] Paul E. Black. Complete binary tree, 2016. URL: https://xlinux.nist.gov/dads/HTML/completeBinaryTree.html.
  • [3] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms. The MIT Press, 2nd edition, 2001.
  • [4] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning. Springer Series in Statistics. Springer, New York, NY, 2009. doi:10.1007/978-0-387-84858-7.
  • [5] David A. Huffman. A Method for the Construction of Minimum-Redundancy Codes. Proceedings of the IRE, 40(9):1098–1102, 1952.
  • [6] Donald E. Knuth. The Art of Computer Programming, Vol. 1: Fundamental Algorithms. Addison-Wesley, Reading, MA, 2nd edition, 1973.
  • [7] Yossi Matias, Jeffrey S. Vitter, and Wen C. Ni. Dynamic generation of discrete random variates. Theory of Computing Systems, 36(4):329–357, aug 2003. doi:10.1007/s00224-003-1078-6.
  • [8] Sheldon M. Ross. A First Course in Probability. Pearson Prentice Hall, 8th edition, 2010.
  • [9] Alastair J. Walker. New Fast Method for Generating Discrete Random Numbers With Arbitrary Frequency Distribution. Electronics Letters, 10(8):8–9, 1974. doi:10.1049/el:19740097.
  • [10] Alastair J. Walker. An Efficient Method for Generating Discrete Random Variables with General Distributions. ACM Transactions on Mathematical Software, 3(3):253–256, sep 1977. doi:10.1145/355744.355749.
  • [11] B. P. Welford. Note on a Method for Calculating Corrected Sums of Squares and Products. Technometrics, 4(3):419–420, 1962. doi:10.2307/1266577.

Appendix A Appendix

a.1 Pseudocode for the Tree method

a.1.1 Building the Tree

Given a list of rates r_1, …, r_N:

  1. a vector of node, called nodes, of size 2N−1 is created;

  2. for i = N−1, …, 2N−2 (the leaf positions in the heap layout):

    1. nodes[i].isLeaf = true;

    2. nodes[i].rate = the rate of the corresponding outcome;

  3. root = nodes[0];

  4. connectNodes(0);

  5. firstUpdate(nodes[0]);

  6. lastLeaf = nodes[2N-2].

connectNodes(i): If nodes[i] is not a leaf:

  1. nodes[i].left = nodes[2i+1];

  2. nodes[i].right = nodes[2i+2];

  3. nodes[2i+1].parent = nodes[i];

  4. nodes[2i+2].parent = nodes[i];

  5. connectNodes(2i+1);

  6. connectNodes(2i+2);

firstUpdate(n): If n is not a leaf:

  1. firstUpdate(n.left);

  2. firstUpdate(n.right);

  3. n.rate = n.left.rate + n.right.rate;

  4. n.weight = n.right.weight + n.left.weight;

At the end, totalRate = root.rate.

a.1.2 Updating an outcome

To update a list of outcomes s_1, …, s_m to the rates r_1, …, r_m:

  1. for every outcome s_i:

    1. n is the pointer to the corresponding leaf;

    2. n.rate = r_i;

    3. n is added to touched;

  2. the updateTree subroutine is called.

updateTree(): While touched is not empty:

  1. remove one element from touched and call this element currentNode;

  2. if currentNode is not a leaf, update its rate and weight with the sums of those of its children;

  3. unless currentNode is root, we add currentNode.parent to the end of touched;

at the end, totalRate = root.rate.

a.1.3 Adding and deleting an outcome

To add an outcome:

  1. a node n with the given payload and rate is created;

  2. currentNode=root.

  3. until we place n,

    1. if currentNode has no left, currentNode.left=n;

    2. else if currentNode has no right, currentNode.right=n;

    3. else if its left is a leaf, a new node is created and placed in its position, with the previous left and n as its children;

    4. else if its right is a leaf, a new node is created and placed in its position, with the previous right and n as its children;

    5. else if right has a lower weight, we select currentNode=right;

    6. else we select currentNode=left;

  4. n is added to touched;

  5. lastLeaf=n;

  6. N=N+1.

Given a node n that we want to delete:

  1. oldParent = lastLeaf.parent;

  2. if n ≠ lastLeaf:

    1. if n.parent->left=n, n.parent->left=lastLeaf;

    2. if n.parent->right=n, n.parent->right=lastLeaf;

    3. lastLeaf.parent = n.parent;

  3. m = oldParent.left;

  4. if oldParent.parent->left = oldParent, oldParent.parent->left=m;

  5. if oldParent.parent->right = oldParent, oldParent.parent->right=m;

  6. m.parent = oldParent.parent;

  7. lastLeaf and m are added to touched;

  8. the nodes n and oldParent are deleted;

  9. N=N-1.


To find the new last leaf:

  1. currentNode = root;

  2. until currentNode is a leaf:

    1. if currentNode.right is a leaf, currentNode = currentNode.right;

    2. else if currentNode.left is a leaf, currentNode = currentNode.left;

    3. else if currentNode.right has a lower weight than currentNode.left, currentNode = currentNode.right;

    4. else currentNode = currentNode.left;

  3. lastLeaf = currentNode.

a.1.4 Extracting an outcome

  1. A random real rand between 0 and totalRate is generated;

  2. n = root;

  3. While n is not a leaf:

    1. If rand ≤ n.left.rate, n = n.left;

    2. Else:

      1. rand = rand − n.left.rate; in this way we guarantee that at the next step 0 ≤ rand ≤ n.rate;

      2. n = n.right;

  4. Return n.

a.2 Pseudocode for the Rejection method

a.2.1 Adding and deleting an outcome

To add an outcome of payload pl and rate r:

  1. if r > maxRate, return error.

  2. Else, an outcome s is created;

  3. s.payload = pl;

  4. s.rate = r;

  5. s.id = N; outcomes[N] = s;

  6. N=N+1.

To delete an outcome s:

  1. newId = s.id;

  2. outcomes[newId] = outcomes[N-1];

  3. outcomes[newId].id = newId;

  4. N=N-1.

a.2.2 Updating an outcome

To update an outcome s to the rate newRate

  1. i = ⌊log(max/newRate) / log(c)⌋;

  2. if (i == s.groupId), groups[i].update(s, newRate).

  3. else:

    1. pl = s.payload;

    2. groups[s.groupId].sumRate -= s.rate;

    3. groups[s.groupId].delete(s);

    4. groups[i].add(pl,newRate);

    5. groups[i].sumRate += newRate.

a.2.3 Extracting an outcome

Until an outcome is returned:

  1. a random number rand between 0 and 1 is generated;

  2. randId = ⌊rand * N⌋;

  3. randRate = [(N * rand) − randId] * maxRate;

  4. if (outcomes[randId].rate ≥ randRate), return outcomes[randId].

a.3 Pseudocode for the Composition-Rejection method

a.3.1 Adding and deleting an outcome

To add an outcome of payload pl and rate r:

  1. i = ⌊log(max/r) / log(c)⌋;

  2. groups[i].add(pl,r);

  3. groups[i].sumRate += r.

To delete an outcome s:

  1. groups[s.groupId].sumRate -= s.rate;

  2. groups[s.groupId].delete(s).

a.3.2 Extracting an outcome

  1. A random number rand is generated;

  2. rand = rand * totalRate;

  3. i=0;

  4. while (i < groups.size)

    1. if (groups[i].sumRate > rand), return groups[i].extract().

    2. rand = rand - groups[i].sumRate;

    3. i++;

a.4 Analytical results

Proof of Theorem 1: Rejection method.

In the case of a uniform distribution, the expected extraction time for the Rejection method is O(1). We can easily see that by substituting the uniform distribution in the expressions of Section 2.2.3:

  P = (1/max) ∫_min^max r/(max − min) dr = (min + max)/(2 max) ≥ 1/2,

and therefore the expected time is E[T] = 1/P ≤ 2 = O(1).

We now move to the log-uniform distribution. By substituting the log-uniform distribution in the same expressions, we obtain:

  P = (1/max) ∫_min^max 1/ln(max/min) dr = (max − min)/(max ln(max/min)),

which, for min ≪ max, gives:

  E[T] = 1/P ≈ ln(max/min) = O(log(max/min)). ∎
Proof of Theorem 2: Composition-Rejection method.

Let us take the uniform distribution first. Substituting the uniform distribution in the expression for the group selection probability, we find

  P_i ∝ ∫_{max/c^(i+1)}^{max/c^i} r dr ∝ c^(−2i) (1 − c^(−2)),

a geometric series in the group index i, and therefore

  E[T_selection] = Σ_i (i + 1) P_i = O(1),

a constant depending only on c. Since the acceptance inside any group requires at most c expected repetitions, the total expected time is a constant depending only on c, independent of the number of outcomes and of the ratio max/min.

We now move to the log-uniform distribution. Following the previous scheme, we write

  P_i ∝ ∫_{max/c^(i+1)}^{max/c^i} 1/ln(max/min) dr ∝ c^(−i) (1 − c^(−1)),

again a geometric series in i; therefore the expected selection time is again a constant depending only on c, and so is the total expected time. ∎


Figure 7: The expected time for the extraction has a minimum at a finite value of c.

Proof of Corollary 3: Optimal value of c.

As we have shown above, the time required for extracting a random outcome with the Composition-Rejection method under the uniform distribution has an explicit dependency on c. Solving

  d E[T] / dc = 0,

we can easily see that E[T] has a minimum at a finite value of c, as we can see in Figure 7.

There is no explicit dependency on c in the log-uniform case. ∎