A* Sampling

10/31/2014
by Chris J. Maddison, et al.
University of Toronto; Microsoft

The problem of drawing samples from a discrete distribution can be converted into a discrete optimization problem. In this work, we show how sampling from a continuous distribution can be converted into an optimization problem over continuous space. Central to the method is a stochastic process recently described in mathematical statistics that we call the Gumbel process. We present a new construction of the Gumbel process and A* sampling, a practical generic sampling algorithm that searches for the maximum of a Gumbel process using A* search. We analyze the correctness and convergence time of A* sampling and demonstrate empirically that it makes more efficient use of bound and likelihood evaluations than the most closely related adaptive rejection sampling-based algorithms.




1 Introduction

Drawing samples from arbitrary probability distributions is a core problem in statistics and machine learning. Sampling methods are used widely when training, evaluating, and predicting with probabilistic models. In this work, we introduce a generic sampling algorithm that returns exact independent samples from a distribution of interest. This line of work is important as we seek to include probabilistic models as subcomponents in larger systems, and as we seek to build probabilistic modelling tools that are usable by non-experts; in these cases, guaranteeing the quality of inference is highly desirable. There is a range of existing approaches for exact sampling. Some are specialized to specific distributions [papandreou2010gaussian], but exact generic methods are based either on (adaptive) rejection sampling [gilks1992adaptive, dymetman2012algorithm, mansinghka2009exact] or on Markov chain Monte Carlo (MCMC) methods where convergence to the stationary distribution can be guaranteed [propp1996exact, mira2001perfect, mitha2003perfect].

This work approaches the problem from a different perspective. Specifically, it is inspired by an algorithm for sampling from a discrete distribution that is known as the Gumbel-Max trick. The algorithm works by adding independent Gumbel perturbations to each configuration of a discrete negative energy function and returning the argmax configuration of the perturbed negative energy function. The result is an exact sample from the corresponding Gibbs distribution. Previous work [Papandreou11, hazan2012partition] has used this property to motivate samplers based on optimizing random energy functions but has been forced to resort to approximate sampling due to the fact that in structured output spaces, exact sampling appears to require instantiating exponentially many Gumbel perturbations.

Our first key observation is that we can apply the Gumbel-Max trick without instantiating all of the (possibly exponentially many) Gumbel perturbations. The same basic idea then allows us to extend the trick to continuous spaces, where there will be infinitely many independent perturbations. Intuitively, for any given random energy function, there are many perturbation values that are irrelevant to determining the argmax so long as we have an upper bound on their values. We will show how to instantiate the relevant ones and bound the irrelevant ones, allowing us to find the argmax, and thus an exact sample.

There are a number of challenges that must be overcome along the way, which are addressed in this work. First, what does it mean to independently perturb space in a way analogous to the perturbations in the Gumbel-Max trick? We introduce the Gumbel process, a special case of a stochastic process recently defined in mathematical statistics [malmberg2013random], which generalizes the notion of perturbation over space. Second, we need a method for working with a Gumbel process that does not require instantiating infinitely many random variables. This leads to our novel construction of the Gumbel process, which draws perturbations according to a top-down ordering of their values. Just as the stick-breaking construction of the Dirichlet process gives insight into algorithms for the Dirichlet process, our construction gives insight into algorithms for the Gumbel process. We demonstrate this by developing A* sampling, which leverages the construction to draw samples from arbitrary continuous distributions. We study the relationship between A* sampling and adaptive rejection sampling-based methods and identify a key difference that leads to more efficient use of bound and likelihood computations. We investigate the behaviour of A* sampling on a variety of illustrative and challenging problems.

2 The Gumbel-Max Trick

The Gumbel-Max trick is an algorithm for sampling from a categorical distribution over classes $i \in \{1, \ldots, n\}$ with probability proportional to $\exp(\phi(i))$. The algorithm proceeds by adding independent Gumbel-distributed noise to the log-unnormalized mass and returns the optimal class of the perturbed distribution. In more detail, $G$ is a Gumbel with location $m$, written $G \sim \mathrm{Gumbel}(m)$, if $P(G \le g) = \exp(-\exp(-g + m))$. The trick follows from the structure of Gumbel distributions and basic properties of order statistics: if $G(i) \sim \mathrm{Gumbel}(\phi(i))$ are independent, then for any $B \subseteq \{1, \ldots, n\}$

$\max_{i \in B} G(i) \sim \mathrm{Gumbel}\big(\log \textstyle\sum_{i \in B} \exp \phi(i)\big)$   (1)

$\operatorname{argmax}_{i \in B} G(i) \sim \exp(\phi(i)) / \textstyle\sum_{j \in B} \exp \phi(j)$   (2)

(1) is known as max-stability: the highest order statistic of a sample of independent Gumbels also has a Gumbel distribution, with a location that is the log partition function [gumbel]. (2) is a consequence of the fact that Gumbels satisfy Luce's choice axiom [YellottJr1977109]. Moreover, the max and argmax are independent random variables; see the Appendix for proofs.
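To make the trick concrete, here is a minimal Python sketch (ours, not the authors'; it assumes NumPy is available) that samples a class by perturbing unnormalized log masses with Gumbel noise and taking the argmax; repeated draws illustrate properties (1) and (2) empirically.

import numpy as np

rng = np.random.default_rng(0)

def gumbel_max_sample(phi):
    """Sample an index i with probability proportional to exp(phi[i])."""
    return int(np.argmax(rng.gumbel(loc=phi)))   # independent Gumbel(phi[i]) perturbations

phi = np.log([0.2, 0.5, 0.3])
draws = [gumbel_max_sample(phi) for _ in range(10_000)]
# Empirical frequencies approach [0.2, 0.5, 0.3], illustrating (2); the maximum of the
# perturbed values is Gumbel-distributed with location log(sum(exp(phi))), illustrating (1).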

We would like to generalize this interpretation to continuous distributions as maximizing over the perturbation of a density $p(x) \propto \exp(\phi(x))$ on $\mathbb{R}^d$. The perturbed density should have properties analogous to the discrete case: the max within any measurable region should be Gumbel distributed with location equal to the log of the region's unnormalized mass, and the argmax within that region should be distributed according to $p$ restricted to the region. The Gumbel process is a generalization satisfying these properties.

Definition 2 (Gumbel process), adapted from [malmberg2013random]. Let $\mu$ be a sigma-finite measure on sample space $\Omega$, $B \subseteq \Omega$ measurable, and $G_\mu(B)$ a random variable. The collection $\{G_\mu(B) \mid B \subseteq \Omega\}$ is a Gumbel process if

  1. (marginal distributions) $G_\mu(B) \sim \mathrm{Gumbel}(\log \mu(B))$

  2. (independence of disjoint sets) $G_\mu(A)$ and $G_\mu(B)$ are independent for disjoint $A, B$

  3. (consistency constraints) for measurable $A, B \subseteq \Omega$, $G_\mu(A \cup B) = \max(G_\mu(A), G_\mu(B))$

The marginal distributions condition ensures that the Gumbel process satisfies the requirement on the max. The consistency requirement ensures that a realization of a Gumbel process is consistent across space. Together with independence, these ensure the argmax requirement: the argmax of a Gumbel process restricted to a region $B$ is distributed according to the probability distribution that is proportional to the sigma-finite measure restricted to $B$. In particular, let $p$ be the probability distribution proportional to $\mu$. If $G_\mu(B)$ is the optimal value of the perturbed density restricted to $B$, then the event that the optimum over $\Omega$ is contained in $B$ is equivalent to the event that $G_\mu(B) \ge G_\mu(B^c)$. The conditions ensure that this event has probability $p(B)$ [malmberg2013random]. Thus, for example, we can use the Gumbel process for a continuous measure on $\mathbb{R}^d$ to model a perturbed density function whose optimum is distributed according to $p$. Notice that this definition is a generalization of the finite case; if $\Omega$ is finite, then the collection corresponds exactly to maxes over subsets of independent Gumbels.
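For intuition on the finite case mentioned above, the following small Python check (an illustration of ours, not from the paper) realizes a finite Gumbel process as maxes over subsets of independent Gumbels and verifies the consistency condition on one realization.

import numpy as np

phi = np.log([0.1, 0.4, 0.2, 0.3])
g = np.random.default_rng(0).gumbel(loc=phi)       # one realization of the perturbations

def G(B):
    """Max-Gumbel of the realization restricted to the index set B."""
    return max(g[i] for i in B)

A, B = {0, 1}, {2, 3}
assert G(A | B) == max(G(A), G(B))                  # consistency (condition 3)
# Over many realizations, G(A) is Gumbel distributed with location
# log(sum(exp(phi[i]) for i in A)), illustrating condition 1.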

3 Top-Down Construction for the Gumbel Process

While [malmberg2013random] defines and constructs a general class of stochastic processes that includes the Gumbel process, the construction that proves their existence gives little insight into how to execute a continuous version of the Gumbel-Max trick. Here we give an alternative algorithmic construction that will form the foundation of our practical sampling algorithm. In this section we assume that the measure $\mu(B)$ of a region can be computed tractably; this assumption will be lifted in Section 4. To explain the construction, we consider the discrete case as an introductory example.

Input: sample space $\Omega$, sigma-finite measure $\mu(\cdot)$
  $B_1 \leftarrow \Omega$;  $k \leftarrow 1$
  $G_1 \sim \mathrm{Gumbel}(\log \mu(B_1))$
  $X_1 \sim \mu(\cdot \cap B_1)/\mu(B_1)$
  $Q \leftarrow$ Queue;  $Q$.push(1)
  yield $(X_1, G_1)$
  while !$Q$.empty() do
      $p \leftarrow Q$.pop()
      $L, R \leftarrow \mathrm{partition}(B_p \setminus \{X_p\})$
      for $C \in \{L, R\}$ do
          if $\mu(C) > 0$ then
              $k \leftarrow k + 1$;  $B_k \leftarrow C$
              $G_k \sim \mathrm{TruncGumbel}(\log \mu(B_k), G_p)$
              $X_k \sim \mu(\cdot \cap B_k)/\mu(B_k)$
              $Q$.push($k$)
              yield $(X_k, G_k)$
Algorithm 1 Top-Down Construction

Suppose $\{G(i)\}_{i=1}^n$ is a set of independent $\mathrm{Gumbel}(\phi(i))$ random variables for $i \in \{1, \ldots, n\}$. It would be straightforward to sample the variables, then build a heap of the values in which heap nodes also store the index associated with their value. Let $B_v$ be the set of indices that appear in the subtree rooted at the node with index $v$. A property of the heap is that the pair at node $v$ is the max and argmax of the set of Gumbels with index in $B_v$. The key idea of our construction is to sample the independent set of random variables by instantiating this heap from root to leaves. That is, we will first sample the root node, which is the global max and argmax, then we will recurse, sampling the root's two children conditional upon the root. At the end, we will have sampled a heap full of values and indices; reading off the value associated with each index will yield a draw of independent Gumbels from the target distribution.

We sketch an inductive argument. For the base case, sample the global max and its index using the distributions that we know from (1) and (2); note that the max and argmax are independent, and the root's subtree contains the set of all indices. Now, inductively, suppose we have sampled a partial heap and would like to recurse downward starting at node $p$. Partition the remaining indices to be sampled into two subsets $L$ and $R$, and let $l$ be the argmax over $L$ and $r$ the argmax over $R$. Let $V$ be the indices that have been sampled already. Then

(3)

where $l$ and $r$ become the left and right children of $p$, and the heap-ordering constraints should only be applied amongst the already-sampled nodes. This implies

$p(G_l = g_l, G_r = g_r \mid \cdot) \propto f\big(g_l \mid \log\textstyle\sum_{i \in L} \exp\phi(i)\big)\, f\big(g_r \mid \log\sum_{i \in R} \exp\phi(i)\big)\, \mathbb{1}\{g_l \le G_p\}\, \mathbb{1}\{g_r \le G_p\}$   (4)

(4) is the joint density of two independent Gumbels truncated at $G_p$. We could sample the children maxes and argmaxes by sampling the independent Gumbels in $L$ and $R$ respectively and computing their maxes, rejecting those that exceed the known value of $G_p$. Better, the truncated Gumbel distributions can be sampled efficiently via CDF inversion (a $\mathrm{TruncGumbel}(\phi, b)$, i.e. a $\mathrm{Gumbel}(\phi)$ conditioned to be at most $b$, has CDF $\exp(-\exp(-\min(g, b) + \phi)) / \exp(-\exp(-b + \phi))$; to sample it efficiently, return $G = \phi - \log(\exp(\phi - b) - \log U)$ with $U \sim \mathrm{uniform}[0, 1]$), and the independent argmaxes within $L$ and $R$ can be sampled using (2). Note that any choice of partitioning strategy for $L$ and $R$ leads to the same distribution over the set of Gumbel values.
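The inversion formula translates directly into code; the helpers below are a Python sketch with our own naming, where trunc_gumbel(phi, b) draws a Gumbel with location phi conditioned to be at most b.

import math
import random

def trunc_gumbel(phi, b):
    """Sample Gumbel(phi) conditioned to be <= b, by inverting the truncated CDF."""
    u = random.random()
    return phi - math.log(math.exp(phi - b) - math.log(u))

def gumbel(phi):
    """Untruncated Gumbel(phi), for comparison (the b = +infinity case)."""
    return phi - math.log(-math.log(random.random()))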

The basic structure of this top-down sampling procedure allows us to deal with infinite spaces; we can still generate an infinite descending heap of Gumbels and locations as if we had made a heap from an infinite list. The algorithm (which appears as Algorithm 1) begins by sampling the optimal value $G_1$ over sample space $\Omega$ and its location $X_1$. $X_1$ is removed from the sample space and the remaining sample space is partitioned into $L$ and $R$. The optimal Gumbel values for $L$ and $R$ are sampled from a Gumbel with location the log measure of their respective sets, but truncated at $G_1$. The locations are sampled independently from their sets, and the procedure recurses. As in the discrete case, this yields a stream of $(X_k, G_k)$ pairs, which we can think of as nodes in a heap of the $G_k$'s.

If $G(x)$ is the value of the perturbed negative energy at $x$, then Algorithm 1 instantiates this function at countably many points by setting $G(X_k) = G_k$. In the discrete case we eventually sample the complete perturbed density, but in the continuous case we simply generate an infinite stream of locations and values. The sense in which Algorithm 1 constructs a Gumbel process is that the collection of maxima $\{\max\{G_k \mid X_k \in B\} \mid B \subseteq \Omega\}$ satisfies Definition 2. The intuition should be provided by the introductory argument; a full proof appears in the Appendix. An important note is that because the $G_k$'s are sampled in descending order along a path in the tree, when the first $X_k$ lands in a set $B$, the value of $\max\{G_k \mid X_k \in B\}$ will not change as the algorithm continues.
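As an illustration of this stream of pairs, here is a hedged Python sketch of the Top-Down construction for the simplest continuous case, the uniform (Lebesgue) measure on an interval, so the log measure of a region is just the log of its length; the splitting rule and names are our choices, not the paper's code.

import itertools
import math
import random
from collections import deque

def top_down_uniform(lo, hi):
    """Yield (X, G) pairs of a Gumbel process for the uniform measure on [lo, hi]."""
    def gumbel(m):
        return m - math.log(-math.log(random.random()))
    def trunc_gumbel(m, b):
        return m - math.log(math.exp(m - b) - math.log(random.random()))
    root = (gumbel(math.log(hi - lo)), random.uniform(lo, hi), lo, hi)
    queue = deque([root])
    while queue:
        G, X, a, b = queue.popleft()
        yield X, G
        for ca, cb in ((a, X), (X, b)):                    # partition at the sampled location
            if cb > ca:                                    # child region has positive measure
                Gc = trunc_gumbel(math.log(cb - ca), G)    # child max, truncated at the parent
                Xc = random.uniform(ca, cb)                # child argmax location
                queue.append((Gc, Xc, ca, cb))

# The stream never terminates for a continuous measure; take the first few pairs:
pairs = list(itertools.islice(top_down_uniform(0.0, 1.0), 7))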

Figure 1: Illustration of A* sampling.

Algorithm 2 A* Sampling. Inputs: the log density $i(x)$ of the proposal, the difference $o(x)$, and a bounding function $M(\cdot)$. The algorithm maintains a priority queue of regions keyed by the upper bound $G + M(B)$ together with an incumbent lower bound, as described in Section 4.

4 A* Sampling

The Top-Down construction is not executable in general, because it assumes $\mu(B)$ can be computed efficiently. A* sampling is an algorithm that executes the Gumbel-Max trick without this assumption by exploiting properties of the Gumbel process. Henceforth, A* sampling refers exclusively to the continuous version.

A* sampling is possible because we can transform one Gumbel process into another by adding the difference in their log densities. Suppose we have two continuous measures with log densities $i(x)$ and $i(x) + o(x)$. Let pairs $(X_k, G_k)$ be draws from the Top-Down construction for the measure with log density $i(x)$. If $o(x)$ is bounded, then we can recover the Gumbel process for the other measure by adding $o(X_k)$ to every $G_k$; i.e., the collection of maxima of $G_k + o(X_k)$ over regions is a Gumbel process with the measure whose log density is $i(x) + o(x)$. As an example, if $i(x)$ were a prior log density and $o(x)$ a bounded log-likelihood, then we could simulate the Gumbel process corresponding to the posterior by adding $o(X_k)$ to every $G_k$ from a run of the construction for the prior.

This “linearity” allows us to decompose a target log density function into a tractable $i(x)$ and a boundable $o(x)$. The tractable component is analogous to the proposal distribution in a rejection sampler. A* sampling searches for $\operatorname{argmax}_k \{G_k + o(X_k)\}$ within the heap of $(X_k, G_k)$ pairs produced by the Top-Down construction for the proposal measure. The search is an A* procedure: nodes in the search tree correspond to increasingly refined regions in space, and the search is guided by upper and lower bounds that are computed for each region. Lower bounds for a region come from drawing the max $G$ and argmax $X$ of the proposal's Gumbel process within the region and evaluating $G + o(X)$. Upper bounds come from the fact that

    $\max_{X_k \in B} \{G_k + o(X_k)\} \le G_\nu(B) + M(B),$

where $M(B)$ is a bounding function for a region, i.e. $M(B) \ge o(x)$ for all $x \in B$. $M(B)$ is not random and can be implemented using methods from, e.g., convex duality or interval analysis. The first term on the RHS is the same Gumbel value used in the lower bound.

The algorithm appears as Algorithm 2, and an execution is illustrated in Figure 1. The algorithm begins with a global upper bound (dark blue dashed). $X_1$ and $G_1$ are sampled, and the first lower bound $G_1 + o(X_1)$ is computed. Space is split, upper bounds are computed for the new children regions (medium blue dashed), and the new nodes are put on the queue. The region with highest upper bound is chosen, the maximum Gumbel in the region, $G_k$, is sampled, and $G_k + o(X_k)$ is computed. The current region is split at $X_k$ (producing light blue dashed bounds), after which the lower bound $G_k + o(X_k)$ is greater than the upper bound for any region on the queue, so it is guaranteed to be the max over the infinite tree of $G_k + o(X_k)$. Because this perturbed collection is a Gumbel process with the target measure, $X_k$ is an exact sample from the target distribution and $G_k + o(X_k)$ is an exact sample from a Gumbel with location the target's log normalizing constant. Proofs of termination and correctness are in the Appendix.
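The following Python sketch puts the pieces together for a one-dimensional target with a uniform proposal on [lo, hi], so the target is proportional to exp(o(x)); the bounding function M, the variable names, and the choice to evaluate o when a node is created (rather than when it is expanded) are our own illustrative assumptions, not the authors' reference implementation.

import heapq
import math
import random

def trunc_gumbel(log_mu, bound):
    """Sample Gumbel(log_mu) conditioned to be <= bound, by CDF inversion."""
    u = random.random()
    return log_mu - math.log(math.exp(log_mu - bound) - math.log(u))

def astar_sample(o, M, lo, hi):
    """Return (x, value): an exact sample from the density proportional to exp(o(x))
    on [lo, hi], and the maximal perturbed value found."""
    log_mu = math.log(hi - lo)                            # log measure of the root region
    G = log_mu - math.log(-math.log(random.random()))     # Gumbel(log_mu)
    X = random.uniform(lo, hi)
    best_x, LB = X, G + o(X)                              # first lower bound
    heap = [(-(G + M(lo, hi)), G, X, lo, hi)]             # max-heap keyed on upper bounds
    while heap and -heap[0][0] > LB:
        _, G, X, a, b = heapq.heappop(heap)
        for ca, cb in ((a, X), (X, b)):                   # split the region at its argmax
            if cb <= ca:
                continue
            Gc = trunc_gumbel(math.log(cb - ca), G)       # child max, truncated at parent
            Xc = random.uniform(ca, cb)                   # child argmax location
            if Gc + o(Xc) > LB:                           # tighten the incumbent lower bound
                best_x, LB = Xc, Gc + o(Xc)
            ub = Gc + M(ca, cb)
            if ub > LB:                                   # otherwise the region is pruned
                heapq.heappush(heap, (-ub, Gc, Xc, ca, cb))
    return best_x, LB

# Example usage: target density proportional to exp(-(x - 2)^2) on [0, 5], with a crude
# but valid interval bound M on o (both o and M here are illustrative choices).
o = lambda x: -(x - 2.0) ** 2
M = lambda a, b: 0.0 if a <= 2.0 <= b else -min((a - 2.0) ** 2, (b - 2.0) ** 2)
x_sample, _ = astar_sample(o, M, 0.0, 5.0)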

Variants. There are several variants of A* sampling. When more than one sample is desired, bound information can be reused across runs of the sampler. In particular, suppose we have a partition of the sample space with bounds on $o(x)$ for each region. A* sampling could use this by running a search independently for each region and returning the max Gumbel. The maximization can be done lazily by using A* search, only expanding nodes in regions that are needed to determine the global maximum. The second variant trades bound computations for likelihood computations by drawing more than one sample from the auxiliary Gumbel process at each node in the search tree. In this way, more lower bounds are computed (costing more likelihood evaluations), but if this leads to better lower bounds, then more regions of space can be pruned, leading to fewer bound evaluations. Finally, an interesting special case of A* sampling can be implemented when $o(x)$ is unimodal in 1D. In this case, at every split of a parent node, one child can immediately be pruned, so the “search” can be executed without a queue. It simply maintains the currently active node and drills down until it has provably found the optimum.

5 Comparison to Rejection Samplers

Our first result relating A* sampling to rejection sampling is that if the same global bound $M(\Omega)$ is used at all nodes within A* sampling, then the runtime of A* sampling is equivalent to that of standard rejection sampling. That is, the number of iterations is distributed as a Geometric distribution whose success rate is the acceptance probability of the corresponding rejection sampler. A proof is in the Appendix as part of the proof of termination.

When bounds are refined, A* sampling bears similarity to adaptive rejection sampling-based algorithms. In particular, while it appears only to have been applied in discrete domains, OS* [dymetman2012algorithm] is a general class of adaptive rejection sampling methods that maintain piecewise bounds on the target distribution. If piecewise constant bounds are used (henceforth we assume OS* uses only constant bounds), the procedure can be described as follows: at each step, (1) a region is sampled with probability proportional to its mass under the bounded proposal, (2) a point is drawn from the proposal distribution restricted to the chosen region; (3) standard accept/reject computations are performed using the regional bound, and (4) if the point is rejected, a region is chosen to be split into two, and new bounds are computed for the two regions that were created by the split. This process repeats until a point is accepted.

Steps (2) and (4) are performed identically in A* sampling when sampling argmax Gumbel locations and when splitting a parent node. A key difference is how regions are chosen in step (1). In OS*, a region is drawn according to its volume under the bounded proposal. Note that piece selection could be implemented using the Gumbel-Max trick, in which case we would choose the piece with maximum $G_B + M(B)$, where $G_B \sim \mathrm{Gumbel}(\log \nu(B))$. In A* sampling the region with highest upper bound is chosen, where the upper bound is also of the form $G_B + M(B)$. The difference is that the Gumbel values are resampled after each rejection in OS*, while they persist in A* sampling until a sample is returned.
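To make the contrast in step (1) concrete, here is a small Python sketch of the two selection rules for a list of regions, each carrying a proposal mass nu and a constant bound M; the dictionary-based representation, and reading OS*'s selection probability as proportional to exp(M) times the proposal mass of the region, are our illustrative assumptions.

import math
import random

def os_star_select(regions):
    """OS*-style step (1): redraw every time, choosing a region with probability
    proportional to its mass under the bound-weighted proposal, exp(M_B) * nu_B."""
    weights = [math.exp(r["M"]) * r["nu"] for r in regions]
    return random.choices(regions, weights=weights)[0]

def astar_select(regions):
    """A*-style step (1): choose the region with the highest upper bound G_B + M_B,
    where G_B ~ Gumbel(log nu_B) is drawn once per region and persists thereafter."""
    for r in regions:
        if "G" not in r:
            r["G"] = math.log(r["nu"]) - math.log(-math.log(random.random()))
    return max(regions, key=lambda r: r["G"] + r["M"])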

The effect of the difference is that A* sampling more tightly couples together where the accepted sample will be and which regions are refined. Unlike OS*, it can go so far as to prune a region from the search, meaning there is zero probability that the returned sample will be from that region, and that region will never be refined further. OS*, on the other hand, is blind towards where the sample that will eventually be accepted comes from and will on average waste more computation refining regions that ultimately are not useful in drawing the sample. In experiments, we will see that A* sampling consistently dominates OS*, refining the function less while also using fewer likelihood evaluations. This is possible because the persistence inside A* sampling focuses the refinement on the regions that are important for accepting the current sample.

6 Experiments

There are three main aims in this section. First, understand the empirical behavior of A* sampling as parameters of the inference problem and bounds vary. Second, demonstrate generality by showing that A* sampling algorithms can be instantiated in just a few lines of model-specific code by expressing $o(x)$ symbolically and then using a branch and bound library to automatically compute bounds. Finally, compare to OS* and an MCMC method (slice sampling). In all experiments, regions in the search trees are hyperrectangles (possibly with infinite extent); to split a region, choose the dimension with the largest side length and split that dimension at the sampled point.

Figure 2: (a) Drill-down algorithm performance as a function of the peakiness parameter. (b) Effect of different bounding strategies as a function of the number of data points; numbers of likelihood and bound evaluations are reported. (c) Problem-dependent scaling: results of varying observation noise in several nonlinear regression problems.

6.1 Scaling versus Peakiness and Dimension

In the first experiment, we sample from a peaked 1D target using a simple proposal distribution. In this case $o(x)$ is unimodal, so the drill-down variant of A* sampling can be used. As the peakiness parameter grows, the function becomes peakier; while this presents significant difficulty for vanilla rejection sampling, the cost to A* sampling is just the cost of locating the peak, which is essentially binary search. Results averaged over 1000 runs appear in Figure 2(a).

In the second experiment, we run A* sampling on the clutter problem [minka2001expectation], which estimates the mean of a fixed-covariance isotropic Gaussian under the assumption that some points are outliers. We put a Gaussian prior on the inlier mean and set $i(x)$ to be equal to the prior log density, so $o(x)$ contains just the likelihood terms. To compute bounds on the total log likelihood, we compute upper bounds on the log likelihood of each point independently and then sum these bounds. We will refer to these as “constant” bounds. In $D$ dimensions, we generated 20 data points, half clustered around each of two well-separated locations, which ensures that the posterior is sharply bimodal, making vanilla MCMC quickly inappropriate as $D$ grows. The cost of drawing an exact sample as a function of $D$ (averaged over 100 runs) grows exponentially in $D$, but the problem remains reasonably tractable as $D$ grows (on the order of 900 likelihood evaluations at the smaller dimensionalities tested and 4000 at the larger). The analogous OS* algorithm run on the same set of problems requires more computation on average over the runs.
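The “constant” bounds described above are simple to implement; the sketch below is ours, assuming an isotropic unit-variance Gaussian likelihood and hyperrectangular regions: each point's log likelihood is bounded at the box point closest to it, and the per-point bounds are summed.

import numpy as np

def per_point_upper_bound(y, lows, highs):
    """Upper bound on log N(y | x, I) over x in the box [lows, highs] (up to constants)."""
    closest = np.clip(y, lows, highs)        # the maximizer is the box point nearest to y
    return -0.5 * np.sum((y - closest) ** 2)

def constant_bound(data, lows, highs):
    """Bound on o(x) = sum of per-point log likelihoods, over the box."""
    return sum(per_point_upper_bound(y, lows, highs) for y in data)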

6.2 Bounding Strategies

Here we investigate alternative strategies for bounding $o(x)$ in the case where $o(x)$ is a sum of per-instance log likelihoods. To allow easy implementation of a variety of bounding strategies, we choose the simple problem of estimating the mean of a 1D Gaussian given $N$ observations. We use three types of bounds: constant bounds as in the clutter problem; linear bounds, where we compute linear upper bounds on each term of the sum, then sum the linear functions and take the max over the region; and quadratic bounds, which are the same as linear except quadratic bounds are computed on each term. In this problem, quadratic bounds are tight. We evaluate A* sampling using each of the bounding strategies, varying $N$. See Figure 2(b) for results.

For small $N$, all bound types are equivalent when each expands around the same point. For larger $N$, the looseness of each per-point bound becomes important. The figure shows that, for large $N$, using linear bounds multiplies the number of evaluations by 3 compared to tight bounds, while using constant bounds multiplies the number of evaluations by a factor that grows with $N$. The Appendix explains why this happens and shows that this behavior is expected for any estimation problem where the width of the posterior shrinks with $N$.
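For concreteness, here is a hedged Python sketch of the three per-term strategies for this 1D Gaussian mean-estimation problem, where each observation y contributes l_y(x) = -0.5*(y - x)**2 to o(x) up to constants; the expansion point x0 and the function names are our illustrative choices.

def constant_bound(data, a, b):
    """Per-term maxima over [a, b], then summed (loosest of the three)."""
    return sum(-0.5 * (y - min(max(y, a), b)) ** 2 for y in data)

def linear_bound(data, a, b, x0):
    """Per-term tangent lines at x0 (upper bounds, since each term is concave),
    summed into one line and then maximized over [a, b]."""
    slope = sum(y - x0 for y in data)
    value_at_x0 = sum(-0.5 * (y - x0) ** 2 for y in data)
    return value_at_x0 + max(slope * (a - x0), slope * (b - x0))

def quadratic_bound(data, a, b):
    """Per-term quadratic bounds are exact here: o is a concave quadratic whose
    peak is the sample mean, so the bound equals the true max over [a, b]."""
    mean = sum(data) / len(data)
    x_star = min(max(mean, a), b)
    return sum(-0.5 * (y - x_star) ** 2 for y in data)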

6.3 Using Generic Interval Bounds

Here we study the use of bounds that are derived automatically by means of interval methods [hansen2003global]. This suggests how A* sampling (or OS*) could be used within a more general purpose probabilistic programming setting. We chose a number of nonlinear regression models inspired by problems in physics, computational ecology, and biology. For each, we use FuncDesigner [funcdesigner] to symbolically construct $o(x)$ and automatically compute the bounds needed by the samplers.
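The kind of bound such interval methods provide can be illustrated with a toy hand-rolled interval class (a sketch of ours; FuncDesigner's actual API is not shown here):

import math

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        prods = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
        return Interval(min(prods), max(prods))
    def neg(self):
        return Interval(-self.hi, -self.lo)
    def exp(self):
        return Interval(math.exp(self.lo), math.exp(self.hi))

# Bound f(a, b) = a * exp(-b) over a hyperrectangle of parameter values; the
# interval's upper endpoint is a valid bound M(B) for this term.
a, b = Interval(0.5, 2.0), Interval(1.0, 3.0)
upper_bound = (a * b.neg().exp()).hi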

Several expressions for the regression functions appear in the legend of Figure 2(c), where letters denote parameters that we wish to sample. The model in all cases is that each observed output is the regression function evaluated at the corresponding input plus Gaussian noise. We set uniform priors from a reasonable range for all parameters (see Appendix) and generated a small (N=3) set of training data from the model so that posteriors are multimodal. The peakiness of the posterior can be controlled by the magnitude of the observation noise; we varied this from large to small to produce problems over a range of difficulties. We use A* sampling to sample from the posterior five times for each model and noise setting and report the average number of likelihood evaluations needed in Figure 2(c) (y-axis). To establish the difficulty of the problems, we estimate the expected number of likelihood evaluations needed by a rejection sampler to accept a sample. The savings over rejection sampling is often exponentially large, but it varies per problem and is not necessarily tied to the dimension. In the example where savings are minimal, there are many symmetries in the model, which leads to uninformative bounds. We also compared to OS* on the same class of problems. Here we generated 20 random instances with a fixed intermediate observation noise value for each problem and drew 50 samples, resetting the bounds after each sample. The average cost (heuristically set to # likelihood evaluations plus 2 × # bound evaluations) of OS* for the five models in Figure 2(c) respectively was 21%, 30%, 11%, 21%, and 27% greater than for A* sampling.

6.4 Robust Bayesian Regression

Here our aim is to do Bayesian inference in a robust linear regression model $y_n = w^\top x_n + \epsilon_n$, where the noise $\epsilon_n$ is distributed as standard Cauchy and the weight vector $w$ has an isotropic Gaussian prior. Given a dataset $\{(x_n, y_n)\}_{n=1}^N$, our goal is to draw samples from the posterior over $w$. This is a challenging problem because the heavy-tailed noise model can lead to multimodality in the posterior over $w$. The log likelihood is $\sum_n -\log(1 + (y_n - w^\top x_n)^2)$, up to an additive constant. We generated $N$ data points with input dimension $D$ in such a way that the posterior is bimodal and symmetric: data are drawn from a planted weight vector, and a copy of the inputs paired with negated outputs is appended, so there are two equally-sized modes, one near the planted weights and one near their negation. We decompose the log posterior so that $i(w)$ corresponds to a uniform distribution over a bounding box and all of the prior and likelihood terms go into $o(w)$. Bounds are computed per point; in some regions the per-point bounds are linear, and in others they are quadratic. Details appear in the Appendix.
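For illustration, here is a Python sketch (our own, using a simpler constant per-point bound rather than the paper's linear/quadratic bounds) of o(w) for this model and an upper bound on it over a hyperrectangle of weights:

import numpy as np

def o(w, X, y):
    """Sum of per-point standard-Cauchy log likelihoods (up to an additive constant)."""
    r = y - X @ w
    return -np.sum(np.log1p(r ** 2))

def o_bound(X, y, lows, highs):
    """Upper-bound o over the box [lows, highs] by bounding each residual.
    The residual y_n - x_n . w is linear in w, so its range over the box is obtained
    coordinate-wise; -log(1 + r^2) is then maximized at the r closest to zero."""
    total = 0.0
    for x_n, y_n in zip(X, y):
        hi_dot = np.sum(np.where(x_n > 0, x_n * highs, x_n * lows))
        lo_dot = np.sum(np.where(x_n > 0, x_n * lows, x_n * highs))
        r_lo, r_hi = y_n - hi_dot, y_n - lo_dot
        r_min = 0.0 if r_lo <= 0.0 <= r_hi else min(abs(r_lo), abs(r_hi))
        total += -np.log1p(r_min ** 2)
    return total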

We compare to OS*, using two refinement strategies that are discussed in [dymetman2012algorithm]. The first is directly analogous to A* sampling and is the method we have used in the earlier OS* comparisons: when a point is rejected, refine the piece that was proposed from at the sampled point, splitting the dimension with the largest side length. The second method splits the region with the largest probability under the proposal.

Figure 3: A* sampling (circles) versus OS* (squares and diamonds) computational costs on Cauchy regression experiments of varying dimension. Squares use the refinement strategy that splits the node where the rejected point was sampled; diamonds refine the region with the largest mass under the proposal distribution. Red lines denote lines of equal total computational cost and are spaced on a log scale by 10% increments. Marker color denotes the rate of refinement, ranging from (darkest) refining on every rejection (for OS*) or one lower bound evaluation per node expansion (for A* sampling) to (lightest) refining on 10% of rejections (for OS*) or performing several lower bound evaluations per node expansion (for A* sampling). (left) Cost of drawing a single sample, averaged over 20 random data sets. (right) Drawing 200 samples, averaged over 5 random data sets. Results are similar over a range of dataset sizes and dimensions.

We ran experiments on several random draws of the data and report performance along the two axes that are the dominant costs: how many bound computations were used, and how many likelihood evaluations were used. To weigh the tradeoff between the two, we did a rough asymptotic calculation of the costs of bounds versus likelihood computations and set the cost of a bound computation to be a fixed multiple of the cost of a likelihood computation.

In the first experiment, we ask each algorithm to draw a single exact sample from the posterior. Here, we also report results for the variants of A* sampling and OS* that trade off likelihood computations for bound computations, as discussed in Section 4. A representative result appears in Figure 3 (left). Across operating points, A* sampling consistently uses fewer bound evaluations and fewer likelihood evaluations than both OS* refinement strategies.

In the second experiment, we ask each algorithm to draw 200 samples from the posterior and experiment with the variants that reuse bound information across samples. A representative result appears in Figure 3 (right). Here we see that the extra refinement done by OS* early on allows it to use fewer likelihood evaluations at the expense of more bound computations, but A* sampling operates at a point that is not achievable by OS*. For all of these problems, we ran a random-direction slice sampler [neal2003slice] that was given 10 times the computational budget that A* sampling used to draw 200 samples. The slice sampler had trouble mixing as the dimension grew: across its five runs it switched modes only once at one dimension setting and never switched modes at larger settings.

7 Discussion

This work answers a natural question: is there a Gumbel-Max trick for continuous spaces, and can it be leveraged to develop tractable algorithms for sampling from continuous distributions?

In the discrete case, recent work on “Perturb and MAP” methods [Papandreou11, hazanNIPS2013, tarlow2012randomized] that draw samples as the argmaxes of random energy functions has shown value in developing approximate, correlated perturbations. It is natural to think about continuous analogs in which exactness is abandoned in favor of more efficient computation. A question is whether such approximations can be developed in a principled way, as when [hazan2012partition] showed that a particular form of correlated discrete perturbation gives rise to bounds on the log partition function. Can analogous rigorous approximations be established in the continuous case? We hope this work is a starting point for exploring that question.

We do not solve the problem of high dimensions. There are simple examples where bounds become uninformative in high dimensions, such as when sampling a density that is uniform over a hypersphere while using hyperrectangular search regions. In this case, little is gained over vanilla rejection sampling. An open question is whether the split between $i(x)$ and $o(x)$ can be adapted to be node-specific during the search. An adaptive rejection sampler would be able to do this, which would allow leveraging parameter-varying bounds in the proposal distributions. This might be an important degree of freedom to exercise, particularly when scaling up to higher dimensions.

There are several possible follow-ons, including a discrete version of A* sampling and evaluating A* sampling as an estimator of the log partition function. In future work, we would like to explore taking advantage of conditional independence structure to perform more intelligent search, hopefully helping the method scale to larger dimensions. Example starting points might be ideas from AND/OR search [mateescu2007and] or branch and bound algorithms that only branch on a subset of dimensions [chandraker2008globally].

Acknowledgments

This research was supported by NSERC. We thank James Martens and Radford Neal for helpful discussions, Elad Mezuman for help developing early ideas related to this work, and Roger Grosse for suggestions that greatly improved this work.

References

Appendix for “A* Sampling”

In this appendix we prove the main theoretical results of the paper and provide additional experimental details. First, define the following shorthand for the Gumbel CDF and PDF with location $m$:

    $F(g \mid m) = \exp(-\exp(-g + m)), \qquad f(g \mid m) = \exp(-g + m)\, F(g \mid m).$

Thus $F(\cdot \mid m)$ is the CDF and $f(\cdot \mid m)$ the PDF of a $\mathrm{Gumbel}(m)$. The following identities are easy to verify and will be reused throughout the appendix.

$F(g \mid m_1)\, F(g \mid m_2) = F\big(g \mid \log(\exp m_1 + \exp m_2)\big)$   (5)

$\int_{-\infty}^{\infty} f(g \mid m_1)\, F(g \mid m_2)\, dg = \dfrac{\exp m_1}{\exp m_1 + \exp m_2}$   (6)

Joint Distribution of Gumbel Max and Argmax

Suppose $G(i) \sim \mathrm{TruncGumbel}(\phi(i), b)$ are independent truncated Gumbels; we are interested in deriving the joint distribution of the argmax $i^* = \operatorname{argmax}_i G(i)$ and the max $G^* = \max_i G(i)$. Carrying out the computation shows that, for any bound $b$, the argmax is distributed according to the Gibbs distribution $\exp(\phi(i)) / \sum_j \exp(\phi(j))$, the max has the density of a $\mathrm{TruncGumbel}(\log \sum_i \exp \phi(i), b)$, and the two are independent.

These results are well-known. The fact that the max Gumbel value has a location that is the log partition function means we can use samples of it as an estimator of the log partition function with known variance $\pi^2/(6n)$ for $n$ samples [gumbel]. (2) shows that Gumbels satisfy Luce's choice axiom [YellottJr1977109]. In fact, it is a well-known result in random choice theory that the only noise distribution satisfying (2) is the Gumbel. Notice that the argmax is also independent of the bound, and $b = \infty$ is a valid choice.

Analysis of Top-Down Construction

Correctness of the Top-Down Construction of the Gumbel Process

Input: sample space $\Omega$, sigma-finite measure $\mu(\cdot)$
  $\Omega_1 \leftarrow \Omega$;  $G_0 \leftarrow \infty$
  $k \leftarrow 1$
  while $\mu(\Omega_k) > 0$ do
      $G_k \sim \mathrm{TruncGumbel}(\log \mu(\Omega_k), G_{k-1})$
      $X_k \sim \mu(\cdot \cap \Omega_k)/\mu(\Omega_k)$
      $\Omega_{k+1} \leftarrow \Omega_k \setminus \{X_k\}$
      $k \leftarrow k + 1$
      yield $(X_{k-1}, G_{k-1})$
Algorithm 3 In-Order Construction


Figure 4: Visualization of a realization of a Gumbel process as produced by the Top-Down (Algorithm 1) and In-Order (Algorithm 3) constructions for the first few steps. Blue arrows indicate truncation. Black lines indicate the partitioning used by Algorithm 1. Note the sense in which the two constructions are simply re-orderings of each other.

The goal of this section is to prove that the Top-Down Construction constructs the Gumbel process. In particular, we will argue that if we run Algorithm 1 on $\Omega$ with measure $\mu$, then the collection of maxima it defines is a Gumbel process for $\mu$. In order to do this we consider a special case of Algorithm 1 in which space is not subdivided, i.e. the partition step returns the whole remaining set. In this case the construction takes on a particularly simple form, since no queue is needed; see Algorithm 3. We call this special case the In-Order Construction, because it produces the Gumbel values in non-increasing order.

We proceed by arguing that subdividing space has no effect on the distribution of the top Gumbels. This means that it would be impossible to distinguish a run of Algorithm 3 from a run of Algorithm 1 with the Gumbel values sorted. This allows us to use any choice of partitioning with a run of Algorithm 1 to analyze the distribution of the resulting collection. More precisely:

  1. We argue that the top Gumbels of Algorithm 1 are distributed as in Algorithm 3 regardless of the partitioning; that is, the joint distribution of the $k$ largest Gumbel values and their locations is the same under both constructions.

    This implies that the distribution over the realized collection is invariant under the choice of partition function in Algorithm 1.

  2. We derive the joint marginal of the maxima for a specific choice of partition.

    By the previous result, this is the distribution for any choice of partition (provided it does not produce non-measurable sets), giving us conditions 1 and 2 of Definition 2. Condition 3 is easily satisfied.

This proves the existence of the Gumbel process.

Equivalence Under the Choice of Partition

We will proceed to show that the distribution over the realized collection is invariant under the choice of partition function. To do so we argue that the top Gumbels from Algorithm 1 all have the same distribution as in Algorithm 3; that is, for all $k$, the $k$-th largest Gumbel in the tree from Algorithm 1 and its location are distributed as the $k$-th pair produced by Algorithm 3.

Notice that whenever singleton sets have measure zero we can omit the removal of sampled locations and still have the same distribution. In the case of a continuous measure we can therefore omit all removals.

Proof.

We proceed by induction. For $k = 1$, the claim is immediate, since the root of Algorithm 1 is sampled exactly as the first pair of Algorithm 3.

Now for $k > 1$, consider the top $k - 1$ nodes from a single realization of the process, i.e. the indices of the $k - 1$ largest Gumbels. By the induction hypothesis we know their distribution, and they form a partial tree of the completely realized tree. Our goal is to show that the $k$-th largest Gumbel and its location have the distribution of the $k$-th pair of Algorithm 3.

The boundary of the max partial tree consists of the nodes that are on the queue and have not been expanded. We know that, conditioned on the partial tree, the $k$-th largest Gumbel will come from this boundary. The first step is to realize that the sets associated with the boundary of the max partial tree form a partition of the space remaining after the sampled locations are removed. For a boundary node whose parent in the partial tree is $p$, its Gumbel is a truncated Gumbel bounded by $G_p$.

Because products of indicator functions are like intersections

In other words, the boundary Gumbels are independent truncated Gumbels. Notice that the subsets associated with the boundary form a complete partition of the remaining space; thus we obtain the distribution of the maximum over the boundary.

The location has the following distribution:

Again, because the boundary sets form a partition of the remaining space, this is a mixture distribution in which a subset is sampled with probability proportional to its measure and then the location is sampled from the measure restricted to that subset. Thus,

and by the independence of the max and argmax we get that the location is independent of the max value. ∎

Joint Marginal of Max-Gumbels in a Set and Its Complement

Because the joint distribution over the entire collection is the same regardless of the partition function, the joint distribution of the maxima over the two sets computed for one specific choice of partition is indeed the joint marginal for any choice. In particular we show:

Proof.

Consider the partition function that first splits the sample space into the two sets of interest, and consider the distribution over the first two levels of Algorithm 1. If the root location lands in one set, then that set's max is the root Gumbel and the other set's max is its truncated child; otherwise the roles are reversed. Thus, the root location lands in a set if and only if that set contains the overall argmax. Using this knowledge we can split the distribution over the two maxima into two events.