# Lost Relatives of the Gumbel Trick

The Gumbel trick is a method to sample from a discrete probability distribution, or to estimate its normalizing partition function. The method relies on repeatedly applying a random perturbation to the distribution in a particular way, each time solving for the most likely configuration. We derive an entire family of related methods, of which the Gumbel trick is one member, and show that the new methods have superior properties in several settings with minimal additional computational cost. In particular, for the Gumbel trick to yield computational benefits for discrete graphical models, Gumbel perturbations on all configurations are typically replaced with so-called low-rank perturbations. We show how a subfamily of our new methods adapts to this setting, proving new upper and lower bounds on the log partition function and deriving a family of sequential samplers for the Gibbs distribution. Finally, we balance the discussion by showing how the simpler analytical form of the Gumbel trick enables additional theoretical results.


## 1 Introduction

In this work we are concerned with the fundamental problem of sampling from a discrete probability distribution and evaluating its normalizing constant. A probability distribution on a discrete sample space $\mathcal{X}$ is provided in terms of its potential function $\phi : \mathcal{X} \to \mathbb{R}$, corresponding to log-unnormalized probabilities via $p(x) = e^{\phi(x)} / Z$, where the normalizing constant $Z = \sum_{x \in \mathcal{X}} e^{\phi(x)}$ is the partition function. In this context, $p$ is the Gibbs distribution on $\mathcal{X}$ associated with the potential function $\phi$. The challenges of sampling from such a discrete probability distribution and estimating the partition function are fundamental problems with ubiquitous applications in machine learning, classical statistics and statistical physics (see, e.g., Lauritzen, 1996).

Perturb-and-MAP methods (Papandreou & Yuille, 2010) constitute a class of randomized algorithms for estimating partition functions and sampling from Gibbs distributions, which operate by randomly perturbing the corresponding potential functions and employing maximum a posteriori (MAP) solvers on the perturbed models to find a maximum probability configuration. This MAP problem is NP-hard in general; however, substantial research effort has led to the development of solvers which can efficiently compute or estimate the MAP solution on many problems that occur in practice (e.g., Boykov et al., 2001; Kolmogorov, 2006; Darbon, 2009). Evaluating the partition function is a harder problem, containing for instance #P-hard counting problems. The general aim of perturb-and-MAP methods is to reduce the problem of partition function evaluation, or the problem of sampling from the Gibbs distribution, to repeated instances of the MAP problem (where each instance is on a different random perturbation of the original model).

The Gumbel trick (Papandreou & Yuille, 2011) relies on adding Gumbel-distributed noise to each configuration’s potential $\phi(x)$. We derive a wider family of perturb-and-MAP methods that can be seen as perturbing the model in different ways – in particular using the Weibull and Fréchet distributions alongside the Gumbel. We show that the new methods can be implemented with essentially no additional computational cost by simply averaging existing Gumbel MAP perturbations in different spaces, and that they can lead to more accurate estimators of the partition function.

Evaluating or perturbing each configuration’s potential with i.i.d. Gumbel noise can be computationally expensive. One way to mitigate this is to cleverly prune computation in regions where the maximum perturbed potential is unlikely to be found (Maddison et al., 2014; Chen & Ghahramani, 2016). Another approach exploits the product structure of the sample space in discrete graphical models, replacing i.i.d. Gumbel noise with a “low-rank” approximation. Hazan & Jaakkola (2012); Hazan et al. (2013) showed that from such an approximation, upper and lower bounds on the partition function and a sequential sampler for the Gibbs distribution can still be recovered. We show that a subfamily of our new methods, consisting of Fréchet, Exponential and Weibull tricks, can also be used with low-rank perturbations, and use these tricks to derive new upper and lower bounds on the partition function, and to construct new sequential samplers for the Gibbs distribution.

Our main contributions are as follows:

1. A family of tricks that can be implemented by simply averaging Gumbel perturbations in different spaces, and which can lead to more accurate or more sample-efficient estimators of $Z$ (Section 2).

2. New upper and lower bounds on the partition function of a discrete graphical model, computable using low-rank perturbations, and a corresponding family of sequential samplers for the Gibbs distribution (Section 3).

3. Discussion of advantages of the simpler analytical form of the Gumbel trick, including new links between the errors of estimating $\ln Z$, sampling, and entropy estimation using low-rank Gumbel perturbations (Section 4).

##### Background and Related work

The idea of perturbing the potential function of a discrete graphical model in order to sample from its associated Gibbs distribution was introduced by Papandreou & Yuille (2011), inspired by their previous work on reducing the sampling problem for Gaussian Markov random fields to the problem of finding the mean, using independent local perturbations of each Gaussian factor (Papandreou & Yuille, 2010). Tarlow et al. (2012) extended this perturb-and-MAP approach to sampling, in particular by considering more general structured prediction problems. Hazan & Jaakkola (2012) pointed out that MAP perturbations are useful not only for sampling the Gibbs distribution (considering the argmax of the perturbed model), but also for bounding and approximating the partition function (by considering the value of the max).

Afterwards, Hazan et al. (2013) derived new lower bounds on the partition function and proposed a new sampler for the Gibbs distribution that samples variables of a discrete graphical model sequentially, using expected values of low-rank MAP perturbations to construct the conditional probabilities. Due to the low-rank approximation, this algorithm has the option to reject a sample. Orabona et al. (2014) and Hazan et al. (2016) subsequently derived measure concentration results for the Gumbel distribution that can be used to control the rejection probability. Maji et al. (2014) derived an uncertainty measure from random MAP perturbations, using it within a Bayesian active learning framework for interactive image boundary annotation.

Perturb-and-MAP was famously generalized to continuous spaces by Maddison et al. (2014), replacing the Gumbel distribution with a Gumbel process and calling the resulting algorithm A* sampling. Maddison (2016) cast this work into a unified framework together with adaptive rejection sampling techniques, based on the notion of exponential races. This recent view generally brings together perturb-and-MAP and accept-reject samplers, exploiting the connection between the Gumbel distribution and competing exponential clocks that we also discuss in Section 2.1.

Inspired by A* sampling, Kim et al. (2016) proposed an exact sampler for discrete graphical models based on lazily-instantiated random perturbations, which uses linear programming relaxations to prune the optimization space. Further recent applications of perturb-and-MAP include structured prediction in computer vision (Bertasius et al., 2017) and turning the discrete sampling problem into an optimization task that can be cast as a multi-armed bandit problem (Chen & Ghahramani, 2016); see Section 5.2 below.

In addition to perturb-and-MAP methods, we are aware of three other approaches to estimate the partition function of a discrete graphical model via MAP solver calls. The WISH method (weighted-integrals-and-sums-by-hashing, Ermon et al., 2013) relies on repeated MAP inference calls applied to the model after subjecting it to random hash constraints. The Frank-Wolfe method may be applied by iteratively updating marginals using a constrained MAP solver and line search (Belanger et al., 2013; Krishnan et al., 2015). Weller & Jebara (2014a) instead use just one MAP call over a discretized mesh of marginals to approximate the Bethe partition function, which itself is an estimate (which often performs well) of the true partition function.

## 2 Relatives of the Gumbel Trick

In this section, we review the Gumbel trick and state the mechanism by which it can be generalized into an entire family of tricks. We show how these tricks can equivalently be viewed as averaging standard Gumbel perturbations in different spaces, instantiate several examples, and compare the various tricks’ properties.

##### Notation

Throughout this paper, let $\mathcal{X}$ be a finite sample space of size $N := |\mathcal{X}|$. Let $\tilde{p} : \mathcal{X} \to [0, \infty)$ be an unnormalized mass function over $\mathcal{X}$ and let $Z := \sum_{x \in \mathcal{X}} \tilde{p}(x)$ be its normalizing partition function. Write $p(x) := \tilde{p}(x) / Z$ for the normalized version of $\tilde{p}$, and $\phi(x) := \ln \tilde{p}(x)$ for the log-unnormalized probabilities, i.e. the potential function.

We write $\mathrm{Exp}(\lambda)$ for the exponential distribution with rate (inverse mean) $\lambda$, and $\mathrm{Gumbel}(\mu)$ for the Gumbel distribution with location $\mu$ and unit scale. The latter has mean $\mu + c$, where $c \approx 0.5772$ is the Euler-Mascheroni constant.

### 2.1 The Gumbel Trick

Similarly to the connection between the Gumbel trick and the Poisson process established by Maddison (2016), we introduce the Gumbel trick for discrete probability distributions using a simple and elegant construction via competing exponential clocks. Consider $N$ independent clocks, started simultaneously, such that the $k$-th clock rings after a random time $T_k \sim \mathrm{Exp}(\lambda_k)$. Then it is easy to show that (1) the time until some clock rings has distribution $\mathrm{Exp}(\lambda_1 + \cdots + \lambda_N)$, and (2) the probability of the $k$-th clock ringing first is proportional to its rate $\lambda_k$. These properties are also widely used in survival analysis (Cox & Oakes, 1984).

Consider competing exponential clocks $\{T_x\}_{x \in \mathcal{X}}$, indexed by elements of $\mathcal{X}$, with respective rates $\tilde{p}(x)$, i.e. $T_x \sim \mathrm{Exp}(\tilde{p}(x))$. Property (1) of competing exponential clocks tells us that

$$\min_{x \in \mathcal{X}} \{T_x\} \sim \mathrm{Exp}(Z). \tag{1}$$

Property (2) says that the random variable $\operatorname{argmin}_{x \in \mathcal{X}} \{T_x\}$, taking values in $\mathcal{X}$, is distributed according to $p$:

$$\operatorname*{argmin}_{x \in \mathcal{X}} \{T_x\} \sim p. \tag{2}$$

The Gumbel trick is obtained by applying the function $g(T) := -\ln T - c$ to the equalities in distribution (1) and (2). When $g$ is applied to an $\mathrm{Exp}(\lambda)$ random variable, the result follows the $\mathrm{Gumbel}(-c + \ln \lambda)$ distribution, which can also be represented as $-c + \ln \lambda + \gamma$, where $\gamma \sim \mathrm{Gumbel}(0)$. Defining $\{\gamma(x)\}_{x \in \mathcal{X}} \overset{\text{iid}}{\sim} \mathrm{Gumbel}(-c)$ and noting that $g$ is strictly decreasing (so it maps the min in (1) to a max, and the argmin in (2) to an argmax), applying the function $g$ to the equalities in distribution (1) and (2), we obtain:

$$\max_{x \in \mathcal{X}} \{\phi(x) + \gamma(x)\} \sim \mathrm{Gumbel}(-c + \ln Z), \tag{1'}$$

$$\operatorname*{argmax}_{x \in \mathcal{X}} \{\phi(x) + \gamma(x)\} \sim p, \tag{2'}$$

where we have recalled that $\tilde{p}(x) = e^{\phi(x)}$. The $\mathrm{Gumbel}(-c + \ln Z)$ distribution has mean $\ln Z$, and thus the log partition function can be estimated by averaging i.i.d. samples of the maximum in (1') (Hazan & Jaakkola, 2012).
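To make this concrete, the following sketch estimates $\ln Z$ with the Gumbel trick on a small toy distribution where $Z$ is computable exactly; the mass function and sample size are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
c = np.euler_gamma  # Euler-Mascheroni constant

# Toy unnormalized mass function over X = {0, ..., 4}; Z is tractable here.
p_tilde = np.array([0.5, 1.0, 2.0, 4.0, 2.5])
phi = np.log(p_tilde)  # potentials phi(x) = ln p_tilde(x)
Z = p_tilde.sum()

# Gumbel trick: perturb every potential with iid Gumbel(-c) noise; the
# maximum perturbed potential is Gumbel(-c + ln Z) distributed, with mean ln Z.
M = 200_000
gamma = rng.gumbel(loc=-c, scale=1.0, size=(M, len(phi)))
ln_Z_hat = (phi + gamma).max(axis=1).mean()
print(ln_Z_hat)  # close to ln(10) ~ 2.30
```

Per (2'), taking the argmax of each perturbed row instead would yield exact samples from $p$.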

### 2.2 Constructing New Tricks

Given the equality in distribution (1), we can treat the problem of estimating the partition function as a parameter estimation problem for the exponential distribution. Applying the function $g(T) = -\ln T - c$ as in the Gumbel trick to obtain a $\mathrm{Gumbel}(-c + \ln Z)$ random variable, and estimating its mean to obtain an unbiased estimator of $\ln Z$, is just one way of inferring information about $Z$.

We consider applying different functions to (1); particularly those functions that transform the exponential distribution to another distribution with known mean. As the original exponential distribution has rate $Z$, the transformed distribution will have mean $f(Z)$ for some function $f$, which will in general no longer be the logarithm function. Since we are often interested in estimating various transformations of $Z$, this provides us with a collection of unbiased estimators from which to choose. Moreover, further transforming these estimators yields a collection of (biased) estimators for other transformations of $Z$, including $Z$ itself.

###### Example 1 (Weibull tricks).

For any $\alpha > 0$, applying the function $h_\alpha(T) := T^\alpha$ to an $\mathrm{Exp}(\lambda)$ random variable yields a random variable with the $\mathrm{Weibull}(\lambda^{-\alpha}, \alpha^{-1})$ distribution with scale $\lambda^{-\alpha}$ and shape $\alpha^{-1}$, which has mean $\lambda^{-\alpha} \Gamma(1 + \alpha)$ and can also be represented as $\lambda^{-\alpha} W$, where $W \sim \mathrm{Weibull}(1, \alpha^{-1})$. Defining $\{W(x)\}_{x \in \mathcal{X}} \overset{\text{iid}}{\sim} \mathrm{Weibull}(1, \alpha^{-1})$ and noting that $h_\alpha$ is increasing, applying it to the equality in distribution (1) gives

$$\min_{x \in \mathcal{X}} \{\tilde{p}(x)^{-\alpha} W(x)\} \sim \mathrm{Weibull}(Z^{-\alpha}, \alpha^{-1}). \tag{1''}$$

Estimating the mean of the right-hand side yields an unbiased estimator of $Z^{-\alpha} \Gamma(1 + \alpha)$, and hence of $Z^{-\alpha}$ after dividing by $\Gamma(1 + \alpha)$. The special case $\alpha = 1$ corresponds to the identity function $h_1(T) = T$; we call the resulting trick the Exponential trick. ∎
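As a numerical check of Example 1, the sketch below runs the Weibull trick directly on the competing exponential clocks of a toy distribution; $\alpha = 1$ recovers the Exponential trick. The mass function and sample size are arbitrary illustrative choices.

```python
import numpy as np
from math import gamma as Gamma

rng = np.random.default_rng(1)

# Toy unnormalized mass function; Z is tractable here.
p_tilde = np.array([0.5, 1.0, 2.0, 4.0, 2.5])
Z = p_tilde.sum()

alpha = 1.0  # alpha = 1 is the Exponential trick
M = 200_000

# Competing exponential clocks T_x ~ Exp(p_tilde(x)); their minimum is Exp(Z).
T = rng.exponential(scale=1.0 / p_tilde, size=(M, len(p_tilde)))
W = T.min(axis=1) ** alpha  # ~ Weibull(Z^-alpha, 1/alpha)

# E[W] = Z^-alpha * Gamma(1 + alpha): divide and invert for an estimate of Z.
Z_hat = (W.mean() / Gamma(1.0 + alpha)) ** (-1.0 / alpha)
print(Z_hat)  # close to Z = 10
```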

Table 1 lists several examples of tricks derived this way. As Example 1 shows, these tricks may not involve additive perturbation of the potential function $\phi$; the Weibull tricks multiplicatively perturb exponentiated unnormalized probabilities with Weibull noise. As models of interest are often specified in terms of potential functions, to be able to reuse existing MAP solvers in a black-box manner with the new tricks, we seek an equivalent formulation in terms of the potential function. The following Proposition shows that by not passing the transformation through the minimization in equation (1), the new tricks can be equivalently formulated as averaging additive Gumbel perturbations of the potential function in different spaces.

###### Proposition 2.

For any function $g : (0, \infty) \to \mathbb{R}$ such that $f(\lambda) := \mathbb{E}_{T \sim \mathrm{Exp}(\lambda)}[g(T)]$ exists, we have

$$f(Z) = \mathbb{E}_\gamma\!\left[g\!\left(e^{-c} \exp\!\left(-\max_{x \in \mathcal{X}} \{\phi(x) + \gamma(x)\}\right)\right)\right],$$

where $\{\gamma(x)\}_{x \in \mathcal{X}} \overset{\text{iid}}{\sim} \mathrm{Gumbel}(-c)$.

###### Proof.

As $\max_{x \in \mathcal{X}} \{\phi(x) + \gamma(x)\} \sim \mathrm{Gumbel}(-c + \ln Z)$, we have $e^{-c} \exp(-\max_{x \in \mathcal{X}} \{\phi(x) + \gamma(x)\}) \sim \mathrm{Exp}(Z)$, and the result follows by the assumption relating $f$ and $g$. ∎

Proposition 2 shows that the new tricks can be implemented by solving the same MAP problems as in the Gumbel trick, and then merely passing the solutions through the function before averaging them to approximate the expectation.
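Proposition 2 can be exercised in a few lines: a single batch of Gumbel-perturbed MAP values is reused by every trick, and each trick only changes the function applied before averaging. The toy model and sample size below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
c = np.euler_gamma

# Toy model with tractable Z.
p_tilde = np.array([0.5, 1.0, 2.0, 4.0, 2.5])
phi = np.log(p_tilde)
Z = p_tilde.sum()

# One batch of Gumbel MAP perturbation values, shared by all tricks.
M = 200_000
gamma = rng.gumbel(loc=-c, scale=1.0, size=(M, len(phi)))
map_values = (phi + gamma).max(axis=1)

# Gumbel trick: g(T) = -ln(T) - c, i.e. just average the MAP values.
ln_Z_gumbel = map_values.mean()

# Exponential trick: g(T) = T. By Proposition 2 these are Exp(Z) samples.
T = np.exp(-c - map_values)
Z_exponential = 1.0 / T.mean()
```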

### 2.3 Comparing Tricks

#### 2.3.1 Asymptotic efficiency

The Delta method (Casella & Berger, 2002) is a simple technique for assessing the asymptotic variance of estimators that are obtained by a differentiable transformation of an estimator with known variance. The last column in Table 1 lists asymptotic variances of the corresponding tricks when unbiased estimators of $f(Z)$ are passed through the inverse function $f^{-1}$ to yield (biased, but consistent and non-negative) estimators of $Z$ itself. It is interesting to examine the constants that multiply $Z^2 / M$ in the obtained asymptotic variance expressions for the different tricks. For example, it can be shown using Gurland’s ratio (Gurland, 1956) that this constant is at least $1$ for the Weibull and Fréchet tricks, which is precisely the value achieved by the Exponential trick (corresponding to $\alpha = 1$). Moreover, the Gumbel trick constant $\pi^2 / 6$ can be shown to be the limit as $\alpha \to 0$ of the Weibull and Fréchet trick constants. In particular, the constant of the Exponential trick is strictly better than that of the standard Gumbel trick: $1 < \pi^2 / 6 \approx 1.645$. This motivates us to compare the Gumbel and Exponential tricks in more detail.

#### 2.3.2 Mean squared error (MSE)

For estimators $\hat{Z}$ of a quantity $Z$, the mean squared error $\mathrm{MSE}(\hat{Z}) := \mathbb{E}[(\hat{Z} - Z)^2] = \mathrm{var}(\hat{Z}) + \mathrm{bias}(\hat{Z})^2$ is a commonly used comparison metric. When the Gumbel or Exponential tricks are used to estimate either $Z$ or $\ln Z$, the biases, variances, and MSEs of the estimators can be computed analytically using standard methods (Appendix A).

For example, the unbiased estimator of $\ln Z$ from the Gumbel trick can be turned into a consistent non-negative estimator of $Z$ by exponentiation: $\hat{Z} := \exp(\frac{1}{M} \sum_{m=1}^M G_m)$, where the samples $G_m \sim \mathrm{Gumbel}(-c + \ln Z)$ are obtained using equation (1'). The bias and variance of $\hat{Z}$ can be computed using independence and the moment generating functions of the $G_m$'s; see Appendix A for details.

Perhaps surprisingly, all estimator properties only depend on the true value of $Z$ and not on the structure of the model (distribution $p$), since the estimators rely only on i.i.d. samples of an $\mathrm{Exp}(Z)$ random variable. Figure 1 shows the analytically computed estimator variances and MSEs. For estimating $Z$ itself (left), the Exponential trick outperforms the Gumbel trick in terms of MSE for all sample sizes $M \geq 3$ (for $M \leq 2$, both estimators have infinite variance and MSE). The ratio of MSEs quickly approaches $\pi^2 / 6 \approx 1.645$, and in this regime the Exponential trick requires roughly $40\%$ fewer samples than the Gumbel trick to reach the same MSE. Also, for estimating $\ln Z$ (Figure 1, right), the Exponential trick provides a lower MSE estimator for sample sizes $M \geq 2$; only for $M = 1$ does the Gumbel trick provide a better estimator.

Note that as the biases are available analytically, the estimators can be easily debiased (by subtracting their bias). One then obtains estimators with MSEs equal to the variances of the original estimators, shown dashed in Figure 1. The Exponential trick would then always outperform the Gumbel trick when estimating $\ln Z$, even for small sample sizes.

For Weibull tricks with $\alpha \neq 1$ and for Fréchet tricks, we estimated the biases and variances of estimators of $Z$ and $\ln Z$ by constructing the estimators many times in each case and evaluating their sample bias and variance. Figure 2 shows the results for varying $\alpha$ and several sample sizes $M$. We plot the analytically computed value for the Gumbel trick at $\alpha = 0$, as we observe that the Weibull trick interpolates between the Gumbel trick and the Exponential trick as $\alpha$ increases from $0$ to $1$. We note that the minimum MSE estimator is obtained by choosing a value of $\alpha$ that is close to $1$, i.e. the Exponential trick. This agrees with the finding from Section 2.3.1 that $\alpha = 1$ is optimal as $M \to \infty$.
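The Gumbel–Exponential comparison is easy to reproduce by simulation. The sketch below repeatedly builds both estimators of $Z$ from the same $\mathrm{Exp}(Z)$ samples and compares empirical MSEs; the value of $Z$, the sample size $M$, and the repetition count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(5)
c = np.euler_gamma
Z, M, reps = 10.0, 100, 20_000

# Underlying Exp(Z) samples (minima of competing clocks), shared by both tricks.
T = rng.exponential(scale=1.0 / Z, size=(reps, M))

# Gumbel trick: exponentiate the average of the Gumbel samples -ln(T) - c.
Z_gumbel = np.exp((-np.log(T) - c).mean(axis=1))
# Exponential trick: invert the average of the Exp(Z) samples.
Z_exponential = 1.0 / T.mean(axis=1)

mse_gumbel = np.mean((Z_gumbel - Z) ** 2)
mse_exponential = np.mean((Z_exponential - Z) ** 2)
ratio = mse_gumbel / mse_exponential  # approaches pi^2/6 ~ 1.645 for large M
```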

### 2.4 Bayesian Perspective

A Bayesian approach exposes two choices when constructing estimators of $Z$, or of its transformations $f(Z)$:

1. A choice of prior distribution $p(Z)$, encoding prior beliefs about the value of $Z$ before any observations.

2. A choice of how to summarize the posterior distribution given samples $X_1, \ldots, X_M \overset{\text{iid}}{\sim} \mathrm{Exp}(Z)$.

Taking the Jeffreys prior $p(Z) \propto 1/Z$, an improper prior that is invariant under reparametrization, observing samples $X_1, \ldots, X_M$ yields the posterior:

$$p_M(Z \mid X_1, \ldots, X_M) \propto Z^{M - 1} e^{-Z \sum_{m=1}^M X_m}.$$

Recognizing the density of a $\mathrm{Gamma}(M, \sum_{m=1}^M X_m)$ random variable, the posterior mean is

$$\mathbb{E}[Z \mid X_1, \ldots, X_M] = \frac{M}{\sum_{m=1}^M X_m} = \left(\frac{1}{M} \sum_{m=1}^M X_m\right)^{-1},$$

coinciding with the Exponential trick estimator of $Z$.

## 3 Low-rank Perturbations

One way of exploiting perturb-and-MAP to yield computational savings is to replace independent perturbations of each configuration’s potential with an approximation. Such approximations are available e.g. in discrete graphical models, where the sampling space has a product space structure $\mathcal{X} = \mathcal{X}_1 \times \cdots \times \mathcal{X}_n$, with $\mathcal{X}_i$ the state space of the $i$-th variable.

###### Definition 3 (Sum-unary perturbation; Hazan & Jaakkola, 2012).

The sum-unary perturbation MAP value is the random variable

$$U := \max_{x \in \mathcal{X}} \left\{\phi(x) + \sum_{i=1}^n \gamma_i(x_i)\right\},$$

where $\{\gamma_i(x_i) : 1 \leq i \leq n,\; x_i \in \mathcal{X}_i\} \overset{\text{iid}}{\sim} \mathrm{Gumbel}(-c)$.

This definition involves only $\sum_{i=1}^n |\mathcal{X}_i|$ i.i.d. Gumbel random variables, rather than $|\mathcal{X}| = \prod_{i=1}^n |\mathcal{X}_i|$. (With $n = 1$ this coincides with full-rank perturbations and $U \sim \mathrm{Gumbel}(-c + \ln Z)$.) For $n \geq 2$ the distribution of $U$ is not available analytically. One can similarly define the pairwise (or higher-order) perturbations, where independent Gumbel noise is added to each pairwise (or higher-order) potential.
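The sketch below samples $U$ by brute force on a tiny chain model (hypothetical potentials, $n = 3$ binary variables) and checks the Gumbel trick upper bound $\ln Z \leq \mathbb{E}[U]$ discussed next; in real applications a MAP solver would replace the enumeration.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
c = np.euler_gamma

# Tiny pairwise chain on n = 3 binary variables with random log-potentials.
n, k = 3, 2
theta = rng.normal(size=(n - 1, k, k))

def phi(x):
    # potential of a full configuration x = (x_1, ..., x_n)
    return sum(theta[i, x[i], x[i + 1]] for i in range(n - 1))

configs = list(product(range(k), repeat=n))
ln_Z = np.log(sum(np.exp(phi(x)) for x in configs))

# Sum-unary perturbations: one Gumbel(-c) variable per (variable, state) pair.
M = 20_000
U = np.empty(M)
for m in range(M):
    g = rng.gumbel(loc=-c, scale=1.0, size=(n, k))
    U[m] = max(phi(x) + sum(g[i, x[i]] for i in range(n)) for x in configs)

U_bar = U.mean()  # Monte Carlo estimate of the upper bound E[U] >= ln Z
```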

Unary perturbations provide the upper bound $\ln Z \leq \mathbb{E}[U]$ on the log partition function (Hazan & Jaakkola, 2012), can be used to construct a sequential sampler for the Gibbs distribution (Hazan et al., 2013), and, if the perturbations are scaled down by a factor of $n$, a lower bound on $\ln Z$ can also be recovered (Hazan et al., 2013). In this section we show that a subfamily of tricks introduced in Section 2, consisting of Fréchet and Weibull (and Exponential) tricks, is applicable in the low-rank perturbation setting, and use these tricks to derive new families of upper and lower bounds on $\ln Z$ and sequential samplers for the Gibbs distribution. Full proofs are deferred to Appendices B and C.

### 3.1 Upper Bounds on the Partition Function

The following family of upper bounds on $\ln Z$ can be derived from the Fréchet and Weibull tricks.

###### Proposition 4.

For any $\alpha \in (-1, 0) \cup (0, \infty)$, the upper bound $\ln Z \leq U(\alpha)$ holds with

$$U(\alpha) := n \frac{\ln \Gamma(1 + \alpha)}{\alpha} + nc - \frac{1}{\alpha} \ln \mathbb{E}_\gamma\!\left[e^{-\alpha U}\right].$$
###### Proof.

(Sketch.) By induction on $n$, with the induction step provided by our Clamping Lemma (Lemma 7) below. ∎

To evaluate these bounds in practice, $\mathbb{E}_\gamma[e^{-\alpha U}]$ is estimated using samples of $U$. Corollary 9 of Hazan et al. (2016) can be used to show that this expectation is finite for $\alpha \in (-1, 0) \cup (0, \infty)$, so the estimation is well-behaved.

A natural question is how these new bounds relate to the Gumbel trick upper bound $U(0) := \mathbb{E}_\gamma[U]$ of Hazan & Jaakkola (2012). The following result answers this:

###### Proposition 5.

The limit of $U(\alpha)$ as $\alpha \to 0$ exists and equals $U(0) = \mathbb{E}_\gamma[U]$, i.e. the Gumbel trick upper bound.

The question remains: when is it advantageous to use a value $\alpha \neq 0$ to obtain a tighter bound on $\ln Z$ than the Gumbel trick bound? The next result can provide guidance:

###### Proposition 6.

The function $\alpha \mapsto U(\alpha)$ is differentiable at $\alpha = 0$ and the derivative equals

$$\frac{d}{d\alpha} U(\alpha)\bigg|_{\alpha = 0} = \frac{1}{2}\left(n \frac{\pi^2}{6} - \mathrm{var}(U)\right).$$

While the variance of $U$ is generally not tractable, in practice one obtains samples of $U$ to estimate the expectation in $U(\alpha)$, and these samples can be reused to assess $\mathrm{var}(U)$. Interestingly, $\mathrm{var}(U)$ equals $n \pi^2 / 6$ for both the uniform distribution and the distribution concentrated on a single configuration, and in our empirical investigations we always observed $\mathrm{var}(U) \leq n \pi^2 / 6$. Then the derivative at $\alpha = 0$ is non-negative and Fréchet tricks ($\alpha < 0$) provide tighter bounds on $\ln Z$. However, as $\mathbb{E}_\gamma[e^{-\alpha U}]$ is estimated with samples, the question of estimator variance arises. We investigate the trade-off between tightness of the bound and the variance incurred in estimating it empirically in Section 5.3.

### 3.2 Clamping

Consider the partial sum-unary perturbation MAP values, where the values of the first $j - 1$ variables have been fixed, and only the rest are perturbed:

$$U_j(x_1, \ldots, x_{j-1}) := \max_{x_j, \ldots, x_n} \left\{\phi(x) + \sum_{i=j}^n \gamma_i(x_i)\right\}.$$

The following lemma involving the $U_j$'s serves three purposes: (I.) it provides the induction step for Proposition 4, (II.) it shows that clamping never hurts partition function estimation with Fréchet and Weibull tricks, and (III.) it will be used to show that a sequential sampler constructed in Section 3.3 below is well-defined.

###### Lemma 7 (Clamping Lemma).

For any $j \in \{1, \ldots, n\}$ and any fixed $(x_1, \ldots, x_{j-1})$, the following inequality holds with any $\alpha \in (-1, 0) \cup (0, \infty)$:

$$\sum_{x_j \in \mathcal{X}_j} \mathbb{E}_\gamma\!\left[e^{-(n - j) \ln \Gamma(1 + \alpha) - \alpha (n - j) c}\, e^{-\alpha U_{j+1}}\right]^{-1/\alpha} \leq \mathbb{E}_\gamma\!\left[e^{-(n - (j-1)) \ln \Gamma(1 + \alpha) - \alpha (n - (j-1)) c}\, e^{-\alpha U_j}\right]^{-1/\alpha}$$
###### Proof.

This follows directly from the Fréchet trick ($\alpha \in (-1, 0)$) or the Weibull trick ($\alpha > 0$), representing the Fréchet resp. Weibull random variables in terms of Gumbel random variables. See Appendix B.1 for more details. ∎

###### Corollary 8.

Clamping never hurts partition function estimation using any of the Fréchet or Weibull upper bounds $U(\alpha)$.

###### Proof.

Applying the function $\ln(\cdot)$ to both sides of the Clamping Lemma 7 with $j = 1$, the right-hand side equals the upper bound $U(\alpha)$ on $\ln Z$, while the left-hand side is the estimate of $\ln Z$ obtained after clamping variable $x_1$. ∎

This was shown previously in restricted settings (Hazan et al., 2013; Zhao et al., 2016). Similar results showing that clamping improves partition function estimation have been obtained for the mean field and TRW approximations (Weller & Domke, 2016), and in certain settings for the Bethe approximation (Weller & Jebara, 2014b) and L-Field (Zhao et al., 2016).

### 3.3 Sequential Sampling

Hazan et al. (2013) derived a sequential sampling procedure for the Gibbs distribution by exploiting the Gumbel trick upper bound on $\ln Z$. In the same spirit, one can derive sequential sampling procedures from the Fréchet and Weibull tricks, leading to an analogous family of samplers.

The resulting algorithm is well-defined if the constructed conditional probabilities are valid, i.e. sum to at most one for each $j$, which can be shown by canceling terms in the Clamping Lemma 7. We discuss correctness in Appendix B.2. As for the Gumbel sequential sampler of Hazan et al. (2013), the expected number of restarts (and hence the running time) depends only on the quality of the upper bound $U(\alpha)$, and not on the ordering of the variables.

### 3.4 Lower Bounds on the Partition Function

As in the Gumbel trick case (Hazan et al., 2013), one can derive lower bounds on $\ln Z$ by perturbing an arbitrary subset of the variables.

###### Proposition 9.

Let $\mathcal{X} = \mathcal{X}_1 \times \cdots \times \mathcal{X}_n$ be a product space and $\phi$ a potential function on $\mathcal{X}$. Let $\alpha \in (-1, 0) \cup (0, \infty)$. For any subset $S \subseteq \{1, \ldots, n\}$ of the variables we have

$$\ln Z \geq c + \frac{\ln \Gamma(1 + \alpha)}{\alpha} - \frac{1}{\alpha} \ln \mathbb{E}\!\left[e^{-\alpha \max_x \{\phi(x) + \gamma_S(x_S)\}}\right],$$

where $x_S := (x_i)_{i \in S}$ and $\gamma_S(x_S) \sim \mathrm{Gumbel}(-c)$ independently for each setting of $x_S$.

By averaging such lower bounds corresponding to singleton sets $S = \{i\}$ together, we obtain a lower bound on $\ln Z$ that involves the average-unary perturbation MAP value

$$L := \max_{x \in \mathcal{X}} \left\{\phi(x) + \frac{1}{n} \sum_{i=1}^n \gamma_i(x_i)\right\}.$$
###### Corollary 10.

For any $\alpha \in (-1, 0) \cup (0, \infty)$, we have the lower bound $\ln Z \geq L(\alpha)$, where

$$L(\alpha) := c + \frac{\ln \Gamma(1 + \alpha)}{\alpha} - \frac{1}{n \alpha} \ln \mathbb{E}\!\left[\exp(-n \alpha L)\right].$$

Again, $L(0)$ can be defined by continuity, where $L(0) = \mathbb{E}[L]$ is the Gumbel trick lower bound of Hazan et al. (2013).
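On models small enough to enumerate, both families of bounds can be estimated by brute force and checked against the exact $\ln Z$. The sketch below does this for a toy chain model; the potentials, sizes, and the choice $\alpha = 0.5$ are illustrative assumptions, and a MAP solver would replace the enumeration in practice.

```python
import numpy as np
from itertools import product
from math import lgamma

rng = np.random.default_rng(4)
c = np.euler_gamma

# Tiny pairwise chain on n = 3 binary variables with random log-potentials.
n, k = 3, 2
theta = rng.normal(size=(n - 1, k, k))

def phi(x):
    return sum(theta[i, x[i], x[i + 1]] for i in range(n - 1))

configs = list(product(range(k), repeat=n))
ln_Z = np.log(sum(np.exp(phi(x)) for x in configs))

# Sample sum-unary (U) and average-unary (L) perturbation MAP values.
M = 20_000
U, L = np.empty(M), np.empty(M)
for m in range(M):
    g = rng.gumbel(loc=-c, scale=1.0, size=(n, k))
    perturbed = [(phi(x), sum(g[i, x[i]] for i in range(n))) for x in configs]
    U[m] = max(v + p for v, p in perturbed)
    L[m] = max(v + p / n for v, p in perturbed)

def U_bound(alpha):  # Proposition 4: upper bound on ln Z
    return n * lgamma(1 + alpha) / alpha + n * c \
        - np.log(np.mean(np.exp(-alpha * U))) / alpha

def L_bound(alpha):  # Corollary 10: lower bound on ln Z
    return c + lgamma(1 + alpha) / alpha \
        - np.log(np.mean(np.exp(-n * alpha * L))) / (n * alpha)

alpha = 0.5
# Up to Monte Carlo error: L_bound(alpha) <= ln_Z <= U_bound(alpha).
```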

## 4 Advantages of the Gumbel Trick

We have seen how the Gumbel trick can be embedded into a continuous family of tricks, consisting of Fréchet, Exponential, and Weibull tricks. We showed that the new tricks can provide more efficient estimators of the partition function in the full-rank perturbation setting (Section 2), and in the low-rank perturbation setting lead to sequential samplers and new bounds on $\ln Z$, which can also be more efficient, as we investigate in Section 5.3. To balance the discussion of merits of different tricks, in this section we briefly highlight advantages of the Gumbel trick that stem from its simpler analytical form.

First, by consulting Table 1 we see that the function $g(T) = -\ln T - c$ has the property that the variance of the resulting estimator (of $\ln Z$) does not depend on the value of $Z$; the logarithm is a variance stabilizing transformation for the exponential distribution.

Second, exploiting the fact that the logarithm function leads to additive perturbations, Maji et al. (2014) showed that the entropy of $x^*$, the configuration with maximum potential after sum-unary perturbation in the sense of Definition 3, can be bounded as $H(x^*) \leq B(p)$ in terms of expected perturbation values. We extend this result to show how the errors of bounding $\ln Z$, sampling, and entropy estimation are related:

###### Proposition 11.

Writing $p$ for the Gibbs distribution and $B(p)$ for the entropy bound of Maji et al. (2014), we have

$$\underbrace{\left(U(0) - \ln Z\right)}_{\text{error in } \ln Z \text{ bound}} + \underbrace{\mathrm{KL}(x^* \,\|\, p)}_{\text{sampling error}} = \underbrace{B(p) - H(x^*)}_{\text{error in entropy estimation}}.$$

Third, the additive character of the Gumbel perturbations can also be used to derive a new result relating the error of the lower bound $L(0)$ to the error incurred by sampling $x^{**}$, the configuration achieving the maximum average-unary perturbation value $L$, instead of sampling from the Gibbs distribution $p$:

###### Proposition 12.

Writing $p$ for the Gibbs distribution,

$$\underbrace{\ln Z - L(0)}_{\text{error in } \ln Z \text{ bound}} \geq \underbrace{\mathrm{KL}(x^{**} \,\|\, p)}_{\text{sampling error}} \geq 0.$$
###### Remark.

While we knew from Hazan et al. (2013) that $L(0) \leq \ln Z$, this is a stronger result showing that the size of the gap is an upper bound on the KL divergence between the approximate sampling distribution of $x^{**}$ and the Gibbs distribution $p$.

Proofs of the new results appear in Appendix B.3 and C.2.

Fourth, viewed as a function of the Gumbel perturbations $\{\gamma_i(x_i)\}$, the random variable $U$ has a bounded gradient, allowing earlier measure concentration results to apply (Orabona et al., 2014; Hazan et al., 2016). Proving similar measure concentration results for the expectations appearing in $U(\alpha)$ with $\alpha \neq 0$ may be more challenging.

## 5 Experiments

We conducted experiments with the following aims:

1. To show that the higher efficiency of the Exponential trick in the full-rank perturbation setting is useful in practice, we compared it to the Gumbel trick in A* sampling (Maddison et al., 2014) (Section 5.1) and in the large-scale discrete sampling setting of Chen & Ghahramani (2016) (Section 5.2).

2. To show that non-zero values of $\alpha$ can lead to better estimators of $\ln Z$ in the low-rank perturbation setting as well, we compare the Fréchet and Weibull trick bounds to the Gumbel trick bound on a common discrete graphical model with different coupling strengths; see Section 5.3.

### 5.1 A* Sampling

A* sampling (Maddison et al., 2014) is a sampling algorithm for continuous distributions that perturbs the log-unnormalized density with a continuous generalization of the Gumbel trick, called the Gumbel process, and uses a variant of A* search to find the location of the maximum of the perturbed density. Returning this location yields an exact sample from the original distribution, as in the discrete Gumbel trick. Moreover, the corresponding maximum value also has the $\mathrm{Gumbel}(-c + \ln Z)$ distribution (Maddison et al., 2014). Our analysis in Section 2.3 tells us that the Exponential trick yields an estimator with lower MSE than the Gumbel trick; we briefly verified this on the Robust Bayesian Regression experiment of Maddison et al. (2014). We constructed estimators of $\ln Z$ from the Gumbel and Exponential tricks (debiased version, see Section 2.3.2), and assessed their variances by constructing each estimator many times and looking at the sample variance. Figure 2(a) shows that the Exponential trick requires up to 40% fewer samples to reach a given MSE.

### 5.2 Scalable Partition Function Estimation

Chen & Ghahramani (2016) considered sampling from a discrete distribution whose potential decomposes as a sum of factors, in the regime where the number of factors is large relative to the sample space size $|\mathcal{X}|$. Computing i.i.d. Gumbel perturbations $\gamma(x)$ is then relatively cheap compared to evaluating all potentials $\phi(x)$. Chen & Ghahramani (2016) observed that each (perturbed) potential can be estimated by subsampling the factors, and potentials that appear unlikely to yield the MAP value can be pruned off from the search early on. The authors formalized the problem as a multi-armed bandit problem with a finite reward population and derived approximate algorithms for efficiently finding the maximum perturbed potential with a probabilistic guarantee.

While Chen & Ghahramani (2016) considered sampling, by modifying their procedure to return the value of the maximum perturbed potential rather than the argmax (cf. equations (1') and (2')), we can estimate the partition function instead. However, the approximate algorithm only guarantees finding the MAP configuration with a prescribed probability. Figure 2(b) shows the results of running the Racing-Normal algorithm of Chen & Ghahramani (2016) on the synthetic dataset considered by the authors with the “very hard” noise setting. For low error bounds the Exponential trick remained close to optimal, but for a larger error bound the Weibull trick interpolation between the Gumbel and Exponential tricks proved useful to provide an estimator with lower MSE.

### 5.3 Low-rank Perturbation Bounds on lnZ

Hazan & Jaakkola (2012) evaluated the tightness of the Gumbel trick upper bound $U(0)$ on binary spin glass models. We show that one can obtain more accurate estimates of $\ln Z$ on such models by choosing $\alpha \neq 0$. To account for the fact that in practice the expectation in $U(\alpha)$ is replaced with a sample average, we treat $U(\alpha)$ as an estimator of $\ln Z$ with asymptotic bias equal to the bound gap $U(\alpha) - \ln Z$, and estimate its MSE.

Figure 4 shows the MSEs of $U(\alpha)$ as estimators of $\ln Z$ on binary pairwise grid models with unary potentials sampled uniformly at random and pairwise potentials sampled uniformly from a non-negative interval (attractive models) or from an interval symmetric around zero (mixed models), for varying coupling strengths. We replaced the expectations in the $U(\alpha)$'s with sample averages, using libDAI (Mooij, 2010) to solve the MAP problems yielding these samples. We constructed each estimator many times to assess its variance.

## 6 Discussion

By casting partition function evaluation as a parameter estimation problem for the exponential distribution, we derived a family of methods of which the Gumbel trick is a special case. These methods can be equivalently seen as (1) perturbing models using different distributions, or as (2) averaging standard Gumbel perturbations in different spaces, allowing implementations with little additional cost.

We showed that in the full-rank perturbation setting, the new Exponential trick provides an estimator with lower MSE, or instead allows using up to 40% fewer samples than the Gumbel trick estimator to reach the same MSE.

In the low-rank perturbation setting, we used our Fréchet, Exponential and Weibull tricks to derive new bounds on $\ln Z$ and sequential samplers for the Gibbs distribution, and showed that these can also behave better than the corresponding Gumbel trick results. However, the optimal trick to use (as specified by $\alpha$) depends on the model, the sample size, and the MAP solver used (if approximate). Since in practice the dominant computational cost is incurred by solving repeated instances of the MAP problem, one can try and assess several values of $\alpha$ on the problem at hand. That said, we believe that investigating when different tricks yield better results is an interesting avenue for future work.

Finally, we balanced the discussion by pointing out that the Gumbel trick has a simpler analytical form which can be exploited to derive more interesting theoretical statements in the low-rank perturbation setting. Beyond existing results, we derived new connections between errors of different procedures using low-rank Gumbel perturbations.

## Acknowledgements

The authors thank Tamir Hazan for helpful discussions, and Mark Rowland, Maria Lomeli, and the anonymous reviewers for helpful comments. AW acknowledges support by the Alan Turing Institute under EPSRC grant EP/N510129/1, and by the Leverhulme Trust via the CFI.

## References

• Belanger et al. (2013) Belanger, D., Sheldon, D., and McCallum, A. Marginal inference in MRFs using Frank-Wolfe. In NIPS Workshop on Greedy Optimization, Frank-Wolfe and Friends, 2013.
• Bertasius et al. (2017) Bertasius, G., Liu, Q., Torresani, L., and Shi, J. Local Perturb-and-MAP for Structured Prediction. In AISTATS, 2017.
• Boykov et al. (2001) Boykov, Y., Veksler, O., and Zabih, R. Fast approximate energy minimization via graph cuts. IEEE Transactions on pattern analysis and machine intelligence, 23(11):1222–1239, 2001.
• Casella & Berger (2002) Casella, G. and Berger, R. Statistical inference, volume 2. Duxbury Pacific Grove, CA, 2002.
• Chen & Ghahramani (2016) Chen, Y. and Ghahramani, Z. Scalable discrete sampling as a multi-armed bandit problem. In ICML, 2016.
• Cox & Oakes (1984) Cox, D. and Oakes, D. Analysis of survival data, volume 21. CRC Press, 1984.
• Darbon (2009) Darbon, J. Global optimization for first order Markov random fields with submodular priors. Discrete Applied Mathematics, 157(16):3412 – 3423, 2009.
• Ermon et al. (2013) Ermon, S., Sabharwal, A., and Selman, B. Taming the curse of dimensionality: Discrete integration by hashing and optimization. In ICML, 2013.
• Gurland (1956) Gurland, J. An inequality satisfied by the Gamma function. Scandinavian Actuarial Journal, 1956(2):171–172, 1956.
• Hazan & Jaakkola (2012) Hazan, T. and Jaakkola, T. On the partition function and random maximum a-posteriori perturbations. In ICML, 2012.
• Hazan et al. (2013) Hazan, T., Maji, S., and Jaakkola, T. On sampling from the Gibbs distribution with random maximum a-posteriori perturbations. In NIPS. 2013.
• Hazan et al. (2016) Hazan, T., Orabona, F., Sarwate, A., Maji, S., and Jaakkola, T. High dimensional inference with random maximum a-posteriori perturbations. CoRR, abs/1602.03571, 2016.
• Kim et al. (2016) Kim, C., Sabharwal, A., and Ermon, S. Exact sampling with integer linear programs and random perturbations. In AAAI, pp. 3248–3254, 2016.
• Kolmogorov (2006) Kolmogorov, V. Convergent tree-reweighted message passing for energy minimization. IEEE transactions on pattern analysis and machine intelligence, 28(10):1568–1583, 2006.
• Krishnan et al. (2015) Krishnan, R., Lacoste-Julien, S., and Sontag, D. Barrier Frank-Wolfe for marginal inference. In NIPS. 2015.
• Lauritzen (1996) Lauritzen, S. Graphical models. Oxford Statistical Science Series. Clarendon Press, Oxford, 1996. Reprinted 1998.
• Maddison (2016) Maddison, C. A Poisson process model for Monte Carlo. In Hazan, T., Papandreou, G., and Tarlow, D. (eds.), Perturbation, Optimization, and Statistics. MIT Press, 2016.
• Maddison et al. (2014) Maddison, C., Tarlow, D., and Minka, T. A* sampling. In NIPS. 2014.
• Maji et al. (2014) Maji, S., Hazan, T., and Jaakkola, T. Active boundary annotation using random MAP perturbations. In AISTATS, 2014.
• Mooij (2010) Mooij, J. libDAI: A free and open source C++ library for discrete approximate inference in graphical models. Journal of Machine Learning Research, 11, 2010.
• Orabona et al. (2014) Orabona, F., Hazan, T., Sarwate, A., and Jaakkola, T. On measure concentration of random maximum a-posteriori perturbations. In ICML, 2014.
• Papandreou & Yuille (2010) Papandreou, G. and Yuille, A. Gaussian sampling by local perturbations. In NIPS. 2010.
• Papandreou & Yuille (2011) Papandreou, G. and Yuille, A. Perturb-and-MAP random fields: Using discrete optimization to learn and sample from energy models. In Proc. IEEE Int. Conf. on Computer Vision (ICCV), pp. 193–200, November 2011.
• Tarlow et al. (2012) Tarlow, D., Adams, R., and Zemel, R. Randomized optimum models for structured prediction. In AISTATS, 2012.
• Wainwright & Jordan (2008) Wainwright, M. and Jordan, M. Graphical Models, Exponential Families, and Variational Inference. Found. Trends Mach. Learn., 1(1-2):1–305, January 2008.
• Weller & Domke (2016) Weller, A. and Domke, J. Clamping improves TRW and mean field approximations. In AISTATS, 2016.
• Weller & Jebara (2014a) Weller, A. and Jebara, T. Approximating the Bethe partition function. In UAI, 2014a.
• Weller & Jebara (2014b) Weller, A. and Jebara, T. Clamping variables and approximate inference. In NIPS, 2014b.
• Wright & Nocedal (1999) Wright, S. and Nocedal, J. Numerical optimization. Springer Science, 35:67–68, 1999.
• Zhao et al. (2016) Zhao, J., Djolonga, J., Tschiatschek, S., and Krause, A. Variable clamping for optimization-based inference. In NIPS Workshop on Advances in Approximate Bayesian Inference, December 2016.

## APPENDIX: Lost Relatives of the Gumbel Trick

Here we provide proofs for the results stated in the main text, together with additional supporting lemmas required for these proofs.

## Appendix A Comparison of Gumbel and Exponential tricks

In Section 2.3.1 we analyzed the asymptotic efficiency of different estimators of $Z$ by measuring their asymptotic variance. (As all our estimators in the full-rank perturbation setting are consistent, their bias vanishes in the limit of infinite data, and so this asymptotic variance equals the asymptotic MSE.) In the non-asymptotic regime, where an estimator is constructed from a finite set of samples, we can analyze both the variance and the bias of the estimator. While in most cases these cannot be obtained analytically and we must resort to empirical evaluation, for the estimators stemming from the Gumbel and Exponential tricks an analytical treatment turns out to be possible using standard methods.

### A.1 Estimating Z

##### Gumbel trick

The Gumbel trick yields an unbiased estimator of $\ln Z$, and we can turn it into a consistent estimator of $Z$ by exponentiating it:

$$\hat{Z} := \exp\left(\frac{1}{M}\sum_{m=1}^{M} X_m\right) \quad\text{where}\quad X_1, \dots, X_M \overset{\text{i.i.d.}}{\sim} \mathrm{Gumbel}(-c + \ln Z).$$

Recalling that the moment generating function of a $\mathrm{Gumbel}(\mu)$ distribution is $\mathbb{E}[e^{tX}] = \Gamma(1-t)\,e^{\mu t}$ for $t < 1$, we can obtain $\mathbb{E}[\hat{Z}]$ and $\mathbb{E}[\hat{Z}^2]$ by using independence of the samples:

$$\begin{aligned}
\mathbb{E}[\hat{Z}] &= \prod_{m=1}^{M} \mathbb{E}[e^{X_m/M}] = \left(\Gamma(1-1/M)\,e^{(\ln Z - c)/M}\right)^M = \Gamma(1-1/M)^M e^{-c} Z,\\
\mathbb{E}[\hat{Z}^2] &= \prod_{m=1}^{M} \mathbb{E}[e^{2X_m/M}] = \left(\Gamma(1-2/M)\,e^{2(\ln Z - c)/M}\right)^M = \Gamma(1-2/M)^M e^{-2c} Z^2.
\end{aligned}$$

Therefore the squared bias, variance and MSE of the estimator are, respectively:

$$\begin{aligned}
\mathrm{bias}(\hat{Z})^2 &= (\mathbb{E}[\hat{Z}] - Z)^2 = Z^2\left(\Gamma(1-1/M)^M e^{-c} - 1\right)^2,\\
\mathrm{var}(\hat{Z}) &= \mathbb{E}[\hat{Z}^2] - \mathbb{E}[\hat{Z}]^2 = Z^2\left(\Gamma(1-2/M)^M e^{-2c} - \Gamma(1-1/M)^{2M} e^{-2c}\right),\\
\mathrm{MSE}(\hat{Z}) &= \mathrm{bias}(\hat{Z})^2 + \mathrm{var}(\hat{Z}) = Z^2\left(\Gamma(1-2/M)^M e^{-2c} - 2\,\Gamma(1-1/M)^M e^{-c} + 1\right).
\end{aligned}$$

These formulas hold for $M \geq 3$, where the moment generating functions are defined. For $M = 1$ the estimator has infinite bias (and infinite variance), and for $M = 2$ it has infinite variance. Figure 1 (left) shows the functional dependence of $\mathrm{MSE}(\hat{Z})$ on the number of samples $M$, in units of $Z^2$.
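The closed-form MSE is easy to check by simulation; a minimal sketch ($Z$ and $M$ chosen arbitrarily):

```python
import numpy as np
from math import gamma, exp, log

rng = np.random.default_rng(1)
c = np.euler_gamma
Z, M = 2.0, 10  # M >= 3 so both Gamma factors below are finite

# Closed-form MSE(Z_hat) / Z^2 from the formula above
mse_formula = gamma(1 - 2 / M) ** M * exp(-2 * c) \
    - 2 * gamma(1 - 1 / M) ** M * exp(-c) + 1

# Empirical MSE over many replications of the M-sample estimator
reps = 200000
X = rng.gumbel(loc=-c + log(Z), size=(reps, M))
Z_hat = np.exp(X.mean(axis=1))
mse_empirical = ((Z_hat - Z) ** 2).mean() / Z ** 2
```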

##### Exponential trick

The Exponential trick yields an unbiased estimator of $1/Z$, and we can turn it into a consistent estimator of $Z$ by inverting it:

$$\hat{Z} := \left(\frac{1}{M}\sum_{m=1}^{M} X_m\right)^{-1} \quad\text{where}\quad X_1, \dots, X_M \overset{\text{i.i.d.}}{\sim} \mathrm{Exp}(Z).$$

As $X_1, \dots, X_M$ are independent and exponentially distributed with identical rates $Z$, their sum follows the Gamma distribution with shape $M$ and rate $Z$. Therefore the estimator can be written as $\hat{Z} = MT$, where $T = (\sum_{m=1}^{M} X_m)^{-1} \sim \text{Inv-Gamma}(M, Z)$. Recalling the mean and variance of the Inverse-Gamma distribution, we obtain:

$$\begin{aligned}
\mathrm{bias}(\hat{Z})^2 &= (\mathbb{E}[\hat{Z}] - Z)^2 = Z^2\left(\frac{M}{M-1} - 1\right)^2 = Z^2\frac{1}{(M-1)^2},\\
\mathrm{var}(\hat{Z}) &= Z^2\frac{M^2}{(M-1)^2(M-2)},\\
\mathrm{MSE}(\hat{Z}) &= \mathrm{bias}(\hat{Z})^2 + \mathrm{var}(\hat{Z}) = Z^2\frac{M - 2 + M^2}{(M-1)^2(M-2)} = Z^2\frac{M+2}{(M-1)(M-2)}.
\end{aligned}$$

Again these formulas hold for $M \geq 3$, where the relevant expectations are defined: for $M = 1$ the estimator has infinite bias, and for $M = 2$ it has infinite variance. Figure 1 (left) shows the functional dependence of $\mathrm{MSE}(\hat{Z})$ on the number of samples $M$, in units of $Z^2$. By inspecting the curves we observe that the Gumbel trick estimator requires roughly 45% more samples to yield the same MSE as the Exponential trick estimator.
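The sample-size comparison can be read off the two closed-form expressions directly; a small sketch (the target sample size is arbitrary, and both formulas assume $M \geq 3$):

```python
from math import gamma, exp
import numpy as np

c = np.euler_gamma

def mse_gumbel(M):
    # MSE(Z_hat) / Z^2 for the Gumbel trick estimator, valid for M >= 3
    return gamma(1 - 2 / M) ** M * exp(-2 * c) \
        - 2 * gamma(1 - 1 / M) ** M * exp(-c) + 1

def mse_exp(M):
    # MSE(Z_hat) / Z^2 for the Exponential trick estimator, M >= 3
    return (M + 2) / ((M - 1) * (M - 2))

# Smallest Gumbel-trick sample size matching the Exponential trick at M = 10
target = mse_exp(10)
M_gumbel = next(M for M in range(3, 1000) if mse_gumbel(M) <= target)
```

Repeating this for a range of target sample sizes traces out the relative sample-size requirement of the two tricks.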

### A.2 Estimating lnZ

A similar analysis can be performed for estimating $\ln Z$ rather than $Z$. In that case the Gumbel trick estimator of $\ln Z$ is unbiased and has variance (and thus MSE) equal to $\pi^2/(6M)$. On the other hand, the Exponential trick estimator is

$$\widehat{\ln Z} = -\ln\left(\frac{1}{M}\sum_{m=1}^{M} X_m\right) \quad\text{where}\quad X_1, \dots, X_M \overset{\text{i.i.d.}}{\sim} \mathrm{Exp}(Z).$$

Again $\sum_{m=1}^{M} X_m \sim \mathrm{Gamma}(M, Z)$, and by reference to properties of the Gamma distribution,

$$\begin{aligned}
\mathrm{bias}(\widehat{\ln Z})^2 &= (\mathbb{E}[\widehat{\ln Z}] - \ln Z)^2 = (\ln M - (\psi(M) - \ln Z) - \ln Z)^2 = (\ln M - \psi(M))^2,\\
\mathrm{var}(\widehat{\ln Z}) &= \psi_1(M),\\
\mathrm{MSE}(\widehat{\ln Z}) &= \mathrm{bias}(\widehat{\ln Z})^2 + \mathrm{var}(\widehat{\ln Z}) = (\ln M - \psi(M))^2 + \psi_1(M),
\end{aligned}$$

where $\psi$ is the digamma function and $\psi_1$ is the trigamma function. Note that the estimator can be debiased by subtracting its bias $\ln M - \psi(M)$. Figure 1 (right) compares the MSE of the Gumbel and Exponential trick estimators of $\ln Z$. We observe that the Gumbel trick estimator performs better only for small sample sizes $M$, and even in that case the Exponential trick estimator is better when debiased.
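The debiasing step is simple to check numerically; for integer $M$ the digamma and trigamma values have elementary closed forms, so no special-function library is needed (sketch with arbitrary $Z$ and $M$):

```python
import numpy as np

rng = np.random.default_rng(2)
c = np.euler_gamma
Z, M = 3.0, 5

# For integer M: psi(M) = -c + H_{M-1},  psi_1(M) = pi^2/6 - sum_{k<M} 1/k^2
psi = -c + sum(1.0 / k for k in range(1, M))
psi1 = np.pi ** 2 / 6 - sum(1.0 / k ** 2 for k in range(1, M))

bias = np.log(M) - psi         # bias of the Exponential trick lnZ estimator
mse = bias ** 2 + psi1         # its MSE; note it does not depend on Z

reps = 100000
X = rng.exponential(scale=1.0 / Z, size=(reps, M))
lnZ_hat = -np.log(X.mean(axis=1))      # Exponential trick estimator of ln Z
lnZ_debiased = lnZ_hat - bias          # subtract the known bias
```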

## Appendix B Sum-unary perturbations

Recall that sum-unary perturbations refer to the setting where each variable’s unary potentials are perturbed with Gumbel noise, and the perturbed potential of a configuration sums the perturbations from all variables (see Definition 3 in the main text). Using sum-unary perturbations we can derive a family of upper bounds on the log partition function (Proposition 4) and construct sequential samplers for the Gibbs distribution (Algorithm 1). Here we provide proofs for the related results stated in Sections 3.1 and 3.2.

##### Notation

We will write $U$ for $\max_{x}\{\phi(x) + \sum_{i=1}^{n} \gamma_i(x_i)\}$, where $\{\gamma_i(x_i) \mid i \in \{1, \dots, n\},\ x_i \in \mathcal{X}_i\}$ are i.i.d. $\mathrm{Gumbel}(-c)$ random variables, when we find this increases clarity of our exposition.

###### Lemma 13 (Weibull and Fréchet tricks).

For any finite set $\mathcal{Y}$ and any function $h : \mathcal{Y} \to (0, \infty)$, we have

$$\begin{aligned}
\mathrm{pow}_{-\alpha}\sum_{y\in\mathcal{Y}}\mathrm{pow}_{-1/\alpha}\,h(y) &= \mathbb{E}_W\left[\min_y\left\{h(y)\frac{W(y)}{\Gamma(1+\alpha)}\right\}\right] &&\text{where } \{W(y)\}_{y\in\mathcal{Y}} \overset{\text{i.i.d.}}{\sim} \mathrm{Weibull}(1, \alpha^{-1}) \text{ for } \alpha \in (0, \infty),\\
\mathrm{pow}_{-\alpha}\sum_{y\in\mathcal{Y}}\mathrm{pow}_{-1/\alpha}\,h(y) &= \mathbb{E}_F\left[\max_y\left\{h(y)\frac{F(y)}{\Gamma(1+\alpha)}\right\}\right] &&\text{where } \{F(y)\}_{y\in\mathcal{Y}} \overset{\text{i.i.d.}}{\sim} \mathrm{Fréchet}(1, -\alpha^{-1}) \text{ for } \alpha \in (-1, 0).
\end{aligned}$$
###### Proof.

This follows from setting up competing exponential clocks with rates $\{h(y)^{-1/\alpha}\}_{y\in\mathcal{Y}}$ and then applying the function $\mathrm{pow}_\alpha$ as in Example 1 for the case of the Weibull trick. The case of the Fréchet trick is similar, except that $\mathrm{pow}_\alpha$ is strictly decreasing for $\alpha < 0$, hence the maximization in place of the minimization. ∎
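The Weibull half of the identity is straightforward to verify by Monte Carlo; a minimal sketch with arbitrary values of $h(y)$ (the Fréchet case is analogous with maxima):

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(3)
alpha = 0.5
h = np.array([1.0, 2.0, 4.0])        # toy values of h(y) (hypothetical)

# Left-hand side: pow_{-alpha} of the sum of pow_{-1/alpha} h(y)
lhs = (h ** (-1.0 / alpha)).sum() ** (-alpha)

# Right-hand side: expectation of the Weibull-perturbed minimum;
# numpy's weibull(a) draws from the standard Weibull with shape a = 1/alpha
M = 400000
W = rng.weibull(1.0 / alpha, size=(M, h.size))
rhs = (h * W / gamma(1 + alpha)).min(axis=1).mean()
```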

### b.1 Upper bounds on the partition function

###### Proposition (4.)

For any $\alpha \in (-1, 0) \cup (0, \infty)$, the upper bound $\ln Z \leq U(\alpha)$ holds with

$$U(\alpha) := n\frac{\ln\Gamma(1+\alpha)}{\alpha} + nc - \frac{1}{\alpha}\ln\mathbb{E}_\gamma\!\left[e^{-\alpha U}\right].$$
###### Proof.

We show the result for $\alpha \in (0, \infty)$ using the Weibull trick; the case of $\alpha \in (-1, 0)$ can be proved similarly using the Fréchet trick. The idea is to prove by induction on $n$ that $Z^{-\alpha} \geq e^{-\alpha U(\alpha)}$, so that the claimed result follows by applying the monotonically decreasing function $\mathrm{pow}_{-1/\alpha}$.

The base case $n = 1$ is the Clamping Lemma 7 below. Now assume the claim holds for $n - 1$ variables, and for $x_1 \in \mathcal{X}_1$ define

$$U_{n-1}(\alpha, x_1) := (n-1)\frac{\ln\Gamma(1+\alpha)}{\alpha} + (n-1)c - \frac{1}{\alpha}\ln\mathbb{E}_\gamma\!\left[\exp\left(-\alpha\max_{x_2,\dots,x_n}\Bigl\{\phi(x) + \sum_{i=2}^{n}\gamma_i(x_i)\Bigr\}\right)\right].$$

With this definition, the Clamping Lemma with $j = 1$ states that $\sum_{x_1\in\mathcal{X}_1}\mathrm{pow}_{-1/\alpha}\,e^{-\alpha U_{n-1}(\alpha, x_1)} \leq \mathrm{pow}_{-1/\alpha}\,e^{-\alpha U(\alpha)}$, so:

$$\begin{aligned}
Z^{-\alpha} &\geq \mathrm{pow}_{-\alpha}\sum_{x_1\in\mathcal{X}_1}\mathrm{pow}_{-1/\alpha}\,e^{-\alpha U_{n-1}(\alpha, x_1)} &&\text{[inductive hypothesis]}\\
&\geq \mathrm{pow}_{-\alpha}\,\mathrm{pow}_{-1/\alpha}\,e^{-\alpha U(\alpha)} &&\text{[Clamping Lemma]}\\
&= e^{-\alpha U(\alpha)},
\end{aligned}$$

as required to complete the inductive step. ∎
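Proposition 4 can be sanity-checked by Monte Carlo on a tiny model. In the sketch below the two-variable potentials are hypothetical, and the expectation in $U(\alpha)$ is replaced by a sample average, as in the experiments of Section 5.3:

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(4)
c = np.euler_gamma

# Tiny strongly coupled 2-variable model (hypothetical potentials)
phi = np.array([[2.0, 0.0], [0.0, 2.0]])      # phi[x1, x2]
lnZ = np.log(np.exp(phi).sum())

n, alpha, M = 2, 1.0, 200000
# Sum-unary perturbations: i.i.d. Gumbel(-c) noise per variable and state
g1 = rng.gumbel(loc=-c, size=(M, 2))          # gamma_1(x1)
g2 = rng.gumbel(loc=-c, size=(M, 2))          # gamma_2(x2)
U = (phi + g1[:, :, None] + g2[:, None, :]).reshape(M, -1).max(axis=1)

# Monte Carlo estimate of the Proposition 4 upper bound U(alpha) >= ln Z
U_alpha = n * lgamma(1 + alpha) / alpha + n * c \
    - np.log(np.exp(-alpha * U).mean()) / alpha
```

For this coupled model the bound is loose, as expected for low-rank perturbations; sweeping `alpha` traces out the bound family.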

###### Proposition (5.)

The limit of $U(\alpha)$ as $\alpha \to 0$ exists and equals $U(0) := \mathbb{E}_\gamma[U]$, i.e. the Gumbel trick upper bound.

###### Proof.

Recall that $U(\alpha) = n\frac{\ln\Gamma(1+\alpha)}{\alpha} + nc - \frac{1}{\alpha}\ln\mathbb{E}_\gamma[e^{-\alpha U}]$. The first term tends to $n\psi(1) = -nc$ as $\alpha \to 0$ by L'Hôpital's rule, where $\psi$ is the digamma function. The second term is constant in $\alpha$. In the last term, $\mathbb{E}_\gamma[e^{-\alpha U}]$ is the moment generating function of $U$ evaluated at $-\alpha$, and as such its derivative at $\alpha = 0$ exists and equals the negative of the mean of $U$. Hence by L'Hôpital's rule,

$$-\lim_{\alpha\to 0}\frac{1}{\alpha}\ln\mathbb{E}\!\left[e^{-\alpha U}\right] = -\lim_{\alpha\to 0}\frac{-\mathbb{E}\!\left[U e^{-\alpha U}\right]}{\mathbb{E}\!\left[e^{-\alpha U}\right]} = \mathbb{E}[U] = U(0).$$

The claimed result then follows by the Algebra of Limits, as the contributions of the first two terms cancel. ∎

###### Proposition (6.)

The function $U(\alpha)$ is differentiable at $\alpha = 0$ and the derivative equals

$$\left.\frac{d}{d\alpha}U(\alpha)\right|_{\alpha=0} = \frac{n\pi^2}{12} - \frac{\mathrm{var}(U)}{2}.$$
###### Proof.

First we show that $U(\alpha)$ is differentiable on $(-1, 0) \cup (0, \infty)$, and that the limit of its derivative as $\alpha \to 0$ exists and equals $\frac{n\pi^2}{12} - \frac{\mathrm{var}(U)}{2}$.

The first term of $U(\alpha)$ is