Learning Approximately Objective Priors

04/04/2017 ∙ by Eric Nalisnick, et al. ∙ University of California, Irvine

Informative Bayesian priors are often difficult to elicit, and when this is the case, modelers usually turn to noninformative or objective priors. However, objective priors such as the Jeffreys and reference priors are not tractable to derive for many models of interest. We address this issue by proposing techniques for learning reference prior approximations: we select a parametric family and optimize a black-box lower bound on the reference prior objective to find the member of the family that serves as a good approximation. We experimentally demonstrate the method's effectiveness by recovering Jeffreys priors and learning the Variational Autoencoder's reference prior.




1 Introduction

Bayesian inference is distinguished by its ability to incorporate existing knowledge in a principled way. This knowledge is encoded by the prior distribution, and its specification is widely regarded as “the most important step” in the Bayesian approach given that it can “drastically alter the subsequent inference” [24]. The choice is especially crucial in settings with few observations, as the data cannot overwhelm a harmful prior. Even in situations where there are enough observations to make the prior’s influence on the posterior benign, the marginal likelihood is still sensitive to the choice, possibly resulting in the selection of an inferior model.

The best case scenario for specifying a prior is when—unsurprisingly—there is existing information about the phenomenon we wish to model. For example, choosing a good prior for parameters in a model of galaxy formation can usually be done by consulting an astrophysicist or the relevant research literature.

In many practical situations, however, there are no available means for obtaining useful prior information. For example, in high-dimensional problems the parameter space is often inherently unintuitive. The usual way to proceed is to pick a noninformative prior that is flat and/or objective. By flat prior we mean a distribution that does not have any substantial concentration of its mass; maximum entropy priors [9] often exhibit this characteristic. An objective prior is one that has some formal invariance property. The two best known examples of objective priors are Jeffreys [10] and reference [3] priors, which are both invariant to model reparametrization. Some priors are both objective and flat: the Jeffreys prior for the Gaussian mean is the (improper) uniform distribution. However, just because a prior is relatively flat does not mean it is objective. For example, the Bernoulli's Jeffreys prior is the arcsine distribution, which, having vertical asymptotes at 0 and 1, is conspicuously not flat.
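As a concrete aside (not from the paper), the Bernoulli Jeffreys prior can be checked numerically: taking the square root of the Fisher information recovers the arcsine (Beta(1/2, 1/2)) density up to a normalizing constant. The function names below are our own.

```python
import math

def fisher_info(theta):
    # Fisher information of a single Bernoulli(theta) observation:
    # F(theta) = 1 / (theta * (1 - theta)).
    return 1.0 / (theta * (1.0 - theta))

def jeffreys_unnorm(theta):
    # Jeffreys prior: p(theta) ∝ sqrt(F(theta)) = theta^{-1/2} (1 - theta)^{-1/2}.
    return math.sqrt(fisher_info(theta))

def arcsine_pdf(theta):
    # The normalized arcsine density, i.e. Beta(1/2, 1/2).
    return 1.0 / (math.pi * math.sqrt(theta * (1.0 - theta)))

# The two agree up to the normalizing constant pi, and the density blows up
# near the boundaries -- conspicuously not flat.
ratios = [jeffreys_unnorm(t) / arcsine_pdf(t) for t in (0.05, 0.5, 0.95)]
print(ratios)  # each entry equals pi
```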

Since there are no guarantees that what looks to be a flat prior might not harbor hidden subjectivity, objective priors seem to be the better ‘default’ choices. However, the mathematical rigor that makes objective priors attractive also makes their use problematic: their derivation is difficult for all but the simplest models. To be specific, solving the calculus of variations problem for a reference prior requires, among other properties, an analytical form for the posterior distribution, which is rarely available.

In this paper we broaden the potential use of objective priors by describing methods for learning high-fidelity reference prior approximations. The proposed method is akin to black-box (posterior) variational inference [20]: we posit a parametric family of distributions and perform derivation-free optimization to find the member of the family closest to the true reference prior. Doing so would be useful, for example, if one wishes to have an objective prior that preserves model conjugacy (reference priors are often improper distributions). The modeler could employ the techniques proposed below to find the conjugate prior's parameter setting that makes it closest to objective. Moreover, these methods learn a reference prior for a given model independently of any data source (except when the model is for a conditional distribution, i.e. p(y|x, θ); in this case, samples of x are necessary to learn the approximate reference prior), which means that obtaining a reference prior for a particular model needs to be done only once.

In our experimental results we demonstrate that the proposed framework recovers the Jeffreys prior better than existing numerical methods. We also analyze the optimization objective, providing intuition behind a number of hyper-parameter choices. And lastly, we learn a reference prior for a Variational Autoencoder [12]. In an interesting case study, we see that the Variational Autoencoder's reference prior differs markedly from the standard Normal distribution that is commonly used as the prior on the latent space.

2 Background and Related Work

We begin by defining reference priors, highlighting their connection to the Jeffreys prior, and summarizing the related work on computing intractable reference priors. We use the following notation throughout the paper. Define the likelihood to be p(D|θ), where θ are the model parameters and D is the dataset, which is comprised of N i.i.d. observations {x_1, …, x_N}. p(θ) denotes the prior, p(θ|D) the posterior, and p(D) the marginal likelihood (or model evidence). When we refer to the 'likelihood function' we mean the functional form of the data model, p(·|θ). We write expectations with respect to the dataset likelihood, E_{p(D|θ)}[·], but because of D's i.i.d. assumption, these can be written equivalently in terms of each data instance; for example: E_{p(D|θ)}[log p(D|θ)] = N E_{p(x|θ)}[log p(x|θ)].

2.1 Reference Priors

Reference priors [1, 4] (RPs) are objective Bayesian prior distributions derived for a given likelihood function by finding the prior that maximizes the data's influence on the posterior distribution. Equivalently, the prior's influence on the posterior is minimized, which is precisely the behavior we desire if we wish to represent a state of ignorance about the model parameters. The RP's data-driven nature yields 'frequentist-esque' posteriors: for large sample sizes, the (1 − α) credible interval approximates a confidence interval with significance level α [7]. Thus, RPs give results that are the nearest Bayesian equivalent to maximum likelihood estimation, and this behavior is how they derive their name: RPs serve as a reference against which to test subjective priors.

Definition. We now state the RP definition formally. A RP π*(θ) is the distribution that maximizes the mutual information between the parameters and the data [1, 3]:

π*(θ) = argmax_π I(θ; D) = argmax_π { H[π(θ)] − E_{p(D)}[ H[p(θ|D)] ] }.   (1)

Here I(θ; D) denotes mutual information. In the second line, I(θ; D) is (by definition) decomposed into separate marginal and conditional entropy terms, showing that maximizing I(θ; D) in turn maximizes the prior's uncertainty while minimizing the posterior's uncertainty. The second term reflects the RP's data-driven nature as it encourages the posterior to contract quickly (as N increases). Another way to see how the RP accentuates the data's influence is by writing the mutual information in terms of a Kullback-Leibler divergence (KLD): I(θ; D) = E_{p(D)}[ KLD( p(θ|D) ∥ π(θ) ) ]. This form shows that increasing I(θ; D) decreases the similarity between the posterior and prior.

Solution. Solving Equation 1 for π*(θ) is a calculus of variations problem whose solution can be expressed by re-writing the mutual information as

I(θ; D) = −KLD( π(θ) ∥ exp{ E_{p(D|θ)}[ log p(θ|D) ] } ).   (2)

Clearly, the mutual information is maximized when π*(θ) ∝ exp{ E_{p(D|θ)}[ log p(θ|D) ] }. See Bernardo (1979) for a complete discussion of the derivation. Equation 2 also makes clear the analytical obstacles that need to be overcome to solve the optimization problem for a given model: the functional requires that the log posterior—which is usually intractable to compute—be integrated over the likelihood function. Note that the solution is commonly not a proper distribution (that integrates to 1).

Relation to Jeffreys Priors. RPs are equal to the Jeffreys prior (defined as p(θ) ∝ √|F(θ)|, where F denotes the Fisher information matrix) in one dimension but not in general. The equivalence is obtained by invoking the Bernstein–von Mises theorem: setting p(θ|D) = N(θ; θ̂, (1/N) F⁻¹(θ̂)), where θ̂ is the maximum likelihood estimate and F is the Fisher information matrix. RPs, also like the Jeffreys, are invariant to model reparametrization, which follows from the fact that the mutual information is itself invariant to a change in parametrization [1].

2.2 Related Work

Next we review existing techniques for approximating intractable RPs. These methods have a notable lack of scalability, requiring numerical integration over the parameter space. Nonetheless, since they share some fundamental similarities with our proposed method, we reproduce their main components so that we can later discuss how our method handles the same analytical difficulties.

Numerical Algorithm. Berger et al. (2009) proposed a numerical method for computing a RP's value at any given point θ. Their method is, simply, to calculate Equation 2's solution numerically via Monte Carlo approximations:

π(θ) ∝ exp{ E_{p(D|θ)}[ log p*(θ|D) ] } ≈ exp{ (1/S) Σ_{s=1}^{S} log p*(θ|D_s) },   (3)

where p*(θ|D) ∝ p(D|θ) π₀(θ) and π₀(θ) is an improper uniform prior over the parameter space. The method proceeds by sampling datasets from the likelihood function, i.e. D_s ∼ p(D|θ), and parameter values from the prior, i.e. θ_j ∼ π₀(θ). The posterior is then approximated as p*(θ|D_s) ≈ p(D_s|θ) / Σ_j p(D_s|θ_j). This numerical approximation has two significant downsides. The first is that the user must specify the points at which to compute the prior, and the second is that numerically integrating over the parameter space is computationally expensive in even low dimensions.
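To make the procedure concrete, here is a hedged, minimal sketch (our own, not the authors' code) of a Berger et al. (2009)-style estimate for the Bernoulli model: datasets are drawn from the likelihood, the posterior under a uniform prior is approximated by normalizing against uniform parameter draws, and the averaged log-posterior is exponentiated. All sample sizes and function names are illustrative assumptions.

```python
import math, random
random.seed(0)

def log_lik(k, n, theta):
    # log p(D|theta) for a Bernoulli dataset summarized by k successes in n draws.
    return k * math.log(theta) + (n - k) * math.log(1.0 - theta)

def numerical_rp(theta, n=50, n_datasets=200, n_params=200):
    # Sketch of the numerical estimate:
    #   pi(theta) ∝ exp{ (1/S) sum_s log p*(theta | D_s) },
    # where, under a uniform prior, the posterior is approximated by
    #   p*(theta|D) ≈ p(D|theta) / ((1/J) sum_j p(D|theta_j)), theta_j ~ U(0,1).
    total = 0.0
    for _ in range(n_datasets):
        k = sum(random.random() < theta for _ in range(n))  # D_s ~ p(D|theta)
        log_num = log_lik(k, n, theta)
        denom = sum(math.exp(log_lik(k, n, random.uniform(1e-3, 1 - 1e-3)))
                    for _ in range(n_params)) / n_params
        total += log_num - math.log(denom)
    return math.exp(total / n_datasets)

# The estimate is larger near the boundaries than at 0.5, mirroring the
# arcsine shape of the Bernoulli Jeffreys prior.
vals = {t: numerical_rp(t) for t in (0.1, 0.5, 0.9)}
print(vals)
```

Note how the cost compounds: every evaluation point requires fresh dataset and parameter samples, which is the scalability problem discussed above.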


Lafferty & Wasserman (2001) proposed a Markov Chain Monte Carlo (MCMC) method for sampling from a RP. Their approach involves running the Metropolis–Hastings algorithm on the following ratio

π_t(θ*) / π_t(θ^{(t)}),   with log π_t(θ) = −H[p(D|θ)] − E_{p(D|θ)}[ log (1/J) Σ_j p(D | θ_j^{(t−1)}) ],   (4)

where t is the iteration index, H[p(D|θ)] is the entropy of the likelihood function, and {θ_j^{(t−1)}} are the parameter samples collected during the previous iteration.

While this MCMC approach may look dissimilar to Berger et al.'s method at first glance, the two methods are in fact related. We can see the connection by examining just one of the distributions in the ratio (at a fixed iteration t):

log π_t(θ) = E_{p(D|θ)}[ log p(D|θ) − log (1/J) Σ_j p(D | θ_j^{(t−1)}) ].

The line above becomes equivalent to Equation 3 if we use a Monte Carlo approximation of the expectation over D and then exponentiate both sides. Despite this close connection between the two methods, Lafferty & Wasserman (2001)'s approach is superior to that of Berger et al.'s since it draws samples from the prior instead of merely computing its value at points the user must select. Yet the same costly discrete approximations of the integrals will be required.
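The MCMC scheme can be sketched as follows; this is a hedged toy version for the Bernoulli model, not the authors' implementation, and all iteration counts and sample sizes are illustrative assumptions.

```python
import math, random
random.seed(1)

def log_lik(k, n, theta):
    # log p(D|theta) for a Bernoulli dataset with k successes in n draws.
    return k * math.log(theta) + (n - k) * math.log(1.0 - theta)

def log_target(theta, prev, n=20, n_datasets=50):
    # Sketch of the iteration-t target:
    # log f_t(theta) = E_{p(D|theta)}[ log p(D|theta)
    #                  - log (1/J) sum_j p(D|theta_j^{(t-1)}) ],
    # estimated with Monte Carlo draws of datasets D_s ~ p(D|theta).
    total = 0.0
    for _ in range(n_datasets):
        k = sum(random.random() < theta for _ in range(n))
        mix = sum(math.exp(log_lik(k, n, tj)) for tj in prev) / len(prev)
        total += log_lik(k, n, theta) - math.log(mix)
    return total / n_datasets

# Metropolis-Hastings with a uniform proposal; the reference sample set
# {theta_j} is refreshed after each outer iteration.
samples = [random.uniform(0.05, 0.95) for _ in range(40)]
for _ in range(4):                       # outer iterations t
    theta = random.uniform(0.05, 0.95)
    lt = log_target(theta, samples)
    new = []
    for _ in range(40):                  # inner M-H steps
        prop = random.uniform(0.05, 0.95)
        lp = log_target(prop, samples)
        if random.random() < math.exp(min(0.0, lp - lt)):
            theta, lt = prop, lp
        new.append(theta)
    samples = new

print(len(samples), round(min(samples), 3), round(max(samples), 3))
```

Even in this toy setting, every Metropolis step requires nested Monte Carlo estimates over datasets and parameter samples, which is the cost the text refers to.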

Reference Distance Method. The third approach, and the only other that we are aware of for finding approximate RPs, is the Reference Distance Method (RDM) proposed by Berger et al. (2015). This method focuses on finding a joint RP by minimizing the divergence between a parametric family and the marginal RPs [2]. Since we are concerned with models for which even the marginal RPs are intractable, the RDM is not a relevant point for comparison.

3 Learning Reference Prior Approximations

We now turn to the primary contribution of this paper: approximating RPs by learning the parameters of the approximation. Our proposed approach contrasts with Berger et al. (2009)’s and Lafferty & Wasserman (2001)’s in that their methods are not model-based. In other words, their procedures produce no parametric artifact for the prior unless a post-hoc step of model fitting is carried out. Our black-box optimization framework subsumes the utility of the numerical and MCMC methods as it can directly learn either a parametric approximation to evaluate the prior’s density or a functional sampler that can generate new samples from the prior at any later time.

3.1 Method #1: Information Lower Bound

Inspired by recent advances in posterior variational inference (VI), we use similar ideas to optimize an approximate prior—call it p(θ; λ) with parameters λ—so that it is the distribution in the family closest to the true RP π*(θ). The mutual information still serves as the natural optimization objective; the difference is that we take the argmax over λ, instead of over the density itself, such that p(θ; λ*) ≈ π*(θ):

λ* = argmax_λ I(θ; D) = argmax_λ E_{p(θ;λ)}[ −H[p(D|θ)] ] − E_{p(D;λ)}[ log p(D; λ) ].   (5)

In the final line above, we wrote the mutual information as the difference between the negative likelihood entropy and the expected log marginal likelihood because this is I's most tractable form: it contains only p(D; λ) instead of p(D) and p(θ|D). We use the notation p(D; λ) to emphasize that D's distribution is a function of λ.

Bounding log p(D; λ). The marginal likelihood term in Equation 5 is still problematic, and thus, just as in posterior VI, we need some tractable bound to optimize instead. Since we need to bound I(θ; D) from below, log p(D; λ) must be bounded from above. Hence, unfortunately, we cannot use the evidence lower bound (ELBO) common in posterior VI. As an alternative we use the variational Rényi bound [14] (VR), which is defined as:

L_α(λ; D) = (1/(1−α)) log E_{p(θ;λ)}[ p(D|θ)^{1−α} ],   with log p(D; λ) ≤ L_α for α ≤ 0.

Plugging the VR bound into Equation 5 yields a general lower bound on the mutual information:

I(θ; D) ≥ E_{p(θ;λ)}[ −H[p(D|θ)] ] − E_{p(D;λ)}[ L_α(λ; D) ].   (6)

In theory, setting α = 0 provides the tightest bound, and decreasing α loosens the bound. However, as we discuss next, practical implementation requires a negative value for α.

Optimization Objective. The expectation within the VR bound usually will not be analytically solvable, requiring the use of a Monte Carlo approximation (which we will refer to as MC-VR). Yet, introducing sampling into the VR bound can give rise to numerical challenges. The MC-VR estimator is an exponentiated form of the harmonic mean estimator [19], which is notorious for its high variance. Furthermore, approximating the expectation with samples, since they reside inside the logarithm, biases the bound downward. Li & Turner (2016) propose the following VR-max estimator, corresponding to α = −∞, to cope with these issues: L̂_max = max_k log p(D|θ_k), where k indexes samples θ_k ∼ p(θ; λ). We find that the VR-max estimator generally preserves the bound and needs to be checked only in high dimensions, which is a regime not well suited for reference priors anyway (due to overfitting).
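The relationship between the MC-VR estimator and its VR-max limit can be checked numerically. The sketch below (our own, with an assumed Gaussian-mean likelihood) computes the Monte Carlo VR bound at several α values and confirms that decreasing α moves the estimate toward the maximum log-likelihood over the sampled parameters.

```python
import math, random
random.seed(0)

def log_lik(data, theta):
    # Gaussian likelihood with unit variance and mean theta (assumed model).
    return sum(-0.5 * math.log(2 * math.pi) - 0.5 * (x - theta) ** 2
               for x in data)

data = [random.gauss(1.0, 1.0) for _ in range(25)]
thetas = [random.gauss(0.0, 2.0) for _ in range(10)]  # theta_k ~ p(theta; lambda)
logs = [log_lik(data, t) for t in thetas]

def vr_bound(alpha):
    # Monte Carlo VR bound: (1/(1-alpha)) log (1/K) sum_k p(D|theta_k)^{1-alpha},
    # computed with log-sum-exp stabilization.
    m = max(logs)
    s = sum(math.exp((1 - alpha) * (l - m)) for l in logs) / len(logs)
    return m + math.log(s) / (1 - alpha)

vr_max = max(logs)  # VR-max estimator: the alpha -> -inf limit

# As alpha decreases the MC-VR estimate increases toward the VR-max value,
# which is always an upper envelope of the Monte Carlo bound.
print(vr_bound(0.0), vr_bound(-10.0), vr_max)
```

The α = 0 value is the plain Monte Carlo estimate of log p(D; λ); the VR-max value replaces the average with a single maximum, trading bias for much lower variance.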

Introducing the VR-max estimator into Equation 5 yields a tractable lower bound on the mutual information:

Ĩ(λ) = E_{p(θ;λ)}[ −H[p(D|θ)] − E_{p(D|θ)}[ max_k log p(D|θ_k) ] ].   (7)
Maximizing Ĩ with respect to the prior's parameters λ results in p(θ; λ*) ≈ π*(θ) as long as the family p(θ; λ) is sufficiently expressive. Ĩ can be interpreted as follows. The first term is the entropy of the likelihood function, and thus maximizing its negation encourages certainty in the data model. The second term, the expected value of the VR-max estimator under the likelihood, encourages diversity in p(θ; λ) by forcing a dataset D ∼ p(D|θ) to have low probability under other parameter settings θ_k.
Connection to Previous Work. Further understanding of Ĩ can be gained by re-writing it to see its relationship to Berger et al. (2009)'s and Lafferty & Wasserman (2001)'s methods. Pulling out the expectation over the likelihood, we have the equivalent form:

Ĩ(λ) = E_{p(θ;λ)} E_{p(D|θ)}[ log p(D|θ) − max_k log p(D|θ_k) ],

which is the difference between the data's log-likelihood under the model (i.e. parameter setting) that generated this data and the data's log-likelihood under several samples from the prior. We see that optimization forces the prior to place most of its mass on parameters that generate identifiable datasets—or in other words, datasets that have high probability under only their true generative model. Turning back to Berger et al. (2009)'s Equation 3, and recalling its connection to the MCMC method, we see each method is approximately computing E_{p(D|θ)}[ log p(D|θ) − log p(D) ], with the critical difference being that Berger et al. (2009) and Lafferty & Wasserman (2001) approximate log p(D) with an average over prior samples, log (1/J) Σ_j p(D|θ_j), whereas we use the VR-max estimator, max_k log p(D|θ_k), in order to ensure a proper lower bound.

3.1.1 Black-Box, Gradient-Based Optimization

We now address how to compute and optimize (Equation 7) efficiently using differentiable Monte Carlo approximations.

Computing the Expectations. Consider first the three expectations in Equation 7. Starting with the entropy term, for many predictive models p(D|θ) is either Gaussian, as in regression, or Bernoulli, as in binary classification, meaning H[p(D|θ)] can be computed analytically. (To keep the notation simple, in our discussion of conditional models the dependence on the features is implicit; writing the entropy with X as the feature matrix and y as the vector of labels, we have H[p(y|X, θ)].) The second term, E_{p(D|θ)}[ max_k log p(D|θ_k) ], is simply the cross-entropy between p(D|θ) and p(D|θ_max), where θ_max is the sample that maximizes the likelihood. This term also can usually be calculated analytically for regression and classification models. The only component that will typically be intractable is the expectation over p(θ; λ), as the parameters are often buried under nonlinear functions and nested hierarchies. To address this we compute the outer expectation with samples θ_k ∼ p(θ; λ):

Ĩ(λ) ≈ (1/K) Σ_k [ −H[p(D|θ_k)] + H[p(D|θ_k), p(D|θ_max)] ] = (1/K) Σ_k KLD( p(D|θ_k) ∥ p(D|θ_max) ),   (8)

for samples θ_k from the RP approximation and where H[p, q] denotes the cross-entropy term mentioned above. If both entropy terms can be computed analytically, we can write the expression as a KLD, which we do in the second line by using the identity KLD(p ∥ q) = H[p, q] − H[p]. If the entropy terms are not analytically tractable, they will need to be estimated by sampling from the likelihood function.
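A hedged Monte Carlo sketch of this objective for the Gaussian-mean model (our own toy version, with assumed sample sizes) illustrates why maximizing it spreads the prior out: the VR-max term is realized by the sampled mean closest to the observed data mean.

```python
import math, random
random.seed(0)

N = 10        # dataset size (assumed)
SIGMA2 = 1.0  # known likelihood variance (assumed)

def mc_objective(mu0, sigma0, n_outer=200, n_inner=8):
    # Draw mu ~ N(mu0, sigma0^2) and a dataset (summarized by its mean xbar)
    # from p(D|mu); fresh prior samples mu_k play the role of the VR-max term,
    # whose maximizer is the sampled mean closest to xbar.
    total = 0.0
    for _ in range(n_outer):
        mu = random.gauss(mu0, sigma0)
        xbar = random.gauss(mu, math.sqrt(SIGMA2 / N))
        mus = [random.gauss(mu0, sigma0) for _ in range(n_inner)]
        mu_max = min(mus, key=lambda m: (m - xbar) ** 2)
        # log p(D|mu) - max_k log p(D|mu_k), up to shared constants
        total += N * ((xbar - mu_max) ** 2 - (xbar - mu) ** 2) / (2 * SIGMA2)
    return total / n_outer

# A wider prior spreads its samples farther apart, so the objective grows
# with sigma0 -- consistent with the improper uniform reference prior on
# the Gaussian mean.
print(mc_objective(0.0, 0.5), mc_objective(0.0, 5.0))
```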

Differentiable Sampling: We can take derivatives through each θ_k, thereby allowing for fully gradient-based optimization, by drawing the samples via a differentiable non-centered parametrization (DNCP)—the so-called 'reparametrization trick' [12], i.e. θ_k = g(λ, ε̂_k) with ε̂_k ∼ p(ε), where ∂θ_k/∂λ is the derivative that needs a DNCP in order to be evaluated. Requiring that p(θ; λ) has a DNCP does not significantly limit the approximating family. For instance, most mixture densities have a DNCP. When dealing with discrete data or parameters, we can use the Concrete distribution [18, 8], a differentiable relaxation of the discrete distribution, to still have fully gradient-based learning.
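The DNCP idea can be shown in a few lines. In this minimal sketch (ours, for an assumed Gaussian family), the sample is a deterministic map of the parameters and a fixed noise draw, so its parameter derivative is well defined.

```python
import random
random.seed(0)

# DNCP / 'reparametrization trick' for a Gaussian p(theta; lambda):
# theta = g(mu, sigma, eps) = mu + sigma * eps with eps ~ N(0, 1), so a
# sampled theta is a differentiable function of (mu, sigma).
def g(mu, sigma, eps):
    return mu + sigma * eps

# Finite-difference check that, for a fixed noise draw, d theta / d sigma = eps;
# this is the derivative gradient-based optimization propagates through.
eps = random.gauss(0.0, 1.0)
h = 1e-6
grad_fd = (g(0.0, 1.0 + h, eps) - g(0.0, 1.0, eps)) / h
print(grad_fd, eps)
```

Without the DNCP, the sample would enter the objective as an opaque random draw and the gradient with respect to λ would be unavailable.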

3.1.2 Implicit Priors

A crucial detail to note about Equation 8 is that it does not require evaluation of the prior's density. Rather, we need only to draw samples from it. This allows us to use black-box functional samplers as the variational family [21], i.e. θ̂ = g(λ, ε̂) with ε̂ ∼ p(ε), where g is some arbitrary differentiable function (such as a neural network) and p(ε) is a fixed noise distribution. We call g an implicit prior in this setting since its density function is unknown.

Thus, the proposed information bound provides a 'built-in' sampling technique in lieu of Lafferty & Wasserman (2001)'s MCMC algorithm. Although we cannot guarantee the same asymptotically unbiased approximation as MCMC, the lack of restrictions on g should allow for a sufficiently expressive sampler. Furthermore, we can persist the sampler just by saving the values of λ; there is no need to save the samples themselves. And since learning a RP for a generative model is dataset independent, λ could be shared easily via an online repository, and users desiring a RP for the same model could download λ to generate an unbounded number of their own samples. The same can be done when p(θ; λ) is a proper distribution.

3.1.3 Example: Gaussian Mean

To provide some intuition and to sanity check the proposed approach, consider learning an approximate RP for the mean parameter μ of a Gaussian density with known variance σ². The RP on μ is the improper uniform distribution, which can be approximated as a Gaussian with infinite variance. The analytical solution to the KLD term in Equation 8 in this case is:

KLD( p(D|μ_k) ∥ p(D|μ_max) ) = (N / 2σ²) (μ_k − μ_max)²,

which is the squared distance between two samples from p(μ; λ). Maximizing Equation 8 therefore maximizes the average distance between samples from the RP approximation. If we set p(μ; λ) = N(μ₀, σ₀²) and transform μ_k to the Normal's DNCP μ_k = μ₀ + σ₀ ε_k, where ε_k ∼ N(0, 1), then the optimization objective becomes proportional to σ₀² (ε_k − ε_max)², and optimization would increase σ₀ without bound, agreeing with the infinite-variance Normal approximation.
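This runaway behavior is easy to simulate. The toy gradient-ascent loop below (our own sketch, with assumed step size and iteration count) shows σ₀ growing monotonically, since the objective's σ₀-gradient is never negative.

```python
import random
random.seed(0)

# Gradient ascent for the Gaussian-mean example: under the DNCP
# mu_k = mu0 + sigma0 * eps_k, the pairwise objective is proportional to
# sigma0^2 * (eps_1 - eps_2)^2, so its sigma0-gradient is >= 0 and sigma0
# grows without bound, matching the improper uniform reference prior.
sigma0, lr = 1.0, 0.01
start = sigma0
for _ in range(100):
    e1, e2 = random.gauss(0, 1), random.gauss(0, 1)
    grad = 2.0 * sigma0 * (e1 - e2) ** 2  # d/dsigma0 of sigma0^2 (e1 - e2)^2
    sigma0 += lr * grad
print(start, sigma0)
```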

3.2 Method #2: Particle Descent

Next we present a particle-based approximation method. While the core ideas are not significantly different from those of Method #1, here we use a different rearrangement of the mutual information and a lower bound on log p(D; λ). These changes expose nuances that may be useful if the reader wishes to apply other VI techniques to the problem of RP approximation. Other modern VI methods using transformations [22] or importance weighting [5] could also be applied.

A Particle-Based Approximation. Stein Variational Gradient Descent [16] (SVGD) is a variational inference technique that exploits a connection between the Stein operator and the derivative of the KLD to derive a deterministic particle update rule. At time step t, each parameter particle θ_i is updated according to:

θ_i^{t+1} = θ_i^t + η_t φ(θ_i^t),   φ(θ) = (1/M) Σ_{j=1}^{M} [ k(θ_j, θ) ∇_{θ_j} log p*(θ_j) + ∇_{θ_j} k(θ_j, θ) ],

where η_t is a learning rate, k(·, ·) is a proper kernel function, and p* is the cumbersome distribution we wish to approximate. SVGD has the nice property that using one particle (M = 1) reduces to vanilla gradient ascent on log p* (MAP estimation, in the Bayesian setting).
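The update rule can be implemented in a few lines. The sketch below (our own, targeting an assumed standard Normal in one dimension with a fixed-bandwidth RBF kernel) shows the particles settling into an approximation of the target.

```python
import math, random
random.seed(0)

def grad_log_p(x):
    # Target density: standard Normal, so d/dx log p(x) = -x.
    return -x

def rbf(x, y, h=1.0):
    return math.exp(-(x - y) ** 2 / (2 * h * h))

def rbf_grad_x(x, y, h=1.0):
    # Derivative of the kernel w.r.t. its first argument: the repulsive term.
    return -(x - y) / (h * h) * rbf(x, y, h)

particles = [random.uniform(-8, 8) for _ in range(30)]
for _ in range(300):
    phi = []
    for xi in particles:
        # Stein variational gradient: kernel-smoothed log-density gradient
        # plus the kernel-derivative repulsion that keeps particles apart.
        s = sum(rbf(xj, xi) * grad_log_p(xj) + rbf_grad_x(xj, xi)
                for xj in particles) / len(particles)
        phi.append(s)
    particles = [x + 0.1 * p for x, p in zip(particles, phi)]

mean = sum(particles) / len(particles)
var = sum((x - mean) ** 2 for x in particles) / len(particles)
print(round(mean, 3), round(var, 3))
```

The repulsive kernel term is what prevents the particles from all collapsing onto the mode, which is also the property invoked later when the ELBO substitution weakens the diversity pressure.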

Similarly to Ĩ (Equation 8), SVGD does not require the approximating density be evaluated, and hence we can draw the particles from a black-box function, i.e. θ_i = g(λ, ε_i), just as defined in Section 3.1.2. Liu & Feng (2016) call this variant Amortized SVGD (A-SVGD), and it estimates the parameters λ by finding SVGD's fixed-point solutions [15]:

λ^{t+1} = λ^t + η_t Σ_i ( ∂ g(λ^t, ε_i) / ∂λ )ᵀ φ(θ_i).   (9)

This formulation is especially beneficial to SVGD because training can be done with a practical number of particles, but an unlimited number can be drawn at evaluation time. Lastly, note that the base parameters λ are updated through the particle θ_i, not through the particles θ_j in φ.

A-SVGD for RP Approximations. We can use A-SVGD to learn particle approximations of RPs. A-SVGD requires RP learning be formulated as KLD minimization. In this context recall Equation 2, in which we showed that minimizing KLD( π(θ) ∥ exp{ E_{p(D|θ)}[ log p(θ|D) ] } ) maximizes I(θ; D). (While Equation 2, in form, looks like a negated KLD, it will take on positive values due to exp{ E_{p(D|θ)}[ log p(θ|D) ] } being unnormalized.) Hence we can treat the functional f(θ) = exp{ E_{p(D|θ)}[ log p(θ|D) ] } as the intractable density on which to apply the SVGD operator φ. The gradient term is then

∇_{θ_j} log f(θ_j) = ∇_{θ_j} E_{p(D|θ_j)}[ log p(θ_j | D) ] = ∇_{θ_j} E_{p(D|θ_j)}[ log p(D|θ_j) − log p(D) ],   (10)

where θ_j is the jth particle. In the second line, following Berger et al. (2009) in Equation 3, we assume the prior is constant. If a prior is included, it acts as a hyper-prior, regularizing the particles towards a user-specified distribution.

Again we face the problem of evaluating log p(D). We could use the VR-max estimator, but here we opt for stability and use the ELBO, which with the prior itself as the variational distribution is log p(D) ≈ E_{p(θ;λ)}[ log p(D|θ) ]. Making this substitution reduces the influence of log p(D), the term that encourages diversity in p(θ; λ) and the generation of identifiable datasets (as discussed in Section 3.1). Yet the derivative of the kernel, the second term in φ, acts as a (locally) repulsive force on the particles, which may compensate for the introduction of the ELBO. Substituting the approximation of the marginal likelihood into Equation 10 yields:

∇_{θ_j} log f(θ_j) ≈ (1/S) Σ_s ∇_{θ_j} [ log p(D_s|θ_j) − (1/K) Σ_k log p(D_s|θ_k) ],   (11)

where D_s denotes a sample from p(D|θ_j). In the last line we use a Monte Carlo approximation of the expectations just as in Method #1. Optimization is performed as described in Equation 9.

Figure 1: Approximation via Lower Bound Optimization. Panels: (a) Bernoulli, (b) Gaussian Scale, (c) Poisson.

4 Empirical Results

Below we describe several empirical analyses of the proposed methods. Formulating experiments is somewhat difficult due to the fact that RPs do not necessarily improve a model’s ability to generalize to out-of-sample data. In fact, using an RP when a model requires regularization will likely degrade performance. Thus, our main analysis is a case study of the Variational Autoencoder [12, 23]. But before analyzing the Variational Autoencoder’s RP, we check that our methods do indeed recover known RPs for exponential family models.

For all experiments, we used the AdaM optimization algorithm [11]. Training parameters such as the learning rate, the latent dimensionality of the functional sampler g, and the number of samples K were chosen based on which combination gave the highest average value of the information lower bound over the last 50 updates. The number of training iterations and the batch size were fixed in all cases.

Figure 2: Approximation via Particle Descent. Panels: (a) Bernoulli, (b) Gaussian Scale, (c) Poisson.

4.1 Recovering Jeffreys Priors

Figure 3: Quantifying the Approximation Quality. Panels: (a) Bernoulli, (b) Gaussian Scale, (c) Poisson. The Kolmogorov–Smirnov distance (supremum of the distance between empirical CDFs) between the Jeffreys/true reference prior and the various approximation techniques. The gray region denotes where the test's null hypothesis is rejected, meaning there is a statistical difference between the distributions.

We begin the experimental evaluation by attempting to recover the true RP for three one-dimensional models: the Bernoulli mean parameter, the Gaussian scale parameter, and the Poisson rate parameter. These are also the Jeffreys priors for the respective models (since we are in the univariate case). The learning rate was chosen separately for the implicit priors and for the parametric and particle approximations, the same number of samples was drawn for all models, and the functional sampler g was a linear model with a latent dimensionality of 5. We used a logit-normal distribution for the Bernoulli RP's parametric approximation and a log-normal for the Gaussian scale's and Poisson's RP approximations. Both the logit- and log-normal have DNCPs. For A-SVGD, we used the Sobolev kernel on the unit interval for the Bernoulli model and an RBF kernel in log space (length scale set via the heuristic in [16]) for the Gaussian scale and Poisson models.

Qualitative Evaluation. Plots of the density functions learned by the lower bound method (Section 3.1) are shown in Figure 1. The red line shows the Jeffreys prior (the gold-standard RP), the blue line shows the parametric approximation, and the gray histogram represents samples from the implicit prior. Both approximation types show negligible qualitative difference from the red line.

The density functions learned by A-SVGD (Section 3.2) are shown in Figure 2. Again, the red line denotes the true RP, and the gray histogram represents particles sampled from g. Here, we do notice some minor differences. For instance, the Bernoulli prior's right mode seems to be a bit stronger than its left, and the Poisson prior underestimates the tail. These defects are likely due to the approximations not being penalized as much as in Method #1 for concentrating their mass in one of the likelihood function's points of low entropy.

Quantitative Evaluation. Next we quantitatively compare our methods via a two-sample test against three baselines: Berger et al. (2009)'s numerical method, Lafferty & Wasserman (2001)'s MCMC algorithm, and a uniform prior, which serves as a naive flat baseline. For Berger et al. (2009)'s method, we use the same number of parameter samples as our method. To generate samples from this method, we calculate the prior at evenly spaced grid points across the domain, treat the result as a discrete approximation with each grid point's probability proportional to its prior value, and sample from this discrete distribution. For the MCMC method, we replicate Lafferty & Wasserman (2001)'s simulations by using a uniform proposal distribution, keeping the samples drawn during the final iteration (no need to account for auto-correlation due to the uniform proposal). For the Gaussian and Poisson cases, whose domains are unbounded, we used a finite grid for the discrete approximation. For all settings, we made sure our approximation methods ran no longer than the baselines, but this was never an issue: our methods converged in a fraction of the time the numerical algorithms needed to run.

We quantify the gap in the approximations via a Kolmogorov–Smirnov two-sample test (KST) under the null hypothesis H₀: F = G, where F is the distribution of the true RP and G that of an approximation. We draw samples from the true RP, when it is improper, via the same discrete approximation used for Berger et al. (2009)'s method. The KST computes the distance (KSD) between the distributions as sup_x | F̂(x) − Ĝ(x) |, where F̂ denotes the empirical CDF.
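For reference, the two-sample KS distance used here can be computed without any statistics library; this is our own minimal sketch with assumed Gaussian test samples.

```python
import bisect, random
random.seed(0)

def ks_distance(xs, ys):
    # Two-sample Kolmogorov-Smirnov statistic: the supremum over the pooled
    # points of |F_x(t) - F_y(t)|, with F the empirical CDF of each sample.
    xs, ys = sorted(xs), sorted(ys)
    return max(abs(bisect.bisect_right(xs, t) / len(xs)
                   - bisect.bisect_right(ys, t) / len(ys))
               for t in xs + ys)

# Samples from the same distribution give a small distance, while samples
# from a shifted distribution give a large one.
same = ks_distance([random.gauss(0, 1) for _ in range(500)],
                   [random.gauss(0, 1) for _ in range(500)])
shifted = ks_distance([random.gauss(0, 1) for _ in range(500)],
                      [random.gauss(2, 1) for _ in range(500)])
print(round(same, 3), round(shifted, 3))
```

In practice `scipy.stats.ks_2samp` provides the same statistic along with the p-value used to decide the rejection threshold shown in Figure 3.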

Figure 3 shows the KSD between samples from the Jeffreys prior and the various approximation techniques, as the sample size increases. The black dotted line in conjunction with the gray shaded area denotes the threshold at which the null hypothesis (that the distributions are equal) is rejected. The uniform distribution is denoted by the dark gray line, the numerical algorithm by the black, MCMC by the brown, the parametric approximation by the red, the implicit prior by the blue, and the particle approximation by the green. We see that the latter three approximations (ours) have a lower KSD—and thus are closer to the true RP—in almost every experiment. The exceptions are that MCMC is superior to A-SVGD for the Bernoulli (and competitive with the implicit), and the Berger et al. (2009) technique bests A-SVGD and the parametric approximation for the Poisson. The parametric approximation for the Bernoulli and the implicit prior for the Gaussian scale and Poisson are the only methods that achieve conspicuous indistinguishability.

4.2 Optimization Stability

Figure 4: Optimization Stability. We train an implicit prior for a multivariate Gaussian and vary (a) the number of samples used in the VR-max estimator, and (b) the Gaussian's dimensionality.

As discussed in Section 3.1, the VR-max estimator used in Equation 7 has intrinsically high variance. While we have just shown that our approximations better recover the true RP in one dimension, scaling to higher dimensions is a concern (as it also is for existing techniques). Here we examine the optimization progress of an RP approximation for the scale parameters of a multivariate Gaussian with a diagonal covariance matrix. We produce two plots: one showing the information lower bound's progress (for a linear-model implicit prior) as we vary the number of samples over which the maximum is taken (Figure 4a), and another showing progress as the Gaussian's dimensionality increases (Figure 4b). For the former, using a five-dimensional Gaussian, we see there is a trade-off between lower bound maximization and the number of samples used: using more samples increases the rate of progress but also the objective's variance. We find that in fewer than ten dimensions, a moderate number of samples (red line) gives a good balance between variance and progress. In Figure 4b, in which we vary the dimensionality of the Gaussian while keeping the number of samples fixed, we see that the objective's variance decreases with dimensionality. While this may seem non-intuitive at first, recall that the VR-max estimator acts as a diversity term, finding points in parameter space that give the data high probability even though it was generated with different parameters. As dimensionality inflates, it becomes harder and harder for a finite number of samples to capture these points, and thus the term in Equation 8 becomes prone to mode seeking.

4.3 VAE Case Study

Figure 5: Learning the Variational Autoencoder's Reference Prior. (a) Training configuration: the computational pipeline from the implicit prior through the VAE decoder; (b) RP approximation (contours are generated via kernel density estimation on the drawn samples).
Lastly, we study learning an RP approximation for an intractable, neural-network-based model: a Variational Autoencoder [12] (VAE). The standard Normal distribution is often chosen as the prior on the VAE's latent space [12, 23, 5], and this choice is made more for analytical simplicity than from convictions based on prior information. ("I chose the simple Gaussian prior N(0,I) because it's simple to demonstrate but also because it results in a relatively friendly objective function." — D. Kingma, comment taken from r/MachineLearning, 4/12/16.) Thus, we learn an RP for the VAE to investigate the qualities of its objective prior, which was previously intractable.

We trained an implicit prior (IP) for a VAE with output dimensions matching MNIST's image size, encoder hidden units with hyperbolic tangent activations, and a two-dimensional latent space for purposes of visualization. The IP is also a one-hidden-layer neural network (architecture and training parameters: 2000 latent noise dimensions, 1000 hidden dimensions, ReLU activations, plus a fixed learning rate and a fixed number of samples for the VR-max term). The computational pipeline is depicted in Figure 5a, where one component is the VAE likelihood function (the decoder) and the other is the functional sampler. Note that the VAE has two sets of parameters: the latent variable, on which we place the prior, and the weights of the decoder. The decoder weights must have some value during RP training, and thus we place a standard Normal prior on them and sample from this prior during optimization of the implicit prior.
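The pipeline of Figure 5a can be sketched as follows. This is a hypothetical forward pass, not the paper's code: layer sizes are placeholders, and the decoder weights are drawn afresh from their standard Normal prior, as described above.

```python
import numpy as np

rng = np.random.default_rng(1)
noise_dim, hidden_dim, z_dim, x_dim = 8, 16, 2, 784   # placeholder sizes

# Implicit prior (IP): a one-hidden-layer net pushing noise forward to z.
W1 = 0.1 * rng.normal(size=(noise_dim, hidden_dim))
W2 = 0.1 * rng.normal(size=(hidden_dim, z_dim))

def implicit_prior_sample(n):
    """Draw n latent samples z from the IP (the functional sampler)."""
    eps = rng.normal(size=(n, noise_dim))
    return np.maximum(eps @ W1, 0.0) @ W2             # ReLU hidden layer

def decode(z):
    """Decoder forward pass with weights freshly sampled from N(0, I)."""
    V = rng.normal(size=(z_dim, x_dim))               # w ~ standard Normal prior
    return 1.0 / (1.0 + np.exp(-(z @ V)))             # Bernoulli means for x

probs = decode(implicit_prior_sample(5))              # 5 samples -> pixel probabilities
print(probs.shape)
```

During RP training, gradients would flow back through `decode` into the IP weights `W1`, `W2`; the decoder weights themselves are simply resampled from their prior at each step.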

Figure 5b shows samples from the VAE's RP. We see that the learned IP is drastically different from the standard Normal that is typically used: the IP is multimodal and has a much larger variance. Yet the difference is intuitive: placing most prior mass at opposite sides of the latent space encourages the VAE to space its latent representations with as much distance as possible, ensuring they are as identifiable w.r.t. the model likelihood (the VAE decoder) as possible. Interestingly, recent work by Hoffman & Johnson (2016) suggests that VAEs can be improved by multimodal priors: "[T]he [VAE's] individual encoding distributions do not have significant overlap … then perhaps we should investigate multimodal priors that can meet halfway" [6]. This suggests that using multimodal, dispersed priors encourages flexibility and objectivity in the posterior distribution.

We can also see analytically that the distribution in Figure 5b allows the VAE "to follow the data," as a good RP should. For simplicity, consider a bivariate Gaussian as the RP approximation; assuming it captures the same distribution as in Figure 5b, its parameters can be read off from the density estimate. Next recall the VAE's optimization objective (the ELBO): E_q(z|x)[log p(x|z)] − KL(q(z|x) ‖ p(z)). The first term fits the model to the data, and the second acts as regularization, keeping the variational posterior close to the prior. Assuming the RP approximation's covariance matrix is also diagonal, the KL term has a closed form, and comparing the two choices of prior shows that using the standard Normal up-weights the regularization (towards the prior) by a substantial constant factor.
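The comparison above can be made concrete with the closed-form KL divergence between diagonal Gaussians. The posterior and prior parameters below are placeholders chosen only to illustrate the effect: for a posterior whose mean sits far from the origin, as the multimodal, dispersed RP encourages, a wide prior penalizes the posterior far less than the standard Normal does.

```python
import numpy as np

def kl_diag_gauss(mu, sigma, mu0, sigma0):
    """KL( N(mu, diag sigma^2) || N(mu0, diag sigma0^2) ), closed form."""
    return np.sum(np.log(sigma0 / sigma)
                  + (sigma**2 + (mu - mu0)**2) / (2.0 * sigma0**2)
                  - 0.5)

# Hypothetical posterior located away from the origin in a 2-D latent space.
mu, sigma = np.array([3.0, -3.0]), np.array([0.5, 0.5])

standard = kl_diag_gauss(mu, sigma, np.zeros(2), np.ones(2))        # N(0, I) prior
wide = kl_diag_gauss(mu, sigma, np.zeros(2), 3.0 * np.ones(2))      # wider prior

print(standard, wide)   # the standard Normal penalizes this posterior more heavily
```

The ratio of the two KL values is the constant factor by which the standard Normal up-weights the regularization term relative to a dispersed prior.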

5 Conclusions

We have introduced two flexible, widely applicable, and derivation-free methods for approximating reference priors. The first optimizes a new lower bound on the reference prior objective and allows for parametric or non-parametric approximations to be employed, depending on whether the user prefers to easily evaluate the prior or to have a maximally expressive approximation. The second method uses a recently proposed particle technique to also allow for free-form approximations. We demonstrated quantitatively and qualitatively that these methods can recover the true reference priors for univariate distributions as well as generalize to more exotic models such as Variational Autoencoders.

Looking forward, we believe using similar techniques to construct priors that optimize objectives other than mutual information is a promising next step. For example, Liu et al. (2014) showed that priors maximizing divergence measures other than the KLD between the prior and posterior, such as the Hellinger distance, have desirable properties [17]. Extending the proposed approximation techniques to these other families of objectives may enable new classes of Bayesian prior distributions.


Acknowledgments

This work was supported by the National Science Foundation under awards IIS-1320527 and DGE-1633631 and by the National Institutes of Health under award NIH-1U01TR001801-01.


  • [1] James O Berger, José M Bernardo, and Dongchu Sun. The formal definition of reference priors. The Annals of Statistics, pages 905–938, 2009.
  • [2] James O Berger, Jose M Bernardo, and Dongchu Sun. Overall objective priors. Bayesian Analysis, 10(1):189–221, 2015.
  • [3] Jose M Bernardo. Reference posterior distributions for Bayesian inference. Journal of the Royal Statistical Society. Series B (Methodological), pages 113–147, 1979.
  • [4] José M Bernardo. Reference analysis. Handbook of Statistics, 25:17–90, 2005.
  • [5] Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. International Conference on Learning Representations (ICLR), 2016.
  • [6] Matthew Hoffman and Matthew Johnson. ELBO surgery: yet another way to carve up the variational evidence lower bound. NIPS 2016 Workshop on Advances in Approximate Bayesian Inference, 2016.
  • [7] Telba Z Irony and Nozer D Singpurwalla. Non-informative priors do not exist: A dialogue with Jose M. Bernardo. Journal of Statistical Planning and Inference, 65(159):189, 1997.
  • [8] E. Jang, S. Gu, and B. Poole. Categorical reparameterization with Gumbel-softmax. International Conference on Learning Representations (ICLR), 2017.
  • [9] Edwin T Jaynes. Information theory and statistical mechanics. Physical Review, 106(4):620, 1957.
  • [10] Harold Jeffreys. An invariant form for the prior probability in estimation problems. In Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, volume 186, pages 453–461. The Royal Society, 1946.
  • [11] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. International Conference on Learning Representations (ICLR), 2014.
  • [12] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. International Conference on Learning Representations (ICLR), 2014.
  • [13] John Lafferty and Larry Wasserman. Iterative Markov chain Monte Carlo computation of reference priors and minimax risk. In Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, pages 293–300, 2001.
  • [14] Yingzhen Li and Richard E Turner. Variational inference with Rényi divergence. Neural Information Processing Systems (NIPS), pages 1073–1081, 2016.
  • [15] Qiang Liu and Yihao Feng. Two methods for wild variational inference. ArXiv Preprint, 2016.
  • [16] Qiang Liu and Dilin Wang. Stein variational gradient descent: A general purpose Bayesian inference algorithm. Neural Information Processing Systems (NIPS), pages 2378–2386, 2016.
  • [17] Ruitao Liu, Arijit Chakrabarti, Tapas Samanta, Jayanta K Ghosh, and Malay Ghosh. On divergence measures leading to Jeffreys and other reference priors. Bayesian Analysis, 9(2):331–370, 2014.
  • [18] C. J. Maddison, A. Mnih, and Y. W. Teh. The concrete distribution: A continuous relaxation of discrete random variables. International Conference on Learning Representations (ICLR), 2017.
  • [19] Adrian E Raftery, Michael A Newton, Jaya M Satagopan, and Pavel N Krivitsky. Estimating the integrated likelihood via posterior simulation using the harmonic mean identity. Bayesian Statistics, pages 1–45, 2007.
  • [20] Rajesh Ranganath, Sean Gerrish, and David Blei. Black box variational inference. In Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, pages 814–822, 2014.
  • [21] Rajesh Ranganath, Dustin Tran, Jaan Altosaar, and David Blei. Operator variational inference. In Advances in Neural Information Processing Systems, pages 496–504, 2016.
  • [22] Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In Proceedings of the 32nd International Conference on Machine Learning, pages 1530–1538, 2015.
  • [23] Danilo Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. International Conference on Machine Learning (ICML), pages 1278–1286, 2014.
  • [24] Christian Robert. The Bayesian Choice. Springer, 2001.