Nested sampling has become one of the pillars of Bayesian computation in astronomy (and other fields), enabling efficient sampling of complicated and high-dimensional posterior distributions for parameter inference, and estimation of marginal likelihoods for model comparison (Skilling, 2006; Feroz et al., 2009; Handley et al., 2015a,b). In spite of its widespread popularity, nested sampling has a practical limitation: priors must be specified as a bijective transform from the unit hyper-cube. As a consequence, nested sampling has only been practical for a somewhat restrictive class of priors that have a readily available representation as a bijector.
This limitation has been notably prohibitive in the (commonly occurring) situation where one wants to use an interim posterior derived from one experiment as the prior for another. In these situations, while the interim posterior is computable up to a normalization constant and can therefore be sampled from, in general no bijector representation readily presents itself.
In this letter we address the need to provide priors in the form of bijectors as required for nested sampling, by training parametric bijectors to represent priors given only a set of samples from the desired prior. For use cases where interim posteriors are to be used as priors, the bijector can be simply trained on a set of MCMC samples from the interim posterior. In the context of cosmological data analysis, representing interim posteriors as (computationally inexpensive) bijectors has an additional advantage: it circumvents the need to either multiply computationally expensive likelihoods together, or introduce an extra importance sampling step, when performing joint analyses of multiple datasets. This will hence lead to substantial computational (and convenience) gains when performing multi-probe cosmological analyses.
The structure of this letter is as follows. In §2 we briefly review nested sampling theory. In §3 we discuss representation of priors as bijective transformations from a base density. In §4 we use bijectors for representing cosmological parameter posteriors for a number of experiments, demonstrating that they are sufficiently accurate for practical use as priors for subsequent (multi-probe) cosmological analyses. We conclude in §5.
2 Nested sampling and prior specification
Given a likelihood function L(θ) and prior distribution π(θ), nested sampling was incepted by Skilling (2006) as a Monte Carlo tool to compute the Bayesian evidence

Z = ∫ L(θ) π(θ) dθ,

whilst simultaneously generating samples drawn from the normalised posterior distribution P(θ) = L(θ)π(θ)/Z.
Nested sampling performs these dual processes by evolving an ensemble of live points: the points are initially drawn from the prior distribution π(θ), and then at each subsequent iteration (a) the lowest-likelihood live point is discarded and (b) it is replaced by a new point drawn from the prior, subject to the hard constraint that the new point has a higher likelihood than the discarded one. The set of discarded dead points and their likelihoods can be used to compute a statistical estimate of the Bayesian evidence, and can be weighted to form a set of samples from the posterior distribution.
The nested sampling procedure therefore differs substantially from traditional strategies such as Gibbs sampling, Metropolis-Hastings or Hamiltonian Monte Carlo in that it acts as a likelihood scanner rather than a posterior sampler. It scans athermally, and so is in fact capable of producing samples at any temperature and of computing the full partition function in addition to the Bayesian evidence (Handley, 2019b).
Implementations of the nested sampling meta-algorithm, e.g. MultiNest (Feroz et al., 2009) and PolyChord (Handley et al., 2015a,b), differ primarily in the method by which they draw a new live point from the prior subject to a hard likelihood constraint. Nested sampling algorithms typically assume that each parameter has a prior that is uniform over the unit hypercube. As we shall see in the next section, this is not as restrictive as it sounds, since any non-uniform proper prior may be transformed onto the unit hypercube via a bijective transformation.
It should be noted that normalising flows and bijectors have been used by Moss (2020) & Williams et al. (2021) in the context of techniques for generating new live points, but the approach in this letter for using them for prior specification is applicable to all existing nested sampling algorithms.
3 Representing priors as bijectors
A bijector is an invertible function between the elements of two sets, with a one-to-one correspondence between elements, i.e., each element of one set is paired with precisely one element of the other set and there are no unpaired elements. In the context of probability theory, bijectors are convenient for defining mappings from one probability distribution to another. If a random variate x with probability density p(x) is transformed under a bijector f, then samples and probability densities transform as:

y = f(x),  x = f^-1(y),
p(y) = |J| p(x),  p(x) = |J^-1| p(y),

where J = det(∂f^-1(y)/∂y) is the Jacobian determinant corresponding to the bijector function f. Any probability density can be represented as a bijection from a base density, or equivalently, any target density can be completely specified by a base density and a bijector from that base density to the target. Note also that any composition of bijectors is also a bijector.
In the context of nested sampling, priors over model parameters θ should be provided as a bijector f that transforms from a base (unit) uniform density U to the desired prior π(θ), i.e.,

θ = f(u),  u ∼ U,
π(θ) = |J| U(f^-1(θ)).

In practice, all that is required for specifying nested sampling priors is the bijector function f.
In many use cases, such a bijector is readily available. For example, for any univariate prior density, the bijector from the unit uniform is simply the inverse cumulative distribution function (CDF) of the target prior,

f(u) = CDF^-1(u),
CDF(θ) = ∫_-∞^θ π(θ′) dθ′,

where the CDF can be interpolated and inverted numerically if the integral is intractable.
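The numerical route mentioned above can be sketched as follows (a minimal example with a hypothetical un-normalised prior π(θ) ∝ exp(−θ) truncated to [0, 5]; the grid size and interpolation scheme are illustrative choices):

```python
import numpy as np
from scipy import interpolate

# Hypothetical un-normalised univariate prior on [0, 5].
theta = np.linspace(0.0, 5.0, 2001)
density = np.exp(-theta)

# Cumulative integral via the trapezoid rule, then normalise so the
# CDF runs from 0 to 1.
increments = np.diff(theta) * 0.5 * (density[1:] + density[:-1])
cdf = np.concatenate([[0.0], np.cumsum(increments)])
cdf /= cdf[-1]

# The bijector f(u) = CDF^-1(u), built by interpolating the inverted CDF.
f = interpolate.interp1d(cdf, theta)

# Transform unit-uniform draws into draws from the target prior.
u = np.random.default_rng(0).uniform(size=10000)
samples = f(u)
```

The samples land in the prior's support with the right shape; for this truncated exponential the analytic median is −ln(1 − 0.5(1 − e^-5)) ≈ 0.686.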
However, for the general case of correlated multivariate priors, the requisite bijector is far more involved to compute analytically (see §5.1 of Feroz et al., 2009) and in general must be obtained numerically by other means. One particularly common scenario where this arises is when one wants to use the (sampled) posterior from one experiment as the prior for another; in these cases, all that is available are samples from the desired prior and un-normalized values of its density at those points.
The solution is to fit a parametric model for the required bijector, f(·; w) (with parameters w), to the target prior. In the following sections we describe how to define expressive parametric bijector models and fit them to target (prior) densities.
3.1 Parametric bijectors
Parametric bijector models have recently been gaining popularity for solving complex density estimation tasks, probabilistic learning and variational inference, with a plethora of models available. In this section we give a brief overview of the key methods (in order of increasing complexity). For a more in-depth review, see Papamakarios et al. (2019).
Compositions of simple invertible functions
Since any composition of bijectors is also a bijector, it is possible to construct flexible parametric bijectors by composing even simple invertible functions, such as affine transformations, exponentials, sigmoids, hyperbolic functions, etc.
Some such transformations applied to a base density lead to already well-studied distribution families, such as the “sinh-arcsinh” distributions (Jones & Pewsey, 2009), which are generated by applying a bijector of the form f(x) = sinh(τ(arcsinh(x) + s)) (with skewness parameter s and tail-weight parameter τ) to a base normal random variate. Chaining a number of such transformations together, interleaved with linear transformations of the parameter space, can already lead to rather expressive families of distributions.
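A short sketch of the sinh-arcsinh transformation (using one common parameterisation; the particular parameter values are illustrative) shows how a positive skewness parameter deforms normal base samples into a right-skewed family:

```python
import numpy as np

def sinh_arcsinh(x, s=0.5, tau=1.5):
    """Sinh-arcsinh bijector (Jones & Pewsey 2009, one common
    parameterisation): skewness s shifts the density; tail-weight
    tau > 1 fattens the tails."""
    return np.sinh(tau * (np.arcsinh(x) + s))

rng = np.random.default_rng(1)
x = rng.standard_normal(100000)   # base normal variates
y = sinh_arcsinh(x)               # transformed samples

# Positive s induces right skew in the transformed distribution.
skew = np.mean((y - y.mean())**3) / np.std(y)**3
```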
More sophisticated bijector models typically involve constructing normalizing flows parameterized by neural networks. Of these neural network based models, Inverse Autoregressive Flows (IAFs; Kingma et al., 2016) have been demonstrated to be effective at representing complex densities, are fast to train, and are fast to evaluate once trained. IAFs define a series of bijectors composed into a normalizing flow, as follows:

z^(0) = n ∼ N(0, 1),
z^(1) = μ^(1)[z^(0)] + σ^(1)[z^(0)] ⊙ z^(0),
⋮
y = z^(n) = μ^(n)[z^(n-1)] + σ^(n)[z^(n-1)] ⊙ z^(n-1),

where the shifts μ and scales σ of the successive affine transforms are autoregressive, i.e., μ^(k)_i = μ^(k)_i[z^(k-1)_1:i-1; w] and σ^(k)_i = σ^(k)_i[z^(k-1)_1:i-1; w], and are parameterized by neural networks, where w denotes the weights and biases of the network(s). For more details see Papamakarios et al. (2017).
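A single IAF-style affine step can be sketched in a few lines. Here the autoregressive "networks" are stand-in hand-written functions of z_{1:i-1} (a trained flow would replace them with masked neural networks), which is enough to show the key property: the Jacobian is triangular, so its log-determinant is just the sum of the log scales.

```python
import numpy as np

def iaf_step(z):
    """One IAF affine step in 2D with hypothetical hand-rolled
    autoregressive functions: mu_i and sigma_i depend only on
    z_{1:i-1}."""
    mu = np.array([0.3, 0.1 * z[0]])               # mu_1 const, mu_2 = mu_2(z_1)
    sigma = np.array([1.2, np.exp(0.5 * z[0])])    # sigma_1 const, sigma_2 = sigma_2(z_1)
    z_new = mu + sigma * z
    # Autoregressive structure => triangular Jacobian => cheap log-det.
    log_det_jac = np.sum(np.log(sigma))
    return z_new, log_det_jac

z0 = np.array([0.5, -1.0])
z1, ldj = iaf_step(z0)
```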
Continuous normalizing flows
In order to take a further step forward in expressivity, one can replace the discrete set of transforms with an integral of continuous-time dynamics (Chen et al., 2018; Grathwohl et al., 2018), i.e., defining the bijector as the solution to an initial value problem:

z(0) = x,
y = z(t) = z(0) + ∫_0^t f(z, t; w) dt,

where the dynamics f are parameterized by a neural network with weights w. The elevation from discrete to continuous normalizing flows comes with a significant increase in expressivity, making such models appropriate for the most complex bijector representation tasks. See Chen et al. (2018) and Grathwohl et al. (2018) for more details.
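The initial value problem above can be sketched with an ODE solver. Here the neural network dynamics are replaced by a hypothetical linear vector field dz/dt = w·z, whose flow has the known closed form z(t) = z(0)·e^{wt}, so the numerical bijector can be checked against the analytic map:

```python
import numpy as np
from scipy.integrate import solve_ivp

w = 0.7  # stand-in for neural network weights

def dynamics(t, z):
    """Toy continuous-flow dynamics f(z, t; w) = w * z."""
    return w * z

# The bijector y = z(1) is the ODE solution at t = 1 starting from z(0).
z0 = np.array([1.5])
sol = solve_ivp(dynamics, (0.0, 1.0), z0, rtol=1e-8)
y = sol.y[0, -1]

# For linear dynamics the flow is analytic: z(1) = z(0) * exp(w).
assert np.isclose(y, z0[0] * np.exp(w), rtol=1e-5)
```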
3.2 Fitting bijectors to target densities
Fitting a bijector model to the target can be achieved by minimizing the Kullback-Leibler (KL) divergence between the target and the (model) bijective density, with respect to the model parameters w. Since the KL divergence will not be analytically tractable in general, one can instead minimize a Monte Carlo estimate of the integral given N samples {θ_i} from the target prior π(θ):

L(w) = KL̂(w) = −(1/N) ∑_{i=1}^N ln π*(θ_i; w) + const.,

where π*(θ; w) is the probability density corresponding to the parametric bijector model, with parameters w, and the constant is independent of w.
Alternatively, when one has access to not only samples from the target prior but also the values of the density at those samples, it can be advantageous to regress the model to the target density using the L2 norm, i.e., minimize the loss function

L(w) = ∑_{i=1}^N ‖ln π(θ_i) − ln π*(θ_i; w)‖²_2.

This has been shown to be less noisy than minimizing the (sampled) KL divergence in certain cases (Seljak & Yu, 2019), and can be readily extended to exploit gradient information about the target density if it is available (see Seljak & Yu, 2019 for details).
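The KL-based fit amounts to maximum likelihood on the prior samples. A minimal sketch (with a deliberately simple affine bijector θ = μ + σu on a normal base, standing in for the flexible flow models above) fits the bijector parameters by minimizing the negative mean log model density:

```python
import numpy as np
from scipy import optimize, stats

# Stand-in "samples from the target prior" (hypothetical target).
rng = np.random.default_rng(2)
samples = rng.normal(3.0, 0.5, size=5000)

def loss(w):
    """Monte Carlo KL estimate (up to a constant): the negative mean
    log density of the affine-bijector model pi*(theta; w)."""
    mu, log_sigma = w
    return -np.mean(stats.norm.logpdf(samples, loc=mu, scale=np.exp(log_sigma)))

res = optimize.minimize(loss, x0=[0.0, 0.0])
mu_fit, sigma_fit = res.x[0], np.exp(res.x[1])
```

For this toy target the fitted bijector parameters recover the location and scale of the sampled prior; a flow model would be trained the same way, with w the network weights.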
4 Cosmological posteriors as bijectors
As a concrete and non-trivial cosmological example we choose the concordance ΛCDM model of the universe (Scott, 2018) with an additional spatial curvature parameter Ω_K. Such models have been debated recently in the literature (Handley, 2019a; Di Valentino et al., 2019; Efstathiou & Gratton, 2020). We choose this model not for its controversy, but merely for the fact that it yields non-trivial banana-like priors and posteriors, and therefore provides a more challenging cosmological example for a bijector to learn than models without curvature.
For likelihoods we choose three datasets: (1) the Planck baseline TT,TE,EE+lowE likelihood, without lensing (Planck Collaboration, 2020a,b; hereafter CMB); (2) Baryon Acoustic Oscillation and Redshift Space Distortion measurements from the Baryon Oscillation Spectroscopic Survey (BOSS) DR12 (Alam et al., 2017; Beutler et al., 2011; Ross et al., 2015; hereafter BAO); and (3) local cosmic distance ladder measurements of the expansion rate, using type Ia SNe calibrated by variable Cepheid stars from the SH0ES collaboration (SH0ES Collaboration, 2018; hereafter SH0ES).
The bijector code is implemented as an extension to CosmoChord, using forpy (https://github.com/ylikx/forpy) to call TensorFlow bijectors from Fortran. All the nested sampling runs, code, and details for reproducing the results in this letter can be found on Zenodo (Handley & Alsing, 2020).
Figure 2 shows our key results for CMB and BAO data. In this case we compare the three routes to a combined CMB and BAO constraint on a ΛCDM universe. The simplest approach is to run nested sampling with the default CosmoMC prior with both likelihoods turned on (CMB+BAO). The bijector approach instead first runs nested sampling on the CMB dataset alone (CMB), then uses the posterior samples from this run to train a bijector, and finally uses the resulting distribution as the prior for the Bayesian update with the next dataset (CMB→BAO). In Figure 2 we show that doing this in either order recovers the same posterior.
One critical advantage of this approach is that, since much of the cost of a nested sampling run lies in compressing from prior to posterior, whilst the first run (CMB) is expensive, the second update (CMB→BAO) requires considerably fewer likelihood calls. Furthermore, since the journey from prior to posterior is shorter, less Poisson error accumulates over nested sampling iterations, making for more accurate posterior and evidence estimates. For expensive models like those involving cosmic curvature, this is a significant advantage.
It should be noted that, as CMB and BAO are in mild tension, it has been argued that these combined constraints should be viewed with some scepticism (Handley, 2019a; Vagnozzi et al., 2020), and this tension is much stronger between CMB and SH0ES data (Verde et al., 2019). Whilst this is a general cause for simultaneous concern and excitement scientifically speaking, in the context of this work it also serves to highlight the expressivity of these bijector models when combined with nested sampling chains. The tail behaviour of the bijective transformations is sufficiently powerful to recover the correct answer even when combining likelihoods that are in strong tension. Figure 1 demonstrates that the evidence estimates extracted from these runs with anesthetic (Handley, 2019b) are also consistent to within the nested sampling errors.
5 Conclusions

Nested sampling implementations have been burdened by the practical limitation that priors must be specified as bijective transforms from the unit hyper-cube, and in many cases such a representation is not readily available. We have shown that nested sampling priors can be constructed by fitting flexible parametric bijectors to samples from the target prior, providing a general-purpose approach for constructing generic priors for nested sampling. This is of particular importance for use cases where an interim posterior from one experiment is to be used as the prior for another; a parametric bijector can then simply be trained on MCMC samples from the interim posterior. We have demonstrated the use of parametric bijectors on typical use cases from cosmology.
In the longer term, we plan to release a library of cosmological bijectors for use as nested sampling priors in order to save users computational resources and standardise cosmological prior choice.
Justin Alsing was supported by the research project grant “Fundamental Physics from Cosmological Surveys" funded by the Swedish Research Council (VR) under Dnr 2017-04212. Will Handley was supported by a Gonville & Caius Research Fellowship, STFC grant number ST/T001054/1 and a Royal Society University Research Fellowship.
References

- Alam et al. (2017) Alam S., et al., 2017, MNRAS, 470, 2617
- Beutler et al. (2011) Beutler F., et al., 2011, MNRAS, 416, 3017
- Chen et al. (2018) Chen R. T., Rubanova Y., Bettencourt J., Duvenaud D. K., 2018, in Advances in neural information processing systems. pp 6571–6583
- Di Valentino et al. (2019) Di Valentino E., Melchiorri A., Silk J., 2019, Nature Astronomy, 4, 196–203
- Efstathiou & Gratton (2020) Efstathiou G., Gratton S., 2020, Monthly Notices of the Royal Astronomical Society: Letters, 496, L91–L95
- Feroz et al. (2009) Feroz F., Hobson M. P., Bridges M., 2009, Monthly Notices of the Royal Astronomical Society, 398, 1601–1614
- Grathwohl et al. (2018) Grathwohl W., Chen R. T., Bettencourt J., Sutskever I., Duvenaud D., 2018, arXiv preprint arXiv:1810.01367
- Handley (2019a) Handley W., 2019a, Curvature tension: evidence for a closed universe (arXiv:1908.09139)
- Handley (2019b) Handley W., 2019b, Journal of Open Source Software, 4, 1414
- Handley & Alsing (2020) Handley W., Alsing J., 2020, Nested sampling with any prior you like (Supplementary inference products), doi:10.5281/zenodo.4247552, https://doi.org/10.5281/zenodo.4247552
- Handley et al. (2015a) Handley W. J., Hobson M. P., Lasenby A. N., 2015a, Monthly Notices of the Royal Astronomical Society: Letters, 450, L61–L65
- Handley et al. (2015b) Handley W. J., Hobson M. P., Lasenby A. N., 2015b, Monthly Notices of the Royal Astronomical Society, 453, 4385–4399
- Jones & Pewsey (2009) Jones M. C., Pewsey A., 2009, Biometrika, 96, 761
- Kingma et al. (2016) Kingma D. P., Salimans T., Jozefowicz R., Chen X., Sutskever I., Welling M., 2016, in Advances in neural information processing systems. pp 4743–4751
- Moss (2020) Moss A., 2020, MNRAS, 496, 328
- Papamakarios et al. (2017) Papamakarios G., Pavlakou T., Murray I., 2017, arXiv preprint arXiv:1705.07057
- Papamakarios et al. (2019) Papamakarios G., Nalisnick E., Rezende D. J., Mohamed S., Lakshminarayanan B., 2019, arXiv preprint arXiv:1912.02762
- Planck Collaboration (2020a) Planck Collaboration 2020a, A&A, 641, A5
- Planck Collaboration (2020b) Planck Collaboration 2020b, A&A, 641, A6
- Ross et al. (2015) Ross A. J., Samushia L., Howlett C., Percival W. J., Burden A., Manera M., 2015, MNRAS, 449, 835
- SH0ES Collaboration (2018) SH0ES Collaboration, 2018, ApJ, 855, 136
- Scott (2018) Scott D., 2018, The Standard Model of Cosmology: A Skeptic’s Guide (arXiv:1804.01318)
- Seljak & Yu (2019) Seljak U., Yu B., 2019, arXiv preprint arXiv:1901.04454
- Skilling (2006) Skilling J., 2006, Bayesian Anal., 1, 833
- Vagnozzi et al. (2020) Vagnozzi S., Di Valentino E., Gariazzo S., Melchiorri A., Mena O., Silk J., 2020, arXiv e-prints, p. arXiv:2010.02230
- Verde et al. (2019) Verde L., Treu T., Riess A. G., 2019, arXiv e-prints, p. arXiv:1907.10625
- Williams et al. (2021) Williams M. J., Veitch J., Messenger C., 2021, arXiv e-prints, p. arXiv:2102.11056