stein_discrepancy
Code to compute the Stein discrepancy between a sample distribution and its target
To improve the efficiency of Monte Carlo estimation, practitioners are turning to biased Markov chain Monte Carlo procedures that trade off asymptotic exactness for computational speed. The reasoning is sound: a reduction in variance due to more rapid sampling can outweigh the bias introduced. However, the inexactness creates new challenges for sampler and parameter selection, since standard measures of sample quality like effective sample size do not account for asymptotic bias. To address these challenges, we introduce a new computable quality measure based on Stein's method that quantifies the maximum discrepancy between sample and target expectations over a large class of test functions. We use our tool to compare exact, biased, and deterministic sample sequences and illustrate applications to hyperparameter selection, convergence rate assessment, and quantifying bias-variance tradeoffs in posterior inference.
When faced with a complex target distribution, one often turns to Markov chain Monte Carlo (MCMC) [1] to approximate intractable expectations with asymptotically exact sample estimates. These complex targets commonly arise as posterior distributions in Bayesian inference and as candidate distributions in maximum likelihood estimation [2]. In recent years, researchers [e.g., 3, 4, 5] have introduced asymptotic bias into MCMC procedures to trade off asymptotic correctness for improved sampling speed. The rationale is that more rapid sampling can reduce the variance of a Monte Carlo estimate and hence outweigh the bias introduced. However, the added flexibility introduces new challenges for sampler and parameter selection, since standard sample quality measures, like effective sample size, asymptotic variance, trace and mean plots, and pooled and within-chain variance diagnostics, presume eventual convergence to the target [1] and hence do not account for asymptotic bias.

To address this shortcoming, we develop a new measure of sample quality suitable for comparing asymptotically exact, asymptotically biased, and even deterministic sample sequences. The quality measure is based on Stein's method and is attainable by solving a linear program. After outlining our design criteria in Section
2, we relate the convergence of the quality measure to that of standard probability metrics in Section 3, develop a streamlined implementation based on geometric spanners in Section 4, and illustrate applications to hyperparameter selection, convergence rate assessment, and the quantification of bias-variance tradeoffs in posterior inference in Section 5. We discuss related work in Section 6 and defer all proofs to the appendix.

Notation. We denote the 1, 2, and infinity norms on R^d by ‖·‖₁, ‖·‖₂, and ‖·‖∞, respectively. We will often refer to a generic norm ‖·‖ on R^d with associated dual norms for vectors, matrices, and tensors. We denote the j-th standard basis vector by e_j, the j-th partial derivative by ∇_j, and the gradient of any R^d-valued function g by ∇g.

Consider a target distribution P with open convex support X ⊆ R^d and continuously differentiable density p. We assume that p is known up to its normalizing constant and that exact integration under P is intractable for most functions of interest. We will approximate expectations under P with the aid of a weighted sample, a collection of distinct sample points x_1, ..., x_n ∈ X with weights encoded in a probability mass function q. The probability mass function induces a discrete distribution Q and an approximation E_Q[h] = Σ_{i=1}^n q(x_i) h(x_i) for any target expectation E_P[h]. We make no assumption about the provenance of the sample points; they may arise as random draws from a Markov chain or even be deterministically selected.
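As a concrete illustration of this setup, the short sketch below forms a discrete distribution from weighted sample points and approximates a target expectation. The points and weights here are invented for demonstration and do not come from the paper's experiments.

```python
import numpy as np

# Hypothetical weighted sample: distinct points x_i with probability masses q(x_i).
points = np.array([-1.5, -0.5, 0.0, 0.5, 1.5])
weights = np.array([0.1, 0.2, 0.4, 0.2, 0.1])  # sums to 1

def expectation(h, points, weights):
    """E_Q[h] = sum_i q(x_i) h(x_i) for the discrete measure Q."""
    return np.dot(weights, h(points))

# Approximate the second moment of the target using the weighted sample.
print(expectation(np.square, points, weights))
```

The same routine works whether the points came from a Markov chain or were deterministically selected; only `points` and `weights` change.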
Our goal is to compare the fidelity of different samples approximating a common target distribution. That is, we seek to quantify the discrepancy between Q and P in a manner that (i) detects when a sequence of samples is converging to the target, (ii) detects when a sequence of samples is not converging to the target, and (iii) is computationally feasible. We begin by considering the maximum deviation between sample and target expectations over a class H of real-valued test functions,

d_H(Q, P) = sup_{h ∈ H} |E_Q[h] − E_P[h]|.    (1)
When the class of test functions is sufficiently large, the convergence of d_H(Q_m, P) to zero implies that the sequence of sample measures (Q_m) converges weakly to P. In this case, the expression (1) is termed an integral probability metric (IPM) [6]. By varying the class of test functions H, we can recover many well-known probability metrics as IPMs, including the total variation distance, generated by the functions bounded in magnitude by 1, and the Wasserstein distance (also known as the Kantorovich-Rubinstein or earth mover's distance) d_W, generated by the 1-Lipschitz functions.
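For intuition about the Wasserstein IPM, recall the standard identity that the univariate Wasserstein distance between two equal-size empirical distributions is the mean absolute difference of the sorted samples. The sketch below (a generic illustration, not code from the paper) uses it to compare a sample against a shifted copy of itself:

```python
import numpy as np

def wasserstein_1d(xs, ys):
    """W1 distance between two equal-size empirical distributions on the line:
    the average absolute difference of the sorted samples."""
    return np.mean(np.abs(np.sort(xs) - np.sort(ys)))

rng = np.random.default_rng(0)
target_draws = rng.standard_normal(5000)
shifted_draws = target_draws + 1.0  # a "biased" sample

print(wasserstein_1d(target_draws, target_draws))   # 0.0
print(wasserstein_1d(target_draws, shifted_draws))  # ~1.0, the size of the shift
```

A constant shift moves every quantile by the same amount, so the distance recovers the shift exactly.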
The primary impediment to adopting an IPM as a sample quality measure is that exact computation is typically infeasible when generic integration under P is intractable. However, we could skirt this intractability by focusing on classes of test functions with known expectation under P. For example, if we consider only test functions h with E_P[h] = 0, then the IPM value is the solution of an optimization problem depending on Q alone. This, at a high level, is our strategy, but many questions remain. How do we select the class of test functions H? How do we know that the resulting IPM will track convergence and non-convergence of a sample sequence (Desiderata (i) and (ii))? How do we solve the resulting optimization problem in practice (Desideratum (iii))? To address the first two of these questions, we draw upon tools from Charles Stein's method of characterizing distributional convergence. We return to the third question in Section 4.
Stein’s method [7] for characterizing convergence in distribution classically proceeds in three steps:
Identify a real-valued operator T acting on a set G of R^d-valued functions of x (scalar functions are more common in Stein's method, but we will find R^d-valued functions more convenient) for which

E_P[(T g)(X)] = 0 for every g ∈ G.    (2)

Together, T and G define the Stein discrepancy

S(Q, T, G) := sup_{g ∈ G} |E_Q[(T g)(X)]|,

an IPM-type quality measure with no explicit integration under P.
Lower bound the Stein discrepancy by a familiar convergence-determining IPM d_H. This step can be performed once, in advance, for large classes of target distributions and ensures that, for any sequence of probability measures (μ_m), S(μ_m, T, G) converges to zero only if d_H(μ_m, P) does (Desideratum (ii)).
Upper bound the Stein discrepancy by any means necessary to demonstrate convergence to zero under suitable conditions (Desideratum (i)). In our case, the universal bound established in Section 3.3 will suffice.
While Stein’s method is typically employed as an analytical tool, we view the Stein discrepancy as a promising candidate for a practical sample quality measure. Indeed, in Section 4, we will adopt an optimization perspective and develop efficient procedures to compute the Stein discrepancy for any sample measure Q and appropriate choices of T and G. First, we assess the convergence properties of an equivalent Stein discrepancy in the subsections to follow.
The generator method of Barbour [8] provides a convenient and general means of constructing operators which produce mean-zero functions in the sense of (2). Let (Z_t)_{t≥0} represent a Markov process with unique stationary distribution P. Then the infinitesimal generator A of (Z_t)_{t≥0}, defined by

(A u)(x) = lim_{t→0} (E[u(Z_t) | Z_0 = x] − u(x)) / t,

satisfies E_P[(A u)(X)] = 0 under mild conditions on A and u. Hence, a candidate operator T can be constructed from any infinitesimal generator.
For example, the overdamped Langevin diffusion, defined by the stochastic differential equation dZ_t = (1/2) ∇ log p(Z_t) dt + dW_t for (W_t)_{t≥0} a Wiener process, gives rise to the generator

(A_P u)(x) = (1/2) ⟨∇u(x), ∇ log p(x)⟩ + (1/2) ⟨∇, ∇u(x)⟩.    (3)

After substituting g for ∇u/2, we obtain the associated Stein operator

(T_P g)(x) = ⟨g(x), ∇ log p(x)⟩ + ⟨∇, g(x)⟩.    (4)

(The operator T_P has also found fruitful application in the design of Monte Carlo control variates [9].)
The Stein operator T_P is particularly well-suited to our setting, as it depends on P only through the derivative of its log density, ∇ log p, and hence is computable even when the normalizing constant of p is not.
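To see the mean-zero property (2) in action, the sketch below Monte-Carlo-checks E_P[(T_P g)(X)] ≈ 0 for a standard Gaussian target, where ∇ log p(x) = −x and, in one dimension, the Langevin Stein operator reduces to (T_P g)(x) = g(x) ∇ log p(x) + g′(x). The test function g = sin is an arbitrary smooth, bounded choice, not one from the paper:

```python
import numpy as np

def stein_op(g, dg, x):
    """One-dimensional Langevin Stein operator for a standard Gaussian target:
    (T g)(x) = g(x) * grad_log_p(x) + g'(x), with grad_log_p(x) = -x."""
    return g(x) * (-x) + dg(x)

rng = np.random.default_rng(1)
x = rng.standard_normal(200_000)  # draws from the target P = N(0, 1)

# By Stein's identity E[X f(X)] = E[f'(X)], the sample mean should be near zero.
val = np.mean(stein_op(np.sin, np.cos, x))
print(val)  # close to 0
```

Replacing `-x` with the score of any unnormalized density gives the same check for other targets; the normalizing constant never appears.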
If we let ∂X denote the boundary of X (an empty set when X = R^d) and n(x) represent the outward unit normal vector to the boundary at x, then we may define the classical Stein set G of sufficiently smooth functions satisfying a Neumann-type boundary condition: functions g bounded in norm by 1, with gradients bounded by 1 and 1-Lipschitz, and with ⟨g(x), n(x)⟩ = 0 for all x ∈ ∂X. The following proposition – a consequence of integration by parts – shows that G is a suitable domain for T_P.
If E_P[‖∇ log p(X)‖] < ∞, then E_P[(T_P g)(X)] = 0 for all g ∈ G.
Together, T_P and G form the classical Stein discrepancy S(Q, T_P, G), our chief object of study.
In the univariate setting (d = 1), it is known for a wide variety of targets that the classical Stein discrepancy converges to zero only if the Wasserstein distance does [Chen et al. (2010), 10]. In the multivariate setting, analogous statements are available for multivariate Gaussian targets [11, 12, 13], but few other target distributions have been analyzed. To extend the reach of the multivariate literature, we show in Theorem 2
that the classical Stein discrepancy also determines Wasserstein convergence for a large class of strongly log-concave densities, including the Bayesian logistic regression posterior under Gaussian priors.
If X = R^d and log p is strongly concave with third and fourth derivatives bounded and continuous, then, for any probability measures (μ_m), S(μ_m, T_P, G) → 0 only if d_W(μ_m, P) → 0.
We emphasize that the sufficient conditions in Theorem 2 are certainly not necessary for lower bounding the classical Stein discrepancy. We hope that the theorem and its proof will provide a template for lower bounding the Stein discrepancy for other large classes of multivariate target distributions.
We next establish sufficient conditions for the convergence of the classical Stein discrepancy to zero.
If and with integrable,
One implication of Proposition 3 is that converges to zero whenever converges in mean-square to and converges in mean to .
The analyses and algorithms in this paper readily accommodate non-uniform Stein sets of the form
G_c := { g : ‖g‖∞ ≤ c₁, ‖∇g‖∞ ≤ c₂, and ∇g is c₃-Lipschitz }    (5)
for constants c₁, c₂, c₃ > 0 known as Stein factors in the literature. We will exploit this additional flexibility in Section 5.2 to establish tight lower-bounding relations between the Stein discrepancy and Wasserstein distance for well-studied target distributions. For general use, however, we advocate the parameter-free classical Stein set and graph Stein sets to be introduced in the sequel. Indeed, any non-uniform Stein discrepancy is equivalent to the classical Stein discrepancy in a strong sense:
For any Stein factors c₁, c₂, c₃ > 0, the resulting non-uniform Stein discrepancy upper and lower bounds the classical Stein discrepancy up to constants depending only on the Stein factors.
In this section, we introduce an efficiently computable Stein discrepancy with convergence properties equivalent to those of the classical discrepancy. We restrict attention to the unconstrained domain in Sections 4.1-4.3 and present extensions for constrained domains in Section 4.4.
Evaluating a Stein discrepancy for a fixed pair (T, G) reduces to solving an optimization program over functions g. For example, the classical Stein discrepancy is the optimum

maximize_g  Σ_{i=1}^n q(x_i) (⟨g(x_i), ∇ log p(x_i)⟩ + ⟨∇, g(x_i)⟩)   s.t.  g ∈ G.    (6)
Note that the objective associated with any Stein discrepancy is linear in g and, since Q is discrete, only depends on g and ∇g through their values at each of the sample points x_i. The primary difficulty in solving the classical Stein program (6) stems from the infinitude of constraints imposed by the classical Stein set. One way to avoid this difficulty is to impose the classical smoothness constraints at only a finite collection of points. To this end, for each finite graph with vertices V and edges E, we define the graph Stein set,
the family of functions which satisfy the classical constraints and certain implied Taylor compatibility constraints at pairs of points connected in E. Remarkably, if the graph consists of edges between all pairs of distinct sample points, then the associated complete graph Stein discrepancy is equivalent to the classical Stein discrepancy in the following strong sense.
If , and with , then
where C is a constant, independent of the sample, depending only on the dimension d and the norm ‖·‖.
Proposition 5 follows from the Whitney-Glaeser extension theorem for smooth functions [14, 15] and implies that the complete graph Stein discrepancy inherits all of the desirable convergence properties of the classical discrepancy. However, the complete graph also introduces order n² constraints, rendering computation infeasible for large samples. To achieve the same form of equivalence while enforcing only a linear number of constraints, we will make use of sparse geometric spanner subgraphs.
For a given dilation factor t ≥ 1, a t-spanner [16, 17] is a graph with weight ‖x_u − x_v‖ on each edge (u, v) ∈ E and, between each pair of distinct sample points x_i and x_j, a path with total weight no larger than t ‖x_i − x_j‖. The next proposition shows that spanner Stein discrepancies enjoy the same convergence properties as the complete graph Stein discrepancy.
If , is a -spanner, and , then
Moreover, for any norm, a 2-spanner with a number of edges linear in n can be computed efficiently, with constants depending only on the dimension d and the norm ‖·‖ [18]. As a result, we will adopt a 2-spanner Stein discrepancy as our standard quality measure.
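To make the spanner definition concrete, here is a minimal greedy t-spanner construction. This quadratic-time version is only illustrative; the efficient construction cited above is far more sophisticated. It visits candidate edges in order of increasing length and adds an edge only when the current spanner distance between its endpoints exceeds t times the true distance:

```python
import heapq
from itertools import combinations

import numpy as np

def greedy_spanner(points, t=2.0):
    """Greedy t-spanner over Euclidean distances; returns an adjacency list."""
    n = len(points)
    pairs = sorted(combinations(range(n), 2),
                   key=lambda e: np.linalg.norm(points[e[0]] - points[e[1]]))
    adj = [[] for _ in range(n)]

    def spanner_dist(src, dst):
        # Dijkstra on the partial spanner built so far.
        dist = {src: 0.0}
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == dst:
                return d
            if d > dist.get(u, float("inf")):
                continue
            for v, w in adj[u]:
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return float("inf")

    for u, v in pairs:
        w = float(np.linalg.norm(points[u] - points[v]))
        if spanner_dist(u, v) > t * w:
            adj[u].append((v, w))
            adj[v].append((u, w))
    return adj

rng = np.random.default_rng(2)
pts = rng.standard_normal((40, 2))
adj = greedy_spanner(pts, t=2.0)
n_edges = sum(len(a) for a in adj) // 2
print(n_edges)  # typically far fewer than the 780 complete-graph edges
```

Because pairs are processed in increasing order of length and edges are only ever added, every pair ends up with a path of weight at most t times its distance.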
The final unspecified component of our Stein discrepancy is the choice of norm ‖·‖. We recommend a choice under which the resulting optimization problem decouples into d independent finite-dimensional linear programs (LPs) that can be solved in parallel; the spanner Stein discrepancy then equals the sum of the optimal values of the d coordinatewise LPs (7). We have arbitrarily numbered the elements of the vertex set so that, in each coordinate LP, one block of variables represents the function values g(x_i) and another represents the gradient values ∇g(x_i).
A small modification to the unconstrained formulation (7) extends our tractable Stein discrepancy computation to any domain defined by coordinate boundary constraints, that is, to a product of coordinate intervals. Specifically, for each dimension j, we augment the j-th coordinate linear program of (7) with the boundary compatibility constraints (8).
These additional constraints ensure that our candidate function and gradient values can be extended to a smooth function satisfying the boundary conditions on . Proposition 10 in the appendix shows that the spanner Stein discrepancy so computed is strongly equivalent to the classical Stein discrepancy on .
Algorithm 1 summarizes the complete solution for computing our recommended, parameter-free spanner Stein discrepancy in the multivariate setting. Notably, the spanner step is unnecessary in the univariate setting, as the complete graph Stein discrepancy can be computed directly by sorting the sample and boundary points and only enforcing constraints between consecutive points in this ordering. Thus, the complete graph Stein discrepancy is our recommended quality measure when d = 1, and a recipe for its computation is given in Algorithm 2.
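The univariate construction can be sketched as a linear program. The version below is a simplified illustration, not Algorithm 2 itself: it uses unit Stein factors, enforces derivative-Lipschitz and Taylor-type compatibility constraints only between consecutive sorted points, and delegates the LP to scipy. The shifted-sample comparison at the end is an invented sanity check:

```python
import numpy as np
from scipy.optimize import linprog

def graph_stein_discrepancy_1d(x, q, grad_log_p):
    """Sketch of a univariate graph Stein discrepancy. Variables are the
    values g_i and derivatives h_i at the sorted points; the objective is
    sum_i q_i (g_i * grad_log_p(x_i) + h_i), maximized subject to
    boundedness and consecutive-point smoothness constraints."""
    order = np.argsort(x)
    x, q = x[order], q[order]
    n = len(x)
    b = grad_log_p(x)
    c = -np.concatenate([q * b, q])  # linprog minimizes, so negate

    rows, rhs = [], []
    def add_abs(coefs, bound):  # encode |expr| <= bound as two inequalities
        rows.append(coefs)
        rhs.append(bound)
        rows.append([-a for a in coefs])
        rhs.append(bound)

    for i in range(n - 1):
        d = x[i + 1] - x[i]
        r = np.zeros(2 * n); r[n + i + 1], r[n + i] = 1, -1
        add_abs(r.tolist(), d)            # |h_{i+1} - h_i| <= d
        r = np.zeros(2 * n); r[i + 1], r[i], r[n + i] = 1, -1, -d
        add_abs(r.tolist(), d * d / 2)    # Taylor compatibility at x_i
        r = np.zeros(2 * n); r[i], r[i + 1], r[n + i + 1] = 1, -1, d
        add_abs(r.tolist(), d * d / 2)    # Taylor compatibility at x_{i+1}

    res = linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs),
                  bounds=[(-1, 1)] * (2 * n))
    return -res.fun

rng = np.random.default_rng(3)
x = rng.standard_normal(100)
q = np.full(100, 0.01)
good = graph_stein_discrepancy_1d(x, q, lambda t: -t)       # N(0,1) target
bad = graph_stein_discrepancy_1d(x + 2.0, q, lambda t: -t)  # shifted sample
print(good, bad)  # the shifted sample scores a much larger discrepancy
```

Only the score function `grad_log_p` enters, so an unnormalized density suffices, in keeping with the design of the Stein operator.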
We now turn to an empirical evaluation of our proposed quality measures. We compute all spanners using the efficient C++ greedy spanner implementation of Bouts et al. [19] and solve all optimization programs using Julia for Mathematical Programming [20] with the default Gurobi 6.0.4 solver [21]. All reported timings are obtained using a single core of an Intel Xeon CPU E5-2650 v2 @ 2.60GHz.
We begin with a simple example to illuminate a few properties of the Stein diagnostic. For a standard Gaussian target, we generate a sequence of sample points i.i.d. from the target and a second sequence i.i.d. from a scaled Student's t distribution with matching variance and 10 degrees of freedom. The left panel of Figure 1 shows that the complete graph Stein discrepancy applied to the first n Gaussian sample points decays to zero as n grows, while the discrepancy applied to the scaled Student's t sample remains bounded away from zero. The middle panel displays optimal Stein functions g recovered by the Stein program for different sample sizes. Each yields a test function h = T_P g, featured in the right panel, that best discriminates the sample from the target. Notably, the Student's t test functions exhibit relatively large magnitude values in the tails of the support.
We show in Theorem 9 in the appendix that, when d = 1, the classical Stein discrepancy is the optimum of a convex quadratically constrained quadratic program with a linear objective and order n variables and constraints. This offers the opportunity to directly compare the behavior of the graph and classical Stein discrepancies. We will also compare to the Wasserstein distance d_W, which is computable for simple univariate target distributions [22] and provably lower bounds the non-uniform Stein discrepancies (5) for appropriate Stein factors [Chen et al. (2010), 23]. For two univariate targets and several random number generator seeds, we generate a sequence of sample points i.i.d. from the target distribution and plot the non-uniform classical and complete graph Stein discrepancies and the Wasserstein distance as functions of the first n sample points in Figure 2. Two apparent trends are that the graph Stein discrepancy very closely approximates the classical and that both Stein discrepancies track the fluctuations in Wasserstein distance even when a magnitude separation exists. In one case, the Wasserstein distance in fact equals the classical Stein discrepancy because the optimal discriminating function is Lipschitz.
Stochastic Gradient Langevin Dynamics (SGLD) [3] with constant step size ε is a biased MCMC procedure designed for scalable inference. It approximates the overdamped Langevin diffusion, but, because no Metropolis-Hastings (MH) correction is used, the stationary distribution of SGLD deviates increasingly from its target as ε grows. If ε is too small, however, SGLD explores the sample space too slowly. Hence, an appropriate choice of ε is critical for accurate posterior inference. To illustrate the value of the Stein diagnostic for this task, we adopt the bimodal Gaussian mixture model (GMM) posterior of [3] as our target. For a range of step sizes ε, we use SGLD with minibatch size 5 to draw 50 independent sequences, and we select the value of ε with the highest median quality – either the maximum effective sample size (ESS, a standard diagnostic based on autocorrelation [1]) or the minimum spanner Stein discrepancy – across these sequences. As seen in Figure 2(a), ESS, which does not detect distributional bias, selects the largest step size presented to it, while the Stein discrepancy prefers an intermediate value. The rightmost plot of Figure 2(b) shows that a representative SGLD sample drawn with the step size selected by ESS is greatly overdispersed; the leftmost is greatly underdispersed due to slow mixing. The middle sample, with ε selected by the Stein diagnostic, most closely resembles the true posterior.

The approximate random walk MH (ARWMH) sampler [5] is a second biased MCMC procedure designed for scalable posterior inference. Its tolerance parameter controls the number of datapoint likelihood evaluations used to approximate the standard MH correction step. Qualitatively, a larger tolerance implies fewer likelihood computations, more rapid sampling, and a more rapid reduction of variance. A smaller tolerance yields a closer approximation to the MH correction and less bias in the sampler stationary distribution. We will use the Stein discrepancy to explicitly quantify this bias-variance trade-off.
We analyze a dataset of prostate cancer patients with six binary predictors and a binary outcome indicating whether cancer has spread to surrounding lymph nodes [24]. Our target is the Bayesian logistic regression posterior [1] under a Gaussian prior on the parameters. We run RWMH and ARWMH for a fixed budget of likelihood evaluations, discard the points from an initial burn-in period, and thin the remaining points to sequences of equal length. Figure 4 displays the spanner Stein discrepancy applied to the first points in each sequence as a function of the likelihood evaluation count. We see that the approximate sample is of higher Stein quality for smaller computational budgets but is eventually overtaken by the asymptotically exact sequence.
To corroborate our result, we use a Metropolis-adjusted Langevin chain [25] as a surrogate for the target and compute several error measures for each sample: normalized probability error, mean error, and second moment error. The measures, also found in Figure 4, accord with the Stein discrepancy quantification.

The Stein discrepancy can also be used to assess the quality of deterministic sample sequences. In Figure 5 in the appendix, for a univariate target, we plot the complete graph Stein discrepancies of the first n points of an i.i.d. sample, a deterministic Sobol sequence [26], and a deterministic kernel herding sequence [27]. We use the median value over 50 sequences in the i.i.d. case and estimate the convergence rate for each sampler using the slope of the best least squares affine fit to each log-log plot. The recovered rates for the i.i.d. and Sobol sequences accord with expected bounds from the literature [28, 29]. As witnessed also in other metrics [30], the observed herding rate outpaces its best known bound, suggesting an opportunity for sharper analysis.
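The rate-estimation recipe above (the slope of a least squares affine fit on a log-log plot) can be sketched with synthetic discrepancy values of known decay; the values below stand in for computed Stein discrepancies and were invented to have an exact n^{-1/2} rate:

```python
import numpy as np

# Synthetic discrepancy values with a known n^{-1/2} decay.
n = np.array([10, 30, 100, 300, 1000, 3000])
disc = 2.0 * n ** -0.5

# Fit log(disc) = slope * log(n) + intercept; the slope is the rate exponent.
slope, intercept = np.polyfit(np.log(n), np.log(disc), 1)
print(round(slope, 3))  # -0.5
```

Applied to the measured discrepancies, the same two-line fit recovers each sampler's empirical convergence rate.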
We have developed a quality measure suitable for comparing biased, exact, and deterministic sample sequences by exploiting an infinite class of known target functionals. The diagnostics of [31, 32] also account for asymptotic bias but lose discriminating power by considering only a finite collection of functionals. For example, for a target, the score statistic of [32] cannot distinguish two samples with equal first and second moments. Maximum mean discrepancy (MMD) on a characteristic Hilbert space [33] takes full distributional bias into account but is only viable when the expected kernel evaluations are easily computed under the target. One can approximate MMD, but this requires access to a separate trustworthy ground-truth sample from the target.
Our integrability assumption together with the boundedness of and imply that and exist. Define the ball of radius , . Since is convex, the intersection is compact and convex with Lipschitz boundary . Thus, the divergence theorem (integration by parts) implies that
for the outward unit normal vector to . The final quantity in this expression equates to zero, as for all on the boundary , is bounded, and for any with for all and .
We let denote the set of real-valued functions on with continuous derivatives and denote the smooth function distance, the IPM generated by
We additionally define the operator norms for vectors , for matrices , and for tensors .
The following result, proved in the companion paper [34], establishes the existence of explicit constants (Stein factors) , such that, for any test function , the Stein equation
has a solution belonging to the non-uniform Stein set .
Suppose that and that is -strongly concave with
For each x, let (Z_t)_{t≥0} represent the overdamped Langevin diffusion with infinitesimal generator

(A_P u)(x) = (1/2) ⟨∇u(x), ∇ log p(x)⟩ + (1/2) ⟨∇, ∇u(x)⟩    (9)
and initial state . Then, for each with bounded first, second, and third derivatives, the function
solves the Stein equation

h(x) − E_P[h(Z)] = (A_P u_h)(x)    (10)
and satisfies
Hence, by the equivalence of non-uniform Stein discrepancies (Proposition 4), for any probability measure .
The desired result now follows from Lemma 8, which implies that the Wasserstein distance whenever for a sequence of probability measures .
If and are probability measures on , and for all , then
for a standard normal random vector in .
Lemma 2.2 of the companion paper [34] establishes this result for the case ; we omit the proof of the generalization which closely mirrors that of the Euclidean norm case.
Fix any in . By Proposition 1, . The Lipschitz and boundedness constraints on and now yield
To derive the second advertised inequality, we use the definition of the matrix norm, the Fenchel-Young inequality for dual norms, the definition of the matrix dual norm, and the Cauchy-Schwarz inequality in turn:
Since our bounds hold uniformly for all in , the proof is complete.
Fix any , and let and . Since the Stein discrepancy objective is linear in , we have for any . The result now follows from the observation that .
The first inequality follows from the fact that . By the Whitney-Glaeser extension theorem [15, Thm. 1.4] of Glaeser [14], for every function , there exists a function with and for all in the support of . Here is a constant, independent of , depending only on the dimension and norm . Since the Stein discrepancy objective is linear in and depends on only through the values and , we have .
The first inequality follows from the fact that . Fix any and any pair of points . By the definition of , we have . By the -spanner property, there exists a sequence of points with and for which for all and . Since for each , the triangle inequality implies that
Identical reasoning establishes that .
Furthermore, since for each , the triangle inequality and the definition of the tensor norm imply that
Since were arbitrary, and the Stein discrepancy objective is linear in , we conclude that .
If for , and represent the sorted values of , then the non-uniform classical Stein discrepancy is the optimal value of the convex program
(11a)–(11e)
We say the program (11) is finite-dimensional, because it suffices to optimize over vectors representing the function values () and derivative values () at each sample or boundary point . Indeed, by introducing slack variables, this program is representable as a convex quadratically constrained quadratic program with constraints, variables, and a linear objective. Moreover, the pairwise constraints in this program are only enforced between neighboring points in the sequence of ordered locations . Hence the resulting constraint matrix is sparse and banded, making the problem particularly amenable to efficient optimization.
Proof. Throughout, we say that one function is an extension of another if the two functions and their derivatives agree at each sample point. Since the Stein objective only depends on a function and its derivative through their values at sample points, a function and any extension have identical objective values.
We will establish our result by showing that every is feasible for the program (11), so that lower bounds the optimum of (11), and that every feasible for (11) has an extension in , so that also upper bounds the optimum of (11).
Fix any . Also, since is -bounded and -Lipschitz, the constraints (11b) must be satisfied. Consider now the -bounded and -Lipschitz extensions of
We know that for all , for, if not, there would be a point and a point such that , which combined with the -Lipschitz property would be a contradiction. Thus, for each sample , the fundamental theorem of calculus gives
The right-hand side of this inequality evaluates precisely to the right-hand side of the constraint (11c). An analogous upper bound using yields (11d).
Finally, consider any point . If , then (11e) is satisfied as for any point on the boundary. Suppose instead that . Without loss of generality, we may assume that . Since is -Lipschitz, we have for all . Integrating both sides of this inequality from to , we obtain
Since , we have . Similarly, by integrating the inequality from to , we have , which combined with yields (11e).
Suppose now that is any function feasible for the program (11). We will construct an extension by first working independently over each interval . Fix an index . Our strategy is to identify a pair of -bounded, -Lipschitz functions and defined on the interval which satisfy for all , for , and . For any such pair, there exists satisfying
and hence we will define the extension
By convexity, the extension derivative is -bounded and -Lipschitz, so we will only need to check that . The maximum magnitude values of occur either at the interval endpoints, which are -bounded by (11e), or at critical points satisfying , so it suffices to ensure that is -bounded at all critical points.
We will use the -bounded, -Lipschitz functions and as building blocks for our extension, since they satisfy for and ,
for . We need only consider three cases.
Without loss of generality, we may assume that and that changes sign. Consider the quantity . If , we let and .
Since, on the interval , is piecewise linear with at most two pieces that can take on the value , has at most two roots within this interval. However, since is continuous, negative for some value of , and nonnegative at , we know has at least two roots. Thus let be the roots of . For any choice of , the convex combination will be exactly outside . Moreover, if , then this combination will be less than on , and if , the combination will be on the whole interval. Hence it suffices to only check the critical points and . By (11e), for , and so will be -bounded.
If instead , we can recycle the argument from Case 1 with and and conclude that is -bounded.
Without loss of generality, we may assume that . Since
continuously interpolates between
and on , it must have a root . Let be the point where changes from one linear portion to another. Then because is monotonic on each linear portion, the fact that means that cannot have a root between and hence has at most one root on . Hence is the unique root of . In a similar fashion, let us define as the root of , and since for all , we have . Define
and . As in Case 2, we will consider two subcases. If , we will let and . By (11e), , and since this is the only critical point, will be -bounded.
For the other case, in which , we choose and . Then (11e) implies that , and, since this is the only critical point, the extension is well-defined on .
It only remains to define our extension outside of the interval when either or is infinite. Suppose . We extend to each using the construction
This extension ensures that is -bounded and -Lipschitz.
Moreover, the constraint (11e)
guarantees that .
Analogous reasoning establishes an extension to .
∎
For with support for , Algorithm 1 computes a Stein discrepancy based on the graph Stein set
Our next result shows that the graph Stein discrepancy based on a -spanner is strongly equivalent to the classical Stein discrepancy.
If , and is a -spanner, then
where C is a constant, independent of the sample, depending only on the dimension d.
Proof
Fix any , , and with , and consider any -th coordinate boundary projection point
Since , we must have . Moreover, for each dimension , we have , since otherwise, for some and by the continuity of .
The smoothness constraints of the classical Stein set now imply that
and