On the use of bootstrap with variational inference: Theory, interpretation, and a two-sample test example

by   Yen-Chi Chen, et al.
University of Washington

Variational inference is a general approach for approximating complex density functions, such as those arising in latent variable models, which are popular in machine learning. It has been applied to approximate the maximum likelihood estimator and to carry out Bayesian inference; however, quantification of uncertainty with variational inference remains challenging from both theoretical and practical perspectives. This paper is concerned with developing uncertainty measures for variational inference by using bootstrap procedures. We first develop two general bootstrap approaches for assessing the uncertainty of a variational estimate and then study the underlying bootstrap theory in both fixed- and increasing-dimension settings. We then use the bootstrap approach and our theoretical results in the context of mixed membership modeling with multivariate binary data on functional disability from the National Long Term Care Survey. We carry out a two-sample test for changes in the repeated measures of functional disability for the subset of individuals present in the 1984 and 1994 waves.




1 Introduction

Variational inference (Jordan et al., 1999; Wainwright et al., 2008) is a method for approximating complex density functions (Blei et al., 2017). It has been applied to various statistical models such as factor analysis (Ghahramani and Beal, 2000; Khan et al., 2010; Klami et al., 2015), stochastic block models (Celisse et al., 2012; Latouche et al., 2012; Bickel et al., 2013), latent Dirichlet allocation (Blei et al., 2003, 2006), and Gaussian processes (Damianou et al., 2011, 2014).

Variational inference can be used to approximate a posterior distribution as an alternative to Markov chain Monte Carlo (MCMC), when a sampling procedure would be prohibitively slow or require immense human effort to tune, or to approximate a maximum likelihood estimator (MLE), when computation with the specified likelihood is intractable. In particular, when the model involves a latent structure such as a mixed membership model (Airoldi et al., 2005, 2008; Wang et al., 2015) or a mixed effect model (Hall et al., 2011a; Westling and McCormick, 2015), finding the MLE may be very challenging and variational inference provides a fast way to obtain an estimate of the parameter. The estimator from variational inference is called the variational estimator.

Recently, the asymptotic distribution of point estimates resulting from variational inference was investigated in Hall et al. (2011b), Bickel et al. (2013), and Westling and McCormick (2015), and Wang and Blei (2017) analyzed variational inference under a Bayesian framework. When a consistent estimator of the asymptotic variance is available, practitioners can analyze the uncertainty of the variational estimate and draw scientific conclusions by constructing confidence intervals (CIs) for the parameters of interest (Hall et al., 2011b; Westling and McCormick, 2015).

However, constructing a CI using the asymptotic distribution fails if we do not have a consistent estimator of the variance of the variational estimator. To overcome this problem, we consider bootstrap methods of the kind implemented in Bickel et al. (2013) and Wang et al. (2015). The bootstrap approach does not require a consistent variance estimator to be available and, in some cases, leads to a CI with higher-order coverage (Hall, 2013). However, despite the fact that the bootstrap method has already been used with variational estimation (Wang et al., 2015), the underlying bootstrap theory for variational inference has not been established.

In this paper, we investigate the validity of using a bootstrap approach where variational inference is used to approximate an MLE. We construct CIs in the usual fixed-dimensional case, where both the dimensionality of the parameter and the number of latent variables are fixed, as well as in the increasing-dimensional case. An example of the latter situation is an item response theory model where the latent dimensionality may increase as the number of questions per individual increases. Haberman (1977) and Douglas (1997) analyzed situations where the number of questions (dimension) and the number of participants (sample size) increase jointly.

This paper has been motivated by the general need – as opposed to one specific substantive problem or a specific application area – to provide statisticians, computer scientists and data scientists with the theory and tools for using the bootstrap for variational inference. We use two sets of functional disability measures obtained five years apart from the National Long Term Care Survey (NLTCS) to illustrate the bootstrap approach on a two-sample test, a setting where we find the variational inference to be particularly appropriate. However, a complete development of a substantive application is beyond the scope of this paper.


We briefly review variational inference in Section 2. In Section 3, we discuss how to apply the bootstrap to variational inference. We then develop asymptotic normality and bootstrap theory in Section 4. In Section 5, we illustrate the bootstrap approach with a two-sample test using functional disability data from the NLTCS. Finally, we discuss related topics and the link to Stephen E. Fienberg in Section 6.

2 Variational Inference

We consider variational inference in the context of a latent variable model. Assume our data consist of $n$ individuals and $J$ variables (e.g., survey questions or test problems) and form a random sample $X_1, \dots, X_n$ that is IID from a distribution function $F$. We assume that there exist latent features for each individual, denoted $Z_1, \dots, Z_n$. This setup is quite general – in a mixed membership model, $Z_i$ is the vector of mixed membership indicators; in a random effect model, $Z_i$ is the random effect; in a stochastic block model, $Z_i$ is the community indicator (and $X_i$ denotes the edges connected to the $i$-th observation).

We assume a parametric model on the distribution function such that the joint distribution of $(X_i, Z_i)$ has a parametric form $p(x, z; \theta)$, where $\theta \in \Theta \subseteq \mathbb{R}^p$ is the parameter of interest. When the latent feature vector is known, the likelihood of the $i$-th observation is

$$L(\theta \mid X_i, Z_i) = p(X_i, Z_i; \theta).$$

Analogously to Neyman and Scott (1948), we regard the latent feature vectors as incidental parameters and the population parameter $\theta$ as the structural parameter.

In reality, we do not know the latent vectors $Z_1, \dots, Z_n$, so the observed log-likelihood function is

$$\ell_n(\theta) = \sum_{i=1}^{n} \log p(X_i; \theta) = \sum_{i=1}^{n} \log \int p(X_i, z; \theta)\, dz.$$

Often, we are interested in using the maximum likelihood estimator

$$\hat\theta_{\mathrm{MLE}} = \operatorname*{argmax}_{\theta} \ell_n(\theta).$$

However, maximizing or even calculating the marginal likelihood can often be computationally intractable. Thus, variational estimators provide an alternative, computationally feasible estimator to the MLE. The variational estimator is constructed as follows. We first pick a family of distributions – the variational distribution family – for the latent variable $Z$. Let $\{q(z; \eta) : \eta \in H\}$ be the variational distribution family indexed by the variational parameter $\eta$, which is a nuisance parameter in our model. Note that we allow each $Z_i$ to have its own variational parameter; namely, $\eta_1, \dots, \eta_n$. Using Jensen's inequality, the log-likelihood function satisfies

$$\ell_n(\theta) = \sum_{i=1}^{n} \log \int \frac{p(X_i, z; \theta)}{q(z; \eta_i)}\, q(z; \eta_i)\, dz \;\ge\; \sum_{i=1}^{n} \mathbb{E}_{Z \sim q(\cdot; \eta_i)}\left[\log p(X_i, Z; \theta) - \log q(Z; \eta_i)\right],$$

where $\mathbb{E}_{Z \sim q(\cdot; \eta_i)}$ means that the expectation is taken over the variable $Z$ whose underlying distribution is $q(\cdot; \eta_i)$. We call the expression on the right-hand side of the inequality the evidence lower bound (ELBO).

Instead of maximizing the log-likelihood function, the variational framework maximizes the ELBO, leading to

$$\left(\hat\theta_n, \hat\eta_1, \dots, \hat\eta_n\right) = \operatorname*{argmax}_{\theta, \eta_1, \dots, \eta_n} \sum_{i=1}^{n} \mathbb{E}_{Z \sim q(\cdot; \eta_i)}\left[\log p(X_i, Z; \theta) - \log q(Z; \eta_i)\right].$$

Because $\eta_i$ in the above maximization criterion is involved only in the $i$-th summand when $\theta$ is fixed, the first element $\hat\theta_n$ is equivalent to the maximizer of the following criterion:

$$\hat\theta_n = \operatorname*{argmax}_{\theta} \sum_{i=1}^{n} M(\theta; X_i), \qquad \text{where } M(\theta; x) = \max_{\eta} \mathbb{E}_{Z \sim q(\cdot; \eta)}\left[\log p(x, Z; \theta) - \log q(Z; \eta)\right].$$

The quantity $\hat\theta_n$ is called the ELBO estimator or the variational estimator.

Because the ELBO estimator comes from optimizing $\sum_{i} M(\theta; X_i)$, it is an estimator of

$$\theta^* = \operatorname*{argmax}_{\theta}\, \mathbb{E}\left[M(\theta; X_1)\right]. \tag{5}$$

Note that the expectation in the above expression is for the random variable $X_1$ and is taken with respect to the data-generating distribution $F$. The quantity $\theta^*$ defines the population quantity that the variational inference (ELBO estimator) is estimating. Note that $\theta^*$ depends on the variational distribution family and is often different from $\theta^*_{\mathrm{MLE}}$, the population version of the MLE. Thus, variational inference can be thought of as an intentional model misspecification even if the original parametric model is correctly specified. We will argue in the next section that despite the misspecification, variational inference is still a useful procedure for making statistical inference.

Figure 1: An illustration of the relationship between $\theta^*_{\mathrm{MLE}}$ and $\theta^*$ when a latent variable model is used and the model is not correctly specified. In this case, the distribution corresponding to the population MLE is just the distribution in the parametric family that minimizes the KL divergence to the true distribution function $F$. So the population MLE can be viewed as a projection from $F$ onto the parametric family. However, if the MLE is computationally intractable, we can still compute a tractable variational estimator, and the corresponding variational distribution can be viewed as another projection from $F$.
Remark 1.

When the parametric model is correctly specified (i.e., there exists $\theta_0$ such that $F = F_{\theta_0}$), the variational estimator may recover the correct model, with $\theta^* = \theta_0$, in some special cases. For concrete examples, we refer the readers to Hall et al. (2011b) and Bickel et al. (2013), who illustrated this possibility in a single-predictor Poisson mixed model and a stochastic block model, respectively.

2.1 Further considerations for using variational inference in practice

As described in the previous section, the distribution based on the variational estimator may not converge to the true data-generating distribution function even when the model is correctly specified. Despite this drawback, variational inference can be a useful inferential procedure for the following reasons.

  • Likelihood formulation as a working model. As George Box said, “Essentially, all models are wrong, but some are useful” (Box, 1976). A proposed model is almost always misspecified. When using a parametric model to analyze the data, we do not claim that the parametric model describes the actual data-generating distribution. Instead, a working model and parameter estimates help us to learn various aspects of the data at hand. To carry this reasoning further, the ML procedure and the variational inference procedure are just different principles for fitting parameters to the data. When the model is misspecified, both the MLE and the variational estimator are best-approximation estimators under different criteria for measuring the quality of approximation. Figure 1 provides a diagram illustrating the case where the likelihood model does not include the true data-generating distribution.

  • Two-sample test. The variational inference procedure is useful for a two-sample test. Given two sets of data $X_1, \dots, X_n \sim F_X$ and $Y_1, \dots, Y_m \sim F_Y$, the goal of a two-sample test is to test

    $$H_0: F_X = F_Y.$$

    Using equation (5), $F_X = F_Y$ implies that $\theta^*_X = \theta^*_Y$, where $\theta^*_X$ and $\theta^*_Y$ are the maximizers of equation (5) when the expectation is taken over $F_X$ and $F_Y$, respectively. Thus, when applied to a two-sample test, variational inference is as valid an approach as ML inference. In a sense, one can interpret the tests following either approach, variational or ML, as inferences based on different projections of the distributions onto the same parameter space.

3 Bootstrapping the variational estimator

We use the bootstrap (Efron, 1982, 1992) to evaluate the uncertainty of the variational estimator and construct CIs. We focus on the empirical bootstrap – also known as the classical, nonparametric, or Efron's bootstrap – where one samples with replacement from the original dataset, recomputes the ELBO estimator for each bootstrap sample, and uses the distribution of these bootstrapped ELBO estimators to derive uncertainty measures. We illustrate estimation of the error of $\hat\theta_n$ and construction of CIs using the bootstrap. There are many bootstrap CIs (see, e.g., Hall 2013). Here, we focus on two common approaches: the percentile method and the (studentized) pivotal method. Note that CI construction using the percentile approach was implemented in Wang et al. (2015).

The bootstrap approach to estimating uncertainty is very general. The bootstrap percentile approach can be used even when the asymptotic covariance matrix is not available (e.g., is difficult to estimate). When the asymptotic covariance matrix of $\hat\theta_n$ is known and can be consistently estimated (say, using a sandwich estimator), the bootstrap pivotal method may produce CIs with higher-order coverage than those based on asymptotic normality (Horowitz, 1997; Singh, 1981; Babu and Singh, 1983).

More formally, let $X^*_1, \dots, X^*_n$ be a bootstrap sample from the original sample $X_1, \dots, X_n$. Given the bootstrap sample, we compute the bootstrap ELBO estimator

$$\hat\theta^*_n = \operatorname*{argmax}_{\theta} \sum_{i=1}^{n} M(\theta; X^*_i).$$

Repeating the bootstrap procedure $B$ times, we obtain $B$ bootstrap ELBO estimators:

$$\hat\theta^{*(1)}_n, \dots, \hat\theta^{*(B)}_n.$$

We will use these bootstrap ELBO estimators to assess the uncertainty of the original ELBO estimator.

Note that one may also use the jackknife and the weighted bootstrap (O'Hagan et al., 2015) to generate bootstrap samples. In particular, when analyzing network data where each $X_i$ corresponds to the edges of the $i$-th vertex, the empirical bootstrap cannot be applied but the weighted bootstrap (and the parametric bootstrap; see Remark 2) is still applicable.
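To make the resampling loop concrete, the sketch below implements the empirical bootstrap in base R. Here `fit_variational()` is a hypothetical placeholder for whatever routine computes the ELBO estimator (for instance, a wrapper around a package such as mixedMem); it is assumed to take a data matrix and an optional initialization and to return the estimated parameter vector.

```r
# A minimal sketch of the empirical bootstrap for a variational estimator.
# fit_variational() is a hypothetical user-supplied routine: it should
# return the ELBO estimator (a numeric vector) for a given data matrix.
bootstrap_elbo <- function(X, B, fit_variational) {
  n <- nrow(X)
  theta_hat <- fit_variational(X)  # ELBO estimator on the original sample
  boot_est <- matrix(NA_real_, nrow = B, ncol = length(theta_hat))
  for (b in seq_len(B)) {
    idx <- sample.int(n, size = n, replace = TRUE)  # resample individuals
    # Initializing at theta_hat (see Remark 3) avoids label switching.
    boot_est[b, ] <- fit_variational(X[idx, , drop = FALSE], init = theta_hat)
  }
  list(estimate = theta_hat, boot = boot_est)
}
```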

3.1 Estimating the variance

The bootstrap approach can be applied to estimate the variance of the ELBO estimator. Assume that we focus on the $j$-th parameter $\theta_j$. The variance of $\hat\theta_{n,j}$ can be estimated using the sample variance of the bootstrapped variational estimators:

$$\widehat{\mathrm{Var}}\left(\hat\theta_{n,j}\right) = \frac{1}{B-1} \sum_{b=1}^{B} \left(\hat\theta^{*(b)}_{n,j} - \bar\theta^*_{n,j}\right)^2, \qquad \bar\theta^*_{n,j} = \frac{1}{B} \sum_{b=1}^{B} \hat\theta^{*(b)}_{n,j}. \tag{7}$$

Figure 2 provides a diagram summarizing the procedure.

The intuition behind equation (7) is that the bootstrap distribution of the estimators behaves as if new realizations of the original estimator were drawn. Thus, the variance of the bootstrap estimators approximates the variance of $\hat\theta_{n,j}$.

Bootstrap variance estimator. Find the variational estimator $\hat\theta_n$ using $X_1, \dots, X_n$. For $b = 1, \dots, B$, do the following: Sample with replacement from $X_1, \dots, X_n$ to obtain $X^{*(b)}_1, \dots, X^{*(b)}_n$. Compute the variational estimator $\hat\theta^{*(b)}_n$ using $X^{*(b)}_1, \dots, X^{*(b)}_n$. For the $j$-th parameter, compute its variance estimator using equation (7).

Figure 2: Bootstrap variance estimator.
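Given the matrix of bootstrap estimates returned by the hypothetical `bootstrap_elbo()` sketched above (with `X` the data matrix), the variance estimator of equation (7) is a one-liner in base R:

```r
# Column-wise sample variance of the bootstrap estimates (equation (7));
# B = 500 is an illustrative number of bootstrap replicates.
out <- bootstrap_elbo(X, B = 500, fit_variational)
boot_var <- apply(out$boot, 2, var)  # variance estimate for each parameter
boot_se  <- sqrt(boot_var)           # bootstrap standard errors
```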

3.2 Confidence interval: percentile approach

The bootstrap approach enables us to construct CIs for the parameters of interest. We first introduce a simple approach called the percentile (quantile) approach, which is based on the percentiles of the distribution of the bootstrap variational estimators. Assume again that we focus on the $j$-th parameter. Given a confidence level $1-\alpha$, let $\hat t_{\gamma}$ denote the $\gamma$-quantile of the bootstrap ELBO estimators $\hat\theta^{*(1)}_{n,j}, \dots, \hat\theta^{*(B)}_{n,j}$. Then a $1-\alpha$ CI of the $j$-th parameter is

$$C_{n,j} = \left[\hat t_{\alpha/2},\; \hat t_{1-\alpha/2}\right]. \tag{8}$$
Figure 3 summarizes the steps in computing a bootstrap percentile CI.

Equation (8) presents a CI that uses the percentiles of the bootstrap distribution of the ELBO estimator. This CI is based on the following approximation:

$$\sup_t \left| P\left(\sqrt{n}\left(\hat\theta_{n,j} - \theta^*_j\right) \le t\right) - P\left(\sqrt{n}\left(\hat\theta^*_{n,j} - \hat\theta_{n,j}\right) \le t \,\middle|\, X_1, \dots, X_n\right) \right| \stackrel{P}{\longrightarrow} 0. \tag{9}$$

Namely, the CDF of the difference between the ELBO estimator and the truth can be approximated by the CDF of the bootstrapped differences. Thus, the bootstrap distribution approximates the distribution of the actual difference, and we use it to construct a CI. We will show the validity of equation (9) in Theorem 2 and Theorem 3.

Confidence interval by the bootstrap percentile approach. Find the variational estimator $\hat\theta_n$ using $X_1, \dots, X_n$. For $b = 1, \dots, B$, do the following: Sample with replacement from $X_1, \dots, X_n$ to obtain $X^{*(b)}_1, \dots, X^{*(b)}_n$. Compute the variational estimator $\hat\theta^{*(b)}_n$ using $X^{*(b)}_1, \dots, X^{*(b)}_n$. For each parameter, say the $j$-th, compute $\hat t_{\alpha/2}$ and $\hat t_{1-\alpha/2}$, the empirical quantiles of $\hat\theta^{*(1)}_{n,j}, \dots, \hat\theta^{*(B)}_{n,j}$. Form the confidence interval $C_{n,j} = \left[\hat t_{\alpha/2},\; \hat t_{1-\alpha/2}\right]$.

Figure 3: Confidence interval by the bootstrap percentile approach.
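Continuing the sketch, the percentile CI of Figure 3 takes empirical quantiles of the bootstrap estimates; the object `out` is the output of the hypothetical `bootstrap_elbo()` above.

```r
# Percentile CIs (equation (8)): alpha/2 and 1 - alpha/2 empirical
# quantiles of the bootstrap estimates, computed for each parameter.
alpha <- 0.05
perc_ci <- apply(out$boot, 2, quantile, probs = c(alpha / 2, 1 - alpha / 2))
# perc_ci is a 2 x (number of parameters) matrix: lower and upper limits.
```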

3.3 Confidence interval: the pivotal approach

The (studentized) pivotal approach (Wasserman, 2006), also called the percentile-t approach (Hall, 2013), is another popular method for constructing a CI and may lead to a CI with higher-order correctness (Hall, 2013).

The pivotal approach requires a consistent estimator of the variance of $\hat\theta_{n,j}$. Let $\hat\sigma^2_{n,j}$ be such a consistent estimator. Note that $\hat\sigma^2_{n,j}$ can be constructed using a sandwich estimator, as described in Hall et al. (2011b) and Westling and McCormick (2015). Then the statistic

$$T_{n,j} = \frac{\sqrt{n}\left(\hat\theta_{n,j} - \theta^*_j\right)}{\hat\sigma_{n,j}}$$

acts as a t-statistic and converges to a standard normal distribution (see, e.g., Theorem 1). Therefore, $T_{n,j}$ is a pivotal quantity with asymptotic normality, and the pivotal approach is based on bootstrapping $T_{n,j}$ to construct a CI.

Instead of using the percentile from a standard normal distribution, we use the bootstrap percentile of $T_{n,j}$. For the $b$-th bootstrap sample, we not only compute the bootstrap parameter estimate $\hat\theta^{*(b)}_{n,j}$ but also re-compute the corresponding variance estimator $\hat\sigma^{*(b)}_{n,j}$ to evaluate the bootstrap version of the pivotal statistic

$$T^{*(b)}_{n,j} = \frac{\sqrt{n}\left(\hat\theta^{*(b)}_{n,j} - \hat\theta_{n,j}\right)}{\hat\sigma^{*(b)}_{n,j}}.$$

We then pick the value $\hat z_{\alpha}$ as the upper $\alpha$-quantile of the empirical distribution function of $|T^{*(1)}_{n,j}|, \dots, |T^{*(B)}_{n,j}|$. The CI is

$$C^{\mathrm{pivot}}_{n,j} = \left[\hat\theta_{n,j} - \hat z_{\alpha}\,\frac{\hat\sigma_{n,j}}{\sqrt{n}},\;\; \hat\theta_{n,j} + \hat z_{\alpha}\,\frac{\hat\sigma_{n,j}}{\sqrt{n}}\right]. \tag{10}$$

Note that $\hat\sigma_{n,j}$ is the variance estimator computed from the original sample. Figure 4 provides a summary of the bootstrap pivotal approach for constructing a CI.

The intuition of the bootstrap studentized pivotal approach is that the distribution of the bootstrap statistic $T^*_{n,j}$ (given $X_1, \dots, X_n$) converges to the distribution of $T_{n,j}$ faster than the convergence of $T_{n,j}$ to a standard normal distribution. Thus, the CI in equation (10) has higher-order correctness (Singh, 1981; Babu and Singh, 1983; Hall, 2013).

Confidence interval by the bootstrap pivotal approach. Find the variational estimator $\hat\theta_n$ using $X_1, \dots, X_n$. Compute the variance estimator $\hat\sigma^2_{n,j}$ using a sandwich estimator. For $b = 1, \dots, B$, do the following: Sample with replacement from $X_1, \dots, X_n$ to obtain $X^{*(b)}_1, \dots, X^{*(b)}_n$. Compute the variational estimator $\hat\theta^{*(b)}_n$ and the variance estimator $\hat\sigma^{*(b)}_{n,j}$ using $X^{*(b)}_1, \dots, X^{*(b)}_n$. For each parameter, say the $j$-th, compute $T^{*(b)}_{n,j}$ as defined above. Compute $\hat z_{\alpha}$, the upper $\alpha$-quantile of $|T^{*(1)}_{n,j}|, \dots, |T^{*(B)}_{n,j}|$. Form the confidence interval $C^{\mathrm{pivot}}_{n,j}$ as in equation (10).

Figure 4: Confidence interval by the bootstrap pivotal approach.
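The sketch below assembles the pivotal CI of equation (10) for a single coordinate. It assumes standard errors are available: `se_hat_j` for the original sample and `boot_se_j` for each bootstrap replicate (e.g., from a sandwich estimator); both are taken to already include the $1/\sqrt{n}$ factor, so the $\sqrt{n}$ terms cancel.

```r
# Studentized pivotal CI (Figure 4) for the j-th parameter. The standard
# errors are assumed to absorb the 1/sqrt(n) scaling of equation (10).
pivotal_ci <- function(theta_hat_j, se_hat_j, boot_est_j, boot_se_j,
                       alpha = 0.05) {
  t_star <- abs(boot_est_j - theta_hat_j) / boot_se_j  # bootstrap t-statistics
  z_hat <- quantile(t_star, probs = 1 - alpha)         # upper alpha quantile
  c(lower = theta_hat_j - z_hat * se_hat_j,
    upper = theta_hat_j + z_hat * se_hat_j)
}
```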
Remark 2 (Parametric bootstrap).

In addition to the above bootstrap methods, the parametric bootstrap is another popular approach, which generates bootstrap samples from the fitted distribution $F_{\hat\theta_n}$ instead of the empirical distribution function. However, we caution against using the parametric bootstrap. When using the variational estimator, the parametric bootstrap may not give a CI with the (asymptotic) nominal coverage even if the parametric family is correct (i.e., there exists $\theta_0$ such that the data-generating distribution $F = F_{\theta_0}$), because the ELBO estimator does not converge to $\theta_0$ in general. Thus, $F_{\hat\theta_n}$ will not be close to $F_{\theta_0}$, so there is no guarantee that the CI will have nominal coverage. However, when the model is correctly specified and $\hat\theta_n$ does converge to $\theta_0$ (this may occur when we allow the dimension of the variational parameters to increase; see Remark 5), the parametric bootstrap can provide CIs with nominal coverage; see Bickel et al. (2013) for an example in the case of the stochastic block model.

Remark 3 (Label Switching Problem).

In some models, the MLE may only be unique up to a permutation of indices (see, e.g., the example in Section 5). In this case, the ELBO is non-convex, so we need to use an iterative ascent method such as the EM algorithm. For each bootstrap sample, we apply the algorithm with the same initialization (we recommend using the estimator from the original sample as the initial point for each bootstrap sample). This avoids the label switching problem (Redner and Walker, 1984) while the bootstrap still recovers the uncertainty in parameter estimation.

4 Asymptotic Distribution and Bootstrap Consistency

In this section, we derive the asymptotic distribution of the variational estimator and its bootstrap theory. After introducing further notation, we study the theory in both scenarios: fixed and increasing dimension of the parameter $\theta$.

Let $B(\theta_0, r)$ be a ball with radius $r$ centered at $\theta_0$. We define $M_F(\theta) = \mathbb{E}\left[M(\theta; X_1)\right]$, and let $\nabla M_F(\theta)$ and $\nabla\nabla M_F(\theta)$ denote the gradient and Hessian matrix of $M_F(\theta)$, respectively. For a unit vector $v$ and a function $f$, $\partial_v f$ is the derivative of $f$ in the direction of $v$. For a matrix $A$, we denote by $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ the largest and smallest eigenvalues of $A$, respectively.

4.1 Fixed Dimension

When the dimension $p$ is fixed, the ELBO estimator $\hat\theta_n$ and its target $\theta^*$ can be analyzed using the theory of M-estimators (Van der Vaart, 1998). The asymptotic normality of $\hat\theta_n$ has been analyzed in the literature (Hall et al., 2011b; Bickel et al., 2013; Westling and McCormick, 2015; Wang and Blei, 2017) under several scenarios. Here we present the asymptotic normality using the result stated in Westling and McCormick (2015), because they also considered frequentist estimation in the general context of latent variable models.

Theorem 1 (Theorem 2 in Westling and McCormick (2015)).

Assume conditions (B1-B5) in the appendix of Westling and McCormick (2015). Then

$$\sqrt{n}\left(\hat\theta_n - \theta^*\right) \stackrel{d}{\longrightarrow} N(0, \Sigma),$$

where $\Sigma$ is a sandwich-form matrix such that $\Sigma = A^{-1} B A^{-1}$, with $A = \nabla\nabla M_F(\theta^*)$ and $B = \mathbb{E}\left[\nabla M(\theta^*; X_1)\, \nabla M(\theta^*; X_1)^T\right]$.

We state assumptions (B1-5) of Westling and McCormick (2015) in Appendix B. These assumptions are made to derive the asymptotic normality of an M-estimator (see, e.g., Theorem 5.23 of Van der Vaart 1998). Essentially, they ensure that $M(\theta; x)$, $\nabla M(\theta; x)$, and $\nabla\nabla M(\theta; x)$ are well-defined, sufficiently smooth, and well-behaved around $\theta^*$ for $F$-almost every $x$. Viewing the ELBO estimator as an analogue of the MLE, the quantities $\nabla M$ and $A$ are analogous to the score function and the Fisher information matrix, respectively.

To describe the validity of a bootstrap procedure, we use the notion of convergence under the Kolmogorov distance (Van der Vaart, 1998). For two random variables $X$ and $Y$, their Kolmogorov distance is

$$d_K(X, Y) = \sup_t \left| P(X \le t) - P(Y \le t) \right|.$$

A bound on the Kolmogorov distance is also called a Berry-Esseen bound (Berry, 1941; Esseen, 1942). Note that convergence in probability under the Kolmogorov distance is a stronger result than convergence in distribution: if a sequence of random variables $W_1, W_2, \dots$ satisfies $d_K(W_n, W) \to 0$, then $W_n \stackrel{d}{\to} W$.

For a unit vector $v$, let $W_n = \sqrt{n}\, v^T\left(\hat\theta_n - \theta^*\right)$ and $W^*_n = \sqrt{n}\, v^T\left(\hat\theta^*_n - \hat\theta_n\right)$ be the scaled difference and its bootstrap version. We will prove that $W_n$ and $W^*_n$ converge in Kolmogorov distance.

Theorem 2.

Assume conditions (B1-5) in the appendix of Westling and McCormick (2015) and that $\mathbb{E}\left[\left\|\nabla M(\theta^*; X_1)\right\|^3\right] < \infty$. Then for any vector $v$ such that $\|v\| = 1$,

$$\sup_t \left| P\left(W_n \le t\right) - P\left(W^*_n \le t \mid X_1, \dots, X_n\right) \right| \stackrel{P}{\longrightarrow} 0.$$

Thus, for any $\alpha \in (0, 1)$,

$$P\left(\theta^*_j \in C_{n,j}\right) \to 1 - \alpha, \qquad P\left(\theta^*_j \in C^{\mathrm{pivot}}_{n,j}\right) \to 1 - \alpha,$$

where $C_{n,j}$ and $C^{\mathrm{pivot}}_{n,j}$ are the CIs based on equations (8) and (10), respectively.

The proof is deferred to the appendix. Theorem 2 shows that no matter which direction $v$ we project onto, the distribution of the random variable $W_n$ and the distribution of its bootstrap variant $W^*_n$ converge to each other. Thus, the bootstrap quantile converges to the quantile of the actual distribution, which proves the validity of the bootstrap.

4.2 Increasing Dimension

We now study the bootstrap theory when the dimension of the parameter $\theta$ is allowed to increase with the sample size. This situation occurs in many scenarios. For example, in a mixed membership model, we may want to increase the number of subgroups when we have a larger sample. Or, in an item response theory model, both the number of questions in a test and the number of participants may increase at the same time (Haberman, 1977; Douglas, 1997). In this case, we write the dimension of $\theta$ as $p_n$. Note that we only allow the dimension of $\theta$ to increase; the dimensions of the variational parameters $\eta_i$ are assumed to be fixed. Thus, the population quantity $\theta^* = \theta^*_{(n)}$ will also be changing. We need the following assumptions:


  • (A0) $\theta^*$ is the unique maximizer of $M_F(\theta) = \mathbb{E}\left[M(\theta; X_1)\right]$, and the inner maximizer $\eta(\theta, x) = \operatorname*{argmax}_{\eta} \mathbb{E}_{Z \sim q(\cdot; \eta)}\left[\log p(x, Z; \theta) - \log q(Z; \eta)\right]$ is unique for each $\theta$ and almost surely for $x$ under $F$.

  • (A1) There exists $\lambda_0 > 0$ such that all eigenvalues of the Hessian $\nabla\nabla M_F(\theta)$ are not in $[-\lambda_0, \lambda_0]$ for any $\theta \in B(\theta^*, r_0)$ for some radius $r_0 > 0$.

  • (A2) There exists $C_3 > 0$ such that for any unit vectors $u, v, w$,

    $$\sup_{\theta \in B(\theta^*, r_0)} \left| \partial_u \partial_v \partial_w M_F(\theta) \right| \le C_3$$

    for all $n$ and for some radius $r_0 > 0$.

  • (A3) There exists $\bar C_3 > 0$ such that for any unit vector $v$,

    $$\mathbb{E}\left[\left| v^T \nabla M(\theta^*; X_1) \right|^3\right] \le \bar C_3$$

    for any $n$.

(A0) is a very common assumption that requires $\theta^*$ to be uniquely defined (Westling and McCormick, 2015). Note that we can relax (A0) to require $\theta^*$ to be unique only up to a permutation of indices when the model is symmetric (such as the example in Section 5); the theoretical results remain the same after a small modification to the proof, so here we make this assumption to simplify the exposition. (A1) implies that the Hessian matrix of $M_F(\theta)$ is invertible at $\theta = \theta^*$; this is a generalization of the invertible Fisher information matrix condition to the increasing-dimensional setting. (A2) can be viewed as a generalization of a bounded operator norm of the third derivative tensor of $M_F$. To see this, consider only a two-directional derivative, $\partial_u \partial_v M_F(\theta)$: the supremum of this quantity over unit vectors $u, v$ is the operator norm (maximum absolute eigenvalue) of the Hessian $\nabla\nabla M_F(\theta)$. Note that assumptions similar to (A1-A2) also appear in Portnoy (1985) and Mammen (1989). (A3) is a third moment condition that is used to establish a Berry-Esseen bound (Berry, 1941; Esseen, 1942). Note that when $p_n$ is changing with respect to $n$, (A2) and (A3) can be relaxed so that the constants $C_3$ and $\bar C_3$ depend on $n$. However, this relaxation will put another constraint on how fast $p_n$ can grow with respect to $n$.

Note that we do not assume the distribution belongs to an exponential family. If it does belong to an exponential family, the assumptions can be weakened to those in Portnoy (1988).

Theorem 3.

Assume (A0-3), $p_n \to \infty$, and $p_n^3 / n \to 0$. Then, for any vector $v$ such that $\|v\| = 1$, there exists a number $\sigma_v > 0$ such that

$$\sup_t \left| P\left( \frac{\sqrt{n}\, v^T\left(\hat\theta_n - \theta^*\right)}{\sigma_v} \le t \right) - P\left(Z \le t\right) \right| \to 0,$$

where $Z$ is a standard normal random variable. Moreover,

$$\sup_t \left| P\left( \sqrt{n}\, v^T\left(\hat\theta_n - \theta^*\right) \le t \right) - P\left( \sqrt{n}\, v^T\left(\hat\theta^*_n - \hat\theta_n\right) \le t \,\middle|\, X_1, \dots, X_n \right) \right| \stackrel{P}{\longrightarrow} 0.$$

Thus, for any $\alpha \in (0, 1)$,

$$P\left(\theta^*_j \in C_{n,j}\right) \to 1 - \alpha, \qquad P\left(\theta^*_j \in C^{\mathrm{pivot}}_{n,j}\right) \to 1 - \alpha,$$

where $C_{n,j}$ and $C^{\mathrm{pivot}}_{n,j}$ are the CIs based on equations (8) and (10), respectively.

The proof is deferred to the appendix. The first assertion in Theorem 3 states that the difference between the ELBO estimator and its target converges to a normal distribution when we project the difference onto any direction. The quantity $\sigma_v$ is the standard deviation of the projected difference; it depends on the data-generating distribution $F$ and on the variational family that is being used. Note that, when the dimension is fixed, $\sigma^2_v = v^T \Sigma v$.

The second assertion in Theorem 3 shows that the limiting distributions of the scaled difference and its bootstrap variant are asymptotically the same. This implies that CIs constructed using the bootstrap, or using a variance estimated by the bootstrap, are asymptotically valid.

Note that the requirement $p_n^3 / n \to 0$ is very common in increasing-dimensional problems; see, e.g., Mammen (1993) and Portnoy (1988).

Remark 4 (Increasing both $p_n$ and the dimension of $\eta$).

Theorem 3 can be applied to the case where both the dimension of the parameter $\theta$ and the dimension of the variational parameters $\eta_i$ are increasing. In this case, we need assumptions (A0-A3) to hold for every $n$. When we allow the dimension of $\eta_i$ to increase, assumption (A1) may be too strong. We can relax this assumption by allowing the constant $\lambda_0$ in (A1) to decrease to $0$ slowly. The growth rate of $p_n$ will then be constrained by the decreasing rate of $\lambda_0$ to guarantee the invertibility of the Hessian matrix.

Remark 5 (Increasing the dimension of $\eta$ only).

Even when $p$, the dimension of the parameter, remains fixed (i.e., the population MLE $\theta^*_{\mathrm{MLE}}$ is fixed), changing the dimension of the variational parameter $\eta$ will also change the (population) quantity $\theta^*$. In some situations, we even have $\theta^* \to \theta^*_{\mathrm{MLE}}$ as the dimension of $\eta$ increases; see Hall et al. (2011b) and Bickel et al. (2013) for examples. The difference $\theta^* - \theta^*_{\mathrm{MLE}}$ can be viewed as the bias of the variational estimator. Because the dimension of the variational parameter can be viewed as a measure of model complexity of the variational estimator, the property $\theta^* \to \theta^*_{\mathrm{MLE}}$ can be interpreted as an asymptotic unbiasedness property in terms of model complexity.

Remark 6 (High-dimensional case).


When the dimension grows comparably to, or faster than, the sample size $n$, the conventional central limit theorem fails because of the complexity coming from the high-dimensional parameters (Portnoy, 1984, 1985). Thus, CIs from the percentile or pivotal approaches do not have the nominal coverage. However, it is still possible to construct an asymptotically valid CI using the bootstrap. The rectangle CI (Chernozhukov et al., 2013) is one example. We refer the readers to Chernozhukov et al. (2013), Wasserman et al. (2013), and Fan and Zhou (2015) for more details about rectangle CIs.

5 Data Analysis

We illustrate our theoretical results with multivariate binary data on functional disability from the National Long Term Care Survey (NLTCS). Erosheva et al. (2007) presented the first case of variational estimation for mixed membership models with binary data from the NLTCS. Here, we consider observations collected on the NLTCS participants in 1984, 1989, and 1994. The data contain binary indicators on 6 activities of daily living (ADL) and 10 instrumental activities of daily living (IADL) for community-dwelling elderly. The 6 ADL items include basic activities of hygiene and personal care: eating, getting in/out of bed, getting around inside, dressing, bathing, and getting to the bathroom or using the toilet. The 10 IADL items include basic activities necessary to reside in the community: doing heavy housework, doing light housework, doing laundry, cooking, grocery shopping, getting about outside, traveling, managing money, taking medicine, and telephoning. Responses are coded as 0 and 1, where 1 denotes a presence and 0 denotes an absence of a functional disability. In the NLTCS, positive (1) ADL responses mean that during the past week the activity had not been, or was not expected to be, performed without the aid of another person or the use of equipment; positive (1) IADL responses mean that a person usually could not, or was not going to be able to, perform the activity because of a disability or a health problem. For a more in-depth discussion, see Manton et al. (1993) and Erosheva and White (2006).

Similar to Erosheva et al. (2007), we use a mixed membership analysis. Erosheva et al. (2007) take a fully Bayesian approach and specify priors for the $\alpha$ and $\theta$ model parameters discussed below; in this analysis, however, we take a frequentist approach and directly compute maximum-ELBO estimates for $\alpha$ and $\theta$. Also, Erosheva et al. (2007) analyze all four waves (1982, 1984, 1989, and 1994), but we restrict our analysis to the 1984, 1989, and 1994 waves.

In particular, we are interested in two tasks. First, we use a mixed membership model to describe the 5,934 observations in the 1984 wave. We use a variational procedure to estimate the model parameters and then give bootstrapped confidence intervals for each of those estimates. Next, we consider the 4,463 and 5,089 observations from 1989 and 1994, respectively. We test whether the responses observed in 1989 and 1994 arise from the same distribution. Given the two natural sub-populations, this corresponds to the two-sample test described in Section 2.1. A conceptually simpler approach could be used instead of a model-based approach: at the coarsest resolution, this might be a two-sample t-test for each of the 16 variables, and at the finest resolution, a two-sample t-test for each of the $2^{16}$ possible response patterns. However, testing in the mixed membership framework allows investigation of subtle changes in the underlying structure while still retaining easy interpretation.

5.1 Mixed membership models and variational inference

Throughout this analysis, we use mixed membership models to uncover latent structure. Like a mixture model, mixed membership models assume that the population is comprised of several groups, where each group has a distribution over the observed variables. However, while mixture models assume that each individual belongs to a single group, mixed membership models allow each individual to have a partial membership in multiple groups (Airoldi et al., 2014). Mixed membership models have been used for topic modeling (Blei et al., 2003), social network analysis (Airoldi et al., 2008), survey data (Erosheva et al., 2007), and statistical genetics (Pritchard et al., 2000). Note that allowing for mixed membership differs from estimating the posterior probability of group assignment when using a mixture model. Under a mixture model, as the data about an individual grow, the posterior should concentrate on a single group, while in a mixed membership model, as the data about an individual grow, we may consistently estimate the individual's membership, which could be in the interior of the simplex.

In the setting we consider, for each individual we observe multivariate binary data and assume the following generative model. Let $j = 1, \dots, J$ index variables and let $K$ be the fixed number of groups. We assume fixed parameters $\alpha \in \mathbb{R}^K_+$, which regulates the Dirichlet distribution for group membership, and $\theta_{k,j}$ for $k = 1, \dots, K$ and $j = 1, \dots, J$, where $\theta_{k,j}$ is the Bernoulli parameter for a response to variable $j$ from a full member of group $k$. The generative model for individual $i$ is:

  1. $\lambda_i \sim \mathrm{Dirichlet}(\alpha)$, where $\lambda_i$ lies in the simplex (i.e., $\lambda_{ik} \ge 0$ and $\sum_{k=1}^{K} \lambda_{ik} = 1$). Each element $\lambda_{ik}$ characterizes the extent of membership for individual $i$ in group $k$.

  2. For each variable $j$:

    • $Z_{ij} \sim \mathrm{Multinomial}(1, \lambda_i)$, where $Z_{ij}$ indicates the group whose parameters govern individual $i$'s response to question $j$.

    • $X_{ij} \sim \mathrm{Bernoulli}(\theta_{Z_{ij}, j})$, the observed response for individual $i$ on question $j$.

This hierarchical model assumes that each individual responds to each question $j$ as a full member of group $Z_{ij}$. However, for each individual, the group may vary across variables, and the probability of responding as a full member of group $k$ for each question is governed by $\lambda_{ik}$. In addition, $X_{ij}$ is independent of $\lambda_i$ given $Z_{ij}$.
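To make the generative model concrete, the following base R sketch simulates a dataset from it; the values of alpha and theta are illustrative only and are not estimates from the NLTCS.

```r
# Simulate n individuals answering J binary items under the K-group
# mixed membership model described above (illustrative parameters).
set.seed(1)
n <- 100; J <- 16; K <- 4
alpha <- c(2, 1, 1, 0.5)             # Dirichlet parameter (illustrative)
theta <- matrix(runif(K * J), K, J)  # Bernoulli parameters theta[k, j]

rdirichlet1 <- function(a) {         # one draw from Dirichlet(a)
  g <- rgamma(length(a), shape = a)
  g / sum(g)
}

X <- matrix(NA_integer_, n, J)
for (i in 1:n) {
  lambda_i <- rdirichlet1(alpha)               # membership vector (simplex)
  for (j in 1:J) {
    z_ij <- sample.int(K, 1, prob = lambda_i)  # group governing item j
    X[i, j] <- rbinom(1, 1, theta[z_ij, j])    # observed binary response
  }
}
```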

The parameters of interest are $\alpha$ and $\theta$. For the Dirichlet parameter $\alpha$, the normalized quantity $\alpha_k / \sum_{k'} \alpha_{k'}$ indicates the relative proportion of each group, and the magnitude, $\alpha_0 = \sum_k \alpha_k$, indicates the level of intra-individual mixing. Distributions with larger values of $\alpha_0$ concentrate density in the interior of the simplex and imply a higher level of intra-individual mixing, while distributions with smaller values of $\alpha_0$ concentrate density in the corners of the simplex and indicate less intra-individual mixing. The Bernoulli parameters $\theta_{k,j}$ characterize the ability/disability of each group. The quantities $\lambda_i$ and $Z_{ij}$ are latent variables which we consider nuisance parameters. In the previous notation, the parameter of interest is $(\alpha, \theta)$ and the latent features are $Z_i = (\lambda_i, Z_{i1}, \dots, Z_{iJ})$.

Although the model is straightforward to describe and simulate from, maximum likelihood estimation is difficult because the marginal likelihood is intractable. Thus, to fit the model, we use the mixedMem R package (Wang and Erosheva, 2015), which specifies the following mean field variational distribution, with variational parameters $\phi_i \in \mathbb{R}^K_+$ and $\delta_{ij}$ in the simplex:

$$q(\lambda_i, Z_{i1}, \dots, Z_{iJ}; \phi_i, \delta_i) = \mathrm{Dirichlet}(\lambda_i; \phi_i) \prod_{j=1}^{J} \mathrm{Multinomial}(Z_{ij}; \delta_{ij}).$$

In the previous notation, $\eta_i = (\phi_i, \delta_{i1}, \dots, \delta_{iJ})$. The likelihood and specified variational distribution yield the following ELBO:

$$\begin{aligned}
\mathrm{ELBO} = \sum_{i=1}^{n} \Bigg\{ & \log\Gamma\Big(\sum_k \alpha_k\Big) - \sum_k \log\Gamma(\alpha_k) + \sum_k (\alpha_k - 1)\Big(\psi(\phi_{ik}) - \psi\Big(\sum_{k'} \phi_{ik'}\Big)\Big) \\
& + \sum_{j=1}^{J} \sum_{k=1}^{K} \delta_{ijk}\Big(\psi(\phi_{ik}) - \psi\Big(\sum_{k'} \phi_{ik'}\Big)\Big) \\
& + \sum_{j=1}^{J} \sum_{k=1}^{K} \delta_{ijk}\Big( X_{ij}\log\theta_{k,j} + (1 - X_{ij})\log(1 - \theta_{k,j}) \Big) \\
& - \log\Gamma\Big(\sum_k \phi_{ik}\Big) + \sum_k \log\Gamma(\phi_{ik}) - \sum_k (\phi_{ik} - 1)\Big(\psi(\phi_{ik}) - \psi\Big(\sum_{k'} \phi_{ik'}\Big)\Big) \\
& - \sum_{j=1}^{J} \sum_{k=1}^{K} \delta_{ijk}\log\delta_{ijk} \Bigg\},
\end{aligned}$$

where $\Gamma$ is the gamma function and $\psi$ is the digamma function, the derivative of the log-$\Gamma$ function. We maximize the ELBO with respect to the parameters of interest, $\alpha$ and $\theta$, and the variational parameters, $\phi$ and $\delta$, through a block coordinate ascent procedure which alternates between two steps. In the first step, holding $\alpha$ and $\theta$ fixed, we compute the optimal variational parameters by iterative coordinate ascent. Then, holding the variational parameters fixed, we update $\alpha$ and $\theta$ through a Newton-Raphson procedure. Because there is no closed-form solution for the optimal $\phi$ and $\delta$, we cannot easily compute the Hessian required for the sandwich estimator of Westling and McCormick (2015) or the pivotal confidence intervals summarized in Figure 4. However, percentile-based bootstrap confidence intervals and bootstrap variance estimates can be used.

5.2 Initial analysis and bootstrapped standard errors

We first select an appropriate number of groups, $K$, using a pseudo-BIC (pBIC) criterion:

$$\mathrm{pBIC} = -2\, \mathrm{ELBO}\big(\hat\alpha, \hat\theta, \hat\phi, \hat\delta\big) + \tilde p\, \log(n),$$

where $\tilde p = K + K \times J$ is the count of parameters of interest and $n$ is the sample size. Because the ELBO is generally multi-modal, we use 1,000 random initialization points for each candidate $K$. For each $K$, we then select the resulting stationary point with the largest ELBO and compute the pseudo-BIC. Using many random restarts is important because, as is typically the case, the ELBO defined by the mixed membership model and variational family we use is multi-modal. We see from the left panel of Figure 5 that the ELBO of each stationary point can vary widely within each value of $K$. In the right panel of Figure 5, we plot only the lowest pBIC for each $K$; the pBIC criterion leads us to select a 4 group model, though a 6 group model might also be appropriate.
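As a sketch, the pseudo-BIC computation amounts to one line per fitted model; `elbo_K` stands for the maximized ELBO value of a $K$-group fit, which we assume is returned by the fitting routine, and the parameter count follows the formula above.

```r
# Pseudo-BIC for a K-group fit: -2 * ELBO plus a BIC-style penalty on
# the K Dirichlet parameters and the K * J Bernoulli parameters.
pseudo_bic <- function(elbo_K, K, J, n) {
  -2 * elbo_K + (K + K * J) * log(n)
}
```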

The estimated Bernoulli and Dirichlet parameters for the optimal 4 group model are presented in Figures 6 and 7. The confidence intervals shown in black are calculated using the non-parametric percentile bootstrap summarized in Figure 3, and the confidence intervals shown in red are calculated using the parametric percentile bootstrap. The intervals are post-model-selection intervals (Leeb and Pötscher, 2005).

Figure 5: The pseudo-BIC across numbers of groups $K$. The left panel shows pseudo-BIC values across all random initializations, and the right panel shows only the optimal model for each $K$.

Since the ELBO can be multi-modal and we use a coordinate ascent procedure, we need to carefully initialize each bootstrap run so that we do not enter another basin of attraction and overestimate the sampling variability. In particular, we initialize the global parameters, $\alpha$ and $\theta$, as well as the individual-level variational parameters, $\phi_i$ and $\delta_{ij}$, at the corresponding quantities estimated from the original data. In general, we expect each bootstrap run to require less computational effort than the original estimation procedure, since we expect the initialization to be near the stationary point. In addition, the bootstrap runs can be easily parallelized on a cluster; for this particular analysis, an individual bootstrap run took roughly 15 seconds on a laptop.

In Figure 6, we have sorted the groups top to bottom (1 through 4) from least disabled to most disabled. Group 1 is generally most likely to be able to perform each of the 16 tasks. Group 2 appears to be relatively less able to perform most physical/mobility-related tasks, but is relatively more able to perform tasks requiring mental acuity. For instance, members of Group 2 are relatively less able to get in/out of bed, move around inside, and move around outside; however, they are relatively more able to cook, manage money, take medicine, and use the telephone. Group 3 appears to have more mobility, but is less able to perform tasks which require mental acuity. For instance, individuals in Group 3 are relatively more able to get in/out of bed, move around inside, and use the toilet, but less able to manage money or use the telephone. Finally, Group 4 is generally least likely to be able to perform each task; the estimated Bernoulli parameters for Group 4 are higher than the marginal probabilities for all 16 tasks. Note that the CIs from the two bootstrap methods are narrow, indicating that our estimators are quite precise.

Figure 6: The estimated parameters and CIs for the 4 group mixed membership model. The black CIs are formed using the non-parametric bootstrap; the red CIs are formed using the parametric bootstrap. The top panel shows estimates for the ADL activities and the bottom panel shows estimates for the IADL activities. The estimated population proportion of each group is shown on the left, with the corresponding CI under each group label. To aid interpretation, the vertical dashed lines show the marginal proportion of individuals with a positive (disabled) response for each variable.
Figure 7: The left panel shows the estimates and CIs for $\alpha$, and the right panel shows the estimates and CIs for the proportion of each group, i.e., $\alpha_k / \sum_{k'} \alpha_{k'}$. The black CIs are formed using the non-parametric bootstrap and the red CIs are formed using the parametric bootstrap.

We caution against using the parametric bootstrap with variational inference since, in general, $\theta^*$ is not equal to $\theta^*_{\mathrm{MLE}}$, so the CIs may not always cover the variational point estimates. In particular, for the Bernoulli parameters, 37 of the 64 CIs constructed via the parametric bootstrap do not cover the point estimates. In addition, all 4 of the parametric bootstrap CIs for $\alpha$ (and 2 of the 4 CIs for the population proportions) do not cover the point estimates. However, all of the CIs (both for the Bernoulli and Dirichlet parameters) constructed using the non-parametric bootstrap do cover the point estimates. As noted by Andrews (2000), when the point estimates are near the boundary of the parameter space, the bootstrap estimates might be unstable. In this case, we see that Bernoulli parameters close to 0 or 1 generally cause a problem for the parametric bootstrap, but not for the non-parametric bootstrap.

5.3 Two-sample test

We now consider observations from the 1989 and 1994 waves. In particular, we test whether the functional disability measures taken five years apart are generated by the same distribution. In order to concretely interpret differences between the two waves, we fix $\theta$ at the Bernoulli parameters estimated from the 1984 wave. We then find point estimates $\hat\alpha^{(89)}$ and $\hat\alpha^{(94)}$ separately, by maximizing the ELBO with respect to $\alpha$ (keeping $\theta$ fixed). Again, because of the multi-modality of the ELBO, we use 1,000 random initializations to select an $\hat\alpha$ for each wave. In principle, fixing the Bernoulli parameters at any value $\theta_0$ and concluding that $\alpha^*_{89}(\theta_0) \ne \alpha^*_{94}(\theta_0)$ would result in rejecting the null hypothesis (where $\alpha^*(\theta_0)$ indicates the value of $\alpha$ which maximizes the population ELBO criterion for fixed $\theta_0$). However, we use the point estimates from the 1984 wave to facilitate interpretability.

The estimated group proportions for 1989 and 1994 are shown in Figure 8, with the corresponding confidence intervals formed by the non-parametric percentile bootstrap. In 1994, the prevalence of the least disabled group (Group 1) increased, while the prevalences of Group 2 (incapable of mobility tasks, but capable of mental tasks), Group 3 (capable of mobility tasks, but incapable of mental tasks), and Group 4 (generally incapable of all tasks) each decreased by roughly 0.03.

Figure 8: The estimated proportions of each group for 1989 and 1994. The CIs are bootstrap percentile intervals.

For good measure, we also perform a Wald test for the population proportions:

$$H_0: \pi^{(89)}_k = \pi^{(94)}_k, \quad k = 1, 2, 3,$$

where the proportion for Group 4 is excluded so that the distribution of the test statistic is non-degenerate. Using the bootstrap estimates of the covariance calculated by the procedure summarized in Figure 2, we form the Wald statistic

$$W = \left(\hat\pi^{(89)} - \hat\pi^{(94)}\right)^T \left(\hat\Sigma^{(89)} + \hat\Sigma^{(94)}\right)^{-1} \left(\hat\pi^{(89)} - \hat\pi^{(94)}\right).$$

Thus, we reject the null hypothesis that all population proportions are equal when $W$ is compared to the $\chi^2_3$ reference distribution.
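A sketch of this Wald test in base R, assuming `pi_89` and `pi_94` hold the three estimated proportions for each wave and `Sigma_89`, `Sigma_94` their bootstrap covariance estimates from the procedure in Figure 2:

```r
# Wald test for equality of the first three group proportions across
# waves; the two waves are independent samples, so covariances add.
wald_test <- function(pi_89, pi_94, Sigma_89, Sigma_94) {
  d <- pi_89 - pi_94
  W <- as.numeric(t(d) %*% solve(Sigma_89 + Sigma_94) %*% d)
  p_value <- pchisq(W, df = length(d), lower.tail = FALSE)  # chi^2_3 reference
  c(statistic = W, p_value = p_value)
}
```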

6 Discussion

We conclude this paper by including some remarks about practical aspects of variational inference and the bootstrap, and by making several observations on the connections between the research presented in this paper and the work of the late Stephen E. Fienberg, to whom this special issue of the Annals of Applied Statistics is dedicated.

Bootstrap versus asymptotic normality

For practitioners, constructing a CI using a bootstrap approach is generally easier than using asymptotic normality (Hall et al., 2011b; Westling and McCormick, 2015). To construct a CI using asymptotic normality, we need a (consistent) variance estimator, and calculating that estimator often requires an involved derivation, which can be very challenging when the model is complex – the NLTCS mixed membership model is one such example. Sometimes such an estimator does not exist, so we are unable to use the asymptotic normality approach at all. On the other hand, the implementation of a bootstrap approach is very easy – it is just sampling with replacement and re-applying the variational inference. The bootstrap approach does not require a consistent variance estimator, so it is more general than the asymptotic normality approach. Moreover, if we do have a variance estimator, as discussed in Section 3.3, we can construct a CI using the bootstrap pivotal approach, which may lead to a CI with higher-order correctness (Singh, 1981; Babu and Singh, 1983; Hall, 2013).

Implications for variational inference in Bayesian settings

Theorems 1 and 2 prove the asymptotic normality and bootstrap validity of using variational inference to approximate the MLE. These theorems can be applied to Bayesian variational estimators as long as the prior is sufficiently smooth (for a trivial case, consider a uniform prior on the parameter space), or to a penalized ELBO with a very weak (i.e., asymptotically negligible) penalty. However, in the Bayesian framework, the posterior distribution and credible intervals are the quantities of interest and the CI is not the main objective. For the penalized ELBO, a weak penalty is often not of research interest because it neither encourages sparsity nor stabilizes the estimator.

Two-sample test and comparison

The two-sample test used in Section 5.3 shows the great potential of combining the bootstrap CI and variational inference. In the NLTCS example that we presented, without adequate tools to quantify uncertainty in the estimates, one might have erroneously concluded that the proportion of responses corresponding to profile 3 (mainly problems with managing money, grocery shopping, and traveling) stays the same over the ten-year interval, while our analysis demonstrated that the difference is significant. Note that the approach of comparing two samples is very generic – it can be applied to various problems involving a comparison of two datasets using variational inference. With the methodologies developed in this paper, we can assess the significance of the difference between estimates using the bootstrap and draw a statistical conclusion about the two datasets.

Connection with Stephen E. Fienberg

Stephen Fienberg originally introduced Erosheva, then a graduate student, to the Grade of Membership model (Woodbury et al., 1978) and to the functional disability data from the NLTCS. Erosheva's graduate work motivated the original NLTCS publication (Erosheva et al., 2007) as well as the development of the general mixed membership modeling framework. For that original NLTCS publication, under Fienberg's direction, Erosheva and Joutard developed and implemented both a fully Bayesian MCMC approach and the corresponding variational estimation algorithm for mixed membership models with binary data (Erosheva et al., 2007). Later, Wang et al. (2015) extended this line of work and developed a variational estimation algorithm for mixed membership models with rank data, where, at the suggestion of a reviewer, they used bootstrap methods to assess uncertainty in the model estimates.

Although Fienberg's impact on statistical science spanned many areas, including the census and survey research in general, he is perhaps best known for his contributions to discrete data analysis and log-linear models. Even though he used to say that “everything is a log-linear model”, meaning that almost any statistical model for discrete data has a log-linear representation, he did not brush off other approaches. In particular, Stephen Fienberg was a big advocate of mixed membership models – because of their flexibility and practical appeal – during the last 15 years of his career. Mixed membership models present certain challenges in estimation, and, while recommending variational inference as a step toward solving those challenges, Fienberg was very much cognizant of both the practical advantages and the lack of statistical theory for variational estimation. We are not aware of his involvement in recent efforts to provide a theoretical foundation for variational estimates, but we can say with confidence that he would have been supportive and encouraging of advancing research in this direction.


Acknowledgments

We thank the referee and the editor for insightful comments. We also thank Fang Han for helpful comments on the theoretical results.

Appendix A Proofs

Proof of Theorem 2.

Our proof consists of two parts. In the first part, we show that the asymptotic normality admits a Berry-Esseen bound. In the second part, we show that the bootstrap variant converges to the same distribution with a Berry-Esseen bound.

Part 1: Berry-Esseen bound. By the derivation of Theorem 2 in Westling and McCormick (2015) and Theorem 5.23 in Van der Vaart (1998), the ELBO estimator admits the asymptotically linear expansion

$$\hat\theta_n - \theta^* = -A^{-1}\, \frac{1}{n} \sum_{i=1}^{n} \nabla M(\theta^*; X_i) + o_P\left(n^{-1/2}\right),$$

where $A = \nabla\nabla M_F(\theta^*)$.