Smoothed Gradients for Stochastic Variational Inference

06/13/2014 ∙ by Stephan Mandt, et al. ∙ Princeton University ∙ Columbia University

Stochastic variational inference (SVI) lets us scale up Bayesian computation to massive data. It uses stochastic optimization to fit a variational distribution, following easy-to-compute noisy natural gradients. As with most traditional stochastic optimization methods, SVI takes precautions to use unbiased stochastic gradients whose expectations are equal to the true gradients. In this paper, we explore the idea of following biased stochastic gradients in SVI. Our method replaces the natural gradient with a similarly constructed vector that uses a fixed-window moving average of some of its previous terms. We will demonstrate the many advantages of this technique. First, its computational cost is the same as for SVI and storage requirements only multiply by a constant factor. Second, it enjoys significant variance reduction over the unbiased estimates, smaller bias than averaged gradients, and leads to smaller mean-squared error against the full gradient. We test our method on latent Dirichlet allocation with three large corpora.

1 Introduction

Stochastic variational inference (SVI) lets us scale up Bayesian computation to massive data [1]. SVI has been applied to many types of models, including topic models [1], probabilistic factorization [2], statistical network analysis [3, 4], and Gaussian processes [5].

SVI uses stochastic optimization [6] to fit a variational distribution, following easy-to-compute noisy natural gradients that come from repeatedly subsampling from the large data set. As with most traditional stochastic optimization methods, SVI takes precautions to use unbiased, noisy gradients whose expectations are equal to the true gradients. This is necessary for the conditions of [6] to apply, and guarantees that SVI climbs to a local optimum of the variational objective. Innovations on SVI, such as subsampling from data non-uniformly [2] or using control variates [7, 8], have maintained the unbiasedness of the noisy gradient.

In this paper, we explore the idea of following a biased stochastic gradient in SVI. We are inspired by recent work in stochastic optimization that uses biased gradients. For example, the stochastic average gradient (SAG) method iteratively updates only a subset of terms in the full gradient [9]; averaged gradients (AG) follow the average of the sequence of stochastic gradients [10]. These methods lead to faster convergence on many problems.

However, SAG and AG are not immediately applicable to SVI. First, SAG requires storing all of the terms of the gradient. In most applications of SVI there is a term for each data point, and avoiding such storage is one of the motivations for using the algorithm. Second, the SVI update has a form where we update the variational parameter with a convex combination of the previous parameter and a new noisy version of it. This property falls out of the special structure of the gradient of the variational objective, and has the significant advantage of keeping the parameter in its feasible space. (E.g., the parameter may be constrained to be positive or even on the simplex.) Averaged gradients, as we show below, do not enjoy this property. Thus, we develop a new method to form biased gradients in SVI.

To understand our method, we must briefly explain the special structure of the SVI stochastic natural gradient. At any iteration of SVI, we have a current estimate of the variational parameter λ, i.e., the parameter governing an approximate posterior that we are trying to estimate. First, we sample a data point x_i. Then, we use the current estimate of the variational parameters to compute expected sufficient statistics Ŝ_i about that data point. (The sufficient statistics Ŝ_i is a vector of the same dimension as λ.) Finally, we form the stochastic natural gradient of the variational objective with this simple expression:

\hat{g}_i(\lambda) = \eta + N\,\hat{S}_i - \lambda,    (1)

where η is a prior from the model and N is an appropriate scaling (e.g., the number of data points). This is an unbiased noisy gradient [11, 1], and we follow it with a step size ρ_t that decreases across iterations [6]. Because of its algebraic structure, each step amounts to taking a weighted average,

\lambda_{t+1} = (1 - \rho_t)\,\lambda_t + \rho_t\,(\eta + N\,\hat{S}_i).    (2)

Note that this keeps λ in its feasible set.

With these details in mind, we can now describe our method. Our method replaces the natural gradient in Eq. (1) with a similarly constructed vector that uses a fixed-window moving average of the previous sufficient statistics. That is, we replace the sufficient statistics Ŝ_i with the appropriately scaled sum (1/L) Σ_{j=0}^{L−1} Ŝ_{i−j} over the last L iterations. Note this is different from averaging the gradients, which also involves the current iteration’s estimate of λ.

We will demonstrate the many advantages of this technique. First, its computational cost is the same as for SVI and storage requirements only multiply by a constant factor (the window length L). Second, it enjoys significant variance reduction over the unbiased estimates, smaller bias than averaged gradients, and leads to smaller mean-squared error against the full gradient. Finally, we tested our method on latent Dirichlet allocation with three large corpora. We found it leads to faster convergence and better local optima.

Related work

We first discuss the related work from the SVI literature. Both Ref. [8] and Ref. [7] introduce control variates to reduce the gradient’s variance. The method leads to unbiased gradient estimates. On the other hand, every few hundred iterations an entire pass through the data set is necessary, which makes the performance and cost of the method depend on the size of the data set. Ref. [12] develops a method to pre-select documents according to their influence on the global update. For large data sets, however, it also suffers from high storage requirements. In the stochastic optimization literature, we have already discussed SAG [9] and AG [10]. Similarly, Ref. [13] introduces an exponentially fading momentum term. It too suffers from the issues of SAG and AG, mentioned above.

2 Smoothed stochastic gradients for SVI

Latent Dirichlet Allocation and Variational Inference

We start by reviewing stochastic variational inference for LDA [1, 14], a topic model that will be our running example. We are given a corpus of D documents with words w_{1:D,1:N}. We want to infer K hidden topics, defined as multinomial distributions over a vocabulary of size V. We define a multinomial parameter β_{1:K}, termed the topics. Each document d is associated with a normalized vector of topic weights Θ_d. Furthermore, each word n in document d has a topic assignment z_{dn}. This is a K-vector of binary entries, such that z_{dn}^k = 1 if word n in document d is assigned to topic k, and z_{dn}^k = 0 otherwise.

In the generative process, we first draw the topics from a Dirichlet, β_k ∼ Dirichlet(η). For each document, we draw the topic weights, Θ_d ∼ Dirichlet(α). Finally, for each word in the document, we draw an assignment z_{dn} ∼ Multinomial(Θ_d), and we draw the word from the assigned topic, w_{dn} ∼ Multinomial(β_{z_{dn}}). The model has the following joint probability distribution:

p(\beta, \Theta, z, w) = \prod_{k=1}^{K} p(\beta_k \mid \eta) \prod_{d=1}^{D} p(\Theta_d \mid \alpha) \prod_{n=1}^{N} p(z_{dn} \mid \Theta_d)\, p(w_{dn} \mid \beta_{1:K}, z_{dn}).    (3)

Following [1], the topics β are global parameters, shared among all documents. The assignments z and topic proportions Θ are local, as they characterize a single document.

In variational inference [15], we approximate the posterior distribution,

p(\beta, \Theta, z \mid w),    (4)

which is intractable to compute. The posterior is approximated by a factorized distribution,

q(\beta, \Theta, z) = q(\beta \mid \lambda) \prod_{d=1}^{D} \Big( q(\Theta_d \mid \gamma_d) \prod_{n=1}^{N} q(z_{dn} \mid \phi_{dn}) \Big).    (5)

Here, q(β|λ) and q(Θ_d|γ_d) are Dirichlet distributions, and q(z_{dn}|φ_{dn}) are multinomials. The parameters λ, γ, and φ minimize the Kullback-Leibler (KL) divergence between the variational distribution and the posterior [16]. As shown in Refs. [1, 17], the objective to maximize is the evidence lower bound (ELBO),

\mathcal{L}(q) = \mathbb{E}_q[\log p(\beta, \Theta, z, w)] - \mathbb{E}_q[\log q(\beta, \Theta, z)].    (6)

This is a lower bound on the marginal probability of the observations. It is a sensible objective function because, up to a constant, it is equal to the negative KL divergence between q and the posterior. Thus optimizing the ELBO with respect to q is equivalent to minimizing its KL divergence to the posterior.

In traditional variational methods, we iteratively update the local and global parameters. The local parameters are updated as described in [1, 17]. They are a function of the global parameters, so at iteration t the local parameters are φ_{dn}(λ_t). We are interested in the global parameters. They are updated based on the (expected) sufficient statistics S(λ_t),

S(\lambda_t) = \sum_{d=1}^{D} \sum_{n=1}^{N_d} \phi_{dn}(\lambda_t)\, w_{dn}^{\top}, \qquad \lambda_{t+1} = \eta + S(\lambda_t).    (7)

For fixed d and n, the multinomial parameter φ_{dn} is K×1. The binary vector w_{dn} is V×1; it satisfies w_{dn}^v = 1 if word n in document d is the vocabulary term v, and else contains only zeros. Hence, S is K×V and therefore has the same dimension as λ. Alternating updates lead to convergence.
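To make the bookkeeping concrete, here is a minimal Python/NumPy sketch of how the K×V matrix S can be accumulated from the local multinomial parameters and the one-hot word vectors. The function name and data layout (phi as per-document arrays, word_ids as vocabulary indices) are our own illustration, not notation from the paper.

    import numpy as np

    def compute_sufficient_statistics(phi, word_ids, K, V):
        """Accumulate S[k, v] = sum_{d, n} phi[d][n, k] * 1[w_dn == v]."""
        S = np.zeros((K, V))
        for phi_d, w_d in zip(phi, word_ids):
            # phi_d has shape (N_d, K); w_d holds the vocabulary index of each word.
            for n, v in enumerate(w_d):
                # Outer product phi_dn * w_dn^T: w_dn is one-hot at position v,
                # so it only touches column v of S.
                S[:, v] += phi_d[n]
        return S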

Stochastic variational inference for LDA

The computation of the sufficient statistics is inefficient because it involves a pass through the entire data set. In Stochastic Variational Inference for LDA [1, 14], it is approximated by stochastically sampling a "minibatch" B_t ⊂ {1, ..., D} of documents, estimating S on the basis of the minibatch, and scaling the result appropriately,

\hat{S}(\lambda_t, B_t) = \frac{D}{|B_t|} \sum_{d \in B_t} \sum_{n=1}^{N_d} \phi_{dn}(\lambda_t)\, w_{dn}^{\top}.

Because it depends on the minibatch, Ŝ is now a random variable. We will denote variables that explicitly depend on the random minibatch B_t at the current time t by circumflexes, such as Ŝ and ĝ.

In SVI, we update λ by admixing the random estimate of the sufficient statistics to the current value of λ. This involves a learning rate ρ_t,

\lambda_{t+1} = (1 - \rho_t)\,\lambda_t + \rho_t\,(\eta + \hat{S}_t).    (8)

The case of ρ_t ≡ 1 and |B| = D corresponds to batch variational inference (when sampling without replacement). For arbitrary ρ_t, this update is just stochastic gradient ascent, as a stochastic estimate of the natural gradient of the ELBO [1] is

\hat{g}(\lambda_t) = \eta - \lambda_t + \hat{S}_t.    (9)

This interpretation opens the world of gradient smoothing techniques. Note that the above stochastic gradient is unbiased: its expectation value is the full gradient. However, it has non-vanishing variance. The goal of this paper is to reduce this variance at the expense of introducing a bias.
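Before turning to Algorithm 1, the following minimal Python sketch (an illustration with assumed variable names, not code from the original implementation) spells out the plain SVI step of Eqs. (8) and (9): the minibatch sufficient statistics are rescaled, the noisy natural gradient is formed, and following it with step size ρ_t is exactly the convex combination that keeps λ positive.

    import numpy as np

    def svi_step(lam, S_minibatch, eta, rho_t, D, batch_size):
        """One basic SVI update of the global topic parameters lam (a K x V array)."""
        S_hat = (D / batch_size) * S_minibatch   # rescaled noisy estimate of the sufficient statistics
        g_hat = eta - lam + S_hat                # stochastic natural gradient, Eq. (9)
        # Equivalent to (1 - rho_t) * lam + rho_t * (eta + S_hat), Eq. (8):
        return lam + rho_t * g_hat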

Input: D documents, minibatch size |B|, number of stored
sufficient statistics L, learning rate ρ_t, hyperparameters α, η.
Output: Hidden variational parameters λ, γ, φ.
Initialize λ randomly and Ŝ_0^L = 0. Initialize an empty queue Q.
for t = 1 to ∞ do
      Sample a minibatch B ⊂ {1, ..., D} uniformly.
      Initialize γ, then repeat
            For d ∈ B and n ∈ {1, ..., N_d} set
                  φ_{dn}^k ∝ exp( E_q[log Θ_{dk}] + E_q[log β_{k, w_{dn}}] ),  k = 1, ..., K,
                  γ_d = α + Σ_n φ_{dn}.
      until φ_{dn} and γ_d converge.
      For each topic k, calculate the sufficient statistics for minibatch B:
            Ŝ_t^k = (D / |B|) Σ_{d ∈ B} Σ_n φ_{dn}^k w_{dn}^T.
      Add the new sufficient statistics in front of queue Q:
            Q ← {Ŝ_t} ∪ Q.
      Remove the last element Ŝ_{t−L} when length L has been reached:
            if length(Q) > L then
                  Q ← Q \ {Ŝ_{t−L}}
            end if
      Update λ, using the stored sufficient statistics:
            Ŝ_t^L ← Ŝ_{t−1}^L + (Ŝ_t − Ŝ_{t−L}) / L
            ĝ_t^L(λ_t) ← η − λ_t + Ŝ_t^L
            λ_{t+1} ← λ_t + ρ_t ĝ_t^L(λ_t).
end for
Algorithm 1: Smoothed stochastic gradients for Latent Dirichlet Allocation

Smoothed stochastic gradients for SVI

Noisy stochastic gradients can slow down the convergence of SVI or lead to convergence to bad local optima. Hence, we propose a smoothing scheme to reduce the variance of the noisy natural gradient. To this end, we average the sufficient statistics over the past L iterations. Here is a sketch of the procedure:

  • Uniformly sample a minibatch B of documents. Compute the local variational parameters φ from the given λ_t.

  • Compute the sufficient statistics Ŝ_t = (D/|B|) Σ_{d∈B} Σ_n φ_{dn}(λ_t) w_{dn}^T.

  • Store Ŝ_t, along with the L most recent sufficient statistics. Compute Ŝ_t^L = (1/L) Σ_{j=0}^{L−1} Ŝ_{t−j} as their mean.

  • Compute the smoothed stochastic gradient according to

    \hat{g}_t^L(\lambda_t) = \eta - \lambda_t + \hat{S}_t^L.    (10)

  • Use the smoothed stochastic gradient to calculate λ_{t+1} = λ_t + ρ_t ĝ_t^L(λ_t). Repeat.

Details are in Algorithm 1. We now explore its properties. First, note that smoothing the sufficient statistics comes at almost no extra computational cost. In fact, the mean of the stored sufficient statistics does not have to be computed explicitly, but rather amounts to the update

\hat{S}_t^L = \hat{S}_{t-1}^L + \frac{1}{L}\left( \hat{S}_t - \hat{S}_{t-L} \right),    (11)

after which Ŝ_{t−L} is deleted. Storing the sufficient statistics can be expensive for large values of L: each stored statistic is a K×V matrix of 64-bit floats, so in the context of LDA with typical numbers of topics and vocabulary sizes, a large window can reach the Gigabyte range.
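As a minimal Python sketch of this bookkeeping (the deque-based buffer and the class name are our own, under the assumption that the sufficient statistics are plain arrays), the window of the last L statistics sits in a queue, the mean is refreshed in O(1) per Eq. (11), and the smoothed gradient of Eq. (10) is then followed instead of ĝ_t:

    from collections import deque
    import numpy as np

    class SmoothedSuffStats:
        """Fixed-window moving average of the minibatch sufficient statistics."""

        def __init__(self, shape, L):
            self.L = L
            self.queue = deque()          # stores at most L past statistics
            self.mean = np.zeros(shape)   # current value of S_t^L

        def update(self, S_hat_t):
            """Push the newest statistic and refresh the running mean, Eq. (11)."""
            self.queue.appendleft(S_hat_t)
            if len(self.queue) > self.L:
                S_old = self.queue.pop()                  # delete S_{t-L}
                self.mean += (S_hat_t - S_old) / self.L
            else:
                # While the queue is still filling up, average what is stored so far.
                self.mean += (S_hat_t - self.mean) / len(self.queue)
            return self.mean

    def smoothed_svi_step(lam, S_L, eta, rho_t):
        """Follow the smoothed natural gradient of Eq. (10)."""
        return lam + rho_t * (eta - lam + S_L)

Only the L arrays in the queue (each the size of λ) need to be kept, which is the constant-factor storage overhead discussed above.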

Note that for L = 1 we obtain stochastic variational inference (SVI) in its basic form. This includes deterministic (batch) variational inference for |B| = D in the case of sampling without replacement within the minibatch.

Biased gradients

Let us now investigate the algorithm theoretically. Note that the only noisy part of the stochastic gradient in Eq. (9) is the sufficient statistics. Averaging over L stochastic sufficient statistics thus promises to reduce the noise in the gradient. We are interested in the effect of the additional parameter L.

When we average over the L most recent sufficient statistics, we introduce a bias. As the variational parameters change during each iteration, the averaged sufficient statistics deviate in expectation from their current value. This induces biased gradients. In a nutshell, large values of L reduce the variance but increase the bias.

To better understand this tradeoff, we need to introduce some notation. We defined the stochastic gradient ĝ_t in Eq. (9) and refer to g_t = η − λ_t + S(λ_t) as the full gradient (FG). We also defined the smoothed stochastic gradient ĝ_t^L in Eq. (10). Now, we need to introduce an auxiliary variable, g_t^L := η − λ_t + (1/L) Σ_{j=0}^{L−1} S(λ_{t−j}). This is the time-averaged full gradient. It involves the full sufficient statistics S(λ_{t−j}), evaluated along the sequence λ_1, λ_2, ... generated by our algorithm.

We can expand the smoothed stochastic gradient into three terms:

\hat{g}_t^L(\lambda_t) = \underbrace{\eta - \lambda_t + S(\lambda_t)}_{\text{full gradient}} \;+\; \underbrace{\frac{1}{L}\sum_{j=0}^{L-1}\big( S(\lambda_{t-j}) - S(\lambda_t) \big)}_{\text{bias}} \;+\; \underbrace{\frac{1}{L}\sum_{j=0}^{L-1}\big( \hat{S}_{t-j} - S(\lambda_{t-j}) \big)}_{\text{noise}}.    (12)

This involves the full gradient (FG), a bias term, and a stochastic noise term. We want to minimize the statistical error between the full gradient and the smoothed gradient by an optimal choice of L. We will show that the optimal choice is determined by a tradeoff between variance and bias.

For the following analysis, we need to compute expectation values with respect to realizations of our algorithm, which is a stochastic process that generates a sequence of λ_t's. Those expectation values are denoted by E[·]. Notably, not only the minibatches are random variables under this expectation, but also the entire sequences λ_1, λ_2, .... Therefore, one needs to keep in mind that even the full gradients g_t are random variables and can be studied under this expectation.

We find that the mean squared error of the smoothed stochastic gradient dominantly decomposes into a mean squared bias and a noise term:

\mathbb{E}\big[ \| \hat{g}_t^L - g_t \|^2 \big] \;\approx\; \underbrace{\mathbb{E}\big[ \| g_t^L - g_t \|^2 \big]}_{\text{mean squared bias}} \;+\; \underbrace{\mathbb{E}\big[ \| \hat{g}_t^L - g_t^L \|^2 \big]}_{\text{noise}}.    (13)

To see this, consider the mean squared error of the smoothed stochastic gradient with respect to the full gradient, E[ ‖ĝ_t^L − g_t‖^2 ], adding and subtracting g_t^L:

\mathbb{E}\big[ \| \hat{g}_t^L - g_t^L + g_t^L - g_t \|^2 \big] = \mathbb{E}\big[ \| \hat{g}_t^L - g_t^L \|^2 \big] + \mathbb{E}\big[ \| g_t^L - g_t \|^2 \big] + 2\,\mathbb{E}\big[ (\hat{g}_t^L - g_t^L)^{\top} (g_t^L - g_t) \big].

We encounter a cross-term, which we argue to be negligible. Defining the fluctuations ΔŜ_t := Ŝ_t − S(λ_t), we find that ĝ_t^L − g_t^L = (1/L) Σ_{j=0}^{L−1} ΔŜ_{t−j}. Therefore,

\mathbb{E}\big[ (\hat{g}_t^L - g_t^L)^{\top} (g_t^L - g_t) \big] = \frac{1}{L} \sum_{j=0}^{L-1} \mathbb{E}\big[ \Delta\hat{S}_{t-j}^{\top} (g_t^L - g_t) \big].

The fluctuation ΔŜ of the sufficient statistics is a random variable with mean zero, and the randomness of (g_t^L − g_t) enters only via the sequence of λ's. One can assume a very small statistical correlation between those two terms, E[ΔŜ_{t−j}^T (g_t^L − g_t)] ≈ E[ΔŜ_{t−j}]^T E[g_t^L − g_t] = 0. Therefore, the cross-term can be expected to be negligible. We confirmed this fact empirically in our numerical experiments: the top row of Fig. 1 shows that the sum of squared bias and variance is barely distinguishable from the squared error.

By construction, all bias comes from the sufficient statistics:

\mathbb{E}\big[ \hat{g}_t^L - g_t \big] = \mathbb{E}\big[ g_t^L - g_t \big] = \frac{1}{L} \sum_{j=0}^{L-1} \mathbb{E}\big[ S(\lambda_{t-j}) - S(\lambda_t) \big].    (14)

At this point, little can be said in general about the bias term, apart from the fact that it should shrink with the learning rate ρ_t. We will explore it empirically in the next section. We now consider the variance term:

\mathbb{E}\big[ \| \hat{g}_t^L - g_t^L \|^2 \big] = \mathbb{E}\Big[ \big\| \tfrac{1}{L} \textstyle\sum_{j=0}^{L-1} \Delta\hat{S}_{t-j} \big\|^2 \Big].

Because the fluctuations at different times are only weakly correlated, this can be reformulated as (1/L²) Σ_{j=0}^{L−1} var(Ŝ_{t−j}). Assuming that the variance changes little during those L successive updates, we can approximate var(Ŝ_{t−j}) ≈ var(Ŝ_t), which yields

\mathrm{var}\big( \hat{g}_t^L \big) \approx \tfrac{1}{L}\, \mathrm{var}\big( \hat{g}_t \big).    (15)

The smoothed gradient therefore has a variance that is approximately L times smaller than the variance of the original stochastic gradient.
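The 1/L scaling is easy to verify in a toy Monte Carlo experiment (a self-contained simulation of ours, not an experiment from the paper), under the same stationarity assumption as above, i.e. treating the Ŝ's inside the window as i.i.d. noisy estimates:

    import numpy as np

    rng = np.random.default_rng(0)
    true_S, sigma, L, n_trials = 5.0, 2.0, 25, 20000

    # Single noisy sufficient statistics vs. means over windows of length L.
    single = true_S + sigma * rng.standard_normal(n_trials)
    window_means = (true_S + sigma * rng.standard_normal((n_trials, L))).mean(axis=1)

    print(np.var(single))                         # close to sigma^2 = 4.0
    print(np.var(window_means))                   # close to sigma^2 / L = 0.16
    print(np.var(single) / np.var(window_means))  # close to L = 25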

Bias-variance tradeoff

To understand and illustrate the effect of in our optimization problem, we used a small data set of 2000 abstracts from the Arxiv repository. This allowed us to compute the full sufficient statistics and the full gradient for reference. More details on the data set and the corresponding parameters will be given below.

Figure 1: Empirical test of the variance-bias tradeoff on 2,000 abstracts from the Arxiv repository. Top row. For three fixed values of L (left to right), we compare the squared bias, variance, variance+bias, and the squared error as a function of iterations. Depending on L, either the variance or the bias gives the dominant contribution to the error. Bottom row. Squared bias (left), variance (middle), and squared error (right) for different values of L. Intermediate values of L lead to the smallest squared error and hence to the best tradeoff between small variance and small bias.

We computed squared bias (SB), variance (VAR) and squared error (SE) according to Eq. (13) for a single stochastic optimization run. More explicitly,

\mathrm{SB}_t = \| g_t^L - g_t \|^2, \qquad \mathrm{VAR}_t = \| \hat{g}_t^L - g_t^L \|^2, \qquad \mathrm{SE}_t = \| \hat{g}_t^L - g_t \|^2.    (16)

In Fig. 1, we plot those quantities as a function of iteration steps (time). As argued before, we arrive at a drastic variance reduction (bottom, middle) when choosing large values of L.

In contrast, the squared bias (bottom, left) typically increases with L. The bias shows a complex time-evolution as it maintains memory of previous steps. For example, the kinks in the bias curves (bottom, left) occur at times t = L, i.e. they correspond to the chosen window lengths. Those are the times from which on the smoothed gradient loses memory of its initial state, which typically carries a large bias. The variances become approximately stationary at iteration t = L (bottom, middle). Those are the times where the initialization process ends and the queue Q in Algorithm 1 has reached its maximal length L. The squared error (bottom, right) is to a good approximation just the sum of squared bias and variance. This is also shown in the top panel of Fig. 1.

Due to the long-time memory of the smoothed gradients, one can associate some "inertia" or "momentum" with each value of L. The larger L, the smaller the variance and the larger the inertia. In a non-convex optimization setup with many local optima, as in our case, too much inertia can be harmful. This effect can be seen for the runs with the largest values of L in Fig. 1 (bottom), where the squared bias and error curves bend upwards at long times. Think of a marble rolling in a wavy landscape: with too much momentum it runs the danger of passing through a good optimum and eventually getting trapped in a bad local optimum. This picture suggests that the optimal value of L depends on the "ruggedness" of the potential landscape of the optimization problem at hand. Our empirical study suggests that choosing L between 10 and 100 produces the smallest mean squared error.

Aside: connection to gradient averaging

Our algorithm was inspired by various gradient averaging schemes. However, we cannot easily use averaged gradients in SVI. To see the drawbacks of gradient averaging, let us consider the last L stochastic gradients and replace

\hat{g}_t(\lambda_t) \;\longrightarrow\; \hat{g}_t^{\mathrm{AG}} := \frac{1}{L} \sum_{j=0}^{L-1} \hat{g}_{t-j}(\lambda_{t-j}).    (17)

One arrives at the following parameter update for λ:

\lambda_{t+1} = (1 - \rho_t)\,\lambda_t + \rho_t\,\big( \eta + \hat{S}_t^L \big) + \frac{\rho_t}{L} \sum_{j=0}^{L-1} \big( \lambda_t - \lambda_{t-j} \big).    (18)

This update can lead to the violation of optimization constraints, namely to a negative variational parameter λ. Note that for L = 1 (the case of SVI), the third term is zero, guaranteeing positivity of the update. This is no longer guaranteed for L > 1, and the updates can eventually become negative. We found this in practice. Furthermore, we find that there is an extra contribution to the bias compared to Eq. (14),

\mathbb{E}\big[ \hat{g}_t^{\mathrm{AG}} - g_t \big] = \frac{1}{L} \sum_{j=0}^{L-1} \mathbb{E}\big[ S(\lambda_{t-j}) - S(\lambda_t) \big] + \frac{1}{L} \sum_{j=0}^{L-1} \mathbb{E}\big[ \lambda_t - \lambda_{t-j} \big].    (19)

Hence, the averaged gradient carries an additional bias in λ; it is the same term that may violate optimization constraints. In contrast, the variance of the averaged gradient is the same as the variance of the smoothed gradient. Compared to gradient averaging, the smoothed gradient thus has a smaller bias while profiting from the same variance reduction.
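The difference is easy to see on a small numerical example (the λ history and all numbers below are arbitrary illustrations of ours): when λ has decreased quickly over the window, the extra term in Eq. (18) can push the averaged-gradient update negative, while the smoothed-gradient update stays positive because it is a convex combination of positive quantities.

    import numpy as np

    eta, rho = 0.1, 0.9
    lam_hist = np.array([0.2, 1.0, 5.0, 10.0])  # lam_t, lam_{t-1}, ...: lambda dropped quickly
    S_hist = np.array([0.5, 0.5, 0.5, 0.5])     # noisy sufficient statistics over the window
    lam_t, S_L = lam_hist[0], S_hist.mean()

    # Smoothed gradient (this paper): convex combination of positive quantities.
    lam_smoothed = (1 - rho) * lam_t + rho * (eta + S_L)            # = 0.56 > 0

    # Averaged gradients, Eqs. (17)-(18): mean of the last L stochastic gradients.
    lam_averaged = lam_t + rho * np.mean(eta - lam_hist + S_hist)   # = -2.905 < 0

    print(lam_smoothed, lam_averaged)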

3 Empirical study

We tested SVI for LDA, using the smoothed stochastic gradients, on three large corpora:

  • 882K scientific abstracts from the Arxiv repository, using a vocabulary of 14K words.

  • 1.7M articles from the New York Times, using a vocabulary of 8K words.

  • 3.6M articles from Wikipedia, using a vocabulary of 7.7K words.

We set the minibatch size, the number of topics K, and the hyper-parameters α and η to fixed values for all runs. We used a constant learning rate ρ, and also compared our results to a decreasing learning rate, finding the same behavior.

Figure 2: Per-word predictive probability as a function of the effective number of passes through the data (minibatch size * iterations / size of corpus). We compare results for the New York Times, Arxiv, and Wikipedia data sets. Each plot shows data for different values of L. We used a constant learning rate and set a time budget of 24 hours. The highest likelihoods are obtained for L between 10 and 100, after which strong bias effects set in.

For a quantitative test of model fitness, we evaluate the predictive probability over the vocabulary [1]. To this end, we separate a test set from the training set. This test set is furthermore split into two parts: half of it is used to obtain the local variational parameters (i.e., the topic proportions Θ_d) by fitting LDA with fixed global parameters λ. The second part is used to compute the likelihoods of the contained words:

p(w_{\mathrm{new}} \mid d, \lambda) = \sum_{k=1}^{K} \mathbb{E}_q[\Theta_{dk}]\, \mathbb{E}_q[\beta_{k, w_{\mathrm{new}}}].    (20)
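A minimal Python sketch of how such per-word predictive probabilities can be computed from the fitted variational parameters (our own illustration with assumed variable names; it uses the Dirichlet means of Θ_d and β as point estimates):

    import numpy as np

    def mean_log_predictive(gamma_d, lam, test_word_ids):
        """Average log predictive probability of held-out words in one document.

        gamma_d : length-K Dirichlet parameters of the topic proportions,
                  fitted on the observed half of the document.
        lam     : K x V Dirichlet parameters of the topics (global).
        """
        theta_hat = gamma_d / gamma_d.sum()                  # E_q[Theta_d]
        beta_hat = lam / lam.sum(axis=1, keepdims=True)      # E_q[beta_k]
        p_words = theta_hat @ beta_hat                       # length-V predictive distribution
        return float(np.mean(np.log(p_words[test_word_ids])))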

We show the predictive probabilities as a function of effective passes through the data set in Fig. 2 for the New York Times, Arxiv, and Wikipedia corpora, respectively. Effective passes through the data set are defined as (minibatch size * iterations / size of corpus). Within each plot, we compare different numbers L of stored sufficient statistics. The last value, L = ∞, corresponds to a version of the algorithm where we average over all previous sufficient statistics, which is related to averaged gradients (AG), but which has a bias too large to compete with small and finite values of L. The maximal values of 30, 5 and 6 effective passes through the Arxiv, New York Times and Wikipedia data sets, respectively, approximately correspond to a run time of 24 hours, which we set as a hard cutoff in our study.

We obtain the highest held-out likelihoods for intermediate values of L. E.g., averaging over as few as 10 subsequent sufficient statistics results in much faster convergence and higher likelihoods at very little extra storage cost. As we discussed above, we attribute this to the best tradeoff between variance and bias.

4 Discussion and Conclusions

SVI scales up Bayesian inference, but suffers from noisy stochastic gradients. To reduce the mean squared error relative to the full gradient, we averaged the sufficient statistics of SVI successively over the last L iteration steps. The resulting smoothed gradient is biased, however, and the performance of the method is governed by the competition between bias and variance. We argued theoretically and showed empirically that intermediate values of the number L of stored sufficient statistics give the highest held-out likelihoods.

Proving convergence for our algorithm is still an open problem, which is non-trivial especially because the variational objective is non-convex. To guarantee convergence, however, we can simply phase out our algorithm and reduce the number of stored gradients to one as we get close to convergence. At this point, we recover SVI.

Acknowledgements

We thank Laurent Charlin, Alp Kucukelbir, Prem Gopolan, Rajesh Ranganath, Linpeng Tang, Neil Houlsby, Marius Kloft, and Matthew Hoffman for discussions. We acknowledge financial support by NSF CAREER NSF IIS-0745520, NSF BIGDATA NSF IIS-1247664, NSF NEURO NSF IIS-1009542, ONR N00014-11-1-0651, the Alfred P. Sloan foundation, DARPA FA8750-14-2-0009 and the NSF MRSEC program through the Princeton Center for Complex Materials Fellowship (DMR-0819860).

References

  • [1] Matthew D Hoffman, David M Blei, Chong Wang, and John Paisley. Stochastic variational inference. The Journal of Machine Learning Research, 14(1):1303–1347, 2013.
  • [2] Prem Gopalan, Jake M Hofman, and David M Blei. Scalable recommendation with Poisson factorization. Preprint, arXiv:1311.1704, 2013.
  • [3] Prem K Gopalan and David M Blei. Efficient discovery of overlapping communities in massive networks. Proceedings of the National Academy of Sciences, 110(36):14534–14539, 2013.
  • [4] Edoardo M Airoldi, David M Blei, Stephen E Fienberg, and Eric P Xing. Mixed membership stochastic blockmodels. In Advances in Neural Information Processing Systems, pages 33–40, 2009.
  • [5] James Hensman, Nicolo Fusi, and Neil D Lawrence. Gaussian processes for big data. In Uncertainty in Artificial Intelligence, 2013.
  • [6] Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pages 400–407, 1951.
  • [7] Chong Wang, Xi Chen, Alex Smola, and Eric Xing. Variance reduction for stochastic gradient optimization. In Advances in Neural Information Processing Systems, pages 181–189, 2013.
  • [8] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems, pages 315–323, 2013.
  • [9] Mark Schmidt, Nicolas Le Roux, and Francis Bach. Minimizing finite sums with the stochastic average gradient. Technical report, HAL 00860051, 2013.
  • [10] Yurii Nesterov. Primal-dual subgradient methods for convex problems. Mathematical Programming, 120(1):221–259, 2009.
  • [11] Masa-Aki Sato. Online model selection based on the variational Bayes. Neural Computation, 13(7):1649–1681, 2001.
  • [12] Mirwaes Wahabzada and Kristian Kersting. Larger residuals, less work: Active document scheduling for latent Dirichlet allocation. In Machine Learning and Knowledge Discovery in Databases, pages 475–490. Springer, 2011.
  • [13] Paul Tseng. An incremental gradient (-projection) method with momentum term and adaptive stepsize rule. SIAM Journal on Optimization, 8(2):506–531, 1998.
  • [14] Matthew Hoffman, Francis R Bach, and David M Blei. Online learning for latent Dirichlet allocation. In Advances in Neural Information Processing Systems, pages 856–864, 2010.
  • [15] Martin J Wainwright and Michael I Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1–305, 2008.
  • [16] Christopher M Bishop et al. Pattern Recognition and Machine Learning, volume 1. Springer New York, 2006.
  • [17] David M Blei, Andrew Y Ng, and Michael I Jordan. Latent Dirichlet allocation. The Journal of Machine Learning Research, 3:993–1022, 2003.