Markov chain Monte Carlo (MCMC) sampling is a powerful method for computation on intractable distributions. We are interested in large-dataset applications, where the goal is to sample a posterior distribution of parameters θ given N data points, for large N. The Metropolis-Hastings method (M-H) generates sample candidates from a proposal distribution q, which is in general different from the target distribution p, and decides whether to accept or reject based on an acceptance test. The acceptance test is usually a Metropolis test (Metropolis et al., 1953; Hastings, 1970).
Many state-of-the-art machine learning methods, and deep learning in particular, are based on minibatch updates (such as SGD) to a model. Minibatch updates produce many improvements to the model for each pass over the dataset, and have high sample efficiency. In contrast, conventional M-H requires calculations over the full dataset to produce a new sample. Recent methods from (Korattikara et al., 2014) and (Bardenet et al., 2014) perform approximate (bounded-error) acceptance tests using subsets of the full dataset. The amount of data consumed for each test varies significantly from one minibatch to the next. By contrast, (Maclaurin and Adams, 2014; Bardenet et al., 2016) perform exact tests but require a lower bound on the parameter distribution across its domain. The amount of data reduction depends on the accuracy of this bound, and such bounds are only available for relatively simple distributions.
Here we derive a new test which incorporates the variability in minibatch statistics as a natural part of the test and requires less data per iteration than prior work. We use a Barker test function (Barker, 1965), which makes our test naturally error-tolerant. The idea of using a noise-tolerant Barker test function was suggested, but not explored empirically, in (Bardenet et al., 2016), Section 6.3. But the asymptotic test statistic CDF and the Barker function are different, which leads to fixed errors for the approach in (Bardenet et al., 2016). Here, we show that the difference between the distributions can be corrected with an additive random variable. This leads to a test which is fast, and whose error can be made arbitrarily small.
We note that this approach is fundamentally different from prior work. It makes no assumptions about the form of the posterior parameter distribution, and requires no global bounds on it. It is exact in the limit as batch size increases, by the Central Limit Theorem. This is not true of (Korattikara et al., 2014) and (Bardenet et al., 2014), which use tail bounds and provide only approximate tests even with arbitrarily large batches of data. Our test is also exact under the assumption of Korattikara et al. (2014) that the log probability ratios of batches are normally distributed about their mean. Rather than tail bounds, our approach uses moment estimates from the data to determine how far the minibatch posteriors deviate from a normal distribution. These bounds carry through to the overall accuracy of the test.
Our test is applicable when the variance (over data samples) of the log probability ratio between the proposal and the current state is small enough (less than 1). It is not clear at first why this quantity should be bounded, but it is natural for well-specified models running Metropolis-Hastings sampling with optimal proposals (Roberts and Rosenthal, 2001) on a full dataset. If the posterior parameter distribution is a unit-variance normal distribution, then the posterior for N samples will have variance 1/N. There is simply not enough information in b ≪ N samples to locate and efficiently sample from this posterior. This is not a property of any particular proposal or test, but of the information carried by the data. The variance condition succinctly captures the requirement that the minibatch carries enough information to generate a sample. While we cannot expect to generate independent samples from the posterior using only a small subset of the data, there are three situations where we can exploit small minibatches:
Increase the temperature of the target distribution. Log likelihoods scale as 1/T, and so the variance of the likelihood ratio will vary as 1/T². As we demonstrate in Section 6.2, higher temperature can be advantageous for parameter exploration.
For continuous distributions, reduce the proposal step size (i.e., generate correlated samples). The variance of the log acceptance probability scales as the square of the proposal step size.
Utilize Hamiltonian dynamics for proposals and tests. Here the dynamics themselves shape the posterior distribution, and the M-H test is only needed to correct quantization error. In terms of the information carried by the samples, this approach is not limited by the data in a particular minibatch, since momentum is carried over time and “remembered” across multiple minibatches.
We note that case two above is characteristic of Gibbs samplers applied to large datasets (Dupuy and Bach, 2016). Such samplers represent a model posterior via counts over an entire dataset of N samples. When a minibatch of b samples is used to update the model, the counts for those samples only are updated. This creates “steps” of relative size b/N in the model parameters, and correlated samples from the model posterior. Correlated samples are still very useful in high-dimensional ML problems with multi-modal posteriors, since they correspond to a finer-scale random walk through the posterior landscape. The contributions of this paper are as follows:
We develop a new, more efficient (in samples per test) minibatch acceptance test with quantifiable error bounds. The test uses a novel additive correction variable to implement a Barker test based on minibatch mean and variance.
We compare our new test and prior approaches on several datasets. We demonstrate several order-of-magnitude improvements in sample efficiency, and that the batch size distribution is short-tailed.
In MCMC, a difficult-to-compute probability distribution p(θ) is sampled using a Markov chain θ_1, θ_2, …. The sample θ_{t+1} at time t+1 is generated using a candidate θ′ from a (simpler) proposal distribution q(θ′ | θ_t), filtered by an acceptance test. The acceptance test is usually a Metropolis test, with acceptance probability
α(θ_t, θ′) = min(1, [p(θ′) q(θ_t | θ′)] / [p(θ_t) q(θ′ | θ_t)]).
With probability α, we accept and set θ_{t+1} = θ′; otherwise we set θ_{t+1} = θ_t. The test is often implemented with an auxiliary random variable u ~ U(0, 1) and the comparison u < α; here, U(a, b) denotes the uniform distribution on the interval [a, b]. For simplicity, we drop the subscript t for the current sample and denote it as θ.
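For concreteness, the test above can be sketched as a minimal random-walk Metropolis sampler (the target density and step size here are illustrative, not from the paper):

```python
import math
import random

def metropolis_step(theta, log_p, step, rng):
    """One Metropolis step with a symmetric Gaussian random-walk proposal.

    log_p is the unnormalized log target density; with a symmetric
    proposal, the Hastings ratio q(theta | theta') / q(theta' | theta) cancels.
    """
    theta_prop = theta + rng.gauss(0.0, step)
    log_alpha = log_p(theta_prop) - log_p(theta)  # log of the target ratio
    # Accept with probability min(1, exp(log_alpha)), i.e. compare u < alpha.
    if math.log(rng.random()) < log_alpha:
        return theta_prop
    return theta

# Usage: sample a standard normal target for 20,000 steps.
rng = random.Random(0)
log_p = lambda th: -0.5 * th * th
theta, samples = 0.0, []
for _ in range(20000):
    theta = metropolis_step(theta, log_p, step=1.0, rng=rng)
    samples.append(theta)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The resulting samples are correlated, but their empirical mean and variance approach those of the target.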
The acceptance test guarantees detailed balance, which means p(θ) T(θ → θ′) = p(θ′) T(θ′ → θ), where T(θ → θ′) is the probability of a transition from state θ to θ′. Here, T(θ → θ′) = q(θ′ | θ) α(θ, θ′). This condition, together with ergodicity, guarantees that the Markov chain has a unique stationary distribution
p(θ). For Bayesian inference, the target distribution is the posterior p(θ | x_1, …, x_N) ∝ p_0(θ) ∏_{i=1}^N p(x_i | θ). The acceptance probability is now
α = min(1, [p_0(θ′) ∏_{i=1}^N p(x_i | θ′) q(θ | θ′)] / [p_0(θ) ∏_{i=1}^N p(x_i | θ) q(θ′ | θ)]),
where p_0(θ) is the prior. Computing samples this way requires all N data points, which is very expensive for large datasets.
To address this challenge, (Korattikara et al., 2014; Bardenet et al., 2014) perform approximate Metropolis-Hastings tests using sequential hypothesis testing. At each iteration, a subset of data is sampled and used to test whether to accept using an approximation to the full-data test. If the approximate test does not yield a decision, the minibatch size is increased and the test repeated. This process continues until a decision is reached. These methods either invoke the asymptotic CLT and assume that finite batch errors are normally distributed (Korattikara et al., 2014) or use a concentration bound (Bardenet et al., 2014). We refer to these algorithms, respectively, as AustereMH and MHSubLhd. While both show useful reductions in the number of samples required, they suffer from two drawbacks: (i) they are approximate, and always yield a decision with a finite error; (ii) they both require exact, dataset-wide bounds that depend on the current and proposed states (see Section 5). (We obtained the authors' code for both and found that they scanned the entire dataset at each step to obtain these estimates.) We discuss a worst-case scenario in Section 2.2.
To simplify notation, we assume that the temperature is 1 (saving T to indicate the number of samples to draw). Temperature appears as an exponent 1/T on each likelihood, p(x_i | θ)^{1/T}, so its effect would be to act as a factor of 1/T on the log likelihood ratio.
To reduce computational effort, an unbiased estimate of the log likelihood ratio based on a minibatch of b ≪ N points can be used. It will be convenient for our analysis to define Λ_i(θ, θ′) = log[p(x_i | θ′) / p(x_i | θ)]. Thus, Λ is the mean of the Λ_i over the entire dataset, and Λ* is the mean of the Λ_i in its minibatch.
Since minibatches contain randomly selected samples, the Λ_i values are i.i.d. random variables. (The analysis assumes sampling with replacement, although implementations on typical large datasets will approximate this by sampling without replacement.) By the Central Limit Theorem, we expect Λ* to be approximately Gaussian. The acceptance test then becomes a statistical test of the hypothesis that the full-data quantity exceeds its threshold, established by showing that Λ* is substantially larger, relative to its standard error, than the threshold.
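As a numerical illustration (a hypothetical Gaussian model, not the paper's), the per-sample log ratios Λ_i and the unbiasedness of the minibatch mean can be checked directly:

```python
import numpy as np

rng = np.random.default_rng(0)
N, b = 100_000, 100
x = rng.normal(1.0, 1.0, size=N)     # data from a hypothetical N(1, 1) model

def log_lik(x, theta):
    return -0.5 * (x - theta) ** 2   # Gaussian log likelihood, constants dropped

theta, theta_prop = 0.9, 1.1
lam_i = log_lik(x, theta_prop) - log_lik(x, theta)  # per-sample log ratios
lam = lam_i.mean()                                   # full-data mean (Lambda)

# Minibatch estimates Lambda* (sampling with replacement, as in the analysis):
est = [lam_i[rng.integers(0, N, size=b)].mean() for _ in range(2000)]
```

Averaged over many minibatches, the estimates concentrate around the full-data value, while individual estimates fluctuate with standard deviation proportional to 1/√b.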
2.2 A Worst-Case Gaussian Example
Let x_1, …, x_N be i.i.d. normal samples with known (unit) variance and unknown mean μ. We use a uniform prior on μ. The log likelihood ratio between the proposal and the current state (Equation (5)) is normally distributed over selection of the normal samples x_i. Since the x_i have unit variance, their mean has variance 1/N, and the variance of the log likelihood ratio shrinks accordingly. In order to pass a hypothesis test, there needs to be a large enough gap (several standard deviations) between the test statistic and its threshold.
The posterior is a Gaussian centered on the sample mean x̄, with variance 1/N (i.e., standard deviation N^{-1/2}). In one dimension, an efficient proposal distribution has the same variance as the target distribution (Roberts and Rosenthal, 2001), so we use a proposal at this scale. It is symmetric, q(θ′ | θ) = q(θ | θ′), and since we assumed a uniform prior, the acceptance probability depends only on the likelihood ratio. Our worst-case scenario is specified in Lemma 1.
See Appendix, Section A.1. ∎
Similar results can be shown for other distributions and proposals by identifying regions in product space such that the hypothesis test needs to separate nearly-equal values. It follows that the accelerated tests from prior work consume at least a constant fraction of the data per test compared to full-data tests, so their speed-up is O(1). The issue is the use of tail bounds to separate the test statistic from its threshold; for certain input/random combinations, this difference can be arbitrarily close to zero. We avoid this by using the approximately normal variation in Λ* to replace the variation due to the auxiliary uniform variable u.
2.3 MCMC Posterior Inference
There is a separate line of MCMC work drawing principles from statistical physics. Hamiltonian Monte Carlo (HMC) (Neal, 2010) methods generate distant proposals with high acceptance when run on full batches of data. Recently, Langevin Dynamics (Welling and Teh, 2011; Ahn et al., 2012) has been applied to Bayesian estimation on minibatches of data. This simplified dynamics uses local proposals and avoids M-H tests by using small proposal steps whose acceptance approaches 1 in the limit. However, the constraint on proposal step size is severe, and state-space exploration reduces to a random walk. HMC for minibatches was described in (Chen et al., 2014), which allows momentum-augmented proposals with larger step sizes. However, step sizes are still limited by the need to run accurately without M-H tests. By providing an M-H test with cost similar to standard gradient steps, our work opens the door to applying those methods with much more aggressive step sizes without loss of accuracy.
3 A New M-H Acceptance Test
3.1 Log-Likelihood Ratios
For our new M-H test, we denote the exact and approximate log likelihood ratios as Δ and Δ*, respectively. First, Δ(θ, θ′) is defined as the full log ratio from Equation (2), where the prior, likelihood, and proposal factors match the corresponding functions within Equation (2). We separate out terms dependent on and independent of the data.
A minibatch estimator of Δ, denoted Δ*, replaces the full-data average of the per-sample terms with the average over a minibatch. Note that Δ and Δ* are evaluated on the full dataset and a minibatch of size b, respectively. We have E[Δ*] = Δ, meaning Δ* is an unbiased estimator of Δ.
The key to our test is a smooth acceptance function. We consider functions other than the classical Metropolis test that satisfy the detailed balance condition needed for accurate posterior estimation. A class of suitable functions is specified as follows:
If g is any function such that g(x) = exp(x) g(−x), then the acceptance function g(Δ(θ, θ′)) satisfies detailed balance.
This result is used in (Barker, 1965) to define the Barker acceptance test.
3.2 Barker (Logistic) Acceptance Function
For our new M-H test we use the Barker logistic function (Barker, 1965): g(x) = 1/(1 + exp(−x)). Straightforward arithmetic shows that it satisfies the condition in Lemma 2. It is slightly less efficient than the Metropolis test, since its acceptance rate for a vanishing likelihood difference is 0.5. However, we will see that its overall sample efficiency is much higher than that of the earlier methods. See Appendix B for additional discussion.
Assume we begin with the current sample θ and a candidate sample θ′, and that u ~ U(0, 1) is a uniform random variable. We accept θ′ if u < g(Δ), and reject otherwise. Since g is monotonically increasing, its inverse is well-defined and unique. So an equivalent test is to accept iff
g^{-1}(u) < Δ,
where V = g^{-1}(u) is a random variable with the logistic distribution (its CDF is the logistic function). To see this, notice that P(V < x) = P(u < g(x)) = g(x), that the derivative g′ is the density corresponding to a logistic CDF, and that this density is symmetric. So we can equivalently test whether
Δ + X_log > 0
for a logistic random variable X_log.
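This equivalence is easy to verify by simulation; a short sketch (the value of Δ is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def barker(delta):
    """Barker (logistic) acceptance function g."""
    return 1.0 / (1.0 + np.exp(-delta))

delta, n = 0.7, 200_000
# Form 1: accept iff u < g(delta), with u ~ Uniform(0, 1).
acc1 = (rng.uniform(size=n) < barker(delta)).mean()
# Form 2: accept iff delta + X > 0, with X ~ Logistic(0, 1); the logistic
# density is symmetric, so this matches Form 1 in distribution.
acc2 = (delta + rng.logistic(size=n) > 0).mean()
```

Both empirical acceptance rates converge to g(Δ) ≈ 0.668 for Δ = 0.7.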
3.3 A Minibatch Acceptance Test
We now describe acceptance testing using the minibatch estimator Δ*. From Equation (8), Δ* can be represented as a constant term plus the mean of b i.i.d. terms. As b increases, Δ* therefore has a distribution which approaches a normal distribution by the Central Limit Theorem. We now describe this using an asymptotic argument, and defer specific bounds between the CDFs of Δ* and a Gaussian to Section 5.
In the limit, since Δ* is normally distributed about its mean Δ, we can write
Δ* = Δ + X_norm,
where X_norm denotes a variable which is approximately normal with mean zero and variance σ². But to perform the test in Equation (10) we want Δ + X_log for a logistic random variable (call it X_log from now on). In (Bardenet et al., 2016) it was proposed to use Δ* directly in a Barker test, and tolerate the fixed error between the logistic and normal distributions.
Our approach is to instead decompose X_log as
X_log = X_norm + X_corr,
where we assume X_norm ~ N(0, σ²) and X_corr is a zero-mean “correction” variable with density C_σ. The two variables are added (i.e., their distributions convolve) to form X_log. This decomposition requires an appropriate C_σ, which we derive in Section 4. Using samples from C_σ, the acceptance test is now
Δ* + X_corr > 0.
Therefore, assuming the variance of Δ* is small enough, if we have an estimate of Δ* from the current data minibatch, we test acceptance by adding a correction random variable X_corr and then accept if the result is positive (and reject otherwise).
If the distribution of Δ* is exactly N(Δ, σ²), the above test is exact; and as we show in Section 5, if there is a maximum error ε between the CDF of Δ* and the CDF of Δ + X_norm, then our test has an error of at most ε relative to the full-batch version.
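A sketch of the resulting test is below. The variance top-up with an extra normal variable follows the description of Algorithm 1; the correction variable is crudely replaced here by a moment-matched Gaussian, standing in for a sample from the precomputed density C_σ, so this sketch only approximates the exact construction:

```python
import numpy as np

def minibatch_barker_test(lam_minibatch, N, extra_const, rng):
    """Sketch of the minibatch test: accept iff Delta* + noise > 0.

    lam_minibatch holds per-sample log likelihood ratios Lambda_i on the
    minibatch; extra_const holds the data-independent prior/proposal terms.
    X_corr is approximated by a Gaussian matching the variance gap between
    a unit normal and a standard logistic (pi^2/3 - 1); the paper samples
    it from the precomputed correction density C_sigma instead.
    """
    b = len(lam_minibatch)
    delta_star = extra_const + N * np.mean(lam_minibatch)
    sigma2 = N**2 * np.var(lam_minibatch, ddof=1) / b   # est. Var(Delta*)
    if sigma2 >= 1.0:
        raise ValueError("minibatch too noisy: grow b or shrink the step")
    x_norm = rng.normal(0.0, np.sqrt(1.0 - sigma2))        # top up to Var 1
    x_corr = rng.normal(0.0, np.sqrt(np.pi**2 / 3 - 1.0))  # stand-in for C_sigma
    return delta_star + x_norm + x_corr > 0

# Usage with synthetic per-sample ratios (illustrative numbers):
rng = np.random.default_rng(2)
lam = rng.normal(2e-8, 1e-6, size=500)   # tiny ratios, as for a small step
accepted = minibatch_barker_test(lam, N=100_000, extra_const=0.0, rng=rng)
```

The variance check mirrors the applicability condition of Section 1: the test is only valid when the estimated variance of Δ* is below 1.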
4 The Correction Distribution
Our test in Equation (13) requires knowing the distribution of X_corr. In Section 5, we show that the test accuracy depends on the absolute error between the CDFs of X_norm + X_corr and X_log. Consequently, we need to minimize this error in our construction of C_σ. More formally, let Φ_σ be the CDF of a zero-mean Gaussian with standard deviation σ, S be the logistic function, and C_σ be the density of the correction distribution. Our goal is to solve:
minimize over C_σ the quantity sup_x |(Φ_σ ∗ C_σ)(x) − S(x)|,
where ∗ denotes convolution. To compute C_σ, we assume the input x and the correction variable y lie in bounded symmetric intervals. We discretize the convolution by discretizing x and y into n_x and n_y values, respectively. Writing x_i and y_j for the grid points, the objective becomes a finite-dimensional minimization:
Now define the matrix M with entries M_{ij} = Φ_σ(x_i − y_j) times the grid spacing, and vectors c and s such that c_j = C_σ(y_j) and s_i = S(x_i), where the indices i and j are appropriately translated to be non-negative. The problem is now to minimize ‖Mc − s‖_∞ subject to the density non-negativity constraint c ≥ 0. We approximate this with least squares:
minimize ‖Mc − s‖₂² + λ‖c‖₂²
with regularization λ. The solution is well known from the normal equations (c = (MᵀM + λI)^{-1} Mᵀs) and in practice yields an acceptable norm.
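A numerical sketch of this construction follows; the grid extents, grid sizes, and λ here are illustrative choices, not the paper's settings:

```python
import math
import numpy as np

sigma = 1.0                    # std of the Gaussian part X_norm
V, n = 10.0, 401               # symmetric grid [-V, V] with n points
x = np.linspace(-V, V, n)      # evaluation grid for the objective
y = np.linspace(-V, V, n)      # support grid for the correction density
dy = y[1] - y[0]

erf = np.vectorize(math.erf)
def phi_cdf(t):                # CDF of N(0, sigma^2)
    return 0.5 * (1.0 + erf(t / (sigma * math.sqrt(2.0))))
def logistic(t):               # target CDF S
    return 1.0 / (1.0 + np.exp(-t))

# Discretized convolution: (Phi_sigma * C)(x_i) ~ sum_j Phi_sigma(x_i - y_j) c_j dy
M = phi_cdf(x[:, None] - y[None, :]) * dy
s = logistic(x)

lam = 1e-6                     # ridge regularization strength
c = np.linalg.solve(M.T @ M + lam * np.eye(n), M.T @ s)  # density values C(y_j)

sup_err = np.max(np.abs(M @ c - s))  # sup-norm CDF error of the fitted convolution
mass = c.sum() * dy                  # total probability mass of the fit
```

The fitted convolution tracks the logistic CDF closely in sup norm, and the recovered density integrates to roughly 1, though (as discussed next) it is not guaranteed to be non-negative everywhere.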
With this approach, there is no guarantee that C_σ ≥ 0. However, we have some flexibility in the choice of σ in Equation (14). As we decrease the variance of X_norm, the variance of X_corr grows by the same amount, and C_σ is in fact the result of convolution with a Gaussian whose variance is the difference. Thus as σ decreases, C_σ approaches the derivative of the logistic function (the logistic density). It retains some weak negative values, but removal of those leads to only a small error. The parameter settings we used for our experiments empirically provided excellent performance; see Table 3 in Appendix C.1 for detailed errors under different settings. Algorithm 1 describes our procedure, MHminibatch. A few points:
It uses an adaptive step size so as to use the smallest possible average minibatch size. Unlike previous work, the size distribution is short-tailed.
An additional normal variable is added to Δ* to produce a variable with unit variance. This is not mathematically necessary, but allows us to use a single correction distribution with σ = 1, saving on memory footprint.
The sample variance of Δ* is estimated from the minibatch and is proportional to 1/b.
We now derive error bounds for our M-H test and the target distribution it generates. In Section 5.1, we present bounds on the absolute and relative error (in terms of the CDFs) of the distribution of Δ* versus a Gaussian. We then show in Section 5.2 that these bounds are preserved after the addition of other random variables (e.g., X_norm and X_corr). It then follows that the acceptance test has the same error bound.
5.1 Bounding the Error of Δ* From a Gaussian
We use the following quantitative central-limit result:
Let X_1, …, X_n be a set of zero-mean, independent, identically-distributed random variables with sample mean X̄ = (1/n) Σ_i X_i and sample variance σ̂² = (1/n) Σ_i (X_i − X̄)².
Then the t-statistic t = √n X̄ / σ̂ has a distribution which is approximately normal, with a CDF error (Equation (18)) bounded by a term that decays as O(n^{-1/2}).
See Appendix, Section A.2. ∎
showing that the distribution of X̄ approaches a normal distribution whose standard deviation is σ̂/√n, as measured from the sample.
To apply this to our test, let X_i = Λ_i − Λ, so that the X_i are zero-mean, i.i.d. variables. If instead of all N samples we extract only the subset of b samples corresponding to our minibatch, we can connect X̄ with our Δ* term, since Δ* is a constant plus the minibatch mean of the Λ_i. We can now substitute into Equation (18) and displace by the mean; the resulting upper bound can be expressed in terms of minibatch moment estimates. Corollary 1 shows that the distribution of Δ* approximates a normal distribution with mean Δ and variance σ². Furthermore, it bounds the error with estimable quantities: the second and third absolute moments can be estimated as means of |X_i|² and |X_i|³, respectively, on each minibatch. We expect this will often be accurate enough on minibatches with hundreds of points, but otherwise bootstrap confidence intervals can be computed.
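The moment quantities in such a bound are cheap to estimate from a single minibatch. A sketch with hypothetical data follows; the Berry-Esseen-style constant here is illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(3)
lam_mb = rng.normal(0.0, 0.01, size=300)   # hypothetical minibatch Lambda_i

xc = lam_mb - lam_mb.mean()                # centered values X_i
sigma_hat = np.sqrt(np.mean(xc**2))        # second-moment estimate
rho_hat = np.mean(np.abs(xc)**3)           # third absolute moment estimate

b = len(lam_mb)
# CLT error estimate for the studentized mean, decaying as O(b^{-1/2});
# the leading constant is omitted (illustrative scale only).
bound = rho_hat / (sigma_hat**3 * np.sqrt(b))
```

For approximately normal Λ_i, the ratio ρ/σ³ is about 1.6, so the bound shrinks as roughly 1.6/√b.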
5.2 Adding Random Variables
We next relate the CDFs of distributions and show that bounds are preserved after adding random variables.
Let F₁ and F₂ be two CDFs satisfying |F₁(x) − F₂(x)| ≤ ε for x in some real range. Let f_Y be the density of another random variable Y. Let G₁ be the convolution F₁ ∗ f_Y and G₂ be the convolution F₂ ∗ f_Y. Then G₁ (resp. G₂) is the CDF of the sum of independent random variables with CDF F₁ (resp. F₂) and Y with density f_Y. Then |G₁(x) − G₂(x)| ≤ ε.
See Appendix, Section A.3. ∎
From Lemma 4, we have the following Corollary:
If the CDF of Δ* is everywhere within ε of the CDF of N(Δ, σ²), then the acceptance probability of our test satisfies
|P(Δ* + X_norm + X_corr > 0) − g(Δ)| ≤ ε,
where g is the standard logistic function, and X_norm and X_corr are generated as per Algorithm 1.
See Appendix, Section A.4. ∎
5.3 Bounds on the Stationary Distribution
Bounds on the error of an M-H test imply bounds on the stationary distribution of the Markov chain under appropriate conditions. Such bounds were derived in both (Korattikara et al., 2014) and (Bardenet et al., 2014). We include the result from (Korattikara et al., 2014) (Theorem 1) here. Let d_v(P, Q) denote the total variation distance between two distributions P and Q. Let T₀ denote the transition kernel of the exact Markov chain, S₀ denote the exact posterior distribution, and S_ε denote the stationary distribution of the approximate transition kernel.
If T₀ satisfies the contraction condition d_v(P T₀, S₀) ≤ η d_v(P, S₀) for some constant η ∈ [0, 1) and all probability distributions P, then
d_v(S_ε, S₀) ≤ ε / (1 − η),
where ε is the bound on the error in the acceptance test.
Here we compare with the most similar prior works, (Korattikara et al., 2014) and Bardenet et al. (2014). In (Korattikara et al., 2014), an asymptotic CLT is used to argue that a modified standard M-H test can be used on subsets of the data. This assumes knowledge of a dataset-wide mean at each iteration (it depends on the current and proposed states). Determining it exactly requires a scan over the entire dataset, or some model-specific bounds. (Korattikara et al., 2014) also propose a conservative variant which makes a conservative assumption and avoids the scan. We refer to the conservative version as AustereMH(c) and the non-conservative variant as AustereMH(nc). We analyze both in this section.
In (Bardenet et al., 2014), concentration bounds are used with a similar modification to the standard M-H test (the MHSubLhd method). For MHSubLhd, the required global bound once again depends on the current and proposed states, and so must be recomputed at each step or estimated in a model-specific way. We obtained sample code for both methods from the authors, and found that both AustereMH(nc) and MHSubLhd scanned the entire dataset at each iteration to derive these bounds. We do not include the cost of doing this in our experiments, since otherwise there would be no improvement over testing the full dataset. However, it should be kept in mind that such bounds must be provided to these methods. Our test, by contrast, uses a quantitative form of the CLT which relies on measurable statistics from a single minibatch. It therefore requires no dataset-wide scans, and can be used, e.g., on streams of data.
We benchmark MHminibatch against AustereMH and MHSubLhd. Hyperparameters for the baseline methods were optimized using a grid search over minibatch sizes and per-test thresholds, as described in Appendix C.2.1. Throughout our descriptions, we refer to a trial as the period when an algorithm collects all its desired samples, generally 3000 or 5000.
6.1 Mixture of Gaussians
This model is adapted from (Welling and Teh, 2011) by increasing the number of samples to 1 million. The parameters are θ = (θ_1, θ_2), and the generation process is
θ_1 ~ N(0, σ_1²), θ_2 ~ N(0, σ_2²), x_i ~ 0.5 N(θ_1, σ_x²) + 0.5 N(θ_1 + θ_2, σ_x²).
We set σ_1² = 10, σ_2² = 1, and σ_x² = 2, and fix (θ_1, θ_2) = (0, 1). The original paper sampled 100 data points and estimated the posterior. We are interested in performance on larger problems, and so sampled 1,000,000 points to form the posterior of θ with the same prior from Equation (22). This produces a much sharper posterior with two very narrow peaks. Our goal is to reproduce the original posterior, so we raise the temperature accordingly. Taking logs, we get the target shown at the far right of Figure 1.
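A sketch of the data generation and tempered log posterior under these settings (our own variable names; N reduced here for speed, and the temperature left as a parameter):

```python
import numpy as np

rng = np.random.default_rng(4)
s1, s2, sx = 10.0, 1.0, 2.0          # prior and likelihood variances
th1, th2 = 0.0, 1.0                  # true parameters
N = 100_000                          # smaller than the paper's 1e6 for speed

# Draw each point from one of the two mixture components.
z = rng.random(N) < 0.5
x = np.where(z, rng.normal(th1, np.sqrt(sx), N),
                rng.normal(th1 + th2, np.sqrt(sx), N))

def log_post(theta1, theta2, x, temp=1.0):
    """Unnormalized tempered log posterior of the two-component mixture."""
    lik = 0.5 * np.exp(-0.5 * (x - theta1)**2 / sx) \
        + 0.5 * np.exp(-0.5 * (x - theta1 - theta2)**2 / sx)
    log_prior = -0.5 * theta1**2 / s1 - 0.5 * theta2**2 / s2
    return (np.sum(np.log(lik)) + log_prior) / temp
```

The mixture likelihood is symmetric under (θ_1, θ_2) → (θ_1 + θ_2, −θ_2), which is the source of the two posterior modes; only the priors break the tie slightly.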
We benchmark with AustereMH(c) and MHSubLhd. We initialized MHminibatch and MHSubLhd with the same initial minibatch size. For AustereMH(c), we set the error bound to 0.005. For MHSubLhd, we increase minibatch sizes geometrically and use the parameter settings listed in Appendix C.2.1. All methods collect 3000 samples using a random-walk proposer with a fixed covariance matrix, which means the M-H test is responsible for shaping the sample distribution.
Figure 1 shows scatter plots of the resulting samples for the three methods, with darker regions indicating a greater density of points. There are no obvious differences, showing that MHminibatch reaches an acceptable posterior. We further measure the similarity between each set of samples and the actual posterior. Due to space constraints, results are in Appendix C.2.2.
Figure 2 shows that MHminibatch dominates in terms of speed and efficiency. The histograms of the (final) minibatch sizes used at each iteration show that our method consumes significantly less data; the distribution is short-tailed and the mean is 172, more than an order of magnitude better than the other two methods (whose averages are 12562 and 67508). We further ran 10 runs of the mixture-of-Gaussians experiment and report minibatch sizes in Table 1. Sizes correspond to the running times of the methods, excluding the likelihood computation over all data points for AustereMH(nc) and MHSubLhd, which would drastically increase running time.
Table 1: Average minibatch sizes (± one standard deviation) on the Gaussian mixture model. The averages are taken over 10 independent trials (3000 samples each).
6.2 Logistic Regression
We next test logistic regression for the binary classification of 1s versus 7s on the MNIST (LeCun and Cortes, 1998) dataset and (a subset of) infinite MNIST (Loosli et al., 2007). For the former, extracting all 1s and 7s resulted in 13,000 training samples, and for the latter, we used 87,000 additional (augmented) 1s and 7s to get 100,000 training samples. Both datasets use the same test set, with 2,163 samples. Henceforth, we call them MNIST-13k and MNIST-100k, respectively.
For all methods, we impose a uniform prior on θ and again use a random-walk proposer, with a covariance matrix tuned separately for MNIST-13k and MNIST-100k. The default temperature setting is a constant for both MNIST-13k and MNIST-100k. Performance of all methods implicitly relies on the step size and temperature: setting the temperature too low or the step size too high will result in slow convergence for all methods. For MNIST-13k, each method generated 5000 samples for ten independent trials; due to MNIST-100k's higher computational requirements, the methods generated 3000 samples for five independent trials. For additional parameter settings and an investigation of tuning step sizes, see Appendix C.3.
For MHSubLhd, we tried to use the symbolic bound described in (Bardenet et al., 2014), but it was too high and provided no performance benefit. Instead, we use the empirical bound computed from the entire dataset.
The first two subplots of Figure 3 display the prediction accuracy on both datasets for all methods as a function of the cumulative training points processed. (The curves do not span the same length over the x-axis since the methods consume different amounts of data.) To generate the curves, we use each sampled vector θ in turn as the logistic regression parameter. The results indicate that our test is more efficient, obtaining convergence more than an order of magnitude faster than AustereMH(nc) and several orders of magnitude faster than AustereMH(c) and MHSubLhd. We also observe the advantage of a higher temperature in the third plot of Figure 3, which shows average performance and one standard deviation for MHminibatch over 10 trials. During the exploration period, the accuracy rapidly increases; after 400 samples, we switch the temperature to 1, which requires the step size to decrease, hence the smaller changes in accuracy.
Figure 4 shows log-log histograms of minibatch sizes for the methods on MNIST-100k. (Figure 5 in Appendix C.3 contains results for MNIST-13k.) The histograms represent one representative trial; Table 2 contains the average of the per-trial average minibatch sizes (± one standard deviation) across all trials. MHminibatch, with average minibatch sizes of 125.4 and 216.5 for MNIST-13k and MNIST-100k, respectively, consumes more than 7x and 4x fewer data points than the next-best method, AustereMH(nc). We reiterate, however, that both AustereMH(nc) and MHSubLhd require computing dataset-wide quantities over all N points at each iteration; our results here do not count that extra data consumption. Only our method and AustereMH(c) rely solely on the minibatch of data at each iteration.
7 Conclusions and Discussions
We have derived an M-H test for minibatch MCMC which approximates full data tests. We present theoretical results and experimentally show the benefits of our test on Gaussian mixtures and a logistic regression experiment.
A priority is to extend our work to methods such as Hamiltonian Monte Carlo and Langevin Dynamics which use efficient but asymmetric proposals. While there are various approaches to symmetrizing these proposals, they have high cost in the context of minibatch MCMC. Instead we plan to extend our method to log proposal ratios which have similar structure (whole-dataset mean plus additive noise) to the log probability ratio. These can be similarly absorbed in the Barker test.
Other possibilities for future work include integrating our algorithm with (Korattikara et al., 2014) by applying both tests each iteration, utilizing the variance reduction techniques suggested in (Chen and Ghahramani, 2016), and providing a recipe for how to use our algorithm following the framework of (Ma et al., 2015).
- Ahn et al. (2012) Sungjin Ahn, Anoop Korattikara Balan, and Max Welling. Bayesian Posterior Sampling via Stochastic Gradient Fisher Scoring. In Proceedings of the 29th International Conference on Machine Learning (ICML), 2012.
- Bardenet et al. (2014) Rémi Bardenet, Arnaud Doucet, and Chris Holmes. Towards Scaling up Markov chain Monte Carlo: An Adaptive Subsampling Approach. In Proceedings of the 31st International Conference on Machine Learning (ICML), 2014.
- Bardenet et al. (2016) Rémi Bardenet, Arnaud Doucet, and Chris Holmes. On Markov Chain Monte Carlo Methods for Tall Data. Journal of Machine Learning Research (JMLR), 2016.
- Barker (1965) A. A. Barker. Monte-Carlo Calculations of the Radial Distribution Functions for a Proton-Electron Plasma. Australian Journal of Physics, 18:119–133, 1965.
- Bentkus et al. (1997) V. Bentkus, F. Götze, and W. R. van Zwet. An Edgeworth Expansion for Symmetric Statistics. Annals of Statistics, 25(2), 1997.
- Brooks et al. (2011) Steve Brooks, Andrew Gelman, Galin Jones, and Xiao-Li Meng. Handbook of Markov Chain Monte Carlo. CRC press, 2011.
- Chen et al. (2014) T. Chen, E.B. Fox, and C. Guestrin. Stochastic Gradient Hamiltonian Monte Carlo. In Proceedings of the 31st International Conference on Machine Learning (ICML), 2014.
- Chen and Ghahramani (2016) Yutian Chen and Zoubin Ghahramani. Scalable Discrete Sampling as a Multi-Armed Bandit Problem. In Proceedings of the 33nd International Conference on Machine Learning, ICML, 2016.
- Dupuy and Bach (2016) Christophe Dupuy and Francis Bach. Online but Accurate Inference for Latent Variable Models with Local Gibbs Sampling. arXiv preprint arXiv:1603.02644, 2016.
- Gilks and Spiegelhalter (1996) W.R. Gilks and DJ Spiegelhalter. Markov chain Monte Carlo in practice. Chapman & Hall/CRC, 1996.
- Hastings (1970) W. K. Hastings. Monte Carlo Sampling Methods Using Markov Chains and Their Applications. Biometrika, 57:97–109, 1970.
- Korattikara et al. (2014) Anoop Korattikara, Yutian Chen, and Max Welling. Austerity in MCMC Land: Cutting the Metropolis-Hastings Budget. In Proceedings of the 31st International Conference on Machine Learning (ICML), 2014.
- LeCun and Cortes (1998) Yann LeCun and Corinna Cortes. MNIST Handwritten Digit Database. 1998. URL http://yann.lecun.com/exdb/mnist/.
- Loosli et al. (2007) Gaëlle Loosli, Stéphane Canu, and Léon Bottou. Training Invariant Support Vector Machines using Selective Sampling. In Léon Bottou, Olivier Chapelle, Dennis DeCoste, and Jason Weston, editors, Large Scale Kernel Machines, pages 301–320. MIT Press, Cambridge, MA, 2007.
- Ma et al. (2015) Y. Ma, T. Chen, and E.B. Fox. A Complete Recipe for Stochastic Gradient MCMC. In Advances in Neural Information Processing Systems 28, 2015.
- Maclaurin and Adams (2014) Dougal Maclaurin and Ryan P. Adams. Firefly Monte Carlo: Exact MCMC with Subsets of Data. In Proceedings of the Thirtieth Conference on Uncertainty in Artificial Intelligence (UAI), 2014.
- Metropolis et al. (1953) Nicholas Metropolis, Arianna W. Rosenbluth, Marshall N. Rosenbluth, Augusta H. Teller, and Edward Teller. Equation of State Calculations by Fast Computing Machines. The Journal of Chemical Physics, 21, 1953.
- Neal (2010) Radford M. Neal. MCMC Using Hamiltonian Dynamics. Handbook of Markov Chain Monte Carlo, 54:113–162, 2010.
- Novak (2005) Y. Novak. On Self-Normalized Sums and Student’s Statistic. Theory of Probability and its Applications, 49(2):336–344, 2005.
- Roberts and Rosenthal (2001) Gareth O. Roberts and Jeffrey S. Rosenthal. Optimal Scaling for Various Metropolis-Hastings Algorithms. Statistical Science, 16(4):351–367, 2001.
- Welling and Teh (2011) Max Welling and Yee Whye Teh. Bayesian Learning via Stochastic Gradient Langevin Dynamics. In Proceedings of the 28th International Conference on Machine Learning (ICML), 2011.
Appendix A Proofs of Lemmas and Corollaries
A.1 Proof of Lemma 1
Choose (event 1) and filtered for matching sign (event 2). As discussed in Lemma 1, both and have variance . If we denote as the CDF of the standard normal distribution, then the former event occurs with probability . The latter event, because we restrict signs, occurs with probability .
These events together guarantee that is negative by inspection of Equation (23) below. This implies that we can find a so that equals . Specifically, choose to satisfy . Using and Equation (5), we see that
Since the have mean 0.5, the resulting test with our chosen will never correctly succeed and must use all data points. Furthermore, if we sample values of near enough to , the terms in parenthesis will not be sufficiently different from 0.5 to allow the test to succeed.
The choices above for and guarantee that
Next, consider the range of values near :
The size of the range in is at least and occurs with probability at least . With in this range, we rewrite the test as:
so that, as in Equation (26), the LHS has expected value zero. Given our choice of intervals for the variables, we can compute the range for the right-hand side (RHS), under the assumption stated below. (If the assumption fails, the range would differ slightly, but this does not matter for the purposes of our analysis.)
Thus, the RHS is in . The standard deviation of the LHS given the interval constraints is at least . Consequently, the gap between the LHS and RHS in Equation (29) is at most standard deviations, limiting the range in which the test will be able to “succeed” without requiring more samples.
The samples , and are drawn independently and so the probability of the conjunction of these events is .
A.2 Proof of Lemma 3
A.3 Proof of Lemma 4
We first observe that G_k(x) = ∫ F_k(x − y) f_Y(y) dy for k = 1, 2, and since |F₁ − F₂| ≤ ε it follows that:
|G₁(x) − G₂(x)| ≤ ∫ |F₁(x − y) − F₂(x − y)| f_Y(y) dy ≤ ε ∫ f_Y(y) dy = ε.
A.4 Proof of Corollary 2
We apply Lemma 4 twice. First, take F₁ and F₂ to be the CDF of Δ* and the corresponding Gaussian CDF, and convolve with the distribution of X_norm, which has a Gaussian density. This yields the next iteration of F₁ and F₂. Now we convolve with the distribution of X_corr.
Both steps preserve the error bound ε. Finally, the exact-case CDF is a logistic CDF centered at −Δ, and so the acceptance probability of the exact test is g(Δ) for the logistic function g. We conclude that the probability of acceptance for the actual test differs from the exact test by at most ε.
A.5 Improved Error Bounds Based on Skew Estimation
We show that the CLT error bound can be improved to O(n^{-1}) using a more precise limit distribution, under an additional assumption. Let μ_k denote the k-th moment, and β_k the k-th absolute moment, of X₁. Suppose Cramer's condition holds: lim sup_{|t|→∞} |E[exp(i t X₁)]| < 1.
Then Equation 2.2 in Bentkus et al.'s work on Edgeworth expansions (Bentkus et al., 1997) provides the following result.
Let X₁, …, X_n be a set of zero-mean, independent, identically-distributed random variables with sample mean X̄ and with σ̂ defined as in Lemma 3. If X₁ satisfies Cramer's condition, then the t-statistic has a skew-corrected limit CDF whose error decays as O(n^{-1}).
Lemma 6 shows that the average of the X_i has a more precise, skewed CDF limit, where the skew term has weight proportional to a measure of skew derived from the moments. Note that if the X_i are symmetric, the weight of the correction term is zero, and the CDF of the average of the X_i converges to the normal CDF at a rate of O(n^{-1}).
Here the limit is a normal CDF plus a correction term that decays as O(n^{-1/2}). Importantly, the correction term can be rewritten in terms of the normal CDF and its derivatives, from which we see that the limit is a linear combination of these quantities.