Stochastic Gradient MCMC with Stale Gradients

10/21/2016 · Changyou Chen, et al. · Duke University, Google

Stochastic gradient MCMC (SG-MCMC) has played an important role in large-scale Bayesian learning, with well-developed theoretical convergence properties. In such applications of SG-MCMC, it is becoming increasingly popular to employ distributed systems, where stochastic gradients are computed based on some outdated parameters, yielding what are termed stale gradients. While stale gradients could be directly used in SG-MCMC, their impact on convergence properties has not been well studied. In this paper we develop theory to show that while the bias and MSE of an SG-MCMC algorithm depend on the staleness of stochastic gradients, its estimation variance (relative to the expected estimate, based on a prescribed number of samples) is independent of it. In a simple Bayesian distributed system with SG-MCMC, where stale gradients are computed asynchronously by a set of workers, our theory indicates a linear speedup on the decrease of estimation variance w.r.t. the number of workers. Experiments on synthetic data and deep neural networks validate our theory, demonstrating the effectiveness and scalability of SG-MCMC with stale gradients.


1 Introduction

The pervasiveness of big data has made scalable machine learning increasingly important, especially for deep models. A basic technique is to adopt stochastic optimization algorithms [1], e.g., stochastic gradient descent (SGD) and its extensions [2]. In each iteration of stochastic optimization, a minibatch of data is used to evaluate the gradients of the objective function and update model parameters (errors are introduced in the gradients because they are computed on minibatches rather than the entire dataset; since the minibatches are typically selected at random, this yields the term "stochastic" gradient). This is highly scalable because processing a minibatch of data in each iteration is relatively cheap compared to analyzing the entire (large) dataset at once. Under certain conditions, stochastic optimization is guaranteed to converge to a (local) optimum [1]. Because of its scalability, the minibatch strategy has recently been extended to Markov Chain Monte Carlo (MCMC) Bayesian sampling methods, yielding SG-MCMC [3, 4, 5].
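As a minimal illustration of the minibatch strategy, the following Python sketch runs SGD on a toy quadratic objective; the model, data, and constants are our own illustrative choices, not the paper's.

import numpy as np

def sgd(stoch_grad, theta0, data, n=10, h=1e-4, iters=2000):
    # each iteration touches only a size-n minibatch, not all N points
    theta, N = theta0, len(data)
    for _ in range(iters):
        batch = data[np.random.choice(N, n, replace=False)]
        theta -= h * stoch_grad(theta, batch, N)  # noisy but unbiased step
    return theta

# Toy objective U(theta) = 0.5 * sum_i (theta - d_i)^2; its full gradient
# sum_i (theta - d_i) is estimated from the minibatch, rescaled by N/n.
data = np.random.randn(1000) + 2.0
stoch_grad = lambda th, b, N: (N / len(b)) * np.sum(th - b)
print(sgd(stoch_grad, 0.0, data))  # converges near the data mean, 2.0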

In order to handle large-scale data, distributed stochastic optimization algorithms have been developed, for example [6], to further improve scalability. In a distributed setting, a cluster of machines with multiple cores cooperate with each other, typically through an asynchronous scheme, for scalability [7, 8, 9]. A downside of an asynchronous implementation is that stale gradients must be used in parameter updates (“stale gradients” are stochastic gradients computed based on outdated parameters, instead of the latest parameters; they are easier to compute in a distributed system, but introduce additional errors relative to traditional stochastic gradients). While some theory has been developed to guarantee the convergence of stochastic optimization with stale gradients [10, 11, 12], little analysis has been done in a Bayesian setting, where SG-MCMC is applied. Distributed SG-MCMC algorithms share characteristics with distributed stochastic optimization, and thus are highly scalable and suitable for large-scale Bayesian learning. Existing Bayesian distributed systems with traditional MCMC methods, such as [13], usually employ stale statistics instead of stale gradients, where stale statistics are summarized based on outdated parameters, e.g., outdated topic distributions in distributed Gibbs sampling [13]. Little theory exists to guarantee the convergence of such methods. For existing distributed SG-MCMC methods, typically only standard stochastic gradients are used, for limited problems such as matrix factorization, without rigorous convergence theory [14, 15, 16].

In this paper, by extending techniques from standard SG-MCMC [17], we develop theory to study the convergence behavior of SG-MCMC with stale gradients (S²G-MCMC). Our goal is to evaluate the posterior average of a test function $\phi(\mathbf{x})$, defined as $\bar{\phi} \triangleq \int_{\mathcal{X}} \phi(\mathbf{x}) \rho(\mathbf{x}) d\mathbf{x}$, where $\rho(\mathbf{x})$ is the desired posterior distribution with the possibly augmented model parameters $\mathbf{x}$ (see Section 2). In practice, S²G-MCMC generates $L$ samples $\{\mathbf{x}_l\}_{l=1}^{L}$ and uses the sample average $\hat{\phi}_L \triangleq \frac{1}{L} \sum_{l=1}^{L} \phi(\mathbf{x}_l)$ to approximate $\bar{\phi}$. We measure how well $\hat{\phi}_L$ approximates $\bar{\phi}$ in terms of bias, MSE and estimation variance, defined as $\left| \mathbb{E}\hat{\phi}_L - \bar{\phi} \right|$, $\mathbb{E}\left( \hat{\phi}_L - \bar{\phi} \right)^2$ and $\mathbb{E}\left( \hat{\phi}_L - \mathbb{E}\hat{\phi}_L \right)^2$, respectively. From the definitions, the bias and MSE characterize how accurately $\hat{\phi}_L$ approximates $\bar{\phi}$, and the variance characterizes how fast $\hat{\phi}_L$ converges to its own expectation (for a prescribed number of samples $L$). Our theoretical results show that while the bias and MSE depend on the staleness of the stochastic gradients, the variance is independent of it. In a simple asynchronous Bayesian distributed system with S²G-MCMC, our theory indicates a linear speedup in the decrease of the variance w.r.t. the number of workers used to calculate the stale gradients, while maintaining the same optimal bias level as standard SG-MCMC. We validate our theory on several synthetic experiments and deep neural network models, demonstrating the effectiveness and scalability of the proposed S²G-MCMC framework.

Related Work

Using stale gradients is a standard setup in distributed stochastic optimization systems. Representative algorithms include, but are not limited to, the ASYSG-CON [6] and HOGWILD! algorithms [18], and some more recent developments [19, 20]. Furthermore, recent research on stochastic optimization has been extended to non-convex problems with provable convergence rates [12]. In Bayesian learning with MCMC, existing work has focused on running parallel chains on subsets of data [21, 22, 23, 24], and little if any effort has been made to use stale stochastic gradients, the setting considered in this paper.

2 Stochastic Gradient MCMC

Throughout this paper, we denote vectors as bold lower-case letters, and matrices as bold upper-case letters. For example, $\mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ means a multivariate Gaussian distribution with mean $\boldsymbol{\mu}$ and covariance $\boldsymbol{\Sigma}$. In the analysis we consider algorithms with fixed stepsizes for simplicity; decreasing-stepsize variants can be addressed similarly, as in [17].

The goal of SG-MCMC is to generate random samples from a posterior distribution $p(\boldsymbol{\theta} \mid \mathbf{D}) \propto p(\boldsymbol{\theta}) \prod_{i=1}^{N} p(d_i \mid \boldsymbol{\theta})$, which are used to evaluate a test function. Here $\boldsymbol{\theta} \in \mathbb{R}^d$ represents the parameter vector and $\mathbf{D} = \{d_1, \ldots, d_N\}$ represents the data, $p(\boldsymbol{\theta})$ is the prior distribution, and $p(d_i \mid \boldsymbol{\theta})$ the likelihood for $d_i$. SG-MCMC algorithms are based on a class of stochastic differential equations, called Itô diffusions, defined as

d\mathbf{x}_t = F(\mathbf{x}_t)\, dt + g(\mathbf{x}_t)\, d\mathbf{w}_t    (1)

where $\mathbf{x}_t \in \mathbb{R}^n$ represents the model states, typically augmenting $\boldsymbol{\theta}$ such that $\boldsymbol{\theta} \subseteq \mathbf{x}$ and $n \geq d$; $t$ is the time index, $\mathbf{w}_t \in \mathbb{R}^n$ is $n$-dimensional Brownian motion, and the functions $F: \mathbb{R}^n \to \mathbb{R}^n$ and $g: \mathbb{R}^n \to \mathbb{R}^{n \times n}$ are assumed to satisfy the usual Lipschitz continuity condition [25].

For appropriate functions $F$ and $g$, the stationary distribution, $\rho(\mathbf{x})$, of the Itô diffusion (1) has a marginal distribution equal to the posterior distribution $p(\boldsymbol{\theta} \mid \mathbf{D})$ [26]. For example, denoting the unnormalized negative log-posterior as $U(\boldsymbol{\theta}) \triangleq -\log p(\boldsymbol{\theta}) - \sum_{i=1}^{N} \log p(d_i \mid \boldsymbol{\theta})$, the stochastic gradient Langevin dynamics (SGLD) method [3] is based on 1st-order Langevin dynamics, with $\mathbf{x} = \boldsymbol{\theta}$, $F(\mathbf{x}_t) = -\nabla_{\boldsymbol{\theta}} U(\boldsymbol{\theta})$, and $g(\mathbf{x}_t) = \sqrt{2}\, \mathbf{I}_d$, where $\mathbf{I}_d$ is the $d \times d$ identity matrix. The stochastic gradient Hamiltonian Monte Carlo (SGHMC) method [4] is based on 2nd-order Langevin dynamics, with $\mathbf{x} = (\boldsymbol{\theta}, \mathbf{q})$, $F(\mathbf{x}_t) = \left( \mathbf{q}, -B\mathbf{q} - \nabla_{\boldsymbol{\theta}} U(\boldsymbol{\theta}) \right)$, and $g(\mathbf{x}_t) = \sqrt{2B}\, \operatorname{diag}(\mathbf{0}, \mathbf{I}_d)$ for a scalar $B > 0$; $\mathbf{q}$ is an auxiliary variable known as the momentum [4, 5]. Diffusion forms for other SG-MCMC algorithms, such as the stochastic gradient thermostat [5] and variants with Riemannian information geometry [27, 26, 28], are defined similarly.

In order to efficiently draw samples from the continuous-time diffusion (1), SG-MCMC algorithms typically apply two approximations: i) instead of analytically integrating infinitesimal increments $dt$, numerical integration over a small step size $h$ is used to approximate the integration of the true dynamics; ii) instead of working with the full gradient $\nabla_{\boldsymbol{\theta}} U(\boldsymbol{\theta})$, a stochastic gradient $\nabla_{\boldsymbol{\theta}} \tilde{U}_l(\boldsymbol{\theta})$, defined as

\nabla_{\boldsymbol{\theta}} \tilde{U}_l(\boldsymbol{\theta}) \triangleq -\nabla_{\boldsymbol{\theta}} \log p(\boldsymbol{\theta}) - \frac{N}{n} \sum_{i \in \pi_l} \nabla_{\boldsymbol{\theta}} \log p(d_i \mid \boldsymbol{\theta})    (2)

is calculated from a minibatch of size $n$, where $\pi_l$ is a random subset of $\{1, 2, \ldots, N\}$. Note that to match the time index in (1), parameters have been and will be indexed by "$lh$" in the $l$-th iteration.
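To make the two approximations concrete, here is a minimal SGLD sketch in Python (Euler integrator; the conjugate Gaussian prior/likelihood pair is an illustrative assumption, not a prescription from the paper):

import numpy as np

def stochastic_grad_U(theta, data, n, prior_var=1.0):
    # Eq. (2): minibatch estimate of the gradient of the negative log-posterior
    N = len(data)
    batch = data[np.random.choice(N, n, replace=False)]
    grad_log_prior = -theta / prior_var      # Gaussian prior N(0, prior_var)
    grad_log_lik = np.sum(batch - theta)     # N(theta, 1) likelihood terms
    return -grad_log_prior - (N / n) * grad_log_lik

def sgld_step(theta, data, n, h):
    # Euler discretization of 1st-order Langevin dynamics:
    # theta <- theta - h * grad_U_tilde(theta) + N(0, 2h)
    noise = np.sqrt(2.0 * h) * np.random.randn()
    return theta - h * stochastic_grad_U(theta, data, n) + noise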

3 Stochastic Gradient MCMC with Stale Gradients

In this section, we extend SG-MCMC to the stale-gradient setting commonly encountered in asynchronous distributed systems [7, 8, 9], and develop theory to analyze its convergence properties.

3.1 Stale stochastic gradient MCMC (S²G-MCMC)

The setting for S²G-MCMC is the same as for standard SG-MCMC described above, except that the stochastic gradient (2) is replaced with a stochastic gradient evaluated at an outdated parameter $\boldsymbol{\theta}_{l-\tau_l}$ instead of the latest version (see Appendix A for an example):

\nabla_{\boldsymbol{\theta}} \tilde{U}_l(\boldsymbol{\theta}_{l-\tau_l}) \triangleq -\nabla_{\boldsymbol{\theta}} \log p(\boldsymbol{\theta}_{l-\tau_l}) - \frac{N}{n} \sum_{i \in \pi_l} \nabla_{\boldsymbol{\theta}} \log p(d_i \mid \boldsymbol{\theta}_{l-\tau_l})    (3)

where $\tau_l \geq 0$ denotes the staleness of the parameter used to calculate the stochastic gradient in the $l$-th iteration. A distinctive difference between S²G-MCMC and SG-MCMC is that stale stochastic gradients are no longer unbiased estimates of the true gradients. This leads to additional challenges in developing convergence bounds, one of the main contributions of this paper.

  Input: $\mathbf{x}_l = (\boldsymbol{\theta}_l, \mathbf{q}_l)$, stale gradient $\nabla_{\boldsymbol{\theta}} \tilde{U}(\boldsymbol{\theta}_{l-\tau_l})$, step size $h$, friction constant $B$, staleness bound $\tau$
  Output: $\mathbf{x}_{l+1} = (\boldsymbol{\theta}_{l+1}, \mathbf{q}_{l+1})$
  if $\tau_l \leq \tau$ then
     Draw $\boldsymbol{\zeta}_l \sim \mathcal{N}(\mathbf{0}, \mathbf{I}_d)$;
     $\mathbf{q}_{l+1} = (1 - Bh)\, \mathbf{q}_l - \nabla_{\boldsymbol{\theta}} \tilde{U}(\boldsymbol{\theta}_{l-\tau_l})\, h + \sqrt{2Bh}\, \boldsymbol{\zeta}_l$;
     $\boldsymbol{\theta}_{l+1} = \boldsymbol{\theta}_l + \mathbf{q}_{l+1} h$;
  end if
Algorithm 1 State update of SGHMC with the stale stochastic gradient

We assume bounded staleness for all $\tau_l$'s, i.e., $\tau_l \leq \tau$ for some constant $\tau$. As an example, Algorithm 1 describes the update rule of stale-SGHMC in each iteration with the Euler integrator, where a stale gradient with staleness $\tau_l$ is used.
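For reference, a direct Python transcription of Algorithm 1 might look as follows (a sketch; the variable names and the array-friendly noise shape are our choices):

import numpy as np

def sghmc_step(theta, q, stale_grad, h, B):
    # One SGHMC Euler update driven by a stale stochastic gradient:
    # stale_grad was computed at theta_{l - tau_l}, not the current theta.
    zeta = np.random.randn(*np.shape(theta))   # injected Gaussian noise
    q = (1.0 - B * h) * q - stale_grad * h + np.sqrt(2.0 * B * h) * zeta
    theta = theta + q * h
    return theta, q

The server can apply this update as soon as any worker delivers a gradient, regardless of how stale that gradient is (up to the bound $\tau$).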

3.2 Convergence analysis

This section analyzes the convergence properties of the basic S²G-MCMC; an extension to multiple chains is discussed in Section 3.3. It is shown that the bias and MSE depend on the staleness parameter $\tau$, while the variance is independent of it, yielding a significant speedup in Bayesian distributed systems.

Bias and MSE

In [17], the bias and MSE of standard SG-MCMC algorithms with a $K$th-order integrator were analyzed, where the order of an integrator reflects how accurately an SG-MCMC algorithm approximates the corresponding continuous diffusion. Specifically, if evolving $\mathbf{x}_t$ with a numerical integrator using discrete time increment $h$ induces an error bounded by $O(h^K)$, the integrator is called a $K$th-order integrator; e.g., the popular Euler method used in SGLD [3] is a 1st-order integrator. In particular, [17] proved the bounds stated in Lemma 1.

Lemma 1 ([17]).

Under standard assumptions (see Appendix B), the bias and MSE of SG-MCMC with a $K$th-order integrator at time $T = hL$ are bounded as:

Bias: $\left| \mathbb{E}\hat{\phi}_L - \bar{\phi} \right| = O\!\left( \frac{\sum_l \left\| \mathbb{E}\Delta V_l \right\|}{L} + \frac{1}{Lh} + h^K \right)$
MSE: $\mathbb{E}\left( \hat{\phi}_L - \bar{\phi} \right)^2 = O\!\left( \frac{\frac{1}{L}\sum_l \mathbb{E}\left\| \Delta V_l \right\|^2}{L} + \frac{1}{Lh} + h^{2K} \right)$

Here $\Delta V_l \triangleq \mathcal{L} - \tilde{\mathcal{L}}_l$, where $\mathcal{L}$ is the generator of the Itô diffusion (1), defined as

\mathcal{L} f(\mathbf{x}_t) \triangleq \lim_{h \to 0^+} \frac{\mathbb{E}\left[ f(\mathbf{x}_{t+h}) \right] - f(\mathbf{x}_t)}{h} = \left( F(\mathbf{x}_t) \cdot \nabla_{\mathbf{x}} + \frac{1}{2} \left( g(\mathbf{x}_t) g(\mathbf{x}_t)^\top \right) : \nabla_{\mathbf{x}} \nabla_{\mathbf{x}}^\top \right) f(\mathbf{x}_t)    (4)

for any compactly supported twice differentiable function $f: \mathbb{R}^n \to \mathbb{R}$, where $h \to 0^+$ means $h$ approaches zero along the positive real axis, $\mathbf{a} \cdot \mathbf{b} \triangleq \mathbf{a}^\top \mathbf{b}$, and $\mathbf{A} : \mathbf{B} \triangleq \operatorname{tr}(\mathbf{A}^\top \mathbf{B})$. $\tilde{\mathcal{L}}_l$ is the same as $\mathcal{L}$ except using the stochastic gradient $\nabla_{\boldsymbol{\theta}} \tilde{U}_l$ instead of the full gradient.

We show that the bounds on the bias and MSE of S²G-MCMC share similar forms with those of SG-MCMC, but with an additional dependence on the staleness parameter. In addition to the assumptions for SG-MCMC [17] (see details in Appendix B), the following assumption is imposed.

Assumption 1.

The noise in the stochastic gradients is well-behaved, such that: 1) the stochastic gradient is unbiased, i.e., $\nabla_{\boldsymbol{\theta}} U(\boldsymbol{\theta}) = \mathbb{E}_{\pi}\left[ \nabla_{\boldsymbol{\theta}} \tilde{U}(\boldsymbol{\theta}) \right]$, where $\pi$ denotes the random permutation over $\{1, 2, \ldots, N\}$; 2) the variance of the stochastic gradient is bounded, i.e., $\mathbb{E}_{\pi} \left\| \nabla_{\boldsymbol{\theta}} U(\boldsymbol{\theta}) - \nabla_{\boldsymbol{\theta}} \tilde{U}(\boldsymbol{\theta}) \right\|^2 \leq \sigma^2$; 3) the gradient function $\nabla_{\boldsymbol{\theta}} U$ is Lipschitz (so is $\nabla_{\boldsymbol{\theta}} \tilde{U}$), i.e., $\left\| \nabla_{\boldsymbol{\theta}} U(\mathbf{x}) - \nabla_{\boldsymbol{\theta}} U(\mathbf{y}) \right\| \leq C \left\| \mathbf{x} - \mathbf{y} \right\|$ for all $\mathbf{x}, \mathbf{y}$.

In the following theorems, we omit the assumption statement for conciseness. Due to the staleness of the stochastic gradients, the term $\Delta V_l$ in S²G-MCMC now involves the difference $\nabla_{\boldsymbol{\theta}} \tilde{U}(\boldsymbol{\theta}_{(l-\tau_l)h}) - \nabla_{\boldsymbol{\theta}} U(\boldsymbol{\theta}_{lh})$, where the stale term arises from (3). The challenge is to bound the terms involving $\Delta V_l$. To this end, define $\psi$ to be a functional satisfying the Poisson equation***The existence of a nice $\psi$ is guaranteed in the elliptic/hypoelliptic SDE settings when $\mathbf{x}$ is on a torus [25].:

\frac{1}{L} \sum_{l=1}^{L} \mathcal{L} \psi(\mathbf{x}_{lh}) = \hat{\phi}_L - \bar{\phi}    (5)
Theorem 2.

After $L$ iterations, the bias of S²G-MCMC with a $K$th-order integrator is bounded, for some constant $C_1$ independent of $\{L, h, \tau\}$, as:

\left| \mathbb{E}\hat{\phi}_L - \bar{\phi} \right| \leq C_1 \left( \frac{M_1}{Lh} + M_2\, \tau h + M_3\, h^K \right)

where $M_1$, $M_2$ and $M_3$ are constants.

Theorem 3.

After $L$ iterations, the MSE of S²G-MCMC with a $K$th-order integrator is bounded, for some constant $C_2$ independent of $\{L, h, \tau\}$, as:

\mathbb{E}\left( \hat{\phi}_L - \bar{\phi} \right)^2 \leq C_2 \left( \frac{\tilde{M}_1}{Lh} + \tilde{M}_2\, \tau^2 h^2 + \tilde{M}_3\, h^{2K} \right)

where $\tilde{M}_1$, $\tilde{M}_2$ and $\tilde{M}_3$ are constants.

The theorems indicate that both the bias and the MSE depend on the staleness parameter $\tau$. For a fixed computational time, this could lead to bounds no better than standard SG-MCMC when $\tau$ is too large, i.e., the terms involving $\tau$ would dominate, as is the case in the distributed system discussed in Section 4. Nevertheless, better bounds than standard SG-MCMC could be obtained if the $\frac{1}{Lh}$ term decreases faster than the staleness terms grow in a distributed system.
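To make the tradeoff concrete, the following short calculation (our sketch, for a 1st-order integrator where the $h^K$ term merges with the $\tau h$ term) optimizes the bias bound of Theorem 2 over the stepsize:

\min_h \left\{ \frac{M_1}{Lh} + M_2 \tau h \right\} \;\Rightarrow\; h^* = \sqrt{\frac{M_1}{M_2 L \tau}}, \qquad \text{bias} \leq 2 C_1 \sqrt{\frac{M_1 M_2\, \tau}{L}} = O\!\left( \sqrt{\tau / L} \right).

The same calculation applied to the MSE bound of Theorem 3 gives $h^* \propto \tau^{-2/3} L^{-1/3}$ and an optimal MSE of $O\left( (\tau / L)^{2/3} \right)$; these are the rates used in Sections 4 and 5.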

Variance

Next we investigate the convergence behavior of the variance, $\mathbb{E}\left( \hat{\phi}_L - \mathbb{E}\hat{\phi}_L \right)^2$. Theorem 4 indicates the variance is independent of the staleness $\tau$; hence a linear speedup in the decrease of variance is always achievable when stale gradients are computed in parallel. An example is given by the Bayesian distributed system discussed in Section 4.

Theorem 4.

After $L$ iterations, the variance of S²G-MCMC with a $K$th-order integrator is bounded, for some constant $D$ independent of $\{L, h, \tau\}$, as:

\mathbb{E}\left( \hat{\phi}_L - \mathbb{E}\hat{\phi}_L \right)^2 \leq D \left( \frac{1}{Lh} + h^{2K} \right)

The variance bound is the same as for standard SG-MCMC, whereas $L$ can increase linearly w.r.t. the number of workers in a distributed setting, yielding significant variance reduction. Optimizing the variance bound w.r.t. the stepsize $h$ gives the optimal rate stated in Corollary 5.

Corollary 5.

In terms of estimation variance, the optimal convergence rate of S²G-MCMC with a $K$th-order integrator is bounded as: $\mathbb{E}\left( \hat{\phi}_L - \mathbb{E}\hat{\phi}_L \right)^2 \leq O\left( L^{-2K/(2K+1)} \right)$.

In real distributed systems, the increase of $L$ and the increase of $\tau$ in the bias and MSE bounds would typically cancel, leading to the same bias and MSE levels as standard SG-MCMC, whereas a linear speedup in the decrease of variance w.r.t. the number of workers is always achievable. More details are discussed in Section 4.

3.3 Extension to multiple parallel chains

This section extends the theory to the setting with $M$ parallel chains, each independently running an S²G-MCMC algorithm. After generating samples from the $M$ chains, an aggregation step is needed to combine the per-chain sample averages, i.e., $\{\hat{\phi}^{(m)}\}_{m=1}^{M}$, where $\hat{\phi}^{(m)}$ is computed from the $L_m$ iterations on chain $m$. For generality, we allow each chain to have its own step size, e.g., $h_m$ for chain $m$. We aggregate the sample averages as $\hat{\phi} = \sum_{m=1}^{M} \frac{T_m}{T} \hat{\phi}^{(m)}$, where $T_m \triangleq L_m h_m$ and $T \triangleq \sum_m T_m$.
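In code, the aggregation is a weighted average of the per-chain estimates; a minimal sketch (function and variable names are ours):

import numpy as np

def aggregate_chains(chain_avgs, chain_iters, chain_stepsizes):
    # chain_avgs[m]: sample average of the test function on chain m;
    # chain m is weighted by T_m / T, with T_m = L_m * h_m and T = sum_m T_m
    T_m = np.asarray(chain_iters, dtype=float) * np.asarray(chain_stepsizes)
    return float(np.sum(T_m / T_m.sum() * np.asarray(chain_avgs)))

# e.g., three chains with different step sizes and iteration counts
print(aggregate_chains([0.52, 0.49, 0.51], [1000, 800, 1200], [1e-3, 2e-3, 1e-3]))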

Interestingly, with increasing $M$, using multiple chains does not seem to directly improve the convergence rate for the bias, but it improves the MSE bound, as stated in Theorem 6.

Theorem 6.

Let $T_m \triangleq L_m h_m$, $T \triangleq \sum_{m=1}^{M} T_m$, and $\bar{h} \triangleq \sum_m \frac{T_m}{T} h_m$; the bias and MSE of $M$ parallel S²G-MCMC chains with a $K$th-order integrator are bounded, for some constants $C_1$ and $C_2$ independent of $\{L_m, h_m, \tau\}$, as:

Bias: $\left| \mathbb{E}\hat{\phi} - \bar{\phi} \right| \leq C_1 \left( \frac{M}{T} + \tau \bar{h} + \sum_m \frac{T_m}{T} h_m^K \right)$
MSE: $\mathbb{E}\left( \hat{\phi} - \bar{\phi} \right)^2 \leq C_2 \left( \sum_m \frac{T_m}{T^2} + \tau^2 \bar{h}^2 + \left( \sum_m \frac{T_m}{T} h_m^K \right)^2 \right)$

Assume that $\tau$ is independent of the number of chains. As a result, using multiple chains does not directly improve the bound for the bias (the bound does not directly depend on low-order terms of $M$, though constants might be improved). However, for the MSE bound, although the last two terms are independent of $M$, the first term decreases linearly with respect to $M$ because $T = M\bar{T}$, where $\bar{T} \triangleq \frac{1}{M}\sum_m T_m$. This indicates a decreased estimation variance with more chains, which matches intuition: more samples can be obtained with more chains in a given amount of time.

The decrease of the MSE for multiple chains is due to the decrease of the variance, as stated in Theorem 7.

Theorem 7.

The variance of $M$ parallel S²G-MCMC chains with a $K$th-order integrator is bounded, for some constant $D$ independent of $\{L_m, h_m, \tau\}$, as:

\mathbb{E}\left( \hat{\phi} - \mathbb{E}\hat{\phi} \right)^2 \leq D \left( \sum_m \frac{T_m}{T^2} + \left( \sum_m \frac{T_m}{T} h_m^K \right)^2 \right)

When using the same step size $h$ for all chains, Theorem 7 gives an optimal variance bound of $O\left( (ML)^{-2K/(2K+1)} \right)$, i.e., a linear speedup with respect to $M$ is achieved.

In addition, Theorem 6 with $\tau = 0$ provides convergence rates for the distributed SGLD algorithm in [14], i.e., improved MSE and variance bounds compared to the single-server SGLD.

4 Applications to Distributed SG-MCMC Systems

Our theory for S²G-MCMC is general, serving as a basic analytic tool for distributed SG-MCMC systems. In the following, we propose two simple Bayesian distributed systems based on S²G-MCMC.

Single-chain distributed SG-MCMC

Perhaps the simplest architecture is an asynchronous distributed SG-MCMC system, where a server runs an S²G-MCMC algorithm with stale gradients computed asynchronously by a set of $W$ workers. The detailed operations of the server and workers are described in Appendix A.

With our theory, we now explain the convergence property of this simple distributed system with S²G-MCMC, i.e., a linear speedup w.r.t. the number of workers in the decrease of variance, while maintaining the same bias level. To this end, rewrite $L = W\bar{L}$ in Theorems 2 and 3, where $\bar{L}$ is the average number of iterations on each worker. We can observe from the theorems that when the staleness $\tau$ is large enough, the terms involving $\tau$ dominate both bounds. Optimizing the bounds with respect to the stepsize $h$ then yields an optimal bound of $O\left( \sqrt{\tau / (W\bar{L})} \right)$ for the bias, and $O\left( \left( \tau / (W\bar{L}) \right)^{2/3} \right)$ for the MSE. In practice, we usually observe $\tau \approx W$, making $\tau$ and $W$ in the optimal bounds cancel, i.e., the same optimal bias and MSE bounds as standard SG-MCMC are obtained, and no theoretical speedup is achieved when increasing $W$. However, from Corollary 5, the variance is independent of $\tau$; thus a linear speedup in the decrease of variance can always be obtained when increasing the number of workers, i.e., the distributed S²G-MCMC system reaches a given variance with a factor of $W$ fewer iterations per worker than standard SG-MCMC on a single machine. We are not aware of similar conclusions in stochastic optimization, because most of that research focuses on the convex setting, where only the variance (equivalent to the MSE) is studied.
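For intuition, plugging $\tau \approx W$ and $L = W\bar{L}$ into the optimized bounds (a back-of-the-envelope sketch with constants suppressed) makes the cancellation and the surviving speedup explicit:

\text{bias} = O\!\left( \sqrt{\tau / L} \right) = O\!\left( \sqrt{W / (W\bar{L})} \right) = O\!\left( \bar{L}^{-1/2} \right), \qquad \text{variance} = O\!\left( (W\bar{L})^{-2K/(2K+1)} \right),

so adding workers leaves the optimal bias unchanged while the variance bound keeps shrinking with $W$.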

Multiple-chain distributed SG-MCMC

We can also adopt multiple servers based on the multiple-chain setup in Section 3.3, where each chain corresponds to one server. The detailed architecture is described in Appendix A. This architecture trades off communication cost against convergence rates. As indicated by Theorems 6 and 7, the MSE and variance bounds can be improved with more servers. Note that when only one worker is associated with each server, we recover the setting of $M$ independent servers. Compared to the single-server architecture described above with the same total number of workers, from Theorems 2–7, while the variance bound is the same, the single-server architecture improves the bias and MSE bounds by a factor related to the number of servers $M$.

More advanced architectures

More complex architectures could also be designed to reduce communication cost, for example, by extending the downpour [7] and elastic SGD [29] architectures to the SG-MCMC setting. Their convergence properties can also be analyzed with our theory since they are essentially using stale gradients. We leave the detailed analysis for future work.

5 Experiments

Our primary goal is to validate the theory; comparing with different distributed architectures and algorithms, such as [30, 31], is beyond the scope of this paper. We first use two synthetic experiments to validate the theory, then apply the distributed architecture described in Section 4 to Bayesian deep learning. To quantitatively describe the speedup property, we adopt the iteration speedup [12], defined as the ratio of the number of iterations needed by a single worker to the average number of iterations needed per worker with $W$ workers, where the iteration counts are measured when the same level of precision is achieved. This speedup matches the theory best. We also consider the time speedup, defined as the ratio of the running time with a single worker to that with $W$ workers, recorded at the same accuracy. The time speedup is affected significantly by hardware and thus does not track the theory as precisely.

5.1 Synthetic experiments

Figure 1: MSE vs. # iterations ($L$) with increasing staleness $\tau$; the settings result in roughly the same MSE.
Impact of stale gradients

A simple Gaussian model is used to verify the impact of stale gradients on the convergence accuracy, with a conjugate Gaussian prior on the mean $\theta$ and a Gaussian likelihood for the observations. 1000 data samples are generated, with minibatches of size 10 used to calculate stochastic gradients. The test function is $\phi(\theta) = \theta^2$. The distributed SGLD algorithm is adopted in this experiment. We aim to verify the optimal MSE bound $O\left( \tau^{2/3} L^{-2/3} \right)$, derived from Theorem 3 and discussed in Section 4 (with $K = 1$). The corresponding optimal stepsize is $h = C \tau^{-2/3} L^{-1/3}$ for some constant $C$. Based on the optimal bound, setting $L = \tau L_0$ for some fixed $L_0$ and varying $\tau$ would result in the same MSE, namely $O\left( L_0^{-2/3} \right)$. In the experiments we fix $C$ and $L_0$, vary the staleness $\tau$, and average over 200 runs to approximate the expectations in the MSE formula. As indicated in Figure 1, approximately the same MSEs are obtained after $L = \tau L_0$ iterations for different $\tau$ values, consistent with the theory. Note that since the stepsizes are chosen so that the end points of the curves reach the optimal MSE, the curves do not match the optimal $O(L^{-2/3})$ MSE curves in general, except at the end points; i.e., they are lower bounded by $O(L^{-2/3})$.
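A compact simulation of this experiment (stale SGLD on a conjugate Gaussian model; the prior/likelihood choice, the constant C, and the delay mechanism below are our illustrative assumptions):

import numpy as np

def stale_sgld_phi_hat(tau, L0, C=1e-3, N=1000, n=10, seed=0):
    rng = np.random.default_rng(seed)
    data = rng.normal(0.0, 1.0, N)               # toy model: theta ~ N(0,1), d_i ~ N(theta,1)
    L = tau * L0                                  # L = tau * L0 keeps the optimal MSE constant
    h = C * tau ** (-2.0 / 3) * L ** (-1.0 / 3)   # optimal stepsize from Theorem 3 (K = 1)
    theta, history = 0.0, [0.0] * (tau + 1)       # buffer of the last tau+1 parameter values
    samples = []
    for _ in range(L):
        stale = history[0]                        # gradient evaluated tau iterations late
        batch = data[rng.choice(N, n, replace=False)]
        grad_U = stale + (N / n) * np.sum(stale - batch)  # prior + scaled likelihood terms
        theta = theta - h * grad_U + np.sqrt(2.0 * h) * rng.normal()
        history = history[1:] + [theta]
        samples.append(theta ** 2)                # test function phi(theta) = theta^2
    return float(np.mean(samples))                # the sample average phi_hat

Repeating this over many seeds and squaring the deviation from the true posterior average approximates the MSE curves in Figure 1.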

Convergence speedup of the variance

A Bayesian logistic regression (BLR) model is adopted to verify the variance convergence properties. We use the Adult dataset, a9a (http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html), with 32,561 training samples and 16,281 test samples. The test function is defined as the standard logistic loss. We average over 10 runs to estimate the expectation in the variance. We use the single-server distributed architecture in Section 4, with multiple workers computing stale gradients in parallel. We plot the variance versus the average number of iterations on the workers ($\bar{L}$) and versus the running time in Figure 2 (a) and (b), respectively. We can see that the variance drops faster with an increasing number of workers. To quantitatively relate these results to the theory, Corollary 5 indicates that, at the same variance level, the average number of iterations per worker scales inversely with the number of workers, i.e., a linear speedup is achieved. The iteration speedup and time speedup are plotted in Figure 2 (c), showing that the iteration speedup scales approximately linearly with the number of workers, consistent with Corollary 5, whereas the time speedup deteriorates when the number of workers is large, due to high system latency.
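The variance itself is estimated empirically from repeated runs; a sketch (run_estimator stands for any routine returning the sample average $\hat{\phi}$, e.g., the stale-SGLD sketch above):

import numpy as np

def estimate_variance(run_estimator, n_runs=10):
    # approximate E(phi_hat - E phi_hat)^2 across independent runs
    estimates = np.array([run_estimator(seed=r) for r in range(n_runs)])
    return float(estimates.var())

# e.g., with hypothetical settings for the stale-SGLD sketch:
# estimate_variance(lambda seed: stale_sgld_phi_hat(tau=4, L0=60, seed=seed))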

Figure 2: Variance with an increasing number of workers. (a) Variance vs. iteration; (b) variance vs. time; (c) speedup.

5.2 Applications to deep learning

We further test S²G-MCMC on Bayesian learning of deep neural networks. The distributed system is developed based on an MPI (message passing interface) extension of the popular Caffe package for deep learning [32]. We implement the SGHMC algorithm, with the point-to-point communications between servers and workers handled by the MPICH library. The algorithm is run on a cluster of five machines; each machine is equipped with eight 3.60GHz Intel(R) Core(TM) i7-4790 CPU cores.

We evaluate S²G-MCMC on the above BLR model and on two deep convolutional neural networks (CNNs). In all these models, zero-mean, unit-variance Gaussian priors are employed for the weights to capture weight uncertainty, an effective way to deal with overfitting [33]. We vary the number of servers, and the number of workers for each server from 1 to 9.

LeNet for MNIST

We modify the standard LeNet to a Bayesian setting for the MNIST dataset. LeNet consists of 2 convolutional layers, 2 max-pooling layers and 2 ReLU nonlinearities, followed by 2 fully connected layers [34]. The detailed specification can be found in Caffe. For simplicity, we use the default parameter settings specified in Caffe, with the additional parameter $B$ in SGHMC (Algorithm 1) chosen to match the momentum variable defined in the SGD algorithm in Caffe, i.e., $1 - Bh$ plays the role of the momentum.

Cifar10-Quick net for CIFAR10

The Cifar10-Quick net consists of 3 convolutional layers, 3 max pool layers and 3 ReLU nonlinear layers, followed by 2 fully connected layers. The CIFAR-10 dataset consists of 60,000 color images of size 3232 in 10 classes, with 50,000 for training and 10,000 for testing.Similar to LeNet, default parameter setting specified in Caffe is used.

In these models, the test function is defined as the cross entropy of the softmax outputs for the $N_t$ test data points $\{(\mathbf{d}_i, y_i)\}$ with $C$ classes, i.e., $\phi \triangleq -\frac{1}{N_t} \sum_{i=1}^{N_t} \log p\left( y_i \mid \mathbf{d}_i \right)$, where $p(y_i \mid \mathbf{d}_i)$ is the softmax probability assigned to the true class $y_i$. Since the theory indicates a linear speedup in the decrease of variance w.r.t. the number of workers, for a single run of the models the loss should converge faster to its expectation with an increasing number of workers. The following experiments verify this intuition.
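A sketch of how this test function can be computed from posterior samples (the shapes, names, and the choice to average the softmax probabilities over samples before taking logs are our assumptions):

import numpy as np

def posterior_cross_entropy(logits, labels):
    # logits: (S, N_t, C) network outputs for S posterior weight samples
    # labels: (N_t,) integer class labels of the test data
    z = logits - logits.max(axis=-1, keepdims=True)            # stable softmax
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    avg_probs = probs.mean(axis=0)                             # posterior predictive
    idx = np.arange(len(labels))
    return float(-np.mean(np.log(avg_probs[idx, labels])))     # cross entropy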

5.2.1 Single-server experiments

We first test the single-server architecture of Section 4 on the three models. Because the expectations in the bias, MSE and variance are not analytically available in these complex models, we instead plot the loss versus the average number of iterations on each worker ($\bar{L}$, defined in Section 4) and versus the running time in Figure 3. As mentioned above, a faster decrease of the loss with more workers is expected.

For ease of visualization, we only plot the results with 1, 4, 6 and 9 workers; more detailed results are provided in Appendix I. We can see that generally the loss decreases faster with an increasing number of workers. On the CIFAR-10 dataset, the final losses with 6 and 9 workers are worse than with 4 workers, showing that the accuracy of the sample average suffers from the increased staleness induced by more workers. A smaller step size should therefore be considered to maintain high accuracy when using a large number of workers. Note that the 1-worker curves correspond to standard SG-MCMC, whose loss decreases much more slowly due to high estimation variance, though in theory it reaches the same level of bias as the single-server architecture for a given number of iterations (they will converge to the same accuracy).


Figure 3: Testing loss vs. # workers. From left to right, the columns correspond to the a9a, MNIST and CIFAR-10 datasets, respectively. The loss is defined in the text.

5.2.2 Multiple-server experiments

Finally, we test the multiple-server architecture on the same models. We use the same criterion as in the single-server setting to measure convergence behavior. The loss versus the average number of iterations on each worker ($\bar{L}$, defined in Section 4) for the three datasets is plotted in Figure 4, where we vary the number of servers and use 2 workers for each server. The plots of loss versus time, and results using different numbers of workers for each server, are provided in the Appendix. We can see that in the simple BLR model, multiple servers do not seem to show significant speedup, probably due to the simplicity of the posterior, where the sample variance is too small for multiple servers to take effect; in the more complicated deep neural networks, using more servers results in a faster decrease of the loss, especially on the MNIST dataset.

Figure 4: Testing loss vs. # servers. From left to right, the columns correspond to the a9a, MNIST and CIFAR-10 datasets, respectively. The loss is defined in the text.

6 Conclusion

We extend theory from standard SG-MCMC to the stale-stochastic-gradient setting, and analyze the impact of staleness on the convergence behavior of an SG-MCMC algorithm. Our theory reveals that the estimation variance is independent of the staleness, leading to a linear speedup w.r.t. the number of workers, although in practice little speedup in terms of optimal bias and MSE might be achieved, due to their dependence on the staleness. We test our theory on a simple asynchronous distributed SG-MCMC system with two synthetic examples and several deep neural network models. Experimental results verify the effectiveness and scalability of the proposed S²G-MCMC framework.

Acknowledgements

Supported in part by ARO, DARPA, DOE, NGA, ONR and NSF.

References

Appendix A A simple Bayesian distributed system based on S²G-MCMC

We provide the detailed architecture of the simple Bayesian distributed system described in Section 4. We put the single-chain and multiple-chain distributed S²G-MCMC into a unified framework. Suppose there are $M$ servers and $W$ workers per server; the case $M = 1$ corresponds to the single-chain distributed S²G-MCMC, whereas $M > 1$ corresponds to the multiple-chain distributed S²G-MCMC. The servers and workers are responsible for the following tasks:

  • Each worker runs independently and communicates with a specific server. Workers are responsible for computing the stochastic gradients (the most expensive part of an SG-MCMC algorithm) at the parameter value given by the server. Once the stochastic gradient is computed, the worker sends it to its assigned server and receives a new parameter sample from the server.

  • Each server independently maintains its own state vector and timestamp (each server is equipped with its own timestamp because the servers are independent of each other). At the $l$-th timestamp, it receives a stale stochastic gradient from a worker $w$, updates the state vector $\mathbf{x}_l$ to $\mathbf{x}_{l+1}$ and increments the timestamp, then sends the new parameter sample $\boldsymbol{\theta}_{l+1}$ to worker $w$.

The sending and receiving on the servers and workers are performed asynchronously, keeping communication cost and latency between the servers and workers to a minimum. At test time, all the samples from the servers are collected and applied to a test function. The training time using multiple servers is essentially the same as using a single server, because the sampling on different servers proceeds independently. Figure 5 depicts the architecture of the proposed Bayesian distributed framework. Algorithm 2 details the algorithm on the servers and workers.

   Server
  Output: $\{\boldsymbol{\theta}_1, \boldsymbol{\theta}_2, \ldots\}$
  Initialize $\mathbf{x}_0$;
  Send $\boldsymbol{\theta}_0$ to all assigned workers;
  for $l = 0, 1, 2, \ldots$ do
     Receive a stale stochastic gradient $\nabla_{\boldsymbol{\theta}} \tilde{U}(\boldsymbol{\theta}_{l-\tau_l})$ from a worker $w$.
     Update $\mathbf{x}_l$ to $\mathbf{x}_{l+1}$ using $\nabla_{\boldsymbol{\theta}} \tilde{U}(\boldsymbol{\theta}_{l-\tau_l})$. (*)
     Send $\boldsymbol{\theta}_{l+1}$ to the worker $w$.
  end for

 

   Worker $w$
  repeat
     Receive $\boldsymbol{\theta}$ from the server.
     Compute a stochastic gradient $\nabla_{\boldsymbol{\theta}} \tilde{U}(\boldsymbol{\theta})$ with a minibatch.
     Send $\nabla_{\boldsymbol{\theta}} \tilde{U}(\boldsymbol{\theta})$ to the server.
  until stopped
Algorithm 2 Asynchronous Distributed SG-MCMC

The update rule (*) of the state vector $\mathbf{x}_l$ in Algorithm 2 depends on which SG-MCMC algorithm is employed. For instance, Algorithm 1 describes the update rule of SGHMC with a 1st-order Euler integrator.
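A compressed sketch of Algorithm 2 using mpi4py (rank 0 as the server, the remaining ranks as workers; the toy model, the SGLD stand-in for rule (*), and the Abort-based shutdown are our simplifications):

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
num_iters, h = 1000, 1e-3

def sgmcmc_update(theta, grad):        # stand-in for rule (*): an SGLD step
    return theta - h * grad + np.sqrt(2.0 * h) * np.random.randn()

def stochastic_grad(theta):            # stand-in minibatch gradient (toy model)
    return theta + np.random.randn()

if rank == 0:                          # server (left column of Algorithm 2)
    theta, samples = 0.0, []
    for w in range(1, size):
        comm.send(theta, dest=w)       # send the initial parameter to all workers
    for l in range(num_iters):
        status = MPI.Status()          # receive a stale gradient from any worker
        g = comm.recv(source=MPI.ANY_SOURCE, status=status)
        theta = sgmcmc_update(theta, g)        # update x_l -> x_{l+1} via (*)
        samples.append(theta)
        comm.send(theta, dest=status.Get_source())
    comm.Abort()                       # sketch-level shutdown of the workers
else:                                  # worker (right column of Algorithm 2)
    while True:
        theta = comm.recv(source=0)
        comm.send(stochastic_grad(theta), dest=0)

By the time a gradient arrives at the server, the server may have moved on by several updates; that gap is exactly the staleness $\tau_l$ analyzed in Section 3.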

Appendix B Assumptions

First, following [25], we need to assume the SDE corresponding to an SG-MCMC algorithm to be either elliptic or hypoelliptic. Ellipticity/hypoellipticity describes whether the Brownian motion is able to spread over the whole parameter space. The SDE of SGLD is elliptic, while for other SG-MCMC algorithms, such as SGHMC, the hypoellipticity assumption is usually reasonable. When the domain is a torus, ellipticity or hypoellipticity of an SDE guarantees the existence of a nice solution of the Poisson equation (5). The assumption is summarized in Assumption 2.

Figure 5: Architecture of the proposed Bayesian distributed framework. In the multi-server case, the dashed lines on the servers indicate a simple averaging operation for testing; otherwise the servers are independent. Section 3.3 provides more details.
Assumption 2.

The SDE corresponding to an SG-MCMC algorithm is either elliptic or hypoelliptic. (The SDE of SGLD can be verified to be elliptic; for other SG-MCMC algorithms, such as SGHMC, the hypoellipticity assumption is usually reasonable. See [25] on how to verify hypoellipticity of an SDE.)

When the domain is extended to $\mathbb{R}^n$ for some integer $n$, we need some assumptions on the solution $\psi$ of the Poisson equation (5). Note that (5) can be equivalently written in an integral form [35] using Itô's formula:

\psi(\mathbf{x}_t) = \psi(\mathbf{x}_0) + \int_0^t \mathcal{L}\psi(\mathbf{x}_s)\, ds + \int_0^t \left( \nabla_{\mathbf{x}} \psi(\mathbf{x}_s) \right)^\top g(\mathbf{x}_s)\, d\mathbf{w}_s    (6)

Intuitively, $\psi$ needs to be bounded if the discrepancy between $\hat{\phi}_L$ and $\bar{\phi}$ is to be bounded. This is satisfied if the SDE is defined on a bounded domain [25]. In the unbounded domain, as is the case for SG-MCMC algorithms, it turns out that the following boundedness assumptions on $\psi$ suffice [17].

Assumption 3.

1) $\psi$ and its derivatives up to 3rd order, $\mathcal{D}^k \psi$ for $k = 0, 1, 2, 3$, are bounded by a function $\mathcal{V}$, i.e., $\left\| \mathcal{D}^k \psi \right\| \leq C_k \mathcal{V}^{p_k}$ for constants $C_k$ and $p_k$, $k = 0, 1, 2, 3$. 2) The expectation of $\mathcal{V}$ on the samples $\{\mathbf{x}_{lh}\}$ is bounded: $\sup_l \mathbb{E}\, \mathcal{V}^p(\mathbf{x}_{lh}) < \infty$. 3) $\mathcal{V}$ is smooth, such that $\sup_{s \in (0,1)} \mathcal{V}^p\left( s\mathbf{x} + (1 - s)\mathbf{y} \right) \leq C \left( \mathcal{V}^p(\mathbf{x}) + \mathcal{V}^p(\mathbf{y}) \right)$ for all $\mathbf{x}, \mathbf{y}$ and some $C > 0$, for $p \leq \max_k 2 p_k$.

Furthermore, in our proofs the expectation of a function under a diffusion needs to be expanded in a Taylor-expansion style, e.g., by using Kolmogorov's backward equation. To ensure the remainder term is bounded, it suffices to make the following assumption on the smoothness and boundedness of $\psi$ [35, 17].

Assumption 4.

$\psi$ is infinitely differentiable with bounded derivatives of any order, i.e., $\left\| \mathcal{D}^k \psi \right\| \leq C_k \mathcal{V}^{p_k}$ for some integer $k \geq 0$ and constants $C_k, p_k$.

Appendix C Notation

For simplicity, we will simplify some notation used in the proof as follows:

Appendix D Proof of Theorem 2

In S²G-MCMC, in the $l$-th iteration, suppose a stochastic gradient with staleness $\tau_l$ is used, i.e., $\nabla_{\boldsymbol{\theta}} \tilde{U}(\boldsymbol{\theta}_{l-\tau_l})$. First, we bound the difference between this stale gradient and the stochastic gradient at the $l$-th iteration, $\nabla_{\boldsymbol{\theta}} \tilde{U}(\boldsymbol{\theta}_l)$, using the Lipschitz property of $\nabla_{\boldsymbol{\theta}} U$, in the following lemma.

Lemma 8.

Let $\tau_l \leq \tau$; the expected difference between $\nabla_{\boldsymbol{\theta}} \tilde{U}(\boldsymbol{\theta}_l)$ and $\nabla_{\boldsymbol{\theta}} \tilde{U}(\boldsymbol{\theta}_{l-\tau_l})$ is bounded, for some constant $C_0$, as:

\mathbb{E} \left\| \nabla_{\boldsymbol{\theta}} \tilde{U}(\boldsymbol{\theta}_l) - \nabla_{\boldsymbol{\theta}} \tilde{U}(\boldsymbol{\theta}_{l-\tau_l}) \right\| \leq C_0\, \tau h    (7)

where the expectation is taken over the randomness of the S²G-MCMC algorithm, e.g., the randomness from the stochastic gradients and the injected Gaussian noise.

Proof.

Note the randomness of $\boldsymbol{\theta}_l$ comes from two sources: the injected Gaussian noise and the stochastic gradient noise. We denote the expectations with respect to these two sources of randomness as $\mathbb{E}_{\mathbf{w}}$ and $\mathbb{E}_{\pi}$, respectively. The whole expectation can thus be decomposed as $\mathbb{E} = \mathbb{E}_{\mathbf{w}} \mathbb{E}_{\pi}$.

Applying the Lipschitz property of $\nabla_{\boldsymbol{\theta}} U$ (Assumption 1), we have

\mathbb{E} \left\| \nabla_{\boldsymbol{\theta}} \tilde{U}(\boldsymbol{\theta}_l) - \nabla_{\boldsymbol{\theta}} \tilde{U}(\boldsymbol{\theta}_{l-\tau_l}) \right\| \leq C\, \mathbb{E} \left\| \boldsymbol{\theta}_l - \boldsymbol{\theta}_{l-\tau_l} \right\|.

From the definition of a $K$th-order integrator, i.e., one numerical step starting from $\boldsymbol{\theta}_j$, the starting point of the $j$-th iteration, matches the true dynamics $e^{h \tilde{\mathcal{L}}_j}$ up to an $O(h^{K+1})$ error, each iteration moves the parameter by $O(h)$ in expectation. Summing the per-iteration movements over the at most $\tau$ intervening iterations, we have

\mathbb{E} \left\| \boldsymbol{\theta}_l - \boldsymbol{\theta}_{l-\tau_l} \right\| \leq \sum_{j=l-\tau_l}^{l-1} \mathbb{E} \left\| \boldsymbol{\theta}_{j+1} - \boldsymbol{\theta}_j \right\| = O(\tau h),    (8)–(10)

where the final bound, corresponding to (10), is obtained by expanding the exponential operator and using the assumption that the high-order terms are bounded. ∎

We now proceed to prove Theorem 2. The basic technique follows [17], so we omit the derivations in some steps.

Proof of Theorem 2.

Before the proof, let us first define some notation. First, define the operator $\Delta V_l$ for each $l$ as a differential operator acting on any function $f$:

\Delta V_l f \triangleq \left( \nabla_{\boldsymbol{\theta}} \tilde{U}(\boldsymbol{\theta}_{(l-\tau_l)h}) - \nabla_{\boldsymbol{\theta}} U(\boldsymbol{\theta}_{lh}) \right) \cdot \nabla_{\boldsymbol{\theta}} f.

Second, define the local generator, $\tilde{\mathcal{L}}_l$, for the Itô diffusion where the true gradient in (1) is replaced with the stale stochastic gradient from the $l$-th iteration, i.e.,

\tilde{\mathcal{L}}_l f(\mathbf{x}_t) \triangleq \left( \tilde{F}_l(\mathbf{x}_t) \cdot \nabla_{\mathbf{x}} + \frac{1}{2} \left( g(\mathbf{x}_t) g(\mathbf{x}_t)^\top \right) : \nabla_{\mathbf{x}} \nabla_{\mathbf{x}}^\top \right) f(\mathbf{x}_t)

for a compactly supported twice differentiable function $f$, where $\tilde{F}_l$ is the same as $F$ but with the full gradient replaced with the stale stochastic gradient $\nabla_{\boldsymbol{\theta}} \tilde{U}(\boldsymbol{\theta}_{(l-\tau_l)h})$. Based on these definitions, we have $\tilde{\mathcal{L}}_l = \mathcal{L} - \Delta V_l$.

Following [17], for an SG-MCMC algorithm with a $K$th-order integrator and the functional $\psi$, we have:

\mathbb{E}\left[ \psi(\mathbf{x}_{lh}) \right] = \left( \mathbb{I} + \sum_{k=1}^{K} \frac{h^k}{k!} \tilde{\mathcal{L}}_l^k \right) \psi(\mathbf{x}_{(l-1)h}) + O(h^{K+1})    (11)

where $\mathbb{I}$ is the identity map. Summing over $l$ in (11), taking expectations on both sides, and using the relation $\tilde{\mathcal{L}}_l = \mathcal{L} - \Delta V_l$ to expand the first-order term, we obtain

\sum_{l=1}^{L} \mathbb{E}\left[ \psi(\mathbf{x}_{lh}) \right] = \sum_{l=1}^{L} \mathbb{E}\left[ \psi(\mathbf{x}_{(l-1)h}) \right] + h \sum_{l=1}^{L} \mathbb{E}\left[ \left( \mathcal{L} - \Delta V_l \right) \psi(\mathbf{x}_{(l-1)h}) \right] + \sum_{k=2}^{K} \frac{h^k}{k!} \sum_{l=1}^{L} \mathbb{E}\left[ \tilde{\mathcal{L}}_l^k \psi(\mathbf{x}_{(l-1)h}) \right] + O(L h^{K+1}).

Dividing both sides by $Lh$, using the Poisson equation (5), and reorganizing terms, we have:

\mathbb{E}\hat{\phi}_L - \bar{\phi} = \frac{1}{Lh} \left( \mathbb{E}\left[ \psi(\mathbf{x}_{Lh}) \right] - \psi(\mathbf{x}_0) \right) + \frac{1}{L} \sum_{l=1}^{L} \mathbb{E}\left[ \Delta V_l \psi(\mathbf{x}_{(l-1)h}) \right] - \frac{1}{Lh} \sum_{k=2}^{K} \frac{h^k}{k!} \sum_{l=1}^{L} \mathbb{E}\left[ \tilde{\mathcal{L}}_l^k \psi(\mathbf{x}_{(l-1)h}) \right] + O(h^K)    (12)

According to [17], the term $\frac{1}{Lh} \sum_{k=2}^{K} \frac{h^k}{k!} \sum_l \mathbb{E}\left[ \tilde{\mathcal{L}}_l^k \psi(\mathbf{x}_{(l-1)h}) \right]$ is bounded by