# A Variational Analysis of Stochastic Gradient Algorithms

Stochastic Gradient Descent (SGD) is an important algorithm in machine learning. With constant learning rates, it is a stochastic process that, after an initial phase of convergence, generates samples from a stationary distribution. We show that SGD with constant rates can be effectively used as an approximate posterior inference algorithm for probabilistic modeling. Specifically, we show how to adjust the tuning parameters of SGD so as to match the resulting stationary distribution to the posterior. This analysis rests on interpreting SGD as a continuous-time stochastic process and then minimizing the Kullback-Leibler divergence between its stationary distribution and the target posterior. (This is in the spirit of variational inference.) In more detail, we model SGD as a multivariate Ornstein-Uhlenbeck process and then use properties of this process to derive the optimal parameters. This theoretical framework also connects SGD to modern scalable inference algorithms; we analyze the recently proposed stochastic gradient Fisher scoring under this perspective. We demonstrate that SGD with properly chosen constant rates gives a new way to optimize hyperparameters in probabilistic models.


## 1 Introduction

Stochastic gradient descent (SGD) has become crucial to modern machine learning. SGD optimizes a function by following noisy gradients with a decreasing step size. The classical result of Robbins and Monro (1951) is that this procedure provably reaches the optimum of the function (or a local optimum, when it is nonconvex). Recent studies investigate the merits of adaptive step sizes (Duchi et al., 2011; Tieleman and Hinton, 2012), gradient or iterate averaging (Toulis et al., 2014; Défossez and Bach, 2015), and constant step sizes (Bach and Moulines, 2013; Flammarion and Bach, 2015). Stochastic gradient descent has enabled efficient optimization with massive data.

Recently, stochastic gradients (SG) have also been used in the service of scalable Bayesian Markov chain Monte Carlo (MCMC) methods, where the goal is to generate samples from a conditional distribution of latent variables given a data set. In Bayesian inference, we assume a probabilistic model p(θ, x) with data x and hidden variables θ; our goal is to approximate the posterior

 p(θ|x) = exp{log p(θ, x) − log p(x)}. (1)

New scalable MCMC algorithms—such as SG Langevin dynamics (Welling and Teh, 2011), SG Hamiltonian Monte Carlo (Chen et al., 2014), and SG Fisher scoring (Ahn et al., 2012)—employ stochastic gradients of log p(θ, x) to improve convergence and computation of existing sampling algorithms. Also see Ma et al. (2015) for a complete classification of these algorithms.

These methods all take precautions to sample from an asymptotically exact posterior. In contrast, and specifically in the limit of large data, we will show how to effectively use the simplest stochastic gradient descent algorithm as a sensible approximate Bayesian inference method. Specifically, we consider SGD with a constant learning rate (constant SGD). Constant SGD first marches toward an optimum of the objective function and then bounces around its vicinity because of the sampling noise in the gradient. (In contrast, traditional SGD converges to the optimum by decreasing the learning rate.) Our analysis below rests on the idea that constant SGD can be interpreted as a stochastic process with a stationary distribution, one that is centered on the optimum and that has a certain covariance structure. The main idea is that we can use this stationary distribution to approximate a posterior.

Here is how it works. The particular profile of the stationary distribution depends on the parameters of the algorithm—the constant learning rate, the preconditioning matrix, and the minibatch size, all of which affect the noise and the gradients. Thus we can take the negative log-joint −log p(θ, x) as the objective function and set the parameters of constant SGD such that its stationary distribution is close to the exact posterior (Eq. 1). Specifically, in the spirit of variational Bayes (Jordan et al., 1999b), we set those parameters to minimize the Kullback-Leibler (KL) divergence. With those settings, we can perform approximate inference by simply running constant SGD.

In more detail, we make the following contributions:

• First, we develop a variational Bayesian view of stochastic gradient descent. Based on its interpretation as a continuous-time stochastic process—specifically a multivariate Ornstein-Uhlenbeck (OU) process (Uhlenbeck and Ornstein, 1930; Gardiner et al., 1985)—we compute stationary distributions for a large class of SGD algorithms, all of which converge to a Gaussian distribution with a non-trivial covariance matrix. The stationary distribution is parameterized by the learning rate, minibatch size, and preconditioning matrix.

Results about the multivariate OU process enable us to compute the KL divergence between the stationary distribution and the posterior analytically. Minimizing the KL, we can relate the optimal step size or preconditioning matrix to the Hessian and noise covariances near the optimum. The resulting criteria strongly resemble AdaGrad (Duchi et al., 2011), RMSProp (Tieleman and Hinton, 2012), and classical Fisher scoring (Longford, 1987). We demonstrate how these different optimization methods compare when used for approximate inference.

• Then, we analyze scalable MCMC algorithms. Specifically, we use the stochastic process perspective to compute the stationary sampling distribution of stochastic gradient Fisher scoring (Ahn et al., 2012). The view from the multivariate OU process reveals a simple justification for this method: we show that the preconditioning matrix suggested in SGFS is indeed optimal. We also derive a criterion for the free noise parameter in SGFS such as to enhance numerical stability, and we show how the stationary distribution is modified when the preconditioner is approximated with a diagonal matrix (as is often done in practice for high-dimensional problems).

• Finally, we show how using SGD with a constant learning rate confers an important practical advantage: it allows simultaneous inference of the posterior and optimization of meta-level parameters, such as hyperparameters in a Bayesian model. We demonstrate this technique on a Bayesian multinomial logistic regression model with normal priors.

Our paper is organized as follows. In section 2 we review the continuous-time limit of SGD, showing that it can be interpreted as an OU process. In section 3 we present consequences of this perspective: the interpretation of SGD as variational Bayes and results around stochastic gradient Fisher Scoring (Ahn et al., 2012). In the empirical study (section 4), we show that our theoretical assumptions are satisfied for different models, and that we can use SGD to perform gradient-based hyperparameter optimization.

## 2 Continuous-Time Limit Revisited

We first review the theoretical framework that we use throughout the paper. Our goal is to characterize the behavior of SGD when using a constant step size. To do this, we approximate SGD with a continuous-time stochastic process (Kushner and Yin, 2003; Ljung et al., 2012).

### 2.1 Problem setup

Consider loss functions of the following form:

 L(θ) = (1/N) ∑_{n=1}^N ℓ_n(θ),   g(θ) ≡ ∇_θ L(θ). (2)

Such loss functions are common in machine learning, where ℓ_n(θ) is a loss function that depends on data x_n and parameters θ. Each ℓ_n is the contribution to the overall loss from a single observation. For example, when finding a maximum-a-posteriori estimate of a model, the contributions to the loss may be

 ℓ_n(θ) = −log p(x_n|θ) − (1/N) log p(θ), (3)

where p(x_n|θ) is the likelihood and p(θ) is the prior. For simpler notation, we will suppress the dependence of the loss on the data.

From this loss we construct stochastic gradients. Let S be a set of S random indices drawn uniformly at random from the set {1, …, N}. This set indexes the functions ℓ_n(θ) with n ∈ S, and we call S a “minibatch” of size S. Based on the minibatch, we use the indexed functions to form a stochastic estimate of the loss and a stochastic gradient,

 L̂_S(θ) = (1/S) ∑_{n∈S} ℓ_n(θ),   ĝ_S(θ) = ∇_θ L̂_S(θ). (4)

In expectation the stochastic gradient is the full gradient, i.e., E[ĝ_S(θ)] = g(θ). We use this stochastic gradient in the SGD update

 θ(t+1) = θ(t) − ε ĝ_S(θ(t)). (5)

Equations 4 and 5 define the discrete-time process that SGD simulates from. We will approximate it with a continuous-time process that is easier to analyze.
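Before passing to continuous time, it may help to see the discrete process of Eqs. 4 and 5 in code. The following is a minimal NumPy sketch of our own (the function names and the toy least-squares loss are illustrative, not from the paper):

```python
import numpy as np

def constant_sgd(grad_fn, theta0, data, eps=0.01, S=10, n_steps=5000, seed=0):
    """Constant-rate SGD (Eq. 5): each step uses a size-S minibatch gradient (Eq. 4)."""
    rng = np.random.default_rng(seed)
    N = len(data)
    theta = np.array(theta0, dtype=float)
    iterates = np.empty((n_steps, theta.size))
    for t in range(n_steps):
        idx = rng.choice(N, size=S, replace=False)   # draw the minibatch S
        theta = theta - eps * grad_fn(theta, data[idx])
        iterates[t] = theta
    return iterates

# Toy example: least-squares loss l_n(theta) = 0.5 * (x_n - theta)^2,
# so the stochastic gradient is the minibatch mean of (theta - x_n).
data = np.random.default_rng(1).normal(loc=3.0, scale=1.0, size=(1000, 1))
grad = lambda theta, batch: (theta - batch).mean(axis=0)
iters = constant_sgd(grad, [0.0], data, eps=0.05, S=20)
print(iters[2500:].mean())
```

After a burn-in phase the iterates no longer converge but fluctuate around the optimum (here, the sample mean), which is exactly the behavior the continuous-time analysis below characterizes.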

### 2.2 SGD as an Ornstein-Uhlenbeck process

We now show how to approximate the discrete-time Eq. 5 with the continuous-time Ornstein-Uhlenbeck process (Uhlenbeck and Ornstein, 1930). This leads to the stochastic differential equation below in Eq. 11. To justify the approximation, we make four assumptions. We verify its accuracy in Section 4.

Assumption 1.   Observe that the stochastic gradient is a sum of S independent, uniformly sampled contributions. Invoking the central limit theorem, we assume that the gradient noise is Gaussian with covariance (1/S) C(θ):

 ĝ_S(θ) ≈ g(θ) + (1/√S) Δg(θ),   Δg(θ) ∼ N(0, C(θ)). (6)

Assumption 2.   We assume that the noise covariance C(θ) ≈ C is approximately constant. Further, we decompose the constant noise covariance into a product of two constant matrices:

 C=BB⊤. (7)

This assumption is justified when the iterates of SGD are confined to a small enough region around a local optimum of the loss that the noise covariance does not vary significantly in that region.

Assumption 3.   We now define Δθ(t) = θ(t+1) − θ(t) and combine Eqs. 5, 6, and 7 to rewrite the process as

 Δθ(t) = −ε g(θ(t)) + √(ε/S) B ΔW,   ΔW ∼ N(0, εI). (8)

This is a discretization of the following continuous-time stochastic differential equation:¹ (¹We performed the conventional substitution rules when discretizing a continuous-time stochastic process. These substitution rules are Δθ(t) → dθ(t), ε → dt, and ΔW → dW(t); see e.g. Gardiner et al., 1985.)

 dθ(t) = −g(θ) dt + √(ε/S) B dW(t). (9)

We assume that this continuous-time limit is approximately justified and that we can neglect the discretization errors.

Assumption 4.   Finally, we assume that the stationary distribution of the iterates is constrained to a region where the loss is well approximated by a quadratic function,

 L(θ) = (1/2) θ⊤Aθ. (10)

(Without loss of generality, we assume that a minimum of the loss is at θ = 0.) This assumption makes sense when the loss function is smooth and the stochastic process reaches a low-variance quasi-stationary distribution around a deep local minimum. The exit time of a stochastic process is typically exponential in the height of the barriers between minima, which can make local optima very stable even in the presence of noise (Kramers, 1940).

SGD as an Ornstein-Uhlenbeck process.   For what follows, define B_{ε/S} ≡ √(ε/S) B. The four assumptions above result in a specific kind of stochastic process, the multivariate Ornstein-Uhlenbeck process (Uhlenbeck and Ornstein, 1930). It is

 dθ(t) = −A θ(t) dt + B_{ε/S} dW(t). (11)

This connection helps us analyze properties of SGD because the Ornstein-Uhlenbeck process has an analytic stationary distribution that is Gaussian. This distribution will be the core analytic tool of this paper:

 q(θ)∝exp{−12θ⊤Σ−1θ}. (12)

The covariance Σ satisfies

 ΣA⊤ + AΣ = (ε/S) BB⊤. (13)

Without explicitly solving this equation, we see that the resulting covariance is proportional to the learning rate ε and inversely proportional to the magnitude of A and to the minibatch size S. (More details are in the Appendix.) This characterizes the stationary distribution of running SGD with a constant step size.
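Eq. 13 is a Lyapunov equation, so Σ can also be computed numerically and checked against simulation. The sketch below is ours (the matrices A and B and all constants are arbitrary illustrative choices): it solves Eq. 13 with SciPy and compares the result against the empirical covariance of simulated constant-SGD iterates on a quadratic loss.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)
A = np.array([[2.0, 0.5], [0.5, 1.0]])   # Hessian of the quadratic loss (Eq. 10)
B = np.array([[1.0, 0.2], [0.0, 0.8]])   # noise factor, C = B B^T (Eq. 7)
eps, S = 0.01, 1

# Predicted stationary covariance: solve A Sigma + Sigma A^T = (eps/S) B B^T (Eq. 13).
Sigma = solve_continuous_lyapunov(A, (eps / S) * B @ B.T)

# Empirical check: run constant SGD with Gaussian gradient noise (Assumption 1).
theta = np.zeros(2)
samples = np.empty((150000, 2))
for t in range(len(samples)):
    g_hat = A @ theta + B @ rng.standard_normal(2) / np.sqrt(S)
    theta = theta - eps * g_hat
    samples[t] = theta
emp = np.cov(samples[30000:].T)   # discard burn-in before estimating the covariance
print(np.round(Sigma, 5))
print(np.round(emp, 5))
```

The empirical covariance agrees with the Lyapunov solution up to O(ε) discretization error and Monte Carlo noise, mirroring the covariance comparison reported in Sec. 4.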

## 3 SGD as Approximate Inference

We discussed a continuous-time interpretation of SGD with a constant step size (constant SGD). We now discuss how to use constant SGD as an approximate inference algorithm. To repeat the set-up from the introduction, consider a probabilistic model p(θ, x) with data x and hidden variables θ; our goal is to approximate the posterior in Eq. 1.

We set the loss to be proportional to the negative log-joint distribution (Eqs. 2 and 3), which equals the negative log-posterior up to an additive constant. The classical goal of SGD is to minimize this loss, leading us to a maximum-a-posteriori point estimate of the parameters. This is how SGD is used in many statistical models, including logistic regression, linear regression, matrix factorization, and neural network classifiers and regressors. In contrast, our goal here is to tune the parameters of SGD such that we approximate the posterior with its stationary distribution. Thus we use SGD as a posterior inference algorithm.

Fig. 1 shows an example. Here we illustrate two Bayesian posteriors—from a linear regression problem (left) and a logistic regression problem (right)—along with iterates from a constant SGD algorithm. In these figures, we set the parameters of the optimization to values that minimize the Kullback-Leibler (KL) divergence between the stationary distribution of the OU process and the posterior—these results come from our theorems below. The top plots optimize both a preconditioning matrix and the step size; the middle plots optimize only the step size. (The middle plots are from a more efficient algorithm, but it is less accurate.) We can see that the stationary distribution of constant SGD can be made close to the exact posterior.

Fig. 1 also compares the empirical covariance of the iterates with the predicted covariance in terms of Eq. 13. The close match supports the assumptions of Sec. 2.

We will use this perspective in two ways. First, we develop optimal algorithmic conditions for constant SGD to best approximate the posterior, connecting to well-known results around adaptive learning rates and preconditioners. Second, we use it to analyze Stochastic Gradient Fisher Scoring (Ahn et al., 2012), both in its exact form and its more efficient approximate form.

### 3.1 Constant stochastic gradient descent

First, we show how to tune constant SGD’s parameters to minimize the KL divergence to the posterior; this is a type of variational inference (Jordan et al., 1999a). Based on this analysis, we derive three versions of constant SGD—one with a constant step size, one with a full preconditioning matrix, and one with a diagonal preconditioning matrix. Each one yields samples from an approximate posterior, and each trades off efficiency and accuracy in a different way. Finally, we show how to use these algorithms to learn hyperparameters.

Assumption 4 from Sec. 2 says that the posterior is approximately Gaussian in the region that the stationary distribution focuses on,

 f(θ) ∝ exp{−(N/2) θ⊤Aθ}. (14)

(The scalar N corrects for the 1/N averaging in Eq. 2.) In setting the parameters of SGD, we minimize the KL divergence between the posterior f and the stationary distribution q (Eqs. 12 and 13) as a function of the learning rate ε and minibatch size S. We can optionally include a preconditioning matrix H, i.e. a matrix that premultiplies the stochastic gradient to modify its convergence behavior.

Hence, we minimize

 {ε*, S*, H*} = argmin_{ε,S,H} KL(q(θ) || f(θ)). (15)

The distributions q and f are both Gaussian. Their means coincide, at the minimum of the loss, and so their KL divergence is

 KL(q||f) = E_q[log q(θ)] − E_q[log f(θ)] (16)
          = (1/2)(N Tr(AΣ) − log|NA| − log|Σ| − D),

where |·| is the determinant and D is the dimension of θ.
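The closed form in Eq. 16 is straightforward to evaluate numerically. Below is a small helper of our own (the function name and test values are illustrative):

```python
import numpy as np

def kl_q_to_f(Sigma, A, N):
    """KL(q || f) between q = N(0, Sigma) and f = N(0, (N A)^{-1})  (Eq. 16)."""
    D = A.shape[0]
    return 0.5 * (N * np.trace(A @ Sigma)
                  - np.linalg.slogdet(N * A)[1]   # log|NA|
                  - np.linalg.slogdet(Sigma)[1]   # log|Sigma|
                  - D)

A = np.array([[2.0, 0.3], [0.3, 1.0]])
N = 100
# Sanity check: the KL vanishes when Sigma = (N A)^{-1}.
print(kl_q_to_f(np.linalg.inv(N * A), A, N))
```

As expected, the divergence is zero exactly when the stationary covariance matches the posterior covariance (NA)⁻¹, and positive otherwise.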

We suggest three variants of constant SGD that generate samples from an approximate posterior.

#### Theorem 1 (constant SGD).

Under assumptions [A1–A4], the constant learning rate that minimizes the KL divergence from the stationary distribution of SGD to the posterior is

 ε* = 2DS / (N Tr(BB⊤)). (17)

To prove this claim, we face the problem that the covariance of the stationary distribution depends indirectly on ε and S through Eq. 13. Inspecting this equation reveals that (S/ε)Σ is independent of ε and S. This simplifies the entropy term: log|Σ| = D log(ε/S) + log|(S/ε)Σ|, where the second term is constant. Since log|NA| is also constant, we can neglect it when minimizing KL divergence.

We also need to simplify the term Tr(AΣ), which still depends on ε and S through Σ. To do this, we again use Eq. 13, from which it follows that Tr(AΣ) = (ε/2S) Tr(BB⊤). The KL divergence is therefore, up to constant terms,

 KL(q||f) ≐ (εN/2S) Tr(BB⊤) − D log(ε/S). (18)

Minimizing this KL divergence over ε results in Eq. 17 for the optimal learning rate. ∎

Theorem 1 suggests that the learning rate should be chosen inversely proportional to the average of the diagonal entries of the noise covariance. We can also precondition SGD with a matrix H. This gives more tuning parameters with which to better approximate the posterior. Under the same assumptions, we ask for the optimal preconditioner.
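For concreteness, Eq. 17 amounts to a one-line computation once the noise factor B is estimated (our sketch; the matrix B and the values of N and S are illustrative):

```python
import numpy as np

def optimal_constant_rate(B, N, S):
    """KL-optimal constant learning rate eps* = 2 D S / (N Tr(B B^T))  (Eq. 17)."""
    D = B.shape[0]
    return 2.0 * D * S / (N * np.trace(B @ B.T))

B = np.array([[1.0, 0.0], [0.3, 0.5]])
print(optimal_constant_rate(B, N=10000, S=50))
```

Note that Tr(BB⊤) is the sum of squared entries of B, so larger gradient noise or larger data sets call for a smaller constant rate, while larger minibatches permit a larger one.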

#### Theorem 2 (preconditioned constant SGD).

The constant preconditioning matrix for SGD that minimizes the KL divergence from the sampling distribution to the posterior is

 H* = (2S/εN) (BB⊤)⁻¹. (19)

To prove this claim, we need the Ornstein-Uhlenbeck process that corresponds to preconditioned SGD. Preconditioning Eq. 11 with H results in

 dθ(t) = −HA θ(t) dt + √(ε/S) HB dW(t). (20)

All our results carry over after substituting A → HA and B → HB. Eq. 13, after this transformation and multiplication by H⁻¹ from the left, becomes

 AΣ + H⁻¹ΣA⊤H = (ε/S) BB⊤H. (21)

Using the cyclic property of the trace, this implies that Tr(AΣ) = (ε/2S) Tr(BB⊤H). Moreover, |Σ| depends on H only through a single factor of |H|, so that log|Σ| = Tr log(H) + D log(ε/S) + const. Hence, up to constant terms, the KL divergence is

 KL(q||f) ≐ (εN/2S) Tr(BB⊤H) − Tr log(H) − D log(ε/S). (22)

(We used that log|H| = Tr log(H).) Taking derivatives with respect to the entries of H results in Eq. 19. ∎

In high-dimensional applications where working with large dense matrices is impractical, the preconditioner may be constrained to be diagonal. The following corollary is a direct consequence of Eq. 22 when constraining the preconditioner to be diagonal:

#### Corollary 1

The optimal diagonal preconditioning matrix for SGD that minimizes KL divergence is

 H*_kk = 2S / (εN [BB⊤]_kk). (23)

We showed that the optimal diagonal preconditioner is the inverse of the diagonal part of the noise matrix. Similar preconditioning matrices have been suggested earlier in optimal control theory based on very different arguments, see (Widrow and Stearns, 1985). Our result also relates to AdaGrad and its relatives (Duchi et al., 2011; Tieleman and Hinton, 2012), which also adjust the preconditioner based on the square root of the diagonal entries of the noise covariance. In the supplement we derive an optimal global learning rate for AdaGrad-style diagonal preconditioners.
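The full and diagonal preconditioners of Theorem 2 and Corollary 1 are likewise direct to compute once the noise covariance BB⊤ is estimated. A sketch of our own (the matrix and constants are illustrative):

```python
import numpy as np

eps, S, N = 0.01, 10, 5000
BBt = np.array([[1.0, 0.4], [0.4, 2.0]])   # estimated noise covariance C = B B^T

# Theorem 2 (Eq. 19): full preconditioner, inverse of the noise covariance.
H_full = (2 * S / (eps * N)) * np.linalg.inv(BBt)

# Corollary 1 (Eq. 23): diagonal preconditioner, inverse of the diagonal part.
H_diag = np.diag(2 * S / (eps * N * np.diag(BBt)))

print(H_full)
print(H_diag)
```

The diagonal variant avoids storing and inverting a dense D×D matrix, at the cost of ignoring correlations in the gradient noise; this is the efficiency/accuracy trade-off compared in Sec. 4.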

In Sec. 4, we compare three versions of constant SGD for approximate posterior inference: one with a scalar step size, one with a diagonal preconditioner, and one with a dense preconditioner.

### 3.2 Stochastic Gradient Fisher Scoring

We now investigate Stochastic Gradient Fisher Scoring (Ahn et al., 2012), a scalable Bayesian MCMC algorithm. We use the variational perspective to rederive the Fisher scoring update and identify it as optimal. We also analyze the sampling distribution of the truncated algorithm, one with diagonal preconditioning (as it is used in practice), and quantify the bias that this induces.

The basic idea here is that the stochastic gradient is preconditioned and additional noise is added to the updates such that the algorithm approximately samples from the Bayesian posterior. More precisely, the update is

 θ(t+1) = θ(t) − εH ĝ(θ(t)) + √ε HE W(t). (24)

The matrix H is a preconditioner and E is a noise matrix, with W(t) ∼ N(0, I); we control the preconditioner and the covariance EE⊤ of the injected noise. Stochastic gradient Fisher scoring suggests a preconditioning matrix that leads to samples from the posterior even if the learning rate is not asymptotically small. We show here that this preconditioner follows from our variational analysis.

#### Theorem 3 (SGFS)

Under assumptions A1–A4, the preconditioning matrix in Eq. 24 that minimizes the KL divergence between the stationary distribution of SGFS and the posterior is

 H* = (2/N) (εBB⊤ + EE⊤)⁻¹. (25)

To prove the claim, we go through the steps of Section 2 to derive the corresponding Ornstein-Uhlenbeck process. For simplicity, we have set the minibatch size S to 1, hence ε/S = ε. In the appendix, we derive the KL divergence between the posterior and the sampling distribution and minimize it over the parameters H and E. When E is given, minimizing over H gives Eq. 25. ∎

The solution given in Eq. 25 not only minimizes the KL divergence, but makes it vanish, meaning that the stationary sampling distribution is the posterior. This solution corresponds to the suggested Fisher scoring update in the idealized case when the sampling noise distribution is estimated perfectly (Ahn et al., 2012). Through this update, the algorithm thus generates posterior samples without decreasing the learning rate to zero. (This is in contrast to stochastic gradient Langevin dynamics by Welling and Teh (2011).)

In practice, however, SGFS is often used with a diagonal approximation of the preconditioning matrix (Ahn et al., 2012; Ma et al., 2015). Yet researchers have not explored how the stationary distribution is affected by this truncation, which makes the algorithm only approximately Bayesian. We quantify its deviation from the exact posterior and derive the optimal diagonal preconditioner.

#### Corollary 2 (approximate SGFS).

When approximating the Fisher scoring preconditioning matrix by a diagonal matrix or a scalar, respectively, then H*_kk = (2/N)(ε[BB⊤]_kk + [EE⊤]_kk)⁻¹ and H* = (2D/N)(Tr(εBB⊤ + EE⊤))⁻¹, respectively.

This follows from the KL divergence in the proof of theorem 3.

Note that we have not made any assumptions about the injected noise covariance EE⊤. To keep the algorithm stable, it may make sense to impose a maximum step size h_max, so that H*_kk ≤ h_max. We can satisfy Eq. 25 by choosing H_kk = h_max and solving for E. For diagonal E this yields

 E²_kk = 2/(h_max N) − ε[BB⊤]_kk. (26)

Hence, to keep the learning rates bounded in favor of stability, one can inject noise in dimensions where the variance of the gradient is too small. This guideline is opposite to the advice of choosing E proportional to B, as suggested by Ahn et al. (2012), but follows naturally from the variational analysis.
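The stability criterion of Eq. 26 can be sketched as follows (our code, following our reading of Eq. 26; the clipping at zero reflects that dimensions whose gradient noise already exceeds the budget need no injected noise):

```python
import numpy as np

def sgfs_noise_diagonal(BBt_diag, eps, N, h_max):
    """Per-dimension injected-noise scales E_kk that cap the step size at h_max (Eq. 26).
    BBt_diag holds the diagonal entries of the gradient noise covariance B B^T."""
    e2 = 2.0 / (h_max * N) - eps * np.asarray(BBt_diag)
    return np.sqrt(np.clip(e2, 0.0, None))   # no extra noise where e2 < 0

# Dimension 0 has small gradient noise (needs injected noise);
# dimension 1 has large gradient noise (needs none).
print(sgfs_noise_diagonal([0.1, 200.0], eps=0.01, N=1000, h_max=0.002))
```

This makes the qualitative guideline concrete: the injected noise compensates for *low-variance* gradient dimensions so that the preconditioned step size stays bounded.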

### 3.3 Implications for hyperparameter optimization

One of the major benefits of the Bayesian approach is the ability to fit hyperparameters to data without expensive cross-validation runs by placing hyperpriors on those hyperparameters. Empirical Bayesian methods fit hyperparameters by maximizing the marginal likelihood of the data, integrating out the main model parameters:

 λ* = argmax_λ log p(y|x, λ) = argmax_λ log ∫ p(y, θ|x, λ) dθ.

When this marginal log-likelihood is intractable, a common approach is to use variational expectation-maximization (VEM) (Bishop, 2006), which iteratively optimizes a variational lower bound on the marginal log-likelihood over λ. If we approximate the posterior with some distribution q(θ), then VEM tries to find a value for λ that maximizes the expected log-joint probability E_q[log p(y, θ|x, λ)].

If we interpret the stationary distribution of SGD as a variational approximation to a model’s posterior, then we can justify jointly optimizing parameters and hyperparameters as a VEM algorithm. This should avoid degenerate solutions, as long as we use the learning rates and preconditioners derived above. In the experimental section, we show that gradient-based hyperparameter learning is a cheap alternative to cross-validation in constant SGD.
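As an illustration of the joint scheme (our toy sketch, not the paper's experiment), consider a conjugate normal model x_n ∼ N(θ, 1) with prior θ ∼ N(0, λ⁻¹): constant SGD updates θ while interleaved gradient steps update the hyperparameter λ. All step sizes and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=1000)
N, S, eps, eps_lam = len(data), 20, 0.05, 1e-3
theta, lam = 0.0, 1.0
for t in range(20000):
    batch = rng.choice(data, size=S)
    # Stochastic gradient of the scaled negative log-joint w.r.t. theta (Eqs. 2-3).
    g_theta = (theta - batch).mean() + lam * theta / N
    theta -= eps * g_theta
    # Gradient of the negative log-joint w.r.t. lambda, using the current
    # iterate theta as an (approximate) posterior sample, as in VEM.
    g_lam = 0.5 * theta ** 2 - 0.5 / lam
    lam = max(lam - eps_lam * g_lam, 1e-3)   # keep the precision positive
print(theta, lam)
```

The θ-iterates hover near the posterior mode while λ drifts toward the value that maximizes the expected log-joint under the stationary distribution, which is the VEM fixed point.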

## 4 Experiments

We test our theoretical assumptions in Section 4.1 and find good experimental evidence that they are correct; there, we also compare constant SGD against other approximate inference algorithms. In Section 4.2 we show that constant SGD lets us optimize hyperparameters in a Bayesian model.

### 4.1 Confirming the stationary distribution’s covariance

In this section, we confirm empirically that the stationary distributions of SGD with KL-optimal constant learning rates are as predicted by the Ornstein-Uhlenbeck process.

Data.   We first considered the following data sets. (1) The Wine Quality Data Set (P. Cortez, A. Cerdeira, F. Almeida, T. Matos and J. Reis, UCI Machine Learning Repository), containing instances, features, and one integer output variable (the wine rating). (2) A data set of Protein Tertiary Structure (Prashant Singh Rana, UCI Machine Learning Repository), containing instances, features, and one output variable. (3) The Skin Segmentation Data Set (Rajen Bhatt, Abhinav Dhall, UCI Machine Learning Repository), containing instances, features, and one binary output variable. We applied linear regression to data sets (1) and (2) and logistic regression to data set (3). We rescaled the features to unit length, used a mini-batch for each data set, and applied a quadratic regularizer. The constant learning rate was adjusted according to Eq. 17.

Fig. 1 shows two-dimensional projections of samples from the posterior (blue) and the stationary distribution (cyan), where the directions were chosen to be the smallest and largest principal components of the posterior. Both distributions are approximately Gaussian and centered around the maximum of the posterior. To check our theoretical assumptions, we compared the covariance of the sampling distribution (yellow) against its predicted value based on the Ornstein-Uhlenbeck process (red) and found very good agreement. Since the predicted covariance is based on approximating SGD as a multivariate Ornstein-Uhlenbeck process, we conclude that our modeling assumptions are satisfied to a very good extent. The unprojected 11-dimensional covariances on the wine data are also compared in Fig. 2. The bottom row of Fig. 1 shows the sampling distributions of black box variational inference (BBVI) using the reparametrization trick (Kucukelbir et al., 2015). Our results show that the approximation to the posterior given by constant SGD is no worse than the approximation given by BBVI.

We also computed KL divergences between the posterior and the stationary distributions of various algorithms: constant SGD with KL-optimal learning rates and preconditioners, stochastic gradient Langevin dynamics, stochastic gradient Fisher scoring (with and without diagonal approximation), and BBVI. For SG Fisher scoring, we set the learning rate according to Eq. 17, while for Langevin dynamics we chose the largest rate that yielded stable results on each data set. We found that constant SGD can compete with the MCMC algorithms under consideration in approximating the posterior. This suggests that the most important factor is not the artificial noise involved in scalable MCMC, but rather the approximation of the preconditioning matrix.

### 4.2 Optimizing hyperparameters

To test the hypothesis of Section 3.3, namely that constant SGD as a variational algorithm allows for gradient-based hyperparameter learning, we experimented with a Bayesian multinomial logistic (a.k.a. softmax) regression model with normal priors. The negative log-joint being optimized is

 L ≡ −log p(y, θ|x) = (λ/2) ∑_{d,k} θ²_{dk} − (DK/2) log λ + (DK/2) log 2π + ∑_n (log ∑_k exp{∑_d x_{nd} θ_{dk}} − ∑_d x_{nd} θ_{d,y_n}), (27)

where n indexes examples, d indexes features, and k indexes classes. x_n is the feature vector for the nth example, and y_n is the class for that example. Equation 27 has the degenerate maximizer λ = ∞, θ = 0, which has infinite posterior density and which we hope to avoid in our approach.

Data.   In all experiments, we applied this model to the MNIST data set (60000 training examples, 10000 test examples, 784 features) and the cover type data set (500000 training examples, 81012 testing examples, 54 features).

Figure 3 shows the validation loss achieved by optimizing Eq. 27 over θ for various values of λ, as well as the values of λ selected by SGD and BBVI. The results suggest that this approach can be used as an inexpensive alternative to cross-validation or other VEM methods for hyperparameter selection.

## 5 Related Work

Our paper relates to Bayesian inference and stochastic optimization.

Scalable MCMC.

Recent work in Bayesian statistics focuses on making MCMC sampling algorithms scalable by using stochastic gradients. In particular,

Welling and Teh (2011) developed stochastic gradient Langevin dynamics (SGLD). This algorithm samples from a Bayesian posterior by adding artificial noise to the stochastic gradient which, at long times, dominates the SGD noise. Also see Sato and Nakagawa (2014) for a detailed convergence analysis of the algorithm. Though elegant, one disadvantage of SGLD is that the learning rate must be decreased to achieve the correct sampling regime, and the algorithm can suffer from slow mixing times. Other research suggests improvements to this issue, using Hamiltonian Monte-Carlo (Chen et al., 2014) or thermostats (Ding et al., 2014). Ma et al. (2015) give a complete classification of possible stochastic gradient-based MCMC schemes.

Above, we analyzed properties of stochastic gradient Fisher scoring (Ahn et al., 2012). This algorithm speeds up mixing times in SGLD by preconditioning a gradient with the inverse sampling noise covariance. This allows constant learning rates, while maintaining long-run samples from the posterior. In contrast, we do not aim to sample exactly from the posterior. We describe how to tune the parameters of SGD such that its stationary distribution approximates the posterior.

Maclaurin et al. (2015) also interpret SGD as a non-parametric variational inference scheme, but with different goals and in a different formalism. The paper proposes a way to track entropy changes in the implicit variational objective, based on estimates of the Hessian. As such, they mainly consider sampling distributions that are not stationary, whereas we focus on constant learning rates and distributions that have (approximately) converged. Note that their notion of hyperparameters does not refer to model parameters but to parameters of SGD.

Stochastic Optimization.   Stochastic gradient descent is an active field of research (Zhang, 2004; Bottou, 1998). Many papers discuss constant step-size SGD. Bach and Moulines (2013) and Flammarion and Bach (2015) discuss convergence rates of averaged gradients with constant step size, while Défossez and Bach (2015) analyze sampling distributions using quasi-martingale techniques. Toulis et al. (2014) calculate the asymptotic variance of SGD for the case of decreasing learning rates, assuming that the data is distributed according to the model. None of these papers use variational arguments.

The fact that optimal preconditioning (using a decreasing Robbins-Monro schedule) is achieved by choosing the inverse noise covariance was first shown in (Sakrison, 1965), but here we derive the same result based on different arguments and suggest a scalar prefactor. Note that the optimal scalar learning rate can also be derived based on stability arguments, as was done in the context of least mean square filters (Widrow and Stearns, 1985).

Finally, Chen et al. (2015a) also draw analogies between SGD and scalable MCMC. They suggest annealing the posterior over iterations in order to use scalable MCMC as a tool for global optimization. We follow the opposite idea and suggest using constant SGD as an approximate sampler by choosing an appropriate learning rate and preconditioner.

Stochastic differential equations.   The idea of analyzing stochastic gradient descent with stochastic differential equations is well established in the stochastic approximation literature (Kushner and Yin, 2003; Ljung et al., 2012). Recent work focuses on dynamical aspects of the algorithm. Li et al. (2015) discuss several one-dimensional cases and momentum. Chen et al. (2015b) analyze stochastic gradient MCMC algorithms and study their convergence properties using stochastic differential equations.

Our work makes use of the same formalism but has a different focus. Instead of analyzing dynamical properties, we focus on stationary distributions. Further, our paper introduces the idea of minimizing the KL divergence between the multivariate sampling distribution and the posterior.

## 6 Conclusions

We analyzed new optimization goals of stochastic gradient descent in the context of statistical machine learning. Instead of decreasing the learning rate to zero, we ask for the optimal constant learning rate such that the Kullback-Leibler divergence between the stationary distribution of SGD and the posterior is minimized. This goal leads to criteria for optimal learning rates, minibatch sizes, and preconditioning matrices. To compute stationary distributions and KL divergences, we approximated SGD by a continuous-time stochastic process, the multivariate Ornstein-Uhlenbeck process. We also presented a novel analysis of stochastic gradient Fisher scoring. Finally, we demonstrated that a simple SGD algorithm can compete with stochastic variational methods at empirical Bayesian hyperparameter optimization.
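To make the recipe concrete, the following sketch (ours, not from the paper) runs constant-rate SGD on a toy conjugate-Gaussian model, where the posterior is N(mean(x), 1/N) and the optimal rate ϵ* = 2DS / (N Tr(BBᵀ)) can be estimated from the data; the model, loss, and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: x_i ~ N(1, 1), flat prior on theta, per-datum loss
# l_i(theta) = 0.5 * (theta - x_i)^2, so the posterior is N(mean(x), 1/N).
N, S, D = 1000, 10, 1
x = rng.normal(1.0, 1.0, size=N)

# Gradient-noise covariance BB^T: here grad l_i = theta - x_i, so BB^T is
# simply the sample variance of the data (independent of theta).
bbT = x.var()
eps = 2 * D * S / (N * bbT)   # optimal constant learning rate

theta, chain = 0.0, []
for t in range(100_000):
    batch = rng.choice(x, size=S)            # minibatch (with replacement)
    theta -= eps * (theta - batch.mean())    # constant-rate SGD step
    if t >= 2_000:                           # discard burn-in
        chain.append(theta)

print(np.mean(chain), np.var(chain))  # close to posterior mean x.mean() and variance 1/N
```

The empirical variance of the iterates matches the posterior variance 1/N up to the discretization error of order ϵ, illustrating that constant SGD with the tuned rate behaves as an approximate posterior sampler.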

## References

• Ahn et al. (2012) Ahn, S., Korattikara, A., and Welling, M. (2012). Bayesian posterior sampling via stochastic gradient Fisher scoring. arXiv preprint arXiv:1206.6380.
• Bach and Moulines (2013) Bach, F. and Moulines, E. (2013). Non-strongly-convex smooth stochastic approximation with convergence rate O(1/n). In Advances in Neural Information Processing Systems, pages 773–781.
• Bishop (2006) Bishop, C. (2006). Pattern Recognition and Machine Learning. Springer New York.
• Bottou (1998) Bottou, L. (1998). Online learning and stochastic approximations. On-line learning in neural networks, 17(9):25.
• Chen et al. (2015a) Chen, C., Carlson, D., Gan, Z., Li, C., and Carin, L. (2015a). Bridging the gap between stochastic gradient MCMC and stochastic optimization. arXiv preprint arXiv:1512.07962.
• Chen et al. (2015b) Chen, C., Ding, N., and Carin, L. (2015b). On the convergence of stochastic gradient MCMC algorithms with high-order integrators. In Advances in Neural Information Processing Systems, pages 2269–2277.
• Chen et al. (2014) Chen, T., Fox, E. B., and Guestrin, C. (2014). Stochastic gradient Hamiltonian Monte Carlo. arXiv preprint arXiv:1402.4102.
• Défossez and Bach (2015) Défossez, A. and Bach, F. (2015). Averaged least-mean-squares: Bias-variance trade-offs and optimal sampling distributions. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, pages 205–213.
• Ding et al. (2014) Ding, N., Fang, Y., Babbush, R., Chen, C., Skeel, R. D., and Neven, H. (2014). Bayesian sampling using stochastic gradient thermostats. In Advances in neural information processing systems, pages 3203–3211.
• Duchi et al. (2011) Duchi, J., Hazan, E., and Singer, Y. (2011). Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159.
• Flammarion and Bach (2015) Flammarion, N. and Bach, F. (2015). From averaging to acceleration, there is only a step-size. arXiv preprint arXiv:1504.01577.
• Gardiner et al. (1985) Gardiner, C. W. et al. (1985). Handbook of stochastic methods, volume 4. Springer Berlin.
• Jordan et al. (1999a) Jordan, M., Ghahramani, Z., Jaakkola, T., and Saul, L. (1999a). Introduction to variational methods for graphical models. Machine Learning, 37:183–233.
• Jordan et al. (1999b) Jordan, M. I., Ghahramani, Z., Jaakkola, T. S., and Saul, L. K. (1999b). An introduction to variational methods for graphical models. Machine learning, 37(2):183–233.
• Kramers (1940) Kramers, H. A. (1940). Brownian motion in a field of force and the diffusion model of chemical reactions. Physica, 7(4):284–304.
• Kucukelbir et al. (2015) Kucukelbir, A., Ranganath, R., Gelman, A., and Blei, D. (2015). Automatic variational inference in stan. In Advances in Neural Information Processing Systems, pages 568–576.
• Kushner and Yin (2003) Kushner, H. J. and Yin, G. (2003). Stochastic approximation and recursive algorithms and applications, volume 35. Springer Science & Business Media.
• Li et al. (2015) Li, Q., Tai, C., et al. (2015). Dynamics of stochastic gradient algorithms. arXiv preprint arXiv:1511.06251.
• Ljung et al. (2012) Ljung, L., Pflug, G. C., and Walk, H. (2012). Stochastic approximation and optimization of random systems, volume 17. Birkhäuser.
• Longford (1987) Longford, N. T. (1987). A fast scoring algorithm for maximum likelihood estimation in unbalanced mixed models with nested random effects. Biometrika, 74(4):817–827.
• Ma et al. (2015) Ma, Y.-A., Chen, T., and Fox, E. B. (2015). A complete recipe for stochastic gradient MCMC. arXiv preprint arXiv:1506.04696.
• Maclaurin et al. (2015) Maclaurin, D., Duvenaud, D., and Adams, R. P. (2015). Early stopping is nonparametric variational inference. arXiv preprint arXiv:1504.01344.
• Robbins and Monro (1951) Robbins, H. and Monro, S. (1951). A stochastic approximation method. The annals of mathematical statistics, pages 400–407.
• Sakrison (1965) Sakrison, D. J. (1965). Efficient recursive estimation; application to estimating the parameters of a covariance function. International Journal of Engineering Science, 3(4):461–483.
• Sato and Nakagawa (2014) Sato, I. and Nakagawa, H. (2014). Approximation analysis of stochastic gradient Langevin dynamics by using Fokker-Planck equation and Ito process. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 982–990.
• Tieleman and Hinton (2012) Tieleman, T. and Hinton, G. (2012). Lecture 6.5—RmsProp: Divide the Gradient by a Running Average of its Recent Magnitude. COURSERA: Neural Networks for Machine Learning.
• Toulis et al. (2014) Toulis, P., Airoldi, E., and Rennie, J. (2014). Statistical analysis of stochastic gradient methods for generalized linear models. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 667–675.
• Toulis, P., Tran, D., and Airoldi, E. M. Towards stability and optimality in stochastic gradient descent.
• Uhlenbeck and Ornstein (1930) Uhlenbeck, G. E. and Ornstein, L. S. (1930). On the theory of the brownian motion. Physical review, 36(5):823.
• Welling and Teh (2011) Welling, M. and Teh, Y. W. (2011). Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 681–688.
• Widrow and Stearns (1985) Widrow, B. and Stearns, S. D. (1985). Adaptive Signal Processing. Prentice-Hall, Englewood Cliffs, NJ.
• Zhang (2004) Zhang, T. (2004). Solving large scale linear prediction problems using stochastic gradient descent algorithms. In Proceedings of the twenty-first international conference on Machine learning, page 116. ACM.

## Appendix A Stationary Covariance

The Ornstein-Uhlenbeck process has an analytic solution in terms of the stochastic integral (Gardiner et al., 1985),

$$\theta(t) = \exp(-At)\,\theta(0) + \sqrt{\tfrac{\epsilon}{S}} \int_0^t \exp[-A(t-t')]\,B\,dW(t'). \tag{28}$$

Following Gardiner’s book, we derive an algebraic relation for the stationary covariance of the multivariate Ornstein-Uhlenbeck process. Define the stationary covariance $\Sigma = \lim_{t\to\infty} \mathbb{E}[\theta(t)\theta(t)^\top]$. Using the formal solution (28) for $\theta(t)$, we find

$$
\begin{aligned}
A\Sigma + \Sigma A^\top
&= \tfrac{\epsilon}{S}\int_{-\infty}^{t} A \exp[-A(t-t')]\, BB^\top \exp[-A^\top(t-t')]\, dt' \\
&\quad + \tfrac{\epsilon}{S}\int_{-\infty}^{t} \exp[-A(t-t')]\, BB^\top \exp[-A^\top(t-t')]\, dt'\, A^\top \\
&= \tfrac{\epsilon}{S}\int_{-\infty}^{t} \tfrac{d}{dt'}\Big( \exp[-A(t-t')]\, BB^\top \exp[-A^\top(t-t')] \Big)\, dt' \\
&= \tfrac{\epsilon}{S}\, BB^\top.
\end{aligned}
$$

We used that the contribution from the lower limit of the integral vanishes, since the eigenvalues of $A$ are positive.
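This derivation can be checked numerically. The sketch below (ours; the dimensions and matrices are arbitrary illustrative choices) solves the stationary condition $A\Sigma + \Sigma A^\top = \tfrac{\epsilon}{S} BB^\top$ via the Kronecker-product form of the Lyapunov equation, and compares the result against the stochastic-integral solution evaluated in the eigenbasis of a symmetric $A$:

```python
import numpy as np

rng = np.random.default_rng(1)
D, eps, S = 4, 0.1, 10

# A: symmetric positive definite (positive eigenvalues, as the derivation assumes)
M = rng.normal(size=(D, D))
A = M @ M.T + D * np.eye(D)
B = rng.normal(size=(D, D))
Q = (eps / S) * B @ B.T

# Solve A Sigma + Sigma A^T = (eps/S) B B^T via the identity
# vec(A X + X A^T) = (kron(A, I) + kron(I, A)) vec(X)  (row-major vec).
I = np.eye(D)
K = np.kron(A, I) + np.kron(I, A)
Sigma = np.linalg.solve(K, Q.reshape(-1)).reshape(D, D)

# Independent check: the stochastic-integral covariance
# (eps/S) * int_0^inf exp(-At) BB^T exp(-A^T t) dt,
# which in the eigenbasis of A integrates entrywise to Q'_ij / (lam_i + lam_j).
lam, V = np.linalg.eigh(A)
Qp = V.T @ Q @ V
Sigma_int = V @ (Qp / (lam[:, None] + lam[None, :])) @ V.T

assert np.allclose(Sigma, Sigma_int)
assert np.allclose(A @ Sigma + Sigma @ A.T, Q)
```

Both routes agree, confirming that the integral solution satisfies the algebraic stationary condition derived above.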

## Appendix B Stochastic Gradient Fisher Scoring

We start from the Ornstein-Uhlenbeck process

$$
d\theta(t) = -HA\,\theta(t)\,dt + H\big[\sqrt{\epsilon/S}\,B + E\big]\,dW(t) \equiv -A'\theta(t)\,dt + B'\,dW(t). \tag{29}
$$

We defined $A' = HA$ and $B' = H\big[\sqrt{\epsilon/S}\,B + E\big]$. As derived in the main paper, the variational bound is (up to an additive constant)

$$\mathrm{KL}_c = \tfrac{N}{2}\mathrm{Tr}(A\Sigma) - \tfrac{1}{2}\log|\Sigma| - \tfrac{1}{2}\log|NA|. \tag{30}$$

To evaluate it, the task is to remove the unknown stationary covariance $\Sigma$ from the bound. To this end, as before, we use the identity $A'\Sigma + \Sigma A'^\top = B'B'^\top$ for the stationary covariance.

The criterion for the stationary covariance is equivalent to

$$
\begin{aligned}
HA\Sigma + \Sigma A^\top H &= H\big[\tfrac{\epsilon}{S} BB^\top + EE^\top\big]H \\
\Leftrightarrow\quad A\Sigma + H^{-1}\Sigma A^\top H &= \tfrac{\epsilon}{S} BB^\top H + EE^\top H \\
\Rightarrow\quad \mathrm{Tr}(A\Sigma) &= \tfrac{1}{2}\mathrm{Tr}\Big(H\big[\tfrac{\epsilon}{S} BB^\top + EE^\top\big]\Big).
\end{aligned}
\tag{31}
$$

We can re-parametrize the covariance as $\Sigma = HT$, such that the matrix $T$ is now the unknown quantity. The KL divergence is therefore

$$
\begin{aligned}
\mathrm{KL}_c &= \tfrac{N}{2}\mathrm{Tr}(A\Sigma) - \tfrac{1}{2}\log|\Sigma| - \tfrac{1}{2}\log|NA| \\
&= \tfrac{N}{4}\mathrm{Tr}\Big(H\big[\tfrac{\epsilon}{S} BB^\top + EE^\top\big]\Big) - \tfrac{1}{2}\log|T| - \tfrac{1}{2}\log|H| - \tfrac{1}{2}\log|NA|,
\end{aligned}
\tag{32}
$$

which is the result we give in the main paper.
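The trace identity in Eq. (31), which eliminates $\Sigma$ from the bound, holds for any total noise covariance $C$ and invertible preconditioner $H$. The sketch below (ours; all matrices are arbitrary illustrative choices, and cross terms between the two noise sources are neglected as in the derivation) verifies it numerically:

```python
import numpy as np

rng = np.random.default_rng(2)
D, eps, S = 4, 0.1, 10

def spd(rng, d):
    """Random symmetric positive-definite matrix."""
    m = rng.normal(size=(d, d))
    return m @ m.T + d * np.eye(d)

A, H = spd(rng, D), spd(rng, D)        # Hessian-like matrix and preconditioner
B, E = rng.normal(size=(D, D)), rng.normal(size=(D, D))
C = (eps / S) * B @ B.T + E @ E.T      # total noise covariance before preconditioning

# Stationary covariance of the preconditioned process with drift matrix A' = HA
# and diffusion covariance B'B'^T = H C H: solve A' Sigma + Sigma A'^T = H C H
# via the Kronecker trick.
Ap, Qp = H @ A, H @ C @ H.T
I = np.eye(D)
K = np.kron(Ap, I) + np.kron(I, Ap)
Sigma = np.linalg.solve(K, Qp.reshape(-1)).reshape(D, D)

# The trace identity used to eliminate Sigma from the variational bound:
assert np.isclose(np.trace(A @ Sigma), 0.5 * np.trace(H @ C))
```

Note that the identity only requires $H$ to be invertible and $\Sigma$ to be the (symmetric) solution of the stationary condition; it does not depend on the scaling convention for the gradient noise.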

## Appendix C Square root preconditioning

Finally, we analyze the case where we precondition with a matrix that is proportional to the square root of the diagonal entries of the noise covariance.

We define

$$G = \sqrt{\operatorname{diag}(BB^\top)} \tag{33}$$

as the diagonal matrix that contains the square roots of the diagonal elements of the noise covariance. We use an additional scalar learning rate $\epsilon$.

#### Theorem (taking square roots).

Consider SGD preconditioned with $G^{-1}$, with $G$ as defined above. Under the previous assumptions, the constant learning rate that minimizes the KL divergence between the stationary distribution of this process and the posterior is

$$\epsilon^* = \frac{2DS}{N\,\operatorname{Tr}(BB^\top G^{-1})}. \tag{34}$$

For the proof, we read off the appropriate KL divergence from the proof of Theorem 2, with the preconditioner set to $G^{-1}$:

$$\mathrm{KL}(q\,\|\,f)_c = \frac{\epsilon N}{2S}\operatorname{Tr}(BB^\top G^{-1}) - \operatorname{Tr}\log(G) - \frac{D}{2}\log\frac{\epsilon}{S} - \frac{1}{2}\log|\Sigma|. \tag{35}$$

Since the stationary covariance $\Sigma$ scales linearly in $\epsilon/S$, the last term contributes a further $-\tfrac{D}{2}\log\epsilon$ up to an additive constant. Minimizing this KL divergence over the learning rate $\epsilon$ then yields Eq. 34. ∎
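For concreteness, Eqs. (33) and (34) amount to the following computation (a sketch with arbitrary illustrative values; the sanity check uses isotropic noise $B = cI$, for which the rate collapses to $2S/(Nc)$):

```python
import numpy as np

rng = np.random.default_rng(3)
D, N, S = 4, 10_000, 100

B = rng.normal(size=(D, D))              # noise factor; BB^T is the gradient-noise covariance
G = np.sqrt(np.diag(np.diag(B @ B.T)))   # Eq. (33): diagonal matrix of root noise magnitudes

# Eq. (34): optimal constant learning rate under square-root preconditioning
eps_opt = 2 * D * S / (N * np.trace(B @ B.T @ np.linalg.inv(G)))

# Sanity check: isotropic noise B = c*I gives G = c*I and Tr(BB^T G^{-1}) = D*c,
# so the optimal rate reduces to 2S/(N*c).
c = 2.0
B_iso = c * np.eye(D)
G_iso = np.sqrt(np.diag(np.diag(B_iso @ B_iso.T)))
eps_iso = 2 * D * S / (N * np.trace(B_iso @ B_iso.T @ np.linalg.inv(G_iso)))
print(eps_iso)  # prints 0.01
```

As with full preconditioning, larger gradient noise (larger diagonal of $BB^\top$) shrinks the optimal rate, while larger minibatches permit proportionally larger rates.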