Stochastic Gradient Hamiltonian Monte Carlo for Non-Convex Learning in the Big Data Regime

03/25/2019 · by Huy N. Chau, et al.

Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) is a momentum version of stochastic gradient descent with properly injected Gaussian noise to find a global minimum. In this paper, a non-asymptotic convergence analysis of SGHMC is given in the context of non-convex optimization, where subsampling techniques are used over an i.i.d. dataset for gradient updates. Our results complement those of [RRT17] and improve on those of [GGZ18].


1 Introduction

Let $(\Omega, \mathcal{F}, P)$ be a probability space where all the random objects of this paper will be defined. The expectation of a random variable $Y$ with values in a Euclidean space will be denoted by $E[Y]$.

We consider the following optimization problem

$$\min_{w \in \mathbb{R}^d} F(w), \quad \text{where } F(w) := E[f(w, Z)], \qquad (1)$$

and $Z$ is a random element in some measurable space $(\mathcal{Z}, \mathcal{B}(\mathcal{Z}))$ with an unknown probability law $\mathcal{D}$. The function $f : \mathbb{R}^d \times \mathcal{Z} \to \mathbb{R}$ is assumed continuously differentiable in its first argument (for each $z \in \mathcal{Z}$) but it can possibly be non-convex. Suppose that one has access to i.i.d. samples $Z_1, \ldots, Z_n$ drawn from $\mathcal{D}$, where $n$ is fixed. Our goal is to compute an approximate minimizer $\hat{w}$ such that the population risk $E[F(\hat{w})]$ is minimized, where the expectation is taken with respect to the training data and the additional randomness generating $\hat{w}$.

Since the distribution $\mathcal{D}$ of $Z$ is unknown, we consider the empirical risk minimization problem

$$\min_{w \in \mathbb{R}^d} F_{\mathbf{Z}}(w) := \frac{1}{n} \sum_{i=1}^{n} f(w, Z_i), \qquad (2)$$

using the dataset $\mathbf{Z} = (Z_1, \ldots, Z_n)$.
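To fix ideas, here is a minimal Python sketch of the empirical risk (2) as a plain dataset average; the quadratic loss and the synthetic data are hypothetical choices for illustration, not part of the paper.

```python
import numpy as np

def empirical_risk(f, w, data):
    """Empirical risk (2): the average of f(w, z_i) over the dataset."""
    return np.mean([f(w, z) for z in data])

# Toy example with a hypothetical quadratic loss f(w, z) = |w - z|^2 / 2.
f = lambda w, z: 0.5 * np.sum((w - z) ** 2)
data = np.random.default_rng(0).normal(size=(100, 2))  # n = 100 samples in R^2
print(empirical_risk(f, np.zeros(2), data))
```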

Stochastic gradient algorithms based on Langevin Monte Carlo have gained much attention in recent years. Two popular algorithms are Stochastic Gradient Langevin Dynamics (SGLD) and Stochastic Gradient Hamiltonian Monte Carlo (SGHMC). First, we summarize the use of SGLD in optimization, as presented in [RRT17]. Consider the overdamped Langevin stochastic differential equation

$$dW_t = -\nabla F_{\mathbf{Z}}(W_t)\, dt + \sqrt{2/\beta}\, dB_t, \qquad (3)$$

where $(B_t)_{t \ge 0}$ is the standard Brownian motion in $\mathbb{R}^d$ and $\beta > 0$ is the inverse temperature parameter. Under suitable assumptions on $F_{\mathbf{Z}}$, the SDE (3) admits the Gibbs measure $\pi_\beta(dw) \propto \exp(-\beta F_{\mathbf{Z}}(w))\, dw$ as its unique invariant distribution. In addition, it is shown that for sufficiently large $\beta$, the Gibbs distribution concentrates around the global minimizers of $F_{\mathbf{Z}}$. Therefore, one can use the value of $W_t$ from (3) (or from its discretized counterpart, SGLD) as an approximate solution to the empirical risk problem, provided that $t$ is large and the temperature is low.
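For later comparison, a minimal Python sketch of one SGLD step, i.e. the Euler discretization of (3), might look as follows; the function and parameter names are illustrative choices, not notation from the paper.

```python
import numpy as np

def sgld_step(w, grad, lam, beta, rng):
    """One Euler step for (3): drift -grad F_Z, noise scaled by sqrt(2*lam/beta)."""
    xi = rng.standard_normal(w.shape)  # fresh standard Gaussian noise
    return w - lam * grad(w) + np.sqrt(2.0 * lam / beta) * xi
```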

In this paper, we consider the underdamped (second-order) Langevin diffusion

$$dV_t = -\big(\gamma V_t + \nabla F_{\mathbf{Z}}(X_t)\big)\, dt + \sqrt{2\gamma/\beta}\, dB_t, \qquad (4)$$
$$dX_t = V_t\, dt, \qquad (5)$$

where $X_t$ and $V_t$ model the position and the momentum of a particle moving in a field of force $-\nabla F_{\mathbf{Z}}$, with random force given by Gaussian noise, and $\gamma > 0$ is a friction parameter. It is shown that, under suitable conditions on $F_{\mathbf{Z}}$, the Markov process $(X_t, V_t)_{t \ge 0}$ is ergodic and has a unique stationary distribution

$$\pi_\beta(dx, dv) := \frac{1}{\Gamma} \exp\Big(-\beta\big(F_{\mathbf{Z}}(x) + \tfrac{1}{2}|v|^2\big)\Big)\, dx\, dv,$$

where $\Gamma$ is the normalizing constant

$$\Gamma := \int_{\mathbb{R}^{2d}} \exp\Big(-\beta\big(F_{\mathbf{Z}}(x) + \tfrac{1}{2}|v|^2\big)\Big)\, dx\, dv.$$

It is easy to observe that the $x$-marginal distribution of $\pi_\beta(dx, dv)$ is the invariant distribution of (3). We consider the first-order Euler discretization of (4), (5), also called Stochastic Gradient Hamiltonian Monte Carlo (SGHMC), given as follows:

$$v_{k+1} = v_k - \lambda\big[\gamma v_k + \nabla F_{\mathbf{Z}}(x_k)\big] + \sqrt{2\gamma\lambda/\beta}\, \xi_{k+1}, \qquad (6)$$
$$x_{k+1} = x_k + \lambda v_k, \qquad (7)$$

where $\lambda > 0$ is a step size parameter and $(\xi_k)_{k \ge 1}$ is a sequence of i.i.d. standard Gaussian random vectors in $\mathbb{R}^d$. The initial condition $(x_0, v_0)$ may be random, but independent of $(\xi_k)_{k \ge 1}$.
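As a concrete reading of (6), (7), the following sketch performs one SGHMC step with the full gradient; it is a minimal illustration, assuming access to a gradient callable, with names of our own choosing.

```python
import numpy as np

def sghmc_step(x, v, grad, lam, gamma, beta, rng):
    """One step of the Euler discretization (6)-(7).

    x, v  : position and momentum in R^d
    grad  : callable returning the gradient of the empirical risk at x
    lam   : step size; gamma : friction; beta : inverse temperature
    """
    xi = rng.standard_normal(x.shape)  # i.i.d. N(0, I_d) vector
    v_new = v - lam * (gamma * v + grad(x)) + np.sqrt(2.0 * gamma * lam / beta) * xi
    x_new = x + lam * v  # note: the position update uses the *old* momentum v_k
    return x_new, v_new
```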

In certain contexts, full knowledge of the gradient $\nabla F_{\mathbf{Z}}$ is not available; however, using the dataset $\mathbf{Z}$, one can construct unbiased estimates of it. In what follows, we adopt the general setting given by [RRT17]. Let $(\mathcal{U}, \mathcal{B}(\mathcal{U}))$ be a measurable space, and let $H : \mathbb{R}^d \times \mathcal{Z}^n \times \mathcal{U} \to \mathbb{R}^d$ be such that for any $w \in \mathbb{R}^d$,

$$E\big[H(w, \mathbf{Z}, U) \,\big|\, \mathbf{Z}\big] = \nabla F_{\mathbf{Z}}(w), \qquad (8)$$

where $U$ is a random element in $\mathcal{U}$ with probability law $m$. Conditionally on $\mathbf{Z}$, the SGHMC algorithm is defined by

$$V_{k+1} = V_k - \lambda\big[\gamma V_k + H(X_k, \mathbf{Z}, U_{k+1})\big] + \sqrt{2\gamma\lambda/\beta}\, \xi_{k+1}, \qquad (9)$$
$$X_{k+1} = X_k + \lambda V_k, \qquad (10)$$

where $(U_k)_{k \ge 1}$ is a sequence of i.i.d. random elements in $\mathcal{U}$ with law $m$. We also assume from now on that $\mathbf{Z}$, $(U_k)_{k \ge 1}$, $(\xi_k)_{k \ge 1}$ and $(X_0, V_0)$ are independent.
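Under the same caveats, the recursion (9), (10) differs from (6), (7) only in that the full gradient is replaced by the oracle $H$. A sketch of the resulting loop follows, with `H` and `draw_u` left abstract: `H` stands for any estimator satisfying the unbiasedness condition (8), with the dataset absorbed into it via a closure.

```python
import numpy as np

def sghmc(H, draw_u, x0, v0, n_iter, lam, gamma, beta, rng):
    """SGHMC with a stochastic gradient oracle, cf. (9)-(10).

    H      : callable (x, u) -> unbiased estimate of the empirical-risk gradient
    draw_u : callable (rng) -> fresh auxiliary randomness U_{k+1}
    """
    x, v = x0, v0
    for _ in range(n_iter):
        u = draw_u(rng)                    # independent U_{k+1}
        xi = rng.standard_normal(x.shape)  # independent Gaussian noise
        # simultaneous update: both right-hand sides use the old (x, v)
        x, v = (x + lam * v,
                v - lam * (gamma * v + H(x, u))
                + np.sqrt(2.0 * gamma * lam / beta) * xi)
    return x, v
```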

Our ultimate goal is to find approximate global minimizers of problem (1). Let $X_k$ be the output of the algorithm (9), (10) after $k$ iterations, and let $w^*$ be such that $F(w^*) = \min_{w \in \mathbb{R}^d} F(w)$. The excess risk is decomposed as follows, see also [RRT17]:

$$E[F(X_k)] - F(w^*) = \Big(E[F(X_k)] - E[F_{\mathbf{Z}}(X_k)]\Big) + \Big(E[F_{\mathbf{Z}}(X_k)] - E\big[\min_{w} F_{\mathbf{Z}}(w)\big]\Big) + \Big(E\big[\min_{w} F_{\mathbf{Z}}(w)\big] - F(w^*)\Big). \qquad (11)$$

The remaining part of the present paper is devoted to finding bounds for these error terms. Section 2 summarizes the technical conditions and the main results. A comparison of our contributions with previous studies is given in Section 3. Proofs are given in Section 4.

Notation and conventions. For $x, y \in \mathbb{R}^d$, the scalar product in $\mathbb{R}^d$ is denoted by $\langle x, y \rangle$. We use $|\cdot|$ to denote the Euclidean norm (where the dimension of the space may vary). $\mathcal{B}(\mathbb{R}^d)$ denotes the Borel $\sigma$-field of $\mathbb{R}^d$. For any $\mathbb{R}^d$-valued random variable $X$ and for any $p \ge 1$, let us set $\|X\|_p := (E|X|^p)^{1/p}$. We denote by $L^p$ the set of $X$ with $\|X\|_p < \infty$. The Wasserstein distance of order $p \ge 1$ between two probability measures $\mu$ and $\nu$ on $\mathcal{B}(\mathbb{R}^d)$ is defined by

$$W_p(\mu, \nu) := \inf_{\pi \in \Pi(\mu, \nu)} \left( \int_{\mathbb{R}^d \times \mathbb{R}^d} |x - y|^p\, \pi(dx, dy) \right)^{1/p}, \qquad (12)$$

where $\Pi(\mu, \nu)$ is the set of couplings of $(\mu, \nu)$, see e.g. [Vil08]. For two $\mathbb{R}^d$-valued random variables $X$ and $Y$, we denote $W_p(X, Y) := W_p(\mathrm{Law}(X), \mathrm{Law}(Y))$. We do not indicate the dimension $d$ in the notation and it may vary.
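As a quick sanity check on (12): between two Gaussian measures, the order-2 Wasserstein distance has a well-known closed form, which the following standalone snippet (an illustration of ours, not from the paper) evaluates.

```python
import numpy as np
from scipy.linalg import sqrtm

def w2_gaussian(m1, S1, m2, S2):
    """Closed-form W_2 between N(m1, S1) and N(m2, S2)."""
    r = sqrtm(S2)
    cross = sqrtm(r @ S1 @ r)  # (S2^{1/2} S1 S2^{1/2})^{1/2}
    d2 = np.sum((m1 - m2) ** 2) + np.trace(S1 + S2 - 2.0 * np.real(cross))
    return float(np.sqrt(max(d2, 0.0)))

# Example: W_2 between N(0, I) and N((1,1), 2I) in R^2.
print(w2_gaussian(np.zeros(2), np.eye(2), np.ones(2), 2.0 * np.eye(2)))
```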

2 Assumptions and main results

The following conditions are required throughout the paper.

Assumption 2.1.

The function $f(\cdot, z)$ is continuously differentiable for each $z \in \mathcal{Z}$, takes non-negative values, and there are constants $A, B \ge 0$ such that for any $z \in \mathcal{Z}$, $f(0, z) \le A$ and $|\nabla f(0, z)| \le B$.

Assumption 2.2.

There is $L > 0$ such that, for each $z \in \mathcal{Z}$,

$$|\nabla f(w, z) - \nabla f(w', z)| \le L\, |w - w'|, \qquad w, w' \in \mathbb{R}^d.$$

Assumption 2.3.

There exist constants $a > 0$ and $b \ge 0$ such that

$$\langle w, \nabla f(w, z) \rangle \ge a\, |w|^2 - b, \qquad w \in \mathbb{R}^d,\ z \in \mathcal{Z}.$$

Assumption 2.4.

For each $\mathbf{z} \in \mathcal{Z}^n$ and $u \in \mathcal{U}$, it holds that

$$|H(w, \mathbf{z}, u) - H(w', \mathbf{z}, u)| \le L\, |w - w'|, \qquad w, w' \in \mathbb{R}^d.$$

Assumption 2.5.

There exists a constant $\delta > 0$ such that for every $w \in \mathbb{R}^d$,

$$E\Big[\big|H(w, \mathbf{Z}, U) - \nabla F_{\mathbf{Z}}(w)\big|^2 \,\Big|\, \mathbf{Z}\Big] \le \delta\, (1 + |w|^2).$$

Assumption 2.6.

The law of the initial state $(X_0, V_0)$ satisfies

$$E\big[\mathcal{V}(X_0, V_0)\big] < \infty,$$

where $\mathcal{V}$ is the Lyapunov function defined in (16) below.

Remark 2.7.

If the set of global minimizers is bounded, we can always redefine the function to be quadratic outside a compact set containing the origin while maintaining its minimizers; hence, Assumption 2.3 can be satisfied in practice. For instance, the double-well function $f(w) = \frac{1}{4}|w|^4 - \frac{1}{2}|w|^2$ is non-convex but satisfies $\langle w, \nabla f(w) \rangle = |w|^4 - |w|^2 \ge |w|^2 - 1$, i.e. Assumption 2.3 holds with $a = b = 1$. Assumption 2.4 means that the estimated gradient is also Lipschitz when using the same training dataset. For example, at each iteration of SGHMC, we may sample uniformly with replacement a random minibatch of size $\ell$. Then we can choose $U = (I_1, \ldots, I_\ell)$, where the $I_j$ are i.i.d. random variables having uniform distribution on $\{1, \ldots, n\}$. The gradient estimate is thus

$$H(w, \mathbf{Z}, U) = \frac{1}{\ell} \sum_{j=1}^{\ell} \nabla f(w, Z_{I_j}),$$

which is clearly unbiased, and Assumption 2.4 will be satisfied whenever Assumption 2.2 is in force. Assumption 2.5 controls the variance of the gradient estimate; a code sketch of this minibatch construction is given below.
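The minibatch estimator just described can be sketched as follows; this is illustrative code of ours, reusing the loop sketched after (9), (10), and `grad_f(w, z)` is an assumed per-sample gradient routine.

```python
import numpy as np

def make_minibatch_oracle(grad_f, data, batch_size):
    """Build (H, draw_u) for the minibatch estimator of Remark 2.7.

    H averages grad_f over the sampled indices; draw_u samples indices
    i.i.d. uniformly with replacement, matching U = (I_1, ..., I_l).
    """
    def H(w, u):
        return np.mean([grad_f(w, data[i]) for i in u], axis=0)

    def draw_u(rng):
        return rng.integers(0, len(data), size=batch_size)

    return H, draw_u
```

A pair built this way plugs directly into the SGHMC loop sketched after (9), (10).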

An auxiliary continuous-time process is needed in the subsequent analysis. For a step size $\lambda > 0$, denote by $B^\lambda_t := B_{\lambda t}/\sqrt{\lambda}$ the scaled Brownian motion. Let $(X^\lambda_t, V^\lambda_t)_{t \ge 0}$ be the solutions of

$$dV^\lambda_t = -\lambda\big[\gamma V^\lambda_t + \nabla F_{\mathbf{Z}}(X^\lambda_t)\big]\, dt + \sqrt{2\gamma\lambda/\beta}\, dB^\lambda_t, \qquad (13)$$
$$dX^\lambda_t = \lambda V^\lambda_t\, dt, \qquad (14)$$

with initial condition $(X^\lambda_0, V^\lambda_0) = (X_0, V_0)$, where $(X_0, V_0)$ may be random but independent of $(B_t)_{t \ge 0}$.

Our first result tracks the discrepancy between the SGHMC algorithm (9), (10) and the auxiliary processes (13), (14).

Theorem 2.8.

There exists a constant $C > 0$, not depending on the number of iterations, such that for all $k \in \mathbb{N}$,

$$W_2\big(\mathrm{Law}(X_k, V_k), \mathrm{Law}(X^\lambda_k, V^\lambda_k)\big) \le C\big(\sqrt{\lambda} + \sqrt{\delta}\big). \qquad (15)$$
Proof.

The proof of this theorem is given in Section 4.2. ∎

The following is the main result of the paper.

Theorem 2.9.

Suppose that the SGHMC iterates are defined by (9), (10). Then the expected population risk $E[F(X_k)]$ can be bounded by an explicit sum of optimization and generalization error terms, with appropriate constants, in which distances between probability laws are measured by the semimetric defined in (17) below.

Proof.

The proof of this theorem is given in Section 4.3. ∎

Corollary 2.10.

Let $\varepsilon > 0$ be a given precision level. Then $E[F(X_k)] - F(w^*) \le \varepsilon$ holds whenever the step size $\lambda$ and the variance parameter $\delta$ are chosen small enough, the inverse temperature $\beta$ and the sample size $n$ are chosen large enough, and the number of iterations $k$ is sufficiently large.

Proof.

From the proof of Theorem 2.9, or more precisely from (43), we need to choose the parameters so that each error term is at most of order $\varepsilon$. First, we choose $\lambda$, $\delta$, $\beta$ and $n$ so that the diffusion approximation and generalization terms are small enough, and then the required bound will hold for $k$ large enough. ∎

3 Related work and our contributions

Non-asymptotic convergence rates of Langevin dynamics based algorithms for approximate sampling from log-concave distributions have been intensively studied in recent years. For example, overdamped Langevin dynamics are discussed in [WT11], [Dal17b], [DM16], [DK17], [DM17] and others. Recently, [BCM18] treated the case of non-i.i.d. data streams with a certain mixing property. Underdamped Langevin dynamics are examined in [CFG14], [Nea11], [CCBJ17], etc. Further analyses of HMC are given in [BBLG17], [Bet17]. Subsampling methods have been applied to speed up HMC for large datasets, see [DQK17], [QKVT18].

The use of momentum to accelerate optimization methods has been discussed intensively in the literature, for example in [AP16]. In particular, SGHMC has been experimentally shown to perform better than SGLD in many applications, see [CDC15], [CFG14]. An important advantage of the underdamped SDE is that convergence to its stationary distribution is faster than that of the overdamped SDE in the Wasserstein distance, as shown in [EGZ17].

Finding an approximate minimizer is closely related to sampling from distributions that concentrate around the true minimizer. This well-known connection gave rise to the study of simulated annealing algorithms, see [Hwa80], [Gid85], [Haj85], [CHS87], [HKS89], [GM91], [GM93]. Recently, many studies have further investigated this connection by means of non-asymptotic convergence analysis of Langevin-based algorithms in stochastic non-convex optimization and large-scale data analysis, see [CCG16], [Dal17a].

Relaxing convexity is a more challenging issue. In [CCAY18], the problem of sampling from a target distribution is considered, where the potential is smooth everywhere and strongly convex outside a ball of finite radius. They provide upper bounds on the number of steps needed for the HMC algorithm to be within a given precision level of the equilibrium distribution in the 1-Wasserstein distance.

Our work continues these lines of research; the setting most similar to ours is that of the recent paper [GGZ18]. We summarize our contributions below:

  • Diffusion approximation. In Lemma 10 of [GGZ18], the upper bound for the 2-Wasserstein distance between the SGHMC algorithm at step $k$ and the underdamped SDE at time $k\lambda$ grows (up to constants) with the number of iterations $k$. Therefore, obtaining a given precision requires a careful choice of $\lambda$ and even of $k$. By introducing the auxiliary SDEs (13), (14), we are able to improve this bound, see Theorem 2.8. Our upper bound is better not only in the convergence rate, in both the step size and the variance of the gradient estimate, but also in the number of iterations, being uniform in $k$. This improves Lemma 10 and hence Theorem 2 of [GGZ18]. Our analysis of the variance of the algorithm is also different: the iteration does not accumulate mean squared errors as the number of steps goes to infinity.

  • Our proof of Theorem 2.8 is relatively simple and we do not need to adopt the techniques of [RRT17], which involve heavy functional analysis; e.g. the weighted Csiszár-Kullback-Pinsker inequality of [BV05] is not needed.

  • Thanks to the big data regime, the dependence structure of the dataset in the sampling mechanism can be arbitrary, see the proof of Theorem 2.8. The i.i.d. assumption on the dataset is used only for the generalization error. We could also incorporate non-i.i.d. data in our analysis, see Remark 4.5, but this is left for future research.

4 Proofs

4.1 A contraction result

In this section, we recall a contraction result of [EGZ17]; their constant and function are to be read as the corresponding quantities of the present paper, and the subscript $c$ below stands for "contraction". Using the upper bound of Lemma 5.1 below, one can find suitable constants, small enough, such that Assumption 2.1 of [EGZ17] is satisfied.

We define the Lyapunov function $\mathcal{V}$

(16)

For any two points of $\mathbb{R}^{2d}$, we set

where suitable positive constants, to be fixed later, appear, and $f$ is a continuous, non-decreasing concave function such that $f(0) = 0$, $f$ is $C^2$ on $(0, R_1)$ for some constant $R_1 > 0$ with positive one-sided derivatives at the endpoints, and $f$ is constant on $[R_1, \infty)$. For any two probability measures $\mu, \nu$ on $\mathcal{B}(\mathbb{R}^{2d})$, we define

(17)

Note that $\rho$ and $W_\rho$ are semimetrics but not necessarily metrics. A result from [EGZ17] is recalled below.

For a probability measure $\mu$ on $\mathcal{B}(\mathbb{R}^{2d})$, we denote by $\mu p_t$ the law of the solution of (4), (5) at time $t$ when the initial condition has law $\mu$.

Theorem 4.1.

There exists a continuous, non-decreasing concave function $f$ with $f(0) = 0$ such that for all probability measures $\mu, \nu$ on $\mathcal{B}(\mathbb{R}^{2d})$, we have

$$W_\rho(\mu p_t, \nu p_t) \le e^{-ct}\, W_\rho(\mu, \nu), \qquad (18)$$

where $c > 0$ is an explicit constant. The function $f$ is constant on $[R_1, \infty)$, $C^2$ on $(0, R_1)$, and has positive one-sided derivatives at $0$ and $R_1$.

Proof.

See Theorem 2.3 and Corollary 2.6 of [EGZ17]. ∎

It should be emphasized that (18) holds for every $t \ge 0$, and consequently, $W_\rho$ contracts at the exponential rate $c$.
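For instance, as a direct specialization of (18): taking $\nu = \pi_\beta$ and using the invariance $\pi_\beta p_t = \pi_\beta$ gives

$$W_\rho(\mu p_t, \pi_\beta) \le e^{-ct}\, W_\rho(\mu, \pi_\beta) \quad \text{for all } t \ge 0,$$

i.e. exponentially fast convergence to equilibrium in the semimetric $W_\rho$.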

4.2 Proof of Theorem 2.8

Proof.

For each , we define

Let be -valued random variables satisfying Assumption 2.6. For , we recursively define , and

(19)
(20)

Let . For each , and for each , we set

(21)

We estimate for ,

and

(22)

Denote . By Assumption 2.4, the estimation continues as follows

(23)

Using (22), one obtains

(24)

noting that

Therefore, the estimation in (23) continues as

Applying the discrete-time version of Grönwall’s lemma and taking squares, noting also that

yields

where

(25)

Taking conditional expectation with respect to , the estimation becomes

Since the random variables are independent, the random variables , are independent conditionally on , noting that is measurable with respect to . In addition, they have zero mean by the tower property of conditional expectation. By Assumption 2.4,

and thus

by the independence of from . From Assumptions 2.1, 2.5, and from Lemma 5.1, we deduce that

Therefore,

(26)

Doob’s inequality and (26) imply

Taking one more expectation and using Lemma 5.3 give