# Implicit Langevin Algorithms for Sampling From Log-concave Densities

For sampling from a log-concave density, we study implicit integrators resulting from θ-method discretization of the overdamped Langevin diffusion stochastic differential equation. Theoretical and algorithmic properties of the resulting sampling methods for θ ∈ [0, 1] and a range of step sizes are established. Our results generalize and extend prior works in several directions. In particular, for θ > 1/2, we prove geometric ergodicity and stability of the resulting methods for all step sizes. We show that obtaining subsequent samples amounts to solving a strongly convex optimization problem, which is readily achievable using one of numerous existing methods. Numerical examples supporting our theoretical analysis are also presented.


## 1 Introduction

Effectively sampling from arbitrary unnormalized probability distributions is a fundamental aspect of the Monte Carlo method, and is central to Bayesian inference. The most common cases involve probability densities π with support on all of ℝ^d, which can be written in the unnormalized form

 π(x) ∝ exp(−f(x)),  x ∈ ℝ^d.

The sampling problem concerns the construction of a set of points whose empirical distribution approaches π in some appropriate sense. A standard approach is Markov chain Monte Carlo (MCMC), in which approximate sampling from π is accomplished by simulating a π-ergodic Markov chain. By the ergodic theorem, this provides consistent Monte Carlo estimators for expectations involving the density π. The most popular approach to generate such a set of points is the Metropolis–Hastings algorithm (Hastings, 1970), which constructs a π-ergodic Markov chain by generating a proposal from a given transition kernel and implements an acceptance criterion for these proposals (see Robert and Casella (1999) for an overview of such methods). While geometric rates of convergence (geometric ergodicity) can be guaranteed in a wide variety of settings, performance is highly susceptible to the underlying proposal. Moreover, the effectiveness of Metropolis–Hastings methods diminishes in higher dimensions, as step sizes must be scaled inversely with dimension, making rapid exploration of the space unlikely (see, for example, Roberts and Rosenthal (2001)).

Many of these issues lie with the steadfast requirement of consistency: that the sample empirical distribution should asymptotically be the same as π. Ensuring this requirement can in turn incur a serious penalty on the mixing rate of the chain. However, when seeking a fixed (finite) number of samples, which is almost always the case in practice, consistency is not necessarily a decisive property. Therefore, it has recently become popular to consider rapidly converging Markov chains whose stationary distributions are only approximations to π, with a bias of adjustable size; see, for example, Dalalyan (2017a, b); Wibisono (2018); Cheng et al. (2018); Cheng and Bartlett (2018). While the resulting Monte Carlo estimator is no longer consistent, it will often have dramatically smaller variance. This is an example of a bias–variance trade-off, where a biased method can require significantly less computational effort to reach the same mean-squared error as an asymptotically unbiased Metropolis–Hastings chain (Korattikara et al., 2014).

The most studied of these methods is the unadjusted Langevin algorithm (ULA), seen in Roberts and Tweedie (1996), which is constructed by considering the overdamped Langevin diffusion, given by the stochastic differential equation (SDE)

 L_0 ∼ π_0,  dL_t = −(1/2)∇f(L_t)dt + dW_t, (1)

and employing the forward Euler integrator, also known as the Euler–Maruyama approximation (Kloeden and Platen, 2013), to obtain iterates of the form

 X_{k+1} = X_k − (h/2)∇f(X_k) + √h Z_k. (2)

Here, W_t denotes d-dimensional standard Brownian motion, π_0 is some arbitrary (possibly deterministic) initial distribution, each Z_k is an independent standard d-dimensional normal random vector, and h > 0 is the step size parameter representing the temporal mesh size of the Euler method. Since (2) is explicitly defined, it is often referred to as the explicit Euler scheme. The main interest in (1) lies in the well-known fact that, under certain mild conditions and regardless of π_0, for any t > 0 the distribution of L_t is absolutely continuous (so we may consider its density p_t on ℝ^d), and L_t is an ergodic Markov process with limiting distribution π, that is, p_t → π as t → ∞ for any initial state (Kolmogorov, 1937).

However, unlike the Langevin SDE, the distribution of samples obtained from ULA (2) will, generally speaking, not converge to π as k → ∞. More precisely, ULA is an asymptotically biased sampling algorithm, with bias proportional to the step size h (temporal mesh size). Despite this, in situations where MCMC fails to perform well, for example, high-dimensional problems, ULA can provide approximate samples from the target density with acceptable accuracy (Durmus and Moulines, 2016).
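To make iteration (2) concrete, the following is a minimal numpy sketch of ULA for a toy target; the target, function names, and parameter values here are our own illustrative choices, not part of the original presentation.

```python
import numpy as np

def ula(grad_f, x0, h, n_steps, rng):
    """Unadjusted Langevin algorithm (2): X_{k+1} = X_k - (h/2) grad f(X_k) + sqrt(h) Z_k."""
    x = np.array(x0, dtype=float)
    samples = np.empty((n_steps, x.size))
    for k in range(n_steps):
        x = x - 0.5 * h * grad_f(x) + np.sqrt(h) * rng.standard_normal(x.size)
        samples[k] = x
    return samples

# Toy target: standard Gaussian, f(x) = ||x||^2 / 2, so grad f(x) = x.
rng = np.random.default_rng(0)
chain = ula(lambda x: x, np.zeros(2), h=0.1, n_steps=50_000, rng=rng)
```

For this target one can compute the stationary variance of the chain exactly, 1/(1 − h/4), which exceeds the target variance of 1; this is the step-size-proportional bias discussed above.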

The theoretical properties of ULA, including geometric ergodicity (Hansen, 2003; Roberts and Tweedie, 1996) and performance in high dimensions (Durmus and Moulines, 2016), are well understood. Of particular relevance to us is the recent work of Dalalyan (2017a), Dalalyan (2017b), and Durmus and Moulines (2017) concerning the stability of ULA. Although it does not possess a single technical definition, stability of stochastic processes is often well understood conceptually; some common characterizations include non-evanescence and Harris recurrence (Meyn and Tweedie, 2012, p. 15). To establish stability, the aforementioned works develop theoretical guarantees in the form of error bounds on the 2-Wasserstein metric between iterates of ULA and the target distribution. Doing so gives conditions under which ULA is bounded in probability, which in turn implies non-evanescence (Meyn and Tweedie, 2012, Proposition 12.1.1) and Harris recurrence (Meyn and Tweedie, 2012, Theorem 9.2.2) of the corresponding Markov chain. The inexact case, where ∇f is approximated to within an absolute tolerance, is also considered in Dalalyan and Karagulyan (2017). Some alternative unadjusted explicit methods have also been considered; these are usually derived from other diffusions whose stationary distributions can also be prescribed (Cheng et al., 2017).

As a direct result of the explicit nature of the underlying discretization scheme, the main issue with ULA-type algorithms is that they are stable only up to a fixed step size, beyond which the chain is no longer ergodic. In fact, Roberts and Tweedie (1996) actively discourage the use of ULA for this reason, and show that ULA may be transient for large step sizes. Stability is an essential concept when designing and analyzing methods for the numerical integration of continuous-time differential equations (Ascher, 2008). In some cases, the step size must be taken extremely small to remain stable, which becomes a major hindrance to the performance of the method in practice. Drawing comparisons to the theory of ordinary differential equations (ODEs) by dropping the stochastic term, in these cases the Langevin diffusion is said to be stiff (Ascher and Petzold, 1998). Ill-conditioned problems, such as sampling from any multivariate normal distribution with a covariance matrix possessing a large condition number (Golub and Van Loan, 2012, §2.6.2), are likely to induce a stiff Langevin diffusion (Lambert, 1991, §6.2). The negative side-effects associated with ill-conditioning, as well as the restrictions on step size, are often only exacerbated in high-dimensional problems.

In this light, a natural alternative to using explicit schemes with a careful choice of step size is to consider implicit variants. From the established theory of numerical solutions of ODEs, it is well known that implicit integrators have larger regions of stability than explicit alternatives; that is, one can take larger steps without unboundedly magnifying the underlying discretization errors (Ascher, 2008). Motivated by this, we can instead consider the θ-method scheme (Ascher, 2008, p. 84), which, when applied to the Langevin dynamics (1), yields general iterations of the form

 X_{k+1} = X_k − (h/2)[θ∇f(X_{k+1}) + (1−θ)∇f(X_k)] + √h Z_k, (3)

for some θ ∈ [0, 1]. The special cases θ = 0, θ = 1, and θ = 1/2 correspond to the forward, backward, and trapezoidal integrators, respectively. Of course, for θ = 0, (3) reduces to the explicit Euler scheme (2). As choices of θ > 0 define the endpoint X_{k+1} in an implicit way, such integrators are often referred to as implicit.

To our knowledge, there have only been a handful of efforts to study the properties of sampling algorithms obtained from such implicit schemes. A universal analysis of sampling schemes based on general Langevin diffusions was conducted in Mattingly et al. (2002). There, it was shown that the implicit Euler scheme, and other numerical methods satisfying a certain minorization condition, are geometrically ergodic for sufficiently small step sizes under the assumption that f is 'essentially quadratic'. In a more focused analysis, Casella et al. (2011) investigated the ergodic properties of a few implicit schemes (including the θ-method scheme) for a restricted family of one-dimensional super-Gaussian target densities. They found that, in this setting, the θ-method results in a geometrically ergodic chain for any step size provided that θ > 1/2, and suggested the same might be true in higher dimensions. Under slightly weaker assumptions than strong convexity, Kopec (2014) conducted a weak backward error analysis providing error bounds on the expectation of the fully implicit Euler scheme (θ = 1) with respect to suitable test functions. More recently, Wibisono (2018) considered the case θ = 1/2 and provided a rate of convergence of the scheme towards its biased stationary distribution under the 2-Wasserstein metric, assuming strong convexity and small step sizes. Despite these efforts, it is still unclear how implicit schemes compare with explicit schemes more generally for large step sizes, and what effect θ has on the bias of the method.

The aim of this work is to study the θ-method sampling scheme (3) for all θ ∈ [0, 1], as it applies to the relevant case of strongly log-concave distributions (that is, where f is a strongly convex function). Such distributions arise frequently in Bayesian regression problems (Bishop and Tipping, 2003), for example, generalized linear models (GLMs) with a Gaussian prior (Chatfield et al., 2010).

##### Contributions.

To those ends, the contributions of this work are as follows:

1. We show that the transition density associated with (3) has a closed form. Then, using this, we establish conditions for geometric ergodicity in terms of θ, the step size h, Lipschitz continuity, tail behavior, and semi-convexity of f (Theorem 1). By doing so, we show stability of the θ-method scheme in multivariate settings for any step size when θ ∈ (1/2, 1] and f is strongly convex, proving the conjecture of Casella et al. (2011).

2. We provide non-asymptotic theoretical guarantees for the long-time behavior of (3), which extend those of Dalalyan and Karagulyan (2017, Theorem 1) to the general implicit case (Theorem 2).

3. As iterations of (3) involve solving a non-linear equation when θ > 0, we study the effect of inexact solutions of the underlying sub-problems. We propose practically computable termination criteria and quantify the effect of approximating each iterate on the convergence rate and long-term bias of the chain under these criteria.

4. We establish large step size asymptotics for (3) via a central limit theorem as h → ∞ (Theorem 3). As a consequence, we develop an effective default heuristic choice of step size.

5. Finally, we demonstrate the empirical performance of the implicit θ-method scheme in a series of numerical experiments; namely, sampling from high-dimensional Gaussian distributions, and from the posterior density of a Bayesian logistic regression problem involving a real data set.

Proofs of all results can be found in Appendix A.

##### Notation.

In the sequel, vectors and matrices are denoted by bold lowercase and Romanized bold uppercase letters, for example, x and A, respectively. We denote the identity matrix by I. Regular lower-case and upper-case letters, such as s and M, are used to denote scalar constants. Random vectors are denoted by italicized bold uppercase letters, such as X. For two symmetric matrices A and B, A ⪰ B indicates that A − B is symmetric positive semi-definite. For vectors, ∥·∥ denotes the Euclidean norm of appropriate dimension, and ∥·∥₂ denotes the L² norm acting on random vectors, that is, ∥X∥₂ = (E∥X∥²)^{1/2}. For matrices, ∥·∥ denotes the spectral norm.

## 2 Implicit Langevin Algorithm (ILA)

In this section, we establish conditions under which the sequence of θ-method iterates (3) forms a Markov chain that is geometrically ergodic. For this, we impose the following assumption on the smoothness of f, which ensures the existence of a unique solution to (1); see Ikeda and Watanabe (2014, Theorem 2.4–3.1).

###### Assumption 1

The function f ∈ C²(ℝ^d) (it is twice continuously differentiable), and ∇f is M-Lipschitz, that is,

 ∥∇f(x) − ∇f(y)∥ ≤ M∥x − y∥, for any x, y ∈ ℝ^d.

Under Assumption 1, Dalalyan (2017b) shows that iterations of ULA (2) are stable provided the step size h is taken sufficiently small relative to 1/M. However, this restriction is a fundamental disadvantage of ULA. If M is particularly large, as might be the case in ill-conditioned problems and in high dimensions where ULA is commonly applied, then the step size must be taken very small, which results in slow mixing of the chain and high autocorrelation of the samples. In sharp contrast, we now show that for appropriate choices of θ, (3) does not suffer from this restriction.

The process of obtaining samples by iterating (3) is outlined in Algorithm 1. For brevity, we henceforth refer to this procedure as the implicit Langevin algorithm (ILA). Note that (3) can be rewritten as

 (I + (hθ/2)∇f)(X_{k+1}) = X_k − (h(1−θ)/2)∇f(X_k) + √h Z_k,

where I denotes the identity mapping. Assuming that solutions to (3) exist and are unique, that is, I + (hθ/2)∇f is globally invertible, this implies

 X_{k+1} = (I + (hθ/2)∇f)⁻¹[X_k − (h(1−θ)/2)∇f(X_k) + √h Z_k]. (4)

Conditions under which the procedure (4) is guaranteed to be well-defined are discussed in §2.1. For the time being, it is assumed in Algorithm 1 that (3) can be solved exactly. The discussion of inexact solutions is relegated to §3.
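As a sketch of how one step of (4) might be computed in practice, the following applies Newton's method to the root-finding problem y + (hθ/2)∇f(y) = v; the choice of Newton as inner solver, the warm start, and all names are our own illustrative choices.

```python
import numpy as np

def implicit_step(grad_f, hess_f, x, z, h, theta, tol=1e-10, max_iter=50):
    """One theta-method step (4): solve y + (h*theta/2) grad f(y) = v by Newton's method.

    grad_f and hess_f are callables returning grad f and the Hessian of f;
    any root-finder for this strongly monotone map would serve equally well."""
    v = x - 0.5 * h * (1.0 - theta) * grad_f(x) + np.sqrt(h) * z
    y = v.copy()  # warm start at the explicit part of the step
    for _ in range(max_iter):
        g = y + 0.5 * h * theta * grad_f(y) - v          # root-finding residual
        if np.linalg.norm(g) <= tol:
            break
        J = np.eye(y.size) + 0.5 * h * theta * hess_f(y)  # nonsingular when (5) holds
        y = y - np.linalg.solve(J, g)
    return y
```

For a quadratic f, the Newton step is exact after one iteration, which provides a convenient sanity check against the closed-form solve.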

### 2.1 Theoretical analysis of Algorithm 1

In this section, we establish sufficient conditions for the geometric ergodicity of the sequence of iterates generated from Algorithm 1. To conduct such an analysis, we require the transition kernel density p(y|x) induced from (3). In general, this is only implicitly defined; however, assuming I + (hθ/2)∇f is globally invertible, p(y|x) is nevertheless available in closed form. Assuming that f ∈ C², this is true whenever h is chosen so that

 I + (hθ/2)∇²f(x) ≻ 0, for all x ∈ ℝ^d. (5)

Therefore, at the very least, for (3) to be well-defined as a sampling method, we require f to be semi-convex, that is, there exists some K ≥ 0 such that ∇²f(x) + KI is positive semi-definite for all x ∈ ℝ^d. For example, under Assumption 1, f is M-semi-convex and (5) holds if h < 2/(θM). This restriction on the step size can be removed entirely if f is assumed to be convex.

From (4), for fixed x, note that p(·|x) is the probability density function of the random variable

 Y = (I + (hθ/2)∇f)⁻¹(x − (h(1−θ)/2)∇f(x) + √h Z),

where Z ∼ N(0, I). As this is an invertible transformation of a standard Gaussian random vector, by the change of variables theorem (see, for example, Shao (2008, Proposition 1.8)), we have

 p(y|x) = |det(I + (hθ/2)∇²f(y))| φ(y + (hθ/2)∇f(y); x − (h(1−θ)/2)∇f(x), hI), (6)

where φ(·; μ, Σ) is the density of a multivariate Gaussian distribution with mean μ and covariance matrix Σ, and 'det' denotes the determinant of a matrix.

It can be seen from (6) that increasing θ (or h when θ > 0) alters the landscape of the transition density and the shape of its level-sets. To illustrate this, Figure 1 depicts the contour plots of the transition kernel (6) for an anisotropic example problem with differing θ and initial states for the same step size. It can be seen that the case θ = 0 (ULA) results in an isotropic proposal in all situations, whereas other choices of θ (implicit methods) yield proposal densities that can better adapt to the anisotropic target density.

With an explicit expression for the transition density, we can investigate the stability of the iterates given by Algorithm 1. The most convenient way of doing this is by demonstrating geometric ergodicity of the chain induced by the transition kernel (6). Recall that a Markov chain with n-fold transition kernel pⁿ(y|x) is said to be geometrically ergodic towards an invariant density ν if there exist constants C > 0 and ρ ∈ (0, 1) such that

 sup_{x∈ℝ^d} ∫_{ℝ^d} |pⁿ(y|x) − ν(y)| dy ≤ Cρⁿ, for all n = 1, 2, ….

Similarly, a diffusion process with transition kernel density p_t(y|x) is said to be exponentially ergodic towards an invariant density ν if there exist constants C, λ > 0 such that

 sup_{x∈ℝ^d} ∫_{ℝ^d} |p_t(y|x) − ν(y)| dy ≤ Ce^{−λt}, for all t > 0.

It was shown in Hansen (2003, Eqn. (12)) that the overdamped Langevin diffusion (1) is exponentially ergodic provided the following assumption on f holds:

###### Assumption 2

The constant m, defined by

 m := liminf_{∥x∥→∞} ⟨∇f(x), x⟩ / ∥x∥²,

is strictly positive.

Intuitively, Assumption 2 imposes super-Gaussian tails on the target distribution. Under Assumptions 1 and 2, for any y ∈ ℝ^d, there exists a constant c(y) ≥ 0, depending on y, such that (Lemma 1)

 ⟨∇f(x) − ∇f(y), x − y⟩ ≥ m∥x − y∥² − c(y), for every x ∈ ℝ^d. (7)

Assumption 2 is not new, having also appeared in Kopec (2014), and appears to be among the weakest assumptions one can make to effectively study these implicit schemes. Clearly, (7) is a significantly weaker condition than strong convexity:

###### Assumption 3

The function f is m-strongly convex, that is, there exists m > 0 such that

 ⟨∇f(x) − ∇f(y), x − y⟩ ≥ m∥x − y∥², for any x, y ∈ ℝ^d.

In fact, it is straightforward to show that the values of m in Assumptions 2 and 3 must coincide. Furthermore, we remark that under Assumptions 1 and 3, the spectrum of every Hessian matrix ∇²f(x) is contained in [m, M]. Strong convexity is quite a natural assumption in Bayesian regression problems, as it can be guaranteed for the class of GLMs with Gaussian priors (DasGupta, 2011).

Under Assumptions 1 and 2, we can now establish geometric ergodicity of the θ-method scheme under certain conditions on θ and h (Theorem 1).

###### Theorem 1

For f satisfying Assumptions 1 and 2 that is K-semi-convex, the iterates of the θ-method scheme with associated transition kernel (6) form a geometrically ergodic chain when h < 2/(θK), provided we also have either θ ∈ (1/2, 1], or both θ ∈ [0, 1/2] and h sufficiently small.

While Theorem 1 establishes the geometric ergodicity of the chain towards some stationary distribution, in general, that distribution need not be π. Nevertheless, under Assumptions 1 and 3, we have established that the θ-method discretization of the overdamped Langevin equation is stable for any step size, provided θ ∈ (1/2, 1]. For these choices of θ, this implies that ILA is less strict about step size tuning than ULA. As will be seen in §5, this will prove to have a profound effect on the performance of ILA relative to ULA on high-dimensional problems.

### 2.2 Asymptotic exactness for the normal distribution

Among all values for θ, we draw special attention to the choice θ = 1/2. The resulting integrator, also known as the trapezoidal scheme, is known to be second-order accurate when applied to ODEs; in fact, when the exact solution is a quadratic function of time, iterates of the trapezoidal scheme yield points of the exact solution (Süli and Mayers, 2003, §12.4). An important consequence of this is that the global error incurred in the trapezoidal scheme is O(h²) as h → 0. Unfortunately, as a consequence of the Itô calculus, this property does not hold for numerical solutions of stochastic differential equations. The construction of schemes with O(h²) global error generally requires careful treatment of the stochastic term; see, for example, Anderson and Mattingly (2009). However, the notion of second-order accuracy itself carries over in a rather remarkable way.

The case where f is a quadratic form corresponds to sampling from a (multivariate) normal distribution N(μ, Σ), where

 f(x) = (1/2)(x − μ)ᵀΣ⁻¹(x − μ). (8)

It is easy to see that, in this particular setting, (3) becomes explicitly solvable. Indeed, letting Q = Σ⁻¹, we see that ∇f(x) = Q(x − μ), and so

 X_{k+1} = (I + (hθ/2)Q)⁻¹[(I − (h(1−θ)/2)Q)(X_k − μ) + √h Z_k] + μ. (9)

Observe that if X_0 is chosen to be a fixed value, all of the iterates are normally distributed. As a consequence of Lévy continuity, the stationary distribution of ILA, if it exists, must also be normally distributed. In particular, due to (9), it must have mean m and covariance V satisfying

 m − μ = (I + (hθ/2)Q)⁻¹(I − (h(1−θ)/2)Q)(m − μ),
 V = (I + (hθ/2)Q)⁻²[(I − (h(1−θ)/2)Q)²V + hI].

Since no eigenvalue of (I + (hθ/2)Q)⁻¹(I − (h(1−θ)/2)Q) equals one, it must be the case that m = μ. Solving for V, the stationary distribution of ILA is found to be

 N(μ, Σ(I + (h/2)(θ − 1/2)Q)⁻¹).

Here we encounter the remarkable fact that when f is quadratic and θ = 1/2, regardless of the step size chosen, ILA is asymptotically unbiased! To our knowledge, this was first observed in Wibisono (2018); however, as a consequence of our analysis, we can now deduce that θ = 1/2 is the only choice of θ that yields this property. While asymptotic exactness is unlikely to hold for other sampling problems, it suggests that cases involving approximately quadratic f should see near-optimal performance when θ = 1/2.
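The stationary covariance Σ(I + (h/2)(θ − 1/2)Q)⁻¹ derived above can be checked numerically by iterating the covariance recursion induced by (9); the particular Σ and step size below are illustrative values of our own choosing.

```python
import numpy as np

def ila_stationary_cov(Sigma, h, theta, n_iter=2_000):
    """Fixed point of the covariance recursion induced by (9):
    V <- A V A^T + h B B^T, with B = (I + (h*theta/2) Q)^{-1}
    and A = B (I - (h(1-theta)/2) Q)."""
    d = Sigma.shape[0]
    Q = np.linalg.inv(Sigma)
    B = np.linalg.inv(np.eye(d) + 0.5 * h * theta * Q)
    A = B @ (np.eye(d) - 0.5 * h * (1.0 - theta) * Q)
    V = np.zeros((d, d))
    for _ in range(n_iter):
        V = A @ V @ A.T + h * B @ B.T
    return V

# Anisotropic 2-d Gaussian and a deliberately large step size (illustrative values).
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
h = 5.0
V_half = ila_stationary_cov(Sigma, h, theta=0.5)
print(np.allclose(V_half, Sigma))  # trapezoidal scheme: exact for any h
```

For θ ≠ 1/2 the same routine recovers the biased covariance Σ(I + (h/2)(θ − 1/2)Q)⁻¹ instead.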

## 3 Inexact Implicit Langevin Algorithm (i-ILA)

It is clear that the utility of ILA depends on the solvability of (3). Fortunately, this is made feasible by a useful reinterpretation of solutions to (3) as those of a corresponding optimization problem. Indeed, the inverse operator (I + (hθ/2)∇f)⁻¹ is quite commonly considered in convex optimization, as it is equivalent to the proximal operator of (hθ/2)f, where the proximal operator of f is defined by

 prox_f(v) = argmin_{x∈ℝ^d} { f(x) + (1/2)∥x − v∥² },  v ∈ ℝ^d.

This equivalence follows from that of an optimization problem and the root-finding problem for its critical values (Parikh and Boyd, 2014, Eqn. (3.4)). Therefore, (3) can be formulated as the following optimization problem (after rescaling by 2/h):

 X_{k+1} = argmin_{x∈ℝ^d} F(x; X_k, Z_k), (10a)
 where F(x; y, z) := θf(x) + (1/h)∥x − (y − (h(1−θ)/2)∇f(y) + √h z)∥². (10b)

This reinterpretation of (3) was also noted in Wibisono (2018), although only the case θ = 1/2 was considered. Iterations of the form (10) are often referred to as proximal-point methods in the optimization literature (Combettes and Pesquet, 2011; Parikh and Boyd, 2014). We remark that proximal operators were used in Pereyra (2016) in the construction of a proximal unadjusted Langevin algorithm (P-ULA). In fact, iterates of their P-ULA algorithm would correspond with (10) when θ = 1 and if the Gaussian term were to be moved outside the proximal operator. As one might expect, this discrepancy has a significant impact on the covariance of each proposal as h → ∞; it will be shown in Theorem 3 that the asymptotic covariance of these proposals is generally anisotropic.

The implementation of Algorithm 1 now hinges entirely on our ability to solve the subproblem (10). For the unadjusted Langevin algorithm, where θ = 0, this can be done trivially through a closed form solution. However, for θ > 0, we generally have to resort to an iterative optimization scheme to solve (10). Thus far, we have assumed that the optimization problem in (10) can be solved exactly. However, more often than not this is infeasible, and one must instead consider the effects of approximate solutions of (10) on the overall convergence of the chain. This results in a sampling variant which is henceforth referred to as i-ILA (for inexact ILA).

The most natural way of doing this is by measuring the error in the corresponding root-finding problem (3) via the norm of the gradient of the subproblem objective (10b), ∥∇F∥. This is ideal because not only can it be readily computed in practice, but the termination criterion of many iterative optimization algorithms is also based on this norm falling below a given tolerance; see, for example, Nocedal and Wright (2006). Furthermore, efficient algorithms for directly minimizing ∥∇F∥, as a surrogate function for the optimization of F, have recently been proposed, which enjoy linear, that is, geometric, convergence rates, even in the absence of smoothness or convexity of F (Roosta et al., 2018). In addition, for sampling in distributed computational environments, such as when large-scale data cannot be stored on a single machine, distributed variants of these surrogate optimization algorithms have also been recently considered (Crane and Roosta, 2019). These algorithms are particularly suitable as part of i-ILA since they are guaranteed to rapidly and monotonically decrease ∥∇F∥; recall that ∥∇F∥ need not be monotonically decreasing in optimization algorithms that minimize F directly.

With this in mind, we consider an inexact modification of Algorithm 1, shown in Algorithm 2, for approximate sampling from π.
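As a sketch of one inexact step, the following minimizes F(·; x, z) from (10b) with plain gradient descent, terminating on the gradient-norm criterion discussed above; the choice of inner solver, the step length, and all names are our own illustrative choices.

```python
import numpy as np

def i_ila_step(grad_f, M, x, z, h, theta, eps=1e-8, max_iter=100_000):
    """One inexact implicit step: minimize F(.; x, z) in (10b) by gradient
    descent, stopping once ||grad F|| <= eps (the practical criterion)."""
    v = x - 0.5 * h * (1.0 - theta) * grad_f(x) + np.sqrt(h) * z
    y = v.copy()
    lr = 1.0 / (theta * M + 2.0 / h)  # 1/L step; F has gradient-Lipschitz constant theta*M + 2/h
    for _ in range(max_iter):
        g = theta * grad_f(y) + (2.0 / h) * (y - v)  # grad of F(.; x, z)
        if np.linalg.norm(g) <= eps:
            break
        y = y - lr * g
    return y
```

Since F is (θm + 2/h)-strongly convex under Assumption 3, this inner loop converges geometrically, and the returned point is within O(ε) of the exact ILA iterate.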

### 3.1 Theoretical analysis of Algorithm 2

The increased stability offered by Algorithm 1 has been established in Theorem 1. However, while Theorem 1 guarantees rapid convergence towards some stationary distribution, the closeness of the θ-method iterates to the target distribution, and the effect of increasing θ on the bias of the method, have yet to be established. Furthermore, Algorithm 1 and its guarantees given by Theorem 1 require exact solutions of the root-finding problem (3), whereas Algorithm 2 allows such problems to be solved only inexactly. To address both of these problems, we devote this section to the development of theoretical guarantees for Algorithm 2, inspired by the techniques of Dalalyan (2017b). These guarantees come in the form of rate of convergence estimates under the 2-Wasserstein metric, defined between two probability measures ν and π by

 W₂(ν, π) = inf_{X∼ν, Y∼π} ∥X − Y∥₂,

where the infimum is taken over all couplings of ν and π, and is attained by some optimal coupling (Villani, 2008, Thm. 4.1). The 2-Wasserstein metric can be readily linked to other quantities of interest. For example, from the Kantorovich–Rubinstein formula (Villani, 2008, Eqn. (5.11)), for any M-Lipschitz function φ, we have that

 |ν(φ) − π(φ)| := |∫ φ d(ν − π)| ≤ M W₂(ν, π).
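When both measures are Gaussian, as in §2.2, W₂ admits a well-known closed form, which is convenient for checking bounds of the kind developed below numerically. The following is a sketch assuming that standard formula; the helper names are ours.

```python
import numpy as np

def _sqrtm_spd(A):
    """Symmetric PSD matrix square root via eigendecomposition."""
    w, U = np.linalg.eigh(A)
    return (U * np.sqrt(np.clip(w, 0.0, None))) @ U.T

def w2_gaussian(mu1, S1, mu2, S2):
    """Closed-form W2 between N(mu1, S1) and N(mu2, S2):
    W2^2 = ||mu1 - mu2||^2 + tr(S1 + S2 - 2 (S2^{1/2} S1 S2^{1/2})^{1/2})."""
    R = _sqrtm_spd(S2)
    cross = _sqrtm_spd(R @ S1 @ R)
    w2_sq = np.sum((mu1 - mu2) ** 2) + np.trace(S1 + S2 - 2.0 * cross)
    return np.sqrt(max(w2_sq, 0.0))
```

In one dimension this reduces to W₂(N(μ₁, σ₁²), N(μ₂, σ₂²)) = ((μ₁ − μ₂)² + (σ₁ − σ₂)²)^{1/2}.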

Our guarantees will require the same assumptions on f seen in Dalalyan (2017b), that is, Assumptions 1 and 3. Under these assumptions, the condition number of F in (10b) can be written as

 κ_h := (1 + θhM/2)/(1 + θhm/2). (11)

Recall that the condition number (11) encodes and summarizes the curvature (the degree of relative flatness and steepness) of the graph of F. In optimization, it is well known that a large condition number typically amounts to a more difficult problem to solve, and hence algorithms that can take such contorted curvature into account (Newton-type methods, for example) are more appropriate (Roosta-Khorasani and Mahoney, 2018; Xu et al., 2017). It is only natural to anticipate that challenges corresponding to problem ill-conditioning carry over to sampling procedures as well. Indeed, large ratios of M/m, which imply increasingly anisotropic level-sets of f, can hint at more difficult sampling problems. For example, this difficulty directly manifests itself in ill-conditioning of F, which in turn results in more challenging sub-problems. Furthermore, in such situations, taking a larger step size can only exacerbate the ill-conditioning of F. As a result, similar to the role played by second-order methods in optimization, one can naturally expect implicit methods to be more appropriate for ill-conditioned sampling problems.
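A brief numerical illustration of (11): as h grows, κ_h increases from roughly 1 towards M/m, so larger steps worsen the subproblem's conditioning, although κ_h always remains below the condition number M/m of f itself. The values of M and m below are our own illustrative choices.

```python
def kappa_h(h, theta, M, m):
    """Condition number (11) of the subproblem objective F in (10b)."""
    return (1.0 + 0.5 * theta * h * M) / (1.0 + 0.5 * theta * h * m)

M, m = 100.0, 1.0  # an ill-conditioned f with condition number M/m = 100
for h in (1e-3, 1.0, 1e3):
    print(h, kappa_h(h, 1.0, M, m))
```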

Under Assumptions 1 and 3, the discrepancy between the inexact variant of the θ-method given in Algorithm 2 and the target density π under the 2-Wasserstein metric is described in Theorem 2.

###### Theorem 2

Suppose f satisfies Assumptions 1 and 3. Let ε ≥ 0 and let ν_t denote the distribution of the iterate X_t obtained by Algorithm 2, for each t = 0, 1, …, starting from ν₀. Let κ_h be as in (11), and for θ ∈ (0, 1), let

 h* = [(θ − 1/2)(M + m) + √((θ − 1/2)²(M + m)² + 4θ(1 − θ)mM)] / (θ(1 − θ)mM). (12)

Furthermore,

(a) if θ = 1 or h ≤ h*, then let

 ρ = (1 − (h(1−θ)/2)m)/(1 + (hθ/2)m), and C = κ_h/(θm); (13)

(b) alternatively, if θ ∈ (0, 1) and h > h*, then let

 ρ = ((h(1−θ)/2)M − 1)/((hθ/2)M + 1), and C = ((1/2)κ_h²h)/(2 + (h/2)(2θ − 1)M). (14)

Then, for any t ≥ 0,

 W₂(ν_t, π) ≤ κ_h ρᵗ W₂(ν₀, π) + C(ε + min{2M√(hd)(2 + √(hM)), 4√(Md)}).

###### Remark 1

As θ → 1, the transition point satisfies h* → ∞. Moreover, at θ = 0 and ε = 0, κ_h = 1, and the bias term coincides with that in Dalalyan and Karagulyan (2017, Theorem 1). Theorem 2 is thus a generalization of Dalalyan and Karagulyan (2017, Theorem 1) to arbitrary θ and error ε.

As Theorem 1 did for Algorithm 1, Theorem 2 shows that for θ ∈ (1/2, 1], Algorithm 2 is stable for all h > 0. Theorem 2 does suggest that smaller values of θ will achieve faster convergence rates and smaller biases for small step sizes, although this does not appear to be the case in practice (for example, refer to §5). Observe also the manner in which the bias term depends on m and M through κ_h and C. This implies that increasing the condition number when m is bounded below (for example, with a spherical Gaussian prior in Bayesian regression) results in smaller bias and faster convergence. This is in sharp contrast to ULA, whose performance significantly degrades with increasing condition number in such settings.

Also, we would like to reiterate that, in stark contrast to what is observed in Roberts and Rosenthal (2001) for Metropolis–Hastings algorithms, the rate of convergence in Theorem 2 for Algorithm 2 does not depend on the dimension d in any form other than through the appearances of M and m. The dimension appears in the bias term simply due to the natural expansion of the Euclidean distance with dimension. In particular, following Durmus and Moulines (2016), as the dependence on dimension is at most polynomial, this lends credence to the claim that implicit Langevin methods are well-equipped to handle high-dimensional sampling problems.

## 4 Asymptotics for large step size

While Theorem 2 provides an essential description of the behavior of Algorithm 2, the bounds presented there are tightest for smaller step sizes, and are less effective when h is larger. Unfortunately, the most useful applications of ILA occur when h is large, and so the small step size regime will not be all that relevant. Enabled by the increased stability of ILA, we present a novel analysis of Algorithm 1 by establishing a central limit-type theorem regarding the asymptotic behavior of the iterates in the h → ∞ regime.

Before we begin with a formal analysis, we are able to obtain insight by considering the behavior of the subproblem creftype 10 as . For , we have

$$\frac{1}{h}\left\| x - x_t + \frac{h(1-\theta)}{2}\nabla f(x_t) + \sqrt{h}\, z_t \right\|^2 = (1-\theta)\, x \cdot \nabla f(x_t) + \mathcal{O}(h^{-1/2}) + C,$$

where $C$ does not depend on $x$, and so does not contribute to solving (10). As a result, the iterates of the method in the $h \to \infty$ regime will satisfy the relations $x_{t+1} = x_t^\theta$, where $x^\theta$ is defined by

$$\nabla f(x^\theta) = \left(1 - \frac{1}{\theta}\right)\nabla f(x), \qquad x \in \mathbb{R}^d. \tag{15}$$

Letting $x^*$ denote the unique mode, iterating (15) gives

$$\|\nabla f(x_t) - \nabla f(x^*)\| = \rho^t\, \|\nabla f(x_0) - \nabla f(x^*)\|,$$

where $\rho = |1 - 1/\theta|$. Under Assumption 3, we obtain

$$\frac{m\rho^t}{M}\|x_0 - x^*\| \;\le\; \|x_t - x^*\| \;\le\; \frac{M\rho^t}{m}\|x_0 - x^*\|.$$

Therefore, the behavior of the θ-method for large h is determined according to the three regimes depicted in Table 1.

The case θ < 1/2 is clearly undesirable from a practical standpoint. Moreover, the collapse towards the mode seen when θ is close to one suggests enormous potential bias for large step sizes. On the other hand, the case θ = 1/2 provides no damping effect whatsoever (a fact also supported by Theorem 2), making it susceptible to rare large proposals. Based on this preliminary analysis, a choice of θ = 1/2 + ε for some small ε > 0 appears to provide the safest, and potentially the most accurate, of our θ-method samplers. This aligns with the rule-of-thumb used for θ-method discretization of ODEs (Ascher, 2008, p. 85). To formally extend these characterizations to the implicit θ-method scheme (3), a central limit theorem as h → ∞ is obtained in Theorem 3 for a single step of the scheme about the deterministic map $x \mapsto x^\theta$.
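To make the three regimes concrete, the following sketch iterates the map $x \mapsto x^\theta$ for a standard Gaussian target, where $\nabla f(x) = x$ and the relation (15) reduces to multiplication by $1 - 1/\theta$. The Gaussian target is our own illustrative assumption here, not an example taken from the text:

```python
import numpy as np

def fixed_point_map(x, theta):
    # For the standard Gaussian target f(x) = ||x||^2 / 2 we have grad f(x) = x,
    # so grad f(x_theta) = (1 - 1/theta) grad f(x) reduces to a linear map with
    # contraction factor rho = |1 - 1/theta|.
    return (1.0 - 1.0 / theta) * x

x0 = np.array([10.0])
for theta in (0.45, 0.5, 0.75, 1.0):
    x = x0.copy()
    for _ in range(20):
        x = fixed_point_map(x, theta)
    print(f"theta = {theta}: |x_20| = {abs(x[0]):.3g}")
```

Running this shows divergence for θ < 1/2, no damping at θ = 1/2, geometric decay for θ between 1/2 and 1, and immediate collapse to the mode at θ = 1, matching the regimes in Table 1.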

###### Theorem 3

Given any θ > 0, consider iterations given by (10). Conditioned on $X_k$, we have

$$\sqrt{h}\,\big(X_{k+1} - X_k^\theta\big) \xrightarrow[h \to \infty]{\;\mathcal{D}\;} \mathcal{N}\!\left(0,\; \frac{4}{\theta^2}\,\nabla^2 f(X_k^\theta)^{-2}\right).$$

Theorem 3 implies that as h → ∞, the implicit θ-method scheme behaves similarly to a Markov chain with transitions

$$W_{k+1} = W_k^\theta + \frac{2}{\theta\sqrt{h}}\,\nabla^2 f(W_k^\theta)^{-1} Z_k,$$

whose dynamics mimic those of the map $x \mapsto x^\theta$, but with an additional normally distributed noise term at each step. Furthermore, the variance of this noise term increases as the implicit component of the scheme diminishes (that is, as θ decreases).

### 4.1 A heuristic choice for step size

A consequence of the proof of Theorem 3 is that the covariance $\Sigma_h(x)$ of the proposal density from the transition kernel behaves asymptotically as

$$\Sigma_h(x) \approx h\left(I + \frac{h\theta}{2}\nabla^2 f(x^\theta)\right)^{-2}, \qquad \text{as } h \to \infty.$$

Conversely, it is relatively straightforward to show that

$$\Sigma_h(x) \approx h\left(I + \frac{h\theta}{2}\nabla^2 f(x)\right)^{-2}, \qquad \text{as } h \to 0.$$

These two expressions coincide when $x = x^*$, where $x^*$ denotes the mode. At this point, one might expect a ‘good’ transition kernel to resemble the Laplace approximation of the distribution about $x^*$, which has covariance $\nabla^2 f(x^*)^{-1}$. This suggests a heuristic for choosing a good step size in practice: take $\hat{h}_\theta$ as a solution to the one-dimensional optimization problem

$$\hat{h}_\theta \coloneqq \operatorname*{argmin}_{h \ge 0} \left\| h\left(I + \frac{h\theta}{2}\nabla^2 f(x^*)\right)^{-2} - \nabla^2 f(x^*)^{-1} \right\|, \tag{16}$$

where the norm can be any matrix norm of choice. Solutions to (16) can be obtained using off-the-shelf methods in univariate optimization, such as golden section search (Cottle and Thapa, 2017, §9.5). We show in the next section that, for several examples, the step size obtained from (16) with the Frobenius norm tends to be an effective choice in practice, especially for θ < 1, where it reveals itself to be near optimal in all of our experiments. For this choice of norm, (16) can be replaced by the equivalent problem

$$\hat{h}_\theta = \operatorname*{argmin}_{h \ge 0} \sum_{k=1}^d \left[ h\left(1 + \frac{h\theta}{2}\lambda_k\right)^{-2} - \frac{1}{\lambda_k} \right]^2, \tag{17}$$

where $\lambda_1, \dots, \lambda_d$ are the eigenvalues of $\nabla^2 f(x^*)$. One drawback is that solving (16) or (17) requires either inversion of $\nabla^2 f(x^*)$ or knowledge of its spectrum, respectively, both of which may be prohibitively expensive in high dimensions. In many problems, however, it is reasonable to assume a certain distribution for the spectrum; for example, that the eigenvalues of $\nabla^2 f(x^*)$ decay exponentially:

$$\log \lambda_k \approx \left(1 - \frac{k-1}{d-1}\right)\log M + \frac{k-1}{d-1}\log m, \qquad k = 1, \dots, d, \tag{18}$$

where $m$ and $M$ take the place of the smallest and largest eigenvalues of $\nabla^2 f(x^*)$, respectively. Simplifying assumptions such as these can be justified in problems where the Hessian of $f$ is approximately low rank, in the sense that it has a small stable rank (Roosta-Khorasani and Ascher, 2015), and hence its spectrum decays quickly. Under these assumptions, solving (17) becomes more tractable; we shall make use of this for Figure 3 in §5.
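As a rough illustration, the heuristic (17) can be solved by golden section search over $h$, with eigenvalues drawn from the decay model (18). The function names, search interval, and unimodality assumption below are our own choices, not prescriptions from the text:

```python
import numpy as np

def decay_eigenvalues(m, M, d):
    # Log-linear spectral decay model (18): lambda_1 = M down to lambda_d = m.
    k = np.arange(1, d + 1)
    t = (k - 1) / (d - 1)
    return np.exp((1.0 - t) * np.log(M) + t * np.log(m))

def heuristic_step_size(lams, theta, h_max=1e6, tol=1e-10):
    # Golden-section search for the minimizer in (17); assumes the objective
    # is unimodal on [0, h_max], which held in the cases we tried.
    def obj(h):
        return np.sum((h * (1.0 + 0.5 * h * theta * lams) ** (-2) - 1.0 / lams) ** 2)
    inv_phi = (np.sqrt(5.0) - 1.0) / 2.0
    a, b = 0.0, h_max
    while b - a > tol * max(1.0, b):
        c, d_ = b - inv_phi * (b - a), a + inv_phi * (b - a)
        if obj(c) < obj(d_):
            b = d_
        else:
            a = c
    return 0.5 * (a + b)

lams = decay_eigenvalues(m=0.1, M=10.0, d=50)
print(heuristic_step_size(lams, theta=0.55))
```

For a single eigenvalue λ = 1 and θ = 1/2, the objective in (17) vanishes exactly at h = 4, which the search recovers.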

## 5 Numerical experiments

In this section, we evaluate the empirical performance of Algorithms 1 and 2 in high dimensions, as measured by a few discrepancy measures. Recall that the total variation distance between any two absolutely continuous distributions with densities $p$ and $q$ over $\mathbb{R}^d$ is given by

$$d_{\mathrm{TV}}(p, q) \coloneqq \frac{1}{2}\int_{\mathbb{R}^d} |p(x) - q(x)|\, dx.$$

Since the total variation metric is too difficult to estimate directly in higher dimensions, we follow the standard approach in the literature (see, for example, Durmus and Moulines (2016) and Maire et al. (2018)) and consider instead the mean marginal total variation (MMTV),

$$\mathrm{MMTV}(p, q) \coloneqq \frac{1}{2d}\sum_{i=1}^d \int_{\mathbb{R}} |p_i(x) - q_i(x)|\, dx,$$

which we estimate as follows. First, kernel smoothing is applied to samples for each marginal from an extended MCMC run, as well as samples obtained from a single run of each method. The total variation between these estimated univariate densities is then computed with high accuracy via Gauss-Kronrod quadrature (Kahaner et al., 1989).
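A minimal sketch of this MMTV estimator is given below. It substitutes a fixed-bandwidth Gaussian KDE (Silverman's rule) and a simple Riemann sum for the kernel smoothing and Gauss–Kronrod quadrature used above, so it is only a rough stand-in for the paper's procedure:

```python
import numpy as np

def gaussian_kde_1d(samples, grid):
    # Fixed-bandwidth Gaussian kernel density estimate, Silverman's rule of thumb.
    n = len(samples)
    bw = 1.06 * np.std(samples) * n ** (-0.2)
    z = (grid[:, None] - samples[None, :]) / bw
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (n * bw * np.sqrt(2.0 * np.pi))

def mmtv(X, Y, n_grid=512):
    # Mean marginal total variation between two d-dimensional sample sets:
    # average, over coordinates, of half the L1 distance between marginal KDEs.
    d = X.shape[1]
    total = 0.0
    for i in range(d):
        lo = min(X[:, i].min(), Y[:, i].min()) - 1.0
        hi = max(X[:, i].max(), Y[:, i].max()) + 1.0
        grid = np.linspace(lo, hi, n_grid)
        p = gaussian_kde_1d(X[:, i], grid)
        q = gaussian_kde_1d(Y[:, i], grid)
        total += 0.5 * np.abs(p - q).sum() * (grid[1] - grid[0])
    return total / d
```

Identical sample sets give an MMTV of zero, while well-separated distributions give a value close to one.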

As a weakness of MMTV is its inability to adequately compare covariances between coordinates across the two sample sets, we also compare with a second discrepancy measure: the maximum mean discrepancy (MMD) (Gretton et al., 2012; Muandet et al., 2017). Letting $\mathcal{H}$ denote a reproducing kernel Hilbert space over $\mathbb{R}^d$ with reproducing kernel $k$, MMD is defined as the integral probability metric

$$\mathrm{MMD}^2(p, q) \coloneqq \left[\sup_{\|h\|_{\mathcal{H}} \le 1} \int_{\mathbb{R}^d} h(x)\,[p(x) - q(x)]\, dx\right]^2 = \mathbb{E}_{p,p}\, k(X, \tilde{X}) - 2\,\mathbb{E}_{p,q}\, k(X, Y) + \mathbb{E}_{q,q}\, k(Y, \tilde{Y}),$$

where $X, \tilde{X}$ are independent random variables with distribution $p$, and $Y, \tilde{Y}$ are independent random variables with distribution $q$. These expectations can be estimated using samples from $p$ and $q$. In our experiments, we use the Gaussian kernel:

$$k(x, y) = \exp\left(-\frac{1}{2\sigma^2}\|x - y\|^2\right),$$

where the kernel bandwidth parameter $\sigma$ is chosen by the median heuristic, so that $2\sigma^2$ equals the median of the pairwise squared distances $\|X_i - X_j\|^2$, where $X_i$ denotes the $i$-th sample taken from $p$.
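The MMD estimate above can be sketched as follows. We use the biased V-statistic form of the estimator and apply the median heuristic to the pooled samples; the details may differ from the implementation used in the experiments:

```python
import numpy as np

def mmd2(X, Y, sigma=None):
    # Biased (V-statistic) estimate of squared MMD with the Gaussian kernel.
    # If sigma is None, set 2*sigma^2 to the median pairwise squared distance
    # over the pooled samples (the usual median heuristic).
    Z = np.vstack([X, Y])
    sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    if sigma is None:
        sigma = np.sqrt(0.5 * np.median(sq[np.triu_indices_from(sq, k=1)]))
    K = np.exp(-sq / (2.0 * sigma ** 2))
    n = len(X)
    Kxx, Kyy, Kxy = K[:n, :n], K[n:, n:], K[:n, n:]
    return Kxx.mean() - 2.0 * Kxy.mean() + Kyy.mean()
```

The V-statistic is exactly zero when the two sample sets coincide and strictly positive when they are well separated.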

### 5.1 High-dimensional Gaussian distributions

To highlight the effects of problem ill-conditioning, we once again consider sampling from a multivariate Gaussian distribution, as in (8), with explicitly computable iterates given by (9). It is easy to see that $f$ satisfies Assumptions 1 and 3. To show efficacy in higher dimensions, we consider a large dimension $d$. Furthermore, to test the effects of ill-conditioning, we focus on three choices of $\Sigma$ with increasing condition numbers. Each $\Sigma$ is generated using the method of Bendel and Mickey (1978) to uniformly sample a correlation matrix with eigenvalues given by (18) for the prescribed $m$ and $M$. For simplicity, we take the mean to be zero. MMTV and MMD discrepancies were computed between exact samples and samples generated by Algorithm 1 for several values of $\theta$ and a variety of step sizes (encompassing the step size heuristics in §4). Common random numbers were used, and no burn-in period was applied. The results are shown in Figure 2.

Due to the rapid explosion in magnitude of samples generated by ULA beyond its stability threshold, we only display discrepancies for ULA below this critical step size, which is highlighted as a solid black vertical line.
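In this Gaussian setting, each implicit update reduces to a linear solve. The following sketch applies the θ-method to a small two-dimensional Gaussian (our own toy instance, not the experimental configuration of this section) and checks the empirical covariance of the chain:

```python
import numpy as np

def theta_step(x, h, theta, Sig_inv, rng):
    # One theta-method step for the Gaussian target f(x) = x^T Sig_inv x / 2,
    # i.e. solve  (I + (h*theta/2) Sig_inv) x_new =
    #             (I - (h*(1-theta)/2) Sig_inv) x + sqrt(h) * z.
    d = len(x)
    z = rng.standard_normal(d)
    A = np.eye(d) + 0.5 * h * theta * Sig_inv
    b = (np.eye(d) - 0.5 * h * (1.0 - theta) * Sig_inv) @ x + np.sqrt(h) * z
    return np.linalg.solve(A, b)

rng = np.random.default_rng(0)
Sig_inv = np.diag([1.0, 0.1])   # target covariance diag(1, 10): mildly ill-conditioned
x, chain = np.zeros(2), []
for _ in range(20000):
    x = theta_step(x, h=5.0, theta=0.5, Sig_inv=Sig_inv, rng=rng)
    chain.append(x)
print(np.cov(np.array(chain).T).round(2))  # empirical covariance close to diag(1, 10)
```

Note that for θ = 1/2 and a Gaussian target, a direct calculation shows the stationary covariance of the chain matches the target covariance, consistent with the exactness observed in Figure 2.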

In light of the large step size asymptotics, the existence of an “optimal” step size for θ > 1/2, as evidenced in these plots, is perhaps not too surprising. However, it is surprising to see that, especially for large condition numbers, this optimum is much greater than the maximum allowed step size for ULA. Most notable here is the greatly improved performance of the implicit methods at this optimum over ULA at any allowable step size. Moreover, in all cases, the optimal performance of ILA for θ < 1 exceeds that of the purely implicit method (θ = 1). These two facts are not suggested by Theorem 2, implying that the large step size asymptotics should indeed play a significant role in the analysis of implicit methods moving forward.

In all cases, the step size heuristic for θ = 1 performs poorly, suggesting that the fully implicit case operates by a different mechanism that is currently unknown to us. For θ = 1/2, the heuristic step size is clearly optimal, as it yields exact samples; in fact, this is the only choice of θ and h which results in exact samples in this scenario. According to both estimated MMTV and MMD, the step size heuristic is an almost optimal choice of h for all other θ considered, even in high dimensions.

### 5.2 Logistic regression

We now consider sampling problems involving Bayesian posterior densities of generalized linear models (GLMs), which have log-concave likelihood functions, with Gaussian priors. For simplicity, and without loss of generality, we consider radially symmetric Gaussians. For a GLM with this choice of prior, the posterior potentials are of the form

$$f(x) = \sum_{i=1}^n \left(\Phi(a_i^T x) - b_i\, a_i^T x\right) + \frac{\lambda}{2}\|x\|^2,$$

where $(a_i, b_i)$, $i = 1, \dots, n$, are the covariate and response pairs, $\lambda > 0$, and the domain of each $b_i$ depends on the GLM. The cumulant generating function $\Phi$ determines the type of GLM; for example, in the case of logistic regression, $\Phi(t) = \log(1 + e^t)$. See McCullagh and Nelder (1989) for further details and applications. It is easy to see that

$$\nabla^2 f(x) = \sum_{i=1}^n a_i a_i^\top\, \Phi''(a_i^\top x) + \lambda I = A^\top D A + \lambda I,$$

where $A$ is the matrix whose $i$-th row is $a_i^\top$, $D$ is a diagonal matrix whose $i$-th diagonal element is $\Phi''(a_i^\top x)$, and $\lambda$ is the precision parameter of the prior. As a result, for Assumption 3, we have

$$\lambda \;\le\; m \;\le\; M \;\le\; \|A\|^2 \sup_{t \in \mathbb{R}} \Phi''(t) + \lambda. \tag{19}$$
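These bounds are straightforward to compute numerically. The sketch below forms the logistic Hessian $A^\top D A + \lambda I$ and the bounds in (19), using $\sup_t \Phi''(t) = 1/4$ for the logistic model; the helper names are our own:

```python
import numpy as np

def logistic_hessian(A, x, lam):
    # Hessian A^T D A + lam * I, with D_ii = Phi''(a_i^T x) = s_i (1 - s_i)
    # for the logistic cumulant Phi(t) = log(1 + e^t).
    s = 1.0 / (1.0 + np.exp(-A @ x))
    return A.T @ ((s * (1.0 - s))[:, None] * A) + lam * np.eye(A.shape[1])

def strong_convexity_bounds(A, lam):
    # Bounds (19): lam <= m <= M <= ||A||_2^2 * sup_t Phi''(t) + lam,
    # with sup_t Phi''(t) = 1/4 in the logistic case.
    return lam, np.linalg.norm(A, 2) ** 2 / 4.0 + lam
```

Every eigenvalue of the Hessian, at any point $x$, lies between the two returned bounds, which is what makes them usable as estimates of $m$ and $M$.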

For our example, we consider Bayesian logistic regression in this setting, yielding

$$f(x) \propto \sum_{i=1}^n \left(\log(1 + \exp(a_i^T x)) - b_i\, a_i^T x\right) + \frac{\lambda}{2}\|x\|^2, \tag{20}$$

and $\sup_{t \in \mathbb{R}} \Phi''(t) = 1/4$. We use the musk (version 1) dataset from the UCI repository (Dua and Graff, 2019). These choices yield a relatively ill-conditioned target distribution, whose Hessian about its mode possesses a large condition number. We estimate the values of $m$ and $M$ according to their lower and upper bounds in (19). For the target density given by (20), MMTV and MMD discrepancies were computed between samples generated using Algorithm 2 (for several values of $\theta$ and a variety of step sizes encompassing the step size heuristics in §4, under the assumption that the eigenvalues are distributed according to (18)) and a gold standard run comprised of 50,000 samples obtained from hand-tuned SMMALA (Girolami and Calderhead, 2011). Due to the large difference in computation time between ULA and ILA, for the sake of comparison, we also computed MMTV and MMD discrepancies for ULA using the same step sizes, now with a thinning factor of 50. This factor was chosen so that the computation time of ULA became roughly equivalent to that of the ILA methods. Once again, common random numbers were used, and no burn-in period was applied. The results are shown in Figure 3, and follow a similar pattern to those of the previous example. The step size heuristic performs admirably in this case, yielding samples with smaller discrepancies to the gold standard run than ULA at any reasonable step size, even when thinned to account for the difference in computation time.

## 6 Conclusions

In the context of sampling from an unnormalized probability distribution, we considered a general class of unadjusted sampling algorithms based on implicit discretization of the Langevin dynamics. Unlike traditional Metropolis-adjusted sampling algorithms, these unadjusted methods relax the requirement of consistency (that the sample empirical distribution should asymptotically coincide with the target distribution), and hence avoid incurring a serious penalty to the mixing rate of the chain. As a result, these variants generate rapidly converging Markov chains whose stationary distributions are only approximations of the target distribution, with a bias of adjustable size. When one seeks a fixed (finite) number of samples, which is almost always the case in practice, this unadjusted viewpoint can offer many advantages.

In this context, we focused on the class of discretization schemes generated by the θ-method for smooth and strongly log-concave densities, explicitly deriving the transition kernel of the chain and formulating the corresponding sub-problems as optimization problems. For smooth densities, the resulting implicit Langevin algorithms (ILA) were shown to be geometrically ergodic for θ ≥ 1/2, irrespective of the step size. We also considered inexact variants (i-ILA), where the optimization sub-problems are solved only approximately. For these, we established non-asymptotic convergence of the sample empirical distribution to the target as measured by the 2-Wasserstein metric, finding again that for θ > 1/2, the resulting scheme is unconditionally stable for all step sizes. Furthermore, the growth rate of the bias term, which is shown to depend on the problem's condition number, is greatly diminished for θ > 1/2. Together with our numerical experiments, this suggests that implicit methods are a more appropriate choice for ill-conditioned problems than explicit schemes. Furthermore, the case θ = 1/2 appears to perform best in practice, especially when paired with our default heuristic choice of step size. The underlying reason for this is likely related to its asymptotic exactness for the normal distribution. It was suggested in Wibisono (2018) that the asymptotic bias of the θ = 1/2 case could be second-order accurate, which would imply the increased performance we have observed. Unfortunately, proving this claim remains an open problem.

Although enticing, extensions of these results to non-convex cases may prove challenging due to the potential lack of unique solutions for the implicit scheme, and the relative difficulty of non-convex optimization in general. Nevertheless, one could find success in considering an $f$ that is only strongly convex outside of a compact region, as in Cheng et al. (2018). Furthermore, although we have not treated it explicitly, we believe that implicit methods should prove effective in big data problems, where it might be computationally prohibitive to evaluate $f$ or its gradient exactly. In this regard, one can use optimization algorithms that employ inexact oracle information; see Roosta-Khorasani and Mahoney (2018) for an example. The efficacy of this approach would be an interesting avenue for future research.

#### Acknowledgments

All authors have been supported by the Australian Centre of Excellence for Mathematical and Statistical Frontiers (ACEMS) under grant number CE140100049. Liam Hodgkinson acknowledges the support of an Australian Research Training Program (RTP) Scholarship. Fred Roosta was partially supported by the Australian Research Council through a Discovery Early Career Researcher Award (DE180100923). Part of this work was done while Fred Roosta was visiting the Simons Institute for the Theory of Computing.

## References

• Anderson and Mattingly [2009] David F. Anderson and Jonathan C. Mattingly. A weak trapezoidal method for a class of stochastic differential equations. arXiv preprint arXiv:0906.3475, 2009.
• Ascher [2008] Uri M. Ascher. Numerical methods for evolutionary differential equations. SIAM, 2008.
• Ascher and Petzold [1998] Uri M. Ascher and Linda Petzold. Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations. Other Titles in Applied Mathematics. Society for Industrial and Applied Mathematics (SIAM, 3600 Market Street, Floor 6, Philadelphia, PA 19104), 1998. ISBN 9781611971392.
• Bendel and Mickey [1978] Robert B. Bendel and M. Ray Mickey. Population correlation matrices for sampling experiments. Communications in Statistics-Simulation and Computation, 7(2):163–182, 1978.
• Bishop and Tipping [2003] Christopher M. Bishop and Michael E. Tipping. Bayesian regression and classification. Nato Science Series sub Series III Computer And Systems Sciences, 190:267–288, 2003.
• Boucheron et al. [2013] Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration inequalities: A nonasymptotic theory of independence. Oxford university press, 2013.
• Casella et al. [2011] Bruno Casella, Gareth Roberts, and Osnat Stramer. Stability of partially implicit Langevin schemes and their MCMC variants. Methodology and Computing in Applied Probability, 13(4):835–854, December 2011.
• Chatfield et al. [2010] Chris Chatfield, Jim Zidek, and Jim Lindsey. An introduction to generalized linear models. Chapman and Hall/CRC, 2010.
• Cheng and Bartlett [2018] Xiang Cheng and Peter Bartlett. Convergence of Langevin MCMC in KL-divergence. In Proceedings of Algorithmic Learning Theory, volume 83, pages 186–211, 2018.
• Cheng et al. [2017] Xiang Cheng, Niladri S. Chatterji, Peter L. Bartlett, and Michael I. Jordan. Underdamped Langevin MCMC: A non-asymptotic analysis. arXiv preprint arXiv:1707.03663, 2017.
• Cheng et al. [2018] Xiang Cheng, Niladri S. Chatterji, Yasin Abbasi-Yadkori, Peter L. Bartlett, and Michael I. Jordan. Sharp Convergence Rates for Langevin Dynamics in the Nonconvex Setting. arXiv preprint arXiv:1805.01648, 2018.
• Combettes and Pesquet [2011] Patrick L. Combettes and Jean-Christophe Pesquet. Proximal splitting methods in signal processing. In Fixed-point algorithms for inverse problems in science and engineering, pages 185–212. Springer, 2011.
• Cottle and Thapa [2017] Richard W. Cottle and Mukund N. Thapa. Linear and Nonlinear Optimization. International Series in Operations Research & Management Science. Springer New York, 2017. ISBN 9781493970537.
• Crane and Roosta [2019] Rixon Crane and Fred Roosta. DINGO: Distributed Newton-Type Method for Gradient-Norm Optimization. arXiv preprint arXiv:1901.05134, 2019.
• Dalalyan [2017a] Arnak S. Dalalyan. Further and stronger analogy between sampling and optimization: Langevin Monte Carlo and gradient descent. arXiv preprint arXiv:1704.04752, 2017a.
• Dalalyan [2017b] Arnak S. Dalalyan. Theoretical guarantees for approximate sampling from smooth and log-concave densities. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 79(3):651–676, 2017b.
• Dalalyan and Karagulyan [2017] Arnak S. Dalalyan and Avetik G. Karagulyan. User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient. arXiv preprint arXiv:1710.00095, 2017.
• DasGupta [2011] Anirban DasGupta. Probability for Statistics and Machine Learning: Fundamentals and Advanced Topics. Springer Texts in Statistics. Springer New York, 2011. ISBN 9781441996336.
• Dua and Graff [2019] Dheeru Dua and Casey Graff. UCI machine learning repository, 2019.
• Durmus and Moulines [2016] Alain Durmus and Eric Moulines. High-dimensional Bayesian inference via the Unadjusted Langevin Algorithm. arXiv:1605.01559, May 2016.
• Durmus and Moulines [2017] Alain Durmus and Eric Moulines. Nonasymptotic convergence analysis for the unadjusted Langevin algorithm. The Annals of Applied Probability, 27(3):1551–1587, 2017.
• Girolami and Calderhead [2011] Mark Girolami and Ben Calderhead. Riemann manifold Langevin and Hamiltonian Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73(2):123–214, 2011.
• Golub and Van Loan [2012] Gene H. Golub and Charles F. Van Loan. Matrix Computations, volume 3. JHU Press, 4 edition, 2012.
• Gretton et al. [2012] Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. Journal of Machine Learning Research, 13(Mar):723–773, 2012.
• Hansen [2003] Niels Richard Hansen. Geometric ergodicity of discrete-time approximations to multivariate diffusions. Bernoulli, 9(4):725–743, 2003.
• Hastings [1970] Wilfred K. Hastings. Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57(1):97–109, 1970. ISSN 00063444.
• Ikeda and Watanabe [2014] Nobuyuki Ikeda and Shinzo Watanabe. Stochastic differential equations and diffusion processes, volume 24. Elsevier, 2014.
• Kahaner et al. [1989] David Kahaner, Cleve B. Moler, Stephen Nash, and George E. Forsythe. Numerical Methods and Software. Prentice-Hall series in computational mathematics. Prentice Hall, 1989.
• Kloeden and Platen [2013] Peter E. Kloeden and Eckhard Platen. Numerical solution of stochastic differential equations, volume 23. Springer Science & Business Media, 2013.
• Kolmogorov [1937] Andrei N. Kolmogorov. Zur umkehrbarkeit der statistischen naturgesetze. Mathematische Annalen, 113(1):766–772, 1937.
• Kopec [2014] Marie Kopec. Weak backward error analysis for overdamped Langevin processes. IMA Journal of Numerical Analysis, 35(2):583–614, 2014.
• Korattikara et al. [2014] Anoop Korattikara, Yutian Chen, and Max Welling. Austerity in MCMC land: Cutting the Metropolis-Hastings budget. In International Conference on Machine Learning, pages 181–189, 2014.
• Lambert [1991] John Denholm Lambert. Numerical methods for ordinary differential systems: the initial value problem. John Wiley & Sons, Inc., 1991.
• Maire et al. [2018] Florian Maire, Nial Friel, and Pierre Alquier. Informed sub-sampling MCMC: approximate Bayesian inference for large datasets. Statistics and Computing, pages 1–34, Jun 2018.
• Mattingly et al. [2002] Jonathan C. Mattingly, Andrew M. Stuart, and Desmond J. Higham. Ergodicity for SDEs and approximations: locally Lipschitz vector fields and degenerate noise. Stochastic processes and their applications, 101(2):185–232, 2002.
• McCullagh and Nelder [1989] Peter McCullagh and John A. Nelder. Generalized linear models, volume 37. CRC press, 1989.
• Meyn and Tweedie [2012] Sean P. Meyn and Richard L. Tweedie. Markov chains and stochastic stability. Springer Science & Business Media, 2012.
• Muandet et al. [2017] Krikamol Muandet, Kenji Fukumizu, Bharath Sriperumbudur, and Bernhard Schölkopf. Kernel mean embedding of distributions: A review and beyond. Foundations and Trends® in Machine Learning, 10(1-2):1–141, 2017.
• Nocedal and Wright [2006] Jorge Nocedal and Stephen Wright. Numerical Optimization. Springer Science & Business Media, 2006.
• Parikh and Boyd [2014] Neal Parikh and Stephen Boyd. Proximal Algorithms. Foundations and Trends® in Optimization, 1(3):127–239, 2014.
• Pereyra [2016] Marcelo Pereyra. Proximal Markov Chain Monte Carlo algorithms. Statistics and Computing, 26(4):745–760, Jul 2016. ISSN 1573-1375.
• Robert and Casella [1999] Christian P. Robert and George Casella. Monte Carlo Statistical Methods. Springer texts in statistics. Springer, 1999. ISBN 9780387987071.
• Roberts and Rosenthal [2001] Gareth O. Roberts and Jeffrey S. Rosenthal. Optimal scaling for various Metropolis-Hastings algorithms. Statistical science, 16(4):351–367, 2001.
• Roberts and Tweedie [1996] Gareth O. Roberts and Richard L. Tweedie. Exponential convergence of Langevin distributions and their discrete approximations. Bernoulli, 2(4):341–363, 12 1996.
• Roosta et al. [2018] Fred Roosta, Yang Liu, Peng Xu, and Michael W. Mahoney. Newton-MR: Newton’s Method Without Smoothness or Convexity. arXiv preprint arXiv:1810.00303, 2018.
• Roosta-Khorasani and Ascher [2015] Farbod Roosta-Khorasani and Uri M. Ascher. Improved bounds on sample size for implicit matrix trace estimators. Foundations of Computational Mathematics, 15(5):1187–1212, 2015.
• Roosta-Khorasani and Mahoney [2018] Farbod Roosta-Khorasani and Michael W. Mahoney. Sub-sampled Newton methods. Mathematical Programming, Nov 2018.
• Shao [2008] Jun Shao. Mathematical Statistics. Springer Texts in Statistics. Springer New York, 2008. ISBN 9780387217185.
• Süli and Mayers [2003] Endre Süli and David F. Mayers. An introduction to numerical analysis. Cambridge University Press, 2003.
• Villani [2008] Cédric Villani. Optimal Transport: Old and New, volume 338. Springer Science & Business Media, 2008.
• Wibisono [2018] Andre Wibisono. Sampling as optimization in the space of measures: The Langevin dynamics as a composite optimization problem. arXiv preprint arXiv:1802.08089, 2018.
• Xu et al. [2017] Peng Xu, Farbod Roosta-Khorasani, and Michael W. Mahoney. Newton-Type Methods for Non-Convex Optimization Under Inexact Hessian Information. arXiv preprint arXiv:1708.07164, 2017.

## Appendix A Proofs

In this section, we gather detailed proofs of the main results of this work, along with some additional technical lemmas. In particular, Sections A.1, A.2 and A.3 give, respectively, the proofs of Theorems 1, 2 and 3.

### A.1 Proof of Theorem 1 (Geometric Ergodicity)

To establish geometric ergodicity, we prove the stronger Proposition 1 below. First, we connect Assumptions 1 and 2 to the lower bound (7).

###### Lemma 1

The condition (7) holds under Assumptions 1 and 2.

###### Proof

By Assumption 1, observe that for any $x, y \in \mathbb{R}^d$, we have

$$\begin{aligned} \frac{\langle \nabla f(x+y) - \nabla f(x) - \nabla f(y),\, x\rangle}{\|x\|^2} &= \frac{\langle \nabla f(x+y) - \nabla f(x),\, x\rangle}{\|x\|^2} - \frac{\langle \nabla f(y),\, x\rangle}{\|x\|^2} \\ &\le \frac{\|\nabla f(x+y) - \nabla f(x)\|\,\|x\|}{\|x\|^2} + \frac{\|\nabla f(y)\|\,\|x\|}{\|x\|^2} \\ &= \frac{M\|y\| + \|\nabla f(y)\|}{\|x\|}. \end{aligned}$$

Hence, we have

$$\begin{aligned} \liminf_{\|x\|\to\infty} \frac{\langle \nabla f(x) - \nabla f(y),\, x - y\rangle}{\|x - y\|^2} &= \liminf_{\|x\|\to\infty} \frac{\langle \nabla f(x),\, x\rangle}{\|x\|^2} + \liminf_{\|x\|\to\infty} \frac{\langle \nabla f(x+y) - \nabla f(x) - \nabla f(y),\, x\rangle}{\|x\|^2} \\ &= \liminf_{\|x\|\to\infty} \frac{\langle \nabla f(x),\, x\rangle}{\|x\|^2} > 0. \end{aligned}$$

The result is then implied by the definition of the limit infimum.

To state the result, we recall the definition of $V$-uniform ergodicity, as in Meyn and Tweedie [2012].

###### Definition 1

A $\pi$-ergodic Markov chain with Markov transition operator $P$ on $\mathbb{R}^d$ is $V$-uniformly ergodic for a measurable function $V$ if

$$\sup_{x \in \mathbb{R}^d}\; \sup_{|\phi| \le V}\; \frac{|P^n \phi(x) - \pi(\phi)|}{V(x)} \to 0, \qquad \text{as } n \to \infty.$$

By [Meyn and Tweedie, 2012, Theorem 16.0.1], any $V$-uniformly ergodic Markov chain is also geometrically ergodic.

###### Proposition 1

For any $\theta \in [0, 1]$, $s > 0$, and $f$ satisfying Assumptions 1 and 2, let $V_s$ denote the Lyapunov drift function

$$V_s(x) = \exp\left(s \left\| x - x^* + \frac{h\theta}{2}\nabla f(x) \right\|\right), \tag{21}$$

where $x^*$ is a critical point of $f$. Supposing that (5) holds, the θ-method scheme with transition kernel (6) is $V_s$-uniformly ergodic provided $\theta \ge 1/2$, or $\theta < 1/2$ and

$$h < \frac{4m}{M^2(1 - 2\theta)}.$$

###### Proof

It is immediately apparent from the positivity of (6), due to (5), that the iterates of the θ-method scheme are aperiodic and irreducible with respect to Lebesgue measure. Furthermore, it follows from [Meyn and Tweedie, 2012, Proposition 6.2.8] that all compact sets are small. Therefore, by [Meyn and Tweedie, 2012, Theorem 15.0.1] and [Meyn and Tweedie, 2012, Lemma 15.2.8], it suffices to show that

$$\limsup_{\|x\|\to\infty} \frac{P V_s(x)}{V_s(x)} = 0,$$

where $P$ is the Markov transition operator of the θ-method scheme. Indeed, by the definition of the limit supremum, for any given $\lambda \in (0, 1)$, there exists a $K > 0$ such that

$$\sup_{\|x\| \ge K} \frac{P V_s(x)}{V_s(x)} \le \lambda,$$

and so