On Markov Chain Gradient Descent

09/12/2018 ∙ by Tao Sun, et al.

Stochastic gradient methods are the workhorse algorithms of large-scale optimization problems in machine learning, signal processing, and other computational sciences and engineering. This paper studies Markov chain gradient descent, a variant of stochastic gradient descent where the random samples are taken on the trajectory of a Markov chain. Existing results for this method assume convex objectives and a reversible Markov chain and thus have their limitations. We establish new non-ergodic convergence under wider step sizes, for nonconvex problems, and for non-reversible finite-state Markov chains. Nonconvexity makes our method applicable to broader problem classes. Non-reversible finite-state Markov chains, on the other hand, can mix substantially faster. To obtain these results, we introduce a new technique that varies the mixing levels of the Markov chains. The reported numerical results validate our contributions.


1 Introduction

In this paper, we consider a stochastic minimization problem. Let $\Xi$ be a statistical sample space with probability distribution $\Pi$ (we omit the underlying $\sigma$-algebra). Let $X \subseteq \mathbb{R}^d$ be a closed convex set, which represents the parameter space, and let $F(\cdot\,;\xi)\colon X \to \mathbb{R}$ be a closed convex function associated with $\xi \in \Xi$. We aim to solve the following problem:

$\operatorname*{minimize}_{x \in X} \; f(x) := \mathbb{E}_{\xi \sim \Pi}\big[F(x;\xi)\big].$   (1.1)

A common method to minimize (1.1) is Stochastic Gradient Descent (SGD) [11]:

$x^{k+1} = \mathrm{Proj}_X\big(x^k - \gamma_k \hat{\nabla} F(x^k;\xi_k)\big), \quad \xi_k \overset{\text{i.i.d.}}{\sim} \Pi,$   (1.2)

where $\gamma_k$ is the stepsize and $\hat{\nabla} F$ denotes a subgradient of $F$ with respect to $x$.

However, for some problems and distributions, direct sampling from $\Pi$ is expensive or impossible, and $\Pi$ may not even be known. In these cases, it can be much cheaper to sample by following a Markov chain that has the desired distribution $\Pi$ as its equilibrium distribution.

To be concrete, imagine solving problem (1.1) with the discrete space $\Xi := \{x \in \{0,1\}^n : \langle a, x\rangle \le b\}$, where $a \in \mathbb{R}^n$ and $b \in \mathbb{R}$, and the uniform distribution $\Pi$ over $\Xi$. A straightforward way to obtain a uniform sample is to repeatedly draw a random point of $\{0,1\}^n$ until the constraint $\langle a, x\rangle \le b$ is satisfied. Even if the feasible set is small, this may take exponentially many (in $n$) draws to get a feasible sample. Instead, one can sample a trajectory of the Markov chain described in [4]; to obtain a sample that is $\epsilon$-close to the distribution $\Pi$, far fewer samples are needed [2]. This presents a significant saving in sampling cost.
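
The following minimal Python sketch illustrates this contrast on a toy feasible set. The constraint `is_feasible`, the bit-flip proposal, and all parameters are illustrative placeholders, not the specific chain of [4]: a symmetric single-bit-flip proposal that rejects infeasible moves yields a uniform stationary distribution over the feasible set, provided the induced chain is connected.

```python
import random

def is_feasible(x, budget=3):
    # Toy knapsack-type constraint: at most `budget` ones.
    return sum(x) <= budget

def rejection_sample(n, rng=random):
    # Draw uniformly from {0,1}^n until the constraint holds;
    # the expected number of draws blows up when the feasible set is small.
    while True:
        x = [rng.randint(0, 1) for _ in range(n)]
        if is_feasible(x):
            return x

def chain_sample(n, steps, rng=random):
    # Metropolis-style chain: propose a single bit flip and accept it only if
    # the result stays feasible. The proposal is symmetric, so the stationary
    # distribution is uniform over the feasible set (when the chain is connected).
    x = [0] * n  # a feasible starting state
    for _ in range(steps):
        i = rng.randrange(n)
        y = x.copy()
        y[i] ^= 1
        if is_feasible(y):
            x = y
    return x
```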

Markov chains also naturally arise in some applications. Common examples are systems that evolve according to Markov chains, for example, linear dynamic systems with random transitions or errors. Another example is a distributed system in which every node locally stores a subset of training samples; to train a model using these samples, we can let a token that holds all the model parameters traverse the nodes following a random walk, so the samples are accessed according to a Markov chain.

Suppose that the Markov chain has stationary distribution $\Pi$ and a finite mixing time $T$, which is how long a random trajectory needs to be until the distribution of its current state roughly matches $\Pi$. A larger $T$ means a closer match. Then, in order to run one iteration of (1.2), we can generate a trajectory of $T$ samples and only take the last sample. To run another iteration of (1.2), we repeat this process, i.e., sample a new trajectory and take its last sample.
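
A hedged sketch of this wasteful subroutine is given below; the interface (a row-stochastic NumPy matrix `P`, a starting state `j0`, and a NumPy random generator) is our own illustration, not code from the paper.

```python
import numpy as np

def sample_after_mixing(P, j0, T, rng):
    # Advance the chain for T steps from state j0 and return only the last state.
    # When T is at least the mixing time, its distribution is close to the
    # stationary one, but the T - 1 intermediate samples are thrown away.
    j = j0
    for _ in range(T):
        j = rng.choice(len(P), p=P[j])
    return j

# Example usage: rng = np.random.default_rng(0); sample_after_mixing(P, 0, 100, rng)
```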

Clearly, sampling a long trajectory just to use its last sample wastes a lot of samples, especially when $T$ is large. But this may seem necessary, because the early samples of a trajectory have large biases. After all, it can take a long time for the random trajectory to explore all of the space, and it will often double back and visit states that it previously visited. Furthermore, it is also difficult to choose an appropriate $T$. A small $T$ causes a large bias, which slows the SGD convergence and reduces its final accuracy. A large $T$, on the other hand, is wasteful, especially when $x^k$ is still far from convergence and some bias does not prevent (1.2) from making good progress. Therefore, $T$ should increase adaptively as $k$ increases, which makes the choice of $T$ even more difficult.

So, why waste samples, why worry about $T$, and why not just apply every sample immediately in stochastic gradient descent? This approach has appeared in [5, 6]; we call it the Markov Chain Gradient Descent (MCGD) algorithm for problem (1.1):

$x^{k+1} = \mathrm{Proj}_X\big(x^k - \gamma_k \hat{\nabla} F(x^k;\xi_k)\big),$   (1.3)

where $\xi_0, \xi_1, \dots$ are samples on a single trajectory of the Markov chain and $\hat{\nabla} F(x^k;\xi_k)$ is a subgradient.
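
A minimal Python sketch of the MCGD update for the finite-state case is given below. The interface (a subgradient oracle `grad_f`, a row-stochastic transition matrix `P`, a projection `proj`, and a stepsize schedule `gamma`) is our own illustration under these assumptions, not code from the paper.

```python
import numpy as np

def mcgd(grad_f, P, x0, num_iters, gamma, proj=lambda x: x, seed=0):
    """Markov chain gradient descent sketch.

    grad_f(j, x) -- a (sub)gradient of the component function indexed by state j at x
    P            -- M x M row-stochastic transition matrix (NumPy array)
    gamma(k)     -- stepsize schedule, e.g. lambda k: 1.0 / (k + 1) ** 0.75
    proj         -- projection onto the constraint set X (identity if X is the full space)
    """
    rng = np.random.default_rng(seed)
    x, j = np.asarray(x0, dtype=float), 0
    for k in range(num_iters):
        x = proj(x - gamma(k) * grad_f(j, x))  # use the current sample immediately
        j = rng.choice(len(P), p=P[j])         # advance the chain by a single step
    return x
```

In contrast to the `sample_after_mixing` baseline above, every sample produced by the chain is consumed by one gradient step.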

Let us examine some special cases. Suppose the distribution $\Pi$ is supported on a finite set of $M$ points, $\xi_1,\dots,\xi_M$. Then, by letting $f_i(x) := M\,\Pi(\xi_i)\,F(x;\xi_i)$, problem (1.1) reduces to the finite-sum problem:

$\operatorname*{minimize}_{x\in X}\ f(x) \equiv \frac{1}{M}\sum_{i=1}^{M} f_i(x).$   (1.4)

By this definition of $f_i$, each state $i$ has the uniform probability $1/M$. At each iteration of MCGD, we have

$x^{k+1} = \mathrm{Proj}_X\big(x^k - \gamma_k \hat{\nabla} f_{j_k}(x^k)\big),$   (1.5)

where $(j_k)_{k\ge 0}$ is a trajectory of a Markov chain on $\{1,2,\dots,M\}$ that has a uniform stationary distribution. Here, $(j_k)_{k\ge0}$ and $(x^k)_{k\ge0}$ are two different, but related, Markov chains. Starting from a deterministic and arbitrary initialization $x^0$, the iteration is illustrated by the following diagram:

(1.6)

In the diagram, given each $j_k$, the next state $j_{k+1}$ is statistically independent of $j_{k-1},\dots,j_0$; given $j_k$ and $x^k$, the next iterate $x^{k+1}$ is statistically independent of $j_{k-1},\dots,j_0$ and $x^{k-1},\dots,x^0$.

Another application of MCGD involves a network: consider a strongly connected graph $\mathcal{G} = (\mathcal{V},\mathcal{E})$ with the set of vertices $\mathcal{V}$ and set of edges $\mathcal{E}$. Each node $i \in \mathcal{V}$ possesses some data and can compute $\hat{\nabla} f_i(\cdot)$. To run MCGD, we employ a token that carries the variable $x$ and walks randomly over the network. When it reaches a node $i$, node $i$ reads $x$ from the token and computes $\hat{\nabla} f_i(x)$ to update $x$ according to (1.5). Then, the token walks away to a random neighbor of node $i$.
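
A hedged sketch of this token-based variant follows; the neighbor lists, the local gradient oracle, and the starting node are illustrative assumptions, and the projection in (1.5) is omitted for brevity.

```python
import numpy as np

def token_walk_mcgd(neighbors, local_grad, x0, num_iters, gamma, seed=0):
    # neighbors[i]     : list of neighbors of node i in the communication graph
    # local_grad(i, x) : gradient (or subgradient) of node i's local objective at x
    rng = np.random.default_rng(seed)
    x, node = np.asarray(x0, dtype=float), 0          # token starts at node 0
    for k in range(num_iters):
        x = x - gamma(k) * local_grad(node, x)        # the visited node updates the token
        node = rng.choice(neighbors[node])            # token moves to a random neighbor
    return x
```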

1.1 Numerical tests

We present two kinds of numerical results. The first shows that MCGD uses fewer samples to train both a convex model and a nonconvex model. The second demonstrates the advantage of the faster mixing of a non-reversible Markov chain. Our results on nonconvex objectives and non-reversible chains are new.

1. Comparison with SGD
Let us compare:

  1. MCGD (1.3), where each sample is taken from one trajectory of the Markov chain;

  2. SGD$_T$, for $T = 1, 2, 4, \dots$, where each sample is the $T$th sample of a fresh, independent trajectory. All trajectories are generated by starting from the same state.

To compute the same number of gradients, SGD$_T$ uses $T$ times as many samples as MCGD. We did not try to adapt $T$ as $k$ increases because there is no theoretical guidance for doing so.

In the first test, we recover a vector $u$ from an autoregressive process, which closely resembles the first experiment in [1]. Set the matrix $A$ to be a subdiagonal matrix with random entries on its subdiagonal. Randomly sample a vector $u$ with unit 2-norm. Our data are generated according to the following autoregressive process:

Clearly, this process forms a Markov chain. Let $\Pi$ denote the stationary distribution of this Markov chain. We recover $u$ as the solution to the following problem:

We consider both convex and nonconvex loss functions, which had not been treated together in the previous literature. The convex one is the logistic loss

$\ell(x;\xi,y) = \log\big(1+\exp(-y\,\langle \xi, x\rangle)\big),$

where $(\xi, y)$ denotes a feature-label pair generated by the process above. The nonconvex loss is taken from [7]. We choose a polynomially decaying stepsize $\gamma_k \propto 1/k^q$; this choice is consistent with our theory below.
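
For concreteness, here is a hedged sketch of a data stream of this type together with the logistic-loss gradient; the dimension, the noise scale, the subdiagonal entries, and the labeling rule are placeholders rather than the exact experimental setup.

```python
import numpy as np

def make_ar_stream(d, num_samples, seed=0):
    """Yield (feature, label) pairs from a toy autoregressive Markov process."""
    rng = np.random.default_rng(seed)
    A = np.diag(rng.uniform(0.5, 0.9, size=d - 1), k=-1)  # subdiagonal AR matrix
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                                 # ground-truth unit vector
    xi = np.zeros(d)
    for _ in range(num_samples):
        xi = A @ xi + 0.3 * rng.standard_normal(d)         # Markovian feature vector
        y = 1.0 if u @ xi >= 0 else -1.0                   # placeholder labeling rule
        yield xi, y

def logistic_loss_grad(x, xi, y):
    # Gradient of log(1 + exp(-y * <xi, x>)) with respect to x.
    z = -y * (xi @ x)
    return -y * xi / (1.0 + np.exp(-z))
```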

Figure 1: Comparisons of MCGD and SGD$_T$. The plotted quantity at iteration $k$ is evaluated at the running average of the iterates $x^1,\dots,x^k$.

Our results in Figure 1 are surprisingly positive for MCGD, even more so than we had expected. As we had expected, MCGD used significantly fewer total samples than SGD$_T$ for every $T$. But it is surprising that MCGD did not need even more gradient evaluations. The randomly generated data must have helped homogenize the samples over the different states, making it less important for a trajectory to converge. It is important to note that SGD1 and SGD2, as well as SGD4 in the nonconvex case, stagnate at noticeably lower accuracies because their $T$ values are too small for the chain to converge.

2. Comparison of reversible and non-reversible Markov chains
We also compare the convergence of MCGD when working with reversible and non-reversible Markov chains (the definition of reversibility is given in the next section). As mentioned in [14], transforming a reversible Markov chain into a non-reversible one can significantly accelerate the mixing process. This technique also helps to accelerate the convergence of MCGD.

In our experiment, we first construct an undirected connected graph with randomly generated edges. Let $A$ denote the adjacency matrix of the graph, that is,

Let $\Delta$ be the maximum number of outgoing edges of a node. The transition probability matrix $P$ of the reversible Markov chain, known as the Metropolis-Hastings chain, is then defined by

Obviously, $P$ is symmetric and its stationary distribution is uniform. The non-reversible Markov chain is constructed by adding cycles. The edges of these cycles are directed; let $B$ denote the adjacency matrix of these cycles. If $B_{i,j} = 1$, then $A_{i,j} = 1$, i.e., the cycle edges already exist in the graph. Let $w > 0$ be the weight of the flow along these cycles. Then we construct the transition probability matrix of the non-reversible Markov chain as follows,

See [14] for an explanation of why this change makes the chain mix faster.
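
The sketch below implements one standard version of this construction (a uniform-stationary Metropolis-Hastings chain plus an antisymmetric cycle flow); the exact weighting used in [14] and in our experiment may differ.

```python
import numpy as np

def metropolis_uniform_chain(Adj):
    # Reversible chain with uniform stationary distribution: move to each
    # neighbor with probability 1/Delta and stay put with the leftover mass.
    delta = Adj.sum(axis=1).max()
    P = Adj / delta
    np.fill_diagonal(P, 0.0)
    np.fill_diagonal(P, 1.0 - P.sum(axis=1))
    return P

def add_cycle_flow(P, B, w):
    # Non-reversible variant: add the antisymmetric flow w * (B - B^T) along
    # directed cycles whose edges already exist in the graph. Row sums are
    # unchanged, so the uniform stationary distribution is preserved, provided
    # w is small enough to keep every entry non-negative.
    Q = P + w * (B - B.T)
    assert (Q >= -1e-12).all(), "w is too large: negative transition probability"
    return Q
```

This resulting matrix `Q` can be passed directly to the `mcgd` sketch above, e.g., for the least squares experiment described next.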

In our experiment, we add 5 cycles of length 4, with edges existing in the original graph. We test MCGD on a least squares problem: we first select a ground-truth vector; then, for each node $i$, we generate a local data matrix and local measurements. The objective function is defined as,

The convergence results are depicted in Figure 2.


Figure 2: Comparison of reversible and non-reversible Markov chains. The second-largest eigenvalues of the reversible and non-reversible Markov chains are 0.75 and 0.66, respectively.

1.2 Known approaches and results

It is more difficult to analyze MCGD due to its biased samples. To see this, let $p_j^k$ be the probability of selecting $f_j$ in the $k$th iteration. SGD's uniform selection probability ($p_j^k \equiv 1/M$) yields an unbiased gradient estimate

$\mathbb{E}_{j_k}\big[\hat{\nabla} f_{j_k}(x^k)\,\big|\,x^k\big] = \frac{1}{M}\sum_{i=1}^{M}\hat{\nabla} f_i(x^k) \in \partial f(x^k)$   (1.7)

for some subgradient of $f$ at $x^k$. However, in MCGD, it is possible to have $p_j^k = 0$ for some $j$. Consider a random walk: the probability $p_j^{k+1}$ is determined by the current state $j_k$, and we have $p_j^{k+1} > 0$ only for $j \in \mathcal{N}(j_k)$ and $p_j^{k+1} = 0$ for $j \notin \mathcal{N}(j_k)$, where $\mathcal{N}(j_k)$ denotes the neighborhood of $j_k$. Therefore, we no longer have (1.7).

All analyses of MCGD must deal with this biased expectation. Papers [6, 5] investigate the conditional expectation of the sampled subgradient given an iterate from sufficiently many steps earlier; when the gap is sufficiently large, this conditional expectation is sufficiently close to an unbiased estimate (but still different). In [6, 5], the authors proved that, to achieve a target error, MCGD with an appropriately chosen stepsize returns a solution within a corresponding number of iterations; their error bound is given in the ergodic sense. The authors of [10] proved almost sure convergence under diminishing stepsizes. Although they did not compute any rates, we computed the iteration complexities implied by their stepsizes. In [1], the authors improved the stepsizes and showed ergodic convergence with a better iteration complexity. There is no non-ergodic result regarding the convergence of $f(x^k)$. It is worth mentioning that [10, 1] use time non-homogeneous Markov chains, where the transition probability can change over the iterations as long as there is still a finite mixing time. In [1], MCGD is generalized from gradient descent to mirror descent. In all these works, the Markov chain is required to be reversible, and all functions $f_i$, $i = 1,\dots,M$, are assumed to be convex. However, non-reversible chains can have substantially faster convergence and are thus more numerically efficient.

1.3 Our approaches and results

In this paper, we extend the analysis of MCGD to non-reversible finite-state Markov chains and to nonconvex functions. The former allows faster mixing, and the latter frequently appears in applications. Our convergence result is given in the non-ergodic sense, though the rate results are still given in the ergodic sense. It is important to mention that, in our analysis, the mixing time of the underlying Markov chain is not tied to a fixed mixing level but can vary across levels. This is essential because MCGD needs time to reduce its objective error from its current value to a lower one, and this time becomes longer when the current value is lower, since more accurate Markov chain convergence, and thus a longer mixing time, is required. When the $f_i$'s are all convex, we allow them to be non-differentiable and MCGD to use subgradients, provided that $X$ is bounded. When any of them is nonconvex, we assume $X$ is the full space and the $f_i$'s are differentiable with bounded gradients. The bounded-gradient assumption is due to a technical difficulty associated with nonconvexity.

Specifically, in the convex setting, we prove that $\mathbb{E} f(x^k)$ converges to $f^*$ (the minimum of $f$ over $X$) for both exact and inexact MCGD with diminishing stepsizes satisfying (3.1). The convergence rates of MCGD with exact and inexact subgradient computations are presented. The first analysis of nonconvex MCGD is also presented, with its convergence given in terms of the expected gradient norm. These results hold for non-reversible finite-state Markov chains and can be extended to time non-homogeneous Markov chains under extra assumptions [10, Assumptions 4 and 5] and [1, Assumption C], which essentially ensure finite mixing.

Our results for finite-state Markov chains are first presented in Sections 3 and 4. They are extended to continuous-state reversible Markov chains in Section 5.

These results are based on new techniques and approaches developed in this paper. To obtain the stronger results in general cases, we use varying mixing times rather than a fixed one. Several technical lemmas (Lemmas 2-5) are provided, among which Lemma 2 plays a core role in our analyses.

We list possible extensions of MCGD that are not discussed in this paper. The first is accelerated versions, including Nesterov's acceleration and variance-reduction schemes. The second is the design and optimization of Markov chains to improve the convergence of MCGD.

2 Preliminaries

2.1 Markov chain

We recall some definitions, properties, and existing results about Markov chains. Although we use finite-state, time-homogeneous Markov chains, the results can be extended to more general chains under extra assumptions similar to [10, Assumptions 4, 5] and [1, Assumption C].

Definition 1 (finite-state time-homogeneous Markov chain).

Let $P = [P_{i,j}]$ be an $M\times M$ matrix with real-valued elements. A stochastic process $X_0, X_1, \dots$ in a finite state space $\{1,2,\dots,M\}$ is called a time-homogeneous Markov chain with transition matrix $P$ if, for $k \ge 0$ and states $i, j, i_0, \dots, i_{k-1}$, we have

$\mathbb{P}(X_{k+1}=j \mid X_k=i, X_{k-1}=i_{k-1},\dots,X_0=i_0) = \mathbb{P}(X_{k+1}=j \mid X_k=i) = P_{i,j}.$   (2.1)

Let the probability distribution of $X_k$ be denoted by the non-negative row vector $\pi^k = (\pi^k_1,\dots,\pi^k_M)$, that is, $\pi^k_j = \mathbb{P}(X_k = j)$; it satisfies $\sum_{j=1}^{M}\pi^k_j = 1$. When the Markov chain is time-homogeneous, we have $\pi^k = \pi^{k-1}P$ and

$\pi^k = \pi^{k-1}P = \cdots = \pi^0 P^k,$   (2.2)

for $k \ge 0$, where $P^k$ denotes the $k$th power of $P$. A Markov chain is irreducible if, for any $i, j \in \{1,\dots,M\}$, there exists $k$ such that $(P^k)_{i,j} > 0$. State $i$ is said to have period $d$ if $(P^k)_{i,i} = 0$ whenever $k$ is not a multiple of $d$ and $d$ is the greatest integer with this property. If $d = 1$, then we say state $i$ is aperiodic. If every state is aperiodic, the Markov chain is said to be aperiodic.

Any time-homogeneous, irreducible, and aperiodic Markov chain has a stationary distribution $\pi^* = \lim_k \pi^k$ with $\sum_{i=1}^{M}\pi^*_i = 1$ and $\min_i \pi^*_i > 0$, and $\pi^* = \pi^* P$. It also holds that

$\lim_{k\to\infty} P^k = \mathbf{1}\,\pi^*,$   (2.3)

where $\mathbf{1}$ denotes the all-ones column vector. The largest eigenvalue of $P$ is 1, and the corresponding left eigenvector is $\pi^*$.

Assumption 1.

The Markov chain $(j_k)_{k\ge0}$ is time-homogeneous, irreducible, and aperiodic. It has transition matrix $P$ and stationary distribution $\pi^*$.

2.2 Mixing time

Mixing time measures how long a Markov chain must evolve until the distribution of its current state is very close to its stationary distribution. The literature contains thorough investigations of various kinds of mixing times, mostly for reversible Markov chains (that is, chains satisfying $\pi^*_i P_{i,j} = \pi^*_j P_{j,i}$). Mixing times of non-reversible Markov chains are discussed in [3]. In this part, we consider a new type of mixing time for non-reversible Markov chains. The proofs are based on basic matrix analysis. Our mixing time gives a direct relationship between $k$ and the deviation of the distribution of the current state from the stationary distribution.

Before stating the lemma, we review some basic notions from linear algebra. Let $\mathbb{C}^n$ be the $n$-dimensional complex vector space. The modulus of a complex number $a \in \mathbb{C}$ is denoted $|a|$. For a vector $x \in \mathbb{C}^n$, its $\ell_2$ and $\ell_\infty$ norms are $\|x\|_2 = \sqrt{\sum_i |x_i|^2}$ and $\|x\|_\infty = \max_i |x_i|$. For a matrix $A \in \mathbb{C}^{n\times n}$, its $\ell_\infty$-induced norm is $\|A\|_\infty = \max_i \sum_j |A_{i,j}|$ and its Frobenius norm is $\|A\|_F = \sqrt{\sum_{i,j}|A_{i,j}|^2}$.

We know that $P^k \to \mathbf{1}\pi^*$ as $k \to \infty$. The following lemma presents a deviation bound for finite $k$.

Lemma 1.

Let Assumption 1 hold, let $\lambda_i(P)$ be the $i$th largest eigenvalue of $P$ in modulus, and let $\lambda(P) \in [0,1)$ be the geometric rate determined by $\lambda_2(P)$ as specified in the Supplementary Material.

Then, we can bound the largest entry-wise absolute value of the deviation matrix $\delta^k := \mathbf{1}\pi^* - P^k$ as

$\max_{i,j} \big|\delta^k_{i,j}\big| \le C_P\,\lambda(P)^k$   (2.4)

for $k \ge K_P$, where $C_P$ is a constant that also depends on the Jordan canonical form of $P$ and $K_P$ is a constant that depends on $\lambda(P)$ and $\lambda_2(P)$. Their formulas are given in (6.19) and (6.20) in the Supplementary Material.
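
The following sketch numerically checks this kind of geometric decay for a given transition matrix. It uses the second-largest eigenvalue modulus as a proxy for $\lambda(P)$; the exact constants $C_P$ and $K_P$ of Lemma 1 are given in the Supplementary Material, and the function names here are ours.

```python
import numpy as np

def stationary_distribution(P):
    # Left eigenvector of P associated with eigenvalue 1, normalized to sum to one.
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return pi / pi.sum()

def deviation_curve(P, K):
    # max_{i,j} |(P^k)_{ij} - pi_j| for k = 1..K, together with the modulus of
    # the second-largest eigenvalue, against which the decay can be compared.
    pi = stationary_distribution(P)
    lam2 = sorted(np.abs(np.linalg.eigvals(P)), reverse=True)[1]
    Pk, devs = np.eye(P.shape[0]), []
    for _ in range(K):
        Pk = Pk @ P
        devs.append(np.max(np.abs(Pk - pi[None, :])))
    return np.array(devs), lam2
```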

Remark 1.

If $P$ is symmetric, then the $\lambda_i(P)$'s are all real and nonnegative, and the constants in Lemma 1 simplify. Furthermore, (6.16) can be improved by directly using $\lambda_2(P)^k$ on the right-hand side.

3 Convergence analysis for convex minimization

This part considers the convergence of MCGD in the convex case, i.e., the functions $f_i$ are all convex. We investigate the convergence of scheme (1.5). We prove non-ergodic convergence of the expected objective value sequence under diminishing, non-summable stepsizes, where the stepsizes are only required to be "almost" square summable. Therefore, the convergence requirements are almost identical to those of SGD. This section uses the following assumption.

Assumption 2.

The set $X$ is assumed to be convex and compact.

Now, we present the convergence results for MCGD in the convex (but not necessarily differentiable) case. Let $f^*$ be the minimum value of $f$ over $X$.

Theorem 1.

Let Assumptions 1 and 2 hold and let $(x^k)_{k\ge0}$ be generated by scheme (1.5). Assume that $f_i$, $i = 1,\dots,M$, are convex functions, and that the stepsizes satisfy

(3.1)

Then, we have

(3.2)

Define

We have:

(3.3)

where . Therefore, if we select the stepsize as , we get the rate .

Furthermore, consider the inexact version of MCGD:

(3.4)

where the noise sequence is arbitrary but obeys

(3.5)

Then, for iteration (3.4), results (3.2) and (3.3) still hold; furthermore, if with and as , the rate also holds.

The stepsize requirement (3.1) is nearly identical to that of SGD and subgradient algorithms, and the stepsize setting used in the theorem above also works for SGD and subgradient algorithms. The resulting convergence rate of MCGD is the same as that of SGD and subgradient algorithms.

4 Convergence analysis for nonconvex minimization

This section considers the convergence of MCGD when one or more of the $f_i$'s is nonconvex. In this case, we assume that $f_i$, $i=1,\dots,M$, are differentiable and that each $\nabla f_i$ is Lipschitz with constant $L$ (this is for the convenience of the presentation in the proofs; if each $\nabla f_i$ has its own Lipschitz constant, it is possible to improve our results slightly, but we simply use a single constant $L$). We also set $X$ as the full space. We study the following scheme

$x^{k+1} = x^k - \gamma_k \nabla f_{j_k}(x^k).$   (4.1)

We prove non-ergodic convergence of the expected gradient norm under diminishing, non-summable stepsizes. The stepsize requirements in this section are slightly stronger than those in the convex case, with an extra factor in the summability condition. In this part, we use the following assumption.

Assumption 3.

The gradients of the $f_i$'s are assumed to be bounded, i.e., there exists $D > 0$ such that

$\max_i \sup_{x} \|\nabla f_i(x)\| \le D.$   (4.2)

We use this new assumption because $X$ is now the full space, so we have to directly bound the size of $\nabla f_{j_k}(x^k)$. In the nonconvex case, we cannot obtain convergence of the objective values, so we bound the gradients instead. Now, we are prepared to present our convergence results for nonconvex MCGD.

Theorem 2.

Let Assumptions 1 and 3 hold and let $(x^k)_{k\ge0}$ be generated by scheme (4.1). Also assume that each $f_i$ is differentiable, that $\nabla f_i$ is $L$-Lipschitz, and that the stepsizes satisfy

(4.3)

Then, we have

(4.4)

and

(4.5)

where is given in Lemma 1. If we select the stepsize as , , then we get the rate .

Furthermore, let be a sequence of noise and consider the inexact nonconvex MCGD iteration:

(4.6)

If the noise sequence obeys

(4.7)

then the convergence results (4.4) and (4.5) still hold for inexact nonconvex MCGD. In addition, if we set as and the noise satisfy for , then (4.4) still holds and .

The proof of Theorem 2 is different from the previous one. In particular, due to nonconvexity, we cannot expect convergence of the iterates to a minimizer. Instead, we use the Lipschitz continuity of the $\nabla f_i$'s to derive a "descent" inequality, whose error term contains a polynomial composition of the constants $D$ and $L$.

Compared with MCGD in the convex case, the stepsize requirements of nonconvex MCGD become slightly stronger: the square-summability part of the condition carries an extra factor. Nevertheless, we can still use stepsizes of the form $\gamma_k \propto 1/k^q$ for suitable $q$.

5 Convergence analysis for continuous state space

When the state space is a continuum, there are infinitely many possible states. In this case, we consider an infinite-state Markov chain that is time-homogeneous and reversible. By the results in [8, Theorem 4.9], the mixing time of this kind of Markov chain still exhibits a geometric decrease like (2.4). Since Lemma 1 is based on a linear-algebra argument, it no longer applies to the continuous case. Nevertheless, the previous results still hold, with nearly unchanged proofs, under the following assumption:

Assumption 4.

For any , , , , and .

We consider the general scheme

(5.1)

where $\xi_k$ are samples on a Markov chain trajectory. If the noise term is zero, the scheme reduces to (1.3).

Corollary 1.

Assume $F(\cdot\,;\xi)$ is convex for each $\xi$. Let the stepsizes satisfy (3.1), let $(x^k)_{k\ge0}$ be generated by Algorithm (5.1), and let the noise satisfy (3.5). Let $f^*$ denote the minimum value of $f$ over $X$. If Assumption 4 holds and the Markov chain is time-homogeneous, irreducible, aperiodic, and reversible, then we have

where $\lambda \in (0,1)$ is the geometric rate of the mixing time of the Markov chain (which corresponds to $\lambda(P)$ in the finite-state case).

Next, we present our result for a possibly nonconvex objective function under the following assumption.

Assumption 5.

For any , is differentiable, and . In addition, , is the full space, and .

Since $F(\cdot\,;\xi)$ is differentiable and $X$ is the full space, the iteration reduces to

(5.2)
Corollary 2.

Let the stepsizes satisfy (4.3), let $(x^k)_{k\ge0}$ be generated by Algorithm (5.2), let the noise obey (4.7), and let Assumption 5 hold. Assume the Markov chain is time-homogeneous, irreducible, aperiodic, and reversible. Then, we have

(5.3)

where $\lambda \in (0,1)$ is the geometric rate of the mixing time of the Markov chain.

6 Conclusion

In this paper, we have analyzed the stochastic gradient descent method in which the samples are taken along the trajectory of a Markov chain. One of our main contributions is a non-ergodic convergence analysis for convex MCGD, which uses a novel line of analysis. The result is then extended to inexact gradients. This analysis also lets us establish convergence for non-reversible finite-state Markov chains and for nonconvex minimization problems. Our results are useful in cases where it is impossible or expensive to sample directly from a distribution, or where the distribution is not even known, but sampling via a Markov chain is possible. Our results also apply to decentralized learning over a network, where a random walker traverses the network to minimize an objective defined over the samples held at the nodes in a distributed fashion.

References

  • [1] John C Duchi, Alekh Agarwal, Mikael Johansson, and Michael I Jordan. Ergodic mirror descent. SIAM Journal on Optimization, 22(4):1549–1578, 2012.
  • [2] Martin Dyer, Alan Frieze, Ravi Kannan, Ajai Kapoor, Ljubomir Perkovic, and Umesh Vazirani. A mildly exponential time algorithm for approximating the number of solutions to a multidimensional knapsack problem. Combinatorics, Probability and Computing, 2(3):271–284, 1993.
  • [3] James Allen Fill. Eigenvalue bounds on convergence to stationarity for nonreversible Markov chains, with an application to the exclusion process. The Annals of Applied Probability, pages 62–87, 1991.
  • [4] Mark Jerrum and Alistair Sinclair. The Markov chain Monte Carlo method: an approach to approximate counting and integration. In Approximation Algorithms for NP-hard Problems, pages 482–520, 1996.
  • [5] Björn Johansson, Maben Rabi, and Mikael Johansson. A simple peer-to-peer algorithm for distributed optimization in sensor networks. In Proceedings of the 46th IEEE Conference on Decision and Control, pages 4705–4710. IEEE, 2007.
  • [6] Björn Johansson, Maben Rabi, and Mikael Johansson. A randomized incremental subgradient method for distributed optimization in networked systems. SIAM Journal on Optimization, 20(3):1157–1170, 2009.
  • [7] Song Mei, Yu Bai, Andrea Montanari, et al. The landscape of empirical risk for nonconvex losses. The Annals of Statistics, 46(6A):2747–2774, 2018.
  • [8] Ravi Montenegro, Prasad Tetali, et al. Mathematical aspects of mixing times in Markov chains. Foundations and Trends® in Theoretical Computer Science, 1(3):237–354, 2006.
  • [9] Rufus Oldenburger et al. Infinite powers of matrices and characteristic roots. Duke Mathematical Journal, 6(2):357–361, 1940.
  • [10] S Sundhar Ram, A Nedić, and Venugopal V Veeravalli. Incremental stochastic subgradient algorithms for convex optimization. SIAM Journal on Optimization, 20(2):691–717, 2009.
  • [11] Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pages 400–407, 1951.
  • [12] Herbert Robbins and David Siegmund. A convergence theorem for non negative almost supermartingales and some applications. In Optimizing Methods in Statistics, pages 233–257. Elsevier, 1971.
  • [13] Ralph Tyrrell Rockafellar. Convex Analysis. Princeton University Press, 2015.
  • [14] Konstantin S Turitsyn, Michael Chertkov, and Marija Vucelja. Irreversible Monte Carlo algorithms for efficient sampling. Physica D: Nonlinear Phenomena, 240(4-5):410–414, 2011.
  • [15] Jinshan Zeng and Wotao Yin. On nonconvex decentralized gradient descent. IEEE Transactions on Signal Processing, 66(11):2834–2848, 2018.

Supplementary material for On Markov Chain Gradient Descent

6.1 Technical lemmas

We present technical lemmas used in this paper.

Lemma 2.

Consider two nonnegative sequences $(\alpha_k)_{k\ge0}$ and $(h_k)_{k\ge0}$ that satisfy

  1. $\sum_k \alpha_k = +\infty$ and $\lim_k \alpha_k = 0$, and

  2. $\sum_k \alpha_k h_k < +\infty$, and

  3. $|h_{k+1} - h_k| \le c\,\alpha_k$ for some $c > 0$ and all $k$.

Then, we have $\lim_k h_k = 0$.

We call a sequence $(h_k)_{k\ge0}$ satisfying parts 1 and 2 a weakly summable sequence, since it is not necessarily summable but becomes so after being multiplied by the non-summable yet diminishing sequence $(\alpha_k)_{k\ge0}$. Without part 3, it is generally impossible to claim that $h_k$ converges to 0. This lemma generalizes [15, Lemma 12].

Proof of Lemma 2

From parts 1 and 2, we have $\liminf_k h_k = 0$. Therefore, it suffices to show $\limsup_k h_k = 0$.

Assume, to the contrary, that $\limsup_k h_k > 0$. Then, there exist infinitely many disjoint segments of indices on which $h_k$ stays bounded below by a positive constant and

(6.1)

It is possible that some of these segments are trivial, in which case the corresponding terms in (6.1) vanish; this does not affect the following argument. By the assumption, we further have the corresponding bound for infinitely many sufficiently large indices. This leads to the following contradiction

(6.2)
(6.3)

The following lemma is used to derive the boundedness of a specific sequence; it is used in the analysis of inexact MCGD.

Lemma 3.

Consider four nonnegative sequences , and that satisfy

(6.4)

Then, we have and .

Proof of Lemma 3

The convergence part of Lemma 3 was given in [12, Theorem 1]. Here, we prove the stated order. Noting the recursion (6.4), we then have

Since the sequences are nonnegative, we have

(6.5)

Thus, we get . With direct calculations, we get

Using the obtained estimate, we then derive the result.

Lemma 4.

Let , and , and be real numbers. Then,

(6.6)

if .

Proof of Lemma 4

Let , then, we just need to consider the function

(6.7)

Letting and the convexity of when ,

(6.8)

Thus, we have