SVRG for Policy Evaluation with Fewer Gradient Evaluations

06/09/2019 · Zilun Peng et al.

Stochastic variance-reduced gradient (SVRG) is an optimization method originally designed for tackling machine learning problems with a finite sum structure. SVRG was later shown to work for policy evaluation, a problem in reinforcement learning in which one aims to estimate the value function of a given policy. SVRG makes use of gradient estimates at two scales. At the slower scale, SVRG computes a full gradient over the whole dataset, which can lead to prohibitive computation costs. In this work, we show that two variants of SVRG for policy evaluation can significantly reduce the number of gradient computations while preserving a linear convergence rate. More importantly, our theoretical results imply that one does not need to use the entire dataset in every epoch of SVRG when it is applied to policy evaluation with linear function approximation. Our experiments demonstrate the large computational savings provided by the proposed methods.


1 Introduction

Equal contribution. Mila, Université de Montréal; Mila, McGill University; Facebook AI Research.

In reinforcement learning (RL), an agent continuously interacts with an environment by choosing actions as prescribed by a way of behaving called a policy. The agent observes its current state and performs an action drawn from its current policy (a probability distribution conditioned on the state); it then reaches a new state and obtains a reward. The goal of the agent is to improve its policy, but a key requirement in this process is the ability to evaluate the expected long-term return of the current policy, called the value function. Once the policy has been evaluated, it can be updated so that more valuable states are visited more often. Performing policy evaluation efficiently is thus imperative to the success of training an RL agent.

Temporal difference (TD) learning (Sutton, 1988) is a classic method for policy evaluation, which uses the Bellman equation to bootstrap the estimation process and continually update the value function. The Least-Squares Temporal Difference (LSTD) method (Bradtke and Barto, 1996; Boyan, 2002) is a more data-efficient approach which uses the data to construct a linear system approximating the original problem and then solves this system. It also has the advantage of not requiring a learning rate parameter. However, LSTD is not computationally feasible when the number of features $d$ is large, because it requires inverting a $d \times d$ matrix. When $d$ is large, stochastic gradient based approaches, such as GTD (Sutton et al., 2008), GTD2 and TDC (Sutton et al., 2009), are preferred because the amount of computation and storage during each update is linear in $d$. Compared to classical TD, these algorithms truly compute a gradient (instead of performing a fixed-point approximation which is in fact not a gradient update); as a result, they enjoy better theoretical guarantees, especially in the case of off-policy learning, in which the policy of interest for the evaluation is different from the policy generating the agent's experience.

Convex problems with large $n$ (number of data samples) and large $d$ (number of features) appear often in machine learning, and there are many efficient stochastic gradient methods for finding solutions (e.g. SAG (Roux et al., 2012), SVRG (Johnson and Zhang, 2013), SAGA (Defazio et al., 2014)). In the problem of interest here, policy evaluation with linear function approximation, the objective function is a saddle-point formulation of the empirical Mean Squared Projected Bellman Error (MSPBE). It is convex-concave and not strongly convex in the primal variable, so existing powerful convex optimization methods do not directly apply.

Despite this problem, Du et al. (2017) showed that SVRG and SAGA can be applied to solve the saddle point version of MSPBE with linear convergence rates, leading to fast, convergent methods for policy evaluation. An important and computationally heavy step of SVRG is to compute a full gradient at the beginning of every epoch. Subsequent stochastic gradient updates use this full gradient so that the variance of updating directions is reduced. In this paper, we address the computational bottleneck of SVRG by extending two methods, Batching SVRG (Harikandeh et al., 2015) and SCSG (Lei and Jordan, 2017), for policy evaluation. These methods were originally proposed to make SVRG computationally efficient when solving strongly convex problems, so they do not directly apply to our problem, a convex-concave function without strong convexity in the primal variable.

In this work, we make the following key contributions:

  1. We show that both Batching SVRG and SCSG achieve a linear convergence rate for policy evaluation while saving a considerable amount of gradient computations. To the best of our knowledge, this is the first such result for Batching SVRG and SCSG in a saddle-point setting.

  2. While our analysis builds on the ideas of Lei and Jordan (2017), our proofs end up quite different and considerably simpler because we exploit the structure of our problem.

  3. Our experimental results demonstrate that, given the same amount of data, Batching SVRG and SCSG achieve better empirical performance than vanilla SVRG on some standard benchmarks.

2 Background

In RL, a Markov Decision Process (MDP) is typically used to model the interaction between an agent and its environment. An MDP is defined by a tuple $(\mathcal{S}, \mathcal{A}, P, R, \gamma)$, where $\mathcal{S}$ is the set of possible states, $\mathcal{A}$ is the set of actions, and the transition probability function $P$ maps state-action pairs to distributions over next states. $R : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ denotes the reward function, which returns the immediate reward that an agent will receive after performing action $a$ in state $s$, and $\gamma \in [0, 1)$ is the discount factor used to discount rewards received farther in the future. For simplicity, we will assume $\mathcal{S}$ and $\mathcal{A}$ are finite.

A policy $\pi$ is a mapping from states to distributions over actions. The value function for policy $\pi$, denoted $V^{\pi}$, represents the expected sum of discounted rewards along the trajectories induced by the policy in the MDP: $V^{\pi}(s) = \mathbb{E}\big[\sum_{t=0}^{\infty}\gamma^{t} R(s_t, a_t) \mid s_0 = s,\ a_t \sim \pi(\cdot \mid s_t)\big]$. $V^{\pi}$ can be obtained as the fixed point of the Bellman operator $T^{\pi}$, defined by $T^{\pi}V = R^{\pi} + \gamma P^{\pi} V$, where $R^{\pi}$ is the expected immediate reward under $\pi$ and $P^{\pi}$ is the state-transition matrix induced by $\pi$, defined as $P^{\pi}(s' \mid s) = \sum_{a} \pi(a \mid s)\, P(s' \mid s, a)$.

In this paper, we are concerned with the policy evaluation problem (Sutton and Barto, 1998), i.e. the estimation of $V^{\pi}$ for a given policy $\pi$. In order to obtain generalization between different states, $V^{\pi}$ should be represented in a functional form. We focus on linear function approximation of the form $V_{\theta}(s) = \theta^{\top}\phi(s)$, where $\theta \in \mathbb{R}^{d}$ is a weight vector and $\phi : \mathcal{S} \to \mathbb{R}^{d}$ is a feature map from states to a given $d$-dimensional feature space.
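To make the representation concrete, the following minimal Python sketch evaluates a linear value estimate under a hypothetical one-hot feature map; the feature construction and toy sizes are assumptions of the example, not part of the paper.

```python
import numpy as np

n_states, d = 5, 5  # toy sizes; any feature map phi: S -> R^d would do

def phi(s):
    """Hypothetical one-hot feature map over a small finite state space."""
    f = np.zeros(d)
    f[s] = 1.0
    return f

theta = np.random.randn(d)   # weight vector
v_hat = theta @ phi(2)       # linear value estimate V_theta(s) = theta^T phi(s)
```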

3 Objective Functions

We assume that the Markov chain induced by the policy $\pi$ is ergodic and admits a unique stationary distribution over states, denoted by $\mu$. We write $D$ for the diagonal matrix whose diagonal entries are the stationary probabilities $\mu(s)$.

If $\Phi \in \mathbb{R}^{|\mathcal{S}| \times d}$ denotes the matrix obtained by stacking the state feature vectors row by row, then it is known (Bertsekas, 2011) that the desired approximation $\Phi\theta^{*}$ is the fixed point of the projected Bellman operator $\Pi T^{\pi}$:

$\Phi\theta^{*} = \Pi T^{\pi}(\Phi\theta^{*})$ (1)

where $\Pi = \Phi(\Phi^{\top} D \Phi)^{-1}\Phi^{\top} D$ is the orthogonal projection onto the span of the columns of $\Phi$ with respect to the weighted Euclidean norm $\|v\|_{D} = \sqrt{v^{\top} D v}$. Rather than computing a sequence of iterates given by the projected Bellman operator, another approach for finding $\theta^{*}$ is to directly minimize (Sutton et al., 2009; Liu et al., 2015) the Mean Squared Projected Bellman Error (MSPBE):

$\mathrm{MSPBE}(\theta) = \tfrac{1}{2}\,\|\Phi\theta - \Pi T^{\pi}(\Phi\theta)\|_{D}^{2}$ (2)

By substituting the definition of $\Pi$ into (2), we can write the MSPBE as a standard weighted least-squares problem (see Sutton et al. (2009) for a complete derivation):

$\mathrm{MSPBE}(\theta) = \tfrac{1}{2}\,\|A\theta - b\|_{C^{-1}}^{2}$ (3)

where $A$, $b$ and $C$ are defined as follows: $A = \mathbb{E}\big[\phi(s)\,(\phi(s) - \gamma\phi(s'))^{\top}\big]$, $b = \mathbb{E}\big[r\,\phi(s)\big]$ and $C = \mathbb{E}\big[\phi(s)\,\phi(s)^{\top}\big]$, where the expectations are taken with respect to the stationary distribution.

Empirical MSPBE:

We focus here on the batch setting where we have collected a dataset of $n$ transitions $\{(s_t, a_t, r_t, s_{t+1})\}_{t=1}^{n}$ generated by the policy $\pi$. We replace the quantities $A$, $b$ and $C$ in (3) by their empirical estimates:

$\hat{A} = \frac{1}{n}\sum_{t=1}^{n} A_t, \qquad \hat{b} = \frac{1}{n}\sum_{t=1}^{n} b_t, \qquad \hat{C} = \frac{1}{n}\sum_{t=1}^{n} C_t$ (4)

where for all $t$, writing $\phi_t = \phi(s_t)$ and $\phi_{t+1} = \phi(s_{t+1})$ for a given transition $(s_t, a_t, r_t, s_{t+1})$,

$A_t = \phi_t(\phi_t - \gamma\phi_{t+1})^{\top}, \qquad b_t = r_t\,\phi_t, \qquad C_t = \phi_t\phi_t^{\top}$ (5)

Therefore we consider the empirical MSPBE defined as follows:

$\widehat{\mathrm{MSPBE}}(\theta) = \tfrac{1}{2}\,\|\hat{A}\theta - \hat{b}\|_{\hat{C}^{-1}}^{2}$ (6)
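As an illustration, here is a minimal Python sketch of how the empirical quantities in (4)-(6) can be assembled from a batch of transitions. The transition format `(s, r, s_next)` and the feature map `phi` are assumptions of the example, and the sketch assumes $\hat{C}$ is invertible.

```python
import numpy as np

def empirical_terms(transitions, phi, gamma):
    """Build A_hat, b_hat, C_hat as in (4)-(5) from transitions (s, r, s_next)."""
    d = phi(transitions[0][0]).shape[0]
    n = len(transitions)
    A_hat, b_hat, C_hat = np.zeros((d, d)), np.zeros(d), np.zeros((d, d))
    for s, r, s_next in transitions:
        f, f_next = phi(s), phi(s_next)
        A_hat += np.outer(f, f - gamma * f_next) / n   # A_t = phi_t (phi_t - gamma phi_{t+1})^T
        b_hat += r * f / n                             # b_t = r_t phi_t
        C_hat += np.outer(f, f) / n                    # C_t = phi_t phi_t^T
    return A_hat, b_hat, C_hat

def empirical_mspbe(theta, A_hat, b_hat, C_hat):
    """Empirical MSPBE (6): 0.5 * ||A_hat theta - b_hat||^2 in the C_hat^{-1} norm."""
    resid = A_hat @ theta - b_hat
    return 0.5 * resid @ np.linalg.solve(C_hat, resid)
```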

Finite sum structure:

We aim to apply stochastic variance-reduction techniques to our problem. These methods are designed for problems with a finite sum structure of the form:

$\min_{x} \frac{1}{n}\sum_{i=1}^{n} f_i(x)$ (7)

Unfortunately, even after replacing the quantities $A$, $b$ and $C$ by their finite-sample estimates, the empirical objective in (6) cannot be written in the form (7). However, Du et al. (2017) convert the empirical MSPBE minimization in (6) into a convex-concave saddle-point problem which does present a finite sum structure. To this end, Du et al. (2017) use the convex-conjugate trick. Recall that the convex conjugate of a real-valued function $f$ is defined as:

$f^{*}(y) = \sup_{x}\big(\langle x, y\rangle - f(x)\big)$ (8)

and that when $f$ is convex and closed, we have $f^{**} = f$. Also, if $f(x) = \tfrac{1}{2}\|x\|_{\hat{C}}^{2}$, then $f^{*}(y) = \tfrac{1}{2}\|y\|_{\hat{C}^{-1}}^{2}$. Thanks to the latter relation, the empirical MSPBE minimization is equivalent to:

$\min_{\theta}\max_{w}\ \Big( w^{\top}(\hat{b} - \hat{A}\theta) - \tfrac{1}{2}\, w^{\top}\hat{C}\,w \Big)$ (9)

We denote the objective in (9) by $L(\theta, w)$; it can be written as $L(\theta, w) = \frac{1}{n}\sum_{t=1}^{n} L_t(\theta, w)$ where $L_t(\theta, w) = w^{\top}(b_t - A_t\theta) - \tfrac{1}{2}\, w^{\top} C_t\, w$, which has the desired finite sum structure.
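For completeness, here is the one-line verification (a sketch in the notation above) that the inner maximization in (9) recovers the empirical MSPBE (6): the maximum over $w$ is attained at $w = \hat{C}^{-1}(\hat{b} - \hat{A}\theta)$, and substituting it back gives

```latex
\max_{w}\Big( w^{\top}(\hat{b} - \hat{A}\theta) - \tfrac{1}{2}\, w^{\top}\hat{C}\,w \Big)
  = \tfrac{1}{2}\,(\hat{b} - \hat{A}\theta)^{\top}\hat{C}^{-1}(\hat{b} - \hat{A}\theta)
  = \tfrac{1}{2}\,\|\hat{A}\theta - \hat{b}\|_{\hat{C}^{-1}}^{2}.
```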

4 Existing Optimization Algorithms

Before presenting our new methods, we first briefly review existing algorithms that solve the saddle-point problem (9). Let us define the vector obtained by stacking the primal gradient and the negative dual gradient of $L$:

$G(\theta, w) = \begin{pmatrix} \nabla_{\theta} L(\theta, w) \\ -\nabla_{w} L(\theta, w) \end{pmatrix} = \begin{pmatrix} -\hat{A}^{\top} w \\ \hat{A}\theta + \hat{C}w - \hat{b} \end{pmatrix}$ (10)

We have $G(\theta, w) = \frac{1}{n}\sum_{t=1}^{n} G_t(\theta, w)$ where $G_t(\theta, w) = \begin{pmatrix} -A_t^{\top} w \\ A_t\theta + C_t w - b_t \end{pmatrix}$.

Gradient temporal difference:

The GTD2 algorithm (Sutton et al., 2009), when applied to the batch setting, consists of the following update: for a uniformly sampled index $t \in \{1, \dots, n\}$,

$\theta \leftarrow \theta + \sigma_{\theta}\, A_t^{\top} w, \qquad w \leftarrow w + \sigma_{w}\,(b_t - A_t\theta - C_t w)$ (11)

where $\sigma_{\theta}$ and $\sigma_{w}$ are step sizes on $\theta$ and $w$. GTD2 has a low computation cost per iteration but only a sublinear convergence rate (Touati et al., 2018).
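A minimal Python sketch of this primal-descent / dual-ascent step (11), assuming the per-sample quantities $A_t$, $b_t$, $C_t$ from (5) have been precomputed; the default step sizes are illustrative only.

```python
import numpy as np

def gtd2_step(theta, w, A_t, b_t, C_t, sigma_theta=0.01, sigma_w=0.01):
    """One stochastic descent step on theta and ascent step on w for L_t, as in (11)."""
    theta_new = theta + sigma_theta * (A_t.T @ w)        # descent direction: -grad_theta L_t = A_t^T w
    w_new = w + sigma_w * (b_t - A_t @ theta - C_t @ w)  # ascent direction:   grad_w L_t
    return theta_new, w_new
```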

SVRG for policy evaluation:

Du et al. (2017) applied SVRG to solve the saddle-point problem (9). The idea is to alternate between full and stochastic gradient updates in two layers of loops. In the outer loop, a snapshot $(\tilde{\theta}, \tilde{w})$ of the current variables is saved together with its full gradient vector $G(\tilde{\theta}, \tilde{w})$. Between snapshots, the variables are updated with a stochastic gradient estimate whose variance is reduced using the snapshot gradient:

$\begin{pmatrix} \theta \\ w \end{pmatrix} \leftarrow \begin{pmatrix} \theta \\ w \end{pmatrix} - \begin{pmatrix} \sigma_{\theta} I & 0 \\ 0 & \sigma_{w} I \end{pmatrix} \Big( G_t(\theta, w) - G_t(\tilde{\theta}, \tilde{w}) + G(\tilde{\theta}, \tilde{w}) \Big)$ (12)

where $t$ is uniformly sampled from $\{1, \dots, n\}$. Du et al. (2017) showed that the algorithm has a linear convergence rate although the objective (9) is not strongly convex in the primal variable $\theta$. However, the algorithm remains inefficient in terms of computation, as it requires computing a full gradient over the entire dataset in the outer loop. In the rest of the paper, "an epoch" means one iteration of the outer loop. In the sequel, we introduce two variants of SVRG for policy evaluation that alleviate this computational bottleneck while preserving the linear convergence rate.
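As a sketch, the variance-reduced direction in (12) can be computed as follows (same hypothetical array conventions as the GTD2 snippet above; `full_grad_snap` is the stacked gradient (10) evaluated at the snapshot).

```python
import numpy as np

def svrg_direction(A_t, b_t, C_t, theta, w, theta_snap, w_snap, full_grad_snap):
    """Variance-reduced estimate in (12): G_t(theta, w) - G_t(snapshot) + G(snapshot)."""
    g_cur = np.concatenate([-A_t.T @ w, A_t @ theta + C_t @ w - b_t])
    g_snap = np.concatenate([-A_t.T @ w_snap, A_t @ theta_snap + C_t @ w_snap - b_t])
    return g_cur - g_snap + full_grad_snap
```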

5 Proposed Methods

5.1 Batching SVRG for Policy Evaluation

Algorithm 1 presents Batching SVRG for policy evaluation. It applies Batching SVRG (Harikandeh et al., 2015) to the convex-concave formulation of the empirical MSPBE. Harikandeh et al. (2015) show that SVRG is robust to an inexact computation of the full gradient. In order to speed up the algorithm, Algorithm 1, similarly to Harikandeh et al. (2015), estimates the full gradient in each epoch using only a subset (a mini-batch) $\mathcal{B}_m$ of size $n_m$ of the training examples. Each iteration of the inner loop of Algorithm 1 then uses the usual SVRG update of $\theta$ and $w$, except that the full gradient $G(\tilde{\theta}, \tilde{w})$ in (12) is replaced with the mini-batch gradient $g_m = \frac{1}{n_m}\sum_{t \in \mathcal{B}_m} G_t(\tilde{\theta}, \tilde{w})$, the stochastic index being sampled uniformly at random.
Algorithm 1 Batching SVRG for PE
Input: initial point $(\theta_0, w_0)$, number of epochs $M$, inner-loop length $K$, step sizes $\sigma_{\theta}$ and $\sigma_{w}$
Output: $(\theta_M, w_M)$
1:  for m = 0 to M-1 do
2:     Set $(\theta, w)$ and the snapshot $(\tilde{\theta}_m, \tilde{w}_m)$ to $(\theta_m, w_m)$.
3:     Choose a mini-batch size $n_m$.
4:     Sample a set $\mathcal{B}_m$ with $n_m$ elements uniformly from $\{1, \dots, n\}$.
5:     Compute the mini-batch gradient $g_m = \frac{1}{n_m}\sum_{t \in \mathcal{B}_m} G_t(\tilde{\theta}_m, \tilde{w}_m)$.
6:     for j = 0 to K-1 do
7:        Sample an index $t$ uniformly at random from $\{1, \dots, n\}$.
8:        Compute the variance-reduced direction $v = G_t(\theta, w) - G_t(\tilde{\theta}_m, \tilde{w}_m) + g_m$.
9:        Update $(\theta, w)$ with step sizes $\sigma_{\theta}$ and $\sigma_{w}$ along the $\theta$- and $w$-components of $v$, as in (12).
10:    end for
11:    Set $(\theta_{m+1}, w_{m+1})$ to the final inner-loop iterate.
12: end for
13: return $(\theta_M, w_M)$

Algorithm 2 SCSG for PE
Input: initial point $(\theta_0, w_0)$, number of epochs $M$, mini-batch size $B$, step sizes $\sigma_{\theta}$ and $\sigma_{w}$
Output: $(\theta_M, w_M)$
1:  for m = 1 to M do
2:     Set $(\theta, w)$ and the snapshot $(\tilde{\theta}_m, \tilde{w}_m)$ to $(\theta_{m-1}, w_{m-1})$.
3:     Sample a set $\mathcal{B}_m$ with $B$ elements uniformly from $\{1, \dots, n\}$.
4:     Compute the mini-batch gradient $g_m = \frac{1}{B}\sum_{t \in \mathcal{B}_m} G_t(\tilde{\theta}_m, \tilde{w}_m)$.
5:     Sample the inner-loop length $K_m$ from a geometric distribution.
6:     for j = 0 to $K_m - 1$ do
7:        Sample an index $t$ uniformly at random.
8:        Compute the variance-reduced direction $v = G_t(\theta, w) - G_t(\tilde{\theta}_m, \tilde{w}_m) + g_m$.
9:        Update $(\theta, w)$ with step sizes $\sigma_{\theta}$ and $\sigma_{w}$ along the $\theta$- and $w$-components of $v$, as in (12).
10:    end for
11:    Set $(\theta_m, w_m)$ to the final inner-loop iterate.
12: end for
13: return $(\theta_M, w_M)$
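To make the two epoch structures concrete, the following Python sketch mirrors the control flow of Algorithms 1 and 2 using the per-sample arrays $A_t$, $b_t$, $C_t$ from (5). The step sizes, batch schedule, geometric parameter and the choice to draw inner-loop indices from the full dataset are illustrative assumptions of the sketch, not the constants or sampling scheme prescribed by the analysis.

```python
import numpy as np

def stacked_grad(A_t, b_t, C_t, theta, w):
    """Per-sample stacked gradient G_t(theta, w) from (10)."""
    return np.concatenate([-A_t.T @ w, A_t @ theta + C_t @ w - b_t])

def vr_epoch(A, b, C, theta, w, batch_idx, inner_idx, sigma, d):
    """One variance-reduced epoch shared by both sketches."""
    snap_theta, snap_w = theta.copy(), w.copy()
    # Mini-batch estimate of the full gradient at the snapshot.
    g = np.mean([stacked_grad(A[i], b[i], C[i], snap_theta, snap_w) for i in batch_idx], axis=0)
    for t in inner_idx:
        v = (stacked_grad(A[t], b[t], C[t], theta, w)
             - stacked_grad(A[t], b[t], C[t], snap_theta, snap_w) + g)
        theta = theta - sigma[0] * v[:d]   # theta-component of the update (12)
        w = w - sigma[1] * v[d:]           # w-component of the update (12)
    return theta, w

def batching_svrg(A, b, C, d, M=20, K=100, sigma=(0.01, 0.01), n0=32, rho=1.1, seed=0):
    """Algorithm 1 sketch: mini-batch sizes grow over epochs (cf. Corollary 1)."""
    rng = np.random.default_rng(seed)
    n = len(A)
    theta, w = np.zeros(d), np.zeros(d)
    for m in range(M):
        n_m = min(n, int(np.ceil(n0 * rho ** m)))     # growing mini-batch size (illustrative schedule)
        batch_idx = rng.choice(n, size=n_m, replace=False)
        inner_idx = rng.integers(0, n, size=K)        # K inner samples from the full dataset
        theta, w = vr_epoch(A, b, C, theta, w, batch_idx, inner_idx, sigma, d)
    return theta, w

def scsg(A, b, C, d, M=20, B=64, sigma=(0.01, 0.01), seed=0):
    """Algorithm 2 sketch: fixed mini-batch size, geometric inner-loop length."""
    rng = np.random.default_rng(seed)
    n = len(A)
    theta, w = np.zeros(d), np.zeros(d)
    for m in range(M):
        batch_idx = rng.choice(n, size=B, replace=False)
        K_m = rng.geometric(1.0 / B)                  # E[K_m] = B (illustrative parameter choice)
        inner_idx = rng.integers(0, n, size=K_m)      # inner samples (sampling set chosen for simplicity)
        theta, w = vr_epoch(A, b, C, theta, w, batch_idx, inner_idx, sigma, d)
    return theta, w
```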

5.2 Stochastically Controlled Stochastic Gradient (SCSG) for Policy Evaluation

Algorithm 2 presents Stochastically Controlled Stochastic Gradient (SCSG) for policy evaluation. SCSG was initially introduced for convex minimization by Lei and Jordan (2017); here, we apply it to our convex-concave saddle-point problem. Similarly to Batching SVRG for policy evaluation in Algorithm 1, Algorithm 2 computes the gradient estimate on a subset of training examples at each epoch, but the mini-batch size is fixed in advance rather than varying. Moreover, instead of being fixed, the number of iterations of the inner loop in Algorithm 2 is sampled from a geometrically distributed random variable $K_m$ for each epoch $m$.

6 Convergence Analysis

6.1 Notations and Preliminary

In order to characterize the convergence rates of the proposed Algorithms 1 and 2, we need to introduce some new notation and state our assumptions.

We denote by $\|A\|$ the spectral norm of a matrix $A$ and by $\kappa(A)$ its condition number. If the eigenvalues of a matrix $A$ are real, we use $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ to denote, respectively, its largest and smallest eigenvalues.

If we set $\sigma_{w} = \beta\,\sigma_{\theta}$ for a positive constant $\beta$, it is possible to write the inner-loop update (line 9 in both algorithms) as an update of the stacked vector $x = (\theta^{\top}, w^{\top})^{\top}$ (with $\tilde{x}$ denoting the stacked snapshot):

$x \leftarrow x - \sigma_{\theta}\big( B_t (x - \tilde{x}) + B_{\mathcal{B}_m}\tilde{x} - c_{\mathcal{B}_m} \big)$

where:

$B_t = \begin{pmatrix} 0 & -A_t^{\top} \\ \beta A_t & \beta C_t \end{pmatrix}, \qquad c_t = \begin{pmatrix} 0 \\ \beta\, b_t \end{pmatrix}$

and $B_{\mathcal{B}_m}$ and $c_{\mathcal{B}_m}$ are their corresponding averages over the mini-batch $\mathcal{B}_m$. Let us now define the matrix $\hat{B}$ (the vector $\hat{c}$) as the average of the matrices $B_t$ (vectors $c_t$) over the entire dataset. To simplify notation, we write $G_t(x)$ for $G_t(\theta, w)$ when $x = (\theta^{\top}, w^{\top})^{\top}$, and similarly for $G(x)$. Another important quantity, which characterizes the smoothness of our problem, is defined below as:

(13)

The matrix $\hat{B}$ will play a key role in the convergence analysis of both Algorithms 1 and 2. Du et al. (2017) have already studied the spectral properties of $\hat{B}$, as they were critical for the convergence of SVRG for policy evaluation. The following lemma, restated from Du et al. (2017), shows the condition that $\beta$ should satisfy so that $\hat{B}$ is diagonalizable with all its eigenvalues real and positive.

Assumption 1.

$\hat{A}$ is nonsingular and $\hat{C}$ is positive definite. This implies that the saddle-point problem (9) admits a unique solution $(\theta^{*}, w^{*})$, and we define $x^{*} = (\theta^{*\top}, w^{*\top})^{\top}$.

Lemma 1.

(Du et al., 2017) Suppose Assumption 1 holds. If we choose the ratio $\beta = \sigma_{w}/\sigma_{\theta}$ appropriately (see Du et al. (2017) for the precise condition), then the matrix $\hat{B}$ is diagonalizable with all its eigenvalues real and positive.

If the assumptions of Lemma 1 hold, we can write $\hat{B} = Q \Lambda Q^{-1}$, where $\Lambda$ is a diagonal matrix whose diagonal entries are the eigenvalues of $\hat{B}$, and $Q$ consists of its eigenvectors as columns. We define the residual vector $\epsilon = x - x^{*}$. To study the behaviour of our algorithms, we use the potential function $\mathbb{E}\,\|Q^{-1}\epsilon\|^{2}$. As $Q$ is invertible, the convergence of the potential function to zero implies the convergence of the iterates to the saddle point.
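To spell out the last claim, a short check in the notation reconstructed above:

```latex
\|\epsilon\| = \|Q\,Q^{-1}\epsilon\| \le \|Q\|\,\|Q^{-1}\epsilon\|
\quad\Longrightarrow\quad
\mathbb{E}\,\|\epsilon\|^{2} \le \|Q\|^{2}\,\mathbb{E}\,\|Q^{-1}\epsilon\|^{2}.
```

Hence driving the potential function to zero also drives $\mathbb{E}\,\|\epsilon\|^{2}$ to zero.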

6.2 Convergence of batching SVRG for Policy Evaluation

In order to study the behavior of Algorithm 1, we define the error incurred at epoch $m$, which comes from computing the snapshot gradient over a mini-batch instead of the entire dataset:

$e_m = \frac{1}{n_m}\sum_{t \in \mathcal{B}_m} G_t(\tilde{\theta}_m, \tilde{w}_m) - G(\tilde{\theta}_m, \tilde{w}_m)$ (14)

The stochastic update of the inner loop could be written as follows:

(15)
Theorem 1.

Suppose Assumption 1 holds and the step sizes and the epoch length are chosen suitably (as in the proof in Appendix A). Then we obtain:

(16)

Note that if the mini-batch is the entire dataset, the error $e_m$ is zero and we recover the convergence rate of SVRG from Theorem 1. Moreover, we can still maintain the linear convergence rate if the error term vanishes at an appropriate rate. In particular, the corollary below provides a possible batching strategy to control the error term.

Corollary 1.

Suppose that the assumptions of Theorem 1 hold. If the sample variance of the norms of the per-sample gradient vectors is bounded by a constant, and we set the mini-batch sizes $n_m$ according to an exponentially increasing schedule (determined by two constants), then we obtain:

(17)

We conclude that an exponentially increasing schedule of mini-batch sizes achieves a linear convergence rate for Batching SVRG. Moreover, this batching strategy saves many gradient computations in the early stages of the algorithm compared to vanilla SVRG.
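A quick back-of-the-envelope illustration of the savings on the snapshot gradients under such a schedule; the dataset size, number of epochs and schedule constants below are arbitrary placeholders.

```python
import numpy as np

n, M = 10_000, 30        # dataset size and number of epochs (illustrative)
n0, rho = 100, 1.2       # schedule constants (illustrative)
sizes = [min(n, int(np.ceil(n0 * rho ** m))) for m in range(M)]
print(sum(sizes), "snapshot gradient evaluations vs", M * n, "for vanilla SVRG")
```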

6.3 Convergence of SCSG for Policy Evaluation

Algorithm 2 uses a fixed mini-batch size $B$ instead of the varying sizes of Algorithm 1. Moreover, the number of iterations of the inner loop is sampled from a geometric distribution, so the length of an epoch is only controlled in expectation.

Before stating the convergence result, we introduce the complexity measure defined as follows:

(18)

This quantity is equivalent to the complexity measure introduced by Lei and Jordan (2017) to motivate and analyze SCSG for convex finite-sum minimization problems, which is defined as:

(19)
Theorem 2.

Suppose Assumption 1 holds, and set the step sizes, the ratio $\beta$ and the geometric distribution's parameter suitably. Assume that the dataset size $n$ is large enough; then we obtain:

(20)
Corollary 2.

Suppose Assumption 1 holds and $n$ is large enough. We set the step sizes and the mini-batch size $B$ as in Theorem 2, and let $\epsilon$ denote the target accuracy. The computational cost that Algorithm 2 requires to reach it is:

(21)

Technical detail

In the proof of Theorem 2, we set the value of the step size early in the proof to simplify our derivation. In fact, we could continue the derivation, at the cost of more complicated expressions, and then set the step size to be dependent on the mini-batch size $B$. We conjecture the computational cost that would result from doing so; in particular, under this refinement the assumption that $n$ is large enough could be dropped.

As shown in Table 1, GTD2 is the cheapest computationally, but it has only a sublinear convergence rate. Both SVRG and SCSG achieve a linear convergence rate. When the sample size $n$ is small, SVRG and SCSG have an equivalent computational cost. However, when $n$ is large and the required accuracy is low, SCSG saves unnecessary computations and is able to achieve the target accuracy with potentially less than a single pass through the dataset.

Algo | Computational Cost
GTD2 | (reported from Touati et al., 2018)
SVRG | (reported from Du et al., 2017)
SCSG | (this work, Corollary 2)

Table 1: Computational cost of different policy evaluation algorithms. We report the computational costs of GTD2 and SVRG from Touati et al. (2018) and Du et al. (2017), respectively, expressed in terms of the quantities used in our analysis.

7 Related Works

Stochastic gradient methods (Robbins and Monro, 1951) are the most popular approach for optimizing convex problems with a finite sum structure, but they have a slow convergence rate due to the inherent variance of the gradient estimates. Later, various works showed that a faster convergence rate is possible provided that the objective function is strongly convex and smooth; representative examples are SAG, SVRG and SAGA (Roux et al., 2012; Johnson and Zhang, 2013; Defazio et al., 2014). Among these methods, SVRG has no memory requirements but requires a lot of computation. There have been attempts to make SVRG computationally efficient for convex minimization (Harikandeh et al., 2015; Lei and Jordan, 2017), but they do not directly apply to the problem of interest here, a convex-concave saddle-point problem without strong convexity in the primal variable. Convex-concave saddle-point problems can be solved with a linear convergence rate (Balamurugan and Bach, 2016), but their method requires strong convexity in the primal variable, and the proximal mappings of the variables in our problem are difficult to compute (Du et al., 2017).

Many existing works study policy evaluation with linear function approximation. Gradient-based approaches (Baird, 1995; Sutton et al., 2008, 2009; Liu et al., 2015) choose different objective functions and optimize the parameters of the value function toward the solutions of those objectives. Least-squares approaches (Bradtke and Barto, 1996; Boyan, 2002) directly compute closed-form solutions and have high computational costs because they need to compute matrix inverses. The idea of SVRG has been applied to policy evaluation before (Korda and L.A., 2015; Du et al., 2017). In this work, we extend SVRG for policy evaluation, as proposed in Du et al. (2017), and show that the amount of computation can be reduced while keeping linear convergence guarantees.

In the control case, Papini et al. (2018) adapt SVRG to policy gradient methods and use a mini-batch to approximate the full gradient, similarly to our work. However, their problem is a non-convex minimization and they obtain a sublinear convergence rate.

8 Experiments

Figure 1: The left and middle figures show the performance of Batching SVRG and SVRG in the Random MDP and Mountain Car environments. Batching SVRG is evaluated with different parameter settings: batch svrg 1.05 and batch svrg 1.1 mean that we increase the batch size of Batching SVRG by 5% and 10% in every epoch, respectively. The right figure plots the number of times each method uses data samples against the number of epochs.
Figure 2: Performance of SCSG and SVRG in the Random MDP and Mountain Car environments. We evaluate SCSG under different parameter settings: scsg 0.05, scsg 0.1 and scsg 0.5 mean that the mini-batch size $B$ of SCSG is set to $0.05n$, $0.1n$ and $0.5n$, respectively. In Figures 2(a) and 2(c), the objective value is plotted against the number of data samples used by SCSG and SVRG. Figures 2(b) and 2(d) show the performance of SCSG and SVRG in every epoch.

We compare the empirical performance of our proposed algorithms with SVRG on two benchmarks: Random MDP and Mountain Car. Details of the two environments, along with other experimental details, are given in the supplementary material. Figure 1 demonstrates that Batching SVRG is able to achieve the same performance as SVRG while using significantly less data. The two algorithms have identical empirical performance in the Random MDP environment. In the Mountain Car environment, Batching SVRG's performance is worse than SVRG's in early epochs, but it later reaches the same level of objective values and has the same convergence speed as SVRG. This is expected, because our theoretical results suggest that the approximation error will not affect the overall convergence rate as long as it decreases appropriately. Figure 2 shows the performance of SCSG and SVRG. We plot our results against two metrics: the number of epochs and the number of times a method has used data samples. Since SVRG uses the entire dataset to evaluate the full gradient in every epoch, its performance is not as good as SCSG's in terms of the amount of data used. We also observe that SVRG is better than SCSG in terms of the number of epochs. This is not surprising, as SCSG samples its number of inner-loop iterations from a geometric distribution, so an epoch of SCSG is significantly shorter than an epoch of SVRG.

9 Conclusion and future work

In this paper, we show that Batching SVRG and SCSG converge linearly when solving the saddle-point formulation of the MSPBE. This problem is convex-concave and is not strongly convex in the primal variable, so it is very different from the original objective functions that Batching SVRG and SCSG were designed to solve. Our algorithms are very practical because they require fewer gradient evaluations than vanilla SVRG for policy evaluation. It would be useful in the future to carry out more extensive empirical evaluations of the proposed algorithms. In general, we think that there is a lot of room for applying more efficient optimization algorithms to problems in reinforcement learning, in order to obtain better theoretical guarantees and to improve sample and computational efficiency.

References

  • Baird (1995) Baird, L. (1995). Residual algorithms: Reinforcement learning with function approximation. In International Conference on Machine Learning.
  • Balamurugan and Bach (2016) Balamurugan, P. and Bach, F. (2016). Stochastic variance reduction methods for saddle-point problems. In Advances in Neural Information Processing Systems.
  • Bertsekas (2011) Bertsekas, D. P. (2011). Temporal difference methods for general projected equations. IEEE Transactions on Automatic Control.
  • Boyan (2002) Boyan, J. (2002). Technical update: Least-squares temporal difference learning. Machine Learning, 49(2):233–246.
  • Bradtke and Barto (1996) Bradtke, S. J. and Barto, A. G. (1996). Linear least-squares algorithms for temporal difference learning. Machine Learning, 22(1-3):33–57.
  • Defazio et al. (2014) Defazio, A., Bach, F., and Lacoste-Julien, S. (2014). Saga: A fast incremental gradient method with support for non-strongly convex composite objectives. In Advances in Neural Information Processing Systems.
  • Du et al. (2017) Du, S. S., Chen, J., Li, L., Xiao, L., and Zhou, D. (2017). Stochastic variance reduction methods for policy evaluation. In International Conference on Machine Learning.
  • Harikandeh et al. (2015) Harikandeh, R., Ahmed, M. O., Virani, A., Schmidt, M., Konečný, J., and Sallinen, S. (2015). Stop wasting my gradients: Practical svrg. In Advances in Neural Information Processing Systems.
  • Johnson and Zhang (2013) Johnson, R. and Zhang, T. (2013). Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems.
  • Korda and L.A. (2015) Korda, N. and L.A., P. (2015). On td(0) with function approximation: Concentration bounds and a centered variant with exponential convergence. In International Conference on Machine Learning.
  • Lei and Jordan (2017) Lei, L. and Jordan, M. I. (2017). Less than a single pass: Stochastically controlled stochastic gradient method. In International Conference on Artificial Intelligence and Statistics.
  • Liu et al. (2015) Liu, B., Liu, J., Ghavamzadeh, M., Mahadevan, S., and Petrik, M. (2015). Finite-sample analysis of proximal gradient td algorithms. In Conference on Uncertainty in Artificial Intelligence.
  • Papini et al. (2018) Papini, M., Binaghi, D., Canonaco, G., Pirotta, M., and Restelli, M. (2018). Stochastic variance-reduced policy gradient. International Conference on Machine Learning.
  • Robbins and Monro (1951) Robbins, H. and Monro, S. (1951). A stochastic approximation method. In Annals of Mathematical Statistics, pages 400–407.
  • Roux et al. (2012) Roux, N. L., Schmidt, M., and Bach, F. (2012). Minimizing finite sums with the stochastic average gradient. In Advances in Neural Information Processing Systems.
  • Sutton (1988) Sutton, R. S. (1988). Learning to predict by the methods of temporal differences. Machine Learning, 3(1):9–44.
  • Sutton and Barto (1998) Sutton, R. S. and Barto, A. G. (1998). Introduction to Reinforcement Learning. MIT Press, Cambridge, MA, USA, 1st edition.
  • Sutton et al. (2009) Sutton, R. S., Maei, H. R., Precup, D., Bhatnagar, S., Silver, D., Szepesvári, C., and Wiewiora, E. (2009). Fast gradient-descent methods for temporal-difference learning with linear function approximation. In International Conference on Machine Learning, pages 993–1000.
  • Sutton et al. (2008) Sutton, R. S., Szepesvári, C., and Maei, H. R. (2008). A convergent o(n) temporal-difference algorithm for off-policy learning with linear function approximation. In Advances in Neural Information Processing Systems.
  • Touati et al. (2018) Touati, A., Bacon, P.-L., Precup, D., and Vincent, P. (2018). Convergent tree backup and retrace with function approximation. In International Conference on Machine Learning, pages 4962–4971.

Appendix A Proof of theorem 1

Proof.

Define the residual vectors as follows:

(22)

We write $\tilde{\theta}_m$ and $\tilde{w}_m$ for the values of $\theta$ and $w$ at the beginning of epoch $m$, and $\theta^m_j$ and $w^m_j$ for their values at iteration $j$ of the inner loop of epoch $m$. $\theta^{*}$ and $w^{*}$ are the optimal solutions of (9). From the first-order optimality condition, we know that $G(\theta^{*}, w^{*}) = 0$; this equality is obtained by setting (10) to the zero vector.

By writing out Algorithm 1’s update, we have:

(23)

We define the error arising from using a mini-batch to compute the gradient at epoch $m$:

(24)

We then obtain:

(25)

Subtracting the optimum from both sides and using the first-order optimality condition, we obtain:

(26)

We choose $\beta$ so that $\hat{B}$ is diagonalizable by Lemma 1, and write $\hat{B} = Q\Lambda Q^{-1}$, where $Q$ contains the eigenvectors and $\Lambda$ the eigenvalues of $\hat{B}$. We multiply both sides of the previous equation by $Q^{-1}$, then take the squared 2-norm and the expectation. We get:

(27)

The cross term in the second equality is simplified by using the fact that the sampled index $t$ is independent of the current iterate, the snapshot and the mini-batch error. In the last inequality, we use the same independence together with the fact that the variance of a random variable is at most its second moment.

We borrow the following useful inequalities from appendix C of Du et al. (2017).

(28)
(29)

Now we bound the cross term in the inequality above:

(30)

The first inequality is obtained by the Cauchy-Schwarz inequality. The last inequality follows from the fact that $2ab \le \eta a^{2} + \eta^{-1} b^{2}$ for any $\eta > 0$, where we select $\eta$ so that the stated inequality holds.

Plugging (28), (29) and (30) back into the earlier bound, we obtain:

(31)

If we choose the step size small enough, the corresponding coefficients are smaller than one, which implies that:

(32)

Note that this choice is valid because of the following inequalities, cited from Appendix C of Du et al. (2017):

(33)

Now, unrolling the above inequality over the iterations of the inner loop, we obtain: