Policy Search using Dynamic Mirror Descent MPC for Model Free Off Policy RL

10/23/2021, by Soumya Rani Samineni, et al.

Recent works in Reinforcement Learning (RL) combine model-free (Mf)-RL algorithms with model-based (Mb)-RL approaches to get the best of both worlds: the asymptotic performance of Mf-RL and the high sample efficiency of Mb-RL. Inspired by these works, we propose a hierarchical framework that integrates online learning for the Mb-trajectory optimization with off-policy methods for the Mf-RL. In particular, two loops are proposed, where the Dynamic Mirror Descent based Model Predictive Control (DMD-MPC) is used as the inner loop to obtain an optimal sequence of actions. These actions are in turn used to significantly accelerate the outer-loop Mf-RL. We show that our formulation is generic for a broad class of MPC-based policies and objectives, and includes some of the well-known Mb-Mf approaches. Based on this framework, we define two algorithms: one to increase the sample efficiency of off-policy RL, and one to guide end-to-end RL policies for online adaptation. These are, respectively, Dynamic-Mirror Descent Model Predictive RL (DeMoRL), which uses the method of elite fractions in the inner loop and Soft Actor-Critic (SAC) as the off-policy RL in the outer loop, and the Dynamic-Mirror Descent Model Predictive Layer (DeMo Layer), a special case of the hierarchical framework that guides linear policies trained using Augmented Random Search (ARS). Our experiments show faster convergence of the proposed DeMoRL, and better or equal performance compared to other Mf-Mb approaches on benchmark MuJoCo control tasks. The DeMo Layer was tested on the classical Cartpole and a custom-built quadruped trained using a linear policy.


1.1 Motivation

Model-Free Reinforcement Learning (Mf-RL) algorithms are widely applied to solve challenging control tasks, as they eliminate the need to model the complex dynamics of the system. However, these techniques are significantly data hungry and require millions of transitions. This severely limits successful training on hardware, where undergoing such a high number of transitions is infeasible. To overcome this hurdle, various works have settled for a two-loop model-based approach, typically referred to as Model-Based Reinforcement Learning (Mb-RL). Such strategies exploit the explored dynamics of the system by learning a dynamics model and then determining an optimal policy on this model. This "inner-loop" optimization allows for a better choice of actions before interacting with the original environment.

The inclusion of model learning in RL has significantly improved sample efficiency [11, 16], and there are numerous works in this direction. DRL algorithms collect a significant number of state transitions while exploring, which can be used to build an approximate dynamics model of the system. In the context of robotics, such models have proven very beneficial for developing robust control strategies based on predictive simulations [8]; they have successfully handled minor disturbances and demonstrated sim2real feasibility. Moreover, planning with the learnt model is mainly motivated by Model Predictive Control (MPC), a well-known strategy in classical real-time control. Given the model and a cost formulation, a typical MPC problem takes the form of a finite-horizon trajectory optimization solved at every step. These observations motivate us to propose a generalised framework combining model-free and model-based methods.

1.2 Related Work

With such a view of the Mb-Mf approach, [16] exploited the approximated dynamics with random shooting and demonstrated its efficacy in improving overall learning performance. The work also showed how model-based (Mb) additions to typical model-free (Mf) algorithms can significantly accelerate the latter. In the same context of Mb-Mf RL, [13] introduced the use of value functions within an MPC formulation, and [5] showed a similar formulation with high-dimensional image observations. More recently, [29] showed adaptation to dynamic changes using MPC with world models, and [15] proposed an actor-critic framework using model-predictive rollouts and demonstrated applicability on real hardware. TOPDM [1], the approach closest to DeMoRL, demonstrated spinning a pen between the fingers, one of the most challenging examples in dexterous hand manipulation.

Further, prior works [11], [24], [26] have explored guiding RL policies using mirror descent approaches with a KL constraint on the policy update. To the best of our knowledge, we are the first in the literature to generalise the Mb-Mf framework by viewing Dynamic Mirror Descent MPC as a guide for RL policies.

1.3 Contribution

With a view toward strengthening existing Mb-Mf approaches for learning, we propose a generic framework that integrates a model-based optimization scheme with model-free off-policy learning. Motivated by the success of online learning algorithms [27] on RC buggy models, we combine them with off-policy Mf learning, thereby leading to a two-loop Mb-Mf approach. In particular, we implement dynamic mirror descent (DMD) algorithms on a model-estimate of the system, and the outer-loop Mf-RL is then used on the real system. The main advantage of this setting is that the inner loop is computationally light; the number of iterations can be large without affecting the overall performance. Since this is a hierarchical approach, the inner-loop policy helps improve the outer-loop policy by effectively utilizing the control choices made on the approximate dynamics. This approach, in fact, provides a more generic framework for some of the existing Mb-Mf approaches (e.g., [15], [17]).

In addition to the proposed framework, we introduce two new algorithms: DeMoRL and DeMo Layer. Dynamic-Mirror Descent Model Predictive RL (DeMoRL) uses Soft Actor-Critic (SAC) [4] in the outer loop as the off-policy RL and the Cross-Entropy Method (CEM) in the inner loop as the DMD-MPC [27]. In particular, we use the exponential family of control distributions with a CEM objective. In each iteration, the optimal control sequence obtained is applied to the model-estimate to collect additional data, which is appended to the buffer used by the outer loop for learning the optimal policy. We show that the DMD-MPC accelerates the learning of the outer loop simply by enriching the data with better choices of state-control transitions. We finally demonstrate this method on custom robotic environments and MuJoCo benchmark control tasks. Simulation results show that the proposed methodology is better than or at least as good as MoPAC [15] and MBPO [8] in terms of sample efficiency. Furthermore, as our formulation is close to that of [15], it is worth mentioning that even though we do not show results on hardware, the proposed algorithms can be used to train on hardware more effectively, which will be a part of future work.

The DeMo Layer, a special instance of the hierarchical framework, guides linear policies trained using Augmented Random Search (ARS). Experiments are conducted on Cartpole swing-up and quadrupedal walking. Our results show that the proposed DeMo Layer improves the policy and can be used end to end with any RL algorithm during deployment.

1.4 Outline of the Report

The report is structured as follows:

  • Chapter 2. Problem Formulation
    In this chapter, we provide the preliminaries for Reinforcement Learning and Online Learning as used in the report. We further describe the specific RL algorithms, namely Augmented Random Search and Soft Actor-Critic, and the Online Learning approach to MPC.

  • Chapter 3. Methodology: Novel Framework & Algorithms
    We describe the hierarchical framework for the proposed strategy, followed by the description of the DMD-MPC. With the proposed generalised framework, we formulate the two associated algorithms, DeMoRL and DeMo Layer, in this chapter.

  • Chapter 4. Experimental Results
    In this chapter we run experiments of DeMoRL on benchmark MuJoCo control tasks and compare the results with the existing state-of-the-art algorithms MoPAC and MBPO. Experiments with the DeMo Layer were conducted on Cartpole swing-up and the custom-built quadruped Stoch2. Further, we discuss our experimental results and show the significance of the proposed algorithms.

  • Chapter 5. Conclusion & Future Work
    Finally, we end the report by summarizing the work done and proposing some interesting future directions.

2.1 Optimal Control- MPC

Model Predictive Control (MPC) is a widely applied control strategy that yields practical and robust controllers. It considers a stochastic dynamics model $\hat f$ as an approximation of the real system, solves an $H$-step optimisation problem at every time step, and applies the first control $u_t$ to the real dynamical system to reach the next state $x_{t+1}$. A popular MPC objective is the expected $H$-step future cost

(2.1)   $J(x_t, U_t) = \mathbb{E}\Big[ \sum_{k=0}^{H-1} c(x_{t+k}, u_{t+k}) + c_H(x_{t+H}) \Big]$

(2.2)   subject to $x_{t+k+1} \sim \hat f(\,\cdot \mid x_{t+k}, u_{t+k}), \quad U_t = (u_t, \dots, u_{t+H-1}),$

where $c$ is the cost incurred (for the control problem) and $c_H$ is the terminal cost.
Since the optimal control $u_t$ is obtained from $U_t$, which is computed from the current state $x_t$, MPC is effectively state feedback, as desired for a stochastic system, and is an effective tool for control tasks involving dynamic environments or non-stationary setups.

Though MPC is intuitively promising, the optimization is approximated in practice, since the control command needs to be computed in real time at high frequency. Hence, a common practice is to heuristically bootstrap the previous approximate solution as the initialization to the current problem.
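To make the receding-horizon structure concrete, the following is a minimal sketch of one MPC step with random-shooting optimization on an approximate model, warm-started from the previous solution as described above. The names `model_step`, `cost` and `terminal_cost` are hypothetical placeholders, not the authors' code.

```python
import numpy as np

def mpc_action(x, u_init, model_step, cost, terminal_cost,
               n_samples=100, noise_std=0.1):
    """Return the first control of the best sampled H-step sequence."""
    H = len(u_init)
    # Perturb the warm-started sequence (previous solution, shifted by one step).
    candidates = u_init[None] + noise_std * np.random.randn(n_samples, *u_init.shape)
    costs = np.zeros(n_samples)
    for i, U in enumerate(candidates):
        x_k = x
        for k in range(H):
            costs[i] += cost(x_k, U[k])
            x_k = model_step(x_k, U[k])      # approximate dynamics
        costs[i] += terminal_cost(x_k)
    best = candidates[np.argmin(costs)]
    # Apply the first control; the shifted remainder bootstraps the next problem.
    return best[0], np.roll(best, -1, axis=0)
```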

2.2 Reinforcement Learning Framework

We consider an infinite-horizon Markov Decision Process (MDP) given by the tuple $(\mathcal{X}, \mathcal{U}, r, P, \gamma)$, where $\mathcal{X}$ refers to the set of states of the robot and $\mathcal{U}$ refers to the set of controls or actions. $r : \mathcal{X} \times \mathcal{U} \to \mathbb{R}$ is the reward function, $P$ is the function that gives the transition probabilities between two states for a given action, and $\gamma \in (0,1)$ is the discount factor of the MDP. The distribution over initial states is given by $\rho_0$, and the policy is represented by $\pi_\theta$, parameterized by $\theta \in \Theta$, a potentially high-dimensional space. If a stochastic policy is used, then $u_t \sim \pi_\theta(\cdot \mid x_t)$. For ease of notation, we will use a deterministic policy $u_t = \pi_\theta(x_t)$ to formulate the problem; wherever a stochastic policy is used, we will show the extensions explicitly. In this formulation, the optimal policy is the policy that maximizes the expected return $J(\theta)$:

$J(\theta) = \mathbb{E}_{x_0 \sim \rho_0}\Big[ \sum_{t=0}^{\infty} \gamma^t\, r(x_t, \pi_\theta(x_t)) \Big],$

where the subscript $t$ denotes the step index. Note that the system model dynamics can be expressed in the form of an equation:

(2.3)   $x_{t+1} = f(x_t, u_t).$
Off-policy techniques like TD3 and SAC have shown better sample complexity than TRPO and PPO. Augmented Random Search [14], a simple random-search-based model-free technique, proposed a linear deterministic policy that is highly competitive with other model-free RL techniques such as TRPO, PPO and SAC. In the subsequent sections we describe the ARS algorithm in detail, along with an improvement in its implementation, and we also describe SAC.

2.3 Online Learning Framework

Online learning is another sequential decision-making framework for analyzing online decisions, essentially with three components: the decision set, the learner's strategy for updating decisions, and the environment's strategy for updating per-round losses.

At round $t$, the learner makes a decision $\theta_t$ from the decision set; the environment then chooses a loss function $\ell_t$, and the learner suffers the cost $\ell_t(\theta_t)$, along with side information, such as the gradient of the loss, to aid in choosing the next decision.

The learner's goal is to minimize the accumulated cost $\sum_t \ell_t(\theta_t)$, i.e., to minimize the regret. We describe the Online Learning approach to Model Predictive Control [27] in detail in subsequent sections.
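As an illustration of this protocol, the sketch below runs a simple online-gradient-descent learner against a sequence of per-round losses; `loss_fn` and `grad_fn` stand in for the environment's loss and the gradient side information (both placeholders, not from the paper).

```python
import numpy as np

def online_gradient_descent(theta0, rounds, step_size=0.1):
    """rounds: iterable of (loss_fn, grad_fn) pairs chosen by the environment."""
    theta = np.asarray(theta0, dtype=float)
    total_cost = 0.0
    for loss_fn, grad_fn in rounds:
        total_cost += loss_fn(theta)                 # learner suffers the per-round cost
        theta = theta - step_size * grad_fn(theta)   # side information: gradient of the loss
    return theta, total_cost
```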

2.4 Description of Algorithms

We describe the RL and online learning algorithms used in this work: Augmented Random Search (ARS), Soft Actor-Critic (SAC) and the Online Learning approach to MPC.

2.4.1 Augmented Random Search

Random search is a derivative-free optimisation approach in which the gradient is estimated through a finite-difference method [18]. The objective is to maximize the expected return of a policy parameterised by $\theta$ under noise:

$\max_{\theta}\; \mathbb{E}_{\delta \sim \mathcal{N}(0, I)}\big[ J(\theta + \nu \delta) \big].$

The gradient estimate is obtained from the gradient of this smoothened version of the objective with Gaussian noise, unlike the policy gradient theorem. The gradient of the smoothened objective is

$\nabla_\theta\, \mathbb{E}_{\delta}\big[ J(\theta + \nu \delta) \big] = \frac{1}{\nu}\, \mathbb{E}_{\delta}\big[ J(\theta + \nu \delta)\, \delta \big],$

where $\delta$ is zero-mean Gaussian. If $\nu$ is sufficiently small, the gradient estimate is close to the gradient of the original objective. The bias can be reduced further with a two-point estimate,

$\frac{1}{2\nu}\, \mathbb{E}_{\delta}\big[ \big( J(\theta + \nu \delta) - J(\theta - \nu \delta) \big)\, \delta \big].$

Basic Random Search updates the policy parameters according to

(2.4)   $\theta_{j+1} = \theta_j + \frac{\alpha}{N} \sum_{k=1}^{N} \big( J(\theta_j + \nu \delta_k) - J(\theta_j - \nu \delta_k) \big)\, \delta_k.$

Augmented Random Search instead defines the update rule

(2.5)   $\theta_{j+1} = \theta_j + \frac{\alpha}{b\, \sigma_R} \sum_{k=1}^{b} \big( J(\theta_j + \nu \delta_{(k)}) - J(\theta_j - \nu \delta_{(k)}) \big)\, \delta_{(k)},$

where $\sigma_R$ is the standard deviation of the collected returns and $\delta_{(k)}$ are the top-performing perturbation directions.

The policy is a linear state-feedback law, $u = Mx$, where $x$ is the state and $M$ the parameter matrix. ARS proposes three augmentations to Basic Random Search (a code sketch follows the list below):

i) Using the top $b$ best-performing directions: the perturbation directions $\delta_k$ are ordered in decreasing order according to $\max\{J(\theta_j + \nu\delta_k),\, J(\theta_j - \nu\delta_k)\}$, and only the top $b$ directions are used.

ii) Scaling by the standard deviation $\sigma_R$ of the collected returns, which helps in adjusting the step size.

iii) Normalization of the states.
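The sketch below implements one ARS update with the first two augmentations (top-b directions and return-std scaling); `rollout(M)` is a hypothetical function returning the episodic return of the linear policy $u = Mx$ on (normalized) states, and the hyperparameter values are illustrative only.

```python
import numpy as np

def ars_update(M, rollout, alpha=0.02, nu=0.03, N=16, b=8):
    deltas = [np.random.randn(*M.shape) for _ in range(N)]
    r_plus  = np.array([rollout(M + nu * d) for d in deltas])
    r_minus = np.array([rollout(M - nu * d) for d in deltas])
    # i) keep only the b best-performing directions
    order = np.argsort(-np.maximum(r_plus, r_minus))[:b]
    # ii) scale the step by the standard deviation of the returns actually used
    sigma_R = np.concatenate([r_plus[order], r_minus[order]]).std() + 1e-8
    grad = sum((r_plus[i] - r_minus[i]) * deltas[i] for i in order) / b
    return M + alpha / sigma_R * grad
```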

Accelerating ARS

Most practical implementations accelerate Stochastic Gradient Descent using optimizers such as Adam [7]. With ARS, however, the gradient is only estimated by random search and no acceleration technique is used. We therefore define an acceleration-based gradient estimate for ARS for faster convergence; validating this approach is left to future work. The modified ARS procedure, summarized in Algorithm 1, uses a small and a large step size.

Algorithm 1: Accelerated ARS

2.4.2 Soft Actor Critic

Soft Actor-Critic (SAC) [4] is an off-policy model-free RL algorithm based on the principle of entropy maximization, where the entropy of the policy is added to the reward. It uses soft policy iteration for policy evaluation and improvement. It maintains two Q-value functions to mitigate the positive bias of value-based methods, and the minimum of the two Q-functions is used for the value and policy gradients; the two Q-functions also speed up training. It further uses target networks, whose weights are updated by an exponential moving average with smoothing constant $\tau$, to increase stability.
The SAC policy $\pi_\phi$ is updated using the loss function

$J_\pi(\phi) = \mathbb{E}_{x_t \sim \mathcal{D}}\Big[ \mathbb{E}_{u_t \sim \pi_\phi}\big[ \alpha \log \pi_\phi(u_t \mid x_t) - Q_\psi(x_t, u_t) \big] \Big],$

where $\mathcal{D}$, $V$ and $Q_\psi$ represent the replay buffer, the value function and the Q-function associated with the policy. The exploration by SAC helps in learning the underlying dynamics. In each gradient step the SAC parameters are updated using data from $\mathcal{D}$, with exponentially averaged copies of the networks used as targets.
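As a hedged sketch of this policy-update step, the snippet below computes the entropy-regularized policy loss with twin Q-functions; `policy`, `q1` and `q2` are assumed to be torch modules (the policy returning a reparameterized action and its log-probability), not the authors' implementation.

```python
import torch

def sac_policy_loss(policy, q1, q2, states, alpha=0.2):
    actions, log_pi = policy.sample(states)                       # reparameterized sample
    q_min = torch.min(q1(states, actions), q2(states, actions))   # min of twin Qs mitigates positive bias
    return (alpha * log_pi - q_min).mean()                        # entropy-regularized objective

# Usage (sketch): policy_opt.zero_grad(); sac_policy_loss(policy, q1, q2, batch_states).backward(); policy_opt.step()
```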

2.4.3 Online Learning for MPC

Online Learning (OL) makes a decision at each time step to optimise regret over time, while MPC also optimizes a finite $H$-step horizon cost at every time instant; the two are therefore closely related [27].

The proposed work is motivated by such an OL approach to MPC, which considers a generic algorithm, Dynamic Mirror Descent (DMD) MPC, a framework that captures different MPC algorithms. DMD is reminiscent of the proximal update, with a Bregman divergence acting as a regularization that keeps the current control distribution, parameterized by $\eta_t$ at time $t$, close to the previous one. The second step of DMD uses a shift model to anticipate the optimal decision for the next instant.

The DMD-MPC proposes to use the shifted previous solution, via the shift model, as an approximation to the current problem. The proposed methodology also aims to obtain an optimal policy for a finite-horizon problem considering $H$ steps into the future using DMD-MPC.
Denote the sequences of states and controls as $X_t = (x_t, x_{t+1}, \dots, x_{t+H})$ and $U_t = (u_t, u_{t+1}, \dots, u_{t+H-1})$, with $u_k \in \mathcal{U}$. The cost over $H$ steps is given by

(2.6)   $C(x_t, U_t) = \sum_{k=0}^{H-1} c(x_{t+k}, u_{t+k}) + c_H(x_{t+H}),$

where $c$ is the cost incurred (for the control problem) and $c_H$ is the terminal cost. Each of the $x_{t+k}$ are related by

(2.7)   $x_{t+k+1} = \hat f(x_{t+k}, u_{t+k}),$

with $\hat f$ being the estimate of $f$. We will use the short notation $X_t = \hat F(x_t, U_t)$ to represent (2.7). It will be shown later that, in the two-loop scheme, the terminal cost can be the value function obtained from the outer loop.
Now, by following the principle of DMD-MPC, for a rollout horizon of $H$, we sample the control tuple $U_t$ from a control distribution $\pi_{\eta_t}$ parameterized by $\eta_t$. To be more precise, $\eta_t$ is also a sequence of parameters, $\eta_t = (\eta_{t,0}, \dots, \eta_{t,H-1})$, which yields the control tuple $U_t = (u_t, \dots, u_{t+H-1})$. Therefore, given the control distribution parameter $\eta_{t-1}$ at round $t-1$, we obtain $\eta_t$ at round $t$ from the following update rule:

(2.8)   $\eta_t = \arg\min_{\eta}\; \gamma\, \big\langle \nabla_\eta J\big(\Phi_t(\eta_{t-1});\, x_t\big),\, \eta \big\rangle + D_\psi\big(\eta \,\|\, \Phi_t(\eta_{t-1})\big),$

where $J$ is the MPC objective/cost expressed in terms of $\eta$ and $x_t$, $\Phi_t$ is the shift model, $\gamma$ is the step size for the DMD, and $D_\psi$ is the Bregman divergence for a strictly convex function $\psi$.
Note that the shift model is critical for the convergence of this iterative procedure. Typically, this is ensured by making it dependent on the state $x_t$. In particular, for the proposed two-loop scheme, we make $\Phi_t$ dependent on the outer-loop policy $\pi_\theta$. Also note that the resulting parameter $\eta_t$ is still state-dependent, as the MPC objective depends on $x_t$.

With the two policies, $\pi_\theta$ and $\pi_{\eta_t}$, at time $t$, we aim to develop a synergy in order to leverage the learning capabilities of both. In particular, the ultimate goal is to learn them in "parallel", i.e., in the form of two loops: the outer loop optimizes $\theta$ and the inner loop optimizes $\eta_t$ for the MPC objective. We discuss this in more detail in Chapter 3.
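The sketch below illustrates one DMD-MPC round for a Gaussian control distribution with fixed covariance: the mean sequence is shifted via the outer-loop policy rolled on the learned model, sampled rollouts are scored, and the mirror step reduces to a convex combination of means. The exponentiated-cost weighting is used purely for illustration (MPPI-like); `policy_mean`, `model_step`, `cost` and `terminal_cost` are placeholders, not the authors' code.

```python
import numpy as np

def dmd_mpc_step(x, policy_mean, model_step, cost, terminal_cost,
                 H, act_dim, gamma=0.5, sigma=0.2, n_samples=64):
    # Shift model Phi: roll the outer-loop policy forward on the learned model.
    mu = np.zeros((H, act_dim)); x_k = x
    for k in range(H):
        mu[k] = policy_mean(x_k)
        x_k = model_step(x_k, mu[k])
    # Sample control sequences around the shifted mean and score them on the model.
    U = mu[None] + sigma * np.random.randn(n_samples, H, act_dim)
    costs = np.zeros(n_samples)
    for i in range(n_samples):
        x_k = x
        for k in range(H):
            costs[i] += cost(x_k, U[i, k])
            x_k = model_step(x_k, U[i, k])
        costs[i] += terminal_cost(x_k)
    # Cost-weighted mean gives the update direction; with a KL Bregman divergence
    # the mirror step amounts to a convex combination of the two mean sequences.
    w = np.exp(-(costs - costs.min())); w /= w.sum()
    u_hat = (w[:, None, None] * U).sum(axis=0)
    return (1.0 - gamma) * mu + gamma * u_hat
```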

3.1 Generalised Framework: DMD MPC & RL

In classical Mf-RL, data from interactions with the original environment are used to obtain the optimal policy parameterized by $\theta$. While the interactions of the policy are stored in a memory buffer $\mathcal{D}_{env}$ for offline batch updates, they are also used to optimize the parameters of the approximated dynamics model $\hat f$. Such an optimized policy can then be used in the DMD-MPC strategy to update the control distribution $\pi_{\eta_t}$. The controls sampled from this distribution are rolled out with the model $\hat f$ to collect new transitions, which are stored in a separate buffer $\mathcal{D}_{MPC}$. Finally, we update $\theta$ using the data from both buffers via one of the off-policy approaches (e.g. DDPG [12], SAC [4]); in this work, we demonstrate this using Soft Actor-Critic (SAC) [4]. This gives a generalised hierarchical framework with two loops: Dynamic Mirror Descent (DMD) based Model Predictive Control (MPC) forming the inner loop and model-free RL the outer loop. Graphical representations of model-free RL, model-based RL and the described framework are given in Figure 3.1, Figure 3.2 and Figure 3.3.

There are two salient features in the two-loop approach:

  • At round $t$, we obtain the shift model $\Phi_t$ by using the outer-loop parameter $\theta$. This is in stark contrast to the classical DMD-MPC method in [27], wherein the shift operator only depends on the control parameter of the previous round, $\eta_{t-1}$.

  • Inspired by [13, 15], the terminal cost is the value of the terminal state of the finite-horizon problem as estimated by the value function $V$ associated with the outer-loop policy $\pi_\theta$. This efficiently utilises the model learned via the RL interactions and in turn optimizes the updated setup.

Figure 3.1: The Model Free Reinforcement Learning
Figure 3.2: Model Based Reinforcement Learning.
Figure 3.3: The proposed hierarchical structure of Dynamic-Mirror Descent Model-Predictive Reinforcement Learning (DeMoRL) with an inner loop DMD-MPC update and an outer loop RL update.

Since there is limited literature on theoretical guarantees for DRL algorithms, it is difficult to show convergence and regret bounds for the proposed two-loop approach. However, there are guarantees on regret bounds for dynamic mirror descent in the context of online learning [6]. We restate them here in our notation for ease of understanding, reusing the following definitions: the dynamic regret with respect to a comparator sequence $\eta^\ast_{1:T}$,

$\mathrm{Reg}_T(\eta^\ast_{1:T}) = \sum_{t=1}^{T} J_t(\eta_t) - \sum_{t=1}^{T} J_t(\eta^\ast_t),$

and the variation of the comparator sequence with respect to the shift model,

$V_\Phi(\eta^\ast_{1:T}) = \sum_{t=1}^{T-1} \big\| \eta^\ast_{t+1} - \Phi_t(\eta^\ast_t) \big\|.$

By a slight abuse of notation, we have omitted $x_t$ in the arguments of $J_t$. We have the following:

Lemma 3.1

Let the sequence $\{\eta_t\}$ be generated by the DMD update of Section 2.4.3, and let $\eta^\ast_{1:T}$ be any feasible comparator sequence. Then, for the class of convex MPC objectives $J_t$, the regret $\mathrm{Reg}_T(\eta^\ast_{1:T})$ is bounded in terms of the diameter of the decision set, the variation $V_\Phi(\eta^\ast_{1:T})$ of the comparator with respect to the shift model, and the step sizes; the exact expression is given in [6].

Theorem 3.1

Given the shift operator $\Phi_t$ that depends on the outer-loop policy parameterised by $\theta$ at state $x_t$, the Dynamic Mirror Descent (DMD) algorithm with a diminishing step-size sequence $\gamma_t \propto 1/\sqrt{t}$ gives the overall regret with respect to the comparator sequence $\eta^\ast_{1:T}$ as

(3.1)   $\mathrm{Reg}_T(\eta^\ast_{1:T}) = O\big( \sqrt{T}\, \big( 1 + V_\Phi(\eta^\ast_{1:T}) \big) \big).$

Based on such a formulation, the regret bound is sublinear in $T$ whenever the comparator sequence varies slowly with respect to the shift model.

Proofs of both Lemma 3.1 and Theorem 3.1 are given in [6]. Theorem 3.1 shows that the regret remains bounded when the shift operator depends on the outer-loop policy. However, this result is not guaranteed for non-convex objectives, which will be a subject of future work.

Having described the main methodology, we will now study a widely used family of control distributions that can be used in the inner loop, the exponential family.

Exponential family of control distributions

We consider a parametric set of control distributions from the exponential family, with natural parameters $\eta$, sufficient statistic $g(U)$ and expectation parameters $\mu = \mathbb{E}_{U \sim \pi_\eta}[g(U)]$ [27]. Further, we set the Bregman divergence in the DMD update of Section 2.4.3 to the KL divergence between the corresponding control distributions.

After employing the KL divergence, our update rule becomes:

(3.2)   $\eta_t = \arg\min_{\eta}\; \gamma\, \big\langle \nabla_{\tilde\eta_t} J(\tilde\eta_t;\, x_t),\, \eta \big\rangle + \mathrm{KL}\big(\pi_{\eta} \,\|\, \pi_{\tilde\eta_t}\big), \qquad \tilde\eta_t = \Phi_t(\eta_{t-1}).$

The shifted natural parameter $\tilde\eta_t$ of the control distribution is obtained through the proposed shift model from the outer-loop RL policy, by setting the expectation parameter of $\pi_{\tilde\eta_t}$ to $\tilde\mu_t = \pi_\theta(X_t)$. Note that we have overloaded the notation $\pi_\theta$ to map the state sequence $X_t$ to the sequence of policy actions $(\pi_\theta(x_t), \dots, \pi_\theta(x_{t+H-1}))$ (if the policy is stochastic, the mean actions are used). This is similar to the control choices made in [15, Algorithm 2, Line 4]. Then, we have the following gradient of the cost:

(3.3)   $\nabla_{\tilde\eta_t} J(\tilde\eta_t;\, x_t) = \mathbb{E}_{U_t \sim \pi_{\tilde\eta_t}}\big[ C(x_t, U_t)\, \big( g(U_t) - \tilde\mu_t \big) \big],$

where $g$ is the sufficient statistic; for our experiments we choose a Gaussian distribution for the controls and $g(U_t) = U_t$. We finally have the following update rule for the expectation parameter [27]:

(3.4)   $\mu_t = (1 - \gamma)\, \tilde\mu_t + \gamma\, \hat u_t,$

where $\hat u_t$ is the objective-dependent estimate computed from the sampled rollouts, e.g. a weighted mean of the sampled control sequences.

Based on the data collected in the outer loop, the inner loop is executed via DMD-MPC as follows:

  • The shift parameter $\tilde\eta_t$ is obtained by using the outer-loop parameter $\theta$. Considering the $H$-step horizon, for $k = 0, \dots, H-1$, obtain

    (3.5)   $\tilde\mu_{t+k} = \pi_\theta(\hat x_{t+k}),$
    (3.6)   $u_{t+k} \sim \mathcal{N}(\tilde\mu_{t+k}, \Sigma),$
    (3.7)   $\hat x_{t+k+1} = \hat f(\hat x_{t+k}, u_{t+k}),$

    where $\Sigma$ represents the covariance of the control distribution and $\hat x_t = x_t$.

  • Collect the sampled control sequences and their costs, and apply the DMD-MPC update to obtain $\eta_t$. A sketch of this rollout generation follows.
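The sketch below follows the rollout generation just described: controls are sampled around the outer-loop policy mean, propagated through the learned model, and scored with the running cost plus the outer-loop value function as terminal cost. Treating the terminal cost as the negative value, and the callables `policy_mean`, `f_hat`, `cost` and `value_fn`, are assumptions for illustration.

```python
import numpy as np

def sample_rollout(x0, policy_mean, f_hat, cost, value_fn, H, Sigma):
    x, total_cost, controls = x0, 0.0, []
    for k in range(H):
        mu_k = policy_mean(x)                              # (3.5): shift from pi_theta
        u_k = np.random.multivariate_normal(mu_k, Sigma)   # (3.6): u ~ N(mu_k, Sigma)
        total_cost += cost(x, u_k)
        x = f_hat(x, u_k)                                  # (3.7): learned dynamics
        controls.append(u_k)
    total_cost += -value_fn(x)                             # terminal cost from the outer-loop value
    return np.stack(controls), total_cost
```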

MPC objective formulations

Similar to the exponential family, we can use different types of MPC objectives. Specifically, we use the method of elite fractions, which selects only the best transitions:

(3.8)   $J(\tilde\eta_t;\, x_t) = \mathbb{E}_{U_t \sim \pi_{\tilde\eta_t}}\big[ \mathbf{1}\{ C(x_t, U_t) \le C_e \} \big],$

where $C_e$ is the cost threshold of the top elite fraction among the sampled rollouts. Alternative formulations are also possible; specifically, the objective used by the MPPI method in [28] is obtained by setting $\gamma = 1$ in (3.4) and, for some temperature $\lambda > 0$, the objective

(3.9)   $J(\tilde\eta_t;\, x_t) = -\log\, \mathbb{E}_{U_t \sim \pi_{\tilde\eta_t}}\Big[ \exp\big( -\tfrac{1}{\lambda}\, C(x_t, U_t) \big) \Big].$

This shows that our formulation is more generic, and some of the existing approaches [15, 1, 8] can be derived with suitable choices. Table 3.1 shows the specific DMD-MPC algorithm and the corresponding shift operator used in each case; a sketch contrasting the two objectives follows the table.

Mb-Mf Algorithm | RL | DMD-MPC | Shift Operator
MoPAC | SAC | MPPI | Obtained from Mf-RL policy
TOPDM | TD3 | MPPI with CEM | Left shift (obtained from the previous iterate)
DeMoRL | SAC | CEM | Obtained from Mf-RL policy
Table 3.1: Mb-Mf algorithms as special cases of our generalised framework
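To make the distinction between the two objectives concrete, the sketch below computes the per-rollout weights implied by each on a batch of rollout costs: the CEM/elite-fraction objective keeps only the top fraction, while the MPPI objective weights every rollout by an exponentiated negative cost. This is an illustrative reading of (3.8) and (3.9), not the authors' implementation.

```python
import numpy as np

def cem_weights(costs, elite_frac=0.1):
    n_elite = max(1, int(elite_frac * len(costs)))
    threshold = np.sort(costs)[n_elite - 1]
    w = (costs <= threshold).astype(float)      # indicator of the elite set
    return w / w.sum()

def mppi_weights(costs, lam=1.0):
    w = np.exp(-(costs - costs.min()) / lam)    # exponentiated utility, all rollouts
    return w / w.sum()
```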

3.2 DeMo RL Algorithm

The DeMoRL algorithm borrows from other Mb-Mf methods for learning the dynamics and follows a similar ensemble dynamics-model approach. It is shown in Algorithm 2 and has three parts: model learning, Soft Actor-Critic and DMD-MPC. We describe them below.

Model learning. The dynamics of the system are approximated by an ensemble of probabilistic deep neural networks [9], cumulatively represented as $\hat f$. Such a configuration is believed to account for the epistemic uncertainty of complex dynamics and overcomes the over-fitting problem generally encountered when using single models [2].

SAC. Our implementation of the proposed algorithm uses Soft Actor-Critic (SAC) [4] as the model-free RL counterpart. Based on the principle of entropy maximization, the choice of SAC ensures sufficient exploration, motivated by the soft-policy updates, resulting in a good approximation of the underlying dynamics.

DMD-MPC. Here, we solve for $\mu_t$ using a Monte-Carlo estimation approach. For a horizon length of $H$, we collect $M$ trajectories using the current policy and the more accurate dynamics models from the ensemble (those with lower validation losses). For all trajectories, the complete cost is calculated using a deterministic reward estimate and the value function, through (2.6). After obtaining the complete state-action-reward $H$-step trajectories, we execute the following steps based on the CEM [21] strategy:

  • Choose the elite trajectories according to the total $H$-step cost incurred. We fix the elite fraction for our experiments, and denote the chosen action trajectories and costs as $U^e$ and $C^e$ respectively. Note that we have also tested other values of the elite fraction, and the ablations are shown in the Appendix attached as supplementary material.

  • Using $U^e$ and $C^e$, we calculate $\hat u_{t+k}$ as the reward-weighted mean of the elite actions, i.e.

    (3.10)   $\hat u_{t+k} = \frac{\sum_{i \in \mathcal{E}} w_i\, u^i_{t+k}}{\sum_{i \in \mathcal{E}} w_i},$

    where $\mathcal{E}$ is the elite set and the weights $w_i$ are derived from the corresponding returns.

  • Finally, we update the current policy actions according to (3.4) as

    (3.11)   $\mu_{t+k} = (1 - \gamma)\, \tilde\mu_{t+k} + \gamma\, \hat u_{t+k}.$
1:  Initialize SAC policy $\pi_\theta$ and critics, ensemble model $\hat f$, and buffers $\mathcal{D}_{env}$, $\mathcal{D}_{MPC}$
2:  for each iteration do
3:     Collect environment transitions with $\pi_\theta$ and add them to $\mathcal{D}_{env}$
4:     for each model learning epoch do
5:        Train the ensemble model $\hat f$ on $\mathcal{D}_{env}$
6:     end for
7:     for each DMD-MPC iteration do
8:        Obtain the shifted mean sequence $\tilde\mu$ from $\pi_\theta$
9:        Simulate M trajectories of H steps horizon: (3.5), (3.6) and (3.7)
10:        Update the control sequence using (3.10) and (3.11)
11:        Add the resulting model transitions to $\mathcal{D}_{MPC}$
12:     end for
13:     for each gradient update step do
14:        Update SAC parameters using data from $\mathcal{D}_{env}$ and $\mathcal{D}_{MPC}$
15:     end for
16:  end for
Algorithm 2: DeMoRL Algorithm
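The sketch below illustrates one inner DMD-MPC iteration of Algorithm 2: pick elite rollouts, form the weighted mean action sequence as in (3.10), blend it with the shifted policy mean as in (3.11), and store the model transitions in the MPC buffer. `rollouts` is assumed to be a list of (controls, transitions, cost) tuples from the learned model, and the exponentiated-cost weights are one possible reading of "reward-weighted"; names are illustrative, not the authors' code.

```python
import numpy as np

def demorl_inner_update(rollouts, mu_shift, gamma=0.5, elite_frac=0.1, mpc_buffer=None):
    costs = np.array([r[2] for r in rollouts])
    n_elite = max(1, int(elite_frac * len(rollouts)))
    elite_idx = np.argsort(costs)[:n_elite]                   # lowest-cost rollouts
    elite_U = np.stack([rollouts[i][0] for i in elite_idx])   # (n_elite, H, act_dim)
    w = np.exp(-(costs[elite_idx] - costs[elite_idx].min()))
    w /= w.sum()
    u_hat = (w[:, None, None] * elite_U).sum(axis=0)          # weighted mean, cf. (3.10)
    mu_new = (1.0 - gamma) * mu_shift + gamma * u_hat         # convex combination, cf. (3.11)
    if mpc_buffer is not None:                                # enrich data for the outer loop
        for i in elite_idx:
            mpc_buffer.extend(rollouts[i][1])                 # model transitions
    return mu_new
```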

3.3 DeMo Layer

We consider the special case of the above generalised framework where the outer-loop RL policy is not updated and is already trained to convergence. At a given state, the RL policy gives a distribution over actions. With the shift model obtained from the trained RL policy, the DMD-MPC update now has a fixed shift model:

(3.12)   $\eta_t = \arg\min_{\eta}\; \gamma\, \big\langle \nabla_{\tilde\eta_t} J(\tilde\eta_t;\, x_t),\, \eta \big\rangle + \mathrm{KL}\big(\pi_{\eta} \,\|\, \pi_{\tilde\eta_t}\big),$

where the expectation parameter of $\pi_{\tilde\eta_t}$ is obtained from the trained RL policy by setting $\tilde\mu_t = \pi_\theta(X_t)$.

The updated policy is optimal both in terms of the long-term expected reward and the short-term horizon-based cost. Following the derivations of the previous section, we have the closed-form expression for the action with a Gaussian control distribution:

$\mu_t = (1 - \gamma)\, \tilde\mu_t + \gamma\, \hat u_t.$

This gives an action which is a convex combination of the RL action and the actions that perform well according to the current cost.
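A minimal sketch of this blending at deployment time is given below; `pi_theta` and `elite_action_from_mpc` are hypothetical callables standing in for the frozen RL policy and the short-horizon DMD-MPC correction computed on the learned model.

```python
def demo_layer_action(x, pi_theta, elite_action_from_mpc, gamma=0.3):
    u_rl = pi_theta(x)                       # action from the frozen RL policy (shift model)
    u_mpc = elite_action_from_mpc(x, u_rl)   # short-horizon, cost-driven correction
    return (1.0 - gamma) * u_rl + gamma * u_mpc
```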

Figure 3.4: The proposed DeMo Layer with an inner loop DMD-MPC update to guide outer loop RL.

We sample an action from the updated policy and apply it to the real environment, unlike in the previous case, thus guiding the RL policy end to end. A graphical representation of the DeMo Layer framework is given in Figure 3.4. We describe the three parts of the DeMo Layer here: model learning, ARS and DMD-MPC.

Model learning:

Stoch: the model dynamics is learnt using feed-forward neural networks [16].

Cartpole: we used the model given in OpenAI Gym, with a biased length for the MPC.

ARS: the policy is a linear deterministic policy, implemented with the modification in Algorithm 1 for faster convergence.

DMD-MPC: we use the same strategy as described for DeMoRL.

4.1 DeMo RL: Results and Comparison

Several experiments were conducted on the MuJoCo [25] continuous control tasks from the OpenAI Gym benchmark, and the performance was compared with the recent related works MoPAC [15] and MBPO [8]. First, we discuss the hyperparameters used for all our experiments and then the performance achieved in the conducted experiments.

Figure 4.1: Reward performance of the DeMoRL algorithm compared to other model-based algorithms (MoPAC and MBPO) on (a) HalfCheetah-v2, (b) Ant-v2, (c) Hopper-v2 and (d) InvertedDoublePendulum-v2.

As the baseline of our framework is built upon the MBPO implementation, we use the same hyperparameters for our experiments and for both algorithms. We compare results over three different seeds, and the reward performance plots are shown in Figure 4.1. For the inner DMD-MPC loop we use a constant horizon length and a fixed number of trajectory rollouts; with the chosen elite fraction, the updated model-based transitions are added to the MPC buffer. This process is iterated with a fixed batch size, completing the DMD-MPC block in Algorithm 2. Following MoPAC and MBPO, the number of interactions with the true environment for SAC was kept constant in each epoch.

For HalfCheetah-v2, Hopper-v2 and InvertedDoublePendulum-v2, we clearly note accelerated progress in the reward performance curve, whereas in Ant-v2 our rewards were comparable with MoPAC but still significantly better than MBPO. Our final rewards eventually match those achieved by MoPAC and MBPO, but the progress rate is faster in all our experiments and requires fewer true environment interactions. Furthermore, all experiments were conducted with the same set of hyperparameters; tuning them individually might give better insights. Table 4.1 shows the empirical analysis of the acceleration achieved by DeMoRL.

Environment | Algorithm | 20 epochs | 40 epochs | 60 epochs
HalfCheetah-v2 | DeMoRL | 7333 | 10691 | 11037
HalfCheetah-v2 | MoPAC | 4978 | 8912 | 10212
HalfCheetah-v2 | MBPO | 7265 | 9461 | 10578
Ant-v2 | DeMoRL | 984.0 | 2278.4 | 3845.5
Ant-v2 | MoPAC | 593.6 | 2337.3 | 3649.5
Ant-v2 | MBPO | 907.5 | 1275.6 | 1891.9
Hopper-v2 | DeMoRL | 3077.3 | 3077.5 | 3352.4
Hopper-v2 | MoPAC | 789.9 | 3137.9 | 3270.2
Hopper-v2 | MBPO | 813.9 | 2683.5 | 3229.9
Table 4.1: Mean reward performance of DeMoRL, MoPAC and MBPO after 20, 40 and 60 epochs

Here, we not only show a generic formulation of the DMD-MPC, but also demonstrate how new types of objectives can be obtained and further improvements can be made. As shown in the table, we perform better than or at least as good as MoPAC, which uses information-theoretic model predictive path integrals (i-MPPI) [28], a special case of our setup as shown in (3.9). The MPPI formulation uses all the rollouts to calculate the action sequence, while CEM uses only the elite rollouts, which contributes to the accelerated progress.

Here we show a detailed study of the elite percentage. Referring to the previous steps, after obtaining the complete state-action-reward $H$-step trajectories, we execute the three CEM steps described in Section 3.2.

Figure 4.2: Ablation study for elite percentage: Reward performance curve (left) and Acceleration analysis as epochs to reach 10000 rewards (right)

Given the sequence of controls $U_t$, we collect the resulting trajectory and add it to our buffer. Therefore, the quality of $U_t$ is a significant factor affecting the quality of data used for the outer-loop RL policy. Since the selection strategy is CEM, this quality depends on the choice of the elite fraction $e$. We perform an ablation study over several values of $e$ on the HalfCheetah-v2 OpenAI Gym environment. The analysis is based on the reward performance curves shown in Fig. 4.2 (left). Additionally, we consider the number of epochs required to reach a certain level of performance as a good metric of the acceleration achieved; this analysis is provided in Fig. 4.2 (right). We make the following observations:

  • A smaller value of $e$ ensures that the learned dynamics is exploited the most, but decreases the exploration performed in the approximated environment.

  • A higher value of $e$, on the other hand, leads to more exploration using a "not-so-perfect" policy and dynamics.

Thus, the elite fraction balances between exploration and exploitation.

4.2 DeMo Layer: Results for Cartpole and Stoch

We have conducted experiments on two different environments.
Cartpole: a linear policy is trained on Cartpole using ARS; no linear policy can achieve swing-up and balance on its own. We show that the DeMo Layer can guide the linear policy to achieve swing-up and balance on Cartpole.
Stoch: Stoch2, a quadruped robot, is trained using the linear-policy approach given in [19]. With a neural network approximating the model dynamics, the DeMo Layer is implemented on Stoch2 to learn robust walking for an episode length of 500.

Environment | Linear Policy | Linear Policy with DeMo Layer
Cartpole | 1400 | 1700
Stoch2 | 1500 | 1850
Table 4.2: Reward performance of the linear policy with and without the DeMo Layer

Environment | Horizon | Sampled Trajectories
Cartpole | 120 | 90
Stoch2 | 20 | 200
Table 4.3: Hyperparameters used for the DeMo Layer

The simulation results for Cartpole and Stoch can be found at https://github.com/soumyarani/End-to-End-Guided-RL-using-Online-Learning

References