Value-Based Reinforcement Learning for Continuous Control Robotic Manipulation in Multi-Task Sparse Reward Settings

07/28/2021
by Sreehari Rammohan, et al.

Learning continuous control in high-dimensional sparse reward settings, such as robotic manipulation, is a challenging problem due to the number of samples often required to obtain accurate optimal value and policy estimates. While many deep reinforcement learning methods have aimed at improving sample efficiency through replay or improved exploration techniques, state-of-the-art actor-critic and policy gradient methods still suffer from the hard exploration problem in sparse reward settings. Motivated by recent successes of value-based methods for approximating state-action values, like RBF-DQN, we explore the potential of value-based reinforcement learning for learning continuous robotic manipulation tasks in multi-task sparse reward settings. On robotic manipulation tasks, we empirically show that RBF-DQN converges faster than current state-of-the-art algorithms such as TD3, SAC, and PPO. We also perform ablation studies with RBF-DQN and show that some enhancement techniques for vanilla deep Q-learning, such as Hindsight Experience Replay (HER) and Prioritized Experience Replay (PER), can also be applied to RBF-DQN. Our experimental analysis suggests that value-based approaches may be more sensitive to data augmentation and replay buffer sampling techniques than policy-gradient methods, and that the benefits of these methods for robot manipulation are heavily dependent on the transition dynamics of generated subgoal states.


I Introduction

Current RL algorithms aimed at robot manipulation are either policy gradient methods, such as PPO [13], or actor-critic methods, such as DDPG [8], TD3 [4], or SAC [5]. These methods have a stable learning process because they directly optimize policy parameters based on expected return, but they still suffer from sample-inefficient function approximation when compared to value-based optimization approaches. While value-based approaches were previously either limited in their function approximation capability or restricted to discrete action spaces, recent work [2] has developed novel neural network architectures that enable general function approximation of continuous state-action value functions using only the Bellman error.

In this paper we evaluate RBF-DQN [2], an action-value-based method inspired by Q-networks, on multiple RLBench [6] tasks in continuous state and action spaces. RLBench provides a challenging testbed for evaluating reinforcement learning algorithms in multi-task robot settings because it only provides sparse rewards for completing long-horizon tasks. We conduct experiments on 5 different tasks (Fetch Reach, Button Push, Toilet Down, Open Drawer, and Pick and Place), and investigate how value-based approaches (like RBF-DQN) compare to state-of-the-art actor-critic and policy-gradient methods (like TD3, PPO, and SAC), and how their performance is impacted by typical data augmentation and replay buffer sampling techniques. To the best of our knowledge, this is the first comparison of value-based approaches against actor-critic methods for continuous robot control in sparse reward multi-task settings.

Fig. 1: Pick and Lift RLBench Task. The agent is tasked with picking up the red block and moving it to the point in space represented by the red sphere.

II Background and Related Work

Reinforcement learning is the study of maximizing an agent’s long-term discounted reward through interactions with an environment [15]. It is commonly modeled as a Markov Decision Process (MDP) [10], defined by the tuple $(\mathcal{S}, \mathcal{A}, T, R, \gamma)$. In robotic manipulation domains, $\mathcal{S}$ denotes the continuous state space, $\mathcal{A}$ represents the continuous action space, $T(s' \mid s, a)$ is the transition model, and $R(s, a)$ is the reward model. The discount factor, $\gamma \in [0, 1)$, determines the importance of immediate rewards compared with future rewards. The action-value function $Q^{\pi}(s, a)$, with $s \in \mathcal{S}$ and $a \in \mathcal{A}$, is defined as the maximum expected return achievable by following a particular policy $\pi$ [9], after seeing some state $s$ and then taking some action $a$. The optimal action-value function $Q^{*}(s, a)$ corresponds with the optimal policy $\pi^{*}$, and follows an important identity known as the Bellman equation [3]:

$$Q^{*}(s, a) = \mathbb{E}_{s' \sim T(\cdot \mid s, a)}\Big[ R(s, a) + \gamma \max_{a'} Q^{*}(s', a') \Big] \qquad (1)$$

II-A Q-Learning

When the reward function and transition model are known, the optimal $Q^{*}$ can easily be found using standard dynamic programming algorithms such as value iteration. However, if the model dynamics are not known, RL algorithms need to find $Q^{*}$ by interacting with the environment, without learning an explicit model. One notable example of these model-free algorithms is Q-learning [18], which approximates $Q^{*}$ using an estimator $\hat{Q}(s, a; w)$ that depends on parameters $w$. The parameters can be updated iteratively through gradient descent using these estimates (often stabilized with a target network with parameters $w^{-}$, as in DQN):

$$w \leftarrow w + \alpha \Big[ r + \gamma \max_{a'} \hat{Q}(s', a'; w^{-}) - \hat{Q}(s, a; w) \Big] \nabla_{w} \hat{Q}(s, a; w) \qquad (2)$$
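As a concrete illustration, the sketch below performs a single TD update of this form for a generic Q-network with a frozen target network (parameters $w^{-}$), in the style of DQN; the network, optimizer, and batch layout are placeholders rather than the implementation evaluated in this paper.

```python
import torch
import torch.nn.functional as F

def q_learning_step(q_net, target_net, optimizer, batch, gamma=0.99):
    """One TD update on a batch of (s, a, r, s', done) transitions.

    A minimal sketch: q_net maps states to Q-values for a discrete action
    set; continuous-action methods such as RBF-DQN instead maximize over
    learned centroids (see Section II-B).
    """
    s, a, r, s_next, done = batch  # a: int64 [B]; r, done: float [B]

    # Q(s, a; w) for the actions actually taken
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)

    # Bootstrapped target r + gamma * max_a' Q(s', a'; w^-), no gradient
    with torch.no_grad():
        target = r + gamma * (1.0 - done) * target_net(s_next).max(dim=1).values

    loss = F.mse_loss(q_sa, target)  # squared Bellman error
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```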

II-B RBF-DQN

Introduced by Asadi et al. [2], RBF-DQN is a value-based method that can efficiently approximate $Q^{*}(s, a)$ using a set of radial basis functions, and simultaneously approximates the action-maximizing Q-value with bounded error. Specifically, RBF-DQN approximates $Q^{*}$ by optimizing $N$ centroid locations $a_i(s; \theta)$ and centroid values $v_i(s; \theta)$, both functions of the state and of learned parameters $\theta$, with the following equation [2]:

$$\hat{Q}_{\beta}(s, a; \theta) = \frac{\sum_{i=1}^{N} e^{-\beta \lVert a - a_i(s;\theta) \rVert}\, v_i(s;\theta)}{\sum_{j=1}^{N} e^{-\beta \lVert a - a_j(s;\theta) \rVert}} \qquad (3)$$

During training, both the centroid locations $a_i(s; \theta)$ and the state-dependent centroid values $v_i(s; \theta)$ are learned; these are then used during forward propagation to form the Q-function output [2]. In multi-dimensional action spaces, the temperature parameter $\beta$ can be tuned so that maximizing over the centroids incurs a bounded error [2]:

$$\max_{a \in \mathcal{A}} \hat{Q}_{\beta}(s, a; \theta) - \max_{i} \hat{Q}_{\beta}\big(s, a_i(s; \theta); \theta\big) \le \epsilon \qquad (4)$$

for any desired $\epsilon > 0$, given a sufficiently large $\beta$.

RBF-DQN is powerful as a value-based method in continuous action spaces because of its action-maximization property and because it is a universal function approximator [2]. In Q-learning, the update rule (2) relies on finding $\max_{a'} \hat{Q}(s', a'; w)$. This is prohibitively expensive in continuous action spaces, due to an effectively infinite search space, and tricks like discretizing the action space may produce sub-optimal solutions. The action-maximization property of RBF-DQN, however, guarantees that all critical points of $\hat{Q}_{\beta}(s, \cdot\,; \theta)$ can be well approximated by a centroid location [2]. This makes action maximization as simple as searching over the finite set of centroids $\{a_i(s; \theta)\}_{i=1}^{N}$.
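As an illustration, the following minimal sketch implements Eq. (3) and centroid-based action selection for a batch of states; the tensor shapes and the use of a plain softmax over negative distances are our own simplifications, not the reference implementation of [2].

```python
import torch

def rbf_q_values(centroid_locs, centroid_vals, actions, beta=1.0):
    """Normalized-RBF Q estimate, as in Eq. (3).

    centroid_locs: [B, N, action_dim]  centroid locations a_i(s)
    centroid_vals: [B, N]              centroid values v_i(s)
    actions:       [B, action_dim]     query actions
    """
    dists = torch.norm(actions.unsqueeze(1) - centroid_locs, dim=-1)  # [B, N]
    weights = torch.softmax(-beta * dists, dim=-1)                    # RBF weights
    return (weights * centroid_vals).sum(dim=-1)                      # [B]

def greedy_action(centroid_locs, centroid_vals, beta=1.0):
    """Approximate argmax_a Q(s, a) by searching only over the centroids."""
    B, N, _ = centroid_locs.shape
    # Evaluate Q at every centroid location and keep the best one per state.
    q_at_centroids = torch.stack(
        [rbf_q_values(centroid_locs, centroid_vals, centroid_locs[:, i], beta)
         for i in range(N)], dim=1)                                   # [B, N]
    best = q_at_centroids.argmax(dim=1)                               # [B]
    return centroid_locs[torch.arange(B), best]                       # [B, action_dim]

# Example with random centroids: batch of 2 states, 5 centroids, 3-D actions
locs, vals = torch.randn(2, 5, 3), torch.randn(2, 5)
a_star = greedy_action(locs, vals, beta=10.0)  # shape [2, 3]
```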

II-C HER and PER

Most robotic manipulation tasks fall under the sparse reward setting, which makes training an RL agent extremely challenging due to ineffective exploration, leading to high sample inefficiency. Hindsight Experience Replay (HER) [1] and Prioritized Experience Replay (PER) [12] are two methods that can be used to make better use of previously experienced states. As agents train, transition tuples $(s_t, a_t, r_t, s_{t+1}, g)$ are collected and stored in a replay buffer $\mathcal{D}$, where $g$ is the goal. These transitions come from trajectories generated by the agent’s policy during each episode, and they are stored in the replay buffer as a dataset of samples to train with.

In HER, after each training episode, both the original goal and potentially multiple hindsight goals are selected from the current trajectory according to a goal selection strategy, and these are stored in the replay buffer.

In PER [12], transitions are sampled from the replay buffer weighted by their TD or Bellman error, rather than being sampled uniformly. This conceptually means that the agent prioritizes transitions in the replay buffer which it finds surprising or unexpected.

In [12], the probability $P(i)$ of sampling a transition $i$ is based on the priority $p_i$ of the transition:

$$P(i) = \frac{p_i^{\alpha}}{\sum_{k} p_k^{\alpha}} \qquad (5)$$

The hyperparameter $\alpha$ determines the degree to which prioritization is used.
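For illustration, the sketch below implements Eq. (5) with a flat list-based buffer, using the magnitude of the TD error (plus a small constant) as the priority $p_i$; practical implementations typically add a sum-tree for efficiency and importance-sampling corrections, both omitted here.

```python
import numpy as np

class SimplePrioritizedBuffer:
    """Toy prioritized replay: P(i) proportional to p_i^alpha, p_i = |TD error| + eps."""

    def __init__(self, capacity, alpha=0.6, eps=1e-6):
        self.capacity, self.alpha, self.eps = capacity, alpha, eps
        self.storage, self.priorities = [], []

    def add(self, transition, td_error=1.0):
        if len(self.storage) >= self.capacity:   # drop the oldest transition when full
            self.storage.pop(0)
            self.priorities.pop(0)
        self.storage.append(transition)
        self.priorities.append(abs(td_error) + self.eps)

    def sample(self, batch_size):
        p = np.asarray(self.priorities) ** self.alpha
        probs = p / p.sum()                      # Eq. (5)
        idx = np.random.choice(len(self.storage), size=batch_size, p=probs)
        return [self.storage[i] for i in idx], idx

    def update_priorities(self, idx, td_errors):
        # Refresh priorities after the sampled transitions have been re-evaluated.
        for i, err in zip(idx, td_errors):
            self.priorities[i] = abs(err) + self.eps
```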

III Technical Approach

Our aim is to demonstrate RBF-DQN’s efficacy on robot manipulation tasks. We applied RBF-DQN to a variety of simulated robotic manipulation tasks under sparse rewards to investigate its performance, and also examined how combining RBF-DQN with HER and PER impacts that performance.

III-A RLBench

RLBench is a robot learning simulator with many realistic and challenging tasks involving a Franka Panda Arm, such as Fetch Reach, Open Door, and Close Toilet [6]. One aim of RLBench is to provide a standardized suite of tasks for benchmarking the performance of RL strategies.

III-B Goal Selection and Detection

For our robotic manipulation tasks, we utilized two hindsight goal-selection strategies with HER. A simple strategy, known as final, passes the last state of a trajectory into a function $m(\cdot)$ that maps states to goals. We also considered a strategy called future, which treats states occurring later in the trajectory, relative to a given timestep $t$, as goals.

In our ablation studies combining RBF-DQN with HER, we use both the final and future strategies, so that not only the final state but also later states in the trajectory (relative to a given timestep $t$) are used as hindsight goals.
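As an illustrative sketch (not our exact relabeling code), the final and future strategies can be expressed as follows, with `state_to_goal` standing in for the task-specific state-to-goal mapping $m$ described below.

```python
import random

def hindsight_goals(trajectory, state_to_goal, strategy="future", k=4, t=0):
    """Select hindsight goals from a recorded trajectory of states.

    trajectory:    list of states visited in one episode
    state_to_goal: task-specific mapping m(s) from a state to a goal
    strategy:      "final" uses the last state; "future" samples up to k
                   states that occur after timestep t in the same trajectory
    """
    if strategy == "final":
        return [state_to_goal(trajectory[-1])]
    if strategy == "future":
        future_states = trajectory[t + 1:]
        picks = random.sample(future_states, min(k, len(future_states)))
        return [state_to_goal(s) for s in picks]
    raise ValueError(f"unknown strategy: {strategy}")
```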

The specifics of $m$ depend largely on the manipulation task being performed. For Fetch Reach, $m$ takes the state as input and returns the position of the end effector, but for a task like Open Drawer, $m$ returns the state of the prismatic joint representing how open or closed the drawer is.

Finally, we determine whether a goal was achieved by checking whether the norm of the difference between the achieved and desired goals is less than some threshold $\epsilon$ (a fixed value in our experiments): $\lVert g_{\text{achieved}} - g_{\text{desired}} \rVert < \epsilon$.
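Assuming goals are represented as vectors, this success test and the sparse reward derived from it amount to the following sketch; the function names and the use of the L2 norm are illustrative.

```python
import numpy as np

def goal_achieved(achieved_goal, desired_goal, epsilon):
    """Sparse success test: the goal counts as reached when the achieved
    and desired goals are within epsilon of each other (L2 norm)."""
    diff = np.asarray(achieved_goal) - np.asarray(desired_goal)
    return np.linalg.norm(diff) < epsilon

def sparse_reward(achieved_goal, desired_goal, epsilon):
    """Reward of 1 on success and 0 at all other time steps."""
    return 1.0 if goal_achieved(achieved_goal, desired_goal, epsilon) else 0.0
```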

IV Experiments

Fig. 2: Policy evaluation success rate for the RLBench tasks Fetch Reach, Button Push, Toilet Down, Open Drawer, and Pick and Lift. The upper row compares RBF-DQN with the baselines TD3, PPO, and SAC. The bottom row compares RBF-DQN combined with HER, PER, and HER+PER. Data is averaged over a rolling window of size 5. Each episode is 200 steps. For Fetch Reach, PPO converged at around 2,000 iterations, with 10 update epochs in each iteration, roughly equivalent to 20,000 episodes; TD3 converged at around 13,000 episodes, and SAC converged at around 12,000 episodes. For Button Push, PPO converged at around 1,300 iterations (13,000 episodes), TD3 converged at around 7,000 episodes, and SAC converged at around 14,000 episodes. For Toilet Down, PPO converged after 1,300 iterations (13,000 episodes), TD3 converged after more than 11,000 episodes, and SAC needed at least 13,000 episodes to converge. For Open Drawer, PPO achieved a maximum success rate of 0.30 after 3,000 episodes, TD3 achieved a success rate of 0.30 after 5,000 episodes, and SAC barely reached a comparable success rate over the course of training. We ran all algorithms with 3 seeds and shade the 95% confidence interval for each run.

We evaluated RBF-DQN along with state-of-the-art baseline implementations of SAC, TD3, and PPO [11]. All algorithms were tested on five tasks in RLBench [6] using a Franka Panda Arm with 8 DoF (7 joints + 1 gripper tip), each with a continuous range of motion. Agents receive a reward of 1 when they complete the task and a reward of 0 at all other time steps. All variations are trained in the joint-velocity action space, where actions are represented as an 8-dimensional vector in which each element corresponds to a joint velocity or the gripper tip open position. For each task, we used the low-dimensional state space provided by RLBench, consisting of information about the robot arm joint velocities and all objects in the scene. The state space was pruned to reduce its dimensionality and remove irrelevant information. Each agent was trained for 3,000 episodes, where each episode corresponds to a maximum of 200 steps.

Descriptions of the tasks, initialization sequences, and state spaces are described below.

Reach and Button Push: The robot arm is required to move to a target position in the environment (and push a button). The state space is 17-dimensional, representing the joint positions of the arm, the position of the end effector tip, and the position of the target to reach. Goals for HER on the Fetch Reach and Button Push tasks are derived from the final end-effector position.

Toilet Seat Down: The robot arm is required to put the lid of a toilet seat down. The state space is 101-dimensional, encompassing information about the gripper joint positions and velocities as well as information about the toilet, such as its position, orientation, and joint state (how open or closed the lid is). Goals for HER are based on the toilet lid joint.

Pick and Place: The robot arm is required to pick up a block and move it to a point in 3D space. This task proved extremely difficult in the sparse reward setting, so we simplified it by first motion planning to the block, then forming and maintaining a grasp throughout the trajectory (locking the 8th element of the action vector to keep the gripper closed). The reduced state space is 51-dimensional, representing the robot joint velocities, block position, and target position in space. Goals for HER are formed from the position of the end effector.

Open Drawer: The robot arm is required to pull open a drawer. Due to the difficulty of this task in the sparse reward setting, we initialize the gripper at the beginning of each episode to make durative contact with the bottom handle of the drawer and form a grip. Throughout the trajectory, the robot has full control over its 8-dimensional action space. The reduced state space is 45-dimensional: the joint velocities and gripper state of the robot arm, the waypoint of the bottom drawer, and the prismatic joint of the bottom drawer (loosely representing how open or closed the drawer is). Goals for HER were formed from the drawer’s prismatic joint, which increases from 0 as the drawer is opened.

V Discussion and Analysis

From the results, we see that RBF-DQN under an $\epsilon$-greedy policy compares favorably to other state-of-the-art baselines under the same conditions. In the five sparse reward RLBench robotic manipulation tasks evaluated, RBF-DQN required 1/3 as many episodes to succeed at each task, a substantial improvement in sample efficiency for robotic manipulation.

We note that while RBF-DQN was successful, not all of the sampling strategies (HER, PER, HER+PER) were equally effective on each task: PER may result in unstable learning, and HER may not always be feasible to incorporate. Fetch Reach and Pick and Place succeeded under HER and HER+PER, but when using PER alone, training had a tendency to become unstable; for Button Push, neither PER, HER, nor HER+PER outperformed vanilla RBF-DQN; for Open Drawer, HER did not increase performance, while PER increased learning speed but was unstable. For Toilet Down, as a result of there being no intermediate stable goal states (the lid is either up, or falling down due to gravity with slight perturbations), HER is not useful, and PER leads to unstable learning compared to vanilla RBF-DQN.

Differences in the environment may play a role in the success of the sampling strategies in terms of what areas of the state space the agent explored. In particular, the stability of trajectory states sampled from the experience buffer (and those chosen as hindsight goals) may have an impact on success. Fetch Reach and Button Push have the property that all states in the state space are stable: in the absence of robotic control, states (of the end effector or the button) remain where they are rather than drifting toward some other fixed point.

For Toilet Down, the goal state of the toilet lid is an attractor for lid joint angle due to gravity, so certain perturbations of the lid when it is open can cause the lid to fall to the goal state. Setting hindsight goals for lid angles which naturally fall towards the goal state, with no robot contact on the lid, could be a successful hindsight goal selection strategy. However, since the robot does not need an intelligent policy at the subgoal states due to the attraction dynamics of this task family, hindsight goal selection is not as beneficial as in cases where planning is challenging from the subgoal states.

Additionally, the mapping from state space to goal space is critical: certain tasks can only be completed if the robot successfully maintains durative contact with the object throughout the trajectory. Therefore, certain tasks require hindsight goals to be created out of states that maintain durative contact. Pick and Place and Open Drawer states are stable, but only as long as the gripper maintains contact with the object or handle. Therefore, in Pick and Place, we opted to always ensure the gripper remained closed. In contrast, for Open Drawer, we performed only grasp initiation, but subsequently allowed the robot arm full control over its DoF, implying that it could potentially release its grip on the drawer. We observe that due to these differences, HER on Pick and Place was more effective than HER on Open Drawer, since in Open Drawer, there is a very low probability that the gripper remains closed throughout the trajectory.

In certain experiments, such as Button Push, Fetch Reach, and Open Drawer, HER+PER resulted in the agent’s performance collapsing near the end of training. It is possible that the bias introduced by Prioritized Experience Replay is significant enough to destabilize convergence at the end of training, and that the weighted importance-sampling ratios in PER would benefit from an annealing schedule that reduces the weights over time. This is especially problematic for value-based approaches like RBF-DQN, which tend to be less stable during training than policy-gradient and actor-critic methods like PPO, TD3, and SAC, since they optimize for low Bellman error rather than directly improving the expected return of the policy. Future work will investigate approaches for mitigating the destabilization introduced by biased replay buffer sampling techniques.

Even without common sample-efficiency improvements, we have demonstrated that RBF-DQN is more sample efficient than current state-of-the-art baselines and performs better than or comparably to them on multiple robot manipulation tasks. We attribute the success of RBF-DQN on sparse-reward, continuous state and action manipulation tasks to the action-maximization and function-approximation properties of RBFs, which guarantee that the location of the maximizing centroid approximately corresponds to the maximum Q-value at a given state, within a bounded error. This property is extremely powerful, allowing action maximization to be carried out by a simple search over the centroids (of which there are finitely many), and it suggests why RBF-DQN performs so efficiently.

Our results provide strong motivation for adopting RBF-DQN as a sample-efficient value-based method in the domain of robotic manipulation. It seems promising that radial basis functions can be leveraged to improve sample complexity for robotic manipulation tasks with both online and offline RL methods. It would be interesting to see how RBF-DQN performs with higher-dimensional state representations, and how other sampling or goal-generation methods could be used to further improve sample efficiency.

VI Conclusion

We have experimentally shown that RBF-DQN is comparable to or better than PPO, TD3, and SAC at common robotic manipulation tasks. Especially when paired with HER and PER, RBF-DQN is a powerful value-based model for off-policy, continuous action space robotic manipulation.

In the future, we hope to experiment with RBF-DQN on vision-based state input (depth images and point clouds), incorporating sample-efficiency algorithms like CURL [14] and RAD [7], as well as adapting HER to work with image-based state input. Furthermore, we are working to improve the stability of RBF-DQN and curb its over-estimation tendencies by exploring the potential of incorporating dueling [17] and double [16] DQN techniques into RBF-DQN.

Acknowledgments

References

  • [1] M. Andrychowicz, F. Wolski, A. Ray, J. Schneider, R. Fong, P. Welinder, B. McGrew, J. Tobin, P. Abbeel, and W. Zaremba (2017) Hindsight experience replay. arXiv:1707.01495.
  • [2] K. Asadi, N. Parikh, R. E. Parr, G. D. Konidaris, and M. L. Littman (2020) Deep radial-basis value functions for continuous control. arXiv:2002.01883.
  • [3] R. Bellman (1952) On the theory of dynamic programming. Proceedings of the National Academy of Sciences of the United States of America 38(8), pp. 716–719.
  • [4] S. Fujimoto, H. van Hoof, and D. Meger (2018) Addressing function approximation error in actor-critic methods. arXiv:1802.09477.
  • [5] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine (2018) Soft actor-critic: off-policy maximum entropy deep reinforcement learning with a stochastic actor. arXiv:1801.01290.
  • [6] S. James, Z. Ma, D. Rovick Arrojo, and A. J. Davison (2019) RLBench: the robot learning benchmark & learning environment. arXiv:1909.12271.
  • [7] M. Laskin, K. Lee, A. Stooke, L. Pinto, P. Abbeel, and A. Srinivas (2020) Reinforcement learning with augmented data. arXiv:2004.14990.
  • [8] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra (2019) Continuous control with deep reinforcement learning. arXiv:1509.02971.
  • [9] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. A. Riedmiller (2013) Playing Atari with deep reinforcement learning. arXiv:1312.5602.
  • [10] M. L. Puterman (1994) Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, Inc., New York, NY.
  • [11] A. Raffin, A. Hill, M. Ernestus, A. Gleave, A. Kanervisto, and N. Dormann (2019) Stable Baselines3. GitHub. https://github.com/DLR-RM/stable-baselines3.
  • [12] T. Schaul, J. Quan, I. Antonoglou, and D. Silver (2015) Prioritized experience replay. arXiv:1511.05952.
  • [13] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov (2017) Proximal policy optimization algorithms. arXiv:1707.06347.
  • [14] A. Srinivas, M. Laskin, and P. Abbeel (2020) CURL: contrastive unsupervised representations for reinforcement learning. arXiv:2004.04136.
  • [15] R. S. Sutton and A. G. Barto (1998) Reinforcement Learning: An Introduction. The MIT Press.
  • [16] H. van Hasselt, A. Guez, and D. Silver (2015) Deep reinforcement learning with double Q-learning. arXiv:1509.06461.
  • [17] Z. Wang, T. Schaul, M. Hessel, H. van Hasselt, M. Lanctot, and N. de Freitas (2015) Dueling network architectures for deep reinforcement learning. arXiv:1511.06581.
  • [18] C. Watkins and P. Dayan (1992) Technical note: Q-learning. Machine Learning 8, pp. 279–292.