I Introduction
Robotic grasping is one of the most fundamental robotic manipulation tasks: before interacting with objects in the world, a robot typically must begin by grasping them. Prior work in robotic manipulation has sought to address the grasping problem through a wide range of methods, from analytic grasp metrics [43, 36] to learning-based approaches [2]. Learning grasping directly from self-supervision offers considerable promise in this field: if a robot can become progressively better at grasping through repeated experience, perhaps it can achieve a very high degree of proficiency with minimal human involvement. Indeed, learning-based methods inspired by techniques in computer vision have achieved good results in recent years
[22]. However, these methods typically do not reason about the sequential aspect of the grasping task, either choosing a single grasp pose [33] or repeatedly choosing the next most promising grasp greedily [24]. While previous works have explored deep reinforcement learning (RL) as a framework for robotic grasping in a sequential decision-making context, such studies have been limited to either single objects [34] or simple geometric shapes such as cubes [40]. In this work, we explore how RL can be used to automatically learn robotic grasping skills for diverse objects, with a focus on comparing a variety of RL methods in a realistic simulated benchmark. One of the most important challenges in learning-based grasping is generalization: can the system learn grasping patterns and cues that allow it to succeed at grasping new objects that were not seen during training? Successful generalization typically requires training on a large variety of objects and scenes, so as to acquire generalizable perception and control. Prior work on supervised learning of grasping has used tens of thousands
[33] to millions [24] of grasps, with hundreds of different objects. This regime poses a major challenge for RL: if the learning is conducted primarily on-policy, the robot must repeatedly revisit previously seen objects to avoid forgetting, making it difficult to handle extremely diverse grasping scenarios. Off-policy reinforcement learning methods might therefore be preferred for tasks such as grasping, where the wide variety of previously seen objects is crucial for generalization. Indeed, the supervised learning methods explored in previous work [33, 24] can be formalized as special cases of off-policy reinforcement learning that do not consider the sequential nature of the grasping task. Our aim in this paper is to understand which off-policy RL algorithms are best suited for vision-based robotic grasping. A number of model-free, off-policy deep reinforcement learning methods have been proposed in recent years for solving tasks such as Atari games [28] and control of simple simulated robots [25]. However, these works do not explore the kinds of diverse and highly varied situations that arise in robotic grasping, and the focus is typically on final performance (e.g., expected reward) rather than generalization to new objects and situations. Furthermore, training typically involves progressively collecting more and more on-policy data, while retaining old off-policy data in a replay buffer. We study how the relative performance of these algorithms varies in an off-policy regime that emphasizes diversity and generalization.
The first contribution of this paper is a simulated grasping benchmark for a robotic arm with a two-finger parallel-jaw gripper, grasping random objects from a bin. This task is available as an open-source Gym environment [3] (code for the grasping environment is available at https://goo.gl/jAESt9). Next, we present an empirical evaluation of off-policy deep RL algorithms on vision-based robotic grasping tasks. These methods include the grasp success prediction approach proposed by [24], Q-learning [28], path consistency learning (PCL) [29], deep deterministic policy gradient (DDPG) [25], Monte Carlo policy evaluation [39], and Corrected Monte Carlo, a novel off-policy algorithm that extends Monte Carlo policy evaluation for unbiased off-policy learning.
Our discussion of these methods provides a unified treatment of the various Q-function estimation techniques in the literature, including our novel proposed approach. Our results show that deep RL can successfully learn grasping of diverse objects from raw pixels, and can grasp previously unseen objects in our simulator with an average success rate of 90%. Surprisingly, naïve Monte Carlo evaluation is a strong baseline in this challenging domain, despite being biased in the off-policy case, and our proposed unbiased, corrected version achieves comparable performance. Deep Q-learning also excels in limited-data regimes. We also analyze the stability of the different methods, the differences in performance between the on-policy and off-policy cases, and the effect of different amounts of off-policy data. Our results shed light on how the different methods compare on a realistic simulated robotic task, and suggest avenues for developing new, more effective deep RL algorithms for robotic manipulation, discussed in Section VII. To our knowledge, our paper is the first to provide an open benchmark for robotic grasping from image observations with held-out test objects, as well as a detailed comparison of a wide variety of deep RL methods on these tasks.
II Related Work
A number of works combine RL algorithms with deep neural network function approximators. Model-free algorithms for deep RL generally fall into one of two areas: policy gradient methods [44, 38, 27, 45] and value-based methods [35, 28, 25, 15, 16], with actor-critic algorithms combining the two classes [29, 31, 14]. It is well known that model-free deep RL algorithms can be unstable and difficult to tune [18]. Most prior works in this field, including popular benchmarks [7, 1, 3], have primarily focused on applications in video games and relatively simple simulated robot locomotion tasks, and do not generally evaluate on diverse tasks that emphasize the need for generalization to new situations. The goal of this work is to evaluate which approaches are suitable for vision-based robotic grasping, in terms of both stability and generalization performance, two factors that are rarely evaluated in standard RL benchmarks.
A number of approaches have sought to apply deep RL methods to tasks on real robots. For example, guided policy search methods have been applied to a range of manipulation tasks, including contact-rich, vision-based skills [23], non-prehensile manipulation [10], and tasks involving significant discontinuities [5, 4]. Other papers have directly applied model-free algorithms such as fitted Q-iteration [21], Monte Carlo return estimates [37], deep deterministic policy gradient [13], trust-region policy optimization [11], and deep Q-networks [46] to learning skills on real robots. These papers provide excellent examples of successful deep RL applications, but generally tackle individual skills and do not emphasize generalization to task instances beyond what the robot was trained on. The goal of this work is to provide a systematic comparison of deep RL approaches to robotic grasping. In particular, we test generalization to new objects in a cluttered environment where objects may be obscured and the environment dynamics are complex, in contrast to works such as [40], [34], and [19], which consider grasping simple geometric shapes such as blocks.
Outside of deep RL, learning policies for grasping diverse sets of objects has been studied extensively in the literature. For a complete survey of approaches, we refer readers to Bohg et al. [2]. Prior methods have typically relied on one of three sources of supervision: human labels [17, 22], geometric criteria for grasp success computed offline [12], and robot self-supervision, measuring grasp success using sensors on the robot's gripper [33]. Deep learning has recently been incorporated into such systems [20, 22, 24, 26, 32]. These prior methods do not consider the sequential decision-making formalism of grasping maneuvers, whereas our focus in this paper is on evaluating RL algorithms for grasping. We do include a comparison to a prior method that learns to predict grasp outcomes without considering the sequential nature of the task [24], and observe that deep RL methods are more suitable in harder, more cluttered environments. Finally, a primary consideration of this paper is the ability to effectively learn from large amounts of off-policy data, which makes deploying new algorithms much more practical. Sadeghi et al. use deep reinforcement learning on offline simulated data to learn a model for drone flight [37]. Other papers have considered large-scale data collection for robotics. For example, Finn et al. learn a predictive model of sensory inputs and use it to plan [8, 9]. Pinto & Gupta [33] and Levine et al. [24] both use supervised learning techniques for learning to grasp. Unlike these prior approaches, we focus on model-free RL algorithms, which can consider the future consequences of their actions (e.g., in order to enable pre-grasp manipulation).
III Preliminaries
We first define the RL problem and the notation that we use in the rest of the paper. We consider a finite-horizon, discounted Markov decision process (MDP): at each timestep $t$, the agent observes the current state $s_t$, takes an action $a_t$, and then receives a reward $r_t$ and observes the next state $s_{t+1}$, each stochastically determined by the environment. Episodes have length $T$ timesteps. The goal of the agent is to find a policy $\pi_\theta(a \mid s)$, parameterized by $\theta$, under which the expected reward is maximized. We additionally assume that future rewards are discounted by $\gamma$, such that the objective becomes

    $\max_\theta \; \mathbb{E}_{\pi_\theta} \left[ \sum_{t=0}^{T} \gamma^t r_t \right].$    (1)
Note that the expectation is with respect to both the policy and the environment dynamics. To reduce notational clutter, we use $\mathbb{E}$ without a subscript to refer to an expectation over the environment dynamics alone (and not a specific policy), and specify a policy only when relevant.
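As a concrete illustration of the objective above, the discounted return of a single episode can be computed as follows. The reward sequence shown is an illustrative example with a single terminal reward, as in a sparse success/failure task:

```python
def discounted_return(rewards, gamma):
    """Discounted return: sum over t of gamma^t * r_t for one episode."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# Example: an episode whose only reward arrives at the final step.
rewards = [0.0, 0.0, 0.0, 1.0]
ret = discounted_return(rewards, gamma=0.9)  # 0.9**3 = 0.729
```

With a sparse terminal reward, the discounting simply shrinks the return of longer episodes, which is what gives the agent an incentive to grasp quickly.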
IV Problem Setup
Our proposed benchmark for vision-based robotic grasping builds on top of the Bullet simulator [6]. In this environment, a robotic arm with 7 degrees of freedom attempts to grasp objects from a bin. The arm has a fixed number of timesteps $T$ to find a good grasp, at which point the gripper closes and the episode ends. The reward is binary and provided only at the last step, with $r = 1$ for a successful grasp and $r = 0$ for a failed grasp. The observed state consists of the current RGB image from the viewpoint of the robot's camera and the current timestep (Figure 1). The timestep is included in the state, since the policy must know how many steps remain in the episode to decide whether, for example, it has time for a pre-grasp manipulation, or whether it must immediately move into a good grasping position. The arm moves via position control of the vertically-oriented gripper. Continuous actions are represented by a Cartesian displacement of the gripper together with a rotation of the wrist around the vertical axis. The gripper automatically closes when it moves below a fixed height threshold, and the episode ends. At the beginning of each new episode, the object positions and rotations are randomized within the bin. The benchmark consists of two different RL environments, shown in Figure 1.

Regular grasping. The first grasping task tests generalization: 900 randomly generated rigid objects with diverse random shapes are used during training, and testing is performed on 100 new objects on which the model was never trained. In each episode there are 5 objects in the bin, and every 20 episodes the objects are randomly switched out. Both training and test objects are visualized in Figure 2.

Targeted grasping in clutter. In this task, the robot must pick up a particular cross-shaped object in a bin with many other objects, which may occlude each other visually (see Figure 1, right). The arm may disturb other objects in the bin when attempting to select and grasp the "target object". We chose this setting because grasping specific objects in clutter may require more nuanced behavior from the robot. The robot trains on objects which are kept the same for all episodes. We evaluate performance on sets of 7 objects where 3 of them are "target" objects, and the robot only receives reward for picking up one of the target objects.
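The episode structure shared by both environments (fixed horizon, sparse terminal reward, early termination once the gripper closes) can be sketched as a generic rollout loop. The `reset_fn`, `step_fn`, and `policy_fn` interfaces here are illustrative placeholders, not the benchmark's actual API:

```python
def run_episode(reset_fn, step_fn, policy_fn, max_steps):
    """Roll out one grasping episode.

    reset_fn() returns the initial state; step_fn(action) returns
    (state, reward, done); policy_fn(state, t) returns an action.
    The episode terminates early when the gripper closes (done flag),
    and otherwise after max_steps timesteps. The timestep t is passed
    to the policy, mirroring its inclusion in the observed state.
    """
    state = reset_fn()
    total_reward = 0.0
    for t in range(max_steps):
        action = policy_fn(state, t)
        state, reward, done = step_fn(action)
        total_reward += reward
        if done:
            break
    return total_reward
```

With the sparse binary reward, the returned value is simply 1.0 for a successful grasp and 0.0 otherwise.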
V Reinforcement Learning Algorithms
In addition to proposing a vision-based grasping benchmark, we aim to evaluate off-policy deep RL algorithms to determine which methods are best suited for learning complex robotic manipulation skills, such as grasping, in diverse settings that require generalization to novel objects. Our detailed experimental evaluation includes well-known algorithms such as Q-learning [42, 41]; deep deterministic policy gradient (DDPG) [25], which we show to be a variant of Q-learning with approximate maximization; path consistency learning (PCL) [29]; Monte Carlo policy evaluation [39], which consists of simple supervised regression onto estimated returns; and a novel corrected version of Monte Carlo policy evaluation, which makes the algorithm unbiased in the off-policy case, with a correction term that resembles Q-learning and PCL.
V-A Learning to Grasp with Supervised Learning
The first method in our comparison is based on the grasping controller described by Levine et al. [24]. This method does not consider long-horizon returns, but instead uses a greedy controller to choose the actions with the highest predicted probability of producing a successful grasp. We include this approach in our comparison because it is a recent example of a prior grasping method that learns to perform closed-loop feedback control using deep neural networks from raw monocular images. To our knowledge, no prior method learns vision-based robotic grasping of diverse objects with deep networks and reinforcement learning, making this prior approach the closest point of comparison.

This prior method learns an outcome predictor for the next-step reward after taking a single action. This amounts to learning a single-step Q-function. To obtain labeled data from multi-step grasping episodes, this method uses "synthetic" actions obtained by taking the pose of the gripper at any step $t$ of the episode, denoted $p_t$, and computing the action that would move the gripper to the final pose of the episode, $p_T$. Since actions correspond to changes in gripper pose, the action label is simply given by $\tilde{a}_t = p_T - p_t$, and the outcome of the entire episode is used as the label for each step within that episode. This introduces bias: taking a straight-line path from $p_t$ to $p_T$ does not always produce the same grasp outcome as the actual sequence of intermediate steps taken in the corresponding episode. The action is selected by maximizing the Q-function via stochastic optimization. In our implementation, we employ the cross-entropy method (CEM), with 3 iterations and 64 samples per iteration. For further details, we refer the reader to prior work [24].
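CEM-based action selection can be sketched as follows. The 3 iterations and 64 samples per iteration match the setting described above; the elite fraction and the initial Gaussian parameters are illustrative assumptions:

```python
import numpy as np

def cem_maximize(q_fn, state, action_dim=4, n_iters=3, n_samples=64, n_elite=6):
    """Approximately maximize q_fn(state, action) with the cross-entropy
    method: sample actions from a Gaussian, refit the Gaussian to the
    highest-scoring "elite" samples, and repeat."""
    mu = np.zeros(action_dim)
    sigma = np.ones(action_dim)
    for _ in range(n_iters):
        samples = np.random.randn(n_samples, action_dim) * sigma + mu
        scores = np.array([q_fn(state, a) for a in samples])
        elite = samples[np.argsort(scores)[-n_elite:]]
        mu = elite.mean(axis=0)
        sigma = elite.std(axis=0) + 1e-6  # avoid collapse to zero variance
    return mu
```

Because CEM only requires evaluating the Q-function on sampled actions, it places no differentiability or structural requirements on the learned model.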
V-B Off-Policy Q-Learning
We begin by describing the standard off-policy Q-learning algorithm [42], which is one of the best known and most popular methods in this class. Q-learning aims to estimate the Q-function by minimizing the Bellman error, given by

    $\mathcal{E} = \mathbb{E} \left[ \left( Q_\theta(s, a) - \left( r + \gamma \max_{a'} Q_\theta(s', a') \right) \right)^2 \right].$    (2)

Expectations correspond to state-action pairs sampled from an off-policy replay buffer. Minimizing this quantity for all states results in the optimal Q-function, which induces an optimal policy. In practice, the Bellman error is minimized only at sampled states, by computing the gradient of Equation (2) with respect to the Q-function parameters $\theta$ and using stochastic gradient descent. The gradient is computed only through the $Q_\theta(s, a)$ term, without considering the derivative of the non-differentiable $\max$ operator. Applying Q-learning off-policy is then straightforward: batches of states, actions, rewards, and subsequent states, of the form $(s, a, r, s')$, are sampled from the buffer of stored transition tuples, and the gradient of the Bellman error is computed on these samples. In practice, a number of modifications to this method are employed for stability, as suggested in prior work [28]. First, we employ a target network inside the $\max$ that is decorrelated from the learned Q-function, by keeping a lagged copy of the Q-function that is delayed by 50 gradient updates. We refer to this target network as $Q_{\theta'}$. Second, we employ double Q-learning (DQL) [41], which we found in practice improves the performance of this algorithm. In double Q-learning, the $\max$ operator uses the action that maximizes the current network $Q_\theta$, but the value obtained from the target network $Q_{\theta'}$, resulting in the following error estimate:

    $\mathcal{E} = \mathbb{E} \left[ \left( Q_\theta(s, a) - \left( r + \gamma\, Q_{\theta'}\!\left(s', \arg\max_{a'} Q_\theta(s', a')\right) \right) \right)^2 \right].$

To handle continuous actions, we use a simple stochastic optimization method to compute the $\max$ in the target value: we sample actions uniformly at random and pick the one with the largest Q-value. While this method is crude, it is efficient, easy to parallelize, and we found it to work well for our 4-dimensional action parameterization. The action at execution time is selected with CEM, in the same way as described in the previous section.
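A minimal sketch of the double Q-learning target with sampling-based maximization might look as follows; the function signatures and sample count are illustrative assumptions:

```python
import numpy as np

def double_q_target(reward, next_state, q_current, q_target, gamma,
                    n_samples=16, action_dim=4):
    """Double Q-learning target with sampling-based maximization: sample
    candidate actions uniformly at random, select the one that maximizes
    the *current* network q_current, but evaluate it with the lagged
    *target* network q_target."""
    actions = np.random.uniform(-1.0, 1.0, size=(n_samples, action_dim))
    scores = np.array([q_current(next_state, a) for a in actions])
    best_action = actions[np.argmax(scores)]
    return reward + gamma * q_target(next_state, best_action)
```

Decoupling action selection (current network) from action evaluation (target network) is what reduces the overestimation bias of the plain max.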
V-C Regression with Monte Carlo Return Estimates
Although the Q-learning algorithm discussed in the previous section is one of the most commonly used Q-function learning methods for deep reinforcement learning, it is far from the simplest. In fact, if we can collect on-policy data, we can estimate Q-values directly with supervised regression. Assuming episodes of length $T$, the empirical loss for Monte Carlo policy evaluation [39] is given by

    $\mathcal{E} = \sum_{i} \sum_{t=1}^{T} \left( Q_\theta(s^i_t, a^i_t) - \sum_{t'=t}^{T} \gamma^{t'-t} r^i_{t'} \right)^2,$

where the first sum is taken over sampled episodes $i$, and the second sum over the time steps within each episode. If the samples are drawn from the latest policy, this method provides an unbiased approach to estimating Q-values. Monte Carlo return estimates were previously used to learn deep reinforcement learning policies for drone flight in [37]. In contrast to Q-learning, this method does not require bootstrapping, i.e., the use of the most recent function approximator or target network to estimate target values. This makes the method very simple and stable, since the optimization reduces completely to standard supervised regression. However, the requirement to obtain on-policy samples severely limits the applicability of this approach to real-world robotic manipulation. In our experiments, we evaluate how well this kind of Q-function estimator performs when employed on off-policy data. Surprisingly, it provides a very competitive alternative, despite being a biased estimator in the off-policy case.
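The regression targets in the loss above are the per-timestep discounted returns, which can be computed for one episode with a single backward pass, as in this short sketch:

```python
import numpy as np

def mc_labels(rewards, gamma):
    """Per-timestep Monte Carlo regression targets: the discounted
    return from each step t to the end of the episode."""
    returns = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

# With a sparse terminal reward, every step of a successful episode
# is labeled with a discounted copy of the final reward.
labels = mc_labels([0.0, 0.0, 1.0], gamma=0.9)  # [0.81, 0.9, 1.0]
```

Once the labels are computed, training reduces to standard supervised regression of the Q-network onto these values.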
V-D Corrected Monte Carlo Evaluation
The Monte Carlo (MC) policy evaluation algorithm described in the previous section is a well-known method for estimating Q-values [39], but not an especially popular one: it does not benefit from bootstrapping, and is biased when applied in an off-policy setting. We can improve this approach by removing the off-policy bias through the addition of a correction term, which we describe in this section. This correction is a novel contribution of our paper, motivated by the surprising effectiveness of the naïve Monte Carlo evaluation method. Let $Q^*$ and $V^*$ be the Q-values and state values of the optimal policy:

    $Q^*(s_t, a_t) = \mathbb{E} \left[ r_t + \gamma V^*(s_{t+1}) \right]$    (3)
    $V^*(s_t) = \max_a Q^*(s_t, a).$    (4)
We may express the advantage of a state-action pair as

    $A^*(s_t, a_t) = Q^*(s_t, a_t) - V^*(s_t)$    (5)
    $\phantom{A^*(s_t, a_t)} = Q^*(s_t, a_t) - \max_a Q^*(s_t, a).$    (6)
Thus we have

    $Q^*(s_t, a_t) = \mathbb{E} \left[ r_t + \gamma \left( Q^*(s_{t+1}, a_{t+1}) - A^*(s_{t+1}, a_{t+1}) \right) \right].$    (7)
If we perform a discounted sum of the two sides of Equation (7) over $t' = t, \dots, T$, we induce a telescoping cancellation:

    $\sum_{t'=t}^{T} \gamma^{t'-t} Q^*(s_{t'}, a_{t'}) = \mathbb{E} \left[ \sum_{t'=t}^{T} \gamma^{t'-t} r_{t'} + \sum_{t'=t+1}^{T} \gamma^{t'-t} \left( Q^*(s_{t'}, a_{t'}) - A^*(s_{t'}, a_{t'}) \right) \right]$    (8)
    $Q^*(s_t, a_t) = \mathbb{E} \left[ \sum_{t'=t}^{T} \gamma^{t'-t} r_{t'} - \sum_{t'=t+1}^{T} \gamma^{t'-t} A^*(s_{t'}, a_{t'}) \right],$    (9)
where we recall that $Q^*(s_{T+1}, a_{T+1}) = 0$. Equivalently, we have

    $Q^*(s_t, a_t) = \mathbb{E} \left[ \sum_{t'=t}^{T} \gamma^{t'-t} r_{t'} + \sum_{t'=t+1}^{T} \gamma^{t'-t} \left( V^*(s_{t'}) - Q^*(s_{t'}, a_{t'}) \right) \right].$    (10)
Thus, we may train a parameterized $Q_\theta$ to minimize the squared difference between the LHS and RHS of Equation (10). Note that the resulting algorithm is a modified version of Monte Carlo, augmented with a correction to the future reward given by the discounted sum of advantages. Another interpretation of this correction is the difference between the Q-values of the actions actually taken along the sampled trajectory and those of the optimal actions. This means that "good" actions along "bad" trajectories are given higher values, while "bad" actions along "good" trajectories are given lower values. This removes the bias of Monte Carlo when applied to off-policy data. We also note that this corrected Monte Carlo may be understood as a variant of PCL [29], discussed below, without the entropy regularization. In practice, we also multiply the correction term by a coefficient $\nu$, which we anneal from 0 to 1 during training to improve stability. When $\nu = 0$, the method corresponds to supervised regression, and when $\nu = 1$, it becomes unbiased.
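A sketch of the corrected target computation for one episode, assuming the current Q-estimates of the taken actions and the corresponding state values (the max over actions) have already been evaluated elsewhere:

```python
import numpy as np

def corrected_mc_targets(rewards, q_taken, v_states, gamma, nu=1.0):
    """Corrected Monte Carlo targets for one episode.

    rewards[t] is r_t, q_taken[t] is the current estimate Q(s_t, a_t)
    of the action actually taken, and v_states[t] is the estimated
    state value max_a Q(s_t, a). The target at step t is the Monte
    Carlo return minus the discounted sum of future advantages
    Q(s, a) - V(s), scaled by the annealing coefficient nu
    (nu = 0 recovers plain Monte Carlo)."""
    T = len(rewards)
    adv = np.asarray(q_taken, dtype=float) - np.asarray(v_states, dtype=float)
    targets = np.zeros(T)
    for t in range(T):
        ret = sum(gamma ** (k - t) * rewards[k] for k in range(t, T))
        corr = sum(gamma ** (k - t) * adv[k] for k in range(t + 1, T))
        targets[t] = ret - nu * corr
    return targets
```

Since the advantages are non-positive (the value is a max over actions), the correction can only raise the target relative to the raw return, compensating for suboptimal actions taken later in the trajectory.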
V-E Deep Deterministic Policy Gradient
Deep deterministic policy gradient (DDPG) [25] is an algorithm that combines elements of Q-learning and policy gradients. Originally derived from the theory of deterministic policy gradients, this algorithm aims to learn a deterministic policy $\pi_\phi(s)$ by propagating gradients through a critic $Q_\theta(s, a)$. However, DDPG can also be interpreted as an approximate Q-learning algorithm. To see this, observe that, in Q-learning, the policy that is used at test time is obtained by solving $\arg\max_a Q_\theta(s, a)$. In continuous action spaces, performing this optimization at every decision step is computationally expensive. The actor, which is trained in DDPG according to the objective

    $\max_\phi \; \mathbb{E}_s \left[ Q_\theta(s, \pi_\phi(s)) \right],$    (11)
can be seen as an approximate maximizer of the Q-function with respect to the action at any given state $s$. This amortizes the search over actions. The update equations in DDPG closely resemble Q-learning. The Q-function is updated according to the gradient of the bootstrapped objective

    $\mathcal{E} = \mathbb{E} \left[ \left( Q_\theta(s, a) - \left( r + \gamma\, Q_{\theta'}(s', \pi_{\phi'}(s')) \right) \right)^2 \right],$

and the actor is updated by taking one gradient step for the maximization in Equation (11). Comparing this objective to that of standard double Q-learning, we see that the only difference for the Q-function update is the use of the actor's action instead of the $\arg\max$. The practical implementation of DDPG closely follows that of Q-learning, with the addition of the actor update step after each Q-function update. In practice, a lagged ("target network") version of the actor, $\pi_{\phi'}$, is used to compute the target value [25].
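The contrast with the sampled max of Q-learning can be made explicit in a small sketch of the critic target; the function signatures are illustrative assumptions:

```python
def ddpg_critic_target(reward, next_state, actor_target, q_target, gamma):
    """DDPG critic target: instead of a sampled max over actions (as in
    the Q-learning variant above), the lagged actor proposes the next
    action and the lagged target critic evaluates it."""
    next_action = actor_target(next_state)
    return reward + gamma * q_target(next_state, next_action)
```

Replacing the stochastic search with a single actor evaluation makes target computation cheap, at the cost of coupling the critic's targets to the quality of the actor.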
V-F Path Consistency Learning
Path consistency learning (PCL) [29] is a stochastic optimal control variant of Q-learning that resembles our corrected Monte Carlo method. Though the full derivation of this algorithm is outside the scope of this paper, we briefly summarize its implementation, and include it for comparison due to its similarity with corrected MC. PCL augments the RL objective in Equation (1) with a weighted discounted entropy regularizer,

    $\max_\pi \; \mathbb{E}_\pi \left[ \sum_{t=0}^{T} \gamma^t \left( r_t + \tau\, \mathcal{H}(\pi(\cdot \mid s_t)) \right) \right],$

where $\tau$ weights the entropy term $\mathcal{H}$. The corresponding optimal value function is given by

    $V^*(s) = \tau \log \sum_a \exp\!\left( Q^*(s, a) / \tau \right),$

and together the optimal policy $\pi^*$ and value function $V^*$ must satisfy the following step consistency for any $(s_t, a_t)$:

    $V^*(s_t) = \mathbb{E} \left[ r_t + \gamma V^*(s_{t+1}) \right] - \tau \log \pi^*(a_t \mid s_t).$    (12)

PCL minimizes the squared difference between the LHS and RHS of Equation (12) for a parameterized policy and value function. In our experiments, we use a variant, Trust-PCL [30], which uses a Gaussian policy and modifies the entropy regularizer to a relative entropy with respect to a prior version of the policy.
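The softmax relationship between the optimal entropy-regularized policy and value function makes the consistency in Equation (12) easy to verify numerically. This sketch checks it for a single-step case with a terminal next state; the specific rewards and temperature are illustrative:

```python
import numpy as np

def soft_value(q_values, tau):
    """Entropy-regularized state value: tau * log-sum-exp of Q / tau."""
    return tau * np.log(np.sum(np.exp(q_values / tau)))

def soft_policy(q_values, tau):
    """Optimal entropy-regularized policy: softmax of Q / tau."""
    return np.exp((q_values - soft_value(q_values, tau)) / tau)

# Terminal next state (V* = 0), so Q*(s, a) equals the immediate
# reward. The path consistency residual
#   V*(s_t) - [r_t + gamma * V*(s_{t+1}) - tau * log pi*(a_t | s_t)]
# should vanish for every action.
tau, gamma = 0.1, 0.9
r = np.array([0.0, 1.0])   # illustrative rewards for two actions
q = r                      # terminal next state: Q* = r
v = soft_value(q, tau)
pi = soft_policy(q, tau)
residuals = v - (r + gamma * 0.0 - tau * np.log(pi))
```

The residual vanishes identically because $\tau \log \pi^*(a \mid s) = Q^*(s, a) - V^*(s)$, which is exactly the cancellation PCL exploits during training.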
V-G Summary and Unified View
In this section, we provide a unified view that summarizes the individual choices made in each of the above algorithms. All of the methods perform regression onto some kind of target value to estimate a Q-function, and the principal distinguishing factors among these methods consist of the following two choices:
Bootstrapping or Monte Carlo returns
The standard Q-learning algorithm and DDPG use the bootstrap, by employing the current function approximator (or, in practice, a target network $Q_{\theta'}$) to determine the value of the policy at the next step. In contrast, both Monte Carlo variants, PCL, and the single-step supervised method use the actual return of the entire episode. This is in general biased in the off-policy case, since the current policy might perform better than the policy that collected the data. In the case of Monte Carlo with corrections and PCL, a correction term is added to compensate for the bias. We will see that adding the correction to Monte Carlo substantially improves performance.
Maximization via an actor, or via sampling
The DDPG and PCL methods use a second network to choose the actions, while the other algorithms only learn a Q-function and choose actions by maximizing it with stochastic search. The use of a separate actor network has considerable benefits: obtaining the action from the actor is much faster than stochastic search, and the actor training process can have an amortizing effect that can accelerate learning [25]. However, our experiments show that this comes at a price: learning a value function and its corresponding actor jointly makes each dependent on the other's output distribution, resulting in instability.
The following table summarizes the specific choices for these two parameters made by each of the algorithms:
algorithm              | target value              | action selection
-----------------------|---------------------------|------------------
supervised learning    | episode value*            | stochastic search
Q-learning             | bootstrapped              | stochastic search
Monte Carlo            | episode value             | stochastic search
Corrected Monte Carlo  | corrected episode value   | stochastic search
DDPG                   | bootstrapped              | actor network
PCL                    | corrected episode value†  | actor network

*Supervised learning uses the actual episode value, but does not use the actual actions of the episode, instead merging multiple actions into a cumulative action that leads to the episode's final state.
†PCL also includes an entropy regularizer in the objective.
VI Experiments
We evaluate each RL algorithm along four axes: overall performance, data efficiency, robustness to off-policy data, and hyperparameter sensitivity, all of which are important for the practicality of applying these methods to real robotic systems. As discussed in Section IV, we consider two challenging simulated grasping scenarios, regular and targeted grasping, with performance evaluated on held-out test objects. All algorithms use variants of the deep neural network architecture shown in Figure 3 to represent the Q-function.

VI-A Data Efficiency and Performance
We consider learning with both on-policy and off-policy data. In each setting, we initialize the pool of experience with an amount of random-policy data (10k, 100k, or 1M grasps); code for the random policy is available at https://goo.gl/hPS6ca. A Q-function model is trained from this data. In the on-policy case, we periodically sample 50 on-policy grasps every 1k training steps, which are used to augment the initial pool. This setting is on-policy in the sense that we continually re-collect data with the latest policy. However, the amount of on-policy data is still significantly less than in traditional on-policy algorithms, which sample a batch of on-policy experience for each gradient step. This procedure is thus more representative of a robotic learning setting, where data collection is much more expensive than training iterations. We find that the difference between off-policy and on-policy learning is slight across all algorithms in all environments (see Figures 4 and 5). This suggests that the amount of on-policy data necessary to yield a significant performance benefit is more than what we applied, and therefore likely infeasible for robotics.
Overall, DQL, supervised learning, MC, and our CorrMC variant learn the most successful policies given enough data (see Figures 4 and 5). DQL tends to perform better in low-data regimes, while MC and corrected MC achieve slightly better performance in the high-data regime on the harder targeted task. The good performance of DQL in low-data regimes can be partially explained by the variance reduction effect of the bootstrapped target estimate.
Although CorrMC performs well, often competitively with DQL in high-data regimes, standard MC does not actually perform substantially worse than CorrMC in most cases. Although standard MC is highly biased in the off-policy setting, it still achieves good results, except in the lowest-data regimes with purely off-policy data. This suggests that the bias incurred by this approach may not be as disastrous as generally believed, which merits further investigation in future work. While supervised training can perform well, standard model-free deep RL methods perform competitively and, in some cases, slightly better. Generally, DDPG and PCL perform poorly compared to the other baselines.
VI-B Analyzing Stability
When applying deep RL methods to real robotic systems, we care not only about performance and data efficiency, but also about robustness to different hyperparameter values. Extensively tuning a learning algorithm for a particular environment can be tedious and impractical, and current deep RL methods are known to be unstable [18]. In this section, we study the robustness of each algorithm to hyperparameters and different random seeds. For each algorithm, we sweep over different values of the learning rate, the number of convolution filters and fully-connected units in each layer, the discount factor (MC and Supervised do not use the discount factor hyperparameter), and the duration (in training steps) of per-step exploration with a linearly decaying schedule. All hyperparameter sweeps were done in the on-policy learning setting with 100k initial random grasps.
In Figure 6, we show an analysis of the sensitivity of each algorithm to each combination of the aforementioned hyperparameter values and 9 random seeds. Our results show that DQL, CorrMC, PCL, MC, and Supervised are relatively stable across different hyperparameter values. Notably, although MC and CorrMC yield similar performance given optimal hyperparameters, the unbiased CorrMC is slightly more robust to the choice of hyperparameters. The performance of DDPG drops substantially for suboptimal hyperparameters, and correspondingly, DDPG (which is the least stable) typically achieves the worst performance in our experiments. These results strongly indicate that algorithms that employ a second network for the actor suffer a considerable drop in stability, while approximate maximization via stochastic search, though crude, provides significant benefits in this regard.
VII Discussion and Future Work
We presented an empirical evaluation of a range of off-policy, model-free deep reinforcement learning algorithms. Our set of algorithms includes popular model-free methods such as double Q-learning, DDPG, and PCL, as well as a prior method based on supervised learning with synthetic actions [24]. We also include a naïve Monte Carlo method, which is biased in the off-policy case but surprisingly achieves reasonable performance, often outperforming DDPG, and a corrected version of this Monte Carlo method, which is a novel contribution of this work. Our experiments are conducted in a diverse grasping simulator on two types of tasks: a grasping task that evaluates generalization to novel random objects not seen during training, and a targeted grasping task that requires isolating and grasping a particular type of object in clutter.
Our evaluation indicates that DQL performs better than the other algorithms on both grasping tasks in low-data regimes, for both off-policy and on-policy learning, and additionally has the desirable property of being relatively robust to the choice of hyperparameters. When data is more plentiful, algorithms that regress to a multi-step return, such as Monte Carlo or its corrected variant, typically achieve slightly better performance. When considering the algorithm features summarized in Section V-G, we find that the use of an actor network substantially reduces stability, leading to poor performance and severe hyperparameter sensitivity. Methods that use entire episode values for supervision tend to perform somewhat better when data is plentiful, while the bootstrapped DQL method performs substantially better in low-data regimes. These insights suggest that, in robotic settings where off-policy data is available, single-network methods may be preferred for stability; methods that use (corrected) full episode returns should be preferred when data is plentiful, while bootstrapped methods are better in low-data regimes. A natural implication of this result is that future research into robotic reinforcement learning algorithms might focus on combining the best of bootstrapping and multi-step returns, by adjusting the type of target value based on data availability. Another natural extension of our work is to evaluate a similar range of methods in real-world settings. Since the algorithms we evaluate all operate successfully in off-policy regimes, they are likely to be reasonably practical to use in realistic settings.
VIII Acknowledgements
We thank Laura Downs, Erwin Coumans, Ethan Holly, John-Michael Burke, and Peter Pastor for helping with experiments.
References

[1] M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research (JAIR), 2013.
[2] J. Bohg, A. Morales, T. Asfour, and D. Kragic. Data-driven grasp synthesis: a survey. Transactions on Robotics, 2014.
[3] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. OpenAI Gym. arXiv preprint, 2016.
[4] Y. Chebotar, K. Hausman, M. Zhang, G. Sukhatme, S. Schaal, and S. Levine. Combining model-based and model-free updates for trajectory-centric reinforcement learning. In International Conference on Machine Learning (ICML), 2017.
[5] Y. Chebotar, M. Kalakrishnan, A. Yahya, A. Li, S. Schaal, and S. Levine. Path integral guided policy search. In International Conference on Robotics and Automation (ICRA), 2017.
[6] E. Coumans and Y. Bai. PyBullet, a Python module for physics simulation, games, robotics and machine learning. http://pybullet.org/, 2016-2017.
[7] Y. Duan, X. Chen, R. Houthooft, J. Schulman, and P. Abbeel. Benchmarking deep reinforcement learning for continuous control. In International Conference on Machine Learning (ICML), 2016.
[8] C. Finn, I. Goodfellow, and S. Levine. Unsupervised learning for physical interaction through video prediction. In Neural Information Processing Systems (NIPS), 2016.
[9] C. Finn and S. Levine. Deep visual foresight for planning robot motion. In International Conference on Robotics and Automation (ICRA), 2017.
[10] C. Finn, X. Y. Tan, Y. Duan, T. Darrell, S. Levine, and P. Abbeel. Deep spatial autoencoders for visuomotor learning. In International Conference on Robotics and Automation (ICRA), 2016.
[11] A. Ghadirzadeh, A. Maki, D. Kragic, and M. Björkman. Deep predictive policy training using reinforcement learning. arXiv preprint, 2017.
[12] C. Goldfeder, M. Ciocarlie, H. Dang, and P. K. Allen. The Columbia grasp database. In International Conference on Robotics and Automation (ICRA), 2009.
[13] S. Gu, E. Holly, T. Lillicrap, and S. Levine. Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates. In International Conference on Robotics and Automation (ICRA), 2017.
[14] S. Gu, T. Lillicrap, Z. Ghahramani, R. E. Turner, and S. Levine. Q-Prop: Sample-efficient policy gradient with an off-policy critic. In International Conference on Learning Representations (ICLR), 2017.
[15] S. Gu, T. Lillicrap, I. Sutskever, and S. Levine. Continuous deep Q-learning with model-based acceleration. In International Conference on Machine Learning (ICML), 2016.
[16] T. Haarnoja, H. Tang, P. Abbeel, and S. Levine. Reinforcement learning with deep energy-based policies. In International Conference on Machine Learning (ICML), 2017.
[17] A. Herzog, P. Pastor, M. Kalakrishnan, L. Righetti, J. Bohg, T. Asfour, and S. Schaal. Learning of grasp selection based on shape templates. Autonomous Robots, 2014.
[18] R. Islam, P. Henderson, M. Gomrokchi, and D. Precup. Reproducibility of benchmarked deep reinforcement learning tasks for continuous control. arXiv preprint, 2017.
[19] S. James and E. Johns. 3D simulation for robot arm control with deep Q-learning. arXiv preprint, 2016.
[20] D. Kappler, J. Bohg, and S. Schaal. Leveraging big data for grasp planning. In International Conference on Robotics and Automation (ICRA), 2015.
[21] S. Lange, M. Riedmiller, and A. Voigtlander. Autonomous reinforcement learning on raw visual input data in a real world application. In International Joint Conference on Neural Networks (IJCNN), 2012.
[22] I. Lenz, H. Lee, and A. Saxena. Deep learning for detecting robotic grasps. The International Journal of Robotics Research (IJRR), 2015.
[23] S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research (JMLR), 2016.
[24] S. Levine, P. Pastor, A. Krizhevsky, J. Ibarz, and D. Quillen. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. The International Journal of Robotics Research (IJRR), 2016.
[25] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. In International Conference on Learning Representations (ICLR), 2016.
[26] J. Mahler, J. Liang, S. Niyaz, M. Laskey, R. Doan, X. Liu, J. A. Ojea, and K. Goldberg. Dex-Net 2.0: Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics. arXiv preprint, 2017.
[27] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. P. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning (ICML), 2016.
[28] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing Atari with deep reinforcement learning. arXiv preprint, 2013.
[29] O. Nachum, M. Norouzi, K. Xu, and D. Schuurmans. Bridging the gap between value and policy based reinforcement learning. In Neural Information Processing Systems (NIPS), 2017.
[30] O. Nachum, M. Norouzi, K. Xu, and D. Schuurmans. Trust-PCL: An off-policy trust region method for continuous control. arXiv preprint, 2017.
[31] B. O'Donoghue, R. Munos, K. Kavukcuoglu, and V. Mnih. PGQ: Combining policy gradient and Q-learning. arXiv preprint, 2016.
[32] A. ten Pas, M. Gualtieri, K. Saenko, and R. Platt. Grasp pose detection in point clouds. arXiv preprint, 2017.
[33] L. Pinto and A. Gupta. Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours. In International Conference on Robotics and Automation (ICRA), 2016.
[34] I. Popov, N. Heess, T. Lillicrap, R. Hafner, G. Barth-Maron, M. Vecerik, T. Lampe, Y. Tassa, T. Erez, and M. Riedmiller. Data-efficient deep reinforcement learning for dexterous manipulation. arXiv preprint, 2017.
[35] M. Riedmiller. Neural fitted Q iteration: first experiences with a data efficient neural reinforcement learning method. In European Conference on Machine Learning (ECML), 2005.
[36] A. Rodriguez, M. T. Mason, and S. Ferry. From caging to grasping. International Journal of Robotics Research (IJRR), 2012.
[37] F. Sadeghi and S. Levine. (CAD)$^2$RL: Real single-image flight without a single real image. CoRR, abs/1611.04201, 2016.
[38] J. Schulman, S. Levine, P. Abbeel, M. I. Jordan, and P. Moritz. Trust region policy optimization. In International Conference on Machine Learning (ICML), pages 1889-1897, 2015.
[39] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[40] J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P. Abbeel. Domain randomization for transferring deep neural networks from simulation to the real world. arXiv preprint, 2017.
[41] H. van Hasselt, A. Guez, and D. Silver. Deep reinforcement learning with double Q-learning. In AAAI Conference on Artificial Intelligence (AAAI), 2016.
[42] C. J. Watkins and P. Dayan. Q-learning. Machine Learning, 1992.
[43] J. Weisz and P. K. Allen. Pose error robust grasping from contact wrench space metrics. In International Conference on Robotics and Automation (ICRA), pages 557-562, 2012.
[44] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 1992.
[45] Y. Wu, E. Mansimov, S. Liao, R. Grosse, and J. Ba. Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation. arXiv preprint, 2017.
[46] F. Zhang, J. Leitner, M. Milford, B. Upcroft, and P. Corke. Towards vision-based deep reinforcement learning for robotic motion control. arXiv preprint, 2015.