How can we embed prior knowledge into robot control systems? For simple tasks, an engineer can embed the entire solution into the system by specifying the joint-angle configurations a robot should follow. Approaches for more complicated tasks might embed physical modelling into the control system; however, this is often brittle because many real-world physics effects are difficult to capture accurately.
In this paper we consider the family of industrial insertion tasks, where the robot inserts a grasped part into a tight-fitting socket. Today, the engineering time for fine-tuning state-of-the-art feedback controllers for such tasks can rival the cost of the robot hardware itself. Flexible manufacturing, with smaller lot sizes and faster engineering cycles, requires quickly synthesizing robust control policies that can handle variability; this also broadens the space of manufacturing tasks accessible to robot automation. Notably, while the family of insertion tasks shares significant structure, few existing methods have demonstrated the capability to take advantage of that similarity. Many of the most effective current methods for compliant robotic insertion [39, 40, 4, 23] require physical models, or else rely on manually-tuned feedback controllers to attain satisfactory performance.
Ideally, the task structure for an insertion task should be automatically inferred from the experience of having solved similar tasks. This insight leads us to meta-reinforcement learning methods, which, given experience with a family of tasks, adapt to a new task from this family. However, while reinforcement learning (RL) methods can solve a task by learning from data, applying RL in the real world on many tasks is expensive. To circumvent this problem, we represent a task distribution entirely in simulation. Here, we can control various facets of the environment, samples are cheap, and reward functions are easy to specify. In simulation, we learn the latent structure of the task using probabilistic embeddings for actor-critic RL (PEARL), an off-policy meta-RL algorithm, which embeds each task into a latent space [28]. The meta-learning algorithm first learns the task structure in simulation by training on a wide variety of generated insertion tasks. For our family of insertion tasks, the size and placement of the components, parameters of the controller, and magnitude of sensor errors are all randomized, resulting in the policy learning robust and adaptive search strategies based on the latent task code. After training in simulation, the algorithm is then capable of quickly adapting to a real-world task.
In this work, we solve industrial robotic insertion problems by learning the latent structure of the tasks with meta-RL. Concretely, we look at the task of grasping and inserting two parts: a Misumi E-model electrical connector into its socket (one of the most challenging tasks from the IROS 2017 Robotic Grasping and Manipulation Competition [6]) and a gear onto a shaft. We adapt the same policy, which was learned in simulation, to each of the two tasks, despite their distinct physical properties. Moreover, in each task, our method adapts with just 20 trials, significantly fewer than in previous work. We present the robotic system, including a method to account for grasp errors using camera images. Finally, we cover the comprehensive evaluation of our method against both conventional methods and learning-based methods for different degrees of environment variability.
II Related Work
Studies on peg-in-hole insertion have been ubiquitous in industrial automation, as it is representative of many common assembly problems. The key challenges involved in insertion tasks are modeling of physical contacts, and handling errors in perception and control. Since physical modeling of contacts and friction is often difficult, deployed controllers for insertions are based on heuristic search patterns that handle the issues implicitly. These methods include random search, spiral search or raster search. The search patterns are combined with compliant control methods, which have been amongst the first model-based strategies for solving insertion tasks [39, 40, 23]. The parameters of these controllers are manually tuned in order to overcome perception and control errors for a specific system. The search patterns are often embedded in control state-machines, which guide the system. Other approaches focus on high-precision assembly by taking advantage of high-dimensional geometry or contact information [16, 35, 12, 17]. These approaches require a significant amount of engineering, modeling and tuning.
Instead of relying on human ingenuity to solve robotic control tasks, the paradigm of RL promises to autonomously learn the control policy from data. RL has thus far been used in a variety of settings, such as playing ping-pong [25]. RL with expressive function approximators, or deep RL, further allows tasks to be learned from raw sensor inputs such as images. Deep RL has shown success in games [20, 32], in learning fine robotic manipulation skills [15], and in navigation [14]. Specifically, peg insertion tasks have commonly been considered in deep RL settings. Florensa et al. investigate difficult insertion tasks with sparse rewards in simulation using a reverse curriculum [8]. Another approach to solving these tasks is to use prior data such as demonstrations [11, 21, 27, 31]. Vecerik et al. [38] perform a real-world insertion task utilizing demonstrations. Alternatively, one can learn a residual policy for contacts that is superposed with conventional controls [13, 33].
Another line of work considers first learning on simulation models of a task and then transferring the policy to the real world. One approach is domain randomization, which trains on a wide distribution of tasks in simulation assuming that the real world task is captured in that distribution [30, 36, 24, 26]. Further work adaptively randomizes the distribution of the tasks [22, 29, 19]. One can also actively adapt the simulator by switching between simulation and real-world interaction to guide the simulator [42, 3]. In this work, we take a different approach by using meta-learning to learn a distribution of skills in simulation, followed by adaptation in the real world.
III Preliminaries
In this section, we define basic notation and describe reinforcement learning and meta-learning.
III-A Reinforcement Learning
We model our problem as a Markov decision process, where an agent at every discrete timestep $t$ is in state $s_t$, executes an action $a_t$, receives a scalar reward $r(s_t, a_t)$, and the environment evolves to the next state $s_{t+1}$ according to the transition probability $p(s_{t+1} \mid s_t, a_t)$. The agent attempts to maximize the expected return $\mathbb{E}\big[\sum_{t=0}^{T} \gamma^t r(s_t, a_t)\big]$, where $T$ is the planning horizon and $\gamma \in [0, 1]$ is a discount factor. In reinforcement learning, the agent learns a policy $\pi(a_t \mid s_t)$ that is optimized from data.
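As a concrete illustration of the objective, the discounted return of a single finite-horizon episode can be accumulated backwards over the reward sequence (a generic sketch, not code from the paper):

```python
def discounted_return(rewards, gamma=0.99):
    """Compute sum_t gamma^t * r_t for one episode by backward accumulation."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g
```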
III-B Meta-Learning
Meta-learning is the problem of rapid adaptation: given experience in some family of tasks, how can we use that experience to quickly adapt to a new task at test time? Formally, consider a task $\mathcal{T}$ to be defined by the reward function $r(s_t, a_t)$, the initial state distribution $\rho(s_0)$, and the transition distribution $p(s_{t+1} \mid s_t, a_t)$. We consider some distribution over tasks $p(\mathcal{T})$, which we want to perform well on at test time by collecting limited experience during training time.
Several methods have explored this setting. One class of methods separates the training time into meta-training and meta-testing, and attempts to learn a model (a policy, forward model, or loss function) during meta-training that improves meta-test performance [7, 9, 34, 2, 41]. In the meta-RL setting, these methods effectively take advantage of back-propagation through on-policy gradient updates, which limits their sample efficiency.
III-C Probabilistic Embeddings for Actor-Critic RL
Probabilistic embeddings for actor-critic RL (PEARL) is a meta-learning algorithm that enables sample-efficient meta-learning by reusing past data with off-policy RL algorithms [28]. The key idea is to condition the policy on past transitions of the current task, which are termed the context $c$. The context is encoded into a latent variable $z$, and we train the policy $\pi_\theta(a \mid s, z)$. During meta-training, PEARL learns the policy parameters $\theta$ and the inference network $q_\phi(z \mid c)$, which is factorized as $q_\phi(z \mid c) \propto \prod_i \Psi_\phi(z \mid c_i)$, where the factors $\Psi_\phi$ are Gaussian, resulting in a Gaussian posterior. The parameters $\theta$ and $\phi$ are learned with an off-policy algorithm that additionally learns a critic. At meta-test time, $z$ is sampled before every rollout, and the new data is used to update the posterior over $z$.
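The Gaussian factor product behind this posterior can be sketched in closed form for a scalar latent. In PEARL the factor means and variances come from the learned encoder network, so the precision-weighted combination below only illustrates the aggregation step:

```python
import numpy as np

def gaussian_posterior(mus, vars_, prior_mu=0.0, prior_var=1.0):
    """Precision-weighted product of independent Gaussian context factors
    N(mu_i, var_i) with a unit Gaussian prior, for a scalar latent z.
    Returns the posterior mean and variance."""
    mus, vars_ = np.asarray(mus, float), np.asarray(vars_, float)
    precision = 1.0 / prior_var + np.sum(1.0 / vars_)          # precisions add
    mean = (prior_mu / prior_var + np.sum(mus / vars_)) / precision
    return mean, 1.0 / precision
```

Note that every additional context transition tightens the posterior, which is what makes adaptation progress with each collected trial.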
IV Industrial Insertion Tasks
We apply meta-learning to two real-world industrial insertion tasks, a waterproof electrical connector plug and a 3D-printed gear, pictured in Fig. 1. For our experiments, we use the Rethink Robotics Sawyer robot running a Cartesian-space end effector position controller, further detailed in Sec. V-C1; the action space is thus 3-dimensional. As observations, the current end effector position relative to the assumed goal location is used, resulting in a 3-dimensional observation space. Each real-world trial consists of 50 steps with a bounded maximum step size. The duration of each step is calculated by multiplying the length of the step with a desired average velocity. After each trial a reset is performed; the reset position is located above the insertion socket. The workspace of the robot is defined as a cylinder centered at the goal location. If the robot leaves the workspace, it is pushed back inside, perpendicular to the nearest surface of the cylinder. If an insertion is completed before the end of a trial, the end effector is kept still but rewards are collected until the end of the trial. We use the following sparse reward function during the real-world adaptation phase:
$$r(s) = \mathbb{1}\left[\, h \le h_{\text{insert}} \,\right] \qquad (1)$$
where $h$ is the measured end effector height, and the threshold $h_{\text{insert}}$ to detect a successful insertion lies below the tip of the insertion.
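The workspace constraint described above (pushing the end effector back perpendicular to the nearest cylinder surface) could be implemented roughly as follows; the radius and height arguments are placeholders for the workspace dimensions, which are not reproduced here:

```python
import numpy as np

def clamp_to_cylinder(pos, radius, height):
    """Project an end-effector position (relative to the goal) back into a
    cylindrical workspace, perpendicular to the nearest surface."""
    x, y, z = pos
    r = np.hypot(x, y)
    if r > radius:                      # outside the side surface: pull radially inward
        x, y = x * radius / r, y * radius / r
    z = min(max(z, 0.0), height)        # clamp between the bottom and top caps
    return np.array([x, y, z])
```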
The key insight of this work is that industrial insertion tasks have shared structure that can be exploited by learning from data on a family of tasks. Thus, in order to obtain a general meta-RL policy for the real-world, we first design a family of tasks in simulation to reflect the real world tasks. Then, we use meta-learning in simulation to learn a policy and task embedding that allows fast adaptation to new tasks in that family. Finally, we apply the learned policy in the real world, where the complete task is to first grasp a part and then insert it. Below, we describe each of these steps in detail.
V-A Simulated Environment Design
To simulate the family of industrial insertion tasks, we use the MuJoCo physics engine [37]. The simulated environment, shown in Fig. 2, contains the Sawyer robot, a table, blocks on the table that form a hole, and a block that fits into this hole located in the robot’s parallel gripper. The blocks on the table are fixed and cannot move, and the block inside the robot’s gripper is welded to the end effector. As in the real world, Cartesian-space end effector position control is used, with a bounded maximum step size in each of the 3 directions. The family of tasks is generated by randomizing simulation parameters. The following parameters are randomized:
- The horizontal offset of the goal.
- The clearance of the insertion task, modified by changing the size of the block while the size of the square hole remains fixed.
- The scaling of the position controller’s step size.
Additionally, before each reset, the reset position of the end effector is uniformly sampled inside a cube located above the ideal goal.
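Generating the task family then amounts to drawing these parameters per task. The sketch below illustrates this; all numeric ranges are illustrative placeholders, not the actual randomization values used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Sample one simulated insertion task. All ranges are hypothetical
    placeholders for the paper's unspecified randomization magnitudes."""
    return {
        "goal_offset": rng.uniform(-0.002, 0.002, size=2),   # horizontal goal shift (m)
        "clearance": rng.uniform(0.0005, 0.002),             # block/hole clearance (m)
        "step_scale": rng.uniform(0.5, 1.5),                 # controller step scaling
        "reset_offset": rng.uniform(-0.01, 0.01, size=3),    # start-pose perturbation (m)
    }
```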
The observation space of the simulated environment consists of the 3-dimensional end effector location, measured relative to the ideal goal, to which the random perturbations are added. Centering observations with respect to the calibrated goal location allows the reuse of the final policy on different robot setups.
The reward function during the simulated meta-training is the negative $\ell_2$-distance to a full insertion at the current goal location. During the meta-adaptation phase, the sparse reward function in Eq. (1) is used, because the exact goal location is not known at test time, while a successful insertion can be indicated via a height measurement. The choice of rewards during training and test time is comparable to prior work [28].
V-B Sim-to-Real Transfer via Meta Reinforcement Learning
Using the simulator, we train a policy with the meta-RL algorithm on the family of tasks. Although any meta-RL algorithm could potentially be used, in this work we use PEARL for a number of reasons. First, due to its capability for off-policy training, it is highly sample-efficient. Second, PEARL learns a task embedding, which allows it to explicitly learn a latent structure over the family of tasks. This property of the algorithm also allows for very fast adaptation, which is vital in the real world, where collecting samples can be expensive. The training of PEARL is outlined in Alg. 1. The meta-RL policy trained in simulation is then able to adapt to tasks sampled from the training distribution within a small number of trials.
We then perform policy adaptation on the real system until consistent performance is reached, as detailed in Alg. 2. From the perspective of the algorithm, the real system is just another task to be adapted to. This simple procedure is surprisingly effective at learning robust, adaptive controllers.
V-C Real-World Execution
While we only train the insertion skill in simulation, in the real world the task is to first grasp the part and then insert it. In this section, we cover the real-world implementation details including a more accurate controller for the Sawyer, and an algorithm for grasp detection and correction.
V-C1 Robot Impedance Controller
The control scheme we developed for precise end effector position control of the Sawyer robot is presented in Alg. 3. With this controller, the robot consistently reaches a target with high precision. In addition to the low-level control, we added a non-interfering high-level impedance controller that does not decrease precision. In comparison, using position commands instead of velocity commands resulted in a larger average position error, and the manufacturer’s default end effector position controller and provided impedance controller were less precise still.
To safely perform insertion tasks, we developed an impedance controller that operates in end effector position space. After each execution of Alg. 3, the vertical force at the end effector is measured. If it exceeds a threshold, a small upwards move is initiated. If the force still exceeds the threshold, a larger upwards move follows. This procedure can also be used to achieve a desired downwards force, which we do in experiments with policies that only control the horizontal movement. An additional safety feature, shown in Alg. 3, is a low-frequency measurement of the end effector forces in between the high-frequency commands that are sent to the robot. When a force threshold is exceeded, the robot stops the current motion and waits for the next commanded action. During upwards movements, this safety feature is disabled to prevent the robot from getting stuck while pressing down.
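The force-triggered retraction described above can be sketched as follows; the threshold and step sizes are hypothetical values, not the tuned ones used on the Sawyer:

```python
def vertical_safety_move(measure_force, move_up, threshold=5.0, steps=(0.001, 0.005)):
    """Retract upward in increasingly large steps while the measured vertical
    force exceeds a threshold. `measure_force` returns the current force (N)
    and `move_up` commands an upward move of the given size (m); the numeric
    values are illustrative placeholders."""
    for step in steps:
        if measure_force() <= threshold:
            return                      # force has relaxed; stop retracting
        move_up(step)                   # otherwise retract and re-measure
```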
V-C2 Grasp Algorithm
A RealSense D435 depth camera is mounted to the robot arm and used to scan the workspace to calculate a grasp based on a depth image. We clean the depth image from artifacts and use a hand-tuned distance threshold to binarize the image. In most cases, this already extracts individual objects sufficiently. We then apply a contour finding algorithm to extract rectangular contours, check if the size of a found contour matches the assumed object size, and use temporal filtering to average the object location over multiple frames. The grasp is planned along one of the principal axes of the rectangular bounding box. Hand-eye calibration is used to find the corresponding real-world coordinates in the robot frame. The requirements for this grasp approach are that the graspable object is clearly detectable in the depth image and that the distance threshold and the assumed object size are set appropriately.
V-C3 Grasp Error Correction
In a real factory setting, each object that is about to be inserted by a robot needs to be grasped first. This increases the time per insertion attempt and induces unavoidable grasp errors when using a non-self-centering parallel gripper. In order to resemble the real setting as precisely as possible, we include grasping in our experimental setup. To mitigate grasp errors, we propose a grasp-correction algorithm that only requires a single image taken of the bottom of the grasped object to calculate the object’s displacement with respect to a reference grasp.
Our grasp correction algorithm uses an image of the bottom of the grasped object and compares it with a reference image using cross-correlation. From the cross-correlation of the new image with the reference image, the translation with respect to the reference grasp pose can be inferred reliably. The goal location is then adjusted based on the computed grasp error.
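A standard way to recover the translation from cross-correlation is via the FFT; the following sketch estimates an integer pixel shift between the grasped-object image and the reference (the paper does not specify its exact implementation, so this is only one plausible realization):

```python
import numpy as np

def estimate_shift(image, reference):
    """Estimate the integer (row, col) translation of `image` relative to
    `reference` via FFT-based circular cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(reference))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # unwrap circular peaks so shifts past the midpoint come out negative
    shift = [int(p) if p <= s // 2 else int(p) - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)
```

In practice the recovered shift, scaled by the camera calibration, gives the offset by which the goal location is corrected.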
Rotational grasp errors are not considered because the objects were not observed to rotate inside the parallel gripper and the rotation of the fixed goal location was calibrated. In different setups, however, rotations may be a major source of error and should be investigated.
VI Experimental Results
We conduct a series of experiments to answer the following questions:
1. Can PEARL learn to robustly adapt to novel insertion tasks in simulation?
2. Is it possible to adapt insertion policies learned using meta-RL in simulation to the real world?
3. How does sim-to-real meta-RL compare to existing solutions to robotic insertion problems, in terms of robustness and efficiency?
4. What patterns and behavior does the algorithm learn in simulation that allow it to transfer to the real world?
We address each of these questions in our experimental evaluation, presented below.
VI-A Adaptation in Simulation
First, we examine the performance of PEARL on our family of simulated insertion tasks. The adaptation performance on test tasks after training is shown in Fig. 5.
In the results, we see that the trained policies do not reliably solve the tasks zero-shot. But given 20 trials in the new environment, the algorithm can successfully adapt to solve each of the new tasks.
VI-B Real-World Adaptation
After training in simulation, we adapt the meta-RL policy to tasks in the real world. As discussed in Sec. V-C3, in the real-world tasks the object (either the connector or the gear) is picked up using our grasp system, each grasp is evaluated using a camera image, and grasp errors are compensated according to our grasp correction algorithm. Since each grasp is slightly different, grasping introduces an additional challenge, requiring our method to compensate for this realistic source of variability.
In addition to the grasping, we consider robustness to poorly calibrated setups by perturbing the goal location. Thus, we evaluate the method on five tasks between the two use cases: the plug insertion task with no noise and with two magnitudes of added goal noise, and the gear task with and without goal noise.
The real-world adaptation results are presented in Fig. 6. The results show that in each case we can adapt to all the tasks in less than 20 trials of real-world interaction.
The goal perturbation presents a challenge to all methods. We see that our method is still able to solve the insertion task. In contrast to the heuristic methods, our method has the capacity to learn complex search strategies and to adapt continuously. Since the simulated meta-training phase included sufficient randomization of the goal location, the meta-trained policy explores quite broadly when adapting to the real world.
VI-C Comparison of Robustness and Sample Efficiency
Table I: success rate and insertion time for each method on the five tasks (RL from scratch: success rates 1.0, 0.32, 0.0, 1.0, 0.92; insertion time 5.8).
In these experiments, we evaluate whether meta-RL is a viable solution for industrial insertion tasks, comparing the method to existing solutions that are used today. We compare to four strong baselines, which are either covered in past research or used widely in industry. In total, we compare the following methods:
Straight downwards. Move straight downwards from the reset position.
Random search. In this stochastic search policy, described in [18], the robot moves horizontally between search points sampled uniformly inside a square search space centered above the assumed goal location. The robot moves downwards at the first sampled point until a vertical contact force is sensed at the end effector. If no successful insertion is detected, the end effector moves back upwards until the measured force decreases below a threshold and then moves horizontally to the next sampled point, where it attempts another insertion in the same way. At most 50 random insertion attempts are executed in each trial.
Spiral search. The robot generates a spiral above the assumed goal location and iteratively attempts to insert downwards at points along the spiral [18]. During the downwards movement, a force threshold is used to indicate contact and signals the robot to move to the next point in the same way as described above for random search. In our implementation, the distance to the center increases by a fixed amount each rotation and insertions are attempted at evenly spaced points along the spiral. At most, the robot moves through 50 points.
RL from scratch. We train SAC [10] in the real world from scratch, using the same action space, state space, and sparse reward function as in the real-world adaptation phase with PEARL, described in Sec. IV. Training with SAC requires substantially more environment steps than adapting PEARL, which is why we rigidly mount the adapter to the robot’s gripper and leave out the grasping during training. At test time, the grasping is performed. The SAC policies were trained for 2 hours and 20 minutes; repeated success was already visible after 1 hour and 20 minutes of training.
PEARL Sim2Real. Our method using meta RL, as described in Section V.
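The spiral search baseline above can be sketched by sampling points along an Archimedean spiral; the pitch and point spacing here are illustrative placeholders, not the tuned values:

```python
import numpy as np

def spiral_points(n=50, pitch=0.001, spacing=0.001):
    """Generate n candidate insertion points (x, y) on an Archimedean spiral
    around the assumed goal. `pitch` is the radial growth per turn and
    `spacing` the approximate arc length between points; both are
    hypothetical values."""
    pts, theta = [], 0.0
    for _ in range(n):
        r = pitch * theta / (2 * np.pi)
        pts.append((r * np.cos(theta), r * np.sin(theta)))
        # advance by roughly constant arc length; the max() guard avoids
        # division by zero at the spiral center
        theta += spacing / max(r, spacing)
    return pts
```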
We evaluate these methods along two dimensions. Most importantly, we measure the success rate of each method on a task. We also measure the time needed for each insertion to compare the efficiency of the different methods; the moment of successful insertion is detected via a height threshold. The measurement of efficiency is important for practical applications, since throughput is a major consideration in industrial settings. The insertion times were averaged over 10 successful insertions per task and policy.
Results of the experiments performed on all five tasks are reported in Table I. We immediately see that our method is the only one that consistently solves every task, and it is almost always the fastest, except when moving straight down suffices. The gear use case is visibly more difficult and not solvable with naïve downwards movement. The two heuristic search methods, random search and spiral search, are not always able to succeed in the more difficult settings within the given 50 steps. Meta-RL sim-to-real transfer shows the best performance on the most difficult tasks. Videos of our results can be viewed at http://pearl-insertion.github.io.
VI-D What behavior does the policy transfer from simulation?
We believe the main knowledge transferred from sim-to-real is structured exploration noise. We investigate this by comparing the learned stochastic policy in PEARL to a deterministic evaluation of this policy, obtained by always choosing the most likely action, i.e., the mean of the policy’s Gaussian output. Prior work has consistently found that, although stochasticity helps at training time, the deterministic policy gives better final returns [10]. In Fig. 7 and Tab. II, we compare the stochastic and deterministic policies when learning in simulation and performing sim-to-real transfer with PEARL.
TABLE II: PEARL success rate.

| Task | Deterministic | Stochastic |
| --- | --- | --- |
| Connector Plug +3mm | 0.44 | 1.0 |
As shown in Fig. 7, the stochastic policy consistently achieves a higher success rate. During the real-world adaptation, we observed better exploration with the stochastic policy, as well as slightly better final performance, reported in Tab. II. The failed insertion attempts of the deterministic policy occurred because the gear became stuck at the first stage of the insertion. This physical phenomenon was not modeled in the simulation. However, the stochastic policy was still able to recover in all cases because it produced oscillating movements around the contact point of the insertion. In Fig. 8, we visualize the computed actions of a PEARL policy that was trained on a 2D sparse point robot environment with uniformly distributed goals around the origin and adapted in the real world on the electrical connector plug task. The deterministic policy does not perform any movement inside the goal region, whereas the stochastic policy learned to fully explore the goal region. We observed this movement inside the goal region to be beneficial when performing insertions in the real world, as a slight misjudgement of the shape, size, and location of the goal region can be compensated by these stochastic actions.
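The distinction between the two evaluation modes amounts to taking the mean of the policy's Gaussian output head versus sampling from it, e.g.:

```python
import numpy as np

rng = np.random.default_rng(0)

def act(mean, std, deterministic=False):
    """Gaussian policy head: deterministic evaluation returns the most likely
    action (the mean); stochastic evaluation samples around it, producing the
    structured exploration noise discussed above."""
    mean, std = np.asarray(mean, float), np.asarray(std, float)
    return mean if deterministic else rng.normal(mean, std)
```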
Finally, we can infer what behavior is learned by analyzing the situations in which the sim-to-real transfer with PEARL did not work well. For instance, the real-world adaptation failed when the randomization of the reset position was left out during the training in simulation. The trained meta-policy did not learn a stable behavior outside of the direct paths to the training goals. In the real world adaptation phase, inaccuracies of the real robot’s movement caused the end effector to enter unstable regions, in which a continuous movement in a direction away from the origin occurred. The real-world adaptation also failed when the randomization amount was too high, as sometimes none of the insertion attempts during the real-world adaptation phase succeeded. Due to the use of sparse rewards, PEARL does not obtain any explicit information about the goal location in this case. When we observed this failure case, we reduced the amount of randomization in simulation.
VII Conclusion
In this paper, we studied meta-reinforcement learning for industrial insertion tasks. Our method first performs meta-training in a low-fidelity simulation, and then actively adapts to a variety of real-world insertion and assembly tasks. This approach can solve complex real-world tasks in under 20 trials, performing connector assembly and a 3D-printed gear insertion task. We also demonstrated the feasibility of our method under challenging conditions, such as noisy goal specification and complex connector geometries.
Our method shifts the burden of engineering robotics solutions from designing accurate analytic physical models to designing a family of representative simulated tasks. Furthermore, as our method requires experience in the real world only for the final adaptation step, the work of designing the simulation may be amortized across many tasks. Thus, we believe that our work illustrates the potential of meta-RL to provide a scalable and general method for rapid adaptation in manufacturing and industrial robotics.
This work was supported by the Siemens Corporation, the Office of Naval Research under a Young Investigator Program Award, and Berkeley DeepDrive.
[1] (2016) Learning to learn by gradient descent by gradient descent. In NeurIPS, pp. 3988–3996.
[2] (2019) A data-efficient framework for training and sim-to-real transfer of navigation policies. In ICRA, pp. 782–788.
[3] (2019) Closing the Sim-to-Real Loop: Adapting Simulation Randomization with Real World Experience. In ICRA.
[4] (2001) Search strategies for peg-in-hole assemblies with position uncertainty. In IROS.
[5] (2016) RL^2: Fast Reinforcement Learning via Slow Reinforcement Learning.
[6] (2018) Robotic grasping and manipulation competition: competitor feedback and lessons learned. In Robotic Grasping and Manipulation, Y. Sun and J. Falco (Eds.), Cham, pp. 180–189.
[7] (2017) Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. In ICML.
[8] (2018) Reverse Curriculum Generation for Reinforcement Learning. In ICLR.
[9] (2018) Recasting Gradient-Based Meta-Learning As Hierarchical Bayes. In ICLR.
[10] (2018) Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor. In ICML.
[11] Learning from Demonstrations for Real World Reinforcement Learning. In AAAI Conference on Artificial Intelligence.
[12] (2017) Deep reinforcement learning for high precision assembly tasks. In IROS, pp. 819–825.
[13] (2019) Residual Reinforcement Learning for Robot Control. In ICRA.
[14] (2018) Self-supervised Deep Reinforcement Learning with Generalized Computation Graphs for Robot Navigation. In ICRA.
[15] End-to-End Training of Deep Visuomotor Policies. Journal of Machine Learning Research (JMLR) 17(1), pp. 1334–1373.
[16] (2014) Localization and Manipulation of Small Parts Using GelSight Tactile Sensing. In IROS.
[17] (2019) Reinforcement learning on variable impedance controller for high-precision robotic assembly. In ICRA.
[18] (2018) Multi-Robot Assembly Strategies and Metrics. ACM Computing Surveys 51(1), pp. 14.
[19] (2019) Active Domain Randomization. In CoRL.
[20] Playing Atari with Deep Reinforcement Learning. NIPS Workshop on Deep Learning, pp. 1–9.
[21] (2018) Overcoming Exploration in Reinforcement Learning with Demonstrations. In ICRA.
[22] (2019) Solving Rubik’s Cube with a Robot Hand.
[23] (2013) Intuitive peg-in-hole assembly strategy with a compliant manipulator. In IEEE ISR 2013, pp. 1–5.
[24] (2018) Sim-to-Real Transfer of Robotic Control with Dynamics Randomization. In ICRA.
[25] (2010) Relative Entropy Policy Search. In AAAI Conference on Artificial Intelligence, pp. 1607–1612.
[26] (2018) Asymmetric Actor Critic for Image-Based Robot Learning. In RSS.
[27] (2018) Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations. In Robotics: Science and Systems.
[28] (2019) Efficient off-policy meta-reinforcement learning via probabilistic context variables. In ICML, pp. 5331–5340.
[29] (2019) BayesSim: adaptive domain randomization via probabilistic inference for robotics simulators. In Robotics: Science and Systems.
[30] (2017) CAD2RL: Real Single-Image Flight Without a Single Real Image. In RSS.
[31] (2019) Deep Reinforcement Learning for Industrial Insertion Tasks with Visual Inputs and Natural Rewards. arXiv preprint arXiv:1906.05841.
[32] Mastering the game of Go with deep neural networks and tree search. Nature 529(7587), pp. 484–489.
[33] (2018) Residual Policy Learning. arXiv preprint arXiv:1812.06298.
[34] (2018) Universal Planning Networks. In ICML.
[35] (2017) Learning from the hindsight plan – episodic MPC improvement. In ICRA, pp. 336–343.
[36] (2017) Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World. In IROS.
[37] (2012) MuJoCo: A physics engine for model-based control. In IROS, pp. 5026–5033.
[38] (2017) Leveraging Demonstrations for Deep Reinforcement Learning on Robotics Problems with Sparse Rewards. CoRR abs/1707.0.
[39] (1977) Force feedback control of manipulator fine motions.
[40] (1982) Quasi-static assembly of compliantly supported rigid parts. ASME J. Dynamic Systems, Measurement, and Control 104, pp. 65–77.
[41] (2019) Learning to Learn How to Learn: Self-Adaptive Visual Navigation Using Meta-Learning. In CVPR.
[42] (2019) Environment Probing Interaction Policies. In ICLR.