Meta-Reinforcement Learning for Robotic Industrial Insertion Tasks

04/29/2020 ∙ by Gerrit Schoettler, et al. ∙ Siemens AG ∙ UC Berkeley

Robotic insertion tasks are characterized by contact and friction mechanics, making them challenging for conventional feedback control methods due to unmodeled physical effects. Reinforcement learning (RL) is a promising approach for learning control policies in such settings. However, RL can be unsafe during exploration and might require a large amount of real-world training data, which is expensive to collect. In this paper, we study how to use meta-reinforcement learning to solve the bulk of the problem in simulation by solving a family of simulated industrial insertion tasks and then adapt policies quickly in the real world. We demonstrate our approach by training an agent to successfully perform challenging real-world insertion tasks using less than 20 trials of real-world experience.


I Introduction

How can we embed prior knowledge into robot control systems? For simple tasks, an engineer can embed the entire solution into the system by instructing desired joint angle configurations for a robot to follow. Approaches for more complicated tasks might embed physical modeling into the control system; however, this is often brittle because many real-world physical effects are difficult to capture accurately.

In this paper we consider the family of industrial insertion tasks where the robot inserts a grasped part into a tight-fitting socket. Today, the engineering time for fine-tuning state-of-the-art feedback controllers for such tasks can be similar in cost to the robot hardware itself. Flexible manufacturing with smaller lot-sizes and faster engineering cycles requires being able to quickly synthesize robust control policies, which can handle variability. This also broadens the space of manufacturing tasks accessible to robot automation. Notably, while the family of insertion tasks share significant structure, few existing methods have demonstrated the capability to take advantage of that similarity. Many of the most effective current methods for compliant robotic insertion [39, 40, 4, 23] require physical models, or else rely on manually-tuned feedback controllers to attain satisfactory performance.

Ideally, the task structure for an insertion task should be automatically inferred from the experience of having solved similar tasks. This insight leads us to meta-reinforcement learning methods, which, given experience with a family of tasks, adapt quickly to a new task from this family. However, while reinforcement learning (RL) methods can solve a task by learning from data, applying RL in the real world on many tasks is expensive. To circumvent this problem, we represent a task distribution entirely in simulation. Here, we can control various facets of the environment, samples are cheap, and reward functions are easy to specify. In simulation, we learn the latent structure of the task using probabilistic embeddings for actor-critic RL (PEARL), an off-policy meta-RL algorithm, which embeds each task into a latent space [28]. The meta-learning algorithm first learns the task structure in simulation by training on a wide variety of generated insertion tasks. For our family of insertion tasks, the size and placement of the components, parameters of the controller, and magnitude of sensor errors are all randomized, resulting in the policy learning robust and adaptive search strategies based on the latent task code. After training in simulation, the algorithm is then capable of quickly adapting to a real-world task.

In this work, we solve industrial robotic insertion problems by learning the latent structure of the tasks with meta-RL. Concretely, we look at the task of grasping and inserting two parts: a Misumi E-model electrical connector into its socket (one of the most challenging tasks from the IROS 2017 Robotic Grasping and Manipulation Competition [6]) and a gear on a shaft. We adapt the same policy, which was learned in simulation, to each of the two tasks, despite their distinct physical properties. Moreover, in each task, our method adapts within 20 trials, significantly fewer than in previous work. We present the robotic system, including a module that accounts for grasp errors from camera images. Finally, we present a comprehensive evaluation of our method against both conventional methods and learning-based methods for different degrees of environment variability.

Fig. 1: We present results on solving two real-world use cases of robotic industrial insertion tasks: plugging in an electrical connector and a gear assembly task, both shown on the left. We model a family of simulated insertion tasks by randomizing simulator parameters. Next, we use meta-reinforcement learning in simulation to learn a latent embedding of tasks and a policy that can rapidly adapt to a new task in that family. Finally, we show that the policy can indeed be quickly adapted to real-world tasks with only 20 trials on the physical robot. Videos and other materials are available at http://pearl-insertion.github.io.

II Related Work

Studies on peg-in-hole insertion have been ubiquitous in industrial automation, as it is representative of many common assembly problems. The key challenges involved in insertion tasks are modeling of physical contacts, and handling errors in perception and control. Since physical modeling of contacts and friction is often difficult, deployed controllers for insertions are based on heuristic search patterns that handle the issues implicitly. These methods include random search, spiral search, or raster search [4]. The search patterns are combined with compliant control methods, which have been among the first model-based strategies for solving insertion tasks [39, 40, 23]. The parameters of these controllers are manually tuned in order to overcome perception and control errors for a specific system. The search patterns are often embedded in control state-machines, which guide the system. Other approaches focus on high-precision assembly by taking advantage of high-dimensional geometry or contact information [16, 35, 12, 17]. These approaches require a significant amount of engineering, modeling and tuning.

Instead of relying on human ingenuity to solve robotic control tasks, the paradigm of RL has promise to autonomously learn the control policy from data. RL has thus far been used in a variety of settings, such as playing ping-pong [25]. RL with expressive function approximators, or deep RL, further allows tasks to be learned from raw sensor inputs such as images. Deep RL has shown success in games [20, 32], in learning fine robotic manipulation skills [15] and navigation [14]. Specifically, peg insertion tasks have commonly been considered in deep RL settings. Florensa et al. investigate difficult insertion tasks with sparse rewards in simulation using a reverse curriculum [8]. Another approach to solving these tasks is to use prior data such as demonstrations [11, 21, 27, 31]. Vecerik et al. [38] perform a real-world insertion task utilizing demonstrations. Alternatively, one can learn a residual policy for contacts that is superposed with conventional controls [13, 33].

Another line of work considers first learning on simulation models of a task and then transferring the policy to the real world. One approach is domain randomization, which trains on a wide distribution of tasks in simulation assuming that the real world task is captured in that distribution [30, 36, 24, 26]. Further work adaptively randomizes the distribution of the tasks [22, 29, 19]. One can also actively adapt the simulator by switching between simulation and real-world interaction to guide the simulator [42, 3]. In this work, we take a different approach by using meta-learning to learn a distribution of skills in simulation, followed by adaptation in the real world.

III Background

In this section, we define basic notation and describe reinforcement learning and meta-learning.

III-A Reinforcement Learning

We model our problem as a Markov decision process, where an agent at every discrete timestep $t$ is in state $s_t$, executes an action $a_t$, receives a scalar reward $r(s_t, a_t)$, and the environment evolves to the next state $s_{t+1}$ according to the transition probability $p(s_{t+1} \mid s_t, a_t)$. The agent attempts to maximize the expected return $\mathbb{E}\left[\sum_{t=0}^{T} \gamma^t r(s_t, a_t)\right]$, where $T$ is the planning horizon and $\gamma$ is a discount factor. In reinforcement learning, the agent learns a policy $\pi(a_t \mid s_t)$ that is optimized from data.
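To make the return concrete, the following is a minimal sketch of the finite-horizon discounted return; the reward sequence and discount value are illustrative placeholders, not values taken from this paper.

def discounted_return(rewards, gamma):
    """Finite-horizon discounted return: sum_{t=0}^{T-1} gamma^t * r_t."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# Illustrative 50-step trial: -1 per step until the goal is reached, then 0.
rewards = [-1.0] * 30 + [0.0] * 20
print(discounted_return(rewards, gamma=0.99))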

III-B Meta-Learning

Meta-learning is the problem of rapid adaptation: given experience in some family of tasks, how can we use that experience to quickly adapt to a new task at test time? Formally, consider a task $\mathcal{T}$ to be defined by the reward function $r(s, a)$, initial state distribution $p(s_0)$, and transition distribution $p(s_{t+1} \mid s_t, a_t)$. We consider some distribution over tasks $p(\mathcal{T})$, which we want to perform well on at test time by collecting limited experience during training time.

Several methods have explored this setting. One class of methods separates training into meta-training and meta-testing, and attempts to learn a model (a policy, forward model, or loss function) during meta-training that improves meta-test performance [7, 9, 34, 2, 41]. In the meta-RL setting, these methods effectively take advantage of back-propagation through on-policy gradient updates, which limits their sample efficiency.

The other class of methods effectively learns a latent representation of the task [5, 1, 28]. The last of these can take advantage of off-policy data, allowing sample-efficient learning on real robots; we describe the algorithm further in the following section.

III-C Probabilistic Embeddings for Actor-Critic RL

Probabilistic embeddings for actor-critic RL (PEARL) is a meta-learning algorithm that enables sample-efficient meta-learning by reusing past data with off-policy RL algorithms [28]. The key idea is to condition the policy on the past transitions of the current task, which are termed the context $c$. The context is encoded into a latent variable $z$, and we train the conditioned policy $\pi_\theta(a \mid s, z)$. During meta-training, PEARL learns the policy parameters $\theta$ and the inference network $q_\phi(z \mid c)$, which is factorized as $q_\phi(z \mid c_{1:N}) \propto \prod_{n=1}^{N} \Psi_\phi(z \mid c_n)$, where the factors $\Psi_\phi(z \mid c_n)$ are Gaussian, resulting in a Gaussian posterior. The parameters $\theta$ and $\phi$ are learned with an off-policy algorithm that additionally learns a critic. At meta-test time, $z$ is sampled from the posterior before every rollout, and the new data is used to update the posterior.
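As an illustration of how such a Gaussian posterior over the latent task variable can be formed from per-transition factors, the sketch below multiplies independent Gaussian factors (precisions add, means are precision-weighted). The per-transition means and variances would come from the learned context encoder; here they are mocked with random numbers.

import numpy as np

def combine_gaussian_factors(means, variances):
    """Product of independent Gaussian factors Psi(z | c_n) -> Gaussian posterior q(z | c).

    means, variances: arrays of shape (N, latent_dim), one factor per context transition.
    """
    precisions = 1.0 / variances                      # precision of each factor
    post_var = 1.0 / precisions.sum(axis=0)           # posterior precisions add
    post_mean = post_var * (precisions * means).sum(axis=0)
    return post_mean, post_var

# Hypothetical encoder outputs for 5 context transitions with a 3-dimensional latent.
rng = np.random.default_rng(0)
mu = rng.normal(size=(5, 3))
var = np.exp(rng.normal(size=(5, 3)))
post_mu, post_var = combine_gaussian_factors(mu, var)
z = rng.normal(post_mu, np.sqrt(post_var))            # sample z ~ q(z | c) for the policy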

IV Industrial Insertion Tasks

We apply meta-learning to two real-world industrial insertion tasks, a waterproof electrical connector plug and a 3D-printed gear, pictured in Fig. 1. For our experiments, we use the Rethink Robotics Sawyer robot running a Cartesian-space end effector position controller, further detailed in Sec. V-C1. Thus the action space is 3-dimensional. As observations, the current end effector position relative to the assumed goal location is used, resulting in a 3-dimensional observation space. Each real-world trial consists of 50 steps with a maximum step size of . The duration of each step is calculated by multiplying the length of the step with a desired average velocity of . After each trial a reset is performed; the reset position is located above the insertion socket. The workspace of the robot is defined as a cylinder with a radius of and a height of , centered at the goal location. If the robot leaves the workspace, it is pushed back inside, perpendicular to the nearest surface of the cylinder. If an insertion is completed before the end of a trial, the end effector is kept still but rewards are collected until the end of the trial. We use the following sparse reward function during the real-world adaptation phase:

$$r(s_t) = \begin{cases} 0 & \text{if } z_t \le z_{\text{thresh}} \\ -1 & \text{otherwise} \end{cases} \qquad (1)$$

where $z_t$ is the measured end effector height and the threshold $z_{\text{thresh}}$ used to detect a successful insertion with a height measurement lies below the tip of the insertion.
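A minimal sketch of a height-based sparse reward of this form follows; the reward values mirror Eq. (1), and the threshold margin is a placeholder, since the exact offset below the insertion tip is specific to the physical setup.

def sparse_insertion_reward(ee_z, tip_z, margin=0.002):
    """Sparse reward in the spirit of Eq. (1): 0 once the end effector height is
    below the insertion tip by `margin`, and -1 otherwise. margin (m) is a placeholder."""
    return 0.0 if ee_z <= tip_z - margin else -1.0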

V Method

The key insight of this work is that industrial insertion tasks have shared structure that can be exploited by learning from data on a family of tasks. Thus, in order to obtain a general meta-RL policy for the real-world, we first design a family of tasks in simulation to reflect the real world tasks. Then, we use meta-learning in simulation to learn a policy and task embedding that allows fast adaptation to new tasks in that family. Finally, we apply the learned policy in the real world, where the complete task is to first grasp a part and then insert it. Below, we describe each of these steps in detail.

V-A Simulated Environment Design

To simulate the family of industrial insertion tasks, we use the physics engine MuJoCo [37]. The simulated environment, shown in Fig. 2, contains the Sawyer robot, a table, blocks on the table that form a hole, and a block that fits into this hole located in the robot’s parallel gripper. The blocks on the table are fixed and cannot move, and the block inside the robot’s gripper is welded to the end effector. As in the real world, Cartesian-space end effector position control is used, with the maximum step size in each of the 3 directions set to . The family of tasks is generated by randomizing simulation parameters. The following parameters are randomized:

  • , the horizontal offset of the goal, within .

  • , the clearance of the insertion task, modified by changing the size of the block between and , while the size of the square hole remains fixed at .

  • , the scaling of the position controller’s step size, in the range .

Additionally, the reset position of the end-effector is uniformly sampled inside a cube with side length , located above the ideal goal, before each reset.
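To illustrate how such a family of tasks can be generated, the sketch below samples one set of randomized simulator parameters; all parameter names and ranges are placeholders chosen for illustration, not the values used in the paper.

import numpy as np

def sample_insertion_task(rng, hole_size=0.012):
    """Sample one simulated insertion task; all ranges are illustrative placeholders."""
    return {
        "goal_offset_xy": rng.uniform(-0.002, 0.002, size=2),  # horizontal goal offset (m)
        "block_size": rng.uniform(0.010, hole_size),            # clearance via block size (m)
        "ctrl_scale": rng.uniform(0.5, 1.5),                    # position-controller step scaling
        "reset_pos": rng.uniform(-0.01, 0.01, size=3) + np.array([0.0, 0.0, 0.05]),  # above goal
    }

rng = np.random.default_rng(0)
tasks = [sample_insertion_task(rng) for _ in range(100)]  # family of meta-training tasks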

Fig. 2: In the simulated environment, we model the connector insertion with a square block that is inserted into a slightly larger square hole. The parameters of the simulator are randomized to generate a family of insertion tasks of different difficulties.

The observation space of the simulated environment consists of the 3-dimensional end effector location, measured relative to the ideal goal, to which the random perturbations are added. Centering observations with respect to the calibrated goal location allows the final policy to be reused on different robot setups.

The reward function during the simulated meta-training is the negative distance to a full insertion at the current goal location. During the meta-adaptation phase, the sparse reward function in Eq. (1) is used. This is done because the exact goal location is not known at test time, but a successful insertion can be indicated via a height measurement. The choice of rewards during training and test time is comparable to prior work [28].

V-B Sim-to-Real Transfer via Meta Reinforcement Learning

Using the simulator, we train a policy with the meta-RL algorithm on the family of tasks. Although any meta-RL algorithm could potentially be used, in this work we use PEARL for a number of reasons. First, due to its capability for off-policy training, it is highly sample efficient. Second, PEARL learns a task embedding, which allows it to explicitly learn a latent structure over the family of tasks. This property of the algorithm also allows for very fast adaptation, which is vital in the real world, as collecting real-world samples can be expensive. The training of PEARL is outlined in Alg. 1. The meta-RL policy trained in simulation is then able to adapt to tasks sampled from the training distribution within a small number of trials.

We then perform policy adaptation on the real system until consistent performance is reached, as detailed in Alg. 2. From the perspective of the algorithm, the real system is just another task to be adapted to. This simple procedure is surprisingly effective at learning robust, adaptive controllers.

1: Batch of simulated training tasks $\{\mathcal{T}_i\}$ from $p(\mathcal{T})$, learning rates $\alpha_1, \alpha_2, \alpha_3$
2: Initialize replay buffers $\mathcal{B}^i$ for each training task
3: while not done do
4:    for each task $\mathcal{T}_i$ do
5:       Initialize context $C^i = \{\}$
6:       for $k = 1, \dots, K$ do
7:          Sample $z \sim q_\phi(z \mid C^i)$
8:          Gather data from $\pi_\theta(a \mid s, z)$ and add to $\mathcal{B}^i$
9:          Update $C^i$ with transitions sampled from $\mathcal{B}^i$
10:         Sample $z \sim q_\phi(z \mid C^i)$
11:      end for
12:   end for
13:   for step in training steps do
14:      for each $\mathcal{T}_i$ do
15:         Sample context $C^i \sim \mathcal{B}^i$ and RL batch $b^i \sim \mathcal{B}^i$
16:         Sample $z \sim q_\phi(z \mid C^i)$
17:         $\mathcal{L}^i_{\text{actor}} = \mathcal{L}_{\text{actor}}(b^i, z)$
18:         $\mathcal{L}^i_{\text{critic}} = \mathcal{L}_{\text{critic}}(b^i, z)$
19:         $\mathcal{L}^i_{\text{KL}} = \beta \, D_{\text{KL}}\left(q_\phi(z \mid C^i) \,\|\, p(z)\right)$
20:      end for
21:      $\phi \leftarrow \phi - \alpha_1 \nabla_\phi \sum_i (\mathcal{L}^i_{\text{critic}} + \mathcal{L}^i_{\text{KL}})$
22:      $\theta_\pi \leftarrow \theta_\pi - \alpha_2 \nabla_{\theta_\pi} \sum_i \mathcal{L}^i_{\text{actor}}$
23:      $\theta_Q \leftarrow \theta_Q - \alpha_3 \nabla_{\theta_Q} \sum_i \mathcal{L}^i_{\text{critic}}$
24:   end for
25: end while
Algorithm 1 PEARL Training in Simulation
1: Trained meta-RL policy $\pi_\theta$, trained context encoder $q_\phi$, real test task $\mathcal{T}_{\text{real}}$
2: for $k = 1, \dots, K$ do
3:    Sample $z \sim q_\phi(z \mid C)$
4:    Roll out policy $\pi_\theta(a \mid s, z)$ to collect data $D_k$
5:    Accumulate context $C = C \cup D_k$
6: end for
Algorithm 2 PEARL Sim-to-Real Adaptation
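In pseudo-Python, the adaptation loop of Alg. 2 alternates posterior sampling with rollouts on the real robot; policy, encoder, and env are stand-ins for the trained networks and the robot interface, and the trial counts mirror the 20-trial budget and 50-step trials described above.

def adapt_in_real_world(policy, encoder, env, num_trials=20, steps_per_trial=50):
    """Sketch of Alg. 2: adapt a meta-trained policy by accumulating real-world context."""
    context = []                                   # transitions observed on the real task
    for _ in range(num_trials):
        z = encoder.sample_posterior(context)      # z ~ q_phi(z | C); prior if context is empty
        obs = env.reset()
        for _ in range(steps_per_trial):
            action = policy.sample_action(obs, z)  # pi_theta(a | s, z)
            next_obs, reward, done, _ = env.step(action)
            context.append((obs, action, reward, next_obs))
            obs = next_obs
            if done:
                break
    return context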

V-C Real-World Execution

While we only train the insertion skill in simulation, in the real world the task is to first grasp the part and then insert it. In this section, we cover the real-world implementation details including a more accurate controller for the Sawyer, and an algorithm for grasp detection and correction.

V-C1 Robot Impedance Controller

The control scheme we developed for precise end effector position control of the Sawyer robot is presented in Alg. 3. With this controller, the robot consistently reaches a target with a precision of . In addition to the low-level control, we added a non-interfering high-level impedance controller that does not decrease precision. Using position commands instead of velocity commands resulted in an average position error of . With the default end effector position controller provided by the manufacturer, a target was reachable within ; the provided impedance controller showed an error of .

1: Desired end effector location $x_{\text{des}}$, orientation, and duration $T$ of the action
2: Calculate desired joint angles $q_{\text{des}}$ via inverse kinematics
3: Form smooth spline $s(t)$ between the current joint angles $q$ and $q_{\text{des}}$
4: while $t < T$ do
5:    Evaluate $\dot{q}_{\text{cmd}}$ as the slope of $s(t)$ at time $t$
6:    Send $\dot{q}_{\text{cmd}}$ as joint velocity command to the actuators
7:    Measure end effector forces $F$
8:    if any $|F| > F_{\text{max}}$ then
9:       break
10:   end if
11: end while
Algorithm 3 Robot Control Scheme

To safely perform insertion tasks, we developed an impedance controller that operates in end effector position space. After each execution of Alg. 3, the vertical force at the end effector is measured. If it exceeds a threshold of , a small upwards move is initiated. If the force still exceeds the threshold, a larger upwards move follows. This procedure can also be used to achieve a desired downwards force, which we do in experiments with policies that only control the horizontal movement. An additional safety feature, shown in Alg. 3, is a low-frequency measurement of the end effector forces in between the high-frequency commands that are sent to the robot. When a threshold of is exceeded, the robot stops the current motion and waits for the next commanded action. During upwards movements, this safety feature is disabled to prevent the robot from getting stuck while pressing down.
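A sketch of the force-triggered retreat just described; the force limit, the step sizes, and the robot interface are placeholders, not the actual values or API of our system.

def relieve_vertical_force(robot, force_limit=10.0, small_step=0.002, large_step=0.01):
    """After each commanded motion, move up if the measured downward force is too large.

    force_limit (N) and the step sizes (m) are illustrative placeholders.
    """
    if robot.measure_vertical_force() > force_limit:
        robot.move_relative(dz=+small_step)        # first try a small upward move
        if robot.measure_vertical_force() > force_limit:
            robot.move_relative(dz=+large_step)    # still pressing too hard: move further up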

V-C2 Grasp Algorithm

A RealSense D435 depth camera is mounted on the robot arm and used to scan the workspace and calculate a grasp based on a depth image. We remove artifacts from the depth image and use a hand-tuned distance threshold to binarize the image. In most cases, this already extracts individual objects sufficiently. We then apply a contour finding algorithm to extract rectangular contours, check whether the size of a found contour matches the assumed object size, and use temporal filtering to average the object location over multiple frames. The grasp is planned along one of the principal axes of the rectangular bounding box. Hand-eye calibration is used to find the corresponding real-world coordinates in the robot frame. The requirements for this grasp approach are that the graspable object is clearly detectable in the depth image and that the distance threshold and the assumed object size are set appropriately.
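A rough sketch of this pipeline using OpenCV; the depth cutoff and the expected contour area are hand-tuned placeholders standing in for the thresholds described above, and temporal filtering and hand-eye calibration are omitted.

import cv2
import numpy as np

def detect_graspable_object(depth_mm, depth_cutoff_mm=600, expected_area_px=(800, 3000)):
    """Binarize a depth image and return the rotated bounding box of a plausibly sized object.

    depth_mm is a single-channel depth image in millimetres; the cutoff and area range
    are illustrative, hand-tuned placeholders.
    """
    valid = (depth_mm > 0) & (depth_mm < depth_cutoff_mm)      # drop artifacts and far pixels
    mask = valid.astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        area = cv2.contourArea(contour)
        if expected_area_px[0] < area < expected_area_px[1]:   # size check against expectation
            return cv2.minAreaRect(contour)                    # (center, (w, h), angle)
    return None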

Fig. 3: Robot view through a RealSense D435 camera mounted on the robot arm. The depth image is binarized with a tuned threshold and contours are extracted. A rectangular bounding box (red) is drawn for contours of the expected size and used to calculate an aligned and centered grasp. Averaging the calculated grasps over multiple frames improved the stability, resulting in 100 consecutive successful grasps when only a single object is in frame.

V-C3 Grasp Error Correction

In a real factory setting, each object that is about to be inserted by a robot needs to be grasped first. This increases the time per insertion attempt and induces unavoidable grasp errors when using a non-self-centering parallel gripper. In order to resemble the real setting as precisely as possible, we include the grasping in our experimental setup. To mitigate grasp errors, we propose a grasp-correction algorithm that only requires a single image of the bottom of the grasped object to calculate the object’s displacement with respect to a reference grasp.

Our grasp correction algorithm uses an image of the bottom of the grasped object and compares it with a reference image using cross-correlation. From the cross-correlation of the new image with the reference image, the translation with respect to the reference grasp pose can be inferred reliably. The goal location is then adjusted based on the computed grasp error.
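A minimal sketch of estimating the translational grasp offset with FFT-based cross-correlation; normalization, sub-pixel refinement, and the pixel-to-metre conversion via the camera calibration are omitted.

import numpy as np

def estimate_grasp_offset(image, reference):
    """Estimate the (row, col) pixel shift of `image` relative to `reference`
    via FFT-based cross-correlation; both are same-shape grayscale arrays."""
    img = image - image.mean()
    ref = reference - reference.mean()
    corr = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
    dims = np.array(corr.shape)
    shift = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    shift[shift > dims / 2] -= dims[shift > dims / 2]  # wrap circular shifts to signed offsets
    return shift  # convert to metres with the camera calibration before correcting the goal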

Rotational grasp errors are not considered because the objects were not observed to rotate inside the parallel gripper and the orientation of the fixed goal location was calibrated. In different setups, however, rotations may be a major source of error and should be investigated.

Fig. 4: Camera view when scanning the bottom of the part to calculate the pixel offset with respect to a reference grasp. A Kinect v1 camera is mounted upside down on the table to take an image after each grasp.

VI Experimental Results

We conduct a series of experiments to answer the following questions:

  A. Can PEARL learn to robustly adapt to novel insertion tasks in simulation?

  B. Is it possible to adapt insertion policies learned using meta-RL in simulation to the real world?

  C. How does sim-to-real meta-RL compare to existing solutions to robotic insertion problems, in terms of robustness and efficiency?

  D. What patterns and behaviors does the algorithm learn in simulation that allow it to transfer to the real world?

We address each of these questions in our experimental evaluation, presented below.

VI-A Adaptation in Simulation

First, we examine the performance of PEARL on our family of simulated insertion tasks. The adaptation performance on test tasks after training is shown in Fig. 5.

Fig. 5: Success rate on test tasks in the adaptation phase in simulation after training. Per experiment, 20 random seeds are evaluated. We see that the PEARL policy successfully learned to adapt to unseen tasks in simulation. Descriptions of the randomized parameters are given in Section V-A.

In the results, we see that the zero-shot performance of the trained policies is about . But given 20 trials in the new environment, the algorithm can successfully adapt to solve each of the new tasks.

VI-B Real-World Adaptation

After training in simulation, we adapt the meta-RL policy to tasks in the real world. As discussed in Sec. V-C3, in the real-world tasks the object (either the connector or the gear) is picked up using our grasp system, each grasp is evaluated using a camera image, and grasp errors are compensated according to our grasp correction algorithm. Since each grasp is slightly different, grasping introduces an additional challenge, requiring our method to compensate for this realistic source of variability.

In addition to the grasping, we consider robustness to poorly calibrated setups by perturbing the goal location. Thus, we evaluate the method on five different tasks across the two use cases: the plug insertion task with no noise, mm noise, and mm noise, and the gear task with no noise and mm noise.

The real-world adaptation results are presented in Fig. 6. The results show that we can adapt to each of the tasks in fewer than 20 trials of real-world interaction.

Fig. 6: Comparison of the success rates after each trial of the adaptation phase in the real world. Note that in each of the five tasks, our method is able to adapt to the task in fewer than 20 real-world trials.

The goal perturbation presents a challenge to all methods. We see that our method is still able to solve the insertion task. In contrast to the heuristic methods, our method has the capacity to learn very complex search strategies and to adapt continuously. Since the simulated meta-training phase included sufficient randomization of the goal location, the meta-trained policy explores quite broadly when adapting to the real world.

VI-C Comparison of Robustness and Sample Efficiency

Success Rate

Policy           | Plug (0 mm) | Plug (+ mm) | Plug (+ mm) | Gear (0 mm) | Gear (+ mm)
Straight Down    | 0.88        | 0.0         | 0.0         | 0.32        | 0.0
Random Search    | 1.0         | 1.0         | 1.0         | 0.48        | 0.32
Spiral Search    | 1.0         | 1.0         | 1.0         | 0.84        | 0.8
RL from Scratch  | 1.0         | 0.32        | 0.0         | 1.0         | 0.92
PEARL Sim2Real   | 1.0         | 1.0         | 1.0         | 1.0         | 1.0

Insertion Time (s)

Policy           | Measured insertion times
Straight Down    | 3.3, 5.3
Random Search    | 5.6, 7.0, 9.7, 20.3
Spiral Search    |
RL from Scratch  | 5.8
PEARL Sim2Real   | 6.8, 8.2, 5.7, 8.0

TABLE I: Comparison of real-world insertion performance. Moving straight down to the goal location is not a reliable insertion strategy for this task. Due to the high precision requirements, random insertion strategies also fail. Only the policy trained with PEARL achieves a very high success rate on this task. The second table shows the average time per insertion in seconds; the measurement started at the tip of the insertion and stopped when fully inserted.

In these experiments, we evaluate whether meta-RL is a viable solution for industrial insertion tasks, comparing the method to existing solutions that are used today. We compare to four strong baselines, which are either covered in past research or used widely in industry. In total, we compare the following methods:

  1. Straight downwards. Move straight downwards from the reset position.

  2. Random search. In this stochastic search policy, described in [18], the robot moves horizontally between search points that are sampled uniformly inside of a square-shaped search space with side length , centered above the assumed goal location. The robot moves downwards at the first sampled point until a vertical contact force of is sensed at the end effector. If no successful insertion is detected, the end effector moves back upwards until the measured force decreases below and then moves horizontally to the next sampled point, where it attempts another insertion in the same way. At most 50 random insertion attempts are executed in each trial.

  3. Spiral search. The robot generates a spiral above the assumed goal location and iteratively attempts to insert downwards at points along the spiral [18]. During the downwards movement, a force threshold of is used to indicate contact, signaling the robot to move on to the next point in the same way as described above for random search. In our implementation, the distance to the center increases by each rotation and insertions are attempted at points apart along the spiral. At most, the robot moves through 50 points. A sketch of the spiral point generation is given after this list.

  4. RL from scratch. We train SAC [10] in the real world from scratch, using the same action space, state space and sparse reward function as in the real-world adaptation phase with PEARL, described in Sec. IV. The training with SAC requires substantially more environment steps than adapting PEARL, which is why we choose to rigidly mount the adapter to the robot’s gripper and leave out the grasping during the training. At test-time, the grasping is performed. The SAC policies were trained for 2 hours and 20 minutes; repeating success was already visible after 1 hour and 20 minutes of training.

  5. PEARL Sim2Real. Our method using meta RL, as described in Section V.
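As referenced in the spiral-search baseline above, the following is a minimal sketch of generating candidate insertion points along an Archimedean spiral; for simplicity it spaces attempts at a fixed angular increment rather than at a fixed arc length, and all parameter values are placeholders.

import numpy as np

def spiral_search_points(num_points=50, radial_step=0.0005, points_per_turn=8):
    """Generate (x, y) offsets along an Archimedean spiral around the assumed goal.

    radial_step is the radius gained per full turn and points_per_turn controls how
    densely insertion attempts are placed; both values are illustrative placeholders.
    """
    points = []
    for k in range(num_points):
        theta = 2.0 * np.pi * k / points_per_turn       # angle of the k-th attempt
        r = radial_step * theta / (2.0 * np.pi)         # Archimedean spiral: r grows with theta
        points.append((r * np.cos(theta), r * np.sin(theta)))
    return points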

We evaluate these methods along two dimensions. Most importantly, we measure the success rate of each method on a task. We also measure the time needed for each insertion to compare the efficiency of the different methods; the moment of successful insertion is detected via a height threshold. The measurement of efficiency is important for practical applications, since throughput is a major consideration in industrial settings. The insertion times were averaged over 10 successful insertions per task and policy.

Results of the experiments performed on all five tasks are reported in Table I. We immediately see that our method is the only one that consistently solves every task, and it is almost always the fastest, except when moving straight down works. The gear use case is visibly more difficult and not solvable with naïve downwards movement. The two heuristic search methods, random search and spiral search, are not always able to succeed at the more difficult settings within the given 50 steps. Meta-RL sim-to-real transfer shows the best performance on the most difficult tasks. Videos of our results can be viewed at http://pearl-insertion.github.io.

VI-D What behavior does the policy transfer from simulation?

We believe the main knowledge transferred from sim to real is structured exploration noise. We investigate this by comparing the learned stochastic policy in PEARL to a deterministic evaluation of the same policy, obtained by always choosing the most likely action, i.e., the mean of the Gaussian output distribution. Prior work has consistently found that, although stochasticity helps at training time, the deterministic policy gives better final returns [10]. In Fig. 7 and Table II, we compare the stochastic and deterministic policies when learning in simulation and performing sim-to-real transfer with PEARL.

PEARL Success Rate      | Deterministic | Stochastic
Connector Plug (+3 mm)  | 0.44          | 1.0
Gear (+2 mm)            | 0.84          | 1.0
TABLE II: Comparison of deterministic and stochastic evaluation. A deterministic policy is also able to adapt to the real-world setting, but shows slightly less consistency at test time. When getting stuck at the tip of the insertion, the deterministic policy predominantly failed to recover, whereas the stochastic policy still managed to solve the task in most cases.

As shown in Fig. 7, the stochastic policy consistently achieves a higher success rate. During the real-world adaptation, we observed better exploration with the stochastic policy, as well as slightly better final performance, reported in Tab. II. The failed insertion attempts of the deterministic policy happened because the gear became stuck at the first stage of the insertion. This physical phenomenon was not modeled in the simulation. However, the stochastic policy was still able to recover in all cases because it produced oscillating movements around the contact point of the insertion. In Fig. 8, we visualize the computed actions of a PEARL policy that was trained on a 2D sparse point-robot environment with uniformly distributed goals around the origin and adapted in the real world on the electrical connector plug task. The deterministic policy does not perform any movement inside the goal region, whereas the stochastic policy learned to fully explore the goal region. We observed this movement inside the goal region to be beneficial when performing insertions in the real world, as a slight misjudgment of the shape, size, and location of the goal region can be compensated by these stochastic actions.
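The difference between the two evaluation modes reduces to whether the Gaussian policy head is sampled or its mean is taken; a minimal sketch follows, where the mean and log-standard-deviation would come from the trained policy network.

import numpy as np

def select_action(mean, log_std, deterministic, rng=np.random.default_rng()):
    """Evaluate a Gaussian policy head either deterministically (its mean) or stochastically."""
    if deterministic:
        return np.asarray(mean)                  # most likely action: no exploration noise
    return rng.normal(mean, np.exp(log_std))     # sampled action: structured noise around the mean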

Fig. 7: Comparison of PEARL training in simulation with either a deterministic or a stochastic policy. Although the deterministic policy is formed by taking the most likely action of the stochastic policy, our model of the insertion task benefits from a stochastic policy evaluation. We attribute this to the more extensive exploration during the adaptation step of PEARL.
Fig. 8: Visualization of the policy outputs on a grid around a goal that is slightly shifted to the left. Here, a policy was trained on a two-dimensional version of our simulated environment, where movement in the z-direction was disabled. In contrast to a deterministic policy evaluation, a stochastic evaluation also shows movement inside the goal region, which we observed to be beneficial when executing such a policy on a real robot.

Finally, we can infer what behavior is learned by analyzing the situations in which the sim-to-real transfer with PEARL did not work well. For instance, the real-world adaptation failed when the randomization of the reset position was left out during training in simulation. The trained meta-policy did not learn stable behavior outside of the direct paths to the training goals. In the real-world adaptation phase, inaccuracies in the real robot’s movement caused the end effector to enter unstable regions, in which it moved continuously away from the origin. The real-world adaptation also failed when the amount of randomization was too high, as sometimes none of the insertion attempts during the real-world adaptation phase succeeded. Due to the use of sparse rewards, PEARL does not obtain any explicit information about the goal location in this case. When we observed this failure case, we reduced the amount of randomization in simulation.

VII Conclusion

In this paper, we studied meta-reinforcement learning for industrial insertion tasks. Our method first performs meta-training in a low-fidelity simulation, and then actively adapts to a variety of real-world insertion and assembly tasks. This approach can solve complex real-world tasks in under 20 trials, performing connector assembly and a 3D-printed gear insertion task. We also demonstrated the feasibility of our method under challenging conditions, such as noisy goal specification and complex connector geometries.

Our method shifts the burden of engineering robotics solutions from designing accurate analytic physical models to designing a family of representative simulated tasks. Furthermore, as our method requires experience in the real world only for the final adaptation step, the work of designing the simulation may be amortized across many tasks. Thus, we believe that our work illustrates the potential of meta-RL to provide a scalable and general method for rapid adaptation in manufacturing and industrial robotics.

VIII Acknowledgements

This work was supported by the Siemens Corporation, the Office of Naval Research under a Young Investigator Program Award, and Berkeley DeepDrive.

References

  • [1] M. Andrychowicz, M. Denil, S. G. Colmenarejo, M. W. Hoffman, D. Pfau, T. Schaul, B. Shillingford, and N. De Freitas (2016) Learning to learn by gradient descent by gradient descent. In NeurIPS, pp. 3988–3996.
  • [2] H. Bharadhwaj, Z. Wang, Y. Bengio, and L. Paull (2019) A data-efficient framework for training and sim-to-real transfer of navigation policies. In International Conference on Robotics and Automation (ICRA), pp. 782–788.
  • [3] Y. Chebotar, A. Handa, V. Makoviychuk, M. Macklin, J. Issac, N. Ratliff, and D. Fox (2019) Closing the Sim-to-Real Loop: Adapting Simulation Randomization with Real World Experience. In ICRA.
  • [4] S. R. Chhatpar and M. S. Branicky (2001) Search strategies for peg-in-hole assemblies with position uncertainty. In IROS.
  • [5] Y. Duan, J. Schulman, X. Chen, P. L. Bartlett, I. Sutskever, and P. Abbeel (2016) RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning. arXiv preprint arXiv:1611.02779.
  • [6] J. Falco, Y. Sun, and M. Roa (2018) Robotic grasping and manipulation competition: competitor feedback and lessons learned. In Robotic Grasping and Manipulation, Y. Sun and J. Falco (Eds.), pp. 180–189.
  • [7] C. Finn, P. Abbeel, and S. Levine (2017) Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. In ICML.
  • [8] C. Florensa, D. Held, M. Wulfmeier, and P. Abbeel (2018) Reverse Curriculum Generation for Reinforcement Learning. In ICLR.
  • [9] E. Grant, C. Finn, S. Levine, T. Darrell, and T. Griffiths (2018) Recasting Gradient-Based Meta-Learning As Hierarchical Bayes. In ICLR.
  • [10] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine (2018) Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor. In ICML.
  • [11] T. Hester, M. Vecerik, O. Pietquin, M. Lanctot, T. Schaul, B. Piot, D. Horgan, J. Quan, A. Sendonaris, G. Dulac-Arnold, I. Osband, J. Agapiou, J. Z. Leibo, and A. Gruslys (2018) Learning from Demonstrations for Real World Reinforcement Learning. In AAAI Conference on Artificial Intelligence.
  • [12] T. Inoue, G. De Magistris, A. Munawar, T. Yokoya, and R. Tachibana (2017) Deep reinforcement learning for high precision assembly tasks. In IROS, pp. 819–825.
  • [13] T. Johannink, S. Bahl, A. Nair, J. Luo, A. Kumar, M. Loskyll, J. Aparicio Ojea, E. Solowjow, and S. Levine (2019) Residual Reinforcement Learning for Robot Control. In ICRA.
  • [14] G. Kahn, A. Villaflor, B. Ding, P. Abbeel, and S. Levine (2018) Self-supervised Deep Reinforcement Learning with Generalized Computation Graphs for Robot Navigation. In ICRA.
  • [15] S. Levine, C. Finn, T. Darrell, and P. Abbeel (2016) End-to-End Training of Deep Visuomotor Policies. Journal of Machine Learning Research (JMLR) 17 (1), pp. 1334–1373.
  • [16] R. Li, R. Platt, W. Yuan, A. Ten Pas, N. Roscup, M. A. Srinivasan, and E. Adelson (2014) Localization and Manipulation of Small Parts Using GelSight Tactile Sensing. In IROS.
  • [17] J. Luo, E. Solowjow, C. Wen, J. Aparicio Ojea, A. Agogino, A. Tamar, and P. Abbeel (2019) Reinforcement learning on variable impedance controller for high-precision robotic assembly. In ICRA.
  • [18] J. A. Marvel, R. Bostelman, and J. Falco (2018) Multi-Robot Assembly Strategies and Metrics. ACM Computing Surveys 51 (1), pp. 14.
  • [19] B. Mehta, M. Diaz, F. Golemo, C. J. Pal, and L. Paull (2019) Active Domain Randomization. In Conference on Robot Learning (CoRL).
  • [20] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller (2013) Playing Atari with Deep Reinforcement Learning. In NIPS Workshop on Deep Learning, pp. 1–9.
  • [21] A. Nair, B. McGrew, M. Andrychowicz, W. Zaremba, and P. Abbeel (2018) Overcoming Exploration in Reinforcement Learning with Demonstrations. In ICRA.
  • [22] OpenAI, I. Akkaya, M. Andrychowicz, M. Chociej, M. Litwin, B. McGrew, A. Petron, A. Paino, M. Plappert, G. Powell, R. Ribas, J. Schneider, N. Tezak, J. Tworek, P. Welinder, L. Weng, Q. Yuan, W. Zaremba, and L. Zhang (2019) Solving Rubik’s Cube with a Robot Hand. arXiv preprint arXiv:1910.07113.
  • [23] H. Park, J. Bae, J. Park, M. Baeg, and J. Park (2013) Intuitive peg-in-hole assembly strategy with a compliant manipulator. In IEEE ISR 2013, pp. 1–5.
  • [24] X. B. Peng, M. Andrychowicz, W. Zaremba, and P. Abbeel (2018) Sim-to-Real Transfer of Robotic Control with Dynamics Randomization. In ICRA.
  • [25] J. Peters, K. Mülling, and Y. Altün (2010) Relative Entropy Policy Search. In AAAI Conference on Artificial Intelligence, pp. 1607–1612.
  • [26] L. Pinto, M. Andrychowicz, P. Welinder, W. Zaremba, and P. Abbeel (2018) Asymmetric Actor Critic for Image-Based Robot Learning. In RSS.
  • [27] A. Rajeswaran, V. Kumar, A. Gupta, J. Schulman, E. Todorov, and S. Levine (2018) Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations. In Robotics: Science and Systems (RSS).
  • [28] K. Rakelly, A. Zhou, C. Finn, S. Levine, and D. Quillen (2019) Efficient off-policy meta-reinforcement learning via probabilistic context variables. In ICML, pp. 5331–5340.
  • [29] F. Ramos, R. Carvalhaes Possas, and D. Fox (2019) BayesSim: adaptive domain randomization via probabilistic inference for robotics simulators. In Robotics: Science and Systems (RSS).
  • [30] F. Sadeghi and S. Levine (2017) CAD2RL: Real Single-Image Flight Without a Single Real Image. In RSS.
  • [31] G. Schoettler, A. Nair, J. Luo, S. Bahl, J. Aparicio Ojea, E. Solowjow, and S. Levine (2019) Deep Reinforcement Learning for Industrial Insertion Tasks with Visual Inputs and Natural Rewards. arXiv preprint arXiv:1906.05841.
  • [32] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis (2016) Mastering the game of Go with deep neural networks and tree search. Nature 529 (7587), pp. 484–489.
  • [33] T. Silver, K. Allen, J. Tenenbaum, and L. Kaelbling (2018) Residual Policy Learning. arXiv preprint arXiv:1812.06298.
  • [34] A. Srinivas, A. Jabri, P. Abbeel, S. Levine, and C. Finn (2018) Universal Planning Networks. In ICML.
  • [35] A. Tamar, G. Thomas, T. Zhang, S. Levine, and P. Abbeel (2017) Learning from the hindsight plan — episodic MPC improvement. In ICRA, pp. 336–343.
  • [36] J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P. Abbeel (2017) Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World. In IROS.
  • [37] E. Todorov, T. Erez, and Y. Tassa (2012) MuJoCo: A physics engine for model-based control. In IROS, pp. 5026–5033.
  • [38] M. Večerík, T. Hester, J. Scholz, F. Wang, O. Pietquin, B. Piot, N. Heess, T. Rothörl, T. Lampe, and M. Riedmiller (2017) Leveraging Demonstrations for Deep Reinforcement Learning on Robotics Problems with Sparse Rewards. CoRR abs/1707.0.
  • [39] D. E. Whitney (1977) Force feedback control of manipulator fine motions.
  • [40] D. E. Whitney (1982) Quasi-static assembly of compliantly supported rigid parts. ASME J. Dynamic Systems, Measurement, and Control 104, pp. 65–77.
  • [41] M. Wortsman, K. Ehsani, M. Rastegari, A. Farhadi, and R. Mottaghi (2019) Learning to Learn How to Learn: Self-Adaptive Visual Navigation Using Meta-Learning. In Computer Vision and Pattern Recognition (CVPR).
  • [42] W. Zhou, L. Pinto, and A. Gupta (2019) Environment Probing Interaction Policies. In International Conference on Learning Representations (ICLR).