RetinaGAN: An Object-aware Approach to Sim-to-Real Transfer

11/06/2020 ∙ by Daniel Ho, et al. ∙ The Team at X

The success of deep reinforcement learning (RL) and imitation learning (IL) in vision-based robotic manipulation typically hinges on the expense of large scale data collection. With simulation, data to train a policy can be collected efficiently at scale, but the visual gap between sim and real makes deployment in the real world difficult. We introduce RetinaGAN, a generative adversarial network (GAN) approach to adapt simulated images to realistic ones with object-detection consistency. RetinaGAN is trained in an unsupervised manner without task loss dependencies, and preserves general object structure and texture in adapted images. We evaluate our method on three real world tasks: grasping, pushing, and door opening. RetinaGAN improves upon the performance of prior sim-to-real methods for RL-based object instance grasping and continues to be effective even in the limited data regime. When applied to a pushing task in a similar visual domain, RetinaGAN demonstrates transfer with no additional real data requirements. We also show our method bridges the visual gap for a novel door opening task using imitation learning in a new visual domain. Visit the project website at







I Introduction

Fig. 1: Overview of RetinaGAN pipeline. Left: Train RetinaGAN using pre-trained perception model to create a sim-to-real model. Right: Train the behavior policy model using the sim-to-real generated images. This policy can later be deployed in real.

Vision-based reinforcement learning and imitation learning methods incorporating deep neural networks can express complex behaviors, and they solve robotic manipulation tasks in an end-to-end fashion [18, 25, 15]. These methods are able to generalize and scale on complicated robot manipulation tasks, though they require many hundreds of thousands of real world episodes which are costly to collect.

Some of this data collection effort can be mitigated by collecting these required episodes in simulation and applying sim-to-real transfer methods. Simulation provides a safe, controlled platform for policy training and development with known ground truth labels. Such simulated data can be cheaply scaled. However, directly executing such a policy in the real world typically performs poorly, even if the simulation configuration is carefully controlled, because of visual and physical differences between the domains known as the reality gap. In practice, we find the visual difference to be the bottleneck in our learning algorithms and focus further discussion solely on this.

One strategy to overcome the visual reality gap is pixel-level domain adaptation; such methods may employ generative adversarial networks to translate the synthetic images to the real world domain [4]. However, a GAN may arbitrarily change the image, including removing information necessary for a given task. More broadly for robotic manipulation, it is important to preserve scene features that directly interact with the robot, like object-level structure and textures.

To address this, we propose RetinaGAN, a domain adaptation technique which enforces strong object semantic awareness through an object detection consistency loss. RetinaGAN builds on CycleGAN [37] to adapt simulated images to look more realistic while also producing consistent object predictions. We leverage an object detector trained on both simulated and real domains to make predictions on original and translated images, and we enforce invariance of these predictions with respect to the GAN translation.

RetinaGAN is a general approach to adaptation which provides reliable sim-to-real transfer for tasks in diverse visual environments (Fig. 1). In a specific scenario, we show how RetinaGAN may be reused for a novel pushing task. We evaluate the performance of our method on three real world robotics tasks and demonstrate the following:

  1. RetinaGAN, when trained on robotic grasping data, allows for grasping RL task models that outperform prior sim-to-real methods on real world grasping by 12%.

  2. With limited (5-10%) data, our method continues to work effectively for grasping, only suffering a 14% drop in performance.

  3. The RetinaGAN trained with grasping data may be reused for another similar task, 3D object pushing, without any additional real data. It achieves 90% success.

  4. We train RetinaGAN for a door opening imitation learning task in a drastically different environment, and we introduce an Ensemble-RetinaGAN method that adds more visual diversity to achieve 97% success rate.

  5. We utilize the same pre-trained object detector in all experiments.

II Related Work

To address the visual sim-to-real gap, prior works commonly apply domain randomization and domain adaptation techniques.

With domain randomization, a policy is trained with randomized simulation parameters and scene configurations which produce differences in visual appearance [31, 22, 14, 28, 34, 33]. The policy may learn to generalize across the parameter distribution and take actions likely to work in all situations. Policy performance relies heavily on the kind of randomizations applied and whether they are close to or cover reality. The recently proposed Automatic Domain Randomization [1] automates the hyperparameter tuning process for Rubik's Cube manipulation, but domain randomization still requires manual, task-specific selection of visual parameters like the scene, textures, and rendering.

Domain adaptation bridges the reality gap by directly resolving differences between the domains [23]. Images from a source domain can be modified at the pixel level to resemble a target domain [4, 35]. Alternatively, feature-level adaptation aligns intermediate network features between the domains [9, 21, 7]. GANs are a commonly applied method for pixel-level transfer which only requires unpaired images from both domains [8, 5, 37, 3, 13]. Our method employs such pixel-level adaptation to address the sim-to-real gap.

Action Image [16] is another approach to bridge the sim-to-real gap through learning a domain invariant representation for the task of grasping. Our work is complementary to this work and can help to further reduce this gap.

Among prior work that applies semantic consistency to GAN training, CyCADA [12] introduces a pixel-level perception consistency loss (semantic segmentation) as a direct task loss, and applies the learned generator to other semantic segmentation and perception tasks. Comparatively, RetinaGAN uses object detection, where labels on real data are much easier to obtain, and demonstrates that feature understanding from object detection is sufficient to preserve object semantics for robotics applications.

Recently, RL-CycleGAN [26] extends vanilla CycleGAN [37] with an additional reinforcement learning task loss. RL-CycleGAN enforces consistency of task policy Q-values between the original and transferred images to preserve information important to a given task. RL-CycleGAN is trained jointly with the RL model and requires task-specific real world episodes collected via some preexisting policy. Comparatively, RetinaGAN works for supervised and imitation learning, as it uses object detection as a task-decoupled surrogate for object-level visual domain differences. This requires additional real-world bounding box labels, but the detector can be reused across robotics tasks. In practice, we find RetinaGAN easier to train since the additional object detector is pre-trained and not jointly optimized.

III Preliminaries

III-A Object Detection

Fig. 2: Sim and real perception data used to train EfficientDet focuses on scenes of disposable objects encountered in recycling stations. The real dataset includes 44,000 such labeled images and 37,000 images of objects on desks. The simulated dataset includes 625,000 total images.

We leverage an object detection perception model to provide object awareness for the sim-to-real CycleGAN. We train the model by mixing simulated and real world datasets which contain ground-truth bounding box labels (illustrated in Fig. 2). The real world object detection dataset includes robot images collected in general robot operation; labeling granularity is based on general object type – all brands of soda will be part of the “can” class. Simulation data is generated with the PyBullet physics engine [6].

Object detection models are object-aware but task-agnostic, and thus, they do not require task-specific data. We use this single detection network as a multi-domain model for all tasks, and we suspect in-domain detection training data is not crucial to the success of our method. Notably, the door opening domain is very different from the perception training data domain, and we demonstrate successful transfer in Section V-C.

We select the EfficientDet-D1 [29] model architecture (using the same losses as RetinaNet [19]) for the object detector. EfficientDet passes an input RGB image through a backbone feedforward EfficientNet [30] architecture, and fuses features at multiple scales within a feature pyramid network. From the result, network heads predict class logits and bounding box regression targets.

We note that it is also possible to train separate perception networks for each domain. However, this adds complexity and requires that the object sets between synthetic and real data be close to bijective, because both models would have to produce consistent predictions on perfectly paired images.

While segmentation models like Mask-RCNN [11] and ShapeMask [17] provide dense, pixel-level object supervision, it is practically easier and more efficient to label object detection data. In general, perception models are common, and data collection and labeling efforts can be amortized across various use cases.

III-B CycleGAN

The RetinaGAN training process builds on top of CycleGAN [37]: an approach to learn a bidirectional mapping between unpaired datasets of images from two domains, $X$ and $Y$, with generators $G: X \to Y$ and $F: Y \to X$. These generators are trained alongside adversarial discriminators $D_X$ and $D_Y$, which classify images to the correct domain, and with the cycle consistency loss capturing $F(G(x)) \approx x$ for $x \in X$ and $G(F(y)) \approx y$ for $y \in Y$. We can summarize the training process with the CycleGAN loss (described in detail in [37, 26]):

$\mathcal{L}_{CycleGAN} = \mathcal{L}_{GAN}(G, D_Y, X, Y) + \mathcal{L}_{GAN}(F, D_X, Y, X) + \lambda_{cyc} \mathcal{L}_{cyc}(G, F)$
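The cycle-consistency term can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the generators `G` and `F` here are toy arithmetic stand-ins (a fixed brightness offset and its inverse) rather than networks, and `l1` is the usual mean absolute error used by CycleGAN.

```python
import numpy as np

def l1(a, b):
    """Mean absolute error between two image tensors."""
    return float(np.mean(np.abs(a - b)))

def cycle_consistency(G, F, x, y):
    """L_cyc: each generator should invert the other on samples
    drawn from its source domain."""
    return l1(F(G(x)), x) + l1(G(F(y)), y)

# Toy stand-ins for the generators: sim-to-real adds a fixed
# brightness offset, real-to-sim removes it.
G = lambda img: img + 0.1
F = lambda img: img - 0.1
x, y = np.zeros((4, 4)), np.ones((4, 4))
print(round(cycle_consistency(G, F, x, y), 6))  # -> 0.0
```

Perfectly inverse generators drive this term to zero; imperfect ones are penalized in proportion to how much structure the round trip destroys.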
IV RetinaGAN

1:  Given: EfficientDet detector $D$, trained with simulation and real robot data
2:  Collect simulation ($X$) and real ($Y$) task episodes
3:  while training generators $G$ and $F$ do
4:     Iterate over a batch of simulation ($x$) and real ($y$) data
5:     Compute $x' = G(x)$, $x'' = F(G(x))$, $y' = F(y)$, $y'' = G(F(y))$
6:     for pairs in $\{x, x', x''\}$, $\{y, y', y''\}$ do
7:        Compute perception consistency loss, $\mathcal{L}_{prcp}$
8:     end for
9:     Compute CycleGAN losses, $\mathcal{L}_{CycleGAN}$
10:    Take optimization step using losses
11: end while
Algorithm 1: Summary of RetinaGAN training pipeline.

RetinaGAN trains with a frozen object detector, EfficientDet, that provides object consistency loss. Once trained, the RetinaGAN model adapts simulated images for training the task policy model. Similarly to CycleGAN, we use unpaired data without labels. The overall framework is described in Algorithm 1 and illustrated in Fig. 3, and the details are described below.

Fig. 3: Diagram of RetinaGAN stages. The simulated image (top left) is transformed by the sim-to-real generator and subsequently by the real-to-sim generator. The perception loss enforces consistency on object detections from each image. The same pipeline occurs for the real image branch at the bottom.
Fig. 4: Diagram of perception consistency loss computation. An EfficientDet object detector predicts boxes and classes. Consistency of predictions between images is captured by losses similar to those in object detection training.

From CycleGAN, we have six images: sim, transferred sim, cycled sim, real, transferred real, and cycled real. Because of object invariance with respect to transfer, an oracle domain adapter would produce identical predictions between the former three images, as well as the latter three. To capture this invariance, we run inference using a pre-trained and frozen EfficientDet model on each image; for each of these pairs, we compute a perception consistency loss.
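The pairing structure for one branch can be made concrete with a short helper. This is an illustrative sketch (the function name and signature are assumptions, not the paper's code); the real branch mirrors it with the real-to-sim generator applied first.

```python
def consistency_pairs(x, G, F):
    """Enumerate one domain's image triple and the pairs fed to the
    perception consistency loss: original vs. transferred, original
    vs. cycled, and transferred vs. cycled."""
    x_t = G(x)    # transferred image (e.g. sim -> real)
    x_c = F(x_t)  # cycled image (back to the source domain)
    return [(x, x_t), (x, x_c), (x_t, x_c)]

# With toy invertible generators, the cycled image returns to the original.
pairs = consistency_pairs(1.0, lambda v: v + 1.0, lambda v: v - 1.0)
print(pairs)  # -> [(1.0, 2.0), (1.0, 1.0), (2.0, 1.0)]
```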

IV-A Perception Consistency Loss

The perception consistency loss penalizes the generator for discrepancies in object detections between translations. Given an image $x$, EfficientDet predicts a series of anchor-based bounding box regressions and class logits at several levels in its Feature Pyramid Network [20].

We compute the perception consistency loss ($\mathcal{L}_{prcp}$) given a pair of images similarly to the box and class losses in typical RetinaNet/EfficientDet training. However, because the Focal Loss [19], used as the class loss, assumes one-hot vector ground truth labels, we propose a variation called Focal Consistency Loss (FCL) which is compatible with logit/probability labels (explained below in Section IV-B).

Without loss of generality, consider an image pair to be $x$ and $x'$. This loss can be computed with a pre-trained EfficientDet network as:

$\mathcal{L}_{prcp}(x, x') = \mathcal{L}_{Huber}(box(x), box(x')) + FCL(cls(x), cls(x'))$

where $\mathcal{L}_{Huber}$ is the Huber Loss [10] used as the box regression loss, and $box(\cdot)$ and $cls(\cdot)$ denote the predicted box regressions and class probabilities. This process is visualized in Fig. 4.
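A pairwise version of this loss can be sketched as follows. The dictionary layout (`boxes` for anchor box regressions, `probs` for per-anchor class probabilities) is a hypothetical flattening of EfficientDet's multi-level outputs, and the class term here is a simple soft cross entropy stand-in; the paper's actual class term is FCL (Section IV-B).

```python
import numpy as np

def huber(x, delta=1.0):
    """Elementwise Huber loss on box regression differences."""
    a = np.abs(x)
    return np.where(a <= delta, 0.5 * x ** 2, delta * (a - 0.5 * delta))

def soft_ce(p, y, eps=1e-7):
    """Stand-in class consistency term: binary cross entropy that
    accepts soft probability targets y."""
    return -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def perception_consistency_loss(det_a, det_b):
    """L_prcp between frozen-detector outputs on an image pair:
    box regressions compared with Huber loss, class probabilities
    compared with a soft cross-entropy term."""
    box_loss = float(huber(det_a["boxes"] - det_b["boxes"]).mean())
    cls_loss = float(soft_ce(det_b["probs"], det_a["probs"]).mean())
    return box_loss + cls_loss
```

Identical detections incur only the (constant) entropy of the class targets, so any increase in the loss comes from the generator moving or relabeling objects.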

The Perception Consistency Loss on a batch of simulated images $x$ and real images $y$, using the sim-to-real generator $G$ and the real-to-sim generator $F$, is:

$\mathcal{L}_{prcp}(x, y, G, F) = \mathcal{L}_{prcp}(x, x') + \tfrac{1}{2}\mathcal{L}_{prcp}(x, x'') + \tfrac{1}{2}\mathcal{L}_{prcp}(x', x'') + \mathcal{L}_{prcp}(y, y') + \tfrac{1}{2}\mathcal{L}_{prcp}(y, y'') + \tfrac{1}{2}\mathcal{L}_{prcp}(y', y'')$

where $x' = G(x)$, $x'' = F(G(x))$, $y' = F(y)$, and $y'' = G(F(y))$. We halve the losses involving the cycled images $x''$ and $y''$ because they are compared twice (against the original and transferred images), but find that this weight has little effect in practice.
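The halving scheme for one branch can be written out directly. This is a sketch under the weighting described above; `lprcp` stands for any pairwise consistency loss, and the function name is an assumption.

```python
def branch_perception_loss(x, x_t, x_c, lprcp):
    """One branch of the batch perception loss; pairs involving the
    cycled image x_c are weighted by 1/2 since x_c appears in two
    comparisons (against the original and the transferred image)."""
    return (lprcp(x, x_t)
            + 0.5 * lprcp(x, x_c)
            + 0.5 * lprcp(x_t, x_c))

# With a toy absolute-difference loss and x=0, x_t=1, x_c=0:
# 1 + 0.5*0 + 0.5*1 = 1.5
print(branch_perception_loss(0.0, 1.0, 0.0, lambda a, b: abs(a - b)))  # -> 1.5
```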

We arrive at the overall RetinaGAN loss:

$\mathcal{L}_{RetinaGAN} = \mathcal{L}_{CycleGAN} + \lambda_{prcp} \mathcal{L}_{prcp}$
IV-B Focal Consistency Loss (FCL)

We introduce and derive a novel, interpolated version of the Focal Loss (FL) called Focal Consistency Loss (FCL), which extends support to a ground-truth confidence probability $\hat{y} \in [0, 1]$ from a binary $y \in \{0, 1\}$. Focal losses handle class imbalances in one-stage object detectors, improving upon Cross Entropy (CE) and Balanced Cross Entropy (BCE) losses (Section 3, [19]).

We begin from CE loss, which can be defined as:

$CE(p, y) = -\log(p_t), \quad p_t = \begin{cases} p & \text{if } y = 1 \\ 1 - p & \text{otherwise} \end{cases}$

where $p$ is the predicted probability.

BCE loss handles class imbalance by including a weighting term $\alpha$ if $y = 1$ and $1 - \alpha$ if $y = 0$. Interpolation between these two terms yields:

$\alpha_t = \hat{y}\alpha + (1 - \hat{y})(1 - \alpha)$
Focal Loss weights BCE by a focusing factor of $(1 - p_t)^\gamma$, where $\gamma \geq 0$ and $p_t$ is $p$ if $y = 1$ and $1 - p$ if $y = 0$, to address foreground-background imbalance. FCL is derived through interpolation between the binary cases of $p_t$:

$p_t = \hat{y}p + (1 - \hat{y})(1 - p), \quad FCL(p, \hat{y}) = -\alpha_t (1 - p_t)^\gamma \log(p_t)$
FCL is equivalent to FL when the class targets are one-hot labels, but interpolates the loss for probability targets. Finally, FL is normalized by the number of anchors assigned to ground-truth boxes (Section 4, [19]). Instead, FCL is normalized by the total probability attributed to anchors in the class tensor. This weights each anchor by its inferred probability of being a ground-truth box.
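The interpolation above can be checked numerically: at one-hot targets FCL must coincide with FL, and at soft targets it interpolates between the two binary branches. A minimal scalar sketch (per anchor and class, before normalization; the standard defaults alpha = 0.25, gamma = 2 are assumptions carried over from RetinaNet):

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Standard Focal Loss for a binary (one-hot) target y in {0, 1}."""
    p_t = p if y == 1 else 1.0 - p
    a_t = alpha if y == 1 else 1.0 - alpha
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t)

def focal_consistency_loss(p, y_hat, alpha=0.25, gamma=2.0):
    """FCL: interpolate the binary cases of p_t and alpha_t so the
    target y_hat may be a soft probability in [0, 1]."""
    p_t = y_hat * p + (1.0 - y_hat) * (1.0 - p)
    a_t = y_hat * alpha + (1.0 - y_hat) * (1.0 - alpha)
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t)

# FCL reduces to FL exactly at one-hot targets.
assert abs(focal_consistency_loss(0.9, 1.0) - focal_loss(0.9, 1)) < 1e-12
assert abs(focal_consistency_loss(0.9, 0.0) - focal_loss(0.9, 0)) < 1e-12
```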

V Task Policy Models and Experiments

We test the following hypotheses: 1) the value of sim-to-real transfer at various dataset sizes, by comparing robotics models trained with and without RetinaGAN; 2) how models trained with various GANs perform when using purely sim-to-real data; and 3) transfer to other tasks.

We begin with training and evaluating RetinaGAN for RL grasping. We then proceed by applying the same RetinaGAN model to RL pushing and finally re-train on an IL door opening task. See the Appendix for further details on training and model architecture.

V-A Reinforcement Learning: Grasping

We use the distributed reinforcement learning method Q2-Opt [2], an extension to QT-Opt [15], to train a vision-based task model for instance grasping. In the grasping task, a robot is positioned in front of one of three bins within a trash sorting station and attempts to grasp targeted object instances. The RGB image and a binary mask for the grasp target are input into the network. Real world object classes are focused on cups, cans, and bottles, although real training data is exposed to a long tail of discarded objects. Grasps in simulation are performed with the PyBullet [6] physics engine, with 9 to 18 spawned objects per scene. Example images are visualized in Fig. 5.

When using real data, we train RetinaGAN on 135,000 off-policy real grasping episodes and the Q2-Opt task model on 211,000 real episodes. We also run a low data experiment using 10,000 real episodes for training both RetinaGAN and Q2-Opt. We run distributed simulation to generate one-half to one million on-policy training episodes for RetinaGAN and one to two million for Q2-Opt.

We evaluate with six robots and sorting stations. Two robots are positioned in front of each of the three waste bins, and a human manually selects a cup, can, or bottle to grasp. Each evaluation includes thirty grasp attempts for each class, for ninety total. By assuming each success-failure experience is an independent Bernoulli trial, we can estimate the sample standard deviation as $\sigma = \sqrt{p(1 - p)/n}$, where $p$ is the average failure rate and $n$ is the number of trials.
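This estimate is a one-liner, and reproduces the estimated standard deviations reported alongside the success rates (since p(1 − p) is symmetric, using the success rate gives the same value):

```python
import math

def bernoulli_std(rate: float, n: int) -> float:
    """Estimated standard deviation of a rate measured over n
    independent Bernoulli trials: sqrt(p * (1 - p) / n)."""
    return math.sqrt(rate * (1.0 - rate) / n)

# 80.0% grasp success over 90 trials -> ~4.2%, matching Table I.
print(round(bernoulli_std(0.80, 90) * 100, 1))  # -> 4.2
```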

Fig. 5: Sampled, unpaired images for the grasping task at various scales translated with either the sim-to-real (left) or real-to-sim (right) generator. Compared to other methods, the sim-to-real RetinaGAN consistently preserves object textures and better reconstructs real features. The real-to-sim RetinaGAN is able to preserve all object structure in cluttered scenes, and it correctly translates details of the robot gripper and floor.

Model Grasp Success Est. Std.

Sim-Only 18.9% 4.1%
Randomized Sim 41.1% 5.2%

GAN: 10K Real, Q2-Opt: 10K Real
Real 22.2% 4.4%
RetinaGAN 47.4% 5.3%
RetinaGAN+Real 65.6% 5.0%

GAN: 135K Real, Q2-Opt: 211K Real
Real 30.0% 4.9%
Sim+Real 54.4% 5.3%
RetinaGAN+Real 80.0% 4.2%

GAN: 135K Real, Q2-Opt: 0 Real
CycleGAN [37] 67.8% 5.0%
RL-CycleGAN [26] 68.9% 4.9%
RetinaGAN 80.0% 4.2%

TABLE I: Instance grasping success mean and estimated standard deviation (est. std.) of Q2-Opt compared between different training data sources across 90 trials. Results are organized by the number of real grasping episodes used.

We use the RL grasping task to measure the sim-to-real gap and compare methods in the following scenarios, which are displayed in Table I:

  • Train by mixing 10K real episodes with simulation to gauge data efficiency in the limited data regime.

  • Train by mixing 135K+ real grasping episodes with simulation to investigate scalability with data, data efficiency, and performance against real data baselines.

  • Train Q2-Opt with simulation data only to compare RetinaGAN against other sim-to-real methods.

In the sim-only setup, we train with fixed light position and object textures, though we apply photometric distortions including brightness, saturation, hue, contrast, and noise. In simulation evaluation, a Q2-Opt model achieves 92% instance grasping success on cups, cans, and bottles. A performance of 18.9% on the real object equivalents indicates a significant sim-to-real gap from training in simulation alone.

We compare against baselines in domain randomization and domain adaptation techniques. Domain randomization includes variations in texture and light positioning.

On the limited 10K episode dataset, RetinaGAN+Real achieves 65.6%, showing significant performance improvement compared to Real-only. When training on the large real dataset, RetinaGAN achieves 80%, demonstrating scalability with more data. Additionally, we find that RetinaGAN+Real with 10K examples outperforms Sim+Real with 135K+ episodes, showing more than 10X data efficiency.

We proceed to compare our method with other domain adaptation methods; here, we train Q2-Opt solely on sim-to-real translated data for a clear comparison. RL-CycleGAN is trained with the same indiscriminate grasping task loss as in [26], but used to adapt on instance grasping. This could explain its relatively lower improvement compared to the results in [26]. RetinaGAN achieves 80%, outperforming other methods by over two standard deviations, and interestingly, is on par with RetinaGAN+Real. We hypothesize that the knowledge of the real data was largely captured during RetinaGAN training, and the near on-policy simulation data is enough to train a high performing model.

V-B Reinforcement Learning: 3D Object Pushing

Fig. 6: Example unpaired images from the object pushing task, where the robot needs to push an upright object to the goal position, the red dot, without knocking it over.

We investigate the transfer capability of RetinaGAN within the same sorting station environment by solving a 3D object pushing task. We test the same RetinaGAN model with this visually similar but distinct robotic pushing task and show that it may be reused without fine-tuning. No additional real data is required for either the pushing task or RetinaGAN.

The pushing task trains purely in simulation, using a scene with a single bottle placed within the center bin of the sorting station and the same Q2-Opt RL framework (Fig. 6). Success is achieved when the object remains upright and is pushed to within 5 centimeters of the goal location indicated by a red marker. We stack the initial image (with the goal marker) and current RGB image as input. For both sim and real world evaluation, the robot needs to push a randomly placed tea bottle to a target location in the bin without knocking it over. Further details are described in [32], a concurrent submission.

Model Push Success Est. Std.
Sim-Only 0.0% 0.0%
RetinaGAN 90.0% 10.0%
TABLE II: Success rate mean and estimated standard deviation (est. std.) of pushing an upright tea bottle to goal position across 10 attempts.

Evaluation results are displayed in Table II. We train a Q2-Opt policy to perform the pushing task in simulation only and achieve 90% sim success. When deploying the sim-only RL policy to real, we get 0% success, revealing a large sim-to-real gap. By applying RetinaGAN to the RL training data, we create a policy achieving 90% success, demonstrating strong transfer and understanding of the real domain.

V-C Imitation Learning: Door Opening

We investigate RetinaGAN with a mismatched object detector (trained on recycling objects) on a door opening task using a supervised learning form of behavioral cloning and imitation learning (IL). This task is set in a dramatically different visual domain, policy learning framework and algorithm, and neural network architecture. It involves a fixed, extended robot arm with a policy controlling the wheels of the robot base to open the doors of, and enter, conference rooms (Fig. 7).

The supervised learning policy is represented by a ResNet-FiLM architecture with 18 layers [24]. Both the RetinaGAN model and the supervised learning policy are trained on 29,000 human demonstrations in simulation and 1,500 human demonstrations on real conference doors. We train and evaluate on three conference rooms seen within the training demonstrations, with both left- and right-swinging doors, for ten trials each and thirty total trials.

Fig. 7: Images sampled from the door opening task in simulation (red border) and real (blue border). Generated images from two separately trained RetinaGAN models highlight prediction diversity in features like lighting or background; this diversity is also present in the real world dataset.

Model Seen Doors Success Est. Std.
Sim-Only 0.0% 0.0%
Real 36.6% 8.9%
Sim+Real 75.0% 8.0%
RetinaGAN+Real 76.7% 7.9%
Ensemble-RetinaGAN+Real 93.3% 4.6%
Ensemble-RetinaGAN 96.6% 3.4%
TABLE III: Success rate mean and estimated standard deviation (est. std.) of door opening across 30 trials. The RetinaGAN+Real result was selected from the best of the three models used in Ensemble-RetinaGAN+Real.

With the door opening task, we explore how our domain adaptation method performs in an entirely novel domain, training method, and action space, with a relatively low amount of real data (1,500 real demonstrations). We train the RetinaGAN model using the same object detector trained on recycling objects. This demonstrates the capacity to re-use labeled robot bounding box data across environments, eliminating further human labeling effort. Within door opening images, the perception model produces confident detections only for the robot arm, but we hypothesize that structures like door frames could be maintained by consistency in low-probability prediction regimes.

Compared to baselines without consistency loss, RetinaGAN strongly preserves room structures and door locations, while baseline methods lose this consistency (see Appendix). This semantic inconsistency in GAN baselines presents a safety risk in real world deployment, so we did not attempt evaluations with these models.

We then evaluate IL models trained with different data sources and domain adaptors, and display the results in Table III. An IL model trained on demonstrations in simulation and evaluated in simulation achieves 98% success. The same model fails in real with no success cases, showing a large sim-to-real gap.

By mixing real world demonstrations into IL model training, we achieve 75% success on conference room doors seen at training time. We achieve a comparable success rate, 76.7%, when applying RetinaGAN.

By training on data from three separate RetinaGAN models with different random seeds and consistency loss weights (called Ensemble-RetinaGAN), we are able to achieve 93.3% success rate. In the low data regime, RetinaGAN can oscillate between various reconstructed semantics and ambiguity in lighting and colors as shown in Fig. 7. We hypothesize that mixing data from multiple GANs adds diversity and robustness, aiding in generalization. Finally, we attempt Ensemble-RetinaGAN without any real data for training the IL model. We achieve 96.6%, within margin of error of the Ensemble-RetinaGAN+Real result.

VI Conclusions

RetinaGAN is an object-aware sim-to-real adaptation technique which transfers robustly across environments and tasks, even with limited real data. We evaluate on three tasks and show 80% success on instance grasping, a 12 percentage-point improvement upon baselines. Further extensions may look into pixel-level perception consistency or other modalities like depth. Another direction of work in task and domain-agnostic transfer could extend RetinaGAN to perform well in a visual environment unseen at training time.


VI-A Door Opening Figure

See Fig. 8 for an example of semantic structure distortions when training the door opening task with CycleGAN.

Fig. 8: CycleGAN can distort semantic structure when trained on door opening images in the low data regime. Images on the right are transferred results of the simulated image on the left.

VI-B Perception Model Training

Hyperparameters used in object detection model training are listed in Table IV. We use default augmentation parameters from [19], including a scale range of 0.8 to 1.2. Among the 59 classes, the following are frequently used: robot, bottle, bowl, can, cup, bag/wrapper, and plate. Other classes appear sparsely or not at all.

Hyperparameter Value
Training Hardware 4 x Google TPUv3 Pods
Network Architecture EfficientDet-D1 [29]
Precision bfloat16
Input Resolution 512x640 pixels
Preprocessing Crop, scale, Horizontal flipping
Pad to 640x640
Training Step Count 90,000
Optimizer tf.train.MomentumOptimizer
Learning Rate 0.08, stepped two times with 10% decay
Momentum 0.08
Batch Size 256
Weight Decay 1e-5
Classes 59
TABLE IV: Hyperparameters used for EfficientDet Training.

VI-C RetinaGAN Model Training

We train RetinaGAN with similar parameters to those described in Appendix A of [26]. We generate simulation images with the following object set (and counts): paper bags (1), bottles (9), bowls (1), napkins (1), cans (12), cups (6), containers (2), plates (1), and wrappers (10). Each training batch includes 256 simulation and 256 real images. Photometric distortions are defined in the Tensor2Robot framework.

Hyperparameter Value
Training Hardware 4 x Google TPUv3 Pods
Network Architecture U-Net [27], Fig. 5 in [26]
Precision bfloat16
Input Resolution 512x640 pixels
Preprocessing Crop to 472x472 pixels
Apply photometric distortions
Training Step Count 50,000-100,000
Optimizer tf.train.AdamOptimizer
Learning Rate 0.0001
Batch Size 512
Weight Decay 7e-5
Additional Normalization Spectral Normalization [36]
GAN Loss Weight 1
Cycle Consistency Loss Weight 10
Perception Consistency Loss Weight 0.1
TABLE V: Hyperparameters used for GAN Training.

VI-D Q2-Opt RL Model Training

We use the Q2-Opt [2] model and training pipeline for both the grasping and pushing tasks, with the same hyperparameters. We train on the same simulated object set as in the RetinaGAN setup.

When using the full real dataset, we sample each minibatch from simulation episodes with a 50% weight and real episodes with a 50% weight. With the restricted 10K-episode dataset, we sample from simulation with 20% weight and real with 80% weight, so as not to overfit on the smaller real dataset. We did not tune these ratios, as in prior experiments we found that careful tuning was not required.
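The weighted minibatch mixing described above can be sketched as a per-draw pool selection. This is an illustrative simplification (the function name and pool representation are assumptions), not the training pipeline's actual sampler.

```python
import random

def sample_episode(sim_pool, real_pool, real_weight=0.5):
    """Draw one training episode, choosing the real pool with
    probability real_weight and the simulation pool otherwise.
    The restricted 10K-episode setup used real_weight=0.8."""
    pool = real_pool if random.random() < real_weight else sim_pool
    return random.choice(pool)
```

At the weight extremes the behavior is deterministic, e.g. `sample_episode(sims, reals, 1.0)` always draws from the real pool.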

VI-E ResNet-FiLM IL Model Training

We train IL with the ResNet-FiLM [24] model with a ResNet-18 architecture defined in the Tensor2Robot framework. For training RetinaGAN and Ensemble-RetinaGAN, we mix real demonstrations, simulated demonstrations, and RetinaGAN-adapted simulated demonstrations. We use a lower 20% weight for real data (because of the small dataset size) and evenly weight simulated and adapted demonstrations. The action space is the 2D movement of the robot base. Additional details will be provided in an as-yet unreleased paper; this work focuses on the benefits of CycleGAN-adapted data independently of whether policies are trained with IL or RL.

VI-F Evaluation

For grasping, we evaluate with the station setup in Fig. 9. Each setup is replicated three times (with potentially different object brands/instances, but the same classes), with one robot positioned in front of each bin. We target the robot to grasp only the cup, can, and bottle, for a total of eighteen grasps. This is repeated five times for ninety total grasps.

Fig. 9: The two evaluation station setups displaying the object classes present in each bin.

For pushing, we evaluate with a single Ito En Green Tea bottle filled 25% with water.

For door opening, we evaluate on three real world conference room doors. Two doors swing rightwards and one door swings leftwards. The episode is judged as successful if the robot autonomously pushes the door open and the robot base enters the room.


We thank Noah Brown, Christopher Paguyo, Armando Fuentes, and Sphurti More for overseeing robot operations, and Daniel Kappler, Paul Wohlhart, and Alexander Herzog for helpful discussions. We thank Chris Harris and Alex Irpan for comments on the manuscript.


  • [1] I. Akkaya, M. Andrychowicz, M. Chociej, M. Litwin, B. McGrew, A. Petron, A. Paino, M. Plappert, G. Powell, R. Ribas, et al. (2019) Solving rubik’s cube with a robot hand. arXiv preprint arXiv:1910.07113. Cited by: §II.
  • [2] C. Bodnar, A. Li, K. Hausman, P. Pastor, and M. Kalakrishnan (2019) Quantile qt-opt for risk-aware vision-based robotic grasping. ArXiv abs/1910.02787. Cited by: §V-A, §VI-D.
  • [3] K. Bousmalis, A. Irpan, P. Wohlhart, Y. Bai, M. Kelcey, M. Kalakrishnan, L. Downs, J. Ibarz, P. P. Sampedro, K. Konolige, S. Levine, and V. Vanhoucke (2018) Using simulation and domain adaptation to improve efficiency of deep robotic grasping. External Links: Link Cited by: §II.
  • [4] K. Bousmalis, N. Silberman, D. Dohan, D. Erhan, and D. Krishnan (2017) Unsupervised pixel-level domain adaptation with generative adversarial networks. External Links: 1612.05424 Cited by: §I, §II.
  • [5] A. Brock, J. Donahue, and K. Simonyan (2019) Large scale gan training for high fidelity natural image synthesis. External Links: 1809.11096 Cited by: §II.
  • [6] E. Coumans and Y. Bai (2017) PyBullet, a Python module for physics simulation in robotics, games and machine learning. Cited by: §III-A, §V-A.
  • [7] K. Fang, Y. Bai, S. Hinterstoisser, S. Savarese, and M. Kalakrishnan (2018) Multi-task domain adaptation for deep learning of instance grasping from simulation. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 3516–3523. Cited by: §II.
  • [8] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Advances in Neural Information Processing Systems 27, Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger (Eds.), pp. 2672–2680. External Links: Link Cited by: §II.
  • [9] R. Gopalan, Ruonan Li, and R. Chellappa (2011) Domain adaptation for object recognition: an unsupervised approach. In 2011 International Conference on Computer Vision, pp. 999–1006. Cited by: §II.
  • [10] T. Hastie, R. Tibshirani, and J. Friedman (2001) The elements of statistical learning. Springer Series in Statistics, Springer New York Inc., New York, NY, USA. Cited by: §IV-A.
  • [11] K. He, G. Gkioxari, P. Dollár, and R. Girshick (2018) Mask r-cnn. External Links: 1703.06870 Cited by: §III-A.
  • [12] J. Hoffman, E. Tzeng, T. Park, J. Zhu, P. Isola, K. Saenko, A. A. Efros, and T. Darrell (2017) CyCADA: cycle-consistent adversarial domain adaptation. External Links: 1711.03213 Cited by: §II.
  • [13] S. James, P. Wohlhart, M. Kalakrishnan, D. Kalashnikov, A. Irpan, J. Ibarz, S. Levine, R. Hadsell, and K. Bousmalis (2019) Sim-to-real via sim-to-sim: data-efficient robotic grasping via randomized-to-canonical adaptation networks. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 12619–12629. Cited by: §II.
  • [14] S. James, A. Davison, and E. Johns (2017) Transferring end-to-end visuomotor control from simulation to real world for a multi-stage task. In CoRL, Cited by: §II.
  • [15] D. Kalashnikov, A. Irpan, P. Pastor, J. Ibarz, A. Herzog, E. Jang, D. Quillen, E. Holly, M. Kalakrishnan, V. Vanhoucke, and S. Levine (2018) QT-opt: scalable deep reinforcement learning for vision-based robotic manipulation. External Links: 1806.10293 Cited by: §I, §V-A.
  • [16] M. Khansari, D. Kappler, J. Luo, J. Bingham, and M. Kalakrishnan (2020) Action image representation: learning scalable deep grasping policies with zero real world data. External Links: 2005.06594 Cited by: §II.
  • [17] W. Kuo, A. Angelova, J. Malik, and T. Lin (2019) Shapemask: learning to segment novel objects by refining shape priors. In Proceedings of the IEEE International Conference on Computer Vision, pp. 9207–9216. Cited by: §III-A.
  • [18] S. Levine, P. Pastor, A. Krizhevsky, J. Ibarz, and D. Quillen (2018) Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. The International Journal of Robotics Research 37 (4-5), pp. 421–436. Cited by: §I.
  • [19] T. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár (2017) Focal loss for dense object detection. In 2017 IEEE International Conference on Computer Vision (ICCV), pp. 2999–3007. Cited by: §III-A, §IV-A, §IV-B, §IV-B, §VI-B.
  • [20] T. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie (2017) Feature pyramid networks for object detection. External Links: 1612.03144 Cited by: §IV-A.
  • [21] M. Long, Y. Cao, J. Wang, and M. Jordan (2015-07–09 Jul) Learning transferable features with deep adaptation networks. F. Bach and D. Blei (Eds.), Proceedings of Machine Learning Research, Vol. 37, Lille, France, pp. 97–105. External Links: Link Cited by: §II.
  • [22] J. Matas, S. James, and A. J. Davison (2018) Sim-to-real reinforcement learning for deformable object manipulation. External Links: 1806.07851 Cited by: §II.
  • [23] V. M. Patel, R. Gopalan, R. Li, and R. Chellappa (2015) Visual domain adaptation: a survey of recent advances. IEEE Signal Processing Magazine 32 (3), pp. 53–69. Cited by: §II.
  • [24] E. Perez, F. Strub, H. de Vries, V. Dumoulin, and A. Courville (2017) FiLM: visual reasoning with a general conditioning layer. External Links: 1709.07871 Cited by: §V-C, §VI-E.
  • [25] L. Pinto and A. Gupta (2016) Supersizing self-supervision: learning to grasp from 50k tries and 700 robot hours. In 2016 IEEE international conference on robotics and automation (ICRA), pp. 3406–3413. Cited by: §I.
  • [26] K. Rao, C. Harris, A. Irpan, S. Levine, J. Ibarz, and M. Khansari (2020) RL-cyclegan: reinforcement learning aware simulation-to-real. External Links: 2006.09001 Cited by: §II, §III-B, §V-A, TABLE I, §VI-C, TABLE V.
  • [27] O. Ronneberger, P. Fischer, and T. Brox (2015) U-net: convolutional networks for biomedical image segmentation. External Links: 1505.04597 Cited by: TABLE V.
  • [28] F. Sadeghi, A. Toshev, E. Jang, and S. Levine (2018) Sim2Real viewpoint invariant visual servoing by recurrent control. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4691–4699. Cited by: §II.
  • [29] M. Tan, R. Pang, and Q. V. Le (2020) EfficientDet: scalable and efficient object detection. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10778–10787. Cited by: §III-A, TABLE IV.
  • [30] M. Tan and Q. V. Le (2020) EfficientNet: rethinking model scaling for convolutional neural networks. External Links: 1905.11946 Cited by: §III-A.
  • [31] J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P. Abbeel (2017) Domain randomization for transferring deep neural networks from simulation to the real world. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 23–30. Cited by: §II.
  • [32] Z. Xu, W. Yu, A. Herzog, W. Lu, C. Fu, M. Tomizuka, Y. Bai, C. K. Liu, and D. Ho (2020) COCOI: contact-aware online context inference for generalizable non-planar pushing. Cited by: §V-B.
  • [33] X. Yan, J. Hsu, M. Khansari, Y. Bai, A. Pathak, A. Gupta, J. Davidson, and H. Lee (2018) Learning 6-dof grasping interaction via deep 3d geometry-aware representations. External Links: Link Cited by: §II.
  • [34] X. Yan, M. Khansari, J. Hsu, Y. Gong, Y. Bai, S. Pirk, and H. Lee (2019) Data-efficient learning for sim-to-real robotic grasping using deep point cloud prediction networks. arXiv preprint arXiv:1906.08989. Cited by: §II.
  • [35] D. Yoo, N. Kim, S. Park, A. S. Paek, and I. S. Kweon (2016) Pixel-level domain transfer. External Links: 1603.07442 Cited by: §II.
  • [36] H. Zhang, I. Goodfellow, D. Metaxas, and A. Odena (2019) Self-attention generative adversarial networks. External Links: 1805.08318 Cited by: TABLE V.
  • [37] J. Zhu, T. Park, P. Isola, and A. A. Efros (2020) Unpaired image-to-image translation using cycle-consistent adversarial networks. External Links: 1703.10593 Cited by: §I, §II, §II, §III-B, TABLE I.