Learning-based approaches, and specifically those that utilize recent advances in deep learning, have shown strong generalization capacity and the ability to learn features relevant for manipulating real objects [1, 2, 3, 4, 5]. These features can be used to avoid explicit object pose estimation, which is often inaccurate, even for known objects, in the presence of occlusions and noise. Furthermore, parameterizing the environment state with positions in $\mathbb{R}^3$ and rotations in $SO(3)$ is not necessarily the best state representation for every task.
Deep learning can provide task-relevant features and state representations directly from data. However, deep learning, and especially deep reinforcement learning (RL), requires a significant amount of data, which is a critical challenge for robotics. For this reason, sim-to-real transfer is an important area of research for vision-based robotic control, as simulations offer an abundance of labeled data.
Pixel-based agents trained in simulation do not naively generalize to the real world. However, recent sim-to-real transfer techniques have shown significant promise in reducing real-world sample complexity. Such techniques either randomize the simulated environment in ways that help with generalization [7, 8, 9], use domain adaptation, or both. Our work falls in the scope of unsupervised domain adaptation techniques, i.e. methods that are able to utilize both labeled simulated and unlabeled real data. These have been successfully used both in computer vision and in vision-based robot learning for manipulation and locomotion.
The contribution of our work is two-fold: (a) we investigate the use of sequence-based self-supervision as a way to improve sim-to-real transfer; and (b) we develop contrastive forward dynamics (CFD), a self-supervised objective to achieve that. We propose a two-step procedure (see Fig. 1) for such sequence-based self-supervised adaptation. In the first step, we use the simulated environment to learn a policy that solves the task in simulation using synthetic images and proprioception as observations. In the second step, we use synthetic and unlabeled real image sequences to adapt the state representation to the real domain. Besides the task objective on the simulated images, this step also uses sequence-based self-supervision as a way to provide a common objective for representation learning that applies in both simulation and reality without the need for paired or aligned data. Our CFD objective additionally combines dynamics model learning with time-contrastive techniques to better utilize the structure of sequences in real robot data.
We demonstrate the effectiveness of our approach by training a vision-based cube stacking RL agent. Our agent interacts with the real world via 20Hz closed-loop Cartesian velocity control from vision, which makes our method applicable to a large set of manipulation tasks. The cube stacking task also emphasizes the generality of our approach for long-horizon manipulation tasks. Most importantly, our method is able to make better use of the available unlabeled real-world data, resulting in higher stacking performance compared to domain randomization and domain-adversarial neural networks.
II Related Work
Manipulation: challenges and approaches
It is well acknowledged that both planning and state estimation become challenging when performed in cluttered environments. During execution, continuously tracking the pose of manipulated objects becomes increasingly difficult in the presence of occlusions, often caused by the gripper itself. Surveys reveal that pose estimation is still an essential component of many approaches to grasping [1, fig. 3-5-7]; proposed approaches rely on some form of supervision, either in the form of a model-based grasp quality measure [16, 17, 18], or in the form of heuristics for grasp stability [1, fig. 18-19], or finally in the form of labelled data for learning [1, fig. 9].
Sim-to-Real Transfer for Robotic Manipulation
Sim-to-real transfer learning aims to bridge the gaps between simulation and reality, which consist of differences in the dynamics and observation models, such as image rendering. Sim-to-real transfer techniques can be grouped by the amount and kind of real-world data they use. Techniques like domain randomization [9, 13] focus on zero-shot transfer. Others are able to utilize real data in order to adapt to the real world via system identification or domain adaptation. Similar to system identification in classical control, recent techniques like SimOpt utilize real data to learn policies that are robust under different transition dynamics. Unsupervised domain adaptation has been successfully used for sim-to-real transfer in vision-based robotic grasping. Semi-supervised domain adaptation additionally utilizes any labeled data that might be available, as has been done in prior work. In many ways, zero-shot transfer, system identification, and domain adaptation, with or without labeled data in the real world, are complementary groups of techniques.
Cube Stacking Task
Recent work on efficient multi-task deep reinforcement learning has shown the difficulty of the cube stacking task even in simulated environments, as the task requires several core abilities such as grasping, lifting and precise placing. Sim-to-real methods have also been applied to cube stacking from vision, where a combination of domain randomization and imitation learning was used to perform zero-shot sim-to-real transfer of the cube stacking task. However, the resulting policy only obtained a success rate of 35% over 20 trials in a limited number of configurations, reconfirming the difficulty of the cube stacking task.
Unsupervised Domain Adaptation
Unsupervised domain adaptation techniques are either feature-based or pixel-based. Pixel-based adaptation is possible by changing the observations to match those from the real environment with image-based GANs. Feature-based adaptation is done either by learning a transformation over fixed simulated and real feature representations, or by learning a domain-invariant feature extractor, also represented by a neural network [25, 26]. The latter has been shown to be more effective, and we employ a feature-level domain adversarial method as a baseline.
Sequence-based Self Supervision
Sequence-based self-supervision is commonly applied for video representation learning, particularly making use of local and global temporal structures. Time-contrastive networks (TCN) utilize two temporally synchronous camera views to learn view-independent high-level representations. By predicting the temporal distance between frames, Aytar et al. learn a representation that can handle small domain gaps (i.e. color changes and video artifacts) for the purpose of imitating YouTube gameplay in an Atari environment. To the best of our knowledge, sequence-based self-supervision for handling large visual domain gaps in sim-to-real transfer for robotic learning has not been considered before.
III Our Method
In this section, we provide a detailed description of our method for enabling sim-to-real transfer of visual robotic manipulation. We propose a two-stage training process. In the first stage, state-based and vision-based agents are trained simultaneously in simulation with domain randomization.
We then collect unlabeled robot data by executing the vision-based agent on the real robot. In the second stage, we perform self-supervised domain adaptation by tuning the visual perception module with the help of sequence-based self-supervised objectives optimized over simulation and real world data jointly.
Our method optimizes three main loss functions: (a) $\mathcal{L}_{RL}$, the reinforcement learning (RL) objective optimized by the state-based and vision-based agents in simulation; (b) $\mathcal{L}_{BC}$, the behavioral cloning loss utilized by the vision-based agent to speed up learning by imitating the state-based agent; and (c) $\mathcal{L}_{SS}$, the sequence-based self-supervised objective optimized on both simulation and real robot data. The purpose of $\mathcal{L}_{SS}$ is to align the agent’s perception of real and simulated visuals by solving a common objective using a shared encoder.
Our system is composed of four main neural networks: (a) an image representation encoder $E$ with parameters $\theta_E$ composed of $n$ layers, which embeds any visual observation $x$ into a latent space as $z = E(x)$; (b) a vision-based deep policy network $\pi_v$ with parameters $\theta_{\pi_v}$, which combines the output of the visual encoder with the proprioceptive observations and outputs an action; (c) a state-based policy network $\pi_s$ with parameters $\theta_{\pi_s}$, which takes the simulation state and outputs an action; and (d) a self-supervised objective network with parameters $\theta_{SS}$, which takes the encoded visual observation (and action if necessary) as input and directly computes the loss $\mathcal{L}_{SS}$. Fig. 1 presents a visual description of these components. In the remainder of this section, we discuss the two stages of our method and present an objective for sequence-based self-supervision.
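The wiring of these four components can be sketched as follows; this is a minimal numpy sketch in which the layer sizes, tanh activations, and random weights are illustrative assumptions, not the actual trained architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(n_in, n_out):
    # A random affine layer standing in for a trained network layer.
    return rng.normal(size=(n_in, n_out)) * 0.1, np.zeros(n_out)

def apply(layer, x):
    W, b = layer
    return np.tanh(x @ W + b)

# (a) Encoder E: visual observation -> latent z (sizes illustrative).
encoder = [linear(64, 32), linear(32, 16)]
def encode(x):
    for layer in encoder:
        x = apply(layer, x)
    return x

# (b) Vision-based policy: latent + proprioception -> action.
pi_vision = linear(16 + 4, 5)
def act_vision(image_obs, proprio):
    return apply(pi_vision, np.concatenate([encode(image_obs), proprio]))

# (c) State-based policy: privileged simulator state -> action.
pi_state = linear(10, 5)
def act_state(sim_state):
    return apply(pi_state, sim_state)

# (d) The self-supervised network consumes encoded observations
# (and actions); a concrete loss is given in the CFD subsection.
action = act_vision(rng.normal(size=64), np.zeros(4))
print(action.shape)
```

The 5-dimensional output corresponds to the 4D Cartesian velocity command plus the gripper action described in Sect. IV.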
III-A First stage: Learning in simulation
In this stage we train a state-based agent and a vision-based agent with a shared experience replay. Our goal is to speed up the learning process by leveraging the privileged information in simulation through the state-based agent, and distilling the learned skills into the vision-based agent using a shared replay buffer. Both agents are trained with an off-policy reinforcement learning objective, $\mathcal{L}_{RL}$. We use a state-of-the-art continuous control RL algorithm, Maximum a Posteriori Policy Optimization (MPO), which uses an expectation-maximization-style policy optimization with an approximate off-policy policy evaluation algorithm. As shown in Fig. 1, the state-based agent has access to the simulator state, which allows it to learn much faster than the vision-based agent that uses raw pixel observations. In essence, the state-based agent is an asymmetric behavior policy, which provides diverse and relevant data for reinforcement learning of the vision-based agent. This idea leverages the flexibility of off-policy RL, which has been shown to improve sample complexity in a single-domain setting. Additionally, we utilize the behavioral cloning (BC) objective $\mathcal{L}_{BC}$ for the vision-based agent to imitate the state-based agent. $\mathcal{L}_{BC}$ provides reliable training and further improves sample efficiency in the learning process, as we show in Sect. V. We additionally employ DDPGfD, which injects human demonstrations into the replay buffer, and an asymmetric actor-critic for our stacking experiments. Our final objective in the first stage can be written as follows:

$$\mathcal{L}_{stage1} = \mathcal{L}_{RL} + \mathcal{L}_{BC} \quad (1)$$
III-B Second stage: Self-supervised sim-to-real adaptation
Although our vision-based agent can perform reasonably well when transferred to the real robot, there is still significant room for improvement, mostly due to the large domain gap between simulation and the real robot. Our main objective in this stage is to mitigate the negative effects of the domain gap by utilizing the unlabeled robot data collected by our simulation-trained agent for domain adaptation. In addition to well-explored domain adversarial training , which we present as a strong baseline, we investigate the use of sequence-based self-supervised objectives for sim-to-real domain adaptation.
Modality tuning, i.e. freezing the higher-level weights of a trained network and adapting only the initial layers for a new modality (or domain), is a method shown to successfully align multiple modalities (e.g. natural images, line drawings and text descriptions), though it requires class labels in all modalities. In our context, it would require rewards for the real-world data, which we do not have. Instead, we utilize a self-supervised objective $\mathcal{L}_{SS}$ while performing modality tuning (i.e. simulation-to-reality adaptation), which can be readily applied both in simulation and reality. However, there is no guarantee that an alignment learned using a self-supervised objective would indeed successfully transfer the vision-based policy from simulation to the real world. In fact, different objectives result in different transfer performance. Finding a suitable objective for better transfer of the learned policy is therefore of major importance as well.
In the context of our neural network architecture, while applying modality tuning, we freeze the vision-based agent’s policy network parameters $\theta_{\pi_v}$ and the encoder parameters $\theta_E$ except for the first layer $\theta_{E_1}$. This allows the system to adapt its visual perception to the real world without making major changes in the policy logic, which we expect to be encoded in the higher layers of the neural network. We also continue optimizing the $\mathcal{L}_{RL}$ and $\mathcal{L}_{BC}$ objectives along with $\mathcal{L}_{SS}$ to ensure that as the first layer adapts itself to solve the self-supervised task, it also maintains good performance on the manipulation task. In other words, the encoder is forced to adapt itself without compromising the performance of the vision-based agent. The final objective in the second stage is:

$$\mathcal{L}_{stage2} = \mathcal{L}_{RL} + \mathcal{L}_{BC} + \mathcal{L}_{SS} \quad (2)$$
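The freezing scheme can be sketched as a gradient mask over a parameter dictionary; the layer names and the plain SGD update are illustrative, not the optimizer actually used:

```python
import numpy as np

# Illustrative parameter sets: encoder layers E1..E3 and the policy head.
params = {"E1": np.ones(3), "E2": np.ones(3),
          "E3": np.ones(3), "policy": np.ones(3)}

# During second-stage adaptation only the FIRST encoder layer is
# trainable; the higher encoder layers and the policy stay frozen,
# preserving the policy logic learned in simulation.
trainable = {"E1"}

def sgd_step(params, grads, lr=0.1):
    # Apply gradients only to trainable parameters; frozen ones pass
    # through unchanged.
    return {name: p - lr * grads[name] if name in trainable else p
            for name, p in params.items()}

grads = {name: np.full(3, 2.0) for name in params}
params = sgd_step(params, grads)
print(params["E1"][0], params["policy"][0])  # 0.8 1.0
```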
Due to its wide adoption in robotics settings, we employ the Time-Contrastive Networks (TCN) objective for $\mathcal{L}_{SS}$ in our self-supervised sim-to-real adaptation method, though any other sequence-based self-supervised objective could also be used here. In the next subsection we introduce an alternative loss for $\mathcal{L}_{SS}$ which makes use of domain-specific properties of robotics, and therefore potentially results in a better transferable alignment.
III-C Contrastive Forward Dynamics
Time-Contrastive Networks (TCN), which we use as a baseline, and other sequence-based self-supervision methods [30, 36, 37] mainly exploit the temporal structure of the observations. However, with robot data we also have the physical dynamics of the real world, probed by actions and perceived through observations. In this section we describe the contrastive forward dynamics (CFD) objective, which is able to utilize both observations and actions by learning a forward dynamics model in a latent space. Essentially, we are learning the latent transition dynamics of the environment, which has strong connections to model-based optimal control approaches. We can therefore expect the alignment achieved through our CFD objective to transfer the learned policy from simulation to the real world better. We formally define the CFD objective below.
Assume we are given a dataset of sequences, where each sequence $\tau = (x_1, a_1, x_2, a_2, \ldots, x_T)$ is of length $T$; $x_t$ denotes the observations and $a_t$ the actions at time $t$. Any observation $x_t$ is embedded into a latent space as $z_t = E(x_t)$ through the encoder network $E$. Given a transition $(z_t, a_t, z_{t+1})$ in the latent space, the forward dynamics model predicts the next latent state as $\hat{z}_{t+1} = f(z_t, a_t)$, where $f$ is the prediction network. Instead of learning $f$ by minimizing the prediction error $\lVert \hat{z}_{t+1} - z_{t+1} \rVert^2$, which has a trivial solution achieved by setting the latents to zero, we minimize a contrastive prediction loss. A contrastive loss [39, 40] takes pairs of examples as input and predicts whether the two elements in the pair are from the same class or not. It can also be implemented as a multi-class classification objective comparing one positive pair and multiple negative pairs, creating an embedding space by pushing representations from the same “class” together and ones from different “classes” apart. In our context, $(\hat{z}_{t+1}, z_{t+1})$ is our positive pair and any non-matching pairs $(\hat{z}_{t+1}, z_k)$ with $k \neq t+1$ are the negative pairs. With CFD, we solve such a multi-class classification problem by minimizing the cross-entropy loss for any given latent observation $z_{t+1}$ and its prediction $\hat{z}_{t+1}$ as follows:

$$\mathcal{L}_{CFD} = -\log \frac{\exp(\hat{z}_{t+1}^\top z_{t+1})}{\sum_k \exp(\hat{z}_{t+1}^\top z_k)} \quad (3)$$
In practice, while forming the negative pairs we pick all the other latent observations in the same mini-batch, which also contains observations from the same sequence. To further enforce prediction quality, we perform multi-step future predictions by repeatedly applying the forward dynamics model. These longer-horizon predictions optimize the same objective given in Eq. 3, where the one-step prediction $\hat{z}_{t+1}$ is replaced with a multi-step prediction $\hat{z}_{t+k}$. Fig. 3 illustrates how multi-step predictions are obtained using a single forward dynamics model.
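A minimal numpy sketch of the CFD objective; the dot-product similarity logits and the single-matrix dynamics model are illustrative simplifications of the prediction network:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(z, a, W):
    # Latent forward dynamics: predict z_{t+1} from (z_t, a_t).
    return np.tanh(np.concatenate([z, a]) @ W)

def cfd_loss(z_pred, latents, target_idx):
    # InfoNCE-style classification: the prediction should score highest
    # against its true next latent; every other latent in the batch is
    # a negative.
    logits = latents @ z_pred
    log_probs = logits - np.log(np.sum(np.exp(logits)))
    return -log_probs[target_idx]

d_z, d_a, T = 8, 2, 5
W = rng.normal(size=(d_z + d_a, d_z)) * 0.5   # dynamics parameters
latents = rng.normal(size=(T, d_z))           # encoded observations z_1..z_T
actions = rng.normal(size=(T, d_a))

# One-step prediction for t=0, contrasted against all latents in the batch.
z_hat_1 = forward_model(latents[0], actions[0], W)
loss = cfd_loss(z_hat_1, latents, target_idx=1)

# Multi-step: feed the prediction back through the SAME dynamics model
# and apply the contrastive objective at the later step as well.
z_hat_2 = forward_model(z_hat_1, actions[1], W)
loss += cfd_loss(z_hat_2, latents, target_idx=2)
print(np.isfinite(loss))
```

Because the loss is a negative log-softmax probability, it is strictly positive whenever more than one candidate latent competes, so the trivial all-zero-latent solution of plain prediction error is avoided.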
IV Simulated and Real Environments and Tasks
The primary manipulation task we have used in this work is vision-based stacking of one cube on top of another. However, as this is a particularly hard task to solve from pixels from scratch with off-the-shelf RL algorithms, we studied the ablation effects of the different components of our proposed RL framework on the easier problem of vision-based lifting instead. As lifting is an easier task, and a required skill towards achieving stacking, we focused on stacking for the rest of our experimental analysis in simulation and for all our real-world evaluations.
Fig. 1 shows our real robot setup, which is composed of a 7-DoF Sawyer robotic arm, a basket and two cubes. The agent receives the front left and right RGB camera images as observations, shown in Fig. 2. The two cameras are positioned in a way that can help disambiguate the 3D positions of the arm and the objects. In addition to these images, our observations also consist of the pose of the cameras, the end-effector position and angle, and the gripper finger angle. The action space of the agent is 4D Cartesian velocity control of the end effector, with an additional action for actuating the gripper. The real environment is modelled in simulation using the MuJoCo simulator. Fig. 1 also shows the simulated version of our environment. Unless mentioned otherwise, all of our policies are trained in simulation with domain randomization and a shaped reward function.
The shaped reward function for lifting is a combination of reaching, touching and lifting rewards. Let $d$ be the Euclidean distance of a target object from the pinch site of the end effector, and let $h^{*}$ and $h$ be the target height and the object height from the ground in meters. Our reach reward is defined as $r_{reach} = \mathbb{1}[d < \epsilon_d]$, where $\mathbb{1}$ is the indicator function and $\epsilon_d$ is a distance threshold. In practice we use reward shaping with the Gaussian tolerance reward function as defined in the DeepMind Control Suite, with bounds $(0, \epsilon_d)$ and a margin $m_d$. Our touch reward $r_{touch}$ is binary and provided by our simulator upon contact with the object. Our lift reward is $r_{lift} = \mathbb{1}[h > h^{*}]$, and the final shaped version we use during training is $r = r_{reach} + r_{touch} + r_{lift}$. As before, in practice the distance is passed through the same tolerance function as above, with corresponding bounds and margin. For stacking we now have a top and a bottom target object with positions $p_{top}$ and $p_{bottom}$. If the cubes are in contact and on top of each other, the reward is $r_{stack} = 1$. Otherwise, we have additional shaping to aid with training. More specifically, if the top object has not been lifted we revert to a normalized lift reward for the top object. Otherwise, the reward decreases with the distance $\lVert p_{top} - p_{bottom} \rVert$, to account for bringing the cubes closer to each other. In practice we set $r_{stack} = 1$ if it is greater than 0.75.
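A sketch of the lifting reward under the definitions above; the tolerance shaping follows the spirit of the DeepMind Control Suite's tolerance function, and the numeric thresholds here are illustrative assumptions, not the values used in our experiments:

```python
import numpy as np

def tolerance(x, lower, upper, margin):
    # Gaussian tolerance shaping: 1 inside [lower, upper], decaying
    # smoothly outside according to the margin.
    if lower <= x <= upper:
        return 1.0
    d = lower - x if x < lower else x - upper
    return float(np.exp(-0.5 * (d / margin) ** 2))

def lift_reward(dist, height, target_height,
                reach_eps=0.05, reach_margin=0.10):
    # reach_eps / reach_margin are placeholder values.
    r_reach = tolerance(dist, 0.0, reach_eps, reach_margin)
    r_lift = 1.0 if height > target_height else 0.0
    # The binary touch term is omitted here; it comes from the simulator.
    return r_reach + r_lift

r_near = lift_reward(dist=0.0, height=0.2, target_height=0.1)
r_far = lift_reward(dist=1.0, height=0.0, target_height=0.1)
print(r_near > r_far)  # shaping rewards reaching and lifting
```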
TABLE I: Real-world stacking success rates.

| Method | Success rate |
| Domain Randomization | 46.0% |
| SSDA with TCN (end-to-end) | 38.0% |
| SSDA with TCN, two-stage (Ours) | 54.0% |
| SSDA with CFD, two-stage (Ours) | 62.0% |
TABLE II: Effect of the task objective during two-stage adaptation.

| Method | Success rate |
| SSDA without Task Objective | 12.0% |
| SSDA with Task Objective (Ours) | 62.0% |
In the real world, the cubes are fitted with AR tags that are only used for the purposes of fair and consistent evaluation of our resulting policies: the 3D poses of the cubes are never available to an RL agent during training or testing. At the beginning of every episode, the cubes are placed in a random position by a hand-crafted controller. All real world evaluations referred to in the rest of the section are on the stacking task and consist of 50 episodes. A real world episode is considered a success if the green cube is on top of the yellow cube at any point throughout the episode. Episodes are of length 200 with 20Hz control rate for both simulated and real environments.
V Experimental Results and Discussion
In this section, we discuss the details of our experiments, and attempt to answer the following questions: (a) Can sequence-based self-supervision be used as a common auxiliary objective for simulated and real data without degrading task performance in simulation? (b) Does doing so improve final task performance in the real world? (c) How does using sequence-based self-supervision for visual domain alignment between simulation and reality compare with domain-adversarial adaptation? (d) Is the use of actions in such a self-supervised loss important for bridging the sim-to-real domain gap? (e) What is the performance difference of modality tuning in our two-stage approach versus a one-stage end-to-end approach? and (f) What are the effects of the different components of our RL framework in solving manipulation tasks from scratch, i.e. without the shared replay buffer or behavior cloning, in simulation?
V-A Self-Supervised Sim-to-Real Adaptation
We evaluated the following methods on our vision-based cube stacking task: domain randomization, unsupervised domain adaptation with a domain-adversarial (DANN) loss, and self-supervised domain adaptation (SSDA) with two sequence-based self-supervised objectives: the time-contrastive networks (TCN) loss, and the contrastive forward dynamics (CFD) loss we proposed in Sect. III-C. We ablate two different training methods for domain adaptation: end-to-end and two-stage. The end-to-end training method simply optimizes Eq. 2 from Sect. III-B with respect to all parameters, without the two-stage procedure. This means that all of the losses are jointly optimized without freezing any part of the neural network. The two-stage training procedure is described in Sect. III and employs modality tuning.
Table I shows the quantitative results from evaluating task success on the real robot. These experiments show that DANN improves on the domain randomization baseline by a small margin. However, end-to-end adaptation with the TCN loss results in a degradation of performance. This is likely due to insufficient sharing of the encoder between the self-supervised objective on simulated data and on real data. On the other hand, two-stage self-supervised domain adaptation with TCN significantly improves over the end-to-end variant and the domain randomization baselines. This reconfirms that the modality tuning used in the two-stage training method results in significantly better sharing of the encoder. Finally, two-stage self-supervised adaptation with our CFD objective, which utilizes both the temporal structure of the observations and the actions, performs significantly better than all other methods, yielding a 62% task success.
We also evaluated the importance of jointly optimizing the RL and BC objectives in Eq. 2 for the two-stage self-supervised domain adaptation. As one can see in Table II, optimizing only $\mathcal{L}_{SS}$ without the task objective significantly reduces performance. Fig. 4 further shows how the task performance in simulation degrades when optimizing only the self-supervised objective. In essence, by only optimizing the self-supervised loss, the network catastrophically forgets how to solve the manipulation task.
V-B Ablations for different components of our RL framework
In order to assess the necessity and efficacy of the different components of our framework, described in Sect. III-A, we provide ablation experiments. Specifically, we examined the effects of the state-based agent that shares a replay buffer with the vision-based agent, and of the addition of an auxiliary behavior cloning objective for the vision-based agent to imitate the state-based agent. Fig. 5 shows these effects on the cube lifting task. A vision-based agent trained with MPO, the state-of-the-art continuous control RL method at the core of our framework, struggles with solving this task, contrary to an MPO agent with access to the full state information. By sharing the replay buffer between the state-based agent and the vision-based agent, one can see that the vision-based agent is able to solve lifting in a reasonable amount of time. The addition of the behavior cloning (BC) objective further improves the speed and stability of training.
Fig. 6 shows the even more profound effect our BC objective has on learning our vision-based cube stacking task. Furthermore, one can also observe that the stability of the method persists even when jointly training, end-to-end, with the TCN loss or the DANN loss on real-world data.
VI Conclusion
In this work, we have presented our self-supervised domain adaptation method, which uses unlabeled real robot data to improve sim-to-real transfer learning. Our method is able to perform domain adaptation for sim-to-real transfer of cube stacking from visual observations. In addition to our domain adaptation method, we developed contrastive forward dynamics (CFD), which combines dynamics model learning with time-contrastive techniques to better utilize the structure available in unlabeled robot data. We demonstrate that using our CFD objective for adaptation yields a clear improvement over domain randomization, other self-supervised adaptation techniques and domain-adversarial methods.
Through our experiments, we discovered that optimizing only the first visual layers of the policy network in combination with jointly optimizing the reinforcement learning, behavior cloning and self-supervised loss was necessary for a successful application of self-supervised learning for sim-to-real transfer for robotic manipulation. Finally, the use of sequence-based self-supervised loss by leveraging the dynamical structure in the robotic system ultimately resulted in the best domain adaptation for our manipulation task.
-  J. Bohg, A. Morales, T. Asfour, and D. Kragic, “Data-driven grasp synthesis-a survey,” IEEE Transactions on Robotics, 2014.
-  U. Viereck, A. t. Pas, K. Saenko, and R. Platt, “Learning a visuomotor controller for real world robotic grasping using easily simulated depth images,” CoRL, 2017.
-  J. Mahler, J. Liang, S. Niyaz, M. Laskey, R. Doan, X. Liu, J. A. Ojea, and K. Goldberg, “Dex-Net 2.0: Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics,” in RSS, 2017.
-  S. Levine, P. Pastor, A. Krizhevsky, and D. Quillen, “Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection,” CoRR, vol. abs/1603.02199, 2016.
-  D. Kalashnikov, A. Irpan, P. Pastor, J. Ibarz, A. Herzog, E. Jang, D. Quillen, E. Holly, M. Kalakrishnan, V. Vanhoucke, and S. Levine, “Qt-opt: Scalable deep reinforcement learning for vision-based robotic manipulation,” CoRR, vol. abs/1806.10293, 2018.
-  B. Siciliano and O. Khatib, Springer Handbook of Robotics. Secaucus, NJ, USA: Springer-Verlag New York, Inc., 2007.
-  X. B. Peng, M. Andrychowicz, W. Zaremba, and P. Abbeel, “Sim-to-real transfer of robotic control with dynamics randomization,” CoRR, vol. abs/1710.06537, 2017.
-  S. James, P. Wohlhart, M. Kalakrishnan, D. Kalashnikov, A. Irpan, J. Ibarz, S. Levine, R. Hadsell, and K. Bousmalis, “Sim-to-real via sim-to-sim: Data-efficient robotic grasping via randomized-to-canonical adaptation networks,” CoRR, vol. abs/1812.07252, 2018.
-  OpenAI, M. Andrychowicz, B. Baker, M. Chociej, R. Józefowicz, B. McGrew, J. W. Pachocki, J. Pachocki, A. Petron, M. Plappert, G. Powell, A. Ray, J. Schneider, S. Sidor, J. Tobin, P. Welinder, L. Weng, and W. Zaremba, “Learning dexterous in-hand manipulation,” CoRR, vol. abs/1808.00177, 2018.
-  G. J. Stein and N. Roy, “Genesis-rt: Generating synthetic images for training secondary real-world tasks,” in 2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018, pp. 7151–7158.
-  K. Bousmalis, A. Irpan, P. Wohlhart, Y. Bai, M. Kelcey, M. Kalakrishnan, L. Downs, J. Ibarz, P. Pastor, K. Konolige, S. Levine, and V. Vanhoucke, “Using simulation and domain adaptation to improve efficiency of deep robotic grasping,” CoRR, vol. abs/1709.07857, 2017.
-  G. Csurka, “Domain adaptation for visual applications: A comprehensive survey,” arxiv:1702.05374, 2017.
-  J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P. Abbeel, “Domain randomization for transferring deep neural networks from simulation to the real world,” CoRR, vol. abs/1703.06907, 2017.
-  Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky, “Domain-Adversarial Training of Neural Networks,” arXiv e-prints, May 2015.
-  A. Billard and D. Kragic, “Trends and challenges in robot manipulation,” Science, vol. 364, no. 6446, p. eaat8414, 2019.
-  V.-D. Nguyen, “Constructing Force-Closure Grasps,” IJRR, 1988.
-  A. Rodriguez, M. T. Mason, and S. Ferry, “From caging to grasping,” IJRR, 2012.
-  S. Makita and W. Wan, “A Survey of Robotic Caging and its Applications,” Advanced Robotics, vol. 0, no. 0, pp. 1–15, 2017.
-  S. Kolev and E. Todorov, “Physically consistent state estimation and system identification for contacts,” in 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), Nov 2015, pp. 1036–1043.
-  Y. Chebotar, A. Handa, V. Makoviychuk, M. Macklin, J. Issac, N. D. Ratliff, and D. Fox, “Closing the sim-to-real loop: Adapting simulation randomization with real world experience,” CoRR, vol. abs/1810.05687, 2018.
-  M. Riedmiller, R. Hafner, T. Lampe, M. Neunert, J. Degrave, T. Van de Wiele, V. Mnih, N. Heess, and J. T. Springenberg, “Learning by playing - solving sparse reward tasks from scratch,” in International Conference on Machine Learning, 2018, pp. 4341–4350.
-  Y. Zhu, Z. Wang, J. Merel, A. Rusu, T. Erez, S. Cabi, S. Tunyasuvunakool, J. Kramár, R. Hadsell, N. de Freitas, and N. Heess, “Reinforcement and imitation learning for diverse visuomotor skills,” in Proceedings of Robotics: Science and Systems, Pittsburgh, Pennsylvania, June 2018.
-  K. Bousmalis, N. Silberman, D. Dohan, D. Erhan, and D. Krishnan, “Unsupervised pixel-level domain adaptation with generative adversarial networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 3722–3731.
-  R. Caseiro, J. F. Henriques, P. Martins, and J. Batista, “Beyond the shortest path: Unsupervised Domain Adaptation by Sampling Subspaces Along the Spline Flow,” in CVPR, 2015.
-  Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky, “Domain-adversarial training of neural networks,” The Journal of Machine Learning Research, vol. 17, no. 1, pp. 2096–2030, 2016.
-  K. Bousmalis, G. Trigeorgis, N. Silberman, D. Krishnan, and D. Erhan, “Domain separation networks,” in NIPS, 2016.
-  B. Fernando, H. Bilen, E. Gavves, and S. Gould, “Self-supervised video representation learning with odd-one-out networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 3636–3645.
-  D. Wei, J. J. Lim, A. Zisserman, and W. T. Freeman, “Learning and using the arrow of time,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 8052–8060.
-  P. Sermanet, C. Lynch, J. Hsu, and S. Levine, “Time-contrastive networks: Self-supervised learning from multi-view observation,” CoRR, vol. abs/1704.06888, 2017.
-  Y. Aytar, T. Pfaff, D. Budden, T. Paine, Z. Wang, and N. de Freitas, “Playing hard exploration games by watching youtube,” in Advances in Neural Information Processing Systems, 2018, pp. 2930–2941.
-  A. Abdolmaleki, J. T. Springenberg, Y. Tassa, R. Munos, N. Heess, and M. A. Riedmiller, “Maximum a posteriori policy optimisation,” CoRR, vol. abs/1806.06920, 2018.
-  D. Schwab, J. T. Springenberg, M. F. Martins, T. Lampe, M. Neunert, A. Abdolmaleki, T. Hertweck, R. Hafner, F. Nori, and M. A. Riedmiller, “Simultaneously learning vision and feature-based control policies for real-world ball-in-a-cup,” CoRR, vol. abs/1902.04706, 2019.
-  A. Nair, B. McGrew, M. Andrychowicz, W. Zaremba, and P. Abbeel, “Overcoming exploration in reinforcement learning with demonstrations,” CoRR, vol. abs/1709.10089, 2017.
-  M. Vecerik, T. Hester, J. Scholz, F. Wang, O. Pietquin, B. Piot, N. Heess, T. Rothörl, T. Lampe, and M. A. Riedmiller, “Leveraging demonstrations for deep reinforcement learning on robotics problems with sparse rewards,” CoRR, vol. abs/1707.08817, 2017.
-  Y. Aytar, L. Castrejon, C. Vondrick, H. Pirsiavash, and A. Torralba, “Cross-modal scene networks,” IEEE transactions on pattern analysis and machine intelligence, 2017.
-  I. Misra, C. L. Zitnick, and M. Hebert, “Shuffle and learn: unsupervised learning using temporal order verification,” in European Conference on Computer Vision. Springer, 2016, pp. 527–544.
-  A. v. d. Oord, Y. Li, and O. Vinyals, “Representation learning with contrastive predictive coding,” arXiv preprint arXiv:1807.03748, 2018.
-  L. Grüne and J. Pannek, Nonlinear Model Predictive Control: Theory and Algorithms. Springer Publishing Company, Incorporated, 2013.
-  S. Chopra, R. Hadsell, Y. LeCun, et al., “Learning a similarity metric discriminatively, with application to face verification,” in CVPR (1), 2005, pp. 539–546.
-  R. Hadsell, S. Chopra, and Y. LeCun, “Dimensionality reduction by learning an invariant mapping,” in 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), vol. 2. IEEE, 2006, pp. 1735–1742.
-  K. Sohn, “Improved deep metric learning with multi-class n-pair loss objective,” in Advances in Neural Information Processing Systems, 2016, pp. 1857–1865.
-  E. Todorov, T. Erez, and Y. Tassa, “Mujoco: A physics engine for model-based control,” in Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on. IEEE, 2012, pp. 5026–5033.
-  Y. Tassa, Y. Doron, A. Muldal, T. Erez, Y. Li, D. de Las Casas, D. Budden, A. Abdolmaleki, J. Merel, A. Lefrancq, T. P. Lillicrap, and M. A. Riedmiller, “Deepmind control suite,” ArXiv, vol. abs/1801.00690, 2018.
-  F. Sadeghi and S. Levine, “CAD2RL: Real single-image flight without a single real image.” in RSS, 2017.
-  J. Kirkpatrick, R. Pascanu, N. C. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska, D. Hassabis, C. Clopath, D. Kumaran, and R. Hadsell, “Overcoming catastrophic forgetting in neural networks,” Proceedings of the National Academy of Sciences of the United States of America, vol. 114 13, pp. 3521–3526, 2016.