RL-CycleGAN: Reinforcement Learning Aware Simulation-To-Real

06/16/2020
by Kanishka Rao, et al.

Deep neural network based reinforcement learning (RL) can learn appropriate visual representations for complex tasks like vision-based robotic grasping without manually engineered features or a separately trained perception system. However, data for RL is collected by running an agent in the desired environment, and for applications like robotics, running a robot in the real world may be extremely costly and time consuming. Simulated training offers an appealing alternative, but ensuring that policies trained in simulation transfer effectively to the real world requires additional machinery. Simulations may not match reality, and bridging the simulation-to-reality gap typically requires domain knowledge and task-specific engineering. This process can be automated by employing generative models to translate simulated images into realistic ones. However, this sort of translation is typically task-agnostic, in that the translated images may not preserve all features that are relevant to the task. In this paper, we introduce the RL-scene consistency loss for image translation, which ensures that the translation operation is invariant with respect to the Q-values associated with the image. This allows us to learn a task-aware translation. Incorporating this loss into unsupervised domain translation, we obtain RL-CycleGAN, a new approach for simulation-to-real-world transfer for reinforcement learning. In evaluations of RL-CycleGAN on two vision-based robotic grasping tasks, we show that it offers a substantial improvement over a number of prior methods for sim-to-real transfer, attaining excellent real-world performance with only a modest number of real-world observations.
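The core idea of the RL-scene consistency loss can be sketched in a few lines: the Q-network should assign (approximately) the same Q-values to a simulated image and to its translated counterpart, so any translation that alters task-relevant content is penalized. The sketch below is illustrative only, assuming toy stand-ins for the CycleGAN generator `G` (sim-to-real) and the task's Q-network `Q`; none of these names come from the paper's actual code.

```python
def matvec(m, v):
    """Multiply a small matrix (list of rows) by a vector."""
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

# Toy stand-ins (hypothetical): a near-identity "generator" G translating a
# 2-pixel simulated "image", and a linear "Q-network" mapping it to 2 Q-values.
G = [[1.0, 0.1], [0.0, 1.0]]
Q = [[0.5, -0.3], [0.2, 0.8]]

def rl_scene_consistency_loss(sim_images):
    """Mean squared difference between Q(x) and Q(G(x)) over a batch.

    A low value means the translation preserves the Q-values, i.e. the
    task-relevant content of the scene.
    """
    total, count = 0.0, 0
    for x in sim_images:
        q_sim = matvec(Q, x)              # Q-values of the simulated image
        q_trans = matvec(Q, matvec(G, x)) # Q-values of its translation
        total += sum((a - b) ** 2 for a, b in zip(q_sim, q_trans))
        count += len(q_sim)
    return total / count

batch = [[1.0, 2.0], [0.5, -1.0]]
loss = rl_scene_consistency_loss(batch)
```

In training, this term would be added to the usual CycleGAN objectives (adversarial and cycle-consistency losses), so the generator is pushed toward translations that are both realistic and Q-value preserving.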


