End-to-end Reinforcement Learning of Robotic Manipulation with Robust Keypoints Representation

02/12/2022
by Tianying Wang, et al.

We present an end-to-end Reinforcement Learning (RL) framework for robotic manipulation tasks that uses a robust and efficient keypoint representation. The proposed method learns keypoints from camera images as the state representation, through a self-supervised autoencoder architecture. The keypoints encode the geometric information, as well as the relationship between the tool and the target, in a compact representation that enables efficient and robust learning. Once the keypoints are learned, the RL step learns the robot motion from the extracted keypoint state representation. Both the keypoint learning and the RL training are carried out entirely in simulation. We demonstrate the effectiveness of the proposed method on robotic manipulation tasks, including grasping and pushing, in different scenarios, and we investigate the generalization capability of the trained model. In addition to the robust keypoint representation, we apply domain randomization and adversarial training examples to achieve zero-shot sim-to-real transfer on real-world robotic manipulation tasks.
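The abstract above is given without code; as a rough illustration of the keypoint-bottleneck idea it describes, the sketch below shows a minimal keypoint encoder built around a spatial soft-argmax layer, written in PyTorch. The class name, parameter names, and layer sizes (KeypointEncoder, num_keypoints, the convolutional backbone) are illustrative assumptions, not the authors' architecture, and the decoder and reconstruction loss that would make the training self-supervised are omitted.

```python
# Illustrative sketch only -- not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class KeypointEncoder(nn.Module):
    """Map an RGB image to K 2-D keypoints via a spatial soft-argmax bottleneck.

    In a full self-supervised autoencoder setup, a decoder would reconstruct
    the input image from these keypoints, providing the training signal.
    """

    def __init__(self, num_keypoints: int = 8):
        super().__init__()
        # Small convolutional backbone producing one heat-map per keypoint.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, num_keypoints, 3, stride=1, padding=1),
        )

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        heatmaps = self.backbone(img)                      # (B, K, H, W)
        b, k, h, w = heatmaps.shape
        probs = F.softmax(heatmaps.view(b, k, -1), dim=-1).view(b, k, h, w)
        # Expected (x, y) location of each heat-map in normalized [-1, 1] coords.
        xs = torch.linspace(-1.0, 1.0, w, device=img.device)
        ys = torch.linspace(-1.0, 1.0, h, device=img.device)
        kp_x = (probs.sum(dim=2) * xs).sum(dim=-1)         # (B, K)
        kp_y = (probs.sum(dim=3) * ys).sum(dim=-1)         # (B, K)
        return torch.stack([kp_x, kp_y], dim=-1)           # (B, K, 2)

# The flattened keypoints would then serve as the compact state for the RL policy:
#   state = KeypointEncoder(num_keypoints=8)(image).flatten(1)   # (B, 2 * K)
```

A compact 2K-dimensional state of this kind is what makes the downstream RL step efficient compared with learning directly from raw pixels, and it is the representation to which domain randomization (e.g., randomized textures, lighting, and camera pose) and adversarial training examples can be applied in simulation before zero-shot transfer to the real robot.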


