OHPL: One-shot Hand-eye Policy Learner

08/06/2021
by Changjae Oh et al.

The control of a robot for manipulation tasks generally relies on object detection and pose estimation. An attractive alternative is to learn control policies directly from raw input data. However, this approach is time-consuming and expensive, since learning the policy requires many trials with robot actions in the physical environment. To reduce the training cost, the policy can be learned in simulation with a large set of synthetic images; the limitation of this approach is the domain gap between the simulation and the robot workspace. In this paper, we propose to learn a policy for robot reaching movements from a single image captured directly in the robot workspace by a camera placed on the end-effector (a hand-eye camera). The idea behind the proposed policy learner is that the view changes seen from the hand-eye camera as the robot acts in its workspace are analogous to the view changes produced by sequentially localising a region of interest within a single image. This analogy allows object-reaching policies to be trained with reinforcement-learning-based sequential object localisation. To help the policy adapt to view changes in the robot workspace, we further present a dynamic filter that learns to bias the input state, removing information that is irrelevant to the action decision. The proposed policy learner can serve as a powerful representation for robotic tasks, and we validate it on static and moving object reaching tasks.
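To make the analogy concrete, here is a minimal sketch in PyTorch of how such a policy could be organised. All names (DynamicFilter, OHPLPolicy, n_actions) and the choice of a sigmoid gate for the dynamic filter are our assumptions for illustration, not details from the paper: an image encoder produces a state, an input-conditioned gate re-weights that state, and a linear head scores discrete localisation actions (e.g. shift the view left/right/up/down, zoom, stop).

```python
# Hypothetical sketch of the OHPL idea: hand-eye view changes treated as
# sequential localisation actions, with a dynamic filter biasing the state.
# Module and parameter names are illustrative, not taken from the paper.
import torch
import torch.nn as nn


class DynamicFilter(nn.Module):
    """Input-conditioned gate that re-weights the state, suppressing
    features irrelevant to the next action decision."""

    def __init__(self, feat_dim: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(feat_dim, feat_dim),
            nn.Sigmoid(),  # per-feature weights in [0, 1]
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return state * self.gate(state)  # element-wise re-weighting


class OHPLPolicy(nn.Module):
    """CNN encoder + dynamic filter + head over discrete localisation
    actions (e.g. shift view left/right/up/down, zoom, stop)."""

    def __init__(self, n_actions: int = 6, feat_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.filter = DynamicFilter(feat_dim)
        self.head = nn.Linear(feat_dim, n_actions)  # action logits

    def forward(self, view: torch.Tensor) -> torch.Tensor:
        state = self.encoder(view)  # current hand-eye view -> features
        state = self.filter(state)  # bias state toward task-relevant cues
        return self.head(state)     # scores for localisation actions


if __name__ == "__main__":
    policy = OHPLPolicy()
    view = torch.randn(1, 3, 96, 96)      # one hand-eye camera frame
    action = policy(view).argmax(dim=-1)  # greedy action for this step
    print("chosen action index:", action.item())
```

Under this reading, training reduces to standard reinforcement learning over localisation actions on a single workspace image, with the gate letting the policy discard view content that does not bear on the next move.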

Related research

11/24/2021  Ex-DoF: Expansion of Action Degree-of-Freedom with Virtual Camera Rotation for Omnidirectional Image
Inter-robot transfer of training data is a little explored topic in lear...

12/17/2021  A controller for reaching and unveiling a partially occluded object of interest with an eye-in-hand robot
In this work, a control scheme for approaching and unveiling a partially...

09/20/2017  Transfer learning from synthetic to real images using variational autoencoders for robotic applications
Robotic learning in simulation environments provides a faster, more scal...

11/21/2019  Contextual Reinforcement Learning of Visuo-tactile Multi-fingered Grasping Policies
Using simulation to train robot manipulation policies holds the promise ...

11/21/2019  Camera-to-Robot Pose Estimation from a Single Image
We present an approach for estimating the pose of a camera with respect ...

11/20/2018  Reinforcement Learning of Active Vision for Manipulating Objects under Occlusions
We consider artificial agents that learn to jointly control their grippe...

10/14/2022  Learning to Autonomously Reach Objects with NICO and Grow-When-Required Networks
The act of reaching for an object is a fundamental yet complex skill for...
