Active Perception and Representation for Robotic Manipulation

03/15/2020
by Youssef Zaky, et al.

The vast majority of visual animals actively control their eyes, heads, and/or bodies to direct their gaze toward different parts of their environment. In contrast, recent applications of reinforcement learning in robotic manipulation employ cameras as passive sensors, carefully placed to view the scene from a fixed pose. Active perception allows animals to gather the most relevant information about the world and focus their computational resources where needed. It also enables them to view objects from different distances and viewpoints, providing a rich visual experience from which to learn abstract representations of the environment. Inspired by the primate visual-motor system, we present a framework that leverages the benefits of active perception to accomplish manipulation tasks. Our agent uses viewpoint changes to localize objects, to learn state representations in a self-supervised manner, and to perform goal-directed actions. We apply our model to a simulated grasping task with a 6-DoF action space. Compared to its passive, fixed-camera counterpart, the active model achieves 8% better performance in targeted grasping. Compared to vanilla deep Q-learning algorithms, our model is at least four times more sample-efficient, highlighting the benefits of both active perception and representation learning.
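The abstract describes a perception-action loop: the agent changes viewpoint to gather information, then acts on what it sees. As a purely illustrative sketch (the environment, viewpoint count, and tabular Q-values below are invented for illustration and are not the paper's method, which uses deep Q-learning over images), a minimal active-viewpoint selection loop might look like:

```python
import random

# Hypothetical toy setup: the agent picks one of N discrete viewpoints;
# one viewpoint (TARGET_VIEW, an assumption of this sketch) gives a clear
# view of the target object and yields reward 1, the others yield 0.
N_VIEWPOINTS = 4
TARGET_VIEW = 2

def reward(viewpoint):
    """Toy stand-in for 'how informative was this view for grasping'."""
    return 1.0 if viewpoint == TARGET_VIEW else 0.0

# Tabular Q-values, one per viewpoint (a stand-in for a learned model).
q = [0.0] * N_VIEWPOINTS
alpha, epsilon = 0.5, 0.1
rng = random.Random(0)

for step in range(500):
    # Epsilon-greedy viewpoint selection: mostly exploit, sometimes explore.
    if rng.random() < epsilon:
        v = rng.randrange(N_VIEWPOINTS)
    else:
        v = max(range(N_VIEWPOINTS), key=lambda i: q[i])
    # One-step update moving the estimate toward the observed reward.
    q[v] += alpha * (reward(v) - q[v])

best = max(range(N_VIEWPOINTS), key=lambda i: q[i])
print(best)  # the agent settles on the informative viewpoint
```

The point of the sketch is only the structure of the loop: choose where to look, observe, update. The paper's agent replaces the table with a deep network over images and uses the viewpoint changes additionally for self-supervised representation learning.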


Related research

- 11/16/2018 — Grasp2Vec: Learning Object Representations from Self-Supervised Grasping. "Well structured visual representations can make robot learning faster an..."
- 07/16/2020 — Distributed Reinforcement Learning of Targeted Grasping with Active Vision for Mobile Manipulators. "Developing personal robots that can perform a diverse range of manipulat..."
- 02/12/2022 — End-to-end Reinforcement Learning of Robotic Manipulation with Robust Keypoints Representation. "We present an end-to-end Reinforcement Learning (RL) framework for roboti..."
- 07/01/2021 — Learning to See before Learning to Act: Visual Pre-training for Manipulation. "Does having visual priors (e.g. the ability to detect objects) facilitat..."
- 06/01/2022 — Active Inference for Robotic Manipulation. "Robotic manipulation stands as a largely unsolved problem despite signif..."
- 01/27/2021 — Self-Calibrating Active Binocular Vision via Active Efficient Coding with Deep Autoencoders. "We present a model of the self-calibration of active binocular vision co..."
- 07/02/2023 — Active Sensing with Predictive Coding and Uncertainty Minimization. "We present an end-to-end procedure for embodied exploration based on two..."
