Seeing All the Angles: Learning Multiview Manipulation Policies for Contact-Rich Tasks from Demonstrations

04/28/2021
by Trevor Ablett et al.

Learned visuomotor policies have shown considerable success as an alternative to traditional, hand-crafted frameworks for robotic manipulation tasks. Surprisingly, the extension of these methods to the multiview domain is relatively unexplored. A successful multiview policy could be deployed on a mobile manipulation platform, allowing it to complete a task regardless of its view of the scene. In this work, we demonstrate that a multiview policy can be found through imitation learning by collecting data from a variety of viewpoints. We illustrate the general applicability of the method by learning to complete several challenging multi-stage and contact-rich tasks, from numerous viewpoints, both in a simulated environment and on a real mobile manipulation platform. Furthermore, we analyze our policies to determine the benefits of learning from multiview data compared to learning with data from a fixed perspective. We show that learning from multiview data has little, if any, penalty to performance for a fixed-view task compared to learning with an equivalent amount of fixed-view data. Finally, we examine the visual features learned by the multiview and fixed-view policies. Our results indicate that multiview policies implicitly learn to identify spatially correlated features with a degree of view-invariance.
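The core idea, imitation learning on demonstration data pooled across viewpoints, can be illustrated with a toy sketch. Everything below is hypothetical: the paper uses image-based neural policies, while this sketch substitutes a linear policy fit by ridge-regularized least squares on synthetic observation/action pairs, and the function names (`collect_demos`, `fit_bc_policy`) and dimensions are invented for illustration. The point it shows is structural: a single policy is trained on one dataset aggregated from several views, rather than one policy per fixed view.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for multiview demonstration data: each sample pairs a
# flattened observation with the expert's action. Observations from different
# views have different statistics (shifted means), but all views are pooled
# into one training set, mirroring the multiview imitation setup.
def collect_demos(n_views=5, demos_per_view=40, obs_dim=64, act_dim=4):
    X, Y = [], []
    expert_map = rng.normal(size=(obs_dim, act_dim))  # toy expert behavior
    for v in range(n_views):
        obs = rng.normal(loc=v, size=(demos_per_view, obs_dim))  # view-dependent stats
        acts = obs @ expert_map                                   # expert actions
        X.append(obs)
        Y.append(acts)
    return np.vstack(X), np.vstack(Y)

# Behavioral cloning reduces to supervised regression: fit one policy on the
# pooled multiview data (here, closed-form ridge regression).
def fit_bc_policy(X, Y, reg=1e-3):
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + reg * np.eye(d), X.T @ Y)

X, Y = collect_demos()
W = fit_bc_policy(X, Y)
mse = float(np.mean((X @ W - Y) ** 2))
print(f"pooled-view training MSE: {mse:.6f}")
```

Because the toy expert is linear and deterministic, the pooled fit recovers it almost exactly; the interesting question the paper studies, how much pooling views costs relative to an equal amount of single-view data, has no analogue in this simplified setting.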


Related Research

- Error-Aware Imitation Learning from Teleoperation Data for Mobile Manipulation (12/09/2021)
- A System for Imitation Learning of Contact-Rich Bimanual Manipulation Policies (08/01/2022)
- Relay Policy Learning: Solving Long-Horizon Tasks via Imitation and Reinforcement Learning (10/25/2019)
- MOMA-Force: Visual-Force Imitation for Real-World Mobile Manipulation (08/07/2023)
- Self-Supervised Correspondence in Visuomotor Policy Learning (09/16/2019)
- RoboTurk: A Crowdsourcing Platform for Robotic Skill Learning through Imitation (11/07/2018)
- Decomposing the Generalization Gap in Imitation Learning for Visual Robotic Manipulation (07/07/2023)
