Learning Precise 3D Manipulation from Multiple Uncalibrated Cameras

02/21/2020
by Iretiayo Akinola, et al.

In this work, we present an effective multi-view approach to closed-loop end-to-end learning of precise manipulation tasks that are 3D in nature. Our method learns to accomplish these tasks using multiple statically placed but uncalibrated RGB camera views, without building an explicit 3D representation such as a pointcloud or voxel grid. This multi-camera approach achieves superior task performance on difficult stacking and insertion tasks compared to single-view baselines. Single-view robotic agents struggle with occlusion and with estimating relative poses between points of interest. While full 3D scene representations (voxels or pointclouds) can be obtained by registering the output of multiple depth sensors, several challenges complicate operating on such explicit 3D representations: imperfect camera calibration, poor depth maps caused by object properties such as reflective surfaces, and slower inference over 3D representations compared to 2D images. Because the cameras are static but uncalibrated, our approach requires neither camera-robot nor camera-camera calibration, making it easy to set up, and our use of sensor dropout during training makes it resilient to the loss of camera views after deployment.
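The abstract describes fusing several static, uncalibrated RGB views in a learned policy and applying sensor dropout during training so the policy tolerates a missing camera at deployment. Below is a minimal PyTorch sketch of one way such a setup could look; the `MultiViewPolicy` name, encoder architecture, feature dimensions, and dropout rate are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class MultiViewPolicy(nn.Module):
    """Sketch of a multi-view policy with per-view sensor dropout.

    Each camera view is encoded by a shared CNN; the per-view features are
    concatenated and mapped to an action. During training, an entire view's
    features are randomly zeroed ("sensor dropout") so the policy does not
    over-rely on any single camera. All sizes here are placeholders.
    """

    def __init__(self, num_views=3, feat_dim=128, action_dim=7, view_drop_p=0.3):
        super().__init__()
        self.view_drop_p = view_drop_p
        # Small CNN encoder shared across all camera views.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # Policy head over the concatenated per-view features.
        self.head = nn.Sequential(
            nn.Linear(num_views * feat_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, views):
        # views: list of tensors, one per camera, each of shape (B, 3, H, W)
        feats = [self.encoder(v) for v in views]
        if self.training:
            for i in range(len(feats)):
                # Drop this view's features for a random subset of the batch.
                drop = (torch.rand(feats[i].shape[0], 1, device=feats[i].device)
                        < self.view_drop_p).float()
                feats[i] = feats[i] * (1.0 - drop)
        return self.head(torch.cat(feats, dim=1))
```

A production version would likely guarantee that at least one view survives dropout in every sample; this sketch omits that detail for brevity.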


