
3D Shape Reconstruction from Vision and Touch

07/07/2020
by Edward J. Smith, et al.

When a toddler is presented with a new toy, their instinctual behaviour is to pick it up and inspect it with their hand and eyes in tandem, clearly searching over its surface to properly understand what they are playing with. Here, touch provides high-fidelity localized information while vision provides complementary global context. However, in 3D shape reconstruction, the complementary fusion of visual and haptic modalities remains largely unexplored. In this paper, we study this problem and present an effective chart-based approach to fusing vision and touch, which leverages advances in graph convolutional networks. To do so, we introduce a dataset of simulated touch and vision signals from the interaction between a robotic hand and a large array of 3D objects. Our results show that (1) leveraging both vision and touch signals consistently improves single-modality baselines; (2) our approach outperforms alternative modality fusion methods and strongly benefits from the proposed chart-based structure; (3) the reconstruction quality increases with the number of grasps provided; and (4) the touch information not only enhances the reconstruction at the touch site but also extrapolates to its local neighborhood.
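
To make the chart-based fusion concrete, the sketch below shows a single graph-convolution step over chart vertices: each vertex combines its position, local (touch-derived) features, and a broadcast copy of a global vision feature, and the layer predicts per-vertex displacements that deform the charts. This is a minimal illustrative sketch in PyTorch, not the authors' implementation; the class name ChartFusionGCN, the feature sizes, the single-layer structure, and the identity adjacency in the usage snippet are assumptions made for brevity.

import torch
import torch.nn as nn

class ChartFusionGCN(nn.Module):
    """One graph-convolution step fusing touch-chart vertex features with a global vision feature."""

    def __init__(self, vert_feat_dim=128, vision_feat_dim=256, hidden_dim=128):
        super().__init__()
        # Each vertex sees its 3D position, its local (touch) features, and a
        # broadcast copy of the global image feature.
        in_dim = 3 + vert_feat_dim + vision_feat_dim
        self.transform = nn.Linear(in_dim, hidden_dim)
        self.offset_head = nn.Linear(hidden_dim, 3)  # per-vertex displacement

    def forward(self, verts, vert_feats, vision_feat, adj):
        # verts:       (V, 3)  chart vertex positions
        # vert_feats:  (V, F)  per-vertex features (e.g. sampled from touch charts)
        # vision_feat: (D,)    global image feature shared by all vertices
        # adj:         (V, V)  row-normalised adjacency with self-loops
        num_verts = verts.shape[0]
        x = torch.cat([verts, vert_feats, vision_feat.expand(num_verts, -1)], dim=-1)
        h = torch.relu(self.transform(adj @ x))  # aggregate neighbours, then transform
        return verts + self.offset_head(h)       # deform the charts toward the surface

# Toy usage showing the shapes involved.
num_verts = 64
adj = torch.eye(num_verts)
adj = adj / adj.sum(dim=1, keepdim=True)
model = ChartFusionGCN()
new_verts = model(torch.randn(num_verts, 3),
                  torch.randn(num_verts, 128),
                  torch.randn(256),
                  adj)
print(new_verts.shape)  # torch.Size([64, 3])

In the actual method the graph connectivity would come from the chart meshes themselves, and several such layers would typically be stacked; the identity adjacency above only keeps the example self-contained and runnable.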


Related Research

07/20/2021
Active 3D Shape Reconstruction from Vision and Touch
Humans build 3D understandings of the world through active object explor...

08/22/2018
CentralNet: a Multilayer Approach for Multimodal Fusion
This paper proposes a novel multimodal fusion approach, aiming to produc...

04/04/2017
OctNetFusion: Learning Depth Fusion from Data
In this paper, we present a learning based approach to depth fusion, i.e...

12/13/2018
Dynamic Fusion with Intra- and Inter-Modality Attention Flow for Visual Question Answering
Learning effective fusion of multi-modality features is at the heart of ...

10/07/2019
An Interactive Control Approach to 3D Shape Reconstruction
The ability to accurately reconstruct the 3D facets of a scene is one of...

02/03/2022
HRBF-Fusion: Accurate 3D reconstruction from RGB-D data using on-the-fly implicits
Reconstruction of high-fidelity 3D objects or scenes is a fundamental re...

04/08/2021
Multimodal Fusion of EMG and Vision for Human Grasp Intent Inference in Prosthetic Hand Control
For lower arm amputees, robotic prosthetic hands offer the promise to re...

Code Repositories

3D-Vision-and-Touch
