
Learning to Grasp Without Seeing

by Adithyavairavan Murali, et al.
Carnegie Mellon University

Can a robot grasp an unknown object without seeing it? In this paper, we present a tactile-sensing-based approach to this challenging problem of grasping novel objects without prior knowledge of their location or physical properties. Our key idea is to combine touch-based object localization with tactile-based re-grasping. To train our learning models, we created a large-scale grasping dataset, including more than 30 RGB frames and over 2.8 million tactile samples from 7800 grasp interactions of 52 objects. To learn a representation of tactile signals, we propose an unsupervised auto-encoding scheme, which shows a significant improvement of 4-9% on a variety of tactile perception tasks. Our system consists of two steps. First, our touch-localization model sequentially 'touch-scans' the workspace and uses a particle filter to aggregate beliefs from multiple hits of the target. It outputs an estimate of the object's location, from which an initial grasp is established. Next, our re-grasping model learns to progressively improve grasps with tactile feedback based on the learned features. This network learns to estimate grasp stability and predict adjustments for the next grasp. Re-grasping is thus performed iteratively until our model identifies a stable grasp. Finally, we demonstrate extensive experimental results on grasping a large set of novel objects using tactile sensing alone. Furthermore, when applied on top of a vision-based policy, our re-grasping model significantly boosts the overall accuracy by 10.6%. To our knowledge, this is the first attempt at grasping with only tactile sensing and without any prior object knowledge.
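The touch-localization step described above can be illustrated with a minimal particle-filter sketch. This is not the paper's implementation; it is a generic 2-D example in which each touch "hit" reweights position hypotheses and the posterior mean serves as the object-location estimate. The workspace bounds, noise scales, and object position below are all assumed values for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_particles(n, workspace):
    """Uniformly seed particle hypotheses of the object's (x, y) position."""
    lo, hi = workspace
    return rng.uniform(lo, hi, size=(n, 2))

def update(particles, touch_xy, sigma=0.03):
    """Reweight particles by how well they explain a touch hit at touch_xy,
    then resample. sigma is an assumed contact-position noise scale."""
    d2 = np.sum((particles - touch_xy) ** 2, axis=1)
    # Subtract the minimum squared distance so the largest weight is 1
    # (avoids underflow when all particles are far from the hit).
    w = np.exp(-(d2 - d2.min()) / (2 * sigma ** 2))
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    # Resample and jitter to avoid particle collapse.
    return particles[idx] + rng.normal(0.0, sigma / 2, size=particles.shape)

def estimate(particles):
    """Posterior mean over particles = current object-location belief."""
    return particles.mean(axis=0)

# Simulate a touch-scan: the (hypothetical) true object sits at (0.4, 0.2).
true_pos = np.array([0.4, 0.2])
p = init_particles(500, workspace=(0.0, 1.0))
for _ in range(5):
    hit = true_pos + rng.normal(0.0, 0.02, size=2)  # noisy contact reading
    p = update(p, hit)
print(estimate(p))  # belief concentrates near the true position
```

Aggregating several noisy hits this way is what lets the estimate sharpen over successive touch-scans; in the paper's pipeline the resulting location estimate seeds the initial grasp before re-grasping begins.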



