Visuo-Tactile Manipulation Planning Using Reinforcement Learning with Affordance Representation

07/14/2022
by Wenyu Liang, et al.

Robots are increasingly expected to manipulate objects in unstructured environments where object properties carry high perceptual uncertainty under any single sensory modality, which directly impacts the success of object manipulation. In this work, we propose a reinforcement learning-based motion planning framework for object manipulation that uses both on-the-fly multisensory feedback and a learned attention-guided deep affordance model as perceptual states. The affordance model is learned from multiple sensory modalities, including vision and touch (tactile and force/torque), and is designed to predict and indicate the manipulable regions of multiple affordances (i.e., graspability and pushability) for objects with similar appearances but different intrinsic properties (e.g., mass distribution). A DQN-based deep reinforcement learning algorithm is then trained to select the optimal action for successful object manipulation. To validate the proposed framework, our method is evaluated and benchmarked on both an open dataset and a dataset we collected. The results show that the proposed method and the overall framework outperform existing methods, achieving better accuracy and higher efficiency.
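The abstract describes a DQN whose perceptual state is a set of learned affordance maps and whose actions are manipulation primitives such as grasping and pushing. As a rough illustration only (this is not the paper's code; the network architecture, the two-channel graspability/pushability state, and the four-action set are all assumptions), a minimal PyTorch sketch of epsilon-greedy action selection over stacked affordance maps might look like this:

```python
# Minimal sketch, NOT the authors' implementation. All names, tensor
# shapes, and the discrete action set are illustrative assumptions.
import torch
import torch.nn as nn


class AffordanceDQN(nn.Module):
    """Q-network over stacked affordance maps (e.g., graspability and
    pushability channels predicted from vision + tactile/force input)."""

    def __init__(self, in_channels: int = 2, num_actions: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.q_head = nn.Linear(64, num_actions)

    def forward(self, affordance_maps: torch.Tensor) -> torch.Tensor:
        # affordance_maps: (batch, channels, height, width)
        return self.q_head(self.encoder(affordance_maps))


def select_action(q_net: AffordanceDQN, state: torch.Tensor,
                  epsilon: float = 0.1) -> int:
    """Epsilon-greedy action selection, standard in DQN training."""
    if torch.rand(1).item() < epsilon:
        return torch.randint(0, q_net.q_head.out_features, (1,)).item()
    with torch.no_grad():
        return q_net(state).argmax(dim=1).item()


# Example: one 2-channel affordance state (graspability, pushability).
state = torch.rand(1, 2, 64, 64)
action = select_action(AffordanceDQN(), state)
```

In the framework described above, such affordance maps would be produced by the attention-guided multimodal model before being fed to the Q-network, rather than sampled randomly as in this toy example.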


Related research

03/21/2022 | Tactile Pose Estimation and Policy Learning for Unknown Object Manipulation
Object pose estimation methods allow finding locations of objects in uns...

03/31/2022 | Visual-Tactile Multimodality for Following Deformable Linear Objects Using Reinforcement Learning
Manipulation of deformable objects is a challenging task for a robot. It...

10/09/2021 | Multimodal Sensory Learning for Real-time, Adaptive Manipulation
Adaptive control for real-time manipulation requires quick estimation an...

03/02/2021 | Learning Robotic Manipulation Tasks through Visual Planning
Multi-step manipulation tasks in unstructured environments are extremely...

06/03/2021 | Probabilistic Discriminative Models Address the Tactile Perceptual Aliasing Problem
In this paper, our aim is to highlight Tactile Perceptual Aliasing as a ...

06/29/2023 | ArrayBot: Reinforcement Learning for Generalizable Distributed Manipulation through Touch
We present ArrayBot, a distributed manipulation system consisting of a 1...

09/30/2022 | Visuo-Tactile Transformers for Manipulation
Learning representations in the joint domain of vision and touch can imp...
