Clothes Grasping and Unfolding Based on RGB-D Semantic Segmentation

05/05/2023
by Xingyu Zhu, et al.

Clothes grasping and unfolding is a core step in robotic-assisted dressing. Most existing works leverage depth images of clothes to train a deep learning-based model to recognize suitable grasping points. These methods often rely on physics engines to synthesize depth images, reducing the cost of collecting real labeled data. However, the natural domain gap between synthetic and real images often degrades their performance on real data, and they tend to fail when grasping points are occluded by the clothing item itself. To address these challenges, we propose a novel Bi-directional Fractal Cross Fusion Network (BiFCNet) for semantic segmentation, enabling recognition of entire graspable regions and thereby providing more grasp candidates. Instead of relying on depth images alone, we also feed RGB images with rich color features into the network, where the Fractal Cross Fusion (FCF) module fuses RGB and depth data by considering global complex features based on fractal geometry. To further reduce the cost of real data collection, we propose a data augmentation method based on an adversarial strategy, in which color and geometric transformations are applied to RGB and depth data simultaneously while preserving label correspondence. Finally, we present a pipeline for clothes grasping and unfolding from the perspective of semantic segmentation: grasp points are selected from the segmented regions based on clothing flatness measures, while also taking the grasping direction into account. We evaluate BiFCNet on the public NYUDv2 dataset and obtain performance comparable to current state-of-the-art models. We also deploy our model on a Baxter robot, running extensive grasping and unfolding experiments as part of our ablation studies, achieving an 84% success rate.
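The abstract describes selecting grasp points from segmented regions using a clothing flatness measure. The paper's exact measure is not given here, so the following is a minimal illustrative sketch only: it uses local depth variance as a stand-in flatness criterion, and the function name `select_grasp_point`, the window size, and the variance-based scoring are all assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def select_grasp_point(depth, mask, window=11):
    """Pick the pixel inside `mask` whose neighborhood is flattest.

    Flatness is approximated here by local depth variance: a low
    variance indicates a locally flat patch of cloth, which is a
    plausible (assumed) proxy for a good grasp location.
    `depth` is an HxW float array (e.g. meters); `mask` is an HxW
    bool array marking one graspable region from the segmentation.
    """
    d = depth.astype(np.float64)
    # Per-pixel local variance via mean and mean-of-squares filters.
    mean = uniform_filter(d, size=window)
    mean_sq = uniform_filter(d * d, size=window)
    variance = np.maximum(mean_sq - mean * mean, 0.0)
    # Restrict the search to the segmented graspable region.
    variance[~mask] = np.inf
    row, col = np.unravel_index(np.argmin(variance), variance.shape)
    return row, col
```

In a full pipeline such as the one the paper outlines, a score like this would be combined with a grasping-direction constraint before the robot executes the grasp; that step is omitted here.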

