A Multi-task Learning Framework for Grasping-Position Detection and Few-Shot Classification

03/12/2020
by Yasuto Yokota, et al.

A major problem for deep learning models used by picking robots is that they require many labeled images. Because the shapes of products and parts change frequently in a factory, the operating cost of retraining such a model becomes very high, so it is important to reduce the number of labeled images needed for training. In this study, we propose a multi-task learning framework for few-shot classification that uses feature vectors taken from an intermediate layer of a model that detects grasping positions. In manufacturing, picking robots often need to perform shape classification and grasping-position detection as a multi-task problem. Prior multi-task learning studies include methods that learn one task using feature vectors from a deep neural network (DNN) trained for another task. However, a DNN trained to detect grasping positions poses two problems when feature vectors are extracted from one of its layers for shape classification: (1) because each layer of the grasping-position detection DNN is activated by every object in the input image, the features must be refined for each grasping position, and (2) a layer whose features are suitable for shape classification must be selected. To tackle these issues, we propose a method that refines the features for each grasping position and selects features from the optimal layer of the DNN. We then evaluated shape classification accuracy using these features at the grasping positions. Our results confirm that the proposed framework can classify object shapes even when the input image contains multiple objects and only a small number of training images is available.
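As a rough illustration of the idea (a minimal sketch, not the authors' implementation), the code below assumes a PyTorch-style grasp-detection backbone: it captures an intermediate feature map with a forward hook, pools it around each grasping position with ROI-align as a stand-in for the paper's per-position feature refinement, and classifies shapes with a nearest-prototype few-shot classifier. The backbone choice, layer name, box construction, and classifier are illustrative assumptions; the "optimal layer" could then be chosen by repeating the procedure over candidate layers and keeping the one with the highest validation accuracy.

```python
# Illustrative sketch only: intermediate-layer features at grasping positions,
# followed by nearest-prototype few-shot shape classification.
import torch
import torch.nn.functional as F
import torchvision
from torchvision.ops import roi_align


def make_extractor(layer_name="layer3"):
    """Return a backbone plus a dict that a hook fills with one feature map.

    The ResNet-18 backbone and layer name are assumptions, not the paper's model.
    """
    backbone = torchvision.models.resnet18(weights=None)
    captured = {}

    def hook(_module, _inputs, output):
        captured["feat"] = output

    getattr(backbone, layer_name).register_forward_hook(hook)
    return backbone, captured


def grasp_features(images, grasp_boxes, backbone, captured, out_size=3):
    """Pool the captured feature map around each grasping position.

    grasp_boxes: list (one entry per image) of (N_i, 4) tensors of
    [x1, y1, x2, y2] windows around predicted grasp points, in image coordinates.
    """
    with torch.no_grad():
        backbone(images)                        # forward pass fills captured["feat"]
    feat = captured["feat"]                     # (B, C, H', W')
    scale = feat.shape[-1] / images.shape[-1]   # map image coords to feature coords
    pooled = roi_align(feat, grasp_boxes, output_size=out_size,
                       spatial_scale=scale)     # (sum N_i, C, k, k)
    return pooled.flatten(1)                    # one refined vector per grasp position


def prototype_classify(support_x, support_y, query_x):
    """Few-shot classification: cosine similarity to per-class mean vectors."""
    support_x = F.normalize(support_x, dim=1)
    query_x = F.normalize(query_x, dim=1)
    classes = support_y.unique()
    protos = torch.stack([support_x[support_y == c].mean(0) for c in classes])
    return classes[(query_x @ protos.t()).argmax(dim=1)]
```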

Related research

03/30/2018  Jacquard: A Large Scale Dataset for Robotic Grasp Detection
Grasping skill is a major ability that a wide number of real-life applic...

03/08/2020  Online Self-Supervised Learning for Object Picking: Detecting Optimum Grasping Position using a Metric Learning Approach
Self-supervised learning methods are attractive candidates for automatic...

02/27/2018  Slip Detection with Combined Tactile and Visual Information
Slip detection plays a vital role in robotic manipulation and it has lon...

11/05/2020  Improving Robotic Grasping on Monocular Images Via Multi-Task Learning and Positional Loss
In this paper, we introduce two methods of improving real-time object gr...

03/02/2021  Spatial Attention Point Network for Deep-learning-based Robust Autonomous Robot Motion Generation
Deep learning provides a powerful framework for automated acquisition of...

01/25/2021  Lightweight Convolutional Neural Network with Gaussian-based Grasping Representation for Robotic Grasping Detection
The method of deep learning has achieved excellent results in improving ...

10/03/2017  Robotic Pick-and-Place of Novel Objects in Clutter with Multi-Affordance Grasping and Cross-Domain Image Matching
This paper presents a robotic pick-and-place system that is capable of g...
