Vision-based Robot Manipulation Learning via Human Demonstrations

03/01/2020
by Zhixin Jia, et al.

Vision-based learning methods show promise for enabling robots to learn complex manipulation tasks. However, how to generalize learned manipulation skills to real-world interactions remains an open question. In this work, we study robotic manipulation skill learning from a single third-person-view demonstration, using activity recognition and object detection from computer vision. To facilitate generalization across objects and environments, we propose using a prior knowledge base, in the form of a text corpus, to infer which object the robot should interact with in its current context. We evaluate our approach on a real-world robot with several simple and complex manipulation tasks commonly performed in daily life. The experimental results show that our approach achieves good generalization performance even from small amounts of training data.
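The abstract's text-corpus prior can be pictured as scoring candidate objects by how often they co-occur with the recognized action in the corpus. The sketch below is a minimal illustration of that idea, not the paper's actual method; the toy corpus, the detector outputs, and the co-occurrence scoring are all assumptions made for the example.

```python
from collections import Counter
import re

# Toy text corpus standing in for the prior knowledge base
# (assumption: the real corpus and scoring are more sophisticated).
corpus = [
    "pour water from the bottle into the cup",
    "pour juice from the bottle",
    "open the drawer and take out a spoon",
    "pick up the cup",
]

def cooccurrence_scores(action, candidates, corpus):
    """Score each detected object by how often it co-occurs with the
    recognized action verb within a sentence of the corpus."""
    scores = Counter({obj: 0 for obj in candidates})
    for sentence in corpus:
        tokens = set(re.findall(r"[a-z]+", sentence.lower()))
        if action in tokens:
            for obj in candidates:
                if obj in tokens:
                    scores[obj] += 1
    return scores

# Hypothetical outputs: objects from the detector, action from the
# third-person demonstration.
detected = ["cup", "drawer", "bottle"]
scores = cooccurrence_scores("pour", detected, corpus)
target = max(detected, key=lambda o: scores[o])
print(target)  # -> bottle
```

In this toy setting, "bottle" co-occurs with "pour" in two sentences and "cup" in one, so the robot would select the bottle as the object to interact with when asked to pour.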


