Self-Evaluation in One-Shot Learning from Demonstration of Contact-Intensive Tasks

04/03/2019
by   Mythra V. Balakuntala, et al.
Humans naturally "program" a fellow collaborator to perform a task by demonstrating it a few times. It is intuitive, therefore, for a human to program a collaborative robot by demonstration, and many paradigms use a single demonstration of the task. This is a form of one-shot learning, in which a single training example, combined with some context of the task, is used to infer a model of the task for subsequent execution and later refinement. This paper presents a one-shot learning from demonstration framework for contact-intensive tasks that uses only visual perception of the demonstrated task. The robot learns a policy for performing the task in terms of a priori skills, then uses self-evaluation, based on visual and tactile perception of its own skill performance, to learn the force correspondences for those skills. The self-evaluation compares outcomes against goal states detected in the demonstration with the help of task context, and the skill parameters are tuned using reinforcement learning. This approach enables the robot to learn force correspondences that cannot be inferred from a visual demonstration alone. The effectiveness of the approach is evaluated on a vegetable peeling task.
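The abstract does not include code, but the tuning loop it describes (execute a skill with a candidate force parameter, self-evaluate the outcome against the demonstrated goal state, update the parameter by reinforcement learning) can be sketched in a few lines. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation: the peeling skill, its simulated response to applied force, the reward, and the stochastic hill-climbing update are all hypothetical stand-ins for the paper's perception-driven evaluation and RL tuning.

```python
"""Minimal sketch (not the authors' code) of the self-evaluation loop the
abstract describes: a skill learned from a single visual demonstration
carries an unknown force parameter, and the robot tunes it by executing
the skill, scoring the outcome against the demonstrated goal state, and
applying a simple reinforcement-learning update. All names, the reward
model, and the update rule are illustrative assumptions."""
import random

def execute_skill(force):
    """Stand-in for running the peeling skill and returning the perceived
    outcome (e.g., fraction of peel removed, as measured by vision and
    tactile sensing). Here we fake it with a noisy quadratic response
    around an ideal force that is unknown to the learner."""
    ideal_force = 8.0  # newtons; hypothetical, unknown to the learner
    noise = random.gauss(0.0, 0.02)
    return max(0.0, 1.0 - ((force - ideal_force) / ideal_force) ** 2) + noise

def self_evaluate(outcome, goal=1.0):
    """Reward is the negative distance between the perceived outcome and
    the goal state detected in the demonstration."""
    return -abs(goal - outcome)

def tune_force(init_force=2.0, iterations=50, step=1.0, decay=0.95):
    """Stochastic hill climbing on the scalar force parameter: perturb,
    execute, self-evaluate, and keep the perturbation if reward improves."""
    force = init_force
    best_reward = self_evaluate(execute_skill(force))
    for _ in range(iterations):
        candidate = force + random.uniform(-step, step)
        reward = self_evaluate(execute_skill(candidate))
        if reward > best_reward:
            force, best_reward = candidate, reward
        step *= decay  # shrink exploration as the estimate improves
    return force, best_reward

if __name__ == "__main__":
    force, reward = tune_force()
    print(f"tuned force: {force:.2f} N, final reward: {reward:.3f}")
```

In the paper's setting, execute_skill would run the real peeling skill and the outcome would come from visual and tactile perception rather than a simulated response; the loop structure, however, mirrors the execute/evaluate/update cycle the abstract describes.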


Related research

- Extending Policy from One-Shot Learning through Coaching (05/13/2019): Humans generally teach their fellow collaborators to perform tasks throu...
- Combining Learning from Demonstration with Learning by Exploration to Facilitate Contact-Rich Tasks (03/10/2021): Collaborative robots are expected to be able to work alongside humans an...
- Few-Shot Learning of Force-Based Motions From Demonstration Through Pre-training of Haptic Representation (09/08/2023): In many contact-rich tasks, force sensing plays an essential role in ada...
- One-shot Learning for Autonomous Aerial Manipulation (06/03/2022): This paper is concerned with learning transferable contact models for ae...
- InsertionNet 2.0: Minimal Contact Multi-Step Insertion Using Multimodal Multiview Sensory Input (03/02/2022): We address the problem of devising the means for a robot to rapidly and ...
- One-Shot Learning from a Demonstration with Hierarchical Latent Language (03/09/2022): Humans have the capability, aided by the expressive compositionality of ...
- Neural Task Programming: Learning to Generalize Across Hierarchical Tasks (10/04/2017): In this work, we propose a novel robot learning framework called Neural ...
