Applying Learning-from-observation to household service robots: three common-sense formulations

04/19/2023
by Katsushi Ikeuchi, et al.

Utilizing a robot in a new application requires the robot to be programmed each time. To reduce such programming effort, we have been developing “Learning-from-observation (LfO),” which automatically generates robot programs by observing human demonstrations. One of the main issues in introducing this LfO system into the domain of household tasks is the cluttered environment, which makes it difficult to determine which elements are important for task execution when observing demonstrations. To overcome this issue, the system must share common sense with the human demonstrator. This paper identifies three relationships that LfO in the household domain should focus on when observing demonstrations and proposes representations to describe the common sense used by the demonstrator for optimal execution of task sequences. Specifically, the paper proposes to use Labanotation to describe the postures between the environment and the robot, contact-webs to describe the grasping methods between the robot and the tool, and physical and semantic constraints to describe the motions between the tool and the environment. Based on these representations, the paper formulates task models: machine-independent robot programs that indicate what to do and how to do it. Next, the paper explains the task encoder, which obtains task models from observation, and the task decoder, which executes the task models on the robot hardware. Finally, the paper demonstrates how the system works through several example scenes.
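To make the three representations concrete, the sketch below shows one hypothetical way a machine-independent task model could bundle them together: a Labanotation-style posture description (robot–environment), a contact-web grasp description (robot–tool), and a list of physical/semantic constraints (tool–environment). All class names, field names, and example values here are illustrative assumptions, not the paper's actual data structures.

```python
from dataclasses import dataclass, field
from typing import List

# All names below are hypothetical, chosen only to illustrate the three
# representations described in the abstract.

@dataclass
class LabanotationPose:
    """One body-part direction symbol over a time interval (robot-environment posture)."""
    body_part: str   # e.g. "right-arm"
    direction: str   # e.g. "forward-middle"
    start: float
    end: float

@dataclass
class ContactWeb:
    """Contact points between the hand and the tool, characterizing a grasp."""
    grasp_type: str                       # e.g. "power", "precision"
    contact_points: List[str] = field(default_factory=list)

@dataclass
class Task:
    """Machine-independent task model: what to do (skill) plus how to do it
    (posture, grasp, and tool-environment constraints)."""
    skill: str                            # e.g. "wipe"
    posture: List[LabanotationPose]       # robot-environment relationship
    grasp: ContactWeb                     # robot-tool relationship
    constraints: List[str]                # tool-environment relationship

def encode_demonstration(observations):
    """Task-encoder stub: map segmented observations to task models."""
    return [
        Task(
            skill=o["skill"],
            posture=[LabanotationPose(**p) for p in o["poses"]],
            grasp=ContactWeb(**o["grasp"]),
            constraints=o["constraints"],
        )
        for o in observations
    ]

# A made-up one-segment demonstration of a wiping task.
demo = [{
    "skill": "wipe",
    "poses": [{"body_part": "right-arm", "direction": "forward-middle",
               "start": 0.0, "end": 1.5}],
    "grasp": {"grasp_type": "power", "contact_points": ["palm", "fingers"]},
    "constraints": ["maintain-contact(table-surface)"],
}]
tasks = encode_demonstration(demo)
print(tasks[0].skill, tasks[0].grasp.grasp_type)  # wipe power
```

A task decoder, in this sketch, would walk the same `Task` list and translate each field into commands for a specific robot, which is what keeps the model itself hardware-independent.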


