Learning Articulated Motions From Visual Demonstration

02/05/2015
by Sudeep Pillai, et al.

Many functional elements of human homes and workplaces consist of rigid components connected through one or more sliding or rotating linkages. Examples include the doors and drawers of cabinets and appliances, laptops, and swivel office chairs. A robotic mobile manipulator would benefit from the ability to acquire kinematic models of such objects from observation. This paper describes a method by which a robot can acquire an object model by capturing depth imagery of the object as a human moves it through its range of motion. We envision that, in the future, a machine newly introduced to an environment could be shown by its human user the articulated objects particular to that environment, inferring from these "visual demonstrations" enough information to actuate each object independently of the user. Our method employs sparse (markerless) feature tracking, motion segmentation, component pose estimation, and articulation learning; it does not require prior object models. Using the method, a robot can observe an object being exercised, infer a kinematic model incorporating rigid, prismatic, and revolute joints, and then use the model to predict the object's motion from a novel vantage point. We evaluate the method's performance, and compare it to that of a previously published technique, for a variety of household objects.
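The abstract describes a pipeline of feature tracking, motion segmentation, component pose estimation, and articulation learning. The sketch below is a minimal illustration (not the paper's implementation) of the final step: deciding whether the linkage between two segmented components is rigid, prismatic, or revolute from their observed relative poses. The function names, thresholds, and axis-recovery strategy are illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation) of the articulation-learning
# step: given the relative poses of one segmented rigid component with respect
# to another, classify the connecting joint as rigid, prismatic, or revolute.
# Thresholds and the axis-recovery strategy are illustrative assumptions.
import numpy as np

def rotation_angle(R):
    """Rotation angle (radians) of a 3x3 rotation matrix."""
    return np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))

def classify_joint(poses, trans_tol=0.01, rot_tol=np.deg2rad(3.0)):
    """poses: list of (R, t) relative poses observed over the demonstration,
    expressed with respect to the first frame."""
    ts = np.array([t for _, t in poses])                      # (N, 3) translations
    angles = np.array([rotation_angle(R) for R, _ in poses])  # (N,) rotation angles

    trans_range = np.linalg.norm(ts.max(axis=0) - ts.min(axis=0))
    rot_range = angles.max() - angles.min()

    if trans_range < trans_tol and rot_range < rot_tol:
        return "rigid", None

    if rot_range < rot_tol:
        # Prismatic: translations should lie along a single axis; take the
        # dominant direction of the centered translations via SVD.
        centered = ts - ts.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return "prismatic", vt[0]

    # Revolute: the rotation axis is the eigenvector of R with eigenvalue 1.
    axes = []
    for R, _ in poses:
        if rotation_angle(R) < rot_tol:         # axis ill-defined for tiny rotations
            continue
        w, v = np.linalg.eig(R)
        axis = np.real(v[:, np.argmin(np.abs(w - 1.0))])
        if axes and np.dot(axis, axes[0]) < 0:  # resolve the sign ambiguity
            axis = -axis
        axes.append(axis)
    axis = np.mean(axes, axis=0)
    return "revolute", axis / np.linalg.norm(axis)
```

A full system along the lines of the paper would precede this with feature tracking, motion segmentation, and per-component pose estimation, and would fit the full joint parameters (axis location and motion limits) rather than only the joint type.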


Related research

09/29/2018 · Inferring geometric constraints in human demonstrations
This paper presents an approach for inferring geometric constraints in h...

03/17/2021 · Learning Descriptor of Constrained Task from Demonstration
Constrained objects, such as doors and drawers, are often complex and sha...

02/25/2019 · A Versatile Framework for Robust and Adaptive Door Operation with a Mobile Manipulator Robot
The ability to deal with articulated objects is very important for robot...

04/25/2022 · Sparse-Dense Motion Modelling and Tracking for Manipulation without Prior Object Models
This work presents an approach for modelling and tracking previously uns...

10/15/2021 · Learning to Infer Kinematic Hierarchies for Novel Object Instances
Manipulating an articulated object requires perceiving its kinematic hier...

02/22/2023 · Robotic Perception-motion Synergy for Novel Rope Wrapping Tasks
This paper introduces a novel and general method to address the problem ...

11/17/2015 · Learning Articulated Motion Models from Visual and Lingual Signals
In order for robots to operate effectively in homes and workplaces, they...
