Learning Extended Body Schemas from Visual Keypoints for Object Manipulation

11/08/2020
by   Sarah Bechtle, et al.

Humans show impressive generalization when manipulating objects and tools in completely novel environments. These capabilities arise, at least in part, from internal models of their bodies and of any grasped object. How to learn such body schemas for robots remains an open problem. In this work, we develop an approach that extends a robot's kinematic model when it grasps an object, using visual latent representations. Our framework comprises two components: 1) a structured keypoint detector that fuses proprioception and vision to predict visual keypoints on the object; and 2) a kinematic-chain adaptation that regresses virtual joints from the predicted keypoints. Our evaluation shows that the approach consistently predicts visual keypoints on objects and, from only a few seconds of data, adapts the kinematic chain to an object grasped in various configurations. Finally, we show that this extended kinematic chain lends itself to object manipulation tasks such as placing a grasped object.
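The abstract describes regressing virtual joints from predicted keypoints to extend the robot's kinematic chain. As a minimal illustration of that idea (not the authors' actual method, which learns from visual latent representations), the sketch below uses a planar chain: the object keypoint's fixed offset in the end-effector frame plays the role of a virtual joint, regressed by averaging over a few observed configurations. All function names and the planar setup are illustrative assumptions.

```python
import numpy as np

def fk(thetas, link_lengths):
    """Forward kinematics of a planar serial chain: returns the
    end-effector position and its orientation angle."""
    angle, pos = 0.0, np.zeros(2)
    for th, l in zip(thetas, link_lengths):
        angle += th
        pos = pos + l * np.array([np.cos(angle), np.sin(angle)])
    return pos, angle

def regress_virtual_joint(configs, keypoints_world, link_lengths):
    """Estimate the keypoint's fixed offset in the end-effector frame
    (the 'virtual joint' extending the chain) by averaging the
    back-rotated offsets over observed configurations."""
    offsets = []
    for thetas, kp in zip(configs, keypoints_world):
        pos, ang = fk(thetas, link_lengths)
        c, s = np.cos(ang), np.sin(ang)
        R_T = np.array([[c, s], [-s, c]])   # inverse rotation
        offsets.append(R_T @ (kp - pos))
    return np.mean(offsets, axis=0)

def fk_extended(thetas, link_lengths, virtual_offset):
    """Extended chain: predict the object keypoint's world position."""
    pos, ang = fk(thetas, link_lengths)
    c, s = np.cos(ang), np.sin(ang)
    R = np.array([[c, -s], [s, c]])
    return pos + R @ virtual_offset

# Usage: with noiseless observations, a few configurations suffice
# to recover the grasped object's offset exactly.
links = [1.0, 0.8]
true_offset = np.array([0.3, 0.1])
configs = [np.array([0.2, 0.5]), np.array([1.0, -0.3]), np.array([-0.4, 0.9])]
observed = [fk_extended(c, links, true_offset) for c in configs]
estimate = regress_virtual_joint(configs, observed, links)
```

With real visual keypoints the observations are noisy, so the averaging acts as a least-squares estimate of the rigid offset; the extended chain can then be used for downstream control, e.g. commanding the object keypoint rather than the end-effector to a placement target.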


