Related research:
- Learning Articulated Motion Models from Visual and Lingual Signals. In order for robots to operate effectively in homes and workplaces, they...
- Kinematically-Informed Interactive Perception: Robot-Generated 3D Models for Classification. To be useful in everyday environments, robots must be able to observe an...
- Learning Kinematic Descriptions using SPARE: Simulated and Physical ARticulated Extendable dataset. Next generation robots will need to understand intricate and articulated...
- Vision Based Adaptation to Kernelized Synergies for Human Inspired Robotic Manipulation. Humans in contrast to robots are excellent in performing fine manipulati...
- Dense Object Nets: Learning Dense Visual Object Descriptors By and For Robotic Manipulation. What is the right object representation for manipulation? We would like ...
- Visual Identification of Articulated Object Parts. As autonomous robots interact and navigate around real-world environment...
- An optimization framework for simulation and kinematic control of Constrained Collaborative Mobile Agents (CCMA) system. We present a concept of constrained collaborative mobile agents (CCMA) s...
Learning Extended Body Schemas from Visual Keypoints for Object Manipulation
Humans have impressive generalization capabilities when it comes to manipulating objects and tools in completely novel environments. These capabilities are, at least partially, a result of humans having internal models of their bodies and of any grasped object. How to learn such body schemas for robots remains an open problem. In this work, we develop an approach that extends a robot's kinematic model when grasping an object, using visual latent representations. Our framework comprises two components: 1) a structured keypoint detector, which fuses proprioception and vision to predict visual keypoints on an object; 2) a kinematic-chain adaptation, which regresses virtual joints from the predicted keypoints. Our evaluation shows that our approach learns to consistently predict visual keypoints on objects, and can adapt the kinematic chain to an object grasped in various configurations from only a few seconds of data. Finally, we show that this extended kinematic chain lends itself to object manipulation tasks such as placing a grasped object.
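As a rough illustration of the second component, the sketch below extends a kinematic chain with a single virtual joint estimated from object keypoints. This is a minimal sketch under simplifying assumptions, not the paper's method: the keypoints are taken as given (in the paper they come from the learned detector), the virtual joint is reduced to a fixed rigid transform fit with the Kabsch algorithm rather than regressed by a network, and all names (estimate_virtual_joint, extended_fk, T_ee_obj) are hypothetical.

```python
# Hedged sketch: extend a kinematic chain with a "virtual joint" estimated from
# object keypoints. Simplification of the abstract's framework: keypoints are
# assumed given, and the virtual joint is a fixed rigid transform fit with the
# Kabsch algorithm (the paper instead learns this from visual latent
# representations). All function and variable names are hypothetical.
import numpy as np

def estimate_virtual_joint(keypoints_ee: np.ndarray, keypoints_obj: np.ndarray) -> np.ndarray:
    """Fit a fixed 4x4 transform T with keypoints_ee ~= R @ keypoints_obj + t.

    keypoints_ee:  (N, 3) keypoints expressed in the end-effector frame.
    keypoints_obj: (N, 3) the same keypoints in the object's canonical frame.
    """
    mu_ee, mu_obj = keypoints_ee.mean(0), keypoints_obj.mean(0)
    A = (keypoints_ee - mu_ee).T @ (keypoints_obj - mu_obj)    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(A)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])    # guard against reflections
    R = U @ D @ Vt
    t = mu_ee - R @ mu_obj
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T  # the "virtual joint": object frame -> end-effector frame

def extended_fk(T_world_ee: np.ndarray, T_ee_obj: np.ndarray) -> np.ndarray:
    """Forward kinematics of the extended chain: world -> end-effector -> object."""
    return T_world_ee @ T_ee_obj

# Toy usage: a handful of (noisy) keypoint observations from one grasp configuration.
rng = np.random.default_rng(0)
kp_obj = rng.uniform(-0.05, 0.05, size=(6, 3))                 # canonical object keypoints (m)
true_R = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1.0]])        # unknown grasp rotation
true_t = np.array([0.0, 0.02, 0.10])                           # unknown grasp offset (m)
kp_ee = kp_obj @ true_R.T + true_t + rng.normal(0, 1e-3, size=kp_obj.shape)

T_ee_obj = estimate_virtual_joint(kp_ee, kp_obj)
T_world_ee = np.eye(4)                                          # pretend the hand is at the world origin
print("object pose in world frame:\n", extended_fk(T_world_ee, T_ee_obj).round(3))
```

Once the virtual joint is estimated, the grasped object's pose follows from the robot's ordinary forward kinematics, which is what makes placement-style tasks with the extended chain straightforward.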