KETO: Learning Keypoint Representations for Tool Manipulation

10/26/2019
by Zengyi Qin, et al.

We aim to develop an algorithm that enables robots to manipulate novel objects as tools for completing different task goals. An efficient and informative representation would improve both the effectiveness and the generalization of such algorithms. To this end, we present KETO, a framework for learning keypoint representations for tool-based manipulation. For each task, a set of task-specific keypoints is jointly predicted from 3D point clouds of the tool object by a deep neural network. These keypoints offer a concise and informative description of the object for determining grasps and subsequent manipulation actions. The model is learned from self-supervised robot interactions in the task environment, without the need for explicit human annotations. We evaluate our framework on three manipulation tasks involving tool use. Our model consistently outperforms state-of-the-art methods in task success rate. Qualitative results of keypoint prediction and tool generation are shown to visualize the learned representations.
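To make the pipeline concrete, the sketch below shows the shape of the keypoint-prediction interface the abstract describes: a model maps a 3D point cloud of a tool to a small set of task-specific keypoints. This is a hypothetical toy stand-in, not the authors' network; the random linear scorer is a placeholder for the learned deep model, and the names `predict_keypoints` and `num_keypoints` are illustrative assumptions.

```python
import numpy as np

def predict_keypoints(point_cloud, num_keypoints=3, seed=0):
    """Toy stand-in for a learned keypoint predictor.

    Scores every point with a random linear model (a placeholder for
    the deep network described in the abstract) and returns the
    top-scoring points as candidate task-specific keypoints.
    """
    rng = np.random.default_rng(seed)
    weights = rng.normal(size=3)      # placeholder for learned parameters
    scores = point_cloud @ weights    # one scalar score per point
    idx = np.argsort(scores)[-num_keypoints:]
    return point_cloud[idx]

# A synthetic 3D point cloud standing in for a scanned tool object.
cloud = np.random.default_rng(1).normal(size=(256, 3))
keypoints = predict_keypoints(cloud)
print(keypoints.shape)  # (3, 3): a few 3D keypoints selected from the cloud
```

In the actual framework, the selected keypoints would then parameterize the grasp and the subsequent manipulation action, and the predictor would be trained from self-supervised interaction outcomes rather than fixed random weights.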


