Visual Task Progress Estimation with Appearance Invariant Embeddings for Robot Control and Planning

03/16/2020
by   Guilherme Maeda, et al.

To fulfill the vision of full autonomy, robots must be capable of reasoning about the state of the world. In vision-based tasks, this means that a robot must understand the dissimilarities between its current perception of the environment and that of another state. To be of practical use, this dissimilarity must be quantifiable and computable over scenes of different viewpoints, natures (simulated vs. real), and appearances (shape, color, luminosity, etc.). Motivated by this problem, we propose an approach that uses the consistency of task progress across different examples and viewpoints to train a deep neural network that maps images into measurable features. Our method builds upon Time-Contrastive Networks (TCNs), originally proposed as a representation for continuous visuomotor skill learning, and trains the network using only discrete snapshots taken at different stages of a task, so that the network becomes sensitive to differences in task phase. We associate these embeddings with a sequence of images representing gradual task accomplishment, allowing a robot to iteratively query its motion planner with the current visual state to solve long-horizon tasks. We quantify the granularity achieved by the network in recognizing the number of objects in a scene and in measuring the volume of liquid in a cup. Our experiments leverage this granularity to make a mobile robot move a desired number of objects into a storage area and to control the amount of pouring into a cup.
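The two core ideas of the abstract can be illustrated with a minimal numpy sketch: (1) a time-contrastive triplet objective that pulls together embeddings of the same task stage seen from different viewpoints and pushes apart embeddings of different stages, and (2) progress estimation by finding the nearest reference-stage embedding to the current view. All names, dimensions, and toy vectors here are illustrative assumptions; the paper trains a deep network on real images, whereas this sketch operates directly on placeholder embedding vectors.

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    """Time-contrastive triplet loss (hinge form):
    anchor/positive are embeddings of the SAME task stage from
    different viewpoints; negative is a DIFFERENT stage."""
    d_pos = np.linalg.norm(anchor - positive, axis=-1)
    d_neg = np.linalg.norm(anchor - negative, axis=-1)
    return float(np.maximum(0.0, d_pos - d_neg + margin).mean())

def estimate_progress(current, stage_embeddings):
    """Return the index of the reference stage whose embedding is
    nearest to the current view's embedding (task-progress query)."""
    dists = np.linalg.norm(stage_embeddings - current, axis=-1)
    return int(np.argmin(dists))

# Toy 32-d embeddings standing in for network outputs.
rng = np.random.default_rng(0)
stage_t_view1 = rng.normal(size=(4, 32))
stage_t_view2 = stage_t_view1 + 0.05 * rng.normal(size=(4, 32))  # same stage, other view
other_stage = rng.normal(size=(4, 32))                            # different stage

loss = triplet_margin_loss(stage_t_view1, stage_t_view2, other_stage)

# Progress query: reference embeddings for 5 stages of gradual
# task accomplishment; the current view sits near stage 3.
reference = rng.normal(size=(5, 32))
current = reference[3] + 0.01 * rng.normal(size=32)
stage = estimate_progress(current, reference)
```

In a planning loop, `stage` would index into the image sequence of gradual task accomplishment, letting the robot repeatedly re-query its planner until the final stage is reached.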


