Vision-based deep execution monitoring

09/29/2017
by Francesco Puja, et al.

Execution monitoring of high-level robot actions can be effectively improved by visually monitoring the state of the world, in terms of the preconditions and postconditions that hold before and after the execution of an action. Furthermore, a policy for deciding where to look, either to verify the relations specifying the pre- and postconditions or to refocus after a failure, can tremendously improve robot execution in an uncharted environment. Thanks to the remarkable results of deep learning, it is now possible to rely strongly on visual perception and to assume that the environment is observable. In this work we present visual execution monitoring for a robot executing tasks in an uncharted lab environment. The execution monitor interacts with the environment via a visual stream that uses two DCNNs to recognize the objects the robot has to deal with and manipulate, and a non-parametric Bayes estimator to discover relations from the DCNN features. To recover from lack of focus and from failures due to missed objects, we resort to visual search policies learned via deep reinforcement learning.
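The monitoring loop described above can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the `detect_relations` stub stands in for the visual stream (the two DCNNs plus non-parametric Bayes relation estimation), and here it simply reads a symbolic world state; the `Action`, `monitor_execution`, and `do_pick` names are invented for this example.

```python
from dataclasses import dataclass

@dataclass
class Action:
    # A high-level action with the relations that must hold
    # before (preconditions) and after (postconditions) execution.
    name: str
    preconditions: set
    postconditions: set

def detect_relations(world_state):
    # Placeholder for perception: in the paper the relations would be
    # estimated from DCNN features; here we return the symbolic state.
    return set(world_state)

def monitor_execution(action, world_state, execute):
    """Check preconditions, execute the action, then verify postconditions."""
    observed = detect_relations(world_state)
    if not action.preconditions <= observed:
        return "precondition_failure"   # would trigger visual search / refocus
    execute(world_state)
    observed = detect_relations(world_state)
    if not action.postconditions <= observed:
        return "postcondition_failure"  # would trigger a recovery policy
    return "success"

# Example: picking a cup requires it to be on the table and the gripper free.
pick = Action("pick(cup)",
              {"on(cup, table)", "free(gripper)"},
              {"holding(cup)"})

def do_pick(state):
    # Toy effect model of the pick action on the symbolic state.
    state.discard("on(cup, table)")
    state.discard("free(gripper)")
    state.add("holding(cup)")

state = {"on(cup, table)", "free(gripper)"}
result = monitor_execution(pick, state, do_pick)
# result == "success"
```

If a precondition relation is not observed (e.g. the cup is missing from the scene), the monitor reports a failure instead of executing, which is the point where the paper's deep-reinforcement-learned visual search policy would take over to refocus the camera.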


Related research

02/07/2019  Visual search and recognition for robot task execution and monitoring
Visual search of relevant targets in the environment is a crucial robot ...

02/07/2019  Deep execution monitor for robot assistive tasks
We consider a novel approach to high-level robot task execution for a ro...

07/20/2021  Ontology-Assisted Generalisation of Robot Action Execution Knowledge
When an autonomous robot learns how to execute actions, it is of interes...

03/11/2019  Building an Affordances Map with Interactive Perception
Robots need to understand their environment to perform their task. If it...

05/20/2021  Robot Action Diagnosis and Experience Correction by Falsifying Parameterised Execution Models
When faced with an execution failure, an intelligent robot should be abl...

03/16/2020  Visual Task Progress Estimation with Appearance Invariant Embeddings for Robot Control and Planning
To fulfill the vision of full autonomy, robots must be capable of reason...

12/16/2021  Intermittent Deployment for Large-Scale Multi-Robot Forage Perception: Data Synthesis, Prediction, and Planning
Monitoring the health and vigor of grasslands is vital for informing man...
