Language-guided Adaptive Perception with Hierarchical Symbolic Representations for Mobile Manipulators

09/21/2019
by   Ethan Fahnestock, et al.

Language is an effective medium for bi-directional communication in human-robot teams. To infer the meaning of many instructions, robots need to construct a model of their surroundings that describes the spatial, semantic, and metric properties of objects, built from observations and prior information about the environment. Recent algorithms condition the expression of object detectors in a robot's perception pipeline on language, generating the minimal representation of the environment necessary to efficiently determine the meaning of the instruction. We expand on this work by introducing the ability to express hierarchies between detectors. This supports the development of environment models suitable for more sophisticated tasks that may require modeling the kinematics, dynamics, and/or affordances between objects. To achieve this, a novel extension of symbolic representations for language-guided adaptive perception is proposed that reasons over single-layer object detector hierarchies. Differences in perception performance and environment representations between adaptive perception and a suitable exhaustive baseline are explored through physical experiments on a mobile manipulator.
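The core mechanism the abstract describes, conditioning which detectors run on the instruction and closing over a single-layer detector hierarchy, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the detector names, the keyword-based grounding, and the `select_detectors` helper are all assumptions for the sake of the example.

```python
# Hypothetical sketch of language-conditioned detector selection with a
# single-layer detector hierarchy. All names and the keyword grounding
# are illustrative assumptions, not the paper's actual pipeline.

# Each parent detector has child detectors that depend on it
# (e.g. detecting a handle presupposes detecting the door it is on).
DETECTOR_HIERARCHY = {
    "door": ["handle"],
    "table": ["mug", "drawer"],
    "ball": [],
}

# Naive grounding: map instruction keywords to detector names.
KEYWORD_TO_DETECTOR = {
    "handle": "handle", "door": "door", "mug": "mug",
    "drawer": "drawer", "table": "table", "ball": "ball",
}

# Invert the hierarchy: child detector -> parent detector it requires.
PARENT_OF = {c: p for p, cs in DETECTOR_HIERARCHY.items() for c in cs}

def select_detectors(instruction):
    """Return the minimal set of detectors needed for the instruction,
    including any parent detector a referenced child depends on."""
    active = set()
    for word in instruction.lower().split():
        det = KEYWORD_TO_DETECTOR.get(word.strip(".,"))
        if det is None:
            continue  # word does not ground to any detector
        active.add(det)
        if det in PARENT_OF:  # a child detector implies its parent
            active.add(PARENT_OF[det])
    return active

# "grab the handle" activates both the handle and door detectors,
# while detectors irrelevant to the instruction stay disabled.
```

The point of the sketch is the contrast with the exhaustive baseline: rather than running every detector on every frame, only the instruction-relevant subset (plus its hierarchical dependencies) is expressed in the perception pipeline.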


