Language-guided Semantic Mapping and Mobile Manipulation in Partially Observable Environments

10/22/2019
by Siddharth Patki, et al.

Recent advances in data-driven models for grounded language understanding have enabled robots to interpret increasingly complex instructions. Two fundamental limitations of these methods are that most require a full model of the environment to be known a priori, and they attempt to reason over a world representation that is flat and unnecessarily detailed, which limits scalability. Recent semantic mapping methods address partial observability by exploiting language as a sensor to infer a distribution over topological, metric and semantic properties of the environment. However, maintaining a distribution over highly detailed maps that can support grounding of diverse instructions is computationally expensive and hinders real-time human-robot collaboration. We propose a novel framework that learns to adapt perception according to the task in order to maintain compact distributions over semantic maps. Experiments with a mobile manipulator demonstrate more efficient instruction following in a priori unknown environments.
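To make the idea of "compact distributions over semantic maps" concrete, here is a minimal illustrative sketch (not the authors' implementation): a particle-filter-style belief over discrete semantic maps in which perception is restricted to object classes relevant to the current instruction. All names (`TASK_RELEVANT`, `update`, the confidence-weighted likelihood) are assumptions made for illustration.

```python
# Hypothetical sketch of task-adaptive semantic mapping, not the paper's code.
# Each particle is a (map, weight) pair; a map is {cell: object_class}.

TASK_RELEVANT = {"fetch the mug": {"mug", "table"}}  # assumed task->classes lookup

def make_particles(n):
    # Start with n uniform-weight hypotheses over an empty map.
    return [({}, 1.0 / n) for _ in range(n)]

def update(particles, task, detections):
    """Fold detections (cell, object_class, confidence) into each map
    hypothesis, skipping classes irrelevant to the task, then renormalize."""
    relevant = TASK_RELEVANT.get(task, set())
    updated = []
    for world, weight in particles:
        new_world = dict(world)
        likelihood = 1.0
        for cell, obj, conf in detections:
            if obj not in relevant:
                continue  # task-adaptive perception: ignore irrelevant classes
            new_world[cell] = obj
            likelihood *= conf
        updated.append((new_world, weight * likelihood))
    total = sum(w for _, w in updated) or 1.0
    return [(world, w / total) for world, w in updated]

particles = make_particles(4)
particles = update(particles, "fetch the mug",
                   [((2, 3), "mug", 0.9), ((0, 1), "plant", 0.8)])
# The irrelevant "plant" detection is never written into any hypothesis,
# so each map stays small and the belief stays cheap to maintain.
```

The design choice this illustrates is the one the abstract argues for: by conditioning perception on the task, each map hypothesis carries only the detail needed for grounding the instruction, which keeps the distribution tractable in a priori unknown environments.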


Related research

03/21/2019 · Inferring Compact Representations for Efficient Natural Language Understanding of Robot Instructions
The speed and accuracy with which robots are able to interpret natural l...

05/21/2021 · Language Understanding for Field and Service Robots in a Priori Unknown Environments
Contemporary approaches to perception, planning, estimation, and control...

09/21/2019 · Language-guided Adaptive Perception with Hierarchical Symbolic Representations for Mobile Manipulators
Language is an effective medium for bi-directional communication in huma...

05/17/2021 · RoSmEEry: Robotic Simulated Environment for Evaluation and Benchmarking of Semantic Mapping Algorithms
Human-robot interaction requires a common understanding of the operation...

07/12/2023 · GVCCI: Lifelong Learning of Visual Grounding for Language-Guided Robotic Manipulation
Language-Guided Robotic Manipulation (LGRM) is a challenging task as it ...

09/17/2023 · Optimal Scene Graph Planning with Large Language Model Guidance
Recent advances in metric, semantic, and topological mapping have equipp...

12/04/2020 · Spatial Language Understanding for Object Search in Partially Observed Cityscale Environments
We present a system that enables robots to interpret spatial language as...
