SEAL: Semantic Frame Execution And Localization for Perceiving Afforded Robot Actions

03/24/2023
by   Cameron Kisailus, et al.

Recent advances in robotic mobile manipulation have spurred the expansion of the operating environment for robots from constrained workspaces to large-scale, human environments. In order to effectively complete tasks in these spaces, robots must be able to perceive, reason, and execute over a diversity of affordances, well beyond simple pick-and-place. We posit that the notion of semantic frames provides a compelling representation for robot actions that is amenable to action-focused perception, task-level reasoning, action-level execution, and integration with language. Semantic frames, a product of the linguistics community, define the necessary elements, pre- and post-conditions, and a set of sequential robot actions necessary to successfully execute an action evoked by a verb phrase. In this work, we extend the semantic frame representation for robot manipulation actions and introduce the problem of Semantic Frame Execution And Localization for Perceiving Afforded Robot Actions (SEAL) as a graphical model. For the SEAL problem, we describe our nonparametric Semantic Frame Mapping (SeFM) algorithm for maintaining belief over a finite set of semantic frames as the locations of actions afforded to the robot. We show that language models such as GPT-3 are insufficient to address the generalized task execution covered by the SEAL formulation, and that SeFM provides robots with the efficient search strategies and long-term memory needed when operating in building-scale environments.
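To make the representation concrete, the following is a minimal illustrative sketch (not the authors' implementation) of a semantic frame as a data structure, paired with a simple discrete Bayesian belief update over candidate locations where the frame's action is afforded. The `SemanticFrame` fields mirror the abstract's description (frame elements, pre- and post-conditions, a sequence of robot actions evoked by a verb phrase); all names, the example "pour" frame, and the room labels are hypothetical, and SeFM itself is nonparametric, so this discrete update serves only as intuition.

```python
from dataclasses import dataclass

@dataclass
class SemanticFrame:
    # Verb phrase that evokes the frame, e.g. "pour".
    verb: str
    # Frame elements: roles that must be grounded before execution.
    elements: list
    # Conditions that must hold before / after execution.
    preconditions: list
    postconditions: list
    # Sequential robot actions realizing the frame.
    actions: list

def update_belief(belief, likelihoods):
    """One Bayesian update of a discrete belief over candidate
    locations, given per-location observation likelihoods."""
    posterior = {loc: belief[loc] * likelihoods.get(loc, 1e-9)
                 for loc in belief}
    z = sum(posterior.values())  # normalizing constant
    return {loc: p / z for loc, p in posterior.items()}

# Hypothetical "pour" frame and a uniform belief over three rooms.
pour = SemanticFrame(
    verb="pour",
    elements=["agent", "source_container", "goal_container"],
    preconditions=["holding(source_container)", "source_not_empty"],
    postconditions=["filled(goal_container)"],
    actions=["approach", "align", "tilt", "return_upright"],
)
belief = {"kitchen": 1/3, "lab": 1/3, "office": 1/3}
# Perception suggests pourable containers are most likely in the kitchen.
belief = update_belief(belief, {"kitchen": 0.8, "lab": 0.1, "office": 0.1})
```

After the update, the belief concentrates on the kitchen, illustrating how maintaining belief over where a frame is afforded yields an efficient search strategy rather than exhaustive exploration.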


Related research

Amortized Object and Scene Perception for Long-term Robot Manipulation (03/28/2019)
Mobile robots, performing long-term manipulation activities in human env...

A Persistent Spatial Semantic Representation for High-level Natural Language Instruction Execution (07/12/2021)
Natural language provides an accessible and expressive interface to spec...

Manipulation-Oriented Object Perception in Clutter through Affordance Coordinate Frames (10/16/2020)
In order to enable robust operation in unstructured environments, robots...

Deep Episodic Memory: Encoding, Recalling, and Predicting Episodic Experiences for Robot Action Execution (01/12/2018)
We present a novel deep neural network architecture for representing rob...

Learning the Semantics of Manipulation Action (12/04/2015)
In this paper we present a formal computational framework for modeling m...

Ontology-Assisted Generalisation of Robot Action Execution Knowledge (07/20/2021)
When an autonomous robot learns how to execute actions, it is of interes...

The Case for Durative Actions: A Commentary on PDDL2.1 (09/26/2011)
The addition of durative actions to PDDL2.1 sparked some controversy. Fo...
