GoalNet: Inferring Conjunctive Goal Predicates from Human Plan Demonstrations for Robot Instruction Following

05/14/2022
by Shreya Sharma, et al.

Our goal is to enable a robot to learn how to sequence its actions to perform tasks specified as natural language instructions, given successful demonstrations from a human partner. The ability to plan high-level tasks can be factored as (i) inferring specific goal predicates that characterize the task implied by a language instruction for a given world state and (ii) synthesizing a feasible goal-reaching action sequence with such predicates. For the former, we leverage a neural network prediction model, while utilizing a symbolic planner for the latter. We introduce a novel neuro-symbolic model, GoalNet, for contextual and task-dependent inference of goal predicates from human demonstrations and linguistic task descriptions. GoalNet combines (i) learning, where dense representations are acquired for the language instruction and the world state, enabling generalization to novel settings, and (ii) planning, where the cause-effect modeling by the symbolic planner eschews irrelevant predicates, facilitating multi-stage decision making in large domains. GoalNet demonstrates a significant improvement (51%) in task completion rate in comparison to a state-of-the-art rule-based approach on a benchmark data set displaying linguistic variations, particularly for multi-stage instructions.
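As a rough illustration of the two-stage pipeline described above, the sketch below pairs a stand-in goal-predicate predictor with a tiny breadth-first symbolic planner over STRIPS-style actions. This is a minimal sketch under assumed, simplified semantics: the names infer_goal_predicates, plan, and Action, as well as the keyword-based goal rule, are hypothetical placeholders (in GoalNet the predictor is a learned neural model over dense instruction and state encodings), not the authors' actual interface or planner.

```python
from dataclasses import dataclass
from typing import FrozenSet, List, Tuple

Predicate = Tuple[str, ...]      # e.g. ("in", "cup", "microwave")
State = FrozenSet[Predicate]


@dataclass(frozen=True)
class Action:
    """STRIPS-style action: preconditions plus add/delete effects."""
    name: str
    preconditions: FrozenSet[Predicate]
    add_effects: FrozenSet[Predicate]
    del_effects: FrozenSet[Predicate]

    def applicable(self, state: State) -> bool:
        return self.preconditions <= state

    def apply(self, state: State) -> State:
        return frozenset((state - self.del_effects) | self.add_effects)


def infer_goal_predicates(instruction: str, state: State) -> FrozenSet[Predicate]:
    """Stand-in for the learned goal inference step. GoalNet uses a neural
    model over the instruction and world state; a toy keyword rule is used
    here purely so the sketch runs end to end."""
    if "microwave" in instruction.lower():
        return frozenset({("in", "cup", "microwave")})
    return frozenset()


def plan(state: State, goal: FrozenSet[Predicate],
         actions: List[Action], max_depth: int = 6) -> List[str]:
    """Tiny breadth-first symbolic planner: find an action sequence whose
    resulting state satisfies every predicted goal predicate."""
    frontier: List[Tuple[State, List[str]]] = [(state, [])]
    seen = {state}
    for _ in range(max_depth + 1):
        next_frontier = []
        for s, path in frontier:
            if goal <= s:
                return path
            for a in actions:
                if a.applicable(s):
                    ns = a.apply(s)
                    if ns not in seen:
                        seen.add(ns)
                        next_frontier.append((ns, path + [a.name]))
        frontier = next_frontier
    return []      # no plan found within the depth bound


if __name__ == "__main__":
    world = frozenset({("on", "cup", "table"), ("closed", "microwave")})
    actions = [
        Action("open_microwave",
               frozenset({("closed", "microwave")}),
               frozenset({("open", "microwave")}),
               frozenset({("closed", "microwave")})),
        Action("place_cup_in_microwave",
               frozenset({("on", "cup", "table"), ("open", "microwave")}),
               frozenset({("in", "cup", "microwave")}),
               frozenset({("on", "cup", "table")})),
    ]
    goal = infer_goal_predicates("put the cup in the microwave", world)
    print(plan(world, goal, actions))
    # -> ['open_microwave', 'place_cup_in_microwave']
```

The design point mirrored here is the division of labor described in the abstract: the learned component only has to name the goal predicates, while the planner's cause-effect model is responsible for sequencing actions toward them and ignoring irrelevant predicates.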

Related research

SGL: Symbolic Goal Learning in a Hybrid, Modular Framework for Human Instruction Following (02/25/2022)
This paper investigates robot manipulation based on human instruction wi...

The Neuro-Symbolic Inverse Planning Engine (NIPE): Modeling Probabilistic Social Inferences from Linguistic Inputs (06/25/2023)
Human beings are social creatures. We routinely reason about other agent...

Improving the Robustness to Variations of Objects and Instructions with a Neuro-Symbolic Approach for Interactive Instruction Following (10/13/2021)
An interactive instruction following task has been proposed as a benchma...

Skill Induction and Planning with Latent Language (10/04/2021)
We present a framework for learning hierarchical policies from demonstra...

Work in Progress – Automated Generation of Robotic Planning Domains from Observations (07/09/2021)
In this paper, we report the results of our latest work on the automated...

Imagination-Augmented Deep Learning for Goal Recognition (03/20/2020)
Being able to infer the goal of people we observe, interact with, or rea...

PlaTe: Visually-Grounded Planning with Transformers in Procedural Tasks (09/10/2021)
In this work, we study the problem of how to leverage instructional vide...
