Learning a natural-language to LTL executable semantic parser for grounded robotics

08/07/2020
by Christopher Wang et al.

Children acquire their native language with apparent ease by observing how language is used in context and attempting to use it themselves. They do so without laborious annotations, negative examples, or even direct corrections. We take a step toward robots that can do the same by training a grounded semantic parser, which discovers latent linguistic representations that can be used for the execution of natural-language commands. In particular, we focus on the difficult domain of commands with a temporal aspect, whose semantics we capture with Linear Temporal Logic (LTL). Our parser is trained with pairs of sentences and executions, as well as an executor. At training time, the parser hypothesizes a meaning representation for the input as a formula in LTL. Three competing pressures allow the parser to discover meaning from language. First, any hypothesized meaning for a sentence must be permissive enough to admit all of the annotated execution trajectories. Second, the executor, a pretrained end-to-end LTL planner, must find the observed trajectories to be likely executions of the meaning. Finally, a generator, which reconstructs the original input, encourages the model to find representations that conserve knowledge about the command. Together these ensure that the meaning is neither too general nor too specific. Our model generalizes well: it parses and executes both machine-generated and human-generated commands with near-equal accuracy, even though the human-generated sentences are far more varied and complex, with an open lexicon. The approach presented here is not specific to LTL; it can be applied to any domain where sentence meanings can be hypothesized and an executor can verify them, opening the door to many applications for robotic agents.
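To make the temporal semantics concrete, the Python sketch below shows how a command like "avoid the lava until you reach the goal" can be encoded as an LTL formula and checked against a finite execution trace. This illustrates standard LTL semantics over finite traces, not the paper's parser or planner; the tuple encoding, the example command, and the trace are invented for illustration.

def holds(formula, trace, i=0):
    """Evaluate an LTL formula at position i of a finite trace.

    A trace is a list of sets naming the atomic propositions true at each
    step. Formulas are nested tuples: ("ap", p), ("not", f), ("and", f, g),
    ("next", f), ("eventually", f), ("always", f), ("until", f, g).
    """
    op = formula[0]
    if op == "ap":
        return formula[1] in trace[i]
    if op == "not":
        return not holds(formula[1], trace, i)
    if op == "and":
        return holds(formula[1], trace, i) and holds(formula[2], trace, i)
    if op == "next":
        return i + 1 < len(trace) and holds(formula[1], trace, i + 1)
    if op == "eventually":  # F f: f holds at some step j >= i
        return any(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "always":      # G f: f holds at every step j >= i
        return all(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "until":       # f U g: g eventually holds, and f holds until then
        return any(holds(formula[2], trace, j)
                   and all(holds(formula[1], trace, k) for k in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError("unknown operator: " + op)

# "Avoid the lava until you reach the goal" as an LTL formula.
command = ("until", ("not", ("ap", "lava")), ("ap", "goal"))

# One annotated execution: the propositions observed at each time step.
trace = [set(), set(), {"goal"}]
print(holds(command, trace))  # True: no lava is touched before the goal

Under the first pressure above, a hypothesized formula would have to pass a check like holds(formula, trace) for every annotated trajectory; the executor and generator pressures then keep the parser from proposing formulas that are trivially permissive.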

