Interactive Learning of State Representation through Natural Language Instruction and Explanation

10/07/2017
by   Qiaozi Gao, et al.

One significant simplification in most previous work on robot learning is the closed-world assumption, under which the robot is assumed to know ahead of time a complete set of predicates describing the state of the physical world. However, robots are unlikely to have a complete model of the world, especially when learning a new task. To address this problem, this extended abstract gives a brief introduction to our ongoing work, which aims to enable the robot to acquire new state representations through language communication with humans.
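To make the idea of an extensible, predicate-based state representation concrete, below is a minimal, hypothetical Python sketch; it is not the authors' implementation. It shows a robot that starts with a closed set of predicates and registers a new one ("empty") after a human explanation refers to a concept it does not yet model. The predicate names, the feature-dict encoding of objects, and the grounding of "empty" are all illustrative assumptions.

```python
# Hypothetical sketch (not the authors' system): a robot whose world state is
# described by a fixed, closed set of predicates, extended at run time with a
# new predicate grounded from a human's natural-language explanation.

from dataclasses import dataclass, field
from typing import Callable, Dict

# A predicate maps a perceived object (here a simple feature dict) to True/False.
Predicate = Callable[[dict], bool]


@dataclass
class StateRepresentation:
    # Closed-world assumption: only these predicates exist at the start.
    predicates: Dict[str, Predicate] = field(default_factory=dict)

    def describe(self, obj: dict) -> Dict[str, bool]:
        """Evaluate every known predicate on an object."""
        return {name: pred(obj) for name, pred in self.predicates.items()}

    def add_predicate(self, name: str, pred: Predicate) -> None:
        """Open the world: register a new predicate learned from dialogue."""
        self.predicates[name] = pred


# Initial (incomplete) state representation.
state = StateRepresentation(predicates={
    "graspable": lambda o: o.get("width_cm", 100) < 8,
    "on_table":  lambda o: o.get("support") == "table",
})

cup = {"width_cm": 7, "support": "table", "contents_ml": 150}
print(state.describe(cup))   # {'graspable': True, 'on_table': True}

# Human explanation: "You cannot pour into it because it is not empty."
# The robot grounds the new concept 'empty' as a predicate over its percepts.
state.add_predicate("empty", lambda o: o.get("contents_ml", 0) == 0)
print(state.describe(cup))   # now also reports 'empty': False
```

In this sketch the new predicate is hand-coded; in the setting the abstract describes, the mapping from the explained concept to the robot's percepts would itself be learned through interaction.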


