Reward Learning from Narrated Demonstrations

04/27/2018
by Hsiao-Yu Fish Tung, et al.

Humans effortlessly "program" one another by communicating goals and desires in natural language. In contrast, humans program robotic behaviours by indicating desired object locations and poses to be achieved, by providing RGB images of goal configurations, or by supplying a demonstration to be imitated. None of these methods generalize across environment variations, and they convey the goal in awkward technical terms. This work proposes joint learning of natural language grounding and instructable behavioural policies, reinforced by perceptual detectors of natural language expressions grounded in the sensory inputs of the robotic agent. Our supervision is narrated visual demonstrations (NVD): visual demonstrations paired with verbal narration (as opposed to being silent). We introduce a dataset of NVD in which teachers perform activities while describing them in detail. We map the teachers' descriptions to perceptual reward detectors and use them to train corresponding behavioural policies in simulation. We empirically show that our instructable agents (i) learn visual reward detectors from a small number of examples by exploiting hard negatives mined from demonstration dynamics, (ii) develop pick-and-place policies using the learned visual reward detectors, (iii) benefit from object-factorized state representations that mimic the syntactic structure of natural language goal expressions, and (iv) can execute behaviours that involve novel objects in novel locations at test time, instructed by natural language.
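To make the reward-detector idea concrete, below is a minimal sketch (not the authors' code) of how one narrated goal expression could be turned into a visual reward detector: late frames of a demonstration serve as positives and earlier frames from the same demonstration serve as hard negatives (same scene and objects, goal not yet achieved). All class names, shapes, and hyperparameters are illustrative assumptions; the detector's output on the current observation can then be used as the reward when training the corresponding policy.

```python
# Sketch only: a visual reward detector for one narrated goal, trained with
# hard negatives mined from demonstration dynamics. Names and sizes are assumed.
import torch
import torch.nn as nn

class RewardDetector(nn.Module):
    """Scores how well an image satisfies a narrated goal (e.g. "apple inside bowl")."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, images):  # images: (B, 3, H, W)
        return self.head(self.encoder(images)).squeeze(-1)  # goal logits

def mine_demo_frames(demo, goal_frac=0.9):
    """Split a demonstration video (T, 3, H, W) into goal frames (positives)
    and pre-goal frames (hard negatives mined from the demonstration's dynamics)."""
    t_goal = int(goal_frac * demo.shape[0])
    return demo[t_goal:], demo[:t_goal]

detector = RewardDetector()
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Stand-in tensor for a narrated demonstration clip; real NVD frames would go here.
demo = torch.rand(40, 3, 64, 64)
positives, negatives = mine_demo_frames(demo)

for _ in range(20):
    images = torch.cat([positives, negatives])
    labels = torch.cat([torch.ones(len(positives)), torch.zeros(len(negatives))])
    loss = loss_fn(detector(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At policy-training time, the detector's probability on the current observation
# can serve as the reward for the instructed behaviour.
with torch.no_grad():
    reward = torch.sigmoid(detector(torch.rand(1, 3, 64, 64))).item()
```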


