
Learning to Follow Language Instructions with Adversarial Reward Induction

by Dzmitry Bahdanau, et al.

Recent work has shown that deep reinforcement-learning agents can learn to follow language-like instructions from infrequent environment rewards. However, for many real-world natural language commands that involve a degree of underspecification or ambiguity, such as "tidy the room", it would be challenging or impossible to program an appropriate reward function. To overcome this, we present a method for learning to follow commands from a training set of instructions and corresponding example goal-states, rather than an explicit reward function. Importantly, the example goal-states are not seen at test time. The approach effectively separates the representation of what instructions require from how they can be executed. In a simple grid world, the method enables an agent to learn a range of commands requiring interaction with blocks and understanding of spatial relations and underspecified abstract arrangements. We further show the method allows our agent to adapt to changes in the environment without requiring new training examples.
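The core idea of learning rewards from (instruction, goal-state) examples can be sketched as a discriminator over instruction-state pairs: it is trained to distinguish example goal states from states the agent actually visits, and its judgement then serves as the reward signal for the policy. The sketch below is a minimal, illustrative assumption of that setup, not the paper's implementation; the featurization, the logistic discriminator, and all names (`featurize`, `RewardModel`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def featurize(state, instruction):
    # Toy joint features of an (instruction, state) pair: raw vectors plus
    # an elementwise interaction term. Purely illustrative.
    return np.concatenate([state, instruction, state * instruction])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class RewardModel:
    """Logistic discriminator D(instruction, state): probability the state
    satisfies the instruction. Stands in for the learned reward model."""

    def __init__(self, dim, lr=0.5):
        self.w = np.zeros(dim)
        self.lr = lr

    def prob(self, x):
        return sigmoid(self.w @ x)

    def update(self, positives, negatives):
        # One averaged gradient step of logistic regression:
        # positives = example goal states from the training set,
        # negatives = states visited by the current agent.
        grad = np.zeros_like(self.w)
        for x in positives:
            grad += (1.0 - self.prob(x)) * x
        for x in negatives:
            grad -= self.prob(x) * x
        self.w += self.lr * grad / (len(positives) + len(negatives))

    def reward(self, state, instruction, threshold=0.5):
        # The policy is rewarded when the discriminator judges the
        # current state to satisfy the instruction.
        return 1.0 if self.prob(featurize(state, instruction)) > threshold else 0.0

# Toy data: one fixed instruction; goal states cluster near (1, 1),
# agent-visited states cluster near (-1, -1).
instr = np.array([1.0, 0.0])
goal_states = [np.array([1.0, 1.0]) + 0.1 * rng.standard_normal(2) for _ in range(20)]
agent_states = [np.array([-1.0, -1.0]) + 0.1 * rng.standard_normal(2) for _ in range(20)]

model = RewardModel(dim=6)
for _ in range(200):
    model.update([featurize(s, instr) for s in goal_states],
                 [featurize(s, instr) for s in agent_states])

print(model.reward(np.array([1.0, 1.0]), instr))    # goal-like state
print(model.reward(np.array([-1.0, -1.0]), instr))  # non-goal state
```

Because the reward model is trained only on instructions and example goal states, it encodes *what* an instruction requires, while the separately trained policy learns *how* to reach rewarding states; the goal examples themselves are never needed at test time, matching the separation the abstract describes.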



