How to talk so your robot will learn: Instructions, descriptions, and pragmatics

06/16/2022
by Theodore R. Sumers, et al.

From the earliest years of our lives, humans use language to express our beliefs and desires. Being able to talk to artificial agents about our preferences would thus fulfill a central goal of value alignment. Yet today, we lack computational models explaining such flexible and abstract language use. To address this challenge, we consider social learning in a linear bandit setting and ask how a human might communicate preferences over behaviors (i.e., the reward function). We study two distinct types of language: instructions, which provide information about the desired policy, and descriptions, which provide information about the reward function. To explain how humans use these forms of language, we suggest that speakers reason about both known present and unknown future states: instructions optimize for the present, while descriptions generalize to the future. We formalize this choice by extending reward design to consider a distribution over states. We then define a pragmatic listener agent that infers the speaker's reward function by reasoning about how the speaker expresses themselves. We validate our models with a behavioral experiment, demonstrating that (1) our speaker model predicts spontaneous human behavior, and (2) our pragmatic listener recovers participants' reward functions. Finally, we show that in traditional reinforcement learning settings, pragmatic social learning can integrate with and accelerate individual learning. Our findings suggest that social learning from a wider range of language – in particular, expanding the field's present focus on instructions to include learning from descriptions – is a promising approach for value alignment and reinforcement learning more broadly.
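To make the pragmatic-inference idea concrete, here is a minimal sketch (not the authors' implementation) of a softmax-rational speaker and a Bayesian pragmatic listener in a toy linear bandit. The arm features, the discretized weight grid, the rationality parameter beta, and the fallback rule when an instructed arm is absent are all illustrative assumptions.

```python
# A minimal sketch of pragmatic reward inference in a linear bandit, in the
# spirit of the paper (not the authors' code). Arm features, the weight grid,
# the softmax speaker, and the fallback rule are illustrative assumptions.
import itertools
import numpy as np

rng = np.random.default_rng(0)

D = 2                                  # number of reward-relevant features
ARMS = np.array([[1.0, 0.0],           # feature vectors phi(a) for each arm
                 [0.0, 1.0],
                 [0.7, 0.7]])

# Candidate reward weights w on a coarse grid; rewards are r(a) = w . phi(a).
W_GRID = np.array(list(itertools.product([-1.0, 0.0, 1.0], repeat=D)))

# Utterances: instructions name an arm; descriptions give a feature's sign.
UTTERANCES = ([("instruct", a) for a in range(len(ARMS))] +
              [("describe", i, s) for i in range(D) for s in (-1.0, 1.0)])

def listener_value(utterance, w, states):
    """Expected reward a literal listener earns by following the utterance,
    averaged over a distribution of present and future arm sets."""
    total = 0.0
    for arms in states:
        if utterance[0] == "instruct":
            named = ARMS[utterance[1]]
            match = [a for a in arms if np.allclose(a, named)]
            # If the named arm is absent (a future state), the listener falls
            # back to a uniformly random arm: instructions do not generalize.
            val = (match[0] @ w) if match else (arms.mean(axis=0) @ w)
        else:
            _, i, s = utterance
            # Descriptions generalize: pick the best available arm on the
            # described feature, whatever the state contains.
            val = arms[np.argmax(s * arms[:, i])] @ w
        total += val
    return total / len(states)

def speaker_probs(w, states, beta=3.0):
    """Softmax-rational speaker: utterance probability scales with the value
    it induces for the listener (beta is an assumed rationality parameter)."""
    vals = np.array([listener_value(u, w, states) for u in UTTERANCES])
    logits = beta * vals
    p = np.exp(logits - logits.max())
    return p / p.sum()

def pragmatic_listener(utterance, states):
    """Bayesian inversion of the speaker: P(w | u) is proportional to
    P_speaker(u | w) * P(w)."""
    u_idx = UTTERANCES.index(utterance)
    likelihood = np.array([speaker_probs(w, states)[u_idx] for w in W_GRID])
    return likelihood / likelihood.sum()   # uniform prior over W_GRID

# States the speaker imagines: the present arm set plus random future subsets.
states = [ARMS] + [ARMS[rng.choice(len(ARMS), size=2, replace=False)]
                   for _ in range(20)]

posterior = pragmatic_listener(("describe", 0, 1.0), states)
print("posterior mean of w:", posterior @ W_GRID)
```

Running the sketch with the description "feature 0 is good" concentrates the posterior on weight vectors with a positive first coordinate, mirroring the paper's claim that descriptions convey reward information that generalizes across states, while instructions only pin down behavior in the state where they were given.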
