Imitating Interactive Intelligence

12/10/2020
by   Josh Abramson, et al.

A common vision from science fiction is that robots will one day inhabit our physical spaces, sense the world as we do, assist our physical labours, and communicate with us through natural language. Here we study how to design artificial agents that can interact naturally with humans within the simplified setting of a virtual environment. This setting nevertheless integrates a number of the central challenges of artificial intelligence (AI) research: complex visual perception and goal-directed physical control, grounded language comprehension and production, and multi-agent social interaction. To build agents that can robustly interact with humans, we would ideally train them while they interact with humans. However, this is presently impractical. Therefore, we approximate the role of the human with another learned agent, and use ideas from inverse reinforcement learning to reduce the disparities between human-human and agent-agent interactive behaviour. Rigorously evaluating our agents poses a great challenge, so we develop a variety of behavioural tests, including evaluation by humans who watch videos of agents or interact directly with them. These evaluations convincingly demonstrate that interactive training and auxiliary losses improve agent behaviour beyond what is achieved by supervised learning of actions alone. Further, we demonstrate that agent capabilities generalise beyond literal experiences in the dataset. Finally, we train evaluation models whose ratings of agents agree well with human judgement, thus permitting the evaluation of new agent models without additional effort. Taken together, our results in this virtual environment provide evidence that large-scale human behavioural imitation is a promising tool to create intelligent, interactive agents, and that the challenge of reliably evaluating such agents can be surmounted.
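The core recipe the abstract describes, supervised imitation of human actions plus auxiliary losses on shared inputs, can be illustrated with a minimal sketch. This is not the paper's architecture: the linear policy head, the synthetic "human" demonstrations, the auxiliary target (standing in for, say, a language-prediction task), and all sizes and weights below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy stand-in for a dataset of human demonstrations: each observation is
# paired with the action the demonstrator took and an auxiliary target
# (e.g. an id for the instruction in play). Sizes are arbitrary.
n, obs_dim, n_actions, n_aux = 256, 8, 4, 3
obs = rng.normal(size=(n, obs_dim))
true_w = rng.normal(size=(obs_dim, n_actions))
actions = (obs @ true_w).argmax(axis=1)        # synthetic "human" actions
aux_targets = rng.integers(0, n_aux, size=n)   # synthetic auxiliary labels

# A linear policy head and a linear auxiliary head over the same observation.
w_pi = np.zeros((obs_dim, n_actions))
w_aux = np.zeros((obs_dim, n_aux))

def total_loss(aux_weight=0.5):
    """Behavioural-cloning NLL of human actions plus a weighted auxiliary NLL."""
    bc = -np.log(softmax(obs @ w_pi)[np.arange(n), actions]).mean()
    aux = -np.log(softmax(obs @ w_aux)[np.arange(n), aux_targets]).mean()
    return bc + aux_weight * aux

def sgd_step(lr=0.5, aux_weight=0.5):
    """One full-batch gradient step on the combined objective."""
    global w_pi, w_aux
    w_pi -= lr * obs.T @ (softmax(obs @ w_pi) - np.eye(n_actions)[actions]) / n
    w_aux -= lr * aux_weight * obs.T @ (softmax(obs @ w_aux)
                                        - np.eye(n_aux)[aux_targets]) / n

loss_before = total_loss()
for _ in range(300):
    sgd_step()
loss_after = total_loss()
accuracy = ((obs @ w_pi).argmax(axis=1) == actions).mean()
```

In the paper's setting the shared trunk is a deep perception network and the heads predict movement, language, and auxiliary targets; the point of the sketch is only that the auxiliary loss shapes the same representation the cloned policy uses.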


Creating Multimodal Interactive Agents with Imitation and Self-Supervised Learning (12/07/2021)
A common vision from science fiction is that robots will one day inhabit...

Evaluating Multimodal Interactive Agents (05/26/2022)
Creating agents that can interact naturally with humans is a common goal...

Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration (10/19/2020)
In this paper, we introduce Watch-And-Help (WAH), a challenge for testin...

Automatable Evaluation Method Oriented toward Behaviour Believability for Video Games (09/02/2010)
Classic evaluation methods of believable agents are time-consuming becau...

Grounded Language Learning in a Simulated 3D World (06/20/2017)
We are increasingly surrounded by artificially intelligent technology th...

PHASE: PHysically-grounded Abstract Social Events for Machine Social Perception (03/02/2021)
The ability to perceive and reason about social interactions in the cont...

Corpus-Level End-to-End Exploration for Interactive Systems (11/23/2019)
A core interest in building Artificial Intelligence (AI) agents is to le...