Imitating Interactive Intelligence

by Josh Abramson, et al.

A common vision from science fiction is that robots will one day inhabit our physical spaces, sense the world as we do, assist our physical labours, and communicate with us through natural language. Here we study how to design artificial agents that can interact naturally with humans, using the simplified setting of a virtual environment. This setting nevertheless integrates a number of the central challenges of artificial intelligence (AI) research: complex visual perception and goal-directed physical control, grounded language comprehension and production, and multi-agent social interaction. To build agents that can robustly interact with humans, we would ideally train them while they interact with humans; however, this is presently impractical. We therefore approximate the role of the human with another learned agent, and use ideas from inverse reinforcement learning to reduce the disparities between human-human and agent-agent interactive behaviour. Rigorously evaluating our agents poses a great challenge, so we develop a variety of behavioural tests, including evaluation by humans who watch videos of agents or interact directly with them. These evaluations convincingly demonstrate that interactive training and auxiliary losses improve agent behaviour beyond what is achieved by supervised learning of actions alone. Further, we demonstrate that agent capabilities generalise beyond the literal experiences in the dataset. Finally, we train evaluation models whose ratings of agents agree well with human judgement, permitting the evaluation of new agent models without additional human effort. Taken together, our results in this virtual environment provide evidence that large-scale human behavioural imitation is a promising tool for creating intelligent, interactive agents, and that the challenge of reliably evaluating such agents can be surmounted.




