Evaluating the Robustness of Collaborative Agents

01/14/2021
by   Paul Knott, et al.

In order for agents trained by deep reinforcement learning to work alongside humans in realistic settings, we will need to ensure that the agents are robust. Since the real world is very diverse, and human behavior often changes in response to agent deployment, the agent will likely encounter novel situations that have never been seen during training. This results in an evaluation challenge: if we cannot rely on the average training or validation reward as a metric, then how can we effectively evaluate robustness? We take inspiration from the practice of unit testing in software engineering. Specifically, we suggest that when designing AI agents that collaborate with humans, designers should search for potential edge cases in possible partner behavior and possible states encountered, and write tests which check that the behavior of the agent in these edge cases is reasonable. We apply this methodology to build a suite of unit tests for the Overcooked-AI environment, and use this test suite to evaluate three proposals for improving robustness. We find that the test suite provides significant insight into the effects of these proposals, insight that is generally not revealed by looking solely at the average validation reward.
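To make the proposed methodology concrete, here is a minimal sketch of what an edge-case unit test for a collaborative agent might look like. The `State` class, the `ScriptedAgent` policy, and the "partner blocks the corridor" scenario are all illustrative stand-ins invented for this example; they are not the actual Overcooked-AI API or the paper's test suite.

```python
# Hypothetical sketch: unit-testing an agent's behavior in an edge case.
# All names here (State, ScriptedAgent, the blocking scenario) are
# illustrative assumptions, not the Overcooked-AI codebase.

from dataclasses import dataclass


@dataclass(frozen=True)
class State:
    """Minimal stand-in for an environment state: the agent's position and
    whether the partner is blocking the only path to the serving counter."""
    agent_pos: tuple
    partner_blocking: bool


class ScriptedAgent:
    """Toy policy: normally moves right toward the counter, but steps down
    to detour when the partner blocks the corridor."""

    def action(self, state: State) -> str:
        if state.partner_blocking:
            return "down"  # detour around the stuck partner
        return "right"     # default path to the serving counter


def test_agent_detours_around_blocking_partner():
    """Edge case in partner behavior: the partner stands still in the
    corridor. A robust agent should take the detour rather than walk
    into the partner indefinitely."""
    agent = ScriptedAgent()
    edge_case = State(agent_pos=(1, 1), partner_blocking=True)
    assert agent.action(edge_case) == "down"


def test_agent_takes_direct_path_when_unblocked():
    """Sanity check on the common case: no blocking, take the direct path."""
    agent = ScriptedAgent()
    normal = State(agent_pos=(1, 1), partner_blocking=False)
    assert agent.action(normal) == "right"
```

The point of the sketch is the structure, not the policy: each test pins down one hand-picked edge case (a specific state and partner behavior) and asserts that the agent's response is qualitatively reasonable, independent of any average-reward metric.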


11/27/2017

AI Safety Gridworlds

We present a suite of reinforcement learning environments illustrating v...
04/06/2022

A Cognitive Framework for Delegation Between Error-Prone AI and Human Agents

With humans interacting with AI-based systems at an increasing rate, it ...
05/07/2022

Search-Based Testing of Reinforcement Learning

Evaluation of deep reinforcement learning (RL) is inherently challenging...
12/18/2019

Analysing Deep Reinforcement Learning Agents Trained with Domain Randomisation

Deep reinforcement learning has the potential to train robots to perform...
06/10/2021

ERMAS: Becoming Robust to Reward Function Sim-to-Real Gaps in Multi-Agent Simulations

Multi-agent simulations provide a scalable environment for learning poli...
01/08/2021

Faster SAT Solving for Software with Repeated Structures (with Case Studies on Software Test Suite Minimization)

Theorem provers have been used extensively in software engineering for so...

Code Repositories

overcooked_ai

A benchmark environment for fully cooperative human-AI performance.
