Evaluating the Robustness of Collaborative Agents

01/14/2021
by Paul Knott, et al.

In order for agents trained by deep reinforcement learning to work alongside humans in realistic settings, we will need to ensure that the agents are robust. Since the real world is very diverse, and human behavior often changes in response to agent deployment, the agent will likely encounter novel situations that were never seen during training. This creates an evaluation challenge: if we cannot rely on the average training or validation reward as a metric, then how can we effectively evaluate robustness? We take inspiration from the practice of unit testing in software engineering. Specifically, we suggest that when designing AI agents that collaborate with humans, designers should search for potential edge cases in possible partner behavior and possible states encountered, and write tests which check that the agent's behavior in these edge cases is reasonable. We apply this methodology to build a suite of unit tests for the Overcooked-AI environment, and use this test suite to evaluate three proposals for improving robustness. We find that the test suite provides significant insight into the effects of these proposals, insight that is generally not revealed by looking solely at the average validation reward.


Related Research

11/27/2017 - AI Safety Gridworlds
We present a suite of reinforcement learning environments illustrating v...

05/29/2023 - Doing the right thing for the right reason: Evaluating artificial moral cognition by probing cost insensitivity
Is it possible to evaluate the moral cognition of complex artificial age...

05/07/2022 - Search-Based Testing of Reinforcement Learning
Evaluation of deep reinforcement learning (RL) is inherently challenging...

05/26/2023 - A Hierarchical Approach to Population Training for Human-AI Collaboration
A major challenge for deep reinforcement learning (DRL) agents is to col...

12/18/2019 - Analysing Deep Reinforcement Learning Agents Trained with Domain Randomisation
Deep reinforcement learning has the potential to train robots to perform...

10/05/2019 - Towards Deployment of Robust AI Agents for Human-Machine Partnerships
We study the problem of designing AI agents that can robustly cooperate ...

01/08/2021 - Faster SAT Solving for Software with Repeated Structures (with Case Studies on Software Test Suite Minimization)
Theorem provers have been used extensively in software engineering for so...

Code Repositories

overcooked_ai

A benchmark environment for fully cooperative human-AI performance.


