Identifying Reasoning Flaws in Planning-Based RL Using Tree Explanations

09/28/2021
by Kin-Ho Lam, et al.

Enabling humans to identify potential flaws in an agent's decision making is an important Explainable AI application. We consider identifying such flaws in a planning-based deep reinforcement learning (RL) agent for a complex real-time strategy game. In particular, the agent makes decisions via tree search using a learned model and evaluation function over interpretable states and actions. This gives the potential for humans to identify flaws at the level of reasoning steps in the tree, even if the entire reasoning process is too complex to understand. However, it is unclear whether humans will be able to identify such flaws due to the size and complexity of trees. We describe a user interface and case study, where a small group of AI experts and developers attempt to identify reasoning flaws due to inaccurate agent learning. Overall, the interface allowed the group to identify a number of significant flaws of varying types, demonstrating the promise of this approach.
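For intuition only (this sketch is not the authors' implementation), the kind of decision procedure the abstract describes can be pictured as a depth-limited tree search that expands actions with a learned transition model and scores leaves with a learned evaluation function over interpretable states. The names predict, value_fn, and legal_actions below are hypothetical placeholders for those components; the tree this returns is the sort of object a user would inspect for flawed reasoning steps.

```python
# Minimal sketch, assuming a learned model `predict(state, action)`, a learned
# evaluation function `value_fn(state)`, and an action generator
# `legal_actions(state)` -- all hypothetical stand-ins, not the paper's API.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional


@dataclass
class Node:
    state: Dict                        # interpretable state features
    action: Optional[str] = None       # action that led to this node
    value: float = 0.0                 # backed-up value estimate
    children: List["Node"] = field(default_factory=list)


def build_tree(state: Dict,
               legal_actions: Callable[[Dict], List[str]],
               predict: Callable[[Dict, str], Dict],
               value_fn: Callable[[Dict], float],
               depth: int) -> Node:
    """Expand each legal action with the learned model, score leaves with the
    learned evaluation function, and back up the max child value."""
    node = Node(state=state)
    if depth == 0 or not legal_actions(state):
        node.value = value_fn(state)               # learned leaf evaluation
        return node
    for a in legal_actions(state):
        child = build_tree(predict(state, a),      # learned, possibly inaccurate, model
                           legal_actions, predict, value_fn, depth - 1)
        child.action = a
        node.children.append(child)
    node.value = max(c.value for c in node.children)
    return node
```

Under this framing, a reasoning flaw shows up as a node whose predicted state or leaf value visibly disagrees with what a knowledgeable user expects, which is what the interface in the case study is designed to surface.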


Related research

06/04/2022  Beyond Value: CHECKLIST for Testing Inferences in Planning-Based RL
12/01/2022  Decisions that Explain Themselves: A User-Centric Deep Reinforcement Learning Explanation System
07/18/2023  IxDRL: A Novel Explainable Deep Reinforcement Learning Toolkit based on Analyses of Interestingness
04/06/2021  Why? Why not? When? Visual Explanations of Agent Behavior in Reinforcement Learning
11/11/2022  Global and Local Analysis of Interestingness for Competency-Aware Deep Reinforcement Learning
05/13/2020  Explainable Reinforcement Learning: A Survey
01/13/2020  Exploiting Language Instructions for Interpretable and Compositional Reinforcement Learning
