Interpretable Local Tree Surrogate Policies

09/16/2021
by John Mern, et al.

High-dimensional policies, such as those represented by neural networks, cannot be reasonably interpreted by humans. This lack of interpretability reduces users' trust in policy behavior, limiting the use of such policies to low-impact tasks such as video games. Unfortunately, many methods rely on neural network representations for effective learning. In this work, we propose a method to build predictable policy trees as surrogates for policies such as neural networks. The policy trees are easily human interpretable and provide quantitative predictions of future behavior. We demonstrate the performance of this approach on several simulated tasks.
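The paper's own construction is not reproduced on this page, but the general idea of a tree surrogate can be illustrated in a few lines. The Python sketch below distills a trained neural policy into a shallow decision tree by behavior cloning: roll out the policy, record state-action pairs, and fit a small tree to imitate it. The names env and neural_policy and the Gymnasium-style reset/step API are assumptions for illustration, not details from the paper.

# A minimal sketch of the general surrogate-extraction idea: distill a
# neural policy into a shallow decision tree by behavior cloning. This is
# an illustration, not the paper's algorithm; `env`, `neural_policy`, and
# the Gymnasium-style reset/step API are assumed stand-ins.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def collect_demonstrations(env, neural_policy, n_episodes=50):
    """Roll out the neural policy and record (observation, action) pairs."""
    observations, actions = [], []
    for _ in range(n_episodes):
        obs, _ = env.reset()
        done = False
        while not done:
            action = neural_policy(obs)  # assumed: maps obs to a discrete action
            observations.append(obs)
            actions.append(action)
            obs, _, terminated, truncated, _ = env.step(action)
            done = terminated or truncated
    return np.array(observations), np.array(actions)

def fit_tree_surrogate(observations, actions, max_depth=4):
    """Fit a small, human-readable tree that imitates the neural policy."""
    surrogate = DecisionTreeClassifier(max_depth=max_depth)
    surrogate.fit(observations, actions)
    return surrogate

# Usage, given a trained policy and environment:
#   obs_data, act_data = collect_demonstrations(env, neural_policy)
#   surrogate = fit_tree_surrogate(obs_data, act_data)
#   print(export_text(surrogate))  # inspect the decision rules directly

Capping the tree depth keeps the surrogate small enough to read at a glance, at some cost in fidelity to the original policy.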

Related research

Programmatically Interpretable Reinforcement Learning (04/06/2018)
We study the problem of generating interpretable and verifiable policies...

Programmatic Policy Extraction by Iterative Local Search (01/18/2022)
Reinforcement learning policies are often represented by neural networks...

Understanding Finite-State Representations of Recurrent Policy Networks (06/06/2020)
We introduce an approach for understanding finite-state machine (FSM) re...

Learning Interpretable, High-Performing Policies for Continuous Control Problems (02/04/2022)
Gradient-based approaches in reinforcement learning (RL) have achieved t...

Reward Learning with Trees: Methods and Evaluation (10/03/2022)
Recent efforts to learn reward functions from human feedback have tended...

PoliFi: Airtime Policy Enforcement for WiFi (02/09/2019)
As WiFi grows ever more popular, airtime contention becomes an increasin...

Learning Finite State Representations of Recurrent Policy Networks (11/29/2018)
Recurrent neural networks (RNNs) are an effective representation of cont...
