Learning Finite State Representations of Recurrent Policy Networks

11/29/2018
by Anurag Koul et al.

Recurrent neural networks (RNNs) are an effective representation of control policies for a wide range of reinforcement and imitation learning problems. RNN policies, however, are particularly difficult to explain, understand, and analyze due to their use of continuous-valued memory vectors and observation features. In this paper, we introduce a new technique, Quantized Bottleneck Insertion, to learn finite representations of these vectors and features. The result is a quantized representation of the RNN that can be analyzed to improve our understanding of memory use and general behavior. We present results of this approach on synthetic environments and six Atari games. The resulting finite representations are surprisingly small in some cases, using as few as 3 discrete memory states and 10 observations for a perfect Pong policy. We also show that these finite policy representations lead to improved interpretability.
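To illustrate the core idea behind a quantized bottleneck, here is a minimal sketch of how continuous memory vectors can be collapsed into a finite set of discrete states. This is not the paper's implementation; the 3-level quantizer and the function names are illustrative assumptions.

```python
import numpy as np

def quantize(h, levels=3):
    """Map continuous bottleneck activations to a small discrete set
    (here 3 levels: -1, 0, +1), so the set of reachable memory states
    becomes finite and enumerable. Illustrative sketch only."""
    h = np.tanh(h)               # squash activations into (-1, 1)
    step = 2.0 / (levels - 1)    # spacing between quantization levels
    return np.round(h / step) * step

def discrete_state(h):
    """Treat each distinct quantized vector as one discrete memory state."""
    return tuple(quantize(h))

# Nearby continuous hidden states can collapse to the same discrete state,
# which is what makes the extracted policy's state space finite.
h1 = np.array([0.9, -0.05, 0.4])
h2 = np.array([0.85, 0.02, 0.35])
```

With a bottleneck of this kind inserted into the RNN's memory and observation pathways, enumerating the quantized vectors visited during rollouts yields the finite-state machine analyzed in the paper.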


