Mesh Based Analysis of Low Fractal Dimension Reinforcement Learning Policies

12/24/2020
by Sean Gillen, et al.

In previous work, using a process we call meshing, the reachable state spaces of various continuous and hybrid systems were approximated as a discrete set of states, which can then be synthesized into a Markov chain. One application of this approach has been to analyze locomotion policies obtained by reinforcement learning, as a step toward making empirical guarantees about the stability properties of the resulting system. In a separate line of research, we introduced a modified reward function for on-policy reinforcement learning algorithms that uses a "fractal dimension" of rollout trajectories. This reward was shown to encourage policies whose individual trajectories can be represented more compactly as a discrete mesh. In this work we combine these two threads of research by building meshes of the reachable state space of a system that is subject to disturbances and controlled by policies obtained with the modified reward. Our analysis shows that the modified policies do produce much smaller reachable meshes, demonstrating that agents trained with the fractal dimension reward retain their more compact state space in a setting with external disturbances. The results also suggest that prior work using mesh-based tools to analyze RL policies could be extended to higher dimensional systems, or to higher resolution meshes, than would otherwise have been possible.
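The abstract describes two ingredients: discretizing rollout states into a mesh that can be synthesized into a Markov chain, and a reward based on a "fractal dimension" of trajectories. As a rough illustration only (not the authors' code), the Python sketch below builds a uniform-grid mesh of trajectory states and estimates a box-counting dimension from how the number of occupied cells grows as the cells shrink. The function names, the uniform grid, and the synthetic trajectory are all assumptions made for this example; the paper's actual meshing procedure and reward computation may differ.

# Hypothetical sketch (not the paper's implementation): mesh rollout states on a
# uniform grid and estimate a box-counting dimension from cell counts.
import numpy as np


def mesh_states(states, cell_size):
    # Map continuous states to integer grid cells and return the set of occupied cells.
    cells = np.floor(np.asarray(states) / cell_size).astype(int)
    return {tuple(c) for c in cells}


def box_counting_dimension(states, cell_sizes):
    # Slope of log(occupied cells) vs. log(1 / cell_size): a crude fractal-dimension estimate.
    counts = [len(mesh_states(states, d)) for d in cell_sizes]
    log_inv_d = np.log(1.0 / np.asarray(cell_sizes))
    log_n = np.log(np.asarray(counts))
    slope, _ = np.polyfit(log_inv_d, log_n, 1)
    return slope


# Synthetic example: a noisy one-dimensional curve embedded in a 2-D state space.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 5000)
trajectory = np.stack([t, np.sin(8 * t) + 0.01 * rng.standard_normal(t.size)], axis=1)

print("occupied cells at 0.05 resolution:", len(mesh_states(trajectory, 0.05)))
print("estimated fractal dimension:", box_counting_dimension(trajectory, [0.2, 0.1, 0.05, 0.025]))

In these terms, fewer occupied cells at a given resolution corresponds to a more compact mesh, and a smaller slope corresponds to a lower estimated fractal dimension of the rollouts.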

Related research

03/29/2019
Mesh-based Tools to Analyze Deep Reinforcement Learning Policies for Underactuated Biped Locomotion
In this paper, we present a mesh-based approach to analyze stability and...

12/21/2020
Explicitly Encouraging Low Fractional Dimensional Trajectories Via Reinforcement Learning
A key limitation in using various modern methods of machine learning in ...

10/08/2021
Training Transition Policies via Distribution Matching for Complex Tasks
Humans decompose novel complex tasks into simpler ones to exploit previo...

12/07/2022
Specifying Behavior Preference with Tiered Reward Functions
Reinforcement-learning agents seek to maximize a reward signal through e...

10/15/2021
Wasserstein Unsupervised Reinforcement Learning
Unsupervised reinforcement learning aims to train agents to learn a hand...

02/26/2019
Planning in Hierarchical Reinforcement Learning: Guarantees for Using Local Policies
We consider a setting of hierarchical reinforcement learning, in which ...

10/29/2019
Resolution-independent meshes of superpixels
The over-segmentation into superpixels is an important preprocessing ste...
