Mesh-Based Analysis of Low Fractal Dimension Reinforcement Learning Policies

by Sean Gillen, et al.

In previous work, using a process we call meshing, the reachable state space of various continuous and hybrid systems was approximated as a discrete set of states, which can then be synthesized into a Markov chain. One application of this approach has been to analyze locomotion policies obtained by reinforcement learning, as a step towards making empirical guarantees about the stability properties of the resulting system. In a separate line of research, we introduced a modified reward function for on-policy reinforcement learning algorithms that uses the "fractal dimension" of rollout trajectories. This reward was shown to encourage policies whose individual trajectories can be more compactly represented as a discrete mesh. In this work we combine these two threads of research by building meshes of the reachable state space of a system that is subject to disturbances and controlled by policies obtained with the modified reward. Our analysis shows that the modified policies do produce much smaller reachable meshes, indicating that agents trained with the fractal-dimension reward retain their more compact state space even in a setting with external disturbances. The results also suggest that the mesh-based tools previously used to analyze RL policies may extend to higher-dimensional systems, or to higher-resolution meshes, than would otherwise have been possible.
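To make the meshing idea concrete, the following is a minimal illustrative sketch (not the authors' implementation): continuous rollout states are snapped to grid cells of a chosen resolution, and the observed cell-to-cell transitions are row-normalized into a Markov chain. The function name, grid discretization scheme, and `cell_size` parameter are all assumptions for illustration.

```python
# Illustrative sketch of meshing, not the paper's actual algorithm:
# discretize visited states onto a grid, then estimate a Markov chain
# over the resulting discrete mesh states.
import numpy as np
from collections import defaultdict

def mesh_rollouts(trajectories, cell_size=0.5):
    """Map each continuous state to a grid cell and count transitions.

    trajectories: iterable of rollouts, each a sequence of state vectors.
    Returns (states, P): the list of discrete mesh states and a
    row-stochastic transition matrix estimated from observed transitions.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for traj in trajectories:
        # Snap each continuous state to the integer grid cell containing it.
        cells = [tuple(np.floor(np.asarray(s) / cell_size).astype(int))
                 for s in traj]
        for a, b in zip(cells[:-1], cells[1:]):
            counts[a][b] += 1
    # Collect every cell that appears as a source or target of a transition.
    states = sorted(counts.keys() | {c for row in counts.values() for c in row})
    index = {s: i for i, s in enumerate(states)}
    # Row-normalize the transition counts (cells with no outgoing
    # observed transition keep an all-zero row).
    P = np.zeros((len(states), len(states)))
    for a, row in counts.items():
        total = sum(row.values())
        for b, n in row.items():
            P[index[a], index[b]] = n / total
    return states, P
```

Under this framing, the paper's central measurement is simply the size of `states`: a policy whose trajectories have lower fractal dimension should yield fewer occupied mesh cells at the same resolution.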



Related research:

Mesh-based Tools to Analyze Deep Reinforcement Learning Policies for Underactuated Biped Locomotion

Explicitly Encouraging Low Fractional Dimensional Trajectories Via Reinforcement Learning

Training Transition Policies via Distribution Matching for Complex Tasks

Specifying Behavior Preference with Tiered Reward Functions

Wasserstein Unsupervised Reinforcement Learning

Planning in Hierarchical Reinforcement Learning: Guarantees for Using Local Policies

Resolution-independent Meshes of Superpixels
