Towards Robust Bisimulation Metric Learning

10/27/2021
by Mete Kemertas, et al.

Learned representations in deep reinforcement learning (DRL) must extract task-relevant information from complex observations, balancing robustness to distraction against informativeness to the policy. Such stable and rich representations, often learned via modern function approximation techniques, can enable practical application of the policy improvement theorem, even in high-dimensional continuous state-action spaces. Bisimulation metrics offer one solution to this representation learning problem, by collapsing functionally similar states together in representation space, which promotes invariance to noise and distractors. In this work, we generalize value function approximation bounds for on-policy bisimulation metrics to non-optimal policies and approximate environment dynamics. Our theoretical results help us identify embedding pathologies that may occur in practical use. In particular, we find that these issues stem from an underconstrained dynamics model and an unstable dependence of the embedding norm on the reward signal in environments with sparse rewards. Further, we propose a set of practical remedies: (i) a norm constraint on the representation space, and (ii) an extension of prior approaches with intrinsic rewards and latent space regularization. Finally, we provide evidence that the resulting method is not only more robust to sparse reward functions, but also able to solve challenging continuous control tasks with observational distractions, where prior methods fail.
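To make the ingredients concrete, below is a minimal NumPy sketch of on-policy bisimulation metric learning with the norm constraint described as remedy (i). All function names, the ball radius `c_max`, and the use of diagonal-Gaussian latent dynamics (which give a closed-form 2-Wasserstein distance) are illustrative assumptions, not the paper's actual implementation; the real method trains an encoder and dynamics model jointly by gradient descent.

```python
import numpy as np

def project_to_ball(z, c_max=1.0):
    """Remedy (i), as assumed here: constrain embedding norms by projecting
    each embedding onto an L2 ball of radius c_max."""
    norms = np.linalg.norm(z, axis=-1, keepdims=True)
    return z * np.minimum(1.0, c_max / np.maximum(norms, 1e-8))

def w2_gaussian(mu1, sigma1, mu2, sigma2):
    """Closed-form 2-Wasserstein distance between diagonal Gaussians,
    a common choice for approximate latent dynamics."""
    return np.sqrt(np.sum((mu1 - mu2) ** 2, axis=-1)
                   + np.sum((sigma1 - sigma2) ** 2, axis=-1))

def bisim_targets(r1, r2, mu1, s1, mu2, s2, gamma=0.99):
    """On-policy bisimulation target for a state pair:
    |r_i - r_j| + gamma * W2 between predicted next-latent distributions."""
    return np.abs(r1 - r2) + gamma * w2_gaussian(mu1, s1, mu2, s2)

def bisim_loss(z1, z2, targets):
    """Regress embedding distances onto the bisimulation targets (MSE)."""
    d = np.linalg.norm(z1 - z2, axis=-1)
    return np.mean((d - targets) ** 2)
```

The sketch illustrates the pathology the abstract points to: when rewards are sparse, the targets shrink toward zero and the embedding norm has no stable scale, which is what the norm constraint is meant to control.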

Related research:

- DeepMDP: Learning Continuous Latent Space Models for Representation Learning (06/06/2019)
  Many reinforcement learning (RL) tasks provide the agent with high-dimen...
- Information Maximizing Exploration with a Latent Dynamics Model (04/04/2018)
  All reinforcement learning algorithms must handle the trade-off between ...
- On the Generalization of Representations in Reinforcement Learning (03/01/2022)
  In reinforcement learning, state representations are used to tractably d...
- Representations for Stable Off-Policy Reinforcement Learning (07/10/2020)
  Reinforcement learning with function approximation can be unstable and e...
- Successor Feature Sets: Generalizing Successor Representations Across Policies (03/03/2021)
  Successor-style representations have many advantages for reinforcement l...
- Learning Invariant Representations for Reinforcement Learning without Reconstruction (06/18/2020)
  We study how representation learning can accelerate reinforcement learni...
- Trusted Approximate Policy Iteration with Bisimulation Metrics (02/06/2022)
  Bisimulation metrics define a distance measure between states of a Marko...
