Between Rate-Distortion Theory and Value Equivalence in Model-Based Reinforcement Learning

06/04/2022
by   Dilip Arumugam, et al.

The quintessential model-based reinforcement-learning agent iteratively refines its estimates or prior beliefs about the true underlying model of the environment. Recent empirical successes in model-based reinforcement learning with function approximation, however, eschew the true model in favor of a surrogate that, while ignoring various facets of the environment, still facilitates effective planning over behaviors. Recently formalized as the value equivalence principle, this algorithmic technique is perhaps unavoidable as real-world reinforcement learning demands consideration of a simple, computationally-bounded agent interacting with an overwhelmingly complex environment. In this work, we entertain an extreme scenario wherein some combination of immense environment complexity and limited agent capacity entirely precludes identifying an exactly value-equivalent model. In light of this, we embrace a notion of approximate value equivalence and introduce an algorithm for incrementally synthesizing simple and useful approximations of the environment from which an agent might still recover near-optimal behavior. Crucially, we recognize the information-theoretic nature of this lossy environment compression problem and use the appropriate tools of rate-distortion theory to make mathematically precise how value equivalence can lend tractability to otherwise intractable sequential decision-making problems.
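The lossy environment compression the abstract describes invites a concrete computation. As a hedged sketch (not the paper's own algorithm), the classical Blahut-Arimoto iteration from rate-distortion theory traces one point on a rate-distortion curve; in this setting the "source" could be a distribution over candidate environment models and the distortion a value-based loss between the true model and a compressed surrogate. The function name and the toy binary setup below are illustrative assumptions.

```python
import numpy as np

def blahut_arimoto(p_x, dist, beta, n_iter=200):
    """Blahut-Arimoto: trace one point on the rate-distortion curve.

    p_x  : source distribution over "environment models" (shape [n]).
    dist : distortion matrix d(x, x_hat) (shape [n, m]), e.g. a value-based
           loss between the true model x and a compressed model x_hat.
    beta : Lagrange multiplier trading rate against distortion.
    Returns (rate_in_bits, expected_distortion).
    """
    n, m = dist.shape
    q = np.full(m, 1.0 / m)  # marginal over compressed models
    for _ in range(n_iter):
        # Optimal channel given the marginal: Q(x_hat|x) ∝ q(x_hat) exp(-beta d)
        Q = q[None, :] * np.exp(-beta * dist)
        Q /= Q.sum(axis=1, keepdims=True)
        q = p_x @ Q  # updated marginal over compressed models
    with np.errstate(divide="ignore", invalid="ignore"):
        log_ratio = np.where(Q > 0, np.log2(Q / q[None, :]), 0.0)
    rate = float(np.sum(p_x[:, None] * Q * log_ratio))          # I(X; X_hat) in bits
    distortion = float(np.sum(p_x[:, None] * Q * dist))         # E[d(X, X_hat)]
    return rate, distortion

# Toy check: a uniform binary source under Hamming distortion.
# Large beta demands fidelity (rate near 1 bit, distortion near 0);
# small beta tolerates distortion and compresses the source away.
p = np.array([0.5, 0.5])
d = 1.0 - np.eye(2)
rate_hi, dist_hi = blahut_arimoto(p, d, beta=10.0)
rate_lo, dist_lo = blahut_arimoto(p, d, beta=0.1)
```

Sweeping beta recovers the trade-off the abstract points to: how much model detail (rate) an agent must retain to keep its planning loss (distortion) below a target.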


Related research:

- 06/04/2022: Deciding What to Model: Value-Equivalent Sampling for Reinforcement Learning
  The quintessential model-based reinforcement-learning agent iteratively ...
- 10/30/2022: On Rate-Distortion Theory in Capacity-Limited Cognition & Reinforcement Learning
  Throughout the cognitive-science literature, there is widespread agreeme...
- 06/01/2018: Equivalence Between Wasserstein and Value-Aware Loss for Model-based Reinforcement Learning
  Learning a generative model is a key component of model-based reinforcem...
- 01/29/2022: Bellman Meets Hawkes: Model-Based Reinforcement Learning via Temporal Point Processes
  We consider a sequential decision making problem where the agent faces t...
- 07/18/2020: Modulation of viability signals for self-regulatory control
  We revisit the role of instrumental value as a driver of adaptive behavi...
- 10/26/2021: The Value of Information When Deciding What to Learn
  All sequential decision-making agents explore so as to acquire knowledge...
- 11/19/2014: Compress and Control
  This paper describes a new information-theoretic policy evaluation techn...
