Reinforcement Learning with Subspaces using Free Energy Paradigm

12/13/2020
by Milad Ghorbani, et al.

In large-scale problems, standard reinforcement learning algorithms suffer from slow learning. In this paper, we follow the framework of using subspaces to tackle this problem. We propose a free-energy minimization framework for selecting subspaces and for integrating the state-space policy with the subspace policies. The framework rests on Thompson sampling and on the behavioral policies of the subspaces and the state-space; it is therefore applicable to a variety of tasks, with discrete or continuous state spaces, in both model-free and model-based settings. Through a set of experiments, we show that this general framework substantially improves learning speed. We also provide a convergence proof.
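The abstract does not spell out the selection mechanism, but the core idea of using Thompson sampling to decide which subspace to trust can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the Beta posteriors, the notion of a subspace's advice "agreeing" with the full state-space policy, and the toy reliability numbers are all hypothetical, and the sketch does not implement the paper's actual free-energy objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each subspace ignores some state variables and
# therefore generalizes faster but less accurately. We keep a Beta
# posterior over how often each subspace's suggested action agrees
# with the full state-space policy.
n_subspaces = 3
successes = np.ones(n_subspaces)  # Beta prior: alpha = 1
failures = np.ones(n_subspaces)   # Beta prior: beta = 1

def select_subspace():
    """Thompson sampling: draw one sample per posterior, pick argmax."""
    samples = rng.beta(successes, failures)
    return int(np.argmax(samples))

def update(subspace, agreed):
    """Update the chosen subspace's posterior after observing whether
    its advice agreed with the state-space policy."""
    if agreed:
        successes[subspace] += 1
    else:
        failures[subspace] += 1

# Toy loop: by assumption, subspace 1 is the most reliable (p = 0.9).
true_p = np.array([0.3, 0.9, 0.5])
for _ in range(500):
    k = select_subspace()
    update(k, rng.random() < true_p[k])

posterior_mean = successes / (successes + failures)
best = int(np.argmax(posterior_mean))
```

After the loop, the sampler has concentrated its pulls on the subspace whose advice agrees most often, which is the behavior the paper's selection framework exploits to speed up early learning.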

