Reinforcement Learning through Active Inference

02/28/2020
by Alexander Tschantz, et al.

The central tenet of reinforcement learning (RL) is that agents seek to maximize the cumulative sum of rewards. In contrast, active inference, an emerging framework within cognitive and computational neuroscience, proposes that agents act to maximize the evidence for a biased generative model. Here, we illustrate how ideas from active inference can augment traditional RL approaches by (i) furnishing an inherent balance of exploration and exploitation, and (ii) providing a more flexible conceptualization of reward. Inspired by active inference, we develop and implement a novel objective for decision making, which we term the free energy of the expected future. We demonstrate that the resulting algorithm successfully balances exploration and exploitation, simultaneously achieving robust performance on several challenging RL benchmarks with sparse, well-shaped, and no rewards.
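The abstract does not reproduce the objective itself, but expected-free-energy-style objectives in active inference are commonly written as an extrinsic term (how well predicted outcomes match a biased prior over preferred outcomes) plus an epistemic term (expected information gain about model parameters). The sketch below is a toy illustration under those assumed definitions, not the paper's actual algorithm: parameter uncertainty is approximated by an ensemble of predicted outcome distributions, and the function names (`expected_free_energy`, `entropy`) are invented for illustration.

```python
import math

def entropy(p):
    """Shannon entropy (nats) of a discrete distribution."""
    return -sum(x * math.log(x) for x in p if x > 0)

def expected_free_energy(pred_outcomes, log_prefs):
    """Toy expected free energy for one candidate action.

    pred_outcomes: list of predicted outcome distributions, one per
                   ensemble member (approximating parameter uncertainty).
    log_prefs:     log-probabilities of a biased prior over preferred
                   outcomes.
    Returns G = -(extrinsic value) - (epistemic value); lower is better.
    """
    n = len(pred_outcomes)
    k = len(log_prefs)
    # Marginal predictive distribution, averaged over the ensemble.
    marginal = [sum(d[i] for d in pred_outcomes) / n for i in range(k)]
    # Extrinsic value: expected log-preference of predicted outcomes.
    extrinsic = sum(marginal[i] * log_prefs[i] for i in range(k))
    # Epistemic value: mutual information between outcomes and parameters,
    # i.e. H(marginal) minus the mean conditional entropy.
    epistemic = entropy(marginal) - sum(entropy(d) for d in pred_outcomes) / n
    return -extrinsic - epistemic

# Two candidate actions over a binary outcome, preferring outcome 0.
log_prefs = [math.log(0.8), math.log(0.2)]
# Action A: the ensemble agrees, so it carries no information gain.
action_a = [[0.5, 0.5], [0.5, 0.5]]
# Action B: ensemble members disagree, so observing the outcome is
# informative; B earns an epistemic bonus that lowers its G.
action_b = [[0.9, 0.1], [0.1, 0.9]]
best = min([action_a, action_b], key=lambda d: expected_free_energy(d, log_prefs))
```

Because both actions share the same marginal prediction, their extrinsic terms are equal, and the ambiguous action B is selected purely for its epistemic value; this is the sense in which such an objective trades off exploitation and exploration in one quantity.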


Related research

- Scaling active inference (11/24/2019): "In reinforcement learning (RL), agents often operate in partially observ..."
- Online reinforcement learning with sparse rewards through an active inference capsule (06/04/2021): "Intelligent agents must pursue their goals in complex environments with ..."
- Contrastive Active Inference (10/19/2021): "Active inference is a unifying theory for perception and action resting ..."
- REX: Rapid Exploration and eXploitation for AI Agents (07/18/2023): "In this paper, we propose an enhanced approach for Rapid Exploration and..."
- Variational Inference for Model-Free and Model-Based Reinforcement Learning (09/04/2022): "Variational inference (VI) is a specific type of approximate Bayesian in..."
- Making Sense of Reinforcement Learning and Probabilistic Inference (01/03/2020): "Reinforcement learning (RL) combines a control problem with statistical ..."
