Goal-Directed Planning by Reinforcement Learning and Active Inference

06/18/2021
by Dongqi Han, et al.

What is the difference between goal-directed and habitual behavior? We propose a novel computational framework of decision making with Bayesian inference, in which the entire decision-making process is integrated into a single neural network model. The model learns to predict environmental state transitions through self-exploration and to generate motor actions by sampling stochastic internal states z. Habitual behavior, obtained from the prior distribution of z, is acquired by reinforcement learning. Goal-directed behavior is determined from the posterior distribution of z by planning, using active inference, which optimizes the past, current, and future z by minimizing the variational free energy for the desired future observation, constrained by the observed sensory sequence. We demonstrate the effectiveness of the proposed framework through experiments in a sensorimotor navigation task with camera observations and continuous motor actions.
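To make the planning idea concrete, the sketch below illustrates one way goal-directed planning by active inference could look in code: the latent variables z over a planning horizon are optimized by gradient descent so that the model's predicted future observation matches a desired goal observation, while a prior-regularization term keeps z close to its prior. This is a minimal, hypothetical toy example; the network sizes, linear models, loss weighting, and the function and variable names (LatentModel, plan_by_active_inference, etc.) are illustrative assumptions, not the authors' architecture or training procedure.

```python
import torch
import torch.nn as nn

# Hypothetical toy dimensions; the paper's task uses camera observations
# and continuous motor actions, not these small vectors.
OBS_DIM, Z_DIM, ACT_DIM, HORIZON = 8, 4, 2, 5

class LatentModel(nn.Module):
    """Illustrative stand-in for a learned world/policy model:
    maps (observation, latent z) to a predicted next observation and an action."""
    def __init__(self):
        super().__init__()
        self.transition = nn.Linear(OBS_DIM + Z_DIM, OBS_DIM)  # predicts next observation
        self.policy = nn.Linear(OBS_DIM + Z_DIM, ACT_DIM)      # maps (obs, z) to motor action

    def forward(self, obs, z):
        h = torch.cat([obs, z], dim=-1)
        return self.transition(h), self.policy(h)

def plan_by_active_inference(model, obs0, goal_obs, steps=200, lr=0.05):
    """Optimize the (posterior means of) z over the horizon so the predicted
    future observation matches the goal, with a quadratic penalty standing in
    for the KL term toward a standard-normal prior on z."""
    z_seq = torch.zeros(HORIZON, Z_DIM, requires_grad=True)
    opt = torch.optim.Adam([z_seq], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        obs = obs0
        for t in range(HORIZON):
            obs, _ = model(obs, z_seq[t])       # roll out predicted observations
        # Free-energy-style objective: goal prediction error + prior regularization.
        loss = ((obs - goal_obs) ** 2).sum() + 0.1 * (z_seq ** 2).sum()
        loss.backward()
        opt.step()
    # Decode the optimized z sequence into a goal-directed action plan.
    with torch.no_grad():
        actions, obs = [], obs0
        for t in range(HORIZON):
            obs, a = model(obs, z_seq[t])
            actions.append(a)
    return torch.stack(actions)

if __name__ == "__main__":
    model = LatentModel()
    obs0, goal = torch.randn(OBS_DIM), torch.randn(OBS_DIM)
    plan = plan_by_active_inference(model, obs0, goal)
    print(plan.shape)  # (HORIZON, ACT_DIM)
```

In this reading, habitual behavior would correspond to sampling z from its prior (here, simply z = 0 or a standard-normal draw) without the optimization loop, whereas goal-directed behavior comes from the optimized, posterior-like z sequence.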
