Learning Perception and Planning with Deep Active Inference

01/30/2020
by   Ozan Çatal, et al.

Active inference is a process theory of the brain that states that all living organisms infer actions in order to minimize their (expected) free energy. However, current experiments are limited to predefined, often discrete, state spaces. In this paper we use recent advances in deep learning to learn the state space and approximate the necessary probability distributions to engage in active inference.



1 Introduction

Active inference postulates that action selection in biological systems, in particular the human brain, is actually an inference problem where agents are attracted to a preferred prior state distribution in a hidden state space [3]. To do so, each living organism builds an internal generative model of the world, by minimizing the so-called free energy. The idea of active inference stems from neuroscience [7, 4] and has already been adopted to solve different control and learning tasks [5, 6, 3]. These experiments however typically make use of manually engineered state transition models and predefined, often discrete, state spaces.

In this paper we show that we can also learn the state space and state transition model, by using deep neural networks as probability density estimators. By sampling from the learnt state transition model, we can plan ahead minimizing the expected free energy, trading off goal-directed behavior and uncertainty-resolving behavior.

2 Active Inference

Active inference states that every organism or agent entertains an internal model of the world, and implicitly tries to minimize the difference between what it believes about the world and what it perceives, hence minimizing its own variational free energy [7]. Moreover, the agent believes that it will minimize its expected free energy in the future, in effect turning action selection into an inference problem. This boils down to optimizing two distinct objectives. On the one hand, the agent actively samples the world to update its internal model and better explain observations. On the other hand, the agent is driven to visit preferred states that it believes a priori it will visit (a kind of global prior), states which carry little expected free energy.

Formally, an agent entertains a generative model $P(\tilde{o}, \tilde{s}, \tilde{a})$ of the environment, which specifies the joint probability of observations, actions and their hidden causes, where actions are determined by some policy $\pi$. If the environment is modelled as a Markov Decision Process (MDP) this generative model factorizes as:

$$P(\tilde{o}, \tilde{s}, \tilde{a}) = P(\pi) \prod_{t=1}^{T} P(o_t \mid s_t) \, P(s_t \mid s_{t-1}, a_{t-1}) \, P(a_{t-1} \mid \pi) \qquad (1)$$

The free energy is then defined as:

$$\begin{aligned} F &= \mathbb{E}_{Q(\tilde{s})}[\log Q(\tilde{s}) - \log P(\tilde{o}, \tilde{s}, \tilde{a})] \\ &= D_{\mathrm{KL}}(Q(\tilde{s}) \,\|\, P(\tilde{s}, \tilde{a} \mid \tilde{o})) - \log P(\tilde{o}) \\ &= D_{\mathrm{KL}}(Q(\tilde{s}) \,\|\, P(\tilde{s}, \tilde{a})) - \mathbb{E}_{Q(\tilde{s})}[\log P(\tilde{o} \mid \tilde{s})] \end{aligned} \qquad (2)$$

where $Q(\tilde{s})$ is an approximate posterior distribution over the hidden states. The second equality shows that the free energy is minimized when the KL divergence term becomes zero, meaning that the approximate posterior becomes the true posterior, in which case the free energy reduces to the negative log evidence. The third equality is the negative evidence lower bound (ELBO), which also appears in the variational autoencoder (VAE) framework [8, 10].

In active inference, agents infer the actions that will result in visiting states of low expected free energy. They do this by sampling actions from a prior belief about policies, according to how much expected free energy each policy will induce. Formally, this means that the probability of picking a policy is given by [11]:

$$P(\pi) = \sigma(-\gamma G(\pi)) \quad \text{with} \quad G(\pi) = \sum_{\tau} G(\pi, \tau) \qquad (3)$$

where $\sigma$ is the softmax function with precision parameter $\gamma$, which governs the agent’s goal-directedness and randomness in its behavior. Here $G(\pi, \tau)$ is the expected free energy at future timestep $\tau$ when following policy $\pi$ [11]:

$$G(\pi, \tau) = \mathbb{E}_{Q(o_\tau, s_\tau \mid \pi)}[\log Q(s_\tau \mid \pi) - \log P(o_\tau, s_\tau \mid \pi)] = D_{\mathrm{KL}}(Q(s_\tau \mid \pi) \,\|\, P(s_\tau)) + \mathbb{E}_{Q(s_\tau \mid \pi)}[H(P(o_\tau \mid s_\tau))] \qquad (4)$$

We used $Q(o_\tau, s_\tau \mid \pi) = P(o_\tau \mid s_\tau) Q(s_\tau \mid \pi)$ and the fact that the prior probability $P(s_\tau \mid \pi)$ is given by a preferred state distribution $P(s_\tau)$. This results in two terms: a KL divergence term between the predicted states and the prior preferred states, and an entropy term reflecting the expected ambiguity under predicted states. Action selection in active inference thus entails:

  1. Evaluate $G(\pi)$ for each policy $\pi$

  2. Calculate the belief over policies $P(\pi) = \sigma(-\gamma G(\pi))$

  3. Infer the next action using $\sum_\pi P(a_t \mid \pi) P(\pi)$
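The three-step procedure above can be sketched in code. The following is a minimal illustration under our own naming (`select_policy`, `gamma` and the example $G$ values are assumptions, not the authors' implementation); it takes pre-computed expected free energies (step 1) and performs steps 2 and 3:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def select_policy(G, gamma, rng=None):
    """Given the expected free energy G[i] of each policy (step 1, assumed
    pre-computed), form the belief over policies P(pi) = softmax(-gamma * G)
    (step 2) and sample a policy index from it (step 3)."""
    if rng is None:
        rng = np.random.default_rng(0)
    P_pi = softmax(-gamma * np.asarray(G, dtype=float))
    return rng.choice(len(G), p=P_pi), P_pi

# Two candidate policies; the one with lower expected free energy receives
# more probability mass, the more so for a higher precision gamma.
policy, P_pi = select_policy([4.0, 2.0], gamma=1.0)
```

Higher $\gamma$ concentrates the belief on the single best policy, recovering near-deterministic goal-directed behavior; lower $\gamma$ yields more random exploration.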

3 Deep Active Inference

For the agent’s model, we use deep neural networks to parameterize the various factors of equation (1): i.e. the transition model $p_\theta(s_t \mid s_{t-1}, a_{t-1})$ and the likelihood distribution $p_\xi(o_t \mid s_t)$. The approximate posterior is also parameterized by a neural network $q_\phi(s_t \mid s_{t-1}, a_{t-1}, o_t)$. All distributions are parameterized as multivariate Gaussians with a diagonal covariance matrix, i.e. the outputs of the neural networks are the means $\mu$ and standard deviations $\sigma$ of each Gaussian. Sampling is done using the reparameterization trick, computing $s_t = \mu + \epsilon \sigma$ with $\epsilon \sim \mathcal{N}(0, I)$, which allows for backpropagation of the gradients through the sampling step. Minimizing the free energy then boils down to minimizing the following loss function:

$$\forall t: \min_{\phi, \theta, \xi} \; D_{\mathrm{KL}}(q_\phi(s_t \mid s_{t-1}, a_{t-1}, o_t) \,\|\, p_\theta(s_t \mid s_{t-1}, a_{t-1})) - \log p_\xi(o_t \mid s_t) \qquad (5)$$
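With diagonal Gaussians, both terms of this loss have a closed form per timestep. The sketch below is our own minimal NumPy illustration of these formulas (the function names are ours, not the authors' code): the KL term between posterior and transition prior, the negative log-likelihood term, and a reparameterized state sample:

```python
import numpy as np

def kl_diag_gauss(mu_q, sigma_q, mu_p, sigma_p):
    """KL( N(mu_q, diag(sigma_q^2)) || N(mu_p, diag(sigma_p^2)) ), summed over dims."""
    return np.sum(np.log(sigma_p / sigma_q)
                  + (sigma_q**2 + (mu_q - mu_p)**2) / (2 * sigma_p**2) - 0.5)

def gauss_nll(o, mu, sigma):
    """Negative log-likelihood of observation o under N(mu, diag(sigma^2))."""
    return 0.5 * np.sum(np.log(2 * np.pi * sigma**2) + (o - mu)**2 / sigma**2)

def free_energy_loss(posterior, prior, o, likelihood):
    """Equation (5) for one timestep: KL(q_phi || p_theta) - log p_xi(o | s).
    Each argument is a (mu, sigma) pair as output by the corresponding network."""
    (mu_q, s_q), (mu_p, s_p), (mu_o, s_o) = posterior, prior, likelihood
    return kl_diag_gauss(mu_q, s_q, mu_p, s_p) + gauss_nll(o, mu_o, s_o)

def reparameterize(mu, sigma, rng):
    """s = mu + sigma * eps with eps ~ N(0, I); gradients flow through mu, sigma."""
    return mu + sigma * rng.standard_normal(mu.shape)
```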

Figure 1 shows an overview of the information flow between the transition model, approximate posterior and likelihood neural networks. To engage in active inference using these models, we need to estimate $G(\pi, \tau)$, which involves estimating $Q(s_\tau \mid \pi)$. As our model takes a state sample as input, and only estimates the state distribution of the next timestep, the only way to get an estimate of the state distribution at a future timestep is by Monte Carlo sampling. Concretely, to infer $Q(s_\tau \mid \pi)$, we sample for each policy $N$ trajectories following $\pi$ using $p_\theta$. This results in state samples $\hat{s}_\tau$, for which we can get observation estimates $\hat{o}_\tau$ via $p_\xi$. To be able to calculate the KL divergence and entropy, we fit a Gaussian distribution on the samples’ mean and variance. We then estimate the expected free energy for each policy from the current timestep $t$ onward as follows:

$$\hat{G}(\pi, t) = \sum_{\tau=t+1}^{t+K} \Big[ D_{\mathrm{KL}}(\hat{Q}(s_\tau \mid \pi) \,\|\, P(s_\tau)) + \rho H(\hat{Q}(o_\tau \mid \pi)) \Big] + \sum_{\pi'} \sigma(-\gamma \hat{G}(\pi', t+K)) \hat{G}(\pi', t+K) \qquad (6)$$

The first summation term looks $K$ timesteps ahead, calculating the KL divergence between expected and preferred states and the entropy on the expected observations. We also introduce an additional hyperparameter $\rho$, which allows for a trade-off between reaching preferred states on the one hand and resolving uncertainty on the other hand. The second summation term implies that after $K$ timesteps, we continue to select policies according to their expected free energy, hence recursively re-evaluating the expected free energy of each policy at timestep $t + K$. In practice, we unroll this recursion $D$ times, resulting in a search tree with an effective planning horizon of $K \cdot D$ timesteps.
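One inner term of this estimate can be sketched as follows; this is our own illustration (the names and the small variance floor are assumptions), and it presumes the $N$ trajectories have already been rolled out with $p_\theta$ and decoded with $p_\xi$:

```python
import numpy as np

def gauss_entropy(sigma):
    """Entropy of a diagonal Gaussian, summed over dimensions."""
    return 0.5 * np.sum(np.log(2 * np.pi * np.e * sigma**2))

def expected_free_energy(state_samples, obs_samples, mu_pref, sigma_pref, rho):
    """Monte Carlo estimate of the first summation term of Equation (6).

    state_samples, obs_samples: arrays of shape (N, K, dim) holding N sampled
    trajectories over K future timesteps. Per timestep, a diagonal Gaussian is
    fitted on the samples' mean and variance, as described in the text."""
    G = 0.0
    for tau in range(state_samples.shape[1]):
        mu_s = state_samples[:, tau].mean(axis=0)
        sd_s = state_samples[:, tau].std(axis=0) + 1e-6  # variance floor (assumption)
        sd_o = obs_samples[:, tau].std(axis=0) + 1e-6
        # KL between the fitted state distribution and the preferred states ...
        kl = np.sum(np.log(sigma_pref / sd_s)
                    + (sd_s**2 + (mu_s - mu_pref)**2) / (2 * sigma_pref**2) - 0.5)
        # ... plus rho times the entropy of the expected observations.
        G += kl + rho * gauss_entropy(sd_o)
    return G
```

A policy whose sampled trajectories stay close to the preferred states with little spread receives a low $\hat{G}$ and is therefore favored by the softmax of Equation (3).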

Figure 1: We simultaneously train a transition model $p_\theta$, an approximate posterior distribution model $q_\phi$, and a likelihood distribution model $p_\xi$, by minimizing the variational free energy.

4 Experiments

We experiment with the Mountain Car problem, where an agent needs to drive a car up a 1D mountain by throttling left or right, as shown in Figure 2. The top of the mountain can only be reached by first building up momentum before throttling right. The agent spawns at a random position, only observes a noisy position sensor, and has no access to its current velocity. At each timestep, the agent can choose between two policies: $\pi_{\mathrm{left}}$, throttling to the left, and $\pi_{\mathrm{right}}$, throttling to the right. We experiment with two flavors of this environment: one where the agent starts with a fixed zero velocity, and one where the agent starts with a random initial velocity.

Figure 2: The mountain car environment. The shown position of the car is the starting position in our evaluations.

For our generative model, we instantiate $p_\theta$, $p_\xi$ and $q_\phi$ as fully connected neural networks with 20 hidden neurons, and a state space with 4 dimensions. To bootstrap the model, we train on actions and observations of a random agent, minimizing the loss function in Equation (5) using stochastic gradient descent. Next, we instantiate an active inference agent that uses Equation (3) to plan ahead and select the best policy. As preferred state distribution $P(s_\tau)$, we manually drive the car up the mountain, evaluate the model’s posterior state at the end of that sequence, and center the preferred state distribution on this state. To limit the computations, the active inference agent plans ahead for 90 timesteps, allowing a switch of policy every 30 timesteps, effectively evaluating a search tree of depth 3, using $N = 100$ samples for each policy.

Figure 3 shows the sampled trajectories for all branches of the search tree, in the case the model is bootstrapped with only a single observation at the starting position. This is a challenging starting position, as the car needs enough momentum in order to reach the top of the hill from there. In the case of a random starting velocity, the generative model is not sure about the velocity after only the first observation. This is reflected by the entropy (i.e. the expected ambiguity) of the sampled trajectories. Following $\pi_{\mathrm{right}}$ from the start will now only sometimes reach the preferred state, depending on the initial velocity. In this case the active inference agent’s behavior is determined by the parameter $\rho$. For small $\rho$, the agent acts greedily, preferring the policy that has a chance of reaching the top early, cf. Figure 3(e). For larger $\rho$, the entropy term plays a bigger role, and the agent selects the policy that is less uncertain about the outcomes, rendering a more cautious agent that prefers a more precise and careful policy, moving to the left first, see Figure 3(a). We found that an intermediate value of $\rho$ yields a good trade-off for the mountain car agent.

In the environment with no initial velocity, the transition model learnt by the agent is quite accurate and the entropy terms are an order of magnitude lower, as shown in Figure 4. However, in terms of preferred states, the lowest KL value is still achieved by the all-right policy (e). This is due to the fact that the KL term is evaluated at each timestep: moving to the left, away from the preferred state, early in the sequence outweighs the benefit of reaching the preferred state in the end. Choosing a larger $\rho$ again forces the agent to put more weight on resolving uncertainty, preferring the policy of Figure 4(a).

(a) left-right-right
(b) left-right-left
(c) left-left-right
(d) left-left-left
(e) right-right-right
(f) right-right-left
(g) right-left-right
(h) right-left-left
Figure 3: Depending on the random initial velocity, the car reaches the hill top fast using the all-right policy only in part of the cases (e); starting with the left policy first, however, also reaches the hill top, and with lower entropy on the trajectories (a). A greedy agent (small $\rho$) will pick (e), whereas a cautious agent (larger $\rho$) will favor (a). For each policy we report the values of KL, H and G.
(a) left-right-right
(b) left-right-left
(c) left-left-right
(d) left-left-left
(e) right-right-right
(f) right-right-left
(g) right-left-right
(h) right-left-left
Figure 4: When the environment starts the car with a fixed zero velocity, the model is on average much more certain about the predicted trajectories, resulting in lower entropy terms. However, policy (e) still achieves the lowest KL value, as this term is evaluated at each timestep, and moving away from the preferred state yields a high KL penalty. When choosing a larger $\rho$, the agent again favors (a). For each policy we report the values of KL, H and G.

5 Discussion

Using deep neural networks to instantiate the generative model and to approximate both prior and posterior distributions has the advantage that the generative model is independent of any predefined state representation: the model can learn the state representation best suited to the observed data. Employing deep neural networks also opens up the possibility of using high-dimensional sensor inputs, e.g. images. The downside of our model, however, is the required sampling step: a distribution is only calculated for the next timestep, and distributions for timesteps further in the future can only be approximated by sampling.

Another point of discussion is the definition of the preferred state distribution. In our case we opted for a Gaussian state distribution, centered around the state visited by an expert demonstration, similar to our earlier work [2]. However, the standard deviation of this distribution determines the absolute value of the KL term in Equation (6). A small standard deviation will blow up the KL term, completely drowning out the entropy term. A large standard deviation will assign probability mass to neighboring states, possibly introducing local optima that don’t reach the actual goal state. To mitigate this, we introduced the additional parameter $\rho$ that balances risk and ambiguity.
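A small numeric example (our own, with arbitrary values) illustrates this sensitivity: holding the predicted state distribution fixed and shrinking the preferred prior's standard deviation makes the one-dimensional KL divergence explode:

```python
import numpy as np

def kl_1d(mu_q, s_q, mu_p, s_p):
    """KL( N(mu_q, s_q^2) || N(mu_p, s_p^2) ) for scalar Gaussians."""
    return np.log(s_p / s_q) + (s_q**2 + (mu_q - mu_p)**2) / (2 * s_p**2) - 0.5

# Predicted state slightly off the preferred mean:
mu_q, s_q, mu_p = 0.2, 0.1, 0.0
for s_p in (1.0, 0.1, 0.01):
    # KL grows rapidly as the preferred prior narrows.
    print(s_p, kl_1d(mu_q, s_q, mu_p, s_p))
```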

Finally, planning by generating and evaluating trajectories of the complete search tree is computationally expensive. In this paper, we intentionally pursued this approach in order to directly investigate the effect of the KL term versus the entropy term. To mitigate the computational load, one might amortize the resulting policy by training a policy neural network on the states visited and the actions planned by the agent, similar to [1]. In other approaches, the policy is learned directly through end-to-end training. For example, K. Ueltzhöffer uses evolution strategies to learn both a model and a policy, which requires part of the state space to be fixed so that it contains information about the preferred state [Ueltzhöffer 2018]. B. Millidge, on the other hand, amortizes the expected free energy as a function of the state, similar to value function estimation in reinforcement learning [9]. Again, however, the perception part is omitted and the state space is fixed upfront.

6 Conclusion

In this paper, we have shown how generative models parameterized by neural networks can be trained by minimizing the variational free energy, and how these models can be exploited by an active inference agent to select the optimal policy. We will further extend our models to work in more complex environments, in particular towards richer sensory inputs such as camera, lidar or radar data.

References

  • [1] O. Catal, J. Nauta, T. Verbelen, P. Simoens, and B. Dhoedt (2019) Bayesian policy selection using active inference. In Workshop on “Structure & Priors in Reinforcement Learning” at ICLR 2019 : proceedings, pp. 9 (eng). Cited by: §5.
  • [2] C. De Boom, T. Verbelen, and B. Dhoedt (2018) Deep active inference for state estimation by learning from demonstration. In Conference of Complex Systems (CCS), Satellite Symposium on Complexity from Cells to Consciousness: Free Energy, Integrated Information, and Epsilon Machines, Cited by: §5.
  • [3] K. Friston, T. FitzGerald, F. Rigoli, P. Schwartenbeck, and G. Pezzulo (2017) Active inference: A Process Theory. Neural Computation 29, pp. 1–49. External Links: Document, ISBN 0899-7667, ISSN 1530888X Cited by: §1.
  • [4] K. Friston, J. Kilner, and L. Harrison (2006) A free energy principle for the brain. Journal of Physiology Paris 100 (1-3), pp. 70–87. External Links: Document, arXiv:1401.4122v2, ISBN 0928-4257, ISSN 09284257 Cited by: §1.
  • [5] K. Friston, S. Samothrakis, and R. Montague (2012) Active inference and agency: Optimal control without cost functions. Biological Cybernetics 106 (8-9), pp. 523–541. External Links: Document, ISBN 0340-1200, ISSN 03401200 Cited by: §1.
  • [6] K. Friston, P. Schwartenbeck, T. FitzGerald, M. Moutoussis, T. Behrens, and R. J. Dolan (2013) The anatomy of choice: active inference and agency. Frontiers in Human Neuroscience 7 (September), pp. 1–18. External Links: Document, ISSN 1662-5161, Link Cited by: §1.
  • [7] K. Friston (2010) The free-energy principle: A unified brain theory?. Nature Reviews Neuroscience 11 (2), pp. 127–138. External Links: Document, ISSN 1471-003X, Link Cited by: §1, §2.
  • [8] D. P. Kingma and M. Welling (2013) Auto-encoding variational bayes. CoRR abs/1312.6114. External Links: Link, 1312.6114 Cited by: §2.
  • [9] B. Millidge (2019) Deep active inference as variational policy gradients. CoRR abs/1907.03876. External Links: Link, 1907.03876 Cited by: §5.
  • [10] D. J. Rezende, S. Mohamed, and D. Wierstra (2014) Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning. External Links: Link Cited by: §2.
  • [11] P. Schwartenbeck, J. Passecker, T. Hauser, T. H. B. FitzGerald, M. Kronbichler, and K. J. Friston (2018) Computational mechanisms of curiosity and goal-directed exploration. bioRxiv, pp. 411272. External Links: Document, Link Cited by: §2.