Deep Active Inference for Autonomous Robot Navigation

03/06/2020
by Ozan Çatal, et al., Ghent University

Active inference is a theory that underpins the way biological agents perceive and act in the real world. At its core, active inference is based on the principle that the brain is an approximate Bayesian inference engine, building an internal generative model to drive agents towards minimal surprise. Although this theory has shown interesting results with grounding in cognitive neuroscience, its application remains limited to simulations with small, predefined sensor and state spaces. In this paper, we leverage recent advances in deep learning to build more complex generative models that can work without a predefined state space. State representations are learned end-to-end from real-world, high-dimensional sensory data such as camera frames. We also show that these generative models can be used to engage in active inference. To the best of our knowledge, this is the first application of deep active inference to a real-world robot navigation task.



1 Introduction

Active inference and the free energy principle underpin the way our brain – and natural agents in general – work. The core idea is that the brain entertains a (generative) model of the world, which allows it to learn cause and effect and to predict future sensory observations. It does so by constantly minimising its prediction error or “surprise”, either by updating the generative model, or by inferring actions that will lead to less surprising states. As such, the brain acts as an approximate Bayesian inference engine, constantly striving for homeostasis.

There is ample evidence (Friston, 2012; Friston et al., 2013a, 2014) that different regions of the brain actively engage in variational free energy minimisation. Theoretical grounds indicate that even the simplest of life forms act in a free energy minimising way (Friston, 2013).

Although there is a large body of work on active inference for artificial agents (Friston et al., 2006, 2009, 2017, 2013b; Cullen et al., 2018), experiments are typically done in a simulated environment with predefined, simple state and sensor spaces. Recently, research has been done on using deep neural networks as an implementation of the active inference generative model, resulting in the umbrella term “deep active inference”. However, so far all of these approaches have only been tested on fairly simple, simulated environments (Ueltzhöffer, 2018; Millidge, 2019; Çatal et al., 2019). In this paper, we apply deep active inference to a robot navigation task with high-dimensional camera observations, and deploy it on a mobile robot platform. To the best of our knowledge, this is the first time that active inference is applied to a real-world robot navigation task.

In the remainder of this paper we will first introduce the active inference theory in Section 2. Next, we show how we implement active inference using deep neural networks in Section 3, and discuss initial experiments in Section 4.

2 Active Inference

Active inference is a process theory of the brain that utilises the concept of free energy (Friston, 2013) to describe the behaviour of various agents. It stipulates that all agents act in order to minimise their own uncertainty about the world. This uncertainty is expressed as Bayesian surprise, or alternatively the variational free energy, which in this context is characterised by the difference between what an agent imagines about the world and what it perceives of the world (Friston, 2010). More concretely, the agent builds a generative model $P(\tilde{o}, \tilde{s}, \tilde{a})$, linking together the agent's internal belief states $\tilde{s}$ with the perceived actions $\tilde{a}$ and observations $\tilde{o}$ in the form of a joint distribution. We use a tilde to denote a sequence of variables through time. This generative model can be factorised as in Equation 1:

$$P(\tilde{o}, \tilde{s}, \tilde{a}) = P(s_0) \prod_{t=1}^{T} P(o_t \mid s_t)\, P(s_t \mid s_{t-1}, a_{t-1})\, P(a_{t-1}) \qquad (1)$$

The free energy or Bayesian surprise is then defined as:

$$\begin{aligned} F &= \mathbb{E}_{Q(\tilde{s})}\big[\log Q(\tilde{s}) - \log P(\tilde{o}, \tilde{s}, \tilde{a})\big] \\ &= D_{KL}\big(Q(\tilde{s}) \,\|\, P(\tilde{s}, \tilde{a} \mid \tilde{o})\big) - \log P(\tilde{o}) \\ &= D_{KL}\big(Q(\tilde{s}) \,\|\, P(\tilde{s}, \tilde{a})\big) - \mathbb{E}_{Q(\tilde{s})}\big[\log P(\tilde{o} \mid \tilde{s})\big] \end{aligned} \qquad (2)$$

Here, $Q(\tilde{s})$ is an approximate posterior distribution over the belief states. The second equality shows that free energy is equivalent to the (negative) evidence lower bound (ELBO) (Kingma and Welling, 2013; Rezende et al., 2014). The final equality frames the problem of free energy minimisation as explaining the world from the agent's beliefs whilst minimising the complexity of accurate explanations (Friston et al., 2016).

Crucially, in active inference, agents act according to the belief that they will keep minimising surprise in the future. This means agents will infer policies that yield minimal expected free energy in the future, with a policy $\pi$ being a sequence of future actions starting at the current time step $t$ with a time horizon $H$. This principle is formalised in Equation 3, with $\sigma$ the softmax function with precision parameter $\gamma$:

$$P(\pi) = \sigma\big(-\gamma\, G(\pi)\big), \qquad G(\pi) = \sum_{\tau=t}^{t+H} G(\pi, \tau) \qquad (3)$$
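As a toy numeric illustration of Equation 3 (the numbers are arbitrary and only meant to show the effect of the softmax): with three candidate policies scoring expected free energies $G = (10, 12, 20)$ and precision $\gamma = 1$,

$$P(\pi) = \sigma(-G) = \frac{e^{-G(\pi)}}{\sum_{\pi'} e^{-G(\pi')}} \approx (0.88,\ 0.12,\ 0.00),$$

so the agent almost always selects the first policy; a larger $\gamma$ makes this choice more deterministic, a smaller $\gamma$ more exploratory.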

Expanding the expected free energy functional we get Equation 4. Using the factorisation of the generative model from Equation 1, we approximate $Q(o_\tau, s_\tau \mid \pi) \approx P(o_\tau \mid s_\tau)\, Q(s_\tau \mid \pi)$:

$$\begin{aligned} G(\pi, \tau) &= \mathbb{E}_{Q(o_\tau, s_\tau \mid \pi)}\big[\log Q(s_\tau \mid \pi) - \log P(o_\tau, s_\tau \mid \pi)\big] \\ &= D_{KL}\big(Q(s_\tau \mid \pi) \,\|\, P(s_\tau \mid \pi)\big) + \mathbb{E}_{Q(s_\tau \mid \pi)}\big[H\big(P(o_\tau \mid s_\tau)\big)\big] \\ &\approx D_{KL}\big(Q(s_\tau \mid \pi) \,\|\, P(s_\tau)\big) + \mathbb{E}_{Q(s_\tau \mid \pi)}\big[H\big(P(o_\tau \mid s_\tau)\big)\big] \end{aligned} \qquad (4)$$

Note that, in the final step, we substitute $P(s_\tau \mid \pi)$ by $P(s_\tau)$, a global prior distribution on the so-called “preferred” states of the agent. This reflects the fact that the agent has prior expectations about the states it will reach. Hence, minimising expected free energy entails both realising preferences and minimising the ambiguity of the visited states.
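To make Equation 4 concrete, the following sketch (our own illustration, not the authors' released code) evaluates a single term $G(\pi, \tau)$ when all distributions are diagonal Gaussians, as they will be in the deep parameterisation of Section 3. The function and variable names, as well as the tensor shapes, are assumptions for illustration only.

```python
import torch
import torch.distributions as D

def expected_free_energy_step(q_state: D.Normal, preferred: D.Normal,
                              obs_likelihood: D.Normal) -> torch.Tensor:
    """One term G(pi, tau) of Eq. 4 for diagonal Gaussian beliefs.

    q_state:        Q(s_tau | pi), the imagined state belief under the policy
    preferred:      P(s_tau), the global prior over "preferred" states
    obs_likelihood: P(o_tau | s_tau) predicted from a sampled state
                    (a single-sample estimate of the expectation in Eq. 4)
    """
    risk = D.kl_divergence(q_state, preferred).sum()   # "realising preferences"
    ambiguity = obs_likelihood.entropy().sum()         # "minimising ambiguity"
    return risk + ambiguity

# Example with a 128-dimensional state belief and an 8x8 grayscale observation
q = D.Normal(torch.zeros(128), torch.ones(128))
p_pref = D.Normal(torch.full((128,), 0.5), torch.ones(128))
p_obs = D.Normal(torch.zeros(8, 8), 0.1 * torch.ones(8, 8))
print(expected_free_energy_step(q, p_pref, p_obs))
```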

3 Deep Active Inference

Figure 1: The various components of the agent rolled out through time. We minimise the variational free energy by minimising both the negative log likelihood of observations and the KL divergence between the state transition model and the observation model. The inferred hidden state is characterised as a multivariate Gaussian distribution.

In current treatments of active inference, the state spaces are typically fixed upfront, either completely (Friston et al., 2009; Millidge, 2019) or partially (Ueltzhöffer, 2018). However, this does not scale well to more complex tasks, as it is often difficult to design meaningful state spaces for such problems. Therefore we let the agent learn by itself what the exact parameterisation of its belief space should be. We enable this by using deep neural networks to generate the various probability distributions our agent needs.

We approximate the variational posterior distribution for a single timestep with a network $q_\phi(s_t \mid s_{t-1}, a_{t-1}, o_t)$. Similarly, we approximate the likelihood model with the network $p_\xi(o_t \mid s_t)$ and the prior with the network $p_\theta(s_t \mid s_{t-1}, a_{t-1})$. Each of these networks outputs a multivariate normal distribution with a diagonal covariance matrix using the reparameterisation trick (Kingma and Welling, 2013). These neural networks cooperate in a way similar to a VAE, where the fixed standard normal prior is replaced with the learnable prior $p_\theta$, the decoder by $p_\xi$, and the encoder by $q_\phi$, as visualised in Figure 1.
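As a concrete (and simplified) illustration of this parameterisation, the sketch below wires up the three distributions as small PyTorch modules: a convolutional encoder for the posterior $q_\phi$, an LSTM transition model for the prior $p_\theta$, and a deconvolutional decoder for the likelihood $p_\xi$. The layer sizes, input resolution and module names are placeholders and do not reproduce the exact architecture of the paper (summarised in Appendix A); each head returns a diagonal Gaussian so that the reparameterisation trick applies.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.distributions as D

STATE_DIM, ACTION_DIM = 128, 3   # 128-dim belief state, (x, y, angular) velocities

class GaussianHead(nn.Module):
    """Outputs a diagonal multivariate normal; softplus keeps the std positive."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.fc = nn.Linear(in_dim, 2 * out_dim)
    def forward(self, x):
        mean, raw_std = self.fc(x).chunk(2, dim=-1)
        return D.Normal(mean, F.softplus(raw_std) + 1e-4)

class Posterior(nn.Module):
    """q_phi(s_t | s_{t-1}, a_{t-1}, o_t): conv encoder + concat of state/action."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(                      # assumes 3x64x64 camera frames
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.LeakyReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(),
            nn.Flatten())
        self.head = GaussianHead(64 * 16 * 16 + STATE_DIM + ACTION_DIM, STATE_DIM)
    def forward(self, prev_state, prev_action, obs):
        feat = self.conv(obs)
        return self.head(torch.cat([feat, prev_state, prev_action], dim=-1))

class Prior(nn.Module):
    """p_theta(s_t | s_{t-1}, a_{t-1}): recurrent (LSTM) state transition model."""
    def __init__(self, hidden=400):
        super().__init__()
        self.rnn = nn.LSTMCell(STATE_DIM + ACTION_DIM, hidden)
        self.head = GaussianHead(hidden, STATE_DIM)
    def forward(self, prev_state, prev_action, hc=None):
        hc = self.rnn(torch.cat([prev_state, prev_action], dim=-1), hc)
        return self.head(hc[0]), hc

class Likelihood(nn.Module):
    """p_xi(o_t | s_t): deconvolutional decoder back to image space."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64 * 16 * 16), nn.LeakyReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.Upsample(scale_factor=2), nn.Conv2d(64, 32, 3, padding=1), nn.LeakyReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(32, 3, 3, padding=1))
    def forward(self, state):
        return D.Normal(self.net(state), 1.0)   # fixed-variance image likelihood
```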

These networks are trained end-to-end using the free energy formula from the previous section as an objective:

$$\forall t:\ \mathcal{L}(\phi, \theta, \xi) = - \log p_\xi(o_t \mid s_t) + D_{KL}\big(q_\phi(s_t \mid s_{t-1}, a_{t-1}, o_t) \,\|\, p_\theta(s_t \mid s_{t-1}, a_{t-1})\big) \qquad (5)$$

As in a conventional VAE (Kingma and Welling, 2013), the negative log likelihood (NLL) term in the objective penalises reconstruction error, forcing relevant information about the belief state to be captured in the posterior output, while the KL term pulls the prior output towards the posterior output, forcing the prior and posterior to agree on the content of the belief state in a way that still allows the likelihood model to reconstruct the current observation.
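A minimal training-step sketch of that objective, assuming the hypothetical modules from the previous snippet; both terms of Equation 5 are available in closed form for diagonal Gaussians through torch.distributions:

```python
import torch.distributions as D

def free_energy_step(posterior, prior, likelihood, prev_state, prev_action, obs, hc):
    """One timestep of Eq. 5: reconstruction NLL plus KL(posterior || prior)."""
    q = posterior(prev_state, prev_action, obs)     # q_phi(s_t | s_{t-1}, a_{t-1}, o_t)
    p, hc = prior(prev_state, prev_action, hc)      # p_theta(s_t | s_{t-1}, a_{t-1})
    s = q.rsample()                                 # reparameterised sample of s_t
    nll = -likelihood(s).log_prob(obs).sum()        # -log p_xi(o_t | s_t)
    kl = D.kl_divergence(q, p).sum()                # complexity term
    return nll + kl, s, hc

# During training this is summed over a length-10 sequence and averaged over a
# minibatch, then minimised with Adam (hyperparameters in Appendix B).
```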

We can now use the learned models to engage in active inference, and infer which action the agent has to take next. This is done by generating imagined trajectories for different policies using $p_\theta$ and $p_\xi$, calculating the expected free energy $G$, and selecting the action of the policy that yields the lowest $G$. The policies to evaluate can be predefined, or generated through random shooting, using the cross-entropy method (Boer et al., 2005), or by building a search tree.
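The planning loop can then look roughly as follows; this is a schematic sketch built on the hypothetical modules above, with a fixed set of base policies (each a callable mapping a timestep to an action), not the exact procedure used by the authors:

```python
import torch
import torch.distributions as D

def plan(prior, likelihood, policies, state, hc, preferred, horizon=10, gamma=1.0):
    """Imagine each candidate policy with p_theta/p_xi, score it with G, return an action."""
    G = []
    for policy in policies:                    # e.g. [go_straight, turn_left, turn_right]
        s, h, g = state, hc, 0.0
        for t in range(horizon):
            p_s, h = prior(s, policy(t), h)    # imagined transition
            s = p_s.rsample()
            g = g + D.kl_divergence(p_s, preferred).sum()   # preference / risk term
            g = g + likelihood(s).entropy().sum()           # ambiguity term
        G.append(g)
    probs = torch.softmax(-gamma * torch.stack(G), dim=0)   # Eq. 3
    best = int(torch.argmax(probs))
    return policies[best](0), best
```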

4 Experiments

We validate our deep active inference approach on a real-world robot navigation task. First, we collect a dataset consisting of two hours' worth of real-world action-observation sequences by driving a Kuka Youbot base platform up and down the aisles of a warehouse lab. Camera observations are recorded with a front-mounted Intel RealSense RGB-D camera, without taking the depth information into account. The x, y and angular velocities are recorded as actions at a recording frequency of 10 Hz. The models are trained on a subsampled version of the data, resulting in a training set with data points every 200 ms.
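For illustration, subsampling the 10 Hz recordings to one datapoint every 200 ms amounts to keeping every second sample; a sketch of how such logs might be sliced into training sequences (array names, shapes and the non-overlapping slicing are assumptions, not the authors' pipeline):

```python
import numpy as np

def make_sequences(observations, actions, subsample=2, seq_len=10):
    """Subsample 10 Hz logs to one sample every 200 ms and cut them into sequences.

    observations: array of camera frames, shape [T, H, W, 3]
    actions:      array of (x, y, angular) velocities, shape [T, 3]
    """
    obs, act = observations[::subsample], actions[::subsample]
    return [(obs[i:i + seq_len], act[i:i + seq_len])
            for i in range(0, len(obs) - seq_len + 1, seq_len)]
```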

Next, we instantiate the neural networks $q_\phi$ and $p_\xi$ as a convolutional encoder and decoder network, and $p_\theta$ using an LSTM. These are trained with the Adam optimizer using the objective function from Equation 5 for 1M iterations. We use a minibatch size of 128 and a sequence length of 10 timesteps. A detailed overview of all hyperparameters is given in the appendix.

We utilise the same approach as in Çatal et al. (2020) for our imaginary trajectories and planning. The agent has access to three base policies to pick from: drive straight, turn left and turn right. Actions from these policies are propagated through the learned models over different time horizons. For each resulting imaginary trajectory, the expected free energy $G$ is calculated. Finally, the trajectory with the lowest $G$ is picked, the first action of the chosen policy is executed, and the imaginary planning restarts. The robot's preferences are given by demonstration, using the state distribution of the robot while driving in the middle of the aisle. This should encourage the robot to navigate in the aisles.
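One simple way to obtain such a preference prior from demonstration is sketched below, under the assumption that demonstration frames are encoded with the posterior network and a single diagonal Gaussian is fitted to the resulting belief states; this is our reading of "given by demonstration", not a verbatim description of the authors' procedure:

```python
import torch
import torch.distributions as D

@torch.no_grad()
def fit_preferred_state(posterior, demo_states, demo_actions, demo_obs):
    """Fit a diagonal Gaussian P(s) to posterior states encoded from demonstration data."""
    means = []
    for s_prev, a_prev, o in zip(demo_states, demo_actions, demo_obs):
        means.append(posterior(s_prev, a_prev, o).mean)   # encode each demo frame
    means = torch.cat(means, dim=0)
    return D.Normal(means.mean(dim=0), means.std(dim=0) + 1e-4)
```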

At each trial the robot is placed at a random starting position with a random orientation, and tasked to navigate to the preferred position. Figure 2 presents a single experiment as an illustrative example. Figure 2(a) shows the reconstructed preferred observation from the given preferred state, while Figure 2(b) shows the trial's start state from an actual observation. Figure 2(c) shows the imagined results of following the policy “always turn right”, “always go straight” or “always turn left”. Figure 2(d) is the result of utilising the planning method explained above. Additional examples can be found in the supplementary material.

The robot indeed turns and keeps driving in the middle of the aisle until it reaches the end, and then turns around (a movie demonstrating the results is available at https://tinyurl.com/smvyk53). When one perturbs the robot by pushing it, it recovers and continues to the middle of the aisle.

(a) Preferred state.
(b) Start state.
(c) Imaginary future trajectories for different policies, i.e. going straight ahead (top), turning right (middle), turning left (bottom).
(d) Actually followed trajectory.
Figure 2: Experimental results: (a) shows the target observation in imagined (reconstructed) space, (b) the start observation of the trial, (c) different imaginary planning results, and (d) the actually followed trajectory.

5 Conclusion

In this paper we present how a generative model for active inference can be implemented using deep neural networks. We show that we are able to successfully execute a simple navigation task on a real-world robot with our approach. As future work we want to allow the robot to continuously learn from past autonomous behaviour, effectively “filling the gaps” in its generative model. How to define the “preferred state” distributions and which policies to evaluate also remain open research challenges for more complex tasks and environments.

References

  • P. Boer, D. Kroese, S. Mannor, and R. Rubinstein (2005) A tutorial on the cross-entropy method. Annals of Operations Research 134, pp. 19–67. External Links: Document Cited by: §3.
  • O. Çatal, J. Nauta, T. Verbelen, P. Simoens, and B. Dhoedt (2019) Bayesian policy selection using active inference. In Workshop on “Structure & Priors in Reinforcement Learning” at ICLR 2019: proceedings, pp. 9. Cited by: §1.
  • O. Çatal, T. Verbelen, J. Nauta, C. D. Boom, and B. Dhoedt (2020) Learning perception and planning with deep active inference. In IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP, Barcelona, Spain, pp. In Press. External Links: 2001.11841 Cited by: Table 2, §4.
  • M. Cullen, B. Davey, K. J. Friston, and R. J. Moran (2018) Active inference in openai gym: a paradigm for computational investigations into psychiatric illness. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging 3 (9), pp. 809 – 818. Note: Computational Methods and Modeling in Psychiatry External Links: ISSN 2451-9022, Document, Link Cited by: §1.
  • K. Friston, T. FitzGerald, F. Rigoli, P. Schwartenbeck, J. O’Doherty, and G. Pezzulo (2016) Active inference and learning. Neuroscience & Biobehavioral Reviews 68, pp. 862 – 879. External Links: ISSN 0149-7634, Document, Link Cited by: §2.
  • K. Friston, T. FitzGerald, F. Rigoli, P. Schwartenbeck, and G. Pezzulo (2017) Active inference: A Process Theory. Neural Computation 29, pp. 1–49. External Links: Document, ISBN 0899-7667, ISSN 1530888X Cited by: §1.
  • K. J. Friston, P. Schwartenbeck, T. F. Fitzgerald, M. Moutoussis, T. W. Behrens, and R. J. Dolan (2014) The anatomy of choice: dopamine and decision-making. In Philosophical Transactions of the Royal Society B: Biological Sciences, Cited by: §1.
  • K. J. Friston, J. Daunizeau, and S. J. Kiebel (2009) Reinforcement learning or active inference?. PLOS ONE 4 (7), pp. 1–13. External Links: Link, Document Cited by: §1, §3.
  • K. J. Friston (2013) Life as we know it. Journal of the Royal Society Interface. Cited by: §1, §2.
  • K. Friston, J. Kilner, and L. Harrison (2006) A free energy principle for the brain. Journal of Physiology Paris 100 (1-3), pp. 70–87. External Links: Document, arXiv:1401.4122v2, ISBN 0928-4257, ISSN 09284257 Cited by: §1.
  • K. Friston, P. Schwartenbeck, T. Fitzgerald, M. Moutoussis, T. Behrens, and R. Dolan (2013a) The anatomy of choice: active inference and agency. Frontiers in Human Neuroscience 7, pp. 598. External Links: Link, Document, ISSN 1662-5161 Cited by: §1.
  • K. Friston, P. Schwartenbeck, T. FitzGerald, M. Moutoussis, T. Behrens, and R. J. Dolan (2013b) The anatomy of choice: active inference and agency. Frontiers in Human Neuroscience 7 (September), pp. 1–18. External Links: Document, ISSN 1662-5161, Link Cited by: §1.
  • K. Friston (2010) The free-energy principle: A unified brain theory?. Nature Reviews Neuroscience 11 (2), pp. 127–138. External Links: Document, ISSN 1471-003X, Link Cited by: §2.
  • K. Friston (2012) A free energy principle for biological systems. Entropy 14 (11), pp. 2100–2121. External Links: Link, ISSN 1099-4300, Document Cited by: §1.
  • D. P. Kingma and M. Welling (2013) Auto-encoding variational bayes. CoRR abs/1312.6114. External Links: Link, 1312.6114 Cited by: §2, §3, §3.
  • B. Millidge (2019) Deep active inference as variational policy gradients. CoRR abs/1907.03876. External Links: Link, 1907.03876 Cited by: §1, §3.
  • D. J. Rezende, S. Mohamed, and D. Wierstra (2014) Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning, E. P. Xing and T. Jebara (Eds.), Proceedings of Machine Learning Research, Vol. 32, Beijing, China, pp. 1278–1286. External Links: Link Cited by: §2.
  • K. Ueltzhöffer (2018) Deep active inference. Biological Cybernetics 112 (6), pp. 547–573. External Links: ISSN 1432-0770, Document, Link Cited by: §1, §3.

Appendix A Neural architecture

Layer Neurons/Filters activation function

Posterior

Convolutional 8 Leaky ReLU

Convolutional 16 Leaky ReLU
Convolutional 32 Leaky ReLU
Convolutional 64 Leaky ReLU
Convolutional 128 Leaky ReLU
Concat N.A. N.A.
Linear 2 x 128 states Softplus

Likelihood

Linear 128 x 8 x 8 Leaky ReLU
Convolutional 128 Leaky ReLU
Convolutional 64 Leaky ReLU
Convolutional 32 Leaky ReLU
Convolutional 16 Leaky ReLU
Convolutional 8 Leaky ReLU

Prior

LSTM cell 400 Leaky ReLU
Linear 2 x 128 states Softplus
Table 1: Neural network architectures. All convolutional layers have a 3x3 kernel. The convolutional layers in the likelihood model have a stride and padding of 1 to ensure that they preserve the input shape. Upsampling is done by nearest-neighbour interpolation. The concat step concatenates the processed image pipeline with the vector inputs $a_{t-1}$ and $s_{t-1}$.

Appendix B Hyperparameters

Parameter Value

Learning

learning rate 0.0001
batch size 128
train iterations 1M
sequence length 10

Planning

100
(Çatal et al., 2020) 1
(Çatal et al., 2020) 10, 25, 55
(Çatal et al., 2020) 5
 (Çatal et al., 2020) 0.001
Table 2: Overview of the model hyperparameters.

Appendix C Detailed Planning example

A movie demonstrating the results is available at https://tinyurl.com/smvyk53.

Figure 3: Trial preferred state
Figure 4: Short term planning
Figure 5: Middle long term planning
Figure 6: Long term planning