Active inference provides a framework, derived from first principles, for solving and understanding the behavior of autonomous agents in situations requiring decision-making under uncertainty (Friston, FitzGerald et al., 2017; Friston, Rosch et al., 2017). It uses the free energy principle to describe the properties of random dynamical systems (such as an agent in an environment): by minimizing the average of this quantity over time (through gradient descent), optimal behavior can be obtained for a given environment, with respect to prior preferences (Friston, Schwartenbeck et al., 2014; Friston, 2019). More concretely, optimal behavior is determined by evaluating (sensory) evidence under a generative model of (observed) outcomes (Friston, FitzGerald et al., 2016). The generative model – of the environment – contains beliefs about future (hidden) states and the sequences of actions (policies) that an agent might choose; the most likely policies lead to the preferred outcomes. This formulation has two complementary objectives: to infer optimal behavior and to optimize the generative model based on the agent’s ability to infer the observed data. Both can be achieved, simultaneously, by minimizing the free energy functional (a function of a function). Additionally, this free energy formulation gives rise to realistic behaviors, such as natural exploration-exploitation trade-offs, and, by being fully Bayesian, is amenable to on-line learning settings where the environment is non-stationary (Friston, Rigoli et al., 2015; Parr & Friston, 2017).
Practically, we need both to solve for the dynamics (optimizing the free energy) and to determine optimal behavior (i.e., the form of the attracting sets). If the joint probability over the hidden states and observed outcomes is associated with a generative model, then the negative log of the generative model evidence (or marginal likelihood) becomes surprise – a.k.a. surprisal in physics and information theory (Tribus, 1961). From this, we can use the free energy functional of the generative model, under some beliefs (encoded by internal states), to reproduce the dynamic flows that would give rise to the attracting set – which is specified in terms of the priors of the generative model (i.e., prior preferences, or beliefs about the states an agent expects to find itself in).
Congruent formulations of the free energy functional – variational and expected – when coupled together allow us to account for many aspects of action and perception (both for biological and artificial agents). Active inference is a formal way to combine the two formulations; namely, self-organization or self-assembly in physics (Crauel & Flandoli, 1994; Seifert, 2012; Friston, 2019) on one hand and planning as inference (Attias, 2003; Botvinick & Toussaint, 2012; Baker & Tenenbaum, 2014) on the other. This rests on defining random dynamical systems that have attracting sets (with low entropy) that can be distinguished from their environment, in virtue of possessing a Markov blanket (Friston, 2019). Here, we assume a particular form for the generative model and determine whether the expected free energy can explain the ensuing behavior – by casting non-equilibrium steady-state dynamics as approximate Bayesian inference (Friston, FitzGerald et al., 2017; Parr & Friston, 2017; Friston, Rosch et al., 2017). This notion underpins active inference and allows us to understand how agents navigate non-stationary environments; making inferences about the environment and about how they should act.
The main contributions of active inference, in contrast to analogous frameworks, follow from its commitment to a pure belief-based scheme. These contributions include: a principled account of epistemic exploration and intrinsic motivation (Parr & Friston, 2017; Schwartenbeck, Passecker et al., 2019), a treatment of uncertainty as a natural part of belief updating (Parr & Friston, 2017), and the fact that a reward function does not have to be explicitly specified. This review paper aims to unpack these properties of active inference – under the discrete state-space and time formulation – thereby providing a brief overview of the theory.
The review comprises three sections. The first section considers (via definitions) the discrete state-space (both hidden states and observations) and time formulation of active inference and provides commentary on its implementation. This is followed by a T-maze simulation – previously introduced in Friston, Rigoli et al. (2015) and Friston, FitzGerald et al. (2017) – to provide a concrete example of the key components of the generative model and the update rules in play. The simulation offers an explicit account of how an active inference agent evinces a natural trade-off between exploration and exploitation in non-stationary environments. We conclude with a brief discussion of how this formalism could be applied in engineering (e.g., robotic arm movement, playing Atari games, etc.) and of the specification of the underlying probability distribution or attracting set (through the generative model).
2 Active Inference
Active inference is predicated on understanding how (biological or artificial) agents navigate dynamic, non-stationary environments (Friston, FitzGerald et al., 2017; Friston, Rosch et al., 2017). It postulates that, in any given state, an agent maintains homeostasis by residing in (attractor) states that minimize entropy (i.e., avoid surprising observations) (Friston, Mattout et al., 2011).
Definition 1 (Surprise).
We define entropy – as being related to surprise – from information theory:

H(O) = E_P(o)[−ln P(o)]

where −ln P(o) is the surprise associated with an outcome o, and O is the set of possible outcomes.
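As a concrete aside, surprise and entropy for a categorical outcome distribution can be computed directly. The following Python sketch uses a hypothetical three-outcome distribution; the numbers and function names are illustrative, not from the text:

```python
import math

def surprise(p_o: float) -> float:
    """Surprise (surprisal) of a single outcome: -ln P(o), in nats."""
    return -math.log(p_o)

def entropy(p):
    """Entropy is the expected surprise over the set of possible outcomes."""
    return sum(p_o * surprise(p_o) for p_o in p if p_o > 0)

# Hypothetical distribution over three possible outcomes
p = [0.5, 0.25, 0.25]
print(surprise(0.5))  # surprise of the most likely outcome, ln 2 nats
print(entropy(p))     # expected surprise: 1.5 * ln 2 nats
```

An agent that frequents high-probability (low-surprise) outcomes thereby keeps the long-run average of this quantity – the entropy – low.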
The agent minimizes entropy by creating a generative model of the world. This is necessary because the agent does not have access to a direct measurement of its current state (i.e., the state of the true generative process). Instead, it can only perceive itself and the world around it via its sensory observations (Friston, FitzGerald et al., 2017; Friston, Parr et al., 2017). The generative model – based on incomplete information about the current (and future) state of the world – can be defined in terms of a partially observable Markov decision process (POMDP) (Astrom, 1965). In active inference, the agent makes choices based on its beliefs about these states of the world, not based on the ‘value’ of the states (Friston, FitzGerald et al., 2016). This distinction is key: in the standard model-based reinforcement learning framework, the agent is interested in optimizing the value function of the states (Sutton & Barto, 1998); i.e., making decisions that maximize expected value. In active inference, we are interested in optimizing a free energy functional of beliefs about states; i.e., making decisions that minimize expected free energy.
A simple abstraction would be to assert that the world has a true (hidden or latent) state, s, which gives rise to the observations, o, via the generative process (see Figure 1). The agent correspondingly has an internal representation of, or expectation about, s, which it infers from o (via its generative model). The hidden state is a combination of features relevant to the agent (e.g., location, color, etc.) and the observation is the information received from the environment (e.g., feedback, etc.). By inverting the mapping from hidden states to observations (i.e., Bayesian model inversion), the agent can explain the observations in terms of how they were caused by hidden states.
Definition 2 (Generative Model).
The joint model of this simple system is defined as P(o, s). Assuming conditional independence, this can be factorized into a likelihood function and a prior over hidden states (see Appendix 5.1 for a full specification of the model):

P(o, s) = P(o|s) P(s)
We know that for the agent to minimize its entropy, we need to marginalize over all possible states (and sequences of actions) that could lead to a given observation. This can be achieved by using the above factorization:

P(o) = Σ_s P(o, s) = Σ_s P(o|s) P(s)

This is not a trivial task, since the dimensionality of the hidden state (and action sequence) space can be extremely large. Instead, we utilize a variational approximation of the posterior, Q(s), which is tractable and allows us to estimate quantities of interest.
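To make the factorization and marginalization concrete, here is a minimal numerical sketch with a hypothetical two-state, two-outcome model (the matrices and their values are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical likelihood P(o|s): rows index outcomes, columns index hidden states
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])
# Prior over hidden states P(s)
D = np.array([0.5, 0.5])

# Joint P(o, s) = P(o|s) P(s): broadcasting scales each column by its prior
joint = A * D
# Model evidence P(o): marginalize the joint over hidden states
evidence = joint.sum(axis=1)

print(evidence)  # P(o) for each of the two outcomes
```

Exact marginalization is cheap here, but the sum grows exponentially once hidden states extend over time and policies, which is what motivates the variational approximation.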
Definition 3 (Variational free energy).
Using Jensen’s inequality, we can define the variational free energy, F, as an upper bound on surprise. This is commonly known as the (negative) evidence lower bound (ELBO) in the variational inference literature (Blei, Kucukelbir et al., 2017):

F = E_Q(s)[ln Q(s) − ln P(o, s)] ≥ −ln P(o)

To make the link more concrete, we further manipulate the variational free energy quantity, F:

F = E_Q(s)[ln Q(s) − ln P(s|o) − ln P(o)]

By rearranging the last equation, the connection between surprise and variational free energy is made explicit:

−ln P(o) = F − D_KL[Q(s) ∥ P(s|o)]

Additionally, we can express variational free energy as a function of these posterior beliefs in many forms:

F = D_KL[Q(s|π) ∥ P(s|o, π)] − ln P(o|π)   (12)

F = D_KL[Q(s|π) ∥ P(s|π)] − E_Q(s|π)[ln P(o|s, π)]   (13)
Since KL divergences cannot be less than zero, from Equation 12 we can see that the free energy is minimized when the approximate posterior becomes the true posterior. In that instance, the free energy would simply be the negative log evidence for the generative model (Beal, 2003). This highlights that minimizing free energy is equivalent to maximizing (generative) model evidence; in other words, minimizing the complexity of accurate explanations for observed outcomes, as seen in Equation 13. Note that we have conditioned the probabilities in Equations 12 and 13 on policies, π. These policies can be regarded as particular priors that, as we will see below, pertain to probabilistic transitions among hidden states. For the moment, the introduction of policies simply means that the variational free energy above can be evaluated for any given policy or model of state transitions.
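The two decompositions in Equations 12 and 13 can be checked numerically. This sketch reuses a hypothetical two-state, two-outcome model with an arbitrary approximate posterior; all values are illustrative:

```python
import numpy as np

def kl(q, p):
    """KL divergence D_KL[q || p] for categorical distributions."""
    return float(np.sum(q * np.log(q / p)))

A = np.array([[0.9, 0.2],        # hypothetical likelihood P(o|s)
              [0.1, 0.8]])
D = np.array([0.5, 0.5])         # prior over hidden states P(s)
o = 0                            # index of the observed outcome

evidence = float(A[o] @ D)       # P(o)
posterior = A[o] * D / evidence  # exact posterior P(s|o)
q = np.array([0.7, 0.3])         # an arbitrary approximate posterior Q(s)

# Definition: F = E_Q[ln Q(s) - ln P(o, s)]
F = float(np.sum(q * (np.log(q) - np.log(A[o] * D))))

# Equation 12: divergence from the true posterior, plus surprise
F12 = kl(q, posterior) - np.log(evidence)
# Equation 13: complexity (divergence from the prior) minus accuracy
F13 = kl(q, D) - float(np.sum(q * np.log(A[o])))

print(F, F12, F13)  # all three agree, and F upper-bounds -ln P(o)
```

Setting q equal to `posterior` makes the KL term vanish, in which case F equals the surprise exactly, as the text notes.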
Thinking in terms of variational free energy enables us to perceive sensory data but does not account for the actions that the agent can take. Therefore, we would like to minimize not only our instantaneous variational free energy, F, but also our variational free energy in the future; called the expected free energy, G. Minimization of expected free energy allows the agent to influence the future by taking actions, which are selected from policies.
Definition 4 (Policy).
A policy, π, is defined as a sequence of actions, a_τ = π(τ), at time τ, that enables an agent to transition between hidden states. The total number of policies that can be pursued is given by some arbitrary number, N. Formally, this can be written:

π = (a_1, a_2, …, a_T), where a_τ = π(τ)
This enables the agent to infer how it must act in the world, as determined by the policies selected, and how these actions determine subsequent outcomes. This is analogous to model-based reinforcement learning using planning (Sutton, 1990): hypothetical roll-outs are used to model the consequences of each policy. However, active inference goes one step further, deriving its actual policy from these roll-outs, and can therefore be seen to be implementing a form of imagination-augmentation (Racanière, Reichert, et al., 2017). Policies, a priori, minimize the free energy of beliefs about the future, G (Friston, FitzGerald et al., 2017). This can be realized by associating the prior probability of any policy with a softmax function (i.e., normalized exponential) of expected free energy:

P(π) = σ(−G(π))   (15)

where σ denotes a softmax function.
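Equation 15 amounts to a few lines of code. In this sketch, the expected free energies are hypothetical placeholder values:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())  # subtract the max for numerical stability
    return e / e.sum()

# Hypothetical expected free energies for three policies (in nats)
G = np.array([2.0, 1.0, 4.0])

# Prior over policies: a softmax of negative expected free energy,
# so policies with lower G are a priori more probable
p_pi = softmax(-G)

print(p_pi)           # a proper distribution over policies
print(p_pi.argmax())  # index of the policy with the lowest G
```

Note that only differences in G matter: adding a constant to every policy's expected free energy leaves the distribution unchanged.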
We can extend the variational free energy definition to be dependent on time (τ) and policy (π) (and present its matrix formulation, Equation 18):

F(π) = Σ_τ F(π, τ)

F(π, τ) = s_πτ · (ln s_πτ − ln(B_π,τ−1 s_π,τ−1)) − o_τ · ln(A) s_πτ   (18)

Here, s_πτ is the expected state conditioned on each policy; B_π,τ is the transition probability for hidden states under the action prescribed by a policy at a particular time; A is the expected likelihood matrix mapping from hidden states to outcomes and o_τ represents the outcomes.
Definition 5 (Expected free energy).
The expected free energy, G, is the variational free energy of future trajectories. It effectively evaluates evidence for plausible policies based on outcomes that have yet to be observed (Parr & Friston, 2018). It can be derived from Equation 16 by taking an expectation under the posterior predictive distribution, Q̃ = Q(o_τ, s_τ|π) = P(o_τ|s_τ) Q(s_τ|π). This captures the idea of predicting future outcomes, given future hidden states, conditioned on policies.
The expected free energy can be decomposed in complementary ways (and its matrix formulation, Equation 26):

G(π, τ) = −E_Q̃[ln Q(s_τ|o_τ, π) − ln Q(s_τ|π)] − E_Q̃[ln P(o_τ)]   (24)

G(π, τ) = D_KL[Q(o_τ|π) ∥ P(o_τ)] + E_Q(s_τ|π)[H[P(o_τ|s_τ)]]   (25)

G(π, τ) = o_πτ · (ln o_πτ − C_τ) + s_πτ · H   (26)

where the following assumptions are made: Q̃ = Q(o_τ, s_τ|π) = P(o_τ|s_τ) Q(s_τ|π); o_πτ = A s_πτ; C_τ = ln P(o_τ) is the logarithm of prior preference over outcomes and H = −diag(A^T · ln A) is the vector encoding the ambiguity over outcomes for each hidden state.
When minimizing expected free energy, we can regard Equation 24 as capturing the imperative to maximize the amount of information about hidden states gained by observing the environment (i.e., maximizing epistemic value), whilst maximizing expected value, as scored by log preferences (i.e., extrinsic value).
This entails a clear trade-off: the former (epistemic) component promotes curious behavior, with exploration encouraged as the agent seeks out salient states to minimize uncertainty about the environment, and the latter (pragmatic) component encourages exploitative behavior, through leveraging knowledge that enables policies to reach preferred outcomes. In other words, the expected free energy formulation enables active inference to treat exploration and exploitation as two different ways of tackling the same problem: minimizing uncertainty. The natural curiosity emerging through this formulation is in contrast to reinforcement learning, where curiosity must be manufactured, either through random action selection (Mnih, Silver et al., 2018) or through additional curiosity terms appended to the reward signal (Pathak, Efros et al., 2017). Information-theoretic approaches have also been explored in a reinforcement learning context but do not leverage the (beliefs about) latent states implied by the generative model; see (Still, 2012; Mohamed & Rezende, 2015). Consequently, they do not encourage exploration that would minimize ambiguity.
Equation 25 offers an alternative perspective on the same objective; i.e., an agent wishes to minimize ambiguity, whilst minimizing how much outcomes (under a given policy) deviate from prior preferences, C. Thus, ambiguity is the expectation of the conditional entropy, or uncertainty about outcomes, under the current policy. Low entropy suggests that outcomes are salient and uniquely informative about hidden states (e.g., visual cues in a well-lit environment as opposed to the dark). In addition, the agent would like to pursue policy-dependent outcomes that resemble its preferred outcomes. This is achieved when the KL divergence between predicted and preferred outcomes (i.e., expected cost) is minimized by a particular policy. Furthermore, prior beliefs about future outcomes equip the agent with goal-directed behavior (i.e., towards states it expects to occupy and frequent).
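The equivalence of the two decompositions (epistemic plus extrinsic value in Equation 24; risk plus ambiguity in Equation 25) can be verified numerically. This sketch uses a hypothetical two-state, two-outcome model; all quantities are illustrative:

```python
import numpy as np

A = np.array([[0.9, 0.2],     # hypothetical likelihood P(o|s)
              [0.1, 0.8]])
qs = np.array([0.6, 0.4])     # predicted hidden states Q(s|pi) under a policy
C = np.array([0.8, 0.2])      # prior preferences over outcomes, P(o)

qo = A @ qs                   # predicted outcomes Q(o|pi)
q_joint = A * qs              # posterior predictive Q(o, s|pi) = P(o|s) Q(s|pi)

# Equation 25: risk (expected cost) plus ambiguity
risk = float(np.sum(qo * np.log(qo / C)))
ambiguity = float(-np.sum(qs * np.sum(A * np.log(A), axis=0)))
G_risk_ambiguity = risk + ambiguity

# Equation 24: negative epistemic value minus extrinsic value
post = q_joint / qo[:, None]                            # Q(s|o, pi) per outcome
info_gain = float(np.sum(q_joint * np.log(post / qs)))  # expected information gain
extrinsic = float(np.sum(qo * np.log(C)))
G_epistemic = -info_gain - extrinsic

print(G_risk_ambiguity, G_epistemic)  # the two decompositions agree
```

Both the risk (a KL divergence) and the information gain (a mutual information) are non-negative, which is what grounds the interpretation of the two terms as cost and epistemic value respectively.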
The traditional reward function used in reinforcement learning is therefore replaced with prior beliefs about preferred outcomes in the future (see Equation 24). The agent’s prior preferences, C, are defined only up to an additive constant and depend on relative differences between rewarding (familiar) and unrewarding (surprising) outcomes. Thus, the agent will aim to follow a policy that both enables self-evidencing behavior (i.e., surprise minimization) and satisfies prior preferences.
From this free energy formulation, we can optimize expectations about hidden states, policies, and precision through inference, and optimize model parameters (likelihood, state transitions) through learning (via a learning rate, η). This optimization requires finding the sufficient statistics of posterior beliefs that minimize variational free energy (Friston, Parr et al., 2017). Under variational Bayes, this would mean iterating the appropriate formulations (for inference and learning) until convergence. However, under the active inference scheme, we calculate the solution by using a gradient descent (with a default step size of 4) on free energy, which allows us to optimize both action-selection and inference simultaneously (in matrix form), assuming a particular mean-field approximation (Beck, Pouget, et al., 2012; Parr, Markovic, et al., 2019), where γ encodes posterior beliefs about precision and π represents the policies specifying action sequences.
This entails converting the discrete updates, defined in Equations 27 and 28, into dynamics for inference that minimize state and precision prediction errors (i.e., the free energy gradients). Gradient flows then produce posterior expectations that minimize free energy, providing Bayesian estimates of hidden variables. This particular optimization scheme means that expectations about hidden variables are updated over several time scales: during each observation or trial, evidence for each policy is evaluated based upon prior beliefs about future outcomes. This is determined by updating posterior beliefs about hidden states (i.e., state estimation under each policy) on a fast time scale, while posterior beliefs find new extrema (i.e., as new observations are sampled) to produce a slower evidence accumulation over observations.
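A minimal sketch of this gradient-flow style of state estimation, for a single time step, is given below. The variable names, step size, and iteration count are illustrative; this is not the SPM implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

A = np.array([[0.9, 0.2],   # hypothetical likelihood P(o|s)
              [0.1, 0.8]])
D = np.array([0.5, 0.5])    # prior over initial hidden states
o = np.array([1.0, 0.0])    # one-hot encoded observation

# Gradient flow on log expectations (cf. neuronal depolarization):
# the state prediction error is the free energy gradient
v = np.log(D)
for _ in range(64):
    s = softmax(v)
    eps = np.log(D) + np.log(A.T) @ o - np.log(s)  # state prediction error
    v = v + 0.25 * eps                             # illustrative step size

s = softmax(v)
# For this one-step model, the fixed point is the exact posterior P(s|o)
posterior = A[0] * D / (A[0] @ D)
print(s, posterior)
```

The iteration converges because the prediction error vanishes exactly when the expectation matches the (softmax-normalized) combination of prior and likelihood evidence.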
Using this kind of belief updating, we can calculate the posterior beliefs about each policy; namely, a softmax function of expected free energy (see Equation 15). The softmax function is a generalized sigmoid for vector input and can, in a neurobiological setting, be regarded as a firing rate function of neuronal depolarization (Friston, Rosch et al., 2017). Having optimized posterior beliefs about policies, these are used to form a Bayesian model average of the next outcome, which is realized through action. In active inference, the scope and depth of the policy search is exhaustive, in the sense that any policy entertained by the agent is encoded explicitly – and the hidden states over the sequence of actions entailed by each policy are continuously updated. However, in practice, using Occam’s window, a policy is no longer evaluated if its log evidence falls a fixed (default) amount below that of the (current) most plausible policy. This can be treated as an adjustable hyper-parameter. Additionally, at the end of each sequence of observations, the expected parameters are updated to allow for learning across trials. This is similar to Monte Carlo reinforcement learning, where model parameters are updated at the end of each trial. Lastly, temporal discounting emerges naturally from the active inference scheme, where the generative model determines the nature of discounting (based on a parameter capturing precision), with predictions in the distal future being less precise and thus discounted (Friston, FitzGerald et al., 2017).
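Occam's window pruning can be sketched in a few lines. The threshold below is an arbitrary placeholder hyper-parameter, not the SPM default:

```python
import numpy as np

def occams_window(log_evidence, threshold):
    """Indices of policies whose log evidence lies within `threshold`
    nats of the currently most plausible policy; the rest are pruned."""
    log_evidence = np.asarray(log_evidence, dtype=float)
    keep = log_evidence >= log_evidence.max() - threshold
    return np.flatnonzero(keep)

# Hypothetical log evidence for five candidate policies
log_ev = [-1.0, -9.0, -2.5, -0.5, -7.0]
print(occams_window(log_ev, threshold=3.0))  # retained policy indices
```

Because the comparison is against the current maximum, the set of retained policies can shrink as evidence accumulates in favor of a single policy.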
The discussion above suggests that, from a generic generative model, we can derive Bayesian updates that clarify how perception, policy selection and actions shape beliefs about hidden states and subsequent outcomes in a dynamic (non-stationary) environment. This formulation can be extended to capture a more representative generative process by defining a hierarchical (deep temporal) generative model as described in (Friston, FitzGerald et al., 2017; Friston, Parr et al., 2017; Parr & Friston, 2017), continuous state-space models (Buckley, Kim, et al., 2017; Parr & Friston, 2019), or mixed models with both discrete and continuous states as described in (Friston, Parr et al., 2017; Parr & Friston, 2018). In the case of a continuous formulation, the generative model state-space can be defined in terms of generalized coordinates of motion, which generally have a non-linear mapping to the observed outcomes. Additionally, future work looks to evaluate how these formulations (agents) may interact with each other to emulate multi-agent exchanges. In what follows, we provide a simple worked example to show how this sort of scheme works.
This section considers inference using simulations of foraging in a T-maze. For simplicity, we have chosen a simple paradigm (more complex simulations have been explored in the literature; e.g., behavioral economics trust games (Moutoussis, Trujillo-Barreto, et al., 2014; Schwartenbeck, FitzGerald, et al., 2015), narrative construction and reading (Friston, Rosch et al., 2017), saccadic searches and scene construction (Mirza, Adams, et al., 2016), Atari games (Cullen, Davey, et al., 2018), etc.). We first describe the simulation set-up and then simulate how a mouse (artificial agent) learns to navigate (i.e., explore and then exploit) a maze to get the reward. The simulations involve searching for rewards (e.g., cheese) in a T-maze (Friston, Rigoli et al., 2015).
A mouse (agent) starts at the center of the T-maze: it can either move directly to the right or left arms, one of which contains cheese, or to the lower arm that contains cues indicating (probabilistically) whether the reward is in the upper right or left arm. The agent can only move twice and, upon entering the upper right or left arms, cannot leave. Thus, the optimal behavior is to first go to the lower arm to find the location of the reward and then retrieve the reward. Following this path or going directly to the correct reward location yields a reward, while failure to find the correct reward location results in a loss, at the end of the trial. Notice that rewards and losses are specified in nats (natural units), because we have stipulated reward in terms of the natural logarithms of some outcome.
For this setup, we define the generative model as follows: four control states that correspond to visiting the four locations (the center and three arms; we assume each control state takes the agent to the associated location), eight hidden states (four locations factorized by two contexts) and seven possible outcomes. The outcomes correspond to the following: being in the center, plus the (two) outcomes at each of the (three) arms that are determined by the context (the cheese being in the right or left arm).
We define the likelihood as follows: an ambiguous cue at the center (first) location and a definitive cue at the lower (fourth) location (refer to Figure 2). The remaining locations provide a reward with a probability determined by the context (i.e., reward on the right or left). The action-specific transition probabilities encode how an agent may move, except for the second and third locations, which are absorbing hidden states that the agent cannot leave. We define the agent as having extremely precise beliefs about these contingencies (i.e., large prior concentration parameters). Additionally, the utility of the outcomes, C, is defined for the rewarding and unrewarding outcomes: this is a replacement for writing out an explicit reward function. This means that the agent expects to be rewarded exponentially more often than to experience a neutral outcome. Having specified the state-space and contingencies, we can solve the belief updating Equations 27 and 28 to simulate behavior. Prior beliefs about the initial state were initialized with concentration parameters of a Dirichlet distribution for the central location in each context and zero otherwise. These can be regarded as the number of times (pseudo-count) each state, transition or policy has previously been encountered. Additionally, we remove policies whose relative posterior probability falls outside Occam’s window. Pseudo-code for the belief updating and action selection for this particular type of discrete state-space and time formulation is presented in Appendix 5.2.
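A sketch of how the factorized hidden state-space and one action-conditioned transition matrix might be encoded is shown below. The encoding (location-by-context flattening, label names) is illustrative, not the SPM specification:

```python
import numpy as np

locations = ["center", "right", "left", "cue"]  # four locations
contexts = ["reward-right", "reward-left"]      # two contexts

n_states = len(locations) * len(contexts)       # eight hidden states

def idx(loc, ctx):
    """Index of a (location, context) pair in the flattened state-space."""
    return locations.index(loc) * len(contexts) + contexts.index(ctx)

def B_move(target):
    """Transition matrix for the control state 'move to target': the agent
    moves to the target location (the context never changes), except from
    the absorbing right/left arms, which cannot be left."""
    B = np.zeros((n_states, n_states))
    for loc in locations:
        for ctx in contexts:
            src = idx(loc, ctx)
            dst = src if loc in ("right", "left") else idx(target, ctx)
            B[dst, src] = 1.0
    return B

B = B_move("cue")
print(B.shape)  # one 8x8 transition matrix per control state
```

Each column of B is a proper distribution over next states, and the two upper arms map onto themselves under every action, implementing the absorbing-state constraint described above.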
3.2 Learning to navigate the maze
To highlight how the mouse (agent) learnt where the reward was located in a non-stationary environment, we simulated a sequence of trials. The first three trials alternated between the two contexts: reward on either the right or the left. Then the context – indicated by the cue in the lower arm – was specified as being right for a block of trials, then left, then right again, and then left again, after which it remained right until the end of the simulation. These context changes allowed us to evaluate how quickly the mouse was able to switch between epistemic and exploitative policies and identify the correct reward location.
For the initial trials, the agent selected epistemic policies – first going to the lower arm and then proceeding to the reward location (i.e., left or right). This suggests that the agent was not entirely confident about which context might be in play. This is highlighted in Figure 3 (showing updates to the initial state concentration parameters, reflective of context learning in different contexts). Initially, the agent is uncertain which context it might be in, since both contexts have similar probabilities. However, once the context is consistently specified as right, there is a shift in the updates, attuning to that context. During this time, the agent becomes increasingly confident about the context and starts to visit the reward location directly. This is highlighted via the switch in the policy being pursued from ‘exploratory right policy: middle, bottom, right (cyan dot)’ to ‘exploitative right policy: middle, left (purple dot)’ (see Figure 3). However, whilst pursuing an exploitative policy, the context switches from right to left (see black arrow in Figure 4) and the agent chooses the wrong upper arm twice (and receives negative reward). This causes the agent to (once again) pursue an exploratory policy of first going to the bottom to collect the cue and then deciding which arm to go to next. After this, the agent continues to pursue exploratory policies, due to the context changing every 10 trials. However, once the agent is consistently exposed to the same context, it accumulates enough evidence and can once again switch from the exploratory policy to the exploitative one.
In this paradigm and its extensions, as explored in earlier work (Friston, FitzGerald et al., 2017) – e.g., an inability to move to the lower/upper arms, or incorrect cues – the mouse pragmatically changes its behavior (and continues to explore the environment), with slower convergence towards the optimal policy (directly going to the correct reward location) when uncertain. This highlights that active inference agents are equipped with a natural trade-off between exploration (to better understand the environment) and exploitation (choosing pragmatic policies). In other words, the mouse will continue to explore until it is confident about the environment. However, despite being reasonably confident about a given environment, the agent can rapidly adapt to changing contexts and new observations, as seen in the simulations above.
In short, active inference offers an attractive, natural adaptation mechanism for training artificial agents due to its Bayesian model updating properties. This is in contrast to reinforcement learning, where issues of non-stationarity in environments are dealt with using techniques that involve the inclusion of inductive biases; e.g., importance sampling of experiences in multi-agent environments (Foerster, Chen, et al., 2017) or using meta-learning to adapt gradient-update approaches more quickly (Al-Shedivat, Bansal, et al., 2018).
We have described active inference and the underlying minimization of variational and expected free energy using a (simplified) discrete state-space and time formulation. Throughout this review, we have suggested that active inference can be used as a framework to understand how agents (biological or artificial) operate in dynamic, non-stationary environments (Friston, Rosch et al., 2017), via a standard gradient descent on a free energy functional. In a more general (non-equilibrium physics) setting, active inference can be thought of as a formal way of describing the behavior of random dynamical systems (that possess a Markov blanket between internal states and observations).
As noted in the formulation of active inference (see Equation 24), epistemic foraging (or exploration) emerges naturally. This is captured by the imperative to maximize the mutual information between observations and the hidden states of the environment. Exploration means that the agent seeks out states that afford observations which minimize uncertainty about (hidden) states of affairs. Note that, in the formulation presented, we did not discuss parameter exploration, which might also be carried out by the agent (by applying the expected free energy derivations to the likelihood parameters in A) (Schwartenbeck, Passecker et al., 2019). The T-maze simulation highlighted the natural transition from exploratory (epistemic) policies to exploitative (pragmatic) policies that underpins active inference. Initially, when the agent was uncertain about the hidden state (i.e., context), it engaged in exploratory behavior. This behavior manifested in choosing policies where it would first go to the lower arm to disclose the cue that allowed it to determine the location of the reward. Behavior did not change qualitatively until the agent was sufficiently confident about the context in play – via the updating of the concentration parameters; i.e., learning.
Active inference gives us a natural way to account for uncertainty via the minimization of the expected free energy (Parr & Friston, 2017). It accounts for uncertainty regarding the parameters of the generative model – such as the mapping from hidden states of the world to observations, the temporal evolution of the world (via state transitions) and even the initial state of the environment – by defining appropriate Dirichlet distributions over these quantities. Additionally, we can parameterize uncertainty about potential policies based on the precision, γ, introduced above. In the T-maze simulation, resolving uncertainty about the state of the world was the main objective of the mouse and, by accumulating evidence, it was able to make correct inferences in later trials.
Our treatment has emphasized that, via a belief-based scheme, active inference enables us to specify reward functions in terms of prior beliefs, or not to specify rewards at all (to produce purely epistemic behavior). However, if rewards are available as observations or actions, they can be assigned high prior preferences. An agent is likely to maximize reward (or extrinsic value) by having prior preferences about unsurprising outcomes (see Equation 23) via the minimization of expected free energy. It is important to note that the minimization of expected free energy is achieved by choosing appropriate policies (sequences of actions). We accounted for this in the T-maze simulation, where the mouse had a strong positive preference for finding the cheese in either the right or the left upper arm, depending on the context. Additionally, ending up in locations without the reward was associated with strong negative preferences.
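The claim that preferences are defined only up to an additive constant can be checked directly: shifting the log-preference vector C by a constant leaves the resulting policy distribution unchanged. This illustrative sketch uses hypothetical predicted outcome distributions and ignores the epistemic term:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Predicted outcome distributions Q(o|pi) under two hypothetical policies
qo = np.array([[0.7, 0.3],
               [0.2, 0.8]])

def extrinsic_value(C):
    """Extrinsic value of each policy: E_Q[C], the expected log preference."""
    return qo @ C

C = np.log(np.array([0.9, 0.1]))  # log preferences over two outcomes
C_shifted = C + 5.0               # the same preferences, shifted by a constant

# Policy posteriors (ignoring the epistemic term for this illustration)
p1 = softmax(extrinsic_value(C))
p2 = softmax(extrinsic_value(C_shifted))
print(p1, p2)  # identical: only relative preferences matter
```

The shift cancels because each row of the predicted-outcome matrix sums to one, so the constant adds equally to every policy's extrinsic value.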
Finally, as demonstrated, agents using active inference exhibit many canonical properties with respect to learning and decision-making, such as a natural exploration-exploitation trade-off, the capacity to account for and make decisions given uncertainty, and adaptive approaches in the face of non-stationarity. Classical reinforcement learning requires additional engineering of such mechanisms into its formulation, whereas with active inference such properties emerge naturally by minimizing free energy.
However, it is worth noting that these properties follow from the form of the underlying generative model. The challenge is to identify the correct generative model that best explains the generative process (or the empirical responses) of interest (Gershman & Beck, 2017). This can be framed through more complex forms (via amortization) or learnt through structure learning (Gershman & Niv, 2010; Tervo, Tenenbaum, et al., 2016). Thus, if one were to find the correct generative model, active inference could be used for a variety of different problems; e.g., robotic arm movement, dyadic agents, playing Atari games, etc. We note that the task of defining the appropriate generative model (discrete or continuous) might be difficult. Thus, future work should look to incorporate implicit generative models (based on feature representations from empirical data) or shrink the hidden state-space by defining transition probabilities based on the likelihood (rather than latent states).
The routines described in this paper are available as MATLAB code in the SPM academic software: http://www.fil.ion.ucl.ac.uk/spm/. The simulations reported in the figures can be reproduced (and customised) via a graphical user interface by typing DEM in the MATLAB command window and selecting the appropriate demonstration routine (DEM_demo_MDP_X.m).
The accompanying MATLAB script is called spm_MDP_VB_X.m.
NS is funded by the Medical Research Council (Ref: 2088828). KJF is funded by the Wellcome Trust (Ref: 088130/Z/09/Z).
The authors have no disclosures or conflict of interest.
5.1 Explicit parameterisation of the generative model
Active inference rests on the following tuple:
- A finite set of outcomes
- A finite set of control states or actions
- A finite set of hidden states
- A finite set of time-sensitive policies
- A generative process that generates probabilistic outcomes from (hidden) states and action
- A generative model with parameters, over outcomes, states, and policies, where each policy returns a sequence of actions
- An approximate posterior over states, policies, and parameters, with associated expectations
The generative process describes transitions between hidden (unobserved) states in the world that generate (observed) outcomes. These transitions depend on action, which in turn depends on the agent's posterior beliefs about the next state. These beliefs are formed using a generative model of how observations are generated. The generative model (based on a partially observable MDP) describes what the agent believes about the world, where beliefs about hidden states and policies are encoded by expectations. Crucially, actions are part of the generative process in the world, whereas policies are part of the agent's generative model.
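The distinction between process and model can be illustrated in code. Below is a minimal sketch with assumed toy dimensions and parameters (two states, two outcomes, two actions): the environment samples states and outcomes from the true dynamics, while the agent only ever sees outcomes and maintains its own (here, veridical) likelihood and transition beliefs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generative process (the world): true dynamics, hidden from the agent.
A_true = np.array([[0.9, 0.1],
                   [0.1, 0.9]])            # P(o|s), columns = states
B_true = np.stack([np.eye(2),              # action 0: stay in current state
                   np.array([[0., 1.],
                             [1., 0.]])])  # action 1: switch state

def environment_step(s, a):
    """The world samples the next state and an outcome; the agent sees only o."""
    s_next = rng.choice(2, p=B_true[a][:, s])
    o = rng.choice(2, p=A_true[:, s_next])
    return s_next, o

# Generative model (the agent): here assumed to mirror the true process,
# but in general it encodes the agent's *beliefs* about the likelihood
# and transitions, which may differ from the world's true dynamics.
A_model, B_model = A_true.copy(), B_true.copy()
```

This separation is what allows actions to live in the world (the process) while policies live in the agent's head (the model).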
5.2 Pseudo-code for belief updating and action selection
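In outline, one cycle of the scheme implemented by spm_MDP_VB_X.m comprises state inference, evaluation of expected free energy over policies, and action selection. The Python sketch below is a highly simplified illustration of that cycle, reduced to a single-step lookahead with one-action policies (the full routine iterates over multi-step policies and time horizons); variable names and dimensions are assumptions, not the toolbox's interface.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def step(o, qs_prior, A, B, C, policies):
    """One simplified cycle of belief updating and action selection."""
    # 1) State inference: combine the likelihood of outcome o with prior beliefs
    qs = softmax(np.log(A[o] + 1e-16) + np.log(qs_prior + 1e-16))

    # 2) Evaluate expected free energy G for each one-step policy (action)
    G = np.zeros(len(policies))
    for i, a in enumerate(policies):
        qs_next = B[a] @ qs                           # predicted states under action a
        qo = A @ qs_next                              # predicted outcomes
        risk = qo @ (np.log(qo + 1e-16) - C)          # KL from prior preferences
        H = -np.sum(A * np.log(A + 1e-16), axis=0)    # likelihood entropy per state
        G[i] = risk + qs_next @ H                     # risk + ambiguity

    # 3) Policy posterior and action selection
    q_pi = softmax(-G)
    action = int(np.argmax(q_pi))
    return qs, q_pi, action

# Illustrative usage: observe outcome 0 with uniform prior beliefs
A = np.array([[0.9, 0.1],
              [0.1, 0.9]])
B = np.stack([np.eye(2),
              np.array([[0., 1.], [1., 0.]])])
C = np.log(softmax(np.array([3.0, 0.0])))    # prefer outcome 0
qs, q_pi, action = step(o=0, qs_prior=np.array([0.5, 0.5]),
                        A=A, B=B, C=C, policies=[0, 1])
```

With these (assumed) parameters the agent infers it is probably in state 0, and selects the "stay" action, since switching would lead away from the preferred outcome.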
- Friston, FitzGerald et al. (2017) Friston, K., FitzGerald, T., Rigoli, F., Schwartenbeck, P., & Pezzulo, G. (2017). Active Inference: A Process Theory. Neural Computation, 29(1), 1-49.
- Friston, Rosch et al. (2017) Friston, K., Rosch, R., Parr, T., Price, C., & Bowman, H. (2017). Deep temporal models and active inference. Neurosci Biobehav Rev, 77, 388-402.
- Pouget, Beck et al. (2013) Pouget, A., Beck, J., Ma, W., & Latham, P. (2013). Probabilistic brains: knowns and unknowns. Nature neuroscience, 16(9), 1170.
- Friston, Schwartenbeck et al. (2014) Friston, K., Schwartenbeck, P., FitzGerald, T., Moutoussis, M., Behrens, T., & Dolan, R. (2014). The anatomy of choice: dopamine and decision-making. Philos Trans R Soc Lond B Biol Sci, 369(1655).
- Friston (2019) Friston, K. (2019). A free energy principle for a particular physics. arXiv preprint, arXiv:1906.10184.
- Friston, FitzGerald et al. (2016) Friston, K., FitzGerald, T., Rigoli, F., Schwartenbeck, P., O’Doherty, J., & Pezzulo, G. (2016). Active inference and learning. Neurosci Biobehav Rev, 68, 862-879.
- Friston, Rigoli et al. (2015) Friston, K., Rigoli, F., Ognibene, D., Mathys, C., FitzGerald, T., & Pezzulo, G. (2015). Active inference and epistemic value. Cogn Neurosci, 1-28.
- Parr & Friston (2017) Parr, T., & Friston, K. (2017). Uncertainty, epistemics and active inference. Journal of the Royal Society Interface, 14(136).
- Tribus (1961) Tribus, M. (1961). Thermodynamics and Thermostatics: An Introduction to Energy, Information and States of Matter, with Engineering Applications. New York, USA, D. Van Nostrand Company Inc.
- Crauel & Flandoli (1994) Crauel, H., & Flandoli, F. (1994). Attractors for Random Dynamical-Systems. Probability Theory and Related Fields, 100(3), 365-393.
- Seifert (2012) Seifert, U. (2012). Stochastic thermodynamics, fluctuation theorems and molecular machines. Rep Prog Phys, 75(12), 126001.
- Attias (2003) Attias, H. (2003). Planning by Probabilistic Inference. Proc. of the 9th Int. Workshop on Artificial Intelligence and Statistics.
- Botvinick & Toussaint (2012) Botvinick, M., & Toussaint, M. (2012). Planning as inference. Trends Cogn Sci, 16(10), 485-488.
- Baker & Tenenbaum (2014) Baker, C., & Tenenbaum, J. (2014). Modeling Human Plan Recognition Using Bayesian Theory of Mind. In Sukthankar, G., Geib, C., Bui, H., Pynadath, D., & Goldman, R. (Eds.), Plan, Activity, and Intent Recognition. Morgan Kaufmann, Boston, 177-204.
- Schwartenbeck, Passecker et al. (2019) Schwartenbeck, P., Passecker, J., Hauser, T., FitzGerald, T., Kronbichler, M., & Friston, K. (2019). Computational mechanisms of curiosity and goal-directed exploration. Elife, 8.
- Friston, Mattout et al. (2011) Friston, K., Mattout, J., & Kilner, J. (2011). Action understanding and active inference. Biol Cybern, 104, 137-160.
- Friston, Parr et al. (2017) Friston, K., Parr, T., & de Vries, B. (2017). The graphical brain: Belief propagation and active inference. Netw Neurosci, 1(4), 381-414.
- Astrom (1965) Astrom, K. J. (1965). Optimal control of Markov processes with incomplete state information. Journal of mathematical analysis and applications, 10(1), 174-205.
- Sutton & Barto (1998) Sutton, R., & Barto, A. (1998). Reinforcement Learning: An Introduction. MIT Press.
- Blei, Kucukelbir et al. (2017) Blei, D., Kucukelbir, A., & McAuliffe, J. (2017). Variational inference: A review for statisticians. Journal of the American Statistical Association, 112(518), 859-877.
- Beal (2003) Beal, M. (2003). Variational Algorithms for Approximate Bayesian Inference. PhD. Thesis, University College London.
- Sutton (1990) Sutton, R. (1990). Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. Proceedings of the Seventh International Conference on Machine Learning, Austin, TX, Morgan Kaufmann.
- Racanière, Reichert, et al. (2017) Racanière, S., Reichert, D., Buesing, L., Guez, A., Rezende, D., Badia, A., Vinyals, O., Heess, N., Li, Y., Pascanu, R., Battaglia, P., Hassabis, D., Silver, D., & Wierstra, D. (2017). Imagination-augmented agents for deep reinforcement learning. International Conference on Neural Information Processing Systems, Long Beach, CA, Curran Associates Inc.
- Parr & Friston (2018) Parr, T. & Friston, K. (2018) Generalised free energy and active inference: can the future cause the past?. bioRxiv: 304782.
- Mnih, Silver et al. (2013) Mnih, V., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., & Riedmiller, M. (2013). Playing Atari with Deep Reinforcement Learning. NIPS Deep Learning Workshop.
- Pathak, Efros et al. (2017) Pathak, D., Efros, A., & Darrell, T. (2017) Curiosity-driven Exploration by Self-supervised Prediction. International Conference on Machine Learning, Sydney.
- Still (2012) Still, S. (2012). An information-theoretic approach to curiosity-driven reinforcement learning. Theory in Biosciences, 139–148.
- Mohamed & Rezende (2015) Mohamed, S. & Rezende, D. (2015) Variational information maximisation for intrinsically motivated reinforcement learning. Advances in neural information processing systems.
- Parr, Markovic, et al. (2019) Parr, T., Markovic, D., Kiebel, S. & Friston, K. (2019) Neuronal message passing using Mean-field, Bethe, and Marginal approximations. Scientific Reports, 9(1): 1889
- Buckley, Kim, et al. (2017) Buckley, C., Kim, C., McGregor, S., & Seth, A., (2017) The free energy principle for action and perception: A mathematical review. Journal of Mathematical Psychology, 81, 55-79.
- Parr & Friston (2018) Parr, T., & Friston, K. (2018) The Discrete and Continuous Brain: From Decisions to Movement-And Back Again. Neural Comput, 30(9), 2319-2347.
- Parr & Friston (2019) Parr, T., & Friston, K. (2019) The computational pharmacology of oculomotion. Psychopharmacology.
- Mirza, Adams, et al. (2016) Mirza, B., Adams, R., Mathys, C., & Friston, K. (2016) Scene Construction, Visual Foraging, and Active Inference. Frontiers in Computational Neuroscience, 10(56).
- Cullen, Davey, et al. (2018) Cullen, M., Davey, B., Friston, K., & Moran, R. (2018) Active Inference in OpenAI Gym: A Paradigm for Computational Investigations Into Psychiatric Illness. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, 3(9), 809-818.
- Schwartenbeck, FitzGerald, et al. (2015) Schwartenbeck, P., FitzGerald, T., Mathys, C., Dolan, R., Wurst, F., Kronbichler, M., & Friston, K. (2015) Optimal inference with suboptimal models: Addiction and active Bayesian inference. Medical Hypotheses, 84(2), 109-117.
- Moutoussis, Trujillo-Barreto, et al. (2014) Moutoussis, M., Trujillo-Barreto, N., El-Deredy, W., Dolan, R., & Friston, K. (2014) A formal model of interpersonal inference. Front Hum Neurosci, 8:160.
- Foerster, Chen, et al. (2017) Foerster, J., Chen, R., Al-Shedivat, M., Whiteson, S., Abbeel, P., & Mordatch, I. (2017) Learning with Opponent-Learning Awareness. CoRR, arXiv:1709.04326.
- Al-Shedivat, Bansal, et al. (2018) Al-Shedivat, M., Bansal, T., Burda, Y., Sutskever, I., Mordatch, I., & Abbeel, P. (2018) Continuous Adaptation via Meta-Learning in Nonstationary and Competitive Environments. CoRR, arXiv:1710.03641.
- Gershman & Niv (2010) Gershman, S., & Niv, Y. (2010) Learning latent structure: carving nature at its joints. Current opinion in neurobiology, 20(2), 251-256
- Tervo, Tenenbaum, et al. (2016) Tervo, D., Tenenbaum, J., & Gershman, S. (2016) Toward the neural implementation of structure learning. Current opinion in neurobiology, 37, 99-105
- Gershman & Beck (2017) Gershman, S., & Beck, J. (2017) Complex probabilistic inference. Computational Models of Brain and Behavior, 453
- Beck, Pouget, et al. (2012) Beck, J., Pouget, A., & Heller, K. (2012) Complex inference in neural circuits with probabilistic population codes and topic models. In: Advances in Neural Information Processing Systems, 3059–30