The application of notions of self-organization and complex systems to psychology and cognitive science has increased during the last decades. Recently, some contributions in this field have obtained sets of indicators of critical behaviour (long-range correlations, fractal scaling, etc.) to characterize different domains of cognitive activity (Chialvo, 2010; Van Orden et al., 2012; Dixon et al., 2012). Although comparing experimentally measurable quantities with the parameters of models of self-organized criticality makes it possible to establish analogies between models and observed phenomena, the connection between the empirical indicators and mechanistic models is often thin (Wagenmakers et al., 2012). Mechanistic theories, which account for the behaviour of a system in terms of its parts and their interactions (Bechtel & Richardson, 2010), can provide useful perspectives on the study of complex notions such as criticality that most current empirical explanations lack.
Interestingly, in the past few years, large sets of real biological data have made it possible to characterize, using mechanistic models, how the behaviours of different biological systems (e.g. networks of neurons, antibody segments or flocks of birds) are poised near a critical point in their parameter space (Mora & Bialek, 2011). This is a great step towards the development of deeper theoretical principles behind the behaviour of biological and cognitive systems. However, beyond the importance of these models in explaining the emergence of criticality in specific experimental data, we suggest that a complementary perspective could tackle the development of 'conceptual models' in order to explain, at a more abstract level, how organisms are driven towards critical behaviour. Conceptual models are defined in terms of a set of general mechanisms and generic processes, expressed in abstract frameworks that work as 'proofs of concept' (Barandiaran & Chemero, 2009) and can support future experiments.
In this paper, we propose a conceptual model describing a mechanism that drives an embodied agent towards critical points of its parameter space. We use concepts from statistical mechanics to base our model not on specific configurations of the parameters of the agent, but on macroscopic variables that drive the system to transition points between qualitatively different regimes of behaviour. Driving synthetic agents to criticality may offer the opportunity to clarify its contribution in different contexts. In the study of cognitive processes, criticality always appears entangled with other features of adaptive behaviour (e.g. perception, prediction, learning) in agents dealing with complex environments. A mechanism poising agents at criticality in different scenarios may help us understand the contributions of criticality 'by itself' and how criticality relates to other phenomena.
To this end, we first introduce a Boltzmann Machine as the simplest statistical mechanics model showing correlations between the elements of a network, and derive a learning model driving the system towards critical points. The model exploits the heat capacity of the system as a macroscopic property that works as a proxy for criticality (when the heat capacity diverges, the Boltzmann Machine is at a critical point). After that, we test our learning model in an embodied agent controlling a Mountain Car (a classic reinforcement learning testbed), finding that it is able to drive both the neural controller and the behaviour of the agent to a transition point in the parameter space between qualitatively different behavioural regimes. Finally, we discuss possible applications of our model to the development of deeper principles governing biological and cognitive systems.
2 Driving a neural controller towards a critical point
We propose a learning model that self-organizes the parameters of a Boltzmann Machine in order to drive the system towards states of criticality. We take advantage of the fact that at critical points, derivatives of thermodynamic quantities such as the entropy may diverge (Mora & Bialek, 2011). An example of this is the heat capacity, whose divergence is a sufficient condition for criticality (though not a necessary one). We define our network as a Boltzmann Machine (Ackley et al., 1985) following a maximum entropy distribution:

$$ P(\mathbf{s}) = \frac{1}{Z} e^{-\beta E(\mathbf{s})}, \qquad Z = \sum_{\mathbf{s}} e^{-\beta E(\mathbf{s})}, $$

where the energy of each state is defined in terms of the biases $h_i$ and couplings $J_{ij}$ of the state of each neuron $s_i$:

$$ E(\mathbf{s}) = -\sum_i h_i s_i - \sum_{i<j} J_{ij} s_i s_j. $$

The states $s_i$ can take values of $-1$ or $1$, and the couplings $J_{ij}$ and biases $h_i$ can take continuous values.
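To make the heat-capacity proxy concrete, the following sketch enumerates all states of a small Boltzmann Machine and computes $C(\beta) = \beta^2\,\mathrm{Var}(E)$ exactly; the network size and parameter scales are illustrative choices, not those used in the paper:

```python
import itertools
import math
import random

def energy(s, h, J):
    """E(s) = -sum_i h_i*s_i - sum_{i<j} J_ij*s_i*s_j for +/-1 states."""
    n = len(s)
    e = -sum(h[i] * s[i] for i in range(n))
    e -= sum(J[i][j] * s[i] * s[j]
             for i in range(n) for j in range(i + 1, n))
    return e

def heat_capacity(beta, h, J):
    """Exact heat capacity C = beta^2 * Var(E), enumerating all 2^n states."""
    states = list(itertools.product([-1, 1], repeat=len(h)))
    energies = [energy(s, h, J) for s in states]
    weights = [math.exp(-beta * e) for e in energies]
    z = sum(weights)
    probs = [w / z for w in weights]
    mean_e = sum(p * e for p, e in zip(probs, energies))
    var_e = sum(p * (e - mean_e) ** 2 for p, e in zip(probs, energies))
    return beta ** 2 * var_e

random.seed(0)
n = 5  # illustrative size; enumeration is exponential in n
h = [random.gauss(0, 0.1) for _ in range(n)]
J = [[random.gauss(0, 1.0) for _ in range(n)] for _ in range(n)]
betas = [0.1 * k for k in range(1, 51)]
cs = [heat_capacity(b, h, J) for b in betas]
peak_beta = betas[max(range(len(cs)), key=cs.__getitem__)]
```

In a finite system the divergence becomes a finite peak, so locating `peak_beta` along a scan of $\beta$ is the finite-size analogue of finding the critical point.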
In order to define a learning rule adjusting the values of $h_i$ and $J_{ij}$, we define a gradient-climbing rule for maximizing the value of the heat capacity, with the intent of driving the system to critical points marked by a divergence of the heat capacity. However, the heat capacity of the global state of the system depends on global variables of the system (e.g. the energy of the system), and thus we cannot define a gradient-climbing rule based only on local information. Instead, we can define the heat capacity of the system from the path entropy of each neuron, describing transitions between states, which is defined by the probability:

$$ P(s_i(t+1) \mid \mathbf{s}(t)) = \frac{1}{1 + e^{-2\beta_i s_i(t+1)\left(h_i + \sum_j J_{ij} s_j(t)\right)}}, $$
where $\mathbf{s}(t)$ is the state of the system at time $t$, $s_i(t+1)$ is the state of neuron $i$ at time $t+1$, and $\beta_i$ is now ascribed to the transitions of an individual neuron (using individual values of $\beta_i$ allows us to derive a learning rule that depends only on local variables). Path entropy is defined as the entropy of the transitions of the state of a neuron $i$,

$$ H_i = -\sum_{\mathbf{s}(t)} P(\mathbf{s}(t)) \sum_{s_i(t+1)} P(s_i(t+1) \mid \mathbf{s}(t)) \log P(s_i(t+1) \mid \mathbf{s}(t)). $$
From the path entropy we can define the heat capacity associated with the path entropy of neuron $i$ as

$$ C_i = -\beta_i \frac{\partial H_i}{\partial \beta_i}. $$
In our model, the value of the thermodynamic beta defines the temperature of the system, $T_i = 1/(k_B \beta_i)$, where $k_B$ is the Boltzmann constant and $T_i$ the temperature associated with each neuron. Nevertheless, since the temperature here has no real-world meaning, $\beta_i$ just corresponds to a global rescaling of the parameters of the neuron by multiplying them by a constant value. Thus, we determine a working temperature by defining $\beta_i = 1$.
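The claim that $\beta$ amounts to a global rescaling of parameters can be verified directly: since the energy is linear in the parameters, $P(\mathbf{s}; \beta, h, J) = P(\mathbf{s}; 1, \beta h, \beta J)$. A minimal check (the parameter values are arbitrary):

```python
import itertools
import math

def boltzmann_probs(beta, h, J):
    """Exact Boltzmann distribution over all +/-1 states."""
    n = len(h)
    states = list(itertools.product([-1, 1], repeat=n))
    def energy(s):
        e = -sum(h[i] * s[i] for i in range(n))
        e -= sum(J[i][j] * s[i] * s[j]
                 for i in range(n) for j in range(i + 1, n))
        return e
    w = [math.exp(-beta * energy(s)) for s in states]
    z = sum(w)
    return [x / z for x in w]

h = [0.3, -0.5, 0.1]
J = [[0.0, 0.8, -0.4],
     [0.0, 0.0, 1.2],
     [0.0, 0.0, 0.0]]
beta = 2.5
# Same distribution computed two ways: via beta, and via rescaled parameters.
p_temp = boltzmann_probs(beta, h, J)
p_resc = boltzmann_probs(1.0, [beta * x for x in h],
                         [[beta * x for x in row] for row in J])
```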
Considering these definitions, and fixing the working temperature at $\beta_i = 1$, we derive the learning rules that climb the gradient of $C_i$ and drive the system towards critical points as:

$$ \Delta h_i = \mu \frac{\partial C_i}{\partial h_i}, \qquad \Delta J_{ij} = \mu \frac{\partial C_i}{\partial J_{ij}}, $$

where $\mu$ is a learning rate.
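The paper derives an analytic, local learning rule; purely as an illustration of the same idea, the sketch below climbs the heat-capacity gradient of a small Boltzmann Machine numerically (central finite differences on the exact $C$), which is feasible only for toy sizes:

```python
import itertools
import math
import random

def heat_capacity(beta, h, J):
    """Exact C = beta^2 * Var(E) over all +/-1 states (toy sizes only)."""
    n = len(h)
    states = list(itertools.product([-1, 1], repeat=n))
    energies = []
    for s in states:
        e = -sum(h[i] * s[i] for i in range(n))
        e -= sum(J[i][j] * s[i] * s[j]
                 for i in range(n) for j in range(i + 1, n))
        energies.append(e)
    weights = [math.exp(-beta * e) for e in energies]
    z = sum(weights)
    probs = [w / z for w in weights]
    mean_e = sum(p * e for p, e in zip(probs, energies))
    return beta ** 2 * sum(p * (e - mean_e) ** 2
                           for p, e in zip(probs, energies))

def climb(h, J, beta=1.0, lr=0.05, steps=25, eps=1e-5):
    """Gradient ascent on C via central finite differences."""
    n = len(h)
    for _ in range(steps):
        grad_h = [0.0] * n
        grad_J = [[0.0] * n for _ in range(n)]
        for i in range(n):
            h[i] += eps
            cp = heat_capacity(beta, h, J)
            h[i] -= 2 * eps
            cm = heat_capacity(beta, h, J)
            h[i] += eps
            grad_h[i] = (cp - cm) / (2 * eps)
        for i in range(n):
            for j in range(i + 1, n):
                J[i][j] += eps
                cp = heat_capacity(beta, h, J)
                J[i][j] -= 2 * eps
                cm = heat_capacity(beta, h, J)
                J[i][j] += eps
                grad_J[i][j] = (cp - cm) / (2 * eps)
        for i in range(n):
            h[i] += lr * grad_h[i]
            for j in range(i + 1, n):
                J[i][j] += lr * grad_J[i][j]
    return h, J

random.seed(2)
n = 4
h = [random.uniform(-0.1, 0.1) for _ in range(n)]
J = [[random.uniform(-0.1, 0.1) for _ in range(n)] for _ in range(n)]
c_before = heat_capacity(1.0, h, J)
h, J = climb(h, J)
c_after = heat_capacity(1.0, h, J)
```

The point of the analytic rule in the text is precisely to avoid this global computation and rely only on local variables of each neuron.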
In the following section, we use this learning rule to drive the neural controller of an embodied agent towards a critical point. In order to do so, we need to take the environment into account during learning. If we consider two interconnected Boltzmann Machines (one being the neural controller and the other being the environment), Equation 6 holds perfectly if we only apply it to the values of $h_i$ and $J_{ij}$ corresponding to units of the neural controller. In our case, we will not use a Boltzmann Machine as an environment; instead, we will use a classic example from reinforcement learning. Therefore, our learning rule will be valid as long as the statistics of the environment can be approximated by a Boltzmann Machine with an arbitrary number of units. Luckily, Boltzmann Machines are universal approximators (Montúfar, 2014). Nevertheless, if the updating of units does not follow the rules of a Boltzmann Machine (for example, in our embodied model sensor values are clamped from values in the environment, so they influence hidden and motor units but are not influenced by them, which can cause the distribution of states to no longer follow the Boltzmann distribution depicted by Equation 3), the approximation can be flawed in some cases, as we will see later.
3 Embodied model: Mountain Car
In order to evaluate the behaviour of the proposed learning model, we test it in the Mountain Car environment (Moore, 1990). This environment is a classical test bed in reinforcement learning, depicting an under-powered car that must drive up a steep hill (Figure 1). Since gravity is stronger than the car's engine, the vehicle must learn to leverage potential energy by driving up the opposite hill before it is able to reach the goal at the top of the rightmost hill. We simulate the environment using the OpenAI Gym toolkit (Brockman et al., 2016). In this environment, the horizontal position $x$ of the car is limited to the interval $[-1.2, 0.6]$, and the vertical position of the car is defined as $y = \sin(3x)$. The velocity in the horizontal axis is updated each time step as $v \leftarrow v + 0.001\,a - 0.0025\cos(3x)$, where $a$ is the action of the motor, which can be either $-1$, $0$ or $1$.
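As a reference for the dynamics just described, here is a minimal sketch using the standard Gym constants (the paper additionally tightens the velocity bound, so `MAX_SPEED` below would be replaced by the smaller limit):

```python
import math
import random

# Classic Gym MountainCar dynamics. Constants are the standard Gym values;
# the paper further restricts the speed limit to make random solutions rare.
MIN_POS, MAX_POS = -1.2, 0.6
MAX_SPEED = 0.07

def step(pos, vel, action):
    """One simulation step; action is -1 (left), 0 (idle) or +1 (right)."""
    vel += 0.001 * action - 0.0025 * math.cos(3 * pos)
    vel = max(-MAX_SPEED, min(MAX_SPEED, vel))
    pos += vel
    pos = max(MIN_POS, min(MAX_POS, pos))
    if pos == MIN_POS and vel < 0:
        vel = 0.0  # inelastic collision with the left wall
    return pos, vel

def height(pos):
    """Vertical position of the car along the hill profile."""
    return math.sin(3 * pos)

random.seed(1)
pos, vel = random.uniform(-0.6, -0.4), 0.0  # start in the valley
for _ in range(1000):
    pos, vel = step(pos, vel, random.choice([-1, 0, 1]))
```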
So as to make it difficult for random agents to solve the task, the maximum velocity of the car was limited to a value below the standard bound (typically, the Mountain Car environment restricts the velocity of the car to the interval $[-0.07, 0.07]$). With this velocity limitation, only a small fraction of agents with random parameters (sampled from a uniform distribution) are able to reach the top of the mountain during a trial starting from a random position in the valley.
We define the neural controller of the car as a Boltzmann Machine containing sensor and hidden/motor neurons. We feed the sensors with the horizontal and vertical acceleration of the car, each discretized into an array of three bits. Each sensor unit is assigned a value of $1$ if its corresponding bit is active and $-1$ otherwise. Two of the car neurons are connected to the motors, defining an action of $1$ if both neurons are active, $-1$ if both neurons are inactive, and $0$ otherwise. We apply the learning rule from Equation 6 to different agents. In order to avoid overfitting, we add an L2 regularization term, updating the parameters of the system according to the rule:
$$ h_i \leftarrow h_i + \mu\,\Delta h_i - \lambda h_i, \qquad J_{ij} \leftarrow J_{ij} + \mu\,\Delta J_{ij} - \lambda J_{ij}, $$

where $\mu$ is the learning rate, $\lambda$ is the regularization coefficient, and $\Delta h_i$ and $\Delta J_{ij}$ are the result of Equation 6. Agents are initialized at the starting random position of the environment. Hidden and motor neurons are randomized, and the initial parameters $h_i$ and $J_{ij}$ are sampled from a uniform random interval. The agents are simulated for a number of trials, applying Equation 6 at the end of each trial and computing the values of $\Delta h_i$ and $\Delta J_{ij}$ over that trial. Note that agents are not reset at the end of a trial.
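As an illustration, the sensor and motor encodings described above can be sketched as follows (the equal-width binning of the three-bit sensor arrays is a hypothetical choice, since the paper does not specify the discretization boundaries):

```python
def discretize3(value, low, high):
    """Encode a scalar as a 3-unit one-hot array of +/-1 sensor values.
    Equal-width bins are an assumption made for illustration."""
    span = (high - low) / 3.0
    idx = min(2, max(0, int((value - low) / span)))
    return [1 if k == idx else -1 for k in range(3)]

def motor_action(m1, m2):
    """Two +/-1 motor neurons -> car action:
    +1 if both active, -1 if both inactive, 0 otherwise."""
    return (m1 + m2) // 2
```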
In this section, we analyze the behaviour of the neural controllers and the behavioural patterns of the agents with respect to the possibilities of their parameter space. Although the figures correspond to one particular agent (one of those reaching the top of the mountain), most results are general to all 10 agents, except when indicated otherwise. In order to compare the agents with other behavioural possibilities, we explore the parameter space by changing the parameter $\beta$ of the agents. Modifying the value of $\beta$ is equivalent to a global rescaling of the parameters of the agent, transforming $h_i \to \beta h_i$ and $J_{ij} \to \beta J_{ij}$, thus exploring the parameter space along one specific direction. For values of $\beta$ logarithmically distributed around the operating point $\beta = 1$, we simulate the 10 agents for a trial, after starting the agents from the random starting position and an initial transient run. We will use the results of those simulations for all the results in this section.
4.1 Signatures of criticality in the neural controller
Firstly, we test whether the trained agents show signatures of critical behaviour. Counting the occurrence of each possible state of the neurons of the agents (including sensor, hidden and motor neurons), we can compute the probability distribution of the Boltzmann Machine.
[Figure 2: (A) Rank-ordered probability distribution of the states of the neural controller, compared with Zipf's law. (B) Heat capacity of the system, computed by calculating the entropy from the state distribution and differentiating a cubic interpolation of the entropy with respect to $\beta$ (solid line), and estimation of the heat capacity using the approximation of Equation 5 (dashed line). A peak in heat capacity is observed near $\beta = 1$, suggesting that the system is near a critical point. For values of $\beta$ below the critical point, the heat capacity and its approximation coincide, indicating that the approximation is valid in that range.]
We observe that all agents approximately follow a Zipf's law at $\beta = 1$ (Figure 2.A) for almost three decades, which is good agreement given the limited size of the system (note that the number of possible states of the system is limited). All trained agents show a similar distribution close to Zipf's law.
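A Zipf's-law check of this kind can be computed from empirical state counts by fitting log-frequency against log-rank; an exact Zipf distribution gives a slope of $-1$. A minimal sketch with an ideal Zipf law (the state-space size is illustrative):

```python
import math

# Exact Zipf law p(r) proportional to 1/r over the first n ranks; fitting
# log-frequency against log-rank recovers a slope of -1.
n = 256  # illustrative state-space size
ranks = list(range(1, n + 1))
norm = sum(1.0 / r for r in ranks)
probs = [1.0 / (r * norm) for r in ranks]

# Ordinary least-squares fit of log(p) against log(rank).
xs = [math.log(r) for r in ranks]
ys = [math.log(p) for p in probs]
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
```

With empirical data, `probs` would be replaced by the rank-ordered relative frequencies of the observed states, and a slope near $-1$ over several decades is the signature reported in the text.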
Secondly, as another indicator of critical points is the divergence of the heat capacity of the system, we estimate the heat capacity of hidden and motor neurons (we can do this using Equation 4 or, alternatively, using Equation 5, which involves the approximations made when designing the learning algorithm and models the heat capacity 'as seen' by the neural controller). We observe (Figure 2.B) that the heat capacity peaks around the operating temperature (at a value of $\beta$ slightly larger than $1$), which, together with the Zipf distribution, suggests that the system is operating in a regime of criticality.
Finally, if we compare the real heat capacity with the heat capacity as seen by the learning algorithm, we can infer that the approximation of the environment as a Boltzmann Machine works well when the parameters of the agent are not too large (increasing $\beta$ is equivalent to rescaling all parameters of the system similarly; therefore, regularization terms might be necessary for the learning algorithm to work correctly, at least when environments are deterministic, as in this case).
[Figure 3: Median of the average height reached by the agent (solid line) and its upper and lower quartiles (dotted lines). A transition is observed near the value of $\beta$ where the agent starts reaching the top of the mountain. Similar transitions are identified in several of the simulated agents.]
4.2 Behavioural transitions in the parameter space
What does it imply for the agent to poise its neural controller at a critical point?
It should be remarked here that our agents are given no explicit goal; they only tend towards behavioural patterns that maximize the heat capacity of their neurons, independently of whether this behaviour reaches the top of the mountain or not (in fact, only some of the trained agents are able to climb to the top of the mountain). With this in mind, we start exploring the effects of crossing the critical point by observing the different behavioural modes of the agent in the parameter space. The behaviour of the car can be described just by its position and speed at different moments in time.
In Figure 3.A-C we can observe the behaviour of the car for three values of $\beta$, over an interval of simulation steps. If we compute the average height during the trial for each value of $\beta$ (Figure 3.D), we observe that slightly below the operating temperature there is a transition from agents that are not able to reach the top of the mountain to agents that are able to do so. In more detail, the agents that are able to reach the top of the mountain show similar results, while some of the agents that are not able to reach the top display a similar transition in the average height and the average absolute velocity of the car. The remaining agents do not show a transition in the average values of basic behavioural variables, although this does not preclude the possibility of another, less evident type of behavioural transition.
What is changed in this behavioural transition? We are interested in knowing whether these behavioural regimes are qualitatively different, and we will explore this using information theory to characterize how different variables of the agent interact at different points of the parameter space. Specifically, we are interested in the relation between sensor, hidden and motor neurons, which determines the behaviour of the agent in its environment.
But is the agent merely reactive to sensory inputs, or is there a more complex interplay between sensor, hidden and motor units? In order to answer this, we characterize the interaction between variables using partial information decomposition (Timme et al., 2014) to compute synergies between variables, defined as:

$$ Syn(Z; X, Y) = I(Z; X, Y) - I(Z; X) - I(Z; Y) + R(Z; X, Y), $$
where $X$, $Y$ and $Z$ are discrete random variables, $I$ is the mutual information between two variables, and $R(Z; X, Y)$, defined as in Williams and Beer (2011), is the redundant information that $X$ and $Y$ share about $Z$. The resulting synergy captures information about $Z$ that is not available from either $X$ or $Y$ alone but only from their interaction (the classical example is the relation between the output and inputs of an XOR gate).
Defining $S$, $H$ and $M$ as the joint distributions of sensor, hidden and motor neurons respectively, we can analyze the synergistic information between the three groups of variables. The objective is to capture how much information emerges from the interaction between variables instead of being contained in the variables alone.
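The synergy measure can be sketched using the Williams-Beer redundancy $I_{\min}$; the code below is an illustrative implementation for two discrete sources, and it recovers full synergy (1 bit) for the XOR example mentioned above:

```python
import math
from collections import defaultdict
from itertools import product

def mutual_info(joint):
    """I(Y;X) in bits from a dict {(y, x): p}."""
    py, px = defaultdict(float), defaultdict(float)
    for (y, x), p in joint.items():
        py[y] += p
        px[x] += p
    return sum(p * math.log2(p / (py[y] * px[x]))
               for (y, x), p in joint.items() if p > 0)

def specific_info(y0, joint):
    """Williams-Beer specific information I(Y=y0; X)."""
    py0 = sum(p for (y, _), p in joint.items() if y == y0)
    px = defaultdict(float)
    for (_, x), p in joint.items():
        px[x] += p
    total = 0.0
    for (y, x), p in joint.items():
        if y == y0 and p > 0:
            # p/px[x] = p(y0|x), p/py0 = p(x|y0)
            total += (p / py0) * (math.log2(p / px[x]) - math.log2(py0))
    return total

def synergy(pyx1x2):
    """Syn(Y;X1,X2) = I(Y;X1,X2) - I(Y;X1) - I(Y;X2) + I_min."""
    joint12 = {(y, (x1, x2)): p for (y, x1, x2), p in pyx1x2.items()}
    joint1, joint2 = defaultdict(float), defaultdict(float)
    for (y, x1, x2), p in pyx1x2.items():
        joint1[(y, x1)] += p
        joint2[(y, x2)] += p
    ys = {y for (y, _, _) in pyx1x2}
    py = {y0: sum(p for (y, _, _), p in pyx1x2.items() if y == y0)
          for y0 in ys}
    i_min = sum(py[y0] * min(specific_info(y0, joint1),
                             specific_info(y0, joint2)) for y0 in ys)
    return (mutual_info(joint12) - mutual_info(joint1)
            - mutual_info(joint2) + i_min)

# XOR with uniform inputs: all information about the output is synergistic.
xor = {(x1 ^ x2, x1, x2): 0.25 for x1, x2 in product([0, 1], repeat=2)}
```

In the analysis of the agent, the same computation would be applied to the empirical joint distribution of sensor, hidden and motor states.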
As we observe in Figure 4, the synergy between motor and hidden neurons about sensor information peaks at values of $\beta$ lower than 1, while the synergies of hidden and motor neurons with sensor neurons increase at larger values of $\beta$, depicting a transition point at a value of $\beta$ slightly lower than 1. Since the environment of the agent is completely deterministic, it seems reasonable that larger values of $\beta$ (i.e. less random behaviour) are more effective at transmitting information from sensors to other neurons, while maximum interaction between hidden and motor neurons takes place at a lower $\beta$.
Recapitulating, we have proposed a learning model driving an embodied agent close to critical points in the parameter space, poising both the neural controller and the behavioural patterns of the agent near a transition point between qualitatively different regimes of operation. In the case of the neural controller, we found that the Boltzmann Machine has a peak in its heat capacity at a point slightly above $\beta = 1$. However, if we analyze the synergistic interaction between sensor, hidden and motor units of the system, we find a transition at a point slightly below $\beta = 1$, which also coincides with a transition between behavioural regimes in several of the agents. These results suggest that the system might be finding a compromise between the critical point of its neural controller and the zones of transition in the behavioural regimes of the agent (the former happening at higher temperatures and the latter at lower temperatures).
At this point, we can hark back to our original questions. Why do biological systems behave near criticality? What are the benefits for a biological system of moving towards this special type of point? And, more importantly, how can our learning model help answer those questions?
Reviewing the related literature, one finds that interpretations of criticality are, in general, rather speculative. For example, Beggs (2008) hypothesizes that neural systems operating at a critical point can optimize their information processing and computational power. Mora and Bialek (2011) discuss the experimental evidence of criticality in a wide variety of systems and propose that criticality could provide better defense mechanisms against predators (in animals), gain selectivity in response to stimuli (in auditory systems), or improved mechanisms to anticipate attacks (in immune systems). Nevertheless, the reasoning that supports these hypotheses is based more on generic suggestions than on scientifically rigorous statements. More detailed analyses are needed before accepting such speculations, and our opinion is that a conceptual model of embodied criticality in natural systems can be useful to capture how transition points in the parameter space of behavioural regimes can be found and exploited to obtain functional advantages such as the ones mentioned above. For that purpose, rather than specific biological instances of critical phenomena, we have used an abstract framework of how embodied agents can be driven to critical points.
Furthermore, we believe that conceptual models such as the one presented here could help us to test more intriguing hypotheses. For example, our general mechanism driving an embodied neural controller to criticality has the potential to capture the contribution of criticality 'by itself' to the behaviour of adaptive agents in different scenarios, as well as the relation between criticality and other biological and cognitive phenomena.
On the other hand, criticality generally appears entangled with other capabilities developed by biological systems, and interpretations of the advantages of criticality typically refer to tangible benefits for the system (e.g. at an evolutionary level, as the source of a new range of capabilities or of better mechanisms for surviving in open environments), making it difficult to distinguish whether criticality is the cause or the consequence of such effects. Notwithstanding, as we have presented here, our model does not address any particular task. Instead, the model finds ways to drive the system to critical points, allowing us to explore the effects of poising a system at criticality under some embodied constraints, disentangling the effects of criticality from other factors embedded in real-life organisms. This can also be connected to the analysis of particular features of animal behaviour that are interpreted without assuming a necessarily pragmatic perspective of analysis. For example, 'play' in humans and other species does not aim to solve a specific problem; instead, it can be understood simply as a 'rule-breaker' activity, breaking the constraints of stable regimes of behaviour, even when this is not directly required by the environment (Di Paolo et al., 2010).
Moreover, the presented model could be implemented in more complex embodied setups, for example involving specific tasks of adaptive behaviour with additional environmental constraints (e.g. exploration, decision-making, categorical perception) or biological requirements (e.g. an internal metabolism or other biological drives such as hunger or thirst), in order to observe how compliance with these biological and cognitive requirements interplays with the drive towards critical points in the neural controller of the agent. In this way, we could explore how criticality can contribute to the capabilities observed in natural organisms.
The study of criticality in living systems has traditionally rested on rather speculative grounds. Today, the increasing amount of high-quality data, together with the possibilities of statistical mechanics models, promises exciting routes towards a rigorous exploration of the governing principles of biological organisms, linking experimental evidence and data-driven models with conceptual models that explore general mechanisms and offer general explanations of the behaviour of these complex systems.
Research was supported in part by the Spanish National Programme for Fostering Excellence in Scientific and Technical Research project PSI2014-62092-EXP and by the project TIN2011-24660, funded by the Spanish Ministry of Economy and Competitiveness.
- Ackley, D. H., Hinton, G. E., & Sejnowski, T. J. (1985). A learning algorithm for Boltzmann machines. Cognitive Science, 9(1), 147–169.
- Barandiaran, X. E., & Chemero, A. (2009). Animats in the modeling ecosystem. Adaptive Behavior, 17(4), 287–292. doi:10.1177/1059712309340847
- Bechtel, W., & Richardson, R. C. (2010). Discovering complexity: Decomposition and localization as strategies in scientific research. MIT Press.
- Beggs, J. M. (2008). The criticality hypothesis: how local cortical networks might optimize information processing. Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 366(1864), 329–343.
- Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J., & Zaremba, W. (2016). OpenAI Gym. arXiv preprint arXiv:1606.01540.
- Chialvo, D. R. (2010). Emergent complex neural dynamics. Nature Physics, 6(10), 744–750. doi:10.1038/nphys1803
- Di Paolo, E. A., Rohde, M., & De Jaegher, H. (2010). Horizons for the enactive mind: Values, social interaction, and play. In Enaction: Toward a new paradigm for cognitive science (pp. 33–87).
- Dixon, J. A., Holden, J. G., Mirman, D., & Stephen, D. G. (2012). Multifractal dynamics in the emergence of cognitive structure. Topics in Cognitive Science, 4(1), 51–62. doi:10.1111/j.1756-8765.2011.01162.x
- Montúfar, G. F. (2014). Universal approximation depth and errors of narrow belief networks with discrete units. Neural Computation, 26(7), 1386–1407.
- Moore, A. W. (1990). Efficient memory-based learning for robot control (Technical Report UCAM-CL-TR-209). University of Cambridge, Computer Laboratory.
- Mora, T., & Bialek, W. (2011). Are biological systems poised at criticality? Journal of Statistical Physics, 144(2), 268–302.
- Timme, N., Alford, W., Flecker, B., & Beggs, J. M. (2014). Synergy, redundancy, and multivariate information measures: an experimentalist's perspective. Journal of Computational Neuroscience, 36(2), 119–140. doi:10.1007/s10827-013-0458-4
- Van Orden, G., Hollis, G., & Wallot, S. (2012). The blue-collar brain. Frontiers in Physiology, 3, 207. doi:10.3389/fphys.2012.00207
- Wagenmakers, E.-J., van der Maas, H. L. J., & Farrell, S. (2012). Abstract concepts require concrete models. Topics in Cognitive Science, 4(1), 87–93. doi:10.1111/j.1756-8765.2011.01164.x
- Williams, P. L., & Beer, R. D. (2011). Generalized measures of information transfer. arXiv:1102.1507 [physics].