Morphological computation (MC), in the context of embodied (artificial) intelligence, refers to processes which are conducted by the body (and environment) that otherwise would have to be performed by the brain. A nice example of MC is given by Wootton [18] (see p. 188), who describes how “active muscular forces cannot entirely control the wing shape in flight. They can only interact dynamically with the aerodynamic and inertial forces that the wings experience and with the wing’s own elasticity; the instantaneous results of these interactions are essentially determined by the architecture of the wing itself […]”
MC is relevant in the study of biological and robotic systems. In robotics, a quantification of MC can be used, e.g., as part of a reward function in a reinforcement learning setting to encourage the outsourcing of computation to the morphology, thereby enabling complex behaviors that result from comparably simple controllers. The relationship between embodiment and controller complexity has recently been studied in [10]. MC measures can also be used to evaluate a robot’s morphology during the design process. For biological systems, energy efficiency is important and an evolutionary advantage. Exploiting the embodiment can lead to more energy-efficient behaviors, and hence, MC may be a driving force in evolution.
In biological systems, movements are typically generated by muscles. Several simulation studies have shown that the muscles’ typical non-linear contraction dynamics can be exploited to generate movement with very simple control strategies. Muscles improve movement stability in comparison to torque-driven models [16] or simplified linearized muscle models (for an overview see [14]). Muscles also reduce the influence of the controller on the actual kinematics (they can act as a low-pass filter). This means that the hopping kinematics of the system is more strongly predetermined with non-linear muscle characteristics than with simplified linear muscle characteristics [6]. And finally, in hopping movements, muscles reduce the control effort (the amount of information required to control the movement) by a factor of approximately 20 in comparison to a DC-motor driven movement [7].
In view of these results we expect that MC plays an important role in the control of muscle-driven movement. To study this quantitatively, a suitable measure for MC is required. There are several approaches to formalize MC [8, 13, 12]. In our previous work we have focused on an agent-centric perspective of measuring MC [19], and we have applied an information decomposition of the sensorimotor loop to measure and better understand MC [4]. Both publications used a binary toy world model to evaluate the measures. With this toy model, it was possible to show that these measures capture the conceptual idea of MC and, in consequence, that they are candidates to measure MC in more complex and more realistic systems.
The goal of this publication is to evaluate two measures of MC on biologically realistic hopping models. With this, we want to demonstrate their applicability in non-trivial, realistic scenarios. Based on our previous findings (see above), we hypothesize that MC is higher in hopping movements driven by a non-linear muscle than in those driven by a simplified linear muscle or a DC-motor. In addition, our experiments show that a state-dependent analysis of MC for the different models leads to insights which cannot be gained from the averaged measures alone.
Furthermore, we provide detailed instructions, including MATLAB® code, on how to apply these measures to robotic systems and to computer simulations. With this, we hope to provide a tool for the evaluation of MC in a large variety of applications.
2 The Sensorimotor Loop
The conceptual idea of the sensorimotor loop is similar to the basic control loop, which is the basis of robotics and also of computer simulations of human movement. In our understanding, a cognitive system consists of a brain or controller, which sends signals to the system’s actuators, thereby affecting the system’s environment. We prefer to capture the body and environment in a single random variable named world. This is consistent with other concepts of agent-environment distinctions. An example of such a distinction can be found in the context of reinforcement learning, where the environment (world) is everything that cannot be changed arbitrarily by the agent [15]. A more thorough discussion of the brain-body-environment distinction can be found in [17, 1]. A brief example of a world, based on a robot simulation, is given below. The loop is closed as the system acquires information about its internal state (e.g. its current pose) and its world through its sensors.
For simplicity, we only discuss the sensorimotor loop for reactive systems. This is plausible, because behaviors which exploit the embodiment (e.g. walking, swimming, flying) are typically reactive. This leaves us with three (stochastic) processes S, A, and W that constitute the sensorimotor loop (see Fig. 1), which take values s, a, and w in the sensor, actuator, and world state spaces (their respective domains will be clear from the context). The directed edges (see Fig. 1) reflect causal dependencies between these random variables. We consider time to be discrete and are interested in what happens in a single time step. Therefore, we use the following notation. Random variables without any time index refer to some fixed time t, and primed variables to time t + 1, i.e., the two variables W, W′ refer to W_t and W_{t+1}.
Starting with the initial distribution over world states, denoted by p(w), the sensorimotor loop for reactive systems is given by three conditional probability distributions, p(s|w), p(a|s), and p(w′|w, a), also referred to as kernels. The sensor kernel p(s|w) determines how the agent perceives the world, the kernel p(a|s) is the agent’s controller or policy, and finally, p(w′|w, a) is the world dynamics kernel.
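The three kernels can be made concrete with a small sketch. The following Python snippet is illustrative only (the paper’s own implementations are in MATLAB and C++): each kernel is represented as a dictionary mapping a state (or state-action pair) to a distribution over successor values, and one pass through the loop samples s, a, and w′ in turn.

```python
import random

def sample(dist):
    """Draw a value from a {value: probability} distribution."""
    r, acc = random.random(), 0.0
    for value, p in dist.items():
        acc += p
        if r < acc:
            return value
    return value  # guard against rounding of the probabilities

def loop_step(w, sensor, policy, world):
    """One pass through the sensorimotor loop:
    w -> s ~ p(s|w), s -> a ~ p(a|s), (w, a) -> w' ~ p(w'|w, a)."""
    s = sample(sensor[w])
    a = sample(policy[s])
    w_next = sample(world[(w, a)])
    return s, a, w_next
```

For deterministic kernels each distribution puts probability one on a single value, which is the situation of the hopping models discussed below.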
To understand the function of the world dynamics kernel, it is useful to think of a robotic simulation. In this scenario, the world state is the state of the simulator at a given time step, which includes the pose of all objects, their velocities, applied forces, etc. The actuator state is the value that the controller passes to the physics engine prior to the next physics update. Hence, the world dynamics kernel is closely related to the forward model known in the context of robotics and biomechanics.
Based on this notation, we can now formulate quantifications of MC in the next section.
3 Quantifying Morphological Computation
In the introduction, we stated that MC relates to the computation that the body (and environment) performs that otherwise would have to be conducted by the controller (or brain). This means that we want to measure the extent to which the system’s behavior is the result of the world dynamics (i.e., the body’s internal dynamics and its interaction with its world) and how much of the behavior is determined by the policy (see Fig. 1).
In our previous publication [19] we defined two concepts to quantify MC, from which the two measures below are taken and derived.
3.1 Morphological computation as conditional mutual information (MC_W)
The first quantification used in this work was introduced in [19]. The idea behind it can be summarized in the following way. The world dynamics kernel p(w′|w, a) captures the influence of the actuator signal a and the previous world state w on the next world state w′. A complete lack of MC would mean that the behavior of the system is entirely determined by the system’s controller, and hence, by the actuator state a. In this case, the world dynamics kernel reduces to p(w′|a). Every deviation from this assumption means that the previous world state had an influence, and hence, that information about w changes the distribution over the next world states w′. The discrepancy of these two distributions can be measured with the average Kullback-Leibler divergence, which is also known as the conditional mutual information I(W′; W | A). This distance is formally given by (see also Alg. 2 in App. 8)

MC_W = I(W′; W | A) = Σ_{w′, w, a} p(w′, w, a) log₂ ( p(w′ | w, a) / p(w′ | a) )   (1)
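As a concrete illustration, this conditional mutual information can be estimated from discretized trajectories with plug-in frequency counts. The sketch below is in Python rather than the paper’s MATLAB, and the function name and interface are our own:

```python
from collections import Counter
from math import log2

def mc_w(w, a, w_next):
    """Plug-in estimate (in bits) of the conditional mutual information
    I(W'; W | A) from three equal-length sequences of discrete (binned)
    world, actuator, and next-world states."""
    n = len(w)
    c_wwa = Counter(zip(w_next, w, a))   # counts for (w', w, a)
    c_wa = Counter(zip(w, a))            # counts for (w, a)
    c_na = Counter(zip(w_next, a))       # counts for (w', a)
    c_a = Counter(a)                     # counts for a
    mc = 0.0
    for (wn, wi, ai), c in c_wwa.items():
        p_joint = c / n                          # p(w', w, a)
        p_given_wa = c / c_wa[(wi, ai)]          # p(w' | w, a)
        p_given_a = c_na[(wn, ai)] / c_a[ai]     # p(w' | a)
        mc += p_joint * log2(p_given_wa / p_given_a)
    return mc
```

If the next world state simply copies the previous one (and is independent of the action), the full entropy of the world is attributed to MC; if it copies the action, the measure is zero.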
3.2 Morphological computation as comparison of behavior and controller complexity (MC_MI)
The second quantification follows concept one of [19]. The assumption underlying this concept is that, for a given behavior, MC decreases with an increasing effect of the action a on the next world state w′. The corresponding measure cannot be used in systems with a deterministic policy, because for these systems the conditional mutual information I(W′; A | W) vanishes (see App. 9). Therefore, for this publication, we require an adaptation that operates on world states and is applicable to deterministic systems.
The new measure compares the complexity of the behavior with the complexity of the controller. The complexity of the behavior can be measured by the mutual information of consecutive world states, I(W′; W), and the complexity of the controller can be measured by the mutual information of sensor and actuator states, I(S; A), for the following reason. The mutual information of two random variables can also be written as a difference of entropies:

I(W′; W) = H(W′) − H(W′ | W)

which, applied to our setting, means that the mutual information is high if we have a high entropy over world states (first term) that are highly predictable from the preceding world state (low second term). Summarized, this means that the mutual information is high if the system shows a diverse but non-random behavior. Obviously, this is what we would like to see in an embodied system. On the other hand, a system with high MC should produce a complex behavior based on a controller with low complexity. Hence, we want the mutual information I(S; A) to be low, which either means that the policy has a low diversity in its output (low entropy H(A)) or that there is only a weak correlation between sensor and actuator states (high conditional entropy H(A | S)). Therefore, we define the second measure as the difference of these two terms (see also Alg. 4 in Sec. 8):

MC_MI = I(W′; W) − I(S; A)   (2)
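The second measure, the difference between behavior complexity and controller complexity, can be estimated the same way. Again a Python sketch with our own naming, built on a plug-in mutual information estimator:

```python
from collections import Counter
from math import log2

def mutual_information(x, y):
    """Plug-in estimate of I(X; Y) in bits from two equal-length
    sequences of discrete values."""
    n = len(x)
    c_xy, c_x, c_y = Counter(zip(x, y)), Counter(x), Counter(y)
    return sum(c / n * log2(c * n / (c_x[xi] * c_y[yi]))
               for (xi, yi), c in c_xy.items())

def mc_mi(w, w_next, s, a):
    """Behavior complexity I(W'; W) minus controller complexity I(S; A)."""
    return mutual_information(w_next, w) - mutual_information(s, a)
```

A diverse, self-predictive world trajectory paired with sensor and actuator streams that share little information yields a high value.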
For deterministic systems, such as those studied in this work, the two measures are closely related. In particular, it holds that MC_MI = MC_W (see App. 10). This equality may not be satisfied exactly in practice, because discretization can introduce stochasticity.
Note that in the case of a passive observer, i.e., a system that observes the world but in which there is no causal dependency between the action and the next world state (i.e., a missing connection between A and W′ in Fig. 1), the controller complexity I(S; A) in Eq. (2) will reduce the amount of MC measured by MC_MI, although the actuator state does not influence the world dynamics. This might be perceived as a potential shortcoming. In the settings discussed in this paper, e.g. data recorded from biological or robotic systems, we think that this will not be an issue.
The next section introduces the hopping models on which the two measures are evaluated.
4 Hopping models
In a reduced model, hopping motions can be described by a one-dimensional differential equation:

m ÿ = F_leg − m g

where the point mass m represents the total mass of the hopper, which is accelerated by the gravitational force m g in negative y-direction. An opposing leg force F_leg in positive y-direction can act only during ground contact. Hopping motions are then characterized by alternating flight and stance phases. For this manuscript, we investigated three different models for the leg force. All models have in common that the leg force depends on a control signal u and the system state (y, ẏ), i.e., F_leg = F_leg(u, y, ẏ), meaning that the force modulation depends partially on the controller output and partially on the dynamic characteristics, or material properties, of the actuator. The control parameters of all three models were adjusted to generate the same periodic hopping height. All models were implemented in MATLAB® Simulink™ (Ver2014b) and solved with the ode45 Dormand–Prince solver with fixed absolute and relative tolerances; model output was generated at a fixed sampling frequency.
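To illustrate the structure of these dynamics, alternating flight and stance with a state-dependent leg force, here is a minimal explicit-Euler sketch in Python. The linear spring leg and all parameter values are illustrative stand-ins, not one of the three actuator models studied below, and the paper’s models are solved with ode45 in Simulink rather than Euler integration:

```python
def simulate_hopper(leg_force, m=80.0, g=9.81, l0=1.0,
                    y0=1.2, dt=1e-4, t_end=2.0):
    """Explicit-Euler sketch of m*y'' = F_leg - m*g, where the leg
    force acts only during ground contact (here: y <= l0).
    leg_force(y, v) returns the leg force in newtons."""
    y, v, t, trace = y0, 0.0, 0.0, []
    while t < t_end:
        f = leg_force(y, v) if y <= l0 else 0.0   # stance vs. flight
        v += (f / m - g) * dt
        y += v * dt
        t += dt
        trace.append(y)
    return trace

# Illustrative linear spring leg -- not one of the paper's actuators:
spring = lambda y, v: 2.0e4 * (1.0 - y)   # k * (l0 - y)
heights = simulate_hopper(spring)
```

Dropping the mass from above the leg’s rest length produces the characteristic alternation of ballistic flight phases and spring-loaded stance phases.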
4.1 Muscle-Fiber model (MusFib)
A biological muscle generates its active force in muscle fibers, whose contraction dynamics are well studied. It was found that the contraction dynamics are qualitatively and quantitatively (after some normalization) very similar across muscles of all sizes and across many species. In the MusFib model, the leg force is modeled to incorporate the contraction dynamics of active muscle fibers. The model has been motivated and described in detail elsewhere [6, 5, 7]. In a nutshell, the material properties of the muscle fibers are characterized by two terms modulating the leg force
The first term represents the muscle activity. The activity depends on the neural stimulation of the muscle and is governed by biochemical processes modeled as a first-order ODE called activation dynamics
with a fixed time constant. The second term in Eq. (4) considers the force-length and force-velocity relations of biological muscle fibers. It is a function of the system state, i.e., of the muscle length and the muscle contraction velocity, during ground contact, and it is constant during flight.
Here we use a maximum isometric muscle force, an optimal muscle length, two force-length parameters, and three force-velocity parameters.
In this model, periodic hopping is generated with a controller representing a mono-synaptic force feedback. The neural muscle stimulation is based on the time-delayed muscle fiber force, with a constant feedback gain and a constant stimulation at touchdown.
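The mono-synaptic force feedback described above can be sketched as follows. The delay buffer, the normalization of the fiber force by its maximum, the clipping to [0, 1], and all parameter values in the usage below are our illustrative assumptions:

```python
from collections import deque

def make_force_feedback(gain, u0, delay_steps, f_max):
    """Mono-synaptic force-feedback sketch: the stimulation is the
    touchdown value u0 plus gain times the time-delayed fiber force,
    here normalized by f_max and clipped to the admissible range
    [0, 1]. The buffer length realizes the feedback delay."""
    buf = deque([0.0] * delay_steps, maxlen=delay_steps)
    def stimulation(f_fiber):
        delayed = buf[0]          # force from delay_steps samples ago
        buf.append(f_fiber)
        u = u0 + gain * delayed / f_max
        return min(max(u, 0.0), 1.0)
    return stimulation
```

The controller needs no trajectory to track: the sensed force itself, fed back with a delay, sustains the periodic hopping.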
This model considers neither leg geometry nor tendon elasticity and is therefore the simplest hopping model with muscle-fiber-like contraction dynamics. The model output consisted of the world state, the sensor state, and the actuator control command; for this model, these are the values that the random variables W, S, and A take at each time step.
4.2 Linearized Muscle-Fiber model (MusLin)
This model differs from the MusFib model only in the representation of the force-length-velocity relation (see Eq. (6)). More precisely, the force-length relation is neglected and the force-velocity relation is approximated linearly. The feedback gain and the stimulation at touchdown were chosen to achieve the same hopping height as the MusFib model.
4.3 DC-Motor model (DCMot)
An approach to mimicking biological movement in a technical system (robot) is to track recorded kinematic trajectories with electric motors and a PD-control approach. The DCMot model implements this approach (slightly modified from the Simulink model credited in the acknowledgments). The leg force generated by the DC-motor is modeled as
F_leg = γ k_M i

where k_M is the motor constant, i the current through the motor windings, and γ the ratio of an ideal gear translating the rotational torque and movement of the motor into the translational leg force and movement required for hopping. The electrical characteristics of the motor can be modeled as

L di/dt = u − R i − k_M γ ẏ

where u is the armature voltage (the control signal), R the resistance, and L the inductance of the motor windings. The motor parameters were taken from a commercially available DC-motor commonly used in robotics applications (Maxon EC-max 40). As this relatively small motor would not be able to lift the same mass, the body mass was adapted to guarantee comparable accelerations.
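The standard armature-circuit model can be sketched as below. The parameter values are placeholders, not the Maxon EC-max 40 data used in the paper, and the gear ratio and function names are our own:

```python
def motor_current_step(i, u, omega, dt, R=0.6, L_ind=0.8e-3, k_m=0.04):
    """One explicit-Euler step of the standard armature circuit
    L * di/dt = u - R*i - k_m*omega (the last term is the back EMF)."""
    return i + (u - R * i - k_m * omega) / L_ind * dt

def motor_force(i, k_m=0.04, gear=100.0):
    """Leg force from motor torque through an ideal gear:
    F = gear * k_m * i."""
    return gear * k_m * i
```

With a constant voltage and a stalled motor, the current settles at u/R with the time constant L/R, which is the behavior the winding resistance and inductance encode.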
The recorded kinematic trajectories of position and velocity during ground contact were taken from the periodic hopping trajectory of the MusFib model. This trajectory was enforced with a PD-controller with proportional and derivative feedback gains.
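A PD-control law of the kind described above can be sketched as follows; the gain values are illustrative placeholders, not the ones used in the paper:

```python
def pd_force(y, v, y_des, v_des, kp=1.0e4, kd=1.0e2):
    """PD-control law: a force proportional to the position error plus
    a damping term proportional to the velocity error."""
    return kp * (y_des - y) + kd * (v_des - v)
```

In contrast to the force feedback of the muscle models, this controller must be supplied with the full desired trajectory (y_des, v_des) at every time step.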
This model is the simplest implementation of negative feedback control that allows one to enforce a desired hopping trajectory on a technical system. The model output was the world state, the sensor state, and the actuator control command.
This section discusses the experiments that were conducted with the hopping models and the preprocessing of the data. Algorithms for the calculations are provided in the appendix (Sec. 8), and the implemented MATLAB® code can be downloaded from http://github.com/kzahedi/MC/ (commit c332c18, 30. Nov. 2015). A C++ implementation is available as well.
At this stage, the measures operate on discrete state spaces (see Eqs. (1)–(3) and Algs. 2–4). Hence, the data was discretised in the following way. To ensure the comparability of the results, the domain (range of values) of each variable (e.g. the position y) was calculated over all hopping models. Then, the data of each variable was discretised into 300 values (bins). The algorithm for the discretisation is described in Alg. 1. Different binning resolutions were evaluated, and the most stable results were found for more than 100 bins. Finding the optimal binning resolution is a problem in itself and beyond the scope of this work. In practice, however, a reasonable binning can be found by increasing the number of bins until a further increase has little influence on the outcome of the measures.
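A uniform binning of the kind described by Alg. 1 can be sketched in Python as follows (the reference implementation is the MATLAB code linked above; this helper and its interface are ours):

```python
def discretise(values, v_min, v_max, bins=300):
    """Map each value to a bin index in [0, bins - 1] using a uniform
    grid over [v_min, v_max]. The domain is computed over all hopping
    models so that the bin edges are comparable between models."""
    width = (v_max - v_min) / bins
    return [min(int((v - v_min) / width), bins - 1) for v in values]
```

The clamp on the last bin ensures that a value exactly at v_max does not produce an out-of-range index.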
The possible ranges of actuator values differ between the motor and the muscle models. For the muscle models, the values lie in the unit interval [0, 1], whereas the values for the motor can be larger (see above). Hence, to ensure comparability, we normalized the actions of the motor to the unit interval before they were discretised.
The hopping models are deterministic, which means that only a few hopping cycles are necessary to estimate the required probability distributions. To ensure comparability of the results, we parameterized the hopping models to achieve the same hopping height.
Tab. 1 shows the value of the two MC measures for the three hopping models.
|       | MusFib     | MusLin     | DCMot      |
| MC_W  | 7.219 bits | 4.975 bits | 4.960 bits |
| MC_MI | 7.310 bits | 5.153 bits | 4.990 bits |
Compared to the MusFib model, the two other models result in significantly lower values of MC. This result complements previous findings showing that the minimum amount of information required to generate hopping is reduced by the material properties of the non-linear muscle fibers, compared to the DC-motor driven model [7].
This also confirms previous findings that the non-linear contraction dynamics reduce the influence of the controller on the actual hopping kinematics in comparison to a linearized muscle model [6, 5]. To better understand the differences between the models, we plotted the state-dependent MC (see Alg. 3). Fig. 3 shows the values of MC_W for each state of the models during two hopping cycles. We chose to discuss MC_W only, because the plots of MC_W and MC_MI are very similar, and hence, a discussion of the state-dependent MC_MI would not provide any additional insights. The plots for all models and the entire data are shown in Fig. 4.
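The state-dependent analysis evaluates, per time step, the pointwise contribution log₂ p(w′|w, a) − log₂ p(w′|a), whose time average is the conditional mutual information of Sec. 3.1. A Python sketch, with our own naming:

```python
from collections import Counter
from math import log2

def state_dependent_mc(w, a, w_next):
    """Per time step t, return log2 p(w'|w,a) - log2 p(w'|a); the
    average of these pointwise values is the conditional mutual
    information I(W'; W | A) (cf. Alg. 3)."""
    n = len(w)
    c_wwa = Counter(zip(w_next, w, a))
    c_wa = Counter(zip(w, a))
    c_na = Counter(zip(w_next, a))
    c_a = Counter(a)
    pointwise = []
    for t in range(n):
        p_given_wa = c_wwa[(w_next[t], w[t], a[t])] / c_wa[(w[t], a[t])]
        p_given_a = c_na[(w_next[t], a[t])] / c_a[a[t]]
        pointwise.append(log2(p_given_wa / p_given_a))
    return pointwise
```

Plotting these pointwise values over the hopping cycle yields curves of the kind discussed below, revealing where in the cycle the morphology contributes most.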
The blue line shows the state-dependent MC for the non-linear muscle model (MusFib), the orange line for the linear muscle model (MusLin), and the green line for the motor model (DCMot). In the figure, the lower lines show the position of the center of mass over time. The DCMot model is parameterized to follow the trajectory of the MusFib model (see Eq. (12)), which is why the blue and green position plots coincide. The original data is shown in Fig. 4. There are basically three phases which need to be distinguished (indicated by the vertical lines): first, the flight phase, during which the hopper does not touch the ground (position plots are above the red line); second, the deceleration phase, which occurs after landing (position is below the red line and still declining); and finally, the acceleration phase, in which the position is below the red line but increasing.
The first observation is that MC is equal for all models during most of the flight phase (position above the red line) and that it seems to be proportional to the velocity of the system. During flight, the behavior of the system is governed only by the interaction of the body (mass, velocity) and the environment (gravity), and not by the actuator models. This explains why the values coincide for the three models.
For all models, MC drops as soon as the systems touch the ground. DCMot and MusLin reach their highest values only during the flight phase, which can be expected at least from a motor model that is not designed to exploit MC. The graphs also reveal that the MusLin model shows slightly higher MC around the mid-stance phase than the DCMot model. For the non-linear muscle model, the behavior is different. Shortly after touching the ground, the system shows a strong decline of MC, which is followed by a strong increase while the muscle decelerates the mass. Contrary to the other two models, the non-linear muscle model MusFib shows the highest values when the muscle is contracted the most (until mid-stance). This is an interesting result, as it shows that the non-linear muscle is capable of exhibiting more MC while the muscle is operating than during the flight phase, in which the behavior is determined only by the interaction of the body and environment.
This work presented two different quantifications of MC, including algorithms and MATLAB® code to use them. We demonstrated their applicability in experiments with non-trivial, biologically realistic hopping models and discussed the importance of a state-based analysis of morphological computation. The first quantification, MC_W, measures MC as the conditional mutual information of the world and actuator states: morphological computation is the additional information that the previous world state W provides about the next world state W′, given that the current actuator state A is known. The second quantification, MC_MI, compares the behavior and controller complexity to determine the amount of MC.
The numerical results of the two quantifications confirm our hypothesis that the MusFib model shows significantly higher MC than the two other models (MusLin, DCMot). We also showed that a state-dependent analysis of MC leads to additional insights: the non-linear muscle model is capable of showing significantly more morphological computation in the stance phase than in the flight phase, during which the behavior is determined only by the interaction of the body and environment. This shows that morphological computation is not only behavior- but also state-dependent. Future work will include the analysis of additional behaviors, such as walking and running, for which we expect, based on the findings of this work, to see more morphological computation for the non-linear muscle model MusFib.
This section presents the algorithms in pseudo-code. The MATLAB® code that was used for this publication can be downloaded from http://github.com/kzahedi/MC/ (commit c332c18, 30. Nov. 2015). A C++ implementation is available as well.
Note that we use a compressed notation in Alg. 2–5.
9 I(W′; A | W) for deterministic systems

In the case where the sensor kernel, the policy, and the world dynamics kernel are deterministic, the action A is a function of the world state W, and hence, the conditional entropy H(A | W) vanishes. It follows that

I(W′; A | W) = H(A | W) − H(A | W, W′) = 0.
10 Relation between MC_W and MC_MI
From the chain rule of mutual information we obtain the equality

I(W′; W, A) = I(W′; A) + I(W′; W | A) = I(W′; W) + I(W′; A | W)

from which we can derive

MC_MI = I(W′; W) − I(S; A) = MC_W + I(W′; A) − I(W′; A | W) − I(S; A) = MC_W + H(A | S) − H(A | W′) − I(W′; A | W).

For deterministic systems the conditional entropies H(A | S) and H(A | W′) vanish, as does I(W′; A | W) (see App. 9), and hence MC_MI = MC_W. We show this exemplarily for H(A | S): if the action a is a function of the sensor state s, then p(a|s) is either one or zero, because there is exactly one actuator value for every sensor state. Hence, H(A | S) = 0. The equality does not hold exactly in Tab. 1, because the discretization introduces stochasticity, and hence, the conditional entropies are only approximately zero, i.e., MC_MI ≈ MC_W.
This work was partly funded by the DFG Priority Program Autonomous Learning (DFG-SPP 1527). The DC-motor model is based on a Simulink model provided by Roger Aarenstrup (http://in.mathworks.com/matlabcentral/fileexchange/11829-dc-motor-model).
-  N. Ay and W. Löhr. The umwelt of an embodied agent — a measure-theoretic definition. Theory in Biosciences, to appear.
-  N. Ay and K. Zahedi. An information theoretic approach to intention and deliberative decision-making of embodied systems. In Advances in cognitive neurodynamics III. Springer, Heidelberg, 2013.
-  A. Clark. Being There: Putting Brain, Body, and World Together Again. MIT Press, Cambridge, MA, USA, 1996.
-  K. Ghazi-Zahedi and J. Rauh. Quantifying morphological computation based on an information decomposition of the sensorimotor loop. In Proceedings of the 13th European Conference on Artificial Life (ECAL 2015), pages 70–77, July 2015.
-  D. F. B. Haeufle, S. Grimmer, K.-T. Kalveram, and A. Seyfarth. Integration of intrinsic muscle properties, feed-forward and feedback signals for generating and stabilizing hopping. Journal of the Royal Society Interface, 9(72):1458–1469, 2012.
-  D. F. B. Haeufle, S. Grimmer, and A. Seyfarth. The role of intrinsic muscle properties for stable hopping—stability is achieved by the force–velocity relation. Bioinspiration & Biomimetics, 5(1):016004, 2010.
-  D. F. B. Haeufle, M. Günther, G. Wunner, and S. Schmitt. Quantifying control effort of biological and technical movements: An information-entropy-based approach. Phys. Rev. E, 89:012716, Jan 2014.
-  H. Hauser, A. Ijspeert, R. M. Füchslin, R. Pfeifer, and W. Maass. Towards a theoretical foundation for morphological computation with compliant bodies. Biological Cybernetics, 105(5-6):355–370, 2011.
-  A. S. Klyubin, D. Polani, and C. L. Nehaniv. Organization of the information flow in the perception-action loop of evolved agents. In Proceedings of the 2004 NASA/DoD Conference on Evolvable Hardware, pages 177–180, 2004.
-  G. Montúfar, K. Ghazi-Zahedi, and N. Ay. A theory of cheap control in embodied systems. PLoS Comput Biol, 11(9):e1004427, 2015.
-  R. Pfeifer and J. C. Bongard. How the Body Shapes the Way We Think: A New View of Intelligence. The MIT Press (Bradford Books), Cambridge, MA, 2006.
-  D. Polani. An informational perspective on how the embodiment can relieve cognitive burden. In 2011 IEEE Symposium on Artificial Life (ALIFE), pages 78–85, April 2011.
-  E. A. Rückert and G. Neumann. Stochastic optimal control methods for investigating the power of morphological computation. Artificial Life, 19(1):115–131, 2013.
-  S. Schmitt and D. F. B. Haeufle. Mechanics and Thermodynamics of Biological Muscle - A Simple Model Approach. In Alexander Verl, Alin Albu-Schäffer, Oliver Brock, and Annika Raatz, editors, Soft Robotics, pages 134–144. Springer, 1 edition, 2015.
-  R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
-  A. J. van Soest and M. F. Bobbert. The contribution of muscle properties in the control of explosive movements. Biological Cybernetics, 69(3):195–204, July 1993.
-  J. von Uexkuell. A stroll through the worlds of animals and men. In C. H. Schiller, editor, Instinctive Behavior, pages 5–80. International Universities Press, New York, 1957.
-  R. J. Wootton. Functional morphology of insect wings. Annual Review of Entomology, 37(1):113–140, 1992.
-  K. Zahedi and N. Ay. Quantifying morphological computation. Entropy, 15(5):1887–1915, 2013.
-  K. Zahedi, N. Ay, and R. Der. Higher coordination with less control – a result of information maximization in the sensori-motor loop. Adaptive Behavior, 18(3–4):338–355, 2010.