Going through different developmental phases, human children continuously acquire control over their own bodies. Eventually, they can distinguish between self and others and predict the consequences of their own actions. This pre-reflective identification is often called the minimal self, and consists of two major components: a sense of agency and a sense of body ownership. These notions have recently become relevant in developmental robotics as well. Firstly, computational models of the self can provide interesting insights into the processes of self-development in humans; secondly, an adaptive self-model can be crucial for intuitive and adaptive interaction in robotics.
Different models have been proposed in the past few years to address one or more facets of this problem. In a recent paper, Lang, Schillaci and Hafner 
presented a study on the sense of agency and object permanence in which a robot predicted its own arm movements using a convolutional neural network. Hoffmann et al. investigated the formation of a body representation in a humanoid robot equipped with artificial skin during experiments on touch. Another study investigates how self-touch in human infants relates to their own motor actions, and suggests corresponding robotic models. An information-theoretic approach to the formation of body maps in robots has been proposed in , in which the structure of the body map results from information distances between sensory data during a particular behaviour of the robot.
An approach for body and non-body discrimination in robotics has been proposed by Yoshikawa et al. , who approximate posture sensations by Gaussian distributions. A predictive coding approach to generate visuo-proprioceptive patterns has been implemented by Hwang et al. on a simulated iCub robot. In this study, the robot was trained to imitate gestures of another robot or of its mirror image displayed on a screen. Hinz et al. present a study investigating prediction errors in tactile-visual data in both humans and robots, using a rubber-hand illusion setup typically employed to study properties of body ownership.
Gallagher argues that body image and body schema are separate concepts. He suggests that the body image is a conscious representation of the body, whereas the body schema is a prepersonal structural mapping. In artificial systems, a clear distinction between these two concepts is more difficult to make, and their respective definitions often vary. In this paper, we use the term "body image" in a very literal sense, as the "appearance of the body in the visual flow", and we argue that body image acquisition depends on the predictive capabilities of the agent. Extending the studies of Lang et al. , we suggest a mechanism for an artificial agent to acquire its body image from scratch through predictive processes, in a self-supervised way.
The objective of this work is to study how an agent could learn its own body image (appearance) in a self-supervised way. Our working hypothesis is that when predicting sensorimotor experiences, the body appears as a primary source of predictability, as it is always present and does not change, or changes only very slowly. As a consequence, when the agent reaches a given motor state, the "appearance" of the body in the sensory flow is consistent throughout the exploration of the environment. Compared to the rest of the sensory flow induced by the environment, which the agent does not directly control, this body image should thus be significantly more predictable.
We can formalize this intuition by considering the mapping between the agent’s motor state and sensory state.
We denote by m_t = (m_t^1, ..., m_t^N) the proprioceptive motor state, or posture, in which the agent is at time t, where each m_t^i corresponds to the static configuration of a joint. Similarly, we denote by s_t = (s_t^1, ..., s_t^K) the sensory state that the agent receives from its exteroceptive sensors at time t, where each s_t^k is an individual sensory component (see Fig. 1).
We assume in this work that the sensors provide an instantaneous reading of the state of the world, and that, for any motor state m, the sensory state s can be divided into two subsets: s^b(m) and s^e(m). The subset s^b gathers all components associated with the body, while its complement s^e gathers the ones associated with the environment. In other words, we assume that the elementary sensory excitations are not due to a mixed contribution of the body and the environment. This is typically the case with a camera: for a given body posture, a subset of pixels (and their respective channels) corresponds to body parts in the field of view, while other pixels correspond to the environment.
Without a priori knowledge about the state of the environment, the mapping m_t ↦ s_t is not deterministic. Putting aside potential sensorimotor noise, the uncertainty about the state of the environment limits the ability to predict s_t. However, if the body appearance stays temporally consistent, we hypothesize that the subset s^b should exhibit a significantly lower variability in time than s^e, i.e.:

s_t^k ∈ s^b(m) ⇒ Var_t[s_t^k | m_t = m] ≈ 0,
s_t^k ∈ s^e(m) ⇒ Var_t[s_t^k | m_t = m] ≫ 0,   (1)

where each s_t^k is treated as a random variable over time. In an environment which exhibits a sufficient amount of variability, this intrinsic difference can be used, in a data-driven way, to distinguish the sensory components which belong to the body image from those which belong to the environment (symmetrizing the implication arrows of Eq. (1)). Note that the typical RGB excitations of a pixel are here considered as separate components.
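This intuition can be sketched numerically: for a stack of images captured in the same motor state but in varying environments, a simple per-component variance threshold separates the consistent (body) components from the variable (environment) ones. The function name and the threshold value below are illustrative, not part of the proposed method:

```python
import numpy as np

def body_mask_from_variability(images, var_threshold=1e-3):
    """Given N images (N, H, W, 3) captured in the SAME motor state but in
    different environments, mark as 'body' the components whose variance
    over time is low, i.e. those that stay consistent across environments."""
    variance = np.var(images, axis=0)   # per-component variance, shape (H, W, 3)
    return variance < var_threshold     # boolean body mask, shape (H, W, 3)

# Toy example: a 2x2 image where the top row is 'body' (constant across
# samples) and the bottom row is 'environment' (random in each sample).
rng = np.random.default_rng(0)
imgs = rng.random((100, 2, 2, 3))
imgs[:, 0] = 0.5                        # body components never change
mask = body_mask_from_variability(imgs)
```

In practice, the agent never computes this variance explicitly; the forward model introduced next captures the same statistical difference through its prediction errors.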
More formally, the agent can learn a forward model mapping each motor state m_t to a prediction ŝ_t of the sensory state s_t, as well as to a prediction ê_t of the prediction error |ŝ_t − s_t| (see Fig. 1). According to Eq. (1), the elementary prediction errors should be significantly lower if s_t^k ∈ s^b than if s_t^k ∈ s^e, allowing the two subsets to be distinguished based on the accuracy of the forward model.
II-B Neural network architecture
We use a deep neural network to learn both the forward mapping m_t ↦ ŝ_t and the error prediction mapping m_t ↦ ê_t. As displayed in Fig. 2, the network first passes the input motor state through two fully-connected layers. The last of these layers is then reshaped to form a low-resolution image-like representation that can be spatially processed for deconvolution. This intermediate representation is then fed to two distinct branches which respectively predict the output image ŝ_t and the prediction error ê_t. Each branch is composed of three successive deconvolutional layers and three convolutional layers. The three deconvolution layers upscale the low-resolution representation to the final size of the image. Following the good practice proposed in , each deconvolution consists of an upscaling operation followed by a convolution. The next three convolutional layers perform typical convolution operations , and progressively reduce the depth of the input from 32 channels to 3, corresponding to the RGB channels of the pixels. In each branch, all layers use the SeLU activation function [14], in order to guarantee the positivity of each ŝ_t^k and ê_t^k. Throughout the network, all convolutions use kernels of size , with a stride of 1.
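As an illustration of the upscale-then-convolve flavour of deconvolution, here is a minimal single-channel numpy sketch; the nearest-neighbour upscaling factor, kernel, and sizes are illustrative assumptions, not the network's actual parameters:

```python
import numpy as np

def upsample_then_conv(x, kernel):
    """'Deconvolution' as an upscaling followed by a convolution, which avoids
    the checkerboard artifacts of strided transposed convolutions.
    x: (H, W) single-channel array, kernel: (3, 3), 'same' zero padding."""
    up = x.repeat(2, axis=0).repeat(2, axis=1)   # (2H, 2W) nearest-neighbour
    padded = np.pad(up, 1)
    out = np.empty_like(up, dtype=float)
    for i in range(up.shape[0]):
        for j in range(up.shape[1]):
            out[i, j] = np.sum(padded[i:i+3, j:j+3] * kernel)
    return out

x = np.arange(4.0).reshape(2, 2)
identity = np.zeros((3, 3))
identity[1, 1] = 1.0                             # identity kernel for the demo
y = upsample_then_conv(x, identity)              # equals the upscaled input
```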
A different loss is associated with each branch of the network. The first branch (top one in Fig. 2) outputs a predicted image ŝ_t, and its associated reconstruction loss is:

L_img = (1/K) Σ_k |ŝ_t^k − s_t^k|,

where ŝ_t^k is the k-th predicted sensory component for input m_t, s_t^k is the corresponding ground-truth component, and |·| is the absolute value operator. It corresponds to the mean value of the L1 norm between ŝ_t and s_t, normalized by the number of sensory components K (three times the number of pixels).
The second branch (bottom one in Fig. 2) outputs the predicted absolute prediction error ê_t between the predicted image ŝ_t and the ground-truth image s_t, and its associated loss is:

L_err = (1/K) Σ_k |ê_t^k − |ŝ_t^k − s_t^k||.

It corresponds to the mean value of the L1 norm between ê_t and the actual prediction error |ŝ_t − s_t|, normalized by the number of sensory components. Finally, the total loss is a weighted composition of these two losses:

L = L_img + α L_err,

where α denotes a scalar relative weight.
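The two losses described above (an absolute reconstruction error, plus an error-prediction term, combined with a scalar weight) can be sketched on plain arrays; the function name `composite_loss` and treating the weight as a fixed argument are our simplifications:

```python
import numpy as np

def composite_loss(s_pred, e_pred, s_true, alpha=1.0):
    """Total loss sketched from the paper's description: mean absolute
    reconstruction error on the predicted image, plus the mean absolute
    difference between the predicted and the actual per-component errors,
    weighted by a scalar alpha. All arrays hold K sensory components."""
    err = np.abs(s_pred - s_true)            # actual per-component error
    loss_img = err.mean()                    # reconstruction loss
    loss_err = np.abs(e_pred - err).mean()   # error-prediction loss
    return loss_img + alpha * loss_err

s_true = np.array([0.0, 1.0, 0.5])
s_pred = np.array([0.0, 0.5, 0.5])
e_pred = np.array([0.0, 0.5, 0.0])           # perfectly predicted errors here
total = composite_loss(s_pred, e_pred, s_true, alpha=1.0)
```

Note that the error branch is trained against the error of the current image prediction, which is why the image branch must first become reliable before the error branch has a meaningful target.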
This total loss is minimized using the ADAM optimizer , for iterations, and with a learning rate linearly decreasing from to during training. At each iteration, a random mini-batch of samples is fed to the network to compute the (stochastic) gradient. Finally, the relative weight α is set to increase linearly from to during the first iterations. This helps stabilize the convergence by first focusing the optimization on the image prediction branch, so that its output can be used as a reliable target for the second branch.
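Both the decreasing learning rate and the increasing loss weight can be produced by the same linear interpolation helper; the endpoint values and step counts below are placeholders, as the exact values are not reproduced here:

```python
def linear_schedule(step, total_steps, start, end):
    """Linearly interpolate a hyperparameter (e.g. the learning rate, or the
    relative loss weight) from `start` to `end` over `total_steps` steps,
    then hold it at `end`. Endpoint values here are illustrative only."""
    t = min(step / total_steps, 1.0)
    return start + t * (end - start)

lr0   = linear_schedule(0,    1000, 1e-3, 1e-4)   # start of training
lr_mid = linear_schedule(500, 1000, 1e-3, 1e-4)   # halfway through
lr_end = linear_schedule(2000, 1000, 1e-3, 1e-4)  # held at the end value
```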
III-A Sensorimotor data generation
In order to test our approach, we created a dataset of images in which a simulated Pepper robot observes its own right arm in different environments. To quickly generate large datasets without directly using the physical robot, and to easily obtain a ground-truth body image mask, we created a synthetic experimental dataset by composing images.
First, the realistic Pepper simulator qibullet was used to quickly generate right arm configurations . They were randomly generated by uniformly drawing the 4 following joint orientations , , , (radians). The motor exploration thus resembles a typical motor babbling . During this exploration, the simulated Pepper is placed in front of a green wall, with the head oriented towards the right arm (downward pitch of , rightward yaw of (radians)). For each configuration, the pixels image captured by the robot's camera is recorded. As can be seen in Fig. 3, these images correspond to a first-person view of the arm in front of a uniform green surface. This clear bimodal structure allows us to easily replace the green surface with any desired background. We filled it with random images from a previous dataset collected while a real Pepper robot moved in an office-like environment. The robot's body is not visible in this previous dataset, which means that the arm images generated using qibullet can be embedded in them without ambiguity. The whole creation of the dataset is illustrated in Fig. 3, where one can see the arm configurations and the background images used to compose the experimental dataset.
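The compositing step can be sketched as a simple chroma-key replacement, where the ground-truth body mask is given by the non-green pixels. A pure-green key is assumed here for clarity; the actual pipeline may tolerate near-green values:

```python
import numpy as np

def composite(arm_img, background, green=(0, 255, 0)):
    """Replace the uniform green backdrop of the simulated view with a
    background image, returning the composed sample together with the
    ground-truth body mask (the non-green pixels)."""
    key = np.array(green, dtype=arm_img.dtype)
    body_mask = ~np.all(arm_img == key, axis=-1)        # (H, W) boolean
    out = np.where(body_mask[..., None], arm_img, background)
    return out, body_mask

# Toy 2x2 example: one 'arm' pixel over a green backdrop.
arm = np.zeros((2, 2, 3), dtype=np.uint8)
arm[:] = (0, 255, 0)                                    # all green
arm[0, 0] = (120, 90, 60)                               # a single body pixel
bg = np.full((2, 2, 3), 200, dtype=np.uint8)
composed, gt_mask = composite(arm, bg)
```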
Note that in this compositional approach, the ambient lighting of the scene (background) does not affect the appearance of the robot’s arm. Due to its probabilistic nature, we however expect our approach to be robust to small changes in lighting conditions, such as the ones observed in the office-type background dataset. This however needs to be confirmed by future experiments.
Before being fed to the network, both the motor states and sensory states (images) are pre-processed. The motor states are normalized such that each component spans the interval, and the sensory states are normalized such that all components lie in (instead of originally). Finally, the input images are downsampled to a resolution, in order to limit the network size and computation time (although no theoretical limitation prevents the approach from scaling to greater resolutions).
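Assuming the motor components are mapped to [-1, 1] from per-joint bounds and the pixel values to [0, 1] from the original [0, 255] range, this pre-processing can be sketched as:

```python
import numpy as np

def preprocess(motor, image, motor_low, motor_high):
    """Normalize motor components to [-1, 1] given per-joint bounds, and
    pixel values to [0, 1]. The target intervals are assumptions here."""
    m = 2.0 * (motor - motor_low) / (motor_high - motor_low) - 1.0
    s = image.astype(np.float32) / 255.0
    return m, s

m, s = preprocess(np.array([0.0, 1.0]),
                  np.array([[0, 255]], dtype=np.uint8),
                  motor_low=np.array([-1.0, 0.0]),
                  motor_high=np.array([1.0, 2.0]))
```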
III-B Body image extraction
Our working hypothesis is that for each motor state there should exist a subset of sensory components whose predictability is significantly higher than that of the others. Those components should correspond to the consistent part of the robot's visual experience: its body image.
We thus propose to automatically isolate this body image by looking at the predicted prediction error ê_t. If its distribution is indeed bimodal, we propose to set a threshold ε between these two modes and to consider any component with a predicted error inferior to ε as part of the body. This way, we can create a binary mask to isolate the body image from the rest of the input image.
The code used to generate the data, and train and test the network is available at: https://github.com/alaflaquiere/learn-masked-body-image.
After training, we qualitatively and quantitatively evaluate the mappings learned by the neural network.
Figure 4 shows the predicted image and the predicted prediction error for random samples from the training set.
Firstly, we can see that the predicted images contain a meaningful approximation of the appearance of the arm. Note that the network also predicts the absence of the arm when the motor state moves it out of the field of view (see the last column of Fig. 4). The arm appears in front of a background made of two relatively uniform horizontal stripes. The brighter upper stripe seems to statistically correspond to walls and windows which tend to be white in the background dataset. The darker lower stripe seems to statistically correspond to the floor which tends to be darker in the dataset. Apart from this statistical distinction, the background of the predicted images does not contain any specific pattern from the original ground-truth images. In order to minimize its prediction error, the network thus learned to output the expectation of each unpredictable background sensory component.
Secondly, the predicted error displays a similar structure. The area of the image corresponding to the arm is associated with a very low predicted prediction error (darker), while the background is associated with greater errors (lighter). It thus seems that our working hypothesis was correct: based on the predictability of the visual input sub-components, it is possible to differentiate the body image from the rest of the environment in a self-supervised way.
Figure 5 shows a normalized histogram of all the predicted prediction errors produced by the network for
random motor states from the training dataset. As expected, the distribution appears to be bimodal. We fit it with a 2-component Gaussian Mixture Model and define a threshold ε at the intersection of the two components. This threshold is used to automatically distinguish the sensory components belonging to the body image (ê_t^k < ε) from the ones belonging to the environment (ê_t^k ≥ ε). It allows us to compute a mask on the sensory components to isolate the body image, as displayed in Fig. 4. Note that in this process, the R, G, and B channels of each pixel are treated independently, assuming minimal a priori knowledge about the structure of the sensory state. This potentially allows a mismatch between the different color components of the body image. Finally, this mask of depth 3 is applied to the predicted image by making transparent the components not in the mask. The resulting body image, cleaned from the poorly predictable background, is displayed in Fig. 4 as well.
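The threshold at the intersection of the two mixture components can be approximated with a simple grid search over the weighted Gaussian densities. The fitting itself (e.g. with scikit-learn's GaussianMixture) is omitted, and the mixture parameters below are illustrative:

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Density of a univariate Gaussian, evaluated element-wise."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def intersection_threshold(mu1, s1, w1, mu2, s2, w2, n_grid=10001):
    """Point between the two means where the weighted component densities of
    a fitted 2-component mixture cross, found by a dense grid search."""
    grid = np.linspace(min(mu1, mu2), max(mu1, mu2), n_grid)
    diff = np.abs(w1 * gaussian_pdf(grid, mu1, s1)
                  - w2 * gaussian_pdf(grid, mu2, s2))
    return grid[np.argmin(diff)]

# Equal weights and widths: the densities cross midway between the means.
eps = intersection_threshold(0.05, 0.02, 0.5, 0.30, 0.02, 0.5)
```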
We can see that the appearance of the arm in the masked predicted images is very close to the one in the ground-truth images. The biggest disparities are located at the arm tip (hand), which appears to be the most difficult part to reconstruct. This can be explained by the fact that the appearance and localization of the hand in the image changes the most rapidly as a function of the joint configuration. On the contrary, the upper arm, closest to the shoulder and to the camera, is the most consistently reconstructed part, as its appearance changes the least as a function of the joint configuration.
We introduce two measures to quantify the quality of the learned body image. First, the mask match corresponds to the Intersection over Union (IoU) of the mask derived from the network’s output and the ground-truth mask (non-green pixels in the image before composition).
Second, the appearance match is defined as the fraction of correctly predicted components within the mask intersection:

A = (1/|M ∩ M*|) Σ_{k ∈ M ∩ M*} δ^k,

where δ^k is equal to 1 if the predicted component ŝ^k matches the ground-truth component s^k (up to a fixed tolerance) and 0 otherwise, M and M* denote the predicted and ground-truth masks, and |M ∩ M*| is the number of components in this mask intersection.
Note that in each measure, the three RGB channels of each pixel are considered independently.
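Both measures can be sketched as follows; the tolerance used in the appearance match is an assumption, as the exact matching criterion is not reproduced here:

```python
import numpy as np

def mask_match(pred_mask, gt_mask):
    """Intersection over Union (IoU) of predicted and ground-truth masks."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return inter / union if union else 1.0

def appearance_match(s_pred, s_true, pred_mask, gt_mask, tol=0.05):
    """Fraction of components in the mask intersection whose predicted value
    is within `tol` of the ground truth (tol is an illustrative assumption)."""
    inter = np.logical_and(pred_mask, gt_mask)
    if not inter.any():
        return 1.0
    return (np.abs(s_pred[inter] - s_true[inter]) < tol).mean()

pm = np.array([True, True, False, False])
gm = np.array([True, False, True, False])
iou = mask_match(pm, gm)                      # 1 intersected / 3 in the union
am = appearance_match(np.array([0.5, 0.2, 0.9, 0.0]),
                      np.array([0.52, 0.9, 0.1, 0.0]), pm, gm)
```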
Figure 6 displays 6 visualizations of mask and appearance matches for random inputs of a testing dataset generated the same way as the training dataset. We can see that most errors happen at the edge of the arm, where pixel values are the most likely to change quickly as a function of the motor joints, due to the environment being unpredictable. We can also see that the errors in the arm appearance are insignificant, and barely visible as a consequence (see last row). Over the whole testing dataset of 2000 samples, the mask match is equal to , and the appearance match is equal to . The match between the ground-truth body image and the one produced by the network is thus very good.
Finally, the network can act as a (deterministic) conditional generative model. We can sample the input motor space and let the network generate an output image and output prediction error. Figure 7 shows the result of such a sampling when varying each motor component independently, starting from the reference arm configuration . We can see that the arm appearance and the associated low prediction error mask stay consistent throughout the motor space. The neural network thus seems to generalize and to accurately predict the arm appearance over the whole motor space.
We presented an algorithm and experiments for autonomous body image acquisition. This work extends the studies of Lang et al.  with a mechanism to distinguish sensory components that belong to the body image from those that belong to the environment, from scratch, in a self-supervised way. It relies on the hypothesized intrinsic difference in variability between these two kinds of components, which a robot can capture when trying to predict its sensory state given an input motor state.
As in that previous work, only movements of a single arm have been considered in this study. However, this work could potentially be extended to more complex movements, including other limbs or the head itself. As shown by Schmerling et al. , the resulting change in the visual image caused by the head motion could indeed be included in the predictive learning process. The only theoretical limitation of such an extension is the number of samples required to correctly estimate the conditional statistics of the sensory experience as the dimension of the motor space increases.
We have shown that the prediction of the visual information associated with a motor state is crucial for the formation of the body image. From a predictive coding perspective, the body image would in this way correspond to the component of the sensory experience which is reliably and quickly predictable, on a developmental scale, given the motor experience that the agent generates itself. The importance of motor information in this formative process suggests a strong connection between the sense of agency, body ownership, body schema, and body image. Another possible extension of this work would be to emphasize this motor aspect even more by introducing a dynamical system formulation of the problem, as has been proposed in . A network could for instance be optimized to predict the future sensory state, given the current sensory state and a motor command, instead of a static posture as in the current formulation. It would also be interesting to investigate whether an extended version of the network could simultaneously learn to infer its current body posture, using an auto-encoder paradigm, and to isolate its body image, based on the latent code learned by this auto-encoder.
Finally, it is important to note again that we used in this work the term “body image” in a literal sense to refer to the appearance that the body has in a visual flow. The same term is often used to refer to more complex notions covering different psychological concepts, and whose definition can vary. Many open questions remain regarding the complete modeling of the notion of body image, and its coupling with other notions like body schema, body ownership, or the sense of agency.
-  P. Rochat, “What is it like to be a newborn?” in The Oxford Handbook of the Self, S. Gallagher, Ed. Oxford University Press, 2011, pp. 57–79.
-  S. Gallagher, “Philosophical conceptions of the self: implications for cognitive science,” Trends in cognitive sciences, vol. 4, no. 1, pp. 14–21, 2000.
-  C. Lang, G. Schillaci, and V. V. Hafner, “A deep convolutional neural network model for sense of agency and object permanence in robots,” in 8th Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob). IEEE, 2018, pp. 260–265.
-  M. Hoffmann, Z. Straka, I. Farkaš, M. Vavrečka, and G. Metta, “Robotic homunculus: Learning of artificial skin representation in a humanoid robot motivated by primary somatosensory cortex,” IEEE Trans. on Cognitive and Developmental Systems, vol. 10, no. 2, pp. 163–176, 2018.
-  M. Hoffmann, L. K. Chinn, E. Somogyi, T. Heed, J. Fagard, J. J. Lockman, and J. K. O’Regan, “Development of reaching to the body in early infancy: From experiments to robotic models,” in IEEE Int. Conf. on Development and Learning and Epigenetic Robotics. IEEE, 2017.
-  F. Kaplan and V. V. Hafner, “Information-theoretic framework for unsupervised activity classification,” Advanced Robotics, vol. 20, no. 10, pp. 1087–1103, 2006. [Online]. Available: https://doi.org/10.1163/156855306778522514
-  Y. Yoshikawa, Y. Tsuji, K. Hosoda, and M. Asada, “Is it my body? body extraction from uninterpreted sensory data based on the invariance of multiple sensory attributes,” in 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), vol. 3, 2004, pp. 2325–2330.
-  J. Hwang, J. Kim, A. Ahmadi, M. Choi, and J. Tani, “Predictive coding-based deep dynamic neural network for visuomotor learning,” in 2017 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics, ICDL-EpiRob 2017, Lisbon, Portugal, September 18-21, 2017, 2017, pp. 132–139. [Online]. Available: https://doi.org/10.1109/DEVLRN.2017.8329798
-  N.-A. Hinz, P. Lanillos, H. Mueller, and G. Cheng, “Drifting perceptual patterns suggest prediction errors fusion rather than hypothesis selection: replicating the rubber-hand illusion on a robot,” in IEEE International Conference on Development and Learning and on Epigenetic Robotics (ICDL-Epirob). IEEE, 2018. [Online]. Available: https://arxiv.org/pdf/1806.06809.pdf
-  S. Gallagher, “Body image and body schema: A conceptual clarification,” Journal of Mind and Behavior, vol. 7, pp. 541–554, 1986.
-  A. Odena, V. Dumoulin, and C. Olah, “Deconvolution and checkerboard artifacts,” Distill, 2016. [Online]. Available: http://distill.pub/2016/deconv-checkerboard
-  Y. LeCun, B. E. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. E. Hubbard, and L. D. Jackel, “Handwritten digit recognition with a back-propagation network,” in Advances in neural information processing systems, 1990, pp. 396–404.
-  G. Klambauer, T. Unterthiner, A. Mayr, and S. Hochreiter, “Self-normalizing neural networks,” in Advances in neural information processing systems, 2017, pp. 971–980.
-  V. Nair and G. E. Hinton, “Rectified linear units improve restricted Boltzmann machines,” in Proceedings of the 27th International Conference on Machine Learning (ICML-10), 2010, pp. 807–814.
-  D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
-  SoftBank Robotics. (2014) Pepper robot. [Online]. Available: https://en.wikipedia.org/wiki/Pepper_(robot)
-  M. Busy and M. Caniot. (2019) qiBullet: A Bullet-based Python simulation for SoftBank Robotics’ robots. [Online]. Available: https://github.com/ProtolabSBRE/qibullet
-  A. N. Meltzoff and M. K. Moore, “Explaining facial imitation: A theoretical model,” Infant and child development, vol. 6, no. 3-4, pp. 179–192, 1997.
-  M. Schmerling, G. Schillaci, and V. V. Hafner, “Goal-directed learning of hand-eye coordination in a humanoid robot,” in 2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob), Aug 2015, pp. 168–175.