Multimodal representation models for prediction and control from partial information

10/09/2019, by Martina Zambelli et al., Imperial College London

Similar to humans, robots benefit from interacting with their environment through a number of different sensor modalities, such as vision, touch, and sound. However, learning from different sensor modalities is difficult, because the learning model must be able to handle diverse types of signals, and learn a coherent representation even when parts of the sensor inputs are missing. In this paper, a multimodal variational autoencoder is proposed to enable an iCub humanoid robot to learn representations of its sensorimotor capabilities from different sensor modalities. The proposed model is able to (1) reconstruct missing sensory modalities, (2) predict the sensorimotor state of self and the visual trajectories of other agents' actions, and (3) control the agent to imitate an observed visual trajectory. The proposed multimodal variational autoencoder can also capture the kinematic redundancy of the robot's motion through the learned probability distribution. Training multimodal models is not trivial due to the combinatorial complexity introduced by the possibility of missing modalities. We propose a strategy to train multimodal models, which improves the performance of several different reconstruction models. Finally, extensive experiments have been carried out using an iCub humanoid robot, showing high performance in multiple reconstruction, prediction and imitation tasks.


1 Introduction

Several studies have revealed that the ability of humans to make predictions is not only essential for motor control, but also fundamental for high-level cognitive functions including action recognition, understanding, imitation, mental replay, and social cognition (Wolpert and Flanagan, 2001). Improving the ability of robots to make predictions is a promising direction to enhance their skills, not only for motor control and prediction of their own body, but also for understanding others' actions. Well-established learning systems for motor prediction and control (Wolpert and Kawato, 1998; Kawato, 1999; Demiris and Khadhouri, 2006) are built on internal models, namely forward and inverse models. The former provides a prediction of the state of the agent given the current state and an action, while the latter provides a mapping in the opposite direction: given a target state and the current state, it retrieves the action that brings the system from the current state to the target. Assuming that similarities exist between agents, the internal model used to predict one's own actions can be instrumental in predicting the (visual) consequences of someone else's actions (Demiris and Khadhouri, 2006; Demiris et al., 2014). The assumption of the existence of similarities between agents poses a challenge in robotics, known as the correspondence problem (Hafner and Kaplan, 2005; Alissandrakis et al., 2002; Nehaniv and Dautenhahn, 1998). This paper does not address this problem. Instead, we assume that the robot has access to visual information from an egocentric point of view. Solutions for the more general scenario, in which the spatial perspectives from which the robot perceives its own and others' actions differ, have been proposed for example in (Johnson and Demiris, 2005; Fischer and Demiris, 2016). In this work, however, it is assumed that agents share the same perspective (the same assumption is generally made in similar applications (Baraglia et al., 2015; Copete et al., 2016)).

Figure 1: Overview of the learning architecture. The self-learned model can be used to reconstruct missing data, make predictions, and control the robot's motion. When observing others, only the visual information is available. The learned model can reconstruct the multimodal state of the robot, including the proprioceptive, visual, tactile, sound and motor command data, from partial information (left). The model can also be used to make predictions into the future, by feeding reconstructed data back to the model (center). Finally, the model can generate motor commands that can be issued directly to the robot's joints to imitate others' visual trajectories (right).

While several studies have focused on predicting outcomes of actions of the agent (e.g. learning a forward model) or actions of others (e.g. human trajectories from images or videos) (Kamel et al., 2018, 2019b, 2019a), in this paper the goal is to learn a model of the self that can be applied to predict and imitate the visual perception of another agent from an egocentric point of view. The proposed architecture is based on a self-learned model, which is built, trained and updated using only the experience accumulated by the agent. The advantage of self-learned models is that they can be used without specific prior knowledge about the robot, for example its morphology or predefined forward and inverse models. This information might be unavailable in some cases, such as in soft robotics or after mechanical damage. Self-learned models can enable robots to learn on their own how to behave in those circumstances (Cully et al., 2015; Kriegman et al., 2019). However, one of the major obstacles in using self-learned internal models to predict the motion of others is the intrinsic difference between the available data. While the model is learned and exploited by the agent using the whole range of available sensory modalities, only the visual information is available when observing someone else's motion. In this paper, we overcome this challenge by implementing a model which is able to retrieve the missing sensory information and motor commands needed for mimicking and predicting the visual trajectories of another agent's action. As a result, the main contribution of this paper is a learning architecture that uses a multimodal variational autoencoder in a versatile manner to (1) reconstruct missing sensory modalities, (2) predict the sensorimotor state of self and the visual perception of another agent from an egocentric point of view, and (3) imitate the observed agent's visual trajectory. This architecture unifies the traditional forward and inverse models, leveraging their synergy to implement functions that are fundamental for autonomous systems. An overview of the proposed learning architecture is shown in Fig. 1.

Variational autoencoders (Kingma and Welling, 2013; Rezende et al., 2014) have recently emerged as one of the most popular approaches for unsupervised learning of complex data distributions. One of their key characteristics is that they can model the probability distribution of the reconstructed data and of its representation in the latent space. In this paper, we extend a traditional variational autoencoder to reconstruct the probability distribution of non-observed modalities (e.g. joint positions and velocities) given observed modalities (e.g. the visual position of the end-effector). Using probability distributions is particularly important in robotics applications, as it allows the system to take into account the redundancy of the robot. Typically, several joint positions lead to the same end-effector position, and such relationships can be captured by the learned conditional probability distribution. An important aspect of this work is also the training strategy used to learn this model. Specifically, we propose to train the model to reconstruct the input even when only part of it is available, by adopting a denoising approach. Our experiments, presented in Section 4, show that this training method also improves the performance of various alternative models on the task at hand.

The paper is organized as follows: related work is reviewed in Section 2, and the multimodal variational autoencoder implementation is introduced in Section 3. Experiments were performed using a humanoid iCub robot; results are reported and discussed in Sections 4 and 5, respectively.

2 Related work

Learning internal models in robotics

Learning algorithms have proven to be an effective means of building internal models for robots. Learning strategies bring flexibility and adaptability to robots' kinematic and dynamic models, by accounting for uncertainties, nonlinearities and changes due to wear, and by limiting the influence of specific engineered settings. Many approaches to learning controllers for robots have been proposed, including reinforcement learning (Sutton and Barto, 1998; Abbeel et al., 2007) and learning by demonstration (Argall et al., 2009; Billard et al., 2008). Various implementations have been proposed, such as Gaussian processes (Deisenroth and Rasmussen, 2011; Williams et al., 2009), neural networks (Miller et al., 1995; Kawato et al., 1988) and, more recently, deep neural networks (Hinton et al., 2006; Levine et al., 2016). The majority of these studies have focused on learning controllers, where the goal is to learn a policy or an inverse model in order to generate motor commands given a target input. Learning forward models has typically been less investigated in traditional robotics, because they can be defined directly from the kinematic structure of the robot. However, learning such models is fundamental for robots to be able to make predictions not only about their own actions but also about others' actions.

Forward and inverse model learning is a general approach to allow robots to learn new skills. Forward models generate state predictions from the current state and an action, while inverse models generate actions from states. These two capabilities enable robots to perform prediction, "mental simulation", planning, and control (Wolpert and Kawato, 1998; Kawato, 1999; Demiris and Khadhouri, 2006). In developmental robotics, such models are acquired by designing learning mechanisms that let a robot build its own perceptive and behavioral repertoire. The focus is to investigate the acquisition of motor skills from sensorimotor interaction with the environment (Lungarella et al., 2003). As a result, the developmental approach aims to endow robots with all the learning capabilities that may be necessary to build rich and flexible sensorimotor representations (Sigaud and Droniou, 2016). Several studies have addressed the problem of learning internal models from sensorimotor data through exploration strategies, including learning of visuomotor models (Droniou et al., 2012; Vicente et al., 2016), learning of dynamics models (Calandra et al., 2015), and learning from multiple sensory signals and possibly partial information (Fitzpatrick et al., 2006; Vicente et al., 2016; Ruesch et al., 2008). Internal models (forward and inverse models) are usually learned separately (Wolpert and Kawato, 1998; Kawato, 1999; Demiris and Khadhouri, 2006): the forward model is used to make predictions, and the inverse model is used for control. The method proposed in this paper instead achieves these two capabilities in conjunction. This can be a valuable asset, for example in terms of the number of parameters used (one network instead of several). Our approach also provides a compact yet powerful model that achieves satisfactory performance on both prediction and control tasks. One powerful way to learn internal models is imitation, considered a fundamental part of learning in humans and used as a learning mechanism for robots (Demiris and Dearden, 2005). The ability to predict someone else's movements inherently incorporates the necessity of understanding others' motion, and of being able to simulate it by developing learning as well as imitation skills. A vast literature exists in the robotics domain addressing imitation, in particular the paradigm of learning by imitation (Schaal et al., 2003; Calinon et al., 2010; Lopes and Santos-Victor, 2005), and the related correspondence problem (Hafner and Kaplan, 2005; Alissandrakis et al., 2002; Nehaniv and Dautenhahn, 1998) arising from the structural (kinematic/dynamic) differences between a demonstrator and a learner agent. Imitation can happen at different levels, such as at the action level or at the effect level (Nehaniv and Dautenhahn, 2001). Recently, advances in motion analysis and estimation have been proposed (Kamel et al., 2018, 2019b, 2019a), and these techniques have also been applied to humanoid robot motion learning through sensorimotor representation and physical interactions (Shimizu et al., 2014). In this paper, we use trajectory-level imitation as an instrumental example of application of our proposed multimodal learning approach. Although the correspondence problem plays an important role in the context of learning by imitation, we refer the reader to the relevant literature for solutions to this problem, and focus the paper on the multimodal learning approach instead.

Multimodal learning

In the fields of sensor fusion and pattern recognition, several works have addressed the problem of learning representations from multiple sources, e.g. text and audio or text and images (Ramisa et al., 2017; Poria et al., 2016). In (Ngiam et al., 2011), a multimodal deep learning approach was proposed that copes with data of different types, such as visual and audio data, with cross-modal learning and reconstruction. Work on multimodal learning in robotics was proposed in (Zambelli and Demiris, 2016). Recent literature has started to address the challenging problem of learning from multiple data sources using variational inference models (e.g. variational autoencoders). Among others, two recent works have shown great potential: the joint multimodal VAE (Suzuki et al., 2016) and the product-of-experts-based multimodal VAE (Wu and Goodman, 2018). The former learns a joint distribution between two modalities, but trains a new inference network for each multimodal subset, which is generally impractical and arguably intractable. The latter uses a product-of-experts inference network and a sub-sampled training paradigm to solve the multimodal inference problem. Although these methods achieve good results in domains such as image processing and text-to-vision tasks, they do not address the problem of multimodal learning from different sensors on a real robot. This domain is fundamentally different, since the data collected by the robot while acting are generally noisy time series of unscaled and heterogeneous data. The main contributions of our work compared to (Suzuki et al., 2016; Wu and Goodman, 2018) are the application domain and the ability of our method to generate actions. Our work is, to the best of our knowledge, the first to use a multimodal formulation of variational autoencoders in a real robotic domain. While the domains addressed in (Suzuki et al., 2016; Wu and Goodman, 2018) are purely self-supervised learning applications, not involving actions or control tasks, in this work we successfully use a multimodal VAE model to go beyond self-supervision and achieve imitation, prediction and control tasks.

In (Droniou et al., 2015), an architecture based on deep networks was proposed to make the iCub humanoid robot learn a task from multiple perceptual modalities (namely proprioception, vision and audio). While the method proposed in that paper learns the cross-modal relationships between sensory modalities, it is not able to deal explicitly with missing information. In contrast, the architecture that we propose here can successfully retrieve missing modalities and use them to both predict and control motion. Finally, (Baraglia et al., 2015; Copete et al., 2016) applied deep autoencoders to make a robot predict others' actions through predictive learning, showing how a robot can use a self-acquired model to make predictions of others' goals. In those works, the sequences of signals used for learning are given through kinesthetic teaching. In contrast, in this paper the robot acquires its own sensorimotor data through fully autonomous exploration. Furthermore, the variational autoencoder that we propose is a more general and versatile model that allows a robot not only to predict its own and others' motion, but also to perform imitation tasks. It also presents one major advantage compared to the model proposed in (Copete et al., 2016), namely the ability to capture the redundancy of the robotic system.

3 Methodology

3.1 Multimodal variational autoencoder

A variational autoencoder (VAE) (Kingma and Welling, 2013; Rezende et al., 2014) is a latent variable generative model. It consists of an encoder that maps the input data $x$ into a latent representation $z$, and of a decoder that reconstructs the input from the latent code, that is $z \mapsto \hat{x}$. Encoder and decoder are neural networks, parameterized by $\phi$ and $\theta$, respectively. The lower-dimensional latent space where $z$ lives is stochastic: the encoder, denoted $q_\phi(z|x)$, outputs a probability density, generally (as also in our case) a Gaussian distribution. The latent representation $z$ can then be sampled from this distribution. The decoder is denoted $p_\theta(x|z)$: it takes as input the latent representation of the input and outputs the parameters of a distribution representing the reconstructed input. The generative model can also be written as $p_\theta(x) = \int p(z)\, p_\theta(x|z)\, dz$, where $p(z)$ is a prior over the latent space, usually Gaussian, and $p_\theta(x|z)$ is the decoder.

The information bottleneck given by the mapping of the input into a lower-dimensional latent space leads to a loss of information. The reconstruction log-likelihood $\log p_\theta(x|z)$ is a measure of how effectively the decoder has learned to reconstruct an input $x$ given its latent representation $z$. The training goal is then to maximize the marginal log-likelihood of the data. Because this is intractable (Rezende et al., 2014), the evidence lower bound (ELBO) is optimized instead, leveraging the inference network (encoder) $q_\phi(z|x)$, which serves as a tractable approximation of the posterior. The ELBO is defined as:

$\mathrm{ELBO}(x) = \mathbb{E}_{q_\phi(z|x)}\big[\lambda \log p_\theta(x|z)\big] - \beta\, D_{KL}\big(q_\phi(z|x)\,\|\,p(z)\big)$   (1)

where $D_{KL}$ is the Kullback-Leibler divergence between the distributions $q_\phi(z|x)$ and $p(z)$, while $\lambda$ (Wu and Goodman, 2018) and $\beta$ (Higgins et al., 2016) are parameters balancing the terms in the ELBO. The ELBO is then optimized via stochastic gradient descent, using the reparameterization trick to estimate the gradient (Kingma and Welling, 2013; Rezende et al., 2014). In practice, since the main focus of this study is the reconstruction capability of the model, we chose $\beta = 0$ and only consider the reconstruction loss to train our architecture, noticing improvements in the reconstruction performance obtained.
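The reparameterization trick mentioned above can be sketched as follows (a standard, minimal formulation in PyTorch, not code from the paper, whose implementation uses TensorFlow):

```python
import torch

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps, with eps ~ N(0, I), so that gradients
    flow through mu and log_var during stochastic gradient descent."""
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * log_var) * eps
```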

In this paper, we extend standard variational autoencoders to multimodal sensorimotor data. Our multimodal VAE is formed of multiple encoders and decoders, one for each sensory modality. Each encoder and decoder is an independent neural network, not sharing weights with the other modalities' networks. The latent representation is, however, shared: each encoder maps its input (one sensory modality) into the shared code $z$, as depicted in Fig. 2. Each decoder then reconstructs its particular output (one sensory modality) from the shared code. The main difference that characterizes the multimodal learning approach compared to a standard VAE is that the sub-networks can be used to process each modality, while shared layers can be used to learn cross-modal relations (see Fig. 2).

Figure 2: Multimodal Variational Autoencoder used in this work. The input layer is composed of multimodal sensorimotor data. Each modality is encoded and decoded by a separate autoencoder (shown with different colors). A shared layer (in light blue, in the center) allows the network to learn a shared representation across the different modalities. This architecture is trained with complete as well as partial data (see Table 1). Each uni-modal autoencoder can be trained separately, allowing for single-modality learning. The cross-modality representations are also learned through the shared layer. The output of the network consists of the mean and variance of the reconstruction of each data part. Details about the parameters of the network included in this figure are further explained in the Appendix. N-ReLU represents a fully connected layer with N neurons and the ReLU activation function. N-ReLU x2 indicates that two N-ReLU layers are created in parallel, one to encode the mean and the other to encode the variance of the output distribution.

The parameters $\lambda$ are used here to balance the losses from different sensor modalities. In order to put more emphasis on modalities described by fewer dimensions (e.g. the tactile and sound modalities), we compute an independent loss value $L_m$ for each modality $m$ and weight it by a factor $\lambda_m$ determined by the dimensionality $D_m$ of that modality, so that the combined objective is the weighted sum $\sum_m \lambda_m L_m$, with modalities of lower dimensionality receiving larger weights. The sum of the weighted reconstruction loss terms is then optimized. The scaling factor given by the dimensionality of each modality allows us to balance the importance of each modality when combining them in the optimization step. That is, when optimizing the reconstruction loss, the weights take into account that each modality and each corresponding unimodal sub-network have different dimensions. This approach helps learning even the most difficult parts of the state space, such as discrete or binary dimensions of the sensory space (see the tactile example in Figure 3).
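As an illustration, the following minimal sketch shows one way the per-modality weighted reconstruction loss could be computed. PyTorch is used here for brevity, whereas the paper's implementation uses TensorFlow; the Gaussian likelihood form and the specific choice of $\lambda_m$ inversely proportional to $D_m$ are our assumptions, consistent with the description above but not taken from the paper.

```python
import torch

def gaussian_nll(x, mean, log_var):
    """Negative log-likelihood of x under a diagonal Gaussian (up to an additive constant)."""
    return 0.5 * (log_var + (x - mean) ** 2 / torch.exp(log_var)).sum(dim=-1)

def multimodal_reconstruction_loss(targets, recon_means, recon_log_vars):
    """Weighted sum of per-modality reconstruction losses.

    targets, recon_means, recon_log_vars are dicts keyed by modality name
    ('joints', 'vision', 'touch', 'sound', 'motor'), each holding (batch, D_m)
    tensors; modalities with fewer dimensions receive larger weights.
    """
    total = 0.0
    for m, x in targets.items():
        lam = 1.0 / x.shape[-1]  # assumed choice: weight inversely proportional to D_m
        total = total + lam * gaussian_nll(x, recon_means[m], recon_log_vars[m]).mean()
    return total
```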

This type of variational model presents various advantages in a robotic framework. First, the ability of variational autoencoders to learn the distribution of a dataset in latent space is a powerful feature to generate a shared representation of the different modalities. For instance, the latent representation can be used to learn relationships and dependencies present in the sensorimotor experience of robots. This can be leveraged to generate new artificial percepts by sampling from the learned distribution in the latent space. Second, this shared latent representation also allows the robot to reconstruct missing modalities. For example, if data from a sensor are unavailable, the model can be used to estimate the probability distribution of the data that should be observed from this sensor, conditioned on the data from the other sensors of the robot. Finally, the ability to predict probability distributions is fundamental to take into account the redundancy of complex robots, such as the iCub humanoid robot used in this study. With this property, the model can capture the fact that for a given end-effector position, several joint configurations are possible.
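For instance, the generation of artificial percepts mentioned above (sampling from the latent space and decoding) can be sketched as follows, assuming a standard Gaussian prior as in the VAE formulation above and a hypothetical `decoders` dictionary mapping modality names to decoder callables:

```python
import torch

def sample_artificial_perception(decoders, latent_dim, n_samples=1):
    """Draw latent codes from the Gaussian prior and decode them, one output per modality."""
    z = torch.randn(n_samples, latent_dim)  # z ~ N(0, I)
    return {name: decoder(z) for name, decoder in decoders.items()}
```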

Details of the network implemented and used in this work are reported in Appendix B.

3.2 Training the Multimodal Variational Autoencoder

An important contribution of this work is the training strategy used to learn the proposed model. We propose to train the model to reconstruct the input even when only part of it is available, by adopting a denoising approach. While in the following paragraphs the proposed training approach is presented relative to the multimodal variational autoencoder introduced earlier, the strategy is generic and can be applied to other architectures, such as the reconstruction model proposed in (Droniou et al., 2015), as demonstrated by the experimental results. In the Experiments section, we show that the proposed training strategy improves the performance of various alternative models on the task at hand.

The training dataset contains multimodal sensorimotor data collected during a self-exploration phase. Data are captured from different sensors of the robot, such as the position of the hand in the robot's visual space, tactile and sound data, and proprioception (joint positions) from the motor encoders. In particular, the position of the hand in the visual space is extracted by considering the center point of a tracking window around the moving hand. All data are then normalized to take values in the range $[-1, 1]$. More details regarding the data acquisition and the database are presented in Section 4.1.

Time series data from the recorded self-exploration dataset are shown in Fig. 3. Denote by $u_t$ the vector of velocity commands issued at time $t$, by $q_t$ the vector of joint positions (proprioception), by $v_t$ the vector of the visual position, by $\tau_t$ the tactile signal and by $s_t$ the sound signal at time $t$. Note that other modalities can also be included. The input of the architecture is a multi-dimensional vector $x_t = [q_t, v_t, \tau_t, s_t, u_t,\; q_{t+1}, v_{t+1}, \tau_{t+1}, s_{t+1}, u_{t+1}]$, which contains data from both time $t$ and time $t+1$, to capture the temporal relationship between the different modalities.

The network is trained on both complete and partial samples of the training dataset collected during the robot's self-exploration. To do so, the original dataset is augmented with samples that require the network to reconstruct the missing modalities given only a subset of them. This is realized by duplicating the dataset, while using a flag value (namely the arbitrary value -2, which is outside the range of any sensorimotor signal after normalization) to denote the non-observable modalities. The training dataset follows the structure in Table 1 to enable the network to perform predictions and reconstruction under multiple conditions of missing information. More specifically, the augmented training set is formed by concatenating the original complete set of data collected during motor babbling (normalized to values between -1 and 1) with mutilated versions of itself. The final dataset is then (1) the complete data at times $t$ and $t+1$, concatenated to (2) data including only time $t$, concatenated to (3) data including only proprioception at time $t$ and vision at times $t$ and $t+1$, concatenated to (4) data including only vision at times $t$ and $t+1$. At each training step, a batch is randomly sampled from the augmented dataset and fed to the multimodal VAE model. The batch may contain only partial data, but the training objective forces the network to try to reconstruct the complete target sensorimotor state (i.e. the full vector at times $t$ and $t+1$). Because the model is trained using the combination of complete and partial data as described above, the latent representation is shaped in such a way that it is robust to missing data; similarly, the sub-network weights are learned to also be robust to missing inputs.

| Dataset | $q_t$ | $v_t$ | $\tau_t$ | $s_t$ | $u_t$ | $q_{t+1}$ | $v_{t+1}$ | $\tau_{t+1}$ | $s_{t+1}$ | $u_{t+1}$ |
| (1) complete | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| (2) time $t$ only | ✓ | ✓ | ✓ | ✓ | ✓ | – | – | – | – | – |
| (3) proprioception and vision | ✓ | ✓ | – | – | – | – | ✓ | – | – | – |
| (4) vision only | – | ✓ | – | – | – | – | ✓ | – | – | – |
Table 1: Training dataset structure: the original dataset (1) is augmented with samples that only include partial data (2-3-4). Each row corresponds to a dataset of 7380 datapoints. A ✓ indicates that the corresponding modality is present in the dataset. For the cases (2-3-4), missing modality data (marked –) is replaced with the flag value -2. The datasets (1), (2), (3), and (4) are concatenated, and the proposed model is trained on the resulting augmented dataset.
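The following minimal NumPy sketch illustrates how the augmented dataset of Table 1 can be built by masking modalities with the flag value -2. The array layout and column ordering are our assumptions, not taken from the released code:

```python
import numpy as np

FLAG = -2.0  # flag value, outside the [-1, 1] range of normalized signals

# Column indices of each modality inside the 28-D input vector
# (assumed layout: [q_t, v_t, tau_t, s_t, u_t, q_t+1, v_t+1, tau_t+1, s_t+1, u_t+1]).
SLICES = {
    "q_t": slice(0, 4),    "v_t": slice(4, 8),      "tau_t": slice(8, 9),
    "s_t": slice(9, 10),   "u_t": slice(10, 14),    "q_t1": slice(14, 18),
    "v_t1": slice(18, 22), "tau_t1": slice(22, 23), "s_t1": slice(23, 24),
    "u_t1": slice(24, 28),
}

def mask_except(data, keep):
    """Return a copy of `data` with every modality not in `keep` set to FLAG."""
    out = np.full_like(data, FLAG)
    for name in keep:
        out[:, SLICES[name]] = data[:, SLICES[name]]
    return out

def build_augmented_dataset(complete):
    """Concatenate the four conditions of Table 1.

    `complete` is an (N, 28) array of normalized babbling data; the targets
    for all four conditions are the corresponding rows of `complete`.
    """
    cond2 = mask_except(complete, ["q_t", "v_t", "tau_t", "s_t", "u_t"])  # time t only
    cond3 = mask_except(complete, ["q_t", "v_t", "v_t1"])                 # proprioception + vision
    cond4 = mask_except(complete, ["v_t", "v_t1"])                        # vision only
    inputs = np.concatenate([complete, cond2, cond3, cond4], axis=0)
    targets = np.concatenate([complete] * 4, axis=0)
    return inputs, targets
```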

3.3 One model for multiple tasks

One of the major assets of our proposed model is its versatility, that is the possibility of using the same learned model to achieve different goals. In this section, we present how the learned multimodal variational autoencoder can be deployed to achieve three different objectives:

  1. reconstructing missing data;

  2. predicting the robot’s own sensorimotor data and visual trajectories from other data sources (e.g. other agents, other datasets);

  3. controlling the robot in an online control loop.

In these three cases, the training, structure, and parameters of the neural network remain the same: the learned model and the network used for learning do not change even when different sets of inputs are available. We argue that this is a key aspect of our method: a single model can be trained to capture a comprehensive internal model from multimodal data, and to cope even when part of these data is not available. Details for each of the aforementioned functions that the model can achieve are given in the remainder of this section.

3.3.1 Reconstructing missing data

Similar to denoising autoencoders, the proposed multimodal VAE is trained to reconstruct missing data. Missing modalities are set to the flag value -2 (as explained in Section 3.2), while the network outputs the probability distribution of the reconstructed inputs. This is fundamental to address the problem at the origin of this work, that is the ability to predict the visual trajectory of others, observed from an egocentric viewpoint, by relying on internal models of the self. In such an application, an agent learns internal representations of its sensorimotor space, in particular relating motor actions with multimodal sensory effects (Demiris and Khadhouri, 2006; Demiris et al., 2014; Pickering and Clark, 2014). However, when observing someone else performing an action, only the visual information is available. The agent, which normally relies on full information from all its sensors, must then be able to retrieve the missing information and interpret the observed motion in relation to its own internal representations. The architecture proposed in this paper allows robots to achieve this by reconstructing the missing sensorimotor information; for example, reconstructing joint configuration, touch, sound and motor information from observations of the visual input only, or the data at time step $t+1$ from observations at time $t$.

3.3.2 Predicting the robot's own and others' visual trajectories

While data from all sensory modalities are available to the agent when learning the models, only the visual input, from an egocentric perspective, is available when observing others. This implies that only the data referring to the visual input are available in the input vector (see (4) in Table 1: this part of the augmented dataset only contains visual data at times $t$ and $t+1$; training on this part of the dataset allows the network to learn to predict the missing modalities from visual information only).

In this respect, the reconstruction of missing modalities described above plays a key role. The neural network can act as a forward model to predict the next sensorimotor state from the current state of the agent (see line (2) in Table 1: this part of the augmented dataset only contains data at time $t$; training on this part of the dataset allows the network to learn to predict the next time step when only the previous observation is available). However, when observing someone else, the current state of the agent is not fully available, as only visual information can be observed. To perform predictions, the network therefore needs to first infer the current sensorimotor state and then the future one. We first feed the model with the observed visual data and let it reconstruct the full sensorimotor state at time $t$; then we feed the obtained reconstruction back as if it were the observation, and let the network reconstruct the missing part, that is the state at time $t+1$. In summary, the network first reconstructs the current sensorimotor perceptions of the observed agent and then uses these reconstructed perceptions to predict the next state of the agent, as sketched below.
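The two-pass procedure can be sketched as follows. Here `model.reconstruct` is a hypothetical helper standing for one forward pass of the trained multimodal VAE returning the reconstructed mean vector, and the slice layout follows the assumption made in the earlier sketches:

```python
import numpy as np

FLAG = -2.0

def predict_next_state(model, v_t, input_dim=28, vision_slice=slice(4, 8)):
    """Two-pass prediction from visual information only.

    Pass 1: reconstruct the full sensorimotor state at time t from v_t.
    Pass 2: feed the reconstructed time-t state back as the observation
            (condition (2) in Table 1) and reconstruct the state at t+1.
    """
    # Pass 1: only vision at time t is observed, everything else is masked.
    x = np.full(input_dim, FLAG)
    x[vision_slice] = v_t
    recon = model.reconstruct(x)           # full reconstructed vector (t and t+1 parts)

    # Pass 2: use the reconstructed time-t half as the observation, mask the rest.
    x2 = np.full(input_dim, FLAG)
    x2[:input_dim // 2] = recon[:input_dim // 2]
    recon2 = model.reconstruct(x2)
    return recon2[input_dim // 2:]         # predicted sensorimotor state at t+1
```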

3.3.3 Controlling the robot in an online control loop

In addition to the abilities of the architecture to reconstruct and predict the visual trajectories of other agents’ motion, the learned model can be used as a controller for the robot. In particular, we show how the model can be placed in a control loop to regulate the sensory state of the robot given a target state. This approach can be used in imitation learning scenarios, for instance, where the robot imitates a target trajectory. In our scenario, the robot observes someone else’s visual trajectory from an egocentric point of view and uses the learned model to replicate such trajectory.

The control loop is depicted in Fig. 1 (rightmost diagram). Notably, the joint and visual configurations of the robot ($q_t$ and $v_t$) are fed back to the network in order to provide the correct current state at each time step. This prevents the network from drifting during the online cycles of the control loop due to the dependencies between the different input modalities. For example, moving to areas of the sensory space that lie far from the training data increases uncertainty. This condition is made more severe by the multimodal nature of the data, which come independently from diverse sensors. The feedback loop implemented to provide the network with the real current data from the robot helps prevent the accumulation of errors across the different state dimensions.

It is also important to emphasize that using the learned network as a controller for the robot is not a trivial application, since the network itself represents a model of the robotic system. The ability of the network to produce motor commands is then key to achieve a controller behavior, but this is not sufficient to implement an effective controller. It is important to provide the network with all the sensory information that can help the model to learn the kinematics and dynamics of the system, in particular the sensory states at two consecutive time steps. This is key for the network to build meaningful representations of the robot kinematics and dynamics, and in turn to generate sensible motor commands.
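As an illustration of the control loop, the following sketch uses hypothetical interface names (`model.reconstruct`, `robot.read_joints`, `robot.read_vision`, `robot.send_velocities`) and the slice layout assumed in the earlier sketches. At every cycle it feeds the current joint and visual state back to the network, places the next target visual position in the $v_{t+1}$ slot, and issues the reconstructed velocity command:

```python
import numpy as np

FLAG = -2.0

def imitation_control_loop(model, robot, target_visual_trajectory,
                           q_slice=slice(0, 4), v_slice=slice(4, 8),
                           v_next_slice=slice(18, 22), u_slice=slice(10, 14),
                           input_dim=28):
    """Track a demonstrated visual trajectory with the learned multimodal VAE.

    All modalities except the fed-back state and the visual target are masked;
    the motor command inferred by the network is sent to the robot's joints.
    """
    for v_target in target_visual_trajectory:
        x = np.full(input_dim, FLAG)
        x[q_slice] = robot.read_joints()       # current proprioception (feedback)
        x[v_slice] = robot.read_vision()       # current visual hand position (feedback)
        x[v_next_slice] = v_target             # desired visual position at t+1
        recon = model.reconstruct(x)
        robot.send_velocities(recon[u_slice])  # issue the inferred velocity command
```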

4 Experiments

4.1 Experimental setup

We have demonstrated our proposed approach using a humanoid iCub robot. In our scenario, the robot is interacting with a piano keyboard. The architecture is trained using data collected from the robot through experience, by performing pseudo-random self-exploratory movements (motor babbling).

Then, the robot uses the learned architecture to (1) reconstruct missing sensory modalities, (2) predict its own sensorimotor state and the visual trajectories of another agent from an egocentric point of view, and (3) imitate the observed agent's trajectories.

During the experiments, the iCub robot moves its right arm while keeping its head still in a fixed position. Four joints of one of the robot's arms are used during motor babbling. The joint positions $q_t$ are acquired from the motor encoders attached to each joint. The initial joint configuration of the robot's arm is $q_1 = -35$ deg, $q_2 = 35$ deg, $q_3 = 0$ deg, $q_4 = 50$ deg (corresponding to the shoulder pitch, roll, yaw, and elbow flexion, respectively); the wrist is fixed in the standard neutral position, the index finger is extended in the neutral position and the remaining fingers are folded. The joint configuration of the robot's head is the standard neutral one, except for the first two joints of the neck, which are turned 12 degrees rightwards and downwards. Visual information encoding the position of the hand in the 2D visual field of the robot is acquired from the robot's eye cameras, as one 2D coordinate pair for the right eye and one for the left eye. This is obtained by tracking the hand of the robot using OpenCV features and computing the mean of the tracked feature points, thus obtaining the two coordinates in each 2D frame. This approach is a coarse representation of the visual information available to the robot. An alternative is to extract visual information directly from pixels using a convolutional neural network (CNN). However, the coarse approximation obtained with the simple visual tracker was sufficient for the experiments presented in the following paragraphs, and we leave the implementation of a CNN as future work. A binary one-dimensional tactile signal is acquired from the robot's artificial skin, which consists of a network of taxels ("tactile pixels"). More specifically, the 60 tactile signals acquired from the skin of the robot's hand are normalized, averaged and binarized using an empirically fixed threshold. The result is a one-dimensional signal that is equal to 1 when a contact is perceived (i.e. when the average of the signals is above the fixed threshold), and 0 otherwise. Sound data are acquired from the piano keyboard, in the form of a one-dimensional vector containing the MIDI information related to the key played. MIDI is a symbolic representation of musical information incorporating both timing and velocity for each note played, which is thus associated with a specific integer number.

The commands $u_t$ sent to the robot's motors to perform autonomous self-exploration (motor babbling) are velocity references. No prior knowledge is assumed about the robot's kinematic or dynamic structure. The choice of using velocity commands keeps this prior knowledge to a minimum by avoiding reliance on the inverse kinematics of the robot. However, our method can accommodate other implementation choices, such as position or torque control. Self-exploration is realized by performing motor babbling on one of the robot's arms. Random sinusoidal motor commands are sent to the motors as velocity references, defined for each joint $i$ as $u_i(t) = A_i \sin(2\pi f t)$, where the amplitude $A_i$ is sampled for each joint at each cycle from a uniform distribution, and the frequency $f$ is fixed so that each cycle starts and terminates at zero (i.e. null velocity). Normalization is finally applied to all data to obtain signals in the range $[-1, 1]$. The dataset collected from motor babbling contains 7380 data points, corresponding to approximately 30 minutes of exploration. This dataset is then augmented in order to train the network, as explained in the previous section.
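A minimal sketch of the babbling command generator described above follows; the amplitude range, cycle duration and control rate below are hypothetical placeholders, not values reported in the paper:

```python
import numpy as np

def babbling_cycle(n_joints=4, cycle_duration=2.0, dt=0.05, amp_range=(-10.0, 10.0)):
    """Generate one cycle of random sinusoidal velocity commands.

    Each joint gets its own amplitude, sampled once per cycle; the frequency is
    fixed to one period per cycle so that velocities start and end at zero.
    """
    amplitudes = np.random.uniform(*amp_range, size=n_joints)  # per-joint amplitudes
    freq = 1.0 / cycle_duration                                # one full period per cycle
    times = np.arange(0.0, cycle_duration, dt)
    # Shape (len(times), n_joints): velocity command for every joint at every step.
    return amplitudes * np.sin(2.0 * np.pi * freq * times[:, None])
```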

Figure 3: Data from self-exploration. Joint positions are recorded from the motor encoders, visual positions (4D) are acquired from the RGB cameras of the robot’s eyes, random sinusoidal velocity commands are issued to the arm joints to perform motor babbling.

The input fed to the network is a 28-dimensional vector, including two four-dimensional joint position vectors ($q_t$, $q_{t+1}$), two four-dimensional visual position vectors ($v_t$, $v_{t+1}$), two one-dimensional tactile signals ($\tau_t$, $\tau_{t+1}$), two one-dimensional sound signals ($s_t$, $s_{t+1}$), and two four-dimensional motor command vectors ($u_t$, $u_{t+1}$).

We performed extensive evaluation tests of our proposed method. Three different datasets have been used: test data from the robot's self-exploration, data from an RGB-D camera of a human playing a piano keyboard, and data from an RGB-D camera from the Imperial-PRL KSC Dataset (available at www.imperial.ac.uk/PersonalRobotics; these data were used in (Chang et al., 2017) to validate kinematic structure correspondence methods). To demonstrate our proposed method in practice, we show that the iCub robot is able to leverage its prediction capability to plan its own actions to imitate a human on the piano keyboard.

More details about the datasets used (including the number of datapoints and training specifications) are provided in Appendix A.

Figure 4: Reconstruction results of multimodal data: proprioception ($q$), vision ($v$), motor commands ($u$), touch and sound. Blue lines show the reconstructed data given complete input (case 1 in Table 1), and orange lines show the reconstruction results with partial input (case 4 in Table 1). Shaded areas represent the variance of the predicted Gaussian distribution of the reconstructed signals. The multimodal variational autoencoder is able to reconstruct the visual position accurately. Reconstruction results in the joint and motor spaces display the effect of the redundancy of the robot's arm: the same visual position can be reconstructed using diverse configurations and applying diverse motor commands. Reconstruction errors occur simultaneously on different degrees of freedom, according to the robot's kinematic structure. The redundancy effect is particularly evident for the second and third joints. A representation of the degrees of freedom of the iCub arm is depicted in the lower-right picture.

4.2 Architecture structure

The network, implemented in TensorFlow (Abadi et al., 2015), consists of five unimodal sub-networks, for the proprioceptive (joint positions), visual, tactile, sound and motor modalities, respectively. The encoders, one for each unimodal sub-network, consist of two fully connected layers, while the decoders consist of three fully connected layers. For the proprioception, visual and motor networks, the two encoder layers consist of 40 and 20 units, respectively, and the three decoder layers consist of 40, 8 and 8 units. For the tactile and sound networks, the two encoder layers consist of 10 and 5 units, respectively, and the three decoder layers consist of 10, 2 and 2 units. The ReLU activation function is used for each layer throughout the network. The difference in the number of units takes into account that tactile and sound data are two-dimensional vectors, while the other modalities consist of eight-dimensional vectors. The outputs of all the unimodal encoders are concatenated and fed into the shared network, which consists of a two-layer encoder with 100 and 28 units and a two-layer decoder with 100 and 70 units. The source code and the dataset used for this experiment can be downloaded at github.com/ImperialCollegeLondon/Zambelli2019_RAS_multimodal_VAE.
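For illustration, the layer structure described above can be sketched as follows (in PyTorch, whereas the original implementation uses TensorFlow; the parallel mean/variance heads follow one reading of the Fig. 2 caption, the output heads are left linear, and the stochastic latent sampling and training loss are omitted for brevity):

```python
import torch
import torch.nn as nn

class UnimodalBranch(nn.Module):
    """Encoder/decoder pair for one modality; layer sizes follow the description
    in the text, with two parallel output heads for mean and log-variance."""
    def __init__(self, in_dim, enc_units, dec_hidden, out_dim, shared_dim=70):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, enc_units[0]), nn.ReLU(),
            nn.Linear(enc_units[0], enc_units[1]), nn.ReLU(),
        )
        self.dec_hidden = nn.Sequential(nn.Linear(shared_dim, dec_hidden), nn.ReLU())
        self.mean_head = nn.Linear(dec_hidden, out_dim)
        self.logvar_head = nn.Linear(dec_hidden, out_dim)

    def encode(self, x):
        return self.encoder(x)

    def decode(self, shared):
        h = self.dec_hidden(shared)
        return self.mean_head(h), self.logvar_head(h)

class MultimodalVAE(nn.Module):
    """Hypothetical PyTorch skeleton of the multimodal VAE architecture."""
    def __init__(self):
        super().__init__()
        self.branches = nn.ModuleDict({
            "joints": UnimodalBranch(8, (40, 20), 40, 8),
            "vision": UnimodalBranch(8, (40, 20), 40, 8),
            "touch":  UnimodalBranch(2, (10, 5), 10, 2),
            "sound":  UnimodalBranch(2, (10, 5), 10, 2),
            "motor":  UnimodalBranch(8, (40, 20), 40, 8),
        })
        concat_dim = 3 * 20 + 2 * 5  # concatenated unimodal codes (= 70)
        self.shared_encoder = nn.Sequential(
            nn.Linear(concat_dim, 100), nn.ReLU(),
            nn.Linear(100, 28), nn.ReLU(),
        )
        self.shared_decoder = nn.Sequential(
            nn.Linear(28, 100), nn.ReLU(),
            nn.Linear(100, 70), nn.ReLU(),
        )

    def forward(self, inputs):
        # inputs: dict modality -> (batch, D_m) tensor, with missing parts set to the flag value
        codes = [self.branches[m].encode(inputs[m]) for m in self.branches]
        shared = self.shared_decoder(self.shared_encoder(torch.cat(codes, dim=-1)))
        return {m: self.branches[m].decode(shared) for m in self.branches}
```

The modality names and dimension bookkeeping above are illustrative; the released source code linked above is the authoritative reference.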

4.3 Sensorimotor data reconstruction

In this section, we present experiments that demonstrate the ability of the proposed system to reconstruct sensorimotor data from complete and from partial observations, that is, when all inputs are available and when only a subset of modalities is available. The experiments show that the proposed architecture can effectively reconstruct the data in all cases.

The Multimodal Variational Autoencoder is first trained using datapoints explored during motor babbling. The dataset collected during babbling is split into a training dataset and a test dataset. As described in Section 3, the network is trained on both complete and partial data of the training set.

In order to evaluate the reconstruction ability of the network, we first assess whether the encoding and decoding of the variational autoencoder manage to retrieve complete input data (when all the modalities are present). We then test the model on the reconstruction of missing modalities, using only the visual information as input.

The experiments conducted show that the learned network achieves strong results in terms of reconstruction and, beyond that, in terms of capturing the complexity of the system. The network is able to provide an estimate of the reconstructed input even when the majority of the modality dimensions are missing. Importantly, the model is also able to provide a measure of the uncertainty due, for example, to the redundancy of the system. Results of the reconstruction obtained using the multimodal variational autoencoder are shown in Fig. 4. This figure shows the reconstruction results on the joint, motor, tactile, sound and visual spaces, obtained with both complete and partial input data. The data used for this experiment belong to the dataset collected during the robot's self-exploration phase, but were not used during the training of the model. It is possible to note that, while the reconstruction of the visual signals is very accurate, the reconstruction of the joint positions and of the motor commands presents a peculiar behavior. In particular, reconstruction errors occur simultaneously for several joints. A closer analysis of these results shows that these joints are actually related in the kinematic structure of the robot: one joint can compensate for, or contribute to, the movements of another joint.

The results shown in Fig. 4 demonstrate how this redundancy is captured by the multimodal variational autoencoder, thus demonstrating the power of this type of network on such difficult tasks. More specifically, the multimodal variational autoencoder is able to learn the general sensorimotor structure underlying the robot's movements rather than single trajectories or single motion sequences. In other words, the robot learns that there can be diverse configurations to achieve a target (for example a visual target). For instance, it can be seen that for the second and third joints the variance of the reconstruction is particularly large. This comes from the fact that several joint configurations can explain the visual information provided to the architecture. We note that the true data to be reconstructed remain most of the time within the confidence range of the reconstruction.

The results obtained show another interesting capability of the learned network, namely the ability to learn the forward kinematics using only 2D images from the robot's cameras, without direct access to the depth information or the 3D position of the hand in the robot's operational space. This allows the system to avoid the use of stereo vision algorithms (with the related calibration and matching issues), while relying only on the on-board 2D RGB cameras.

The mean squared errors of the reconstructed sensorimotor signals on test data have been computed for each modality to provide a quantitative account of the network performance. In Table 2, we report the error scores obtained both when complete and when partial data are provided to the network. Note that the error scores achieved with partial data are comparable to those obtained when feeding complete data to the network, with the only exception of the touch modality, which remains a challenge due to its binary nature. This shows that the performance of the network is generally not degraded significantly when the input consists only of partial data (i.e. vision only). It also shows that the network has successfully learned not only a direct reconstruction of each single modality, but also the cross-relations between the modalities and how to reconstruct each of them when only visual data are available.

The values reported in Table 2 show the accuracy of the proposed method. The values are reported as percentages (relative to the dataset ranges) to enable direct comparison across the modalities. To put them in perspective, the mean squared errors in joint space correspond to mean errors of less than two degrees in joint angles, the mean squared error in vision space corresponds to an average error of a few pixels in the original image frames, and the mean squared errors in the motor commands correspond to similarly small average errors in degrees per second.

| Modality | Rec. complete data | Rec. partial data |
| Joint positions ($q$) | 0.46% [0.45; 0.48]% | 1.39% [1.37; 1.44]% |
| Vision ($v$) | 0.05% [0.04; 0.07]% | 0.05% [0.03; 0.06]% |
| Touch ($\tau$) | 2.35% [1.74; 3.66]% | 9.42% [9.07; 10.44]% |
| Sound ($s$) | 3.35% [0.70; 4.18]% | 3.95% [3.35; 4.18]% |
| Motor commands ($u$) | 1.29% [0.67; 1.60]% | 2.32% [2.29; 2.37]% |
Table 2: Mean squared error percentages (with ranges in brackets) for each dimension of the multimodal reconstructed signal on test data, for reconstruction from complete input and from partial (vision-only) input.

4.4 Predict own sensorimotor states and visual trajectories of others

In this section, we present experiments which demonstrate the ability of the proposed architecture to predict the robot's own sensorimotor state and to predict visual trajectories of another agent from an egocentric point of view. These experiments show that the proposed architecture can effectively predict future states by using the multimodal representations learned during training. Condition (2) in Table 1 was critical to achieve this behavior. The prediction task requires the network to infer future sensorimotor states given the current one (see case (2) in Table 1). This is realized by feeding the inferred missing time step (i.e. the reconstructed state at time $t+1$) back to the network as the new time step $t$, letting the network infer a new time step $t+1$, which is in fact the prediction at $t+2$.

Figure 5: Prediction results using the learned model to predict the visual trajectories of the robot's own motion (a representative part of the trajectories is depicted). Solid black lines represent the real data (part of the test database), while blue lines represent the predicted mean and the shaded light blue areas the predicted variance (uncertainty) of the model. In each plot, the horizontal axis shows the time steps, while the vertical axis shows the (normalized) magnitude of each of the four dimensions of the visual state.

First, we have evaluated the proposed architecture using test data from the robot's own motor babbling. Results of the predictions of the visual trajectories obtained on data explored during motor babbling are shown in Fig. 5. The mean squared prediction error obtained in this experiment corresponds to less than 4 pixels. The data on which the experiment is carried out is the test set, that is, the part of the robot's self-exploration data that was not used for training the model. These results show that the network is able to make accurate predictions by first reconstructing missing data from visual positions only, and then iterating the process a second time in order to obtain the next-step prediction.

We have also tested the architecture on multi-step-ahead predictions. At each time step, the predicted next state is used as the input of the network to predict an additional step ahead. This process can be repeated as long as necessary. The results in Fig. 6 show that the model is capable of predicting the visual trajectory of the ongoing swing of the robot (the starting state of the prediction being 2 time steps after the beginning of the swing). The predicted trajectory (in blue) matches the ground truth trajectory (in black) accurately for more than 20 time steps, and the prediction error at 50 time steps remains low. The model then converges to a stable periodic swing pattern which differs from the actual trajectory of the robot. Note that obtaining stable long-term predictions with this type of approach is a challenging problem: such approaches tend to diverge quickly because of the accumulation of error; also note that the model is expected to be unable to predict the movements of the robot after the first swing, as each swing is independent.

Figure 6: Prediction over multiple time steps using the learned model to predict the visual trajectories of the robot's own motion (a representative part of the trajectories is depicted). Solid black lines represent the real data, while blue lines represent the predicted mean.

Then we evaluated the architecture on data collected from the observation of other agents. Using the multimodal variational autoencoder trained on the robot's own data, the robot is able to make predictions of others' motion trajectories in the visual space as well. When observing others, the robot has access only to the visual information, from its egocentric point of view. The learned model is then used to retrieve the motor commands (together with the other missing sensory modalities) that would enable the robot to reproduce the observed trajectory, performing a mental simulation of the observed action. Experiments were carried out using two different datasets. The first test dataset consists of movements of a human playing a piano keyboard, recorded by the authors using an RGB-D camera (Fig. 7a). The second test dataset is part of the Imperial-PRL KSC Dataset (data used in (Chang et al., 2017) to validate kinematic structure correspondence methods). It contains Kinect data of a human moving his hands (represented in Fig. 7b). The 3D visual positions of these two datasets were translated into 2D data by using two of the three available dimensions. This corresponds to a coarse approximation of the projection of the 3D trajectories onto the two cameras of the robot.

While the first dataset is similar to the self-exploration dataset in terms of scenario and application, the second one is significantly different, involving the free motion of the human arms, which are not confined within the scope of a keyboard. The first test dataset allows us to demonstrate that, by using the learned internal models, the robot can effectively reconstruct and predict another agent performing a sequence of motions similar to those performed in the motor babbling phase. The second test dataset allows us to demonstrate that the robot is able to reconstruct and predict visual trajectories of others' motion using the learned models also when the type of motion is significantly different from the data acquired by the robot through self-exploration.

(a) Human piano playing.
(b) Imperial-PRL KSC Dataset.
Figure 7: (a) Kinect data of a human's upper-body movements while playing a piano keyboard with one hand. (b) Kinect data from the Imperial-PRL KSC Dataset. The trajectory of the left hand has been used as the test dataset.

Results are shown in Fig. 8: the left plot shows the prediction performance on the Kinect data collected from a human playing a piano keyboard (see Fig. 7a), and the right plot shows the prediction performance on the Kinect data from the Imperial-PRL dataset (specifically on the left-hand trajectory, see Fig. 7b). The corresponding mean squared errors obtained on the two datasets correspond to errors of about 6 to 7 pixels. The results achieved demonstrate that the proposed architecture obtains predictions of visual trajectories of others' motion by only making use of internal models of the self.

Figure 8: Predictions of others’ trajectories. Solid black lines represent the real data, while blue lines represent the predicted mean and the shaded light blue areas the predicted variance (uncertainty) of the prediction model. Prediction of human playing a piano keyboard (left) and prediction of the left hand motion (right).

4.5 Imitate the observed agent’s trajectories

In this section, we present experiments that demonstrate the ability of the proposed architecture to use the learned multimodal representations to control the robot to imitate an observed agent's visual trajectories. Condition (3) in Table 1 was critical to achieve this behavior. The experiments presented here show that the robot can successfully follow demonstrated/target visual trajectories, using only the learned multimodal representations.

The learned model can be used in a control loop (rightmost diagram in Fig. 1). By deploying the learned model as a controller, it is possible to implement, for example, imitation tasks, where the robot needs to track trajectories in the sensory space. The learned model is able to reconstruct the motor commands necessary to achieve reference trajectories. The retrieved motor commands can then be issued to the robot's motors. For this experiment, we have used two datasets: target trajectories from motor babbling, and data observed from the human playing two keys on the piano keyboard. The first dataset consists of trajectories from the part of the babbling dataset that was not used for training the network. This test dataset thus contains data that the network has not seen before, though they are similar to the data used for training; in particular, the associations between positions in the sensory space and the corresponding values of the velocity motor commands are similar. The second dataset is more challenging, particularly because it may contain visual positions that were not contained in the training set, which can in turn lead to combinations of the multimodal dimensions of the input that the network has never been presented with before. The objective is for the robot to imitate the observed target trajectory. The target trajectory is used as reference and fed to the network in place of $v_{t+1}$, while the current visual position and the current joint configuration of the robot ($v_t$ and $q_t$) are fed back to the network. All the other modalities are considered missing, in particular the motor commands, which are produced by the network online after each new observation.

In the first experiment, we have compared the proposed method with the Cartesian controller available on the iCub. The stereo vision system of the iCub is used to determine the 3D position in the Cartesian space associated with the 2D visual inputs. This information is then used by the Cartesian controller to reach the target positions. Results obtained on the first dataset are represented in Fig. 9. The trajectories depicted in this figure are, consistently with the visual data used throughout this article, those captured from the robot's first-person view. It can be noted that our proposed method generates a trajectory that is more accurate than the one from the built-in Cartesian controller. It is important to note that the visual information available to the Cartesian controller is the same used by the proposed method, hence the calibration of the cameras and the whole experimental setup are shared. This observation allows us to conclude that the proposed method overall surpasses the built-in controller in performing the task. The mean squared error achieved by the proposed model on this task on the four-dimensional visual data corresponds to an error of less than 6 pixels, a very low value considering the resolution of the image frames and the precision of the visual data encoding the hand position throughout the experiments. The built-in Cartesian controller achieved a less accurate tracking of the reference visual trajectory, with a mean squared error on the four-dimensional visual data more than double that achieved with the proposed method. This difference is likely related to the fact that the reference data come from the OpenCV tracker used to detect the 2D position of the hand in each image, which is probably not a perfect representation of the 3D position of the hand. This is likely causing the stereo-vision module to produce inaccurate target positions for the built-in Cartesian controller. Thus, we hypothesize that this succession of inaccuracies leads to a less accurate reproduction of the trajectory. Nevertheless, it is interesting to see that our proposed model manages to generate a better trajectory, while using the same data and without the need of the prior knowledge embedded in the built-in Cartesian controller.

Figure 9: Results of the imitation task realized by using the built-in Cartesian controller (yellow line) and the learned model (green line) to control the robot's movements online. The proposed method outperformed the built-in model, achieving a more accurate tracking of the reference visual trajectory (gray line). The left plot shows the 2D visual position representation of the reference and executed trajectories, while the right plots show the corresponding temporal profiles of the two image coordinates. For clarity of the representation, only the trajectories acquired from the left eye camera of the robot are depicted; similar results were obtained from the right camera.

The experiments on the second dataset also show that the proposed method allows a robot to use data observed from another agent and imitate them. Results of 3 repetitions of this task are shown in Fig. 10. The mean squared error achieved on this task on the four-dimensional visual data corresponds to an average error of only 3 pixels in the image frames. The visual trajectory executed by the robot and shown in Fig. 10 closely tracks the demonstrated trajectory: the robot is able to replicate the trajectory and successfully hit the two keys that were played by the demonstrator. It can be noted that the results on the vertical image coordinate are more accurate than those on the horizontal one. This reflects the structure of the actions performed during the exploration used for training the model: while the exploratory movements spanned a wide range in the vertical direction, a smaller part of the space was explored in the horizontal direction. We hypothesize that the bias observed in the network performance is related to the fact that the data acquired through motor babbling were also biased and constrained within a limited portion of the operational space. This limited, biased exploration allowed a more efficient data collection for the scope of the experiments and tasks described in this paper. We discuss this point further in Section 5.

Figure 10: Results of the imitation task on the data collected from a human playing a piano keyboard. The proposed method (colored lines) allows the robot to effectively track the reference visual trajectory (black line). The left plot shows the 2D visual position representation of the reference and executed trajectories, while the right plots show the corresponding temporal profiles of the two image coordinates. For clarity of the representation, only 3 of the repetitions performed on the task are shown, and only the trajectories acquired from the left eye camera of the robot are depicted (analogous results were obtained from the right camera).

4.6 Results summary

In summary, the proposed method achieves accurate reconstruction and prediction; moreover, it is able to generate control signals to imitate visual trajectories consistently and accurately. Table 3 summarizes the quantitative results obtained and described in the previous subsections. The proposed method achieved low prediction errors across the different tasks considered: the model was able to predict with errors that can be considered negligible with respect to the state and action spaces (e.g. less than 2 degrees for joint positions and less than 6 pixels in the vision space).

Task | Accuracy (percentage scores, relative to the dataset ranges)
Reconstruction | Joint pos.: 0.46%; Vision: 0.05%; Touch: 2.35%; Sound: 3.35%; Motor commands: 1.29%
Reconstruction from partial data | Joint pos.: 1.39%; Vision: 0.05%; Touch: 9.42%; Sound: 3.95%; Motor commands: 2.32%
Prediction of self motion | Single step: 0.21%; Multi-step: 0.42% (after 50 time steps)
Prediction of others' motion | Piano playing: 0.64%; Imperial-PRL-KSC: 0.69%
Imitation | Motor babbling: 0.48%; Playing keys: 0.13%
Table 3: Accuracy scores summary: low prediction error is achieved on all the considered tasks, as only small discrepancies from the reference are measured.
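For reference, one plausible way to obtain percentage scores of this kind is to normalize the per-modality error by the range that the modality spans in the dataset. The snippet below is only a sketch of such a convention and is not taken from the evaluation code used in the paper; the exact formula may differ.

```python
import numpy as np

def percentage_error(predicted, reference, data_min, data_max):
    """Mean absolute error expressed as a percentage of the dataset range.

    One plausible convention for 'percentage scores relative to the dataset
    ranges': errors are divided, dimension by dimension, by the range spanned
    by that dimension in the collected data, then averaged and scaled to %.
    """
    data_range = np.asarray(data_max, dtype=float) - np.asarray(data_min, dtype=float)
    abs_err = np.abs(np.asarray(predicted, dtype=float) - np.asarray(reference, dtype=float))
    return 100.0 * np.mean(abs_err / data_range)
```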

4.7 Comparison with other methods

An important aspect of this work is that the training procedure can be applied to other neural network architectures with reconstruction capabilities. The augmentation of the dataset with different arrangements of missing modalities enables the construction of a single model capable of executing several tasks. In order to illustrate the possibility of applying this training procedure to other networks, and to compare the accuracy of the proposed model, we have tested several other architectures:

  • (i) Vanilla VAE: a standard VAE model (e.g. (Kingma and Welling, 2013)) trained in a denoising fashion on the dataset without missing modalities, by setting each input value to the missing-data placeholder with a probability of 30%.

  • (ii) Vanilla VAE trained with our proposed training method, on the augmented database.

  • (iii) The multimodal architecture proposed in (Droniou et al., 2015). This architecture learns a shared latent representation and a classification of the inputs, and can be used to reconstruct missing modalities. It is trained as in (i), which is also the approach proposed by its authors. The implementation of this architecture is based on the source code provided by its authors, and the sizes of the different fully connected layers have been selected to match those of our proposed architecture.

  • (iv) The multimodal architecture proposed in (Droniou et al., 2015), trained on our augmented database.

  • (v) Two independent models, namely a forward and an inverse model, implemented as feed-forward neural networks.

Figure 11: Visualization of the performance scores (prediction error) of the proposed method and of the compared methods. Our training strategy clearly improves the performance of reconstruction models, including the vanilla VAE and the model proposed in (Droniou et al., 2015). Our method performs on par with or better than the alternatives on the complex full sensorimotor state estimation task.

Implementation details of the different architectures are given in Appendix B. We considered three representative cases for comparison, namely:

  • prediction of the current sensory state from the previous one (case 2 of Table 1); this case corresponds to the forward model function;

  • prediction of the motor commands from the visual information only (case 4 of Table 1); this case corresponds to the inverse model function;

  • prediction of the whole sensorimotor state from the external visual information and the current joint configuration (case 3 of Table 1); this case corresponds to the imitation scenario.

Fig. 11 provides a visual representation of the performance comparisons in terms of prediction error, and Table 4 summarizes the MSE scores obtained by the compared models. From the results presented in Table 4, we draw the following conclusions. First, the proposed training strategy consistently improves the performance of the considered models, reducing the MSE scores to approximately half of the original values for the model from (Droniou et al., 2015), and to a small fraction of the original values for the vanilla VAE. Second, the proposed multimodal VAE outperforms a vanilla VAE model: we argue that this is because the proposed multimodal model can learn both modality-specific and cross-modality features, thanks to the modular structure of the encoder/decoder and the joint probability distribution learned in the latent encoding.

The comparison with the two independent forward and inverse models demonstrates that the proposed architecture performs better because it can fulfil the two functions (forward and inverse model) simultaneously. In this comparison, the predictions of the forward model are used for the “forward model” case and the predictions of the inverse model for the “inverse model” case. To address the third (imitation) case, the forward and inverse models must be chained in order to produce the whole sensorimotor state from the visual and proprioceptive information: first, the inverse model is applied to obtain the motor commands, which are then used by the forward model to produce the sensory state prediction. Although each individual model is (almost) perfectly suited to its own function (note the lowest scores achieved), the combination of the two does not achieve the best performance on the imitation case, where the proposed architecture outperforms this baseline.
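For clarity, the sketch below illustrates this chaining of the two independent baselines, using the layer sizes reported in Appendix B. The breakdown of the 10-dimensional sensory state and the 4-dimensional motor command is our assumption, and the PyTorch code is an illustrative rendering rather than the original implementation.

```python
import torch
import torch.nn as nn

# Layer sizes follow the tables in Appendix B. The 10-dimensional sensory state is
# assumed to be 4 joint positions + 4 visual coordinates + 1 touch + 1 sound value,
# and the motor command is assumed to be 4-dimensional.
SENSORY_DIM, MOTOR_DIM = 10, 4

forward_model = nn.Sequential(   # sensory + motor at t-1 (14 dims) -> sensory at t (10 dims)
    nn.Linear(SENSORY_DIM + MOTOR_DIM, 14), nn.Tanh(),
    nn.Linear(14, SENSORY_DIM),
)

inverse_model = nn.Sequential(   # sensory state at t-1 and t (20 dims) -> motor command (4 dims)
    nn.Linear(2 * SENSORY_DIM, 100), nn.Tanh(),
    nn.Linear(100, 100), nn.Tanh(),
    nn.Linear(100, MOTOR_DIM),
)

def imitation_baseline(sensory_tm1, sensory_t_target):
    """Chain the two independent models for the imitation case: the inverse model
    proposes a motor command, which the forward model uses to predict the sensory state."""
    motor = inverse_model(torch.cat([sensory_tm1, sensory_t_target], dim=-1))
    predicted_sensory_t = forward_model(torch.cat([sensory_tm1, motor], dim=-1))
    return motor, predicted_sensory_t
```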

Method | Prediction of sensory state (forward model) | Prediction of motor command (inverse model) | Prediction of full sensorimotor state in imitation case
Ours | 1.13% [0.96; 1.22]% | 2.31% [2.28; 2.34]% | 1.52% [1.45; 1.67]%
Vanilla VAE | 19.37% [12.48; 35.39]% | 22.65% [12.47; 49.08]% | 8.43% [6.86; 9.16]%
Vanilla VAE + our training method | 1.75% [1.63; 1.82]% | 3.31% [3.24; 3.73]% | 1.76% [1.72; 1.81]%
Model from (Droniou et al., 2015) | 2.36% [2.07; 2.61]% | 5.72% [5.66; 5.77]% | 3.56% [3.22; 3.71]%
Model from (Droniou et al., 2015) + our training method | 1.09% [1.06; 1.12]% | 3.03% [2.91; 3.06]% | 1.45% [1.43; 1.47]%
Independent forward and inverse models | 0.51% [0.49; 0.53]% | 0.24% [0.23; 0.26]% | 3.39% [3.04; 3.64]%
Table 4: Accuracy of different architectures on the tasks presented in this paper. The training and evaluation of each model have been replicated 10 times. Results are reported as percentages in the form median [first quartile; third quartile].

5 Discussion

The results presented in this study show that a robot can learn to predict the visual trajectories of another agent from an egocentric point of view by exploiting only self-learned internal models. It has been argued that one of the main challenges in predicting others based only on internal models of self is the difference in the available data: while the whole set of sensorimotor data is available when the robot is acting and exploring, only visual information is available when the robot observes another agent. This motivated the proposed strategy to reconstruct and infer the missing information. In particular, the proposed training strategy proved crucial to improving model performance, and the proposed variational autoencoder allowed the robot to learn probability distributions over the different sensorimotor modalities that capture the kinematic redundancy of the robot's motions.

The choice of the variational autoencoder was motivated by its capability of modelling data uncertainty through a learned posterior distribution, represented by the mean and the variance of a Gaussian. The multimodal formulation, moreover, allows us to combine representations of different types of data into a single learned posterior distribution that gracefully merges the different sources of information. In addition, the encoder-decoder structure of the variational autoencoder is well suited to reconstruction and self-supervised learning, and hence to the objective of this work: to reproduce (reconstruct, predict, generate) signals during inference, after training on exploration (self-collected) data. The choice of a variational autoencoder instead of a classical autoencoder also allowed us to leverage the advantages of generative models. Variational autoencoders model the input data by means of a distribution, generally (as in our case) a Gaussian defined by a mean and a variance. This allows the model to capture a more general and flexible underlying structure of the data compared to other models (such as standard autoencoders or encoder-decoder models). In our case, the distribution is action-conditioned, since part of the input includes the motor commands. This means that the posterior distribution learned during training captures the correspondences between actions and sensor observations, and learns that the same observations can correspond to different actions. This is shown in Figure 4: although one of the joints does not follow the prescribed trajectory, the visual trajectory (as well as the tactile and acoustic ones) is tracked accurately. This is because the same visual position of the hand can be achieved by a number of different joint configurations (redundancy). The fact that the variance of this joint is significantly bigger than the variance of the other joints supports this claim, because it represents the uncertainty of this particular joint motion.
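As a small illustration of how this redundancy shows up in the learned model, the snippet below sketches how the per-joint variance produced by the decoder's variance head could be inspected; the threshold, the array values and the function name are hypothetical and not taken from the authors' code.

```python
import numpy as np

def flag_redundant_joints(joint_variance, threshold=2.0):
    """Flag joints whose predicted variance is much larger than the median.

    Sketch only: a joint whose reconstruction variance exceeds `threshold`
    times the median variance across joints is one the model treats as weakly
    constrained by the other modalities, i.e. kinematically redundant for
    reproducing the observed trajectory.
    """
    var = np.asarray(joint_variance, dtype=float)
    return np.where(var > threshold * np.median(var))[0]

# Hypothetical usage on the variance head output for a 4-joint arm:
# flag_redundant_joints([0.01, 0.02, 0.15, 0.01])  # -> array([2])
```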

The proposed approach could be enhanced by constraining the variational autoencoder to learn a latent space of a particular shape, from which inputs can be sampled in a more meaningful manner to generate synthetic sensorimotor data. Although we leave the exploration of this direction to future work, we believe it is a strong and promising characteristic of the chosen model in the context of multimodal learning. Another key characteristic of the proposed multimodal variational autoencoder is that it can learn both modality-specific and cross-modality features, thanks to the modular structure of the encoder/decoder and the joint probability distribution learned in the latent encoding.

A limitation of the current implementation is the dependence of the reconstruction accuracy on the explored sensorimotor space. In particular, it is possible that combinations of sensory states reached during an imitation task are far from the states used to train the network. In this case the network “guesses” motor commands by sampling from the learned distribution, but the reconstruction accuracy is usually poor due to the lack of training samples resembling the newly observed sensory state.

The problem of generalizing to unexplored regions of the space is indeed an interesting and still largely unsolved problem in robotics, as well as in exploration methods in other domains (e.g. machine learning, reinforcement learning, multi-task learning). One possibility to improve our current method would be to enlarge the exploration space to include a larger region of the multimodal space (e.g. larger areas of the Cartesian/joint space). This, however, would require acquiring a larger amount of data, thus making the learning of the model slower. A possible direction is the implementation of more sophisticated exploration strategies, for instance curiosity-based strategies (Maestre et al., 2015; Baranes and Oudeyer, 2010), or exploiting the generative nature of the model as mentioned earlier.

Finally, in this paper we designed the tasks so that the robot and the human are both capable of executing them. It would be interesting in future work to investigate how to identify and address situations in which the task cannot be fulfilled by the robot.

6 Conclusion and Future Work

This work takes inspiration from cognitive studies showing that humans can predict others’ actions by using their own internal models (Demiris et al., 2014). Following this direction, we have implemented a new architecture that allows a robot to predict visual trajectories of other agents’ actions by using only self-learned internal models. In this paper, we introduced a strategic training approach and a multimodal learning architecture that allow a robot to (1) reconstruct missing sensory modalities, (2) predict its own sensorimotor state and the visual trajectories of another agent from an egocentric point of view, and (3) imitate the observed agent’s trajectories. This versatility represents a major advantage of the proposed approach, which can thus be applied in different applications to address different objectives (e.g. prediction, control). The architecture leverages advantages of developmental robotics and of deep learning, and has been evaluated extensively on different datasets and set-ups.

In future work, we will investigate how to leverage the generative capabilities of the network, and how this method can be combined with more advanced exploration strategies (such as curiosity-based strategies) in order to acquire a self-perception database that covers the robot and environment states as much as possible (Maestre et al., 2015; Baranes and Oudeyer, 2010). The presented method will also be combined with perspective taking mechanisms (Johnson and Demiris, 2005; Fischer and Demiris, 2016) to enable prediction of future states from different viewpoints.

Acknowledgements

This work was supported by an EPSRC doctoral scholarship (Grant Number 1507722), EU FP7 project WYSIWYD under Grant 612139, and EU Horizon2020 project PAL under Grant 643783-RIA.

Appendix A Datasets and training

The motor babbling dataset contains 7380 datapoints (corresponding to approximately 30 minutes of exploration). The trajectory taken from the Imperial-PRL KSC dataset contains 25 datapoints (corresponding to approximately 45 seconds). The VAE is trained on the augmented motor babbling training set for 80000 epochs, with a learning rate of 0.00005 and a batch size of 1000 samples. At each training step, a batch is randomly sampled from the augmented training set and fed to the network. The augmented training set is formed by concatenating the original complete set of data collected during motor babbling, normalized to values between -1 and 1, with mutilated versions of it. Table 1 shows how the augmented dataset is formed: (1) complete data at times t-1 and t, concatenated with (2) data including only time t-1, concatenated with (3) data including only proprioception at time t-1 and vision at times t-1 and t, concatenated with (4) data including only vision at times t-1 and t. For cases (2)-(4), the missing data are replaced with a constant placeholder value that lies outside the normalized range used for the collected data. Each dataset was split with an 80:20 ratio between training and testing datapoints. Given the size of the dataset, the model can overfit the training set. Nonetheless, because the training dataset was collected using pseudo-random movements (i.e. not specific to a particular task to be performed), the network is able to generalize to different types of motion.
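A NumPy sketch of this augmentation scheme is given below. The placeholder value, block layout and keep-patterns are illustrative assumptions (the actual cases are those defined in Table 1), but the mechanism of stacking the complete data with partially blanked copies is the one described above.

```python
import numpy as np

MISSING = -2.0  # illustrative placeholder lying outside the [-1, 1] normalised range


def build_augmented_set(data, keep_patterns, block_slices):
    """Stack the complete babbling data with partially blanked copies of it.

    data:          (N, D) array, already normalised to [-1, 1].
    keep_patterns: one set of block names per augmentation case, listing the
                   blocks that remain observed in that copy.
    block_slices:  mapping from block name to the column slice of that block.
    """
    copies = [data]                                   # case (1): complete data
    for keep in keep_patterns:                        # cases with missing modalities
        partial = np.full_like(data, MISSING)
        for name in keep:
            partial[:, block_slices[name]] = data[:, block_slices[name]]
        copies.append(partial)
    return np.concatenate(copies, axis=0)


# Illustrative block layout and keep-patterns (only joints and vision are shown for brevity):
block_slices = {'joints_tm1': slice(0, 4), 'joints_t': slice(4, 8),
                'vision_tm1': slice(8, 12), 'vision_t': slice(12, 16)}
keep_patterns = [{'joints_tm1', 'vision_tm1'},                 # only time t-1
                 {'joints_tm1', 'vision_tm1', 'vision_t'},     # proprioception + vision
                 {'vision_tm1', 'vision_t'}]                   # vision only
```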

Appendix B Parameters of the architectures

Multimodal Variational Autoencoder (proposed architecture)

Layer / stage | joint positions | visual | tactile | sound | motor commands
Input layer | 8 dims | 8 dims | 2 dims | 2 dims | 8 dims
Modality encoders | 40-ReLU | 40-ReLU | 10-ReLU | 10-ReLU | 40-ReLU
 | 20-ReLU | 20-ReLU | 5-ReLU | 5-ReLU | 20-ReLU
Concatenation (20+20+5+5+20 = 70 dims)
Shared encoder | 100-ReLU
Latent space | 28-ReLU x2
Shared decoder | 100-ReLU
 | 70-ReLU
Slicing into 20, 20, 5, 5, 20 dimensions respectively
Modality decoders | 40-ReLU | 40-ReLU | 10-ReLU | 10-ReLU | 40-ReLU
Reconstructed data | 8-ReLU x2 | 8-ReLU x2 | 2-ReLU x2 | 2-ReLU x2 | 8-ReLU x2

N-ReLU represents a fully connected layer with N neurons and the ReLU activation function. N-ReLU x2 indicates that two N-ReLU layers are created in parallel, one to encode the mean and the other to encode the variance of the output distribution. The network has been trained for 80000 epochs with the Adam optimizer and a learning rate of 0.00005. The training took approximately 5 hours on a single GPU (Nvidia GTX-1080).
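For concreteness, the following PyTorch sketch transcribes the architecture in the table above; layer sizes and activations (including the "N-ReLU x2" mean/variance heads) are taken literally from the table, while the variable names and the wiring of the forward pass are our own simplifications and not the authors' implementation.

```python
import torch
import torch.nn as nn

MODALITY_DIMS = {'joints': 8, 'vision': 8, 'touch': 2, 'sound': 2, 'motor': 8}
ENC_SIZES = {'joints': (40, 20), 'vision': (40, 20), 'touch': (10, 5),
             'sound': (10, 5), 'motor': (40, 20)}
LATENT_DIM = 28


class MultimodalVAE(nn.Module):
    """Sketch of the multimodal VAE, following the layer sizes in the table above."""

    def __init__(self):
        super().__init__()
        # Modality-specific encoders (two ReLU layers each).
        self.encoders = nn.ModuleDict({
            m: nn.Sequential(nn.Linear(d, ENC_SIZES[m][0]), nn.ReLU(),
                             nn.Linear(ENC_SIZES[m][0], ENC_SIZES[m][1]), nn.ReLU())
            for m, d in MODALITY_DIMS.items()})
        concat_dim = sum(s[1] for s in ENC_SIZES.values())        # 20+20+5+5+20 = 70
        # Shared encoder and latent heads ("28-ReLU x2": mean and variance).
        self.shared_enc = nn.Sequential(nn.Linear(concat_dim, 100), nn.ReLU())
        self.z_mean = nn.Sequential(nn.Linear(100, LATENT_DIM), nn.ReLU())
        self.z_var = nn.Sequential(nn.Linear(100, LATENT_DIM), nn.ReLU())
        # Shared decoder back to the 70-dimensional concatenated code.
        self.shared_dec = nn.Sequential(nn.Linear(LATENT_DIM, 100), nn.ReLU(),
                                        nn.Linear(100, concat_dim), nn.ReLU())
        # Modality-specific decoders and per-modality mean/variance output heads.
        self.decoders = nn.ModuleDict({
            m: nn.Sequential(nn.Linear(ENC_SIZES[m][1], ENC_SIZES[m][0]), nn.ReLU())
            for m in MODALITY_DIMS})
        self.out_mean = nn.ModuleDict({
            m: nn.Sequential(nn.Linear(ENC_SIZES[m][0], d), nn.ReLU())
            for m, d in MODALITY_DIMS.items()})
        self.out_var = nn.ModuleDict({
            m: nn.Sequential(nn.Linear(ENC_SIZES[m][0], d), nn.ReLU())
            for m, d in MODALITY_DIMS.items()})

    def forward(self, inputs):
        # Encode each modality, concatenate, and pass through the shared encoder.
        h = torch.cat([self.encoders[m](inputs[m]) for m in MODALITY_DIMS], dim=-1)
        h = self.shared_enc(h)
        z_mean, z_var = self.z_mean(h), self.z_var(h)
        z = z_mean + torch.sqrt(z_var + 1e-6) * torch.randn_like(z_mean)  # reparameterisation
        # Decode, slice the shared code into per-modality blocks, decode each modality.
        h = self.shared_dec(z)
        out, start = {}, 0
        for m in MODALITY_DIMS:
            block = h[..., start:start + ENC_SIZES[m][1]]
            start += ENC_SIZES[m][1]
            d = self.decoders[m](block)
            out[m] = (self.out_mean[m](d), self.out_var[m](d))
        return out, (z_mean, z_var)
```

In practice, such a model would be trained with the usual variational objective (a reconstruction term plus a KL regulariser on the latent distribution) on batches drawn from the augmented training set described in Appendix A.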

Structure of the compared approaches:

VAE:
  Input layer: all modalities (28 dims)
  Encoder: 100-ReLU, 100-ReLU
  Latent space: 28-ReLU x2
  Decoder: 100-ReLU, 100-ReLU
  Reconstructed data: 28-ReLU x2

Forward Model:
  Input layer: all modalities at t-1 (14 dims)
  Hidden layer: 14-tanh
  Output layer: 10-linear

Inverse Model:
  Input layer: sensory state at t-1 and t (20 dims)
  Hidden layers: 100-tanh, 100-tanh
  Output layer: 4-linear

The implementation of the comparison architecture from (Droniou et al., 2015) is based on the source code provided by the authors and replicates most of its parameters. The only differences are the number of modalities (set to 5), the number of parameters (set to 100), and the number of classes (set to one, as classification is not considered here).

References

  • M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng (2015) TensorFlow: large-scale machine learning on heterogeneous systems. Software available from tensorflow.org. Cited by: footnote 3.
  • P. Abbeel, A. Coates, M. Quigley, and A. Y. Ng (2007) An application of reinforcement learning to aerobatic helicopter flight. Advances in neural information processing systems 19, pp. 1. Cited by: §2.
  • A. Alissandrakis, C. L. Nehaniv, and K. Dautenhahn (2002) Imitation with alice: learning to imitate corresponding actions across dissimilar embodiments. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans 32 (4), pp. 482–496. Cited by: §1, §2.
  • B. D. Argall, S. Chernova, M. Veloso, and B. Browning (2009) A survey of robot learning from demonstration. Robotics and autonomous systems 57 (5), pp. 469–483. Cited by: §2.
  • J. Baraglia, J. L. Copete, Y. Nagai, and M. Asada (2015) Motor experience alters action perception through predictive learning of sensorimotor information. In Joint IEEE International Conference on Development and Learning and Epigenetic Robotics, pp. 63–69. Cited by: §1, §2.
  • A. Baranes and P. Oudeyer (2010) Intrinsically motivated goal exploration for active motor learning in robots: a case study. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1766–1773. Cited by: §5, §6.
  • A. Billard, S. Calinon, R. Dillmann, and S. Schaal (2008) Robot programming by demonstration. In Springer handbook of robotics, pp. 1371–1394. Cited by: §2.
  • R. Calandra, S. Ivaldi, M. P. Deisenroth, E. Rueckert, and J. Peters (2015) Learning inverse dynamics models with contacts. In IEEE International Conference on Robotics and Automation, pp. 3186–3191. Cited by: §2.
  • S. Calinon, F. D’halluin, E. Sauser, D. Caldwell, and A. Billard (2010) A probabilistic approach based on dynamical systems to learn and reproduce gestures by imitation. IEEE Robotics and Automation Magazine 17 (2), pp. 44–54. Cited by: §2.
  • H. J. Chang, T. Fischer, M. Petit, M. Zambelli, and Y. Demiris (2017) Learning kinematic structure correspondences using multi-order similarities. IEEE Transactions on Pattern Analysis and Machine Intelligence. Cited by: §4.1, §4.4.
  • J. L. Copete, Y. Nagai, and M. Asada (2016) Motor development facilitates the prediction of others’ actions through sensorimotor predictive learning. In Joint IEEE International Conference on Development and Learning and Epigenetic Robotics, pp. 223–229. Cited by: §1, §2.
  • A. Cully, J. Clune, D. Tarapore, and J. Mouret (2015) Robots that can adapt like animals. Nature 521 (7553), pp. 503–507. Cited by: §1.
  • M. Deisenroth and C. E. Rasmussen (2011) PILCO: A model-based and data-efficient approach to policy search. In Proceedings of the 28th International Conference on machine learning, pp. 465–472. Cited by: §2.
  • Y. Demiris, L. Aziz-Zadeh, and J. Bonaiuto (2014) Information processing in the mirror neuron system in primates and machines. Neuroinformatics 12 (1), pp. 63–91. Cited by: §1, §3.3.1, §6.
  • Y. Demiris and A. Dearden (2005) From motor babbling to hierarchical learning by imitation: a robot developmental pathway. International Workshop on Epigenetic Robotics: Modeling Cognitive Development in Robotic Systems, pp. 31–37. Cited by: §2.
  • Y. Demiris and B. Khadhouri (2006) Hierarchical attentive multiple models for execution and recognition of actions. Robotics and autonomous systems 54 (5), pp. 361–369. Cited by: §1, §2, §3.3.1.
  • A. Droniou, S. Ivaldi, V. Padois, and O. Sigaud (2012) Autonomous online learning of velocity kinematics on the icub: a comparative study. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3577–3582. Cited by: §2.
  • A. Droniou, S. Ivaldi, and O. Sigaud (2015) Deep unsupervised network for multimodal perception, representation and classification. Robotics and Autonomous Systems 71, pp. 83–98. Cited by: Appendix B, §2, §3.2, Figure 11, item (iii), item (iv), §4.7, Table 4.
  • T. Fischer and Y. Demiris (2016) Markerless Perspective Taking for Humanoid Robots in Unconstrained Environments. In IEEE International Conference on Robotics and Automation, pp. 3309–3316. Cited by: §1, §6.
  • P. Fitzpatrick, A. Arsenio, and E. R. Torres-Jara (2006) Reinforcing robot perception of multi-modal events through repetition and redundancy and repetition and redundancy. Interaction Studies 7 (2), pp. 171–196. Cited by: §2.
  • V. Hafner and F. Kaplan (2005) Interpersonal maps and the body correspondence problem. In Proceedings of the Third International Symposium on Imitation in animals and artifacts, pp. 48–53. Cited by: §1, §2.
  • I. Higgins, L. Matthey, A. Pal, C. Burgess, X. Glorot, M. Botvinick, S. Mohamed, and A. Lerchner (2016) Beta-vae: learning basic visual concepts with a constrained variational framework. In International Conference on Learning Representations, Cited by: §3.1.
  • G. E. Hinton, S. Osindero, and Y. Teh (2006) A Fast Learning Algorithm for Deep Belief Nets. Neural Computation 18 (7), pp. 1527–1554. Cited by: §2.
  • M. Johnson and Y. Demiris (2005) Perceptual perspective taking and action recognition. International Journal of Advanced Robotic Systems 2 (4), pp. 301–308. Cited by: §1, §6.
  • A. Kamel, B. Liu, P. Li, and B. Sheng (2019a) An investigation of 3d human pose estimation for learning tai chi: a human factor perspective. International Journal of Human–Computer Interaction 35 (4-5), pp. 427–439. Cited by: §1, §2.
  • A. Kamel, B. Sheng, P. Li, J. Kim, and D. D. Feng (2019b) Efficient body motion quantification and similarity evaluation using 3-d joints skeleton coordinates. IEEE Transactions on Systems, Man, and Cybernetics: Systems. Cited by: §1, §2.
  • A. Kamel, B. Sheng, P. Yang, P. Li, R. Shen, and D. D. Feng (2018) Deep convolutional neural networks for human action recognition using depth maps and postures. IEEE Transactions on Systems, Man, and Cybernetics: Systems. Cited by: §1, §2.
  • M. Kawato, Y. Uno, M. Isobe, and R. Suzuki (1988) Hierarchical neural network model for voluntary movement with application to robotics. IEEE Control Systems Magazine 8 (2), pp. 8–15. Cited by: §2.
  • M. Kawato (1999) Internal models for motor control and trajectory planning. Current Opinion in Neurobiology 9 (6), pp. 718–727. Cited by: §1, §2.
  • D. P. Kingma and M. Welling (2013) Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114. Cited by: §1, §3.1, §3.1, item (i).
  • S. Kriegman, S. Walker, D. Shah, M. Levin, R. Kramer-Bottiglio, and J. Bongard (2019) Automated shapeshifting for function recovery in damaged robots. In Proceedings of Robotics: Science and System XV (RSS), Cited by: §1.
  • S. Levine, C. Finn, T. Darrell, and P. Abbeel (2016) End-to-end training of deep visuomotor policies. Journal of Machine Learning Research 17 (39), pp. 1–40. Cited by: §2.
  • M. Lopes and J. Santos-Victor (2005) Visual learning by imitation with motor representations. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 35 (3), pp. 438–449. Cited by: §2.
  • M. Lungarella, G. Metta, R. Pfeifer, and G. Sandini (2003) Developmental robotics: a survey. Connection Science 15 (4), pp. 151–190. Cited by: §2.
  • C. Maestre, A. Cully, C. Gonzales, and S. Doncieux (2015) Bootstrapping interactions with objects from raw sensorimotor data: a novelty search based approach. In Joint IEEE International Conference on Development and Learning and Epigenetic Robotics, pp. 7–12. Cited by: §5, §6.
  • W. T. Miller, P. J. Werbos, and R. S. Sutton (1995) Neural networks for control. MIT press. Cited by: §2.
  • C. Nehaniv and K. Dautenhahn (1998) Mapping between dissimilar bodies: affordances and the algebraic foundations of imitation. EWLR-98, pp. 64–72. Cited by: §1, §2.
  • C. L. Nehaniv and K. Dautenhahn (2001) Like me?-measures of correspondence and imitation. Cybernetics & Systems 32 (1-2), pp. 11–51. Cited by: §2.
  • J. Ngiam, A. Khosla, M. Kim, J. Nam, H. Lee, and A. Y. Ng (2011) Multimodal Deep Learning. Proceedings of The 28th International Conference on Machine Learning, pp. 689–696. Cited by: §2.
  • M. J. Pickering and A. Clark (2014) Getting ahead: Forward models and their place in cognitive architecture. In Trends in Cognitive Sciences, Vol. 18, pp. 451–456. Cited by: §3.3.1.
  • S. Poria, E. Cambria, N. Howard, G. Huang, and A. Hussain (2016) Fusing audio, visual and textual clues for sentiment analysis from multimodal content. Neurocomputing 174, pp. 50–59. Cited by: §2.
  • A. Ramisa, F. Yan, F. Moreno-Noguer, and K. Mikolajczyk (2017) Breakingnews: article annotation by image and text processing. IEEE Transactions on pattern analysis and machine intelligence. Cited by: §2.
  • D. J. Rezende, S. Mohamed, and D. Wierstra (2014) Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082. Cited by: §1, §3.1, §3.1.
  • J. Ruesch, M. Lopes, A. Bernardino, J. Hornstein, J. Santos-Victor, and R. Pfeifer (2008) Multimodal saliency-based bottom-up attention a framework for the humanoid robot icub. In IEEE International Conference on Robotics and Automation, pp. 962–967. Cited by: §2.
  • S. Schaal, A. Ijspeert, and A. Billard (2003) Computational approaches to motor learning by imitation. Philosophical Transactions of the Royal Society B: Biological Sciences 358 (1431), pp. 537–547. Cited by: §2.
  • T. Shimizu, R. Saegusa, S. Ikemoto, H. Ishiguro, and G. Metta (2014) Robust sensorimotor representation to physical interaction changes in humanoid motion learning. IEEE transactions on neural networks and learning systems 26 (5), pp. 1035–1047. Cited by: §2.
  • O. Sigaud and A. Droniou (2016) Towards Deep Developmental Learning. IEEE Transactions on Cognitive and Developmental Systems 8 (2), pp. 99–114. Cited by: §2.
  • R. S. Sutton and A. G. Barto (1998) Reinforcement learning: an introduction. Vol. 1, MIT press Cambridge. Cited by: §2.
  • M. Suzuki, K. Nakayama, and Y. Matsuo (2016) Joint multimodal learning with deep generative models. arXiv preprint arXiv:1611.01891. Cited by: §2.
  • P. Vicente, L. Jamone, and A. Bernardino (2016) Online body schema adaptation based on internal mental simulation and multisensory feedback. Frontiers in Robotics and AI 3, pp. 7. Cited by: §2.
  • C. Williams, S. Klanke, S. Vijayakumar, and K. M. Chai (2009) Multi-task gaussian process learning of robot inverse dynamics. In Advances in Neural Information Processing Systems, pp. 265–272. Cited by: §2.
  • D.M. Wolpert and M. Kawato (1998) Multiple paired forward and inverse models for motor control. Neural Networks 11 (7), pp. 1317–1329. Cited by: §1, §2.
  • D. M. Wolpert and J. R. Flanagan (2001) Motor prediction.. Current biology 11 (18), pp. R729–R732. Cited by: §1.
  • M. Wu and N. Goodman (2018) Multimodal generative models for scalable weakly-supervised learning. In Advances in Neural Information Processing Systems, pp. 5580–5590. Cited by: §2, §3.1.
  • M. Zambelli and Y. Demiris (2016) Online Multimodal Ensemble Learning using Self-learned Sensorimotor Representations. Cited by: §2.
  • M. Zambelli and Y. Demiris (2016) Multimodal Imitation Using Self-Learned Sensorimotor Representations. In IEEE/RSJ International Conference on Intelligent Robots and Systems, Cited by: §2.