A Framework for Multisensory Foresight for Embodied Agents

09/15/2021 ∙ by Xiaohui Chen, et al. ∙ Tufts University

Predicting future sensory states is crucial for learning agents such as robots, drones, and autonomous vehicles. In this paper, we couple multiple sensory modalities with exploratory actions and propose a predictive neural network architecture to address this problem. Most existing approaches rely on large, manually annotated datasets, or only use visual data as a single modality. In contrast, the unsupervised method presented here uses multi-modal perceptions for predicting future visual frames. As a result, the proposed model is more comprehensive and can better capture the spatio-temporal dynamics of the environment, leading to more accurate visual frame prediction. The other novelty of our framework is the use of sub-networks dedicated to anticipating future haptic, audio, and tactile signals. The framework was tested and validated with a dataset containing 4 sensory modalities (vision, haptic, audio, and tactile) on a humanoid robot performing 9 behaviors multiple times on a large set of objects. While the visual information is the dominant modality, utilizing the additional non-visual modalities improves the accuracy of predictions.


I Introduction

For humans and many animals, the ability to anticipate the future is a prerequisite for intelligent behavior. For robots, predicting the future values of sensors can assist object manipulation (e.g. planning towards a desired sensory state), anomaly and failure detection (e.g. by comparing predictions to observed values), and sensorimotor learning (e.g. learning how sensors change as a result of the robot’s actions). More generally, if a robot can predict the future values of sensors such as its cameras or haptic sensors, any perceptual routine that is used to process the robot’s current sensory state would also be applicable for predicted sensory states.

Early work in robotics focused on learning visual forward models that anticipate the future trajectories of objects manipulated by the robot as well as movements by the robot itself [26]. More recently, methods have been developed to directly predict the future raw image frames that the robot would observe in its camera stream over the course of object manipulation [8]. One limitation of existing methods is that they mostly operate solely in the visual domain. For many object manipulation tasks, however, other sensory modalities, such as haptic, audio, and tactile, may be just as important. Non-visual sensory modalities can also help in situations where vision alone may be insufficient to resolve an ambiguity (e.g. two objects may look identical but one may be much heavier than the other). Indeed, research conducted in cognitive science [35, 7] and robotics [3, 16] has demonstrated the importance of using multiple (and often non-visual) sensory modalities when learning about object properties and affordances.

Motivated by these findings, we present a deep learning methodology for multisensory foresight which uses feedback from multiple sensory modalities produced over the course of the robot's interaction with objects in its environment. We hypothesize that including more modalities can substantially improve prediction performance. To present and evaluate our proposed methodology, we used a publicly available dataset [32], in which a robot performed 9 different types of exploratory behaviors (e.g., push, press) on 100 objects multiple times. The dataset includes vision, haptic, audio, and vibrotactile sensory modalities. This paper introduces a modular deep neural network architecture that can take advantage of any subset of these modalities for the next-frame prediction task. Furthermore, we extend the model to predict the next frame for modalities other than vision, which leads to further improvements in the robot's prediction performance.

Fig. 1: The architecture of the proposed model, which consists of 4 feature encoders (left) and prediction heads (right) for 4 modalities, and 1 fusion module (middle) for merging representations of different modalities.

II Related Work

Multi-modal perception. A large volume of research has shown that perception can benefit from relating information from multiple sources [20, 10, 16, 37, 22]. To identify the semantics of objects (e.g. empty, soft), visual information alone may not be adequate, as objects could be identical in the visual domain but different in other aspects (e.g. material, internal state, compliance). To address this problem, several lines of research have focused on how robots can use non-visual sensory modalities for tasks that include grasping [5, 38], object recognition [27, 13, 9], object categorization [32, 4, 29], and language grounding [6, 1, 25, 33]. Inspired by these works, we propose an architecture that also uses multiple sensory modalities for the sensorimotor learning task of visual next-frame prediction.

Frame prediction. This research aims to forecast future frames in video sequences. Early studies focused on employing complex networks to directly generate pixel values (e.g. [36]). However, these methods generally produce blurry predictions, as it is hard to model the distribution of image pixels, especially multiple steps into the future. Inspired by language modeling, Ranzato et al. [24] applied a recurrent neural network to anticipate future frames. Srivastava et al. [28] adapted an LSTM model to capture pixel dynamics. Mathieu et al. [19] investigated different loss functions for sharper frame predictions. In another effort, Oh et al. [21] proposed an action-conditional autoencoder network for Atari games. Liang et al. [17] defined a dual motion Generative Adversarial Net (GAN). Recently, a few approaches have addressed the issue of blurriness of predictions multiple steps into the future [34, 15, 2]. Despite their remarkable success, they have their own limitations. For example, [34] uses a hierarchical method which enables it to produce sharper images over a longer horizon; however, its predictions occasionally disappear, which constrains its applicability in safety-critical settings. Two of the most successful models for frame prediction are PredNet [18] and the work introduced in [8]. ConvLSTM units are essential building blocks of these two models. PredNet makes local predictions in each layer of the network and only passes deviations from those predictions to succeeding layers. The model presented in [8] uses a pixel transformation function, convolutional dynamic neural advection (CDNA), to predict motion distributions for the objects in videos. Despite its immense success, this model considers only one modality (vision), alongside state and action, for forecasting future frames. In this paper, the proposed multi-modal network draws on the model architecture from [8] for the vision prediction branch. By integrating several modalities into the network, the proposed model shows a significant improvement in performance compared to the single-modality network.

(a) Visual Feature Network
(b) Visual Prediction Network
Fig. 2: Pipeline of the visual prediction module; (a) shows the architecture of the visual feature extractor, and (b) shows the architecture of the visual prediction network.
(a) Haptic Features
(b) Audio Spectrogram
(c) Vibro Accelerometer (3 Axes)
Fig. 3: Visualization of (a) haptic, (b) audio and (c) vibrotactile modalities when the robot drops a bottle
(a) lift behavior
(b) push behavior
Fig. 4: Sharpness of predicted images when the robot arm performs different behaviors ((a): lift, (b): push).

III Learning Methodology

Next, we describe our framework for multisensory foresight, which uses multiple sensory modalities coupled with exploratory actions performed on objects by the robot.

III-A Notation and Problem Formulation

We use a dataset of N samples, where each sample is defined as a quadruple consisting of 4 kinds of sequential data collected by different sensors: 1) visual data; 2) haptic data; 3) auditory data; and 4) vibrotactile data. The robot's sensors operate at different frequencies; because our primary task is to predict the following visual frames, all other modalities are processed so that they are synchronized to the visual data in terms of time steps. To this end, for each time step t, the modality data is defined as follows:

    v_t ∈ R^{W×H×C},   h_t ∈ R^{T_h×D_h},   a_t ∈ R^{T_a×D_a},   x_t ∈ R^{T_x×D_x},

where W, H, and C are the width, height, and number of channels of each image, respectively; D_h is the number of robot joint-torque sensor readings; D_a is the number of frequency bins in the audio spectrogram; and D_x is the number of accelerometer readings. Moreover, T_h, T_a, and T_x are the numbers of in-frame time steps of the haptic, auditory, and vibrotactile modalities, respectively.
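To make the data layout concrete, the following minimal sketch (in PyTorch, which we use for the implementation) builds one synchronized sample; the tensor shapes are hypothetical placeholders rather than the dataset's actual dimensions.

import torch

# Hypothetical shapes for one interaction sample (placeholders, not the
# dataset's actual dimensions): each visual frame is paired with short
# windows of haptic, audio-spectrogram, and accelerometer readings.
def make_sample(num_frames=20):
    return {
        "vision": torch.zeros(num_frames, 3, 64, 64),  # C x W x H image per time step
        "haptic": torch.zeros(num_frames, 10, 10),     # T_h in-frame steps x D_h joint-torque readings
        "audio":  torch.zeros(num_frames, 8, 128),     # T_a in-frame steps x D_a frequency bins
        "vibro":  torch.zeros(num_frames, 16, 3),      # T_x in-frame steps x D_x accelerometer axes
        "behavior": torch.tensor(0),                   # categorical behavior id (e.g., push)
    }

sample = make_sample()
print({k: tuple(v.shape) for k, v in sample.items()})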

The goal of the framework is to predict the future visual frames given a number of context frames along with the other modalities. We also add a categorical feature b indicating the type of behavior performed by the robot. While the main task is predicting the subsequent visual frames v_{t+1}, we introduce auxiliary task learning, which also predicts the next frames for the haptic, audio, and vibrotactile modalities, denoted h_{t+1}, a_{t+1}, and x_{t+1}, respectively. The auxiliary tasks are expected to help find, through backpropagation, a stronger representation of how the modalities relate to one another, from which the main task might benefit. To this end, we define a highly abstracted autoregressive model f:

    (v̂_{t+1}, ĥ_{t+1}, â_{t+1}, x̂_{t+1}) = f(v_{1:t}, h_{1:t}, a_{1:t}, x_{1:t}, b),        (1)

where h_{1:t}, a_{1:t}, and x_{1:t} are the additional modality sequences prior to time step t+1. The model first learns how to extract high-level representations of each modality individually, then learns the interaction and combination of the 4 modality representations, and finally outputs the next-frame prediction for each modality using the multi-head network. Next, we discuss the details of the model f.
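A minimal sketch of the autoregressive rollout implied by Eq. (1) is shown below. It assumes a per-step interface model(v_t, h_t, a_t, x_t, b) -> (v̂, ĥ, â, x̂); ground-truth inputs are used for the context steps and the model's own predictions are fed back afterwards. This is one plausible reading of the rollout, not necessarily the authors' exact procedure.

import torch.nn as nn

# Sketch of an autoregressive rollout for the abstract model f in Eq. (1).
# Assumed (hypothetical) per-step signature:
#   model(v_t, h_t, a_t, x_t, behavior) -> (v_hat, h_hat, a_hat, x_hat)
def rollout(model: nn.Module, sample: dict, num_context: int = 4, horizon: int = 16):
    v, h, a, x = (sample[k][0] for k in ("vision", "haptic", "audio", "vibro"))
    b = sample["behavior"]
    predictions = []
    for t in range(num_context + horizon):
        if t < num_context:
            # Teacher-forced context steps use ground-truth inputs.
            v, h, a, x = (sample[k][t] for k in ("vision", "haptic", "audio", "vibro"))
        v, h, a, x = model(v, h, a, x, b)
        if t >= num_context:
            predictions.append((v, h, a, x))  # predictions for step t + 1
    return predictions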

III-B Model Architecture

The proposed network architecture, shown in Figure 1, consists of 3 sub-modules: feature encoders, fusion module, and multi-modality prediction network.

Feature Encoders

Previous methods for next-frame prediction relied mainly on the visual modality, whereas in our approach the inputs to the network are sequences of the different modalities v, h, a, and x. To efficiently integrate the different modality features, all modalities are mapped into feature maps with different numbers of channels via their corresponding feature encoders. The feature encoder networks are composed of convolution, downsampling, and ConvLSTM modules with concatenation and tile operations.

For the visual modality, we employ stacked ConvLSTMs (Figure 2(a)) to extract high-level vision features as well as spatio-temporal features. For the haptic modality, we spatially tile the concatenated joint signals and robot gripper pose across the feature map and feed it into the haptic-specific feature extraction network. For the audio and vibrotactile modalities, we first use the Fast Fourier Transform (FFT) to compute a spectrogram, then employ convolutional layers and a ConvLSTM layer to extract features.
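The sketch below illustrates two of the non-visual encoder steps described above, namely spatial tiling of the haptic vector and FFT-based spectrogram computation for the audio signal. A plain Conv2d stack stands in for the ConvLSTM stage (which is not a built-in PyTorch module), and all shapes and channel counts are assumptions.

import torch
import torch.nn as nn

def tile_haptic(haptic_step: torch.Tensor, height: int, width: int) -> torch.Tensor:
    # Spatially tile a flat haptic/gripper-pose vector across a feature map.
    flat = haptic_step.reshape(haptic_step.shape[0], -1)         # (batch, D)
    return flat[:, :, None, None].expand(-1, -1, height, width)  # (batch, D, H, W)

def audio_spectrogram(waveform: torch.Tensor, n_fft: int = 256) -> torch.Tensor:
    # Magnitude spectrogram via the FFT, shaped as a 1-channel "image".
    spec = torch.stft(waveform, n_fft=n_fft, hop_length=n_fft // 2,
                      return_complex=True).abs()
    return spec.unsqueeze(1)                                     # (batch, 1, freq, time)

# Convolutional stand-in for the audio feature encoder (ConvLSTM omitted).
audio_encoder = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
)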

Fusion Module

The fusion module contains one convolutional layer and one ConvLSTM layer with a concatenation operation, as illustrated in Figure 1. To merge the modality features, it first integrates the lowest-dimensional activation maps given by each feature encoder into one latent feature map by concatenating them along the channel axis, and then feeds the result through the defined layers sequentially. The number of channels in the output feature map is compressed to match that of the visual input feature map (several channel sizes are considered in our work). The output feature map contains information extracted from all used modalities and is further used to predict each modality in the next frame. Note that the number of chosen modalities can vary from 1 to 4, and the fusion module automatically adapts to the modality setting and outputs the integrated feature map with a fixed number of channels.
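A minimal sketch of this fusion step, with a Conv2d standing in for the ConvLSTM layer and placeholder channel sizes:

import torch
import torch.nn as nn

class FusionModule(nn.Module):
    """Concatenate per-modality feature maps along channels and compress back
    to a fixed channel count (channel sizes here are placeholders)."""

    def __init__(self, in_channels: int, out_channels: int = 32):
        super().__init__()
        self.compress = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.ReLU(),
            # ConvLSTM stand-in; the paper uses a ConvLSTM layer here.
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
        )

    def forward(self, feature_maps):
        # feature_maps: list of (batch, C_i, H, W) tensors, one per chosen modality (1 to 4).
        fused = torch.cat(feature_maps, dim=1)
        return self.compress(fused)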

Multi-Modal Prediction Head

The core of the model is learning the internal relations across the different modalities, which consequently increases the performance of the main task (visual next-frame prediction). This is achieved by adding the auxiliary tasks. For each modality, there is a head that takes the fused feature map from the fusion module as input, integrates all the information, and outputs that modality's next frame.

For the auxiliary task prediction heads, we directly reconstruct the next frame: transposed convolutional layers are employed in each decoder, and the fused map is upsampled back to the dimensions of the original input. For the visual prediction head (Figure 2(b)), we use the idea of pixel transformation proposed in [8, 12] and perform two tasks instead of reconstructing the image directly. The first task is learning the pixel transformation parameters for each grouped object; the second is an instance segmentation task that aims to group pixels by object. The visual prediction head therefore has two branches. In the object motion capture branch, a motion prediction module called convolutional dynamic neural advection (CDNA) is used [8]. The CDNA function computes new pixel values by applying multiple normalized convolution kernels to previous frames. CDNA is an object-centric motion prediction module; as noted in [8], the intuition behind it is that pixels from the same rigid entity move together. This module is expressed in the following equation:

    Ĵ_t(p, q) = Σ_{k=−κ..κ} Σ_{l=−κ..κ} m̂(k, l) · v_t(p − k, q − l),        (2)

where κ is the size of the convolution kernel, and the set of transformed versions of the previous image, {Ĵ_t^{(i)}}, is obtained by applying one predicted kernel m̂^{(i)} per transformation. In the instance segmentation branch, skip connections are used to include the intermediate feature maps obtained from the visual encoder in the middle of the prediction head, directly concatenating them to restore the details learned in the low-level feature maps. This branch is responsible for applying masks to the different objects. Finally, to obtain a single output image v̂_{t+1}, the composition of the predicted images is modulated by a mask:

    v̂_{t+1} = Σ_i Ĵ_t^{(i)} ∘ M^{(i)},        (3)

where M^{(i)} represents the i-th channel of the mask, and ∘ is the Hadamard product. The total loss function contains 4 components, each of which corresponds to the cost function for one modality. The cost function for each modality is weighted as described below; L is the total loss:

    L = L_v + λ_h L_h + λ_a L_a + λ_x L_x,        (4)

where the coefficient hyper-parameters λ_h, λ_a, and λ_x are selected via grid search. We used the mean squared error (MSE) as the cost function for each modality.
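The sketch below puts Eqs. (2)-(4) together: applying a set of normalized CDNA kernels to the previous frame, compositing the transformed images with predicted masks, and summing weighted per-modality MSE losses. The networks that predict the kernels, masks, and non-visual frames are omitted, and their outputs are assumed as inputs here.

import torch
import torch.nn.functional as F

def cdna_transform(prev_frame, kernels):
    # prev_frame: (B, C, H, W); kernels: (B, N, k, k), softmax-normalized, k assumed odd (Eq. 2).
    B, C, H, W = prev_frame.shape
    N, k = kernels.shape[1], kernels.shape[-1]
    transformed = []
    for i in range(N):  # one transformed image per kernel
        kern = kernels[:, i].unsqueeze(1)  # (B, 1, k, k)
        out = torch.stack([
            F.conv2d(prev_frame[b:b + 1], kern[b:b + 1].expand(C, 1, k, k),
                     padding=k // 2, groups=C)
            for b in range(B)
        ]).squeeze(1)
        transformed.append(out)
    return torch.stack(transformed, dim=1)  # (B, N, C, H, W)

def composite(transformed, masks):
    # masks: (B, N, 1, H, W), normalized over N; Hadamard product + sum (Eq. 3).
    return (transformed * masks).sum(dim=1)

def total_loss(pred, target, lambdas):
    # Weighted sum of per-modality MSE losses (Eq. 4); weights come from grid search.
    return sum(lambdas[m] * F.mse_loss(pred[m], target[m]) for m in pred)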

IV Experimental Results

We compare the proposed framework with the vision-only model proposed in [8] both quantitatively and qualitatively. To better investigate the robustness of the model, we consider two experimental settings, discussed in Sections IV-A and IV-B. Furthermore, we discuss the effect of employing auxiliary training in Section IV-C.

Implementation Details. We use PyTorch [23] for the GPU-based implementation (code: https://github.com/tufts-ai-robotics-group/mmvp/tree/main), set the number of context frames to 4, and evaluate the model performance on the following 16 predicted frames. For a few behaviors (grasp and tap) with fewer frames in the dataset, only the following 6 frames are predicted. We employed the ADAM optimizer [14] to train the network for 30 epochs with a batch size of 32. For evaluation, we use the Structural Similarity Index (SSIM) metric to measure visual prediction quality; alternative metrics, such as Maximum Mean Discrepancy (MMD) [11], could be considered. We performed 5-fold cross-validation such that during each test, data from 80 objects was used for training and data from the remaining 20 objects was used for testing.
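For reference, this evaluation protocol can be sketched as object-level 5-fold splits plus average SSIM over the predicted horizon; scikit-image's structural_similarity is used here (version 0.19+ assumed for the channel_axis argument), and predicted/ground-truth arrays are H x W x C images scaled to [0, 1].

import numpy as np
from skimage.metrics import structural_similarity as ssim  # scikit-image >= 0.19 assumed

def average_ssim(predicted, ground_truth):
    # Mean SSIM over a sequence of predicted frames (H x W x C arrays in [0, 1]).
    scores = [ssim(p, g, data_range=1.0, channel_axis=-1)
              for p, g in zip(predicted, ground_truth)]
    return float(np.mean(scores))

def object_folds(object_ids, num_folds=5, seed=0):
    # Object-level cross-validation: each fold's objects are held out for testing.
    ids = np.array(sorted(set(object_ids)))
    rng = np.random.default_rng(seed)
    rng.shuffle(ids)
    return np.array_split(ids, num_folds)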

(a) Ablation study on sensory inputs
(b) Ablation study on behavior input
Fig. 5: Quantitative results evaluated with the SSIM metric; ablation studies in the all-behavior setting.
avg. SSIM haptic audio vibrotactile behavior
0.771
0.773
0.767
0.769
0.756
0.770
0.776
0.773
0.798
TABLE I: Investigation of contribution of each modality to the improvement of model prediction

Dataset. The dataset described in [30] is used to evaluate and compare the proposed network with the single-modal network. To collect the dataset, an upper-torso humanoid robot with a 7-DOF arm manipulated 100 objects by executing 9 different exploratory behaviors (push, poke, press, shake, lift, drop, grasp, tap, and hold) multiple times while recording visual, haptic, auditory, and vibrotactile sensory data. A visualization of the different sensory modalities when the robot drops a bottle is provided in Figure 3. Figure 3(a) illustrates the torques of the robot's 7 joints and 3 end-effector positions over time. Figure 3(b) shows the spectrogram of the auditory data; we use the Fast Fourier Transform to convert the raw signal into a representation in the frequency domain. Figure 3(c) shows the 3-axis accelerometer readings.

Fig. 6: Investigating the performance of different combinations of modalities per individual behavior
                            vision (SSIM)       haptic    audio    vibro
                            aux      no aux     (MSE)     (MSE)    (MSE)
vision                      0.756    -          -         -        -
vision+haptic               0.785    0.764      0.282     -        -
vision+haptic+audio         0.796    0.791      0.246     0.042    -
vision+haptic+audio+vibro   0.798    0.795      0.244     0.041    0.739
TABLE II: Modality losses with and without auxiliary training; "aux" refers to auxiliary training and "no aux" to no auxiliary training.

IV-A Training the Network with All Behaviors

The first experiment evaluates the framework in the all-behavior setting. Unlike the model in [8], which only uses one behavior (push), in the presented work we trained the model on data spanning all 9 exploratory behaviors and evaluated it on objects not seen during training. In this setting, we first show an illustrative example that qualitatively compares the multi-modal and vision-only models with ground truth. Then we quantitatively evaluate the model performance with respect to the number of sensory modalities used. Furthermore, we study the model's performance when the behavior type (e.g. grasp vs. push) is added as a categorical feature to the network. Note that, unless explicitly indicated otherwise, the behavior category feature is used as an input in these experiments.

Illustrative Example. Figure 4 shows the qualitative reconstruction performance of the proposed method and the vision-only model [8] compared to ground truth when the robot arm uses different behaviors (push, lift) to interact with objects. We observe that frames predicted by the multi-modal model are much less blurry. Furthermore, this figure demonstrates that the proposed method better captures the motion and localizes the object's appearance with more precision, especially multiple steps into the future (e.g. see the location of the robot arm and the object for the push behavior, frame No. 16).

Quantitative Reconstruction Performance. Figure 5(a) illustrates the performance of the network when integrated with different combinations of modalities compared to the vision-only method [8]. The results show that equipping the network with multi-modal perception substantially increases the quality of the predicted frames. Note that the gap between the vision-only model and any multi-modal combination widens for frames further into the future. Meanwhile, as expected, the quality of prediction in all models decreases over time as errors accumulate. To avoid overfitting, we train the model with different numbers of channels in each layer and explore the effect of the model's size on performance. The baseline model explored in [8] contains 12.5M parameters, to which we add the other modality sub-networks, reaching 13.6M parameters. The number of parameters associated with the additional modalities is much smaller for two reasons: first, the dimensions of the other modalities are smaller than those of vision; second, a deeper network is used for the visual branch. In another set of experiments, we investigate the effect of adding the behavior type as an input feature to the model. Figure 5(b) contrasts the model when it is trained with and without behavior, and shows that the model performs better when the behavior is added as an input feature.

We demonstrate the contribution of each modality to the improvement of the model prediction via an ablation study. Table I shows the average SSIM over all time steps. The highest performance is obtained by integrating all modalities into the model. We also observe that in our dataset, the haptic, audio, and behavior-category inputs make comparable contributions, while adding the vibrotactile modality does not necessarily benefit performance in this case; it sometimes adds noise to the model, which degrades performance.

IV-B Training Behavior-Specific Models

We also investigate the performance of the model when it is trained and evaluated on a single behavior. In this section, we ran the experiments for each behavior individually, yielding 9 models per combination of modalities. We evaluate the performance of each model separately as well as the average performance over all 9 behaviors. Furthermore, we compare the average performance to the model trained in Section IV-A under the same combination setting. Finally, we explore how the behavior-specific models perform differently from one another and investigate how they benefit from the additional modalities. Figure 6 compares vision versus vision+haptic, vision+haptic+audio, and vision+haptic+audio+vibrotactile for individual behaviors in terms of SSIM.

By comparing different combinations of modalities within each behavior, we observe that for 6 out of 9 behaviors the model benefits from the other modalities, especially haptic. By contrasting the same modality setting across different behaviors, we notice that some behaviors (lift, grasp, hold, and drop) pose an easier next-frame prediction challenge than others. We also observe that for tasks with discrete events (e.g. drop), the audio and tactile modalities are very helpful for predicting future frames; for contact behaviors, however, the haptic modality is significantly more helpful than audio and tactile feedback. Furthermore, by aggregating the 9 separate models, we evaluate the average performance of the behavior-specific approach (the 'averaged' column in Figure 6). This average is higher than the performance of the model trained simultaneously on all behaviors as described in Section IV-A, shown in the rightmost column.

IV-C Predicting Future Frames of Auxiliary Modalities

Another novelty of the proposed framework is predicting future frames of modalities other than vision. Predicting other modalities can sometimes be useful (e.g. comparing the difference between the predicted and observed audio to identify abnormal events as they happen). In this subsection, we investigate the performance of these auxiliary tasks and whether learning them helps improve visual next-frame prediction. We evaluate visual prediction in two settings, with and without auxiliary training, and assess the performance of the next-frame prediction model for the non-visual modalities under the auxiliary training setting.

Table II shows that auxiliary training of the haptic modality enhances vision prediction, while auxiliary training of the audio and vibrotactile modalities does not necessarily improve visual next-frame prediction. Furthermore, the table shows that the audio modality contributes to the prediction of the haptic modality, while the vibrotactile modality appears to have little influence on predicting the haptic and audio modalities.

V Conclusion and Future Work

In this work, we developed a predictive framework incorporating multiple sensory modalities to help solve the next-frame prediction problem. Our experiments show that utilizing the network architecture with additional haptic, auditory, and tactile inputs achieves the best results compared to a state-of-the-art vision-only baseline. Furthermore, in this paper, we proposed the use of auxiliary tasks (predicting future haptic, audio, and vibrotactile signals) and showed that learning such tasks also improves visual next-frame prediction.

One limitation of our framework is that it is trained on only one robot. Since different robots have different morphologies and sensor suites, the learned knowledge cannot be directly used by another robot. An interesting avenue for future work is to extend transfer learning methodologies (e.g., [31, 30]) so as to enable a robot to bootstrap its sensorimotor learning process with knowledge learned by another robot. Another viable direction for future work is to integrate the multisensory next-frame prediction methodology described here with reinforcement learning methods for object manipulation tasks.

References

  • [1] S. Amiri, S. Wei, S. Zhang, J. Sinapov, J. Thomason, and P. Stone (2018) Multi-modal predicate identification using dynamically learned robot controllers. In Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI-18), Cited by: §II.
  • [2] M. Babaeizadeh, C. Finn, D. Erhan, R. H. Campbell, and S. Levine (2017) Stochastic variational video prediction. arXiv preprint arXiv:1710.11252. Cited by: §II.
  • [3] J. Bohg, K. Hausman, B. Sankaran, O. Brock, D. Kragic, S. Schaal, and G. S. Sukhatme (2017) Interactive perception: leveraging action in perception and perception in action. IEEE Transactions on Robotics 33 (6), pp. 1273–1291. Cited by: §I.
  • [4] R. Braud, A. Giagkos, P. Shaw, M. Lee, and Q. Shen (2020) Robot multi-modal object perception and recognition: synthetic maturation of sensorimotor learning in embodied systems. IEEE Transactions on Cognitive and Developmental Systems. Cited by: §II.
  • [5] S. Chitta, J. Sturm, M. Piccoli, and W. Burgard (2011) Tactile sensing for mobile manipulation. IEEE Transactions on Robotics 27 (3), pp. 558–568. Cited by: §II.
  • [6] V. Chu, I. McMahon, L. Riano, C. G. McDonald, Q. He, J. M. Perez-Tejada, M. Arrigo, T. Darrell, and K. J. Kuchenbecker (2015) Robotic learning of haptic adjectives through physical interaction. Robotics and Autonomous Systems 63, pp. 279–292. Cited by: §II.
  • [7] M. O. Ernst and H. H. Bülthoff (2004) Merging the senses into a robust percept. Trends in cognitive sciences 8 (4), pp. 162–169. Cited by: §I.
  • [8] C. Finn, I. Goodfellow, and S. Levine (2016) Unsupervised learning for physical interaction through video prediction. In Advances in neural information processing systems, pp. 64–72. Cited by: §I, §II, §III-B, §IV-A, §IV-A, §IV-A, §IV.
  • [9] D. Gandhi, A. Gupta, and L. Pinto (2020-07) Swoosh! Rattle! Thump! - Actions that Sound. In Proceedings of Robotics: Science and Systems, Corvalis, Oregon, USA. Cited by: §II.
  • [10] Y. Gao, L. A. Hendricks, K. J. Kuchenbecker, and T. Darrell (2016) Deep learning for tactile understanding from visual and haptic data. In 2016 IEEE International Conference on Robotics and Automation (ICRA), Vol. , pp. 536–543. External Links: Document Cited by: §II.
  • [11] A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and A. Smola (2012) A kernel two-sample test. The Journal of Machine Learning Research 13 (1), pp. 723–773. Cited by: §IV.
  • [12] M. Jaderberg, K. Simonyan, A. Zisserman, et al. (2015) Spatial transformer networks. In Advances in neural information processing systems, pp. 2017–2025. Cited by: §III-B.
  • [13] S. Jin, H. Liu, B. Wang, and F. Sun (2019) Open-enviroment robotic acoustic perception for object recognition. Frontiers in Neurorobotics 13, pp. 96. Cited by: §II.
  • [14] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §IV.
  • [15] A. X. Lee, R. Zhang, F. Ebert, P. Abbeel, C. Finn, and S. Levine (2018) Stochastic adversarial video prediction. arXiv preprint arXiv:1804.01523. Cited by: §II.
  • [16] Q. Li, O. Kroemer, Z. Su, F. F. Veiga, M. Kaboli, and H. J. Ritter (2020) A review of tactile information: perception and action through touch. IEEE Transactions on Robotics. Cited by: §I, §II.
  • [17] X. Liang, L. Lee, W. Dai, and E. P. Xing (2017) Dual motion gan for future-flow embedded video prediction. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1744–1752. Cited by: §II.
  • [18] W. Lotter, G. Kreiman, and D. Cox (2016) Deep predictive coding networks for video prediction and unsupervised learning. arXiv preprint arXiv:1605.08104. Cited by: §II.
  • [19] M. Mathieu, C. Couprie, and Y. LeCun (2015) Deep multi-scale video prediction beyond mean square error. arXiv preprint arXiv:1511.05440. Cited by: §II.
  • [20] J. Ngiam, A. Khosla, M. Kim, J. Nam, H. Lee, and A. Y. Ng (2011) Multimodal deep learning. In ICML, Cited by: §II.
  • [21] J. Oh, X. Guo, H. Lee, R. L. Lewis, and S. Singh (2015) Action-conditional video prediction using deep networks in atari games. In Advances in neural information processing systems, pp. 2863–2871. Cited by: §II.
  • [22] F. Pastor, J. García-González, J. M. Gandarias, D. Medina, P. Closas, A. J. García-Cerezo, and J. M. Gómez-de-Gabriel (2020) Bayesian and neural inference on lstm-based object recognition from tactile and kinesthetic information. IEEE Robotics and Automation Letters 6 (1), pp. 231–238. Cited by: §II.
  • [23] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer (2017) Automatic differentiation in pytorch. Cited by: §IV.
  • [24] M. Ranzato, A. Szlam, J. Bruna, M. Mathieu, R. Collobert, and S. Chopra (2014) Video (language) modeling: a baseline for generative models of natural videos. arXiv preprint arXiv:1412.6604. Cited by: §II.
  • [25] B. Richardson and K. Kuchenbecker (2019) Improving haptic adjective recognition with unsupervised feature learning. In IEEE International Conference on Robotics and Automation (ICRA), Cited by: §II.
  • [26] M. V. B. O. Sigaud and G. P. G. Baldassarre (2007) Anticipatory behavior in adaptive learning systems. Springer. Cited by: §I.
  • [27] J. Sinapov, T. Bergquist, C. Schenck, U. Ohiri, S. Griffith, and A. Stoytchev (2011) Interactive object recognition using proprioceptive and auditory feedback. The International Journal of Robotics Research 30 (10), pp. 1250–1262. Cited by: §II.
  • [28] N. Srivastava, E. Mansimov, and R. Salakhudinov (2015) Unsupervised learning of video representations using lstms. In International conference on machine learning, pp. 843–852. Cited by: §II.
  • [29] G. Tatiya, R. Hosseini, M. C. Hughes, and J. Sinapov (2019) Sensorimotor cross-behavior knowledge transfer for grounded category recognition. In 2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob), pp. 1–6. Cited by: §II.
  • [30] G. Tatiya, R. Hosseini, M. C. Hughes, and J. Sinapov (2020) A framework for sensorimotor cross-perception and cross-behavior knowledge transfer for object categorization. Frontiers in Robotics and AI 7, pp. 137. Cited by: §IV, §V.
  • [31] G. Tatiya, Y. Shukla, M. Edegware, and J. Sinapov (2020) Haptic knowledge transfer between heterogeneous robots using kernel manifold alignment. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Cited by: §V.
  • [32] G. Tatiya and J. Sinapov (2019) Deep multi-sensory object category recognition using interactive behavioral exploration. In 2019 International Conference on Robotics and Automation (ICRA), pp. 7872–7878. Cited by: §I, §II.
  • [33] J. Thomason, A. Padmakumar, J. Sinapov, N. Walker, Y. Jiang, H. Yedidsion, J. Hart, P. Stone, and R. Mooney (2020) Jointly improving parsing and perception for natural language commands through human-robot dialog. Journal of Artificial Intelligence Research 67, pp. 327–374. Cited by: §II.
  • [34] N. Wichers, R. Villegas, D. Erhan, and H. Lee (2018) Hierarchical long-term video prediction without supervision. arXiv preprint arXiv:1806.04768. Cited by: §II.
  • [35] T. Wilcox, R. Woods, C. Chapa, and S. McCurry (2007) Multisensory exploration and object individuation in infancy.. Developmental Psychology 43 (2), pp. 479. Cited by: §I.
  • [36] J. Yuen and A. Torralba (2010) A data-driven approach for event prediction. In European Conference on Computer Vision, pp. 707–720. Cited by: §II.
  • [37] K. Zhang, M. Sharma, M. Veloso, and O. Kroemer (2019) Leveraging multimodal haptic sensory data for robust cutting. In 2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids), pp. 409–416. Cited by: §II.
  • [38] Y. Zhang, W. Yuan, Z. Kan, and M. Y. Wang (2019) Towards learning to detect and predict contact events on vision-based tactile sensors. arXiv preprint arXiv:1910.03973. Cited by: §II.