Sequential Image-based Attention Network for Inferring Force Estimation without Haptic Sensor

11/17/2018 · by Hochul Shin, et al.

Humans can infer the approximate interaction force between objects from only visual information because we have already learned it through experience. Based on this idea, we propose a recurrent convolutional neural network-based method using sequential images for inferring the interaction force without using a haptic sensor. For training and validating deep learning methods, we collected a large number of images and corresponding interaction forces through an electronic motor-based device. To concentrate on the shape changes of a target object under external force in the images, we propose a sequential image-based attention module, which learns a salient model from temporal dynamics. The proposed module consists of a sequential spatial attention module and a sequential channel attention module, which are extended to exploit multiple sequential images. For better accuracy, we also created a weighted average pooling layer for both the spatial and channel attention modules. Extensive experimental results verify that the proposed method successfully infers interaction forces under various conditions, such as different target materials, illumination changes, and external force directions.


1 Introduction

Among the five human senses, touch is an important perceptual modality for understanding the relationship between humans and their surroundings. It offers complementary information for perceiving the surrounding environment. From this viewpoint, touch or tactile sensing has been an attractive topic in the fields of robotics and haptic sensing for many years [9][4][13][15][24]. The main physical property for grasping and interacting with objects is the interaction force. Specifically, when a robotic hand attempts to grasp an object, a contact-type haptic sensor is used to measure the interaction force between the device and object; this improves the grip success rate and enables precise hand manipulations [14]. In the case of a person, the visual information sensed by the eyes is utilized in addition to tactile sensing when grasping. Through visual information, we perceive the shape, appearance, and texture of objects and recall the tactile memory learned through past experiences before touching them. From the viewpoint of neuroscience and psychophysics, Ernst and Banks [6] investigated how information is shared between vision and tactile sensing. Newell et al. [16] showed that the human brain employs shared models of objects across multiple sensory modalities, e.g., vision and tactile sensing, so that knowledge can be transferred from one to another.

Figure 1: Overview of (a) baseline networks: CNN and RNN; and (b) the proposed sequential image-based attention module.

Inspired by the knowledge transfer from vision to tactile sensing [20], we propose a vision sensor-based method that simulates tactile sensing, a different modality, using only visual sensing information. When humans try to touch an object, they can recall the feeling of the object from prior experience before touching it. Specifically, if we know what an object is, and we can observe how the appearance of the object changes under a finger, we can predict the interaction force between the object and finger from prior experience. Another focal point of the proposed method is that, compared with contact-type haptic sensors, a noncontact-type sensing method can measure the haptic force consistently because the camera sensor does not wear out even when used for a long time. Moreover, as an additional touch sensor does not need to be attached to the instrument, the mechanism of the instrument can be miniaturized. In this paper, our computational approach is based on learning haptic information from previous human experiences. The following are the two pivotal rules: (1) to recognize what an object is from only images and (2) to predict what kind of interaction force is exerted on the object using sequential images. For this purpose, we collected more than 300,000 images under various conditions and from different objects, and the corresponding databases were used for training and validating the proposed method.

From the viewpoint of the deep learning architecture for predicting haptic information from images, the basic deep learning architecture is developed using the convolutional neural network (CNN)-based recurrent neural network (RNN) [5], as shown in Fig. 1 (a). Similar to human perception processes, we first used a CNN to analyze target object types and their appearance changes from images, and then analyzed the images over time and used their temporal changes as RNN inputs to eventually estimate the interaction force. In building up the network composition, we expected that the attention mechanism [21][22], which focuses on only the important regions in the image for visual question answering (VQA) [1], would help to improve the accuracy of the force prediction. The main difference between the proposed method and the previous attention networks [21][22] is that we used a temporal dynamics-based attention method that exploits sequential images for predicting the interaction forces.

In this paper, we propose a sequential image-based attention method consisting of a sequential spatial attention module (SSAM) and a sequential channel attention module (SCAM) for gaining better accuracy. By developing the attention module based on the sequential images independently of the RNN, as shown in Fig. 1 (b), the concentrated region can be inferred clearly for predicting the haptic force based on the shape changes of a target object. Moreover, although we used both spatial and channel attention modules as in [22], the proposed attention modules were modified using spatial pixel-wise weighted average pooling (WAP) and channel-wise WAP. Unlike in [22], we trained the SSAM and SCAM independently and finally merged them, because the spatial and channel information are mutually exclusive and are not easily trained under a unified framework for predicting the haptic forces through the sequential images.

The main contributions of this paper are as follows: (1) a computational method is proposed for predicting the haptic information not by using a haptic sensor but a vision camera. (2) We collected a large number of sequential images and their corresponding force information from the automatic mechanism under various conditions. (3) We also propose a deep learning method based on sequential image-based attention modules for predicting force accurately.

2 Related Work

Studies have also been conducted to measure interaction forces without a force sensor. In [2], a stereo camera was used to reconstruct a 3D artificial heart surface, and a supervised learning method was applied to predict the applied force. In [26], a video-based method for estimating the interaction force between a human body and an object was proposed using 3D modeling information. In [17], a single RGB-D camera-based method was used to estimate the contact forces between a human hand and an object; the method makes use of only visual information, given geometrical and physical object properties. In [7], a deep learning-based hand action prediction method was proposed using only visual information; it predicted the force at the fingertips by using the proposed networks. In [12], the authors focused more specifically on how to predict the interaction force from visual changes of the target objects by using an RNN method. Their work is the first to focus on interaction force prediction from only images without any additional sensor. However, their RNN-based method does not have layers deep enough to effectively train on all the variations of the visual changes, such as illumination and pose changes. To overcome this issue, we employed the basic framework of the CNN-based RNN method, in which the CNN first analyzes the salient visual feature variations by using the proposed sequential image-based attention module, and then the RNN works on the serialized features to predict the final interaction force. Our proposed method thus attains more robust accuracy with respect to conventional image variations, such as different objects, various illumination condition changes, and camera pose variations.

3 Proposed Method

In this section, we propose a dynamic attention module designed for modeling the interaction between objects by using multiple images. As shown in Fig. 1 (a), we adopted the CNN-based RNN [5] module as the baseline for analyzing sequential images; here, the CNN first extracts visual features from each frame, and the extracted features are passed to the RNN to predict the interaction forces from complex temporal dynamics. The sequential attention module is used to focus on salient regions and consider temporal dynamic information simultaneously, as illustrated in Fig. 1 (b).

3.1 Baseline: CNN-based RNN method for sequential image description

Visual Feature Extraction

A CNN is an indispensable tool for representing images. In the case of sequential data, each frame is represented by its corresponding CNN feature: each image frame passes through the visual extractor as input, and the CNN generates a fixed-length visual feature vector representation. To confirm the feasibility of our model, we adopted a variant of the VGG model [18] as an encoder, which is a common deep CNN architecture. We extracted feature maps from the last pooling layer. The features of each frame are treated as one chunk for one input step of the RNN. The resulting frame-level vector was fed into our long short-term memory (LSTM) architecture.

LSTM Sequential Model Given the frame-level feature vectors of sequential frames, we used them as the input to an LSTM, which has been proven to achieve great performance in many sequential problems [5][23]. Thus, to extract sequential features, we applied an LSTM comprising self-recurrent units and a memory cell, which can store information from dozens of time steps in the past. We adopted the bidirectional LSTM (BLSTM) [8] derived from the LSTM; it considers all available information from the past and the future. As the BLSTM processes inputs in two directions, i.e., one from past to future and one from future to past, two hidden-state outputs exist. We combined these two outputs at the last time step and then propagated them to a fully connected layer.
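As a rough illustration of this baseline pipeline, the following NumPy sketch traces the data flow: per-frame CNN feature vectors are run through forward and backward recurrent passes, the two last hidden states are concatenated, and a fully connected layer regresses a scalar force. A plain tanh RNN cell stands in for the LSTM here, and all sizes, weights, and names are illustrative toys, not the trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

T, D, H = 20, 256, 64                 # timesteps, CNN feature dim, hidden units (toy sizes)
feats = rng.standard_normal((T, D))   # one CNN feature vector per frame

# A plain tanh RNN cell stands in for the LSTM to keep the sketch short.
def run_rnn(xs, Wx, Wh):
    h = np.zeros(Wh.shape[0])
    for x in xs:
        h = np.tanh(Wx @ x + Wh @ h)
    return h  # last hidden state

Wx_f, Wh_f = rng.standard_normal((H, D)) * 0.1, rng.standard_normal((H, H)) * 0.1
Wx_b, Wh_b = rng.standard_normal((H, D)) * 0.1, rng.standard_normal((H, H)) * 0.1

h_fwd = run_rnn(feats, Wx_f, Wh_f)        # past -> future pass
h_bwd = run_rnn(feats[::-1], Wx_b, Wh_b)  # future -> past pass

# Concatenate the two last-timestep states and regress a scalar force.
h = np.concatenate([h_fwd, h_bwd])        # shape (2H,)
w_fc = rng.standard_normal(2 * H) * 0.1
force = float(w_fc @ h)
print(h.shape, force)
```

The key point is the late combination of the two directional passes before the regression head, as described above.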

3.2 Weighted Average Pooling (WAP)

Figure 2: Weighted average pooling (WAP) for (a) spatial information, where channel information is averaged using a 1×1 convolutional layer, and (b) channel information, where spatial information is averaged after reshaping tensors and applying a 1×1 convolutional layer.

Recent works [10][11] have used global average pooling to calculate the spatial average of the convolutional feature map; this type of pooling helps achieve better accuracy in visual recognition. Specifically, global average pooling efficiently encodes a bunch of convolutional feature maps into a vector of limited size. Therefore, many attention methods [11][22] employ global average pooling to extract the feature vector for predicting the attention regions. However, these methods use equal-weight average pooling for reducing the dimensionality because of its simplicity and efficiency. In this paper, we argue against this simple assumption and propose the WAP method, which can be implemented using a 1×1 convolutional layer for both spatial and channel attention during training. Moreover, in this study, we exploited multiple images to develop the temporal dynamics-based attention mechanism for CNN feature extraction. Compared with a single image-based method, the size of the convolutional feature map of the multiple image-based method generally increases, along with redundant information. Therefore, the proposed WAP encourages the network to emphasize more discriminative information.

As shown in Fig. 2 (a), to average the channel information using different weights, the convolutional feature map is split channel-wise into individual spatial maps. We calculated the weighted average by multiplying each element of a weight vector by the corresponding spatial map; in this respect, we simply implemented it by applying a 1×1 convolution operation. Furthermore, a similar approach can be used for averaging the spatial information with different weights, as shown in Fig. 2 (b). In this case, we reshaped the tensors of the convolutional feature maps to a flat shape and applied a 1×1 convolution to obtain the different weight values of the spatial regions.
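A minimal NumPy sketch of the two WAP variants described above, with toy tensor sizes and variable names of our own choosing: each variant is a single 1×1-convolution-style weighted sum, and with uniform weights both collapse to ordinary global average pooling.

```python
import numpy as np

rng = np.random.default_rng(1)
H, W, C = 4, 4, 8
F = rng.standard_normal((H, W, C))   # a convolutional feature map (toy size)

# (a) Spatial WAP: a 1x1 conv with one output channel, i.e. a learned
# weighted average over channels at each spatial location -> (H, W) map.
w_ch = rng.standard_normal(C)
spatial_map = F @ w_ch               # contracts the channel axis

# (b) Channel WAP: reshape to (H*W, C) and take a learned weighted
# average over spatial positions -> length-C vector.
w_sp = rng.standard_normal(H * W)
channel_vec = F.reshape(H * W, C).T @ w_sp

# With uniform weights, channel WAP reduces to global average pooling.
uniform = np.full(H * W, 1.0 / (H * W))
assert np.allclose(F.reshape(H * W, C).T @ uniform, F.mean(axis=(0, 1)))
print(spatial_map.shape, channel_vec.shape)
```

The learned weight vectors (`w_ch`, `w_sp`) play the role of the 1×1 convolution kernels in Fig. 2.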

3.3 Sequential Spatial Attention Module (SSAM)

Figure 3: Illustration of proposed attention network architecture for (a) sequential spatial attention module and (b) sequential channel attention module.

In general, an interaction occurs between objects in the region that is touched; therefore, the application of a global image feature may lead to a suboptimal result because of irrelevant regions. To solve this problem, a spatial attention mechanism has been proposed in many previous works [1][21][22]. Such a mechanism focuses on the key regions of information in an image by excluding less important regions, leading to performance improvement. However, most of the previous works [1][21][22] are built on the assumption that only a single frame is used. As the purpose of this work is to predict the interaction force between objects in sequential images, the consideration of the dynamic information of each frame is also important. Therefore, instead of extracting an attention map from only a single frame, our attention module exploits multiple adjacent frames to produce an accurate attention map by considering dynamic information. The overall procedure is illustrated in Fig. 3 (a).

We represent the convolutional feature of the t-th frame as f_t, and denote the set of convolutional features of N consecutive frames at time t by S_t. The overall process can be summarized as follows:

f̃_t = M_s(S_t) ⊙ f_t (1)

where ⊙ denotes the element-wise multiplication operation, M_s(S_t) represents the sequential spatial attention map, and f̃_t is the final refined feature map. The attention map is computed from the concatenated features:

P = W_p * [f_{t-N+1}; ...; f_t] (2)

where * denotes the convolution operation and [f_{t-N+1}; ...; f_t] represents the concatenated convolutional features from the (t-N+1)-th image to the t-th image. To squeeze the concatenated feature map by using the proposed WAP, we used a 1×1 convolution kernel W_p, generating the projection tensor P. Each element of P represents a linear combination of all channels at its spatial location. Next, to generate an attention map, the projected map P passes through a convolution layer and the sigmoid function is applied as follows:

M_s(S_t) = σ(W_s * P + b_s) (3)

where σ is the sigmoid function, W_s represents the convolution filter, and b_s is the bias parameter.
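The SSAM steps above can be sketched in NumPy as follows; the toy shapes, the scalar 1×1 stand-in for the second convolution, and all variable names are our own illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(2)
N, H, W, C = 2, 8, 8, 16                    # frames, spatial size, channels (toy)
frames = rng.standard_normal((N, H, W, C))  # conv features of N adjacent frames

# 1) Concatenate the N feature maps along the channel axis: (H, W, N*C).
cat = np.concatenate(list(frames), axis=-1)

# 2) WAP squeeze: a 1x1 conv with one output channel projects the N*C
#    channels down to a single (H, W) projection map.
w_p = rng.standard_normal(N * C) * 0.1
P = cat @ w_p

# 3) Attention map: a conv layer plus sigmoid; a scalar weight and bias
#    stand in for the conv filter here.
w_s, b_s = 0.5, 0.0
M_s = sigmoid(w_s * P + b_s)                # values in (0, 1)

# 4) Refine the current frame's features by element-wise multiplication,
#    broadcasting the (H, W) attention map over channels.
f_t = frames[-1]
f_refined = M_s[..., None] * f_t
print(M_s.shape, f_refined.shape)
```

The attention map stays two-dimensional, so it rescales every channel of the current frame's feature map at each spatial location.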

3.4 Sequential Channel Attention Module (SCAM)

Similar to the SSAM, the proposed SCAM also generates salient features by exploiting the channel information of adjacent frames. As the amount of channel information increases with multiple images, redundant channel information also increases. In this case, as pointed out in [25], non-salient channel information causes the problem of distraction. To overcome this issue, we adopted the self-gating attention module based on channel dependence [11] together with the proposed WAP method. Fig. 3 (b) describes the overall block architecture of the SCAM.

The set of visual features of sequential frames, S_t, is given as input:

f̃_t = M_c(S_t) ⊙ f_t (4)

where M_c(S_t) represents the sequential channel attention map, and f̃_t is the final refined feature map. The squeezed channel descriptor is obtained as

z = W_q * reshape([f_{t-N+1}; ...; f_t]) (5)

That is, to squeeze the concatenated feature map along the channel axis, we used a 1×1 convolution kernel W_q after reshaping, obtaining the squeezed vector z. Each element of z represents a linear combination of all spatial positions in its channel. Next, the output passes through two MLP layers to provide nonlinear dependencies, and then the sigmoid function is applied as follows:

M_c(S_t) = σ(W_2 δ(W_1 z)) (6)

where W_1 and W_2 are the parameter weights of the multilayer perceptron, δ is the nonlinear activation between them, and r is the reduction ratio that sets the hidden size of the MLP.
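Similarly, a toy NumPy sketch of the SCAM path follows; the choice of ReLU between the two MLP layers and the output size of the second layer are assumptions in the spirit of the squeeze-and-excitation design [11], not details confirmed by the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(3)
N, H, W, C = 2, 8, 8, 16
frames = rng.standard_normal((N, H, W, C))

# 1) Concatenate along channels and reshape so spatial positions form
#    one axis: (H*W, N*C).
cat = np.concatenate(list(frames), axis=-1).reshape(H * W, N * C)

# 2) WAP squeeze along the channel axis: a learned weighted average over
#    the H*W spatial positions yields one value per channel.
w_sp = rng.standard_normal(H * W) * 0.1
z = cat.T @ w_sp                     # shape (N*C,)

# 3) Two MLP layers with reduction ratio r, then a sigmoid, give the
#    attention weights for the current frame's C channels (assumed output
#    size; ReLU assumed as the nonlinearity).
r = 4
W1 = rng.standard_normal((N * C // r, N * C)) * 0.1
W2 = rng.standard_normal((C, N * C // r)) * 0.1
M_c = sigmoid(W2 @ relu(W1 @ z))     # shape (C,), values in (0, 1)

# 4) Rescale the current frame's feature map channel-wise.
f_t = frames[-1]
f_refined = f_t * M_c                # broadcast over (H, W)
print(M_c.shape, f_refined.shape)
```

Here the attention vector is one weight per channel, complementary to the per-location map produced by the SSAM.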

3.5 Ensemble Module

Ensemble networks have shown better accuracy in many applications [19][3]. For combining individual attention networks, Woo et al. [22] designed serialized spatial and channel-wise attention modules under a single network. However, in this study, we trained the SSAM and SCAM independently and eventually averaged the two individual results based on the late fusion rule. One of the main reasons for merging by late fusion is that the two proposed attention mechanisms play different roles and focus on different characteristics for inferring the forces. Specifically, the SSAM focuses on specific spatial regions in images, while the SCAM is responsible for evaluating which channels of the convolution layer are important. Learning two attention methods with such different characteristics, SSAM and SCAM, under a single network is not an easy task. Moreover, we used multiple images to learn more temporal dynamics for better performance. The amount of information to be judged by the proposed method is increased compared with that in a single image-based attention method, and learning the SSAM and SCAM individually is a better choice for achieving their individual purposes efficiently.
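The late fusion rule itself is simply an average of the two branch outputs; a small sketch (with made-up per-frame force predictions for illustration):

```python
import numpy as np

# Per-frame force predictions from the two independently trained branches
# (values here are made up for illustration only).
f_ssam = np.array([0.10, 0.42, 0.85, 0.40])
f_scam = np.array([0.12, 0.38, 0.91, 0.36])

# Late fusion: average the two branch outputs into the final prediction.
f_ensemble = 0.5 * (f_ssam + f_scam)
print(f_ensemble)
```

Because fusion happens on the final regressed values, each branch can be trained to its own objective without interference.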

4 Dataset and Implementation

4.1 Experimental Setup and Database

Figure 4: Collected dataset for interaction force estimation. The dataset consists of four objects: sponge, paper cup, tube, and stapler. An external force is applied at four pressing angles, and three illumination changes occur. (a) Sponge sample images according to each light condition and pressing angle variations and (b) example images of other objects such as paper cup, tube, and stapler. All the images and their corresponding forces are collected from (c) the data collection device.

For building a fair experimental training and validation protocol, we built a data-collecting system consisting of a motorized probe, and captured images during the interaction between the probe and object while recording the interaction forces. Specifically, Fig. 4 (c) shows a schematic description of this equipment setting. We used an RC servo motor and cam structure attached to a translation stage for generating translational movement. The end of the tool mounted on the motor moved up and down automatically to apply force to the object. We measured the interaction force between the tool tip and the interaction object through a load cell (model BCL-1L, CAS). We captured the images by using a 149-Hz camera (Chameleon3, CM3-U3-13Y3C-CS, Point Grey) and stored the collected information as 1280×1024×3 (RGB) data. We carefully synchronized the collected images with the interaction forces. During the interaction between the tool tip and object when collecting the dataset, the magnitude of the pressing force and the pressing time were varied randomly to collect data over various force magnitudes and durations.

Materials Training Set Test Set
Sponge 144 set (71,350) 36 set (17,729)
Papercup 144 set (71,481) 36 set (18,070)
Stapler 144 set (72,325) 36 set (18,129)
Tube 144 set (72,253) 36 set (18,076)
Table 1: The training and test protocols. One set consists of four touches with approximately 500 sequential images. The number in parentheses is the total number of sequential images. The total number of images is 359,413.

For inferring the interaction force from images, we selected four objects made of different materials, as shown in Figs. 4 (a) and (b). Each object has a different material and rigidity. In this paper, we collected object images of a sponge, paper cup, tube, and stapler. The visual images and their corresponding interaction forces were collected through synchronization. For variation in the environment around the objects, each object was recorded under four pressing-angle variations and three levels of light intensity (350, 550, and 750 lux), as shown in Figs. 4 (a) and (b). One image set has four contacts with the material, and a total of 15 sets were collected for each environment. To build the training and test protocols, we collected approximately 360,000 sequential images (= 15 sets × 500 images × 4 objects × 3 lights × 4 angles) by using an RGB camera, and the corresponding interaction forces were captured through the load cell. We selected three test sets from each material, and the other sets were used for training the deep learning models. Table 1 summarizes the detailed information about the training and test sets for the four material-based objects.
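The image count above can be sanity-checked directly:

```python
# Approximate dataset size: 15 sets of ~500 images per environment,
# over 4 objects x 3 light levels x 4 pressing angles.
sets, imgs_per_set, objects, lights, angles = 15, 500, 4, 3, 4
approx_total = sets * imgs_per_set * objects * lights * angles
print(approx_total)  # 360000, close to the 359,413 images actually collected
```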

4.2 Implementation Detail

Layer Name Type Size
conv 1/1 3x3 conv 16
conv 1/2 3x3 conv 16
maxpool 1 stride2
conv 2/1 3x3 conv 32
conv 2/2 3x3 conv 32
maxpool 2 stride2
conv 3/1 3x3 conv 64
conv 3/2 3x3 conv 64
maxpool 3 stride2
conv 4/1 3x3 conv 128
conv 4/2 3x3 conv 128
maxpool 4 stride2
conv 5/1 3x3 conv 256
conv 5/2 3x3 conv 256
Table 2: Baseline structure of CNN.

We learned the network weights through mini-batch stochastic gradient descent by using Adam for 120 epochs. The initial learning rate was 1e-4 and was multiplied by 1/10 every 30 epochs. At each iteration, a mini-batch of 64 samples was constructed by sampling 20-frame training sequences from randomly selected objects. Each image then underwent cropping and resizing to a gray-scaled 128×128-pixel image. In the experiment, the variant of the VGG network was used as the baseline to extract visual features. As described in Table 2, the network is composed of 10 layers and outputs 256-channel feature vectors. We also experimented with an 18-layer ResNet [10] to verify that our proposed model works well on other CNNs. For exploiting temporal dynamics, we used the BLSTM network with 256 hidden units and 20 time steps. The concatenated last hidden-unit feature was fed to a 1024-unit fully connected layer. Finally, to predict the 1-dimensional interaction force, we adopted a linear regression model. We trained all models from scratch and measured the performance by using the root mean squared error (RMSE) and mean absolute error (MAE). In this paper, we used MAE as the standard measurement for performance comparisons.

5 Experimental Results and Discussion

5.1 Experimental Results on Proposed Sequential Attention Module

Frame | Model | RMSE | MAE | Ratio
— | Baseline | 0.10313 | 0.04051 | 100%
Single Frame | Spatial | 0.10057 | 0.03700 | 109%
Single Frame | Channel | 0.10007 | 0.03662 | 111%
Single Frame | Ensemble | 0.09738 | 0.03400 | 119%
Multi Frame | Spatial | 0.09734 | 0.03416 | 119%
Multi Frame | Channel | 0.09572 | 0.03320 | 122%
Multi Frame | Ensemble | 0.09356 | 0.03183 | 127%
Table 3: Experimental results using a single frame and multiple frames.

Table 3 experimentally shows that the spatial and channel attention methods improve the performance of the baseline, CNN-based LSTM, by more than 9% for predicting interaction forces from images. The channel attention method (SCAM) is always better than the spatial attention method (SSAM) in this paper, because in the higher layers of a CNN the high-level features reside in the channel maps rather than the spatial maps. Moreover, the proposed ensemble method, which merges the results of the spatial and channel attention methods, leads to a more than 27% improvement over the baseline. Compared with the single frame-based method, e.g., 0.034 MAE, the proposed multi-frame method always shows better results, e.g., 0.032 MAE for the ensemble. This means that the attention map for inferring forces can be effectively generated by exploiting the temporal dynamics of the target object. A quantitative evaluation was conducted to find the optimal multi-frame bounds; Fig. 5 shows that the best performance was achieved by using only the previous frame.

Figure 5: Evaluation results according to the number of frames. (a) Mean Absolute Error, (b) Root Mean Squared Error.
Pooling | Model | RMSE | MAE | Ratio
— | Baseline | 0.10313 | 0.04051 | 100%
Global Average Pooling | Spatial | 0.09731 | 0.03562 | 114%
Global Average Pooling | Channel | 0.09599 | 0.03431 | 118%
Global Average Pooling | Ensemble | 0.09411 | 0.03311 | 122%
Weighted Average Pooling | Spatial | 0.09734 | 0.03416 | 119%
Weighted Average Pooling | Channel | 0.09572 | 0.03320 | 122%
Weighted Average Pooling | Ensemble | 0.09356 | 0.03183 | 127%
Table 4: Comparison of different pooling method.

We empirically verified that our proposed pooling method is effective for squeezing sequential frame information. We compared two methods of averaging the feature maps: our weighted average pooling and global average pooling. From Table 4, we conclude that the proposed method is superior in handling the concatenated sequential information for predicting the forces.

5.2 Experimental Results on Different Network Architectures

CNN Model RMSE MAE Ratio
Baseline 0.10313 0.04051 100%
VGG-like(10 layer) 0.09356 0.03183 127%
Resnet(18 layer) 0.09549 0.03122 130%
Table 5: Comparison of different network architectures.

To validate the generality of our method, we applied our model to ResNet [10], one of the well-known deep learning architectures. Table 5 shows the comparative results between the VGG-like network and ResNet, from which we can see that the proposed method works successfully regardless of the architecture type. For example, the ResNet-based method also achieves a 30% better MAE compared with the baseline.

5.3 Comparative Evaluation with Well-known Methods

Model RMSE MAE Ratio
Baseline 0.10313 0.04051 100%
SE [11] 0.09838 0.03769 107%
CBAM [22] 0.09974 0.03745 108%
Proposed Method 0.09549 0.03122 130%
Table 6: Comparative Evaluation with the networks using state-of-the-art attention modules.

We conducted a comparative analysis with other well-known attention methods. In Table 6, we provide a summary of the comparative evaluation results on inferring interaction forces using our dataset, obtained by our proposed attention module and recent state-of-the-art techniques, including approaches based on the attention mechanism [11][22]. The proposed method shows its superiority over the previous works. Note that previous works such as [11][22] are not designed for building the attention map from sequential images, and this results in their performance degradation.

5.4 Performance analysis according to force intensity changes

Figure 6: Comparison of the MAE of three models (the baseline model, the single-frame based attention model, and our proposed model) over 11 bins according to the magnitude of the force.

To better understand why the proposed method improves the performance over the baseline method, we divided the force magnitude into 11 bins, each of which spans a force interval, as shown in Fig. 6. We used the MAE for each force interval to evaluate how the other methods, i.e., the single-frame based attention method and the proposed method, improved over the baseline method. From Fig. 6, we can confirm once again that the proposed method of generating attention from sequential images improves performance in most force intervals. Especially in the relatively strong force intervals, the proposed method achieves on average 16% better improvements than the single-frame based attention model. Since the shape changes of the target object become large when the external force is strong, the proposed method effectively makes use of the pixel differences between the sequential images to generate the attention maps. In the intervals where the tip of the tool begins to touch the target object, the image differences are also large, which helps to make better attention maps. On the other hand, in the interval where the external force is applied constantly to the object and the shape changes of the target are relatively small, the attention map is not generated precisely, and the performance is slightly worse than that of the baseline method. Even there, however, the proposed method performs better than the single frame-based attention method.

5.5 Performance Analysis on Various Materials

Figure 7: The interaction force results estimated by our proposed method for the various materials: (a) sponge, (b) papercup, (c) tube, (d) stapler.
Figure 8: The comparative prediction results on (a) sponge, (b) papercup, (c) tube, and (d) stapler. The x-axis represents the force (N), and the y-axis is the time axis.

Fig. 7 shows that the proposed method successfully predicted the interaction forces from only images even when the interaction forces were changed randomly. This good performance is observed regardless of which object is used in the experiments. In more detail, Fig. 8 shows how the proposed method outperforms the baseline method when the external force reaches the peak points. The baseline method estimated the peak point of the interaction force well at first, but its predictions are not stable, whereas the predicted results of the proposed method are closer to the ground truth and more stable at the same time. In this respect, we conclude that the temporal dynamics are useful for generating the attention map in the CNN, even though the LSTM also analyzes the temporal information.

MAE sponge papercup tube stapler
Baseline 0.02118 0.02070 0.06689 0.05326
Single 0.01830 0.01607 0.06035 0.04128
Ratio (%) 116% 129% 111% 129%
Proposed 0.01734 0.01555 0.05675 0.03766
Ratio (%) 122% 133% 118% 141%
Table 7: The comparative results of improvement rates for four target objects.
Figure 9: Visualization of the spatial attention map made by the proposed method.

Table 7 describes the performance improvements for the different target objects, and Fig. 9 illustrates the spatial attention maps generated by the proposed method. The sponge is an object of good elasticity. Compared with the other objects, the shape change of the sponge under external force is the most apparent, and this leads to good results. The proposed method shows the best result on the papercup, because its complex surface textures provide rich visual information. For that reason, it has high estimation accuracy compared with the other rigid objects. As shown in the second row of Fig. 9, the network focuses mainly on the top and bottom textures of the papercup, which are the parts changed significantly by the external forces. The tube is composed of plastic rubber. It is relatively softer than the others, and the surface change is not obviously observed when the touch starts. For this reason, the proposed method shows slightly lower improvement on the tube. In the case of the stapler, because it is made of solid materials, the shape change pattern is always constant when the external force is applied. In this respect, the temporal dynamics play a pivotal role in predicting the interaction forces, and we can confirm this through the experimental results in Table 7: the improvements of the single image-based attention method and the proposed method are 129% and 141%, respectively. Compared with the other objects, this 12% gap is unique and significant.

6 Conclusion

For predicting the interaction force from images, we have presented a sequential image-based attention module that learns a salient model from temporal dynamics. We also proposed a weighted average pooling layer for both the spatial and channel attention modules, and the final result is produced by an ensemble of these modules. To verify our method, we collected 359,413 images and the corresponding interaction forces with an electronic motor-based device. Extensive experiments show the effectiveness of our method, which achieves better performance compared with well-known single-frame based methods. We observed that our proposed method encourages the network to concentrate on the interaction region for successfully inferring interaction forces. From this result, we hope our proposed method becomes a good initial study in the field of predicting force using a single vision sensor.

References

  • [1] J. Andreas, M. Rohrbach, T. Darrell, and D. Klein. Neural module networks. IEEE Conference on Computer Vision and Pattern Recognition, 2016.
  • [2] A. Aviles, S. Alsaleh, J. Hahn, and A. Casals. Towards retrieving force feedback in robotic-assisted surgery: A supervised neuro-recurrent-vision approach. IEEE Transactions on Haptics, 10(3):431–443, 2017.
  • [3] W. H. Beluch, T. Genewein, A. Nurnberger, and J. M. Kohler. The power of ensembles for active learning in image classification. IEEE Conference on Computer Vision and Pattern Recognition, 2018.
  • [4] A. Cirillo, F. Ficuciello, L. Sabattini, C. Secchi, and C. Fantuzzi. A conformable force/tactile skin for physical human-robot interaction. IEEE Robotics and Automation Letters, 1:41–48, 2016.
  • [5] J. Donahue, L. A. Hendricks, M. Rohrbach, S. Venugopalan, S. Guadarrama, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. IEEE Trans. Pattern Analysis and Machine Intelligence, 39(4), Apr. 2017.
  • [6] M. O. Ernst and M. S. Banks. Humans integrate visual and haptic information in a statistically optimal fashion. Nature, 415(6870):429, 2002.
  • [7] C. Fermüller, F. Wang, Y. Yang, K. Zampogiannis, Y. Zhang, F. Barranco, and M. Pfeiffer. Prediction of manipulation actions. International Journal of Computer Vision, 126(2–4):358–374, Apr. 2018.
  • [8] A. Graves, A. Mohamed, and G. Hinton. Speech recognition with deep recurrent neural networks. International Conference on Acoustics, Speech, and Signal Processing, 2013.
  • [9] V. Grosu, S. Grosu, B. Vanderborght, D. Lefeber, and C. Rodriguez-Guerrero. Multi-axis force sensor for human–robot interaction sensing in a rehabilitation robotic device. Sensors, 17:1294, 2017.
  • [10] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
  • [11] J. Hu, L. Shen, and G. Sun. Squeeze-and-excitation networks. IEEE Conference on Computer Vision and Pattern Recognition, 2018.
  • [12] W. Hwang and S. Lim. Inferring interaction force from visual information without using physical force sensors. Sensors, 17(11), Oct. 2017.
  • [13] C. T. Landi, F. Ferraguti, L. Sabattini, C. Secchi, and C. Fantuzzi. Admittance control parameter adaptation for physical human-robot interaction. IEEE International Conference on Robotics and Automation, 2017.
  • [14] S. Lim, H. Lee, and J. Park. Role of combined tactile and kinesthetic feedback in minimally invasive surgery. International Journal of Medical Robotics and Computer Assisted Surgery, 11(3):360–374, 2015.
  • [15] Y. Liu, H. Han, T. Liu, J. Yi, Q. Li, and Y. Inoue. A novel tactile sensor with electromagnetic induction and its application on stick-slip interaction detection. Sensors, 16:430, 2016.
  • [16] F. N. Newell, M. O. Ernst, B. S. Tjan, and H. H. Bulthoff. Viewpoint dependence in visual and haptic object recognition. Psychological Science, 12(1):37–42, 2001.
  • [17] T. Pham, A. Kheddar, A. Qammaz, and A. Argyros. Towards force sensing from vision: Observing hand-object interactions to infer manipulation forces. IEEE Conference on Computer Vision and Pattern Recognition, pages 2810–2819, Jun. 2015.
  • [18] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. International Conference on Learning Representations, 2015.
  • [19] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf. DeepFace: Closing the gap to human-level performance in face verification. IEEE Conference on Computer Vision and Pattern Recognition, 2014.
  • [20] W. M. B. Tiest and A. M. Kappers. Physical aspects of softness perception. Luca MD (ed) Multisensory Softness, Springer, pages 3–15, 2014.
  • [21] F. Wang, M. Jiang, C. Qian, S. Yang, C. Li, H. Zhang, X. Wang, and X. Tang. Residual attention network for image classification. IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  • [22] S. Woo, J. Park, J. Lee, and I. Kweon. Cbam: Convolutional block attention module. European Conference on Computer Vision, Sept. 2018.
  • [23] Z. Xu, J. Hu, and W. Deng. Recurrent convolutional neural network for video classification. IEEE International Conference on Multimedia and Expo, Jul. 2016.
  • [24] H. Zhang, R. Wu, C. Li, X. Zang, X. Zhang, H. Jin, and J. Zhao. A force-sensing system on legs for biomimetic hexapod robots interacting with unstructured terrain. Sensors, 17:1514, 2017.
  • [25] X. Zhang, T. Wang, J. Qi, H. Lu, and G. Wang. Progressive attention guided recurrent network for salient object detection. IEEE Conference on Computer Vision and Pattern Recognition, 2018.
  • [26] Y. Zhu, C. Jiang, Y. Zhao, D. Terzopoulos, and S. Zhu. Inferring forces and learning human utilities from videos. IEEE Conference on Computer Vision and Pattern Recognition, pages 3823–3833, Jun. 2016.