Trear: Transformer-based RGB-D Egocentric Action Recognition

01/05/2021 ∙ by Xiangyu Li, et al. ∙ University of Wollongong, Tianjin University, The First Affiliated Hospital of Zhengzhou University

In this paper, we propose a Transformer-based RGB-D egocentric action recognition framework, called Trear. It consists of two modules, an inter-frame attention encoder and a mutual-attentional fusion block. Instead of using optical flow or recurrent units, we adopt the self-attention mechanism to model the temporal structure of the data from the different modalities. Input frames are cropped randomly to mitigate the effect of data redundancy. Features from each modality interact through the proposed fusion block and are combined through a simple yet effective fusion operation to produce a joint RGB-D representation. Empirical experiments on two large egocentric RGB-D datasets, THU-READ and FPHA, and one small dataset, WCVS, show that the proposed method outperforms state-of-the-art results by a large margin.


1 Introduction

With the popularity of wearable equipment (e.g., GoPro cameras and VR helmets), recognizing human activities from egocentric videos has attracted much attention due to its wide research and practical applications, such as robotics and VR/AR. Recently, deep learning has been widely applied to many computer vision tasks with promising results, which has prompted researchers to employ Convolutional Neural Networks (CNNs) or Recurrent Neural Networks (RNNs) in egocentric/third-person action recognition [1, 2, 3, 4, 5, 6]. While promising, most of these methods are based on the single RGB modality and do not take the combination of multiple heterogeneous modalities, e.g., RGB and depth, into consideration. However, each modality has its own characteristics. The depth modality carries rich 3D structural information and is insensitive to illumination changes, but it lacks the vital texture and appearance information, while the RGB modality is the opposite. For conventional third-person action recognition, RGB-D based methods [9, 10, 11] have been widely proposed to exploit the complementary characteristics of both modalities, while for egocentric action recognition there are still few such studies [12]. This paper focuses on RGB-D based egocentric action recognition and explores a novel framework to learn a conjoint representation of both modalities.

Compared to third-person action recognition, egocentric actions are more fine-grained and require classifying both the motion performed by the subject and the objects being manipulated (e.g., close juice bottle, close milk, pour milk, pour wine). Thus, it is essential to encode the spatial-temporal relation information of the action clips. Previous works either utilize optical flow to exploit motion information via a two-stream network [13] or adopt a Convolutional Long Short-Term Memory (ConvLSTM) network for spatio-temporal encoding [1, 2]. However, they either can only model short-term motion or only consider the temporal structure sequentially as the activity progresses. Based on this observation, the recently proposed Transformer [14] inspires us to employ it in RGB-D egocentric action recognition due to its strong capability for sequence modeling in NLP tasks (e.g., language translation), its parallelism in processing the input, and its ability to build long-range dependencies through the self-attention mechanism.

This paper proposes a novel transformer-based egocentric action recognition framework. It consists of two modules, an inter-frame attention encoder and a mutual-attentional fusion block. Data from each modality are first encoded through an attention encoder to build an intra-modality temporal structure, and the resulting features are then incorporated through the fusion block to produce a cross-modal representation. For the inter-frame attention encoder, we adopt a standard transformer encoder consisting of self-attention and feed-forward layers. Mimicking the language translation task, each sampled image (or depth map) in an action video is treated as a word, and its dependencies with the other words are constructed using the self-attention mechanism. Due to the context redundancy among the sampled images, it is inefficient to conduct the attention calculation directly. Thus, we propose to crop regions randomly from each image, where the different regions interact through the encoder to enhance the spatial correlation. Further, a mutual-attentional fusion block is proposed to learn a joint representation for classification. In this block, the self-attention layer is extended to a mutual-attention layer, where features from the different modalities interact. The features, after going through the mutual-attention layer, are fused via a simple operation to produce the cross-modal representation for classification. Our method is extensively evaluated on two large RGB-D egocentric datasets, THU-READ [12] and FPHA [15], and the small WCVS [16] dataset. Experimental results show that the proposed method achieves state-of-the-art results and outperforms existing methods by a large margin.

Our contributions can be summarized as follows:

  • A Transformer encoder is adopted to model the temporal contextual information over the action period for each modality;

  • A mutual-attentional feature fusion block is proposed to learn a conjoint feature representation for classification;

  • The proposed method achieves the state-of-the-art results on three standard RGB-D egocentric datasets.

2 Related Work

Third Person RGB-D Action Recognition  RGB-D based action recognition has attracted much attention, and many works have been reported that exploit the complementary nature of the RGB and depth modalities. Kong et al. [11] propose to project features from the different modalities into a shared space and learn RGB-Depth features for recognition. Wang et al. [10] take scene flow as input and propose a new representation called Scene Flow to Action Map (SFAM) for action recognition. Instead of treating RGB and depth as separate channels, Wang et al. [9] propose to train a single CNN (called c-ConvNet) for RGB-D action recognition, with a ranking loss utilized to enhance the discriminative power of the learned features. Liu et al. [33] propose a multimodal feature fusion strategy to exploit geometric and visual features within a designed spatio-temporal LSTM unit for skeleton-based action recognition. Shahroudy et al. [34] adopt a deep auto-encoder based nonlinear common component analysis network to discover the shared and informative components of the input RGB+D signals. For more methods, readers are referred to the survey paper [17]. The above-mentioned methods are mainly based on third-person datasets, while first-person action recognition has generated renewed interest due to the development of wearable cameras.

First Person Action Recognition  Early methods utilize semantic cues (object detection, hand pose and gaze information) to assist egocentric action recognition. For example, a hierarchical model is presented by [18] to exploit a joint representation of objects, hands and actions. Li et al. [19] design a series of egocentric cues for action recognition comprising hand pose, head movement and gaze direction. The advance of deep learning has led to the development of methods based on CNNs and RNNs. Several methods adopt the two-stream structure [13] as the basic configuration and modify it to fit different purposes. Ma et al. [3] redesign the appearance stream for hand segmentation and object localization. Singh et al. [4] propose a compact two-stream network which uses semantic cues. For temporal encoding, LSTM and ConvLSTM are employed in [1, 2]. However, these methods are all based on the single RGB modality, and there are few works on RGB-D egocentric action recognition. Tang et al. [12] propose a multi-stream network to incorporate features from RGB, depth and optical flow using a Cauchy estimator and an orthogonality constraint. Garcia-Hernando et al. [15] release an RGB-D egocentric dataset with hand pose annotations; however, they do not propose any method based on RGB and depth. This paper focuses on first-person action recognition using the RGB and depth modalities.

Transformer  The Transformer [14], a fully-attentional architecture, has achieved state-of-the-art results over RNN- or LSTM-based methods for sequence modeling problems, e.g., machine translation and language modeling. Apart from NLP tasks, the transformer has also been employed in some computer vision tasks such as image generation [20] and human action localization [21]. Inspired by these works, we use the transformer for intra-modality temporal modeling and cross-modality feature fusion.

Figure 1: Illustration of the proposed framework. It takes four RGB frames and the corresponding depth maps as input, which are processed by the two encoders respectively. Features from the two modalities interact and are incorporated through the mutual-attentional block to produce the cross-modal (joint) representation. The final classification is the average of the per-frame predictions produced from the joint representation.

3 Proposed Method

In this section, we first give an overview of the proposed framework. Then both inter-frame transformer encoder and mutual-attentional feature fusion block will be described in detail.

3.1 Overview

The proposed method is developed for egocentric action recognition from the heterogeneous RGB and depth modalities. As shown in Fig. 1, it contains two parts: two transformer encoders and a mutual-attentional fusion block. The network takes aligned RGB frames and depth maps as input, which are first converted into two sequences of feature embeddings. Both sequences are then fed to the transformer encoders to model the temporal structure of each modality. Features obtained from the encoders interact through the cross-modality block and are then fused to produce the cross-modal representation. The conjoint features are processed through a linear layer to get per-frame classifications, which are averaged over the frames of an action clip as the final recognition result.

3.2 Inter-frame Transformer Encoder

As shown in Fig. 1, two transformer encoders process the RGB and depth data respectively, forming a two-stream structure. Since both streams are composed of the same network configuration (but are not weight-shared), we describe only the RGB stream in detail. Given a sequence of RGB frames sampled from an action clip, we conduct average pooling on the feature maps of each frame to produce a sequence of feature embeddings. In order to encode the position information of each frame in the sequence, we utilize the positional encoding proposed by Vaswani et al. [14], which adopts sine and cosine functions of different frequencies:

PE(pos, 2i) = sin(pos / 10000^(2i / d_model))    (1)
PE(pos, 2i+1) = cos(pos / 10000^(2i / d_model))    (2)

where pos is the position and i is the dimension index. This function is chosen for the hypothesis that the model can easily learn to attend by relative positions, since for any fixed offset k, PE(pos+k) can be represented as a linear function of PE(pos). The positional encodings have the same dimension as the embeddings, so that the two can be summed. The remaining architecture essentially follows the standard Transformer, as shown in Fig. 2.
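As a concrete sketch, Eqs. (1)-(2) can be implemented in a few lines of NumPy (a framework-agnostic illustration; `seq_len` and `d_model` here stand for the number of sampled frames and the embedding dimension):

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encoding of Vaswani et al. (Eqs. 1-2)."""
    pos = np.arange(seq_len)[:, None]          # frame positions, shape (seq_len, 1)
    i = np.arange(d_model // 2)[None, :]       # dimension indices, shape (1, d_model/2)
    angles = pos / np.power(10000.0, 2 * i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)               # even dimensions, Eq. (1)
    pe[:, 1::2] = np.cos(angles)               # odd dimensions, Eq. (2)
    return pe

# The encodings are simply summed with the frame embeddings:
# embeddings = embeddings + positional_encoding(num_frames, d_model)
```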

After obtaining the feature embeddings, multi-head attention is applied to them. Specifically, the features are first mapped to a series of query (Q) and key (K) vectors of dimension d_k, and value (V) vectors of dimension d_v, using different learned linear projections. Then the scaled dot-product of the Q and K vectors is passed through a softmax function to get the attention weights, which produce a weighted sum of V. For each head, the process can be represented by

Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V    (3)

The concatenation of each head's outputs is followed by a group of operations consisting of dropout, a residual connection and LayerNorm (LN) [22]:

X' = LN(X + Dropout(MultiHead(X)))    (4)
Z = LN(X' + Dropout(FFN(X')))    (5)
Figure 2: The architecture of the inter-frame transformer encoder. For simplicity, the number of feature embeddings is set to 2. Multi-head attention is first applied to the feature embeddings. Then the outputs of the heads are concatenated and pass through a series of operations comprising a residual connection, dropout and Layer Normalization (LN). FFN denotes the Feed-Forward Network. Note that the calculation in this process is a matrix operation over the packed embeddings.

where X is the matrix packing the input feature embeddings, X' is the intermediate feature during the process, and Z denotes the features after the transformer encoder. The Feed-Forward Network (FFN) is composed of two convolution layers with kernel size 1. Notice that the whole process is based on matrix calculations.
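To make Eqs. (3)-(5) concrete, here is a minimal single-head sketch of one inter-frame encoder layer in NumPy (dropout and the multi-head split are omitted, and the projection matrices and FFN width are illustrative assumptions rather than the paper's exact configuration):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention, Eq. (3)."""
    d_k = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d_k)) @ V

def layer_norm(x, eps=1e-5):
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

def encoder_layer(X, Wq, Wk, Wv, W1, W2):
    """One inter-frame encoder layer: self-attention over the frame
    embeddings with residual + LN (Eq. 4), then a position-wise FFN
    with residual + LN (Eq. 5)."""
    A = attention(X @ Wq, X @ Wk, X @ Wv)     # each frame attends to all frames
    X1 = layer_norm(X + A)                    # Eq. (4), dropout omitted
    F = np.maximum(X1 @ W1, 0.0) @ W2         # two kernel-size-1 convs = two linear maps
    return layer_norm(X1 + F)                 # Eq. (5)
```

Feeding a (num_frames, d_model) matrix of frame embeddings through `encoder_layer` returns the temporally contextualized features Z of the same shape.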

Instead of using recurrent units, inter-frame dependencies are modeled using the self-attention mechanism. In order to enhance the spatial correlation, we adopt a simple yet effective image cropping operation. Most RGB-based action recognition methods apply data augmentation (resize, crop, flip, etc.) randomly to the input images. As shown in Fig. 3, the RGB images are sampled at a fixed interval from the action clip (4 frames for simplicity). Fig. 3(a) shows the normal cropping manner used in most methods [1, 2], where the yellow boxes indicate the cropped region of the original image; the region is the same for all images. Fig. 3(b) illustrates the random cropping used in our method: for every sampled RGB image, we extract a region at random, and the red boxes show that the cropped regions can be located anywhere in the image. In short, normal cropping only takes a fixed region of the action video into consideration during one training iteration, whereas our random cropping does the opposite. Random cropping has the following advantages: 1) egocentric action clips are usually short and have a small range of motion, resulting in plenty of inter-frame context redundancy; compared to cropping the same region for all images, our cropping manner increases the randomness of the input data. 2) Since the transformer is applied to model inter-frame relationships, repeated region context makes the attention calculation inefficient; different cropped regions effectively raise efficiency and enhance the inter-frame spatial correlation. Results in Table 5 show the effectiveness of this simple data operation.
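The per-frame random cropping described above can be sketched as follows (a NumPy illustration; the (T, H, W, C) tensor layout and the square crop are assumptions made for the example):

```python
import numpy as np

def random_crop_per_frame(frames, size, rng=None):
    """Crop a different random region from each sampled frame,
    as opposed to cropping the same region for the whole clip.

    frames: (T, H, W, C) array of sampled frames; size: crop side length."""
    if rng is None:
        rng = np.random.default_rng()
    T, H, W, _ = frames.shape
    crops = []
    for t in range(T):
        y = rng.integers(0, H - size + 1)     # a fresh location for every frame
        x = rng.integers(0, W - size + 1)
        crops.append(frames[t, y:y + size, x:x + size])
    return np.stack(crops)
```

Conventional cropping would draw `y` and `x` once outside the loop and reuse them for all T frames; moving the draw inside the loop is the only change.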

3.3 Mutual-attentional Feature Fusion

Due to the feature variations across modalities, it is essential to learn a joint representation of the RGB and depth modalities. We propose a cross-modality block that lets features from both modalities interact using a mutual-attention mechanism. As shown in Fig. 4, the proposed module contains two parts: a mutual-attention layer and a feature fusion operation. The intermediate feature embeddings of the two modalities from the inter-frame encoders can be represented as X_r and X_d, where the subscripts r and d denote RGB and depth. The Query (Q), Key (K) and Value (V) matrices are computed following the standard transformer. Mutual attention is then applied to retrieve the information from the context vectors (key K_d and value V_d) of the depth stream related to the query vectors Q_r of the RGB stream, and vice versa. Specifically, it computes the RGB feature attention over the depth modality and the depth feature attention over the RGB modality, producing the corresponding cross-modal features F_r and F_d respectively. The dropout, residual connection and LayerNorm operations are then applied sequentially. After the mutual-attention layer, the features of each frame from both modalities are fused via a simple feature-addition operation and used for per-frame classification. The final classification is the average of the per-frame results. The whole process can be represented as follows:

F_r = Attention(Q_r, K_d, V_d)    (6)
F_d = Attention(Q_d, K_r, V_r)    (7)
(a) Cropping the same region of all frames.
(b) Cropping a random region of each sampled frame.
Figure 3: Different data cropping manners. (a) Yellow boxes indicate that regions are cropped at the same location, as used in other methods. (b) Red boxes indicate that regions are cropped randomly, as adopted in our method.

The mutual-attention mechanism builds interaction between the different modalities, and the features incorporated after such layers benefit from a narrower modality discrepancy than when features extracted from each modality are fused directly. A similar co-attention mechanism has been utilized in some vision-and-language tasks [23], e.g., visual question answering (VQA) and visual commonsense reasoning (VCR). However, the co-attention used in our method differs in two ways: 1) the RGB and depth data are strictly aligned, with RGB frames and depth maps in one-to-one correspondence, whereas for vision-and-language tasks words and visual inputs often suffer from mismatch issues, which affect the attention computation among the inputs; 2) the modality gap between visual features and word embeddings is much more complex, while both RGB and depth are image-level visual features, which makes their interaction through the mutual-attention layer more effective and straightforward.
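A minimal sketch of the mutual-attention exchange and addition fusion of Eqs. (6)-(7), assuming for simplicity that both modalities share one set of projection matrices (the paper does not state whether the projections are shared, and dropout/LayerNorm are omitted):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention."""
    return softmax(Q @ K.T / np.sqrt(Q.shape[-1])) @ V

def mutual_attention_fuse(Xr, Xd, Wq, Wk, Wv):
    """Each modality's queries attend to the other modality's keys/values
    (Eqs. 6-7); the cross-modal features are then fused by addition."""
    Qr, Kr, Vr = Xr @ Wq, Xr @ Wk, Xr @ Wv    # RGB projections
    Qd, Kd, Vd = Xd @ Wq, Xd @ Wk, Xd @ Wv    # depth projections
    Fr = attention(Qr, Kd, Vd)                # Eq. (6): RGB attends to depth context
    Fd = attention(Qd, Kr, Vr)                # Eq. (7): depth attends to RGB context
    return Fr + Fd                            # addition fusion for per-frame classification
```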

Figure 4: Illustration of the proposed mutual-attentional block. It consists of a mutual-attention layer and a feature fusion operation. Feature embeddings from the RGB and depth modalities are fed to the attention layer to exchange information. The features then pass through sequential operations similar to the transformer encoder and are fused together to get the joint representation.
Methods Modality THU-READ (%) WCVS (%)
HOG [24] Depth 45.83 50.61
HOF [25] Depth 43.96 41.25
Depth Stream [26] Depth 34.06 58.47
TSN [27] Depth 65.00 59.32
HOG [24] RGB 39.93 52.14
HOF [25] RGB 46.27 48.50
Appearance Stream [26] RGB 41.90 60.36
TSN [27] RGB 73.85 66.02
TSN [27] RGB + Flow 78.23 67.05
TSN [27] RGB + Flow + Depth 81.67 70.09
MDNN [12] RGB + Flow + Depth + Hand 62.92 67.04
Trear (Ours) Depth 76.04 63.72
Trear (Ours) RGB 80.42 68.27
Trear (Ours) RGB+Depth 84.90 71.49
Table 1: Results obtained by the proposed “Trear” and comparison with the state-of-the-art methods on the THU-READ and WCVS datasets. The results are averaged over the 4 splits and 5 subjects respectively.

4 Experiments

The proposed method is extensively evaluated on three standard RGB-D action recognition benchmark datasets: the large THU-READ [12] and FPHA [15] datasets and the small WCVS [16] dataset. Ablation studies and attention map visualizations are also reported to demonstrate the effectiveness of the proposed method.

4.1 Implementation Detail and Training

The proposed framework consists of two parallel streams corresponding to the RGB and depth modalities, which interact through the fusion block. Both inter-frame attention encoders share the same structure, and a ResNet-34 pre-trained on the ImageNet dataset is adopted as the feature encoder. In order to reduce the computation cost, the number of encoder layers is set to 1. The number of heads for the attention calculation in both the inter-frame block and the mutual-attention block is set to 8. Notice that we found that setting the number of heads for mutual attention to 2 performs slightly better than 8 heads on the THU-READ dataset, and we report the better results in the THU-READ comparison.

The experiments are conducted with the PyTorch framework on a single TitanX GPU. The networks are trained for 50 epochs with a batch size of 4 on all three datasets. The initial learning rate is set to 0.0001 and is decayed by a factor of 0.1 after 30 epochs. The Adam optimizer is used to train all networks. For the input data, we select 32 frames from each action clip, uniformly sampled in time. Images are first resized to 256 and then randomly cropped to a fixed size for training; the center crop is used for testing. The depth data are first normalized to (0-1) and then copied into a 3-channel input so that the depth stream can directly utilize the pre-trained weights of ResNet-34.
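The depth preprocessing above amounts to normalization followed by channel replication; a small sketch (interpreting the (0-1) normalization as per-map min-max scaling, which is an assumption):

```python
import numpy as np

def preprocess_depth(depth):
    """Normalize a raw depth map to [0, 1] and replicate it to 3 channels
    so the ImageNet-pretrained ResNet-34 stem can consume it directly."""
    d = depth.astype(np.float32)
    d = (d - d.min()) / (d.max() - d.min() + 1e-8)  # scale to [0, 1]
    return np.repeat(d[..., None], 3, axis=-1)      # (H, W) -> (H, W, 3)
```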

4.2 Datasets

THU-READ  The THU-READ [12] dataset is currently the largest RGB-D egocentric dataset, consisting of 40 different actions performed by 8 subjects. The RGB and depth data are collected by a Primesense Carmine camera, an RGB-D sensor released by Primesense. It contains 1920 videos, with each subject repeating each action 3 times. We adopt the released leave-one-split-out cross-validation protocol, which divides the 8 subjects into 4 groups and uses 3 splits for training and the rest for testing.

FPHA  The FPHA (First-Person Hand Action) [15] dataset is collected with an Intel RealSense SR300 RGB-D camera mounted on the subject's shoulder. It contains 1175 sequences belonging to 45 action categories performed by 6 subjects. The dataset also has accurate hand pose annotations. It is separated into a 1:1 training/validation split at the video level, with 600 and 575 sequences respectively.

WCVS  The Wearable Computer Vision Systems (WCVS) [16] dataset is captured by an RGB-D camera mounted on a helmet and contains three levels of action recognition. Level 1 consists of two action categories, manipulation and non-manipulation. Level 2 subdivides the two actions into 4 and 6 classes respectively. Although Level 3 contains fine-grained actions, the recording frequency is too low to train a classifier. Following [16, 12], we adopt Level 2 with 4 action classes to evaluate our method. The dataset is performed by 4 subjects in 2 scenarios. The large intra-class variations pose a great challenge to recognition. The cross-subject evaluation metric is adopted in this paper.

4.3 Results and Comparison with the State-of-the-art

Methods Modality Accuracy
Two stream-color [28] RGB 61.56
H+O [29] RGB 82.43
-depth [30] Depth 59.83
HON4D [31] Depth 70.61
2-layer LSTM [15] Pose 80.14
Gram Matrix [32] Pose 85.39
Two stream [13] RGB + Flow 75.30
-depth+pose [30] Depth + Pose 66.78
Trear (Ours) Depth 92.17
Trear (Ours) RGB 94.96
Trear (Ours) RGB+Depth 97.04
Table 2: Results obtained by “Trear” and comparisons with the state-of-the-art methods on the FPHA dataset. Pose represents the hand pose modality.

As shown in Table 1, the compared methods mainly comprise hand-crafted feature-based methods, HOG [24] and HOF [25], and deep-learning-based methods, TSN [27] and MDNN [12]. In the single-modality case, RGB-based methods perform better than depth-based methods because of the vital texture features that the RGB modality carries. Benefiting from the transformer, our method can explicitly model the intra-modality temporal structure and outperforms the others on the THU-READ and WCVS datasets. In the multi-modality case, TSN exploits the optical flow modality to process motion information and treats depth and RGB as separate channels for late score fusion. MDNN employs a multi-stream network and deploys a Cauchy estimator and an orthogonality constraint to assist egocentric action recognition. Our method achieves state-of-the-art results, indicating that the conjoint cross-modal representation learned by the mutual-attention block can effectively exploit the complementary nature of both modalities.

Methods THU-READ FPHA WCVS
ResNet-34 79.60 89.16 64.58
ResNet-34+Encoder 84.58 94.96 68.27
Table 3: Ablation study for the Inter-frame Transformer Encoder on THU-READ (CS4), FPHA and WCVS datasets.

Since FPHA can be adopted as a hand pose estimation benchmark, hand pose annotations are given as a known modality and can be used for action recognition as well. As can be seen from Table 2, methods based on hand pose outperform most of those based on RGB and/or depth. Since egocentric videos mostly contain the hands and the interacted objects, the hand pose feature contributes significantly to the recognition performance. Based on this characteristic, Tekin et al. [29] develop a unified framework that can estimate 3D hand pose, object pose and action category from RGB data. Two-stream [13] utilizes optical flow to exploit short-term motion information, and [15] introduces temporal information via recurrent units (LSTM). Benefiting from the proposed inter-frame encoder, our method can process the input frames in parallel and build the contextual correlation of the action clip without using flow or recurrent units. In short, our method (single modality or RGB-D) outperforms all other methods by a large margin, demonstrating the effectiveness of both the transformer encoder and the mutual-attention block in fine-grained egocentric action recognition.

Methods THU-READ FPHA WCVS
Single Modality
Depth 77.50 92.17 63.72
RGB 84.58 94.96 68.27
RGB+D Feature Fusion
Concatenation 86.25 94.43 70.16
Multiplication 86.67 96.00 69.60
Addition 86.67 94.09 69.59
RGB+D Mutual-attentional Feature Fusion
Concatenation 86.67 95.30 70.23
Multiplication 85.83 97.04 69.58
Addition 88.33 96.34 71.50
Table 4: Ablation study of the proposed mutual-attention fusion block with different fusion manners on the THU-READ (CS4), FPHA and WCVS datasets.

4.4 Ablation Studies

In order to verify the effectiveness of the proposed inter-frame Transformer encoder and fusion block, ablation studies are conducted on the THU-READ, FPHA and WCVS datasets. Since the inter-frame Transformer encoder in our framework is composed of a CNN and a Transformer encoder, we conduct an ablation study for the ResNet-34 backbone and the encoder, as shown in Table 3. The results show that the Transformer encoder can effectively model the temporal structure and improve the performance significantly. As shown in Table 4, the RGB modality contributes more to recognition than the depth modality because of the texture information it provides. Directly fusing features from both modalities even produces slightly worse results than using RGB alone, especially on the FPHA dataset, probably because the modality discrepancy is neglected. The proposed mutual-attentional block can effectively mitigate this issue: features from the different modalities exchange information through the mutual-attention layer to reduce the feature variations, and the cross-modal features are then fused to produce the conjoint feature representation. The results also show that addition fusion performs better than concatenation and multiplication fusion on THU-READ and WCVS, and slightly worse on FPHA. In addition, Table 5 shows the results of different image cropping methods, indicating that the random cropping used in the proposed method improves the performance significantly.

Methods Random Crop Crop same region
THU-READ
Depth 77.50 73.75
RGB 84.85 82.08
RGB+Depth 88.33 86.67
FPHA
Depth 92.17 88.00
RGB 94.96 91.65
RGB+Depth 97.04 95.65
Table 5: Ablation study for the data crop manner on THU-READ (CS4) and FPHA datasets.
(a) Attention map in RGB stream transformer encoder.
(b) Attention map in mutual-attentional layer.
Figure 5: Attention maps in the inter-frame attention and mutual-attention layers. The vertical axis denotes the query vectors and the horizontal axis represents the context vectors. The action is “drink mug” from the FPHA dataset.

4.5 Attention Map Visualization

As shown in Fig. 5, the attention maps in the inter-frame transformer encoder and the mutual-attention layer are visualized respectively. The action is “drink mug” with 32 sampled frames, in which the “drink” process is conducted twice. Fig. 5(a) shows the inter-frame temporal attention weights in the RGB-stream transformer encoder. It can be seen that the encoder accurately captures the two “drinking” moments, and most frames are correlated to these two moments. However, the correlations to the actions “pick up mug” and “put down mug” are weak, mainly because the appearance changes in these actions are small. Fig. 5(b) shows the attention map in the mutual-attention layer. From the figure, we can see that the proposed co-attention mechanism can exploit the complementary characteristics of both modalities and model the complete action process, containing “pick up mug → drink → put down mug” twice.

5 Conclusion

In this paper, we present a novel framework for egocentric RGB-D action recognition. It consists of two modules, an inter-frame transformer encoder and a mutual-attentional cross-modality feature fusion block. The temporal information of each modality is encoded through the self-attention mechanism. Features from the different modalities exchange information via the mutual-attention layer and are fused into the conjoint cross-modal representation. Experimental results on three RGB-D egocentric datasets demonstrate the effectiveness of the proposed method.

Acknowledgment

This work was supported in part by the National Natural Science Foundation of China (Grant numbers: 61906173, 61822701).

References

  • [1] S. Sudhakaran, S. Escalera, and O. Lanz, “Lsta: Long short-term attention for egocentric action recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 9954–9963.
  • [2] S. Sudhakaran and O. Lanz, “Attention is all we need: nailing down object-centric attention for egocentric activity recognition,” arXiv preprint arXiv:1807.11794, 2018.
  • [3] M. Ma, H. Fan, and K. M. Kitani, “Going deeper into first-person activity recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 1894–1903.
  • [4] S. Singh, C. Arora, and C. Jawahar, “First person action recognition using deep learned descriptors,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2620–2628.
  • [5] S. Yan, J. S. Smith, W. Lu, and B. Zhang, “Multibranch attention networks for action recognition in still images,” IEEE Transactions on Cognitive and Developmental Systems, vol. 10, no. 4, pp. 1116–1125, 2018.
  • [6] H. Lee, M. Jung, and J. Tani, “Recognition of visually perceived compositional human actions by multiple spatio-temporal scales recurrent neural networks,” IEEE Transactions on Cognitive and Developmental Systems, vol. 10, no. 4, pp. 1058–1069, 2018.
  • [7] S. A. W. Talha, M. Hammouche, E. Ghorbel, A. Fleury, and S. Ambellouis, “Features and classification schemes for view-invariant and real-time human action recognition,” IEEE Transactions on Cognitive and Developmental Systems, vol. 10, no. 4, pp. 894–902, 2018.
  • [8] D. K. Vishwakarma and K. Singh, “Human activity recognition based on spatial distribution of gradients at sublevels of average energy silhouette images,” IEEE Transactions on Cognitive and Developmental Systems, vol. 9, no. 4, pp. 316–327, 2017.
  • [9] P. Wang, W. Li, J. Wan, P. Ogunbona, and X. Liu, “Cooperative training of deep aggregation networks for rgb-d action recognition,” in Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
  • [10] P. Wang, W. Li, Z. Gao, Y. Zhang, C. Tang, and P. Ogunbona, “Scene flow to action map: A new representation for rgb-d based action recognition with convolutional neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 595–604.
  • [11] Y. Kong and Y. Fu, “Bilinear heterogeneous information machine for rgb-d action recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 1054–1062.
  • [12] Y. Tang, Z. Wang, J. Lu, J. Feng, and J. Zhou, “Multi-stream deep neural networks for rgb-d egocentric action recognition,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 29, no. 10, pp. 3001–3015, 2018.
  • [13] K. Simonyan and A. Zisserman, “Two-stream convolutional networks for action recognition in videos,” in Advances in neural information processing systems, 2014, pp. 568–576.
  • [14] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” in Advances in neural information processing systems, 2017, pp. 5998–6008.
  • [15] G. Garcia-Hernando, S. Yuan, S. Baek, and T.-K. Kim, “First-person hand action benchmark with rgb-d videos and 3d hand pose annotations,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 409–419.
  • [16] M. Moghimi, P. Azagra, L. Montesano, A. C. Murillo, and S. Belongie, “Experiments on an rgb-d wearable vision system for egocentric activity recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2014, pp. 597–603.
  • [17] P. Wang, W. Li, P. Ogunbona, J. Wan, and S. Escalera, “Rgb-d-based human motion recognition with deep learning: A survey,” Computer Vision and Image Understanding, vol. 171, pp. 118–139, 2018.
  • [18] A. Fathi, A. Farhadi, and J. M. Rehg, “Understanding egocentric activities,” in 2011 international conference on computer vision.   IEEE, 2011, pp. 407–414.
  • [19] Y. Li, Z. Ye, and J. M. Rehg, “Delving into egocentric actions,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 287–295.
  • [20] N. Parmar, A. Vaswani, J. Uszkoreit, Ł. Kaiser, N. Shazeer, A. Ku, and D. Tran, “Image transformer,” arXiv preprint arXiv:1802.05751, 2018.
  • [21] R. Girdhar, J. Carreira, C. Doersch, and A. Zisserman, “Video action transformer network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 244–253.
  • [22] J. L. Ba, J. R. Kiros, and G. E. Hinton, “Layer normalization,” arXiv preprint arXiv:1607.06450, 2016.
  • [23] J. Lu, D. Batra, D. Parikh, and S. Lee, “Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks,” in Advances in Neural Information Processing Systems, 2019, pp. 13–23.
  • [24] H. Wang, M. M. Ullah, A. Klaser, I. Laptev, and C. Schmid, “Evaluation of local spatio-temporal features for action recognition,” in BMVC, 2009.
  • [25] I. Laptev, M. Marszalek, C. Schmid, and B. Rozenfeld, “Learning realistic human actions from movies,” in 2008 IEEE Conference on Computer Vision and Pattern Recognition.   IEEE, 2008, pp. 1–8.
  • [26] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
  • [27] L. Wang, Y. Xiong, Z. Wang, Y. Qiao, D. Lin, X. Tang, and L. Van Gool, “Temporal segment networks: Towards good practices for deep action recognition,” in European conference on computer vision.   Springer, 2016, pp. 20–36.
  • [28] C. Feichtenhofer, A. Pinz, and A. Zisserman, “Convolutional two-stream network fusion for video action recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 1933–1941.
  • [29] B. Tekin, F. Bogo, and M. Pollefeys, “H+O: Unified egocentric recognition of 3d hand-object poses and interactions,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 4511–4520.
  • [30] E. Ohn-Bar and M. M. Trivedi, “Hand gesture recognition in real time for automotive interfaces: A multimodal vision-based approach and evaluations,” IEEE transactions on intelligent transportation systems, vol. 15, no. 6, pp. 2368–2377, 2014.
  • [31] O. Oreifej and Z. Liu, “Hon4d: Histogram of oriented 4d normals for activity recognition from depth sequences,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2013, pp. 716–723.
  • [32] X. Zhang, Y. Wang, M. Gou, M. Sznaier, and O. Camps, “Efficient temporal sequence comparison and classification using gram matrix embeddings on a riemannian manifold,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 4498–4507.
  • [33] J. Liu, A. Shahroudy, D. Xu, A. C. Kot, and G. Wang, “Skeleton-based action recognition using spatio-temporal lstm network with trust gates,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, pp. 3007–3021.
  • [34] A. Shahroudy, T.-T. Ng, Y. Gong, and G. Wang, “Deep multimodal feature analysis for action recognition in rgb+d videos,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, pp. 1045–1058.