Earlier methods for HAR were statistical methods in which hand-crafted features were used for action recognition. Commonly used statistical methods involve the calculation of statistical features such as the mean and variance in the time domain [6]. The first disadvantage of statistical methods is the requirement of domain knowledge about the data. Another disadvantage is the separation of the feature extraction stage from the classification stage. The recent success of deep models in fields such as image classification and speech recognition, and their capacity to learn complex features directly from raw data, has shifted the focus of research towards deep learning.
Advances in technology, especially the release of the Microsoft Kinect and smartphones, and the availability of fast processors have brought new dynamics to HAR. HAR datasets and methods can be divided into vision-based and Inertial Measurement Unit (IMU)-based approaches. Both have advantages and limitations. Although the depth maps and images of the vision-based approach provide reasonable accuracy for HAR, occlusion, viewpoint variation, illumination, scaling, and noise reduce the performance of this approach. IMU-based HAR is achieved by attaching a few inertial sensors to different parts of an individual’s body. Although not limited by the problems of vision-based HAR, the IMU-based approach becomes cumbersome (requiring more sensors) and inaccurate when the underlying actions are complex. It has recently been shown that fusing the two modalities and employing statistical machine learning techniques alleviates these shortcomings, which in turn provides improved performance in action recognition .
In this paper, we further improve visual-inertial HAR through a novel feature transformation and a deep learning architecture. We convert depth data into Sequential Front view Images (SFI) and inertial data into signal images, and through the use of AlexNet and two classifiers, namely an SVM and a softmax classifier, we achieve state-of-the-art results in terms of recognition accuracy. The key contributions of the presented work are:
Motivated by the fact that Convolutional Neural Networks (CNNs) are typically designed for image classification tasks, we convert both the depth and inertial datasets into sequential front view images and signal images, respectively. Conversion to image data enables abstraction through the CNN into different feature types (e.g. edges, curves, and higher-level abstractions) that are not possible with 1D temporal data . It also enables a generic architecture that is applicable to both image and non-image data without major changes.
The combination of deep models with statistical models has been successful in many applications of machine learning and pattern recognition. Inspired by the results in  and , we combine a Convolutional Neural Network with statistical models (SVM and softmax classifier). We train deep models for learning features from the datasets and use statistical models for the recognition task. To our knowledge, this is the first work that provides a comprehensive evaluation of a deep learning-based method on benchmark datasets for visual-inertial HAR.
The recognition accuracies of the individual modalities, depth and inertial data, are also calculated and compared against the multimodal accuracies. Experimental results on two datasets, namely UTD-MHAD and Kinect 2D, demonstrate the significance of the proposed method. We achieve average accuracies of 98.7% and 99.8% on these datasets, respectively, beating the state-of-the-art in visual-inertial HAR.
II Related Work
Human Action Recognition from depth data and inertial sensor data has been studied actively since advances in technology have led to the availability of inexpensive depth and inertial sensors. Recent studies focus on fusing the different modalities at the early, feature, or decision level for better recognition results. Example modalities that are typically combined are depth and inertial sensor data, RGB data and depth data, and motion history images (MHI) and depth motion maps (DMM).
Compared to other modalities, the fusion of depth and inertial sensor data has gained significant attention due to the cost effectiveness and availability of these sensors . The authors in  merge inertial and depth data to train a hidden Markov model for improving the accuracy and robustness of hand gesture recognition. A real-time fusion system for human action recognition is developed in  by decision-level fusion of depth and inertial sensor data. An accurate and robust upper-limb tracking system is developed in  by unscented-Kalman-filter-based fusion of inertial and depth sensor data. Fusion approaches show a noticeable improvement in recognition accuracy. A refined embedded system for fall detection based on a KNN classifier is designed in  by integrating depth and accelerometer data. In , features extracted by two convolutional neural networks from image and sensor data are fused together to train a Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM) units for classification.
Other related work where inertial sensor data and depth data are used individually for action recognition includes the work in , where features extracted from the final convolutional layer and the first fully connected layer of a CNN are combined to train an LSTM network for human action recognition. In , the performance of state-of-the-art deep learning models, convolutional neural networks and recurrent neural networks, for human activity recognition using inertial sensors is rigorously explored by performing thousands of recognition experiments with randomly sampled model configurations to investigate the suitability of each model for HAR. The authors in  convert the time series data obtained from inertial sensors into raw stacked activity images that serve as input to a CNN for human action recognition. In , a CNN with 2D kernels and a CNN with 1D kernels are used for human activity recognition on an inertial sensor dataset, and their performance is compared on the basis of kernel size.
In , a depth video sequence is projected onto three orthogonal Cartesian planes to generate depth motion maps corresponding to the front, top, and side views, and these DMMs are then used as features to train a collaborative representation classifier. The work in  involves the generation of Histograms of Oriented Gradients (HOG) from depth motion maps to build DMM-HOG descriptors for human action recognition. A comprehensive analysis of multimodal fusion of user-generated multimedia content is presented in .
In previous vision-inertial fusion methods, features are extracted directly from the raw data without transformation. In the proposed method, we transform depth and inertial data into sequential front view images and signal images, respectively. Transforming the data into images empowers the CNN to extract features such as edges, corners, and patches that would be impossible to obtain from the raw data.
III Proposed Method
In our proposed method, we convert the depth data into Sequential Front view Images (SFI) and the inertial data into signal images. We fine-tune the pre-trained AlexNet (a CNN-based model) on the SFIs to leverage the full potential of transfer learning, and train another CNN, whose architecture is shown in Figure 1, on the signal images. Finally, learned features are extracted from the fully connected layers of both CNNs, fused to form a shared layer of features, and fed as training/testing data to a supervised classifier. We experiment with two classifiers: support vector machines (SVM) and the softmax classifier. An overview of the proposed method is shown in Figure 2. The benefits of transfer learning and the formation of SFIs and signal images are explained in detail below.
III-A Transfer Learning
The concept of transfer learning is derived from multi-task learning , where the purpose is to transfer knowledge already acquired in a particular domain to a different but related domain. Transfer learning is very important in those computer vision, data mining, and machine learning applications where the datasets are small but can be made compatible with transfer learning models. AlexNet is a CNN model that has been popular for transfer learning. Transfer learning with a pre-trained AlexNet is carried out by fine-tuning the model on the new dataset, which mainly involves adjusting the last fully connected layers according to the number of classification categories in the input dataset.
In the proposed method, the classifiers used are Support Vector Machines and the softmax classifier. These classifiers are explained in detail below.
III-B1 Support Vector Machine
Support Vector Machines (SVM) are popular supervised classifiers for data classification, face recognition, feature and text categorization, and regression. In its simplest form, the SVM score function maps the input vector to class scores through a simple matrix operation, as shown in Equation 1:

s = Wx + b,     (1)

where x is the input vector, W is the weight matrix whose size is determined by the input dimension and the number of classes, and b is the bias vector. When used for multimodal fusion, the input to the classifier is either the concatenated feature vector from the individual modalities or the scores generated by each classifier. In  and , score-level fusion based on SVM is performed for human authentication by combining the features obtained from face and iris verifiers.
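As a concrete illustration, the linear score computation s = Wx + b can be sketched in a few lines of NumPy; the dimensions below (a fused feature vector and 27 UTD-MHAD action classes) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_classes = 8192, 27                  # e.g. fused CNN features, 27 actions

x = rng.standard_normal(n_features)               # input feature vector
W = rng.standard_normal((n_classes, n_features))  # weight matrix (learned in training)
b = rng.standard_normal(n_classes)                # bias vector

scores = W @ x + b                                # one score per class (Equation 1)
predicted_class = int(np.argmax(scores))          # class with the highest score
```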
III-B2 Softmax Classifier
The softmax classifier is a multiclass classifier, or regressor, used in machine learning, data mining, mathematics, statistics, and allied fields . Its score function computes class-specific probabilities that sum to 1. The mathematical representation of the score function for the softmax classifier is shown below:

p_i = e^(x_i) / Σ_j e^(x_j),

where x is the input vector and the score function maps the exponent domain to probabilities.
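The softmax mapping can be sketched in NumPy as follows; the max-subtraction is a standard numerical-stability detail not discussed in the text:

```python
import numpy as np

def softmax(scores):
    """Map a vector of class scores to probabilities that sum to 1.
    Subtracting the max before exponentiating avoids overflow."""
    shifted = scores - np.max(scores)
    exp = np.exp(shifted)
    return exp / exp.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))  # highest score -> highest probability
```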
When applied to multimodal fusion, the input to the classifier comes from the shared layer, which is formed by combining the features from the various modalities.
III-C Formation of Sequential Front View Images
In , the front, side, and top views of depth motion maps are generated from depth sequences. From these views, we observe that using all three views of the DMMs is not necessary, since we have supplemental information from the inertial dataset. Thus we convert the depth sequences into front view images, called Sequential Front view Images (SFI), as shown in Figure 4. By using only SFIs, we reduce the computational cost of the experiment. These images are similar to the motion energy images and motion history images introduced in . SFIs provide cumulative information about an action from start to completion.
The SFIs are converted to 3-channel images and resized to 227x227 using bicubic interpolation to be compatible with AlexNet.
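A minimal preprocessing sketch using Pillow, assuming each SFI arrives as a single-channel 8-bit array; the grayscale image is replicated across three channels to match AlexNet's expected 227x227x3 input:

```python
import numpy as np
from PIL import Image

def to_alexnet_input(sfi):
    """Resize a single-channel SFI to 227x227 with bicubic interpolation
    and replicate it across three channels for AlexNet."""
    img = Image.fromarray(sfi.astype(np.uint8))
    img = img.resize((227, 227), Image.BICUBIC)
    one_channel = np.asarray(img)
    return np.stack([one_channel] * 3, axis=-1)   # shape (227, 227, 3)

rgb = to_alexnet_input(np.random.randint(0, 256, (240, 320)))
```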
III-C2 Fine Tuning
To apply AlexNet to our classification problem, a few modifications are required. We reduce the size of the fully connected layer ’fc8’ from 1000 to the number of classes in our datasets and then replace the output classification layer with a new classification layer suited to our datasets.
III-D Formation of Signal Images
The data obtained from inertial sensors is in the form of multivariate time series. The wearable devices in our datasets are tri-axis accelerometers and gyroscopes. Hence we have six signal sequences: three accelerometer and three angular-velocity sequences. These sequences are converted into 2D virtual images called signal images, as shown in Figure 5. Given the six signal sequences, a signal image is obtained through row-by-row stacking of the raw sequences in such a way that each sequence becomes adjacent to every other sequence, based on the algorithm in . The signal images thus take advantage of the temporal correlation among the signals. The row-by-row stacking of the six sequences has the following order.
Here the numbers 1 to 6 represent the sequence numbers of the raw signals. From the above ordering, every sequence neighbors every other sequence in the signal image. Thus we have 25 rows of sequences in our signal image. By removing the last row of the signal image, the final width of our signal image becomes 24.
The majority of human actions fall in a low frequency band; therefore, the length of the signal image can be chosen equal to the sampling rate of the dataset, which is 50 Hz for our datasets. Since the shortest sequence in our datasets has only 107 samples, in order to incorporate all the samples into a signal image and to facilitate the design of the CNN of Figure 1, the length of the signal image is finalized as 52. Thus the final size of our signal image is 24x52, as shown in Figure 6.
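The signal-image construction can be sketched as follows. Since the exact 25-row stacking order is not reproduced here, the sketch derives one valid order in which every pair of the six sequences becomes adjacent at least once; this derived order is an assumption, not the paper's exact ordering:

```python
import numpy as np
from itertools import combinations

def stacking_order(n=6, rows=24):
    """Greedy walk over the six signal indices so that every pair of
    sequences becomes vertically adjacent at least once, then pad with
    consecutive indices up to `rows` rows."""
    remaining = set(combinations(range(n), 2))    # 15 unordered pairs to cover
    order = [0]
    while remaining:
        cur = order[-1]
        # prefer a neighbor covering a not-yet-adjacent pair; else step on
        nxt = next((b for b in range(n)
                    if b != cur and tuple(sorted((cur, b))) in remaining),
                   (cur + 1) % n)
        remaining.discard(tuple(sorted((cur, nxt))))
        order.append(nxt)
    while len(order) < rows:
        order.append((order[-1] + 1) % n)
    return order[:rows]

def signal_image(signals, length=52):
    """signals: (6, T) accelerometer/gyroscope sequences -> (24, 52) image."""
    sig = np.asarray(signals, dtype=float)[:, :length]
    if sig.shape[1] < length:                     # zero-pad actions shorter than 52
        sig = np.pad(sig, ((0, 0), (0, length - sig.shape[1])))
    return sig[stacking_order(len(signals))]      # stack rows in the derived order

img = signal_image(np.random.randn(6, 107))
```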
III-D1 CNN Architecture for Signal Images
The CNN proposed for performing the action recognition task on signal images consists of two convolutional layers, two pooling layers, a fully connected layer, and a softmax layer, as shown in Figure 1. The first convolutional layer has 50 kernels of size 5x5 and is followed by a pooling layer of size 2x2 with stride 2. The output of the first pooling layer is the input of the second convolutional layer, which has 100 kernels and is followed by a 2x2 pooling layer with stride 2. The last two layers are the fully connected layer and the softmax layer.
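The feature-map sizes through this architecture can be traced with simple arithmetic; the 5x5 kernel size for the second convolutional layer is an assumption, since the text only states the kernel count:

```python
def conv_shape(h, w, k):
    """Valid convolution with a k x k kernel (no padding, stride 1)."""
    return h - k + 1, w - k + 1

def pool_shape(h, w, p=2, s=2):
    """Pooling with a p x p window and stride s."""
    return (h - p) // s + 1, (w - p) // s + 1

h, w = 24, 52                # signal image size
h, w = conv_shape(h, w, 5)   # conv1: 50 kernels of 5x5      -> 20 x 48
h, w = pool_shape(h, w)      # pool1: 2x2, stride 2          -> 10 x 24
h, w = conv_shape(h, w, 5)   # conv2: 100 kernels (5x5 assumed) -> 6 x 20
h, w = pool_shape(h, w)      # pool2: 2x2, stride 2          -> 3 x 10
flat = 100 * h * w           # features entering the fully connected layer
```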
In the last part of the proposed method, we extract the learned features from the last fully connected layers of both CNNs and integrate them to form a single feature layer, which serves as input to our classification network for the action recognition task, as shown in Figure 2.
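Forming the shared feature layer is a per-sample concatenation; the feature widths below (4096-d AlexNet features and a 500-d signal-image CNN vector) are hypothetical placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-ins for features extracted from the fully connected layers:
# one row per sample, one modality per matrix.
depth_features = rng.standard_normal((100, 4096))     # from fine-tuned AlexNet
inertial_features = rng.standard_normal((100, 500))   # from the signal-image CNN

# Shared layer: concatenate the two feature blocks sample-wise.
shared = np.concatenate([depth_features, inertial_features], axis=1)
```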
IV Experiments and Results
We experiment on the publicly available UTD Multimodal Human Action Dataset (UTD-MHAD)  and the Kinect 2D dataset  by separately training and testing three different input data types: SFIs alone, signal images alone, and their combination. We use the subject-specific setting for experiments on both datasets, as this setting generates better recognition results due to smaller intraclass variations . We conduct our experiments in Matlab R2018a on a desktop computer with an NVIDIA GTX-1070 GPU.
Iv-a UTD-MHAD Dataset
The UTD-MHAD dataset contains both depth and inertial data components and consists of 27 different actions, as shown in Figure 3. The first part of the experiment is to fine-tune AlexNet on the SFIs obtained from the depth sequences of the dataset. The number of SFIs obtained from the depth data is sufficient for transfer learning. We select 46636 samples for retraining the AlexNet model and 11660 samples for testing, and retrain AlexNet for 50 epochs. We repeat the experiment 20 times, each time randomly selecting the same proportions of training and testing samples (46636 for training and 11660 for testing), and report the average accuracy. The values of the other training parameters are shown in Table I.
Parameter | Value
Initial Learn Rate | 0.005
Learn Rate Drop Factor | 0.5
Learn Rate Drop Period | 10
We reached these values using grid search. For example, we evaluated AlexNet for initial learning rate values of [0.01, 0.05, 0.001, 0.005], momentum values of [0.7, 0.8, 0.9], and so on.
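The grid search described above can be sketched as an exhaustive loop over the candidate values; `train_and_eval` is a hypothetical stand-in for one fine-tuning run returning validation accuracy:

```python
from itertools import product

# Candidate values taken from the text.
grid = {
    "initial_lr": [0.01, 0.05, 0.001, 0.005],
    "momentum": [0.7, 0.8, 0.9],
}

def train_and_eval(initial_lr, momentum):
    # Stand-in scoring function; a real run would fine-tune AlexNet here
    # and return the validation accuracy for this parameter combination.
    return -abs(initial_lr - 0.005) - abs(momentum - 0.9)

# Evaluate every combination and keep the best-scoring one.
best = max(
    (dict(zip(grid, values)) for values in product(*grid.values())),
    key=lambda params: train_and_eval(**params),
)
```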
The second part of the experiments is to train the CNN on the signal images obtained from the inertial data. The inertial sensor component of the UTD-MHAD dataset is very challenging for training a CNN. The first deficiency is that the inertial sensor was worn either on the volunteer’s right wrist or right thigh, depending upon the nature of the action. Hence the sensor is worn in only two positions while collecting data for 27 actions, which is not enough to capture all the dependencies and characteristics of the data. The other challenge is that the number of data samples is very small for training and testing a CNN, compared to the inertial dataset in , where sensors were worn at five different places and there is a large number of samples. The number of signal images obtained from the inertial sensor data is only 1722. To overcome these problems, we perform data augmentation on the signal images to increase the number of samples.
IV-A1 Data Augmentation
Signal images are 2D virtual images formed by row-by-row stacking of signals that preserves the correlation among the rows. Hence only those data augmentation techniques are valid that do not alter the correlation among the rows of the signal image. The following data augmentation techniques are used:
We flip signal images left to right (across the vertical axis) and upside down (across the horizontal axis). Flipping an image preserves the correlation and does not alter the size of the signal image.
Rotating an image by a particular angle is another important data augmentation technique. A negative angle represents a clockwise rotation and a positive angle a counterclockwise rotation. Care should be taken when selecting the angle so that the dependencies among the signal samples do not change. We rotate the signal images by 180 degrees to obtain more image samples.
Adding Gaussian Noise
To further increase the number of training samples, we add Gaussian noise of zero mean and variance 0.009.
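The three augmentation operations can be sketched in NumPy; all of them preserve the 24x52 image size and the adjacency of rows (flips and the 180-degree rotation only reverse their order):

```python
import numpy as np

def augment(img, noise_var=0.009, seed=0):
    """Return the flipped, rotated, and noisy variants of one signal image.
    The noise variance matches the 0.009 used in the text."""
    rng = np.random.default_rng(seed)
    noisy = img + rng.normal(0.0, np.sqrt(noise_var), img.shape)
    return [
        np.fliplr(img),   # left-right flip (across the vertical axis)
        np.flipud(img),   # upside-down flip (across the horizontal axis)
        np.rot90(img, 2),  # 180-degree rotation
        noisy,            # additive zero-mean Gaussian noise
    ]

variants = augment(np.random.randn(24, 52))
```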
After augmentation, we obtain 13776 signal-image samples. We select 11021 for training and 2755 for testing, and train the CNN of Figure 1 on the signal images for 100 epochs. We repeat the experiment 20 times, each time randomly selecting the same proportions of training and testing samples (11021 for training and 2755 for testing), and report the average accuracy. The values of the remaining parameters for training the CNN are shown in Table II.
Parameter | Value
Initial Learn Rate | 0.001
Learn Rate Drop Factor | 0.5
Learn Rate Drop Period | 10
In the final part of the experiment, learned features are extracted from the fully connected layers of both CNNs, combined, and input to the classification network. We work with two classification networks, a multiclass SVM and a softmax classifier, achieving average accuracies of 98.7% and 98.2%, respectively.
The results obtained by the proposed method and their comparison to previous methods on the UTD-MHAD dataset are shown in Table III. The proposed fusion-based method with SVM beats the current state-of-the-art  by 0.2%, which demonstrates the value of the proposed method. Although the improvement is marginal, at such a high level of accuracy even a small boost is significant. We also show the class-specific accuracies in Figure 7 when the modalities are used independently and fused. As expected, fusion results in better accuracy for all classes. Comparing SVM with softmax, we see that SVM performs better. The softmax classifier minimizes a cross-entropy loss, while the SVM employs a margin-based loss: a multiclass SVM classifies data by locating hyperplanes that maximize the margin among the data points of the various classes. This more rigorous classification boundary is likely the reason why SVM performs better than softmax.
Method | Accuracy (%)
C. Chen et al.  | 79.1
Bulbul et al.  | 88.4
J. Imran et al.  | 91.2
Chen et al.  | 97.2
Mahjoub et al.  | 98.5
Depth + Inertial Sensor (Softmax Classifier) | 98.2
Depth + Inertial Sensor (SVM Classifier) | 98.7
IV-B Kinect 2D Dataset
The Kinect 2D action dataset is another publicly available dataset that contains both depth and inertial data. It is a new dataset captured with the second generation of Kinect . It contains 10 actions performed by six subjects, with each subject repeating each action 5 times. The 10 actions are ”right hand high wave”, ”right hand catch”, ”right hand high throw”, ”right hand draw X”, ”right hand draw tick”, ”right hand draw circle”, ”right hand horizontal wave”, ”right hand forward punch”, ”right hand hammer”, and ”hand clap”.
The proposed method is applied to this dataset in the same manner as for the UTD-MHAD dataset. The training parameters for AlexNet on SFIs and for the CNN on signal images are the same as in Tables I and II; however, the numbers of training and testing samples differ from the UTD-MHAD dataset. The total number of SFIs obtained from the depth data sequences is 17622. We select 14098 samples for training AlexNet and 3524 for testing.
The signal images obtained from the inertial data are too few in number to train the CNN of Figure 1. Thus the same data augmentation techniques explained in Section IV-A are applied to the signal images to increase the number of samples. After data augmentation, we select 3533 samples for training and 833 for testing. Experimental results on this dataset and their comparison with the previous method are shown in Table IV. The class-specific accuracies are shown in Figure 8.
Method | Accuracy (%)
Chen et al.  | 99.5
Depth + Inertial (Softmax Classifier) | 99.5
Depth + Inertial (SVM Classifier) | 99.8
We achieve higher accuracies on the Kinect 2D dataset than on the UTD-MHAD dataset. This is because the interclass discrimination in the Kinect 2D dataset is higher than in UTD-MHAD, and no action has a class-specific accuracy below 93%, as shown in Figure 8. On the other hand, the UTD-MHAD dataset contains actions that are less discriminable, such as ”sit to stand” versus ”stand to sit” and ”right arm swipe to left” versus ”right arm swipe to right”. The class-specific accuracies of these less discriminable actions are lower than those of the other actions, as shown in Figure 7.
V Conclusion
In this paper, we present a novel multimodal fusion scheme for visual-inertial HAR using CNNs, an SVM, and a softmax classifier. We extract learned features from the fully connected layers of the CNNs and perform the recognition task with an SVM or a softmax classifier. Our experiments on the UTD-MHAD and Kinect 2D datasets achieve state-of-the-art results: we outperform the current state-of-the-art on the UTD-MHAD dataset by 0.2%, with an increase from 98.5% to 98.7%, and on the Kinect 2D dataset by 0.3%, with an increase from 99.5% to 99.8%. We did not utilize skeleton data due to jitter in the skeleton joint positions of the UTD-MHAD dataset, which was captured by the first generation of Kinect. In future work, we plan to apply the proposed method to more datasets, combine modalities other than depth and inertial sensors, explore other fusion methods, and employ end-to-end deep learning architectures.
-  B. Zhou, M. Sundholm, J. Cheng, H. Cruz, and P. Lukowicz, “Never skip leg day: A novel wearable approach to monitoring gym leg exercises,” in Pervasive Computing and Communications (PerCom), 2016 IEEE International Conference on. IEEE, 2016, pp. 1–9.
-  P. Corbishley and E. Rodriguez-Villegas, “Breathing detection: towards a miniaturized, wearable, battery-operated monitoring system,” IEEE Transactions on Biomedical Engineering, vol. 55, no. 1, pp. 196–204, 2008.
-  J. Qin, L. Liu, Z. Zhang, Y. Wang, and L. Shao, “Compressive sequential learning for action similarity labeling,” IEEE Transactions on Image Processing, vol. 25, no. 2, pp. 756–769, 2016.
-  Y. Kim and B. Toomajian, “Hand gesture recognition using micro-doppler signatures with convolutional neural network,” IEEE Access, vol. 4, pp. 7125–7130, 2016.
-  L. Bao and S. S. Intille, “Activity recognition from user-annotated acceleration data,” in International Conference on Pervasive Computing. Springer, 2004, pp. 1–17.
-  A. Krause, D. P. Siewiorek, A. Smailagic, and J. Farringdon, “Unsupervised, dynamic identification of physiological and activity context in wearable computing.” in ISWC, vol. 3, 2003, p. 88.
-  T. Plötz, N. Y. Hammerla, and P. Olivier, “Feature learning for activity recognition in ubiquitous computing,” in IJCAI Proceedings-International Joint Conference on Artificial Intelligence, vol. 22, no. 1, 2011, p. 1729.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in neural information processing systems, 2012, pp. 1097–1105.
-  G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath et al., “Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups,” IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 82–97, 2012.
-  D. Cook, K. D. Feuz, and N. C. Krishnan, “Transfer learning for activity recognition: A survey,” Knowledge and information systems, vol. 36, no. 3, pp. 537–556, 2013.
-  C. Chen, R. Jafari, and N. Kehtarnavaz, “Improving human action recognition using fusion of depth camera and inertial sensors,” IEEE Transactions on Human-Machine Systems, vol. 45, no. 1, pp. 51–61, 2015.
-  N. Hatami, Y. Gavet, and J. Debayle, “Classification of time-series images using deep convolutional neural networks,” in Tenth International Conference on Machine Vision (ICMV 2017), vol. 10696. International Society for Optics and Photonics, 2018, p. 106960Y.
-  D.-X. Xue, R. Zhang, H. Feng, and Y.-L. Wang, “Cnn-svm for microvascular morphological type recognition with data augmentation,” Journal of medical and biological engineering, vol. 36, no. 6, pp. 755–764, 2016.
-  Y. Shima, Y. Nakashima, and M. Yasuda, “Pattern augmentation for handwritten digit classification based on combination of pre-trained cnn and svm,” in Informatics, Electronics and Vision & 2017 7th International Symposium in Computational Medical and Health Technology (ICIEV-ISCMHT), 2017 6th International Conference on. IEEE, 2017, pp. 1–6.
-  A. F. M. Agarap, “A neural network architecture combining gated recurrent unit (gru) and support vector machine (svm) for intrusion detection in network traffic data,” in Proceedings of the 2018 10th International Conference on Machine Learning and Computing. ACM, 2018, pp. 26–30.
-  C. Chen, R. Jafari, and N. Kehtarnavaz, “A survey of depth and inertial sensor fusion for human action recognition,” Multimedia Tools and Applications, vol. 76, no. 3, pp. 4405–4425, 2017.
-  K. Liu, C. Chen, R. Jafari, and N. Kehtarnavaz, “Fusion of inertial and depth sensor data for robust hand gesture recognition,” IEEE Sensors Journal, vol. 14, no. 6, pp. 1898–1903, 2014.
-  C. Chen, R. Jafari, and N. Kehtarnavaz, “A real-time human action recognition system using depth and inertial sensor fusion,” IEEE Sensors Journal, vol. 16, no. 3, pp. 773–781, 2016.
-  Y. Tian, X. Meng, D. Tao, D. Liu, and C. Feng, “Upper limb motion tracking with the integration of imu and kinect,” Neurocomputing, vol. 159, pp. 207–218, 2015.
-  B. Kwolek and M. Kepski, “Improving fall detection by the use of depth sensor and accelerometer,” Neurocomputing, vol. 168, pp. 637–645, 2015.
-  I. Hwang, G. Cha, and S. Oh, “Multi-modal human action recognition using deep neural networks fusing image and inertial sensor data,” in Multisensor Fusion and Integration for Intelligent Systems (MFI), 2017 IEEE International Conference on. IEEE, 2017, pp. 278–283.
-  H. Gammulle, S. Denman, S. Sridharan, and C. Fookes, “Two stream lstm: A deep fusion framework for human action recognition,” in Applications of Computer Vision (WACV), 2017 IEEE Winter Conference on. IEEE, 2017, pp. 177–186.
-  N. Y. Hammerla, S. Halloran, and T. Ploetz, “Deep, convolutional, and recurrent models for human activity recognition using wearables,” arXiv preprint arXiv:1604.08880, 2016.
-  W. Jiang and Z. Yin, “Human activity recognition using wearable sensors by deep convolutional neural networks,” in Proceedings of the 23rd ACM international conference on Multimedia. ACM, 2015, pp. 1307–1310.
-  S. Ha, J.-M. Yun, and S. Choi, “Multi-modal convolutional neural networks for activity recognition,” in Systems, Man, and Cybernetics (SMC), 2015 IEEE International Conference on. IEEE, 2015, pp. 3017–3022.
-  C. Chen, K. Liu, and N. Kehtarnavaz, “Real-time human action recognition based on depth motion maps,” Journal of real-time image processing, vol. 12, no. 1, pp. 155–163, 2016.
-  X. Yang, C. Zhang, and Y. Tian, “Recognizing actions using depth motion maps-based histograms of oriented gradients,” in Proceedings of the 20th ACM international conference on Multimedia. ACM, 2012, pp. 1057–1060.
-  R. Shah and R. Zimmermann, Multimodal analysis of user-generated multimedia content. Springer, 2017.
-  R. Caruana, “Multitask learning,” in Learning to learn. Springer, 1998, pp. 95–133.
-  C. J. Burges, “A tutorial on support vector machines for pattern recognition,” Data mining and knowledge discovery, vol. 2, no. 2, pp. 121–167, 1998.
-  F. Wang and J. Han, “Multimodal biometric authentication based on score level fusion using support vector machine,” Opto-electronics review, vol. 17, no. 1, pp. 59–64, 2009.
-  Y. Bouzouina and L. Hamami, “Multimodal biometric: Iris and face recognition based on feature selection of iris with ga and scores level fusion with svm,” in Bio-engineering for Smart Technologies (BioSMART), 2017 2nd International Conference on. IEEE, 2017, pp. 1–7.
-  J. Wolfe, X. Jin, T. Bahr, and N. Holzer, “Application of softmax regression and its validation for spectral-based land cover mapping,” ISPRS-International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, pp. 455–459, 2017.
-  A. F. Bobick and J. W. Davis, “The recognition of human movement using temporal templates,” IEEE Transactions on pattern analysis and machine intelligence, vol. 23, no. 3, pp. 257–267, 2001.
-  C. Chen, R. Jafari, and N. Kehtarnavaz, “Utd-mhad: A multimodal dataset for human action recognition utilizing a depth camera and a wearable inertial sensor,” in Image Processing (ICIP), 2015 IEEE International Conference on. IEEE, 2015, pp. 168–172.
-  Kinect2d dataset. [Online]. Available: http://www.utdallas.edu/~kehtar/Kinect2DatasetReadme.pdf
-  C. Chen, R. Jafari, and N. Kehtarnavaz, “Fusion of depth, skeleton, and inertial data for human action recognition,” in Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on. IEEE, 2016, pp. 2712–2716.
-  M. Shoaib, S. Bosch, O. D. Incel, H. Scholten, and P. J. Havinga, “Fusion of smartphone motion sensors for physical activity recognition,” Sensors, vol. 14, no. 6, pp. 10146–10176, 2014.
-  A. B. Mahjoub and M. Atri, “An efficient end-to-end deep learning architecture for activity classification,” Analog Integrated Circuits and Signal Processing, pp. 1–10, 2018.
-  Y. Tang, “Deep learning using linear support vector machines,” arXiv preprint arXiv:1306.0239, 2013.
-  M. F. Bulbul, Y. Jiang, and J. Ma, “Dmms-based multiple features fusion for human action recognition,” International Journal of Multimedia Data Engineering and Management (IJMDEM), vol. 6, no. 4, pp. 23–39, 2015.
-  J. Imran and P. Kumar, “Human action recognition using rgb-d sensor and deep convolutional neural networks,” in Advances in Computing, Communications and Informatics (ICACCI), 2016 International Conference on. IEEE, 2016, pp. 144–148.