Human activity recognition (HAR) is a research hotspot in the field of computer vision and has broad application prospects in security monitoring, biological health, and other fields. Traditional recognition algorithms are mainly based on images or videos. With the emergence of various wearable smart devices embedded with microsensors such as inertial measurement units (IMUs), such devices are now widely used in daily life and play an indispensable role in emerging fields that strongly demand HAR, such as virtual reality (VR). Realizing HAR with wearable devices is therefore a natural approach.
In recent years, HAR based on wearable devices has been studied in depth [13, 14]. However, these methods need to design features manually, computing time- and frequency-domain features according to the characteristics of the data. To reduce computational cost and compress the input data, a further feature selection step must also be conducted. Because manual feature design and selection are time-consuming, traditional machine learning methods incur a high cost. With the development of deep learning in recent years, deep neural networks such as Convolutional Neural Networks (CNN) and Long Short-Term Memory networks (LSTM) have been widely used for HAR, performing both feature extraction and activity classification.
Almost all the above methods can now achieve excellent results on specific sensor-based HAR datasets. The widely used public datasets and their main characteristics are shown in Table 1.
| Datasets | Sampling Rate (Hz) | Sensors | Activities | Subjects |
| --- | --- | --- | --- | --- |
| UCI HAR | 50 | 2 (Acc, Gyro) | 6 | 30 |
| PAMAP2 | 100 | 4 (Acc, Gyro, Mag, Temp) | 18 | 9 |
However, all these datasets have defects as follows:
- The most widely used datasets, such as UCI HAR, contain only simple daily activities (e.g., walking, running, or jumping), while human behavior is much more complex in real life.
- Subjects involved in data collection are limited in number, and the same activity tends to be performed similarly; for instance, walking may only be recorded at normal speed. In the real world, however, the same activity can be performed in different styles and varies across people.
- During data collection, most datasets use only a single IMU, which makes them unsuitable for recognizing more elaborate activities such as stretching the arms or legs. Although other datasets use more than two IMUs, increasing the number of IMUs also makes the setup more intrusive for subjects.
To solve the problems above, this paper innovatively adopts AMASS, a pose reconstruction dataset that aggregates a large collection of motion capture (mocap) datasets, for HAR. Adopting this dataset has the following advantages:
- AMASS contains rich motion types. It includes complex activities such as house cleaning in addition to simple daily activities, making the dataset closer to real life.
- Because AMASS aggregates multiple mocap datasets, activities exhibit richer characteristics and the number of involved subjects grows to more than 300.
- Inspired by work in which virtual IMU data are innovatively used for pose reconstruction, we similarly use virtual IMU data for HAR, which greatly reduces the cost of collecting real datasets.
The main contributions of this paper are as follows:
- Adopting a novel pose reconstruction dataset, AMASS, for HAR and using the virtual IMU data in this dataset.
- Using a realistic dataset to fine-tune the model, further reducing the gap between real and virtual data.
- Proposing a CNN framework combined with an unsupervised penalty for HAR.
Experimental results show that the accuracy on the realistic dataset reaches 91.15% after fine-tuning, which demonstrates the feasibility of applying pose reconstruction datasets and virtual IMU data to HAR.
2 Dataset preprocessing based on the SMPL model
One major work of this paper is the processing of AMASS to make it suitable for HAR. Since the IMU data in AMASS are virtual, this paper further processes the DIP dataset, which contains real IMU data that can be used to reduce the gap between virtual and real data.
2.1 SMPL model
SMPL is a parameterized model of the 3D human body with 6890 vertices and 24 joints. Its input parameters are the shape parameters β, which take 10 values controlling the shape change of the human body, and the pose parameters θ, which take 72 values defining the relative angles of the 24 joints (including the root joint) of the human body:

M(β, θ) = W(T_P(β, θ), J(β), θ, 𝒲), with T_P(β, θ) = T̄ + B_S(β) + B_P(θ),

where T̄ defines a template mesh, to which the pose-dependent deformations B_P(θ) and the shape-dependent deformations B_S(β) are added. Based on the rotations around the predicted joint locations J(β), with smoothing defined by the blend-weight matrix 𝒲, the resulting mesh is then posed using a standard linear blend skinning (LBS) function W.
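The linear blend skinning step can be illustrated with a minimal numpy sketch. The two-joint toy input below is purely illustrative and is not taken from SMPL's released template, joints, or weights:

```python
import numpy as np

def linear_blend_skinning(vertices, joint_transforms, blend_weights):
    """Pose rest-pose vertices with linear blend skinning (LBS).

    vertices:         (V, 3) rest-pose vertex positions
    joint_transforms: (J, 4, 4) homogeneous world transform of each joint
    blend_weights:    (V, J) per-vertex skinning weights (rows sum to 1)
    """
    V = vertices.shape[0]
    # Homogeneous coordinates: (V, 4)
    v_h = np.hstack([vertices, np.ones((V, 1))])
    # Blend the joint transforms per vertex: (V, 4, 4)
    blended = np.einsum('vj,jab->vab', blend_weights, joint_transforms)
    # Apply each vertex's blended transform
    posed = np.einsum('vab,vb->va', blended, v_h)
    return posed[:, :3]

# Toy example: joint 0 is the identity, joint 1 translates up by 1
T = np.stack([np.eye(4), np.eye(4)])
T[1, :3, 3] = [0.0, 1.0, 0.0]
verts = np.array([[0.0, 0.0, 0.0]])
w = np.array([[0.5, 0.5]])  # vertex influenced equally by both joints
print(linear_blend_skinning(verts, T, w))  # vertex moves to [0, 0.5, 0]
```

Blending the transforms before applying them (rather than averaging transformed points) is what the blend-weight matrix in the SMPL formulation expresses.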
Using this model, AMASS converts the motion poses of several classical mocap datasets, such as Biomotion , from skeletal form to a more realistic 3D skin model, with the pose parameters given as rotation matrices.
2.2 Virtual data generation
Though AMASS contains the input parameters of the SMPL model, it does not contain IMU data, as the original mocap datasets do not provide it. To make AMASS usable for sensor-based pose reconstruction, prior work confirms the feasibility of synthesizing IMU data and generating corresponding SMPL parameters based on the input of different models.
Based on the rich information provided by AMASS, virtual acceleration data and orientation readings in rotation-matrix form can be generated by placing virtual sensors on the SMPL mesh surface. Orientation readings are directly obtained using forward kinematics, while virtual accelerations are calculated via finite differences. The virtual acceleration a_t at time t is defined as:

a_t = (p_{t−1} − 2 p_t + p_{t+1}) / dt²,

where p_t is the position of a virtual IMU at time t, and dt is the time interval between two consecutive frames.
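The finite-difference computation can be sketched as follows; the quadratic test trajectory is illustrative only:

```python
import numpy as np

def virtual_acceleration(positions, dt):
    """Second-order finite difference over consecutive frames.

    positions: (T, 3) virtual-sensor positions, one row per frame
    dt:        time between frames (e.g. 1/60 s at 60 Hz)
    Returns (T-2, 3) accelerations for the interior frames.
    """
    p_prev, p_curr, p_next = positions[:-2], positions[1:-1], positions[2:]
    return (p_prev - 2.0 * p_curr + p_next) / dt**2

# Sanity check: constant acceleration along x, p = t^2 (so a = 2)
t = np.arange(5) / 60.0
pos = np.stack([t**2, np.zeros_like(t), np.zeros_like(t)], axis=1)
acc = virtual_acceleration(pos, dt=1/60.0)
print(acc[:, 0])  # each entry is 2.0
```

The two boundary frames have no acceleration estimate, which matches the loss of two frames per sequence implied by the central difference.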
2.3 Labeling and filtering with SMPL model
Since AMASS contains over 11000 motions, it is necessary to classify them into different activities and assign ground-truth labels. Further, a single motion file in AMASS may consist of several activities, so it is also essential to filter out motions that affect the balance of the dataset. Activity labeling and data filtering are achieved in three steps.
2.3.1 Posture-based labeling
We first classify all motions in AMASS into 12 categories based on the brief descriptions of motions provided in most of the classical mocap datasets included in AMASS. Two types of motions are directly removed in this procedure. The first type is motions with little relevance to daily human activities, such as boxing and other martial motions described in Biomotion . The second type is motions with frequent transitions (e.g., quick switches between walking, stopping, and running). Since motion duration is generally short in AMASS, frequent motion transitions may conflict with the subsequent sliding-window length settings, so such motions are also excluded.
2.3.2 Acceleration-based filtering
A simple classification of the dataset was implemented in the previous section; here, some data are further filtered based on accelerations. Using the accelerations obtained from the sensor on the wrist, a graph of acceleration over time can be plotted. The accelerations at the left wrist for typical walking and running movements are shown in Fig. 2 and Fig. 3.
2.3.3 Data cleaning with SMPL model
For some activities whose acceleration characteristics are not obvious, such as stretching of the arms, cleaning the dataset with acceleration features alone almost always fails. However, since AMASS provides SMPL pose parameters in the form of rotation matrices, it becomes feasible to filter this type of activity through visualization with the SMPL model.
After building the SMPL model in Unity, motions can be visualized by passing in different SMPL pose parameters. A clapping motion and an arm-waving motion are shown in Fig. 4 and Fig. 5, respectively. After visualizing such data, mislabeled motions can be successfully deleted.
However, even after the processing above, the preprocessed AMASS still suffers from extremely unbalanced activity classes, mainly caused by the unbalanced motions in the original AMASS. To alleviate this problem, interpolation-based up-sampling is adopted in this paper.
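The paper does not detail the interpolation scheme; the sketch below assumes simple linear interpolation of each feature channel to stretch minority-class sequences:

```python
import numpy as np

def upsample_sequence(seq, factor):
    """Linearly interpolate a (T, D) sequence to length round(T * factor)."""
    T, D = seq.shape
    new_T = int(round(T * factor))
    old_t = np.linspace(0.0, 1.0, T)
    new_t = np.linspace(0.0, 1.0, new_T)
    # Interpolate each feature channel independently
    return np.stack(
        [np.interp(new_t, old_t, seq[:, d]) for d in range(D)], axis=1
    )

# Double the length of a short 2-channel sequence
seq = np.array([[0.0, 1.0], [1.0, 2.0], [2.0, 3.0]])
longer = upsample_sequence(seq, factor=2.0)
print(longer.shape)  # (6, 2)
```

Stretching sequences this way yields more windows per minority-class motion after the sliding-window segmentation, which is one plausible reading of "interpolation up-sampling" here.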
3 Deep learning algorithm and fine-tuning
3.1 Proposed method
The proposed method includes two stages: an off-line training stage and an on-line testing stage. In the first stage, we employ the AMASS dataset, which contains abundant human poses, to enhance the variety and diversity of the real data.
Motivated by the pioneering works [5, 7], a deep convolutional neural network (convolutional layers and fully-connected layers) with an unsupervised penalty (deconvolutional layers) is proposed to automatically extract the features of AMASS. Specifically, given the k-th batch of IMU data X_k and the related labels Y_k, the proposed method updates the neural network parameters by minimizing a loss of the form

L(X_k, Y_k) = L_s(Y_k, f(X_k)) + λ L_u(X_k),

where f denotes the network whose l-th layer uses the activation function σ_l, and λ is the penalty parameter that balances the supervised loss L_s and the unsupervised penalty L_u. We use the unsupervised penalty to promote the generalization of the proposed method by considering a reconstruction objective:

L_u(X_k) = ‖X_k − g(Z_k)‖², with Z_k = h(X_k).

In our case, by optimizing L_u, we try to represent the high-dimensional input X_k by the low-dimensional latent layer Z_k (dim(Z_k) < dim(X_k)). Considering the low dimensionality of the IMU data, such an operation is helpful for key feature extraction.
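As a rough numpy sketch of the combined objective, assuming cross-entropy for the supervised term and mean-squared reconstruction error for the unsupervised penalty (the network itself is omitted, and the name `lam` for the balancing weight is our own):

```python
import numpy as np

def combined_loss(logits, labels, x, x_recon, lam):
    """Supervised cross-entropy plus an unsupervised reconstruction penalty.

    logits:  (N, C) classifier outputs
    labels:  (N,)   integer class labels
    x:       (N, D) input IMU windows (flattened)
    x_recon: (N, D) decoder reconstruction of x
    lam:     penalty weight balancing the two terms
    """
    # Numerically stable log-softmax
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(labels)), labels].mean()
    # Mean-squared reconstruction error of the autoencoder branch
    mse = np.mean((x - x_recon) ** 2)
    return ce + lam * mse

logits = np.array([[4.0, 0.0], [0.0, 4.0]])
labels = np.array([0, 1])
x = np.ones((2, 3))
x_recon = np.ones((2, 3))        # perfect reconstruction -> penalty is 0
print(combined_loss(logits, labels, x, x_recon, lam=0.1))
```

With a perfect reconstruction the penalty vanishes and only the (small) cross-entropy of the confident, correct predictions remains.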
3.2 Fine-tuning with real IMU
The IMU data in AMASS are virtually generated via the SMPL model and virtual sensors, while real-world IMU data tend to be affected by environmental noise, electromagnetic interference, etc. Therefore, certain differences exist between virtual and real data. To reduce this gap, this paper uses the DIP dataset with real IMU data for fine-tuning. Data processing for DIP is similar to that for AMASS, except that DIP contains only 5 activities, namely "computer works", "walking", "jumping", "stretching arms" and "stretching legs". Meanwhile, DIP has rather balanced activity categories, so up-sampling is not performed on DIP.
Following the off-line training stage in Section 3.1, at the on-line testing stage we leverage transfer learning and obtain the final result by fine-tuning the parameters of the fully-connected layers with the real IMU data.
4 Test Verification
This paper innovatively adopts AMASS, a pose reconstruction dataset with virtual IMU data, for HAR and proposes a new CNN framework with an unsupervised penalty. We design several comparative experiments to demonstrate the feasibility of using a pose reconstruction dataset for HAR.
To further verify the rationality of the proposed method, both classical machine learning algorithms and deep learning algorithms are tested on AMASS and DIP. Taking the sequence length in AMASS into consideration, this paper adopts Random Forest (RF) and DeepConvLSTM for comparison. For RF, the processed data are input directly for classification, while the original DeepConvLSTM architecture is used for comparison.
4.1 Experimental design
Three groups of comparative experiments based on different datasets are designed. Experiment 1 conducts training and testing on AMASS, using all three algorithms. The ratio of the training set to the test set is 7:3. Experiment 2 conducts training and testing on DIP and adopts all three algorithms similar to experiment 1. Experiment 3 is trained on the AMASS training set, fine-tuned on the DIP training set and finally tested on the DIP test set. Only our proposed method and DeepConvLSTM are involved in experiment 3.
Considering that some activities cannot be identified with only one IMU, three IMUs located at the left wrist, the right thigh, and the head are selected in this paper. The input data have 36 feature dimensions, comprising three-axis accelerations and rotation matrices. Since the sampling rates of AMASS and DIP are both 60 Hz, a sliding window of 60 frames (i.e., 1 second) is selected, with the degree of overlap set to 50%.
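The windowing described above can be sketched as:

```python
import numpy as np

def sliding_windows(data, window=60, overlap=0.5):
    """Split a (T, F) stream into (N, window, F) segments.

    window=60 frames is 1 s at the 60 Hz rate of AMASS and DIP;
    overlap=0.5 advances the window by half its length each step.
    """
    step = int(window * (1.0 - overlap))
    starts = range(0, data.shape[0] - window + 1, step)
    return np.stack([data[s:s + window] for s in starts])

# 5 seconds of a 36-feature stream (3 IMUs x (3-axis acc + 3x3 rotation))
stream = np.random.randn(300, 36)
segments = sliding_windows(stream)
print(segments.shape)  # (9, 60, 36)
```

With 50% overlap each frame (except near the edges) appears in two windows, effectively doubling the number of training samples relative to disjoint windows.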
4.2 Evaluation criteria
Commonly used evaluation criteria in HAR include accuracy, recall, F1-score, and the Area Under the Curve (AUC), among which accuracy and F1-score are the most common. We therefore adopt accuracy and the macro-averaged F1-score as performance measures:

Accuracy = Σ_c TP_c / N,  F1 = (1/C) Σ_{c=1}^{C} 2 P_c R_c / (P_c + R_c),

with per-class precision P_c = TP_c / (TP_c + FP_c) and recall R_c = TP_c / (TP_c + FN_c), where C denotes the number of classes, N is the total number of samples, and TP_c, FP_c, TN_c, FN_c are the true positives, false positives, true negatives and false negatives of class c, respectively.
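These measures can be computed from per-class counts as follows; treating the F1-score as macro-averaged is our assumption, since the averaging scheme is not stated explicitly:

```python
import numpy as np

def accuracy_and_macro_f1(y_true, y_pred, num_classes):
    """Overall accuracy and macro-averaged F1-score from label arrays."""
    acc = float(np.mean(y_true == y_pred))
    f1s = []
    for c in range(num_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return acc, float(np.mean(f1s))

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
acc, f1 = accuracy_and_macro_f1(y_true, y_pred, num_classes=3)
print(acc, f1)  # accuracy 4/6, macro F1 ≈ 0.656
```

Macro averaging weights every class equally, which matters for the unbalanced activity distributions discussed in Section 2.3.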
4.3 Experimental results and analysis
Table 2 shows the results of the three experiments. From the results on the AMASS dataset in Table 2, we can see that all three algorithms achieve accuracy over 70%, despite the fact that the IMU data in AMASS are virtual and that it contains complex activities composed of several motions. The results on the DIP dataset in Table 2 correspond to experiment 2, comparing the three algorithms on the realistic IMU dataset DIP. The proposed method outperforms DeepConvLSTM and RF on both AMASS and DIP, which strongly illustrates the rationality of the proposed deep learning algorithm.
| Dataset | Proposed (Accuracy) | Proposed (F1-score) | DeepConvLSTM (Accuracy) | DeepConvLSTM (F1-score) |
| --- | --- | --- | --- | --- |
| AMASS & DIP | 91.15% | 91.21% | 84.80% | 85.12% |
Notice that the classification results on DIP are not as good as those of DeepConvLSTM and RF on classical HAR datasets. The main reason is that, although DIP contains only 5 activities, each activity, as in AMASS, may be composed of a variety of motions; for example, the activity stretching legs includes two motions, leg raising and stepping. Activities with multiple motions greatly increase the difficulty of classification.
To confirm the gap between virtual and real IMU data, we additionally use the network trained only on AMASS to perform the classification task on DIP; unsurprisingly, the resulting accuracy is below 50%. In contrast, the network trained on AMASS and then fine-tuned on the DIP training set achieves the best performance on the DIP test set, both for the proposed method and for DeepConvLSTM. These results confirm that fine-tuning indeed reduces the gap between virtual and real IMU data to some extent.
We also show the confusion matrices of the proposed method in experiments 2 and 3. As Fig. 6 and Fig. 7 show, fine-tuning effectively improves the classification results of some categories in DIP, mainly because the richer motions in AMASS make it easier to distinguish some confusing activities. Another interesting observation is that fine-tuning achieves rather excellent results within 20 epochs. This also suggests a direction for future research: after training on large-scale virtual IMU datasets, only a small amount of real IMU data is needed for fine-tuning, which will reduce the cost of collecting real data.
To address the problems of simple daily activities and limited subjects in classical datasets, this paper innovatively adopts the pose reconstruction dataset AMASS for HAR. At the same time, DIP, a pose reconstruction dataset with real IMU data, is used for fine-tuning to reduce the gap between virtual and real IMU data. Future work can focus on finding the most suitable IMU configurations through more detailed experiments.
This work was supported by the National Natural Science Foundation of China (NSFC) under Grant 61873163, and by the Equipment Pre-Research Field Foundation under Grants 61405180205 and 61405180104.
-  D. Anguita, A. Ghio, L. Oneto, X. Parra, and J. L. Reyes-Ortiz. A public domain dataset for human activity recognition using smartphones. In Esann, 2013.
-  B. Bruno, F. Mastrogiovanni, and A. Sgorbissa. Wearable inertial sensors: Applications, challenges, and public test benches. IEEE Robotics & Automation Magazine, 22(3):116–124, 2015.
-  G. Chevalier. Lstms for human activity recognition, 2016.
-  L. Chu, H. Li, and R. C. Qiu. Lemo: Learn to equalize for mimo-ofdm systems with low-resolution adcs. arXiv preprint arXiv:1905.06329, 2019.
-  D. Erhan, Y. Bengio, A. C. Courville, P. A. Manzagol, and S. Bengio. Why does unsupervised pre-training help deep learning? Journal of Machine Learning Research, 11(3):625–660, 2010.
-  Y. Huang, M. Kaufmann, E. Aksan, M. J. Black, O. Hilliges, and G. Pons-Moll. Deep inertial poser: learning to reconstruct human pose from sparse inertial measurements in real time. ACM Transactions on Graphics (TOG), 37(6):185, 2019.
-  Y. Lecun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
-  J. W. Lockhart, G. M. Weiss, J. C. Xue, S. T. Gallagher, A. B. Grosner, and T. T. Pulickal. Design considerations for the wisdm smart phone-based sensor mining architecture. In Proceedings of the Fifth International Workshop on Knowledge Discovery from Sensor Data, pages 25–33. ACM, 2011.
-  M. Loper, N. Mahmood, J. Romero, G. Pons-Moll, and M. J. Black. Smpl: A skinned multi-person linear model. ACM transactions on graphics (TOG), 34(6):248, 2015.
-  N. Mahmood, N. Ghorbani, N. F. Troje, G. Pons-Moll, and M. J. Black. Amass: Archive of motion capture as surface shapes. arXiv preprint arXiv:1904.03278, 2019.
-  A. Maurer and M. Pontil. Excess risk bounds for multitask learning with trace norm regularization. Journal of Machine Learning Research, 30:55–76, 2013.
-  F. Ordóñez and D. Roggen. Deep convolutional and lstm recurrent neural networks for multimodal wearable activity recognition. Sensors, 16(1):115, 2016.
-  L. Pei, R. Guinness, R. Chen, J. Liu, H. Kuusniemi, Y. Chen, L. Chen, and J. Kaistinen. Human behavior cognition using smartphone sensors. Sensors, 13(2):1402–1424, 2013.
-  L. Pei, J. Liu, R. Guinness, Y. Chen, H. Kuusniemi, and R. Chen. Using ls-svm based motion recognition for smartphone indoor wireless positioning. Sensors, 12(5):6155–6175, 2012.
-  A. Reiss and D. Stricker. Introducing a new benchmarked dataset for activity monitoring. In 2012 16th International Symposium on Wearable Computers, pages 108–109. IEEE, 2012.
-  C. A. Ronao and S.-B. Cho. Human activity recognition with smartphone sensors using deep learning neural networks. Expert systems with applications, 59:235–244, 2016.
-  W. Sousa Lima, E. Souto, K. El-Khatib, R. Jalali, and J. Gama. Human activity recognition using inertial sensors in a smartphone: An overview. Sensors, 19(14):3213, 2019.
-  N. F. Troje. Decomposing biological motion: A framework for analysis and synthesis of human gait patterns. Journal of vision, 2(5):2–2, 2002.
-  S. Zhang, Z. Wei, J. Nie, L. Huang, S. Wang, and Z. Li. A review on human activity recognition using vision-based method. Journal of healthcare engineering, 2017, 2017.