Deep CHORES: Estimating Hallmark Measures of Physical Activity Using Deep Learning

07/26/2020 ∙ by Mamoun T. Mardini, et al.

Wrist accelerometers for assessing hallmark measures of physical activity (PA) are rapidly growing in popularity with the advent of smartwatch technology. Given this popularity, there is a need for rigorous evaluation of methods for recognizing PA type and estimating energy expenditure (EE) across the lifespan. Participants (n=145; 96 women, 49 men; aged 20-89 yr) performed a battery of 33 daily activities in a standardized laboratory setting while a tri-axial accelerometer collected data from the right wrist. A portable metabolic unit was worn to measure metabolic intensity. We built deep learning networks to extract spatial and temporal representations from the time-series data and used them to recognize PA type and estimate EE. The deep learning models achieved high performance: F1 scores were 0.82, 0.81, and 0.95 for recognizing sedentary, locomotor, and lifestyle activities, respectively, and the root mean square error for EE estimation was 1.1 (+/-0.13) METs.


Introduction

It is well known that regular and sufficient physical activity (PA) has tremendous benefits for reducing the risk of common chronic diseases and enhancing mental health, wellbeing, and quality of life. Globally, one in four adults (almost 1.4 billion) does not meet the World Health Organization (WHO) PA recommendations [16]. Recently, WHO published an action plan to increase PA, with a target of a 15% reduction in physical inactivity by 2030 [38]. Although the effect of fitness trackers on increasing PA has rarely been explored, a few recent studies have shown that these trackers can potentially change individuals' behavior and increase PA [6, 4]. Therefore, there is a need to build models that can accurately recognize PA type and intensity.

Historically, the approach adopted to recognize PA type and to estimate energy expenditure (EE) relied on data collected from the hip position in standardized laboratory settings. The advantage of the hip position over other positions is its closeness to the body's center of mass; it therefore offers a convenient and accurate means of capturing ambulatory activity [2]. Recently, however, the wrist position has become popular for collecting accelerometer data due to the rise of smartwatches, convenience, the ability to capture sleep quality, and enhanced compliance in research studies [14, 19, 22, 29]. Despite the popularity of wrist-worn accelerometers, there is a paucity of models deemed viable for accurately assessing PA from the wrist. Using data from the wrist position to recognize PA type and estimate EE is challenging because of its limitations in quantifying and capturing large lower-limb movements and other lifestyle activities.

In this paper, we targeted two pervasive issues in the use of accelerometers: i) recognizing PA type, which is a classification problem, and ii) estimating energy expenditure, which is a regression problem. To address these issues, previous research has used several machine learning algorithms, including decision trees [35], random forests [35, 11], and bag-of-words models [23]. These approaches generally outperform traditional statistical regression-based approaches because they can handle high-resolution data across the three accelerometer axes and extract non-linear relationships more efficiently. Current machine learning approaches follow standard time series analysis, in which relevant features are aggregated (feature engineering) over sliding windows (bouts) of the raw data, followed by classification or regression. While this approach has successfully discovered non-linear relationships, it stops short of identifying hidden features because it flattens the temporality in the data by summarizing it into time- and frequency-domain features. This limits the ability of a machine learning model to extract temporal features from the time series, which are important for capturing transitions between activity types. Lastly, current machine learning approaches rely on the selection of relevant features, which requires domain expertise and varies significantly among researchers.

For these reasons, we embrace the power of deep learning in this paper, because it learns embeddings from the raw accelerometry data and identifies a comprehensive feature representation without user input. As recent examples, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) have produced important breakthroughs in image recognition [26], speech recognition [15], and word embeddings [27]. Activity type recognition is an ideal fit for CNNs and RNNs, which can appropriately exploit the spatial and temporal structure of time series signals [34, 42]. Deep learning processes data through an organized hierarchy of neural networks that extract progressively more non-linear and abstract features usable for classification. Data are processed through the layers, with the input of each layer being the processed output of the previous one. This architecture allows the network to generalize and become domain-invariant: after learning a specific pattern from data, the network can recognize it in different data sources.

In this paper, we analyzed raw accelerometry data collected from the wrist position in a laboratory setting. We filtered the data and split it into 15-second non-overlapping windows, which served as the input to the deep learning network. The outputs of our models are the recognized PA type and the estimated EE. We hypothesize that a deep learning network can effectively extract relevant features from raw accelerometry data, predict physical activity type, and estimate energy expenditure precisely from accelerometer data collected at the wrist. Figure 1 shows the overall workflow of the accelerometry data processing proposed in this work.
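The windowing step can be sketched in a few lines of NumPy. The 15-second length matches the Methods, and a 30 Hz rate (the rate used after downsampling) is assumed; the function name and example data are illustrative.

```python
import numpy as np

def make_windows(signal, fs=30, window_sec=15):
    """Split a (num_samples, 3) tri-axial signal into non-overlapping windows.

    Returns an array of shape (num_windows, fs * window_sec, 3); trailing
    samples that do not fill a complete window are dropped.
    """
    win = fs * window_sec                       # samples per window (450 at 30 Hz)
    n = (signal.shape[0] // win) * win          # truncate to whole windows
    return signal[:n].reshape(-1, win, signal.shape[1])

# Example: 10 minutes of 30 Hz tri-axial data -> 40 windows of shape (450, 3)
raw = np.random.randn(10 * 60 * 30, 3)
windows = make_windows(raw)
print(windows.shape)  # (40, 450, 3)
```

Non-overlapping windows also guarantee that each window carries exactly one activity label, as noted in the Problem Formulation.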

Figure 1:

Accelerometry data processing overview. CNN refers to convolutional neural network; LSTM refers to long short-term memory; and DNN refers to dense neural network.

Methods

Participants

One hundred and forty-five participants (96 women and 49 men, aged 20-89 yr) performed a battery of 33 typical daily activities in a standardized laboratory setting, as listed in Table 1. Participants were community-dwelling adults aged 20+ years who could read and speak English, were willing to undergo all testing procedures, and whose weight had been stable (+/-5 lbs) over the previous three months. Table 2 shows participants' descriptive characteristics. The Institutional Review Board at the University of Florida approved all study procedures, and all participants provided written informed consent before the study.

Activity type
Activity Sedentary Locomotion Lifestyle
LEISURE WALK No Yes No
RAPID WALK No Yes No
LIGHT GARDENING No No Yes
YARD WORK No No Yes
PREPARE SERVE MEAL No No Yes
DIGGING No No Yes
STRAIGHTENING UP DUSTING No No Yes
WASHING DISHES No No Yes
UNLOADING STORING DISHES No No Yes
WALKING AT RPE 1 No Yes No
PERSONAL CARE No No Yes
DRESSING No No Yes
WALKING AT RPE 5 No Yes No
SWEEPING No No Yes
VACUUMING No No Yes
STAIR DESCENT No Yes No
STAIR ASCENT No Yes No
TRASH REMOVAL No No Yes
REPLACING SHEETS ON A BED No No Yes
STRETCHING YOGA* No No No
MOPPING No No Yes
LIGHT HOME MAINTENANCE No No Yes
COMPUTER WORK Yes No No
HEAVY LIFTING No No Yes
SHOPPING No No Yes
IRONING No No Yes
LAUNDRY WASHING No No Yes
STRENGTH EXERCISE LEG CURL* No No No
STRENGTH EXERCISE CHEST PRESS* No No No
STRENGTH EXERCISE LEG EXTENSION* No No No
TV WATCHING Yes No No
STANDING STILL Yes No No
WASHING WINDOWS No No Yes
A total of 29 activities were considered for PA type recognition and 33 for EE estimation
* Only considered for energy expenditure estimation
Table 1: List of the performed physical activities and their type

A tri-axial accelerometer (ActiGraph GT3X+) was worn on the right wrist. Additionally, a portable metabolic unit (Cosmed K4b2) was worn to measure metabolic intensity, expressed as a relative metabolic equivalent (MET). Tasks were chosen because they mimic daily chore activities common among most Americans and are consistent with average time use reported in the 2010 American Time Use Survey [3]. Participants performed the scripted activities over four separate visits (2-3 hours each); spreading the activities over visits was designed to reduce participant burden and the fatigue associated with performing physical activities over long periods of time. Activities were performed from lowest to highest metabolic demand, with a 5-10 minute rest period between activities. A full list of inclusion/exclusion criteria and data collection reproducibility can be found in articles published by our group [24, 7].

Total (n=145)
Age (yr) 58.8 (17.1)
Female, n (%) 96 (66.2)
BMI (kg/m²) 26.5 (4.8)
Race/ethnicity, n
 Non-Hispanic 142
 Hispanic 3
Data are means and SD unless otherwise noted.
BMI: Body Mass Index
Table 2: Participants' descriptive characteristics

Instrumentation

Participants wore an ActiGraph GT3X+ monitor on their right wrist. Findings suggest that wearing the accelerometer on the non-dominant or dominant wrist has no impact on physical activity assessment [10]. The ActiGraph GT3X+ monitor is a lightweight tri-axial accelerometer that records accelerations in units of gravity (1 g) along the perpendicular, anterior-posterior, and medio-lateral axes. Accelerometers were initialized to collect data at a 100 Hz sampling rate. Participants also wore a portable indirect calorimetry device, the Cosmed K4b2 [8], while performing the activities listed in Table 1. Before data collection, the oxygen (O2) and carbon dioxide (CO2) sensors were calibrated using a gas mixture sample of 16.0% O2 and 5.0% CO2 and a room-air calibration. The turbine flow meter was calibrated using a 3.0-L syringe. A flexible facemask was positioned over the participant's mouth and nose and attached to the flow meter. Oxygen consumption (VO2) was measured breath-by-breath, and the breath-by-breath data were smoothed with a 30-second running-average window. The VO2 data were displayed using a custom LabVIEW application to locate when steady-state oxygen consumption was reached. Steady state was defined as a plateau in oxygen consumption, which typically occurs about 2 minutes after the start of an activity. VO2 was averaged over a 2-4 minute steady-state window. Data were expressed as METs after dividing VO2 values by the traditional standard of 3.5 ml/kg/min [20].
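The smoothing and MET conversion above can be sketched as follows, assuming for simplicity that the breath-by-breath VO2 has already been interpolated onto a 1 Hz grid (so the 30-second running average becomes a 30-sample moving mean); the function name and example values are illustrative, not taken from the study code.

```python
import numpy as np

def vo2_to_mets(vo2_ml_kg_min, window_sec=30):
    """Smooth per-second VO2 (ml/kg/min) with a running average, convert to METs.

    METs follow from dividing by the conventional resting value of 3.5 ml/kg/min.
    """
    kernel = np.ones(window_sec) / window_sec
    smoothed = np.convolve(vo2_ml_kg_min, kernel, mode="valid")
    return smoothed / 3.5

# A steady VO2 of 7.0 ml/kg/min corresponds to 2.0 METs
mets = vo2_to_mets(np.full(120, 7.0))
```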

Problem Formulation

Activities were split into three binary classification tasks: i) sedentary vs. non-sedentary; ii) locomotion vs. non-locomotion; and iii) lifestyle vs. non-lifestyle. MET values were examined as a continuous variable. We extracted consecutive non-overlapping 15-second windows from the raw accelerometry data and used them as inputs to the deep learning networks. Previous studies used various window lengths, ranging from 0.1 seconds to 128 seconds [18, 36, 32, 28, 25]; our choice of a 15-second window provided acceptable results. Participants performed the tasks listed in Table 1 at a self-selected pace for 10 min (treadmill walking was done for 7 min). Splitting these periods into 15-second intervals produced non-overlapping windows and ensured that each window had only one label. The output of the model is a binary score ŷ in the classification tasks and a continuous value in the regression task. We used the binary cross-entropy loss function to measure the dissimilarity between the predicted (ŷ) and ground truth (y) values, and optimized it with the Adam optimizer, as shown in Equation 1.

L = -(1/N) Σᵢ₌₁ᴺ [ yᵢ log(ŷᵢ) + (1 - yᵢ) log(1 - ŷᵢ)]   (1)

where L is the loss function, ŷᵢ is the prediction, yᵢ is the ground truth, i is the index of the raw accelerometry data window, and N is the number of samples.
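Equation 1 can be written directly in NumPy; this is a minimal sketch rather than the Keras internals, although Keras's binary cross-entropy computes the same average.

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Binary cross-entropy of Equation 1, averaged over the N windows."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)   # guard against log(0)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

# Confident, correct predictions give a small loss ...
low = binary_cross_entropy(np.array([1.0, 0.0]), np.array([0.9, 0.1]))
# ... and confident, wrong predictions a much larger one
high = binary_cross_entropy(np.array([1.0, 0.0]), np.array([0.1, 0.9]))
```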

Deep learning Model Architecture

The deep learning network consists of multiple layers of neural networks in which the processed output of one layer acts as the input to the next. As with any machine learning algorithm, the goal is to map the input to the target (labels). Technically, this is done through multiple data transformations (called layers) that are parameterized by numbers (called weights); learning in this sense means finding the right weights to map the input to the target. Our network architecture comprised an input layer, convolutional neural network (CNN) layers, a long short-term memory (LSTM) recurrent layer, and a classifier (dense neural network). The input layer takes the raw accelerometry data split into 15-second non-overlapping windows. The CNN layers extract spatial features from the raw data. The LSTM layer extracts temporal features; LSTM is a recurrent neural network (RNN) that includes a memory to carry information across timestamps, which mitigates the vanishing gradient problem [17] by preventing older information from fading progressively during processing, a property important in time series analysis such as accelerometry [13]. The output of the LSTM is essentially a feature map that the classifier uses to recognize PA type and estimate EE. It is worth mentioning that the representations learned by the CNN and LSTM are generic and reusable, useful regardless of the analyzed data, whereas the classifier discards the spatial and temporal notions in the data, producing representations specific to the classes in the problem; it contains information about class probabilities. We used the binary cross-entropy loss function to measure how far the classifier's output is from the expected one; the loss score is then used as a feedback signal to update the network weights via the Adam optimizer. In summary, our network comprised 3 CNN layers, 1 LSTM layer, 2 dense layers, a sigmoid activation, the binary cross-entropy loss function, and the Adam optimizer. Figure 2 shows the architecture of the deep learning network.

Figure 2: The architecture of the deep learning network.

Model Training

Our dataset consisted of 145 participants for PA type recognition and 141 for EE estimation. Though the majority of the accelerometry data were collected at 100 Hz, a few cases were collected at 30 and 80 Hz. For consistency, we downsampled all recordings to 30 Hz using the SciPy Python library. We then split the three-axis data into 15-second samples at 30 Hz, i.e., every sample had a shape of 450x3. As elucidated earlier, we categorized the PAs into three categories: sedentary, locomotion, and lifestyle. We built three binary classification models and evaluated the performance of each. All 145 participants were randomly distributed into 10 batches: 9 batches of 15 participants each and 1 batch of 10 participants. We used 10-fold nested cross-validation with two loops to report the PA type classification results: the outer loop takes each batch in turn as the test set, and the inner loop iterates 9 times, taking each batch other than the test batch as the validation set. In total there were 90 runs in our nested cross-validation approach.
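The batching and nested cross-validation scheme can be sketched as follows; the random shuffling of participants is omitted and the integer identifiers are placeholders.

```python
# 145 participants grouped into 10 batches (9 of 15, 1 of 10). Each outer
# iteration holds one batch out as the test set; each inner iteration holds one
# of the remaining 9 batches out as the validation set, giving 10 x 9 = 90 runs.
participants = list(range(145))                      # anonymized participant ids
batches = [participants[i * 15:(i + 1) * 15] for i in range(9)]
batches.append(participants[135:])                   # final batch of 10

runs = []
for test_idx, test_batch in enumerate(batches):      # outer loop: test set
    remaining = [b for i, b in enumerate(batches) if i != test_idx]
    for val_batch in remaining:                      # inner loop: validation set
        train = [p for b in remaining if b is not val_batch for p in b]
        runs.append((test_batch, val_batch, train))

print(len(runs))  # 90
```

Splitting at the participant level (rather than the window level) keeps all windows from one person in a single fold, which avoids leakage between training and evaluation.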

There were more samples of the lifestyle class than of the other classes, making our dataset imbalanced. In the model training phase, we used weight balancing to assign higher weights to the minority class, which helped prevent our binary models from becoming biased toward the majority class. In the validation phase, we down-sampled the majority class, by randomly selecting samples, to make it equal in size to the minority class.
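Weight balancing of this kind can be sketched as follows; the class counts are illustrative, and the resulting dictionary has the form accepted by the class_weight argument of Keras's fit method.

```python
def balanced_weights(n_negative, n_positive):
    """Class weights inversely proportional to class frequency, so the minority
    class contributes as much to the loss as the majority class."""
    total = n_negative + n_positive
    return {0: total / (2 * n_negative), 1: total / (2 * n_positive)}

# e.g. 3000 lifestyle (label 1) vs 1000 non-lifestyle (label 0) windows:
# the minority class gets the larger weight.
weights = balanced_weights(1000, 3000)
print(weights)  # {0: 2.0, 1: 0.6666...}
```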

We used Python 3.7 and the Keras deep learning library. The CNN-LSTM model was trained for 50 epochs using the Adam optimizer and an early-stopping callback with a patience of 5, on an NVIDIA Tesla K80 GPU. Table 3 provides technical information about the parameters and output sizes of each layer. For reproducibility, we uploaded the code to our GitHub repository [30] with a step-by-step manual explaining how to use it.

Layer (type) Output Shape Param #
(Input Layer) [(None, 450, 3)] 0
(Conv1D) [(None, 450, 16)] 400
(Conv1D) [(None, 450, 32)] 4128
(Conv1D) [(None, 450, 64)] 16448
(LSTM) [(None, 50)] 23000
(Dense) [(None, 10)] 510
(Dense) [(None, 1)] 11
Total params: 44,497
Trainable params: 44,497
Non-trainable params: 0
Table 3: Parameters of the deep learning networks.
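The counts in Table 3 can be reproduced from the standard parameter formulas for Conv1D, LSTM, and dense layers. A kernel size of 8 (with padding that preserves the 450-sample length) is our inference from the counts, since the kernel size is not stated in the table.

```python
def conv1d_params(in_ch, filters, kernel):
    return in_ch * kernel * filters + filters          # weights + biases

def lstm_params(in_dim, units):
    return 4 * (units * (in_dim + units) + units)      # 4 gates, each with bias

def dense_params(in_dim, units):
    return in_dim * units + units

total = (conv1d_params(3, 16, 8)      # 400
         + conv1d_params(16, 32, 8)   # 4,128
         + conv1d_params(32, 64, 8)   # 16,448
         + lstm_params(64, 50)        # 23,000
         + dense_params(50, 10)       # 510
         + dense_params(10, 1))       # 11
print(total)  # 44497, matching the total in Table 3
```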

Results

Balanced classification accuracy, sensitivity, specificity, precision, F1 score, and area under the curve (AUC) were used to evaluate the performance of the classification tasks; balanced classification accuracy and F1 score were selected because of the data imbalance. Root mean square error (RMSE) was used to evaluate the regression task. The upper part of Table 4 shows the performance metrics of the classification tasks, and the lower part shows the RMSE value across all activities. Each value is a mean over the 10-fold nested cross-validation, as explained earlier.
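For reference, the threshold-based metrics reported in Table 4 and the RMSE can be defined from a binary confusion matrix as below; the counts in the example are illustrative, not taken from the study.

```python
import math

def classification_metrics(tp, fp, tn, fn):
    """Balanced accuracy, F1, sensitivity, specificity, and precision from a
    binary confusion matrix."""
    sensitivity = tp / (tp + fn)                 # recall on the positive class
    specificity = tn / (tn + fp)                 # recall on the negative class
    precision = tp / (tp + fp)
    return {
        "balanced_accuracy": (sensitivity + specificity) / 2,
        "f1": 2 * precision * sensitivity / (precision + sensitivity),
        "sensitivity": sensitivity,
        "specificity": specificity,
        "precision": precision,
    }

def rmse(y_true, y_pred):
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

m = classification_metrics(tp=80, fp=20, tn=180, fn=20)
```

Balanced accuracy averages the per-class recalls, so it is not inflated by a large majority class, which is why it was preferred here over plain accuracy.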

Activity Type
Metric Sedentary Locomotion Lifestyle
Balanced Accuracy 0.89 (0.03) 0.86 (0.05) 0.88 (0.03)
F1 score 0.82 (0.05) 0.81 (0.07) 0.95 (0.01)
AUC 0.98 (0.01) 0.95 (0.02) 0.96 (0.02)
Sensitivity (Recall) 0.80 (0.07) 0.72 (0.09) 0.97 (0.01)
Specificity 0.98 (0.01) 0.99 (0.01) 0.79 (0.06)
Precision 0.85 (0.07) 0.93 (0.06) 0.94 (0.02)
Energy Expenditure
RMSE 1.1 (0.13)

All values are mean and standard deviation.

Table 4: Performance metrics for recognizing physical activity type and estimating energy expenditure using deep learning networks. Each value is the mean and standard deviation over the 10-fold nested cross-validation.

Figure 3 shows the receiver operating characteristic (ROC) curves for all the classification tasks. Each blue curve represents a single run, and the red curve is the mean over the 10-fold nested cross-validation.

Figure 3: Receiver operating characteristic (ROC) curves for all the PA type classification tasks. Each blue curve represents a single run, and the red curve is the mean over the 10-fold nested cross-validation.

Discussion

The goal of the study was to show the effectiveness of deep learning networks in extracting relevant features from raw accelerometry data, predicting physical activity type, and estimating energy expenditure precisely from data collected at the wrist. We considered a deep learning network comprising convolutional neural networks, a long short-term memory network, and a dense neural network for the classification and regression tasks. Results demonstrated that the deep learning models were relatively accurate at classifying physical activity types: i) sedentary vs. non-sedentary, ii) locomotion vs. non-locomotion, and iii) lifestyle vs. non-lifestyle. Additionally, the models estimated overall METs with reasonable accuracy, within +/-1.1 METs, or about 3.8 ml/kg/min of oxygen.

The deep learning models showed relatively high performance in recognizing physical activity types. The lifestyle model achieved the highest F1 score (0.95), while the sedentary and locomotion models were lower, at 0.82 and 0.81, respectively. A high F1 score reflects a balance between precision and recall, which was indeed the case for lifestyle and sedentary recognition and somewhat less so for locomotion prediction. Lifestyle activities typically involve more wrist movement (e.g., ironing, trash removal) than other physical activity types, which can explain the high performance of the lifestyle model. Conversely, the results suggest that less wrist involvement can confuse the deep learning model and reduce performance, as for the sedentary and locomotion activities. Looking descriptively at the sensitivity scores, the lifestyle model appears confident in labeling lifestyle activities, neither under- nor over-estimating the classifications. For sedentary activities, however, the model was less accurate and may overestimate activity level by misclassifying them as locomotion or lifestyle.

Comparing results with the relevant literature is an intricate endeavor because of differences in the data collection environment and the variables that govern each study. Studies differ in sample size, participant demographics, the number and diversity of the physical activities tested, accelerometer type, body position, the statistical and machine learning algorithms applied, the extracted statistical features, the window size, and the metrics used to evaluate overall performance. Given these differences, results from work with a similar purpose are quite comparable to ours. For example, Ellis et al. [12] built random forest models on data collected from the dominant wrist to predict physical activity type and estimate energy expenditure. Their models were developed and tested on 40 participants (average age 35.8 years), obtaining an average F1 score of 0.75 on 8 daily activities and an RMSE of 1.0 METs. Staudenmayer et al. [35] also used random forests to estimate the energy expenditure and metabolic intensity of 19 physical activities from wrist accelerometry data; their models, derived from a small young sample of 20 participants (mean age 24.1 years), achieved an RMSE of 1.21 METs. Compared with these machine learning approaches, the deep learning results of the current work are slightly better, and the RMSE differences between studies are small, only about +/-0.20 METs.

Fewer studies have examined deep learning models for recognizing physical activity type [1, 43, 41, 31]. The performance of the models [41, 31, 43] built on the Opportunity dataset ranged between F1 scores of 0.561 and 0.915, with an accuracy of 76.83%. The models built on the Skoda dataset [43, 31] achieved accuracies between 88.19% and 89.38%. Furthermore, the accuracies of the models [1] built on WISDM and Daphnet were 98.23% and 91.5%, respectively. It should be noted that these studies used publicly available data that contain activity labels but not measures of metabolic intensity or energy expenditure (e.g., Opportunity [5] (multiple body positions, 3 participants), PAMAP2 (chest, arm, and ankle positions, 9 participants) [33], the UCI daily and sports dataset (hip position, 30 participants) [37], Skoda Mini Checkpoint (multiple body positions, 1 participant) [39], WISDM (hip position, 29 participants) [40], and the Daphnet Freezing of Gait dataset (leg and hip positions, 10 participants) [9]). They are also limited by small numbers of participants, an age range mostly below 40 years, a low number and diversity of activity types, and, most importantly, a lack of sufficient data from the wrist position. Given these substantial differences, the models presented here show relatively higher performance than other deep learning approaches. Additionally, the current model may generalize better owing to the high diversity of activities, wide age span, gender and racial diversity, and the larger number of participants enrolled.

A limitation of the current study is that data were collected in a controlled laboratory setting, which is nonetheless an appropriate first step in evaluating positional differences [21]. Collecting data in free-living settings better reflects the numerous transitions between activity types, but labeling the activity type in such settings is challenging. Another limitation is the window size, which was based on previous studies that extracted time- and frequency-domain features and may not be the most appropriate size for deep learning networks. Additional simulation work should evaluate different window sizes to optimize performance.

Conclusion

The goal of the study was to show the effectiveness of deep learning networks in extracting relevant features from raw accelerometry data, recognizing physical activity type, and estimating energy expenditure precisely from accelerometry data collected at the wrist. Deep learning networks comprising mainly convolutional neural networks (CNN) and long short-term memory (LSTM) layers demonstrated excellent performance in classifying broad activity types and estimating activity energy expenditure. As such, the spatial and temporal representations extracted by the deep learning models appear to be an effective substitute for manual feature extraction. This knowledge is beneficial for developing more accurate estimates of physical activity type and metabolic intensity for wrist accelerometers and mobile devices that use accelerometers (e.g., smartwatches) in both public and research arenas.

References

  • [1] M. A. Alsheikh, A. Selim, D. Niyato, L. Doyle, S. Lin, and H. Tan (2016) Deep activity recognition models with triaxial accelerometers. In Workshops at the Thirtieth AAAI Conference on Artificial Intelligence, Cited by: Discussion.
  • [2] F. Attal, S. Mohammed, M. Dedabrishvili, F. Chamroukhi, L. Oukhellou, and Y. Amirat (2015) Physical human activity recognition using wearable sensors. Sensors 15 (12), pp. 31314–31338. Cited by: Introduction.
  • [3] Bureau of Labor Statistics (2010-02) American time use survey - 2010 results. Note: [Online]. Available from: https://www.bls.gov/tus/ [February 1, 2020] Cited by: Participants.
  • [4] K. Brickwood, G. Watson, J. O’Brien, and A. D. Williams (2019) Consumer-based wearable activity trackers increase physical activity participation: systematic review and meta-analysis. JMIR mHealth and uHealth 7 (4), pp. e11819. Cited by: Introduction.
  • [5] R. Chavarriaga, H. Sagha, A. Calatroni, S. T. Digumarti, G. Tröster, J. d. R. Millán, and D. Roggen (2013) The opportunity challenge: a benchmark database for on-body sensor-based activity recognition. Pattern Recognition Letters 34 (15), pp. 2033–2042. Cited by: Discussion.
  • [6] G. L. C. Chia, A. Anderson, and L. A. McLean (2019) Behavior change techniques incorporated in fitness trackers: content analysis. JMIR mHealth and uHealth 7 (7), pp. e12768. Cited by: Introduction.
  • [7] D. B. Corbett, A. A. Wanigatunga, V. Valiani, E. M. Handberg, T. W. Buford, B. Brumback, R. Casanova, C. M. Janelle, and T. M. Manini (2017) Metabolic costs of daily activity in older adults (chores xl) study: design and methods. Contemporary clinical trials communications 6, pp. 1–8. Cited by: Participants.
  • [8] COSMED () COSMED-k4b2. Note: [Online]. Available from: https://www.cosmed.com/en/[February 6, 2020] Cited by: Instrumentation.
  • [9] () Daphnet freezing of gait data set. Note: [Online]. Available from: https://archive.ics.uci.edu[March 4, 2020] Cited by: Discussion.
  • [10] O. Dieu, J. Mikulovic, P. S. Fardy, G. Bui-Xuan, L. Beghin, and J. Vanhelst (2017) Physical activity using wrist-worn accelerometers: comparison of dominant and non-dominant wrist. Clinical physiology and functional imaging 37 (5), pp. 525–529. Cited by: Instrumentation.
  • [11] K. Ellis, S. Godbole, S. Marshall, G. Lanckriet, J. Staudenmayer, and J. Kerr (2014) Identifying active travel behaviors in challenging environments using gps, accelerometers, and machine learning algorithms. Frontiers in public health 2, pp. 36. Cited by: Introduction.
  • [12] K. Ellis, J. Kerr, S. Godbole, G. Lanckriet, D. Wing, and S. Marshall (2014) A random forest classifier for the prediction of energy expenditure and type of physical activity from wrist and hip accelerometers. Physiological measurement 35 (11), pp. 2191. Cited by: Discussion.
  • [13] C. François (2017) Deep learning with python. Manning Publications Company. Cited by: Deep learning Model Architecture.
  • [14] K. M. Full, J. Kerr, M. A. Grandner, A. Malhotra, K. Moran, S. Godoble, L. Natarajan, and X. Soler (2018) Validation of a physical activity accelerometer device worn on the hip and wrist against polysomnography. Sleep health 4 (2), pp. 209–216. Cited by: Introduction.
  • [15] A. Graves, A. Mohamed, and G. Hinton (2013) Speech recognition with deep recurrent neural networks. In 2013 IEEE international conference on acoustics, speech and signal processing, pp. 6645–6649. Cited by: Introduction.
  • [16] R. Guthold, G. A. Stevens, L. M. Riley, and F. C. Bull (2018) Worldwide trends in insufficient physical activity from 2001 to 2016: a pooled analysis of 358 population-based surveys with 1⋅ 9 million participants. The Lancet Global Health 6 (10), pp. e1077–e1086. Cited by: Introduction.
  • [17] S. Hochreiter (1998) The vanishing gradient problem during learning recurrent neural nets and problem solutions. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 6 (02), pp. 107–116. Cited by: Deep learning Model Architecture.
  • [18] T. Huynh and B. Schiele (2005) Analyzing features for activity recognition. In Proceedings of the 2005 joint conference on Smart objects and ambient intelligence: innovative context-aware services: usages and technologies, pp. 159–163. Cited by: Problem Formulation.
  • [19] IDC () IDC forecasts steady double-digit growth for wearables as new capabilities and use cases expand the market opportunities. Note: [Online]. Available from: https://www.idc.com/ [February 6, 2020] Cited by: Introduction.
  • [20] M. Jette, K. Sidney, and G. Blümchen (1990) Metabolic equivalents (mets) in exercise testing, exercise prescription, and evaluation of functional capacity. Clinical cardiology 13 (8), pp. 555–565. Cited by: Instrumentation.
  • [21] S. K. Keadle, K. A. Lyden, S. J. Strath, J. W. Staudenmayer, and P. S. Freedson (2019) A framework to evaluate devices that assess physical behavior. Exercise and sport sciences reviews 47 (4), pp. 206–214. Cited by: Discussion.
  • [22] J. Kerr, C. R. Marinac, K. Ellis, S. Godbole, A. Hipp, K. Glanz, J. Mitchell, F. Laden, P. James, and D. Berrigan (2017) Comparison of accelerometry methods for estimating physical activity. Medicine and science in sports and exercise 49 (3), pp. 617. Cited by: Introduction.
  • [23] M. Kheirkhahan, S. Mehta, M. Nath, A. A. Wanigatunga, D. B. Corbett, T. M. Manini, and S. Ranka (2017) A bag-of-words approach for assessing activities of daily living using wrist accelerometer data. In 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pp. 678–685. Cited by: Introduction.
  • [24] J. D. Knaggs, K. A. Larkin, and T. M. Manini (2011) Metabolic cost of daily activities and effect of mobility impairment in older adults. Journal of the American Geriatrics Society 59 (11), pp. 2118–2123. Cited by: Participants.
  • [25] A. Krause, D. P. Siewiorek, A. Smailagic, and J. Farringdon (2003) Unsupervised, dynamic identification of physiological and activity context in wearable computing.. In ISWC, Vol. 3, pp. 88. Cited by: Problem Formulation.
  • [26] A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105. Cited by: Introduction.
  • [27] S. Lai, L. Xu, K. Liu, and J. Zhao (2015) Recurrent convolutional neural networks for text classification. In Twenty-ninth AAAI conference on artificial intelligence, Cited by: Introduction.
  • [28] A. Mannini, S. S. Intille, M. Rosenberger, A. M. Sabatini, and W. Haskell (2013) Activity recognition using a single accelerometer placed at the wrist or ankle. Medicine and science in sports and exercise 45 (11), pp. 2193. Cited by: Problem Formulation.
  • [29] J. H. Migueles, C. Cadenas-Sanchez, U. Ekelund, C. D. Nyström, J. Mora-Gonzalez, M. Löf, I. Labayen, J. R. Ruiz, and F. B. Ortega (2017) Accelerometer data collection and processing criteria to assess physical activity and other outcomes: a systematic review and practical considerations. Sports medicine 47 (9), pp. 1821–1845. Cited by: Introduction.
  • [30] U. of Florida () Deep learning models code. Note: [Online]. Available from: https://github.com/ufdsat/CHORES-Analyses[March 4, 2020] Cited by: Model Training.
  • [31] F. J. Ordóñez and D. Roggen (2016) Deep convolutional and lstm recurrent neural networks for multimodal wearable activity recognition. Sensors 16 (1), pp. 115. Cited by: Discussion.
  • [32] S. Pirttikangas, K. Fujinami, and T. Nakajima (2006) Feature selection and activity recognition from wearable sensors. In International symposium on ubiquitious computing systems, pp. 516–527. Cited by: Problem Formulation.
  • [33] A. Reiss and D. Stricker (2012) Introducing a new benchmarked dataset for activity monitoring. In 2012 16th International Symposium on Wearable Computers, pp. 108–109. Cited by: Discussion.
  • [34] C. A. Ronao and S. Cho (2016) Human activity recognition with smartphone sensors using deep learning neural networks. Expert systems with applications 59, pp. 235–244. Cited by: Introduction.
  • [35] J. Staudenmayer, S. He, A. Hickey, J. Sasaki, and P. Freedson (2015) Methods to estimate aspects of physical activity and sedentary behavior from high-frequency wrist accelerometer measurements. Journal of applied physiology 119 (4), pp. 396–403. Cited by: Introduction, Discussion.
  • [36] M. Stikic, T. Huynh, K. Van Laerhoven, and B. Schiele (2008) ADL recognition based on the combination of rfid and accelerometer sensing. In 2008 second international conference on pervasive computing technologies for healthcare, pp. 258–263. Cited by: Problem Formulation.
  • [37] () UCI daily and sports dataset. Note: [Online]. Available from: https://archive.ics.uci.edu[March 4, 2020] Cited by: Discussion.
  • [38] WHO () WHO launches active: a toolkit for countries to increase physical activity and reduce noncommunicable diseases.. Note: [Online]. Available from: https://www.who.int/news-room/detail/17-10-2018-who-launches-active-a-toolkit-for-countries-to-increase-physical-activity-and-reduce-noncommunicable-diseases[March 4, 2020] Cited by: Introduction.
  • [39] () Wiki:dataset [human activity/context recognition datasets]. Note: [Online]. Available from: http://har-dataset.org/doku.php?id=wiki:dataset[March 4, 2020] Cited by: Discussion.
  • [40] () Wireless sensor data mining dataset (wisdm). Note: [Online]. Available from: http://www.cis.fordham.edu/wisdm/dataset.php [March 4, 2020] Cited by: Discussion.
  • [41] J. Yang, M. N. Nguyen, P. P. San, X. L. Li, and S. Krishnaswamy (2015) Deep convolutional neural networks on multichannel time series for human activity recognition. In Twenty-Fourth International Joint Conference on Artificial Intelligence, Cited by: Discussion.
  • [42] M. Zeng, L. T. Nguyen, B. Yu, O. J. Mengshoel, J. Zhu, P. Wu, and J. Zhang (2014) Convolutional neural networks for human activity recognition using mobile sensors. In 6th International Conference on Mobile Computing, Applications and Services, pp. 197–205. Cited by: Introduction.
  • [43] M. Zeng, L. T. Nguyen, B. Yu, O. J. Mengshoel, J. Zhu, P. Wu, and J. Zhang (2014) Convolutional neural networks for human activity recognition using mobile sensors. In 6th International Conference on Mobile Computing, Applications and Services, pp. 197–205. Cited by: Discussion.