Digit Recognition From Wrist Movements and Security Concerns with Smart Wrist Wearable IoT Devices

04/22/2020 · Lambert T. Leong, et al. · University of Hawaii

In this paper, we investigate a potential security vulnerability associated with wrist wearable devices. Hardware components on common wearable devices include an accelerometer and gyroscope, among other sensors. We demonstrate that an accelerometer and gyroscope can pick up enough unique wrist movement information to identify digits being written by a user. With a data set of 400 writing samples of either the digit zero or the digit one, we constructed a machine learning model to identify the digit being written based on the movements of the wrist. Our model's performance on an unseen test set resulted in an area under the receiver operating characteristic (AUROC) curve of 1.00. Loading our model onto our fabricated device resulted in 100% accuracy when predicting ten writing samples in real time. The model's ability to correctly identify the written digits via wrist movement and orientation changes raises security concerns. Our results imply that nefarious individuals may be able to obtain sensitive digit-based information such as social security, credit card, and medical record numbers from wrist wearable devices.


1. Introduction

Wearable smart technologies are becoming cheaper, more accessible, and thus more common. The wrist is an ideal location for wearable technology, which often takes the form of a smart watch. Smart watches afford more functionality than simply keeping track of the time and are often equipped with various hardware such as infrared sensors, accelerometers, and gyroscopes. This on-board hardware allows the user to track many personal metrics that have implications for health and productivity (Al-Eidan et al., 2018). While many features that take advantage of wrist wearable hardware output already exist, the output variety is vast and all use cases have not yet been explored. Exploration into new ways of using wearable output metrics could reveal beneficial as well as malicious use cases. In this work we investigate the potential of using wearable output metrics to capture and predict handwritten digits. An individual's wrist undergoes subtle movements and orientation changes when writing different digits (Lee and Cho, 1998; Mavrogiorgou et al., 2001). We hypothesized that these subtle wrist movements and orientation changes are unique to the digit being written and that machine learning can be used to accurately classify the written digits.

Handwritten digit recognition from wrist movement and orientation has security implications, including nefarious individuals gaining sensitive information from users wearing smart wrist devices. Sensitive information is often in the form of digits, such as social security, credit card, and medical record numbers. In addition, many wearable devices are connected to the internet and recorded data is stored in the cloud. Machine learning models that can classify handwritten digits from wrist movement and orientation could, in theory, be fed data stored in the cloud to retroactively extract sensitive user information.

The remainder of this paper is organized as follows: Section 2 reviews previous work on machine learning and handwriting recognition; Section 3 details our hardware design, experimental design, and the construction and tuning of our machine learning model; Section 4 reports our model's performance and discusses our findings; Section 5 concludes the paper with the implications of our findings for wrist wearable user security; and Section 6 outlines directions for future research.

2. Related Work

Common wrist wearables, which include the Apple Watch, Fitbit, and Samsung Galaxy Watch, contain hardware capable of capturing movement and orientation (11; 21; 8). This hardware includes accelerometers and gyroscopes, which have been shown to provide the data needed to identify fine motor tasks (Zeng et al., 2014). In fact, other works have shown that accelerometers mounted on the wrist have the sensitivity to identify tremors associated with neuro-muscular diseases such as Parkinson's (Wile et al., 2014) as well as seizures (Regalia et al., 2019). Handwriting is a fine motor task, and these works lead us to believe that accelerometer and gyroscope hardware is sufficient for measuring the movements the wrist undergoes during writing.

Wearable internet of things (IoT) devices provide a constant data stream and generate a considerable amount of data. Machine and deep learning offer many tools and techniques to analyze the vast amounts of wearable IoT data (Fernandez Molanes et al., 2018). Various research efforts aim to leverage machine learning models to make sense of these data and correlate them to particular tasks and activities related to sports performance, health, and safety (Davila et al., 2017; Regalia et al., 2019).

In our work, we focus specifically on wrist wearables and machine learning models built around the corresponding data. Data from a wrist-mounted gyroscope alone has been shown to be adequate for building a machine learning model to detect hand gestures for a novel human computer interaction (HCI) device (Han and Yoon, 2019). Several groups have explored machine learning to build models that perform writing recognition tasks from wrist wearable device output. In one instance, additional custom sensors were placed on the upper forearm and on the finger tip to capture the additional information needed to correctly classify hand gestures (Xu et al., 2015). That work was also able to identify characters written with one's index finger with an accuracy of 95%; however, the strength of the model is likely attributable more to the data coming from the finger sensor than to the wrist sensors. Our work aims to perform written digit recognition from sensors placed solely on an individual's wrist. Word-level recognition from smart watches was explored by Xia et al. (Xia et al., 2018). Their model achieved an accuracy of 48.8% on word-level recognition based on wrist movement, and they highlighted the potential security concerns raised by their results. Letter-level recognition with smart watches has also been explored (Ardüser et al., 2016). In that work, writing tasks were performed on whiteboards and audio input from an on-board microphone was used for segmentation, which helped recognition accuracy. These works assured us that sensor data from wrist wearable devices is sufficient for building machine learning models that perform written recognition tasks.

Security issues associated with IoT devices are a popular and ever growing area of research. IoT devices have been shown to be easily compromised (Lee et al., 2016), and work has been done, using machine learning, to improve IoT security and detect threats on IoT devices (Azmoodeh et al., 2018; Pajouh et al., 2019). Our work is not concerned with compromising wrist wearable IoT devices; rather, it demonstrates a nefarious use case for already available device data. An example of security exploits on readily available wearable device data can be seen in (9; L. Sly (2018)). In these articles, restricted areas such as military bases were mapped out simply by having wrist wearable users with security clearance passively walk around secure areas. The wrist wearables were not compromised in those instances, but the use of already available data (e.g., GPS coordinates) posed an alarming security vulnerability. Sensor-based attacks involving wrist wearables to capture keystrokes have also been shown to be possible (Liu et al., 2015). In another instance, Pandelea et al. showed that a machine learning model could be built using data from on-board smart watch hardware to guess the password being entered onto the device (Pandelea and Chiroiu, 2019). In that paper, they showed that pressing different keys on the smart watch corresponded to a different set of movements, and their model was able to map smart watch movements to different key inputs.

3. Methods

For the purpose of our investigation we focus on handwritten digits. More specifically, we focus only on the wrist movements associated with writing the digit zero and the digit one, and we formalize the problem, with respect to machine learning, as binary classification. Working with digits avoids issues that arise from the written differences between upper case, lower case, and cursive English alphabet characters.

3.1. Hardware Design

Accelerometers and gyroscopes are common amongst wrist wearables, and this hardware is ideal for capturing wrist movements and orientation. The accelerometer records wrist acceleration in three planes (x, y, and z), and the gyroscope captures the wrist angle, or tilt, during writing in the same three planes. We fabricated our own device equipped with an accelerometer and gyroscope, similar to those found in popular wrist wearables. Designing our own hardware allowed us to more accurately capture and label data for our experiments.

(a) The ESP32 Feather board was used to handle processing and I/O from the LSM9DS1 IMU
(b) The LSM9DS1 housed the accelerometer and gyroscope
Figure 1. Micro-controller and inertial measurement unit (IMU)

We used Adafruit's ESP32 Feather (1) micro-controller board and the LSM9DS1 inertial measurement unit (IMU) (2). The accelerometer and gyroscope are housed on the IMU, and the serial peripheral interface (SPI) protocol is used to communicate data recordings from the IMU to the ESP32 Feather board. The IMU and micro-controller were connected and assembled into a single housing. Acquiring labeled data, or labeling data after collection, can be expensive and often requires a degree of processing and cleaning. A switch was therefore added to the design of our device for the sole purpose of parsing and labeling data during collection: accelerometer and gyroscope data were recorded only while the switch was engaged, and the switch was only engaged during the writing of either of the digits. The switch allowed us to identify when writing began and ended within the accelerometer and gyroscope data streams and also allowed for immediate labeling.
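The paper does not include firmware listings, but the switch-gated labeling scheme can be illustrated with a short, hypothetical MicroPython-style sketch. The lsm9ds1 module and its read_accel/read_gyro calls are placeholders for whatever IMU driver is actually used, and the pin numbers and sampling rate are assumptions:

```python
# Hypothetical sketch of switch-gated, pre-labeled logging on the micro-controller.
# The `lsm9ds1` driver module and its methods are placeholders, not the authors' firmware.
import time
from machine import Pin, SPI
import lsm9ds1  # hypothetical IMU driver

LABEL = 0  # digit written in this collection session (set before recording)

switch = Pin(14, Pin.IN, Pin.PULL_UP)      # labeling switch, engaged only while writing
spi = SPI(1, baudrate=1000000)             # SPI bus between the ESP32 and the IMU
imu = lsm9ds1.LSM9DS1(spi)                 # hypothetical constructor

log = open("digit_{}_{}.csv".format(LABEL, time.time()), "w")
log.write("t,ax,ay,az,gx,gy,gz\n")

while True:
    if switch.value() == 0:                # record only while the switch is engaged
        ax, ay, az = imu.read_accel()      # acceleration in the x, y, z planes
        gx, gy, gz = imu.read_gyro()       # pitch angle / tilt in the x, y, z planes
        log.write("{},{},{},{},{},{},{}\n".format(
            time.ticks_ms(), ax, ay, az, gx, gy, gz))
    time.sleep_ms(10)                      # ~100 Hz sampling, an assumed rate
```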

3.2. Data Collection

We recruited participants who are right hand dominant and write with their right hand. The sensor housing was attached to the posterior side of the ulna and radius at the most distal point from the body; in other words, the housing was attached to the top of the participant's wrist, consistent with how a watch or wrist wearable is normally worn. Participants wrote out digits to fill a 10 cm by 10 cm square region. After the writing of each digit, the data was labeled with the appropriate digit and saved to a file. We collected a total of 400 writing samples, comprising 200 samples of the digit zero and 200 samples of the digit one.

3.3. Data Processing and Feature Engineering

Seven data fields, listed in Table 1, were recorded from our device during data collection. These fields include time, acceleration in three planes (x, y, z), and pitch angle in three planes (x, y, z). Different digits often take different amounts of time to write; for instance, the digit one usually takes less time to write than the digit zero. To deter our machine learning algorithm from learning only the time it takes to write a digit, we extracted features which are decoupled from time. The features we extracted were the minimum, maximum, and mean of acceleration and pitch angle in all three planes (x, y, z).

Using the acceleration data we were able to engineer velocity and displacement features. Integrating the acceleration yielded the velocity and, subsequently, integrating the velocity yielded the displacement. Velocities and displacements were calculated in all three planes. The minimum, maximum, and mean velocities for all three planes were added to the feature list, as was the total displacement in each of the three (x, y, z) planes. Lastly, we calculated the total overall displacement and added that to the new feature list. As a result we transformed our original seven fields into 31 new features, shown in Table 2, that are independent of time.

Metrics Axis Value Type Feature Count
Acceleration x,y,z 3
Pitch Angle x,y,z 3
Time Total 1
Total Features 7
Table 1. List of original features gathered from the device
Metrics Axis Value Type Feature Count
Acceleration x,y,z Minimum, Mean, Maximum 9
Pitch Angle x,y,z Minimum, Mean, Maximum 9
Velocity x,y,z Minimum, Mean, Maximum 9
Displacement x,y,z Total 4
Total Features 31
Table 2. List of engineered features. Features used in final model
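As an illustration of the feature engineering described above, the sketch below computes the 31 time-independent features from a single writing sample. The column names (t, ax, ay, az, gx, gy, gz), the millisecond time unit, and the Euclidean norm used for the overall displacement are assumptions; the authors' exact implementation is not published.

```python
# Sketch of the time-independent feature extraction (assumed column names and units).
import numpy as np
import pandas as pd

def extract_features(sample: pd.DataFrame) -> dict:
    """sample: one writing sample with columns t, ax, ay, az, gx, gy, gz."""
    feats = {}
    t = sample["t"].to_numpy() / 1000.0                    # assumed: time logged in ms
    dt = np.diff(t)
    for axis in ("x", "y", "z"):
        a = sample[f"a{axis}"].to_numpy()                  # acceleration
        g = sample[f"g{axis}"].to_numpy()                  # pitch angle
        v = np.concatenate(([0.0], np.cumsum(0.5 * (a[1:] + a[:-1]) * dt)))  # integrate accel -> velocity
        d = np.concatenate(([0.0], np.cumsum(0.5 * (v[1:] + v[:-1]) * dt)))  # integrate vel -> displacement
        for name, sig in (("accel", a), ("pitch", g), ("vel", v)):
            feats[f"{name}_{axis}_min"] = sig.min()
            feats[f"{name}_{axis}_mean"] = sig.mean()
            feats[f"{name}_{axis}_max"] = sig.max()
        feats[f"disp_{axis}_total"] = d[-1]                # net displacement per plane
    # Assumed reading of "total overall displacement": Euclidean norm of the per-plane totals.
    feats["disp_total"] = float(np.sqrt(sum(feats[f"disp_{ax}_total"] ** 2 for ax in "xyz")))
    return feats                                           # 27 + 3 + 1 = 31 features
```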

3.4. Principal Components Analysis and Class Separability Investigation

Principal components analysis (PCA) was used to evaluate the explained variance of the 31 engineered features. The data were normalized and Scikit-Learn's (Pedregosa et al., 2011) PCA module was used to perform PCA. We found that the top three principal components explain greater than 99.99% of the variance. Mapping the principal components (PCs) back to the original features revealed that the top three components correspond to the maximum pitch angle in the z plane, the displacement in the x plane, and the mean pitch angle in the z plane, respectively. Distributions for each of these three features were generated with respect to class and are presented in Figure 2.
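A minimal sketch of this analysis is shown below; X is assumed to be the 400 x 31 feature matrix, and standardization is assumed as the normalization step (the paper does not specify which scaler was used).

```python
# Sketch of the PCA explained-variance check on the 31 engineered features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X_std = StandardScaler().fit_transform(X)            # X: (n_samples, 31) feature matrix
pca = PCA().fit(X_std)

print("Top 3 PCs explain:", pca.explained_variance_ratio_[:3].sum())

# Identify the original feature that loads most heavily on each top component.
for i in range(3):
    dominant = int(np.argmax(np.abs(pca.components_[i])))
    print(f"PC{i + 1} is dominated by feature index {dominant}")
```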

(a) Distribution of maximum pitch angle in the z plane with respect to class. gz = gyroscope z plane
(b) Distribution of displacement in the x plane with respect to class. dx = displacement in the x plane
(c) Distribution of mean pitch angle in the z plane with respect to class.
Figure 2. Distribution of top three principal components with respect to class

Figure 2(a) shows the best separation of classes when compared to the x displacement and mean z pitch angle. There is still some non-negligible overlap in Figure 2(a), and other features may be needed to achieve clear class separation. Figures 2(b) and 2(c) show considerable overlap, but some separation can be seen, and including these features may help define a better decision boundary. Scatter plots were generated to investigate the separability of the two classes and potential decision boundaries; these plots are shown in Figure 3.

(a) x displacement versus maximum z pitch angle colored by class
(b) mean z pitch angle versus maximum z pitch angle colored by class
(c) x displacement versus maximum z pitch angle versus mean z pitch angle colored by class
Figure 3. Plots of top three principal components versus each other to look for separability

Figure 3(a) shows fairly good separation between the classes; however, there is some overlap in the middle where the two classes meet. Figure 3(b) also presents fairly good separation, again with overlap where the two classes meet. Good separability can therefore be seen with respect to pairs of the top three principal components. Figure 3(c) plots all three principal components against each other to see if better separability emerges in a higher dimension. Overlap of the two classes is still present; however, more separation is seen in three dimensions than in two. This is somewhat expected, as greater separability is often observed in higher dimensions, and using more features may create more separation between the two classes. It is therefore likely that more features are needed to produce a stronger model, so we chose to build a model utilizing all 31 features rather than just the top three PCs.
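Plots such as those in Figures 2 and 3 can be generated with a few lines of matplotlib. The sketch below assumes the engineered features are held in a pandas DataFrame feats with a label column and the assumed column names from the earlier feature-extraction sketch.

```python
# Sketch of class-separability plots for the three PCA-identified features.
import matplotlib.pyplot as plt

top = ["pitch_z_max", "disp_x_total", "pitch_z_mean"]   # assumed column names

# Per-feature distributions by class (cf. Figure 2).
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, feat in zip(axes, top):
    for digit in (0, 1):
        ax.hist(feats.loc[feats["label"] == digit, feat], bins=30, alpha=0.5, label=f"digit {digit}")
    ax.set_xlabel(feat)
    ax.legend()

# Pairwise scatter of two features colored by class (cf. Figure 3(a)).
plt.figure()
for digit in (0, 1):
    sub = feats[feats["label"] == digit]
    plt.scatter(sub["pitch_z_max"], sub["disp_x_total"], alpha=0.6, label=f"digit {digit}")
plt.xlabel("maximum z pitch angle")
plt.ylabel("x displacement")
plt.legend()
plt.show()
```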

3.5. Model and Hyper Parameter Tuning

The dataset of 400 writing samples was randomly split into training, validation, and test sets via a 60%, 20%, 20% split. The number of samples in each class is shown in Table 3. Our relatively modest sample size led us to first explore simpler models rather than a deep learning approach. We explored ensemble methods, which led us to extreme gradient boosting via the xgboost package and its Scikit-Learn-compatible interface. To evaluate our model choice we set the hyper parameters to the package defaults: 100 boosting stages, a learning rate of 0.1, a maximum tree depth of six, and the 'auto' tree method. The classifier was trained on the training set with all 31 features and achieved an AUROC of 88.03% on the validation set. Preliminary performance was good and we proceeded with gradient boosting as our model choice.

Dataset Class digit 0 Class digit 1
Train 122 118
Validation 42 38
Test 36 44
Total 200 200
Table 3. Breakdown of sample per class
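A sketch of this baseline is shown below, assuming feature matrices X_train, X_val and label vectors y_train, y_val, with the defaults named above written out explicitly:

```python
# Sketch of the baseline gradient-boosting classifier and its validation AUROC.
from xgboost import XGBClassifier
from sklearn.metrics import roc_auc_score

baseline = XGBClassifier(
    n_estimators=100,      # boosting stages
    learning_rate=0.1,
    max_depth=6,
    tree_method="auto",
)
baseline.fit(X_train, y_train)

val_scores = baseline.predict_proba(X_val)[:, 1]   # probability of class "digit 1"
print("Validation AUROC:", roc_auc_score(y_val, val_scores))
```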

Hyper parameters, which include the number of estimators, the learning rate, the maximum depth of a tree, and the tree construction algorithm, were optimized using an exhaustive grid search. Models were trained on different combinations of hyper parameters using five fold cross-validation. The hyper parameters and their explored ranges are presented in Table 4.

Parameter Values and ranges
n_estimators 1000, 2000, 3000, 4000, 5000
tree_algorithm hist, exact
max_depth 1, 2, 3, 4, 5, 6, 7, 8
learning rate 0.1, 0.3, 0.5
Table 4. Hyper parameters and ranges explored
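One plausible implementation of the grid search is sketched below, using xgboost's Scikit-Learn-compatible estimator with GridSearchCV; the paper's tree_algorithm parameter corresponds to tree_method in this interface, and X_train, y_train are assumed to hold the training data.

```python
# Sketch of the exhaustive grid search with five-fold cross-validation (cf. Table 4).
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

param_grid = {
    "n_estimators": [1000, 2000, 3000, 4000, 5000],
    "tree_method": ["hist", "exact"],
    "max_depth": [1, 2, 3, 4, 5, 6, 7, 8],
    "learning_rate": [0.1, 0.3, 0.5],
}

search = GridSearchCV(
    XGBClassifier(),
    param_grid,
    scoring="roc_auc",
    cv=5,            # five-fold cross-validation
    n_jobs=-1,
)
search.fit(X_train, y_train)
print("Best hyper parameters:", search.best_params_)   # cf. Table 5
```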

The best hyper parameters from the exhaustive grid search, which yielded the best AUROCs on the validation set, are shown in Table 5. The final models were retrained on the combined training and validation dataset using five fold cross-validation. The best hyper parameters from Table 5 were used in the retraining of both the final PCA model and the final full 31 feature model.

Parameter Values
n_estimators 1000
tree_algorithm hist
max_depth 1
learning rate 0.1
Table 5. Best hyper parameters used to train final models

3.6. Performance Evaluation

Model performance was mainly evaluated using the area under the receiver operating characteristic (AUROC) curve. AUROC values range between zero and one, with values closer to one indicating better performance. The test set, the 20% of the dataset never seen by the models, was used to calculate the final AUROC values. Final AUROC values from the final PCA model and the final full feature model were then compared to evaluate model performance.
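A sketch of the held-out evaluation, assuming a fitted classifier model and test arrays X_test, y_test:

```python
# Sketch of the test-set evaluation: ROC curve, AUROC, and confusion matrix.
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, roc_auc_score, confusion_matrix

probs = model.predict_proba(X_test)[:, 1]
fpr, tpr, _ = roc_curve(y_test, probs)

print("Test AUROC:", roc_auc_score(y_test, probs))
print(confusion_matrix(y_test, model.predict(X_test)))   # cf. Tables 7 and 8

plt.plot(fpr, tpr, label="model")
plt.plot([0, 1], [0, 1], "--", label="random guess")     # dashed diagonal
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```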

4. Performance Results

Two models were constructed using different numbers of features. The first model was constructed using the top three principal components (PCs) and the second model used all 31 features. The better of the two was used to construct our final model, which was ported to our device for real time evaluation.

4.1. PCA Model Results

As mentioned in Section 3.4, the top three principal components explain greater than 99.99% of the variance, and we investigated whether these features were sufficient to build a good classifier. The test set, which was held out and not seen by the model during construction and hyper parameter tuning, was presented to the model constructed with the three features, shown in Table 6, corresponding to the top three PCs. The receiver operating characteristic (ROC) curve and AUROC value are shown in Figure 4.

Metrics Axis Value Type Feature Count
Pitch Angle z Maximum 1
Displacement x Total 1
Pitch Angle z Mean 1
Total Features 3
Table 6. List of PCA reduced features
Figure 4. ROC curve and AUROC value calculated by running the test set through the model trained with only the top three principal components. The dashed line indicates random guess performance.
                  Predicted Digit 0   Predicted Digit 1
Actual Digit 0           33                   3
Actual Digit 1            8                  36
Table 7. Confusion matrix of the PCA model's performance on the held-out test set

PCA was explored as a means of reducing the dimensionality of the data; we were interested in whether a subset of the 31 features could be used to build a strong model. Training a model with just the top three PCs resulted in an AUROC of 0.87, as seen in Figure 4. Class breakdowns and predictions by class are shown in the confusion matrix in Table 7. The results indicate that these three features (maximum z pitch angle, total x displacement, and mean z pitch angle) may not contain enough information to define a clear decision boundary. Dimensionality reduction offered the potential to build our model off only one sensor input, which could have made our methods applicable to more devices. For instance, if a model built on only gyroscope input could correctly classify the written digits, then accelerometer hardware would not be needed. Our PCA model suggested the contrary: more features, and therefore both sensors, are needed.

4.2. Full Feature Model Results

We constructed a second model in an attempt to achieve better performance than the three PC model. This model was trained with all 31 engineered features, shown in Table 2, and its performance results are shown in Figure 5.

Figure 5. ROC curve and AUROC value calculated by running the test set through the model trained on all 31 features. The dashed line indicates random guess performance.
                  Predicted Digit 0   Predicted Digit 1
Actual Digit 0           36                   0
Actual Digit 1            0                  44
Table 8. Confusion matrix of the full feature model's performance on the held-out test set

Figure 5 shows an improvement in model performance when all 31 features are used to train the model. The second model achieved an AUROC of 1.00 on our test set, indicating that it predicted which digit was written without any errors. Class breakdowns and full feature model predictions by class are shown in the confusion matrix in Table 8. Although the top three PCs explain the great majority of the variance, the remaining components contain the information needed to create a good decision boundary. Separability is often easier to achieve in higher dimensions, and this appeared to be the case for our classification problem. An AUROC of 1.00 suggests that our current features are sufficient and that no further feature engineering is needed. We are confident in our model's generalizability given its performance on the held-out test set and have no reason to suspect significant overfitting.

4.3. Real Time Performance

The second, full feature model was retrained on the entire dataset with the same optimal hyper parameters to produce our final model. The final model was loaded onto our hardware device so that real-time predictions could be made. A sample set of ten written digits, five samples of the digit zero and five samples of the digit one, was evaluated in real time. Volunteers were randomly assigned to write either the digit zero or the digit one. With the device attached to their wrist, each volunteer wrote their assigned digit and the device output a prediction. Results from real-time testing are reported in Table 9.
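The paper does not describe how the model was ported to the device; the host-side step might look like the sketch below, with the on-device inference code depending on the firmware environment. X_all and y_all are assumed to hold all 400 samples and their labels, and the file name is illustrative.

```python
# Sketch: retrain the full-feature model on the entire dataset and export it for the device.
from xgboost import XGBClassifier

final_model = XGBClassifier(
    n_estimators=1000,     # best hyper parameters from Table 5
    tree_method="hist",
    max_depth=1,
    learning_rate=0.1,
)
final_model.fit(X_all, y_all)                  # all 400 samples, 31 features each
final_model.save_model("digit_model.json")     # portable model file loaded by the device
```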

                  Predicted Digit 0   Predicted Digit 1
Actual Digit 0            5                   0
Actual Digit 1            0                   5
Table 9. Confusion matrix of the final model's performance during real-time testing
Model                      Accuracy (%)   Digit 0 Precision (%)   Digit 0 Recall (%)   Digit 0 F1 (%)   Digit 1 Precision (%)   Digit 1 Recall (%)   Digit 1 F1 (%)
PCA                            86.25              80.49                  91.67               85.74              92.31                  81.82               86.75
Full Feature                  100.00             100.00                 100.00              100.00             100.00                 100.00              100.00
Real Time, Full Feature       100.00             100.00                 100.00              100.00             100.00                 100.00              100.00
Table 10. Performance metrics for the PCA, full feature, and real-time full feature models

As seen in the confusion matrix in Table 9, the final model predicted all written digits correctly, which corresponds to an AUROC of 1.00. Real-time performance reassures us of our model's generalizability and, again, does not lead us to suspect significant overfitting. Performance metrics, including accuracy, precision, recall, and F1 score, are reported for the PCA, full feature, and real-time full feature models in Table 10.
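The per-class metrics in Table 10 follow directly from the confusion matrices and can be reproduced with standard Scikit-Learn utilities (a sketch assuming label arrays y_true and y_pred):

```python
# Sketch: accuracy and per-class precision, recall, and F1 scores as reported in Table 10.
from sklearn.metrics import accuracy_score, classification_report

print("Accuracy:", accuracy_score(y_true, y_pred))
print(classification_report(y_true, y_pred, target_names=["digit 0", "digit 1"], digits=4))
```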

5. Conclusions

Our findings imply a potential security vulnerability associated with wrist wearable devices. Accelerometers and gyroscopes, which we used, are common on-board hardware in wrist wearables. We demonstrated that this hardware can capture the subtle movements and position changes of the wrist during writing. Using machine learning, we showed that the wrist movements involved in writing the digit zero are distinct from those involved in writing the digit one. As a result, a robust machine learning model was constructed that demonstrated perfect real-time prediction performance. Our results imply a plausible reality in which sensitive information can be recorded from users as they write while wearing a smart watch or other wrist wearable device. Moreover, our potential security exploit relies only on data already available from smart wrist wearables; our methods do not involve nor require compromising the wearable devices themselves. While the machine learning model we developed is simple and only performs binary classification of two written digits, it is an important first step and brings awareness to security vulnerabilities associated with wrist wearables.

6. Future Work

We hope to explore how our data and model relate to left hand dominant users. We hypothesize that, because the left hand mirrors the right, simply flipping the signs of the relevant axes will yield a working solution. More data, specific to left handed users, is needed to explore how handedness affects the generalizability of machine learning writing recognition models.

The size of our data set was modest and only contained wrist movement data for the digits zero and one. Our binary classification problem was not hard to solve, so a simpler model was sufficient, which itself raises concerns: if a simple machine learning model suffices, capturing users' private information may be a trivial task. Our current findings warrant further work to aggregate wrist movement data for the writing of all ten digits, zero through nine. Working with all ten digits presents a multi-class classification problem; while it may be more difficult, more powerful tools exist that were not explored in our work. In fact, deep neural networks could potentially handle classifying wrist movements for all ten digits fairly easily. More work is needed to explore these concepts to improve and maintain security for the vast variety of wearable IoT devices.

References

  • [1] Adafruit ESP32 Feather board datasheet.
  • [2] Adafruit LSM9DS1 accelerometer, gyro, magnetometer 9-DOF breakout.
  • R. Al-Eidan, H. Al-Khalifa, and A. Malik Al-Salman (2018) A review of wrist-worn wearable: sensors, models, and challenges. Journal of Sensors 2018, pp. 1–20.
  • L. Ardüser, P. Bissig, P. Brandes, and R. Wattenhofer (2016) Recognizing text using motion data from a smartwatch. In 2016 IEEE International Conference on Pervasive Computing and Communication Workshops (PerCom Workshops), pp. 1–6.
  • A. Azmoodeh, A. Dehghantanha, M. Conti, and K. R. Choo (2018) Detecting crypto-ransomware in IoT networks based on energy consumption footprint. Journal of Ambient Intelligence and Humanized Computing 9 (4), pp. 1141–1152.
  • J. C. Davila, A. Cretu, and M. Zaremba (2017) Wearable sensor data classification for human activity recognition based on an iterative learning framework. MDPI.
  • R. Fernandez Molanes, K. Amarasinghe, J. Rodriguez-Andina, and M. Manic (2018) Deep learning and reconfigurable platforms in the internet of things: challenges and opportunities in algorithms and hardware. IEEE Industrial Electronics Magazine 12 (2), pp. 36–49.
  • [8] Fitbit development: accelerometer API.
  • [9] (2018) Fitness app Strava lights up staff at military bases. BBC.
  • H. Han and S. W. Yoon (2019) Gyroscope-based continuous human hand gesture recognition for multi-modal wearable input device for human machine interaction. Sensors 19 (11).
  • [11] Apple Inc. Human interface guidelines.
  • D. Lee and H. Cho (1998) The beta-velocity model for simulating handwritten Korean scripts. In Electronic Publishing, Artistic Imaging, and Digital Typography, R. D. Hersch, J. André, and H. Brown (Eds.), Berlin, Heidelberg, pp. 252–264.
  • M. Lee, K. Lee, J. Shim, S. Cho, and J. Choi (2016) Security threat on wearable services: empirical study using a commercial smartband. In 2016 IEEE International Conference on Consumer Electronics-Asia (ICCE-Asia), pp. 1–5.
  • X. Liu, Z. Zhou, W. Diao, Z. Li, and K. Zhang (2015) When good becomes evil: keystroke inference with smartwatch. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security (CCS '15), New York, NY, USA, pp. 1273–1285.
  • P. Mavrogiorgou, R. Mergl, P. Tigges, J. El Husseini, A. Schröter, G. Juckel, M. Zaudig, and U. Hegerl (2001) Kinematic analysis of handwriting movements in patients with obsessive-compulsive disorder. Journal of Neurology, Neurosurgery & Psychiatry 70 (5), pp. 605–612.
  • H. H. Pajouh, R. Javidan, R. Khayami, A. Dehghantanha, and K. R. Choo (2019) A two-layer dimension reduction and two-tier classification model for anomaly-based intrusion detection in IoT backbone networks. IEEE Transactions on Emerging Topics in Computing 7 (2), pp. 314–323.
  • A. Pandelea and M. Chiroiu (2019) Password guessing using machine learning on wearables. In 2019 22nd International Conference on Control Systems and Computer Science (CSCS), pp. 304–311.
  • F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay (2011) Scikit-learn: machine learning in Python. Journal of Machine Learning Research 12, pp. 2825–2830.
  • G. Regalia, F. Onorati, M. Lai, C. Caborni, and R. W. Picard (2019) Multimodal wrist-worn devices for seizure detection and advancing research: focus on the Empatica wristbands. Epilepsy Research 153, pp. 79–82.
  • L. Sly (2018) U.S. soldiers are revealing sensitive and dangerous information by jogging. The Washington Post.
  • [21] Specifications — Samsung Galaxy Watch.
  • D. J. Wile, R. Ranawaya, and Z. H. T. Kiss (2014) Smart watch accelerometry for analysis and diagnosis of tremor. Journal of Neuroscience Methods 230, pp. 1–4.
  • Q. Xia, F. Hong, Y. Feng, and Z. Guo (2018) MotionHacker: motion sensor based eavesdropping on handwriting via smartwatch. In IEEE INFOCOM 2018 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), pp. 468–473.
  • C. Xu, P. H. Pathak, and P. Mohapatra (2015) Finger-writing with smartwatch: a case for finger and hand gesture recognition using smartwatch. In Proceedings of the 16th International Workshop on Mobile Computing Systems and Applications (HotMobile '15), New York, NY, USA, pp. 9–14.
  • Y. Zeng, P. H. Pathak, C. Xu, and P. Mohapatra (2014) Your AP knows how you move: fine-grained device motion recognition through WiFi. In Proceedings of the 1st ACM Workshop on Hot Topics in Wireless (HotWireless '14), New York, NY, USA, pp. 49–54.