Providing Confidential Cloud-based Fall Detection from Remote Sensor Data Using Multi-Party Computation

by Pradip Mainali, et al.

Fall detection systems are concerned with rapidly detecting the occurrence of falls from elderly and disabled users using data from a body-worn inertial measurement unit (IMU), which is typically used in conjunction with machine learning-based classification. Such systems, however, necessitate the collection of high-resolution measurements that can violate users' privacy, such as revealing their gait, activities of daily living (ADLs), and relative position using dead reckoning. In this paper, for the first time, we present the design, implementation and evaluation of applying multi-party computation (MPC) to IMU-based fall detection for assuring the confidentiality of device measurements. The system is evaluated in a cloud-based setting that precludes parties from learning the underlying data using three parties deployed in geographically disparate locations in three cloud configurations. Using a publicly-available dataset comprising fall data from real-world users, we explore the applicability of derivative-based features to mitigate the complexity of MPC-based operations in a state-of-the-art fall detection system. We demonstrate that MPC-based fall detection from IMU measurements is both feasible and practical, executing in 365.2 milliseconds, which falls well below the required time window for on-device data acquisition (750ms).








1. Introduction

Falls are the leading cause of fatal injury and the most common cause of nonfatal, trauma-related hospital admissions among older adults, accounting for 20–30% of mild-to-severe injuries and 40% of injury-related fatalities (Musci et al., 2018). The annual medical costs attributable to fatal and nonfatal falls, meanwhile, were estimated at $50bn (USD) in 2015 (Florence et al., 2018). Fall detection systems are concerned with automatically detecting the occurrence of falls within the homes of elderly and disabled users, with the aim of minimising response times to potential injuries and facilitating independent living (Mubashir et al., 2013). These systems typically involve the collection of data, such as accelerometer data from a body-worn device (Mauldin et al., 2018; Liu and Cheng, 2012; Santoyo-Ramón et al., 2018; Doukas et al., 2007; Yavuz et al., 2010; Albert et al., 2012; Luštrek and Kaluža, 2009), video imagery from cameras in the home (Zhang et al., 2012; Mastorakis and Makris, 2014; Kwolek and Kepski, 2014; Edgcomb and Vahid, 2012), and proximity sensors in ceilings (Tao et al., 2012), which is classified with respect to a dataset of known fall and non-fall data using machine learning.

The deployment challenges surrounding fall detection systems have, to date, remained largely out-of-scope in current work, which implicitly assumes a local deployment or focuses solely on classification performance using off-line analyses. Recent work has touted the benefits of a cloud-based, software-as-a-service (SaaS) approach for mobility-based sensing systems, where applications and their underlying platform can be updated, upgraded, monitored and secured with greater flexibility (Doukas and Maglogiannis, 2011; Fortino et al., 2014; Gravina et al., 2017). This is important in managing large numbers of remote devices, where patch penetration remains a significant challenge (Palani et al., 2016). Indeed, the failure to update all devices concerned could lead to potentially unsafe misclassification errors due to accuracy discrepancies between implementations. This principle of algorithm agility provides greater flexibility in changing the underlying classification algorithm in a cloud-based scenario without the penetration issues of patching and updating large numbers of devices in the field. Another benefit lies in the separation of the platform development, e.g. device hardware, firmware and software, from the algorithm implementation itself. In this paradigm, individual device OEMs (Original Equipment Manufacturers) are relinquished from the responsibility of implementing and maintaining the detection algorithm; rather, this can be performed by a dedicated third party with the potential of supporting devices from multiple OEMs in a service-oriented architecture.

A myriad of security and privacy concerns remain, however, regarding large volumes of data being harvested, used or, indeed, misused by SaaS providers (Zhang et al., 2014). At worst, sensor measurements collected from users’ devices could be exploited to infer their activity patterns and whereabouts, with major privacy implications. This is compounded by the potential risk of the disclosure of plaintext measurements following the exploitation of a vulnerable cloud service. A large corpus of work has already demonstrated the effectiveness of identifying users based on their gait from accelerometer and other inertial measurement unit (IMU) data (Gafurov et al., 2007; Thang et al., 2012; Mantyjarvi et al., 2005; Juefei-Xu et al., 2012; Gafurov, 2007); their position using dead reckoning (Pratama et al., 2012; Kang and Han, 2015; Wang et al., 2018); and determining activities of daily living (ADLs), such as whether the user is walking, cycling, sleeping, and sitting (Stikic et al., 2008; Hong et al., 2010). Existing work on privacy-enhancing fall detection has focused on non-cryptographic approaches for video-based methods, such as image blurring, silhouetting and foreground extraction techniques, which still reveal significant information about users, such as their presence. Moreover, these approaches are not generalisable to a large portion of fall detection proposals using time-series IMU measurements from body-worn devices, such as a smartphone or smartwatch (Mauldin et al., 2018; Santoyo-Ramón et al., 2018; Cao et al., 2012; Vilarinho et al., 2015; Albert et al., 2012).

A promising solution is multi-party computation (MPC), where a measurement is first split into shares using a secure secret sharing scheme and distributed to mutually distrusting parties, after which functions can be jointly computed without any party revealing its inputs to another. Existing work has already shown that machine learning-based inferencing, upon which most fall detection systems rely, can be performed efficiently using MPC without disclosing the input data. In particular, this has been examined for image classification using a Support Vector Machine (SVM) classifier in (Makri et al., 2017), and for remote rehabilitation treatment classification using a decision tree (ID3) upon patient data (de Hoogh et al., 2014). In this work, we present the implementation and evaluation of the first system for MPC-based fall detection from IMU device data for preserving measurement confidentiality in a cloud-based deployment. We evaluate the system, presented at a high level in Figure 1, using an off-the-shelf device and three MPC parties deployed in a public cloud service, which perform the requisite feature extraction and classification without learning the contents of IMU measurements received from the remote device.

Figure 1. High-level workflow for MPC-based fall detection.

This work presents the following contributions:

  • The first framework for performing privacy-enhancing fall detection from mobile sensor measurements using MPC. The proposed framework demonstrates the use of MPC to preserve the confidentiality of measurements in a cloud-based environment, while retaining computability for feature extraction and classification using widely-used machine learning algorithms.

  • A two-part evaluation with experiments conducted using a publicly-available fall detection dataset. The results show the framework complements existing approaches with state-of-the-art accuracy, while executing in under one second for feature extraction and classification of IMU measurements for detecting falls using MPC-based operations.

1.1. Document Structure

This paper proceeds with a general review of fall detection systems in Section 2, focussing on those based on body-worn sensors, which is followed by a multitude of privacy challenges associated with existing systems. Next, in Section 3, we present the assumptions and threat model in greater detail, before introducing and motivating the use of MPC as a solution to the aforementioned challenges. This section also develops the design of the proposed framework after which, in Section 4, the implementation details are discussed. Section 5 presents the evaluation of the proposed framework, including a discussion of the methodology and experimental results in the form of both classification and computational performance. Lastly, Section 6 concludes our work, including a discussion of future research directions.

2. Fall Detection

Fall detection systems fall broadly into two categories: (1) those based on device data from the ambient environment surrounding the user; and (2) those based on body-worn devices, such as smartwatches. We now describe prominent schemes and their operation. This section is not an exhaustive analysis of current proposals; for this, the reader is referred to Mubashir et al. (Mubashir et al., 2013) and Delahoz et al. (Delahoz and Labrador, 2014). Rather, we aim to summarise their salient features and results.

Privacy-enhancing fall detection has focused almost entirely on non-cryptographic techniques from ambient-based modalities, such as silhouetting, blurring and foreground masking from camera data. Mastorakis and Makris (Mastorakis and Makris, 2014) and Zhang et al. (Zhang et al., 2012) use RGBD data from a Microsoft Kinect unit to detect five activities of daily living (ADLs), where privacy preservation is ostensibly achieved using the depth values and foreground mask of the RGB data, which occludes users’ facial features. Tao et al. (Tao et al., 2012) study the use of a ceiling-based sensor network comprising infrared cameras for fall detection in a home environment. The sensors output binary values to indicate the existence of persons underneath, which are used to model a map of the user’s home. Edgcomb and Vahid (Edgcomb and Vahid, 2012) explore four techniques for privacy-enhancing video-based fall detection: blurring (of the user), silhouetting, and covering the user with a graphical box and oval. Raw video imagery is first taken from a stationary in-home camera, before applying one of the techniques.

However, the use of ambient-based devices has inherent issues associated with detecting falls in occluded areas; installing multiple cameras to cover all possible locations in the user's living environment (bathroom, bedroom, kitchen, and so on) may impose significant cost pertaining to unit installation and maintenance. Another paradigm employs devices likely to be already owned and carried by the user, such as a smartwatch or smartphone. To the best of our knowledge, however, no proposals investigate privacy-enhancing fall detection from body-worn devices (BWDs). As such, we provide a general review of highly-cited and state-of-the-art work from the literature.

Mauldin et al. (Mauldin et al., 2018) present SmartFall, which uses accelerometer data sampled at 31.25Hz from a user-worn smartwatch to detect falls. Watch data is streamed to the user's smartphone, which performs the feature extraction using the vector magnitude of the accelerometer data and the difference between the maximum and minimum magnitudes over a 750ms sliding window (Equations 1 and 2):

    $m_i = \sqrt{x_i^2 + y_i^2 + z_i^2}$    (1)

    $\Delta m = \max_i(m_i) - \min_i(m_i)$    (2)

Naïve Bayes, Support Vector Machine (SVM) and Recurrent Neural Network (RNN) classifiers are evaluated using labelled data from seven volunteers between 21–55 years old, alongside the Farseeing dataset (Mellone et al., 2012) with data from 2,000 participants. Results of 0.37–0.79 precision, 0.55–1.0 recall, and 0.65–0.99 accuracy are presented, depending on the chosen dataset and classifier.


Liu and Cheng (Liu and Cheng, 2012) explore the use of an IMU placed at the waist of users, collecting accelerometer readings at 200Hz. The vector magnitude is then computed, along with the fast changed vector, vertical acceleration, and posture angle between the vertical acceleration and gravity over a 0.1s sliding window. The features are inputted to an SVM classifier with labelled fall data from a set of 15 volunteers, who performed 10 simulated falls and 11 ADLs repeated 10 times. The proposal yields 98.38% accuracy, 97.40% recall, and 99.27% precision.

Santoyo-Ramón et al. (Santoyo-Ramón et al., 2018) employ IMUs attached to the user's chest, waist, wrist and thigh, which are aggregated into 15-second segments by the user's smartphone. A sliding window of 0.5s is used, over which the fast changed vector of the accelerometer data is computed alongside the magnitudes' mean, standard deviation and mean absolute distance, and the mean rotation angle and mean module of the components. These seven features are inputted to SVM, k-Nearest Neighbour, Naïve Bayes and Decision Tree classifiers, evaluated using data from 19 subjects who simulated 746 ADLs and falls. The geometric mean of precision and recall is used as the performance metric, with results of 0.614–0.999 depending on the chosen algorithm and sensor combination.

Doukas et al. (Doukas et al., 2007) investigate the use of an accelerometer attached to a user's foot for binary fall detection. The sensor data is transmitted to a receiving device, e.g. a laptop, where it is normalised and inputted to an SVM classifier in raw form after applying a Kalman filter. The evaluation reports a classification accuracy of 98.2% using a limited set of labelled data from two volunteers.

Cao et al. (Cao et al., 2012) present E-FallD, using the accelerometer of a single Android smartphone. The magnitude of the accelerometer components is computed and a global threshold is determined for identifying a fall based on the user’s acceleration. The work also investigates customised thresholds from the user’s gender, age and body mass index (BMI). The authors recruit 20 participants of varying gender, age and BMI, who perform a total of 400 falls and 1200 ADLs. The approach yields results of 86.75% precision and 85.5% recall using global thresholds, and 92.75% precision and 86.75% recall with customised thresholds.

Similarly, Vilarinho et al. (Vilarinho et al., 2015) examine the use of accelerometer data from an Android-based smartphone and smartwatch. In this work, the authors derive a rule-based system for threshold-based detection using the vector magnitude, fall index, and absolute vertical acceleration as features over a 0.8s sliding window. The approach is evaluated using data from three participants, who each performed 12 falls and 7 ADLs, with a 0.68 classification accuracy, 0.78 precision and 0.63 recall reported.

Yavuz et al. (Yavuz et al., 2010) propose a fall detection system based on accelerometer values from an Android smartphone. In contrast to previous proposals, the work functions solely on extracting frequency domain features, namely the wavelet coefficients after applying the discrete wavelet transform. A threshold-based approach is used to test the coefficients generated from the phone against those of previously observed falls. 100 fall sequences from five volunteers were collected to evaluate the system, with results of 46-95% precision and 88-90% recall.

Albert et al. (Albert et al., 2012) use accelerometer values from an LG G1 Android smartphone at the user's pelvic region. 178 features are subsequently extracted from the time-series data, divided into nine categories: moments, e.g. kurtosis and skew; moments between successive samples; smoothed root mean squares; extremities, e.g. min and max; histogram values; Fourier components; mean acceleration magnitude; and cross products. The features are inputted to five classifiers (SVM, Logistic Regression, Naïve Bayes, Decision Tree and kNN), which are trained using data from 18 simulated falls from 15 subjects, resulting in 98% classification accuracy.

Musci et al. (Musci et al., 2018) evaluate the use of recurrent neural networks (RNNs) and long short-term memory units (LSTMs). The authors propose a custom deep learning architecture tailored for ternary fall detection, which classifies whether a fall has occurred, is about to occur, or neither. The architecture is evaluated using the SisFall dataset from (Sucerquia et al., 2018), comprising accelerometer and gyroscope measurements from a smartphone and custom IMU attached to the waists of 38 participants. The authors augment the dataset with their own labels corresponding to the three aforementioned states. A confusion matrix of the three classes reports 92.86–97.16% classification accuracy following hyperparameter optimisation.

Luštrek and Kaluža (Luštrek and Kaluža, 2009) explore the use of a body area network (BAN) comprising 12 IMUs at the user's shoulders, wrists, hips, knees and ankles. Features are collected regarding the coordinates of units in a body coordinate system, their velocities, absolute distances, and relative angles. The features are used as input to seven machine learning algorithms (SVM, DT, kNN, NB, Random Forest, Adaboost, and bagging), implemented using the Weka library. Three participants were recruited, performing a total of 45 activities: 15 falls and 10 each of lying down, sitting down and walking ADLs. 73.4–96.3% classification accuracy is reported depending on the algorithm used, with SVM yielding the best case.

2.1. Discussion

Generally, fall detection systems rely upon traditional supervised learning algorithms and, more recently, deep neural networks (LSTMs and RNNs), trained over labelled fall data collected a priori. The majority of existing work models binary classification, i.e. 'fall' or 'not fall', while other work also models whether a fall may be about to occur, as well as classification of general ADLs, such as sitting, walking, and lying down. Additionally, the generation of global models that generalise to all users, rather than models for individuals, is a common facet of current work.

Figure 2. Overview of IMU-based binary fall detection.

Accelerometer measurements are used ubiquitously for BWD-based fall detection (illustrated in Figure 2 for the binary case), from which a variety of features are extracted prior to classification. Most work focuses on time-domain features, such as the vector magnitude, arithmetic mean, standard deviation, and maximum and minimum values. The choice of algorithm remains varied, although SVMs tend to exhibit the best performance with 0.9838 (Liu and Cheng, 2012), 0.982 (Doukas et al., 2007) and 0.73–0.99 (Mauldin et al., 2018) accuracy. Recent work (Mauldin et al., 2018; Musci et al., 2018) has begun to explore the application of deep learning using LSTMs and RNNs with promising results, 0.9286–0.9716 (Musci et al., 2018) and 0.85–0.99 (Mauldin et al., 2018) accuracy, but work in the area remains limited. Rather than using machine learning for the discovery of optimal decision boundaries, another widely-used approach has been the use of manually-set threshold values, which are established experimentally. Proponents of this approach have touted faster classification as an advantage over machine learning-based systems, and hence suitability for more constrained devices; however, such approaches generally yield higher error rates and over more limited datasets, e.g. five (Yavuz et al., 2010) and three (Vilarinho et al., 2015) participants. We present a summary of related BWD-based fall detection schemes in Table 1.

Proposal | Algorithms | Error Rates
SmartFall (Mauldin et al., 2018) | NB, SVM, RNN | 0.37–0.79 (P), 0.55–1.0 (R), 0.65–0.99 (A)
Liu & Cheng (Liu and Cheng, 2012) | SVM | 0.9927 (P), 0.9740 (R), 0.9838 (A)
S-R et al. (Santoyo-Ramón et al., 2018) | SVM, kNN, NB, DT | 0.614–0.999 (GMPR)
Doukas et al. (Doukas et al., 2007) | SVM | 0.982 (A)
E-FallD (Cao et al., 2012) | Thresholds | GT: 0.8675 (P), 0.855 (R); CT: 0.9275 (P), 0.8675 (R)
Vilarinho et al. (Vilarinho et al., 2015) | Thresholds | 0.78 (P), 0.63 (R), 0.68 (A)
Yavuz et al. (Yavuz et al., 2010) | Thresholds | 0.88–0.90 (P), 0.46–0.95 (R)
Albert et al. (Albert et al., 2012) | SVM, LR, NB, DT, kNN | 0.982 (A)
Musci et al. (Musci et al., 2018) | RNN, LSTM | 0.9286–0.9716 (A)
Luštrek & Kaluža (Luštrek and Kaluža, 2009) | SVM, DT, kNN, NB, RF, Adaboost, bagging | 0.732–0.963 (A)

  • P: Precision, R: Recall, A: Classification accuracy, GMPR: Geometric mean of precision and recall, GT: Global thresholds, CT: Customised thresholds from personal physiologies, BAN: Body area network.

Table 1. Comparison of BWD-based fall detection systems.

3. Privacy-Enhancing BWD-Based Fall Detection

To the best of our knowledge, the issue of privacy protection is yet to be explored for BWD-based fall detection systems, despite some privacy concerns facing such systems. Accelerometers, used in the bulk of current proposals, measure fine-grained movements at high frequency rates (typically >30Hz) on commodity devices, namely smartphones and smartwatches. A wealth of literature exists surrounding the identification of users based on their gait using accelerometer measurements alone from such a device (Gafurov et al., 2007; Thang et al., 2012; Mantyjarvi et al., 2005; Juefei-Xu et al., 2012; Gafurov, 2007). Furthermore, we are also aware of accelerometer and other IMU measurements from commodity devices being used for distinguishing non-fall ADLs (Stikic et al., 2008; Hong et al., 2010), and position tracking using dead reckoning (Pratama et al., 2012; Kang and Han, 2015; Wang et al., 2018), all of which may violate users' privacy.

3.1. Applying Multi-Party Computation (MPC)

We observe that MPC can be applied to protect IMU measurements, extract features, and perform fall classification in a privacy-enhancing way with strong guarantees regarding data secrecy. In this work, we tackle the base case of MPC using arithmetic secret sharing from Shamir’s secret sharing scheme (SSSS) (Shamir, 1979), which is briefly described as follows.

3.1.1. Shamir’s Secret Sharing Scheme

SSSS divides a secret, $s$, into $n$ shares, such that knowing any $t$ or more shares renders $s$ easy to compute, while knowing fewer than $t$ shares leaves $s$ undetermined. The scheme relies on the fact that $t$ points are required to uniquely determine a polynomial of degree $t-1$, which is defined over a finite field $\mathbb{Z}_p$. Next, $t-1$ positive integers, $a_1, \ldots, a_{t-1}$, are chosen with $a_i < p$, with the secret assigned $a_0 = s$. The polynomial $f(x) = a_0 + a_1 x + \cdots + a_{t-1} x^{t-1}$ is then built, after which $n$ points are constructed from it, e.g. using $i = 1, \ldots, n$ to retrieve the pairs $(i, f(i))$. Any subset of $t$ distinct pairs can be used to recover the constant term, $a_0$, i.e. the secret, which is found using Lagrange interpolation.
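To make the scheme concrete, the following toy sketch (ours, not the paper's implementation) splits a secret over a small prime field and recovers it with Lagrange interpolation; the choice of prime and the evaluation points 1..n are illustrative assumptions.

```python
import random

# Toy sketch of Shamir's (t, n) secret sharing over a prime field.
PRIME = 2**61 - 1  # a Mersenne prime defining the field Z_p (illustrative choice)

def split(secret, t, n):
    """Split `secret` into n shares; any t of them suffice to recover it."""
    # Random polynomial f of degree t-1 with f(0) = secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Recover f(0) from t distinct points via Lagrange interpolation."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % PRIME          # numerator: prod of (0 - x_m)
                den = den * (xj - xm) % PRIME      # denominator: prod of (x_j - x_m)
        # Modular inverse of the denominator via Fermat's little theorem.
        secret = (secret + yj * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

For instance, `split(s, 2, 3)` produces the three shares distributed to the cloud parties in our setting, and any two of them reconstruct $s$.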

3.1.2. MPC from Arithmetic Secret Sharing

An important observation is that SSSS allows each party to locally compute linear combinations of secrets and public values. Firstly, let us define $[x]_i$ as party $P_i$'s share of a data item, $x$. SSSS allows parties to locally compute the following arithmetic operations over their shares (Chen, 2012):

  • Addition ($[z] = [x] + [y]$): Each party can compute a new share, $[z]_i$, that is the addition of two other shares, i.e. $[z]_i = [x]_i + [y]_i$.

  • Addition of a secret and public value ($[z] = [x] + c$): Party $P_i$ can compute a new share from the addition of a held share and a public value $c$; that is, $[z]_i = [x]_i + c$.

  • Multiplication of a secret and public value ($[z] = c \cdot [x]$): $P_i$ can compute a new share as the multiplication of a held share with a public value, $[z]_i = c \cdot [x]_i$.

Constructing a new share based on the multiplication of two others, $[z] = [x] \cdot [y]$, is less straightforward, however: given that SSSS uses polynomials of degree $t-1$, the resulting polynomial will have degree $2(t-1)$. A method for computing a $(t-1)$-degree polynomial that represents $x \cdot y$ was presented in (Ben-Or et al., 1988). As opposed to the entirely local computation for the addition of shares and the addition and multiplication of a share with a constant value, multiplying shares necessitates one round of communication. While many functions can be computed using only these operations, they preclude integer comparisons, e.g. $<$, $\leq$, $>$ and $\geq$, and Boolean operations, e.g. AND and OR, which require dedicated protocols. The reader is referred to (Chen, 2012) for a comprehensive description and analysis of these protocols; the differences in asymptotic communication complexity are reproduced from (Chen, 2012) in Table 2. For completeness, we employ 48-bit floating point numbers by default in this work, corresponding to 100 rounds of inter-party communication for comparison-based MPC operations.

Operation | Cost (in rounds)
Multiplication | 1
Inner Product | 1
Inverse | 1
Random element | 1
Random bit | 3
Equality ($=$) | 2

  • $n$ denotes the number of operands and $k$ the bound within which the operands lie.

Table 2. Communication cost for MPC functions using arithmetic secret sharing (Chen, 2012).
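The purely local nature of the three linear operations above can be sketched as follows. This is our illustration, using a degree-1 polynomial (i.e. $t = 2$) and an arbitrary field prime; it shows that pointwise arithmetic on Shamir shares yields valid shares of the corresponding results.

```python
import random

PRIME = 2**61 - 1  # toy prime field; illustrative only

def share(secret, n=3):
    """Degree-1 Shamir sharing: f(x) = secret + a*x over Z_p."""
    a = random.randrange(PRIME)
    return [(i, (secret + a * i) % PRIME) for i in range(1, n + 1)]

def open2(s1, s2):
    """Reconstruct f(0) from two points of a degree-1 polynomial."""
    (x1, y1), (x2, y2) = s1, s2
    slope = (y2 - y1) * pow(x2 - x1, PRIME - 2, PRIME) % PRIME
    return (y1 - slope * x1) % PRIME

x_sh = share(20)
y_sh = share(22)
c = 10

# [x + y]: each party adds its two shares locally.
sum_sh = [(i, (xs + ys) % PRIME) for (i, xs), (_, ys) in zip(x_sh, y_sh)]
# [x + c]: each party adds the public constant to its share locally.
addc_sh = [(i, (xs + c) % PRIME) for i, xs in x_sh]
# [c * x]: each party multiplies its share by the public constant locally.
mulc_sh = [(i, c * xs % PRIME) for i, xs in x_sh]
```

Opening any two of the derived shares yields 42, 30 and 200 respectively; no communication between parties was needed, in contrast to share-by-share multiplication.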

3.2. Assumptions and Threat Model

An outsourcing model is assumed whereby a body-worn device streams IMU measurements to a cloud-based service that performs the feature extraction and classification algorithms for fall detection. The focus is on binary classification for inferring the occurrence of a fall using a globally-trained model in line with the bulk of existing work described in Section 2. We divide the entities involved into two categories.

3.2.1. Source device

We assume the existence of a body-worn device (a smartphone in the user's pocket, a smartwatch, a sensor placed on the user's belt, or otherwise) that possesses an IMU. The device is also assumed to possess the ability to split measurements into shares using SSSS and subsequently stream them to an Internet-facing service over a secure channel, such as TLS using a pre-installed certificate. In this work, the IMU measurements themselves are considered to be trusted; the focus is on the unauthorised disclosure of measurements after departing the device over a network to a cloud-based service.

3.2.2. Cloud service

The cloud service comprises the software components (database instance, REST APIs, and so on) for receiving, storing and classifying falls from IMU sensor data on a public cloud platform. The primary threat is the exfiltration and disclosure of raw IMU measurements by a remote attacker, such as from an unsecured, public-facing database, e.g. an AWS S3 bucket, which could be used to model and infer users' general ADLs, position, or gait for user identification. This work tackles the base case for MPC where parties operate under an honest-but-curious model, whereby the protocol specification is executed as intended. We also assume that the output, i.e. whether or not a fall occurred, is learned by the cloud services in order to contact an emergency number or provide alternative remediation. We note that some information may be revealed by the communication between the device and the cloud service, such as the absence of protected IMU data at certain times of day, e.g. at night, when the user deactivates their smartphone or smartwatch before sleeping. However, this is orthogonal to the ultimate goal of this work: preventing either the service or an external adversary from learning additional information from IMU measurements beyond that intended for detecting falls.

3.3. Workflow

The principal stages of the proposed system architecture were illustrated previously in Figure 1 and are described as follows:

  1. Data acquisition. The collection of tri-axial IMU data on a master device, e.g. smartphone, at an appropriate sampling frequency for fall detection. This may be unimodal input, i.e. only accelerometer (Liu and Cheng, 2012; Mauldin et al., 2018; Cao et al., 2012; Doukas et al., 2007; Yavuz et al., 2010); multi-modal using, for example, accelerometer and gyroscope together (Santoyo-Ramón et al., 2018); or aggregating measurements from multiple sensors in a body-area network (Luštrek and Kaluža, 2009).

  2. Secret sharing. Given a tri-axial sample from a given IMU modality, $(x, y, z)$, the master device splits each component into $n$ shares using SSSS. In this work, we set $n$ equal to the number of authorised communicating parties, while the reconstruction threshold is fixed such that no single party holds sufficient shares to recover a component. In short, IMU samples are split as follows:

    $\begin{pmatrix} [x]_1 & [x]_2 & \cdots & [x]_n \\ [y]_1 & [y]_2 & \cdots & [y]_n \\ [z]_1 & [z]_2 & \cdots & [z]_n \end{pmatrix}$

    Where the rows comprise the individual shares of a particular component, while each column represents one party's share of all components.

  3. Share distribution. The master device transmits the column elements to each cloud service instance individually; that is, $[x]_1$, $[y]_1$ and $[z]_1$ are transmitted to party $P_1$; $[x]_2$, $[y]_2$ and $[z]_2$ to $P_2$; and so on. By sharing each sample independently, no single cloud-based party may recover the underlying IMU samples, assuming the reconstruction threshold above.

  4. MPC-based inferencing. The parties use arithmetic operations upon the received shares to perform fall detection feature extraction, such as computing the vector magnitude of IMU samples, and subsequent classification using a desired machine learning classifier, e.g. a support vector machine (SVM), logistic regression (LR), or naïve Bayes (NB).

  5. Classification decision. The resulting label is used to denote the occurrence of a fall in a binary fashion. This value may be used to raise an alarm or telephone an emergency contact number.
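The sharing and distribution steps above can be sketched as follows. The fixed-point scaling factor and the degree-1 sharing are our illustrative assumptions, not the paper's actual parameters; the point is how each axis is shared row-wise and the shares bundled column-wise, one bundle per party.

```python
import random

PRIME = 2**61 - 1  # illustrative field prime
SCALE = 1000       # fixed-point scaling for fractional IMU values (assumed)

def share(value, n=3):
    """Degree-1 Shamir sharing of a fixed-point-encoded value."""
    v = round(value * SCALE) % PRIME
    a = random.randrange(PRIME)
    return [(v + a * i) % PRIME for i in range(1, n + 1)]  # f(1), ..., f(n)

def split_sample(sample, n=3):
    """Steps 2-3: share each axis, then bundle the i-th share of
    every axis into the message destined for party P_i."""
    rows = {axis: share(val, n) for axis, val in sample.items()}  # one row per axis
    return [{axis: rows[axis][i] for axis in sample} for i in range(n)]

bundles = split_sample({"x": 0.12, "y": -9.81, "z": 0.33})
# bundles[0] goes to party P1, bundles[1] to P2, bundles[2] to P3;
# a single bundle reveals nothing about the underlying sample.
```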

4. Implementation

This section describes the implementation of the device- and cloud-side systems introduced in the previous section.

4.1. Device Application

A Python application was developed that reads CSV IMU data line-by-line from file and subsequently creates shares of each element, representing the separate accelerometer components, using SSSS. The resulting shares are base64-encoded, JSON-formatted, and transmitted over TLS to their respective cloud parties using the method described in Section 3.3. The application was deployed on a Raspberry Pi 3 Model B, featuring a Broadcom BCM2837 system-on-chip with a quad-core ARM Cortex-A53 at 1.2GHz, 1GB RAM, and a Broadcom BCM43438 WiFi module (802.11n, 150Mbit/s at 2.4GHz). The board was connected to a corporate WiFi network over which all communications occurred. Benchmarking procedures were also implemented using the Python time module, which wraps functions from the GNU C standard library, for measuring the upload time for transmitting the shares and receiving an acknowledgement from the server.
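A minimal sketch of the device-side encoding described above is given below. The JSON field names and the big-endian byte packing are our assumptions for illustration, as the paper does not specify its wire format; the subsequent transmission (e.g. an HTTPS POST of one message per party) is omitted.

```python
import base64
import json

def encode_bundle(bundle, timestamp):
    """Wrap one party's share bundle as base64 values inside JSON.
    Field names ("t", "shares") are illustrative, not the paper's format."""
    def b64(n):
        raw = n.to_bytes((n.bit_length() + 7) // 8 or 1, "big")
        return base64.b64encode(raw).decode("ascii")
    payload = {"t": timestamp, "shares": {axis: b64(v) for axis, v in bundle.items()}}
    return json.dumps(payload)

# One message per cloud party, carrying that party's share of each axis.
msg = encode_bundle({"x": 123456789, "y": 42, "z": 7}, timestamp=1700000000)
```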

4.2. Cloud Framework

We developed a cloud-based framework for performing MPC using arithmetic secret sharing comprising three applications:

Web server. A Python web server application that implements a public-facing service for retrieving JSON-formatted shares from the device application; performs database initialisation, storage, and retrieval; and bootstraps the launching of MPC-based operations with other parties and returns the result to the device.

MPyC instance. We make use of, and extend, MPyC—an open-source Python library developed by Schoenmakers (Schoenmakers, 2018) intended for the rapid prototyping of MPC-based systems. MPyC is based on the VIFF framework by Damgård et al. (Damgård et al., 2009), which implements numerous MPC operation protocols using arithmetic secret sharing, and abstracts the complexity of network management, e.g. socket connections, message formatting and transmission, and reconstructing and computing upon shares between the parties.

Database. Stores time-series IMU shares received from the device application; our implementation uses MongoDB, a JSON document-based NoSQL database.

4.2.1. Deployment

Config MPC Party 1 Party 2 Party 3
A US_Oregon Canada_Central US_Ohio
B US_Oregon Canada_Central Europe_London
C US_Oregon Europe_London Asia_Singapore
Table 3. MPC party server locations in AWS.

We deploy our architecture using three MPC parties deployed in geographically disparate locations available via AWS, comprising US West (Oregon), US East (Ohio), Central Canada, Europe (London), and South East Asia (Singapore). These configurations are listed in Table 3, representing a strictly North American deployment (Config A), North America and Europe (Config B), and North America, Europe and Asia (Config C). The three server-side applications, listed above, were deployed collectively on separate AWS EC2 instances in their respective region; each instance was also pre-loaded with the other instances’ static IP addresses in configuration files. We utilise three t2.micro AWS instances using Amazon’s Linux-based machine image, featuring a single-core Intel Xeon Scalable Processor at 3.3 GHz, with 1GB RAM and up to 0.72Gbit/s network throughput, which corresponds to an operating cost of $0.0116 (USD) per hour, or $8.47 per month per instance (Amazon, 2019).

4.2.2. Feature Extraction

In this work, we implement MPC-based feature extraction replicating the SmartFall proposal by Mauldin et al. (Mauldin et al., 2018), which relies upon two features: each sample's vector magnitude and the difference between the maximum and minimum magnitudes within a sliding window (Equations 1 and 2 respectively). These features are computed over a sliding window of size 750ms, which is managed by our cloud-based framework by aggregating received shares in an array of size 24, assuming a sampling frequency of 31.25Hz as used in (Mauldin et al., 2018). The vector magnitude is computed for each sample before finding the maximum–minimum difference, which uses MPyC's maximum and minimum functions over a vector of secret-shared elements. These are implemented recursively by finding the maximum (or minimum) of each half of the vector, computing their difference, and performing a greater-than or less-than comparison with zero, the costs of which were listed in Table 2.
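In plaintext form, the two SmartFall features over one window can be sketched as below; this is a plaintext analogue for illustration only, since under MPC the square root and the max/min comparisons are the expensive interactive steps:

```python
import math

WINDOW = 24  # 750 ms of samples at 31.25 Hz, as used in the paper

def smartfall_features(samples):
    """samples: list of (x, y, z) accelerometer tuples for one window.
    Returns the per-sample vector magnitudes and the max-min difference."""
    assert len(samples) == WINDOW
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    return mags, max(mags) - min(mags)
```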

An important observation is that the choice of features dramatically affects the cost of MPC-based operations. Features reliant upon large numbers of multiplication, comparison or Boolean operations necessitate many rounds of inter-party communication, versus features that can be computed locally, such as additions or multiplications with public values, as noted in Section 3.1.2. For the SmartFall features, computing the vector magnitude and the maximum–minimum difference in a sliding window imposes significant cost from the square root operation and the minimum and maximum functions respectively.

To address this, we also explore derivative-based features, which are used extensively for feature extraction in computer vision tasks (Lowe, 2004; Bay et al., 2008; Dalal and Triggs, 2005). In particular, inspired by the SURF features of Bay et al. (Bay et al., 2008), we use the sum of the derivatives and the sum of the squared derivatives for each data channel within the window, resulting in a feature of dimension six as follows:


$$F = \Big(\sum_n \delta x_n,\ \sum_n \delta y_n,\ \sum_n \delta z_n,\ \sum_n \delta x_n^2,\ \sum_n \delta y_n^2,\ \sum_n \delta z_n^2\Big)$$

Where $\delta x$, $\delta y$ and $\delta z$ represent the differentials of the $x$, $y$ and $z$ accelerometer measurement components respectively. The derivative is computed by convolution as follows:


$$\delta x_n = \sum_m x_{n-m}\, k_m$$

Where $x_n$ is the $n$-th sample of an accelerometer measurement component and $k$ is a filter kernel with a fixed impulse response. This feature extraction scheme is substantially more efficient, requiring only a single round of inter-party communication for each derivative squaring operation; the summing operations can be performed locally without communication, as can the convolution, since the filter kernel applies multiplications with constant (public) values.
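A plaintext sketch of the derivative-based extraction follows. The simple difference kernel [1, -1] used here is an assumption, since the exact impulse response is not reproduced above:

```python
def derivative_features(samples):
    """samples: list of (x, y, z) tuples for one window. Returns the
    6-dimensional feature: per-channel sums of derivatives, then per-channel
    sums of squared derivatives. Under MPC, only the squaring is interactive;
    the sums and the constant-kernel convolution are local operations."""
    channels = list(zip(*samples))  # -> x-, y-, z-channel sequences
    # Convolution with the assumed difference kernel [1, -1]:
    derivs = [[b - a for a, b in zip(ch, ch[1:])] for ch in channels]
    return ([sum(d) for d in derivs] +
            [sum(v * v for v in d) for d in derivs])
```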

4.2.3. Privacy-Enhancing Inferencing

We developed a range of MPC-based functions for privacy-preserving inferencing/classification from data shares, all of which were implemented using the fundamental operations described in Section 3.1.2. At present, we have developed MPC-based implementations of the following classification procedures.

Logistic Regression

(LR) is a linear model of the probability of a binary response given a feature vector, $x$. LR applies the logit function to $x$ and parameters, $\theta$, that minimise some cost function, e.g. ordinary least squares. The decision boundary, $\theta^{\top}x = 0$, is represented by a linear combination of the parameters and features; that is, a sample is classified according to the sign of $\theta^{\top}x$.
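As a plaintext sketch, LR inferencing reduces to a single inner product; under MPC the sigmoid need not be evaluated, since the binary decision depends only on the sign of the linear combination (the function name and bias parameter below are illustrative):

```python
import math

def lr_predict(theta, x, bias=0.0):
    """Returns (probability, label) for a logistic regression model;
    only the sign of z is needed for the class decision."""
    z = sum(t * xi for t, xi in zip(theta, x)) + bias
    return 1.0 / (1.0 + math.exp(-z)), int(z >= 0)
```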

A Support Vector Machine

(SVM) is a binary linear model that attempts to find a hyperplane that maximises the margin (distance) between linearly separable data points of each class in the training set. We implement a traditional SVM with linear kernel for binary classification, whereby samples are classified using a given set of weights, $w$, a feature vector, $x$, and testing Equation 5 with a learned bias, $b$. Note that, for inferencing, both SVM and LR use a single inner product. For MPC, a naïve approach to computing the inner product of two length-$n$ vectors requires $n$ rounds of communication to compute each product $[x_i y_i]$; however, work in (de Hoogh, 2012) presented a method for computing the inner product of two vectors with only a single round of communication using (non-interactive) local summations, which is reproduced in Algorithm 1 for two shared vectors, $[x]$ and $[y]$.

Result: $[z]$, where $z = \sum_{i=1}^{n} x_i y_i$
foreach party $P_j$ in parallel do
       $P_j$ computes $z_j \leftarrow \sum_{i=1}^{n} [x_i]_j \cdot [y_i]_j$ and reshares $z_j$ into $[z]$
end foreach
Algorithm 1. Inner product using secret sharing (Chen, 2012).
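The single-round inner product can be illustrated in a plaintext simulation: with degree-$t$ Shamir shares, each party multiplies its share pairs componentwise and sums locally, yielding a degree-$2t$ sharing of the inner product that all $2t+1 = 3$ parties can open (the real protocol's one communication round reshares this back to degree $t$). A sketch, with the field size chosen purely for illustration:

```python
import secrets

PRIME = 2**61 - 1
N = 3  # three parties holding degree-1 (t = 1) sharings

def share(v):
    """Degree-1 Shamir sharing of v: evaluations of v + a*x at x = 1, 2, 3."""
    a = secrets.randbelow(PRIME)
    return [(x, (v + a * x) % PRIME) for x in range(1, N + 1)]

def open_at_zero(points):
    """Lagrange interpolation at x = 0; three points fix a degree-2 polynomial."""
    total = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

def inner_product(x_vec, y_vec):
    """Each party's local work: multiply its share pairs and sum (no rounds);
    the three local results form a degree-2 sharing of <x, y>."""
    xs = [share(v) for v in x_vec]
    ys = [share(v) for v in y_vec]
    points = []
    for p in range(N):
        acc = sum(sx[p][1] * sy[p][1] for sx, sy in zip(xs, ys)) % PRIME
        points.append((p + 1, acc))
    return open_at_zero(points)
```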

Naïve Bayes

(NB) is a probabilistic classifier that uses Bayes' Theorem with a strong independence assumption between features. A feature vector is subsequently classified by maximising the posterior probability for each class label using the prior and conditional probabilities computed in the training phase. In particular, Gaussian NB is used to compute the log-likelihoods of real-valued inputs, parameterised by mean $\mu_{k,i}$ with variance $\sigma_{k,i}^2$ for each class $C_k$ and feature $i$, and evaluated as follows:

$$\log P(C_k \mid x) \propto \log P(C_k) + \sum_{i=1}^{n} \left( -\frac{1}{2}\log\!\left(2\pi\sigma_{k,i}^2\right) - \frac{(x_i - \mu_{k,i})^2}{2\sigma_{k,i}^2} \right)$$

Where $n$ is the number of features and $P(C_k)$ is the probability of the occurrence of class $C_k$.
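A plaintext sketch of Gaussian NB scoring, following the log-posterior form of Equation 6; under MPC, each squared deviation and each division by the variance costs one interactive multiplication:

```python
import math

def gaussian_nb_scores(x, priors, means, variances):
    """Log-posterior (up to a shared constant) for each class.
    priors[k], means[k][i], variances[k][i] parameterise class k, feature i;
    the predicted class is the index of the maximum score."""
    scores = []
    for prior, mu, var in zip(priors, means, variances):
        s = math.log(prior)
        for xi, m, v in zip(x, mu, var):
            s += -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
        scores.append(s)
    return scores
```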

5. Evaluation

This section presents a two-part evaluation of the proposed framework, comprising: 1) the classification error rates achieved by our framework using MPC-based implementations of SmartFall and derivative-based features with the three inferencing algorithms; and 2) computational performance, evaluating the time complexity of the framework.

5.1. Classification Error Rates

This first analysis presents the error rates achieved by the models used in our MPC-based implementation of the SmartFall proposal by Mauldin et al. (Mauldin et al., 2018), using the standard and derivative-based features described in Section 4.2.2. The authors provide a publicly-available dataset comprising labelled IMU accelerometer measurements corresponding to binary fall events from seven participants aged 21–55 years who wore a Microsoft Band 2 smartwatch. We reproduce and extend this work to incorporate three classifiers in total: Naïve Bayes, SVM (with linear kernel), and logistic regression.

A separate Python application was used to produce the model parameters (off-line) using the dataset. Firstly, a sliding window of 750ms was used, comprising data sampled at a frequency of 31.25Hz, over which the vector magnitude and maximum–minimum difference were computed; this was repeated for the derivative-based feature extraction. Next, the models—six in total, representing the three inferencing algorithms for each feature type—were trained independently using the Scikit-Learn (Pedregosa et al., 2011) library; the inclusion of logistic regression goes beyond the classifiers tested in the initial SmartFall proposal (Mauldin et al., 2018). The dataset was split using an 80:20 training-test set ratio, and five-fold cross-validation was used to select the best performing model from the training set, whose final error rate was measured using the test set. The parameters of the best performing models were subsequently exported and implemented within the inferencing procedures detailed in Section 4.2.3.
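The off-line training pipeline can be sketched as follows; the synthetic data, label rule and hyper-parameter grid are placeholders, and only the split and cross-validation structure mirrors the procedure described above:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))             # stand-in for windowed features
y = (X[:, 0] + X[:, 3] > 0).astype(int)   # stand-in for fall/no-fall labels

# 80:20 training-test split; five-fold CV selects the best model.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
search = GridSearchCV(LogisticRegression(), {"C": [0.1, 1.0, 10.0]}, cv=5)
search.fit(X_tr, y_tr)
accuracy = search.best_estimator_.score(X_te, y_te)

# Parameters exported to the MPC-based inferencing procedures:
theta = search.best_estimator_.coef_[0]
bias = search.best_estimator_.intercept_[0]
```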

Tables 4 and 5 present the average error rates for each model in terms of classification accuracy, precision, recall, and F1-score, averaged across all participants, for the SmartFall and derivative-based features respectively. The achieved results, i.e. accuracy and F1-score in the best case for SVM, are commensurate with the original SmartFall proposal and other fall detection work listed previously in Table 1. We note that the derivative features outperform the original features in terms of precision, recall and F1-score, e.g. an F1-score of 0.804 versus 0.732 for NB, and a recall of 0.835 versus 0.663 for SVM; however, the classification accuracy is broadly similar for both feature types.

Algorithm Accuracy Precision Recall F1-Score
LR 0.933 0.780 0.688 0.731
NB 0.926 0.704 0.762 0.732
SVM 0.936 0.819 0.663 0.733
  • LR: Logistic Regression, NB: Naïve Bayes, SVM: Support Vector Machine.

Table 4. Classification error with SmartFall features.
Algorithm Accuracy Precision Recall F1-Score
LR 0.936 0.771 0.812 0.791
NB 0.934 0.725 0.903 0.804
SVM 0.937 0.762 0.835 0.797
Table 5. Classification error with derivative-based features.

5.2. Computational Performance

In this section, we evaluate execution times on the device and cloud sides. This suite of experiments evaluates the latency imposed by each stage of the inferencing process. We also report execution times with the MPC servers deployed in the three configurations given in Table 3.

5.2.1. Device-side Latency

For the device, we measure the time both to construct the shares using SSSS, with three parties and a corresponding reconstruction threshold, and to transmit them to each party. We measure this for the sharing and transmission of 24 samples, equivalent to the sliding window size of 750ms. The mean share construction time, averaged across all windows, was 6.1ms, with the upload time taking 398ms, 404ms and 493.2ms for configurations A, B and C respectively. These timings were measured by the device-side application aboard a Raspberry Pi 3 located in Belgium, the specifications of which were listed in Section 4.1.

Config A Config B Config C
Algorithm FE† IN FE† IN FE† IN
LR 778.6 (119.0) 307.6 (91.00) 1004.5 (125.3) 382.8 (104.1) 1204.3 (194.1) 501.7 (150.0)
SVM — — — — — —
NB — 508.8 (103.0) — 557.4 (225.6) — 692.3 (114.0)
  • †: FE common for all algorithms, —: Repeat entry (vertically), FE: Feature extraction, IN: Inferencing.

Table 6. Mean feature extraction and inferencing times (milliseconds, standard deviations in parentheses) for SmartFall features across AWS configurations.
Config A Config B Config C
Algorithm FE† IN FE† IN FE† IN
LR 67.90 (14.70) 297.3 (106.0) 101.9 (18.80) 365.1 (95.4) 111.0 (24.40) 490.0 (104.0)
SVM — — — — — —
NB — 551.6 (103.0) — 602.1 (105.5) — 741.5 (112.80)
  • †: FE common for all algorithms, —: Repeat entry (vertically), FE: Feature extraction, IN: Inferencing.

Table 7. Mean feature extraction and inferencing times (milliseconds, standard deviations in parentheses) for derivative-based features.

5.2.2. Cloud-side Latency

On the cloud side, we evaluate the times taken by the feature extraction and prediction (inferencing) procedures as MPC-based operations, which were described in Sections 4.2.2 and 4.2.3 respectively. The total time for classifying a window is the sum of these two values. We report the results for the three configurations for the standard SmartFall features in Table 6 and the derivative-based features in Table 7. These times are also illustrated in Figures 3 and 4.

Figure 3. Mean window execution times using SmartFall features for Config. A (no hatch), B (hatched), and C (dotted).
Figure 4. Mean window execution times using derivative features for Config. A (no hatch), B (hatched), and C (dotted).

5.3. Discussion

One concern with MPC, raised previously in Section 3.1.2, resides in the inter-party communication costs of multiplication- and comparison-heavy functions, which poses questions regarding the potential latency of MPC-based feature extraction and inferencing. For the originally proposed SmartFall features, feature extraction and inferencing take approximately 1900ms in the worst case (Naïve Bayes using Config C) and roughly 1100ms in the best case (LR with Config A). We also note that the feature extraction stage dominates the execution times using this feature type, accounting for approximately two-thirds of the total time for each tested algorithm. Comparatively, this reverses when using the derivative features, which are computable using fewer rounds of inter-party communication; the inferencing stage becomes the dominant factor for all algorithms, rather than feature extraction. The execution time is also substantially reduced for the derivative-based features; for example, falling from 1085ms to 365ms using LR with Config A between the SmartFall and derivative-based features respectively. In general, the derivative-based features are extracted 11x faster than the originally proposed SmartFall features in (Mauldin et al., 2018), and the total execution time, including inferencing, is approximately 2.8x lower for the derivative-based experiments than for their SmartFall-based counterparts.

We also see that the SVM and LR algorithms execute in similar times, owing to the mathematical similarity of their inferencing procedures, i.e. a single inner product operation requiring only one round of inter-party communication. NB, however, is relatively expensive, requiring two multiplication operations for each feature value in the window: one for computing the square of the feature subtracted from the mean, and one for the inverse multiplication (division) of this value by the feature's variance, as shown in Equation 6. The choice of server configuration, i.e. the parties' geographical distribution, also has a significant effect on the total execution time; increasing, for example, from approximately 1085ms to 1700ms using SmartFall features with SVM-based inferencing, corresponding to a 56% increase from Config A to C. Similar results are exhibited for all SmartFall- and derivative-based experiments.

Lastly, we observe that, in some instances, the total execution time falls within the time required to acquire the measurements within the window (750ms). This applies to all derivative-based predictions using configurations A and B, but only to the SVM and LR inferencing algorithms using Config C. This leads us to believe that these configurations would be best suited to real-time prediction.

6. Conclusion

A plethora of current fall detection systems rely upon motion data from user-owned devices, such as smartphones and smartwatches. However, as we showed in Section 2, such proposals necessitate the collection of potentially sensitive IMU data, which has also been used effectively to identify users from their gait, track their position using dead reckoning, and determine their activities of daily living (ADLs), such as whether they are sleeping, walking or sitting. In light of this, we presented the first analysis of the application of multi-party computation (MPC) to IMU-based fall detection, where the principal goal is retaining the confidentiality of IMU measurements after, for example, their exfiltration and disclosure from a vulnerable cloud-based service.

To this end, we presented the design and implementation of this system in Sections 3 and 4. The evaluation was conducted using three AWS EC2 instances in geographically disparate locations in three configurations; each instance acted as a distinct MPC party for jointly computing the feature extraction and inferencing procedures using weights from pre-trained models. This was performed for the originally-proposed SmartFall features as well as derivative-based features, which were explored to mitigate the communication complexity imposed by the MPC-based comparison operations required by the SmartFall features.

In Section 5, the results of a two-fold evaluation, comprising classification error and computational performance, were presented using a publicly-available dataset comprising data from seven participants wearing a commodity smartwatch. The performance results indicate that state-of-the-art error rates can be achieved for MPC-based fall detection with reasonable computational performance of 365ms per window, in conjunction with a classification accuracy of 93.4% and an F1-score of 0.791. We showed quantitatively that the choices of inferencing algorithm, feature extraction, and server location have substantial effects on the performance of MPC-based IMU fall detection. Indeed, in the best cases, the total prediction time is less than the data acquisition time on the device, providing some indication that real-time prediction is feasible. Lastly, to conclude this work, we aim to pursue the following future research directions:

  • MPC-based fall detection using deep learning. Existing fall detection systems rely heavily on traditional machine learning classifiers, such as SVMs; a small selection of recent work, however, has begun to explore the use of deep neural networks (DNNs), such as RNNs (Musci et al., 2018). As such, a future avenue lies in exploring MPC-based DNNs and the overhead imposed by evaluating large numbers of intermediate layers with potentially hundreds or thousands of nodes.

  • Video-based fall detection. A separate paradigm of fall detection systems, stated in Section 2, relies upon video data from ambient devices in the user’s home, such as a Microsoft Kinect. This data, however, is of significantly greater dimension than the IMU data explored in this work; a future direction is an investigation into the applicability of MPC for computing upon such data in higher dimensions.


  • Albert et al. (2012) Mark V Albert, Konrad Kording, Megan Herrmann, and Arun Jayaraman. 2012. Fall classification by machine learning using mobile phones. PloS one 7, 5 (2012).
  • Amazon (2019) Amazon. 2019. Amazon EC2 T2 instances.
  • Bay et al. (2008) Herbert Bay, Andreas Ess, Tinne Tuytelaars, and Luc Van Gool. 2008. Speeded-Up Robust Features (SURF). Comput. Vis. Image Underst. 110, 3 (June 2008), 346–359.
  • Ben-Or et al. (1988) Michael Ben-Or, Shafi Goldwasser, and Avi Wigderson. 1988. Completeness theorems for non-cryptographic fault-tolerant distributed computation. In Proceedings of the 20th Annual ACM Symposium on Theory of Computing. ACM.
  • Cao et al. (2012) Yabo Cao, Yujiu Yang, and WenHuang Liu. 2012. E-FallD: A fall detection system using android-based smartphone. In 9th International Conference on Fuzzy Systems and Knowledge Discovery. IEEE, 1509–1513.
  • Chen (2012) Ping Chen. 2012. Secure Multiparty Computation for Privacy Preserving Data Mining. Master’s thesis. TU Eindhoven.
  • Dalal and Triggs (2005) Navneet Dalal and Bill Triggs. 2005. Histograms of Oriented Gradients for Human Detection. In Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition. IEEE Computer Society, 886–893.
  • Damgård et al. (2009) Ivan Damgård, Martin Geisler, Mikkel Krøigaard, and Jesper Buus Nielsen. 2009. Asynchronous multiparty computation: Theory and implementation. In International Workshop on Public Key Cryptography. Springer, 160–179.
  • de Hoogh (2012) Sebastiaan de Hoogh. 2012. Design of large scale applications of secure multiparty computation: secure linear programming. Ph.D. dissertation (2012).
  • de Hoogh et al. (2014) Sebastiaan de Hoogh, Berry Schoenmakers, Ping Chen, and Harm op den Akker. 2014. Practical secure decision tree learning in a teletreatment application. In International Conference on Financial Cryptography and Data Security. Springer.
  • Delahoz and Labrador (2014) Yueng Santiago Delahoz and Miguel Angel Labrador. 2014. Survey on fall detection and fall prevention using wearable and external sensors. Sensors 14, 10 (2014), 19806–19842.
  • Doukas and Maglogiannis (2011) Charalampos Doukas and Ilias Maglogiannis. 2011. Managing wearable sensor data through cloud computing. In 3rd IEEE International Conference on Cloud Computing Technology and Science. IEEE.
  • Doukas et al. (2007) Charalampos Doukas, Ilias Maglogiannis, Philippos Tragas, Dimitris Liapis, and Gregory Yovanof. 2007. Patient fall detection using support vector machines. In IFIP International Conference on Artificial Intelligence Applications and Innovations. Springer, 147–156.
  • Edgcomb and Vahid (2012) Alex Edgcomb and Frank Vahid. 2012. Automated fall detection on privacy-enhanced video. In Engineering in Medicine and Biology Society (EMBC), Annual International Conference of the IEEE. IEEE, 252–255.
  • Florence et al. (2018) Curtis S Florence, Gwen Bergen, Adam Atherly, Elizabeth Burns, Judy Stevens, and Cynthia Drake. 2018. Medical costs of fatal and nonfatal falls in older adults. Journal of the American Geriatrics Society 66, 4 (2018), 693–698.
  • Fortino et al. (2014) Giancarlo Fortino, Daniele Parisi, Vincenzo Pirrone, and Giuseppe Di Fatta. 2014. BodyCloud: A SaaS approach for community body sensor networks. Future Generation Computer Systems 35 (2014), 62–79.
  • Gafurov (2007) Davrondzhon Gafurov. 2007. A survey of biometric gait recognition: Approaches, security and challenges. In Annual Norwegian Computer Science Conference.
  • Gafurov et al. (2007) Davrondzhon Gafurov, Einar Snekkenes, and Patrick Bours. 2007. Gait authentication and identification using wearable accelerometer sensor. In IEEE Workshop on Automatic Identification Advanced Technologies. IEEE, 220–225.
  • Gravina et al. (2017) Raffaele Gravina, Congcong Ma, Pasquale Pace, Gianluca Aloi, Wilma Russo, Wenfeng Li, and Giancarlo Fortino. 2017. Cloud-based Activity-aaService cyber–physical framework for human activity monitoring in mobility. Future Generation Computer Systems 75 (2017), 158–171.
  • Hong et al. (2010) Yu-Jin Hong, Ig-Jae Kim, Sang Chul Ahn, and Hyoung-Gon Kim. 2010. Mobile health monitoring system based on activity recognition using accelerometer. Simulation Modelling Practice and Theory 18, 4 (2010), 446–455.
  • Juefei-Xu et al. (2012) Felix Juefei-Xu, Chandrasekhar Bhagavatula, Aaron Jaech, Unni Prasad, and Marios Savvides. 2012. Gait-ID on the move: Pace independent human identification using cell phone accelerometer dynamics. In Fifth International Conference on Biometrics: Theory, Applications and Systems (BTAS). IEEE, 8–15.
  • Kang and Han (2015) Wonho Kang and Youngnam Han. 2015. SmartPDR: Smartphone-based pedestrian dead reckoning for indoor localization. IEEE Sensors Journal 15, 5 (2015).
  • Kwolek and Kepski (2014) Bogdan Kwolek and Michal Kepski. 2014. Human fall detection on embedded platform using depth maps and wireless accelerometer. Computer methods and programs in biomedicine 117, 3 (2014), 489–501.
  • Liu and Cheng (2012) Shing-Hong Liu and Wen-Chang Cheng. 2012. Fall detection with the support vector machine during scripted and continuous unscripted activities. Sensors 12, 9 (2012), 12301–12316.
  • Lowe (2004) David G. Lowe. 2004. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vision 60, 2 (Nov. 2004), 91–110.
  • Luštrek and Kaluža (2009) Mitja Luštrek and Boštjan Kaluža. 2009. Fall detection and activity recognition with machine learning. Informatica 33, 2 (2009).
  • Makri et al. (2017) Eleftheria Makri, Dragos Rotaru, Nigel P. Smart, and Frederik Vercauteren. 2017. EPIC: Efficient Private Image Classification (or: Learning from the Masters). Cryptology ePrint Archive, Report 2017/1190.
  • Mantyjarvi et al. (2005) Jani Mantyjarvi, Mikko Lindholm, Elena Vildjiounaite, S-M Makela, and HA Ailisto. 2005. Identifying users of portable devices from gait pattern with accelerometers. In IEEE International Conference on Acoustics, Speech, and Signal Processing. IEEE.
  • Mastorakis and Makris (2014) Georgios Mastorakis and Dimitrios Makris. 2014. Fall detection system using Kinect’s infrared sensor. Journal of Real-Time Image Processing 9, 4 (2014).
  • Mauldin et al. (2018) Taylor Mauldin, Marc Canby, Vangelis Metsis, Anne Ngu, and Coralys Rivera. 2018. SmartFall: a smartwatch-based fall detection system using deep learning. Sensors 18, 10 (2018), 3363.
  • Mellone et al. (2012) S Mellone, C Tacconi, L Schwickert, J Klenk, C Becker, and L Chiari. 2012. Smartphone-based solutions for fall detection and prevention: the FARSEEING approach. Zeitschrift für Gerontologie und Geriatrie 45, 8 (2012), 722–727.
  • Mubashir et al. (2013) Muhammad Mubashir, Ling Shao, and Luke Seed. 2013. A survey on fall detection: Principles and approaches. Neurocomputing 100 (2013), 144–152.
  • Musci et al. (2018) Mirto Musci, Daniele De Martini, Nicola Blago, Tullio Facchinetti, and Marco Piastra. 2018. Online Fall Detection using Recurrent Neural Networks. arXiv preprint arXiv:1804.04976 (2018).
  • Palani et al. (2016) Kartik Palani, Emily Holt, and Sean Smith. 2016. Invisible and forgotten: Zero-day blooms in the IoT. In Pervasive Computing and Communication Workshops. IEEE.
  • Pedregosa et al. (2011) Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research 12 (2011), 2825–2830.
  • Pratama et al. (2012) Azkario Rizky Pratama, Risanuri Hidayat, et al. 2012. Smartphone-based pedestrian dead reckoning as an indoor positioning system. In International Conference on System Engineering and Technology. IEEE.
  • Santoyo-Ramón et al. (2018) José Antonio Santoyo-Ramón, Eduardo Casilari, and José Manuel Cano-García. 2018. Analysis of a smartphone-based architecture with multiple mobility sensors for fall detection with supervised learning. Sensors 18, 4 (2018), 1155.
  • Schoenmakers (2018) Berry Schoenmakers. 2018. MPyC—Python Package for Secure Multiparty Computation. In Workshop on the Theory and Practice of MPC.
  • Shamir (1979) Adi Shamir. 1979. How to share a secret. Commun. ACM 22, 11 (1979), 612–613.
  • Stikic et al. (2008) Maja Stikic, Tâm Huynh, Kristof Van Laerhoven, and Bernt Schiele. 2008. ADL recognition based on the combination of RFID and accelerometer sensing. In Pervasive Computing Technologies for Healthcare. IEEE, 258–263.
  • Sucerquia et al. (2018) Angela Sucerquia, José David López, and Jesús Francisco Vargas-Bonilla. 2018. Real-life/real-time elderly fall detection with a triaxial accelerometer. Sensors 18, 4 (2018), 1101.
  • Tao et al. (2012) Shuai Tao, Mineichi Kudo, and Hidetoshi Nonaka. 2012. Privacy-preserved behavior analysis and fall detection by an infrared ceiling sensor network. Sensors 12, 12 (2012), 16920–16936.
  • Thang et al. (2012) Hoang Minh Thang, Vo Quang Viet, Nguyen Dinh Thuc, and Deokjai Choi. 2012. Gait identification using accelerometer on mobile phone. In Control, Automation and Information Sciences. IEEE, 344–348.
  • Vilarinho et al. (2015) Thomas Vilarinho, Babak Farshchian, Daniel Gloppestad Bajer, Ole Halvor Dahl, Iver Egge, Sondre Steinsland Hegdal, Andreas Lønes, Johan N Slettevold, and Sam Mathias Weggersen. 2015. A combined smartphone and smartwatch fall detection system. In IEEE International Conference on Ubiquitous Computing and Communications. IEEE, 1443–1448.
  • Wang et al. (2018) Boyuan Wang, Xuelin Liu, Baoguo Yu, Ruicai Jia, and Xingli Gan. 2018. Pedestrian Dead Reckoning Based on Motion Mode Recognition Using a Smartphone. Sensors 18, 6 (2018), 1811.
  • Yavuz et al. (2010) Gokhan Yavuz, Mustafa Kocak, Gokberk Ergun, Hande Ozgur Alemdar, Hulya Yalcin, Ozlem Durmaz Incel, and Cem Ersoy. 2010. A smartphone based fall detector with online location support. In International Workshop on Sensing for App Phones. 31–35.
  • Zhang et al. (2012) Chenyang Zhang, Yingli Tian, and Elizabeth Capezuti. 2012. Privacy preserving automatic fall detection for elderly using RGBD cameras. In International Conference on Computers for Handicapped Persons. Springer, 625–633.
  • Zhang et al. (2014) Zhi-Kai Zhang, Michael Cheng Yi Cho, Chia-Wei Wang, Chia-Wei Hsu, Chong-Kuan Chen, and Shiuhpyng Shieh. 2014. IoT security: ongoing challenges and research opportunities. In 7th International Conference on Service-Oriented Computing and Applications. IEEE, 230–234.