PIN-, password-, and pattern-based authentication systems have several drawbacks: they need to be remembered; they are no longer user-friendly, as the overall authentication process is time consuming; they offer only entry-point security; and they are susceptible to video-based side channel, shoulder surfing, and social engineering attacks [1, 2]. Fingerprint-, face-, and iris-based recognition systems do address the first two drawbacks; however, they are susceptible to spoofing and social engineering (intoxication) attacks, and are not very suitable for continuous verification. These pitfalls of the existing, extensively used authentication systems are among the key reasons behind the rapid evolution of behavioral-biometric continuous authentication systems over the last few years.
Behavioral footprints such as typing, swiping, walking, arm movements while walking, hand kinematic synergies and their neural representations, and possible combinations of these have demonstrated potential and promise for user authentication [4, 5, 6, 7, 8, 9, 10, 11]. However, several challenges remain unaddressed: the availability of these modalities throughout the user's interaction with the smartphone; the collection of labeled data for the enrollment and verification processes under different operating conditions (phone-usage contexts); and the practicality of these systems under realistic scenarios.
Moreover, the context of phone usage varies significantly from user to user. For example, a smartphone user can swipe, type, or talk while sitting, standing, walking, riding an elevator, or traveling in a bus, train, or car. The biometric footprints may vary significantly across these contexts, so building authentication systems without taking context into consideration would result in poor authentication decisions. Additionally, the availability of these footprints may vary across applications; in practice, existing single-modality systems could have limited application, such as securing activities on one targeted application. Almost every study has assumed the phone-usage context and/or the availability of labeled samples to guide the authentication process, and has maintained a constrained data collection environment [12, 6, 8, 9].
In reality, the process of labeling contexts is quite intrusive and unlikely to be implemented by industry or accepted by common smartphone users. Hence, both assumptions, i.e. that the context is known and that labeled samples are available, are unrealistic. Some smartphone users may encounter specific usage contexts with very high frequency during typical phone usage, while others may never operate their smartphones in those contexts at all. Therefore, assuming a universal set of contexts common across the user population may not be very helpful. Instead, developing models that identify user-specific phone-usage contexts, and designing an authentication model for each context, could be a plausible solution.
This paper attempts to address the above-mentioned challenges by collecting phone movement patterns that are seamlessly available while the phone is in use, or merely in the user's possession, under a completely unconstrained environment (i.e. it makes no attempt to label the data and assumes no context); applying a semi-supervised learning algorithm to identify phone-usage contexts; and utilizing the predicted contexts to guide the authentication process. The main contributions of this paper are summarized below:
Builds a dataset of continuous phone movement patterns collected from a diverse population of users over a period of five to twelve days under a completely unconstrained environment.
Presents a novel authentication method based only on phone movement patterns. The method identifies the phone-usage context automatically by using K-means clustering and a Random Forest classifier. The enrollment and verification processes were implemented by employing five distinct machine learning classifiers, namely, Logistic Regression, Neural Network, kNN, SVM, and Random Forest.
The performance of the authentication system was evaluated and reported in terms of Equal Error Rates (EERs). The performance of classification algorithms was compared using a series of statistical tests.
The suitability of the proposed system for different types of users was also investigated using Failure to Enroll policy. This investigation provided interesting insights into the usability of the proposed system.
The rest of this paper is organized as follows: Section 2 presents related work; Section 3 describes the data collection, preprocessing, and feature analysis; Section 4 discusses design, implementation, training and testing processes of the proposed authentication system; Section 5 presents the experimental results and discussion; and Section 6 concludes our work.
2 Related Work
Several behavioral footprints, including swiping, typing, walking, arm movements, and their possible fusion [8, 9], have been studied in the past for authenticating individuals continuously. Phone movement patterns were recently studied by Kumar et al., Sitová et al., and Buriro et al. Sitová et al. focused mainly on phone movement patterns while walking and sitting, whereas Kumar et al. mainly studied phone movement patterns while typing or swiping. In a different study, Tang et al. studied phone movement patterns under two different conditions: dynamic (walking, going upstairs, and going downstairs) and static (sitting, standing, and lying). Murmuria et al. studied power consumption, touch gestures, physical movement, and their combination for continuous authentication of smartphone users.
A common problem with the above studies is that their experiments were carried out in a controlled or restricted environment. Another major problem is the limited availability of the mentioned footprints throughout the user's activity on the device. Our work addresses these concerns by (1) keeping the data collection environment completely unconstrained, and (2) capturing the phone movement patterns throughout the user's interaction with the device.
In a similar attempt, Mahbub et al. collected data from multiple sensors, including the front camera, touch sensor, and location service, under an unconstrained environment for continuous authentication of smartphone users. In another study, Mahbub et al. proposed methods to authenticate users based on their trace histories. They also presented a challenging dataset containing multiple sensor signals collected from volunteers on Nexus 5 phones over a period of months. The sensors include a front-facing camera, touchscreen, gyroscope, accelerometer, magnetometer, light sensor, GPS, Bluetooth, Wi-Fi, proximity sensor, temperature sensor, and a pressure sensor. It would be interesting to apply our methods to the dataset presented by Mahbub et al. and explore the viability of all the captured modalities for continuous authentication of smartphone users.
3 Data Collection and Preprocessing
3.1 Data Collection
Following approval of the University’s Institutional Review Board, we invited faculty, staff, and students to participate in our study. The majority of the participants were university students, while the rest were university staff or faculty. All of them were regular smartphone users. The participating individuals were from different colleges, departments, and programs, including Engineering, Mathematics, and Biomedical, and from different countries, including India, China, Nepal, Guyana, Uganda, USA, Iran, and Russia, resulting in a diverse sample. The participants were briefed about the level of engagement, the type of data collection, battery consumption, our expectations, and the amount of compensation.
An Android application (App) was developed to capture phone movement patterns continuously as long as the phone was switched on. On its first startup, the App automatically activated a background service responsible for capturing the acceleration of the phone seamlessly. The App was installed on each participant's own phone, instead of providing them with an experimental phone, to ensure a completely realistic and unconstrained operating environment.
One disadvantage of doing so was the varying sampling rate across the user population. The sampling rate configuration was set to SENSOR_DELAY_NORMAL to avoid excessive power consumption on the participants' devices. Not all devices honored this setting, as they were running different versions of Android. Forty of the users had sampling rates between  and  samples per second, six users between  and , whereas the remaining users had more than  samples per second; the mean and median sampling rates were  and  respectively. To deal with the varying sampling rates, we used windows of a fixed length in time, rather than a fixed number of samples, during feature extraction.
The participants were asked to return after at least five days to submit their data and collect their compensation. Several individuals registered to participate in our data collection study, but about half of them never returned. We selected for this study only those users who had at least five and up to twelve days of data; 57 users met this criterion. The data was collected over several days to ensure that a full cycle of the activities a user undertakes was captured.
3.2 Exploratory Data Analysis
For almost every user, a major segment of the data belonged to the unattended (but switched on) state of the phone. The unattended state here means that the accelerometer records almost no acceleration. Directly removing every sample whose acceleration fell below a threshold distorted the signal, so we used a window-based scheme to identify and remove the segments belonging to the unattended state of the phone. For each user, the data was divided into segments of 2.5 seconds. For each segment, the medians m_x, m_y, and m_z of the x, y, and z values were computed and compared with predefined thresholds. A segment was discarded if all three medians fell within their respective unattended bands, i.e. if (l_x <= m_x <= u_x) && (l_y <= m_y <= u_y) && (l_z <= m_z <= u_z), where (l_x, u_x), (l_y, u_y), and (l_z, u_z) were the lower and upper thresholds for the x, y, and z series respectively; otherwise the segment was kept. The thresholds were computed from the upper and lower envelopes of accelerometer readings collected while the phone was left unattended for hours.
The data belonging to the unattended phone state was discarded for two reasons. First, the phone movement patterns while the phone was unattended overlapped heavily across the user population, so they could have injected only noise into the authentication system. Second, we believed that there is no need to authenticate the user when the phone is in the unattended state. Median filtering with a span of three data points was applied before extracting features.
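The window-based filtering described above can be sketched as follows. The sampling rate and the threshold bands here are illustrative placeholders (the paper's actual thresholds were derived from its unattended recording); only the 2.5-second window length comes from the text:

```python
import numpy as np

def drop_unattended(ax, ay, az, fs=50, win_s=2.5,
                    bounds=((-0.5, 0.5), (-0.5, 0.5), (9.3, 10.3))):
    """Return (start, end) index pairs of the windows to keep.

    A 2.5 s window is discarded when the medians of the x, y, and z
    series all fall inside their 'unattended' bands. fs and bounds are
    illustrative values, not the paper's thresholds.
    """
    n = int(fs * win_s)
    kept = []
    for i in range(0, len(ax) - n + 1, n):
        mx, my, mz = (np.median(s[i:i + n]) for s in (ax, ay, az))
        unattended = all(lo <= m <= hi
                         for m, (lo, hi) in zip((mx, my, mz), bounds))
        if not unattended:
            kept.append((i, i + n))
    return kept
```

Windows whose medians escape any band (i.e. the phone is actually moving) survive the filter, which preserves the signal shape better than dropping individual low-acceleration samples.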
Table 1: Features extracted from the accelerometer signals
| Spectral entropy | Histogram (16 bins) | Standard deviation | Interquartile range | Mean |
| Bandpower | Dynamic Time Warping distance between pairs of signals | Range (max-min) | Peak magnitude to RMS ratio | Energy |
| Median frequency | Mutual information between pairs of signals | Screen on/off | Correlation between pairs of signals | |
4 Experimental Setup
4.1 Characteristics of Phone Movement Pattern
We hypothesized that phone movement patterns recorded under different operational contexts could be used to build continuous authentication systems. The hypothesis was based on the observations that phone movements are readily available and easily collectible throughout the user's interaction with the device; are exerted by almost every user; could be distinctive if observed under specific contexts; do not change frequently; require no user attention or intervention; and are hardly imitable. The initial design goals of the system were to: run in the background; identify the context of the phone usage automatically; select the corresponding authentication model; and verify the individual's authenticity at frequent intervals.
On first observation, the phone movement patterns were not as distinctive among users as we had expected. However, when divided into several contexts through clustering, they turned out to be quite distinctive among users [9, 8]. One of the biggest challenges we faced was therefore to automate the identification of contexts from the unlabeled phone movement patterns. We considered several possibilities: (1) developing semi-supervised models to divide the phone movement patterns into well-known human activities; (2) developing samples for human activities and applying sparse coding to identify the contexts; and (3) clustering the unlabeled behavioral footprints and using the cluster indices to train a Context Identification Model (CIM). We realized that mapping the contexts to well-known human activities was not required for developing the authentication systems, so we decided to implement option (3).
The next challenge was to define features that could help the clustering algorithm divide the data based not only on distance-based similarity but also on structural similarity. This was one of the reasons to cluster the data at the feature level rather than at the data level. To capture the structural similarity of each signal as well as of pairs of signals, we extracted a variety of features, including spectral entropy, histograms (16 bins), and Dynamic Time Warping (DTW) distances between pairs of signals (acceleration in the x, y, and z directions). The maximum number of clusters was fixed to eight for each user in our experiment. The distribution of data among the clusters varied drastically for every user (see Figure 1). Some clusters had only a nominal number of samples and hence were neglected in the experiment. This behavior was expected, as some users might have operated their phones in very different circumstances than others. It would be interesting to find the optimal number of clusters (contexts) that offers the best authentication accuracy.
A flowchart of the system is presented in Figure 2. The design of the authentication system can be divided into two phases: enrollment and continuous verification. The enrollment phase included three steps: user-specific clustering of the training data; training a user-specific Context Identification Model (CIM) using the clusters and their indices; and finally training context-specific authentication models for each user. The continuous verification phase included identifying the context of the test samples using the CIM; selecting the appropriate authentication model using the context id produced by the CIM; and then comparing the scores (assigned by the authentication model to the test samples) with a predefined threshold to make the authentication decision.
4.2 Feature Extraction
The clustering, context classification, and user authentication were carried out at the feature level. The features were extracted from the signals formed by the accelerometer readings in the x, y, and z directions and their resultant, defined as a_r = sqrt(a_x^2 + a_y^2 + a_z^2). The feature set consisted of descriptive statistical features such as the mean, standard deviation, and interquartile range, as well as features from the frequency, spectral, and information-theory domains, e.g. band power, median frequency, spectral entropy, and the mutual information, correlation, and Dynamic Time Warping distance between pairs of signals (see Table 1).
To implement the continuous authentication paradigm, we extracted these features using a sliding-window protocol with a window size of ten seconds and an overlap of five seconds [6, 19]. These numbers were derived from the existing body of work in this domain. The screen on/off information was also used as a feature during both the context classification and the authentication process. The value of this feature was the percentage of time the screen was switched on within the window of data used for feature extraction.
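A minimal sketch of this sliding-window extraction, assuming a nominal 50 Hz sampling rate (actual rates varied by device) and computing only a few of the Table 1 features for brevity:

```python
import numpy as np

def sliding_windows(sig, fs=50, win_s=10, step_s=5):
    """Yield 10 s windows with 5 s overlap, as used for feature extraction."""
    n, step = int(fs * win_s), int(fs * step_s)
    for i in range(0, len(sig) - n + 1, step):
        yield sig[i:i + n]

def window_features(ax, ay, az):
    """A subset of the per-window features; the resultant is
    a_r = sqrt(a_x^2 + a_y^2 + a_z^2)."""
    r = np.sqrt(ax**2 + ay**2 + az**2)
    feats = []
    for s in (ax, ay, az, r):
        q75, q25 = np.percentile(s, [75, 25])
        # mean, standard deviation, interquartile range, range (max-min)
        feats += [s.mean(), s.std(), q75 - q25, s.max() - s.min()]
    # one pairwise feature for illustration: correlation between x and y
    feats.append(np.corrcoef(ax, ay)[0, 1])
    return np.array(feats)
```

The full feature set would add the spectral, information-theoretic, and DTW features of Table 1 in the same per-window fashion.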
4.3 Feature Analysis
To reduce the number of features, we evaluated all the extracted features using the correlation-based feature subset selection method with breadth-first search. The feature selection was performed separately for each context of each user. The best subset of features that could distinguish between the genuine and impostor classes for a particular context of a user was selected and used to build the authentication model for that context of that user. On average, 70% or more of the features were discarded through the correlation-based feature selection method.
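As a rough illustration of the idea behind correlation-based selection, the sketch below keeps features that correlate with the class label and drops ones redundant with an already-selected feature. The greedy search and both thresholds are simplifications, not the exact CFS-with-breadth-first-search procedure used in the paper:

```python
import numpy as np

def correlation_filter(X, y, rel_thresh=0.1, red_thresh=0.9):
    """Greedy relevance/redundancy filter (illustrative stand-in for CFS).

    Features are ranked by |correlation with the label|; a feature is kept
    only if it is relevant and not highly correlated with one already kept.
    """
    rel = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    selected = []
    for j in np.argsort(rel)[::-1]:          # most relevant first
        if rel[j] < rel_thresh:              # remaining features irrelevant
            break
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < red_thresh
               for k in selected):           # not redundant with selection
            selected.append(j)
    return selected
```

Running this per context, per user, mirrors the paper's setup of a separate feature subset for every authentication model.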
The scale of the extracted features varied drastically, hence we normalized all the features between zero and one. To decide upon the normalization method, we tested all the features for normality using the one-sample Kolmogorov-Smirnov test. The null hypothesis of each test was that the feature values follow a standard normal distribution; the test returned one if it rejected the null hypothesis at the 5% significance level, and zero otherwise. The test results showed that most of the features were not normally distributed, hence we applied min-max normalization.
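This decision procedure can be sketched with SciPy; `choose_normalization` is a hypothetical helper name, and the min-max fallback reflects the zero-to-one scaling described above:

```python
import numpy as np
from scipy import stats

def choose_normalization(feature, alpha=0.05):
    """KS-test a feature for normality, then min-max scale it to [0, 1].

    The feature is standardized first so it can be compared against the
    standard normal distribution, matching the null hypothesis above.
    """
    z = (feature - feature.mean()) / feature.std()
    _, p = stats.kstest(z, 'norm')
    normal = p >= alpha                      # fail to reject => "normal"
    scaled = (feature - feature.min()) / (feature.max() - feature.min())
    return normal, scaled
```

In the paper's data the normality test failed for most features, which is why the distribution-free min-max scaling was used throughout.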
4.4 Genuine and Impostor Sample Considerations
The authentication models were implemented using two-class (genuine vs. impostor) classifiers, hence they were trained using samples from both classes. For both the training and testing phases, the genuine samples were created using the data belonging to the actual (candidate) user, whereas the impostor samples were created using the data belonging to the rest of the users.
4.5 Context Identification Model (CIM)
The context identification model (CIM) was built separately for each user, assuming that users might have a different number of, and distinct, phone-usage contexts. The number of contexts for each user indeed varied, as shown in Figure 1. To build the CIM, we clustered the normalized feature vectors using the k-means clustering algorithm. Assuming each cluster represented a distinct phone-usage context, the cluster indices were used as class labels: the CIM was trained on the feature vectors with their corresponding cluster ids as labels. The CIM was implemented using the Random Forest classifier, chosen because it has proven effective in existing studies [19, 7, 12].
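A compact sketch of CIM construction with scikit-learn; the hyperparameters (`n_init`, `n_estimators`) are illustrative choices, not values taken from the paper:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

def build_cim(X, max_clusters=8, seed=0):
    """Cluster one user's feature vectors into contexts, then train a
    Random Forest to predict the cluster id (context) of new samples."""
    km = KMeans(n_clusters=max_clusters, n_init=10, random_state=seed).fit(X)
    cim = RandomForestClassifier(n_estimators=100, random_state=seed)
    cim.fit(X, km.labels_)       # cluster indices serve as context labels
    return km, cim
```

Because the clustering is per user, two users' context ids are unrelated; this is exactly why the impostor-selection step later routes other users' samples through the candidate's own CIM.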
4.6 Identifying Phone Usage Contexts
As shown in Figure 2, each test sample was passed through the CIM of the corresponding user to determine its context. Using the predicted context, the corresponding authentication model was selected to provide the authentication decision. For example, if a test sample was classified into context c, then the authentication model that was built during enrollment using the feature vectors belonging to cluster c was used to make the authentication decision.
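The routing logic of this verification step might look like the following sketch; `verify`, the model interfaces, and the 0.5 threshold are illustrative assumptions:

```python
def verify(sample, cim, auth_models, threshold=0.5):
    """One continuous-verification step: identify the sample's context
    with the user's CIM, route it to that context's authentication
    model, and accept if the genuine-class score clears the threshold."""
    context = cim.predict([sample])[0]
    score = auth_models[context].predict_proba([sample])[0][1]  # P(genuine)
    return score >= threshold
```

In deployment this function would run on every ten-second feature window, giving an authentication decision at frequent intervals without user intervention.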
4.7 Context-wise Authentication Models
Multiple authentication models, one for each context, were trained for each user. These classifiers needed knowledge of both the genuine and impostor classes. In our experimental setup, a fixed number of samples was borrowed from the rest of the users and used as impostor samples. A straightforward approach would be to use any samples from users other than the candidate as impostor samples. Instead, we used only those samples from other users that were classified (by the candidate's CIM) into the same context as the model being trained.
Mapping contexts among users was challenging, as a context of one user may not necessarily match any context of another. Therefore, it would not be ideal to use samples from an arbitrary context of another user as impostor samples for training the authentication model of a given context of the candidate user. To address this problem, we simply used the CIM of the candidate user to classify the other users' samples into one of the candidate's contexts. For example, to build the authentication model for context c of user A, the samples belonging to context c of user A were used as genuine samples, while the samples of the other users were first classified using the CIM of user A, and a portion of those classified into context c was used as impostor samples.
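A sketch of this impostor-selection step, assuming a trained CIM with a scikit-learn-style `predict` interface; the function name and sample count are hypothetical:

```python
import numpy as np

def impostor_pool(candidate_cim, others_X, context_id, n_samples=100, seed=0):
    """Pick impostor training samples for one context of the candidate:
    run other users' feature vectors through the *candidate's* CIM and
    keep only those classified into the same context id."""
    preds = np.asarray(candidate_cim.predict(others_X))
    pool = others_X[preds == context_id]
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(pool), size=min(n_samples, len(pool)), replace=False)
    return pool[idx]
```

Using the candidate's own CIM as the gatekeeper sidesteps the context-mapping problem: the impostor samples are, by construction, those that look like the candidate's context c.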
4.8 Verification and Performance Evaluation
Five different classifiers, namely, Logistic Regression, Neural Network, k-nearest neighbor (kNN), support vector machine (SVM), and Random Forest, were used for implementing the authentication models. One reason for employing these classifiers was their effectiveness in existing studies [6, 19, 22, 12, 9, 7, 8, 23]; another was their suitability for both linear and nonlinear recognition problems and their distinct operational characteristics. Logistic Regression was configured as a generalized linear model with the binomial family. The number of neurons in the hidden layer of the Neural Network (Multilayer Perceptron) was set to 10. The kNN was implemented with k=10. The SVM was used with the RBF kernel and C-classification settings. The rest of the settings for all classifiers were left at the defaults provided by the respective R packages.
All authentication models were tested for both genuine and impostor pass rates. The prediction probabilities for unknown samples that came from the genuine user were referred to as genuine scores, while the prediction probabilities for unknown samples that came from impostors were referred to as impostor scores. The performance of every authentication model was evaluated using the equal error rate (EER): the error rate obtained at the threshold where the false accept and false reject rates match. The EER was computed from the genuine scores, the impostor scores, and varying thresholds. The EER has been used extensively to compare authentication systems; the lower the EER, the better the system.
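A minimal threshold-sweep implementation of the EER computation described above; this is a generic sketch, not the paper's exact procedure:

```python
import numpy as np

def compute_eer(genuine, impostor):
    """Sweep the decision threshold over all observed scores and return
    the error rate where the false-reject rate (genuine scores below the
    threshold) meets the false-accept rate (impostor scores at or above it)."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best_gap, eer = np.inf, 1.0
    for t in thresholds:
        frr = np.mean(genuine < t)      # genuine user wrongly rejected
        far = np.mean(impostor >= t)    # impostor wrongly accepted
        if abs(frr - far) < best_gap:
            best_gap, eer = abs(frr - far), (frr + far) / 2
    return eer
```

Averaging the midpoint of FRR and FAR at the closest crossing is a common convention when the two curves do not intersect exactly at a sampled threshold.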
5 Experimental Results and Discussion
5.1 Performance Across User Population
The results of all experiments are summarized in Figure 3. All subfigures except the one in the bottom-right corner show the user-wise performance of Logistic Regression, Neural Network, kNN, SVM, and Random Forest respectively. The box plot for each user was plotted using the mean EERs for each context; each box plot depicts the median EER (center red line) for that user, and red cross whiskers represent the EERs of outlier contexts. Figure 3(f) illustrates the average and standard deviation of the EERs computed across all the contexts of all 57 users. The average EERs were 13.7%, 13.5%, 12.1%, 10.7%, and 5.6%, with standard deviations of 7%, 7%, 4.2%, 4.3%, and 2.7%, for the Logistic Regression, Neural Network, kNN, SVM, and Random Forest classifiers respectively. The median EERs achieved by Random Forest, SVM, and kNN were very consistent across the user population, whereas the Neural Network performed very well for most users but quite poorly for a few. A similar trend can be observed for the Logistic Regression classifier.
Table 2: p-values of the statistical tests comparing pairs of classifiers
| Pair of Algorithms | p-values | | |
| Random Forest - SVM | 4.63E-13 | 1.43E-11 | 8.74E-11 |
| Random Forest - Neural Network | 3.57E-13 | 3.22E-13 | 5.43E-11 |
| Random Forest - Logistic Regression | 1.80E-12 | 2.22E-12 | 2.24E-10 |
| Random Forest - kNN | 1.21E-12 | 2.22E-12 | 1.02E-10 |
| Neural Network - Logistic Regression | 1.54E-10 | 0.8946 | 0.5328 |
5.2 Comparison of the Classifier’s Performance
The average and standard deviation of EERs can be misleading when ranking the quality of these classifiers for authentication purposes. Figure 3(f) gives the impression that the Neural Network-based authentication systems exhibited significantly inferior performance compared to the kNN- and SVM-based systems. However, that may not be the case if we remove the few users who exhibited outlying error rates. Hence, to measure the quality of these classifiers, a series of statistical tests was conducted.
Mixed-effects Analysis of Variance (ANOVA) is generally used to test the statistical significance of differences between the mean error rates of two different methods. Mixed-effects ANOVA assumes that the underlying distribution of the pairwise differences of error rates follows a Gaussian distribution. Therefore, we first tested the pairwise differences of mean EERs for normality using the Kolmogorov-Smirnov (KS) test, under the null hypothesis that the pairwise difference of mean EERs obtained by two algorithms follows a Gaussian distribution. The KS test rejected the null hypothesis at the 5% significance level for every pair of algorithms. Since the pairwise differences of mean error rates failed the normality test for all pairs, we chose to use the Friedman test.
The Friedman test was carried out under the null hypothesis that the algorithms are equally accurate. Table 2 shows that the test resulted in significantly low p-values for all pairs of classifiers except (kNN, NNet), (kNN, LogReg), and (NNet, LogReg), for which we failed to reject the null hypothesis at the 3% significance level. We also performed the Wilcoxon signed-rank test, whose results affirmed the same conclusion as the Friedman test (see Table 2). As the statistical tests indicated that the mean EERs obtained by kNN, NNet, and LogReg are sufficiently similar, we conclude that the differences among the performances of kNN, NNet, and LogReg are statistically insignificant.
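These tests can be reproduced with SciPy. The per-user EERs below are synthetic, drawn around the averages and standard deviations reported in Section 5.1 (5.6% ± 2.7% for Random Forest, 10.7% ± 4.3% for SVM, 12.1% ± 4.2% for kNN), purely for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_users = 57

# synthetic per-user mean EERs centered on the reported averages
rf  = rng.normal(0.056, 0.027, n_users)
svm = rng.normal(0.107, 0.043, n_users)
knn = rng.normal(0.121, 0.042, n_users)

# omnibus Friedman test (null: the classifiers are equally accurate)
_, p_friedman = stats.friedmanchisquare(rf, svm, knn)

# pairwise follow-up with the Wilcoxon signed-rank test
_, p_wilcoxon = stats.wilcoxon(rf, svm)
```

Both tests are rank-based and therefore appropriate here, since the KS test showed the pairwise EER differences were not Gaussian.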
5.3 User-wise Suitability of the Proposed Systems
The behavioral footprints of individuals are subject to change under different operating conditions. Our proposed model addresses this concern to a certain extent by clustering the patterns: the same behavioral trait, if significantly different under different conditions, could be clustered under two or more contexts depending on the operating conditions under which the activity occurred. In Figure 3, we can see that a few users have significantly high error rates; we refer to such users as bad users. The high error rates could be due to the following reasons: (1) the bad users have high variance in their own data, and/or (2) their phone movement patterns overlap too much with those of other users.
Figure 4 shows the impact of the systematic removal of bad users on the overall performance of the proposed systems. The removal of the top 3 (5% of the total) bad users enabled Random Forest to consistently maintain mean EERs below 10% for every user. SVM, kNN, and Neural Network achieved mean EERs below 10% for half of the users, while the rest fell in the 10-20% range. However, with Logistic Regression, more than 20% of the users still had EERs beyond 20%, and more than 40% had EERs between 10% and 20%. Likewise, removal of the top 9 (15% of the total) bad users brought SVM, kNN, and Neural Network into the same league in terms of mean EERs. Random Forest was the most suitable algorithm, whereas Logistic Regression was the least.
6 Conclusion and Future Work
We conclude that it is possible to continuously authenticate smartphone users using only phone movement patterns. Our experimental results show Random Forest to be the most suitable classifier for implementing such an authentication system, outperforming SVM, kNN, Neural Network, and Logistic Regression. The user-wise suitability of the proposed system was also investigated; the results suggested that the proposed system may not be suitable for certain types of users, who could exhibit quite high error rates. Excluding those users, however, could improve the overall performance significantly. Therefore, we conclude that phone-movement-pattern-based authentication systems may not be suitable for every smartphone user. In the future, we plan to explore: the suitability of one-class classifiers for the above system, since selecting impostor samples for training was difficult; the fusion of phone movement patterns with other biometric modalities to improve authentication accuracy; and the fusion of the decisions of authentication models based on different classification algorithms.
This work was supported in part by DARPA Active Authentication contract FA #8750-13-2-0274 and by National Science Foundation Award SaTC #1527795.
- Shukla et al.  Diksha Shukla, Rajesh Kumar, Abdul Serwadda, and Vir V. Phoha. Beware, your hands reveal your secrets! In Proceedings of the 2014 ACM SIGSAC CCS, ’14, 2014. ISBN 978-1-4503-2957-6.
- Ye et al.  Guixin Ye, Zhanyong Tang, Dingyi Fang, Xiaojiang Chen, Kwang In Kim, Ben Taylor, and Zheng Wang. Cracking Android pattern lock in five attempts. Internet Society, 2 2017. ISBN 1891562460.
- Charlton  Alistair Charlton. iphone 5s fingerprint security bypassed by german computer club. http://www.ibtimes.co.uk/apple-iphone-5s-touch-id-fingerprint-scanner-508196, September 2013. [Online; Last accessed 27 April, 2017].
- Zheng et al.  Nan Zheng, Kun Bai, Hai Huang, and Haining Wang. You are how you touch: User verification on smartphones via tapping behaviors. In IEEE-ICNP 2014, pages 221–232, Oct 2014. doi: 10.1109/ICNP.2014.43.
- Li et al.  Lingjun Li, Xinxin Zhao, and Guoliang Xue. Unobservable re-authentication for smartphones. In NDSS, 2013.
-  Abena Primo, Vir V. Phoha, Rajesh Kumar, and Abdul Serwadda. Context-aware active authentication using smartphone accelerometer measurements.
- Kumar et al. [2016a] R. Kumar, VV. Phoha, and R. Raina. Authenticating users through their arm movement patterns. CoRR, abs/1603.02211, 2016a. URL http://arxiv.org/abs/1603.02211.
- Kumar et al. [2016b] R. Kumar, V. V. Phoha, and A. Serwadda. Continuous authentication of smartphone users by fusing typing, swiping, and phone movement patterns. In 2016 IEEE (BTAS-2016), pages 1–8, 2016b.
- Sitová et al.  Z. Sitová, J. Šedenka, Q. Yang, G. Peng, G. Zhou, P. Gasti, and K. S. Balagani. Hmog: New behavioral biometric features for continuous authentication of smartphone users. IEEE-TIFS, 11(5):877–892, 2016.
- Patel et al.  V. Patel, M. Burns, R. Chandramouli, and R. Vinjamuri. Biometrics based on hand synergies and their neural representations. IEEE Access, 5:13422–13429, 2017. doi: 10.1109/ACCESS.2017.2718003.
- Johnston and Weiss  A. H. Johnston and G. M. Weiss. Smartwatch-based biometric gait recognition. In 2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS), pages 1–6, Sept 2015. doi: 10.1109/BTAS.2015.7358794.
- Tang and Phoha  C. Tang and V. V. Phoha. An empirical evaluation of activities and classifiers for user identification on smartphones. In 2016 IEEE (BTAS-2016), pages 1–8, Sept 2016. doi: 10.1109/BTAS.2016.7791159.
- Patel et al.  V. M. Patel, R. Chellappa, D. Chandra, and B. Barbello. Continuous user authentication on mobile devices: Recent progress and remaining challenges. IEEE Signal Processing Magazine, 33(4):49–61, July 2016. ISSN 1053-5888. doi: 10.1109/MSP.2016.2555335.
- Buriro et al.  Attaullah Buriro, Bruno Crispo, Filippo Del Frari, Jeffrey Klardie, and Konrad Wrona. ITSME: Multi-modal and Unobtrusive Behavioural User Authentication for Smartphones, pages 45–61. Springer International Publishing, Cham, 2016. ISBN 978-3-319-29938-9.
- Murmuria et al.  Rahul Murmuria, Angelos Stavrou, Daniel Barbará, and Dan Fleck. Continuous Authentication on Mobile Devices Using Power Consumption, Touch Gestures and Physical Movement of Users, pages 405–424. Springer International Publishing, Cham, 2015. ISBN 978-3-319-26362-5. doi: 10.1007/978-3-319-26362-5˙19.
- Mahbub et al.  U. Mahbub, S. Sarkar, V. M. Patel, and R. Chellappa. Active user authentication for smartphones: A challenge data set and benchmark results. In 2016 IEEE 8th International Conference on Biometrics Theory, Applications and Systems (BTAS), pages 1–8, Sept 2016. doi: 10.1109/BTAS.2016.7791155.
- Mahbub and Chellappa  U. Mahbub and R. Chellappa. Path: Person authentication using trace histories. In 2016 IEEE 7th Annual Ubiquitous Computing, Electronics Mobile Communication Conference (UEMCON), pages 1–8, Oct 2016.
- Jain et al.  A. K. Jain, A. Ross, and S. Prabhakar. An introduction to biometric recognition. IEEE Trans. Cir. and Sys. for Video Technol., 14(1):4–20, January 2004. ISSN 1051-8215. doi: 10.1109/TCSVT.2003.818349. URL http://dx.doi.org/10.1109/TCSVT.2003.818349.
- Kumar et al.  R. Kumar, V. V. Phoha, and A. Jain. Treadmill attack on gait-based authentication systems. In 2015 IEEE (BTAS-2015), pages 1–8, 2015.
- Massey  Frank J. Massey. The Kolmogorov-Smirnov test for goodness of fit. Journal of the American Statistical Association, 46(253):68–78, 1951.
- Jain et al.  Anil Jain, Karthik Nandakumar, and Arun Ross. Score normalization in multimodal biometric systems. Pattern Recogn., 38(12):2270–2285, December 2005. ISSN 0031-3203.
- Kwapisz et al.  J.R. Kwapisz, G.M. Weiss, and S.A. Moore. Cell phone-based biometric identification. In IEEE-BTAS, pages 1–7, Sept 2010.
- Serwadda et al.  Abdul Serwadda, Vir V. Phoha, Zibo Wang, Rajesh Kumar, and Diksha Shukla. Toward robotic robbery on the touch screen. ACM Trans. Inf. Syst. Secur., 18(4), May 2016.
- Friedman  M. Friedman. The use of ranks to avoid the assumption of normality implicit in the analysis of variance. Journal of the American Statistical Association, 32(200):675–701, 1937.