Codes for the paper "WristAuthen: A Dynamic Time Warping Approach for User Authentication by Hand-Interaction through Wrist-Worn Devices".
The growing use of wearable devices in context-aware computing and pervasive sensing systems has raised their potential for quick and reliable authentication techniques. Since personal writing habits differ from person to person, it is possible to realize user authentication through writing. This is of great significance, as sensitive information is easily collected by these devices. This paper presents a novel user authentication system for wrist-worn devices that analyzes the interaction behavior of users, and is both accurate and efficient for practical usage. The key feature of our approach lies in using the effective Savitzky-Golay filter and the Dynamic Time Warping method to obtain fine-grained writing metrics for user authentication. These new metrics are relatively unique from person to person and independent of the computing platform. Analyses are conducted on wristband-interaction data collected from 50 users with diversity in gender, age, and height. Extensive experimental results show that the proposed approach can identify users in a timely and accurate manner, with a false-negative rate of 1.78%, a false-positive rate of 6.7%, and an area under the ROC curve of 0.983. Additional examinations of robustness to various mimic attacks, tolerance to flawed training data, and comparisons with other methods further analyze the applicability.
Nowadays, the authentication and identification process on mobile devices is becoming increasingly important. For instance, many people use online banking, and their property is directly threatened by potential attacks. In general, there are three types of authentication factors: something the user knows, such as a password or a PIN code; something the user has, such as a security token; and biometric features, such as fingerprints. Passwords are the most common approach, but [1, 2, 3] have shown that they suffer from security problems. Identification by biometric features appears to provide a more accurate and safer approach to authentication. Biometric traits fall into two types: physiological features such as fingerprints or the iris [4, 5], and behavioral features such as gait or keystroke dynamics. Moreover, attackers might even capture users' faces and fingerprints at public events and then use these biometrics for authentication. In contrast, behavior-based methods can resist attackers better, because behavior is more difficult to simulate or forge.
Wrist sensing, with data collected from wrist-worn devices, has many applications.  discovered that wrist activity can help distinguish sleep from wakefulness; they distinguished the two correctly approximately of the time.
developed a novel usage of wrist motion: they used watch-like sensors to continuously track wrist motion throughout the day and detect periods of eating. They described an algorithm that segments and classifies eating periods, and reported its accuracy at 1-s resolution.
In the authentication field, some researchers  found that the photoplethysmographic (PPG) sensor in smartwatches, usually used for measuring heart rate, could distinguish a specified hand motion performed by the user. Their experiments showed that a continuous wrist movement repeated three times achieved an average error rate of 11.6%, and a movement repeated nine times achieved an average error rate of 8.8%. However, data collected by a PPG sensor might be affected by the user's physical condition, in that a person's heart rate can be much higher after strenuous exercise.
Recently, authentication by wrist-worn devices has emerged with various approaches.  developed a motion-based authentication scheme for wrist-worn devices using a histogram method and the dynamic time warping method. They set specific gestures for authentication, and the reported EER could be quite low. However, compared to our method, the range of the hand motion is too large (for example, drawing a huge circle in front of the user's body), which is not convenient in public places. Moreover, the motion may be observed and forged by attackers, making it less safe than our method.
used a digital wristband to monitor a user's behavior after login. They analyzed the user's habits of using the mouse and keyboard, and then decided whether to deauthenticate the current user. They could verify a portion of users within 11 seconds and more within 50 seconds. However, their method mainly addresses the risk of a user forgetting to log out after use, and it takes 50 seconds of background operation to reach their reported accuracy. In contrast, our method focuses on instantaneous authentication.
worked on signature verification with wrist-worn devices and successfully determined whether a signature was genuine or forged. They collected data with the accelerometer and gyroscope, extracted features with dynamic time warping (DTW), and trained several classifiers, finally obtaining 0.98 AUC and 0.05 EER. Despite the high accuracy, there are some disadvantages in comparison with our method. They require labels for both forged and genuine signatures, but in practice we do not have forged signature data. Because of limited forged signature data, the system cannot cover all situations, which results in unexpected classifications for inputs not included in the training data; this flaw is illustrated in detail in Section 4.5. Our method needs only genuine signatures and can reject forged signatures automatically.
In general, people's writing behaviors vary greatly from one person to another, but several challenges remain. Wrist movement data is hard to distinguish when letters are small, while writing larger words produces more distinguishable data. In order to increase usability and the capability to defend against simulated attacks, we restrict the minimum size of every letter to a square with a given side length. Data captured by the accelerometer and gyroscope can differ even when the same person writes the same word, since they might move quicker in one letter and slower in another, or make an unexpected stop somewhere. So we chose the DTW (Dynamic Time Warping) algorithm to cover these complex situations, enabling the similarities in writing to stand out from the discrepancies. There is no direct relationship between the path of the wrist and the writing information, since the former does not imply finger movement. As a result, simply reconstructing the wrist path from wrist movement is not effective for authentication.
Our method offers several advantages. (1) Hard to attack: even if someone knows the word, the font is hard to acquire; even if the attacker knows how the user writes the word, it is difficult to simulate the writing habit well enough to pass authentication, as the related experiments will demonstrate. (2) Efficient and convenient: it does not need much computation and can be finished in 1 to 2 seconds, fitting well in offices or classrooms, where people usually use a pen. (3) Suitable for protecting extremely important assets, such as safes and classified documents; our method can also provide a secondary password in situations like online payment.
Among all available sensors, we selected the accelerometer and the gyroscope, which capture wrist movements precisely. Our method is not restricted to a particular type of digital device, since these two sensors exist in most wrist-worn devices nowadays. The accelerometer measures the acceleration of the Band along the x, y, and z axes; a value of 1 means the Band is under 1 g of acceleration, where g = 9.8 m/s^2. The gyroscope measures the angular velocity of the Band, also along the x, y, and z axes; when the angular velocity is 1 degree per second, the value will be 1.
Fifty distinct individuals' wrist movement data were collected at a sampling rate of 62 samples per second. Each individual was required to write (using their right hand) the word "love" and an arbitrary word with 3 to 5 lowercase letters, at least 6 times for each word. In order to capture personal writing characteristics precisely, they were required to keep their writing speed at their normal level and to make each word larger than the required minimum size.
The raw data is recorded as a sequence of tuples over time, each tuple containing a timestamp and 6 motion dimensions denoting the acceleration and angular velocity along the x, y, and z axes. In total we collected 600 samples, with each file describing one person's wrist movement while writing a particular word in 6 motion dimensions.
It is hard to extract features in such a high dimension, so we deal with each motion dimension versus time separately in 2-D. For signal denoising and smoothing, we adopted the Savitzky-Golay filter, a digital filter that increases the signal-to-noise ratio while keeping the features of the signal, to preprocess our data. This is done by fitting successive nearby data points with a low-degree polynomial; to find the polynomial, we take equally spaced points and construct least-squares equations. In this paper, we chose sets of 9 successive points for smoothing, with a polynomial degree of 2. Every point was filtered except the first four and the last four, since there are not enough points around them; for those points, we reduced the length of the successive-point set and applied a similar process. The denoising effect is shown in Fig. 3: the main shape of the curve does not change while the curve becomes smoother, and extreme peak values within a small neighborhood are also corrected by the S-G filter.
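As a sketch, the smoothing step can be reproduced with a minimal Savitzky-Golay filter in pure NumPy; the window of 9 points and degree-2 polynomial follow the text, while shrinking the window at the boundaries is our reading of the reduced-length treatment:

```python
import numpy as np

def savgol_smooth(y, window=9, degree=2):
    """Minimal Savitzky-Golay smoother: fit a degree-2 polynomial by least
    squares over each window of 9 successive points and evaluate it at the
    centre point; near the boundaries the window is shrunk, mirroring the
    reduced-length treatment described above."""
    half = window // 2
    y = np.asarray(y, dtype=float)
    out = np.empty_like(y)
    for i in range(len(y)):
        lo, hi = max(0, i - half), min(len(y), i + half + 1)
        x = np.arange(lo, hi)
        coeffs = np.polyfit(x, y[lo:hi], deg=min(degree, hi - lo - 1))
        out[i] = np.polyval(coeffs, i)
    return out

# Synthetic noisy signal standing in for one motion dimension.
t = np.linspace(0, 2 * np.pi, 200)
noisy = np.sin(t) + np.random.default_rng(0).normal(scale=0.15, size=t.size)
smoothed = savgol_smooth(noisy)
# The smoothed curve should sit closer to the underlying shape.
assert np.mean((smoothed - np.sin(t)) ** 2) < np.mean((noisy - np.sin(t)) ** 2)
```

In production code, `scipy.signal.savgol_filter` provides the same filter with optimized coefficient computation.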
It is necessary to find a robust and efficient algorithm to measure the distance between two time series. In practice, two time series may share very similar shapes without being aligned on the time axis, so the Euclidean distance (or other pointwise norms) fails to capture the similarity between them.
The dynamic time warping (DTW) method meets our requirement. It provides a more intuitive distance measurement, shown in Fig. 4: it matches two time series with similar shapes even if they are not close under the Euclidean distance. Below is a short explanation of the DTW algorithm.
Assume there are two time series Q and C of lengths n and m, where Q = q_1, q_2, ..., q_n and C = c_1, c_2, ..., c_m.
Then an n-by-m matrix is built, where the element d(i, j) refers to the distance between q_i and c_j: d(i, j) = (q_i - c_j)^2. Now we want to find a route from d(1, 1) to d(n, m). Denote the route as W = w_1, w_2, ..., w_K, where w_k is the matrix element we stand on at step k. The length of W is denoted K, and we have max(n, m) <= K < n + m - 1.
We are interested in the path reaching the minimum: DTW(Q, C) = min over W of sqrt(sum of w_k for k = 1, ..., K).
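The recurrence above can be sketched as a straightforward dynamic program (a minimal O(nm) version, without the warping-window speedups often used in practice):

```python
import numpy as np

def dtw_distance(q, c):
    """DTW distance between 1-D series q and c: d(i, j) = (q_i - c_j)^2,
    accumulate the minimal cost over paths moving right, down, or
    diagonally, and return the square root of the total."""
    n, m = len(q), len(c)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (q[i - 1] - c[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(np.sqrt(D[n, m]))

# Two series with the same shape, shifted in time: DTW sees them as
# identical while the Euclidean distance does not.
a = [0, 0, 1, 2, 1, 0, 0]
b = [0, 1, 2, 1, 0, 0, 0]
assert dtw_distance(a, b) == 0.0
assert np.linalg.norm(np.array(a) - np.array(b)) > 0
```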
The authentication process consists of 4 parts: sensing, filtering, training, and identification. The flow chart is shown in Fig 5.
We used the accelerometer and gyroscope data of wrist movement to express the writing behavior.
We used the S-G filter introduced in Section 2.3: the original data was smoothed, and points with extremely large absolute values were corrected.
The DTW distance was selected for authentication. To train our system, we obtain a group of data by letting the user write the same word several times, and then calculate the ideal DTW distance of this group of data; this concept is introduced in the next section.
For a given piece of testing data, we first calculate the DTW distance from the testing data to the group of training data, then compare it with the ideal DTW distance of the group. Finally, the total similarity score of the testing data is calculated to decide whether to accept or deny.
A proper measurement is needed to capture the similarity within one person's writing style while highlighting the discrepancies among people. In fact, we only have one class of data, in that the user sets the "password" by writing a certain word several times. We therefore put forward two definitions here:
The word "ideal" expresses our expectation of testing data that should pass the authentication: if the user produces testing data whose DTW distance to the group is close to the ideal DTW distance, the user passes. For a group of training data, we calculate the DTW distance between every pair of training samples in each of the 6 motion dimensions. Then, for each dimension from 1 to 6, we choose the upper quartile of these pairwise distances as the ideal distance for that dimension. From this calculation we get the group ideal DTW distance.
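As a small sketch, the group ideal distance for one motion dimension is just the upper quartile of the within-group pairwise DTW distances:

```python
import numpy as np

def group_ideal_distance(pairwise_dists):
    """Upper quartile of all pairwise DTW distances within a training
    group, for a single motion dimension."""
    return float(np.percentile(pairwise_dists, 75))

# Pairwise distances for a toy training group.
assert group_ideal_distance([1.0, 2.0, 3.0, 4.0, 5.0]) == 4.0
```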
For testing data, we intend to measure its distance to the training group. During our tests, we found that the speed of writing and the size of the word differed each time, although the user was asked to keep them constant. So we calculated the final DTW distance with proper weights. For each of the 6 motion dimensions we computed the DTW distance between the testing data and every training sample, and sorted these distances in ascending order. We then chose the probability mass function of the Poisson distribution as the weight of the i-th smallest distance, and the final DTW distance is the normalized weighted sum of the sorted distances.
By the nature of the Poisson distribution, the weights mainly concentrate near the Poisson parameter, and for distances of high rank the weights are almost 0. That is, for given testing data, the more similar training data receive higher weights when we calculate the DTW distance from the testing data to the training group.
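A sketch of the weighting, with the Poisson parameter `lam` chosen illustratively since the text does not fix its value:

```python
import math

def poisson_weighted_distance(dists, lam=1.0):
    """Sort the DTW distances from the test sample to each training sample,
    weight the i-th smallest by the Poisson PMF at i, and return the
    normalised weighted sum; high-ranked (dissimilar) samples get weights
    near zero, which is what gives the fault tolerance to bad training data.
    lam is illustrative: the paper leaves the Poisson parameter unstated."""
    ranked = sorted(dists)
    weights = [math.exp(-lam) * lam ** k / math.factorial(k)
               for k in range(len(ranked))]
    total = sum(weights)
    return sum(w * d for w, d in zip(weights, ranked)) / total

# One bad training sample (distance 9.0) barely moves the aggregate.
d = poisson_weighted_distance([1.0, 1.1, 1.2, 9.0])
assert d < sum([1.0, 1.1, 1.2, 9.0]) / 4  # well below the plain mean
```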
As a result, the most outstanding advantage is a high fault tolerance to the input training data: if some bad training data is mixed into the training set, the system can automatically ignore it. The performance of this fault tolerance is examined experimentally in Section 4.3.
The judgment happens when a user wants to pass the authentication. First, the user wears the Band and writes a word as the testing data. Then the system calculates the total similarity score between it and the training data. If the score is high enough, the access request is accepted; otherwise it is denied.
Based on these definitions of the group, when testing data comes into the system, we define a similarity score for each of the 6 motion dimensions.
We set a weight for each of the 6 motion dimensions, with the weights summing to 1. The TSS (Total Similarity Score) is then the weighted sum of the 6 per-dimension scores.
We can simply set all weights equal to 1/6; an AUC test then lets us tune better weights for each specific system. Finally we set a threshold: if the TSS exceeds it, the user passes the authentication; otherwise access is denied.
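The decision step can be sketched as follows. The paper does not spell out the per-dimension score formula, so `similarity_score` here is a hypothetical choice (a score that decays as the observed DTW distance moves away from the group ideal distance), and the threshold value is illustrative:

```python
import math

def similarity_score(d, ideal):
    # Hypothetical per-dimension score: 1.0 when the observed DTW distance
    # equals the group ideal distance, decaying as they diverge.
    return math.exp(-abs(d - ideal) / ideal)

def total_similarity_score(dists, ideals, weights=None):
    """TSS: weighted sum of the per-dimension scores, weights summing to 1
    (uniform 1/6 over the 6 motion dimensions is the simple choice)."""
    if weights is None:
        weights = [1.0 / len(dists)] * len(dists)
    return sum(u * similarity_score(d, i)
               for u, d, i in zip(weights, dists, ideals))

def authenticate(dists, ideals, threshold=0.6):
    # threshold is illustrative; the paper tunes it per system via AUC tests
    return total_similarity_score(dists, ideals) >= threshold

ideals = [1.0] * 6
assert authenticate([1.05] * 6, ideals)      # genuine: distances near ideal
assert not authenticate([3.0] * 6, ideals)   # attack: distances far off
```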
Another important issue is completing the authentication process quickly. The computational complexity of our system is low, since the group ideal DTW distances can be stored in the system; the cost scales with the number of motion dimensions, the number of sampling points, and the number of training groups.
In this section, several experiments are presented to evaluate the performance of our system. We demonstrate that the system distinguishes well between authorized users and different types of mimic attackers. The system is also designed to tolerate some improper data in the training set, and its performance can be further improved by using personalized signatures or patterns as the password.
The first issue is practicability: we intend to demonstrate that there is enough discrimination between authorized and unauthorized users in our system. A self-similarity test is shown in Fig 6. We calculated the TSS among the passwords of 7 distinct users, where each training group contains 5 samples. Under the chosen threshold, the results show that the TSS from a sample to the group it belongs to is much higher than its TSS to the other groups. The self-similarity within each group fits our expectation well.
The next step is to show the discrimination in practical usage. We selected 15 users' writing data and trained each user's system with 5 trials. Under the chosen threshold, the resulting FNR (false negative rate) and FPR (false positive rate) are shown in Fig 7 (left), from which we obtained low averages for both. Our system thus achieves high self and non-self discrimination under a proper threshold.
Generally, resistance to various attacks is a vital issue for an authentication system. Traditional authentication methods such as PIN codes and fingerprint identification can be attacked in several ways. For instance, if a user is recorded on video while typing a PIN code, the password is easily stolen; another study  showed that if someone is photographed while waving a hand, the fingerprint can be restored by an attacker using only the photo. Our system should therefore resist such attacks. We trained the system with 25 samples of "love" written by a user and tested 3 types of attack methods. The results are shown in Fig 7 (right).
One of the simplest attacks assumes the attacker only knows the word the user uses. We asked 15 attackers to write the word "love" in their own style, for 10 trials each.
This attack happens if the attacker obtains the script the user has written; for example, the user wrote the word on a piece of paper for authentication and the attacker got that paper. 15 attackers were asked to forge the script, for 10 trials each.
If the attacker records a video of the user's wrist movement during authentication, the attacker might be able to imitate the whole writing process and finally pass the authentication; this is the most threatening and challenging attack. We recorded a video while the user was writing the password, and then let 15 attackers try their best to imitate the user's writing, for 10 trials each. Fig 8 shows that some of the attacker's motion-dimension curves are very close to the user's, while others remain distinct.
The results show that the attacker's TSS increases as the attacker obtains more and more information about the user's authentication process. Through the attack simulation, we found that although the attacking data cannot be completely distinguished from the user's test data, a boundary lies between the TSS of attackers and that of authorized users. Considering both security and convenience, the threshold is suggested to be set higher, although the TPR (true positive rate) then decreases. This means that to defend against attacks better, some TPR is sacrificed. Fortunately, we came up with a practical method to increase the TPR, introduced in Section 4.4.
The authentication system is supposed to be well trained by the user before being put into service. However, the user might provide some unreasonable training data: words written in abnormal ways. This might happen, for example, when the user provides the training data in a hurry or is interrupted by others. It is therefore necessary to check the overall performance of the system when improper data is mixed into the training group, namely its fault tolerance.
We trained the system with 10 original trials and gradually added wrong trials to the training groups, choosing 50 good trials and 50 bad trials for testing. The resulting rates are shown in Fig 9. The TPR stays at 1 at the beginning, because our algorithm gives more weight to more similar trials when calculating the DTW distance from a testing sample to a group. The FPR increases as the proportion of bad data rises, because the group ideal DTW distance can no longer provide the relative distance we want. Fortunately, the FPR remains under control as long as the percentage of wrong data is below 50%. This is a fine result, indicating that our system is robust to abnormal training data.
In order to examine the discrimination of the 6 motion dimensions, an AUC (Area Under Curve) test was implemented for each motion dimension based on its ROC curve; the results are shown in Fig 10, with a total AUC of 0.947. The discrimination of a motion dimension is better if its AUC value is higher, so we distribute the weights based on the AUC values, requiring a minimum level of performance for each motion dimension, and obtain the weights accordingly.
Then we repeated the attack simulation of Section 4.2. The training set is composed of 10 samples from one person plus another 100 samples from 20 different persons, and the test set is composed of 40 samples from the first person. The results show that the FPR of all attacks is zero while the TPR increases to 90% under the original threshold; even with an adjusted threshold we can still block all attacks, with the TPR increasing to 94%. The AUC value of the whole system also increased to 0.983.
The support vector machine (SVM) has been widely used in behavior-based authentication approaches and can sometimes achieve good results. We therefore further compare our approach with the SVM; we show the results and discuss the flaw of this method. The SVM was adapted to our authentication scenario by the following steps.
Each trial has 6 motion dimensions, and 9 features are extracted for each motion dimension.
Frequency-domain features: energy and entropy. Let F = (f_1, ..., f_N) be the vector after the Fast Fourier Transform (FFT). Then, using the standard definitions, the energy is (1/N) times the sum of |f_i|^2, and the entropy is the Shannon entropy of the normalized power spectrum p_i = |f_i|^2 / (sum over j of |f_j|^2).
Additional features: we added two vectors as features. Many times we randomly chose 2 points in each motion dimension and calculated their difference, giving an empirical probability distribution of point differences in each motion dimension. The other additional feature was the vector of peak values of the trials.
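The frequency-domain features above can be sketched as follows, using the standard energy and spectral-entropy definitions (the paper's exact normalization is not shown, so this is one common choice):

```python
import numpy as np

def frequency_features(signal):
    """Energy and entropy of one motion dimension after the FFT.
    Energy: mean squared magnitude of the spectrum.
    Entropy: Shannon entropy of the normalised power spectrum."""
    power = np.abs(np.fft.fft(signal)) ** 2
    energy = power.sum() / len(signal)
    p = power / power.sum()
    p = p[p > 0]                      # drop zero bins to avoid log(0)
    entropy = float(-np.sum(p * np.log2(p)))
    return float(energy), entropy

# A pure tone concentrates its power in few bins (low entropy), while
# noise spreads it across the spectrum (high entropy).
n = 256
tone = np.sin(2 * np.pi * 8 * np.arange(n) / n)
noise = np.random.default_rng(1).normal(size=n)
assert frequency_features(tone)[1] < frequency_features(noise)[1]
```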
Thus, there are 54 scalar features in total, plus the 2 additional vector features. Since the feature dimension is high, we use L1-penalized regression to extract important features.
The first 54 features may be strongly correlated, so to reduce the time expense we need to find important, uncorrelated features. Fig 11 shows their correlation matrix.
The regularization method provides a good way to estimate the contribution of each feature. Two popular regularization methods are L2 (ridge) and L1 (lasso) regression. The plain linear method finds the coefficients by directly minimizing the squared error ||y - Xb||^2, while the regularized method adds one more term, a penalty lambda * ||b||, where lambda is a given parameter. The coefficients given by L2 regression show the contribution of each feature (Fig 12, left), while those given by L1 regression tend to be sparse (Fig 12, right): if two important features are strongly correlated, only one of them will be reported as important.
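A minimal sketch contrasting the two penalties (closed-form ridge, and lasso via coordinate descent with soft-thresholding; the lambda values are illustrative, not taken from the paper):

```python
import numpy as np

def ridge(X, y, lam):
    """L2-penalised least squares: b = (X^T X + lam*I)^(-1) X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def lasso(X, y, lam, n_iter=200):
    """L1-penalised least squares via coordinate descent: each coefficient
    is updated by soft-thresholding, which drives redundant features'
    coefficients exactly to zero."""
    n, d = X.shape
    b = np.zeros(d)
    for _ in range(n_iter):
        for j in range(d):
            r = y - X @ b + X[:, j] * b[j]        # residual excluding j
            rho = X[:, j] @ r
            z = X[:, j] @ X[:, j]
            b[j] = np.sign(rho) * max(abs(rho) - lam / 2, 0.0) / z
    return b

# Two perfectly correlated features: ridge spreads weight over both,
# lasso keeps one and zeroes the other.
rng = np.random.default_rng(2)
x = rng.normal(size=100)
X = np.column_stack([x, x])
y = 2.0 * x
b_ridge = ridge(X, y, lam=0.1)
b_lasso = lasso(X, y, lam=0.1)
assert abs(b_ridge[0]) > 0.5 and abs(b_ridge[1]) > 0.5
assert abs(b_lasso[1]) < 1e-6 and b_lasso[0] > 1.5
```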
We now have 400 samples of 20 words, with each word written 20 times. 5-fold cross validation was applied, recording the mean precision each time; the final mean average precision equals the mean of the 5 recorded precisions.
| Feature selection | 54 features | all features |
However, compared to our method, it is hard for the SVM to detect abnormal points: given testing data, the SVM classifies it into one of the existing classes even if the input should not belong to any of them. This flaw makes it fairly unreliable.
We collected 10 samples for each of 10 different words (not "love"), and 20 samples of one additional word, "love", which is the password. We then chose some training data from each group, leaving the rest as testing data. After training the SVM on the training data and classifying the testing data, 5 of the other-word samples (such as the word "book") were classified as "love", which makes the authentication extremely insecure!
In this paper, we presented a new approach to user authentication using the behavioral biometrics provided by wrist-worn devices. Our approach focuses on effective modeling methods to obtain fine-grained writing-interaction metrics, which have two advantages over other metrics. First, fine-grained writing-interaction metrics can distinguish a user accurately with very few strokes. Second, the metrics are hard to attack even if the whole authentication process is captured on an attacker's video. Extensive experimental results show that the proposed approach can identify users in a convenient and accurate manner, making it suitable for online and mobile authentication.