Adversarial Attacks on Remote User Authentication Using Behavioural Mouse Dynamics

Mouse dynamics is a potential means of authenticating users. Typically, the authentication process is based on classical machine learning techniques, but recently, deep learning techniques have been introduced for this purpose. Although prior research has demonstrated how machine learning and deep learning algorithms can be bypassed by carefully crafted adversarial samples, there has been very little research on behavioural biometrics in the adversarial domain. In an attempt to address this gap, we built a set of attacks, which are applications of several generative approaches, to construct adversarial mouse trajectories that bypass authentication models. These generated mouse sequences serve as the adversarial samples in the context of our experiments. We also present an analysis of the attack approaches we explored, explaining their limitations. In contrast to previous work, we consider the attacks in a more realistic and challenging setting in which an attacker has access to recorded user data but does not have access to the authentication model or its outputs. We explore three different attack strategies: 1) statistics-based, 2) imitation-based, and 3) surrogate-based; we show that they are able to evade the functionality of the authentication models, thereby adversely impacting their robustness. We show that imitation-based attacks often perform better than surrogate-based attacks, unless the attacker can guess the architecture of the authentication model. For the latter case, we propose a potential detection mechanism against surrogate-based attacks.


I Introduction

Static authentication (e.g., passwords and PINs) has been the predominant means of performing user authentication in many computer systems. With the demonstrated effectiveness of machine learning methods, researchers have turned to these methods to perform biometrics-based authentication, both physiological [1] (e.g., iris, facial recognition, and fingerprints) and behavioural (e.g., keystrokes [2] and mouse dynamics). Although physiological-based authentication has proven to be highly effective, it can be costly to implement due to dedicated hardware requirements. Furthermore, this form of authentication is easily bypassed if one can gain access to a copy of the required features, since the features are usually clearly defined. Behaviour-based authentication, on the other hand, has the potential to be much more cost efficient, since it does not require extra hardware to accomplish the same task. Moreover, its non-intrusive nature allows users to be authenticated continuously, which provides an additional layer of security by allowing only the legitimate user to have access to and continuous usage of a protected resource.

In the area of behaviour-based authentication by means of mouse dynamics analytics, a number of efforts have been undertaken [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 2] to improve the performance of mouse dynamics authentication models. They range from statistics-based approaches [6, 8] and classical machine learning models [5, 7] to deep learning [4]. Most of the adversarial attacks proposed in prior studies focused on the image domain [16, 17, 18, 19, 20, 21, 22], which is considered less challenging since there are distinct and observable features within images that humans can identify. In contrast, mouse sequences are hard to distinguish through mere visual inspection by humans, as illustrated by Figure 1. Therefore, to the best of our knowledge, very little work has been done on performing adversarial attacks in the domain of mouse dynamics, which deals mainly with temporal data, or on assessing the robustness of mouse dynamics authentication models to such attacks; we attempt to address these gaps in this work.

(a) Sample mouse sequence of User 3
(b) Sample mouse sequence of User 7
(c) Sample mouse sequence of User 9
Fig. 1: Sample mouse sequences belonging to three different users from the Balabit dataset [23] illustrating the relative difficulty of the authentication problem. This is because human visual inspection does not reveal whether a sample contains information relevant to the prediction of the legitimacy of a particular user, in contrast to many problems in the image domain for which adversarial attacks have been applied (e.g., digits and object classification).

We consider a more realistic attack scenario than in previous works [16, 17, 18, 19, 20, 21, 22]; in our scenario, an attacker has access to the victim’s host machine (i.e., the client) but does not have access to the authenticators or their outputs, which are deployed on a remote, secured central authentication server. The attacker is only capable of recording the mouse movements of the targeted user. This makes the attack scenario more challenging than scenarios in which the attacker has some kind of access to the authenticators.

The contributions of our research include the following:

  1. Considering the remote behavioural mouse dynamics authentication scheme, we propose three adversarial approaches to evade the authentication models: statistics-based, imitation-based, and surrogate-based. We examine this in a novel and realistic setting as stated above.

  2. We address the question of which approach is the most effective for performing adversarial attacks in this kind of setting. We evaluate the robustness of machine learning-based authentication models to such adversarial attacks. As representative examples, we choose one authenticator based on engineered features and another based on neural networks.

  3. We analyse the relationships between the difficulty of the authentication task, the type of adversarial attack, and the adversarial success rate.

  4. Given that the attacker is able to guess the architecture of the models, we perform the surrogate-based attack and analyse its impact on the robustness of the models. Finally, we propose a potential detection mechanism.

This paper is organized as follows. Section II discusses related prior work, while Section III lays out our assumptions regarding the attacker and provides an example use case. In Section IV, we introduce our adversarial attack approaches. In Section V, we present our experimental results; this is followed by a discussion of the results in Section VI and our conclusions in Section VII.

II Related Work

II-A Mouse Dynamics Authentication Models

The authors in [6] defined a large set of handcrafted features that represent the spatial and temporal information of mouse movement sequences. However, the authors did not use the full set of features; instead, they performed a user-specific feature selection step to obtain a subset of the most “discriminative” features. The resultant subset was then used for training statistics-based classifiers tailored to each user. The authors in [5] used multi-level feature aggregation based on the features introduced in [6], which concatenates various mouse actions to form higher-level features. Furthermore, the authors proposed additional features derived from individual mouse actions in order to reduce the authentication time caused by aggregating several mouse sequences, while also improving the authentication accuracy. The final feature set was then used to train a random forest classifier that learns to discriminate between legitimate and illegitimate users.

More recently, the authors in [4] utilized a deep learning approach, specifically a two-dimensional convolutional neural network (2DCNN), to learn the characteristics of users’ mouse sequences for authentication. They gathered mouse dynamics sequences and converted them into images of the mouse trajectories. The authors showed that even after removing the temporal aspect of mouse movements, they could still achieve state-of-the-art authentication results on different mouse datasets. Furthermore, in [4], explanations were derived regarding which parts of mouse sequences contain evidence for a particular user, using layer-wise relevance propagation [24, 25].

II-B Adversarial Attacks

Adversarial attacks against machine learning classifiers can be categorized into white-box and black-box attacks. For white-box attacks, it is assumed that the attacker has full access to a fully-differentiable target classifier (weights, architecture, and feature space). This allows the attacker to perform gradient-based attacks, for instance the fast gradient sign method (FGSM) proposed in [18]. This method calculates the gradients of the loss with respect to the model inputs and subsequently uses them to perturb the input sample, bringing the resultant perturbed sample closer to the point of being misclassified. In [17], the authors proposed the Jacobian-based saliency map attack (JSMA). The JSMA approach first calculates the impact of changing a particular feature on the predicted label (moving the sample from its original class towards a target class); features of higher significance are then perturbed up to a defined threshold. These two methods proved to be highly effective in attacking the robustness of the victim’s machine learning model. However, having white-box access to a victim’s machine learning model is highly unrealistic, as sensitive information such as the model’s weights and architecture would be a well-guarded secret in practice.

In [16], the authors addressed this limitation by introducing the concept of black-box attacks against machine learning classifiers, covering both deep neural networks (DNNs) and classical methods. The authors require only the ability to query the classifier with a custom sample and to access its outputs. They performed their attack by training a surrogate classifier on a synthetic dataset labelled by the target classifier, i.e., by sending sample queries and collecting its responses. It should be noted that the requirement here is to be able to feed inputs to, and collect outputs from, the authenticator. The authors then showed that the trained surrogate model is able to approximate the decision boundary of the target classifier, providing the opportunity for white-box attacks. The authors concluded by showing that attacking the robustness of the victim’s model is feasible, even with defences in place during the training of the victim models. It is important to note that the work in [16, 18, 17] was carried out in the image domain.

In [26], the authors adopted the methodology of [16] and applied it to perform adversarial attacks on malware classifiers. Similar to [16], the authors trained a surrogate malware classifier and generated adversarial malware through the method proposed in [18]. The malware resulting from the perturbation was still able to fulfil its original malicious functionality because the perturbations were introduced as “No Operations” (NOPs). The authors achieved nearly perfect rates of bypassing the targeted malware classifiers with this approach.

However, none of the attacks described above were applied against machine learning-based authentication models. Furthermore, in contrast to the aforementioned adversarial attacks, in the domain of machine learning-based authentication it is not realistic to assume that the adversary has access to the authentication model; the model may be situated on a remote server, away from the victim’s computer, in a highly secured environment. Hence, the attacker would have no knowledge of the outcome of the authentication computed by this remote server and would thus be unable to perform the black-box attacks proposed in [16]. For the same reason, white-box attacks that rely on access to the architecture and weights of the model are not possible.

III Threat Model

While the focus of this work is on adversarial attacks, we first justify the considered setup with an example use case before describing our threat model proper. We consider a remote user authentication scenario based on behavioural mouse dynamics in a client/server architecture (see Figure 2). In this setting, a user on the client machine is allowed to access a remote service on a server if the user’s currently collected behavioural characteristics are consistent with the model stored at the remote server. On the other hand, if the user’s machine is compromised by an external attacker (e.g., by exploiting a vulnerability or installing malware) or by an internal attacker (i.e., an insider threat such as a masquerader [27]), then after a sufficient period of time, a record of this activity would be generated at the server. Additionally, the session of the user might be interrupted by additional verification measures.

The attacker’s goal is to have the generated adversarial samples evade a machine learning-based authentication model (i.e., the target classifier). We assume that the attacker does not target the secured central authentication server (see Figure 2) but instead compromises the host machine of the target user. Thus, the attacker has the ability to record the mouse movement sequences of the legitimate user on the infected host machine. We extend the threat model assumed in [16] by not allowing the attacker to query the target classifier, which makes our threat model more realistic and, at the same time, more challenging. It is important to note that the recorded data is not a subset of the training data used to train the target classifier, and that no feedback regarding the result of the authentication is accessible to the attacker. If the attacker’s goal is achieved to a reasonable extent, the robustness of target classifiers can indeed be adversely affected, and the reliability of these models must be verified regardless of their authentication performance.

Fig. 2: Illustration of the proposed threat model.

IV Proposed Adversarial Strategies

We investigated three possible attack approaches for bypassing the target classifiers. The first approach trains a generator model to imitate a user’s mouse movement sequences through the teacher-forcing approach [28]; we refer to this as an imitation-based attack. This trained model then generates mouse trajectories, which are tested against the target classifier. The second method is based on the idea of training a surrogate classifier, as proposed in [16], but under the assumption that the attacker has no access to the target classifier; we refer to this as a surrogate-based attack. The trained surrogate model is used to perturb an arbitrary mouse trajectory through a white-box attack method, and the perturbed sequences are then evaluated against the target classifier. Hence, our first method involves the generation of mouse curves, while the second involves the perturbation of existing mouse curves. Finally, these two methods are compared against a third attack approach, which we refer to as a statistics-based attack.

IV-A Statistics-based Attack

As a baseline, we adopted a statistics-based approach to generate adversarial trajectories in an attempt to bypass the target classifier, as it is the simplest and fastest to implement and does not require the overhead of training neural network models. As mentioned in Section III, we assume that the attacker has access to recorded mouse dynamics of the target user; hence, the attacker can calculate several useful statistics about the user. For sequence generation, we first compute a histogram of position difference vectors, a histogram of starting points, and the median time interval between mouse events. We then select bins in the histograms through random sampling, weighted by the rate of occurrence. As a bin contains a range of possible values, we sample uniformly within the bounds of the selected bin to obtain a mouse coordinate. From the histogram of position difference vectors, we sample a sequence of mouse coordinate perturbations. Starting from a sampled start position (from the histogram of starting points), each subsequent point is generated by adding a sampled perturbation to the last computed position, forming a sequence of absolute position vectors. The median time interval is then used to construct timestamps for each generated mouse event.
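To make the procedure concrete, the following is a minimal sketch of this sampling strategy, assuming the recorded data is available as an array of (timestamp, x, y) rows; the function name, bin count, and default sequence length are illustrative choices of ours, not values taken from our implementation.

```python
import numpy as np

def generate_statistical_sequence(recorded, seq_len=50, bins=50, rng=None):
    """Sketch of the statistics-based generator: sample a start point and a
    sequence of position differences from histograms of the recorded data."""
    rng = rng or np.random.default_rng()
    t, xy = recorded[:, 0], recorded[:, 1:3]           # timestamps and (x, y)
    diffs = np.diff(xy, axis=0)                        # position difference vectors
    dt_med = np.median(np.diff(t))                     # median inter-event interval

    def sample_from_hist(values, n):
        counts, edges = np.histogram(values, bins=bins)
        idx = rng.choice(len(counts), size=n, p=counts / counts.sum())
        return rng.uniform(edges[idx], edges[idx + 1])  # uniform within the chosen bin

    start = np.array([sample_from_hist(xy[:, 0], 1)[0],
                      sample_from_hist(xy[:, 1], 1)[0]])
    steps = np.stack([sample_from_hist(diffs[:, 0], seq_len - 1),
                      sample_from_hist(diffs[:, 1], seq_len - 1)], axis=1)
    coords = np.vstack([start, start + np.cumsum(steps, axis=0)])
    times = t[0] + dt_med * np.arange(seq_len)
    return np.column_stack([times, coords])            # (seq_len, 3): t, x, y
```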

IV-B Imitation-based Attack

We employed the gated recurrent unit variant of recurrent neural networks (GRU-RNN) as the basis of our generator network, $G$. The GRU has the ability to learn what to forget through its reset gate, $r_t$, as well as the relations between data points across timesteps in a sequence. The governing equations for a GRU cell are shown below:

$z_t = \sigma(W_z x_t + U_z h_{t-1} + b_z)$   (1)
$r_t = \sigma(W_r x_t + U_r h_{t-1} + b_r)$   (2)
$\tilde{h}_t = \tanh(W_h x_t + U_h (r_t \odot h_{t-1}) + b_h)$   (3)
$h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t$   (4)

where $W$, $U$, and $b$ are the weights and biases respectively, and $h_t$ is the output of the GRU cell at timestep $t$. $\sigma$ is the sigmoid activation function.

The GRU-RNN model was trained to predict the coordinates of the next timestep, $\hat{x}_{t+1}$, given the ground truth of the current timestep, $x_t$, and the hidden state of the previous timestep, $h_{t-1}$. It can be expressed as:

$\hat{x}_{t+1} = G_{\theta}(x_t, h_{t-1})$   (5)

where $\theta$ denotes the model parameters of the generator network $G$. The prediction of the generator, $\hat{x}_{t+1}$, and the ground truth, $x_{t+1}$, are used to calculate the mean square error (MSE) loss, which is used in backpropagation to update the model parameters, $\theta$.

Our GRU-RNN model consists of two stacked GRU layers, with a hidden dimension of 128, and a fully-connected layer at the end of the GRU sequence to convert a dimension of 128 to two (to translate the latent space back to an x-y space).
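A minimal PyTorch sketch of such a generator, matching the description above (two stacked GRU layers with hidden dimension 128 and a final fully-connected layer back to x-y coordinates); the class name and default arguments are our own.

```python
import torch
import torch.nn as nn

class MouseGenerator(nn.Module):
    """Sketch of the imitation generator: two stacked GRU layers (hidden size 128)
    followed by a fully-connected layer mapping the latent state back to (x, y)."""
    def __init__(self, input_dim=2, hidden_dim=128, num_layers=2):
        super().__init__()
        self.gru = nn.GRU(input_dim, hidden_dim, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_dim, 2)

    def forward(self, x, h=None):
        # x: (batch, seq_len, 2) -- ABS, DV or VEL representation of the trajectory
        out, h = self.gru(x, h)          # out: (batch, seq_len, hidden_dim)
        return self.fc(out), h           # predicted next coordinates per timestep
```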

There are a number of design choices for the representation of the input sequence, and in our experiments we used three different variants: absolute position vectors (ABS), position difference vectors (DVs) between the mouse coordinates of the current and previous timesteps, and velocities (VELs), which are DVs divided by the corresponding time difference. In addition, we experimented with training our generator with and without regularization. Mouse movement sequences were generated in two ways: by giving the generator a single start position from which it generates a full-length sequence, and by providing an initial sequence of coordinates, taken from the recorded mouse trajectories, to start the generation process; a sketch of both modes is given below. More details regarding the generation of these sequences are provided in Section V-C2.
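The following is a sketch of the two generation modes (start point vs. start sequence), assuming a trained generator with the interface sketched earlier; the sliding-window behaviour is simplified here to carrying the hidden state forward one step at a time.

```python
import torch

@torch.no_grad()
def generate(model, seed, length=100):
    """Free-running generation: warm up on the seed (a single start point or an
    initial recorded sequence), then feed each prediction back as the next input."""
    model.eval()
    x = torch.as_tensor(seed, dtype=torch.float32).view(1, -1, 2)  # (1, seed_len, 2)
    preds, h = model(x)                      # condition the hidden state on the seed
    point = preds[:, -1:, :]                 # last prediction becomes the next input
    generated = [point]
    for _ in range(length - 1):
        point, h = model(point, h)           # one timestep at a time, carrying h along
        generated.append(point)
    return torch.cat(generated, dim=1).squeeze(0)    # (length, 2) generated trajectory
```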

IV-C Surrogate-based Attack

Fig. 3: Workflow of a surrogate-based attack

We trained a surrogate classifier, inspired by [16], that learns to perform the same function as the target classifier from a surrogate dataset, even though their network architectures might differ. This requires positively and negatively labelled samples for training; therefore, the surrogate dataset comprises the target user’s mouse movement sequences recorded by the attacker and a set of other mouse sequences that do not belong to the user. After training the surrogate classifier, we performed white-box attacks on the surrogate model by starting with an arbitrary mouse movement sequence and perturbing it for a fixed number of iterations. To accomplish this, we adopted the FGSM [18]. The amount of perturbation, $\eta$, to be applied to the input sequence, $x$, is defined as:

$\eta = \epsilon \cdot \mathrm{sign}\left(\nabla_{x} J(x)\right)$   (6)

where $\epsilon$ is the perturbation factor that controls the extent of the perturbation, and $J(x)$ is the log probability of the legitimate user class for the input sequence, $x$. Note that the surrogate model can have any architecture and need not have the same architecture as the target classifier. We used the cross-entropy loss function. Figure 3 illustrates the process of the described surrogate-based attack.

We experimented with two different surrogate model architectures. The first was a three-layer GRU-RNN with two fully-connected (FC) layers on top and a rectified linear unit (ReLU) activation after the first FC layer. The model has a hidden dimension of 100, and the final fully-connected layer converts the GRU-RNN outputs to two-dimensional vectors, which represent the logits for each decision (zero or one). The second surrogate uses only two stacked FC layers, with an exponential linear unit (ELU) activation after the first layer. The first layer takes a flattened sequence as input, where every odd-numbered node takes the x-coordinate and every even-numbered node takes the y-coordinate of the mouse sequence (for example, if a mouse sequence has a length of 10, with each timestep represented as a vector of x and y coordinates, then the input dimension of the first layer would be 20). The output of the second model has the same format as the first. In contrast to the work of [16], we do not use the target classifier to label our samples, due to our assumptions mentioned in Section III.
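The following PyTorch sketch shows a fully-connected surrogate of this kind together with the iterative FGSM-style perturbation of Equation (6); the hidden size of the FC surrogate, the number of iterations, and the function names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FCSurrogate(nn.Module):
    """Sketch of the fully-connected surrogate: a flattened (x, y) sequence feeds
    two FC layers with an ELU in between; outputs two logits (illegitimate, legitimate)."""
    def __init__(self, seq_len=50, hidden=100):
        super().__init__()
        self.fc1 = nn.Linear(2 * seq_len, hidden)
        self.fc2 = nn.Linear(hidden, 2)

    def forward(self, x):                      # x: (batch, seq_len, 2)
        return self.fc2(F.elu(self.fc1(x.flatten(1))))

def fgsm_perturb(surrogate, seq, epsilon=1e-3, iterations=100):
    """Iteratively nudge the input sequence toward the surrogate's
    legitimate-user class, in the spirit of Equation (6)."""
    x = seq.clone().detach()
    target = torch.tensor([1])                 # index of the legitimate-user class
    for _ in range(iterations):
        x.requires_grad_(True)
        loss = F.cross_entropy(surrogate(x), target)
        loss.backward()
        x = (x - epsilon * x.grad.sign()).detach()   # step that lowers the loss
    return x
```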

V Experiments

In order to comprehensively evaluate the effectiveness of the adversarial attacks, as well as the robustness of the target classifiers to them, we used two different publicly available mouse datasets. In our experiments, we used the PyTorch library [29] for developing our models unless stated otherwise.

V-A Datasets

We used the Balabit [23] and The Wolf of SUTD (TWOS) [30] datasets. In both datasets, we preprocessed the raw data logs into mouse movement sequences, removing mouse scroll and click events as they are not part of our scope. The data contain critical information like the event timestamp (in seconds), mouse position vector, and the user’s identity.

The Balabit dataset consists of mouse dynamics data from 10 users and is split into training and testing sets, with the training set consisting only of the corresponding user’s data. The testing set contains a mix of other users’ data to simulate anomalous mouse dynamics sequences for comparison against the legitimate user. In our experiments, these anomalous sequences were not needed and were removed, since their true identities are not known. Unfortunately, the screen resolutions of the users are not provided in this dataset; therefore, we estimated them by calculating the maximum coordinates for each dimension and mapping them to a finite set of common screen resolutions.

The TWOS dataset consists of mouse dynamics data from 24 users, one of whom was not considered because of the small sample size. The TWOS dataset provides the screen resolutions of the users, in contrast to the Balabit dataset. However, we found the TWOS dataset to be noisier, as it contains multiple instances of repeated mouse events (with the same timestamp and x-y coordinates), which we consider anomalies (the Balabit dataset has fewer such occurrences).

For both datasets, we performed the following data reshuffling steps in order to obtain a training set for authentication model training and a disjoint set for simulating the recording of mouse sequences from the legitimate user on the victim’s machine, while ensuring that all users were represented in each set. The latter subset is used for either surrogate or generator training.

  1. Combine session files from both training and test folders.

  2. Redistribute the combined data into two sets of approximately equal size, while ensuring that the mouse data within each session file does not get separated. (For the TWOS dataset, where each user might not have many session files, a session file might be further broken down into “mini-sessions”, each separated by a time difference of at least two hours. This is done to prevent any two mouse movement sequences that were generated one after another from being present in both the “training” and “test” datasets, and is required because certain users in the TWOS dataset may have only 2-3 available session files.)

  3. Further split each set into a “training” and a “test” subset (in fixed proportions).

Due to the anomalous mouse events described above, we implemented a simple data cleaning step to remove such events; as a result, some mouse sequences might be shortened. We then kept only mouse sequences of at least a certain minimum length, which reduces the number of eligible mouse sequences. In order to circumvent the issue of having little data, we performed a data augmentation step: for each sequence, we applied 10 different affine rotations within ±5° (chosen at random).

We chose to perform affine rotations as this allows us to generate more mouse sequences while maintaining the user’s mouse trajectory characteristics in each generated curve. Mouse sequence features like velocity, acceleration, and curvature remain consistent across the generated curves under such a data expansion policy. We used 5° as it provides sufficient variability. We also used the screen resolution of each user to normalize each mouse coordinate to the range [0, 1]. A sketch of this augmentation step is given below.
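A minimal sketch of the rotation-based augmentation; the pivot point for the rotation (here the first coordinate of the sequence) is our assumption, since the text does not specify it.

```python
import numpy as np

def augment_with_rotations(seq, n_rotations=10, max_deg=5.0, rng=None):
    """Sketch of the data-expansion step: rotate an (L, 2) x-y sequence about its
    start point by small random angles within ±max_deg degrees."""
    rng = rng or np.random.default_rng()
    origin = seq[0]                                   # rotate about the first point
    augmented = []
    for _ in range(n_rotations):
        theta = np.deg2rad(rng.uniform(-max_deg, max_deg))
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        augmented.append((seq - origin) @ rot.T + origin)
    return augmented                                   # list of rotated copies of seq
```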

V-B Authentication Models

We considered two different types of authentication models: a neural network-based model operating on sequential data and a model based on engineered features. More specifically, we used a one-dimensional convolutional neural network (1DCNN) and a support vector machine (SVM), respectively. We chose the SVM because it provides a simple baseline authentication model that can be trained stably with small sample sizes. We also chose these two types of authentication models so that we could observe whether the same deductions can be drawn across highly different authentication models.

V-B1 One-Dimensional Convolutional Neural Network

Fig. 4: 1DCNN architecture

We chose a 1DCNN model as the neural network variant of our authentication model. This model was trained on fixed-length sequences of velocities, preprocessed from sequences of position vectors and their corresponding timestamps. The model performs one-dimensional convolutions along the time axis, with ELU activation functions after each convolution layer. The first layer parses the input velocity sequence at two different time scales. The outputs of the first layer are passed to a second convolution layer with shared weights. Lastly, the outputs of the second convolution layer are passed to an FC layer to produce a scalar value that represents the score of the input sample. The 1DCNN architecture is illustrated in Figure 4.
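A hedged PyTorch sketch of this architecture; the kernel sizes, channel width, and the temporal pooling before the FC layer are our assumptions, as they are not specified above.

```python
import torch
import torch.nn as nn

class Authenticator1DCNN(nn.Module):
    """Sketch of the 1DCNN authenticator: the first layer reads the velocity
    sequence at two time scales (two kernel sizes), a shared second convolution
    processes both branches, and a final FC layer outputs a single score."""
    def __init__(self, channels=32):
        super().__init__()
        self.scale_a = nn.Conv1d(2, channels, kernel_size=3, padding=1)
        self.scale_b = nn.Conv1d(2, channels, kernel_size=7, padding=3)
        self.shared = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.elu = nn.ELU()
        self.fc = nn.Linear(2 * channels, 1)

    def forward(self, v):
        # v: (batch, seq_len, 2) velocity sequence -> (batch, 2, seq_len) for Conv1d
        v = v.transpose(1, 2)
        a = self.elu(self.shared(self.elu(self.scale_a(v))))   # shared weights on both
        b = self.elu(self.shared(self.elu(self.scale_b(v))))   # first-layer branches
        feats = torch.cat([a.mean(dim=2), b.mean(dim=2)], dim=1)  # pool over time
        return self.fc(feats).squeeze(1)                        # scalar score per sample
```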

V-B2 Support Vector Machine

The SVM model was trained on engineered features adopted from the feature set used in [5]; however, we used a slightly smaller feature space, obtained by filtering out the Jitter and Critical Points features. We refer the reader to the “Movement Features” section of [5] for more information regarding the mouse movement features used. For data preprocessing, the input features were normalized by removing the mean and scaling to unit variance. The SVM model was developed using scikit-learn’s LinearSVC API [31].
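A minimal sketch of this pipeline with scikit-learn; the function name and hyperparameter values are illustrative, and the feature matrices and labels are placeholders supplied by the caller.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

def train_svm_authenticator(X_train, y_train):
    """Standardize the engineered mouse-movement features (zero mean, unit
    variance), then fit a linear SVM; hyperparameters are illustrative."""
    model = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=10000))
    model.fit(X_train, y_train)
    return model

# Usage: scores = train_svm_authenticator(X, y).decision_function(X_test)
```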

The performance of the authenticators was measured by the area under the receiver operating characteristic curve (AUC) and the equal error rate (EER).

                        Balabit               TWOS
Authentication model    AUC       EER         AUC       EER
SVM                     0.82590   0.23129     0.80485   0.24103
1DCNN                   0.87469   0.12531     0.77158   0.29242
2DCNN [4]               0.96      0.10        0.93      0.13
TABLE I: Baseline average AUC and EER for the respective authenticators on the different datasets. The metrics for the Balabit dataset are an average across 10 users, while the metrics for the TWOS dataset are an average across 23 users. The state-of-the-art performance can be found in [4] with a two-dimensional convolutional neural network (2DCNN) architecture.

It is important to note that in our work, we are not interested in achieving state-of-the-art performance for the user authentication task; instead, we focus on evading authenticators. As such, our authentication performance is lower than the state-of-the-art reported in [4]. We do not use the 2DCNN authenticator proposed in [4] because it takes images as input instead of sequences. The purpose of Table I is to show that the target classifiers we use as our authentication models are reasonable, and thus that we are not attempting to attack a poorly performing model. Based on Table I, we can deduce that the authentication task is less complex for the Balabit dataset than for the TWOS dataset: the AUC for both authentication models is higher on the Balabit dataset than on the TWOS dataset, and likewise, the EER for the Balabit dataset is lower. This observation is also supported by the prior work of [4], whose state-of-the-art results in Table I show the same trend.

V-C Adversarial Attack Results

In this work, we define the adversarial success rate (ASR) as the proportion of generated/perturbed sequences classified as legitimate by the target classifier, i.e., sequences that successfully evaded the authenticator.

V-C1 Statistics-based Attack

Table II shows the ASR obtained by using the statistics-based method of generating adversarial mouse trajectories in an attempt to bypass the target classifiers described in Section V-B. Even with this simple statistics-based approach, the target classifiers accepted a substantial fraction of the adversarial samples as legitimate. These results demonstrate that the target classifiers are not robust even to simple means of generating adversarial samples.

                           Balabit              TWOS
                           SVM       1DCNN      SVM       1DCNN
Statistics-based baseline  0.6154    0.3183     0.3895    0.264
TABLE II: Statistics-based attack results based on 1000 generated curves. Metrics reported represent the ASR obtained (in [0, 1]).

V-C2 Imitation-based Attack

Before describing our results, one should be aware of an inherent limitation of imitation-based attacks in the realistic scenario defined in Section III. To illustrate this, consider a method that trains a generator to reproduce the sequences of a dataset perfectly. Such a generator would be able to replicate the instances of the dataset itself; as such, the ASR that the attacker can achieve would equal the accuracy of the authentication model when evaluated on samples of the legitimate user. Thus, when the generator reproduces the distribution of the provided dataset well, the ASR will be close to the accuracy of the authentication model on legitimate samples, but not significantly above it, because the authentication model cannot be used to improve the generator. It should be noted that when the authenticator cannot be used to guide the generator, this observation will likely hold for authentication methods beyond the mouse-based context, as it is independent of the input modality.

Setting                 Variations
Generation method       Start point, Start sequence
Model strategy          ABS, DV, VEL
Regularizing strategy   No, Cluster, Derivative
Sequence length         50, 100
TABLE III: Summary of setting variations used in the imitation-based attack experiments.

As mentioned in Section IV-B, the generator is trained using only the sequences of the victim, and we briefly described two methods of generating sequences: using a single start point or using a whole sequence for initialization. (The generator’s input sequence can be thought of as a queue: as more samples are generated over time, they are added to this queue, while the oldest samples are pushed out from the beginning of the sequence.) In addition to varying the generation method, we also experimented with training the generator with different regularization strategies and with different sequence lengths. Table III summarizes the different variations used in the imitation-based attack. In the table, for the regularization strategies, “No” refers to the absence of regularization during training. “Derivative” refers to regularizing each generated trajectory towards the average of the user’s velocity or acceleration; the former is used for “ABS” and “DV”, while the latter is used for “VEL”. “Cluster” refers to using a k-means clustering model trained on a set of user features described next; the regularization term is the Euclidean distance to the nearest cluster centroid in the feature space, where we used five clusters in our k-means. The features are statistics calculated from mouse sequences: we used the mean and standard deviation of the velocity (x and y direction), acceleration (x and y direction), and the angle of movement, resulting in a total of 10 features used in the clustering regularization method. A sketch of this clustering regularizer is given below.
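A sketch of the cluster regularizer, assuming a uniform time step per trajectory; the function names and the torch-based feature computation (chosen so the penalty remains differentiable) are our own.

```python
import torch
from sklearn.cluster import KMeans

def trajectory_features(seq, dt):
    """The 10 summary statistics (differentiable in torch): mean and std of velocity
    (x, y), acceleration (x, y), and angle of movement of an (L, 2) trajectory."""
    vel = (seq[1:] - seq[:-1]) / dt
    acc = (vel[1:] - vel[:-1]) / dt
    angle = torch.atan2(seq[1:, 1] - seq[:-1, 1], seq[1:, 0] - seq[:-1, 0])
    parts = [vel[:, 0], vel[:, 1], acc[:, 0], acc[:, 1], angle]
    return torch.stack([s for p in parts for s in (p.mean(), p.std())])

def make_cluster_regularizer(user_feature_matrix, n_clusters=5):
    """Fit k-means on the features of the user's recorded sequences (an N x 10
    array) and return a penalty: distance to the nearest cluster centroid."""
    kmeans = KMeans(n_clusters=n_clusters).fit(user_feature_matrix)
    centroids = torch.tensor(kmeans.cluster_centers_, dtype=torch.float32)

    def regularizer(generated_seq, dt):
        f = trajectory_features(generated_seq, dt)
        return torch.min(torch.norm(centroids - f, dim=1))

    return regularizer
```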

The hyperparameters used to train our generators are as follows: an Adam optimizer with a learning rate of 0.001, a learning rate decay by a factor of 0.5 every 15 epochs, and training for 60 epochs.
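A sketch of the corresponding training loop, assuming the generator sketched in Section IV-B and a placeholder data loader that yields (input, target) pairs where the target is the input shifted by one timestep (teacher forcing).

```python
import torch
import torch.nn as nn

model = MouseGenerator()                          # generator sketched in Section IV-B
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=15, gamma=0.5)
criterion = nn.MSELoss()

for epoch in range(60):
    for inputs, targets in train_loader:          # targets = inputs shifted by one timestep
        optimizer.zero_grad()
        preds, _ = model(inputs)                  # teacher forcing: ground truth as input
        loss = criterion(preds, targets)
        # a cluster or derivative regularization term can be added to `loss` here
        loss.backward()
        optimizer.step()
    scheduler.step()
```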

Table IV summarizes the results obtained for the imitation-based attacks. Although we performed experiments enumerating all possible combinations of the settings described in Table III, we aggregated the results over all model strategies and over all regularization strategies by computing the mean. This was done because we found that changes within these two settings did not produce any significant differences in the attacks’ performance according to the Wilcoxon signed-rank test [32]. More details about how this test was performed are given in Section VI-B.

Balabit TWOS
Generation method Sequence length SVM 1DCNN SVM 1DCNN
Statistics-based baseline
Start point 50
100
Start sequence 50
100
TABLE IV: Imitation-based attack results based on 1000 generated curves. Results show the average ASR for the start point and start sequence generation methods, presented in the form of mean ± standard deviation. The mean and standard deviation are calculated based on the aggregation strategy mentioned in Section V-C2.

For both the Balabit and TWOS datasets, the neural network-based attacks were consistently better than the statistics-based attack against the 1DCNN target classifier, outperforming the baseline by a clear margin on both datasets. For the SVM with the TWOS dataset, the results were comparable to the baseline, while for the SVM with the Balabit dataset, the results were poorer.

The ASR is generally higher for the Balabit dataset than for the TWOS dataset. This could be because the Balabit dataset is less complex than the TWOS dataset, as mentioned in Section V-A. During the data preprocessing phase, we encountered many more anomalies in the TWOS data (e.g., more instances of repeated timestamps and mouse coordinates) than in the Balabit data. As such, the generator is able to learn sequences more effectively from the Balabit dataset than from the TWOS dataset.

An interesting observation is that training a generator and generating sequences of length 100 generally yields a higher ASR than using a sequence length of 50. This indicates that discriminative traits of users are more pronounced on longer time scales. The experimental results show that even with a straightforward way of training a generator and subsequently generating samples, imitation-based methods can still bypass the target classifiers to a reasonable extent and are able to adversely impact the robustness of these models.

Fig. 5: Boxplot illustrating the ASR variability. For the “DV” and “VEL” settings, we used sequence lengths of 50 and 100 respectively. For both cases, we used the cluster regularization and generated the adversarial samples using the start point strategy.

We analysed the stability of the imitation-based attack method by repeating the training and generation procedures five times for each of the two settings that achieved the highest ASR; we used the TWOS dataset for this set of experiments. The boxplot in Figure 5 summarizes the results. The “VEL” setting yields higher variability than the “DV” setting, as can be seen from its larger interquartile range (IQR), denoted by the coloured regions. Although the “DV” setting shows lower variability, its IQR is still non-negligible; hence, the imitation-based attack still has much room for improvement. Having said that, the results suffice to show that the target classifiers are not very robust to imitation-based attacks.

V-C3 Surrogate-based Attack

Given the assumptions made regarding the threat model (discussed in Section III), we only have mouse sequences from the targeted user. As such, we can only train our surrogate with positive samples from the target user and negative samples from other sources of data. To simulate this scenario, when performing experiments in the context of the Balabit dataset, we used sequences from the TWOS dataset as the negative samples, and vice versa. We followed the same approach when selecting a mouse sequence for perturbation. We conducted our experiments for this approach using velocity sequences.

The hyperparameters used to train both of our surrogates are as follows: an Adam optimizer with a learning rate of 0.0005, a learning rate decay by a factor of 0.5 every 10 epochs, and training for 60 epochs. For the perturbation algorithm, we used an ε value of 0.001 in Equation 6.

                    Balabit               TWOS
Surrogate variant   SVM       1DCNN       SVM       1DCNN
Statistics-based    0.6154    0.3183      0.3895    0.264
GRU-RNN surrogate   0.6928    0.406       0.35265   0.36165
FC surrogate        0.6993    0.341       0.33996   0.43304
TABLE V: Surrogate-based attack results. Metrics reported are the ASR (in [0, 1]) when adversarial samples constructed based on the surrogate model architectures are evaluated on the target classifiers.

As illustrated in Table V, both the GRU-RNN and FC surrogate variants were able to bypass the target classifiers better than the statistics-based approach, with the exception of the SVM target classifier using the TWOS dataset. Interestingly, the ASR of the GRU-RNN does not deviate very far from the FC variant, although their architectures are vastly different. This shows that the attacker can use any arbitrary surrogate model architecture, which will impact the robustness of the target classifiers adversely.

To show that the constructed sequences based on the surrogate models are the best that we can obtain, Table VI shows the ASR of these constructed adversarial samples when evaluated against the surrogate models themselves.

                    Balabit               TWOS
Surrogate variant   SVM       1DCNN       SVM       1DCNN
GRU-RNN surrogate   0.995     0.995       0.849     0.848
FC surrogate        1.00      1.00        0.997     0.997
TABLE VI: Surrogate-based attack results when the constructed adversarial samples were evaluated against the surrogate models themselves. Metrics reported are the ASR (in [0, 1]), averaged across the users in the Balabit and TWOS datasets.

The table shows that a high ASR can be achieved when white-box attacks are performed on the surrogate models. Recall from Section IV-C that we repeatedly perturb the input mouse sequence based on the surrogate model’s loss, calculated with respect to the input sequence. Hence, a high ASR against the surrogate models implies that this loss is minimal and that any further perturbation would be insignificant. As such, we have reached an ASR saturation point with the surrogate-based attacks in the setting described in Section III.

In contrast to the imitation-based method, the weak upper limit does not apply here, because the surrogate-based method does not aim to imitate sequences but tries to find the region of the input space that the surrogate model classifies as legitimate. The errors in the surrogate-based approach can be attributed to two factors. Firstly, in case of a mismatch between the target and surrogate architectures, the intermediate feature representations differ; thus, the estimation of correctly classified regions is unreliable, as seen in Table V. Secondly, in the case of matching architectures, the errors result from the variability of decision boundaries associated with the different datasets used to train the surrogate models and target classifiers. For non-convex optimization, as with neural networks, different weight initializations also matter.

VI Discussion

In this section, we discuss the relative efficiency of the explored approaches for performing adversarial attacks in the given setting. Next, we discuss in greater detail our approach to performing the Wilcoxon signed-rank test and how we arrived at the conclusion mentioned in Section V-C2. Finally, we share our insights into an extension of our surrogate-based attack approach for the case in which the attacker knows the architecture of the victim’s authentication model, and a potential defensive mechanism to detect such adversarial attacks in this scenario.

VI-A Attack Strategy Selection

In Table IV, it can be seen that on the TWOS dataset, which is the more challenging dataset, all imitation-based attacks perform roughly the same. The results of the imitation-based attacks are notably worse than if the attacker could obtain access to the true authentication model (in contrast to our assumed threat model) and perform a white-box or black-box attack on it. This shows the challenges inherent in modelling the mouse movement sequences of a particular user. However, a comparison of Tables IV and V shows that, most of the time, using an imitation-based approach is better than training a surrogate whose architecture mismatches that of the victim’s authentication model. It should be noted that the surrogates in Table V were deliberately chosen to be different from the architecture of the actual authentication model. We refer the reader to Section VI-C for more insights in this area.

The statistics-based attack is consistently outperformed by the neural network-based attacks (imitation- or surrogate-based), regardless of the dataset or the type of authentication model used. This shows that modelling a user’s mouse movement sequences is even more challenging through a statistics-based approach; as such, using simple statistics to model a user’s mouse movement sequences should be avoided. Comparing the two neural network-based approaches, the surrogate-based attack is better for the SVM with the Balabit dataset, while the imitation-based attack is better for the remaining cases.

To summarise, both the imitation-based and surrogate-based approaches are viable options for performing adversarial attacks on the authentication model, because there are instances in which one approach fails while the other succeeds. In either case, a neural network-based attack is a better option than a statistics-based one.

VI-B Does Representation or Regularization Matter?

We investigated whether the input representation or the regularization strategy has a statistically measurable impact on the ASR. The Wilcoxon signed-rank test was performed by forming pairs between experimental results that share the same settings, except for the variable being compared. (For example, to compare the effects of differing sequence lengths, we would form pairs of results in which one value comes from sequence length 50 and the other from sequence length 100, with all other settings kept consistent.)

For the comparison of the model strategies, we compared ABS against DV, ABS against VEL, and DV against VEL. In none of these tests did the calculated z-value exceed the critical z-value for a two-tailed test at our chosen significance level. Thus, we concluded that there is no significant evidence that altering the input representation of the generator has a significant impact.

In our comparison of the regularizing strategies, we compared “No” against “Cluster”, “No” against “Derivative”, and “Cluster” against “Derivative”. Again, none of the calculated z-values exceeded the critical value. Thus, we concluded that there is no significant evidence that altering the regularizing strategy has a significant impact.
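A sketch of how such a paired comparison can be run with SciPy; the function name and the significance level parameter are our own, and the two input arrays are placeholder vectors of matched ASR results.

```python
from scipy.stats import wilcoxon

def compare_settings(asr_a, asr_b, alpha=0.05):
    """Paired Wilcoxon signed-rank comparison of two setting variants: entry i of
    each array comes from the same combination of all other settings
    (e.g., ABS vs. DV representation, everything else held fixed)."""
    stat, p_value = wilcoxon(asr_a, asr_b)
    significant = p_value < alpha          # reject the null hypothesis of no effect
    return stat, p_value, significant
```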

VI-C Surrogate-based Attack Extension

We also asked ourselves what happens if the attacker knows the architecture of the target classifiers and has access to a dataset covering the same group of users that were used to train them (e.g., if multiple users are using the victim’s machine), but still does not have access to the parameters learned by the classifier. Interestingly, if the attacker trains a surrogate with the same architecture as the target classifier, the ASR is much higher than with the other two approaches, as evident in Table VII. For the Balabit dataset, the ASR reaches 0.92 for the SVM and 0.82 for the 1DCNN; for the TWOS dataset, it reaches 0.67 for the SVM and 0.59 for the 1DCNN. These are the highest results achieved among the proposed approaches. Therefore, query access to the target classifier is not required to achieve these relatively high ASR results, although if the attacker fails to obtain this architectural information, the ASR clearly suffers. Yet the results are not close to 100%, showing that performance suffers when one cannot query the target classifier as in [16].

                    Balabit               TWOS
Surrogate variant   SVM       1DCNN       SVM       1DCNN
Statistics-based    0.6154    0.3183      0.3895    0.264
SVM surrogate       0.92108   0.39801     0.66970   0.22278
1DCNN surrogate     0.60583   0.81707     0.36881   0.58710
TABLE VII: Surrogate-based attack results when the attacker has access to the target classifier’s model architecture. The highest ASR in each column is obtained when a surrogate model with the same architecture as the target classifier is used.

The results presented in Table VII suggest a possible detection strategy against surrogate-based attacks that use a single surrogate under our threat model. Namely, one employs a probabilistic average of multiple authentication models, where at every timestep one randomly decides which authentication model to use. The difference in ASR would be reflected in the alert frequency, which would change when the models are switched (see Table VIII). Increasing alert rates (compared to using the legitimate user’s data) due to a shift in the test distribution (covariate shift) would affect all models to some degree, but an adversarial attack based on a surrogate would affect one model much less than the others. Hence, detection can be performed by checking the alert rates across the models to observe whether one model has an exceptionally low alert rate while the others have much higher ones; a sketch of this check follows Table VIII.

                                Balabit              TWOS
Experiment setting              SVM       1DCNN      SVM       1DCNN
Legitimate user’s data          0.412     0.302      0.301     0.477
Surrogate-based attack (SVM)    0.0774    0.660      0.375     0.803
Covariate shift                 0.444     0.352      0.525     0.561
TABLE VIII: Alert rates (in [0, 1]) obtained for the corresponding experimental settings. Covariate shifts in the data are made by performing affine rotations ranging from 45° to 90°.
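A minimal sketch of the detection check described above; the gap threshold and function name are illustrative assumptions, not values from our experiments.

```python
import numpy as np

def detect_surrogate_attack(alert_rates, gap_threshold=0.3):
    """Flag a suspected surrogate-based attack when one of the randomly-switched
    authenticators has an exceptionally low alert rate while the others are much
    higher. `alert_rates` maps model name -> observed alert rate in [0, 1]."""
    rates = np.array(list(alert_rates.values()))
    lowest, rest = rates.min(), np.delete(rates, rates.argmin())
    return bool(rest.min() - lowest > gap_threshold)

# Example with the Balabit alert rates from Table VIII (surrogate-based attack on SVM):
print(detect_surrogate_attack({"SVM": 0.0774, "1DCNN": 0.660}))   # True -> suspicious
```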

VII Conclusion

In conclusion, we proposed different strategies that a potential attacker can use to launch synthetically generated adversarial samples, through imitation-based, surrogate-based, or statistics-based approaches. Based on the experimental results, we show that neural network-based attacks (imitation- or surrogate-based) perform better than statistics-based attacks. Although the generation of mouse sequences is a difficult task and the proposed adversarial attacks have their flaws, our results are sufficient to show that the robustness of these authentication models can be adversely affected even in a realistic setting (see Section III). We have also shown that if the attacker guesses the architecture of the authentication model correctly, its robustness is greatly affected even without any form of access to it. One can also infer from our results, particularly those presented in Tables IV, V, and VII, that in a realistic setting in which the authentication model is inaccessible, attacking a system based on behavioural features is harder than, for example, copying a fingerprint.

Acknowledgment

This work was supported by both ST Electronics and the National Research Foundation (NRF), Prime Minister’s Office, Singapore under Corporate Laboratory @ University Scheme (Programme Title: STEE Infosec-SUTD Corporate Laboratory). Alexander Binder also gratefully acknowledges the support by PIE-SGP-AI-2018-01.

References

  • [1] D. Bhattacharyya, R. Ranjan, F. Alisherov, M. Choi et al., “Biometric authentication: A review,” International Journal of u-and e-Service, Science and Technology, vol. 2, no. 3, pp. 13–28, 2009.
  • [2] F. Monrose and A. D. Rubin, “Keystroke dynamics as a biometric for authentication,” Future Generation computer systems, vol. 16, no. 4, pp. 351–359, 2000.
  • [3] A. A. E. Ahmed and I. Traore, “A New Biometric Technology Based on Mouse Dynamics,” IEEE Transactions on Dependable and Secure Computing, vol. 4, no. 3, pp. 165–179, 2007. [Online]. Available: http://ieeexplore.ieee.org/document/4288179/
  • [4] P. Chong, Y. X. M. Tan, J. Guarnizo, Y. Elovici, and A. Binder, “Mouse Authentication Without the Temporal Aspect – What Does a 2D-CNN Learn?” 2018 IEEE Security and Privacy Workshops (SPW), pp. 15–21, 2018. [Online]. Available: https://ieeexplore.ieee.org/document/8424627/
  • [5] C. Feher, Y. Elovici, R. Moskovitch, L. Rokach, and A. Schclar, “User identity verification via mouse dynamics,” Information Sciences, vol. 201, pp. 19–36, 2012. [Online]. Available: http://dx.doi.org/10.1016/j.ins.2012.02.066
  • [6] H. Gamboa and A. Fred, “A behavioral biometric system based on human computer interaction,” Proceedings of SPIE, vol. 5404, no. i, pp. 381–392, 2004.
  • [7] P. Kasprowski and K. Harezlak, “Fusion of eye movement and mouse dynamics for reliable behavioral biometrics,” Pattern Analysis and Applications, vol. 21, no. 1, pp. 91–103, 2018.
  • [8] S. Mondal and P. Bours, “A study on continuous authentication using a combination of keystroke and mouse biometrics,” Neurocomputing, vol. 230, no. November 2016, pp. 1–22, 2017. [Online]. Available: http://dx.doi.org/10.1016/j.neucom.2016.11.031
  • [9] C. Shen, Z. Cai, and X. Guan, “Continuous authentication for mouse dynamics: A pattern-growth approach,” Proceedings of the International Conference on Dependable Systems and Networks, 2012.
  • [10] C. Shen, Z. Cai, X. Guan, Y. Du, and R. A. Maxion, “User authentication through mouse dynamics,” IEEE Transactions on Information Forensics and Security, vol. 8, no. 1, pp. 16–30, 2013.
  • [11] B. Sayed, I. Traore, I. Woungang, and M. S. Obaidat, “Biometric authentication using mouse gesture dynamics,” IEEE Systems Journal, vol. 7, no. 2, pp. 262–274, 2013.
  • [12] F. Mo, S. Xiong, S. Yi, Q. Yi, and A. Zhang, Intelligent Computing and Internet of Things.   Springer Singapore, 2018, vol. 924. [Online]. Available: http://link.springer.com/10.1007/978-981-13-2384-3
  • [13] Y. Aksari and H. Artuner, “Active authentication by mouse movements,” in 2009 24th International Symposium on Computer and Information Sciences, Sept 2009, pp. 571–574.
  • [14] P. Bours and C. J. Fullu, “A login system using mouse dynamics,” in 2009 Fifth International Conference on Intelligent Information Hiding and Multimedia Signal Processing, Sept 2009, pp. 1072–1077.
  • [15] N. Zheng, A. Paloski, and H. Wang, “An efficient user verification system via mouse movements,” Proceedings of the 18th ACM conference on Computer and communications security - CCS ’11, no. February, p. 139, 2011. [Online]. Available: http://dl.acm.org/citation.cfm?doid=2046707.2046725
  • [16] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, “Practical black-box attacks against machine learning,” in Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, ser. ASIA CCS ’17.   New York, NY, USA: ACM, 2017, pp. 506–519. [Online]. Available: http://doi.acm.org/10.1145/3052973.3053009
  • [17] N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. Celik, and A. Swami, “The limitations of deep learning in adversarial settings,” in 2016 IEEE European Symposium on Security and Privacy (EuroS&P).   Los Alamitos, CA, USA: IEEE Computer Society, mar 2016, pp. 372–387. [Online]. Available: https://doi.ieeecomputersociety.org/10.1109/EuroSP.2016.36
  • [18] I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and Harnessing Adversarial Examples,” pp. 1–11, 2014. [Online]. Available: http://arxiv.org/abs/1412.6572
  • [19] N. Carlini and D. Wagner, “Towards evaluating the robustness of neural networks,” in 2017 IEEE Symposium on Security and Privacy (SP).   IEEE, 2017, pp. 39–57.
  • [20] W. Brendel, J. Rauber, and M. Bethge, “Decision-based adversarial attacks: Reliable attacks against black-box machine learning models,” arXiv preprint arXiv:1712.04248, 2017.
  • [21] I. Evtimov, K. Eykholt, E. Fernandes, T. Kohno, B. Li, A. Prakash, A. Rahmati, and D. Song, “Robust physical-world attacks on machine learning models,” arXiv preprint arXiv:1707.08945, 2017.
  • [22] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, “Intriguing properties of neural networks,” arXiv preprint arXiv:1312.6199, 2013.
  • [23] A. Fülöp, L. Kovács, T. Kurics, and E. Windhager-Pokol, “Balabit mouse dynamics challenge data set,” https://github.com/balabit/Mouse-Dynamics-Challenge, 2016.
  • [24] A. Binder, G. Montavon, S. Lapuschkin, K.-R. Müller, and W. Samek, “Layer-wise relevance propagation for neural networks with local renormalization layers,” in International Conference on Artificial Neural Networks.   Springer, 2016, pp. 63–71.
  • [25] S. Bach, A. Binder, G. Montavon, F. Klauschen, K.-R. Müller, and W. Samek, “On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation,” PloS one, vol. 10, no. 7, p. e0130140, 2015.
  • [26] I. Rosenberg, A. Shabtai, L. Rokach, and Y. Elovici, “Generic black-box end-to-end attack against state of the art api call based malware classifiers,” in International Symposium on Research in Attacks, Intrusions, and Defenses.   Springer, 2018, pp. 490–510.
  • [27] I. Homoliak, F. Toffalini, J. Guarnizo, Y. Elovici, and M. Ochoa, “Insight into insiders and it: A survey of insider threat taxonomies, analysis, modeling, and countermeasures,” 11 2018.
  • [28] R. J. Williams and D. Zipser, “A Learning Algorithm for Continually Running Fully Recurrent Neural Networks,” Neural Computation, vol. 1, no. 2, pp. 270–280, 1989. [Online]. Available: http://www.mitpressjournals.org/doi/10.1162/neco.1989.1.2.270
  • [29] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer, “Automatic differentiation in pytorch,” in NIPS-W, 2017.
  • [30] A. Harilal, F. Toffalini, I. Homoliak, J. Castellanos, J. Guarnizo, S. Mondal, and M. Ochoa, “The wolf of sutd (twos): A dataset of malicious insider threat behavior based on a gamified competition,” vol. 9, 03 2018.
  • [31] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, “Scikit-learn: Machine learning in Python,” Journal of Machine Learning Research, vol. 12, pp. 2825–2830, 2011.
  • [32] F. Wilcoxon, “Individual comparisons by ranking methods,” Biometrics bulletin, vol. 1, no. 6, pp. 80–83, 1945.