One-Class Adversarial Nets for Fraud Detection

03/05/2018 ∙ by Panpan Zheng, et al. ∙ UNC Charlotte ∙ University of Oregon ∙ University of Arkansas

Many online applications, such as online social networks and knowledge bases, are often attacked by malicious users who commit different types of actions, such as vandalism on Wikipedia or fraudulent reviews on eBay. Currently, most fraud detection approaches require a training dataset that contains records of both benign and malicious users. However, in practice, there are often no or very few records of malicious users. In this paper, we develop one-class adversarial nets (OCAN) for fraud detection using training data with only benign users. OCAN first uses an LSTM-Autoencoder to learn the representations of benign users from their sequences of online activities. It then detects malicious users by training a discriminator with a complementary GAN model that differs from the regular GAN model. Experimental results show that OCAN outperforms state-of-the-art one-class classification models and achieves performance comparable to the latest multi-source LSTM model, which requires both benign and malicious users in the training phase.



1. Introduction

Online platforms such as online social networks (OSNs) and knowledge bases play a major role in online communication and knowledge sharing. However, malicious users conduct various fraudulent actions, such as spamming, spreading rumors, and vandalism, imposing severe security threats to OSNs and their legitimate participants. For example, the crowdsourcing mechanism of Wikipedia attracts vandals who engage in vandalism actions such as spreading false or misleading information to Wikipedia users. Meanwhile, fraudsters in OSNs can easily register fake accounts, inject fake content, or engage in other fraudulent activities. To protect legitimate users, most Web platforms have tools or mechanisms to block malicious users. For example, Wikipedia adopts ClueBot NG (contributors, 2010) to detect and revert obvious bad edits, thus helping administrators identify and block vandals.

Detecting malicious users has also attracted increasing attention in the research community (Cheng et al., 2017; Kumar et al., 2017; Yuan et al., 2017a; Kumar et al., 2015; Ying et al., 2011). For example, research in (Kumar et al., 2015) focused on predicting whether a Wikipedia user is a vandal based on his edits. The VEWS system adopted a set of behavior features based on user edit patterns and used several traditional classifiers (e.g., random forest or SVM) to detect vandals. To improve detection accuracy and avoid manual feature construction, a multi-source long-short term memory network (M-LSTM) was proposed to detect vandals (Yuan et al., 2017b). M-LSTM is able to capture different aspects of user edits, and the learned user representations can be further used to analyze user behaviors. However, these detection models are trained over a training dataset that consists of both positive data (benign users) and negative data (malicious users). In practice, there are often no or very few records of malicious users in the collected training data, and manually labeling a large number of malicious users is tedious.

In this work, we tackle the problem of identifying malicious users when only benign users are observed. The basic idea is to adopt a generative model to generate malicious users with only given benign users. Generative adversarial networks (GAN) as generative models have demonstrated impressive performance in modeling the real data distribution and generating high quality synthetic data that is similar to real data (Goodfellow et al., 2014; Radford et al., 2015). However, given benign users, a regular GAN model is unable to generate malicious users.

We develop one-class adversarial nets (OCAN) for fraud detection. During training, OCAN contains two phases. First, OCAN adopts the LSTM-Autoencoder (Srivastava et al., 2015) to encode the benign users into a hidden space based on their online activities; the encoded vectors are called benign user representations. Then, OCAN trains improved generative adversarial nets in which the discriminator is trained to be a classifier distinguishing benign users from malicious users, with the generator producing potential malicious users. To this end, we adopt the idea that the generator is trained to generate complementary samples instead of matching the original data distribution (Dai et al., 2017). In particular, we propose a complementary GAN model. The generator of the complementary GAN aims to generate samples that are complementary to the representations of benign users, i.e., the potential malicious users. The discriminator is trained to separate benign users and complementary samples. Since the behaviors of malicious users and those of benign users are complementary, we expect the discriminator to distinguish benign users from malicious users. By combining the encoder of the LSTM-Autoencoder and the discriminator of the complementary GAN, OCAN can accurately predict whether a new user is benign or malicious based on his online activities.

The advantages of OCAN for fraud detection are as follows. First, since OCAN does not require any information about malicious users, we do not need to manually compose a mixed training dataset, which makes OCAN more adaptable to different types of malicious user identification tasks. Second, different from existing one-class classification models, OCAN generates complementary samples of benign users and trains the discriminator to separate complementary samples from benign users, enabling the trained discriminator to better separate malicious users from benign users. Third, OCAN can capture the sequential information of user activities. After training, the detection model can adaptively update a user representation once the user commits a new action and dynamically predict whether the user is a fraud.

2. Related Work

Fraud detection: Due to the openness and anonymity of the Internet, online platforms attract a large number of malicious users, such as vandals, trolls, and sockpuppets. Many fraud detection techniques have been developed in recent years (Akoglu et al., 2015; Jiang et al., 2014; Cao et al., 2014; Ying et al., 2011; Kumar and Shah, 2018), including content-based approaches and graph-based approaches. Content-based approaches extract content features (e.g., text, URLs) to identify malicious users from user activities on social networks (Benevenuto et al., 2010). Meanwhile, graph-based approaches identify frauds based on network topologies. Research in (Yuan et al., 2017a) proposed two deep neural networks for fraud detection on a signed graph. Often based on unsupervised learning, graph-based approaches treat fraud as anomalies and extract various graph features associated with nodes, edges, ego-nets, or communities from the graph (Akoglu et al., 2015; Noble and Cook, 2003; Manzoor et al., 2016).

Fraud detection is also related to malicious behavior and misinformation detection, including detecting vandalism edits on Wikipedia or Wikidata, rumors, and fake reviews. Research in (Heindorf et al., 2016) developed both content and context features of a Wikidata revision to identify vandalism edits. Research in (Kumar et al., 2016) focused on detecting hoaxes on Wikipedia by finding characteristics of the article content and of the editor who created the hoax. In (Mukherjee et al., 2013), different types of behavior features were extracted and used to detect fake reviews on Yelp. Research in (Lim et al., 2010) identified several representative behaviors of review spammers. Research in (Rayana and Akoglu, 2015) proposed a framework that combines text, metadata, and relational data to detect suspicious users and reviews. Research in (Xie et al., 2012) studied co-anomaly patterns in multiple review-based time series. Some research further focused on detecting fraudsters who deliberately evade detection by mimicking normal users (Wang et al., 2017; Hooi et al., 2016).

Deep neural network: Deep neural networks have achieved promising results in computer vision, natural language processing, and speech recognition (LeCun et al., 2015). Recurrent neural networks (RNNs), one type of deep neural network, are widely used for modeling time sequence data (Graves, 2013; Neubig, 2017). However, it is difficult to train standard RNNs over long sequences because of gradient vanishing and exploding (Bengio et al., 1997). Long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) was proposed to model temporal sequences and capture their long-range dependencies more accurately than standard RNNs. LSTM-Autoencoder is a sequence-to-sequence model that has been widely used for paragraph generation (Li et al., 2015; Nallapati et al., 2016; Sutskever et al., 2014), video representation (Srivastava et al., 2015), etc. GAN is a framework for estimating generative models via an adversarial process (Goodfellow et al., 2014). Recently, GANs have achieved great success in computer vision tasks, including image generation (Radford et al., 2015; Springenberg, 2016; Ledig et al., 2017) and image classification (Chen et al., 2016; Springenberg, 2016; Odena et al., 2016). Currently, the GAN model is usually applied on two-class or multi-class datasets rather than one-class datasets.

One-class classification: One-class classification (OCC) algorithms aim to build classification models when only one class of samples is observed and the other class is absent (Khan and Madden, 2014), which is also related to novelty detection (Pimentel et al., 2014). One-class support vector machine (OCSVM), one of the most widely adopted one-class classification models, aims to separate one class of samples from all others by constructing a hyper-sphere around the observed data samples (Tax and Duin, 2004; Manevitz and Yousef, 2001). Other traditional classification models have also been extended to the one-class scenario. For example, one-class nearest neighbor (OCNN) (Tax and Duin, 2001) predicts the class of a sample based on its distance to its nearest neighbor in the training dataset. One-class Gaussian process (OCGP) chooses a proper GP prior and derives membership scores for one-class classification (Kemmler et al., 2013). However, OCNN and OCGP need to set a threshold to detect the other class of data. The threshold is either set by a domain expert or tuned based on a small set of two-class labeled data. In this work, we propose a framework that combines LSTM-Autoencoder and GAN to detect vandals knowing only benign users. To the best of our knowledge, this is the first work that examines the use of deep learning models for fraud detection when only one-class training data is available. Meanwhile, compared with existing one-class algorithms, our model trains a classifier by generating a large number of "novel" data points and does not require any labeled data to tune parameters.

3. Preliminary

3.1. Long Short-Term Memory Network

Long short-term memory network (LSTM) is one type of recurrent neural network. Given a sequence $(x_1, \dots, x_T)$ where $x_t$ denotes the input at the $t$-th step, LSTM maintains a hidden state vector $h_t$ to keep track of the sequence information from the current input $x_t$ and the previous hidden state $h_{t-1}$. The hidden state $h_t$ is computed by

$$
\begin{aligned}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i), \\
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f), \\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o), \\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c), \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t, \\
h_t &= o_t \odot \tanh(c_t),
\end{aligned}
\tag{1}
$$

where $\sigma$ is the sigmoid function; $\odot$ represents element-wise product; $i_t$, $f_t$, $o_t$, $c_t$ indicate the input gate, forget gate, output gate, and cell activation vectors, and $\tilde{c}_t$ denotes the intermediate vector of the cell state; $W$ and $U$ are the weight parameters; $b$ is the bias term.

We simplify the update of each LSTM step described in Equation 1 as

$$
h_t = \mathrm{LSTM}(x_t, h_{t-1}),
\tag{2}
$$

where $x_t$ is the input of the current step, $h_{t-1}$ is the hidden vector of the last step, and $h_t$ is the output of the current step.
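The per-step update above can be sketched in NumPy. This is a minimal illustration with our own randomly chosen dimensions and weight initialization, not the paper's implementation; the four gates are computed from one stacked projection for brevity:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM step (Eq. 1); the four gates are slices of one stacked projection."""
    d = h_prev.shape[0]
    z = W @ x_t + U @ h_prev + b           # stacked pre-activations, shape (4d,)
    i = sigmoid(z[0:d])                     # input gate
    f = sigmoid(z[d:2*d])                   # forget gate
    o = sigmoid(z[2*d:3*d])                 # output gate
    c_tilde = np.tanh(z[3*d:4*d])           # intermediate cell state
    c_t = f * c_prev + i * c_tilde          # element-wise products
    h_t = o * np.tanh(c_t)
    return h_t, c_t

# Toy dimensions and randomly initialized weights (our own choices):
rng = np.random.default_rng(0)
x_dim, h_dim = 4, 3
W = 0.1 * rng.normal(size=(4 * h_dim, x_dim))
U = 0.1 * rng.normal(size=(4 * h_dim, h_dim))
b = np.zeros(4 * h_dim)

h, c = np.zeros(h_dim), np.zeros(h_dim)
for x in rng.normal(size=(5, x_dim)):       # h_t = LSTM(x_t, h_{t-1}) over a length-5 sequence
    h, c = lstm_step(x, h, c, W, U, b)
```

Since $h_t = o_t \odot \tanh(c_t)$ with both factors bounded, every component of the hidden state stays in $(-1, 1)$.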

3.2. Generative Adversarial Nets

Generative adversarial nets (GAN) are generative models that consist of two components: a generator $G$ and a discriminator $D$. Typically, both $G$ and $D$ are multilayer neural networks. $G$ generates fake samples from a prior $p_z$ on a noise variable $z$ and learns a generative distribution $p_G$ to match the real data distribution $p_{data}$. On the contrary, the discriminative model $D$ is a binary classifier that predicts whether an input is real data or fake data generated by $G$. Hence, the objective function of $D$ is defined as:

$$
\max_{D} \; \mathbb{E}_{x \sim p_{data}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))],
\tag{3}
$$

where $D(x)$ outputs the probability that $x$ is from the real data rather than the generated fake data. In order to make the generative distribution $p_G$ close to the real data distribution $p_{data}$, $G$ is trained by fooling the discriminator so that it cannot distinguish the generated data from the real data. Thus, the objective function of $G$ is defined as:

$$
\min_{G} \; \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))].
\tag{4}
$$

The minimum of Equation 4 is achieved when the discriminator is fooled by the generated data $G(z)$ and predicts a high probability that $G(z)$ is real data.

Overall, GAN is formalized as a minimax game with the value function:

$$
\min_{G} \max_{D} \; V(D, G) = \mathbb{E}_{x \sim p_{data}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))].
\tag{5}
$$

Theoretical analysis shows that GAN aims to minimize the Jensen-Shannon divergence between the data distribution $p_{data}$ and the generative distribution $p_G$ (Goodfellow et al., 2014). The minimization of the JS divergence is achieved when $p_G = p_{data}$. Therefore, GAN is trained by distinguishing real data and generated fake data.
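The discriminator and generator objectives can be written as minibatch loss functions. The NumPy sketch below evaluates them on fixed probability vectors for illustration only; in practice $D$ and $G$ are neural networks updated by gradient descent:

```python
import numpy as np

def d_loss(d_real, d_fake):
    """Discriminator loss: -E[log D(x)] - E[log(1 - D(G(z)))] (Eq. 3, negated for minimization)."""
    return -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))

def g_loss(d_fake):
    """Generator loss: E[log(1 - D(G(z)))] (Eq. 4), minimized when D is fooled."""
    return np.mean(np.log(1.0 - d_fake))

# D's probabilities that its inputs are real, for one illustrative minibatch:
d_real = np.array([0.9, 0.8, 0.95])   # D is confident on real samples -> low D loss
d_fake = np.array([0.1, 0.2, 0.05])   # D is confident on generated samples -> low D loss
```

A confident discriminator gets a low `d_loss`, while the generator's `g_loss` drops as the discriminator starts assigning high "real" probability to generated samples.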

Figure 1. The training framework of OCAN

4. OCAN: One-Class Adversarial Nets

4.1. Framework Overview

OCAN contains two phases during training. The first phase is to learn user representations. As shown in the left side of Figure 1, LSTM-Autoencoder is adopted to learn the benign user representations from the benign user activity sequences. The LSTM-Autoencoder model is a sequence-to-sequence model that consists of two LSTM models serving as the encoder and decoder, respectively. The encoder computes the hidden representation of an input, and the decoder computes the reconstructed input based on the hidden representation. The trained LSTM-Autoencoder can capture the salient information of users' activity sequences because its objective function makes the reconstructed input close to the original input. Furthermore, when deployed for fraud detection, the encoder of the trained LSTM-Autoencoder is expected to map benign users and malicious users to relatively separate regions in the continuous feature space because the activity sequences of benign and malicious users differ.

Given the user representations, the second phase is to train a complementary GAN with a discriminator that can clearly distinguish benign and malicious users. The generator of the complementary GAN aims to generate complementary samples that lie in the low-density area of benign users, and the discriminator aims to separate the real benign users and the complementary samples. The discriminator then has the ability to detect malicious users, which are located in regions separate from benign users. The framework of training the complementary GAN for fraud detection is shown in the right side of Figure 1.

The pseudo-code of training OCAN is shown in Algorithm 1. Given a training dataset that contains activity sequence feature vectors of benign users, we first train the LSTM-Autoencoder model (Lines 1-6). After training the LSTM-Autoencoder, we adopt its encoder to compute the benign user representations (Lines 7-9). Finally, we use the benign user representations to train the complementary GAN (Lines 10-15). For simplicity, we write the algorithm with a minibatch size of 1, i.e., iterating over each user in the training dataset to train the LSTM-Autoencoder and GAN. In practice, we sample real benign users and use the generator to generate complementary samples in a minibatch. In our experiments, the size of a minibatch is 32.

Our OCAN moves beyond the naive approach of adopting a regular GAN model in the second phase. The generator of a regular GAN aims to generate representations of fake benign users that are close to the representations of real benign users. The discriminator of a regular GAN identifies whether an input is the representation of a real benign user or of a fake benign user from the generator. However, one potential drawback of the regular GAN is that once the discriminator has converged, it cannot have high confidence in separating real benign users from real malicious users. We denote the OCAN variant with the regular GAN as OCAN-r and compare its performance with OCAN in the experiments.

Inputs: Training dataset X of benign user activity sequences; training epochs for LSTM-Autoencoder and GAN
Outputs: Well-trained LSTM-Autoencoder and complementary GAN
1 initialize parameters in LSTM-Autoencoder and complementary GAN; epoch <- 0; while epoch < autoencoder epochs do
2       foreach user in X do
3             compute the reconstructed sequence of user activities by LSTM-Autoencoder (Eq. 6, 8, and 9); optimize the parameters in LSTM-Autoencoder with the loss function Eq. 10;
4       end foreach
4       end foreach
5      epoch <- epoch + 1;
6 end while
7 U <- empty set; foreach user in X do
8       compute the benign user representation u by the encoder of LSTM-Autoencoder (Eq. 6, 7); add u to U;
9 end foreach
10 epoch <- 0; while epoch < GAN epochs do
11       foreach benign user representation u in U do
12             optimize the discriminator and generator with loss functions Eq. 16 and Eq. 14, respectively;
13       end foreach
14      epoch <- epoch + 1;
15 end while
return well-trained LSTM-Autoencoder and complementary GAN
Algorithm 1 Training One-Class Adversarial Nets

4.2. LSTM-Autoencoder for User Representation

The first phase of OCAN is to encode users to a continuous hidden space. Since each online user has a sequence of activities (e.g., editing a sequence of pages), we adopt LSTM-Autoencoder to transform a variable-length user activity sequence into a fixed-dimension user representation. Formally, given a user with $T$ activities, we represent the activity sequence as $X = (x_1, \dots, x_T)$ where $x_t$ is the $t$-th activity feature vector.

Encoder: The encoder encodes the user activity sequence to a user representation with an LSTM model:

$$
h_t^{(E)} = \mathrm{LSTM}^{(E)}(x_t, h_{t-1}^{(E)}),
\tag{6}
$$

where $x_t$ is the feature vector of the $t$-th activity and $h_t^{(E)}$ indicates the $t$-th hidden vector of the encoder.

The last hidden vector captures the information of the whole user activity sequence and is considered as the user representation $u$:

$$
u = h_T^{(E)}.
\tag{7}
$$

Decoder: In our model, the decoder adopts the user representation $u$ as the input to reconstruct the original user activity sequence $X$:

$$
h_t^{(D)} = \mathrm{LSTM}^{(D)}(u, h_{t-1}^{(D)}),
\tag{8}
$$
$$
\hat{x}_t = f(h_t^{(D)}),
\tag{9}
$$

where $h_t^{(D)}$ is the $t$-th hidden vector of the decoder; $\hat{x}_t$ indicates the $t$-th reconstructed activity feature vector; $f(\cdot)$ denotes a neural network computing the sequence outputs from the hidden vectors of the decoder. Note that we adopt $u$ as the input at every step of the decoder, a strategy that has achieved great performance in sequence-to-sequence models (Cho et al., 2014).

The objective function of LSTM-Autoencoder is:

$$
\mathcal{L}_{AE} = \sum_{t=1}^{T} \lVert x_t - \hat{x}_t \rVert^2,
\tag{10}
$$

where $x_t$ ($\hat{x}_t$) is the $t$-th (reconstructed) activity feature vector. After training, the last hidden vector of the encoder can reconstruct the sequence of user feature vectors. Thus, the user representation $u$ captures the salient information of user behavior.
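The encoder-then-decoder structure can be sketched as follows. This is a NumPy sketch in which a plain tanh RNN cell stands in for the LSTM cell to keep the example short; the overall structure (encoder produces u, decoder is fed u at every step, squared reconstruction error) mirrors the equations above, while all dimensions and weights are our own placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)
x_dim, h_dim, T = 4, 6, 5

# A plain tanh RNN cell stands in for LSTM here (illustration only):
We, Ue = 0.1 * rng.normal(size=(h_dim, x_dim)), 0.1 * rng.normal(size=(h_dim, h_dim))
Wd, Ud = 0.1 * rng.normal(size=(h_dim, h_dim)), 0.1 * rng.normal(size=(h_dim, h_dim))
V = 0.1 * rng.normal(size=(x_dim, h_dim))     # output layer f(.) of the decoder

X = rng.normal(size=(T, x_dim))               # one user's activity sequence

# Encoder: the last hidden state is the user representation u
h = np.zeros(h_dim)
for x_t in X:
    h = np.tanh(We @ x_t + Ue @ h)
u = h

# Decoder: u is fed as the input at every step
h_dec, X_hat = np.zeros(h_dim), []
for _ in range(T):
    h_dec = np.tanh(Wd @ u + Ud @ h_dec)
    X_hat.append(V @ h_dec)                   # reconstructed activity vector

# Reconstruction objective: sum of squared errors over the sequence
loss = sum(np.sum((x - xh) ** 2) for x, xh in zip(X, X_hat))
```

Training would backpropagate `loss` through both loops; here the sketch only shows the forward pass that produces `u` and the reconstruction error.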

4.3. Complementary GAN

The generator of the complementary GAN is a feedforward neural network whose output layer has the same dimension as the user representation $u$. Formally, we define the generated samples as $\tilde{x} = G(z)$. Unlike the generator in a regular GAN, which is trained to match the distribution $p_G$ of the generated fake benign user representations with the distribution $p_{data}$ of the benign user representations, the generator of the complementary GAN learns a generative distribution $p_G$ that is close to the complementary distribution $p^*$ of the benign user representations, i.e., $p_G = p^*$. The complementary distribution $p^*$ is defined as:

$$
p^*(\tilde{x}) =
\begin{cases}
\dfrac{1}{\tau} \dfrac{1}{p_{data}(\tilde{x})} & \text{if } p_{data}(\tilde{x}) > \epsilon \text{ and } \tilde{x} \in \mathcal{B}_x, \\
C & \text{if } p_{data}(\tilde{x}) \le \epsilon \text{ and } \tilde{x} \in \mathcal{B}_x,
\end{cases}
\tag{11}
$$

where $\epsilon$ is a threshold indicating whether the generated samples are in high-density regions; $\tau$ is a normalization term; $C$ is a small constant; $\mathcal{B}_x$ is the space of user representations. To make the generative distribution $p_G$ close to the complementary distribution $p^*$, the complementary generator $G$ is trained to minimize the KL divergence between $p_G$ and $p^*$. Based on the definition of KL divergence, we have the following objective function:

$$
\begin{aligned}
\min_{G} \; \mathrm{KL}(p_G \,\|\, p^*)
&= -\mathcal{H}(p_G) - \mathbb{E}_{\tilde{x} \sim p_G} \log p^*(\tilde{x}) \\
&= -\mathcal{H}(p_G) + \mathbb{E}_{\tilde{x} \sim p_G} \big[ \log p_{data}(\tilde{x}) \, \mathbb{1}[p_{data}(\tilde{x}) > \epsilon] \big] + \text{const},
\end{aligned}
\tag{12}
$$

where $\mathcal{H}(\cdot)$ is the entropy and $\mathbb{1}[\cdot]$ is the indicator function. The last term of Equation 12 can be omitted because both $\tau$ and $C$ are constants and the gradients of the indicator function with respect to the parameters of the generator are mostly zero.

Meanwhile, the complementary generator adopts the feature matching loss (Salimans et al., 2016) to ensure that the generated samples are constrained to the space of user representations $\mathcal{B}_x$:

$$
\mathcal{L}_{fm} = \lVert \mathbb{E}_{x \sim p_{data}} f(x) - \mathbb{E}_{\tilde{x} \sim p_G} f(\tilde{x}) \rVert^2,
\tag{13}
$$

where $f(\cdot)$ denotes the output of an intermediate layer of the discriminator used as a feature representation of $x$.

Thus, the complete objective function of the generator is defined as:

$$
\min_{G} \; -\mathcal{H}(p_G) + \mathbb{E}_{\tilde{x} \sim p_G} \big[ \log p_{data}(\tilde{x}) \, \mathbb{1}[p_{data}(\tilde{x}) > \epsilon] \big] + \mathcal{L}_{fm}.
\tag{14}
$$

Overall, the objective function of the complementary generator aims to make the generative distribution $p_G$ close to the complementary distribution $p^*$ and to place the generated samples in different regions (but in the same space of user representations) than those of the benign users.
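The generator objective can be sketched as a single function. This is a NumPy schematic with our own argument names: `log_p_data_fake` stands for $\log p_{data}(\tilde{x})$ on generated samples (in OCAN, approximated via a pre-trained discriminator), `pt_term` is a proxy for $-\mathcal{H}(p_G)$ (the pull-away term of Eq. 15), and the feature-matching term compares mean discriminator features:

```python
import numpy as np

def generator_loss(log_p_data_fake, pt_term, feat_real, feat_fake, eps):
    """Schematic form of Eq. 14:
       pt_term      ~ proxy for -H(p_G) (pull-away term, Eq. 15)
       density term = E[ log p_data(x~) * 1{p_data(x~) > eps} ]; its gradient
                      pushes generated samples out of high-density regions
       fm term      = || E f(x) - E f(x~) ||^2 (feature matching, Eq. 13)
    """
    indicator = (log_p_data_fake > np.log(eps)).astype(float)
    density_term = np.mean(indicator * log_p_data_fake)
    fm_term = np.sum((feat_real.mean(axis=0) - feat_fake.mean(axis=0)) ** 2)
    return pt_term + density_term + fm_term
```

Within the region $p_{data} > \epsilon$, a denser fake sample contributes a larger loss than a sparser one, so gradient descent on the generator drives samples toward the low-density regions.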

Figure 2 illustrates the difference between the generators of the regular GAN and the complementary GAN. The objective function of the generator of the regular GAN in Equation 4 is trained to fool the discriminator by generating fake benign users similar to the real benign users. Hence, as shown in Figure 2(a), the generator of the regular GAN produces a distribution of fake benign users similar to the distribution of real benign users in the feature space. On the contrary, the objective function of the generator of the complementary GAN in Equation 14 is trained to generate complementary samples that lie in the low-density regions of benign users (shown in Figure 2(b)).

(a) Regular GAN
(b) Complementary GAN
Figure 2. Demonstrations of the ideal generators of the regular GAN and the complementary GAN. The blue dotted line indicates the high-density regions of benign users.

To optimize the objective function of the generator, we need to approximate the entropy of generated samples $\mathcal{H}(p_G)$ and the probability distribution of real samples $p_{data}$. To minimize $-\mathcal{H}(p_G)$, we adopt the pull-away term (PT) proposed by (Zhao et al., 2016; Dai et al., 2017), which encourages the generated feature vectors to be orthogonal. The PT term increases the diversity of generated samples and can be considered a proxy for minimizing $-\mathcal{H}(p_G)$. The PT term is defined as

$$
\mathcal{L}_{PT} = \frac{1}{N(N-1)} \sum_{i=1}^{N} \sum_{j \ne i} \left( \frac{f(\tilde{x}_i)^{\top} f(\tilde{x}_j)}{\lVert f(\tilde{x}_i) \rVert \, \lVert f(\tilde{x}_j) \rVert} \right)^2,
\tag{15}
$$

where $N$ is the size of a mini-batch. The probability distribution of real samples $p_{data}$ is usually unavailable, and approximating it is computationally expensive. We adopt the approach proposed by (Schoneveld, 2017) in which a discriminator from a regular GAN detects whether the data come from the real data distribution or from the generator's distribution. The basic idea is that such a discriminator can tell whether a sample is from the real data distribution or from the generator when the generator is trained to generate samples close to real benign users. Hence, the discriminator is sufficient to identify the data points above the threshold $\epsilon$ of $p_{data}(\tilde{x})$ during training. We separately train a regular GAN model on the benign user representations and use its discriminator as a proxy to evaluate the condition $p_{data}(\tilde{x}) > \epsilon$.
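The pull-away term can be sketched directly. In this NumPy sketch, the minibatch `F` stands for the stacked feature vectors $f(\tilde{x}_i)$ of the generated samples:

```python
import numpy as np

def pull_away_term(F):
    """Pull-away term (Eq. 15): mean squared cosine similarity over all
    distinct pairs of feature vectors in a minibatch F of shape (N, d)."""
    Fn = F / np.linalg.norm(F, axis=1, keepdims=True)   # row-normalize
    S = Fn @ Fn.T                                        # pairwise cosine similarities
    N = F.shape[0]
    off_diag = S ** 2 - np.eye(N)                        # drop the i == j terms (each equals 1)
    return off_diag.sum() / (N * (N - 1))
```

A minibatch of mutually orthogonal feature vectors gives a PT value of 0 (maximal diversity), while a minibatch of identical vectors gives 1, so minimizing PT spreads the generated samples apart.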

The discriminator $D$ takes the benign user representations $u$ and the generated representations $\tilde{x}$ as inputs and tries to distinguish $u$ from $\tilde{x}$. As a classifier, $D$ is a standard feedforward neural network with a softmax function as its output layer, and the objective function of $D$ is:

$$
\min_{D} \; -\mathbb{E}_{u \sim p_{data}} \log D(u) - \mathbb{E}_{\tilde{x} \sim p_G} \log(1 - D(\tilde{x})) - \mathbb{E}_{u \sim p_{data}} \big[ D(u) \log D(u) \big].
\tag{16}
$$

The first two terms in Equation 16 are the objective function of the discriminator in the regular GAN model; the discriminator of the complementary GAN is therefore trained to separate the benign users and the complementary samples. The last term in Equation 16 is a conditional entropy term that encourages the discriminator to detect real benign users with high confidence. As a result, the discriminator is able to separate benign and malicious users clearly.
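The discriminator objective can be sketched as follows. This NumPy schematic combines the two regular-GAN terms with a conditional-entropy regularizer on real data; the specific form of the regularizer shown here ($-\mathbb{E}[D \log D]$) is one plausible reading of the term, used purely for illustration:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Schematic form of Eq. 16: the two regular-GAN terms plus a
    conditional-entropy regularizer on real data; the regularizer is
    minimized when D's predictions on real benign users are confident."""
    gan_terms = -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))
    cond_entropy = -np.mean(d_real * np.log(d_real))   # entropy-style penalty on real data
    return gan_terms + cond_entropy
```

A discriminator that outputs confident probabilities (near 0 or 1) incurs a lower loss than one hovering near 0.5, which is exactly the behavior the extra term is meant to encourage.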

Although the objective functions of the discriminators of the regular GAN and the complementary GAN are similar, their capabilities for malicious user detection differ. The discriminator of the regular GAN aims to separate benign users and generated fake benign users. However, after training, the generated fake benign users are located in the same regions as the real benign users (shown in Figure 2(a)), so the probabilities of real and generated fake benign users predicted by the discriminator of the regular GAN are all close to 0.5. Thus, given a benign user, the discriminator cannot make a prediction with high confidence. On the contrary, the discriminator of the complementary GAN is trained to separate benign users and generated complementary samples. Since the generated complementary samples follow a distribution similar to that of malicious users (shown in Figure 2(b)), the discriminator of the complementary GAN can also detect malicious users.

Figure 3. The fraud detection model
Inputs: Testing dataset X_test; well-trained LSTM-Autoencoder and complementary GAN
Outputs: The user labels in X_test
1 foreach user in X_test do
2       compute the user representation u by the encoder of the LSTM-Autoencoder (Eq. 6, 7); predict the label of the user by the discriminator D(u)
3 end foreach
return the user labels
Algorithm 2 Fraud Detection

5. Fraud Detection Model

Although the training procedure of OCAN contains two phases that train the LSTM-Autoencoder and the complementary GAN successively, the fraud detection model is an end-to-end model. We show the pseudocode of the fraud detection procedure in Algorithm 2 and illustrate its structure in Figure 3. To detect a malicious user, we first compute the user representation $u$ with the encoder of the LSTM-Autoencoder (Line 2). Then, we predict the user label with the discriminator of the complementary GAN, i.e., based on $D(u)$.

Early fraud detection: The upper-left region of Figure 3 shows that our OCAN model can also achieve early detection of malicious users. Given a user, at each step $t$, the hidden state $h_t$ of the encoder is updated by taking the current feature vector $x_t$ as input and thus captures the user's behavior information up to the $t$-th step. The user representation at the $t$-th step is therefore $u_t = h_t$. Finally, we can use the discriminator to calculate the probability that the user is malicious based on the current user representation $u_t$.
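The early detection loop can be sketched as follows. This is a schematic with toy stand-ins for the trained encoder and discriminator; `encoder_step`, `discriminator`, and the weights below are our own placeholder names, not OCAN's trained models:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def early_detect(X, encoder_step, discriminator, threshold=0.5):
    """Scan a user's edits one by one; flag the user as soon as the
    discriminator's malicious probability on u_t crosses the threshold."""
    h = None
    for t, x_t in enumerate(X, start=1):
        h = encoder_step(x_t, h)            # update hidden state -> u_t
        if discriminator(h) > threshold:
            return t                         # flagged after t edits
    return None                              # never flagged

# Toy stand-ins (placeholders only):
rng = np.random.default_rng(2)
W = 0.5 * rng.normal(size=(3, 3))
step = lambda x, h: np.tanh(W @ x + (h if h is not None else 0.0))
disc = lambda u: sigmoid(u.sum())            # toy "malicious" probability
```

Because the representation is updated per action, the number of edits a vandal commits before being flagged (as reported in Table 2) is just the step at which the loop returns.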

6. Experiments

Input Algorithm Precision Recall F1 Accuracy
Raw feature vector OCNN
OCGP
OCSVM
User representation OCNN
OCGP
OCSVM
OCAN
User representation OCAN-r
Table 1. Vandal detection results (mean±std.) on precision, recall, F1, and accuracy

6.1. Experiment Setup

Dataset: To evaluate OCAN, we focus on one type of malicious users, i.e., vandals on Wikipedia. We conduct our evaluation on the UMDWikipedia dataset (Kumar et al., 2015). This dataset contains information about around 770K edits from January 2013 to July 2014 (19 months) by 17105 vandals and 17105 benign users. Each user edits a sequence of Wikipedia pages. We keep the users whose edit sequence lengths range from 4 to 50. After this preprocessing, the dataset contains 10528 benign users and 11495 vandals.

To compose the feature vector of a user's $t$-th edit, we adopt the following edit features: (1) whether or not the user edited a meta-page; (2) whether or not the user made consecutive edits within one minute; (3) whether or not the user's current edit page had been edited before; (4) whether or not the user's current edit would be reverted.

We further evaluate our model on a credit card transaction dataset in Section 6.5. Although it is not a sequence dataset, it can still be used to compare the performance of OCAN against baselines in the context of one-class fraud detection.

Hyperparameters: For the LSTM-Autoencoder, the dimension of the hidden layer is 200, and the number of training epochs is 20. For the complementary GAN model, both the discriminator and the generator are feedforward neural networks. Specifically, the discriminator contains 2 hidden layers with 100 and 50 dimensions. The generator takes 50-dimensional noise as input and has one hidden layer with 100 dimensions. The output layer of the generator has the same dimension as the user representation, which is 200 in our experiments. The number of training epochs of the complementary GAN is 50. The threshold $\epsilon$ defined in Equation 14 is set as the 5-quantile probability of real benign users predicted by a pre-trained discriminator. We evaluated several values from the 4-quantile to the 10-quantile and found that the results are not sensitive to this choice.
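The quantile-based choice of the threshold can be sketched as follows. The variable names here are our own; `benign_probs` stands for the pre-trained discriminator's probabilities on real benign users, and the listed values are made-up examples:

```python
import numpy as np

def density_threshold(d_probs, q=5):
    """Set the threshold to the 1/q quantile of the pre-trained discriminator's
    probabilities on real benign users (q=5 corresponds to the 5-quantile)."""
    return np.quantile(d_probs, 1.0 / q)

# Hypothetical benign-user probabilities from a pre-trained discriminator:
benign_probs = np.array([0.62, 0.71, 0.80, 0.85, 0.90, 0.93, 0.95, 0.97, 0.98, 0.99])
eps = density_threshold(benign_probs, q=5)   # 20th percentile of the benign scores
```

With this choice, roughly 80% of real benign users score above the threshold, so a generated sample scoring above it is treated as lying in a high-density benign region.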

Repeatability: Our software, together with the datasets used in this paper, is available at https://github.com/PanpanZheng/OCAN

6.2. Comparison with One-Class Classification

Baselines: We compare OCAN with the following widely used one-class classification approaches:

  • One-class nearest neighbors (OCNN) (Tax and Duin, 2001) labels a testing sample based on the distance from the sample to its nearest neighbors in the training dataset and the average distance among those nearest neighbors. If the difference between these two distances is larger than a threshold, the testing sample is an anomaly.

  • One-class Gaussian process (OCGP) (Kemmler et al., 2013) is a one-class classification model based on Gaussian process regression.

  • One-class SVM (OCSVM) (Tax and Duin, 2004) adopts support vector machine to learn a decision hypersphere around the positive data, and considers samples located outside this hypersphere as anomalies.

For the baselines, we use the implementation provided in NDtool (http://www.robots.ox.ac.uk/~davidc/publications_NDtool.php). The hyperparameters of the baselines are set to the default values in NDtool. Note that both OCNN and OCGP require a small portion (5% in our experiments) of vandals as a validation dataset to tune an appropriate threshold for vandal detection. However, OCAN does not require any vandals for training or validation. Since the baselines are not sequence models, we compare OCAN to the baselines in two ways. First, we concatenate all the edit feature vectors of a user into a raw feature vector as the input to the baselines. Second, the baselines take the same inputs as the discriminator, i.e., the user representations computed by the encoder of the LSTM-Autoencoder. Meanwhile, OCAN cannot adopt the raw feature vectors as inputs to detect vandals because GAN is only suitable for real-valued data (Goodfellow et al., 2014).

To evaluate the performance of vandal detection, we randomly select 7000 benign users as the training dataset and 3000 benign users and 3000 vandals as the testing dataset. We report the mean value and standard deviation over 10 different runs. Table 1 shows the means and standard deviations of the precision, recall, F1 score, and accuracy for vandal detection. First, OCAN achieves better performance than the baselines in terms of F1 score and accuracy in both input settings. This means the discriminator of the complementary GAN can be used as a one-class classifier for vandal detection. We further observe that when the baselines adopt the raw feature vectors instead of the user representations, the performance of both OCNN and OCGP decreases significantly. This indicates that the user representations computed by the encoder of the LSTM-Autoencoder capture the salient information about user behavior and can improve the performance of one-class classifiers. However, we also notice that the standard deviations of OCAN are higher than those of the baselines with user representations as inputs. We argue that this is because GAN is widely known to be difficult to train; thus, the stability of OCAN is relatively lower than that of the baselines.

Furthermore, the last row of Table 1 shows the experimental results of OCAN-r, which adopts the regular GAN model instead of the complementary GAN in the second training phase of OCAN. We observe that the performance of OCAN is better than that of OCAN-r, indicating that the discriminator of the complementary GAN, trained on real and complementary samples, can more accurately separate benign users and vandals.

Model    Vandals  Precision  Recall  F1      Edits
M-LSTM   7000     0.8416     0.9637  0.8985  7.21
M-LSTM   1000     0.9189     0.8910  0.9047  5.98
M-LSTM   400      0.9639     0.6767  0.7951  3.64
M-LSTM   300      0.0000     0.0000  0.0000  0.00
M-LSTM   100      0.0000     0.0000  0.0000  0.00
OCAN     0        0.8014     0.9081  0.8459  7.23
OCAN-r   0        0.7228     0.8968  0.7874  7.18
Table 2. Early Vandal detection results on precision, recall, F1, and the average number of edits before the vandals are blocked

6.3. Comparison with M-LSTM for Early Vandal Detection

We further compare OCAN with M-LSTM, a recent deep-learning-based vandal detection model developed in (Yuan et al., 2017b), in terms of early vandal detection. Note that M-LSTM assumes a training dataset that contains both vandals and benign users. In our experiments, we train OCAN on 7000 benign users and no vandals, and train M-LSTM on the same 7000 benign users plus a varying number of vandals (from 7000 down to 100). Both models are evaluated on the same testing dataset of 3000 benign users and 3000 vandals. In both OCAN and M-LSTM, the hidden state of the LSTM captures the up-to-date user behavior information, which enables early vandal detection. The difference is that M-LSTM feeds the hidden state directly into a classifier, whereas OCAN further trains the complementary GAN and uses its discriminator as the classifier. In this experiment, instead of applying the classifier to the final user representation, the classifiers of M-LSTM and OCAN are applied to each step of the LSTM hidden states and predict whether a user is a vandal after the user commits the t-th edit.
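The per-step decision rule above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `score_benign` is a hypothetical stand-in for the discriminator's benign probability applied to each LSTM hidden state, and the threshold value is an assumption.

```python
# Sketch of per-step early vandal detection. `score_benign` is a
# hypothetical stand-in for the discriminator's P(benign) output
# evaluated on each LSTM hidden state.
def early_detect(hidden_states, score_benign, threshold=0.5):
    """Flag a user as a vandal at the first step whose benign probability
    drops below the threshold; return (is_vandal, edits_until_decision)."""
    for t, h in enumerate(hidden_states, start=1):
        if score_benign(h) < threshold:
            return True, t             # flagged after the t-th edit
    return False, len(hidden_states)   # never flagged: predicted benign

# Toy usage: benign probabilities fall as suspicious edits accumulate.
states = [[0.9], [0.8], [0.2], [0.1]]
flag, edits = early_detect(states, score_benign=lambda h: h[0])
# flag is True and edits == 3: the user is flagged on the third edit
```

The "average number of edits before being blocked" column of Table 2 corresponds to averaging the second return value over the flagged vandals.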

Table 2 shows the comparison in terms of precision, recall, F1 of early vandal detection, and the average number of edits vandals committed before being blocked. OCAN achieves performance comparable to M-LSTM when the number of vandals in the training dataset is large (1000, 4000, and 7000). However, M-LSTM performs very poorly when the number of vandals in the training dataset is small; in fact, M-LSTM could not detect any vandal when the training dataset contained fewer than 400 vandals. In contrast, OCAN does not need any vandals in the training data.

The last row of Table 2 shows the early vandal detection results of OCAN-r. OCAN-r outperforms M-LSTM when M-LSTM is trained with only a small number of vandals, but OCAN-r is not as good as OCAN. This indicates that generating complementary samples to train the discriminator improves its vandal detection performance.

6.4. OCAN Framework Analysis

Figure 4. Visualization of 3000 benign users (blue star) and 3000 vandals (cyan triangle) based on user representation.

LSTM-Autoencoder: We adopt the LSTM-Autoencoder because it transforms edit sequences into user representations. It also encodes benign users and vandals into relatively different regions of the hidden space, even though the LSTM-Autoencoder is trained only on benign users. To validate this intuition, we compute the user representations of the testing dataset with the encoder of the LSTM-Autoencoder and map them to a two-dimensional space using the Isomap approach (Tenenbaum et al., 2000). Figure 4 shows the visualization of the user representations: benign users and vandals are relatively well separated in the two-dimensional space, demonstrating the capability of the LSTM-Autoencoder.
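A Figure-4-style projection can be sketched with scikit-learn's Isomap. This is only an illustration of the visualization step: the random vectors below stand in for the encoder outputs, and the dimensionality and neighborhood size are assumptions, not the paper's settings.

```python
# Sketch: project user representations to 2-D with Isomap for a
# Figure-4-style scatter plot. Random vectors stand in for the
# LSTM-Autoencoder encoder outputs (stand-in data, not the real dataset).
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)
benign = rng.normal(0.0, 1.0, size=(100, 16))  # stand-in benign representations
vandal = rng.normal(1.0, 1.0, size=(100, 16))  # stand-in vandal representations
reps = np.vstack([benign, vandal])

coords = Isomap(n_neighbors=10, n_components=2).fit_transform(reps)
# coords[:100] and coords[100:] can now be scattered with different markers
```

Each row of `coords` is the 2-D embedding of one user, so the two halves can be plotted with the star and triangle markers used in Figure 4.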

(a) Prob. predicted by OCAN
(b) Prob. predicted by OCAN-r
(c) F1 score of OCAN
(d) F1 score of OCAN-r
Figure 5. Training progress of OCAN (4(a), 4(c)) and OCAN-r (4(b), 4(d)). The three lines in Figures 4(a) and 4(b) indicate the probabilities of being benign predicted by the discriminator: real benign users (green line) vs. generated samples (red broken line) vs. real malicious users (blue dotted line). Figures 4(c) and 4(d) show the F1 scores of OCAN and OCAN-r during training.

Complementary GAN vs. Regular GAN:

In our OCAN model, the generator of the complementary GAN aims to generate complementary samples that lie in the low-density regions of the real samples, and the discriminator is trained to separate the real and complementary samples. We examine the training progress of OCAN in terms of prediction accuracy. After each training epoch, we compute the probabilities of being benign predicted by the discriminator of the complementary GAN on the testing dataset for real benign users (green line in Figure 4(a)), malicious users (blue dotted line), and generated samples (red broken line). We observe that after OCAN converges, the probabilities predicted for malicious users are much lower than those for benign users; for example, at epoch 40, the discriminator assigns real benign users a much higher average probability of being benign than malicious users. Meanwhile, the average probability of the generated complementary samples lies between those of the benign and malicious users.

In contrast, the generator of the regular GAN in the OCAN-r model aims to generate fake samples that are close to the real samples, and its discriminator focuses on distinguishing the real and generated fake samples. As shown in Figure 4(b), the probabilities predicted by the discriminator of the regular GAN for real benign users and for malicious users move closer together during training. After OCAN-r converges, around epoch 120, both probabilities are close to 0.5. Meanwhile, the probability of the generated samples is similar to those of the real benign and malicious users.

We also show the F1 scores of OCAN and OCAN-r on the testing dataset after each training epoch in Figures 4(c) and 4(d). The F1 score of OCAN-r is not as stable as (and also a bit lower than) that of OCAN. This is because the outputs of the discriminator for real and fake samples are close to 0.5 after the regular GAN converges; if the probabilities predicted for real benign users swing around 0.5, the vandal detection accuracy fluctuates accordingly.

Figure 5 reveals another nice property of OCAN over OCAN-r for fraud detection: OCAN converges faster. OCAN converges after only 20 training epochs, while OCAN-r requires nearly 100 epochs to stabilize. This is because the complementary GAN is trained to separate the benign and malicious users, while the regular GAN mainly aims to generate fake samples that match the real samples; in general, matching two distributions requires more training epochs than separating them. Meanwhile, the feature matching term adopted in the generator of the complementary GAN also improves the training process (Salimans et al., 2016).
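The feature matching term mentioned above can be sketched in a few lines. This is a minimal numpy illustration of the idea from Salimans et al. (2016), matching batch-mean discriminator features, not the exact loss used in OCAN's generator objective.

```python
# Minimal sketch of the feature-matching term (Salimans et al., 2016):
# penalize the distance between the mean intermediate-layer features the
# discriminator produces for a real batch and for a generated batch.
import numpy as np

def feature_matching_loss(real_feats, gen_feats):
    """Squared L2 distance between batch-mean feature activations.
    Rows are samples; columns are discriminator feature dimensions."""
    diff = real_feats.mean(axis=0) - gen_feats.mean(axis=0)
    return float(np.dot(diff, diff))

# Identical feature statistics give zero loss.
loss = feature_matching_loss(np.ones((8, 4)), np.ones((3, 4)))  # 0.0
```

Minimizing this term pushes the generator's samples toward the feature statistics of the real data, which is one reason the complementary GAN's training is reported to be smoother.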

Figure 6. 2D visualization of three types of users: real benign (blue star), vandal (cyan triangle), and complementary benign (red dot)

Visualization of three types of users: We project the user representations of the three types of users (i.e., real benign, vandal, and complementary benign generated by OCAN) to a two-dimensional space with Isomap. Figure 6 visualizes the three types of users. The generated complementary users lie in the low-density regions of the real benign users and also fall between the benign users and the vandals. Since the discriminator is trained to separate the benign and complementary benign users, it is therefore able to separate benign users from vandals.

Cluster   Size  Benign  Vandal  Complement
C1        2537  2448    89      0
C2        2420  93      2327    0
C3        2840  0       0       2840
Isolated  1203  459     584     160
Table 3. Clustering results of DBSCAN
Input                       Algorithm  Precision  Recall  F1  Accuracy
Raw feature vector          OCNN
                            OCGP
                            OCSVM
                            OCAN
Transaction representation  OCNN
                            OCGP
                            OCSVM
                            OCAN
Table 4. Fraud detection results (mean±std.) on precision, recall, F1, and accuracy of credit card fraud detection

User clustering: To further analyze the complementary GAN model, we adopt the classic DBSCAN algorithm (Ester et al., 1996) to cluster the 3000 benign users and 3000 vandals from the testing dataset together with 3000 generated complementary benign users. Table 3 shows the clustering results, including the size and class distribution of each cluster. We set the maximum radius of the neighborhood (the average distance among the user representations) and the minimum number of points. We observe three clusters, where C1 is dominated by benign users, C2 by vandals, and C3 by complementary samples, in addition to 1203 isolated users that do not belong to any cluster. We emphasize that OCAN still classifies those isolated users accurately (with 89% accuracy).
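The clustering step can be sketched with scikit-learn's DBSCAN. This is an illustration on stand-in data, not the paper's experiment: the radius follows the text (average pairwise distance among representations), while the minimum number of points (5 here) and the synthetic clusters are assumptions.

```python
# Sketch of the DBSCAN analysis: neighborhood radius set to the average
# pairwise distance among representations, as described in the text.
# The stand-in data and min_samples=5 are hypothetical choices.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import pairwise_distances

rng = np.random.default_rng(1)
reps = np.vstack([rng.normal(0.0, 0.2, (50, 8)),   # stand-in benign cluster
                  rng.normal(3.0, 0.2, (50, 8))])  # stand-in vandal cluster

eps = pairwise_distances(reps).mean()              # average pairwise distance
labels = DBSCAN(eps=eps, min_samples=5).fit_predict(reps)
# labels holds one cluster id per point; -1 marks isolated (noise) points
```

On the real user representations, the `-1` labels correspond to the 1203 isolated users in Table 3.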

We further compute the centroids of C1, C2, and C3 from their user representations and use the centroids to measure the distances among the three types of users. The distance between the centroids of real benign users and complementary benign users is 3.6346, while the distance between the centroids of real benign users and vandals is 3.888. Since the discriminator is trained to separate real benign users from complementary benign users, it can detect vandals, which lie farther from the real benign users than the complementary benign users do.
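The centroid-distance check above amounts to a few lines of numpy. The toy points below are hypothetical stand-ins chosen only to reproduce the qualitative ordering (vandal centroid farther from the benign centroid than the complementary centroid), not the reported distances.

```python
# Sketch of the centroid-distance comparison between the three clusters.
# The 2-D toy points are hypothetical stand-ins for user representations.
import numpy as np

def centroid(points):
    return np.asarray(points, dtype=float).mean(axis=0)

benign = np.array([[0.0, 0.0], [1.0, 0.0]])  # stand-in benign cluster
comp   = np.array([[2.0, 0.0], [3.0, 0.0]])  # stand-in complementary cluster
vandal = np.array([[4.0, 0.0], [5.0, 0.0]])  # stand-in vandal cluster

d_benign_comp   = np.linalg.norm(centroid(benign) - centroid(comp))    # 2.0
d_benign_vandal = np.linalg.norm(centroid(benign) - centroid(vandal))  # 4.0
```

As in the paper's measurement, the benign-to-vandal centroid distance exceeds the benign-to-complementary distance, which is why a discriminator trained on the latter boundary also rejects vandals.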

6.5. Case Study on Credit Card Fraud Detection

We further evaluate our model on a credit card fraud detection dataset (https://www.kaggle.com/dalpozz/creditcardfraud). The dataset records credit card transactions over two days and contains 492 frauds out of 284,807 transactions; each transaction has 28 features. We adopt 700 genuine transactions as the training dataset and 490 fraudulent and 490 genuine transactions as the testing dataset. Since the transaction features of this dataset are numerical values derived from PCA, OCAN can detect frauds using the raw features as inputs. We also evaluate OCAN in the hidden feature space: because credit card transactions are not sequence data, we adopt a regular autoencoder instead of the LSTM-Autoencoder to obtain transaction representations, whose dimension is 50 in our experiments.

Table 4 shows the classification results for credit card fraud detection. Overall, the performance of OCAN and the baselines is similar to the vandal detection results in Table 1: OCAN achieves the best accuracy and F1 under both input settings, and OCAN with transaction representations as inputs outperforms OCAN with raw features. This shows that OCAN outperforms existing one-class classifiers on different datasets and can be applied to detect different types of malicious users.

Testing on an imbalanced dataset. In real scenarios, there are far more genuine transactions than fraudulent ones. Hence, after training OCAN on 700 genuine transactions, we further test it on an imbalanced dataset consisting of 1000 genuine and 100 fraudulent transactions. Figure 7 shows the ROC curves of OCAN on this imbalanced dataset. OCAN achieves promising fraud detection performance with both raw feature vectors and transaction representations as inputs: the AUC of OCAN with raw features is 0.9645, and the AUC with transaction representations is 0.9750.
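The AUC computation on such an imbalanced split can be sketched as follows. The score distributions here are hypothetical stand-ins for the discriminator's outputs; only the 1000/100 class split follows the text.

```python
# Sketch: ROC AUC evaluation on an imbalanced test set of 1000 genuine
# and 100 fraudulent transactions. The score distributions are
# hypothetical stand-ins for the discriminator's fraud scores.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = np.concatenate([np.zeros(1000), np.ones(100)])   # 1 = fraud
scores = np.concatenate([rng.normal(0.2, 0.1, 1000),      # genuine scores
                         rng.normal(0.8, 0.1, 100)])      # fraud scores
auc = roc_auc_score(y_true, scores)  # near 1 for well-separated scores
```

Because AUC is insensitive to class priors, it is a reasonable summary metric for this 10:1 imbalanced setting, which is presumably why the paper reports it alongside the ROC curves.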

Figure 7. ROC curve of OCAN on an imbalanced dataset

7. Conclusion

In this paper, we have developed OCAN, which consists of an LSTM-Autoencoder and a complementary GAN, for fraud detection when only benign users are observed during the training phase. During training, OCAN adopts the LSTM-Autoencoder to learn benign user representations and then uses them to train the complementary GAN. The generator of the complementary GAN generates complementary benign user representations that lie in the low-density regions of the real benign user representations, while the discriminator is trained to distinguish the real and complementary benign users. After training, the discriminator is able to detect malicious users, which lie outside the regions of benign users. We have conducted theoretical and empirical analyses to demonstrate the advantages of the complementary GAN over the regular GAN. Experiments on two real-world datasets showed that OCAN outperforms state-of-the-art one-class classification models. Moreover, since OCAN takes user edit sequences as inputs, it also achieves early vandal detection, with accuracy comparable to the latest M-LSTM model, which needs both benign users and vandals in the training data. In future work, we plan to extend our techniques to fraud detection in the semi-supervised learning scenario.

Acknowledgements.
The authors acknowledge the support from National Science Foundation to Panpan Zheng and Xintao Wu (1564250), Jun Li (1564348) and Aidong Lu (1564039).

References