Deceptive opinion spam refers to illegitimate activities, such as writing fake reviews, giving fake ratings, etc., to mislead consumers. While the problem has been researched from both linguistic [1, 2] and behavioral [3, 4] aspects, the case of sockpuppets remains unsolved. A sockpuppet refers to a physical author using multiple aliases (user-ids) to post opinion spam while avoiding filtering. Sockpuppets are particularly difficult to detect with existing opinion spam detection methods because a sockpuppet invariably uses a user-id only a few times (often once), thereby limiting the context per user-id. Deceptive sockpuppets may thus be considered a new frontier of attacks in opinion spam.
However, specific behavioral techniques, such as Internet Protocol (IP) and session-log based detection and group spammer detection, can provide important signals to probe into the few ids that form a potential sockpuppet. Particularly, some strong signals, such as shared IP and session logs, abnormal keystroke similarities, etc. (all of which are almost always available to a website administrator), can provide reasonable confidence that some reviews were written by one author masked behind a sockpuppet. This yields a form of "training data" for identifying that sockpuppeteer; the challenge is then to find other fake reviews written by the same author under different aliases in the future. Hence, the problem reduces to an author verification problem. Given a few instances (reviews) written by a (known) sockpuppet author a, the task is to build an Author Verifier v_a (a classifier) that can determine whether another (future) review is also written by a or not. This problem is related to authorship attribution (AA), where the goal is to identify the author of a given document from a closed set of authors. However, short reviews with diverse topics render traditional AA methods, which mostly rely on content features, not very effective (see section 7). While there has been AA work on short texts such as tweets and on limited training data, the case of sockpuppets is different because it involves deception. Further, in reality sockpuppet detection is an open set problem (i.e., it has an unbounded number of classes or authors), which makes it very difficult, if not impossible, to obtain a good representative sample of the negative set for an author. In that regard, our problem bears resemblance to authorship verification.
In this work, we first find that under the traditional attribution setting, the precision of a verifier v_a degrades as the diversity and size of its negative author set grow. This is detailed in section 4.1 and shows that the verifier struggles with higher false positives and cannot learn the negative class well. It lays the ground for exploiting the unlabeled test set to improve the negative set during training. Next, we improve performance by learning verification models in lower dimensions (section 5). Particularly, we employ a feature selection scheme, KL Parse Tree Features (henceforth abbreviated as KL-PTFs), that exploits the KL-Divergence between the stylistic language models (computed using PTFs) of the author and its pseudo author. Lastly, we address the problem by taking advantage of transduction (section 6). The idea is to place a carefully selected subset of positive samples, reviews authored by a (referred to as the spy set), from the training set into the unlabeled test set (i.e., the test set without its true labels) and extract the nearest and farthest neighbors of the members of the spy set. These extracted neighbors (i.e., samples in the unlabeled test set which are close to and far from the spy samples) are potentially positive and negative samples that can improve the verifier v_a. This process is referred to as spy induction. The basic rationale is that since all samples retain their identity, a good distance metric should find hidden positive and negative samples in the unlabeled test set. The technique is particularly effective when training data is limited in size and diversity. Although both spy induction and traditional transduction exploit the assumption of implicit clusters in the data, there is a major difference between the two schemes: spy induction sub-samples the unlabeled test set for potential positive and negative examples to grow the training set, whereas traditional transduction uses the entire unlabeled test set to find a hyperplane that splits the training and test sets in the same manner. Our results show that for the current task, spy induction significantly outperforms traditional transduction and other baselines across a variety of classifiers, and even across domains.
2 Related Work
Authorship Attribution (AA):
AA solves the attribution problem on a closed set of authors using text categorization. Supervised multi-class classification algorithms with lexical, semantic, syntactic, stylistic, and character n-gram features have been explored in [14, 15, 16]. A tri-training method was proposed to solve AA under limited training data, extending co-training with three views: lexical, character, and syntactic. That method, however, assumes that a large set of unlabeled documents authored by the same closed set of authors is available, which differs from our sockpuppet verification setting. Elsewhere, latent topic features were used to improve attribution; this method also requires a larger text collection per author to discover each author's latent topics, which is unavailable for a sockpuppet.
Authorship Verification (AV): In AV, given the writings of an author, the task is to determine whether a new document is written by that author or not. Koppel and Schler (2004) explored the problem on American novelists using one-class classification and the "unmasking" technique. Unmasking exploits the rate at which the accuracy of learned models deteriorates as the best features are iteratively dropped. In later work, the task was to determine whether a pair of blogs were written by the same author; repeated feature sub-sampling was used to determine whether one document of the pair reliably selects the other from among a background set of "imposters". Although effective, unmasking requires texts of a few hundred words to gain statistical robustness and was shown to be ineffective for short texts (e.g., reviews).
Sockpuppet Detection: Sockpuppets were studied for detecting fake identities among Wikipedia content providers using an SVM model with word and Part-Of-Speech (POS) features. Elsewhere, a similarity-space based learning method was proposed for identifying multiple userids of the same author. These methods assume reasonable context (e.g., 30 reviews per userid), which may not be realistic in opinion spamming (e.g., [6, 23, 24]) as the reviews per userid are far fewer and often only one, as shown in singleton opinion spamming.
3 Dataset

Prior work reports that crowdsourcing is a reasonable method for soliciting ground truths for deceptive content, and crowdsourcing has been successfully used for opinion spam generation in various previous works [1, 27, 28, 29]. In this work, our focus is to garner ground truth samples of multiple fake reviews written by one physical author (a sockpuppet). To our knowledge, there is no existing dataset for opinion spam sockpuppets; hence, we used Amazon Mechanical Turk.
Participating Turkers were led to a website for this experiment where responses were captured. To model a realistic scenario such as singleton opinion spamming, Turkers were asked to act as a sockpuppet with access to several user-ids, each of which was to be used exactly once to write a review as if written by that alias. The core task required writing 6 positive and 6 negative deceptive reviews, each of more than 200 words, on an entity (i.e., 12 reviews per entity). Each entity belonged to one of three domains: hotel, restaurant, and product. We selected 6 entities per domain for this task. Each Turker completed the core task for two entities per domain (i.e., 24 reviews per domain). The entities and domains were spread out evenly across 17 authors (Turkers). It took us over a month to collect all samples, and the mean writing time per review was about 9 minutes.
To ensure original content, copy and paste were disabled on the logging website. We also followed important rubrics from prior work (e.g., restricting to US Turkers and requiring a minimum approval rating), and Turkers were briefed on the domain of deception with example fake reviews (from Yelp). All responses were evaluated manually, and those not meeting the requirements (e.g., overly short, incorrect target entity, unintelligible, etc.) were discarded, resulting in an average of 23 reviews per Turker per domain. The data and code of this work are available at https://www.dropbox.com/sh/xybjmxffmype3u2/AAA95vdkDp6z5fnTHxqjxq5Ga?dl=0 and will be released to serve as a resource for furthering research on opinion spam and sockpuppet detection.
Throughout the paper, for single-domain experiments, we focus on the hotel domain, which showed the same trends as the product and restaurant domains. We report results on all domains for the cross-domain analysis (section 7.4).
4 Hardness Analysis
This section aims to understand the hardness of sockpuppet verification via two schemes.
4.1 Employing Attribution
An ideal verifier (classifier) v_a for an author a requires a representative sample of the universal negative set ¬a (everyone but a). We approximate this by assuming a pseudo author representing ¬a and populating it with randomly selected reviews of all authors except a. Under the AA paradigm, this reduces to binary classification, and we build one verifier v_a per author a. As in the AA paradigm, we use the in-training setting, i.e., negative samples in both the training and test sets are authored by the same closed set of 16 authors, although the test and training sets are disjoint. Given our task, since there are not many documents per author to learn from, the effect of author diversity on problem hardness becomes relevant; hence, we analyze the effect of the diversity and size of the negative set. Let d be the fraction of the total authors in ¬a used in building the verifier v_a. Here d refers to author diversity under the in-training setting; we will later explore the effect of diversity under the out-of-training setting (section 5). For example, when d = 50%, we randomly choose 8 authors (50% of the 16 authors in ¬a) to define the negative set for v_a. Note that since we have a total of 16 authors in ¬a for each a and all values of d, the class distribution is imbalanced with the negative class in the majority. We keep the training set balanced throughout the paper, as recommended in prior work, to avoid learning bias due to data skewness. We use 5-fold Cross Validation (5-fold CV), so each training fold consists of 80% of the positive samples and an equal number of negative samples, while the test fold includes the remaining 20% of positive samples and all remaining negative samples not used in training. Under this scheme, since the negative class is the majority in the test set, accuracy is not an effective metric. For each author a, we first compute the precision, recall, and F-score on the positive class using 5-fold CV. Next, we average the results across all authors using their individual verifiers (Figure 1). This scheme yields a robust measure of sockpuppet verification performance across all authors and is used throughout the paper.
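Because the test folds are negative-heavy, only positive-class precision, recall, and F1 are informative. A minimal sketch of this scoring (the helper name `prf_positive` is ours, not the paper's) also illustrates why accuracy misleads under such imbalance:

```python
def prf_positive(y_true, y_pred):
    """Precision, recall, and F1 computed on the positive class
    (label 1) only, as done for each author verifier."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

# An imbalanced test fold: 2 positives among 10 samples.  A verifier
# that predicts "negative" for everything still gets 80% accuracy,
# but its positive-class F1 is 0 -- hence accuracy is avoided.
y_true = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
y_pred = [0] * 10
```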
Table 1. Example parse tree features (PTFs) for the sentence "The staff were friendly."
  PTF (I): S NP VP
  PTF (II): JJ ^ADJP VP
  PTF (III): S NP
  Interior nodes: DT, NP
With the increase in the diversity d of negative samples, the test set size and variety also increase, and we find significant drops in precision across all classifiers, i.e., a significant rise in false positives. In other words, as the approximated negative set approaches the universal negative set ¬a with increasing diversity, learning becomes harder.
Recall, however, does not change much with the increase in negative-set diversity, as it concerns retrieving the positive class.
Thus, sockpuppet verification is non-trivial, and its hardness increases with diversity.
4.2 Employing Accuracy and F1 on Balanced Class Distribution
Under binary text classification with a balanced class distribution, high accuracy or F1 shows that the two classes are well separated. This scheme was used in prior authorship verification work; in our case, we adapt the method as follows. We consider two balanced data scenarios for a verifier v_a. In the first scenario, the positive class consists of half of all reviews authored by a, and the negative class comprises the other half (i.e., a false negative set, since both halves are authored by a). In the second scenario, we keep the positive class intact but use a random sample of reviews by authors other than a as the negative class. Essentially, with this scheme, we wish to understand the effect of the negative training set as it varies from a false negative set to an approximated true negative set. Using lexical and parse tree features and 5-fold CV, we report performance under each scenario in Table 2. We note the following:
The precision, recall, F1, and accuracy of all models under the second scenario are higher than under the first. While this is intuitive, it shows that for deceptive sockpuppets, the writings of an author bear separation from those of other sockpuppeteers.
Sockpuppet verification is a difficult problem: even under balanced binary classification against other authors, there is only a 5-10% gain in accuracy over random (50%). Yet this does show that the models learn some linguistic knowledge separating a from ¬a, and that using the writings of authors other than a is a reasonable approximation of the universal negative set.
5 Learning in Lower Dimensions
The previous experiment hints that in the case of deceptive sockpuppets, only a small set of features differentiates a from ¬a. As explored in prior work, there often exist discriminative author-specific stylistic elements that can characterize an author. However, the gamut of all PTFs per author (over 2000 features in our data) may overlap across authors (e.g., due to native language styles). To mine the discriminative PTFs, we need a feature selection scheme. We build on the idea of linguistic KL-Divergence and model stylistic elements to capture how things are said as opposed to what is said. The key idea is to construct stylistic language models P_a and P_ā for the author a and its pseudo author ā, comprising the positive and negative classes of v_a respectively, where P_a(f) and P_ā(f) denote the probability of a PTF f in the reviews of a and ā. The divergence D(P_a || P_ā) provides a quantitative measure of the stylistic difference between a and ā. By definition, a PTF that appears in P_a with higher probability than in P_ā contributes most to D(P_a || P_ā); being asymmetric, it also follows that a PTF that appears in P_ā more than in P_a contributes most to D(P_ā || P_a). Clearly, both types of PTF are useful for building v_a. They can be combined by computing a per-feature score KL(f) as follows:

KL(f) = P_a(f) log [P_a(f) / P_ā(f)] + P_ā(f) log [P_ā(f) / P_a(f)]

Discriminative features are found by simply selecting the top PTFs in descending order of KL(f) until k features are chosen. This is a form of sub-sampling of the original PTF space that lowers the feature dimensionality. Intuitively, as KL(f) is proportional to the relative difference between the probability of a PTF in the positive and negative classes, the above selection scheme provides those PTFs that contribute most to the linguistic divergence between the stylistic language models of a and ā.
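A minimal sketch of this KL-based selection, assuming PTF probabilities are estimated from simple count vectors (the function names, toy counts, and smoothing constant are ours, not the paper's):

```python
import numpy as np

def kl_feature_scores(pos_counts, neg_counts, eps=1e-9):
    """Per-feature contribution to the symmetric divergence
    D(P_a || P_abar) + D(P_abar || P_a) between the author's and
    the pseudo author's stylistic language models."""
    p = np.clip(pos_counts / pos_counts.sum(), eps, None)
    q = np.clip(neg_counts / neg_counts.sum(), eps, None)
    return p * np.log(p / q) + q * np.log(q / p)

def select_top_features(pos_counts, neg_counts, k):
    """Keep the k PTFs that contribute most to the divergence."""
    return np.argsort(-kl_feature_scores(pos_counts, neg_counts))[:k]

# Toy PTF counts: feature 0 is over-used by the author, feature 3
# by the pseudo author; both directions of divergence are selected.
pos = np.array([30.0, 5.0, 5.0, 5.0])
neg = np.array([5.0, 6.0, 5.0, 29.0])
top2 = select_top_features(pos, neg, 2)
```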
To evaluate the effect of learning in lower dimensions, we consider a more realistic out-of-training setting instead of the in-training setting of the previous experiments. Under the out-of-training setting, the classifier cannot see the writings of the authors it may encounter in the test set; in other words, the test and training sets of a verifier v_a are completely disjoint with respect to authors, which is realistic and also more difficult than the in-training setting. Further, we explore the effect of author diversity under this setting: let d′ be the fraction of authors (other than the intended author a) whose reviews participate in training the negative set of v_a (not to be confused with d in section 4.1), while the remaining authors make up the negative test set. We also consider standard lexical units (word unigrams) (L), L + PTF, and the top k (tuned via CV) PTFs selected using the KL(f) metric (L + KL-PTF) as baselines. We examine different values of d′ but not d′ = 100%, as that would leave no test samples under the out-of-training setting. From Table 3, we note:
For each feature space, as the diversity d′ increases, we find gains in precision across each classifier with reasonably smaller drops in recall, resulting in overall higher F1. This shows that with increased diversity in training, the verifiers reduce false positives, improving their confidence. Note that verification gets harder for smaller d′ as the size and skewness of the test set increase. This trend differs from what we saw in Figure 1, where d referred to diversity under the in-training setting.
Average F1 across the three classifiers (column AVG, Table 3) improves using L+PTF over L, showing that parse tree features can capture style. However, feature selection with the baseline metric does not do well: for all d′ values there is a reduction in F1 for SVM and LR. L + KL-PTF feature selection performs best in AVG F1 across the different classifiers; it recovers that loss and also improves over the L+PTF space by about 2-3% F1.
6 Spy Induction
We recall from section 1 that our problem suffers from limited training data per author, as a sockpuppet uses each alias only a few times. To improve verification, we need a way to learn from more instances. Also, from section 4, we know that precision drops as the diversity of the negative set increases. Both issues can be addressed by leveraging the unlabeled test set to improve the training set under transduction.
Figure 2 provides an overview of the scheme. For a given training set and test set for v_a, spy induction has three main steps. The first is spy selection, where some carefully selected positive samples are sent to the unlabeled test set. The second step finds certain Nearest and Farthest Neighbors (abbreviated NN and FN henceforth) of the positive spy samples in the unlabeled test set. As the instances retain their original identity, a good distance metric should be able to retrieve potentially hidden positive samples (via NNs common across different positive spies) and negative samples (via FNs common across different positive spies) in the unlabeled test set. These newly retrieved samples are used to grow the training set. This step can introduce label errors, as the retrieved NNs and FNs may not be true positive and negative samples, which can be harmful in training; such mislabeled samples are shown in Figure 2(B). To reduce these potential errors, a third step of label verification is employed, where the labels of the newly retrieved samples are verified using the agreement of classifiers trained on orthogonal feature spaces. With this step, we benefit from the extended training data without suffering from error propagation. Lastly, the verifier undergoes improved training with the additional samples, optimizing the F-score on the training set.
6.1 Spy Selection
This first step involves sending highly representative spies that can retrieve new samples to improve training. For a given verification problem v_a, although any positive instance in the training data can be a spy sample, only a few of them might satisfy the representativeness constraint. Hence, we select as spies those positive samples that have maximum similarity to the other positive instances. In other words, the selection respects class-based centrality and employs minimum overall pairwise distance (OPD) as its selection criterion:

OPD(x) = Σ_{y ∈ P, y ≠ x} dist(x, y)

where P is the positive class of the training set, x denotes a potential spy sample, and dist is a distance function. Our spy set S consists of the spies with the least overall pairwise distance to all other positive samples; we also experiment with different sizes |S| of the spy set. The corresponding method (line 4, Algorithm 1) implements this step.
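The OPD criterion can be sketched as follows, assuming a Euclidean distance (the function name is illustrative, not from Algorithm 1):

```python
import numpy as np

def select_spies(X_pos, n_spies):
    """Pick as spies the positive samples with minimum overall
    pairwise (Euclidean) distance to all other positives, i.e.
    the most central members of the positive class."""
    n = len(X_pos)
    opd = np.array([
        sum(np.linalg.norm(X_pos[i] - X_pos[j]) for j in range(n) if j != i)
        for i in range(n)
    ])
    return np.argsort(opd)[:n_spies]

# Toy positive class: three clustered reviews and one outlier;
# the outlier is never central enough to become a spy.
X_pos = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]])
spies = select_spies(X_pos, 2)
```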
6.2 New Instance Retrieval via Nearest and Farthest Neighbors
After the selected spies are placed in the unlabeled test set, the goal is to find potential positive and negative samples. Intuitively, one would expect the data points closest to a positive spy to belong to the positive class, while those farthest are likely negative. For each spy s ∈ S, we consider its k_NN nearest neighbors as the likely positive set and its k_FN farthest neighbors as the likely negative set specific to s. We then intersect these neighbor sets across the spies to gain confidence, which yields the final sets of potentially positive and negative samples. This is implemented by the corresponding methods (lines 5, 6, Algorithm 1). In most cases, we did not find the common-neighbor sets to be empty; when a set is empty, it implies no reliable samples were found. Further, as with |S| (section 6.1), we try different values of k_NN and k_FN, set based on pilot experiments. The above retrieval scheme works with any distance metric.
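A sketch of the retrieval step under a Euclidean metric; intersecting across spies implements the "common neighbors" requirement (names are ours):

```python
import numpy as np

def retrieve_candidates(spies, X_unlabeled, k_nn, k_fn):
    """For each spy, take its k_nn nearest and k_fn farthest
    points in the unlabeled test set; keep only those common to
    every spy as likely-positive / likely-negative candidates."""
    likely_pos, likely_neg = None, None
    for s in spies:
        d = np.linalg.norm(X_unlabeled - s, axis=1)
        order = np.argsort(d)
        nn = set(order[:k_nn].tolist())
        fn = set(order[-k_fn:].tolist())
        likely_pos = nn if likely_pos is None else likely_pos & nn
        likely_neg = fn if likely_neg is None else likely_neg & fn
    return likely_pos, likely_neg

# Toy example: spies cluster near the origin; the first two
# unlabeled points sit near them, the last two are far away.
spies = np.array([[0.0, 0.0], [0.1, 0.1]])
X_u = np.array([[0.05, 0.0], [0.0, 0.05], [4.0, 4.0], [5.0, 5.0]])
pos_idx, neg_idx = retrieve_candidates(spies, X_u, k_nn=2, k_fn=2)
```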
We consider two distance metrics on the L + KL-PTF feature space to compute all pairwise distances in the spy selection and neighbor retrieval methods (lines 4-6, Algorithm 1): (1) Euclidean distance, and (2) a distance metric learned from data. For the latter, we use a large margin method that learns a Mahalanobis distance metric optimizing kNN classification on the training data: the metric is learned such that the k nearest neighbors of each sample share its class label while samples of different classes are separated by a large margin.
6.3 Label Verification via Co-Labeling
As it is not guaranteed that distances between samples capture the notion of authorship, the previous step can contain errors, i.e., there may be some true positives in the likely negative set and true negatives in the likely positive set. To address this, we apply co-labeling for label verification. In co-labeling, multiple views of the data are considered, a classifier is built on each view, and majority voting based on classifier agreement is used to predict the labels of unlabeled instances. In our case, we train an SVM on each of five feature spaces (views): i) unigram, ii) unigram+bigram, iii) PTF, iv) POS, and v) PTF+unigram+bigram, yielding five label verification classifiers. The labels of the retrieved samples are then verified by majority agreement of the classifier predictions; samples with label discrepancies are discarded, yielding the verified retrieved samples (line 7, Algorithm 1). The rationale is that a majority of classifiers, each trained on a different view, is less likely than a single classifier to make the same mistake in predicting the label of a data point.
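The majority-vote filter can be sketched as below, with trivial stand-in classifiers in place of the five view-specific SVMs (which would require trained models):

```python
from collections import Counter

def co_label_verify(candidates, view_classifiers, proposed_label):
    """Keep a candidate only if a majority of classifiers, each
    trained on a different feature view, agrees with the label
    proposed by the neighbor-retrieval step."""
    verified = []
    for x in candidates:
        votes = Counter(clf(x) for clf in view_classifiers)
        majority_label, _ = votes.most_common(1)[0]
        if majority_label == proposed_label:
            verified.append(x)
    return verified

# Toy stand-ins for the five view classifiers: four agree that
# values below 1.0 are positive; one view is deliberately noisy.
views = [lambda x: '+' if x < 1.0 else '-'] * 4 + [lambda x: '-']
kept = co_label_verify([0.5, 2.0], views, '+')
```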
6.4 Improved Training
The retrieved and verified samples from the previous steps are added to the training set. However, the key lies in estimating the right balance between the number of spies sent and the sizes of the neighborhoods used to retrieve potentially positive or negative samples, which are governed by the parameters |S|, k_NN, and k_FN. To find the optimal parameters, we try different values of this parameter triple (lines 2, 3, Algorithm 1) and record the F-score of 5-fold CV on the training set (line 8, Algorithm 1). Finally, the parameters that yield the highest training F-score are chosen (line 10, Algorithm 1) to produce the output spy-induced verifier (line 11, Algorithm 1).
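The parameter search amounts to a grid search that scores each (|S|, k_NN, k_FN) triple by training-set CV F1; a sketch with a mock scoring function standing in for the full spy-induction pipeline:

```python
from itertools import product

def tune_spy_params(induce_and_score, spy_sizes, nn_sizes, fn_sizes):
    """Try each (|S|, k_NN, k_FN) triple; keep the one whose
    spy-induced verifier scores highest F1 in 5-fold CV on the
    training set (induce_and_score performs induction + CV)."""
    best_params, best_f1 = None, -1.0
    for params in product(spy_sizes, nn_sizes, fn_sizes):
        f1 = induce_and_score(*params)
        if f1 > best_f1:
            best_params, best_f1 = params, f1
    return best_params, best_f1

# Mock scoring function: one triple is clearly best.
mock = lambda s, k_nn, k_fn: 0.9 if (s, k_nn, k_fn) == (5, 3, 4) else 0.5
best, f1 = tune_spy_params(mock, [3, 5, 7], [2, 3], [3, 4])
```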
7 Experimental Evaluation
This section evaluates the proposed spy method. We keep all experimental settings the same as in section 5 (i.e., the out-of-training setting with varying author diversity d′). We fix the feature space to L + KL-PTF as it performed best (see Table 3). As mentioned earlier, we report average verification performance across all authors. Below we detail the baselines, followed by results and sensitivity analysis.
7.1 Baselines and Systems
We consider the following systems:
MBSP runs the memory-based shallow parsing approach to authorship verification, which is tailored for short texts and limited training data.
Base runs classification without spy induction and dovetails with Table 3 (last row) for each d′.
TSVM uses the transductive learner of SVMLight; it aims to leverage the unlabeled (test) set by classifying a fraction of the unlabeled samples into the positive class and optimizes the precision/recall breakeven point.
Spy (Eu.) & Spy (LM) are spy induction systems without co-labeling that use the Euclidean (Eu.) and learned (LM) distance metrics to compute neighbors.
Spy (EuC) & Spy (LMC) extend the previous systems with label verification via the co-labeling approach.
(Improvements are significant at p < 0.001 using a t-test.)
7.2 Results

Table 4 reports the results. We note the following:
Except for two cases (the F1 of SVM and kNN for Spy (LM) at one diversity value), almost all spy models achieve significantly higher F1 than Base (without spy induction) and TSVM for all classifiers (SVM, LR, kNN) and across all diversity values d′. MBSP performs similarly to Base, showing that memory-based learning does not yield a significant advantage in sockpuppet verification. TSVM does not do well on F1 but improves recall; one reason could be that, due to class imbalance, TSVM is biased toward classifying unlabeled examples into the positive class, which improves recall but hurts precision.
The AVG F1 column shows that, on average across the three classifiers, spy induction yields gains of at least 4%. The gains in AVG F1 are pronounced at lower diversity, reaching up to 12% with Spy (EuC), and about 10% at the next diversity value. Note that we employ the out-of-training setting with varying author diversity d′, so the test set is imbalanced (i.e., the random baseline is no longer 50%). Across all classifiers, the relative gains in F1 of the spy methods over Base shrink as author diversity increases, due to (a) better samples in training raising the Base result and (b) the reduction in test set size and variety limiting spy induction. Nonetheless, for low d′ (the harder case of verification), spy induction does well across all classifiers.
Anchoring on one distance metric (Eu./LM), spy induction with co-labeling does markedly better than spy induction without it across all d′ in AVG F1 over the three classifiers. This shows that label verification via co-labeling helps filter label noise and is an essential component of spy induction.
Between the Euclidean and large-margin learned (LM) metrics, Euclidean does better in AVG F1 both with and without co-labeling. However, the LM metric yields higher recall than Euclidean in certain cases (underlined), showing that it can provide gains in F1 over Base with relatively smaller drops in recall, which is also useful.
In summary, spy induction improves F1 across different classifiers, author diversities, and distance metrics. Overall, the scheme LR + Spy (EuC) does best for each d′ (highlighted in gray) and is used in subsequent experiments for comparison against Base.
7.3 Spy Parameter Sensitivity Analysis
To analyze parameter sensitivity, we plot the range of precision, recall, and F1 values as spy induction learns the optimal parameter values in training. We focus on the lowest and highest diversity settings to capture both extremes. Figure 3 shows the performance curves for different spy parameter triples sorted in increasing order of F1. For both extremes, spy induction steadily improves precision as the number of likely samples retrieved increases. Although recall drops more and fluctuates in the harder low-diversity case, it stabilizes early in the high-diversity case with a much smaller drop. This shows that the spy induction scheme is robust in optimizing F1 with only a few (5-7) spy samples sent to the unlabeled test set.
7.4 Domain Adaptation
We now test the effectiveness of spy induction under domain transfer. As mentioned in section 3, we obtained reviews from Turkers for the hotel, restaurant, and product domains. Keeping all other settings the same as in Table 4, Table 5 reports cross-domain performance obtained by training the verifiers on two domains and testing on the third. We compare sockpuppet verification using LR + Spy (EuC) against Base (LR without spy induction). We report F1 scores, as the cross-domain precision and recall trends were similar to those in Table 4. The F1 of Base in the cross-domain setting (Table 5, Hotel row) is lower than the corresponding LR Base results (Table 4) for all d′, showing that cross-domain verification is harder. Nonetheless, spy induction renders statistically significant gains in F1 for all d′ (see Table 5).
7.5 Performance on Wikipedia Sockpuppet (WikiSock) Dataset
Prior work produced a corpus of Wikipedia sockpuppet authors containing 305 authors with an average of 180 documents per author and 90 words per document, which we use as another benchmark for evaluating our method.
It is important to note that the base results reported in that work are not directly comparable to this experiment (Table 6): it used all 623 candidate cases, whereas we focus on only the 305 that were confirmed as sockpuppets by Wikipedia administrators. Further, we perform experiments under the realistic out-of-training setting with varying author diversity (as in Table 4), which also differs from the original setup and explains the lower F1 of Base relative to previously reported results. We focus on the F1 of Spy (EuC) versus Base (without spy), as the precision and recall trends were the same as in Table 4. Compared to the Table 4 Base results, Base does better for SVM and LR on the WikiSock dataset, hinting that this data is slightly easier. Although the relative gains of spy induction over Base are a bit lower than in Table 4, spy induction consistently outperforms Base.
8 Conclusion

This work performed an in-depth analysis of deceptive sockpuppet detection. We first showed that the problem differs from traditional authorship attribution and verification and becomes more difficult as author diversity increases. Next, a feature selection scheme based on the KL-Divergence of stylistic language models was explored, yielding improvements in verification beyond baseline features. Finally, a transduction scheme, spy induction, was proposed to leverage the unlabeled test set. A comprehensive set of experiments showed that the proposed approach is robust across (1) different classifiers and (2) cross-domain knowledge transfer, and significantly outperforms baselines. Further, this work produced a ground truth corpus of deceptive sockpuppets across three domains.
This work is supported in part by NSF grant 1527364. We also thank the anonymous reviewers for their helpful feedback.
-  Ott, M., Choi, Y., Cardie, C., Hancock, J.T.: Finding deceptive opinion spam by any stretch of the imagination. In: Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, Association for Computational Linguistics (2011) 309–319
-  Feng, S., Banerjee, R., Choi, Y.: Syntactic stylometry for deception detection. In: Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers-Volume 2, Association for Computational Linguistics (2012) 171–175
-  Mukherjee, A., Kumar, A., Liu, B., Wang, J., Hsu, M., Castellanos, M., Ghosh, R.: Spotting opinion spammers using behavioral footprints. In: Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining, ACM (2013) 632–640
-  Lim, E.P., Nguyen, V.A., Jindal, N., Liu, B., Lauw, H.W.: Detecting product review spammers using rating behaviors. In: Proceedings of the 19th ACM international conference on Information and knowledge management, ACM (2010) 939–948
-  Li, H., Chen, Z., Mukherjee, A., Liu, B., Shao, J.: Analyzing and detecting opinion spam on a large-scale dataset via temporal and spatial patterns. In: Ninth International AAAI Conference on Web and Social Media. (2015)
-  Mukherjee, A., Liu, B., Glance, N.: Spotting fake reviewer groups in consumer reviews. In: Proceedings of the 21st international conference on World Wide Web, ACM (2012) 191–200
-  Stamatatos, E.: A survey of modern authorship attribution methods. Journal of the American Society for information Science and Technology 60 (2009) 538–556
-  Layton, R., Watters, P., Dazeley, R.: Authorship attribution for twitter in 140 characters or less. In: Cybercrime and Trustworthy Computing Workshop (CTC), 2010 Second, IEEE (2010) 1–8
-  Luyckx, K., Daelemans, W.: Authorship attribution and verification with many authors and limited data. In: Proceedings of the 22nd International Conference on Computational Linguistics-Volume 1, Association for Computational Linguistics (2008) 513–520
-  Koppel, M., Schler, J.: Authorship verification as a one-class classification problem. In: Proceedings of the twenty-first international conference on Machine learning, ACM (2004) 62
-  Vapnik, V.N.: The nature of statistical learning theory. Springer Science & Business Media (2013)
-  Chapelle, O., Zien, A.: Semi-supervised classification by low density separation. In: AISTATS. (2005) 57–64
-  Joachims, T.: Transductive inference for text classification using support vector machines. In: ICML. Volume 99. (1999) 200–209
-  Graham, N., Hirst, G., Marthi, B.: Segmenting documents by stylistic character. Natural Language Engineering 11 (2005) 397–415
-  Gamon, M.: Linguistic correlates of style: authorship classification with deep linguistic analysis features. In: Proceedings of the 20th international conference on Computational Linguistics, Association for Computational Linguistics (2004) 611
-  Sapkota, U., Bethard, S., Montes-y Gómez, M., Solorio, T.: Not all character n-grams are created equal: A study in authorship attribution. In: Human Language Technologies: The 2015 Annual Conference of the North American Chapter of the ACL. (2015) 93–102
-  Qian, T., Liu, B., Chen, L., Peng, Z., Zhong, M., He, G., Li, X., Xu, G.: Tri-training for authorship attribution with limited training data: a comprehensive study. Neurocomputing 171 (2016) 798–806
-  Seroussi, Y., Bohnert, F., Zukerman, I.: Authorship attribution with author-aware topic models. In: Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers-Volume 2, Association for Computational Linguistics (2012) 264–269
-  Koppel, M., Winter, Y.: Determining if two documents are written by the same author. Journal of the Association for Information Science and Technology 65 (2014) 178–187
-  Sanderson, C., Guenter, S.: Short text authorship attribution via sequence kernels, markov chains and author unmasking: An investigation. In: Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics (2006) 482–491
-  Solorio, T., Hasan, R., Mizan, M.: A case study of sockpuppet detection in wikipedia. In: Workshop on Language Analysis in Social Media (LASM) at NAACL HLT. (2013) 59–68
-  Qian, T., Liu, B.: Identifying multiple userids of the same author. In: EMNLP. (2013) 1124–1135
-  Jindal, N., Liu, B.: Opinion spam and analysis. In: Proceedings of the 2008 International Conference on Web Search and Data Mining, ACM (2008) 219–230
-  Fusilier, D.H., Montes-y Gómez, M., Rosso, P., Cabrera, R.G.: Detection of opinion spam with character n-grams. In: International Conference on Intelligent Text Processing and Computational Linguistics, Springer (2015) 285–294
-  Xie, S., Wang, G., Lin, S., Yu, P.S.: Review spam detection via temporal pattern discovery. In: Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining, ACM (2012) 823–831
-  Gokhman, S., Hancock, J., Prabhu, P., Ott, M., Cardie, C.: In search of a gold standard in studies of deception. In: Proceedings of the Workshop on Computational Approaches to Deception Detection, Association for Computational Linguistics (2012) 23–30
-  Li, J., Ott, M., Cardie, C., Hovy, E.H.: Towards a general rule for identifying deceptive opinion spam. In: ACL (1), Citeseer (2014) 1566–1576
-  Li, J., Ott, M., Cardie, C.: Identifying manipulated offerings on review portals. In: EMNLP. (2013) 1933–1942
-  Banerjee, R., Feng, S., Kang, J.S., Choi, Y.: Keystroke patterns as prosody in digital writings: A case study with deceptive reviews and essays. Empirical Methods on Natural Language Processing (EMNLP) (2014)
-  Mukherjee, A., Venkataraman, V., Liu, B., Glance, N.S.: What yelp fake review filter might be doing? In: ICWSM. (2013)
-  Chang, C.C., Lin, C.J.: LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology 2 (2011) 27:1–27:27 Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
-  Fan, R.E., Chang, K.W., Hsieh, C.J., Wang, X.R., Lin, C.J.: LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research 9 (2008) 1871–1874 Software available at http://www.csie.ntu.edu.tw/~cjlin/liblinear/.
-  Klein, D., Manning, C.D.: Accurate unlexicalized parsing. In: Proceedings of the 41st Annual Meeting on Association for Computational Linguistics-Volume 1, Association for Computational Linguistics (2003) 423–430 Software available at http://nlp.stanford.edu/software/lex-parser.shtml.
-  Feng, S., Banerjee, R., Choi, Y.: Characterizing stylistic elements in syntactic structure. In: Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, Association for Computational Linguistics (2012) 1522–1533
-  Weinberger, K.Q., Blitzer, J., Saul, L.K.: Distance metric learning for large margin nearest neighbor classification. In: Advances in neural information processing systems. (2005) 1473–1480
-  Xu, X., Li, W., Xu, D., Tsang, I.: Co-labeling for multi-view weakly labeled learning. IEEE Transactions on Pattern Analysis and Machine Intelligence PP (2015) 1–1
-  Solorio, T., Hasan, R., Mizan, M.: Sockpuppet detection in wikipedia: A corpus of real-world deceptive writing for linking identities. In: Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14), Reykjavik, Iceland, European Language Resources Association (ELRA) (2014)