Authorship Attribution Using a Neural Network Language Model

02/17/2016 ∙ by Zhenhao Ge, et al. ∙ Purdue University

In practice, training language models for individual authors is often expensive because of limited data resources. In such cases, Neural Network Language Models (NNLMs) generally outperform the traditional non-parametric N-gram models. Here we investigate the performance of a feed-forward NNLM on an authorship attribution problem with a moderate author set size and relatively limited data. We also consider how the text topics impact performance. Compared with a well-constructed N-gram baseline method with Kneser-Ney smoothing, the proposed method achieves nearly a 2.5% reduction in perplexity and a 3.43% increase in author classification accuracy. The performance is very competitive with the state of the art in terms of accuracy and demand on test data. The source code, preprocessed datasets, and a detailed description of the methodology and results are available at




1 Introduction

Authorship attribution refers to identifying authors from given texts by their unique textual features. It is challenging because an author's style may vary over time with topic, mood, and environment. Many methods have been explored to address this problem, such as Latent Dirichlet Allocation for topic modeling [Seroussi, Zukerman, and Bohnert 2011] and Naive Bayes for text classification [Coyotl-Morales et al. 2006]. Regarding language modeling methods, there is mixed advocacy for the conventional N-gram methods [Kešelj et al. 2003] and for methods using more compact, distributed representations, such as Neural Network Language Models (NNLMs), which have been claimed to capture semantics better with limited training data [Bengio et al. 2003].

Most available NNLM toolkits [Mikolov et al. 2010] are designed for recurrent NNLMs, which are better at capturing complex, longer-range text patterns but require more training data. In contrast, the feed-forward NNLM framework we propose is less computationally expensive and more suitable for language modeling with limited data. It is developed in MATLAB with full network-tuning functionality.

The database we use is composed of transcripts of 16 video courses taken from Coursera, collected one sentence per line into a text file for each course. To reduce the influence of "topic" on author/instructor classification, courses were all selected from science and engineering fields, such as Algorithms, DSP, Data Mining, IT, Machine Learning, NLP, etc. There are 8000+ sentences per course and about 20 words per sentence on average. The vocabulary size of each author varies from to . After stemming with Porter's algorithm and pruning words with frequency less than , each author's vocabulary is reduced to a range from to , with an average size around . Fig. 1 shows the vocabulary size for each course under various conditions, and the database coverage with the most frequent words after stemming and pruning.

Fig. 1: Vocabulary size and word coverage in various stages
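The stemming-and-pruning step above can be sketched as follows. This is an illustrative Python sketch, not the paper's MATLAB code; the `stem` argument is a placeholder for a real stemmer such as Porter's algorithm, and the `min_count` threshold stands in for the paper's (unspecified) frequency cutoff.

```python
from collections import Counter

def prune_vocabulary(sentences, min_count=2, stem=lambda w: w):
    """Count (optionally stemmed) word frequencies and drop rare words.

    `stem` is a placeholder for a real stemmer (e.g. Porter's algorithm);
    the identity default keeps the sketch dependency-free.
    """
    counts = Counter(stem(w) for s in sentences for w in s.lower().split())
    return {w for w, c in counts.items() if c >= min_count}

# Toy corpus: one sentence per line, as in the course transcript files.
corpus = ["the filter filters the signal",
          "the signal is filtered by the filter"]
vocab = prune_vocabulary(corpus, min_count=2)
```

With a real stemmer, "filter", "filters", and "filtered" would collapse to one stem, which is what shrinks the per-author vocabularies so substantially.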

2 Neural Network Language Model (NNLM)

Similar to N-gram methods, the NNLM is used to answer one of the fundamental questions in language modeling: predicting the best target word, given a context of preceding words. The target word is typically the last word within the context window, though theoretically it can be in any position. Fig. 2 demonstrates the structure of the proposed NNLM with the multinomial classification cost function

C = -\sum_{i=1}^{V} t_i \log y_i,

where $V$ is the vocabulary size, and $y_i$ and $t_i$ are the final output and the target label. This NNLM setup contains 4 types of layers. The word layer contains input words represented by $V$-dimensional one-hot index vectors, with $V-1$ "0"s and a single "1" whose position differentiates each word from all others. Words are then transformed to their distributed representations and concatenated in the embedding layer. Outputs from this layer propagate forward to the hidden sigmoid layer, then to the softmax layer, which predicts the probabilities of the possible target words. Weights between layers are initialized randomly and biases with zeros, and their error derivatives are computed through backward propagation. The network is updated iteratively, with parameters such as the learning rate and momentum controlling the updates.

Fig. 2: A feed-forward NNLM setup (i: index, w: word, n: number of context words, W: weight, b: bias)
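A single forward pass through the four layers described above can be sketched in numpy. This is an illustrative sketch, not the authors' MATLAB implementation; the layer sizes (`V`, `D`, `H`, `n_ctx`) are arbitrary toy values, and backward propagation is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
V, D, H, n_ctx = 50, 8, 16, 3   # vocab size, embedding dim, hidden units, context words (4-gram)

# Weights initialized randomly and biases with zeros, as in the text.
C  = rng.normal(0, 0.1, (V, D))                 # shared word-embedding table
W1 = rng.normal(0, 0.1, (n_ctx * D, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, V));         b2 = np.zeros(V)

def forward(context_ids, target_id):
    """One-hot lookup -> concatenated embeddings -> sigmoid hidden -> softmax output."""
    x = np.concatenate([C[i] for i in context_ids])   # embedding layer (lookup = one-hot product)
    h = 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))          # hidden sigmoid layer
    z = h @ W2 + b2
    y = np.exp(z - z.max()); y /= y.sum()             # softmax over the vocabulary
    cost = -np.log(y[target_id])                      # multinomial cross-entropy cost
    return y, cost

y, cost = forward([3, 17, 42], target_id=7)
```

Indexing row `i` of the embedding table `C` is equivalent to multiplying `C` by the word's one-hot vector, which is why the word layer never needs to be materialized.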

In implementation, the processed text data for each course are randomly split into training, validation, and test sets with ratio 8:1:1. This segmentation is performed 10 times with different randomization seeds, so that the mean and variance of NNLM performance can be measured later. We optimized a 4-gram NNLM (predicting the 4th word using the previous 3) with mini-batch training over multiple epochs for each course. The model parameters, such as the number of nodes in each layer, the learning rate, and the momentum, were customized to obtain the best individual models.

3 Classification with Perplexity Measurement

Denote $W = (w_1, w_2, \ldots, w_n)$ as a word sequence and $P(W)$ as the probability of $W$ given a LM; perplexity is an intrinsic measurement of the LM's fitness, defined by

PP(W) = P(w_1, w_2, \ldots, w_n)^{-1/n}.

Using Markov chain theory, $P(w_i \mid w_1, \ldots, w_{i-1})$ can be approximated by the probability given only the closest $N-1$ words, $P(w_i \mid w_{i-N+1}, \ldots, w_{i-1})$, so $PP(W)$ can be approximated by

PP(W) \approx \Big[ \prod_{i=1}^{n} P(w_i \mid w_{i-N+1}, \ldots, w_{i-1}) \Big]^{-1/n}.
The mean perplexity of the trained 4-gram NNLMs applied to their corresponding test sets is lower (better) than that of the traditional N-gram method (a 4-gram SRILM baseline). Classification is performed by finding the author whose NNLM minimizes the accumulated perplexity of the test sentences. By randomly selecting increasing numbers of test sentences from the test set, Fig. 3 shows the 16-way classification accuracy using 3 methods, for one particular course/instructor and for all courses on average. Two courses taught by the same instructor were intentionally included to investigate the impact of topic on accuracy; they are excluded when computing the average accuracy in Fig. 3. Similarly, the accuracies for courses using two methods with differing text lengths are compared in Fig. 4. Both figures show that the NNLM method is slightly better than the SRI baselines at the 4-gram level. A classification confusion matrix (not included due to space limits) was also computed to show the similarity between authors. The results show higher confusion on similar courses, which indicates that topic does impact accuracy. The NNLM has higher confusion values than the SRI baseline on the two different courses from the same instructor, so in that sense it is more biased toward the author than the topic.
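The perplexity-based decision rule can be sketched as follows. This is an illustrative Python sketch under toy probabilities, not output from the actual trained NNLMs; `perplexity` and `classify` are hypothetical helper names.

```python
import math

def perplexity(word_probs):
    """PP = (prod_i P(w_i | context))^(-1/n), computed in log space for stability."""
    n = len(word_probs)
    return math.exp(-sum(math.log(p) for p in word_probs) / n)

def classify(per_author_probs):
    """Pick the author whose LM yields the lowest perplexity on the test sentences.

    `per_author_probs` maps author -> per-word probabilities that the author's
    (hypothetical) 4-gram NNLM assigns to the pooled test sentences.
    """
    return min(per_author_probs, key=lambda a: perplexity(per_author_probs[a]))

probs = {"author_A": [0.2, 0.1, 0.3],    # toy probabilities, not real model output
         "author_B": [0.05, 0.02, 0.1]}
best = classify(probs)
```

Lower perplexity means the model assigns higher probability to the observed text, so the argmin over authors is equivalent to a maximum-likelihood decision.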

Fig. 3: Individual and mean accuracies vs. text length in terms of the number of sentences
Fig. 4: Accuracies at 3 stages differed by text length for 14 courses (2 courses from the same instructor are excluded)


4 Conclusion and Future Work

The NNLM-based work achieves promising results compared with the N-gram baseline. The nearly perfect accuracies given 10+ test sentences are competitive with the state of the art, which achieved 95%+ accuracy with a similar author-set size [Coyotl-Morales et al. 2006], or with tens of authors and limited training data [Seroussi, Zukerman, and Bohnert 2011]. However, this may also indicate that the dataset is not sufficiently challenging, probably due to the consistency between training and test data and the distinctness of topics. In the future, datasets with more authors can be used, for example taken from collections of books or transcribed speeches. We also plan to integrate a nonlinear function optimization scheme using conjugate gradients [Rasmussen 2006], which automatically selects the best training parameters and saves time in model customization. To compensate for the relatively small training set, LMs may also be trained on a group of authors and then adapted to individuals.


  • [Bengio et al. 2003] Bengio, Y.; Ducharme, R.; Vincent, P.; and Janvin, C. 2003. A neural probabilistic language model. The Journal of Machine Learning Research 3:1137–1155.
  • [Coyotl-Morales et al. 2006] Coyotl-Morales, R. M.; Villaseñor-Pineda, L.; Montes-y-Gómez, M.; and Rosso, P. 2006. Authorship attribution using word sequences. In Progress in Pattern Recognition, Image Analysis and Applications. Springer.
  • [Kešelj et al. 2003] Kešelj, V.; Peng, F.; Cercone, N.; and Thomas, C. 2003. N-gram-based author profiles for authorship attribution. In PACLING.
  • [Mikolov et al. 2010] Mikolov, T.; Karafiát, M.; Burget, L.; Černocký, J.; and Khudanpur, S. 2010. Recurrent neural network based language model. In INTERSPEECH 2010.
  • [Rasmussen 2006] Rasmussen, C. E. 2006. Gaussian Processes for Machine Learning. MIT Press.
  • [Seroussi, Zukerman, and Bohnert 2011] Seroussi, Y.; Zukerman, I.; and Bohnert, F. 2011. Authorship attribution with latent Dirichlet allocation. In CoNLL.