1 Introduction
To many people, the internet has become the primary source of reading material. However, due to the vast amount of available content, finding interesting articles is a challenge. Recommender systems for textual data address this problem. The two conceptual approaches to recommender systems are collaborative and content-based filtering.
In collaborative filtering, a user's attributes and historical behavior are leveraged to make new recommendations. From a corpus of documents and the previously read documents of each user, underlying topic groups are inferred. Articles read by one group of users can then be recommended to similar users.
While this is a common approach, it has downsides. Collaborative filtering suffers from the cold-start problem: the relevance of a document can only be estimated after it has been visited by other users. Thus, enough empirical records have to be collected first. These records need to contain enough overlap between user interests to effectively infer the underlying interest groups that connect users and documents.
In contrast, content-based filtering only requires the preferred documents of each user. Documents are grouped semantically by their content, so no overlap in the users' preferences is required. The challenge in content-based filtering, however, is to find useful semantic representations of documents.
1.1 Vector representations of documents
For content-based recommender systems, documents must be embedded into a common space that allows them to be related to each other. There are several approaches to creating semantic vector representations of text documents. We use learned representations and compare them with Latent Semantic Analysis (LSA), as described below.
A common document representation is bag-of-words (BOW) [30], where documents are represented by the words they contain. The vector of a document has the length of the overall vocabulary. An element of this vector is one if the corresponding word is contained in the document and zero otherwise. Note that this representation loses information about the ordering of words. BOW vectors are often reduced in dimensionality using SVD. Applying SVD to BOW features is known as LSA and is further described in [15]. Other additions include weighting the elements of the vector using term frequency measures such as tf-idf [25]. However, the usefulness of these weightings in practice has been questioned [7].
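The BOW → tf-idf → SVD pipeline can be sketched in a few lines of numpy; the toy corpus and target dimensionality below are purely illustrative, not the setup used in our experiments:

```python
import numpy as np

corpus = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "stocks fell on market news",
]

# Build the vocabulary and a binary bag-of-words matrix (documents x vocabulary).
vocab = sorted({w for doc in corpus for w in doc.split()})
index = {w: i for i, w in enumerate(vocab)}
bow = np.zeros((len(corpus), len(vocab)))
for d, doc in enumerate(corpus):
    for w in doc.split():
        bow[d, index[w]] = 1.0  # presence only; word order is discarded

# Optional idf weighting: down-weight words that occur in many documents.
df = bow.sum(axis=0)
tfidf = bow * np.log(len(corpus) / df)

# LSA: truncated SVD reduces the sparse vectors to k latent dimensions.
k = 2
U, S, Vt = np.linalg.svd(tfidf, full_matrices=False)
lsa = U[:, :k] * S[:k]  # each row is a document in the k-dim semantic space
print(lsa.shape)  # (3, 2)
```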
Recently, learning vector representations from large text corpora has become computationally feasible [21]. Learned representations have led to improvements in the state of the art for natural language problems such as part-of-speech tagging and named entity recognition [28, 5]. Le and Mikolov [17] extended the idea to learn representations of phrases and documents. These document representations are of particular interest for content-based recommender systems, since the learned vectors show promising semantic properties [7, 22, 14].
1.2 Generative model of user interest
Given fixed-length vector representations of the documents, there are multiple approaches to suggesting new, potentially relevant articles [1]. A common approach is to compute a relevance score for each document unseen by the user. Such a score can be computed using nearest-neighbor queries, as proposed by Cover and Hart [6]. For each unseen document, the k nearest documents among the user's documents are queried using a distance measure such as cosine or euclidean distance. The score is composed of the computed distances. Limitations of this approach are the arbitrary choices of both k and the distance function [2].
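A minimal sketch of such nearest-neighbor scoring, assuming toy two-dimensional document vectors and a hypothetical `knn_score` helper (lower scores mean higher relevance):

```python
import numpy as np

def cosine_distance(a, b):
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def knn_score(candidate, user_docs, k=2):
    """Score an unseen document by its mean cosine distance to the
    k nearest documents in the user's history (lower = more relevant)."""
    dists = sorted(cosine_distance(candidate, d) for d in user_docs)
    return sum(dists[:k]) / k

# Toy 2-d document vectors: the user's history clusters around (1, 0).
user_docs = [np.array([1.0, 0.1]), np.array([0.9, 0.0]), np.array([1.0, 0.2])]
close = knn_score(np.array([1.0, 0.0]), user_docs)
far = knn_score(np.array([0.0, 1.0]), user_docs)
assert close < far  # the nearby candidate scores as more relevant
```

Note that both k and the distance function are free parameters here, which is exactly the limitation discussed above.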
An alternative to using nearest neighbors is density estimation [27] with generative models [29, 12]. We model the interests of a user as a probability distribution over the semantic space. Assuming that the user's history was generated from these interests, we model it with a Gaussian mixture model (GMM). This allows us to capture the user's interest being spread across multiple clusters in the semantic space. GMMs are a common choice for modelling densities and have already been used for collaborative recommender systems and text modeling by Hofmann [13] and Hiemstra [12], respectively.
1.3 Our contributions
In this work, we suggest combining learned document representations with generative density estimation using GMMs. We focus on content-based recommendations in the domain of text documents. In Section 2, we compare our approach to related work on recommender systems that use learned representations and density estimation. Section 3 introduces our approach, which we call density-based interest estimation. We then conduct a benchmark on the Delicious bookmark dataset and compare learned representations with BOW representations (Section 4). In Section 5, we summarize our findings and point to further directions of research.
2 Related Work
Content-based recommender systems form an active area of research. For an overview of state-of-the-art methods, we refer to Lops et al. [18] and Adomavicius and Tuzhilin [1]. Our work differs from most of these systems in two ways. First, learned document representations, as described in Mikolov et al. [21], are used instead of the common LSA representation. Second, we use density estimation to obtain a generative model of user interest.
Learned representations were only recently proposed for use in content-based recommender systems. Musto et al. [22] study the differences between several document representations for content-based recommender systems. They suggest learned representations as a well-performing option compared to latent semantic indexing and random indexing. While they use word representations in their comparison, we use the closely related document representations in our work.
Kataria and Agarwal [14] present another approach using learned representations for content-based recommendations. They propose a two-step solution for recommending user tags using document embeddings and obtain remarkable results for tag recommendation. We focus on recommending documents instead of tags.
Building on the above work, we suggest a generative approach that allows new documents to be recommended by sampling from the estimated user interest rather than by scoring and sorting all candidate documents.
Based on learned document representations, we employ a generative model of user interest and estimate its density. Westerveld et al. [29] explain the usage of probabilistic models for classification tasks as used in our work.
Another common approach is to use a multinomial distribution with an independence assumption, as in Naive Bayes classifiers [9]. However, these approaches do not apply density estimation to learned representations, which is what we explore in this paper.
3 Generative Interest Estimation
In general, our approach involves three steps: preprocessing (Section 3.1), computation of document representations (Sections 3.2 and 3.3), and density estimation (Section 3.4). For the document vectors, we compare latent semantic analysis with learned representations.
3.1 Preprocessing
Inputs to our method are a corpus of documents and a set of documents representing the user's preference. The text of each document is preprocessed by removing special characters and converting to lowercase, as described in Srividhya and Anitha [26]. We neither apply stemming nor remove stop words, to avoid introducing bias and removing potentially useful information contained in word classes. When computing document vectors, information captured in the endings of words is discarded automatically if it is not relevant for the task. From the tokenized documents, we compute representations in two ways, as explained in the next two subsections.
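This preprocessing can be sketched with a simple regular-expression tokenizer; the exact character handling in our experiments may differ:

```python
import re

def preprocess(text):
    """Lowercase, strip special characters, and tokenize.
    No stemming and no stop-word removal, matching the approach above."""
    text = text.lower()
    text = re.sub(r"[^a-z0-9\s]", " ", text)  # replace special characters
    return text.split()

tokens = preprocess("Recommender Systems: a survey (2016)!")
print(tokens)  # ['recommender', 'systems', 'a', 'survey', '2016']
```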
3.2 Latent semantic analysis
Latent Semantic Analysis (LSA) aims to represent words with a more general semantic structure. LSA involves transforming documents into a vector space model [24], followed by Latent Semantic Indexing (LSI), as first described by Dumais [10]. Deerwester et al. [8] claim that LSI captures linguistic notions such as synonymy and polysemy. Their work shows that related semantic categories are close in the created vector space. LSA works in many cases because it groups words with similar co-occurrence patterns together [23].
3.3 Learned representation
The idea of learned word representations, as proposed by Mikolov et al. [21], is to start with random vectors of a chosen fixed length. A shallow neural network is then trained on a large text corpus. The objective is to correctly predict the current word from its context, that is, the surrounding words. Using the backpropagation algorithm [11], the gradient of the error with respect to both the weights of the classifier and the input vectors is calculated. Both the classifier and the word vectors of the surrounding words are then updated using gradient descent. There are several modifications to this architecture; we refer to Mikolov et al. [20] and Collobert and Weston [5] for comparison and further insights.
To learn distributed representations of documents, Le and Mikolov [17] extended this model. The classifier receives a randomly initialized paragraph vector in addition to the context words. This paragraph vector is trained via gradient descent, resulting in a vector that captures the overall semantics of the text. The authors suggest two versions of the algorithm: training only the classifier and the paragraph vectors, or training the word vectors as well. We only train the paragraph vectors, because we found that training the word vectors does not improve performance significantly.
For a given set of context words, there are multiple plausible choices for the current word. We thus expect the paragraph vector to converge to the vector that best helps resolve this ambiguity. After training, each document has a trained vector representation in a semantic space, where similar documents are close to each other [7].
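The paragraph vector training step can be illustrated with a toy numpy sketch. This is a deliberate simplification of the actual algorithm: it uses a plain softmax over a tiny vocabulary instead of hierarchical softmax or negative sampling, trains a single paragraph vector, and keeps the word vectors frozen, as in the variant we use:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim = 6, 4

# Frozen pre-trained word vectors and a randomly initialized paragraph vector.
word_vecs = rng.normal(scale=0.1, size=(vocab_size, dim))
para_vec = rng.normal(scale=0.1, size=dim)
W = rng.normal(scale=0.1, size=(dim, vocab_size))  # softmax classifier weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def train_step(context_ids, target_id, lr=0.1):
    """One step: predict the current word from the averaged context word
    vectors combined with the paragraph vector, then update the classifier
    and the paragraph vector (not the word vectors) by gradient descent."""
    global para_vec, W
    h = (word_vecs[context_ids].mean(axis=0) + para_vec) / 2.0
    p = softmax(h @ W)
    grad_out = p.copy()
    grad_out[target_id] -= 1.0      # d(loss)/d(logits) for cross-entropy
    grad_h = W @ grad_out           # gradient w.r.t. h, before updating W
    W = W - lr * np.outer(h, grad_out)
    para_vec = para_vec - lr * grad_h / 2.0  # only the paragraph vector moves
    return -np.log(p[target_id])    # cross-entropy loss

losses = [train_step(context_ids=[0, 2], target_id=3) for _ in range(50)]
assert losses[-1] < losses[0]  # the paragraph vector adapts to the document
```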
3.4 Density estimation
Given fixedlength document representations, we now model the user interest. We assume that documents in a user profile were generated by some underlying distribution of user interest [16].
We estimate this probability distribution assuming that each document is an i.i.d. realization of the user's interest distribution. We model the distribution as a weighted mixture of Gaussians and fit this so-called Gaussian mixture model to the user's documents using Expectation Maximization. The estimation takes place in the semantic space of document representations.
Using a mixture model, we capture the fact that users can be interested in multiple topics spread across the semantic space. This introduces a parameter K, which specifies the number of mixture components. K should be close to the number of topics the user is interested in.
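As a sketch of this estimation step, assuming scikit-learn's `GaussianMixture` as the EM implementation and an illustrative two-dimensional semantic space with two interest clusters:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy user history in a 2-d "semantic space": interest spread over two topics.
history = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.2, size=(30, 2)),  # topic A
    rng.normal(loc=[3.0, 3.0], scale=0.2, size=(30, 2)),  # topic B
])

# Fit a GMM with K components via Expectation Maximization.
K = 2
gmm = GaussianMixture(n_components=K, random_state=0).fit(history)

# The model assigns higher density to documents near the user's interests.
near = gmm.score_samples(np.array([[0.1, 0.1]]))[0]   # log-density
far = gmm.score_samples(np.array([[10.0, 10.0]]))[0]
assert near > far
```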
How well the model can capture the user interest depends heavily on the semantic space and thus on the quality of the document representations. Therefore, in the next section, we compare the density estimation with both LSA and learned representations.
4 Experiments
We conduct a benchmark on the Delicious dataset, which is common in the literature on both collaborative and content-based recommender systems. We first describe the dataset and the applied preprocessing steps. Afterwards, we explore the spaces produced by both LSA and learned representations on the crawled documents of this dataset (Section 4.1). We then explain our measure for evaluating the user interest models and compare the performance in both representation spaces (Section 4.2).
The Delicious bookmarks dataset contains 105,000 bookmarks of more than 1,800 users [4]. In order to embed the bookmarked pages, we crawl the corresponding websites and extract their main content by detecting the largest text area. In this step, we discard all bookmarks that are not accessible or have fewer than 500 characters of content, resulting in 46,000 documents. We then discard users with fewer than 50 bookmarked pages, following Cantador et al. [3]. The final dataset consists of 2,536 bookmarks from 50 different users. On average, a user holds 57 bookmarked documents.
4.1 Comparison of document representations
Before evaluating the user interest models, we explore the semantic spaces constructed by LSA and by learned representations. For LSA, we use tf-idf weighted occurrence vectors and only consider terms with tf-idf scores within a fixed range. After collecting the count vectors, we reduce their dimensionality to 10 components using singular value decomposition (SVD).
For learned representations, we use the paragraph vector algorithm as described in Le and Mikolov [17] with a fixed vector size, number of training iterations, and number of noise-contrastive examples. We also reduce the dimensionality of the learned representations, using kernel PCA with an RBF kernel.
In Figure 1, we compare the vector spaces of both representations. The figure shows the corpus of all documents, with four randomly selected user profiles highlighted. While in the LSA space the documents of a user profile are spread over the map, learned representations seem to implicitly group each user's documents into one or more clusters. In the following section, we show that this allows user interests to be modeled more accurately.
4.2 Evaluation of Density Models
While the goal of text-based recommender systems is to suggest useful documents, it is not obvious how to measure this formally. With a large empirical study, it would be possible to perform an evaluation based on user interaction with recommendations. However, such an evaluation is not feasible in many cases. Therefore, a common approach is to measure quality by how well the model predicts user behavior.
For each user, we conduct k-fold cross-validation to split the user profile into training data and validation data. For each split, we fit a GMM to the training data and draw n samples from the model. Since the samples lie in a continuous space, we pick the nearest corpus document for each sample, measured by euclidean distance. Finally, we count the sampled documents that match documents of the validation data as hits.
The hit rate over all splits and users is a measure of predictive performance for the user interests. Each user profile contains just a sample from the underlying distribution, so we can expect other corpus documents even in high-density areas of a user's interest. Therefore, we expect hit rates to be well below 100%.
Results are shown in Table 1. Estimating user interests from learned representations significantly outperforms using LSA representations. This confirms our assumption from the previous section that learned representations group documents in a way that helps capture user interests.
Table 1: Hit rates of the generative interest models in both representation spaces.

Method                          Hit rate  Hits
GMM on LSA representations      8.63 %    244 / 2828
GMM on learned representations  12.80 %   362 / 2828
5 Conclusion
We proposed a generative approach to modelling user interests based on user profiles of preferred documents. Building on the recently introduced method of learned document representations, our model effectively captures user interest. We show this by comparison with the established approach of LSA on a benchmark common in the literature on recommender systems. We conclude that learned representations create effective semantics for content-based recommendations. Further exploration is needed to fully understand the reasons for this effectiveness. The areas in which learned representations or established methods are preferable are another worthwhile direction for further research.
References
 Adomavicius and Tuzhilin [2005] G. Adomavicius and A. Tuzhilin. Toward the next generation of recommender systems: A survey of the state-of-the-art and possible extensions. IEEE transactions on knowledge and data engineering, 17(6):734–749, 2005.
 Bhatia et al. [2010] N. Bhatia et al. Survey of nearest neighbor techniques. arXiv preprint arXiv:1007.0085, 2010.
 Cantador et al. [2010] I. Cantador, A. Bellogín, and D. Vallet. Content-based recommendation in social tagging systems. In Proceedings of the fourth ACM conference on Recommender systems, pages 237–240. ACM, 2010.
 Cantador et al. [2011] I. Cantador, P. Brusilovsky, and T. Kuflik. 2nd workshop on information heterogeneity and fusion in recommender systems (hetrec 2011). In Proceedings of the 5th ACM conference on Recommender systems, RecSys 2011, New York, NY, USA, 2011. ACM.

 Collobert and Weston [2008] R. Collobert and J. Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th international conference on Machine learning, pages 160–167. ACM, 2008.
 Cover and Hart [1967] T. Cover and P. Hart. Nearest neighbor pattern classification. IEEE transactions on information theory, 13(1):21–27, 1967.
 Dai et al. [2015] A. M. Dai, C. Olah, and Q. V. Le. Document embedding with paragraph vectors. arXiv preprint arXiv:1507.07998, 2015.
 Deerwester et al. [1990] S. Deerwester, S. T. Dumais, G. W. Furnas, T. K. Landauer, and R. Harshman. Indexing by latent semantic analysis. Journal of the American society for information science, 41(6):391, 1990.
 Duda et al. [2012] R. O. Duda, P. E. Hart, and D. G. Stork. Pattern classification. John Wiley & Sons, 2012.
 Dumais [2004] S. T. Dumais. Latent semantic analysis. Annual review of information science and technology, 38(1):188–230, 2004.
 Hecht-Nielsen [1989] R. Hecht-Nielsen. Theory of the backpropagation neural network. In Neural Networks, 1989. IJCNN., International Joint Conference on, pages 593–605. IEEE, 1989.
 Hiemstra [1998] D. Hiemstra. A linguistically motivated probabilistic model of information retrieval. In International Conference on Theory and Practice of Digital Libraries, pages 569–584. Springer, 1998.
 Hofmann [2003] T. Hofmann. Collaborative filtering via gaussian probabilistic latent semantic analysis. In Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval, pages 259–266. ACM, 2003.
 Kataria and Agarwal [2015] S. Kataria and A. Agarwal. Distributed representations for content-based and personalized tag recommendation. In 2015 IEEE International Conference on Data Mining Workshop (ICDMW), pages 1388–1395. IEEE, 2015.
 Landauer et al. [1998] T. K. Landauer, P. W. Foltz, and D. Laham. An introduction to latent semantic analysis. Discourse processes, 25(23):259–284, 1998.
 Lavrenko [2008] V. Lavrenko. A generative theory of relevance, volume 26. Springer Science & Business Media, 2008.
 Le and Mikolov [2014] Q. V. Le and T. Mikolov. Distributed representations of sentences and documents. arXiv preprint arXiv:1405.4053, 2014.
 Lops et al. [2011] P. Lops, M. De Gemmis, and G. Semeraro. Content-based recommender systems: State of the art and trends. In Recommender systems handbook, pages 73–105. Springer, 2011.
 Maaten and Hinton [2008] L. v. d. Maaten and G. Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579–2605, 2008.
 Mikolov et al. [2013a] T. Mikolov, K. Chen, G. Corrado, and J. Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013a.
 Mikolov et al. [2013b] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119, 2013b.
 Musto et al. [2015] C. Musto, G. Semeraro, M. De Gemmis, and P. Lops. Word embedding techniques for contentbased recommender systems: an empirical evaluation. In RecSys Posters, ser. CEUR Workshop Proceedings, P. Castells, Ed, volume 1441, 2015.
 Sahlgren [2006] M. Sahlgren. The word-space model: Using distributional analysis to represent syntagmatic and paradigmatic relations between words in high-dimensional vector spaces. 2006.
 Salton et al. [1975] G. Salton, A. Wong, and C.S. Yang. A vector space model for automatic indexing. Communications of the ACM, 18(11):613–620, 1975.
 Sparck Jones [1972] K. Sparck Jones. A statistical interpretation of term specificity and its application in retrieval. Journal of documentation, 28(1):11–21, 1972.
 Srividhya and Anitha [2010] V. Srividhya and R. Anitha. Evaluating preprocessing techniques in text categorization. International journal of computer science and application, 47(11), 2010.
 Sun et al. [2012] M. Sun, G. Lebanon, P. Kidwell, et al. Estimating probabilities in recommendation systems. Journal of the Royal Statistical Society: Series C (Applied Statistics), 61(3):471–492, 2012.

 Turian et al. [2010] J. Turian, L. Ratinov, and Y. Bengio. Word representations: a simple and general method for semi-supervised learning. In Proceedings of the 48th annual meeting of the association for computational linguistics, pages 384–394. Association for Computational Linguistics, 2010.
 Westerveld et al. [2007] T. Westerveld, A. de Vries, and F. de Jong. Generative probabilistic models. In Multimedia Retrieval, pages 177–198. Springer, 2007.
 Zhang et al. [2010] Y. Zhang, R. Jin, and Z.-H. Zhou. Understanding bag-of-words model: a statistical framework. International Journal of Machine Learning and Cybernetics, 1(1–4):43–52, 2010.