With the growing repository of information on the Internet, the existing search engines fail to satisfy users' demands efficiently Alaoui et al. (2015). One reason is that they do not take the user profile into account. In fact, for a given search query, a search engine provides the same information to all users, although individual users may have their own preferences. This drawback necessitates a personalized information system whose goal is to satisfy a user's preference based on her own need. Of late, user personalization has become a current research interest and is gaining popularity in several domains, such as artificial intelligence, data mining, and information science Gauch et al. (2007). One of the most sought-after applications is the recommendation system Kanoje et al. (2015); Das et al. (2017). A recommendation system, in general, recommends a list of items out of a large pool of items based on a user's interest(s). More precisely, it should generate distinct information for a given search query when issued by different users. Thus, a good user profile plays an important role in an interactive model and enhances the performance of the system. Personalization through the user profile is adopted in the design of recommendation systems in different domains, for example movies, songs, books, videos, web pages, e-learning, and articles Pereira and Varma (2016); Song et al. (2017). Among all these, the recommendation of articles, that is, the research paper recommendation system, deserves more attention.
Traditional studies of research paper recommendation systems primarily focus on estimating intrinsic user preferences, which are assumed to remain consistent over time and are not perfectly represented. For example, the typical TF-IDF model Sugiyama and Kan (2010); Wang et al. (2018) represents users and items using vectors of words. Further, the "topic of interest" Hong et al. (2012, 2013); Al Alshaikh et al. (2017b, a); Hassan (2017) or "keyphrase" Ferrara et al. (2011); Gautam and Kumar (2012), extracted from metadata such as the title, abstract, keywords, or whole text of a user's tagged or published papers, is also used to represent user preference. Another approach to representing user preference is collecting information explicitly Gautam and Kumar (2012); Wang et al. (2018).
Although the above-mentioned approaches represent user profiles from several views, they are not free from issues. The "bag of words" model requires processing a huge amount of information and cannot detect syntactic similarity among words. Although the "keyphrase" model uses meaningful contextual information to define the user profile, it also involves processing a large amount of information. For the identification of topics from papers, the existing solutions merely consider the correlation of words, which may lead to inaccurate identification of topics. A user profile based on external information imposes an overhead on users and usually suffers from a sparsity problem owing to users' lack of interest in providing information. Moreover, the user preference remains unchanged while making recommendations in different contexts. Another issue is that, in real-life applications, user intention goes beyond user preference and is better governed by past interactions; further, a paper belonging to a preferred topic of a user may not be liked the next time by that user. Hence, it is necessary to concentrate on what users want (user intention) in addition to what users like (user preference). Sequential algorithms Nguyen et al. (2018); Zhu et al. (2017); Yu et al. (2015); Armentano and Amandi (2012) are well suited to capturing a user's recent intention and have recently gained attention among researchers: a sequential prediction model predicts a user's next behaviour from recent interactions. Dynamic user intention modeling has been justified well in many text-based recommendation systems, such as web page recommendation Singh and Sharma (2019); Gao and Dong (2016); Hawalah and Fasli (2015), news article recommendation Agarwal and Singhal (2014), and e-learning Köck and Paramythis (2011). Nevertheless, for a research paper recommendation system, an improved approach is yet to be reported.
To address the aforementioned limitations, this work finalizes the following objectives.
To propose a topic model that overcomes the drawbacks of the existing topic modeling approaches.
To model user intention from previous interactions with topics of interest, considering the time stamp as a temporal context.
To check the performance of a research paper recommendation system considering the proposed user intention modeling.
In realization of the above objectives, a comprehensive model of "user preference learning" and "user intention prediction" is proposed. In order to learn user preference, this work proposes a hybrid topic model that combines Latent Dirichlet Allocation (LDA) and Word2Vec to study the probabilistic distribution of words over the topics of papers, followed by the contextual relationship of words. Afterwards, an algorithm is proposed to decide the true topic of papers. Next, the user preferences are collected from the user's log file in terms of topics of interest; the preferred topics are extracted from the clicked papers only. Further, a sequential model built on Long Short-Term Memory (LSTM) is used to model user intention from the historical sequence of topics of interest, utilizing the time stamp as a temporal context feature. This framework enables modeling the dynamic behaviour of users through long- and short-term analyses.
The main research contributions are highlighted as follows:
This work proposes a hybrid topic model to identify the topic of a paper.
The proposed hybrid topic model alleviates the drawbacks of the existing topic models discussed above.
A user intention prediction model is presented, which is able to capture a user's dynamic interest or demand at a particular moment.
The proposed topic model is applied to a dataset of two thousand records collected from the Scopus repository, and its effectiveness is demonstrated by comparison with an existing model. Likewise, the effectiveness of the user intention model is shown by comparison with a baseline model.
The rest of the paper is organized as follows. Section 2 presents the related work associated with the summarizing of the characteristics of a user profile. In Section 3, the proposed user preference learning and user intention prediction are discussed. Section 4 provides the experiment and analysis of results. Next, threats to validity of the proposed approach are discussed in Section 5. Finally, Section 6 concludes the paper.
2 Related work
State-of-the-art research paper recommendation systems mostly emphasize preference learning of users. This section outlines some of the existing works on different techniques of modeling user behaviour.
Sugiyama and Kan (2010) proposed a paper recommendation system in which the user profile is represented by a feature vector of unique terms obtained from a researcher's past published papers. Each term is weighted by term frequency-inverse document frequency. Subsequently, papers were recommended using similarity matching between the feature vectors of candidate papers and the user preferences.
Dhanda and Verma (2016) presented a recommender system based on an incremental dataset. The system collected the publication date and publishing authority as preferences from a user's liked papers; these user preferences were then used to generate the output papers.
Hong et al. (2012, 2013) defined the user profile by the topics given by a user; the profile is updated whenever the user provides a new topic. Cosine similarity was then used to find related papers for recommendation.
A deep learning based research paper recommendation system was proposed by Hassan (2017) to create a user profile from the implicit and explicit feedback of a user. In the first step, the user's preferred articles were collected explicitly, and their titles and abstracts were extracted to create a vector of topics. Next, the user's short-term interest was collected implicitly from the transaction log. Finally, papers were recommended using the cosine similarity between candidate papers and the papers stored in the user profile.
Al Alshaikh et al. (2017b) introduced an interesting way of representing the user profile. In this content-based research paper recommendation system, a Dynamic Normalized Tree of Concepts (DNTC) is used to build the user profile. The tree maintains the parent-child relationships between concepts following an ontology. From each paper the user reads, the top N concepts are retrieved, and a weighted tree is built to explore the semantic relations between concepts. The tree is normalized by the number of papers read by the user and is updated dynamically based on the time sequence of the user's log data: whenever the user reads a new paper, the tree weights are recalculated. The DNTC represents the user's short-term interest. Afterwards, a distance measure is used to find the papers most similar to the user's short-term interest.
Al Alshaikh et al. (2017a) incorporated both the long-term and short-term interests of users to represent the profile. Here, a dynamic sliding window was used with the DNTC to reflect the short-term interest of a user; the length of the sliding window depended on the latest papers read by the user. The concepts of all papers read by the user were considered as the long-term preference. After that, papers matching the long-term interest were recommended using a tree distance measure.
Ferrara et al. (2011) proposed a content-based recommendation approach in which the user profile was constructed from the papers tagged by users. From each paper, weighted keyphrases such as uni-grams, bi-grams, and tri-grams were calculated, and these three lists of keyphrases formed three different user profiles. The weight of each keyphrase was multiplied by the inverse document frequency of the associated keyphrase. In the final step, similarity matching was performed between candidate papers and the user profile.
Sun et al. (2018) suggested a hybrid article recommendation approach for social networks. The metadata of papers published in the social network, for example the title, abstract, publishing journal, and year, were used to define the profile. The authors utilized three connectivity graphs to find the user-user, user-article, and user-keyword relations, represented by three matrices. Finally, the Random Walk with Restart (RWR) method was employed to find the recommended articles.
Gautam and Kumar (2012) collected the personal information of users explicitly. They used user-given tags or keywords associated with the users' academic data to build the profile. Further, cosine similarity was used to recommend papers.
Bulut et al. (2018, 2019) provided a feature-based user profile. To generate the profile, the authors considered a user's past publications: all the required metadata, such as the title, year, author, abstract, and keywords of each article, were extracted and merged into a profile. In this work, cosine similarity was again utilized to find papers of similar interest.
Wang et al. (2018) proposed a hybrid article recommendation system in which, for the content-based part, user preference was assumed from the user-given tags and the titles of the articles read by a user. The user preference was then represented as a weighted bag of words to create the user profile. Finally, recommended articles were found by matching the similarity between the article profile and the researcher profile.
The characteristics of the user profiles modeled in different article recommendation systems are summarized in Table 2. From the survey, it can be observed that all the existing works consider only user preference, that is, what a user likes, to decide the relevant papers. However, capturing the change of preference, that is, what a user wants, is very important for accurate recommendation, and it is completely missing in the existing approaches.
|Sl. No.||Schemes||Modelling technique||SPL||Intention prediction||LSTPA|
|1||Sugiyama and Kan (2010)||Bag of words||Past publication||No||No|
|2||Dhanda and Verma (2016)||Publishing date, publishing authority||Explicit feedback||No||No|
|3||Hong et al. (2012, 2013)||Topics of interest||Explicitly from user||No||No|
|4||Hassan (2017)||Topics||Implicit and explicit feedback||No||No|
|5||Al Alshaikh et al. (2017b)||Tree of concepts||Past publication||No||No|
|6||Al Alshaikh et al. (2017a)||Tree of concepts||Past publication||Yes||Yes|
|7||Ferrara et al. (2011)||Keyphrase||Tagged paper||No||No|
|8||Sun et al. (2018)||Metadata of papers||Social network||No||No|
|9||Gautam and Kumar (2012)||Explicit information, tagged keyword||Explicit feedback||No||No|
|10||Bulut et al. (2018, 2019)||Features of papers||Past publication||No||No|
|11||Wang et al. (2018)||Bag of words||User-given tag||No||No|
SPL: Source for Preference Learning, LSTPA: Long-Short Term Preference Analysis
3 Proposed approach
For a recommendation system, monitoring user behaviour is a crucial task. In this context, the proposed approach describes the strategy for extracting a user's preference and predicting the user's intention for accurate recommendation. For a given user, the papers that she clicked from a recommendation list are considered relevant papers for building the user profile. These relevant papers are traced from the user's browsing activities. Suppose a user $u$ has clicked a set of papers $P = \{p_1, p_2, \dots, p_n\}$. The primary goal is to extract the preference of user $u$, in terms of topics of interest, from each relevant paper $p_i \in P$. Further, let the historical preference of the user be $S_u = \{x_1, x_2, \dots, x_t\}$, where $x_i$ is the $i$-th topic of interest of user $u$ at time $i$. Hence, the second target is to predict the intention $x_{t+1}$ of user $u$ at time $t+1$.
Overview of the proposed approach. The proposed approach is organized in two phases: (1) user preference learning and (2) user intention prediction. The first phase explains the procedure for learning user preference implicitly. In the second phase, an LSTM-based sequential model is presented to predict user intention. An overview of the proposed approach is shown in Figure 1.
3.1 User preference learning
User preference plays an important role in generating the user profile in a recommendation system; an accurate user profile facilitates personalized recommendation. There are two methods of capturing user preference: (1) explicit and (2) implicit. In the explicit method, users have to provide their personal details or preferences themselves, which places a burden on them; hence explicit capturing is not a well-accepted method, and users may lose interest quickly. The implicit method, in contrast, captures a user's preference without any intervention from the user. In this regard, the proposed method collects user preferences implicitly from the user's clicked data. In a research paper recommendation system, the clicked data refers to a set of papers, and a paper contains several features. The proposed method considers a latent feature of papers, namely "topics", as the preference of users; in other words, it tries to capture a user's preference in terms of topics of interest. In order to extract the topics of papers, a hybrid model is proposed, combining Latent Dirichlet Allocation (LDA) Jelodar et al. (2019) and Word2Vec Rong (2014). LDA, a topic modelling technique, is used to calculate the probability distributions of the words of a paper over a predefined number of topics. It may be noted that LDA does not consider the contextual correlation among words and thus may fail to predict the true topics of words; a word-word correlation measure is necessary to justify the true identification of the topics of a paper. In this context, the word embedding technique Word2Vec is taken into account, as it assists in grouping semantically and syntactically similar words under a particular topic. This work proposes an approach to combine LDA and Word2Vec to obtain an improved word-topic distribution.
Further, the topic of a paper is decided from the maximum likelihood of its words among topics. The necessary steps of the proposed approach to predict the topics of papers are shown in Fig. 2.
3.1.1 Pre-processing
Before applying both LDA and Word2Vec, the corpus needs to be pre-processed. In this process, a database, say $D$, comprising the titles of all papers is considered. The dataset is pre-processed to remove stop words, punctuation, numbers, and so on. Along with this, word tokenization, lemmatization, and stemming are performed. Each title is thus tokenized into words, which form the input for the next step.
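The pre-processing step might be sketched as follows. This is a minimal pure-Python illustration: the stop-word list and the suffix-stripping "stemmer" here are tiny stand-ins for the full library-based stop-word removal, lemmatization, and stemming described above.

```python
import re

# Illustrative stop-word list and suffix rules; a real pipeline would use
# a library such as NLTK for stop words, lemmatization, and stemming.
STOP_WORDS = {"a", "an", "the", "of", "for", "and", "in", "on", "with", "using"}
SUFFIXES = ("ing", "tion", "ed", "es", "s")  # naive stemming rules

def stem(word):
    """Very naive suffix-stripping stemmer (stand-in for Porter stemming)."""
    for suf in SUFFIXES:
        if word.endswith(suf) and len(word) - len(suf) >= 3:
            return word[: -len(suf)]
    return word

def preprocess_title(title):
    """Lower-case, drop punctuation/numbers, remove stop words, stem."""
    tokens = re.findall(r"[a-z]+", title.lower())  # strips punctuation, digits
    return [stem(t) for t in tokens if t not in STOP_WORDS]

corpus = ["A Hybrid Topic Model for Research Paper Recommendation (2020)"]
print([preprocess_title(t) for t in corpus])
```

Each pre-processed title is then a list of normalized tokens, ready for both the BOW construction and the Word2Vec training sample.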
3.1.2 Calculation of word-topic distribution using LDA
The first step in identifying the topic of a paper is calculating the distribution of words over several topics. Here LDA, an unsupervised machine learning technique, is used to quantify the probability of a word belonging to a specific topic. To quantify this probability distribution using LDA, a "Bag-Of-Words" (BOW) model has to be estimated. Hence, in the next step, a dictionary of unique words is constructed from the tokenized words generated in the pre-processing step, and each title is represented under the BOW model. LDA is then applied to the BOW model to calculate the probability distribution of words over $K$ topics, where $K$ is predetermined. The best value of $K$ is decided by fitting LDA several times with different values and calculating the topic coherence score Syed and Spruit (2017) each time; the value of $K$ that yields the highest topic coherence score is taken as the best value. Initially, for each title $d$, each word $w$ is assigned randomly to one of the $K$ topics, forming a matrix $W$. Suppose the dataset contains $M$ titles, each title contains $N$ words, and the total number of words in the vocabulary is $V$. Now, for each word in $W$, its topic assignment is updated based on two estimates: (1) $P(\text{topic } k \mid \text{title } d)$, with $k \in \{1, \dots, K\}$ and $d \in \{1, \dots, M\}$, which captures how many words of a given title $d$ belong to topic $k$; and (2) $P(\text{word } w \mid \text{topic } k)$, with $w \in \{1, \dots, V\}$, which captures, across titles, how many papers fall under topic $k$ because of the word $w$. The steps to calculate these two estimates and to update the probability are explained below.
In the second step, for each topic $k$, the word proportion, denoted by the random variable $\phi_k$, is estimated. It follows a Dirichlet distribution Townes (2020) over the words in topic $k$, parameterized by $\beta$; $\phi_k$ is a $V$-dimensional vector of positive real numbers summing to one. The posterior estimate of $\phi_k$, that is, the probability of word $w$ for topic $k$, can be estimated as:

$$\hat{\phi}_{k,w} = \frac{C^{WT}_{w,k} + \beta}{\sum_{w'=1}^{V} C^{WT}_{w',k} + V\beta}$$

where $C^{WT}$ is the matrix of word-topic counts.
The third step samples another random variable, say $\theta_d$, which represents the probability $P(\text{topic } k \mid \text{title } d)$, where $k \in \{1, \dots, K\}$ and $d \in \{1, \dots, M\}$. It follows another Dirichlet distribution over the topics of each title, parameterized by $\alpha$; $\theta_d$ is a $K$-dimensional vector of positive real numbers summing to one. The posterior probability of topic $k$ in title $d$, that is $\hat{\theta}_{d,k}$, is computed using the following equation:

$$\hat{\theta}_{d,k} = \frac{C^{DT}_{d,k} + \alpha}{\sum_{k'=1}^{K} C^{DT}_{d,k'} + K\alpha}$$

where $C^{DT}$ is the matrix of title-topic counts.
Now, for each title $d$, the topic of each word is identified based on a multinomial distribution Townes (2020) given $\theta_d$. In addition, the other words in the same topic are identified from the word distribution $\phi_k$. This process is repeated for all papers to improve the topic assignment of words. The reassignment of a word to a topic can be expressed mathematically as:

$$P(z_i = k \mid z_{-i}, w_i, d) \propto \hat{\phi}_{k, w_i} \cdot \hat{\theta}_{d, k}$$

where $z_i$ is the topic assigned to the $i$-th word, $z_{-i}$ represents the topic assignments of all other words, and $d$ is the title of the paper containing the $i$-th word.
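The two estimates and the reassignment rule above can be sketched as a compact collapsed Gibbs sampler. This is a minimal pure-Python illustration under assumed hyper-parameters ($\alpha$, $\beta$, iteration count), not the implementation used in this work:

```python
import random
from collections import defaultdict

def gibbs_lda(titles, K, alpha=0.1, beta=0.01, iters=50, seed=0):
    """Collapsed Gibbs sampling for LDA over tokenized titles.

    titles: list of token lists; K: number of topics.
    Returns the per-word topic assignments z.
    """
    rng = random.Random(seed)
    vocab = sorted({w for t in titles for w in t})
    V = len(vocab)
    C_wt = defaultdict(int)            # word-topic counts (C^WT)
    C_dt = defaultdict(int)            # title-topic counts (C^DT)
    n_t = [0] * K                      # total words assigned to each topic
    z = [[rng.randrange(K) for _ in t] for t in titles]  # random init
    for d, t in enumerate(titles):
        for i, w in enumerate(t):
            C_wt[(w, z[d][i])] += 1
            C_dt[(d, z[d][i])] += 1
            n_t[z[d][i]] += 1
    for _ in range(iters):
        for d, t in enumerate(titles):
            for i, w in enumerate(t):
                k_old = z[d][i]        # remove the word's current assignment
                C_wt[(w, k_old)] -= 1; C_dt[(d, k_old)] -= 1; n_t[k_old] -= 1
                # P(z_i = k | z_-i, w) ∝ phi_hat[k, w] * theta_hat[d, k]
                weights = [
                    (C_wt[(w, k)] + beta) / (n_t[k] + V * beta)
                    * (C_dt[(d, k)] + alpha)
                    for k in range(K)
                ]
                k_new = rng.choices(range(K), weights=weights)[0]
                z[d][i] = k_new        # record the resampled topic
                C_wt[(w, k_new)] += 1; C_dt[(d, k_new)] += 1; n_t[k_new] += 1
    return z
```

The denominator $\sum_{k'} C^{DT}_{d,k'} + K\alpha$ is constant across topics for a fixed word position, so it can be dropped from the sampling weights.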
The top 10 words under each topic obtained from the LDA technique are shown in Figure 3.
3.1.3 Improvised word-topic distribution using word embedding
Though LDA reduces the dimensionality of a text corpus and represents papers as low-dimensional vectors, it has some drawbacks. LDA represents papers with a "bag-of-words" model and hence suffers from the sparsity problem; in addition, it does not produce good results for small training data. Besides these common problems, another important issue is that it does not consider the semantic correlation of words when distributing words into topics. To achieve a better word-topic distribution, it is necessary to consider the context of words, and here word embedding is the most promising technique. In order to embed words by their semantic and syntactic relations, the skip-gram variant of the Word2Vec model is chosen. The Word2Vec model is a word embedding technique that represents the words of a large text in an $n$-dimensional vector space. It follows the distributional hypothesis Rubinstein et al. (2015) that words occurring in similar contexts have similar meanings. There are two techniques for training the Word2Vec model: (1) Continuous Bag Of Words (CBOW) and (2) skip-gram. The skip-gram model is preferable to CBOW when the training corpus is not very large; hence, this work uses the skip-gram model.
Word2Vec forms a vocabulary of unique words from the tokenized words obtained in the pre-processing step from the database $D$. Further, an $n$-dimensional feature space, or vector space, is created in which each unique word of the vocabulary is assigned a corresponding vector. Along with this, a training sample of context-target word pairs is prepared depending upon the context window size. The context window is a very important hyper-parameter Caselles-Dupré et al. (2018) that determines the number of contextual neighbours of the target word while estimating the vector representation; this work uses a fixed context window size. In the proposed method, the top words are extracted from the matrix $W$ of the previous step, and for each word, the context words are found using the skip-gram model.
A skip-gram model is a fully connected neural network constructed with an input layer, one hidden layer, and an output layer. The representation of the skip-gram model is shown in Figure 4. The number of neurons in the input and output layers is equal to the size of the vocabulary, $V$, while the hidden layer consists of $n$ neurons. Next, the one-hot representation of the target word is fed to the network: for a given target word $w_t$, only one position of the input vector is 1, and the rest of the positions are filled with 0. For example, if a word, say "Big", occupies position $k$ in the vocabulary, its one-hot vector has a 1 at position $k$ and 0 elsewhere, as shown in Table 3.
There is no activation function between the input layer and the hidden layer; the weighted sum of the input is copied directly to the hidden layer. The weights of the hidden layer are represented by a matrix, say $W_1$, of dimension $V \times n$. Therefore, the hidden layer output $H$ can be expressed as:

$$H = W_1^{T} x$$
Now, assume there are $C$ context words. The output layer computes $C$ multinomial distributions, each using the weight matrix, say $W_2$, between the hidden layer and the output layer. The goal of this layer is to set the parameters $\theta$ so as to maximize the conditional probability $P(w_c \mid w_t; \theta)$, that is, the probability of $w_c$ being predicted as the context of the target $w_t$, for all training pairs. If the set of training pairs is denoted by $T$, this objective function can be expressed as:

$$\arg\max_{\theta} \prod_{(w_t, w_c) \in T} P(w_c \mid w_t; \theta)$$

In the skip-gram model, this conditional probability, that is, the closeness of the target word $w_t$ and context word $w_c$, is quantified using the soft-max function as follows:

$$P(w_c \mid w_t; \theta) = \frac{\exp(v'_{w_c} \cdot v_{w_t})}{\sum_{w \in V'} \exp(v'_{w} \cdot v_{w_t})}$$

where $v_{w_t}$ and $v'_{w_c}$ are the vector representations of $w_t$ and $w_c$, and $V'$ is the set of all contexts; the parameters $\theta$ are the vectors $v_w$ and $v'_w$ for all words. Taking the logarithm and switching the product to a sum, the objective can be re-written as:

$$\arg\max_{\theta} \sum_{(w_t, w_c) \in T} \log P(w_c \mid w_t; \theta)$$
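A minimal numpy sketch of this forward pass and soft-max objective may help: the toy vocabulary size, embedding dimension, random weights, and training pairs below are all illustrative assumptions, not parameters from this work.

```python
import numpy as np

rng = np.random.default_rng(0)
V, n = 6, 3                    # toy vocabulary size and embedding dimension
W1 = rng.normal(size=(V, n))   # input->hidden weights: rows are word vectors
W2 = rng.normal(size=(n, V))   # hidden->output weights: columns are context vectors

def softmax(s):
    e = np.exp(s - s.max())    # numerically stable soft-max
    return e / e.sum()

def context_prob(target_idx, context_idx):
    """P(w_c | w_t): forward pass of the skip-gram network."""
    x = np.zeros(V); x[target_idx] = 1.0   # one-hot input for the target word
    H = W1.T @ x                           # hidden layer = row target_idx of W1
    p = softmax(W2.T @ H)                  # multinomial over the vocabulary
    return p[context_idx]

# Log-likelihood of a set T of (target, context) training pairs:
pairs = [(0, 1), (0, 2), (3, 4)]
log_lik = sum(np.log(context_prob(t, c)) for t, c in pairs)
print(log_lik)
```

Training adjusts $W_1$ and $W_2$ (by gradient ascent on this log-likelihood) so that observed context words receive high probability.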
Now, all the words that are dissimilar with respect to the context are isolated. Further, a topic is assigned to every word vector according to the topic of its target word and stored in a matrix, say $W'$. A snapshot of $W'$ is shown in Figure 5.
3.1.4 Word tokenization
Word tokenization means splitting a large text into individual words. In order to decide the topics of papers, each title of the database $D$ is tokenized into words. Further, to make the titles more meaningful, common words are removed from each title.
3.1.5 Deciding dominant topic of papers
The goal of this last step is to decide the dominant topic of each paper. A paper may contain multiple topics owing to the distribution of its words among several topics; therefore, it is necessary to find the most promising topic of a paper for categorisation. In this process, the database with tokenized words is employed. For a single word, say $w$, belonging to a title, the corresponding topic is searched in the improvised topic-word distribution matrix $W'$, and the topic is selected according to the topic assigned to the word in $W'$. It may be noted that a word may belong to more than one topic; in that case, a specific topic for the word is selected based on the highest probability score among its topics in the word-topic distribution matrix. Finally, the topic of a paper is decided by the frequency of words belonging to each topic. The overview of the approach is shown in Algorithm 1.
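The majority-vote idea of this step can be sketched as follows. The word-topic probabilities and topic names below are made-up illustrations standing in for the matrix $W'$, not outputs of the actual model:

```python
from collections import Counter

# Illustrative word-topic distribution W': word -> {topic: probability}.
# In the proposed pipeline this comes from the combined LDA + Word2Vec step.
word_topics = {
    "neural":  {"deep_learning": 0.9, "optimization": 0.1},
    "network": {"deep_learning": 0.8, "graphs": 0.2},
    "routing": {"graphs": 0.95},
}

def word_topic(word):
    """Pick the single topic with the highest probability for a word."""
    dist = word_topics.get(word)
    return max(dist, key=dist.get) if dist else None

def dominant_topic(tokens):
    """Majority vote over the topics of a title's words."""
    votes = Counter(t for t in map(word_topic, tokens) if t)
    return votes.most_common(1)[0][0] if votes else None

print(dominant_topic(["neural", "network", "routing"]))  # deep_learning wins 2-1
```

Words outside the vocabulary simply contribute no vote, and ties fall to the first topic encountered; a fuller implementation would break ties by the summed probabilities.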
3.2 User intention prediction
The extracted preference obtained by the user preference learning method discussed in Section 3.1 can be used as a feature of the user profile in a research paper recommendation system. A user profile is of two types: (1) static and (2) dynamic. A static profile contains user information (e.g., age, sex) that does not require modification; generally, this information is supplied by the user himself. In contrast, a dynamic profile is generated automatically by the recommendation system, and the features it contains change over time. Since the proposed approach captures the topics of interest of a user, which may change with context or time, the profile is dynamic: it updates accordingly and grows in size and variation. Over time, this growing variation in topic preference increases the difficulty of the decision task in recommendation. For example, if a user profile contains several preferable topics, it is difficult to predict which topic of papers the user will want in the next session. To mitigate this problem, it is required to analyze all the historical interactions of a user. Conventional approaches to modelling a user profile struggle to capture the full historical sequence (long term and short term) of user-item interactions Fang et al. (2019), and hence lead to imperfect user modelling. In this scenario, sequential modelling is a good choice in both academic and practical applications. In a sequential model, time plays an important role: it differentiates the user's interest into short-term and long-term categories. The short-term interest reflects the current interest of a user, which is changeable; in contrast, long-term interest is more stable. Sequential modeling efficiently captures a user's long-term preference across different sessions as well as the short-term preference within a session. For sequential modeling, deep learning based methods have gained more attention than machine learning based methods such as Markov chains Jannach and Ludewig (2017) and session-based k-NN He and McAuley (2016). The proposed approach uses a deep sequential topic analysis technique to predict the future topic of interest of a user from the past sequence of preferred topics. Specifically, a variant of the recurrent neural network (RNN), Long Short-Term Memory (LSTM), is used to combine long-term and short-term interest in topics. Let the historical topic sequence of a user $u$ be denoted by $S_u$ and defined as $S_u = \{x_1, x_2, \dots, x_t\}$, where $x_i$ means that the user likes topic $x_i$ at time $i$. The task is to predict the next preferable topic $x_{t+1}$ for a given user at time $t+1$. In addition, the proposed approach considers a temporal feature, the "time difference" between two clicked papers, along with a few external features such as "Liked" (whether the user liked the paper or not) and "session number". The overview of the proposed approach is shown in Figure 6.
3.2.1 Data normalization
Normalization is required to scale data from its original range to the range 0 to 1, which helps the prediction model learn the optimal parameters of each input more easily. The normalization of a value $x$ can be executed using the following equation:

$$x' = \frac{x - x_{\min}}{x_{\max} - x_{\min}}$$

where $x_{\min}$ and $x_{\max}$ are the minimum and maximum values of the corresponding feature.
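A minimal sketch of this min-max scaling, applied per feature, might look like the following (the example values are illustrative):

```python
def min_max_normalize(values):
    """Scale a list of values into [0, 1] via (x - min) / (max - min)."""
    lo, hi = min(values), max(values)
    if hi == lo:                      # constant feature: map everything to 0
        return [0.0 for _ in values]
    return [(x - lo) / (hi - lo) for x in values]

print(min_max_normalize([120, 60, 180]))  # -> [0.5, 0.0, 1.0]
```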
3.2.2 Data transformation
The next task is to transform the sequential data into supervised data, since a neural network is a supervised model; the data should therefore be arranged as input-output pairs. In Keras-based workflows Ketkar (2017), a "look-back" transformation is commonly used to convert sequential data into supervised data efficiently: a look-back of $k$ uses the past data up to $x_{t-1}$ (that is, $x_{t-k}, \dots, x_{t-1}$) to predict $x_t$. For example, if the sequence is $[x_1, x_2, x_3, x_4]$ and the look-back is 1, the transformed pairs are $(x_1 \rightarrow x_2)$, $(x_2 \rightarrow x_3)$, and $(x_3 \rightarrow x_4)$.
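The look-back transformation can be sketched as follows; the helper name and the toy sequence are illustrative:

```python
import numpy as np

def to_supervised(sequence, look_back=1):
    """Turn a sequence into (X, y) pairs: past `look_back` values predict the next."""
    X, y = [], []
    for i in range(len(sequence) - look_back):
        X.append(sequence[i : i + look_back])  # window of past values
        y.append(sequence[i + look_back])      # the value to predict
    return np.array(X), np.array(y)

X, y = to_supervised([10, 20, 30, 40], look_back=1)
print(X.tolist(), y.tolist())  # [[10], [20], [30]] [20, 30, 40]
```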
3.2.3 Data reformation
An important fact about the LSTM model is its 3-dimensional input format. Therefore, the 2-dimensional data must be reshaped into a 3-dimensional form (batch_size, time_steps, input_dim). Here, the number of time steps equals the number of LSTM cells, input_dim equals the number of features, and batch_size is the number of windows of data passed at once.
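The reshaping step might look like this in numpy, with toy dimensions (3 windows of 2 time steps, 1 feature each) chosen purely for illustration:

```python
import numpy as np

# 2-D supervised data: 3 samples, each a window of 2 past time steps, 1 feature.
X = np.array([[10, 20], [20, 30], [30, 40]])
time_steps, input_dim = 2, 1
# LSTM layers expect input shaped as (batch_size, time_steps, input_dim).
X3 = X.reshape((X.shape[0], time_steps, input_dim))
print(X3.shape)  # (3, 2, 1)
```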
3.2.4 Future topic of interest prediction
After the reformation of data, the immediate task is to generate a LSTM model to train the data. The architecture of LSTM is shown in the Figure 7.
As seen in Figure 7, LSTM is a sequential model in which several neural-network modules are connected in sequence. Each LSTM cell consists of three gates and a common cell state that control the retention and updating of information learned from the sequence data. The gates and cell state are described as follows:
input gate: It controls how much of the new input is written to the cell state.
forget gate: It decides the amount of information from the previous cell state to be retained.
output gate: It produces the output vector generated by each LSTM cell.
cell state: It runs through the entire network and carries information; the LSTM adds or removes information from the cell state using the gates.
Let $x_t$ be the input vector at time $t$, $h_t$ the output vector at time $t$, and $C_t$ the cell state at time $t$. The first step of the LSTM is to decide which information of the cell state $C_{t-1}$ has to be removed. The decision is taken by the forget gate using a sigmoid function ($\sigma$). The forget gate uses the input value $x_t$ and the output value at the previous time step, $h_{t-1}$, to produce an output between 0 and 1: a value of 0 indicates completely removing the corresponding cell-state value, while 1 indicates keeping it completely. The mathematical expression of this function is:

$$f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f)$$

where $W_f$ and $b_f$ are the weight and bias of the forget gate.
Further, the cell state stores new information in two steps. First, the input gate chooses which values will be updated, using a sigmoid function:

$$i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i)$$

where $W_i$ and $b_i$ are the weight and bias of the input gate. Second, a $\tanh$ function creates a vector of new candidate values to add to the previous state:

$$\tilde{C}_t = \tanh(W_C \cdot [h_{t-1}, x_t] + b_C)$$

where $W_C$ and $b_C$ are the corresponding weight and bias. Finally, these two values are combined to update the cell state:

$$C_t = f_t * C_{t-1} + i_t * \tilde{C}_t$$
The cell state $C_t$ is thus updated by adding the new candidate information, scaled by the input gate, to the previous cell state filtered by the forget gate. At the end, the output is decided by the output gate after filtering with a sigmoid function ($\sigma$); the cell state is passed through a $\tanh$ function, bounding it between $-1$ and $1$, and multiplied by the gate output. The execution at the output layer can be expressed with the following two equations:

$$o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o)$$

$$h_t = o_t * \tanh(C_t)$$

where $W_o$ and $b_o$ are the weight and bias of the output gate. The second equation determines the portion of the current state that is allowed to be shown as output and used in the next iteration of training.
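The gate equations can be sketched as a single numpy time step. The concatenated-input formulation, weight shapes, and random initialization below are illustrative assumptions, not the configuration used in this work (in practice the Keras LSTM layer handles all of this internally):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, C_prev, W, b):
    """One LSTM time step following the standard gate equations.

    W and b hold the forget/input/candidate/output parameters
    (W_f, W_i, W_C, W_o and b_f, b_i, b_C, b_o).
    """
    z = np.concatenate([h_prev, x_t])          # [h_{t-1}, x_t]
    f_t = sigmoid(W["f"] @ z + b["f"])         # forget gate
    i_t = sigmoid(W["i"] @ z + b["i"])         # input gate
    C_tilde = np.tanh(W["C"] @ z + b["C"])     # candidate cell values
    C_t = f_t * C_prev + i_t * C_tilde         # updated cell state
    o_t = sigmoid(W["o"] @ z + b["o"])         # output gate
    h_t = o_t * np.tanh(C_t)                   # filtered output
    return h_t, C_t

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 4                          # toy dimensions
W = {g: rng.normal(size=(n_hidden, n_hidden + n_in)) for g in "fiCo"}
b = {g: np.zeros(n_hidden) for g in "fiCo"}
h, C = lstm_step(rng.normal(size=n_in), np.zeros(n_hidden), np.zeros(n_hidden), W, b)
print(h.shape, C.shape)  # (4,) (4,)
```

Unrolling this step over the reshaped windows, with the final $h_t$ fed to a dense output layer, gives the topic prediction described above.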
The parameters of the LSTM used in this approach are optimized by "Adam", a variation of the Stochastic Gradient Descent optimizer. Adaptive Moment Estimation (Adam) computes an adaptive learning rate for each parameter and is both efficient and fast.
4 Experiment and experimental result
This section presents the objectives of the experiments, the dataset description, the experimental setup and procedure, the evaluation metrics, and the observed results.
4.1 Objectives of the experiments
The objectives of this experiment are finalized to answer the following research questions:
RQ1: Is the proposed hybrid topic extraction model comparable to the state-of-the-art topic models?
RQ2: How effective is the LSTM-based user intention model?
RQ3: How does the proposed approach influence the performance of a research paper recommendation system?
4.2 Data set
The proposed approach considered two datasets. The first dataset is a corpus of paper titles collected from Scopus (data source: https://www.scopus.com/search/form.uri?display=basic). This dataset was used to generate topics of papers using the proposed hybrid topic extraction model; in this work, only the titles of papers were considered. The second dataset contains users' preference data and is used to evaluate the proposed sequential model. This preference data was collected from 12/10/2019 to 18/5/2020 using the proposed recommendation system. The statistics of the preference dataset are shown in Table 4.
| Dataset 2 | |
| --- | --- |
| Number of users | 50 |
| Number of items | 5213 |
| Number of features | 3 |
| Name of features | |
4.3 Experimental environment
The proposed approach is implemented in a Google Colab Notebook. All code is implemented and executed in Python version 3.6 with the Keras programming environment.
4.4 Experimental procedure
Experiments started with the preparation of the datasets, as follows.
Data preparation: For the preference learning model, a dataset was prepared comprising research paper titles. Further, for evaluating the user's intention prediction model, the user's preference data were collected from the log data maintained in the system's database. From the log data, clicked papers were considered, and all the required features, such as titles, topics, time differences, and session numbers, were extracted to form the user profile.
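As a sketch, extracting the feature sequence from a click log could look like the following (the record fields and values are hypothetical, chosen only to illustrate the titles/topics/time-differences/session-numbers features named above, not the system's actual schema):

```python
from datetime import datetime

def build_profile(clicks):
    """Turn a user's clicked-paper log into (topic, time gap in seconds,
    session number) tuples, the kind of feature sequence fed to the
    sequential model."""
    profile = []
    prev_time = None
    for c in clicks:
        gap = 0.0 if prev_time is None else (c["time"] - prev_time).total_seconds()
        profile.append((c["topic"], gap, c["session"]))
        prev_time = c["time"]
    return profile

# Hypothetical log records (field names and values are illustrative).
clicks = [
    {"title": "Deep learning for text", "topic": 3,
     "time": datetime(2020, 1, 5, 10, 0), "session": 1},
    {"title": "A survey of LSTMs", "topic": 3,
     "time": datetime(2020, 1, 5, 10, 7), "session": 1},
    {"title": "Graph embeddings", "topic": 7,
     "time": datetime(2020, 1, 6, 9, 30), "session": 2},
]
```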
4.5 Experiments vis-a-vis objectives
Experiment 1: To show that the proposed hybrid topic extraction model performs better than LDA alone (RQ1).
The goal of this experiment is to prove the efficacy of the proposed hybrid topic extraction model. To show its better performance over LDA alone, the experiment was divided into two parts. In the first part, only LDA was applied on dataset 1, and the resulting topics of papers were added to dataset 1. The number of topics was decided using the topic coherence score; the result is shown in Figure 8. Further, Figure 9 represents the distribution of words for a specific topic. Next, dataset 1 was split into training and testing sets, and three classifiers, namely Support Vector Machine (SVM), Logistic Regression (LR), and Random Forest (RF), were trained on the training data and validated on the test data. The model was validated using the average values of F1 micro and F1 macro obtained from 5-fold cross-validation. F1 micro and F1 macro are broadly used to validate classifiers on multi-class data Li and Guo (2013). The results are shown in Table 5.
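For reference, F1 micro and F1 macro on multi-class predictions can be computed as follows (a plain-Python sketch; the experiments themselves may have used library implementations):

```python
from collections import Counter

def f1_micro_macro(y_true, y_pred):
    """Compute micro- and macro-averaged F1 for a multi-class labelling."""
    labels = set(y_true) | set(y_pred)
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1
            fn[t] += 1
    # Micro: pool true/false positive and negative counts over all classes.
    TP, FP, FN = sum(tp.values()), sum(fp.values()), sum(fn.values())
    micro = 2 * TP / (2 * TP + FP + FN) if TP else 0.0
    # Macro: average the per-class F1 scores with equal class weight.
    per_class = []
    for c in labels:
        denom = 2 * tp[c] + fp[c] + fn[c]
        per_class.append(2 * tp[c] / denom if denom else 0.0)
    macro = sum(per_class) / len(per_class)
    return micro, macro
```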
In the second part, the proposed hybrid model was applied on the same dataset, and topics were assigned to the papers in dataset 1 accordingly. The word2vec model was trained on our own dataset with the following parameters: window size 5, vector dimension 200, and min count 5. The model was implemented using the gensim word2vec implementation; in particular, skip-gram, a variation of word2vec, was implemented in Keras. t-SNE was used to visualize the words across models; Figure 10 shows the vocabulary of words generated from word2vec. Finally, the three classifiers were applied on the resultant dataset as in the first part. Table 5 presents the classifiers' results in terms of F1 micro and F1 macro.
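The skip-gram variant trains on (center word, context word) pairs drawn from a sliding window; with the window size of 5 used above, pair generation can be sketched as follows (illustrative only, not the gensim/Keras training code):

```python
def skipgram_pairs(tokens, window=5):
    """Generate (center, context) training pairs as used by skip-gram.

    window=5 mirrors the window size used when training word2vec on the
    title corpus; each center word predicts the words around it.
    """
    pairs = []
    for i, center in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        pairs.extend((center, tokens[j]) for j in range(lo, hi) if j != i)
    return pairs
```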
| Model | F1 micro | F1 macro | F1 micro | F1 macro | F1 micro | F1 macro |
| --- | --- | --- | --- | --- | --- | --- |
| LDA + Word2Vec | 72.13% | 74.48% | 62.6% | 64.13% | 64.55% | 66.12% |
From Table 5, it is evident that the proposed hybrid model performs better than the LDA topic model in all cases.
Experiment 2: To show that the sequential model considered in this work performs better than existing models (RQ2).
This experiment was conducted on dataset 2, that is, the historical interaction data of users, divided into training and testing parts. The LSTM-based sequential model was applied on the training and testing data successively. The performance of the model was measured using Accuracy and Root Mean Square Error (RMSE) on both the training and testing data. This performance was then compared with three existing sequential models: Frequent Pattern Mining (FPM), the Markov Chain Model (MCM), and the Convolutional Neural Network (CNN). The same dataset was used for evaluating all models. The existing models are described as follows:
FPM: Frequent Pattern Mining (FPM) is a data mining technique. FPM was applied on the historical sequential topic data to predict the future topic of interest.
MCM: The Markov Chain Model (MCM) was applied on the user preference data to predict future preferences.
CNN: The Convolutional Neural Network (CNN) is a deep learning model. CNN was applied in a similar way to predict future topic preferences.
Table 6 and Figure 11 present the results of the performance metrics. From the results, it is observed that the sequential model used in this work performed better on both the training and test datasets.
| | Training data | Testing data |
| --- | --- | --- |
Experiment 3: To compare the results of paper recommendation according to user evaluation with the results obtained from existing search engines (RQ3).
In order to prove the significance of the proposed approach from the user's perspective, a user survey was conducted. In this survey, 100 users were selected from a group of researchers.
For each participant, 10 sessions were conducted. In each session, the four baseline models and the proposed model were used in arbitrary order, and users were unaware of the type of model. Before starting the experiment, each participant was requested to give 10 keywords from their area of interest. From the 1000 keywords collected from all participants, duplicate keywords were removed. Finally, 640 keywords were selected randomly and divided into 40 sets; in each session, a participant chose a set randomly. In the first session, candidate papers matching the search query were extracted from the dataset and provided to the participant, and their responses were recorded. Further, all the required features were extracted from their clicked papers and stored in a user profile. From then onward, this profile was used to predict the user's intention and make recommendations to that specific participant, and after every session the profile was updated. Finally, the average results of nine sessions were considered as the final evaluation of the users.
To compare the performance of the proposed model, three popular search engines, namely Google Scholar (GS), Microsoft Academic Search (MAS), and CiteSeer, were considered. Experiment 2 was repeated for evaluating the results of the search engines.
The following four metrics were used to evaluate the results: Recall@10, Precision@10, MAP@10, and CTR. All metrics were computed based upon the judgment of the users. The definitions of the above metrics are as follows:

Recall@k: Recall@k can be defined using the following equation:

$\mathrm{Recall@}k = \frac{|R_k \cap Rel|}{|Rel|},$

where $R_k$ is the set of top-$k$ recommended papers and $Rel$ is the set of relevant papers according to the user. In this context, a user's clicked papers were assumed to be the relevant papers.

Precision@k: Precision@k can be defined as follows:

$\mathrm{Precision@}k = \frac{|R_k \cap Rel|}{k}.$

MAP@10: This is the average of the reciprocal ranks of relevant papers in the recommendation list; the reciprocal rank is set to 0 if the rank is above 10. It thus takes the rank of an item into account. Each metric is evaluated 10 times and averaged.

CTR: The click-through rate measures how many of the recommended papers are clicked by the user.
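Assuming clicked papers form the relevant set, the ranking metrics above can be sketched as simple helper functions (illustrative, not the authors' evaluation code):

```python
def precision_at_k(recommended, relevant, k=10):
    """Fraction of the top-k recommendations the user judged relevant."""
    return sum(1 for p in recommended[:k] if p in relevant) / k

def recall_at_k(recommended, relevant, k=10):
    """Fraction of all relevant (clicked) papers that appear in the top k."""
    hits = sum(1 for p in recommended[:k] if p in relevant)
    return hits / len(relevant) if relevant else 0.0

def reciprocal_rank_at_k(recommended, relevant, k=10):
    """Reciprocal rank of the first relevant paper; 0 if none is in the top k."""
    for rank, p in enumerate(recommended[:k], start=1):
        if p in relevant:
            return 1.0 / rank
    return 0.0
```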
Result: The comparative results are shown in Figure 12. From the results, it is observed that the proposed method outperforms the search engines.
5 Validity limitations
The claims of this work rest on some assumptions and limited considerations. This section highlights the considered features and possible ways of improvement.
Timestamp: This work utilized an LSTM model to predict a user's intention, where timestamps were used as the contextual feature. However, this could not properly capture the drift of a user's intention; it could be modeled better using Time-LSTM.
External context: Other than time, external contexts such as user experience, user understanding level, author name, etc., can be used to enhance the prediction.
Item-item relation: The item-item relation is also an important factor in understanding user intention, which is not incorporated in the proposed approach.
6 Conclusion
This work aims at modeling the user intention needed for a research paper recommendation system. The first step is to categorize a paper into a topic. A hybrid topic model (HTM) comprising LDA (Latent Dirichlet Allocation) and Word2Vec is proposed. HTM decides the topic of a paper considering the probability distribution of words among several topics and the correlation among words. The results establish that HTM performs better categorization than either the LDA or Word2Vec models alone. In the second step, a sequential LSTM model is employed to predict the intention of a user in terms of her topics of interest. The considered LSTM model predicts the intention of the user from the history of interaction, with time intervals as a contextual feature. This essentially captures both the long-term and short-term interests of users, and thus the dynamic nature of users' intentions. The experimental results substantiate the efficacy of the proposed approach. In future, this work can be extended to explore more features of users' preferences and a recommendation technique using multi-criteria preference analysis.
References
- Handling skewed results in news recommendations by focused analysis of semantic user profiles. In 2014 International Conference on Reliability Optimization and Information Technology (ICROIT), pp. 74–79. Cited by: §1.
- A Novel Short-term and Long-term User Modelling Technique for a Research Paper Recommender System. In KDIR, pp. 255–262. Cited by: §1, Table 2, §2.
- A research paper recommender system using a Dynamic Normalized Tree of Concepts model for user modelling. In 2017 11th International Conference on Research Challenges in Information Science (RCIS), pp. 200–210. Cited by: §1, Table 2, §2.
- Building rich user profile based on intentional perspective. Procedia Computer Science 73, pp. 342–349. Cited by: §1.
- Modeling sequences of user actions for statistical goal recognition. User Modeling and User-Adapted Interaction 22 (3), pp. 281–311. Cited by: §1.
- A paper recommendation system based on user’s research interests. In 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), pp. 911–915. Cited by: Table 2, §2.
- A Paper Recommendation System Based on User Interest and Citations. In 2019 1st International Informatics and Software Engineering Conference (UBMYK), pp. 1–5. Cited by: Table 2, §2.
- Word2vec applied to recommendation: Hyperparameters matter. In Proceedings of the 12th ACM Conference on Recommender Systems, pp. 352–356. Cited by: §3.1.3.
- Understanding LSTM Networks. Note: http://colah.github.io/posts/2015-08-Understanding-LSTMs/ Online; accessed 25th May, 2021. Cited by: Figure 7.
- A survey on recommendation system. International Journal of Computer Applications 160 (7). Cited by: §1.
- Recommender system for academic literature with incremental dataset. Procedia Computer Science 89, pp. 483–491. Cited by: Table 2, §2.
- Deep learning-based sequential recommender systems: Concepts, algorithms, and evaluations. In International Conference on Web Engineering, pp. 574–577. Cited by: §3.2.
- A keyphrase-based paper recommender system. In Italian research conference on digital libraries, Vol. 249, pp. 14–25. Cited by: §1, Table 2, §2.
- A Context-awareness Based Dynamic Personalized Hierarchical Ontology Modeling Approach.. In FNC/MobiSPC, pp. 380–385. Cited by: §1.
- User profiles for personalized information access. In The adaptive web, Vol. 4321, pp. 54–89. Cited by: §1.
- An improved framework for tag-based academic information sharing and recommendation system. In Proceedings of the World Congress on Engineering, Vol. 2, pp. 1–6. Cited by: §1, Table 2, §2.
- Personalized research paper recommendation using deep learning. In Proceedings of the 25th conference on user modeling, adaptation and personalization, pp. 327–330. Cited by: §1, Table 2, §2.
- Dynamic user profiles for web personalisation. Expert Systems with Applications 42 (5), pp. 2547–2569. Cited by: §1.
- Fusing similarity models with markov chains for sparse sequential recommendation. In 2016 IEEE 16th International Conference on Data Mining (ICDM), pp. 191–200. Cited by: §3.2.
- UserProfile-based personalized research paper recommendation system. In 2012 8th International Conference on Computing and Networking Technology (INC, ICCIS and ICMIC), pp. 134–138. Cited by: §1, Table 2, §2.
- Personalized research paper recommendation system using keyword extraction based on userprofile. Journal of Convergence Information Technology 8 (16), pp. 106. Cited by: §1, Table 2, §2.
- When recurrent neural networks meet the neighborhood for session-based recommendation. In Proceedings of the Eleventh ACM Conference on Recommender Systems, pp. 306–310. Cited by: §3.2.
- Latent Dirichlet Allocation (LDA) and Topic modeling: models, applications, a survey. Multimedia Tools and Applications 78 (11), pp. 15169–15211. Cited by: §3.1.
- User profiling trends, techniques and applications. arXiv preprint arXiv:1503.07474. Cited by: §1.
- Introduction to keras. In Deep learning with Python, pp. 97–111. Cited by: §3.2.2.
- Activity sequence modelling and dynamic clustering for personalized e-learning. User Modeling and User-Adapted Interaction 21 (1), pp. 51–97. Cited by: §1.
- Active Learning with Multi-Label SVM Classification. In IJCAI, pp. 1479–1485. Cited by: §4.5.
- Word2Vec (skip-gram model). Note: https://towardsdatascience.com/word2vec-skip-gram-model-part-1-intuition-78614e4d6e0b Online; accessed 25th May, 2021. Cited by: Figure 4.
- Understanding user behaviour through action sequences: from the usual to the unusual. IEEE transactions on visualization and computer graphics 25 (9), pp. 2838–2852. Cited by: §1.
- Survey on content based recommendation system. Int. J. Comput. Sci. Inf. Technol 7 (1), pp. 281–284. Cited by: §1.
- word2vec parameter learning explained. arXiv preprint arXiv:1411.2738. Cited by: §3.1.
- How well do distributional models capture different types of semantic knowledge? In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), Vol. 2, pp. 726–730. Cited by: §3.1.3.
- A multi-agent framework for context-aware dynamic user profiling for web personalization. In Software Engineering, pp. 1–16. Cited by: §1.
- Research on personalized hybrid recommendation system. In 2017 International Conference on Computer, Information and Telecommunication Systems (CITS), pp. 133–137. Cited by: §1.
- Scholarly paper recommendation via user’s recent research interests. In Proceedings of the 10th annual joint conference on Digital libraries, pp. 29–38. Cited by: §1, Table 2, §2.
- A hybrid approach for article recommendation in research social networks. Journal of Information Science 44 (5), pp. 696–711. Cited by: Table 2, §2.
- Full-text or abstract? Examining topic coherence scores using latent dirichlet allocation. In 2017 IEEE International Conference on Data Science and Advanced Analytics (DSAA), pp. 165–174. Cited by: §3.1.2.
- Review of Probability Distributions for Modeling Count Data. arXiv preprint arXiv:2001.04343. Cited by: item 1, item 2, item 4.
- HAR-SI: A novel hybrid article recommendation approach integrating with social information in scientific social network. Knowledge-Based Systems 148, pp. 85–99. Cited by: §1, Table 2, §2.
- Modeling user activity patterns for next-place prediction. IEEE Systems Journal 11 (2), pp. 1060–1071. Cited by: §1.
- What to Do Next: Modeling User Behaviors by Time-LSTM. In IJCAI, Vol. 17, pp. 3602–3608. Cited by: §1.