RaKUn: Rank-based Keyword extraction via Unsupervised learning and Meta vertex aggregation

07/15/2019 ∙ by Blaž Škrlj, et al. ∙ Jozef Stefan Institute

Keyword extraction is used for summarizing the content of a document and supporting efficient document retrieval, and as such is an indispensable part of modern text-based systems. We explore how load centrality, a graph-theoretic measure applied to graphs derived from a given text, can be used to efficiently identify and rank keywords. Introducing meta vertices (aggregates of existing vertices) and systematic redundancy filters, the proposed method performs on par with the state-of-the-art for the keyword extraction task on 14 diverse datasets. The proposed method is unsupervised, interpretable and can also be used for document visualization.

1 Introduction and related work

Keywords are terms (i.e., expressions) that best describe the subject of a document [2]. A good keyword effectively summarizes the content of the document and allows it to be efficiently retrieved when needed. Traditionally, keyword assignment was a manual task, but with the emergence of large amounts of textual data, automatic keyword extraction methods have become indispensable. Despite considerable effort from the research community, state-of-the-art keyword extraction algorithms leave much to be desired, and their performance is still lower than on many other core NLP tasks [13]. The first keyword extraction methods mostly followed a supervised approach [14, 24, 31]: they first extract keyword features and then train a classifier on a gold standard dataset. For example, KEA [31], a state-of-the-art supervised keyword extraction algorithm, is based on the Naive Bayes machine learning algorithm. While these methods offer quite good performance, they rely on an annotated gold standard dataset and require a (relatively) long training process.

In contrast, unsupervised approaches need no training and can be applied directly, without relying on a gold standard document collection. They can be further divided into statistical and graph-based methods. The former, such as YAKE [7, 6], KP-MINER [10] and RAKE [25], use statistical characteristics of the texts to capture keywords, while the latter, such as Topic Rank [3], TextRank [22], Topical PageRank [29] and Single Rank [30], build graphs to rank words based on their position in the graph. Among statistical approaches, the state-of-the-art keyword extraction algorithm is YAKE [7, 6], which is also one of the best performing keyword extraction algorithms overall; it defines a set of five features capturing keyword characteristics, which are heuristically combined to assign a single score to every keyword. Among graph-based approaches, Topic Rank [3] can be considered state-of-the-art: candidate keywords are clustered into topics and used as vertices in the final graph, which is used for keyword extraction. Next, a graph-based ranking model is applied to assign a significance score to each topic, and keywords are generated by selecting a candidate from each of the top-ranked topics. Network-based methodology has also been successfully applied to the task of topic extraction [28].

The method that we propose in this paper, RaKUn, is a graph-based keyword extraction method. We exploit some of the ideas from the area of graph aggregation-based learning, where, for example, graph convolutional neural networks and similar approaches were shown to yield high-quality vertex representations by aggregating their neighborhoods' feature space [5]. This work implements similar ideas (albeit not in a neural network setting), aggregating redundant information into meta vertices in a comparable manner. Similar efforts have proven useful for hierarchical subnetwork aggregation in sensor networks [8] and in biological use cases of simulating large proteins [9].

The main contributions of this paper are as follows. The notion of load centrality has, to our knowledge, not yet been sufficiently exploited for keyword extraction. We show that this fast measure offers performance competitive with other widely used centralities, such as the PageRank centrality (used in [22]). To our knowledge, this work is the first to introduce the notion of meta vertices with the aim of aggregating similar vertices, following ideas similar to those of the statistical method YAKE [7], which is considered state-of-the-art for keyword extraction. Next, as part of the proposed RaKUn algorithm, we extend the extraction from unigrams to bigram and trigram keywords, based on load centrality scores computed for the considered tokens. Last but not least, we demonstrate how arbitrary textual corpora can be transformed into weighted graphs whilst maintaining global sequential information, offering the opportunity to exploit potential context not naturally present in statistical methods.

The paper is structured as follows. We first present the text to graph transformation approach (Section 2), followed by the introduction of the RaKUn keyword extractor (Section 3). We continue with qualitative evaluation (Section 4) and quantitative evaluation (Section 5), before concluding the paper in Section 6.

2 Transforming texts to graphs

We first discuss how the texts are transformed to graphs, on which RaKUn operates. Next, we formally state the problem of keyword extraction and discuss its relation to graph centrality metrics.

2.1 Representing text

In this work we consider directed graphs. Let G = (V, E) represent a graph comprised of a set of vertices V and a set of edges E ⊆ V × V, which are ordered pairs. Further, each edge can have a real-valued weight assigned. Let D represent a document comprised of tokens t_1, …, t_n. The order in which tokens appear in the text is known, thus D is a totally ordered set. A potential way of constructing a graph from a document is by simply observing word co-occurrences: when two words co-occur, they form an edge. However, such approaches do not take the sequential nature of the words into account, meaning that the order is lost. We attempt to take this aspect into account as follows. The given corpus is traversed, and each element t_i, together with its successor t_{i+1}, forms a directed edge (t_i, t_{i+1}) ∈ E. Finally, such edges are weighted according to the number of times they appear in the corpus. The graph constructed after traversing a given corpus thus consists of all local neighborhoods (of order one), merged into a single joint structure. Global contextual information is potentially kept intact (via the weights), even though it needs to be detected via network analysis, as proposed next.
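The construction described above can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' implementation; the function name is ours:

```python
from collections import Counter

def build_corpus_graph(tokens):
    """Build a directed, weighted graph from an ordered token sequence.

    Each token and its immediate successor form a directed edge; the
    edge weight counts how often that ordered pair appears in the
    corpus. Returns a mapping {(source, target): weight}.
    """
    edges = Counter()
    for current, successor in zip(tokens, tokens[1:]):
        edges[(current, successor)] += 1
    return dict(edges)

doc = "graph based keyword extraction builds a graph based ranking".split()
graph = build_corpus_graph(doc)
# The ordered pair ("graph", "based") occurs twice, so its weight is 2.
```

Note that, unlike plain co-occurrence graphs, the edge direction here preserves the token order, which is exactly the property the paragraph above motivates.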

2.2 Improving graph quality by meta vertex construction

A naïve approach to constructing a graph, as discussed in the previous section, commonly yields noisy graphs, rendering learning tasks harder. Therefore, we next discuss the approaches we employ to reduce both the computational and the spatial complexity of constructing the graph, as well as to increase its quality for the given downstream task.

First, we consider two heuristics which reduce the complexity of the graph constructed for keyword extraction: token length filtering (while traversing the document, only tokens whose length exceeds a minimal-length threshold are considered) and lemmatization (tokens can be lemmatized, offering spatial benefits and avoiding redundant vertices in the final graph). The two modifications yield a potentially "simpler" graph, which is more suitable and faster to mine.

Even if the optional lemmatization step is applied, one can aim at further reducing the graph complexity by merging similar vertices. This step is called meta vertex construction. The motivation is that even distinct lemmas can map to the same keyword (e.g., mechanic and mechanical; normal and abnormal). This step also captures spelling errors (similar vertices that will not be handled by lemmatization), spelling differences (e.g., British vs. American English), non-standard writing (e.g., in Twitter data), mistakes in lemmatization, and cases where the lemmatization step is unavailable or omitted.

Figure 1: Meta vertex construction. Sets of highlighted vertices are merged into a single vertex. The resulting graph has fewer vertices, as well as fewer edges.

The meta vertex construction step works as follows. Let V represent the set of vertices, as defined above. A meta vertex M is comprised of a set of vertices that are elements of V, i.e., M ⊆ V. Let M_i denote the i-th meta vertex. We construct each M_i so that, for each vertex u ∈ M_i, u's initial edges (prior to merging it into a meta vertex) are rewired to the newly added M_i. Note that such edges connect to vertices which are not a part of M_i. Thus, both the number of vertices and the number of edges are reduced substantially. This feature is implemented via the following procedure:

  1. Meta vertex candidate identification. Edit distance and word length difference are used to determine whether two words should be merged into a meta vertex (the more expensive edit distance is computed only if the cheaper word length difference threshold is met).

  2. Meta vertex creation. As the common identifier we use the stemmed version of the original vertices; if there is more than one resulting stem, we select, from the identified candidates, the vertex with the highest centrality value in the graph, and its stemmed version is introduced as a new vertex (the meta vertex).

  3. The edges of the words entailed in the meta vertex are next rewired to the meta vertex.

  4. The two original words are removed from the graph.

  5. The procedure is repeated for all candidate pairs.

A schematic representation of meta vertex construction is shown in Figure 1. The yellow and blue groups of vertices each form a meta vertex; the resulting (right) graph is thus substantially reduced, both with respect to the number of vertices and the number of edges.
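Step 1 of the procedure above (candidate identification) can be sketched as follows. This is a minimal illustration, not the authors' code; the function names are ours, and the default thresholds mirror the defaults later reported in Section 5.2 (word length difference 3, edit distance 2):

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def meta_vertex_candidates(words, len_threshold=3, edit_threshold=2):
    """Return word pairs eligible for merging into a meta vertex.

    The cheap length check gates the more expensive edit distance,
    mirroring step 1 of the procedure above.
    """
    pairs = []
    for i, u in enumerate(words):
        for v in words[i + 1:]:
            if abs(len(u) - len(v)) <= len_threshold and \
               edit_distance(u, v) <= edit_threshold:
                pairs.append((u, v))
    return pairs
```

For example, `meta_vertex_candidates(["mechanic", "mechanical", "graph"])` pairs "mechanic" with "mechanical" (length difference 2, edit distance 2) but rejects "graph", echoing the merging example in the text.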

3 Keyword identification

Up to this point, we discussed how the graph used for keyword extraction is constructed. In this work, we exploit the notion of load centrality, a fast measure for estimating the importance of vertices in graphs. This metric can be defined as follows.

3.0.1 Load centrality

The load centrality of a vertex v belongs to the family of centralities defined via the number of shortest paths that pass through a given vertex, i.e., c(v) = Σ_{s ≠ v ≠ t} σ_{s,t}(v) / σ_{s,t}, where σ_{s,t}(v) represents the number of shortest paths from vertex s to vertex t that pass through v, and σ_{s,t} the number of all shortest paths between s and t (see [4, 11]). The considered load centrality measure is subtly different from the better known betweenness centrality; specifically, it is assumed that each vertex sends a package to every other vertex to which it is connected, with routing based on a priority system: given an input of flow arriving at vertex v with destination t, v divides the flow equally among all of its neighbors at minimum shortest-path distance to the target. The total flow passing through a given v via this process is defined as v's load. Load centrality thus maps the set of vertices to real values. For a detailed description and computational complexity analysis, see [4]. Intuitively, the vertices with the highest load centrality represent key vertices in a given network. In this work, we assume that such vertices are good descriptors of the input document (i.e., keywords); ranking the vertices thus yields a priority list of (potential) keywords.
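In practice, networkx exposes this measure directly as `nx.load_centrality`. As a self-contained illustration of the shortest-path counting that both load and betweenness centrality build on, the following is a pure-Python sketch of Brandes' betweenness algorithm (a close relative of load centrality, not RaKUn's exact measure):

```python
from collections import deque

def betweenness(graph):
    """Brandes' betweenness centrality for an unweighted directed graph.

    `graph` maps each vertex to a list of its successors. Load
    centrality (used by RaKUn) is a close relative: both accumulate
    shortest-path flow through each vertex.
    """
    centrality = {v: 0.0 for v in graph}
    for s in graph:
        # Single-source BFS, counting shortest paths (sigma).
        stack, preds = [], {v: [] for v in graph}
        sigma = {v: 0 for v in graph}; sigma[s] = 1
        dist = {v: -1 for v in graph}; dist[s] = 0
        queue = deque([s])
        while queue:
            v = queue.popleft()
            stack.append(v)
            for w in graph[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Back-propagate pair dependencies in reverse BFS order.
        delta = {v: 0.0 for v in graph}
        while stack:
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                centrality[w] += delta[w]
    return centrality
```

On the directed path a → b → c, only b lies on a shortest path between two other vertices (a to c), so b receives centrality 1 while a and c receive 0.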

3.0.2 Formulating the RaKUn algorithm

We next discuss how the considered centrality is used as part of the whole keyword extraction algorithm RaKUn, summarized in Algorithm 1.

Data: Document D, consisting of tokens t_1, …, t_n
Parameters: General: number of keywords k, minimal token length l_min; meta vertex parameters: edit distance threshold d_edit, word length difference threshold d_len; multi-word keyword parameters: path length p, 2-gram frequency threshold θ
Result: A set of keywords K

 1  corpusGraph ← EmptyGraph;                            ▷ Initialization
 2  for i ← 1 to n − 1 do
 3      edge ← (t_i, t_{i+1});
 4      if edge not in corpusGraph and |t_i| ≥ l_min and |t_{i+1}| ≥ l_min then
 5          add edge to corpusGraph;                     ▷ Graph construction
 6      end if
 7      updateEdgeWeight(corpusGraph, edge);             ▷ Weight update
 8  end for
 9  corpusGraph ← generateMetaVertices(corpusGraph, d_edit, d_len);
10  tokenRanks ← loadCentrality(corpusGraph);            ▷ Initial token ranks
11  scoredKeywords ← generateKeywords(p, θ, tokenRanks); ▷ Keyword search
12  K ← scoredKeywords[:k];
13  return K

Algorithm 1: The RaKUn algorithm.

The algorithm consists of three main steps, described next. First, a graph is constructed from a given ordered set of tokens (e.g., a document) (lines 1 to 8). The resulting graph is commonly very sparse, as most words rarely co-occur. Second, meta vertices are generated (line 9); the result of this step is a smaller, denser graph, in which both the number of vertices and the number of edges is lower. Once the graph is constructed, load centrality (line 10) is computed for each vertex. Note that if only the top-ranked vertices by centrality were considered, only single-term keywords would emerge. As seen in line 11, to extend the selection to 2- and 3-grams, the following procedure is proposed:

2-gram keywords.

Keywords comprised of two terms are constructed as follows. First, pairs of first-order keywords (all tokens) are counted. If the support (i.e., the number of co-occurrences) is higher than the frequency threshold (line 11 in Algorithm 1), the token pair is considered a potential 2-gram keyword. The load centralities of the two tokens are averaged, i.e., the pair's score is (c(t_1) + c(t_2))/2, and the obtained keywords are considered for final selection along with the computed ranks.

3-gram keywords.

For the construction of 3-gram keywords, we follow a similar idea: the 2-gram keywords obtained in the previous step are further explored. For each candidate 2-gram keyword, we consider two extension scenarios. First, extending the 2-gram from the left side: the in-neighborhood of the leftmost token is considered as a potential extension of a given keyword, and the ranks of such candidates are computed by averaging the centrality scores in the same manner as in the 2-gram case. Second, extending the 2-gram from the right side: here, all outgoing connections of the rightmost vertex are considered as potential extensions. The candidate keywords are ranked, as before, by averaging the load centralities of the three tokens, i.e., (c(t_1) + c(t_2) + c(t_3))/3.
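The 2-gram step can be sketched as follows (the names are illustrative and the threshold handling reflects our reading of the description, not the authors' exact code):

```python
from collections import Counter

def bigram_keywords(tokens, ranks, freq_threshold=2):
    """Score candidate 2-gram keywords.

    `ranks` maps each token to its (load) centrality score. A pair
    qualifies only if it co-occurs at least `freq_threshold` times;
    its score is the average of its tokens' centralities.
    """
    counts = Counter(zip(tokens, tokens[1:]))
    return {
        pair: (ranks[pair[0]] + ranks[pair[1]]) / 2
        for pair, support in counts.items()
        if support >= freq_threshold
    }

tokens = "keyword extraction helps keyword extraction research".split()
ranks = {"keyword": 0.4, "extraction": 0.2, "helps": 0.1, "research": 0.05}
scores = bigram_keywords(tokens, ranks)
# Only ("keyword", "extraction") co-occurs twice; its score is the
# average of the two centralities, (0.4 + 0.2) / 2.
```

The 3-gram step would extend each surviving pair with an in-neighbor on the left or an out-neighbor on the right and average three centralities instead of two.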

Having obtained a set of (keyword, score) pairs, we finally sort the set by score in descending order and take the top k keywords as the result. We next discuss the evaluation of the proposed algorithm.

4 Qualitative evaluation

Figure 2: Keyword visualization. Red dots represent keywords, other dots represent the remainder of the corpus graph.
Figure 3: Keyword visualization. A close-up view shows some examples of keywords and their location in the corpus graph. The keywords are mostly located in the central part of the graph.

RaKUn can also be used for the visualization of keywords in a given document or document corpus. A visualization of extracted keywords is applied to an example from wiki20 [21] (for the dataset description see Section 5.1), where we visualize both the global corpus graph and a local (document) view in which keywords are emphasized; see Figures 2 and 3, respectively. It can be observed that the global graph's topology is far from uniform; even though we did not perform any tests of scale-freeness, we believe the constructed graphs exhibit distinct topologies, in which keywords play prominent roles.

5 Quantitative evaluation

This section discusses the experimental setting used to validate the proposed RaKUn approach against state-of-the-art baselines. We first describe the datasets, and continue with the presentation of the experimental setting and results.

5.1 Datasets

For the RaKUn evaluation, we used 14 gold standard datasets from the list of [7, 6], from which we selected the datasets in English. Detailed dataset descriptions and statistics can be found in Table 1, while the full statistics and files for download can be found online (https://github.com/LIAAD/KeywordExtractor-Datasets). Most datasets are from the domain of computer science or contain multiple domains. They are very diverse in terms of the number of documents (ranging from wiki20 with 20 documents to Inspec with 2,000 documents), the average number of gold standard keywords per document (from 5.07 in kdd to 48.92 in 500N-KPCrowd-v1.1), and the average length of the documents (from 75.97 in kdd to 8,332.34 in SemEval2010).

Dataset Desc. No. docs Avg. keywords Avg. doc length
500N-KPCrowd-v1.1 [18] Broadcast news transcriptions 500 48.92 408.33
Inspec [15] Scientific journal papers from Computer Science collected between 1998 and 2002 2000 14.62 128.20
Nguyen2007 [23] Scientific conference papers 209 11.33 5201.09
PubMed Full-text papers collected from PubMed Central 500 15.24 3992.78
Schutz2008[26] Full-text papers collected from PubMed Central 1231 44.69 3901.31
SemEval2010 [17] Scientific papers from the ACM Digital Library 243 16.47 8332.34
SemEval2017 [1] 500 paragraphs selected from 500 ScienceDirect journal articles, evenly distributed among the domains of Computer Science, Material Sciences and Physics 500 18.19 178.22
citeulike180 [19] Full-text papers from CiteULike.org 180 18.42 4796.08
fao30 [20] Agricultural documents from two datasets based on Food and Agriculture Organization (FAO) of the UN 30 33.23 4777.70
fao780 [20] Agricultural documents from two datasets based on Food and Agriculture Organization (FAO) of the UN 779 8.97 4971.79
kdd [12] Abstracts from the ACM Conference on Knowledge Discovery and Data Mining (KDD) during 2004-2014 755 5.07 75.97
theses100 Full master and Ph.D. theses from the University of Waikato 100 7.67 4728.86
wiki20 [21] Computer science technical research reports 20 36.50 6177.65
www [12] Abstracts of WWW conference papers from 2004-2014 1330 5.80 84.08
Table 1: Selection of keyword extraction datasets in the English language.

5.2 Experimental setting

We adopted the same evaluation procedure as used for the series of results recently introduced by the YAKE authors [6]. (We attempted to reproduce the YAKE evaluation procedure based on their experimental setup description, and thank the authors for additional explanations regarding the evaluation; for comparison of results we refer to their online repository https://github.com/LIAAD/yake [7].) Five-fold cross validation was used to determine the overall performance, for which we measured Precision, Recall and F1 score, the latter being reported in Table 2. (The complete results and the code are available at https://github.com/SkBlaz/rakun.) Keywords were stemmed prior to evaluation, which is standard procedure, as suggested by the authors of YAKE. As the number of keywords in a gold standard document is not necessarily equal to the number of extracted keywords (in our experiments k = 10), when computing recall we divide the number of correctly extracted keywords by k whenever the number of keywords in the gold standard is higher than k.
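Our reading of this recall adjustment can be made concrete with a small sketch (a hypothetical helper, not the official evaluation script):

```python
def capped_f1(extracted, gold, k=10):
    """Precision/recall/F1 where recall's denominator is capped at k.

    Both keyword lists are assumed to be stemmed already; `k` is the
    number of keywords the extractor returns.
    """
    hits = len(set(extracted[:k]) & set(gold))
    precision = hits / k
    recall = hits / min(len(gold), k)  # cap the denominator at k
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For instance, with k = 10 extracted keywords, 5 of which are correct, against a gold standard of 20 keywords, precision and capped recall are both 0.5, giving an F1 of 0.5; without the cap, recall would be 0.25.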

Selecting the default configuration. First, we used a dedicated run to determine the default parameters. The cross validation was performed as follows. For each train-test dataset split, we kept the documents in the test fold intact, whilst performing a grid search on the train part to find the best parametrization. Finally, the selected configuration was used to extract keywords on the unseen test set. For each train-test split, we thus obtained the numbers of true and false positives, as well as true and false negatives, which were summed up and, after all folds were considered, used to obtain the final F1 scores that served for default parameter selection. The grid search was conducted over the following parameters: Num keywords, Num tokens (the number of tokens a keyword can consist of), Count threshold (minimum support used to determine potential bigram candidates), Word length difference threshold (maximum difference in word length used to determine whether a given pair of words shall be aggregated), Edit distance threshold (maximum edit distance allowed to consider a given pair of words for aggregation), and Lemmatization: [yes, no].

While one could use the described grid-search fine-tuning procedure to select the best setting for each individual dataset, we observed that in nearly all cases the best setting was the same. We therefore selected it as the default, which can also be used on new, unlabeled data. The default parameter setting was as follows: the number of tokens was set to 1 (hence the Count threshold was not needed, as only unigrams were considered); for meta vertex construction, the Word length difference threshold was set to 3 and the Edit distance threshold to 2; words were initially lemmatized. We next report the results using these selected parameters (the same across all datasets), by which we also test the general usefulness of the approach.

5.3 Results

The results are presented in Table 2, where we report the F1 score obtained with the default parameter setting of RaKUn, together with the results from related work, as reported in the GitHub table of YAKE [7] (https://github.com/LIAAD/yake/blob/master/docs/YAKEvsBaselines.jpg, accessed June 11, 2019).

Dataset RaKUn YAKE Single Rank KEA KP-MINER Text Rank Topic Rank Topical PageRank
500N-KPCrowd-v1.1 0.428 0.173 0.157 0.159 0.093 0.111 0.172 0.158
Inspec 0.054 0.316 0.378 0.150 0.047 0.098 0.289 0.361
Nguyen2007 0.096 0.256 0.158 0.221 0.314 0.167 0.173 0.148
PubMed 0.075 0.106 0.039 0.216 0.114 0.071 0.085 0.052
Schutz2008 0.418 0.196 0.086 0.182 0.230 0.118 0.258 0.123
SemEval2010 0.091 0.211 0.129 0.215 0.261 0.149 0.195 0.125
SemEval2017 0.112 0.329 0.449 0.201 0.071 0.125 0.332 0.443
citeulike180 0.250 0.256 0.066 0.317 0.240 0.112 0.156 0.072
fao30 0.233 0.184 0.066 0.139 0.183 0.077 0.154 0.107
fao780 0.094 0.187 0.085 0.114 0.174 0.083 0.137 0.108
kdd 0.046 0.156 0.085 0.063 0.036 0.050 0.055 0.089
theses100 0.069 0.111 0.060 0.104 0.158 0.058 0.114 0.083
wiki20 0.190 0.162 0.038 0.134 0.156 0.074 0.106 0.059
www 0.060 0.172 0.097 0.072 0.037 0.059 0.067 0.101
#Wins 4 3 2 2 3 0 0 0
Table 2: Performance comparison with state-of-the-art approaches.

We first observe that, on this selection of datasets, the proposed RaKUn wins more often than any other method. We also see that it performs notably better on some of the datasets, whereas on the remainder it performs worse than state-of-the-art approaches. Such results demonstrate that the proposed method finds keywords differently, indicating that load centrality, combined with meta vertices, represents a promising research avenue. The datasets where the proposed method outperforms the current state-of-the-art are 500N-KPCrowd-v1.1, Schutz2008, fao30 and wiki20. In addition, RaKUn also achieves competitive results on citeulike180. A look at the gold standard keywords in these datasets reveals that they contain many single-word units, which is why the default configuration (which returns unigrams only) was able to perform so well.

Four of these five datasets (500N-KPCrowd-v1.1, Schutz2008, fao30, wiki20) are also the ones with the highest average number of keywords per document, with at least 33.23 keywords per document, while the fifth dataset (citeulike180) also has a relatively large value (18.42). Similarly, four of the five well-performing datasets (Schutz2008, fao30, citeulike180, wiki20) include long documents (more than 3,900 words), the exception being 500N-KPCrowd-v1.1. For details, see Table 1. We observe that the proposed RaKUn outperforms the majority of other competitive graph-based methods. For example, the most similar variants, Topical PageRank and TextRank, do not perform as well on the majority of the considered datasets. Furthermore, RaKUn also outperforms KEA, a supervised keyword learner (e.g., by a very large margin on the 500N-KPCrowd-v1.1 and Schutz2008 datasets), indicating that unsupervised learning from the graph's structure offers a more robust keyword extraction method than learning a classifier directly.

6 Conclusions and further work

In this work we proposed RaKUn, a novel unsupervised keyword extraction algorithm which exploits the efficient computation of load centrality, combined with the introduction of meta vertices, which notably reduce corpus graph sizes. The method is fast, and performs well compared to state-of-the-art approaches such as YAKE and graph-based keyword extractors. In further work, we will test the method on other languages. We also believe that additional semantic background knowledge could be used to prune the graph's structure even further, and potentially to introduce keywords that are not even present in the text (cf. [27]). The proposed method does not attempt to exploit meso-scale graph structure, such as convex skeletons or communities, which are known to play prominent roles in real-world networks and could allow for vertex aggregation based on additional graph properties. We believe the proposed method could also be extended using Ricci flows [16] on weighted graphs.

6.0.1 Acknowledgements

The work was supported by the Slovenian Research Agency through a young researcher grant [BŠ], core research programme (P2-0103), and projects Semantic Data Mining for Linked Open Data (N2-0078) and Terminology and knowledge frames across languages (J6-9372). This work was supported also by the EU Horizon 2020 research and innovation programme, Grant No. 825153, EMBEDDIA (Cross-Lingual Embeddings for Less-Represented Languages in European News Media).

References

  • [1] Augenstein, I., Das, M., Riedel, S., Vikraman, L., McCallum, A.: Semeval 2017 task 10: Scienceie - extracting keyphrases and relations from scientific publications. CoRR abs/1704.02853 (2017)
  • [2] Beliga, S., Meštrović, A., Martinčić-Ipšić, S.: An overview of graph-based keyword extraction methods and approaches. Journal of information and organizational sciences 39(1), 1–20 (2015)
  • [3] Bougouin, A., Boudin, F., Daille, B.: Topicrank: Graph-based topic ranking for keyphrase extraction. In: International Joint Conference on Natural Language Processing (IJCNLP). pp. 543–551 (2013)
  • [4] Brandes, U.: On variants of shortest-path betweenness centrality and their generic computation. Social Networks 30(2), 136–145 (2008)
  • [5] Cai, H., Zheng, V.W., Chang, K.C.C.: A comprehensive survey of graph embedding: Problems, techniques, and applications. IEEE Transactions on Knowledge and Data Engineering 30(9), 1616–1637 (2018)
  • [6] Campos, R., Mangaravite, V., Pasquali, A., Jorge, A.M., Nunes, C., Jatowt, A.: A text feature based automatic keyword extraction method for single documents. In: Pasi, G., Piwowarski, B., Azzopardi, L., Hanbury, A. (eds.) Advances in Information Retrieval. pp. 684–691. Springer International Publishing, Cham (2018)
  • [7] Campos, R., Mangaravite, V., Pasquali, A., Jorge, A.M., Nunes, C., Jatowt, A.: Yake! collection-independent automatic keyword extractor. In: European Conference on Information Retrieval. pp. 806–810. Springer (2018)
  • [8] Chan, H., Perrig, A., Song, D.: Secure hierarchical in-network aggregation in sensor networks. In: Proceedings of the 13th ACM conference on Computer and communications security. pp. 278–287. ACM (2006)
  • [9] Doruker, P., Jernigan, R.L., Bahar, I.: Dynamics of large proteins through hierarchical levels of coarse-grained structures. Journal of computational chemistry 23(1), 119–127 (2002)
  • [10] El-Beltagy, S.R., Rafea, A.: Kp-miner: A keyphrase extraction system for english and arabic documents. Information Systems 34(1), 132–144 (2009)
  • [11] Goh, K.I., Kahng, B., Kim, D.: Universal behavior of load distribution in scale-free networks. Phys. Rev. Lett. 87, 278701 (Dec 2001)
  • [12] Gollapalli, S.D., Caragea, C.: Extracting keyphrases from research papers using citation networks. In: Twenty-Eighth AAAI Conference on Artificial Intelligence (2014)
  • [13] Hasan, K.S., Ng, V.: Automatic keyphrase extraction: A survey of the state of the art. In: Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). vol. 1, pp. 1262–1273 (2014)
  • [14] Hulth, A.: Improved automatic keyword extraction given more linguistic knowledge. In: Proceedings of the 2003 conference on Empirical methods in natural language processing. pp. 216–223. Association for Computational Linguistics (2003)
  • [15] Hulth, A.: Improved automatic keyword extraction given more linguistic knowledge. In: Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing. pp. 216–223. EMNLP ’03 (2003)
  • [16] Jin, M., Kim, J., Gu, X.D.: Discrete surface ricci flow: Theory and applications. In: IMA International Conference on Mathematics of Surfaces. pp. 209–232. Springer (2007)
  • [17] Kim, S.N., Medelyan, O., Kan, M.Y., Baldwin, T.: Semeval-2010 task 5: Automatic keyphrase extraction from scientific articles. In: Proceedings of the 5th International Workshop on Semantic Evaluation. pp. 21–26. SemEval ’10 (2010)
  • [18] Marujo, L., Viveiros, M., da Silva Neto, J.P.: Keyphrase cloud generation of broadcast news. CoRR abs/1306.4606 (2013)
  • [19] Medelyan, O., Frank, E., Witten, I.H.: Human-competitive tagging using automatic keyphrase extraction. In: Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 3 - Volume 3. pp. 1318–1327. EMNLP ’09 (2009)
  • [20] Medelyan, O., Witten, I.H.: Domain-independent automatic keyphrase indexing with small training sets. Journal of the American Society for Information Science and Technology 59(7), 1026–1040 (2008)
  • [21] Medelyan, O., Witten, I.H., Milne, D.: Topic indexing with wikipedia. In: Proceedings of the AAAI WikiAI workshop. vol. 1, pp. 19–24 (2008)
  • [22] Mihalcea, R., Tarau, P.: Textrank: Bringing order into text. In: Proceedings of the 2004 conference on empirical methods in natural language processing (2004)
  • [23] Nguyen, T.D., Kan, M.Y.: Keyphrase extraction in scientific publications. In: Goh, D.H.L., Cao, T.H., Sølvberg, I.T., Rasmussen, E. (eds.) Asian Digital Libraries. Looking Back 10 Years and Forging New Frontiers. pp. 317–326. Springer Berlin Heidelberg, Berlin, Heidelberg (2007)
  • [24] Nguyen, T.D., Luong, M.T.: Wingnus: Keyphrase extraction utilizing document logical structure. In: Proceedings of the 5th international workshop on semantic evaluation. pp. 166–169. Association for Computational Linguistics (2010)
  • [25] Rose, S., Engel, D., Cramer, N., Cowley, W.: Automatic keyword extraction from individual documents. Text mining: applications and theory pp. 1–20 (2010)
  • [26] Schutz, A.T., et al.: Keyphrase extraction from single documents in the open domain exploiting linguistic and statistical methods. Master’s thesis, National University of Ireland (2008)
  • [27] Škrlj, B., Kralj, J., Lavrač, N., Pollak, S.: Towards robust text classification with semantics-aware recurrent neural architecture. Machine Learning and Knowledge Extraction 1(2), 575–589 (2019)
  • [28] Spitz, A., Gertz, M.: Entity-centric topic extraction and exploration: A network-based approach. In: European Conference on Information Retrieval. pp. 3–15. Springer (2018)
  • [29] Sterckx, L., Demeester, T., Deleu, J., Develder, C.: Topical word importance for fast keyphrase extraction. In: Proceedings of the 24th International Conference on World Wide Web. pp. 121–122. ACM (2015)
  • [30] Wan, X., Xiao, J.: Single document keyphrase extraction using neighborhood knowledge. In: AAAI. vol. 8, pp. 855–860 (2008)
  • [31] Witten, I.H., Paynter, G.W., Frank, E., Gutwin, C., Nevill-Manning, C.G.: Kea: Practical automated keyphrase extraction. In: Design and Usability of Digital Libraries: Case Studies in the Asia Pacific, pp. 129–152. IGI Global (2005)