Document Similarity for Texts of Varying Lengths via Hidden Topics

03/26/2019 · by Hongyu Gong, et al. · IBM · University of Illinois at Urbana-Champaign

Measuring similarity between texts is an important task for several applications. Available approaches to measure document similarity are inadequate for document pairs that have non-comparable lengths, such as a long document and its summary. This is because of the lexical, contextual and the abstraction gaps between a long document of rich details and its concise summary of abstract information. In this paper, we present a document matching approach to bridge this gap, by comparing the texts in a common space of hidden topics. We evaluate the matching algorithm on two matching tasks and find that it consistently and widely outperforms strong baselines. We also highlight the benefits of incorporating domain knowledge to text matching.




1 Introduction

Measuring the similarity between documents is of key importance in several natural language processing applications, including information retrieval Salton and Buckley (1988), book recommendation Gopalan et al. (2014), news categorization Ontrup and Ritter (2002) and essay scoring Landauer (2003). A range of document similarity approaches have been proposed and effectively used in recent applications, including Lai et al. (2015); Bordes et al. (2015). Central to these tasks is the assumption that the documents being compared are of comparable lengths.

Concept – Heredity: Inheritance and Variation of Traits. "All cells contain genetic information in the form of DNA molecules. Genes are regions in the DNA that contain the instructions that code for the formation of proteins."

Project – Pedigree Analysis: A Family Tree of Traits. Introduction: "Do you have the same hair color or eye color as your mother? When we look at members of a family it is easy to see that some physical characteristics or traits are shared. To start this project, you should draw a pedigree showing the different members of your family. Ideally you should include multiple people from at least three …" Materials and Equipment: "Paper, Pen, Lab notebook." Procedure: "Before starting this science project, you should …"

Table 1: A Sample Concept-Project Matching

Advances in natural language understanding techniques, such as text summarization and recommendation, have generated new requirements for comparing documents. For instance, summarization techniques (extractive and abstractive) can automatically convert a long document of several hundred words into a condensed text of only a few words while preserving the core meaning of the original text Kedzie and McKeown (2016). Conceivably, a related aspect of summarization is the task of bidirectional matching of a summary and a document or a set of documents, which is the focus of this study. The document similarity considered in this paper is between texts that differ significantly not only in length, but also in abstraction level (such as a definition of an abstract concept versus a detailed instance of that concept).

As an illustration, consider the task of matching a Concept with a Project as shown in Table 1. Here a Concept is a grade-level science curriculum item and represents the summary. A Project, listed in a collection of science projects, represents the document. Typically, projects are long texts, including an introduction, materials and procedures, whereas science concepts are significantly shorter in comparison having a title and a concise and abstract description. The concepts and projects are described in detail in Section 5.1. The matching task is to automatically suggest a hands-on project for a given concept in the curriculum, such that the project can help reinforce a learner’s basic understanding of the concept. Conversely, given a science project, one may need to identify the concept it covers by matching it to a listed concept in the curriculum. This would be conceivable in the context of an intelligent tutoring system.

Challenges to the matching task mentioned above include: 1) A mismatch in the relative lengths of the texts being compared – a long piece of text (henceforth termed the document) and a short piece of text (termed the summary) – gives rise to the vocabulary mismatch problem, where the document and the summary share few key terms. 2) The context mismatch problem arises because a document provides a reasonable amount of text from which to infer the contextual meaning of a key term, whereas a summary provides only limited context, which may or may not involve the same terms used in the document. These challenges render existing approaches to comparing documents – for instance, those that rely on document representations such as Doc2Vec Le and Mikolov (2014) – inadequate: the predominance of non-topic words in the document introduces noise into its representation, while the summary is relatively noise-free, so the two representations are not directly comparable.

Our approach to the matching problem is to allow a multi-view generalization of the document: multiple hidden topic vectors are used to establish a common ground that captures as much information of the document and the summary as possible, and eventually to score the relevance of the pair. We empirically validate our approach on two tasks, project-concept matching in grade-level science and scientific paper-summary matching, using both custom-made and publicly available datasets. The main contributions of this paper are:

  • We propose an embedding-based hidden topic model to extract topics and measure their importance in long documents.

  • We present a novel geometric approach to compare documents with differing modality (a long document to a short summary) and validate its performance relative to strong baselines.

  • We explore the use of domain-specific word embeddings for the matching task and show the explicit benefit of incorporating domain knowledge in the algorithm.

  • We make available the first dataset on project-concept matching in the science domain to help further research in this area. (Our code and data are available at: )

2 Related Work

Figure 1: Word geometry of (a) general embeddings and (b) science-domain embeddings. Two key words, "forces" and "matters", are shown in red and blue respectively: red words represent different senses of "forces", and blue words carry senses of "matters". In the general embeddings (a), "forces" mainly refers to "army" and "matters" to "issues", whereas in the science-domain embeddings (b), "forces" shows its sense of "gravity" and "matters" the sense of "solids".

Document similarity approaches quantify the degree of relatedness between two pieces of text of comparable lengths and thus enable matching between documents. Traditionally, statistical approaches (e.g., Metzler et al. (2007)) and vector-space-based methods (including the robust Latent Semantic Analysis (LSA) Dumais (2004)) have been used. More recently, neural network-based methods have been applied to document representation and comparison. These methods include averaged word embeddings Mikolov et al. (2013), Doc2Vec Le and Mikolov (2014), Skip-Thought vectors Kiros et al. (2015), recursive neural networks Socher et al. (2014), LSTM architectures Tai et al. (2015), and convolutional neural networks Blunsom et al. (2014).

Considering works that avoid using an explicit document representation for comparing documents, the state-of-the-art method is word mover’s distance (WMD), which relies on pre-trained word embeddings Kusner et al. (2015). Given these embeddings, WMD defines the distance between two documents as the best transport cost of moving all the words from one document to another within the space of word embeddings. The advantages of WMD are that it is free of hyper-parameters and achieves high retrieval accuracy on document classification tasks with documents of comparable lengths. However, it is computationally expensive for long documents as mentioned in Kusner et al. (2015).

Clearly, what is lacking in prior literature is a study of matching approaches for documents with widely different sizes. It is this gap in the literature that we expect to fill by way of this study.

Latent Variable Models. Latent variable models, including count-based and probabilistic models, have been studied in many previous works. Count-based models such as Latent Semantic Indexing (LSI) compare two documents based on their combined vocabulary Deerwester et al. (1990). When documents have highly mismatched vocabularies, such as those we study, relevant documents may be classified as irrelevant. Our model is built upon word embeddings, which are more robust to such vocabulary mismatch.

Probabilistic models such as Latent Dirichlet Allocation (LDA) define topics as distributions over words Blei et al. (2003). In our model, by contrast, topics are low-dimensional real-valued vectors (more details in Section 4.2).

Figure 2: The system for document-summary matching

3 Domain Knowledge

Domain information pertaining to specific areas of knowledge is made available in texts by the use of words with domain-specific meanings or senses. Consequently, domain knowledge has been shown to be critical in many NLP applications such as information extraction and multi-document summarization Cheung and Penn (2013a), spoken language understanding Chen et al. (2015), aspect extraction Chen et al. (2013) and summarization Cheung and Penn (2013b).

As will be described later, our distance metric for comparing a document and a summary relies on word embeddings. We show in this work that embeddings trained on a science-domain corpus lead to better performance than embeddings trained on a general corpus (WikiCorpus). To this end, we extract a science-domain sub-corpus from the WikiCorpus; the corpus extraction is detailed in Section 5.

To motivate the domain-specific behavior of polysemous words, we qualitatively explore how domain-specific embeddings differ from general embeddings on two polysemous science terms: forces and matters. Since the meaning of a word is dictated by its neighbors, for each set of word embeddings we plot the neighbors of these two terms in Figure 1, projected onto two dimensions using Locally Linear Embedding (LLE), which preserves word distances Roweis and Saul (2000). We then analyze the senses of the focus terms, forces and matters.

From Figure 1(a), we see that the general embedding of forces is close to army, soldiers, and allies, indicating that in a general domain it relates to violence and power. Shifting to Figure 1(b), the science embedding of the same term is closer to torque, gravity, and acceleration, implying that its science sense concerns physical interactions. Likewise, the general embedding of matters is surrounded by affairs and issues, whereas its science embedding is closer to particles and material, suggesting that it represents substances. We conclude that domain-specific embeddings (here, science) are capable of incorporating domain knowledge into word representations. We use this observation in our document-summary matching system, to which we turn next.

4 Model

Our model that performs the matching between document and summary is depicted in Figure 2. It is composed of three modules that perform preprocessing, document topic generation, and relevance measurement between a document and a summary. Each of these modules is discussed below.

4.1 Preprocessing

The preprocessing module tokenizes texts and removes stop words and prepositions. This step allows our system to focus on the content words without impacting the meaning of the original texts.
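As a sketch, this step can be implemented with plain Python. The paper does not publish its exact stop-word and preposition list, so the list below is an illustrative assumption:

```python
import re

# Illustrative stop-word/preposition list; the paper's exact list is not
# specified, so this set is an assumption for the sketch.
STOP_WORDS = {
    "a", "an", "the", "is", "are", "was", "were", "be", "been",
    "and", "or", "but", "not", "this", "that", "these", "those",
    # prepositions are removed as well, per Section 4.1
    "in", "on", "at", "of", "for", "to", "with", "by", "from",
}

def preprocess(text: str) -> list:
    """Tokenize, lowercase, and drop stop words and prepositions."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return [t for t in tokens if t not in STOP_WORDS]
```

For example, `preprocess("Genes are regions in the DNA that contain the instructions.")` keeps only the content words.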

4.2 Topic Generation from Documents

We assume that a document (a long text) is a structured collection of words, with the ‘structure’ brought about by the composition of topics. In this sense, the ‘structure’ is represented as a set of hidden topics. Thus, we assume that a document is generated from certain hidden “topics”, analogous to the modeling assumption in LDA. Unlike in LDA, however, the “topics” here are neither specific words nor distributions over words, but are essentially a set of vectors. In turn, this means that the words (represented as vectors) constituting the document can be generated from the hidden topic vectors.

Introducing some notation, the word vectors in a document are $\{w_1, \ldots, w_n\} \subset \mathbb{R}^d$, and the hidden topic vectors of the document are $\{h_1, \ldots, h_K\} \subset \mathbb{R}^d$, where $K \ll n$; $d = 300$ in our experiments.

Linear operations on word embeddings have been empirically shown to approximate their compositional properties (e.g., the embedding of a phrase is nearly the sum of the embeddings of its component words) Mikolov et al. (2013). This motivates a linear reconstruction of the words from the document's hidden topics that minimizes the reconstruction error. We stack the topic vectors as a topic matrix $H = [h_1, \ldots, h_K] \in \mathbb{R}^{d \times K}$. We define the reconstructed word vector $\hat{w}_i$ for the word $w_i$ as the optimal linear approximation given by the topic vectors: $\hat{w}_i = H\alpha_i^*$, where

$$\alpha_i^* = \operatorname*{argmin}_{\alpha_i} \|w_i - H\alpha_i\|_2^2. \quad (1)$$

The reconstruction error for the whole document is the sum of each word's reconstruction error, $J = \sum_{i=1}^{n} \|w_i - \hat{w}_i\|_2^2 = \|W - \hat{W}\|_F^2$, where $W = [w_1, \ldots, w_n]$ and $\hat{W} = [\hat{w}_1, \ldots, \hat{w}_n]$ stack the original and reconstructed word vectors, and $\|\cdot\|_F$ is the Frobenius norm of a matrix. Since $J$ is a function of the topic vectors, our goal is to find the optimal topics that minimize the error:

$$\{h_k^*\}_{k=1}^{K} = \operatorname*{argmin}_{h_1, \ldots, h_K} J. \quad (2)$$

Without loss of generality, we require the topic vectors to be orthonormal, i.e., $H^T H = I$. As problem (2) only determines the optimal linear subspace spanned by the topic vectors, the norms and linear dependencies of the individual vectors do not matter. With the orthonormality constraint, the reconstructed vector simplifies to:

$$\hat{w}_i = H H^T w_i. \quad (3)$$
We stack the word vectors of the document as a matrix $W = [w_1, \ldots, w_n] \in \mathbb{R}^{d \times n}$. The equivalent formulation of problem (2) is:

$$H^* = \operatorname*{argmin}_{H:\, H^T H = I} \|W - H H^T W\|_F^2, \quad (4)$$

where $I$ is the $K \times K$ identity matrix.

The problem can be solved by Singular Value Decomposition (SVD), by which the matrix $W$ is decomposed as $W = U \Sigma V^T$, where $U \in \mathbb{R}^{d \times d}$ and $V \in \mathbb{R}^{n \times n}$ are orthogonal matrices, and $\Sigma$ is a diagonal matrix whose diagonal elements are arranged in decreasing order of absolute value. We show in the supplementary material that the first $K$ columns of the matrix $U$ are exactly the solution $H^*$ to problem (4).

We find the optimal topic vectors $h_1, \ldots, h_K$ by solving problem (4). We note that these topic vectors are not equally important: one topic is more important than another if it can reconstruct the words with smaller error. Define $J_k$ as the reconstruction error when only the topic vector $h_k$ is used to reconstruct the document:

$$J_k = \sum_{i=1}^{n} \|w_i - h_k h_k^T w_i\|_2^2. \quad (5)$$
Now define $I_k$ as the importance of topic $h_k$, which measures the topic's ability to reconstruct the words in a document:

$$I_k = \|h_k^T W\|_2^2. \quad (6)$$
We show in the supplementary material that the higher the importance $I_k$ is, the smaller the reconstruction error $J_k$ is. We normalize $I_k$ to $\bar{I}_k$ so that the importance does not scale with the norm of the word matrix $W$ and the importances of the topics sum to $1$:

$$\bar{I}_k = \frac{I_k}{\sum_{j=1}^{K} I_j}. \quad (7)$$
The number of topics $K$ is a hyperparameter in our model. A small $K$ may not cover the key ideas of the document, whereas a large $K$ may keep trivial and noisy information. Empirically, we find that a moderate value of $K$ captures the most important information from the document; the effect of the topic number is analyzed in Section 6.

4.3 Topic Mapping to Summaries

We have extracted topic vectors $h_1, \ldots, h_K$ from the document matrix $W$, with importances $\bar{I}_1, \ldots, \bar{I}_K$. In this module, we measure the relevance of a document-summary pair. A summary that matches the document should be closely related to the “topics” of that document. Suppose the vectors of the words in a summary are stacked as a matrix $S = [s_1, \ldots, s_m] \in \mathbb{R}^{d \times m}$, where $s_j$ is the vector of the $j$-th word in the summary. Similar to the reconstruction of the document, the summary words can also be reconstructed from the document's topic vectors as in Eq. (3). Let $\hat{s}_{jk} = h_k h_k^T s_j$ be the reconstruction of the summary word $s_j$ given by one topic $h_k$.

Let $r(h_k, s_j)$ denote the relevance between a topic vector $h_k$ and a summary word $s_j$. It is defined as the cosine similarity between $\hat{s}_{jk}$ and $s_j$:

$$r(h_k, s_j) = \frac{\hat{s}_{jk}^T s_j}{\|\hat{s}_{jk}\|_2 \, \|s_j\|_2}. \quad (8)$$
Furthermore, let $r(h_k, S)$ be the relevance between a topic vector and the summary, defined as the average similarity between the topic vector and the summary words:

$$r(h_k, S) = \frac{1}{m} \sum_{j=1}^{m} r(h_k, s_j). \quad (9)$$
The relevance between a topic vector and a summary is a real value between 0 and 1.

As we have shown, the topics extracted from a document are not equally important. Naturally, a summary relevant to the more important topics is more likely to match the document well. We therefore define the relevance $r(D, S)$ between the document $D$ and the summary $S$ as the sum of the topic-summary relevances weighted by the importance of each topic:

$$r(D, S) = \sum_{k=1}^{K} \bar{I}_k \, r(h_k, S), \quad (10)$$

where $\bar{I}_k$ is the importance of topic $h_k$ as defined in (7). The higher $r(D, S)$ is, the better the summary matches the document.

We provide a visual representation of the documents as shown in Figure 3 to illustrate the notion of hidden topics. The two documents are from science projects: a genetics project, Pedigree Analysis: A Family Tree of Traits ScienceBuddies (2017a), and a weather project, How Do the Seasons Change in Each Hemisphere ScienceBuddies (2017b). We project all embeddings to a three-dimensional space for ease of visualization.

Figure 3: Words mode and genes from the document on genetics and words storm and atmospheric from the document on weather are represented by pink and blue points respectively. The linear space of hidden topics in genetics forms the pink plane, on which the summary word heredity (the red point) roughly lies. Topic vectors of the document on weather form the blue plane, and the summary word weather (the dark blue point) lies almost on that plane.

As seen in Figure 3, the hidden topics reconstruct the words in their respective documents to the extent possible. This means that the words of a document lie roughly on the plane formed by their corresponding topic vectors. We also notice that the summary words (heredity and weather respectively for the two projects under consideration) lie very close to the plane formed by the hidden topics of the relevant project while remaining away from the plane of the irrelevant project. This shows that the words in the summary (and hence the summary itself) can also be reconstructed from the hidden topics of documents that match the summary (and are hence ‘relevant’ to the summary). Figure 3 visually explains the geometric relations between the summaries, the hidden topics and the documents. It also validates the representation power of the extracted hidden topic vectors.

5 Experiments

In this section, we evaluate our document-summary matching approach on two specific applications where texts of different sizes are compared. One application is that of concept-project matching useful in science education and the other is that of summary-research paper matching.

Word Embeddings. Two sets of 300-dimensional word embeddings were used in our experiments. Both were trained with the Continuous Bag-of-Words (CBOW) model in word2vec Mikolov et al. (2013), but on different corpora. One training corpus is the full English WikiCorpus Al-Rfou et al. (2013). The second consists of science articles extracted from the WikiCorpus: we manually selected the science categories in Wikipedia, considered all subcategories within a depth of 3 from these root categories, and extracted all articles in those categories to obtain a science corpus. The word vectors used for documents and summaries both come from these pretrained word2vec embeddings.

Baselines. We include two state-of-the-art methods of measuring document similarity for comparison, using their implementations available in gensim Řehůřek and Sojka (2010).
(1) Word Mover's Distance (WMD) Kusner et al. (2015). WMD quantifies the distance between a pair of documents based on word embeddings, as introduced previously (cf. Related Work). We take the negative of this distance as a measure of similarity between a document and a summary.
(2) Doc2Vec Le and Mikolov (2014). Doc2Vec learns document representations with neural networks. We used two versions: one trained on the full English WikiCorpus and a second trained on the science corpus, the same corpora used for word embedding training. We use the cosine similarity between two text vectors to measure their relevance.

For a given document-summary pair, we compare the scores obtained using the above two methods with that obtained using our method.

5.1 Concept-Project matching

Science projects are valuable resources for learners to instigate knowledge creation via experimentation and observation. The need for matching a science concept with a science project arises when learners intending to delve deeper into certain concepts search for projects that match a given concept. Additionally, they may want to identify the concepts with which a set of projects are related.

We note that in this task, science concepts are highly concise summaries of the core ideas in projects, whereas projects are detailed instructions of the experimental procedures, including an introduction, materials and a description of the procedure, as shown in Table 1. Our matching method provides a way to bridge the gap between abstract concepts and detailed projects. The format of the concepts and the projects is discussed below.

method topic_science topic_wiki wmd_science wmd_wiki doc2vec_science doc2vec_wiki
Table 2: Classification results for the Concept-Project Matching task. All performance differences were statistically significant at .

Concepts. For the purpose of this study we use the concepts available in the Next Generation Science Standards (NGSS) NGSS (2017). Each concept is accompanied by a short description. For example, one concept in life science is Heredity: Inheritance and Variation of Traits. Its description is All cells contain genetic information in the form of DNA molecules. Genes are regions in the DNA that contain the instructions that code for the formation of proteins. Typical lengths of concepts are around 50 words.
Projects. The website Science Buddies ScienceBuddies (2017c) provides a list of projects from a variety of science and engineering disciplines such as physical sciences, life sciences and social sciences. A typical project consists of an abstract, an introduction, a description of the experiment and the associated procedures. A project typically has more than 1000 words.
Dataset. We prepared a representative dataset of concept-project pairs involving unique concepts from NGSS and unique projects from Science Buddies. Engineering undergraduate students annotated each pair as a good match or not, receiving research credit. Each concept-project pair received at least three annotations, and upon consolidation we considered a pair a good match when a majority of the annotators agreed; otherwise it was not considered a good match.

Classification Evaluation. Annotations from students provided the ground-truth labels for the classification task. We randomly split the dataset into tuning and test instances. A threshold score was tuned on the tuning data, and concept-project pairs with scores higher than this threshold were classified as good matches during testing. We performed 10-fold cross validation and report the average precision, recall, F1 score, and their standard deviations in Table 2.


Our topic-based metric is denoted as “topic”, and the general-domain and science-domain embeddings are denoted as “wiki” and “science” respectively. We show the performance of our method against the two baselines while varying the underlying embeddings, thus resulting in 6 different combinations. For example, “topic_science” refers to our method with science embeddings. From the table (column 1) we notice the following: 1) Our method significantly outperforms the two baselines by a wide margin (10%) in both the general domain setting as well as the domain-specific setting. 2) Using science domain-specific word embeddings instead of the general word embeddings results in the best performance across all algorithms. This performance was observed despite the word embeddings being trained on a significantly smaller corpus compared to the general domain corpus.

Besides the classification metrics, we also evaluated the directed matching from concepts to projects with ranking metrics.

Figure 4: Ranking performance of all methods: (a) Precision@1, (b) Precision@3, (c) Precision@6.

Ranking Evaluation. Our collected dataset contains a many-to-many matching between concepts and projects: the same concept was found to be a good match for multiple projects, and the same project was found to match many concepts. The classification task described above evaluated bidirectional concept-project matching. Next we evaluate the directed matching from concepts to projects, to see how relevant the top-ranking projects are to a given input concept. Here we use precision@k Radlinski and Craswell (2010) as the evaluation metric, i.e., the fraction of relevant projects among the top-k ranked ones.

For this part, we only considered the methods using science-domain embeddings, as they showed superior performance in the classification task. For each concept, we check the precision@k of the matched projects and place the concept in one of k+1 bins accordingly. For example, for k=3, if only two of the three top-ranked projects are a correct match, the concept is placed in the bin corresponding to 2/3. In Figure 4, we show the percentage of concepts that fall into each bin for the three algorithms, for k = 1, 3, 6.
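The binning procedure above can be sketched as follows (plain Python; the function names and toy data are assumptions for illustration):

```python
from collections import Counter

def precision_at_k(ranked, relevant, k):
    """Fraction of the top-k ranked projects that are correct matches."""
    return sum(1 for p in ranked[:k] if p in relevant) / k

def bin_concepts(rankings, gold, k):
    """Place each concept into one of k+1 bins by its number of top-k hits.

    rankings: concept -> ranked list of projects;
    gold: concept -> set of annotated relevant projects.
    """
    bins = Counter()
    for concept, ranked in rankings.items():
        hits = sum(1 for p in ranked[:k] if p in gold[concept])
        bins[hits] += 1   # bins 0..k, i.e. k+1 possible bins
    return bins

# e.g. two of the three top-ranked projects correct -> precision@3 = 2/3
assert precision_at_k(["p1", "p2", "p3"], {"p1", "p3"}, 3) == 2 / 3
```

Plotting the per-bin concept counts for each method then reproduces the style of Figure 4.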

We observe that recommendations from the hidden-topic approach fall into the high-value bins more often, performing consistently better than the two strong baselines. The advantage becomes more pronounced at precision@6. It is worth noting that wmd_science falls behind doc2vec_science in the classification task, while it outperforms doc2vec_science in the ranking task.

5.2 Text Summarization

The task of matching summaries and documents is commonly seen in real life. For example, we use an event summary “Google’s AlphaGo beats Korean player Lee Sedol in Go” to search for relevant news, or use the summary of a scientific paper to look for related research publications. Such matching constitutes an ideal task to evaluate our matching method between texts of different sizes.

Dataset. We use a dataset from the CL-SciSumm Shared Task Jaidka et al. (2016). The dataset consists of 730 ACL Computational Linguistics research papers covering 50 categories in total. Each category consists of a reference paper (RP) and around 10 citing papers (CP) that contain citations to the RP. A human-generated summary for the RP is provided and we use the 10 CP as being relevant to the summary. The matching task here is between the summary and all CPs in each category.
Evaluation. For each paper, we keep all of its content except the experiments and acknowledgement sections (these sections were omitted because their content is often less related to the topic of the summary). A summary is much shorter than a full paper. For each topic, we rank all papers by the relevance scores generated by our method and the baselines, using both sets of embeddings. For evaluation, we use the information-retrieval measure precision@k, which counts the number of relevant matches among the top-k matches Manning et al. (2008). For each combination of text similarity approach and embeddings, we show precision@k for different k in Figure 5. We observe that our method with science embeddings achieves the best performance compared to the baselines, once again showing not only the benefit of our method but also that of incorporating domain knowledge.

Figure 5: Summary-Article Matching

6 Discussion

Analysis of Results. From the results of the two tasks we observe that our method outperforms two strong baselines. The reason for WMD’s poor performance could be that the many uninformative words (those unrelated to the central topic) make WMD overestimate the distance between the document-summary pair. As for doc2vec, its single vector representation may not be able to capture all the key topics of a document. A project could contain multifaceted information, e.g., a project to study how climate change affects grain production is related to both environmental science and agricultural science.

Effect of Topic Number. The number of hidden topics $K$ is a hyperparameter in our setting. We empirically evaluate its effect on the task of concept-project matching. Figure 6 shows the F1 scores and standard deviations for different values of $K$.

Figure 6: F1 score on concept-project matching with different topic numbers

As we can see, the optimal $K$ is 18. When $K$ is too small, the hidden topics are too few to capture the key information in projects, so increasing the topic number toward the optimum brings a large improvement in performance. Topic numbers larger than the optimal value degrade performance, since the extra topics incorporate noisy information. We note that the performance changes are mild when the number of topics lies in the range [18, 31]: because topics are weighted by their importance, the effect of noisy information from extra hidden topics is mitigated.

Interpretation of Hidden Topics. We consider the summary-paper matching as an example with around 10 papers per category. We extracted the hidden topics from each paper, reconstructed words with these topics as shown in Eq. (3), and selected the words which had the smallest reconstruction errors. These words are thus closely related to the hidden topics, and we call them topic words to serve as an interpretation of the hidden topics. We visualize the cloud of such topic words on the set of papers about word sense disambiguation as shown in Figure 7. We see that the words selected based on the hidden topics cover key ideas such as disambiguation, represent, classification and sentence. This qualitatively validates the representation power of hidden topics. More examples are available in the supplementary material.

We interpret this to mean that the proposed idea of multiple hidden topics captures the key information of a document. The extracted “hidden topics” represent the essence of documents, suggesting the appropriateness of our relevance metric for measuring similarity between texts of different sizes. Even though our focus in this study was the science domain, we point out that the results hold more generally, since we made no domain-specific assumptions.

Figure 7: Topic words from papers on word sense disambiguation

Varying Sensitivity to Domain. As shown in the results, the science-domain embeddings improved the F1 score of concept-project classification for the topic-based method, for WMD, and for doc2vec, underscoring the importance of domain-specific word embeddings.

Doc2vec is less sensitive to the domain, because it provides document-level representation. Even if some words cannot be disambiguated due to the lack of domain knowledge, other words in the same document can provide complementary information so that the document embedding does not deviate too much from its true meaning.

Our method, though also embedding-based, is not as sensitive to domain as WMD. It is robust to polysemous words with domain-sensitive semantics, since hidden topics are extracted at the document level: broader contexts beyond single words provide complementary information for word sense disambiguation.

7 Conclusion

We propose a novel approach to matching documents and summaries. The challenge we address is to bridge the gap between a detailed long text and its abstraction via hidden topics. We incorporate domain knowledge into the matching system to gain further performance improvements. Our approach outperforms two strong baselines in two downstream applications: concept-project matching and summary-research paper matching.



Appendix A Optimization Problem in Finding Hidden Topics

We first show that problem (2) is equivalent to the optimization problem (4). The reconstruction of word $w$ is $\hat{w} = H\alpha$, where $H = [h_1, \dots, h_K]$ stacks the hidden topic vectors and the coefficients $\alpha$ solve

$$\min_{\alpha} \|w - H\alpha\|_2^2. \quad (11)$$

Problem (11) is a standard quadratic optimization problem which is solved by $\alpha = H^{+}w$, where $H^{+}$ is the pseudoinverse of $H$. With the orthonormal constraints on $H$, we have $H^{+} = H^{T}$. Therefore, $\alpha = H^{T}w$, and $\hat{w} = HH^{T}w$.

Given the topic vectors $h_1, \dots, h_K$, the reconstruction error is defined as:

$$r = \sum_{i=1}^{n} \|w_i - HH^{T}w_i\|_2^2 = \|W - HH^{T}W\|_F^2,$$

where $W = [w_1, \dots, w_n]$ is a matrix stacked by the word vectors in a document. Now the equivalence has been shown between problems (2) and (4).
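This equivalence between the per-word objective and the matrix objective can be spot-checked numerically; below is a minimal NumPy sketch, where the dimensions (50-dimensional embeddings, 20 words, 4 topics) are arbitrary placeholders and the orthonormal topics are random:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, K = 50, 20, 4                                # embedding dim, #words, #topics (arbitrary)

W = rng.standard_normal((d, n))                    # word vectors as columns of W
H, _ = np.linalg.qr(rng.standard_normal((d, K)))   # random orthonormal topic vectors (H^T H = I)

# Objective of problem (2): sum of per-word least-squares reconstruction errors
per_word = sum(np.linalg.norm(w - H @ (H.T @ w))**2 for w in W.T)

# Objective of problem (4): Frobenius-norm reconstruction error of the whole matrix
frobenius = np.linalg.norm(W - H @ H.T @ W, 'fro')**2

assert np.isclose(per_word, frobenius)
```

The check holds for any orthonormal $H$, not just the optimal one, which is exactly what the equivalence claims.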

Next we show how to derive the hidden topic vectors from the optimization problem (4) via Singular Value Decomposition. The optimization problem is:

$$\min_{H} \|W - HH^{T}W\|_F^2 \quad \text{s.t.} \quad H^{T}H = I.$$

Let $\tilde{W} = HH^{T}W$. Then, since $HH^{T}$ is an orthogonal projection, we have:

$$\|W - \tilde{W}\|_F^2 = \|W\|_F^2 - \|H^{T}W\|_F^2.$$

Since $\|W\|_F^2$ is a constant, we only need to minimize $-\|H^{T}W\|_F^2$, which is equivalent to the maximization of $\|H^{T}W\|_F^2$.

Let $\mathrm{tr}(A)$ be the trace of a matrix $A$. We can see that

$$\|H^{T}W\|_F^2 = \mathrm{tr}(H^{T}WW^{T}H) = \mathrm{tr}(WW^{T}HH^{T}). \quad (14)$$

Eq. (14) is based on one property of the trace: $\mathrm{tr}(AB) = \mathrm{tr}(BA)$ for two matrices $A$ and $B$.
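Both the projection identity and the trace rewriting used here can likewise be verified numerically; in this sketch the matrix sizes are again arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((50, 20))                  # word-vector matrix (arbitrary sizes)
H, _ = np.linalg.qr(rng.standard_normal((50, 4)))  # orthonormal topic vectors

# Projection identity: ||W - H H^T W||_F^2 = ||W||_F^2 - ||H^T W||_F^2
lhs = np.linalg.norm(W - H @ H.T @ W, 'fro')**2
rhs = np.linalg.norm(W, 'fro')**2 - np.linalg.norm(H.T @ W, 'fro')**2
assert np.isclose(lhs, rhs)

# Cyclic trace property behind Eq. (14): tr(H^T W W^T H) = tr(W W^T H H^T)
assert np.isclose(np.trace(H.T @ W @ W.T @ H), np.trace(W @ W.T @ H @ H.T))
```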

The optimization problem (4) can now be rewritten as:

$$\max_{H} \mathrm{tr}(H^{T}WW^{T}H) \quad \text{s.t.} \quad H^{T}H = I. \quad (16)$$

We apply the Lagrange multiplier method to solve the optimization problem (16). The Lagrangian function with multipliers $\lambda_1, \dots, \lambda_K$ is:

$$L(H, \lambda) = \mathrm{tr}(H^{T}WW^{T}H) - \sum_{j=1}^{K} \lambda_j \left(h_j^{T}h_j - 1\right).$$

By taking the derivative of $L$ with respect to $h_j$, we can get

$$\frac{\partial L}{\partial h_j} = 2WW^{T}h_j - 2\lambda_j h_j.$$

If $h_j^{*}$ is the solution to the equation $\partial L / \partial h_j = 0$, we have

$$WW^{T}h_j^{*} = \lambda_j h_j^{*},$$

which indicates that the optimal topic vectors $h_1^{*}, \dots, h_K^{*}$ are eigenvectors of $WW^{T}$.
The eigenvectors of $WW^{T}$ can be computed using Singular Value Decomposition (SVD). SVD decomposes the matrix $W$ as $W = U\Sigma V^{T}$, where $U^{T}U = I$, $V^{T}V = I$, and $\Sigma$ is a diagonal matrix. Because

$$WW^{T} = U\Sigma V^{T}V\Sigma^{T}U^{T} = U\Lambda U^{T},$$

where $\Lambda = \Sigma\Sigma^{T}$ is also a diagonal matrix, the columns of $U$ give the eigenvectors of $WW^{T}$, and the corresponding eigenvalues are the diagonal elements of $\Lambda$.
We note that not all topics are equally important: the topics which recover the word vectors with smaller error are more important. When $K = 1$, we can find the single most important topic, which minimizes the reconstruction error among all unit vectors. Equivalently, the optimization in (16) can be written as:

$$h^{*} = \arg\max_{\|h\|_2 = 1} h^{T}WW^{T}h. \quad (18)$$

Formula (18) indicates that the most important topic vector is the eigenvector corresponding to the maximum eigenvalue. Similarly, we can find that the larger the eigenvalue is, the smaller the reconstruction error the corresponding topic achieves, and the more important the topic is.

Also we can find that

$$r = \|W\|_F^2 - \sum_{j=1}^{K} \lambda_j.$$

As we can see, $\lambda_j$ can be used to quantify the importance of the topic $h_j$, and it is the unnormalized importance score we define in Eq. (6).

Henceforth, the vectors in $U$ corresponding to the $K$ largest eigenvalues are the solution to the optimal hidden topic vectors $h_1^{*}, \dots, h_K^{*}$, and the topic importance is measured by $\lambda_1, \dots, \lambda_K$.
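The whole derivation reduces topic extraction to a single SVD. A minimal NumPy sketch of this procedure follows; the matrix sizes and the choice of $K$ are arbitrary placeholders, not values from the paper:

```python
import numpy as np

def hidden_topics(W, K):
    """Return K hidden topic vectors (columns of H) and their
    unnormalized importance scores lambda_j = sigma_j**2."""
    # Singular values come back sorted in descending order,
    # so the first K left singular vectors are the top topics.
    U, sigma, _ = np.linalg.svd(W, full_matrices=False)
    H = U[:, :K]                 # eigenvectors of W W^T with largest eigenvalues
    importance = sigma[:K]**2    # corresponding eigenvalues of W W^T
    return H, importance

rng = np.random.default_rng(2)
W = rng.standard_normal((50, 20))   # toy word-vector matrix
H, lam = hidden_topics(W, K=4)

# The residual matches r = ||W||_F^2 - sum_j lambda_j
r = np.linalg.norm(W - H @ H.T @ W, 'fro')**2
assert np.isclose(r, np.linalg.norm(W, 'fro')**2 - lam.sum())
```

In practice the columns of `W` would be pre-trained word embeddings of the document's words, and `lam` would serve as the unnormalized importance scores of Eq. (6).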

Appendix B Interpretation of Hidden Topics

Figure 8: Topic word visualization to interpret hidden topics. (a) Graph model to cluster senses; (b) finite-state automaton as language analyzer.

Mathematically, the hidden topics are orthonormal vectors and do not carry physical meaning in themselves. To gain deeper insight into these hidden topics, we can establish connections between topics and words. For a given paper, we can extract several hidden topic vectors $h_1, \dots, h_K$ by solving the optimization problem (2).

For each word $w$ in the document, we reconstruct it with the hidden topic vectors as below:

$$\hat{w} = HH^{T}w, \quad H = [h_1, \dots, h_K].$$

The reconstruction error for word $w$ is $\|w - \hat{w}\|_2$. We select the words with small reconstruction errors, since they are closely related to the extracted topic vectors and can well explain the hidden topics. We collect these highly relevant words from papers in the same category as natural interpretations of the hidden topics. The clouds of these topic words are shown in Figure 8. The papers in Figure 8(a) are about graph modeling to cluster word senses. As we can see, topic words such as graph, clusters, semantic and algorithms capture the key ideas of those papers well. Similarly, Figure 8(b) presents the word cloud for papers on finite-state automata and language analyzers. Core concepts such as language, context, finite-state, transducer and linguistics are well preserved by the extracted hidden topics.
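This topic-word selection can be sketched in a few lines of NumPy. The example below is illustrative only: the embeddings are synthetic (three vectors placed inside the topic span, three random ones outside it), and the word labels are borrowed from Figure 8(a) purely for readability:

```python
import numpy as np

def topic_words(words, vectors, H, top=10):
    """Rank words by how well the hidden topics H reconstruct their
    embeddings; a small error means the word explains the topics well."""
    errors = []
    for word, w in zip(words, vectors):
        err = np.linalg.norm(w - H @ (H.T @ w))   # reconstruction error ||w - H H^T w||
        errors.append((err, word))
    return [word for _, word in sorted(errors)[:top]]

rng = np.random.default_rng(3)
H, _ = np.linalg.qr(rng.standard_normal((50, 4)))           # toy orthonormal topics
on_topic = [H @ rng.standard_normal(4) for _ in range(3)]   # vectors inside the topic span
off_topic = [rng.standard_normal(50) for _ in range(3)]     # vectors mostly outside it
words = ['graph', 'clusters', 'semantic', 'x', 'y', 'z']

picked = topic_words(words, on_topic + off_topic, H, top=3)
assert set(picked) == {'graph', 'clusters', 'semantic'}
```

With real data, `vectors` would be the embeddings of the document's words and `H` the topics extracted by the SVD of Appendix A.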