KeyVec: Key-semantics Preserving Document Representations

Bin Bi, et al. (Microsoft), 09/27/2017

Previous studies have demonstrated the empirical success of word embeddings in various applications. In this paper, we investigate the problem of learning distributed representations for text documents which many machine learning algorithms take as input for a number of NLP tasks. We propose a neural network model, KeyVec, which learns document representations with the goal of preserving key semantics of the input text. It enables the learned low-dimensional vectors to retain the topics and important information from the documents that will flow to downstream tasks. Our empirical evaluations show the superior quality of KeyVec representations in two different document understanding tasks.

1 Introduction

In recent years, the use of word representations, such as word2vec (Mikolov et al., 2013a, b) and GloVe (Pennington et al., 2014), has become a key “secret sauce” for the success of many natural language processing (NLP), information retrieval (IR) and machine learning (ML) tasks. The empirical success of word embeddings raises an interesting research question:

Beyond words, can we learn fixed-length distributed representations for pieces of texts?

The texts can be of variable length, ranging from paragraphs to documents. Such document representations play a vital role in a large number of downstream NLP/IR/ML applications, such as text clustering, sentiment analysis, and document retrieval, which treat each piece of text as an instance. Learning a good representation that captures the semantics of each document is thus essential for the success of these applications.

In this paper, we introduce KeyVec, a neural network model that learns dense distributed representations for documents of variable length. In order to capture semantics, the document representations are trained and optimized to recover key information of the documents. In particular, given a document, the KeyVec model constructs a fixed-length vector that is able to predict both the salient sentences and the key words in the document. In this way, KeyVec overcomes a limitation of prior embedding models, which treat every word and every sentence equally and thus fail to identify the key information that a document conveys. As a result, the vectorial representations generated by KeyVec naturally capture the topics of the documents, and thus should yield good performance in downstream tasks.

We evaluate KeyVec on two text understanding tasks: document retrieval and document clustering. As shown in Section 5, KeyVec yields generic document representations that outperform those of state-of-the-art embedding models.

Figure 1: KeyVec Model (best viewed in color)

2 Related Work

Le and Mikolov proposed the Paragraph Vector model, which extends word2vec to vectorial representations for text paragraphs (Le and Mikolov, 2014; Dai et al., 2015). It projects both words and paragraphs into a single vector space by appending paragraph-specific vectors to the standard word2vec model. Unlike our KeyVec, Paragraph Vector captures the sequential information of a given piece of text but does not specifically model its key information. In addition, Paragraph Vector requires extra iterative inference to generate embeddings for unseen paragraphs, whereas our KeyVec embeds new documents via a single feed-forward run.

In another recent work, Djuric et al. (2015) introduced the Hierarchical Document Vector (HDV) model to learn representations from a document stream. Our KeyVec differs from HDV in two respects: we do not assume the existence of a document stream, and HDV does not model sentences.

3 KeyVec Model

Given a document $d$ consisting of sentences $s_1, s_2, \ldots, s_n$, our KeyVec model aims to learn a fixed-length vectorial representation of $d$, denoted as $\mathbf{v}_d$. Figure 1 illustrates an overview of the KeyVec model, which consists of two cascaded neural network components: a Neural Reader and a Neural Encoder, as described below.

3.1 Neural Reader

The Neural Reader learns to understand the topics of each input document while paying attention to its salient sentences. It computes a dense representation for each sentence in the given document and derives the probability of that sentence being a salient sentence. The identified set of salient sentences, together with the derived probabilities, is then used by the Neural Encoder to generate a document-level embedding.

Since the Reader operates in embedding space, we first represent the discrete words in each sentence by their word embeddings. The sentence encoder in the Reader then derives sentence embeddings from the word representations to capture the semantics of each sentence. After that, a Recurrent Neural Network (RNN) is employed to derive document-level semantics by consolidating the constituent sentence embeddings. Finally, we identify key sentences in every document by computing the probability of each sentence being salient.

3.1.1 Sentence Encoder

Specifically, for the $i$-th sentence $s_i$ with $m_i$ words, the Neural Reader maps each word $w_{ij}$ into a word embedding $\mathbf{e}_{ij}$. Pre-trained word embeddings like word2vec or GloVe may be used to initialize the embedding table. In our experiments, we use domain-specific word embeddings trained by word2vec on our corpus.

Given the set of word embeddings for each sentence, the Neural Reader then derives a sentence-level embedding $\mathbf{s}_i$ using a sentence encoder $f$:

$\mathbf{s}_i = f(\mathbf{e}_{i1}, \ldots, \mathbf{e}_{im_i})$   (1)

where $f$ is implemented by a Convolutional Neural Network (CNN) with a max-pooling operation, in a way similar to (Kim, 2014). Note that other modeling choices, such as an RNN, are possible as well. We use a CNN here because of its simplicity and high efficiency when running on GPUs. The sentence encoder generates an embedding of 150 dimensions for each sentence.
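To make the sentence encoder concrete, below is a minimal PyTorch sketch of a Kim (2014)-style CNN encoder with max-pooling. The filter widths, the number of feature maps per width, and the 300-dimensional word embeddings are illustrative assumptions; only the 150-dimensional sentence embedding comes from the text.

```python
import torch
import torch.nn as nn

class CNNSentenceEncoder(nn.Module):
    """Kim (2014)-style CNN over word embeddings with max-pooling.

    Filter widths (3, 4, 5) and 50 feature maps per width are illustrative
    assumptions; the 150-dimensional sentence embedding matches the paper.
    """
    def __init__(self, word_dim=300, widths=(3, 4, 5), maps_per_width=50):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(word_dim, maps_per_width, kernel_size=w, padding=w - 1)
            for w in widths
        )

    def forward(self, word_embs):
        # word_embs: (batch, num_words, word_dim) -> (batch, word_dim, num_words)
        x = word_embs.transpose(1, 2)
        # max-pool each feature map over the word positions, then concatenate
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return torch.cat(pooled, dim=1)        # (batch, 150)

# Example: encode a batch of 8 sentences, each padded to 40 words
sent_emb = CNNSentenceEncoder()(torch.randn(8, 40, 300))
print(sent_emb.shape)  # torch.Size([8, 150])
```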

3.1.2 Identifying Salient Sentences

Given the embeddings $\mathbf{s}_1, \ldots, \mathbf{s}_n$ of the sentences in a document $d$, the Neural Reader computes the probability of each sentence being a key sentence, denoted as $p_i$.

We employ a Long Short-Term Memory (LSTM) network (Hochreiter and Schmidhuber, 1997) to compose the constituent sentence embeddings into a document representation. At the $i$-th time step, the LSTM takes as input the current sentence embedding $\mathbf{s}_i$ and computes a hidden state $\mathbf{h}_i$. We place an LSTM in both directions and concatenate the outputs of the two LSTMs. For the $i$-th sentence, $\mathbf{h}_i$ is semantically richer than the sentence embedding $\mathbf{s}_i$, as it incorporates context information from the surrounding sentences to model the temporal interactions between sentences. The probability of sentence $s_i$ being a key sentence then follows a logistic sigmoid of a linear function of $\mathbf{h}_i$:

$p_i = \sigma(\mathbf{w}^{\top} \mathbf{h}_i + b)$   (2)

where $\mathbf{w}$ is a trainable weight vector, and $b$ is a trainable bias scalar.
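As a sketch of how the bidirectional LSTM and the logistic scoring in Eq. (2) fit together, here is a hypothetical PyTorch module; the hidden size per direction is an assumption not stated in the text.

```python
import torch
import torch.nn as nn

class SalienceScorer(nn.Module):
    """Bidirectional LSTM over sentence embeddings; a sigmoid of a linear
    function of each hidden state gives p_i, the key-sentence probability.
    The hidden size (150 per direction) is an illustrative assumption."""
    def __init__(self, sent_dim=150, hidden_dim=150):
        super().__init__()
        self.bilstm = nn.LSTM(sent_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.scorer = nn.Linear(2 * hidden_dim, 1)     # w^T h_i + b

    def forward(self, sent_embs):
        # sent_embs: (batch, num_sentences, sent_dim)
        h, _ = self.bilstm(sent_embs)                  # (batch, n, 2*hidden_dim)
        p = torch.sigmoid(self.scorer(h)).squeeze(-1)  # (batch, n), i.e. p_i
        return h, p
```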

3.2 Neural Encoder

The Neural Encoder computes the document-level embedding based on the salient sentences identified by the Reader. In order to capture the topics of a document and the importance of its individual sentences, we perform a weighted pooling over the constituent sentences, with the weights specified by $p_i$, which gives the document-level embedding $\mathbf{v}_d$ through a transformation:

$\mathbf{v}_d = \mathbf{W} \left( \sum_{i=1}^{n} p_i \, \mathbf{h}_i \right) + \mathbf{b}$   (3)

where $\mathbf{W}$ is a trainable weight matrix, and $\mathbf{b}$ is a trainable bias vector.

Weighted pooling functions are commonly used as the attention mechanism (Bahdanau et al., 2015) in neural sequence learning tasks. The “share” each sentence contributes to the final embedding is proportional to its probability of being a salient sentence. As a result, $\mathbf{v}_d$ will be dominated by salient sentences with high $p_i$, which preserves the key information in a document and thus allows long documents to be encoded and embedded semantically.
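One possible reading of Eq. (3) as code: the hidden states are pooled with weights $p_i$ and passed through a single affine layer. The 300-dimensional input (matching a concatenated bidirectional LSTM state) and the 100-dimensional output (matching the embeddings used in Section 5) are assumptions.

```python
import torch
import torch.nn as nn

class NeuralEncoder(nn.Module):
    """Weighted pooling of LSTM states h_i with weights p_i, followed by an
    affine transformation, as one reading of Eq. (3)."""
    def __init__(self, hidden_dim=300, doc_dim=100):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, doc_dim)   # W, b in Eq. (3)

    def forward(self, h, p):
        # h: (batch, n, hidden_dim), p: (batch, n)
        pooled = (p.unsqueeze(-1) * h).sum(dim=1)    # sum_i p_i * h_i
        return self.proj(pooled)                     # (batch, doc_dim) = v_d
```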

4 Model Learning

In this section, we describe how the parameters of KeyVec are learned. Like most neural network models, KeyVec can be trained using Stochastic Gradient Descent (SGD), where the Neural Reader and Neural Encoder are jointly optimized. In particular, the parameters of the Reader and the Encoder are learned simultaneously by maximizing the joint likelihood of the two components:

$\mathcal{L} = \mathcal{L}_{\text{Reader}} + \mathcal{L}_{\text{Encoder}}$   (4)

where $\mathcal{L}_{\text{Reader}}$ and $\mathcal{L}_{\text{Encoder}}$ denote the log-likelihood functions of the Reader and the Encoder, respectively.

4.1 Reader’s Objective: $\mathcal{L}_{\text{Reader}}$

To optimize the Reader, we take a surrogate approach and heuristically generate a set of salient sentences from a document collection, which constitutes a training dataset for learning the salient-sentence probabilities $p_i$ parametrized by the Neural Reader. More specifically, given a training set of documents $d_1, \ldots, d_N$ (e.g., the body text of research papers) and their associated summaries $g_1, \ldots, g_N$ (e.g., abstracts), where $g_j$ is a gold summary of document $d_j$, we employ a state-of-the-art sentence similarity model, DSSM (Huang et al., 2013; Shen et al., 2014), to find the set of top-$k$ sentences in $d_j$ whose similarity to a sentence in the gold summary $g_j$ is above a pre-defined threshold. Note that here we assume each training document is associated with a gold summary composed of sentences that might not come from the document itself. We make this assumption only for the sake of generating the set of salient sentences, which is usually not readily available.

The log-likelihood objective of the Neural Reader is then given by maximizing the probability of the generated set, denoted $\mathcal{K}$, being the set of key sentences:

$\mathcal{L}_{\text{Reader}} = \sum_{s_i \in \mathcal{K}} \log p_i + \sum_{s_j \in \bar{\mathcal{K}}} \log (1 - p_j)$   (5)

where $\bar{\mathcal{K}}$ is the set of non-key sentences. Intuitively, this likelihood function gives the probability of each sentence in the generated key-sentence set being a key sentence, and of the remaining sentences being non-key ones.
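A small sketch of how Eq. (5) could be computed in practice, assuming the surrogate key/non-key labels have already been produced with the DSSM-based heuristic described above (here they are simply a boolean tensor):

```python
import torch

def reader_log_likelihood(p, is_key):
    """Eq. (5): sum of log p_i over heuristically labeled key sentences plus
    log(1 - p_j) over the remaining sentences (a per-sentence Bernoulli
    likelihood). `p` holds the Reader's probabilities, `is_key` the surrogate
    labels from the DSSM-based heuristic."""
    p = p.clamp(1e-7, 1 - 1e-7)                       # numerical safety
    ll = torch.where(is_key, p.log(), (1 - p).log())
    return ll.sum()

# Usage: maximize the likelihood, i.e. minimize its negation with SGD
p = torch.tensor([0.9, 0.2, 0.7])
is_key = torch.tensor([True, False, True])
loss = -reader_log_likelihood(p, is_key)
```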

4.2 Encoder’s Objective: $\mathcal{L}_{\text{Encoder}}$

The final output of the Encoder is a document embedding $\mathbf{v}_d$, derived from the hidden states of the Reader’s LSTM. Given our goal of developing a general-purpose model for embedding documents, we would like $\mathbf{v}_d$ to be semantically rich, encoding as much key information as possible. To this end, we impose an additional objective on the Encoder: the final document embedding needs to be able to reproduce the key words in the document, as illustrated in Figure 1.

In document $d$, the set of key words $\mathcal{W}_d$ is composed of the top 30 words with the highest TF-IDF scores in the gold summary of $d$. The Encoder’s objective is then formalized by maximizing the probability of predicting the key words in $\mathcal{W}_d$ using the document embedding $\mathbf{v}_d$:

$\mathcal{L}_{\text{Encoder}} = \sum_{w \in \mathcal{W}_d} \log P(w \mid \mathbf{v}_d)$   (6)

where $P(w \mid \mathbf{v}_d)$ is implemented as a softmax function with output dimensionality equal to the size of the vocabulary.
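For illustration, a hypothetical module implementing this softmax objective; the vocabulary size is an assumption, and the key-word indices are expected to be the top-30 TF-IDF words of the gold summary (e.g., extracted with a standard TF-IDF vectorizer).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KeyWordPredictor(nn.Module):
    """Eq. (6): a softmax over the vocabulary conditioned on the document
    embedding v_d; the objective sums log P(w | v_d) over the key words."""
    def __init__(self, doc_dim=100, vocab_size=50000):
        super().__init__()
        self.out = nn.Linear(doc_dim, vocab_size)

    def log_likelihood(self, v_d, keyword_ids):
        # v_d: (doc_dim,); keyword_ids: LongTensor of key-word vocab indices
        log_probs = F.log_softmax(self.out(v_d), dim=-1)
        # maximize this term, i.e. minimize its negation jointly with Eq. (5)
        return log_probs[keyword_ids].sum()
```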

Combining the objectives of the Reader and the Encoder yields the joint objective function in Eq. (4). By jointly optimizing the two objectives with SGD, the KeyVec model learns to identify salient sentences in input documents and thereby generates semantically rich document-level embeddings.

Model                                       P@10    MAP     MRR
word2vec averaging (public release, 300d)   0.221   0.176   0.500
word2vec averaging (academic corpus)        0.223   0.193   0.546
Paragraph Vector                            0.227   0.177   0.495
KeyVec                                      0.279   0.232   0.619

Table 1: Evaluation of document retrieval with different embedding models

5 Experiments and Results

To verify its effectiveness, we evaluate the KeyVec model on two text understanding tasks that take continuous distributed vectors as document representations: document retrieval and document clustering.

5.1 Document Retrieval

The goal of the document retrieval task is to decide whether a document should be retrieved given a query. In the experiments, our document pool contained 669 academic papers published by IEEE, from which the top-$k$ relevant papers are retrieved. We created 70 search queries, each composed of the text of a Wikipedia page on a field of study (e.g., https://en.wikipedia.org/wiki/Deep_learning). We retrieved relevant papers based on the cosine similarity between the 100-dimensional document embeddings of the Wikipedia pages and the academic papers. For each query, a good document-embedding model should return a list of academic papers belonging to the corresponding one of the 70 fields of study.
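A minimal sketch of this cosine-similarity ranking over precomputed embeddings (NumPy; the function and variable names are illustrative):

```python
import numpy as np

def retrieve_top_k(query_vec, doc_matrix, k=10):
    """Rank documents by cosine similarity to the query embedding.

    query_vec:  (100,) embedding of a Wikipedia query page.
    doc_matrix: (num_docs, 100) KeyVec embeddings of the academic papers.
    """
    docs = doc_matrix / np.linalg.norm(doc_matrix, axis=1, keepdims=True)
    query = query_vec / np.linalg.norm(query_vec)
    scores = docs @ query                  # cosine similarities
    top = np.argsort(-scores)[:k]          # indices of the k best papers
    return top, scores[top]
```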

Table 1 presents the P@10, MAP and MRR results of our KeyVec model and competing embedding methods on academic paper retrieval. word2vec averaging generates an embedding for a document by averaging the word2vec vectors of its constituent words. In the experiment, we used two different versions of word2vec: one from the public release, and the other trained specifically on our own academic corpus (113 GB). From Table 1, we observe that, as a document-embedding model, Paragraph Vector gave better retrieval results than word2vec averaging did. In contrast, our KeyVec outperforms all competitors, given its unique capability of capturing and embedding the key information of documents.

5.2 Document Clustering

In the document clustering task, we aim to cluster the academic papers by the venues in which they are published. There are a total of 850 academic papers and 186 associated venues, which are used as ground truth for evaluation. Each academic paper is represented as a vector of 100 dimensions.

Model                                       F1      V-measure   ARI
word2vec averaging (public release, 300d)   0.019   0.271       0.003
word2vec averaging (academic corpus)        0.079   0.548       0.066
Paragraph Vector                            0.083   0.553       0.070
KeyVec                                      0.090   0.597       0.079

Table 2: Evaluation of document clustering with different embedding models

To compare embedding methods on academic paper clustering, we calculate F1, V-measure (a conditional entropy-based clustering measure (Rosenberg and Hirschberg, 2007)), and ARI (Adjusted Rand Index (Hubert and Arabie, 1985)). As shown in Table 2, similarly to document retrieval, Paragraph Vector performed better than word2vec averaging at clustering documents, while our KeyVec consistently performed the best among all the compared methods.
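For the metric computation, a sketch using scikit-learn's V-measure and ARI implementations; k-means is an illustrative clustering choice since the text does not name the algorithm, and the pairwise-F1 computation is omitted here.

```python
from sklearn.cluster import KMeans
from sklearn.metrics import v_measure_score, adjusted_rand_score

def evaluate_clustering(doc_embeddings, venue_labels, n_clusters=186):
    """Cluster 100-dim document embeddings and score against venue labels."""
    pred = KMeans(n_clusters=n_clusters, n_init=10,
                  random_state=0).fit_predict(doc_embeddings)
    return {
        "V-measure": v_measure_score(venue_labels, pred),
        "ARI": adjusted_rand_score(venue_labels, pred),
    }
```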

6 Conclusions

In this work, we presented a neural network model, KeyVec, which learns continuous representations for text documents while retaining their key semantics.

In the future, we plan to employ the Minimum Risk Training scheme to train the Neural Reader directly on the original summaries, without needing to resort to a sentence similarity model.

References

  • Bahdanau et al. (2015) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. ICLR.
  • Dai et al. (2015) Andrew M. Dai, Christopher Olah, and Quoc V. Le. 2015. Document embedding with paragraph vectors. In NIPS Deep Learning Workshop.
  • Djuric et al. (2015) Nemanja Djuric, Hao Wu, Vladan Radosavljevic, Mihajlo Grbovic, and Narayan Bhamidipati. 2015. Hierarchical neural language models for joint representation of streaming documents and their content. In Proceedings of the 24th International Conference on World Wide Web. WWW ’15, pages 248–255.
  • Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9(8):1735–1780.
  • Huang et al. (2013) Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of the 22nd ACM International Conference on Information & Knowledge Management. ACM, New York, NY, USA, CIKM ’13, pages 2333–2338.
  • Hubert and Arabie (1985) Lawrence Hubert and Phipps Arabie. 1985. Comparing partitions. Journal of Classification 2(1):193–218.
  • Kim (2014) Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL. pages 1746–1751.
  • Le and Mikolov (2014) Quoc V. Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Proceedings of the 31st International Conference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014. pages 1188–1196.
  • Mikolov et al. (2013a) Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. ICLR Workshop.
  • Mikolov et al. (2013b) Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems. Curran Associates Inc., USA, NIPS’13, pages 3111–3119.
  • Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP). pages 1532–1543.
  • Rosenberg and Hirschberg (2007) Andrew Rosenberg and Julia Hirschberg. 2007. V-measure: A conditional entropy-based external cluster evaluation measure. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL). pages 410–420.
  • Shen et al. (2014) Yelong Shen, Xiaodong He, Jianfeng Gao, Li Deng, and Grégoire Mesnil. 2014. A latent semantic model with convolutional-pooling structure for information retrieval. In Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management. ACM, New York, NY, USA, CIKM ’14, pages 101–110.