Vector of Locally-Aggregated Word Embeddings (VLAWE): A novel document-level embedding

02/23/2019
by Radu Tudor Ionescu, et al.

In this paper, we propose a novel representation for text documents based on aggregating word embedding vectors into document embeddings. Our approach is inspired by the Vector of Locally-Aggregated Descriptors used for image representation, and it works as follows. First, the word embeddings gathered from a collection of documents are clustered by k-means in order to learn a codebook of semantically-related word embeddings. Each word embedding is then associated to its nearest cluster centroid (codeword). The Vector of Locally-Aggregated Word Embeddings (VLAWE) representation of a document is then computed by accumulating the differences between each codeword vector and each word vector (from the document) associated to the respective codeword. We plug the VLAWE representation, which is learned in an unsupervised manner, into a classifier and show that it is useful for a diverse set of text classification tasks. We compare our approach with a broad range of recent state-of-the-art methods, demonstrating its effectiveness. Furthermore, we obtain a considerable improvement on the Movie Review data set, reporting an accuracy of 93.3%, which surpasses the state-of-the-art approach.


1 Introduction

In recent years, word embeddings Bengio et al. (2003); Collobert and Weston (2008); Mikolov et al. (2013); Pennington et al. (2014) have had a huge impact on natural language processing (NLP) and related fields, being used in many tasks including sentiment analysis Dos Santos and Gatti (2014); Fu et al. (2018), information retrieval Clinchant and Perronnin (2013); Ye et al. (2016) and word sense disambiguation Bhingardive et al. (2015); Butnaru et al. (2017); Chen et al. (2014); Iacobacci et al. (2016), among many others. Starting from word embeddings, researchers proposed various ways of aggregating word embedding vectors to obtain efficient sentence-level or document-level representations Butnaru and Ionescu (2017); Cheng et al. (2018); Clinchant and Perronnin (2013); Conneau et al. (2017); Cozma et al. (2018); Fu et al. (2018); Hill et al. (2016); Kiros et al. (2015); Kusner et al. (2015); Le and Mikolov (2014); Shen et al. (2018); Torki (2018); Zhao et al. (2015); Zhou et al. (2016, 2018). Although the mean (or sum) of word vectors is commonly adopted because of its simplicity Mitchell and Lapata (2010), more complex approaches usually yield better performance Cheng et al. (2018); Conneau et al. (2017); Cozma et al. (2018); Fu et al. (2018); Hill et al. (2016); Kiros et al. (2015); Torki (2018); Zhao et al. (2015); Zhou et al. (2016, 2018). To this end, we propose a simple yet effective approach for aggregating word embeddings into document embeddings. Our approach is inspired by the Vector of Locally-Aggregated Descriptors (VLAD) Jégou et al. (2010, 2012) used in computer vision to efficiently represent images for various image classification and retrieval tasks. To our knowledge, we are the first to adapt and use VLAD in the text domain.

Our document-level representation is constructed as follows. First, we apply a pre-trained word embedding model, such as GloVe Pennington et al. (2014), on all the words from a set of training documents in order to obtain a set of training word vectors. The word vectors are clustered by k-means in order to learn a codebook of semantically-related word embeddings. Each word embedding is then associated to its nearest cluster centroid (codeword). The Vector of Locally-Aggregated Word Embeddings (VLAWE) representation of a text document is then computed by accumulating the differences between each codeword vector and each word vector that is both present in the document and associated to the respective codeword. Since our approach uses cluster centroids as reference points for building the representation, it can easily accommodate new words, not seen during k-means training, simply by associating them to the nearest cluster centroids. Thus, VLAWE is robust to vocabulary distribution gaps between training and test, which can appear when the training set is particularly small or comes from a different domain. Certainly, the robustness holds as long as the word embeddings are pre-trained on a very large set of documents, e.g. the entire Wikipedia.
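To make the out-of-vocabulary behaviour concrete, here is a minimal sketch in Python with NumPy (our choice of tooling, not something prescribed by the paper) of how a word vector unseen during k-means training would be assigned to its nearest codeword; the function name and variables are illustrative only.

```python
import numpy as np

def nearest_codeword(word_vector, centroids):
    """Return the index of the closest codeword (Euclidean distance).

    centroids is a (k, d) array of cluster centroids learned by k-means;
    word_vector is a d-dimensional embedding, possibly of a word that was
    never seen during clustering.
    """
    distances = np.linalg.norm(centroids - word_vector, axis=1)
    return int(np.argmin(distances))
```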

We plug the VLAWE representation, which is learned in an unsupervised manner, into a classifier, namely Support Vector Machines (SVM), and show that it is useful for a diverse set of text classification tasks. We consider five benchmark data sets: Reuters-21578 Lewis (1997), RT-2k Pang and Lee (2004), MR Pang and Lee (2005), TREC Li and Roth (2002) and Subj Pang and Lee (2004). We compare VLAWE with recent state-of-the-art methods Butnaru and Ionescu (2017); Cheng et al. (2018); Fu et al. (2018); Hill et al. (2016); Iyyer et al. (2015); Kim (2014); Kiros et al. (2015); Le and Mikolov (2014); Liu et al. (2017); Shen et al. (2018); Torki (2018); Xue and Zhou (2009); Zhao et al. (2015); Zhou et al. (2016, 2018), demonstrating the effectiveness of our approach. Furthermore, we obtain a considerable improvement on the Movie Review (MR) data set, surpassing the state-of-the-art approach of Cheng et al. (2018).

The rest of the paper is organized as follows. We present related works on learning document-level representations in Section 2. We describe the Vector of Locally-Aggregated Word Embeddings in Section 3. We present experiments and results on various text classification tasks in Section 4. Finally, we draw our conclusion in Section 5.

2 Related Work

There are various works Butnaru and Ionescu (2017); Cheng et al. (2018); Conneau et al. (2017); Fu et al. (2018); Hill et al. (2016); Iyyer et al. (2015); Kim (2014); Kiros et al. (2015); Kusner et al. (2015); Le and Mikolov (2014); Clinchant and Perronnin (2013); Shen et al. (2018); Torki (2018); Zhao et al. (2015); Zhou et al. (2018) that propose to build effective sentence-level or document-level representations based on word embeddings. While most of these approaches are based on deep learning Cheng et al. (2018); Conneau et al. (2017); Hill et al. (2016); Iyyer et al. (2015); Kim (2014); Kiros et al. (2015); Le and Mikolov (2014); Zhao et al. (2015); Zhou et al. (2018), some approaches are inspired by computer vision research, namely by the bag-of-visual-words Butnaru and Ionescu (2017) and by Fisher Vectors Clinchant and Perronnin (2013). The relationship between the bag-of-visual-words, Fisher Vectors and VLAD is discussed in Jégou et al. (2012). The discussion can be transferred to describe the relationship between our work and the closely-related works of Butnaru and Ionescu (2017) and Clinchant and Perronnin (2013).

3 Method

The Vector of Locally-Aggregated Descriptors (VLAD) Jégou et al. (2010, 2012) was introduced in computer vision to efficiently represent images for various image classification and retrieval tasks. We propose to adapt the VLAD representation in order to represent text documents instead of images. Our adaptation consists of replacing the Scale-Invariant Feature Transform (SIFT) image descriptors Lowe (2004), which are useful for recognizing object patterns in images, with word embeddings Mikolov et al. (2013); Pennington et al. (2014), which are useful for recognizing semantic patterns in text documents. We coin the term Vector of Locally-Aggregated Word Embeddings (VLAWE) for the resulting document representation.

The VLAWE representation is derived as follows. First, each word in the collection of training documents is represented as a word vector using a pre-trained word embedding model. The result is a set of word vectors $\{x_1, x_2, \dots, x_n\}$. As in the VLAD model, the next step is to learn a codebook $\{\mu_1, \mu_2, \dots, \mu_k\}$ of representative meta-word vectors (codewords) using k-means. Each codeword $\mu_i$ is the centroid of the cluster $C_i$:

$$\mu_i = \frac{1}{n_i} \sum_{x_t \in C_i} x_t, \quad \forall i \in \{1, 2, \dots, k\}, \qquad (1)$$

where $n_i$ is the number of word vectors assigned to cluster $C_i$ and $k$ is the number of clusters. Since word embeddings carry semantic information by projecting semantically-related words into the same region of the embedding space, the resulting clusters contain semantically-related words. The formed centroids are stored in a randomized forest of k-d trees to reduce search cost, as described in Philbin et al. (2007); Ionescu et al. (2013); Ionescu and Popescu (2014, 2015a). Each word embedding $x_t$ is associated to a single cluster $C_i$, such that the Euclidean distance between $x_t$ and the corresponding codeword $\mu_i$ is minimal, for all $i \in \{1, 2, \dots, k\}$. For each document $D$ and each codeword $\mu_i$, the differences between the word vectors $x_t \in C_i \cap D$ and the codeword $\mu_i$ are accumulated into column vectors:

$$v_i = \sum_{x_t \in C_i \cap D} (x_t - \mu_i), \qquad (2)$$

where $D$ is the set of word embeddings in a given text document. The final VLAWE embedding $\phi$ of the document is obtained by stacking together the $d$-dimensional residual vectors $v_i$, where $d$ is equal to the dimension of the word embeddings:

$$\phi = \left[ v_1^\top, v_2^\top, \dots, v_k^\top \right]^\top. \qquad (3)$$

Therefore, the VLAWE document embedding has $k \cdot d$ components.
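As a concrete illustration of Equations (1)-(3), the following sketch builds the codebook and the VLAWE vector in Python with NumPy and scikit-learn; the library choice and the helper names are our own assumptions, not the authors' implementation (which relies on VLFeat, see Section 4.2).

```python
import numpy as np
from sklearn.cluster import KMeans

def learn_codebook(training_word_vectors, k):
    """Cluster all training word vectors into k codewords (Equation 1)."""
    kmeans = KMeans(n_clusters=k, random_state=0).fit(training_word_vectors)
    return kmeans.cluster_centers_                      # shape (k, d)

def vlawe(document_word_vectors, centroids):
    """Accumulate per-codeword residuals (Equation 2) and stack them (Equation 3)."""
    k, d = centroids.shape
    residuals = np.zeros((k, d))
    for x in document_word_vectors:
        i = np.argmin(np.linalg.norm(centroids - x, axis=1))   # nearest codeword
        residuals[i] += x - centroids[i]                       # accumulate difference
    return residuals.reshape(k * d)                            # k*d components
```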

The VLAWE vector $\phi$ undergoes two normalization steps. First, a power normalization is performed by applying the following operator independently on each component (element) $z$:

$$f(z) = \mathrm{sign}(z) \cdot |z|^{\alpha}, \qquad (4)$$

where $0 \leq \alpha \leq 1$ and $|z|$ is the absolute value of $z$. Since words in natural language follow Zipf's law Powers (1998), it seems natural to apply the power normalization in order to reduce the influence of highly frequent words, e.g. common words or stopwords, which can corrupt the representation. Like Jégou et al. (2012), we empirically observed that this step consistently improves the quality of the representation. The power-normalized document embeddings are then $L_2$-normalized. After obtaining the normalized VLAWE representations, we employ a classification method to learn a discriminative model for each specific text classification task.
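A minimal sketch of the two normalization steps, continuing the hypothetical helpers above; the default value of alpha is only a placeholder, since the actual exponent used in the experiments is set in Section 4.2.

```python
import numpy as np

def normalize_vlawe(v, alpha=0.5):
    """Signed power normalization (Equation 4) followed by L2 normalization."""
    v = np.sign(v) * np.abs(v) ** alpha   # reduces the influence of frequent words
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v
```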

4 Experiments

4.1 Data Sets

We exhibit the performance of VLAWE on five public data sets: Reuters-21578 Lewis (1997), RT-2k Pang and Lee (2004), MR Pang and Lee (2005), TREC Li and Roth (2002) and Subj Pang and Lee (2004).

The Reuters-21578 data set Lewis (1997) contains articles collected from the Reuters newswire. Following Joachims (1998) and Yang and Liu (1999), we select the categories (topics) that have at least one document in the training set and one in the test set. We use the ModApte evaluation Xue and Zhou (2009), in which unlabeled documents are eliminated. The collection is already divided into training and test documents.

The RT-2k data set Pang and Lee (2004) consists of movie reviews taken from the IMDB movie review archives. There are positive reviews rated with four or five stars, and negative reviews rated with one or two stars. The task is to discriminate between positive and negative reviews.

The Movie Review (MR) data set Pang and Lee (2005) consists of positive and negative sentences. Each sentence is selected from one movie review. The task is to discriminate between positive and negative sentiment.

TREC Li and Roth (2002) is a question type classification data set, where questions are divided into classes. The collection is already divided into questions for training and questions for testing.

The Subjectivity (Subj) Pang and Lee (2004) data set contains objective and subjective sentences. The task is to classify a sentence as being either subjective or objective.

Method Reuters-21578 RT-2k MR TREC Subj
Average of word embeddings (baseline)
BOW (baseline)
TF + FA + CP + SVM Xue and Zhou (2009)
Paragraph vectors Le and Mikolov (2014)
CNN Kim (2014)
DAN Iyyer et al. (2015)
Combine-skip Kiros et al. (2015)
Combine-skip + NB Kiros et al. (2015)
AdaSent Zhao et al. (2015)
SAE + embs. Hill et al. (2016)
SDAE + embs. Hill et al. (2016)
FastSent + AE Hill et al. (2016)
BLSTM Zhou et al. (2016)
BLSTM-Att Zhou et al. (2016)
BLSTM-2DCNN Zhou et al. (2016)
DC-TreeLSTM Liu et al. (2017)
BOSWE Butnaru and Ionescu (2017)
TreeNet Cheng et al. (2018)
TreeNet-GloVe Cheng et al. (2018)
BOMV Fu et al. (2018)
SWEM-average Shen et al. (2018)
SWEM-concat Shen et al. (2018)
COV + Mean Torki (2018)
COV + BOW Torki (2018)
COV + Mean + BOW Torki (2018)
DARLM Zhou et al. (2018)
VLAWE (ours)
Table 1: Performance results (in %) of our approach (VLAWE) versus several state-of-the-art methods Butnaru and Ionescu (2017); Cheng et al. (2018); Fu et al. (2018); Hill et al. (2016); Iyyer et al. (2015); Kim (2014); Kiros et al. (2015); Le and Mikolov (2014); Liu et al. (2017); Shen et al. (2018); Torki (2018); Xue and Zhou (2009); Zhao et al. (2015); Zhou et al. (2016, 2018) on the Reuters-21578, RT-2k, MR, TREC and Subj data sets. The top three results on each data set are highlighted in red, green and blue, respectively. Best viewed in color.

4.2 Evaluation and Implementation Details

In the experiments, we used the pre-trained word embeddings computed with the GloVe toolkit provided by Pennington et al. (2014). The pre-trained GloVe model contains -dimensional vectors for million words and phrases. Most of the steps required for building the VLAWE representation, such as the k-means clustering and the randomized forest of k-d trees, are implemented using the VLFeat library Vedaldi and Fulkerson (2008). We set the number of clusters (size of the codebook) to , leading to a VLAWE representation of components. Similarly to Jégou et al. (2012), we set the exponent $\alpha$ for the power normalization step in Equation (4) to a value which consistently leads to near-optimal results on all data sets. In the learning stage, we employ the Support Vector Machines (SVM) implementation provided by LibSVM Chang and Lin (2011). We use the linear kernel and set the SVM regularization parameter C to the same value in all our experiments. For optimal results, the VLAWE representation is combined with the BOSWE representation Butnaru and Ionescu (2017), which is based on the PQ kernel Ionescu and Popescu (2013, 2015b).
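To show how the pieces fit together, here is a sketch of the training stage under stated assumptions: scikit-learn's LinearSVC stands in for the LibSVM linear-kernel SVM, the regularization constant C is a placeholder rather than the value used in the paper, and the BOSWE combination is omitted.

```python
from sklearn.svm import LinearSVC

def train_vlawe_classifier(train_documents, labels, centroids, C=1.0):
    """Train a linear SVM on normalized VLAWE features (helpers defined above)."""
    features = [normalize_vlawe(vlawe(doc, centroids)) for doc in train_documents]
    return LinearSVC(C=C).fit(features, labels)
```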

We follow the same evaluation procedure as Kiros et al. (2015) and Hill et al. (2016), using cross-validation when a train and test split is not pre-defined for a given data set. As evaluation metrics, we employ the micro-averaged F1 measure for the Reuters-21578 data set and the standard classification accuracy for the RT-2k, MR, TREC and Subj data sets, in order to fairly compare with the related art.
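The metrics above can be computed, for instance, with scikit-learn (our assumption, not the authors' tooling); for the multi-label Reuters-21578 setting, the label arrays would first have to be binarized into indicator form.

```python
from sklearn.metrics import accuracy_score, f1_score

def evaluate(y_true, y_pred, dataset):
    """Micro-averaged F1 for Reuters-21578, classification accuracy elsewhere."""
    if dataset == "Reuters-21578":
        return f1_score(y_true, y_pred, average="micro")
    return accuracy_score(y_true, y_pred)
```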

4.3 Results

We compare VLAWE with several state-of-the-art methods Butnaru and Ionescu (2017); Cheng et al. (2018); Fu et al. (2018); Hill et al. (2016); Iyyer et al. (2015); Kim (2014); Kiros et al. (2015); Le and Mikolov (2014); Liu et al. (2017); Shen et al. (2018); Torki (2018); Xue and Zhou (2009); Zhao et al. (2015); Zhou et al. (2016, 2018) as well as two baseline methods, namely the average of word embeddings and the standard bag-of-words (BOW). The corresponding results are presented in Table 1.

First, we notice that our approach outperforms both baselines on all data sets, unlike other related methods Le and Mikolov (2014); Hill et al. (2016). In most cases, the improvements over the baselines are substantial. On the Reuters-21578 data set, we surpass the closely-related approach of Butnaru and Ionescu (2017). On the RT-2k data set, we surpass the related works of Fu et al. (2018) and Butnaru and Ionescu (2017). To our knowledge, our accuracy on RT-2k Pang and Lee (2004) surpasses all previous results reported in the literature. On the MR data set, we surpass most related works by a clear margin. To our knowledge, the best accuracy on MR reported in previous literature is obtained by Cheng et al. (2018). We surpass the accuracy of Cheng et al. (2018), reaching an accuracy of 93.3% using VLAWE. On the TREC data set, we reach the third best performance, after methods such as Cheng et al. (2018); Zhou et al. (2016, 2018), falling slightly below the state-of-the-art accuracy. On the Subj data set, two state-of-the-art methods Cheng et al. (2018); Zhao et al. (2015) report better performance, the best among them being Cheng et al. (2018). Overall, we consider that our results are noteworthy.

4.4 Discussion

Method MR
VLAWE (reduced k)
VLAWE (PCA)
VLAWE (full)
Table 2: Performance results (in %) of the full VLAWE representation versus two compact versions of VLAWE, obtained either by reducing the number of clusters k or by applying PCA.

The k-means clustering algorithm and, on some data sets, the cross-validation procedure can induce accuracy variations due to the random choices involved. We have conducted experiments to determine how large the accuracy variations are. We observed that the resulting accuracy decrease does not bring any significant difference to the results reported in Table 1.

Figure 1: Accuracy on MR for different numbers of k-means clusters.

Even for a small number of clusters, the VLAWE document representation can grow up to thousands of features, as the number of features is $k \cdot d$, where $d$ is the dimensionality of commonly used word embeddings. However, many document-level representations have a much smaller dimensionality. Therefore, it is desirable to obtain a more compact VLAWE representation. We hereby propose two approaches that lead to more compact representations. The first one is simply based on reducing the number of clusters $k$, which yields a correspondingly lower-dimensional representation. The second one is based on applying Principal Component Analysis (PCA) to reduce the dimension of the feature vectors. In Table 2, the resulting compact representations are compared against the full VLAWE representation on the MR data set. Although the compact VLAWE representations provide slightly lower results compared to the full VLAWE representation, we note that the differences are insignificant. Furthermore, both compact VLAWE representations remain well above the state-of-the-art method Cheng et al. (2018).
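As an illustration of the PCA-based compaction, the sketch below projects full VLAWE vectors onto their leading principal components using scikit-learn; the number of components is a placeholder, not the value chosen in the paper.

```python
from sklearn.decomposition import PCA

def compact_vlawe(vlawe_vectors, n_components=300):
    """Reduce full VLAWE vectors to a lower-dimensional representation via PCA."""
    return PCA(n_components=n_components).fit_transform(vlawe_vectors)
```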

In Figure 1, we illustrate the performance variation on MR when using different values for $k$. We notice that the accuracy tends to increase slightly as we increase the number of clusters. Overall, the VLAWE representation seems to be robust to the choice of $k$, always surpassing the state-of-the-art approach Cheng et al. (2018).

5 Conclusion

We proposed a novel representation for text documents which is based on aggregating word embeddings using k-means and on computing the residuals between each word embedding allocated to a given cluster and the corresponding cluster centroid. Our experiments on five benchmark data sets prove that our approach yields competitive results with respect to the state-of-the-art methods.

Acknowledgments

We thank the reviewers for their useful comments. This research is supported by University of Bucharest, Faculty of Mathematics and Computer Science, through the 2019 Mobility Fund.

References

  • Bengio et al. (2003) Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A Neural Probabilistic Language Model. Journal of Machine Learning Research, 3:1137–1155.
  • Bhingardive et al. (2015) Sudha Bhingardive, Dhirendra Singh, Rudramurthy V, Hanumant Harichandra Redkar, and Pushpak Bhattacharyya. 2015. Unsupervised Most Frequent Sense Detection using Word Embeddings. In Proceedings of NAACL, pages 1238–1243.
  • Butnaru and Ionescu (2017) Andrei Butnaru and Radu Tudor Ionescu. 2017. From Image to Text Classification: A Novel Approach based on Clustering Word Embeddings. In Proceedings of KES, pages 1784–1793.
  • Butnaru et al. (2017) Andrei Butnaru, Radu Tudor Ionescu, and Florentina Hristea. 2017. ShotgunWSD: An unsupervised algorithm for global word sense disambiguation inspired by DNA sequencing. In Proceedings of EACL, pages 916–926.
  • Chang and Lin (2011) Chih-Chung Chang and Chih-Jen Lin. 2011. LibSVM: A Library for Support Vector Machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1–27:27. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
  • Chen et al. (2014) Xinxiong Chen, Zhiyuan Liu, and Maosong Sun. 2014. A Unified Model for Word Sense Representation and Disambiguation. In Proceedings of EMNLP, pages 1025–1035.
  • Cheng et al. (2018) Zhou Cheng, Chun Yuan, Jiancheng Li, and Haiqin Yang. 2018. TreeNet: Learning Sentence Representations with Unconstrained Tree Structure. In Proceedings of IJCAI, pages 4005–4011.
  • Clinchant and Perronnin (2013) Stéphane Clinchant and Florent Perronnin. 2013. Aggregating continuous word embeddings for information retrieval. In Proceedings of CVSC Workshop, pages 100–109.
  • Collobert and Weston (2008) Ronan Collobert and Jason Weston. 2008. A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning. In Proceedings of ICML, pages 160–167.
  • Conneau et al. (2017) Alexis Conneau, Douwe Kiela, Holger Schwenk, Loïc Barrault, and Antoine Bordes. 2017. Supervised Learning of Universal Sentence Representations from Natural Language Inference Data. In Proceedings of EMNLP, pages 670–680.
  • Cozma et al. (2018) Mădălina Cozma, Andrei Butnaru, and Radu Tudor Ionescu. 2018. Automated essay scoring with string kernels and word embeddings. In Proceedings of ACL, pages 503–509.
  • Dos Santos and Gatti (2014) Cícero Nogueira Dos Santos and Maira Gatti. 2014. Deep Convolutional Neural Networks for Sentiment Analysis of Short Texts. In Proceedings of COLING, pages 69–78.
  • Fu et al. (2018) Mingsheng Fu, Hong Qu, Li Huang, and Li Lu. 2018. Bag of meta-words: A novel method to represent document for the sentiment classification. Expert Systems with Applications, 113:33–43.
  • Hill et al. (2016) Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning Distributed Representations of Sentences from Unlabelled Data. In Proceedings of NAACL, pages 1367–1377.
  • Iacobacci et al. (2016) Ignacio Iacobacci, Mohammad Taher Pilehvar, and Roberto Navigli. 2016. Embeddings for Word Sense Disambiguation: An Evaluation Study. In Proceedings of ACL, pages 897–907.
  • Ionescu and Popescu (2013) Radu Tudor Ionescu and Marius Popescu. 2013. Kernels for Visual Words Histograms. In Proceedings of ICIAP, pages 81–90.
  • Ionescu and Popescu (2014) Radu Tudor Ionescu and Marius Popescu. 2014. Objectness to improve the bag of visual words model. In Proceedings of ICIP, pages 3238–3242.
  • Ionescu and Popescu (2015a) Radu Tudor Ionescu and Marius Popescu. 2015a. Have a SNAK. Encoding Spatial Information with the Spatial Non-alignment Kernel. In Proceedings of ICIAP, pages 97–108.
  • Ionescu and Popescu (2015b) Radu Tudor Ionescu and Marius Popescu. 2015b. PQ kernel: a rank correlation kernel for visual word histograms. Pattern Recognition Letters, 55:51–57.
  • Ionescu et al. (2013) Radu Tudor Ionescu, Marius Popescu, and Cristian Grozea. 2013. Local Learning to Improve Bag of Visual Words Model for Facial Expression Recognition. In Proceedings of WREPL.
  • Iyyer et al. (2015) Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daumé III. 2015. Deep Unordered Composition Rivals Syntactic Methods for Text Classification. In Proceedings of ACL, pages 1681–1691.
  • Jégou et al. (2010) Hervé Jégou, Matthijs Douze, Cordelia Schmid, and Patrick Pérez. 2010. Aggregating local descriptors into a compact image representation. In Proceedings of CVPR, pages 3304–3311.
  • Jégou et al. (2012) Hervé Jégou, Florent Perronnin, Matthijs Douze, Jorge Sánchez, Patrick Perez, and Cordelia Schmid. 2012. Aggregating local image descriptors into compact codes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(9):1704–1716.
  • Joachims (1998) Thorsten Joachims. 1998. Text Categorization with Suport Vector Machines: Learning with Many Relevant Features. In Proceedings of ECML, pages 137–142, London, UK, UK. Springer-Verlag.
  • Kim (2014) Yoon Kim. 2014. Convolutional Neural Networks for Sentence Classification. In Proceedings of EMNLP, pages 1746–1751.
  • Kiros et al. (2015) Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-Thought Vectors. In Proceedings of NIPS, pages 3294–3302.
  • Kusner et al. (2015) Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. 2015. From word embeddings to document distances. In Proceedings of ICML, pages 957–966.
  • Le and Mikolov (2014) Quoc Le and Tomas Mikolov. 2014. Distributed Representations of Sentences and Documents. In Proceedings of ICML, pages 1188–1196.
  • Lewis (1997) David Lewis. 1997. The Reuters-21578 text categorization test collection. http://www.daviddlewis.com/resources/testcollections/reuters21578/.
  • Li and Roth (2002) Xin Li and Dan Roth. 2002. Learning question classifiers. In Proceedings of COLING, pages 1–7.
  • Liu et al. (2017) Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2017. Dynamic compositional neural networks over tree structure. In Proceedings of IJCAI, pages 4054–4060.
  • Lowe (2004) David G. Lowe. 2004. Distinctive Image Features from Scale-Invariant Keypoints. International Journal of Computer Vision, 60(2):91–110.
  • Mikolov et al. (2013) Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Distributed Representations of Words and Phrases and their Compositionality. In Proceedings of NIPS, pages 3111–3119.
  • Mitchell and Lapata (2010) Jeff Mitchell and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive Science, 34(8):1388–1429.
  • Pang and Lee (2004) Bo Pang and Lillian Lee. 2004. A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts. In Proceedings of ACL, pages 271–278.
  • Pang and Lee (2005) Bo Pang and Lillian Lee. 2005. Seeing Stars: Exploiting Class Relationships For Sentiment Categorization With Respect To Rating Scales. In Proceedings of ACL, pages 115–124.
  • Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of EMNLP, pages 1532–1543.
  • Philbin et al. (2007) James Philbin, Ondrej Chum, Michael Isard, Josef Sivic, and Andrew Zisserman. 2007. Object retrieval with large vocabularies and fast spatial matching. In Proceedings of CVPR, pages 1–8.
  • Powers (1998) David Powers. 1998. Applications and explanations of Zipf’s law. In Proceedings of NeMLaP/CoNLL, pages 151–160.
  • Shen et al. (2018) Dinghan Shen, Guoyin Wang, Wenlin Wang, Martin Renqiang Min, Qinliang Su, Yizhe Zhang, Chunyuan Li, Ricardo Henao, and Lawrence Carin. 2018. Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms. In Proceedings of ACL, pages 440–450.
  • Torki (2018) Marwan Torki. 2018. A Document Descriptor using Covariance of Word Vectors. In Proceedings of ACL, pages 527–532.
  • Vedaldi and Fulkerson (2008) Andrea Vedaldi and B. Fulkerson. 2008. VLFeat: An Open and Portable Library of Computer Vision Algorithms. http://www.vlfeat.org/.
  • Xue and Zhou (2009) Xiao-Bing Xue and Zhi-Hua Zhou. 2009. Distributional features for text categorization. IEEE Transactions on Knowledge and Data Engineering, 21(3):428–442.
  • Yang and Liu (1999) Yiming Yang and Xin Liu. 1999. A re-examination of text categorization methods. In Proceedings of SIGIR, pages 42–49.
  • Ye et al. (2016) Xin Ye, Hui Shen, Xiao Ma, Răzvan Bunescu, and Chang Liu. 2016. From word embeddings to document similarities for improved information retrieval in software engineering. In Proceedings of ICSE, pages 404–415.
  • Zhao et al. (2015) Han Zhao, Zhengdong Lu, and Pascal Poupart. 2015. Self-Adaptive Hierarchical Sentence Model. In Proceedings of IJCAI, pages 4069–4076.
  • Zhou et al. (2016) Peng Zhou, Zhenyu Qi, Suncong Zheng, Jiaming Xu, Hongyun Bao, and Bo Xu. 2016. Text Classification Improved by Integrating Bidirectional LSTM with Two-dimensional Max Pooling. In Proceedings of COLING, pages 3485–3495.
  • Zhou et al. (2018) Qianrong Zhou, Xiaojie Wang, and Xuan Dong. 2018. Differentiated attentive representation learning for sentence classification. In Proceedings of IJCAI, pages 4630–4636.