An End-to-End Neural Network Framework for Text Clustering

by Jie Zhou, et al.

Unsupervised text clustering is one of the major tasks in natural language processing (NLP) and remains a difficult and complex problem. Conventional methods generally treat this task in separate steps: learning a text representation and then clustering the representations. As an improvement, neural methods have been introduced for continuous representation learning to address the sparsity problem. However, the multi-step process still deviates from a unified optimization target; in particular, the second step, clustering, is generally performed with conventional methods such as k-Means. We propose a pure neural framework for text clustering in an end-to-end manner, jointly learning the text representation and the clustering model. Our model works whenever context can be obtained, which is nearly always the case in NLP. We evaluate our method on two widely used benchmarks: IMDB movie reviews for sentiment classification and 20-Newsgroup for topic categorization. Despite its simplicity, experiments show the model outperforms previous clustering methods by a large margin. Furthermore, the model is also verified on the English Wiki dataset as a large-scale corpus.




1 Introduction

The knowledge of text categorization benefits multiple natural language understanding tasks, such as dialogue Ge and Xu (2015), question answering Yao and Durme (2014), document summarization Bairi et al. (2015) and information retrieval Manning et al. (2008). Supervised methods for text classification are widely applied and generally perform better than unsupervised clustering methods. However, with the explosive growth of the Internet, unsupervised methods have begun to reveal their advantages.

Labeling text data costs heavy manual effort, and it is impractical to afford such a cost for a large amount of data. The problem is aggravated when we lack prior information about the corpus: after processing more data, we may find that the total number of categories has grown, or that the boundaries between categories should be redefined. One may also wish to examine the effect of varying the number of categories at different levels of granularity. All of this greatly increases the labeling effort.

Clustering methods circumvent these difficulties since they require no data annotation. But several obstacles remain, especially in text clustering. First, the large vocabulary brings a sparsity problem to text representations, while conventional clustering tools such as k-Means are mainly designed for dense features. Second, the exploding corpus size and the growing number of required categories further decrease the efficiency of conventional tools. Third, conventional methods suffer from the isolated stages of learning the text representation and training the clustering model, which prevents unified optimization.

Neural methods address the sparsity problem by representing text with continuous distributed vectors (embeddings) Le and Mikolov (2014); Kiros et al. (2015); Tai et al. (2015); Triantafillou et al. (2016). But the clustering step still relies on conventional tools such as k-Means Manning et al. (2008), so the efficiency of neural methods on big data contributes little to speeding up the whole pipeline.

In this paper, we propose an end-to-end neural framework for text clustering. Instead of directly deciding which cluster each instance belongs to, our model clusters instances by determining whether two instances belong to the same category, which is a binary classification problem. The true category distribution is treated as a latent variable, represented by a hidden-layer vector in the neural framework. The binary label is obtained by a simple artificial rule. Implemented as a pure neural network, our framework unifies representation learning and clustering into one end-to-end system.

We evaluate our method on two widely used benchmarks: IMDB Movie Reviews (IMDB) Maas et al. (2011) for sentiment classification and 20-Newsgroup (20NG) Lang (1995) for topic categorization. Experimental results show that our method outperforms conventional methods by a large margin. We also verify the performance of our model on the English Wiki corpus, which has neither predefined categories nor clear boundaries between categories.

2 Method

Most clustering methods try to find which cluster each sample belongs to in an iterative way, either explicitly in the feature space, such as k-Means Lloyd (1982); Manning et al. (2008), or implicitly in the parameter space, such as the Gaussian Mixture Model (GMM) Jain (2010) and Latent Dirichlet Allocation (LDA) Blei et al. (2003).

Instead, the question “do these two samples belong to the same cluster?” drives our model optimization. Category information is not the final output of our model; it is learned as a latent variable, a hidden layer in the neural network. Thus, the clustering problem is transformed into a binary classification problem. The binary labels are constructed automatically under the following widely applicable rule.

In the clustering stage, a sample refers to a single sentence, or more generally a word sequence. In the inference stage, we can obtain category information at the sentence level, or at any higher level (e.g., paragraph or document) by averaging the category distributions of the sentences within that level.

The idea of utilizing a pair of instances comes from the field of learning to rank Cao et al. (2007); Liu et al. (2009) and is also used in image classification Koch et al. (2015). Note that previous works require manual labels, while our approach is purely unsupervised.

2.1 Prerequisite and Pseudo Labels

A pair of word sequences forms an input instance in our end-to-end neural text clustering model. Sequences that co-occur within a short distance are more likely to belong to the same category (label 1) than sequences far from each other (label 0). We do not pay special attention to the precision of these pseudo labels; our main goal is to show that the model converges to the true categories despite these noisy pseudo labels.

Two points deserve further explanation. First, in practice, two sequences distant from each other may still share a category, and adjacent sequences may have different categories. Such individual inconsistencies do not harm our model: we only need the assumption to hold statistically, since our neural framework is itself a statistical model that is resistant to a high level of label noise, as tested in the experiments section. This result was beyond our initial expectation but is quantitatively verified.

Second, our method is not restricted by text structure. The corpus can be organized at the paragraph level, the document level, or without any structural boundaries; our method holds as long as a distance can be defined. Even in the special case where all sentences are isolated without context information, a sub-sequence can be taken as a sample, and an instance built from two samples within the same sentence receives a positive label.

2.2 Instances Construction

Examples of positive and negative instances are given in Fig. 1. For a corpus organized at the paragraph level, we randomly select two sentences from the same paragraph to build a positive instance (same category) or from different paragraphs to build a negative instance (different categories) (Fig. 1a). For a corpus without structural boundaries, we select two adjacent sentences to build a positive instance and two distant sentences to build a negative instance (Fig. 1b).

Figure 1: Construct instances from a document with (top graph a) or without structural boundary (bottom graph b).
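As an illustration, the sampling rule for a paragraph-structured corpus can be sketched as follows (a minimal sketch; the function name and the 1/0 labels are our illustrative choices, not the authors' released code):

```python
import random

def build_instances(paragraphs, num_pairs, seed=0):
    """Build pseudo-labeled sentence pairs from a paragraph-structured corpus.

    Positive pairs (label 1): two sentences from the same paragraph.
    Negative pairs (label 0): one sentence from each of two different paragraphs.
    """
    rng = random.Random(seed)
    instances = []
    for _ in range(num_pairs):
        # positive: two distinct sentences drawn from one paragraph
        para = rng.choice([p for p in paragraphs if len(p) >= 2])
        s1, s2 = rng.sample(para, 2)
        instances.append((s1, s2, 1))
        # negative: sentences from two different paragraphs
        p1, p2 = rng.sample(range(len(paragraphs)), 2)
        instances.append((rng.choice(paragraphs[p1]),
                          rng.choice(paragraphs[p2]), 0))
    return instances
```

The positive labels are guaranteed correct under the paragraph-as-category assumption, while the negative labels are noisy, exactly as discussed above.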

In the inference stage, we predict the category distribution for each sequence. The category at a higher level (such as a paragraph) is obtained by averaging over all sequences within that level.

2.3 Model Topology

The framework of our end-to-end clustering system is shown in Fig. 2 and includes three parts.

  • Part-a and Part-b process the two input sequences of a pair respectively. They share the same topology and parameters. The words in the input sequences are mapped to 300-dimensional GloVe word embeddings trained on 840 billion tokens Pennington et al. (2014) and fed to a stacked recurrent neural network to generate the representation of the input sequence. A softmax layer then generates the probability distribution over categories; we only need to set the dimension of this distribution vector to the desired number of categories.

  • Part-c measures whether the two sequences belong to the same category using the cosine similarity of the two distribution vectors generated by Part-a and Part-b.

Figure 2: The end-to-end neural network topology for text clustering.

We employ a bi-directional long short-term memory (LSTM) Hochreiter and Schmidhuber (1997) network to process the input sequences. Our framework is flexible with respect to the sequence learning layers; stacked LSTMs or CNNs could be used instead. A max-pooling operation is employed to extract the representation after the LSTM layers: for each dimension, it takes the maximum value over the time series of input vectors. A softmax function then transforms the representation into a normalized vector, and the cosine metric measures the similarity between the two normalized vectors. Finally, we compute the square error (SE) cost between the cosine similarity and the pseudo label (1 for a positive pair and 0 for a negative one).
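The computation after the recurrent layers can be sketched in a few lines (a sketch that assumes pseudo-label targets of 1 and 0; the LSTM itself is omitted and the function names are ours):

```python
import math

def max_pool(states):
    """Dimension-wise max over a time series of vectors."""
    return [max(col) for col in zip(*states)]

def softmax(v):
    """Transform a representation into a normalized (probability) vector."""
    m = max(v)
    e = [math.exp(x - m) for x in v]
    s = sum(e)
    return [x / s for x in e]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def pair_cost(logits_a, logits_b, label):
    """Square-error cost between the cosine similarity of the two
    category distributions and the pseudo label (1 positive, 0 negative)."""
    pa, pb = softmax(logits_a), softmax(logits_b)
    return (cosine(pa, pb) - label) ** 2
```

Two identical representations give a cost of zero for a positive pair, while two peaked distributions on different categories give a near-zero cosine and thus a near-zero cost for a negative pair.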

Although it is not mathematically guaranteed that the normalized vector represents the category distribution, we observe this behavior empirically in our experiments.

In the inference stage, only Part-a is used; it works as a classifier that predicts the category distribution of the input sequence.

The detailed parameter settings are given in the experiments section. We only stress here that the dimension of the normalized vector (the category distribution), which denotes the number of clusters, can be assigned arbitrarily. Since we verify our method on two benchmarks whose true labels are known, we set this dimension to the true number of categories, consistent with conventional pipelines.

2.4 Further thinking

  • On labels (noise difficulty): our strategy for constructing pseudo labels inevitably brings much noise into the training instances. In the worst case, a two-category clustering problem with a homogeneous category distribution, the labels of negative pairs are pure noise: half of these instances really come from different categories while the other half come from the same category.

  • On predictions (contradiction difficulty): at the beginning of training, instances from the same category may activate different softmax dimensions. That is, different softmax indexes may denote the same category while several categories activate the same softmax dimension. The subsequent training has to resolve this contradiction, which also leads to iterative instability.

Corresponding to these two difficulties, experiments on two typical benchmarks are analyzed below to provide further insight.
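The noise difficulty can be quantified with a one-line calculation: under a balanced distribution over k categories, a randomly sampled "negative" pair is truly negative with probability 1 - 1/k (our own back-of-the-envelope check, consistent with the worst case described above):

```python
def negative_label_accuracy(num_classes):
    """Probability that a randomly sampled 'negative' pair (two sentences
    drawn from different paragraphs) truly spans two different categories,
    assuming a balanced category distribution."""
    return 1.0 - 1.0 / num_classes
```

For two balanced categories (as on IMDB) this gives 0.5, i.e., the negative labels are pure noise; for 20 balanced categories (as on 20NG) it gives 0.95, i.e., almost all negative labels are correct.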

3 Experiments

We resort to ground truth to evaluate our clustering method quantitatively and rigorously. We choose two widely used benchmarks: the IMDB movie reviews (IMDB) dataset and the 20-Newsgroup (20NG) dataset. On IMDB, a two-category problem, we test the model's resistance to the strong pseudo-label noise in the assumed negative pairs (noise difficulty). On 20NG, we test the model's ability to deal with multi-category problems (contradiction difficulty). For further verification, we also select the English Wiki (EnWiki) dataset, which is very large and lacks the clear category boundaries of IMDB and 20NG.

A series of evaluation metrics is used in our experiments. Metrics using ground truth are accuracy, F-score (weighted, micro and macro), Adjusted Rand Index (ARI), Adjusted Mutual Information (AMI) and Normalized Mutual Information (NMI) Vinh et al. (2010). We also employ the internal metric Davies-Bouldin Index (DBI) Davies and Bouldin (1979), which does not rely on ground truth.

After obtaining the clustering results, we follow the common practice of using the Hungarian algorithm Papadimitriou and Steiglitz (1982) to assign a category name to each cluster for evaluation. The Hungarian algorithm searches all possible category permutations for the mapping of category names to clusters with the highest accuracy.
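For small numbers of clusters, the optimal mapping can equivalently be found by brute force over all permutations (an illustrative sketch, not the evaluation code of the paper; the Hungarian algorithm finds the same optimum in polynomial time):

```python
from itertools import permutations

def best_cluster_mapping(pred_clusters, true_labels, k):
    """Exhaustively search the cluster-to-category mapping that maximizes
    accuracy. Feasible only for small k; the Hungarian algorithm solves
    the same assignment problem in O(k^3)."""
    best_acc, best_map = 0.0, None
    for perm in permutations(range(k)):
        correct = sum(1 for c, t in zip(pred_clusters, true_labels)
                      if perm[c] == t)
        acc = correct / len(true_labels)
        if acc > best_acc:
            best_acc, best_map = acc, perm
    return best_map, best_acc
```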

3.1 IMDB

3.1.1 Dataset

IMDB Maas et al. (2011) is one of the largest open datasets for sentiment analysis and is generally used as a two-category classification benchmark. Each paragraph is a single review consisting of several sentences. The dataset has three partitions: 25k labeled reviews for training, 25k labeled reviews for testing and 50k unlabeled reviews. There are two label types (positive and negative), and the label distributions in the training and testing data are balanced. We combine the original training part and the unlabeled part to form our training set, ignoring the label information when training our clustering model. Performance is evaluated on the original test set.

3.1.2 Model Training

We prepare the training instances as introduced in the instance construction section above. Each positive pair is randomly chosen from the same paragraph and each negative pair is built from different paragraphs, with equal numbers of both. Since IMDB is a two-category problem, we meet the noise problem in sampling negative instances mentioned above: according to our sampling rule, half of the negative instances are correctly labeled and the other half are wrongly labeled, so the negative instances are pure noise. Nevertheless, our positive instances are guaranteed to be correctly labeled.

For the IMDB dataset, we use a single-layer bi-directional LSTM to process the input sequence, trained with a fixed learning rate and weight regularization. The dimension of the softmax layer equals the number of categories, which is 2 in this task.

3.1.3 Results

We show our clustering results together with several conventional methods in Tab. 1. The first two baselines are based on the k-Means algorithm: singular value decomposition (SVD) or the Paragraph Vector (PV) is employed to obtain low-dimensional vectors representing the text, which are then clustered with k-Means. A topic model, Latent Dirichlet Allocation (LDA), has also been applied to this task Maas et al. (2011) based on sparse text representations.

Our neural method outperforms the others by a large margin of nearly 6 points. We find this accuracy is not far from that of a simple supervised method, MNB-uni (Multinomial Naive Bayes with unigrams) Wang and Manning (2012).

Approach Acc
SVD+k-Means 62.9
PV + k-Means 72.3
LDA 67.4
NMF 62.3
Ours 78.1
Table 1: Text clustering results on the IMDB dataset. We compare our neural method with four conventional methods: SVD+k-Means, PV (Paragraph Vector)+k-Means Pelaez et al. (2015), LDA Maas et al. (2011) and NMF (non-negative matrix factorization).

3.1.4 Analysis

As mentioned in the previous section, there are many contradictory updates and instabilities during model training; the strong noise in this two-category problem strengthens this obstacle.

We initialize the network randomly, so all instances are predicted randomly at the beginning. When a negative pair is predicted to be positive, both input sequences tend to be moved into the other class; this phenomenon produces the difficulties above. The process is shown in Fig. 3, where we exhibit the update process of selected instances: two negative pairs depicted with blue lines (circles) and two positive pairs depicted with red lines (squares). Both the positive and negative pairs include one easy instance and one hard instance. An easy instance converges quickly to its true state and stays there, like the top and bottom flat curves in Fig. 3. Hard instances, the two middle lines in Fig. 3, fluctuate dramatically between the two states several times before converging to their final states.

Figure 3: The evolution of the probability in negative class.

In Fig. 3, an instance belonging to one class is first assigned to the other class and then moves back to its true class. Our model thus exhibits the ability to escape local minima. In contrast, k-Means-based methods are often restricted by their greedy nature: once an instance is assigned to a class, it is very difficult for it to escape.

3.2 20-Newsgroup

3.2.1 Dataset

The 20-Newsgroup (20NG) dataset Lang (1995) is a widely used benchmark for multi-category document clustering. It contains about 20,000 documents across 20 different categories, split into train and test sets of roughly 11k and 7.5k documents respectively. The categories are organized into main subjects as listed in Tab. 2. Categories within the same subject are closely related to each other, while the others are highly separated. Due to its difficulty, many works focus on a selected subset of categories or a group of selected category pairs; in our work, we address the problem on all 20 categories.
comp: comp.graphics comp.os.ms-windows.misc comp.sys.ibm.pc.hardware comp.sys.mac.hardware comp.windows.x
rec: rec.autos rec.motorcycles rec.sport.baseball rec.sport.hockey
sci: sci.crypt sci.electronics sci.med sci.space
misc: misc.forsale
talk: talk.politics.misc talk.politics.guns talk.politics.mideast
religion: talk.religion.misc alt.atheism soc.religion.christian
Table 2: Two-level categories of 20-Newsgroup.

For a better illustration, we also provide experimental results on another partition with four selected groups of categories Zhang et al. (2011): ‘comp’, ‘sci’, ‘rec’ and ‘talk’. The first word of each category name denotes the group it belongs to, and this subset is clustered into 4 classes.

With these two experiments, we compare performance at both coarse and fine levels of category partition. Note that, in contrast to the IMDB experiments, here we have up to 20 roughly balanced classes, which means almost all negative instances (pairs of sentences) carry correct assumed labels.

3.2.2 Model Training

We follow the same procedure as in the IMDB experiment to prepare the instances, constructing positive and negative instances by sampling sentence pairs from the same paragraph or from different paragraphs respectively. The final training corpus consists of both positive and negative instances.

We could set an arbitrary cluster number (the dimension of the softmax layer), but this would introduce post-processing steps for evaluation. For simplicity, we set the cluster number to the ground-truth number of categories.

For the 20NG dataset, two stacked bi-directional LSTMs are used to process the input sequence, as shown in Fig. 2. When clustering all 20 categories, a smaller learning rate is used; on the selected 4-category subset, we keep the same learning rate as in the IMDB experiments.

Figure 4: (a): black line: evolution of the training cost value; red line: the clustering accuracy. The procedure can be split into four stages. (b): evolution of the mean and max values of the LSTM layers. (c): the distribution of true-label types in each cluster at the four stages respectively.
Approach F-score F-score-micro F-score-macro Accuracy ARI AMI NMI
20 categories:
TF-IDF+k-Means 33.0 33.1 32.0 33.1 14.2 33.7 37.0
NMF 35.1 34.2 34.2 34.2 18.3 33.4 34.5
LDA 37.5 31.6 30.3 31.6 13.8 33.8 37.1
LSA+k-Means 39.4 37.5 38.1 37.5 18.1 37.0 38.7
Ours 42.2 50.8 40.6 50.8 41.6 53.1 57.1
4 categories:
TF-IDF+k-Means 59.8 58.4 59.3 58.4 25.6 29.1 29.2
NMF - - - 64.3 - - 44.3
LDA 54.7 61.8 53.2 61.8 35.3 34.3 37.7
LSA+k-Means 63.7 63.1 63.3 63.1 28.7 33.5 34.7
GMM - - - 51.9 - - 20.5
PLSA - - - 66.5 - - 47.6
MDCU - - - 69.0 - - 40.8
Ours 78.9 79.1 78.6 79.1 55.3 52.9 53.0
Table 3: The clustering results on 20 categories (top) and 4 categories (bottom). We compare our method with TF-IDF+k-Means, NMF, LDA, LSA+k-Means, GMM, PLSA and MDCU Zhang et al. (2011). We obtain the best performance on both category partitions with all evaluation metrics.

3.2.3 Results

First, we cluster the full 20NG dataset into 20 categories. This is the most difficult partition of this benchmark because of the large number of categories and the high similarity between closely related category pairs (see Tab. 2).

We compare our results with widely used baselines, including non-negative matrix factorization (NMF), Latent Dirichlet Allocation (LDA), Latent Semantic Analysis (LSA)+k-Means, TF-IDF+k-Means, the Gaussian mixture model (GMM) and probabilistic latent semantic analysis (PLSA). There are few other comparable works on the 20-category partition; most focus on a subset of the problem, such as clustering a group of selected categories or a pair of categories. Chen et al. (2016) and Palla et al. (2012) investigate the full dataset with Dirichlet-process-based methods (MMDPM and DPVC), but only a smaller vocabulary is used there for efficiency, so the f-scores obtained are much lower than ours (see Tab. 3). On the 4-category partition, we also compare against the results in Zhang et al. (2011).

With 20 categories, we obtain the best performance on all evaluation metrics. We note that the improvement on f-score is smaller than that on ARI and accuracy. During training we set the number of clusters to 20, but our model actually populates only a subset of them: no instance is predicted into the remaining clusters. This phenomenon decreases the f-score considerably, while ARI is specifically designed to handle it. Accuracy is essentially an instance-level rather than cluster-level evaluation, so its improvement lies between those of f-score and ARI. A larger cluster size could be set for a better evaluation score, but this would introduce post-processing techniques; here we show a straightforward training procedure that already demonstrates its advantages over other works.

Next, for a better illustration of clustering performance, we consider only the four selected groups of categories (4-category for short): ‘comp’, ‘rec’, ‘sci’ and ‘talk’. Here we use accuracy as the evaluation metric, in accordance with Zhang et al. (2011). We list our results together with conventional tools such as k-Means, GMM, PLSA and Max Margin Document Clustering with Universum (MDCU) Zhang et al. (2011) in Tab. 3. The results show that our method performs best on all evaluation metrics. Compared with the 20-category results, we obtain consistent improvements on accuracy, ARI and f-score because all categories are predicted.

Examining performance at both category levels and looking into detailed cases, we find that our model works well in predicting high-level categories. Mistakes occur in distinguishing subtle differences, such as “talk.politics.guns” versus “talk.politics.mideast”, where the improvement of our model is also largest. In addition, compared with the improvement on the IMDB benchmark, our model appears more advantageous under difficult conditions.

cluster-1 He went - with a era in innings pitched.
The cardinals responded by scoring three runs in the bottom of the fourth inning.
Rangers won the match 3-0 and therefore won the title.
cluster-2 Religious believers may or may not accept such symbolic interpretations.
Opposing views are not non-existent within the realm of christian eschatology.
Many great philosophers have spoken of the importance of exercising both humility and confidence.
cluster-3 Chrysler corporation only made 701 gtx convertibles in 1969.
Cosworth technology was then renamed as mahle powertrain on 1 july 2005.
In the route gained five new alexander dennis enviro diesel-electric hybrid single-deckers.
cluster-4 T-bag responds by starting to poison lechero’s mind against his men.
When he refuses, she slams the door on him in apparent disgust.
Later that night, buffy gets drunk with spike at his crypt.
cluster-5 Unlike all other final fantasy games, players cannot manually equip characters with armor.
Produced by bandai, the game was first introduced in Japan in February 2003.
Various weapons and accessories can be attached to many player and ai objects.
cluster-6 In april 1944, the squadron shifted from bomber escort to ground attack duties.
The entire squadron then transferred to tunis in June to attack enemy shipping.
On 6 March 1945, the two gunboats arrived at eniwetok.
Table 4: Example sentences from six clusters obtained on the English Wiki corpus.

3.2.4 Analysis

In this part, we provide deeper insight into the clustering process through the 4-category experiment shown in Fig. 4.

In graph (a), we exhibit the evolution of the square error cost (black squares) during training and the corresponding accuracy (red empty squares). There are two flat parts on the cost curve (denoted by two horizontal grey dotted lines), indicating local minima during the parameter updates. Using these flat parts, we split the training procedure into four regions, marked I, II, III and IV in Fig. 4.

The transition from one region to the next is generally accompanied by the iterative instability described in Sec. 2.4. This instability is reflected in non-monotonic or dramatic changes of the model parameters. As an example, graph (b) shows the mean and max values of the LSTM layers (the parameter curves are rescaled for better visualization).

Graph (c) shows the detailed clustering results at the four stages. At each stage we have four clusters, and each cluster contains instances whose true labels are denoted by different colors. At the beginning of training (stage I), the parameters are not yet well updated and ‘comp’ dominates most of the predicted clusters; only two categories, ‘comp’ and ‘talk’, can be predicted. In the second stage (II), the second cluster (from the left) changes and three categories can be predicted. In stage III, the model enters its final configuration, in which each cluster is dominated by instances of a different category. Finally (stage IV), the distributions within all clusters are further refined.

3.3 English Wiki

3.3.1 Dataset

We downloaded the corpus from the English Wiki website. We remove the structural information (including the head part, tail part, etc.) from the webpages and keep only the plain text of the main body. Each sentence is considered a single instance, and we keep the original sentence order of the corpus. The corpus contains millions of sentences with a million-scale vocabulary.

3.3.2 Model Training

We assume that two adjacent sentences have the same topic, while two sentences far from each other have different topics with high probability. Following this assumption, we build each positive pair from two consecutive sentences and each negative pair from two randomly selected sentences whose distance exceeds a threshold number of sentences. The model topology is the same as in Fig. 2 and the hyperparameters are the same as those used for the previous two corpora. We then cluster this corpus with the same end-to-end procedure.


3.3.3 Results and analysis

Conventional clustering tools are generally unable to handle such a large corpus, so we only exhibit the performance of our model. Since there is no ground truth for exact evaluation, we show the internal metric DBI in Fig. 5. DBI measures the worst-case ratio of the cluster radius to the distance between cluster centroids; a value of 1 means the sum of two cluster radii equals the distance between their centroids, i.e., the clusters are just separated.
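For reference, DBI can be computed as follows (a minimal sketch using Euclidean distance and points given as tuples; not the evaluation code of the paper):

```python
import math

def davies_bouldin(clusters):
    """Davies-Bouldin Index over a list of clusters (lists of point tuples).
    Lower is better; a worst-case ratio (s_i + s_j) / d_ij of 1 means the
    two cluster radii just sum to the centroid distance."""
    def centroid(pts):
        return tuple(sum(c) / len(pts) for c in zip(*pts))

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    cents = [centroid(c) for c in clusters]
    # mean distance of each cluster's points to its centroid ("radius")
    radii = [sum(dist(p, m) for p in c) / len(c)
             for c, m in zip(clusters, cents)]
    k = len(clusters)
    total = 0.0
    for i in range(k):
        total += max((radii[i] + radii[j]) / dist(cents[i], cents[j])
                     for j in range(k) if j != i)
    return total / k
```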

Figure 5: DBI curve during the training process.

Example clusters are shown in Tab. 4. Both cluster-1 and cluster-5 describe games: cluster-1 focuses on sports games while cluster-5 focuses on electronic games. Meanwhile, a sentence in cluster-5 also refers to weapons (see the last sentence), yet it is distinguished from the war-related topic of cluster-6. Furthermore, sentences with few overlapping words can still be clustered together, reflecting the advantage of the purely neural end-to-end system.

4 Related Work

Conventional text clustering methods are mainly based on Expectation-Maximization (EM)-style algorithms like k-Means Manning et al. (2008). However, k-Means only gives hard boundaries between clusters: the distance between an instance and its centroid cannot be naturally converted into a probability distribution, which makes the output difficult for downstream tools to leverage. Furthermore, its performance depends on its initialization. Latent Dirichlet Allocation (LDA) Blei et al. (2003) is an unsupervised method that clusters similar words into topic groups. LDA assumes a multinomial distribution for each word, with parameters drawn from a Dirichlet distribution; for a large corpus, however, the data may deviate from these assumed distributions.

The basis of text clustering is measuring the similarity of two texts, i.e., the distance between two text representations. Traditional representations like bag-of-words and term frequency-inverse document frequency (TF-IDF) cause sparsity problems for short texts. A dense vector representation can be constructed from an ensemble of the word embeddings Mikolov et al. (2013) in the text. The Siamese CBOW model Kenter et al. (2016) builds a sentence embedding by averaging word embeddings and fine-tunes it using the embedding similarities among a sentence, its adjacent sentences and randomly chosen sentences as the training target. The Word Mover’s Distance (WMD) Kusner et al. (2015) measures the similarity of two texts as the minimum cumulative distance from the embedded words of one text to those of the other. These methods, however, ignore word order and the semantic relationships within texts.
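The averaging idea behind Siamese-CBOW-style sentence vectors can be sketched in a few lines (an illustrative sketch; `emb` is a hypothetical word-to-vector dictionary, and skipping out-of-vocabulary words is our choice, not the original implementation):

```python
def sentence_embedding(words, emb, dim):
    """Sentence vector as the average of its word embeddings.
    Out-of-vocabulary words are skipped (an illustrative choice)."""
    vecs = [emb[w] for w in words if w in emb]
    if not vecs:
        return [0.0] * dim
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]
```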

Several neural methods for semantic text representation (sequence embedding) have been proposed and have shown advantages in a variety of downstream natural language understanding tasks. The Paragraph Vector (PV) Le and Mikolov (2014) learns a text embedding by using the paragraph vector together with word vectors as context to predict the following word. The Skip-Thought Vector (STV) model Kiros et al. (2015); Tang et al. (2017) is an encoder-decoder network that learns sequence embeddings directly by predicting the surrounding sequences of each input sequence. Hill et al. (2016) provide a systematic evaluation and comparison of unsupervised models that construct distributed representations of text; however, the optimal representation method depends on the task.

Recently, models that directly learn pairwise text similarities have been proposed based on siamese networks Bromley et al. (1994). He et al. (2015) construct siamese convolutional neural networks followed by a similarity measurement layer to learn semantic text representations, trained on similarity-labeled text pairs. Mueller and Thyagarajan (2016) present a siamese LSTM network that scores the similarity of two sentences, computed as the Manhattan distance between their representations. To train these models, however, sequence pairs with reliable similarity labels are required. The Deep Structured Semantic Model (DSSM) Huang et al. (2013) has a siamese-like structure that learns query and document embeddings in a common semantic space with deep neural networks, using the cosine similarity between query and document representations as the training target. Inspired by these works, we employ a siamese deep neural network for end-to-end text clustering. We use an unlabeled corpus and construct training instances from adjacent and distant sequence pairs. Rather than scoring the similarity of text representations, we target the similarity of the category distributions generated from the two sequence representations in a text pair.
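The pair construction described here (adjacent sequences as pseudo-positives, distant ones as pseudo-negatives) can be sketched as follows. The window size and negative-sampling scheme below are illustrative assumptions, not the exact configuration used in the paper:

```python
import random

def make_pairs(sentences, num_negatives=1, seed=0):
    """Build pseudo-labeled training pairs from an unlabeled corpus.

    Adjacent sentences form positive pairs (label 1); randomly
    sampled distant sentences form negative pairs (label 0).
    """
    rng = random.Random(seed)
    pairs = []
    n = len(sentences)
    for i in range(n - 1):
        pairs.append((sentences[i], sentences[i + 1], 1))  # adjacent -> positive
        for _ in range(num_negatives):
            j = rng.randrange(n)
            while abs(j - i) <= 1:  # resample: skip self and neighbours
                j = rng.randrange(n)
            pairs.append((sentences[i], sentences[j], 0))  # distant -> negative
    return pairs

docs = [f"sentence {k}" for k in range(6)]
pairs = make_pairs(docs)
print(len(pairs))  # → 10 (5 positive + 5 negative pairs)
```

The labels are noisy by construction (adjacent sentences may belong to different topics), which is the pseudo-label noise the model must tolerate, as discussed in the conclusion.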

5 Conclusion

We present a purely neural end-to-end method for unsupervised text clustering. Sequence representation learning and the clustering model are integrated in a unified framework. We evaluated our model on two widely used benchmarks, IMDB movie reviews and 20-Newsgroup, where the clustering results outperform the other methods by a large margin on both tasks. Our model is robust to the data noise introduced by our pseudo labels, and it performs even better on tasks with more categories, such as 20-Newsgroup.

Thanks to its flexibility, several aspects of this framework could be improved further. More sampling strategies might be explored to construct instances with higher confidence. We could also extend the single-pair topology to a pairwise topology that takes two pairs of instances as input: the model would receive two sequence pairs at a time and determine which pair is more likely to be positive. Finally, and most importantly, we expect this end-to-end property to benefit a wide range of complex natural language understanding tasks.


  • Bairi et al. (2015) Ramakrishna Bairi, Rishabh K Iyer, Ganesh Ramakrishnan, and Jeff A Bilmes. 2015. Summarization of multi-document topic hierarchies using submodular mixtures. In Proceedings of ACL 2015, pages 553–563.
  • Blei et al. (2003) D. M. Blei, A. Y. Ng, and M. I. Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learning Research, 3(4-5):993–1022.
  • Bromley et al. (1994) Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard Säckinger, and Roopak Shah. 1994. Signature verification using a "siamese" time delay neural network. In Advances in Neural Information Processing Systems, pages 737–744.
  • Cao et al. (2007) Zhe Cao, Tao Qin, Tie-Yan Liu, Ming-Feng Tsai, and Hang Li. 2007. Learning to rank: from pairwise approach to listwise approach. In Proceedings of the 24th international conference on Machine learning, pages 129–136. ACM.
  • Chen et al. (2016) Gang Chen, Haiying Zhang, and Caiming Xiong. 2016. Maximum margin dirichlet process mixtures for clustering. In Proceedings of AAAI 2016, pages 1491–1497.
  • Davies and Bouldin (1979) David L Davies and Donald W Bouldin. 1979. A cluster separation measure. IEEE transactions on pattern analysis and machine intelligence, 2:224–227.
  • Ge and Xu (2015) Wendong Ge and Bo Xu. 2015. Dialogue management based on sentence clustering. In Proceedings of ACL 2015.
  • He et al. (2015) Hua He, Kevin Gimpel, and Jimmy J Lin. 2015. Multi-perspective sentence similarity modeling with convolutional neural networks. In Proceedings of EMNLP 2015, pages 1576–1586.
  • Hill et al. (2016) Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning Distributed Representations of Sentences from Unlabelled Data. In Proceedings of NAACL-HLT 2016, pages 1367–1377.
  • Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.
  • Huang et al. (2013) Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of CIKM 2013, pages 2333–2338.
  • Jain (2010) Anil K Jain. 2010. Data clustering: 50 years beyond k-means. Pattern recognition letters, 31(8):651–666.
  • Kenter et al. (2016) Tom Kenter, Alexey Borisov, and Maarten de Rijke. 2016. Siamese cbow: Optimizing word embeddings for sentence representations. arXiv preprint arXiv:1606.04640.
  • Kiros et al. (2015) Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in neural information processing systems, pages 3294–3302.
  • Koch et al. (2015) Gregory Koch, Richard Zemel, and Ruslan Salakhutdinov. 2015. Siamese neural networks for one-shot image recognition. In ICML Deep Learning Workshop, volume 2.
  • Kusner et al. (2015) Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. 2015. From word embeddings to document distances. In Proceedings of ICML 2015, pages 957–966.
  • Lang (1995) Ken Lang. 1995. Newsweeder: Learning to filter netnews. In Proceedings of ICML 1995, pages 331–339.
  • Le and Mikolov (2014) Quoc V. Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Proceedings of ICML 2014, pages 1188–1196.
  • Liu et al. (2009) Tie-Yan Liu et al. 2009. Learning to rank for information retrieval. Foundations and Trends® in Information Retrieval, 3(3):225–331.
  • Lloyd (1982) Stuart Lloyd. 1982. Least squares quantization in pcm. IEEE transactions on information theory, 28(2):129–137.
  • Maas et al. (2011) Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of ACL 2011, pages 142–150.
  • Manning et al. (2008) Christopher D Manning, Prabhakar Raghavan, Hinrich Schütze, et al. 2008. Introduction to information retrieval, volume 1. Cambridge university press Cambridge.
  • Mikolov et al. (2013) Tomas Mikolov, Greg Corrado, Kai Chen, and Jeffrey Dean. 2013. Efficient Estimation of Word Representations in Vector Space. In Proceedings of ICLR 2013, pages 1–12.
  • Mueller and Thyagarajan (2016) Jonas Mueller and Aditya Thyagarajan. 2016. Siamese recurrent architectures for learning sentence similarity. In Proceedings of AAAI 2016, pages 2786–2792.
  • Palla et al. (2012) Konstantina Palla, Zoubin Ghahramani, and David A. Knowles. 2012. A nonparametric variable clustering model. In Advances in Neural Information Processing Systems, pages 2987–2995.
  • Papadimitriou and Steiglitz (1982) Christos H Papadimitriou and Kenneth Steiglitz. 1982. Combinatorial optimization: algorithms and complexity. Courier Corporation.
  • Pelaez et al. (2015) Alejandro Pelaez, Talal Ahmed, and Mohsen Ghassemi. 2015. Sentiment analysis of imdb movie reviews. Machine learning, 198(536).
  • Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of EMNLP 2014, pages 1532–1543.
  • Tai et al. (2015) Kai Sheng Tai, Richard Socher, and Christopher D Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075.
  • Tang et al. (2017) Shuai Tang, Hailin Jin, Chen Fang, Zhaowen Wang, and Virginia R de Sa. 2017. Trimming and improving skip-thought vectors. arXiv preprint arXiv:1706.03148.
  • Triantafillou et al. (2016) Eleni Triantafillou, Jamie Ryan Kiros, Raquel Urtasun, and Richard Zemel. 2016. Towards generalizable sentence embeddings. Proceedings of ACL 2016, page 239.
  • Vinh et al. (2010) Nguyen Xuan Vinh, Julien Epps, and James Bailey. 2010. Information theoretic measures for clusterings comparison: Variants, properties, normalization and correction for chance. Journal of Machine Learning Research, 11(Oct):2837–2854.
  • Wang and Manning (2012) Sida Wang and Christopher D. Manning. 2012. Baselines and bigrams: Simple, good sentiment and topic classification. In Proceedings of ACL 2012, pages 90–94.
  • Yao and Durme (2014) Xuchen Yao and Benjamin Van Durme. 2014. Information extraction over structured data: Question answering with freebase. In Proceedings of ACL 2014, pages 956–966.
  • Zhang et al. (2011) Dan Zhang, Jingdong Wang, and Luo Si. 2011. Document clustering with universum. In Proceedings of SIGIR 2011, pages 873–882.