In this paper we show how the performance of tweet clustering can be improved by leveraging character-based neural networks. The proposed approach overcomes the limitations related to the vocabulary explosion in the word-based models and allows for the seamless processing of the multilingual content. Our evaluation results and code are available online at https://github.com/vendi12/tweet2vec_clustering
Our use case scenario, as part of the InVID project (http://www.invid-project.eu), originates from the needs of professional journalists responsible for reporting breaking news in a timely manner. News often appears on social media exclusively or shortly before it reaches the traditional news media. Social media is also responsible for the rapid propagation of inaccurate or incomplete information (rumors). Therefore, it is important to provide efficient tools that enable journalists to rapidly detect breaking news in social media streams [Petrovic et al.2013].
The SNOW 2014 Data Challenge posed the task of extracting newsworthy topics from Twitter. The results of the challenge confirmed that the task is ambitious: the best result was an F-measure of 0.4.
Breaking-news detection involves 3 subtasks: selection, clustering, and ranking of tweets. In this paper, we address the task of tweet clustering as one of the pivotal subtasks required to enable effective breaking news detection from Twitter.
Traditional approaches to clustering textual documents involve construction of a document-term matrix, which represents each document as a bag-of-words. These approaches also require language-specific sentence and word tokenization.
Word-based approaches fall short when applied to social media data, e.g., Twitter, where a lot of infrequent or misspelled words occur within very short documents. Hence, the document representation matrix becomes increasingly sparse.
One way to overcome sparseness in a tweet-term matrix is to consider only the terms that appear frequently across the collection and drop all the infrequent terms. This procedure effectively removes a considerable amount of information content. As a result, all tweets that do not contain any of the frequent terms receive a null-vector representation. These tweets are further ignored by the model and cannot influence clustering outcomes in the subsequent time intervals, where the frequency distribution may change, which hinders the detection of emerging topics.
Artificial neural networks (ANNs) make it possible to generate dense vector representations (embeddings) efficiently at the word level as well as at the character level [dos Santos and Zadrozny2014, Zhang et al.2015, Dhingra et al.2016]. The main advantage of the character-based approaches is their language independence, since they do not require any language-specific parsing.
The major contribution of our work is the evaluation of the character-based neural embeddings on the tweet clustering task. We show how to employ character-based tweet embeddings for the task of tweet clustering and demonstrate in the experimental evaluation that the proposed approach significantly outperforms the current state-of-the-art in tweet clustering for breaking news detection.
The remainder of this paper is structured as follows: Section 2 provides an overview of the related work; Section 3 describes the setup of an extensive evaluation; Sections 4 and 5 report and discuss the results, respectively; Section 6 summarizes our findings and outlines directions for future work.
There has been a continuous effort over the recent years to design effective and efficient algorithms capable of detecting newsworthy topics in the Twitter stream [Hayashi et al.2015, Ifrim et al.2014, Vosecky et al.2013, Wurzer et al.2015]. These current state-of-the-art approaches build upon the bag-of-words document model, which results in high-dimensional, sparse representations that do not scale well and are not aware of semantic similarities, such as paraphrases.
The problem becomes evident in the case of tweets, which contain short texts with a long tail of infrequent slang and misspelled words. The performance of such approaches on Twitter datasets is very low, with an F-measure of up to 0.2 against annotated Wikipedia articles as reference topics [Wurzer et al.2015] and 0.4 against a curated topic pool [Papadopoulos et al.2014].
Artificial neural networks (ANNs) make it possible to generate dense vector representations (embeddings). Word2vec [Mikolov et al.2013] is by far the most popular approach. It accumulates co-occurrence statistics of words, which efficiently summarize their semantics.
Brigadir et al. (2014) demonstrated encouraging results using the word2vec Skip-gram model to generate event timelines from tweets. Moran et al. (2016) achieved an improvement over the state-of-the-art first story detection (FSD) results by expanding tweets with their semantically related terms using word2vec.
Neural embeddings can be efficiently generated on the character level as well. They repeatedly outperformed the word-level baselines on the tasks of language modeling [Kim et al.2016], part-of-speech tagging [dos Santos and Zadrozny2014], and text classification [Zhang et al.2015]. The main advantage of the character-based approach is its language-independence, since it does not depend on any language-specific preprocessing.
Dhingra et al. (2016) proposed training a recurrent neural network on the task of hashtag prediction. Vosoughi et al. (2016) demonstrated improved performance of a character-based neural autoencoder on the tasks of paraphrase and semantic similarity detection in tweets.
Our work extends the evaluation of the Tweet2Vec model [Dhingra et al.2016] to the tweet clustering task, versus the traditional document-term matrix representation. To the best of our knowledge, this work is the first attempt to evaluate the performance of character-based neural embeddings on the tweet clustering task.
We use the SNOW 2014 test dataset [Papadopoulos et al.2014] in our evaluation. It contains the IDs of about 1 million tweets produced within 24 hours.
We retrieved 845,626 tweets from the Twitter API; the remaining tweets had already been deleted from the platform. Our preprocessing procedure removes RT prefixes, URLs, and user mentions; converts all characters to lower case; and separates punctuation with spaces (the latter is necessary only for the word-level baseline).
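A minimal sketch of this preprocessing in Python (the regular expressions are our illustrative assumptions, not the exact rules used in the experiments):

```python
import re

def preprocess(tweet, for_word_baseline=False):
    """Clean a raw tweet following the steps described above."""
    text = re.sub(r"^RT\s+", "", tweet)        # drop the retweet prefix
    text = re.sub(r"https?://\S+", "", text)   # drop URLs
    text = re.sub(r"@\w+", "", text)           # drop user mentions
    text = text.lower()
    if for_word_baseline:
        # separate punctuation with spaces (needed only for the word-level baseline)
        text = re.sub(r"([!?.,:;()\"'])", r" \1 ", text)
    return re.sub(r"\s+", " ", text).strip()

print(preprocess("RT @user Breaking News http://t.co/abc"))  # breaking news
```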
The dataset is further separated into 5 subsets corresponding to the 1-hour time intervals (18:00, 22:00, 23:15, 01:00 and 01:30) that are annotated with the list of breaking news topics. In total, we have 48,399 tweets for clustering evaluation; the majority of them (42,758 tweets) are in English.
The dataset comes with the list of the breaking news topics. These topics were manually selected by the independent evaluators from the topic pool collected from all challenge participants (external topics). The list of topics contains 70 breaking news headlines extracted from tweets (e.g., “The new, full Godzilla trailer has roared online”). Each topic is annotated with a few (at most 4) tweet IDs, which is not sufficient for an adequate evaluation of a tweet clustering algorithm.
We enrich the topic annotations by collecting larger tweet clusters using fuzzy string matching (https://github.com/seatgeek/fuzzywuzzy) for each of the topic labels. Fuzzy string matching uses the Levenshtein (edit) distance [Levenshtein1966] between two input strings as the measure of similarity. The Levenshtein distance corresponds to the minimum number of character edits (insertions, deletions, or substitutions) required to transform one string into the other. We keep only the tweets whose similarity ratio with the topic string exceeds a 0.9 threshold.
A sample tweet cluster produced with the fuzzy string matching for the topic “Justin Trudeau apologizes for Ukraine joke”:
Justin Trudeau apologizes for Ukraine joke: Justin Trudeau said he’s spoken the head…
Justin Trudeau apologizes for Ukraine comments http://t.co/7ImWTRONXt
Justin Trudeau apologizes for Ukraine hockey joke #cdnpoli
In total, we matched 2,585 tweets to 132 clusters using this approach. The resulting tweet clusters represent the ground-truth topics within different time intervals. The cluster size varies from 1 to 361 tweets with an average of 20 tweets per cluster (median: 6.5).
This simple procedure allows us to automatically generate high-quality partial labeling. We further use this topic assignment as the ground-truth class labels to automatically evaluate different flat clustering partitions.
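For illustration, a self-contained sketch of the matching step (we implement a plain Levenshtein-based ratio here; fuzzywuzzy computes a related but not identical score, so thresholds are not directly comparable):

```python
def levenshtein(a, b):
    # classic dynamic-programming edit distance
    # (insertions, deletions, substitutions)
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def similarity_ratio(a, b):
    # normalized similarity in [0, 1]; 1.0 means identical strings
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

def match_tweets(topic, tweets, threshold=0.9):
    # collect tweets whose similarity to the topic label exceeds the threshold
    return [t for t in tweets
            if similarity_ratio(topic.lower(), t.lower()) > threshold]
```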
Our baseline is the tweet representation approach that was used in the winning system of the SNOW 2014 Data Challenge (https://github.com/heerme/twitter-topics) [Ifrim et al.2014]. This approach represents a collection of tweets as a tweet-term matrix, keeping only the bigrams and trigrams that occur in at least 10 tweets.
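The baseline representation can be sketched with scikit-learn's CountVectorizer (the toy corpus and the lowered min_df are ours; the challenge setup uses min_df=10):

```python
from sklearn.feature_extraction.text import CountVectorizer

tweets = [
    "justin trudeau apologizes for ukraine joke",
    "justin trudeau apologizes for ukraine comments",
    "completely unrelated rare words here",
]
# Keep bigrams and trigrams that occur in at least min_df tweets;
# min_df=2 is used here only so the toy corpus is not pruned to nothing.
vectorizer = CountVectorizer(ngram_range=(2, 3), min_df=2)
X = vectorizer.fit_transform(tweets)

# The third tweet shares no frequent n-gram with the others, so it receives
# a null-vector representation, as discussed above.
print(X.toarray()[2].sum())  # 0
```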
This approach includes two stages: (1) training: a neural network is trained to predict hashtags using the subset of tweets that contain hashtags (88,148 tweets in our case); (2) encoding: the trained model produces tweet embeddings for all tweets, regardless of whether they contain hashtags. We use the Tweet2Vec implementation (https://github.com/bdhingra/tweet2vec) to produce tweet embeddings.
Tweet2Vec is a bi-directional recurrent neural network that consumes textual input as a sequence of characters. The network architecture includes two Gated Recurrent Units (GRUs) [Cho et al.2014]: a forward and a backward GRU. The GRU is a simplified variant of the Long Short-Term Memory (LSTM) architecture [Hochreiter and Schmidhuber1997]. It includes two gates that control the information flow: the reset and update gates regulate how much the previous hidden state (h_{t-1}) influences the current state (h_t).
The two GRUs are identical, but the backward GRU receives the same sequence of tweet characters in reverse order. Each GRU computes its own vector representation for every substring (h_t) using the current character vector (x_t) and the vector representation it computed a step before (h_{t-1}). These two representations of the same tweet are combined in the next layer of the neural network to produce the final tweet embedding (see Dhingra et al. (2016) for details).
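For reference, the standard GRU update equations (following Cho et al. (2014); the notation here is ours, with $x_t$ the current character embedding and $h_{t-1}$ the previous hidden state) are:

$$
\begin{aligned}
r_t &= \sigma(W_r x_t + U_r h_{t-1}) && \text{(reset gate)}\\
z_t &= \sigma(W_z x_t + U_z h_{t-1}) && \text{(update gate)}\\
\tilde{h}_t &= \tanh\bigl(W x_t + U (r_t \odot h_{t-1})\bigr)\\
h_t &= z_t \odot h_{t-1} + (1 - z_t) \odot \tilde{h}_t
\end{aligned}
$$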
The network is trained in minibatches with the objective of predicting the previously removed hashtags. A hashtag can be considered the ground-truth cluster label for its tweets. The network is therefore trained to optimize for correct tweet classification, which corresponds to a supervised version of the tweet clustering task in which the cluster assignments, i.e., the hashtags, are given.
To predict the hashtags, the tweet embeddings are passed through a linear layer whose output dimension equals the number of distinct hashtags observed in the training dataset. A softmax layer on top normalizes the scores from the linear layer to produce hashtag probabilities for every input tweet.
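A sketch of this prediction head in NumPy (the shapes and names are illustrative; in the actual model the weights are trained end-to-end):

```python
import numpy as np

def hashtag_probabilities(embedding, W, b):
    """Linear layer followed by softmax over the hashtag vocabulary.

    `W` has shape (num_hashtags, embedding_dim); both are assumptions
    for illustration, not the trained Tweet2Vec parameters.
    """
    scores = W @ embedding + b   # one raw score per known hashtag
    scores -= scores.max()       # subtract max for numerical stability
    exp = np.exp(scores)
    return exp / exp.sum()

rng = np.random.default_rng(0)
probs = hashtag_probabilities(rng.normal(size=500),
                              rng.normal(size=(1000, 500)) * 0.01,
                              np.zeros(1000))
print(abs(probs.sum() - 1.0) < 1e-9)  # True: probabilities sum to one
```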
Tweet embeddings are produced by passing the tweets through the trained Tweet2Vec model (encoder). In this way we obtain vector representations for all tweets, including the ones that do not contain any hashtags. The result is a matrix of size n × d, where n is the number of tweets and d is the dimensionality of the hidden state (500).
To cluster the tweet vectors (the character-based tweet embeddings produced by the neural network for the Tweet2Vec evaluation, or the document-term matrix for TweetTerm), we employ the hierarchical clustering implementation from the fastcluster library [Müllner2013].
Hierarchical clustering includes computing pairwise distances between the tweet vectors, followed by their linkage into a single dendrogram. There are several distance metrics (Euclidean, Manhattan, cosine, etc.) and linkage methods to compare distances (single, average, complete, weighted, etc.). We evaluated the performance of different methods using the cophenetic correlation coefficient (CPCC) [Sokal and Rohlf1962] and found the best performing combination: Euclidean distance and average linkage method.
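A minimal clustering sketch (fastcluster mirrors scipy's linkage interface, so we use scipy here; the toy vectors stand in for tweet embeddings):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Two well-separated toy groups standing in for tweet embeddings.
rng = np.random.default_rng(42)
vectors = np.vstack([rng.normal(0.0, 0.05, size=(5, 4)),
                     rng.normal(3.0, 0.05, size=(5, 4))])

distances = pdist(vectors, metric="euclidean")    # pairwise distances
dendrogram = linkage(distances, method="average") # average linkage, as selected by CPCC
labels = fcluster(dendrogram, t=1.0, criterion="distance")  # cut the dendrogram
print(len(set(labels)))  # 2: the two groups are recovered as flat clusters
```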
The hierarchical clustering dendrogram can produce different flat clusterings for the same dataset: from n single-member clusters with one document per cluster to a single cluster that contains all n documents. The distance threshold defines the granularity (number and size) of the produced clusters.
Grid search helps us determine the optimal distance threshold for the dendrogram cut-off. We generate a list of values in the range from 0.1 to 1.5 with a 0.1 increment and examine their performance with respect to the ground-truth cluster assignment: we produce a flat clustering for each threshold value in the grid and compare the clusterings in terms of the quality metrics.
Since we also want to be able to select the optimal distance threshold in the absence of true labels, we examine the scores provided by the mean Silhouette coefficient [Rousseeuw1987]. Silhouette is a cluster validity index that measures the quality of the produced clusters and can be used for unsupervised intrinsic evaluation (i.e., without ground-truth labels). It was reported to outperform alternative methods in a comparative study of 30 validity indices [Arbelaitz et al.2013].
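The threshold selection can be sketched as follows (toy vectors stand in for tweet embeddings; the grid matches the one above, though the distance scale of real embeddings differs):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import silhouette_score

# Three tight toy groups standing in for tweet embeddings.
rng = np.random.default_rng(0)
vectors = np.vstack([rng.normal(i * 5.0, 0.1, size=(10, 3)) for i in range(3)])
dendrogram = linkage(vectors, method="average", metric="euclidean")

best = None
for threshold in np.arange(0.1, 1.6, 0.1):  # grid of candidate cut-offs
    labels = fcluster(dendrogram, t=threshold, criterion="distance")
    # Silhouette is defined only for 2..n-1 clusters.
    if 1 < len(set(labels)) < len(vectors):
        score = silhouette_score(vectors, labels)
        if best is None or score > best[0]:
            best = (score, threshold)
```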
We evaluate the clustering results using the standard metrics for extrinsic clustering evaluation: homogeneity, completeness, V-Measure [Rosenberg and Hirschberg2007], Adjusted Rand Index (ARI) [Hubert and Arabie1985], and Adjusted Mutual Information (AMI) [Nguyen et al.2010]. All metrics take the ground-truth and predicted cluster labels as input and return a score in the range [0, 1]. The higher the score, the more similar the two clusterings are.
The homogeneity score measures the purity of the produced clusters: it penalizes clusterings in which members of different classes are clustered together. Thus, the best homogeneity scores are always at the bottom of the dendrogram, i.e., at the level of the leaves, where each document forms its own cluster. Completeness, on the contrary, favors larger clusters and reduces the score if members of the same class are split into different clusters. Therefore, the top of the dendrogram, where all documents reside in a single cluster, always achieves the maximum completeness score.
V-Measure is designed to balance out the two extremes of homogeneity and completeness. It is the harmonic mean of the two and corresponds to the Normalized Mutual Information (NMI) score.
The AMI score is an extension of NMI adjusted for chance: the more clusters are considered, the higher the chance that two labelings correlate by coincidence. AMI allows us to compare clustering performance across different time intervals, since it normalizes the score by the number of labeled clusters in each interval.
Finally, ARI is an alternative way to assess the agreement between two clusterings. It counts all pairs of items that are clustered together or placed in different clusters in both clusterings, and it also corrects for the chance overlap of a random label assignment.
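All five metrics are available in scikit-learn; a small example of their behavior on toy labelings:

```python
from sklearn.metrics import (homogeneity_score, completeness_score,
                             v_measure_score, adjusted_rand_score,
                             adjusted_mutual_info_score)

truth = [0, 0, 1, 1, 2, 2]      # ground-truth topic labels
perfect = [5, 5, 3, 3, 7, 7]    # the same partition under different label names
split = [0, 1, 2, 2, 3, 3]      # topic 0 is split across two clusters

metrics = (homogeneity_score, completeness_score, v_measure_score,
           adjusted_rand_score, adjusted_mutual_info_score)

# An identical partition scores (numerically close to) 1.0 on every metric,
# regardless of the label names used.
assert all(m(truth, perfect) > 0.999 for m in metrics)

# Splitting a topic keeps homogeneity perfect but lowers completeness.
print(homogeneity_score(truth, split), completeness_score(truth, split))
```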
Our partial labeling covers a small subset of the data and by design yields clusters with a high degree of string overlap with the annotated topics. Therefore, we extend the clustering evaluation to the rest of the dataset to assess whether the models can uncover less straightforward semantic similarities in tweets. We select the results for manual evaluation motivated by the cluster label (headline) selection task.
The next step in the breaking news detection pipeline after the clustering task is headline selection (the cluster labeling task). The most common approach to labeling a cluster of tweets is to select a single tweet as a representative member for the whole cluster [Papadopoulos et al.2014]. We decided to test this assumption and manually check how many clusters lose their semantics when represented by a single tweet.
Headline selection motivates the coherence assessment of the produced clusters since the clusters discarded at this stage will never make it to the final results. To explore coherence of the produced clusters we pick several tweets in each cluster and check whether they are semantically similar.
The tweet selected as a headline (cluster label) can be the first published tweet, as in the First Story Detection (FSD) task and in Ifrim et al. (2014). Alternative approaches select the most recent tweet published on the topic, or the tweet that is semantically most similar to all other tweets in the cluster, i.e., the tweet closest to the centroid of the cluster (a medoid tweet). Therefore, we sample 5 tweets from each cluster: the first published tweet, the most recent tweet, and three medoid tweets.
We set up a manual evaluation task as follows:
Take the top 20 largest clusters sorted by the number of tweets that belong to the cluster.
For each cluster:
Take the first and the last published tweet (tweets are previously sorted by the publication date).
Take three medoid-tweets, i.e., the tweets that appear closest to the centroid of the cluster.
Add the 5 tweets to the set associated with the cluster (removing exact duplicate tweets).
For all clusters, where the set of selected tweets contains at least two unique tweets: 4 human evaluators independently assess the coherence of each cluster.
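The representative-tweet selection above can be sketched as follows (the inputs are illustrative; in our setup the embeddings would be the Tweet2Vec vectors of one cluster):

```python
import numpy as np

def select_representatives(embeddings, timestamps, k=3):
    """Pick the first, the last, and the k medoid tweets of one cluster.

    `embeddings` is the (n, d) matrix of tweet vectors for the cluster;
    `timestamps` gives each tweet's publication time. Both names are
    illustrative assumptions, not the authors' exact interface.
    """
    order = np.argsort(timestamps)                 # publication order
    centroid = embeddings.mean(axis=0)
    dist = np.linalg.norm(embeddings - centroid, axis=1)
    medoids = np.argsort(dist)[:k]                 # tweets closest to the centroid
    # A set removes exact duplicates among the selected indices.
    return {order[0], order[-1], *medoids}

emb = np.array([[0.0, 0.0], [1.0, 1.0], [0.9, 1.1], [5.0, 5.0]])
picked = sorted(int(i) for i in select_representatives(emb, timestamps=[3, 1, 2, 4], k=2))
print(picked)  # [1, 2, 3]
```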
According to the evaluation setup, each model produced 20 top clusters for each of the 5 intervals, i.e., 100 clusters per model. We manually evaluate only the clusters that contain more than one distinct representative tweet (Clusters>1). All other clusters, i.e., the ones for which all 5 selected tweets are identical (Clusters=1), are considered correct by default.
Results for all 5 intervals were evaluated together in a single pool and the models were anonymized to avoid biases. Each evaluator independently assigned a single score to each cluster:
Correct – all tweets report the same news;
Partial – some tweets are not related;
Incorrect – none of the tweets are related.
Partial and Incorrect labels reflect different types of clustering errors. Partial error is less severe indicating that the tweets of the cluster are semantically similar, but they report different news (events) and should be split into several clusters. Incorrect clusters indicate a random collection of tweets with no semantic similarities.
(Table 2 columns: Model | Dataset | Clusters | Correct (%) | Errors (%))
Table 1 summarizes the results of our evaluation using the ground-truth partial labeling. The scores highlighted with the bold font indicate the best result among the two competing approaches for the same subset of tweets corresponding to the respective time interval.
Tweet2Vec exhibits better clustering performance than the baseline according to the majority of the evaluation metrics in all intervals. In all cases the Tweet2Vec model wins in terms of homogeneity, while TweetTerm wins in completeness. This result shows that Tweet2Vec is better than the baseline model at separating tweets that are not sufficiently similar. Tweet2Vec fails only once to perfectly separate the ground-truth clusters (the 18:00 interval). This result shows that Tweet2Vec is able to replicate the results of the fuzzy string matching algorithm that was used to generate the ground-truth labeling.
The rise in V-Measure correlates with the decline of the Silhouette coefficient and the steep drop in the number of produced clusters (see Figure 1). We observed that the optimal distance threshold for Tweet2Vec clustering according to V-Measure lies in the interval [0.8, 1] (see Table 1: Distance threshold), which is also consistent with the findings reported by Ifrim et al. (2014).
Results of the manual cluster evaluation by four independent evaluators are summarized in Table 2. Bold font indicates the maximum scores achieved across the competing representation approaches. Tables 3 and 4 show sample clusters produced by both models alongside their average score.
TweetTerm assigns a 0-vector representation to tweets that do not contain any of the frequent terms. Hence, all these tweets end up in a single “garbage” cluster. Therefore, we discount the number of the expected “garbage” clusters (1 cluster per interval = 5 clusters) from the score count for TweetTerm (Table 2).
The Tweet2Vec model produces the largest number of perfectly homogeneous clusters, for which all 5 selected tweets are identical (see Table 2, column Clusters=1). The percentage of correct results among the manually evaluated clusters is higher for the TweetTerm model, but the number of errors (Incorrect) is higher as well. Tweet2Vec produced the highest total percentage of correct clusters due to the larger proportion of detected clusters that contain identical tweets (Clusters=1). Tweet2Vec also produced the fewest incorrect clusters: at most 2 incorrect clusters per 100 clusters (precision: 0.98).
The results of Tweet2Vec on the multilingual dataset are lower than on the English-language tweets. However, we have no alternative results to compare against, since the baseline approach is not language-independent and requires additional functionality (word-level tokenizers) to handle tweets in other languages, e.g., Arabic or Chinese. We provide these evaluation results to demonstrate that Tweet2Vec overcomes this limitation and is able to cluster tweets in different languages. In particular, we obtained correct clusters of Russian and Arabic tweets.
We observed that leaving the URLs in does not significantly affect clustering performance, i.e., the model tolerates noise. However, replacing the URLs and user mentions with placeholders, as in Dhingra et al. (2016), generates syntactic patterns in the text, such as @user @user @user, which causes semantically unrelated tweets to appear within the same cluster.
Our experimental evaluation showed that the character-based embeddings produced with a neural network outperform the document-term baseline on the tweet clustering task. The baseline approach (TweetTerm) shows very good performance given the simplicity of its implementation, but it naturally falls short in recognizing patterns beyond simple n-gram matching.
We attribute this result to the inherent limitation of the document-term model, which retains only the frequent terms and disregards the long tail of infrequent patterns. This limitation is crucial in the task of emergent news detection, in which topics need to be detected long before they become popular. Neural embeddings, in contrast, retain a sufficient level of detail in their representations and are able to mirror the fuzzy string matching performance beyond simple n-gram matching.
It becomes apparent from the sample clustering results (Tables 3 and 4) that both models perform essentially the same task of unveiling patterns shared by a group of strings. While TweetTerm operates only on patterns of identical n-grams, Tweet2Vec goes beyond this limitation by allowing variation within an n-gram substring, similar to fuzzy string matching. This makes it possible to capture subtle variations in strings, e.g., misspellings, which word-based approaches are incapable of.
Our error analysis also revealed a limitation of the neural embeddings: they cannot distinguish between semantic and syntactic similarity in strings (see the Incorrect samples in Table 3). Tweet2Vec, as a recurrent neural network, represents not only the characters but also their order in the string, which may be a false similarity signal. It is evident that the neural representations in our example would benefit from stop-word removal or a weighting scheme analogous to TF-IDF to avoid capturing punctuation and other merely syntactic patterns.
Neural networks gain performance when more data is available. We could use only 88,148 tweets from the dataset to train the neural network, which may be insufficient to unfold the potential of the model to recognize more complex patterns. Also, due to the scarce annotation available, we could use only a small subset of the original dataset for our clustering evaluation. Since most of the SNOW tweets are in English, another dataset is needed for a comprehensive multilingual clustering evaluation.
We showed that character-based neural embeddings enable accurate tweet clustering with minimal supervision. They provide fine-grained representations that help to uncover fuzzy similarities in strings beyond simple n-gram matching. We also demonstrated the limitation of the current approach, which is unable to distinguish semantic from syntactic patterns in strings; this provides a clear direction for future work.
The presented work was supported by the InVID Project (http://www.invid-project.eu/), funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No 687786. Mihai Lupu was supported by Self-Optimizer (FFG 852624) in the EUROSTARS programme, funded by EUREKA, the BMWFW and the European Union, and by ADMIRE (P25905-N23), funded by FWF. We thank Bhuwan Dhingra for the support in using Tweet2Vec and Linda Andersson for the review and helpful comments.
NewsKDD - Workshop on Data Science for News Publishing at The 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’14, August 24-27, 2014, New York, NY, USA.
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, pages 1724–1734.
Tweet2vec: Character-based distributed representations for social media. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany.
Proceedings of the 31st International Conference on Machine Learning, ICML 2014, 21-26 June, 2014, Beijing, China, pages 1818–1826.
Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA, pages 2741–2749.
Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. Journal of Computational and Applied Mathematics, 20:53–65.