Content-driven, unsupervised clustering of news articles through multiscale graph partitioning

08/03/2018 ∙ by M. Tarik Altuncu, et al. ∙ Imperial College London

The explosion in the amount of news and journalistic content being generated across the globe, coupled with extended and instantaneous access to information through online media, makes it difficult and time-consuming to monitor news developments and opinion formation in real time. There is an increasing need for tools that can pre-process, analyse and classify raw text to extract interpretable content; specifically, identifying topics and content-driven groupings of articles. We present here such a methodology that brings together powerful vector embeddings from Natural Language Processing with tools from Graph Theory that exploit diffusive dynamics on graphs to reveal natural partitions across scales. Our framework uses a recent deep neural network text analysis methodology (Doc2vec) to represent text in vector form and then applies a multi-scale community detection method (Markov Stability) to partition a similarity graph of document vectors. The method allows us to obtain clusters of documents with similar content, at different levels of resolution, in an unsupervised manner. We showcase our approach with the analysis of a corpus of 9,000 news articles published by Vox Media over one year. Our results show consistent groupings of documents according to content without a priori assumptions about the number or type of clusters to be found. The multilevel clustering reveals a quasi-hierarchy of topics and subtopics with increased intelligibility and improved topic coherence as compared to external taxonomy services and standard topic detection methods.


1. Introduction

The production of news content is growing at an astonishing rate. To help manage and monitor such a sheer amount of text, there is an increasing need to develop efficient methods that can provide insight into emerging content areas, and help with the stratification of articles and written pieces according to ‘topics’ that follow intrinsically from content similarity. This is in contrast with traditional approaches to article classification, typically based on keywords, word frequencies, and man-made hierarchies. Methodologies that provide automatic clustering of articles based on content, directly from free text without external labels or categories, could thus provide alternative ways to monitor the generation and emergence of news content beyond standard, broad classes.

The area of text mining and natural language processing (NLP) has a long history. Classic methods for topic detection have usually been based on the characterisation of documents using weighted word frequencies (the Bag-of-Words (BoW) representation), followed by statistical methods, notably Latent Dirichlet Allocation (LDA), to cluster documents into topics. However, such methods do not fully capture the highly meaningful chains of association present in natural language, nor the hierarchical relationships between topics at different levels of resolution. Recently, powerful methods based on deep neural networks have been introduced to represent words and documents through vector embeddings (Le and Mikolov, 2014), and have shown marked improvements over statistical BoW descriptions. Some approaches have considered clustering based on such vectors (Hashimoto et al., 2016), but they lack a full multiscale graph analysis leading to a hierarchy of topics. Network-theoretic methods applied to text analysis (Lancichinetti et al., 2015) suffer from the same limitation, and additionally lack the power afforded by text vector embeddings.

Here we present a method that combines the advantages of both: paragraph vectors to represent text (Le and Mikolov, 2014), and a multi-scale community detection algorithm (Delvenne et al., 2010) applied to the similarity graph between document vectors. This approach allows us to obtain different graph partitions (from finer to coarser) corresponding to groupings of articles with consistent content at different levels of resolution (from more specific to more generic). Therefore the groupings of articles emerge directly in an unsupervised manner from the vector embedding, which captures both syntactic and semantic characteristics of the text, instead of fitting to pre-designed classifications.

2. Methodology

2.1. Data

As part of the DS+J 2017 Workshop (dsj, 2017), Vox Media made available a dataset containing all the articles published by Vox before March 21, 2017. (The corpus of Vox articles is accessible at https://data.world/elenadata/vox-articles.) We used this dataset and analysed the content published during the whole of 2016. This corresponds to a corpus of 9021 articles on a wide range of topics, from politics to sport to human interest, with a clear focus on the US, where Vox is based. Given that 2016 was an electoral year in the US, news about the election and the primaries feature heavily in the corpus, as do other events such as Brexit and the Olympic Games.

2.2. Computational framework for the clustering

A schematic of our computational pipeline is presented in Figure 1. Our methodology consists of text processing and graph operations.

2.2.1. Text operations.

Before starting the analysis, it is necessary to apply standard pre-processing to the text. The pre-processing step tokenises documents into words, removes tokens with no distinct meaning (such as digits and stop words), and normalises the remaining tokens by stripping suffixes using stemming methods, as implemented in the NLTK module (Bird et al., 2009). This protocol is applied to all the text we analyse.
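The protocol can be sketched in a few lines. The stop-word list and the suffix stripper below are crude illustrative stand-ins for NLTK's stopword corpus and Porter stemmer, not the actual implementation:

```python
import re

# Toy stop-word list standing in for NLTK's English stopwords (illustrative only).
STOP_WORDS = {"the", "a", "an", "is", "are", "of", "to", "and", "in"}

def crude_stem(token):
    """Very rough stand-in for a Porter-style stemmer: strip common suffixes."""
    for suffix in ("ing", "ed", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def preprocess(document):
    """Tokenise, drop digits and stop words, and stem the remaining tokens."""
    tokens = re.findall(r"[a-z]+", document.lower())  # keeps alphabetic tokens only
    return [crude_stem(t) for t in tokens if t not in STOP_WORDS]
```

Note that the regex tokeniser already discards digits, matching the pre-processing step described above.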

We also noticed that the raw articles contained repeated, meaningless wrapper sentences (mostly header or footer scripts, directions to interact with multimedia content, signatures, etc.), which introduce a considerable amount of noise into the analysis. To eliminate such scripted sentences in the news articles, we created a hashed dictionary with sentence tokens as keys and the frequency of each sentence in the corpus as values. We then reconstructed the articles using only the sentences with frequency up to 2. (Allowing for sentences with a frequency of 2 captures the possibility that quotations from a press release might have been copied in multiple places from an original article.) This procedure substantially (but not totally) reduces the amount of unwanted, meaningless wrapper phrases.
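A minimal sketch of this frequency-based filter (the function name and the corpus representation as lists of sentence strings are illustrative, not from the original pipeline):

```python
from collections import Counter

def strip_wrapper_sentences(articles, max_freq=2):
    """Remove boilerplate by dropping sentences repeated more than max_freq times.

    `articles` is a list of articles, each a list of sentence strings.
    A hashed dictionary (Counter) maps each sentence to its corpus frequency;
    sentences kept must occur at most max_freq times, allowing a quotation to
    legitimately appear in two articles.
    """
    freq = Counter(s for article in articles for s in article)
    return [[s for s in article if freq[s] <= max_freq] for article in articles]
```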

Figure 1. Schematic of the data pipeline for the analysis of the Vox Media corpus documents.

Our text operations continue with the training of a Doc2vec model (Le and Mikolov, 2014) based on a corpus of 5.4 million articles from Wikipedia. Doc2vec, also known as paragraph vectors (PV), is a neural network model. It starts with random initial vectors for each word and document and adjusts the vectors over iterations of the algorithm. In this case, we apply the PV-DBOW method, which uses document vectors to predict a set of words sampled from the document; each iteration runs this learning phase over the whole corpus of documents. Here, we use the Gensim (Rehurek and Sojka, 2010) module with parameters as suggested in (Lau and Baldwin, 2016). In our final optimised model, the parameters are set to: training method = dbow, number of dimensions for feature vectors (size) = 300, number of epochs = 10, window size = 5, minimum count = 20, number of negative samples = 5, random down-sampling threshold for frequent words = 0.001. By using a large Wikipedia corpus, our Doc2vec model is trained on a standard collection of English-language documents on a variety of topics, thus providing a good representation of the general text encountered in news articles.

Once the Doc2vec model is trained on the Wikipedia corpus, we apply it to the Vox news articles (pre-processed and cleaned from wrappers). The model is used to infer a 300-dimensional paragraph vector for each of the 9021 articles. This paragraph vector, which encapsulates a variety of semantic and syntactic characteristics of the text, is used as the article descriptor in our analysis.

2.2.2. Graph operations.

Our aim is to find groups of articles with high in-group similarity and low out-of-group similarity based on paragraph vectors. This classic clustering problem can be approached in a myriad of ways, from simple hierarchical clustering to k-means to spectral clustering (von Luxburg, 2007; Schaub et al., 2017). We discuss the comparison with some of these standard methods in Section 3.3. Here we take a graph-theoretical approach that uses multi-scale community detection to find groups in a similarity graph of the articles.

We start by computing the cosine similarity between the paragraph vectors of all pairs of articles. These similarities form a similarity matrix, which can be thought of as the weighted adjacency matrix of a complete graph. To construct a sparser graph that preserves the local geometry of the data and retains global connectivity in the dataset, we construct a geometric graph using the MST-kNN method (Veenstra et al., 2017). The result is a similarity graph where the nodes are articles and the weighted edges represent similarities between them.
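The MST-kNN construction can be sketched as follows; this is an illustrative stdlib-only version (Prim's algorithm on the complete cosine-distance graph, then union with each node's k nearest neighbours), not the implementation of Veenstra et al.:

```python
import math, heapq

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def mst_knn_graph(vectors, k=2):
    """Sketch of MST-kNN: union of a minimum spanning tree on the cosine
    distances (guaranteeing global connectivity) with each node's k nearest
    neighbours (preserving local geometry). Returns a set of undirected edges."""
    n = len(vectors)
    dist = [[1.0 - cosine(vectors[i], vectors[j]) for j in range(n)] for i in range(n)]

    # Prim's algorithm for the MST of the complete distance graph.
    edges, visited, heap = set(), {0}, []
    for j in range(1, n):
        heapq.heappush(heap, (dist[0][j], 0, j))
    while len(visited) < n:
        d, i, j = heapq.heappop(heap)
        if j in visited:
            continue
        visited.add(j)
        edges.add((min(i, j), max(i, j)))
        for m in range(n):
            if m not in visited:
                heapq.heappush(heap, (dist[j][m], j, m))

    # Add each node's k nearest neighbours.
    for i in range(n):
        nearest = sorted((dist[i][j], j) for j in range(n) if j != i)[:k]
        for _, j in nearest:
            edges.add((min(i, j), max(i, j)))
    return edges
```

For two tight pairs of vectors, the kNN edges recover the local pairs while the MST supplies the bridge between them, so the graph stays connected even across weakly similar regions.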

We apply Markov Stability (Delvenne et al., 2010; Lambiotte et al., 2008; Delvenne et al., 2013; Schaub et al., 2012; Lambiotte et al., 2014) to the similarity graph in order to extract the multi-scale community structure intrinsic to the graph. (The code for Markov Stability is open and accessible at https://github.com/michaelschaub/PartitionStability and http://wwwf.imperial.ac.uk/~mpbara/Partition_Stability/, last accessed March 24, 2018.) Markov Stability scans across all levels of resolution and finds consistent and robust partitions without imposing a priori the number of communities to be found.

We summarise the Markov Stability (MS) graph partitioning algorithm here. For details, see (Delvenne et al., 2010; Delvenne et al., 2013; Schaub et al., 2012; Lambiotte et al., 2014). Given the (symmetric) adjacency matrix $A$ of the (undirected) similarity graph with $N$ nodes, obtained as above, we define $d = A \mathbf{1}$, the vector of node degrees, and $D = \mathrm{diag}(d)$. The random walk Laplacian matrix is defined as $L = I_N - D^{-1} A$, where $I_N$ is the identity matrix of size $N$. The transition matrix of the associated continuous-time Markov process taking place on the graph is given by $P(t) = e^{-t L}$ for $t > 0$ (Delvenne et al., 2010; Lambiotte et al., 2008; Delvenne et al., 2013; Lambiotte et al., 2014). The time parameter $t$ of this process is denoted the Markov time.

MS is a hard clustering method that searches for partitions at each Markov time through the optimisation of a cost function, as follows. For a given partition into $C$ clusters, we have a 0-1 membership matrix $H \in \{0,1\}^{N \times C}$ that maps the $N$ nodes to the $C$ clusters (or communities). We then define the clustered autocovariance matrix:

$$R(t, H) = H^{T} \left[ \Pi P(t) - \pi \pi^{T} \right] H \qquad (1)$$

where $\pi$ is the steady-state distribution of the Markov process and $\Pi = \mathrm{diag}(\pi)$. $R(t, H)$ is a $C \times C$ matrix, specific to the partition $H$ on the graph, and its element $[R(t,H)]_{\alpha\beta}$ quantifies the probability that a random walker starting from community $\alpha$ will end in community $\beta$ at time $t$, minus the probability that such an event occurs by chance at stationarity.

With these definitions, we introduce the cost function for the graph partitioning optimisation. The Markov Stability of a partition with membership matrix $H$ at time $t$ is defined as the trace of the clustered autocovariance matrix (1) of the diffusion process:

$$r(t, H) = \mathrm{trace}\left[ R(t, H) \right] \qquad (2)$$

From this definition, a partition that maximises $r(t, H)$ is one in which the diagonal elements of $R(t, H)$ are large and the off-diagonal elements small; i.e., such a partition is comprised of well-defined communities, in the sense that they retain within them the probabilistic flow of the Markov process over time $t$. Our aim is therefore to find partitions that maximise the Markov Stability, parametrically as a function of the Markov time $t$. The maximisation of (2) is an NP-hard problem (hence with no guarantee of global optimality), but there are efficient optimisation methods that work well in practice. Our implementation uses the Louvain algorithm (Blondel et al., 2008), which is both computationally efficient and known to give good results on benchmarks (Lambiotte et al., 2014; Lambiotte et al., 2008; Bacik et al., 2016).
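To make the cost function concrete, the sketch below evaluates r(t, H) directly for a toy graph of two triangles joined by an edge. This is an illustration of equation (2), not the authors' optimised implementation (which uses Louvain), and the matrix exponential is a naive Taylor series adequate only for small examples:

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def expm(M, terms=40):
    """Taylor-series matrix exponential, adequate for small matrices and times."""
    n = len(M)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    power = [row[:] for row in result]
    for k in range(1, terms):
        power = mat_mul(power, M)
        result = [[result[i][j] + power[i][j] / math.factorial(k) for j in range(n)]
                  for i in range(n)]
    return result

def markov_stability(A, communities, t):
    """r(t, H) = trace(H^T [Pi P(t) - pi pi^T] H) for an undirected graph A.

    `communities` assigns a community label to each node (hard partition).
    """
    n = len(A)
    d = [sum(row) for row in A]
    total = sum(d)
    pi = [di / total for di in d]  # stationary distribution pi = d / 2m
    # Random walk Laplacian L = I - D^-1 A, then P(t) = exp(-t L).
    L = [[(1.0 if i == j else 0.0) - A[i][j] / d[i] for j in range(n)]
         for i in range(n)]
    P = expm([[-t * L[i][j] for j in range(n)] for i in range(n)])
    r = 0.0
    for c in set(communities):
        nodes = [i for i in range(n) if communities[i] == c]
        # Diagonal block of R(t, H): retained flow minus the stationary baseline.
        r += sum(pi[i] * P[i][j] - pi[i] * pi[j] for i in nodes for j in nodes)
    return r
```

For this graph, the partition into the two triangles scores higher than an interleaved assignment, and the trivial all-in-one partition scores zero, as expected from (2).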

Figure 2. The top plot presents the results of the Markov Stability algorithm, showing the number of clusters of the optimised partition (red), the variation of information VI(t) for the ensemble of optimised solutions at each time (blue), and the variation of information VI(t, t') between the optimised partitions across Markov time (background colourmap). Relevant partitions are indicated by dips of VI(t) and extended plateaux of VI(t, t'). We choose 4 levels with different resolutions (from 28 communities to 5) in our analysis. The Sankey diagram (bottom) illustrates how the communities of documents (indicated by numbers and colours) map across Markov time scales. Note that the community structure across scales presents a strong quasi-hierarchical character, a result of the analysis and the properties of the data; it is not imposed a priori. The partitions for the four chosen levels are also shown as colourings on the MST-kNN similarity graph.

The dynamic sweeping provided by the Markov process is key to the MS community detection procedure. As the Markov time t grows, the Markov process explores larger horizons within the graph, and the communities become coarser. Hence the Markov time can be understood as providing a varying level of resolution: scanning t provides a means of finding intrinsically good partitions of the graph at all time scales. However, not all the optimised partitions found in our scan are equally relevant. We look for robust and persistent partitions, as follows. At each t, we run the Louvain algorithm 500 times with different initialisations and collect all the optimised partitions. We then compute the mean variation of information (Meilă, 2007) of this ensemble, VI(t), which provides a measure of the difference between all the partitions found by running the algorithm repeatedly. A low value of VI(t) indicates a highly reproducible result under the optimisation, i.e., a robust partition. In addition, we look for partitions that are persistent across Markov times (i.e., partitions that are optimal over an extended scale). To quantify this, we compute the variation of information between optimised partitions across time, VI(t, t'), and look for partitions found over prolonged Markov time spans. Relevant partitions are those where VI(t) shows a dip and VI(t, t') has an extended plateau, thus indicating robustness and persistence (Bacik et al., 2016; Lambiotte et al., 2014).
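The variation of information between two hard partitions, used here both for VI(t) and VI(t, t'), can be computed directly from the cluster assignments; a compact sketch:

```python
import math
from collections import Counter

def variation_of_information(p1, p2):
    """VI(P1, P2) = H(P1) + H(P2) - 2 I(P1, P2) between two hard partitions,
    each given as a list assigning a cluster label to every node."""
    n = len(p1)
    c1, c2 = Counter(p1), Counter(p2)
    joint = Counter(zip(p1, p2))
    h1 = -sum(v / n * math.log(v / n) for v in c1.values())
    h2 = -sum(v / n * math.log(v / n) for v in c2.values())
    mi = sum(v / n * math.log((v / n) / ((c1[a] / n) * (c2[b] / n)))
             for (a, b), v in joint.items())
    return h1 + h2 - 2 * mi
```

VI is zero exactly when the two partitions coincide (up to relabelling), which is why dips of VI(t) signal reproducible optimisation results.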

We study robust MS partitions of the similarity graph of documents obtained at selected resolution scales, from finer to coarser. The communities of nodes in a partition correspond to groups of documents with similar content. An example of this analysis is presented in Fig. 2.

2.3. A posteriori analysis of clustering results

2.3.1. Visualising membership with Sankey diagrammes.

The relationship between different partitions (found across scales or using different methods) is captured through Sankey diagrammes. This standard visualisation represents membership across groupings, and reveals visually the quasi-hierarchical structure in the data (see Figs. 2 and 3) or the consistency of groups across partitions (see Fig. 4).

2.3.2. Content summaries for clusters.

To aid our interpretation of the clusters of documents found with MS, we extract descriptive features a posteriori. Specifically, for each cluster we generate word clouds to represent common words in it, and the paragraph vector for the cluster centroid, from which we obtain the nearest Wikipedia article, Vox Media article, and Word2vec words to each cluster (see Fig. 3).

2.3.3. Evaluating the quality of the clusterings.

The absence of a ground truth in our study leads us to use two different types of measures to quantify the consistency and relevance of content within the clusters of documents. We used these two measures to evaluate the quality of our clusters in comparison with other methods and against commercial news classification services.

Measuring intrinsic Topic Coherence

To check for topic coherence within each cluster, we use the pointwise mutual information (PMI), an information-theoretic score that reflects semantic similarities between words based on their probability of being used together in the same document. The PMI score for a pair of words $(w_i, w_j)$ is given by:

$$\mathrm{PMI}(w_i, w_j) = \log \frac{P(w_i, w_j)}{P(w_i)\, P(w_j)} \qquad (3)$$

The probabilities of the words, $P(w_i)$ and $P(w_j)$, and of their co-occurrence, $P(w_i, w_j)$, are obtained from our Vox corpus. To quantify the topic coherence of each cluster of documents, we use the median PMI score between its 15 most common words (changing the number of words considered gives similar results).

Finally, to obtain an aggregate topic coherence score for a set of clusters, we take the weighted average of the cluster scores. The PMI score has been shown to perform best (Newman et al., 2009, 2010) when compared to human interpretation of topics across different applications and corpora (Newman et al., 2011; Fang et al., 2016).
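An illustrative computation of the per-cluster coherence score, with word and co-occurrence probabilities estimated at document level. Skipping pairs that never co-occur is a simplifying choice for this sketch, not necessarily the smoothing used in the paper:

```python
import math
from collections import Counter
from itertools import combinations

def median_pmi_coherence(documents, top_n=15):
    """Median PMI (3) over pairs of the top_n most common words in a cluster.

    `documents` is the cluster: a list of documents, each a list of tokens.
    """
    n_docs = len(documents)
    doc_sets = [set(doc) for doc in documents]
    word_df = Counter(w for ds in doc_sets for w in ds)  # document frequencies
    top = [w for w, _ in Counter(w for doc in documents for w in doc).most_common(top_n)]
    scores = []
    for wi, wj in combinations(top, 2):
        co = sum(1 for ds in doc_sets if wi in ds and wj in ds)
        if co == 0:
            continue  # simplifying choice: skip pairs that never co-occur
        pmi = math.log((co / n_docs) /
                       ((word_df[wi] / n_docs) * (word_df[wj] / n_docs)))
        scores.append(pmi)
    scores.sort()
    mid = len(scores) // 2
    return scores[mid] if len(scores) % 2 else 0.5 * (scores[mid - 1] + scores[mid])
```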

Comparing different partitions

To compare how similar two partitions $P_1$ and $P_2$ of the same graph are, we use the normalised mutual information (NMI):

$$\mathrm{NMI}(P_1, P_2) = \frac{I(P_1, P_2)}{\sqrt{H(P_1)\, H(P_2)}} \qquad (4)$$

where $I(P_1, P_2)$ is the mutual information between the two partitions, and $H(P_1)$ and $H(P_2)$ are the entropies of the two cluster assignments. The score is bounded, $0 \le \mathrm{NMI} \le 1$, and a higher value of NMI reflects higher similarity of the partitions. We use this score to compare the partitions obtained by MS and other methods against the classifications assigned by commercial services.
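A direct sketch of the NMI computation, assuming the geometric-mean normalisation of the entropies:

```python
import math
from collections import Counter

def normalised_mutual_information(p1, p2):
    """NMI(P1, P2) = I(P1, P2) / sqrt(H(P1) H(P2)) for two hard partitions,
    each given as a list assigning a cluster label to every node."""
    n = len(p1)
    c1, c2 = Counter(p1), Counter(p2)
    joint = Counter(zip(p1, p2))
    h1 = -sum(v / n * math.log(v / n) for v in c1.values())
    h2 = -sum(v / n * math.log(v / n) for v in c2.values())
    mi = sum(v / n * math.log((v / n) / ((c1[a] / n) * (c2[b] / n)))
             for (a, b), v in joint.items())
    return mi / math.sqrt(h1 * h2)
```

Identical partitions (up to relabelling) score 1, and statistically independent ones score 0, matching the stated bounds.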

3. Results

3.1. Multiscale clustering of the Vox corpus of news articles

Figure 2 shows the results of applying Markov Stability to the similarity graph constructed from the 9021 news articles of the Vox corpus published during 2016. MS sweeps across all scales (i.e., across Markov times) and reveals that the news corpus presents a high level of structure in its content, as can be seen from the existence of strong modular substructures at different levels of resolution (i.e., long plateaux of VI(t, t') at a variety of Markov times, with corresponding dips in VI(t)). This multiscale structure is indicative of the presence of groups of similar documents at different levels of granularity. To illustrate our results, we concentrate on four chosen resolution levels of increasing coarseness with 28, 15, 9, and 5 clusters of news articles.

The multi-level Sankey diagramme (Fig. 2, bottom) shows that the clusters of similar documents present a quasi-hierarchical structure across scales, i.e., finer groups of similar articles integrate with other finer groups into coarser groups in a consistent manner. It is important to remark that this result emerges directly from the structure of the data, as MS does not impose the existence of such a hierarchy in the multiscale clusterings. This organisation reflects the fact that the news clusters display a relationship based on content at different levels of granularity, as shown below.

3.2. Describing the Topic Clusters

Figure 3. The Sankey diagramme illustrates the consistency between the finer partition into 15 clusters and the coarser partition into 5 clusters. We also show the corresponding word clouds for both levels, which highlight the specificity of the topics found in the text and their consistent integration into 5 broader areas (‘Entertainment’, ‘Politics’, ‘Healthcare’, ‘Social issues’, and ‘Brexit’). Hence the multi-resolution coarsening of the content mirrors the multi-level, quasi-hierarchical community structure found in the document similarity graph. In addition to the word clouds, for the 5-cluster partition we also include the closest words, closest Wikipedia page, and closest Vox article to the centroid of each of the clusters. Our vector representations allow us to find such mappings directly from the data, thus further using the original data to aid the interpretability of the obtained clusters.

We illustrate the methodology with the analysis of selected levels of granularity. Figure 3 illustrates the relationship between two of the scales in the MS partitioning (15 and 5 clusters) using a Sankey diagramme and word clouds that provide insight as to the content of the different clusters.

As shown above, both levels display a strong quasi-hierarchical structure, with smaller ‘sub-topics’ integrating to form larger ‘topics’. This is particularly clear in the merger of several groups of documents (12, 4, 5, 6, 15 from the 15-way partition) to form the large ‘Politics’ group 2 in the 5-way partition. Similarly, the merger of communities 3, 11, 13 from the 15-cluster partition (which correspond to environment, healthcare, and healthcare reform) gives rise to the larger group 1 in the 5-way partition, which contains science and technology issues (with health as a main component). Group 4 in the 5-way clustering is the result of a merger of several groups related to justice, law enforcement and social issues, including a large component of the ‘Black lives’ campaign. Other groups remain mostly unchanged between these two scales. For instance, the large group 2 of the 15-cluster partition dominates the even larger group 5 in the 5-way clustering, corresponding to entertainment, TV shows, movies and sport. Interestingly, note that the relatively small group of articles referring to Brexit remains in its own cluster (from group 7 in the 15-cluster partition to group 3 in the 5-cluster partition), as its content is highly distinctive and different from the mainly US-centric Vox news corpus. This example highlights the sensitivity of the method to truly distinct topics in the corpus.

The fact that our method is based on vector embeddings of the documents has additional advantages. Not only does each document have a vector representation, but each cluster can also be represented as a vector: the average of all document vectors within the cluster. Once this cluster vector has been obtained, we can use our embedding model to assign to each cluster representative words and documents from both the training set (Wikipedia in this case) and the dataset (the Vox corpus). To assign the representative words and documents, we exploit the vector space representation, which allows us to define a distance metric, so that representative instances are chosen to be geometrically close to the cluster centroid. In Figure 3 we illustrate this procedure applied to the 5-way partition, where we show the three closest words (stemmed) from the Word2vec model vocabulary, the three closest Wikipedia pages (titles reported), and the three closest Vox articles (first sentences reported) to the centroid of each of the five MS clusters.
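The centroid-based assignment of representative items can be sketched as follows (the function names are illustrative, and cosine distance is assumed as the metric, consistent with the graph construction above):

```python
import math

def centroid(vectors):
    """Average of the document vectors in a cluster."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_to_centroid(cluster_vectors, candidates, top=3):
    """Return the indices of the `top` candidate vectors (e.g. Wikipedia pages,
    Vox articles, or word vectors) closest in cosine distance to the centroid."""
    c = centroid(cluster_vectors)

    def cos_dist(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return 1.0 - dot / (nu * nv)

    ranked = sorted(range(len(candidates)), key=lambda i: cos_dist(c, candidates[i]))
    return ranked[:top]
```

The same routine serves all three lookups in Figure 3 (words, Wikipedia pages, Vox articles); only the candidate vector set changes.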

The results of the 28-cluster partition also show the same quasi-hierarchical content structure, as seen in the Sankey diagramme of Fig. 2 and confirmed with the word clouds presented in Fig. 4 and the temporal profiles of some of the clusters in Fig. 5. Some of these features are discussed in more detail below.

3.3. Evaluating the quality of the topic clusters

Broad corpora, such as news, pose difficulties for the assignment of a ‘ground truth’ for classification purposes. Traditionally, the categorisation of news is based on non-exclusive labels denoting generic themes that cut across broad areas (e.g., ‘News’, ‘Entertainment’, ‘Human interest’), complemented by more specific categories (e.g., ‘Weather’, ‘Law’, ‘Technology’). This approach usually leads to unbalanced groupings. In particular, the Vox news corpus analysed here does not contain a viable categorisation: a hand-coded ‘Category’ column in the dataset is highly unbalanced, with almost half of the articles belonging to the category ‘Latest’.

Our analysis in Section 3.2 reveals that the content-based MS clusters reflect consistent topics with different levels of detail. To evaluate the relevance of the clusters, and in the absence of an external ground truth, we follow two complementary routes.

3.3.1. Intrinsic Topic Coherence: Comparison Against Common Clustering Methods

We compute the median PMI score  (3) of the MS partitions as a measure of intrinsic topic coherence.

Furthermore, we compared the topic coherence of our MS results against clusterings produced with three widely used standard methods: hierarchical clustering with Ward linkage (Ward Jr., 1963) applied to both Bag-of-Words TF-iDF vectors (Ward-BoW) and our own Doc2vec vectors (Ward-D2V), and LDA probabilistic topic models trained on the Vox news dataset using the Gensim (Rehurek and Sojka, 2010) module (with 5 passes). (BoW vectors were generated using the default parameters of Scikit-Learn's (Pedregosa et al., 2011) TfidfVectorizer class; Ward clustering was applied through the same module's AgglomerativeClustering class with ‘ward’ as the linkage parameter.)

Table 1 shows the topic coherence scores for each of the methods at different resolutions. (Note that, for all methods, the coherence score increases for finer clusters, due to smaller groups containing more similar words.) MS and Ward-BoW clusters consistently provide higher topic coherence scores across resolutions.

Coherence scores   28      15      9       5
MS D2V             0.23    0.257   0.136   0.127
Ward BoW           0.241   0.209   0.146   0.138
Ward D2V           0.212   0.156   0.1     0.071
LDA                0.144   0.145   0.109   0.124
Table 1. Median topic coherence scores (median PMI based on the top 15 words) for different clustering methods (rows) at different levels of clustering resolution (number of clusters, columns).

3.3.2. Comparisons against Commercial Taxonomy and Classification Services

Figure 4. Sankey diagramme comparing the 28-way MS clustering against the classification provided by external taxonomy services Open Calais and Google Cloud. The external services produce unbalanced classifications, with large generic categories such as ’News’ and ’Entertainment’. Our partition is consistent across both services, and provides additional detail for the categories, as indicated by the word clouds and the coloured groupings (see legend) derived from our multilayer Sankey diagramme in Figure 2.

The MS clusters of documents are purely content-driven, with no external labelling or categorisation. Yet the clusters capture topical information, and we expect that, in many cases, the content-driven groupings will be well aligned with the standard categories used in news classification.

To explore this issue, we compared the clusters with external ‘news ontologies’. We used two well-established commercial sources for news categorisation: the Natural Language API of the Google Cloud Platform (Google offers its text analysis capabilities as a commercial service under the name Google Cloud Natural Language; see https://cloud.google.com/natural-language/) and the Open Calais API of Thomson Reuters (accessible from http://www.opencalais.com/). We used the Python Requests library and standard demo accounts on both services for comparison purposes. For Google Cloud, we used only the top category, and we manually merged Google Cloud topics with high similarity (e.g., ‘Weather’ and ‘News/Weather’; ‘Finance’ and ‘News/Finance’). Open Calais specialises in news and research topics and classifies news according to the International Press Telecommunications Council (IPTC) taxonomy (the full list of Open Calais supported IPTC topics is at http://www.opencalais.com/wp-content/uploads/folder/ThomsonReutersOpenCalaisAPIUserGuideR11_7.pdf). In our case, Open Calais was not able to classify a significant fraction of the Vox corpus; we labelled those articles ‘Unclassified’.

We utilised both Google Cloud and Open Calais to classify our Vox news corpus into categories. Both classification services returned 17 categories, including two ambiguous ones: ‘Unclassified’ and ‘Others’. Hence the most relevant level of resolution of the MS clustering to compare against these services is the 15-way partition. Interestingly, the topic coherence for the commercial services is higher than that achieved by LDA but lower than that achieved by MS with a similar number of clusters: 0.144 (LDA-15) < 0.157 (Open Calais) < 0.204 (Google Cloud) < 0.257 (MS-15) (see Table 1).

To compare our clusters to the classifications obtained by the commercial taxonomy services, we computed the normalised mutual information (NMI) scores (4) between the partitions obtained by MS and the other methods and the Google Cloud and Open Calais classifications (Table 2). Our results show that MS has the highest similarity to both commercial services, thus emphasising the relevance of the clusters as compared to classifications that employ user-based knowledge.

NMI scores      MS D2V   LDA     Ward D2V   Ward BoW
Open Calais     0.303    0.263   0.270      0.202
Google Cloud    0.351    0.330   0.322      0.266
Table 2. NMI scores between the four clustering methods (15 clusters) and the two commercial taxonomy services, showing that the MS clustering is closest to the output of both commercial classifications.
Figure 5. The time profile of number of articles per month published during 2016 for 9 of the communities (in the 28-way partition) which are associated with the US election (see Fig. 4). The temporal patterns show that the communities are related to events associated with the US electoral cycle (e.g., primaries, nomination, convention, election) as well as particular events that took place during the 2016 US campaign related to Trump and Clinton controversies.

This correspondence can also be seen in Figure 4, where we present the Sankey diagramme comparing our finer 28-cluster partition with both the Google Cloud Natural Language API and the Open Calais API. As expected, several of our communities fall consistently within the broad categories of ‘News’ and ‘Entertainment’ across both services. Yet the analysis also reveals the finer nuances and groupings that the content-driven categories can bring to the fore, as shown by the word clouds of 26 communities (of the 28, since 2 are dominated by wrapper artifacts embedded in the text). Using the multi-layer Sankey diagramme of our MS analysis in Fig. 2 and the word clouds, we establish groupings of documents that fall within a branch of related content, thus allowing a deeper understanding of the different topics appearing in the corpus. In Fig. 4, we have coloured these branches, and the legend indicates their general topic. For instance, there are three groupings related to US politics and the US election (‘Campaign/Vote’, ‘Trump’, and ‘Foreign Secretary/Clinton emails’) involving 9 of the 28 communities. A more detailed investigation of these clusters shows that they are thematically related to Republican and Democrat topics, and also to temporal events that took place during 2016. To see this explicitly, we have plotted in Figure 5 the timeline for each of those 9 communities, which show distinct temporal profiles. Several of these clusters are localised in time (e.g., clusters 4, 6, 19, 27) while others are constant throughout the year. The groupings can be directly related to different aspects of the US electoral cycle (primaries, nominations, national conventions, campaign) as well as events particular to the 2016 US campaign, such as the different Trump and Clinton controversies.

4. Conclusions

We have introduced a methodology for extracting multi-scale topic clusters from a corpus of text documents in an unsupervised manner, using no a priori labelling. Our application to a corpus of 9021 Vox Media news articles shows consistency of topic clusters across different resolutions. The use of vector embeddings allows us to exploit the capabilities of novel text tools to aid enhanced interpretation. Future work will be aimed at comparative analyses of multiple news outlets from different geographies and/or different political tendencies, in order to quantify the similarity and difference in the topics each media outlet presents based on content.
