Prior-aware Dual Decomposition: Document-specific Topic Inference for Spectral Topic Models

11/19/2017 · by Moontae Lee, et al.

Spectral topic modeling algorithms operate on matrices/tensors of word co-occurrence statistics to learn topic-specific word distributions. This approach removes the dependence on the original documents and produces substantial gains in efficiency and provable topic inference, but at a cost: the model can no longer provide information about the topic composition of individual documents. Recently, the Thresholded Linear Inverse (TLI) method was proposed to map the observed words of each document back to its topic composition. However, its linear characteristics limit inference quality because it ignores important prior information over topics. In this paper, we evaluate the Simple Probabilistic Inverse (SPI) method and propose the novel Prior-aware Dual Decomposition (PADD), which is capable of learning document-specific topic compositions in parallel. Experiments show that PADD successfully leverages topic correlations as a prior, notably outperforming TLI and learning quality topic compositions comparable to Gibbs sampling on various datasets.


1 Introduction

Unsupervised topic modeling represents a collection of documents as a combination of topics, which are distributions over words, and document compositions, which are distributions over topics (Hofmann, 1999; Blei et al., 2003). Though topic modeling is commonly applied to text documents, because it makes no assumptions about word order or syntax/grammar, topic models apply flexibly to any data consisting of groups of discrete observations, such as movies in review datasets and songs in playlists for collaborative filtering (Lee et al., 2015), or people present at various events in a social network (Liu et al., 2009). Topics are useful for quickly assessing the main themes, common genres, and underlying communities in various types of data. But most applications of topic models depend heavily on document compositions as a way to retrieve documents that are representative of query topics or to measure connections between topics and metadata like time variables (Blei & Lafferty, 2007; Steyvers & Griffiths, 2008; Hall et al., 2008; Talley et al., 2011; Goldstone & Underwood, 2014; Erlin, 2017). Learning topic compositions is particularly useful because one can compactly summarize individual documents in terms of topics rather than words (the number of topics is by far smaller than the size of the vocabulary in general non-overcomplete settings).

Spectral topic models have emerged as alternatives to traditional likelihood-based inference such as Variational Bayes (Blei et al., 2003) or Gibbs Sampling (Griffiths & Steyvers, 2004). Spectral methods do not operate on the original documents, but rather on summary statistics. Once the higher-order moments of word co-occurrence are constructed as statistically unbiased estimators of the generative process, we can perform moment-matching via matrix or tensor factorization. The Anchor Word algorithms (Arora et al., 2012, 2013; Bansal et al., 2014; Lee et al., 2015; Huang et al., 2016) factorize the second-order co-occurrence matrix between pairs of words in order to match its posterior moments. Tensor decomposition algorithms (Anandkumar et al., 2012a, b, 2013) factorize the third-order tensor among triples of words toward matching its population moments.

Compared to traditional inference algorithms, these spectral methods have three main advantages. First, training is transparent and deterministic: we can state exactly why the algorithms make each decision, and the results do not change with random initializations. Second, we can make provable guarantees of optimality given reasonable assumptions, such as the existence of topic-specific anchor words in matrix models or uncorrelated/sparse topics in tensor models. Third, because the input to the algorithm is purely in terms of word-word relationships, we can limit our interaction with the training documents to a single trivially-parallelizable pre-processing step that constructs the co-occurrence statistics. We can then learn topic models of various sizes without revisiting the training documents.
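To make this pre-processing step concrete, the following is a minimal sketch (assuming a NumPy environment and a corpus already converted to per-document word-count vectors; the function name cooccurrence and the dense representation are our own simplifications) of the standard unbiased second-order co-occurrence estimator used by Anchor Word algorithms:

```python
import numpy as np

def cooccurrence(H):
    """Unbiased word co-occurrence estimate from an N x M matrix of word counts.

    Each column of H holds the term frequencies of one document. For a document
    with counts h and length n, an unbiased estimator of the pairwise joint
    probabilities is (h h^T - diag(h)) / (n (n - 1)); we average over documents.
    """
    N, M = H.shape
    C = np.zeros((N, N))
    for m in range(M):
        h = H[:, m].astype(float)
        n = h.sum()
        if n < 2:
            continue  # a single-token document carries no pair information
        C += (np.outer(h, h) - np.diag(h)) / (n * (n - 1.0))
    return C / M
```

Because each document contributes an independent term, this loop parallelizes trivially, and the resulting statistics can be reused for any number of topics.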

But the efficiency advantage of factoring out the documents is also a weakness: we lose the ability to say anything about the documents themselves. In practice, users of spectral topic models must go back and apply traditional inference to the original documents. That is, given topic-word distributions and the sequence of words in each document, they need to estimate the posterior probability of topics, still under the assumption of a sparse Dirichlet prior on the topic composition. Estimating topics under a Dirichlet prior is NP-hard even for trivial models (Sontag & Roy, 2011). Running Gibbs Sampling for a long time can be relatively accurate, but it offers no provable guarantees (Yao et al., 2009). Variational Bayes often falls into local minima, learning inconsistent models across different numbers of topics.

Recently, (Arora et al., 2016) proposed the Thresholded Linear Inverse (TLI) method to learn document-specific topic distributions for spectral topic models. TLI computes a single inverse of the word-topic matrix so that it can map the term-frequency vector of each document back to a topic composition vector. Unfortunately this matrix inversion is expensive for large vocabularies and numerically unstable, often producing NaN entries and thereby learning compositions inferior to those of likelihood-based inference. Even though TLI is armed with provable guarantees, its thresholding scheme turns out to quickly lose both precision and recall as the number of topics increases. Because it uses no prior knowledge over topics during inference, TLI proposes unlikely combinations of topics, and it degrades further when the given document is not long enough to provide sufficient information.

Figure 1: LDA asserts a topic composition for each document; the Dirichlet prior provides prior information for the entire corpus.

In this work we propose one benchmark method and one more elaborate method. They perform optimally in different settings, where the critical difference is the degree of correlation between topics. If topics appear independently, with no meaningful correlation structure, the Simple Probabilistic Inverse (SPI) performs well. In contrast to TLI, SPI uses the word-specific topic distributions, which form a natural and probabilistically coherent linear estimator. If there are complex correlations between topics, as is often the case in real data, our Prior-aware Dual Decomposition (PADD) is capable of learning quality document-specific topic compositions by leveraging the learned prior over topics in the form of topic correlations. PADD regularizes the topic correlations of each document to stay close to the overall topic correlations, thereby guessing reasonable compositions even when the given document is not sufficiently long.

Since second-order spectral models naturally learn topic correlations from the decomposition of co-occurrence statistics, this paper focuses mostly on the framework of Joint Stochastic Matrix Factorization (JSMF). The rectified Anchor Word algorithm in JSMF is proven to learn quality topics and correlations comparable to probabilistic inference (Lee et al., 2015). Above all, most large topic models have been shown to satisfy the anchor word assumption (Ding et al., 2015). However, third-order tensor models can also use our PADD by constructing the moments from parametric priors that are capable of modeling topic correlations (Arabshahi & Anandkumar, 2017). In that case, the second-order population moment estimated from their learned hyper-parameters serves as a surrogate for the prior information over topics, allowing PADD to infer topic compositions.

2 Foundations and Related Work

Figure 2: JSMF asserts a joint distribution over topic pairs for each document; the topic-topic matrix A serves as a prior for the entire corpus.

In this section, we formalize matrix-based spectral topic modeling, especially JSMF. We call particular attention to the topic-topic matrix, which represents the joint distribution over pairs of topics in the overall corpus. This matrix will serve as a prior for the document-specific joint probabilities over pairs of topics in later sections.

Suppose that a dataset has M documents consisting of tokens drawn from a vocabulary of N words. Topic models assume that there are K topics prepared for this dataset, where each topic is a distribution over words. Denoting all topics by the N×K column-stochastic matrix B, whose k-th column corresponds to the k-th topic, each document m is written by first choosing a composition of topics w_m from a certain prior f. Then, from the first position to its length n_m, a topic z is selected with respect to the composition w_m, and a word is chosen with respect to the z-th topic (the z-th column of B). Different models adopt different priors f. For example, in the popular Latent Dirichlet Allocation (LDA), f is a Dirichlet distribution (Blei et al., 2003), as depicted in Figure 1. For the Correlated Topic Model (CTM) (Blei & Lafferty, 2007), f is a logistic-normal distribution.
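For concreteness, the following hedged sketch simulates this generative story for LDA (the symbols B and alpha and the helper name sample_document are our notation, not code from the paper):

```python
import numpy as np

def sample_document(B, alpha, length, rng):
    """Sample one document from the LDA generative process.

    B:      N x K column-stochastic word-topic matrix (each column is a topic).
    alpha:  length-K Dirichlet hyper-parameter.
    length: number of tokens n_m in the document.
    """
    N, K = B.shape
    w = rng.dirichlet(alpha)                  # document-specific topic composition
    topics = rng.choice(K, size=length, p=w)  # one topic per token position
    words = [rng.choice(N, p=B[:, z]) for z in topics]
    return w, words

rng = np.random.default_rng(0)
B = rng.dirichlet(np.ones(50), size=5).T      # 50 words, 5 random topics
w, words = sample_document(B, alpha=np.full(5, 0.1), length=100, rng=rng)
```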

Let H be the N×M word-document matrix whose m-th column records the observed term frequencies of document m. If we denote all topic compositions by another column-stochastic K×M matrix W whose m-th column is w_m, the two main tasks of topic modeling are to learn the topics (i.e., the word-topic matrix B) and their compositions (i.e., the topic-document matrix W). Inferring the latent variables B and W is coupled through the observed terms, making exact inference intractable. Likelihood-based algorithms such as Variational EM and MCMC update both parts until convergence by iterating through the documents multiple times. Denoting the word-probability matrix by H̄, the column-normalized H, these two learning tasks can be viewed as Non-negative Matrix Factorization (NMF): H̄ ≈ BW, where B and W are again coupled.

Joint Stochastic Matrix Factorization

The word-probability matrix H̄ is a highly noisy statistic due to its extreme sparsity, and it does not scale well with the size of the dataset. Instead, let C be the N×N word co-occurrence matrix where C_ij indicates the joint probability of observing the pair of words (i, j). Then we can represent topic modeling as a second-order non-negative matrix factorization: C ≈ B A B^T, where we decompose the joint-stochastic C into the column-stochastic B (i.e., the word-topic matrix) and the joint-stochastic K×K matrix A, the so-called topic-topic matrix. If the ground-truth topic compositions W that generate the data were known, we could define the posterior topic-topic matrix A* = (1/M) W W^T, where A*_kl indicates the joint posterior probability of the pair of latent topics (k, l). In this second-order factorization, C is constructed as an unbiased estimator from which we can identify B and A close to the truthful topics and their correlations. (It is proven that the learned A is close to A* and to the prior moment E[w w^T] for sufficiently many documents M, which allows us to perform topic modeling (Arora et al., 2012).)
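As a sanity check of this factorization view, the short sketch below (our own illustration; all names are ours) builds the posterior topic-topic matrix from a set of compositions and reconstructs a noiseless co-occurrence matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, M = 50, 5, 2000
B = rng.dirichlet(np.ones(N), size=K).T       # N x K column-stochastic word-topic matrix
W = rng.dirichlet(np.full(K, 0.1), size=M).T  # K x M ground-truth compositions

A_post = (W @ W.T) / M                        # posterior topic-topic matrix A*
C = B @ A_post @ B.T                          # noiseless co-occurrence C = B A* B^T

assert np.isclose(A_post.sum(), 1.0)          # A* is joint-stochastic
assert np.isclose(C.sum(), 1.0)               # and so is C
```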

It is helpful to compare the matrix-based view of JSMF to the generative view of standard topic models. The generative view focuses on how to produce streams of word tokens for each document; the resulting correlations between words are implied but not explicitly modeled. In the matrix-based view, in contrast, we begin with the word co-occurrence matrix C, which explicitly models the correlations between words, and we produce pairs of words rather than individual words. Given the prior topic correlations A between pairs of topics, each document m has its own topic correlations in the form of the joint distribution w_m w_m^T. (Strictly speaking, A, A*, and w_m w_m^T are all joint distributions, neither covariances nor correlations. However, since the covariances/correlations are directly inducible from them, we keep the naming convention of previous work and call them topic correlations.) Then, for each pair of positions, a topic pair is selected first from w_m w_m^T, and a pair of words is then chosen with respect to the corresponding pair of topics, as illustrated in Figure 2. Two important implications are:

  • The matrix A of topic correlations represents the prior without specifying any particular parametric family.

  • Each document-specific w_m w_m^T is a rank-1 joint-stochastic matrix, providing a fully generative story for documents.

Note that the columns of B in spectral topic models are sets of parameters rather than random variables sampled from another distribution (e.g., a Dirichlet prior over topics). Other work relaxes this assumption (Nguyen et al., 2014), but we find that it is not an issue for the present work. As putting a prior over compositions is the crux of modern topic modeling (Asuncion et al., 2009), our flexible matrix prior A allows us to identify the topics from C without hurting topic quality. However, learning B and A via the Anchor Word algorithms might seem loosely decoupled because the algorithms first recover B and then A from C and B. Previous work has found that rectifying C is essential for quality spectral inference in JSMF (Lee et al., 2015). The empirical C must match the geometric structure of its posterior B A B^T; otherwise the model will fit noise. Because this rectification step alternately projects C based on the geometric structures of B and A until convergence, the rest of inference no longer requires mutual updates.
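The rectification step itself is detailed in Lee et al. (2015); the following is only a rough sketch of the alternating-projection idea under our own simplifying assumptions (dense matrices, eigenvalue truncation for the low-rank PSD projection, and a simple additive renormalization), not the authors' implementation:

```python
import numpy as np

def rectify(C, K, iters=30):
    """Alternating projections toward a nonnegative, normalized, rank-K PSD matrix."""
    C = (C + C.T) / 2.0
    for _ in range(iters):
        # (1) project onto PSD matrices of rank at most K (keep top-K nonnegative eigenvalues)
        vals, vecs = np.linalg.eigh(C)
        idx = np.argsort(vals)[::-1][:K]
        top = np.clip(vals[idx], 0.0, None)
        C = (vecs[:, idx] * top) @ vecs[:, idx].T
        # (2) project onto matrices whose entries sum to one (affine shift)
        C += (1.0 - C.sum()) / C.size
        # (3) project onto nonnegative matrices
        C = np.clip(C, 0.0, None)
    return (C + C.T) / 2.0
```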

Related work

Second-order word co-occurrence is not by itself sufficient to identify topics (Anandkumar et al., 2013), so much work on second-order topic models adopts the separability assumption: each topic has an anchor word that occurs only in the context of that topic. (Indeed, according to (Ding et al., 2015), most large topic models are provably separable.) However, the first Anchor Word algorithm (Arora et al., 2012) is not able to produce meaningful topics due to numerical instability. A second version (Arora et al., 2013) works if the corpus is sufficiently large, but the quality of topics is not fully satisfactory on real data even then, and this version is not able to learn meaningful topic correlations. (Lee et al., 2015) proposes the rectification step within JSMF, finally learning both quality topics and quality correlations comparable to probabilistic inference in any condition.

There have been several extensions of the anchor word assumption that also provide identifiability. The Catchwords algorithm (Bansal et al., 2014) assumes that each topic has a group of catchwords that occur strictly more frequently in that topic than in other topics. Their Thresholded SVD algorithm learns better topics under this assumption, but at the cost of slower inference. Another minimal condition for identifiability is sufficient scatteredness (Huang et al., 2016); the authors minimize the NMF objective with an additional non-convex constraint, outperforming the second version of the Anchor Word algorithm (Arora et al., 2013) for smaller numbers of topics.

Another approach to guaranteeing identifiability is to leverage third-order moments. The popular CP decomposition (Hitchcock, 1927) transforms the third-order tensor into an orthogonally decomposable form (this step is called whitening, which is conceptually similar to the rectification in JSMF) and learns the topics under the assumption that the topics are uncorrelated (Anandkumar et al., 2012a). Another method is to perform a Tucker decomposition (Tucker, 1966), which does not assume uncorrelated topics; this approach requires additional sparsity constraints for identifiability and includes more parameters to learn (Anandkumar et al., 2013). While correlations between topics are not an immediate by-product of tensor-based models, the PADD method presented here is still applicable for learning topic compositions of these models if the modeler chooses proper priors that can capture rich correlations (Arabshahi & Anandkumar, 2017). (One can also use the simple Dirichlet prior, although in theory it only captures negative correlations between topics.)

3 Document-specific Topic Inference

In Bayesian settings, learning the topic compositions W of individual documents is an inference problem that is coupled with learning the topics B. As each update depends also on the parametric prior f and its hyper-parameters, the hyper-parameters must be optimized as well to fully maximize the likelihood of the data (Wallach et al., 2009). In spectral settings, by contrast, we can recover from the higher-order moments the latent topics B and their correlations A, the flexible prior that Bayesian models must painfully learn. (That is, the learned A carries the information of the proper prior with respect to the data. If f is a Dirichlet, we can indeed estimate its hyper-parameters via moment-matching with A.) Since B and A are both provided and fixed, it is natural to formulate learning each column of W as an estimation rather than an inference.

Besides revisiting likelihood-based inference, the Thresholded Linear Inverse (TLI) is, to the best of our knowledge, the only existing algorithm designed for second-order spectral inference (Arora et al., 2016). In this section, we begin by describing TLI and our SPI, which use only the learned topics B, and then we propose our main algorithm PADD, which uses the learned correlations A as well. By formulating the estimation as a dual decomposition (Komodakis et al., 2011; Rush & Collins, 2012), PADD can effectively learn the compositions given B and A.

3.1 Simple Probabilistic Inverse (SPI)

Recall that selecting the words of document m is a series of multinomial choices. Denote the normalized term-frequency vector of the document by H̄_m; then the conditional expectation satisfies E[H̄_m | w_m] = B w_m. If there is a left-inverse B† of B that satisfies B† B = I, then E[B† H̄_m | w_m] = w_m. However, not every left-inverse is equivalent. The farther B† B is from the identity, the more bias the estimation incurs; on the other hand, large entries of B† increase the variance of the estimation. Quality document-topic compositions could be recovered by finding a left-inverse that properly balances this bias and variance. One can choose B† as the optimizer that minimizes its largest entry under a small-bias constraint on B† B - I.

Let w_m* be the true topic distribution used to generate document m. Denoting the value at the optimum by λ, one can bound the maximum entrywise violation of the linear estimate from w_m* in terms of λ and the document length, for an arbitrary prior from which w_m* is drawn (Arora et al., 2016). Thus the TLI algorithm first computes the best left-inverse B† of B under the fixed bias budget and linearly predicts W via the single estimator B† H̄. Then, for every column of the prediction, it thresholds out each unlikely topic whose mass is smaller than a given threshold. While TLI is supported by provable guarantees, it quickly loses accuracy if the given document exhibits correlated topics, if its length is not sufficiently large, or if its composition is not sparse enough. In addition, since the algorithm does not provide any guidance on the optimal bias/variance trade-off, users might end up computing many inverses with different bias budgets. (Recall that computing this inverse is expensive and unstable.)
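To illustrate the mechanics (not the provable construction of the optimized left-inverse, which requires solving the constrained optimization above), here is a hedged sketch that substitutes the Moore-Penrose pseudo-inverse for the optimized B† and applies the thresholding step; all names and the final renormalization are our own:

```python
import numpy as np

def tli_predict(B, H_bar, tau=0.05):
    """Thresholded linear inverse, schematically.

    B:     N x K word-topic matrix.
    H_bar: N x M column-normalized term frequencies.
    tau:   threshold below which topic masses are zeroed out.
    """
    B_inv = np.linalg.pinv(B)        # stand-in for the bias/variance-optimized left-inverse
    W_hat = B_inv @ H_bar            # linear estimate; may contain negative entries
    W_hat[W_hat < tau] = 0.0         # threshold unlikely (and negative) topics
    col_sums = W_hat.sum(axis=0, keepdims=True)
    col_sums[col_sums == 0.0] = 1.0  # avoid dividing by zero for empty columns
    return W_hat / col_sums          # renormalize each column onto the simplex
```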

We instead propose the Simple Probabilistic Inverse (SPI) method, a one-shot benchmark algorithm that predicts the compositions directly from the word-specific topic distributions without any additional learning cost. Recall that Anchor Word algorithms first recover the matrix of conditional topic distributions per word, p(topic | word), and then convert it into B, whose columns are p(word | topic), via Bayes' rule (Arora et al., 2013). From the probabilistic perspective, the p(topic | word) matrix is a more natural linear estimator that, unlike B†, has no negative entries. (Due to the low-bias constraint, B† is destined to have many negative entries, thus yielding negative probability masses on the predicted topic compositions even if the column sums of the prediction are all close to 1. While such negative masses are fixed by the thresholding step, equally zeroing out both tiny positive masses and non-negligible negative masses is questionable.) By construction, in contrast, the topic composition predicted via SPI is likely to contain every topic from which each word in the given document could have been sampled, no matter how negligible it is. Still, SPI can be useful for applications that require extremely fast estimation with high recall. We later see in which conditions SPI works reasonably well through various experiments.
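SPI itself amounts to a single matrix product. Below is a minimal sketch, assuming the corpus-level topic probabilities are available (e.g., as the row sums of the learned topic-topic matrix A); the function name and arguments are our own:

```python
import numpy as np

def spi_predict(B, p_topic, H_bar):
    """Simple Probabilistic Inverse: predict compositions from p(topic | word).

    B:       N x K word-topic matrix, columns are p(word | topic).
    p_topic: length-K corpus-level topic probabilities (e.g., row sums of A).
    H_bar:   N x M column-normalized term frequencies.
    """
    joint = B * p_topic                               # N x K entries p(word v, topic k)
    B_rev = joint / joint.sum(axis=1, keepdims=True)  # Bayes' rule: rows are p(topic | word)
    W_hat = B_rev.T @ H_bar                           # K x M, nonnegative by construction
    return W_hat  # columns already sum to one because each row of B_rev sums to one
```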

def PADD(H, B, A)

1:   H̄ ← column-normalize(H)
2:   initialize the compositions W, the dual matrix Λ ← 0, and the step size ρ
3:   repeat
4:      precompute the shared K×K inverse used by every subproblem (from B and the current Λ) and distribute it along with Λ
5:      for each document m do
6:         h̄_m ← the m-th column of H̄
7:         w_m ← the current m-th column of W (initial guess)
8:         w_m ← ADMM-DR(h̄_m, B, Λ, w_m); store it as the m-th column of W
9:      end for
10:     Λ ← Λ + ρ ((1/M) W W^T - A)
11:  until the convergence of Λ
12:  return W
Algorithm 1 Estimate the best compositions W.
(Master problem governing the overall estimation)

3.2 Prior-aware Dual Decomposition (PADD)

To better infer topic compositions, PADD uses the learned correlations A as well as the learned topics B. While researchers have been more interested in finding better inference methods, many algorithms, including the family of Variational Bayes and Gibbs Sampling, turn out to differ only in how much they smooth the document-specific parameters at each update (Asuncion et al., 2009). On the other hand, a good prior and proper hyper-parameters are critical, allowing us to perform successful topic modeling with less information about the documents, yet their choice is rarely considered (Wallach et al., 2009).

Second-order spectral models do not specify the prior f as a parametric family, but the posterior topic-topic matrix closely captures topic prevalence and correlations. Since the learned A is close to this posterior A* given a sufficient number of documents, one can estimate better topic compositions by matching the overall topic correlations (through A) as well as the individual word observations (through B). (Because the learned A and the posterior moment A* are close to the population moment only if M is sufficiently large, PADD might not be able to find quality compositions if both the number of documents and their lengths are small. However, this problem also arises in probabilistic topic models; it is due to a lack of information.) For a collection of M documents, PADD tries to find the best compositions W that satisfy the following optimization:

(1)   minimize over W:   Σ_m || H̄_m - B w_m ||²
      subject to:   each w_m lies on the probability simplex,   and   (1/M) W W^T = A

Solutions of (1) try to match the observed word-probabilities of individual documents (i.e., loss minimization) while simultaneously matching the learned topic correlations as a whole (i.e., regularization). Therefore, whereas the performance of TLI depends only on the quality of the estimated word-topic matrix B, PADD also leverages the learned correlations A to perform an estimation analogous to prior-based probabilistic inference. By further tuning the balance between the loss and the regularization for a particular task, PADD can be more flexible for various types of data whose topics might not empirically fit any known parametric prior.

def ADMM-DR(h̄_m, B, Λ, w_m)

1:  initialize the splitting variable and the penalty parameter
2:  repeat
3:     first proximal step: solve the quadratic subproblem using the precomputed K×K inverse
4:     second proximal step: w_m ← Proj_Δ(·), the orthogonal projection onto the simplex
5:     update the Douglas–Rachford splitting variable
6:  until convergence
7:  w_m ← the converged composition
8:  return w_m

(Proj_Δ is the orthogonal projection onto the simplex. See the reference for a detailed implementation (Duchi et al., 2008).)

Algorithm 2 Estimate the best individual composition w_m.
(Subproblem running for each document m in parallel)
Figure 3: Artificial experiment on the Semi-Synthetic (SS) corpus with highly sparse topics but little correlation. X-axis: number of topics K. Y-axis: higher numbers are better for the left three columns, lower numbers are better for the right four. SPI performs the best as the number of topics grows.

3.3 Parallel formulation with ADMM

It is not easy to solve (1) due to the non-linear coupling constraint (1/M) W W^T = A. We can construct a Lagrangian L(W, Λ) by adding a symmetric matrix of dual variables Λ. Then L(W, Λ) is equal to

(2)   L(W, Λ) = Σ_m || H̄_m - B w_m ||² + Λ • ((1/M) W W^T - A)
              = Σ_m [ || H̄_m - B w_m ||² + (1/M) w_m^T Λ w_m ] - Λ • A

Equation (2) implies that, given a fixed dual matrix Λ, minimizing the Lagrangian decomposes into M subproblems, allowing us to use dual decomposition (Komodakis et al., 2011; Rush & Collins, 2012). Each subproblem tries to find the best topic composition w_m that minimizes || H̄_m - B w_m ||² + (1/M) w_m^T Λ w_m. (The operation • denotes the Frobenius product, the matrix version of the inner product.) Once every subproblem has been solved and has provided the current optimal solution W, the master problem simply updates the dual matrix based on its subgradient, Λ ← Λ + ρ ((1/M) W W^T - A), and then distributes it back to each subproblem. For robust estimation, we adopt the Alternating Direction Method of Multipliers (ADMM) (Bioucas-Dias & Figueiredo, 2010) with Douglas–Rachford (DR) splitting (Li & Pong, 2016). The overall procedure is illustrated in Algorithms 1 and 2.

Note first that the master problem in Algorithm 1 computes a matrix inverse, but the computation is cheap and stable, because it only algebraically inverts a K×K matrix rather than solving a constrained optimization to invert the N×K matrix B as TLI does. Note also that the overall algorithm repeats the master problem only a small number of times, whereas each subproblem repeats its convergence loop more often. The exponentiated gradient algorithm (Arora et al., 2013) is also applicable for quick inference, though tuning its learning rate is less intuitive; PADD must likewise tune its step size carefully due to the non-linear coupling constraint. Note last that we need not further project the subgradient onto the set of symmetric matrices, because only the symmetric matrices (1/M) W W^T and A are added to and subtracted from Λ at every iteration of the master problem.
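To make the master/subproblem structure concrete, below is a hedged sketch of the dual decomposition loop. Following the remark above that gradient-based solvers can be dropped into this formulation, the subproblem here is solved by simple projected gradient instead of ADMM with Douglas-Rachford splitting, and all names, step sizes, and iteration counts are illustrative assumptions rather than the authors' settings:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex (Duchi et al., 2008)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.clip(v - theta, 0.0, None)

def padd(H_bar, B, A, rho=1.0, outer_iters=15, inner_iters=150, lr=0.01):
    """Prior-aware dual decomposition with a projected-gradient subproblem solver."""
    N, M = H_bar.shape
    K = B.shape[1]
    W = np.full((K, M), 1.0 / K)              # start from uniform compositions
    Lam = np.zeros((K, K))                    # symmetric dual matrix
    for _ in range(outer_iters):              # master problem
        for m in range(M):                    # subproblems (independent given Lam)
            w = W[:, m]
            for _ in range(inner_iters):
                # gradient of ||h - B w||^2 + (1/M) w^T Lam w
                grad = 2.0 * B.T @ (B @ w - H_bar[:, m]) + (2.0 / M) * (Lam @ w)
                w = project_simplex(w - lr * grad)
            W[:, m] = w
        Lam += rho * ((W @ W.T) / M - A)      # subgradient step on the dual
    return W
```

Since the subproblems are independent given Λ, the inner loop over documents can be distributed across workers exactly as in Algorithm 1.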

Why does it work?

Probabilistic topic models try to infer topics and document compositions that approximately maximize the marginal likelihood of the observed documents, integrating over all possible topic distributions under the prior f. However, if the goal in the spectral setting is to find the best individual composition given the learned B and A, the following MAP estimation

(3)   ŵ_m = argmax over w_m on the simplex of   p(w_m w_m^T | A) · p(x_m,1, …, x_m,n_m | B, w_m)

is a reasonable pointwise choice from the likelihood perspective. Recall that the MLE for a series of multinomial choices assigns the word-probability parameters equal to the empirical frequencies H̄_m. The loss function of the PADD objective tries to find the best w_m that makes B w_m ≈ H̄_m, thereby maximizing the second term in Equation (3).

While how to sample a rank-1 correlation matrix from A is not yet known and remains an interesting research topic, PADD does try to maximize the first term in (3) by preventing w_m w_m^T from deviating far from the learned topic correlations A, which is a good approximation of the prior (the population moment E[w w^T]). Indeed, when learning document-specific topic distributions for spectral topic models, it has been shown that a proper point estimate is likely a good solution from the perspective of Bayesian inference as well, because the posterior concentrates on a small ball around the point estimator with high probability (Arora et al., 2016).

4 Experimental Results

Figure 4: Realistic experiment on the Semi-Real (SR) corpus with non-trivial topic correlations. X-axis: number of topics K. Y-axis: higher numbers are better for the left three columns, lower numbers are better for the right four. PADD is consistent and comparable to Gibbs Sampling.

Evaluating the learned topic compositions is not easy for real data because no ground-truth compositions exist for quantitative comparison. In addition, qualitatively measuring the coherence of topics in each document composition is not feasible because there are too many documents, and the topics in each document may not be as obviously coherent as the words in each topic (Chang et al., 2009). Synthesizing documents from scratch is an option, since we can manipulate the ground-truth topics and compositions, but the resulting documents would not be realistic. Thus we use two generative processes to create two classes of semi-documents that emulate the properties of real documents but are also, by construction, compatible with specific models.

The semi-synthetic setting samples from a fitted LDA with a Dirichlet prior, whereas the semi-real setting samples from a fitted CTM (Blei & Lafferty, 2007) with a logistic-normal prior. Given the original training data and the number of topics K, we first run JSMF (Lee et al., 2015) on the co-occurrence matrix constructed from the training data as an unbiased estimator, and CTM-Gibbs with K topics (Chen et al., 2013), learning the topics with their correlations and the logistic-normal parameters, respectively. For each number of documents M, we synthesize a Semi-Synthetic (SS) corpus and a Semi-Real (SR) corpus by first sampling each column of the ground-truth topic compositions W* from the Dirichlet and the fitted logistic-normal, respectively, and then sampling each document with respect to the topics and W*. (We randomly synthesize SS and SR so that their average document lengths coincide with those of the original training corpus.) While it is less realistic, SS uses a sparsity-controlled Dirichlet parameter for fair comparison with the TLI experiments in (Arora et al., 2016), whereas SR exploits the learned hyper-parameters so that it maximally simulates real-world characteristics with non-trivial correlations between topics. For the Fully-Real (FR) experiment, we also prepare unseen documents, the 10% of the original data that has never been used in training. In FR, we also test on the training set.

As strong baselines, we run TLI and SPI on the SS corpus with the ground-truth topic parameters, while Gibbs Sampling uses both the ground-truth topics and the ground-truth hyper-parameter. For the SR corpus, we run TLI and SPI exactly as in SS. Since we only know the logistic-normal parameters for SR, however, Gibbs Sampling uses a Dirichlet hyper-parameter approximated by matching the topic-topic moments of the Dirichlet against the fitted logistic-normal. (This is done by solving an over-determined system. We verified that our learned hyper-parameter outperforms an exhaustive line-search in terms of the moment-matching loss.) In contrast, we run PADD with both the learned topics and their correlations for each of SS and SR. While the goal of the synthetic experiments is to compare the learned compositions to the ground-truth compositions, we cannot access the ground truth for real experiments. For the FR corpora (training and unseen documents), therefore, Gibbs Sampling with the learned topics and the fitted Dirichlet hyper-parameter (again obtained by moment-matching based on A) serves as the ground truth. We then run TLI, SPI, and PADD as before with the learned B and A.
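One simple way to fit such a Dirichlet hyper-parameter from a topic-topic moment matrix is sketched below. This closed form (matching the topic marginals and the trace of A) is our own illustrative derivation, not necessarily the over-determined system solved in the paper:

```python
import numpy as np

def dirichlet_from_moments(A):
    """Fit a Dirichlet whose second moment E[w w^T] roughly matches A.

    With alpha_k = alpha0 * p_k and p_k the topic marginals, the trace of
    E[w w^T] equals (alpha0 * sum(p^2) + 1) / (alpha0 + 1); solving this for
    alpha0 gives the concentration below.
    """
    p = A.sum(axis=1)                        # topic marginals p_k = sum_l A_kl
    D = np.trace(A)                          # target sum of E[w_k^2]
    alpha0 = (1.0 - D) / (D - np.dot(p, p))  # concentration from the trace equation
    return alpha0 * p                        # alpha_k = alpha0 * p_k
```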

Data (train/unseen) | Algo | Precision | Recall | F1-score | ℓ1-error | ℓ∞-error | Hellinger | Prior-dist | Non-supp
NIPS (1.3k/143) | Rand | .092/.107 | .426/.429 | .260/.268 | 1.60/1.55 | 0.52/0.44 | 0.74/0.72 | .0123/.0126 | .908/.894
NIPS (1.3k/143) | TLI | .396/.421 | .954/.890 | .513/.536 | 0.88/0.88 | 0.29/0.24 | 0.44/0.45 | .0085/.0080 | .617/.604
NIPS (1.3k/143) | SPI | .169/.193 | .994/.994 | .271/.307 | 1.20/1.14 | 0.41/0.34 | 0.55/0.53 | .0118/.0118 | .771/.755
NIPS (1.3k/143) | PADD | .691/.734 | .876/.740 | .759/.744 | 0.47/0.55 | 0.16/0.18 | 0.27/0.31 | .0049/.0016 | .382/.324
Blog (11k/1.2k) | Rand | .095/.100 | .424/.428 | .261/.263 | 1.55/1.53 | 0.49/0.46 | 0.70/0.69 | .0101/.0100 | .905/.900
Blog (11k/1.2k) | TLI | .421/.422 | .856/.814 | .530/.527 | 0.95/0.97 | 0.28/0.27 | 0.47/0.49 | .0066/.0067 | .619/.623
Blog (11k/1.2k) | SPI | .170/.176 | .982/.979 | .279/.289 | 1.18/1.16 | 0.39/0.37 | 0.51/0.50 | .0106/.0107 | .782/.776
Blog (11k/1.2k) | PADD | .642/.671 | .800/.758 | .713/.715 | 0.61/0.62 | 0.21/0.21 | 0.33/0.34 | .0057/.0044 | .428/.403
NYTimes (263k/30k) | Rand | .061/.062 | .427/.427 | .207/.208 | 1.73/1.73 | 0.62/0.61 | 0.80/0.79 | .0188/.0188 | .939/.938
NYTimes (263k/30k) | TLI | .321/.320 | .925/.922 | .428/.427 | 1.16/1.16 | 0.40/0.40 | 0.56/0.56 | .0078/.0197 | .690/.690
NYTimes (263k/30k) | SPI | .183/.183 | .964/.964 | .281/.281 | 1.29/1.29 | 0.47/0.47 | 0.58/0.58 | .0143/.0143 | .775/.775
NYTimes (263k/30k) | PADD | .474/.543 | .899/.904 | .575/.668 | 0.76/0.71 | 0.27/0.28 | 0.39/0.37 | .0005/.0113 | .520/.493
Table 1: Real experiment on Fully-Real (FR) corpora. Each entry shows the corresponding metric on training/unseen documents, averaged across all models with different numbers of topics K. Rand estimates compositions randomly. For the two new metrics, Prior-dist and Non-supp, smaller numbers are better. PADD performs the best when the topic compositions learned by Gibbs Sampling are treated as the ground truth.

To support comparisons with previous work, we use the same standard datasets, NIPS papers and NYTimes articles, with the same vocabulary curation as (Lee et al., 2015). By adding political blogs (Eisenstein & Xing, 2010), the training sets in our datasets span several orders of magnitude in size. For evaluating information-retrieval performance, we first find the prominent topics whose cumulative mass is close to 0.8 for each document, and compute precision, recall, and F1 score as in (Yao et al., 2009). For measuring distributional similarity, we use KL-divergence and Hellinger distance; in contrast to the asymmetric KL, Hellinger is a symmetric and normalized distance used in evaluating the CTM. For comparing reconstruction errors with TLI, we also report the ℓ1-error and ℓ∞-error (Arora et al., 2016). For the fully-real experiments, we additionally report the distance to the prior and the mass on non-supports, the total probability that each algorithm puts on non-prominent topics of the ground-truth composition.
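For clarity of the evaluation protocol, here is a small sketch of how these retrieval and distance metrics can be computed per document (the helper names and the exact cutoff handling are our own assumptions):

```python
import numpy as np

def prominent_topics(w, mass=0.8):
    """Smallest set of top-ranked topics whose cumulative mass reaches `mass`."""
    order = np.argsort(w)[::-1]
    cum = np.cumsum(w[order])
    cutoff = np.searchsorted(cum, mass) + 1
    return set(order[:cutoff].tolist())

def precision_recall_f1(w_pred, w_true, mass=0.8):
    pred, true = prominent_topics(w_pred, mass), prominent_topics(w_true, mass)
    tp = len(pred & true)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(true) if true else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

def hellinger(p, q):
    """Symmetric, normalized distance between two distributions."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))
```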

In the Semi-Synthetic (SS) experiments given in Figure 3, SPI performs the best as the number of topics increases. As expected, SPI is good at recalling all possible topics, though it loses precision at the same time. Note that Gibbs Sampling shows relatively high reconstruction errors, especially for models with large K. This is because the sparse Dirichlet prior puts very little mass on most topics, causing small mispredictions to be notably harmful even with sufficiently mixed Gibbs Sampling. (We throw out the first 200 burn-in iterations and run 1,000 further iterations to gather samples from the posterior. We use the Java version of MALLET, but provide the learned topics and the hyper-parameter fitted from A as fixed information; only the topic composition is updated over iterations.) The same problem also appears in (Arora et al., 2016). Despite the unrealistic nature of SS, PADD outperforms TLI by a large margin in most cases, showing behavior similar to probabilistic Gibbs Sampling across all datasets. TLI performs well only for tiny topic models.

The situation is quite different in the Semi-Real (SR) experiments shown in Figure 4. The high recall of SPI is no longer helpful due to a drastic loss of precision, whereas PADD is comparable to Gibbs Sampling across all models and datasets even with only 1k documents. This is because PADD captures the rich correlations through its prior-aware formulation. TLI performs poorly due to its linear nature, which ignores topic correlations. When testing on the Fully-Real (FR) corpora, PADD shows the best performance on both the training documents and the unseen documents. Considering that Gibbs Sampling with the ground-truth parameters still loses accuracy in the other settings, the metrics evaluated against Gibbs Sampling in Table 1 are noteworthy. Prior-dist, the Frobenius distance to the prior A, suggests that the compositions learned by PADD recover the prior better than those of the other algorithms. While TLI uses provably guaranteed thresholding (we use the same less conservative threshold and unbiasedness setting as in (Arora et al., 2016), and we also tried loosening the unbiasedness constraint when the inversion failed, which did not help), the Non-supp values show that it still puts more than half of its probability mass on non-prominent topics on average.

For inferring topic compositions with PADD, we iterate the master procedure PADD 15 times and the slave procedure ADMM-DR 150 times. We verify that PADD converges well despite its non-convex nature; inference quality is almost equivalent when running each slave procedure only 100 times, and it is not sensitive to these parameters either. When using an equivalent level of parallel processing, computing the compositions via TLI takes 2,297 seconds, whereas PADD takes 849 seconds for the entire inference on the semi-synthetic NIPS dataset. (We also observe that the AP-rectification in JSMF significantly improves the conditioning of the learned topics on various datasets, removing TLI's failures in computing the left-inverse. However, even when the inverse is computed, TLI sometimes yields NaN values due to the numerical instability of matrix inversion; we omit those results from evaluation to avoid degrading TLI's reported quality.) SPI is by far the fastest, requiring nothing more than a single matrix multiplication, whereas Gibbs takes 3,794 seconds on the same machine. While we choose ADMM-DR mainly for the tightest optimization, one can easily incorporate faster gradient-based algorithms inside our formulation of prior-aware dual decomposition.

5 Discussion

Fast and accurate topic inference for new documents is a vital component of a topic-based workflow, especially for spectral algorithms that do not by themselves produce document-topic distributions. We first verify that our Simple Probabilistic Inverse (SPI) performs well on semi-synthetic documents that have no meaningful correlation structure. When a word contributes to two or more topics in the overall data, SPI naively distributes its prediction weight across those topics, without considering the potential contributions of other co-occurring words and their correlations through the underlying topics of the specific query document. However, one could also threshold or post-process SPI's estimates analogously to TLI, suggesting a future research direction for extremely fast composition inference.

We then verify the power of Prior-aware Dual Decomposition (PADD). Armed with theoretical motivation and an efficient parallel implementation, PADD performs comparably to probabilistic Gibbs Sampling, especially on realistic data. The experimental results show that PADD also predicts the topic compositions of unseen real data well, notably outperforming the existing TLI method. With robust and efficient topic inference that is aware of the topical correlations latent in the data, we can now fill out the necessary tools to make spectral topic models a full competitor to likelihood-based methods. Although the benefits of PADD for topic inference are demonstrated here mostly for second-order spectral methods, the approach is applicable in any topic inference setting.

References

  • Anandkumar et al. (2012a) Anandkumar, Anima, Foster, Dean P., Hsu, Daniel, Kakade, Sham, and Liu, Yi-Kai. A spectral algorithm for latent Dirichlet allocation. In NIPS, 2012a.
  • Anandkumar et al. (2012b) Anandkumar, Animashree, Kakade, Sham M, Foster, Dean P, Liu, Yi-Kai, and Hsu, Daniel. Two SVDs suffice: Spectral decompositions for probabilistic topic modeling and latent Dirichlet allocation. Technical report, 2012b.
  • Anandkumar et al. (2013) Anandkumar, Animashree, Hsu, Daniel J., Janzamin, Majid, and Kakade, Sham. When are overcomplete topic models identifiable? Uniqueness of tensor Tucker decompositions with structured sparsity. CoRR, 2013.
  • Arabshahi & Anandkumar (2017) Arabshahi, Forough and Anandkumar, Animashree. Spectral methods for correlated topic models. AISTATS, 2017.
  • Arora et al. (2012) Arora, S., Ge, R., and Moitra, A. Learning topic models – going beyond SVD. In FOCS, 2012.
  • Arora et al. (2013) Arora, Sanjeev, Ge, Rong, Halpern, Yonatan, Mimno, David, Moitra, Ankur, Sontag, David, Wu, Yichen, and Zhu, Michael. A practical algorithm for topic modeling with provable guarantees. In ICML, 2013.
  • Arora et al. (2016) Arora, Sanjeev, Ge, Rong, Koehler, Frederic, Ma, Tengyu, and Moitra, Ankur. Provable algorithms for inference in topic models. In ICML, pp. 2859–2867, 2016.
  • Asuncion et al. (2009) Asuncion, Arthur, Welling, Max, Smyth, Padhraic, and Teh, Yee Whye. On smoothing and inference for topic models. In Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence, 2009.
  • Bansal et al. (2014) Bansal, Trapit, Bhattacharyya, Chiranjib, and Kannan, Ravindran. A provable SVD-based algorithm for learning topics in dominant admixture corpus. In Advances in Neural Information Processing Systems 27, 2014.
  • Bioucas-Dias & Figueiredo (2010) Bioucas-Dias, José M and Figueiredo, Mário AT. Alternating direction algorithms for constrained sparse regression: Application to hyperspectral unmixing. In Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), 2010 2nd Workshop on, pp. 1–4, 2010.
  • Blei & Lafferty (2007) Blei, D. and Lafferty, J. A correlated topic model of science. Annals of Applied Statistics, pp. 17–35, 2007.
  • Blei et al. (2003) Blei, D., Ng, A., and Jordan, M. Latent Dirichlet allocation. Journal of Machine Learning Research, pp. 993–1022, 2003. Preliminary version in NIPS 2001.
  • Chang et al. (2009) Chang, Jonathan, Boyd-Graber, Jordan, Wang, Chong, Gerrish, Sean, and Blei, David M. Reading tea leaves: How humans interpret topic models. In NIPS, 2009.
  • Chen et al. (2013) Chen, Jianfei, Zhu, Jun, Wang, Zi, Zheng, Xun, and Zhang, Bo. Scalable inference for logistic-normal topic models. In NIPS, pp. 2445–2453, 2013.
  • Ding et al. (2015) Ding, W., Ishwar, P., and Saligrama, V. Most large topic models are approximately separable. In 2015 Information Theory and Applications Workshop (ITA), 2015.
  • Duchi et al. (2008) Duchi, John, Shalev-Shwartz, Shai, Singer, Yoram, and Chandra, Tushar. Efficient projections onto the ℓ1-ball for learning in high dimensions. In Proceedings of the 25th International Conference on Machine Learning, pp. 272–279, 2008.
  • Eisenstein & Xing (2010) Eisenstein, Jacob and Xing, Eric. The CMU 2008 political blog corpus. Technical report, CMU, March 2010.
  • Erlin (2017) Erlin, Matt. Topic modeling, epistemology, and the English and German novel. Cultural Analytics, May 2017.
  • Goldstone & Underwood (2014) Goldstone, Andrew and Underwood, Ted. The quiet transformations of literary studies: What thirteen thousand scholars could tell us. New Literary History, 45(3), Summer 2014.
  • Griffiths & Steyvers (2004) Griffiths, T. L. and Steyvers, M. Finding scientific topics. Proceedings of the National Academy of Sciences, 101:5228–5235, 2004.
  • Hall et al. (2008) Hall, David, Jurafsky, Daniel, and Manning, Christopher D. Studying the history of ideas using topic models. In EMNLP, pp. 363–371, 2008.
  • Hitchcock (1927) Hitchcock, Frank L. The expression of a tensor or a polyadic as a sum of products. Journal of Mathematics and Physics, 6(1):164–189, 1927.
  • Hofmann (1999) Hofmann, T. Probabilistic latent semantic analysis. In UAI, pp. 289–296, 1999.
  • Huang et al. (2016) Huang, Kejun, Fu, Xiao, and Sidiropoulos, Nikolaos D. Anchor-free correlated topic modeling: Identifiability and algorithm. In NIPS, 2016.
  • Komodakis et al. (2011) Komodakis, Nikos, Paragios, Nikos, and Tziritas, Georgios. MRF energy minimization and beyond via dual decomposition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(3):531–552, 2011.
  • Lee et al. (2015) Lee, Moontae, Bindel, David, and Mimno, David. Robust spectral inference for joint stochastic matrix factorization. In NIPS, 2015.
  • Li & Pong (2016) Li, Guoyin and Pong, Ting Kei. Douglas–Rachford splitting for nonconvex optimization with application to nonconvex feasibility problems. Mathematical Programming, 159(1-2):371–401, 2016.
  • Liu et al. (2009) Liu, Yan, Niculescu-Mizil, Alexandru, and Gryc, Wojciech. Topic-link LDA: Joint models of topic and author community. In ICML, 2009.
  • Nguyen et al. (2014) Nguyen, Thang, Hu, Yuening, and Boyd-Graber, Jordan. Anchors regularized: Adding robustness and extensibility to scalable topic-modeling algorithms. In Association for Computational Linguistics, 2014.
  • Rush & Collins (2012) Rush, Alexander M and Collins, Michael. A tutorial on dual decomposition and Lagrangian relaxation for inference in natural language processing. Journal of Artificial Intelligence Research (JAIR), 45:305–362, 2012.
  • Sontag & Roy (2011) Sontag, D. and Roy, D. Complexity of inference in latent Dirichlet allocation. In NIPS, pp. 1008–1016, 2011.
  • Steyvers & Griffiths (2008) Steyvers, Mark and Griffiths, Thomas L. Rational analysis as a link between human memory and information retrieval. The probabilistic mind: Prospects for Bayesian cognitive science, pp. 329–349, 2008.
  • Talley et al. (2011) Talley, Edmund M, Newman, David, Mimno, David, Herr II, Bruce W, Wallach, Hanna M, Burns, Gully A P C, Leenders, Miriam, and McCallum, Andrew. Database of NIH grants using machine-learned categories and graphical clustering. Nature Methods, 8(7):443–444, June 2011.
  • Tucker (1966) Tucker, Ledyard R. Some mathematical notes on three-mode factor analysis. Psychometrika, 31:279–311, 1966.
  • Wallach et al. (2009) Wallach, Hanna M., Mimno, David M., and McCallum, Andrew. Rethinking LDA: Why priors matter. In NIPS, 2009.
  • Yao et al. (2009) Yao, Limin, Mimno, David, and McCallum, Andrew. Efficient methods for topic model inference on streaming document collections. In KDD, 2009.