1 Introduction
Topic models [1] assume that each document in a text corpus is generated from an admixture of topics, where each topic is a distribution over the words of a vocabulary. An admixture is a convex combination of distributions. The words of a document are then picked in i.i.d. trials, each trial drawing a word from the multinomial distribution given by the weighted combination of the topic distributions. The inference problem, recovering the topic distributions from such a collection of documents, is provably NP-hard. Existing literature pursues techniques such as variational methods [2] or MCMC procedures [3] for approximating the maximum likelihood estimates.
Given the intractability of the problem, one needs further assumptions on topics to derive polynomial-time algorithms that can provably recover topics. A possible (strong) assumption is that each document has only one topic, while the collection can have many topics. A document with only one topic is sometimes referred to as a pure topic document. [7] proved that a natural algorithm based on SVD recovers topics when each document is pure and, in addition, for each topic there is a set of words, called primary words, whose total frequency in that topic is close to 1. More recently, [6] showed using tensor methods that if the topic weights have a Dirichlet distribution, we can learn the topic matrix. Note that while this allows non-pure documents, the Dirichlet distribution gives essentially uncorrelated topic weights.
In an interesting recent development, [4, 5] gave the first provable algorithm that can recover topics from a corpus of documents drawn from admixtures, assuming separability. Topics are said to be separable if every topic has at least one anchor word. A word is said to be an anchor word for a topic if it has high probability in that topic and zero probability in the remaining topics. The requirement of high probability for a single word in a topic is unrealistic.
Our Contributions:
Topic distributions, such as those learnt in LDA, try to model the co-occurrence of a group of words that describes a theme. Keeping this in mind, we introduce the notion of catchwords. A group of words is called a set of catchwords of a topic if each word occurs strictly more frequently in that topic than in the other topics and together they have high frequency. This is a much weaker assumption than separability. Furthermore, we observe empirically that the posterior topic weights assigned by LDA to a document often have the property that one of the weights is significantly higher than the rest. Motivated by this observation, which has not been exploited in the topic modeling literature, we suggest a new assumption. It is natural to assume that in a text corpus a document, even if it has multiple themes, will have an overarching dominant theme. In this paper we focus on document collections drawn from dominant admixtures. A document collection is said to be drawn from a dominant admixture if, for every document, there is one topic whose weight is significantly higher than those of the other topics and, in addition, for every topic, there is a small fraction of documents that are nearly purely on that topic. The main contribution of the paper is to show that under these assumptions our algorithm, which we call TSVD, indeed provably finds a good approximation in total error to the topic matrix. We prove a bound on the error of our approximation that does not grow with the dictionary size, unlike [5], where the error grows linearly with it.
Problem Definition:
The number of words in the dictionary and the number of documents are both large, whereas the number of topics is to be thought of as much smaller. For each topic, there is a fixed vector giving the probability of each word in that topic; the topic matrix has these vectors as its columns.

Documents are generated in i.i.d. trials. To generate a document, one first picks a vector of topic weights according to a fixed distribution on the simplex; the document's word distribution is then the corresponding weighted combination of the topic vectors. The words of the document are picked in i.i.d. trials, each trial drawing a word according to this multinomial distribution. All that is given as data is the frequency of words in each document, namely the word-document frequency matrix, whose entrywise expectation is the product of the topic matrix and the matrix of topic weights.

In this paper we consider the problem of recovering the topic matrix from this data.
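The generative process above can be sketched numerically. All sizes below (50 words, 3 topics, 200 documents, 100 words per document) and the use of a Dirichlet for the topic-weight distribution are illustrative assumptions, not requirements of the model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: dictionary, topics, documents, words per document.
d, k, s, m = 50, 3, 200, 100

# Topic matrix: each column is a probability distribution over the d words.
M = rng.dirichlet(np.ones(d), size=k).T            # shape (d, k)

# Topic weights for each document, drawn from a distribution on the simplex
# (a Dirichlet here, purely for illustration).
W = rng.dirichlet(np.ones(k), size=s).T            # shape (k, s)

# Each document's words are m i.i.d. multinomial trials with probabilities
# given by the weighted combination of topic vectors; the observed data is
# the frequency matrix A, whose entrywise expectation is M @ W.
A = np.stack([rng.multinomial(m, M @ W[:, j]) / m for j in range(s)], axis=1)

print(A.shape, np.abs(A - M @ W).mean())
```

With more words per document the empirical frequencies concentrate around their expectation, which is the regime where recovery becomes easier.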
2 Previous Results
In this section we review literature related to designing provable algorithms for topic models. For an overview of topic models we refer the reader to the excellent survey [1]. Work on provable algorithms for recovering topic models was initiated by [7]. Latent Semantic Indexing (LSI) [8] remains a successful method for retrieving similar documents by using SVD. [7] showed that an SVD-based procedure recovers the topic matrix from a collection of pure-topic documents under the additional Primary Words Assumption. [6] showed that in the admixture case, if one assumes a Dirichlet distribution for the topic weights, then, using tensor methods, one can indeed learn the topic matrix to small error, provided some added assumptions on numerical parameters such as the condition number are satisfied.
The first provably polynomial-time algorithm for an admixture corpus was given in [4, 5]. For a topic, a word is an anchor word if it has high probability in that topic and zero probability in the remaining topics.
Theorem 2.1
[4] If every topic has an anchor word, there is a polynomial-time algorithm that returns an estimate of the topic matrix such that, with high probability,
where the bound involves the condition number of the topic matrix, the minimum expected weight of a topic, and the number of words in each document.
Note that the error grows linearly in the dictionary size, which is often large. Note also the strong polynomial dependence of the sample complexity on the other parameters. If, say, the word "run" is an anchor word for the topic "baseball" with probability 0.1 in that topic, then the requirement is that every 10th word in a document on this topic is "run". This seems too strong to be realistic. It would be more realistic to ask that a set of words like "run", "hit", "score", etc. together have frequency at least 0.1, which is what our catchwords assumption does.
3 Learning Topics from Dominant Admixtures
Informally, a document is said to be drawn from a dominant admixture if the document has one dominant topic. Besides being simple, the assumption is supported by empirical evidence from real corpora, as we demonstrate. The dominant topic assumption is weaker than the pure topic assumption; more importantly, the SVD-based procedure proposed by [7] does not apply. Inspired by the primary words assumption, we introduce the assumption that each topic has a set of catchwords which individually occur more frequently in that topic than in the others. This is again a much weaker assumption than both the primary words and anchor words assumptions, and it can be verified experimentally. In this section we establish that applying SVD to a matrix obtained by thresholding the word-document matrix, followed by k-means clustering, can learn topics having catchwords from a dominant admixture corpus.
3.1 Assumptions: Catchwords and Dominant admixtures
Let the following be non-negative reals satisfying:
(1)  
(2) 
Dominant topic Assumption (a) For , document has a dominant topic such that
(b) For each topic , there are at least documents in each of which topic has weight at least .
Catchwords Assumption: There are disjoint sets of words, one per topic, such that, with the quantity defined in (9),
(3)  
(4)  
(5) 
Part (b) of the Dominant Topic Assumption is in a sense necessary for "identifiability", namely for the model to have a set of document vectors such that every document vector is in the convex hull of these vectors. The Catchwords Assumption is natural for describing a theme, as it tries to model a unique group of words that is likely to co-occur when the theme is expressed. This assumption is close to the topics discovered by LDA-like models, which try to model the co-occurrence of words. Assumption (5) lower-bounds the frequency of a catchword in its topic; indeed, if a word's frequency in a topic is too low, we do not expect to see the word in documents on that topic, so it cannot be called a catchword at all.
A slightly different (but equivalent) description of the model will be useful to keep in mind. What is fixed (not stochastic) are the topic matrix and the distribution of the topic weights. To pick a document, we can first pick its dominant topic and condition the distribution of the topic weights on this topic being the dominant one. Equivalently, one can think of the weights as being picked from a mixture of distributions, with mixing weights the probabilities of each topic being dominant. We then form the weighted combination of the topic vectors and pick the words of the document in i.i.d. multinomial trials as before. This description is only approximately valid, but the error is small enough that we can disregard it.
For each word and each topic, define the following probability and the corresponding "empirical probability":
(6)  
(7) 
Note that the former is a real number, whereas the latter is a random variable with expectation equal to it. We need a technical assumption on these probability curves (which is weaker than unimodality).

NoLocalMin Assumption: We assume that the curve does not have a local minimum, in the sense:
(8) 
The justification for this assumption is twofold. First, Zipf's-law-like behaviour, where the number of words plotted against relative frequency declines as a power function, has often been observed; such a plot is monotonically decreasing and indeed satisfies our assumption. For catchwords, however, we do not expect this behaviour: the curve should rise initially as the relative frequency increases, reach a maximum, and then decline. This is a unimodal function and also satisfies our assumption. Indeed, we have observed empirically (see Section 4.1) that these are essentially the only two behaviours.
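The no-local-min property can be checked mechanically on a sampled curve. The sketch below, with made-up curves, tests that no interior point lies strictly below both of its neighbours, a condition satisfied by both the Zipf-like decline and the unimodal catchword shape described above.

```python
def has_no_local_min(curve):
    """True iff no interior point lies strictly below both of its
    neighbours, i.e. the curve never dips and then rises again."""
    return not any(curve[i - 1] > curve[i] < curve[i + 1]
                   for i in range(1, len(curve) - 1))

# Zipf-like monotone decline (typical of non-catchwords): satisfies it.
zipf = [1.0 / x for x in range(1, 20)]
# Rise-then-fall unimodal shape (expected for catchwords): also satisfies it.
unimodal = [1, 3, 6, 8, 7, 4, 2, 1]
# A dip followed by a rise: violates the assumption.
dip = [5, 2, 1, 4, 3]

print(has_no_local_min(zipf), has_no_local_min(unimodal), has_no_local_min(dip))
# → True True False
```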
Relative sizes of parameters: Before we close the section, a discussion of the relative sizes of the parameters is in order. The dictionary size is large; for asymptotic analysis, we can think of it as going to infinity. The number of documents is also large and can be thought of as going to infinity. [Since our model has i.i.d. documents, the number of samples we need should, intuitively, depend mainly on the remaining parameters rather than on the corpus size itself.] The number of topics is (much) smaller, but need not be constant. Throughout, a generic constant independent of these parameters is used; its value may be different in different contexts.
3.2 The TSVD Algorithm
Existing SVD-based procedures for clustering raw word-document matrices fail because the spread of a word's frequency within a topic is often more than (at least not significantly less than) the gap between the word's frequencies in two different topics. Hypothetically, the frequency of the word "run" in the topic Sports may range from, say, 0.01 on up, whereas in other topics it may range from 0 to 0.005. The success of the algorithm hinges on correctly identifying dominant topics such as Sports by recognizing that the word "run" occurs with high frequency. In this example, the gap (0.01 − 0.005) between Sports and other topics is less than the spread within Sports (1.0 − 0.01), so a 2-clustering approach (based on SVD) will split the topic Sports into two. While this is a toy example, note that if we threshold the frequencies at, say, 0.01, then ideally all Sports documents will be above and the rest all below the threshold, making the subsequent clustering easy.
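The toy example can be made concrete; the frequency ranges below are the hypothetical ones from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical frequencies of the word "run": a wide spread inside Sports
# versus a narrow low range in all other topics (numbers from the text).
run_in_sports = rng.uniform(0.01, 1.0, size=100)
run_elsewhere = rng.uniform(0.0, 0.005, size=100)

spread_within = run_in_sports.max() - run_in_sports.min()   # roughly 0.99
gap_between = run_in_sports.min() - run_elsewhere.max()     # roughly 0.005

# The within-topic spread dwarfs the between-topic gap, which is what makes
# raw 2-means split Sports; a threshold at 0.01 separates the topics exactly.
threshold = 0.01
print(spread_within > gap_between,
      (run_in_sports >= threshold).all(),
      (run_elsewhere < threshold).all())
# → True True True
```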
There are several issues in extending beyond the toy case. Data is not one-dimensional, so we use a different threshold for each word, and the thresholds must themselves be computed from the data. Ideally, a word's threshold should not split any topic: for each word and each dominant topic, either most documents with that dominant topic should lie above the threshold, or most should lie below it. We will show that our threshold procedure indeed achieves this. One other nuance: to avoid conditioning, we split the data into two parts, compute the thresholds on one part, and actually do the thresholding on the other; each part thus gets half the columns, and the split partitions the columns of the weight matrix as well. The columns of the thresholded matrix are then clustered by a technique we call Project and Cluster: we project the columns to the k-dimensional SVD subspace of the matrix and cluster in the projection. Projecting before clustering has recently been proven [9] (see also [10]) to yield good starting cluster centers. The clustering so found is not yet satisfactory, so we refine it with the classic Lloyd's k-means algorithm [12]. As we will show, the partition of the documents produced after clustering is close to the partition induced by the dominant topics. The catchwords of a topic are then (approximately) identified as the words occurring most frequently in the documents of its cluster. Finally, we identify the nearly pure documents of each topic (approximately) as the documents in which the catchwords occur the most, and obtain an approximation to the topic vector by averaging these nearly pure documents. We now describe the precise algorithm.
3.3 Topic recovery using Thresholded SVD

1. Partition: Randomly partition the columns of the word-document matrix into two matrices with an equal number of columns each.

2. Thresholding
(a) Compute thresholds on the first part: for each word, set the threshold to the highest value satisfying the condition above.
(b) Do the thresholding on the second part: zero out every entry below the word's threshold.

3. SVD: Find the best rank-k approximation to the thresholded matrix.

4. Identify Dominant Topics
(a) Project and Cluster: Find an (approximately) optimal k-means clustering of the columns of the rank-k approximation.
(b) Lloyd's Algorithm: Using the clustering found in Step 4(a) as the starting clustering, apply Lloyd's k-means algorithm to the columns of the thresholded matrix (not of the rank-k approximation).
(c) Let the clusters after Lloyd's algorithm induce a partition of the documents. //* We will prove that this partition essentially matches the one induced by the dominant topics. *//

5. Identify Catchwords
(a) For each word and each cluster, compute a high-order statistic (a near-top element) of the word's frequencies over the documents of the cluster.
(b) Declare a word a catchword of a cluster if this statistic in that cluster sufficiently exceeds the corresponding statistic in every other cluster.

6. Find Topic Vectors: For each cluster, find the documents with the highest total frequency of that cluster's catchwords and return the average of these documents as our approximation to the topic vector.
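The steps above can be sketched end to end. This is a toy stand-in, not the analysed algorithm: the paper's data-driven thresholds (step 2) and catchword tests (step 5) are replaced by simple quantile and top-frequency rules, and the k-means seeding is random rather than the projection-based initialization of step 4(a).

```python
import numpy as np

def tsvd_sketch(A, k, rng, n_catch=10, top_frac=0.5):
    """Toy sketch of the TSVD pipeline: split, threshold, rank-k project,
    cluster with Lloyd's iterations, then average catchword-rich documents."""
    d, s = A.shape
    cols = rng.permutation(s)
    A1, A2 = A[:, cols[: s // 2]], A[:, cols[s // 2:]]

    # Step 2: per-word thresholds from the first half (a plain 90th-percentile
    # rule stands in for the paper's condition); threshold the second half.
    zeta = np.quantile(A1, 0.9, axis=1)
    B = (A2 > zeta[:, None]).astype(float)

    # Step 3: best rank-k approximation of the thresholded matrix.
    U, S, Vt = np.linalg.svd(B, full_matrices=False)
    P = (U[:, :k] * S[:k]) @ Vt[:k]

    # Step 4: Lloyd's k-means on the columns of the projection.
    centers = P[:, rng.choice(P.shape[1], size=k, replace=False)].T
    for _ in range(30):
        dist = ((P.T[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dist.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = P[:, labels == j].mean(axis=1)

    # Steps 5-6: use each cluster's most frequent words as stand-in
    # catchwords, keep the documents richest in them, and average.
    M_hat = np.full((d, k), 1.0 / d)           # uniform fallback per column
    for j in range(k):
        docs = A2[:, labels == j]
        if docs.shape[1] == 0:
            continue
        catch = np.argsort(docs.mean(axis=1))[-n_catch:]
        score = docs[catch].sum(axis=0)
        top = np.argsort(score)[-max(1, int(top_frac * docs.shape[1])):]
        M_hat[:, j] = docs[:, top].mean(axis=1)
    return M_hat / M_hat.sum(axis=0, keepdims=True)

# Toy corpus: two topics with disjoint supports, documents nearly pure.
rng = np.random.default_rng(2)
M = np.zeros((20, 2))
M[:10, 0] = 0.1
M[10:, 1] = 0.1
dom = rng.integers(0, 2, size=300)
w = rng.uniform(0.85, 1.0, size=300)
W = np.zeros((2, 300))
W[dom, np.arange(300)] = w
W[1 - dom, np.arange(300)] = 1 - w
A = np.stack([rng.multinomial(80, M @ W[:, j]) / 80 for j in range(300)], axis=1)

M_hat = tsvd_sketch(A, 2, rng)
print(M_hat.shape)
```

The output columns are probability vectors; on this separated toy instance they should concentrate on the two disjoint word supports, though no guarantee of the paper transfers to these simplified rules.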
Theorem 3.1
Main Theorem: Under the Dominant Topic, Catchwords and NoLocalMin assumptions, the algorithm succeeds with high probability in finding an approximation to the topic matrix so that
A note on the sample complexity is in order. Notably, the dependence of the number of documents needed on the probability of the rarest dominant topic is best possible within logarithmic factors: with fewer documents, a topic that is dominant only with small probability may have none of the documents in the collection. A further lower bound comes from the fact that we only assume a small fraction of nearly pure documents on each topic. Assuming we can find this set (the algorithm approximately does), its average still has a nonzero per-coordinate standard deviation, and to approximate a topic vector whose entries are individually small to the target error, the per-coordinate error must be correspondingly small. Note that to get comparable error, [4] needs a quadratic dependence on the corresponding parameter. There is a long sequence of lemmas to prove the theorem; the lemmas and proofs are given in the Appendix. The essence of the proof lies in showing that the clustering step correctly identifies the partition induced by the dominant topics. For this, we take advantage of a recent development on the k-means algorithm from [9] (see also [10]), where it is shown that under a condition called the Proximity Condition, Lloyd's k-means algorithm, started with the centers provided by the SVD-based algorithm, correctly identifies almost all the documents' dominant topics. We prove that the Proximity Condition indeed holds. This calls for machinery from random matrix theory (in particular, bounds on singular values); we prove that the singular values of the thresholded word-document matrix are suitably bounded. Once the dominant topic of each document is identified, we can find the catchwords of each topic. We then rely on part (b) of the Dominant Topic Assumption, namely that there is a small fraction of nearly pure-topic documents for each topic: the catchwords help isolate these nearly pure documents and hence find the topic vectors. The proofs are complicated by the fact that each step of the algorithm induces conditioning on the data; for example, after clustering, the document vectors in one cluster are no longer independent.
4 Experimental Results
We compare the thresholded SVD based k-means algorithm (TSVD, Algorithm 3.2; resources available at http://mllab.csa.iisc.ernet.in/tsvd) with the algorithms of [5], RecoverKL and RecoverL2, using the code made available by the authors (http://www.cs.nyu.edu/~halpern/files/anchorwordrecovery.zip). We first provide empirical support for the algorithm assumptions of Section 3.1, namely the dominant topic and the catchwords assumptions. We then show on 4 different semi-synthetic datasets that TSVD provides as good or better recovery of topics than the Recover algorithms. Finally, on real-life datasets, we show that the algorithm performs as well as [5] in terms of perplexity and topic coherence.
Implementation Details:
TSVD parameters are not known in advance for a real corpus. We tested multiple settings empirically and used the values that gave the best performance; the same values were used on all the datasets tested. The thresholding parameters (step 2) and the catchword parameters (step 5) were fixed accordingly, and for finding the topic vectors (step 6), taking the top 50% of documents by catchword frequency gave empirically better results. The new algorithm is sensitive to the initialization of the first k-means step in the projected SVD space. To remedy this, we run 10 independent random initializations of the algorithm with k-means++ [13] and report the best result.
Datasets: We use four real-world datasets in the experiments. As preprocessing steps we removed standard stop-words, selected the vocabulary by term frequency, and removed documents with fewer than 20 words. Datasets used are: (1) NIPS (http://archive.ics.uci.edu/ml/datasets/Bag+of+Words): 1,500 NIPS full papers, vocabulary of 2,000 words, mean document length 1023. (2) NYT (same source): a random subset of 30,000 documents from the New York Times dataset, vocabulary of 5,000 words, mean document length 238. (3) Pubmed (same source): a random subset of 30,000 documents from the Pubmed abstracts dataset, vocabulary of 5,030 words, mean document length 58. (4) 20NewsGroup (20NG; http://qwone.com/~jason/20Newsgroups): 13,389 documents, vocabulary of 7,118 words, mean document length 160.
4.1 Algorithm Assumptions
To check the dominant topic and catchwords assumptions, we first run 1000 iterations of Gibbs sampling on each real corpus and learn the posterior document-topic distribution for each document (by averaging over 10 saved states separated by 50 iterations, after 500 burn-in iterations). We use this posterior document-topic distribution as the document-generating distribution to check the two assumptions.
Dominant topic assumption: Table 1 shows the fraction of documents in each corpus which satisfy this assumption, for a fixed minimum probability of the dominant topic and maximum probability of the non-dominant topics. The fraction of documents which have almost pure topics, with highest topic weight at least 0.95, is also shown. The results indicate that the dominant topic assumption is well justified (on average 64% of documents satisfy it), and there is also a substantial fraction of documents satisfying the almost pure topic assumption.
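This check can be written directly on the posterior document-topic matrix; the cutoffs 0.4 and 0.3 below are placeholders for the paper's dominance parameters, which are not fixed here.

```python
import numpy as np

def dominant_fraction(theta, alpha=0.4, beta=0.3):
    """Fraction of documents whose largest topic weight is at least alpha
    while every other weight stays below beta (alpha and beta are
    placeholder cutoffs, not the paper's actual parameter values)."""
    top = np.sort(theta, axis=1)[:, ::-1]          # weights sorted per doc
    return float(np.mean((top[:, 0] >= alpha) & (top[:, 1] < beta)))

theta = np.array([[0.70, 0.20, 0.10],    # dominant topic: accepted
                  [0.40, 0.35, 0.25],    # runner-up too heavy: rejected
                  [0.96, 0.03, 0.01]])   # nearly pure: accepted
print(dominant_fraction(theta))
# → 0.6666666666666666
```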
Catchwords assumption: We first cluster the documents by assigning all documents whose highest posterior probability falls on the same topic to one cluster. Then we use step 5 of TSVD (Algorithm 3.2) to find the set of catchwords for each topic-cluster, with parameter values respecting the constraints of Section 3.1. Table 1 reports the fraction of topics with a non-empty set of catchwords and the average per-topic frequency of the catchwords. The results indicate that most topics on real data contain catchwords (Table 1, second-last column). Moreover, the average per-topic frequency of the group of catchwords for a topic is also quite high (Table 1, last column).

NoLocalMin assumption: To provide support and intuition for the no-local-min assumption, we consider the empirical probability in (7) as a function of the threshold for various topics and words. Since a fixed topic matrix is needed to check this assumption, we generate semi-synthetic data from an LDA model trained on the real NYT corpus (as explained in Section 4.2.1). We find catchwords and document clusters as in the catchwords assumption above and plot the curves separately for some random catchwords and non-catchwords. Figure 1 shows the plots. As explained in Section 3.1, the plots are monotonically decreasing for non-catchwords and satisfy the assumption; the plots for catchwords are almost unimodal and also satisfy the assumption.
Corpus  Documents  Topics  % Docs with Dominant Topic  % Docs with Pure Topic  % Topics with CW  CW Mean Frequency
NIPS  1500  50  56.6%  2.3%  96%  0.05
NYT  30000  50  63.7%  8.5%  98%  0.07
Pubmed  30000  50  62.2%  5.1%  78%  0.05
20NG  13389  20  74.1%  39.5%  85%  0.06
4.2 Empirical Results
4.2.1 Topic Recovery on Semi-Synthetic Data
Semi-synthetic data: Following [5], we generate semi-synthetic corpora from LDA models trained by MCMC, to ensure that the synthetic corpora retain the characteristics of real data. Gibbs sampling is run for 1000 iterations on each of the four datasets, and the final word-topic distribution is used to generate varying numbers of synthetic documents, with document-topic distributions drawn from a symmetric Dirichlet with hyperparameter 0.01. For NIPS, NYT and Pubmed we use 50 topics, and for 20NewsGroup 20 topics, with mean document lengths of 1000, 300, 100 and 200 respectively. Note that the synthetic data is not guaranteed to satisfy the dominant topic assumption for every document (on average about 80% of documents satisfy it for the parameter values tested in Section 4.1).
Topic recovery: We learn the word-topic distributions for the semi-synthetic corpora using TSVD and the Recover algorithms of [5]. Given these learned topic distributions and the original data-generating distributions, we align the learned topics with the original ones by bipartite matching and rearrange the learned columns in accordance with the matching. Topic recovery is measured by the l1 error between matched topic vectors, averaged across topics (called the reconstruction error in [5]).
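The error measure can be sketched with a brute-force matching; for the small topic counts of a test this is fine, though the experiments would use a Hungarian-style matcher instead.

```python
import numpy as np
from itertools import permutations

def l1_reconstruction_error(M, M_hat):
    """Average per-topic l1 error after aligning the columns of M_hat to M
    with the best bipartite matching (brute force over permutations, so
    only suitable for small numbers of topics)."""
    k = M.shape[1]
    cost = np.array([[np.abs(M[:, i] - M_hat[:, j]).sum() for j in range(k)]
                     for i in range(k)])
    best = min(sum(cost[i, p[i]] for i in range(k))
               for p in permutations(range(k)))
    return best / k

M = np.array([[0.8, 0.1],
              [0.1, 0.1],
              [0.1, 0.8]])
# A column-permuted copy of M should score zero after matching.
print(l1_reconstruction_error(M, M[:, ::-1].copy()))
# → 0.0
```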
We report the reconstruction error in Table 2 for TSVD and the Recover algorithms, RecoverL2 and RecoverKL. TSVD has smaller error on most datasets than RecoverKL, and we observed the performance of TSVD to be always better than RecoverL2. The best performance is observed on NIPS, which has the largest mean document length, indicating that longer documents lead to better recovery. Results on 20NG are slightly worse than RecoverKL for small sample sizes (though better than RecoverL2), but the difference is small. While the values in Table 2 are averages, Figure 2 shows that the TSVD algorithm achieves much better topic recovery (27% improvement in error over RecoverKL) for the majority of the topics (90%) on most datasets.
Corpus  Documents  RecoverL2  RecoverKL  TSVD  % Improvement 
NIPS  40,000  0.342  0.308  0.115  62.7% 
50,000  0.346  0.308  0.145  52.9%  
60,000  0.346  0.311  0.131  57.9%  
Pubmed  40,000  0.388  0.332  0.288  13.3% 
50,000  0.378  0.326  0.280  14.1%  
60,000  0.372  0.328  0.284  13.4%  
20NG  40,000  0.126  0.120  0.124  3.3% 
50,000  0.118  0.114  0.113  0.9%  
60,000  0.114  0.110  0.106  3.6%  
NYT  40,000  0.214  0.208  0.195  6.3% 
50,000  0.211  0.206  0.185  10.2%  
60,000  0.205  0.200  0.194  3.0% 
4.2.2 Topic Recovery on Real Data
Perplexity:
A standard quantitative measure used to compare topic models and inference algorithms is perplexity [2]. The perplexity of a set of held-out test documents is the exponential of the negative log-likelihood of the held-out words, normalized by the total number of words. To evaluate perplexity on real data, the held-out sets consist of 350 documents for NIPS, 10,000 documents for NYT and Pubmed, and 6,780 documents for 20NewsGroup. Table 3 shows the perplexity results on the 4 datasets. TSVD gives perplexity comparable to RecoverKL, with slightly better results on NYT and 20NewsGroup, which are larger datasets with moderately high mean document lengths.
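The definition can be sketched directly; in practice the per-document topic weights would be re-estimated on the held-out text, which is glossed over here by passing them in.

```python
import numpy as np

def perplexity(C, M, theta):
    """exp(- total held-out log-likelihood / total word count).
    C[i, j] = count of word i in held-out document j; M = learned topic
    matrix; theta[:, j] = topic weights inferred for document j (assumed
    given here; normally re-estimated on the held-out documents)."""
    P = M @ theta                              # per-document word distributions
    logP = np.log(np.maximum(P, 1e-300))       # guard against log(0)
    return float(np.exp(-(C * logP).sum() / C.sum()))

# Sanity check: a uniform distribution over 4 words gives perplexity 4.
M = np.full((4, 1), 0.25)
theta = np.ones((1, 2))
C = np.array([[3, 1], [2, 2], [1, 0], [4, 7]], dtype=float)
print(perplexity(C, M, theta))   # ≈ 4.0
```

Lower perplexity means the learned model assigns higher probability to the held-out words.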
Topic Coherence:
[11] proposed topic coherence as a measure of the semantic quality of learned topics, approximating the user experience of topic quality from the top words of a topic. The coherence of a topic is the sum, over ordered pairs of its top words, of the log of the ratio of the number of documents containing both words (plus a small constant) to the number of documents containing the higher-ranked word. We evaluate topic coherence for the top 5 words of the recovered topic distributions and report the average and standard deviation across topics. TSVD gives comparable results on topic coherence (see Table 3).
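A sketch of the measure for a single topic follows, using toy documents; it assumes every top word appears in at least one document, so the denominator is never zero.

```python
import numpy as np

def topic_coherence(top_words, docs, eps=0.01):
    """Coherence of one topic in the style of [11]: sum over ordered pairs
    of top words of log((D(v_m, v_l) + eps) / D(v_l)), where D counts the
    documents containing the word(s) and v_l is the higher-ranked word.
    `docs` is a list of sets of words; assumes each top word occurs
    somewhere in the corpus."""
    score = 0.0
    for m in range(1, len(top_words)):
        for l in range(m):
            v_m, v_l = top_words[m], top_words[l]
            d_l = sum(v_l in doc for doc in docs)
            d_ml = sum(v_m in doc and v_l in doc for doc in docs)
            score += np.log((d_ml + eps) / d_l)
    return score

docs = [{"run", "inning", "hit"}, {"run", "hit"}, {"cup", "oil"}]
coherent = topic_coherence(["run", "hit", "inning"], docs)
incoherent = topic_coherence(["run", "cup", "oil"], docs)
print(coherent > incoherent)   # co-occurring top words score higher
# → True
```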
Topics on Real Data:
Table 4 shows the top 5 words of all 50 matched pairs of topics on the NYT dataset for TSVD, RecoverKL and Gibbs sampling. Most of the topics recovered by TSVD are closer to the Gibbs sampling topics: on the NYT dataset, the total average error with respect to the topics from Gibbs sampling is 0.034 for TSVD, whereas for RecoverKL it is 0.047.
Corpus  Perplexity  Topic Coherence  
RKL  RL2  TSVD  RKL  RL2  TSVD  
NIPS  754  749  835  86.4 ± 24.5  88.6 ± 22.7  65.2 ± 29.4
NYT  1579  1685  1555  105.2 ± 25.0  102.1 ± 28.2  107.6 ± 25.7
Pubmed  1188  1203  1307  94.0 ± 22.5  94.4 ± 22.5  84.5 ± 28.7
20NG  2431  2565  2390  93.7 ± 13.6  89.4 ± 20.7  90.4 ± 27.0
Summary: We evaluated the proposed algorithm, TSVD, rigorously on multiple datasets against the state of the art (Recover), following the evaluation methodology of [5]. In Table 2 we show that the l1 reconstruction error of the new algorithm is small and, on average, 19.6% better than the best results of the Recover algorithms [5]. We also demonstrate that on real datasets the algorithm achieves perplexity and topic coherence comparable to Recover (Table 3). Moreover, we show on multiple real datasets that the algorithm's assumptions are well justified in practice.
Conclusion
Real-world corpora often exhibit the property that in every document there is one topic that is dominantly present. A standard SVD-based procedure cannot detect these topics, but TSVD, the thresholded SVD based procedure suggested in this paper, does. While SVD is time-consuming, a host of recent sampling-based approaches make SVD easier to apply to massive corpora that may be distributed among many servers. We believe that beyond topic recovery, thresholded SVD can be applied more broadly to similar problems, such as matrix factorization, and will be the basis for future research.
TSVD  RecoverKL  Gibbs 

zzz_elian zzz_miami boy father zzz_cuba  zzz_elian boy zzz_miami father family  zzz_elian zzz_miami boy father zzz_cuba 
cup minutes add tablespoon oil  cup minutes tablespoon add oil  cup minutes add tablespoon oil 
game team yard zzz_ram season  game team season play zzz_ram  team season game coach zzz_nfl 
book find british sales retailer  book find school woman women  book find woman british school 
run inning hit season game  run season game inning hit  run season game hit inning 
church zzz_god religious jewish christian  pope church book jewish religious  religious church jewish jew zzz_god 
patient drug doctor cancer medical  patient drug doctor percent found  patient doctor drug medical cancer 
music song album musical band  black reporter zzz_new_york zzz_black show  music song album band musical 
computer software system zzz_microsoft company  web www site cookie cookies  computer system software technology mail 
house dog water hair look  room show look home house  room look water house hand 
zzz_china trade zzz_united_states nuclear official  zzz_china zzz_taiwan government trade zzz_party  zzz_china zzz_united_states zzz_u_s zzz_clinton zzz_american 
zzz_russian war rebel troop military  zzz_russian zzz_russia war zzz_vladimir_putin rebel  war military zzz_russian soldier troop 
officer police case lawyer trial  zzz_ray_lewis police case officer death  police officer official case investigation 
car driver wheel race vehicles  car driver truck system model  car driver truck vehicle wheel 
show network zzz_abc zzz_nbc viewer  con zzz_mexico son federal mayor  show television network series zzz_abc 
com question information zzz_eastern sport  com information question zzz_eastern sport  com information daily question zzz_eastern 
book author writer com reader  zzz_john_rocker player team right braves  book word writer author wrote 
zzz_al_gore zzz_bill_bradley campaign president democratic  zzz_al_gore zzz_bill_bradley campaign president percent  zzz_al_gore campaign zzz_bill_bradley president democratic 
actor film play movie character  goal play team season game  film movie award actor zzz_oscar 
school student teacher district program  school student program million children  school student teacher program children 
tax taxes cut billion plan  zzz_governor_bush tax campaign taxes plan  tax plan billion million cut 
percent stock market fund investor  million percent tax bond fund  stock market percent fund investor 
team player season coach zzz_nfl  team season player coach zzz_cowboy  team player season coach league 
family home friend room school  look gun game point shot  family home father son friend 
primary zzz_mccain voter zzz_john_mccain zzz_bush  zzz_john_mccain zzz_george_bush campaign republican voter  zzz_john_mccain zzz_george_bush campaign zzz_bush zzz_mccain 
zzz_microsoft court company case law  zzz_microsoft company computer system software  zzz_microsoft company window antitrust government 
company million percent shares billion  million company stock percent shares  company million companies business market 
site web sites com www  web site zzz_internet company com  web site zzz_internet online sites 
scientist human cell study researcher  dog quick jump altered food  plant human food product scientist 
baby mom percent home family  mate women bird film idea  women look com need telegram 
point game half shot team  point game team season zzz_laker  game point team play season 
zzz_russia zzz_vladimir_putin zzz_russian zzz_boris_yeltsin zzz_moscow | zzz_clinton government zzz_pakistan zzz_india zzz_united_states | government political election zzz_vladimir_putin zzz_russia
com zzz_canada www fax information | chocolate food wine flavor buy | www com hotel room tour
room restaurant building fish painting | zzz_kosovo police zzz_serb war official | building town area resident million
loved family show friend play | film show movie music book | film movie character play director
prices percent worker oil price | percent stock market economy prices | percent prices economy market oil
million test shares air president | air wind snow shower weather | water snow weather air scientist
zzz_clinton flag official federal zzz_white_house | zzz_bradley zzz_al_gore campaign zzz_gore zzz_clinton | zzz_clinton president gay mayor zzz_rudolph_giuliani
files article computer art ball | show film country right women | art artist painting museum show
con percent zzz_mexico federal official | official zzz_iraq government zzz_united_states oil | zzz_mexico drug government zzz_united_states mexican
involving book film case right | test women study student found | plane flight passenger pilot zzz_boeing
zzz_internet companies company business customer | company companies deal zzz_internet zzz_time_warner | media zzz_time_warner television newspaper cable
zzz_internet companies company business customer | newspaper zzz_chronicle zzz_examiner zzz_hearst million | million money worker company pay
goal play games king game | zzz_tiger_wood shot tournament tour player | zzz_tiger_wood tour tournament shot player
zzz_american zzz_united_states zzz_nato camp war | zzz_israel zzz_lebanon peace zzz_syria israeli | zzz_israel peace palestinian talk israeli
team season game player play | team game point season player | race won win fight team
reporter zzz_earl_caldwell zzz_black black look | corp group list oil meeting | black white zzz_black hispanic reporter
campaign zzz_republican republican zzz_party primary | zzz_bush zzz_mccain campaign republican voter | gun bill law zzz_congress legislation
zzz_bush zzz_mccain campaign primary republican | flag black zzz_confederate right group | flag zzz_confederate zzz_south_carolina black zzz_south
zzz_john_mccain campaign zzz_george_bush zzz_bush republican | official government case officer security | court law case lawyer right
References
[1] Blei, D. Introduction to probabilistic topic models. Communications of the ACM, pp. 77–84, 2012.

[2] Blei, D., Ng, A., and Jordan, M. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, 2003. Preliminary version in Neural Information Processing Systems 2001.
[3] Griffiths, T. L. and Steyvers, M. Finding scientific topics. Proceedings of the National Academy of Sciences, 101:5228–5235, 2004.
[4] Arora, S., Ge, R., and Moitra, A. Learning topic models – going beyond SVD. In Foundations of Computer Science, 2012.
[5] Arora, S., Ge, R., Halpern, Y., Mimno, D., Moitra, A., Sontag, D., Wu, Y., and Zhu, M. A practical algorithm for topic modeling with provable guarantees. In International Conference on Machine Learning, 2013.
[6] Anandkumar, A., Foster, D., Hsu, D., Kakade, S., and Liu, Y. A spectral algorithm for latent Dirichlet allocation. In Neural Information Processing Systems, 2012.
[7] Papadimitriou, C., Raghavan, P., Tamaki, H., and Vempala, S. Latent semantic indexing: a probabilistic analysis. Journal of Computer and System Sciences, pp. 217–235, 2000. Preliminary version in PODS 1998.
 [8] Deerwester, S., Dumais, S., Landauer, T., Furnas, G., and Harshman, R. Indexing by latent semantic analysis. Journal of the American Society for Information Science, pp. 391–407, 1990.
[9] Kumar, A., and Kannan, R. Clustering with spectral norm and the k-means algorithm. In Foundations of Computer Science, 2010.
[10] Awasthi, P., and Sheffet, O. Improved spectral-norm bounds for clustering. In Proceedings of Approx/Random, 2012.

[11] Mimno, D., Wallach, H., Talley, E., Leenders, M. and McCallum, A. Optimizing semantic coherence in topic models. In Empirical Methods in Natural Language Processing, pp. 262–272, 2011.
[12] Lloyd, S. P. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129–137, 1982.
[13] Arthur, D., and Vassilvitskii, S. k-means++: The advantages of careful seeding. In Proceedings of the ACM-SIAM Symposium on Discrete Algorithms, pp. 1027–1035, 2007.
[14] McDiarmid, C. On the method of bounded differences. Surveys in Combinatorics, London Math. Soc. Lecture Note Series 141, Cambridge University Press, 1989.
[15] Vershynin, R. Introduction to the non-asymptotic analysis of random matrices. arXiv:1011.3027 [math.PR], 2011.
Appendix A Line of Proof
We describe the lemmas we prove to establish the result. The detailed proofs are in Section B.
A.1 General Facts
We start with a consequence of the no-local-minimum assumption. We use that assumption solely through this lemma.
Lemma A.1
Let be as in (6). If for some and , and also then, .
Next, we state a technical lemma which is used repeatedly. It states that for every , the empirical probability that is close to the true probability. Unsurprisingly, we prove it using the Hoeffding-Chernoff (HC) inequality. We state a consequence in the form needed in the sequel.
Lemma A.2
A.1.1 Properties of Thresholding
Say that a threshold “splits” if has a significant number of with and also a significant number of with . Intuitively, it would be desirable if no threshold splits any , so that, in , for each , either most have or most have . We now prove that this is indeed the case with proper bounds. We henceforth refer to the conclusion of the Lemma below by the mnemonic “no threshold splits any ”.
Lemma A.3
(No Threshold Splits any ) For a fixed , with probability at least , the following holds:
Let be a matrix whose columns are given by
’s columns corresponding to all are the same. The entries of the matrix are fixed (real numbers) once we have (and the thresholds are determined). Note: We have “integrated out ”, i.e.,
(So, think of for ’s columns being picked first from which is calculated. for columns of are not yet picked until the are determined.) But are random variables before we fix . The following Lemma is a direct consequence of “no threshold splits any ”.
Lemma A.4
Let . With probability at least (over the choice of ):
(10) 
where, .
So far, we have proved that for every , the threshold does not split any . But this is not sufficient in itself for clustering (and hence for identifying the ), since, for example, it alone does not rule out the extreme cases that for most in every , is above the threshold (whence for almost all ), or that for most in no is above the threshold, whence for almost all . Both these extreme cases would make us lose all the information about due to thresholding; this scenario and milder versions of it have to be shown not to occur. We do this by considering how thresholds handle catchwords. Indeed, we will show that for a catchword , a has above the threshold and a has below the threshold. Both statements hold only with high probability, of course, and using this, we prove that and are not too close for in different ’s. For this, we need the following lemmas.
Lemma A.5
For , and , we have with ,
Lemma A.6
With probability at least , we have
A.1.2 Proximity
Next, we wish to show that clustering as in TSVD identifies the dominant topics correctly for most documents, i.e., that for all . For this, we will use a theorem from [9] (see also [10]) which in this context says:
Theorem A.7
If all but a fraction of the satisfy the “proximity condition”, then TSVD identifies the dominant topic in all but a fraction of the documents correctly after a polynomial number of iterations.
To describe the proximity condition, first let be the maximum over all directions of the square root of the mean-squared distance of to , i.e.,
The parameter should remind the reader of standard deviation, which is indeed what this is, since . Our random variables being dimensional vectors, we take the maximum standard deviation in any direction.
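The displayed formula for this quantity did not survive extraction; in generic notation (all symbols here are assumed, not recovered from the lost display: data points $A_1,\dots,A_n$ with center $c$, directions $v$), the maximum directional standard deviation would read:

```latex
\sigma \;=\; \max_{v:\,\|v\|=1}\;\sqrt{\frac{1}{n}\sum_{i=1}^{n}\bigl(v\cdot(A_i-c)\bigr)^{2}}
```

Equivalently, under this notation, $\sigma$ is $\tfrac{1}{\sqrt{n}}$ times the spectral norm of the centered data matrix.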

is said to satisfy the proximity condition with respect to , if for each and each and each , the projection of onto the line joining and is closer to by at least
than it is to . [Here, is a constant.]
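Since the inline symbols were lost, a rough rendering of the proximity condition of [9], in our own assumed notation (cluster $T_r$ with $n_r$ points and center $\mu_r$, data matrix $A$, matrix of centers $C$, and $\mathrm{proj}$ the projection onto the $\mu_r$–$\mu_s$ line), is:

```latex
\bigl\|\mathrm{proj}(A_i)-\mu_s\bigr\| \;-\; \bigl\|\mathrm{proj}(A_i)-\mu_r\bigr\|
\;\ge\; c\,k\!\left(\frac{1}{\sqrt{n_r}}+\frac{1}{\sqrt{n_s}}\right)\|A-C\|
\qquad \text{for all } s\neq r,\; i\in T_r .
```

This is only a sketch of the condition's general shape; the exact constants and norms are as in [9].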
To prove proximity, we need to bound . This is the task of Subsection B.1, which relies heavily on random matrix theory.
Appendix B Proofs of Correctness
We start by recalling the Hoeffding-Chernoff (HC) inequality in the form in which we use it.
Lemma B.1
(Hoeffding-Chernoff) If is the average of independent random variables with values in and , then, for any ,
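The inline symbols of the lemma were stripped; in the classical form (notation assumed here: $X$ the average of $n$ independent random variables with values in $[0,1]$, $\mu = \mathbf{E}[X]$), Hoeffding's inequality gives, for every $\varepsilon > 0$:

```latex
\Pr\bigl(\,|X-\mu|\ge \varepsilon\,\bigr)\;\le\; 2\exp\!\left(-2n\varepsilon^{2}\right).
```

The form used in the sequel may carry different constants (a Chernoff-type tail depending on $\mu$, as the quadratic-root calculations in the proof of Lemma A.2 suggest).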

Proof (of Lemma A.1): Abbreviate by . We claim that either (i) or (ii) . To see this, note that if both (i) and (ii) fail, we have and with . But then there would have to be a local minimum of between and . If (i) holds, clearly , and so the lemma follows; similarly if (ii) holds.

Proof (of Lemma A.2): Note that , where is the indicator variable of , and we apply HC with and . We have , as is easily seen by calculating the roots of the quadratic . Thus we get the claimed bound for . Note that the same proof applies for as well as .
To prove the second assertion, let and , then, satisfies the quadratic inequalities:
By bounding the roots of these quadratics, it is easy to see the second assertion after some calculation.

Proof (of Lemma A.3): Note that is a random variable which depends only on . So, for , the are independent of . Now, if
by Lemma A.1, we have
Since for all , we also have
(11) Paying a failure probability of , we may assume the conclusion of Lemma A.2, and we have:
Now, it is easy to see that increases as increases subject to (11). So,
contradicting the definition of in the algorithm. This completes the proof of the Lemma.

Proof (of Lemma A.4): After paying a failure probability of , assume that no threshold splits any . [The factors of and come in because we are taking a union bound over all words and all topics.] Then,
WLOG, assume that . Then, with probability at least , . Now, either and all are zero, and then ; or , whence . So, and . So,