Spectral Methods for Correlated Topic Models

05/30/2016, by Forough Arabshahi et al.

In this paper, we propose guaranteed spectral methods for learning a broad range of topic models, which generalize the popular Latent Dirichlet Allocation (LDA). We overcome the limitation of LDA to incorporate arbitrary topic correlations, by assuming that the hidden topic proportions are drawn from a flexible class of Normalized Infinitely Divisible (NID) distributions. NID distributions are generated through the process of normalizing a family of independent Infinitely Divisible (ID) random variables. The Dirichlet distribution is a special case obtained by normalizing a set of Gamma random variables. We prove that this flexible topic model class can be learned via spectral methods using only moments up to the third order, with (low order) polynomial sample and computational complexity. The proof is based on a key new technique derived here that allows us to diagonalize the moments of the NID distribution through an efficient procedure that requires evaluating only univariate integrals, despite the fact that we are handling high dimensional multivariate moments. In order to assess the performance of our proposed latent NID topic model, we use two real datasets of articles collected from the New York Times and PubMed. Our experiments yield improved perplexity on both datasets compared with the baseline.


1 Introduction

Topic models are a popular class of exchangeable latent variable models for document categorization. The goal is to uncover hidden topics based on the distribution of word occurrences in a document corpus. Topic models are admixture models, which go beyond the usual mixture model that allows only one hidden topic to be present in each document. In contrast, topic models incorporate multiple topics in each document. It is assumed that each document has latent proportions of different topics, and the observed words are drawn in a conditionally independent manner, given the set of topics.

Latent Dirichlet Allocation (LDA) is the most popular topic model [8], in which the topic proportions are drawn from the Dirichlet distribution. While LDA has widespread applications, it is limited by the choice of the Dirichlet distribution. Notably, the Dirichlet distribution can only model negative correlations [6], and is thus unable to incorporate arbitrary correlations among the topics that may be present in different document corpora. Another drawback is that components with similar means are forced to have similar variances. While there have been previous attempts to go beyond the Dirichlet distribution, e.g. [7, 21], their correlation structures are still limited, learning these models is usually difficult, and no guaranteed algorithms exist. Furthermore, as discussed in [20], the correlation structure considered in [7] gives rise to spurious correlations, resulting in better perplexity on the held-out set even when the recovered topics are less interpretable. The work of [5] provides a provably correct algorithm for learning topic models that also allows for certain correlations among the topics; however, it requires "anchor word" separability assumptions for the proof of correctness.

In this work, we consider a flexible class of topic models, and propose guaranteed and efficient algorithms for learning them. We employ the class of Normalized Infinitely Divisible (NID) distributions to model the topic proportions [10, 18]. These are a class of distributions on the simplex, formed by normalizing a set of independent draws from a family of positive Infinitely Divisible (ID) distributions. A draw from an ID distribution can be represented as a sum of an arbitrary number of i.i.d. random variables. The concept of infinite divisibility was introduced in 1929 by Bruno de Finetti, and the most fundamental results were developed by Kolmogorov, Lévy and Khintchine in the 1930s. The idea of using normalized random probability measures with independent increments has also been used in the context of non-parametric models to go beyond the Dirichlet process [17].

The Gamma distribution is an example of an ID distribution, and the Dirichlet distribution is obtained by normalizing a set of independent draws from Gamma distributions. We show that the class of NID topic models significantly generalizes the LDA model: it can incorporate both positive and negative correlations among the topics, and it involves additional parameters to vary the variance and higher order moments while fixing the mean.

There are mainly three categories of algorithms for learning topic models, viz., variational inference [8, 7], Gibbs sampling [11, 19, 9], and spectral methods [2, 22]. Among them, spectral methods have gained increasing prominence over the last few years, due to their efficiency and guaranteed learnability. In this paper, we develop novel spectral methods for learning latent NID topic models.

Spectral methods have previously been proposed for learning LDA [2], as well as other latent variable models such as Independent Component Analysis (ICA), Hidden Markov Models (HMM) and mixtures of ranking distributions [3]. The idea is to learn the parameters based on spectral decomposition of low order moment tensors (third or fourth order). Efficient algorithms for tensor decomposition have been proposed before [3], and they imply consistent learning with (low order) polynomial computational and sample complexity.

The main difficulty in extending spectral methods to the more general class of NID topic models is the presence of arbitrary correlations among the hidden topics which need to be “untangled”. For instance, take the case of a single topic model (i.e. each document has only one topic); here, the third order moment, which is the co-occurrence tensor of word triplets, has a CANDECOMP/PARAFAC (CP) decomposition, and computing the decomposition yields an estimate of the topic-word matrix. In contrast, for the LDA model, such a tensor decomposition is obtained by a combination of moments up to the third order. In other words, the moments of the LDA model need to be appropriately “centered” in order to have the tensor decomposition form.
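For concreteness, the single topic case takes the following standard CP form (notation here is ours for illustration: a_k denotes the k-th column of the topic-word matrix, w_k the probability of topic k, and x_1, x_2, x_3 the one-hot encodings of three words in a document):

```latex
% Single topic model: word co-occurrence moments are already in CP form
% (illustrative notation; the paper's symbols are not reproduced here).
\mathbb{E}[x_1 \otimes x_2] = \sum_{k=1}^{K} w_k \, a_k \otimes a_k,
\qquad
\mathbb{E}[x_1 \otimes x_2 \otimes x_3] = \sum_{k=1}^{K} w_k \, a_k \otimes a_k \otimes a_k .
```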

Finding such a moment combination has so far been an “art form”, since it is based on explicit manipulation of the moments of the hidden topic distribution. So far, there is no principled mechanism to automatically find the moment combination with the CP decomposition form. For arbitrary topic models, however, finding such a combination may not even be possible. In general, one requires all the higher order moments for learning.

In this work, we show that, surprisingly, for the flexible class of NID topic models, moments up to the third order suffice for learning, and we provide an efficient algorithm for computing the coefficients to combine the moments. The algorithm is based on the computation of a univariate integral that involves the Lévy measure of the underlying ID distribution. The integral can be computed efficiently through numerical integration since it is only univariate, and has no dependence on the topic or word dimensions. Intriguingly, this can be accomplished even when no closed form probability density function (pdf) exists for the NID variables.

The paper is organized as follows. In Section 2, we propose our "latent Normalized Infinitely Divisible topic models" and present their generative process. We dedicate Section 3 to the properties of NID distributions and indicate how they overcome the drawbacks of the Dirichlet distribution and other distributions on the simplex. In Section 4 we present our efficient learning algorithm with guaranteed convergence for the proposed topic model based on spectral decomposition. Section 5 reports experiments on real data, and we conclude the paper in Section 6.

2 Latent Normalized Infinitely Divisible Topic Models

Topic models incorporate relationships between words and a set of hidden topics. We represent each word using one-hot encoding, i.e., the observed word is the standard basis vector corresponding to the vocabulary item that occurs. The proportions of topics in a document are represented by a vector on the topic simplex, which we assume is drawn from an NID distribution.

The detailed generative process of a latent NID topic model for each document is as follows

  • Draw independent variables from a family of ID distributions.

  • Normalize the draws: each topic proportion is the corresponding ID draw divided by the sum of all the draws.

  • For each word in the document,

    • Choose a topic according to the topic proportions and represent it with one-hot encoding.

    • Choose a word vector as a standard basis vector with probability given by (1), conditioned on the drawn topic, where the probabilities are the entries of the corresponding column of the topic-word matrix.

From (1), we also have

(2)

When the ID variables are drawn from Gamma distributions, the hidden proportion vector follows a Dirichlet distribution, and the above generative process yields the LDA model, as sketched below.
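To make the generative process concrete, here is a minimal sketch in Python (our own illustration; names such as topic_word and the Gamma parameters are assumptions, not taken from the paper). It draws ID variables, normalizes them to obtain the topic proportions, and samples words conditioned on per-word topics; with Gamma draws the proportions are Dirichlet and the model reduces to LDA.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_document(topic_word, id_sampler, doc_length):
    """Sample one document from a latent NID topic model.

    topic_word : (vocab_size, K) column-stochastic topic-word matrix.
    id_sampler : callable returning K independent positive ID draws.
    """
    u = id_sampler()                   # independent ID draws u_1, ..., u_K
    h = u / u.sum()                    # NID topic proportions on the simplex
    words = []
    for _ in range(doc_length):
        z = rng.choice(len(h), p=h)                              # choose a topic
        w = rng.choice(topic_word.shape[0], p=topic_word[:, z])  # choose a word
        words.append(w)
    return words

# Example: Gamma ID draws -> Dirichlet proportions -> LDA (illustrative parameters).
K, vocab = 3, 10
A = rng.dirichlet(np.ones(vocab), size=K).T          # random column-stochastic matrix
gamma_sampler = lambda: rng.gamma(shape=0.5, scale=1.0, size=K)
print(sample_document(A, gamma_sampler, doc_length=8))
```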

Our goal is to recover the topic-word matrix given the document collection. In the following section we introduce the class of NID distributions and discuss their properties.

3 Properties of NID distributions

Figure 1: Graphical model representation of the latent NID topic model. A collection of independent, positive Infinitely Divisible variables, each characterized by its corresponding Lévy measure, is normalized to produce the NID variables representing the topic proportions of a document; the words of the document are then drawn conditioned on these proportions.

NID distributions are a flexible class of distributions on the simplex and have been applied in a range of domains, including hierarchical mixture modeling with the normalized inverse-Gaussian distribution [16] and modeling overdispersion with the normalized tempered stable distribution [14], both of which are examples of NID distributions. For more applications, see [10]. Let us first define the concept of infinite divisibility and present the properties of ID distributions, and then consider NID distributions.

3.1 Infinitely Divisible Distributions

A random variable has an Infinitely Divisible (ID) distribution if, for any number of summands, it can be written in distribution as the sum of that many i.i.d. random variables. In other words, an Infinitely Divisible distribution can be expressed as the distribution of a sum of an arbitrary number of independent, identically distributed random variables.

The Poisson distribution, the compound Poisson distribution, the negative binomial distribution, the Gamma distribution, and the trivially degenerate distribution are examples of Infinitely Divisible distributions, as are the normal distribution, the Cauchy distribution, and all other members of the stable distribution family. The Student's t-distribution is another example of an Infinitely Divisible distribution. The uniform distribution and the binomial distribution are not infinitely divisible; indeed, no non-degenerate distribution with bounded (finite) support is.

The special decomposition form of ID distributions makes them natural choices for certain models or applications. For example, a compound Poisson distribution is a Poisson-distributed sum of i.i.d. random variables. The discrete compound Poisson distribution, also known as the stuttering Poisson distribution, can model batch arrivals (such as in a bulk queue [1]) and can incorporate Poisson mixtures.
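To illustrate the defining property numerically, here is a small sketch (ours; the Gamma family and the parameter values are illustrative assumptions): a Gamma variable with shape a has the same distribution as the sum of n i.i.d. Gamma variables with shape a/n and the same scale, so the sample moments of the two constructions should agree.

```python
import numpy as np

rng = np.random.default_rng(1)
a, scale, n, m = 2.0, 1.5, 7, 200_000   # shape, scale, number of summands, sample size

direct = rng.gamma(shape=a, scale=scale, size=m)
as_sum = rng.gamma(shape=a / n, scale=scale, size=(m, n)).sum(axis=1)

# Infinite divisibility of the Gamma law: both samples follow the same distribution,
# so their first two sample moments should agree up to Monte Carlo error.
print(direct.mean(), as_sum.mean())   # both approximately a * scale = 3.0
print(direct.var(),  as_sum.var())    # both approximately a * scale**2 = 4.5
```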

In the sequel, we limit the discussion to ID distributions on the positive reals in order to ensure that the normalized ID variables lie on the simplex. Let us now present how ID distributions can be characterized.

Lévy measure:

A σ-finite Borel measure on the positive reals is called a Lévy measure if it satisfies the integrability condition in (3) below. According to the Lévy-Khintchine representation given below, the Lévy measure, together with a constant (the deterministic part), uniquely characterizes an ID distribution. This implies that every Infinitely Divisible distribution corresponds to a Lévy process, which is a stochastic process with independent increments.

Lévy-Khintchine representation

[Theorem 16.14 of [13]] Consider a probability measure on the non-negative reals and let its log-Laplace transform be its Laplace exponent. Then the measure is Infinitely Divisible if and only if there exist a non-negative deterministic part and a σ-finite measure on the positive reals satisfying

(3)

such that the Laplace exponent has the representation

(4)

In this case the pair is unique; the σ-finite measure is called the Lévy measure and the constant is called the deterministic part.

In particular, consider the characteristic function of an Infinitely Divisible random variable with its corresponding pair of deterministic part and Lévy measure. Based on the Lévy-Khintchine representation, the characteristic function is determined by the function typically referred to as the Laplace exponent of the variable, and hence the Laplace exponent of an ID variable is also completely characterized by this pair. Moreover, if a measure is a well-defined Lévy measure, then so is any positive multiple of it, which indicates that the corresponding multiple of the Laplace exponent is also a well-defined Laplace exponent of an ID variable.
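For reference, the standard textbook form of this representation for a positive ID random variable X (our rendering; the exact normalization in the paper's (3)-(4) may differ) ties the Laplace exponent to the deterministic part b and the Lévy measure:

```latex
% Standard Lévy-Khintchine representation for a positive ID random variable
% (textbook form; constants and normalization may differ from Eqs. (3)-(4)).
\mathbb{E}\!\left[e^{-t X}\right] = e^{-\psi(t)},
\qquad
\psi(t) = b\,t + \int_0^{\infty} \left(1 - e^{-t x}\right)\nu(\mathrm{d}x),
\qquad
\int_0^{\infty} \min(1, x)\,\nu(\mathrm{d}x) < \infty .
```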

3.2 Normalized Infinitely Divisible Distributions

As defined in [10], a Normalized Infinitely Divisible (NID) random vector is formed by normalizing independent draws from strictly positive (not necessarily coinciding) Infinitely Divisible distributions. More specifically, given a set of independent, strictly positive Infinitely Divisible random variables, the NID distribution is defined as the distribution of the vector obtained by dividing each variable by the sum of all of them; this vector lies on the simplex, and the strict positivity assumption guarantees that it is well defined there [10, 18].

As stated by the Lévy-Khintchine theorem, a collection of positive ID variables is completely characterized by the collection of their corresponding Lévy measures. It was shown in [18] that the same holds for the normalized variables.

In this paper, we assume that the ID variables are drawn independently from ID distributions characterized by their corresponding Lévy measures, which in turn translate into corresponding Laplace exponents. The per-component parameters allow the distribution to vary over the interior of the simplex, providing the asymmetry needed for latent variable modeling. The homogeneity assumption on the Lévy measure, or equivalently on the Laplace exponent, provides the structure needed for guaranteed learning (Theorem 1). The overall graphical model representation is shown in Figure 1.

Figure 2: Heat maps of the pdfs of the three members of the NID class that have closed forms, for different parameter settings: (a, d) Gamma, (b, e) α-stable, (c, f) inverse Gaussian. All panels share the same mean parameters. For the inverse Gaussian, the distribution moves from the center to the vertices of the simplex as its concentration parameter varies with the other parameters fixed, and the α-stable case shows the same behavior as α varies.

If the original ID variables all have probability densities, then the density of the normalized vector can be written in terms of them. There are only three members of the NID class that have closed form densities, namely the normalized Gamma distribution, the normalized Inverse Gaussian distribution, and the normalized α-stable distribution, with the stability index restricted to ensure positive support for the stable distribution. As noted earlier, the normalized Gamma case reduces to the Dirichlet distribution. The interested reader is referred to [10, 18] for the closed form of each distribution.

Figure 2 depicts the heat map of the density of these distributions on the probability simplex for different values of their parameters. Note that all the distributions have the same mean parameters and hence the same mean values. However, their concentration properties vary widely, showing that the NID class can incorporate variations in higher order moments through additional parameters.

Gamma ID distribution:

When the ID distributions are Gamma, the resulting NID distribution is the Dirichlet distribution; the corresponding Laplace exponent is logarithmic in its argument (the standard forms of all three Laplace exponents are recalled below).

α-stable ID distribution:

The variables are drawn from the positive stable distribution, with the stability index restricted so that the distribution is supported on the positive reals; its Laplace exponent is a power function of its argument. Note that the α-stable density can be represented in closed form only for particular values of the stability index.

Inverse Gaussian ID distribution:

The random variables are drawn from the Inverse-Gaussian (IG) distribution; its Laplace exponent involves a square root of its argument.

Note: The Dirichlet distribution, the normalized α-stable distribution and the normalized Inverse Gaussian distribution are all special cases of the normalized generalized Gamma family [10].
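For concreteness, the standard textbook Laplace exponents of these three ID families are recalled below (our notation; the parameterizations used in [10, 18] may differ by constants):

```latex
% Standard Laplace exponents, up to parameterization, of the three ID families:
\psi_{\mathrm{Gamma}}(t) = c \,\log\!\left(1 + \tfrac{t}{\lambda}\right)
  \quad\text{(Gamma with shape } c \text{ and rate } \lambda\text{)},
\qquad
\psi_{\alpha}(t) = c\, t^{\alpha}, \;\; 0 < \alpha < 1
  \quad\text{(positive } \alpha\text{-stable)},
\qquad
\psi_{\mathrm{IG}}(t) = \tfrac{\lambda}{\mu}\!\left(\sqrt{1 + \tfrac{2\mu^{2} t}{\lambda}} - 1\right)
  \quad\text{(Inverse Gaussian with mean } \mu \text{ and shape } \lambda\text{)}.
```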

As mentioned earlier, the class of NID distributions is capable of modeling positive and negative correlations among the topics. This property is depicted in Figure 3. These figures show the proportion of positively correlated topics for the three presented distributions. As we can see the Inverse Gaussian NID distribution can capture both positive and negative correlations.

Figure 3: Proportion of positively correlated elements of special cases of an NID distribution with 10 elements, plotted against the parameter of the Laplace exponent for a fixed, randomly drawn parameter vector: (a) Gamma NID (vs. lambda), (b) Inverse Gaussian NID (vs. mu), (c) α-stable NID (vs. alpha).
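The correlation claim can be probed empirically. Below is a small Monte Carlo sketch (ours; the parameter values are illustrative assumptions) that estimates the pairwise correlations of the components of a normalized Gamma (Dirichlet) vector and of a normalized inverse Gaussian vector; the Dirichlet components are always negatively correlated, while the inverse Gaussian case is the one the paper reports as able to produce positive correlations for suitable parameters.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
K, m = 10, 100_000
params = rng.uniform(0.2, 2.0, size=K)    # fixed, randomly drawn parameter vector

def nid_corr(id_draws):
    """Pairwise correlation matrix of the normalized (NID) components."""
    h = id_draws / id_draws.sum(axis=1, keepdims=True)
    return np.corrcoef(h, rowvar=False)

# Normalized Gamma (= Dirichlet): pairwise correlations are all negative.
gamma_draws = rng.gamma(shape=params, scale=1.0, size=(m, K))
# Normalized inverse Gaussian: the sign pattern depends on the IG parameters.
ig_draws = stats.invgauss.rvs(mu=params, scale=1.0, size=(m, K), random_state=rng)

for name, draws in [("normalized Gamma", gamma_draws), ("normalized IG", ig_draws)]:
    C = nid_corr(draws)
    off_diag = C[~np.eye(K, dtype=bool)]
    print(name, "fraction of positive pairwise correlations:", (off_diag > 0).mean())
```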

4 Learning NID Topic Models through Spectral Methods

In this section we show how the form of the moments of NID distributions enables efficient learning of this flexible class.

In order to guarantee efficient learning using higher order moments, the moments need to have a very specific structure. Namely, a suitable combination of the moments of the underlying distribution of the topic proportions needs to form a diagonal tensor. If the components of the topic proportion vector were indeed independent, this would be obtained through the cumulant tensor. For LDA, on the other hand, it has been shown by Anandkumar et al. [2] that a linear combination of the moments up to third order forms a diagonal tensor for the Dirichlet distribution. Below, we extend this result to the more general class of NID distributions.
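For reference, the known LDA moment combination from [2, 3] takes the following form (stated in the notation of those papers, adopted here only for illustration: a_k is the k-th column of the topic-word matrix, alpha the Dirichlet parameter with alpha_0 = sum_k alpha_k, x_1, x_2, x_3 one-hot word vectors, and M_1 = E[x_1]); Theorem 1 below generalizes this construction to arbitrary NID priors.

```latex
% Centered LDA moments of Anandkumar et al. [2, 3] (their notation, recalled here):
M_2 := \mathbb{E}[x_1 \otimes x_2] - \tfrac{\alpha_0}{\alpha_0 + 1}\, M_1 \otimes M_1
     = \sum_{k=1}^{K} \tfrac{\alpha_k}{\alpha_0(\alpha_0+1)} \, a_k \otimes a_k ,
\qquad
M_3 := \mathbb{E}[x_1 \otimes x_2 \otimes x_3]
     - \tfrac{\alpha_0}{\alpha_0+2}\Big(\mathbb{E}[x_1 \otimes x_2 \otimes M_1]
       + \mathbb{E}[x_1 \otimes M_1 \otimes x_3] + \mathbb{E}[M_1 \otimes x_2 \otimes x_3]\Big)
     + \tfrac{2\alpha_0^2}{(\alpha_0+1)(\alpha_0+2)}\, M_1 \otimes M_1 \otimes M_1
     = \sum_{k=1}^{K} \tfrac{2\alpha_k}{\alpha_0(\alpha_0+1)(\alpha_0+2)} \, a_k \otimes a_k \otimes a_k .
```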

4.1 Consistency of Learning through Moment Matching

Assumption 1

The ID random variables are said to be partially homogeneous if their Lévy measures are positive multiples of a common Lévy measure. This implies that the Laplace exponent of each variable equals a positive constant times the Laplace exponent associated with the common Lévy measure.

Under the above assumption, we prove guaranteed learning of NID models through spectral methods. This is based on the following moment forms for NID models, which admit a CP tensor decomposition. The components of the decomposition are the columns of the topic-word matrix.

Define

(5)

where the function involved is the Laplace exponent characterizing the NID distribution.

Theorem 1

(Moment Forms for NID Models) Let the following matrix and tensor be constructed from the moments of the data,

(6)
(7)

where,

(9)
(10)

Then given Assumption 1,

(11)

that is, the above matrix and tensor admit CP decompositions whose components are the columns of the topic-word matrix, with coefficients that are functions of the parameters of the distribution.

Remark 1: efficient computation of the weights:

What makes Theorem 1 especially intriguing is the fact that the weights in (9) and (10) can be computed through univariate integration, which can be carried out efficiently regardless of the dimensionality of the problem.
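As a sketch of what such a computation looks like in practice (ours; the integrands of Eqs. (5), (9) and (10) are not reproduced here, so the integrand below is a purely hypothetical stand-in built from a Laplace exponent psi), a univariate integral over the positive reals can be evaluated with standard quadrature at negligible cost, independent of the vocabulary or topic dimensions:

```python
import numpy as np
from scipy.integrate import quad

def psi_gamma(t, c=1.0):
    # Standard Laplace exponent of a Gamma ID variable (textbook form;
    # the paper's parameterization may differ): psi(t) = c * log(1 + t).
    return c * np.log1p(t)

def weight_from_univariate_integral(psi, integrand):
    # integrand(t, psi) -> float; hypothetical stand-in for the paper's weight formulas.
    value, _abs_err = quad(lambda t: integrand(t, psi), 0.0, np.inf)
    return value

# Toy integrand, for illustration only (NOT the paper's formula).
toy_integrand = lambda t, psi: np.exp(-psi(t)) * np.exp(-t)
print(weight_from_univariate_integral(psi_gamma, toy_integrand))
```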

Remark 2: investigation of special cases

When the ID distribution is Gamma, the resulting NID distribution is the Dirichlet distribution, and the weights reduce to the results of Anandkumar et al. [2] for the Dirichlet case. When the variables are drawn from the positive stable distribution, the weights in Theorem 1 can also be represented in closed form.

It is hard to find closed form representations of the weights for other stable distributions and for the Inverse Gaussian distribution. We therefore plot the weights as functions of the parameters of each distribution in Figure 4. As can be seen in Figures 2 and 4, as the concentration parameter increases, the distribution becomes more concentrated around the center of the simplex, and the corresponding weight becomes more negative to compensate for it; the same behavior holds for both panels of Figure 4.

The above result immediately implies guaranteed learning for a non-degenerate topic-word matrix.

Assumption 2

The topic-word matrix has linearly independent columns, and the parameters of the NID distribution are non-degenerate.

Corollary 1

(Guaranteed Learning of NID Topic Models using Spectral Methods) Given empirical versions of the moments in (6) and (7), and using the tensor decomposition algorithm from [3], under the above assumptions we can consistently estimate the topic-word matrix and the distribution parameters with polynomial computational and sample complexity.

The overall procedure is given in Algorithm 1.

Remark 3: third order moments suffice

For the flexible class of latent NID topic models, only moments up to the third order suffice for efficient learning.

Figure 4: Weights of Theorem 1 for two examples of the NID distribution, plotted against the distribution parameters: (a) a stable ID distribution, (b) an Inverse Gaussian ID distribution. The remaining weights in the theorem behave similarly with respect to the parameters.
1: Input: chosen NID distribution and hidden dimension (number of topics)
2: Output: parameters of the NID distribution and the topic-word matrix
3: Estimate the empirical first, second and third order moments.
4: Compute the weights in (9) and (10) for the given NID distribution by numerical integration.
5: Estimate the moment matrix and tensor in (6) and (7).
6: Decompose the tensor in (7) into its rank-one components using the algorithm in [3], which also requires the matrix in (6).
7: Return the columns of the topic-word matrix as the components of the decomposition.
Algorithm 1 Parameter Learning

Remark 4: Sample Complexity

Following [2], Algorithm 1 can recover the topic-word matrix under Assumption 2 with polynomial sample complexity.

Remark 5: Implementation Efficiency

In order to make the implementation efficient we follow the discussion in [3]. Specifically, as mentioned in [3], we can find a whitening transformation from the second order moment matrix that lowers the data dimension from the vocabulary space to the topic space. We then invert the whitening transformation to go back to the original space and recover the parameters of the model.
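The whitening and decomposition steps can be sketched as follows (our illustration of the standard procedure from [3], not the authors' code; M2 and M3 denote the empirical moment matrix and tensor from (6) and (7), and K is the number of topics):

```python
import numpy as np

def whiten(M2, K):
    """Whitening transform W with W.T @ M2 @ W = I_K, from the top-K eigenpairs of M2."""
    eigval, eigvec = np.linalg.eigh(M2)
    idx = np.argsort(eigval)[::-1][:K]          # keep the K leading eigenpairs
    U, s = eigvec[:, idx], eigval[idx]
    return U / np.sqrt(s)                       # W = U diag(s)^{-1/2}

def tensor_power_method(T, K, iters=100, restarts=10, rng=None):
    """Deflation-based tensor power iteration on a whitened K x K x K tensor [3]."""
    rng = rng or np.random.default_rng(0)
    T = T.copy()
    weights, comps = [], []
    for _ in range(K):
        best_v, best_lam = None, -np.inf
        for _ in range(restarts):
            v = rng.standard_normal(K)
            v /= np.linalg.norm(v)
            for _ in range(iters):
                v = np.einsum('ijk,j,k->i', T, v, v)   # power update T(I, v, v)
                v /= np.linalg.norm(v)
            lam = np.einsum('ijk,i,j,k->', T, v, v, v)
            if lam > best_lam:
                best_lam, best_v = lam, v
        weights.append(best_lam)
        comps.append(best_v)
        T = T - best_lam * np.einsum('i,j,k->ijk', best_v, best_v, best_v)  # deflate
    return np.array(weights), np.array(comps)

def recover_topics(M2, M3, K):
    """Whiten M3 with M2, decompose it, and map the components back to word space."""
    W = whiten(M2, K)                                    # (vocab_size, K)
    T = np.einsum('abc,ai,bj,ck->ijk', M3, W, W, W)      # whitened third-order tensor
    lams, V = tensor_power_method(T, K)
    B = np.linalg.pinv(W.T)                              # un-whitening map
    A = (B @ V.T) * lams                                 # unnormalized topic columns
    return A / A.sum(axis=0, keepdims=True)              # column-stochastic estimate
```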

Overview of the proof of Theorem 1

We begin the proof by forming the following second order and third order tensors using the moments of the NID distribution given in Lemma 1.

(12)
(13)

Weights in Equations (12) and (13) are as in Equations (9) and (10). They are computed by setting the off-diagonal entries of the matrix in Equation (12) and of the tensor in Equation (13) to zero. Due to the homogeneity assumption, all the off-diagonal entries can be made to vanish simultaneously with these choices of coefficients. The resulting matrix and tensor are supported only on the standard basis vectors, which means that they are diagonal. Due to this fact and the exchangeability of the words given topics according to (2), Equation (11) follows.

The exact forms of the weights are obtained from the following moment forms for NID distributions.

Lemma 1 ([18])

The moments of NID variables satisfy

(14)

where the coefficients can be written in terms of the partial Bell polynomials as

(15)

in which the terms involve the derivatives of the Laplace exponent with respect to its argument.

5 Experiments

In this section we apply our proposed latent NID topic modeling algorithm to New York Times and PubMed articles [15]. The PubMed collection, with millions of documents, is substantially larger than the New York Times collection, and both use vocabularies of comparable size.

Topic Top Words in descending order of importance
1 protein, region, dna, family, sequence, gene, form-12, analysis.abstract, model, tumoural
2 cell, mice.abstract, expression.abstract, activity.abstract, primary, tumor, antigen, human, t-cell, vitro
3 tumor, treatment, receptor, lesional, children–a, effect.abstract, factor, rat1, renal-cell, response-1
4 patient, treatment, therapy, clinical, disease, level.abstract, effect.abstract, treated, tumor, surgery
5 activity.abstract, rat1, concentration, dna, human, effect.abstract, exposure.abstract, animal-based, reactional, inhibition.abstract
6 patient, children–a, women.abstract, treatment, level.abstract, syndrome, disordered, disease, year-1, therapy
7 effect.abstract, receptor, level.abstract, rat1, mutational, gene, concentration, women.abstract, insulin, expression.abstract
8 acid, strain, concentration, women.abstract, test, pregnancy–a, drug, system–a, function.abstract, water
9 strain, protein, system–a, muscle, mutational, species, growth, diagnosis-based, analysis.abstract, gene
10 infection.abstract, hospital, programed, strain, medical, alpha, information, health, children–a, data.abstract
Table 1: Top 10 Words for Pubmed, K = 10
Topic Top Words in descending order of importance
1 seeded, soldier, firestone, bobby-braswell, michigan-state, actresses, gary-william, preview, school-district, netanyahu
2 diane, question, newspaper, copy, fall, held, tonight, send, guard, slugged
3 abides, acclimate, acetate, alderman, analogues, annexing, ansar, antitax, antitobacco, argyle
4 percent, school, quarter, company, taliban, high, stock, race, companies, john-mccain
5 test, deal, contract, tiger-wood, question, houston-chronicle, copy, won, seattle-post-intelligencer ,tax
6 tonight, diane, question, newspaper, file, copy, fall, slugged, onlytest, xxx
7 company, com, market, stock, won, los-angeles-daily-new, business, eastern, web, commentary
8 abides, acclimate, acetate, alderman, analogues, annexing, ansar, antitax, antitobacco, argyle
9 company, game, run, los-angeles-daily-new, percent, team, season, stock, companies, games
10 working-girl, abides, acclimate, acetate, alderman, analogues, annexing, ansar, antitax, antitobacco
11 diane, newspaper, fall, tonight, question, held, copy, bush, slugged, police
12 hurricanes, policies, surgery, productivity, courageous, emergency, singapore, orange-bowl, regarding, telecast
13 abides, acclimate, acetate, alderman, analogues, annexing, ansar, antitax, antitobacco, argyle
14 company, com, won, stock, market, eastern, commentary, business, web, deal
15 company, stock, market, business, investor, technology, analyst, cash, sell, executives
16 tonight, question, diane, file, newspaper, copy, fall, slugged, onlytest, xxx
17 defense, held, children, fight, assistant, surgery, michael-bloomberg, worker, bird, omar
18 percent, company, stock, companies, quarter, school, market, analyst, high, corp
19 school, student, yard, released, guard, premature, teacher, touchdown, publication, leader
20 school, percent, student, yard, high, taliban, flight, air, afghanistan, plan
Table 2: NID Top 10 Words for NYtimes, K = 20
Dataset NYtimes Pubmed
NID
LDA
Table 3: Perplexity comparison across different datasets
Dataset NYtimes Pubmed
NID
LDA
Table 4: PMI comparison across different datasets
Shared Words boston-globe, tonight, question, newspaper, spot, percent, file, diane, copy, fall
Table 5: 10 shared words, New York Times dataset

Hyperparameter Tuning

In practice, we can tune hyperparameters to compute the best fitting weights. Therefore, we do not limit ourselves to a single parametric NID family. We learn the weights during the learning process and employ a non-parametric estimation of the Lévy-Khintchine representation through the univariate integrals of Equations (9) and (10). Due to the one-dimensional nature of these integrals, a small number of parameters suffices for good performance. The following paragraph describes the process in more detail.

We first split the data into training and test sets at random. We then use the training data to learn the model parameters, namely the columns of the topic-word matrix, as well as the weights in Equations (6) and (7). We do so by finding the low rank approximation that minimizes the Frobenius norm difference between the right-hand side of Equation (7) and that approximation. The recovered components are the columns of the topic-word matrix, and the distribution parameters are recovered from the decomposition weights. Once the best components and weights are found, we use the test data to select the NID distribution, described by the weights, under which the likelihood of the test data is maximized.

Results:

We compare our proposed latent NID topic model with the spectral LDA method [2]. It has been shown in [12] that spectral LDA is more efficient and achieves better perplexity compared to the conventional LDA [8]. Table 2 provides a sketch of the top words per topic recovered by our latent NID topic model on the New York Times dataset, and Table 1 shows the top words recovered from the PubMed dataset. For comparison, we have also provided the top words recovered by LDA for the New York Times dataset in Table 6 in the appendix. Besides the top words, we also present the words shared among the recovered topics for the New York Times dataset in Table 5. The presence of words such as "tonight", "question" and "fall" among these words makes sense, since they are general words that are not usually indicative of any specific topic.

We use the well-known likelihood perplexity measure [8] to evaluate the generalization performance of our proposed topic modeling algorithm, as well as the Pointwise Mutual Information (PMI) score [4] to assess the coherence of the recovered topics. Perplexity is defined as the inverse of the geometric mean of the per-word estimated likelihood. We refer to our proposed method as NID and compare it against [2], where the distribution of the hidden space is fixed to be Dirichlet. Note that lower perplexity indicates better generalization performance and higher PMI indicates better topic coherence. Figure 5 shows the perplexity and PMI score for the NID and LDA methods across different numbers of topics for the New York Times dataset. Similar comparisons, including the PubMed dataset results, are also provided in Tables 3 and 4. The results suggest that if we allow the corpus to choose the best underlying topic distribution, we obtain better generalization performance as well as better topic coherence on the held-out set compared to fixing the underlying distribution to be Dirichlet. The improved perplexity of our proposed method is indicative of correlations in the underlying documents that are not captured by the Dirichlet distribution. Thus, latent NID topic models are capable of successfully capturing correlations within topics while providing guarantees for exact recovery and efficient learning, as proven in Section 4.
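For completeness, the standard definitions of the two scores are recalled below (our rendering of the usual formulas from [8] and the topic coherence literature; here D is the number of held-out documents, w_d and N_d are the word sequence and length of document d, and p(w_i, w_j) is the empirical co-occurrence probability of two top words of a topic):

```latex
% Held-out perplexity (Blei et al. [8]) and pointwise mutual information:
\mathrm{perplexity} = \exp\!\left( - \frac{\sum_{d=1}^{D} \log p(\mathbf{w}_d)}{\sum_{d=1}^{D} N_d} \right),
\qquad
\mathrm{PMI}(w_i, w_j) = \log \frac{p(w_i, w_j)}{p(w_i)\, p(w_j)} .
```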

Last but not least, the naive variational inference implementation of [8] (available at http://www.cs.princeton.edu/~blei/lda-c/) does not scale to the datasets used in this paper. The naive implementation of spectral LDA, however, takes only about a minute to run on the NYTimes dataset and about 15 minutes to run on the PubMed dataset. It is, therefore, of great importance to have a class of models that can be learned using spectral methods, mainly because of their inherent scalability, ease of implementation and statistical guarantees. As we show in this paper, latent NID topic models are such a class of models. The correlated topic model framework of [7] also uses variational inference to perform learning and is limited to the logit-normal distribution. Latent NID topic models are not only scalable, but are also capable of modeling arbitrary correlations without requiring a fixed prior distribution on the topic space.

Figure 5: (a) Perplexity and (b) PMI scores for the NYTimes dataset across different numbers of topics.

6 Conclusion

In this paper we introduce the new class of latent Normalized Infinitely Divisible (NID) topic models that generalize previously proposed topic models such as LDA. We provide guaranteed and efficient learning for this class of models using spectral methods, by untangling the dependence of the hidden topics. We provide evidence that our proposed NID topic model overcomes the shortcomings of the Dirichlet distribution by allowing for both positive and negative correlations among the topics. Finally, we use two real world datasets to validate our claims in practice. The improved likelihood perplexity score indicates that if we allow the model to pick the underlying distribution, we obtain better generalization results.

References

  • [1] RM Adelson. Compound poisson distributions. OR, 17(1):73–75, 1966.
  • [2] Anima Anandkumar, Yi-kai Liu, Daniel J Hsu, Dean P Foster, and Sham M Kakade. A spectral algorithm for latent dirichlet allocation. In Advances in Neural Information Processing Systems, pages 917–925, 2012.
  • [3] Animashree Anandkumar, Rong Ge, Daniel Hsu, Sham M Kakade, and Matus Telgarsky. Tensor decompositions for learning latent variable models. The Journal of Machine Learning Research, 15(1):2773–2832, 2014.
  • [4] Animashree Anandkumar, Ragupathyraj Valluvan, et al. Learning loopy graphical models with latent variables: Efficient methods and guarantees. The Annals of Statistics, 41(2):401–435, 2013.
  • [5] Sanjeev Arora, Rong Ge, Yonatan Halpern, David M Mimno, Ankur Moitra, David Sontag, Yichen Wu, and Michael Zhu. A practical algorithm for topic modeling with provable guarantees. In ICML (2), pages 280–288, 2013.
  • [6] Ali Shojaee Bakhtiari and Nizar Bouguila. Online learning for two novel latent topic models. In Information and Communication Technology, pages 286–295. Springer, 2014.
  • [7] David Blei and John Lafferty. Correlated topic models. Advances in neural information processing systems, 18:147, 2006.
  • [8] David M Blei, Andrew Y Ng, and Michael I Jordan. Latent dirichlet allocation. The Journal of Machine Learning Research, 3:993–1022, 2003.
  • [9] Jianfei Chen, Jun Zhu, Zi Wang, Xun Zheng, and Bo Zhang. Scalable inference for logistic-normal topic models. In Advances in Neural Information Processing Systems, pages 2445–2453, 2013.
  • [10] Stefano Favaro, Georgia Hadjicharalambous, and Igor Prünster. On a class of distributions on the simplex. Journal of Statistical Planning and Inference, 141(9):2987–3004, 2011.
  • [11] Thomas L Griffiths and Mark Steyvers. Finding scientific topics. Proceedings of the National Academy of Sciences, 101(suppl 1):5228–5235, 2004.
  • [12] Furong Huang. Discovery of latent factors in high-dimensional data using tensor methods. arXiv preprint arXiv:1606.03212, 2016.
  • [13] Achim Klenke. Infinitely divisible distributions. In Probability Theory, pages 331–349. Springer, 2014.
  • [14] Michalis Kolossiatis, Jim E Griffin, and Mark FJ Steel. Modeling overdispersion with the normalized tempered stable distribution. Computational Statistics & Data Analysis, 55(7):2288–2301, 2011.
  • [15] M. Lichman. UCI machine learning repository, 2013.
  • [16] Antonio Lijoi, Ramsés H Mena, and Igor Prünster. Hierarchical mixture modeling with normalized inverse-gaussian priors. Journal of the American Statistical Association, 100(472):1278–1291, 2005.
  • [17] Antonio Lijoi and Igor Prünster. Models beyond the dirichlet process. Bayesian nonparametrics, 28:80, 2010.
  • [18] Francesca Mangili and Alessio Benavoli. New prior near-ignorance models on the simplex. International Journal of Approximate Reasoning, 56:278–306, 2015.
  • [19] David Mimno, Hanna M Wallach, and Andrew McCallum. Gibbs sampling for logistic normal topic models with graph-based priors. 2008.
  • [20] Alexandre Passos, Hanna M Wallach, and Andrew McCallum. Correlations and anticorrelations in lda inference. 2011.
  • [21] Issei Sato and Hiroshi Nakagawa. Topic models with power-law using pitman-yor process. In Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 673–682. ACM, 2010.
  • [22] Hsiao-Yu Tung and Alex J Smola. Spectral methods for indian buffet process inference. In Advances in Neural Information Processing Systems, pages 1484–1492, 2014.

Appendix

Topic Top Words in descending order of importance
1 newspaper, question, copy, fall, diane, chante-lagon, kill, mandatory, drug, patient
2 held, guard, send, publication, released, advisory, premature, attn-editor, undatelined, washington-datelined
3 los-angeles-daily-new, slugged, com, xxx, www, x-x-x, web, information, site, eastern
4 million, shares, offering, boston-globe, debt, public, initial, player, bill, contract
5 onlytest, point, tax, case, court, lawyer, police, minutes, death, shot
6 held, released, publication, guard, advisory, premature, send, attn-editor, undatelined, washington-datelined
7 com, information, www, web, eastern, daily, commentary, business, separate, marked
8 boston-globe, spot, file, killed, tonight, women, earlier, article, george-bush, incorrectly
9 million, shares, offering, debt, public, initial, player, contract, bond, revenue
10 boston-globe, spot, file, held, killed, attn-editor, earlier, article, court, women
11 percent, market, stock, point, quarter, economy, rate, women, growth, companies
12 boston-globe, spot, file, tonight, killed, earlier, article, women, incorrectly, news-feature
13 held, guard, publication, released, send, advisory, premature, attn-editor, undatelined, washington-datelined
14 los-angeles-daily-new, slugged, xxx, new-york, x-x-x, fund, bush, goal, king, evening
15 tonight, copy, question, diane, fall, newspaper, russia, terrorist, russian, black
16 slugged, los-angeles-daily-new, xxx, new-york, x-x-x, bush, run, school, inning, student
17 onlytest, file, film, onlyendpar, movie, new-york, seattle-pi, los-angeles, sport, patient
18 los-angeles-daily-new, slugged, xxx, x-x-x, student, inning, send, program, enron, game
19 los-angeles-daily-new, slugged, xxx, new-york, x-x-x, fund, evening, program, student, enron
20 test, houston-chronicle, hearst-news-service, seattle-post-intelligencer, ignore, patient, kansas-city, yard, race, doctor
Table 6: LDA Top 10 Words for NYtimes, K = 20

Proof of Theorem 1

Proof:     The moment form of Lemma 1 can be represented as [18],

(16)

We use the above general form of the moments to compute and diagonalize the following moment tensors,

(17)
(18)

Setting the off-diagonal entries of Equations (17) and (18) to zero, we get the following set of equations

(19)
(20)
(21)

Writing the moments using Equation (16) and assuming partial homogeneity (Assumption 1), we get the following weights by some simple algebraic manipulations,

(22)
(23)
(24)

where

(25)

Substituting these quantities and defining

(26)

the weights take the following form,

(27)
(28)
(29)

These weights ensure that the second and third order moment tensors form diagonal tensors, and they can therefore be represented as,

(31)
(32)

where,

(33)
(34)

The exchangeability assumption on the word space gives,

(35)
(36)
(37)

Therefore,

(38)
(39)