Poisson–Gamma Dynamical Systems

01/19/2017 · Aaron Schein, et al. · University of Massachusetts Amherst · The University of Texas at Austin

We introduce a new dynamical system for sequentially observed multivariate count data. This model is based on the gamma–Poisson construction—a natural choice for count data—and relies on a novel Bayesian nonparametric prior that ties and shrinks the model parameters, thus avoiding overfitting. We present an efficient MCMC inference algorithm that advances recent work on augmentation schemes for inference in negative binomial models. Finally, we demonstrate the model's inductive bias using a variety of real-world data sets, showing that it exhibits superior predictive performance over other models and infers highly interpretable latent structure.


1 Introduction

Sequentially observed count vectors $\boldsymbol{y}_1, \ldots, \boldsymbol{y}_T$ are the main object of study in many real-world applications, including text analysis, social network analysis, and recommender systems. Count data pose unique statistical and computational challenges when they are high-dimensional, sparse, and overdispersed, as is often the case in real-world applications. For example, when tracking counts of user interactions in a social network, only a tiny fraction of possible edges are ever active, exhibiting bursty periods of activity when they are. Models of such data should exploit this sparsity in order to scale to high dimensions and be robust to overdispersed temporal patterns. In addition to these characteristics, sequentially observed multivariate count data often exhibit complex dependencies within and across time steps. For example, scientific papers about one topic may encourage researchers to write papers about another related topic in the following year. Models of such data should therefore capture the topic structure of individual documents as well as the excitatory relationships between topics.

The linear dynamical system (LDS) is a widely used model for sequentially observed data, with many well-developed inference techniques based on the Kalman filter kalman1960new; ghahramani1999learning. The LDS assumes that each sequentially observed $V$-dimensional vector $\boldsymbol{y}_t$ is real valued and Gaussian distributed: $\boldsymbol{y}_t \sim \mathcal{N}(\boldsymbol{\Phi}\,\boldsymbol{\theta}_t, \boldsymbol{\Sigma})$, where $\boldsymbol{\theta}_t \in \mathbb{R}^K$ is a latent state, with $K$ components, that is linked to the observed space via the matrix $\boldsymbol{\Phi} \in \mathbb{R}^{V \times K}$. The LDS derives its expressive power from the way it assumes that the latent states evolve: $\boldsymbol{\theta}_t \sim \mathcal{N}(\boldsymbol{\Pi}\,\boldsymbol{\theta}_{t-1}, \boldsymbol{\Delta})$, where $\boldsymbol{\Pi}$ is a $K \times K$ transition matrix that captures between-component dependencies across time steps. Although the LDS can be linked to non-real observations via the extended Kalman filter haykin2001kalman, it cannot efficiently model real-world count data because inference is $O(V^3)$ and thus scales poorly with the dimensionality of the data ghahramani1999learning.
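As a minimal illustration of the generative process described above, the following Python sketch simulates an LDS; the dimensions, transition matrix, and noise scales are arbitrary placeholders rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

V, K, T = 10, 3, 50                        # observed dimension, latent dimension, time steps (illustrative)
Phi = rng.normal(size=(V, K))              # emission matrix linking the latent space to the observations
Pi = 0.9 * np.eye(K)                       # transition matrix capturing between-component dependencies

theta = np.zeros((T, K))                   # latent states
y = np.zeros((T, V))                       # real-valued observations
theta[0] = rng.normal(size=K)
y[0] = Phi @ theta[0] + rng.normal(scale=0.5, size=V)
for t in range(1, T):
    theta[t] = Pi @ theta[t - 1] + rng.normal(scale=0.1, size=K)   # Gaussian latent dynamics
    y[t] = Phi @ theta[t] + rng.normal(scale=0.5, size=V)          # Gaussian emissions
```

For intuition on the cost mentioned above, the Kalman filter's update step inverts a $V \times V$ innovation covariance matrix, which is where the cubic scaling in the data dimensionality comes from.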

Many previous approaches to modeling sequentially observed count data rely on the generalized linear modeling framework mccullagh1989generalized to link the observations to a latent Gaussian space—e.g., via the Poisson–lognormal link bulmer1974fitting. Researchers have used this construction to factorize sequentially observed count matrices under a Poisson likelihood, while modeling the temporal structure using well-studied Gaussian techniques blei2006dynamic; charlin2015dynamic. Most of these previous approaches assume a simple Gaussian state-space model—i.e., $\boldsymbol{\theta}_t \sim \mathcal{N}(\boldsymbol{\theta}_{t-1}, \boldsymbol{\Delta})$—that lacks the expressive transition structure of the LDS; one notable exception is the Poisson linear dynamical system macke2011empirical. In practice, these approaches exhibit prohibitive computational complexity in high dimensions, and the Gaussian assumption may fail to accommodate the burstiness often inherent to real-world count data kleinberg2003bursty.

Figure 1: The time-step factors for three components inferred by the PGDS from a corpus of NIPS papers. Each component is associated with a feature factor for each word type in the corpus; we list the words with the largest factors. The inferred structure tells a familiar story about the rise and fall of certain subfields of machine learning.

We present the Poisson–gamma dynamical system (PGDS)—a new dynamical system, based on the gamma–Poisson construction, that supports the expressive transition structure of the LDS. This model naturally handles overdispersed data. We introduce a new Bayesian nonparametric prior to automatically infer the model’s rank. We develop an elegant and efficient algorithm for inferring the parameters of the transition structure that advances recent work on augmentation schemes for inference in negative binomial models zhou12augment-and-conquer and scales with the number of non-zero counts, thus exploiting the sparsity inherent to real-world count data. We examine the way in which the dynamical gamma–Poisson construction propagates information and derive the model’s steady state, which involves the Lambert W function corless1996lambertw . Finally, we use the PGDS to analyze a diverse range of real-world data sets, showing that it exhibits excellent predictive performance on smoothing and forecasting tasks and infers interpretable latent structure, an example of which is depicted in figure 1.

2 Poisson–Gamma Dynamical Systems

We can represent a data set of $T$ sequentially observed $V$-dimensional count vectors as a $V \times T$ count matrix $\boldsymbol{Y}$. The PGDS models a single count $y_{tv}$ in this matrix as follows:

(1)  $y_{tv} \sim \textrm{Pois}\big(\delta_t \textstyle\sum_{k=1}^{K} \phi_{vk}\,\theta_{tk}\big)$ and $\theta_{tk} \sim \textrm{Gam}\big(\tau_0 \textstyle\sum_{k_2=1}^{K} \pi_{k k_2}\,\theta_{(t-1)k_2},\ \tau_0\big)$,

where the latent factors $\phi_{vk}$ and $\theta_{tk}$ are both positive, and represent the strength of feature $v$ in component $k$ and the strength of component $k$ at time step $t$, respectively. The scaling factor $\delta_t$ captures the scale of the counts at time step $t$, and therefore obviates the need to rescale the data as a preprocessing step. We refer to the PGDS as stationary if $\delta_t = \delta$ for all $t$. We can view the feature factors as a $V \times K$ matrix $\boldsymbol{\Phi}$ and the time-step factors as a $T \times K$ matrix $\boldsymbol{\Theta}$. Because we can also collectively view the scaling factors and time-step factors as a $T \times K$ matrix $\boldsymbol{\Psi}$, where element $\psi_{tk} = \delta_t\,\theta_{tk}$, the PGDS is a form of Poisson matrix factorization: $\boldsymbol{Y} \sim \textrm{Pois}(\boldsymbol{\Phi}\,\boldsymbol{\Psi}^\top)$ (canny2004gap; cemgil09bayesian; zhou2011beta; gopalan15scalable).
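To make equation 1 concrete, a short sketch of the observation model is given below; the factor values are placeholder draws, and only the sampling structure follows the equation.

```python
import numpy as np

rng = np.random.default_rng(0)

V, K, T = 8, 2, 20                                     # features, components, time steps (illustrative)
delta = np.full(T, 50.0)                               # scaling factors delta_t (stationary here)
Phi = rng.dirichlet(np.ones(V), size=K).T              # V x K feature factors; each column sums to one
Theta = rng.gamma(1.0, 1.0, size=(T, K))               # T x K time-step factors (placeholder draw)

rates = delta[:, None] * (Theta @ Phi.T)               # T x V rates: delta_t * sum_k phi_vk * theta_tk
Y = rng.poisson(rates)                                 # observed counts y_tv
```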

The PGDS is characterized by its expressive transition structure, which assumes that each time-step factor $\theta_{tk}$ is drawn from a gamma distribution whose shape parameter is a linear combination of the $K$ factors at the previous time step. The latent transition weights $\pi_{kk_2}$, which we can view as a $K \times K$ transition matrix $\boldsymbol{\Pi}$, capture the excitatory relationships between components. The vector $\boldsymbol{\theta}_t = (\theta_{t1}, \ldots, \theta_{tK})$ has an expected value of $\mathbb{E}[\boldsymbol{\theta}_t \,|\, \boldsymbol{\theta}_{t-1}, \boldsymbol{\Pi}] = \boldsymbol{\Pi}\,\boldsymbol{\theta}_{t-1}$ and is therefore analogous to a latent state in the LDS. The concentration parameter $\tau_0$ determines the variance of $\boldsymbol{\theta}_t$—specifically, $\textrm{Var}(\theta_{tk} \,|\, \boldsymbol{\theta}_{t-1}, \boldsymbol{\Pi}) = (\boldsymbol{\Pi}\,\boldsymbol{\theta}_{t-1})_k / \tau_0$—without affecting its expected value.
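The following sketch illustrates the conditional mean and variance stated above by simulating one transition step; the transition matrix and concentration parameter are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

K, tau0 = 3, 2.0
Pi = rng.dirichlet(np.ones(K), size=K).T           # K x K column-stochastic transition matrix (illustrative)
theta_prev = rng.gamma(1.0, 1.0, size=K)           # previous time-step factors

shape = tau0 * (Pi @ theta_prev)                   # gamma shape: linear combination of previous factors
samples = rng.gamma(shape, 1.0 / tau0, size=(200000, K))   # gamma rate tau0, written as scale 1/tau0

print(samples.mean(axis=0), Pi @ theta_prev)           # conditional mean is Pi @ theta_prev
print(samples.var(axis=0), (Pi @ theta_prev) / tau0)   # conditional variance is (Pi @ theta_prev) / tau0
```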

To model the strength of each component, we introduce $K$ component weights $\nu_1, \ldots, \nu_K$ and place a shrinkage prior over them. We assume that the time-step factors and transition weights for component $k$ are tied to its component weight $\nu_k$. Specifically, we define the following structure:

(2)  $\theta_{1k} \sim \textrm{Gam}(\tau_0\,\nu_k,\ \tau_0)$, $\quad \boldsymbol{\pi}_k \sim \textrm{Dir}(\nu_1\nu_k, \ldots, \nu_{k-1}\nu_k,\ \xi\nu_k,\ \nu_{k+1}\nu_k, \ldots, \nu_K\nu_k)$, $\quad \nu_k \sim \textrm{Gam}\big(\tfrac{\gamma_0}{K},\ \beta\big)$,

where $\boldsymbol{\pi}_k = (\pi_{1k}, \ldots, \pi_{Kk})$ is the $k^{\textrm{th}}$ column of $\boldsymbol{\Pi}$. Because $\sum_{k_1=1}^{K} \pi_{k_1 k_2} = 1$, we can interpret $\pi_{k_1 k_2}$ as the probability of transitioning from component $k_2$ to component $k_1$. (We note that interpreting $\boldsymbol{\Pi}$ as a stochastic transition matrix relates the PGDS to the discrete hidden Markov model.) For a fixed value of $\gamma_0$, increasing $K$ will encourage many of the component weights to be small. A small value of $\nu_k$ will shrink $\theta_{1k}$, as well as the transition weights in the $k^{\textrm{th}}$ row of $\boldsymbol{\Pi}$. Small values of the transition weights in the $k^{\textrm{th}}$ row of $\boldsymbol{\Pi}$ therefore prevent component $k$ from being excited by the other components and by itself. Specifically, because the shape parameter for the gamma prior over $\theta_{tk}$ involves a linear combination of $\boldsymbol{\theta}_{t-1}$ and the transition weights in the $k^{\textrm{th}}$ row of $\boldsymbol{\Pi}$, small transition weights will result in a small shape parameter, shrinking $\theta_{tk}$. Thus, the component weights play a critical role in the PGDS by enabling it to automatically turn off any unneeded capacity and avoid overfitting.

Finally, we place Dirichlet priors over the feature factors and draw the other parameters from a non-informative gamma prior: $\boldsymbol{\phi}_k = (\phi_{1k}, \ldots, \phi_{Vk}) \sim \textrm{Dir}(\eta_0, \ldots, \eta_0)$ and $\delta_t, \xi, \beta \sim \textrm{Gam}(\epsilon_0, \epsilon_0)$. The PGDS therefore has four positive hyperparameters to be set by the user: $\tau_0$, $\gamma_0$, $\eta_0$, and $\epsilon_0$.
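A minimal sketch of the prior draws in equation 2 and the Dirichlet priors above, as reconstructed here, is shown below; the hyperparameter values are arbitrary, and the rate-versus-scale convention for the gamma draws is an assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

V, K = 12, 5
tau0, gamma0, eta0, xi, beta = 1.0, 10.0, 0.1, 1.0, 1.0   # illustrative hyperparameter values only

nu = rng.gamma(gamma0 / K, 1.0 / beta, size=K)     # component weights; many shrink toward zero as K grows
Pi = np.zeros((K, K))
for k in range(K):
    conc = nu * nu[k]                              # Dirichlet concentrations nu_k1 * nu_k ...
    conc[k] = xi * nu[k]                           # ... with xi * nu_k on the diagonal entry
    Pi[:, k] = rng.dirichlet(conc)                 # the k-th column of Pi sums to one
Phi = rng.dirichlet(np.full(V, eta0), size=K).T    # V x K feature factors, one Dirichlet draw per component
theta1 = rng.gamma(tau0 * nu, 1.0 / tau0)          # first time-step factors are tied to the component weights
```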

Bayesian nonparametric interpretation: As $K \rightarrow \infty$, the component weights and their corresponding feature factor vectors constitute a draw from a gamma process $\textrm{GamP}(G_0, \beta)$, where $\beta$ is a scale parameter and $G_0$ is a finite and continuous base measure over a complete separable metric space $\Omega$ ferguson73bayesian. Models based on the gamma process have an inherent shrinkage mechanism because the number of atoms with weights greater than $\epsilon > 0$ follows a Poisson distribution with a finite mean that is proportional to $\gamma_0 = G_0(\Omega)$, the total mass under the base measure. This interpretation enables us to view the priors over $\boldsymbol{\Pi}$ and $\boldsymbol{\Theta}$ as novel stochastic processes, which we call the column-normalized relational gamma process and the recurrent gamma process, respectively. We provide the definitions of these processes in the supplementary material.

Non-count observations: The PGDS can also model non-count data by linking the observed vectors to latent counts. A binary observation $b_{tv}$ can be linked to a latent Poisson count $y_{tv}$ via the Bernoulli–Poisson distribution: $b_{tv} = \mathbb{1}(y_{tv} \geq 1)$ and $y_{tv} \sim \textrm{Pois}(\mu_{tv})$ (zhou15infinite). Similarly, a real-valued observation can be linked to a latent Poisson count via the Poisson randomized gamma distribution zhou2015gamma. Finally, Basbug and Engelhardt (basbug2016hierarchical) recently showed that many types of non-count matrices can be linked to a latent count matrix via the compound Poisson distribution (adelson1966compound).
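As a quick numerical check of the Bernoulli–Poisson link described above, the sketch below thresholds latent Poisson counts and compares the empirical frequency of ones with $1 - e^{-\mu}$; the rate is an arbitrary illustrative value.

```python
import numpy as np

rng = np.random.default_rng(3)

mu = 0.7                                  # latent Poisson rate (illustrative)
y = rng.poisson(mu, size=200000)          # latent counts
b = (y >= 1).astype(int)                  # Bernoulli-Poisson link: observe only whether the count is nonzero

print(b.mean(), 1.0 - np.exp(-mu))        # P(b = 1) = 1 - exp(-mu)
```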

3 MCMC Inference

MCMC inference for the PGDS consists of drawing samples of the model parameters from their joint posterior distribution given an observed count matrix $\boldsymbol{Y}$ and the model hyperparameters $\tau_0$, $\gamma_0$, $\eta_0$, and $\epsilon_0$. In this section, we present a Gibbs sampling algorithm for drawing these samples. At a high level, our approach is similar to that used to develop Gibbs sampling algorithms for several other related models zhou12augment-and-conquer; zhou15negative; acharya2015nonparametric; zhou15infinite; however, we extend this approach to handle the unique properties of the PGDS. The main technical challenge is sampling the time-step factors $\boldsymbol{\Theta}$ from their conditional posterior, which does not have a closed form. We address this challenge by introducing a set of auxiliary variables. Under this augmented version of the model, marginalizing over $\boldsymbol{\Theta}$ becomes tractable and its conditional posterior has a closed form. Moreover, by introducing these auxiliary variables and marginalizing over $\boldsymbol{\Theta}$, we obtain an alternative model specification that we can subsequently exploit to obtain closed-form conditional posteriors for $\boldsymbol{\Pi}$, $\nu_1, \ldots, \nu_K$, and $\xi$. We marginalize over $\boldsymbol{\Theta}$ by performing a “backward filtering” pass, starting with the factors at the final time step $T$. We repeatedly exploit the following three definitions in order to do this.

Definition 1: If $y_{\boldsymbol{\cdot}} = \sum_{k=1}^{K} y_k$, where $y_k \sim \textrm{Pois}(\theta_k)$ are independent Poisson-distributed random variables, then $y_{\boldsymbol{\cdot}} \sim \textrm{Pois}(\sum_{k=1}^{K} \theta_k)$ and $(y_1, \ldots, y_K) \sim \textrm{Mult}\big(y_{\boldsymbol{\cdot}}, \big(\tfrac{\theta_1}{\sum_k \theta_k}, \ldots, \tfrac{\theta_K}{\sum_k \theta_k}\big)\big)$ kingman72poisson; Dunson05bayesianlatent.
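Definition 1 can be checked empirically with a few lines of code; the rates below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

theta = np.array([0.5, 1.5, 3.0])                       # rates of the independent Poisson variables
y = rng.poisson(theta, size=(200000, 3))                # y_k ~ Pois(theta_k)
print(y.sum(axis=1).mean(), theta.sum())                # the sum is Pois(sum_k theta_k)

# Conversely, drawing the total first and thinning it multinomially recovers the per-component rates.
total = rng.poisson(theta.sum(), size=50000)
parts = np.array([rng.multinomial(n, theta / theta.sum()) for n in total])
print(parts.mean(axis=0), theta)
```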

Definition 2: If $y \sim \textrm{Pois}(c\,\theta)$, where $c$ is a constant, and $\theta \sim \textrm{Gam}(a, b)$, with rate parameter $b$, then $y \sim \textrm{NB}\big(a, \tfrac{c}{b+c}\big)$ is a negative binomial–distributed random variable. We can equivalently parameterize it as $y \sim \textrm{NB}(a, 1 - e^{-\zeta})$, where $1 - e^{-\zeta}$ is the Bernoulli–Poisson link zhou15infinite and $\zeta = \ln\big(1 + \tfrac{c}{b}\big)$.
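The gamma–Poisson mixture in definition 2 can likewise be checked against the negative binomial moments it implies; the parameter values are arbitrary, and the second gamma argument is treated as a rate, matching the reconstruction above.

```python
import numpy as np

rng = np.random.default_rng(5)

a, b, c = 2.0, 3.0, 4.0                       # gamma shape, gamma rate, Poisson scaling constant
theta = rng.gamma(a, 1.0 / b, size=500000)
y = rng.poisson(c * theta)                    # y | theta ~ Pois(c * theta)

p = c / (b + c)                               # negative binomial probability parameter
print(y.mean(), a * p / (1.0 - p))            # NB mean
print(y.var(), a * p / (1.0 - p) ** 2)        # NB variance
print(p, 1.0 - np.exp(-np.log(1.0 + c / b)))  # p also equals 1 - exp(-zeta) with zeta = log(1 + c/b)
```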

Definition 3: If $y \sim \textrm{NB}(a, p)$ and $l \sim \textrm{CRT}(y, a)$ is a Chinese restaurant table–distributed random variable, then $y$ and $l$ are equivalently jointly distributed as $y \sim \textrm{SumLog}(l, p)$ and $l \sim \textrm{Pois}(-a \ln(1 - p))$ zhou15negative. The sum logarithmic distribution is further defined as the sum of $l$ independent and identically logarithmic-distributed random variables—i.e., $y = \sum_{i=1}^{l} x_i$ and $x_i \sim \textrm{Log}(p)$.
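A small simulation of definition 3 is given below; the `crt` helper is an illustrative implementation of Chinese restaurant table sampling, and the parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)

a, p, n = 2.0, 0.6, 50000

def crt(m, r):
    """Number of occupied tables after m customers arrive with concentration r."""
    if m == 0:
        return 0
    i = np.arange(m)
    return int(rng.binomial(1, r / (r + i)).sum())

# Construction 1: y ~ NB(a, p), then l ~ CRT(y, a).
y = rng.negative_binomial(a, 1.0 - p, size=n)      # numpy's second argument is the success probability 1 - p
l1 = np.array([crt(m, a) for m in y])

# Construction 2: l ~ Pois(-a * log(1 - p)); definition 3 says the joint distributions match.
l2 = rng.poisson(-a * np.log(1.0 - p), size=n)
print(l1.mean(), l2.mean())
```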

Marginalizing over $\boldsymbol{\Theta}$: We first note that we can re-express the Poisson likelihood in equation 1 in terms of latent subcounts cemgil09bayesian: $y_{tv} = \sum_{k=1}^{K} y_{tvk}$ and $y_{tvk} \sim \textrm{Pois}(\delta_t\,\phi_{vk}\,\theta_{tk})$. We then define $m_{tk} = \sum_{v=1}^{V} y_{tvk}$, the total count allocated to component $k$ at time step $t$. Via definition 1, we obtain $m_{tk} \sim \textrm{Pois}(\delta_t\,\theta_{tk})$ because $\sum_{v=1}^{V} \phi_{vk} = 1$.

We start with $\theta_{Tk}$ because none of the other time-step factors depend on it in their priors. Via definition 2, we can immediately marginalize over $\theta_{Tk}$ to obtain the following equation:

(3)  $m_{Tk} \sim \textrm{NB}\Big(\tau_0 \textstyle\sum_{k_2=1}^{K} \pi_{k k_2}\,\theta_{(T-1)k_2},\ \tfrac{\delta_T}{\tau_0 + \delta_T}\Big)$

Next, we marginalize over $\theta_{(T-1)k}$. To do this, we introduce an auxiliary variable: $l_{Tk} \sim \textrm{CRT}\big(m_{Tk},\ \tau_0 \textstyle\sum_{k_2} \pi_{k k_2}\,\theta_{(T-1)k_2}\big)$. We can then re-express the joint distribution over $m_{Tk}$ and $l_{Tk}$ as

(4)  $m_{Tk} \sim \textrm{SumLog}\big(l_{Tk},\ \tfrac{\delta_T}{\tau_0 + \delta_T}\big)$ and $l_{Tk} \sim \textrm{Pois}\big(\zeta_T\,\tau_0 \textstyle\sum_{k_2} \pi_{k k_2}\,\theta_{(T-1)k_2}\big)$, where $\zeta_T = \ln\big(1 + \tfrac{\delta_T}{\tau_0}\big)$.

We are still unable to marginalize over $\theta_{(T-1)k_2}$ because it appears in a sum in the parameter of the Poisson distribution over $l_{Tk}$; however, via definition 1, we can re-express this distribution as

(5)  $l_{Tk} = \textstyle\sum_{k_2=1}^{K} l_{Tkk_2}$, where $l_{Tkk_2} \sim \textrm{Pois}\big(\zeta_T\,\tau_0\,\pi_{k k_2}\,\theta_{(T-1)k_2}\big)$.

We then define $l_{T \boldsymbol{\cdot} k_2} = \sum_{k=1}^{K} l_{Tkk_2}$. Again via definition 1, we can express the distribution over $l_{T \boldsymbol{\cdot} k_2}$ as $l_{T \boldsymbol{\cdot} k_2} \sim \textrm{Pois}(\zeta_T\,\tau_0\,\theta_{(T-1)k_2})$. We note that this expression does not depend on the transition weights because $\sum_{k=1}^{K} \pi_{k k_2} = 1$. We also note that definition 1 implies that $(l_{T1k_2}, \ldots, l_{TKk_2}) \sim \textrm{Mult}(l_{T \boldsymbol{\cdot} k_2}, \boldsymbol{\pi}_{k_2})$. Next, we introduce $h_{(T-1)k_2} = m_{(T-1)k_2} + l_{T \boldsymbol{\cdot} k_2}$, which summarizes all of the information about the data at time steps $T-1$ and $T$ via $m_{(T-1)k_2}$ and $l_{T \boldsymbol{\cdot} k_2}$, respectively. Because $m_{(T-1)k_2}$ and $l_{T \boldsymbol{\cdot} k_2}$ are both Poisson distributed, we can use definition 1 to obtain

(6)  $h_{(T-1)k_2} \sim \textrm{Pois}\big((\delta_{T-1} + \zeta_T\,\tau_0)\,\theta_{(T-1)k_2}\big)$

Combining this likelihood with the gamma prior in equation 1, we can marginalize over $\theta_{(T-1)k}$:

(7)  $h_{(T-1)k} \sim \textrm{NB}\Big(\tau_0 \textstyle\sum_{k_2} \pi_{k k_2}\,\theta_{(T-2)k_2},\ 1 - e^{-\zeta_{T-1}}\Big)$, where $\zeta_{T-1} = \ln\big(1 + \tfrac{\delta_{T-1}}{\tau_0} + \zeta_T\big)$.

We then introduce the auxiliary variable $l_{(T-1)k} \sim \textrm{CRT}\big(h_{(T-1)k},\ \tau_0 \sum_{k_2} \pi_{k k_2}\,\theta_{(T-2)k_2}\big)$ and re-express the joint distribution over $h_{(T-1)k}$ and $l_{(T-1)k}$ as the product of a Poisson and a sum logarithmic distribution, similar to equation 4. This then allows us to marginalize over $\theta_{(T-2)k}$ to obtain a negative binomial distribution. We can repeat the same process all the way back to $t = 1$, where marginalizing over $\theta_{1k}$ yields $h_{1k} \sim \textrm{NB}(\tau_0\,\nu_k,\ 1 - e^{-\zeta_1})$. We note that just as $h_{(T-1)k}$ summarizes all of the information about the data at time steps $T-1$ and $T$, $h_{1k}$ summarizes all of the information about the data at every time step.

Figure 2: Alternative model specification (part of the generative process, with $\boldsymbol{\Theta}$ marginalized out).

As we mentioned previously, introducing these auxiliary variables and marginalizing over $\boldsymbol{\Theta}$ also enables us to define an alternative model specification that we can exploit to obtain closed-form conditional posteriors for $\boldsymbol{\Pi}$, $\nu_1, \ldots, \nu_K$, and $\xi$. We provide part of its generative process in figure 2. We define $\zeta_t = \ln\big(1 + \tfrac{\delta_t}{\tau_0} + \zeta_{t+1}\big)$, where $\zeta_{T+1} = 0$, and $h_{tk} = m_{tk} + l_{(t+1) \boldsymbol{\cdot} k}$, where $l_{(T+1) \boldsymbol{\cdot} k} = 0$, so that we can present the alternative model specification concisely.

Steady state: We draw particular attention to the backward pass that propagates information about the future as we marginalize over $\boldsymbol{\Theta}$. In the case of the stationary PGDS—i.e., $\delta_t = \delta$ for all $t$—the backward pass has a fixed point that we define in the following proposition.

Proposition 1: The backward pass has a fixed point of $\zeta^* = -\big(1 + \tfrac{\delta}{\tau_0}\big) - W_{-1}\big(\!-\exp\big(\!-\big(1 + \tfrac{\delta}{\tau_0}\big)\big)\big)$.

The function $W_{-1}(\cdot)$ is the lower real branch of the Lambert W function corless1996lambertw. We prove this proposition in the supplementary material. During inference, we perform the backward pass repeatedly. The existence of a fixed point means that we can assume the stationary PGDS is in its steady state and replace the backward pass with a single $O(1)$ computation of the fixed point $\zeta^*$; several software packages contain fast implementations of the Lambert W function. To make this assumption, we must also assume that $\zeta_{T+1} = \zeta^*$ instead of $\zeta_{T+1} = 0$. We note that an analogous steady-state approximation exists for the LDS and is routinely exploited to reduce computation rugh1996linear.
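The fixed point in Proposition 1, as reconstructed above, can be evaluated with SciPy's Lambert W implementation; the sketch below checks it against the backward recursion $\zeta = \ln(1 + \delta/\tau_0 + \zeta)$ for arbitrary illustrative values of $\delta$ and $\tau_0$.

```python
import numpy as np
from scipy.special import lambertw

delta, tau0 = 10.0, 1.0                        # illustrative stationary scale and concentration parameters
a = 1.0 + delta / tau0

# Lower real branch of the Lambert W function, W_{-1}, gives the steady-state value.
zeta_star = -a - lambertw(-np.exp(-a), k=-1).real
print(zeta_star, np.log(1.0 + delta / tau0 + zeta_star))   # the fixed point satisfies the recursion

# Iterating the backward recursion converges to the same value.
zeta = 0.0
for _ in range(100):
    zeta = np.log(1.0 + delta / tau0 + zeta)
print(zeta)
```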

Gibbs sampling algorithm: Given $\boldsymbol{Y}$ and the hyperparameters, Gibbs sampling involves resampling each auxiliary variable or model parameter from its conditional posterior. Our algorithm involves a “backward filtering” pass and a “forward sampling” pass, which together form a “backward filtering–forward sampling” algorithm. We use $-$ to denote everything excluding the variable currently being sampled.

Sampling the auxiliary variables: This step is the “backward filtering” pass. For the stationary PGDS in its steady state, we first compute $\zeta^*$ and draw $l_{(T+1) \boldsymbol{\cdot} k} \sim \textrm{Pois}(\zeta^*\,\tau_0\,\theta_{Tk})$. For the other variants of the model, we set $l_{(T+1) \boldsymbol{\cdot} k} = 0$. Then, working backward from $t = T$, we draw

(8)  $(l_{tk} \,|\, -) \sim \textrm{CRT}\big(m_{tk} + l_{(t+1) \boldsymbol{\cdot} k},\ \tau_0 \textstyle\sum_{k_2} \pi_{k k_2}\,\theta_{(t-1)k_2}\big)$
(9)  $\big((l_{tk1}, \ldots, l_{tkK}) \,|\, -\big) \sim \textrm{Mult}\Big(l_{tk},\ \Big(\tfrac{\pi_{k1}\,\theta_{(t-1)1}}{\sum_{k_2} \pi_{k k_2}\,\theta_{(t-1)k_2}}, \ldots, \tfrac{\pi_{kK}\,\theta_{(t-1)K}}{\sum_{k_2} \pi_{k k_2}\,\theta_{(t-1)k_2}}\Big)\Big)$

After using equations 8 and 9 for all $k$, we then set $l_{t \boldsymbol{\cdot} k_2} = \sum_{k=1}^{K} l_{tkk_2}$. For the non-steady-state variants, we also set $\zeta_t = \ln\big(1 + \tfrac{\delta_t}{\tau_0} + \zeta_{t+1}\big)$; for the steady-state variant, we set $\zeta_t = \zeta^*$.

Sampling $\boldsymbol{\Theta}$: We sample $\boldsymbol{\Theta}$ from its conditional posterior by performing a “forward sampling” pass, starting with $\theta_{1k}$. Conditioned on the values of the auxiliary variables and $\zeta_2, \ldots, \zeta_{T+1}$ obtained via the “backward filtering” pass, we sample forward from $t = 1$ to $t = T$, using the following equations:

(10)  $(\theta_{1k} \,|\, -) \sim \textrm{Gam}\big(\tau_0\,\nu_k + m_{1k} + l_{2 \boldsymbol{\cdot} k},\ \tau_0(1 + \zeta_2) + \delta_1\big)$
(11)  $(\theta_{tk} \,|\, -) \sim \textrm{Gam}\big(\tau_0 \textstyle\sum_{k_2} \pi_{k k_2}\,\theta_{(t-1)k_2} + m_{tk} + l_{(t+1) \boldsymbol{\cdot} k},\ \tau_0(1 + \zeta_{t+1}) + \delta_t\big)$ for $t > 1$

Sampling $\boldsymbol{\Phi}$: The alternative model specification, with $\boldsymbol{\Theta}$ marginalized out, assumes that $(y_{t1k}, \ldots, y_{tVk}) \sim \textrm{Mult}(m_{tk}, \boldsymbol{\phi}_k)$. Therefore, via Dirichlet–multinomial conjugacy,

(12)  $(\boldsymbol{\phi}_k \,|\, -) \sim \textrm{Dir}\big(\eta_0 + \textstyle\sum_{t=1}^{T} y_{t1k},\ \ldots,\ \eta_0 + \textstyle\sum_{t=1}^{T} y_{tVk}\big)$

Sampling $\boldsymbol{\nu}$ and $\xi$: We use the alternative model specification to obtain closed-form conditional posteriors for $\nu_1, \ldots, \nu_K$ and $\xi$. First, we marginalize over $\boldsymbol{\pi}_k$ to obtain a Dirichlet–multinomial distribution. When augmented with a beta-distributed auxiliary variable, the Dirichlet–multinomial distribution is proportional to the negative binomial distribution zhou2016nonparametric. We draw such an auxiliary variable, which we use, along with negative binomial augmentation schemes, to derive closed-form conditional posteriors for $\nu_1, \ldots, \nu_K$ and $\xi$. We provide these posteriors, along with their derivations, in the supplementary material.

We also provide the conditional posteriors for the remaining model parameters—$\boldsymbol{\Pi}$, $\delta_1, \ldots, \delta_T$, and $\beta$—which we obtain via Dirichlet–multinomial, gamma–Poisson, and gamma–gamma conjugacy, respectively.

4 Experiments

In this section, we compare the predictive performance of the PGDS to that of the LDS and that of gamma process dynamic Poisson factor analysis (GP-DPFA) acharya2015nonparametric. GP-DPFA models a single count $y_{tv}$ as $y_{tv} \sim \textrm{Pois}\big(\sum_{k=1}^{K} \lambda_k\,\phi_{vk}\,\theta_{tk}\big)$, where each component’s time-step factors evolve as a simple gamma Markov chain, independently of those belonging to the other components: $\theta_{tk} \sim \textrm{Gam}(\theta_{(t-1)k},\ c_t)$. We consider the stationary variants of all three models. (We used the pykalman Python library for the LDS and implemented GP-DPFA ourselves.) We used five data sets, and tested each model on two time-series prediction tasks: smoothing—i.e., predicting $y_{tv}$ given the observations at the other time steps—and forecasting—i.e., predicting $y_{(T+s)v}$ given $\boldsymbol{y}_1, \ldots, \boldsymbol{y}_T$ for some $s \geq 1$ durbin2012time. We provide brief descriptions of the data sets below before reporting results.

Global Database of Events, Language, and Tone (GDELT): GDELT is an international relations data set consisting of country-to-country interaction events of the form “country $i$ took action $a$ toward country $j$ at time $t$,” extracted from news corpora. We created five count matrices, one for each year from 2001 through 2005. We treated directed pairs of countries as features and counted the number of events for each pair during each day. We discarded all pairs with fewer than twenty-five total events, leaving around 365 time steps (days) and three to six million events for each matrix.

Integrated Crisis Early Warning System (ICEWS): ICEWS is another international relations event data set extracted from news corpora. It is more highly curated than GDELT and contains fewer events. We therefore treated undirected pairs of countries as features. We created three count matrices, one for 2001–2003, one for 2004–2006, and one for 2007–2009. We counted the number of events for each pair during each three-day time step, and again discarded all pairs with fewer than twenty-five total events, leaving around 365 time steps and 1.3 to 1.5 million events for each matrix.

State-of-the-Union transcripts (SOTU): The SOTU corpus contains the text of the annual SOTU speech transcripts from 1790 through 2014. We created a single count matrix with one column per year. After discarding stopwords, we were left with 225 time steps and 656,949 tokens.

DBLP conference abstracts (DBLP): DBLP is a database of computer science research papers. We used the subset of this corpus that Acharya et al. used to evaluate GP-DPFA acharya2015nonparametric. This subset corresponds to a count matrix with fourteen columns, whose features are unique word types, and 13,431 tokens.

NIPS corpus (NIPS): The NIPS corpus contains the text of every NIPS conference paper from 1987 to 2003. We created a single count matrix with one column per year. We treated unique word types as features and discarded all stopwords, leaving seventeen time steps and 3.1 million tokens.

Figure 3: Counts over time for the top four features in the NIPS (left) and ICEWS (right) data sets.

Experimental design: For each matrix, we created four masks indicating some randomly selected subset of columns to treat as held-out data. For the event count matrices, we held out six (non-contiguous) time steps between the first and last time steps to test the models’ smoothing performance, as well as the last two time steps to test their forecasting performance. The other matrices have fewer time steps. For the SOTU matrix, we therefore held out five intermediate time steps, as well as the final time step. For the NIPS and DBLP matrices, which contain substantially fewer time steps than the SOTU matrix, we held out three intermediate time steps, as well as the final time step.

For each matrix, mask, and model combination, we ran inference four times. (For the PGDS and GP-DPFA, we used the same number of components $K$. We set the PGDS hyperparameters to a single set of fixed values across all data sets, and set the hyperparameters of GP-DPFA to the values used by Acharya et al. acharya2015nonparametric. For the LDS, we used the default hyperparameters for pykalman, and report results for the best-performing value of $K$.)

For the PGDS and GP-DPFA, we performed 6,000 Gibbs sampling iterations, imputing the missing counts from the “smoothing” columns at the same time as sampling the model parameters. We then discarded the first 4,000 samples and retained every hundredth sample thereafter. We used each of these samples to predict the missing counts from the “forecasting” columns. We then averaged the predictions over the samples. For the LDS, we ran EM to learn the model parameters. Then, given these parameter values, we used the Kalman filter and smoother kalman1960new to predict the held-out data. In practice, for all five data sets, $V$ was too large for us to run inference for the LDS, which is $O(V^3)$ ghahramani1999learning, using all features. We therefore report results from two independent sets of experiments: one comparing all three models using only the highest-count features for each data set, and one comparing the PGDS to just GP-DPFA using all the features. The first set of experiments is generous to the LDS because the Poisson distribution is well approximated by the Gaussian distribution when its mean is large.

Mean Relative Error (MRE) Mean Absolute Error (MAE)
Task PGDS GP-DPFA LDS PGDS GP-DPFA LDS
GDELT 365 1.27 S 2.951 3.493 9.366 10.098
F 2.207 2.397 7.095 7.047
ICEWS 365 1.10 S 0.877 1.023 2.872 3.104
F 0.792 0.937 1.894 1.973
SOTU 225 1.45 S 0.238 0.260 0.414 0.448
F 0.173 0.225 0.323 0.370
DBLP 14 1.64 S 0.417 0.422 0.782 0.831
F 0.323 0.369 0.747 0.943
NIPS 17 0.33 S 0.415 1.609 29.940 108.378
F 0.343 0.642 62.839 95.495
Table 1: Results for the smoothing (“S”) and forecasting (“F”) tasks. For both error measures, lower values are better. We also report the number of time steps $T$ and the average burstiness of each data set.

Results: We used two error measures—mean relative error (MRE) and mean absolute error (MAE)—to compute the models’ smoothing and forecasting scores for each matrix and mask combination. We then averaged these scores over the masks. For the data sets with multiple matrices, we also averaged the scores over the matrices. The two error measures differ as follows: MRE accommodates the scale of the data, while MAE does not. This is because relative error divides the absolute error $|y_{tv} - \hat{y}_{tv}|$, where $y_{tv}$ is the true count and $\hat{y}_{tv}$ is the prediction, by the true count, and thus penalizes overpredictions more harshly than underpredictions. MRE is therefore an especially natural choice for data sets that are bursty—i.e., data sets that exhibit short periods of activity that far exceed their mean. Models that are robust to these kinds of overdispersed temporal patterns are less likely to make overpredictions following a burst, and are therefore rewarded accordingly by MRE.
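The two error measures can be written in a few lines; the `+ 1` in the denominator below is an assumption made to keep the relative error defined when the true count is zero, and may differ from the exact definition used in the paper.

```python
import numpy as np

def mean_absolute_error(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

def mean_relative_error(y_true, y_pred):
    # Absolute error divided by the (true) count; the + 1 guards against division by zero (an assumption).
    return np.mean(np.abs(y_true - y_pred) / (y_true + 1.0))

y_true = np.array([0, 2, 50, 3])
y_pred = np.array([5, 2, 40, 3])
print(mean_absolute_error(y_true, y_pred))   # overprediction at a zero count and underprediction at a large
print(mean_relative_error(y_true, y_pred))   # count contribute equally to MAE but very differently to MRE
```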

In table 1, we report the MRE and MAE scores for the experiments using only the top features. We also report the average burstiness of each data set. For each data set, we calculated the burstiness of each feature in each matrix, and then averaged these values to obtain an average burstiness score. The PGDS outperformed the LDS and GP-DPFA on seven of the ten prediction tasks when we used MRE to measure the models’ performance; when we used MAE, the PGDS outperformed the other models on five of the tasks. In the supplementary material, we also report the results for the experiments comparing the PGDS to GP-DPFA using all the features. The superiority of the PGDS over GP-DPFA is even more pronounced in these results. We hypothesize that the difference between these models is related to the burstiness of the data. For both error measures, the only data set for which GP-DPFA outperformed the PGDS on both tasks was the NIPS data set. This data set has a substantially lower average burstiness score than the other data sets. We provide visual evidence in figure 3, where we display the counts over time for the top four features in the NIPS and ICEWS data sets. For the former, the features evolve smoothly; for the latter, they exhibit bursts of activity.
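Since the exact burstiness definition is not reproduced here, the sketch below uses a variance-to-mean ratio as a stand-in dispersion score to illustrate the distinction drawn above between smooth and bursty features.

```python
import numpy as np

def burstiness(counts):
    """Illustrative burstiness score for one feature's time series: its variance-to-mean ratio."""
    counts = np.asarray(counts, dtype=float)
    return counts.var() / max(counts.mean(), 1e-12)

smooth = np.array([10, 11, 9, 10, 12, 10])    # evolves smoothly, so the score stays near one
bursty = np.array([0, 0, 60, 0, 1, 0])        # a single large burst pushes the score far above one
print(burstiness(smooth), burstiness(bursty))
```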

Exploratory analysis: We also explored the latent structure inferred by the PGDS. Because its parameters are positive, they are easy to interpret. In figure 1, we depict three components inferred from the NIPS data set. By examining the time-step factors and feature factors for these components, we see that they capture the decline of research on neural networks between 1987 and 2003, as well as the rise of Bayesian methods in machine learning. These patterns match our prior knowledge.

Figure 4: The time-step factors for the top three components inferred by the PGDS from the 2003 GDELT matrix. The top component is in blue, the second is in green, and the third is in red. For each component, we also list the features (directed pairs of countries) with the largest feature factors.

In figure 4, we depict the three components with the largest component weights inferred by the PGDS from the 2003 GDELT matrix. The top component is in blue, the second is in green, and the third is in red. For each component, we also list the sixteen features (directed pairs of countries) with the largest feature factors. The top component (blue) is most active in March and April, 2003. Its features involve USA, Iraq (IRQ), Great Britain (GBR), Turkey (TUR), and Iran (IRN), among others. This component corresponds to the 2003 invasion of Iraq. The second component (green) exhibits a noticeable increase in activity immediately after April, 2003. Its top features involve Israel (ISR), Palestine (PSE), USA, and Afghanistan (AFG). The third component exhibits a large burst of activity in August, 2003, but is otherwise inactive. Its top features involve North Korea (PRK), South Korea (KOR), Japan (JPN), China (CHN), Russia (RUS), and USA. This component corresponds to the six-party talks—a series of negotiations between these six countries for the purpose of dismantling North Korea’s nuclear program. The first round of talks occurred during August 27–29, 2003.

Figure 5: The latent transition structure inferred by the PGDS from the 2003 GDELT matrix. Top: The component weights for the top ten components, in decreasing order from left to right; two of the weights are greater than one. Bottom: The transition weights in the corresponding subset of the transition matrix. This structure means that all components are likely to transition to the top two components.

In figure 5, we also show the component weights for the top ten components, along with the corresponding subset of the transition matrix $\boldsymbol{\Pi}$. There are two components with weights greater than one: the components that are depicted in blue and green in figure 4. The transition weights in the corresponding rows of $\boldsymbol{\Pi}$ are also large, meaning that other components are likely to transition to them. As we mentioned previously, the GDELT data set was extracted from news corpora. Therefore, patterns in the data primarily reflect patterns in media coverage of international affairs. We therefore interpret the latent structure inferred by the PGDS in the following way: in 2003, the media briefly covered various major events, including the six-party talks, before quickly returning to a backdrop of the ongoing Iraq war and Israeli–Palestinian relations. By inferring the kind of transition structure depicted in figure 5, the PGDS is able to model persistent, long-term temporal patterns while accommodating the burstiness often inherent to real-world count data. This ability is what enables the PGDS to achieve superior predictive performance over the LDS and GP-DPFA.

5 Summary

We introduced the Poisson–gamma dynamical system (PGDS)—a new Bayesian nonparametric model for sequentially observed multivariate count data. This model supports the expressive transition structure of the linear dynamical system, and naturally handles overdispersed data. We presented a novel MCMC inference algorithm that remains efficient for high-dimensional data sets, advancing recent work on augmentation schemes for inference in negative binomial models. Finally, we used the PGDS to analyze five real-world data sets, demonstrating that it exhibits superior smoothing and forecasting performance over two baseline models and infers highly interpretable latent structure.

Acknowledgments

We thank David Belanger, Roy Adams, Kostis Gourgoulias, Ben Marlin, Dan Sheldon, and Tim Vieira for many helpful conversations. This work was supported in part by the UMass Amherst CIIR and in part by NSF grants SBE-0965436 and IIS-1320219. Any opinions, findings, conclusions, or recommendations are those of the authors and do not necessarily reflect those of the sponsors.


References

  • [1] R. E. Kalman. A new approach to linear filtering and prediction problems. Journal of Basic Engineering, 82(1):35–45, 1960.
  • [2] Z. Ghahramani and S. T. Roweis. Learning nonlinear dynamical systems using an EM algorithm. In Advances in Neural Information Processing Systems, pages 431–437, 1998.
  • [3] S. S. Haykin. Kalman Filtering and Neural Networks. 2001.
  • [4] P. McCullagh and J. A. Nelder. Generalized linear models. 1989.
  • [5] M. G. Bulmer. On fitting the Poisson lognormal distribution to species-abundance data. Biometrics, pages 101–110, 1974.
  • [6] D. M. Blei and J. D. Lafferty. Dynamic topic models. In Proceedings of the 23rd International Conference on Machine Learning, pages 113–120, 2006.
  • [7] L. Charlin, R. Ranganath, J. McInerney, and D. M. Blei. Dynamic Poisson factorization. In Proceedings of the 9th ACM Conference on Recommender Systems, pages 155–162, 2015.
  • [8] J. H. Macke, L. Buesing, J. P. Cunningham, B. M. Yu, K. V. Shenoy, and M. Sahani. Empirical models of spiking in neural populations. In Advances in Neural Information Processing Systems, pages 1350–1358, 2011.
  • [9] J. Kleinberg. Bursty and hierarchical structure in streams. Data Mining and Knowledge Discovery, 7(4):373–397, 2003.
  • [10] M. Zhou and L. Carin. Augment-and-conquer negative binomial processes. In Advances in Neural Information Processing Systems, pages 2546–2554, 2012.
  • [11] R. Corless, G. Gonnet, D. E. G. Hare, D. J. Jeffrey, and D. E. Knuth. On the Lambert W function. Advances in Computational Mathematics, 5(1):329–359, 1996.
  • [12] J. Canny. GaP: A factor model for discrete data. In Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 122–129, 2004.
  • [13] A. T. Cemgil. Bayesian inference for nonnegative matrix factorisation models. Computational Intelligence and Neuroscience, 2009.
  • [14] M. Zhou, L. Hannah, D. Dunson, and L. Carin. Beta-negative binomial process and Poisson factor analysis. In Proceedings of the 15th International Conference on Artificial Intelligence and Statistics, 2012.
  • [15] P. Gopalan, J. Hofman, and D. Blei. Scalable recommendation with Poisson factorization. In Proceedings of the 31st Conference on Uncertainty in Artificial Intelligence, 2015.
  • [16] T. S. Ferguson. A Bayesian analysis of some nonparametric problems. The Annals of Statistics, 1(2):209–230, 1973.
  • [17] M. Zhou. Infinite edge partition models for overlapping community detection and link prediction. In Proceedings of the 18th International Conference on Artificial Intelligence and Statistics, pages 1135–1143, 2015.
  • [18] M. Zhou, Y. Cong, and B. Chen. Augmentable gamma belief networks. Journal of Machine Learning Research, 17(163):1–44, 2016.
  • [19] M. E. Basbug and B. Engelhardt. Hierarchical compound Poisson factorization. In Proceedings of the 33rd International Conference on Machine Learning, 2016.
  • [20] R. M. Adelson. Compound Poisson distributions. OR, 17(1):73–75, 1966.
  • [21] M. Zhou and L. Carin. Negative binomial process count and mixture modeling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(2):307–320, 2015.
  • [22] A. Acharya, J. Ghosh, and M. Zhou. Nonparametric Bayesian factor analysis for dynamic count matrices. Proceedings of the 18th International Conference on Artificial Intelligence and Statistics, 2015.
  • [23] J. F. C. Kingman. Poisson Processes. Oxford University Press, 1972.
  • [24] D. B. Dunson and A. H. Herring. Bayesian latent variable models for mixed discrete outcomes. Biostatistics, 6(1):11–25, 2005.
  • [25] W. J. Rugh. Linear System Theory. Pearson, 1995.
  • [26] M. Zhou. Nonparametric Bayesian negative binomial factor analysis. arXiv:1604.07464.
  • [27] J. Durbin and S. J. Koopman. Time Series Analysis by State Space Methods. Oxford University Press, 2012.