Deep Generative Models for Sparse, High-dimensional, and Overdispersed Discrete Data
Many applications, such as text modelling, high-throughput sequencing, and recommender systems, require the analysis of sparse, high-dimensional, and overdispersed discrete (count or binary) data. Owing to their ability to handle high-dimensional and sparse discrete data, models based on probabilistic matrix factorisation and latent factor analysis have enjoyed great success in this setting. Of particular interest among these are hierarchical Bayesian count/binary matrix factorisation models and nonlinear latent variable models based on deep neural networks, such as recently proposed variational autoencoders for discrete data. However, unlike sparsity and high dimensionality, which have been studied extensively, another important phenomenon exhibited by large-scale discrete data, overdispersion, has received relatively little attention. It can be shown that most existing latent factor models fail to capture overdispersion properly because they cannot model self- and cross-excitation (e.g., word burstiness in text), which may lead to inferior modelling performance. In this paper, we provide an in-depth analysis of how self- and cross-excitation are modelled in existing models and propose a novel variational autoencoder framework that explicitly captures self-excitation and also better models cross-excitation. Our model construction is originally designed for count-valued observations with the negative-binomial data distribution (and an equivalent representation with the Dirichlet-multinomial distribution), and it extends seamlessly to binary-valued observations via a link function to the Bernoulli distribution. To demonstrate the effectiveness of our framework, we conduct extensive experiments on both large-scale bag-of-words corpora and collaborative filtering datasets, where the proposed models achieve state-of-the-art results.
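The choice of the negative binomial can be motivated by two standard facts, sketched below in LaTeX; these are textbook identities rather than derivations taken from the paper itself. First, the negative binomial is intrinsically overdispersed: its variance always exceeds its mean. Second, independent negative-binomial counts sharing a common probability parameter admit an equivalent Dirichlet-multinomial representation conditional on their sum, matching the equivalence the abstract alludes to.

\[
\mathrm{NB}(y \mid r, p) = \frac{\Gamma(y + r)}{y!\,\Gamma(r)}\, p^{y} (1 - p)^{r},
\qquad
\mathbb{E}[y] = \frac{r p}{1 - p},
\qquad
\operatorname{Var}[y] = \frac{r p}{(1 - p)^{2}} = \mathbb{E}[y] + \frac{\mathbb{E}[y]^{2}}{r} > \mathbb{E}[y].
\]

\[
y_v \sim \mathrm{NB}(r_v, p) \ \text{independently for } v = 1, \dots, V
\;\Longrightarrow\;
(y_1, \dots, y_V) \,\Big|\, y_{\cdot} \sim \mathrm{DirMult}\big(y_{\cdot};\, r_1, \dots, r_V\big),
\quad
y_{\cdot} = \sum_{v=1}^{V} y_v \sim \mathrm{NB}\Big(\sum_{v=1}^{V} r_v,\; p\Big).
\]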
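For readers who want to see the flavour of such a model in code, the following is a minimal PyTorch sketch of a variational autoencoder with a negative-binomial decoder for bag-of-words counts. The class name NBVAE, the layer sizes, and the specific parameterisation (a per-word shape head and a logit head) are illustrative assumptions, not the paper's actual architecture; the sketch only shows how a negative-binomial likelihood slots into a standard ELBO.

import torch
import torch.nn as nn
from torch.distributions import Normal, NegativeBinomial, kl_divergence

class NBVAE(nn.Module):
    """Sketch of a VAE with a negative-binomial decoder (illustrative, not the paper's model)."""

    def __init__(self, vocab_size: int, latent_dim: int = 64, hidden: int = 256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(vocab_size, hidden), nn.ReLU())
        self.enc_mu = nn.Linear(hidden, latent_dim)
        self.enc_logvar = nn.Linear(hidden, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU())
        # Two decoder heads: a per-word NB shape r and per-word success logits.
        self.dec_r = nn.Linear(hidden, vocab_size)
        self.dec_logit = nn.Linear(hidden, vocab_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, vocab_size) matrix of word counts.
        h = self.enc(torch.log1p(x))                        # log(1 + count) stabilises the encoder
        q = Normal(self.enc_mu(h), torch.exp(0.5 * self.enc_logvar(h)))
        z = q.rsample()                                     # reparameterisation trick
        d = self.dec(z)
        r = nn.functional.softplus(self.dec_r(d)) + 1e-4    # NB shape must be positive
        px = NegativeBinomial(total_count=r, logits=self.dec_logit(d))
        recon = px.log_prob(x).sum(-1)                      # NB log-likelihood per document
        kl = kl_divergence(q, Normal(0.0, 1.0)).sum(-1)     # KL to a standard normal prior
        return -(recon - kl).mean()                         # negative ELBO to minimise

Because the shape r is predicted per word, this decoder can inflate the variance of individual words independently of their means, which is exactly the flexibility a multinomial or Poisson decoder lacks when counts are bursty.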