Document Informed Neural Autoregressive Topic Models

08/11/2018
by Pankaj Gupta, et al.

Context information around words helps in determining their actual meaning, for example "networks" used in the contexts of artificial neural networks or biological neuron networks. Generative topic models infer topic-word distributions, taking no or only little context into account. Here, we extend a neural autoregressive topic model to exploit the full context information around words in a document in a language modeling fashion. This results in improved performance in terms of generalization, interpretability and applicability. We apply our modeling approach to seven data sets from various domains and demonstrate that our approach consistently outperforms state-of-the-art generative topic models. With the learned representations, we show on average a gain of 9.6% in precision at retrieval fraction 0.02 and 7.2% in F1 for text categorization.
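The "neural autoregressive topic model" referenced here is presumably of the DocNADE family, where each word's probability is conditioned autoregressively on the rest of the document, and "full context" amounts to combining a forward pass over preceding words with a backward pass over following words. The sketch below is a minimal NumPy illustration of such bidirectional autoregressive conditionals under those assumptions; all names (`W`, `U`, `b`, the per-direction biases) and the simplified sigmoid/softmax computation are illustrative, not the authors' released implementation.

```python
import numpy as np

# Minimal sketch of a bidirectional, DocNADE-style forward pass.
# All parameter names and sizes are illustrative assumptions,
# not the paper's released code.

rng = np.random.default_rng(0)

V, H = 1000, 50                    # vocabulary size, hidden (topic) dimension
W = rng.normal(0, 0.01, (H, V))    # topic-word embedding matrix
U = rng.normal(0, 0.01, (V, H))    # projection from hidden state to vocabulary
b = np.zeros(V)                    # output bias
c_fwd = np.zeros(H)                # forward-direction hidden bias
c_bwd = np.zeros(H)                # backward-direction hidden bias

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def bidirectional_log_likelihood(doc):
    """Average of forward and backward autoregressive log-likelihoods
    for a document given as a sequence of word indices."""
    cols = W[:, doc]                                         # (H, D)
    # Prefix/suffix sums give each position's context in linear time.
    fwd = np.cumsum(cols, axis=1) - cols                     # words before i
    bwd = np.cumsum(cols[:, ::-1], axis=1)[:, ::-1] - cols   # words after i

    ll_fwd = ll_bwd = 0.0
    for i, w in enumerate(doc):
        h_f = 1.0 / (1.0 + np.exp(-(c_fwd + fwd[:, i])))     # sigmoid hidden state
        h_b = 1.0 / (1.0 + np.exp(-(c_bwd + bwd[:, i])))
        ll_fwd += np.log(softmax(b + U @ h_f)[w])            # p(v_i | v_{<i})
        ll_bwd += np.log(softmax(b + U @ h_b)[w])            # p(v_i | v_{>i})
    return 0.5 * (ll_fwd + ll_bwd)

doc = rng.integers(0, V, size=20)  # toy document of word indices
print(bidirectional_log_likelihood(doc))
```

A full model would train `W`, `U`, and the biases by maximizing this likelihood; the prefix-sum trick keeps each directional pass linear in document length, which is what makes the autoregressive factorization over full context practical.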
