
Graph-Sparse LDA: A Topic Model with Structured Sparsity

by Finale Doshi-Velez et al.
The University of Texas at Austin
Harvard University

Originally designed to model text, topic modeling has become a powerful tool for uncovering latent structure in domains including medicine, finance, and vision. The goals for the model vary depending on the application: in some cases, the discovered topics may be used for prediction or some other downstream task. In other cases, the content of the topic itself may be of intrinsic scientific interest. Unfortunately, even using modern sparse techniques, the discovered topics are often difficult to interpret due to the high dimensionality of the underlying space. To improve topic interpretability, we introduce Graph-Sparse LDA, a hierarchical topic model that leverages knowledge of relationships between words (e.g., as encoded by an ontology). In our model, topics are summarized by a few latent concept-words from the underlying graph that explain the observed words. Graph-Sparse LDA recovers sparse, interpretable summaries on two real-world biomedical datasets while matching state-of-the-art prediction performance.
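To ground the discussion, the following is a minimal sketch of the standard LDA generative process that Graph-Sparse LDA builds on: each topic is a distribution over the vocabulary, each document draws a topic mixture, and each word is drawn by first picking a topic and then a word from that topic. The vocabulary, hyperparameter values, and function names here are illustrative assumptions, not taken from the paper, and the sketch omits the paper's graph-structured layer that would additionally map each topic onto a few concept-words from an ontology.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and hyperparameters (illustrative assumptions only).
vocab = ["gene", "protein", "cell", "market", "stock", "price"]
n_topics = 2
alpha = 0.5   # document-topic Dirichlet concentration
beta = 0.1    # topic-word Dirichlet concentration (small => sparser topics)

# Topic-word distributions: one categorical distribution over the vocabulary
# per topic, each drawn from a symmetric Dirichlet(beta).
phi = rng.dirichlet(np.full(len(vocab), beta), size=n_topics)

def generate_document(n_words):
    """Sample one document from the standard LDA generative process."""
    theta = rng.dirichlet(np.full(n_topics, alpha))       # per-document topic mixture
    topics = rng.choice(n_topics, size=n_words, p=theta)  # a topic for each word slot
    return [vocab[rng.choice(len(vocab), p=phi[z])] for z in topics]

doc = generate_document(8)
print(doc)
```

A small `beta` concentrates each topic's mass on a few words, which is the ordinary (unstructured) route to sparsity; the paper's contribution is to impose that sparsity in terms of ontology nodes rather than individual words.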


Related papers:
- Parsimonious Topic Models with Salient Word Discovery
- On a Topic Model for Sentences
- The Polylingual Labeled Topic Model
- VSEC-LDA: Boosting Topic Modeling with Embedded Vocabulary Selection
- Re-Ranking Words to Improve Interpretability of Automatically Generated Topics
- Combining LSTM and Latent Topic Modeling for Mortality Prediction