
The Polylingual Labeled Topic Model

by Lisa Posch, et al.

In this paper, we present the Polylingual Labeled Topic Model, a model which combines the characteristics of the existing Polylingual Topic Model and Labeled LDA. The model accounts for multiple languages with a separate topic distribution for each language, while restricting the permitted topics of a document to a set of predefined labels. We explore the properties of the model in a two-language setting on a dataset from the social science domain. Our experiments show that our model outperforms LDA and Labeled LDA in terms of held-out perplexity, and that it produces semantically coherent topics which human subjects can readily interpret.
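The combination described in the abstract — per-language word distributions as in the Polylingual Topic Model, plus the Labeled-LDA constraint that each document may only use topics from its own label set — can be sketched as a toy collapsed Gibbs sampler. This is a minimal illustration under our own assumptions, not the authors' implementation; all function and variable names are hypothetical.

```python
import random
from collections import defaultdict

def plltm_gibbs(docs, labels, n_topics, n_iters=50, alpha=0.1, beta=0.01, seed=0):
    """Toy collapsed Gibbs sampler for a polylingual labeled topic model.

    docs:   list of document tuples; docs[d][l] is the token list for
            language l of document d (aligned documents share topics).
    labels: labels[d] is the set of topic ids permitted for document d
            (the Labeled-LDA restriction).
    Returns per-language topic-word counts and per-document topic counts.
    """
    rng = random.Random(seed)
    n_langs = len(docs[0])
    # Language-specific topic-word counts; document-topic counts are
    # shared across languages, as in the Polylingual Topic Model.
    nw = [defaultdict(lambda: [0] * n_topics) for _ in range(n_langs)]
    nwsum = [[0] * n_topics for _ in range(n_langs)]
    nd = [[0] * n_topics for _ in docs]
    z = []  # topic assignment for every token
    for d, tup in enumerate(docs):
        zd = []
        for l, words in enumerate(tup):
            zl = []
            for w in words:
                t = rng.choice(sorted(labels[d]))  # initialise within the label set
                zl.append(t)
                nw[l][w][t] += 1; nwsum[l][t] += 1; nd[d][t] += 1
            zd.append(zl)
        z.append(zd)
    vocab = [len({w for tup in docs for w in tup[l]}) for l in range(n_langs)]
    for _ in range(n_iters):
        for d, tup in enumerate(docs):
            allowed = sorted(labels[d])
            for l, words in enumerate(tup):
                for i, w in enumerate(words):
                    t = z[d][l][i]
                    nw[l][w][t] -= 1; nwsum[l][t] -= 1; nd[d][t] -= 1
                    # Sample only over the document's permitted labels.
                    weights = [(nw[l][w][k] + beta)
                               / (nwsum[l][k] + vocab[l] * beta)
                               * (nd[d][k] + alpha) for k in allowed]
                    t = rng.choices(allowed, weights=weights)[0]
                    z[d][l][i] = t
                    nw[l][w][t] += 1; nwsum[l][t] += 1; nd[d][t] += 1
    return nw, nd
```

Because sampling is restricted to `labels[d]`, the estimated document-topic counts place no mass outside a document's label set, while each language keeps its own topic-word statistics.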



