Scalable Generalized Dynamic Topic Models

03/21/2018
by Patrick Jähnichen, et al.

Dynamic topic models (DTMs) model the evolution of prevalent themes in literature, online media, and other forms of text over time. DTMs assume that word co-occurrence statistics change continuously and therefore impose continuous stochastic process priors on their model parameters. These dynamical priors make inference much harder than in regular topic models and also limit scalability. In this paper, we present several new results on DTMs. First, we extend the class of tractable priors from Wiener processes to the generic class of Gaussian processes (GPs). This allows us to explore topics that develop smoothly over time, that have long-term memory, or that are temporally concentrated (for event detection). Second, we show how to perform scalable approximate inference in these models using ideas from stochastic variational inference and sparse Gaussian processes. In this way, we can train a rich family of DTMs on massive data. Our experiments on several large-scale datasets show that our generalized model discovers interesting patterns that were not accessible with previous approaches.
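The sketch below is a minimal illustration, not the paper's implementation. It shows how different GP covariance kernels correspond to the temporal behaviors named in the abstract: the Wiener-process kernel recovers the classical DTM prior, a squared-exponential kernel yields smoothly evolving topics, and an Ornstein-Uhlenbeck kernel gives exponentially decaying memory. It then shows the inducing-point (Nyström-style) low-rank approximation that underlies sparse-GP scalability. All function names, hyperparameter values, and the choice of 8 inducing points are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the authors' code): GP covariance kernels over
# document timestamps, usable as priors on per-topic parameter trajectories.

def wiener_kernel(s, t, sigma2=1.0):
    # Classical DTM prior: k(s, t) = sigma^2 * min(s, t).
    return sigma2 * np.minimum(s[:, None], t[None, :])

def rbf_kernel(s, t, lengthscale=1.0, sigma2=1.0):
    # Squared-exponential kernel: smoothly evolving topics.
    d = s[:, None] - t[None, :]
    return sigma2 * np.exp(-0.5 * (d / lengthscale) ** 2)

def ou_kernel(s, t, lengthscale=1.0, sigma2=1.0):
    # Ornstein-Uhlenbeck kernel: mean-reverting dynamics whose memory
    # decays exponentially with the time lag.
    d = np.abs(s[:, None] - t[None, :])
    return sigma2 * np.exp(-d / lengthscale)

# Draw one smooth trajectory from the GP prior at N = 50 timestamps.
timestamps = np.linspace(0.0, 10.0, 50)
K = rbf_kernel(timestamps, timestamps) + 1e-6 * np.eye(len(timestamps))
trajectory = np.random.multivariate_normal(np.zeros(len(timestamps)), K)

# Sparse-GP ingredient behind the scalability claim: M << N inducing
# timestamps replace the full N x N kernel with a low-rank surrogate,
# cutting the cost from O(N^3) to O(N * M^2).
inducing = np.linspace(0.0, 10.0, 8)  # M = 8 inducing points (assumed)
K_mm = rbf_kernel(inducing, inducing) + 1e-6 * np.eye(len(inducing))
K_nm = rbf_kernel(timestamps, inducing)
K_approx = K_nm @ np.linalg.solve(K_mm, K_nm.T)  # K ~= K_nm K_mm^{-1} K_mn
```

In a sparse variational GP setup of this kind, the variational posterior is parameterized at the inducing points only, which is what makes it compatible with the stochastic variational inference the abstract refers to.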


