On some provably correct cases of variational inference for topic models

03/23/2015
by Pranjal Awasthi, et al.

Variational inference is an efficient and popular heuristic used in various forms in the context of latent variable models. It is closely related to Expectation Maximization (EM), and is applied when exact EM is computationally infeasible. Despite its immense popularity, the current theoretical understanding of the effectiveness of variational-inference-based algorithms is very limited. In this work we provide the first analysis of instances where variational inference algorithms converge to the global optimum, in the setting of topic models. More specifically, we show that variational inference provably learns the optimal parameters of a topic model under natural assumptions on the topic-word matrix and the topic priors. The properties that the topic-word matrix must satisfy in our setting are related to the topic expansion assumption introduced in (Anandkumar et al., 2013), as well as the anchor words assumption in (Arora et al., 2012c). The assumptions on the topic priors are related to the well-known Dirichlet prior, introduced to the area of topic modeling by (Blei et al., 2003). It is well known that initialization plays a crucial role in how well variational algorithms perform in practice. The initializations that we use are fairly natural. One is similar to what is currently used in LDA-C, the most popular implementation of variational inference for topic models. The other is an overlapping clustering algorithm, inspired by a work by (Arora et al., 2014) on dictionary learning, which is very simple and efficient. While our primary goal is to provide insights into when variational inference might work in practice, the multiplicative, rather than additive, nature of the variational inference updates forces us to use fairly non-standard proof arguments, which we believe will be of general interest.
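
For concreteness, below is a minimal sketch of the standard mean-field variational updates for LDA (Blei et al., 2003), the setting the abstract refers to. It illustrates the contrast the authors highlight: the per-word phi update is multiplicative (a product of a topic-word probability and an exponentiated digamma term, then normalization), while the gamma update is additive. The function name, initialization, and iteration count are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.special import digamma

def lda_e_step(doc_words, beta, alpha, n_iters=50):
    """Mean-field variational updates for a single document in LDA.

    doc_words : (N,) array of word indices in the document
    beta      : (K, V) topic-word matrix, rows sum to 1
    alpha     : (K,) Dirichlet prior on topic proportions
    """
    K = beta.shape[0]
    N = len(doc_words)
    gamma = alpha + N / K            # variational Dirichlet parameters
    phi = np.full((N, K), 1.0 / K)   # per-word topic responsibilities
    for _ in range(n_iters):
        # Multiplicative update: phi[n, k] ∝ beta[k, w_n] * exp(digamma(gamma[k]))
        log_phi = np.log(beta[:, doc_words].T + 1e-100) + digamma(gamma)
        log_phi -= log_phi.max(axis=1, keepdims=True)   # numerical stability
        phi = np.exp(log_phi)
        phi /= phi.sum(axis=1, keepdims=True)
        # Additive update: gamma[k] = alpha[k] + sum_n phi[n, k]
        gamma = alpha + phi.sum(axis=0)
    return gamma, phi

# Example usage with synthetic parameters (illustrative only).
rng = np.random.default_rng(0)
K, V = 3, 20
beta = rng.dirichlet(np.ones(V), size=K)
alpha = np.full(K, 0.1)
doc = rng.integers(0, V, size=30)
gamma, phi = lda_e_step(doc, beta, alpha)
```

Because phi enters gamma only through a sum of normalized products, errors propagate multiplicatively across iterations, which is why additive-perturbation arguments common in EM analyses do not transfer directly, as the abstract notes.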


Related research

03/04/2017 · Autoencoding Variational Inference For Topic Models
Topic models are one of the most popular methods for learning representa...

10/22/2020 · A Discrete Variational Recurrent Topic Model without the Reparametrization Trick
We show how to learn a neural topic model with discrete random variables...

07/24/2019 · On the relationship between variational inference and adaptive importance sampling
The importance weighted autoencoder (IWAE) (Burda et al., 2016) and rewe...

10/09/2018 · Doubly Reparameterized Gradient Estimators for Monte Carlo Objectives
Deep latent variable models have become a popular model choice due to th...

02/07/2019 · Towards Autoencoding Variational Inference for Aspect-based Opinion Summary
Aspect-based Opinion Summary (AOS), consisting of aspect discovery and s...

11/03/2018 · DAPPER: Scaling Dynamic Author Persona Topic Model to Billion Word Corpora
Extracting common narratives from multi-author dynamic text corpora requ...

05/09/2012 · On Smoothing and Inference for Topic Models
Latent Dirichlet analysis, or topic modeling, is a flexible latent varia...
