Deep Probabilistic Graphical Modeling

by   Adji B. Dieng, et al.

Probabilistic graphical modeling (PGM) provides a framework for formulating an interpretable generative process of data and expressing uncertainty about unknowns, but it lacks flexibility. Deep learning (DL) is an alternative framework for learning from data that has achieved great empirical success in recent years. DL offers great flexibility, but it lacks the interpretability and calibration of PGM. This thesis develops deep probabilistic graphical modeling (DPGM). DPGM consists of leveraging DL to make PGM more flexible, yielding new methods for learning from data that exhibit the advantages of both PGM and DL. We use DL within PGM to build flexible models endowed with an interpretable latent structure. One model class we develop extends exponential family PCA using neural networks to improve predictive performance while preserving the interpretability of the latent factors. Another model class we introduce accounts for long-term dependencies when modeling sequential data, which is a challenge for purely DL or purely PGM approaches. Finally, DPGM successfully solves several outstanding problems of probabilistic topic models, a widely used family of models in PGM.

DPGM also brings about new algorithms for learning with complex data. We develop reweighted expectation maximization, an algorithm that unifies several existing maximum likelihood-based algorithms for learning models parameterized by neural networks. This unifying view is made possible by expectation maximization, a canonical inference algorithm in PGM. We also develop entropy-regularized adversarial learning, a learning paradigm that deviates from the traditional maximum likelihood approach used in PGM. From the DL perspective, entropy-regularized adversarial learning provides a solution to the long-standing mode collapse problem of generative adversarial networks, a widely used DL approach.
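To make the "DL within PGM" idea concrete, the following is a minimal sketch of a generative process in the spirit of exponential family PCA with a neural network decoder: interpretable latent factors are drawn from a Gaussian prior, and a small MLP maps them to the natural parameters of a Bernoulli likelihood. All dimensions, architecture choices, and parameter initializations here are illustrative assumptions, not the thesis's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the thesis):
# latent dim, data dim, hidden units, number of samples
K, D, H, N = 5, 20, 16, 100

# Randomly initialized MLP decoder f_theta: latent factors -> natural parameters.
W1 = rng.normal(scale=0.1, size=(K, H))
b1 = np.zeros(H)
W2 = rng.normal(scale=0.1, size=(H, D))
b2 = np.zeros(D)

def decoder(z):
    """Neural-network mapping from latents z to natural parameters (logits)."""
    h = np.tanh(z @ W1 + b1)
    return h @ W2 + b2

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Generative process: z_n ~ N(0, I_K); x_n ~ Bernoulli(sigmoid(f_theta(z_n))).
z = rng.normal(size=(N, K))   # interpretable latent factors
probs = sigmoid(decoder(z))   # mean parameters from the deep decoder
x = rng.binomial(1, probs)    # binary observations

print(x.shape)  # (100, 20)
```

Replacing the decoder with a linear map recovers classical exponential family PCA for Bernoulli data; the neural network is what buys the extra flexibility while the latent variables `z` retain their probabilistic interpretation.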

