Deep Probabilistic Graphical Modeling

04/25/2021
by Adji B. Dieng, et al.

Probabilistic graphical modeling (PGM) provides a framework for formulating an interpretable generative process of data and expressing uncertainty about unknowns, but it lacks flexibility. Deep learning (DL) is an alternative framework for learning from data that has achieved great empirical success in recent years. DL offers great flexibility, but it lacks the interpretability and calibration of PGM. This thesis develops deep probabilistic graphical modeling (DPGM), which leverages DL to make PGM more flexible. DPGM yields new methods for learning from data that exhibit the advantages of both PGM and DL. We use DL within PGM to build flexible models endowed with an interpretable latent structure. One model class we develop extends exponential family PCA using neural networks to improve predictive performance while enforcing the interpretability of the latent factors. Another model class we introduce accounts for long-term dependencies when modeling sequential data, a challenge for purely DL or purely PGM approaches. Finally, DPGM solves several outstanding problems of probabilistic topic models, a widely used family of models in PGM.

DPGM also yields new algorithms for learning with complex data. We develop reweighted expectation maximization, an algorithm that unifies several existing maximum-likelihood-based algorithms for learning models parameterized by neural networks. This unifying view is made possible by expectation maximization, a canonical inference algorithm in PGM. We also develop entropy-regularized adversarial learning, a learning paradigm that departs from the traditional maximum-likelihood approach used in PGM. From the DL perspective, entropy-regularized adversarial learning solves the long-standing mode-collapse problem of generative adversarial networks, a widely used DL approach.
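The abstract names expectation maximization (EM) as the canonical PGM inference algorithm underlying the unifying view. As a minimal, self-contained illustration of the plain EM idea (a toy sketch, not the reweighted variant developed in the thesis), the following fits a two-component 1-D Gaussian mixture; the function name, initialization scheme, and synthetic data are all assumptions made for illustration:

```python
import math
import random

def em_gmm_1d(data, iters=50):
    """Fit a two-component 1-D Gaussian mixture with EM.

    Returns (weights, means, variances) after `iters` updates.
    """
    # Crude initialization: place the means near the data quartiles.
    xs = sorted(data)
    n = len(xs)
    mu = [xs[n // 4], xs[3 * n // 4]]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]

    def normal_pdf(x, m, v):
        return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in data:
            p = [pi[k] * normal_pdf(x, mu[k], var[k]) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate parameters from the responsibilities.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk
            var[k] = max(var[k], 1e-6)  # guard against variance collapse

    return pi, mu, var

# Usage: two well-separated synthetic clusters around 0 and 5.
random.seed(0)
data = ([random.gauss(0.0, 1.0) for _ in range(200)]
        + [random.gauss(5.0, 1.0) for _ in range(200)])
pi, mu, var = em_gmm_1d(data)
```

The E-step computes posterior probabilities under the current parameters and the M-step maximizes the expected complete-data log likelihood given those probabilities; the neural-network-parameterized models in the thesis replace these closed-form M-step updates with gradient-based ones.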


