From Boltzmann Machines to Neural Networks and Back Again

07/25/2020
by Surbhi Goel et al.

Graphical models are powerful tools for modeling high-dimensional data, but learning graphical models in the presence of latent variables is well known to be difficult. In this work we give new results for learning Restricted Boltzmann Machines (RBMs), arguably the most well-studied class of latent variable models. Our results are based on new connections to learning two-layer neural networks under ℓ_∞-bounded inputs; for both problems, we give nearly optimal results under the conjectured hardness of sparse parity with noise. Using the connection between RBMs and feedforward networks, we also initiate the theoretical study of supervised RBMs [Hinton, 2012], a version of neural-network learning that couples distributional assumptions induced from the underlying graphical model with the architecture of the unknown function class. We then give an algorithm for learning a natural class of supervised RBMs with better runtime than is possible for the related class of networks without distributional assumptions.
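The connection between RBMs and feedforward networks alluded to above rests on a standard identity: marginalizing out the hidden units of a binary RBM turns its log-likelihood (up to the partition function) into a one-hidden-layer network with softplus activations. The sketch below, which is illustrative and not the paper's algorithm (all variable names are my own), checks this identity against brute-force marginalization:

```python
import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def rbm_free_energy(v, W, b, c):
    """Unnormalized log-probability of a visible vector v under a binary RBM.

    For an RBM with energy -(b.v + c.h + h^T W v), summing over all hidden
    configurations h in {0,1}^m gives
        log p(v) + log Z = b.v + sum_j softplus(c_j + (W v)_j),
    i.e. a one-hidden-layer feedforward network with softplus activations.
    """
    return v @ b + softplus(W @ v + c).sum()

rng = np.random.default_rng(0)
n_vis, n_hid = 5, 3
W = rng.normal(size=(n_hid, n_vis))  # weights
b = rng.normal(size=n_vis)           # visible biases
c = rng.normal(size=n_hid)           # hidden biases

# Check against brute-force marginalization over all 2^n_hid hidden states.
v = rng.integers(0, 2, size=n_vis).astype(float)
brute = 0.0
for bits in range(2 ** n_hid):
    h = np.array([(bits >> j) & 1 for j in range(n_hid)], dtype=float)
    brute += np.exp(v @ b + h @ c + h @ W @ v)
assert np.isclose(rbm_free_energy(v, W, b, c), np.log(brute))
```

This closed form is why hardness results for learning two-layer networks with ℓ_∞-bounded inputs transfer to RBMs, and vice versa.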

