On Training Deep Boltzmann Machines

03/20/2012
by Guillaume Desjardins, et al.

The deep Boltzmann machine (DBM) has been an important development in the quest for powerful "deep" probabilistic models. To date, simultaneous or joint training of all layers of the DBM has been largely unsuccessful with existing training methods. We introduce a simple regularization scheme that encourages the weight vectors associated with each hidden unit to have similar norms. We demonstrate that this regularization can be easily combined with standard stochastic maximum likelihood to yield an effective training strategy for the simultaneous training of all layers of the deep Boltzmann machine.
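The abstract describes a penalty that pushes the weight-vector norms of the hidden units toward one another, applied alongside stochastic maximum likelihood (persistent contrastive divergence). A minimal sketch of one plausible form of such a regularizer, for a single binary layer, might look like the following; the exact penalty, hyperparameters, and update schedule here are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def norm_penalty_grad(W, strength=1e-3):
    """Gradient of a penalty 0.5 * strength * sum_j (||w_j|| - mean_norm)^2,
    where w_j is the weight vector of hidden unit j (a column of W).
    This particular functional form is an assumption, not the paper's."""
    norms = np.linalg.norm(W, axis=0) + 1e-12   # avoid division by zero
    mean_norm = norms.mean()
    # The mean's dependence on each norm cancels (deviations sum to zero),
    # so the gradient w.r.t. column j is strength * (n_j - mean) * w_j / n_j.
    return strength * (norms - mean_norm) * (W / norms)

def sml_step(W, v_data, v_fantasy, lr=0.01, reg=1e-3, rng=None):
    """One stochastic-maximum-likelihood (persistent CD) update for a single
    binary RBM layer, with the norm-similarity penalty folded into the update.
    Biases are omitted for brevity; hyperparameters are illustrative."""
    if rng is None:
        rng = np.random.default_rng(0)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    # Positive phase: hidden probabilities driven by the data batch.
    h_data = sigmoid(v_data @ W)
    # Negative phase: one Gibbs sweep on the persistent fantasy particles.
    h_sample = (rng.random((v_fantasy.shape[0], W.shape[1]))
                < sigmoid(v_fantasy @ W)).astype(float)
    v_fantasy = (rng.random(v_fantasy.shape)
                 < sigmoid(h_sample @ W.T)).astype(float)
    h_fantasy = sigmoid(v_fantasy @ W)
    # Likelihood-gradient estimate minus the regularizer's gradient.
    grad = (v_data.T @ h_data - v_fantasy.T @ h_fantasy) / v_data.shape[0]
    W = W + lr * grad - lr * norm_penalty_grad(W, strength=reg)
    return W, v_fantasy
```

A small gradient step on this penalty shrinks each norm's deviation from the mean by a constant factor, which is the qualitative behavior the abstract's regularizer targets.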

Related research
12/12/2012

Joint Training of Deep Boltzmann Machines

We introduce a new method for training deep Boltzmann machines jointly. ...
01/16/2013

Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines

This paper introduces the Metric-Free Natural Gradient (MFNG) algorithm ...
01/16/2013

Joint Training Deep Boltzmann Machines for Classification

We introduce a new method for training deep Boltzmann machines jointly. ...
03/19/2023

Training Deep Boltzmann Networks with Sparse Ising Machines

The slowing down of Moore's law has driven the development of unconventi...
02/17/2021

Mode-Assisted Joint Training of Deep Boltzmann Machines

The deep extension of the restricted Boltzmann machine (RBM), known as t...
03/16/2012

Learning Feature Hierarchies with Centered Deep Boltzmann Machines

Deep Boltzmann machines are in principle powerful models for extracting ...
10/15/2017

Learning Infinite RBMs with Frank-Wolfe

In this work, we propose an infinite restricted Boltzmann machine (RBM),...