Partitioning Large Scale Deep Belief Networks Using Dropout

08/28/2015
by Yanping Huang et al.

Deep learning methods have shown great promise in many practical applications, ranging from speech recognition and visual object recognition to text processing. However, most current deep learning methods suffer from scalability problems in large-scale applications, forcing researchers and users to focus on small-scale problems with fewer parameters. In this paper, we consider a well-known machine learning model, the deep belief network (DBN), which has yielded impressive classification performance on a large number of benchmark machine learning tasks. To scale up DBNs, we propose an approach that uses computing clusters in a distributed environment to train large models, while the dense matrix computations within a single machine are sped up using graphics processing units (GPUs). When training a DBN, each machine randomly drops out a portion of the neurons in each hidden layer for each training case, so that the remaining neurons learn to detect features that are generally helpful for producing the correct answer. Within our approach, we have developed four methods to combine the outcomes from each machine into a unified model. Our preliminary experiment on the MNIST handwritten digit database demonstrates that our approach improves on the state-of-the-art test error rate.
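To make the dropout mechanism described above concrete, the following is a minimal sketch (not the authors' implementation): each machine applies a per-training-case dropout mask to a hidden layer, and the per-machine weights are merged by simple averaging, which is only one plausible stand-in for the four combination methods mentioned in the abstract. The keep probability, layer sizes, and function names are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(0)

def hidden_layer_with_dropout(x, W, b, keep_prob=0.5, training=True):
    """Sigmoid hidden layer with dropout applied independently per training case."""
    h = 1.0 / (1.0 + np.exp(-(x @ W + b)))        # sigmoid activations
    if training:
        mask = rng.random(h.shape) < keep_prob    # drop each unit with prob. 1 - keep_prob
        return h * mask
    return h * keep_prob                          # rescale at test time instead of masking

def combine_models(per_machine_params):
    """One simple way to merge per-machine models: average their parameters."""
    return [np.mean(np.stack(p), axis=0) for p in zip(*per_machine_params)]

# Toy usage: two "machines" each hold their own copy of one layer's parameters.
x = rng.standard_normal((4, 8))                   # 4 training cases, 8 visible units
machines = [(rng.standard_normal((8, 16)) * 0.1, np.zeros(16)) for _ in range(2)]
train_outputs = [hidden_layer_with_dropout(x, W, b) for W, b in machines]
W_avg, b_avg = combine_models(machines)
test_h = hidden_layer_with_dropout(x, W_avg, b_avg, training=False)
print(test_h.shape)                               # (4, 16)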


