Accelerating Deep Learning with Shrinkage and Recall

05/04/2016
by Shuai Zheng, et al.

Deep learning is a powerful machine learning model, but training its large number of parameters across multiple layers is slow when both the data scale and the architecture size are large. Inspired by the shrinking technique used to accelerate Support Vector Machine (SVM) computation and the screening technique used in LASSO, we propose shrinking Deep Learning with recall (sDLr) to speed up deep learning computation. We evaluate sDLr with a Deep Neural Network (DNN), a Deep Belief Network (DBN), and a Convolutional Neural Network (CNN) on 4 data sets. Results show that sDLr can achieve a speedup of more than 2.0 while still giving competitive classification performance.
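The abstract does not spell out the procedure, but the shrink-and-recall idea can be illustrated with a minimal sketch: during training, samples the current model already handles confidently are temporarily dropped from the active set (shrinking), and the full training set is periodically restored (recall) so no sample is permanently discarded. The logistic-regression model, the 0.1 confidence margin, and the every-10-epochs recall schedule below are illustrative assumptions, not the authors' exact method for DNN/DBN/CNN training.

```python
import numpy as np

# Hypothetical sketch of "shrinking with recall" on a toy logistic-regression
# task; thresholds and schedule are assumptions, not the sDLr paper's settings.

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
true_w = rng.normal(size=20)
y = (X @ true_w + 0.1 * rng.normal(size=1000) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(20)
lr = 0.1
active = np.arange(len(X))  # indices of the current (shrunken) training set

for epoch in range(50):
    # One full-batch gradient step on the active subset only.
    p = sigmoid(X[active] @ w)
    grad = X[active].T @ (p - y[active]) / len(active)
    w -= lr * grad

    if epoch % 10 == 9:
        # Recall: restore the full training set so shrinking stays reversible.
        active = np.arange(len(X))
    else:
        # Shrink: drop samples the model already predicts confidently
        # (output within an assumed 0.1 margin of the label).
        p_all = sigmoid(X[active] @ w)
        keep = np.abs(p_all - y[active]) >= 0.1
        if keep.sum() > 0:
            active = active[keep]

acc = ((sigmoid(X @ w) > 0.5) == y).mean()
print(f"training accuracy: {acc:.3f}, final active set size: {len(active)}")
```

Because each epoch after the first few touches only the hard examples, the per-epoch cost shrinks with the active set; the periodic recall is what keeps the final model consistent with the full data.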
