Compacting Neural Network Classifiers via Dropout Training

11/18/2016
by Yotaro Kubo, et al.

We introduce dropout compaction, a novel method for training feed-forward neural networks that realizes the performance gains of training a large model with dropout regularization, yet extracts a compact neural network for run-time efficiency. In the proposed method, we introduce a sparsity-inducing prior on the per-unit dropout retention probability so that the optimizer can effectively prune hidden units during training. By changing the prior hyperparameters, we can control the size of the resulting network. We performed a systematic comparison of dropout compaction and competing methods on several real-world speech recognition tasks and found that dropout compaction achieved comparable accuracy with fewer than 50% of the hidden units, translating to a 2.5x speedup in run-time.
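The abstract only sketches the method at a high level. Below is a minimal, hypothetical PyTorch illustration of the core idea: a per-unit learnable retention probability trained with a sparsity-inducing penalty, so that units whose retention probability collapses toward zero can be pruned after training. The class name `CompactingDropout`, the relaxed-Bernoulli gating, the temperature, the penalty weight, and the pruning threshold are all assumptions for illustration, not the paper's actual formulation.

```python
# Illustrative sketch only; not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CompactingDropout(nn.Module):
    """Dropout with a learnable retention probability per hidden unit."""

    def __init__(self, num_units: int, init_retain: float = 0.9):
        super().__init__()
        # Parameterize retention probabilities through a logit so they stay in (0, 1).
        init_logit = torch.logit(torch.full((num_units,), init_retain))
        self.retain_logits = nn.Parameter(init_logit)

    def retain_probs(self) -> torch.Tensor:
        return torch.sigmoid(self.retain_logits)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        p = self.retain_probs()
        if self.training:
            # Relaxed (binary-concrete) Bernoulli gate so gradients flow into p.
            u = torch.rand_like(x)
            temperature = 0.1  # assumed value
            gate = torch.sigmoid(
                (torch.log(p) - torch.log1p(-p) + torch.log(u) - torch.log1p(-u))
                / temperature
            )
            return x * gate / p.clamp(min=1e-6)  # inverted-dropout scaling
        return x  # after inverted scaling, the expected transform is the identity

    def sparsity_penalty(self) -> torch.Tensor:
        # L1-style penalty pushing retention probabilities toward zero,
        # which marks the corresponding hidden units for pruning.
        return self.retain_probs().sum()


# Usage sketch: add the penalty to the task loss, then keep only high-retention units.
layer = nn.Linear(256, 512)
drop = CompactingDropout(num_units=512)
x = torch.randn(32, 256)
h = drop(F.relu(layer(x)))
loss = h.pow(2).mean() + 1e-3 * drop.sparsity_penalty()  # placeholder task loss
loss.backward()
keep = drop.retain_probs() > 0.05  # units to retain in the compact network
```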
