Learning both Weights and Connections for Efficient Neural Networks

06/08/2015
by Song Han, et al.

Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude, without affecting their accuracy, by learning only the important connections. Our method prunes redundant connections in three steps. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine-tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.
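The prune-then-retrain loop described above is straightforward to sketch. Below is a minimal PyTorch illustration, not the authors' implementation: the helper names (magnitude_prune, reapply_masks) and the single global sparsity value are hypothetical, whereas the paper chooses pruning thresholds per layer and iterates the prune/retrain cycle.

    import torch
    import torch.nn as nn

    def magnitude_prune(model, sparsity):
        """Zero out the smallest-magnitude weights in each Linear/Conv2d layer.

        Returns per-layer binary masks; pruned positions are zeroed in place.
        """
        masks = {}
        for name, module in model.named_modules():
            if isinstance(module, (nn.Linear, nn.Conv2d)):
                w = module.weight.data
                k = max(1, int(w.numel() * sparsity))
                # The k-th smallest |w| serves as the pruning threshold.
                threshold = w.abs().flatten().kthvalue(k).values
                mask = (w.abs() > threshold).float()
                module.weight.data.mul_(mask)
                masks[name] = mask
        return masks

    def reapply_masks(model, masks):
        """Keep pruned connections at zero during retraining
        (call after every optimizer step)."""
        for name, module in model.named_modules():
            if name in masks:
                module.weight.data.mul_(masks[name])

In use, the three phases would be: train the dense network normally, call magnitude_prune once, then continue training while calling reapply_masks after each optimizer step so that gradient updates cannot revive pruned connections.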


Related research

10/01/2015 · Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding
Neural networks are both computationally intensive and memory intensive,...

04/17/2017 · Exploring Sparsity in Recurrent Neural Networks
Recurrent Neural Networks (RNN) are widely used to solve a variety of pr...

07/15/2016 · DSD: Dense-Sparse-Dense Training for Deep Neural Networks
Modern deep neural networks have a large number of parameters, making th...

06/30/2020 · Training highly effective connectivities within neural networks with randomly initialized, fixed weights
We present some novel, straightforward methods for training the connecti...

08/17/2023 · TinyProp – Adaptive Sparse Backpropagation for Efficient TinyML On-device Learning
Training deep neural networks using backpropagation is very memory and c...

06/17/2016 · DecomposeMe: Simplifying ConvNets for End-to-End Learning
Deep learning and convolutional neural networks (ConvNets) have been suc...

05/27/2019 · Incremental Learning Using a Grow-and-Prune Paradigm with Efficient Neural Networks
Deep neural networks (DNNs) have become a widely deployed model for nume...
