Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding

10/01/2015
by Song Han, et al.

Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce "deep compression", a three-stage pipeline of pruning, trained quantization, and Huffman coding that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing; finally, we apply Huffman coding. After the first two steps we retrain the network to fine-tune the remaining connections and the quantized centroids. Pruning reduces the number of connections by 9x to 13x; quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. It reduced the size of VGG-16 by 49x, from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, the compressed network has a 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.
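As a rough illustration of the three stages described above (this is not the authors' implementation: the layer shape, function names, and hyperparameters below are hypothetical assumptions, the retraining/fine-tuning steps are omitted, and sparse-index and codebook storage are not counted), the following Python sketch prunes a random weight matrix by magnitude, clusters the surviving weights into 2^5 = 32 shared centroids with a small k-means loop, and estimates the Huffman-coded size of the resulting cluster indices:

```python
import heapq
from collections import Counter

import numpy as np


def prune(weights, sparsity=0.9):
    """Magnitude pruning: zero out the smallest-magnitude weights."""
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) > threshold
    return weights * mask, mask


def kmeans_quantize(weights, mask, n_clusters=32, n_iter=20):
    """Weight sharing: cluster surviving weights into n_clusters shared centroids."""
    vals = weights[mask]
    # Linear initialization of centroids over the range of surviving weights.
    centroids = np.linspace(vals.min(), vals.max(), n_clusters)
    for _ in range(n_iter):
        # Assign each weight to its nearest centroid.
        idx = np.argmin(np.abs(vals[:, None] - centroids[None, :]), axis=1)
        # Move each centroid to the mean of the weights assigned to it.
        for k in range(n_clusters):
            if np.any(idx == k):
                centroids[k] = vals[idx == k].mean()
    return centroids, idx


def huffman_total_bits(symbols):
    """Build Huffman code lengths over a symbol stream; return total coded bits."""
    counts = Counter(symbols)
    if len(counts) == 1:
        return len(symbols)  # degenerate case: a single symbol, 1 bit each
    # Heap entries: (frequency, tie-breaker, {symbol: current code length}).
    heap = [(c, i, {s: 0}) for i, (s, c) in enumerate(counts.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        c1, _, d1 = heapq.heappop(heap)
        c2, _, d2 = heapq.heappop(heap)
        # Merging two subtrees adds one bit to every code inside them.
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (c1 + c2, tie, merged))
        tie += 1
    _, _, lengths = heap[0]
    return sum(counts[s] * lengths[s] for s in counts)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical dense layer; real layers come from a trained network.
    w = rng.normal(scale=0.02, size=(4096, 1024)).astype(np.float32)

    pruned, mask = prune(w, sparsity=0.9)                # keep ~10% of connections
    centroids, idx = kmeans_quantize(pruned, mask, 32)   # 5-bit shared weights

    dense_bits = w.size * 32                              # original 32-bit floats
    coded_bits = huffman_total_bits(idx.tolist())         # Huffman-coded cluster indices
    print(f"dense: {dense_bits / 8e6:.1f} MB, "
          f"compressed index stream: {coded_bits / 8e6:.2f} MB")
```

The actual method also retrains the network after pruning and after quantization to recover accuracy, and encodes the sparse structure itself, both of which this sketch leaves out.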

