DeepSZ: A Novel Framework to Compress Deep Neural Networks by Using Error-Bounded Lossy Compression

01/26/2019
by Sian Jin, et al.

DNNs have been quickly and broadly exploited to improve data analysis quality in many complex science and engineering applications. Today's DNNs are becoming deeper and wider because of the increasing demand for analysis quality and the growing complexity of the applications they must handle. Wide and deep DNNs, however, require large amounts of resources, significantly restricting their use on resource-constrained systems. Although several network simplification methods have been proposed to address this issue, they suffer from either low compression ratios or high compression errors, which may require a costly retraining process to reach the target accuracy. In this paper, we propose DeepSZ, an accuracy-loss-bounded neural network compression framework that involves four key steps: network pruning, error bound assessment, optimization of the error bound configuration, and compressed model generation, featuring a high compression ratio and low encoding time. The contribution is three-fold. (1) We develop an adaptive approach to select feasible error bounds for each layer. (2) We build a model to estimate the overall loss of accuracy from the accuracy degradation caused by each individually decompressed layer. (3) We develop an efficient optimization algorithm to determine the best-fit configuration of error bounds so as to maximize the compression ratio under the user-set accuracy constraint. Experiments show that DeepSZ can compress AlexNet and VGG-16 on ImageNet by compression ratios of 46X and 116X, respectively, and LeNet-300-100 and LeNet-5 on MNIST by compression ratios of 57X and 56X, respectively, with at most a 0.3% loss of accuracy. Compared with other state-of-the-art methods, DeepSZ improves the compression ratio by up to 1.43X, DNN encoding performance by up to 4.0X (with four Nvidia Tesla V100 GPUs), and decoding performance by up to 6.2X.
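To make the error-bound optimization step concrete, the sketch below illustrates one plausible way to pick a per-layer error bound so that the sum of per-layer accuracy degradations stays within a user-set budget while the total compressed size is minimized. This is only an illustrative assumption of how such a search could look: the names (`LayerStats`, `choose_error_bounds`), the candidate error bounds, and all numbers are hypothetical, and the exhaustive search is used here purely for clarity rather than being the framework's actual algorithm.

```python
# Illustrative sketch (not the DeepSZ implementation) of choosing per-layer
# error bounds under an accuracy-loss budget. All names and numbers are
# hypothetical; the real framework measures per-layer degradation by
# decompressing one layer at a time and evaluating on validation data.

from dataclasses import dataclass
from itertools import product
from typing import Dict, List


@dataclass
class LayerStats:
    """Per-layer measurements for a small set of candidate error bounds."""
    name: str
    error_bounds: List[float]       # candidate absolute error bounds for the lossy compressor
    compressed_sizes: List[int]     # compressed size (bytes) at each candidate error bound
    accuracy_drops: List[float]     # accuracy drop (percentage points) when only this
                                    # layer is decompressed at that error bound


def choose_error_bounds(layers: List[LayerStats],
                        accuracy_budget: float) -> Dict[str, float]:
    """Pick one error bound per layer so the summed per-layer accuracy drops
    stay within the budget while total compressed size is minimized."""
    best_cfg: Dict[str, float] = {}
    best_size = float("inf")
    # Exhaustive search over small candidate sets, for clarity only.
    for choice in product(*(range(len(l.error_bounds)) for l in layers)):
        total_drop = sum(l.accuracy_drops[i] for l, i in zip(layers, choice))
        if total_drop > accuracy_budget:
            continue
        total_size = sum(l.compressed_sizes[i] for l, i in zip(layers, choice))
        if total_size < best_size:
            best_size = total_size
            best_cfg = {l.name: l.error_bounds[i] for l, i in zip(layers, choice)}
    return best_cfg


if __name__ == "__main__":
    # Toy numbers, purely illustrative.
    layers = [
        LayerStats("fc6", [1e-2, 5e-2], [4_000_000, 2_500_000], [0.05, 0.25]),
        LayerStats("fc7", [1e-2, 5e-2], [1_800_000, 1_100_000], [0.02, 0.10]),
        LayerStats("fc8", [1e-2, 5e-2], [900_000, 600_000], [0.01, 0.08]),
    ]
    print(choose_error_bounds(layers, accuracy_budget=0.3))
```

The additive accuracy model mirrors the paper's idea of estimating overall loss from individual decompressed layers; the brute-force enumeration shown here is only a stand-in for the paper's more efficient optimization algorithm.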


