DeepCABAC: Context-adaptive binary arithmetic coding for deep neural network compression

05/15/2019
by Simon Wiedemann et al.

We present DeepCABAC, a novel context-adaptive binary arithmetic coder for compressing deep neural networks. It quantizes each weight parameter by minimizing a weighted rate-distortion function, which implicitly takes the impact of quantization on the accuracy of the network into account. Subsequently, it compresses the quantized values into a bitstream representation with minimal redundancy. We show that DeepCABAC reaches very high compression ratios across a wide range of network architectures and datasets. For instance, it compresses the VGG16 ImageNet model by a factor of 63.6 with no loss of accuracy, representing the entire network in merely 8.7 MB.
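To make the two-stage pipeline concrete, here is a minimal sketch of the first stage, assuming a uniform quantization grid, a small per-weight candidate search, and a logarithmic rate proxy standing in for the coder's true bit estimates. All three choices, along with the `rd_quantize` function and its parameters, are illustrative assumptions rather than the paper's exact formulation, which derives the rate term from CABAC itself.

```python
import numpy as np

def rd_quantize(weights, step, lam, importances=None):
    # Hedged sketch of rate-distortion-optimized quantization:
    # for each weight, pick the grid point minimizing D + lam * R.
    # The 5-point search window and the log-magnitude rate proxy are
    # illustrative assumptions, not DeepCABAC's actual rate model.
    if importances is None:
        importances = np.ones_like(weights)
    base = np.round(weights / step)
    # Candidate quantization indices around the nearest grid point.
    candidates = base[:, None] + np.arange(-2, 3)[None, :]
    recon = candidates * step
    # Distortion: importance-weighted squared reconstruction error.
    dist = importances[:, None] * (weights[:, None] - recon) ** 2
    # Rate proxy: larger-magnitude indices cost more bits,
    # so the zero index is cheapest (encouraging sparsity).
    rate = np.log2(1.0 + np.abs(candidates))
    cost = dist + lam * rate
    best = np.argmin(cost, axis=1)
    return np.take_along_axis(recon, best[:, None], axis=1).ravel()

# Toy usage: quantize a random weight vector.
w = np.random.randn(8)
print(rd_quantize(w, step=0.05, lam=0.01))
```

In the second stage, the chosen quantization indices would be binarized and fed through a context-adaptive binary arithmetic coder, whose probability models adapt to the statistics of the weight tensor so that the resulting bitstream carries minimal redundancy.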

