Weightless: Lossy Weight Encoding For Deep Neural Network Compression

11/13/2017
by Brandon Reagen, et al.

The large memory requirements of deep neural networks limit their deployment and adoption on many devices. Model compression methods effectively reduce the memory requirements of these models, typically by applying transformations such as weight pruning or quantization. In this paper, we present a novel scheme for lossy weight encoding that complements conventional compression techniques. The encoding is based on the Bloomier filter, a probabilistic data structure that saves space at the cost of introducing random errors. By leveraging the ability of neural networks to tolerate these imperfections and by re-training around the errors, the proposed technique, Weightless, can compress DNN weights by up to 496x while maintaining model accuracy. This results in up to a 1.51x improvement over the state of the art.
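Since the abstract only names the Bloomier filter, a small sketch may help make the idea concrete. Below is a minimal, illustrative Python implementation of a Bloomier-filter-style lossy map, not the authors' code: the class name, the parameters (m table cells, k hash slots, t-bit values), the PRNG-based hashing, and the greedy peeling construction are all assumptions drawn from the standard Bloomier filter construction. Stored (index, value) pairs decode exactly; absent indices decode to pseudo-random values, mirroring the random errors the paper re-trains around.

import random

class BloomierFilter:
    """Lossy immutable map from integer keys to t-bit values.

    Stored keys are recovered exactly; queries for keys that were never
    stored return arbitrary t-bit values (the "random errors" that a
    Weightless-style scheme tolerates and re-trains around).
    """

    def __init__(self, kv, m, k=3, t=4, max_tries=32):
        self.m, self.k, self.t = m, k, t
        order = None
        for seed in range(max_tries):
            self.seed = seed
            order = self._peel(list(kv))
            if order is not None:
                break
        if order is None:
            raise ValueError("construction failed; increase m or max_tries")
        self.table = [0] * m
        # Assign cells in reverse peeling order: each key's singleton
        # slot is untouched by keys assigned before it and is never
        # re-touched by keys assigned after it.
        for key, slot in order:
            slots, mask = self._hashes(key)
            acc = kv[key] ^ mask
            for s in slots:
                if s != slot:
                    acc ^= self.table[s]
            self.table[slot] = acc

    def _hashes(self, key):
        # k distinct table slots plus a t-bit mask, derived from a
        # seeded PRNG (a stand-in for proper hash functions).
        rng = random.Random(hash((key, self.seed)))
        slots = rng.sample(range(self.m), self.k)
        mask = rng.getrandbits(self.t)
        return slots, mask

    def _peel(self, keys):
        # Greedy peeling: repeatedly remove a key that owns a slot no
        # other remaining key touches. Succeeds when m is large enough.
        touch, slots_of = {}, {}
        for key in keys:
            slots, _ = self._hashes(key)
            slots_of[key] = slots
            for s in slots:
                touch.setdefault(s, set()).add(key)
        stack = []
        queue = [s for s, ks in touch.items() if len(ks) == 1]
        remaining = set(keys)
        while queue:
            s = queue.pop()
            if len(touch[s]) != 1:
                continue
            (key,) = touch[s]
            stack.append((key, s))
            remaining.discard(key)
            for s2 in slots_of[key]:
                touch[s2].discard(key)
                if len(touch[s2]) == 1:
                    queue.append(s2)
        if remaining:
            return None          # peeling got stuck; caller retries with a new seed
        stack.reverse()          # assignment order: last peeled first
        return stack

    def get(self, key):
        slots, mask = self._hashes(key)
        acc = mask
        for s in slots:
            acc ^= self.table[s]
        return acc

if __name__ == "__main__":
    # Toy example: store only the nonzero entries of a pruned,
    # 4-bit-quantized weight vector; pruned positions are never stored.
    weights = {3: 0b1010, 17: 0b0001, 42: 0b1111, 99: 0b0110}
    bf = BloomierFilter(weights, m=8)   # 8 cells x 4 bits of payload
    assert all(bf.get(i) == v for i, v in weights.items())
    print("unstored index 5 decodes to:", bf.get(5))  # may be any 4-bit value

The 496x figure in the abstract would come from combining such an encoding with pruning, quantization, and re-training; the sketch above only illustrates the underlying data structure.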

Related research

12/20/2019
EAST: Encoding-Aware Sparse Training for Deep Memory Compression of ConvNets
The implementation of Deep Convolutional Neural Networks (ConvNets) on t...

06/11/2018
DropBack: Continuous Pruning During Training
We introduce a technique that compresses deep neural networks both durin...

09/30/2018
Minimal Random Code Learning: Getting Bits Back from Compressed Model Parameters
While deep neural networks are a highly successful model class, their la...

11/01/2017
Efficient Inferencing of Compressed Deep Neural Networks
Large number of weights in deep neural networks makes the models difficu...

10/24/2022
OLLA: Decreasing the Memory Usage of Neural Networks by Optimizing the Lifetime and Location of Arrays
The size of deep neural networks has grown exponentially in recent years...

07/12/2023
DeepMapping: The Case for Learned Data Mapping for Compression and Efficient Query Processing
Storing tabular data in a way that balances storage and query efficienci...

08/08/2022
NeuralVDB: High-resolution Sparse Volume Representation using Hierarchical Neural Networks
We introduce NeuralVDB, which improves on an existing industry standard ...
