Neural Network Compression using Transform Coding and Clustering

05/18/2018
by Thorsten Laude, et al.

With the deployment of neural networks on mobile devices and the necessity of transmitting neural networks over limited or expensive channels, the file size of the trained model has been identified as a bottleneck. In this paper, we propose a codec for the compression of neural networks that is based on transform coding for convolutional and dense layers and on clustering for biases and normalizations. Using this codec, we achieve average compression factors between 7.9 and 9.3 while the accuracy of the compressed networks for image classification decreases only by 1…
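The abstract does not specify the codec's transform, quantizer, or clustering algorithm, so the following is only a minimal sketch of the general idea: a 2-D DCT with uniform quantization standing in for the transform-coding stage on layer weights, and scalar k-means standing in for the clustering stage on biases and normalization parameters. All function names, the choice of DCT, and the quantization step size are illustrative assumptions, not the paper's actual method.

```python
import numpy as np
from scipy.fft import dct, idct

def compress_weights(w, q_step=0.01):
    """Transform-code a weight matrix: 2-D orthonormal DCT followed by
    uniform quantization. The integer symbols returned here would then be
    entropy-coded to produce the actual bitstream (omitted in this sketch)."""
    coeffs = dct(dct(w, axis=0, norm='ortho'), axis=1, norm='ortho')
    return np.round(coeffs / q_step).astype(np.int32)

def decompress_weights(q, q_step=0.01):
    """Invert the quantization and the 2-D DCT to reconstruct the weights."""
    coeffs = q.astype(np.float64) * q_step
    return idct(idct(coeffs, axis=1, norm='ortho'), axis=0, norm='ortho')

def cluster_params(v, k=8, iters=20, seed=0):
    """Scalar k-means for biases/normalization parameters: each value is
    replaced by its nearest centroid, so only k floats plus a small index
    per value need to be stored."""
    rng = np.random.default_rng(seed)
    centroids = rng.choice(v, size=k, replace=False)
    for _ in range(iters):
        idx = np.argmin(np.abs(v[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            if np.any(idx == j):
                centroids[j] = v[idx == j].mean()
    return centroids, idx
```

Because the DCT used here is orthonormal, the reconstruction error is bounded by the quantization step, so `q_step` directly trades file size against accuracy loss; the paper's reported 7.9–9.3 compression factors would come from the entropy coder exploiting the sparsity of the quantized coefficients.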
