Compressibility Loss for Neural Network Weights

05/03/2019
by Caglar Aytekin, et al.

In this paper we apply a compressibility loss that enables learning highly compressible neural network weights. The loss was previously proposed as a measure of negated sparsity of a signal, yet in this paper we show that minimizing this loss also enforces the non-zero parts of the signal to have very low entropy, thus making the entire signal more compressible. For an optimization problem where the goal is to minimize the compressibility loss (the objective), we prove that at any critical point of the objective, the weight vector is a ternary signal and the corresponding value of the objective is the square root of the number of non-zero elements in the signal, and is thus directly related to sparsity. In the experiments, we train neural networks with the compressibility loss and we show that the proposed method achieves weight sparsity and compression ratios comparable with the state-of-the-art.
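The abstract does not restate the loss itself, but its description (a previously proposed negated-sparsity measure whose critical points are ternary signals with objective value equal to the square root of the number of non-zeros) matches the classic l1/l2 norm ratio. The sketch below assumes that form; the function name compressibility_loss and the example values are illustrative assumptions, not the paper's reference implementation.

```python
# Minimal sketch, assuming the compressibility loss is the l1/l2 norm
# ratio (a standard non-sparsity measure); illustration only.
import numpy as np

def compressibility_loss(w, eps=1e-12):
    """l1/l2 ratio of a weight vector; smaller means more compressible."""
    w = np.asarray(w, dtype=np.float64).ravel()
    return np.abs(w).sum() / (np.sqrt((w ** 2).sum()) + eps)

# A ternary vector (values in {-a, 0, +a}) with k non-zero entries:
# loss = k*a / (a*sqrt(k)) = sqrt(k), matching the critical-point claim.
k = 5
w_ternary = np.concatenate([np.full(k, 0.3), np.zeros(20 - k)])
w_ternary[:2] *= -1.0  # flipping signs does not change the value
print(compressibility_loss(w_ternary), np.sqrt(k))  # both ~2.2360679
```

In training, such a term would typically be added to the task loss with a weighting coefficient; that usage pattern is an assumption here rather than a detail stated in the abstract.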


Related research

Entropy-Constrained Training of Deep Neural Networks (12/18/2018)
GASL: Guided Attention for Sparsity Learning in Deep Neural Networks (01/07/2019)
SWIS - Shared Weight bIt Sparsity for Efficient Neural Network Acceleration (03/01/2021)
An Inter-Layer Weight Prediction and Quantization for Deep Neural Networks based on a Smoothly Varying Weight Hypothesis (07/16/2019)
Interpreting weight maps in terms of cognitive or clinical neuroscience: nonsense? (04/30/2018)
Accurate and Interpretable Solution of the Inverse Rig for Realistic Blendshape Models with Quadratic Corrective Terms (02/09/2023)
FLOPs as a Direct Optimization Objective for Learning Sparse Neural Networks (11/07/2018)
