LCS: Learning Compressible Subspaces for Adaptive Network Compression at Inference Time

10/08/2021
by Elvis Nunez, et al.

When deploying deep learning models to a device, it is traditionally assumed that available computational resources (compute, memory, and power) remain static. However, real-world computing systems do not always provide stable resource guarantees: computational resources need to be conserved when load from other processes is high or battery power is low. Inspired by recent works on neural network subspaces, we propose a method for training a "compressible subspace" of neural networks that contains a fine-grained spectrum of models ranging from highly efficient to highly accurate. Our models require no retraining; thus, our subspace of models can be deployed entirely on-device to allow adaptive network compression at inference time. We present results for achieving arbitrarily fine-grained accuracy-efficiency trade-offs at inference time for structured and unstructured sparsity. We achieve accuracies on par with standard models when testing our uncompressed models, and maintain high accuracy for compression rates above 90%. We also demonstrate that our algorithm extends to quantization at variable bit widths, achieving accuracy on par with individually trained networks.
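The abstract describes the method only at a high level. The sketch below illustrates the core idea in PyTorch; it is a hedged reconstruction, not the authors' implementation. The two-endpoint linear weight subspace, the SubspaceLinear module, the mapping from the subspace coordinate alpha to a sparsity level, and the use of magnitude pruning are all assumptions based on the abstract and on prior work on neural network subspaces.

# Minimal sketch of one training step for a "compressible subspace".
# Illustrative reconstruction, NOT the paper's exact recipe: the
# two-endpoint linear subspace, the alpha-to-sparsity mapping, and
# magnitude pruning are assumptions drawn from the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubspaceLinear(nn.Module):
    """Linear layer whose weights live on a line between two endpoints."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.w1 = nn.Parameter(torch.empty(out_features, in_features))
        self.w2 = nn.Parameter(torch.empty(out_features, in_features))
        nn.init.kaiming_uniform_(self.w1)
        nn.init.kaiming_uniform_(self.w2)

    def forward(self, x, alpha):
        # Interpolate between the two endpoints of the subspace.
        w = (1.0 - alpha) * self.w1 + alpha * self.w2
        # Map the subspace coordinate to an (assumed) sparsity level:
        # alpha=0 -> dense, alpha=1 -> 90% unstructured sparsity.
        sparsity = 0.9 * alpha
        k = int(sparsity * w.numel())
        if k > 0:
            # Magnitude pruning: zero out the k smallest-magnitude weights.
            threshold = w.abs().flatten().kthvalue(k).values
            w = w * (w.abs() > threshold).float()
        return F.linear(x, w)

model = SubspaceLinear(784, 10)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(32, 784)          # dummy batch
y = torch.randint(0, 10, (32,))   # dummy labels

# One training step: sample a random point in the subspace so that,
# after training, any alpha yields a usable compressed model.
alpha = torch.rand(()).item()
loss = F.cross_entropy(model(x, alpha), y)
opt.zero_grad()
loss.backward()
opt.step()

At inference time, the deployed model simply picks alpha to match the device's current resource budget (alpha near 0 for full accuracy, alpha near 1 for high sparsity) with no retraining step.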


