HALOC: Hardware-Aware Automatic Low-Rank Compression for Compact Neural Networks

01/20/2023
by   Jinqi Xiao, et al.

Low-rank compression is an important model compression strategy for obtaining compact neural network models. Because the rank values directly determine both model complexity and model accuracy, proper selection of layer-wise ranks is critical. To date, although many low-rank compression approaches have been proposed, selecting ranks either manually or automatically, they suffer from costly manual trials or unsatisfactory compression performance. In addition, none of the existing works is designed in a hardware-aware way, limiting the practical performance of the compressed models on real-world hardware platforms. To address these challenges, in this paper we propose HALOC, a hardware-aware automatic low-rank compression framework. By interpreting automatic rank selection from an architecture search perspective, we develop an end-to-end solution that determines suitable layer-wise ranks in a differentiable and hardware-aware way. We further propose design principles and a mitigation strategy to efficiently explore the rank space and reduce the potential interference problem. Experimental results on different datasets and hardware platforms demonstrate the effectiveness of the proposed approach. On the CIFAR-10 dataset, HALOC enables 0.07… models with 72.20…; HALOC achieves 0.9… with 66.16… than the state-of-the-art automatic low-rank compression solution, with fewer computational and memory costs. In addition, HALOC delivers practical speedups on different hardware platforms, as verified by measurements on a desktop GPU, an embedded GPU, and an ASIC accelerator.
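To make concrete why the rank value directly trades model complexity against accuracy, the sketch below shows the standard low-rank factorization idea underlying this line of work: a weight matrix is replaced by two thin factors obtained from a truncated SVD, so a chosen rank sets both the parameter/FLOP count and the approximation error. This is a minimal illustration of the general technique, not HALOC's actual rank-selection method; the function names are ours.

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Approximate an (m x n) weight matrix W by two rank-r factors.

    Replacing y = W @ x with y = U @ (V @ x) cuts the per-sample cost
    from m*n multiply-accumulates to rank*(m + n), so the layer-wise
    rank directly controls the compression ratio.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]   # (m, rank): left factor, singular values folded in
    V_r = Vt[:rank, :]             # (rank, n): right factor
    return U_r, V_r

def param_counts(m, n, rank):
    """Parameter count of the original dense layer vs. its low-rank version."""
    return m * n, rank * (m + n)
```

For example, factorizing a 512x512 linear layer at rank 32 shrinks its parameters from 262,144 to 32,768, an 8x reduction; by the Eckart-Young theorem the truncated SVD is the best approximation at that rank, and the residual error shrinks as the rank grows. Automatic methods like the one described above search over exactly this per-layer rank knob.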
