STN: Scalable Tensorizing Networks via Structure-Aware Training and Adaptive Compression

05/30/2022
by   Chang Nie, et al.

Deep neural networks (DNNs) have delivered remarkable performance on many computer vision tasks. However, the over-parameterized representations of popular architectures dramatically increase their computational complexity and storage costs, hindering their deployment on resource-constrained edge devices. Although many tensor decomposition (TD) methods have been studied for compressing DNNs into compact representations, they suffer from non-negligible performance degradation in practice. In this paper, we propose Scalable Tensorizing Networks (STN), which dynamically and adaptively adjust the model size and decomposition structure without retraining. First, we account for compression during training by adding a low-rank regularizer that encourages the network's desired low-rank characteristics in full tensor format. Then, since network layers exhibit different low-rank structures, STN is obtained by a data-driven adaptive TD approach in which the topological structure of the per-layer decomposition is learned from the pre-trained model and the ranks are selected appropriately under specified storage constraints. As a result, STN is compatible with arbitrary network architectures and achieves higher compression performance and flexibility than other tensorized variants. Comprehensive experiments on several popular architectures and benchmarks substantiate the superiority of our model in improving parameter efficiency.
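The two ingredients the abstract describes, a low-rank regularizer applied during training and rank selection under a storage budget at compression time, can be illustrated with a minimal matrix-case sketch. This is not the paper's implementation (STN operates on tensor decompositions per layer); the function names, the nuclear-norm surrogate, and the simple budget rule are illustrative assumptions.

```python
import numpy as np

def nuclear_norm(w):
    # Sum of singular values: a common convex surrogate for matrix rank,
    # usable as an additive regularizer on a layer's weight during training.
    return np.linalg.svd(w, compute_uv=False).sum()

def select_rank(w, budget):
    # Illustrative budget rule: pick the largest rank r whose factorized
    # parameter count r*(m+n) fits within the storage budget.
    m, n = w.shape
    for r in range(min(m, n), 0, -1):
        if r * (m + n) <= budget:
            return r
    return 1

def compress(w, budget):
    # Truncated SVD: keep the top-r singular triplets under the budget,
    # returning factors A (m x r) and B (r x n) with w ~= A @ B.
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    r = select_rank(w, budget)
    return u[:, :r] * s[:r], vt[:r]
```

If the regularizer has driven a layer close to low rank, the truncated factors reconstruct it almost exactly, which is the property that lets the model be compressed after training without retraining.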

Related research:

- Low-Rank+Sparse Tensor Compression for Neural Networks (11/02/2021)
- Trained Rank Pruning for Efficient Deep Neural Networks (12/06/2018)
- Data-Driven Low-Rank Neural Network Compression (07/13/2021)
- Compression-aware Training of Deep Networks (11/07/2017)
- ADA-Tucker: Compressing Deep Neural Networks via Adaptive Dimension Adjustment Tucker Decomposition (06/18/2019)
- Maestro: Uncovering Low-Rank Structures via Trainable Decomposition (08/28/2023)
- Block-term Tensor Neural Networks (10/10/2020)
