Low Rank Optimization for Efficient Deep Learning: Making A Balance between Compact Architecture and Fast Training

03/22/2023
by   Xinwei Ou, et al.

Deep neural networks have achieved great success in many data-processing applications. However, their high computational complexity and storage cost make deep learning hard to deploy on resource-constrained devices, and the large power consumption is not environmentally friendly. In this paper, we focus on low-rank optimization for efficient deep learning techniques. In the space domain, deep neural networks are compressed by low-rank approximation of the network parameters, which directly reduces the storage requirement through a smaller number of parameters. In the time domain, the network parameters can be trained in a few subspaces, which enables efficient training with fast convergence. The model compression methods in the spatial domain are summarized into three categories: pre-train, pre-set, and compression-aware methods. Together with a series of integrable techniques discussed here, such as sparse pruning, quantization, and entropy coding, they can be assembled into an integrated framework with lower computational complexity and storage cost. Beyond a summary of recent technical advances, we present two findings to motivate future work: first, the effective rank outperforms other sparse measures for network compression; second, there is a balance to be struck between spatial and temporal efficiency for tensorized neural networks.
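As a minimal sketch of the space-domain idea described above (not the paper's specific method), the following illustrates how a dense layer's weight matrix can be compressed by truncated SVD: a rank-r factorization replaces an m×n weight matrix with two factors of sizes m×r and r×n, cutting the parameter count from m·n to r·(m+n). The matrix sizes and rank below are arbitrary choices for illustration.

```python
import numpy as np

# Illustrative example: low-rank compression of a dense layer's weights.
# Sizes m, n and rank r are hypothetical, chosen only for demonstration.
rng = np.random.default_rng(0)
m, n, r = 256, 512, 32
W = rng.standard_normal((m, n))  # original weight matrix

# Truncated SVD: keep only the top-r singular components.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
U_r = U[:, :r] * s[:r]   # (m, r), singular values folded into the left factor
V_r = Vt[:r, :]          # (r, n)
W_approx = U_r @ V_r     # best rank-r approximation (Eckart-Young theorem)

orig_params = m * n              # parameters of the dense layer
compressed_params = r * (m + n)  # parameters of the two low-rank factors
rel_err = np.linalg.norm(W - W_approx) / np.linalg.norm(W)
print(f"params: {orig_params} -> {compressed_params}")
print(f"relative approximation error: {rel_err:.3f}")
```

At inference time, the matrix-vector product W @ x is replaced by U_r @ (V_r @ x), which also reduces the multiply-accumulate count from m·n to r·(m+n) per input.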

