Scalable Deep Neural Networks via Low-Rank Matrix Factorization

10/29/2019
by Atsushi Yaguchi et al.

Compressing deep neural networks (DNNs) is important for real-world applications that run on resource-constrained devices. However, it is difficult to change a model's size once training is complete, and re-training is required to configure models for different devices. In this paper, we propose a novel method that enables DNNs to flexibly change their size after training. We factorize the weight matrices of the DNNs via singular value decomposition (SVD) and change their ranks according to the target size. In contrast with existing methods, we introduce simple criteria that characterize the importance of each basis and layer, which enables us to reduce model complexity while increasing the error as little as possible. In experiments on multiple image-classification tasks, our method exhibits favorable performance compared with other methods.
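The core operation the abstract describes, factorizing a weight matrix via SVD and truncating it to a target rank, can be sketched in a few lines of NumPy. This is a minimal illustration of rank truncation only; the paper's criteria for choosing the rank of each layer are not shown, and the function name `truncate_rank` is a placeholder, not from the paper.

```python
import numpy as np

def truncate_rank(W, r):
    """Factor W (m x n) into two rank-r matrices via SVD.

    Keeping only the top-r singular values replaces the m*n
    parameters of W with r*(m + n), a saving whenever
    r < m*n / (m + n).
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    # Fold the kept singular values into the left factor.
    A = U[:, :r] * s[:r]   # shape (m, r)
    B = Vt[:r, :]          # shape (r, n)
    return A, B

rng = np.random.default_rng(0)
# A weight matrix with exactly rank-8 structure, for illustration.
W = rng.standard_normal((64, 8)) @ rng.standard_normal((8, 32))
A, B = truncate_rank(W, 8)
print(np.allclose(A @ B, W))  # True: rank-8 truncation recovers a rank-8 matrix
```

A single dense layer then computes `x @ A @ B` instead of `x @ W`; lowering `r` after training shrinks the model without touching the other layers.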

Related research

- Learning Low-rank Deep Neural Networks via Singular Vector Orthogonality Regularization and Singular Value Sparsification (04/20/2020): Modern deep neural networks (DNNs) often require high memory consumption...
- Ensemble Of Deep Neural Networks For Acoustic Scene Classification (08/19/2017): Deep neural networks (DNNs) have recently achieved great success in a mu...
- Controllable Orthogonalization in Training DNNs (04/02/2020): Orthogonality is widely used for training deep neural networks (DNNs) du...
- Backdoor Attack Detection in Computer Vision by Applying Matrix Factorization on the Weights of Deep Networks (12/15/2022): The increasing importance of both deep neural networks (DNNs) and cloud ...
- Linear Stability Hypothesis and Rank Stratification for Nonlinear Models (11/21/2022): Models with nonlinear architectures/parameterizations such as deep neura...
- Maestro: Uncovering Low-Rank Structures via Trainable Decomposition (08/28/2023): Deep Neural Networks (DNNs) have been a large driver and enabler for AI ...
- Adversarial Energy Disaggregation for Non-intrusive Load Monitoring (08/02/2021): Energy disaggregation, also known as non-intrusive load monitoring (NILM...
