
Exploiting Elasticity in Tensor Ranks for Compressing Neural Networks

by Jie Ran, et al.

Elasticity in depth, width, kernel size, and resolution has been explored for compressing deep neural networks (DNNs). Recognizing that the kernels of a convolutional neural network (CNN) are 4-way tensors, we further exploit a new elasticity dimension along the input-output channels. Specifically, a novel nuclear-norm rank minimization factorization (NRMF) approach is proposed to dynamically and globally search for the reduced tensor ranks during training. Correlations between tensor ranks across multiple layers are revealed, and a graceful trade-off between model size and accuracy is obtained. Experiments show the superiority of NRMF over the previous non-elastic variational Bayesian matrix factorization (VBMF) scheme.
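The abstract does not give the NRMF algorithm itself, but the underlying idea can be illustrated: unfold the 4-way kernel along the channel dimension, penalize the nuclear norm of the unfolding (a convex surrogate for rank) during training, and afterwards truncate singular values below a threshold to obtain the reduced, data-driven rank. The sketch below is a minimal NumPy illustration under these assumptions; the function names, the unfolding choice, and the `tol` threshold are illustrative, not the authors' implementation.

```python
import numpy as np

def unfold_channels(kernel):
    # Reshape a 4-way conv kernel (out_ch, in_ch, kh, kw) into a matrix
    # whose row dimension is the output channels, exposing a matrix rank
    # along the input-output channel axes.
    o, i, kh, kw = kernel.shape
    return kernel.reshape(o, i * kh * kw)

def nuclear_norm(kernel):
    # Nuclear norm = sum of singular values of the unfolded kernel.
    # Adding this to the training loss is a standard convex surrogate
    # for minimizing the rank of the unfolding.
    s = np.linalg.svd(unfold_channels(kernel), compute_uv=False)
    return float(s.sum())

def truncate_rank(kernel, tol=1e-2):
    # After training, drop singular values below tol * largest singular
    # value to obtain the reduced ("elastic") rank, then rebuild a
    # rank-r factorized kernel from the leading SVD components.
    o, i, kh, kw = kernel.shape
    u, s, vt = np.linalg.svd(unfold_channels(kernel), full_matrices=False)
    r = int((s > tol * s[0]).sum())          # data-driven reduced rank
    low_rank = (u[:, :r] * s[:r]) @ vt[:r]   # rank-r approximation
    return low_rank.reshape(o, i, kh, kw), r
```

On a kernel whose channel unfolding is genuinely low-rank, `truncate_rank` recovers that rank and reconstructs the kernel almost exactly, while storing only the two factor matrices (`o*r + r*i*kh*kw` numbers instead of `o*i*kh*kw`).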


Related research:

- Bayesian multi-tensor factorization
- Deep Image Clustering with Tensor Kernels and Unsupervised Companion Objectives
- Hyperspectral Super-Resolution via Coupled Tensor Ring Factorization
- Incoherent Tensor Norms and Their Applications in Higher Order Tensor Completion
- Implicit Regularization with Polynomial Growth in Deep Tensor Factorization
- Efficient NTK using Dimensionality Reduction
- Information Plane Analysis of Deep Neural Networks via Matrix-Based Renyi's Entropy and Tensor Kernels