Learning Robust and Lightweight Model through Separable Structured Transformations

12/27/2021
by Xian Wei, et al.

With the proliferation of mobile devices and the Internet of Things, deep learning models are increasingly deployed on devices with limited computing resources and memory, where they are also exposed to adversarial noise. Learning models that are both lightweight and robust is necessary for such devices, yet current deep learning solutions struggle to obtain one of these properties without degrading the other. As is well known, the fully-connected layers contribute most of the parameters of convolutional neural networks. We perform a separable structural transformation of the fully-connected layer to reduce its parameters: the large weight matrix of the fully-connected layer is decoupled into the tensor product of several small separable matrices. Notably, data such as images no longer need to be flattened before being fed to the fully-connected layer, which retains the valuable spatial geometric information of the data. Moreover, to further enhance both lightweightness and robustness, we propose a joint constraint of sparsity and a differentiable condition number, imposed on these separable matrices. We evaluate the proposed approach on MLP, VGG-16 and Vision Transformer. The experimental results on datasets such as ImageNet, SVHN, CIFAR-100 and CIFAR-10 show that we successfully reduce the number of network parameters by 90% while the accuracy loss is less than 1.5%, compared to the original fully-connected layer. Interestingly, it can achieve an overwhelming advantage even at a high compression rate, e.g., 200 times.
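To make the idea concrete, below is a minimal PyTorch-style sketch, not the authors' released code, of a fully-connected layer whose weight is factored into the tensor (Kronecker) product of two small matrices acting directly on the unflattened 2-D input, together with a joint regularizer combining an l1 sparsity term and a differentiable condition-number surrogate. All names (SeparableLinear, joint_penalty), shapes, and penalty weights are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: the dense weight of a fully-connected layer is replaced
# by two small factors A and B, applied to the unflattened input X as
# Y = A @ X @ B^T, which is equivalent to (A kron B) vec(X) on the flattened input.
class SeparableLinear(nn.Module):
    def __init__(self, in_h, in_w, out_p, out_q):
        super().__init__()
        # A: (out_p, in_h) and B: (out_q, in_w) replace a dense
        # (out_p*out_q, in_h*in_w) weight matrix.
        self.A = nn.Parameter(torch.randn(out_p, in_h) / in_h ** 0.5)
        self.B = nn.Parameter(torch.randn(out_q, in_w) / in_w ** 0.5)

    def forward(self, x):                              # x: (batch, in_h, in_w), not flattened
        return self.A @ x @ self.B.transpose(0, 1)     # -> (batch, out_p, out_q)


def joint_penalty(m, lam_sparse=1e-4, lam_cond=1e-4):
    """Illustrative joint constraint on one separable factor: an l1 sparsity
    term plus a differentiable condition-number surrogate (ratio of the
    extreme singular values). The weights are assumed, not from the paper."""
    s = torch.linalg.svdvals(m)                        # singular values, descending
    cond = s[0] / s[-1].clamp_min(1e-8)
    return lam_sparse * m.abs().sum() + lam_cond * cond


# Example: a 4096x4096 dense layer (~16.8M params) on 64x64 feature maps is
# replaced by two 64x64 factors (~8.2K params) trained with the joint penalty.
layer = SeparableLinear(in_h=64, in_w=64, out_p=64, out_q=64)
x = torch.randn(8, 64, 64)
y = layer(x)                                           # (8, 64, 64)
loss = y.pow(2).mean() + joint_penalty(layer.A) + joint_penalty(layer.B)
loss.backward()
```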


