Adaptive Low-Rank Factorization to regularize shallow and deep neural networks

05/05/2020
by Mohammad Mahdi Bejani, et al.

Overfitting is one of the most troublesome problems in deep learning. To address this challenge, many approaches have been proposed to regularize learning models. They add hyper-parameters to the model to improve generalization; however, determining these hyper-parameters is hard, and a bad setting can make the training process diverge. In addition, most regularization schemes decrease the learning speed. Recently, Tai et al. [1] proposed low-rank tensor decomposition as a constrained filter for removing redundancy from the convolution kernels of CNNs. From a different viewpoint, we use Low-Rank matrix Factorization (LRF) to drop out some parameters of the learning model during training. However, like [1], this scheme tends to decrease training accuracy as it reduces the number of operations. Instead, we apply this regularization scheme adaptively, only when the complexity of a layer is high. The complexity of a layer can be evaluated by the nonlinear condition numbers of its learning system. The resulting method, entitled "AdaptiveLRF", neither decreases the training speed nor degrades the accuracy of the layer. The behavior of AdaptiveLRF is first visualized on a noisy dataset, and improvements are then presented on several small-size and large-scale datasets. The advantage of AdaptiveLRF over well-known dropout regularizers is demonstrated on shallow networks, and AdaptiveLRF also competes with dropout and adaptive dropout on various deep networks, including MobileNet V2, ResNet V2, DenseNet, and Xception. The best results of AdaptiveLRF on the SVHN and CIFAR-10 datasets are 98% and 94%, respectively, demonstrating that AdaptiveLRF can improve the quality of the learning model.
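The abstract describes the mechanism only at a high level: measure a layer's complexity via a condition number and, when it grows too large, replace the layer's weights with a low-rank factorization. The sketch below is a minimal NumPy illustration of that idea, not the authors' implementation; the function name adaptive_lrf and the knobs cond_threshold and energy are hypothetical stand-ins, and the paper's actual trigger uses nonlinear condition numbers of each layer's learning system rather than the plain 2-norm condition number shown here.

```python
# Illustrative sketch only (not the authors' code): approximating the
# AdaptiveLRF idea with an SVD-based low-rank factorization in NumPy.
# `cond_threshold` and `energy` are hypothetical knobs for this sketch.
import numpy as np

def adaptive_lrf(W, cond_threshold=1e3, energy=0.95):
    """Return W unchanged if well-conditioned; otherwise a low-rank surrogate."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    cond = s[0] / max(s[-1], 1e-12)       # 2-norm condition number of the weights
    if cond < cond_threshold:
        return W                          # layer complexity is low: leave weights alone
    # Keep the smallest rank r preserving `energy` of the squared singular values.
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(cum, energy)) + 1
    return (U[:, :r] * s[:r]) @ Vt[:r, :]  # rank-r reconstruction replaces W

# Example: an ill-conditioned weight matrix gets replaced by a low-rank one.
rng = np.random.default_rng(0)
W = rng.normal(size=(256, 128)) @ np.diag(np.logspace(0, -6, 128))
W_reg = adaptive_lrf(W)
print(np.linalg.matrix_rank(W_reg), "<=", min(W.shape))
```

In practice such a step would be interleaved with gradient updates, applying the truncated reconstruction only to layers whose condition number exceeds the threshold, so well-behaved layers are left untouched and training speed is preserved.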


Related research

06/17/2021 · Adaptive Low-Rank Regularization with Damping Sequences to Restrict Lazy Weights in Deep Networks
Overfitting is one of the critical problems in deep neural networks. Man...

02/27/2019 · Stochastically Rank-Regularized Tensor Regression Networks
Over-parametrization of deep neural networks has recently been shown to ...

05/04/2023 · Cuttlefish: Low-Rank Model Training without All the Tuning
Recent research has shown that training low-rank neural networks can eff...

11/19/2015 · Convolutional neural networks with low-rank regularization
Large CNNs have delivered impressive performance in various computer vis...

10/17/2020 · End-to-End Variational Bayesian Training of Tensorized Neural Networks with Automatic Rank Determination
Low-rank tensor decomposition is one of the most effective approaches to...

10/13/2017 · Dropout as a Low-Rank Regularizer for Matrix Factorization
Regularization for matrix factorization (MF) and approximation problems ...

01/30/2021 · NL-CNN: A Resources-Constrained Deep Learning Model based on Nonlinear Convolution
A novel convolution neural network model, abbreviated NL-CNN is proposed...
