
Approximated Orthonormal Normalisation in Training Neural Networks

by   Guoqiang Zhang, et al.
Victoria University of Wellington

Generalisation of a deep neural network (DNN) is a major concern when employing deep learning to solve practical problems. In this paper we propose a new technique, named approximated orthonormal normalisation (AON), to improve the generalisation capacity of a DNN model. Given a weight matrix W from a particular neural layer in the model, our objective is to design a function h(W) whose row vectors are approximately orthogonal to each other while still allowing the DNN model to fit the training data sufficiently accurately. This discourages co-adaptation among neurons of the same layer and thereby improves the network's generalisation capacity. Specifically, at each iteration, we first approximate (WW^T)^(-1/2) using its Taylor expansion and multiply it by the matrix W. The matrix product is then normalised by applying the spectral normalisation (SN) technique to obtain h(W). Conceptually, AON turns orthonormal regularisation into orthonormal normalisation, avoiding the need to manually balance the original objective and the penalty term. Experimental results show that AON yields promising validation performance compared to orthonormal regularisation.
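The construction of h(W) described above can be sketched in NumPy. This is a hypothetical illustration, not the paper's implementation: it uses a first-order Taylor expansion of A^(-1/2) around the identity, A^(-1/2) ≈ (3I - A)/2, and the trace-based rescaling of A is an assumed preconditioning step to keep the spectrum near 1. Spectral normalisation is applied exactly (via the largest singular value) rather than by power iteration.

```python
import numpy as np

def aon_transform(W):
    """Sketch of approximated orthonormal normalisation (AON).

    Approximates (W W^T)^(-1/2) W via a first-order Taylor expansion,
    then applies spectral normalisation to the result.
    """
    rows = W.shape[0]
    A = W @ W.T
    # Rescale A so its eigenvalues cluster around 1, where the Taylor
    # expansion of A^(-1/2) is accurate (an assumed preconditioning step).
    A = A * (rows / np.trace(A))
    # First-order Taylor expansion: A^(-1/2) ≈ (3 I - A) / 2.
    T = 0.5 * (3.0 * np.eye(rows) - A)
    M = T @ W
    # Spectral normalisation: divide by the largest singular value,
    # so the output has spectral norm 1.
    sigma = np.linalg.norm(M, 2)
    return M / sigma

# Example: rows of h(W) become closer to mutually orthogonal.
W = np.random.randn(4, 16)
H = aon_transform(W)
```

In a training loop, h(W) would replace W in the layer's forward pass, so approximate row-orthonormality is enforced by construction rather than by a penalty term whose weight must be tuned.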
