Approximated Orthonormal Normalisation in Training Neural Networks

11/21/2019
by Guoqiang Zhang, et al.

Generalisation of a deep neural network (DNN) is a major concern when employing deep learning to solve practical problems. In this paper we propose a new technique, named approximated orthonormal normalisation (AON), to improve the generalisation capacity of a DNN model. Considering a weight matrix W from a particular neural layer in the model, our objective is to design a function h(W) whose row vectors are approximately orthogonal to each other while still allowing the DNN model to fit the training data sufficiently accurately. Doing so discourages co-adaptation among neurons of the same layer and thereby improves the network's generalisation capacity. Specifically, at each iteration, we first approximate (WW^T)^(-1/2) using its Taylor expansion and multiply the result by W. The matrix product is then normalised by applying the spectral normalisation (SN) technique to obtain h(W). Conceptually, AON turns orthonormal regularisation into orthonormal normalisation, avoiding the need to manually balance the original objective against a penalty term. Experimental results show that AON yields promising validation performance compared to orthonormal regularisation.
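The two-step construction of h(W) described above can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: the Taylor order, the trace-based pre-scaling of WW^T (assumed here to keep the series stable), and the use of an exact largest-singular-value computation in place of the power-iteration estimate typically used for spectral normalisation are all assumptions.

```python
import numpy as np

def approx_orthonormal_normalise(W, order=2):
    """Sketch of AON: h(W) = SN(T(W W^T) W), where T is a truncated
    Taylor series approximating the inverse matrix square root.

    Hypothetical implementation; the pre-scaling and series order are
    assumptions, not taken from the paper.
    """
    n = W.shape[0]
    I = np.eye(n)
    S = W @ W.T
    # Assumption: rescale S so its spectrum is near 1, which keeps the
    # Taylor series for (I + E)^{-1/2} around E = S - I well-behaved.
    S = S / np.trace(S) * n
    E = S - I
    # Truncated binomial series: (I + E)^{-1/2} ~= I - E/2 + 3E^2/8 - ...
    coeffs = [1.0, -0.5, 0.375]
    T = np.zeros_like(S)
    Ek = I
    for c in coeffs[: order + 1]:
        T = T + c * Ek
        Ek = Ek @ E
    M = T @ W  # rows of M are approximately orthogonal
    # Spectral normalisation: divide by the largest singular value of M.
    sigma = np.linalg.norm(M, 2)
    return M / sigma
```

After this transform the rows of h(W) are closer to mutually orthogonal than those of W, and the spectral norm of h(W) equals 1, so no manual weighting between a data-fitting loss and an orthonormality penalty is needed.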

