On the Interpretability of Regularisation for Neural Networks Through Model Gradient Similarity

05/25/2022
by   Vincent Szolnoky, et al.

Most complex machine learning and modelling techniques are prone to over-fitting and may subsequently generalise poorly to future data. Artificial neural networks are no different in this regard and, despite having a level of implicit regularisation when trained with gradient descent, often require the aid of explicit regularisers. We introduce a new framework, Model Gradient Similarity (MGS), that (1) serves as a metric of regularisation, which can be used to monitor neural network training, (2) adds insight into how explicit regularisers, while derived from widely different principles, operate via the same mechanism underneath by increasing MGS, and (3) provides the basis for a new regularisation scheme which exhibits excellent performance, especially in challenging settings such as high levels of label noise or limited sample sizes.
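The exact definition of MGS is not given on this page, but the idea of measuring similarity between per-sample model gradients can be sketched. The snippet below is a minimal, hedged illustration, assuming MGS is computed as pairwise cosine similarity between per-sample loss gradients; the model (a linear regressor), the function names, and all parameters are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def per_sample_gradients(w, X, y):
    # Linear model f(x) = w @ x with squared loss L_i = 0.5 * (w @ x_i - y_i)^2.
    # The gradient of L_i w.r.t. w is (w @ x_i - y_i) * x_i.
    residuals = X @ w - y          # shape (n,)
    return residuals[:, None] * X  # shape (n, d): one gradient row per sample

def gradient_similarity(G, eps=1e-12):
    # Pairwise cosine similarity between per-sample gradient vectors
    # (one hypothetical way to quantify "model gradient similarity").
    norms = np.linalg.norm(G, axis=1, keepdims=True)
    Gn = G / np.maximum(norms, eps)
    return Gn @ Gn.T

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=8)

w = rng.normal(size=3)           # an untrained parameter vector
G = per_sample_gradients(w, X, y)
S = gradient_similarity(G)
print(S.shape)  # (8, 8)
```

Under this reading, a regulariser that increases MGS would push the rows of `G` toward a common direction, driving the off-diagonal entries of `S` upward during training.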

