MaxGain: Regularisation of Neural Networks by Constraining Activation Magnitudes

04/16/2018
by Henry Gouk, et al.

Effective regularisation of neural networks is essential to combat overfitting due to the large number of parameters involved. We present an empirical analogue to the Lipschitz constant of a feed-forward neural network, which we refer to as the maximum gain. We hypothesise that constraining the gain of a network will have a regularising effect, similar to how constraining the Lipschitz constant of a network has been shown to improve generalisation. We provide a simple algorithm that rescales the weight matrix of each layer after each parameter update. We conduct a series of studies on common benchmark datasets, as well as a novel dataset that we introduce to enable easier significance testing for experiments using convolutional networks. Performance on these datasets compares favourably with other common regularisation techniques.
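To make the rescaling step concrete, below is a minimal PyTorch sketch of one plausible reading of the abstract: after each parameter update, estimate a layer's empirical gain as the largest ratio of output norm to input norm over the current minibatch, and rescale the weights whenever that ratio exceeds a threshold. The helper name constrain_gain, the choice of per-example Euclidean norms, and the max_gain value are assumptions for illustration; the paper's exact norm and projection details are in the full text.

```python
import torch
import torch.nn as nn

def constrain_gain(layer: nn.Linear, x: torch.Tensor, max_gain: float = 2.0) -> None:
    """Hypothetical helper: rescale a linear layer so its empirical gain
    on the batch x does not exceed max_gain."""
    with torch.no_grad():
        y = layer(x)
        # Empirical gain: the largest output-to-input norm ratio in the
        # batch, a sample-based analogue of the Lipschitz constant.
        gain = (y.norm(dim=1) / x.norm(dim=1).clamp_min(1e-12)).max()
        if gain > max_gain:
            # Scaling both weight and bias by s scales the affine map's
            # output, and hence its empirical gain, by exactly s.
            scale = max_gain / gain
            layer.weight.mul_(scale)
            if layer.bias is not None:
                layer.bias.mul_(scale)

# Usage inside a training loop, applied after each parameter update:
#   optimizer.step()
#   constrain_gain(model.fc1, batch_inputs, max_gain=2.0)
```

Performing the projection after the gradient step keeps the optimiser itself unmodified; the constraint is enforced purely by rescaling, which is what makes the algorithm simple to bolt onto existing training code.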

