Effect of Various Regularizers on Model Complexities of Neural Networks in Presence of Input Noise

01/31/2019
by Mayank Sharma, et al.

Deep neural networks are over-parameterized: the number of parameters is much larger than the number of samples used to train the network. Even in this regime, deep architectures do not overfit. This phenomenon is an active area of research, and many theories have been proposed to explain it, including Vapnik-Chervonenkis (VC) dimension bounds and Rademacher complexity bounds, which show that the capacity of a network is characterized by the norm of its weights rather than by the number of parameters. However, the effect of input noise on these measures has not been studied for shallow and deep architectures. In this paper, we analyze the effect of various regularization schemes on the complexity of a neural network subjected to varying degrees of Gaussian input noise, characterizing complexity by the loss, the L_2 norm of the weights, a Rademacher complexity based measure (Directly Approximately Regularizing Complexity, DARC1), and a VC-dimension based measure (Low Complexity Neural Network, LCNN). We show that L_2 regularization leads to a simpler hypothesis class and better generalization, followed by the DARC1 regularizer, for both shallow and deeper architectures. The Jacobian regularizer works well for shallow architectures under high levels of input noise. Spectral normalization attains the highest test-set accuracies for both shallow and deeper architectures. We also show that dropout alone does not perform well in the presence of input noise. Finally, we show that deeper architectures are robust to input noise, unlike their shallow counterparts.
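To make the setup described in the abstract concrete, the sketch below (not the authors' code) shows one possible training step that corrupts the inputs with Gaussian noise and adds an L_2 penalty and a DARC1-style penalty to the classification loss. The architecture, the noise level sigma, and the penalty weights are illustrative assumptions.

```python
# Minimal sketch: training a small classifier on Gaussian-noised inputs with
# L2 and DARC1-style regularization. All hyperparameters here are assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
criterion = nn.CrossEntropyLoss()

sigma = 0.1         # std of the Gaussian input noise (assumed)
l2_coeff = 1e-4     # weight of the L2 penalty (assumed)
darc1_coeff = 1e-3  # weight of the DARC1-style penalty (assumed)

def training_step(x, y):
    # Corrupt the inputs with zero-mean Gaussian noise.
    x_noisy = x + sigma * torch.randn_like(x)
    logits = model(x_noisy)
    loss = criterion(logits, y)
    # L2 norm of the weights: sum of squared parameters.
    l2_penalty = sum(p.pow(2).sum() for p in model.parameters())
    # DARC1-style penalty: largest per-class sum of absolute outputs over the
    # batch (one common reading of the DARC1 term; an assumption here).
    darc1_penalty = logits.abs().sum(dim=0).max()
    total = loss + l2_coeff * l2_penalty + darc1_coeff * darc1_penalty
    optimizer.zero_grad()
    total.backward()
    optimizer.step()
    return total.item()

# Example usage on a random batch (MNIST-sized inputs, 10 classes).
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))
print(training_step(x, y))
```

The other schemes compared in the paper would slot into the same loop, e.g. a Jacobian penalty on the input-output Jacobian of the network or spectral normalization applied to the weight matrices; the details above are a sketch, not the paper's experimental code.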

