How regularization affects the geometry of loss functions

07/28/2023
by Nathaniel Bottman, et al.

What neural networks learn depends fundamentally on the geometry of the underlying loss function. We study how different regularizers affect the geometry of this function. One of the most basic geometric properties of a smooth function is whether it is Morse or not, i.e., whether every one of its critical points has a non-degenerate Hessian. For nonlinear deep neural networks, the unregularized loss function L is typically not Morse. We consider several different regularizers, including weight decay, and study for which regularizers the regularized function L_ϵ becomes Morse.
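To make the Morse condition concrete, here is a minimal sketch in Python/JAX. It is our own illustration, not taken from the paper: the toy loss (w1*w2)^2 has the kind of rescaling symmetry common in neural networks, so its critical points are degenerate, while adding a weight-decay term ϵ‖w‖^2 leaves a single non-degenerate minimum. The toy loss, the value of eps, and all names below are illustrative assumptions.

import jax
import jax.numpy as jnp

# Toy loss with the rescaling symmetry L(a*w1, w2/a) = L(w1, w2).
# Both coordinate axes consist of critical points, and the Hessian
# is degenerate along them, so this loss is not Morse.
def loss(w):
    return (w[0] * w[1]) ** 2

eps = 1e-2  # illustrative weight-decay strength

def loss_reg(w):
    # Weight decay: L_eps(w) = L(w) + eps * ||w||^2
    return loss(w) + eps * jnp.sum(w ** 2)

origin = jnp.zeros(2)
print(jnp.linalg.eigvalsh(jax.hessian(loss)(origin)))      # [0. 0.]     -> degenerate
print(jnp.linalg.eigvalsh(jax.hessian(loss_reg)(origin)))  # [0.02 0.02] -> non-degenerate

For any ϵ > 0, the gradient of the regularized toy loss vanishes only at the origin, where the Hessian equals 2ϵI; the regularized function is therefore Morse, mirroring the effect the abstract describes for weight decay.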


Related research

10/02/2020 · Effective Regularization Through Loss-Function Metalearning
Loss-function metalearning can be used to discover novel, customized los...

10/03/2019 · Pure and Spurious Critical Points: a Geometric Study of Linear Networks
The critical locus of the loss function of a neural network is determine...

03/07/2019 · Only sparsity based loss function for learning representations
We study the emergence of sparse representations in neural networks. We ...

11/30/2021 · LossPlot: A Better Way to Visualize Loss Landscapes
Investigations into the loss landscapes of deep neural networks are ofte...

07/23/2020 · Adma: A Flexible Loss Function for Neural Networks
Highly increased interest in Artificial Neural Networks (ANNs) has resu...

11/19/2017 · Convergence Analysis of the Dynamics of a Special Kind of Two-Layered Neural Networks with ℓ_1 and ℓ_2 Regularization
In this paper, we made an extension to the convergence analysis of the d...

06/24/2023 · Current density impedance imaging with PINNs
In this paper, we introduce CDII-PINNs, a computationally efficient meth...
