Empirical study of extreme overfitting points of neural networks

06/14/2019
by Daniil Merkulov et al.

In this paper we propose a method for obtaining points of extreme overfitting: parameters of modern neural networks at which they demonstrate close to 100% accuracy on the training sample yet a huge generalization error, contrary to the widespread opinion that the overwhelming majority of critical points of a neural network's loss function have equally good generalizing ability. The paper studies the properties of such points and their location on the loss surface of modern neural networks.
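The abstract's central object, a parameter point with near-perfect training accuracy but chance-level generalization, can be illustrated with a toy experiment. The sketch below is not the paper's method; it simply shows that in an overparameterized linear model (more features than training samples), the minimum-norm interpolating solution fits even pure-noise labels exactly while generalizing no better than chance. All names and sizes here are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Overparameterized setting: more features (d) than training samples,
# so a linear model can interpolate any labeling of the training set.
n_train, n_test, d = 20, 200, 100
X_train = rng.normal(size=(n_train, d))
X_test = rng.normal(size=(n_test, d))

# Labels are pure noise: there is nothing generalizable to learn.
y_train = rng.choice([-1.0, 1.0], size=n_train)
y_test = rng.choice([-1.0, 1.0], size=n_test)

# Minimum-norm solution that interpolates the training labels exactly.
w = np.linalg.pinv(X_train) @ y_train

train_acc = np.mean(np.sign(X_train @ w) == y_train)
test_acc = np.mean(np.sign(X_test @ w) == y_test)
print(train_acc)  # 1.0: the training sample is fit perfectly
print(test_acc)   # near 0.5: chance level, a huge generalization gap
```

The same qualitative picture, perfect fit of the training data alongside poor generalization, is what the paper's extreme overfitting points exhibit for modern deep networks.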



Related research

07/21/2021
Memorization in Deep Neural Networks: Does the Loss Function matter?
Deep Neural Networks, often owing to the overparameterization, are shown...

10/29/2016
Sparse Signal Recovery for Binary Compressed Sensing by Majority Voting Neural Networks
In this paper, we propose majority voting neural networks for sparse sig...

01/30/2023
Complex Critical Points of Deep Linear Neural Networks
We extend the work of Mehta, Chen, Tang, and Hauenstein on computing the...

02/02/2013
A New Constructive Method to Optimize Neural Network Architecture and Generalization
In this paper, after analyzing the reasons of poor generalization and ov...

10/03/2019
Pure and Spurious Critical Points: a Geometric Study of Linear Networks
The critical locus of the loss function of a neural network is determine...

05/20/2017
Calibrating Black Box Classification Models through the Thresholding Method
In high-dimensional classification settings, we wish to seek a balance b...

08/31/2020
Extreme Memorization via Scale of Initialization
We construct an experimental setup in which changing the scale of initia...
