Using noise resilience for ranking generalization of deep neural networks

12/16/2020
by Depen Morwani, et al.

Recent papers have shown that sufficiently overparameterized neural networks can perfectly fit even random labels. It is therefore crucial to understand what underlies the generalization performance of a network on real-world data. In this work, we propose several measures for predicting the generalization error of a network given its training data and parameters. Using one of these measures, based on the noise resilience of the network, we secured 5th position in the Predicting Generalization in Deep Learning (PGDL) competition at NeurIPS 2020.
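The abstract does not specify how the noise-resilience measure is computed, but the general idea behind such measures is to perturb a trained network's weights with small random noise and check how stable its predictions remain; networks whose predictions survive perturbation tend to sit in flatter minima and, heuristically, generalize better. Below is a minimal sketch of one way this could look. The two-layer network, the Gaussian perturbation scheme, and the `sigma`/`trials` parameters are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def forward(params, X):
    # Simple two-layer ReLU network: illustrative stand-in for a trained model.
    W1, b1, W2, b2 = params
    h = np.maximum(X @ W1 + b1, 0.0)
    return h @ W2 + b2

def noise_resilience(params, X, sigma=0.05, trials=20, seed=0):
    """Fraction of predictions that survive Gaussian weight noise,
    averaged over random trials. Values near 1.0 indicate a network
    whose predictions are stable under perturbation (assumed proxy
    for generalization, per the noise-resilience idea in the text)."""
    rng = np.random.default_rng(seed)
    base_pred = forward(params, X).argmax(axis=1)
    agree = 0.0
    for _ in range(trials):
        # Perturb every parameter tensor with i.i.d. Gaussian noise.
        noisy = [p + sigma * rng.standard_normal(p.shape) for p in params]
        agree += (forward(noisy, X).argmax(axis=1) == base_pred).mean()
    return agree / trials

# Toy usage: a random (untrained) network on random inputs.
rng = np.random.default_rng(1)
params = (rng.standard_normal((10, 32)), np.zeros(32),
          rng.standard_normal((32, 3)), np.zeros(3))
X = rng.standard_normal((100, 10))
score = noise_resilience(params, X)
print(score)
```

In a competition setting like PGDL, such a score would be computed per trained model and used to rank models by predicted generalization, so only its relative ordering across models matters, not its absolute value.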


Related research

- 12/04/2020: Representation Based Complexity Measures for Predicting Generalization in Deep Learning. "Deep Neural Networks can generalize despite being significantly overpara..."
- 11/25/2020: Ranking Deep Learning Generalization using Label Variation in Latent Geometry Graphs. "Measuring the generalization performance of a Deep Neural Network (DNN) ..."
- 04/08/2021: Gi and Pal Scores: Deep Neural Network Generalization Statistics. "The field of Deep Learning is rich with empirical evidence of human-like..."
- 10/26/2017: Rethinking generalization requires revisiting old ideas: statistical mechanics approaches and complex learning behavior. "We describe an approach to understand the peculiar and counterintuitive ..."
- 06/09/2021: Predicting Deep Neural Network Generalization with Perturbation Response Curves. "The field of Deep Learning is rich with empirical evidence of human-like..."
- 08/11/2023: Predicting Resilience with Neural Networks. "Resilience engineering studies the ability of a system to survive and re..."
- 12/13/2020: Predicting Generalization in Deep Learning via Local Measures of Distortion. "We study generalization in deep learning by appealing to complexity meas..."
