Predicting Deep Neural Network Generalization with Perturbation Response Curves

by Yair Schiff, et al.

The field of Deep Learning is rich with empirical evidence of human-like performance on a variety of prediction tasks. However, despite these successes, the recent Predicting Generalization in Deep Learning (PGDL) NeurIPS 2020 competition suggests that there is a need for more robust and efficient measures of network generalization. In this work, we propose a new framework for evaluating the generalization capabilities of trained networks. We use perturbation response (PR) curves, which capture the change in accuracy of a given network as a function of varying levels of training sample perturbation. From these PR curves, we derive novel statistics that capture generalization capability. Specifically, we introduce two new measures, the Gi-score and Pal-score, inspired by the Gini coefficient and Palma ratio (measures of income inequality), that accurately predict generalization gaps. Applying our framework to intra- and inter-class sample mixup, we attain better predictive scores than the current state-of-the-art measures on a majority of tasks in the PGDL competition. In addition, we show that our framework and the proposed statistics can capture the extent to which a trained network is invariant to a given parametric input transformation, such as rotation or translation. These generalization gap prediction statistics therefore also provide a useful means of selecting network architectures and hyperparameters that are invariant to a given perturbation.
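The abstract describes the Gi-score and Pal-score as adaptations of the Gini coefficient and Palma ratio to perturbation response curves. The paper's exact formulas are not reproduced here, but a minimal sketch of the underlying inequality statistics, applied to a hypothetical accuracy-vs-perturbation-level curve, might look like the following (the curve values and function names are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def gini(values):
    """Gini coefficient of non-negative values (0 = perfect equality, ~1 = maximal inequality)."""
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    if v.sum() == 0:
        return 0.0
    # Standard closed form derived from the Lorenz curve of the sorted values.
    index = np.arange(1, n + 1)
    return float((2 * index - n - 1) @ v / (n * v.sum()))

def palma(values):
    """Palma ratio: total share of the top 10% of values over the bottom 40%."""
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    top = v[int(np.ceil(0.9 * n)):].sum()
    bottom = v[: int(np.floor(0.4 * n))].sum()
    return float(top / bottom) if bottom > 0 else float("inf")

# Hypothetical PR curve: accuracy measured at increasing perturbation levels
# (e.g. increasing mixup strength on training samples).
accuracy = np.array([0.95, 0.94, 0.92, 0.88, 0.80, 0.70,
                     0.58, 0.45, 0.33, 0.22, 0.12])

gi_score = gini(accuracy)   # how unequally accuracy is distributed across levels
pal_score = palma(accuracy)
```

A network whose accuracy stays flat across perturbation levels yields a Gi-score near 0 (equal "shares" of accuracy at every level), while one that collapses quickly concentrates its accuracy at low perturbation levels and scores higher on both statistics.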






