Comparing Sample-wise Learnability Across Deep Neural Network Models

01/08/2019
by Seung-Geon Lee, et al.

Estimating the relative importance of each sample in a training set has important practical and theoretical value, for example in importance sampling or curriculum learning. This focus on individual samples invokes the concept of sample-wise learnability: how easy is it to correctly learn each sample (cf. PAC learnability)? In this paper, we approach the sample-wise learnability problem within a deep learning context. We propose a measure of the learnability of a sample with respect to a given deep neural network (DNN) model. The basic idea is to train the given model on the training set and, for each sample, aggregate its hits and misses over all training epochs. Our experiments show that the sample-wise learnability measure collected this way is highly linearly correlated across different DNN models (ResNet-20, VGG-16, and MobileNet), suggesting that it captures general properties of the data rather than of any particular model. We expect our method to help develop better training curricula and to deepen our understanding of the data itself.
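As a rough illustration of how such a measure could be collected, here is a minimal PyTorch sketch. The toy dataset, model, hyperparameters, and the name learnability_scores are all illustrative assumptions, not the authors' implementation: at every epoch it records, per sample, a hit (1) or miss (0) depending on whether the model's prediction is correct, and the final score is the fraction of epochs in which the sample was a hit.

```python
# Minimal sketch of a per-sample learnability score (assumptions noted above):
# train a model and, for each sample, average its hits across all epochs.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)

# Toy dataset: 256 samples, 20 features, 4 classes (stand-in for CIFAR etc.).
X = torch.randn(256, 20)
y = torch.randint(0, 4, (256,))
dataset = TensorDataset(X, y, torch.arange(256))  # carry sample indices along
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 4))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

num_epochs = 20
hits = torch.zeros(len(dataset))  # per-sample hit counts across epochs

for epoch in range(num_epochs):
    for xb, yb, idx in loader:
        logits = model(xb)
        loss = criterion(logits, yb)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Record a hit for each sample classified correctly this epoch.
        with torch.no_grad():
            hits[idx] += (logits.argmax(dim=1) == yb).float()

# Learnability score: fraction of epochs in which each sample was a hit.
learnability_scores = hits / num_epochs
print(learnability_scores[:10])
```

Under this reading, scores near 1 mark samples the model fits early and retains, while scores near 0 mark samples that remain hard throughout training.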

Related Research

Importance Sampling for Stochastic Gradient Descent in Deep Neural Networks (03/29/2023)
Stochastic gradient descent samples uniformly the training set to build ...

An Algorithm to Attack Neural Network Encoder-based Out-Of-Distribution Sample Detector (09/17/2020)
Deep neural network (DNN), especially convolutional neural network, has ...

Improve the Robustness and Accuracy of Deep Neural Network with L_2,∞ Normalization (10/10/2020)
In this paper, the robustness and accuracy of the deep neural network (D...

Galaxy Spin Classification I: Z-wise vs S-wise Spirals With Chirality Equivariant Residual Network (10/09/2022)
The angular momentum of galaxies (galaxy spin) contains rich information...

How Important is Importance Sampling for Deep Budgeted Training? (10/27/2021)
Long iterative training processes for Deep Neural Networks (DNNs) are co...

DIVA: Dataset Derivative of a Learning Task (11/18/2021)
We present a method to compute the derivative of a learning task with re...

Deep pNML: Predictive Normalized Maximum Likelihood for Deep Neural Networks (04/28/2019)
The Predictive Normalized Maximum Likelihood (pNML) scheme has been rece...
