Example Perplexity

03/16/2022
by Nevin L. Zhang, et al.

Some examples are easier for humans to classify than others, and the same should be true for deep neural networks (DNNs). We use the term example perplexity to refer to the level of difficulty of classifying an example. In this paper, we propose a method to measure the perplexity of an example and investigate what factors contribute to high example perplexity. The related code and resources are available at https://github.com/vaynexie/Example-Perplexity.
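The abstract does not spell out how example perplexity is computed, so the following is only a rough illustration of one common way to score per-example difficulty: take a population of trained classifiers and use the inverse geometric mean of the probabilities they assign to the example's true label. The helper name example_perplexity and the numbers below are hypothetical, not the authors' definition.

import numpy as np

def example_perplexity(probs_true_label):
    # probs_true_label: array of shape (n_models,), where each entry is the
    # probability one trained classifier assigns to the example's true label.
    # Returns the inverse geometric mean of those probabilities: close to 1.0
    # when every model is confident, larger when the models struggle.
    probs = np.clip(np.asarray(probs_true_label, dtype=float), 1e-12, 1.0)
    return float(np.exp(-np.mean(np.log(probs))))

# An "easy" example vs. a "hard" one under five hypothetical models.
easy = example_perplexity([0.98, 0.95, 0.99, 0.97, 0.96])  # ~1.03
hard = example_perplexity([0.40, 0.25, 0.55, 0.10, 0.30])  # ~3.6
print(easy, hard)

Under this reading, an example that every model classifies confidently gets a score near 1, while one that most models get wrong or are unsure about gets a much larger score.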


