Empirical Study of Easy and Hard Examples in CNN Training

11/25/2019
by Ikki Kishida, et al.

Deep Neural Networks (DNNs) generalize well despite their massive size and their capacity to memorize all training examples. One hypothesis is that DNNs start learning from simple patterns; it rests on the existence of examples that are consistently classified correctly at the early stage of training (i.e., easy examples) and examples that are consistently misclassified (i.e., hard examples). Easy examples are evidence that DNNs begin by learning specific patterns and that there is a consistent learning process. Although it is important to understand how DNNs learn patterns and acquire generalization ability, the properties of easy and hard examples have not been thoroughly investigated (e.g., their contributions to generalization and their visual appearance). In this work, we study the similarities among easy examples and among hard examples across different Convolutional Neural Network (CNN) architectures, and we assess how those examples contribute to generalization. Our results show that easy examples are visually similar to each other while hard examples are visually diverse, and that both kinds of examples are largely shared across different CNN architectures. Moreover, while hard examples tend to contribute more to generalization than easy examples, removing a large number of easy examples leads to poor generalization. Based on these results, we hypothesize that biases in the dataset and in Stochastic Gradient Descent (SGD) are the reason why CNNs have consistent easy and hard examples. Furthermore, we show that large-scale classification datasets can be efficiently compressed using the easiness measure proposed in this work.
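The abstract does not spell out the authors' exact definition of easiness. As a minimal sketch under one common reading, easiness for an example can be taken as the fraction of early-training epochs in which the model classifies it correctly; the compressed dataset then keeps the hard examples and drops only a fraction of the easiest ones, consistent with the finding that removing too many easy examples hurts generalization. The function names (`easiness_scores`, `compress_by_easiness`), the `epochs=5` budget, the `drop_fraction` value, and the convention that the loader yields `(inputs, targets, indices)` are all illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn.functional as F


def easiness_scores(model, loader, optimizer, device, epochs=5):
    """Fraction of early-training epochs in which each example is
    classified correctly; higher means easier.

    Assumes `loader` yields (inputs, targets, indices) so that
    per-example statistics can be accumulated across epochs.
    """
    correct_counts = torch.zeros(len(loader.dataset))

    model.train()
    for _ in range(epochs):
        for inputs, targets, indices in loader:
            inputs, targets = inputs.to(device), targets.to(device)
            logits = model(inputs)
            loss = F.cross_entropy(logits, targets)

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

            # Easy examples are those consistently classified
            # correctly at the early stage of training.
            preds = logits.argmax(dim=1)
            correct_counts[indices] += (preds == targets).float().cpu()

    return correct_counts / epochs


# Hypothetical compression step: drop a fraction of the easiest
# examples while keeping all hard ones.
def compress_by_easiness(scores, drop_fraction=0.3):
    order = torch.argsort(scores, descending=True)  # easiest first
    return order[int(drop_fraction * len(order)):]  # indices to keep
```

In this sketch, ranking by early-epoch accuracy rather than a single snapshot smooths out batch-to-batch noise, which matters because the hypothesis concerns examples that are *consistently* classified correctly, not merely correct once.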
