What can we learn from misclassified ImageNet images?

01/20/2022
by Shixian Wen, et al.

Understanding the patterns of misclassified ImageNet images is particularly important, as it could guide us to design deep neural networks (DNNs) that generalize better. However, the richness of ImageNet makes it difficult for researchers to visually find useful patterns of misclassification. To help find these patterns, we propose the "Superclassing ImageNet dataset", a subset of ImageNet consisting of 10 superclasses, each containing 7-116 related subclasses (e.g., 52 bird types, 116 dog types). By training neural networks on this dataset, we found that: (i) misclassifications rarely occur across superclasses, but mainly among subclasses within a superclass; (ii) an ensemble of networks, each trained only on the subclasses of one superclass, performs better than the same network trained on all subclasses of all superclasses. Hence, we propose a two-stage Super-Sub framework and demonstrate that: (i) the framework improves overall classification performance by 3.3%, by first inferring a superclass with a generalist superclass-level network and then using a specialized network for final subclass-level classification; (ii) although the total parameter storage cost grows by a factor of N+1 for N superclasses compared to using a single network, finetuning, delta, and quantization-aware training techniques can reduce this to 0.2N+1. A further advantage of this efficient implementation is that GPU memory cost during inference is equivalent to using only one network, because each subclass-level network is instantiated by adding small parameter variations (deltas) to the superclass-level network; (iii) finally, our framework promises to be more scalable and generalizable than the common alternative of simply scaling up a vanilla network in size, since very large networks often suffer from overfitting and vanishing gradients.
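The two-stage inference and delta-based storage described above can be sketched roughly as follows. This is a minimal illustration under simplifying assumptions (linear classifiers standing in for full networks, a uniform number of subclasses per superclass); all names such as `classify`, `W_base`, and `deltas` are hypothetical and not the paper's actual API.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SUPER = 3   # number of superclasses (paper uses 10)
N_SUB = 4     # subclasses per superclass (simplified to be uniform)
DIM = 8       # feature dimension

# Stage-1 "generalist": a single superclass-level classifier.
W_super = rng.normal(size=(N_SUPER, DIM))

# Stage-2 "specialists" are not stored as N full networks. Instead, a
# shared base weight matrix plus one small delta per superclass is kept,
# which is how storage can shrink toward ~0.2N+1 and why GPU memory at
# inference stays roughly that of a single network.
W_base = rng.normal(size=(N_SUB, DIM))
deltas = [0.1 * rng.normal(size=(N_SUB, DIM)) for _ in range(N_SUPER)]

def classify(x):
    """Two-stage Super-Sub inference: superclass first, then subclass."""
    # Stage 1: the generalist picks the superclass.
    sup = int(np.argmax(W_super @ x))
    # Stage 2: reconstruct the specialist by adding its delta to the
    # shared base, then classify at the subclass level.
    W_spec = W_base + deltas[sup]
    sub = int(np.argmax(W_spec @ x))
    return sup, sub

sup, sub = classify(rng.normal(size=DIM))
```

In a real implementation the deltas would additionally be quantized (hence the "quantization-aware training" in the abstract), trading a small amount of specialist accuracy for further storage savings.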


