Re-labeling ImageNet: from Single to Multi-Labels, from Global to Localized Labels

01/13/2021
by   Sangdoo Yun, et al.

ImageNet has arguably been the most popular image classification benchmark, but it is also the one with a significant level of label noise. Recent studies have shown that many samples contain multiple classes, despite being assumed to be a single-label benchmark. They have thus proposed to turn ImageNet evaluation into a multi-label task, with exhaustive multi-label annotations per image. However, they have not fixed the training set, presumably because of a formidable annotation cost. We argue that the mismatch between single-label annotations and effectively multi-label images is equally, if not more, problematic in the training setup, where random crops are applied. With the single-label annotations, a random crop of an image may contain an entirely different object from the ground truth, introducing noisy or even incorrect supervision during training. We thus re-label the ImageNet training set with multi-labels. We address the annotation cost barrier by letting a strong image classifier, trained on an extra source of data, generate the multi-labels. We utilize the pixel-wise multi-label predictions before the final pooling layer, in order to exploit the additional location-specific supervision signals. Training on the re-labeled samples results in improved model performance across the board. ResNet-50 attains a top-1 classification accuracy of 78.9% on ImageNet with our localized multi-labels, which can be further boosted to 80.2% with the CutMix regularization. Training with localized multi-labels also outperforms the baselines on transfer learning to object detection and instance segmentation tasks, and on various robustness benchmarks. The re-labeled ImageNet training set, pre-trained weights, and the source code are available at https://github.com/naver-ai/relabel_imagenet.
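To make the label-pooling idea concrete, here is a minimal sketch of how a localized multi-label target might be derived for a random crop. It assumes a dense class-score map taken before the classifier's final global pooling layer, and simplifies the paper's region pooling to a plain average over the crop; the function and variable names are illustrative, not the authors' API.

```python
import numpy as np

def pooled_crop_label(label_map, box):
    """Derive a soft multi-label target for one random crop.

    label_map: (C, H, W) array of per-pixel class scores produced by a
               strong classifier before its final global pooling layer.
    box:       (y0, x0, y1, x1) crop coordinates in label-map pixels.
    Returns a (C,) soft label: softmax over the scores pooled inside the crop.
    """
    y0, x0, y1, x1 = box
    region = label_map[:, y0:y1, x0:x1]   # class scores inside the crop only
    pooled = region.mean(axis=(1, 2))     # average-pool over the crop region
    e = np.exp(pooled - pooled.max())     # numerically stable softmax
    return e / e.sum()

# Toy label map: class 0 dominates the left half, class 1 the right half.
label_map = np.zeros((2, 4, 8))
label_map[0, :, :4] = 5.0
label_map[1, :, 4:] = 5.0

left_target = pooled_crop_label(label_map, (0, 0, 4, 4))   # mostly class 0
right_target = pooled_crop_label(label_map, (0, 4, 4, 8))  # mostly class 1
```

Because the target is computed from the crop's own region of the score map, a crop that lands on a different object than the global ground-truth label still receives supervision matching its actual content, which is the mismatch the abstract describes.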


research
02/26/2019

Learning a Deep ConvNet for Multi-label Classification with Partial Labels

Deep ConvNets have shown great performance for single-label image classi...
research
11/23/2021

Multi-label Iterated Learning for Image Classification with Label Ambiguity

Transfer learning from large-scale pre-trained models has become essenti...
research
11/29/2022

LUMix: Improving Mixup by Better Modelling Label Uncertainty

Modern deep networks can be better generalized when trained with noisy s...
research
05/09/2022

When does dough become a bagel? Analyzing the remaining mistakes on ImageNet

Image classification accuracy on the ImageNet dataset has been a baromet...
research
03/24/2017

Improving Classification by Improving Labelling: Introducing Probabilistic Multi-Label Object Interaction Recognition

This work deviates from easy-to-define class boundaries for object inter...
research
06/12/2020

Are we done with ImageNet?

Yes, and no. We ask whether recent progress on the ImageNet classificati...
research
03/11/2022

Spatial Consistency Loss for Training Multi-Label Classifiers from Single-Label Annotations

As natural images usually contain multiple objects, multi-label image cl...
