Rethinking ImageNet Pre-training

11/21/2018
by Kaiming He, et al.

We report competitive results on object detection and instance segmentation on the COCO dataset using standard models trained from random initialization. The results are no worse than their ImageNet pre-training counterparts even when using the hyper-parameters of the baseline system (Mask R-CNN) that were optimized for fine-tuning pre-trained models, with the sole exception of increasing the number of training iterations so the randomly initialized models may converge. Training from random initialization is surprisingly robust; our results hold even when: (i) using only 10% of the training data, (ii) for deeper and wider models, and (iii) for multiple tasks and metrics. Experiments show that ImageNet pre-training speeds up convergence early in training, but does not necessarily provide regularization or improve final target task accuracy. To push the envelope, we demonstrate 50.9 AP on COCO object detection without using any external data---a result on par with the top COCO 2017 competition results that used ImageNet pre-training. These observations challenge the conventional wisdom of ImageNet pre-training for dependent tasks, and we expect these discoveries will encourage people to rethink the current de facto paradigm of 'pre-training and fine-tuning' in computer vision.
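The abstract's sole change to the baseline recipe is training longer from random initialization. A minimal sketch of such a setup as a detectron2-style config fragment follows; the specific keys, iteration counts, and the choice of GroupNorm are illustrative assumptions in the spirit of the paper, not its exact Detectron recipe:

```yaml
MODEL:
  # Empty WEIGHTS means no ImageNet checkpoint: the backbone starts
  # from random initialization.
  WEIGHTS: ""
  RESNETS:
    # Normalization that does not depend on large batches or pre-trained
    # statistics (the paper uses GroupNorm or synchronized BatchNorm).
    NORM: "GN"
SOLVER:
  IMS_PER_BATCH: 16
  BASE_LR: 0.02
  # A standard fine-tuning "1x" schedule is ~90k iterations; models trained
  # from scratch need substantially longer to converge (up to ~6x here).
  MAX_ITER: 540000
  STEPS: (420000, 500000)
```

The point of the sketch is that nothing architectural changes: only the initialization, the normalization layer, and the length of the schedule differ from the fine-tuning baseline.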
