The last decade of computer vision research has pursued academic benchmarks as a measure of progress. No benchmark has been as hotly pursued as ImageNet [15, 60]. Network architectures measured against this dataset have fueled much progress in computer vision research across a broad array of problems, including transfer to new datasets [17, 59], object detection, image segmentation [27, 7], and perceptual metrics of images. An implicit assumption behind this progress is that network architectures that perform better on ImageNet necessarily perform better on other vision tasks. Another assumption is that better network architectures learn better features that can be transferred across vision-based tasks. Although previous studies have provided some evidence for these hypotheses (e.g. [6, 63, 32, 30, 27]), they have never been systematically explored across network architectures.
In the present work, we seek to test these hypotheses by investigating the transferability of both ImageNet features and ImageNet classification architectures. Specifically, we conduct a large-scale study of transfer learning across 16 modern convolutional neural networks for image classification on 12 image classification datasets in 3 different experimental settings: as fixed feature extractors [17, 59], fine-tuned from ImageNet initialization [1, 24, 6], and trained from random initialization. Our main contributions are as follows:
-  Better ImageNet networks provide better penultimate layer features for transfer learning with linear classification (Section 4.1), and better performance when the entire network is fine-tuned (Section 4.2).
-  Regularizers that improve ImageNet performance are highly detrimental to the performance of transfer learning based on penultimate layer features.
-  Architectures transfer well across tasks even when weights do not. On two small fine-grained classification datasets, fine-tuning does not provide a substantial benefit over training from random initialization, but better ImageNet architectures nonetheless obtain higher accuracy.
2 Related work
ImageNet follows in a succession of progressively larger and more realistic benchmark datasets for computer vision. Each successive dataset was designed to address perceived issues with the size and content of previous datasets. Torralba and Efros showed that many early datasets were heavily biased, with classifiers trained to recognize or classify objects on those datasets possessing almost no ability to generalize to images from other datasets.
Early works using convolutional neural networks (CNNs) for transfer learning extracted fixed features from ImageNet-trained networks and used these features to train SVMs and logistic regression classifiers for new tasks [17, 59, 6]. These works found that these features could outperform hand-engineered features even for tasks very distinct from ImageNet classification [17, 59]. Several such studies have compared the performance of AlexNet-like CNNs of varying levels of computational complexity in a transfer learning setting with no fine-tuning. Chatfield et al. found that, out of three networks, the two more computationally expensive networks performed better on PASCAL VOC. Similar work concluded that deeper networks produce higher accuracy across many transfer tasks, but wider networks produce lower accuracy.
A number of studies have compared the accuracy of classifiers trained on fixed image features vs. fine-tuning the image representations on a new dataset [1, 6, 24]. Fine-tuning typically achieves higher accuracy, especially for larger datasets or datasets with a larger domain mismatch from the training set [78, 2, 44, 33, 8]. In object detection, ImageNet-pretrained networks are used as backbone models for Faster R-CNN and R-FCN detection systems. Classifiers with higher ImageNet accuracy achieve higher overall object detection accuracy, although variability across network architectures is small compared to variability from other object detection architecture choices. A similar pattern appears in image segmentation models, although it has not been as systematically explored.
Several authors have investigated how properties of the original training dataset affect transfer accuracy. Work examining the performance of fixed image features drawn from networks trained on subsets of ImageNet has reached conflicting conclusions regarding the importance of the number of classes vs. number of images per class [33, 2]. Yosinski et al. showed that the first layer of AlexNet can be frozen when transferring between natural and manmade subsets of ImageNet without performance impairment, but freezing later layers produces a substantial drop in accuracy. Other work has investigated transfer from extremely large image datasets to ImageNet, demonstrating that transfer learning can be useful even when the target dataset is large [67, 49]. Finally, a recent work devised a strategy to transfer when labeled data from many different domains is available.
| Dataset | Classes | Size (train/test) | Accuracy measure |
|---|---|---|---|
| Stanford Cars | 196 | 8,144/8,041 | top-1 |
| FGVC Aircraft | 100 | 6,667/3,333 | mean per-class |
| PASCAL VOC 2007 Cls. | 20 | 5,011/4,952 | 11-point mAP |
| Describable Textures (DTD) | 47 | 3,760/1,880 | top-1 |
| Oxford-IIIT Pets | 37 | 3,680/3,369 | mean per-class |
| Caltech-101 | 102 | 3,060/6,084 | mean per-class |
| Oxford 102 Flowers | 102 | 2,040/6,149 | mean per-class |
3 Statistical methods
Much of the analysis in this work requires comparing accuracies across datasets of differing difficulty. When fitting linear models to accuracy values across multiple datasets, we consider effects of model and dataset to be additive. In this context, using untransformed accuracy as a dependent variable is problematic: The meaning of a 1% additive increase in accuracy is different if it is relative to a base accuracy of 50% vs. 99%. Thus, we consider the log odds, i.e., the accuracy after the logit transformation. The logit transformation is the most commonly used transformation for analysis of proportion data, and an additive change in logit-transformed accuracy has a simple interpretation as a multiplicative change in the odds of correct classification:

logit(p) = log(p / (1 − p))
We plot all accuracy numbers on logit-scaled axes.
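As a concrete sketch (not part of the original text), the logit transform and its multiplicative-odds interpretation can be written in a few lines:

```python
import math

def logit(p):
    """Log-odds of an accuracy p in (0, 1)."""
    return math.log(p / (1.0 - p))

def inv_logit(x):
    """Map log-odds back to an accuracy in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def odds(p):
    """Odds of correct classification for accuracy p."""
    return p / (1.0 - p)

# An additive change of +1 on the logit scale multiplies the odds of
# correct classification by e, regardless of the base accuracy.
```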
We computed error bars for model accuracy averaged across datasets, using the procedure from Morey to remove variance due to inherent differences in dataset difficulty. Given the logit-transformed accuracy of each model on each dataset, we compute adjusted accuracies by subtracting each dataset's mean accuracy across models and adding back the grand mean across all models and datasets. For each model, we take the mean and standard error of the adjusted accuracy across datasets, and multiply the latter by a correction factor.
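A minimal numpy sketch of this error-bar procedure follows; the exact form of the correction factor used here, sqrt(M/(M − 1)) for M models (conditions), is our reading of Morey (2008), not a quotation from this paper:

```python
import numpy as np

def model_error_bars(acc):
    """acc: (n_models, n_datasets) array of logit-transformed accuracies.
    Removes per-dataset difficulty by subtracting each dataset's mean across
    models and adding back the grand mean, then returns per-model means and
    Morey-corrected standard errors across datasets."""
    acc = np.asarray(acc, dtype=float)
    n_models, n_datasets = acc.shape
    # Cousineau normalization: remove dataset difficulty, keep the grand mean.
    adjusted = acc - acc.mean(axis=0, keepdims=True) + acc.mean()
    means = adjusted.mean(axis=1)
    sem = adjusted.std(axis=1, ddof=1) / np.sqrt(n_datasets)
    correction = np.sqrt(n_models / (n_models - 1.0))  # Morey (2008) correction
    return means, sem * correction
```

On purely additive data (model effect plus dataset effect), the adjusted standard errors collapse to zero, which is exactly the variance-removal the procedure is designed to achieve.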
When examining the strength of the correlation between ImageNet accuracy and accuracy on transfer datasets, we report the Pearson correlation coefficient r between the logit-transformed ImageNet accuracy and the logit-transformed transfer accuracy averaged across datasets. We also report the rank correlation (Spearman's ρ) in Appendix A.1.2.
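The two correlation measures above can be computed with numpy alone; this is an illustrative sketch (the rank computation below does not handle ties, which is adequate for distinct accuracy values):

```python
import numpy as np

def logit(p):
    p = np.asarray(p, dtype=float)
    return np.log(p / (1.0 - p))

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1-D arrays."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson r of the ranks (no tie handling)."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson_r(rank(x), rank(y))

def transfer_correlation(imagenet_acc, transfer_acc):
    """imagenet_acc: (n_models,) top-1 accuracies in (0, 1).
    transfer_acc: (n_models, n_datasets) transfer accuracies in (0, 1).
    Returns (Pearson r on the logit scale, Spearman rho)."""
    x = logit(imagenet_acc)
    y = logit(transfer_acc).mean(axis=1)  # mean logit transfer accuracy
    return pearson_r(x, y), spearman_rho(x, y)
```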
We tested for significant differences between pairs of networks on the same dataset using a permutation test or equivalent binomial test of the null hypothesis that the predictions of the two networks are equally likely to be correct; see Appendix A.1.1 for further details. We tested for significant differences between networks in average performance across datasets using a t-test.
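One way to realize the "equivalent binomial test" described above is a McNemar-style exact test on the discordant examples; the following is our sketch of that idea, not code from the paper:

```python
import math

def exact_binomial_p(k, n):
    """Two-sided exact binomial test of k successes in n trials at p = 0.5:
    sum the probabilities of all outcomes no more likely than the observed one."""
    probs = [math.comb(n, i) * 0.5 ** n for i in range(n + 1)]
    return min(1.0, sum(p for p in probs if p <= probs[k] + 1e-12))

def compare_paired_predictions(correct_a, correct_b):
    """correct_a, correct_b: per-example correctness (booleans) of two networks
    on the same test set. Tests the null hypothesis that both networks are
    equally likely to be correct, using only the discordant examples."""
    n_a_only = sum(a and not b for a, b in zip(correct_a, correct_b))
    n_b_only = sum(b and not a for a, b in zip(correct_a, correct_b))
    n = n_a_only + n_b_only
    return 1.0 if n == 0 else exact_binomial_p(n_a_only, n)
```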
4 Results

We examined 16 modern networks ranging in ImageNet (ILSVRC 2012 validation) top-1 accuracy from 71.6% to 80.8%. These networks encompassed widely used Inception architectures [69, 34, 70, 68]; ResNets [29, 26, 25]; DenseNets; MobileNets [30, 61]; and NASNets. For fair comparison, we retrained all models with scale parameters for batch normalization layers and without label smoothing, dropout, or auxiliary heads, rather than relying on pretrained models. Appendix A.3 provides training hyperparameters along with further details of each network, including the ImageNet top-1 accuracy, parameter count, dimension of the penultimate layer, input image size, and performance of retrained models. For all experiments, we rescaled images to the same image size as was used for ImageNet training.
We evaluated models on 12 image classification datasets ranging in training set size from 2,040 to 75,750 images (20 to 5,000 images per class; Table 1). These datasets covered a wide range of image classification tasks, including superordinate-level object classification (CIFAR-10, CIFAR-100, PASCAL VOC 2007, Caltech-101); fine-grained object classification (Food-101, Birdsnap, Stanford Cars, FGVC Aircraft, Oxford-IIIT Pets); texture classification (DTD); and scene classification (SUN397).
Figure 2 presents correlations between top-1 accuracy on ImageNet and the performance of the same model architecture on new image tasks. We measure transfer learning performance in three settings: (1) training a logistic regression classifier on the fixed feature representation from the penultimate layer of the ImageNet-pretrained network, (2) fine-tuning the ImageNet-pretrained network, and (3) training the same CNN architecture from scratch on the new image task.
4.1 ImageNet accuracy predicts performance of logistic regression on fixed features, but regularization settings matter
We first examined the performance of different networks when used as fixed feature extractors, by training an L2-regularized logistic regression classifier on penultimate layer activations using L-BFGS, without data augmentation. (We also repeated these experiments with support vector machine classifiers in place of logistic regression, and with data augmentation for logistic regression; see Appendix G. Findings did not change.) As shown in Figure 2 (top), ImageNet top-1 accuracy was highly correlated with accuracy on transfer tasks. Inception-ResNet v2 and NASNet Large, the top two models in terms of ImageNet accuracy, were statistically tied for first place.
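The fixed-feature setup can be sketched with scikit-learn; the features and labels below are synthetic stand-ins (in the real experiment they would be penultimate-layer activations of a frozen ImageNet network), and the regularization strength C is an illustrative placeholder for a value tuned on validation data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for penultimate-layer activations and task labels.
rng = np.random.default_rng(0)
features = rng.normal(size=(400, 32))
labels = (features[:, 0] + 0.1 * rng.normal(size=400) > 0).astype(int)

clf = LogisticRegression(
    penalty="l2",    # L2 regularization on the classifier weights
    C=1.0,           # inverse regularization strength (hyperparameter to tune)
    solver="lbfgs",  # L-BFGS, matching the optimizer named above
    max_iter=1000,
)
clf.fit(features, labels)
train_acc = clf.score(features, labels)
```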
Critically, results in Figure 2 were obtained with models that were all trained on ImageNet with the same training settings. In experiments conducted with publicly available checkpoints, we were surprised to find that ResNets and DenseNets consistently achieved higher accuracy than other models, and the correlation between ImageNet accuracy and transfer accuracy with fixed features was low and not statistically significant (Appendix B). Further investigation revealed that the poor correlation arose from differences in regularization used for these public checkpoints.
Figure 3 shows the transfer learning performance of Inception models with different training settings. We identify four choices made in the Inception training procedure, and subsequently adopted by several other models, that are detrimental to transfer accuracy: (1) the absence of a scale parameter for batch normalization layers; the use of (2) label smoothing and (3) dropout; and (4) the presence of an auxiliary classifier head. These settings had only a small impact on the overall ImageNet top-1 accuracy of each model (Figure 3, inset). However, in terms of average transfer accuracy, the difference between the default and optimal training settings was approximately equal to the difference between the worst and best ImageNet models trained with optimal settings. This difference was visible not only in transfer accuracy, but also in t-SNE embeddings of the features (Figure 4). Differences in transfer accuracy between settings were apparent earlier in training than differences in ImageNet accuracy, and were consistent across datasets (Appendix C.1).
Label smoothing and dropout are regularizers in the traditional sense: They are intended to improve generalization accuracy at the expense of training accuracy. Although auxiliary classifier heads were initially proposed to alleviate issues related to vanishing gradients [41, 69], Szegedy et al. instead suggest that they also act as regularizers. The improvement in transfer performance when incorporating batch normalization scale parameters may relate to changes in effective learning rates [73, 82].
4.2 ImageNet accuracy predicts fine-tuning performance
We also examined performance when fine-tuning ImageNet networks (Figure 2, middle). We initialized each network from the ImageNet weights and fine-tuned for 20,000 steps with Nesterov momentum and a cosine decay learning rate schedule at a batch size of 256. We performed grid search to select the optimal learning rate and weight decay based on a validation set (for details, see Appendix A.5). Again, we found that ImageNet top-1 accuracy was highly correlated with transfer accuracy.
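For concreteness, a cosine decay schedule over a fixed fine-tuning budget can be written as follows; the base learning rate here is a placeholder for the value selected by grid search:

```python
import math

def cosine_decay_lr(step, total_steps=20000, base_lr=0.1):
    """Cosine-decay learning rate over a fixed number of fine-tuning steps.
    base_lr is illustrative; in the setup above it is tuned on validation data."""
    progress = min(step, total_steps) / total_steps
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))
```

The schedule starts at base_lr, passes through base_lr/2 at the midpoint, and decays smoothly to zero at the final step.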
Compared with the logistic regression setting, regularization and training settings had smaller effects upon the performance of fine-tuned models. Figure 5 shows average transfer accuracy for Inception v4 and Inception-ResNet v2 models with different regularization settings. As in the logistic regression setting, introducing a batch normalization scale parameter and disabling label smoothing improved performance. In contrast to the logistic regression setting, dropout and the auxiliary head sometimes improved performance, but only if used during both pretraining and fine-tuning. We discuss these results in more detail in Appendix C.2.
Overall, fine-tuning yielded better performance than classifiers trained on fixed ImageNet features, but the gain differed by dataset. Fine-tuning improved performance over logistic regression in 179 out of 192 dataset and model combinations (Figure 6; see also Appendix E). When averaged across the tested architectures, fine-tuning yielded significantly better results on all datasets except Caltech-101 (Wilcoxon signed-rank test; Figure 6). The improvement was generally larger for larger datasets, although fine-tuning also provided substantial gains on the smallest dataset, 102 Flowers, with 102 classes and 2,040 training examples.
4.3 ImageNet accuracy predicts performance of networks trained from random initialization
A confound in the previous results is that it is unclear whether the transfer performance of better ImageNet models is due to the weights learned during ImageNet training or to the architecture itself. To remove this confound, we next examined architectures trained from random initialization, using a similar training setup as for fine-tuning (see Appendix A.6). In this setting, the correlation between ImageNet top-1 accuracy and accuracy on the new tasks was more variable than in the transfer learning settings, but there was a tendency toward higher performance for models that achieved higher accuracy on ImageNet (Figure 2, bottom).
Examining these results further, we found that a single correlation averages over a large amount of variability. For the 7 datasets with <10,000 examples, the correlation was low and did not reach statistical significance (see also Appendix D). However, for the larger datasets, the correlation between ImageNet top-1 accuracy and transfer learning performance was markedly stronger. Inception v3 and v4 were among the top-performing models across all dataset sizes.
4.4 Fine-tuning with better models is comparable to specialized methods for transfer learning
Given the strong correlation between ImageNet accuracy and transfer accuracy, we next compared simple transfer learning with better ImageNet models against baselines from the literature. When evaluated at the same image sizes as the baseline methods, we achieve state-of-the-art performance on 7 of the 12 datasets (Figure 6; see full results in Appendix F). Our results suggest that the ImageNet performance of the pretrained model is a critical factor in transfer performance.
Several papers have proposed methods to make better use of CNN features and thus improve the efficacy of transfer learning [44, 10, 43, 23, 77, 65, 13, 42, 57]. On the datasets we examine, we outperform all such methods simply by fine-tuning state-of-the-art CNNs (Appendix F). Moreover, in some cases a better CNN can make up for dataset deficiencies: By fine-tuning ImageNet-pretrained Inception v4, we obtain state-of-the-art performance on the SUN397 scene dataset, outperforming features extracted from VGG pretrained on the Places dataset, which more closely matches the domain of SUN397.
4.5 ImageNet pretraining does not necessarily improve accuracy on fine-grained tasks
Fine-tuning was more accurate than training from random initialization for 189 out of 192 dataset/model combinations, but on Stanford Cars and FGVC Aircraft, the improvement was unexpectedly small (Figures 6 and 7). On both datasets, Inception v4 was the best model we tested. When trained at its default image size of 299×299, it achieved 92.7% on Stanford Cars when trained from scratch vs. 93.3% when fine-tuned, and 88.8% on FGVC Aircraft when trained from scratch vs. 89.0% when fine-tuned.
ImageNet pretraining thus appears to have only marginal accuracy benefits for fine-grained classification tasks where labels are not well-represented in ImageNet. At 100+ classes and <10,000 examples, Stanford Cars and FGVC Aircraft are much smaller than most datasets used to train CNNs. In fact, the ImageNet training set contains more car images than Stanford Cars (12,455 vs. 8,144). However, ImageNet contains only 10 high-level car classes (e.g., sports car), whereas Stanford Cars contains 196 car classes by make, model, and year. Four other datasets (Oxford 102 Flowers, Oxford-IIIT Pets, Birdsnap, and Food-101) require similarly fine-grained classification, but the classes contained in the latter three datasets are much better-represented in ImageNet. Most of the cat and dog breeds present in Oxford-IIIT Pets correspond directly to ImageNet classes, and ImageNet contains 59 classes of birds and around 45 classes of fruits, vegetables, and prepared dishes.
4.6 ImageNet pretraining accelerates convergence
Given that fine-tuning and training from random initialization achieved similar performance on Stanford Cars and FGVC Aircraft, we next asked whether fine-tuning still posed an advantage in terms of training time. In Figure 8, we examine performance of Inception v4 when fine-tuning or training from random initialization for different numbers of steps. Even when fine-tuning and training from scratch achieved similar final accuracy, we could fine-tune the model to this level of accuracy in an order of magnitude fewer steps. To quantify this acceleration, we computed the number of epochs and steps required to reach 90% of the maximum odds of correct classification achieved at any number of steps, and computed the geometric mean across datasets. Fine-tuning reached this threshold level of accuracy in an average of 26 epochs/1,151 steps (inter-quartile ranges 267-4,882 steps, 12-58 epochs), whereas training from scratch required 444 epochs/19,531 steps (inter-quartile ranges 9,765-39,062 steps, 208-873 epochs), corresponding to a 17-fold speedup on average.
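The convergence measurement described above (steps to reach 90% of the maximum odds, averaged geometrically across datasets) can be sketched as follows; the data in the test are synthetic:

```python
import numpy as np

def steps_to_threshold(accuracies, steps, fraction=0.9):
    """Return the first step count at which the odds of correct classification
    reach `fraction` of the maximum odds achieved at any measured step.
    accuracies and steps are aligned sequences over increasing step counts."""
    acc = np.asarray(accuracies, dtype=float)
    odds = acc / (1.0 - acc)
    idx = int(np.argmax(odds >= fraction * odds.max()))  # first index meeting it
    return steps[idx]

def geometric_mean(values):
    """Geometric mean, used here to average step counts across datasets."""
    v = np.asarray(values, dtype=float)
    return float(np.exp(np.log(v).mean()))
```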
4.7 Accuracy benefits of ImageNet pretraining fade quickly with dataset size
Although all datasets benefit substantially from ImageNet pretraining when few examples are available for transfer, for many datasets, these benefits fade quickly when more examples are available. In Figure 9, we show the behavior of logistic regression, fine-tuning, and training from random initialization in the regime of limited data, i.e., for dataset subsets consisting of different numbers of examples per class. When data is sparse (47-800 total examples), logistic regression is a strong baseline, achieving accuracy comparable to or better than fine-tuning. At larger dataset sizes, fine-tuning achieves higher performance than logistic regression, and, for fine-grained classification datasets, the performance of training from random initialization begins to approach results of pre-trained models. On FGVC Aircraft, training from random initialization achieved parity with fine-tuning at only 1600 total examples (16 examples per class).
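The limited-data experiments above subsample each dataset to a fixed number of examples per class; one simple way to do this, sketched here with hypothetical names, is stratified subsampling of indices:

```python
import random
from collections import defaultdict

def subsample_per_class(labels, k, seed=0):
    """Return indices of a subset with at most k examples per class,
    for studying transfer learning in the limited-data regime."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    keep = []
    for idxs in by_class.values():
        rng.shuffle(idxs)      # random draw within each class
        keep.extend(idxs[:k])  # at most k examples per class
    return sorted(keep)
```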
5 Discussion

Has the computer vision community overfit to ImageNet as a dataset? In a broad sense, our results suggest the answer is no: We find that there is a strong correlation between ImageNet top-1 accuracy and transfer accuracy, suggesting that better ImageNet architectures are capable of learning better, transferable representations. But we also find that a number of widely used regularizers that improve ImageNet performance do not produce better representations. These regularizers are harmful to the penultimate layer feature space, and have mixed effects when networks are fine-tuned.
More generally, our results reveal clear limits to transferring features, even among natural image datasets. ImageNet pretraining accelerates convergence and improves performance on many datasets, but its value diminishes with greater training time, more training data, and greater divergence from ImageNet labels. For some fine-grained classification datasets, a few thousand labeled examples, or a few dozen per class, are all that are needed to make training from scratch perform competitively with fine-tuning. Surprisingly, however, the value of architecture persists.
The last decade of computer vision research has demonstrated the superiority of image features learned from data over generic, hand-crafted features. Before the rise of convolutional neural networks, most approaches to image understanding relied on hand-engineered feature descriptors [47, 14, 3]. Krizhevsky et al. showed that, given the training data provided by ImageNet, features learned by convolutional neural networks could substantially outperform these hand-engineered features. Soon after, it became clear that intermediate representations learned from ImageNet also provided substantial gains over hand-engineered features when transferred to other tasks [17, 59].
Is the general enterprise of learning widely-useful features doomed to suffer the same fate as feature engineering? Given differences between datasets, it is not entirely surprising that features learned on one dataset benefit from some amount of adaptation when applied to another. However, given the history of attempts to build general natural-image feature descriptors, it is surprising that features learned from a large natural-image dataset cannot always be profitably adapted to much smaller natural-image datasets. ImageNet weights provide a starting point for features on a new classification task, but perhaps what is needed is a way to learn how to adapt features. This problem is closely related to few-shot learning [40, 74, 58, 64, 21, 51], but these methods are typically evaluated with training and test classes from the same distribution. It remains to be seen whether methods can be developed or repurposed to adapt visual representations learned from ImageNet to provide larger benefits across natural image tasks.
Acknowledgements

We thank George Dahl, Boyang Deng, Sara Hooker, Pieter-jan Kindermans, Rafael Müller, Jiquan Ngiam, Ruoming Pang, Daiyi Peng, Kevin Swersky, Vishy Tirumalashetty, Vijay Vasudevan, and Emily Xue for comments on the experiments and manuscript, and Aliza Elkin and members of the Google Brain team for support and ideas.
References

-  P. Agrawal, R. B. Girshick, and J. Malik. Analyzing the performance of multilayer neural networks for object recognition. In European Conference on Computer Vision, 2014.
-  H. Azizpour, A. S. Razavian, J. Sullivan, A. Maki, and S. Carlsson. Factors of transferability for a generic convnet representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(9):1790–1802, Sept 2016.
-  H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool. Speeded-up robust features (surf). Computer Vision and Image Understanding, 110(3):346–359, 2008.
-  T. Berg, J. Liu, S. W. Lee, M. L. Alexander, D. W. Jacobs, and P. N. Belhumeur. Birdsnap: Large-scale fine-grained visual categorization of birds. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2019–2026. IEEE, 2014.
-  L. Bossard, M. Guillaumin, and L. Van Gool. Food-101 – mining discriminative components with random forests. In European Conference on Computer Vision, pages 446–461. Springer, 2014.
-  K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman. Return of the devil in the details: delving deep into convolutional nets. In British Machine Vision Conference, 2014.
-  L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(4):834–848, 2018.
-  B. Chu, V. Madhavan, O. Beijbom, J. Hoffman, and T. Darrell. Best practices for fine-tuning visual classifiers to new domains. In G. Hua and H. Jégou, editors, Computer Vision – ECCV 2016 Workshops, pages 435–442, Cham, 2016. Springer International Publishing.
-  M. Cimpoi, S. Maji, I. Kokkinos, S. Mohamed, and A. Vedaldi. Describing textures in the wild. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3606–3613. IEEE, 2014.
-  M. Cimpoi, S. Maji, and A. Vedaldi. Deep filter banks for texture recognition and segmentation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3828–3836. IEEE, 2015.
-  E. D. Cubuk, B. Zoph, D. Mane, V. Vasudevan, and Q. V. Le. Autoaugment: Learning augmentation policies from data. arXiv preprint arXiv:1805.09501, 2018.
-  Y. Cui, Y. Song, C. Sun, A. Howard, and S. Belongie. Large scale fine-grained categorization and domain-specific transfer learning. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
-  Y. Cui, F. Zhou, J. Wang, X. Liu, Y. Lin, and S. Belongie. Kernel pooling for convolutional neural networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
-  N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 1, pages 886–893. IEEE, 2005.
-  J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009.
-  B. Diedenhofen and J. Musch. cocor: A comprehensive solution for the statistical comparison of correlations. PLOS ONE, 10(4):1–12, 04 2015.
-  J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. In International Conference on Machine Learning, pages 647–655, 2014.
-  M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The pascal visual object classes (voc) challenge. International Journal of Computer Vision, 88(2):303–338, 2010.
-  R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. Liblinear: A library for large linear classification. Journal of Machine Learning Research, 9(Aug):1871–1874, 2008.
-  L. Fei-Fei, R. Fergus, and P. Perona. Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshop on Generative-Model Based Vision, 2004.
-  C. Finn, P. Abbeel, and S. Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning, pages 1126–1135, 2017.
-  B. H. Frank. Google brain chief: Deep learning takes at least 100,000 examples. VentureBeat, 2017.
-  Y. Gao, O. Beijbom, N. Zhang, and T. Darrell. Compact bilinear pooling. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 317–326, 2016.
-  R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 580–587, 2014.
-  P. Goyal, P. Dollár, R. Girshick, P. Noordhuis, L. Wesolowski, A. Kyrola, A. Tulloch, Y. Jia, and K. He. Accurate, large minibatch sgd: training imagenet in 1 hour. arXiv preprint arXiv:1706.02677, 2017.
-  S. Gross and M. Wilber. Training and investigating residual nets. The Torch Blog, http://torch.ch/blog/2016/02/04/resnets.html, 2016.
-  K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask r-cnn. In IEEE International Conference on Computer Vision (ICCV), pages 2980–2988. IEEE, 2017.
-  K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In European Conference on Computer Vision, pages 346–361. Springer, 2014.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, 2016.
-  A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
-  G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger. Densely connected convolutional networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2261–2269, 2017.
-  J. Huang, V. Rathod, C. Sun, M. Zhu, A. Korattikara, A. Fathi, I. Fischer, Z. Wojna, Y. Song, S. Guadarrama, and K. Murphy. Speed/accuracy trade-offs for modern convolutional object detectors. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
-  M. Huh, P. Agrawal, and A. A. Efros. What makes imagenet good for transfer learning? CoRR, abs/1608.08614, 2016.
-  S. Ioffe and C. Szegedy. Batch normalization: accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pages 448–456, 2015.
-  J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, pages 694–711. Springer, 2016.
-  J. Krause, J. Deng, M. Stark, and L. Fei-Fei. Collecting a large-scale dataset of fine-grained cars. In Second Workshop on Fine-Grained Visual Categorization, 2013.
-  J. Krause, B. Sapp, A. Howard, H. Zhou, A. Toshev, T. Duerig, J. Philbin, and L. Fei-Fei. The unreasonable effectiveness of noisy data for fine-grained recognition. In B. Leibe, J. Matas, N. Sebe, and M. Welling, editors, Computer Vision – ECCV 2016, pages 301–320, Cham, 2016. Springer International Publishing.
-  A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
-  B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338, 2015.
-  C.-Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu. Deeply-supervised nets. In Artificial Intelligence and Statistics, pages 562–570, 2015.
-  Z. Li, Y. Yang, X. Liu, F. Zhou, S. Wen, and W. Xu. Dynamic computational time for visual attention. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1199–1209, 2017.
-  T.-Y. Lin and S. Maji. Visualizing and understanding deep texture representations. In IEEE International Conference on Computer Vision (ICCV), pages 2791–2799, 2016.
-  T.-Y. Lin, A. RoyChowdhury, and S. Maji. Bilinear cnn models for fine-grained visual recognition. In IEEE International Conference on Computer Vision (ICCV), pages 1449–1457, 2015.
-  D. C. Liu and J. Nocedal. On the limited memory bfgs method for large scale optimization. Mathematical programming, 45(1-3):503–528, 1989.
-  I. Loshchilov and F. Hutter. Fixing weight decay regularization in adam. CoRR, abs/1711.05101, 2017.
-  D. G. Lowe. Object recognition from local scale-invariant features. In IEEE International Conference on Computer Vision, volume 2, pages 1150–1157. IEEE, 1999.
-  L. v. d. Maaten and G. Hinton. Visualizing data using t-sne. Journal of Machine Learning Research, 9(Nov):2579–2605, 2008.
-  D. Mahajan, R. Girshick, V. Ramanathan, K. He, M. Paluri, Y. Li, A. Bharambe, and L. van der Maaten. Exploring the limits of weakly supervised pretraining. arXiv preprint arXiv:1805.00932, 2018.
-  S. Maji, J. Kannala, E. Rahtu, M. Blaschko, and A. Vedaldi. Fine-grained visual classification of aircraft. Technical report, 2013.
-  N. Mishra, M. Rohaninejad, X. Chen, and P. Abbeel. A simple neural attentive meta-learner. In International Conference on Learning Representations, 2018.
-  R. D. Morey. Confidence intervals from normalized data: A correction to cousineau (2005). Tutorials in Quantitative Methods for Psychology, 4(2):61–64, 2008.
-  M.-E. Nilsback and A. Zisserman. Automated flower classification over a large number of classes. In Computer Vision, Graphics & Image Processing, 2008. ICVGIP’08. Sixth Indian Conference on, pages 722–729. IEEE, 2008.
-  A. Odena, A. Oliver, C. Raffel, E. D. Cubuk, and I. Goodfellow. Realistic evaluation of semi-supervised learning algorithms. In ICLR Workshops, 2018.
-  O. M. Parkhi, A. Vedaldi, A. Zisserman, and C. Jawahar. Cats and dogs. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3498–3505. IEEE, 2012.
-  F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, et al. Scikit-learn: Machine learning in python. Journal of Machine Learning Research, 12(Oct):2825–2830, 2011.
-  Y. Peng, X. He, and J. Zhao. Object-part attention model for fine-grained image classification. IEEE Transactions on Image Processing, 27(3):1487–1500, 2018.
-  S. Ravi and H. Larochelle. Optimization as a model for few-shot learning. In International Conference on Machine Learning, 2016.
-  A. S. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson. Cnn features off-the-shelf: an astounding baseline for recognition. In IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 512–519. IEEE, 2014.
-  O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, Dec 2015.
-  M. Sandler, A. G. Howard, M. Zhu, A. Zhmoginov, and L. Chen. Inverted residuals and linear bottlenecks: Mobile networks for classification, detection and segmentation. CoRR, abs/1801.04381, 2018.
-  N. C. Silver, J. B. Hittner, and K. May. Testing dependent correlations with nonoverlapping variables: A monte carlo simulation. Journal of Experimental Education, 73(1):53–69, 2004.
-  K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. International Conference on Learning Representations, 2015.
-  J. Snell, K. Swersky, and R. Zemel. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, pages 4080–4090, 2017.
-  Y. Song, F. Zhang, Q. Li, H. Huang, L. J. O’Donnell, and W. Cai. Locally-transferred fisher vectors for texture classification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4912–4920, 2017.
-  N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.
-  C. Sun, A. Shrivastava, S. Singh, and A. Gupta. Revisiting unreasonable effectiveness of data in deep learning era. In Computer Vision (ICCV), 2017 IEEE International Conference on, pages 843–852. IEEE, 2017.
-  C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi. Inception-v4, inception-resnet and the impact of residual connections on learning. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17), 2017.
-  C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.
-  C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2818–2826, 2016.
-  The TensorFlow Authors. inception_preprocessing.py. In TensorFlow Models. https://github.com/tensorflow/models/blob/0344c5503ee55e24f0de7f37336a6e08f10976fd/research/slim/preprocessing/inception_preprocessing.py, 2016.
-  A. Torralba and A. A. Efros. Unbiased look at dataset bias. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1521–1528. IEEE, 2011.
-  T. van Laarhoven. L2 regularization versus batch and weight normalization. CoRR, abs/1706.05350, 2017.
-  O. Vinyals, C. Blundell, T. Lillicrap, D. Wierstra, et al. Matching networks for one shot learning. In Advances in Neural Information Processing Systems, pages 3630–3638, 2016.
-  X.-S. Wei, C.-W. Xie, J. Wu, and C. Shen. Mask-cnn: Localizing parts and selecting descriptors for fine-grained bird species categorization. Pattern Recognition, 76:704 – 714, 2018.
-  J. Xiao, J. Hays, K. A. Ehinger, A. Oliva, and A. Torralba. Sun database: Large-scale scene recognition from abbey to zoo. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3485–3492. IEEE, 2010.
-  H. Yao, S. Zhang, Y. Zhang, J. Li, and Q. Tian. Coarse-to-fine description for fine-grained visual categorization. IEEE Transactions on Image Processing, 25(10):4858–4872, 2016.
-  J. Yosinski, J. Clune, Y. Bengio, and H. Lipson. How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems, pages 3320–3328, 2014.
-  F. Yu, D. Wang, and T. Darrell. Deep layer aggregation. CoRR, abs/1707.06484, 2017.
-  A. R. Zamir, A. Sax, W. Shen, L. Guibas, J. Malik, and S. Savarese. Taskonomy: Disentangling task transfer learning. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3712–3722, 2018.
-  M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pages 818–833. Springer, 2014.
-  G. Zhang, C. Wang, B. Xu, and R. Grosse. Three Mechanisms of Weight Decay Regularization. CoRR, Oct. 2018.
-  B. Zhou, A. Lapedriza, A. Khosla, A. Oliva, and A. Torralba. Places: A 10 million image database for scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
-  B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le. Learning transferable architectures for scalable image recognition. arXiv preprint arXiv:1707.07012, 2017.
Appendix A Supplementary experimental procedures
a.1 Supplementary statistical methods
a.1.1 Comparison of two models on the same dataset
To test for superiority of one model over another on a given dataset, we constructed permutations where, for each example, we randomly exchanged the predictions of the two networks. For each permutation, we computed the difference in accuracy between the two networks. (For VOC2007, we considered the accuracy of predictions across labels.) We computed a p-value as the proportion of permutations where the difference is at least as extreme as the observed difference in accuracy. For top-1 accuracy, this procedure is equivalent to a binomial test sometimes called the "exact McNemar test," and a p-value can be computed exactly. For mean per-class accuracy, we approximated a p-value based on 10,000 permutations. These tests assess whether one trained model performs better than another on data drawn from the test set distribution. However, they are tests between trained models, rather than tests between architectures, since we do not measure variability arising from training networks from different random initializations or from different orderings of the training data.
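For top-1 accuracy, the exact test described above reduces to a binomial test on the examples where the two models disagree. The following is a minimal sketch of that computation (not the authors' code):

```python
import math

def mcnemar_exact(correct_a, correct_b):
    """Exact McNemar test on two models' per-example correctness.

    correct_a, correct_b: boolean sequences, True where each model's
    prediction is correct. Returns a two-sided p-value from an exact
    binomial test on the discordant pairs with p = 0.5.
    """
    n01 = sum(a and not b for a, b in zip(correct_a, correct_b))  # only A correct
    n10 = sum(b and not a for a, b in zip(correct_a, correct_b))  # only B correct
    n = n01 + n10
    if n == 0:
        return 1.0  # the models agree everywhere; no evidence of a difference
    k = min(n01, n10)
    # Probability of a split at least this lopsided under H0 (p = 0.5),
    # doubled for a two-sided test and clipped to 1.
    tail = sum(math.comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)
```

For example, if one model is correct on 5 discordant examples and the other on none, the p-value is 2 · (1/2⁵) = 0.0625.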
a.1.2 Measures of correlation
Table 2 shows the Pearson correlation (reported as r and r²) as well as the Spearman rank correlation (ρ) in each of the three transfer settings we examine. We believe that Pearson correlation is the more appropriate measure, given that it is less dependent on the specific CNNs chosen for the study and the effects are approximately linear, but our results are similar in either case.
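Both correlation measures can be computed directly with SciPy; the arrays below are illustrative made-up numbers, not our measurements:

```python
from scipy import stats

# Hypothetical ImageNet top-1 accuracies and transfer accuracies (illustrative only).
imagenet_acc = [71.0, 74.5, 77.2, 79.0, 80.1]
transfer_acc = [85.0, 87.0, 89.5, 90.2, 91.0]

r, p_r = stats.pearsonr(imagenet_acc, transfer_acc)       # linear association
rho, p_rho = stats.spearmanr(imagenet_acc, transfer_acc)  # rank association
print(f"r = {r:.3f}, r^2 = {r * r:.3f}, rho = {rho:.3f}")
```

Spearman's ρ depends only on ranks, so it is insensitive to the (approximately linear) spacing of model accuracies that Pearson's r captures.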
a.2 Datasets
All datasets had a median image size on the shortest side of at least 331 pixels (the largest ImageNet-native input image size out of all networks tested), except Caltech-101, for which the median size is 225 on the shortest side and 300 on the longer side, and CIFAR-10 and CIFAR-100, which consist of 32 × 32 pixel images.
For datasets with a provided validation set (FGVC Aircraft, VOC2007, DTD, and 102 Flowers), we used this validation set to select hyperparameters. For other datasets, we constructed a validation set by subsetting the original training set. For the DTD and SUN397 datasets, which provide multiple train/test splits, we used only the first provided split. For the Caltech-101 dataset, which specifies no train/test split, we trained on 30 images per class and tested on the remainder, as in previous works [17, 63, 81, 6]. With the exception of dataset subset results (Figure 9), all results indicate the performance of models retrained on the combined training and validation set.
a.3 Networks and ImageNet training procedure
Table 3 lists the parameter count, penultimate layer feature dimension, and input image size for each network examined. Unless otherwise stated, our results were obtained with networks we trained, rather than publicly available checkpoints. We trained all networks with a batch size of 4096 using Nesterov momentum of 0.9 and weight decay of , taking an exponential moving average of the weights with a decay factor of 0.9999. We performed linear warmup to a learning rate of 1.6 over the first 10 epochs, and then continuously decayed the learning rate by a factor of 0.975 per epoch. We used the preprocessing and data augmentation from . To determine how long to train each network, we trained a separate model for up to 300 epochs with approximately 50,000 ImageNet training images held out as a validation set, and then trained a model on the full ImageNet training set for the number of steps that yielded the highest performance. For all networks, we used scale parameters for batch normalization layers, and did not use label smoothing, dropout, or an auxiliary head. For NASNet-A Large, we additionally disabled drop path regularization.
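The schedule above (linear warmup to 1.6 over the first 10 epochs, then decay by a factor of 0.975 per epoch) can be sketched as follows; here the decay is applied once per epoch, whereas in practice it may be applied continuously per step:

```python
def imagenet_lr(epoch, peak=1.6, warmup_epochs=10, decay=0.975):
    """Learning rate at the start of a (0-indexed) epoch."""
    if epoch < warmup_epochs:
        # Linear warmup from peak/warmup_epochs up to peak.
        return peak * (epoch + 1) / warmup_epochs
    # Exponential decay by a factor of `decay` per epoch after warmup.
    return peak * decay ** (epoch - warmup_epochs)
```

For example, the rate reaches the peak of 1.6 at the end of warmup and is 1.6 · 0.975 one epoch after the warmup period ends.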
When training on ImageNet, we did not optimize hyperparameters for each network individually because we were able to achieve ImageNet top-1 performance comparable to publicly available checkpoints without doing so. (When fine-tuning and training from random initialization, we found that hyperparameters were more important and performed extensive tuning; see below.) For all networks except NASNet-A Large, our retrained models achieved accuracy no more than 0.5% lower than the original reported results and public checkpoint, and sometimes substantially higher (Table 3). Given that we disabled the regularizers used in the original model, we expected a larger performance drop. Our experiments indicate that these regularizers further improve accuracy, but are evidently not necessary to achieve performance close to the published results.
For NASNet-A Large, there was a substantial gap between the performance of the published model and our retrained model (82.7% vs. 80.8%). As a sanity check, we enabled label smoothing, dropout, the auxiliary head, and drop path, and retrained NASNet-A Large with the same hyperparameters described above. This regularized model achieved 82.5% accuracy, suggesting that most of the loss in accuracy in our setup is due to disabling regularization. For other models, we could further improve ImageNet top-1 accuracy over published results by applying regularizers: a retrained Inception-ResNet v2 model with label smoothing, dropout, and the auxiliary head enabled achieved 81.4% top-1 accuracy, 1.1% better than the unregularized model and 1.3% better than the published result. However, because these regularizers clearly hurt results in the logistic regression setting, and because our goal was to compare all models and settings fairly, we report results for models trained and fine-tuned without regularization unless otherwise specified.
a.4 Logistic regression
For each dataset, we extracted features from the penultimate layer of the network. We trained a multinomial logistic regression classifier using L-BFGS, with an L2 regularization parameter applied to the sum of the per-example losses, selected from a range of 45 logarithmically spaced values on the validation set. Since the optimization problem is convex, we used the solution at the previous point along the regularization path as a warm start for the next point, which greatly accelerated the search. For these experiments, we did not perform data augmentation or scale aggregation, and we used the entire image, rather than cropping the central 87.5% as is common for testing on ImageNet.
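The warm-started sweep can be sketched with scikit-learn, which exposes warm starting directly. Note that scikit-learn parameterizes regularization via the inverse strength C and applies it somewhat differently from the description above, so this is an illustrative sketch on synthetic data rather than a reproduction:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def sweep_regularization_path(X_train, y_train, X_val, y_val, c_values):
    """Fit an L-BFGS logistic regression along a regularization path,
    warm-starting each fit from the previous solution."""
    clf = LogisticRegression(solver="lbfgs", warm_start=True, max_iter=1000)
    scores = {}
    for c in sorted(c_values):  # sweep from strongest to weakest regularization
        clf.set_params(C=c)
        clf.fit(X_train, y_train)  # reuses the previous coefficients as the init
        scores[c] = clf.score(X_val, y_val)
    return scores

X, y = make_classification(n_samples=400, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)
scores = sweep_regularization_path(X_tr, y_tr, X_val, y_val,
                                   np.logspace(-3, 3, 7))
```

Because adjacent points on the path have similar optima, each warm-started fit converges in far fewer L-BFGS iterations than a cold start would.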
a.5 Fine-tuning
For fine-tuning experiments in Figure 2, we initialized networks with ImageNet-pretrained weights and trained for 20,000 steps at a batch size of 256 using Nesterov momentum with a momentum parameter of 0.9. We selected the optimal learning rate and weight decay on the validation set by grid search. Our early experiments indicated that the optimal weight decay at a given learning rate varied inversely with the learning rate, as has recently been reported. Thus, our grid consisted of 7 logarithmically spaced learning rates between 0.0001 and 0.1 and 7 logarithmically spaced weight decay to learning rate ratios, as well as no weight decay. We found it useful to decrease the batch normalization momentum parameter from its ImageNet value to max(1 − 10/s, 0.9), where s is the number of steps per epoch. We found that the maximum performance on the validation set at any step during training was very similar to the maximum performance at the last step, presumably because we searched over learning rate, so we did not perform early stopping. On the validation set, we evaluated on both uncropped images and images cropped to the central 87.5% and picked the approach that gave higher accuracy for evaluation on the test set. Cropped images typically yielded better performance, except on CIFAR-10 and CIFAR-100, where results differed by model.
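Parameterizing weight decay as a ratio of the learning rate aligns the grid with the inverse relationship noted above. A sketch of such a grid follows; the learning rate bounds (0.0001 to 0.1) come from the text, but the ratio bounds used here are illustrative placeholders, not the paper's values:

```python
import numpy as np

learning_rates = np.logspace(-4, -1, 7)                  # 0.0001 ... 0.1 (from the text)
wd_to_lr_ratios = list(np.logspace(-6, -3, 7)) + [0.0]   # ratio bounds are illustrative

# Each grid point is (learning rate, absolute weight decay); the absolute
# weight decay is the ratio scaled by the learning rate.
grid = [(lr, lr * ratio) for lr in learning_rates for ratio in wd_to_lr_ratios]
```

The resulting grid has 7 × 8 = 56 points, since each learning rate is paired with 7 nonzero ratios plus the no-weight-decay setting.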
When examining the effect of dataset size (Section 4.7), we fine-tuned for at least 1,000 steps or 100 epochs (following guidance from our analysis of training time in Section 4.6) at a batch size of 64, with the learning rate range scaled down by a factor of 4. Otherwise, we used the same settings as above. Because we chose hyperparameters based on a large validation set, the results may not reflect what can be accomplished in practice when training on datasets of this size. In Sections 4.6 and 4.7, we fine-tuned models from the publicly available Inception v4 checkpoint rather than using the model trained as above.
a.6 Training from random initialization
We used a similar training protocol for training from random initialization as for fine-tuning, i.e., we trained for 20,000 steps at a batch size of 256 using Nesterov momentum with a momentum parameter of 0.9. Training from random initialization generally achieved optimal performance at higher learning rates and with greater weight decay, so we adjusted the learning rate range to span from 0.001 to 1.0 and the weight decay to learning rate ratio range to span from to .
When examining the effect of dataset size (Section 4.7), we trained from random initialization for at least 78,125 steps or 200 epochs at a batch size of 16, with the learning rate range scaled down by a factor of 16. We chose these parameters because investigation of effects of training time (Section 4.6) indicated that training from random initialization always benefited from increased training time, whereas fine-tuning did not. Additionally, pilot experiments indicated that training from random initialization, but not fine-tuning, benefited from a reduced batch size with very small datasets.
Appendix B Logistic regression performance of public checkpoints
We present results of logistic regression with features extracted from publicly available checkpoints in Figure 10. When compared using public checkpoints, ResNets and DenseNets were consistently among the top-performing models. The correlation between ImageNet top-1 accuracy and accuracy across transfer tasks was weak and did not reach statistical significance. By contrast, the correlation between ImageNet top-1 accuracy and accuracy across transfer tasks with retrained models was much higher (test of equality of nonoverlapping correlations based on dependent groups [62, 16]).
Retrained models achieved higher transfer accuracy than publicly available checkpoints. For 11 of the 12 datasets investigated (all but Oxford Pets), features from the best retrained model achieved higher accuracy than features from the best publicly available checkpoint. Retrained models achieved higher accuracy for 84% of dataset/model pairs (162/192), and transfer accuracy averaged across datasets was higher for retrained models for all networks except MobileNet v1 (Figure 11). The best retrained model, Inception-ResNet v2, achieved an average log odds of 1.58, whereas the best public checkpoint, ResNet v1 152, achieved an average log odds of 1.35.
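The average log odds statistic used above maps each accuracy p to logit(p) = log(p / (1 − p)) before averaging, so that near-ceiling datasets do not compress differences between models. A minimal sketch:

```python
import math

def average_log_odds(accuracies):
    """Mean of logit-transformed accuracies; each accuracy must lie in (0, 1)."""
    return sum(math.log(p / (1.0 - p)) for p in accuracies) / len(accuracies)
```

For instance, an accuracy of 0.9 maps to log(9) ≈ 2.20, while 0.5 maps to 0; inverting an average log odds of 1.58 through the logistic function gives an accuracy of roughly 0.83.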
Appendix C Extended analysis of effect of training/regularization settings
c.1 Performance of penultimate layer features
Figure 12 shows performance of penultimate layer features in each of the training settings in Figure 3, broken down by dataset. Across nearly all datasets and models, we observed the highest performance with the least-regularized model, even though this model was not the best model in terms of ImageNet top-1 accuracy. We also experimented with removing weight decay, but found that this yielded substantially lower performance on both ImageNet and all transfer datasets except for FGVC Aircraft.
To investigate whether the effect of regularization upon the performance of fixed features was mediated by training time, rather than the regularization itself, we performed logistic regression on penultimate layer features from Inception v4 at different epochs over training (Figure 13). For models with more regularizers, checkpoints from earlier in training typically performed better than the checkpoint that achieved the best accuracy on ImageNet. However, on most datasets, the best checkpoint without regularization outperformed all checkpoints with regularization. For most datasets, the best transfer accuracy was achieved at around the same number of training epochs as the best ImageNet top-1 accuracy, but on FGVC Aircraft, we found that a checkpoint early in training yielded much higher accuracy.
c.2 Fine-tuning performance
Here, we present an expanded analysis of the effect of regularization upon fine-tuning. Figure 14 shows average fine-tuning performance across datasets, both when fine-tuning with the same settings as used for pretraining (blue; same data as in Figure 5) and when pretraining with regularization but fine-tuning without any regularization (orange). For all regularization settings, benefits are clearly observed only when the regularization is used for both pretraining and fine-tuning. Figure 15 shows results broken down by dataset for pretraining and fine-tuning with the same settings.
Overall, the effect of regularization upon fine-tuning performance was much smaller than the effect upon the performance of logistic regression on penultimate layer features. As in the logistic regression setting, enabling batch normalization scale parameters and disabling label smoothing improved performance. Effects of dropout and the auxiliary head were not entirely consistent across models and datasets (Figure 15). Inception-ResNet v2 clearly performed better when the auxiliary head was present. For Inception v4, the auxiliary head improved performance on some datasets (Food-101, FGVC Aircraft, VOC2007, and Oxford Pets) but worsened performance on others (CIFAR-100, DTD, Oxford 102 Flowers). However, because improvements were only observed when the auxiliary head was used both for pretraining and fine-tuning, it is unclear whether the auxiliary head leads to better weights or representations. It may instead improve fine-tuning performance by acting as a regularizer at fine-tuning time.
Appendix D Relationship between dataset size and predictive power of ImageNet accuracy
As datasets get larger, ImageNet accuracy becomes a better predictor of the performance of models trained from scratch. Figure 16 shows the relationship between dataset size and the correlation between ImageNet accuracy and accuracy on other datasets, for each of the 12 datasets investigated. We found a significant relationship when networks were trained from random initialization, but no significant relationships in the transfer learning settings.
One possible explanation for this behavior is that ImageNet performance measures both inductive bias and capacity. When training from scratch on smaller datasets, inductive bias may be more important than capacity.
Appendix E Additional comparisons of logistic regression, fine-tuning, and training from random initialization
Figure 17 presents additional scatter plots comparing performance in the three settings we investigated. Fine-tuning usually achieved higher accuracy than logistic regression on top of fixed ImageNet features or training from randomly initialized models, but for some datasets, the gap was small. The performance of logistic regression on fixed ImageNet features vs. networks trained from random initialization was heavily dependent on the dataset.
Appendix F Comparison versus state-of-the-art
Table 4 shows the best previously reported results of which we are aware on each of the datasets investigated. We achieve state-of-the-art performance on 5 datasets at networks’ native image sizes, or 7 if we retrain networks at larger input sizes, as some previous transfer learning works have done. For CIFAR-10, CIFAR-100, and Stanford Cars, the best previous result was trained from scratch; for all other datasets, the baselines use some form of ImageNet pretraining.
Appendix G Comparison of alternative classifiers
In addition to the logistic regression without data augmentation setting described in the main text, we investigated transfer learning performance using support vector machines without data augmentation, and using logistic regression with data augmentation.
g.1 Support vector machines
Although a logistic regression classifier trained on the penultimate layer activations has a natural interpretation as retraining the last layer of the neural network, many previous studies have reported results with support vector machines [17, 59, 6, 63]. Thus, we examine performance in this setting as well (Figure 18, top). Following previous work [63, 6], we L2-normalized the input to the model along the feature dimension. We used the SVM implementation from scikit-learn [19, 56], selecting the value of the C hyperparameter from 26 logarithmically spaced values. SVM and logistic regression results were highly correlated. For most (146/192) dataset/model pairs, logistic regression outperformed SVM, but differences were small (average log odds 1.32 vs. 1.28, t-test).
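This setting can be sketched with scikit-learn's linear-kernel SVM on synthetic data; the C grid bounds below are illustrative placeholders, and this is not the authors' code:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import normalize
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=50, n_informative=20,
                           random_state=0)
X = normalize(X)  # L2-normalize each example along the feature dimension
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

# Select C on a held-out split (grid bounds here are illustrative).
best_c, best_acc = None, -1.0
for c in np.logspace(-2, 3, 6):
    acc = SVC(kernel="linear", C=c).fit(X_tr, y_tr).score(X_val, y_val)
    if acc > best_acc:
        best_c, best_acc = c, acc
```

L2 normalization puts every feature vector on the unit sphere, which makes the margin (and hence the choice of C) comparable across networks whose penultimate activations differ in scale.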
g.2 Logistic regression with data augmentation
Finally, we trained a logistic regression classifier with data augmentation, in the same setting we use for fine-tuning. We trained for 40,000 steps with Nesterov momentum and a batch size of 256. Because the optimization problem is convex, we did not optimize over learning rate, but instead fixed the initial learning rate at 0.1 and used a cosine decay schedule. We optimized over L2 regularization parameters for the final layer, applied to the mean of the per-example losses, selected from a range of 10 logarithmically spaced values up to 0.1. Results are shown in Figure 18, bottom. No findings changed. Transfer accuracy with data augmentation was highly correlated with ImageNet accuracy (Figure 18F) and with results without data augmentation (Figure 18G). Fine-tuning remained clearly superior to logistic regression with data augmentation, achieving better results for 188/192 dataset/model pairs (Figure 18H).
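One common form of the fixed-learning-rate cosine schedule described above (initial rate 0.1 decayed to zero over 40,000 steps) can be written as follows; the exact decay formula is an assumption, since the text only names the schedule:

```python
import math

def cosine_decay_lr(step, total_steps=40_000, base_lr=0.1):
    """Cosine decay from base_lr at step 0 down to 0 at total_steps."""
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * step / total_steps))
```

The rate starts at 0.1, passes through 0.05 at the halfway point, and reaches 0 at step 40,000.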
Logistic regression with data augmentation performed better for 100/192 dataset/model pairs. Data augmentation gave a slight improvement in average log odds (1.35 vs. 1.32), but the best performing model without data augmentation was better than the best performing model with data augmentation on half of the 12 datasets.
Appendix H Duplicate images
We used a CNN-based duplicate detector trained on synthesized image triplets to detect images that were present in both the ImageNet training set and the datasets we examine. Because the duplicate detector is optimized for speed, it is imperfect. We used a threshold that was conservative based on manual examination, i.e., it resulted in some false positives but very few false negatives. Thus, the results below represent a worst-case scenario for overlap in the datasets examined. Generally, there are relatively few duplicates. For most of these datasets, standard practice is to fine-tune an ImageNet pretrained network without special handling of duplicates, so the presence of duplicates does not affect the comparability of our results to previous work. However, for CIFAR-10 and CIFAR-100, we compare against networks trained from scratch and there are a substantial number of duplicates, so we exclude duplicates from the test set.
On CIFAR-10, we achieve an accuracy of 98.04% when fine-tuning NASNet Large (the best model) on the full test set. We also achieve an accuracy of 98.02% on the 9,863 example test set that is disjoint with the ImageNet training set. We achieve an accuracy of 99.27% on the 137 duplicates. On CIFAR-100, we achieve an accuracy of 87.7% on the full test set. We achieve an accuracy of 87.5% on the 9,771 example test set that is disjoint from the ImageNet training set, and an accuracy of 95.63% on the 229 duplicates.
Appendix I Numerical performance results
We present the numerical results for logistic regression, fine-tuning, and training from random initialization in Table 6. Bold-faced numbers represent best models, or models insignificantly different from the best, in each training setting.
[Table 6, excerpt: per-dataset accuracies for MobileNet v2 (1.4) under logistic regression, fine-tuning, and training from random initialization; see Table 6 for the full results.]