What makes ImageNet good for Transfer Learning?
The tremendous success of ImageNet-trained deep features on a wide range of transfer tasks raises the question: what are the properties of the ImageNet dataset that are critical for learning good, general-purpose features? This work provides an empirical investigation of various facets of this question: Is more pre-training data always better? How does feature quality depend on the number of training examples per class? Does adding more object classes improve performance? For the same data budget, how should the data be split into classes? Is fine-grained recognition necessary for learning good features? Given the same number of training classes, is it better to have coarse classes or fine-grained classes? Which is better: more classes or more examples per class? To answer these and related questions, we pre-trained CNN features on various subsets of the ImageNet dataset and evaluated transfer performance on PASCAL detection, PASCAL action classification, and SUN scene classification tasks. Our overall findings suggest that most changes in the choice of pre-training data long thought to be critical do not significantly affect transfer performance.
It has become increasingly common within the computer vision community to treat image classification on ImageNet
not as an end in itself, but rather as a “pretext task” for training deep convolutional neural networks (CNNs[25, 22]) to learn good general-purpose features. This practice of first training a CNN to perform image classification on ImageNet (i.e. pre-training) and then adapting these features for a new target task (i.e. fine-tuning) has become the de facto standard for solving a wide range of computer vision problems. Using ImageNet pre-trained CNN features, impressive results have been obtained on several image classification datasets [10, 33], as well as object detection [12, 37], action recognition 
, human pose estimation, image segmentation, optical flow, image captioning [9, 19], and others.
Given the success of ImageNet pre-trained CNN features, it is only natural to ask: what is it about the ImageNet dataset that makes the learnt features as good as they are? One school of thought believes that it is the sheer size of the dataset (1.2 million labeled images) that forces the representation to be general. Others argue that it is the large number of distinct object classes (1000), which forces the network to learn a hierarchy of generalizable features. Yet others believe that the secret sauce is not just the large number of classes, but the fact that many of these classes are visually similar (e.g. many different breeds of dogs), turning this into a fine-grained recognition task and pushing the representation to “work harder”. But, while almost everyone in computer vision seems to have their own opinion on this hot topic, little empirical evidence has been produced so far.
In this work, we systematically investigate which aspects of the ImageNet task are most critical for learning good general-purpose features. We evaluate the features by fine-tuning on three tasks: object detection on the PASCAL-VOC 2007 dataset (PASCAL-DET), action classification on the PASCAL-VOC 2012 dataset (PASCAL-ACT-CLS) and scene classification on the SUN dataset (SUN-CLS); see Section 3 for more details.
The paper is organized as a set of experiments answering a list of key questions about feature learning with ImageNet. The following is a summary of our main findings:
1. How many pre-training ImageNet examples are sufficient for transfer learning? Pre-training with only half the ImageNet data (500 images per class instead of 1000) results in only a small drop in transfer learning performance (1.5 mAP drop on PASCAL-DET). This drop is much smaller than the drop on the ImageNet classification task itself. See Section 4 and Figure 1 for details.
2. How many pre-training ImageNet classes are sufficient for transfer learning? Pre-training with an order of magnitude fewer classes (127 classes instead of 1000) results in only a small drop in transfer learning performance (2.8 mAP drop on PASCAL-DET). Curiously, we also found that for some transfer tasks, pre-training with fewer classes leads to better performance. See Section 5.1 and Figure 2 for details.
3. How important is fine-grained recognition for learning good features for transfer learning? Features pre-trained with a subset of ImageNet classes that do not require fine-grained discrimination still demonstrate good transfer performance. See Section 5.2 and Figure 2 for details.
4. Does pre-training on coarse classes produce features capable of fine-grained recognition (and vice versa) on ImageNet itself?
We found that a CNN trained to classify only between the 127 coarse ImageNet classes produces features capable of telling apart fine-grained ImageNet classes whose labels it has never seen in training (Section 5.3). Likewise, a CNN trained to classify the 1000 ImageNet classes is able to distinguish between unseen coarse-level classes higher up in the WordNet hierarchy (Section 5.4).
5. Given the same budget of pre-training images, should we have more classes or more images per class? Training with fewer classes but more images per class performs slightly better at transfer tasks than training with more classes but fewer images per class. See Section 5.5 and Table 2 for details.
6. Is more data always helpful? We found that training with 771 ImageNet classes (out of 1000) that exclude all PASCAL VOC classes achieves nearly the same performance on PASCAL-DET as training on the complete ImageNet. Further experiments confirm that blindly adding more training data does not always lead to better performance and can sometimes hurt performance. See Section 6 and Table 9 for more details.
A number of papers have studied transfer learning in CNNs, including the various factors that affect pre-training and fine-tuning. For example, the question of whether pre-training should be terminated early to prevent over-fitting and which layers should be used for transfer learning was studied by [2, 44]. A thorough investigation of good architectural choices for transfer learning was conducted by , while  propose an approach to fine-tuning for new tasks without “forgetting” the old ones. In contrast to these works, we use a fixed fine-tuning procedure and instead vary the pre-training data.
One central downside of supervised pre-training is that large quantity of expensive manually-supervised training data is required. The possibility of using large amounts of unlabelled data for feature learning has therefore been very attractive. Numerous methods for learning features by optimizing some auxiliary criterion of the data itself have been proposed. The most well-known such criteria are image reconstruction [5, 36, 29, 27, 32, 20] (see  for a comprehensive overview) and feature slowness [43, 14]. Unfortunately, features learned using these methods turned out not to be competitive with those obtained from supervised ImageNet pre-training . To try and force better feature generalization, more recent “self-supervised” methods use more difficult data prediction auxiliary tasks in an effort to make the CNNs “work harder”. Attempted self-supervised tasks include predictions of ego-motion [1, 16], spatial context [8, 31, 28], temporal context , and even color [45, 23] and sound . While features learned using these methods often come close to ImageNet performance, to date, none have been able to beat it.
A reasonable middle ground between the expensive, fully-supervised pre-training and free unsupervised pre-training is to use weak supervision. For example,  use the YFCC100M dataset of 100 million Flickr images labeled with noisy user tags for pre-training instead of ImageNet. But yet again, even though YFCC100M is almost two orders of magnitude larger than ImageNet, somewhat surprisingly, the resulting features do not appear to give any substantial boost over those pre-trained on ImageNet.
Overall, despite keen interest in this problem, alternative methods for learning general-purpose deep features have not managed to outperform ImageNet-supervised pre-training on transfer tasks.
The goal of this work is to try and understand what is the secret to ImageNet’s continuing success.
The process of using supervised learning to initialize CNN parameters on the task of ImageNet classification is referred to as pre-training. The process of adapting a pre-trained CNN to a target dataset by continuing training is referred to as finetuning. All of our experiments use the Caffe implementation of a single network architecture proposed by Krizhevsky et al. . We refer to this architecture as AlexNet.
We closely follow the experimental setup of Agrawal et al.  for evaluating the generalization of pre-trained features on three transfer tasks: PASCAL VOC 2007 object detection (PASCAL-DET), PASCAL VOC 2012 action recognition (PASCAL-ACT-CLS) and scene classification on SUN dataset (SUN-CLS).
For PASCAL-DET, we used the PASCAL VOC 2007 train/val for finetuning using the experimental setup and code provided by Faster-RCNN 
and report performance on the test set. Finetuning on PASCAL-DET was performed by adapting the pre-trained convolutional layers of AlexNet. The model was trained for 70K iterations using stochastic gradient descent (SGD), with an initial learning rate of 0.001 reduced by a factor of 10 at 40K iterations.
For PASCAL-ACT-CLS, we used PASCAL VOC 2012 train/val for finetuning and testing using the experimental setup and code provided by R*CNN . The finetuning process for PASCAL-ACT-CLS mimics the procedure described for PASCAL-DET.
For SUN-CLS we used the same train/val/test splits as used by . Finetuning on SUN was performed by first replacing the FC-8 layer in the AlexNet model with a randomly initialized fully connected layer with 397 output units. Finetuning was performed for 50K iterations using SGD with an initial learning rate of 0.001, which was reduced by a factor of 10 every 20K iterations.
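Both finetuning setups above use a plain Caffe-style "step" decay of the learning rate. As a minimal sketch (the helper name is ours, not from the paper), the learning rate at any iteration can be computed as:

```python
def step_lr(base_lr, iteration, step_size, gamma=0.1):
    """Caffe-style 'step' schedule: multiply the base learning rate
    by gamma once every step_size iterations."""
    return base_lr * gamma ** (iteration // step_size)

# SUN-CLS schedule from the text: start at 0.001, drop 10x every 20K iters
lr_at_30k = step_lr(0.001, 30000, 20000)  # one drop has occurred by 30K
```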
Faster-RCNN and R*CNN are known to have variance across training runs; we therefore run them three times and report the mean ± standard deviation. On the other hand,  reports little variance between runs on SUN-CLS, so we report our result using a single run.
In some experiments we pre-train on ImageNet using a different number of images per class. The model with 1000 images/class uses the original ImageNet ILSVRC 2012 training set. Models with N images/class are trained by drawing a random sample of N images from all images of that class made available as part of the ImageNet training set.
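The per-class subsampling described above can be sketched as follows (a minimal illustration with hypothetical names; the paper does not publish its sampling code):

```python
import random
from collections import defaultdict

def subsample_per_class(image_labels, n_per_class, seed=0):
    """Draw a fixed-size random sample of images from each class.

    image_labels: list of (image_id, class_label) pairs.
    Returns the subsampled list of (image_id, class_label) pairs.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible subset
    by_class = defaultdict(list)
    for image_id, label in image_labels:
        by_class[label].append(image_id)
    subset = []
    for label, images in by_class.items():
        chosen = rng.sample(images, min(n_per_class, len(images)))
        subset.extend((img, label) for img in chosen)
    return subset
```

Running this with n_per_class of 500, 250, 125 or 50 over the full 1.2M-image training list would produce the pre-training subsets used in Section 4.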
For answering this question, we trained 5 different AlexNet models from scratch using 50, 125, 250, 500 and 1000 images per each of the 1000 ImageNet classes using the procedure described in Section 3. The variation in performance with amount of pre-training data when these models are finetuned for PASCAL-DET, PASCAL-ACT-CLS and SUN-CLS is shown in Figure 1. For PASCAL-DET, the mean average precision (mAP) for CNNs with 1000, 500 and 250 images/class is found to be 58.3, 57.0 and 54.6. A similar trend is observed for PASCAL-ACT-CLS and SUN-CLS. These results indicate that using half the amount of pre-training data leads to only a marginal reduction in performance on transfer tasks. It is important to note that the performance on the ImageNet classification task (the pre-training task) steadily increases with the amount of training data, whereas on transfer tasks, the performance increase with respect to additional pre-training data is significantly slower. This suggests that while adding additional examples to ImageNet classes will improve the ImageNet performance, it has diminishing return for transfer task performance.
In the previous section we investigated how varying the number of pre-training images per class affects performance on transfer tasks. Here we investigate the flip side: keeping the amount of data constant while changing the nomenclature of the training labels.
The 1000 classes of the ImageNet challenge  are derived from leaves of the WordNet tree . Using this tree, it is possible to generate different class taxonomies while keeping the total number of images constant. One can generate taxonomies in two ways: (1) bottom-up clustering, wherein the leaf nodes belonging to a common parent are iteratively clustered together (see Figure 3), or (2) by fixing the distance of the nodes from the root node (i.e. top-down clustering). Using bottom-up clustering, 18 possible taxonomies can be generated. Among these, we chose 5 sets of labels constituting 918, 753, 486, 79 and 9 classes respectively. Using top-down clustering, only 3 label sets of 127, 10 and 2 classes can be generated, and we used the one with 127 classes. For studying the effect of the number of pre-training classes on transfer performance, we trained separate AlexNet CNNs from scratch using these label sets.
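The bottom-up relabeling described above can be illustrated with a toy sketch (the function and the parent map are ours; the actual taxonomies come from WordNet): each pass replaces every label by its parent, merging sibling leaves while the image set stays fixed.

```python
def coarsen_labels(parent, labels, steps):
    """Bottom-up relabeling: replace each label by its ancestor
    `steps` levels up a class tree. `parent` maps child -> parent;
    a node with no parent entry (the root) stays fixed."""
    out = []
    for label in labels:
        node = label
        for _ in range(steps):
            node = parent.get(node, node)
        out.append(node)
    return out
```

Applying more coarsening steps shrinks the label set (e.g. 1000 to 918 to 753 ... classes) without dropping a single training image, which is exactly the controlled comparison the experiment needs.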
Figure 2 shows the effect of the number of pre-training classes obtained using bottom-up clustering of the WordNet tree on transfer performance. We also include the performance of these different networks on the ImageNet classification task itself after finetuning only the last layer to distinguish between all the 1000 classes. The results show that performance on transfer tasks increases much more slowly with the number of classes than performance on ImageNet itself. Using only 486 classes results in a performance drop of 1.7 mAP for PASCAL-DET, 0.8% accuracy for SUN-CLS and a boost of 0.6 mAP for PASCAL-ACT-CLS. Table 1 shows the transfer performance after pre-training with 127 classes obtained from top-down clustering. The results from this table and the figure indicate that only diminishing returns in transfer performance are observed when more than 127 classes are used. Our results also indicate that making the ImageNet classes finer will not help improve transfer performance.
It can be argued that the PASCAL task requires discrimination between only 20 classes and therefore pre-training with only 127 classes should not lead to substantial reduction in performance. However, the trend also holds true for SUN-CLS that requires discrimination between 397 classes. These two results taken together suggest that although training with a large number of classes is beneficial, diminishing returns are observed beyond using 127 distinct classes for pre-training.
Furthermore, for PASCAL-ACT-CLS and SUN-CLS, finetuning CNNs pre-trained with class set sizes of 918 and 753 actually results in better performance than using all 1000 classes. This may indicate that having too many classes for pre-training works against learning good generalizable features. Hence, when generating a dataset, one should be attentive to the nomenclature of the classes.
The ImageNet challenge requires a classifier to distinguish between 1000 classes, some of which are very fine-grained, such as different breeds of dogs and cats. Indeed, most humans do not perform well on ImageNet unless specifically trained , and yet are easily able to perform most everyday visual tasks. This raises the question: is fine-grained recognition necessary for CNN models to learn good feature representations, or is coarse-grained object recognition (e.g. just distinguishing cats from dogs) sufficient?
Note that the label set of 127 classes from the previous experiment contains 65 classes that are present in the original set of 1000 classes and the remainder are inner nodes of the WordNet tree. However, all these 127 classes (see supplementary materials) represent coarse semantic concepts. As discussed earlier, pre-training with these classes results in only a small drop in transfer performance (see Table 1). This suggests that performing fine-grained recognition is only marginally helpful and does not appear to be critical for learning good transferable features.
Earlier, we have shown that the features learned on the 127 coarse classes perform almost as well on our transfer tasks as the full set of 1000 ImageNet classes. Here we will probe this further by asking a different question: is the feature embedding induced by the coarse class classification task capable of separating the fine labels of ImageNet (which it never saw at training)?
To investigate this, we used top-1 and top-5 nearest neighbors in the FC7 feature space to measure the accuracy of identifying fine-grained ImageNet classes after training only on a set of coarse classes. We call this measure “induction accuracy”. As a qualitative example, Figure 5 shows nearest neighbors for a macaque (left) and a schnauzer (right) for feature embeddings trained on ImageNet but with different numbers of classes. All green-border images below the dotted line indicate instances of correct fine-grained nearest-neighbor retrieval for features that were never trained on that class.
Quantitative results are shown in Figure 4. The results show that when 127 classes are used, fine-grained recognition k-NN performance is only about 15% lower compared to training directly for these fine-grained classes (i.e. baseline accuracy). This is rather surprising and suggests that CNNs implicitly discover features capable of distinguishing between finer classes while attempting to distinguish between relatively coarse classes.
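A leave-one-out nearest-neighbor evaluation of this kind can be sketched as follows (a minimal numpy version under our own assumptions: cosine similarity in FC7 space, with the query itself excluded; the paper's exact protocol may differ):

```python
import numpy as np

def induction_accuracy(features, fine_labels, k=5):
    """Top-k nearest-neighbor accuracy on fine labels, using features
    from a network that never saw those labels (leave-one-out).

    features: (N, D) array of FC7 activations; fine_labels: length-N.
    """
    # Cosine similarity via L2-normalized dot products
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = feats @ feats.T
    np.fill_diagonal(sims, -np.inf)  # exclude the query itself
    topk = np.argsort(-sims, axis=1)[:, :k]
    labels = np.asarray(fine_labels)
    # A query counts as correct if any of its k neighbors shares its label
    hits = (labels[topk] == labels[:, None]).any(axis=1)
    return float(hits.mean())
```

With k=1 and k=5 this yields the top-1 and top-5 induction accuracies plotted against the number of coarse pre-training classes.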
Investigating whether the network learns features relevant for fine-grained recognition by training on coarse classes raises the reverse question: does training with fine-grained classes induce features relevant for coarse recognition? If this is indeed the case, then we would expect that when a CNN makes an error, it is more likely to confuse a sub-class (i.e. error in fine-grained recognition) with other sub-classes of the same coarse class. This effect can be measured by computing the difference between the accuracy of classifying the coarse class and the average accuracy of individually classifying all the sub-classes of this coarse class (please see supplementary materials for details).
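One plausible reading of this measure (the supplementary details are not reproduced here, so the function below is our own hedged interpretation) is: for each coarse class, count a prediction as coarsely correct when the predicted subclass belongs to the same coarse class, and subtract the mean per-subclass accuracy.

```python
import numpy as np

def coarse_fine_gap(y_true, y_pred, coarse_of):
    """For each coarse class: accuracy of landing in the right coarse
    class minus mean accuracy on its individual subclasses.
    `coarse_of` maps fine label -> coarse label (hypothetical mapping).
    A positive gap means fine-grained errors stay inside the coarse class."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    gaps = {}
    for coarse in set(coarse_of.values()):
        subs = [f for f, c in coarse_of.items() if c == coarse]
        mask = np.isin(y_true, subs)
        if not mask.any():
            continue
        # Fraction of this coarse class's images predicted as *some*
        # subclass of the same coarse class
        coarse_acc = np.mean([coarse_of[p] == coarse for p in y_pred[mask]])
        # Mean exact-subclass accuracy over its subclasses
        sub_accs = [(y_pred[y_true == s] == s).mean()
                    for s in subs if (y_true == s).any()]
        gaps[coarse] = float(coarse_acc - np.mean(sub_accs))
    return gaps
```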
Figure 6 shows the results. We find that coarse semantic classes such as mammal, fruit, bird, etc. that contain visually similar sub-classes show the hypothesized effect, whereas classes such as tool and home appliance that contain visually dissimilar subclasses do not exhibit this effect. These results indicate that subclasses that share a common visual structure allow the CNN to learn features that are more generalizable. This might suggest a way to improve feature generalization by making class labels respect visual commonality rather than simply WordNet semantics.
Results in previous sections show that it is possible to achieve good performance on transfer tasks using significantly less pre-training data and fewer pre-training classes. However, it is unclear what is more important: the number of classes or the number of examples per class. One extreme is to have only 1 class and all 1.2M images from this class; the other extreme is to have 1.2M classes and 1 image per class. It is clear that both ways of splitting the data will result in poor generalization, so the answer must lie somewhere in-between.
To investigate this, we split the same amount of pre-training data in two ways: (1) more classes with fewer images per class, and (2) fewer classes with more images per class. We use datasets of size 500K, 250K and 125K images for this experiment. For 500K images, we considered two ways of constructing the training set – (1) 1000 classes with 500 images/class, and (2) 500 classes with 1000 images/class. Similar splits were made for data budgets of 250K and 125K images. The 500, 250 and 125 classes for these experiments were drawn from a uniform distribution among the 1000 ImageNet classes. Similarly, the image subsets containing 500, 250 and 125 images were drawn from a uniform distribution among the images that belong to the class.
The results presented in Table 2 show that having more images per class with fewer classes results in features that perform very slightly better on PASCAL-DET, whereas for SUN-CLS, the performance is comparable across the two settings.
Table 3 (excerpt). PASCAL-removed ImageNet: 57.8 ± 0.1 (PASCAL-DET mAP).
It is natural to expect that higher correlation between pre-training and transfer tasks leads to better performance on a transfer task. This indeed has been shown to be true in . One possible source of correlation between pre-training and transfer tasks is classes common to both tasks. In order to investigate how strong the influence of these common classes is, we ran an experiment where we removed all the classes from ImageNet that are contained in the PASCAL challenge. PASCAL has 20 classes, some of which map to more than one ImageNet class, and thus, after applying this exclusion criterion, we are only left with 771 ImageNet classes.
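The exclusion step above amounts to removing every ImageNet class that any PASCAL class maps to. A minimal sketch (the mapping and names are hypothetical; the actual PASCAL-to-synset mapping is many-to-many and comes from WordNet):

```python
def exclude_transfer_classes(pretrain_classes, transfer_to_pretrain):
    """Remove from the pre-training label set every class that any
    transfer-task class maps to. One transfer class may map to several
    pre-training classes (e.g. PASCAL 'dog' -> many dog-breed synsets)."""
    excluded = {c for mapped in transfer_to_pretrain.values() for c in mapped}
    return [c for c in pretrain_classes if c not in excluded]
```

Applied to the real mapping, this is how 1000 ImageNet classes reduce to the 771 PASCAL-removed classes used for pre-training.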
Table 3 compares the results on PASCAL-DET when the PASCAL-removed-ImageNet is used for pre-training against the original ImageNet and a baseline of pre-training on the Places  dataset. The PASCAL-removed-ImageNet achieves mAP of 57.8 (compared to 58.3 with the full ImageNet) indicating that training on ImageNet classes that are not present in PASCAL is sufficient to learn features that are also good for PASCAL classes.
The analysis using PASCAL-removed ImageNet indicates that pre-training on non-PASCAL classes aids performance on PASCAL. This raises the question: is it always better to add pre-training data from additional classes that are not part of the target task? To investigate this, we chose two different methods of splitting the ImageNet classes. The first is a random split, in which the 1000 ImageNet classes are split randomly; the second is a minimal split, in which the classes are deliberately split to ensure that similar classes are not in the same split (Figure 7). In order to determine whether additional data helps performance for classes in split A, we pre-trained two CNNs: one for classifying all classes in split A and the other for classifying all classes in both splits A and B (i.e. the full dataset). We then finetuned the last layer of the network trained on the full dataset on split A only. If additional data from split B helps performance on split A, then the CNN pre-trained with the full dataset should perform better than the CNN pre-trained only on split A.
Using the random split, Figure 9 shows that the results of this experiment confirm the intuition that additional data is indeed useful for both splits. However, under a random class split within ImageNet, we are almost certain to have extremely similar classes (e.g. two different breeds of dogs) ending up on different sides of the split. So, what we have shown so far is that we can improve performance on, say, husky classification by also training on poodles. Hence the motivation for the minimal split: does adding arbitrary, unrelated classes, such as fire trucks, help dog classification?
The classes in minimal split A do not share any common ancestor with minimal split B up until the nodes at depth 4 of the WordNet hierarchy (Figure 7). This ensures that any class in split A is sufficiently disjoint from split B. Split A has 522 classes and split B has 478 classes (N.B.: for consistency, random splits A and B also had the same numbers of classes). In order to intuitively understand the difference between min splits A and B, we have visualized a random sample of images from these splits in Figure 8. Min split A consists mostly of inanimate objects and min split B mostly of living things.
Contrary to the earlier observation, Figure 9 shows that both min split A and min split B perform better than the full dataset when we finetune only the last layer. This result is quite surprising: it shows that by finetuning only the last layer of a network pre-trained on the full dataset, it is not possible to match the performance of a network trained on just one split. We have observed that when training all the layers for an extensive amount of time (420K iterations), the accuracy of min split A does benefit from pre-training on split B, but min split B does not benefit from split A. One explanation could be that objects in split B (e.g. person) are contained in images in split A (e.g. buildings, clothing), but not vice versa.
While it might be possible to recover performance with very clever adjustments of learning rates, the current results suggest that training with data from unrelated classes may push the network into a local minimum from which it might be hard to reach the better optimum obtainable by training the network from scratch.
In this work we analyzed factors that affect the quality of ImageNet pre-trained features for transfer learning. Our goal was not to consider alternative neural network architectures, but rather to establish facts about which aspects of the training data are important for feature learning.
The current consensus in the field is that the key to learning highly generalizable deep features is the large amounts of training data and the large number of classes.
To quote the influential R-CNN paper: “…success resulted from training a large CNN on 1.2 million labeled images…” . After the publication of R-CNN, most researchers assumed that the full ImageNet is necessary to pre-train good general-purpose features. Our work quantitatively questions this assumption, and yields some quite surprising results. For example, we have found that a significant reduction in the number of classes or the number of images used in pre-training has only a modest effect on transfer task performance.
While we do not have an explanation as to the cause of this resilience, we list some speculative possibilities that should inform further study of this topic:
In our experiments, we investigated only one CNN architecture – AlexNet. While ImageNet-trained AlexNet features are currently the most popular starting point for fine-tuning on transfer tasks, there exist deeper architectures such as VGG , ResNet , and GoogLeNet . It would be interesting to see if our findings hold up on deeper networks. If not, it might suggest that AlexNet capacity is less than previously thought.
Our results might indicate that researchers have been overestimating the amount of data required for learning good general CNN features. If that is the case, it might suggest that CNN training is not as data-hungry as previously thought. It would also suggest that beating ImageNet-trained features with models trained on a much bigger data corpus will be much harder than once thought.
Finally, it might be that the currently popular target tasks, such as PASCAL and SUN, are too similar to the original ImageNet task to really test the generalization of the learned features. Alternatively, perhaps a more appropriate approach to test the generalization is with much less fine-tuning (e.g. one-shot-learning) or no fine-tuning at all (e.g. nearest neighbour in the learned feature space).
In conclusion, while the titular question “What makes ImageNet good for transfer learning?” still lacks a definitive answer, our results have shown that a lot of “folk wisdom” on why ImageNet works well is not accurate. We hope that this paper will pique our colleagues’ curiosity and facilitate further research on this fascinating topic.
This work was supported in part by ONR MURI N00014-14-1-0671. We gratefully acknowledge NVIDIA corporation for the donation of K40 GPUs and access to the NVIDIA PSG cluster for this research. We would like to acknowledge the support from the Berkeley Vision and Learning Center (BVLC) and Berkeley DeepDrive (BDD). Minyoung Huh was partially supported by the Rose Hill Foundation.