Deep Learning has become pervasive in many application domains such as Vision, Speech, and Natural Language Processing. This can be partly attributed to the availability of fast processing units like GPUs as well as to better neural network designs. The availability of large, open source, general purpose labeled data has also helped the penetration of Deep Learning into these domains.
The accuracy obtained on a learning task depends on the quality and quantity of training data. As Figure 1 shows, with larger amounts of data, one can obtain much better accuracy on the same learning task. In this figure, the accuracies obtained on various categories of ImageNet22K are shown, with the big data being 10x bigger in size than the small data. While large, open source, general purpose, labeled data is available, customers often have specific needs for training. For example, a doctor may be interested in using Deep Learning for Melanoma Detection. The amount of labeled data available in such specific areas is rather limited. In situations like these, the training accuracy can be negatively impacted if the model is trained with only this limited data. To alleviate this problem, one can fall back on Transfer Learning.
In Transfer Learning, one takes a model trained on a potentially large dataset (called the source dataset) and then learns a new, smaller dataset (called the target dataset) as a transfer task on it. This can be achieved by finetuning the weights of the neurons in the pre-trained model using the target dataset. Finetuning leverages the information contained in the source dataset by tweaking the weights of its pre-trained network while training the model for the target dataset. It has been shown that models trained on the source dataset learn basic concepts which are useful in learning the target dataset.
In the area of vision, the neural networks tend to be quite deep in terms of layers . It has been shown that the layers learn different concepts. The initial layers learn very basic concepts like color, edges, shapes, and textures while later layers learn complex concepts . The last layer tends to learn to differentiate between the labels supported by the source dataset.
The key challenges to Transfer Learning are how, what and when to transfer . One needs to address key questions like the selection of the source dataset, the neural network to use, the various hyperparameter settings as well as the type of training method to apply on the selected neural network and dataset. Figure 2 shows the accuracy obtained while training on the Tool category of ImageNet22K on models created from different source categories of ImageNet22K like Sports, Animals, Plant as well as random initialization. As the figure indicates, accuracy varied from -8% to +67% improvement over the random initialization (no Transfer Learning) case.
When performing Transfer Learning with deep learning, a popular training method is Stochastic Gradient Descent (SGD). In SGD, the key hyperparameters controlling the descent are the batch size, the step size, and the learning rate. In the case of Transfer Learning, the learning rate can be set individually for every layer of the neural network. This controls how much the weights in each layer change as training progresses on the target dataset. A lower learning rate for a layer allows the layer to retain what it has learned from the source data longer. Conversely, a higher learning rate forces the layer to relearn its weights more quickly for the target dataset. For Transfer Learning, the concepts learned in the early layers tend to have high value, since the source dataset is typically large and the early layers represent lower-level features that are transferable to the target task. If the rates are large, the weights can change significantly and the neural network can over-learn on the target task, especially if the target task has a limited amount of training data. The accuracy obtained on the target task depends on the proper selection of all of these parameters.
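To make the per-layer mechanism concrete, a minimal sketch of an SGD step with individual layer learning rates follows. The layer names and rate values are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def sgd_step(weights, grads, layer_lrs):
    """Apply one SGD update, scaling each layer's step by its own learning rate."""
    return {name: w - layer_lrs[name] * grads[name]
            for name, w in weights.items()}

# Illustrative setup: early layers get small rates to preserve source features,
# the final layer gets a larger rate to adapt quickly to the target labels.
rng = np.random.default_rng(0)
weights = {"conv_early": rng.normal(size=(3, 3)),
           "conv_late":  rng.normal(size=(3, 3)),
           "fc":         rng.normal(size=(3, 3))}
grads = {name: rng.normal(size=(3, 3)) for name in weights}
layer_lrs = {"conv_early": 0.0001, "conv_late": 0.001, "fc": 0.01}

weights = sgd_step(weights, grads, layer_lrs)
```

With rates spread across two orders of magnitude, the early layers move only slightly per step while the last layer adapts aggressively, which is the trade-off discussed above.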
In this paper we study the impact of individualized layer learning rates on the accuracy of training. We use a large dataset called ImageNet22K and a small dataset called Oxford Flowers for our experiments. These experiments are done on a deep residual network. We show that the number of images-per-label plays an important role in the choice of the learning rate for a layer. We also share preliminary results on real world image classification tasks which indicate that graduated learning rates across a network, such that early layers change slowly, allow for better accuracy on the target dataset.
The paper is organized as follows: In section 2, we describe related work. In sections 3 and 4, we describe our experimental setup and present our results, respectively. We conclude in section 5.
2 Related Work
Several approaches have been proposed to deal with the problem of learning with small amounts of data. These include one-shot learning, zero-shot learning, multi-task learning, and generic transfer learning.
Multi-task learning simultaneously trains the network for multiple related tasks by finding a shared feature space. An example is Neural Machine Translation (NMT), where the same network is used for translation to different languages. A joint fine-tuning approach has been proposed to tackle the problem of training with insufficient labeled data. The basic idea is to select a subset of the training data from the source dataset (with low-level features similar to the target dataset) and use it to augment the training data for the target task. The convolutional layers of the resulting network are then finetuned for both the source and target tasks. Our work targets scenarios where the source dataset is not accessible and finetuning is only possible using the target dataset.
Prior work established that finetuning all the layers of the neural network gives the best accuracy. However, there is no study of the sensitivity of accuracy to the degree of finetuning. It has been experimentally shown for one dataset that the accuracy of a (finetuned) model increases monotonically with increasing learning rate and then decreases, indicating the existence of an optimal learning rate before overlearning happens. We studied the variation in model accuracy with the learning rate used in finetuning for several datasets and observed non-monotone patterns.
Another approach is to train SVM classifiers on feature embeddings extracted from a pre-trained network. In this approach, there are as many SVMs as categories in the target dataset, and each SVM learns to classify a particular label. The feature embeddings can be taken from any layer of the neural network but, in general, are taken from the penultimate layer. This is equivalent to finetuning with the learning rate multipliers of all the inner layers up to the penultimate layer kept at 0 and only the last layer changed.
3 Experimental Setup
ImageNet22K contains 21841 categories spread across hierarchical categories. We extracted some of the major hierarchies like sport, garment, fungus, weapon, plant, animal, furniture, food, person, nature, music, fruit, fabric, tool, and building to form multiple source and target domain image sets for our evaluation. Figure 3 shows the hierarchies of the ImageNet22K dataset that were used and their relative sizes in terms of number of images. Figure 4 shows representative images from some of these important domains. Some of the domains, like animal, plant, person, and food, contained substantially more images (and labels) than categories such as weapon, tool, or sport. This skew is reflective of real world situations and provides a natural testbed for our method when comparing training sets of different sizes.
Each of these domains was then split into four equal partitions. One was used to train the source model, two were used to validate the source and target models, and the last was used for the Transfer Learning task: one-tenth of this fourth partition was used to create the Transfer Learning target. For example, the person hierarchy has more than one million images. This was split into four equal partitions of more than 250K images each. The source model was trained with data of that size, whereas the target model was fine-tuned with one-tenth of that data size taken from the fourth partition. The smaller target datasets are reflective of real Transfer Learning tasks.
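The partitioning scheme above can be sketched as follows. This is a simplified illustration over a plain list of image IDs; the helper name and the seed are assumptions, and the actual splits were performed per ImageNet22K hierarchy:

```python
import random

def make_splits(image_ids, seed=42):
    """Split a domain into four equal partitions: source training data,
    two validation sets, and a pool whose first tenth becomes the
    Transfer Learning target training set."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)
    q = len(ids) // 4
    source_train = ids[:q]
    source_val = ids[q:2 * q]
    target_val = ids[2 * q:3 * q]
    pool = ids[3 * q:4 * q]
    target_train = pool[:len(pool) // 10]  # one-tenth of the fourth partition
    return source_train, source_val, target_val, target_train

# E.g., a domain with ~1M images yields ~250K per partition and ~25K target images.
splits = make_splits(range(1_000_000))
```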
We augmented the target datasets by also using the Oxford Flower dataset  as a separate domain. The dataset contains 102 commonly occurring flower types with 8189 images. Out of this, a target dataset of only 10 training images per class was used. The rest of the data was used for validation.
The training of the source and target models was done using Caffe and a ResNet-27 model. The main components of this neural network are shown in Figure 5. The source models were trained using SGD for 900,000 iterations with a step size of 300,000 iterations and an initial learning rate of 0.01. The target models were trained with an identical network architecture, but with one-tenth of both the iterations and the step size. A fixed random seed was used throughout all training.
4 Results and Discussion
Finetuning the weights involves initializing the weights to the values from the source model and then adjusting them to reduce the classification loss on the target dataset. Typically, in fine-tuning a source model to a target domain, the practice is to keep the weights of all the inner layers unchanged and only finetune the weights of the last fully connected layer. The parameter which controls the degree of finetuning is the learning rate. Let E(a, b) be a transfer learning finetuning experiment where the inner layers' learning rate (η_in) is at a and the outer layer's learning rate (η_out) is at b, with a ≤ b. We assume a uniform learning rate for all the inner layers in most of the experiments; for those where the inner learning rate was varied across layers, this is specifically mentioned in the paper.
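In Caffe-style terms, such a finetuning experiment amounts to assigning a per-layer learning rate: one value shared by the inner layers and another for the final fully connected layer. A hypothetical helper (the function and layer names are illustrative) might look like:

```python
def make_lr_config(layer_names, eta_inner, eta_outer):
    """Assign eta_inner to every inner layer and eta_outer to the final
    fully connected layer of the network."""
    config = {name: eta_inner for name in layer_names[:-1]}
    config[layer_names[-1]] = eta_outer
    return config

layers = ["conv1", "conv2", "conv3", "conv4", "conv5", "fc"]
# Last-layer-only finetuning: an inner rate of 0 freezes the pre-trained weights.
cfg = make_lr_config(layers, eta_inner=0.0, eta_outer=0.01)
```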
4.1 Finetuning Last Layer
We first ran some experiments to quantify the gains possible by varying the learning rate of the last layer in finetuning while keeping all the inner layer weights unchanged. Table 1 compares the accuracy of the trained model for two different values of the last layer's learning rate, 0.01 and 0.1, corresponding to experiments E(0, 0.01) and E(0, 0.1). Observe that the accuracy is sensitive to the choice of η_out, and significant gains in accuracy (up to 127%) are achievable for certain domains by just choosing the best value of η_out.
4.2 Finetuning Inner Layers
Earlier work [18, 2] observed that finetuning inner layers along with the last layer can give better accuracy compared to only finetuning the last layer. However, that observation was based on limited datasets. We are interested in studying how the accuracy changes with η_in for a fixed η_out, with the following objectives:
Identify patterns which can be used to provide guidelines for choosing η_in and η_out for a given source/target dataset.
Find correlations between dataset features, like images/label and the similarity between the source and target datasets, and the choice of η_in.
Quantify the possible gains in accuracy for different datasets by exploring the space of η_in and η_out values, and hence establish the need for algorithms that identify the right set of finetuning parameters for a given source/target dataset.
To this end, we conducted experiments varying η_in for a fixed η_out. We divided the experiments into two sets based on the perceived semantic closeness of the source and target domains: Set A consists of experiments where the source and target datasets are semantically close, and Set B of those where they are far apart.
Two patterns across different experiments are observed: (i) accuracy increases monotonically with η_in and then decreases; (ii) accuracy alternates between increase and decrease cycles. The variation in accuracy with η_in can be significant for certain datasets. Let acc_min and acc_max be the minimum and maximum accuracy obtained when η_in is varied at a fixed η_out, and let Δ be defined as:

Δ = 100 × (acc_max − acc_min) / acc_min
Observe that Δ represents the percentage range of possible variation in accuracy with η_in at a given η_out. Figure 8 compares Δ for different datasets. All the datasets exhibit a nonzero Δ, with median values of Δ being 28.96% (83.52%) for Set A (B). Also, for the same dataset, the range of variation in accuracy can be quite large or small depending on η_out; for certain source/target pairs the difference is greater than 100 points.
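The percentage range statistic can be computed directly from a sweep of accuracies over inner-layer learning rates. A minimal sketch, with illustrative accuracy values rather than the paper's measurements:

```python
def accuracy_range_pct(accuracies):
    """Percentage range 100 * (max - min) / min over a sweep of accuracies
    obtained by varying the inner-layer learning rate at a fixed outer rate."""
    lo, hi = min(accuracies), max(accuracies)
    return 100.0 * (hi - lo) / lo

# Hypothetical top-1 accuracies (%) for one dataset over several inner rates:
delta = accuracy_range_pct([40.0, 52.0, 55.0, 48.0])
```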
Thus, finetuning both inner and outer layers gives the best accuracy. Further, the value of η_in that maximizes accuracy can be different for different datasets, and the pattern of variation in accuracy with η_in is not always monotone.
Let η_in* be the value of η_in that achieves the best accuracy at a given η_out for a dataset. Table 2 lists η_in* for different datasets. The last column in the table shows the difference in best accuracy between the two η_out settings. Observe that there is no clear winner: for some datasets, keeping η_out = 0.01 and then searching for η_in gives the best accuracy, while for others η_out = 0.1 performs better. This indicates the need for joint optimization over the space of η_in and η_out to get the best accuracy.
We are interested in identifying correlations between source/target dataset features and η_in*. The first feature we consider is images/label in the target dataset. Intuitively, with more labelled data for the target domain, we can be more aggressive (i.e., use larger η_in and η_out) in finetuning. Figure 9 plots η_in* versus images/label in the target for the two values of η_out. For both cases we observe that η_in* increases with images/label. However, there is one anomaly: one dataset attains a larger η_in* despite having fewer images/label. This suggests that other features of the source/target datasets also dictate the choice of learning rates. We are currently investigating this direction with the hope of developing a functional mapping between the features of source/target datasets and η_in*. This knowledge can be leveraged to develop intelligent algorithms that identify the best learning rates for the inner and outer layers for a given source/target dataset.
4.3 Graduated Finetuning of Inner Layers
We also investigated how the top-1 accuracy varies if the inner layer learning rate multipliers are not kept at a fixed value but are varied. Under the assumption that the very basic concepts learned in the earlier layers are more important for transfer learning than the complex concepts learned in later layers, we varied the learning rate multipliers in steps within the inner layers.
4.3.1 Oxford Flowers Dataset
The ResNet-27 we use throughout these experiments has inner convolutional layers organized in 5 stages, conv1 through conv5, as shown in Figure 5. We denote the learning rate multiplier for each of these 5 stages as lr1 through lr5. We measured the accuracy of finetuning when we kept the inner learning rate multipliers lr1..lr5 equal across stages (at a fixed value of either 1, 2, or 5) and compared this to using a graduated set of values. In the graduated case, each convolutional stage was assigned a multiplier (like 0, 1, 2, and 5), with conv1 and conv2 using the same (first, smallest) multiplier, and conv3, conv4, and conv5 using the successive, larger multipliers (meaning lr1 was equal to lr2). In each case, we set the learning rate multiplier of the last layer to 10. Figure 10 shows the top-1 accuracy for different configurations with Oxford Flowers as the target dataset and plant as the source dataset, with the base learning rate at 0.001. As the chart shows, the best accuracy was achieved when the learning rate multipliers were graduated.
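The graduated assignment can be sketched as follows. The multiplier steps 0, 1, 2, 5 and the last-layer value 10 come from the text; the helper name and stage-name mapping are illustrative:

```python
def graduated_multipliers(stages, steps, last_layer_mult=10):
    """Assign graduated learning rate multipliers: the first two conv stages
    share the smallest step, later stages get successively larger ones."""
    mults = {stages[0]: steps[0], stages[1]: steps[0]}
    for stage, step in zip(stages[2:], steps[1:]):
        mults[stage] = step
    mults["fc"] = last_layer_mult
    return mults

m = graduated_multipliers(["conv1", "conv2", "conv3", "conv4", "conv5"],
                          steps=[0, 1, 2, 5])
```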
4.3.2 Real World Image Classification Tasks
Next, we sought to validate these observations on training data "in the wild". IBM operates a public cloud API called Watson Visual Recognition (https://www.ibm.com/watson/developercloud/visual-recognition/api/v3) which enables users to train their own image classifier by providing labelled example images. While images provided to the API are not used to train anything aside from that user's model, users can opt in to allow their image data to be used to help evaluate changes in the training engine. From the many training tasks that were opted in, we took a random sample of 70 tasks. We did not manually inspect the images, but based on the names given to the labels, we presume they represent a wide variety of image types, including industrial, consumer, scientific, and social domains, as shown in Figure 11. Based on the languages of the class labels, we had a wide geographic range as well. The average number of training images per task was about 250, with an average of 5 classes in each, so a mean of 50 image examples per class. We randomly split these into 80% for training and 20% for validation, leaving 40 training images per class on average.
For each of the 70 training tasks, we created a baseline model that was a ResNet-27 initialized with weights from an ImageNet1K model. We set the base learning rate to 0.001 and the last layer's learning rate multiplier to 10; the inner layer multipliers were set to 0. We fine-tuned the network for 20 epochs and computed top-1 accuracy on the held-out 20% of labelled data from each task. The average top-1 accuracy across the 70 tasks was 78.1%.
For the graduated condition, we initialized lr1..lr5 to a graduated set of values and the last layer's multiplier to 16. We then defined a set of 11 scales. The scale is a secondary learning rate multiplier: for example, the final learning rate at scale 0.5 for conv3 with base learning rate 0.001 would be 0.001 × lr3 × 0.5. The intuition is to combine the scale factors explored in Figures 6 and 7 with the graduated values of lr1..lr5 explored in Figure 10.
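The effective per-stage rate is simply the product of the base rate, the stage's graduated multiplier, and the scale. A one-line sketch (the conv3 multiplier value of 4 is a hypothetical example, since the actual graduated values are not listed here):

```python
def effective_lr(base_lr, stage_mult, scale):
    """Final learning rate for a stage: base rate times the stage's
    graduated multiplier times the secondary scale factor."""
    return base_lr * stage_mult * scale

# With a hypothetical conv3 multiplier of 4, scale 0.5, and base rate 0.001:
lr_conv3 = effective_lr(0.001, 4, 0.5)  # = 0.002
```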
This combination of scales and learning tasks resulted in 770 additional finetuning jobs (11 scales for each of the 70 tasks), which we ran for 20 epochs each. We evaluated the top-1 accuracy for each of these jobs. We found that if we picked the individual scale which maximized the accuracy for each job, the mean top-1 accuracy across all tasks improved from 78.1% to 88.0%, a significant gain. However, finding this maximum exhaustively requires running 11 finetuning jobs for each learning task. So we looked at which scale was most frequently the optimal one; it was 0.25. If we limit ourselves to one finetuning job per training task and always choose this single scale, the mean top-1 accuracy across jobs had a more modest increase, from 78.1% to 79.7%.
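The two selection strategies compared above (a per-task oracle versus one globally fixed scale) can be sketched as follows; the accuracy values and task data are illustrative, not the measurements from the 70 tasks:

```python
from collections import Counter

def oracle_mean(results):
    """Mean accuracy when the best scale is chosen per task (one job per scale)."""
    return sum(max(task.values()) for task in results) / len(results)

def single_scale_mean(results):
    """Pick the scale that is most often optimal, then apply it to every task."""
    best_per_task = [max(task, key=task.get) for task in results]
    common = Counter(best_per_task).most_common(1)[0][0]
    return common, sum(task[common] for task in results) / len(results)

# Three hypothetical tasks; each maps scale -> top-1 accuracy (%):
results = [{0.25: 80.0, 0.5: 75.0},
           {0.25: 70.0, 0.5: 72.0},
           {0.25: 85.0, 0.5: 60.0}]
```

The oracle always does at least as well as any single scale, which mirrors the 88.0% versus 79.7% gap reported above.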
This promising direction needs further investigation; if we could predict the optimal learning rate multiplier scale based on some known characteristic of the training task, such as number of images per class, or total number of training images, we could efficiently reach the higher accuracy point established by our exhaustive search.
5 Conclusion
Transfer Learning is a powerful method of learning from small datasets. However, the accuracy obtained from this method can vary substantially depending on the choice of the hyperparameters for training as well as the selection of the source dataset and model. We study the impact of the learning rate and its multiplier, which can be set for every layer of the neural network. We present experimental analysis based on the large ImageNet22K dataset, the small Oxford Flowers dataset, and real world image classification datasets, and show that the images-per-label parameter can help determine the learning rates. Graduating the learning rate across the inner layers also appears to hold more promise than keeping them all fixed and is a worthy direction to pursue.
-  Argyriou, A., Evgeniou, T., Pontil, M.: Multi-task feature learning. In: Proceedings of the 19th International Conference on Neural Information Processing Systems. pp. 41–48 (2006)
-  Bhattacharjee, B., Hill, M., Wu, H., Chandakkar, P., Smith, J., Wegman, M.: Distributed learning of deep feature embeddings for visual recognition tasks. IBM Journal of Research and Development 61(4), 1–9 (2017). https://doi.org/10.1147/JRD.2017.2706118
-  Bottou, L.: Large-Scale Machine Learning with Stochastic Gradient Descent. In: Proceedings of COMPSTAT (2010)
-  Codella, N., Cai, J., Abedini, M., Garnavi, R., Halpern, A., Smith, J.R.: Deep learning, sparse coding, and svm for melanoma recognition in dermoscopy images. In: Proceedings of the 6th International Workshop on Machine Learning in Medical Imaging - Volume 9352. pp. 118–126. Springer-Verlag New York, Inc. (2015)
-  Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: IEEE Conference on CVPR (2009)
-  Donahue, J., Jia, Y., Vinyals, O., Hoffman, J., Zhang, N., Tzeng, E., Darrell, T.: Decaf: A deep convolutional activation feature for generic visual recognition. In: Proceedings of the 31st International Conference on International Conference on Machine Learning - Volume 32. pp. I–647–I–655. ICML’14 (2014)
-  Dong, D., Wu, H., He, W., Yu, D., Wang, H.: Multi-task learning for multiple language translation. In: ACL (2015)
-  Fei-Fei, L., Fergus, R., Perona, P.: One-shot learning of object categories. IEEE Trans. Pattern Anal. Mach. Intell. 28(4), 594–611 (2006)
-  He, K., Zhang, X., Ren, S., Sun, J.: Deep Residual Learning for Image Recognition. In: IEEE Conference on CVPR (2016)
-  Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R.B., Guadarrama, S., Darrell, T.: Caffe: Convolutional Architecture for Fast Feature Embedding. In: ACM Multimedia (2014)
-  Krizhevsky, A., Sutskever, I., Hinton, G.: ImageNet Classification with Deep Convolutional Neural Networks. In: Neural Information Processing Systems (2012)
-  LeCun, Y., Bengio, Y., Hinton, G.: Deep Learning. In: Nature. vol. 521, pp. 436–444 (2015)
-  Mou, L., Meng, Z., Yan, R., Li, G., Xu, Y., Zhang, L., Jin, Z.: How transferable are neural networks in NLP applications? In: EMNLP (2016)
-  Nilsback, M., Zisserman, A.: Automated flower classification over a large number of classes. In: ICVGIP (2008)
-  Palatucci, M., Pomerleau, D., Hinton, G., Mitchell, T.M.: Zero-shot learning with semantic output codes. In: Proceedings of the 22Nd International Conference on Neural Information Processing Systems. pp. 1410–1418 (2009)
-  Pan, S.J., Yang, Q.: A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering (2010)
-  Yosinski, J., Clune, J., Bengio, Y., Lipson, H.: How transferable are features in deep neural networks? Advances in Neural Information Processing Systems 27 (NIPS 2014) (2014)