Learning What and Where to Transfer

05/15/2019 ∙ by Yunhun Jang, et al.

As the application of deep learning has expanded to real-world problems with insufficient volumes of training data, transfer learning has recently gained much attention as a means of improving performance in such small-data regimes. However, when existing methods are applied between heterogeneous architectures and tasks, it becomes important to manage their detailed configurations, which often requires exhaustive tuning to reach the desired performance. To address this issue, we propose a novel transfer learning approach based on meta-learning that can automatically learn what knowledge to transfer from the source network and where to transfer it in the target network. Given source and target networks, we propose an efficient training scheme to learn meta-networks that decide (a) which pairs of layers between the source and target networks should be matched for knowledge transfer and (b) which features, and how much knowledge from each feature, should be transferred. We validate our meta-transfer approach against recent transfer learning methods on various datasets and network architectures, on which our automated scheme significantly outperforms prior baselines that find "what and where to transfer" in a hand-crafted manner.




1 Introduction

Learning deep neural networks (DNNs) requires large datasets, but it is expensive to collect a sufficient amount of labeled samples for each target task. A popular approach for handling such lack of data is transfer learning (Pan & Yang, 2010), whose goal is to transfer knowledge from a known source task to a new target task. The most widely used method for transfer learning is pre-training with fine-tuning (Razavian et al., 2014): first train a source DNN (e.g., ResNet (He et al., 2016)) with a large dataset (e.g., ImageNet (Deng et al., 2009)), and then use the learned weights as an initialization to train a target DNN. Yet, fine-tuning is not a panacea. If the source and target tasks are semantically distant, it may provide no benefit. Cui et al. (2018) suggest sampling from the source dataset depending on the target task for pre-training, but this is only possible when the source dataset is available. There is also no straightforward way to use fine-tuning if the network architectures for the source and target tasks differ largely.

Figure 1: Top: Prior approaches. Knowledge transfer between two networks is done between hand-picked pairs of layers, without considering the importance of channels. Bottom: Our meta-transfer method. The meta-networks automatically decide the amount of knowledge transferred between layers of the two networks and the importance of channels during transfer. Line width indicates the amount of transfer for pairs of transferring layers and channels.

Several existing works can be applied to this challenging scenario of knowledge transfer between heterogeneous DNNs and tasks. Learning without forgetting (LwF) (Li & Hoiem, 2018) applies knowledge distillation, suggested in Hinton et al. (2015), to transfer learning by introducing an additional output layer on the target model, and can thus handle situations where the source and target tasks are different. FitNet (Romero et al., 2015) proposes a teacher-student training scheme for transferring the knowledge from a wider teacher network to a thinner student network, using the teacher's feature maps to guide the learning of the student. To guide the student network, FitNet uses a matching loss between the source and target features. Attention transfer (Zagoruyko & Komodakis, 2017) and Jacobian matching (Srinivas & Fleuret, 2018) suggest approaches similar to FitNet, but use attention maps generated from feature maps, or Jacobians, for transferring the source knowledge.

Our motivation is that these methods, while enabling knowledge transfer between heterogeneous source and target tasks/architectures, have no mechanism to identify which source information to transfer, or between which layers of the networks. Some source information is more important than other information, while some is irrelevant or even harmful depending on the task difference. For example, since network layers generate representations at different levels of abstraction (Zeiler & Fergus, 2014), the information in lower layers might be more useful when the input domains of the tasks are similar but the actual tasks differ (e.g., fine-grained image classification tasks). Furthermore, under heterogeneous network architectures, it is not straightforward to associate a layer of the source network with one of the target network. Yet, since there has been no mechanism to learn what to transfer to where, existing approaches require a careful manual configuration of layer associations between the source and target networks depending on the tasks, which cannot be optimal.

Contribution. To tackle this problem, we propose a novel transfer learning method based on the concept of meta-learning (Naik & Mammone, 1992; Thrun & Pratt, 2012) that learns what information to transfer, and to where, from source networks to target networks with heterogeneous architectures and tasks. Our goal is to learn transfer rules that perform knowledge transfer automatically, considering the differences in architectures and tasks between source and target, without hand-crafted tuning of transfer configurations. Specifically, we learn meta-networks that generate the weights for each feature and for each pair of source and target layers, jointly with the target network. Our method can thus automatically learn to identify which source network knowledge is useful, and where it should transfer to (see Figure 1). We validate our method, learning to transfer what and where (L2T-ww), on multiple source and target task combinations between heterogeneous DNN architectures, and obtain significant improvements over existing transfer learning methods. Our contributions are as follows:

  • We introduce meta-networks for transfer learning that automatically decide which feature maps (channels) of a source model are useful and relevant for learning a target task and which source layers should be transferred to which target layers.

  • To learn the parameters of meta-networks, we propose an efficient meta-learning scheme. Our main novelty is to evaluate the one-step adaptation performance (meta-objective) of a target model learned by minimizing the transfer objective only (as an inner-objective). This scheme significantly accelerates the inner-loop procedure, compared to the standard scheme.

  • The proposed method achieves significant improvements over baseline transfer learning methods in our experiments. For example, in the ImageNet experiment, our meta-transfer learning method achieves 65.05% accuracy on CUB200, while the second best baseline obtains 58.90% (see Table 1). In particular, our method outperforms the baselines by a large margin when the target task has an insufficient number of training samples and when transferring from multiple source models.

Organization. The rest of the paper is organized as follows. In Section 2, we describe our method for selective knowledge transfer and the training scheme for learning the proposed meta-networks. Section 3 presents our experimental results under various settings, and Section 4 concludes the paper.

2 Learning What and Where to Transfer

Our goal is to learn to transfer useful knowledge from the source network to the target network, without requiring manual layer association or feature selection. To this end, we propose a meta-learning method that learns what knowledge of the source network to transfer to which layer in the target network. In this paper, we primarily focus on transfer learning between convolutional neural networks, but our method is generic and is applicable to other types of deep neural networks as well.

In Section 2.1, we describe meta-networks that learn what to transfer (to selectively transfer only the useful channels/features to a target model) and where to transfer (to decide a layer-matching configuration that encourages learning the target task). Section 2.2 presents how to train the proposed meta-networks jointly with the target network.

(a) Where to transfer
(b) What to transfer
Figure 2: Our meta-transfer learning method for selective knowledge transfer. The meta-transfer networks are parameterized by φ and are learned via meta-learning. The dashed lines indicate flows of tensors such as feature maps, and solid lines denote weighted feature matching. (a) The meta-network g outputs the weight λ^{m,n} of each matching pair between the m-th and n-th layers of the source and target models, respectively, and (b) the meta-network f outputs the weight w^{m,n}_c for each channel c.

2.1 Weighted Feature Matching

If a convolutional neural network is well-trained on a task, then its intermediate feature spaces should contain useful knowledge for the task. Thus, mimicking the well-trained features might be helpful for training another network. To formalize the loss enforcing this effect, let x be an input and y be the corresponding (ground-truth) output. For image classification tasks, x and y are images and their class labels. Let S^m(x) be the intermediate feature maps of the m-th layer of the pre-trained source network S. Our goal is then to train another target network T_θ with parameter θ, utilizing the knowledge of S. Let T^n_θ(x) be the intermediate feature maps of the n-th layer of the target network. Then, we minimize the following objective, similar to that used in FitNet (Romero et al., 2015), to transfer the knowledge from the m-th layer of S to the n-th layer of T_θ:

L^{m,n}_{fm}(θ | x) = (1/2) ‖ r_θ(T^n_θ(x)) − S^m(x) ‖²₂,

where r_θ is a linear transformation parameterized by θ, such as a pointwise convolution. We refer to this method as feature matching. Here, the parameter θ consists of both the parameters of the linear transformation r and those of the non-linear neural network T, where the former are only necessary for training the latter and are not required at testing time.
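To make the matching loss concrete, here is a minimal NumPy sketch (our own illustrative shapes and names, not the authors' implementation), with the pointwise convolution written as a per-pixel channel-mixing matrix:

```python
import numpy as np

def feature_matching_loss(target_feat, source_feat, proj):
    """FitNet-style feature matching (a sketch).

    target_feat: (C_t, H, W) feature maps from the target layer
    source_feat: (C_s, H, W) feature maps from the source layer
    proj:        (C_s, C_t) pointwise (1x1) convolution weights
    Returns 0.5 * squared L2 distance between projected target and source.
    """
    # A 1x1 convolution is a linear map over channels applied at each pixel.
    projected = np.einsum('st,thw->shw', proj, target_feat)
    return 0.5 * np.sum((projected - source_feat) ** 2)

# Sanity check: identity projection and identical features give zero loss.
feat = np.random.randn(4, 8, 8)
loss = feature_matching_loss(feat, feat, np.eye(4))  # 0.0
```

The projection lets the target layer have a different channel count than the source layer, which is what makes matching across heterogeneous architectures possible.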

What to transfer.

In general transfer learning settings, the target model is trained for a task different from that of the source model. In this case, not all intermediate features of the source model may be useful for learning the target task. Thus, to give more attention to the useful channels, we consider a weighted feature matching loss that can emphasize channels according to their utility on the target task:

L^{m,n}_{wfm}(θ | x, w^{m,n}) = Σ_c w^{m,n}_c · (1 / HW) Σ_{i,j} [ r_θ(T^n_θ(x))_{c,i,j} − S^m(x)_{c,i,j} ]²,

where H × W is the spatial size of S^m(x) and r_θ(T^n_θ(x)), the inner summation is over 1 ≤ i ≤ H and 1 ≤ j ≤ W, and w^{m,n}_c is the non-negative weight of channel c with Σ_c w^{m,n}_c = 1. Since the important channels to transfer can vary for each input image, we set the channel weights as a function, w^{m,n} = f^{m,n}_φ(S^m(x)), given by the softmax output of a small meta-network which takes features of the source model as input. We let φ denote the parameters of the meta-networks throughout this paper.
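A corresponding sketch of the weighted variant, assuming the channel logits come from a meta-network (the function names and shapes here are ours):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def weighted_fm_loss(target_proj, source_feat, channel_logits):
    """Weighted feature matching (a sketch, our own notation).

    target_proj:    (C, H, W) target features already projected to C channels
    source_feat:    (C, H, W) source features
    channel_logits: (C,) raw meta-network outputs; softmax yields weights
                    w_c >= 0 with sum_c w_c = 1
    """
    w = softmax(channel_logits)
    # Mean squared difference per channel, averaged over the H*W spatial grid.
    per_channel = ((target_proj - source_feat) ** 2).mean(axis=(1, 2))
    return float((w * per_channel).sum())

# With uniform logits, every channel has per-channel MSE 1.0 here, so the
# weighted loss is approximately 1.0 regardless of the weighting.
demo = weighted_fm_loss(np.zeros((3, 2, 2)), np.ones((3, 2, 2)), np.zeros(3))
```

The softmax guarantees the constraint on the weights by construction, so no projection step onto the simplex is needed during learning.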

Where to transfer.

When transferring knowledge from a source model to a target model, deciding which pairs of layers in the source and target models to match is crucial for effectiveness. Previous approaches (Romero et al., 2015; Zagoruyko & Komodakis, 2017) select the pairs manually based on prior knowledge of the architectures or semantic similarities between tasks. For example, attention transfer (Zagoruyko & Komodakis, 2017) matches the last feature maps of each group of residual blocks in ResNet (He et al., 2016). However, finding the optimal layer association is not a trivial problem and requires exhaustive trial-and-error tuning, given models with different numbers of layers or heterogeneous architectures, e.g., between ResNet (He et al., 2016) and VGG (Simonyan & Zisserman, 2015). Hence, we introduce a learnable weight λ^{m,n} for each pair (m, n), which decides the amount of transfer between the m-th and n-th layers of the source and target models, respectively. We also set λ^{m,n} for each pair as the output of a meta-network g^{m,n}_φ that automatically decides which pairs of layers are important for learning the target task.

The combined transfer loss, given the weights of channels and the weights of matching pairs, is

L_wfm(θ | x, φ) = Σ_{(m,n) ∈ C} λ^{m,n} L^{m,n}_{wfm}(θ | x, w^{m,n}),

where C is the set of candidate pairs. Our final loss to train a target model is then given as

L_total(θ | x, y, φ) = L_org(θ | x, y) + β L_wfm(θ | x, φ),

where L_org is the original loss (e.g., cross entropy) and β is a hyper-parameter. We note that w^{m,n} and λ^{m,n} decide what and where to transfer, respectively. We provide an illustration of our transfer learning scheme in Figure 2.
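Putting the pieces together, the total objective can be sketched as follows (the pair keys and loss values are placeholders of our own choosing):

```python
def total_loss(org_loss, pair_losses, lambdas, beta):
    """L_total = L_org + beta * sum_{(m,n)} lambda^{m,n} * L_wfm^{m,n}.

    org_loss:    scalar task loss (e.g., cross entropy)
    pair_losses: dict {(m, n): weighted feature-matching loss for the pair}
    lambdas:     dict {(m, n): non-negative transfer amount for the pair}
    beta:        trade-off hyper-parameter
    """
    transfer = sum(lambdas[p] * pair_losses[p] for p in pair_losses)
    return org_loss + beta * transfer

# Example: two candidate pairs, one of which is effectively switched off
# by a zero lambda.
val = total_loss(1.0, {(1, 1): 2.0, (2, 1): 4.0},
                 {(1, 1): 0.5, (2, 1): 0.0}, beta=0.1)
```

Because λ^{m,n} multiplies each pair's loss, a near-zero λ^{m,n} prunes the corresponding layer association automatically, which is exactly the "where to transfer" mechanism.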

2.2 Training Meta-Networks and Target Model

Our goal is to achieve high performance on the target task when the target model is learned using the training objective L_total. To maximize this performance, the feature matching term should encourage learning of features useful for the target task, e.g., predicting labels. To measure and increase the usefulness of the feature matching decided by the meta-networks parameterized by φ, a standard approach is the following bilevel scheme (Colson et al., 2007) for training φ, e.g., see (Finn et al., 2017; Franceschi et al., 2018):

  1. Update θ to minimize L_total for T steps.

  2. Measure L_org and update φ to minimize it.

In the above, the actual objective L_total for learning the target model is used in the inner loop, and the original loss L_org is used as a meta-objective to measure how effective φ is for learning a well-performing target model.

However, since our meta-networks affect the learning procedure of the target model only weakly, through the regularization term L_wfm, their influence on L_org can be very marginal unless one uses a very large number T of inner-loop iterations. Consequently, this causes difficulties in updating φ using the gradient ∇_φ L_org. To tackle this challenge, we propose the following alternative scheme:

  1. Update θ to minimize L_wfm for T steps.

  2. Update θ to minimize L_org once.

  3. Measure L_org and update φ to minimize it.

In the first stage, given the current parameter θ₀, we update the target model for T steps via gradient-based algorithms minimizing L_wfm. Namely, the resulting parameter θ_T is learned only using the knowledge of the source model. Since transfer is done in the form of feature matching, it is feasible to train features useful for the target task by selectively mimicking the source features. More importantly, this increases the influence of the regularization term L_wfm on the learning procedure of the target model in the inner loop, since the target features are trained solely by the source knowledge (without target labels). The second stage is a one-step adaptation from θ_T toward the target labels. Then, in the third stage, the task-specific objective L_org can measure how quickly the target model has adapted (via only one step from θ_T) to the target task, on the samples used in the first and second stages. Finally, the meta-parameter φ can be trained by minimizing this objective. The above 3-stage scheme enables significantly faster training of φ than the standard 2-stage one, because it measures the effect of the regularization term more directly on the original objective L_org, and allows a small T to suffice for updating φ meaningfully (we use a small T in our experiments).
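To illustrate the 3-stage scheme, the following toy scalar sketch (entirely our own construction, not the paper's code) uses a quadratic "transfer" loss pulling θ toward φ, a quadratic task loss pulling θ toward a target y*, and a finite-difference meta-gradient in place of Reverse-HG:

```python
def inner_loop(phi, theta0, y_star, lr=0.1, T=2):
    """Stages 1-2: T steps on L_wfm(theta|phi) = 0.5*(theta - phi)^2,
    then one step on L_org(theta) = 0.5*(theta - y_star)^2."""
    theta = theta0
    for _ in range(T):               # stage 1: transfer-only updates
        theta -= lr * (theta - phi)  # gradient of 0.5*(theta - phi)^2
    theta -= lr * (theta - y_star)   # stage 2: one task-loss step
    return theta

def meta_objective(phi, theta0=0.0, y_star=1.0):
    """Stage 3: task loss measured after the one-step adaptation."""
    theta = inner_loop(phi, theta0, y_star)
    return 0.5 * (theta - y_star) ** 2

def meta_step(phi, meta_lr=0.5, eps=1e-5):
    """Update phi with a finite-difference estimate of the meta-gradient."""
    g = (meta_objective(phi + eps) - meta_objective(phi - eps)) / (2 * eps)
    return phi - meta_lr * g

phi = 0.0
for _ in range(100):
    phi = meta_step(phi)
# phi drifts so that transfer-only training already moves theta toward y*,
# i.e., the one-step-adapted task loss shrinks.
```

Since the inner loop here optimizes only the transfer loss, the meta-objective depends on φ strongly even for T = 2; had the inner loop minimized the full L_total, φ's influence would be diluted by the task-loss term, which is exactly the issue motivating the 3-stage scheme.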

In the case of using the vanilla gradient descent algorithm for the updates, the 3-stage training scheme to learn the meta-parameters φ can be formally written as the following optimization task:

minimize_φ  L_org(θ_{T+1} | x, y)

subject to

θ_{t+1} = θ_t − α ∇_θ L_wfm(θ_t | x, φ),  t = 0, …, T − 1,
θ_{T+1} = θ_T − α ∇_θ L_org(θ_T | x, y),

where α is a learning rate. To solve the above optimization problem, we use Reverse-HG (Franceschi et al., 2017), which can compute the meta-gradient ∇_φ L_org(θ_{T+1} | x, y) efficiently using Hessian-vector products.

To train the target model jointly with the meta-networks, we alternately update the target model parameters θ and the meta-network parameters φ. We first update the target model for a single step with the objective L_total. Then, given the current target model parameters, we update the meta-network parameters φ using the 3-stage bilevel training scheme described above. This eliminates the need for an additional meta-training phase for learning φ. The proposed training scheme is formally outlined in Algorithm 1.

  Input: Dataset D, learning rate α
  repeat
     Sample a batch B from D
     Update θ to minimize Σ_{(x,y)∈B} L_total(θ | x, y, φ)
     for t = 0 to T − 1 do
        θ_{t+1} ← θ_t − α ∇_θ Σ_{(x,y)∈B} L_wfm(θ_t | x, φ)
     end for
     θ_{T+1} ← θ_T − α ∇_θ Σ_{(x,y)∈B} L_org(θ_T | x, y)
     Update φ using ∇_φ Σ_{(x,y)∈B} L_org(θ_{T+1} | x, y)
  until convergence
Algorithm 1 Joint learning of the target model θ with meta-parameters φ

3 Experiments

(a) Single
(b) One-to-one
(c) All-to-all
(d) Learned matching
Figure 3: (a)-(c) Matching configurations between ResNet32 (left) and VGG9 (right). (d) The amount of transfer between layers after learning. Line width indicates the transfer amount; lines whose λ^{m,n} is negligibly small are omitted.

We validate our meta-transfer learning method that learns what and where to transfer, between heterogeneous network architectures and tasks.

3.1 Setups

Network architectures and tasks for source and target.

To evaluate various transfer learning methods including ours, we perform experiments on image classification tasks at two scales: small (32×32) and large (224×224) images. For the small scale, we use the TinyImageNet (https://tiny-imagenet.herokuapp.com/) dataset as the source task, and the CIFAR-10, CIFAR-100 (Krizhevsky & Hinton, 2009), and STL-10 (Coates et al., 2011) datasets as target tasks. We train a 32-layer ResNet (He et al., 2016) and a 9-layer VGG (Simonyan & Zisserman, 2015) on the source and target tasks, respectively. For the large scale, the ImageNet (Deng et al., 2009) dataset is used as the source dataset, and the Caltech-UCSD Birds 200 (Wah et al., 2011), MIT Indoor Scene Recognition (Quattoni & Torralba, 2009), Stanford 40 Actions (Yao et al., 2011), and Stanford Dogs (Khosla et al., 2011) datasets as target tasks. For these datasets, we use a 34-layer and an 18-layer ResNet as the source and target models, respectively, unless otherwise stated.

Meta-network architecture.

For all experiments, we construct the meta-networks as small fully-connected networks, one for each pair (m, n) ∈ C, where C is the set of candidate pairs, i.e., the matching configuration (see Figure 3). Each meta-network takes the globally average-pooled features of the m-th layer of the source network as input, and outputs w^{m,n} and λ^{m,n}. For the channel weights w^{m,n}, we use the softmax activation to generate them while satisfying Σ_c w^{m,n}_c = 1; for the transfer amount λ^{m,n} between layers, we use ReLU6 (Krizhevsky & Hinton, 2010), i.e., min(max(0, z), 6), to ensure that λ^{m,n} is non-negative and to prevent it from becoming too large.
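A minimal sketch of one such meta-network head, assuming a single linear layer per output (the parameter names and shapes are our own, not the paper's exact architecture):

```python
import numpy as np

def relu6(z):
    return np.minimum(np.maximum(z, 0.0), 6.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def meta_network(source_feat, W_w, b_w, W_lam, b_lam):
    """One (m, n) meta-network head (a sketch).

    source_feat:  (C, H, W) source feature maps
    W_w, b_w:     parameters producing C channel logits -> softmax -> w^{m,n}
    W_lam, b_lam: parameters producing one scalar -> ReLU6 -> lambda^{m,n}
    """
    pooled = source_feat.mean(axis=(1, 2))   # global average pooling, (C,)
    w = softmax(W_w @ pooled + b_w)          # channel weights, sum to 1
    lam = relu6(W_lam @ pooled + b_lam)      # transfer amount in [0, 6]
    return w, float(lam)
```

The softmax and ReLU6 outputs enforce the constraints on w^{m,n} and λ^{m,n} by construction, so the meta-networks can be trained by plain gradient descent without extra projection steps.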

(a) Transfer from a lower source layer
(b) Transfer from a middle source layer
(c) Transfer from a higher source layer
Figure 4: Change of λ^{m,n} during training for STL-10 as the target task with TinyImageNet as the source task. We plot the mean and standard deviation of λ^{m,n} over all samples every 10 epochs.

Source task           | TinyImageNet            | ImageNet
Target task           | CIFAR-100  | STL-10     | CUB200     | MIT67      | Stanford40 | Stanford Dogs
Scratch               | 67.69±0.22 | 65.18±0.91 | 42.15±0.75 | 48.91±0.53 | 36.93±0.68 | 58.08±0.26
LwF                   | 69.23±0.09 | 68.64±0.58 | 45.52±0.66 | 53.73±2.14 | 39.73±1.63 | 66.33±0.45
AT (one-to-one)       | 67.54±0.40 | 74.19±0.22 | 57.74±1.17 | 59.18±1.57 | 59.29±0.91 | 69.70±0.08
LwF+AT (one-to-one)   | 68.75±0.09 | 75.06±0.57 | 58.90±1.32 | 61.42±1.68 | 60.20±1.34 | 72.67±0.26
FM (single)           | 69.40±0.67 | 75.00±0.34 | 47.60±0.31 | 55.15±0.93 | 42.93±1.48 | 66.05±0.76
FM (one-to-one)       | 69.97±0.24 | 76.38±1.18 | 48.93±0.40 | 54.88±1.24 | 44.50±0.96 | 67.25±0.88
L2T-w (single)        | 70.27±0.09 | 74.35±0.92 | 51.95±0.83 | 60.41±0.37 | 46.25±3.66 | 69.16±0.70
L2T-w (one-to-one)    | 70.02±0.19 | 76.42±0.52 | 56.61±0.20 | 59.78±1.90 | 48.19±1.42 | 69.84±1.45
L2T-ww (all-to-all)   | 70.96±0.61 | 78.31±0.21 | 65.05±1.19 | 64.85±2.75 | 63.08±0.88 | 78.08±0.96
Table 1: Classification accuracy (%) of transfer learning from TinyImageNet (32×32) or ImageNet (224×224) to the CIFAR-100, STL-10, Caltech-UCSD Birds 200 (CUB200), MIT Indoor Scene Recognition (MIT67), Stanford 40 Actions (Stanford40), and Stanford Dogs datasets. For TinyImageNet, ResNet32 and VGG9 are used as the source and target models, respectively; for ImageNet, ResNet34 and ResNet18 are used.

Compared schemes for transfer learning.

We compare our methods with the following prior methods and their combinations: learning without forgetting (LwF) (Li & Hoiem, 2018), attention transfer (AT) (Zagoruyko & Komodakis, 2017), and unweighted feature matching (FM) (Romero et al., 2015). (In our experimental setup, we reproduce for these baselines relative improvements over training from scratch similar to those reported in the original papers. We do not report results of Jacobian matching (JM) (Srinivas & Fleuret, 2018), as the improvement of LwF+AT+JM over LwF+AT is marginal in our setups.) Here, AT and FM transfer knowledge at the feature level, as ours does, by matching attention maps or feature maps between source and target layers, respectively. Feature-level transfer methods generally choose layers just before down-scaling, e.g., the last layer of each residual group for ResNet, and match pairs of layers of the same spatial size. Following this convention, we evaluate two hand-crafted configurations (single, one-to-one) for the prior methods and a new configuration (all-to-all) for our methods: (a) single: use a pair consisting of the last feature in the source model and a layer with the same spatial size in the target model; (b) one-to-one: connect each layer just before down-scaling in the source model to a target layer of the same spatial size; (c) all-to-all: use all pairs of layers just before down-scaling, e.g., between the ResNet and VGG architectures, we consider all such pairs. For matching features of different spatial sizes, we simply use bilinear interpolation. These configurations are illustrated in Figure 3. Among the various combinations of prior methods and matching configurations, we only report the results of those achieving meaningful performance gains.
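The bilinear resizing used for matching mismatched spatial sizes can be sketched as follows (an align-corners-style implementation of our own; the paper does not specify the exact interpolation settings):

```python
import numpy as np

def bilinear_resize(feat, out_h, out_w):
    """Resize (C, H, W) feature maps with bilinear interpolation (a sketch,
    sampling the corner pixels exactly, i.e., align-corners style)."""
    c, h, w = feat.shape
    ys = np.linspace(0, h - 1, out_h)        # fractional source rows
    xs = np.linspace(0, w - 1, out_w)        # fractional source columns
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[None, :, None]            # row interpolation weights
    wx = (xs - x0)[None, None, :]            # column interpolation weights
    top = feat[:, y0][:, :, x0] * (1 - wx) + feat[:, y0][:, :, x1] * wx
    bot = feat[:, y1][:, :, x0] * (1 - wx) + feat[:, y1][:, :, x1] * wx
    return top * (1 - wy) + bot * wy
```

Because the interpolation is a fixed linear map, gradients of the matching loss flow through it to the target features without any special handling.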

Figure 5: Transfer from TinyImageNet to CIFAR-10 with varying numbers of training samples per class in CIFAR-10. The x-axis is plotted in logarithmic scale.

3.2 Evaluation on Various Target Tasks

We first evaluate the effect of learning what to transfer (L2T-w) without learning where to transfer. To this end, we use the conventional hand-crafted matching configurations, single and one-to-one, illustrated in Figures 3(a) and 3(b), respectively. For most cases reported in Table 1, L2T-w improves the performance on target tasks compared to its unweighted counterpart (FM); for fine-grained target tasks transferred from ImageNet, the gain of L2T-w over FM is even more significant. These results support that our method of learning what to transfer is most effective when the target task has a specific type of input distribution, e.g., fine-grained classification, while the source model is trained on a general task.

Next, instead of using hand-crafted matching pairs of layers, we also learn where to transfer, starting from all matching pairs as illustrated in Figure 3(c). The final scheme proposed in our paper, learning to transfer what and where (L2T-ww), often improves the performance significantly compared to hand-crafted matching (L2T-w). As a result, L2T-ww achieves the best accuracy, by a large margin, for all cases reported in Table 1; e.g., on the CUB200 dataset, we attain a 10.4% relative improvement over the second best baseline (65.05% vs. 58.90%).

Figure 3(d) shows the amounts of transfer between pairs of layers after learning transfer from TinyImageNet to STL-10. As shown in the figure, our method transfers knowledge mainly to the higher layers of the target model, while the amounts for most other pairs are comparatively small. Clearly, such matching pairs are not trivial to find by hand-crafted tuning, which justifies that our method for learning where to transfer is useful. Furthermore, since our method outputs λ^{m,n} per sample, the amounts of transfer are adjusted more effectively than with matching pairs fixed across all samples. For example, the amounts of transfer from lower-level source features have relatively smaller variance over the samples (Figure 4(a)) compared to those from higher-level source features (Figure 4(c)). This is because higher-level features are more task-specific while lower-level features are more task-agnostic. It evidences that the meta-networks adjust the amounts of transfer for each sample, considering the relationship between the tasks and the levels of abstraction of the features.

3.3 Experiments on Limited-Data Regimes

When a target task has a small number of labeled training samples, transfer learning can be even more effective. To evaluate our method (L2T-ww) in such limited-data scenarios, we use CIFAR-10 as the target task while reducing the number of samples: we vary the number of training samples per class, and compare the performance of learning from scratch, LwF, AT, LwF+AT, and L2T-ww. The results are reported in Figure 5. They show that ours achieves significantly larger improvements over the other baselines as the volume of the target dataset shrinks. Observe, for instance, that ours needs far fewer samples per class to match the accuracy that LwF achieves with 250 samples per class.

3.4 Experiments on Multi-Source Transfer

In practice, one can have multiple pre-trained source models trained on various source datasets. Transfer from multiple sources may potentially provide more knowledge for learning a target task; however, using them simultaneously could require even more hand-crafted configuration of transfer, such as balancing the transfer from many sources or choosing different pairs of layers depending on the source models. To evaluate the effects of using multiple source models, we consider scenarios with transfer from two source models simultaneously, where the models have different architectures (ResNet20, ResNet32) or are trained on different datasets (TinyImageNet, CIFAR-10). In Table 2, we report the results of ours (L2T-ww) and other transfer methods on the STL-10 target task with a 9-layer VGG as the target model architecture.

Our method consistently improves the target model performance as the sources become more informative (from left to right in Table 2), i.e., when using a larger source model (ResNet20 → ResNet32) or a different second source dataset (TinyImageNet → CIFAR-10). This is not the case for the other methods. In particular, comparing the best performance of each method transferred from two TinyImageNet models versus TinyImageNet+CIFAR-10 models as sources, one can conclude that ours is the only one that effectively aggregates heterogeneous source knowledge, i.e., TinyImageNet+CIFAR-10. This shows the importance of choosing the right transfer configuration when using multiple source models, and confirms that ours can automatically decide a useful configuration among the many possible candidate pairs for transfer.

3.5 Visualization

With learning what to transfer, our weighted feature matching allocates larger attention to task-related channels of the feature maps. To visualize the attention used in knowledge transfer, we compare saliency maps (Simonyan et al., 2014) for unweighted (FM) and weighted (L2T-w) matching between the last layers of the source and target models. A saliency map can be computed as

M(x)_{i,j} = max_c | ∂ L_wfm(θ | x) / ∂ x_{c,i,j} |,

where x is an image, c is a channel of the image (e.g., RGB), and (i, j) is a pixel position. For the unweighted case, we use uniform channel weights; for the weighted case, we use the outputs of the meta-networks learned by our meta-training scheme. Figure 6 shows which pixels are more or less activated in the saliency map of L2T-w compared to FM. As shown in the figure, pixels containing task-specific objects (birds or dogs) are more activated with L2T-w, while background pixels are less activated. This means the weights make the knowledge of the source model more task-specific, which consequently improves transfer learning.
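The per-pixel reduction of the saliency map can be sketched as follows (the gradient itself would come from backpropagating the matching loss to the input; here we pass it in directly):

```python
import numpy as np

def saliency_map(grad_wrt_image):
    """M(x)_{i,j} = max_c |dL/dx_{c,i,j}|: at each pixel, take the maximum
    absolute gradient over the image channels (Simonyan et al., 2014).

    grad_wrt_image: (C, H, W) gradient of the loss w.r.t. the input image.
    """
    return np.abs(grad_wrt_image).max(axis=0)

# Toy example: a 2-channel "gradient" where one pixel dominates.
g = np.zeros((2, 3, 3))
g[0, 1, 1] = -5.0   # strong (negative) gradient in channel 0
g[1, 1, 1] = 2.0
m = saliency_map(g)  # only the center pixel is highlighted, with value 5.0
```

Taking the channel-wise maximum of absolute gradients collapses color information so that the map reflects only how sensitive the loss is at each spatial location.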

4 Conclusion

We propose a transfer method based on meta-learning that can transfer knowledge selectively depending on the tasks and architectures. Our method transfers the knowledge more important for learning a target task, identifying what and where to transfer using meta-networks. To learn the meta-networks, we design an efficient meta-learning scheme which requires only a few steps in the inner-loop procedure, allowing us to jointly train the target model and the meta-networks. We believe that our work sheds new light on complex transfer learning between heterogeneous and/or multiple network architectures and tasks.


This work was supported by the Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No. 2016-0-00563, Research on Adaptive Machine Learning Technology Development for Intelligent Autonomous Digital Companion), and by the Engineering Research Center Program through the National Research Foundation of Korea (NRF) funded by the Korean government (MSIT) (NRF-2018R1A5A1059921).


  • Coates et al. (2011) Coates, A., Ng, A., and Lee, H. An analysis of single-layer networks in unsupervised feature learning. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics (AISTATS 2011), 2011.
  • Colson et al. (2007) Colson, B., Marcotte, P., and Savard, G. An overview of bilevel optimization. Annals of operations research, 2007.
  • Cui et al. (2018) Cui, Y., Song, Y., Sun, C., Howard, A., and Belongie, S. Large scale fine-grained categorization and domain-specific transfer learning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2018), 2018.
  • Deng et al. (2009) Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2009), 2009.
  • Finn et al. (2017) Finn, C., Abbeel, P., and Levine, S. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning (ICML 2017), 2017.
  • Franceschi et al. (2017) Franceschi, L., Donini, M., Frasconi, P., and Pontil, M. Forward and reverse gradient-based hyperparameter optimization. In Proceedings of the 34th International Conference on Machine Learning (ICML 2017), 2017.
  • Franceschi et al. (2018) Franceschi, L., Frasconi, P., Salzo, S., Grazzi, R., and Pontil, M. Bilevel programming for hyperparameter optimization and meta-learning. In Proceedings of the 35th International Conference on Machine Learning (ICML 2018), 2018.
  • He et al. (2016) He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), 2016.
  • Hinton et al. (2015) Hinton, G., Vinyals, O., and Dean, J. Distilling the knowledge in a neural network. In Deep Learning and Representation Learning Workshop, Advances in Neural Information Processing Systems 29 (NIPS 2015), 2015.
  • Khosla et al. (2011) Khosla, A., Jayadevaprakash, N., Yao, B., and Fei-Fei, L. Novel dataset for fine-grained image categorization. In The 1st Workshop on Fine-Grained Visual Categorization, the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2011), 2011.
  • Kingma & Ba (2015) Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. In The 3rd International Conference on Learning Representations (ICLR 2015), 2015.
  • Krizhevsky & Hinton (2009) Krizhevsky, A. and Hinton, G. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
  • Krizhevsky & Hinton (2010) Krizhevsky, A. and Hinton, G. Convolutional deep belief networks on CIFAR-10. Technical report, 2010.
  • Li & Hoiem (2018) Li, Z. and Hoiem, D. Learning without forgetting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018.
  • Loshchilov & Hutter (2017) Loshchilov, I. and Hutter, F. SGDR: Stochastic gradient descent with warm restarts. In The 5th International Conference on Learning Representations (ICLR 2017), 2017.
  • Naik & Mammone (1992) Naik, D. K. and Mammone, R. Meta-neural networks that learn by learning. In International Joint Conference on Neural Networks (IJCNN 1992), 1992.
  • Pan & Yang (2010) Pan, S. J. and Yang, Q. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 2010.
  • Quattoni & Torralba (2009) Quattoni, A. and Torralba, A. Recognizing indoor scenes. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2009), 2009.
  • Razavian et al. (2014) Razavian, A. S., Azizpour, H., Sullivan, J., and Carlsson, S. Cnn features off-the-shelf: an astounding baseline for recognition. In The IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPR 2014), 2014.
  • Romero et al. (2015) Romero, A., Ballas, N., Kahou, S. E., Chassang, A., Gatta, C., and Bengio, Y. Fitnets: Hints for thin deep nets. In The 3rd International Conference on Learning Representations (ICLR 2015), 2015.
  • Simonyan & Zisserman (2015) Simonyan, K. and Zisserman, A. Very deep convolutional networks for large-scale image recognition. In The 3rd International Conference on Learning Representations (ICLR 2015), 2015.
  • Simonyan et al. (2014) Simonyan, K., Vedaldi, A., and Zisserman, A. Deep inside convolutional networks: Visualising image classification models and saliency maps. In The 2nd International Conference on Learning Representations Workshop (ICLR 2014), 2014.
  • Srinivas & Fleuret (2018) Srinivas, S. and Fleuret, F. Knowledge transfer with Jacobian matching. In Proceedings of the 35th International Conference on Machine Learning (ICML 2018), 2018.
  • Thrun & Pratt (2012) Thrun, S. and Pratt, L. Learning to learn. Springer Science & Business Media, 2012.
  • Wah et al. (2011) Wah, C., Branson, S., Welinder, P., Perona, P., and Belongie, S. The Caltech-UCSD Birds-200-2011 Dataset. Technical Report CNS-TR-2011-001, California Institute of Technology, 2011.
  • Yao et al. (2011) Yao, B., Jiang, X., Khosla, A., Lin, A. L., Guibas, L., and Fei-Fei, L. Human action recognition by learning bases of action attributes and parts. In The IEEE International Conference on Computer Vision (ICCV 2011), 2011.
  • Zagoruyko & Komodakis (2017) Zagoruyko, S. and Komodakis, N. Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer. In The 5th International Conference on Learning Representations (ICLR 2017), 2017.
  • Zeiler & Fergus (2014) Zeiler, M. D. and Fergus, R. Visualizing and understanding convolutional networks. In The European Conference on Computer Vision (ECCV 2014), 2014.

Appendix A Network Architectures and Tasks

For small image experiments (32 × 32 images), we use TinyImageNet (https://tiny-imagenet.herokuapp.com/) as a source task, and use the CIFAR-10, CIFAR-100 (Krizhevsky & Hinton, 2009) and STL-10 (Coates et al., 2011) datasets as target tasks. CIFAR-10 and CIFAR-100 have 10 and 100 classes containing 5,000 and 500 training images per class, respectively, and each image has 32 × 32 pixels. STL-10 consists of 10 classes, with 500 labeled images per class in the training set. Since the original images in TinyImageNet and STL-10 are not 32 × 32, we resize them to 32 × 32 for training and testing. We use a 32-layer ResNet (He et al., 2016) pre-trained on TinyImageNet as the source model, and we train a 9-layer VGG (Simonyan & Zisserman, 2015), the modified architecture used in Srinivas & Fleuret (2018), on the CIFAR-10/100 and STL-10 datasets.

For large image experiments (224 × 224 images), we use a 34-layer ResNet pre-trained on ImageNet (Deng et al., 2009) as the source model, and consider the Caltech-UCSD Birds 200 (Wah et al., 2011), MIT Indoor Scene Recognition (Quattoni & Torralba, 2009), Stanford 40 Actions (Yao et al., 2011) and Stanford Dogs (Khosla et al., 2011) datasets as target tasks. Caltech-UCSD Birds 200 (CUB200) contains 5k training images of 200 bird species. MIT Indoor Scene Recognition (MIT67) has 67 labels for indoor scenes and 80 training images per label. Stanford 40 Actions (Stanford40) contains 4k training images of 40 human actions. Stanford Dogs has 12k training images of 120 dog categories. For these fine-grained target datasets, we train 18-layer ResNets.
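The source/target pairings above can be collected into a small configuration table. This is a sketch for reference only; the string labels are informal names rather than identifiers from the paper's code, and the 224 × 224 input size for the large-image setting is the standard ImageNet resolution, assumed here:

```python
# Source/target pairings used in the experiments (informal labels).
EXPERIMENTS = {
    "small_image": {  # 32 x 32 inputs
        "source": ("TinyImageNet", "ResNet-32"),
        "target_model": "VGG-9",
        "target_tasks": ["CIFAR-10", "CIFAR-100", "STL-10"],
    },
    "large_image": {  # 224 x 224 inputs (assumed ImageNet resolution)
        "source": ("ImageNet", "ResNet-34"),
        "target_model": "ResNet-18",
        "target_tasks": ["CUB200", "MIT67", "Stanford40", "Stanford Dogs"],
    },
}
```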

Appendix B Optimization

All target networks are trained by stochastic gradient descent (SGD) with momentum. We use weight decay and decay the initial learning rate η_0 with a cosine annealing schedule (Loshchilov & Hutter, 2017): η_t = (η_0/2)(1 + cos(tπ/T_max)), where η_t is the learning rate at epoch t and T_max is the maximum epoch. For all experiments, we train target networks for T_max epochs. We use different mini-batch sizes for small image experiments, e.g., CIFAR, and large image experiments, e.g., CUB200. When using feature matching, we weight the matching loss with a coefficient β. For data pre-processing and augmentation schemes, we follow He et al. (2016). We use the ADAM optimizer (Kingma & Ba, 2015) for training the meta-networks, with the learning rate and weight decay chosen from two candidate values depending on the task. In our meta-training scheme, we observe that a small number of inner updates T is enough to learn what and where to transfer. We repeat each experiment 3 times and report the average performance as well as the standard deviation.
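The cosine annealing schedule above can be sketched as a small function; `eta0` and `t_max` stand for the initial learning rate and the maximum epoch, whose concrete values are set per experiment:

```python
import math

def cosine_annealing_lr(eta0, t, t_max):
    """Cosine-annealed learning rate (Loshchilov & Hutter, 2017):
    decays from eta0 at epoch 0 down to 0 at epoch t_max."""
    return eta0 / 2 * (1 + math.cos(t * math.pi / t_max))
```

At t = 0 this returns eta0, at t = t_max it returns 0, and halfway through training the learning rate is exactly eta0 / 2.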

Appendix C Ablation Studies

C.1 Comparison between meta-networks and meta-weights

Target task CUB200 MIT67 Stanford40
meta-weights 61.75 64.10 58.88
meta-networks 65.05 64.85 63.08
Table 2: Classification accuracy (%) of transfer learning using meta-networks or meta-weights.

The channel importance and connection importance weights determine the amount of transfer, and the meta-networks produce them for each given sample. One can instead learn these importances directly as constant meta-weights using the suggested bilevel scheme, without meta-networks. Here, we compare the effectiveness of using meta-networks, which assign a different amount of transfer to each sample, against learning meta-weights directly, which assigns the same importance to all samples. For a fair comparison, we use the same hyperparameters as described in Sections A and B, except for the meta-parameters. As reported in Table 2, target models using meta-networks outperform those using meta-weights by up to 4.2%, which supports the effectiveness of selective transfer depending on samples.
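The contrast can be illustrated with a minimal sketch. All shapes are hypothetical, and the linear map below is an illustrative stand-in for the paper's meta-network architecture: a meta-network maps each sample's pooled features to per-sample channel importances, whereas meta-weights are a single learned vector shared by all samples.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, feat_dim, batch = 4, 8, 3

# Constant meta-weights: one importance vector shared by every sample.
meta_weights = np.full(n_channels, 1.0 / n_channels)

# Meta-network: a (hypothetical) linear map from pooled features to
# per-sample channel importances, normalized with a softmax.
W = rng.normal(size=(feat_dim, n_channels))

def channel_importance(pooled_features):
    logits = pooled_features @ W
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

features = rng.normal(size=(batch, feat_dim))
per_sample = channel_importance(features)  # shape: (batch, n_channels)
# Unlike meta_weights, each row of per_sample differs across samples.
```

Both variants produce normalized importances, but only the meta-network can emphasize different source channels for different inputs.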

C.2 Comparison between the proposed and the original bilevel schemes

To validate the effectiveness of the suggested bilevel scheme, we compare the performance of target models trained with meta-networks under the proposed and the original bilevel schemes. For a fair comparison, we use the same number of inner updates T for both methods, and the other hyperparameters, model architectures and the source task are the same as those in Sections A and B.

Target task CUB200 MIT67 Stanford40
Original 35.38 54.18 53.47
Ours 65.05 64.85 63.08
Table 3: Classification accuracy (%) of transfer learning using the original or proposed bilevel schemes.

The original scheme obtains significantly lower accuracies than the proposed bilevel scheme (Table 3). Even with a much larger number of inner updates, a target model trained with the original bilevel scheme fails to reach performance comparable to our bilevel scheme. Moreover, the meta-training time for the meta-networks increases linearly with the number of inner updates, so the original scheme is not applicable to practical scenarios. These results show that the proposed bilevel scheme is more effective for learning meta-networks for selective transfer.
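The cost difference can be illustrated with a toy bilevel problem. The quadratic losses, step sizes, and step counts below are hypothetical, not the paper's objectives: the original scheme would run the inner loop to near-convergence before each meta-evaluation, while the proposed scheme uses only a small fixed T, so each meta-update is cheap.

```python
def inner_updates(w, lam, T, lr=0.1):
    """T gradient steps on a toy inner (transfer-weighted) loss:
    L_inner(w) = lam * (w - 1)**2 + (w - 3)**2."""
    for _ in range(T):
        grad = 2 * lam * (w - 1) + 2 * (w - 3)
        w = w - lr * grad
    return w

def meta_objective(lam, T):
    """Outer (task) loss of the inner solution: L_outer(w) = (w - 2)**2."""
    w = inner_updates(w=0.0, lam=lam, T=T)
    return (w - 2) ** 2

# Proposed scheme: a few inner updates per meta-evaluation (cheap).
cheap = meta_objective(lam=1.0, T=2)
# Original scheme: near-convergence inner loop, T times more costly.
costly = meta_objective(lam=1.0, T=200)
```

With lam = 1 the inner loss is minimized at w = 2, so the near-convergence run reaches a near-zero outer loss, but at 100× the inner-loop cost; the meta-gradient through the short T-step unroll is what makes the proposed scheme practical.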