Deep convolutional neural networks (CNNs) have revolutionized visual understanding through the ability to learn “big models” (with hundreds of millions of parameters) from “big data” (very large numbers of images). Importantly, such data must be annotated with human-provided labels. Producing massively annotated training data for new categories or tasks of interest is typically unrealistic. Fortunately, when trained on a large enough, diverse “base” set of data (e.g., ImageNet), CNN features appear to transfer across a broad range of tasks [32, 4, 56]. An open question, however, is how to best adapt a pre-trained CNN to novel categories and tasks.
Fine-tuning is by far the dominant strategy for transfer learning with neural networks [28, 4, 32, 53, 10, 12]. This approach was pioneered by transferring knowledge from a generative to a discriminative model, and has since been generalized with great success [10, 57]. The basic pipeline involves replacing the last “classifier” layer of a pre-trained network with a new, randomly initialized layer for the target task of interest. The modified network is then fine-tuned with additional passes of appropriately tuned gradient descent on the target training set. Virtually every contemporary visual recognition system uses this pipeline. Even though its use is widespread, fine-tuning is still relatively poorly understood. For example, what fraction of the pre-trained weights actually change, and how?
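As a concrete illustration, the head-replacement step of this pipeline can be sketched in plain Python. This is a minimal sketch, not the paper's implementation: the network is modeled as a dict from layer names to weight matrices, and the layer names, shapes, and initialization scale are illustrative assumptions.

```python
import random

def replace_classifier(net, num_classes, feat_dim, seed=0):
    """Classic fine-tuning, step 1: keep the pre-trained representation
    layers as-is and swap in a randomly initialized classifier layer
    sized for the target task (num_classes outputs, feat_dim inputs)."""
    rng = random.Random(seed)
    new_head = [[rng.gauss(0.0, 0.01) for _ in range(feat_dim + 1)]  # +1: bias
                for _ in range(num_classes)]
    target_net = dict(net)            # representation layers are copied
    target_net["classifier"] = new_head
    return target_net
```

The copied representation layers are then updated by further gradient descent, while the new head is trained from scratch.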
Developmental networks: To address this issue, we explore “developmental” neural networks that grow in model capacity as new tasks are encountered. We demonstrate that growing a network by adding additional units facilitates knowledge transfer to new tasks. We explore two approaches to adding units, as shown in Figure 1: going deeper (more layers) and wider (more channels per layer). Through visualizations, we demonstrate that these additional units help guide the adaptation of pre-existing units. Deeper units allow for new compositions of pre-existing units, while wider units allow for the discovery of complementary cues that address the target task. Due to their progressive nature, developmental networks remain accurate on their source task, implying that they can learn without forgetting. Finally, we demonstrate that developmental networks particularly facilitate continual transfer across multiple tasks.
Developmental learning: Our approach is loosely inspired by developmental learning in cognitive science. Humans, and in particular children, have the remarkable ability to continually transfer previously-acquired knowledge to novel scenarios. Much of the literature from both neuroscience and psychology suggests that such sequential knowledge acquisition is intimately tied to a child’s growth and development.
Contributions: Our contributions are three-fold. (1) We first demonstrate that the dominant paradigm of fine-tuning a fixed-capacity model is sub-optimal. (2) We explore several avenues for increasing model capacity, both in terms of going deeper (more layers) and wider (more channels per layer), and consistently find that increasing capacity helps, with a slight preference for widening. (3) We show that additional units must be normalized and scaled appropriately such that the “pace of learning” is balanced with existing units in the model. Finally, we use our analysis to build a relatively simple pipeline that “grows” a pre-trained model during fine-tuning, producing state-of-the-art results across a large number of standard and heavily benchmarked datasets (for scene classification, fine-grained recognition, and action recognition).
2 Related Work
While there is a large body of work on transfer learning, much of it assumes a fixed-capacity model [32, 3, 6, 58, 15]. Notable exceptions include approaches that introduce an adaptation layer to facilitate transfer. Our work provides a systematic exploration of various methods for increasing capacity, including both the addition of new layers and the widening of existing ones. Past work has explored strategies for preserving accuracy on the source task [22, 8], while our primary focus is on improving accuracy on the target task. Most relevant to us are progressive networks, originally proposed for reinforcement learning. Interestingly, [34, 38] focus on widening a target network to be twice as large as the source one, but fine-tune only the new units. In contrast, we add a small fraction of new units (both by widening and deepening) but fine-tune the entire network, demonstrating that adaptation of old units is crucial for high performance.
Transfer learning is related to both multi-task learning [32, 4, 28, 10, 11, 45, 24, 2] and learning novel categories from few examples [49, 19, 21, 35, 51, 22, 5, 13, 50, 47, 31]. Past techniques have applied such approaches to transfer learning by learning networks that predict models rather than classes [51, 31]. This is typically done without dynamically growing the number of parameters across new tasks (as we do).
In a broad sense, our approach is related to developmental learning [26, 16, 36] and lifelong learning [41, 25, 39, 29]. Different from non-parametric shallow models (e.g., nearest neighbors) that increase capacity by memorizing new data [40, 42], our developmental network cumulatively grows its capacity as it encounters novel tasks.
3 Approach Overview
Let us consider a CNN architecture pre-trained on a source domain with abundant data, e.g., the vanilla AlexNet pre-trained on ImageNet (ILSVRC) [20, 33]. We note in Figure 1 that the CNN is composed of a feature representation module (e.g., the five convolutional layers and two fully connected layers of AlexNet) and a classifier module (e.g., the final fully-connected layer and softmax for ImageNet classification). Transferring this CNN to a novel task with limited training data (e.g., scene classification on SUN-397) is typically done through fine-tuning [3, 1, 15].
In classic fine-tuning, the target CNN is instantiated and initialized as follows: (1) the representation module, along with its pre-trained parameters, is copied from the source CNN; and (2) a new classifier module (e.g., a new final fully-connected layer and softmax for SUN-397 classification) is introduced with randomly initialized parameters. All (or a portion) of the parameters are then fine-tuned by continuing backpropagation, with a smaller learning rate for the pre-trained representation parameters. Because the source and target networks have identical structure, the representational capacity is fixed during transfer.
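The per-layer learning rates described above can be sketched as a single SGD update in plain Python. This is an illustrative sketch (momentum and weight decay omitted); the layer names and multiplier values are assumptions, not the paper's settings.

```python
def sgd_step(params, grads, base_lr, lr_mult):
    """One SGD update where each layer has its own learning-rate
    multiplier: small for pre-trained layers, large for the randomly
    initialized classifier, as in classic fine-tuning."""
    return {name: [w - base_lr * lr_mult[name] * g
                   for w, g in zip(params[name], grads[name])]
            for name in params}
```

With a multiplier of 0.1 on the representation and 1.0 on the classifier, the pre-trained weights move an order of magnitude more slowly than the new head.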
Our underlying thesis is that fine-tuning will be facilitated by increasing representational capacity during transfer learning. We do so by adding new units to the representation module. As we show later in our experiments, this significantly improves the ability to transfer knowledge to target tasks, particularly when fewer target examples are provided. We call our architecture a developmental network, in which the representation module grows with new units while the structure of the classifier module remains unchanged.
Conceptually, new units can be added to an existing network in a variety of ways. A recent analysis, however, suggests that early network layers tend to encode generic features, while later layers tend to encode task-specific features. Inspired by this observation, we choose to add new units at later layers. Specifically, we either construct a completely new top layer, leading to a depth augmented network (DA-CNN) as shown in Figure 1(b), or widen an existing top layer, leading to a width augmented network (WA-CNN) as shown in Figure 1(c). We explain these two types of network configurations in Section 4. Their combinations—a jointly depth and width augmented network (DWA-CNN) as shown in Figure 1(d) and a recursively width augmented network (WWA-CNN) as shown in Figure 1(e)—will also be discussed in Section 5.
4 Developmental Networks
For the target task, let us assume that the representation module with fixed capacity consists of $L$ layers with hidden activations $a^l \in \mathbb{R}^{n_l}$, where $n_l$ is the number of units at layer $l \in \{1, \dots, L\}$. Let $W^l$ denote the weights between layer $l-1$ and layer $l$. That is, $a^l = \sigma(W^l a^{l-1})$, where $\sigma(\cdot)$ is a non-linear function, such as ReLU. For notational simplicity, $a^{l-1}$ already includes a constant $1$ as its last element and $W^l$ includes the bias terms.
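The layer-by-layer computation with the bias folded into a trailing constant can be written directly as a small sketch (a toy forward pass in plain Python, with tiny illustrative weight matrices rather than real CNN layers):

```python
def forward(weights, x):
    """Compute a^l = ReLU(W^l a^{l-1}) layer by layer; each activation
    vector carries a trailing constant 1 so that W^l can absorb the
    bias terms, matching the notation above."""
    a = list(x) + [1.0]
    for W in weights:
        a = [max(0.0, sum(w * v for w, v in zip(row, a))) for row in W] + [1.0]
    return a[:-1]  # strip the constant from the final representation
```

Each row of `W` has one more column than the previous layer's width, holding that unit's bias.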
4.1 Depth Augmented Networks
A straightforward way to increase representational capacity is to construct a new top layer of $n_{L+1}$ units on top of the existing top layer $L$, leading to the depth augmented representation module shown in Figure 1(b). We view this new layer as an adaptation layer that allows for novel compositions of pre-existing units, thus avoiding dramatic modifications to the pre-trained layers for their adaptation to the new task. The new activations $a^{L+1} = \sigma(W^{L+1} a^L)$ in layer $L+1$ become the representation that is fed into the classifier module, where $W^{L+1}$ denotes the randomly initialized weights between layers $L$ and $L+1$.
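Depth augmentation amounts to one extra randomly initialized layer on top of the pre-trained representation. The sketch below illustrates this in plain Python; the initialization scale and sizes are assumptions for illustration only.

```python
import random

def add_depth(a_L, n_new, seed=0):
    """DA-CNN sketch: stack a randomly initialized fully connected layer
    of n_new units on top of the pre-trained top activations a_L, so
    each new unit is a learned composition of pre-existing units."""
    rng = random.Random(seed)
    W_new = [[rng.gauss(0.0, 0.01) for _ in range(len(a_L) + 1)]  # +1: bias
             for _ in range(n_new)]
    x = list(a_L) + [1.0]
    return [max(0.0, sum(w * v for w, v in zip(row, x))) for row in W_new]
```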
4.2 Width Augmented Networks
An alternative is to expand the network by adding new units to existing layers while keeping the depth of the network fixed, as shown in Figure 1(c). Without loss of generality, we add all the new units to the top layer $L$. The new top representation layer then consists of two blocks: the original activations $a^L$ and the added activations $a^L_{\text{new}}$ with $n_{\text{new}}$ units, leading to the width augmented representation module. The connection weights between $a^L$ and the layer underneath remain the pre-trained $W^L$. We introduce additional lateral connection weights $W^L_{\text{new}}$ between layer $L-1$ and $a^L_{\text{new}}$, which are randomly initialized. Finally, the concatenated activations of size $n_L + n_{\text{new}}$ from layer $L$ are fed into the classifier module.
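Width augmentation keeps the pre-trained pathway intact and adds a parallel block of new units fed by the same inputs. The following is a minimal plain-Python sketch of that concatenation (weight shapes and initialization are illustrative assumptions):

```python
import random

def widen_top(a_prev, W_old, n_new, seed=0):
    """WA-CNN sketch: the pre-trained weights W_old into the top layer
    are kept, n_new extra units get randomly initialized lateral
    weights, and the classifier sees both blocks concatenated."""
    rng = random.Random(seed)
    x = list(a_prev) + [1.0]                     # trailing constant for bias
    act = lambda W: [max(0.0, sum(w * v for w, v in zip(row, x))) for row in W]
    W_new = [[rng.gauss(0.0, 0.01) for _ in range(len(x))] for _ in range(n_new)]
    return act(W_old) + act(W_new)               # size n_L + n_new
```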
4.3 Learning at the Same Pace
Ideally, our hope is that the new and old units cooperate with each other to boost target performance. For width augmented networks, however, the units start to learn at different paces during fine-tuning: while the original units at layer $L$ are already well learned on the source domain and only need small modifications for adaptation, the new set of units at layer $L$ has just been set up through random initialization. They thus have disparate learning behaviors, in the sense that their activations generally have different scales. Naïvely concatenating these activations would restrict the corresponding units, leading to degraded performance and even collapsed networks, since the larger activations dominate the smaller ones. Although the weights might adjust accordingly as fine-tuning proceeds, they require very careful initialization and parameter tuning, which is dataset dependent and thus not robust. This is partially why previous work found network expansion to be inferior to standard fine-tuning.
To reconcile the learning pace of the new and pre-existing units, we introduce an additional normalization and adaptive scaling scheme in width augmented networks, inspired by recent work on combining multi-scale pre-trained CNN features from different layers. More precisely, after weight initialization, we first apply an $\ell_2$-norm normalization to the activations $a^L$ and $a^L_{\text{new}}$, respectively:
$$\hat{a} = \frac{a}{\|a\|_2}.$$
By normalizing these activations, their scales become homogeneous. Simply normalizing the norms to $1$, however, slows down learning and makes the network hard to train, since the features become very small. We instead normalize them to a larger value, which encourages the network to learn well. We then introduce a scaling parameter $\gamma_i$ for each channel $i$ to scale the normalized value:
$$\tilde{a}_i = \gamma_i \hat{a}_i.$$
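The normalize-then-scale step is small enough to state exactly. A minimal sketch in plain Python, applied independently to the pre-trained block and the new block (the target norm and the per-channel scales `gamma` are the tunable quantities; their values here are illustrative):

```python
import math

def normalize_and_scale(a, target_norm, gamma):
    """Rescale an activation block to a fixed l2 norm, then multiply
    each channel by its learnable scale gamma[i]; applying this to both
    the pre-trained and new blocks puts them on a comparable footing."""
    norm = math.sqrt(sum(v * v for v in a)) or 1.0   # guard all-zero input
    a_hat = [target_norm * v / norm for v in a]
    return [g * v for g, v in zip(gamma, a_hat)]
```

In training, `gamma` would be updated by backpropagation along with the weights.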
We found that for depth augmented networks this additional stage of normalization and scaling is not crucial, but it is still beneficial. In addition, this stage introduces only a negligible number of extra parameters, equal to the total number of channels. During fine-tuning, the scaling factors are fine-tuned by backpropagation.
5 Experimental Evaluation
In this section, we explore the use of our developmental networks for transferring a pre-trained CNN to a number of supervised learning tasks with insufficient data, including scene classification, fine-grained recognition, and action recognition. We begin with an extensive evaluation of our approach on scene classification on the SUN-397 dataset, focusing on the variations of our networks and different design choices. We also show that the network remains accurate on the source task. We then provide an in-depth analysis of fine-tuning procedures to qualitatively understand why fine-tuning with augmented network capacity outperforms classic fine-tuning. We further evaluate our approach on other novel categories and compare with state-of-the-art approaches. Finally, we investigate whether progressive augmentation outperforms fine-tuning a fixed large network, and how to cumulatively add new capacity as the network is gradually adapted to multiple tasks.
Following standard practice, for computational efficiency and easy fine-tuning we use the Caffe implementation of AlexNet, pre-trained on ILSVRC 2012, as our reference network in most of our experiments. We found that our observations also held for other network architectures, and we additionally provide a set of experiments using VGG16. For the target tasks, we randomly initialize the classifier layers and our augmented layers. During fine-tuning, after resizing the images, we generate the standard augmented data, including random crops and their flips, as implemented in Caffe. During testing, we use only the central crop, unless otherwise specified. For a fair comparison, fine-tuning is performed using stochastic gradient descent (SGD) with the “step” learning rate policy, which drops the learning rate in steps by a constant factor. The new layers are fine-tuned at a larger learning rate than the pre-trained layers (if the latter are fine-tuned). We use standard momentum and weight decay without further tuning.
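The “step” policy mentioned above has a simple closed form. A sketch in plain Python (the base rate, step size, and drop factor below are illustrative values, not the paper's settings):

```python
def step_lr(base_lr, step_size, gamma, iteration):
    """Caffe-style "step" policy: multiply the base learning rate by
    gamma once every step_size iterations."""
    return base_lr * (gamma ** (iteration // step_size))
```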
5.1 Evaluation and Analysis on SUN-397
We start our evaluation on scene classification on the SUN-397 dataset, a medium-scale dataset with 397 scene classes. In contrast to other fairly small-scale target datasets, SUN-397 provides a sufficient number of categories and examples while being notably dissimilar from the source ImageNet dataset. This greatly benefits our insight into fine-tuning procedures and leads to clean comparisons under controlled settings.
We follow the experimental setup in [1, 15], which uses a nonstandard train/test split since it is computationally expensive to run all of our experiments on the standard subsets. Specifically, we randomly split the dataset into train, validation, and test parts, with a uniform distribution of categories across all three sets. We report 397-way multi-class classification accuracy averaged over all categories, which is the standard metric for this dataset. We report results from a single run due to computational constraints. Consistent with the results reported in [1, 15], the standard deviations of accuracy on SUN-397 classification are negligible, so a single run should not affect the conclusions we draw. For a fair comparison, fine-tuning is performed for a fixed number of epochs using SGD, with an initial learning rate that is reduced in steps. All other parameters are the same for all approaches.
Learning with augmented network capacity: We first evaluate our developmental networks obtained by introducing a single new layer to deepen or widen the pre-trained AlexNet. For the depth augmented network (DA-CNN), we add a new fully connected layer on top of the existing top layer. For the width augmented network (WA-CNN), we add a set of new units to the top layer. After their structures are adapted to the target task, the networks continue learning in four scenarios of gradually increasing degrees of fine-tuning: (1) “New”: we fine-tune only the new layers, including the classifier layers and the augmented layers, while freezing the other pre-trained layers (i.e., the off-the-shelf use case of CNNs); (2) and (3): we additionally fine-tune from progressively earlier pre-trained layers; (4) “All”: we fine-tune the entire network.
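The four scenarios differ only in which layers receive gradient updates. A minimal sketch in plain Python; the layer names are illustrative placeholders (not the actual AlexNet layer names), with `new*` standing for the augmented and classifier layers:

```python
def trainable_layers(layers, scenario):
    """Which layers receive gradient updates under a fine-tuning
    scenario. `layers` is ordered bottom-to-top."""
    if scenario == "New":                 # off-the-shelf pre-trained features
        return [n for n in layers if n.startswith("new")]
    if scenario == "All":                 # fine-tune the entire network
        return list(layers)
    # otherwise: fine-tune from the named pre-trained layer upward
    return layers[layers.index(scenario):]
```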
Table 1 summarizes the performance comparison with classic fine-tuning. The performance gap between our implementation of the fine-tuning baseline and that in [1, 15] is mainly due to the number of iterations: we used twice the number of epochs used in [1, 15], leading to improved accuracy. Note that these numbers cannot be directly compared against other publicly reported results due to the different data split. With relatively sufficient data, fine-tuning the full network yields the best performance for all approaches. Both our DA-CNN and WA-CNN significantly outperform the vanilla fine-tuned CNN in all the different fine-tuning scenarios. This verifies the effectiveness of increasing model capacity when adapting it to a novel task. While the two achieve comparable performance, WA-CNN slightly outperforms DA-CNN.
Increasing network capacity through combination or recursion: Given the promise of DA-CNN and WA-CNN, we further augment the network by making it both deeper and wider, or wider at two layers. For the jointly depth and width augmented network (DWA-CNN) (Figure 1(d)), we add a new layer on top of the existing top layer while also widening it. For the recursively width augmented network (WWA-CNN) (Figure 1(e)), we widen both of the top two layers, with the number of new units in the lower layer set to half that of the top layer.
We compare DWA-CNN and WWA-CNN with DA-CNN and WA-CNN in Table 1. The two-layer WWA-CNN generally achieves the best performance, indicating the importance of augmenting model capacity at different, complementary levels. The jointly augmented DWA-CNN lags slightly behind the purely width augmented WA-CNN. This implies different learning behaviors when making the network deeper versus wider; combining the two is thus a non-trivial task.
Diagnostic analysis: While we summarize the best performance in Table 1, a diagnostic experiment in Table 2 on the number of augmented units shows that all of these variations of network architectures significantly outperform classic fine-tuning, indicating the robustness of our approach. We found that this observation was also consistent across the other datasets evaluated in later sections. Overall, performance increases with the augmented model capacity (represented by the size of the augmented layers), although the performance gain diminishes with an increasing number of new units.
Importance of reconciling the learning pace of new and old units: Previous work showed that network expansion did not introduce additional benefits. We argue that this unsatisfactory performance stems from failing to account for the different learning paces of new and old units. After exploring different strategies, such as initialization, we found that the performance of a width augmented network significantly improves with a simple normalization and scaling scheme when concatenating the pre-trained and expanded layers. This issue is investigated for both types of model augmentation in Table 3. In the case of initializing the new units by copying the pre-trained weights and adding random noise, the number of new units matches the size of the pre-trained layer.
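The copy-plus-noise initialization strategy mentioned above can be sketched in a few lines of plain Python (the noise scale below is an illustrative assumption):

```python
import random

def copy_plus_noise(W_old, sigma=0.01, seed=0):
    """One initialization strategy for widened units: copy the
    pre-trained weight matrix and perturb each entry with small
    Gaussian noise so the copies can diverge during fine-tuning."""
    rng = random.Random(seed)
    return [[w + rng.gauss(0.0, sigma) for w in row] for row in W_old]
```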
For WA-CNN, if we naïvely add new units without scaling, Table 3 shows that the performance is either only marginally better than classic fine-tuning in Table 1 or even worse (when fine-tuning more aggressively). This is consistent with observations in previous work. However, once the learning pace of the new and old units is re-balanced by scaling, WA-CNN exceeds the baseline by a large margin. For DA-CNN, directly adding new units without scaling already greatly outperforms the baseline, consistent with prior observations, although scaling provides an additional performance gain. This suggests slightly different learning behaviors for depth and width augmented networks. When a set of new units is added to form an entirely new layer, the units have relatively more freedom to learn from scratch, making the additional scaling beneficial yet inessential. When the units are added to widen a pre-trained layer, however, they must learn to collaborate with the pre-existing units, a synergy explicitly enforced by the additional scaling.
Evaluation with the VGG16 architecture: Table 1 also summarizes the performance of DA-CNN and WA-CNN using VGG16, showing the generality of our approach. Due to GPU memory and time constraints, we reduce the batch size and perform fine-tuning for fewer epochs using SGD. All other parameters are the same as before. Also, following the standard practice in Fast R-CNN, we fine-tune from an intermediate convolutional layer in the “All” scenario.
Learning without forgetting: Conceptually, due to their developmental nature, our networks should remain accurate on their source task. Table 4 validates this ability to learn without forgetting by showing their classification performance on the source ImageNet dataset.
5.2 Understanding of Fine-Tuning Procedures
We now analyze the fine-tuning procedures from various perspectives to gain insight into how fine-tuning modifies the pre-trained network and why increasing model capacity helps. We evaluate on the SUN-397 validation set. For a clear analysis and comparison, we focus on DA-CNN and WA-CNN, both with the same number of new units.
Feature visualization: To roughly understand the topology of the feature spaces, we visualize the features using the standard t-SNE algorithm. As shown in Figure 3, we embed the features of the pre-trained and fine-tuned networks, the wider features, and the deeper features into a two-dimensional space, respectively, and plot them as points colored by semantic category. While classic fine-tuning improves the semantic separation of the pre-trained network to some extent, both of our networks demonstrate significantly clearer semantic clustering structure, consistent with their improved classification performance.
Maximally activating images: To further analyze how fine-tuning changes the feature spaces, we retrieve the images that maximally activate a given unit. We first focus on the units common to the pre-trained, fine-tuned, and width augmented networks. In addition to the SUN-397 images, we also include the maximally activating images from the ILSVRC validation set for the pre-trained network as references. Figure 4 shows an interesting transition: while the pre-trained network learns a concentrated concept specific to the source task (left), this concept spreads into a mixture of concepts on the novel target task (middle left). Fine-tuning tries to re-centralize one of the concepts suitable to the target task, but with limited capability (middle right). Our width augmented network facilitates this re-centralization, leading to more discriminative patterns (right). Similarly, we illustrate the maximally activating images for units in the new layer of the depth augmented network in Figure 5, which shows quite different behavior. Compared with the object-level concepts in the width augmented network, the depth augmented network appears able to model a large set of compositions of the pre-trained features, generating more scene-level, better clustered concepts.
5.3 Generalization to Other Tasks and Datasets
We now evaluate whether our developmental networks facilitate the recognition of other novel categories. We compare with publicly available baselines and report multi-class classification accuracy. While the different variations of our networks outperform these baselines, we mainly focus on the width augmented networks (WA-CNN).
Tasks and datasets: We evaluate on standard benchmark datasets for scene classification (MIT-67), fine-grained recognition (Caltech-UCSD Birds CUB-200-2011 and Oxford Flowers), and action recognition (Stanford-40 actions). These datasets are widely used for evaluating CNN transferability, and we chose them for their diversity and coverage of novel categories. We follow the standard experimental setup (e.g., the train/test splits) for these datasets.
Baselines: While comparing with classic fine-tuning is the fairest comparison, to show the strength of our approach we also compare against other baselines that are specifically designed for certain tasks. For a fair comparison, we focus on approaches that use single-scale AlexNet CNNs. Importantly, our approach can also be combined with other CNN variations (e.g., VGG-CNN, multi-scale CNNs [12, 53]) for further improvement.
Table 5 shows that our approach achieves state-of-the-art performance on these challenging benchmark datasets and outperforms classic fine-tuning by a large margin. In contrast to task-customized CNNs that are only suitable for particular tasks and categories, the consistently superior performance of our approach suggests that it is generic across a wide spectrum of tasks.
5.4 A Single Universal Higher Capacity Model?
An interesting implication of our results is that standard models should perhaps have used higher capacity even for the source task (e.g., ImageNet). To examine this, we explore progressive widening of AlexNet (WA-CNN). Specifically, in the source domain, Table 6 shows that progressively widening a network outperforms training a fixed wide network from scratch. More importantly, in the target domain, Table 7 shows that our progressive widening significantly outperforms fine-tuning a fixed wide network.
Cooperative learning: Figure 6 and Figure 7 provide an in-depth analysis of the cooperative learning behavior between the pre-existing and new units and show that developmental learning appears to regularize networks in a manner that encourages diversity of units.
Continual transfer across multiple tasks: Our approach is particularly suitable for continual, smooth transfer across multiple tasks, since we can cumulatively increase model capacity, as demonstrated in Table 8.
6 Conclusion
We have performed an in-depth study of the ubiquitous practice of fine-tuning CNNs. By analyzing what changes in a network and how, we conclude that increasing model capacity significantly helps existing units better adapt and specialize to the target task. We analyze both depth and width augmented networks, and conclude that both are useful for fine-tuning, with a slight but consistent benefit for widening. A practical issue is that newly added units should learn at a pace comparable to the pre-existing units; we provide a normalization and scaling technique that ensures this. Finally, we present several state-of-the-art results on benchmark datasets that show the benefit of increasing model capacity. Our conclusions support a developmental view of CNN optimization, in which model capacity is progressively grown throughout a lifelong learning process when learning from continuously evolving data streams and tasks.
Acknowledgments: We thank Liangyan Gui for valuable and insightful discussions. This work was supported in part by ONR MURI N000141612007 and U.S. Army Research Laboratory (ARL) under the Collaborative Technology Alliance Program, Cooperative Agreement W911NF-10-2-0016. DR was supported by NSF Grant 1618903 and Google. We also thank NVIDIA for donating GPUs and AWS Cloud Credits for Research program.
-  P. Agrawal, R. Girshick, and J. Malik. Analyzing the performance of multilayer neural networks for object recognition. In ECCV, 2014.
-  R. K. Ando and T. Zhang. A framework for learning predictive structures from multiple tasks and unlabeled data. JMLR, 6:1817–1853, 2005.
-  H. Azizpour, A. S. Razavian, J. Sullivan, A. Maki, and S. Carlsson. Factors of transferability for a generic ConvNet representation. TPAMI, 38(9):1790–1802, 2016.
-  H. Azizpour, A. Sharif Razavian, J. Sullivan, A. Maki, and S. Carlsson. From generic to specific deep representations for visual recognition. In CVPR Workshops, 2015.
-  L. Bertinetto, J. F. Henriques, J. Valmadre, P. Torr, and A. Vedaldi. Learning feed-forward one-shot learners. In NIPS, 2016.
-  B. Chu, V. Madhavan, O. Beijbom, J. Hoffman, and T. Darrell. Best practices for fine-tuning visual classifiers to new domains. In ECCV Workshops, 2016.
-  J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. In ICML, 2014.
-  T. Furlanello, J. Zhao, A. M. Saxe, L. Itti, and B. S. Tjan. Active long term memory networks. arXiv preprint arXiv:1606.02355, 2016.
-  R. Girshick. Fast R-CNN. In ICCV, 2015.
-  R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.
-  S. Gupta, J. Hoffman, and J. Malik. Cross modal distillation for supervision transfer. In CVPR, 2016.
-  B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik. Hypercolumns for object segmentation and fine-grained localization. In CVPR, 2015.
-  B. Hariharan and R. Girshick. Low-shot visual object recognition. arXiv preprint arXiv:1606.02819, 2016.
-  G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
-  M. Huh, P. Agrawal, and A. A. Efros. What makes ImageNet good for transfer learning? arXiv preprint arXiv:1608.08614, 2016.
-  W. Huitt and J. Hummel. Piaget’s theory of cognitive development. Educational psychology interactive, 3(2):1–5, 2003.
-  Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. In ACM MM, 2014.
-  A. Joulin, L. van der Maaten, A. Jabri, and N. Vasilache. Learning visual features from large weakly supervised data. In ECCV, 2016.
-  G. Koch, R. Zemel, and R. Salakhutdinov. Siamese neural networks for one-shot image recognition. In ICML Workshops, 2015.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
-  B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338, 2015.
-  Z. Li and D. Hoiem. Learning without forgetting. In ECCV, 2016.
-  W. Liu, A. Rabinovich, and A. C. Berg. Parsenet: Looking wider to see better. In ICLR workshop, 2016.
-  I. Misra, A. Shrivastava, A. Gupta, and M. Hebert. Cross-stitch networks for multi-task learning. In CVPR, 2016.
-  T. M. Mitchell, W. Cohen, E. Hruschka, P. Talukdar, J. Betteridge, A. Carlson, B. D. Mishra, M. Gardner, B. Kisiel, J. Krishnamurthy, N. Lao, K. Mazaitis, T. Mohamed, N. Nakashole, E. A. Platanios, A. Ritter, M. Samadi, B. Settles, R. Wang, D. Wijaya, A. Gupta, X. Chen, A. Saparov, M. Greaves, and J. Welling. Never-ending learning. In AAAI, 2015.
-  C. A. Nelson, M. L. Collins, and M. Luciana. Handbook of developmental cognitive neuroscience. MIT Press, 2001.
-  M.-E. Nilsback and A. Zisserman. Automated flower classification over a large number of classes. In ICVGIP, 2008.
-  M. Oquab, L. Bottou, I. Laptev, and J. Sivic. Learning and transferring mid-level image representations using convolutional neural networks. In CVPR, 2014.
-  M. Pickett, R. Al-Rfou, L. Shao, and C. Tar. A growing long-term episodic & semantic memory. In NIPS Workshops, 2016.
-  Q. Qian, R. Jin, S. Zhu, and Y. Lin. Fine-grained visual categorization via multi-stage metric learning. In CVPR, 2015.
-  S. Ravi and H. Larochelle. Optimization as a model for few-shot learning. In ICLR, 2017.
-  A. S. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson. CNN features off-the-shelf: An astounding baseline for recognition. In CVPR Workshops, 2014.
-  O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. IJCV, 115(3):211–252, 2015.
-  A. A. Rusu, N. C. Rabinowitz, G. Desjardins, H. Soyer, J. Kirkpatrick, K. Kavukcuoglu, R. Pascanu, and R. Hadsell. Progressive neural networks. arXiv preprint arXiv:1606.04671, 2016.
-  A. Santoro, S. Bartunov, M. Botvinick, D. Wierstra, and T. Lillicrap. One-shot learning with memory-augmented neural networks. In ICML, 2016.
-  O. Sigaud and A. Droniou. Towards deep developmental learning. TCDS, 8(2):90–114, 2016.
-  K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
-  A. V. Terekhov, G. Montone, and J. K. O’Regan. Knowledge transfer in deep block-modular neural networks. In Conference on Biomimetic and Biohybrid Systems, 2015.
-  C. Tessler, S. Givony, T. Zahavy, D. J. Mankowitz, and S. Mannor. A deep hierarchical approach to lifelong learning in minecraft. arXiv preprint arXiv:1604.07255, 2016.
-  S. Thrun. Is learning the n-th thing any easier than learning the first? In NIPS, 1996.
-  S. Thrun. Lifelong learning algorithms. In Learning to learn, pages 181–209. Springer, 1998.
-  S. Thrun and J. O’Sullivan. Clustering learning tasks and the selective cross-task transfer of knowledge. In Learning to learn, pages 235–257. Springer, 1998.
-  T. Tommasi, F. Orabona, and B. Caputo. Learning categories from few examples with multi model knowledge transfer. TPAMI, 36(5):928–941, 2014.
-  A. Torralba and A. Quattoni. Recognizing indoor scenes. In CVPR, 2009.
-  E. Tzeng, J. Hoffman, T. Darrell, and K. Saenko. Simultaneous deep transfer across domains and tasks. In ICCV, 2015.
-  L. van der Maaten and G. Hinton. Visualizing data using t-SNE. JMLR, 9:2579–2605, 2008.
-  O. Vinyals, C. Blundell, T. Lillicrap, K. Kavukcuoglu, and D. Wierstra. Matching networks for one shot learning. In NIPS, 2016.
-  C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 dataset. Technical report, California Institute of Technology, 2011.
-  Y.-X. Wang and M. Hebert. Model recommendation: Generating object detectors from few samples. In CVPR, 2015.
-  Y.-X. Wang and M. Hebert. Learning from small sample sets by combining unsupervised meta-training with CNNs. In NIPS, 2016.
-  Y.-X. Wang and M. Hebert. Learning to learn: Model regression networks for easy small sample learning. In ECCV, 2016.
-  J. Xiao, K. A. Ehinger, J. Hays, A. Torralba, and A. Oliva. SUN database: Exploring a large collection of scene categories. IJCV, 119(1):3–22, 2016.
-  S. Yang and D. Ramanan. Multi-scale recognition with DAG-CNNs. In ICCV, 2015.
-  B. Yao, X. Jiang, A. Khosla, A. L. Lin, L. Guibas, and L. Fei-Fei. Human action recognition by learning bases of action attributes and parts. In ICCV, 2011.
-  D. Yoo, S. Park, J.-Y. Lee, and S. Kweon. Multi-scale pyramid pooling for deep convolutional representation. In CVPR Workshops, 2015.
-  J. Yosinski, J. Clune, Y. Bengio, and H. Lipson. How transferable are features in deep neural networks? In NIPS, 2014.
-  M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In ECCV, 2014.
-  L. Zheng, Y. Zhao, S. Wang, J. Wang, and Q. Tian. Good practice in CNN feature transfer. arXiv preprint arXiv:1604.00133, 2016.
-  B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. Learning deep features for scene recognition using places database. In NIPS, 2014.