Network Transplanting

04/26/2018 ∙ by Quanshi Zhang, et al.

This paper focuses on a novel problem, i.e., transplanting a category-and-task-specific neural network to a generic, distributed network without strong supervision. Like assembling LEGO blocks, incrementally constructing a generic network by asynchronously merging specific neural networks is a crucial bottleneck for deep learning. Suppose that the pre-trained specific network contains a module f to extract features of the target category, and the generic network has a module g for a target task, which is trained on categories other than the target category. Instead of using numerous training samples to teach the generic network a new category, we aim to learn a small adapter module to connect f and g to accomplish the task on the target category in a weakly-supervised manner. The core challenge is to efficiently learn feature projections between the two connected modules. We propose a new distillation algorithm, which exhibited superior performance. Our method without training samples even significantly outperformed the baseline with 100 training samples.


1 Introduction

Beyond end-to-end learning of a black-box neural network, in this paper we propose a new deep-learning methodology, i.e., network transplanting. Instead of learning from scratch, network transplanting aims to merge several convolutional networks that are pre-trained for different categories and tasks into a generic, distributed neural network.

Network transplanting is of special value in both theory and practice. We briefly introduce the key deep-learning problems that network transplanting addresses as follows.

1.1 Future potential of learning a universal net

Instead of learning different networks for different applications, building a universal net with a compact structure for various categories and tasks is one of the ultimate objectives of AI. In spite of the gap between current algorithms and the goal of learning a huge universal net, scientific exploration along this direction is still meaningful. Here, we list key issues of learning a universal net, which are not commonly discussed in the current deep-learning literature.

Start-up cost of sample collection: Besides the total number of training annotations, the start-up cost of sample collection is also important. Traditional methods usually require people to simultaneously prepare training samples for all pairs of categories and tasks before learning begins. However, this is usually unaffordable when there are a large number of categories and tasks. In comparison, our method enables a neural network to sequentially absorb network modules of different categories one-by-one, so the algorithm can start without all the data.

Massive distributed learning & weak centralized learning: Distributing the massive computation of learning the network into local computation centers all over the world is of great practical value. Numerous networks have already been locally pre-trained for specific tasks and categories. Centralized network transplanting physically merges these networks into a compact universal net with a few or even without any training samples.

Delivering models or data: In practice, delivering pre-trained networks to the computation center is usually much cheaper than collecting and sending raw training data.

Middle-to-end semantic manipulation for applications: How to efficiently organize and use the knowledge in the net is also a crucial problem. We use different modules in the network to encode the knowledge of different categories and of different tasks. Like assembling LEGO blocks, people can manually connect a category module and a task module to accomplish a certain application (see Fig. 1(left)).

Figure 1: Building a transplant net. We propose a theoretical solution to incrementally merging category modules from teacher nets into a transplant (student) net with a few or without sample annotations. The transplant net has an interpretable, modular structure. A category module, e.g. a cat module, provides cat features to different task modules. A task module, e.g. a segmentation module, serves various categories. We show two typical operations to learn transplant nets (see supplementary materials). Blue ellipses show modules in teacher nets used for transplanting. Red ellipses indicate new modules added to the transplant net. Unrelated adapters in each step are omitted for clarity.
Method | Annotation cost | Sample preparation | Interpretability | Catastrophic forgetting | Modular manipulation | Optimization
Directly learning a multi-task net | Massive | Simultaneously prepare samples for all tasks and categories | Low | – | Not supported | back prop.
Transfer-/meta-/continual-learning | Some support weakly-supervised learning | Some learn a category/task after another | Usually low | Most algorithmically alleviate | Not supported | back prop.
Transplanting | A few or w/o annotations | Learns a category after another | High | Physically avoided | Supported | back-back prop.
Table 1: Comparison between network transplanting and other studies. Note that, given the huge diversity of related research, this table only summarizes the mainstream of each research direction. Please see Section 2 for detailed discussions of related work.

1.2 Task of network transplanting

To address the above issues, we propose network transplanting, i.e. building a generic model by gradually absorbing networks locally pre-trained for specific categories and tasks. We design an interpretable, modular structure for the target network, namely a transplant net, where each module is functionally meaningful. As shown in Fig. 1(left), the transplant net consists of three types of modules, i.e. category modules, task modules, and adapters. Each category module extracts general features of a specific category (e.g. the dog). Each task module is learned for a certain task (e.g. classification or segmentation) and is shared by different categories. Each adapter projects the output features of a category module to the input space of a task module. Thus, each category/task module is shared by multiple tasks/categories.
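To make the modular structure concrete, here is a minimal PyTorch-style sketch (an illustration under assumed module definitions and names, not the authors' implementation) of a transplant net that stores category modules, task modules, and adapters separately and routes an image through a chosen category-task pair.

```python
import torch
import torch.nn as nn

class TransplantNet(nn.Module):
    """Illustrative transplant net: category modules, task modules, and
    one adapter per (category, task) pair, following Fig. 1 (left)."""
    def __init__(self):
        super().__init__()
        self.category_modules = nn.ModuleDict()   # e.g. "cat", "dog"
        self.task_modules = nn.ModuleDict()       # e.g. "classify", "segment"
        self.adapters = nn.ModuleDict()           # keyed by "category->task"

    def add_category(self, name, module):
        self.category_modules[name] = module

    def add_task(self, name, module):
        self.task_modules[name] = module

    def add_adapter(self, category, task, adapter):
        self.adapters[f"{category}->{task}"] = adapter

    def forward(self, image, category, task):
        x = self.category_modules[category](image)     # category features
        x = self.adapters[f"{category}->{task}"](x)    # feature projection
        return self.task_modules[task](x)              # task output

# Toy usage with placeholder modules.
net = TransplantNet()
net.add_category("cat", nn.Conv2d(3, 64, 3, padding=1))
net.add_task("classify", nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 2)))
net.add_adapter("cat", "classify", nn.Sequential(nn.Conv2d(64, 64, 1), nn.ReLU()))
logits = net(torch.randn(1, 3, 224, 224), category="cat", task="classify")
```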

We can learn an initial transplant net with very few tasks and categories in the traditional scenario of multi-task/category learning. Then, we gradually grow the transplant net to deal with more categories and tasks via network transplanting. Network transplanting can be conducted with or without human annotations as additional supervision. We summarize two typical types of transplanting operations in Fig. 1(right). The core technique is to learn an adapter that connects a task module in the transplant net to a category module from another network (please see the supplementary materials for details).

The elementary transplanting operation is shown in Fig. 2(left). We are given a transplant net with a task module g that has been learned to accomplish a certain task for many categories, except for the category c. We hope the task module g to deal with the new category c, so we need another network (namely, a teacher net) with a category module f and its own task module ĝ. The teacher net is pre-trained for the same task on the category c. We may (or may not) have a few training annotations of category c for the task. Our goal is to transplant the category module f from the teacher net to the transplant net.

Note that we only learn a small adapter module h to connect f to g. We do not fine-tune f or g during the transplanting process, to avoid damaging their generality.

However, learning adapters while fixing the parameters of the category and task modules poses specific challenges to deep-learning algorithms. Therefore, in this study, we propose a new algorithm, namely back distillation, to overcome these challenges. The back-distillation algorithm uses the cascaded modules of the adapter h and the task module g to mimic the upper layers of the pre-trained teacher net. For distillation, this algorithm requires the transplant net to have gradients/Jacobians w.r.t. f's output features that are similar to those of the teacher net. In experiments, our back-distillation method without any training samples even outperformed the baseline with 100 training samples (see Table 2(left)).

1.2.1 Differences from previous knowledge transfer

Figure 2: Overview. (left) Given a teacher net and a student net, we aim to learn an adapter h via distillation. The teacher net has a category module f for a single or multiple tasks. The student net contains a task module g for other categories. We transplant f to the student net by using h to connect f and g, in order to enable the task module g to deal with the new category c. As shown by the green ellipses, our method distills knowledge from the task module ĝ of the teacher net to the modules of the student net. Three red curves show the directions of forward propagation, back propagation, and gradient propagation for back distillation. (right) During the transplant, the adapter learns potential projections between f's output feature space and g's input feature space.

Although most transfer-learning algorithms cannot be directly used to solve the core problems mentioned in Section 1.1, the proposed network transplanting is close in spirit to continual learning (or lifelong learning) [16, 6, 19, 27, 23]. As exploratory research, we summarize the essential differences from traditional studies in Table 1.

Modular interpretability and more controllability: Besides discrimination power, interpretability is another important property of a neural network, which has received increasing attention in recent years. In particular, traditional transfer learning and knowledge distillation are implemented in a black-box manner [16, 5], so the knowledge-transfer process requires careful control. In contrast, our transplant net clarifies the functional meaning of each intermediate network module, which makes the knowledge-transfer process more controllable. That is, the interpretable structure clearly points out which network modules are related to the target application.

Bottleneck of transferring upper modules (see supplementary materials): Most deep-learning strategies are not suitable for directly transferring pre-trained upper modules, including (i) directly optimizing the task loss on the top of the network and (ii) traditional distillation methods [11]. These methods mainly transfer low-layer features and learn new upper modules to reuse these features, rather than directly transferring pre-trained upper modules. In contrast, when we transfer a pre-trained upper task module to a new category, network transplanting only allows us to modify the lower adapter; modifying the upper module is not permitted. This requirement physically avoids catastrophic forgetting, but it is difficult to optimize a lower adapter when the upper module is fixed (see Section 3.1 for theoretical analysis). Meanwhile, it is difficult to distill knowledge from the teacher net to the adapter, because, except for the final network output, high-layer features in g and those in the teacher net are not semantically aligned.

Thus, the back distillation proposed in this paper is the first to break the bottleneck of transferring upper modules.

Catastrophic forgetting: Continually learning new jobs without hurting the performance on old jobs is a key issue for continual learning [16, 19]. Our method exclusively learns the adapter, which physically prevents the learning of new categories from changing existing modules. Furthermore, once a transplant net has been constructed, we can optionally fine-tune a task/category module on different categories/tasks to ensure the network's generality.

1.2.2 Summary

We summarize the contributions of this study as follows. (i) We propose a new deep-learning method, network transplanting with a few or even without additional training annotations, which can be considered a theoretical solution to the issues in Section 1.1. (ii) We develop an optimization algorithm, i.e. back distillation, to overcome the specific challenges of network transplanting. (iii) Preliminary experiments demonstrated the effectiveness of our method, which significantly outperformed baselines.

2 Related work

Because network transplanting is a new concept in machine learning, we would like to discuss its connections to different state-of-the-art algorithms. Firstly, we propose a new modular structure for networks, which disentangles a black-box network into different meaningful modules. Similarly, some studies have explored new representation structures beyond standard neural networks, such as forests and decision trees [12, 32, 7, 25] and automatic learning of optimal network structures [33, 14, 34, 31]. [20] learned a large modular neural network with thousands of sub-networks.

Interpretability: Unlike the above studies, dividing a network into functionally meaningful modules makes the structure interpretable. Other studies of enhancing network interpretability mainly either learn disentangled filters/capsules in middle layers [30, 26, 17] or learn meaningful input codes of generative nets [3, 10].

Figure 3: Feature space of a middle layer. (left) When we initialize the parameters of a CNN, middle-layer features randomly cover the whole feature space. The learning process forces the CNN to focus on the typical feature space of samples and produces a vast forgotten space. (right) We illustrate three toy examples of the space projections that are estimated by the adapter.

Meta-learning & transfer learning: Meta-learning [4, 1, 13] aims to extract generic knowledge shared by different tasks/categories/models to guide the learning of a specific model. Transfer-learning methods transfer network knowledge across categories [28] or datasets [8]. In particular, continual learning [16, 6, 19, 27, 23] transfers knowledge from previous tasks to guide a new task. [16, 27] expanded the network structure during the learning process. In contrast, our study defines modular network structures with strict semantics, which allows people to semantically control the knowledge to transfer. Meanwhile, network transplanting physically avoids catastrophic forgetting. In addition, our back-distillation method solves the challenge of transferring upper modules, which is different from the traditional transfer of low-layer features.

Distillation: [11] proposed knowledge distillation to transfer knowledge between networks. Some recent studies [7, 25] distilled network knowledge into decision trees. [2] proposed an online distillation method to efficiently learn distributed networks. [29, 22] distilled the attention distribution/Jacobian from the teacher network to the student network, which is related to our back-distillation technique. However, these Jacobian-distillation methods are hampered by the specific challenges of network transplanting (see Section 3.2 and the appendix for details). To overcome these challenges, we design a new back-distillation method, which uses pseudo-gradients instead of real gradients for distillation. We also balance the magnitudes of neural activations between the two networks, which is necessary for network transplanting.

3 Algorithm of network transplanting

Overview: As shown in Fig. 2(left), we are given a teacher net for a single or multiple tasks w.r.t. a certain category c. Let the category module f at the bottom of the teacher net consist of the lower layers, and let it connect a specific task module ĝ in the upper layers. We are also given a transplant net with a generic task module g, which has been learned for multiple categories except for the category c.

The initial transplant net with the task module g (before transplanting) can be learned in the traditional scenario of learning from samples of several categories. We can roughly regard g as encoding generic representations for the task. Similarly, the category module f extracts generic features for multiple tasks. Thus, we do not fine-tune f or g, to avoid decreasing their generality.

Our goal is to transplant f to the transplant net by learning an adapter h with parameters θ, so that the task module g can deal with features from the new category module f.

The basic idea of network transplanting is to use the cascaded modules of h and g to mimic the specific task module ĝ in the teacher net. We call the transplant net a student net. Let x denote the output feature of the category module f given an image I, i.e. x = f(I). ŷ = ĝ(x) and y* = g(h(x)) denote the outputs of the teacher net and the student net, respectively. Thus, network transplanting can be described as

\hat{\theta} = \arg\min_{\theta} \; \mathbb{E}_{I}\!\left[ \| y^{*} - \hat{y} \|^{2} \right], \qquad y^{*} = g(h_{\theta}(x)), \;\; \hat{y} = \hat{g}(x), \;\; x = f(I)   (1)
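As an illustration of the objective in Eqn. (1), the following is a minimal PyTorch sketch (with assumed placeholder modules and hyper-parameters, not the authors' code): the category module f and the task modules ĝ and g are frozen, and only the adapter h is optimized so that the student output g(h(x)) mimics the teacher output ĝ(x).

```python
import torch
import torch.nn as nn

# Placeholder frozen modules: f (category), g_hat (teacher task), g (student task).
f = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
g_hat = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 2))
g = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 2))
for m in (f, g_hat, g):
    for p in m.parameters():
        p.requires_grad_(False)            # keep pre-trained modules intact

# Only the adapter h (parameters theta) is learned.
h = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
opt = torch.optim.Adam(h.parameters(), lr=1e-4)

for _ in range(10):                        # toy optimization loop
    image = torch.randn(8, 3, 64, 64)      # unlabeled images of category c
    x = f(image)                           # x = f(I), category features
    y_teacher = g_hat(x)                   # teacher output \hat{y}
    y_student = g(h(x))                    # student output y^*
    loss = ((y_student - y_teacher) ** 2).mean()   # mimicry objective, Eqn. (1)
    opt.zero_grad(); loss.backward(); opt.step()
```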

3.1 Problem of space projection & back-distillation

It is a challenge for an adapter to properly project the output feature space of f to the input feature space of g. The information-bottleneck theory [24, 18] shows that a network selectively forgets certain spaces of middle-layer features and gradually focuses on discriminative features during the learning process (see Fig. 3(left)). Thus, both the output of f and the input of g have vast forgotten spaces. Features in the forgotten input space of g cannot pass most of their information through the ReLU layers in g to reach the network output. The forgotten output space of f refers to the space that does not contain f's output features.

Vast forgotten feature spaces significantly increase the difficulty of learning. Since the valid input features of g usually lie on low-dimensional manifolds, most output features of the adapter fall inside the forgotten space; i.e. g will not pass most information of its input features to the network output. Consequently, the adapter will not receive informative gradients of the loss for learning.

Fig. 3(right) illustrates ideal and problematic projections. A typical problematic projection maps a feature into the forgotten input space of g. Another typical problem is many-to-one projections, which limit the diversity of features and decrease the representation capability of the student net. More crucially, problematic initial projections significantly affect the subsequent learning process, because back-propagation takes the current space projections as anchors to fine-tune the network.

To learn good projections, we propose to force the gradient (also known as the attention or Jacobian) of the student net to approximate that of the teacher net, which is a necessary condition of Eqn. (1):

\frac{\partial z(y^{*})}{\partial x} \approx \frac{\partial z(\hat{y})}{\partial x}   (2)

where z(·) is an arbitrary function of the network output that returns a scalar, and θ denotes the parameters of the adapter h. Therefore, we use the following loss for back-distillation:

Loss(\theta) = L(y^{*}, y) + \lambda \left\| \frac{\partial z(y^{*})}{\partial x} - \frac{\partial z(\hat{y})}{\partial x} \right\|^{2}   (3)

where L(y*, y) is the task loss of the student net, y denotes the ground-truth label, and λ is a scaling scalar. This formulation is similar to the Jacobian distillation in [22]. We omit the task loss L(y*, y) if we learn the adapter without additional training labels.
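The gradient-matching term of Eqn. (3) can be sketched with double back-propagation in PyTorch (a hypothetical implementation for illustration; function and variable names are assumptions): pick a random scalar function z of the outputs, differentiate both nets w.r.t. the shared input feature x, and penalize the difference.

```python
import torch

def back_distill_loss(h, g, g_hat, x, lam=1.0):
    """Gradient-matching term of Eqn. (3) (task loss omitted).
    h: adapter, g: student task module, g_hat: teacher task module,
    x: output feature of the category module f for one batch."""
    x = x.detach().requires_grad_(True)
    y_teacher = g_hat(x)
    w = torch.randn_like(y_teacher)          # random weights define z(y) = <w, y>
    z_student = (g(h(x)) * w).sum()
    z_teacher = (y_teacher * w).sum()
    grad_s, = torch.autograd.grad(z_student, x, create_graph=True)  # keep graph for theta
    grad_t, = torch.autograd.grad(z_teacher, x)
    return lam * ((grad_s - grad_t.detach()) ** 2).mean()
```

In the paper, the real gradients in this term are replaced with the pseudo-gradients of Section 3.2; the sketch keeps real gradients only to make the loss form explicit.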

3.2 Learning via back distillation

It is difficult for most recent techniques, including those for Jacobian distillation (see supplementary materials), to directly optimize the above back-distillation loss. We briefly analyze the difficulties as follows. Minimizing the distillation loss pushes the gradients of the student net towards those of the teacher net. To simplify the notation, we use G and Ĝ to denote the gradients w.r.t. the feature map x in the student net and the teacher net, respectively. As shown in Eqn. (4a), the computation of G (or Ĝ) is sensitive to the feature maps of the layers in h and g (or those in ĝ). Thus, it requires the student network to yield well-optimized feature maps to enable an effective distillation process. However, this is a chicken-and-egg problem between distilling optimal parameters θ and generating optimal feature maps of middle layers: chaotic initial feature maps hurt the capability of distilling knowledge into θ, but the feature maps are produced using θ.

To overcome this optimization problem, we need to make the gradients of the student net agnostic with regard to its feature maps. Thus, we propose two pseudo-gradients to replace G and Ĝ in the loss, respectively. The pseudo-gradients follow the paradigm in Eqn. (4b).

G = \frac{\partial z}{\partial x} = D_{1}\big(D_{2}(\cdots D_{L}(\tfrac{\partial z}{\partial y})\cdots)\big)   (4a)
G^{\text{pseudo}} = \tilde{D}_{1}\big(\tilde{D}_{2}(\cdots \tilde{D}_{L}(\tfrac{\partial z}{\partial y})\cdots)\big)   (4b)

where we define x_1 = x, and x_l denotes the feature map of the l-th layer above x, so that X = {x_l} is the set of feature maps of all middle layers. Just like z in Eqn. (2), the initial gradient ∂z/∂y can be set arbitrarily. Each D_l is the derivative of the l-th layer function used for back-propagation, which depends on the feature map x_l; each dummy derivative D̃_l in Eqn. (4b) is designed to be independent of x_l.

In Eqn. (4b), we make the following revisions to the computation of gradients, in order to make the gradients agnostic with regard to the feature maps. We ignore dropout operations and replace the derivatives of max-pooling with the derivatives of average-pooling. We also revise the derivative of the ReLU so that the incoming gradient is either passed through unchanged or multiplied element-wise by a random mask M, where M is a random feature map and ⊙ denotes the element-wise product. For each input image, we use the same random feature map and the same initial gradient ∂z/∂y for both G and Ĝ, to make G and Ĝ comparable with each other. The above revisions are made only to ease the back distillation, and they are not related to the computation of the task loss L(y*, y).

In this way, we conduct the back-distillation algorithm by replacing the real gradients in Eqn. (3) with the pseudo-gradients. The distillation loss is then optimized by propagating gradients of the gradient maps back to the upper layers, which we call back-back-propagation (see supplementary materials). Tables 2 and 3 exhibit the superior performance of back-back-propagation.

Computation of ∂z/∂y: According to Eqn. (2), z can be an arbitrary scalar function. Thus, we can enumerate functions z by randomizing different values of ∂z/∂y. We use each sampled ∂z/∂y to produce a pair of G and Ĝ for back distillation. For the task of object segmentation, the output y is a tensor of size H×W×C, where H and W denote the height and width of the output image, and C indicates the number of segmentation labels; for each image, we randomly sample ∂z/∂y of the same size. For the task of single-category classification, the output y is a scalar. Nevertheless, we can still generate a random matrix for each image, which produces two enlarged pseudo-gradient maps for back distillation (see supplementary materials). We normalize ∂z/∂y to fixed ranges, which ensures stable distillation in experiments.

Figure 4: (a) Teacher net; (b) transplant net; (c) two types of adapters; (d) two sequences of transplanting operations.

4 Experiments

To simplify the story, we limit our attention to testing network transplanting operations. We do not discuss other related operations, e.g. the fine-tuning of category and task modules and the case in Fig. 1(a), which can be solved via traditional learning strategies.

We designed three experiments to evaluate the proposed method. In Experiment 1, we learned toy transplant nets by inserting adapters between middle layers of pre-trained CNNs. Then, Experiments 2 and 3 were designed considering the real application of learning a transplant net with two task modules (i.e. modules for object classification and segmentation) and multiple category modules. As shown in Fig. 4(b,d), we can divide the entire network-transplanting procedure into an operation sequence of transplanting category modules to the classification module and another operation sequence of transplanting category modules to the segmentation module. Therefore, we separately conducted the two sequences of transplanting operations in Experiments 2 and 3 for more convincing results.

Because our back-distillation strategy decreases the demand for training samples, we tested the learning of adapters with limited numbers of samples (i.e. 10, 20, 50, and 100 samples). We even tested network transplanting without any training samples in Experiment 1, i.e. optimizing the distillation loss without considering the task loss.

Baselines: We compared our back-distillation method (namely back-distill) with two baselines. For fair comparisons, all baselines exclusively learned the adapter without fine-tuning the task module. The first baseline only optimized the task loss without distillation, namely direct-learn. The second baseline was the traditional distillation [11], namely distill, where the distillation loss was computed on the network outputs. The distillation was applied to the outputs of the task modules ĝ and g, because, except for the outputs, other layers in ĝ and g did not produce features on similar manifolds. We only tested the distill method on object segmentation, because unlike single-category classification, segmentation outputs have correlations between soft output labels.

Figure 5: Comparison of the projected feature spaces. For each category, blue points indicate 4096-d fc8 features of different images when our method learned the adapter. Red points correspond to fc8 features of different images when the adapter was learned by only using the task loss, i.e. the direct-learn baseline. We visualize the first two principal components of the fc8 features. Because the direct-learn baseline usually learned problematic many-to-one projections and projections into forgotten spaces (see Fig. 3), most information in the adapter's output features could not pass through the ReLU layers to the fc8 layer. Therefore, given the adapter learned with the direct-learn baseline, units in the fc8 layer were weakly triggered, and many-to-one projections decreased the diversity of the fc8 features.

Network structures, datasets, & details: We transplanted category modules to a classification module in the first two experiments and to a segmentation module in the third experiment. In recent studies, people usually extend the structure of the widely used VGG-16 net [21] to implement classification [21, 30] and segmentation [15], as standard baselines of the two tasks. Thus, as shown in Fig. 4(a), we can represent a teacher net for both classification and segmentation as a network with a single category module and two task modules. The network branch for classification was exactly a VGG-16 net, and the network branch for segmentation was identical to the FCN-8s model proposed in [15]. Because the first five layers of the FCN-8s and those of the VGG-16 share the same structure, we considered the first five layers (two conv-layers, two ReLU layers, and one pooling layer) as the shared category module and regarded the upper layers of the VGG-16 and the FCN-8s as the two task modules. Both branches are benchmark networks.

We followed the standard experimental settings in [15] to learn an FCN-8s for each category, using the Pascal VOC 2011 dataset (with segmentation labels on 8498 PASCAL images collected by [9]). For object classification, we followed the standard settings in [30], which used PASCAL VOC images to learn CNNs for the binary classification of a single category against random images. Note that we only learned and merged teacher networks for five mammal categories, i.e. the cat, cow, dog, horse, and sheep categories. Mammal categories share similar object structures, which makes the features of one category transferable to other categories.

Now, we introduce the adapter structures, as shown in Fig. 4(c). An adapter contained several conv-layers, each followed by a ReLU layer (one or three conv-layers in experiments). Each conv-layer in the adapter contained D filters, where D is the channel number of f's output; the filter shapes differed between Experiment 1 and Experiments 2 and 3 (the latter without padding) but were chosen to avoid changing the size of the feature maps. In addition, we inserted a "reorder" layer and a "re-scaling" layer (see supplementary materials) in front of the conv-layers in the adapter. The "reorder" layer randomly reordered the channels of the features from the category module, which enlarged the dissimilarity between the output features of different category modules; we inserted it to mimic feature states in real applications for a fair evaluation. The "re-scaling" layer normalized the scale of the features from the category module for robust network transplanting, so that the average feature magnitude (measured by the Frobenius norm of the input feature of g given an image) over images of the new category matched that over images of the categories already modeled by the transplant net, up to a fixed scalar hyper-parameter. Because we used the task module of the dog network as the generic task module g, the reference magnitude was computed on dog images.
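The adapter structure described above can be sketched as follows (assumed PyTorch code; the filter sizes, the per-image re-scaling statistic, and the fixed scalar are illustrative placeholders):

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Adapter: fixed channel reorder + re-scaling, followed by conv+ReLU blocks
    that preserve the spatial size of the feature map."""
    def __init__(self, channels, num_conv=3, scale=1.0):
        super().__init__()
        self.register_buffer("perm", torch.randperm(channels))  # "reorder" layer
        self.scale = scale                                       # fixed re-scaling scalar
        layers = []
        for _ in range(num_conv):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        self.convs = nn.Sequential(*layers)

    def forward(self, x):
        x = x[:, self.perm]                                      # reorder channels
        norm = x.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-8
        x = self.scale * x / norm                                # re-scale magnitude per image
        return self.convs(x)

adapter = Adapter(channels=64, num_conv=3)
out = adapter(torch.randn(2, 64, 56, 56))                        # spatial size preserved
```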

Insert one conv-layer:

# of samples | Method | cat | cow | dog | horse | sheep | Avg.
100 | direct-learn | 12.89 | 3.09 | 12.89 | 10.82 | 9.28 | 9.79
100 | back-distill | 1.55 | 0.52 | 3.61 | 1.55 | 1.03 | 1.65
50 | direct-learn | 13.92 | 15.98 | 12.37 | 16.49 | 15.46 | 14.84
50 | back-distill | 1.55 | 0.52 | 3.61 | 1.55 | 1.03 | 1.65
20 | direct-learn | 16.49 | 26.80 | 28.35 | 32.47 | 25.77 | 25.98
20 | back-distill | 1.55 | 0.52 | 3.09 | 1.55 | 1.03 | 1.55
10 | direct-learn | 39.18 | 39.18 | 35.05 | 41.75 | 38.66 | 38.76
10 | back-distill | 1.55 | 0.52 | 3.61 | 1.55 | 1.03 | 1.65
0 | direct-learn | – | – | – | – | – | –
0 | back-distill | 1.55 | 0.52 | 4.12 | 1.55 | 1.03 | 1.75

Insert three conv-layers:

# of samples | Method | cat | cow | dog | horse | sheep | Avg.
100 | direct-learn | 9.28 | 6.70 | 12.37 | 11.34 | 3.61 | 8.66
100 | back-distill | 1.03 | 2.58 | 4.12 | 1.55 | 2.58 | 2.37
50 | direct-learn | 14.43 | 13.92 | 15.46 | 8.76 | 7.22 | 11.96
50 | back-distill | 3.09 | 3.09 | 4.12 | 2.06 | 4.64 | 3.40
20 | direct-learn | 22.16 | 25.77 | 32.99 | 22.68 | 22.16 | 25.15
20 | back-distill | 7.22 | 6.70 | 7.22 | 2.58 | 5.15 | 5.77
10 | direct-learn | 36.08 | 32.99 | 31.96 | 34.54 | 34.02 | 33.92
10 | back-distill | 8.25 | 15.46 | 10.31 | 13.92 | 10.31 | 11.65
0 | direct-learn | – | – | – | – | – | –
0 | back-distill | 50.00 | 50.00 | 50.00 | 49.48 | 50.00 | 49.90
Table 2: Error rates (%) of classification when we inserted conv-layers with ReLU layers into a CNN as the adapter. The last row shows the performance of network transplanting without training samples, i.e. without optimizing the task loss L(y*, y).

4.1 Exp. 1: Adding adapters to pre-trained CNNs

In this experiment, we conducted a toy test, i.e. inserting and learning an adapter between a category module and a task module to test network transplanting. Here, we only considered networks with the VGG-16 structure [21] for single-category classification. These networks were strongly supervised using all training samples and achieved error rates of 1.6%, 0.6%, 4.1%, 1.6%, and 1.0% for the classification of the cat, cow, dog, horse, and sheep categories, respectively. Then, we learned the two types of adapters introduced above (see Fig. 4(c)).

Because the classification output is a scalar without neighboring outputs to provide correlations, we simply set ∂z/∂y to a constant without any gradient randomization. Instead, we used the revised dummy ReLU operations in Eqn. (4b) to ensure the value diversity of the pseudo-gradients for learning. More specifically, we applied one of the two revised ReLU derivatives in the task module and the other in the adapter. (All derivative functions in Eqn. (4b) are only used for distillation and do not affect gradient propagation from the task loss.) We used the same parameter setting for object classification in Experiments 1 and 2.

In Fig. 5, we compare the spaces of fc8 features when our method and the direct-learn baseline, respectively, were used to learn the adapter with three conv-layers. The adapter learned by our method passed much stronger information to the final fc8 layer and yielded more diverse features. This demonstrates that our method avoided the problematic projections in Fig. 3 better than the direct-learn baseline.

Experiment 2: transplanting to a classification module (error rate, %):

# of samples | Method | cat | cow | horse | sheep | Avg.
100 | direct-learn | 20.10 | 12.37 | 18.56 | 11.86 | 15.72
100 | back-distill | 9.79 | 5.67 | 8.25 | 4.64 | 7.09
50 | direct-learn | 22.68 | 19.59 | 19.07 | 14.95 | 19.07
50 | back-distill | 10.82 | 18.04 | 13.92 | 5.15 | 11.98
20 | direct-learn | 31.96 | 37.11 | 39.69 | 35.57 | 36.08
20 | back-distill | 21.13 | 35.57 | 32.47 | 22.68 | 27.96
10 | direct-learn | 41.75 | 37.63 | 44.33 | 33.51 | 39.31
10 | back-distill | 34.02 | 42.27 | 44.85 | 33.51 | 38.66

Experiment 3: transplanting to a segmentation module (pixel accuracy, %):

# of samples | Method | cat | cow | horse | sheep | Avg.
100 | direct-learn | 76.54 | 74.60 | 81.00 | 78.37 | 77.63
100 | distill | 74.65 | 80.18 | 78.05 | 80.50 | 78.35
100 | back-distill | 85.17 | 90.04 | 90.13 | 86.53 | 87.97
50 | direct-learn | 71.30 | 74.76 | 76.83 | 78.47 | 75.34
50 | distill | 68.32 | 76.50 | 78.58 | 80.62 | 76.01
50 | back-distill | 83.14 | 90.02 | 90.46 | 85.58 | 87.30
20 | direct-learn | 71.13 | 74.82 | 76.83 | 77.81 | 75.15
20 | distill | 71.17 | 74.82 | 76.05 | 78.10 | 75.04
20 | back-distill | 84.03 | 88.37 | 89.22 | 85.01 | 86.66
10 | direct-learn | 70.46 | 74.74 | 76.49 | 78.25 | 74.99
10 | distill | 70.47 | 74.74 | 76.83 | 78.32 | 75.09
10 | back-distill | 82.32 | 89.49 | 85.97 | 83.50 | 85.32
Table 3: (top) Error rate of single-category classification when we transplanted the classification module from a pre-trained dog network to the network of the target category in Experiment 2. The adapter contained three conv-layers. (bottom) Pixel accuracy of object segmentation when we transplanted the segmentation module from a dog network to the network of the target category in Experiment 3. The adapter contained a conv-layer and a ReLU layer.

In Table 2, we compare our back-distill method with the direct-learn baseline when we inserted an adapter with a single conv-layer and when we inserted an adapter with three conv-layers. Table 2(left) shows that, compared to the direct-learn baseline, our back-distill method yielded significantly lower classification errors. Even without any training samples, our method still outperformed the direct-learn method with 100 training samples.

Note that, given an adapter with multiple conv-layers (e.g. three conv-layers) and no training samples, our back-distill method was not powerful enough to learn the adapter, because deeper adapters with more parameters have more representational flexibility and thus require stronger supervision to avoid over-fitting. For example, in the last row of Table 2, our method successfully optimized an adapter with a single conv-layer (an error rate of 1.75%), but was hampered when the adapter had three conv-layers (an error rate of 49.9%; our method produced a biased short-cut solution).

4.2 Exp. 2: Operation sequences of transplanting category modules to the classification module

In this experiment, we evaluated the performance of transplanting category modules to the classification module. We considered the classification module of the dog network as a generic one (because the dog category contained more training samples, the CNN for the dog was believed to be better learned; thus, we used the task module learned for dog images as a generic task module). We transplanted the category modules of the other four mammal categories to this task module. Based on the experience in Experiment 1, we set the adapter to contain three conv-layers. Following Eqn. (4b), we used a revised ReLU derivative to compute the derivatives of the ReLU operations in both the task module and the adapter; the only exception was the lowest ReLU operation of the task module, for which we applied the other revised form. We generated ∂z/∂y for each input image by concatenating random matrices along the third dimension, where each matrix contained 20%/80% positive/negative elements. The generation of ∂z/∂y is introduced in Section 3.2.

Table 3 shows the performance when we transplanted category modules to a task module oriented to categories with similar structures. We tested our method with a few (10–100) training samples. With 50 or more training samples, our method yielded roughly half the classification error of the direct-learn baseline.

Classification (error rate, %):

# of samples | Method | cat | cow | horse | sheep | Avg.
100 | direct-learn | 14.43 | 20.62 | 17.01 | 11.86 | 15.98
100 | back-distill | 5.67 | 3.61 | 6.70 | 2.58 | 4.64
50 | direct-learn | 21.13 | 23.71 | 15.46 | 10.31 | 17.65
50 | back-distill | 7.22 | 9.28 | 8.76 | 5.67 | 7.73
20 | direct-learn | 25.26 | 24.23 | 39.18 | 23.71 | 28.10
20 | back-distill | 17.01 | 19.59 | 23.71 | 14.95 | 18.82
10 | direct-learn | 42.27 | 36.60 | 40.72 | 39.18 | 39.69
10 | back-distill | 42.27 | 32.99 | 28.35 | 30.41 | 33.51

Segmentation (pixel accuracy, %):

# of samples | Method | cat | cow | horse | sheep | Avg.
10 | direct-learn | 64.97 | 69.65 | 80.26 | 69.87 | 71.19
10 | back-distill | 74.59 | 83.51 | 82.08 | 80.21 | 80.10
20 | direct-learn | 68.69 | 81.02 | 71.88 | 72.65 | 73.56
20 | back-distill | 73.34 | 84.78 | 81.40 | 81.04 | 80.14
Table 4: (top) Error rates (%) of single-category classification when the classification module was learned for both mammals and dissimilar categories. (bottom) Pixel accuracy (%) of object segmentation when the segmentation module was learned for both mammals and dissimilar categories. Other experimental settings were the same as in Experiments 2 and 3. We do not show results for the dog category, because we need to compare the average performance in this table with the results in Table 3.

4.3 Exp. 3: Operation sequences of transplanting category modules to the segmentation module

In this experiment, we evaluated the performance of transplanting category modules to the segmentation module. Five FCNs were strongly supervised using all training samples for single-category segmentation. These networks achieved pixel-level segmentation accuracies (defined in [15]) of 95.0%, 94.7%, 95.8%, 94.6%, and 95.6% for the cat, cow, dog, horse, and sheep categories, respectively. As in Experiment 2, we considered the segmentation module of the dog network as a generic one. We transplanted the category modules of the other four mammal categories to this task module. Based on the experience in Experiment 1, we set the adapter to contain one conv-layer. Following Eqn. (4b), we used a revised ReLU derivative to compute the derivatives of the ReLU operations. We used the same parameter setting for all categories in this experiment.

Table 3 compares the pixel-level segmentation accuracy of our method and the direct-learn baseline. We tested our method with a few (10–100) training samples. Our method exhibited 10–12% higher accuracy than the direct-learn baseline.

4.4 Transplanting to similar or dissimilar categories?

Theoretically, just as in transfer learning, the task module should deal with a set of categories that have structures similar to the new category, to ensure high efficiency. In fact, identifying categories with similar structures is still an open problem. In practice, people manually define sets of similar categories, e.g. learning one task module for mammals and another task module for different vehicles.

In order to quantitatively evaluate the performance of transplanting to dissimilar categories, we designed new task modules for additional testing. We considered the first four categories of the VOC dataset, i.e. aeroplane, bicycle, bird, and boat, to have structures dissimilar to mammals. In Experiment 2/Experiment 3, for the transplanting of a mammal category (take the cat for example) to a classification/segmentation module, we learned a "leave-one-out" classification/segmentation module to deal with all four dissimilar categories and all mammal categories except the cat.

Table 4 shows the performance of transplanting to a task module trained for both similar and dissimilar categories. Our method outperformed the baseline. Compared to the performance of transplanting to a task module oriented to similar categories in Table 3, transplanting to a more generic task module for dissimilar categories hurt the segmentation performance but boosted the classification performance. This is because forcing a task module to handle dissimilar categories sometimes makes the task module encode more generic and robust representations, while it may also make the task module ignore details of mammal categories.

5 Conclusions and discussion

In this paper, we focused on a new task, i.e. merging pre-trained teacher nets into a generic, modular transplant net with a few or even without training annotations. We discussed the importance and core challenges of this task. We developed the back-distillation algorithm as a theoretical solution to the challenging space-projection problem.

The back-distillation strategy significantly decreases the demand for training samples. Experimental results demonstrated the superior efficiency of our method. Our method without any training samples even outperformed the baseline with 100 training samples, as shown in Table 2(left).

The growth of a large transplant net for different categories and tasks can be divided into many elementary network-transplanting operations (see Fig. 1 and the supplementary materials for more discussion). Once the transplant net has been learned, we can optionally fine-tune its task modules using training samples of multiple categories. Note that, unlike back distillation, the performance of fine-tuning depends on the number of training samples. Thus, given only a few samples, whether additional fine-tuning will increase or decrease the generality of the transplant net is a difficult question, which requires sophisticated analysis in the future.

References

Computation of gradients w.r.t. the distillation loss

In order to optimize the distillation loss, we need to compute its gradient w.r.t. the adapter parameters θ, which requires differentiating the pseudo-gradient G in Eqn. (4b) w.r.t. θ.

Thus, we first explore the closed-form formulation of the pseudo-gradient G. As shown above, the back-propagation process for computing G can be written as a number of cascaded functions, which are the derivatives of the layer operations. Since the revised derivatives of the ReLU, pooling, and dropout operations have been formulated in the manuscript, and the derivatives of other common operations are easy to obtain, we mainly focus on the formulation of the derivative of the convolution operation.

In general, it is not difficult to derive the derivative of any convolution operation. Here, we focus on the most common case, i.e. a convolution with zero padding and a stride of 1. Given an input tensor x and a convolutional filter with weights W and a bias term b, the convolution can be written as x_out = W ⊗ x + b. For VGG networks, people usually set the padding so that the output keeps the spatial size of the input. The convolution can further absorb b by appending a constant channel of ones to x and a corresponding channel to W, so that the convolution becomes a purely linear operation on the augmented input. Thus, the derivative of the convolution output w.r.t. its input is itself a convolution, i.e. a transposed convolution with the same (flipped) weights applied to the incoming gradient.

In this way, we obtain the closed-form formulation of the derivative of each layer. We can then easily compute the gradient of the distillation loss w.r.t. θ using the chain rule of back-propagation. We consider this process as back-back-propagation.
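The closed-form derivative of a stride-1 convolution w.r.t. its input is a transposed convolution with the same weights. The following sketch (illustrative, not from the paper) verifies this numerically; it is the building block used when propagating through gradient maps in back-back-propagation:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 4, 8, 8, requires_grad=True)
w = torch.randn(6, 4, 3, 3)
g_out = torch.randn(1, 6, 8, 8)                 # incoming gradient dz/dy

# Gradient through autograd.
y = F.conv2d(x, w, padding=1)
grad_auto, = torch.autograd.grad(y, x, grad_outputs=g_out)

# Closed form: derivative of a stride-1 convolution is a transposed convolution.
grad_closed = F.conv_transpose2d(g_out, w, padding=1)

print(torch.allclose(grad_auto, grad_closed, atol=1e-5))   # True
```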

About the case of using an enlarged ∂z/∂y

When we use an enlarged ∂z/∂y, we obtain an enlarged pseudo-gradient map G. As discussed above, the computation of G can be considered as a number of cascaded functions applied to the input ∂z/∂y, which is quite similar to the forward propagation in a neural network.

Visualization of network structures used in three experiments

How to learn a large transplant network

In the paper, we limited our attention to the core technique of back distillation for network transplanting and conducted preliminary experiments to demonstrate the effectiveness of the proposed method. Here, we explain how to use the proposed back-distillation algorithm to gradually grow a large transplant network. The basic idea has been shown in Figure 1 of the paper: we divide the complex procedure of building a large transplant network into elementary network-transplanting operations.

Generally speaking, there are two types of operations during the learning of the transplant network, i.e. adding a new task module and adding a new category module.

Adding a new task module: Compared to adding category modules, adding task modules is relatively easier. There are two ways to add a task module.

Firstly, given one or multiple pre-trained category modules and training samples of the corresponding category/categories, we can directly learn a new task module upon the features of the category module(s). This learning process is shown as "Step 1" in Figure 1, and it follows the traditional learning strategy without network transplanting.

Secondly, when a specific network is pre-trained with a single category module and multiple task modules, we can transplant the category module to a generic task module in the target transplant network. In this case, the other task modules of the specific network are automatically added to the transplant network, although these task modules connect with only a single category module. This case is shown as "Step 2" in Figure 1, in which the Task-3 module is added to the transplant network when we connect the corresponding category module to the Task-2 module of the transplant network.

Adding a new category module: We can also divide the insertion of a category module into the following two cases. The first case is the elementary network transplanting introduced in this paper. The second case is to build more connections between category modules and task modules that are already contained in the transplant network. This case is shown as "Step 3" in Figure 1: we learn new adapters to connect existing category modules and task modules via network transplanting.

Comparing our back-distillation with the Jacobian distillation

The loss for the Jacobian distillation [22] is similar to Equation (3). There are two essential differences, which make the Jacobian distillation inapplicable to network transplanting. (i) The Jacobian distillation does not balance the magnitudes of neural activations between the category module and the task module, which significantly increases the difficulty of network transplanting. (ii) More crucially, the Jacobian distillation uses the real gradients of the network instead of pseudo-gradients. However, when we fix the upper task module during network transplanting, the task module usually blocks most signals during the forward propagation. Thus, during back-propagation, real gradients usually cannot pass through the ReLU layers in the task module to produce informative Jacobians for distillation.
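To illustrate why real gradients are uninformative at the beginning of transplanting, the following toy sketch (an assumed, artificial example with negatively biased units, not an experiment from the paper) feeds random, unaligned features into a frozen ReLU module and measures how much of the input gradient survives the ReLU gates:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# A frozen "task module" with several ReLU layers and negatively biased units,
# mimicking a module whose valid input space is narrow.
task = nn.Sequential(
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 1),
)
with torch.no_grad():
    for m in task:
        if isinstance(m, nn.Linear):
            m.bias.fill_(-2.0)                 # most units stay inactive for random inputs
for p in task.parameters():
    p.requires_grad_(False)

x = torch.randn(32, 256, requires_grad=True)   # unaligned adapter features
task(x).sum().backward()
blocked = (x.grad == 0).float().mean().item()
print(f"fraction of zero input-gradient entries: {blocked:.2f}")   # close to 1.00
```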

The core challenge

To clarify the challenge of learning the adapter, we compare the following two cases: 1) learning both the adapter h and the task module g, and 2) learning the adapter h while fixing the task module g. The traditional method of directly optimizing the task loss can easily solve the first case but is hampered in the second case.

Case 1, learning both the adapter h and the task module g: This case corresponds to the traditional problem of deep learning. We initialize the task module g with random parameters, so in the first epoch of learning, the input features produce random activations in the layers of both h and g, and positive activations pass through the ReLU layers to make random predictions. Successfully passing information to the final output is quite important, because we then obtain gradients of the task loss to optimize h and g. In this way, both h and g can be well learned.

Case 2, learning the adapter h while fixing the task module g: Unlike Case 1, we use a pre-trained task module g and fix its parameters during the learning process. We only initialize the adapter h with random parameters. However, as shown in Fig. 3(left) of the paper, the task module g has a vast forgotten space of its input features and can only pass very specific features to the final output. In other words, g cannot pass most information of h's features to the final output in early epochs. Thus, we will not obtain informative gradients to optimize the parameters of h.