Network Transplanting (extended abstract)

01/21/2019, by Quanshi Zhang, et al.

This paper focuses on a new task, i.e., transplanting a category-and-task-specific neural network into a generic, modular network without strong supervision. We design a functionally interpretable structure for the generic network. Like building LEGO blocks, we teach the generic network a new category by directly transplanting the corresponding module from a pre-trained network, using a few or even no sample annotations. Our method incrementally adds new categories to the generic network without affecting the representations of existing categories. In this way, it breaks a typical bottleneck of learning a single net for massive tasks and categories, i.e., the requirement of collecting samples for all tasks and categories simultaneously before learning begins. To this end, we propose a new distillation algorithm, namely back-distillation, to overcome the specific challenges of network transplanting. Without any training samples, our method even outperformed the baseline trained with 100 samples.









Quanshi Zhang is the corresponding author with the John Hopcroft Center and the MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University.

Beyond end-to-end learning of a black-box neural network, in this paper we propose a new deep-learning methodology, i.e., network transplanting. Instead of learning from scratch, network transplanting merges several convolutional networks pre-trained for different categories and tasks into a generic, distributed neural network.

Network transplanting is of special value in both theory and practice. We briefly introduce the key deep-learning problems that network transplanting addresses as follows.

Future potential of learning a universal net

Instead of learning different networks for different applications, building a universal net with a compact structure for various categories and tasks is one of the ultimate objectives of AI. In spite of the gap between current algorithms and the goal of learning a huge universal net, scientific exploration along this direction remains meaningful. Here, we list key issues of learning a universal net that are not commonly discussed in the current deep-learning literature.

Besides the total number of training annotations, the start-up cost of sample collection is also important. Traditional methods usually require people to prepare training samples for all pairs of categories and tasks simultaneously before learning begins, which is usually unaffordable when there is a large number of categories and tasks. In comparison, our method enables a neural network to sequentially absorb network modules of different categories one by one, so the algorithm can start without all the data.

Massive distributed learning & weak centralized learning: Distributing the massive computation of learning the network into local computation centers all over the world is of great practical value. Numerous networks around the world are already locally pre-trained for specific tasks and categories. Centralized network transplanting physically merges these networks into a compact universal net with a few or even without any training samples.

Delivering models or data: Delivering pre-trained networks to the computation center is usually much cheaper in practice than collecting and sending raw training data.

Middle-to-end semantic manipulation for application: How to efficiently organize and use the knowledge in the net is also a crucial problem. We use different modules of the network to encode the knowledge of different categories and different tasks. Like building LEGO blocks, people can manually connect a category module and a task module to accomplish a certain application (see Fig. 1(left)).

Figure 1: Building a transplant net. We propose a theoretical solution for incrementally merging category modules from teacher nets into a transplant (student) net with a few or even without sample annotations. The transplant net has an interpretable, modular structure: a category module, e.g., a cat module, provides cat features to different task modules; a task module, e.g., a segmentation module, serves various categories. We show two typical operations for learning transplant nets. Blue ellipses show modules in teacher nets used for transplanting; red ellipses indicate new modules added to the transplant net. Unrelated adapters in each step are omitted for clarity.
| Method | Annotation cost | Sample preparation | Interpretability | Catastrophic forgetting | Modular manipulation | Optimization |
| Directly learning a multi-task net | Massive | Simultaneously prepare samples for all tasks and categories | Low | n/a | Not supported | Back prop. |
| Transfer-/meta-/continual-learning | Some support weakly-supervised learning | Some learn a category/task after another | Usually low | Most algorithmically alleviate it | Not supported | Back prop. |
| Transplanting | A few or w/o annotations | Learns a category after another | High | Physically avoided | Supported | Back-back prop. |
Table 1: Comparison between network transplanting and other studies. Note that this table only summarizes the mainstream approaches in different research directions, considering the huge research diversity.

Task of network transplanting

To solve the above issues, we propose network transplanting, i.e., building a generic model by gradually absorbing networks locally pre-trained for specific categories and tasks. We design an interpretable modular structure for the target network, namely a transplant net, where each module is functionally meaningful. As shown in Fig. 1(left), the transplant net consists of three types of modules, i.e., category modules, task modules, and adapters. Each category module extracts general features for a specific category (e.g., the dog). Each task module is learned for a certain task (e.g., classification or segmentation) and is shared by different categories. Each adapter projects the output features of a category module onto the input space of a task module. Each category/task module is thus shared by multiple tasks/categories.
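This modular composition can be sketched in a few lines of code. The following is a minimal illustration, not the paper's implementation; the module names, feature dimensions, and linear-plus-ReLU stand-ins are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_module(in_dim, out_dim):
    """A stand-in for a pre-trained module: a fixed random linear map + ReLU."""
    W = rng.standard_normal((out_dim, in_dim)) * 0.1
    return lambda x: np.maximum(W @ x, 0.0)

# Category modules extract category-specific features from images.
cat_module = {"cat": make_module(64, 32), "dog": make_module(64, 32)}
# Task modules are shared across categories.
task_module = {"classify": make_module(32, 2), "segment": make_module(32, 16)}
# Adapters project a category module's output into a task module's input space.
adapter = {("dog", "classify"): make_module(32, 32)}

def run(image, category, task):
    """LEGO-style composition: category module -> adapter -> task module."""
    feat = cat_module[category](image)
    feat = adapter[(category, task)](feat)
    return task_module[task](feat)

out = run(rng.standard_normal(64), "dog", "classify")
print(out.shape)  # (2,)
```

Adding a new category to an existing task then amounts to registering one new adapter entry, leaving all category and task modules untouched.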

We can learn an initial transplant net with very few tasks and categories in the traditional multi-task/category learning scenario. Then, we gradually grow the transplant net to deal with more categories and tasks via network transplanting, which can be conducted with or without human annotations as additional supervision. We summarize two typical types of transplanting operations in Fig. 1(right). The core technique is to learn an adapter that connects a task module in the transplant net to a category module from another network.

The elementary transplanting operation is shown in Fig. 2. We are given a transplant net whose task module has been learned to accomplish a certain task for many categories, except for one new category. We hope the task module to handle this new category, so we take another network (namely, a teacher net) that contains its own category module and task module and is pre-trained for the same task on that category. We may (or may not) have a few training annotations of the new category for the task. Our goal is to transplant the category module of the teacher net into the transplant net.

Note that we only learn a small adapter module to connect the transplanted category module to the task module. We do not fine-tune the category and task modules during the transplanting process, to avoid damaging their generality.

However, learning adapters while fixing the parameters of category and task modules poses specific challenges to deep-learning algorithms. Therefore, in this study, we propose a new algorithm, namely back-distillation, to overcome these challenges. The back-distillation algorithm uses the cascaded adapter and task module to mimic the upper layers of the pre-trained teacher net. The algorithm requires the transplant net to have gradients/Jacobians similar to those of the teacher net w.r.t. the category module's output features. In experiments, our back-distillation method without any training samples even outperformed the baseline with 100 training samples (see Table 2(left)).

Differences from previous knowledge transfer

The proposed network transplanting is close in spirit to continual learning (or lifelong learning). As exploratory research, we summarize the essential differences from traditional studies in Table 1.

Modular interpretability & more controllability: Besides discrimination power, interpretability is another important property of a neural network. Our transplant net clarifies the functional meaning of each intermediate network module, which makes the knowledge-transfer process more controllable.

Figure 2: Overview. (left) Given a teacher net and a student net, we aim to learn an adapter via distillation. We transplant the category module into the student net by using the adapter to connect it to the task module, so that the task module can deal with the new category.

Bottleneck of transferring upper modules: Most deep-learning strategies are not suitable for directly transferring pre-trained upper modules. When we transfer a pre-trained upper task module to a new category, network transplanting only allows us to modify the lower adapter; modifying the upper module is not permitted. However, it is difficult to optimize a lower adapter when the upper module is fixed.

Catastrophic forgetting: Continually learning new jobs without hurting the performance on old jobs is a key issue in continual learning [Rusu et al. 2016, Schwarz et al. 2018]. Our method exclusively learns the adapter, which physically prevents the learning of new categories from changing existing modules.

We summarize the contributions of this study as follows. (i) We propose a new deep-learning method, network transplanting with a few or even without additional training annotations, which can be considered a theoretical solution to the three issues discussed in the section on the future potential of learning a universal net. (ii) We develop an optimization algorithm, i.e., back-distillation, to overcome the specific challenges of network transplanting. (iii) Preliminary experiments demonstrated the effectiveness of our method, which significantly outperformed baselines.

Algorithm of network transplanting

Overview: As shown in Fig. 2, we are given a teacher net trained for a single or multiple tasks w.r.t. a certain category. The category module at the bottom of the teacher net consists of several layers and connects to a specific task module in the upper layers. We are also given a transplant net with a generic task module, which has been learned for multiple categories other than this one.

The initial transplant net (before transplanting) can be learned via the traditional scenario of learning from samples of some categories. We can roughly regard its task module as encoding generic representations for the task. Similarly, the category module extracts generic features for multiple tasks. Thus, we do not fine-tune either module, to avoid decreasing their generality.

Our goal is to transplant the category module to the transplant net by learning an adapter, so that the generic task module can deal with the new category.

The basic idea of network transplanting is to use the cascaded adapter and generic task module to mimic the specific task module of the teacher net. We call the transplant net a student net. Let x denote the output feature that the category module computes for an image, let g and ĝ denote the student's and teacher's task modules, and let h_θ denote the adapter with parameters θ. The teacher net and the student net then output ŷ = ĝ(x) and y* = g(h_θ(x)), respectively. Thus, network transplanting can be described as

    min_θ  E_x [ Loss(y*, ŷ) ]  =  min_θ  E_x [ Loss(g(h_θ(x)), ĝ(x)) ].

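As a toy illustration of this mimicking objective, the following sketch fits an adapter by gradient descent so that the student's cascaded modules reproduce the teacher's outputs. It uses simplifying assumptions of our own: all modules are plain linear maps, and the names g, g_hat, and A are our stand-ins, not the paper's notation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen modules, sketched as linear maps: g_hat is the teacher's task
# module, g is the student's generic task module (shapes are hypothetical).
g_hat = rng.standard_normal((5, 8)) * 0.3
g = rng.standard_normal((5, 8)) * 0.3
X = rng.standard_normal((8, 200))  # features output by the category module

# Learn only the adapter A so that the student output g @ A @ x
# mimics the teacher output g_hat @ x in the mean-squared sense.
A = np.zeros((8, 8))
lr = 0.1
for _ in range(3000):
    resid = g @ A @ X - g_hat @ X            # student minus teacher outputs
    grad_A = g.T @ resid @ X.T / X.shape[1]  # gradient of the MSE w.r.t. A
    A -= lr * grad_A

mse = np.mean((g @ A @ X - g_hat @ X) ** 2)
print(mse)  # small: the adapter has aligned the two feature spaces
```

In deep nets with frozen nonlinear upper modules, this direct output matching becomes hard to optimize, which is what motivates the back-distillation algorithm.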
Problem of space projection & back-distillation

It is challenging for an adapter to properly project the output feature space of the category module onto the input feature space of the task module. The information-bottleneck theory shows that, during learning, a network selectively forgets certain subspaces of its middle-layer features and gradually focuses on discriminative features. Thus, both the output space of the category module and the input space of the task module contain vast forgotten subspaces. Features lying in the forgotten input space of the task module cannot pass most of their information through its ReLU layers to the network output. The forgotten output space of the category module refers to the subspace that does not contain its output features.

Vast forgotten feature spaces significantly increase the difficulty of learning. Since the valid input features of the task module usually lie on low-dimensional manifolds, most output features of the adapter fall inside the forgotten space, i.e., the task module does not pass most information of its input features to the network output. Consequently, the adapter does not receive informative gradients of the loss for learning.
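The effect of the forgotten space can be seen in a tiny numpy sketch (all weights, biases, and dimensions here are hypothetical): a frozen ReLU layer whose units only fire on a narrow remembered subspace blocks both the forward signal and the backward gradient for an arbitrary adapter output.

```python
import numpy as np

rng = np.random.default_rng(1)

# A frozen layer of the task module: linear map + ReLU whose units need a
# strong, specific activation pattern to fire (strongly negative biases).
W = rng.standard_normal((16, 16))
b = -20.0 * np.ones(16)

def layer(x):
    return np.maximum(W @ x + b, 0.0)

# A generic feature from an untrained adapter falls in the forgotten space:
x_forgotten = rng.standard_normal(16)
y = layer(x_forgotten)
mask = (W @ x_forgotten + b) > 0  # the ReLU mask also gates backward gradients

# A feature aligned with a unit's weight vector lies in the remembered space:
x_remembered = 10.0 * W[0] / np.linalg.norm(W[0])
y2 = layer(x_remembered)

print(y.sum(), mask.sum())  # 0.0 0 -> no signal forward, no gradient back
print(y2[0] > 0)            # True -> remembered features pass through
```

Because the ReLU mask is all zeros for the forgotten input, any loss gradient back-propagated through this layer vanishes before it reaches the adapter, which is exactly the optimization difficulty described above.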

To learn good projections, we propose to force the gradient (also known as the attention or Jacobian) of the student net to approximate that of the teacher net, which is a necessary condition for the student output to match the teacher output:

    ∂ s(y*) / ∂x  ≈  ∂ s(ŷ) / ∂x,

where x denotes the output feature of the category module, ŷ and y* denote the outputs of the teacher net and the student net, s is an arbitrary function of the output that returns a scalar, and θ denotes the parameters of the adapter. Therefore, we use the following distillation loss for back-distillation:

    Loss = L_task(y*, y_gt) + λ ‖ ∂ s(y*) / ∂x − ∂ s(ŷ) / ∂x ‖²,

where L_task is the task loss of the student net, y_gt denotes the ground-truth label, and λ is a scaling scalar. This formulation is similar to the Jacobian distillation in [Srinivas and Fleuret 2018]. We omit L_task if we learn the adapter without additional training labels.
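For intuition, the gradient-matching penalty can be written out in closed form when all modules are linear. This is a toy sketch of our own: the matrices g, g_hat, and A are stand-ins for the task modules and the adapter, and the scalar function is taken to be the sum of the outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

g_hat = rng.standard_normal((4, 6))    # teacher task module (frozen)
g = rng.standard_normal((4, 6))        # student task module (frozen)
A = rng.standard_normal((6, 6)) * 0.1  # adapter, to be learned

# Take s(y) = y.sum(); for linear modules the input-gradients are closed-form:
#   d s(g_hat @ x) / dx = g_hat.T @ 1
#   d s(g @ A @ x) / dx = A.T @ g.T @ 1
ones = np.ones(4)
grad_teacher = g_hat.T @ ones
grad_student = A.T @ (g.T @ ones)

# The back-distillation term penalizes the mismatch of the input-gradients.
loss_bd = np.sum((grad_student - grad_teacher) ** 2)

# A well-chosen adapter drives the mismatch to zero: with A* = pinv(g) @ g_hat
# we get g @ A* = g_hat exactly (g has full row rank), so gradients coincide.
A_star = np.linalg.pinv(g) @ g_hat
grad_matched = A_star.T @ (g.T @ ones)

print(loss_bd > 0, np.allclose(grad_matched, grad_teacher))  # True True
```

In a deep net these input-gradients come from back-propagation, so optimizing their mismatch requires propagating gradients through the back-propagation pass itself; this is what the back-back-propagation scheme of the next subsection addresses.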

Learning via back distillation

It is difficult for most existing techniques, including those for Jacobian distillation, to directly optimize the above back-distillation loss. To overcome this optimization problem, we need to make the gradients of the student net agnostic with regard to its feature maps. Thus, we propose two pseudo-gradients to replace the true gradients in the loss. The pseudo-gradients follow the paradigm in Eqn. (4b).


where each factor is the derivative of a layer function used for back-propagation, and the true gradient depends on the set of feature maps of all middle layers, with the feature map of each layer appearing in the corresponding factor. In Eqn. (4b), we revise these layer-wise operations in the computation of gradients, in order to make the gradients agnostic with regard to the feature maps.

In this way, we conduct the back-distillation algorithm using the pseudo-gradients. The distillation loss can be optimized by propagating gradients of the gradient maps to the upper layers, which we call back-back-propagation. Tables 2 and 3 exhibit the superior performance of back-back-propagation.

Insert one conv-layer:

| # of samples | method | cat | cow | dog | horse | sheep | Avg. |
| 100 | direct-learn | 12.89 | 3.09 | 12.89 | 10.82 | 9.28 | 9.79 |
| 100 | back-distill | 1.55 | 0.52 | 3.61 | 1.55 | 1.03 | 1.65 |
| 50 | direct-learn | 13.92 | 15.98 | 12.37 | 16.49 | 15.46 | 14.84 |
| 50 | back-distill | 1.55 | 0.52 | 3.61 | 1.55 | 1.03 | 1.65 |
| 20 | direct-learn | 16.49 | 26.80 | 28.35 | 32.47 | 25.77 | 25.98 |
| 20 | back-distill | 1.55 | 0.52 | 3.09 | 1.55 | 1.03 | 1.55 |
| 10 | direct-learn | 39.18 | 39.18 | 35.05 | 41.75 | 38.66 | 38.76 |
| 10 | back-distill | 1.55 | 0.52 | 3.61 | 1.55 | 1.03 | 1.65 |
| 0 | direct-learn | n/a | n/a | n/a | n/a | n/a | n/a |
| 0 | back-distill | 1.55 | 0.52 | 4.12 | 1.55 | 1.03 | 1.75 |

Insert three conv-layers:

| # of samples | method | cat | cow | dog | horse | sheep | Avg. |
| 100 | direct-learn | 9.28 | 6.70 | 12.37 | 11.34 | 3.61 | 8.66 |
| 100 | back-distill | 1.03 | 2.58 | 4.12 | 1.55 | 2.58 | 2.37 |
| 50 | direct-learn | 14.43 | 13.92 | 15.46 | 8.76 | 7.22 | 11.96 |
| 50 | back-distill | 3.09 | 3.09 | 4.12 | 2.06 | 4.64 | 3.40 |
| 20 | direct-learn | 22.16 | 25.77 | 32.99 | 22.68 | 22.16 | 25.15 |
| 20 | back-distill | 7.22 | 6.70 | 7.22 | 2.58 | 5.15 | 5.77 |
| 10 | direct-learn | 36.08 | 32.99 | 31.96 | 34.54 | 34.02 | 33.92 |
| 10 | back-distill | 8.25 | 15.46 | 10.31 | 13.92 | 10.31 | 11.65 |
| 0 | direct-learn | n/a | n/a | n/a | n/a | n/a | n/a |
| 0 | back-distill | 50.00 | 50.00 | 50.00 | 49.48 | 50.00 | 49.90 |

Table 2: Error rates of classification when we inserted conv-layers with ReLU layers into a CNN as the adapter. The last row of each sub-table shows the performance of network transplanting without training samples.
| # of samples | method | cat | cow | horse | sheep | Avg. |
| 100 | direct-learn | 76.54 | 74.60 | 81.00 | 78.37 | 77.63 |
| 100 | distill | 74.65 | 80.18 | 78.05 | 80.50 | 78.35 |
| 100 | back-distill | 85.17 | 90.04 | 90.13 | 86.53 | 87.97 |
| 50 | direct-learn | 71.30 | 74.76 | 76.83 | 78.47 | 75.34 |
| 50 | distill | 68.32 | 76.50 | 78.58 | 80.62 | 76.01 |
| 50 | back-distill | 83.14 | 90.02 | 90.46 | 85.58 | 87.30 |
| 20 | direct-learn | 71.13 | 74.82 | 76.83 | 77.81 | 75.15 |
| 20 | distill | 71.17 | 74.82 | 76.05 | 78.10 | 75.04 |
| 20 | back-distill | 84.03 | 88.37 | 89.22 | 85.01 | 86.66 |
| 10 | direct-learn | 70.46 | 74.74 | 76.49 | 78.25 | 74.99 |
| 10 | distill | 70.47 | 74.74 | 76.83 | 78.32 | 75.09 |
| 10 | back-distill | 82.32 | 89.49 | 85.97 | 83.50 | 85.32 |

Table 3: Pixel accuracy of object segmentation when we transplanted the segmentation module of a dog network to the network of the target category in Experiment 3. The adapter contained a conv-layer and a ReLU layer.
Figure 3: (a) Teacher net; (b) transplant net; (c) two types of adapters; (d) two sequences of transplanting operations.


To simplify the presentation, we limit our attention to testing network-transplanting operations. We do not discuss other related operations, e.g., the fine-tuning of category and task modules and the case in Fig. 1(a), which can be solved via traditional learning strategies.

We designed three experiments to evaluate the proposed method. In Experiment 1, we learned toy transplant nets by inserting adapters between middle layers of pre-trained CNNs. Then, Experiments 2 and 3 were designed considering the real application of learning a transplant net with two task modules (i.e. modules for object classification and segmentation) and multiple category modules. As shown in Fig. 3(b,d), we can divide the entire network-transplanting procedure into an operation sequence of transplanting category modules to the classification module and another operation sequence of transplanting category modules to the segmentation module. Therefore, we separately conducted the two sequences of transplanting operations in Experiments 2 and 3 for more convincing results.

We compared our back-distillation method (namely back-distill) with two baselines. For fair comparison, all baselines exclusively learned the adapter without fine-tuning the task module. The first baseline only optimized the task loss without distillation, namely direct-learn. The second baseline was traditional distillation, namely distill, which matched the final outputs of the teacher and student task modules. The distillation was applied to the outputs of the task modules because, except for the outputs, other layers of the two task modules did not produce features on similar manifolds. We only tested the distill method on object segmentation because, unlike single-category classification, segmentation outputs exhibit correlations between soft output labels.

Exp. 1: Adding adapters to pre-trained CNNs

In this experiment, we conducted a toy test of network transplanting, i.e., inserting and learning an adapter between a category module and a task module. Here, we only considered networks with the VGG-16 structure for single-category classification.

In Table 2, we compared our back-distill method with the direct-learn baseline when we inserted an adapter with a single conv-layer and when we inserted an adapter with three conv-layers. Table 2(left) shows that our back-distill method yielded a significantly lower classification error than the direct-learn baseline. Even without any training samples, our method still outperformed the direct-learn method trained with 100 samples.

Exp. 3: Operation sequences of transplanting category modules to the segmentation module

In this experiment, we evaluated the performance of transplanting category modules to the segmentation module. Five FCNs were trained with strong supervision using all training samples for single-category segmentation. We regarded the segmentation module of the dog network as the generic one and transplanted the category modules of the other four mammal categories to this task module.

Table 3 compares the pixel-level segmentation accuracy of our method and the baselines. We tested our method with a few (10–100) training samples; it exhibited 10%–12% higher accuracy than the direct-learn baseline.

Conclusions and discussion

In this paper, we focused on a new task, i.e., merging pre-trained teacher nets into a generic, modular transplant net with a few or even without training annotations. We discussed the importance and core challenges of this task, and developed the back-distillation algorithm as a theoretical solution to the challenging space-projection problem. The back-distillation strategy significantly decreases the demand for training samples. Experimental results demonstrated the effectiveness of our method: without any training samples, it even outperformed the baseline trained with 100 samples, as shown in Table 2(left).