DEPARA: Deep Attribution Graph for Deep Knowledge Transferability

03/17/2020, by Jie Song et al.

Exploring the intrinsic interconnections between the knowledge encoded in PRe-trained Deep Neural Networks (PR-DNNs) of heterogeneous tasks sheds light on their mutual transferability, and consequently enables knowledge transfer from one task to another so as to reduce the training effort of the latter. In this paper, we propose the DEeP Attribution gRAph (DEPARA) to investigate the transferability of knowledge learned from PR-DNNs. In DEPARA, nodes correspond to the inputs and are represented by their vectorized attribution maps with regard to the outputs of the PR-DNN. Edges denote the relatedness between inputs and are measured by the similarity of their features extracted from the PR-DNN. The knowledge transferability of two PR-DNNs is measured by the similarity of their corresponding DEPARAs. We apply DEPARA to two important yet under-studied problems in transfer learning: pre-trained model selection and layer selection. Extensive experiments are conducted to demonstrate the effectiveness and superiority of the proposed method in solving both these problems. Code, data and models reproducing the results in this paper are available at <https://github.com/zju-vipa/DEPARA>.


1 Introduction

Driven by massive labeled data [5] and advanced deep models [9], the field of artificial intelligence has made remarkable progress in recent years. However, in real-world scenarios we often face the dilemma that only limited labeled training data are available for the problem at hand. The common practice in this situation is to transfer pre-trained models, open-sourced by dedicated researchers or companies, to solve our own problems. Yet along this road arises another problem: faced with countless PR-DNNs of various layers, which model, and which layer of it, should be transferred to benefit the target task most? Currently, model selection is usually done blindly by adopting ImageNet pre-trained models [21, 15], and layer selection is usually conducted heuristically. However, ImageNet pre-trained models will not always produce satisfactory performance for all tasks, especially when the task differs significantly from the one defined by ImageNet [2, 28]. Likewise, a heuristically selected layer may also perform sub-optimally, as the optimal layer to transfer depends on various factors such as task relatedness and the amount of target data.

To tackle the aforementioned problems, we need to explore and reveal the underlying transferability among deep knowledge from PR-DNNs of heterogeneous tasks. Recently, Zamir et al. [33] did pioneering work in this direction, proposing a fully computational approach, termed taskonomy, to measure task transferability. However, three non-negligible limitations tremendously hamper taskonomy's real-world application. First, its computational cost is prohibitively expensive: computing the pairwise relatedness for a given task dictionary grows quadratically with the number of tasks, which becomes excessive when the number of tasks is large. Second, it adopts transfer learning itself to model the relatedness between tasks, which still requires a moderately large amount of labeled data to train the transfer models. Lastly, taskonomy only considers the transferability across different models or tasks while ignoring the transferability across different layers, which we argue is also important for a transfer to be successful.

The main obstacle to measuring the transferability of knowledge learned by different PR-DNNs is the “black-box” nature of deep models. As the knowledge (e.g., features) learned by different PR-DNNs is unexplainable and actually lies in different embedding spaces, it is very tricky to compute the transferability directly. In this paper, to derive the transferability of knowledge encoded in PR-DNNs, we propose the DEeP Attribution gRAph (DEPARA) to represent the knowledge learned in PR-DNNs. In DEPARA, nodes correspond to the inputs and are represented by their vectorized attribution maps [25, 3, 24] with regard to the outputs of the PR-DNN. Edges denote the relatedness between inputs and are measured by their similarity in the embedding space of the PR-DNN (see Figure 1). As the DEPARAs of different PR-DNNs are defined on the same set of inputs, they actually live in the same space, so the knowledge transferability of two PR-DNNs can be directly measured by the similarity of their corresponding DEPARAs. More similar DEPARAs indicate that more correlated knowledge is learned by the PR-DNNs, and hence higher mutual knowledge transferability.

The proposed method requires no human annotations, imposes no constraints on architectures, and is several orders of magnitude faster than taskonomy. Meanwhile, beyond model selection, it can also be easily adapted to the layer selection problem in transfer learning. Extensive experiments demonstrate the effectiveness of DEPARA for quantifying deep knowledge transferability.

To sum up, we make the following three main contributions: (1) We introduce the challenging, important yet under-studied deep knowledge transferability problem, where only PR-DNNs are provided without any labeled data. (2) We propose DEPARA, an efficient and effective method for deriving the transferability of the knowledge learned by PR-DNNs. To our knowledge, this is the first work to address the pre-trained model selection and the layer selection problems simultaneously. (3) We conduct extensive experiments demonstrating the effectiveness of DEPARA in solving both the model and the layer selection problems in transfer learning.

2 Related Work

2.1 Knowledge Transferability

Transferring PR-DNNs to new tasks is an active research topic. Razavian et al. [20] demonstrated that features extracted from deep neural networks can be used as generic image representations to tackle a diverse range of visual tasks. Yosinski et al. [31] investigated the transferability of deep features extracted from every layer of deep neural networks. Azizpour et al. [2] studied several factors affecting the transferability of deep features. Recently, the effects of pre-training datasets on transfer learning have also been studied [12, 8, 11, 28]. Although many heuristics were found by these works, none of them explicitly quantifies the transferability among different tasks and layers to provide a principled way for model and layer selection. Zamir et al. [33] proposed a fully computational approach to measure task relatedness. Dwivedi and Roig [7] adopted representation similarity analysis for efficient task taxonomy. Song et al. [26] utilized the similarity of attribution maps to quantify model transferability. However, the layer selection problem is omitted in all these works. In this paper, we propose DEPARA to address both the model and the layer selection problems in transfer learning.

2.2 Deep Model Attribution

Attribution refers to assigning importance scores to the inputs for a specified output. Existing attribution methods fall mainly into two groups: perturbation-based [34, 35, 36] and gradient-based methods [25, 3, 24, 27, 23, 1]. Perturbation-based methods compute the attribution of an input feature by perturbing, e.g., removing, masking or altering, individual inputs or neurons and observing the impact on later neurons. In contrast, gradient-based methods calculate the attributions for all input features in one or a few forward and backward passes through the network, which makes them more efficient. In this paper, we directly adopt existing attribution methods for quantifying transferability; devising more advanced attribution methods for our problem is left to future work.

2.3 Deep Knowledge Representation

How to represent the knowledge encoded in PR-DNNs is vital for knowledge reuse. Hinton et al. [10] viewed the soft predictions of a trained teacher model as the knowledge for knowledge distillation. Following their work, other forms of knowledge have been proposed to facilitate student learning. For example, Romero et al. [22] adopted intermediate representations learned by the teacher as hints to improve the final performance of the student. Zagoruyko and Komodakis [32] utilized the attention of the teacher model to guide the learning of the student. Recently, the relations among input instances learned by trained deep models have also been found to be a useful form of knowledge [4, 16, 14, 29, 17]. For example, Chen et al. [4] utilized cross-sample similarities to accelerate deep metric learning, and Park et al. [16] leveraged mutual relations of data examples for knowledge distillation. In this paper, we propose DEPARA to represent deep knowledge, which enables us to quantify knowledge transferability easily.

3 Deep Knowledge Transferability

3.1 Notation and Problem Setup

Assume there are $N$ PR-DNNs available, denoted by $\mathcal{M} = \{m_1, m_2, \dots, m_N\}$. Each model $m_i$ in $\mathcal{M}$ can be viewed as a composition of nonlinear functions: $m_i = f^i_{L_i} \circ \dots \circ f^i_2 \circ f^i_1$, where $f^i_l$ denotes a basic nonlinear function, $L_i$ denotes the number of nonlinear functions in $m_i$, and the symbol $\circ$ denotes function composition. Note that no constraints are imposed on the architectures of the models in $\mathcal{M}$, so the number of nonlinear functions in these PR-DNNs may differ. The task handled by $m_i$ is denoted by $t_i$, and all the tasks involved in $\mathcal{M}$ are collectively denoted by the task dictionary $\mathcal{T} = \{t_1, t_2, \dots, t_N\}$. For task $t_i$, we adopt $\mathcal{D}_i$ to denote the joint data distribution of the corresponding data domain. In this paper, the term deep knowledge refers to the embedding space learned by PR-DNNs. The embedding space produced after $f^i_l$ in $m_i$ is denoted by $k^i_l$. Given $\mathcal{M}$ without any labeled data, we investigate the transferability, defined in the next section, between different $k^i_l$s to facilitate model selection and layer selection in transfer learning.

Figure 1: The illustrative diagram of the procedure for constructing the deep attribution graph.

3.2 Definition of Transferability

An intuitive description of transferability is “how well a deep ConvNet representation can be transferred to the target task” [31, 2]. Here we introduce a more rigorous definition to facilitate addressing the model and the layer selection problems in transfer learning. Assume there is a deep knowledge pool denoted by $\mathcal{K} = \{k_1, k_2, \dots, k_K\}$ (we use $k_i$ to denote the $i$-th item in $\mathcal{K}$, and $k^i_l$ to denote the knowledge produced by $f^i_l$). Note that any two knowledge items $k_i$ and $k_j$ in this pool may be produced by different models or layers. The transferability of $k_s$ to task $t$, denoted by $\mathrm{Trf}(k_s \to t)$, is defined as the ascending rank of $k_s$ among $\mathcal{K}$ for solving the target task. Here the rank is computed based on the standard empirical risk. Formally, let $X_t$ be the target data randomly sampled from $\mathcal{D}_t$, i.e., $X_t \sim \mathcal{D}_t$, and let $k_s(X_t)$ denote the embeddings of $X_t$ in $k_s$; then

$$h_s = \arg\min_{h \in \mathcal{H}} \frac{1}{|X_t|} \sum_{(x, y) \in X_t} \ell_t\big(h(k_s(x)), y\big) \qquad (1)$$

is the hypothesis produced on $k_s$. $R(h_s)$ denotes the standard expected risk:

$$R(h_s) = \mathbb{E}_{(x, y) \sim \mathcal{D}_t}\, \ell_t\big(h_s(k_s(x)), y\big), \qquad (2)$$

where $\ell_t$ is the objective function of task $t$. Detailed descriptions of $\ell_t$ are provided in the supplementary material. If the transferability of every $k$ in $\mathcal{K}$ to task $t$ is known, we can directly select the $k$ which ranks first for solving the target task $t$. Note that when every $k$ in $\mathcal{K}$ comes from a different PR-DNN, this definition of transferability can be used for model selection. If all $k$s in $\mathcal{K}$ come from different layers of the same PR-DNN, the definition can be used for layer selection in transfer learning.
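To make the ranking concrete, here is a minimal sketch (not the authors' implementation): a least-squares linear probe stands in for the hypothesis space $\mathcal{H}$, and knowledge items are ranked by the empirical risk of the probe fit on their embeddings. All names and data below are illustrative.

```python
import numpy as np

def transferability_rank(feature_sets, y):
    """Rank knowledge items (feature sets) by the empirical risk of a
    least-squares linear probe fit on labeled target data.

    feature_sets: list of (n_samples, d_i) arrays, one per knowledge item.
    y: (n_samples,) regression targets for a toy target task.
    Returns indices of the feature sets, best (lowest risk) first.
    """
    risks = []
    for Z in feature_sets:
        # Linear head fit by least squares, standing in for h in Eq. (1).
        w, *_ = np.linalg.lstsq(Z, y, rcond=None)
        risks.append(np.mean((Z @ w - y) ** 2))  # empirical risk
    return list(np.argsort(risks))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 0.0])
good = X                          # embeddings that determine y exactly
bad = rng.normal(size=(200, 5))   # unrelated embeddings
order = transferability_rank([bad, good], y)
# the informative feature set ranks first
```

The probe here is regression for simplicity; any task objective $\ell_t$ could take its place.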

The transferability defined above is intuitively straightforward. However, the computation is expensive when measuring the transferability between every pair of tasks in the task dictionary. Worse, it requires labeled data for all the tasks involved. To bypass these problems, we propose DEPARA to approximate the defined transferability without any labeled data. We argue that two factors must be considered simultaneously when computing transferability:

  1. Inclusiveness: for a transfer to be successful, the knowledge $k$ produced by the PR-DNN of the source task should be inclusive of sufficient information for solving the target task. Inclusiveness is an intuitively straightforward and fundamental ingredient of transferability. However, since $k$ is highly nonlinear and unexplainable, it is very challenging to measure the inclusiveness of $k$ for the target task directly.

  2. Accessibility: $k$ should be sufficiently abstracted and easily re-purposed to the target task, so that the target task can be solved well with limited human supervision. Without the requirement of accessibility, $k$ produced by shallower layers would more likely be of higher transferability, as knowledge from shallower layers tends to be more inclusive than that from higher layers. Measuring the accessibility of $k$ is also challenging due to the black-box nature of deep models.

3.3 Deep Attribution Graph

An illustrative diagram of the DEPARA is depicted in Figure 1. Formally, assume there is a set of randomly sampled unlabeled data points $X = \{x_1, x_2, \dots, x_n\}$, called the probe data in this paper. The probe data are first fed to the PR-DNN to obtain their features, i.e., the outputs of the specific layer, after a forward pass. Then the attribution maps are produced by a backward pass, where the back-propagation rule depends on the adopted attribution method [1]. In DEPARA, each node corresponds to a data point in the probe data, and its feature is the vectorized attribution map of that data point. The edge between two nodes denotes the relatedness of the two data points and is measured by their similarity in the embedding space of the PR-DNN. For $k^i_l$ from $m_i$, a DEPARA symbolized by $G^i_l(X) = (V^i_l, E^i_l)$ can be obtained, where $V^i_l$ and $E^i_l$ denote the nodes and the edges, respectively, and $(X)$ indicates that the DEPARA is defined on $X$. More detailed descriptions of the nodes and the edges are provided as follows.

3.3.1 Nodes

The nodes in $G^i_l(X)$ are collectively denoted by $V^i_l = \{v_1, v_2, \dots, v_n\}$, where $v_j$ is the attribution of $x_j$ with regard to $k^i_l$. In this paper, we adopt Gradient*Input [24] for attribution. Gradient*Input is a first-order Taylor approximation of how the output would change if the input were set to zero, which reflects the importance of the input w.r.t. the output. Mathematically, for the $d$-th element of $x_j$, its attribution score with respect to $k^i_l$ is computed as:

$$v_j^d = x_j^d \cdot \frac{\partial \lVert k^i_l(x_j) \rVert_2^2}{\partial x_j^d}, \qquad (3)$$

where $\lVert \cdot \rVert_2$ denotes the $\ell_2$ norm.

The nodes are devised for measuring the inclusiveness of $k^i_l$. The intuition is that for attributions $v$ and $v'$ of the same input but produced by two different PR-DNNs, more similar attributions (i.e., a focus on more similar regions of the input) indicate that the two PR-DNNs are more likely to contain correlated information that can be transferred to each other. Otherwise, the two PR-DNNs focus on different input dimensions and are thus less correlated with each other.
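As a toy illustration of Gradient*Input (a sketch, not the paper's code): for a linear embedding $k(x) = Wx$, the gradient of the squared output norm is available in closed form, so the attribution can be checked by hand. The squared-norm scoring of a vector output and all names here are our assumptions.

```python
import numpy as np

def gradient_times_input(W, x):
    """Gradient*Input attribution for a toy linear embedding k(x) = W @ x,
    scored through the squared l2 norm of the output:
    a_d = x_d * d||Wx||^2 / dx_d."""
    grad = 2.0 * W.T @ (W @ x)   # analytic gradient of ||Wx||^2 w.r.t. x
    return x * grad

W = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0]])   # the third input dim never reaches the output
x = np.array([1.0, 1.0, 5.0])
a = gradient_times_input(W, x)
# the unused third dimension receives zero attribution
```

For a real PR-DNN the gradient would come from automatic differentiation rather than a closed form, but the attribution rule is the same elementwise product.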

3.3.2 Edges

The edges in $G^i_l(X)$ are collectively denoted by $E^i_l = \{e_{uv}\}$, where $e_{uv}$ is the edge between the $u$-th and the $v$-th nodes and denotes the similarity of the corresponding inputs in the embedding space $k^i_l$. Formally,

$$e_{uv} = \frac{\langle k^i_l(x_u),\, k^i_l(x_v) \rangle}{\lVert k^i_l(x_u) \rVert_2 \, \lVert k^i_l(x_v) \rVert_2}. \qquad (4)$$

We adopt cosine similarity to define the edges because it is insensitive to the length of $k^i_l(x)$. Note that we assume an edge exists between every pair of nodes, so $G^i_l(X)$ is actually a fully connected graph. Furthermore, as $G^i_l(X)$ is devised to be undirected, $e_{uv} = e_{vu}$ for any $u$ and $v$.

The edges are devised to uncover the accessibility aspect of transferability. If the embedding space $k^i_l$ produced after $f^i_l$ of $m_i$ can be easily transferred (i.e., is of high accessibility) to another embedding space $k^j_{l'}$ produced after $f^j_{l'}$ of $m_j$, then $E^i_l$ and $E^j_{l'}$ should be similar in topological structure. Otherwise, a large amount of labeled data and training time would be consumed rebuilding the embedding space on top of $k^i_l$, which violates the definition of high accessibility. The edges can be viewed as a representation of the topological structure of the embedding space: two embedding spaces of similar topological structure should produce similar edges for the same set of probe data.
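A minimal sketch of the edge construction of Eq. (4), assuming the probe points' features are plain NumPy vectors (the function name is hypothetical):

```python
import numpy as np

def depara_edges(F):
    """Edge weights of a DEPARA: pairwise cosine similarity of the probe
    points' features F, shape (n_points, d), in the PR-DNN embedding space.
    Returns a fully connected, symmetric (n_points, n_points) matrix."""
    norms = np.linalg.norm(F, axis=1, keepdims=True)
    U = F / np.clip(norms, 1e-12, None)  # unit-length rows
    return U @ U.T                       # cosine similarities

F = np.array([[1.0, 0.0],
              [2.0, 0.0],
              [0.0, 3.0]])
E = depara_edges(F)
# parallel features get edge weight 1, orthogonal ones 0; E is symmetric
```

The `np.clip` guard only protects against zero-length feature vectors; for real features it is a no-op.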

3.4 Task Transferability

Here we adopt DEPARAs to quantify the transferability among the different tasks in $\mathcal{T}$, a goal similar to taskonomy [33]. However, in our problem only the PR-DNNs of the corresponding tasks are provided; we assume no labeled data are available for any task.

Before constructing DEPARAs for the tasks in $\mathcal{T}$, two issues must be resolved. The first is, for task $t_i$, which embedding space (i.e., layer) of $m_i$ to choose to best represent the knowledge needed for $t_i$. In this paper, we view every PR-DNN as an encoder-decoder architecture: the encoder extracts compact features, and the decoder makes predictions using the features from the encoder. We adopt the embedding space learned by the encoder, denoted $k^i_e$, to represent the knowledge of $t_i$; thus the knowledge pool can be denoted by $\mathcal{K} = \{k^1_e, k^2_e, \dots, k^N_e\}$. The second is that we need a set of probe data $X$ shared among all the tasks for probing the topological structure of the embedding spaces and constructing the DEPARAs. In this paper, the probe data are randomly sampled. More details on how the probe data are obtained are provided in the experiment section and the supplementary material.

According to Eqs. (3) and (4), for each task $t_i$ in $\mathcal{T}$, a DEPARA $G^i_e(X)$ is obtained on the probe data $X$. The transferability of $k^i_e$ to a target task $t_j$ is approximated by the descending rank of $G^i_e(X)$ among $\{G^1_e(X), \dots, G^N_e(X)\}$ based on the graph similarity:

$$\mathcal{S}\big(G^i_e(X), G^j_e(X)\big) = \mathcal{S}_{\mathcal{V}}(V^i_e, V^j_e) + \lambda\, \mathcal{S}_{\mathcal{E}}(E^i_e, E^j_e), \qquad (5)$$

where $\mathcal{S}$ is the similarity function. For the nodes, we adopt cosine similarity: $\mathcal{S}_{\mathcal{V}} = \frac{1}{n} \sum_{j=1}^{n} \cos(v_j, v'_j)$. For the edges, the similarity is defined as the Spearman correlation coefficient: $\mathcal{S}_{\mathcal{E}} = 1 - \frac{6 \sum_u d_u^2}{q(q^2 - 1)}$, where $d_u$ is the difference between the ranks of the $u$-th elements of $E^i_e$ and $E^j_e$, and $q$ is the number of edges. $\lambda$ is the trade-off hyper-parameter. Detailed descriptions of $\mathcal{S}$ are given in the supplementary material.
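The combined score can be sketched as follows; the `spearman` helper uses the classic no-ties rank-difference formula, and treating `lam` as the node/edge trade-off is our reading of Eq. (5):

```python
import numpy as np

def spearman(a, b):
    """Spearman correlation via the rank-difference formula (assumes no ties)."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    d = ra - rb
    n = len(a)
    return 1.0 - 6.0 * np.sum(d ** 2) / (n * (n ** 2 - 1))

def depara_similarity(V_s, E_s, V_t, E_t, lam=1.0):
    """Similarity of two DEPARAs built on the same probe data: average
    cosine similarity of paired node attributions, plus lam times the
    Spearman correlation of the upper-triangular edge weights."""
    cos = np.sum(V_s * V_t, axis=1) / (
        np.linalg.norm(V_s, axis=1) * np.linalg.norm(V_t, axis=1))
    iu = np.triu_indices(E_s.shape[0], k=1)  # each undirected edge once
    return cos.mean() + lam * spearman(E_s[iu], E_t[iu])

rng = np.random.default_rng(0)
V = rng.normal(size=(5, 8))   # 5 probe points, 8-dim attribution vectors
E = rng.normal(size=(5, 5))
E = (E + E.T) / 2             # symmetric edge matrix
score = depara_similarity(V, E, V, E, lam=1.0)
# a graph compared with itself scores 1 + lam * 1
```

With ties in the edge weights, a tie-aware rank (e.g. `scipy.stats.spearmanr`) would be preferable to the closed-form version above.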

3.5 Layer Transferability

As aforementioned, deep models are usually composed of many nonlinear functions or layers. For a PR-DNN $m_i$, $L_i$ different embedding spaces can actually be obtained, denoted by $\{k^i_1, k^i_2, \dots, k^i_{L_i}\}$. However, in the task transferability described above, as in taskonomy [33], only the embedding space $k^i_e$ from the encoder is considered and all other learned knowledge is ignored. This may lead to suboptimal performance, as reusing $k^i_e$ is not guaranteed to be optimal for every target task.

Here we consider the layer selection problem, which is also important in transfer learning: for a source task $t_s$, which layer of its PR-DNN $m_s$ should be chosen to benefit the target task $t_t$ most? Layer selection can be viewed as selecting the $k^s_l$ from $\{k^s_1, k^s_2, \dots, k^s_{L_s}\}$ which benefits the target task most. We adopt $k^t_e$, produced by the encoder of $m_t$, to denote the knowledge essential to task $t_t$, as it is usually the most compact. The layer selection is conducted by

$$l^* = \arg\max_{l \in \{1, \dots, L_s\}} \mathcal{S}\big(G^s_l(X), G^t_e(X)\big). \qquad (6)$$

With $l^*$ computed from Eq. (6), we adopt $k^s_{l^*}$ for transferring the PR-DNN $m_s$ to the target task $t_t$.
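Given per-layer node and edge similarities to the target graph, the layer selection of Eq. (6) reduces to an argmax over combined scores; a sketch with hypothetical numbers:

```python
import numpy as np

def select_layer(node_sims, edge_sims, lam=1.0):
    """Layer selection sketch: combine the node and edge DEPARA
    similarities of each candidate source layer to the target graph
    (lam is the trade-off from Eq. (5)) and pick the argmax."""
    scores = np.asarray(node_sims) + lam * np.asarray(edge_sims)
    return int(np.argmax(scores)), scores

# hypothetical per-layer similarities for six layers of one PR-DNN
best, scores = select_layer([0.45, 0.48, 0.55, 0.54, 0.52, 0.50],
                            [0.20, 0.35, 0.40, 0.42, 0.30, 0.25])
# with lam = 1, layer index 3 maximizes the combined score (0.54 + 0.42)
```

Raising `lam` weights the edge (accessibility) term more heavily, which can change which layer wins.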

4 Experiments

Figure 2: Visualization of some examples of the nodes and the edges of DEPARA. For the nodes, we visualize three examples from taskonomy data, Indoor Scene and COCO, respectively. For the edges, we randomly sample nodes from taskonomy data and show their interconnections. Note that some weak connections are omitted for better visualization. Here we select two 3D tasks, three 2D tasks, two geometric tasks, and two semantic tasks for visualization. The task similarity tree derived from taskonomy is depicted above task names.

We first validate the proposed method for task transferability, then show its effectiveness for layer selection.

4.1 Task Transferability on Taskonomy Models

4.1.1 Pre-trained Models

Here we adopt PR-DNNs released by taskonomy [33] to validate the effectiveness of DEPARA for task transferability. Twenty PR-DNNs are selected in this experiment, each of which is for a single-image task. As all taskonomy models naturally follow an encoder-decoder architecture, we directly use the output of the encoder for constructing the DEPARA. Taskonomy measures the task transferability by the performance of transfer learning. We adopt its results to evaluate our method.

4.1.2 Probe Data

Following [26], we construct the probe data by randomly sampling images from the validation set of the taskonomy data. We tried using more data, but observed no obvious performance improvement in our experiments. Additionally, we also adopt Indoor Scene [19] and COCO [13], which are very different from the taskonomy data, as probe data for computing the transferability of taskonomy tasks. For more details, please refer to the supplementary material.

4.1.3 Evaluation Metric

We adopt two evaluation metrics, P@K and R@K, which are widely used in information retrieval, to compare the task transferability constructed from our method with that from taskonomy. Each target task is viewed as a query, and its top-5 source tasks which produce the best transferring performances in taskonomy are regarded as relevant to the query. We adopt the Precision-Recall (PR) curve to demonstrate the overall performance of the proposed method.
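P@K and R@K for one query task can be computed as below; the task names are made up for illustration:

```python
def precision_recall_at_k(ranked, relevant, k):
    """P@K and R@K for one query: 'ranked' is the source-task ordering
    predicted by DEPARA similarity, 'relevant' the top transfer sources
    according to the taskonomy ground truth."""
    hits = len(set(ranked[:k]) & set(relevant))
    return hits / k, hits / len(relevant)

ranked = ["rgb2depth", "rgb2mist", "reshade", "autoenc", "edge3d"]
relevant = {"rgb2depth", "reshade", "keypts3d", "edge3d", "curvature"}
p5, r5 = precision_recall_at_k(ranked, relevant, 5)
# 3 of the top-5 predictions are relevant, so P@5 = R@5 = 3/5
```

With exactly 5 relevant sources per query, P@5 and R@5 coincide, which matches the identical P@5 and R@5 rows in Table 1.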

4.1.4 Visualization Results across Tasks

Here we visualize some nodes and edges of the DEPARA to provide a better perceptual understanding of the proposed method. Results are shown in Figure 2. It can be seen that some tasks produce similar attribution maps and instance relationships, while others do not. For example, Rgb2depth produces attribution maps and a relational graph highly similar to those of Rgb2mist, yet dissimilar to those of Autoencoder. Indeed, taskonomy shows that Rgb2depth and Rgb2mist have high transferability to each other, while their transferability to Autoencoder is relatively low. Furthermore, taskonomy adopts agglomerative clustering to categorize the tasks into four groups: 3D, 2D, geometric, and semantic tasks. From Figure 2, we can see that our method tends to produce relatively similar nodes and edges within each group of tasks. Although some exceptions occur, the results become more credible as results from more nodes and edges are aggregated.

4.1.5 Task Transferability Results

(a) PR curve.
(b) Task similarity tree.
Figure 3: PR curve and the task similarity tree obtained on probe data randomly sampled from taskonomy data.

In this section, we evaluate the proposed method against the task transferability obtained from taskonomy. To better understand the results, we introduce a baseline, Random Ranking, in which the task transferability is randomly determined. For an ablation study, we introduce three variants of our method: DEPARA-V, in which only the nodes in DEPARA are utilized for task transferability; DEPARA-E, in which only the edges are used; and DEPARA, the full version using both nodes and edges, where $\lambda$ is tuned by randomly sampling a small subset of all the PR-DNNs. Additionally, we introduce another competitor: Representation Similarity Analysis (RSA) [6]. We adopt the PR curve to compare the performance of all the aforementioned methods. To further demonstrate the similarity between the task transferability obtained by our method and that from taskonomy, the task similarity tree produced by DEPARA is also depicted in Figure 3; the task similarity tree from taskonomy and further results are provided in the supplementary material. From these results, we conclude that: (1) The proposed method produces task transferability highly similar to that of taskonomy. As our method is several orders of magnitude more efficient (hours on a single Quadro P5000 card for the pre-trained taskonomy models, versus thousands of GPU hours on the cloud for 20 tasks), it is an effectual substitute for taskonomy when human annotations are unavailable or the task library is large. (2) DEPARA outperforms RSA [6], demonstrating its superiority over the state of the art; DEPARA-V and RSA yield comparable performance, as they are methodologically quite similar. (3) DEPARA outperforms both DEPARA-V and DEPARA-E by a considerable margin, which implies that both the nodes and the edges are essential for measuring knowledge transferability. For more results and observations, please refer to the supplementary material.

To investigate the effects of different types of probe data, we also evaluate the proposed method with probe data from Indoor Scene and COCO. The task-wise P@K and R@K results, together with the averages for the proposed method and several competitors, are provided in Table 1. Although the data from Indoor Scene and COCO are quite different from the taskonomy data, the proposed method still produces task transferability whose task-wise topological structure is highly similar to the one obtained by taskonomy. This indicates that the proposed method is insensitive to the choice of randomly sampled probe data. Furthermore, the proposed method consistently outperforms DEPARA-V, DEPARA-E and RSA on all the datasets, which again verifies its effectiveness and superiority.

| Probe data | Metric | AutoEnco | Curvature | Denoise | Edge 2D | Edge 3D | Keypts 2D | Keypts 3D | Reshade | RGB2Depth | RGB2Mist | RGB2Norm | RoomLayt | Segmt 25D | Segmt 2D | VanishPts | SegmtSemt | Class1000 | DEPARA | DEPARA-V | DEPARA-E | RSA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Taskonomy | P@1 | 1.0 | 0.0 | 1.0 | 1.0 | 0.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 0.88 | 0.71 | 0.82 | 0.82 |
| Taskonomy | P@5 | 1.0 | 0.6 | 1.0 | 0.4 | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 | 0.6 | 0.8 | 0.8 | 0.8 | 0.8 | 0.4 | 0.8 | 0.75 | 0.68 | 0.75 | 0.73 |
| Taskonomy | R@5 | 1.0 | 0.6 | 1.0 | 0.4 | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 | 0.6 | 0.8 | 0.8 | 0.8 | 0.8 | 0.4 | 0.8 | 0.75 | 0.68 | 0.75 | 0.73 |
| Indoor Scene | P@1 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.00 | 0.82 | 1.00 | 1.00 |
| Indoor Scene | P@5 | 1.0 | 0.6 | 1.0 | 0.6 | 0.6 | 1.0 | 1.0 | 1.0 | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 | 1.0 | 0.8 | 0.6 | 0.6 | 0.81 | 0.72 | 0.78 | 0.79 |
| Indoor Scene | R@5 | 1.0 | 0.6 | 1.0 | 0.6 | 0.6 | 1.0 | 1.0 | 1.0 | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 | 1.0 | 0.8 | 0.6 | 0.6 | 0.81 | 0.72 | 0.78 | 0.79 |
| COCO | P@1 | 1.0 | 0.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 1.0 | 0.82 | 0.82 | 0.76 | 0.82 |
| COCO | P@5 | 1.0 | 0.6 | 0.8 | 0.8 | 0.6 | 1.0 | 1.0 | 0.8 | 0.8 | 1.0 | 1.0 | 0.8 | 1.0 | 0.8 | 0.4 | 0.6 | 0.6 | 0.80 | 0.78 | 0.65 | 0.69 |
| COCO | R@5 | 1.0 | 0.6 | 0.8 | 0.8 | 0.6 | 1.0 | 1.0 | 0.8 | 0.8 | 1.0 | 1.0 | 0.8 | 1.0 | 0.8 | 0.4 | 0.6 | 0.6 | 0.80 | 0.78 | 0.65 | 0.69 |

Table 1: Task-wise similarity between the results from DEPARA and those from taskonomy, for probe data drawn from the taskonomy data, Indoor Scene, and COCO. The first 17 columns give per-task scores for DEPARA; the rightmost four columns give the average results of DEPARA, DEPARA-V, DEPARA-E and RSA.

4.2 Layer Selection in Transfer Learning

4.2.1 Experimental Settings

We adopt Syn2Real-C [18] dataset to validate the effectiveness of DEPARA for layer selection. In Syn2Real-C, the source and the target data are from different domains, but of the same object categories. The source domain contains synthetic images and the target domain consists of images cropped from the Microsoft COCO dataset. In this paper, we use the data from the source and the target domain to train two domain-specific models. The ultimate goal is improving the performance on the target domain.

We consider two pre-trained models for transfer to the target: (1) the model trained on the source domain (DNN-Source); (2) the deep model pre-trained on ImageNet (DNN-ImageNet). We adopt the VGG-19 architecture for both models. DNN-Source is trained from scratch; the initial learning rate is set to be and decayed to after epochs. We set weight decay to be and momentum to be . DNN-Source is trained for epochs in total. For DNN-ImageNet, we directly adopt the pre-trained weights provided by torchvision. To compute the transferability of DNN-Source and DNN-ImageNet to the classification task on the target domain, we also train DNN-Target from scratch on the target data alone.

4.2.2 Performance of DEPARA for Layer Selection

Here we show that DEPARA can pick out the layers that yield nearly the highest performance when transferred to the target task. To this end, we exhaustively conduct transfer learning for every layer of the pre-trained VGG-19. For each layer transferred to the target task, that layer and all layers before it are frozen, and all layers after it are fine-tuned. As transfer learning usually happens when target data is scarce, we conduct the experiments in two modes: (1) a mode using the larger fraction of the target data; (2) a mode using only a small fraction of the target data. In both modes, the pre-trained VGG-19 is further trained on the target data. To select the transferred layer, we simply set $\lambda$ to 1 for both DNN-ImageNet and DNN-Source in the first mode. In the second mode, as the target data becomes scarcer, accessibility becomes more important for transferability; thus we set $\lambda$ to 10.

Results are listed in Table 2. We can see that: (1) The proposed method successfully picks out the layers that yield the highest performance when transferred to the target: for both models, the layers producing the highest DEPARA similarity are those with the highest transfer accuracy, and the DEPARA similarity and the accuracy are strongly positively correlated across the results in Table 2, implying that the similarity of DEPARA is a good indicator for layer selection in transfer learning. (2) For different trained models, the layers yielding the highest transfer performance differ; furthermore, as the size of the target data varies, the best-performing layer may also change. By appropriately setting $\lambda$, the proposed method can still pick out the best layers for different amounts of target data. (3) Surprisingly, DNN-ImageNet yields much higher transfer performance than DNN-Source. The similarity of some layers of DNN-ImageNet is significantly higher than that of DNN-Source, which implies that the embedding space learned on ImageNet is more suitable for solving the target task. DNN-Source, albeit trained on the same task as the target, learned quite a different embedding space due to the large difference between the source and the target domains, and thus performs relatively worse when transferred to the target data. (4) Comparing the accuracies of VGG-19 trained from scratch on the target data alone with the results in Table 2, we can see that some layers produce worse performance when transferred to the target data than when trained from scratch. This phenomenon is known as negative transfer [30]. Negative transfer occurs especially when the PR-DNN is trained on a quite different domain (like DNN-Source) or for a task unrelated to the target; for DNN-Source, most layers produce negative transfer. All these results underline the importance of both model selection and layer selection in transfer learning.

| Model | Row | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| DNN-ImageNet | SIM, nodes | 0.45 | 0.45 | 0.48 | 0.52 | 0.55 | 0.55 | 0.55 | 0.55 | 0.54 | 0.54 | 0.54 | 0.54 | 0.53 | 0.52 |
| DNN-ImageNet | SIM, edges | 0.16 | 0.01 | 0.20 | 0.03 | 0.35 | 0.32 | 0.14 | 0.15 | 0.50 | 0.43 | 0.77 | 0.78 | 0.81 | 0.81 |
| DNN-ImageNet | SIM, λ = 1 | 0.61 | 0.46 | 0.68 | 0.55 | 0.90 | 0.87 | 0.69 | 0.70 | 1.04 | 0.97 | 1.31 | 1.32 | 1.34 | 1.33 |
| DNN-ImageNet | SIM, λ = 10 | 2.05 | 0.55 | 2.48 | 0.82 | 4.05 | 3.75 | 1.95 | 2.05 | 5.54 | 4.84 | 8.24 | 8.34 | 8.63 | 8.62 |
| DNN-ImageNet | ACC (%), mode (1) | 60.74 | 63.78 | 69.23 | 69.77 | 73.36 | 74.89 | 76.86 | 77.11 | 79.50 | 76.89 | 81.15 | 80.81 | 80.71 | 79.21 |
| DNN-ImageNet | ACC (%), mode (2) | 34.03 | 37.71 | 40.16 | 44.67 | 53.06 | 58.11 | 59.35 | 63.08 | 67.24 | 68.50 | 71.72 | 72.85 | 74.33 | 73.54 |
| DNN-Source | SIM, nodes | 0.60 | 0.60 | 0.55 | 0.53 | 0.50 | 0.50 | 0.50 | 0.49 | 0.48 | 0.48 | 0.48 | 0.47 | 0.46 | 0.45 |
| DNN-Source | SIM, edges | 0.06 | 0.11 | 0.15 | 0.17 | 0.18 | 0.18 | 0.19 | 0.19 | 0.20 | 0.17 | 0.15 | 0.11 | 0.10 | 0.09 |
| DNN-Source | SIM, λ = 1 | 0.66 | 0.71 | 0.70 | 0.70 | 0.68 | 0.68 | 0.69 | 0.67 | 0.68 | 0.65 | 0.63 | 0.58 | 0.56 | 0.54 |
| DNN-Source | SIM, λ = 10 | 1.20 | 1.70 | 2.05 | 2.23 | 2.30 | 2.30 | 2.40 | 2.39 | 2.48 | 2.18 | 1.98 | 1.57 | 1.46 | 1.35 |
| DNN-Source | ACC (%), mode (1) | 49.84 | 61.92 | 62.72 | 62.28 | 59.81 | 60.24 | 58.49 | 54.03 | 54.21 | 52.67 | 52.15 | 48.54 | 41.50 | 36.10 |
| DNN-Source | ACC (%), mode (2) | 30.58 | 35.49 | 37.20 | 39.47 | 39.64 | 39.63 | 40.07 | 40.11 | 40.37 | 39.04 | 36.88 | 34.13 | 31.40 | 29.13 |

Table 2: Layer-wise transfer performance of DNN-ImageNet and DNN-Source on the target domain. Columns run from shallow to deep layers, convolutional layers first and FC layers last; for space consideration, several layers of VGG-19 are omitted. SIM denotes the similarity between the DEPARAs of the specific layer and the target task, broken down into the node and edge terms and combined at the two $\lambda$ settings. ACC denotes the accuracy on the target test data for the two target-data modes described above.
Figure 4: The test accuracy curves of different layers during the fine-tuning period in -T mode. (a) DNN-ImageNet. (b) DNN-Source.

Some other interesting observations from Table 2 are provided in the supplementary material.

In Figure 4, we depict the test accuracy curves of different layers when transferred to the target data. The results further demonstrate that the layers selected by the proposed method are more suitable for transfer than the others. As seen in Figure 4, the selected layers converge much faster than the other layers when re-trained for the target task; for DNN-ImageNet, for instance, the layers our method picks out for transfer also tend to reach higher final accuracy. Furthermore, the layers of DNN-ImageNet produce smoother test accuracy curves than those of DNN-Source, which indicates that the embedding space learned by DNN-ImageNet is more easily adapted to the target task. The embedding space learned by DNN-Source, however, differs considerably in topological structure (as indicated by the low similarity of edges in DEPARA) from the one learned on the target data. When adapted to the target data, it is largely destroyed and rebuilt, so the test accuracy curves oscillate and the transferring performance is poor.
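The "topological structure" discussed above is carried by DEPARA's edges: pairwise relatedness between probe inputs in a layer's feature space. The toy sketch below uses cosine similarity over hand-made feature vectors to build such an edge matrix; it illustrates the idea only, and is not the paper's exact edge-similarity measure.

```python
# Toy sketch of DEPARA edges: pairwise cosine similarities between the
# features of probe inputs in one layer's embedding space. Comparing the
# matrices produced for two models probes how similar their embedding-space
# topologies are. Feature vectors below are hand-made illustrations.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def edge_matrix(features):
    # features: one vector per probe input; result: n x n relatedness matrix
    n = len(features)
    return [[cosine(features[i], features[j]) for j in range(n)] for i in range(n)]

feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # three toy probe embeddings
E = edge_matrix(feats)
print(round(E[0][2], 3))  # cosine of [1,0] and [1,1] → 0.707
```

Two layers whose edge matrices disagree strongly (as for DNN-Source versus the target) encode different input-relatedness structures, which is what makes adaptation slow and unstable.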

5 Discussions and Conclusions

In this paper, we propose DEPARA to investigate the transferability of knowledge encoded in PR-DNNs. We adopt DEPARA to handle two important yet under-studied problems in transfer learning: measuring the transferability across tasks for pre-trained model selection, and measuring the transferability across layers for layer selection. Extensive experiments demonstrate its effectiveness in solving both problems. Below we summarize the advantages and the limitations of the proposed method, which we hope clarifies the contributions of this paper and points to directions for further study.

Advantages. (1) Unlike taskonomy [33], which requires a large amount of labeled data, the proposed method quantifies task transferability with only pre-trained models available. (2) As no training is involved, the computation cost of the proposed method grows nearly linearly with the size of the task dictionary, which is significantly more efficient than taskonomy. (3) The proposed method solves not only the model selection problem but also the layer selection problem. As far as we know, we are the first to tackle the model and the layer selection problems in transfer learning simultaneously. (4) The proposed method imposes no constraints on the model architectures and is insensitive to the probe data. (5) This paper introduces a rigorous definition of knowledge transferability, along with two vital ingredients, inclusiveness and accessibility, for better approximating the transferability.

Limitations. (1) This paper directly adopts an existing attribution method, Gradient*Input [24], for quantifying transferability. Different attribution methods may, however, affect the proposed method in different ways; more studies are needed to investigate their effects. (2) The optimal trade-off between the nodes and the edges of DEPARA for knowledge transferability is shown to depend on the probe data and the amount of target data. In this paper, the trade-off hyper-parameter is set via cross-validation or empirically; more study is needed to uncover the relationship between this hyper-parameter and its influencing factors. (3) The probe data used in the proposed method are randomly sampled. Although different probe data are shown to produce effective task-wise topological structures, they still affect the final performance to some degree. More investigation is needed into how to construct the probe data for better measuring the transferability across different tasks, models and layers.
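The Gradient*Input attribution in limitation (1) is simply the element-wise product of an input with the gradient of the model's output with respect to that input. The minimal sketch below uses a toy linear model, where that gradient is just the weight vector; for a real PR-DNN the gradient would come from backpropagation through an autograd framework, and the attribution map would then be vectorized to represent a DEPARA node.

```python
# Minimal sketch of Gradient*Input [24] on a toy linear model f(x) = w·x,
# whose gradient w.r.t. the input is just w. All names and values here are
# illustrative; a real PR-DNN would supply the gradient via backpropagation.

def gradient_times_input(x, grad):
    # element-wise product of input and gradient: the attribution map
    return [xi * gi for xi, gi in zip(x, grad)]

w = [0.5, -1.0, 2.0]   # toy weights = gradient of w·x w.r.t. x
x = [2.0, 3.0, 1.0]    # one probe input
print(gradient_times_input(x, w))  # → [1.0, -3.0, 2.0]
```

Swapping in a different attribution method (e.g. one of the alternatives cited in §2.2) would change only how these node vectors are produced, which is exactly the sensitivity the limitation raises.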

Acknowledgments.

This work is supported by National Key Research and Development Program (2018AAA0101503), National Natural Science Foundation of China (61976186), Key Research and Development Program of Zhejiang Province (2018C01004), and the Major Scientific Research Project of Zhejiang Lab (No. 2019KD0AC01).

References

  • [1] M. B. Ancona, E. Ceolini, C. Oztireli, and M. H. Gross (2018) Towards better understanding of gradient-based attribution methods for deep neural networks. In ICLR.
  • [2] H. Azizpour, A. S. Razavian, J. Sullivan, A. Maki, and S. Carlsson (2014) Factors of transferability for a generic convnet representation. TPAMI 38, pp. 1790–1802.
  • [3] S. Bach, A. Binder, G. Montavon, F. Klauschen, K. Müller, W. Samek, and O. D. Suarez (2015) On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE.
  • [4] Y. Chen, N. Wang, and Z. Zhang (2017) DarkRank: accelerating deep metric learning via cross sample similarities transfer. In AAAI.
  • [5] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) ImageNet: a large-scale hierarchical image database. In CVPR.
  • [6] K. Dwivedi and G. Roig (2019) Representation similarity analysis for efficient task taxonomy & transfer learning. In CVPR.
  • [7] K. Dwivedi and G. Roig (2019) Representation similarity analysis for efficient task taxonomy & transfer learning. In CVPR.
  • [8] K. He, R. B. Girshick, and P. Dollár (2019) Rethinking ImageNet pre-training. In ICCV.
  • [9] K. He, X. Zhang, S. Ren, and J. Sun (2015) Deep residual learning for image recognition. In CVPR, pp. 770–778.
  • [10] G. E. Hinton, O. Vinyals, and J. Dean (2015) Distilling the knowledge in a neural network. arXiv abs/1503.02531.
  • [11] M. Huh, P. Agrawal, and A. A. Efros (2016) What makes ImageNet good for transfer learning? arXiv abs/1608.08614.
  • [12] S. Kornblith, J. Shlens, and Q. V. Le (2018) Do better ImageNet models transfer better? arXiv abs/1805.08974.
  • [13] T. Lin, M. Maire, S. J. Belongie, L. D. Bourdev, R. B. Girshick, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick (2014) Microsoft COCO: common objects in context. In ECCV.
  • [14] Y. Liu, J. Cao, B. Li, C. Yuan, W. Hu, Y. Li, and Y. Duan (2019) Knowledge distillation via instance relationship graph. In CVPR.
  • [15] J. Long, E. Shelhamer, and T. Darrell (2015) Fully convolutional networks for semantic segmentation. In CVPR.
  • [16] W. Park, D. Kim, Y. Lu, and M. Cho (2019) Relational knowledge distillation. In CVPR.
  • [17] B. Peng, X. Jin, J. Liu, S. Zhou, Y. Wu, Y. Liu, D. Li, and Z. Zhang (2019) Correlation congruence for knowledge distillation. In ICCV.
  • [18] X. Peng, B. Usman, K. Saito, N. Kaushik, J. Hoffman, and K. Saenko (2018) Syn2Real: a new benchmark for synthetic-to-real visual domain adaptation. CoRR abs/1806.09755.
  • [19] A. Quattoni and A. Torralba (2009) Recognizing indoor scenes. In CVPR.
  • [20] A. S. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson (2014) CNN features off-the-shelf: an astounding baseline for recognition. In CVPR Workshops, pp. 512–519.
  • [21] S. Ren, K. He, R. B. Girshick, and J. Sun (2015) Faster R-CNN: towards real-time object detection with region proposal networks. TPAMI 39, pp. 1137–1149.
  • [22] A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio (2014) FitNets: hints for thin deep nets. CoRR abs/1412.6550.
  • [23] A. Shrikumar, P. Greenside, and A. Kundaje (2017) Learning important features through propagating activation differences. In ICML.
  • [24] A. Shrikumar, P. Greenside, A. Shcherbina, and A. Kundaje (2016) Not just a black box: learning important features through propagating activation differences. CoRR abs/1605.01713.
  • [25] K. Simonyan, A. Vedaldi, and A. Zisserman (2013) Deep inside convolutional networks: visualising image classification models and saliency maps. CoRR abs/1312.6034.
  • [26] J. Song, Y. Chen, X. Wang, C. Shen, and M. Song (2019) Deep model transferability from attribution maps. In NeurIPS, pp. 6179–6189.
  • [27] M. Sundararajan, A. Taly, and Q. Yan (2017) Axiomatic attribution for deep networks. In ICML.
  • [28] A. Torralba and A. A. Efros (2011) Unbiased look at dataset bias. In CVPR, pp. 1521–1528.
  • [29] F. Tung and G. Mori (2019) Similarity-preserving knowledge distillation. In ICCV.
  • [30] Z. Wang, Z. Dai, B. Poczos, and J. Carbonell (2019) Characterizing and avoiding negative transfer. In CVPR.
  • [31] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson (2014) How transferable are features in deep neural networks? In NIPS.
  • [32] S. Zagoruyko and N. Komodakis (2016) Paying more attention to attention: improving the performance of convolutional neural networks via attention transfer. In ICLR.
  • [33] A. R. Zamir, A. Sax, W. Shen, L. J. Guibas, J. Malik, and S. Savarese (2018) Taskonomy: disentangling task transfer learning. In CVPR.
  • [34] M. D. Zeiler and R. Fergus (2014) Visualizing and understanding convolutional networks. In ECCV.
  • [35] J. Zhou and O. G. Troyanskaya (2015) Predicting effects of noncoding variants with deep learning–based sequence model. Nature Methods 12, pp. 931–934.
  • [36] L. M. Zintgraf, T. Cohen, T. Adel, and M. Welling (2017) Visualizing deep neural network decisions: prediction difference analysis. In ICLR.