
Parameter Transfer Unit for Deep Neural Networks

Parameters in deep neural networks which are trained on large-scale databases can generalize across multiple domains, which is referred to as "transferability". Unfortunately, transferability is usually defined as discrete states, and it varies with domains and network architectures. Existing works usually heuristically apply parameter-sharing or fine-tuning, and there is no principled approach to learning a parameter transfer strategy. To fill this gap, a parameter transfer unit (PTU) is proposed in this paper. The PTU learns a fine-grained nonlinear combination of activations from both the source and the target domain networks, and subsumes hand-crafted discrete transfer states. In the PTU, transferability is controlled by two gates, which are artificial neurons that can be learned from data. The PTU is a general and flexible module which can be used in both CNNs and RNNs. Experiments are conducted with various network architectures and multiple transfer domain pairs. The results demonstrate the effectiveness of the PTU, as it outperforms heuristic parameter-sharing and fine-tuning in most settings.


Introduction

Deep Neural Networks (DNNs) are able to model complex function mappings between inputs and outputs, and they produce competitive results in a wide range of areas, including speech recognition, computer vision, natural language processing, etc. Yet most successful DNNs belong to the supervised learning paradigm, and they require large-scale labeled data for training. Otherwise, they are likely to suffer from over-fitting. The data-hungry nature makes it prohibitive to use DNNs in low-resource domains where labeled data are scarce. There is a gap between the lack of training data in real-world scenarios and the data-hungry nature of DNNs.

Figure 1: Domain Adaptation Network (DAN) [Long et al.2015], a typical example that applies both parameter-sharing and fine-tuning techniques.

The aforementioned dilemma can be addressed by Transfer Learning, which boosts learning in a low-resource target domain by leveraging one or more data-abundant source domain(s) [Pan and Yang2010]. It has been found that parameters in a DNN are transferable, i.e., they are general and suitable for multiple domains [Yosinski et al.2014, Mou et al.2016, Zoph et al.2016]. The generalization ability of parameters is referred to as "transferability". Two popular parameter-based transfer learning methods are parameter-sharing and fine-tuning. Parameter-sharing assumes that the parameters are highly transferable, and it directly copies the parameters of the source domain network to the target domain network. Fine-tuning assumes that the parameters of the source domain network are useful, but that they need to be trained with target domain data to better adapt to the target domain. These two methods have been widely adopted. One typical example is given in Fig. 1, where the first three convolutional layers of a DAN are shared, and the next two layers are fine-tuned.

Though parameter-based transfer learning by parameter-sharing and fine-tuning is prevalent and effective, it suffers from two limitations. First, the parameter transferability is manually defined as discrete states, usually "random", "fine-tune" and "frozen" [Yosinski et al.2014, Mou et al.2016, Zoph et al.2016], and transferability at a fine-grained scale has not been considered. A block of parameters, for example, all the filters of a convolutional layer, is treated as a whole. If the block is regarded as transferable, all of its parameters are retained, even though some of them are irrelevant or even introduce noise in the target domain; if it is considered not transferable, all of its parameters are discarded and the baby is thrown out with the bathwater. The second limitation is that the parameter transferability differs with domains and network architectures. A parameter transfer strategy is obtained by assigning transfer states to different blocks of parameters. To find an optimal strategy, one straightforward solution is the hold-out method: a part of the training data is reserved as a validation set, the network is decomposed into multiple parts, each part is assigned a transfer state, and the strategy with the smallest validation error is chosen. Let K denote the number of transfer states and n the number of parts in the network; the number of possible strategies is then K^n. The hold-out method is rather inefficient because it involves long training times and tremendous computational costs. In short, there is no principled approach to learn the optimal transfer strategy.
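The combinatorial cost of the hold-out search can be made concrete with a short sketch; the block count and state names below are illustrative assumptions, not the paper's setup:

```python
from itertools import product

# With K discrete transfer states per parameter block and n blocks, the
# hold-out method must train and validate K**n candidate strategies.
STATES = ["random", "fine-tune", "frozen"]  # K = 3

def enumerate_strategies(n_blocks):
    """Return every assignment of a transfer state to each of n_blocks."""
    return list(product(STATES, repeat=n_blocks))

strategies = enumerate_strategies(4)  # a hypothetical 4-block network
print(len(strategies))                # 3**4 = 81 strategies to evaluate
```

Even for this toy 4-block network, 81 separate training runs would be needed; the count grows exponentially with network depth.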

To tackle the two limitations of existing parameter-based transfer methods, we propose a parameter transfer unit (PTU). Transfer learning with PTUs involves an already trained source domain network and a target domain network, and the two networks are connected by the PTU(s). A PTU produces a weighted sum of the activations from both networks. There are two gates in a PTU, a fine-tune gate and an update gate. The fine-tune gate adapts source domain activations to the target domain, and the update gate decides whether to transfer from the source domain. The two gates control the parameter transferability at a fine-grained scale and can be learned from data.

The contributions of the proposed method are twofold.

  • A principled parameter transfer method. A novel parameter transfer unit is proposed that subsumes hand-crafted discrete transfer states and allows parameter transfer at a fine-grained scale. The unit is learned in an end-to-end manner.

  • Plug-and-play usage. The PTU can be used in both CNNs and RNNs. It is a general and flexible transfer method, as it can be easily integrated with almost all existing models that heuristically apply parameter-sharing or fine-tuning. Experimental results show that transfer learning with the PTU outperforms the heuristic parameter transfer methods.

Related Works

Though deep learning models have been extensively studied, there is limited research addressing transfer learning for DNNs. The most popular transfer method for DNNs is parameter-based transfer. It has been shown that the parameters of the low-level layers in a CNN are transferable [Yosinski et al.2014]. For natural language processing tasks, Zoph et al. and Mou et al. study parameter transferability in machine translation [Zoph et al.2016] and sentence classification [Mou et al.2016]. These works define parameter transferability as discrete states and conduct empirical studies. However, the conclusions drawn from these studies hardly generalize to a new domain or a new network architecture. In contrast, the proposed PTU defines transferability at a fine-grained scale and learns it in a principled manner.

Another line of research combines parameter-sharing and fine-tuning with other transfer learning methods, for example, feature-based transfer learning [Tzeng et al.2014, Long et al.2015, Ganin et al.2016]. These works heuristically apply the conclusions of the empirical studies; proposing a principled parameter-based transfer method is not their main focus. We expect that integrating the PTU with these models might further improve their performance.

The most relevant work to the proposed method is the cross-stitch network [Misra et al.2016]. Instead of assigning blocks of parameters as "transferable" or "not transferable", a soft parameter-based transfer method is adopted: knowledge sharing between the networks of two tasks is achieved with a "cross-stitch" unit, which learns a linear combination of activations from the different networks. The proposed PTU differs from the cross-stitch network in two aspects. First, transferability is controlled by linear combination coefficients in the cross-stitch unit, while the PTU learns a nonlinear combination, which is more expressive. Second, the cross-stitch unit is proposed for multi-task learning with CNNs, while the PTU is designed for transfer learning where the target domain performance is the main focus. Moreover, the PTU is applied and evaluated with both CNNs and RNNs.

Parameter Transfer Unit (PTU)

In this section, we present the proposed PTU. First, three hand-crafted discrete transfer states are introduced as background. Then we introduce the use of the PTU in CNNs and RNNs. Finally, we discuss several extensions of the PTU that address the scalability issue.

Three Transfer States

There are usually three states for parameter transfer, sorted in ascending order of transferability, as shown in Fig. 2:

  1. Random: the parameters are randomly initialized and learned with the target domain data only;

  2. Fine-tune: the parameters are initialized with those from the source domain network, and then fine-tuned with the target domain data;

  3. Frozen: the parameters are initialized with those from the source domain network, and are kept unchanged during the training process in the target domain. When parameter-sharing is applied to a convolutional layer (or an RNN cell), the parameters of that layer are frozen.

Figure 2: Three hand-crafted discrete transfer states (Random, Fine-tune, Frozen)
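The three states above can be sketched as follows; this is a minimal illustration rather than the paper's implementation, and `source_weights` stands in for weights taken from a trained source domain network:

```python
import numpy as np

# One block of parameters under each of the three transfer states.
rng = np.random.default_rng(0)
source_weights = rng.normal(size=(3, 3))  # assumed pre-trained weights

def init_block(state, shape=(3, 3)):
    """Return (initial_weights, trainable) for a given transfer state."""
    if state == "random":     # fresh initialization, trained on target data only
        return rng.normal(size=shape), True
    if state == "fine-tune":  # copy source weights, keep training them
        return source_weights.copy(), True
    if state == "frozen":     # copy source weights, never update them
        return source_weights.copy(), False
    raise ValueError(f"unknown transfer state: {state}")

weights, trainable = init_block("frozen")
assert np.allclose(weights, source_weights) and not trainable
```

The three states differ only in where the initial weights come from and whether gradients are applied, which is why they can be seen as discrete points on a transferability spectrum.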

PTU for CNNs

An overview of transfer learning with PTUs in a CNN is shown in Fig. 3. The whole network, denoted by PTU-CNN, is composed of three parts, a source domain network, a target domain network and a few PTUs, which are denoted by blue blocks, green blocks and red circles in Fig. 3, respectively. A labeled target domain sample is denoted by (x, y).

Figure 3: An overview of the PTU-CNN.

Let L denote the number of layers in the target domain network; the target domain network shares an identical architecture with the source domain network from the first layer to the (L-1)-th layer. This allows parameter transfer between different tasks or heterogeneous domains whose label spaces differ. The parameters of the source domain network are frozen, while the parameters of the target domain network are randomly initialized and learned with target domain data only. PTUs are placed between the two networks in a layer-wise manner to combine activations from both domains. In the training phase, only the target domain network and the PTUs are optimized; domain-specific knowledge is encoded by the target domain network, and the PTUs learn how to transfer from the source domain network. In the inference phase, a target domain sample is fed into both networks, following the flows shown by the arrows in Fig. 3, and a predicted label is finally produced by the output layer of the target domain network.

Let T_l denote the l-th layer of the target domain network (1 ≤ l ≤ L), and let a_S^l / a_T^l denote the output of the l-th layer in the source/target domain network, respectively. Given a_S^l and a_T^l, a PTU learns a nonlinear combination of the two, denoted by a_c^l, and feeds a_c^l to the (l+1)-th layer of the target domain network.

There are two gates in a PTU, a fine-tune gate g_f and an update gate g_u, defined in Eq. (1):

g_f = σ(W_f [a_S^l ; a_T^l]),   g_u = σ(W_u [a_S^l ; a_T^l])   (1)

where [· ; ·] denotes the concatenation operation and σ denotes the sigmoid function. The gates are artificial neurons whose parameters are denoted by W_f and W_u, respectively. They take the activations a_S^l and a_T^l as inputs, and output a value between 0 and 1 for each element of the activations. The outputs of the gates then mask the hidden activations and yield the combined activation a_c^l, as defined in Eq. (2):

â_S^l = g_f ⊙ f(W a_S^l) + (1 - g_f) ⊙ a_S^l
a_c^l = g_u ⊙ â_S^l + (1 - g_u) ⊙ a_T^l   (2)

where ⊙ denotes element-wise multiplication, f denotes an activation function, usually the hyperbolic tangent function or the Rectified Linear Unit (ReLU), and W characterizes a linear transformation that adapts the source domain activations to the target domain. The nonlinear transformation f(W a_S^l) plays the role of fine-tuning. The fine-tune gate g_f produces a weighted sum of the source domain activations with and without this transformation, denoted by â_S^l. The update gate g_u determines how to combine the target domain activations with the adapted source domain activations. Details of the PTU are shown in Fig. 4.

In extreme cases, the PTU degenerates to the hand-crafted discrete transfer states. When the update gate outputs 0, the fine-tune gate is ignored and the activations completely come from the target domain; otherwise, source domain information is taken into consideration. When the fine-tune gate outputs 0, the source domain activations are regarded as highly transferable and are directly copied to the target domain; otherwise, transformed source domain activations are used. Thus the PTU subsumes the three discrete transfer states. In most cases, the output of the PTU is a fine-grained combination of the activations from both networks.
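A single PTU forward pass, following the gating scheme of Eqs. (1) and (2), can be sketched in NumPy. The parameter names (W_f, W_u, W), the dimensionality, and the choice of tanh as the activation f are assumptions for illustration; in the real model these weights are learned rather than sampled:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                               # activation dimensionality (assumed)
a_src = rng.normal(size=d)          # source domain activation a_S^l
a_tgt = rng.normal(size=d)          # target domain activation a_T^l
W_f = rng.normal(size=(d, 2 * d))   # fine-tune gate parameters
W_u = rng.normal(size=(d, 2 * d))   # update gate parameters
W = rng.normal(size=(d, d))         # source-to-target linear transformation

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

z = np.concatenate([a_src, a_tgt])  # concatenation [a_S^l ; a_T^l]
g_f = sigmoid(W_f @ z)              # fine-tune gate, Eq. (1)
g_u = sigmoid(W_u @ z)              # update gate, Eq. (1)

# Adapted source activation: mix of transformed and raw source activations.
a_src_hat = g_f * np.tanh(W @ a_src) + (1 - g_f) * a_src
# Combined activation fed to the next target layer, Eq. (2).
a_combined = g_u * a_src_hat + (1 - g_u) * a_tgt
```

Setting g_u to all zeros recovers the pure target-domain activation, and setting g_f to all zeros copies the source activation unchanged, matching the degenerate cases described above.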

Figure 4: Details of the PTU. The gates look at the activations a_S^l and a_T^l from both networks. The fine-tune gate decides how to adapt the source domain activations, and the update gate determines how to combine the target domain activations with the transformed source domain activations â_S^l.

PTU for RNNs

We have focused on the PTU in CNNs so far; here we extend the PTU to RNNs, denoting the result by PTU-RNN. As shown in Fig. 5, the input is a sequence x = (x_1, ..., x_T) with T steps. An RNN can be unrolled into a full network in which the parameters of the RNN cell are shared across all time steps, so the RNN can handle sequences of arbitrary length. The t-th time step in an RNN can be regarded as the l-th layer in a CNN. Similarly, h_S^t / h_T^t denotes the internal hidden state at the t-th time step of the source/target RNN cell, respectively. With this correspondence between the notations for CNNs and RNNs, the PTU can be readily extended to RNNs.

Figure 5: Unrolled PTU-RNN. The source/target domain RNN cell is denoted by blue/green blocks. The connections from the inputs to the RNN cells are denoted by dashed gray arrows.

Scalability

As each PTU introduces three parameter tensors, W_f, W_u and W (the two gate parameters and the transformation), scalability becomes a challenge. The additional parameters take up more computational resources, e.g., GPU memory. In addition, more free parameters require more training data; otherwise, over-fitting is likely to occur. To reduce the computational cost, W_f, W_u and W are shared across all time steps in the PTU-RNN. This cannot be applied in the PTU-CNN because the dimensions of the PTU parameters in different layers do not agree. In the PTU-CNN, depth-wise separable convolutions [Sifre and Mallat2014, Howard et al.2017] are therefore used instead of standard convolutions. To address the over-fitting issue, regularization techniques are necessary. Traditional regularizers, such as ℓ1 and ℓ2 regularization, together with structured sparsity [Wen et al.2016], are applied. These techniques allow the PTU to scale up to very deep CNNs such as MobileNets [Howard et al.2017].

Depth-wise Separable Convolution

The depth-wise separable convolution was initially proposed in [Sifre and Mallat2014], and it can greatly reduce computational costs at the price of a slightly degraded performance [Howard et al.2017]. It factorizes a standard convolution into two steps, a depth-wise convolution and a point-wise convolution. In the depth-wise convolution step, a single filter is applied to each input channel; the point-wise convolution then applies a 1×1 convolution to the output of the depth-wise convolution. For a convolutional layer with N filters of size D_K × D_K, the computational cost is reduced by a factor of 1/N + 1/D_K^2 [Howard et al.2017].
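The reduction factor follows directly from the multiply-add counts of the two convolution types and can be verified with a short calculation; the layer configuration below is a made-up example under the MobileNets cost model [Howard et al.2017]:

```python
def conv_cost(d_k, m, n, d_f):
    """Multiply-adds of a standard convolution: D_K^2 * M * N * D_F^2."""
    return d_k * d_k * m * n * d_f * d_f

def separable_cost(d_k, m, n, d_f):
    """Depth-wise cost (D_K^2 * M * D_F^2) plus point-wise cost (M * N * D_F^2)."""
    return d_k * d_k * m * d_f * d_f + m * n * d_f * d_f

# Hypothetical layer: 3x3 kernels, 64 input channels, 128 filters, 32x32 maps.
d_k, m, n, d_f = 3, 64, 128, 32
ratio = separable_cost(d_k, m, n, d_f) / conv_cost(d_k, m, n, d_f)
assert abs(ratio - (1 / n + 1 / d_k ** 2)) < 1e-12  # factor 1/N + 1/D_K^2
```

For 3×3 kernels the factorization costs roughly 8 to 9 times less than the standard convolution, since the 1/D_K^2 term dominates when N is large.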

Figure 6: Image classification accuracy of CNN models for settings (a) S1, (b) S2, (c) S3

Structured Sparsity

As W_f, W_u and W are high-dimensional tensors, structured sparsity learning is imposed to penalize unimportant weights and improve computational efficiency [Wen et al.2016]. Filter-wise and channel-wise group Lasso regularization are applied to the parameters of the PTU.
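The group Lasso penalties can be sketched as follows; the tensor shape and group definitions are a simplified illustration of the structured sparsity of [Wen et al.2016], not the paper's exact regularizer:

```python
import numpy as np

def filter_wise_group_lasso(w):
    """Sum over filters of the L2 norm of each filter's weights."""
    return sum(np.linalg.norm(w[f].ravel()) for f in range(w.shape[0]))

def channel_wise_group_lasso(w):
    """Sum over input channels of the L2 norm of each channel slice."""
    return sum(np.linalg.norm(w[:, c].ravel()) for c in range(w.shape[1]))

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 4, 3, 3))  # (filters, channels, kernel_h, kernel_w)
penalty = filter_wise_group_lasso(w) + channel_wise_group_lasso(w)
```

Adding `penalty` (scaled by a regularization coefficient) to the training loss pushes whole filters or channels toward zero, so entire groups of PTU parameters can be pruned rather than individual weights.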

Experimental Results

We evaluate the PTU with both CNNs and RNNs on classification tasks, with classification accuracy as the evaluation metric. All the neural networks are implemented in TensorFlow [Abadi et al.2016].

Experiments on CNNs

We first describe the experimental setup, then report numerical results, and finally provide an interpretation of the output values of the gates in the PTU.

Experimental Setup

Various network architectures are evaluated on multiple transfer domain pairs. Three transfer settings, denoted by S1, S2 and S3, are composed from four natural image classification datasets and three network architectures.

For CIFAR-100, there is a standard train/test split, and a portion of the training data is reserved as a validation set. For Caltech-256, a fixed number of images per class are used as the training and validation sets, respectively. The learning rate is treated as a hyper-parameter and selected via the hold-out method.

Two baseline models are considered:

  • No transfer (NoTL). The parameters are learned from scratch in the target domain.

  • Layer-wise fine-tuning (FT). For a CNN with L layers, if each layer has two possible transfer states, "fine-tune" and "frozen", there are 2^L transfer strategies, which incurs prohibitive computational costs. To improve efficiency, we adopt a strategy in which layers are incrementally frozen, since the parameter transferability drops when moving from low-level layers to high-level layers in a CNN [Yosinski et al.2014]. This leaves a number of fine-tuning strategies that is linear in L, where FT-k denotes the strategy that freezes the first k layers and fine-tunes the remaining ones.
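The incremental freezing heuristic can be expressed in a few lines; the 5-layer CNN below is hypothetical and serves only to show how FT-k strategies are enumerated:

```python
def ft_strategy(k, n_layers):
    """Per-layer trainability for FT-k: False = frozen, True = fine-tuned."""
    return [i >= k for i in range(n_layers)]

n_layers = 5
strategies = {f"FT-{k}": ft_strategy(k, n_layers) for k in range(n_layers + 1)}
# Only n_layers + 1 strategies to try, instead of 2**n_layers for the full
# layer-wise search.
```

FT-0 fine-tunes every layer, while FT-5 freezes the whole feature extractor; intermediate values of k trade off source-domain retention against target-domain adaptation.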

Figure 7: Parameter transferability of different layers in CNNs for settings (a) S1, (b) S2, (c) S3

Results

The results of the three methods are depicted in Fig. 6. As there are multiple fine-tuning models, only the highest test accuracy is summarized in Table 1, where Δ denotes the relative improvement of the PTU model over the FT model.

As shown in Fig. 6, the parameter transferability differs with domains and architectures. For S1, the parameters in low-level layers are more transferable than those in high-level layers, and the parameter transferability decreases monotonically, which is generally consistent with the conclusions in [Yosinski et al.2014]. But this conclusion does not generalize to S2 and S3. For example, in S3, freezing the first layers achieves a higher accuracy than fine-tuning the whole network. These results indicate that the heuristic layer-wise fine-tuning method might not yield the optimal parameter transfer strategy.

Low-resource target domains benefit from parameter-based transfer learning, as both the FT model and the PTU model outperform the NoTL model. Furthermore, the PTU model achieves performance comparable to the FT model in S2, and obtains the best test accuracies in the other settings. These results demonstrate the effectiveness of the PTU in various domains and with different network architectures. Unlike layer-wise fine-tuning, which involves multiple training processes, a reasonable parameter transfer strategy can be learned in one pass with the PTU.

Settings | NoTL | FT | PTU | Δ (%)
S1 |
S2 |
S3 |
Table 1: Classification accuracy on CNNs (the optimal test accuracy of each setting is highlighted in bold face.)

Quantify Parameter Transferability by Gate Outputs

The two gates in the PTU control the parameter transferability, which provides an approach to quantifying it. The average output values of the two gates in different layers are shown in Fig. 7. The classification accuracy of the FT model, which is an indicator of the parameter transferability, is also included.

The update gate controls how much knowledge flows from the source domain to the target domain; a larger value indicates more knowledge transfer. For example, in Fig. 7(a), the FT model that achieves the optimal test accuracy coincides with the highest average update gate value. In addition to the update gate, the fine-tune gate characterizes how many activations are copied from the source domain as-is, and how many need to be transformed before being applied to the target domain, which has not been considered by existing works.

Here we quantify the parameter transferability by the average output values of the gates, which are scalars. A more fine-grained visualization analysis could also be performed; for example, a large gate value for a particular filter might help identify important patterns shared between domains, which could help demystify DNNs, often considered black boxes. Since understanding neural networks via visualization is not the main focus of this paper, we leave it as future work.

Figure 8: Learning curves of two transfer settings: (a) training loss of S3, (b) validation accuracy of S3, (c) training loss of Greek, (d) validation accuracy of Greek.
Datasets | RG | K-NN | NoTL | FT | PTU | Δ (%)
Greek |
Latin |
Korean |
JP-hiragana |
JP-katakana |
Table 2: Classification accuracy of MNIST → Omniglot transfer

Experiments on RNNs

The experimental setup is first introduced, and then numerical results are presented.

Experimental Setup

The PTU for RNNs is evaluated with two hand-written character recognition datasets, MNIST [LeCun et al.1998] as the source domain and Omniglot [Lake, Salakhutdinov, and Tenenbaum2015] as the target domain. From the alphabets in the Omniglot dataset, five are randomly selected and used as target domains (Greek, Latin, Korean, Japanese hiragana and Japanese katakana; see Table 2). Each alphabet is composed of a few characters, with a small number of labeled samples per character, and each alphabet is split into train, validation and test sets. All images are resized, and at each time step a row of an image is fed into the RNN. An RNN is used as a classifier; it achieves a high classification accuracy in the source domain. Since the label spaces of the two domains do not agree, the only transferable parameters are those in the RNN cell, and hence there is only one FT strategy. In addition to the NoTL and FT models, a random guess baseline, denoted by RG, and a K-Nearest Neighbor (K-NN) classifier are included. The K-NN classifier is implemented with scikit-learn [Pedregosa et al.2011].

RNNs are optimized with stochastic gradient descent. The learning rate for training the RNNs and the number of neighbors K of the K-NN classifier are treated as hyper-parameters and tuned on the validation set.

Results

The classification accuracies are listed in Table 2. Since there are only a small number of labeled training samples in each target domain, the NoTL model performs even worse than the simple K-NN classifier in some of the domains. Conclusions similar to those of the CNN experiments can be drawn: classification accuracy is improved when a parameter-based transfer learning method is applied, and the proposed PTU further improves over the FT model by a large margin.

The reasons that the PTU outperforms heuristic parameter-sharing and fine-tuning might be twofold.

  1. The PTU subsumes hand-crafted transfer states by introducing learnable gates. It is more expressive in terms of model capacity.

  2. In the PTU, source domain knowledge is retained in frozen parameters, while domain-specific knowledge is encoded in the target domain network. In contrast, the parameters of the FT model are changed during training in the target domain, which might impair useful knowledge from the source domain.

Optimization Efficiency

We investigate the optimization dynamics of different models by learning curves. The learning curves of two transfer settings, S3 in the CNN experiments and the Greek alphabet as the target domain in the RNN experiments, are shown in Fig. 8. Both training loss and validation accuracy are reported for each setting, with the learning rate that yields the optimal test classification accuracy.

For the S3 setting, the NoTL model converges slowly even though a large learning rate is used, and its validation accuracy stays low during the early training steps. The optimization efficiency is significantly improved by parameter transfer. The FT model converges at a rate similar to the NoTL model even with a much smaller learning rate. For the PTU model, the training loss drops quickly and the validation accuracy saturates early. The PTU is also rather resistant to over-fitting, since its validation accuracy does not deteriorate as training continues.

For the Greek setting, the NoTL model gets stuck in a bad local optimum: the training loss decreases while the validation accuracy plateaus at a low level. The FT model starts with the largest training loss and converges the fastest, but it suffers from over-fitting. The PTU model uses a larger learning rate than the two baseline models, as it introduces additional parameters. The PTU model has better generalization ability, as it achieves the highest accuracy on the validation set.

Conclusion

A principled approach to learning a parameter transfer strategy is proposed in this paper, realized as a novel parameter transfer unit (PTU). The parameter transferability is controlled at a fine-grained scale by two gates in the PTU, which can be learned from data. Experimental results demonstrate the effectiveness of the PTU with both CNNs and RNNs in multiple transfer settings, where it outperforms heuristic parameter-sharing and fine-tuning. In the future, we will apply the PTU in more challenging settings, for example, image captioning, which involves multi-modal data. It is also worth exploring whether existing transfer learning models can be improved with the PTU.

References

  • [Abadi et al.2016] Abadi, M.; Barham, P.; Chen, J.; Chen, Z.; Davis, A.; Dean, J.; Devin, M.; Ghemawat, S.; Irving, G.; Isard, M.; et al. 2016. Tensorflow: A system for large-scale machine learning. In OSDI, volume 16, 265–283.
  • [Ganin et al.2016] Ganin, Y.; Ustinova, E.; Ajakan, H.; Germain, P.; Larochelle, H.; Laviolette, F.; Marchand, M.; and Lempitsky, V. S. 2016. Domain-adversarial training of neural networks. Journal of Machine Learning Research 17:59:1–59:35.
  • [Griffin, Holub, and Perona2007] Griffin, G.; Holub, A.; and Perona, P. 2007. Caltech-256 object category dataset.
  • [Howard et al.2017] Howard, A. G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; and Adam, H. 2017. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861.
  • [Krizhevsky and Hinton2009] Krizhevsky, A., and Hinton, G. 2009. Learning multiple layers of features from tiny images.
  • [Lake, Salakhutdinov, and Tenenbaum2015] Lake, B. M.; Salakhutdinov, R.; and Tenenbaum, J. B. 2015. Human-level concept learning through probabilistic program induction. Science 350(6266):1332–1338.
  • [LeCun et al.1998] LeCun, Y.; Bottou, L.; Bengio, Y.; and Haffner, P. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE 86(11):2278–2324.
  • [Long et al.2015] Long, M.; Cao, Y.; Wang, J.; and Jordan, M. 2015. Learning transferable features with deep adaptation networks. In International Conference on Machine Learning, 97–105.
  • [Misra et al.2016] Misra, I.; Shrivastava, A.; Gupta, A.; and Hebert, M. 2016. Cross-stitch networks for multi-task learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3994–4003.
  • [Mou et al.2016] Mou, L.; Meng, Z.; Yan, R.; Li, G.; Xu, Y.; Zhang, L.; and Jin, Z. 2016. How transferable are neural networks in NLP applications? In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, 479–489.
  • [Pan and Yang2010] Pan, S. J., and Yang, Q. 2010. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering 22(10):1345–1359.
  • [Pedregosa et al.2011] Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; Vanderplas, J.; Passos, A.; Cournapeau, D.; Brucher, M.; Perrot, M.; and Duchesnay, E. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research 12:2825–2830.
  • [Russakovsky et al.2015] Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; Berg, A. C.; and Fei-Fei, L. 2015. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV) 115(3):211–252.
  • [Sifre and Mallat2014] Sifre, L., and Mallat, S. 2014. Rigid-motion scattering for image classification. Ph.D. Dissertation, Ecole Polytechnique.
  • [Simonyan and Zisserman2014] Simonyan, K., and Zisserman, A. 2014. Very deep convolutional networks for large-scale image recognition. CoRR abs/1409.1556.
  • [Tzeng et al.2014] Tzeng, E.; Hoffman, J.; Zhang, N.; Saenko, K.; and Darrell, T. 2014. Deep domain confusion: Maximizing for domain invariance. arXiv preprint arXiv:1412.3474.
  • [Wen et al.2016] Wen, W.; Wu, C.; Wang, Y.; Chen, Y.; and Li, H. 2016. Learning structured sparsity in deep neural networks. In Advances in Neural Information Processing Systems, 2074–2082.
  • [Yosinski et al.2014] Yosinski, J.; Clune, J.; Bengio, Y.; and Lipson, H. 2014. How transferable are features in deep neural networks? In Advances in neural information processing systems, 3320–3328.
  • [Zoph et al.2016] Zoph, B.; Yuret, D.; May, J.; and Knight, K. 2016. Transfer learning for low-resource neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, 1568–1575.