Implementation of http://arxiv.org/abs/1511.05641 that lets one build a larger net starting from a smaller one.
We introduce techniques for rapidly transferring the information stored in one neural net into another neural net. The main purpose is to accelerate the training of a significantly larger neural net. During real-world workflows, one often trains very many different neural networks during the experimentation and design process. This is a wasteful process in which each new model is trained from scratch. Our Net2Net technique accelerates the experimentation process by instantaneously transferring the knowledge from a previous network to each new deeper or wider network. Our techniques are based on the concept of function-preserving transformations between neural network specifications. This differs from previous approaches to pre-training that altered the function represented by a neural net when adding layers to it. Using our knowledge transfer mechanism to add depth to Inception modules, we demonstrate a new state of the art accuracy rating on the ImageNet dataset.
We propose a new kind of operation to perform on large neural networks: rapidly transferring knowledge contained in one neural network to another neural network. We call this the Net2Net family of operations. We use Net2Net as a general term describing any process of training a student network significantly faster than would otherwise be possible by leveraging knowledge from a teacher network that was already trained on the same task. In this article, we propose two specific Net2Net methodologies. Both are based on the idea of function-preserving transformations of neural networks. Specifically, we initialize the student to be a neural network that represents the same function as the teacher, but using a different parameterization. One of these transformations, Net2WiderNet, allows replacing a model with an equivalent model that is wider (has more units in each hidden layer). Another of these transformations, Net2DeeperNet, allows replacing a model that satisfies some properties with an equivalent, deeper model. After initializing the larger network to contain all of the knowledge previously acquired by the smaller network, the larger network may be trained to improve its performance.
Traditionally, machine learning algorithms have been designed to receive a fixed dataset as input, initialize a new model with no knowledge, and train that model to convergence on that dataset. Real workflows are considerably more complicated than this idealized scenario. We advocate Net2Net operations as a useful tool for accelerating real-world workflows.
One way that real workflows deviate from the idealized scenario is that machine learning practitioners usually do not train only a single model on each dataset. Instead, one typically trains multiple models, with each model designed to improve upon the previous model in some way. Each step in the iterative design process relies on fully training and evaluating the innovation from the previous step. For many large models, training is a long process, lasting for a week or even for a month. This makes data-driven iterative design slow, due to the latency of evaluating whether each change to the model caused an improvement.
Net2Net operations accelerate these workflows by rapidly transferring knowledge from the previous best model into each new model that an experimenter proposes. Instead of training each considered design of model for as much as a month, the experimenter can use Net2Net to train the model for a shorter period of time, beginning from the function learned by the previous best model. Fig. 1 illustrates the difference between this approach and the traditional one.
More ambitiously, real machine learning systems will eventually become lifelong learning systems (Thrun, 1995; Silver et al., 2013; Mitchell et al., 2015). These machine learning systems need to continue to function for long periods of time and continually experience new training examples as these examples become available. We can think of a lifelong learning system as experiencing a continually growing training set. The optimal model complexity changes as training set size changes over time. Initially, a small model may be preferred, in order to prevent overfitting and to reduce the computational cost of using the model. Later, a large model may be necessary to fully utilize the large dataset. Net2Net operations allow us to smoothly instantiate a significantly larger model and immediately begin using it in our lifelong learning system, rather than needing to spend weeks or months re-training a larger model from scratch on the latest, largest version of the training set.
In this section, we describe our new Net2Net operations and how we applied them to real deep neural nets.
We briefly experimented with a method that proved not to offer a significant advantage: training a large student network beginning from a random initialization, and introducing a set of extra “teacher prediction” layers into the student network. Specifically, several convolutional hidden layers of the student network were provided as input to new, learned, convolutional layers. The cost function was modified to include terms encouraging the output of these auxiliary layers to be close to a corresponding layer in the teacher network. In other words, the student is trained to use each of its hidden layers to predict the values of the hidden layers in the teacher.
The goal was that the teacher would provide a good internal representation for the task that the student could quickly copy and then begin to refine. The approach resembles the FitNets (Romero et al., 2014)
strategy for training very thin networks of moderate depth. Unfortunately, we did not find that this method offered any compelling speedup or other advantage relative to the baseline approach. This may be because our baseline was very strong, based on training with batch normalization (Ioffe & Szegedy, 2015). Mahayri et al. (2015) independently observed that the benefits of the FitNets training strategy were eliminated after changing the model to use batch normalization.
The FitNets-style approach to Net2Net learning is very general, in the sense that, if successful, it would allow any architecture of student network to learn from any architecture of teacher network. Though we were not able to make this general approach work, we encourage other researchers to attempt to design fully general Net2Net strategies in the future. We instead turned to different Net2Net strategies that were limited in scope but more effective.
We introduce two effective Net2Net strategies. Both are based on initializing the student network to represent the same function as the teacher, then continuing to train the student network by normal means. Specifically, suppose that a teacher network is represented by a function $y = f(x; \theta)$, where $x$ is the input to the network, $y$ is the output of the network, and $\theta$ is the parameters of the network. In our experiments, $f$ is defined by a convolutional network, $x$ is an input image, and $y$ is a probability distribution over object categories.
Our strategy is to choose a new set of parameters $\theta'$ for a student network $g(x; \theta')$ such that $\forall x,\ f(x; \theta) = g(x; \theta')$.
In our description of the method, we assume for simplicity that both the teacher and student networks contain compositions of standard linear neural network layers of the form $\mathbf{h}^{(i)} = \phi\big(W^{(i)\top} \mathbf{h}^{(i-1)}\big)$, where $\mathbf{h}^{(i)}$ is the activations of hidden layer $i$, $W^{(i)}$ is the matrix of incoming weights for this layer, and $\phi$ is an activation function, such as the rectified linear activation function (Jarrett et al., 2009; Glorot et al., 2011) or the maxout activation function (Goodfellow et al., 2013). We use this layer formalization for clarity, but our method generalizes to arbitrary composition structure of these layers, such as Inception (Szegedy et al., 2014). All of our experiments are in fact performed using Inception networks.
Though we describe the method in terms of matrix multiplication for simplicity, it readily extends to convolution (by observing that convolution is multiplication by a doubly block circulant matrix) and to the addition of biases (by treating biases as a row of $W$ that are multiplied by a constant input of $1$). We will also give a specific description for the convolutional case in subsequent discussions of the details of the method. Function-preserving initialization carries many advantages over other approaches to growing the network:
The new, larger network immediately performs as well as the original network, rather than spending time passing through a period of low performance.
Any change made to the network after initialization is guaranteed to be an improvement, so long as each local step is an improvement. Previous methods could fail to improve over the baseline even if each local step was guaranteed to be an improvement, because the initial change to the larger model size could worsen performance.
It is always “safe” to optimize all parameters in the network. There is never a stage where some layers receive harmful gradients and must be frozen. This is in contrast to approaches such as cascade correlation, where the old units are frozen in order to avoid making poor adaptations while attempting to influence the behavior of new randomly connected units.
Our first proposed transformation is the Net2WiderNet transformation. This allows a layer to be replaced with a wider layer, meaning a layer that has more units. For convolutional architectures, this means the layers will have more convolution channels.
Let us start with a simple special case to illustrate the operation. Suppose that layer $i$ and layer $i+1$ are both fully connected layers, and layer $i$ uses an elementwise non-linearity. To widen layer $i$, we replace $W^{(i)}$ and $W^{(i+1)}$. If layer $i$ has $m$ inputs and $n$ outputs, and layer $i+1$ has $p$ outputs, then $W^{(i)} \in \mathbb{R}^{m \times n}$ and $W^{(i+1)} \in \mathbb{R}^{n \times p}$. Net2WiderNet allows us to replace layer $i$ with a layer that has $q$ outputs, with $q > n$. We will introduce a random mapping function $g: \{1, 2, \dots, q\} \to \{1, 2, \dots, n\}$ that satisfies
$$g(j) = \begin{cases} j & j \le n \\ \text{random sample from } \{1, 2, \dots, n\} & j > n. \end{cases}$$
We introduce new weight matrices $U^{(i)}$ and $U^{(i+1)}$ representing the weights for these layers in the new student network. Then the new weights are given by
$$U^{(i)}_{k,j} = W^{(i)}_{k,g(j)}, \qquad U^{(i+1)}_{j,h} = \frac{1}{|\{x \mid g(x) = g(j)\}|}\, W^{(i+1)}_{g(j),h}.$$
Here, the first $n$ columns of $W^{(i)}$ are copied directly into $U^{(i)}$. Columns $n+1$ through $q$ of $U^{(i)}$ are created by copying columns of $W^{(i)}$ chosen at random, as defined by $g$. The random selection is performed with replacement, so each column of $W^{(i)}$ is copied potentially many times. For weights in $U^{(i+1)}$, we must account for the replication by dividing the weight by the replication factor $|\{x \mid g(x) = g(j)\}|$, so all the units compute exactly the same value as the corresponding unit in the original net.
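As a concrete illustration of this rule, the following is a minimal NumPy sketch for the fully connected case; the function name, signature, and the small sanity check are illustrative assumptions, not taken from any released Net2Net code.

```python
import numpy as np

def net2wider_fc(W1, W2, q, rng=None):
    """Widen the hidden layer between two fully connected layers from n to
    q units (q >= n) while preserving the function, following the rule
    above. W1 has shape (m, n); W2 has shape (n, p)."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = W1.shape[1]
    assert q >= n, "each original unit must be replicated at least once"

    # Random mapping g: g(j) = j for j < n, otherwise a random old unit.
    g = np.concatenate([np.arange(n), rng.integers(0, n, size=q - n)])
    counts = np.bincount(g, minlength=n)       # replication factor of each old unit

    U1 = W1[:, g]                              # copy outgoing columns of layer i
    U2 = W2[g, :] / counts[g][:, None]         # copy rows of layer i+1, divide by factor
    return U1, U2, g


# Quick check that the widened student computes the same function as the teacher.
rng = np.random.default_rng(1)
x = rng.standard_normal((4, 5))
W1, W2 = rng.standard_normal((5, 3)), rng.standard_normal((3, 2))
U1, U2, _ = net2wider_fc(W1, W2, q=7)
relu = lambda z: np.maximum(z, 0.0)
assert np.allclose(relu(x @ W1) @ W2, relu(x @ U1) @ U2)
```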
This description can be generalized to making multiple layers wider, with the layers composed as described by an arbitrary directed acyclic computation graph. This general procedure is illustrated by Fig. 2. So far we have only discussed the use of a single random mapping function to expand one layer. We can in fact introduce a random mapping function for every non-output layer. Importantly, these are subject to some constraints as defined by the computation graph. Care needs to be taken to ensure that the remapping functions do in fact result in function preservation.
To explain, we provide examples of two computational graph structures that impose specific constraints on the random remapping functions.
One example is the layer structure used by batch normalization (Ioffe & Szegedy, 2015). The layer involves both a standard linear transformation and an elementwise multiplication by learned parameters that allow the layer to represent any range of outputs despite the normalization operation. The random remapping for the multiplication parameters must match the random remapping for the weight matrix. Otherwise we could generate a new unit that uses the weight vector for pre-existing unit $i$ but is scaled by the multiplication parameter for unit $j$. The new unit would not implement the same function as the old unit $i$ or as the old unit $j$.
Another example is concatenation. If we concatenate the output of layer 1 and layer 2, then pass this concatenated output to layer 3, the remapping function for layer 3 needs to take the concatenation into account. The width of the output of layer 1 will determine the offset of the coordinates of units originating in layer 2.
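As a tiny sketch of this constraint (the helper name and the convention of representing remappings as index arrays are assumptions of ours): if g1 and g2 are the remapping functions for the widened layers 1 and 2, the remapping for layer 3's concatenated input can be built by offsetting layer 2's teacher indices by layer 1's original width.

```python
import numpy as np

def concat_remapping(g1, g2, old_width1):
    """Remapping for a layer that consumes the concatenation of layer 1 and
    layer 2: student units copied from layer 2 refer to teacher units whose
    coordinates are shifted by layer 1's original output width."""
    return np.concatenate([np.asarray(g1), np.asarray(g2) + old_width1])
```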
To make Net2WiderNet a fully general algorithm, we would need a remapping inference
algorithm that makes a forward pass through the graph, querying each operation in the graph about how to make the remapping functions consistent. For our experiments, we manually designed all of the necessary inference rules for the Inception network, which also work for most feed-forward networks. This is similar to the existing shape inference functionality that allows us to predict the shape of any tensor in the computational graph by making a forward pass through the graph, beginning with input tensors of known shape.
After we get the random mapping, we can copy the weights over and divide by the replication factor, which is formally given by the following equation:
$$U^{(i)}_{k,j} = \frac{1}{|\{x \mid g^{(i-1)}(x) = g^{(i-1)}(k)\}|}\, W^{(i)}_{g^{(i-1)}(k),\, g^{(i)}(j)},$$
where $g^{(i-1)}$ and $g^{(i)}$ are the remapping functions for the layer's input and output units, respectively.
It is essential that each unit be replicated at least once, hence the constraint that the resulting layer be wider. This operator can be applied arbitrarily many times; we can expand only one layer of the network, or we can expand all non-output layers.
In settings where several units need to share the same weights, for example in convolution operations, we can add such a constraint to the random mapping generation, so that the source of the weights is consistent. This corresponds to making the random mapping at the channel level instead of the unit level; the rest of the procedure remains the same.
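For convolution layers, the same idea applies at the channel level. The following is a hedged NumPy sketch assuming kernels stored as (kh, kw, in_channels, out_channels), as in TensorFlow; the helper name and argument choices are illustrative.

```python
import numpy as np

def net2wider_conv(w1, b1, w2, new_width, noise_std=0.0, rng=None):
    """Channel-level Net2WiderNet for two stacked conv layers: widen the
    output channels of layer i (kernel w1, bias b1) and the input channels
    of layer i+1 (kernel w2). Kernels are (kh, kw, in_ch, out_ch)."""
    rng = np.random.default_rng(0) if rng is None else rng
    old_width = w1.shape[3]
    assert new_width >= old_width

    # One random mapping per channel; all spatial positions of a kernel
    # share it, keeping the shared weights consistent.
    g = np.concatenate([np.arange(old_width),
                        rng.integers(0, old_width, size=new_width - old_width)])
    counts = np.bincount(g, minlength=old_width)

    new_w1 = w1[:, :, :, g].copy()                              # replicate output channels
    new_b1 = b1[g].copy()
    new_w2 = w2[:, :, g, :] / counts[g][None, None, :, None]    # divide by replication factor

    # Optional symmetry-breaking noise on the replicated channels only.
    if noise_std > 0:
        new_w1[:, :, :, old_width:] += noise_std * rng.standard_normal(
            new_w1[:, :, :, old_width:].shape)
    return new_w1, new_b1, new_w2, g
```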
When training with certain forms of randomization on the widened layer, such as dropout (Srivastava et al., 2014), it is acceptable to use the perfectly function-preserving Net2WiderNet transformation, as we have described so far. When using a training algorithm that does not use randomization to encourage identical units to learn different functions, one should add a small amount of noise to all but the first copy of each column of weights. This results in the student network representing only approximately the same function as the teacher, but this approximation is necessary to ensure that the student can learn to use its full capacity when training resumes.
We also introduce a second function-preserving transformation, Net2DeeperNet. This allows us to transform an existing net into a deeper one. Specifically, Net2DeeperNet replaces a layer $\mathbf{h}^{(i)} = \phi\big(W^{(i)\top} \mathbf{h}^{(i-1)}\big)$ with two layers $\mathbf{h}^{(i)} = \phi\big(U^{(i)\top} \phi\big(W^{(i)\top} \mathbf{h}^{(i-1)}\big)\big)$. The new matrix $U^{(i)}$ is initialized to an identity matrix, but remains free to learn to take on any value later. This operation is only applicable when $\phi$ is chosen such that $\phi(I\phi(v)) = \phi(v)$ for all vectors $v$. This property holds for the rectified linear activation. To obtain Net2DeeperNet for maxout units, one must use a matrix that is similar to identity, but with replicated columns. Unfortunately, for some popular activation functions, such as the logistic sigmoid, it is not possible to insert a layer of the same type that represents an identity function over the required domain. When we apply it to convolutional networks, we can simply set the convolution kernels to be identity filters.
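As a sketch of the convolutional case, the new layer's kernel can be set to an identity filter. This assumes odd kernel dimensions, 'same' padding, zero biases, and a rectified linear non-linearity; the helper name is ours.

```python
import numpy as np

def identity_conv_kernel(kh, kw, channels):
    """Return a (kh, kw, channels, channels) kernel that passes each channel
    through unchanged when used with 'same' padding, so the inserted layer
    (followed by ReLU) leaves the network's function intact."""
    assert kh % 2 == 1 and kw % 2 == 1, "identity filter needs odd kernel sizes"
    w = np.zeros((kh, kw, channels, channels))
    idx = np.arange(channels)
    w[kh // 2, kw // 2, idx, idx] = 1.0   # a single 1 at the centre of each channel's own filter
    return w
```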
In some cases, building an identity layer requires additional work. For example, when using batch normalization, we must set the output scale and output bias of the normalization layer to undo the normalization of the layer's statistics. This requires running forward propagation through the network on training data in order to estimate the activation statistics.
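A minimal sketch of this step, assuming the common batch-normalization form $\gamma (x - \mu)/\sqrt{\sigma^2 + \epsilon} + \beta$ and per-channel pre-normalization activations gathered by forward-propagating training data; the names are illustrative.

```python
import numpy as np

def bn_identity_params(pre_norm_activations, eps=1e-5):
    """Choose scale (gamma) and bias (beta) so a freshly inserted batch
    normalization layer acts as the identity on the estimated statistics."""
    mu = pre_norm_activations.mean(axis=0)
    var = pre_norm_activations.var(axis=0)
    gamma = np.sqrt(var + eps)   # cancels the division by the standard deviation
    beta = mu                    # cancels the subtraction of the mean
    return gamma, beta
```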
The approach we take is a specific case of a more general approach: building a multi-layer network that factorizes the original layer, yielding a deeper but equivalent representation. However, it is hard to do a general factorization of layers that contain non-linear transformations, such as rectified linear units and batch normalization. Restricting ourselves to adding identity transformations allows us to handle such non-linearities and to apply the methodology to the network architectures that we care about. It also avoids the computational cost of performing such a factorization. We think support for general factorization of layers is an interesting direction for future work.
When using Net2DeeperNet, there is no need to add noise. We can begin training with a student network that represents exactly the same function as the teacher network. Nevertheless, a small amount of noise can be added to break the symmetry more rapidly. At first glance, it appears that Net2Net can only add a new layer with the same width as the layer below it, due to the use of the identity operation. However, Net2WiderNet may be composed with Net2DeeperNet, so we may in fact add any hidden layer that is at least as wide as the layer below it.
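For illustration, here is a short check that the two operators compose, reusing the hypothetical net2wider_fc helper from the earlier sketch and an identity matrix as the new Net2DeeperNet layer in the fully connected case.

```python
import numpy as np

rng = np.random.default_rng(2)
relu = lambda z: np.maximum(z, 0.0)

x = rng.standard_normal((4, 5))
W1, W2 = rng.standard_normal((5, 3)), rng.standard_normal((3, 2))

# Net2WiderNet: widen the hidden layer from 3 to 6 units.
U1, U2, _ = net2wider_fc(W1, W2, q=6)

# Net2DeeperNet: insert a new 6x6 layer initialized to the identity.
W_new = np.eye(6)

teacher = relu(x @ W1) @ W2
student = relu(relu(x @ U1) @ W_new) @ U2
assert np.allclose(teacher, student)   # the composed student preserves the function
```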
We evaluated our Net2Net operators in three different settings. In all cases we used an Inception-BN network (Ioffe & Szegedy, 2015) trained on ImageNet. In the first setting, we demonstrate that Net2WiderNet can be used to accelerate the training of a standard Inception network by initializing it with a smaller network. In the second setting, we demonstrate that Net2DeeperNet allows us to increase the depth of the Inception modules. Finally, we use our Net2Net operators in a realistic setting, where we make more dramatic changes to the model size and explore the model space for better performance. In this setting, we demonstrate an improved result on ImageNet.
We will be comparing our method to some baseline methods:
“Random pad”: This is a baseline for Net2WiderNet. We widen the network by adding new units with random weights, rather than by replicating units to perform function-preserving initialization. This operation is implemented by padding the pre-existing weight matrices with additional random values.
“Random initialization” : This is a baseline for Net2DeeperNet. We compare the training of a deep network accelerated by initialization from a shallow one against the training of an identical deep network initialized randomly.
We start by evaluating the Net2WiderNet method. We began by constructing a teacher network that was narrower than the standard Inception network. We reduced the number of convolution channels at each layer within all Inception modules by a factor of $\sqrt{0.3}$. Because both the input and output numbers of channels are reduced, this reduces the number of parameters in most layers to 30% of the original amount. To simplify the software for our experiments, we did not modify any component of the network other than the Inception modules. After training this small teacher network, we used it to accelerate the training of a standard-sized student network.
Fig. 4 shows the comparison of the different approaches. We find that the proposed approach gives faster convergence than the baseline approaches. Importantly, Net2WiderNet reaches the same level of final accuracy as the model trained from random initialization. This indicates that the true size of the model governs the accuracy that the training process can attain; there is no loss in accuracy that comes from initializing the model to mimic a smaller one. Net2WiderNet can thus be safely used to reach the same accuracy more quickly, reducing the time required to run new experiments.
We conducted experiments using Net2DeeperNet to make the network deeper. For these experiments, we used a standard Inception model as the teacher network and increased the depth of each Inception module. The convolutional layers in Inception modules use rectangular kernels, arranged in pairs: a layer using a vertical kernel followed by a layer using a horizontal kernel. Each of these layers is a complete layer with a rectifying non-linearity and batch normalization; it is not just a factorization of a linear convolution operation into separable parts. Everywhere that a pair of vertical-horizontal convolution layers appears, we added two more pairs of such layers, configured to result in an identity transformation. The results are shown in Fig. 5. We find that Net2DeeperNet obtains good accuracy much faster than training from random initialization, both in terms of training and validation accuracy.
One important property of Net2Net is that it enables quick exploration of the model design space by transforming an existing state-of-the-art architecture. In this experiment, we made an ambitious exploration of the model design space in both the wider and deeper directions. Specifically, we enlarged the width of an Inception model to $\sqrt{2}$ times that of the original one. We also built another, deeper net by adding four vertical-horizontal convolutional layer pairs on top of every Inception module in the original Inception model.
The results are shown in Fig. 6. This last approach paid off, yielding a model that sets a new state of the art of 78.5% on our ImageNet validation set. We did not train these larger models from scratch, due to resource and time constraints. However, we report the convergence curve of the original Inception model for reference, which should be easier to train than these larger models. We find that the models initialized with Net2Net operations converge even faster than the standard model. This example demonstrates the advantage of the Net2Net approach, which helps us explore the design space faster and advance the results in deep learning.
Our Net2Net operators have demonstrated that it is possible to rapidly transfer knowledge from a small neural network to a significantly larger neural network under some architectural constraints. We have demonstrated that we can train larger neural networks to improve performance on ImageNet recognition using this approach. Net2Net may also now be used as a technique for exploring model families more rapidly, reducing the amount of time needed for typical machine learning workflows. We hope that future research will uncover new ways of transferring knowledge between neural networks. In particular, we hope future research will reveal more general knowledge transfer methods that can rapidly initialize a student network whose architecture is not constrained to resemble that of the teacher network.
We would like to thank Jeff Dean and George Dahl for helpful discussions. We also thank the developers of TensorFlow (Abadi et al., 2015), which we used for all of our experiments. We would like to thank Conrado Miranda for helpful feedback that we used to improve the clarity and comprehensiveness of this manuscript.
Gutstein, S., Fuentes, O., and Freudenthal, E. Knowledge transfer in deep convolutional neural nets. International Journal on Artificial Intelligence Tools, 17(03):555–567, 2008.
Jarrett, K., Kavukcuoglu, K., Ranzato, M., and LeCun, Y. What is the best multi-stage architecture for object recognition? In Proc. International Conference on Computer Vision (ICCV'09), pp. 2146–2153. IEEE, 2009.
Tieleman, T. and Hinton, G. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4, 2012.
Net2Net involves training neural networks in a different setting than usual, so it is natural to wonder whether the hyperparameters must be changed in some way. We found that, fortunately, nearly all of the hyperparameters that are typically used to train a network from scratch may be used to train a network with Net2Net. The teacher network does not need to have any of its hyperparameters changed at all. In our experience, the student network needs only one modification, which is to use a smaller learning rate than usual. We find that the initial learning rate for the student network should be noticeably smaller than the initial learning rate for the teacher network. This makes sense because we can think of the student network as continuing the training process begun by the teacher network, and typically the teacher network training process involves shrinking the learning rate to a smaller value than the initial learning rate.
Cascade-correlation (Fahlman & Lebiere, 1990) includes a strategy for learning by enlarging a pre-existing model architecture. This approach uses a very specific architecture where each new hidden unit receives input from all previous hidden units. Also, the pre-existing hidden units do not continue to learn after the latest hidden unit has been added. A related approach is to add whole new layers while leaving the lower layers fixed (Gutstein et al., 2008). Both of these approaches can be considered part of the Net2Net family of operations for rapidly training a new model given a pre-existing one. However, these variants require passing through a temporary period of low performance after a new component has been added to the network but not yet adapted. Our function-preserving initializations avoid that period of low performance. (In principle we could avoid this period entirely, but in practice we usually add some noise to the student model in order to break symmetry more rapidly; this results in a brief period of reduced performance, but it is shorter and less severe than in previous approaches.) Also, these approaches do not allow pre-existing portions of the model to co-adapt with new portions of the model to attain greater performance, while our method does.
Some work on knowledge transfer between neural networks is motivated by a desire to train deeper networks than would otherwise be possible, by incrementally training deeper networks with knowledge provided by smaller ones (Romero et al., 2014; Simonyan & Zisserman, 2015). These works trained convolutional networks with up to 19 weight layers. Somewhat perplexingly, we were able to train models of considerably greater depth without needing to use knowledge transfer, suggesting that deep models do not pose nearly as much difficulty as previous authors have believed. This may be due to our use of a very strong baseline: Inception (Szegedy et al., 2014) networks with batch normalization (Ioffe & Szegedy, 2015) trained using RMSProp (Tieleman & Hinton, 2012). The models we refer to as “standard size” have 25 weight layers on the shortest path from input to output, and 47 weight layers on the longest path (3 convolution layers + 11 Inception modules, each Inception module featuring a convolution layer followed by three forked paths, the shortest of which involves only one more convolution and the longest of which involves three more). Rather than using knowledge transfer to make greater depth possible, our goal is merely to accelerate the training of a new model when a pre-existing one is available.
Model compression (Buciluǎ et al., 2006; Hinton et al., 2015) is a technique for transferring knowledge from many models to a single model. It serves a different purpose than our Net2Net technique. Model compression aims to regularize the final model by causing it to learn a function that is similar to the average function learned by many different models. Net2Net aims to train the final model very rapidly by leveraging the knowledge in a pre-existing model, but makes no attempt to cause the final model to be more regularized than if it had been trained from scratch.
Our Net2DeeperNet operator involves inserting layers initialized to represent identity functions into deep networks. Networks with identity weights at initialization have been used before by, for example, Socher et al. (2013), but to our knowledge they have not previously been used to design function-preserving transformations of pre-existing neural networks.
The specific operators we propose are both based on the idea of transformations of a neural network that preserve the function it represents. This is a concept closely related to the justification for greedy layerwise pretraining of deep belief networks (Hinton et al., 2006). A deep belief network is a generative model that draws samples by first drawing a sample from a restricted Boltzmann machine, then using this sample as input to the ancestral sampling process through a directed sigmoid belief network. This can be transformed into a deep belief network with one more layer by inserting a new directed layer in between the RBM and the directed network. If the new directed layer is initialized with weights from the RBM, then this added layer is equivalent to performing one extra step of Gibbs sampling in the RBM. It thus does not change the probability distribution represented by the deep belief network. Our function-preserving transformations are similar in spirit. However, our transformations preserve the function represented by a feed-forward network. Note that DBN layer stacking does not preserve the function represented by the deterministic upward pass through DBNs that is commonly used to define a feed-forward neural network. Our function-preserving transformations are more general in the sense that they allow adding a layer with any width greater than the layer below it, while DBN growth is only known to be distribution-preserving for one specific size of new layer.