One weird trick for parallelizing convolutional neural networks
I present a new way to parallelize the training of convolutional neural networks across multiple GPUs. The method scales significantly better than all alternatives when applied to modern convolutional neural networks.
This is meant to be a short note introducing a new way to parallelize the training of convolutional neural networks with stochastic gradient descent (SGD). I present two variants of the algorithm. The first variant perfectly simulates the synchronous execution of SGD on one core, while the second introduces an approximation such that it no longer perfectly simulates SGD, but nonetheless works better in practice.
Convolutional neural networks are big models trained on big datasets. So there are two obvious ways to parallelize their training:
across the model dimension, where different workers train different parts of the model, and
across the data dimension, where different workers train on different data examples.
These are called model parallelism and data parallelism, respectively.
In model parallelism, whenever the model part (subset of neuron activities) trained by one worker requires output from a model part trained by another worker, the two workers must synchronize. In contrast, in data parallelism the workers must synchronize model parameters (or parameter gradients) to ensure that they are training a consistent model.
In general, we should exploit all dimensions of parallelism. Neither scheme is better than the other a priori. But the relative degrees to which we exploit each scheme should be informed by model architecture. In particular, model parallelism is efficient when the amount of computation per neuron activity is high (because the neuron activity is the unit being communicated), while data parallelism is efficient when the amount of computation per weight is high (because the weight is the unit being communicated).
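As a rough illustration of this trade-off, the sketch below counts how much computation stands behind each communicated value under the two schemes; the kernel shape and batch size are assumptions chosen for illustration, not figures from this note.

```python
# Rough communication-vs-computation heuristic for the two parallelism
# schemes. Layer shapes and batch size below are illustrative assumptions.

def conv_flops_per_activity(kernel_h, kernel_w, in_channels):
    """FLOPs behind one convolutional output value (the unit communicated
    under model parallelism)."""
    return 2 * kernel_h * kernel_w * in_channels  # one multiply-add = 2 FLOPs

def fc_flops_per_weight(batch_size):
    """FLOPs performed per weight while processing one batch (the weight is
    the unit communicated under data parallelism)."""
    return 2 * batch_size  # each weight does one multiply-add per example

# Model parallelism: a conv activity backed by a 3x3 kernel over 256 input
# maps carries ~4.6k FLOPs of work per communicated value.
print(conv_flops_per_activity(3, 3, 256))  # 4608

# Data parallelism: with a 128-example batch, each communicated weight
# carries only 256 FLOPs, though this grows linearly with batch size.
print(fc_flops_per_weight(128))            # 256
```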
Another factor affecting all of this is batch size. We can make data parallelism arbitrarily efficient if we are willing to increase the batch size (because the weight synchronization step is performed once per batch). But very big batch sizes adversely affect the rate at which SGD converges as well as the quality of the final solution. So here I target batch sizes in the hundreds or possibly thousands of examples.
Modern convolutional neural nets consist of two types of layers with rather different properties:
Convolutional layers cumulatively contain about 90-95% of the computation, about 5% of the parameters, and have large representations.
Fully-connected layers contain about 5-10% of the computation, about 95% of the parameters, and have small representations.
Knowing this, it is natural to ask whether we should parallelize these two in different ways. In particular, data parallelism appears attractive for convolutional layers, while model parallelism appears attractive for fully-connected layers.
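A back-of-the-envelope tally makes these proportions concrete. The filter counts below match those quoted later in this note for the single-column model; the spatial sizes and fully-connected widths are standard AlexNet-style assumptions, used here only for illustration.

```python
# Back-of-the-envelope parameter/FLOP tally for an AlexNet-like net.
# Spatial sizes and fully-connected widths are illustrative assumptions.

conv_layers = [
    # (out_maps, in_maps, kernel, output_spatial)
    (64,  3,   11, 55),
    (192, 64,  5,  27),
    (384, 192, 3,  13),
    (384, 384, 3,  13),
    (256, 384, 3,  13),
]
fc_layers = [(256 * 6 * 6, 4096), (4096, 4096), (4096, 1000)]

conv_params = sum(o * i * k * k for o, i, k, _ in conv_layers)
conv_flops  = sum(2 * o * i * k * k * s * s for o, i, k, s in conv_layers)
fc_params   = sum(i * o for i, o in fc_layers)
fc_flops    = sum(2 * i * o for i, o in fc_layers)

total_params = conv_params + fc_params
total_flops  = conv_flops + fc_flops
print(f"conv: {conv_params / total_params:5.1%} of params, {conv_flops / total_flops:5.1%} of FLOPs")
print(f"fc:   {fc_params / total_params:5.1%} of params, {fc_flops / total_flops:5.1%} of FLOPs")
# conv:  5.2% of params, 93.0% of FLOPs
# fc:   94.8% of params,  7.0% of FLOPs
```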
This is precisely what I’m proposing. In the remainder of this note I will explain the scheme in more detail and also mention several nice properties.
I propose that to parallelize the training of convolutional nets, we rely heavily on data parallelism in the convolutional layers and on model parallelism in the fully-connected layers. This is illustrated in Figure 1 for K workers.
In reference to the figure, the forward pass works like this:
Each of the K workers is given a different data batch of (let’s say) 128 examples.
Each of the workers computes all of the convolutional layer activities on its batch.
To compute the fully-connected layer activities, the workers switch to model parallelism. There are several ways to accomplish this:
(a) Each worker sends its last-stage convolutional layer activities to each other worker. The workers then assemble a big batch of activities for 128K examples and compute the fully-connected activities on this batch as usual.
(b) One of the workers sends its last-stage convolutional layer activities to all other workers. The workers then compute the fully-connected activities on this batch of 128 examples and then begin to backpropagate the gradients (more on this below) for these 128 examples. In parallel with this computation, the next worker sends its last-stage convolutional layer activities to all other workers. Then the workers compute the fully-connected activities on this second batch of 128 examples, and so on. (A toy simulation of this pipeline follows the list.)
(c) All of the workers send 128/K of their last-stage convolutional layer activities to all other workers. The workers then proceed as in (b).
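Here is the toy single-process simulation of scheme (b)'s forward pass promised above. The worker count, batch size, and layer widths are illustrative assumptions; on real hardware each worker is a separate GPU, and each broadcast overlaps with the previous step's fully-connected computation.

```python
import numpy as np

# Toy single-process simulation of scheme (b): K workers each hold conv
# activities for their own 128-example batch; they take turns "broadcasting"
# that batch, and every worker computes its model-parallel slice of the
# fully-connected layer on it. All shapes here are illustrative.

K, BATCH, CONV_DIM, FC_DIM = 4, 128, 9216, 4096
rng = np.random.default_rng(0)

# Each worker's last-stage conv activities for its own batch.
conv_out = [rng.standard_normal((BATCH, CONV_DIM)).astype(np.float32) for _ in range(K)]

# Model parallelism: worker k owns a 1/K slice of the fully-connected units.
fc_w = [rng.standard_normal((CONV_DIM, FC_DIM // K)).astype(np.float32) * 0.01
        for _ in range(K)]

fc_out = {}
for step in range(K):        # worker `step` broadcasts its conv activities
    batch = conv_out[step]   # (in the real scheme this send overlaps with
                             # the previous step's fully-connected work)
    # Every worker computes its slice of the FC activities; concatenating
    # the slices recovers the full FC output for these 128 examples.
    slices = [batch @ fc_w[k] for k in range(K)]
    fc_out[step] = np.concatenate(slices, axis=1)

print(fc_out[0].shape)  # (128, 4096): full FC activities for worker 0's batch
```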
It is worth thinking about the consequences of these three schemes.
In scheme (a), all useful work has to pause while the big batch of images is assembled at each worker. Big batches also consume lots of memory, and this may be undesirable if our workers run on devices with limited memory (e.g. GPUs). On the other hand, GPUs are typically able to operate on big batches more efficiently.
In scheme (b), the workers essentially take turns broadcasting their last-stage convolutional layer activities. The main consequence of this is that much (i.e. a fraction (K-1)/K) of the communication can be hidden – it can be done in parallel with the computation of the fully-connected layers. This seems fantastic, because this is by far the most significant communication in the network.
Scheme (c) is very similar to scheme (b). Its one advantage is that the communication-to-computation ratio is constant in K. In schemes (a) and (b), it is proportional to K. This is because schemes (a) and (b) are always bottlenecked by the outbound bandwidth of the worker that has to send data at a given “step”, while scheme (c) is able to utilize many workers for this task. This is a major advantage for large K.
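The sketch below tallies the outbound traffic of the bottleneck worker at one "step" under each scheme, using illustrative sizes; it shows the bottleneck traffic of schemes (a) and (b) growing with K while scheme (c)'s stays roughly constant.

```python
# Per-"step" outbound traffic at the bottleneck worker for the three schemes,
# counting only last-stage conv activities. All sizes are illustrative.

def bottleneck_bytes(scheme, K, batch=128, dim=9216, bytes_per_float=4):
    per_example = dim * bytes_per_float
    if scheme in ("a", "b"):  # one worker's whole batch goes to K-1 peers
        return batch * per_example * (K - 1)
    if scheme == "c":         # every worker sends batch/K examples to K-1 peers
        return (batch // K) * per_example * (K - 1)

for K in (2, 4, 8):
    ab = bottleneck_bytes("a", K)
    c = bottleneck_bytes("c", K)
    print(f"K={K}: schemes (a)/(b) bottleneck sends {ab/1e6:5.1f} MB, "
          f"scheme (c) sends {c/1e6:4.1f} MB")
```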
The backward pass is quite similar:
The workers compute the gradients in the fully-connected layers in the usual way.
The next step depends on which of the three schemes was chosen in the forward pass:
In scheme (a), each worker has computed the last-stage convolutional layer activity gradients for the entire batch of 128K examples. So each worker must send the gradient for each example to the worker which generated that example in the forward pass. Then the backward pass continues through the convolutional layers in the usual way.
In scheme (b), each worker has computed the last-stage convolutional layer activity gradients for one batch of 128 examples. Each worker then sends these gradients to the worker which is responsible for this batch of 128 examples. In parallel with this, the workers compute the fully-connected forward pass on the next batch of 128 examples. After K such forward-and-backward iterations through the fully-connected layers, the workers propagate the gradients all the way through the convolutional layers.
Scheme (c) is very similar to scheme (b). Each worker has computed the last-stage convolutional layer activity gradients for 128 examples. This 128-example batch was assembled from 128/K examples contributed by each worker, so to distribute the gradients correctly we must reverse this operation. The rest proceeds as in scheme (b).
I note again that, as in the forward pass, scheme (c) is the most efficient of the three, for the same reasons.
The forward and backward propagations for scheme (b) are illustrated in Figure 2 for the case of two workers.
Once the backward pass is complete, the workers can update the weights. In the convolutional layers, the workers must also synchronize the weights (or weight gradients) with one another. The simplest way that I can think of doing this is the following:
Each worker is designated 1/K-th of the gradient matrix to synchronize.
Each worker accumulates the corresponding 1/K-th of the gradient from every other worker.
Each worker broadcasts this accumulated 1/K-th of the gradient to every other worker.
It’s pretty hard to implement this step badly because there are so few convolutional weights.
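A minimal single-process numpy sketch of this synchronization step follows; the gradient size and worker count are illustrative assumptions, and on real hardware the K slice-reductions run in parallel, one per GPU.

```python
import numpy as np

# Toy simulation of the convolutional weight-gradient synchronization:
# each worker owns 1/K of the flattened gradient, accumulates that slice
# from all peers, then broadcasts the reduced slice back. This is the
# familiar reduce-scatter + all-gather decomposition of an all-reduce.
# Sizes are illustrative; N is assumed divisible by K.

K, N = 4, 3_200_000
rng = np.random.default_rng(0)
grads = [rng.standard_normal(N).astype(np.float32) for _ in range(K)]  # one per worker

chunk = N // K
reduced = [None] * K
for k in range(K):                      # step 1: worker k reduces its slice
    sl = slice(k * chunk, (k + 1) * chunk)
    reduced[k] = sum(g[sl] for g in grads)

synced = np.concatenate(reduced)        # step 2: slices broadcast to everyone
assert np.allclose(synced, sum(grads), atol=1e-3)   # matches the full sum
```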
So what we have here in schemes (b) and (c) is a slight modification to the standard forward-backward propagation which is, nonetheless, completely equivalent to running synchronous SGD with a batch size of 128K. Notice also that schemes (b) and (c) perform K forward and backward passes through the fully-connected layers, each time with a different batch of 128 examples. This means that we can, if we wish, update the fully-connected weights after each of these partial backward passes, at virtually no extra computational cost. We can think of this as using a batch size of 128 in the fully-connected layers and 128K in the convolutional layers. With this kind of variable batch size, the algorithm ceases to be a pure parallelization of SGD, since it no longer computes a gradient update for any consistent model in the convolutional layers. But it turns out that this doesn’t matter much in practice. As we take the effective batch size, 128K, into the thousands, using a smaller batch size in the fully-connected layers leads to faster convergence to better minima.
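The following runnable toy illustrates the variable batch size trick, with the convolutional and fully-connected stages each reduced to a single linear map and a squared-error loss; all shapes and the learning rate are illustrative assumptions.

```python
import numpy as np

# Toy of the variable batch size trick: the "fc" weights update after every
# 128-example partial pass, while the "conv" weights accumulate gradient and
# update once per 128*K macro batch. Both stages are plain linear maps here.

rng = np.random.default_rng(0)
K, B, D = 4, 128, 64
W_conv = rng.standard_normal((D, D)).astype(np.float32) * 0.1
W_fc   = rng.standard_normal((D, 1)).astype(np.float32) * 0.1
lr = 0.01

x = rng.standard_normal((K, B, D)).astype(np.float32)   # K workers' batches
y = rng.standard_normal((K, B, 1)).astype(np.float32)

conv_grad = np.zeros_like(W_conv)
for k in range(K):                 # one partial pass per worker's 128-batch
    h = x[k] @ W_conv              # "conv" forward (data-parallel in reality)
    pred = h @ W_fc                # model-parallel "fc" forward
    err = (pred - y[k]) / B        # dL/dpred for mean squared error
    g_fc = h.T @ err               # fc gradient for this 128-batch
    dh = err @ W_fc.T              # gradient flowing back toward "conv"
    W_fc -= lr * g_fc              # fc update: effective batch size 128
    conv_grad += x[k].T @ dh       # conv gradient accumulates across batches
W_conv -= lr * conv_grad / K       # conv update: effective batch size 128*K
print(float(np.square(x[0] @ W_conv @ W_fc - y[0]).mean()))
```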
The first question that I investigate is the accuracy cost of larger batch sizes. This is a somewhat complicated question because the answer is dataset-dependent. Small, relatively homogeneous datasets benefit from smaller batch sizes more so than large, heterogeneous, noisy datasets. Here, I report experiments on the widely-used ImageNet 2012 contest dataset (ILSVRC 2012) (Deng et al., 2009). At 1.2 million images in 1000 categories, it falls somewhere in between the two extremes. It isn’t tiny, but it isn’t “internet-scale” either. With current GPUs (and CPUs) we can afford to iterate through it many times when training a model.
The model that I consider is a minor variation on the winning model from the ILSVRC 2012 contest (Krizhevsky et al., 2012). The main difference is that it consists of one “tower” instead of two. This model has 0.2% more parameters and 2.4% fewer connections than the two-tower model. It has the same number of layers as the two-tower model, and the map dimensions in each layer are equivalent to the map dimensions in the two-tower model. The minor difference in parameters and connections arises from a necessary adjustment in the number of kernels in the convolutional layers, due to the unrestricted layer-to-layer connectivity in the single-tower model. (In detail, the single-column model has 64, 192, 384, 384, 256 filters in the five convolutional layers, respectively.)
Another difference is that instead of a softmax final layer with multinomial logistic regression cost, this model’s final layer has 1000 independent logistic units, trained to minimize cross-entropy. This cost function performs equivalently to multinomial logistic regression but it is easier to parallelize, because it does not require a normalization across classes. (This is not an important point with only 1000 classes. But with tens of thousands of classes, the cost of normalization becomes noticeable.)
I trained all models for exactly 90 epochs, and multiplied the learning rate by $250^{-1/3}$ at 25%, 50%, and 75% training progress.
The weight update rule that I used was

$$v_{i+1} = m \cdot v_i - \lambda \varepsilon \cdot w_i - \varepsilon \cdot \left\langle \frac{\partial E}{\partial w}\bigg|_{w_i} \right\rangle_{D_i}, \qquad w_{i+1} = w_i + v_{i+1},$$

where $m$ is the coefficient of momentum, $\lambda$ is the coefficient of weight decay, $\varepsilon$ is the learning rate, and $\left\langle \frac{\partial E}{\partial w}\big|_{w_i} \right\rangle_{D_i}$ denotes the expectation of the weight gradient for batch $D_i$, evaluated at $w_i$.
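In code, one step of this rule reads as follows (a direct transcription of the update above, with the hyperparameter values used elsewhere in this section as defaults):

```python
import numpy as np

def sgd_momentum_step(w, v, grad, m=0.9, eps=0.01, lam=0.0005):
    """v <- m*v - lam*eps*w - eps*grad;  w <- w + v.
    `grad` stands for the batch gradient expectation <dE/dw> at w."""
    v = m * v - lam * eps * w - eps * grad
    return w + v, v

w = np.zeros(10, dtype=np.float32)
v = np.zeros_like(w)
w, v = sgd_momentum_step(w, v, grad=np.ones_like(w))
print(w[:3])   # first step from zero: w = -eps * grad = [-0.01 -0.01 -0.01]
```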
When experimenting with different batch sizes, one must decide how to adjust the hyperparameters $\varepsilon$ and $\lambda$. It seems plausible that the smoothing effects of momentum may be less necessary with bigger batch sizes, but in my experiments I used $m = 0.9$ for all batch sizes. Theory suggests that when multiplying the batch size by $k$, one should multiply the learning rate by $\sqrt{k}$ to keep the variance in the gradient expectation constant. How should we adjust the weight decay? Given old batch size $B_{old}$ and new batch size $B_{new} = k \cdot B_{old}$, we’d like to keep the total weight decay penalty constant. Note that with batch size $B_{old}$, we apply the weight decay penalty $k$ times more frequently than we do with batch size $B_{new}$. So we’d like $k$ applications of the weight decay penalty under batch size $B_{old}$ to have the same effect as one application of the weight decay penalty under batch size $B_{new}$. Assuming $m = 0$ for now and ignoring the gradient terms, $k$ applications of the weight decay penalty under batch size $B_{old}$, learning rate $\varepsilon_{old}$, and weight decay coefficient $\lambda_{old}$ give

$$w_{i+k} = \left(1 - \varepsilon_{old}\lambda_{old}\right)^k w_i,$$

while one application of weight decay under batch size $B_{new}$, learning rate $\varepsilon_{new}$ and weight decay coefficient $\lambda_{new}$ gives

$$w_{i+1} = \left(1 - \varepsilon_{new}\lambda_{new}\right) w_i,$$

so we want to pick $\lambda_{new}$ such that

$$1 - \varepsilon_{new}\lambda_{new} = \left(1 - \varepsilon_{old}\lambda_{old}\right)^k.$$
So, for example, if we trained a net with batch size $B_{old} = 128$, $\varepsilon_{old} = 0.01$, and $\lambda_{old} = 0.0005$, the theory suggests that for batch size $B_{new} = 1024$ we should use $\varepsilon_{new} = \sqrt{8} \cdot 0.01 \approx 0.0283$ and $\lambda_{new} \approx 0.00141$. Note that, as $\varepsilon\lambda \to 0$, $\lambda_{new} \to k\lambda_{old}\varepsilon_{old}/\varepsilon_{new}$, an easy approximation which works for the typical $\lambda$s used in neural nets. In our case, the approximation yields $\lambda_{new} \approx 0.001414$. The acceleration obtained due to momentum is no greater than that obtained by multiplying the learning rate by $1/(1-m) = 10$, so the approximation remains very accurate.
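This adjustment is easy to check numerically; the sketch below uses the example values just given.

```python
# Numeric check of the weight decay adjustment: pick lam_new so that one
# decay application at the new batch size matches k applications at the old.

eps_old, lam_old = 0.01, 0.0005
k = 8                                  # batch size 128 -> 1024
eps_new = (k ** 0.5) * eps_old         # sqrt(k) learning-rate scaling (theory)

# Exact: 1 - eps_new*lam_new = (1 - eps_old*lam_old)^k
lam_new_exact = (1 - (1 - eps_old * lam_old) ** k) / eps_new
# Approximation for eps*lam -> 0: lam_new ~ k*eps_old*lam_old/eps_new
lam_new_approx = k * eps_old * lam_old / eps_new

print(eps_new)          # ~0.0283
print(lam_new_exact)    # ~0.001414
print(lam_new_approx)   # ~0.001414
```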
Theory aside, for the batch sizes considered in this note, the heuristic that I found to work the best was to multiply the learning rate by $k$ when multiplying the batch size by $k$. I can’t explain this discrepancy between theory and practice. (This heuristic does eventually break down for batch sizes larger than the ones considered in this note.) Since I multiplied the learning rate by $k$ instead of $\sqrt{k}$, and the total weight decay coefficient is $\varepsilon\lambda$, I used $\lambda = 0.0005$ for all experiments.
As in (Krizhevsky et al., 2012), I trained on random 224×224 patches extracted from the 256×256 images, as well as their horizontal reflections. I computed the validation error from the center 224×224 patch.
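A minimal numpy sketch of this augmentation, with the 224×224-from-256×256 patch geometry of (Krizhevsky et al., 2012):

```python
import numpy as np

def random_patch(img, rng, size=224):
    """Random size x size patch with a random horizontal reflection."""
    h, w, _ = img.shape
    top = rng.integers(0, h - size + 1)
    left = rng.integers(0, w - size + 1)
    patch = img[top:top + size, left:left + size]
    if rng.random() < 0.5:
        patch = patch[:, ::-1]          # horizontal reflection
    return patch

def center_patch(img, size=224):
    """Center patch, as used for computing the validation error."""
    h, w, _ = img.shape
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (256, 256, 3), dtype=np.uint8)  # stand-in image
print(random_patch(img, rng).shape, center_patch(img).shape)
```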
The machine on which I performed the experiments has eight NVIDIA K20 GPUs and two Intel 12-core CPUs. Each CPU provides two PCI-Express 2.0 lanes for four GPUs. GPUs which have the same CPU “parent” can communicate amongst themselves simultaneously at the full PCI-Express 2.0 rate (about 6GB/sec) through a PCI-Express switch. Communication outside this set must happen through the host memory and incurs a latency penalty, as well as a throughput penalty of 50% if all GPUs wish to communicate simultaneously.
Table 1 summarizes the error rates and training times of this model using scheme (b) of Section 4. The main take-away is that there is an accuracy cost associated with bigger batch sizes, but it can be greatly reduced by using the variable batch size trick described in Section 4.2. The parallelization scheme scales pretty well for the model considered here, but the scaling is not quite linear. Here are some reasons for this:
The network has three dense matrix multiplications near the output. Parallel dense matrix multiplication is quite inefficient for the matrix sizes used in this network (roughly 4096×4096). With 6 GB/sec PCI-Express links and 2 TFLOP GPUs, more time is spent communicating than computing these matrix products; a rough estimate is sketched after this list. (Per example, each GPU must perform roughly 2·4096²/K FLOPs, which takes roughly 16.8/K microseconds at 2 TFLOPs/sec, while it must receive 4096 floats, which takes roughly 2.7 microseconds at 6 GB/sec.) We can expect better scaling if we increase the sizes of the matrices, or replace the dense connectivity of the last two hidden layers with some kind of restricted connectivity.
The one-to-all broadcast/reduction of scheme (b) is starting to show its cost. Scheme (c), or some hybrid between schemes (b) and (c), should be better.
Our 8-GPU machine does not permit simultaneous full-speed communication between all 8 GPUs, but it does permit simultaneous full-speed communication between certain subsets of 4 GPUs. This particularly hurts scaling from 4 to 8 GPUs.
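To make the first point above concrete, here is the rough per-example estimate behind the dense-matrix-multiplication bottleneck; the even split of the multiply across workers is a simplifying assumption.

```python
# Rough compute-vs-communication estimate for one 4096x4096 fully-connected
# layer under model parallelism, per example. Rates as quoted in the text.

def times_per_example(K, dim=4096, flops_rate=2e12, link_bytes_per_s=6e9):
    flops = 2 * dim * dim / K            # each worker's share of the multiply
    compute_s = flops / flops_rate
    recv_bytes = dim * 4                 # the example's input activations
    comm_s = recv_bytes / link_bytes_per_s
    return compute_s, comm_s

for K in (1, 2, 4, 8):
    c, m = times_per_example(K)
    print(f"K={K}: compute {c*1e6:5.2f} us, communicate {m*1e6:4.2f} us")
```

With these rates, each worker's share of the arithmetic drops below the fixed link time at around K = 7, which is the sense in which communication comes to dominate.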
Table 1: error rates and training times by number of GPUs (columns: GPUs, batch size, cross-entropy, top-1 error, time, speedup).
The results of Table 1 compare favorably to published alternatives. In (Yadan et al., 2013), the authors parallelize the training of the convolutional neural net from (Krizhevsky et al., 2012) using model parallelism and data parallelism, but they use the same form of parallelism in every layer. They achieved a speedup of 2.2x on 4 GPUs, relative to a 1-GPU implementation that takes 226.8 hours to train for 90 epochs on an NVIDIA GeForce Titan. In (Paine et al., 2013), the authors implement asynchronous SGD (Niu et al., 2011; Dean et al., 2012) on a GPU cluster with fast interconnects and use it to train the convolutional neural net of (Krizhevsky et al., 2012) using model parallelism and data parallelism. They achieved a speedup of 3.2x on 8 GPUs, relative to a 1-GPU implementation that takes 256.8 hours to train on an NVIDIA K20X. Furthermore, this 3.2x speedup came at a rather significant accuracy cost: their 8-GPU model achieved a final validation error rate of 45%.
In (Coates et al., 2013), the authors use a GPU cluster to train a locally-connected neural network on images. To parallelize training, they exploit the fact that their network is locally-connected but not convolutional. This allows them to distribute workers spatially across the image, such that only neuron activations near the edges of the workers’ areas of responsibility need to be communicated. This scheme could potentially work for convolutional nets as well, but the convolutional weights would also need to be synchronized amongst the workers. This is probably not a significant handicap, as there aren’t many convolutional weights. The two other disadvantages of this approach are that it requires synchronization at every convolutional layer, and that with 8 or more workers, each worker is left with a rather small area of responsibility (particularly near the upper layers of the convolutional net), which has the potential to make computation inefficient. Nonetheless, this remains an attractive dimension of parallelization for convolutional neural nets, to be exploited alongside the other dimensions.
The work of (Coates et al., 2013) extends the work of (Dean et al., 2012), which introduced this particular form of model parallelism for training a locally-connected neural network. This work also introduced the version of the asynchronous SGD algorithm employed by (Paine et al., 2013). Both of these works are in turn based on the work of (Niu et al., 2011) which introduced asynchronous SGD and demonstrated its efficacy for models with sparse gradients.
The scheme introduced in this note seems like a reasonable way to parallelize the training of convolutional neural networks. The fact that it works quite well on existing model architectures, which have not been adapted in any way to the multi-GPU setting, is promising. When we begin to consider architectures which are more suited to the multi-GPU setting, we can expect even better scaling. In particular, as we scale the algorithm past 8 GPUs, we should:
Consider architectures with some sort of restricted connectivity in the upper layers, in place of the dense connectivity in current nets. We might also consider architectures in which a fully-connected layer on one GPU communicates only a small, linear projection of its activations to other GPUs.
Switch from scheme (b) to scheme (c) of Section 4, or some hybrid between schemes (b) and (c).
Reduce the effective batch size by using some form of restricted model parallelism in the convolutional layers, as in the two-column network of (Krizhevsky et al., 2012).
We can expect some loss of accuracy when training with bigger batch sizes. The magnitude of this loss is dataset-dependent, and it is generally smaller for larger, more varied datasets.
Coates, A., Huval, B., Wang, T., Wu, D., Catanzaro, B., and Ng, A. Deep learning with COTS HPC systems. In Proceedings of The 30th International Conference on Machine Learning, pages 1337–1345, 2013.