1 Introduction
As computer vision and machine learning aim to solve increasingly challenging tasks, models of greater complexity are required. This in turn requires orders of magnitude more data to take advantage of these powerful models while avoiding overfitting. While early benchmark datasets in machine learning contained thousands or tens of thousands of samples [7, 3, 10], current datasets are of the order of millions [6, 2]. This brings about new challenges as to how to train networks in a feasible amount of time. Even using parallel computing environments, training a network on ImageNet can take weeks [8]. In addition, although inference of labels using a trained network is comparatively fast, real-world applications such as producing labels for all images on the internet can represent a significant cost in terms of time and resources. Therefore, there is an important need to develop fast algorithms for training and inference.

In this work, we present a simple algorithm which accelerates training and inference using convolutional networks. The idea is based on performing convolutions as products in the Fourier domain, and reusing transformed feature maps many times. The significant operations in training convolutional networks can all be viewed as convolutions between pairs of 2D matrices, which can represent input and output feature maps, gradients of the loss with respect to feature maps, or weight kernels. Typically, convolutions are performed for all pairings between two sets of 2D matrices. By computing the Fourier transforms of the matrices in each set once, we can efficiently perform all convolutions as pairwise products.
Although it has long been known that convolutions can be computed as products in the Fourier domain, until recently the number of feature maps used in convolutional networks has been too small to make a method like ours effective. Previous work in the 1990s [1] explored the possibility of using FFTs to accelerate inference at the first layer of a trained network, where the Fourier transforms of the filters could be precomputed offline. However, this was not used during training, possibly because the number of feature maps used at the time was too small to make the overhead of computing FFTs at every iteration worthwhile. When the number of feature maps is large, as is the case for modern convolutional networks, using FFTs accelerates training and inference by a significant factor and can lead to a speedup of over an order of magnitude.
2 Theory
2.1 Backpropagation
The backpropagation algorithm
[9] is the standard method to compute the gradient when training a convolutional network. During training, each layer performs three tasks, which we now describe. First we fix some notation: for a given layer, we have a set of input feature maps x_f, indexed by f, each one being a 2D image of dimensions n × n. The output is a set of feature maps y_{f'}, indexed by f', which are also 2D images whose dimension depends on the convolutional kernel and its stride. The layer's trainable parameters consist of a set of weights w_{f'f}, each of which is a small kernel of dimensions k × k. (In this paper we assume the input images and kernels are square for simplicity of notation, but the results can be trivially extended to non-square images and kernels.) In the forward pass, each output feature map is computed as a sum of the input feature maps convolved with the corresponding trainable weight kernel:
y_{f'} = \sum_f x_f \ast w_{f'f}    (1)
During the backward pass, the gradients with respect to the inputs are computed by convolving the transposed weight kernel with the gradients with respect to the outputs:
\frac{\partial L}{\partial x_f} = \sum_{f'} \frac{\partial L}{\partial y_{f'}} \ast w_{f'f}^T    (2)
This step is necessary for computing the gradients in (3) for the previous layer. Finally, the gradients of the loss with respect to the weights are computed by convolving each input feature map with the gradients with respect to the outputs:
\frac{\partial L}{\partial w_{f'f}} = \frac{\partial L}{\partial y_{f'}} \ast x_f    (3)
Note that ∂L/∂y_{f'} is a 2D matrix with the same dimensions as the output feature map y_{f'}, and that all operations consist of convolutions between various sets of 2D matrices.
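To make the shapes and loop structure concrete, the following is a minimal NumPy/SciPy sketch of the three operations for a single training example, with hypothetical sizes. It uses the cross-correlation convention common in deep learning code, which differs from the convolution in (1) only by a flip of the kernel; the point is that each operation is a dense set of 2D filtering operations over all (f', f) pairs.

    import numpy as np
    from scipy.signal import convolve2d, correlate2d

    # Shapes: x[f] is n-by-n, w[fp, f] is k-by-k, y[fp] and dL_dy[fp] are
    # (n-k+1)-by-(n-k+1).
    F, Fp, n, k = 3, 4, 8, 3
    rng = np.random.default_rng(0)
    x = rng.standard_normal((F, n, n))                        # input feature maps
    w = rng.standard_normal((Fp, F, k, k))                    # weight kernels
    dL_dy = rng.standard_normal((Fp, n - k + 1, n - k + 1))   # output gradients

    # (1) forward pass: each output map is a sum over filtered input maps.
    y = np.zeros((Fp, n - k + 1, n - k + 1))
    for fp in range(Fp):
        for f in range(F):
            y[fp] += correlate2d(x[f], w[fp, f], mode="valid")

    # (2) gradient w.r.t. the inputs: "full" convolution of the output
    # gradients with the kernels, accumulated over output maps.
    dL_dx = np.zeros((F, n, n))
    for f in range(F):
        for fp in range(Fp):
            dL_dx[f] += convolve2d(dL_dy[fp], w[fp, f], mode="full")

    # (3) gradient w.r.t. the weights: each input map correlated with the
    # corresponding output gradient map.
    dL_dw = np.zeros_like(w)
    for fp in range(Fp):
        for f in range(F):
            dL_dw[fp, f] = correlate2d(x[f], dL_dy[fp], mode="valid")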
2.2 Algorithm
The well-known Convolution Theorem states that circular convolutions in the spatial domain are equivalent to pointwise products in the Fourier domain. Letting \mathcal{F} denote the Fourier transform and \mathcal{F}^{-1} its inverse, we can compute convolutions between two functions f and g as follows:

f \ast g = \mathcal{F}^{-1}(\mathcal{F}(f) \cdot \mathcal{F}(g))
Typically, this method is used when the size of the convolution kernel is close to that of the input image. Note that a convolution of an image of size n × n with a kernel of size k × k using the direct method requires (n - k + 1)^2 · k^2 operations. The FFT-based method requires 6Cn^2 log n + 4n^2 operations: each of the three FFTs requires 2Cn^2 log n, and the pointwise product in the frequency domain requires 4n^2 operations (note that the products are between two complex numbers). Here C represents the hidden constant in the O notation. (Since the FFT-based method actually computes a circular convolution, the output is cropped to discard coefficients for which the kernel is not completely contained within the input image. This yields an output of the same size as the direct method, and does not require additional computation.)
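As a small self-contained check of the theorem and of the cropping just described, the following NumPy sketch computes one valid convolution through the Fourier domain and compares it to a direct spatial convolution; the real-to-complex transform (rfft2) already exploits the symmetry of FFTs of real inputs discussed in Section 2.3.

    import numpy as np
    from scipy.signal import convolve2d

    # The FFT product yields a circular convolution, so we zero-pad the
    # kernel to n-by-n and keep only the (n-k+1)-by-(n-k+1) block where
    # the kernel is fully contained in the image.
    n, k = 16, 5
    rng = np.random.default_rng(0)
    x = rng.standard_normal((n, n))
    w = rng.standard_normal((k, k))

    X = np.fft.rfft2(x)                       # transform of the image
    W = np.fft.rfft2(w, s=(n, n))             # transform of the padded kernel
    circ = np.fft.irfft2(X * W, s=(n, n))     # circular convolution, n-by-n
    valid = circ[k - 1:, k - 1:]              # crop to the valid region

    assert np.allclose(valid, convolve2d(x, w, mode="valid"))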
Our algorithm is based on the observation that in all of the operations (1), (2) and (3), each of the matrices indexed by f is convolved with each of the matrices indexed by f'. We can therefore compute the FFT of each matrix once, and all pairwise convolutions can be performed as products in the frequency domain. Even though the FFT-based method may be less efficient for any given convolution, we effectively reuse each FFT many times, which more than compensates for the overhead.
The following analysis makes this idea precise. Assume we have f input feature maps, f' output feature maps, images consisting of n × n pixels and kernels of k × k pixels. Also assume we are performing updates over minibatches of size S, and that C represents the hidden constant in the FFT complexity. As an example, computing (1) with the direct approach takes a total of S · f' · f · (n - k + 1)^2 · k^2 operations. Our approach requires 2Cn^2 log n · (S · f + f' · f) operations to transform the input feature maps and kernels to the Fourier domain, a total of 4 · S · f' · f · n^2 operations for the pointwise products and additions in the Fourier domain, and 2Cn^2 log n · S · f' operations to transform the output feature maps back to the spatial domain. The same analysis yields similar complexity estimates for the other operations:
Operation | Direct Convolution | Our Method
(1) | S · f' · f · n'^2 · k^2 | 2Cn^2 log n · (S · f + S · f' + f' · f) + 4 · S · f' · f · n^2
(2) | S · f' · f · n'^2 · k^2 | 2Cn^2 log n · (S · f + S · f' + f' · f) + 4 · S · f' · f · n^2
(3) | S · f' · f · n'^2 · k^2 | 2Cn^2 log n · (S · f + S · f' + f' · f) + 4 · S · f' · f · n^2
Here n' = n - k + 1 represents the width of the output feature map. Note that the high complexity of the direct method comes from the product of five terms, whereas our method has a sum of products with at most four terms. Figure 2 shows the theoretical number of operations for direct convolution and for our FFT-based method for various input sizes.
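The following is a minimal NumPy sketch of this reuse for the forward pass (1), with hypothetical sizes: every input map and every kernel is transformed once, all S · f · f' pairwise products and the sum over input maps are performed in the frequency domain, and only the S · f' output maps are transformed back. A real implementation would batch these transforms on the GPU, as described in Section 2.3.

    import numpy as np
    from scipy.signal import convolve2d

    S, f_in, f_out, n, k = 4, 3, 5, 16, 5           # hypothetical layer sizes
    rng = np.random.default_rng(0)
    x = rng.standard_normal((S, f_in, n, n))        # input feature maps
    w = rng.standard_normal((f_out, f_in, k, k))    # kernels

    X = np.fft.rfft2(x)                             # S*f forward FFTs, computed once
    W = np.fft.rfft2(w, s=(n, n))                   # f'*f forward FFTs, computed once

    # All pairwise products and the sum over input maps, in the Fourier domain:
    # (S, f, n, .) x (f', f, n, .) -> (S, f', n, .)
    Y = np.einsum('sfij,ofij->soij', X, W)

    y = np.fft.irfft2(Y, s=(n, n))[..., k - 1:, k - 1:]   # S*f' inverse FFTs + crop

    # Check one (sample, output map) pair against direct spatial convolution.
    ref = sum(convolve2d(x[0, g], w[0, g], mode="valid") for g in range(f_in))
    assert np.allclose(y[0, 0], ref)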
2.3 Implementation and Memory Considerations
Although conceptually straightforward, a number of challenges relating to the GPU implementation needed to be addressed. First, current GPU implementations of the FFT such as cuFFT are designed to parallelize over individual transforms. This can be useful for computing a limited number of transforms on large inputs, but is not suitable for our task, since we are performing many FFTs over relatively small inputs. Therefore, we developed a custom CUDA implementation of the Cooley-Tukey FFT algorithm [5] which enabled us to parallelize over feature maps, minibatches and within each 2D transform. Note that 2D FFTs lend themselves naturally to parallelization, since they can be decomposed into two sets of 1D FFTs (one over rows and the other over columns), and each set can be done in parallel.
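This decomposition can be stated in a few lines; the NumPy check below is only an illustration of the property (the implementation described above is a custom CUDA kernel, not NumPy):

    import numpy as np

    a = np.random.default_rng(0).standard_normal((8, 8))
    rows = np.fft.fft(a, axis=1)               # independent 1D FFTs over the rows
    full = np.fft.fft(rows, axis=0)            # independent 1D FFTs over the columns
    assert np.allclose(full, np.fft.fft2(a))   # equals the full 2D transform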
Second, additional memory is required to store the feature maps in the Fourier domain. Note that by keeping the Fourier representations in memory for all layers after the forward pass, we could avoid recomputing several of the FFTs during the backward pass. However, this might become prohibitively expensive in terms of memory for large networks. Therefore we reuse the same memory for all the different convolutions in the network, so that the necessary amount of memory is determined only by the largest convolution layer. All of the analysis in the previous section and all experiments in the remainder of the paper assume we are using this memory-efficient approach.
For a convolution layer taking an input of size n × n, with f input features, f' output features and a minibatch of size S, we need to store a total of S · f + S · f' + f' · f frequency representations of size n × n. As another means to save memory, we can use symmetry properties of FFTs of real inputs to store only half the data, i.e. n · (n/2 + 1) complex numbers per representation. Assuming float representations (8 bytes per complex number), the necessary memory in bytes is:

8 · n · (n/2 + 1) · (S · f + S · f' + f' · f)
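As a rough sanity check of this estimate, a short computation for the first configuration in the table below, under the storage count stated above:

    S, n, f_in, f_out = 128, 16, 96, 256             # first row of the table below
    reps = S * f_in + S * f_out + f_in * f_out       # frequency representations stored
    bytes_needed = reps * n * (n // 2 + 1) * 8       # 8 bytes per complex float
    print(bytes_needed / 2**20)                      # roughly 76 MB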
The following table shows the amount of RAM used for typical sizes of convolutions:
S   | n  | f   | f'  | RAM used
128 | 16 | 96  | 256 | 76 MB
128 | 32 | 96  | 256 | 294 MB
64  | 64 | 96  | 256 | 784 MB
128 | 64 | 96  | 256 | 1159 MB
128 | 16 | 256 | 384 | 151 MB
128 | 32 | 256 | 384 | 588 MB
128 | 16 | 384 | 384 | 214 MB
128 | 32 | 384 | 384 | 830 MB
Note that this is a relatively small additional memory requirement compared to the total amount of memory used by large networks.
3 Experiments
To test our analysis, we ran a series of experiments comparing our method to the CudaConv GPU implementation of [8] and a custom implementation using the Torch 7 machine learning environment [4]. Both of these implementations compute convolutions using the direct method in the spatial domain. All experiments were performed on the same GeForce GTX Titan GPU.

We began by performing unit tests comparing the results of convolutions computed by our method to those computed by the Torch implementation for each of the three operations. We found the differences in results for operations (1), (2) and (3) to be small; they are likely due to rounding errors in floating-point operations and are within an acceptable range.

We then compared how each method performed in terms of speed with varying kernel sizes, input sizes and minibatch sizes. The results are shown in Figure 3. For all experiments, we chose 96 input feature maps and 256 output feature maps, which represents a typical configuration of a deep network's second layer. The functions updateOutput, updateGradInput and accGradParameters correspond to the operations in (1), (2) and (3) respectively. All times are measured in seconds.
We see that our method significantly outperforms the other two in nearly all cases. The improvement is especially pronounced for the accGradParameters operation, which is the most computationally expensive. This is likely due to the fact that the convolution we are computing has a large kernel, for which FFTs are better suited in any case. Also note that our method performs the same regardless of kernel size, since we pad the kernel to the size of the input image before applying the FFT. This enables the use of much larger kernels, which we intend to explore in future work.
We next ran experiments with parameter configurations typical of those used in different layers of a large convolutional network. The times taken by the different methods are given in milliseconds. The top row is a 4-tuple indicating the width of the kernel, the width of the input image, the number of input feature maps and the number of output feature maps. All kernels and input images are square, of size k × k and n × n respectively. All configurations have minibatches of size 128. The first configuration represents the first layer, which is why we do not report times for the updateGradInput operation. For each configuration, the best-performing method is highlighted in bold.
We see that our FFTbased method performs faster in total for all configurations, sometimes to a substantial degree. The improvement is very significant on the forward pass, which makes the method especially well suited for inference on very large datasets using a trained network.
Finally, we measured the time taken to perform a training iteration for a network obtained by composing the above layers, inserting max-pooling and rectified linear units between them, and adding a fully connected layer with 1000 outputs for prediction. This was to account for possible changes in performance due to implementation details such as padding, memory accesses and so on. The following table shows the results in milliseconds:
Our FFT-based method still significantly outperforms the other two implementations.
4 Discussion and Future Work
We have presented a simple and fast algorithm for training and inference using convolutional networks. It outperforms known state-of-the-art implementations in terms of speed, as verified by numerical experiments. In the future we plan to explore the possibility of learning kernels directly in the Fourier domain. Another interesting direction would be to investigate the use of nonlinearities in the Fourier domain rather than in the spatial domain, since this would remove the need for inverse transforms and accelerate training and inference further.
It is worth mentioning that in our current implementation of the FFT algorithm, input images whose width is not a power of 2 must be padded up to the next power of 2. For example, an input whose width lies just above a power of 2 is suboptimal in terms of speed, since it must be padded to nearly twice its width before the transform. This limitation is not intrinsic to the FFT, and we intend to extend our implementation to accept other sizes in the future. On the other hand, the fact that our method's speed is invariant to kernel size enables us to use larger kernels at different layers of the network. In future work we intend to thoroughly explore the effect of input image and kernel sizes on performance.
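A small helper of the kind sketched below (a hypothetical illustration, not part of the released implementation) gives the padded transform size used by such a scheme:

    def fft_size(width: int) -> int:
        """Smallest power of two that is greater than or equal to width."""
        return 1 << (width - 1).bit_length()

    print(fft_size(54))   # a hypothetical 54x54 input would be padded to 64x64
    print(fft_size(64))   # an input that is already a power of 2 needs no padding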
References

[1] S. Ben-Yacoub, B. Fasel, and J. Luttin. Fast face detection using MLP and FFT. In Proceedings of the Second International Conference on Audio- and Video-based Biometric Person Authentication (AVBPA 1999), 1999.
[2] Thierry Bertin-Mahieux, Daniel P. W. Ellis, Brian Whitman, and Paul Lamere. The million song dataset. In Proceedings of the 12th International Conference on Music Information Retrieval (ISMIR 2011), 2011.
[3] A. Bosch, A. Zisserman, and X. Munoz. Representing shape with a spatial pyramid kernel. In Proceedings of the ACM International Conference on Image and Video Retrieval, 2007.
[4] Ronan Collobert, Koray Kavukcuoglu, and Clement Farabet. Torch7: A Matlab-like environment for machine learning. In NIPS, 2011.
[5] James Cooley and John Tukey. An algorithm for the machine calculation of complex Fourier series. Mathematics of Computation, 19:297-301, 1965.
[6] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
[7] L. Fei-Fei, R. Fergus, and P. Perona. Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories. 2004.
[8] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, pages 1106-1114, 2012.
[9] Y. LeCun, L. Bottou, G. Orr, and K. Muller. Efficient BackProp. In G. Orr and K. Muller, editors, Neural Networks: Tricks of the Trade. Springer, 1998.
[10] G. Tzanetakis and P. Cook. Musical genre classification of audio signals. IEEE Transactions on Speech and Audio Processing, 10(5):293-302, July 2002.