Highly Efficient Forward and Backward Propagation of Convolutional Neural Networks for Pixelwise Classification

12/15/2014 ∙ by Hongsheng Li, et al. ∙ The Chinese University of Hong Kong

We present highly efficient algorithms for performing forward and backward propagation of Convolutional Neural Networks (CNNs) for pixelwise classification on images. In pixelwise classification tasks, such as image segmentation and object detection, surrounding image patches are fed into a CNN for predicting the classes of centered pixels via forward propagation and for updating CNN parameters via backward propagation. However, forward and backward propagation were originally designed for whole-image classification. Directly applying them to pixelwise classification in a patch-by-patch scanning manner is extremely inefficient, because the surrounding patches of pixels have large overlaps, which lead to a lot of redundant computation. The proposed algorithms eliminate all the redundant computation in convolution and pooling on images by introducing novel d-regularly sparse kernels. They generate exactly the same results as patch-by-patch scanning. Convolution and pooling operations with such kernels are able to access memory continuously and can run efficiently on GPUs. A fraction of patches of interest can be chosen from each training image for backward propagation by applying a mask to the error map at the last CNN layer. The computational complexity is constant with respect to the number of patches sampled from the image. Experiments have shown that our proposed algorithms speed up commonly used patch-by-patch scanning over 1500 times in both forward and backward propagation. The speedup increases with the sizes of images and patches.


1 Introduction

Convolutional Neural Networks (CNNs) are trainable multistage feed-forward neural networks [10]. They have been extensively investigated to extract good hierarchical feature representations for image recognition tasks. A CNN includes three types of layers: convolution layers, pooling layers, and non-linearity layers. The input and output of each layer are called feature maps.

The convolution layer convolves input feature maps with 3D filter banks to generate output feature maps. Each filter extracts the same type of local features at all locations of the input feature map. Conventionally, a convolution layer has a stride of 1. (A stride of d in a convolution or pooling layer denotes that the centers of every two neighboring patches extracted from the input feature map are exactly d pixels away from each other; it down-samples the output feature map such that its height and width are 1/d of the original values.) But in some recent CNN models, greater-than-1 strides were also used in convolution to down-sample the output feature map.

The pooling layer

decreases the resolution of the feature maps to make the output feature maps less sensitive to input shifts and distortions. Max-pooling and average-pooling are most commonly used. Conventionally, a pooling layer has a stride equal to its kernel size. But in some recent CNN models, strides different from the kernel sizes were also used.
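To make the interplay of kernel size and stride concrete, here is a minimal NumPy sketch of 2D max-pooling (a toy single-channel version; `max_pool` is our own helper, not a library function):

```python
import numpy as np

def max_pool(x, k, s):
    """Max-pool a 2D feature map with a k-by-k kernel and stride s."""
    h = (x.shape[0] - k) // s + 1
    w = (x.shape[1] - k) // s + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = x[i*s:i*s+k, j*s:j*s+k].max()
    return out

x = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool(x, 2, 2))  # conventional: stride equals kernel size -> 2x2 output
print(max_pool(x, 2, 1))  # stride 1 -> 3x3 output, overlapping windows
```

With stride equal to the kernel size, the 4x4 map is down-sampled to 2x2; with stride 1, resolution is essentially preserved.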

The non-linearity layer is a point-wise non-linear function applied to each entry of feature maps.

(a) Patch-by-patch scanning for CNN based pixelwise classification
(b) Our approach
Figure 1: Comparison of (a) patch-by-patch scanning and (b) the proposed efficient forward and backward propagation for pixelwise classification. The scene labeling task is used for illustration here.

After extracting features with a multilayer convolutional network, fully connected layers with a final classifier are added to output class predictions. Given training samples and their labels, the parameters of CNNs are learned in an end-to-end supervised way by minimizing a loss function on training data. Forward and backward propagation is used to make class predictions for input samples and to update CNN parameters based on prediction errors, respectively.

CNN together with its forward and backward propagation algorithms was originally designed for whole-image classification, i.e., predicting one label for a whole image. CNN-based OCR algorithms [10], [19], [1], [16] drew a lot of attention and were improved over the last decade. With deep CNN, Krizhevsky et al. [9] won the image classification challenge in ImageNet LSVRC 2012 and beat other computer vision algorithms by large margins.

In all the applications above, the input samples of CNNs are whole images without redundant information between them, and therefore they can be processed independently.

In recent years, CNN has also been applied to object detection [15], [6], [18], [17], [13], image segmentation [20], scene labeling [5], [14], and tracking [4], and significantly improved the accuracies in these applications. These tasks are considered as pixelwise classification, i.e., predicting a class label for every pixel, and are fundamentally different from whole-image classification problems. The input samples of CNNs are image patches surrounding pixels and have large overlaps. Studies [5], [14] have shown that inputting larger image patches to CNNs leads to better accuracies, since CNNs can capture more contextual information. In [14], the chosen patch size covers 1/3 of the whole image. However, this implies larger overlaps between patches.

Existing approaches ignored this difference and still processed image patches independently, as if they were whole images, without modifying the forward and backward propagation algorithms. They involve a lot of redundant computation on overlapping patches, and the redundancy increases with both image size and patch size. Figure 1.(a) shows the straightforward patch-by-patch scanning for both forward and backward propagation. Computational efficiency has become a major bottleneck for these CNN-based pixelwise classification tasks. As a compromise, one could sacrifice the resolution of the predicted label maps by subsampling, so that the overlaps between image patches are reduced. In object detection, some image patches can be rejected early by fast algorithms before being fed to CNNs, at the cost of recall. Even so, redundancy still exists, and many CNN-based approaches for pixelwise classification are not suitable for realtime applications.

In pixelwise classification tasks, it is easy to collect thousands of training image patches from a single image, since every pixel has a label. From a large image set, the number of available training samples could reach one billion. Existing approaches treated these training patches independently. Due to the efficiency problem, it is impossible to make use of all the available training samples. Usually only a small subset was randomly sampled for training.

1.1 Our approach

In this paper, we propose highly efficient forward and backward propagation algorithms for CNN-based pixelwise classification. They generate exactly the same results as patch-by-patch scanning, without sacrificing the resolution of the predicted label maps or rejecting any patches early. Experimental results show that a speedup of more than 1500 times is achieved for both forward and backward propagation. Theoretical analysis shows that, compared with patch-by-patch scanning, the complexity of our algorithms is much less sensitive to patch size, and the speedup increases with the sizes of images and patches. This is important since image resolutions will become higher and higher in future applications, and CNNs prefer large patches, which contain more contextual information.

The proposed algorithms also have potential impact on CNN training. With fast forward propagation, the prediction errors of all the pixels in an image can be estimated quickly at every backward propagation iteration. As shown in Figure 1.(b), based on the error map, an arbitrary portion of the pixels of interest (even all of them) in an image can be selected by a mask, and their surrounding patches are used to update the CNN with our modified backward propagation. The computational complexity of our backward propagation is constant with respect to the number of image patches selected.

Figure 1 compares patch-by-patch scanning and our approach. At the test stage, patch-by-patch scanning sequentially and independently feeds patches to the CNN, and forward propagation is repeated for all the pixels. In our approach, the whole image is taken as the input of the CNN, which predicts the whole label map with only one pass of the modified forward propagation. At each training iteration, existing approaches predict the error of each sampled patch and use it to calculate gradients with backward propagation. If a mini-batch contains n training patches, both forward propagation and backward propagation are repeated n times, and the gradients estimated from the n patches are averaged to update the CNN parameters. In our approach, a whole image and its label map are treated as an input-target pair. With the proposed fast forward propagation, the class labels of all the pixels can be predicted quickly, and all the prediction errors in the same image can be used to update the CNN parameters with only one pass of the modified backward propagation.

If the CNN has only 1-stride convolution and no pooling layers, it is not difficult to implement the one-pass forward propagation and one-pass backward propagation described above. Otherwise it is nontrivial, because convolution and pooling operations with greater-than-1 strides have a down-sampling effect within each patch. The key of our approach is to modify both the convolution and pooling kernels of the original CNN by inserting a specific number of all-zero columns and rows to compensate for the down-sampling by the convolution and pooling layers. We call such kernels the d-regularly sparse kernels. Moreover, with d-regularly sparse kernels, all strides of the convolution and pooling operations become 1, regardless of the strides of convolution and pooling in the original CNN. The 1-strides ensure continuous memory access, which is the key to maximizing computational efficiency on GPUs.

The main contributions of this work can be summarized as three-fold. (1) Our proposed algorithms eliminate all the redundant computation of forward and backward propagation in CNN-based pixelwise classification and achieve significant speedup. (2) The proposed d-regularly sparse kernels not only ensure exactly the same results as patch-by-patch scanning in both forward and backward propagation, but also allow memory to be accessed in a continuous manner, which is the key to fully utilizing the computational capability of GPUs, regardless of the strides of convolution and pooling in the original CNN. (3) By applying a mask to the error map at the last layer of a CNN, one can choose an arbitrary subset of patches of interest from a training image to update CNN parameters via backward propagation with constant computational complexity.

2 Related Work

There have been previous works [9], [3], [12] on efficient computation of CNNs. But most methods assumed input samples are independent and did not take the redundant computation between samples into account.

Our work is most related to the fast scanning method in [7], which was applied in scene labeling [14]. Fast scanning can be viewed as performing convolution or pooling with different starting offsets to generate multiple feature "fragments". The output fragments at the last layer are re-organized to generate the final label maps. Compared with fast scanning [7], our key advantage is ensuring 1-strides in all convolution and pooling operations after introducing the d-regularly sparse kernels. This allows memory addresses to be accessed continuously, which is the key to fully utilizing the computational power of GPUs. In contrast, fast scanning [7] still keeps large strides in its operations and is not ideal for GPU implementation. It was only implemented on CPUs in [7], [14]. Even with a GPU implementation, it is multiple times slower than our algorithms.

There are works [11] taking whole images as inputs to perform forward and backward propagation with multilayer fully connected networks. However, these works used fully connected layers and are therefore only suitable for images with regular structures such as pedestrians or faces, but not for general objects and images.

3 Efficient forward propagation

3.1 Convolutional neural network

For a CNN consisting of L layers, without loss of generality, we assume that the input and output of each layer consist of only one 2D feature map throughout the paper. Let x_i and y_i denote the input and output feature maps of the i-th layer, respectively. y_i is also the input of the next layer, i.e., x_{i+1} = y_i.

If the i-th layer is a convolution layer, we denote W_i and b_i as the convolution kernel and the bias parameter of this layer. The output of this layer is y_i = conv(x_i, W_i, s_i) + b_i, where conv(x, W, s) denotes the convolution operation on x with a kernel W and a stride of s.

If the i-th layer is a pooling layer, it can be viewed as first extracting feature patches from strided locations of the input feature map with a binary mask P_i. The maximal or the mean value of each feature patch is then calculated as the pooling result of that patch. Let pool(x_i, P_i, s_i) denote a general pooling operation with a binary kernel P_i and a stride of s_i on the input feature map x_i.

The parameters of an L-layer CNN can be optimized by gradient descent. For pixelwise classification tasks, patches centered at pixels of training images are cropped as training samples. For each patch in an input image I, the CNN outputs a prediction or a score. When feeding the whole image as the input of the CNN by setting x_1 = I, forward propagation cannot generate a prediction at every pixel location, due to the greater-than-1 strides in the convolution and pooling layers.

Our goal is to make a one-time scan of each layer and generate exactly the same results as the time-consuming patch-by-patch scanning, but without redundant computation. To achieve this goal, we introduce d-regularly sparse kernels to replace the convolution and pooling kernels. The modified network can then take a whole image as input and directly output a label map for all the pixels without loss of resolution.

Figure 2: (a) A convolution kernel whose entries are generally non-zero (represented by colored squares). (b) A pooling kernel whose entries act as binary masks to extract features only at masked locations (represented by shaded squares). (c) Conversion of the convolution kernel in (a) to a d-regularly sparse convolution kernel. Colored squares represent entries from the original kernel, and white squares represent zeros. (d) Conversion of the pooling kernel in (b) to a d-regularly sparse pooling kernel. Shaded (white) squares represent masked (unmasked) locations.

3.2 d-regularly sparse kernels

For a convolution kernel and a pooling kernel, each entry is 1 pixel away from its neighboring entries (see Figures 2.(a) and 2.(b) for examples). We create d-regularly sparse kernels for convolution and pooling layers by inserting all-zero rows and columns into the original kernels so that every two originally neighboring entries become d pixels away from each other. In Figures 2.(c) and 2.(d), we show a 2-regularly sparse convolution kernel and a 3-regularly sparse pooling kernel. The original kernels can be viewed as 1-regularly sparse kernels with d = 1. Note that the d-regularly sparse kernels are not equivalent to using a stride of d in the conventional convolution and pooling layers. On the contrary, with our d-regularly sparse kernels, all strides in the convolution and pooling layers of the modified CNN are fixed to 1.
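Constructing a d-regularly sparse kernel amounts to placing the original entries on a d-spaced grid of zeros. A minimal NumPy sketch (the helper name `make_d_regular` is ours):

```python
import numpy as np

def make_d_regular(kernel, d):
    """Insert all-zero rows/columns so that every two originally neighboring
    kernel entries become d pixels away from each other."""
    k_h, k_w = kernel.shape
    out = np.zeros((d * (k_h - 1) + 1, d * (k_w - 1) + 1), dtype=kernel.dtype)
    out[::d, ::d] = kernel  # original entries land on a d-spaced grid
    return out

w = np.arange(1, 10, dtype=float).reshape(3, 3)
w2 = make_d_regular(w, 2)  # a 3x3 kernel becomes 5x5, entries 2 pixels apart
print(w2.shape)  # (5, 5)
```

With d = 1 the kernel is returned unchanged, matching the view of original kernels as 1-regularly sparse kernels.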

3.3 Image padding

Note that the original CNN is trained with image patches centered at pixel locations of training images. If a CNN takes k x k image patches as inputs, a test image of size m x n should be padded to (m+k-1) x (n+k-1) to ensure that patches centered at the border of the original image are also of size k x k.
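A small sketch of this padding rule in NumPy (assuming an odd patch size k, so that a patch can be centered on every original pixel):

```python
import numpy as np

def pad_for_patches(image, k):
    """Zero-pad an m-by-n image so that a k-by-k patch can be centered
    at every original pixel, including the border (assumes odd k)."""
    r = k // 2  # border width on each side; total growth is k - 1 per axis
    return np.pad(image, r, mode="constant")

img = np.ones((240, 320))
print(pad_for_patches(img, 47).shape)  # (286, 366), i.e. (m + k - 1, n + k - 1)
```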

(a) Padded input image (b) Top-left image patch
(c1) Proposed pooling (d1) Proposed convolution (e1) Proposed pooling (f1) Proposed convolution
(c2) Original pooling (d2) Original convolution (e2) Original pooling (f2) Original convolution
Figure 3: Illustration of our forward propagation using a CNN with 3 convolution and 2 pooling layers, as described in Section 3.4. (a) The input image (shown as shaded squares) is padded so that a patch centered on the border of the original image is fully covered. In our algorithm, the padded image is treated as the input x_1 of the CNN. (b) The patch (shown as blue squares) centered at the top-left pixel (shown as the yellow square) of the image. In the original CNN, the patch is cropped and fed to the network as input. (c1) Pooling with the original pooling kernel but a stride of 1 on the feature map obtained by convolving the padded image in (a) with the first convolution kernel. Red squares represent feature scores corresponding to those obtained by the original CNN after pooling (as shown in (c2)). (c2) Pooling in the original CNN on the feature map obtained by convolving the patch in (b) with the first convolution kernel. (d1) Convolving the feature map with the 2-regularly sparse convolution kernel. Blue squares represent feature scores corresponding to those obtained by the second convolution in the original CNN (as shown in (d2)). (d2) Convolving the feature map with the original second convolution kernel. (e1) Pooling with the 2-regularly sparse pooling kernel. Yellow squares represent feature scores corresponding to those obtained by the second pooling in the original CNN (as shown in (e2)). (e2) The second pooling in the original CNN. (f1) Convolving the feature map with the 4-regularly sparse convolution kernel. Purple squares represent feature scores corresponding to those obtained by the third convolution in the original CNN. (f2) Convolving the feature map with the original third convolution kernel, which results in a single feature score.
Input: input image I, convolution parameters W_i, b_i, pooling kernels P_i, strides s_i of each layer
begin
    x_1 <- pad(I)                                  (see Section 3.3)
    d <- 1
    for i = 1, ..., L do
        if layer i is a convolution layer then
            convert W_i to the d-regularly sparse kernel W'_i   (see Section 3.2)
            y_i <- conv(x_i, W'_i, 1) + b_i
        else if layer i is a pooling layer then
            convert P_i to the d-regularly sparse kernel P'_i   (see Section 3.2)
            y_i <- pool(x_i, P'_i, 1)
        end if
        d <- d * s_i
        x_{i+1} <- y_i
    end for
    return output feature map y_L
end
Algorithm 1: Efficient Forward Propagation of CNN

3.4 Efficient forward propagation

Our algorithm for efficient forward propagation is summarized in Algorithm 1. Note that the strides in all layers are fixed to 1. We explain the algorithm step-by-step by converting a conventional CNN, which includes 3 convolution layers and 2 max-pooling layers, to a new one with regularly sparse kernels. For simplicity, non-linearity layers are not included, and only the strides of the pooling layers are greater than 1. The original CNN is composed of a convolution layer followed by a pooling layer with a stride of 2, another convolution layer and a pooling layer with a stride of 2, and a final convolution layer to generate a feature score. The original network takes image patches as inputs and outputs a single feature score after forward propagation. Given an input image (Figure 3.(a)), where each pixel needs a prediction score, the image is first padded as described in Section 3.3. (For the convenience of illustration, the figure uses an image size smaller than the patch size; otherwise, the figure would be too big.) The image patch centered at the top-left corner of the original image is shown in Figure 3.(b).

We illustrate how our algorithm computes each layer of the modified CNN in Figure 3 and compare its computation with that of the original CNN, using the full input image in Figure 3.(a) and the top-left image patch in Figure 3.(b). For the first convolution and pooling layers, the proposed algorithm performs convolution in the same way as the original CNN, but pools patches with a stride of 1. The difference between our algorithm and the original CNN at the first pooling layer is illustrated in Figure 3.(c). Our algorithm performs convolution on the whole padded input image and does not reduce its resolution after pooling with stride 1. In contrast, the original CNN performs convolution only on one patch, and the resolution of the feature map is reduced after pooling. For our modified CNN, because the stride in the previous pooling layer equals 1, the input feature maps for the second convolution and pooling layers are not equivalent to the ones obtained by the original CNN. As shown in Figures 3.(d)-3.(e), each pair of neighboring entries obtained with the original CNN is now 2 pixels away from each other in the feature map obtained with our algorithm. To generate the same feature scores as the original CNN after another round of convolution and pooling operations, the convolution and pooling kernels should be converted to 2-regularly sparse kernels, so that neighboring entries in these kernels are also 2 pixels away. After the 2nd pooling layer in our modified CNN, each pair of neighboring entries in the output feature map obtained with the original CNN is 4 pixels away from each other in our output feature map. Therefore, the final convolution kernel should be converted to a 4-regularly sparse kernel to generate the final feature map (see Figure 3.(f)).

Since fully connected layers can be viewed as convolution operations, such layers following the convolution or pooling layers in the original CNN can also be converted to convolution layers with regularly sparse kernels.
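The equivalence between one-pass forward propagation with sparse kernels and patch-by-patch scanning can be checked numerically on a toy network (a sketch, not the paper's model: 3x3 conv, 2x2 max-pool with stride 2, then 3x3 conv, applied to 8x8 patches indexed by their top-left corners rather than their centers, for simplicity):

```python
import numpy as np

def corr_valid(x, w):
    """Stride-1 'valid' cross-correlation of 2D map x with kernel w."""
    kh, kw = w.shape
    H, W = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    return np.array([[(x[i:i+kh, j:j+kw] * w).sum() for j in range(W)]
                     for i in range(H)])

def max_pool(x, k, s):
    H, W = (x.shape[0] - k) // s + 1, (x.shape[1] - k) // s + 1
    return np.array([[x[i*s:i*s+k, j*s:j*s+k].max() for j in range(W)]
                     for i in range(H)])

def dilate(w, d):
    """d-regularly sparse kernel: original neighbors become d pixels apart."""
    out = np.zeros((d * (w.shape[0] - 1) + 1, d * (w.shape[1] - 1) + 1))
    out[::d, ::d] = w
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((12, 12))
w1 = rng.standard_normal((3, 3))
w2 = rng.standard_normal((3, 3))

def patch_score(p):  # the original CNN applied to one 8x8 patch
    return corr_valid(max_pool(corr_valid(p, w1), 2, 2), w2)[0, 0]

# patch-by-patch scanning over all 8x8 patches (indexed by top-left corner)
scan = np.array([[patch_score(img[r:r+8, c:c+8]) for c in range(5)]
                 for r in range(5)])

# one-pass forward propagation: all strides fixed to 1, and the second
# convolution kernel converted to a 2-regularly sparse kernel, because the
# accumulated stride before it is 2
one_pass = corr_valid(max_pool(corr_valid(img, w1), 2, 1), dilate(w2, 2))

print(np.allclose(scan, one_pass))  # True
```

The one-pass network fixes every stride to 1 and dilates each later kernel by the product of the strides accumulated before it, exactly the bookkeeping that Algorithm 1 performs.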

3.5 Theoretical speedup of forward propagation

We assume that each convolution layer is followed by a pooling layer and a non-linearity layer. Let N be the number of pixels in the input feature map of the 1st layer (usually a pre-processed image), n_i be the number of pixels in each image patch at layer i, k_i be the number of pixels of the convolution kernel at layer i, and p_i be the number of foreground masks in the pooling kernel following layer i. The computational complexity of patch-by-patch scanning at layer i with a stride of 1 can be calculated as

    C_patch(i) = N (n_i k_i + 2 n_i).    (1)

On the right-hand side, the factor N denotes that a total of N image patches are evaluated, the term n_i k_i denotes the complexity of convolving each image patch at the i-th convolution layer, and the term 2 n_i denotes that each pixel is compared or added once at the pooling layer, followed by a non-linear transformation at the non-linearity layer.

For our algorithm, the time complexity is calculated as

    C_ours(i) = c N k_i + c N p_i + c N ≈ c N k_i,    (2)

where the factor c > 1 denotes that the input image needs to be padded before being processed, as described in Section 3.3. The term c N k_i denotes the complexity of convolving the input feature map at layer i, c N p_i denotes the complexity of pooling following layer i, and c N denotes the complexity of applying the point-wise non-linear transformation to the output feature map. On the right-hand side, c N p_i + c N is omitted because p_i + 1 is usually much smaller than k_i, and pooling operations are generally much faster than convolutions.

Our algorithm therefore has a speedup of roughly C_patch(i) / C_ours(i) ≈ n_i / c compared with patch-by-patch scanning. The speedup increases with the image size and the image patch size. Since the sizes of the intermediate feature maps gradually decrease due to greater-than-1 strides, the speedup is largest for the 1st layer and gradually decreases.
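As a rough, purely illustrative tally for a single convolution layer (the patch, kernel, and image sizes below are hypothetical, not taken from the paper):

```python
# Rough cost model for one convolution layer: patch-by-patch evaluates
# N patches of n pixels each against a k-pixel kernel, while the one-pass
# approach convolves the padded image (c*N pixels) once.
N = 320 * 240                      # pixels needing a prediction
n = 47 * 47                        # pixels per image patch
k = 7 * 7                          # pixels in the convolution kernel
c = (320 + 46) * (240 + 46) / N    # padding overhead factor, just over 1

cost_patch = N * n * k             # multiply-accumulates, patch-by-patch
cost_ours = c * N * k              # multiply-accumulates, one-pass
print(cost_patch / cost_ours)      # roughly n / c, about 1600x for this layer
```

The ratio is essentially the patch area divided by the padding overhead, which is why large patches favor the one-pass scheme so strongly.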

(a) Memory accessed by 25 threads
(b) Iter. 1 (c) Iter. 2 (d) Iter. 1 (e) Iter. 2
by ours by ours by [7] by [7]
Figure 4: (a) Input feature map is accessed iteratively by our proposed convolution operation with 25 GPU threads. Each thread extracts 4 values iteratively from the input feature map to form a matrix. Convolution is performed by matrix multiplication with the original kernel. (b)-(c) GPU threads 1-3 in (a) access the input feature map at iterations 1 and 2. Note that the memory addresses accessed by the threads are consecutive. (d)-(e) Illustration of how GPU threads 1-3 access strided locations of the GPU memory at iterations 1 and 2 by fast scanning in [7].

3.6 GPU implementation

Our algorithm can run very efficiently on GPUs. GPU efficiency is limited by the way an algorithm accesses GPU memory. Our forward propagation algorithm has the advantage that threads in the same block access GPU memory continuously, which is the key to fully utilizing the computational capacity of GPUs.

The Caffe library [8] provides one of the fastest implementations of convolution on GPUs [2]. It extracts feature patches from the input feature map and converts them into a large matrix. Convolution is calculated as a matrix multiplication between this large matrix and the convolution kernel. Our algorithm is implemented based on the conv_layer of the Caffe library. Every thread is in charge of extracting values from the input feature map for calculating one entry of the output feature map. At every location of the input feature map, all values specified by the non-zero entries of the convolution kernel are iteratively extracted by a single thread. The extracted values are then organized into a large matrix and multiplied by the convolution kernel to generate the final result (Figure 4.(a)). In this way, consecutive threads in the same block access consecutive addresses in GPU memory and make full use of the GPU memory bandwidth (Figures 4.(b) and 4.(c)).
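The patch-extraction-plus-matrix-multiplication idea can be sketched in NumPy for the single-channel, stride-1 case (Caffe's actual im2col additionally handles channels, strides, and padding):

```python
import numpy as np

def im2col(x, kh, kw):
    """Each row holds one kh-by-kw patch of x, flattened; convolution then
    becomes a single matrix multiplication."""
    H, W = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    cols = np.empty((H * W, kh * kw))
    for i in range(H):
        for j in range(W):
            cols[i * W + j] = x[i:i+kh, j:j+kw].ravel()
    return cols

rng = np.random.default_rng(1)
x = rng.standard_normal((6, 6))
w = rng.standard_normal((3, 3))

y = (im2col(x, 3, 3) @ w.ravel()).reshape(4, 4)  # matmul-based convolution

# reference: direct stride-1 valid cross-correlation
ref = np.array([[(x[i:i+3, j:j+3] * w).sum() for j in range(4)]
                for i in range(4)])
print(np.allclose(y, ref))  # True
```

On a GPU, the per-patch extraction loop is what each thread performs, and the matrix multiplication is handled by a highly tuned GEMM routine.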

Max and average pooling can be implemented in a similar way, i.e., each GPU thread performs the max or average calculation on the extracted feature patch for one output entry. Thus continuous access to GPU memory is achieved in both convolution and pooling layers.

Fast scanning [7] performs convolution or pooling operations with greater-than-1 strides in the original manner but with different starting offsets. Therefore, it has to access strided addresses of memory (Figures 4.(d) and 4.(e)), which is unable to fully utilize the bandwidth of GPU memory and significantly hinders its efficiency. Moreover, each operation with different offsets leads to multiple output sub-feature maps of different sizes, and the number of such sub-feature maps increases exponentially as the number of strided layers increases, further hindering its efficiency.

4 Efficient backward propagation

Backward propagation on the modified CNN with regularly sparse kernels can be performed by directly feeding whole images and their pixelwise label maps as the inputs. Compared with a conventional CNN, there are two differences when performing backward propagation on the modified CNN: 1) the errors at the last layer are no longer single values but the errors of all pixels (or a fraction of chosen pixels) in a training image; and 2) only the gradients of the non-zero entries in the regularly sparse convolution kernels are calculated, and only those entries are updated during training.

4.1 Backward propagation of convolution layers

Let e_i denote the error map corresponding to the input feature map x_i at layer i. To compute one entry in the error map e_i, one should extract the next layer's errors of the units that are connected to the entry of interest in x_i, multiply each of them by the associated weights in the layer's sparse convolution kernel W'_i, and sum the weighted errors. The calculation of all entries of the error map can be converted to a convolution operation: e_i = conv(pad(e_{i+1}), rot(W'_i), 1), where pad() denotes zero-padding the error map e_{i+1}, and rot() denotes rotating the convolution kernel by 180 degrees.

For each non-zero entry in the convolution kernel W'_i, its gradient is calculated as the sum of all the weighted errors that were computed with that entry, where the weights of the errors are the corresponding input values from the input feature map x_i used during convolution. Calculating the gradients of W'_i can be converted into a convolution operation: the gradient map of W'_i is conv(x_i, E_{i+1}, 1), where E_{i+1} denotes the error map e_{i+1} with all-zero rows and columns inserted in the same way as for the d-regularly sparse kernel at layer i. Similarly, the gradient for the bias b_i is calculated as the sum of all the entries in e_{i+1}. The speedup of backward propagation can be derived similarly to that of forward propagation, i.e., approximately n_i / c.
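The rule of back-propagating errors by convolving the zero-padded error map with the 180-degree-rotated kernel can be checked numerically on a toy stride-1 example (`corr_valid` is our own helper):

```python
import numpy as np

def corr_valid(x, w):
    """Stride-1 'valid' cross-correlation."""
    kh, kw = w.shape
    H, W = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    return np.array([[(x[i:i+kh, j:j+kw] * w).sum() for j in range(W)]
                     for i in range(H)])

rng = np.random.default_rng(2)
x = rng.standard_normal((5, 5))
w = rng.standard_normal((3, 3))
e = rng.standard_normal((3, 3))  # errors at the layer output (loss = sum(e * y))

# direct gradient w.r.t. the input: scatter each output error back into the
# input positions it was computed from, weighted by the kernel
grad_direct = np.zeros_like(x)
for i in range(3):
    for j in range(3):
        grad_direct[i:i+3, j:j+3] += e[i, j] * w

# same gradient as a single convolution: zero-pad the error map and
# correlate it with the kernel rotated by 180 degrees
grad_conv = corr_valid(np.pad(e, 2), np.rot90(w, 2))
print(np.allclose(grad_direct, grad_conv))  # True
```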

4.2 Backward propagation of pooling layers

For max pooling with regularly sparse kernels, the index within each patch where the maximal value was chosen is recorded during forward propagation. During backward propagation, the errors at the output of the pooling layer transfer back to its input and accumulate at the locations specified by those recorded indices. Average pooling can be viewed as mean filtering and is handled in the same way as a convolution layer.
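A minimal sketch of this record-and-route scheme for stride-1 max pooling (helper names are ours; with overlapping windows, errors routed to the same input location accumulate):

```python
import numpy as np

def max_pool_fwd(x, k):
    """Stride-1 max pooling that also records the argmax index of each
    patch, as needed for backward propagation."""
    H, W = x.shape[0] - k + 1, x.shape[1] - k + 1
    out = np.empty((H, W))
    idx = np.empty((H, W), dtype=int)
    for i in range(H):
        for j in range(W):
            patch = x[i:i+k, j:j+k]
            idx[i, j] = patch.argmax()       # flat index within the patch
            out[i, j] = patch.ravel()[idx[i, j]]
    return out, idx

def max_pool_bwd(err, idx, in_shape, k):
    """Route each output error back to the recorded max location."""
    grad = np.zeros(in_shape)
    H, W = err.shape
    for i in range(H):
        for j in range(W):
            di, dj = divmod(int(idx[i, j]), k)
            grad[i + di, j + dj] += err[i, j]
    return grad

x = np.array([[1., 5., 2.],
              [3., 4., 6.],
              [0., 7., 1.]])
out, idx = max_pool_fwd(x, 2)
grad = max_pool_bwd(np.ones_like(out), idx, x.shape, 2)
print(grad)  # the entry 7 wins two overlapping windows, so it receives 2
```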

4.3 Selecting pixels of interest

We can select prediction errors of only a fraction of pixels in a training image for backward propagation. This is achieved by applying a mask on the error map of the last layer, where the prediction errors of pixels of interest are kept and the errors at other entries are set to zero (see “Error Mask” in Figure 1.(b)). The gradients calculated in this way are exactly the same as those calculated by extracting the image patches centered at the pixels of interest in the training image and feeding them as a mini-batch into the original CNN. The computation complexity does not change when different subsets of pixels are chosen.
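A sketch of the masking step (the map size and the selected pixels are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
error_map = rng.standard_normal((8, 8))  # per-pixel prediction errors, last layer

# keep only the pixels of interest; everything else contributes zero gradient
mask = np.zeros((8, 8))
mask[2, 3] = mask[5, 5] = mask[6, 1] = 1.0  # three hypothetical pixels of interest

masked = error_map * mask
print(int((masked != 0).sum()))  # 3: only the selected pixels propagate errors
```

Since the masked error map has the same shape regardless of how many pixels are selected, the cost of the subsequent backward pass is unchanged.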

This is an important property for tasks such as object detection, where only a small number of positive samples exist in each training image. If image patches at all pixel locations were used for training, the gradients from the positive samples would be overwhelmed by those calculated from the significantly larger number of negative samples. For scene labeling tasks, other strategies of choosing pixels during training might be beneficial for improving the accuracy.

Layer Type                 | conv1   | pool1   | tanh1 | conv2  | pool2 | tanh2 | conv3   | Overall
Patch-by-Patch Fwd (ms)    | 22983.8 | 4916.4  | 73.71 | 5066.2 | 46.68 | 16.76 | 22134.8 | 55238.4
Fast Scanning [7] Fwd (ms) | 3.103   | 68.04   | 0.518 | 10.63  | 2.464 | 0.386 | 72.95   | 158.09
Our Method Fwd (ms)        | 3.074   | 6.688   | 0.526 | 7.088  | 1.211 | 0.395 | 16.41   | 35.39
Speedup by Ours, Fwd       | 7476.8  | 735.1   | 140.1 | 714.8  | 38.5  | 42.4  | 1348.86 | 1560.8
Patch-by-Patch Bwd (ms)    | 56992.3 | 14765.7 | 64.53 | 6886.0 | 186.3 | 19.8  | 8285.2  | 87199.8
Our Method Bwd (ms)        | 7.42    | 14.52   | 0.481 | 27.11  | 1.538 | 0.424 | 39.78   | 91.26
Speedup by Ours, Bwd       | 7680.9  | 1016.9  | 134.2 | 254.0  | 121.1 | 46.7  | 208.3   | 955.5
Table 1: The layerwise timing and speedup results of forward and backward propagation by our proposed algorithm, and the layerwise timing results of forward propagation by the fast scanning method [7], on the Plain CNN model with whole images as inputs.
Layer Type              | conv11  | pool11  | tanh11 | conv12   | conv13  | conv21   | pool21 | tanh21
Sliding Window Fwd (ms) | 39485.6 | 1960.2  | 693.0  | 59017.2  | 6473.1  | 63548.4  | 332.2  | 98.14
Our Method Fwd (ms)     | 4.398   | 0.854   | 0.337  | 24.42    | 2.466   | 28.90    | 0.70   | 0.227
Speedup by Ours, Fwd    | 8978.1  | 2295.3  | 2056.4 | 2416.8   | 2631.3  | 2198.9   | 474.6  | 426.7
Sliding Window Bwd (ms) | 73961.5 | 10054.8 | 602.6  | 146019.3 | 25206.7 | 133706.2 | 1623.8 | 106.7
Our Method Bwd (ms)     | 8.193   | 1.428   | 0.282  | 66.55    | 6.778   | 71.69    | 0.844  | 0.245
Speedup by Ours, Bwd    | 9027.4  | 7041.2  | 2136.9 | 2194.1   | 3718.9  | 1865.1   | 1923.9 | 6627.8

Layer Type              | conv22  | conv23 | conv31  | pool31  | tanh31 | conv32  | conv33 | Overall
Sliding Window Fwd (ms) | 14765.3 | 2433.4 | 17059.8 | 32.15   | 13.81  | 17015.4 | 2069.7 | 224997.4
Our Method Fwd (ms)     | 18.98   | 1.920  | 20.55   | 0.488   | 0.164  | 10.76   | 1.080  | 116.2
Speedup by Ours, Fwd    | 777.9   | 1267.4 | 830.2   | 65.9    | 84.2   | 1581.4  | 1916.4 | 1935.6
Sliding Window Bwd (ms) | 28744.1 | 8522.3 | 16727.5 | 128.358 | 15.91  | 8657.7  | 2793.6 | 456871.1
Our Method Bwd (ms)     | 52.35   | 5.368  | 50.89   | 0.630   | 0.180  | 29.47   | 3.117  | 298.0
Speedup by Ours, Bwd    | 549.1   | 1587.6 | 328.7   | 203.7   | 88.4   | 293.8   | 896.2  | 1533.1
Table 2: The layerwise timing and speedup results of forward and backward propagation by our proposed algorithm on the RCNN model with whole images as inputs.

5 Experiments

All the experiments are conducted on an NVIDIA K40 GPU. Fast scanning [7] and patch-by-patch scanning are used as baselines for comparison; note that fast scanning supports forward propagation only. All methods were implemented based on the Caffe library [8], and actual running time is used to evaluate their efficiency.
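Wall-clock benchmarks of this kind typically discard warm-up iterations and average over repeats; below is a minimal stdlib-only sketch of such a harness. The workload passed to `benchmark` is a placeholder, and timing real GPU kernels would additionally require device synchronization (e.g. `cudaDeviceSynchronize`) before each timestamp.

```python
import time

def benchmark(fn, warmup=3, repeats=10):
    """Return the average wall-clock time of fn() in milliseconds.

    Warm-up iterations are discarded so one-time costs (memory
    allocation, code compilation, caches) do not skew the measurement.
    """
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    return (time.perf_counter() - start) / repeats * 1000.0

# Example: time a trivial placeholder workload.
elapsed_ms = benchmark(lambda: sum(range(10000)))
```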

5.1 Running times of practical CNN models

We tested the running times of forward and backward propagation of two practical CNN models for scene labeling from [14], the Plain CNN and the RCNN models. Detailed network structures are recorded in Tables 1 and 2. The output feature map has 32 channels, and the input images are padded accordingly for the two models. Note that the running times of the CNN models depend only on the image size and the network structure, not on specific feature and kernel values; the input images and convolution kernels were therefore filled with random numbers.

As shown by the layerwise and overall timing results, our proposed method achieves a speedup of over 1500 times compared with the traditional patch-by-patch approach. Compared with the fast scanning method [7], our algorithm achieves a speedup of over 10 times at the pool1 layer and over 2 times at the pool2 layer. Because fast scanning outputs multiple sub-feature maps at the pool1 and pool2 layers, the large number of sub-feature maps also hinders the efficiency of the subsequent conv2 and conv3 layers. These results show that the performance of fast scanning degrades significantly as the stride increases, making it less suitable for GPU implementation. We also observed that some pooling layers may take even longer to compute than the convolution layers. Since pooling operations are mostly limited by GPU memory bandwidth, this again shows the importance of the continuous memory access enabled by our algorithm.

We also tested the running time of backward propagation by randomly choosing 128, 512, or 1024 pixels from the error map at the last CNN layer, as described in Section 4.3. The running times of backward propagation with error masks show no measurable difference from those without masks, confirming that the cost is constant with respect to the number of sampled pixels.

The maximal numerical differences between the gradients calculated by our proposed algorithm and by the patch-by-patch method are smaller than for all of the above test cases. These numerical results validate the correctness of our algorithm and of our implementation.
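Such a correctness check reduces to a maximum-absolute-difference comparison between the two sets of gradients; a generic sketch, independent of any particular framework:

```python
def max_abs_diff(grads_a, grads_b):
    """Maximum elementwise absolute difference between two flat gradient lists."""
    assert len(grads_a) == len(grads_b)
    return max(abs(a - b) for a, b in zip(grads_a, grads_b))

# Gradients from the fast algorithm and from patch-by-patch scanning
# should agree to within floating-point tolerance.
fast = [0.123456, -0.654321]
slow = [0.123456, -0.654321]
diff = max_abs_diff(fast, slow)
```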

Padded image size    Time (ms)    Overall speedup
                     12.57        1098.0
                     35.39        1560.8
                     121.68       1815.9
Table 3: The timing and speedup results by our proposed algorithm on the Plain CNN model with different input image sizes.
Input patch size    pool1 kernel size / stride    Overall time (ms)    conv1 speedup    pool1 speedup    Overall speedup
                    / 2                           24.53                2264.0           218.0            1293.8
                    / 4                           26.84                3406.9           390.7            1320.8
                    / 8                           35.39                7476.8           735.1            1560.8
Table 4: The timing and speedup results by our proposed algorithm on the Plain CNN model with modifications to the pool1 layer to take image patches of different sizes as inputs.

5.2 Effects of different image and image patch sizes

We also tested the running times of forward propagation of the above mentioned Plain CNN model with different image sizes and image patch sizes.

Images of two additional image sizes, and , which are padded to and respectively, are fed into the CNN model as the inputs. The timing and speedup results are reported in Table 3.

To make the Plain CNN model take image patches of different sizes as input while still outputting single values, we adjusted the pooling kernel size and stride of the pool1 layer accordingly. The timing and speedup results are recorded in Table 4. They show that the speedups of the conv1 and pool1 layers decrease significantly as the image patch size decreases, while the overall speedup decreases only slightly, because the feature map sizes after the pool1 layer change little.
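The kernel and stride adjustment follows the standard output-size relation for convolution and pooling, out = (in − k) / s + 1, applied layer by layer; a small sketch (the layer parameters below are illustrative, not the exact Plain CNN configuration):

```python
def output_size(in_size, kernel, stride=1):
    """Spatial output size of a convolution/pooling layer without padding."""
    return (in_size - kernel) // stride + 1

# To make a network emit a single value per patch, kernel sizes and
# strides must be chosen so the spatial size collapses to 1 by the last
# layer. Illustrative example: a 16x16 patch through a 5x5 convolution
# followed by 4x4 pooling with stride 4.
after_conv = output_size(16, 5)                     # (16 - 5) + 1
after_pool = output_size(after_conv, 4, stride=4)   # (12 - 4) / 4 + 1
```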

The speedup results of the above two experiments show that the speedup increases as the image size and image patch size increase, which validates our theoretical analysis.

6 Conclusions and future works

This work makes fundamental contributions to deep learning, since forward and backward propagation is the foundation of CNNs. By analyzing the key difference from whole-image classification, the proposed algorithms eliminate all the redundant computation in the forward and backward propagation of CNNs for pixelwise classification. While guaranteed to produce exactly the same results as patch-by-patch scanning, over 1500 times speedup has been achieved in our experiments, and the speedup will further increase with the sizes of images and patches. The proposed d-regularly sparse kernels can convert convolution and pooling kernels with various strides into operations with 1-strides, which allows continuous memory access on GPU. The approach therefore has great flexibility to handle CNNs with different designs and structures, and reaches high efficiency in GPU implementation.
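The conversion of a stride-d kernel into a 1-stride operation amounts to inserting d−1 zeros between adjacent kernel entries; a minimal 2-D sketch of constructing such a sparse kernel:

```python
def d_regularly_sparse(kernel, d):
    """Insert d-1 zeros between adjacent entries of a square 2D kernel.

    A k x k kernel becomes ((k-1)*d + 1) x ((k-1)*d + 1), with the
    original entries placed at positions whose indices are multiples
    of d; all other positions are zero.
    """
    k = len(kernel)
    size = (k - 1) * d + 1
    out = [[0.0] * size for _ in range(size)]
    for i in range(k):
        for j in range(k):
            out[i * d][j * d] = kernel[i][j]
    return out

# A 2x2 kernel with d = 2 becomes a 3x3 sparse kernel.
sparse = d_regularly_sparse([[1.0, 2.0], [3.0, 4.0]], d=2)
```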

This work opens the door to many high-impact applications and future research. It breaks the efficiency bottleneck of CNN-based pixelwise classification and makes real-time applications possible. It also has the potential to change CNN training fundamentally. Since the error map over all pixels in an image can be estimated quickly with fast forward propagation at each training iteration, it can be used to guide the selection of training patches by considering the spatial distribution and dynamic variation of errors; in contrast, existing works selected training patches completely at random. Moreover, an arbitrary subset of training patches can be selected from an image for the proposed fast backward propagation with constant computation complexity. Many interesting training strategies are expected to be developed based on our work. It is also straightforward to extend our algorithms to video analysis with 3D convolution and 3D pooling, where much more redundancy exists in cube-by-cube scanning and even higher speedup is expected.

References

  • [1] K. Chellapilla, S. Puri, and P. Y. Simard. High performance convolutional neural networks for document processing. Proc. Int’l Workshop on Frontiers in Handwriting Recognition, 2006.
  • [2] S. Chintala. Convnet benchmarks. https://github.com/soumith/convnet-benchmarks, 2014.
  • [3] J. Dean, G. S. Corrado, R. Monga, K. Chen, M. Devin, Q. V. Le, M. Z. Mao, M. Ranzato, A. Senior, P. Tucker, K. Yang, and A. Y. Ng. Large scale distributed deep networks. Proc. Int’l Conf. on Neural Information Processing Systems, 2012.
  • [4] J. Fan, W. Xu, Y. Wu, and Y. Gong. Human tracking using convolutional neural networks. IEEE Transactions on Neural Networks, 21(10):1610–1623, 2010.
  • [5] C. Farabet, C. Couprie, L. Najman, and Y. LeCun. Learning hierarchical features for scene labeling. IEEE Trans. on Pattern Analysis and Machine Intelligence, 35(8):1915–1929, 2013.
  • [6] A. Frome, G. Cheung, A. Abdulkader, M. Zennaro, B. Wu, A. Bissacco, H. Adam, H. Neven, and L. Vincent. Large-scale privacy protection in google street view. Proc. Int’l Conf. on Computer Vision, 2009.
  • [7] A. Giusti, D. C. Ciresan, J. Masci, L. M. Gambardella, and J. Schmidhuber. Fast image scanning with deep max-pooling convolutional neural networks. Proc. Int’l Conf. on Image Processing, 2013.
  • [8] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
  • [9] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. Proc. Int’l Conf. on Neural Information Processing Systems, 2012.
  • [10] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proc. of IEEE, 86(11):2278–2324, 1998.
  • [11] P. Luo, X. Wang, and X. Tang. Pedestrian parsing via deep decompositional neural network. Proc. Int’l Conf. Computer Vision, 2013.
  • [12] M. Mathieu, M. Henaff, and Y. LeCun. Fast training of convolutional networks through FFTs. arXiv preprint arXiv:1312.5851, 2014.
  • [13] M. Oquab, L. Bottou, I. Laptev, and J. Sivic. Learning and transferring mid-level image representations using convolutional neural networks. Proc. Int’l Conf. on Computer Vision and Pattern Recognition, 2014.
  • [14] P. O. Pinheiro and R. Collobert. Recurrent convolutional neural networks for scene labeling. Proc. Int’l Conf. on Machine Learning, 2014.
  • [15] H. A. Rowley, S. Baluja, and T. Kanade. Neural network-based face detection. IEEE Trans. on Pattern Analysis and Machine Intelligence, 20(1):23–28, 1998.
  • [16] P. Sermanet, S. Chintala, and Y. LeCun. Convolutional neural networks applied to house numbers digit classification. Proc. Int’l Conf. on Pattern Recognition, 2012.
  • [17] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. Overfeat: Integrated recognition, localization and detection using convolutional networks. arXiv preprint arXiv:1312.6229, 2013.
  • [18] P. Sermanet, K. Kavukcuoglu, S. Chintala, and Y. LeCun. Pedestrian detection with unsupervised multi-stage feature learning. Proc. Int’l Conf. on Computer Vision and Pattern Recognition, 2013.
  • [19] P. Y. Simard, D. Steinkraus, and J. C. Platt. Best practices for convolutional neural networks applied to visual document analysis. Proc. Int’l Conf. Document Analysis and Recognition, pages 958–963, 2003.
  • [20] Z. Wu, Y. Huang, Y. Yu, L. Wang, and T. Tan. Early hierarchical contexts learned by convolutional networks for image segmentation. Proc. Int’l Conf. Pattern Recognition, 2014.