Pushing the Limits of Deep CNNs for Pedestrian Detection

03/15/2016 · by Qichang Hu, et al. · The University of Adelaide

Compared to other applications in computer vision, convolutional neural networks have under-performed on pedestrian detection. A breakthrough was made very recently by using sophisticated deep CNN models combined with a number of hand-crafted features or an explicit occlusion handling mechanism. In this work, we show that by re-using the convolutional feature maps (CFMs) of a deep convolutional neural network (DCNN) model as image features to train an ensemble of boosted decision models, we are able to achieve the best reported accuracy without using specially designed learning algorithms. We empirically identify and disclose important implementation details. We also show that pixel labelling may be simply combined with a detector to boost the detection performance. By adding complementary hand-crafted features such as optical flow, the DCNN based detector can be further improved. We set a new record on the Caltech pedestrian dataset, lowering the log-average miss rate from 11.7% to 8.9%, a relative improvement of 24%. We also achieve results comparable to the state-of-the-art approaches on the KITTI dataset.


1 Introduction

The problem of pedestrian detection has been intensively studied in recent years. Prior to the very recent work on deep convolutional neural network (DCNN) based methods [1, 2], the top performing pedestrian detectors were boosted decision forests with carefully hand-crafted features, such as histograms of oriented gradients (HOG) [3], self-similarity (SS) [4], aggregate channel features (ACF) [5], filtered channel features [6] and optical flow [7].

Recently, DCNNs have significantly outperformed comparable methods on a wide variety of vision problems [8, 9, 10, 11, 12, 13, 14, 15]. A region-based convolutional neural network (R-CNN) [11], for example, achieved excellent performance for generic object detection by evaluating a set of potential detections (object proposals) with a DCNN model. CifarNet [16] and AlexNet [8] have been extensively evaluated in the R-CNN detection framework in [17] for pedestrian detection. In their work, the best performance was achieved by AlexNet pre-trained on the ImageNet [18] classification dataset. Note that this result is still inferior to conventional pedestrian detectors such as [6] and [7]. The DCNN models in [17] under-perform mainly because the network design is not optimal for pedestrian detection. The performance of R-CNNs for pedestrian detection was further improved in [2] through the use of a deeper GoogLeNet model fine-tuned on the Caltech pedestrian dataset.

Another line of research for object detection explicitly models deformation and occlusion, using part-based models [19, 20, 21, 22] and explicit occlusion handling [23, 24, 25]. DCNNs have also been incorporated along this line of work for pedestrian detection [26, 27, 28], but none of these approaches has achieved better results than the best hand-crafted feature based method [6] on the Caltech dataset.

Two very recent approaches relying on DCNNs have improved pedestrian detection over hand-crafted features by a large margin on Caltech. CompACT-Deep [1] combines hand-crafted features and fine-tuned DCNNs into a complexity-aware cascade. Tian et al. [2] fine-tuned a pool of part detectors using a pre-trained GoogLeNet, and the resulting ensemble model (referred to as DeepParts) delivers results similar to those of CompACT-Deep. Both approaches are much more sophisticated than the standard R-CNN framework: CompACT-Deep involves a variety of hand-crafted features, a small CNN model and a large VGG model [9], while DeepParts contains 45 fine-tuned DCNN models and needs a set of strategies (including bounding-box shifting handling and part selection) to arrive at the reported result. Note that the high complexity of DCNN models can lead to practical difficulties; for example, it can be too costly to load all 45 DCNN models into a GPU card.

Here we ask a question: Is a complex DCNN based learning approach really a must for achieving the state-of-the-art performance? Our answer to this question is negative. In this work, we propose alternative methods for pedestrian detection, which are simpler in design, with comparable or even better performance. Firstly, we extensively evaluate the CFMs extracted from convolutional layers of a fine-tuned VGG model for pedestrian detection. Using only a CFM of a single convolutional layer, we train a boosted-tree-based detector and the resulting model already significantly outperforms all previous methods except the above two sophisticated DCNN frameworks. This model can be seen as a strong baseline for pedestrian detection as it is very simple in terms of implementation.

We show that the CFMs from multiple convolutional layers can be used to train effective boosted decision forests, which are then combined simply by score averaging. The resulting ensemble model beats all competing methods on the Caltech dataset. We further improve the detection performance by incorporating a semantic pixel labelling model. Next we review some related work.

1.1 Related Work

1.1.1 Convolutional feature maps (CFMs)

It has been shown in [29, 30, 31] that CFMs have strong representation abilities for many tasks. Long et al. [32] cast all fully-connected layers in DCNNs as convolutions for semantic image segmentation. In [30], the CFMs from multiple layers are stacked into one vector and used for segmentation and localization. Ren et al. [29] learn a network on the CFMs (pooled to a fixed size) of a pre-trained model.

The work by Yang et al. [31] is close to ours: it trains a boosted decision forest for pedestrian detection with CFM features from a single convolutional layer of the VGG model [9], and the performance on Caltech is comparable to Checkerboards [6]. It seems there is no significant advantage of the CFM used in [31] over hand-crafted features on the task of pedestrian detection. The reason may be two-fold. First, the CFMs used in [31] are extracted from the pre-trained VGG model, which is not fine-tuned on a pedestrian dataset. Second, CFM features are extracted from only one layer, so the multi-layer structure of DCNNs is not fully exploited. We show in this work that both of these issues are critically important for achieving good performance.

1.1.2 Segmentation for object detection

The cues used by segmentation approaches are typically complementary to those exploited by top-down methods. Recently, Yan et al. [33] propose to perform generic object detection by labelling super-pixels, which results in an energy minimization problem with a data term learned by DCNN models. In [34, 13], segmented image regions (not bounding boxes) are generated as object proposals and then used for object detection.

In contrast to the above region (or super-pixel) based methods, we exploit an even finer level of information here, namely pixel labelling. In particular, we demonstrate that we can improve the detection performance by simply re-scoring the proposals generated by a detector using pixel-level scores.

1.2 Contributions

We revisit pedestrian detection with DCNNs by studying the impact of a few training details and design parameters. We show that fine-tuning of a DCNN model using pedestrian data is critically important. Proper bootstrapping has a considerable impact too. Besides these findings, other main contributions of this work can be summarized as follows.

  1. The use of multi-layer CFMs for training a state-of-the-art pedestrian detector.

    We show that it is possible to train an ensemble of boosted decision forests using multi-layer CFMs that outperforms all previous methods. For example, with CFM features extracted from only two convolutional layers, we can already achieve a log-average miss rate on Caltech that is better than all previous methods, including the two sophisticated DCNN based methods [1, 2].

  2. Incorporating semantic pixel labelling. We also propose a combination of sliding-window detectors and semantic pixel-labelling, which performs on par with the best of previous methods. To keep the method simple, we use the weighted sum of pixel-labelling scores within a proposal region to represent the score of the proposal.

  3. The best reported pedestrian detector. A new performance record for Caltech is set by exploiting a DCNN as well as two complementary hand-crafted features: ACF and optical-flow features. This shows that some types of hand-crafted features are complementary to deep convolutional features.

Before we present our methods, we briefly describe the datasets, evaluation metric and boosting models in our experiments.

1.3 Datasets, Evaluation metric and Models

Caltech pedestrian dataset The Caltech dataset [35] is one of the most popular datasets for pedestrian detection. It contains roughly 250k frames extracted from about 10 hours of urban traffic video, with about 350k annotated bounding boxes covering 2300 unique pedestrians. The standard training and test sets sample one out of every 30 frames. In our experiments, the training data is extended by sampling frames more densely. Note that many competing methods [6, 31, 17] have used the same extended training set or even more data (every third frame).

For the Caltech dataset, we evaluate the performance of various detectors using the log-average miss rate (MR), which is computed by averaging the miss rate at false-positives-per-image (FPPI) values evenly spaced in log-space over the range from 10^-2 to 10^0. Unless otherwise specified, the detection performance reported in the remainder of the paper is the MR on the Caltech test set.
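For concreteness, the sketch below shows how such a log-average miss rate can be computed from a detector's miss-rate/FPPI curve. The nine log-spaced reference points in [10^-2, 10^0] and the geometric averaging follow the common Caltech evaluation protocol and are assumptions here rather than details restated by this paper.

```python
import numpy as np

def log_average_miss_rate(fppi, miss_rate, n_points=9):
    """Log-average miss rate from a detector's miss-rate/FPPI curve.

    `fppi` must be positive and sorted in increasing order; `miss_rate`
    holds the corresponding miss rates.
    """
    ref = np.logspace(-2.0, 0.0, n_points)                 # FPPI reference values
    mr = np.interp(np.log(ref), np.log(fppi), miss_rate)   # interpolate in log-FPPI
    # Geometric mean of the sampled miss rates, as in the usual Caltech protocol.
    return float(np.exp(np.mean(np.log(np.maximum(mr, 1e-10)))))
```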

KITTI pedestrian dataset The KITTI dataset [36] consists of 7481 training images and 7518 test images, comprising more than 80 thousand annotated objects in traffic scenes. The KITTI dataset provides a large number of pedestrians with different sizes, viewpoints, occlusions, and truncations. Due to the diversity of these objects, the evaluation is split into three subsets (Easy, Moderate, Hard) with respect to object size, occlusion and truncation. We use the KITTI training set as the training data in our experiments.

For the KITTI dataset, average precision (AP) is used to evaluate the detection performance. The average precision summarizes the shape of the precision-recall curve and is defined as the mean precision at a set of evenly spaced recall levels. All methods are ranked based on the Moderate results.
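A minimal sketch of this AP definition is given below; the number of recall levels and the "maximum precision at recall >= r" interpolation are assumptions, since the exact KITTI protocol is not restated here.

```python
import numpy as np

def average_precision(recall, precision, n_levels=11):
    """Mean interpolated precision at evenly spaced recall levels (sketch).

    `recall` and `precision` are numpy arrays describing the PR curve.
    """
    ap = 0.0
    for r in np.linspace(0.0, 1.0, n_levels):
        above = precision[recall >= r]
        ap += (above.max() if above.size else 0.0) / n_levels
    return ap
```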

Boosted decision forest Unless otherwise specified, we train all our boosted decision forests using the following settings. The boosted decision model consists of a large number of shallow decision trees trained via the shrinkage version of real-AdaBoost [37]. The model uses a fixed detection window size, one bootstrapping iteration is used to collect hard negatives and re-train the model, and the detector is applied in a sliding-window fashion with a stride of a few pixels.
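As a rough illustration of this training setup, the following sketch implements a shrinkage version of real-AdaBoost over shallow decision trees. The tree count, depth and shrinkage value are placeholders (the paper's exact settings are not reproduced here), the feature matrix is assumed to hold one row of channel/CFM features per training window, and the bootstrapping round that mines hard negatives is omitted.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_real_adaboost(X, y, n_trees=2048, depth=4, shrinkage=0.05, eps=1e-6):
    """Shrinkage Real-AdaBoost over shallow trees (sketch).

    X: (n_windows, n_features) features; y: labels in {-1, +1}.
    """
    w = np.full(len(y), 1.0 / len(y))
    trees = []
    for _ in range(n_trees):
        tree = DecisionTreeClassifier(max_depth=depth)
        tree.fit(X, y, sample_weight=w)
        # Real-AdaBoost weak score: half log-odds of the weighted probability of y=+1
        # (with y in {-1,+1}, column 1 of predict_proba is the positive class).
        p = np.clip(tree.predict_proba(X)[:, 1], eps, 1 - eps)
        f = 0.5 * np.log(p / (1 - p))
        # Re-weight examples, emphasising those the shrunken weak learner gets wrong.
        w *= np.exp(-y * shrinkage * f)
        w /= w.sum()
        trees.append(tree)
    return trees

def boosted_score(trees, X, shrinkage=0.05, eps=1e-6):
    """Sum of shrunken weak-learner scores; threshold it to classify windows."""
    F = np.zeros(len(X))
    for tree in trees:
        p = np.clip(tree.predict_proba(X)[:, 1], eps, 1 - eps)
        F += shrinkage * 0.5 * np.log(p / (1 - p))
    return F
```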

2 Boosted Decision Forests with Multi-layer CFMs

In this section, we first show that the performance of boosted decision forests with CFMs can be significantly improved by simply fine-tuning DCNNs with hard negative data extracted through bootstrapping. Then boosted decision forests are trained with CFMs from different layers, and the resulting ensemble model achieves the best reported result on the Caltech dataset.

2.1 Fine-tuning DCNNs with Bootstrapped Data

In this work, the VGG [9] model is used to extract CFMs. The VGG model was originally pre-trained on the ImageNet data with image-level annotations and was not trained specifically for the pedestrian detection task. It is therefore expected that the detection performance of boosted decision forests trained with CFMs can be improved by fine-tuning the VGG model with Caltech pedestrian data.

To adapt the pre-trained VGG model to the pedestrian detection task, we modify the structure of the model. We replace the 1000-way classification layer with a randomly initialized binary classification layer and reduce the input resolution from the original 224x224 pixels. We also reduce the number of neurons in the fully connected layers. We fine-tune all layers of this modified VGG except the first convolutional layers, since they correspond to low-level features which are largely universal for most visual objects. A smaller initial learning rate is used for the convolutional layers than for the fully connected layers, and the learning rate is divided by a fixed factor after a fixed number of iterations. For fine-tuning, positive and negative examples are collected by different approaches: positive samples are those overlapping a ground-truth bounding box above a high IoU threshold, and negative samples are those below a low threshold. At each SGD iteration, we uniformly sample positive and negative examples to construct each mini-batch.
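A hedged PyTorch sketch of this adaptation is shown below, using torchvision's VGG16. The number of frozen layers, the two learning rates, the weight decay and the step schedule are illustrative placeholders, not the values used in the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load ImageNet-pretrained VGG16 and adapt it to binary pedestrian classification.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
vgg.classifier[6] = nn.Linear(4096, 2)       # replace the 1000-way layer

# Freeze the first convolutional layers (low-level, largely universal features);
# the number of frozen layers here is a placeholder.
for layer in vgg.features[:4]:
    for p in layer.parameters():
        p.requires_grad = False

# Separate learning rates for convolutional and fully connected layers, with step decay.
optimizer = torch.optim.SGD(
    [
        {"params": [p for p in vgg.features.parameters() if p.requires_grad], "lr": 1e-4},
        {"params": vgg.classifier.parameters(), "lr": 1e-3},
    ],
    lr=1e-3, momentum=0.9, weight_decay=5e-4,
)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20000, gamma=0.1)
# Each SGD step draws a mini-batch mixing positive and negative crops,
# followed by optimizer.step() and scheduler.step().
```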

We train boosted decision forests with the CFM extracted from the same convolutional layer of differently fine-tuned VGG models, and the results are shown in Table 1. Note that all the VGG models in this table are fine-tuned from the original model pre-trained on ImageNet data. It can be observed that the log-average miss rate is reduced by replacing the pre-trained VGG model with one fine-tuned on data collected by applying an ACF [5] detector to the training dataset. The detection performance is further improved if the model is fine-tuned on data bootstrapped using the previously trained model 3b. Another performance gain is obtained by applying shrinkage to the coefficients of the weak learners (see [38]). The last model (the last row in Table 1) is referred to as 3 from now on.

Model Fine-tuning data Shrinkage Miss rate (%)
3a No fine-tuning No
3b Collected by ACF No
3c Bootstrapping with 3b No
3 Bootstrapping with 3b Yes 13.49
Table 1: Performance improvements with different fine-tuning strategies and shrinkage (on the Caltech test set). All boosted decision forests are trained with the CFM extracted from the same convolutional layer of VGG. 3a: the original VGG model pre-trained on ImageNet is used to extract features. 3b: the VGG model is fine-tuned with data collected by an ACF [5] detector. 3c and 3: the fine-tuning data is obtained by bootstrapping with 3b. With the same fine-tuning data, applying shrinkage to AdaBoost brings an additional reduction in the MR.

2.2 Ensemble of Boosted Decision Forests

In the last experiment, we only used a CFM from a single layer of the VGG model. In this section, we explore the deep structure of the VGG model more thoroughly: it consists of 13 convolutional layers, followed by fully connected layers and a classification layer. The convolutional layers are organized into 5 convolutional stacks, and convolutional layers in the same stack share the same down-sampling ratio. We ignore the CFMs of the first two convolutional stacks (each containing 2 layers) since they are largely universal for most visual objects.

We train boosted decision forests with CFMs from individual convolutional layers of the VGG model fine-tuned with bootstrapped data (the last row in Table 1). All boosted decision forests are trained with the same data as 3. For models with Conv-3 features, the input image is passed directly through the convolutional layers, resulting in a feature map with a down-sampling ratio of 4; the corresponding boosted decision forests work as sliding-window detectors with a step size of 4 pixels. Models with Conv-4 and Conv-5 features are instead applied to proposals generated by the 3 model, because of the larger down-sampling ratios of Conv-4 and Conv-5: if the step size of a sliding-window detector is too large, the detection performance suffers.
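The sketch below illustrates one way to obtain per-window features from truncated VGG16 CFMs at the three down-sampling ratios. The truncation indices follow torchvision's layer ordering; the pooling of the cropped CFM into a fixed-size vector is an assumption made for illustration (the paper treats the CFMs as channel features for the boosted forests rather than prescribing this exact pooling).

```python
import torch
from torchvision import models

# Truncated VGG16 feature extractors covering the Conv-3/4/5 stacks.
vgg_features = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
cfm_extractors = {
    "conv3": vgg_features[:16],   # 1/4 resolution, 256 channels
    "conv4": vgg_features[:23],   # 1/8 resolution, 512 channels
    "conv5": vgg_features[:30],   # 1/16 resolution, 512 channels
}

@torch.no_grad()
def window_features(image, box, stack="conv4", out_hw=(16, 8)):
    """Pool the CFM inside a proposal box (x1, y1, x2, y2) into a fixed-size
    vector for a boosted forest.  `image` is a CHW float tensor."""
    fmap = cfm_extractors[stack](image.unsqueeze(0))      # 1 x C x H x W
    stride = image.shape[-1] / fmap.shape[-1]             # effective down-sampling ratio
    x1, y1, x2, y2 = [int(round(c / stride)) for c in box]
    crop = fmap[..., y1:max(y2, y1 + 1), x1:max(x2, x1 + 1)]
    pooled = torch.nn.functional.adaptive_avg_pool2d(crop, out_hw)
    return pooled.flatten().cpu().numpy()
```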

Convolutional layer Channels Down-sampling ratio Miss rate (%)
Conv3-1 256 4
Conv3-2 256 4
Conv3-3 (3) 256 4
Conv4-1 512 8
Conv4-2 512 8
Conv4-3 (4) 512 8
Conv5-1 (5) 512 16
Conv5-2 512 16
Conv5-3 512 16
Table 2: Comparison of detection performance (on the Caltech test set) of boosted decision forests trained on individual CFMs. Note that models with Conv-3 features work as sliding-window detectors, and models with Conv-4 and Conv-5 features are applied to the proposals generated by 3. The models trained with the top performing layer of each convolutional stack (marked in the table) are denoted as 3, 4, and 5 respectively.

Table 2 compares the detection performance of these boosted decision forests on the Caltech test set. We can observe that the MR is relatively high for the earliest Conv-3 layers and the deepest Conv-5 layers. We conjecture that the early Conv-3 layers provide relatively low-level features, which leads to under-fitting, while the semantic information in the deepest Conv-5 layers may be too coarse for pedestrian detection. According to Table 2, the best performing layer in each convolutional stack is marked in the table; the models trained with these three layers are denoted 3, 4, and 5. Fig. 1 shows the spatial distribution of the convolutional features that are frequently selected by these three models. We observe that the most active regions correspond to important human-body parts (such as the head and shoulders).

Figure 1: The spatial distribution of CFM features selected by the boosting algorithms. For a fixed-size input image, the three layers yield feature maps of decreasing spatial resolution. Red pixels indicate that a large number of features are selected in that region, and blue pixels correspond to low-frequency regions. The most important regions correspond to the head, shoulders, waist and feet of a human.
Figure 2: The framework of the ensemble of boosted decision forests with multi-layer CFMs (3+4+5), which achieves the best reported MR on the Caltech test set.

The boosted decision forests trained with CFMs of these three layers are further combined simply by score averaging. Fig. 2 shows the framework of the resulting ensemble model. First, the 3 model works as a sliding-window detector that rejects the majority of negative examples and passes region proposals to 4 and 5. Both 4 and 5 generate a confidence score for each incoming proposal. The final score is computed by averaging the scores output by the three boosted decision forests. This model delivers the best reported log-average miss rate on Caltech without using any sophisticated, specially designed algorithms.
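The cascade logic can be summarized by the following sketch; `sliding_windows`, `cfm_features` and the `score` methods are hypothetical stand-ins for the components described above, and the rejection threshold is a placeholder.

```python
def ensemble_detect(image, forest3, forest4, forest5, cfm_features,
                    sliding_windows, reject_thresh=0.0):
    """Sketch of the score-averaging cascade: the Conv-3 forest scans the image,
    survivors are re-scored by the Conv-4 and Conv-5 forests, and the three
    scores are averaged."""
    proposals = []
    for box in sliding_windows(image):
        s3 = forest3.score(cfm_features(image, box, "conv3"))
        if s3 > reject_thresh:            # cheap stage rejects most negatives
            proposals.append((box, s3))

    detections = []
    for box, s3 in proposals:
        s4 = forest4.score(cfm_features(image, box, "conv4"))
        s5 = forest5.score(cfm_features(image, box, "conv5"))
        detections.append((box, (s3 + s4 + s5) / 3.0))   # simple score averaging
    return detections
```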

Model combination Avg. miss rate (%)
3+4
3+5
3+4+5
3+4+5+DCNN 10.07
Table 3: Comparison of performance (on the Caltech test set) of different ensemble models. DCNN: the entire VGG model fine-tuned with data collected by 3. Combining multi-layer CFM models significantly improves over the single-layer CFM models.

We also evaluate other combinations of the ensemble models. Furthermore, a VGG model is fine-tuned with another round of bootstrapping (using 3) and its final output is also combined to improve the detection performance. The corresponding results can be found in Table 3. We can see that combining two layers already beats all existing approaches on Caltech, and adding the entire large VGG model gives a further small improvement.

3 Pixel Labelling Improves Pedestrian Detection

Figure 3: The framework for pedestrian detection with pixel-labelling. The region proposals and pixel-level score maps are obtained by individually applying the sliding-window detector and the pixel labelling model. Next, the weighted sum of pixel scores within a proposal region is aggregated with the detector score of the same proposal region.

In this section, the sliding-window based detectors are enhanced by semantic pixel labelling. By incorporating DCNNs, the performance of pixel labelling (semantic image segmentation) methods has recently been improved significantly [32, 39, 30, 40, 41]. In general, we argue that pixel labelling models encode information complementary to sliding-window based detectors. Empirically, we show that consistent improvements are achieved over different types of detectors.

The segmentation method proposed in [39] is used here for pixel labelling, in which a DCNN model (VGG) is trained on the Cityscapes dataset [42]. The prediction map is refined by a fully-connected conditional random field (CRF) [43] with the DCNN responses as unary terms. The Cityscapes dataset that we use for training is similar to the KITTI dataset and contains dense pixel annotations of 19 semantic classes such as road, building, car, pedestrian, sky, etc. Note that our models that exploit pixel labelling have used extra training data on top of the Caltech dataset. However, most deep learning based methods [1, 2] have also used extra data, at least the ImageNet dataset for pre-training the deep model. Pedestrian detection may benefit from semantic pixel labelling in the following aspects:

Multi-class information: Because it is learned from multiple classes, in contrast to object detectors typically trained with two-class data, the pixel labelling model carries richer object-level information.

Long-range context: By using CRFs (especially fully-connected CRFs) as a post-processing procedure, many models (for example, [39, 41, 40]) are able to capture long-range context. In contrast, sliding-window detectors only extract features from fixed-sized bounding boxes.

Object parts: The trained pixel labelling model may capture more fine-grained details, making it less sensitive to deformation and occlusion to some extent.

However, it is not straightforward to apply pixel labelling models to pedestrian detection problems. One of the main impediments is that it is difficult to estimate the object bounding boxes from the pixel score map, especially for people in crowds.

To this end, we propose to bring the pedestrian detector and the pixel labelling model together. In our framework (see Fig. 3), a sliding-window detector is responsible for providing region proposals, and a pixel labelling model is applied to the input image at the same time to generate a score map for the “person” class. Next, a weighted mask is applied to the proposal region of the “person” score map to generate the weighted sum of pixel scores. Finally, the weighted sum and the detector score for the same proposal are aggregated as the final score. The weighted mask is learned by averaging the pixel scores of ground-truth regions on the training images. To match the mask and the input proposals, we resize both ground-truth regions and test proposals to a fixed size in pixels (no surrounding pixels). Note that there are more sophisticated ways to exploit the labelling scores. For example, one can use the pixel labelling scores as image features, similar to ‘object bank’ [44], and train a linear model. In this work, we show that even a simple weighted sum of the pixel scores considerably improves the results.
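A minimal sketch of this re-scoring step is given below; the nearest-neighbour resizing and the balancing term `alpha` between detector score and pixel-score term are assumptions, since the paper only states that the two scores are aggregated.

```python
import numpy as np

def rescore_with_pixel_labels(boxes, det_scores, person_map, mask, alpha=1.0):
    """Add a weighted sum of 'person' pixel-labelling scores to each detector score.

    `person_map` is the H x W score map for the person class, `mask` is the
    fixed-size weight mask learned from ground-truth regions, and boxes are
    non-empty integer (x1, y1, x2, y2) rectangles inside the image.
    """
    mh, mw = mask.shape
    rescored = []
    for (x1, y1, x2, y2), det in zip(boxes, det_scores):
        crop = person_map[y1:y2, x1:x2]                    # pixel scores inside the proposal
        # Resample the crop to the mask resolution (nearest-neighbour for brevity).
        ys = np.linspace(0, crop.shape[0] - 1, mh).astype(int)
        xs = np.linspace(0, crop.shape[1] - 1, mw).astype(int)
        seg = float((crop[np.ix_(ys, xs)] * mask).sum())   # weighted sum of pixel scores
        rescored.append(det + alpha * seg)                 # aggregate the two scores
    return rescored
```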

Method Avg. miss rate (%) Improve. (%)
ACF [5]
ACF+Pixel label.
Checkerboards [6]
Checkerboards+Pixel label.
3 (ours)
3+Pixel label.
Table 4: Performance improvements from aggregating pixel labelling models with sliding-window detectors (on the Caltech test set). All three detectors achieve performance gains, which shows that pixel labelling can be used to help detection. Note that the performance of our ‘3+Pixel labelling’ model is already on par with the previously best reported result of [1].
Figure 4: Examples of some region proposals on the original images and the corresponding pixel score maps. A strong complementary relationship can be found in the generated proposals and the pixel score maps.

Table 4 shows the detection performance of different sliding-window detectors enhanced by pixel labelling. Boosted decision forests are trained here with three types of features: ACF [5], checkerboard features [6], and the CFM used by the 3 model. We can see that the performance of all three detectors is improved by aggregating the pixel labelling scores. Fig. 4 presents some region proposals on the original images and the corresponding pixel score maps. Some of the false proposals generated by the pedestrian detector (3) can be removed by considering the context of a larger region (the largest bounding box in the first column of Fig. 4), and some occluded pedestrians still have responses on the pixel score map (the rightmost bounding box in the fourth column of Fig. 4). This clearly illustrates why the combination works.

4 Fusing Models

4.1 Using Complementary Hand-crafted Features

The detection performance of the 3 model is critical in the proposed ensemble model, since the later components rely on its detection results. In order to enhance the 3 model, we create two variants of it by combining two hand-crafted feature types: ACF and optical flow. We augment the 3 features with the ACF and optical flow features to train an ensemble of boosted decision forests. Optical flow features are extracted in the same way as in [7].
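A trivial sketch of this feature augmentation: per-window CFM, ACF and flow feature vectors are simply concatenated before boosting, so that a single boosted forest can select weak learners from all feature pools. The individual feature extractors are assumed to exist elsewhere.

```python
import numpy as np

def augmented_window_features(cfm_vec, acf_vec, flow_vec=None):
    """Concatenate per-window Conv-3 CFM, ACF and (optionally) optical-flow features."""
    parts = [cfm_vec, acf_vec] + ([flow_vec] if flow_vec is not None else [])
    return np.concatenate(parts)
```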

Method Avg. miss rate (%)
3 only
3+ACF
3+ACF+Flow
(3+ACF)+4+5+DCNN
(3+ACF+Flow)+4+5+DCNN
Table 5: Comparison of detection results of different variants of the 3 detector (on the Caltech test set). The convolutional features used by 3 are combined with different types of hand-crafted features and used to train a boosted decision forest. The performance of both the variants and the full ensemble models is improved by these additional features. Flow: optical flow features. DCNN: the entire VGG model fine-tuned with data collected by 3.

Table 5 shows the detection results of different variants of the 3 model. By adding the ACF features, the MR of the 3 detector is reduced, and with the extra optical flow features the MR is reduced further. These experimental results demonstrate that hand-crafted features carry complementary information which can further improve the DCNN convolutional features. This is easy to understand: the ACF features may be viewed as lower-level features compared with the middle-level features in 3, and the optical flow clearly encodes motion information which is not present in the 3 features. By adding the other components of the proposed ensemble model, our detector achieves its best MR; the MR increases slightly when the motion information is removed.

4.2 Pixel Labelling

As shown in Section 3, the pixel labelling model is also complementary to the convolutional features. Table 6 shows the detection performance of different ensemble models enhanced by the pixel labelling model. The best result is achieved by combining the largest number of different types of models (referred to as All-in-one), which reduces the MR on the Caltech test set below the previous best. Note that the combination rule used by our methods is simple, which implies a potential for further improvement.

Method Avg. miss rate (%)
3+Pixel label.
3+4+5+Pixel label.
3+4+5+DCNN+Pixel label.
(3+ACF)+4+5+DCNN+Pixel label.
(3+ACF+Flow)+4+5+DCNN+Pixel label. (All-in-one)
Table 6: Comparison of detection performance (on the Caltech test set) of different ensemble models with pixel labelling. DCNN: the entire VGG model fine-tuned with hard negative data collected by 3; Pixel label.: pixel labelling model; Flow: optical flow. The pixel labelling model consistently improves all the models considered in this table. The All-in-one model sets a new record on the Caltech pedestrian benchmark.

4.3 Ablation Studies

Model | 3a | 3 | 3+4 | 3+4+5 | 3+4+5+DCNN | 3+4+5+DCNN+Label. | All-in-one
Pipeline | 3a | fine-tuning | Add 4 | Add 5 | Add DCNN | Add Pixel Label. | Use (3+ACF+Flow)
Miss rate (%)
Improve. (%)
Table 7: Ablation studies of the All-in-one model on the Caltech test set

We investigate the overall pipeline of the All-in-one model by adding each component step by step, as shown in Table 7. As the starting point, the 3a model, built on the original VGG model pre-trained on ImageNet data, has the highest miss rate. A performance gain is obtained by fine-tuning the VGG model with bootstrapped data. The detection results are improved further (better than all previous methods) by adding the 4 and 5 models to construct an ensemble model. We obtain another improvement if we use the entire VGG model (fine-tuned with data bootstrapped by 3) as a component of our ensemble model. Combining the pixel labelling information with the predicted bounding boxes further reduces the miss rate. By replacing the 3 model with the 3+ACF+Flow model, the MR of our ensemble model eventually reaches 8.93% on the Caltech test set.

4.4 Fast Ensemble Models

In this section, we investigate the speed of the proposed detector. Our All-in-one model takes about 8s to process one image on a workstation with one octa-core Intel Xeon 2.30GHz processor and one Nvidia Tesla K40c GPU. Most of the time (about 7s) is spent on extracting the CFMs over a multi-scale image pyramid. The remaining components of the ensemble model take less than 1s to process the passed region proposals. The pixel labelling model only uses about 0.25s per image since it only needs to be applied at a single scale. The current bottleneck of the proposed detector is clearly the 3 model, which is used to extract region proposals with associated detection scores. The detector can be accelerated by using a light-weight proposal method at the start of the pipeline in Fig. 2.

We use two pedestrian detectors, ACF [5] and Checkerboards [6], as the proposal methods. Our ACF detector consists of 4096 depth-4 decision trees, trained via real-AdaBoost. The model uses a fixed detection window size, is trained via four rounds of bootstrapping, and uses a sliding-window stride of 4 pixels. The Checkerboards detector is trained using almost identical parameters to those of ACF. The only difference is that the feature channels are obtained by convolving the ACF channels with a set of checkerboard filters; in our implementation, we adopt a set of 12 binary filters to generate the checkerboard feature channels. To limit the number of region proposals, we set the thresholds of the above two detectors to generate about 20 proposals per image.

Table 8 shows the detection performance of the original ensemble model and the fast ensemble models on the Caltech test set. We can observe that the quality of both sets of proposals is improved by a large margin after re-scoring with the ensemble models and the pixel labelling model. The best result among the fast ensemble models is achieved using proposals generated by the Checkerboards detector; this method uses the data collected by the Checkerboards detector as the initial fine-tuning data. With a negligible performance loss (1.12%), it is about six times faster than the original method. Note that the fast ensemble model (with Checkerboards proposals) also achieves state-of-the-art results.

Method Avg. miss rate (%) runtime (s)
3 (proposals)+4+5+DCNN+Pixel label. 8
ACF (proposals)+3+4+5+DCNN+Pixel label. 0.75
Checkerboards (proposals)+3+4+5+DCNN+Pixel label. 1.25
Table 8: Comparison of detection performance and runtime (on the Caltech test set) between the original ensemble model and the fast ensemble models.

4.5 Comparison to State-of-the-art Approaches

We compare the detection performance of our models with existing state-of-the-art approaches on the Caltech dataset. Table 9 compares our models with a wide range of detectors, including boosted decision trees trained on hand-crafted features, RCNN-based methods and the state-of-the-art methods on the Caltech test set. The performance of the first two types is quite close. Using only a single layer of convolutional feature maps, our 3 model outperforms all other methods except the two sophisticated methods [2, 1]. Note that the RCNN-based methods use larger models than 3, and that, as a feature representation, the CFM from our fine-tuned model significantly outperforms all the hand-crafted features. The 3+Pixel labelling model is comparable to the state-of-the-art performance achieved by the sophisticated methods [2, 1], and our 3+4+5 model performs even better without using any hand-crafted features. The best result is achieved by the All-in-one model, which combines a number of hand-crafted features and models.

Type Method Miss Rate (%)
Hand-crafted Features SpatialPooling [38]
SpatialPooling+ [7]
LDCF [45]
Checkerboards [6]
Checkerboards+ [6]
RCNN based AlexNet [17]
GoogLeNet [2]
State-of-the-arts DeepParts [2]
CompACT-Deep [1]
Ours 3
3+Label.
3+4+5
3+4+5+DCNN+Label.
All-in-one 8.93
Table 9: Detection performance of different types of detectors on the Caltech test set. Three types of approaches are compared in this table: boosted decision trees trained on hand-crafted features, RCNN-based methods and the state-of-the-art sophisticated methods. All of our models outperform the first three types of models, and our All-in-one model sets a new record MR on the Caltech pedestrian benchmark. '+' indicates methods trained with optical flow features.
Figure 5: Comparison to state-of-the-art on various Caltech test settings.

Fig. 5 shows a more complete evaluation of the proposed detection framework on various Caltech test settings. We can observe that our ensemble model achieves the best results on most test subsets. On the occlusion subsets, our models are only outperformed by DeepParts [2], which is specifically trained for handling occlusions.

Table 10: Detection results on the KITTI dataset. Note: 3DOP [46] is trained with stereo images.
Method | Moderate (%) | Easy (%) | Hard (%)
3DOP [46]
Fast-CFMs (Ours)
Regionlets [47]
CompACT-Deep [1]
DeepParts [2]
FilteredICF [6]
pAUCEnsT [7]
R-CNN [17]
Figure 6: Comparison to state-of-the-art on the KITTI test set.

Table 10 shows the results on the KITTI dataset. Since the images in KITTI are larger than those in Caltech, feature extraction for the 3 model is time-consuming; in our experiments, only the fast ensemble model with Checkerboards proposals is used for testing on KITTI. Our model achieves competitive AP on the Easy, Moderate, and Hard subsets. Fig. 6 presents the comparison of detection performance on the KITTI test set. It can be observed that the proposed detector outperforms all published monocular-based methods; note that 3DOP [46] is based on stereo images. The proposed ensemble model is the best-performing DCNN-based detector, surpassing both CompACT-Deep [1] and DeepParts [2].

5 Conclusions

In this work, we have built a simple yet powerful pedestrian detector, which re-uses the convolutional features of inner layers of a properly fine-tuned VGG16 model. This ‘vanilla’ model already achieves the best reported results on the Caltech dataset, using the same training data as previous DCNN approaches. With a few simple modifications, its variants achieve even better results.

We have presented extensive and systematic empirical evaluations of the effectiveness of DCNN features for pedestrian detection. We show that it is possible to build the best-performing pedestrian detector while avoiding complex custom designs. We also show that a pixel labelling model can be used to improve performance by simply combining the labelling scores with the detection scores of a standard pedestrian detector. Note that only simple combination rules are used here, which leaves potential for further improvement, for example using ROI pooling for further gains in speed and performance.

References

  • [1] Cai, Z., Saberian, M., Vasconcelos, N.: Learning complexity-aware cascades for deep pedestrian detection. In: Proc. IEEE Int. Conf. Comp. Vis. (2015)
  • [2] Tian, Y., Luo, P., Wang, X., Tang, X.: Deep learning strong parts for pedestrian detection. In: Proc. IEEE Int. Conf. Comp. Vis. (2015)
  • [3] Dalal, N., Triggs, B.: Histograms of oriented gradients for human detection. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2005)
  • [4] Shechtman, E., Irani, M.: Matching local self-similarities across images and videos. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2007)
  • [5] Dollár, P., Appel, R., Belongie, S., Perona, P.: Fast feature pyramids for object detection. IEEE Trans. Pattern Anal. Mach. Intell. 36(8) (2014) 1532–1545
  • [6] Zhang, S., Benenson, R., Schiele, B.: Filtered channel features for pedestrian detection. Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2015)
  • [7] Paisitkriangkrai, S., Shen, C., Hengel, A.v.d.: Pedestrian detection with spatially pooled features and structured ensemble learning. IEEE Trans. Pattern Anal. Mach. Intell. (2015)
  • [8] Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: Proc. Adv. Neural Inf. Process. Syst. (2012)
  • [9] Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: Proc. Int. Conf. Learning Representations. (2015)
  • [10] Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2015)
  • [11] Girshick, R., Donahue, J., Darrell, T., Malik, J.: Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2014)
  • [12] Tompson, J.J., Jain, A., LeCun, Y., Bregler, C.: Joint training of a convolutional network and a graphical model for human pose estimation. In: Proc. Adv. Neural Inf. Process. Syst. (2014)
  • [13] Hariharan, B., Arbeláez, P., Girshick, R., Malik, J.: Simultaneous detection and segmentation. In: Proc. Eur. Conf. Comp. Vis. (2014)
  • [14] Simonyan, K., Zisserman, A.: Two-stream convolutional networks for action recognition in videos. In: Proc. Adv. Neural Inf. Process. Syst. (2014)
  • [15] Branson, S., Van Horn, G., Belongie, S., Perona, P.: Bird species categorization using pose normalized deep convolutional nets. In: Proc. Bri. Conf. Mach. Vis. (2014)
  • [16] Krizhevsky, A., Hinton, G.: Learning multiple layers of features from tiny images. Technical report, University of Toronto (2009)
  • [17] Hosang, J., Omran, M., Benenson, R., Schiele, B.: Taking a deeper look at pedestrians. Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2015)
  • [18] Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2009)
  • [19] Enzweiler, M., Eigenstetter, A., Schiele, B., Gavrila, D.M.: Multi-cue pedestrian classification with partial occlusion handling. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2010) 990–997
  • [20] Felzenszwalb, P.F., Girshick, R.B., McAllester, D., Ramanan, D.: Object detection with discriminatively trained part-based models. IEEE Trans. Pattern Anal. Mach. Intell. 32(9) (2010) 1627–1645
  • [21] Lin, L., Wang, X., Yang, W., Lai, J.H.: Discriminatively trained and-or graph models for object shape detection. IEEE Trans. Pattern Anal. Mach. Intell. 37(5) (2015) 959–972
  • [22] Girshick, R., Iandola, F., Darrell, T., Malik, J.: Deformable part models are convolutional neural networks. Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2015)
  • [23] Mathias, M., Benenson, R., Timofte, R., Van Gool, L.: Handling occlusions with franken-classifiers. In: Proc. IEEE Int. Conf. Comp. Vis. (2013) 1505–1512
  • [24] Ouyang, W., Wang, X.: Single-pedestrian detection aided by multi-pedestrian detection. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2013)
  • [25] Tang, S., Andriluka, M., Schiele, B.: Detection and tracking of occluded people. Int. J. Comp. Vis. 110(1) (2014) 58–69
  • [26] Ouyang, W., Wang, X.: A discriminative deep model for pedestrian detection with occlusion handling. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2012)
  • [27] Ouyang, W., Wang, X.: Joint deep learning for pedestrian detection. In: Proc. IEEE Int. Conf. Comp. Vis. (2013)
  • [28] Luo, P., Tian, Y., Wang, X., Tang, X.: Switchable deep network for pedestrian detection. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2014)
  • [29] Ren, S., He, K., Girshick, R., Zhang, X., Sun, J.: Object detection networks on convolutional feature maps. arXiv:1504.06066 (2015)
  • [30] Hariharan, B., Arbeláez, P., Girshick, R., Malik, J.: Hypercolumns for object segmentation and fine-grained localization. Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2015)
  • [31] Yang, B., Yan, J., Lei, Z., Li, S.Z.: Convolutional channel features. Proc. IEEE Int. Conf. Comp. Vis. (2015)
  • [32] Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. arXiv:1411.4038 (2014)
  • [33] Yan, J., Yu, Y., Zhu, X., Lei, Z., Li, S.Z.: Object detection by labeling superpixels. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2015)
  • [34] Fidler, S., Mottaghi, R., Urtasun, R., et al.: Bottom-up segmentation for top-down detection. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2013)
  • [35] Dollar, P., Wojek, C., Schiele, B., Perona, P.: Pedestrian detection: An evaluation of the state of the art. IEEE Trans. Pattern Anal. Mach. Intell. 34(4) (2012) 743–761
  • [36] Geiger, A., Lenz, P., Urtasun, R.: Are we ready for autonomous driving? the kitti vision benchmark suite. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2012)
  • [37] Hastie, T., Tibshirani, R., Friedman, J., Franklin, J.: The elements of statistical learning: data mining, inference and prediction. The Mathematical Intelligencer 27(2) (2005) 83–85
  • [38] Paisitkriangkrai, S., Shen, C., van den Hengel, A.: Strengthening the effectiveness of pedestrian detection with spatially pooled features. In: Proc. Eur. Conf. Comp. Vis. (2014)
  • [39] Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L.: Semantic image segmentation with deep convolutional nets and fully connected CRFs. In: Proc. Int. Conf. Learning Representations. (2015)
  • [40] Zheng, S., Jayasumana, S., Romera-Paredes, B., Vineet, V., Su, Z., Du, D., Huang, C., Torr, P.: Conditional random fields as recurrent neural networks. arXiv:1502.03240 (2015)
  • [41] Lin, G., Shen, C., Reid, I., et al.: Efficient piecewise training of deep structured models for semantic segmentation. arXiv:1504.01013 (2015)
  • [42] Cordts, M., Omran, M., Ramos, S., Scharwächter, T., Enzweiler, M., Benenson, R., Franke, U., Roth, S., Schiele, B.: The cityscapes dataset. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. Workshops. (2015)
  • [43] Krähenbühl, P., Koltun, V.: Efficient inference in fully connected CRFs with Gaussian edge potentials. In: Proc. Adv. Neural Inf. Process. Syst. (2011)
  • [44] Li, L.J., Su, H., Lim, Y., Fei-Fei, L.: Object bank: An object-level image representation for high-level visual recognition. Int. J. Comput. Vision 107(1) (2014) 20–39
  • [45] Nam, W., Dollár, P., Han, J.H.: Local decorrelation for improved pedestrian detection. In: Proc. Adv. Neural Inf. Process. Syst. (2014)
  • [46] Chen, X., Kundu, K., Zhu, Y., Berneshawi, A.G., Ma, H., Fidler, S., Urtasun, R.: 3d object proposals for accurate object class detection. In: Proc. Adv. Neural Inf. Process. Syst. (2015) 424–432
  • [47] Wang, X., Yang, M., Zhu, S., Lin, Y.: Regionlets for generic object detection. In: Proc. IEEE Int. Conf. Comp. Vis. (2013) 17–24