The problem of pedestrian detection has been intensively studied in recent years. Prior to the very recent work on deep convolutional neural network (DCNN) based methods [1, 2], the top-performing pedestrian detectors were boosted decision forests built on carefully hand-crafted features, such as histograms of oriented gradients (HOG) , self-similarity (SS) , aggregate channel features (ACF) , filtered channel features  and optical flow .
Recently, DCNNs have significantly outperformed comparable methods on a wide variety of vision problems [8, 9, 10, 11, 12, 13, 14, 15]. For example, the region-based convolutional neural network (R-CNN)  achieved excellent performance for generic object detection, in which a set of potential detections (object proposals) is evaluated by a DCNN model. CifarNet  and AlexNet  have been extensively evaluated in the R-CNN detection framework in  for pedestrian detection. In their work, the best performance ( ) was achieved by AlexNet pre-trained on the ImageNet classification dataset. Note that this result is still inferior to conventional pedestrian detectors such as  and . The DCNN models in  under-perform mainly because the network design is not optimal for pedestrian detection. The performance of R-CNNs for pedestrian detection was further improved to  in  through the use of a deeper GoogLeNet model fine-tuned on the Caltech pedestrian dataset.
To explicitly model deformation and occlusion, another line of research for object detection uses part-based models [19, 20, 21, 22] and explicit occlusion handling [23, 24, 25]. DCNNs have also been incorporated along this stream of work for pedestrian detection [26, 27, 28], but none of these approaches has achieved better results than the best hand-crafted-feature-based method of  on the Caltech dataset.
Two very recent approaches relying on DCNNs improve pedestrian detection over hand-crafted features by a large margin (a  gain on Caltech). CompACT-Deep  combines hand-crafted features and fine-tuned DCNNs into a complexity-aware cascade. Tian et al.  fine-tuned a pool of part detectors using a pre-trained GoogLeNet, and the resulting ensemble model (referred to as DeepParts) delivers results similar to CompACT-Deep. Both approaches are much more sophisticated than the standard R-CNN framework: CompACT-Deep involves a variety of hand-crafted features, a small CNN model and a large VGG model . DeepParts contains  fine-tuned DCNN models and needs a set of strategies (including bounding-box shifting handling and part selection) to arrive at the reported result. Note that the high complexity of these DCNN models can lead to practical difficulties; for example, it can be too costly to load all 45 DCNN models into a GPU card.
Here we ask the question: is a complex DCNN-based learning approach really necessary for achieving state-of-the-art performance? Our answer is no. In this work, we propose alternative methods for pedestrian detection that are simpler in design yet offer comparable or even better performance. First, we extensively evaluate the convolutional feature maps (CFMs) extracted from the convolutional layers of a fine-tuned VGG model for pedestrian detection. Using only the CFM of a single convolutional layer, we train a boosted-tree-based detector, and the resulting model already significantly outperforms all previous methods except the two sophisticated DCNN frameworks above. This model can be seen as a strong baseline for pedestrian detection, as it is very simple to implement.
We show that CFMs from multiple convolutional layers can be used to train effective boosted decision forests, which are then combined simply by score averaging. The resulting ensemble model beats all competing methods on the Caltech dataset. We further improve detection performance by incorporating a semantic pixel labelling model. Next we review related work.
1.1 Related Work
1.1.1 Convolutional feature maps (CFMs)
It has been shown in [29, 30, 31] that CFMs have strong representation abilities for many tasks. Long et al.  cast all fully-connected layers in DCNNs as convolutions for semantic image segmentation. In , the CFMs from multiple layers are stacked into one vector and used for segmentation and localization. Ren et al.  learn a network on the CFMs (pooled to a fixed size) of a pre-trained model.
The work by Yang et al.  is closest to ours: it trains a boosted decision forest for pedestrian detection with CFM features from the Conv- layer of the VGG model , and its performance ( ) on Caltech is comparable to Checkerboards . It seems that the CFM used in  offers no significant advantage over hand-crafted features for pedestrian detection. The reason may be two-fold. First, the CFMs used in  are extracted from the pre-trained VGG model, which is not fine-tuned on a pedestrian dataset. Second, CFM features are extracted from only one layer, so the multi-layer structure of DCNNs is not fully exploited. We show in this work that both issues are critical to achieving good performance.
1.1.2 Segmentation for object detection
The cues used by segmentation approaches are typically complementary to those exploited by top-down methods. Recently, Yan et al.  proposed to perform generic object detection by labelling super-pixels, which results in an energy minimization problem with a data term learned by DCNN models. In [34, 13], segmented image regions (rather than bounding boxes) are generated as object proposals and then used for object detection.
In contrast to the above region (or super-pixel) based methods, we here exploit an even finer level of information, namely pixel labelling. In particular, we demonstrate that detection performance can be improved by simply re-scoring the proposals generated by a detector using pixel-level scores.
We revisit pedestrian detection with DCNNs by studying the impact of a few training details and design parameters. We show that fine-tuning of a DCNN model using pedestrian data is critically important. Proper bootstrapping has a considerable impact too. Besides these findings, other main contributions of this work can be summarized as follows.
The use of multi-layer CFMs for training a state-of-the-art pedestrian detector.
We show that it is possible to train an ensemble of boosted decision forests using multi-layer CFMs that outperforms all previous methods. For example, with CFM features extracted from two convolutional layers, we achieve a log-average miss rate of  on Caltech, which already performs better than all previous methods, including the two sophisticated DCNN-based methods [1, 2].
Incorporating semantic pixel labelling. We also propose a combination of sliding-window detectors and semantic pixel-labelling, which performs on par with the best of previous methods. To keep the method simple, we use the weighted sum of pixel-labelling scores within a proposal region to represent the score of the proposal.
The best reported pedestrian detector. A new performance record for Caltech is set by exploiting a DCNN as well as two complementary hand-crafted features: ACF and optical-flow features. This shows that some types of hand-crafted features are complementary to deep convolutional features.
Before we present our methods, we briefly describe the datasets, evaluation metric and boosting models in our experiments.
1.3 Datasets, Evaluation metric and Models
Caltech pedestrian dataset The Caltech dataset  is one of the most popular datasets for pedestrian detection. It contains k frames extracted from  hours of urban traffic video, with in total k annotated bounding boxes covering  unique pedestrians. The standard training and test sets use one out of every  frames. In our experiments, the training images are increased to one out of every  frames. Note that many competing methods [6, 31, 17] have used the same extended training set or even more data (every third frame).
For the Caltech dataset, we evaluate the performance of various detectors using the log-average miss rate (MR), which is computed by averaging the miss rate at false-positive rates evenly spaced in log space over the  to  false-positives-per-image (FPPI) range. Unless otherwise specified, the detection performance reported in the remainder of the paper is the MR on the Caltech test set.
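As a concrete reference for the metric above, the following is a minimal sketch of how a log-average miss rate can be computed from a miss-rate-vs-FPPI curve. The nine log-spaced reference points in [1e-2, 1e0] follow the standard Caltech protocol; the function name and the curve-sampling rule are our own illustration.

```python
import numpy as np

def log_average_miss_rate(fppi, miss_rate, num_points=9):
    """Log-average miss rate: the miss rate sampled at FPPI values evenly
    spaced in log space, averaged in log space (Caltech uses nine points
    in [1e-2, 1e0])."""
    fppi = np.asarray(fppi, dtype=float)
    miss_rate = np.asarray(miss_rate, dtype=float)
    # Reference FPPI values, evenly spaced in log10 space.
    ref = np.logspace(-2.0, 0.0, num_points)
    sampled = []
    for r in ref:
        # Miss rate at the largest FPPI not exceeding r; if the curve
        # never reaches r, fall back to the lowest available point.
        below = np.where(fppi <= r)[0]
        sampled.append(miss_rate[below[-1]] if below.size else miss_rate[0])
    # Average in log space, then exponentiate back.
    return float(np.exp(np.mean(np.log(np.maximum(sampled, 1e-10)))))
```

Averaging in log space means each order of magnitude of FPPI contributes equally, which is why the metric is quoted as "log-average".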
KITTI pedestrian dataset The KITTI dataset  consists of 7481 training images and 7518 test images, comprising more than 80 thousand annotated objects in traffic scenes. The KITTI dataset provides a large number of pedestrians with different sizes, viewpoints, occlusions and truncations. Owing to this diversity, the dataset has three subsets (, , ) defined by the difficulty of object size, occlusion and truncation. We use the  training subset as the training data in our experiments.
For the KITTI dataset, average precision (AP) is used to evaluate detection performance. The average precision summarizes the shape of the precision-recall curve, and is defined as the mean precision at a set of evenly spaced recall levels. All methods are ranked based on the results of the difficult subset.
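The AP definition above can be sketched as the classic interpolated average precision over evenly spaced recall levels; the function below is an illustrative 11-point implementation, not the benchmark's official evaluation code.

```python
import numpy as np

def average_precision(recall, precision, num_points=11):
    """Interpolated AP: mean of the maximum precision at recall >= r,
    for r evenly spaced in [0, 1] (the PASCAL/KITTI-style metric)."""
    recall = np.asarray(recall, dtype=float)
    precision = np.asarray(precision, dtype=float)
    ap = 0.0
    for r in np.linspace(0.0, 1.0, num_points):
        mask = recall >= r
        # Interpolation: take the best precision achievable at this
        # recall level or beyond; zero if the level is never reached.
        ap += precision[mask].max() if mask.any() else 0.0
    return ap / num_points
```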
Boosted decision forest Unless otherwise specified, we train all our boosted decision forests with the following parameters. The boosted model consists of  depth- decision trees, trained via the shrinkage version of real-AdaBoost . The model size is set to  pixels, and one bootstrapping iteration is performed to collect hard negatives and re-train the model. The sliding-window stride is set to  pixels.
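To make the role of shrinkage concrete, here is a minimal sketch of how a shrinkage-weighted real-AdaBoost ensemble scores a sample. The `Stump` weak learner and the parameter values are stand-ins for illustration only, since the paper's tree depth and shrinkage value are elided in the text above.

```python
class Stump:
    """A depth-1 decision tree (stump) as a stand-in weak learner.
    Real-AdaBoost weak learners output real-valued confidences."""
    def __init__(self, feature, threshold, left_score, right_score):
        self.feature, self.threshold = feature, threshold
        self.left_score, self.right_score = left_score, right_score

    def predict(self, x):
        if x[self.feature] <= self.threshold:
            return self.left_score
        return self.right_score

def forest_score(trees, x, shrinkage=0.1):
    """Boosted-forest score: shrinkage-scaled sum of weak-learner outputs.
    Shrinkage < 1 damps each tree's contribution, acting as a form of
    regularization on the additive model."""
    return shrinkage * sum(t.predict(x) for t in trees)
```

In a sliding-window detector, this score is computed per window and thresholded (or soft-cascaded) to decide whether the window is a detection.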
2 Boosted Decision Forests with Multi-layer CFMs
In this section, we first show that the performance of boosted decision forests with CFMs can be significantly improved simply by fine-tuning DCNNs on hard negative data extracted through bootstrapping. Then boosted decision forests are trained with CFMs from different layers, and the resulting ensemble model achieves the best reported result on the Caltech dataset.
2.1 Fine-tuning DCNNs with Bootstrapped Data
In this work, the VGG  model is used to extract CFMs. The VGG model was originally pre-trained on ImageNet with image-level annotations and was not trained specifically for pedestrian detection. It is therefore expected that the detection performance of boosted decision forests trained with CFMs can be improved by fine-tuning the VGG model on Caltech pedestrian data.
To adapt the pre-trained VGG model to pedestrian detection, we modify the structure of the model. We replace the -way classification layer with a randomly initialized binary classification layer and change the input size from  to  pixels. We also reduce the number of neurons in the fully connected layers from  to . We fine-tune all layers of this modified VGG except the first  convolutional layers, since they correspond to low-level features that are largely universal across visual objects. The initial learning rate is set to  for convolutional layers and  for fully connected layers, and is divided by  every  iterations. For fine-tuning, k positive and k negative examples are collected. The positive samples are those overlapping a ground-truth bounding box by , and the negative samples by . At each SGD iteration, we uniformly sample  positive and  negative samples to construct a mini-batch of size .
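The sample-collection and mini-batch scheme above can be sketched as follows. The IoU thresholds, batch size and positive fraction below are placeholder values, since the paper's exact numbers are not recoverable from the extracted text.

```python
import random

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def split_samples(candidates, gt_boxes, pos_thresh=0.5):
    """Label candidate boxes by their best IoU with any ground truth.
    The 0.5 threshold is a placeholder, not the paper's value."""
    positives, negatives = [], []
    for box in candidates:
        best = max((iou(box, g) for g in gt_boxes), default=0.0)
        (positives if best >= pos_thresh else negatives).append(box)
    return positives, negatives

def sample_minibatch(positives, negatives, batch_size=128, pos_fraction=0.5):
    """Uniformly draw a fixed-ratio mini-batch of positives and negatives."""
    n_pos = int(batch_size * pos_fraction)
    batch = random.sample(positives, min(n_pos, len(positives)))
    batch += random.sample(negatives, min(batch_size - len(batch), len(negatives)))
    return batch
```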
We train boosted decision forests with the CFM extracted from the Conv- layer of differently fine-tuned VGG models; the results are shown in Table 1. Note that all the VGG models in this table are fine-tuned from the original model pre-trained on ImageNet data. The log-average miss rate is reduced from  to  by replacing the pre-trained VGG model with one fine-tuned on data collected by applying an ACF  detector to the training set. The performance is further improved to  MR by fine-tuning on data bootstrapped with the previously trained model 3b. Another gain is obtained by applying shrinkage to the coefficients of the weak learners, with the shrinkage parameter set to  (see ). The last model (row  in Table 1) is referred to as 3 from now on.
| Model | Fine-tuning data | Shrinkage | Miss rate (%) |
| 3b | Collected by ACF | | |
| 3c | Bootstrapping with 3b | | |
| 3 | Bootstrapping with 3b | | 13.49 |
2.2 Ensemble of Boosted Decision Forests
In the last experiment, we only used a CFM from a single layer of the VGG model. In this section, we intensively explore the deep structure of the VGG model, which consists of  convolutional layers,  fully connected layers, and a classification layer. The convolutional layers are organized into  convolutional stacks; layers in the same stack share the same down-sampling ratio. We ignore the CFMs of the first two convolutional stacks (each containing  layers) since they are largely universal across visual objects.
We train boosted decision forests with CFMs from individual convolutional layers of the VGG model fine-tuned with bootstrapped data (row  in Table 1). All boosted decision forests are trained with the same data as 3. For models with Conv- features, the input image is passed directly through the convolutional layers, resulting in a feature map with a down-sampling ratio of . The corresponding boosted decision forests work as a sliding-window detector with a step size of . Models with Conv- and Conv- features are instead applied to proposals generated by the 3 model, because of the large down-sampling ratios of Conv- and Conv-: if the step size of a sliding-window detector is too large, detection performance suffers.
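The link between a layer's down-sampling ratio and the achievable sliding-window step size can be made concrete with a small helper that projects an image-space window onto feature-map coordinates; this is an illustrative sketch, not the paper's implementation.

```python
def window_to_fmap(window, ratio):
    """Project an image-space window (x1, y1, x2, y2) onto a convolutional
    feature map with the given down-sampling ratio. A one-cell step on the
    feature map corresponds to `ratio` pixels in the image, so a detector
    operating directly on the map cannot use a stride finer than `ratio`."""
    x1, y1, x2, y2 = window
    return (x1 // ratio, y1 // ratio,
            -(-x2 // ratio), -(-y2 // ratio))  # ceiling for the far corner
```

This is why deeper layers (with large ratios) are applied only to proposals: re-scoring a given box avoids sliding a coarse-strided window over the whole image.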
| Convolutional layer | Channels | Down-sampling ratio | Miss rate (%) |
Table 2 compares the detection performance of these boosted decision forests on the Caltech  setting. We observe that the MR is relatively high for the Conv- layer and the Conv- layer. We conjecture that the Conv- layer provides relatively low-level features, which lead to under-fitting; in contrast, the semantic information in the Conv- layer may be too coarse for pedestrian detection. According to Table 2, the best-performing layers are the inner layers of the Conv- (3), Conv- (4) and Conv- (5) stacks, respectively. Fig. 1 shows the spatial distribution of the convolutional features most frequently selected by these three models. Most active regions correspond to important human-body parts (such as the head and shoulders).
The boosted decision forests trained with the CFMs of these three layers are then combined simply through score averaging. Fig. 2 shows the framework of the resulting ensemble model. First, the 3 model works as a sliding-window detector, which rejects the majority of negative windows and passes region proposals to 4 and 5. Both 4 and 5 produce a confidence score for each incoming proposal, and the final score is the average of the scores of all three boosted decision forests. This model delivers the best reported log-average miss rate ( ) on the Caltech  setting without any sophisticated, specially designed components.
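The cascade-style score averaging described above can be sketched as follows; the detector callables are hypothetical stand-ins for the three boosted decision forests, not the paper's actual models.

```python
import numpy as np

def ensemble_detect(image, proposal_detector, rescoring_detectors,
                    proposal_thresh=0.0):
    """Cascade-style ensemble with score averaging: the first-stage
    detector scans the image and rejects most windows; surviving
    proposals are re-scored by the other members, and the final score
    is the mean over all members' scores."""
    detections = []
    for box, score in proposal_detector(image):
        if score < proposal_thresh:
            continue  # early rejection by the first-stage forest
        scores = [score] + [d(image, box) for d in rescoring_detectors]
        detections.append((box, float(np.mean(scores))))
    return detections
```

Because the first stage discards most windows, the deeper-layer forests only ever score a small set of proposals, keeping the ensemble cheap at test time.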
| Model combination | Avg. miss rate (%) |
We also evaluate other combinations for the ensemble model. Furthermore, a VGG model is fine-tuned with another round of bootstrapping (using 3), and its output is also combined to improve detection performance. The corresponding results can be found in Table 3. Combining two layers already beats all existing approaches on Caltech, and adding the entire large VGG model gives a further small improvement.
3 Pixel Labelling Improves Pedestrian Detection
In this section, sliding-window based detectors are enhanced by semantic pixel labelling. By incorporating DCNNs, the performance of pixel labelling (semantic image segmentation) methods has recently improved significantly [32, 39, 30, 40, 41]. In general, we argue that pixel labelling models encode information complementary to sliding-window based detectors; empirically, we show that consistent improvements are achieved over different types of detectors.
The segmentation method proposed in  is used here for pixel labelling, in which a DCNN model (VGG) is trained on the Cityscapes dataset . The prediction map is refined by a fully-connected conditional random field (CRF)  with the DCNN responses as unary terms. The Cityscapes dataset that we use for training is similar to the KITTI dataset; it contains dense pixel annotations for 19 semantic classes such as road, building, car, pedestrian and sky. Note that our models that exploit pixel labelling use extra training data on top of the Caltech dataset; however, most deep-learning-based methods [1, 2] also use extra data, at least the ImageNet dataset for pre-training the deep model. Pedestrian detection may benefit from semantic pixel labelling in the following respects:
Multi-class information: in contrast to object detectors, which are typically trained with two-class data, the pixel labelling model learns from multiple classes and thus carries richer object-level information.
Long-range context: using CRFs (especially fully-connected CRFs) as a post-processing step, many models (for example, [39, 41, 40]) can capture long-range context. In contrast, sliding-window detectors only extract features from fixed-size bounding boxes.
Object parts: the trained pixel labelling model may capture more fine-grained details, making it less sensitive to deformation and occlusion to some extent.
However, it is not straightforward to apply pixel labelling models to pedestrian detection problems. One of the main impediments is that it is difficult to estimate the object bounding boxes from the pixel score map, especially for people in crowds.
To this end, we propose to bring the pedestrian detector and the pixel labelling model together. In our framework (see Fig. 3), a sliding-window detector provides region proposals, while a pixel labelling model is applied to the input image to generate a score map for the “person” class. Next, a weighted mask is applied to the proposal region of the “person” score map to compute a weighted sum of pixel scores. Finally, this weighted sum and the detector score for the same proposal are aggregated into the final score. The weighted mask is learned by averaging the pixel scores of the ground-truth regions in the training images. To match the mask and the input proposals, we resize both ground-truth regions and test proposals to  pixels (no surrounding pixels). Note that there are more sophisticated ways to exploit the labelling scores; for example, one can use them as image features, similar to ‘object bank’ , and train a linear model. In this work, we show that even a simple weighted sum of the pixel scores considerably improves the results.
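The re-scoring scheme can be sketched as below. The nearest-neighbour resize and the mixing weight `alpha` are our own simplifications, since the text above only specifies a weighted sum of pixel scores combined with the detector score.

```python
import numpy as np

def rescore_proposal(det_score, person_score_map, box, weight_mask,
                     alpha=0.5):
    """Re-score a proposal with semantic pixel labelling: crop the
    'person' score map to the box, resize the crop to the mask's shape
    by nearest-neighbour sampling, take the mask-weighted sum, and blend
    it with the detector score. `alpha` is a hypothetical mixing weight."""
    x1, y1, x2, y2 = box
    crop = person_score_map[y1:y2, x1:x2]
    h, w = weight_mask.shape
    # Nearest-neighbour resize of the crop to the mask resolution.
    ys = (np.arange(h) * crop.shape[0] / h).astype(int)
    xs = (np.arange(w) * crop.shape[1] / w).astype(int)
    resized = crop[np.ix_(ys, xs)]
    label_score = float((resized * weight_mask).sum())
    return alpha * det_score + (1.0 - alpha) * label_score
```

In practice the weight mask would be the average ground-truth "person" score pattern learned from training images, so pixels that are usually part of a pedestrian (head, torso) contribute more than background corners of the box.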
| Method | Avg. miss rate (%) | Improvement (%) |
Table 4 shows the detection performance of different sliding-window detectors enhanced by pixel labelling. Boosted decision forests are trained with three types of features: ACF , checkerboard features , and the CFM from the Conv- layer of the VGG model (3). The performance of all three detectors is improved by aggregating the pixel labelling scores. Fig. 4 presents some region proposals on the original images and the corresponding pixel score maps. Some false proposals generated by the pedestrian detector (3) can be removed by considering the context of a larger region (the largest bounding box in the first column of Fig. 4), and some occluded pedestrians still have responses on the pixel score map (the rightmost bounding box in the fourth column of Fig. 4). This clearly illustrates why the combination works.
4 Fusing Models
4.1 Using Complementary Hand-crafted Features
The detection performance of the 3 model is critical in the proposed ensemble model, since the later components rely on its detection results. To enhance the 3 model, we create two variants by combining two hand-crafted feature types: ACF and optical flow. We augment the 3 features with the ACF and optical-flow features to train an ensemble of boosted decision forests. Optical-flow features are extracted in the same way as in .
| Method | Avg. miss rate (%) |
Table 5 shows the detection results of different variants of the 3 model. By adding the ACF features, the MR of the 3 detector is reduced by . With the extra optical-flow features, the MR is further reduced to . These results demonstrate that hand-crafted features carry complementary information that can further improve on the DCNN convolutional features. This is easy to understand: the ACF features can be viewed as lower-level features compared with the middle-level features in 3, and optical flow encodes motion information that is absent from the 3 features. By adding the other components of the proposed ensemble model, our detector achieves  MR; removing the motion information slightly increases the MR to .
4.2 Pixel Labelling
As shown in Section 3, the pixel labelling model is also complementary to convolutional features. Table 6 shows the detection performance of different ensemble models enhanced by the pixel labelling model. The best result is achieved by combining the largest number of different model types (referred to as All-in-one), which reduces the MR on the Caltech test set from the previous best of  to . Note that the combination rule used by our methods is simple, which implies potential for further improvement.
| Method | Avg. miss rate (%) |
| DCNN + Pixel label. (All-in-one) | |
4.3 Ablation Studies
| Pipeline | 3a | Fine-tuning | Add 4 | Add 5 | Add DCNN | Add pixel label. | Use 3+ACF+Flow |
| Miss rate (%) | | | | | | | |
We investigate the overall pipeline of the All-in-one model by adding each component step by step, as shown in Table 7. As the starting point, the 3a model with the original VGG model pre-trained on ImageNet data achieves a miss rate of . A performance gain is obtained by fine-tuning the VGG model with bootstrapped data. The detection result improves to  (better than all previous methods) by adding the 4 and 5 models to construct an ensemble. We obtain a further improvement by using the entire VGG model (fine-tuned on data bootstrapped with 3) as a component of our ensemble. Combining the pixel labelling information with the predicted bounding boxes further reduces the miss rate by . By replacing the 3 model with the 3+ACF+Flow model, the MR of our ensemble model finally reaches  on the Caltech test set.
4.4 Fast Ensemble Models
In this section, we investigate the speed of the proposed detector. Our All-in-one model takes about 8s to process one image on a workstation with an octa-core Intel Xeon 2.30GHz processor and an Nvidia Tesla K40c GPU. Most of the time (about 7s) is spent extracting CFMs over a multi-scale image pyramid. The remaining components of the ensemble model take less than 1s to process the passed region proposals. The pixel labelling model uses only about 0.25s per image, since it is applied at a single scale. The current bottleneck is thus the 3 model, which extracts region proposals with associated detection scores. The detector can be accelerated by using a light-weight proposal method at the start of the pipeline in Fig. 2.
We use two pedestrian detectors, ACF  and checkerboards , as proposal methods. Our ACF detector consists of 4096 depth-4 decision trees, trained via real-AdaBoost. The model size is  pixels, and the detector is trained with four rounds of bootstrapping; the sliding-window stride is 4 pixels. The checkerboards detector is trained with almost identical parameters. The only difference is that its feature channels are obtained by convolving the ACF channels with a set of checkerboard filters; in our implementation, we adopt a set of 12 binary filters to generate the checkerboard feature channels. To limit the number of region proposals, we set the thresholds of the two detectors to generate about 20 proposals per image.
Table 8 shows the detection performance of the original ensemble model and the fast ensemble models on the Caltech test set. The quality of the proposals is improved by a large margin by both the ensemble models and the pixel labelling model. The best fast result is achieved using proposals from the checkerboards detector; this variant uses the data collected by the checkerboards detector as the initial fine-tuning data. With a negligible performance loss (about 1.12%), it is about 6 times faster than the original method. Note that the fast ensemble model (with checkerboard proposals) also achieves state-of-the-art results.
| Method | Avg. miss rate (%) | Runtime (s) |
| 3 (proposals) + 4 + 5 + DCNN + Pixel label. | | 8 |
| ACF (proposals) + 3 + 4 + 5 + DCNN + Pixel label. | | 0.75 |
| Checkerboards (proposals) + 3 + 4 + 5 + DCNN + Pixel label. | | 1.25 |
4.5 Comparison to State-of-the-art Approaches
We compare the detection performance of our models with existing state-of-the-art approaches on the Caltech dataset. Table 9 compares our models with a wide range of detectors on the Caltech test set, including boosted decision trees trained on hand-crafted features, RCNN-based methods and the current state-of-the-art methods. The performance of the first two types is quite close. Using only a single layer of convolutional feature map, our 3 model outperforms all other methods except the two sophisticated methods [2, 1]. Note that the RCNN-based methods use larger models than 3. As a feature representation, the CFM from the Conv- layer of our fine-tuned model significantly outperforms all hand-crafted features. The 3 + Pixel labelling model is comparable to the state-of-the-art performance achieved by the sophisticated methods [2, 1], and our 3+4+5 model performs even better. Without using hand-crafted features, our model achieves  MR. The best result is achieved by the All-in-one model, which combines a number of hand-crafted features and models.
| Type | Method | Miss Rate (%) |
| Hand-crafted features | SpatialPooling  | |
| RCNN based | AlexNet  | |
Fig. 5 shows a more complete evaluation of the proposed detection framework on various Caltech test settings, including , , ,  and . Our ensemble model achieves the best results on most test subsets (including ). On the  subset, our models are only outperformed by DeepParts , which is specifically trained to handle occlusions.
Table 10 shows the results on the KITTI dataset. Since KITTI images are larger than those in Caltech, feature extraction for the 3 model is time-consuming; in our experiments, only the fast ensemble model with checkerboards proposals is tested on KITTI. Our model achieves competitive results of ,  and  AP on the ,  and  subsets respectively. Fig. 4.5 presents the comparison of detection performance on the KITTI  test subset. The proposed detector outperforms all published monocular-based methods; note that 3DOP  is based on stereo images. The proposed ensemble model is the best-performing DCNN-based detector, surpassing CompACT-Deep  and DeepParts  by  and  respectively.
In this work, we have built a simple yet powerful pedestrian detector, which re-uses inner layers of convolutional features extracted by a properly fine-tuned VGG16 model. This ‘vanilla’ model already achieves the best reported results on the Caltech dataset, using the same training data as previous DCNN approaches. With a few simple modifications, its variants achieve even better results.
We have presented extensive and systematic empirical evaluations of the effectiveness of DCNN features for pedestrian detection. We show that it is possible to build the best pedestrian detector while avoiding complex custom designs. We also show that a pixel labelling model can improve performance by simply combining the labelling scores with the detection scores of a standard pedestrian detector. Note that only simple combination rules are used here, which leaves potential for further improvement, for example ROI pooling for further speed and performance gains.
-  Cai, Z., Saberian, M., Vasconcelos, N.: Learning complexity-aware cascades for deep pedestrian detection. In: Proc. IEEE Int. Conf. Comp. Vis. (2015)
-  Tian, Y., Luo, P., Wang, X., Tang, X.: Deep learning strong parts for pedestrian detection. In: Proc. IEEE Int. Conf. Comp. Vis. (2015)
-  Dalal, N., Triggs, B.: Histograms of oriented gradients for human detection. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2005)
-  Shechtman, E., Irani, M.: Matching local self-similarities across images and videos. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2007)
-  Dollár, P., Appel, R., Belongie, S., Perona, P.: Fast feature pyramids for object detection. IEEE Trans. Pattern Anal. Mach. Intell. 36(8) (2014) 1532–1545
-  Zhang, S., Benenson, R., Schiele, B.: Filtered channel features for pedestrian detection. Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2015)
-  Paisitkriangkrai, S., Shen, C., Hengel, A.v.d.: Pedestrian detection with spatially pooled features and structured ensemble learning. IEEE Trans. Pattern Anal. Mach. Intell. (2015)
-  Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: Proc. Adv. Neural Inf. Process. Syst. (2012)
-  Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: Proc. Int. Conf. Learning Representations. (2015)
-  Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2015)
-  Girshick, R., Donahue, J., Darrell, T., Malik, J.: Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2014)
-  Tompson, J.J., Jain, A., LeCun, Y., Bregler, C.: Joint training of a convolutional network and a graphical model for human pose estimation. In: Proc. Adv. Neural Inf. Process. Syst. (2014)
-  Hariharan, B., Arbeláez, P., Girshick, R., Malik, J.: Simultaneous detection and segmentation. In: Proc. Eur. Conf. Comp. Vis. (2014)
-  Simonyan, K., Zisserman, A.: Two-stream convolutional networks for action recognition in videos. In: Proc. Adv. Neural Inf. Process. Syst. (2014)
-  Branson, S., Van Horn, G., Belongie, S., Perona, P.: Bird species categorization using pose normalized deep convolutional nets. In: Proc. Bri. Conf. Mach. Vis. (2014)
-  Krizhevsky, A., Hinton, G.: Learning multiple layers of features from tiny images. Technical report, University of Toronto (2009)
-  Hosang, J., Omran, M., Benenson, R., Schiele, B.: Taking a deeper look at pedestrians. Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2015)
-  Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2009)
-  Enzweiler, M., Eigenstetter, A., Schiele, B., Gavrila, D.M.: Multi-cue pedestrian classification with partial occlusion handling. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2010) 990–997
-  Felzenszwalb, P.F., Girshick, R.B., McAllester, D., Ramanan, D.: Object detection with discriminatively trained part-based models. IEEE Trans. Pattern Anal. Mach. Intell. 32(9) (2010) 1627–1645
-  Lin, L., Wang, X., Yang, W., Lai, J.H.: Discriminatively trained and-or graph models for object shape detection. IEEE Trans. Pattern Anal. Mach. Intell. 37(5) (2015) 959–972
-  Girshick, R., Iandola, F., Darrell, T., Malik, J.: Deformable part models are convolutional neural networks. Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2015)
-  Mathias, M., Benenson, R., Timofte, R., Van Gool, L.: Handling occlusions with franken-classifiers. In: Proc. IEEE Int. Conf. Comp. Vis. (2013) 1505–1512
-  Ouyang, W., Wang, X.: Single-pedestrian detection aided by multi-pedestrian detection. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2013)
-  Tang, S., Andriluka, M., Schiele, B.: Detection and tracking of occluded people. Int. J. Comp. Vis. 110(1) (2014) 58–69
-  Ouyang, W., Wang, X.: A discriminative deep model for pedestrian detection with occlusion handling. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2012)
-  Ouyang, W., Wang, X.: Joint deep learning for pedestrian detection. In: Proc. IEEE Int. Conf. Comp. Vis. (2013)
-  Luo, P., Tian, Y., Wang, X., Tang, X.: Switchable deep network for pedestrian detection. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2014)
-  Ren, S., He, K., Girshick, R., Zhang, X., Sun, J.: Object detection networks on convolutional feature maps. arXiv:1504.06066 (2015)
-  Hariharan, B., Arbeláez, P., Girshick, R., Malik, J.: Hypercolumns for object segmentation and fine-grained localization. Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2015)
-  Yang, B., Yan, J., Lei, Z., Li, S.Z.: Convolutional channel features. Proc. IEEE Int. Conf. Comp. Vis. (2015)
-  Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. arXiv:1411.4038 (2014)
-  Yan, J., Yu, Y., Zhu, X., Lei, Z., Li, S.Z.: Object detection by labeling superpixels. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2015)
-  Fidler, S., Mottaghi, R., Urtasun, R., et al.: Bottom-up segmentation for top-down detection. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2013)
-  Dollar, P., Wojek, C., Schiele, B., Perona, P.: Pedestrian detection: An evaluation of the state of the art. IEEE Trans. Pattern Anal. Mach. Intell. 34(4) (2012) 743–761
-  Geiger, A., Lenz, P., Urtasun, R.: Are we ready for autonomous driving? the kitti vision benchmark suite. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. (2012)
-  Hastie, T., Tibshirani, R., Friedman, J., Franklin, J.: The elements of statistical learning: data mining, inference and prediction. The Mathematical Intelligencer 27(2) (2005) 83–85
-  Paisitkriangkrai, S., Shen, C., van den Hengel, A.: Strengthening the effectiveness of pedestrian detection with spatially pooled features. In: Proc. Eur. Conf. Comp. Vis. (2014)
-  Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L.: Semantic image segmentation with deep convolutional nets and fully connected CRFs. In: Proc. Int. Conf. Learning Representations. (2015)
-  Zheng, S., Jayasumana, S., Romera-Paredes, B., Vineet, V., Su, Z., Du, D., Huang, C., Torr, P.: Conditional random fields as recurrent neural networks. arXiv:1502.03240 (2015)
-  Lin, G., Shen, C., Reid, I., et al.: Efficient piecewise training of deep structured models for semantic segmentation. arXiv:1504.01013 (2015)
-  Cordts, M., Omran, M., Ramos, S., Scharwächter, T., Enzweiler, M., Benenson, R., Franke, U., Roth, S., Schiele, B.: The cityscapes dataset. In: Proc. IEEE Conf. Comp. Vis. Patt. Recogn. Workshops. (2015)
-  Krähenbühl, P., Koltun, V.: Efficient inference in fully connected CRFs with Gaussian edge potentials. In: Proc. Adv. Neural Inf. Process. Syst. (2011)
-  Li, L.J., Su, H., Lim, Y., Fei-Fei, L.: Object bank: An object-level image representation for high-level visual recognition. Int. J. Comput. Vision 107(1) (2014) 20–39
-  Nam, W., Dollár, P., Han, J.H.: Local decorrelation for improved pedestrian detection. In: Proc. Adv. Neural Inf. Process. Syst. (2014)
-  Chen, X., Kundu, K., Zhu, Y., Berneshawi, A.G., Ma, H., Fidler, S., Urtasun, R.: 3d object proposals for accurate object class detection. In: Proc. Adv. Neural Inf. Process. Syst. (2015) 424–432
-  Wang, X., Yang, M., Zhu, S., Lin, Y.: Regionlets for generic object detection. In: Proc. IEEE Int. Conf. Comp. Vis. (2013) 17–24