Spatiotemporal Stacked Sequential Learning for Pedestrian Detection

07/14/2014 ∙ by Alejandro González et al. ∙ Universitat Autònoma de Barcelona

Pedestrian classifiers decide which image windows contain a pedestrian. In practice, such classifiers provide a relatively high response at neighbor windows overlapping a pedestrian, while the responses around potential false positives are expected to be lower. An analogous reasoning applies to image sequences. If there is a pedestrian located within a frame, the same pedestrian is expected to appear close to the same location in neighbor frames. Therefore, such a location is likely to receive high classification scores over several frames, while false positives are expected to be more spurious. In this paper we propose to exploit such correlations to improve the accuracy of base pedestrian classifiers. In particular, we propose to use two-stage classifiers which rely not only on the image descriptors required by the base classifiers but also on the responses of those base classifiers in a given spatiotemporal neighborhood. More specifically, we train pedestrian classifiers using a stacked sequential learning (SSL) paradigm. We use a new pedestrian dataset that we acquired from a car to evaluate our proposal at different frame rates. We also test on a well-known dataset: Caltech. The obtained results show that our SSL proposal boosts detection accuracy significantly with a minimal impact on the computational cost. Interestingly, SSL improves accuracy most in the most dangerous situations, i.e., when a pedestrian is close to the camera.


1 Introduction

Localizing humans in images is key for applications such as video surveillance, avoiding pedestrian-to-vehicle collisions, collecting statistics of players or athletes in sport videos, etc. Developing a reliable vision-based pedestrian detector is a very challenging task with more than a decade of history by now. As a result, a plethora of features, models, and learning algorithms have been proposed to develop the pedestrian classifiers that are at the core of pedestrian detectors [Geronimo:2013].

The research for boosting the accuracy of pedestrian classifiers has followed different lines. Some authors have researched image descriptors well-suited for pedestrians (e.g., HOG [Dalal:2005], HOG+LBP [Wang:2009], HOG+CSS+HOF [Walk:2010], OppHOG [Rao:2011], Haar+EOH [Geronimo:2010b], Integral Channels [Dollar:2009b], Macrofeatures [Nam:2011]), others have researched different image modalities (e.g., appearance+motion [Wojek:2009], appearance+depth+motion [Enzweiler:2011]), others have focused on the pedestrian model (e.g., deformable multi-component part-based models [Felzenszwalb:2010, Ramanan:2011, Cho:2012], multi-resolution models [Park:2010, Benenson:2012]), others on the classification architecture (e.g., HOG-SVM/LRF-MLP cascades [Oliveira:2010], Haar+EOH-AdaBoost cascades with meta-stages [Chen:2008], random forests of HOG+LBP-SVMs [Marin:2013b]), and others on the process of collecting good samples for training (e.g., generative approaches [Enzweiler:2008], active learning [Abramson:2005], virtual-world data with domain adaptation [Vazquez:2013b]).

The outcome of each of the above mentioned proposals is a pedestrian classifier, termed here the base classifier, which determines whether a given image window contains a pedestrian or background. In practice, such classifiers provide a relatively high response at neighbor windows overlapping a pedestrian, while the responses around potential false positives are expected to be lower. Note that, in fact, non-maximum suppression (NMS) is usually performed as the last detection stage in order to reduce multiple detections arising from the same pedestrian to a single one. An analogous reasoning applies to image sequences. If there is a pedestrian located within a frame, the same pedestrian is expected to appear close to the same location in neighbor frames. Therefore, such a location is likely to receive high classification scores during several frames, while false positives are expected to be more spurious. In fact, this may allow such undesired spurious detections to be removed by a tracker.

In this paper we propose to exploit such expected response correlations to improve the accuracy of the classification stage itself. In other words, instead of only exploiting spatiotemporal coherence by means of general post-classification stages such as NMS and tracking, we propose to add this type of reasoning to the classification stage itself as well. In particular, we propose to use a two-stage classification strategy which relies not only on the image descriptors required by the base classifiers, but also on the responses of those base classifiers in a given spatiotemporal neighborhood. More specifically, we train pedestrian classifiers using a stacked sequential learning (SSL) paradigm [Cohen:2005].

Temporal SSL involves the analysis of window volumes. The different types of temporal volumes can be potentially useful for different applications depending on the motion of the camera and the targets of interest, as well as on the working frame rate and the target sizes. In this paper, we are especially interested in on-board pedestrian detection within urban scenarios. Therefore, both camera and targets are in movement. Accordingly, we test our SSL approach for a fixed neighborhood (i.e., fixed spatial window coordinates across frames) and for a scheme relying on an ego-motion compensation approximation (i.e., varying spatial window coordinates across frames). Moreover, in order to assess the dependency of the results on the frame rate, we acquired our own pedestrian dataset at 30fps by normal driving in an urban scenario. This new dataset is used as the main guide for our experiments, but we also complement our study with another challenging publicly available dataset: Caltech.

In this paper we start from a competitive baseline in pedestrian detection [Dollar:2012], namely a holistic base classifier based on HOG+LBP features and linear SVM. Note that HOG/linear-SVM is the core of more sophisticated pedestrian detectors such as the popular deformable part-based model (DPM) [Felzenszwalb:2010]. Moreover, HOG and LBP are also used as base descriptors of multi-modal multi-view pedestrian models [Enzweiler:2011], and HOG+LBP/linear-SVM has been used for classifiers with occlusion handling [Wang:2009, Marin:2013], as well as for acting as node experts in random forest ensembles [Marin:2013b]. In addition, it has recently been shown that HOG+LBP/linear-SVM approaches are well suited for domain adaptation [Vazquez:2013b]. Altogether, we think that HOG+LBP/linear-SVM is a proper baseline for starting to assess our proposal. Moreover, we have extended this baseline with the HOF [Walk:2010] motion descriptor, which complements the appearance and texture features of the baseline.

Overall, the obtained results show that our spatiotemporal SSL proposal boosts detection accuracy significantly, especially when the pedestrians are close to the camera, i.e., in the most critical situations. This encourages extending the study to other pedestrian base classifiers as well as to other object categories.

The rest of the paper is organized as follows. In Sect. 2 we review some works related to our proposal. Section 3 briefly introduces the SSL paradigm. In Sect. 4 we develop our proposal. Section 5 presents the experiments carried out to assess our spatiotemporal SSL and discusses the obtained results. Finally, Sect. 6 draws our main conclusions.

2 Related work

The use of motion patterns as image descriptors was already proposed as an extension of spatial Haar-like filters for video surveillance applications (static zenithal camera) [Viola:2003, Cui:2007, Jones:2008] and for detecting human visual events [Ke:2005]. In these cases, the original spatial Haar-like filters were extended with a temporal dimension. The popular HOG descriptor was also extended to encode temporal information for detecting humans [Dalal:2006b], in this case using optical flow to compensate motion. In the same spirit, the histograms of flow (HOF) were also introduced for detecting pedestrians [Walk:2010]. In all cases, motion information was complemented with appearance information (e.g., Haar/HOG for luminance and/or color channels).

In contrast with these approaches, our proposal does not involve computing new temporal image descriptors as new features for the classification process. As we will see, we use the responses of a given base classifier in neighbor frames as new features for our SSL classifier. In fact, our proposal can also be applied to base classifiers that already incorporate motion features. Therefore, the reviewed literature and our proposal are complementary strategies.

Focusing on single frames, it has recently been shown how pedestrian detection accuracy can be boosted by analyzing the image area surrounding potential pedestrian detections. In particular, [Ding:2012, Chen:2013] follow an iterative process that uses contextual features of several orders (e.g., involving co-occurrences) to progressively enhance the response of base classifiers for true pedestrians and lower it for hallucinatory ones. Our SSL proposal does not require new image descriptors of the pedestrian surroundings and is not iterative, which makes it inherently faster. Moreover, we treat spatial and temporal response correlations equally, i.e., under the SSL paradigm, giving rise to a more straightforward method.

Finally, we would like to clarify that our SSL proposal is not a substitute for the NMS and tracking post-classification stages. What we expect is to allow these stages to produce more accurate results by increasing the accuracy of the classification stage. For instance, tracking must be used for predicting pedestrian intentions [Schneider:2013]; thus, if fewer false positives reach the tracker, we can reasonably expect to obtain more reliable pedestrian trajectories and so guess intentions in the very short time in which this information is required (e.g., around a quarter of a second before a potential collision).

3 Stacked sequential learning (SSL)

Figure 1: SSL learning. See main text in Sect. 3 for details.

Stacked sequential learning (SSL) was introduced by Cohen [Cohen:2005] with the aim of improving base classifiers when the data to be processed has some sort of sequential order. In particular, given a data sample to be classified, the core intuition is to consider not only the features describing the sample, but also the responses of the base classifier at its neighboring samples. Figure 1 summarizes the SSL learning process, which we explain in more detail in the rest of this section.

Let S be an ordered training sequence of cardinality N. The SSL approach involves selecting a sub-sequence of S for training a base classifier, h', and applying h' to the rest in order to train the SSL classifier, h''. If this were done only once, the final classifier would be trained with fewer than N samples. To avoid this, a cross-validation-style procedure is followed where S is divided into K disjoint sub-sequences, S_1, ..., S_K, and K rounds are performed, each round using a different subset S_j to apply an auxiliary classifier h'_j and the rest of the subsets to train this h'_j. At the end of the process, joining the sub-sequences processed by the corresponding h'_j, we have N augmented training samples for learning h''. K = 1 means training h' and h'' on the same training set, without actually doing partitions.

Let us explain what augmented training samples means. The elements of S, i.e., the initial training samples, are of the form (x_i, y_i), where x_i is a vector of features with associated label y_i. Therefore, the elements of each sub-sequence S_j are of the same form. As mentioned before, during each round of the cross-validation-style process, a sub-sequence S_j is selected among S_1, ..., S_K, while the rest are appended together to form the complementary sub-sequence S_j^c. From S_j^c the classifier h'_j is learned and applied to S_j to obtain a new S'_j. The elements of S'_j are of the form ([x_i, s_i], y_i), where the feature vector x_i has been augmented with the classifier score s_i = h'_j(x_i). Therefore, after the K rounds, we have a training set of N samples of the form ([x_i, s_i], y_i). It is at this point that we can introduce the concept of neighbor scores into the learning process. In particular, the final training samples are of the form ([x_i, s_{i-A}, ..., s_{i+B}], y_i), where [i-A, i+B] denotes a neighborhood of size A+B+1 anchored to the sample i. For instance, (A > 0, B = 0) is a past neighborhood, (A = 0, B > 0) is a future neighborhood, and (A = B > 0) is a centered neighborhood, which are analogous concepts to the ones of filtering, extrapolation and smoothing, respectively, used in the classical tracking nomenclature.

4 SSL for pedestrian detection

Figure 2: Different types of neighborhood for SSL. See main text in Sect. 4.1 for details.

In this section, without losing generality, we will assume the use of the past neighborhood (Sect. 3) to illustrate and explain our SSL approach. From the viewpoint of processing image sequences, this means using previous images to perform detection in the current one (i.e., in the last one acquired when processing directly from a camera). Actually, there is no need to store the previous images: the detection scores of the neighboring windows, which were already computed, are enough to build the current SSL descriptor, making the computation of SSL very efficient.
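For instance, with a past neighborhood of T frames, a small fixed-size cache of per-frame score maps is all the temporal state the detector needs. The sketch below (with a hypothetical process_frame entry point) is only meant to illustrate this bookkeeping.

# Illustrative bookkeeping for the past neighborhood: only the base-classifier
# scores of the previous frames are cached, never the images themselves.
from collections import deque

T = 5                                  # temporal neighborhood: current + 4 past frames
score_cache = deque(maxlen=T - 1)      # one score map (window -> score) per past frame

def process_frame(current_scores):
    """current_scores: dict mapping window coordinates to base-classifier scores."""
    past_scores = list(score_cache)    # already computed in previous frames
    # ... build the SSL descriptors of the current candidates from past_scores ...
    score_cache.append(current_scores) # keep for the next frames
    return past_scores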

4.1 Spatiotemporal neighborhoods for SSL

For object detection in general, and for pedestrian detection in particular, applying SSL starts by defining which are the neighbors of a given window under analysis. At training time, such a window corresponds either to the bounding box of a labeled pedestrian or to a rectangular chunk of the background. At operation time (i.e., testing), such a window corresponds to a candidate generated by a pyramidal sliding window scheme or any other candidate selection method. In this paper we assume the processing of image sequences and, consequently, we propose the use of a spatiotemporal neighborhood.

Temporal SSL involves the analysis of window volumes. Therefore, there are several possibilities to consider (see Fig. 2). Let us term W_t the set of coordinates defining an image window in frame t, and V_t = {W_t, W_{t-1}, ..., W_{t-(T-1)}} the window volume defined by a temporal neighborhood of T frames. The simplest volume is obtained by assuming fixed locations across frames, which we term the projection approach. In other words, W_{t-k} = W_t for k = 1, ..., T-1. Another possibility consists in building volumes taking motion information into account. For instance, W_{t-k} = W_{t-k+1} ⊕ Δ_{t-k+1}, where Δ_{t-k+1} is a 2D translation defined by considering the optical flow contained in W_{t-k+1}, and ⊕ stands for adding this translation to all the coordinates defining the window.
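The following sketch contrasts the two ways of building the temporal volume. Representing windows as (x, y, w, h) tuples and summarizing the flow inside a window by its median translation are simplifications introduced here for illustration.

# Sketch of the two temporal-volume types: projection vs. optical-flow compensation.
# flows[k] is assumed to be a dense (H, W, 2) flow field mapping frame t-k to t-k-1.
import numpy as np

def projection_volume(window, T):
    """Fixed coordinates across frames: W_{t-k} = W_t."""
    return [window] * T

def flow_compensated_volume(window, flows, T):
    """Shift each window by the median flow observed inside the previous one."""
    volume = [window]
    x, y, w, h = window
    for k in range(T - 1):
        patch = flows[k][y:y + h, x:x + w]
        dx, dy = np.median(patch[..., 0]), np.median(patch[..., 1])
        x, y = int(round(x + dx)), int(round(y + dy))
        volume.append((x, y, w, h))
    return volume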

Spatial SSL involves the analysis of windows spatially overlapping the window of interest (see Fig. 2). For instance, we can fix a 2D displacement step and a number of displacements along the horizontal axis, to the left and to the right, and analogously for the vertical axis given a number of up and down displacements.

Our proposal combines both ideas, i.e., the temporal volumes and the spatially overlapping windows, in order to define the spatiotemporal neighborhood required by SSL (Sect. 3).
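Combining both ideas, the SSL descriptor of a window gathers the base-classifier scores over a small grid of spatial displacements at every frame of the temporal volume. In the sketch below, the displacement step (dx, dy), the grid extent (nx, ny), and the score_fn callback are placeholders; the concrete values are an experimental setting.

# Sketch of the spatiotemporal SSL descriptor: base-classifier scores collected over
# spatially displaced windows at each frame of the temporal volume.
import numpy as np

def ssl_descriptor(volume, score_fn, dx=8, dy=8, nx=1, ny=1):
    """volume: list of (x, y, w, h) windows, one per frame of the neighborhood.
    score_fn(k, window): base-classifier score of `window` in the k-th frame."""
    scores = []
    for k, (x, y, w, h) in enumerate(volume):
        for i in range(-nx, nx + 1):              # horizontal displacements
            for j in range(-ny, ny + 1):          # vertical displacements
                scores.append(score_fn(k, (x + i * dx, y + j * dy, w, h)))
    return np.asarray(scores)                     # appended to the window's own features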

4.2 SSL training

As usual, we assume an image sequence with labeled pedestrians (i.e., using bounding boxes) for training. Negative samples for training are obtained by random sampling of the same images; of course, these samples cannot highly overlap labeled pedestrians. The cross-validation-style rounds of SSL (Sect. 3) are performed with respect to the images of the sequence, not with respect to the set of labeled pedestrians and negative samples, as a straightforward application of SSL may suggest (note that pedestrian/negative labels are for individual windows, not for full images). Moreover, as we have seen in Sect. 4.1, the neighborhood relationship is not only temporal but also spatial. The training process is divided into two stages. First, we train the auxiliary classifiers (h'_j) as usual, using three bootstrapping rounds. Then we train the SSL classifier (using the final h' as auxiliary), again running three bootstrapping rounds to obtain the final classifier (h'').

Using the full training dataset, we also assume the training of a base classifier h. Another possibility is to understand the different h'_j as the result of a bagging procedure and ensemble them to obtain h. Without losing generality, in this paper we have focused on the former approach.
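A schematic view of this two-stage training is given below. The hard-negative-mining callback and the LinearSVC stand-in are assumptions; only the overall structure (bootstrapped auxiliary classifier first, bootstrapped SSL classifier on augmented features second) mirrors the text.

# Schematic two-stage SSL training with bootstrapping. `mine_hard_negatives` is a
# placeholder callback returning false positives of the current model on the
# training images; LinearSVC stands in for the linear SVM.
import numpy as np
from sklearn.svm import LinearSVC

def train_bootstrapped(X_pos, X_neg, mine_hard_negatives, rounds=3):
    X = np.vstack([X_pos, X_neg])
    y = np.r_[np.ones(len(X_pos)), np.zeros(len(X_neg))]
    clf = LinearSVC().fit(X, y)
    for _ in range(rounds):
        X_hard = mine_hard_negatives(clf)                 # new false positives
        X = np.vstack([X, X_hard])
        y = np.r_[y, np.zeros(len(X_hard))]
        clf = LinearSVC().fit(X, y)
    return clf

# Stage 1: auxiliary classifier h' on plain HOG+LBP(+HOF) descriptors.
#   h_aux = train_bootstrapped(F_pos, F_neg, mine_on_raw_features)
# Stage 2: SSL classifier h'' on descriptors augmented with the spatiotemporal
# neighborhood scores of h_aux (Sect. 4.1).
#   h_ssl = train_bootstrapped(F_pos_aug, F_neg_aug, mine_on_augmented_features)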

Figure 3: Two-stage pedestrian detection based on SSL. See main text in Sect. 4.3 for details.

4.3 SSL detector

The proposed pedestrian detection pipeline is shown in Fig. 3. As we can see, there are two main stages. The first stage basically consists of a classical pedestrian detection method relying on the learned base classifier h. In Fig. 3 we have illustrated the idea for a pyramidal sliding window approach, but other candidate selection approaches are also possible. Detections at this stage are just considered potential ones. Then, the second stage applies the spatiotemporal SSL classifier, h'', to such potential detections in order to reject them or keep them as final detections.

There are some details worth mentioning. First, the usual non-maximum suppression (NMS) step included in pedestrian detectors is not performed on the output of the first stage, but on the output of the second stage. Second, to ensure that true pedestrians reach the second stage, we apply a threshold on h that guarantees a very high detection rate, even at the cost of a very high rate of false positives. In our experiments this usually implies that while h processes hundreds of thousands of windows (for pyramidal sliding window), h'' only processes a few thousand. Third, although in Fig. 3 we show pyramids of images for a temporal neighborhood of T frames, what we actually keep from frame to frame are the already computed features, so that we compute them only once. However, this depends on the type of temporal neighborhood we use (Sect. 4.1). For instance, using the projection style no features need to be kept (i.e., keeping the classification scores is enough). However, if we use optical flow we may need to compute features in previous frames if the window under consideration does not map to a location where they were already computed.
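The pipeline can be summarized as in the sketch below. The permissive and final thresholds and the minimal IoU-based NMS are illustrative placeholders; the point being illustrated is that NMS runs only after the SSL stage.

# Sketch of the two-stage detection pipeline (Fig. 3). Thresholds and the greedy
# IoU-based NMS are illustrative; only the staging mirrors the text.
def iou(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    return inter / float(aw * ah + bw * bh - inter)

def nms(dets, thr=0.5):
    dets = sorted(dets, key=lambda d: d[1], reverse=True)   # (window, score) pairs
    kept = []
    for w, s in dets:
        if all(iou(w, kw) < thr for kw, _ in kept):
            kept.append((w, s))
    return kept

def detect(candidates, base_score_fn, ssl_score_fn, permissive_thr=-1.0, final_thr=0.0):
    # Stage 1: very permissive threshold on the base classifier h, so that almost no
    # true pedestrian is lost (many false positives survive on purpose).
    potentials = [w for w in candidates if base_score_fn(w) > permissive_thr]
    # Stage 2: re-score the survivors with the spatiotemporal SSL classifier h''.
    detections = []
    for w in potentials:
        s = ssl_score_fn(w)            # uses the neighborhood scores of h internally
        if s > final_thr:
            detections.append((w, s))
    return nms(detections)             # NMS is applied only after the second stage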

5 Experimental results

Protocol.

As evaluation methodology we follow the de-facto Caltech standard for pedestrian detection [Dollar:2012]: we plot curves of miss rate versus false positives per image (FPPI). The average miss rate in the range of 10^-2 to 10^0 FPPI is taken as indicative of each detector's accuracy, i.e., the lower the better. Moreover, during testing we consider three different subsets based on pedestrian height. The near subset includes pedestrians with a height of 75 pixels or more, and the medium subset includes pedestrians between 50 and 75 pixels in height. Finally, we group the two previous subsets into the reasonable subset (height ≥ 50 pixels).
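For reference, the sketch below computes the average miss rate over this FPPI range from a detector's curve. The nine log-spaced reference points and the nearest-operating-point lookup follow the common Caltech toolbox convention and are stated here as an assumption.

# Sketch of the average miss rate over an FPPI range.
import numpy as np

def average_miss_rate(fppi, miss_rate, lo=1e-2, hi=1e0, n_ref=9):
    """fppi, miss_rate: numpy arrays describing one detector's curve (same length)."""
    refs = np.logspace(np.log10(lo), np.log10(hi), n_ref)
    sampled = []
    for r in refs:
        below = miss_rate[fppi <= r]
        sampled.append(below.min() if below.size else 1.0)   # worst case if no point
    # Log-average (geometric mean) over the reference points.
    return float(np.exp(np.mean(np.log(np.maximum(sampled, 1e-10)))))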

Dataset   FPS   Experiment                  Near           Medium         Reasonable
OurDS     any   Base: HOG+LBP               39.71          50.83          45.91
OurDS     3     SSL(Base) Proj. / OptFl.    36.03 / 36.72  50.01 / 50.04  44.40 / 44.02
OurDS     3     Base+HOF                    47.98          56.65          50.88
OurDS     3     SSL(Base+HOF) Proj.         37.62          52.21          45.47
OurDS     10    SSL(Base) Proj. / OptFl.    35.49 / 34.79  50.22 / 49.42  43.56 / 42.10
OurDS     10    Base+HOF                    39.24          52.37          42.43
OurDS     10    SSL(Base+HOF) Proj.         29.42          44.62          37.13
OurDS     30    SSL(Base) Proj. / OptFl.    34.18 / 34.01  49.84 / 48.04  42.90 / 41.73
OurDS     30    Base+HOF                    37.81          53.39          38.78
OurDS     30    SSL(Base+HOF) Proj.         27.37          46.53          35.85
Caltech   25    Base                        45.4           82.3           59.4
Caltech   25    SSL(Base) Proj. / OptFl.    40.6 / 38.9    81.2 / 80.4    59.4 / 57.6
Caltech   25    Base+HOF                    33.8           78.4           52.9
Caltech   25    SSL(Base+HOF) Proj.         32.0           77.1           51.6
Table 1: Evaluation of SSL over different datasets, frame rates and pedestrian sizes. The average miss rate over the 10^-2 to 10^0 FPPI range is indicated; lower is better.

Our own dataset (OurDS).

Since the temporal axis is important for the SSL classifier, we acquired our own dataset to be sure of having stable 30 fps sequences. The sequences were acquired on-board under normal urban driving conditions. The images are monochrome, and we used a 4mm focal length lens, thus providing a wide field of view. We drove for approximately 30 minutes, giving rise to a sequence of around 60,000 frames. Then, using steps of 10 frames, we annotated all the pedestrians. This results in 7,900 annotated pedestrians, 5,400 of them reasonable and non-occluded. We have divided the video sequence into three sequential parts: the first one for training and the last one for testing, with a gap left in the middle to avoid training and testing with the same persons. Overall we train with 3,600 reasonable pedestrians, and test on 1,300 reasonable ones.

Caltech dataset.

We have also used another popular dataset acquired on-board: the Caltech dataset [Dollar:2012], which contains 3,700 reasonable pedestrians for training.

Base detectors.

For the experiments presented in this section we use our own implementation of HOG and LBP features, which provides significantly better results than the one proposed in [Wang:2009], i.e., removing the occlusion handling reasoning. Moreover, using TV-L1 [Zach:2007] to compute optical flow, we obtain HOF features [Walk:2010] as well. These features complement HOG and LBP with motion information. We call Base the HOG+LBP/linear-SVM classifier and Base+HOF the HOG+LBP+HOF/linear-SVM classifier.
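As a rough, publicly reproducible stand-in for the Base descriptor (the paper uses its own HOG/LBP implementation), one can combine scikit-image's HOG and uniform LBP with a linear SVM; all parameter values below are illustrative rather than those of the reported experiments.

# Rough stand-in for the Base (HOG+LBP / linear-SVM) pipeline using scikit-image
# and scikit-learn; parameters are illustrative.
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.svm import LinearSVC

def hog_lbp(window_gray):
    """window_gray: cropped grayscale candidate window (e.g., resized to 64x128)."""
    hog_feat = hog(window_gray, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2), block_norm='L2-Hys')
    lbp = local_binary_pattern(window_gray, P=8, R=1, method='uniform')
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([hog_feat, lbp_hist])

# Training the Base classifier on cropped windows and their labels:
#   clf = LinearSVC(C=0.01).fit(np.stack([hog_lbp(w) for w in windows]), labels)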

Figure 4: Results for the OurDS and Caltech datasets. The top row shows the 30fps, 10fps and 3fps cases of OurDS using the near testing subset. The last two cases are obtained by sub-sampling the video sequence, but always keeping the same training and testing pedestrians. The bottom row shows the experiments over the near, medium and reasonable testing subsets of the Caltech dataset.

SSL.

The experiments are based on the spatiotemporal SSL (with the past temporal window style) and a fixed setting of the spatiotemporal neighborhood parameters. In preliminary experiments we tested several values of K (Fig. 1). The obtained results were very similar, thus we decided to set K = 1 (i.e., omitting the partition of the training sequence), since then the training is faster.

Figure 5: Qualitative results from the OurDS dataset comparing the base classifier and the SSL for 3, 10 and 30 fps. The first three columns focus on improvements regarding false positive rejection, while the rest focus on examples where SSL avoids missing pedestrians. The pedestrians not detected with the SSL approach (last two columns) correspond to occluded pedestrians.

Experiments.

In Table 1 we show the results of the SSL experiments. As baseline detectors we use Base and Base+HOF. The experiments are run over the different datasets, and over different frame rates in the OurDS case. We tested them for different ranges of pedestrian sizes. We observe significant accuracy improvements for all the tested datasets when comparing the baseline detector and its SSL counterpart. For instance, on OurDS near with SSL(Base+HOF) we obtain an accuracy improvement of approximately ten points. Significant accuracy improvements are also obtained for all the tested frame rates (30 fps, 10 fps, 3 fps) of the OurDS dataset. Besides, we observe an improvement due to the optical flow in the volume generation at high frame rates. However, no significant difference is observed at low frame rates. The SSL accuracy improvement is clearer for near pedestrians. In Fig. 4 we plot the accuracy curves obtained for some representative experiments.

Discussion.

The SSL approach outperforms its baseline in almost all the tested configurations. However, the improvement is clearer for near pedestrians at high frame rates. If we generate the past neighborhood over far away pedestrians, we should expect a past neighborhood with pedestrians smaller than the minimum size that the base detector can detect. That is why the SSL improvement is not so clear for the medium subset. For near pedestrians, in contrast, the past neighborhood is more likely to contain a history of confident responses. This is a very relevant improvement, since for close pedestrians the detection system has less time to take decisions such as braking or any other manoeuvre. Regarding the neighborhood generation approaches, the optical flow one slightly improves over the projection one, as it captures the movement of the pedestrians in the temporal neighborhood.

6 Conclusion

In this paper we have presented a new method for improving pedestrian detection based on spatiotemporal SSL. We have shown how even simple projection windows can boost detection accuracy in different datasets acquired on-board. We have also shown that our approach is effective at different frame rates. In this paper we have focused on HOG+LBP/linear-SVM and HOG+LBP+HOF/linear-SVM pedestrian base classifiers; thus, our immediate future work will focus on testing the same approach with other base classifiers of the pedestrian detection state of the art. Regarding the improvement obtained with the optical flow neighborhood, we want to further explore different approaches for generating the neighborhood of moving pedestrians.

Acknowledgements

This work is supported by the Spanish MICINN projects TRA2011-29454-C03-01 and TIN2011-29494-C03-02 and Sebastian Ramos’ FPI Grant BES-2012-058280.