Multiple Human Tracking in RGB-D Data: A Survey

06/14/2016, by Massimo Camplani et al., University of Bristol

Multiple human tracking (MHT) is a fundamental task in many computer vision applications. Appearance-based approaches, primarily formulated on RGB data, are constrained and affected by problems arising from occlusions and/or illumination variations. In recent years, the arrival of cheap RGB-Depth (RGB-D) devices has led to many new approaches to MHT, and many of these integrate color and depth cues to improve each and every stage of the process. In this survey, we present the common processing pipeline of these methods and review their methodology based (a) on how they implement this pipeline and (b) on what role depth plays within each stage of it. We identify and introduce existing, publicly available, benchmark datasets and software resources that fuse color and depth data for MHT. Finally, we present a brief comparative evaluation of the performance of those works that have applied their methods to these datasets.




1 Introduction

Human tracking is a key component in many computer vision applications, including video surveillance (1), smart environments (2), assisted living (3), advanced driver assistance systems (ADAS) (4), and sport analysis (5). These applications are usually centered around RGB sensors and are subject to a variety of limitations, such as occlusions due to cluttered or crowded scenes and varying illumination conditions. The vast literature landscape in this research area has widened even further in the last few years, due to the introduction and popularity of low-cost RGB-Depth (RGB-D) cameras (such as the Kinect (6) and Asus Xtion (7)). This has enabled the development of new algorithms that integrate depth and color cues to improve detection and tracking systems (8).

The aim of this survey paper is to summarise and focus on the area of multiple human tracking from the combination of color (RGB) and depth (D) data, given that cheap depth-enabled sensors are becoming ubiquitous in vision research labs. The survey is not however limited to methods using active sensing RGB-D devices, but also encompasses state-of-the-art passive sensing stereo-based human tracking techniques, where color and depth are again jointly relied upon to enable tracking.

We do not review methods based only on RGB features, as that would require a dedicated survey of its own and far greater space; for RGB-only MHT, the reader is referred to the reviews by Dollar et al. (9) on color-based pedestrian detection and by Luo et al. (10) on color-based multi-object tracking. Rather, the intention here is to address and summarize an area that is now of far-reaching interest to a large community of researchers.

Four main computer vision topics were identified in (8) that could benefit from depth information: human activity analysis and recognition (11; 12), hand gesture analysis (13), 3D mapping (14), and object detection and tracking. For example, the effect of occlusions can be reduced by using the 3D information contained in depth data, and more reliable features can be extracted in scenes undergoing illumination variations, since such variations have little impact on depth sensors. Moreover, depth can be used to extract a richer description of the scene that simplifies the tracking problem, for example by adding physical constraints on human appearance and size. On the other hand, certain depth sensor characteristics, such as limited capture range, or scene characteristics, such as excessive natural light and reflective surfaces, reduce the reliability of depth data in some operating conditions, e.g. in outdoor scenarios. Color and depth data can be significantly complementary, and hence their efficient combination and processing can dramatically reduce the effect of the problems that affect them individually.

In this survey, we focus on the analysis of algorithms, and available datasets and software, that combine color and depth data for multiple human tracking. Most previous survey papers on human tracking do not provide such coverage and are limited to one or other aspect of MHT. For example, in (1), an in-depth review of surveillance systems is provided, with particular focus on challenges in using large camera networks. In (15), pedestrian detection methods using color-based approaches are surveyed, while the pedestrian detection review presented in (4) is mainly focused on ADAS systems. The survey presented in (9) proposes an extensive evaluation of sixteen pedestrian detectors that are based on a sliding window strategy. In (16), the focus of the survey is on algorithms for high-level crowd scene understanding. A review of human detection algorithms in video surveillance applications is presented in (17), where the main sub-modules of the human detection task are identified (object detection and object classification) and the state-of-the-art algorithms are appraised by describing the different strategies used in each sub-module. The survey presented in (18) summarizes the advances in human body part tracking for rehabilitation purposes. Table 1 lists the recent surveys that are related in some fashion to MHT.

To the best of our knowledge, the surveys most closely associated to ours here are those presented in (8; 10; 11). These cover similar themes but come with certain limitations. The work in (8) reviews recent Kinect-based applications in computer vision, including a very brief survey of RGB-D based trackers. The review in (11) is focused on the recent advances on human activity analysis using depth imagery, while the problem of human detection and tracking is only marginally discussed. Finally, in (10), a general review of multiple object tracking is presented, but the analysis, dedicated to approaches that combine color and depth data, is limited and brief.

Year Article Topic
2016 Zhang et al. (12) RGB-D dataset for action recognition
2014 Luo et al. (10) Multiple object tracking
2013 Chen et al. (11) Human activity analysis
2013 Han et al. (8) Recent Kinect applications
2013 Li et al. (16) Crowd monitoring
2013 Manoranjan et al. (17) Human detection in surveillance
2013 Wang (1) Multi-camera video surveillance
2012 Dollar et al. (9) Pedestrian detection
2010 Geronimo et al. (4) ADAS systems
2009 Enzweiler et al. (15) Pedestrian detection
2008 Zhou et al. (18) Human tracking in rehabilitation
Table 1: Recent related surveys (most recent first)

It is worth noting that we do not consider general single object trackers based on combined depth and color features, such as the recent works presented in (19; 20; 21; 22), since they are more focused on the optimization of appearance and motion models rather than facing the specific challenges of MHT, or are concerned with tracking inanimate objects. Furthermore, we do not include detection only methods, e.g. (23; 24; 25), and methods that use depth only for MHT, e.g. (26; 27; 28; 29), or depth and reflectance, such as (30).

In summary, we provide here a review of the state of the art in MHT algorithms that integrate depth and color data, characterizing them based on (a) trajectory representation and matching and (b) how they exploit depth information to improve various stages of the processing pipeline. We also review the practical constraints of these algorithms, examine existing online resources, i.e. benchmark datasets and source code, and present a comparison of the very few such resources made available to the community. The audience of this survey is not limited to researchers working directly on the development of tracking algorithms; it also includes those who wish to employ a tracking method relevant to their own application area where color and depth sequences are to be analyzed, such as the very active research areas of action recognition (11; 12), smart environments (31), health-care applications (32; 33), and the applications mentioned in (8).

Next, in Section 2, we present the common processing stages of a typical MHT system, along with a variation on it employed by some works. Amongst other topics, we cover some generic descriptions of a person and introduce two characterizations of MHT systems based on their matching strategy and use of depth. These characterisations are then used in Sections 3 and 4 to survey state-of-the-art approaches. Practical issues, such as type of sensor, camera position, and speed of computation, are considered in Section 5. Section 6 presents an overview of online datasets and software resources for RGB-D MHT. We then compare existing evaluations derived from some of the works in this survey in Section 7 and conclude in Section 9.

2 Multiple People Detection and Tracking Techniques in RGB-D Data

In this section, we identify the main approaches to MHT from combined color and depth data. We first present the processing pipeline that can be attributed to the greater set of works in the literature and then characterize the works we review based on (a) which trajectory representation is used and its matching, and (b) how and for which purpose depth data is exploited.

In MHT, detections of multiple people are normally aggregated into independent tracks, one for each person, in order to establish their respective trajectories. Tracks may contain position, motion, and appearance descriptions. We shall use the words ‘track’ and ‘trajectory’ interchangeably in the rest of this article.

The common processing pipeline is illustrated in Fig. 1. MHT methods normally perform the stages indicated by the solid lines in Fig. 1, with first a detection stage that searches for occurrences of humans in a new frame, based on a generic description of a person (elaborated later below). It may possibly be preceded by an optional Regions of Interest (ROIs) selection stage (the dashed-line box in Fig. 1), that allows for the reduction of the search space. Then, a matching step associates these new detections to the trajectories based on a matching strategy and a similarity measure, computed from position and, more often than not, appearance.

Figure 1: Common processing pipeline for MHT - the dashed-line stage is an optional step of the pipeline, while the dotted rectangle and arrow depict a variation of it.

There are numerous approaches to performing the matching process. These rely on the active trajectories to feed back a representation of the target's motion and appearance to the matching stage (the solid arrow in Fig. 1). The pool of active trajectories is managed by the matching stage, with new trajectories created when detections cannot be associated with the existing ones, and old trajectories discontinued when certain termination criteria are met.
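The detect-then-match loop of the common pipeline can be sketched in a few lines of Python. Everything here is an illustrative simplification, not drawn from any surveyed method: the `Track` class, the greedy association, the `similarity` function, and both thresholds are hypothetical placeholders.

```python
# Minimal sketch of the common MHT pipeline of Fig. 1 (hypothetical
# interfaces; real systems use richer detections and matching).

class Track:
    def __init__(self, track_id, detection):
        self.id = track_id
        self.observations = [detection]
        self.missed = 0  # frames since last successful match

def step(tracks, detections, similarity, threshold=0.5, max_missed=10):
    """One frame of the pipeline: greedily associate detections to
    active tracks, spawn new tracks for unmatched detections, and
    terminate tracks that have been lost for too long."""
    next_id = max((t.id for t in tracks), default=-1) + 1
    unmatched = list(detections)
    for track in tracks:
        if not unmatched:
            track.missed += 1
            continue
        best = max(unmatched, key=lambda d: similarity(track, d))
        if similarity(track, best) >= threshold:
            track.observations.append(best)
            track.missed = 0
            unmatched.remove(best)
        else:
            track.missed += 1
    # New trajectories for detections that could not be associated
    for det in unmatched:
        tracks.append(Track(next_id, det))
        next_id += 1
    # Discontinue trajectories whose termination criterion is met
    return [t for t in tracks if t.missed <= max_missed]
```

A real implementation would replace the greedy loop with one of the matching strategies surveyed below (global assignment, hypothesis trees, graph paths).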

In a variation to the common pipeline, depicted by the dotted line and box in Fig. 1, the detection and matching stages may be directed by trajectories and their representations rather than by a generic representation of a person. Thus, currently tracked people are directly detected at the position predicted by their trajectory representation’s motion model in a significantly reduced search space. In effect, this amounts to combined detection and matching. This variation of the MHT processing pipeline still requires a generic person description for initializing new trajectories by detecting people that are not yet tracked. Note that some methods also use a generic person description in the combined detection and matching stage in addition to the trajectory representation, in order to ensure that tracks do not switch to background objects of similar predicted position and appearance to that of the target.

Section 3 provides a detailed description of implementations of the MHT pipeline (and its variation), including comparing different fulfilments of the matching stage. It should be stressed that both the main pipeline and its variation are by no means specific to RGB-D based methods, and the same can apply to MHT methods based on RGB data only.

Trajectory representations also vary significantly between MHT implementations, as illustrated in Fig. 2. Although all reviewed methods employ a motion model in their trajectory representation, the use of an appearance model (e.g. color histogram, texture, joint RGB and depth features) is optional, as represented by the dashed arrow (or blue sub-tree) in Fig. 2. Both motion and appearance models may be built from an observation in a single frame, or from richer information that accounts for the history of the target.

Figure 2: Categorisation of the different models that make up the trajectory representation used for matching. The dashed arrow denotes an optional model for the trajectory representation, while semi-dotted arrows indicate where one or the other of two possibilities is selected.

Two types of motion models may be identified. The first, denoted the 'zero-velocity motion model', assumes a stationary target position, while the second describes the target's velocity, yielding a first-order characterization of their movements. Higher-order motion models, such as one that includes the target's acceleration, would be equally possible, but are not addressed in this survey as no RGB-D MHT methods were encountered that employ them. Static appearance models may be built from one or a few initial frames and remain fixed for the duration of the trajectory's lifetime, while dynamic appearance models may be derived from all previous observations of the target or from a sliding window. Such models are updated as new observations become available, in order to account for varying appearance due to different body orientations relative to the sensor or changing illumination conditions. However, dynamic models can result in incorrect descriptions in cases of tracking failure, such as drift. MHT methods may use any combination of these different possible (static and dynamic) motion and appearance models.
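The two motion-model orders can be made concrete with a small sketch (illustrative, not tied to any particular surveyed method; 2D image or ground-plane coordinates are assumed):

```python
# Zero-velocity vs. first-order motion models for position prediction.

def predict_zero_velocity(positions):
    """Zero-velocity model: the target is assumed stationary, so the
    prediction is simply the last observed position."""
    return positions[-1]

def predict_first_order(positions, dt=1.0):
    """First-order model: estimate velocity from the last two
    observations and extrapolate one time step ahead."""
    (x_prev, y_prev), (x, y) = positions[-2], positions[-1]
    vx, vy = (x - x_prev) / dt, (y - y_prev) / dt
    return (x + vx * dt, y + vy * dt)
```

The zero-velocity prediction only supports matching by proximity to the last observation, whereas the first-order prediction also captures the direction of movement, which is what the later methods in this survey exploit.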

Depth data can be exploited to enhance RGB-based MHT. The methods that we review can be characterized by how and at which stage of the MHT pipeline they employ depth information. Indeed, depth may support each and every stage of the MHT pipeline, as indicated in Fig. 3. It can help to specify ROIs in the image corresponding to 3D physical scene regions of significance, e.g. a doorway or passage, to help reduce the search space for the detection stage (left branch of Fig. 3). Depth information may also increase the robustness of human detection, by enhancing the generic description of a person with 3D shape information (middle branch of Fig. 3). Finally, depth can help in matching detected candidates and trajectories (right branch of Fig. 3), by providing the information needed to track people in 3D, and by further enriching the appearance descriptions of people, that are traditionally based on RGB information only. The various uses of depth information in published work will be detailed in Section 4.

Figure 3: Categorisation of the uses of depth information in MHT methods from RGB-D data.

The generic description of a person that drives the detection stage is often made up of a number of RGB and depth cues. In the detection stage, a cascade of RGB- and depth-based descriptors is applied to either the full image or the ROIs, starting with the least computationally expensive ones, which are generally depth-based descriptors of the human shape. When using RGB information, the generic representation of a person often takes the form of a Histogram of Oriented Gradients (HOG) (34) descriptor of the full or upper body. Other examples of possible generic person descriptions from RGB data are the poselet-based human detector of (35), the deformable part-based models (DPM) of (36), and the Viola and Jones Adaboost cascade (37). Table 2 summarizes the different generic descriptions of people used by the various methods reviewed here.

Method Depth descriptor RGB descriptors
Bansal et al. (38) 2D contour matching + HOG
Salas & Tomasi (39) - HOG
Dan et al. (40) -
Darrell et al. (41) - Face and skin detector
Han et al. (42) -
Bajracharya et al. (43) -
Zhang et al. (44) HOG + poselet
Galanakis et al. (45) -
Liu et al. (46; 47; 48) Joint RGB & height histogram + physical priors (48)
Luber et al. (49) and
Linder et al. (50) HOG
Ess et al. (51) HOG based detectors
Jafari et al. (52) HOG
Muñoz-Salinas et al. (53) - Face detector
Munaro et al. (54; 55) HOG
Almazàn and Jones (56; 57) -
Bahadori et al. (58) - Temporal Color based model
Beymer et al. (59) -
Satake et al. (60) SIFT
Vo et al. (61) HOG + face detector
Harville et al. (62) -
Muñoz-Salinas et al. (63; 64) -
Muñoz-Salinas et al. (65) Adaboost classifier for upper body + ellipse fitting at head location
Choi et al. (66; 67) HOG + face detector + motion detector + skin color recognition
Migniot et al. (68) -
Gao et al. (69) - HOG + DPM
Ma et al. (70) - HOG + DPM
Table 2: Types of generic descriptions of a person.

Next, in Section 3, we review all RGB-D MHT methods known to us, leveraged on how they implement the MHT pipeline. We characterize these works based on their applied trajectory representation and matching strategy, following the categorization proposed in Fig. 2. Then, in Section 4, we again review and characterize these same methods based on their adoption of depth information, describing the uses of depth for each stage of the MHT pipeline, according to Fig. 3.

3 Survey by MHT Pipeline Implementation

This section details how the pipeline for MHT, described in Section 2, has been implemented, including optional stages and variations. The emphasis is on (complexity of) trajectory representation (see Fig. 2) and matching. We may refer to depth data in this section for some of the works - the details of their use of depth will be provided in Section 4.

3.1 Implementations of the main pipeline

We encountered only four works that build their trajectory representation exclusively from the previous frame (41; 38; 40; 39). The principal characteristics of their implementations of the MHT pipeline are indicated in the first four rows of Table 3.

Darrell et al. (41) present a stereo-based tracking approach that exploits the constancy of the target's position and size from frame to frame. Candidates are detected using a segmentation approach that identifies connected components in the disparity image corresponding to regions of 3D space with the typical volume occupied by a person facing the camera. For each detected region, a cascade of face and skin detectors, and geometric constraints, are applied to validate the target's head position. A long-term model is generated by considering average skin and face color, an appearance color histogram, the face pattern, and the height extracted from depth data. These features are used to resolve occlusions and to re-identify targets re-entering the scene.

Bansal et al. (38) first detect people after an ROI selection stage, using a combination of depth cues and a HOG detector that is applied to a selection of edges obtained by preliminary template matching with several 2D contours of different body parts. Then, they match detections with trajectories from the previous frame by image patch correlation. This is performed in the area of the image that contained the previous observation of the person, after correction for camera motion estimated by visual odometry. Thus, the trajectory representation is made up of a zero-velocity motion model in 2D image coordinates and an appearance model that consists of an image patch around the detection in the previous frame. This amounts to a dynamic appearance model built from a sliding window of one frame's width.

Salas and Tomasi (39) detect and track all objects in ROIs that denote foreground, and then select the paths that have, at some point in time, a detection with a high confidence score from a HOG-based human detector. The matching stage is performed by dynamically building a directed graph of the foreground object detections. These are organised into layers that correspond to the frames they originate from, and they are interconnected by the graph's edges in chronological order. The cost of an edge is the probability of its two nodes (or foreground detections) belonging to the same object. Based on these costs, the tracks are selected as the most strongly connected paths in the graph. A greedy algorithm is used for extracting individual paths, starting from the oldest remaining detection and selecting the strongest connection locally between two adjacent node layers. The edge cost, used for matching, is estimated from the similarity of the color signatures, measured using the Earth Mover's Distance (71), and from the distance between the 3D locations of the two detections, which is expected to be proportional to the elapsed time. Thus, the trajectory representation consists of a zero-velocity motion model and an appearance model made up of the color signature (71) of the blob in the previous frame.
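The greedy path extraction over a layered detection graph can be sketched as follows. The flat-list graph encoding and the cost convention (lower cost = stronger connection) are our own illustrative simplifications of Salas and Tomasi's formulation:

```python
# Greedy extraction of one path through a layered detection graph:
# layers[k] holds the detections of frame k, in chronological order.

def extract_path(layers, edge_cost):
    """Start from the first detection in the oldest non-empty layer,
    then greedily follow the locally cheapest edge between adjacent
    layers, consuming the visited detections."""
    path = []
    current = None
    for layer in layers:
        if current is None:
            if layer:
                current = layer.pop(0)   # oldest remaining detection
                path.append(current)
            continue
        if not layer:
            break
        best = min(layer, key=lambda d: edge_cost(current, d))
        layer.remove(best)
        path.append(best)
        current = best
    return path
```

Calling `extract_path` repeatedly until the graph is empty yields one track per call, mirroring the "oldest remaining detection first" order described above.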

Dan et al. (40) use depth information for detection. All detected candidates are then matched independently to detections in the previous frame by maximizing a score that assesses both appearance similarity and closeness in 3D space. The trajectory representation used for matching is made up of an RGB-D based dynamic appearance model with a sliding window of one frame, along with a zero-velocity motion model. A backward/forward matching strategy is used, where all detections in the current frame are matched to those in the previous frame (backward matching), and vice versa (forward matching). This allows trajectory splits and merges to be handled, as these may arise when a detection failure in one direction causes two people to be matched against the same candidate.
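The backward/forward consistency idea can be sketched as mutual best matching between two frames; the score function and data layout below are illustrative, not Dan et al.'s exact formulation:

```python
# Backward/forward matching: detections in the current frame are
# matched to the previous frame and vice versa; only pairs on which
# both directions agree are accepted. Asymmetric matches flag
# potential trajectory splits or merges.

def mutual_matches(prev, curr, score):
    backward = {j: max(range(len(prev)), key=lambda i: score(prev[i], curr[j]))
                for j in range(len(curr))}
    forward = {i: max(range(len(curr)), key=lambda j: score(prev[i], curr[j]))
               for i in range(len(prev))}
    # Keep only the mutually consistent (prev index, curr index) pairs
    return [(i, j) for j, i in backward.items() if forward[i] == j]
```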

All four methods in (41; 38; 40; 39) propose a crude motion model that does not describe a person’s movements sufficiently well, although (39) expects the distance travelled to be proportional to time. The movement itself, and in particular its direction, are not captured by the trajectory representation. Thus, these methods are more likely to suffer from incorrect identifications when a track ‘jumps’ from one person to another, and from wrong detections being integrated into the tracks. In addition, both their motion and appearance models are made from the observations in the previous frame only. Hence, in case of occlusion, a person cannot be tracked any longer and the associated trajectory is automatically discontinued. A new, independent trajectory would have to be created if the person re-emerges.

The methods we present in the rest of this, and the next subsection, occupy rows 5 to the end of Table 3. These address the above issues (a) by proposing motion models that describe the motion of the target to the first order, and (b) by building appearance models from richer temporal information, which allow for maintaining consistent trajectory representations, and help prevent the model from changing dramatically in cases of temporary detection failure over a few frames.

Method ROI selection Pipeline variation Motion model Appearance model
Zero velocity 1st order velocity Static Dynamic - sliding window Dynamic - full history
Bansal et al. (38) ✓(1 frame)
Salas & Tomasi (39) ✓(1 frame)
Dan et al. (40) ✓(1 frame)
Darrell et al. (41)
Bajracharya et al. (43) ✓(1 frame)
Zhang et al. (44)
Galanakis et al. (45)
Liu et al. (46; 47) ? ? ?
Luber et al. (49) and
Linder et al. (50)
Ess et al. (51)
Jafari et al. (52)
Muñoz-Salinas et al. (53)
Munaro et al. (54; 55)
Almazàn and Jones (56) ✓(1 frame)
Bahadori et al. (58)
Beymer et al. (59)
Satake et al. (60) ✓ (30 frames)
Vo et al. (61)
Harville et al. (62)
Almazàn and Jones (57)
Muñoz-Salinas et al. (63)
Muñoz-Salinas et al. (64)
Muñoz-Salinas et al. (65)
Choi et al. (66)
Choi et al. (67)
Migniot et al. (68)
Gao et al. (69)
Ma et al. (70)
Table 3: Characterization of the methods based on their MHT pipeline implementation. The number of frames indicated in the column 'Dynamic - sliding window' indicates the width of the window. For Liu et al. (46; 47), it is not known if the appearance model is static or dynamic.

In the work of Han et al. (42), the motion model approximates the target's velocity by the mean and variance of its depth variations over the past 10 frames. Their static appearance model is made up of color and texture histograms for the torso and legs, generated at the first observation of a new person, with the torso and leg locations detected using depth information. This trajectory representation is kept after the person leaves the scene, in order to allow re-identification in case of re-entry. People are first detected in ROIs, as objects within a pre-defined height range appearing over a number of successive frames, based on depth information. Their best matching trajectory is selected from a linear combination of appearance similarity and continuity of the depth variation. The former is assessed with the Bhattacharyya distance measure, and the latter is expected to follow a Gaussian distribution with the mean and variance provided by the motion model, under the assumption of constant speed.
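The Bhattacharyya measure, used here and by many of the methods below to compare normalized histograms, has a standard closed form (the code is a generic formulation, not specific to any one surveyed method):

```python
# Bhattacharyya coefficient and distance between normalized histograms.
import numpy as np

def bhattacharyya_coefficient(h1, h2):
    """Both histograms must sum to 1. The coefficient is 1 for
    identical distributions and 0 for disjoint ones."""
    return float(np.sum(np.sqrt(h1 * h2)))

def bhattacharyya_distance(h1, h2):
    """Common distance form: sqrt(1 - coefficient), in [0, 1]."""
    return float(np.sqrt(max(0.0, 1.0 - bhattacharyya_coefficient(h1, h2))))
```

In the surveyed trackers, `h1` would be the trajectory's stored color (or color-and-height) histogram and `h2` the histogram of a newly detected candidate.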

In (43), Bajracharya et al. assume a target velocity of 2 m/s in any one direction, hence the motion model does not depend on the data. The appearance model of the trajectory representation is made up of the color histogram of the last observation of the track. Matching is performed by comparing candidates, detected from depth information in ROIs, to trajectories, based on the color histograms of the candidate and of the track's appearance model, evaluated with the Bhattacharyya measure. Only trajectories that are predicted to be located close to the candidate's position are considered.

In all other RGB-D MHT methods reviewed next which apply the main MHT pipeline, motion is modelled as the position and velocity of the tracked person from the previous frame. The position, and sometimes the velocity, of the next observation are predicted from the model, and then compared with the positions of new detections during the matching stage. With the exception of (45; 51), the methods reviewed next carry out their predictions using Kalman filtering.
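A minimal constant-velocity Kalman filter of the kind these methods use for position prediction is sketched below. The 2D state, `dt`, and the noise covariances `q` and `r` are illustrative placeholders, not values taken from any surveyed tracker:

```python
# Constant-velocity Kalman filter: state [x, y, vx, vy], observing
# position only. predict() extrapolates, update() fuses a detection.
import numpy as np

class ConstantVelocityKF:
    def __init__(self, x, y, dt=1.0, q=1e-2, r=1e-1):
        self.state = np.array([x, y, 0.0, 0.0])   # [x, y, vx, vy]
        self.P = np.eye(4)                        # state covariance
        self.F = np.eye(4)                        # transition model
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.eye(2, 4)                     # observe position only
        self.Q = q * np.eye(4)                    # process noise
        self.R = r * np.eye(2)                    # measurement noise

    def predict(self):
        self.state = self.F @ self.state
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.state[:2]                     # predicted position

    def update(self, zx, zy):
        z = np.array([zx, zy])
        y = z - self.H @ self.state               # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.state = self.state + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

In the matching stage, `predict()` supplies the position against which new detections are compared, and `update()` is called with the matched detection.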

Some works find the best association of a detected candidate to a track independently for each detection or track. For example, in (44), Zhang et al. find people in ROIs using a cascade of RGB and depth based detectors, where detected candidates from depth cues are verified by a HOG detector, and by the poselet-based human detector of (35) that detects body parts. This last detector is rather computationally expensive, hence it is only applied to detected candidates that cannot be associated with existing targets in the matching stage. The matching stage locates the best matching track or static background object for each new detected candidate, using a Directed Acyclic Graph (DAG) to handle the decision process. The DAG performs coarse matching by position similarity first and then finer matching to account for appearance similarity. The appearance is represented by a dynamic model, updated online by an AdaBoost algorithm. A classifier is trained by AdaBoost from weak nearest neighbor classifiers and color histogram features, with positive and negative examples taken from previous observations of the target and of other people and objects, respectively. This model is kept after the person leaves the scene to enable future re-identification.

Similarly, in (45), Galanakis et al. model motion as the target's speed, computed between the last two frames, and use it to predict the next position of the target, assuming a constant velocity. Following the matching strategy of (72), candidate detections, found by background subtraction, are associated with their nearest-neighbor trajectories. Unlike (72), however, the distance to a trajectory combines the 3D distance to its predicted position and the appearance similarity, quantified as in (73) by a correlation metric. The appearance model comprises the hue and saturation histograms of the upper and lower body, which are located by reference to the depth data. It is updated by a linear combination of the current model and the new histogram. Liu et al. (46; 47) detect all candidate people in ROIs of a new frame from RGB-D data and then, for each track, select the best detected candidate by maximizing a correspondence likelihood that is a linear combination of distance to the predicted position and appearance similarity, assessed by the Bhattacharyya measure. The appearance model of the trajectory representation is a joint color and height histogram. The authors give no indication of whether their appearance model is updated. In order to handle short-term occlusions, a trajectory is only terminated after 10 seconds of being lost.

Other works consider all possible associations of detections to tracks in order to find a global optimisation that takes into account possible interactions between tracks, such as crossing of trajectories and sharing of detections. In (49), Luber et al. build a tree of association hypotheses in a Multi-Hypothesis Tracker (MHT) framework, where matching probabilities, for all past and current frames, are computed from closeness to position and velocity predictions, and from appearance similarity. The MHT grows a hypothesis tree, pruned to the k-best hypotheses at each iteration in order to prevent exponential growth of the tree. The current best hypothesis, that jointly describes all tracks, is then selected at each frame, following (74). Similarly to (44), the appearance model relies on a color and depth Adaboost classifier. Linder et al. (50) propose an extension of the method in (49) for group tracking. In particular, to characterize group movements, they add to the MHT framework a set of coherent motion indicators, such as relative spatial distance, difference in velocity, and difference in orientation of two given tracks.

Beymer et al. (59) propose a combination of stereo-based background subtraction (see (75)) and a full body binary template to identify candidate targets. The binary template size is chosen according to the mean depth value of the foreground blob. A Kalman filter with a constant velocity model is used for tracking. A target’s representation includes 3D space coordinates and two appearance models, a color model and the average disparity. These models are linearly updated taking into account the confidence rate of the person detector module, such that it introduces a smoothing factor in the update process, hence limiting the models’ drift. A similar approach was proposed by Bahadori et al. (58), using detected foreground regions and geometric constraints in their stereo setup to identify blobs containing candidate targets. For each blob, a fixed resolution and adaptive color based appearance model is generated, with each pixel modeled as a unimodal distribution in the color space. Tracking is also performed with a Kalman filter, with a constant velocity model that takes into account the 3D depth position of the target and its appearance. The matching strategy is based on the minimization of the distance, considering both position and appearance, between all the detected candidates and the current active tracks. The generation of new tracks and the termination of lost ones is managed by a finite state machine system.
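The confidence-weighted linear model update described for Beymer et al. can be sketched as follows; the `alpha` smoothing factor and histogram representation are our own illustrative choices:

```python
# Confidence-weighted linear appearance update: the higher the detector
# confidence, the more the stored model moves toward the new
# observation, which limits model drift when detections are unreliable.
import numpy as np

def update_appearance(model, observation, confidence, alpha=0.3):
    """model, observation: normalized histograms; confidence in [0, 1]."""
    w = alpha * confidence
    updated = (1.0 - w) * model + w * observation
    return updated / updated.sum()   # keep the histogram normalized
```

With `confidence = 0` the model is left untouched, so a spurious low-confidence detection cannot corrupt the stored appearance.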

Ess et al. (51) detect people in a Bayesian network that accounts for the probabilities of human presence, as output by a color-based detector, given the scene geometry and a generic person geometry description, both provided by depth data. Areas around the next expected target locations also see their detection likelihood increased. They then build multiple candidate tracks, from forward and backward matching hypotheses, following the tracking framework of (76). These hypotheses are generated from position predictions by a constant velocity model and from appearance similarity measured using the Bhattacharyya distance on color histograms. The best tracks are selected, while enforcing that each person detection can only be matched to one trajectory. The trajectory’s appearance model used for matching is the mean color histogram of all previous observations of the tracked person. Jafari et al. (52) use the same matching stage and trajectory representation. They perform detection in ROIs based on depth at close range, and use a HOG detector (77) in the far range.
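The Bhattacharyya comparison of normalized color histograms used for appearance matching here, and in several works below, can be sketched as follows (a generic implementation, not the exact histogram configuration of any surveyed method):

```python
import numpy as np

def bhattacharyya_distance(p, q):
    """Bhattacharyya distance between two histograms, as commonly used
    to compare color models of tracked people. Histograms are
    normalized internally; the distance is 0 for identical
    distributions and 1 for fully disjoint ones."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    bc = np.sum(np.sqrt(p * q))          # Bhattacharyya coefficient in [0, 1]
    return np.sqrt(max(0.0, 1.0 - bc))
```

Matching stages typically threshold this distance, or convert it to a similarity weight, when comparing a detection's histogram against a trajectory's appearance model.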

Satake et al. (60) detect people by applying a classifier cascade to the RGB-D data. First, a set of three binary templates (78), containing frontal and side views of head and shoulders, is used to identify candidate regions in the disparity map. These are then validated and refined with an SVM classifier trained on HOG features to detect humans. An Extended Kalman filter is used to track the target in 3D space. SIFT features (79) of the target are periodically collected to build an appearance model. Association between tracked targets and current frame detections is performed by thresholding on the number of matching SIFT features.

Muñoz-Salinas et al. (53) detect people with a face detector applied in ROIs selected from depth information. The face detector may suffer from false negatives in non-fronto-parallel views; it is therefore only applied at the very end of the detection cascade, and only to detected candidates that cannot be associated with existing targets in the matching stage. The matching stage finds the globally optimal associations of detected candidates to existing tracks using the Hungarian method (80). The matching likelihoods are computed from the distance to the predicted position and the similarity to the color histogram appearance model, estimated with the Bhattacharyya measure. This model is updated by linearly combining its current values and the newly observed color histogram. The track is discontinued if no new observations of the target are encountered within a time limit. Almazàn and Jones (56) also use the Hungarian method to match candidates, detected from motion and size using depth information, to trajectories. The correspondence likelihood is based on the distance to the predicted position and on appearance similarity, evaluated using the Bhattacharyya measure. The appearance model combines a height histogram and the color distributions of its bins, and it is updated every 10 frames by replacing bins and their associated distributions with newly observed ones when available, i.e. when no occlusion occurs.
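The globally optimal detection-to-track assignment described above is computed with the Hungarian method in these works. As an illustrative sketch, for the handful of tracks typical of such scenes the same optimum can be found by exhaustive search over permutations; the cost function combining position and appearance terms is a hypothetical example, and the weight `alpha` is an assumption:

```python
import itertools
import numpy as np

def optimal_assignment(cost):
    """Globally optimal assignment of detections (columns) to tracks
    (rows) for a square cost matrix. The surveyed works use the
    Hungarian method; exhaustive search over permutations returns the
    same optimum for small numbers of tracks."""
    n = cost.shape[0]
    best_perm, best_cost = None, np.inf
    for perm in itertools.permutations(range(n)):
        c = sum(cost[i, perm[i]] for i in range(n))
        if c < best_cost:
            best_cost, best_perm = c, perm
    return list(best_perm), best_cost

def matching_cost(track_pos, det_pos, track_hist, det_hist, alpha=0.5):
    """Hypothetical cost mixing distance to the predicted position and
    appearance dissimilarity (1 - Bhattacharyya coefficient)."""
    d_pos = np.linalg.norm(track_pos - det_pos)
    bc = np.sum(np.sqrt(track_hist * det_hist))
    return alpha * d_pos + (1 - alpha) * (1.0 - bc)
```

For larger problems the Hungarian algorithm's polynomial-time solution is required; the brute force here is only for exposition.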

Another method based on the Hungarian algorithm for matching detected and tracked objects is that of Vo et al. (61), where the authors identify background areas with a depth-based occupancy grid system. The candidate targets’ search space is limited to the foreground areas, which are analyzed with a cascade of classifiers comprising face and skin detectors (see (81) for more details) and a full-body HOG-based human detector (34). Detected objects are tracked simultaneously with a compressive tracker and a Kalman filter.

Munaro et al. (54; 55) find the optimal assignment of detections to tracks in a Global Nearest Neighbor framework. Their matching likelihoods are obtained from the distance to the predicted position and velocity, the probability of being a human as evaluated by a HOG-based human detector, and the similarity to the appearance model of the track. The latter is provided by an online Adaboost classifier trained on previous observations, which selects features in the color histogram space. Harville et al. (62) detect moving candidates by applying the background subtraction algorithm presented in (82) to RGB-D data. The detected foreground objects are projected onto a 2D reference plane where occupancy and height maps are generated. A box filter system is applied to the occupancy map such that 3D clusters not corresponding to a volume occupied by an average adult are filtered out. Their tracking Kalman filter state includes the position in the reference plane and the height and occupancy map data. These features are linearly combined to calculate the matching score that is used in the measurement and update phases of the Kalman filter.

Ma et al. (70) present a tracking approach where a set of HOG-based DPM detectors (36) is applied to both depth and color images to detect body parts, enabling their system to deal with a person’s articulated motion. The conditional random field based approach of (83) is used and extended to solve data association and trajectory estimation. In particular, person locations are inferred by minimizing an objective function that includes detection matching, spatial correlation, mutual exclusion, temporal consistency, and regularization constraints. One interesting aspect of this method is that it can deal with a flexible number and variety of detectors.

3.2 Implementations of the variation of the pipeline

The detection of humans driven by generic full-body descriptions, such as those mentioned earlier in Subsection 3.1, may sometimes be problematic, e.g. under partial occlusion, which can significantly alter the appearance of the target. In Section 2 (also see Fig. 1), we stated that in a variation of the common pipeline, some works attempt to address such difficult detections by exploiting trajectories and their representations in a combined detection and matching stage to enable more robust detection. The trajectory representations provide descriptions of the targets, including first order motion models that enable predictive tracking.

After an ROI selection stage, Almazàn and Jones (57) use the Mean-shift algorithm to find the ROI that best matches the appearance model of a target. For each trajectory, this search is initialised at the position predicted by a Kalman filter, and it is performed in the area defined both by the position variance estimated by the filter and by the ROI selection. The appearance model is made up of the color histogram of the highest 3D point of the person’s cluster for each ground coordinate, and it is updated dynamically with each new observation as the weighted mean of the model at the previous frame and of the new histogram. The trajectory remains active for a number of frames after the target leaves the scene, to allow its re-identification in case of temporary occlusion.

All other methods we review here that employ this variation of the MHT pipeline implement their first order motion model in a particle filter framework. A potential drawback of this is that particle filters tend to be computationally expensive and may require optimisations to achieve practical running times. In the works of Muñoz-Salinas and co-workers (63; 65; 64) one particle filter is used per track, using a constant speed model to predict the next location of the target, and new target observations are searched for by maximizing a detection probability. In (63; 64), candidates are identified in ROIs based on depth information, wherein the probability of the presence of (any) person is computed based on heuristic rules on the number of points in a cluster and its maximal height. In order to compute the probability of detecting the tracked person, this human presence probability is combined with an interaction factor that allows handling trajectory crossings by imposing a minimal separation between the positions of different people. In (63), the detection probability also includes the Bhattacharyya appearance similarity measure, while in (64) it uses a measure of confidence on depth. Hence, the trajectory representation in (64) does not include any appearance model, while (63) models appearance by the color histogram of the cluster. This model is updated with new observations that have high detection and matching confidence, by linearly combining the previous model and the new histogram. This confidence condition prevents the model from being updated when the detection contains parts of a different person, in case of close interaction between people. In (65), the detection probability is made of three terms. It includes the probability of being a (facing) human, firstly by verifying that the cluster may be approximated by a vertical plane at the expected distance from the camera, and secondly, by evaluating the fitting of an ellipse on the RGB image in order to validate the presence of the elliptical shape of a head at this position. It also uses the Bhattacharyya appearance similarity measure to compare against the trajectory representation’s appearance model, made up of two color histograms inside two ellipses of pre-defined sizes and relative positions that represent the head and torso respectively. This appearance model is updated dynamically as in (63).
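The per-track particle filters with constant speed prediction used in (63; 65; 64) can be sketched, in heavily simplified form, as a bootstrap filter. Here the papers' detection probability is replaced by an illustrative Gaussian likelihood around a detected position, and all noise parameters are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, measurement, dt=1.0,
            motion_noise=0.1, meas_sigma=0.5):
    """One iteration of a per-track bootstrap particle filter with a
    constant-velocity model, in the spirit of (63; 65; 64). Particles
    hold [x, y, vx, vy]. The surveyed methods weight particles with a
    detection probability built from depth and appearance cues; here an
    illustrative Gaussian likelihood around the detected position is
    used instead."""
    # Predict: constant-velocity motion plus diffusion noise
    particles[:, 0] += dt * particles[:, 2]
    particles[:, 1] += dt * particles[:, 3]
    particles += motion_noise * rng.standard_normal(particles.shape)
    # Weight by the likelihood of the observed position
    d2 = np.sum((particles[:, :2] - measurement) ** 2, axis=1)
    weights = np.exp(-0.5 * d2 / meas_sigma ** 2)
    weights /= weights.sum()
    # Resample proportionally to the weights
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    n = len(particles)
    return particles[idx].copy(), np.full(n, 1.0 / n)
```

A new-track initialisation stage (as described next) would seed the particle cloud around a confirmed detection.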

In all three methods (63; 65; 64), new tracks are initialised when unknown targets are detected based on generic person descriptions. In (63), heuristic rules on the size and height of clusters are used. In (64), confidence on depth is added and new trajectories are initialised only after a few consecutive detections. In (65), the detection of new people is first performed by an Adaboost classifier trained on RGB images to detect upper bodies, which are then verified by heuristics on their width and planarity using depth information. Tracks are kept for a number of frames after occlusion or departure.

In (68), Migniot et al. use a top-down view of a depth camera and propose a 2D model composed of two ellipsoids corresponding to head and shoulder regions which are obtained by simply thresholding the depth data. The chamfer distance between the observed regions and the ellipsoidal models is then used to assign the particle weights in their particle filtering tracker. In case of multiple persons in the scene, an independent tracker is created for each target.

Choi et al. (66; 67) use particle filtering with Reversible Jump Markov Chain Monte Carlo (RJ-MCMC) sampling to track multiple people simultaneously, as well as static non-human objects (obstacles) and the camera’s position. Given the positions and velocities of all tracked targets and the results from generic person detectors applied to ROIs, at each iteration a move is attempted to initialise, delete or update a trajectory. The moves are sampled from the space of possible moves, one at a time, and the likelihood of the modified solution is estimated. Moves are accepted or rejected as in MCMC sampling until the chain converges. The moves are guided by the probability of continuous tracking, based on a smooth target motion constraint that may also account for people interactions (67), and the probability of being a human, as computed by a combination of HOG-based human detection, face and motion detection, skin color and 2D shape recognition. While (67) accounts for the person’s appearance in the likelihood, by computing the distance from a target-specific appearance-based mean-shift tracker (84) that uses color information, (66) does not use any appearance model. The appearance model for the tracker in (67) is static, though, and built from a small number of consecutive frames in order to minimise tracking drift.

In the pedestrian tracking system presented by Gao et al. (69), a layered graph model is used to estimate pedestrian trajectories in RGB-D sequences. The color-based classifier of (36) is used to detect target candidate regions, from which several features, such as 3D position, appearance, and motion, are extracted. The layered graph nodes represent the detected regions, and the edges the feature similarity. The pedestrian trajectories are obtained by minimizing the cost function of the graph using a heuristic search algorithm.

4 Survey by Use of Depth Information

The works that we review in this paper seek to improve the stages of ROI selection, human detection, and matching through the use of depth information as an additional cue. In this section, we outline how the use of depth, in combination with RGB information, can improve each and every stage of MHT.

4.1 Use of Depth in ROI selection

Amongst the reviewed methods, all those that select ROIs rely heavily on depth and the additional information it provides on the scene geometry to identify the areas where people may potentially be found. As illustrated in the left branch of Fig. 3, we distinguish three categories of depth-based ROI selection methods, i.e. those based on the estimation of the ground plane, those that model the scene’s background, and those that detect motion. We now look at these categories in turn, and characterize the reviewed methods based on their ROI selection method - see columns 2 to 4 of Table 4.

Use of ground/ceiling plane – The assumption that people are usually located in areas of limited height above the ground plane can greatly reduce the search space for detection, and this strategy has been used in many of the reviewed works. Munaro et al. (55) estimate the ground plane using a Hough-based method (85), and select as ROI the volume above the ground plane at typical human height.

Method | ROI Selection: Ground Plane, Background Subtraction, Motion Detection | Human Detection: 3D Geometry, 2D Depth Classifier, 3D Depth Classifier | Matching: 3D Tracking, Joint RGB-D Description
Bansal et al. (38)
Salas and Tomasi (39)
Dan et al. (40)
Darrell et al. (41)
Han et al. (42)
Bajracharya et al. (43)
Zhang et al. (44)
Galanakis et al. (45)
Liu et al. (46; 47; 48) (✓ only in (48))
Luber et al. (49) and
Linder et al. (50)
Ess et al. (51)
Jafari et al. (52)
Muñoz-Salinas et al. (53)
Munaro et al. (54; 55)
Almazàn and Jones (56)
Bahadori et al. (58)
Beymer et al. (59)
Satake et al. (60)
Vo et al. (61)
Harville et al. (62)
Almazàn and Jones (57)
Muñoz-Salinas et al. (63; 64)
Muñoz-Salinas et al. (65)
Choi et al. (66; 67)
Migniot et al. (68)
Gao et al. (69)
Ma et al. (70)
Table 4: Characterization of the reviewed methods based on their use of depth information

Liu et al. (46; 47) detect 3D points that are local height maxima, located at a reasonable distance from the ground. ROIs are defined as vertical cylinders of fixed size centered on these maxima. In (48), the same authors filter these positions by using a fast approach that applies typical head sizes and geometry to remove false candidates.

Ess et al. (51) estimate the ground plane jointly with object detection in a Bayesian network. The ground plane is inferred from the bounding boxes of detected objects and the depth-weighted median residual between the ground plane estimate in the previous frame and the lower regions of the depth image. Jafari et al. (52) first produce a rough estimation of the ground plane based on the known height of the camera, and then project onto this plane the points that have a relative height of no more than 2m. 3D points that project in dense areas of the initial plane are excluded, and the remaining points are used to fit a more accurate plane to the ground surface using RANSAC (86). They then classify the remaining points into three classes (‘object’, ‘free space’, and ‘fixed structure’, which usually denotes walls) based on their height and on their density when projected onto the ground plane. ROIs are searched for amongst the points labelled as ‘object’, by clustering them based on the connectivity of their ground projections and by retaining the clusters that contain a sufficient number of points. These are then divided into sub-clusters that are likely to contain single humans, using the Quick-Shift algorithm (87), which groups points around local maxima in the density of their ground projections. Similarly, Bansal et al. (38) classify the 3D points into ‘object’, ‘ground’, and ‘overhanging structure’ (e.g. walls) using the distribution of heights in the cells of a grid superposed on the ground plane. The associations between these distributions and cell labels are learnt off-line by kernel density estimation. Finally, a smoothing is applied to the pixel labelling using a Markov Random Field that penalizes neighboring pixels with different labels. Pixels labelled as ‘object’ are used in the detection phase to validate candidate detections.

Detecting and removing the ground plane from a point cloud also facilitates the clustering of the remaining points into separate objects, since they are no longer connected to each other through the floor. Bajracharya et al. (43) project all 3D points onto a ground plane, presumably estimated based on known camera height and orientation. The resulting map is used to select areas of high density as ROIs of foreground points. Zhang et al. (44) exploit the known height of their camera to produce a rough estimation of the ground and ceiling planes, similarly to (52). Then, at each new frame, they use the previously estimated planes to select 3D points within a distance threshold of the planes, which are used in a RANSAC algorithm to refine the plane estimates. After removing the ground and ceiling planes, the remaining points are clustered, first by isolating regions of similar depths around local maxima in the depth distribution, and then, for each region, by extracting connected components in the image plane. Munaro et al. (54) estimate the ground plane using a RANSAC-based least squares method that is updated at each new frame to compensate for possible movements of the camera. The authors do not provide a detailed description of their RANSAC-based plane fitting stage. After removing the ground plane from the point cloud, they cluster the remaining points based on their Euclidean distances. In order to avoid over-segmentation of objects, neighboring clusters in ground plane coordinates are merged. Humans belonging to the same cluster are separated later in the detection stage, as will be explained in Section 4.2.
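The RANSAC plane fitting used for ground plane estimation in several of these works can be sketched as follows (a generic textbook variant; the iteration count and inlier threshold are illustrative, and the surveyed papers do not publish these details):

```python
import numpy as np

def ransac_ground_plane(points, n_iter=200, threshold=0.05, seed=0):
    """RANSAC plane fit of the kind used for ground-plane estimation in
    e.g. (52; 44; 54): repeatedly fit a plane to 3 random points and
    keep the hypothesis with the most inliers. Returns (normal, d)
    such that normal . p + d = 0 for points p on the plane."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = 0, None
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        v1, v2 = sample[1] - sample[0], sample[2] - sample[0]
        n = np.cross(v1, v2)
        norm = np.linalg.norm(n)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ sample[0]
        inliers = np.sum(np.abs(points @ n + d) < threshold)
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (n, d)
    return best_model
```

In practice, a final least-squares refit on the inlier set (as in (54)) sharpens the estimate, and the previous frame's plane can seed the point selection (as in (44)).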

Choi et al. use a similar strategy in (66; 67) for detection. After ground plane removal, they cluster 3D points and then select the clusters whose heights are within an acceptable range. HOG-based detectors of both the upper and full body, and a face detector are then applied to the clusters to generate their weak, initial detection hypotheses. In (67), a skin color detector, a motion detector, and a detector based on upper body shape from depth are also used. Galanakis et al. (45) estimate the ground plane (without stating how) to discard any ROI points obtained by background subtraction that would be located on or close to the ground.

Bahadori et al. (58) apply off-line calibration to map their fixed stereo camera disparity data to the 3D world coordinate system. Their resulting reference plane is used to track moving objects by using 3D spatial information. A similar calibration is applied also in the stereo system in (62).

Background subtraction – While similar to ground plane removal, the background subtraction strategy has the advantage of producing ROIs that are more likely to contain humans and to exclude static objects. Its drawback is that it requires learning a model of the background, and updating it in case of moving cameras or variable background. In the latter case, people need to be moving in the scene faster than the background model is updated, to be detected as moving objects. A background model that employs depth may be more robust than a color one to modifications of appearance that are not correlated with changes of the scene’s geometry, such as due to illumination variations (8; 88).

In (57; 56), Almazàn and Jones initialize a background model from the first few frames of a sequence. The model is then updated progressively by a linear combination of the model’s and the current depth values where foreground objects are detected, without modifying background areas that are assumed to remain unchanged. The result is that new background objects are eventually added to the model after they enter the scene, with the risk of adding stationary people when they stop moving for a significant amount of time. Foreground points are detected when the difference between their depth value and that of the model exceeds a threshold, which was empirically established in (57), and which accounts for the measured standard depth variation of the sensor as a function of distance in (56). Foreground points are then projected onto a coarse horizontal grid, whose cells are selected as ROIs when they contain a sufficient number of foreground points. Galanakis et al. (45) also detect foreground points based on their difference from the depth values of a background model. No information is provided on the creation and possible update of the model. In a multi-camera setup, a global 3D coordinate system is used, and foreground points are reconstructed using triangular meshes. Triangles that are too close to the estimated ground plane are discarded. A top-down view of this scene - which in effect is a projection onto the floor - is generated using a GPU and used as a 2D map of the ROI clusters.
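A per-pixel depth background model of this kind, with foreground detection by thresholding and a slow linear update of foreground regions as described for (57; 56), can be sketched as follows; the threshold and learning rate are illustrative values, not those of the papers:

```python
import numpy as np

def segment_and_update(model, depth, fg_threshold=0.15, alpha=0.05):
    """Depth-based background subtraction sketch after (57; 56): pixels
    whose depth differs from the model by more than a threshold are
    foreground; the model is slowly blended toward the current frame in
    those regions, so new static objects are eventually absorbed, while
    background areas are left unchanged. Depths in metres; fg_threshold
    and alpha are illustrative."""
    fg = np.abs(depth - model) > fg_threshold
    model = model.copy()
    model[fg] = (1 - alpha) * model[fg] + alpha * depth[fg]
    return fg, model
```

As noted above, (56) would replace the fixed `fg_threshold` with one derived from the sensor's depth-dependent noise model.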

Muñoz-Salinas et al. define a background model in (53) as a height map, i.e. the map of maximum height for each ground plane coordinate. This model is built as the median of 10 consecutive maps, and it is updated every 10s. This update rate is chosen empirically based on the observed dynamics of people, to reduce the risk of introducing into the model a person who is temporarily standing still in the scene. Background subtraction is performed by selecting the points whose height is above the model’s value. Foreground points are clustered using their projection on the ground plane, and the clusters that occupy a suitably large area and that contain enough points are selected as ROIs. These ROIs are used as human detections for the matching stage. A color-based face detector initializes new tracks. In (62), Harville et al. apply the mixture-of-Gaussians foreground segmentation method of (82) to their stereo-based RGB-D data. The foreground objects are then projected onto a 2D reference plane where occupancy and height maps are generated. These features are then used to track the foreground blobs with Kalman filters in the 2D reference plane.

Salas and Tomasi (39) use the background model introduced by Gordon et al. (89) to combine color and depth in a 4D Gaussian mixture model. Foreground points are detected as those that are more than 3 standard deviations away from the nearest background mode, and large clusters of 3D connected components are selected as ROIs. These ROI clusters are validated as humans or rejected in the detection stage by a color-based HOG detector. Muñoz-Salinas and co-workers in (63; 64) use a similar model to that defined in (82), which is updated by excluding points that belong to detected people. The foreground points are projected onto the ground plane for use in the detection and matching stages, and regions around local density maxima in this plane are selected as ROIs. Bahadori et al. (58) propose a very simple unimodal background model that exploits both intensity images and the estimated stereo disparity. The background model is dynamically adapted after a short initial phase during which moving objects are assumed to be absent from the scene. Moving-object blobs are obtained by subtracting the current data from the background model and are then projected onto the reference plane to be tracked.

Beymer et al. (59) employ the stereo-based background subtraction algorithm proposed in (75). The background model is initialized with an empty scene. The foreground regions are then segmented to extract dominant disparity layers, assuming that different people in the scene may be located at different distances from the camera. The obtained blobs are then processed by the person detection module. Vo et al. (61; 81) identify the background areas by combining the navigation information of the moving robot with a depth-based occupancy grid. Background areas are excluded from the candidate search space, speeding up the subsequent steps of their algorithm.

Motion detection – For indoor applications, it may be reasonable to assume that moving objects are likely to be human, and to select areas with motion as ROIs for human detection. Our previous discussion on the respective sensitivities of depth and color to appearance changes for background subtraction also applies here, and motion may be detected more reliably using depth than from color only. Thus, authors such as Choi et al. (66; 67) detect changes in 3D point clouds of consecutive frames, represented in octrees, following the method proposed in (90). The motion term in their estimation of human presence likelihood is then the ratio of moving pixels in the candidate region.

Han et al. (42) apply the same foreground detection technique as (57; 56), but they use the previous frame as the background model. Thus, their foreground points selection is equivalent to selecting moving objects between two successive frames. The moving points are then clustered into ROIs based on the continuity of their values in the depth image.

4.2 Human detection

Depth information has been found by many authors to provide cues for human detection that are complementary to color-based appearance information. These cues mostly describe the 3D shape of the target, and they can be taken advantage of by (a) direct comparison against simple geometrical characteristics of a human shape, or (b) through the classification of 2D and 3D features, as detailed next. Columns 5 to 7 of Table 4 summarize the reviewed methods based on their exploitation of these depth cues for people detection.

3D geometrical properties – To speed up the detection process, many authors apply a cascade of small detectors to the ROIs, starting from the most lightweight ones, followed by the more computationally intensive ones on the few remaining candidates not dealt with by the earlier stages. A very fast and popular early detection stage is the assessment of the ROI clusters against simple, empirically determined geometrical constraints. In (43), Bajracharya et al. select ROI clusters based on the expected width, height, and depth variance of a standing adult. Then a classifier on 3D features further refines the selection of clusters that have a human-like shape. In (44), Zhang et al. first verify that the height of objects, as well as the number of points in their clusters, are within the expected ranges for a human target. Then, a random selection of normals to the cluster’s surface vote to discard vertical (e.g. wall) and horizontal (e.g. table) surfaces. Finally, a HOG and SVM-based detector is used to validate the remaining human candidates using RGB data. In (54; 55), Munaro et al. consider that ROI clusters may contain several humans, or a miscellany of humans and background objects. They extract sub-clusters that are likely to contain individual humans by detecting heads, identified as local height maxima that follow heuristic rules on their distance from the scene floor and on their minimal separation from others. These initial detections are then validated or rejected as humans by a HOG-based detector on RGB data. Vo et al. (61; 81) apply different geometric constraints to limit the search space of their skin and face detector modules. In particular, features such as size and height (considering both sitting and standing persons) are used.
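A lightweight geometric pre-filter of this kind, checking cluster extents against the expected size of a standing adult, can be sketched as follows (the ranges are illustrative and not taken from any one paper):

```python
import numpy as np

def human_like(cluster, height_range=(1.3, 2.1), width_range=(0.3, 1.0)):
    """Fast geometric gate in the spirit of (43; 44): keep only point
    clusters whose bounding extents match a standing adult. The cluster
    is an (N, 3) array of (x, y, z) points with z up, in metres; the
    ranges are illustrative assumptions."""
    extents = cluster.max(axis=0) - cluster.min(axis=0)   # (dx, dy, dz)
    width = max(extents[0], extents[1])
    height = extents[2]
    return (height_range[0] <= height <= height_range[1]
            and width_range[0] <= width <= width_range[1])
```

Clusters passing this gate would then go to the heavier classifiers (e.g. HOG and SVM on RGB) later in the cascade.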

In (65), Muñoz-Salinas et al. detect the upper body using an Adaboost classifier with Haar-like features in color images, and then confirm these detections by verifying their width and planarity. For each positive classification, a binary mask of the upper body shape is applied to the depth image in order to compute the mean and standard deviation of the depth inside the template shape. These are used to estimate the probability of human detection, following heuristic assumptions on the expected width and planarity of a person. When confirmed by this test on the depth distribution, new detections are used to initialize new tracks. Ess et al. (51) combine the output of a (or any) color-based person detector with depth-based cues in a Bayesian network, where the object detection probability depends on the probability provided by the color-based human detector, the probability of human presence given the scene geometry (using ROIs), and the probability of the detected object being a human given its 3D geometrical properties, evaluated based on typical human height. The detection is also refined around the location estimated by the color detector by imposing a uniform depth inside its bounding box, when depth is sufficiently available.

Other works, such as (42; 40; 68), limit their human detection stage to an assessment of the geometry of ROI clusters in order to achieve an even higher frame rate. In (40), Dan et al. detect humans in top-down depth views by selecting local height maxima within a specified range that show the characteristic, empirically determined shape and size of head and shoulders when seen from above. A similar top-down camera approach has been used by Migniot et al. (68), where the head and shoulder area is obtained by thresholding the depth data and fitting two ellipsoids to the identified regions. This model is then used to estimate body and head orientation.
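Detecting heads as local height maxima in a top-down view, as in (40) and related work, can be sketched as follows (the 3x3 neighbourhood and height range are illustrative choices):

```python
import numpy as np

def head_candidates(height_map, min_h=1.3, max_h=2.0):
    """Head detection in a top-down height map, in the spirit of (40;
    46): select cells that are local maxima of height over a 3x3
    neighbourhood and lie within a plausible human height range.
    Heights in metres; the range is an illustrative assumption."""
    rows, cols = height_map.shape
    cands = []
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            h = height_map[r, c]
            if not (min_h <= h <= max_h):
                continue
            patch = height_map[r - 1:r + 2, c - 1:c + 2]
            if h >= patch.max():        # local maximum
                cands.append((r, c))
    return cands
```

In (40), each candidate would additionally be checked against the empirically determined head-and-shoulders shape before being accepted.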

Han et al. (42) only evaluate the height of moving ROI clusters, assuming that human-sized objects that move in an indoor environment are likely to be humans. Thus, ROI clusters whose height lies within a specified range and does not change significantly over five frames are selected as human detections. A similar approach is also presented in (62; 58; 60), where the height of the detected 3D blobs is used to discard moving objects that are unlikely to be people. Similarly, Almazàn and Jones (57; 56) select ROI clusters of moving points that have a high density when projected onto the ground. In (57), detections are defined as areas of a pre-defined size around local density maxima in the ground projection map of the ROI points. In (56), a blob detection technique is used, with smoothing and hole filling of the projected points into blobs, as well as filtering out of those blobs whose projected-point density is below a certain threshold. The authors note that the depth resolution of their sensors decreases with distance, producing an increasing spread of measured depth values around the exact values, i.e. a stretching of blobs along the line of sight of the camera. Thus, the blobs are first normalized in a polar coordinate system. The density threshold is chosen as a function of depth, in order to compensate for the perspective effect that decreases the number of points in the blobs with distance. Galanakis et al. (45) also select blobs in their top-down 2D view of ROIs. Morphological operations are applied to a binary mask of their 2D view, and blobs that are too small are discarded.
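The ground-projection density test that several of these methods share can be sketched as follows (cell size and count threshold are illustrative; as noted above, (56) makes the threshold depth-dependent to compensate for the drop in point density with distance):

```python
import numpy as np

def ground_density_rois(points, cell=0.2, min_count=30):
    """Project foreground 3D points onto the ground and select
    high-density cells as ROIs, as done in e.g. (43; 57; 63).
    points is an (N, 3) array of (x, y, z) with z up; returns the
    (i, j) grid cells whose point count meets the threshold. Cell size
    (metres) and min_count are illustrative assumptions."""
    ij = np.floor(points[:, :2] / cell).astype(int)   # ground-grid indices
    cells, counts = np.unique(ij, axis=0, return_counts=True)
    return cells[counts >= min_count]
```

Each selected cell (or a merged blob of adjacent selected cells) would then be passed to the human detection stage.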

In (63; 64), Muñoz-Salinas and co-workers compute the likelihood of human presence in candidate regions (i.e. regions around local density maxima of foreground points projected onto the ground plane), based on the maximal height and the number of points in these regions that follow empirically established expected values and associated standard variations. This likelihood is used both for detecting new people to initialize tracks and for tracking existing targets.

Darrell et al. (41) segment the disparity stereo map with a simple combination of a gradient operator and thresholding based on the typical volume occupied by a person facing the camera. Large connected components are then considered as possible human candidates, and head locations are estimated at the top of each connected component. A combination of face and skin detectors is used to rule out false detections.

Liu et al. (48) improved the detection performance of their previous algorithms (46; 47), both in terms of accuracy and processing speed, by using a cascade of classifiers based on 3D geometry data. They use a very fast filter based on empirical thresholds on the typical human head size. A second classification stage, based on a Ring-wedge Mask Detector (91), is then applied to identify the head and shoulders region.

Ma et al. (70) use a pool of classifiers based on color and depth data to detect the human body and its articulated parts. The 3D spatial structure of the tracked object’s parts is taken into account such that the detector pool learns pre-determined configurations, and hence the system is able to cope with pose variations.

Classifier of 2D features in the depth map – Classifiers that are traditionally used on grey-level or color images may also be applied to depth data, or disparity maps in stereo imaging, to recognize the 2D shape of a human. Munaro et al. (55) apply a Haar-like feature classifier in a cascade to both the color image and the disparity map, to exploit different and independent features that increase the detection rate and reduce the number of false positives. Luber et al. (49) introduce a variant of the HOG detector for depth maps, the Histogram of Oriented Depths (HOD), which they use in an SVM classifier to compute probabilities that are linearly combined with those obtained from a classical HOG-based SVM classifier on RGB data to detect humans. A similar approach is used in (70), where several DPM classifiers (36) trained on HOG features extracted from the depth maps are used in the detection phase.
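The idea behind HOD can be illustrated for a single cell: compute gradients on the depth map rather than the intensity image, then accumulate a magnitude-weighted histogram of gradient orientations. This sketch omits the overlapping-block structure and sliding-window classification of the full detector; the bin count and function name are illustrative.

```python
import numpy as np

def hod_cell_histogram(depth_patch, n_bins=8):
    """Histogram of Oriented Depths for a single cell: like HOG, but the
    gradients are computed on the depth map. Minimal numpy sketch; a real
    HOD detector adds overlapping blocks and window-level classification."""
    gy, gx = np.gradient(depth_patch.astype(float))
    mag = np.hypot(gx, gy)                           # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned orientation
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    norm = np.linalg.norm(hist) + 1e-9               # L2 normalisation
    return hist / norm
```

On a planar depth ramp, all gradient energy falls into a single orientation bin, as expected.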

Template matching in depth images has also been used (59; 60; 66; 67; 52) for recognizing the 2D shape of the upper body. Choi et al. (66; 67) compute the likelihood that the depth image contains a person by template matching of the thresholded depth map with an upper body shape. This probability is combined with the output of a HOG-based detector, and of face, skin color, and motion detectors, to obtain the human presence likelihood term in their tracking algorithm. Similarly, Jafari et al. (52) perform template matching of the depth map with a depth template of the upper body. This depth-based detection is used in close-range images, while a color-based HOG detector allows for detecting people at longer ranges, where depth sensors may not operate satisfactorily. In (59), the binary template is applied in a classic fashion to foreground blobs, and a person is detected when the response is above a certain threshold. Satake et al. (60) apply a set of three binary templates (78), containing frontal and side views of head and shoulders, to the disparity map. The sum of squared differences criterion is then used to select human candidates. Detections are checked by using an SVM classifier trained on HOG features.
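The sum-of-squared-differences matching used in (60) can be sketched as a brute-force sliding window: the template is compared against every window of the depth (or disparity) map, and the lowest-scoring position is the candidate. Real systems use efficient correlation and multiple templates; this is an illustrative minimum.

```python
import numpy as np

def ssd_template_match(depth, template):
    """Slide a template over a depth map and return the location with the
    minimum sum of squared differences. Brute-force sketch of the
    template-matching detectors described in Section 4.2."""
    H, W = depth.shape
    h, w = template.shape
    best, best_ij = np.inf, (0, 0)
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            ssd = np.sum((depth[i:i + h, j:j + w] - template) ** 2)
            if ssd < best:
                best, best_ij = ssd, (i, j)
    return best_ij, best
```

When the template appears exactly in the map, the SSD at that location is zero.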

Classifiers on the 3D point cloud – Similarly to (66; 67; 52), template matching of a human shape may also be performed in 3D. Bansal et al. (38) adapt a 3D template to the camera view-point, before its correlation with the disparity map is computed for each ground plane coordinate. Local maxima in this correlation map, together with neighboring correlation values above 60% of the associated maximum, are selected as initial detection candidates. These regions are refined by discarding points with divergent depths, and by selecting areas with a high density of edges in the color image.
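The candidate selection of Bansal et al. (38) can be sketched as region growing around correlation maxima, keeping neighboring cells above 60% of the associated peak. This simplified version grows a single 4-connected region around the global peak of the correlation map; the later refinement steps (depth consistency, edge density) are omitted.

```python
import numpy as np

def correlation_candidates(corr, rel_thresh=0.6):
    """Select an initial detection candidate from a ground-plane
    correlation map: take the peak and grow a 4-connected region of cells
    whose correlation exceeds rel_thresh times the peak value.
    Simplified sketch handling only the global maximum."""
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    mask = corr >= rel_thresh * corr[peak]
    region = np.zeros_like(mask)
    stack = [peak]
    while stack:                                 # 4-connected region growing
        i, j = stack.pop()
        if not (0 <= i < corr.shape[0] and 0 <= j < corr.shape[1]):
            continue
        if region[i, j] or not mask[i, j]:
            continue
        region[i, j] = True
        stack += [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
    return peak, region
```

Note that high-correlation cells disconnected from the peak are not absorbed into its region, which keeps nearby people separate.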

Bajracharya et al. (43) apply a linear classifier to a number of features derived from the 3D points of detected candidates. Some of these features capture the variance of the height of the points within the candidate, and the object's size and extent. Three rotationally invariant features also account for the eigenvalues of the point cloud's covariance matrix.
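Eigenvalue-based shape features of this kind can be computed directly from the covariance of the point cloud. The linearity/planarity/sphericity definitions below are a common convention for rotationally invariant covariance features, not necessarily the exact features of (43).

```python
import numpy as np

def shape_features(points):
    """Rotationally invariant shape features from the eigenvalues of a
    3D point cloud's covariance matrix. Feature definitions follow a
    common convention and are illustrative only."""
    cov = np.cov(points.T)
    ev = np.sort(np.linalg.eigvalsh(cov))[::-1]      # l1 >= l2 >= l3
    linearity  = (ev[0] - ev[1]) / ev[0]
    planarity  = (ev[1] - ev[2]) / ev[0]
    sphericity = ev[2] / ev[0]
    return np.array([linearity, planarity, sphericity])
```

A rod-like cluster scores high on linearity, a wall-like one on planarity, and a compact blob on sphericity, regardless of orientation.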

In order to avoid making hard assumptions on the shape of a human body or upper body, Liu et al. (46; 47) train an SVM classifier on two features computed from the height and color distributions of 3D points. Their features are a histogram of the heights of the upper body, and a joint color and height histogram of the head, respectively. The upper body and head points are found in regions of pre-defined sizes in the ROI clusters. Harville et al. (62) apply a box filter, set by considering the average adult human height and torso width, to the occupancy map corresponding to the segmented 3D foreground clusters. The peak of the response is thresholded to discard false positive detections.

4.3 Use of depth in matching

This section reviews how depth information reduces ambiguities when establishing correspondences between detected people and existing tracks, (a) by providing 3D trajectories, and (b) by enhancing the description of the target in combination with color. These two uses of depth information for matching in the reviewed methods are summarized in the last two columns of Table 4.

3D tracking – The majority of methods reviewed in this survey construct trajectories in 3D space to facilitate 3D tracking. This allows better handling of trajectory crossings when objects move past each other in the camera's view. We now highlight the role of depth position information in the matching stage, which was described earlier in Section 3.

Dan et al. (40) place their camera on the ceiling with a top-down view, therefore the 2D coordinate system of the image can be seen as a good approximation of the 2D ground plane coordinate system. Then, they match detected 3D shapes in adjacent frames from their degree of physical overlap. Galanakis et al. (45) also track people in a top-down view of the 3D scene rendered from multiple views, by comparing their distance to predicted target positions on the 2D ground plane.

Gao et al. (69) employ depth data to build a 3D layered graph model of the scene to resolve possible occlusions, and thus improve their proposed tracking algorithm. In (39), Salas and Tomasi exploit 3D location information for computing one-to-one correspondences between candidates, by including a term based on their separating distance in their appearance and motion based correspondence likelihood formulation. Similarly, the various authors of (58; 59; 60; 43; 44; 46; 47; 49; 52; 51; 53; 54; 55; 56) all perform matching by determining the 3D distance of a detected candidate to its predicted position. In (62; 57; 63; 65; 64; 66; 67; 70), 3D position predictions are used to initialize the search for targets in the 3D space.
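Matching a detection to its predicted 3D position can be sketched as gated nearest-neighbour association. The greedy scheme and gate value below are a minimal stand-in; the reviewed methods use richer cost terms (appearance, motion) and more principled assignment schemes.

```python
import numpy as np

def associate(predictions, detections, gate=0.5):
    """Greedy nearest-neighbour matching of detected 3D positions to
    predicted track positions, with a Euclidean distance gate (metres).
    Returns (track_index, detection_index) pairs. Illustrative sketch."""
    preds = np.asarray(predictions, float)
    dets = np.asarray(detections, float)
    pairs, used = [], set()
    for t, p in enumerate(preds):
        d = np.linalg.norm(dets - p, axis=1)
        d[list(used)] = np.inf                 # each detection used at most once
        j = int(np.argmin(d))
        if d[j] <= gate:
            pairs.append((t, j))
            used.add(j)
    return pairs
```

Detections beyond the gate remain unmatched and would typically initialize new tracks.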

Only three works described in Section 3 do not exploit the 3D trajectory information. In (42), Han et al. use similarities in color, and variations of depth across two adjacent frames, in order to compute matching correspondences, without taking into account the 3D coordinates. In (38), although Bansal et al. use 3D coordinates for ROI selection for their human detection stage, and for camera motion estimation, their matching stage is performed in 2D. Similarly, Vo et al. (61) implement their compressive tracking and Kalman filter by considering the target’s movements only in the image plane.

Joint RGB and depth description – The fusion of color and depth information allows for more reliable correspondence of detected candidates to tracks. An example of such fusion is in Luber et al. (49), where they build their model from a combination of three possible features: Haar-like features in intensity and depth images, and Lab color feature in the RGB image. Several such features are calculated from small rectangles, randomly sized and positioned inside the bounding box of the detected person. A combination of a few of these features is selected by on-line boosting to produce a classifier that attempts to distinguish the tracked person from its surroundings. Liu et al. (46; 47) use a joint color and height histogram of the full body as their appearance model. The likelihood that newly detected candidates match this model is computed using the Bhattacharyya similarity measure. Similarly, Almazàn and Jones (56) model people's appearances from a height histogram associated with color distributions for each histogram bin, approximated by 3D Gaussians in the RGB space. Dan et al. (40) assess the correspondence between two detected candidates by linearly combining a Bhattacharyya measure of similarity of their color histograms, and the overlap of the 3D shapes of both candidates. This last value, in addition to accounting for the distance in the 3D space between the candidates, may also capture the similarity of their shapes if their 3D locations are close enough. Beymer et al. (59) use an intensity model and average disparity value to describe a person. Both are linearly updated, and their drift is limited by applying their person detector confidence as a smoothing factor.
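A joint color/height appearance model with Bhattacharyya matching, loosely after (46; 47), can be sketched as a 2D histogram over hue and height-above-ground bins. Bin counts, the hue-only color representation, and the height range are illustrative choices, not the authors' exact design.

```python
import numpy as np

def joint_color_height_hist(hue, height, hue_bins=8, h_bins=6, h_max=2.0):
    """Joint colour/height histogram as an appearance model: each 3D point
    contributes its hue (in [0, 1)) and its height above the ground.
    Returns a normalised 2D histogram. Parameters are illustrative."""
    hi = np.minimum((np.asarray(hue) * hue_bins).astype(int), hue_bins - 1)
    yi = np.minimum((np.asarray(height) / h_max * h_bins).astype(int), h_bins - 1)
    hist = np.zeros((hue_bins, h_bins))
    np.add.at(hist, (hi, yi), 1)
    return hist / hist.sum()

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalised histograms
    (1 = identical, 0 = disjoint)."""
    return float(np.sum(np.sqrt(p * q)))
```

Identical models score 1, and models with non-overlapping colors score 0, which is what makes the coefficient usable as a matching likelihood.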

Han et al. (42) propose the use of depth for generating an appearance model, where a silhouette obtained from depth information helps in isolating the relevant parts of the body from which a color-based appearance model is built. The neck and waist are identified as local width minima of the silhouette along the vertical direction. They divide the color image of the person into head, torso, and legs areas. Torso and legs colors are then used to build the appearance model, by concatenating histograms of color and texture for both regions. Galanakis et al. (45) also exploit depth to produce a two-part color histogram model of upper and lower body, using their textured mesh representation obtained during their ROI selection stage. The mesh is divided into lower and upper body parts at an empirically determined height.

In (65), Muñoz-Salinas et al. assume planarity of standing people in order to compute a single-valued depth term of the RGB- and depth-based likelihood of a target detection. The mean depth of a candidate region is assessed against a normal distribution whose mean equals the target's predicted distance to the camera, and whose standard deviation is chosen heuristically and decreases with increased confidence in depth (measured as the proportion of pixels in the region that have a depth measure). Two depth terms are computed for the head and torso separately. The detection likelihood of a target also includes the comparison to two color histogram-based appearance models for the head and torso respectively, using the Bhattacharyya measure, and the assessment of the fit of an ellipse on the color image at the expected head location, using image gradients.
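The depth term of (65) can be sketched as a Gaussian score whose spread shrinks as depth confidence (the fraction of pixels with valid depth) grows. The sigma bounds below are invented placeholders, not the authors' heuristic values.

```python
import math

def depth_likelihood(mean_depth, predicted_depth, valid_ratio,
                     sigma_max=0.6, sigma_min=0.1):
    """Depth term of a detection likelihood: score the candidate's mean
    depth under a normal distribution centred on the predicted target
    depth, with a standard deviation that shrinks as the fraction of
    valid-depth pixels grows. Sigma bounds are illustrative."""
    sigma = sigma_max - (sigma_max - sigma_min) * valid_ratio
    z = (mean_depth - predicted_depth) / sigma
    return math.exp(-0.5 * z * z)
```

With this construction, the same depth mismatch is penalised more strongly when the depth measurement is trusted than when most of the region has missing depth.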

5 Considerations on the practical applications of MHT

The methods presented in this survey are, almost always, customised for specific scenarios or application areas by employing assumptions on aspects such as the position of the camera(s) (e.g. static or mobile, top-down or head-level view), the geometry of the scene, and the generic description of a person. In order to guide the reader in their choice of RGB-D MHT method, we next outline the conditions of use of the reviewed methods. These are also summarized in Table 5.

5.1 Type of depth sensor

Historically, depth has mostly been obtained from passive stereo cameras, which offered a cheaper alternative to other technologies such as active sensor cameras. Depth from stereo vision is still widely used in MHT methods, such as (59; 41; 62; 58; 60; 53; 63; 65; 64; 38; 52; 51; 43). The recently introduced and affordable Kinect camera, and others like it, generates depth from structured light and is more convenient to use than stereo vision for indoor scenes, since it requires neither calibration nor the elaborate computation of a disparity map. Thus, computer vision researchers are increasingly adopting such cheaper and more immediate technology for RGB-D MHT when it can sufficiently serve their purpose, such as in (68; 61; 81; 49; 66; 39; 40; 54; 42; 67; 44; 57; 56; 46; 47; 55; 52; 45; 70).

Most of the methodologies presented in this survey could use passive and active sensors interchangeably, including those that extract features directly from disparity maps, as disparity and depth maps have similar properties. However, the optimal conditions of use for the two types of sensor differ significantly, both in terms of depth range and illumination conditions. For example, the depth range of structured light cameras tends to be more limited than that of stereo vision, and they are also more sensitive to infra-red light, which makes them unsuitable for outdoor use. Stereo cameras, on the other hand, require good illumination conditions and may not operate in dark environments. Moreover, the additional processing time required to obtain disparity from stereo data can be critical for real-time applications. These particularities were highlighted by Jafari et al., who used both sensors in (52) to track people at close and far ranges. The second column of Table 5 shows the type of sensor used in the works in this survey.

5.2 Camera Position

Handling of moving cameras – Some applications rely on static sensors that provide a stable background model, e.g. (58; 62; 59; 53; 63; 64; 39; 57; 56; 45), especially when this model is static itself and not updated on-line to account for camera movements, as in (39; 57; 56) and we presume in (45). Some methods attempt to update the background model continuously, e.g. (58; 62; 59; 53; 63; 64). Although these MHT methods did not present any experiments with mobile cameras, the authors of (53) state that their method has been developed with 'human-mobile robot' interaction in mind, and that their background modelling technique is especially appropriate for mobile devices. Indeed, these models may be able to adapt to camera motion, provided that the update rate is fast enough. The implementation of this strategy is not easy since, as discussed in Section 4.1, an update rate that is too fast would be likely to result in slow-moving people being included in the background model. Thus, as the authors explain, it has to be tuned depending on the application.

On the contrary, methods that do not assume a static or nearly static background can generally be used with a mobile camera, such as a PTZ or one mounted on a mobile robot. Some methods assume that both person and camera motion are smooth, and treat their combined effect as a single velocity of the tracked person relative to the camera (65; 54; 44; 60). Others exploit the global 3D coordinate system provided via the depth dimension in order to track the camera's movements. Choi et al. (66) and Vo et al. (61) compute the position of the camera using the ROS library (92) in order to project target locations onto the global 3D coordinate system, and Bansal et al. (38), Bajracharya et al. (43), Ess et al. (51), and Jafari et al. (52) do the same using visual odometry. In (51), the visual odometry algorithm is improved by feedback from the tracker, which helps avoid using areas that are likely to contain moving objects. In their later work, Choi et al. (67) estimate both the motion of the camera and the humans in the scene in their combined detection and matching stage. In general, these approaches assume that the camera is moving, at least locally, on a mostly flat ground plane.

Method Sensor type Handle a mobile camera Handle multiple cameras Camera location constraints Real-time Processing hardware Require flat ground Other special requirements
Bansal et al. (38) Stereo x Roughly frontal view 10 fps CPU Intel Dual-Core None identified
Salas and Tomasi (39) Structured light x Not tested Roughly frontal view No data No data Limited to standing people
Dan et al. (40) Structured light x Not tested Vertical top-down view 55 fps on QVGA stream CPU 2.4GHz, 4GB RAM Limited to standing adults
Darrell et al. (41) Stereo Not tested x Roughly frontal view 12 fps No data x None identified
Han et al. (42) Structured light Not tested x Roughly frontal view 10 fps (2 people) CPU Dual core 2.53GHz, 4GB RAM x Limited to standing adults – People must be moving to be detected
Bajracharya et al. (43) Stereo Not tested None identified 5–10 fps No data x Limited to standing adults
Zhang et al. (44) Structured light Not tested Roughly frontal view 7–15 fps CPU 2.0GHz, 4GB RAM None identified
Galanakis et al. (45) Structured light x Multiple views from different angles are desirable Real-time (no exact data) GPU NVIDIA GTX680 x None identified
Liu et al. (46; 47) Structured light Not tested Not tested None identified 30-50 fps CPU i5-2500, 8GB RAM May be limited to standing adults
Luber et al. (49) Structured light Not tested Roughly frontal view No data No data x May be limited to standing people
Ess et al. (51) Stereo Not tested Roughly frontal view when using HOG based detectors 3 fps GPU nVidia GeForce 8800 and CPU 2.66 GHz None identified
Jafari et al. (52) Combined stereo and structured light Roughly frontal view 18-24 fps CPU i7-3630QM and GPU NVIDIA GeForce GT650m, 12GB RAM May be limited to standing people in the far range
Muñoz-Salinas et al. (53) Stereo Not tested Not tested Roughly frontal view 10 fps CPU 3.2 GHz Pentium IV People only detected in a close range (by face detector) but tracked on a larger range
Munaro et al. (54; 55) Structured light Roughly frontal view 30 fps (55), 26 fps (54) CPU Xeon E31225 3.10GHz (54) Minimal separation between people’s heads: 30 cm – May be limited to standing adults
Almazàn and Jones (57; 56) Structured light x None identified No data No data x Stationary people may not be detected after a while
Bahadori et al. (58) Stereo x Not tested Fixed to ceiling, pointing down at 10 fps CPU 2.4 GHz May be limited for other camera configurations, 3D coordinate system calibration
Beymer et al. (59) Stereo x Not tested Parallel to the ground floor 10 fps No data x None identified
Satake et al. (60) Stereo Not tested None identified 9 fps No data x Developed for wheel-chair applications, camera placed around 1m height
Vo et al. (61; 81) Structured light x None identified 23 fps CPU 2.4 GHz x Developed for robot applications, camera placed around 1m height
Harville et al. (62) Stereo x Not tested None identified 8 fps CPU 750 MHz None identified
Muñoz-Salinas et al. (63; 64) Stereo Not tested None identified 15 fps (63) and 100 fps (64) (4 people) AMD Turion 3200, 1GB of RAM None identified
Muñoz-Salinas et al. (65) Stereo Not tested Frontal view at head level 23 fps (3 people) AMD-K7 2.4 GHz x Limited to standing people
Choi et al. (66; 67) Structured light Not tested Roughly frontal view 5-10 fps GPU x May be limited to adults
Migniot et al. (68) Structured light x Not tested Vertical top-down view 40 fps CPU 3.1 GHz x None identified
Ma et al. (70) Structured light Not Tested None identified No data No data x Hand labelled data of body parts to train DPM
Table 5: Conditions of use of the presented methods

Note that the works in (53; 46; 47; 63; 64; 70), although not tested with mobile cameras, perform tracking in a global 3D coordinate system similarly to (38; 66; 52), and we believe could therefore apply the same mobile camera-handling strategy if combined with camera motion estimation. These approaches can be successful in a moving camera scenario if the camera position requirements (discussed in the next section) can be generally met.

The works in (49; 42; 55) could also employ the same 'smooth relative-speed strategy' as (65; 54; 44). The possibility of using mobile cameras with the reviewed methods is indicated in the third column of Table 5.

Handling of multiple cameras – The works in (59; 62; 60; 69; 49; 54; 52) can exploit information from multiple cameras simultaneously and fuse detections from independent cameras at the matching stage. This requires the relative positions and orientations of the cameras to be known or estimated off-line. In particular, (54) has been extended to the multi camera scenario in (93; 94). This strategy would be accessible to all methods that apply the main MHT pipeline and perform tracking in a 3D global coordinate system.

This multi-sensor data fusion strategy is not possible in works that apply the variation of the MHT pipeline since they do not perform the detection and matching stages sequentially, such as in (57; 66; 67; 63; 64; 65). However, in (57; 56; 63; 64), detection and matching are performed on a global representation of data on the 2D ground plane, which is generated in (57; 56; 64) from the point clouds of several cameras. The methods in (58; 66; 67; 65) use 2D color image based people detectors, but they track people in a 3D space. Thus, they could use multiple cameras if all the transformations from the individual image spaces to the global 3D space are known. Similarly to (57; 56; 63; 64), Galanakis et al. (45) detect and then track people in a common 2D coordinate system. The fourth column in Table 5 indicates which of the reviewed works can (or could) handle multiple cameras.

Requirements on the camera’s position and orientation – Methods that use human detectors that are trained on specific view positions and angles, such as HOG trained from roughly frontal views, require similar views of people. This is the case in (54; 55; 49; 44; 65; 39; 38; 52), and also in the implementation of (51), although the authors stress that other color based detectors can be used. Similarly, the works in (66; 67; 53) employ face detectors, and require a roughly frontal view of the face to be visible in a significant number of frames. The methods in (41; 53; 65) were specifically designed for a camera located at head (or just under head) height. In particular, the work in (65) assumes the human shape as seen by the camera can be approximated by a vertical plane. Han et al. (42) also require a frontal view for analyzing human silhouettes, as explained earlier in Section 4.3. The methods in (40; 68) operate on a top-down view due to their specific detection strategy centered around monitoring humans seen from above. In (45), a sufficient coverage of the scene by multiple cameras at various viewing angles is preferred to produce 3D textured meshes of humans. In (58), the camera is placed on the ceiling at an angle, so as to reduce occlusions as upper body parts are always visible. A similar camera position is used in the e-health application presented by Ma et al. in (70). However, their proposed system, based on different DPM classifiers, is able to deal with considerable variations of human body pose, hence also ensuring a degree of invariance to camera viewpoint. Beymer et al. (59) propose a 3D motion model based on the assumption that the stereo camera is placed parallel to the ground floor. In (60), the system has been specifically designed for a wheel-chair navigation system, and the stereo camera is placed at an approximate height of 1m.
The various requirements and limitations of the camera position and orientation are summarized in column 5 of Table 5.

5.3 Speed of computation

The works in (39; 49; 57; 56; 70) provide no computational information. Harville et al. (62) report a processing rate of 8 fps; however, this was obtained on now-obsolete hardware and would improve dramatically on current workstations. The rest of the methods we review claim real-time performance, with the exception of Ess et al. (51), who report a running time of 300ms per frame on a GPU, plus an additional (off-line) 30s for the color based human detector. Their method can be used with other, more efficient, color based detectors.

Running times and the hardware platforms used, when available, are reported in column 6 and 7 of Table 5. Methods that use stereo vision suffer from the overhead of deriving disparity maps, while depth information is readily available from structured light-based sensors. Some authors, such as Bansal et al. in (38), speed-up this computation using a GPU.

Works such as in (42; 63; 64) report performances that vary significantly according to the number of people being tracked. This is particularly the case in methods that use multiple trackers for individual people, such as the 3D Kalman filter in (60) or particle filters in (63; 64; 65; 68). Such methods also need to establish a trade-off between the number of particles used and the accuracy and robustness of tracking.

Jafari et al. (52) exploit both depth and color information in complementary distance ranges, and speed-up the total process from 33fps (on GPU) to 18fps by applying the computationally intensive color detector only in far ranges (over 7m) where the depth-based detector does not operate.

Finally, Liu et al. (47) report a processing rate range of 30-50 fps, without the use of GPU hardware, for their detection and tracking system. In addition, in their more recent work (48), they boosted the detection phase by using a cascade of classifiers on top of their depth-color histogram model. As a result, their detection module can operate at 77-140 fps; however, no rate is given for the complete detection and tracking pipeline.

5.4 Specific Constraints

Flat ground – Methods that detect ROIs based on an estimation of the ground plane, as detailed in Section 4.1, cannot handle environments where the ground cannot be approximated by a plane. This is the case, for example, with staircases, where Munaro et al. report worse results in (54). Similarly, the methods in (38; 52) classify the scene into general categories that include a flat ground and vertical structures, and would most likely not generalize well to a staircase environment.

In both (40) and (68), people are detected by thresholding the distance of their head to the camera, which has to be within an acceptable range. Therefore, although there is no hard constraint on ground planarity, varying ground level can influence the head-to-camera distances significantly.

Methods, such as those in (57; 56; 46; 47; 53; 63; 64; 45; 43), project detections onto a flat ground plane. In (57; 56; 45; 43), ROIs are not selected based on height from this plane (in (43), ROI clusters are selected based on their absolute height, and in (45), only a few points close to the ground plane are discarded, not the full ROI clusters), so the flat ground assumption does not need to correspond to reality. However, this is not the case in (46; 47; 53; 63; 64) where people have to be located in a relatively narrow band above the floor to be detected. In (58; 62) a reference plane is used to track 3D blobs in a real world coordinate system. As the camera and real world scene are calibrated, the reference plane does not have to be necessarily flat. Column 7 in Table 5 indicates which of the methods operate only in ground plane scenarios.

Constraints on pose – Several works, e.g. (59; 62; 58; 68; 61; 54; 55; 40; 42; 66; 67; 46; 47; 53; 63; 64; 43) select ROIs based on height and volume assumptions derived from a model of a standing adult person. Such methods may not be able to detect and track, e.g., children, adults with abnormal heights, and sitting people, if the acceptable ranges for height and volume are not chosen appropriately. This is the case, for example, for Choi et al. (66) and Han et al. (42), who filter heights in ranges of 1.3 to 2.3 m and 1.5 to 2 m respectively. Other authors, such as Zhang et al. (44) and Liu et al. (46; 47), accept considerably larger ranges of values (0.4 to 2.3 m and 0.6 to 2 m for height, respectively), to prepare their ROIs to handle children or people who may not be standing. Ess et al. (51) suggest the possibility of detecting children by increasing the standard deviation of their normal height distribution.

Methods that use full-body detectors such as HOG and HOD, i.e. (61; 54; 55; 49; 39; 52), may also struggle to detect sitting people if these detectors are trained on standing people only. To alleviate this shortcoming, Choi et al. (66; 67) combine full-body and upper-body detectors, in order to cope with both occlusions of the lower part of the body and various poses. Multiple different DPM detectors based on HOG features are used to deal with deformable body poses (e.g. sitting, bending) in (70). Jafari et al. (52) also apply an upper-body detector based on a depth template, as described in Section 4.2, and Liu et al. (46; 47) detect people based on a model of height of the upper-body. Similarly, Zhang et al. (44) use a poselet-based detector (95) to identify body parts, and Bansal et al. (38) perform matches of several local contours, thus allowing the detection of people in arbitrary poses. In (65), detected candidates are checked against a planar model of a standing person using depth information. Thus, sitting people would be rejected by the human detector.

Miscellaneous – Munaro et al. (54; 55) distinguish people in close interaction based on the separation of their heads which needs to be at least 30cm. This constraint is generally easily respected, especially in a public environment. Han et al. (42) detect people based exclusively on movement and on their height (see Section 4.2). Therefore, motionless people would not be detected. In (53), new people are detected by a face detector, and the authors set the detector to only scan the close range area (0.5-2.5m) to speed-up the process. Tracking is still performed on the full space, but would be initialized only after the person enters this detection region. Constraints on body pose and other miscellaneous constraints are stated in the last column of Table 5.

5.5 Preprocessing the depth map

Depth maps tend to suffer from noise and areas of missing values, whatever the sensor, and result in inhomogeneous point clouds. A few of the reviewed works correct these deficiencies before exploiting depth information.

Depth map denoising and completion – Zhang et al. (44) suppress outliers from the 3D point clouds by removing points that have only a few neighbors. In Galanakis et al. (45), the overlapping views of the structured light sensors create interference that adds noise to the scene. The authors report that in their setup this noise lies predominantly on the ground plane and is negligible on humans, and they use an estimate of the ground plane to eliminate any points close to the ground in their ROI selection stage. The method proposed by Dan et al. in (40) detects people based exclusively on their 3D shape in the depth map, and so can underperform when faced with missing depth values. To close the holes in their map, they first apply morphological operations to the binarized depth values, obtained by thresholding the heights above the ground plane, and then a nearest neighbor interpolation is used to recover the depth values in the gaps that were filled in the binary map.
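Nearest-neighbour hole filling can be sketched in its simplest form: each missing pixel takes the value of the closest valid pixel in its row. This one-dimensional variant is a stand-in for the 2D morphological closing plus interpolation used in (40); the function name and the row-wise restriction are illustrative simplifications.

```python
import numpy as np

def fill_depth_holes(depth, invalid=0.0):
    """Row-wise nearest-neighbour hole filling: each missing depth pixel
    (marked with the `invalid` value) takes the value of the closest
    valid pixel in its row. Simplified sketch of depth map completion."""
    d = depth.astype(float).copy()
    for row in d:
        valid = np.flatnonzero(row != invalid)
        if valid.size == 0:
            continue                       # nothing to copy from in this row
        holes = np.flatnonzero(row == invalid)
        nearest = valid[np.abs(holes[:, None] - valid[None, :]).argmin(axis=1)]
        row[holes] = row[nearest]
    return d
```

Rows with no valid measurement are left untouched, which mirrors the fact that interpolation cannot invent depth where none was observed nearby.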

Voxel grid filtering – Works, such as Jafari et al. (52) and Almazàn and Jones (56), which consider the number of points in ROI clusters have to take into account the perspective effect that makes the density of points depend on the distance to the camera. Munaro et al. (54) address this issue to produce a homogeneous density of points in the volume space by re-sampling their 3D point cloud before ROI detection and clustering. Thus, they ensure that the number of points in a cluster depends only on the size of the object within it, rather than on a combination of its size and distance to the camera. In (61), Vo et al. reduce their initial search space by subsampling their color and depth images.
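Voxel grid filtering itself is simple to sketch: quantize each point to a voxel, then replace all points sharing a voxel with their centroid, so point density no longer depends on distance to the camera. The voxel size below is an illustrative value, not the one used in (54).

```python
import numpy as np

def voxel_downsample(points, voxel=0.06):
    """Voxel-grid filtering: replace all points falling in the same voxel
    (cube of side `voxel` metres) by their centroid. Points is an N x 3
    array; returns one point per occupied voxel."""
    keys = np.floor(points / voxel).astype(int)      # voxel index per point
    _, inv, counts = np.unique(keys, axis=0, return_inverse=True,
                               return_counts=True)
    inv = inv.ravel()
    sums = np.zeros((counts.size, points.shape[1]))
    np.add.at(sums, inv, points)                     # sum points per voxel
    return sums / counts[:, None]                    # centroid per voxel
```

After filtering, a nearby cluster of many redundant measurements contributes the same single point as a distant, sparsely sampled object of the same size.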

Fusion of point clouds – Jafari et al. (52) obtain richer 3D point clouds by combining those obtained over a time window of 5 to 10 frames, using their mobile camera motion, estimated by visual odometry. In (64), Muñoz-Salinas et al. merge the ground plane representation of overlapping views from several sensors by retaining the points in a global coordinate system that have the highest confidence. Galanakis et al. (45) fuse foreground points of overlapping views in a global coordinate system during their ROIs selection stage. Note that works, such as (57; 56), which fuse foreground points of non-overlapping views, do not require specific manipulation of the point clouds and only need to calibrate their cameras’ positions and orientations.

6 Online resources: benchmark datasets and software

In this section, we provide an overview of publicly available benchmark datasets and source code, with a summarised list provided in Tables 6 and 7, respectively.

6.1 Dataset resources

The ETH dataset from (96) contains stereo sequences obtained by a pair of AVT Marlin F033C cameras mounted on a mobile platform. Images are acquired at 14fps. The corresponding disparity maps are obtained using the stereo algorithm presented in (97), but are not available for download. The dataset is composed of 5 sequences recorded in very busy pedestrian zones, and these have been manually annotated every four frames by labeling only pedestrians that are greater than 60 pixels in height. The groundtruth does not contain track IDs, hence only detector performance can be evaluated. An example of the ETH stereo data is shown in Figure 4.

Figure 4: ETH stereo dataset example.
Figure 5: RGB-D UHD dataset example.

The dataset presented in (24) and (49) was obtained using static cameras, positioned 1.5m high, in a large university hall. We refer to this dataset as the University Hall Dataset (UHD). An array of three Kinect devices, with non-overlapping fields of view to avoid IR interference, was used to record people passing through the hall. Due to the Kinect sensor’s range limitations, depth data is not available beyond a certain distance in the hall. Synchronised sequences were recorded at 30Hz. This rather small dataset is composed of three sequences, each approximately 1130 frames in length. There are 3021 instances of people, and 31 tracks have been manually annotated as groundtruth (for detection and tracking). An example of this UHD data is shown in Figure 5.

The RGB-D tracking dataset presented in (67) contains two different scenarios captured with Kinects, one static and one mobile. We refer to this dataset as the StanfordRGB-D dataset. The first scenario, the Kinect office, contains 17 sequences with the camera placed 2m high in an office. These videos contain different occlusion scenarios and human poses. The second scenario, the Kinect mobile, contains 18 sequences with people performing daily activities in offices, corridors, and hallways. These sequences were recorded with the camera mounted on a mobile platform (a PR2 robot). In both sets, human positions are hand-annotated (four images every second) with bounding boxes around their upper bodies - hence, both detection data and target IDs are included. Groundtruth odometry of the camera’s location in 3D space is also available. An example of the StanfordRGB-D dataset for the static camera scenario is given in Fig. 6.

Figure 6: StanfordRGB-D dataset example.

The Kinect Tracking Precision Dataset (KTP) proposed in (54) and (98) was acquired with a Microsoft Kinect, recorded at 30Hz on board a mobile platform. It contains 5 different sequences, exhibiting 14766 instances of humans in 8475 frames. Both manually labelled 2D image and metric groundtruth (for detection and target IDs) are provided, and 3D positions are also available since an infrared marker was placed on every subject’s head. Figure 7 shows an example frame from the KTP dataset.

Figure 7: RGB-D KTP dataset example.

The dataset in (46; 47) contains 10 sequences recorded with a Kinect sensor in an indoor (shop) environment, and we refer to it as the SD dataset. The device was mounted 2.2m high, tilted towards the floor, and the sequences were recorded at 30Hz. The groundtruth, produced once every 30 frames, does not contain target ID information, and thus only detection accuracy can be tested. An example of the SD dataset is displayed in Fig. 8.

Figure 8: RGB-D SD dataset example.

A recent dataset introduced in (56) was obtained using three static Kinect devices, all positioned at about 2m high in a lab, with non-overlapping views. We refer to this multicamera dataset as the KingstonRGB-D dataset. The sequences contain people moving in the lab, individually or in groups, with paths crossing at times. The dataset comprises six 1000-frame sequences, split equally into a training set and a test set. The cameras’ calibration matrices and the data to obtain a wider planar map of the scene are also available. The groundtruth supplies detections and target IDs for all the different views. An example of the KingstonRGB-D dataset is shown in Fig. 9.

Figure 9: RGB-D KingstonRGB-D dataset example.

To recapitulate, only the ETH dataset (96) is based on stereo data, while the others presented here have all been recorded using the Kinect and hence contain only indoor scenes. As also highlighted in Section 5, in most of the proposed approaches the position of the camera facilitates the acquisition of frontal views of the moving human. Only in the dataset presented in (67), (46; 47) and (56) is the camera placed close to the ceiling, giving a top-oblique view of the scene. This setup yields a more realistic example of a typical surveillance camera location.

| Dataset | Device | # Sequences | # Frames | Groundtruth | Comments |
| --- | --- | --- | --- | --- | --- |
| ETH (96) | Stereo device (AVT Marlin F033C) | 8 | 5017 | Yes: manually annotated, detection only | Minimum pedestrian size 48 pixels; calibration and odometry data available |
| UHD (49; 24) | Multiple Kinect 1, static | 3 | 1130 per sequence | Yes: manually annotated detections and 31 people tracks | Part of the scene out of depth range |
| StanfordRGB-D (67) | Kinect 1, static and mobile | 17 (static), 18 (mobile) | 4500 per sequence on average | Yes: manually annotated four images per second, detections and tracks | Camera positioned 2m high for the static sequences; groundtruth odometry of camera location available |
| KTP (98; 54) | Kinect 1, static and mobile | 5 | 8475 | Yes: manually annotated and infrared marker groundtruth, detections and tracks | Device placed at robot level; sequences of different complexity |
| SD (46; 47) | Kinect 1, static | 10 | | Yes: manually annotated one image per second, detections only | Camera positioned 2.2m high, tilted; cluttered and crowded scenes |
| KingstonRGB-D (56) | Multiple Kinect 1, static | 6 | 1000 per sequence | Yes: manually annotated, detections and tracks | Cameras positioned around 2m high; cameras’ calibration matrices available |

Table 6: RGB-D benchmark datasets - all datasets share the same resolution and frame rate, except ETH (96)
| Method | Processing Arch. | Processing Rate (fps) | Dependencies | Availability |
| --- | --- | --- | --- | --- |
| Luber et al. (49), (24) | CPU | | Eigen2 | Partial: depth-based detector module and integration with Kinect not available |
| Choi et al. (67) | CPU+GPU | 5-10 | OpenCV, Boost | Partial: depth-based detector module not available |
| Munaro et al. (98), (54) | CPU | 23-28 (detector only), 19-26 (detector + tracker) | Boost, Eigen3, FLANN, OpenNI, PCL | Partial: only detector module available; manual initialization of ground plane required; integrated with PCL |
| Jafari et al. (52) | CPU+GPU | 24 without HOG-GPU, 18 with HOG-GPU | FOVIS, OpenNI x64, CImg, CUDA, KConnectedComponentLabeler, Boost, Eigen3, ImageMagick++ | Partial: missing modules for stereo data processing to estimate tracked camera position and projections; GPU-CPU processing enables far-distance detections |
| Munaro et al. (93; 94) | CPU | 30 | ROS, PCL | Full for a live camera network, but no plug-and-play module to test offline data; multi-camera and multi-device support for calibration and synchronization |

The authors of the papers were contacted, and responded, to clarify details about their software releases.
Table 7: Software resources

6.2 Software resources

Only a few software resources for RGB-D MHT are publicly available; a list of these can be found in Table 7. The source code for the method presented in (49) is available for Linux platforms - however, it does not provide all the functionality described in (49). For example, the code corresponding to the detection module based on HOD features (described in Section 4.2) has not been released, but the code for the tracking core, based on MHT (Section 3.1), and the online adaboost detector (Section 3.1) are available and are integrated with a laser range scanner. Despite the source code being incomplete, this resource is still very useful, as the missing HOD module can be developed by the interested researcher starting from one of the available color-based HOG implementations and then using the UHD data to train the classifier. The code can also easily be ported to a Windows environment, as its only dependency, Eigen, is available for both Linux and Windows. The authors of (49) do not provide details of the computational performance of their method.

The authors of (67) provide the source code for their tracking module, based on an RJ-MCMC particle filter (Section 3.2), but some of their proposed detectors (Section 4.1), in particular their depth-based silhouette detector (Section 4.2), are not made available or integrated into the main processing loop of their software. Their method also runs on Linux, but we have ported it to Windows, as its main dependencies, OpenCV and Boost, are available on both platforms.

Munaro and co-workers in (54) and (98) have integrated the detector stage of their tracking methodology into the Point Cloud Library (PCL) (99). This integration with such a widely used library is one of the main advantages of this source code, as it can be easily ported to both Windows and Linux. They indicate a processing throughput of 19fps on an Intel i5-520M 2.40GHz CPU and 26fps on an Intel Xeon E31225 3.10GHz CPU, in both cases with 4GB of DDR3 memory. These remarkable results can be attributed to specific optimizations; for example, as stated in Section 5.5, the algorithm in (54) dramatically reduces the point cloud size by a subsampling procedure and by eliminating ground plane points, as described in Section 4.1.

Recently, this software package has been upgraded by Munaro et al. (93; 94) to support a multi-camera RGB-D system. The new software library, OpenPTrack, is compatible with the Microsoft Kinect and the Mesa SwissRanger and can achieve real-time tracking of people. Each sensor stream independently detects people, while tracking is performed in a central unit by fusing the contributions of all the network nodes. The detection and tracking software, however, is not easily accessible as a plug-and-play module. The algorithms presented in (54; 98; 55) are included in OpenPTrack.

Jafari et al. (52) provide the source code for their method, which has several dependencies, as shown in Table 7. The OpenNI library is used as the interface for both the Kinect and Asus Xtion sensors. Their system is based on both a short-range depth-based human detector (100), running at 24fps on a single CPU, and a far-range HOG-based human detector (77) (see also Section 3.1), which must run on a GPU. Their experimental results were obtained using an Intel i7-3630QM with 12GB RAM and an NVIDIA GeForce GT650m GPU. The main advantage of this software resource is that the two detection modules can be activated independently. This adaptability offers the opportunity to balance accuracy against processing speed, and to avoid using modules when they are not necessary, e.g. the longer-range detector in an indoor environment. Note that the system requires calibration and odometry data to operate.

7 Comparative evaluations

We now consider how various works have used the datasets introduced in the previous section for evaluating their methods. Two types of comparative evaluations are presented next - the first attempts to compare different published works on a publicly available dataset (or part of it), while the second presents within-method comparative results obtained by switching one or more of a method’s components off. Unfortunately, we are not able to compare the results of the software listed in Table 7 due to the limitations outlined in the previous section.

7.1 Inter-Method Comparative Results

Two publicly available datasets have been used by more than one published work, the ETH and the UHD datasets.

ETH - The stereo ETH dataset has been used by (67; 52; 38; 54; 51; 43) to test their respective methods, with many utilizing different sequences, and metrics, for evaluation. Bearing this in mind, Table 8 displays the log-average miss rate (LAMR) (9) results, which focus on people detection accuracy, for the ETH-Bahnhof sequence of the ETH dataset. LAMR is computed by averaging the miss rate vs false positives per image (MR-FPPI) curve over a fixed range of the false positive axis. In particular, we use the MR-FPPI results reported in (67; 38; 54; 51) to extrapolate the LAMR values (second column of Table 8). Note that the MR-FPPI results reported in (67; 38) are not available over the entire range, and for this reason we estimate a Modified_LAMR (third column of Table 8) by considering a smaller FPPI interval. The best Modified_LAMR result is obtained by Choi et al. (67).
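As a concrete illustration of the metric (not the exact evaluation code of (9)), LAMR can be computed from sampled MR-FPPI curve points as follows; the interval bounds are parameters, so the Modified_LAMR variant is simply a narrower [lo, hi]:

```python
import numpy as np

def log_average_miss_rate(fppi, miss_rate, lo=1e-2, hi=1.0):
    # Sample 9 reference points logarithmically spaced in [lo, hi] on the FPPI axis,
    # the convention popularised by Dollar et al. (9).
    refs = np.logspace(np.log10(lo), np.log10(hi), 9)
    samples = []
    for r in refs:
        idx = np.where(np.asarray(fppi) <= r)[0]
        # Take the curve value at the largest FPPI not exceeding the reference
        # (fall back to the first point when the curve starts above it).
        samples.append(miss_rate[idx[-1]] if len(idx) else miss_rate[0])
    # Average in log space, guarding against zero miss rates.
    return float(np.exp(np.mean(np.log(np.maximum(samples, 1e-10)))))
```
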

The sequence ‘Sunny day’ of the ETH dataset is used to test the methods proposed in (52; 38; 43). The results are reported as graphs of recall versus false positives per image. As reported by Jafari et al. (52), their method achieves the best results - for example, for a fixed FPPI value of 0.5, their recall rate is greater than those of (43) and (38).

| Method | LAMR | Modified_LAMR |
| --- | --- | --- |
| Ess et al. (51) | 0.645 | 0.527 |
| Bansal et al. (38) | | 0.612 |
| Choi et al. (67) | | 0.434 |
| Munaro et al. (98; 54) | 0.663 | 0.592 |

Table 8: ETH dataset detection results

UHD – The UHD dataset was used to evaluate the methods proposed in (49), (54), (70), and (67), the last tested with only color-based features. Tracking performance is reported using the CLEARMOT metrics (101), for which two indexes are given: the multiple object tracking accuracy (MOTA) index, which estimates the tracking error by considering the false negatives, false positives and mismatches, and the multiple object tracking precision (MOTP) index, which measures how well exact target positions are estimated. False positive and false negative ratios and identity switches are also reported. Table 9 shows the results reported in (49) and (54; 98). While the method proposed by Luber et al. (49) delivers the best performance in terms of MOTA, FP and FN, the method of Ma et al. dramatically reduces the number of identity switches. Comparable performance is obtained by Munaro et al. (98), who state that their lower score on this dataset is mainly due to mis-detection of people in the staircase sequence, as it breaks the flat-ground assumption that is central to their approach (as described in Section 4.1). When ignoring these mis-detections on the stairs, as well as re-annotating some groundtruth believed to be incorrect, the authors reported an improved MOTA result of 88.9%. This result cannot be used for comparative evaluation here, as the groundtruth is modified. For the MOTA metric, the methods presented in (70; 67) obtain significantly lower scores.

| Method | MOTP | MOTA | FP | FN | ID switches |
| --- | --- | --- | --- | --- | --- |
| Luber et al. (49) | - | 78.0% | 4.5% | 16.8% | 32 |
| Munaro et al. (54; 98) | 73.7% | 71.8% | 7.7% | 20.0% | 19 |
| Choi et al. (67) (only color) | 57.6% | 20.2% | 20.9% | 57.6% | 1.28 |
| Ma et al. (70) | 70.4% | 26.9% | 13.9% | 57% | 2.1 |

Table 9: UHD dataset tracking results
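For reference, the MOTA index reduces to a simple ratio over error counts accumulated across all frames; a minimal sketch with illustrative argument names:

```python
def mota(false_negatives, false_positives, id_switches, num_groundtruth):
    # MOTA = 1 - (missed detections + false positives + identity switches)
    #            / total number of ground-truth objects over the sequence.
    return 1.0 - (false_negatives + false_positives + id_switches) / num_groundtruth
```
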

StanfordRGB-D – The StanfordRGB-D dataset has been used by its creators in (67), and by Liu et al. (47) and Vo et al. (61), to evaluate the detection accuracy of their proposed approaches to MHT. Choi et al. (67) present their results in terms of MR-FPPI and LAMR for the two different scenarios (fixed camera and mobile platform), obtained by averaging across the different sequences. Table 10 summarizes the LAMR values reported in (67) for the two scenarios: static camera (second column) and moving camera (third column). After the full method in the first row of the table, each row reports the results obtained by turning off one of the detectors (see Section 3.2). As shown, the depth cue is the most important for the system: the LAMR value increases by around 0.25 in both scenarios when this detector is not employed. The HOG-based detector is also significant to the final performance of the system, while the impact of the other detectors is smaller. The full system obtains the same LAMR value of around 0.6 for both scenarios. The recent results obtained by Vo et al. (61) show that for both scenarios (moving and static cameras) their approach outperforms the RJ-MCMC method in (67). Liu et al. (47) only report their results in terms of MR-FPPI, and thus it is not possible to precisely calculate Modified_LAMR values for their method.

| Method | Static camera | Mobile camera |
| --- | --- | --- |
| Choi et al. (67), full | 0.60 | 0.601 |
| No depth | 0.844 | 0.858 |
| No HOG | 0.657 | 0.695 |
| No Face | 0.612 | 0.608 |
| No Skin | 0.626 | 0.629 |
| No Motion | 0.592 | 0.637 |
| Vo et al. (61) | 0.52 | 0.514 |

Table 10: StanfordRGB-D dataset detection results

7.2 Intra-Method Comparative Results

Variations of the same method have been compared on three datasets, providing intra-method comparative results.

KTP – The KTP dataset was prepared and used by (98) to evaluate the tracking performance of their method (54) with the CLEARMOT metrics. In Table 11, we present some of the results reported in (98). The first row shows the results obtained by the algorithm presented in (54) using all its components, including the sub-sampling strategy described in Section 5.5. This strategy guarantees real-time performance (see Section 6) with little loss of accuracy compared to not subsampling the point cloud (second row). The last three rows contain the results of using different color spaces as input to the color classifier (see Section 3.1). The authors claim that the best results are obtained with the HSV color space, especially for the reduction of identity switches.

| Variant | MOTP | MOTA | FP | FN | ID switches |
| --- | --- | --- | --- | --- | --- |
| Full (HSV) | 84.2% | 85.8% | 0.7% | 12.5% | 53 |
| No sub. | 84.2% | 83.0% | 0.6% | 15.9% | 56 |
| Full (RGB) | 84.2% | 86.1% | 0.8% | 12.7% | 60 |
| Full (CIELab) | 84.2% | 86.5% | 0.9% | 12.2% | 56 |
| Full (CIELab) | 84.2% | 86.7% | 0.9% | 12.9% | 65 |

Table 11: KTP dataset tracking results

As previously mentioned, in (94) Munaro et al. present an extended software library, containing the algorithms presented in (98), that is able to cope with different depth devices. In (94) they also evaluate the performance of their tracking algorithm using different devices. They present results for three different sequences recorded with both the Kinect (based on structured-light technology) and the recent Kinect V2 (based on time-of-flight technology), as well as one other time-of-flight device (SR4500). The time-of-flight sensors both did better than the first-generation Kinect, while the Kinect V2 performed better than the SR4500 due to its higher-resolution depth representation.

SD – In (46; 47), the authors first compare a color-only to a depth-color detector, reporting a break-even point (i.e. where precision equals recall on the PR curve) of 93% when the depth-color detector is used, compared to 52% when the standard color-based HOG detector is employed. The tracking results presented by the authors in (46; 47) are reported in Table 12. They show that the proposed method, based on a depth-color combination, achieves better performance, in terms of both lost tracks and ID switches, than the same tracker relying solely on color or depth to solve the data association problem.

| Tracker | Lost tracks | ID switches |
| --- | --- | --- |
| Depth-color tracker | 13 | 1 |
| Depth tracker | 14 | 15 |
| Color tracker | 15 | 6 |

Table 12: SD dataset tracking results
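The break-even point can be read off a sampled PR curve as the point where precision and recall are closest; a minimal sketch, assuming the curve is given as paired arrays of precision and recall values:

```python
import numpy as np

def break_even_point(precision, recall):
    # The PR-curve sample where precision and recall are closest to equal.
    p, r = np.asarray(precision), np.asarray(recall)
    i = int(np.argmin(np.abs(p - r)))
    return (p[i] + r[i]) / 2.0
```
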

KingstonRGB-D – This dataset has been used only by its creators in (56) to estimate the performance of their tracking method. They evaluate their method by considering some of the metrics proposed in (102), i.e. Correct Detected Tracks (CDT), False Alarm Tracks (FAT), Track Detection Failures (TDF), and ID Switches. Additionally, the F1-score is used as a summarizing metric. The results obtained by the authors in (56) are reported in Table 13 and demonstrate that the proposed tracking strategy, based on a color and depth appearance model (described in Section 4.3), is able to outperform an alternative tracking strategy that uses depth data only.

| Model | CDT | FAT | TDF | F1-score | ID switches |
| --- | --- | --- | --- | --- | --- |
| Depth/Spatial model | 27 | 18 | 35 | 0.5 | 60 |
| Color+depth | 40 | 5 | 19 | 0.77 | 15 |

Table 13: KingstonRGB-D dataset tracking results
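Treating CDT, FAT, and TDF as track-level true positives, false positives, and false negatives respectively reproduces the F1 values in Table 13 to two decimals; a sketch of this mapping (our reading of the metrics in (102), which may differ in detail):

```python
def track_f1(cdt, fat, tdf):
    # Treat correct detected tracks as true positives, false alarm tracks
    # as false positives and track detection failures as false negatives.
    precision = cdt / (cdt + fat)
    recall = cdt / (cdt + tdf)
    return 2 * precision * recall / (precision + recall)
```
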

8 Challenges

In summary, depth data is a fundamental cue that can bring more reliability to MHT methods, but there are many challenges that the vision community needs to address to advance this area further.

To start with, it is important that this research community generates for itself standard and diverse datasets that cover all kinds of application areas (e.g. surveillance, health monitoring, pedestrian tracking, etc.) and help it to evaluate old and new algorithms in a consistent fashion. However, this is predicated on researchers making their data and software more widely available, and reporting their methodology and processes in a reasonably reproducible fashion.

There are still many challenges where depth can be explored further. For example, depth can be a fundamental tool for better (partial) occlusion detection while tracking, so we should expect to see some creative uses of depth information to achieve higher accuracy rates for multiple human tracking - perhaps even in busy scenes depending on the camera viewpoint. Developments on resilient part-based tracking of humans will also help in better occlusion handling.

Depth sensors’ accuracy is limited to a certain range of distances, hence another important challenge is the handling of scale while tracking. The better the occlusion and scale handling, the greater the diversity of applications that color appearance and depth-based tracking can contribute to. Indoor applications, such as in-home health monitoring, may be well served by active sensors, whereas outdoor or longer-range applications, such as surveillance monitoring, would be handled by passive sensors. Improvements to the detection range and technology of active sensors will help to overcome shortcomings in scale handling in indoor environments, as already evidenced by the new Kinect V2 compared to the first-generation Kinect.

Humans have articulated parts so they will be observed in a variety of poses in various scenarios, compounded by the fact that they also interact with each other. The majority of current works, if not all, track humans while they are walking or standing. A huge challenge lies in tracking people engaged in other activities, e.g. to monitor their routine for health monitoring, or maintaining tracking while they undergo drastic pose changes, e.g. if they bend down, sit down and then stand up, or perform certain prescribed exercises.

Other challenges include more regular issues, such as developing better features and more elaborate adaptive and dynamic methodologies (e.g. such as by applying deep learning techniques).

9 Conclusion

This survey provided an overview of all existing works known to us that fuse RGB and depth data for multiple human tracking. It is a snapshot of the works of the last few years, along with data and software resources, as well as some comparative results. MHT is still a relatively young but quickly progressing area, where the availability of cheap depth sensors is a huge contributing factor to the regeneration of old, and the creation of new, human detection and tracking methods. The analysis and results reported in this review demonstrate that depth data is fundamental in boosting RGB-only MHT methods in terms of both detection accuracy and tracking reliability, as depth introduces very powerful spatial cues (3D shapes and 3D locations) that are also less sensitive to scene illumination conditions. Moreover, a combined color-depth appearance model can be used to describe humans at the region level as well. Further, despite the processing of the additional depth cue, real-time performance can still be maintained, as depth data allows for a significant reduction of the search space for both detector and tracker modules, even when simple heuristic rules are used.


This work was performed under the SPHERE IRC project funded by the UK Engineering and Physical Sciences Research Council (EPSRC), Grant EP/K031910/1.



  • (1) X. Wang, Intelligent multi-camera video surveillance: A review, Pattern Recognition Letters 34 (1) (2013) 3 – 19.
  • (2) X. Zabulis, D. Grammenos, T. Sarmis, K. Tzevanidis, P. Padeleris, P. Koutlemanis, A. Argyros, Multicamera human detection and tracking supporting natural interaction with large-scale displays, Machine Vision and Applications 24 (2) (2013) 319–336.
  • (3) F. Cardinaux, D. Bhowmik, C. Abhayaratne, M. S. Hawley, Video based technology for ambient assisted living: A review of the literature, Journal of Ambient Intelligence and Smart Environments 3 (3) (2011) 253–269.
  • (4) D. Geronimo, A. Lopez, A. Sappa, T. Graf, Survey of pedestrian detection for advanced driver assistance systems, Pattern Analysis and Machine Intelligence, IEEE Transactions on 32 (7) (2010) 1239–1258.
  • (5) W.-L. Lu, J.-A. Ting, J. Little, K. Murphy, Learning to track and identify players from broadcast sports videos, Pattern Analysis and Machine Intelligence, IEEE Transactions on 35 (7) (2013) 1704–1716.
  • (6) Microsoft Corporation. Kinect for Xbox 360 (2009).
  • (7) Asustek Computer Inc. Xtion PRO LIVE (2009).
  • (8) J. Han, L. Shao, D. Xu, J. Shotton, Enhanced computer vision with Microsoft Kinect sensor: A review, Cybernetics, IEEE Transactions on 43 (5) (2013) 1318–1334.
  • (9) P. Dollar, C. Wojek, B. Schiele, P. Perona, Pedestrian detection: An evaluation of the state of the art, Pattern Analysis and Machine Intelligence, IEEE Transactions on 34 (4) (2012) 743–761.
  • (10) W. Luo, X. Zhao, T. Kim, Multiple object tracking: A review, CoRR abs/1409.7618, pre-Print Version.
  • (11) L. Chen, H. Wei, J. Ferryman, A survey of human motion analysis using depth imagery, Pattern Recognition Letters 34 (15) (2013) 1995 – 2006.
  • (12) J. Zhang, W. Li, P. O. Ogunbona, P. Wang, C. Tang, RGB-D-based action recognition datasets: A survey abs/1601.05511, pre-Print Version.
  • (13) J. Suarez, R. Murphy, Hand gesture recognition with depth images: A review, in: RO-MAN, 2012 IEEE, 2012, pp. 411–417.
  • (14) F. Endres, J. Hess, J. Sturm, D. Cremers, W. Burgard, 3D mapping with an RGB-D camera, IEEE Transactions on Robotics (T-RO).
  • (15) M. Enzweiler, D. Gavrila, Monocular pedestrian detection: Survey and experiments, Pattern Analysis and Machine Intelligence, IEEE Transactions on 31 (12) (2009) 2179–2195.
  • (16) T. Li, H. Chang, M. Wang, B. Ni, R. Hong, S. Yan, Crowded scene analysis: A survey, Circuits and Systems for Video Technology, IEEE Transactions on PP (99) (2014) 1–1.
  • (17) M. Paul, S. Haque, S. Chakraborty, Human detection in surveillance videos and its applications - a review, EURASIP Journal on Advances in Signal Processing 2013 (1).
  • (18) H. Zhou, H. Hu, Human motion tracking for rehabilitation—a survey, Biomedical Signal Processing and Control 3 (1) (2008) 1 – 18.
  • (19) G. García, D. Klein, J. Stückler, S. Frintrop, A. Cremers, Adaptive multi-cue 3D tracking of arbitrary objects, in: Pattern Recognition, Vol. 7476 of Lecture Notes in Computer Science, 2012, pp. 357–366.
  • (20) S. Song, J. Xiao, Tracking revisited using RGBD camera: Unified benchmark and baselines, in: Computer Vision (ICCV), 2013 IEEE International Conference on, 2013, pp. 233–240.
  • (21) Q. Wang, J. Fang, Y. Yuan, Multi-cue based tracking, Neurocomputing 131 (0) (2014) 227 – 236.
  • (22) B. Zhong, Y. Shen, Y. Chen, W. Xie, Z. Cui, H. Zhang, D. Chen, T. Wang, X. Liu, S. Peng, J. Gou, J. Du, J. Wang, W. Zheng, Online learning 3D context for robust visual tracking, Neurocomputing 151, Part 2 (0) (2015) 710 – 718.
  • (23) S. Walk, K. Schindler, B. Schiele, Disparity statistics for pedestrian detection: Combining appearance, motion and stereo, in: European Conference on Computer Vision (ECCV), 2010.
  • (24) L. Spinello, K. O. Arras, People detection in RGB-D data., in: Proc. of The International Conference on Intelligent Robots and Systems (IROS), 2011.
  • (25) H. Wang, C. Liu, L. Ma, Depth Motion Detection - A Novel RS-Trigger Temporal Logic based Method, IEEE Signal Processing Letters 21.
  • (26) L. Xia, C.-C. Chen, J. Aggarwal, Human detection using depth information by Kinect, in: Computer Vision and Pattern Recognition Workshops (CVPRW), 2011 IEEE Computer Society Conference on, 2011, pp. 15–22.
  • (27) C. Stahlschmidt, A. Gavriilidis, J. Velten, A. Kummert, Applications for a people detection and tracking algorithm using a time-of-flight camera, Multimedia Tools and Applications (2014) 1–18.
  • (28) T. Bagautdinov, F. Fleuret, P. Fua, Probability Occupancy Maps for Occluded Depth Images, in: Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
  • (29) B. Fosty, C. F. Crispim-Junior, J. Badie, F. Bremond, M. Thonnat, Event Recognition System for Older People Monitoring Using an RGB-D Camera , in: ASROB - Workshop on Assistance and Service Robotics in a Human Environment, 2013.
  • (30) P. Dondi, L. Lombardi, L. Cinque, Multisubjects tracking by time-of-flight camera, in: Image Analysis and Processing – ICIAP 2013, Vol. 8156 of Lecture Notes in Computer Science, 2013, pp. 692–701.
  • (31) E. Grenader, D. Gasques Rodrigues, F. Nos, N. Weibel, The videomob interactive art installation connecting strangers through inclusive digital crowds, ACM Trans. Interact. Intell. Syst. 5 (2) (2015) 7:1–7:31.
  • (32) E. E. Stone, M. Skubic, Fall detection in homes of older adults using the Microsoft Kinect, IEEE Journal of Biomedical and Health Informatics 19 (1) (2015) 290–301.
  • (33) N. Zhu, T. Diethe, M. Camplani, L. Tao, A. Burrows, N. Twomey, D. Kaleshi, M. Mirmehdi, P. Flach, I. Craddock, Bridging e-health and the internet of things: The SPHERE project, IEEE Intelligent Systems 30 (4) (2015) 39–46.
  • (34) N. Dalal, B. Triggs, Histograms of oriented gradients for human detection, in: Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, Vol. 1, 2005, pp. 886–893 vol. 1.
  • (35) L. Bourdev, J. Malik, Poselets: Body part detectors trained using 3-D human pose annotations, in: ICCV, 2009.
  • (36) P. F. Felzenszwalb, R. B. Girshick, D. McAllester, D. Ramanan, Object detection with discriminatively trained part based models, IEEE Transactions on Pattern Analysis and Machine Intelligence 32 (9) (2010) 1627–1645.
  • (37) P. Viola, M. Jones, Robust real-time face detection, International Journal of Computer Vision 57 (2) (2004) 137–154.
  • (38) M. Bansal, S.-H. Jung, B. Matei, J. Eledath, H. Sawhney, A real-time pedestrian detection system based on structure and appearance classification, in: Robotics and Automation (ICRA), 2010 IEEE International Conference on, 2010, pp. 903–909.
  • (39) J. Salas, C. Tomasi, People detection using color and depth images, Mexican Conference on Pattern Recognition (25288).
  • (40) B.-K. Dan, Y.-S. Kim, Suryanto, J.-Y. Jung, S.-J. Ko, Robust people counting system based on sensor fusion, Consumer Electronics, IEEE Transactions on 58 (3) (2012) 1013–1021.
  • (41) T. Darrell, G. Gordon, M. Harville, J. Woodfill, Integrated person tracking using stereo, color, and pattern detection, International Journal of Computer Vision 37 (2) (2000) 175–185.
  • (42) J. Han, E. J. Pauwels, P. M. de Zeeuw, P. H. N. de With, Employing a RGB-D sensor for real-time tracking of humans across multiple re-entries in a smart environment, Consumer Electronics, IEEE Transactions on 58 (2) (2012) 255–263.
  • (43) M. Bajracharya, B. Moghaddam, A. Howard, S. Brennan, L. H. Matthies, A Fast Stereo-based System for Detecting and Tracking Pedestrians from a Moving Vehicle, The International Journal of Robotics Reasearch OnLine.
  • (44) H. Zhang, C. Reardon, L. Parker, Real-time multiple human perception with color-depth cameras on a mobile robot, Cybernetics, IEEE Transactions on 43 (5) (2013) 1429–1441.
  • (45) G. Galanakis, X. Zabulis, P. Koutlemanis, S. Paparoulis, V. Kouroumalis, Tracking persons using a network of RGBD cameras, in: 7th International Conference on Pervasive Technologies for Assistive Environments (PETRA), 2014.
  • (46) J. Liu, Y. Liu, Y. Cui, Y. Q. Chen, Real-time human detection and tracking in complex environments using single RGB-D camera, in: Image Processing (ICIP), 2013 20th IEEE International Conference on, 2013, pp. 3088–3092.
  • (47) J. Liu, Y. Liu, G. Zhang, P. Zhu, Y. Q. Chen, Detecting and tracking people in real time with RGB-D camera, Pattern Recognition Letters 53 (2015) 16–23.
  • (48) J. Liu, G. Zhang, Y. Liu, L. Tian, Y. Q. Chen, An ultra-fast human detection method for color-depth camera, Journal of Visual Communication and Image Representation 31 (2015) 177 – 185.
  • (49) M. Luber, L. Spinello, K. O. Arras, People tracking in RGB-D data with on-line boosted target models, in: Proc. of The International Conference on Intelligent Robots and Systems (IROS), 2011.
  • (50) T. Linder, K. O. Arras, Multi-model hypothesis tracking of groups of people in RGB-D data, in: IEEE Int. Conf. on Information Fusion (FUSION’14), Salamanca, Spain, 2014.
  • (51) A. Ess, B. Leibe, K. Schindler, L. Van Gool, Robust multiperson tracking from a mobile platform, IEEE Transactions on Pattern Analysis and Machine Intelligence 31 (10) (2009) 1831–1846.
  • (52) O. Jafari, D. Mitzel, B. Leibe, Real-time RGB-D based people detection and tracking for mobile robots and head-worn cameras, in: IEEE International Conference on Robotics and Automation (ICRA’14), 2014.
  • (53) R. Muñoz Salinas, E. Aguirre, M. García-Silvente, People detection and tracking using stereo vision and color, Image and Vision Computing 25 (6) (2007) 995 – 1007.
  • (54) M. Munaro, F. Basso, E. Menegatti, Tracking people within groups with RGB-D data, in: Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, 2012, pp. 2101–2107.
  • (55) M. Munaro, C. Lewis, D. Chambers, P. Hvass, E. Menegatti, RGB-D Human Detection and Tracking for Industrial Environments, in: Proceedings of the 13th International Conference on Intelligent Autonomous Systems (IAS-13), 2014.
  • (56) E. Almazán, G. Jones, A Depth-based Polar Coordinate System for People Segmentation and Tracking with Multiple RGB-D Sensors, in: IEEE ISMAR 2014 Workshop on Tracking Methods and Applications, 2014.
  • (57) E. Almazán, G. Jones, Tracking people across multiple non-overlapping RGB-D sensors, in: Computer Vision and Pattern Recognition Workshops (CVPRW), 2013 IEEE Conference on, 2013, pp. 831–837.
  • (58) S. Bahadori, L. Iocchi, G. Leone, D. Nardi, L. Scozzafava, Real-time people localization and tracking through fixed stereo vision, in: Innovations in Applied Artificial Intelligence, Vol. 3533 of Lecture Notes in Computer Science, 2005, pp. 44–54.
  • (59) D. Beymer, K. Konolige, Real-time tracking of multiple people using stereo, in: Computer Vision Workshops (ICCV Workshops), 1999 IEEE International Conference on, 1999, pp. 1076–1083.
  • (60) J. Satake, M. Chiba, J. Miura, Visual person identification using a distance-dependent appearance model for a person following robot, International Journal of Automation and Computing 10 (5) (2013) 438–446.
  • (61) D. M. Vo, L. Jiang, A. Zell, Real time person detection and tracking by mobile robots using RGB-D images, in: Robotics and Biomimetics (ROBIO), 2014 IEEE International Conference on, 2014, pp. 689–694.
  • (62) M. Harville, Stereo person tracking with adaptive plan-view templates of height and occupancy statistics, Image and Vision Computing 22 (2) (2004) 127 – 142.
  • (63) R. Muñoz Salinas, A Bayesian plan-view map based approach for multiple-person detection and tracking, Pattern Recognition 41 (12) (2008) 3665–3676.
  • (64) R. Muñoz Salinas, R. Medina-Carnicer, F. Madrid-Cuevas, A. Carmona-Poyato, People detection and tracking with multiple stereo cameras using particle filters, Journal of Visual Communication and Image Representation 20 (5) (2009) 339–350.
  • (65) R. Muñoz Salinas, M. García-Silvente, R. M. Carnicer, Adaptive multi-modal stereo people tracking without background modelling, Journal of Visual Communication and Image Representation 19 (2) (2008) 75 – 91.
  • (66) W. Choi, C. Pantofaru, S. Savarese, Detecting and tracking people using an RGB-D camera via multiple detector fusion, in: Computer Vision Workshops (ICCV Workshops), 2011 IEEE International Conference on, 2011, pp. 1076–1083.
  • (67) W. Choi, C. Pantofaru, S. Savarese, A general framework for tracking multiple people from a moving camera, IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (7) (2013) 1577–1591.
  • (68) C. Migniot, F. Ababsa, Hybrid 3D-2D human tracking in a top view, Journal of Real-Time Image Processing (2014) 1–16.
  • (69) S. Gao, Z. Han, C. Li, Q. Ye, J. Jiao, Real-time multipedestrian tracking in traffic scenes via an RGB-D-based layered graph model, IEEE Transactions on Intelligent Transportation Systems PP (99) (2015) 1–12.
  • (70) A. J. Ma, P. C. Yuen, S. Saria, Deformable distributed multiple detector fusion for multi-person tracking, CoRR abs/1512.05990.
  • (71) Y. Rubner, C. Tomasi, L. Guibas, The earth mover’s distance as a metric for image retrieval, International Journal of Computer Vision 40 (2) (2000) 99–121.
  • (72) A. Argyros, M. Lourakis, Real-time tracking of multiple skin-colored objects with a possibly moving camera, in: European Conference on Computer Vision (ECCV), 2004.
  • (73) P. Padeleris, X. Zabulis, A. Argyros, Multicamera tracking of multiple humans based on colored visual hulls, in: Emerging Technologies Factory Automation (ETFA), 2013 IEEE 18th Conference on, 2013, pp. 1–8.
  • (74) I. J. Cox, S. L. Hingorani, An efficient implementation of Reid’s multiple hypothesis tracking algorithm and its evaluation for the purpose of visual tracking, IEEE Trans. Pattern Anal. Mach. Intell. (PAMI) 18 (2) (1996) 138–150.
  • (75) C. Eveland, K. Konolige, R. Bolles, Background modeling for segmentation of video-rate stereo sequences, in: Computer Vision and Pattern Recognition (CVPR), Proceedings of the IEEE Computer Society Conference on, 1998, pp. 266–271.
  • (76) B. Leibe, K. Schindler, N. Cornelis, L. Van Gool, Coupled object detection and tracking from static cameras and moving vehicles, IEEE Transactions on Pattern Analysis and Machine Intelligence 30 (10) (2008) 1683–1698.
  • (77) P. Sudowe, B. Leibe, Efficient use of geometric constraints for sliding-window object detection in video, in: Computer Vision Systems, Vol. 6962 of Lecture Notes in Computer Science, 2011, pp. 11–20.
  • (78) J. Satake, J. Miura, Robust stereo-based person detection and tracking for a person following robot, in: ICRA Workshop on People Detection and Tracking, 2009.
  • (79) D. Lowe, Object recognition from local scale-invariant features, in: Computer Vision, 1999. The Proceedings of the Seventh IEEE International Conference on, Vol. 2, 1999, pp. 1150–1157.
  • (80) H. Kuhn, The Hungarian method for the assignment problem, Naval Research Logistics Quarterly 2 (1955) 83–97.
  • (81) V. D. My, A. Masselli, A. Zell, Real time face detection using geometric constraints, navigation and depth-based skin segmentation on mobile robots, in: Robotic and Sensors Environments (ROSE), 2012 IEEE International Symposium on, 2012, pp. 180–185.
  • (82) M. Harville, G. Gordon, J. Woodfill, Foreground segmentation using adaptive mixture models in color and depth, in: IEEE Workshop on Detection and Recognition of Events in Video, 2001, pp. 3–11.
  • (83) A. Milan, K. Schindler, S. Roth, Multi-target tracking by discrete-continuous energy minimization, IEEE Transactions on Pattern Analysis and Machine Intelligence PP (99) (2015) 1–1.
  • (84) D. Comaniciu, P. Meer, Mean shift: A robust approach toward feature space analysis, IEEE Trans. Pattern Analysis and Machine Intelligence 24 (5) (2002) 603–619.
  • (85) D. R. Chambers, C. Flannigan, B. Wheeler, High-accuracy real-time pedestrian detection system using 2D and 3D features, in: Proc. SPIE, Vol. 8384, 2012, p. 83840G.
  • (86) M. A. Fischler, R. C. Bolles, Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography, Commun. ACM 24 (6) (1981) 381–395.
  • (87) A. Vedaldi, S. Soatto, Quick Shift and kernel methods for mode seeking, in: European Conference on Computer Vision (ECCV), 2008.
  • (88) M. Camplani, L. Salgado, Background foreground segmentation with RGB-D kinect data: An efficient combination of classifiers, Journal of Visual Communication and Image Representation 25 (1) (2014) 122 – 136.
  • (89) G. Gordon, T. Darrell, J. Woodfill, Background estimation and removal based on range and color, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR-99), 1999.
  • (90) J. Kammerl, Octree Point Cloud Compression in PCL (2011).
  • (91) D. Ganotra, J. Joseph, K. Singh, Modified geometry of ring-wedge detector for sampling Fourier transform of fingerprints for classification using neural networks, Optics and Lasers in Engineering 42 (2) (2004) 167–177.
  • (92) M. Quigley, K. Conley, B. P. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, A. Y. Ng, ROS: an open-source robot operating system, in: ICRA Workshop on Open Source Software, 2009.
  • (93) M. Munaro, A. Horn, R. Illum, J. Burke, R. Rusu, OpenPTrack: People Tracking for Heterogeneous Networks of Color-Depth Cameras, in: IAS-13 Workshop Proceedings: 1st Intl. Workshop on 3D Robot Perception with Point Cloud Library, 2014, pp. 235–247.
  • (94) M. Munaro, F. Basso, E. Menegatti, OpenPTrack: Open source multi-camera calibration and people tracking for RGB-D camera networks, Robotics and Autonomous Systems 75, Part B (2016) 525 – 538.
  • (95) L. Bourdev, J. Malik, Poselets: Body part detectors trained using 3D human pose annotations, in: Computer Vision, 2009 IEEE 12th International Conference on, 2009, pp. 1365–1372.
  • (96) A. Ess, B. Leibe, K. Schindler, L. Van Gool, A mobile vision system for robust multi-person tracking, in: Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, 2008, pp. 1–8.
  • (97) P. Felzenszwalb, D. Huttenlocher, Efficient belief propagation for early vision, in: Computer Vision and Pattern Recognition, 2004. CVPR 2004. Proceedings of the 2004 IEEE Computer Society Conference on, Vol. 1, 2004, pp. I-261–I-268.
  • (98) M. Munaro, E. Menegatti, Fast RGB-D people tracking for service robots, Autonomous Robots (2014) 1–16.
  • (99) R. B. Rusu, S. Cousins, 3D is here: Point Cloud Library (PCL), in: IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, 2011.
  • (100) D. Mitzel, B. Leibe, Close-range human detection and tracking for head-mounted cameras, in: Proceedings of the British Machine Vision Conference, BMVA Press, 2012, pp. 8.1–8.11.
  • (101) K. Bernardin, R. Stiefelhagen, Evaluating multiple object tracking performance: The CLEAR MOT metrics, J. Image Video Process. 2008 (2008) 1:1–1:10.
  • (102) M. Szczodrak, P. Dalka, A. Czyzewski, Performance evaluation of video object tracking algorithm in autonomous surveillance system, in: Information Technology (ICIT), 2nd International Conference on, 2010, pp. 31–34.