For making sense of the vast quantity of video data generated by large scale surveillance camera networks in public spaces, automatically (re-)identifying individual persons across non-overlapping camera views distributed at different physical locations is essential. This task is known as person re-identification (ReID). Automatic ReID enables the discovery and analysis of person-specific long-term activities over widely expanded areas and is fundamental to many important surveillance applications such as multi-camera people tracking and forensic search. Specifically, to perform cross-view person ReID, one matches a probe (or query) person against a set of gallery people to generate a ranked list according to their matching similarity. Typically, it is assumed that the correct match is assigned to one of the top ranks, ideally the top rank [1, 2, 3, 4, 5]. As the probe and gallery people are often captured from a pair of disjoint camera views at different times, cross-view visual appearance variations can be significant. Person ReID by visual matching is thus inherently challenging. The state-of-the-art methods perform this task mostly by matching spatial appearance features (e.g. colour and texture) using a pair of single-shot person images [2, 3, 7, 8]. However, single-shot appearance features of people are intrinsically limited due to the inherent visual ambiguity caused by clothing similarity among people in public spaces and appearance changes from cross-camera viewing condition variations (Fig. 1). It is therefore desirable to exploit the space-time information in image sequences of people for ReID.
Space-time information has been explored extensively for action recognition [10, 11]. Moreover, discriminative space-time video patches have also been exploited for action recognition. Nonetheless, action recognition approaches are not directly applicable to person ReID because pedestrians in public spaces exhibit similar walking activities without distinctive and semantically categorisable action patterns unique to different identities.
On the other hand, gait recognition techniques have been developed for person recognition using image sequences by discriminating subtle distinctiveness in the style of walking [13, 14]. Different from action recognition, gait is a behavioural biometric that measures the way people walk. An advantage of gait recognition is that it assumes neither subject cooperation (framing) nor person-distinctive actions (posing). These characteristics are similar to person ReID situations. However, existing gait recognition models are subject to stringent requirements on person foreground segmentation and accurate alignment over time throughout a gait image sequence (a walking cycle). It is also assumed that complete gait/walking cycles are captured in the target image sequences [15, 16]. Most gait recognition methods do not cope well with cluttered backgrounds and/or random occlusions under unknown covariate conditions. Person ReID is hence inherently challenging for gait recognition techniques (Fig. 1).
In this study, we aim to construct a discriminative video matching framework for person re-identification by selecting more reliable space-time features from person videos, beyond the often-adopted spatial appearance features. To that end, we assume the availability of image sequences of people which may be highly noisy, i.e., with arbitrary sequence duration and starting/ending frames, unknown camera viewpoint/lighting variations during each image sequence, incomplete frames due to uncontrolled occlusions, no guaranteed high frame rates, and possible clothing changes over time. We call these videos unregulated image sequences of people (Fig. 1 and Fig. 5). More specifically, we propose a novel approach to Discriminative Video fragment selection and Ranking (DVR) based on a robust space-time and appearance feature representation given unregulated person image sequences.
The main contributions of this study are: (1) We derive a multi-fragment based appearance and space-time feature representation of image sequences of people. This representation is based on a combination of HOG3D, colour and the optic flow energy profile of an image sequence, designed to automatically break unregulated video clips of people into multiple fragments. (2) We formulate a discriminative video ranking model for cross-view person re-identification by simultaneously selecting and matching more reliable appearance and space-time features from video fragments. The model is formulated using a multi-instance ranking strategy for learning from pairs of image sequences over non-overlapping camera views. The proposed method significantly relaxes the strict assumptions made by gait recognition techniques. (3) We provide extensive comparative evaluations of the proposed model against a wide range of contemporary methods (e.g. gait recognition, holistic sequence matching and state-of-the-art person ReID models) on three challenging image sequence based datasets.
2 Related Work
Space-time features Space-time feature representations have been extensively explored in action/activity recognition [10, 18, 19]. One common representation is constructed from space-time interest points [20, 21, 22, 23]. These facilitate a compact description of image sequences based on sparse interest points, but are somewhat sensitive to shadows and highlights in appearance and may lose discriminative information. Therefore, they may not be suitable for person ReID scenarios where lighting variations and viewpoints are unknown and uncontrolled. By comparison, space-time volume/patch based representations can be richer and more robust. Most of these representations are spatio-temporal extensions of corresponding image descriptors, e.g. HoGHoF, 3D-SIFT and HOG3D. In this study, we adopt HOG3D as the space-time feature of video fragments because: (1) it can be computed efficiently; (2) it contains both spatial gradient and temporal dynamic information, and is therefore potentially more expressive [18, 28]; (3) it is more robust against cluttered backgrounds and occlusions. The choice of space-time feature is independent of our model.
Gait recognition Gait recognition techniques also exploit image sequences for identifying people. However, these methods often make stringent assumptions on the image sequences, e.g. uncluttered background, consistent silhouette extraction and alignment, accurate gait phase estimation and complete gait cycles, most of which are unrealistic in ordinary person ReID scenarios. It is challenging to extract a suitable gait representation from typical ReID data. In contrast, our approach significantly relaxes these assumptions by simultaneously selecting discriminative video fragments from noisy sequences, learning and matching them without temporal alignment.
Temporal sequence matching One approach to exploiting image sequences for ReID is holistic sequence matching. For instance, Dynamic Time Warping (DTW) is a popular sequence matching method widely used for action recognition, and recently also for person ReID. However, given two unregulated sequences, it is difficult to align sequence pairs for accurate matching, especially when the image sequences are subject to significant noise caused by unknown camera viewpoint changes, background clutter and drastic lighting changes. Our approach is designed to address this problem while avoiding any implicit assumptions on sequence alignment and camera view similarity among image frames both within and between sequences.
Multi-shot person re-identification Multiple images from a sequence of the same person have been exploited for person re-identification. For example, interest points were accumulated across images for capturing appearance variability, manifold geometric structures in image sequences of people were utilised to construct more compact spatial descriptors of people, and the time index of image frames and identity consistency of a sequence were used to constrain spatial feature similarity estimation. There have also been attempts to train a person appearance model from image sets or by selecting best-matched pairs. Multiple images of a person sequence were often used either to enhance spatial feature descriptions of local image regions or patches [4, 3, 36, 37], or to extract additional appearance information such as appearance change statistics. In contrast, the proposed model aims to simultaneously select and match discriminative video appearance and space-time features for maximising cross-view identity ranking. Our experiments show the advantages of the proposed model over existing multi-shot models for person ReID.
3 Discriminative Video Ranking
We formulate the person re-identification problem as a ranking problem [7, 39]. Although image sequences of people intuitively provide richer content for learning discriminative information about an individual's visual appearance than the single still images widely used by existing person ReID methods [35, 40, 41, 42], the availability of more (and often redundant) data poses additional challenges in model learning, e.g. more random inter-object occlusions and thus incomplete frames, arbitrary sequence duration and uncertain starting/ending postures, and potential clothing variations of some people over time. Moreover, human annotators may implicitly and unconsciously tend to carefully select clearer and better-segmented person images for learning image-based ReID models. In contrast, tracked sequences of person bounding boxes in typical surveillance videos are inherently noisier and more incomplete. Directly utilising all the sequence data for constructing ReID models can easily result in unstable models, which is undesirable. A selection mechanism is thus required as part of the learning method in order to optimally exploit the redundant information available in sequence data.
In the context of relative-ranking based person ReID model learning, it is non-trivial to automatically learn a robust discriminative ranking function from such contaminated and uncontrolled image sequence data. Inherently, one needs to address the problem of how to mitigate the negative influence of unknown noisy observations, e.g. various types of occlusion and clutter in the background. This goes beyond solving the more common problem of misalignment over time in sequence matching. In this work, we formulate a novel discriminative re-identification model capable of simultaneously selecting and ranking informative video fragments from pairs of unregulated person image sequences captured in two non-overlapping camera views. Our model not only mitigates unwanted data whilst exploiting useful information from image sequences for person ReID, but also requires no rigid sequence alignment as in traditional methods, e.g. dynamic time warping. Specifically, our model is based on: (i) video fragmentation by motion energy profiling (Fig. 2(b,c) and Sec. 3.2); (ii) learning a sequence based relative ranking function by simultaneously selecting and ranking cross-view video fragment pairs (Fig. 2(d,e) and Sec. 3.3). Once learned, our model can then be deployed to re-identify previously unseen people given cross-view unregulated image sequences (Sec. 3.4). An overview diagram of the proposed approach is presented in Fig. 2.
3.1 Problem Definition
Suppose we have a collection of image sequence pairs , where and denote the image sequences of person captured by two disjoint cameras and , and the total number of training people. Each image sequence is defined by a set of consecutive frames as , where is not a constant because in typical surveillance videos, tracked person image sequences [43, 44] are not guaranteed to have (1) a uniform duration (arbitrary frame numbers), (2) the same number of walking cycles, (3) similar starting/ending postures, (4) high video frame rates, or (5) invariant clothing over time.
For model training, we aim to learn a ranking function of image sequence pairs that satisfies the following ranking constraints:
i.e. the sequence pair of the same person is constrained/optimised to have a higher rank over any cross-view sequence pairing of person and with .
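The elided constraint set above can be sketched as follows; this is a reconstruction, and the symbols (f for the ranking function, x_i^A for person i's image sequence in camera A, n for the number of training people) are assumed notation rather than the paper's own:

```latex
f\!\left(\mathbf{x}^{A}_{i}, \mathbf{x}^{B}_{i}\right) \;>\; f\!\left(\mathbf{x}^{A}_{i}, \mathbf{x}^{B}_{j}\right),
\qquad \forall\, i, j \in \{1, \dots, n\},\; j \neq i,
```

i.e. a true cross-view match must score higher than every cross-view mismatch involving the same probe.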
Learning a ranking function holistically, without discrimination and selection, from pairs of unsegmented and temporally unaligned person image sequences will subject the learned model to significant noise and degrade any meaningful discriminative information contained in the image sequences. This is an inherent drawback of any holistic sequence matching approach, including those with dynamic time warping applied for non-linear mapping (see experiments in Sec. 4). Reliable human parsing/pose detection or occlusion detection may help, but such approaches are difficult to scale, especially with image sequences from crowded public scenes. The challenge is to learn a robust ranking model effective in coping with incomplete and partial image sequences by identifying and selecting discriminative/informative video fragments from each sequence suitable for extracting trustworthy fragment features. Let us first consider generating a pool of candidate fragments for each video, i.e. video fragmentation.
3.2 Video Fragmentation
Given unregulated image sequences of people, holistically locating and extracting reliable discriminative features from entire sequences is too error-prone due to noise. Instead, we consider breaking down each sequence into a pool of localised video fragments so that a learning model can automatically select the discriminative fragments (Sec. 3.3).
It can be observed that the motion energy intensity induced by human muscle activity during walking exhibits regular periodicity. This motion energy intensity can be approximately estimated by optic flow computation. We call this a Flow Energy Profile (FEP), see Fig. 3. The FEP signal is particularly suitable for our video fragmentation problem because: (i) its local minima and maxima landmarks are likely to correspond to characteristic gestures of the walking process (e.g. when one foot is about to land), and thus help detect them; (ii) it is relatively robust to changes in camera viewpoint. More specifically, we first compute the optic flow field for each image frame of a sequence. Its flow energy is defined as
where is the pixel set of the lower body, e.g. the lower half of . The FEP of is then obtained as , which is further smoothed by a Gaussian filter to suppress noise.
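The elided flow-energy definition can be reconstructed in sketch form; the symbols below (v_x, v_y for the horizontal/vertical optic flow components, P for the lower-body pixel set, e(t) for the per-frame energy) are assumed notation:

```latex
e(t) \;=\; \frac{1}{|P|} \sum_{(x,y)\,\in\,P} \sqrt{v_x(x,y)^2 + v_y(x,y)^2},
```

with the FEP being the sequence {e(t)} over all frames, smoothed by a Gaussian filter as stated in the text.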
Subsequently, we locate the local minima and maxima landmarks of and, for each landmark, create a video fragment by extracting the surrounding frames . We fix for all our experiments, determined by cross-validation on the iLIDS-VID dataset. Finally, we build a candidate set of video fragments by pooling all the fragments from . Note that some fragments of each sequence can share similar walking phases, since the local minima/maxima landmarks of the FEP signal are likely to correspond to certain characteristic walking postures (Fig. 3). This increases the possibility of finding temporally aligned video fragment pairs (i.e. centred at similar walking postures) given a pair of video fragment sets from two disjoint camera views, facilitating discriminative video fragment selection and matching during model learning. Fig. 3 also shows that the FEP signal can be sensitive to random occlusions and background clutter, which could lead to non-characteristic fragments. However, this has limited impact on the overall effectiveness of the proposed selection-and-ranking model (Sec. 3.3), as it is designed specifically to automatically identify and exploit discriminative video fragments from largely redundant sets when training a ReID model.
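The fragmentation step above can be sketched in code. This is a minimal illustration, not the paper's implementation: the function names, the Gaussian width `sigma`, and the fragment half-length `half_len` are all assumed values, and `flow_mags` stands in for optic-flow magnitudes produced by any dense optic-flow method.

```python
import numpy as np

def flow_energy_profile(flow_mags, sigma=2.0):
    """Flow Energy Profile (FEP): per-frame mean optic-flow magnitude over
    the lower-body region, smoothed by a Gaussian filter to suppress noise.
    `flow_mags` is a (T, H, W) array of optic-flow magnitudes."""
    T, H, W = flow_mags.shape
    lower = flow_mags[:, H // 2:, :]          # lower half of each frame
    fep = lower.reshape(T, -1).mean(axis=1)   # raw energy per frame
    # Gaussian smoothing via discrete convolution
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    return np.convolve(fep, kernel, mode='same')

def extract_fragments(fep, half_len=10):
    """Create one fragment spanning frames [t - half_len, t + half_len]
    around every interior local minimum/maximum landmark of the FEP."""
    fragments = []
    for t in range(1, len(fep) - 1):
        is_min = fep[t] <= fep[t - 1] and fep[t] <= fep[t + 1]
        is_max = fep[t] >= fep[t - 1] and fep[t] >= fep[t + 1]
        if (is_min or is_max) and t - half_len >= 0 and t + half_len < len(fep):
            fragments.append((t - half_len, t + half_len))
    return fragments
```

Because landmarks recur at similar walking phases, the resulting fragment pool contains many roughly phase-aligned candidates, which is what the later selection step exploits.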
Video fragment representation To encode both the dynamic and static appearance information of the subjects, we represent video fragments with both space-time and colour features. They complement each other, especially in the context of person ReID. Colour features have been shown to be significant for person ReID [2, 39, 48, 49, 50], implicitly capturing the chromatic patterns of clothing independent from space-time characteristics of a person’s appearance, such as the way people walk. In contrast, the latter is encoded by the space-time features.
Space-time feature – Specifically, we adopt HOG3D as the space-time feature representation of a video fragment, given its demonstrated advantages for action and activity recognition [18, 28]. In order to capture spatially more detailed and localised space-time information of a person in motion, we decompose a video fragment spatially into uniform cells according to human body topology, e.g. head, torso, arms and legs. To separately capture the information of the sub-intervals before and after the characteristic walking posture (Fig. 3(d)) potentially situated in the middle of a video fragment, the fragment is further divided temporally into two smaller sub-phases, resulting in a total of (i.e. ) cells for every video fragment. Two adjacent cells overlap for increased robustness to possible spatio-temporal fragment misalignment. A space-time gradient histogram is computed in each cell and then concatenated to form the HOG3D space-time descriptor of the fragment.
Colour feature – We adopt the localised average colour histogram as the appearance feature of a video fragment, chosen from a great number of alternative descriptors for its simplicity and effectiveness. Specifically, for each component frame in a video fragment, colour features are extracted from rectangular patches ( pixels in size) sampled from each frame with overlaps of and pixels vertically and horizontally between adjacent patches. In each patch, we compute the mean values of the HSV and LAB colour channels and form a framewise colour feature vector by concatenating the mean values of all the patches in a frame. To minimise noise and obtain a more reliable colour representation, all the framewise colour features of a fragment are averaged over time to produce a fragment-wise appearance representation of that fragment.
Finally, both space-time (HOG3D) and colour appearance (Colour) features and are concatenated into a fragment descriptor (ColHOG3D). Note, the image frames of all sequences are normalised to a fixed size ( pixels in our implementation) before computing any features.
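The colour part of the fragment descriptor can be sketched as follows. This is an illustration only: the patch size, step (overlap), and function names are assumed values, and the colour-space conversion to HSV/LAB is left out (the code works on whatever channels the input frames carry).

```python
import numpy as np

def patch_colour_means(frame, patch=(8, 8), step=(4, 4)):
    """Mean colour of overlapping rectangular patches of one frame.
    `frame` is (H, W, C); patch size and step are illustrative values."""
    H, W, C = frame.shape
    ph, pw = patch
    sh, sw = step
    means = []
    for y in range(0, H - ph + 1, sh):
        for x in range(0, W - pw + 1, sw):
            means.append(frame[y:y + ph, x:x + pw].reshape(-1, C).mean(axis=0))
    # framewise vector: concatenated per-patch channel means
    return np.concatenate(means)

def fragment_colour_feature(frames):
    """Average the framewise colour vectors over the fragment's frames,
    giving one fragment-wise appearance vector."""
    return np.mean([patch_colour_means(f) for f in frames], axis=0)
```

Averaging over time is what suppresses per-frame noise; concatenating this vector with the HOG3D descriptor would then give the ColHOG3D fragment descriptor described above.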
Notations – Formally, for the -th fragment from the person ’s image sequence captured in camera , its descriptor is denoted by . The same is for and . We denote and as the descriptor set for the fragments segmented from the sequences and of person in camera and respectively, where represents the set cardinality. The entire collection of descriptors for training image sequence pairs is denoted as .
3.3 Selection and Ranking
As shown in Fig. 3, the fragments of a person image sequence can be contaminated by unknown occlusions and background dynamics, and may also be extracted at an arbitrary time-instance of a walking cycle. Given such noisy fragment pair collections generated from cross-view image sequences, a significant challenge for sequence matching based ReID is how to identify and select discriminative/informative and temporally aligned fragment pairs (rather than the entire sequences) to learn a suitable ranking model. Formally, the objective is to learn a linear ranking function on the entry-wise absolute difference of two cross-view fragments and :
We assume that for each person, there exists at least one cross-view fragment pair that is sufficiently aligned over time and carries desired identity-sensitive information for this person. Our aim is to construct a model capable of automatically discovering and locating not only the best cross-view fragment pair but also multiple cross-view fragment pairs that are sufficiently aligned and discriminative for person ReID. For model training with the best fragment pair, it is equivalent to constraining a ranking function to prefer the most discriminative cross-view fragment pair of the same person to the pairings over and any other person , , i.e.
For notational simplicity, we define as the -th positive instance of person , i.e. the entry-wise absolute difference of two cross-view fragments of the same person , and as the -th negative instance, i.e. the absolute difference of two cross-view fragments of and another person. For each person , we form a positive bag by pooling the positive instances, and a negative bag by pooling the negative instances. The formation process of positive and negative bags for individual persons is illustrated in Fig. 4. Note, we only consider a single directional pairing (from camera to ) without the opposite direction when constructing negative bags. This is because our empirical experiments suggest that adding negative instances from camera to gains only a negligible () ReID performance advantage, whilst the additional cost in both bag construction and model learning is significantly higher (). A plausible explanation is that the negative instances from camera to are sufficiently diverse (in our experiments), so bi-directional negative sampling does not add meaningfully richer data. This is also supported by the fact that only of the full negative instances from camera to were utilised and shown to be sufficient in model learning, with the added benefit of reducing the number of pairwise constraints (the first inequality constraint in Eqn. (7)), therefore speeding up the training process.
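The bag-formation step can be sketched in code. The function and variable names are assumptions; the only substance taken from the text is that an instance is the entry-wise absolute difference of a cross-view fragment pair, and that negative bags use a single-directional pairing (camera A against camera B).

```python
import numpy as np

def build_bags(frag_sets_A, frag_sets_B):
    """Form one positive and one negative bag per person.
    frag_sets_A[i] / frag_sets_B[i]: (n_i, d) arrays of fragment descriptors
    of person i in camera A / camera B respectively."""
    pos_bags, neg_bags = [], []
    for i, fa in enumerate(frag_sets_A):
        # positive instances: same-person cross-view fragment differences
        pos = [np.abs(a - b) for a in fa for b in frag_sets_B[i]]
        # negative instances: person i in A paired with every other person in B
        neg = [np.abs(a - b) for a in fa
               for j, fb in enumerate(frag_sets_B) if j != i for b in fb]
        pos_bags.append(np.stack(pos))
        neg_bags.append(np.stack(neg))
    return pos_bags, neg_bags
```

Each positive bag then holds every candidate alignment of the same person's fragments, from which the selection step picks the discriminative ones.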
By redefining the ranking function , Eqn. (4) can be rewritten as
With the ranking constraints in Eqn. (5), we aim to automatically discover and select the most discriminative/informative and temporally aligned cross-view fragment pair within the positive bag for each person for learning an identity discriminative ranking model. To that end, we introduce a binary selection variable with each entry being either or and of unity norm for each person , and then obtain
where each column of corresponds to one , , and denotes a vector of all “1”s.
To achieve good generalisation ability for the ranking model given the ranking constraints in Eqn. (6), we formulate our problem as a max-margin ranking problem by defining the objective function as:
where is the parameter of the objective ranking function defined in Eqn. (3), and the number of people in the training set. is the concatenation of the binary selection variables of all persons: . is the flattened slack variable, formed by all the possible . We solve Eqn. (7) by iteratively optimising and between a ranking step and a selecting step.
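A hedged reconstruction of the elided objective (Eqn. (7)) is given below, using assumed notation consistent with the surrounding text: X_i^+ stacks person i's positive instances as columns so that X_i^+ z_i selects one of them, x_{i,j}^- is the j-th negative instance of person i, and C is a margin/slack trade-off parameter:

```latex
\min_{\mathbf{w},\,\{\mathbf{z}_i\},\,\boldsymbol{\xi}\geq 0}\;
\frac{1}{2}\lVert\mathbf{w}\rVert^{2} + C \sum_{i,j} \xi_{ij}
\quad \text{s.t.} \quad
\mathbf{w}^{\top}\!\left(\mathbf{X}^{+}_{i}\mathbf{z}_i - \mathbf{x}^{-}_{i,j}\right) \geq 1 - \xi_{ij},
\qquad \mathbf{z}_i \text{ binary},\; \lVert\mathbf{z}_i\rVert_{1} = 1.
```

The unit-norm binary z_i enforces that exactly one positive instance per person enters the margin constraints, matching the selection variable introduced before Eqn. (6).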
Ranking step We fix to optimise . Eqn. (7) turns into
Selecting step We fix to optimise . The term on (i.e. ) can be eliminated and Eqn. (7) becomes
Considering that the person-wise is associated only with and we are optimising the summation of all possible , Eqn. (9) is equivalent to optimising for each person separately, as
where . The inequality constraints in Eqn. (10) can be transformed as
Therefore, for any particular that holds and in the selecting space , the entries of the optimal that minimises the summation shall be
It is obvious that the summation is a function of ,
Finally we can obtain the by optimising via:
For each person , we only have a limited number of in . Therefore Eqn. (14) can be efficiently solved even with a greedy search.
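The selecting step can be sketched as a greedy search, as the text suggests. This is an illustrative reading of Eqns (12)-(14), not the paper's exact derivation: it assumes the ranking function is f(d) = wᵀd on difference vectors and that each positive instance is scored by its summed hinge loss against all negatives.

```python
import numpy as np

def select_best_instance(w, pos_bag, neg_bag):
    """Greedy selecting step with the ranker w fixed: return the index of
    the positive instance whose summed hinge loss against all negative
    instances is smallest. pos_bag: (P, d), neg_bag: (N, d), w: (d,)."""
    # margin of each positive over each negative: w.(x+ - x-), shape (P, N)
    margins = (pos_bag @ w)[:, None] - (neg_bag @ w)[None, :]
    # hinge loss per constraint, summed over negatives for each positive
    losses = np.maximum(0.0, 1.0 - margins).sum(axis=1)
    return int(np.argmin(losses))
```

Because each person contributes only a limited number of positive instances, evaluating all of them in this way is cheap, which is why a greedy search suffices.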
To begin the model training process, we set to provide a balanced/moderate start, since the quality of is unknown a priori. The iteration terminates when no longer changes. Typically, the training process stops after iterations. For learning efficiency, out of all the are randomly selected to form . Since only a single for each person is selected and utilised for model learning, we call this model DVR(single).
3.3.1 Multiple Cross-View Fragment Pair Selection
Thus far we have detailed the procedure of training our DVR(single) model by identifying the best cross-view fragment pair in each positive bag (corresponding to person ) for learning the ranking function (Eqn. (3)). This allows us to largely avoid the contamination effect of harmful data. Nonetheless, we may simultaneously lose some useful information by discarding the majority of instances of each bag , because some of these ignored can be of good quality. Identifying and exploiting these “good though not the best” fragment data is likely to benefit model learning. To that end, we describe next our multiple cross-view fragment pair selection algorithm for better exploiting the image sequence data.
Our multiple fragment-pair selection algorithm is based on a goodness/quality measure of individual . Once all instances of person are measured by assigning a score (higher is better) to each instance, we can easily locate multiple (top ) discriminative from the ranked list of all sorted in descending order of . Formally, we define for each as
We denote as the ranking margin of against , which can be obtained by Eqn. (12). Given Eqn. (15), a with a larger cumulated ranking margin over all the negative instances is preferred. This formulation generalises the single-selection case that searches for the best (Eqn. (14)), i.e. the and the highest lead to the same selection of positive instance .
After the top for each person are found and selected, we can obtain multiple (i.e. ) s by setting the corresponding entry of each to “” whilst the remaining entries to “”. We call this model DVR(top). Similar to the single selection model DVR(single), these ranking constraints associated with the selected top are then employed for optimising with Eqn. (8). In Sec. 4.1, we shall evaluate the effect of different top positive instances on the person ReID performance. An overview of learning the proposed DVR model is presented in Algorithm 1.
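The top-k selection of DVR(top-k) can be sketched as below. This is an illustrative reading of Eqn. (15): it assumes the quality score of a positive instance is its cumulated ranking margin wᵀ(x⁺) − wᵀ(x⁻) summed over all negatives, with higher scores better; function names are assumptions.

```python
import numpy as np

def top_k_instances(w, pos_bag, neg_bag, k=2):
    """Score each positive instance by its cumulated ranking margin over all
    negative instances, then return the indices of the top-k scorers
    (descending order). pos_bag: (P, d), neg_bag: (N, d), w: (d,)."""
    margins = (pos_bag @ w)[:, None] - (neg_bag @ w)[None, :]  # (P, N)
    scores = margins.sum(axis=1)   # cumulated margin, higher is better
    order = np.argsort(-scores)    # sort descending by score
    return order[:k].tolist()
```

With k = 1 this reduces to picking the single best instance, consistent with the text's remark that the quality measure generalises the single-selection case.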
3.3.2 Model Complexity
We analyse the training complexity of the DVR model, focusing on the ranking and selecting steps. For model training, we adopt the primal RankSVM scheme as the ranking solver. Its complexity is due to the Hessian computation and the linear search in the Newton direction respectively, with and denoting the number of ranking constraints (see Eqn. (4) and Eqn. (8)) and the feature dimension. Suppose positive instances per person are selected in the training stage; then , where is the total number of training people.
The cost for the selection process mainly involves measuring the quality score of each positive instance of all training people with Eqn. (12) and Eqn. (15). Its complexity is , where denotes the total number of positive instances across all training data. The total complexity of model training is thus . We evaluated and reported the model training cost in our experiments (Sec. 4.1).
3.4 Re-Identification by DVR
Once learned, the ranking model (Eqn. (3)) can be deployed to perform person re-identification by matching a given probe person image sequence observed in one camera view against a gallery set in another disjoint camera. Formally, the ranking/matching score of a gallery person sequence with respect to is computed as
where and are the feature sets of the video fragments extracted from the sequences and , respectively. The same video fragmentation process as used for model training (Sec. 3.2) is employed for deploying a trained model. Finally, the gallery people are sorted in descending order of their assigned matching scores to generate a ranking list.
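The deployment stage can be sketched as follows. This is one simple realisation, not necessarily Eqn. (16): it assumes the per-pair score is the learned linear ranker applied to the absolute fragment difference, max-pooled over all cross-view fragment pairs; all names are assumptions.

```python
import numpy as np

def match_score(w, probe_frags, gallery_frags):
    """Ranking score between a probe and a gallery sequence: evaluate the
    learned ranker on every cross-view fragment pair and max-pool (the exact
    pooling of Eqn. (16) may differ). probe/gallery_frags: (n, d) arrays."""
    diffs = np.abs(probe_frags[:, None, :] - gallery_frags[None, :, :])
    return float((diffs @ w).max())

def rank_gallery(w, probe_frags, gallery):
    """Sort gallery people (each a fragment-descriptor array) by descending
    matching score against the probe, returning gallery indices."""
    scores = [match_score(w, probe_frags, g) for g in gallery]
    return sorted(range(len(gallery)), key=lambda i: -scores[i])
```

Max-pooling means a single well-aligned fragment pair is enough to rank a true match highly, mirroring the fragment-selection idea used in training.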
Combination with prior spatial feature based models Our approach can complement existing spatial feature based person re-identification approaches. In particular, we incorporate Eqn. (16) into the ranking scores obtained by other models as
where refers to the weighting assigned to the -th method, which is estimated by cross-validation.
3.5 Discussions on Related Models
We discuss the relationship of our proposed DVR model with other relevant contemporary models in the literature, with a focus on their differences. First, most existing max-margin ranking methods [52, 7] do not consider uncertainty in the ranking constraints during model optimisation. In contrast, the proposed DVR model jointly optimises both the selection of the ranking constraints and the ranking function. This is necessary because the bag-level (e.g. image sequences) supervision cannot directly determine the instance-level (e.g. fragments) constraints (Sec. 3.3).
Second, our model also differs notably from other multi-instance ranking models [56, 57, 58] in a number of aspects. (1) Bergeron et al. relaxed the selection vectors (Eqn. (6)) to be continuous during model optimisation, whilst our model searches for exact solutions of instance selection. As shown in our evaluation (Sec. 4.1), Bergeron et al.'s relaxation method can significantly increase the cost of constraint selection when the training set is large, though it does not compromise the model performance. (2) The model presented in focuses on encoding bag-level (or sample-level) constraints into the ranking function by modelling instance-level constraints, assuming that all instances can contribute to model optimisation. In contrast, we emphasise the selection of discriminative/informative instance data (e.g. fragments) for robust learning, which is necessary for coping with very noisy and incomplete data (e.g. unregulated image sequences) where the stronger assumption made in is less valid. (3) Different from all these multi-instance models [56, 57, 58], the proposed DVR model is unique in its capability of explicitly selecting, and then exploiting, varying numbers of discriminative instances, owing to our formulation of a principled instance quality measure (Eqn. (15)). This can potentially increase the flexibility and scalability of our model in a variety of problem settings (e.g. varying degrees of noise) and applications (e.g. other sequence matching based tasks).
4 Experiments

Datasets Extensive experiments were conducted on three image sequence datasets designed for person ReID: iLIDS Video re-IDentification (iLIDS-VID), PRID, and HDA+. All three datasets are very challenging due to clothing similarities among people, lighting and viewpoint variations across camera views, cluttered backgrounds and occlusions (Fig. 1 and Fig. 5).
iLIDS-VID – Our new iLIDS-VID person sequence dataset  was created based on two non-overlapping camera views from the i-LIDS Multiple-Camera Tracking Scenario (MCTS) , which was captured at an airport arrival hall under a multi-camera CCTV network (Fig. 5(a)). It consists of image sequences for randomly sampled people, with one pair of image sequences from two disjoint camera views for each person. Each image sequence has a variable length consisting of to image frames, with an average number of .
PRID – The PRID dataset includes image sequences for people from two camera views that are adjacent to each other (Fig. 5(b)). Each image sequence has a variable length consisting of to image frames (we used sequences of frames from people in the evaluation), with an average number of . Compared with the iLIDS-VID dataset, it is less challenging, as it was captured in non-crowded outdoor scenes with relatively simple and clean backgrounds and rare occlusions.
HDA+ – The HDA+ dataset contains a total of 85 labelled people across 13 indoor cameras in an office environment (Fig. 5(c,d)). HDA+ is characterised by (i) low and variable frame rates, considerably lower than those of both PRID and iLIDS-VID; and (ii) clothing variation over time. One limitation of HDA+ is the small number of people re-appearing between camera pairs, whilst re-appearance is required for evaluating ReID. In our experiments, we selected two camera pairs that satisfy: (1) a sufficiently large number of people reappearing across the camera views; (2) very low video frame rates, to evaluate their effect on space-time feature based ReID models; (3) some people's clothing changes, to evaluate the clothing-variation challenge. The first camera pair, denoted HDA+(high fps), provides pairwise image sequences of different people at the higher of the two frame rates; the second, denoted HDA+(low fps), contains pairwise videos at an even lower frame rate with shorter sequences. Sequences too short for fragmentation were expanded by interpolating new frames using duplicates of the temporally-nearest frames in the sequence. Note that little or no space-time information is available in very short sequences, e.g. a single frame. This setting is designed to test how a space-time feature based model degrades with decreasing space-time information in the input video data.
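As an illustration, the expansion of short sequences by duplicating temporally-nearest frames can be sketched as follows; `expand_sequence` is a hypothetical helper, not the code used in our experiments:

```python
def expand_sequence(frames, target_len):
    """Expand a short sequence to target_len by duplicating the
    temporally-nearest original frames (no new content is invented)."""
    n = len(frames)
    if n >= target_len:
        return list(frames)
    # Map each output position to its nearest original frame index.
    return [frames[min(n - 1, round(i * (n - 1) / (target_len - 1)))]
            for i in range(target_len)]

# A 3-frame sequence expanded to 6 frames re-uses the nearest frames:
print(expand_sequence(['f0', 'f1', 'f2'], 6))
# -> ['f0', 'f0', 'f1', 'f1', 'f2', 'f2']
```

Duplication (rather than synthesising new content) keeps the expanded sequence faithful to the observed appearance while still permitting fixed-length fragmentation.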
Evaluation settings – From every dataset, all sequence pairs are randomly split into two subsets of equal size, one for training and one for testing. Following the evaluation protocol on the PRID dataset, in the testing phase the sequences from one camera are used as the probe set while those from the other camera form the gallery set. The results are measured by Cumulated Matching Characteristics (CMC), for which we report the matching rates at the top ranks. As CMC values depend on the gallery size (the overall population for the ranked pairs), we report lower ranks for HDA+ (a fraction of the size of iLIDS-VID and PRID) than for PRID and iLIDS-VID, so that the values are approximately comparable across datasets. To obtain stable statistics, we repeat the experiments over multiple random trials and report the average results.
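The CMC measure used above can be sketched in a few lines; `cmc` is a hypothetical helper computing, for each rank up to `top_k`, the fraction of probes whose correct match appears at that rank or better:

```python
def cmc(ranked_galleries, true_ids, top_k):
    """Cumulated Matching Characteristics: for each probe, record the
    rank position of its correct gallery match, then cumulate."""
    hits = [0] * top_k
    for ranking, true_id in zip(ranked_galleries, true_ids):
        if true_id in ranking[:top_k]:
            hits[ranking.index(true_id)] += 1  # position of correct match
    n = len(true_ids)
    rates, total = [], 0
    for h in hits:  # rate at rank r counts matches at rank <= r
        total += h
        rates.append(total / n)
    return rates
```

For example, with three probes whose correct matches land at ranks 2, 1 and 1 respectively, the CMC curve is [2/3, 1.0, 1.0].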
4.1 Evaluation on Model Variants
We evaluated and analysed the proposed DVR model in three aspects: (1) effectiveness of the selection mechanisms; (2) effectiveness of the fragment representations; (3) robustness against low and variable video frame rates.
Effectiveness of the selection mechanisms – For the selection mechanism, we conducted two comparisons: (a) the DVR(single) model versus our preliminary model reported earlier, which we call DVR(float) since its selection involves a (float) weighted combination of instances, in contrast to our new single or multiple explicit instance selection strategies; (b) single versus multiple fragment-pair selection (Sec. 3). The results in Table I (the first two rows) show that near-identical scores are obtained by DVR(single) and DVR(float). This is further verified by the observation that both models select almost identical discriminative video fragments. On the other hand, the computational cost/time required differs between the two models, in particular when the visual content is more crowded and selection becomes harder. More specifically, for model training including both the ranking and selecting steps, Table I shows that both models require similar time for the ranking step on all datasets. This is because they are subject to the same number of ranking constraints (Eqn. (8)). However, although the time required for the selection routine is similar on PRID and HDA+, DVR(single) is significantly faster than DVR(float) on iLIDS-VID. These timings were measured on a 64-bit Intel CPU with a MATLAB implementation under Linux. These observations suggest no advantage in treating the selection as a float weighted combination of instances as originally proposed in [53, 56].
A natural question is how many discriminative fragment pairs should be selected from each cross-view image sequence pair of a person during model training. To that end, we evaluated ReID performance using different numbers of positive fragment pairs per person on PRID and iLIDS-VID (this multi-fragment selection evaluation is not performed on HDA+, as some short image sequences there contain only one fragment). It is evident from Table I that the use of additional discriminative fragment pairs can further boost the overall performance of person ReID, at the price of increased model training time. This empirically supports our analysis of the potential benefits of multiple fragment pair selection and exploitation discussed in Sec. 3.3.1. However, the margin of improvement from additional fragment data quickly diminishes: beyond the few top-ranked fragment pairs per person, adding further pairs had very limited effect on the learned ranking model. Moreover, the construction of ranking constraints in RankSVM is a time-consuming process whose complexity is linear in the number of constraints. Empirically, selecting only the top few discriminative fragment pairs from a matched training image sequence pair provides a good trade-off between ReID accuracy and model learning cost. For the remaining experiments reported in this section, DVR(top) models were trained for PRID and iLIDS-VID and DVR(single) models for HDA+ in the comparative evaluation against other baseline methods.
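The multi-pair selection step can be illustrated as follows: given hypothetical fragment-pair scores produced during training, picking the k most discriminative pairs reduces to a top-k selection (the scoring itself is defined by the DVR objective, not shown here):

```python
import heapq

def top_k_fragment_pairs(pair_scores, k):
    """Select the k highest-scoring fragment pairs.

    pair_scores: dict mapping (probe_fragment_idx, gallery_fragment_idx)
    to a discriminativeness score (hypothetical model outputs)."""
    return heapq.nlargest(k, pair_scores, key=pair_scores.get)

scores = {(0, 1): 0.9, (1, 1): 0.2, (2, 0): 0.5}
print(top_k_fragment_pairs(scores, 2))  # -> [(0, 1), (2, 0)]
```

Because the cost of constructing RankSVM constraints grows linearly with the number of selected pairs, keeping k small bounds the training time while retaining the most informative pairs.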
Effectiveness of the fragment representations – Our preliminary work is somewhat limited in its fragment representation, as no colour appearance information is considered. Here we report a significant improvement in performance from combining the space-time features (HOG3D) with colour features (Sec. 3.2). For the DVR(single) model, Table II shows consistent increases in Rank-1 recognition rate on PRID, iLIDS-VID and both HDA+ camera pairs when comparing ColHOG3D against HOG3D alone. This suggests that colour plays an important role in re-identifying people, as is also evident from the colour-only ReID performance in the table. These results demonstrate the importance of utilising both space-time and colour appearance information for person ReID in image sequence data, further supporting previous studies on the importance of leveraging colour information for ReID [2, 39, 48, 49, 50]. Throughout the following experiments, ColHOG3D is adopted as the default fragment representation in our DVR model, unless specified otherwise.
Robustness against low and variable video frame-rates – The proposed DVR model is expected to benefit more from higher frame-rate videos, whilst its advantage over appearance-only based models diminishes gradually as the video frame rate decreases and less space-time information becomes available. The results in Table II show that the space-time feature (HOG3D) only based DVR model produces very competitive ReID accuracy compared to models using colour features alone, given the higher frame-rate videos from PRID and iLIDS-VID. Encouragingly, HOG3D-only based DVR retains credible ReID accuracies on the HDA+(high fps) sequences. However, when the frame rate decreases further, as in HDA+(low fps), the performance of the HOG3D-only based model degrades considerably whilst the colour-only based DVR is less affected. These results are consistent with the expectation that ReID models based on space-time features alone degrade when very limited or no space-time information is available in very low frame rate videos. Nevertheless, the space-time information selected by the DVR model remains useful for ReID even at such low frame rates. It is also evident that the full DVR model using the ColHOG3D representation selectively exploits the complementary information from both space-time and colour appearance features for significant improvements in ReID accuracy in all situations, including very low video frame-rates (the bottom row in Table II). This illustrates the strength and robustness of the DVR model in utilising complementary visual information, even when space-time information is very poor or absent, and demonstrates its flexibility in coping with significant variations in video frame rate when extracting and exploiting discriminative space-time information from unregulated surveillance videos.
4.2 Comparing Gait Recognition and Temporal Sequence Matching
We compared the proposed DVR model with contemporary gait recognition and temporal sequence matching methods for person (re-)identification. (I) Gait recognition (GEI+RSVM)  is a state-of-the-art gait recognition model using Gait Energy Image (GEI)  (computed from pre-segmented silhouettes) as sequence representation and RankSVM  for recognition. A challenge for applying gait recognition to unregulated image sequences in ReID scenarios is to generate good gait silhouettes as input. To that end, we first deployed the DPAdaptiveMedianBGS algorithm provided by the BGSLibrary  to extract silhouettes from image sequences given by each dataset. This approach produces better foreground masking than other alternatives. (II) ColLBP/HoGHoF/ColLBPHoGHoF+DTW applies Dynamic Time Warping  to compute the similarity between two sequences, using either ColLBP  or HoGHoF  or their combination as the per-frame feature descriptor. This is similar to the approach of Simonnet et al. , except that they only used colour features. In comparison, ColLBP is a stronger representation as it encodes both colour and texture. Alternatively, HoGHoF encodes both texture and motion information.
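For reference, the DTW sequence similarity used by these baselines follows the textbook recursion below (a minimal sketch, not the exact implementation used in the experiments):

```python
def dtw_distance(seq_a, seq_b, dist):
    """Classic dynamic time warping cost between two variable-length
    sequences of per-frame feature descriptors, using frame-level
    distance `dist` (e.g. Euclidean between ColLBP vectors)."""
    inf = float('inf')
    n, m = len(seq_a), len(seq_b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(seq_a[i - 1], seq_b[j - 1])
            # Allow a frame to match, repeat, or be skipped.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Note that DTW tolerates differing sequence lengths (a repeated frame costs nothing if its features match), but it still matches the sequences holistically from start to end, which is exactly the weakness discussed below for unregulated ReID sequences.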
Table III presents the comparative ReID results among DVR, GEI+RSVM (gait), ColLBP+DTW, HoGHoF+DTW, and ColLBPHoGHoF+DTW. It is evident that the proposed DVR significantly outperforms all competitors on all datasets. Gait recognition gives significantly weaker performance than the DVR model on every dataset; moreover, its ReID accuracy on PRID and HDA+ is much better than that on iLIDS-VID. This is because the GEI gait features are very sensitive to background clutter and occlusions, as shown by the examples in Fig. 6: the extracted gait foreground masks from the iLIDS-VID person sequence (middle) are contaminated far more heavily by cluttered background and other moving objects than those from either PRID (top) or HDA+ (bottom). Our DVR model trains itself by simultaneously selecting and ranking only those video fragments which suffer the least from occlusions and noise. Moreover, DTW based sequence matching methods using ColLBP, HoGHoF, or their combination also suffer notably from the inherently uncertain nature of ReID sequences and perform significantly worse than the proposed DVR approach. This is largely because: (1) person sequences have different durations with arbitrary starting/ending frames, and potentially different numbers of walking cycles, so attempts to match entire sequences holistically inevitably suffer from mismatching and erroneous similarity measurement; (2) there is no explicit mechanism to avoid incomplete/missing data, typical in crowded scenes; (3) direct sequence matching is less discriminative than learning an inter-camera discriminative mapping function, which is explicitly built into the DVR model by exploiting multi-instance (fragment-pair) selection and ranking.
4.3 Comparing Spatial Feature Representations
To evaluate the effectiveness of discriminative video fragment selection and ranking using both spatial appearance and space-time features for person ReID, we compared the proposed DVR model against a wide range of contemporary ReID models using spatial features, either single-shot or multi-shot (multi-frame). To allow evaluation on the iLIDS-VID dataset, we mainly considered contemporary methods with publicly available code. They include: (1) SDALF (single-/multi-shot versions); (2) eSDC (the eSDC model cannot be evaluated on the small HDA+ dataset, as it additionally requires saliency statistics modelling with two large reference sets, which are not available for HDA+); (3) SS-ColLBP, which uses RankSVM as the model and colour & LBP as the representation; (4) we also extended SS-ColLBP to multi-shot by averaging the ColLBP features of each frame over an image sequence, to focus on stable appearance cues and suppress noise; we call this method MS-ColLBP. Moreover, we discuss the effect of clothing variation on person ReID methods, a challenging topic which is mostly ignored and under-investigated in the current literature.
Comparing with spatial feature based methods – The results in Table IV show that the proposed DVR model significantly outperforms all the spatial feature based methods on all datasets, yielding clear Rank-1 improvements over eSDC on PRID and iLIDS-VID, and over MS-ColLBP on PRID, iLIDS-VID and both HDA+ camera pairs. Note that the improvement margin achieved by the DVR model on iLIDS-VID (a more challenging dataset) is much larger than those on PRID and HDA+. This demonstrates the effectiveness of the proposed selective sequence matching method in coping with challenging real-world data for learning a robust re-identification ranking function. More concretely, the power of our DVR model can be largely attributed to the identity-sensitive space-time gradient cues learned by our discriminative fragment selection based matching and ranking mechanism, beyond the conventional models that learn only from spatial appearance data, e.g. colour and texture.
Clothing change challenge – Existing person ReID studies typically assume no changes in clothing. However, this assumption is not always valid: realistically, clothing may change for some people within and/or across camera views. Specifically, while there is no explicit clothing change among the people in either PRID or iLIDS-VID, some people changed their jacket/coat/shirt in both HDA+ camera pairs, resulting in substantial changes in appearance (Fig. 5(c,d)). Whilst only partial appearance variation may arise from changes in viewpoint and lighting, severe occlusion can also cause significant appearance change (Fig. 5(a,b)). Given this observation, we compared the performance of DVR against other appearance-based ReID models on the four different settings with different degrees of clothing change. We pay special attention to multi-shot models, as they are expected to be more robust under clothing changes. The results in Table IV show that MS-SDALF benefits consistently from multiple shots on all four settings, with or without clothing changes. This is largely due to its body-part selective matching strategy, i.e. using the best-matched patch pairs during matching. However, this method can also give weak ReID accuracy due to the inherent difficulty of obtaining reliable body-part segmentation in surveillance images. In comparison, MS-ColLBP suffers considerably more from clothing changes, evident from its decreased performance advantage over SS-ColLBP on HDA+(high fps), and worse still on HDA+(low fps), when compared with PRID and iLIDS-VID. This suggests that the advantage of MS-ColLBP over SS-ColLBP decreases when clothing changes are abrupt at low frame rates; under such conditions, averaging without selection is a poor strategy for coping with clothing changes.
In contrast, the proposed DVR model not only explores discriminative space-time ReID information less sensitive to appearance change, but also selects automatically the best-matched fragments for appearance consistency, sharing a similar principle of MS-SDALF but being more flexible and robust without requiring explicit part segmentation. We show in Fig. 7 two examples of model selected discriminative fragment pairs across camera views for person ReID. Note, this selection is driven by both static appearance and dynamic motion information embedded in our DVR model design. This demonstrates the potential advantage of the DVR model in addressing the clothing change challenge in person ReID, a problem under-studied in the current literature.
4.4 Complementary to Spatial Features
We further evaluated the complementary effect between the DVR model and existing colour/texture feature based ReID approaches. The results are reported in Table V. It is evident that for every existing appearance model, a significant performance gain is achieved by incorporating the DVR ranking score (Eqn. (17)) into its ranking result. More specifically, on PRID and iLIDS-VID, the Rank-1 ReID performance of multi-shot colour and texture features (MS-ColLBP), of eSDC, and of eSDC+MS-SDALF is all clearly boosted. Similar improvements are gained on the low frame rate sequences from HDA+ by MS-ColLBP and MS-SDALF. Such a performance step-change in improving conventional spatial feature based models is primarily due to the exploration of discriminative space-time features and the fragment selection based matching scheme of the proposed DVR model. This space-time selective matching process discovers a largely independent source of information compared with static appearance features, therefore playing a significant complementary and beneficial role to contemporary spatial feature based models. It is also worth pointing out that most existing spatial feature based methods benefit more from combining with DVR when tested on iLIDS-VID, and less on PRID and HDA+. This observation highlights the importance and necessity of discriminative fragment selection for robust model learning given video data from more crowded public scenarios, where blind learning from all the sequence data without selection leads to poorer, degraded models.
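A minimal sketch of such a score-level combination is given below; the min-max normalisation and the fusion weight `alpha` are illustrative assumptions, not the formulation of Eqn. (17):

```python
def fuse_scores(appearance_scores, dvr_scores, alpha=0.5):
    """Weighted score-level fusion of an appearance model's gallery
    matching scores with DVR ranking scores for the same probe.

    alpha is a hypothetical fusion weight; both score lists are
    min-max normalised first so the two ranges are comparable."""
    def norm(s):
        lo, hi = min(s), max(s)
        return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in s]
    a, d = norm(appearance_scores), norm(dvr_scores)
    return [alpha * x + (1 - alpha) * y for x, y in zip(a, d)]
```

Re-ranking the gallery by the fused scores lets the space-time DVR evidence correct mistakes made by appearance features alone, and vice versa.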
In addition, it is evident from Table V that the DVR model itself can benefit, albeit only slightly, from combining with other spatial feature based ReID models. This gain may be explained as the result of drawing from diverse sources of spatial features.
4.5 Evaluation of Space-time Fragment Selection
To evaluate the space-time video fragment selection mechanism in the proposed DVR model, we implemented two baseline methods without this selection mechanism: (1) SS-ColHOG3D represents each image sequence by ColHOG3D features of a single fragment randomly selected from the image sequence; (2) MS-ColHOG3D represents each image sequence by the averaged ColHOG3D features of four fragments uniformly selected from the sequence. In both baseline methods, RankSVM  is used to rank the person sequence representations. For a fair comparison, the length of these fragments used for both baselines is set the same as that in our DVR model.
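The MS-ColHOG3D baseline's representation can be sketched as follows (a hypothetical helper assuming per-fragment feature vectors have already been extracted):

```python
def ms_feature(fragment_features, num_fragments=4):
    """Baseline multi-shot representation: average the feature vectors
    of num_fragments fragments sampled uniformly over the sequence,
    with no discriminative selection."""
    n = len(fragment_features)
    # Uniformly spaced fragment indices spanning the whole sequence.
    idx = [round(k * (n - 1) / (num_fragments - 1))
           for k in range(num_fragments)]
    picked = [fragment_features[i] for i in idx]
    dim = len(picked[0])
    return [sum(f[d] for f in picked) / num_fragments for d in range(dim)]
```

By construction this uniform averaging smooths away noise but also smooths away the most discriminative fragments, which is precisely the contrast drawn against DVR's selection mechanism below.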
The results are presented in Table VI. The DVR model outperforms both SS-ColHOG3D and MS-ColHOG3D at Rank-1 on PRID and on both HDA+ camera pairs, and its performance advantage is even greater on the more challenging iLIDS-VID dataset. This demonstrates clearly that, in the presence of significant noise and given unregulated person image sequences, it is indispensable to automatically select discriminative space-time fragments from raw image sequences in order to construct a more robust model for person ReID. It is also notable that MS-ColHOG3D outperforms SS-ColHOG3D by suppressing noise through temporal averaging. Although such a straightforward averaging approach has some benefit over single-shot methods, it loses discriminative information due to uniform temporal smoothing without selection.
5 Conclusion and Future Work
We have presented a novel DVR framework for person re-identification by video ranking using discriminative space-time and appearance feature selection. Our extensive evaluations show that this model outperforms a wide range of contemporary techniques, from gait recognition and temporal sequence matching to state-of-the-art single-/multi-shot (or multi-frame) spatial feature representation based ReID models. In contrast to existing ReID approaches, which often employ the spatial appearance of people alone, the proposed method captures more accurately both appearance and space-time information discriminative for person ReID, through learning a cross-view multi-instance ranking function. This is made possible by the ability of our model to automatically discover and exploit the most reliable and informative video fragments extracted from inherently incomplete and inaccurate person image sequences captured against cluttered backgrounds, without any guarantee on person walking cycles, starting/ending frame alignment, video frame rates, or clothing stability. Moreover, the proposed DVR model significantly complements and improves existing spatial appearance features when combined for person ReID. Extensive comparative evaluations were conducted to validate the advantages of the proposed model over a variety of baseline methods on three challenging image sequence based ReID datasets.
Future work – Person re-identification remains largely an unsolved problem, and our future work includes: (1) in addition to space-time information, how to automatically exploit other knowledge sources, e.g. the topology of a camera network, or semantic descriptions (e.g. mid-level attributes nameable by humans) of people's appearance and walking style; (2) how to cope with open-world person re-identification settings [39, 62, 63], where the probe people are not guaranteed to appear in the gallery set.
Acknowledgements – We thank Dario Figueira of IST for providing the HDA+ dataset and for assisting in extracting the person bounding boxes from raw videos required for the person ReID evaluations and gait experiments; and Martin Hirzer, Peter Roth and Csaba Beleznai of AIT for providing the additional raw videos of PRID required for the gait recognition experiments. Corresponding authors: Shaogang Gong and Shengjin Wang.
-  W.-S. Zheng, S. Gong, and T. Xiang, “Reidentification by relative distance comparison,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, pp. 653–668, 2013.
-  M. Hirzer, P. M. Roth, M. Köstinger, and H. Bischof, “Relaxed pairwise learned metric for person re-identification,” in European Conference on Computer Vision, 2012, pp. 780–793.
-  M. Farenzena, L. Bazzani, A. Perina, V. Murino, and M. Cristani, “Person re-identification by symmetry-driven accumulation of local features,” in IEEE Conference on Computer Vision and Pattern Recognition, 2010, pp. 2360–2367.
-  N. Gheissari, T. B. Sebastian, and R. Hartley, “Person reidentification using spatiotemporal appearance,” in IEEE Conference on Computer Vision and Pattern Recognition, 2006, pp. 1528–1535.
-  L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang, and Q. Tian, “Scalable person re-identification: A benchmark,” in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1116–1124.
-  S. Gong, M. Cristani, C. Loy, and T. Hospedales, “The re-identification challenge,” in Person Re-Identification. Springer, 2014, pp. 1–20.
-  B. Prosser, W.-S. Zheng, S. Gong, and T. Xiang, “Person re-identification by support vector ranking,” in British Machine Vision Conference, 2010.
-  R. Zhao, W. Ouyang, and X. Wang, “Unsupervised salience learning for person re-identification,” in IEEE Conference on Computer Vision and Pattern Recognition, 2013, pp. 3586–3593.
-  i-LIDS Multiple Camera Tracking Scenario Definition, UK Home Office, 2008.
-  R. Poppe, “A survey on vision-based human action recognition,” Image and Vision Computing, vol. 28, pp. 976–990, 2010.
-  D. Weinland, R. Ronfard, and E. Boyer, “A survey of vision-based methods for action representation, segmentation and recognition,” Computer Vision and Image Understanding, vol. 115, pp. 224–241, 2011.
-  M. Sapienza, F. Cuzzolin, and P. Torr, “Learning discriminative space-time actions from weakly labelled videos,” in British Machine Vision Conference, 2012.
-  M. S. Nixon, T. Tan, and R. Chellappa, Human identification based on gait. Springer, 2010, vol. 4.
-  S. Sarkar, P. J. Phillips, Z. Liu, I. R. Vega, P. Grother, and K. W. Bowyer, “The humanid gait challenge problem: Data sets, performance, and analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, pp. 162–177, 2005.
-  J. Han and B. Bhanu, “Individual recognition using gait energy image,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, pp. 316–322, 2006.
-  R. Martín-Félez and T. Xiang, “Gait recognition by ranking,” in European Conference on Computer Vision, 2012, pp. 328–341.
-  K. Bashir, T. Xiang, and S. Gong, “Gait recognition without subject cooperation,” Pattern Recognition Letters, vol. 31, pp. 2052–2060, 2010.
-  H. Wang, M. M. Ullah, A. Klaser, I. Laptev, C. Schmid et al., “Evaluation of local spatio-temporal features for action recognition,” in British Machine Vision Conference, 2009.
-  S. Gong and T. Xiang, Visual analysis of behaviour: from pixels to semantics. Springer, 2011.
-  I. Laptev, “On space-time interest points,” International Journal of Computer Vision, vol. 64, pp. 107–123, 2005.
-  P. Dollár, V. Rabaud, G. Cottrell, and S. Belongie, “Behavior recognition via sparse spatio-temporal features,” in 2nd Joint IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance, 2005, pp. 65–72.
-  G. Willems, T. Tuytelaars, and L. Van Gool, “An efficient dense and scale-invariant spatio-temporal interest point detector,” in European Conference on Computer Vision, 2008, pp. 650–663.
-  M. Bregonzio, S. Gong, and T. Xiang, “Recognising action as clouds of space-time interest points,” in IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 1948–1955.
-  Y. Ke, R. Sukthankar, and M. Hebert, “Volumetric features for video event detection,” International Journal of Computer Vision, vol. 88, pp. 339–362, 2010.
-  A. Gilbert, J. Illingworth, and R. Bowden, “Fast realistic multi-action recognition using mined dense spatio-temporal features,” in IEEE International Conference on Computer Vision, 2009, pp. 925–931.
-  I. Laptev, M. Marszalek, C. Schmid, and B. Rozenfeld, “Learning realistic human actions from movies,” in IEEE Conference on Computer Vision and Pattern Recognition, 2008, pp. 1–8.
-  P. Scovanner, S. Ali, and M. Shah, “A 3-dimensional sift descriptor and its application to action recognition,” in ACM International Conference on Multimedia, 2007, pp. 357–360.
-  A. Klaser and M. Marszalek, “A spatio-temporal descriptor based on 3d-gradients,” in British Machine Vision Conference, 2008.
-  Z. Lin, Z. Jiang, and L. S. Davis, “Recognizing actions by shape-motion prototype trees,” in IEEE International Conference on Computer Vision, 2009, pp. 444–451.
-  D. Simonnet, M. Lewandowski, S. A. Velastin, J. Orwell, and E. Turkbeyler, “Re-identification of pedestrians in crowds using dynamic time warping,” in Workshop of European Conference on Computer Vision, 2012, pp. 423–432.
-  O. Hamdoun, F. Moutarde, B. Stanciulescu, and B. Steux, “Person re-identification in multi-camera system by signature based on interest point descriptors collected on short video sequences,” in ACM International Conference on Distributed Smart Cameras, 2008, pp. 1–6.
-  D. N. T. Cong, C. Achard, L. Khoudour, and L. Douadi, “Video sequences association for people re-identification across multiple non-overlapping cameras,” in International Conference on Image Analysis and Processing, 2009, pp. 179–189.
-  S. Karaman and A. D. Bagdanov, “Identity inference: generalizing person re-identification scenarios,” in Workshop of European Conference on Computer Vision, 2012, pp. 443–452.
-  C. Nakajima, M. Pontil, B. Heisele, and T. Poggio, “Full-body person recognition system,” Pattern Recognition, vol. 36, pp. 1997–2006, 2003.
-  W. Li and X. Wang, “Locally aligned feature transforms across views,” in IEEE Conference on Computer Vision and Pattern Recognition, 2013, pp. 3594–3601.
-  D. S. Cheng, M. Cristani, M. Stoppa, L. Bazzani, and V. Murino, “Custom pictorial structures for re-identification,” in British Machine Vision Conference, 2011.
-  Y. Xu, L. Lin, W.-S. Zheng, and X. Liu, “Human re-identification by matching compositional template with cluster sampling,” in IEEE International Conference on Computer Vision, 2013, pp. 3152–3159.
-  A. Bedagkar-Gala and S. K. Shah, “Part-based spatio-temporal model for multi-person re-identification,” Pattern Recognition Letters, vol. 33, pp. 1908–1915, 2012.
-  S. Gong, M. Cristani, S. Yan, and C. Loy, Person Re-Identification. Springer, 2014.
-  D. Gray, S. Brennan, and H. Tao, “Evaluating appearance models for recognition, reacquisition, and tracking,” in IEEE International workshop on performance evaluation of tracking and surveillance, 2007.
-  W.-S. Zheng, S. Gong, and T. Xiang, “Associating groups of people,” in British Machine Vision Conference, 2009.
-  C. C. Loy, T. Xiang, and S. Gong, “Multi-camera activity correlation analysis,” in IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 1988–1995.
-  H. Ben Shitrit, J. Berclaz, F. Fleuret, and P. Fua, “Tracking multiple people under global appearance constraints,” in IEEE International Conference on Computer Vision, 2011, pp. 137–144.
-  S. Hare, A. Saffari, and P. H. S. Torr, “Struck: Structured output tracking with kernels,” in IEEE International Conference on Computer Vision, 2011, pp. 263–270.
-  A. Kanaujia, C. Sminchisescu, and D. Metaxas, “Semi-supervised hierarchical models for 3d human pose reconstruction,” in IEEE Conference on Computer Vision and Pattern Recognition, 2007, pp. 1–8.
-  J. Xiao, H. Cheng, H. Sawhney, C. Rao, and M. Isnardi, “Bilateral filtering-based optical flow estimation with occlusion detection,” in European Conference on Computer Vision, 2006, pp. 211–224.
-  R. Waters and J. Morris, “Electrical activity of muscles of the trunk during walking.” Journal of Anatomy, vol. 111, pp. 191–199, 1972.
-  C. Liu, S. Gong, C. Loy, and X. Lin, “Person re-identification: What features are important?” in Workshop of European Conference on Computer Vision, 2012, pp. 391–401.
-  C. Liu, S. Gong, and C. C. Loy, “On-the-fly feature importance mining for person re-identification,” Pattern Recognition, vol. 47, pp. 1602–1615, 2014.
-  R. Zhao, W. Ouyang, and X. Wang, “Person re-identification by salience matching,” in IEEE International Conference on Computer Vision, 2013, pp. 2528–2535.
-  Y. Li, S. Wang, Q. Tian, and X. Ding, “Feature representation for statistical-learning-based object detection: A review,” Pattern Recognition, vol. 48, no. 11, pp. 3542–3559, 2015.
-  O. Chapelle and S. S. Keerthi, “Efficient algorithms for ranking with svms,” Information Retrieval, vol. 13, pp. 201–215, 2010.
-  T. Wang, S. Gong, X. Zhu, and S. Wang, “Person re-identification by video ranking,” in European Conference on Computer Vision, 2014, pp. 688–703.
-  M. Hirzer, C. Beleznai, P. M. Roth, and H. Bischof, “Person re-identification by descriptive and discriminative classification,” in Scandinavian Conference on Image Analysis, 2011.
-  D. Figueira, M. Taiana, A. Nambiar, J. Nascimento, and A. Bernardino, “The hda+ data set for research on fully automated re-identification systems,” in Workshop of European Conference on Computer Vision, 2014.
-  C. Bergeron, J. Zaretzki, C. Breneman, and K. P. Bennett, “Multiple instance ranking,” in International Conference on Machine Learning, 2008, pp. 48–55.
-  C. Bergeron, G. Moore, J. Zaretzki, C. M. Breneman, and K. P. Bennett, “Fast bundle algorithm for multiple-instance learning,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, pp. 1068–1079, 2012.
-  Y. Hu, M. Li, and N. Yu, “Multiple-instance ranking: Learning to rank images for image retrieval,” in IEEE Conference on Computer Vision and Pattern Recognition, 2008, pp. 1–8.
-  L. R. Rabiner and B.-H. Juang, Fundamentals of speech recognition. PTR Prentice Hall Englewood Cliffs, 1993, vol. 14.
-  A. Sobral, “BGSLibrary: An opencv c++ background subtraction library,” in IX Workshop de Visão Computacional, 2013.
-  V. John, G. Englebienne, and B. Krose, “Solving person re-identification in non-overlapping camera using efficient gibbs sampling,” in British Machine Vision Conference, 2013.
-  W.-S. Zheng, S. Gong, and T. Xiang, “Towards open-world person re-identification by one-shot group-based verification,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PP, no. 99, pp. 1–1, 2015.
-  B. Cancela, T. Hospedales, and S. Gong, “Open-world person re-identification by multi-label assignment inference,” in British Machine Vision Conference, 2014.