Appearance Descriptors for Person Re-identification: a Comprehensive Review

07/22/2013
by   Riccardo Satta, et al.
In video-surveillance, person re-identification is the task of recognising whether an individual has already been observed over a network of cameras. Typically, this is achieved by exploiting the clothing appearance, as classical biometric traits like the face are impractical in real-world video surveillance scenarios. Clothing appearance is represented by means of low-level local and/or global features of the image, usually extracted according to some part-based body model to treat different body parts (e.g. torso and legs) independently. This paper provides a comprehensive review of current approaches to build appearance descriptors for person re-identification. The most relevant techniques are described in detail, and categorised according to the body models and features used. The aim of this work is to provide a structured body of knowledge and a starting point for researchers willing to conduct novel investigations on this challenging topic.


1 Introduction

Person re-identification[32] consists of recognising an individual who has already been observed (hence the term re-identification) over a network of video surveillance cameras. The topic is currently attracting much interest from researchers, due to the various possible applications such a technique can enable, e.g., off-line retrieval of all the video-sequences where an individual of interest appears, whose image is given as query, or on-line pedestrian tracking over multiple, possibly not-overlapping cameras (a task also known as re-acquisition [46]).

While several biometric traits could in principle be used to this aim, strong pose variations and unconstrained environments make classical biometric traits like the face difficult or impractical to use [32] with the typical sensors and settings of a surveillance network. Therefore, researchers have explored cues that impose fewer constraints, at the expense of an intrinsically lower identification capability. Among them, clothing appearance is used in most re-identification methods, as a soft, session-based cue that is relatively easy to extract and exhibits uniqueness over a limited time span. Various descriptors of the clothing appearance have been proposed so far in the literature [32]. They are mostly designed heuristically, and are based on the extraction of various kinds of low-level local and global features from the images showing the individual. (The term "local features" refers to localised characteristics of the image, e.g. the colour distribution around a certain salient point; the term "global features", instead, refers to characteristics of the whole image, e.g. the overall colour distribution.) Typically, they exploit a part-based body model, to take into account the non-rigid structure of the human body and treat the appearance of different body parts (e.g. torso and legs) independently.

This paper provides an overview of existing methods for the task of person re-identification, with particular attention to the techniques used to build a descriptor of the body appearance. The presented review is mostly based on Chapter 2 of my thesis work [88]. The remainder of the paper is structured as follows. Sect. 2 first gives a simple formal statement of person re-identification. Then, Sect. 3 reviews current approaches to construct appearance descriptors. The survey is conducted under two "orthogonal" viewpoints, namely the kind of body model and the kind of features used (Sect. 3.1 and Sect. 3.2, respectively). Sect. 3.3 then focuses on the problem of combining different feature sets and on the current approaches to it. While almost all existing methods use clothing appearance as the main cue to perform re-identification, it is worth noting that other approaches have been attempted in the literature, for instance based on gait, or on anthropometric measures captured through novel RGB-D sensors. These methods are briefly surveyed in Sect. 4. Finally, Sect. 5 concludes the paper.

2 Problem overview

Formally, person re-identification can be modelled as a recognition/matching task, where a probe individual is matched against a gallery of templates (representing the individuals previously seen by the camera network). Thus, the problem of re-identifying an individual represented by its descriptor p can be formulated as:

g^{*} = \arg\min_{g \in G} d(p, g) \qquad (1)

where G is the gallery of template descriptors, and d(·, ·) is a proper distance metric.

In order to address the re-identification problem above, it is fundamental first to answer the question of how to represent persons by means of a descriptor. This is the topic of investigation in the rest of the paper.
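As a concrete illustration, once descriptors and a distance metric are fixed, the matching task of Eq. (1) reduces to a nearest-template search. The sketch below assumes descriptors are real-valued vectors; the function name and the Euclidean default are illustrative choices, not from the paper.

```python
import numpy as np

def reidentify(probe, gallery, dist=None):
    """Return the index of the gallery template closest to the probe.

    `probe` is a descriptor vector, `gallery` a list of template
    descriptors, and `dist` the distance metric d(., .) of Eq. (1)
    (Euclidean by default, as an illustrative choice).
    """
    if dist is None:
        dist = lambda a, b: float(np.linalg.norm(a - b))
    distances = [dist(probe, g) for g in gallery]
    return int(np.argmin(distances))
```

Any metric can be plugged in via `dist`, e.g. a histogram distance such as the Bhattacharyya distance when descriptors are histograms.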

3 Appearance descriptors

The procedure of extracting appearance descriptors typically follows a standard pipeline (see Fig. 1 and Fig. 2):

  1. the person is detected and tracked by suitable algorithms;

  2. the pixels belonging to the person are separated from the background (foreground extraction or segmentation) in each frame of the video-sequence;

  3. a descriptor is built from the resulting silhouettes (one for each frame), using local or global features, possibly after different body parts are detected through a body model, in order to take into account the non-rigid nature of the body.

The descriptors of Step 3 are finally stored in a database for subsequent searches.
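The pipeline above can be sketched as a composition of pluggable components. All four callables below are hypothetical placeholders standing in for the detection, tracking, segmentation and description algorithms cited in the text; only the orchestration is illustrated.

```python
def build_descriptors(frames, detect, track, segment, extract):
    """Steps 1-3 as a pipeline. `detect(frame)` returns the blobs found
    in a frame, `track(blobs_per_frame)` groups blobs per person,
    `segment(blob)` returns a silhouette, and `extract(silhouettes)`
    builds one descriptor per person. All four are placeholder callables."""
    blobs_per_frame = [detect(f) for f in frames]        # Step 1: detection
    tracks = track(blobs_per_frame)                      # Step 1: data association
    database = {}
    for person_id, blobs in tracks.items():
        silhouettes = [segment(b) for b in blobs]        # Step 2: segmentation
        database[person_id] = extract(silhouettes)       # Step 3: descriptor
    return database
```

The returned mapping plays the role of the gallery searched at re-identification time.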

Figure 1: Descriptor construction pipeline.
Figure 2: (a) Example outputs of a pedestrian detection algorithm in three frames taken from real-world video-surveillance footage; detected blobs are in green. (b) Example of division of a blob into person and non-person pixels.

Step 1 requires i) a method to detect people in a given video frame [31] (i.e., to recognise the image regions, or blobs, that contain a person), and ii) a data association algorithm that tracks the people found by the detector [56, 79, 109] (i.e., associates blobs in subsequent frames with the same person). These two steps may also be carried out together, and reinforce one another [3]. Step 2 is usually carried out using an adaptive model of the background [35].

Many challenging issues can affect some or all of the three steps above. Among them we cite (see Fig. 3):

  • Pose and viewpoint variations. The relative pose of a person with respect to the cameras of the network varies depending on the walking path of that person and on the viewpoint of the camera. This may cause considerable variations of the person's appearance.

  • Partial occlusions. Parts of a person may not be visible to the camera due to occlusions caused by objects, clothing accessories or other people. This may cause the segmentation algorithm to fail in separating one person from the rest of the scene; consequently, descriptors may be built from images partially corrupted by the source of the occlusion.

  • Illumination changes. Illumination conditions may differ between cameras, and in the same camera over different periods of time due to changing environmental conditions. This may result in appearance changes across different cameras and over time.

  • Changes in colour response. Different cameras may have different colour responses, which may affect the person's appearance as well.

The vast majority of methods assume that the steps of detection, tracking and segmentation have already been accomplished using any of the algorithms available in the literature, and concentrate on the task of constructing descriptors. The interested reader is referred to [20] and [31] for comprehensive surveys of pedestrian detection and foreground segmentation algorithms. This paper focuses on Step 3, namely, how to construct discriminant and robust appearance descriptors to match persons across different views.

As stated in the introductory Section, appearance descriptors usually follow a part-based body model: the body is first subdivided into parts. Then, body parts are described via global features or bags (i.e., unordered sets) of local features. Therefore, it is convenient to split the survey of current appearance descriptors in two parts, first reviewing body-part subdivision models (Sect. 3.1), then focusing on appearance features (Sect. 3.2). Combining different kinds of features may help attain a better performance; Sect. 3.3 provides a closer insight into typical approaches for feature combination in appearance descriptors.

Figure 3: Pairs of images showing the same person from different cameras, taken from two common benchmark data sets, VIPeR [46] and i-LIDS [112]. Notice pose variations (a)(b)(c), partial occlusions (d), illumination changes (a)(b)(c), and different colour responses (e).

3.1 Part-based body models

The human body is not a rigid object. Instead, it has a complex kinematics, and can be better described using a part-based model, possibly one where the relative positions of the parts are not fixed a-priori but are inferred from the image. Furthermore, discontinuities of the clothing appearance usually follow the body structure (e.g., the clothing appearances of the upper and lower body usually differ). Many existing appearance descriptors, therefore, exploit some part-based human body model to segment the silhouette into different parts. Some other descriptors (e.g., [7, 13, 19, 27, 47, 51, 52, 55, 57, 60, 69, 71, 83, 103, 104, 111]) consider the body as a whole instead. Part-based body models used in existing appearance descriptors can roughly be divided into three categories:

  • fixed models, in which size and relative position of body parts are defined a-priori;

  • adaptive models, that try to fit a predefined part subdivision model to the image of the person;

  • learned models, that previously learn the model constraints (e.g., relative parts disposition) from a labelled training set of images of individuals.

In the rest of this Section, part-based body models belonging to the three categories above are reviewed and compared.

3.1.1 Fixed part models

Probably the simplest kind of part subdivision is a fixed one, in which the sizes and positions of body parts are chosen a-priori. An example of this approach can be found in [67, 84, 113], where the body is subdivided into six horizontal stripes of equal size, which roughly capture the head, upper and lower torso, and upper and lower legs. Similarly, in [6] the silhouette is subdivided into five equal-sized stripes. An even simpler fixed part subdivision is used in [61]: three horizontal stripes of respectively 16%, 29% and 55% of the total blob height roughly locate the head, torso and legs; the first stripe is then discarded, as the head typically consists of few pixels and is not informative about the clothing appearance.
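A fixed subdivision of this kind is trivial to implement. The sketch below returns the row ranges of each stripe; the default fractions follow the 16%/29%/55% split of [61], and the rounding behaviour is an assumption of this illustration.

```python
def fixed_stripes(blob_height, fractions=(0.16, 0.29, 0.55)):
    """Split a blob of the given pixel height into horizontal stripes.

    `fractions` gives each stripe's share of the total height (here the
    head/torso/legs split of [61]); they are assumed to sum to 1.
    Returns a list of (top_row, bottom_row) ranges.
    """
    bounds, top = [], 0
    for f in fractions:
        h = round(blob_height * f)
        bounds.append((top, min(top + h, blob_height)))
        top += h
    return bounds
```

The six equal stripes of [67, 84, 113] are obtained with `fractions=(1/6,) * 6`.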

3.1.2 Adaptive part models

Other body models are adaptive, in the sense that they try to fit a predefined part subdivision model to the image of the individual. In one of the descriptors proposed in [8], the MPEG-7 Dominant Colour Descriptor (DCD) [108] is used to dynamically separate the body into two parts, upper and lower body, by looking for discontinuities in the dominant colours (the same DCD is also used as the feature set describing each body part, see Sect. 3.2). The approach of [36] extends the basic idea of [8] of exploiting appearance anti-symmetries. It dynamically finds three body areas, namely the head, torso and legs, exploiting symmetry and anti-symmetry properties of the silhouette and of the appearance. To this aim, two operators are defined. The first is called the chromatic bilateral operator. It measures the appearance anti-symmetry of a certain image region with respect to a given horizontal axis, and is defined as

C(i, \delta) = \sum_{B_{[i-\delta,\, i+\delta]}} d(p_i, \hat{p}_i) \qquad (2)

where d(·, ·) is the Euclidean distance, evaluated between pixels p_i and p̂_i represented in the HSV colour space and located symmetrically with respect to a horizontal axis placed at height i of the person image. This distance is summed over the person pixels lying in the horizontal strip B_{[i−δ, i+δ]} centred at i.

The second is called the spatial covering operator, and measures the difference of the silhouette areas of two regions:

S(i, \delta) = \frac{1}{\delta W} \left| A(B_{[i-\delta,\, i]}) - A(B_{[i,\, i+\delta]}) \right| \qquad (3)

where W is the width of the blob, and A(B_{[i−δ, i]}) and A(B_{[i, i+δ]}) denote the number of person pixels of the strips of vertical extension [i−δ, i] and [i, i+δ], respectively. These operators are combined to find two axes, i_HT and i_TL, that respectively separate head and torso, and torso and legs. These axes are defined as

i_{HT} = \arg\min_i \left( -S(i, \delta) \right) \qquad (4)
i_{TL} = \arg\min_i \left( 1 - C(i, \delta) + S(i, \delta) \right) \qquad (5)

The parameter δ is set proportionally to the blob height H in pixels. The values i_HT and i_TL isolate three regions approximately corresponding to head, body and legs (Fig. 4-a). The head part is discarded, as it carries very little informative content. As claimed by the authors, this strategy locates body parts based on the visual and positional information of the clothes, and is robust to pose and viewpoint variations and to low resolution. After [36], the same part-based model has been used in various other works [14, 72, 73, 89, 91, 93, 106].
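A minimal sketch of the two operators on a binary silhouette mask and an HSV image follows. The normalisation of C to [0, 1], the stripe width δ = H/4, and the search range are assumptions of this illustration and are not guaranteed to match the exact parameters of [36].

```python
import numpy as np

def spatial_covering(mask, i, delta):
    """S(i, d) of Eq. (3): |A(B[i-d, i]) - A(B[i, i+d])| / (d * W),
    on a boolean silhouette `mask` of shape (H, W)."""
    H, W = mask.shape
    a_up = int(mask[max(i - delta, 0):i].sum())
    a_dn = int(mask[i:min(i + delta, H)].sum())
    return abs(a_up - a_dn) / (delta * W)

def chromatic_bilateral(hsv, mask, i, delta):
    """C(i, d) of Eq. (2): summed Euclidean HSV distance between
    foreground pixels mirrored about the horizontal axis at height i."""
    H = hsv.shape[0]
    total = 0.0
    for off in range(1, delta // 2 + 1):
        up, dn = i - off, i + off
        if up < 0 or dn >= H:
            continue
        both = mask[up] & mask[dn]                 # person pixels on both rows
        diff = hsv[up].astype(float) - hsv[dn].astype(float)
        total += np.linalg.norm(diff[both], axis=1).sum()
    return total

def find_axes(hsv, mask):
    """Minimise the combinations of Eqs. (4)-(5) over candidate heights."""
    H = mask.shape[0]
    delta = max(H // 4, 1)                         # assumed stripe width
    idx = range(delta, H - delta)
    c = np.array([chromatic_bilateral(hsv, mask, i, delta) for i in idx])
    c = c / c.max() if c.max() > 0 else c          # normalise to [0, 1]
    s = np.array([spatial_covering(mask, i, delta) for i in idx])
    i_ht = delta + int(np.argmin(-s))              # head/torso axis, Eq. (4)
    i_tl = delta + int(np.argmin(1.0 - c + s))     # torso/legs axis, Eq. (5)
    return i_ht, i_tl
```

On a silhouette whose upper and lower halves have clearly different colours, the torso/legs axis lands on the colour discontinuity, since the chromatic anti-symmetry peaks there.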

Figure 4: (a) Symmetry-driven subdivision in three parts [36]. The blob containing the person is divided according to two horizontal axes, i_HT and i_TL, found by minimising a proper combination of the operators defined in Eqs. (2)-(3). (b) Decomposable body model used in [44]: (b.1) the decomposable triangulated graph model; (b.2) partitioning of the person according to the decomposable model. (c) An example of fitting the decomposable triangulated model of [44] to an individual: (c.1) an image of an individual; (c.2) edges detected through the Canny algorithm [24]; (c.3) result of fitting the model to the edges (in red). All figures are taken from [36] and [44].

A deformable model that is fitted to each individual to find six body regions is used in one of the methods of [44], based on decomposable triangulated graphs [2]. A decomposable triangulated graph is a collection of cliques of size three that has a perfect elimination order for its vertices, i.e., there exists an elimination order for all vertices such that (i) each eliminated vertex belongs to only one triangle, and (ii) eliminating the vertex yields a new decomposable triangulated graph.

The model is fitted to the image of a person using the following strategy. Let the model be a decomposable triangulated graph T with triangles T_1, …, T_m. The goal is to find a function f that maps the model to the image domain, such that the consistency of the model with salient image features is maximised, and the deformations of the underlying model are minimised. The function f must be a piecewise affine map [38], i.e., the deformation of each triangle must be an affine transformation. The problem becomes minimising an energy functional that can be written as a sum of costs:

E(f; \mathcal{F}) = \sum_{i=1}^{m} \left[ E_i^{\mathrm{shape}}(f) + E_i^{\mathrm{data}}(f; \mathcal{F}) \right] \qquad (6)

where \mathcal{F} represents the image features. The terms E_i^shape take into account the cost of the shape distortion of the i-th triangle, while the terms E_i^data attract the model to salient image features, which are found using an edge detector (the Canny algorithm [24]). As shown in [2], a model based on decomposable triangulated graphs can be efficiently optimised using dynamic programming. Once the model has been fitted to the image, the individual is partitioned into six salient body parts, shown in Fig. 4-b with different colours. An example of application to a real pedestrian image is shown in Fig. 4-c.

3.1.3 Learned part models

More recently, some methods relying on previously trained body-part detectors and articulated body models have been proposed. Part detectors are statistical classifiers that learn a model of a certain body part (e.g., an arm) from a given training set of images of people where body parts have been manually located and labelled. Typically, these detectors exploit features related to the edges contained in the image. An approach of this kind has been used in

[15, 16], based on the work of Felzenszwalb et al. [37]. The overall body model is made up of different part models; each one, in turn, consists of a spatial model and a part filter. The spatial model defines a set of allowed placements for a part with respect to the bounding box containing the person, and a deformation cost for each placement. To learn a model, a generalisation of Support Vector Machines (SVM) [22] called latent variable SVM (LSVM) is used. In [15, 16], this model is used to detect four different body parts, namely the head, left torso, right torso and upper legs (see Fig. 5-a).

Figure 5: (a) Sample output of the articulated body model used in [15, 16]. (b) Sample output of the Pictorial Structure model used in [25]. (c) Sample Pictorial Structure of the upper body part, with the torso part as root node. (d) Kinematic prior learned on the dataset from [85]. The mean part position is shown in blue dots; the covariance of the part relations in the transformed space is shown using red ellipses. Figures taken from [15] and [4].

An articulated body model based on Pictorial Structures (PS) was proposed in [4] and later exploited in [25] for the task of re-identification. In [25], six parts are considered (chest, head, thighs and legs, see Fig. 5-b), while the original PS model is also able to detect and locate upper and lower arms.

A PS model for an object [39] is a collection of parts with connections between certain pairs of parts (an example is provided in Fig. 5-c). The approach of [4] uses a PS of the human body that is made up of a set of parts, and a set of generic part detectors based on descriptors of the shape. The model and the body part detectors are trained on a training set of images of people.

Let L = {l_1, …, l_N} be the set of configurations of the body parts. Each l_i = (x_i, y_i, θ_i, s_i) is the state of the i-th body part, where x_i and y_i are the image coordinates of the part centre, θ_i is the absolute part orientation, and s_i is the part scale, relative to the size of the part in the training set. Given the image evidence E, the problem is to maximise the a-posteriori probability (posterior) p(L | E) that the part configuration is correct. The posterior is proportional to

p(L \mid E) \propto p(E \mid L)\, p(L) \qquad (7)

according to Bayes' theorem [34]. The term p(E | L) is the likelihood of the image evidence given a particular body part configuration, while p(L) corresponds to a kinematic tree prior. Both are learned from a training set, as follows.

Kinematic tree prior. The prior p(L) encodes the kinematic constraints, i.e. the constraints on the relative disposition of the parts. The body structure is mapped onto a directed acyclic graph, so that p(L) can be factorised as

p(L) = p(l_0) \prod_{(i, j) \in K} p(l_i \mid l_j) \qquad (8)

where K denotes the set of all directed edges in the kinematic tree, and l_0 is the root node, which in [4] is chosen to be the torso body part.

The prior p(l_0) for the root part configuration is assumed to be uniform. To model the part relations p(l_i | l_j), a transformed space is used, where such relations can be modelled as Gaussian [39]. More specifically, the part configuration l_i is transformed into the coordinate system of the joint between the two parts i and j using the transformation:

T_{ji}(l_i) = \left( x_i + s_i d_{ji}^{x},\; y_i + s_i d_{ji}^{y},\; \theta_i + \bar{\theta}_{ji},\; s_i \right) \qquad (9)

where (d_{ji}^x, d_{ji}^y) is the mean relative position of the joint between the two parts i and j, in the coordinate system of part i, and θ̄_{ji} is the relative angle between the two parts. Then, part relations are modelled as Gaussian in the transformed space:

p(l_i \mid l_j) = \mathcal{N}\!\left( T_{ji}(l_i) \mid T_{ij}(l_j), \Sigma^{ji} \right) \qquad (10)

where the transformation parameters and the covariance Σ^ji can be learned via maximum likelihood estimation [34] from a labelled training set of images of people. It is worth noting that the body parts are only loosely attached to the joints (a so-called loose-limbed model [98]), which helps increase the robustness of the pose estimation. Fig. 5-d shows the priors learned from the multiple views and multiple poses people data set of [85], a common benchmark corpus for body pose estimation algorithms.
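The maximum-likelihood step can be sketched as fitting a Gaussian to relation vectors collected in the transformed joint space. The function below is an illustration under the assumption that the training pairs have already been mapped through the transformations of Eq. (9).

```python
import numpy as np

def fit_gaussian_relation(samples):
    """ML estimate of the Gaussian modelling one part relation.

    `samples` is an (N, D) array of vectors T_ji(l_i) - T_ij(l_j)
    collected from a labelled training set. Returns the mean and the
    ML (i.e. biased, 1/N) covariance estimate.
    """
    x = np.asarray(samples, dtype=float)
    mean = x.mean(axis=0)
    centred = x - mean
    cov = centred.T @ centred / len(x)
    return mean, cov
```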

Likelihood of the image evidence. To estimate the likelihood p(E | L), the method relies on a different appearance model for each body part. Each appearance model results in a part evidence map e_i that reports the evidence for the i-th part at each possible position, scale, and rotation.

Assuming that the different part evidence maps are conditionally independent, and that each e_i depends only on the part configuration l_i, the likelihood can be written as:

p(E \mid L) = \prod_{i} p(e_i \mid l_i) \qquad (11)

Substituting Eq. (8) and Eq. (11) in Eq. (7), one finally obtains:

p(L \mid E) \propto p(l_0) \prod_{i} p(e_i \mid l_i) \prod_{(i, j) \in K} p(l_i \mid l_j) \qquad (12)

The part detectors use a variant of the shape context descriptor [75], which consists of a log-polar histogram of locally normalised gradient orientations. The feature vector is obtained by concatenating all the shape context descriptors whose centres fall inside the bounding box of the part. During detection, different positions, scales, and orientations are scanned with sliding windows. The classifier used for detection is an ensemble of a fixed number of decision stumps combined through AdaBoost [42].

3.2 Features

Each body part (or the whole image of the individual, if no body-part subdivision model is used) is typically described using one or more global or local features. In this Section, the main kinds of features used in the literature are reviewed.

3.2.1 Global features

Global features are characteristics measured in the whole image or body region considered, and are usually represented as a fixed-size vector of real numbers.

Probably the most widely used feature of this kind is the global colour histogram. Given a colour image, the colours of the image are first quantised into a fixed number of bins. The histogram is then constructed by counting the number of occurrences per bin. Typically, this count is normalised to the fraction of image pixels belonging to the bin. Colour image pixels are typically represented as a triplet of values, representing the amount of colour in different colour channels (e.g., Red, Green and Blue). In this case, each colour channel is quantised separately. The resulting histogram can be multi-dimensional (one dimension for each channel) or mono-dimensional (the final histogram is constructed as the concatenation of the histograms of each colour channel). The latter saves a lot of space (e.g., if 16 bins are used for each colour channel, the multi-dimensional histogram would have 16³ = 4096 bins, while the mono-dimensional one would have a size of just 48 bins) and usually has a discriminant capability similar to the former. Various colour spaces exist in the literature. Among them it is worth citing:

  • The RGB colour space, where each colour is represented as the corresponding amount of Red, Green and Blue; it directly relates to the way devices acquire and visualise colours.

  • Perceptual colour spaces, i.e., spaces inspired by the way the human brain perceives colour; e.g., the Hue-Saturation-Value (HSV) colour space, in which the light intensity (V channel) is separated from the colour tonality (H channel) and the saturation of the colour (S channel).
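The mono-dimensional (concatenated per-channel) histogram described above can be sketched as follows; the choice of 16 bins and the per-channel normalisation to pixel fractions are illustrative.

```python
import numpy as np

def concat_colour_histogram(image, bins=16):
    """Concatenate one histogram per colour channel (3 * bins entries),
    each normalised to the fraction of image pixels per bin.
    `image` is an (H, W, 3) uint8 array."""
    n_pixels = image.shape[0] * image.shape[1]
    hists = []
    for c in range(3):
        h, _ = np.histogram(image[..., c], bins=bins, range=(0, 256))
        hists.append(h.astype(float) / n_pixels)
    return np.concatenate(hists)
```

To histogram HSV instead of RGB, the image would simply be converted beforehand; the function itself is colour-space agnostic.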

Good surveys on colour spaces are provided in [102, 105]. Many appearance descriptors use global colour histograms, to represent the whole body appearance [13, 57, 67] or the overall appearance of each body part [6, 14, 15, 16, 36, 44, 47, 61, 84, 106, 113]. Du et al. [33] evaluated the use of colour histograms computed in various colour spaces for building appearance descriptors for re-identification. To tackle the lower amount of information usually carried by peripheral pixels (which could actually belong to the background, as the person segmentation is usually very noisy), in [25, 36, 106] these pixels receive less weight than those near the vertical symmetry axis of the silhouette.

The colour space is typically quantised in a uniform fashion. However, many colour ranges can be irrelevant for representing a certain appearance, e.g. colour ranges that are not present in the image, or whose coverage of the image is negligible. For this reason, some approaches first try to find the most representative colour ranges, and then describe the appearance with respect to these. One of the methods of [8] and the methods of [15, 16, 60] use the Dominant Colour Descriptor (DCD) (also called Representative Meta Colours Model, RMCM) of MPEG-7, which provides a compact description of the most representative colours. Given an image, the DCD algorithm first finds the dominant colours [30], via k-means clustering of all the colour triplets in the image. Then, the descriptor is defined as

F = \left\{ (c_i, p_i),\; i = 1, \ldots, N \right\} \qquad (13)

where c_i is the i-th dominant colour (i.e., the centroid of the i-th cluster), and p_i is the percentage of image pixels that fall into the i-th cluster. A similar approach, called Global Colour Context, is also used in [23]. The method of [27] partly differs from the former ones, although it shares with them the idea of describing appearance in terms of the most important colours. Instead of finding representative colours by clustering, they are chosen a priori; specifically, eleven colours, usually referred to as culture colours [26], are used: black, white, red, yellow, green, blue, brown, purple, pink, orange, and grey. Each pixel of the image is assigned to the most similar culture colour.
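The DCD-style descriptor of Eq. (13) can be sketched with a plain Lloyd's k-means over the colour triplets. The deterministic initialisation from distinct colours and the fixed iteration count are simplifications of this illustration, not part of the MPEG-7 specification.

```python
import numpy as np

def dominant_colours(image, n_colours=2, iters=10):
    """Return [(c_i, p_i), ...]: colour-cluster centroids and the
    fraction of pixels per cluster (cf. Eq. 13). Assumes the image
    contains at least `n_colours` distinct colours."""
    pixels = image.reshape(-1, 3).astype(float)
    # deterministic init: spread seeds over the distinct colours
    uniq = np.unique(pixels, axis=0)
    centroids = uniq[np.linspace(0, len(uniq) - 1, n_colours).astype(int)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for k in range(n_colours):
            if np.any(labels == k):
                centroids[k] = pixels[labels == k].mean(axis=0)
    fractions = np.bincount(labels, minlength=n_colours) / len(pixels)
    return list(zip(centroids, fractions))
```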

Colour histograms are invariant to scale and show good robustness to partial occlusions, if the occlusion itself is small. However, they are sensitive to changes in brightness and in the colour response of the sensor. Illumination conditions in outdoor environments may vary considerably over time, due to changing weather conditions and the varying position of the Sun during the day. On the other hand, the lighting conditions of indoor scenes may vary from camera to camera due to different types of lamps (e.g., incandescent, tungsten, neon) and, in the presence of windows that let sunlight enter, due to weather conditions as well. The colour response of the sensors may also vary due to environmental conditions and to the automatic colour balancing that often takes place in-camera.

Different mechanisms have been exploited to address, at least partially, the above problems. Probably the simplest one is colour normalisation [105]. The chromaticity RGB space is one of these techniques, used in [19, 33, 104]; it consists of dividing each colour channel of each pixel by the sum of all the channels of that pixel, e.g. r = R / (R + G + B). Another common technique is Grey-world normalisation [21], which relies on the assumption that the average colour of a scene is usually a tonality of grey. It consists of dividing each RGB channel of every pixel by the average value of that channel in the image, e.g. R' = R / avg(R). Grey-world normalisation is used in [103, 104]. Similar to Grey-world is the affine normalisation used in [19, 103, 104], where the pixel values of each colour channel are normalised independently by subtracting the average and scaling by the standard deviation, e.g. R' = (R − avg(R)) / std(R).
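The first two normalisations can be sketched as follows; the zero-division guards are an addition of this illustration.

```python
import numpy as np

def chromaticity_rgb(image):
    """Chromaticity normalisation: divide each channel of each pixel by
    the sum of that pixel's channels, e.g. r = R / (R + G + B)."""
    img = image.astype(float)
    s = img.sum(axis=2, keepdims=True)
    return np.divide(img, s, out=np.zeros_like(img), where=s > 0)

def grey_world(image):
    """Grey-world normalisation: divide each channel by its image-wide
    average, e.g. R' = R / avg(R)."""
    img = image.astype(float)
    means = img.reshape(-1, 3).mean(axis=0)
    return img / np.where(means > 0, means, 1.0)
```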

An alternative to colour normalisation is histogram equalisation [40], which is used in the re-identification methods of [9, 103, 104]. It is based on the assumption that a change in illumination preserves the rank ordering of sensor responses (i.e. pixel values). The rank measure for the i-th bin of the histogram of the k-th colour channel is defined as the normalised cumulative count M_k(i) = Σ_{j≤i} H_k(j) / Σ_{j=1}^{B} H_k(j), where B is the number of bins and H_k is the histogram relative to the k-th channel.

Finally, Piccardi and Cheng [83] exploited a colour quantisation scheme to mitigate the effect of illumination changes between cameras. They represent the image with a Major Colour Spectrum Histogram (MCSH), that is, a histogram of the most represented colour values in the image.

Another problem of histograms is that they do not retain any information on the spatial disposition of colours. A simple way to incorporate spatial information is to add the relative pixel height (i.e. the ratio between the vertical coordinate of the pixel and the total height of the silhouette) as another channel of the image. (The horizontal coordinate of the pixel is typically not used, as it is not robust to body rotations and viewpoint changes.) A colour-position histogram can then be built, which is able to spatially localise the colour distribution [19, 103, 104]. A similar approach is also used in [61], where two dimensions are added to each pixel (i.e. the radial and angular distance to the torso centre) and quantised. The Colour Structure Descriptor (CSD) of MPEG-7 [71] is used in [50]; it encodes the distribution of colour by the following steps: (i) move a small structuring window over the picture; (ii) determine which colours are present within the window; (iii) increase the corresponding bins of a colour histogram by one, independently of the number of pixels of those colours.
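A colour-position histogram of this kind can be sketched as a joint histogram over channel value and relative pixel height; the bin counts below are illustrative.

```python
import numpy as np

def colour_position_histogram(image, colour_bins=8, height_bins=4):
    """Joint histogram over (channel value, relative pixel height),
    concatenated over the three colour channels and normalised to sum
    to one. `image` is an (H, W, 3) uint8 array."""
    H, W, _ = image.shape
    rel_h = np.repeat(np.arange(H) / H, W)   # relative height of each pixel
    hists = []
    for c in range(3):
        h2d, _, _ = np.histogram2d(image[..., c].ravel().astype(float), rel_h,
                                   bins=(colour_bins, height_bins),
                                   range=((0, 256), (0, 1)))
        hists.append(h2d.ravel())
    h = np.concatenate(hists)
    return h / h.sum()
```

Two pixels of the same colour at the top and at the bottom of the silhouette now fall into different bins, which a plain colour histogram cannot distinguish.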

Instead of looking at colour properties, other kinds of global features try to characterise gradients, textures and repeated patterns of the whole body appearance or of each body part. Gabor filters [76] and Schmid filters [95] are orientation-sensitive filters that capture texture and edge information in the image. The former are aimed at detecting horizontal and vertical lines, while the latter detect circular gradient changes. They are used in various appearance descriptors [47, 67, 69, 84, 113] in conjunction with other colour-related features.

Hahnel et al. [50] compared various texture features. The first is the 2D Quadrature Mirror Filter (QMF), a well-known filter in signal processing that splits a 2D input signal into two bands (high- and low-pass) in each direction (horizontal, vertical and diagonal). The second is the Oriented Gaussian Derivatives (OGD) filter, based on steerable Gaussian filters. In addition, two MPEG-7 texture-related descriptors are used: the Homogeneous Texture Descriptor (HTD), which uses Gabor filters, and the Edge Histogram Descriptor (EHD), basically a histogram of the directions of the edge pixels in the image [99].

It is worth pointing out that texture-based features have always been used in combination with colour-based ones. Information on repeated patterns alone is in fact unlikely to be distinctive enough. Hahnel et al. [50] confirmed this intuition, and also showed that combining colour- and texture-based descriptors may lead only to minor performance improvements.

3.2.2 Local features

The term local feature refers to an appearance characteristic of a small portion of the image (e.g., the neighbourhood of a pixel). The regions where local features are extracted can be chosen in various ways (e.g. by dense sampling, by an interest operator, or at random). Each small region is described by a feature vector (e.g., a histogram). This leads to a representation of the image as a bag (set) of local features.

Interest points are one important category of local features. The most famous among them is SIFT (Scale Invariant Feature Transform) [68], where salient points of the image are first chosen via an interest operator that looks for "stable" locations in the image (i.e. locations that are identifiable over different scales and rotations). This operation is carried out by detecting extrema in the scale space L(x, y, σ), which is defined by the function

(14)

where is the convolution operation in the image coordinates and , and is a 2-D Gaussian with standard deviation . Stable key-points can be detected in this space e.g. by using difference-of-Gaussians functions convolved with the image:

(15)

To detect the local minima and maxima of , each point is compared with its 8 neighbours at the same scale , and its 9 neighbours in the two scales and (). If this value is the minimum or maximum of all these points, then this point is an extrema, and it is labelled as key-point. A subsequent stage filters out low-contrast and noisy points. The remaining key-points are described as a histogram of the edge orientations of a small window centred on the key-point. SIFT points or its variants, (e.g., Speeded-Up Robust Features, SURF [12]) are used in various appearance descriptors to represent the whole body appearance. Interest point are typically chosen via interest operators [29, 51, 52, 60, 69, 72, 73] but some works exist (e.g.,[111]) that adopt dense sampling instead.
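This difference-of-Gaussians extremum detection can be sketched as follows; it is a simplified illustration of Eqs. (14)-(15), without SIFT's sub-pixel refinement, contrast filtering or orientation assignment, and the image, scales and helper names are ours:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur: L(x, y, sigma) = G(sigma) * I."""
    radius = int(3 * sigma) + 1
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    padded = np.pad(img, radius, mode='reflect')
    tmp = np.apply_along_axis(lambda r: np.convolve(r, g, mode='same'), 1, padded)
    out = np.apply_along_axis(lambda c: np.convolve(c, g, mode='same'), 0, tmp)
    return out[radius:-radius, radius:-radius]

def dog_keypoints(img, sigmas):
    """A pixel is a key-point candidate when its DoG value is strictly larger
    (or smaller) than all 26 neighbours: 8 at the same scale and 9 in each
    of the two adjacent scales."""
    L = [gaussian_blur(img, s) for s in sigmas]
    D = [L[i + 1] - L[i] for i in range(len(L) - 1)]
    keypoints = []
    for s in range(1, len(D) - 1):
        for y in range(1, img.shape[0] - 1):
            for x in range(1, img.shape[1] - 1):
                cube = np.stack([d[y - 1:y + 2, x - 1:x + 2]
                                 for d in (D[s - 1], D[s], D[s + 1])])
                others = np.delete(cube.ravel(), 13)  # drop the centre itself
                v = D[s][y, x]
                if v > others.max() or v < others.min():
                    keypoints.append((x, y, s))
    return keypoints

# a single bright blob should yield a scale-space extremum near its centre
img = np.zeros((21, 21))
img[8:13, 8:13] = 1.0
kps = dog_keypoints(img, sigmas=[1.0, 1.6, 2.56, 4.1])
```

The strict comparison against the 26 neighbours avoids flagging flat regions, mimicking (crudely) the low-contrast filtering stage.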

Other approaches use different kinds of local features.

Maximally Stable Colour Regions (MSCR) [41] are used in [25, 36, 69]. The MSCR algorithm first detects a set of regions in the image (Fig. 6-a) by a constrained agglomerative clustering of image pixels based on their chromatic distance, keeping the maximally stable regions. The detected regions, which are stable to scale changes and affine transforms, are then described by their area, centroid, second moment matrix and average colour, forming 9-dimensional feature vectors.
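The 9-dimensional description of a detected region can be sketched as follows; the detection step itself is omitted, and the pixel lists below are illustrative:

```python
import numpy as np

def describe_region(pixels, colours):
    """MSCR-style 9-D region descriptor: area (1), centroid (2), second
    moment matrix (3 distinct entries, since it is symmetric) and average
    RGB colour (3). Region detection is assumed done upstream."""
    pts = np.asarray(pixels, dtype=float)    # (N, 2) pixel coordinates
    cols = np.asarray(colours, dtype=float)  # (N, 3) RGB values
    area = len(pts)
    centroid = pts.mean(axis=0)
    centred = pts - centroid
    m = centred.T @ centred / area           # 2x2 second moment matrix
    avg_colour = cols.mean(axis=0)
    return np.concatenate([[area], centroid,
                           [m[0, 0], m[0, 1], m[1, 1]], avg_colour])

# four pixels forming a 2x2 red square
pix = [(0, 0), (0, 1), (1, 0), (1, 1)]
col = [(200, 10, 10)] * 4
desc = describe_region(pix, col)
```

Because the descriptor is fixed-size, regions from two images can be matched directly, e.g. with a (possibly weighted) Euclidean distance.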

Recurrent Highly-Structured Patches (RHSP), used in the method of [36], try instead to capture repeated patterns and textures of the clothing appearance. RHSPs are created as follows. First, random, possibly overlapping small patches are extracted from the image. Patches that do not carry texture information (e.g., patches showing uniform colours) are discarded by thresholding the patch entropy, computed as the sum of the entropies of the colour channels. The remaining patches are then further filtered, keeping only those that exhibit invariance to rotations. Second, the recurrence of each patch is evaluated via Local Normalised Cross-Correlation over a small local region containing that patch. Third, the patches showing a high degree of recurrence are clustered, keeping for each final cluster only the patch nearest to the centroid. These patches are finally described by their Local Binary Pattern (LBP) histogram [81], a simple yet effective way to describe textured content, based on a per-pixel transform that encodes small-scale appearance structures.
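The basic LBP transform mentioned above can be sketched as follows; this is the plain 8-neighbour, 256-bin variant, while [81] also defines rotation-invariant and multi-resolution versions:

```python
def lbp_histogram(img):
    """8-neighbour Local Binary Pattern histogram: each interior pixel is
    encoded as an 8-bit number whose bits record whether each neighbour is
    >= the centre; the patch is then described by the 256-bin histogram
    of these codes."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    hist = [0] * 256
    for i in range(1, len(img) - 1):
        for j in range(1, len(img[0]) - 1):
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                if img[i + di][j + dj] >= img[i][j]:
                    code |= 1 << bit
            hist[code] += 1
    return hist

# a uniform patch maps every interior pixel to code 255
# (all neighbours >= centre), so the histogram concentrates in one bin
flat = [[7] * 6 for _ in range(6)]
h = lbp_histogram(flat)
```

The per-pixel codes depend only on the sign of local grey-level differences, which is what makes the histogram robust to monotonic illumination changes.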

Figure 6: (a) Maximally Stable Colour Regions [41] detected in two images showing the same pedestrian. (b) Steps of the extraction of RHSP: random extraction, rotational invariance check, recurrence check, entropy thresholding, clustering. The final result of this process is a set of patches (in this case only one) characterising repeated patterns of each body part of the individual. Figures taken from [36].

Instead of using interest operators like the one defined by Eqs. (14)-(15), or other selection criteria, to choose where to extract local features, in [47] a set of strips of fixed height and position is extracted from the image and described by a concatenation of colour histograms in different colour spaces and of Gabor and Schmid filter responses. Similarly, in [55] partly overlapping rectangular patches of fixed size are sampled from the image following a pre-defined regular grid. Each patch is represented by its colour histogram in the HSV colour space, and by its LBP histogram to capture textures and repeated patterns. An analogous approach is also used in [111], except that the patches do not overlap. Finally, instead of regular sampling, one could sample patches at random, an approach followed for instance in [93].
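Regular-grid sampling of this kind can be sketched as follows; the patch size, step and coarse hue histogram are illustrative choices, not the exact parameters of the cited works:

```python
import colorsys

def grid_patches(width, height, patch, step):
    """Top-left corners of a regular grid of (possibly overlapping) patches;
    step < patch gives the partly overlapping sampling of [55]."""
    return [(x, y)
            for y in range(0, height - patch + 1, step)
            for x in range(0, width - patch + 1, step)]

def hue_histogram(rgb_pixels, bins=8):
    """Coarse hue histogram of a patch (one slice of an HSV histogram)."""
    hist = [0] * bins
    for r, g, b in rgb_pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        hist[min(int(h * bins), bins - 1)] += 1
    return hist

# a 64x128 pedestrian image, 16x16 patches with 50% overlap
corners = grid_patches(width=64, height=128, patch=16, step=8)
red_hist = hue_histogram([(255, 0, 0)] * 5)
```

Each patch would then be described by concatenating its colour histogram with its LBP histogram, and the image represented as the resulting bag of patch descriptors.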

To reduce the dimensionality of descriptors based on local features, in [89, 91] a dissimilarity approach [82] has been introduced: a bag of local features is turned into a dissimilarity vector that encodes the degree of similarity to a set of predefined prototype local features. Prototypes are found by clustering local features extracted from a design set of images of people. If a part-based body model is used, memberships to body parts are kept, and each body part is represented via a dedicated dissimilarity vector. The same dissimilarity-based descriptor was then used in [90, 92, 94], also for tasks different from person re-identification.
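The dissimilarity idea can be sketched as follows; representing the bag-to-prototype dissimilarity as the distance to the closest feature in the bag is one simple choice, not necessarily the exact measure used in [89, 91]:

```python
import math

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def dissimilarity_vector(bag, prototypes):
    """Turn a variable-size bag of local features into a fixed-size vector:
    one component per prototype, here the distance from the prototype to
    the closest feature in the bag."""
    return [min(euclid(f, p) for f in bag) for p in prototypes]

# prototypes would come from clustering features of a design set;
# here they are toy 2-D points
prototypes = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
bag_a = [(0.1, 0.0), (0.9, 1.1)]   # features near the first two prototypes
bag_b = [(5.2, 4.9)]               # a single feature near the third
va = dissimilarity_vector(bag_a, prototypes)
vb = dissimilarity_vector(bag_b, prototypes)
```

Both vectors have the same length regardless of bag size, so two descriptors can be compared directly, e.g. with a Euclidean distance, instead of matching bags of features.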

3.3 Combination of features and matching

Many person re-identification methods use appearance descriptors made up of only one kind of feature among the ones mentioned above, typically based on colour or interest points [6, 8, 15, 16, 19, 23, 27, 44, 51, 52, 61, 72, 73, 103, 104]. However, since combining different sources of information usually helps in attaining better performance, especially when the sources are complementary (i.e., they look at different aspects of the appearance, e.g., colour and texture), many authors have defined descriptors that use a combination of features.

In principle, two main combination techniques can be exploited to this aim [87] (in verification tasks, whose goal is to establish whether a claimed identity is true, combination can also be performed at decision level, i.e., by combining the crisp outputs of classifiers/detectors; this cannot be applied to person re-identification, which is instead a recognition task):

  1. feature-level fusion: if the features used are made up of a single vector of fixed size (e.g. global features, or local features with an intrinsic ordering) they can be combined simply by concatenating feature vectors;

  2. score-level fusion: a distinct detector/matcher is used for each feature, and their real-valued scores are combined (e.g., by averaging them, or using their maximum value).
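Both fusion schemes are simple to sketch; the vectors, scores and weights below are illustrative:

```python
def feature_level_fusion(colour_vec, texture_vec):
    """Feature-level fusion: fixed-size feature vectors are simply
    concatenated before matching."""
    return colour_vec + texture_vec

def score_level_fusion(scores, weights):
    """Score-level fusion: one matcher per feature produces a similarity
    score; scores are combined by a weighted average, with weights fixed
    a priori by the system designer."""
    total = sum(weights)
    return sum(s * w for s, w in zip(scores, weights)) / total

fused = feature_level_fusion([0.2, 0.8], [0.5, 0.1, 0.4])
score = score_level_fusion(scores=[0.9, 0.4], weights=[2.0, 1.0])
```

Replacing the weighted average with `max(scores)` gives the maximum rule; learning the weights from data leads to the AdaBoost- and metric-learning-based approaches discussed next.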

The first approach is followed for instance in [33, 55, 106]. The second approach requires defining a proper fusion rule. Many methods use a weighted average of the partial scores attained with each single feature, where the weights are fixed a priori by the system designer [13, 14, 25, 36]. Another approach is to learn a proper metric or a set of weights from a training set. In [47], AdaBoost [42] is used to this aim: each feature set is associated to a weak two-class classifier (a decision stump) which discerns between class 0 (identities differ) and class 1 (identity is the same) based only on that feature set. The method of [84] tries to find a linear function to weight the absolute difference of samples, by training an ensemble of RankSVM rankers [59] given pairwise relevance constraints. The Probabilistic Relative Distance Comparison (PRDC) technique of [113] maximises the probability that a pair of true matches has a smaller distance than that of a wrong match. The output is an orthogonal matrix which essentially encodes the global importance of each feature. In [69] a pairwise metric is learned through a recently proposed method, Pairwise Constrained Component Analysis (PCCA) [74], which learns a projection into a low-dimensional space where the distances between pairs of data points respect the desired constraints.

Metric learning and similar approaches typically help in boosting re-identification performance. However, it is worth noting that all the above methods require a training set of labelled data. Such a set can be, for instance, the gallery of templates. This requires the template gallery to be fixed, i.e., templates cannot be added during system operation; such a constraint might be too strong for real-world application scenarios.

4 Other cues

Some cues alternative to clothing appearance have been exploited in the literature to perform person re-identification or similar tasks. Despite their intrinsic limitations, such cues could potentially help in certain conditions, possibly in combination with appearance cues.

Figure 7: (a) Two sequences of aligned foreground silhouettes. (b) Their corresponding Gait Energy Image. Figures taken from [53].

Human gait, i.e., the recurrent motion pattern of a walking person, is among these cues. In cognitive science, it is known to be one of the cues that humans exploit to recognise people [100]. Among the approaches to characterise gait, the recently proposed Gait Energy Image (GEI) [53] has attracted the attention of many researchers. Here, the gait signature is formed by normalising, aligning and averaging a sequence of foreground silhouettes corresponding to one “walking period” (see Fig. 7). Principal Component Analysis (PCA) is then used to reduce the dimensionality of the signature.
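The averaging step behind the GEI can be sketched as follows, on toy binary silhouettes; alignment, size normalisation and walking-period segmentation are assumed done upstream:

```python
def gait_energy_image(silhouettes):
    """Gait Energy Image: the pixel-wise average of a sequence of aligned,
    size-normalised binary silhouettes over one walking period."""
    n = len(silhouettes)
    h, w = len(silhouettes[0]), len(silhouettes[0][0])
    return [[sum(s[i][j] for s in silhouettes) / n for j in range(w)]
            for i in range(h)]

# two 2x3 toy silhouettes: a pixel that is always foreground averages to
# 1.0, a sometimes-foreground one to an intermediate "energy" value
s1 = [[1, 1, 0], [0, 1, 0]]
s2 = [[1, 0, 0], [0, 1, 1]]
gei = gait_energy_image([s1, s2])
```

In practice the averaged image is then flattened and projected with PCA to obtain a compact gait signature.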

The use of the Gait Energy Image can lead to high recognition rates [107], and can overcome one of the main limitations of approaches based on clothing appearance, namely the impossibility of distinguishing people whose clothing changes between observations. It is also not directly affected by illumination changes. However, it requires perfect alignment of the silhouettes to be compared, and is sensitive to segmentation errors. These two constraints severely limit the use of GEI-based methods in practical, real-world applications. Researchers have therefore attempted to explore other approaches. Zhao et al. [110] and, more recently, Gu et al. [48] used a 3D skeletal representation, which however requires multiple overlapping camera views or a constrained environment to construct and track.

Some authors attempted instead to perform remote face recognition [78], that is, face recognition with low-resolution images. As low-resolution face images are not directly usable for recognition, many approaches addressed the problem in the most obvious way, by trying to increase image resolution using super-resolution techniques [49, 54, 58, 96]. Other authors proposed instead techniques that work directly on low-resolution images, by exploiting metric learning [65, 66], multidimensional scaling [18], or multiple frames from video sequences [5]. All the approaches above could in principle be used in conjunction with appearance cues to increase re-identification accuracy when the face is visible.

Another useful set of soft cues is anthropometry, that is, the characterisation of individuals through the measurement of physical body features [86], e.g., height, arm length, and eye-to-eye distance. Measurements are typically taken with respect to a number of body landmark points (e.g., elbows, hands, knees, feet), which have to be localised either automatically or manually. In the classic study by Daniels and Churchill [28], the uniqueness of 10 different anthropometric traits was evaluated on a large database of 4063 individuals. No individual was found to be “average” (i.e., approximately close to the mean value) in all 10 dimensions at once. Furthermore, only 7% of the individuals were “average” in 2 dimensions, and 3% in 3 dimensions.

Although the use of anthropometric measurements for person recognition has been proposed in many works, their extraction was often based on costly devices, like 3D laser scanners, and/or required user collaboration in a constrained environment [45, 77, 80]. In some works, anthropometric measurements are instead extracted from a single RGB camera view. In [11], a method that does not require camera calibration was proposed for simultaneously estimating anthropometric measurements and pose. However, the former are measured up to a scale factor, and consequently cannot be used to directly compare individuals in images acquired by different cameras. Calibration is not required in [1] either, although 13 body landmarks have to be manually selected from an image of an individual in frontal pose. Other methods focus on height measurement only [17, 43, 63, 64, 70], but require camera calibration to estimate absolute height values. Interestingly, in [70] height is used as a cue for associating tracks of individuals coming from disjoint camera views, which is essentially the same re-acquisition task enabled by person re-identification.

Figure 8: (a) The 20 skeletal points tracked by the Kinect SDK in the classical representation of the Vitruvian Man. (b–d) Examples of the pose estimation capabilities of the Kinect SDK. Depending on the degree of confidence of the estimation of the points position, the Kinect SDK distinguishes between good (in green) or inferred (in yellow) points, the latter being less reliable than the former.

None of the above works fits the typical setting of person re-identification tasks, which is characterised by multiple, uncalibrated cameras and an unconstrained environment, with free poses and non-collaborative users. Recently, it has been shown that body pose can be reliably estimated in real-time by exploiting RGB-D sensors [97, 101], like the MS Kinect, a device recently introduced in the video-gaming market. The pose estimation functionality of the Kinect SDK [62], which is based on a similar method, provides the absolute position (in metres) of 20 different body joints in real-time, with high reliability (see Fig. 8). Detecting joint positions enables the evaluation of several anthropometric measures. In [10] such joints were used to extract a set of anthropometric measures from front or back poses: distance between floor and head, ratio between torso and legs, height, distance between floor and neck, distance between neck and left shoulder, distance between neck and right shoulder, and distance between torso centre and right shoulder. Three other geodesic distance measures were estimated from the 3D mesh of the abdomen, obtained from the Kinect depth map: from the torso centre (located in the abdomen) to the left shoulder, to the left hip, and to the right hip. The results reported in [10] appear promising. However, many of the considered anthropometric measures are hard or impossible to extract from unconstrained poses. For instance, extracting measures from the 3D mesh requires a near-frontal pose (the abdomen is hidden in back poses); the distances from the neck to the left and right shoulders become hard to compute from a lateral pose, even using a depth map, and require distinguishing between left and right body parts. Such issues limit the actual set of anthropometric measures that can be used in realistic scenarios.
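Given absolute joint positions such as those provided by a skeletal tracker, anthropometric measures of the kind used in [10] reduce to simple Euclidean distances; the joint names and coordinates below are illustrative, not the Kinect SDK's actual output format:

```python
import math

def dist3(a, b):
    """Euclidean distance between two 3-D points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# hypothetical joint positions in metres (x, y, z), with y pointing up
joints = {
    "head":           (0.00, 1.70, 2.50),
    "neck":           (0.00, 1.50, 2.50),
    "shoulder_left":  (-0.20, 1.45, 2.50),
    "shoulder_right": (0.20, 1.45, 2.50),
    "torso_centre":   (0.00, 1.10, 2.50),
}

def measure(joints):
    """A few anthropometric measures of the kind used in [10], derived
    from absolute joint positions."""
    return {
        "floor_to_head": joints["head"][1],  # height of the head joint
        "neck_to_left_shoulder": dist3(joints["neck"],
                                       joints["shoulder_left"]),
        "torso_to_right_shoulder": dist3(joints["torso_centre"],
                                         joints["shoulder_right"]),
    }

m = measure(joints)
```

Because the tracker reports positions in metres rather than pixels, such measures are directly comparable across cameras, unlike the scale-ambiguous estimates of [11].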

5 Conclusions

This paper provided a survey of current approaches and methods for constructing appearance descriptors for person re-identification. State-of-the-art descriptors have been reviewed from two different viewpoints, namely the kind of body model and the kind of features used to represent a person. We tried to provide a comprehensive analysis and description of the algorithms in a structured and consolidated way. We hope that this work will be a useful reference for anyone in the research community willing to work on this interesting and challenging topic.

References

  • [1] C. Ben Abdelkader and Y. Yacoob. Statistical estimation of human anthropometry from a single uncalibrated image. Computational Forensics, 2008.
  • [2] Yali Amit and Augustine Kong. Graphical templates for model registration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(3):225–236, mar 1996.
  • [3] M. Andriluka, S. Roth, and B. Schiele. People-tracking-by-detection and people-detection-by-tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1–8, June 2008.
  • [4] M. Andriluka, S. Roth, and B. Schiele. Pictorial structures revisited: People detection and articulated pose estimation. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1014–1021, 2009.
  • [5] Ognjen Arandjelovic and Roberto Cipolla. Face recognition from video using the generic shape-illumination manifold. In Proceedings of the 9th European Conference on Computer Vision (ECCV), pages 27–40, 2006.
  • [6] Tamar Avraham, Ilya Gurvich, Michael Lindenbaum, and Shaul Markovitch. Learning implicit transfer for person re-identification. In Proceedings of the European Conference of Computer Vision (ECCV) Workshops, 1st Workshop on Re-Identification (REID), pages 381–390, 2012.
  • [7] Walid Ayedi, Hichem Snoussi, and Mohamed Abid. A fast multi-scale covariance descriptor for object re-identification. Pattern Recognition Letters, 33(14):1902–1907, 2012. Special Issue on Novel Pattern Recognition-Based Methods for Re-identification in Biometric Context.
  • [8] Slawomir Bak, Etienne Corvee, Francois Bremond, and Monique Thonnat. Person re-identification using haar-based and dcd-based signature. In Proceedings of the 7th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), pages 1–8, 2010.
  • [9] Slawomir Bak, Etienne Corvee, Francois Bremond, and Monique Thonnat. Person re-identification using spatial covariance regions of human body parts. In Proceedings of the 7th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), pages 435–440, 2010.
  • [10] B. I. Barbosa, M. Cristani, A. Del Bue, L. Bazzani, and V. Murino. Re-identification with rgb-d sensors. In Proceedings of the European Conference of Computer Vision (ECCV) Workshops, 1st Workshop on Re-Identification (REID), 2012.
  • [11] Carlos Barrón and Ioannis A. Kakadiaris. Estimating anthropometry and pose from a single uncalibrated image. Computer Vision and Image Understanding, 81(3):269–284, March 2001.
  • [12] Herbert Bay, Andreas Ess, Tinne Tuytelaars, and Luc Van Gool. Speeded-up robust features (surf). Computer Vision and Image Understanding, 110(3):346–359, June 2008.
  • [13] Loris Bazzani, Marco Cristani, Alessandro Perina, Michela Farenzena, and Vittorio Murino. Multiple-shot person re-identification by hpe signature. In Proceedings of the 20th International Conference on Pattern Recognition (ICPR), pages 1413–1416, Washington, DC, USA, 2010. IEEE Computer Society.
  • [14] Loris Bazzani, Marco Cristani, Alessandro Perina, and Vittorio Murino. Multiple-shot person re-identification by chromatic and epitomic analyses. Pattern Recognition Letters, 33(7):898–903, 2012. Special Issue on Awards from ICPR 2010.
  • [15] A. Bedagkar-Gala and Shishir K. Shah. Part-based spatio-temporal model for multi-person re-identification. Pattern Recognition Letters, 33(14):1908–1915, October 2012.
  • [16] Apurva Bedagkar-Gala and Shishir K. Shah. Multiple person re-identification using part based spatio-temporal color appearance model. In Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), pages 1721–1728, nov. 2011.
  • [17] C. BenAbdelkader and Y. Yacoob. Statistical body height estimation from a single image. In Proceedings of the 8th IEEE International Conference on Automatic Face Gesture Recognition (FG), pages 1–7, 2008.
  • [18] Soma Biswas, Kevin W. Bowyer, and Patrick J. Flynn. Multidimensional scaling for matching low-resolution face images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(10):2019–2030, 2012.
  • [19] Henri Bouma, Sander Borsboom, Richard J. M. den Hollander, Sander H. Landsmeer, and Marcel Worring. Re-identification of persons in multi-camera surveillance under varying viewpoints and illumination. Proceedings SPIE 8359, Sensors, and Command, Control, Communications, and Intelligence (C3I) Technologies for Homeland Security and Homeland Defense XI, pages 83590Q–83590Q–10, 2012.
  • [20] Thierry Bouwmans, Fida El Baf, and Bertrand Vachon. Background Modeling using Mixture of Gaussians for Foreground Detection - A Survey. Recent Patents on Computer Science, 1(3):219–237, November 2008.
  • [21] G. Buchsbaum. A spatial processor model for object colour perception. Journal of the Franklin Institute, 310(1):1–26, 1980.
  • [22] Christopher J. C. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovovery, 2(2):121–167, June 1998.
  • [23] Yinghao Cai and Matti Pietikäinen. Person re-identification based on global color context. In Proceedings of the Tenth International Workshop on Visual Surveillance (VS), ACCV’10, pages 205–215, Berlin, Heidelberg, 2011. Springer-Verlag.
  • [24] John Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(6):679–698, nov. 1986.
  • [25] Dong S. Cheng, Marco Cristani, Michele Stoppa, Loris Bazzani, and Vittorio Murino. Custom pictorial structures for re-identification. In Proceedings of the British Machine Vision Conference (BMVC), pages 68.1–68.11, 2011.
  • [26] Angela D’angelo and Jean-Luc Dugelay. A statistical approach to culture colors distribution in video sensors. In 5th International Workshop on Video Processing and Quality Metrics for Consumer Electronics (VPQM), Scottsdale, AZ, United States, 01 2010.
  • [27] Angela D’angelo and Jean-Luc Dugelay. People re-identification in camera networks based on probabilistic color histograms. In SPIE 2011, Electronic Imaging Conference on 3D Image Processing (3DIP) and Applications, Vol. 7882, 23-27 January, 2011, San Francisco, CA, USA, San Francisco, United States, 01 2011.
  • [28] G. S. Daniels and E. Churchill. The average man? Technical Note WCRD TN 53-7: Wright-Patterson Air Force Base, OH: Wright Air Force Development Center, 1952.
  • [29] I.O. de Oliveira and J.L. de Souza Pio. Object reidentification in multiple cameras system. In Proceedings of the 4th International Conference on Embedded and Multimedia Computing (EM-Com), pages 1–8, 2009.
  • [30] Yining Deng, B. S. Manjunath, Charles Kenney, Michael S. Moore, and Hyundoo Shin. An efficient color representation for image retrieval. IEEE Transactions on Image Processing, 10:140–147, 2001.
  • [31] P. Dollar, C. Wojek, B. Schiele, and P. Perona. Pedestrian detection: An evaluation of the state of the art. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(4):743–761, april 2012.
  • [32] Gianfranco Doretto, Thomas Sebastian, Peter Tu, and Jens Rittscher. Appearance-based person reidentification in camera networks: problem overview and current approaches. Journal of Ambient Intelligence and Humanized Computing, 2:127–151, 2011.
  • [33] Yuning Du, Haizhou Ai, and Shihong Lao. Evaluation of color spaces for person re-identification. In Proceedings of the 21st International Conference on Pattern Recognition (ICPR), Washington, DC, USA, 2012. IEEE Computer Society.
  • [34] Richard O. Duda, Peter E. Hart, and David G. Stork. Pattern Classification. Wiley, New York, 2. edition, 2001.
  • [35] Ahmed M. Elgammal, David Harwood, and Larry S. Davis. Non-parametric model for background subtraction. In Proceedings of the 6th European Conference on Computer Vision-Part II, ECCV ’00, pages 751–767, London, UK, UK, 2000. Springer-Verlag.
  • [36] Michela Farenzena, Loris Bazzani, Alessandro Perina, Vittorio Murino, and Marco Cristani. Person re-identification by symmetry-driven accumulation of local features. In Proceedings of the 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2360–2367, 2010.
  • [37] Pedro Felzenszwalb, David McAllester, and Deva Ramanan. A discriminatively trained, multiscale, deformable part model. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008.
  • [38] Pedro F. Felzenszwalb. Representation and detection of deformable shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(2):208–220, feb. 2005.
  • [39] Pedro F. Felzenszwalb and Daniel P. Huttenlocher. Pictorial structures for object recognition. International Journal of Computer Vision, 61(1):55–79, January 2005.
  • [40] Graham Finlayson, Steven Hordley, Gerald Schaefer, and Gui Yun Tian. Illuminant and device invariant colour using histogram equalisation. Pattern Recognition, 38(2):179–190, 2005.
  • [41] Per-Erik Forssén. Maximally stable colour regions for recognition and matching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Minneapolis, USA, June 2007. IEEE Computer Society.
  • [42] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In Proceedings of the Second European Conference on Computational Learning Theory, EuroCOLT ’95, pages 23–37, London, UK, 1995. Springer-Verlag.
  • [43] Andrew C. Gallagher, Andrew C. Blose, and Tsuhan Chen. Jointly estimating demographics and height with a calibrated camera. In Proceedings of the IEEE 12th International Conference on Computer Vision (ICCV), pages 1187–1194, 2009.
  • [44] Niloofar Gheissari, Thomas B. Sebastian, and Richard Hartley. Person reidentification using spatiotemporal appearance. In Proceedings of the 2006 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, pages 1528–1535, 2006.
  • [45] Afzal Godil, Patrick Grother, and Sandy Ressler. Human identification from body shape. In Proceedings of the 4th International Conference on 3D Digital Imaging and Modeling (3DIM), pages 386–393, 2003.
  • [46] Douglas Gray, S. Brennan, and H. Tao. Evaluating appearance models for recognition, reacquisition, and tracking. In Proceedings of the 10th IEEE International Workshop on Performance Evaluation of Tracking and Surveillance (PETS), pages 41–47, 2007.
  • [47] Douglas Gray and Hai Tao. Viewpoint invariant pedestrian recognition with an ensemble of localized features. In Proceedings of the 10th European Conference on Computer Vision (ECCV), pages 262–275, 2008.
  • [48] Junxia Gu, Xiaoqing Ding, Shengjin Wang, and Youshou Wu. Action and gait recognition from recovered 3-d human joints. Transaction on System, Man and Cybernetics, Part B, 40(4):1021–1033, August 2010.
  • [49] Bahadir K. Gunturk, Aziz Umit Batur, Yucel Altunbasak, Monson H. Hayes III, and Russell M. Mersereau. Eigenface-domain super-resolution for face recognition. IEEE Transactions on Image Processing, 12(5):597–606, 2003.
  • [50] M. Hahnel, D. Klunder, and K.-F. Kraiss. Color and texture features for person recognition. In Proceedings of the 2004 IEEE International Joint Conference on Neural Networks, volume 1, July 2004.
  • [51] O. Hamdoun, F. Moutarde, B. Stanciulescu, and B. Steux. Interest points harvesting in video sequences for efficient person identification. In Proceedings of the 8th International Workshop on Visual Surveillance (VS), 2008.
  • [52] O. Hamdoun, F. Moutarde, B. Stanciulescu, and B. Steux. Person re-identification in multi-camera system by signature based on interest point descriptors collected on short video sequences. In Proceedings of the Second ACM/IEEE International Conference on Distributed Smart Cameras, 2008. ICDSC 2008., pages 1–6, sept. 2008.
  • [53] Ju Han and Bir Bhanu. Individual recognition using gait energy image. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(2):316–322, February 2006.
  • [54] Pablo H. Hennings-Yeomans, Simon Baker, and B. V. K. Vijaya Kumar. Simultaneous super-resolution and feature extraction for recognition of low-resolution faces. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE Computer Society, 2008.
  • [55] Martin Hirzer, Peter M. Roth, and Horst Bischof. Person re-identification by efficient impostor-based metric learning. In Proceedings of the Ninth IEEE International Conference on Advanced Video and Signal-Based Surveillance, (AVSS), pages 203–208, 2012.
  • [56] M. Isard and J. MacCormick. Bramble: a bayesian multiple-blob tracker. In Proceedings of the Eighth IEEE International Conference on Computer Vision (ICCV), volume 2, pages 34–41, 2001.
  • [57] Omar Javed, Khurram Shafique, and Mubarak Shah. Appearance modeling for tracking in multiple non-overlapping cameras. In Proceedings of the 2005 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, pages 26–33, june 2005.
  • [58] Kui Jia and Shaogang Gong. Multi-modal tensor face for simultaneous super-resolution and recognition. In Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV), volume 2, pages 1683–1690, Washington, DC, USA, 2005. IEEE Computer Society.
  • [59] Thorsten Joachims. Optimizing search engines using clickthrough data. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, KDD ’02, pages 133–142, New York, NY, USA, 2002. ACM.
  • [60] Kai Jungling, C. Bodensteiner, and M. Arens. Person re-identification in multi-camera networks. In Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 55–61, june 2011.
  • [61] Arif Khan, Jian Zhang, and Yang Wang. Appearance-based re-identification of people in video. In Proceedings of the 2010 International Conference on Digital Image Computing: Techniques and Applications (DICTA), pages 357–362, dec. 2010.
  • [62] Microsoft®Kinect™. http://www.microsoft.com/en-us/kinectforwindows/.
  • [63] Kual-Zheng Lee. A simple calibration approach to single view height estimation. In Proceedings of the 9th Conference on Computer and Robot Vision, pages 161–166, 2012.
  • [64] Seok-Han Lee, Tae-Eun Kim, and Jong-Soo Choi. A single-view based framework for robust estimation of heights and positions of moving people. In Digest of Technical Papers of the 2010 International Conference on Consumer Electronics (ICCE), pages 503–504, 2010.
  • [65] B. Li, H. Chang, S. Shan, and X. Chen. Low-resolution face recognition via coupled locality preserving mappings. IEEE Signal Processing Letters, 17(1):20–23, January 2010.
  • [66] Bo Li, Hong Chang, Shiguang Shan, and Xilin Chen. Coupled metric learning for face recognition with degraded images. In Proceedings of the 1st Asian Conference on Machine Learning: Advances in Machine Learning, ACML ’09, pages 220–233, Berlin, Heidelberg, 2009. Springer-Verlag.
  • [67] Chunxiao Liu, Shaogang Gong, Chen Change Loy, and Xinggang Lin. Person re-identification: What features are important? In Proceedings of the European Conference of Computer Vision (ECCV) Workshops, 1st Workshop on Re-Identification (REID), 2012.
  • [68] David G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91–110, November 2004.
  • [69] Bipeng Ma, Yu Su, and Frederic Jurie. Local descriptors encoded by fisher vectors for person re-identification. In Proceedings of the European Conference of Computer Vision (ECCV) Workshops, 1st Workshop on Re-Identification (REID), pages 413–422, 2012.
  • [70] C. Madden and M. Piccardi. Height measurement as a session-based biometric for people matching across disjoint camera views. In In Image and Vision Computing New Zealand, page 29, 2005.
  • [71] B.S. Manjunath, J.-R. Ohm, V.V. Vasudevan, and A. Yamada. Color and texture descriptors. IEEE Transactions on Circuits and Systems for Video Technology, 11(6):703–715, jun 2001.
  • [72] Niki Martinel and Gian Luca Foresti. Multi-signature based person re-identification. Electronics Letters, 48(13):765–767, 2012.
  • [73] Niki Martinel and Christian Micheloni. Re-identify people in wide area camera network. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 31–36, june 2012.
  • [74] Alexis Mignon. Pcca: A new approach for distance learning from sparse pairwise constraints. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), CVPR ’12, pages 2666–2672, Washington, DC, USA, 2012. IEEE Computer Society.
  • [75] Krystian Mikolajczyk and Cordelia Schmid. A performance evaluation of local descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(10):1615–1630, October 2005.
  • [76] Javier R. Movellan. Tutorial on Gabor Filters. Tutorial paper http://mplab.ucsd.edu/tutorials/pdfs/gabor.pdf, 2008.
  • [77] S. P. Neugebauer and P. A. Sallee. New 3D biometric capabilities for human identification at a distance. In Proceedings of the 2009 Special Operations Forces Industry Conference (SOFIC), 2009.
  • [78] Jie Ni and Rama Chellappa. Evaluation of state-of-the-art algorithms for remote face recognition. In Proceedings of the 2010 International Conference on Image Processing (ICIP), pages 1581–1584, 2010.
  • [79] W. Niu, L. Jiao, D. Han, and Y.-F. Wang. Real-time multiperson tracking in video surveillance. In Proceedings of the 2003 Joint Conference of the Fourth International Conference on Information, Communications and Signal Processing, and Fourth Pacific Rim Conference on Multimedia, volume 2, pages 1144–1148, December 2003.
  • [80] D.B. Ober, S.P. Neugebauer, and P.A. Sallee. Training and feature-reduction techniques for human identification using anthropometry. In Proceedings of the Fourth IEEE International Conference on Biometrics: Theory Applications and Systems (BTAS), pages 1–8, September 2010.
  • [81] Timo Ojala, Matti Pietikainen, and Topi Maenpaa. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(7):971–987, 2002.
  • [82] Elzbieta Pekalska and Robert P. W. Duin. The Dissimilarity Representation for Pattern Recognition: Foundations and Applications, volume 64 of Machine Perception and Artificial Intelligence. World Scientific Publishing Co., Inc., River Edge, NJ, USA, 2005.
  • [83] Massimo Piccardi and Eric Dahai Cheng. Track matching over disjoint camera views based on an incremental major color spectrum histogram. In Proceedings of the 2005 IEEE International Conference on Video and Signal Based Surveillance (AVSS), pages 147–152, 2005.
  • [84] B. Prosser, W. Zheng, S. Gong, and T. Xiang. Person re-identification by support vector ranking. In Proceedings of the British Machine Vision Conference (BMVC), pages 21.1–21.10, 2010.
  • [85] Deva Ramanan. Learning to parse images of articulated bodies. In Proceedings of the Conference on Neural Information Processing Systems (NIPS), 2006.
  • [86] J.A. Roebuck, K.H.E. Kroemer, and W.G. Thomson. Engineering anthropometry methods. Wiley series in human factors. Wiley-Interscience, 1975.
  • [87] A. Ross and A. K. Jain. Multimodal Biometrics: an overview. In Proceedings of 12th European Signal Processing Conference, pages 1221–1224, 2004.
  • [88] Riccardo Satta. Dissimilarity-based people re-identification and search for intelligent video surveillance. PhD thesis, PhD School in Information Engineering, University of Cagliari, Cagliari, Italy, 2013.
  • [89] Riccardo Satta, Giorgio Fumera, and Fabio Roli. Exploiting dissimilarity representations for person re-identification. In Proceedings of the 1st International Workshop on Similarity-Based Pattern Analysis and Recognition (SIMBAD), pages 275–289, 2011.
  • [90] Riccardo Satta, Giorgio Fumera, and Fabio Roli. Appearance-based people recognition by local dissimilarity representations. In Proceedings of the ACM Workshop on Multimedia and Security, MMSec ’12, pages 151–156, New York, NY, USA, 2012. ACM.
  • [91] Riccardo Satta, Giorgio Fumera, and Fabio Roli. Fast person re-identification based on dissimilarity representations. Pattern Recognition Letters, 33(14):1838–1848, 2012.
  • [92] Riccardo Satta, Giorgio Fumera, and Fabio Roli. A general method for appearance-based people search based on textual queries. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, 1st Workshop on Re-Identification (REID), volume 7583, pages 453–461. Springer Berlin Heidelberg, 2012.
  • [93] Riccardo Satta, Giorgio Fumera, Fabio Roli, Marco Cristani, and Vittorio Murino. A multiple component matching framework for person re-identification. In Proceedings of the 16th International Conference on Image Analysis and Processing (ICIAP), volume 2, pages 140–149, 2011.
  • [94] Riccardo Satta, Federico Pala, Giorgio Fumera, and Fabio Roli. Real-time appearance-based person re-identification over multiple Kinect™ cameras. In Proceedings of the 8th International Conference on Computer Vision Theory and Applications (VISAPP), Barcelona, Spain, 2013.
  • [95] Cordelia Schmid. Constructing models for content-based image retrieval. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, pages 39–45, 2001.
  • [96] Sumit Shekhar, Vishal M. Patel, and Rama Chellappa. Synthesis-based recognition of low resolution faces. In Proceedings of the 2011 International Joint Conference on Biometrics (IJCB), pages 1–6, 2011.
  • [97] J. Shotton, A. Fitzgibbon, M. Cook, T. Sharp, M. Finocchio, R. Moore, A. Kipman, and A. Blake. Real-time human pose recognition in parts from single depth images. In Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1297–1304, 2011.
  • [98] Leonid Sigal and Michael J. Black. Predicting 3D people from 2D pictures. In Proceedings of the IV Conference on Articulated Motion and Deformable Objects (AMDO), pages 185–195, 2006.
  • [99] Thomas Sikora. The MPEG-7 visual standard for content description - an overview. IEEE Transactions on Circuits and Systems for Video Technology, 11(6):696–702, June 2001.
  • [100] Sarah V. Stevenage, Mark S. Nixon, and Kate Vince. Visual analysis of gait as a cue to identity. Applied Cognitive Psychology, 13(6):513–526, 1999.
  • [101] J. Taylor, J. Shotton, T. Sharp, and A. Fitzgibbon. The vitruvian manifold: Inferring dense correspondences for one-shot human pose estimation. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 103–110, 2012.
  • [102] M. Tkalcic and J.F. Tasic. Colour spaces: perceptual, historical and applicational background. In Proceedings of EUROCON 2003: Computer as a Tool (IEEE Region 8), volume 1, pages 304–308, September 2003.
  • [103] Dung Nghi Truong Cong, Catherine Achard, Louahdi Khoudour, and Lounis Douadi. Video sequences association for people re-identification across multiple non-overlapping cameras. In Proceedings of the 15th International Conference on Image Analysis and Processing (ICIAP), pages 179–189, 2009.
  • [104] Dung Nghi Truong Cong, Louahdi Khoudour, Catherine Achard, Cyril Meurie, and Olivier Lezoray. People re-identification by spectral classification of silhouettes. Signal Processing, 90(8):2362–2374, August 2010.
  • [105] Koen van de Sande, Theo Gevers, and Cees Snoek. Evaluating color descriptors for object and scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(9):1582–1596, September 2010.
  • [106] Yang Wu, Masayuki Mukunoki, Takuya Funatomi, M. Minoh, and Shihong Lao. Optimizing mean reciprocal rank for person re-identification. In Proceedings of the 8th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), pages 408–413, 2011.
  • [107] Dong Xu, Yi Huang, Zinan Zeng, and Xinxing Xu. Human gait recognition using patch distribution feature and locality-constrained group sparse representation. IEEE Transactions on Image Processing, 21(1):316–326, January 2012.
  • [108] Nai-Chung Yang, Wei-Han Chang, Chung-Ming Kuo, and Tsia-Hsing Li. A fast MPEG-7 dominant color extraction with new similarity measure for image retrieval. Journal of Visual Communication and Image Representation, 19(2):92–105, February 2008.
  • [109] Alper Yilmaz, Omar Javed, and Mubarak Shah. Object tracking: A survey. ACM Computing Surveys (CSUR), 38(4), December 2006.
  • [110] Guoying Zhao, Guoyi Liu, Hua Li, and M. Pietikainen. 3D gait recognition using multiple cameras. In Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition (FGR), pages 529–534, April 2006.
  • [111] Rui Zhao, Wanli Ouyang, and Xiaogang Wang. Unsupervised salience learning for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Portland, USA, June 2013.
  • [112] Wei-Shi Zheng, Shaogang Gong, and Tao Xiang. Associating groups of people. In Proceedings of the British Machine Vision Conference (BMVC), 2009.
  • [113] Wei-Shi Zheng, Shaogang Gong, and Tao Xiang. Person re-identification by probabilistic relative distance comparison. In Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 649–656, 2011.