Human attribute and action recognition in still images is a challenging problem that has received much attention in recent years (Joo et al, 2013; Khan et al, 2013; Prest et al, 2012; Yao et al, 2011). Both tasks are difficult since humans are often occluded, appear in varied and articulated poses, under varying illumination, and at low resolution. Furthermore, significant variations in scale both within and across different classes make these tasks extremely challenging. Figure 1 shows example images from different categories in the Stanford-40 and the Willow action datasets. The bounding box (in red) of each person instance is provided at both train and test time. These examples illustrate the inter- and intra-class scale variations common to certain action categories. In this paper, we investigate image representations that are robust to these variations in scale.
Bag-of-words (BOW) image representations have been successfully applied to image classification and action recognition tasks (Khan et al, 2013; Lazebnik et al, 2006; van de Sande et al, 2010; Sharma et al, 2012). The first stage within the framework, known as feature detection, involves detecting keypoint locations in an image. The standard approach for feature detection is to use dense multi-scale feature sampling (Khan et al, 2012; Nowak et al, 2006; van de Sande et al, 2010) by scanning the image at multiple scales at fixed locations on a grid of rectangular patches. Next, each feature is quantized against a visual vocabulary to arrive at the final image representation. A disadvantage of the standard BOW pipeline is that all scale information is lost. Though for image classification such an invariance with respect to scale might seem beneficial, since instances can appear at different scales, it trades discriminative information for scale invariance. We distinguish two relevant sources of scale information: (i) dataset scale prior: due to the acquisition of the dataset, some visual words could be more indicative of certain categories at a particular scale than at other scales (e.g. we do not expect persons of 15 pixels nor shoes at 200 pixels) and (ii) relative scale: in the presence of a reference scale, such as the person bounding box provided for action recognition, we have knowledge of the actual scale at which we expect to detect parts of the object (e.g. the hands and head of the person). Both examples show the relevance of scale information for discriminative image representations, and are the motivation for our investigation into scale coding methods for human attribute and action recognition.
Traditionally, BOW methods are based on hand-crafted local features such as SIFT (Lowe, 2004), HOG (Dalal and Triggs, 2005) or Color Names (van de Weijer et al, 2009). Recently, Convolutional Neural Networks (CNNs) have had tremendous success on a wide range of computer vision applications, including human attribute (Zhang et al, 2014) and action recognition (Oquab et al, 2014). Cimpoi et al (2015) showed how deep convolutional features (i.e. dense local features extracted at multiple scales from the convolutional layers of CNNs) can be exploited within a BOW pipeline for object and texture recognition. In their approach, a Fisher Vector encoding scheme is used to obtain the final image representation (called FV-CNN). We will refer to this type of image representation as a bag of deep features, and in this work we will apply various scale coding approaches to a bag of deep features.
Contributions: In this paper, we investigate strategies for incorporating multi-scale information in image representations for human attribute and action recognition in still images. Existing approaches encode multi-scale information only at the feature extraction stage by extracting convolutional features at multiple scales. However, the final image representation in these approaches is scale-invariant since all the scales are pooled into a single histogram. To prevent the loss of scale information we will investigate two complementary scale coding approaches. The first approach, which we call absolute scale coding, is based on a multi-scale image representation with scale encoded with respect to the image size. The second approach, called relative scale coding, instead encodes feature scale relative to the size of the bounding box corresponding to the person instance. Scale coding of bag-of-deep features is performed by applying the coding strategies to the convolutional features from a pre-trained deep network. The final image representation is obtained by concatenating the small, medium and large scale image representations. We perform comprehensive experiments on five standard datasets: Willow, PASCAL VOC 2010, PASCAL VOC 2012, Stanford-40 and the Database of Human Attributes (HAT-27). Our experiments clearly demonstrate that our scale coding strategies outperform both the scale-invariant bag of deep features and the standard deep features extracted from fully connected layers of the same network. We further show that combining our scale coding strategies with standard features from the FC layer further improves the classification performance. Our scale coding image representations are flexible and effective, while providing consistent improvement over the state-of-the-art.
In the next section we discuss work from the literature related to our proposed scale coding technique. In sections 3 and 4 we describe our two proposals for coding scale information in image representations. We report on a comprehensive set of experiments performed on five standard benchmark datasets in section 5, and in section 6 we conclude with a discussion of our contribution.
2 Related Work
Scale plays an important role in feature detection. Important early work includes research on pattern spectrums (Maragos, 1989) based on mathematical morphology, which provided insight into the existence of features at certain scales in object shapes. In the field of scale-space theory (Witkin, 1984; Koenderink, 1984) the scale of features was examined by analyzing how images evolve when smoothed with Gaussian filters of increasing scale. This theory was also at the basis of the SIFT detector (Lowe, 2004), which obtained scale-invariant features and showed them to be highly effective for object detection. The detection of scale-invariant features was widely studied within the context of bag-of-words (Mikolajczyk and Schmid, 2004a). In contrast to the methods we describe in this paper, most of these works ignore the relative size of detected features.
In this section we briefly review the state-of-the-art in bag-of-words image recognition frameworks, multi-scale deep feature learning, and human action and attribute recognition.
The bag-of-words framework. In the last decade, the bag-of-words (BOW) based image representation dominated the state-of-the-art in object recognition (Everingham et al, 2010)
and image retrieval (Jegou et al, 2010). The BOW image representation is obtained by performing three steps in succession: feature detection, feature extraction and feature encoding. Feature detection involves keypoint selection either with an interest point detector (Mikolajczyk and Schmid, 2004b) or with dense sampling on a fixed grid (Bosch et al, 2008; Vedaldi et al, 2009). Several works (Rojas et al, 2010; van de Sande et al, 2010) demonstrated the importance of using a combination of interest point and grid-based dense sampling. This feature detection phase, especially when done on a dense grid, is usually multi-scale in the sense that feature descriptors are extracted at multiple scales at all points.
Local descriptors, such as SIFT and HOG, are extracted in the feature extraction phase (Lowe, 2004; Dalal and Triggs, 2005). Next, several encoding schemes can be considered (Perronnin et al, 2010; van Gemert et al, 2010; Zhou et al, 2010). The work of van Gemert et al (2010) investigated soft assignment of local features to visual words. Zhou et al (2010) introduced super-vector coding, which performs a non-linear mapping of each local feature descriptor to construct a high-dimensional sparse vector. The Improved Fisher Vectors, introduced by Perronnin et al (2010), encode local descriptors as gradients with respect to a generative model of image formation (usually a Gaussian mixture model (GMM) over local descriptors, which serves as a visual vocabulary for Fisher vector coding).
Regardless of the feature encoding scheme, most existing methods achieve scale invariance by simply quantizing local descriptors to a visual vocabulary independently of the scale at which they were extracted. Visual words have no associated scale information, and scale is thus marginalized away in the histogram construction process. In this work, we use a Fisher Vector encoding scheme within the BOW framework and investigate techniques to relax scale invariance in the final image representation. We refer to this as scale coding, since scale information is preserved in the final encoding.
Deep features. Recently, image representations based on convolutional neural networks (CNNs) (LeCun et al, 1989) have demonstrated significant improvements over the state-of-the-art in image classification (Oquab et al, 2014), object detection (Girshick et al, 2014), action recognition (Liang et al, 2014), and attribute recognition (Zhang et al, 2014). CNNs consist of a series of convolution and pooling operations followed by one or more fully connected (FC) layers. Deep networks are trained on raw image pixels with a fixed input size and require large amounts of labeled training data. The introduction of large datasets (e.g. ImageNet (Russakovsky et al, 2014)) and the parallelism enabled by modern GPUs have facilitated the rapid deployment of deep networks for visual recognition.
It has been shown that intermediate hidden activations of fully connected layers in a trained deep network are general-purpose features applicable to visual recognition tasks (Azizpour et al, 2014; Oquab et al, 2014). Several recent methods (Cimpoi et al, 2015; Gong et al, 2014; Liu et al, 2015) have shown superior performance using convolutional layer activations instead of fully-connected ones. These convolutional layers are discriminative, semantically meaningful and mitigate the need to use a fixed input image size. Gong et al (2014) proposed a multi-scale orderless pooling (MOP) approach that constructs descriptors from the fully connected (FC) layer of the network. The descriptors are extracted from densely sampled square image windows and then pooled using the VLAD encoding scheme (Jégou et al, 2010) to obtain the final image representation.
In contrast to MOP (Gong et al, 2014), Cimpoi et al (2015) showed how deep convolutional features (i.e. dense local features extracted at multiple scales from the convolutional layers of CNNs) can be exploited within a BOW pipeline. In their approach, a Fisher Vector encoding scheme is used to obtain the final image representation. We refer to this type of image representation as a bag of deep features, and in this work we apply various scale coding approaches to it. Though FV-CNN (Cimpoi et al, 2015) employs multi-scale convolutional features, the descriptors are pooled into a single Fisher Vector representation. This implies that the final image representation is scale-invariant since all the scales are pooled into a single feature vector. We argue that such a representation is sub-optimal for the problem of human attribute and action recognition and propose to explicitly incorporate multi-scale information in the final image representation.
Action recognition in still images. Recognizing actions in still images is a difficult problem that has gained much attention recently (Khan et al, 2013; Oquab et al, 2014; Prest et al, 2012; Yao et al, 2011). In action recognition, bounding box information of each person instance is provided both at train and test time. The task is to associate an action category label with each person bounding box. Several approaches have addressed the problem of action recognition by finding human-object interactions in an image (Maji et al, 2011; Prest et al, 2012; Yao et al, 2011). A poselet-based approach was proposed in Maji et al (2011), where poselet activation vectors capture the pose of a person. Prest et al (2012) proposed a human-centric approach that localizes humans and the objects associated with an action. Yao et al (2011) proposed to learn a set of sparse attribute and part bases for action recognition in still images. Recently, a comprehensive survey was performed by Ziaeefard and Bergevin (2015) on action recognition methods exploiting semantic information. Their survey showed that methods exploiting semantic information yield superior performance compared to their non-semantic counterparts in many scenarios. Human action recognition in still images is also discussed in the context of fuzzy approaches in a recent survey (Lim et al, 2015).
Sharma et al (2012) proposed the use of discriminative spatial saliency for action recognition by employing a max-margin classifier. A comprehensive evaluation of color descriptors and color-shape fusion approaches was performed by Khan et al (2013) for action recognition. Khan et al (2014a) proposed pose-normalized semantic pyramids employing pre-trained body part detectors. A comprehensive survey was performed by Guo and Lai (2014), in which existing action recognition methods are categorized based on high-level cues and low-level features.
Recently, image representations based on deep features have achieved superior performance for action recognition (Gkioxari et al, 2014; Hoai, 2014; Oquab et al, 2014). Oquab et al (2014) proposed mid-level image representations using pre-trained CNNs for image classification and action recognition. The work of Gkioxari et al (2014) proposed learning deep features jointly for action classification and detection. Hoai (2014) proposed regularized max pooling, extracting features at multiple deformable sub-windows. The aforementioned approaches employ deep features extracted from activations of the fully connected layers of deep CNNs. In contrast, we use dense local features from the convolutional layers of the network for image description.
The incorporation of scale information has been investigated in the context of action recognition in videos (Shabani et al, 2013; Zhu et al, 2012). The work of Shabani et al (2013) constructs multiple dictionaries at different resolutions in the final video representation. The work of Zhu et al (2012) proposes multi-scale spatio-temporal concatenation of local features, resulting in a set of natural action structures. Neither of these methods considers relative scale coding. In addition, our approach builds on recent advances in deep convolutional neural networks (CNNs) and the Fisher vector encoding scheme. We revisit the problem of incorporating scale information for popular CNN-based deep features. To the best of our knowledge, we are the first to investigate and propose scale-coded bag-of-deep-features representations applicable to both human attribute and action recognition in still images.
Human attribute recognition. Recognizing human attributes such as age, gender and clothing style is an active research problem with many real-world applications. State-of-the-art approaches employ part-based representations (Bourdev et al, 2011; Khan et al, 2014a; Zhang et al, 2014) to counter the problem of pose normalization. Bourdev et al (2011) proposed semantic part detection using poselets and constructing pose-normalized representations; their approach employs HOG for part description. Later, Zhang et al (2014) extended the approach of Bourdev et al (2011) by replacing the HOG features with CNNs. Khan et al (2014a) proposed pre-trained body part detectors to automatically construct pose-normalized semantic pyramid representations.
In this work, we investigate scale coding strategies for human attribute and action recognition in still images. This paper is an extended version of our earlier work (Khan et al, 2014b). Instead of using the standard BOW framework with SIFT features, we propose scale coding strategies within the emerging bag of deep features paradigm, which uses dense convolutional features in classical BOW pipelines. We additionally extend our experiments with results on the PASCAL VOC 2010, PASCAL VOC 2012, Stanford-40 and Database of Human Attributes (HAT-27) datasets.
3 Scale Coding: Relaxing Scale Invariance
In this section we discuss several approaches to relaxing the scale invariance of local descriptors in the bag-of-words model. Originally, the BOW model was developed for image classification where the task is to determine the presence or absence of objects in images. In such situations, invariance with respect to scale is important since the object could be in the background of the image and thus appear small, or instead appear in the foreground and cover most of the image space. Therefore, extracted features are converted to a canonical scale — and from that point on the original feature scale is discarded — and mapped onto a visual vocabulary. When BOW was extended to object detection (Harzallah et al, 2009; Vedaldi et al, 2009) and later to action recognition (Delaitre et al, 2010; Khan et al, 2013; Prest et al, 2012) this same strategy for ensuring scale invariance was applied.
However, this invariance comes at the expense of discriminative power through the loss of information about relative scale between features. In particular, we distinguish two sources of scale information: (i) dataset scale prior: the acquisition and/or collection protocol of a dataset results in a characteristic distribution of object sizes in the image, e.g. most cars are between 100-200 pixels, and (ii) relative scale: in the presence of a reference scale, such as the person bounding box, we have knowledge of the actual scale at which we expect to detect parts or objects (e.g. the size at which the action-defining object such as the mobile phone or musical instrument should be detected). These sources of information are lost in scale-invariant image representations. We propose two strategies to encode feature scale information in the final image representation.
3.1 Scale-invariant Image Representation
We first introduce some notation. Features are extracted from the person bounding boxes (available at both training and testing time) using multi-scale sampling at all feature locations. For each bounding box $B$, we extract a set of features:
$$F(B) = \{\, x_{is} \mid i = 1, \ldots, N_B,\; s \in S \,\},$$
where $i$ indexes the feature sites in $B$, and $s$ indexes the scales extracted at each site.
In the scale-invariant representation a single representation $h(B)$ is constructed for each bounding box $B$:
$$h(B) = \sum_{i} \sum_{s \in S} c(x_{is}), \qquad (1)$$
where $c : \mathbb{R}^{D} \rightarrow \mathbb{R}^{M}$ denotes a coding scheme which maps the input feature space of dimensionality $D$ to the image representation space of dimensionality $M$.
Let us first consider the case of standard bag-of-words with nearest neighbor assignment to the closest vocabulary word. Assume we have a visual vocabulary of $m$ words $W = \{w_1, \ldots, w_m\}$. Every feature is quantized to its closest (in the Euclidean sense) vocabulary word:
$$k_{is} = \operatorname*{arg\,min}_{j \in \{1, \ldots, m\}} d(x_{is}, w_j),$$
where $d$ is the Euclidean distance. Index $k_{is}$ corresponds to the vocabulary word to which feature $x_{is}$ is assigned. Letting $e_k$ be the one-hot column vector of length $m$ with all zeros except for the index $k$ where it is one, we can write the standard hard-assignment bag-of-words by plugging in
$$c(x_{is}) = e_{k_{is}}$$
as the coding function in Eq. 1.
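As an illustration, the hard-assignment coding above can be sketched in a few lines of NumPy (a toy sketch; the 3-word vocabulary and the features below are hypothetical, not values from the paper):

```python
import numpy as np

def hard_assign_bow(features, vocabulary):
    """Hard-assignment bag-of-words: each local feature votes for its
    nearest vocabulary word; scale information is marginalized away."""
    # features: (n, D) local descriptors; vocabulary: (m, D) visual words
    dists = np.linalg.norm(features[:, None, :] - vocabulary[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)          # index k_i for each feature
    hist = np.bincount(nearest, minlength=len(vocabulary)).astype(float)
    return hist                             # h(B): the sum of one-hot codes

# toy example with a hypothetical 3-word vocabulary in 2-D
vocab = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
feats = np.array([[0.1, 0.0], [0.9, 0.1], [0.0, 0.8], [0.05, 0.9]])
print(hard_assign_bow(feats, vocab))  # -> [1. 1. 2.]
```

Summing one-hot vectors over all sites and scales is exactly the histogram in Eq. 1.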
For the case of Fisher vector encoding (Perronnin and Dance, 2007), a Gaussian Mixture Model (GMM) is fitted to the distribution of local features $x$:
$$p(x) = \sum_{k=1}^{K} \pi_k \, \mathcal{N}(x;\, \mu_k, \Sigma_k),$$
where $\theta = \{\pi_k, \mu_k, \Sigma_k\}_{k=1}^{K}$ are the parameters defining the GMM, respectively the mixing weights, the means, and the covariance matrices for the $K$ Gaussian mixture components, and $\mathcal{N}(x;\, \mu_k, \Sigma_k)$ denotes the $k$-th Gaussian density. The coding function is then given by the gradient of the log-likelihood with respect to all of the GMM parameters:
$$c(x) = \nabla_{\theta} \log p(x;\, \theta),$$
and plugging this encoding function into Eq. 1. For more details on the Fisher vector encoding, please refer to Sánchez et al (2013). Since the superiority of Fisher coding has been shown in several publications (Chatfield et al, 2011), we apply it throughout this paper.
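To make the encoding concrete, here is a minimal NumPy sketch of the gradient with respect to the GMM means only; the full improved Fisher vector also includes gradients with respect to the mixing weights and covariances, plus power and l2 normalization, so this is an illustration rather than the complete encoding:

```python
import numpy as np

def fisher_vector_means(X, pi, mu, sigma2):
    """Simplified Fisher vector: normalized gradients w.r.t. the GMM means
    only, for a diagonal-covariance GMM."""
    # X: (n, D) features; pi: (K,) weights; mu: (K, D) means; sigma2: (K, D)
    n, D = X.shape
    K = len(pi)
    # posterior responsibilities gamma_{nk} under the diagonal GMM
    log_p = np.zeros((n, K))
    for k in range(K):
        diff = X - mu[k]
        log_p[:, k] = (np.log(pi[k])
                       - 0.5 * np.sum(np.log(2 * np.pi * sigma2[k]))
                       - 0.5 * np.sum(diff**2 / sigma2[k], axis=1))
    gamma = np.exp(log_p - log_p.max(axis=1, keepdims=True))
    gamma /= gamma.sum(axis=1, keepdims=True)
    # gradient w.r.t. each mean, normalized as in the improved Fisher vector
    fv = [(gamma[:, k:k + 1] * (X - mu[k]) / np.sqrt(sigma2[k])).sum(axis=0)
          / (n * np.sqrt(pi[k])) for k in range(K)]
    return np.concatenate(fv)   # length K * D
```

The resulting vector has dimensionality $KD$ here; with weight and covariance gradients included it grows accordingly.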
3.2 Absolute Scale Coding
The first scale-preserving coding method we propose uses an absolute multi-scale image representation. Letting $S$ be the set of extracted feature scales, we encode features in groups of scales:
$$h_j(B) = \sum_{i} \sum_{s \in S_j} c(x_{is}), \qquad j \in \{sm, me, la\}. \qquad (2)$$
Instead of being marginalized away completely as in equation (1), feature scales are divided into several subgroups $S_j$ that partition the entire set of extracted scales (i.e. $S = S_{sm} \cup S_{me} \cup S_{la}$). In this work we consider a split of all scales into three groups, with $j \in \{sm, me, la\}$ for small, medium and large scale features. For absolute scale coding, these three scale partitions are defined as:
$$S_{sm} = \{\, s \in S \mid s \le t_1 \,\}, \quad S_{me} = \{\, s \in S \mid t_1 < s \le t_2 \,\}, \quad S_{la} = \{\, s \in S \mid s > t_2 \,\},$$
where the two cutoff thresholds $t_1$ and $t_2$ are parameters of the encoding. The final representation is obtained by concatenating these three encodings $[h_{sm}(B),\, h_{me}(B),\, h_{la}(B)]$ of the box $B$ and thus preserves coarse scale information about the originally extracted features; it exploits what we refer to as the dataset scale prior or absolute scale. Note, however, that this representation does not exploit relative scale information.
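Absolute scale coding can be sketched as follows; here `encode` is a stand-in for the coding function c(.) (a Fisher vector in the paper), and the features, scales, and thresholds in the toy example are hypothetical:

```python
import numpy as np

def absolute_scale_coding(features, scales, encode, t1, t2):
    """Encode features separately for small, medium and large absolute
    scales and concatenate; t1 and t2 are the cutoff thresholds."""
    # features: (n, D); scales: (n,) scale at which each feature was extracted
    groups = [scales <= t1,
              (scales > t1) & (scales <= t2),
              scales > t2]
    return np.concatenate([encode(features[g]) for g in groups])

# toy example: 'encode' stands in for the Fisher vector coding c(.)
encode = lambda F: F.sum(axis=0) if len(F) else np.zeros(2)
feats = np.array([[1., 0.], [0., 1.], [2., 2.], [3., 0.]])
scales = np.array([0.5, 0.8, 1.2, 2.0])
print(absolute_scale_coding(feats, scales, encode, t1=0.9, t2=1.5))
# -> [1. 1. 2. 2. 3. 0.]
```

The concatenated output is three times the size of a single encoding, matching the small/medium/large split in the text.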
3.3 Relative Scale Coding
In relative scale coding features are represented relative to the size of the bounding box of the object (in our case the person bounding box). We first define the reference scale of a bounding box $B$ and the resulting relative scale of each feature:
$$s_B = \frac{w_B + h_B}{\bar{w} + \bar{h}}, \qquad \hat{s}_{is} = \frac{s}{s_B},$$
where $w_B$ and $h_B$ are the width and height of bounding box $B$ and $\bar{w}$ and $\bar{h}$ are the mean width and height of all bounding boxes in the training set. Taking the boundary length $w_B + h_B$ into account ensures that elongated objects have large scales. The representation is then computed with:
$$h_j(B) = \frac{1}{|S_j|} \sum_{i} \sum_{s \,:\, \hat{s}_{is} \in S_j} c(x_{is}). \qquad (3)$$
As for absolute scale coding, described in the previous section, we group relative scales into three groups. The splits are defined with respect to the relative scale:
$$S_{sm} = \{\, \hat{s} \mid \hat{s} \le t_1 \,\}, \quad S_{me} = \{\, \hat{s} \mid t_1 < \hat{s} \le t_2 \,\}, \quad S_{la} = \{\, \hat{s} \mid \hat{s} > t_2 \,\}.$$
Since the number of scales which fall into the small, medium and large scale ranges now varies with the size of the bounding box, we introduce the normalization factor $1/|S_j|$ in Eq. 3 to counter this. Here $|S_j|$ is the cardinality of the set $S_j$.
Relative scale coding represents visual words at a certain relative scale with respect to the bounding box size. Again, it consists of three image representations for small, medium and large scale visual words, which are then concatenated to form the final representation for $B$. However, depending on the size of the bounding box, the scales which are considered small, medium and large change. An illustrative overview of this approach is given in Figure 2. In contrast to the standard approach, this method preserves the relative scale of visual words without completely sacrificing the scale invariance of the original representation.
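One plausible reading of relative scale coding in code is sketched below. It is an illustration under stated assumptions: `encode` stands in for the coding function c(.), the relative-scale mapping divides by the box's reference scale, and the partition cardinality is approximated here by the number of features falling in each partition:

```python
import numpy as np

def relative_scale_coding(features, scales, box_wh, mean_wh, encode,
                          t1, t2, out_dim):
    """Relative scale coding sketch: feature scales are mapped to scales
    relative to the person box before the small/medium/large split."""
    w_b, h_b = box_wh          # width/height of this bounding box
    w_bar, h_bar = mean_wh     # mean width/height over the training set
    s_box = (w_b + h_b) / (w_bar + h_bar)   # reference scale of the box
    rel = scales / s_box       # relative scale of each feature
    groups = [rel <= t1, (rel > t1) & (rel <= t2), rel > t2]
    parts = []
    for g in groups:
        # normalize each partition (the 1/|S_j| factor), since the number
        # of scales falling in it varies with the box size
        enc = encode(features[g]) if g.any() else np.zeros(out_dim)
        parts.append(enc / max(int(g.sum()), 1))
    return np.concatenate(parts)
```

For a box twice the average size, a feature extracted at a given absolute scale lands in a smaller relative-scale partition than the same feature in an average-sized box, which is the behavior described above.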
3.4 Scale Partitioning
Until now we have considered partitioning the features into three scale groups: small, medium and large. Here, we evaluate this choice and compare it with other partitionings of the scales.
To evaluate the partitioning of scales, we extracted features at different scales on the Stanford-40 and PASCAL VOC 2010 datasets. For this evaluation we performed absolute scale coding and varied the number of scale partitions from one (equivalent to standard scale-invariant coding) to 21, in which case every scale is represented by its own image representation. In Figure 3 we plot the mean average precision (mAP) on Stanford-40 and PASCAL 2010 as a function of the number of scale partitions. The curve clearly shows that absolute scale coding outperforms the generally applied representation based on scale-invariant coding (which collects all scales in a single partition). Furthermore, it shows that beyond three scale partitions, the gain from increasing the number of partitions is negligible. Throughout this paper we use three scale partitions for all scale coding experiments.
4 The Bag of Deep Features Model
Inspired by the recent success of CNNs, we use deep features in our scale coding framework.
Deep convolutional features: Similar to Cimpoi et al (2015), we use the VGG-19 network proposed by Simonyan and Zisserman (2015), pre-trained on the ImageNet dataset. It was shown to provide the best performance in a recent evaluation (Cimpoi et al, 2015; Chatfield et al, 2014) for image classification tasks. In the VGG-19 network, input images are convolved with 3x3 filters at each pixel at a stride of 1 pixel. The network contains several max-pooling layers which perform spatial pooling over 2x2 pixel windows at a stride of 2 pixels. The VGG-19 network contains 3 fully connected (FC) layers at the end. The width of the network starts from 64 feature maps in the first layer and increases by a factor of 2 after each max-pooling layer to reach 512 feature maps at its widest (see (Simonyan and Zisserman, 2015) for more details).
Typically, the activations from the FC layer(s) are used as input features for classification. For VGG-19 this results in a 4096-dimensional representation. In contrast, we use the output of the last convolutional layer of the network, since it was shown to provide superior performance compared to other layers (Cimpoi et al, 2015). This layer returns dense convolutional features at a stride of eight pixels. We use these 512-dimensional descriptors as local features within our scale coding framework. To obtain multi-scale samples, we rescale all images over a range of scales and pass them through the network for feature extraction. Note that the number of extracted local convolutional patches depends on the size of the input image.
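The mapping from a convolutional feature map to dense local descriptors is just a reshape: each spatial position of the last conv layer yields one 512-dimensional descriptor. A minimal sketch (using a randomly filled map of hypothetical spatial size, since loading the actual VGG-19 is beyond a short example):

```python
import numpy as np

def conv_map_to_descriptors(fmap):
    """A conv feature map of shape (C, H, W) becomes H*W local descriptors
    of dimensionality C, one per spatial position (the stride in image
    pixels is determined by the network)."""
    C, H, W = fmap.shape
    return fmap.reshape(C, H * W).T   # (H*W, C)

fmap = np.random.rand(512, 7, 9)      # e.g. last conv layer output of VGG-19
descs = conv_map_to_descriptors(fmap)
print(descs.shape)                    # -> (63, 512)
```

Repeating this over each rescaled version of the image yields the multi-scale bag of deep features used in the rest of the pipeline.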
Vocabulary construction and assignment. In standard BOW all features are quantized against a scale-invariant visual vocabulary. The local features are then pooled in a single scale-invariant image representation. Similar to Cimpoi et al (2015), we use the Fisher Vector encoding for our scale coding models. For vocabulary construction, we use the Gaussian mixture model (GMM). The convolutional features are then pooled via the Fisher encoding that captures the average first and second order differences. The 21 different scales are pooled into the three scale partitions to ensure that the scale information is preserved in the final representation. It is worth mentioning that our scale coding schemes can also be used with other encoding schemes such as hard assignment, soft assignment, and VLAD (Jégou et al, 2010).
5 Experimental Results
In this section we present the results of our scale coding strategies for the problem of human attribute and action recognition. First we detail our experimental setup and datasets used in our evaluation, and then present a comprehensive comparison of our approach with baseline methods. Finally, we compare our approach with the state-of-the-art in human attribute and action recognition.
5.1 Experimental Setup
As mentioned earlier, bounding boxes of person instances are provided at both train and test time in human attribute and action recognition. The task is thus to predict the human attribute or action category for each person bounding box. To incorporate context information, we extend each person bounding box by a fixed fraction of its width and height.
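As a sketch, extending a box by a fraction of its size while clamping to the image could look as follows; the `frac` parameter is a placeholder, not the specific value used in the paper:

```python
def expand_box(x, y, w, h, img_w, img_h, frac):
    """Symmetrically grow an (x, y, w, h) box by frac of its width/height,
    clamped to the image bounds, to include surrounding context."""
    dx, dy = frac * w / 2.0, frac * h / 2.0
    x0, y0 = max(x - dx, 0.0), max(y - dy, 0.0)
    x1 = min(x + w + dx, float(img_w))
    y1 = min(y + h + dy, float(img_h))
    return x0, y0, x1 - x0, y1 - y0

print(expand_box(10, 10, 20, 40, 100, 100, frac=0.5))
# -> (5.0, 0.0, 30.0, 60.0)
```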
In our experiments we use the pre-trained VGG-19 network (Simonyan and Zisserman, 2015). Similar to Cimpoi et al (2015), we extract the convolutional features from the output of the last convolutional layer of the VGG-19 network. The convolutional features are not de-correlated using PCA before Fisher Vector encoding, since this has been shown (Cimpoi et al, 2015) to deteriorate the results. The convolutional features are extracted after rescaling the image at 21 different scales. This results in 512-dimensional dense local features for each scaled image. Feature extraction takes about 5 seconds per image on a multi-core CPU. For our scale coding approaches, we keep a single, constant set of thresholds for all datasets.
For each problem instance, we construct a visual vocabulary using a Gaussian Mixture Model (GMM) with 16 components. In Figure 4 we plot the mean average precision (mAP) on the Willow and PASCAL 2010 datasets as a function of the number of Gaussian components. We observed no significant gain in classification performance from increasing the number of Gaussian components beyond 16. The parameters of this model are fit using a set of dense descriptors sampled over all scales on the training set. We randomly sample 100 descriptor points from each training image. The resulting sampled feature descriptors from the whole training set are then used to construct a GMM-based dictionary. We also performed experiments varying the number of feature samples per image; however, no improvement in performance was observed with more feature samples per image. We employ a GMM with diagonal covariances. Finally, the Fisher vector representations discussed in section 3 are constructed for each image. The Fisher vector encoding is performed with respect to the GMM means, diagonal covariances, and prior probabilities. The dense local features have dimensionality 512, and so our final scale-coded image representation is a concatenation of the Fisher vectors encoding the three scale categories. In our experiments, we use the standard VLFeat library (Vedaldi and Fulkerson, 2010) to construct the GMM-based vocabulary and the improved Fisher vector image representations. For classification, we employ SVMs with linear kernels on the concatenated Fisher vectors of the scale coding groups described above.
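The vocabulary construction described above could be sketched with scikit-learn's GaussianMixture (an assumption on tooling; the paper uses the VLFeat library), sampling 100 descriptors per training image and fitting a diagonal-covariance GMM with 16 components:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def build_vocabulary(per_image_descriptors, n_components=16, per_image=100,
                     seed=0):
    """Fit a diagonal-covariance GMM vocabulary from a random sample of
    descriptors (100 per training image, as in the text)."""
    rng = np.random.default_rng(seed)
    sample = []
    for D in per_image_descriptors:          # D: (n_i, dim) descriptors
        idx = rng.choice(len(D), size=min(per_image, len(D)), replace=False)
        sample.append(D[idx])
    sample = np.vstack(sample)
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type='diag', random_state=seed)
    gmm.fit(sample)
    return gmm
```

The fitted `gmm` then supplies the weights, means and diagonal covariances needed for the Fisher vector encoding of section 3.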
|Method||Willow||PASCAL 2010||PASCAL 2012||Stanford-40||HAT-27|
|VGG-19 FC Simonyan and Zisserman (2015)||–||–||–||72.0||61.2|
|MOP Gong et al (2014)||–||–||–||74.8||64.1|
|FV-CNN Cimpoi et al (2015)||–||–||–||75.4||64.5|
|Absolute + Relative + FC||92.1||82.7||80.3||80.0||70.6|
We perform experiments on five datasets to validate our approach:
The Willow Action Dataset, consisting of seven action categories: interacting with computer, photographing, playing music, riding bike, riding horse, running and walking. (Available at: http://www.di.ens.fr/willow/research/stillactions/)
The Stanford-40 Action Dataset, consisting of 9532 images of 40 different action categories such as gardening, fishing, applauding, cooking, brushing teeth, cutting vegetables, and drinking. (Available at: http://vision.stanford.edu/Datasets/40actions.html)
The PASCAL VOC 2010 Action Dataset, consisting of nine action categories: phoning, playing instrument, reading, riding bike, riding horse, running, taking photo, using computer and walking. (Available at: http://www.pascal-network.org/challenges/VOC/voc2010/)
The PASCAL VOC 2012 Action Dataset, consisting of ten different action classes: phoning, playing instrument, reading, riding bike, riding horse, running, taking photo, using computer, walking and jumping. (Available at: http://www.pascal-network.org/challenges/VOC/voc2012/)
The 27 Human Attributes Dataset (HAT-27), consisting of 9344 images of 27 different human attributes such as crouching, casual jacket, wedding dress, young and female. (Available at: https://sharma.users.greyc.fr/hatdb/)
The test sets for both the PASCAL VOC 2010 and 2012 datasets are withheld by the organizers and results must be submitted to an evaluation server. We report the results on the test sets in section 5.2.2 and provide a comparison with state-of-the-art methods. For the Willow (Delaitre et al, 2010), Stanford-40 (Yao et al, 2011) and HAT-27 (Sharma and Jurie, 2011) datasets we use the train and test splits provided by the respective authors.
Evaluation criteria: We follow the same evaluation protocol as used for each dataset. Performance is measured in average precision as area under the precision-recall curve. The final performance is calculated by taking the mean average precision (mAP) over all categories in each dataset.
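The evaluation protocol amounts to the following sketch, here using scikit-learn's `average_precision_score` as one common implementation of the area under the precision-recall curve (an assumption on tooling, not the evaluation servers' exact code):

```python
import numpy as np
from sklearn.metrics import average_precision_score

def mean_average_precision(y_true, y_score):
    """mAP: per-category average precision (area under the PR curve),
    then the mean over all categories."""
    # y_true: (n, C) binary labels; y_score: (n, C) classifier scores
    aps = [average_precision_score(y_true[:, c], y_score[:, c])
           for c in range(y_true.shape[1])]
    return float(np.mean(aps)), aps
```

For the PASCAL datasets this computation is performed by the evaluation server on the withheld test labels.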
| Method | Int. computer | Photographing | Playing music | Riding bike | Riding horse | Running | Walking | mAP |
|---|---|---|---|---|---|---|---|---|
| BOW-DPM (Delaitre et al, 2010) | 58.2 | 35.4 | 73.2 | 82.4 | 69.6 | 44.5 | 54.2 | 59.6 |
| POI (Delaitre et al, 2011) | 56.6 | 37.5 | 72.0 | 90.4 | 75.0 | 59.7 | 57.6 | 64.1 |
| DS (Sharma et al, 2012) | 59.7 | 42.6 | 74.6 | 87.8 | 84.2 | 56.1 | 56.5 | 65.9 |
| CF (Khan et al, 2013) | 61.9 | 48.2 | 76.5 | 90.3 | 84.3 | 64.7 | 64.6 | 70.1 |
| EPM (Sharma et al, 2013) | 64.5 | 40.9 | 75.0 | 91.0 | 87.6 | 55.0 | 59.2 | 67.6 |
| SC (Khan et al, 2014b) | 67.2 | 43.9 | 76.1 | 87.2 | 77.2 | 63.7 | 60.6 | 68.0 |
| SM-SP (Khan et al, 2014a) | 66.8 | 48.0 | 77.5 | 93.8 | 87.9 | 67.2 | 63.3 | 72.1 |
| EDM (Liang et al, 2014) | 86.6 | 90.5 | 89.9 | 98.2 | 92.7 | 46.2 | 58.9 | 80.4 |
| NSP (Mettes et al, 2016) | 88.6 | 61.8 | 93.4 | 98.8 | 98.4 | 69.4 | 62.3 | 81.7 |
| DPM-VR (Sicre and Jurie, 2015) | 84.9 | 72.0 | 91.2 | 96.9 | 93.6 | 73.4 | 61.0 | 81.9 |
| Method | Phoning | Playing instr. | Reading | Riding bike | Riding horse | Running | Taking photo | Using computer | Walking | mAP |
|---|---|---|---|---|---|---|---|---|---|---|
| Poselets (Maji et al, 2011) | 49.6 | 43.2 | 27.7 | 83.7 | 89.4 | 85.6 | 31.0 | 59.1 | 67.9 | 59.7 |
| IaC (Shapovalova et al, 2011) | 45.5 | 54.5 | 31.7 | 75.2 | 88.1 | 76.9 | 32.9 | 64.1 | 62.0 | 59.0 |
| POI (Delaitre et al, 2011) | 48.6 | 53.1 | 28.6 | 80.1 | 90.7 | 85.8 | 33.5 | 56.1 | 69.6 | 60.7 |
| LAP (Yao et al, 2011) | 42.8 | 60.8 | 41.5 | 80.2 | 90.6 | 87.8 | 41.4 | 66.1 | 74.4 | 65.1 |
| WPOI (Prest et al, 2012) | 55.0 | 81.0 | 69.0 | 71.0 | 90.0 | 59.0 | 36.0 | 50.0 | 44.0 | 62.0 |
| CF (Khan et al, 2013) | 52.1 | 52.0 | 34.1 | 81.5 | 90.3 | 88.1 | 37.3 | 59.9 | 66.5 | 62.4 |
| SM-SP (Khan et al, 2014a) | 52.2 | 55.3 | 35.4 | 81.4 | 91.2 | 89.3 | 38.6 | 59.6 | 68.7 | 63.5 |
| Method | Phoning | Playing instr. | Reading | Riding bike | Riding horse | Running | Taking photo | Using computer | Walking | Jumping | mAP |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Action Poselets (Maji et al, 2011) | 32.4 | 45.4 | 27.5 | 84.5 | 88.3 | 77.2 | 31.2 | 47.4 | 58.2 | 59.3 | 55.1 |
| MDF (Oquab et al, 2014) | 46.0 | 75.6 | 45.3 | 93.5 | 95.0 | 86.5 | 49.3 | 66.7 | 69.5 | 78.4 | 70.2 |
| WAB (Hoai et al, 2014) | 49.5 | 67.5 | 39.1 | 94.3 | 96.0 | 89.2 | 44.5 | 69.0 | 75.9 | 79.6 | 70.5 |
| Action R-CNN (Gkioxari et al, 2014) | 47.4 | 77.5 | 42.2 | 94.9 | 94.3 | 87.0 | 52.9 | 66.5 | 66.5 | 76.2 | 70.5 |
| RMP (Hoai, 2014) | 52.9 | 84.3 | 53.6 | 95.6 | 96.1 | 89.7 | 60.4 | 76.0 | 72.9 | 82.3 | 76.4 |
| TL (Khan et al, 2015) | 62.4 | 91.3 | 61.1 | 93.3 | 95.1 | 84.1 | 59.8 | 84.5 | 53.0 | 84.9 | 77.0 |
| VGG-19 + VGG-16 + Full Image (Simonyan and Zisserman, 2015) | 71.3 | 94.7 | 71.3 | 97.1 | 98.2 | 90.2 | 73.3 | 88.5 | 66.4 | 89.3 | 84.0 |
5.2.1 Baseline Scale-Coding Performance Analysis
We first compare our scale coding strategies with the baseline scale-invariant coding. Our baseline is the FV-CNN approach (Cimpoi et al, 2015), where multi-scale convolutional features are pooled into a single scale-invariant image representation. FV-CNN is further extended with spatial information by employing a spatial pyramid pooling scheme (Lazebnik et al, 2006) with two levels (1×1 and 2×2), yielding a total of 5 cells. We also compare our results with standard deep features obtained from the activations of the first fully connected (FC) layer of the CNN. Additionally, we compare our approach with Multi-scale Orderless Pooling (MOP) (Gong et al, 2014), which extracts FC activations at three levels: the 4096-dimensional CNN activation of the entire image patch (the person bounding box), and activations of patches at two finer levels, each pooled using VLAD encoding with 100 visual words. The three representations are concatenated into a single feature vector for classification. Note that we use the same VGG-19 network for all of these image encodings.
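The two-level spatial pyramid used above can be sketched as follows. For brevity this sketch pools hard-assigned visual words rather than the Fisher vectors used in our baselines, so it is a simplified illustration; the function name and cell layout (whole region plus a 2×2 grid, five cells in total) follow the description above, while the normalization is an assumption.

```python
import numpy as np

def spatial_pyramid_bow(assignments, xs, ys, width, height, vocab_size):
    """Two-level spatial pyramid (1x1 and 2x2): five L1-normalized
    BOW histograms, one per cell, concatenated."""
    xs, ys = np.asarray(xs), np.asarray(ys)

    def hist(mask):
        h = np.bincount(np.asarray(assignments)[mask],
                        minlength=vocab_size).astype(float)
        return h / max(h.sum(), 1.0)        # avoid division by zero for empty cells

    cells = [np.ones(len(xs), bool)]        # level 0: the whole region
    for ix in range(2):                     # level 1: 2x2 grid of cells
        for iy in range(2):
            cells.append((xs * 2 // width == ix) & (ys * 2 // height == iy))
    return np.concatenate([hist(m) for m in cells])
```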
Table 1 gives the baseline comparison on all five datasets. Since the PASCAL VOC 2010 and 2012 test sets are withheld by the organizers, performance for the baseline comparison is measured on the validation sets. The standard multi-scale invariant approach (FV-CNN) improves classification performance compared to the standard FC deep features, and the spatial pyramid based FV-CNN improves further over the standard FV-CNN method. Our absolute and relative scale coding approaches provide a consistent gain in performance on all datasets compared to baselines using features from the same deep network. Note that the standard scale-invariant FV-CNN and our scale coding schemes are constructed using the same visual vocabulary (GMM) and the same set of local features from the convolutional layer. Finally, a further gain in accuracy is obtained by combining the classification scores of our two scale coding approaches with the standard FC deep features; this combination is done by simply adding the three classifier outputs. On the Stanford-40 and HAT-27 datasets, this approach yields a considerable gain in mAP compared to the MOP approach employing FC features from the same network (VGG-19). These results suggest that the FC, absolute scale, and relative scale encodings carry complementary information that, when combined, yields results superior to each individual representation.
5.2.2 Comparison with the State-of-the-art
We now compare our approach with the state-of-the-art on the five benchmark datasets. In this section we report results for the combination of our relative and absolute scale coding strategies with the FC deep features. The combination is done by simply adding the three classifier outputs.
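The score-level combination described above can be sketched as below. The helper name `fuse_scores` is hypothetical; the sketch assumes one score matrix per representation (rows are test images, columns are categories) produced by comparably calibrated classifiers, which is why a plain sum is reasonable here.

```python
import numpy as np

def fuse_scores(*score_matrices):
    """Late fusion by element-wise addition of per-classifier score
    matrices (rows = test images, columns = categories)."""
    return np.sum(score_matrices, axis=0)

# Hypothetical per-representation scores for 2 images and 2 categories:
fc     = np.array([[0.2, 0.8], [0.6, 0.4]])   # FC deep features
abs_sc = np.array([[0.1, 0.9], [0.7, 0.3]])   # absolute scale coding
rel_sc = np.array([[0.3, 0.7], [0.5, 0.5]])   # relative scale coding
fused = fuse_scores(fc, abs_sc, rel_sc)
predictions = fused.argmax(axis=1)            # predicted category per image
```

Summing raw outputs keeps the fusion parameter-free; a weighted sum would be the natural extension if one representation were known to be more reliable.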
Willow: Table 2 compares our combined scale coding approach with the state-of-the-art on the Willow dataset, where it achieves the best reported performance. The shared part detectors approach of Mettes et al (2016) achieves an mAP of 81.7, while the part-based deep representation approach (Sicre and Jurie, 2015) obtains an mAP of 81.9. Our approach, without exploiting any part information, yields the best results on 6 out of 7 action categories, with an overall gain in mAP compared to Sicre and Jurie (2015).
PASCAL VOC 2010: Table 3 compares our combined scale coding approach with the state-of-the-art on the PASCAL VOC 2010 Action Recognition test set. The color fusion approach of Khan et al (2013) achieves an mAP of 62.4, the semantic pyramid approach of Khan et al (2014a) obtains an mAP of 63.5, and the method of Yao et al (2011), based on learning a sparse basis of attributes and parts, achieves an mAP of 65.1. Our approach yields a consistent improvement over the state-of-the-art on this dataset. Figure 5 shows the confusion matrix for our scale coding based approach; the differences with respect to the confusion matrix of the standard scale-invariant FV-CNN approach are superimposed for confusions with a large absolute change. Overall, our approach improves the classification results, with notable improvements for the playing music, reading and using computer action categories. Furthermore, our approach reduces confusion for all categories except walking.
PASCAL VOC 2012: In Table 4 we compare our approach with the state-of-the-art on the PASCAL VOC 2012 Action Recognition test set. Among existing approaches, Regularized Max Pooling (RMP) (Hoai, 2014) obtains an mAP of 76.4. The best results on this dataset (84.0 mAP) are obtained by combining the FC features of the VGG-16 and VGG-19 networks, extracted both from the full image and from the provided bounding box of the person. Our combined scale coding approach provides the best results on 3 out of 10 action categories on the PASCAL 2012 test set. It is worth mentioning that our scale coding based approach employs a single network (VGG-19) and does not exploit full-image information; combining our scale coding approaches with multiple deep networks is expected to further improve performance.
Stanford-40 dataset: In Table 5 we compare scale coding with state-of-the-art approaches: SB (Yao et al, 2011), CF (Khan et al, 2013), SM-SP (Khan et al, 2014a), Places (Zhou et al, 2014), D-EPM (Sharma et al, 2015) and TL (Khan et al, 2015). Stanford-40 is the most challenging action dataset, containing 40 categories. The semantic pyramids of Khan et al (2014a) combine spatial pyramid representations of full-body, upper-body and face regions using multiple visual cues. The work of Zhou et al (2014) uses hybrid deep features trained on ImageNet and the recently introduced large-scale Places scene dataset. The D-EPM approach (Sharma et al, 2015), based on expanded part models and deep features, and the transfer learning (TL) approach with deep features (Khan et al, 2015) obtain the best results among existing methods. Our combined scale coding approach achieves state-of-the-art results, with a gain in mAP compared to the TL based approach (Khan et al, 2015).
Ranking of Different Action Categories
| Method | Pouring liquid | Gardening | Using computer | Fishing |
|---|---|---|---|---|
| VGG-19 FC (Simonyan and Zisserman, 2015) | 186 (98) | 51 (5) | 104 (24) | 56 (5) |
| FV-CNN (Cimpoi et al, 2015) | 144 (62) | 57 (4) | 98 (22) | 52 (5) |
| Our Approach | 32 (5) | 17 (1) | 30 (1) | 38 (2) |

Entries show the absolute rank of the example image, with the number of higher-ranked false positives in parentheses.
Per-attribute AP over the 27 attribute categories (attributes indexed 1–27; names omitted), followed by mAP:

| Method | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| EPM (Sharma et al, 2013) | 85.9 | 93.6 | 67.3 | 77.2 | 97.9 | 98.0 | 74.6 | 24.0 | 62.7 | 94.0 | 38.9 | 68.9 | 64.2 | 36.2 |
| RAD (Joo et al, 2013) | 91.4 | 96.8 | 77.2 | 89.8 | 96.3 | 97.7 | 63.5 | 12.3 | 59.3 | 95.4 | 32.1 | 70.0 | 65.6 | 33.5 |
| SM-SP (Khan et al, 2014a) | 86.1 | 92.2 | 60.5 | 64.8 | 94.0 | 96.6 | 76.8 | 23.2 | 63.7 | 92.8 | 37.7 | 69.4 | 67.7 | 36.4 |
| D-EPM (Sharma et al, 2015) | 93.2 | 95.2 | 72.6 | 84.0 | 99.0 | 98.7 | 75.1 | 34.2 | 77.8 | 95.4 | 46.4 | 72.7 | 70.1 | 36.8 |

| Method | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | mAP |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| EPM (Sharma et al, 2013) | 49.7 | 24.3 | 37.7 | 61.6 | 40.0 | 57.1 | 44.8 | 39.0 | 46.8 | 61.3 | 32.2 | 64.2 | 43.7 | 58.7 |
| RAD (Joo et al, 2013) | 53.5 | 16.3 | 37.0 | 67.1 | 42.6 | 64.8 | 42.0 | 30.1 | 49.6 | 66.0 | 46.7 | 62.1 | 42.0 | 59.3 |
| SM-SP (Khan et al, 2014a) | 55.9 | 18.3 | 40.6 | 65.6 | 40.6 | 57.4 | 33.3 | 38.9 | 44.0 | 67.7 | 46.7 | 46.3 | 38.6 | 57.6 |
| D-EPM (Sharma et al, 2015) | 62.5 | 39.5 | 48.4 | 75.1 | 63.5 | 75.9 | 67.3 | 52.6 | 56.6 | 84.6 | 67.8 | 79.7 | 53.1 | 69.6 |
In Figure 6 we compare the per-category performance of our approach with two state-of-the-art approaches: D-EPM (Sharma et al, 2015) and FV-CNN (Cimpoi et al, 2015). Our scale coding based approach achieves the best performance on 37 out of 40 action categories on this dataset. A significant gain in performance is achieved especially for the drinking, washing dishes, taking photos, smoking, and waving hands action categories, compared to both state-of-the-art methods. Table 6 shows example images from the pouring liquid, gardening, using computer and fishing categories, together with the corresponding ranks for standard VGG-19 FC, FV-CNN and our scale coding based approach. The number indicates the absolute rank of the corresponding image in the list of all test images sorted by the probability for the corresponding class; a lower number implies higher confidence in the action class label. We also show, in parentheses, the number of false positives appearing before the example test image in the ranked list. Our approach obtains improved ranks on these images compared to the two standard approaches.
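The two rank statistics reported in Table 6 can be computed as in the following sketch (the function name is hypothetical): the 1-based position of a test image in the score-sorted list, and the number of false positives ranked above it.

```python
import numpy as np

def rank_and_false_positives(scores, labels, query_index):
    """Return (absolute 1-based rank of image `query_index` in the list
    sorted by descending score, number of false positives ranked above it)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    order = np.argsort(-scores)                       # descending by score
    pos = int(np.where(order == query_index)[0][0])   # 0-based position
    fps_above = int(np.sum(labels[order[:pos]] == 0)) # negatives ranked higher
    return pos + 1, fps_above
```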
Human Attributes (HAT-27) dataset: Finally, Table 7 compares our scale coding based approach with state-of-the-art methods on the Human Attributes (HAT-27) dataset, which contains 27 different human attributes. The expanded part-based approach of Sharma et al (2013) yields an mAP of 58.7, and semantic pyramids (Khan et al, 2014a), combining body part information in a spatial pyramid representation, obtain an mAP of 57.6. The approach of Joo et al (2013), based on learning a rich appearance part dictionary, achieves an mAP of 59.3. The D-EPM method (Sharma et al, 2015), based on deep features and expanded part models, achieves the best result among existing methods with an mAP of 69.6, also outperforming deep FC features from the VGG-19 network. On this dataset, our scale coding based approach outperforms the D-EPM method, yielding the best classification performance on 15 out of 27 attribute categories compared to the state-of-the-art.
Figure 7 illustrates the top four predictions of six attribute categories from the HAT-27 dataset. These examples show inter- and intra-class variations among different categories. The variations in scale and pose of persons make the problem of attribute classification challenging. Our scale coding based approach consistently improves the performance on this dataset.
5.2.3 Generality of Our Approach
We have validated our approach on two challenging problems: human attribute and action classification. However, our scale coding approach is generic and more broadly applicable to other recognition tasks. To validate its generality, we perform additional experiments on the popular MIT indoor scene 67 dataset (Quattoni and Torralba, 2009) for the scene recognition task. The dataset contains 15620 images of 67 indoor scene classes. We use the training and test split provided by the original authors, with around 80 training and 20 test images per category. Performance is measured as mean classification accuracy over all categories in the dataset. Most existing methods (Lin et al, 2015; Wei et al, 2015; Kulkarni et al, 2016; Herranz et al, 2016) report results using the VGG16 model pre-trained on either ImageNet or the Places dataset. For a fair comparison, we validate our absolute scale coding approach using the VGG16 model and compare only with approaches pre-trained on ImageNet.
Table 8 compares our absolute scale coding based approach with state-of-the-art methods on the MIT indoor scene 67 dataset. Among existing approaches, Herranz et al (2016) also investigated a multi-scale CNN architecture by training scale-specific networks on the ImageNet dataset; concatenating the FC7 features of several scale-specific networks yields a mean accuracy of 79.0. Instead, our approach constructs multi-scale image representations using a single pre-trained deep network, preserving scale information in the pooling stage. The results are further improved when combining the standard FC features with our scale coding approach, reaching a mean accuracy of 83.1. It is worth mentioning that a higher recognition score is obtained by Herranz et al (2016) when combining scale-specific networks trained on both ImageNet and the Places scene dataset. However, when using the same deep model architecture (VGG16) trained only on ImageNet, our result of 83.1 is superior to the 79.0 obtained by the multi-scale scale-specific networks (Herranz et al, 2016).
| Method | Accuracy (%) |
|---|---|
| DAG-CNN (Yang and Ramanan, 2015) | 77.5 |
| Deep Spatial Pyramid (Lin et al, 2015) | 78.3 |
| B-CNN (Lin et al, 2015) | 79.0 |
| FV-CNN (Cimpoi et al, 2015) | 79.2 |
| SPLeap (Kulkarni et al, 2016) | 73.5 |
| Standard VGG16 (Simonyan and Zisserman, 2015) | 69.6 |
| Standard VGG16 FT (Herranz et al, 2016) | 76.4 |
| Multi-Scale Network (Herranz et al, 2016) | 79.0 |
| Our Approach + Standard VGG16 | 83.1 |
In this paper we investigated the problem of encoding multi-scale information for still images in the context of human attribute and action recognition. Most state-of-the-art approaches based on the BOW framework compute local descriptors at multiple scales. However, multi-scale information is not explicitly encoded as all the features from different scales are pooled into a single scale-invariant histogram. In the context of human attribute and action recognition, we demonstrate that both absolute and relative scale information can be encoded in final image representations and that relaxing the traditional scale invariance commonly employed in image classification can lead to significant gains in recognition performance.
We proposed two alternative scale coding approaches that explicitly encode scale information in the final image representation. The absolute scale of local features is encoded by constructing separate representations for small, medium and large features, while the relative scale of the local features is encoded with respect to the size of the bounding box corresponding to the person instance in human action or attribute recognition problems. In both cases, the final image representation is obtained by concatenating the small, medium and large scale representations.
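The relative scale coding idea above can be illustrated with a short sketch. For brevity we pool hard-assigned visual words instead of the Fisher vectors used in our experiments, and the relative-scale bin thresholds below are hypothetical placeholders rather than the values we actually used; only the overall structure (scale measured relative to the person bounding box, one histogram per scale bin, concatenated) follows the description above.

```python
import numpy as np

def relative_scale_encoding(assignments, scales, box_height, vocab_size,
                            rel_bins=((0.0, 0.1), (0.1, 0.3), (0.3, np.inf))):
    """Concatenated small/medium/large BOW histograms, where each local
    feature's scale is measured relative to the person bounding-box height.
    `rel_bins` are illustrative thresholds, not tuned values."""
    rel = np.asarray(scales, dtype=float) / float(box_height)
    parts = []
    for lo, hi in rel_bins:
        mask = (rel >= lo) & (rel < hi)
        h = np.bincount(np.asarray(assignments)[mask],
                        minlength=vocab_size).astype(float)
        parts.append(h / max(h.sum(), 1.0))   # L1-normalize non-empty bins
    return np.concatenate(parts)
```

Absolute scale coding follows the same pattern, except that the feature scale in pixels is binned directly instead of being divided by the bounding-box height.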
Comprehensive experiments on five datasets demonstrate the effectiveness of our proposed approach. The results clearly demonstrate that our scale coding strategies outperform both the scale-invariant bag of deep features and the standard deep features extracted from the same network. An interesting future direction is the investigation of scale coding strategies for object detection and fine-grained object localization. We believe that our scale coding schemes could be very effective for representing candidate regions in object detection techniques based on bottom-up proposal of likely object regions.
This work has been funded by the projects TIN2013-41751 and of the Spanish Ministry of Science, the Catalan project 2014 SGR 221, the CHISTERA project PCIN-2015-251, SSF through a grant for the project SymbiCloud, VR (EMC), VR starting grant (2016-05543), through the Strategic Area for ICT research ELLIIT, the grant 251170 of the Academy of Finland. The calculations were performed using computer resources within the Aalto University School of Science “Science-IT” project. We also acknowledge the support from Nvidia and the NSC.
- Azizpour et al (2014) Azizpour H, Sullivan J, Carlsson S (2014) CNN features off-the-shelf: An astounding baseline for recognition. In: CVPRW, pp 512–519
- Bosch et al (2008) Bosch A, Zisserman A, Munoz X (2008) Scene classification using a hybrid generative/discriminative approach. PAMI 30(4):712–727
- Bourdev et al (2011) Bourdev L, Maji S, Malik J (2011) Describing people: A poselet-based approach to attribute classification. In: ICCV, pp 1543–1550
- Chatfield et al (2011) Chatfield K, Lempitsky V, Vedaldi A, Zisserman A (2011) The devil is in the details: an evaluation of recent feature encoding methods. In: BMVC
- Chatfield et al (2014) Chatfield K, Simonyan K, Vedaldi A, Zisserman A (2014) Return of the devil in the details: Delving deep into convolutional nets. In: BMVC
- Cimpoi et al (2015) Cimpoi M, Maji S, Vedaldi A (2015) Deep filter banks for texture recognition and segmentation. In: CVPR, pp 3828–3836
- Dalal and Triggs (2005) Dalal N, Triggs B (2005) Histograms of oriented gradients for human detection. In: CVPR, pp 886–893
- Delaitre et al (2010) Delaitre V, Laptev I, Sivic J (2010) Recognizing human actions in still images: a study of bag-of-features and part-based representations. In: BMVC
- Delaitre et al (2011) Delaitre V, Sivic J, Laptev I (2011) Learning person-object interactions for action recognition in still images. In: NIPS, pp 1503–1511
- Everingham et al (2010) Everingham M, Gool LJV, Williams CKI, Winn JM, Zisserman A (2010) The pascal visual object classes (voc) challenge. IJCV 88(2):303–338
- van Gemert et al (2010) van Gemert J, Veenman C, Smeulders A, Geusebroek JM (2010) Visual word ambiguity. PAMI 32(7):1271–1283
- Girshick et al (2014) Girshick R, Donahue J, Darrell T, Malik J (2014) Rich feature hierarchies for accurate object detection and semantic segmentation. In: CVPR, pp 580–587
- Gkioxari et al (2014) Gkioxari G, Hariharan B, Girshick R, Malik J (2014) R-CNNs for pose estimation and action detection. arXiv preprint arXiv:1406.5212
- Gong et al (2014) Gong Y, Wang L, Guo R, Lazebnik S (2014) Multi-scale orderless pooling of deep convolutional activation features. In: ECCV, pp 392–407
- Guo and Lai (2014) Guo G, Lai A (2014) A survey on still image based human action recognition. PR 47(10):3343–3361
- Harzallah et al (2009) Harzallah H, Jurie F, Schmid C (2009) Combining efficient object localization and image classification. In: ICCV
- Herranz et al (2016) Herranz L, Jiang S, Li X (2016) Scene recognition with cnns: Objects, scales and dataset bias. In: CVPR
- Hoai (2014) Hoai M (2014) Regularized max pooling for image categorization. In: BMVC
- Hoai et al (2014) Hoai M, Ladicky L, Zisserman A (2014) Action recognition from weak alignment of body parts. In: BMVC
- Jegou et al (2010) Jegou H, Douze M, Schmid C (2010) Improving bag-of-features for large scale image search. IJCV 87(3):316–336
- Jégou et al (2010) Jégou H, Douze M, Schmid C, Pérez P (2010) Aggregating local descriptors into a compact image representation. In: CVPR, pp 3304–3311
- Joo et al (2013) Joo J, Wang S, Zhu SC (2013) Human attribute recognition by rich appearance dictionary. In: ICCV, pp 721–728
- Khan et al (2012) Khan FS, van de Weijer J, Vanrell M (2012) Modulating shape features by color attention for object recognition. IJCV 98(1):49–64
- Khan et al (2013) Khan FS, Anwer RM, van de Weijer J, Bagdanov A, Lopez A, Felsberg M (2013) Coloring action recognition in still images. IJCV 105(3):205–221
- Khan et al (2014a) Khan FS, van de Weijer J, Anwer RM, Felsberg M, Gatta C (2014a) Semantic pyramids for gender and action recognition. TIP 23(8):3633–3645
- Khan et al (2014b) Khan FS, van de Weijer J, Bagdanov A, Felsberg M (2014b) Scale coding bag-of-words for action recognition. In: ICPR, pp 1514–1519
- Khan et al (2015) Khan FS, Xu J, van de Weijer J, Bagdanov A, Anwer RM, Lopez A (2015) Recognizing actions through action-specific person detection. TIP 24(11):4422–4432
- Koenderink (1984) Koenderink J (1984) The structure of images. Biological cybernetics 50(5):363–370
- Koskela and Laaksonen (2014) Koskela M, Laaksonen J (2014) Convolutional network features for scene recognition. In: ACM Multimedia, pp 1169–1172
- Kulkarni et al (2016) Kulkarni P, Jurie F, Zepeda J, Perez P, Chevallie L (2016) Spleap: Soft pooling of learned parts for image classification. In: ECCV
- Lazebnik et al (2006) Lazebnik S, Schmid C, Ponce J (2006) Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In: CVPR, pp 2169–2178
- LeCun et al (1989) LeCun Y, Boser B, Denker J, Henderson D, Howard R, Hubbard W, Jackel L (1989) Handwritten digit recognition with a back-propagation network. In: NIPS, pp 396–404
- Liang et al (2014) Liang Z, Wang X, Huang R, Lin L (2014) An expressive deep model for human action parsing from a single image. In: ICME, pp 1–6
- Lim et al (2015) Lim CH, Vats E, Chan CS (2015) Fuzzy human motion analysis: A review. PR 48(5):1773–1796
- Lin et al (2015) Lin TY, RoyChowdhury A, Maji S (2015) Bilinear cnn models for fine-grained visual recognition. In: ICCV
- Liu et al (2015) Liu L, Shen C, van den Hengel A (2015) The treasure beneath convolutional layers: Cross-convolutional-layer pooling for image classification. In: CVPR, pp 4749–4757
- Lowe (2004) Lowe D (2004) Distinctive image features from scale-invariant points. IJCV 60(2):91–110
- Maji et al (2011) Maji S, Bourdev LD, Malik J (2011) Action recognition from a distributed representation of pose and appearance. In: CVPR, pp 3177–3184
- Maragos (1989) Maragos P (1989) Pattern spectrum and multiscale shape representation. PAMI 11(7):701–716
- Mettes et al (2016) Mettes P, van Gemer J, Snoek C (2016) No spare parts: Sharing part detectors for image categorization. CVIU 152:131–141
- Mikolajczyk and Schmid (2004a) Mikolajczyk K, Schmid C (2004a) Scale and affine invariant interest point detectors. IJCV 60(1):63–86
- Nowak et al (2006) Nowak E, Jurie F, Triggs B (2006) Sampling strategies for bag-of-features image classification. In: ECCV, pp 490–503
- Oquab et al (2014) Oquab M, Bottou L, Laptev I, Sivic J (2014) Learning and transferring mid-level image representations using convolutional neural networks. In: CVPR, pp 1717–1724
- Perronnin and Dance (2007) Perronnin F, Dance C (2007) Fisher kernels on visual vocabularies for image categorization. In: CVPR, pp 1–8
- Perronnin et al (2010) Perronnin F, Sanchez J, Mensink T (2010) Improving the fisher kernel for large-scale image classification. In: ECCV, pp 143–156
- Prest et al (2012) Prest A, Schmid C, Ferrari V (2012) Weakly supervised learning of interactions between humans and objects. PAMI 34(3):601–614
- Quattoni and Torralba (2009) Quattoni A, Torralba A (2009) Recognizing indoor scenes. In: CVPR
- Rojas et al (2010) Rojas D, Khan FS, van de Weijer J, Gevers T (2010) The impact of color on bag-of-words based object recognition. In: ICPR, pp 1549–1553
- Russakovsky et al (2014) Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, Huang Z, Karpathy A, Khosla A, Bernstein M, et al (2014) ImageNet large scale visual recognition challenge. arXiv preprint arXiv:1409.0575
- Sánchez et al (2013) Sánchez J, Perronnin F, Mensink T, Verbeek J (2013) Image classification with the fisher vector: Theory and practice. IJCV 105(3):222–245
- van de Sande et al (2010) van de Sande K, Gevers T, Snoek CGM (2010) Evaluating color descriptors for object and scene recognition. PAMI 32(9):1582–1596
- Shabani et al (2013) Shabani AH, Zelek J, Clausi D (2013) Multiple scale-specific representations for improved human action recognition. PRL 34(15):1771–1779
- Shapovalova et al (2011) Shapovalova N, Gong W, Pedersoli M, Roca FX, Gonzalez J (2011) On importance of interactions and context in human action recognition. In: IbPRIA, pp 58–66
- Sharma and Jurie (2011) Sharma G, Jurie F (2011) Learning discriminative spatial representation for image classification. In: BMVC
- Sharma et al (2012) Sharma G, Jurie F, Schmid C (2012) Discriminative spatial saliency for image classification. In: CVPR, pp 3506–3513
- Sharma et al (2013) Sharma G, Jurie F, Schmid C (2013) Expanded parts model for human attribute and action recognition in still images. In: CVPR, pp 652–659
- Sharma et al (2015) Sharma G, Jurie F, Schmid C (2015) Expanded parts model for semantic description of humans in still images. arXiv preprint arXiv:1509.04186
- Sicre and Jurie (2015) Sicre R, Jurie F (2015) Discriminative part model for visual recognition. CVIU 141:28–37
- Simonyan and Zisserman (2015) Simonyan K, Zisserman A (2015) Very deep convolutional networks for large-scale image recognition. In: ICLR
- Vedaldi and Fulkerson (2010) Vedaldi A, Fulkerson B (2010) Vlfeat: an open and portable library of computer vision algorithms. In: ACM MM, pp 1469–1472
- Vedaldi et al (2009) Vedaldi A, Gulshan V, Varma M, Zisserman A (2009) Multiple kernels for object detection. In: ICCV, pp 606–613
- Wei et al (2015) Wei XS, Gao BB, Wu J (2015) Deep spatial pyramid ensemble for cultural event recognition. In: ICCV Workshop
- van de Weijer et al (2009) van de Weijer J, Schmid C, Verbeek JJ, Larlus D (2009) Learning color names for real-world applications. TIP 18(7):1512–1524
- Witkin (1984) Witkin A (1984) Scale-space filtering: A new approach to multi-scale description. In: ICASSP
- Yang and Ramanan (2015) Yang S, Ramanan D (2015) Multi-scale recognition with dag-cnns. In: ICCV
- Yao et al (2011) Yao B, Jiang X, Khosla A, Lin AL, Guibas LJ, Li FF (2011) Human action recognition by learning bases of action attributes and parts. In: ICCV, pp 1331–1338
- Zhang et al (2014) Zhang N, Paluri M, Ranzato M, Darrell T, Bourdev L (2014) Panda: Pose aligned networks for deep attribute modeling. In: CVPR, pp 1637–1644
- Zhou et al (2014) Zhou B, Lapedriza A, Xiao J, Torralba A, Oliva A (2014) Learning deep features for scene recognition using places database. In: NIPS, pp 487–495
- Zhou et al (2010) Zhou X, Yu K, Zhang T, Huang T (2010) Image classification using super-vector coding of local image descriptors. In: ECCV, pp 141–154
- Zhu et al (2012) Zhu X, Li M, Li X, Yang Z, Tsien J (2012) Robust action recognition using multi-scale spatial-temporal concatenations of local features as natural action structures. PLoS One 7(10)
- Ziaeefard and Bergevin (2015) Ziaeefard M, Bergevin R (2015) Semantic human activity recognition: A literature review. PR 48(8):2329–2345