First-Take-All: Temporal Order-Preserving Hashing for 3D Action Videos

Jun Ye et al. ∙ University of Central Florida ∙ June 6, 2015

With the prevalence of commodity depth cameras, the new paradigm of user interfaces based on 3D motion capturing and recognition has dramatically changed the way humans interact with computers. Human action recognition, as one of the key components of these devices, plays an important role in guaranteeing the quality of the user experience. Although model-driven methods have achieved considerable success, they cannot provide a scalable solution for efficiently storing, retrieving and recognizing actions in large-scale applications. These models are also vulnerable to temporal translation and warping, as well as to variations in motion scales and execution rates. To address these challenges, we treat 3D human action recognition as a video-level hashing problem and propose a novel First-Take-All (FTA) hashing algorithm capable of hashing an entire video into hash codes of fixed length. We demonstrate that this FTA algorithm produces a compact representation of the video that is invariant to the above-mentioned variations, through which action recognition can be solved by an efficient nearest neighbor search based on the Hamming distance between the FTA hash codes. Experiments on public 3D human action datasets show that the FTA algorithm can reach a recognition accuracy higher than 80% at a low bit cost per frame, given that there are about 65 frames per video over the datasets.


1 Introduction

The recent advances in commodity depth sensors such as Microsoft Kinect, Intel RealSense and Leap Motion have dramatically changed the way of human-computer interaction. The new generation of user interfaces based on 3D motion capturing and recognition makes the interactions between humans and computers easier than ever before. These interfaces have already enabled a wide range of applications including video games, education, business and healthcare. Behind all these applications, 3D human action recognition plays a key role and directly determines the quality of the user experience.

Although a great number of works [10, 17, 11, 21, 16, 18] have been developed for solving the problem of automatic human action recognition, modeling the dynamic structures of human actions remains challenging due to the temporal translation and warping of the action sequences, as well as the variations in motion scales and execution rates of the actions [26]. More importantly, the current model-driven solutions normally require a dedicated classifier for each class of actions and cannot provide a scalable solution for large-scale action recognition applications.

Inspired by the success of hashing techniques in image retrieval [3], we treat 3D human action recognition as a hashing problem of encoding videos with compact binary sequences of fixed length, so that the similarity between videos can be compared by the Hamming distance between their hash codes while preserving the intrinsic temporal structure of actions. Action recognition can thus be solved by an efficient approximate nearest neighbor search based on the hash codes of videos. Most of the existing hashing algorithms [3, 2, 1, 8], however, are developed for images with a fixed resolution/dimension.

We note that there exists a hashing algorithm [23] that handles large-scale video datasets while considering temporal consistency. However, this method still applies hashing to each individual frame rather than to the entire sequence. Hence, to measure the video similarity, one has to compute the average similarity between each pair of frames, which can be computationally prohibitive, especially with the ever-growing length of videos in many applications. To address this problem, we propose to hash the entire video as a whole into a bit sequence of fixed length so that the Hamming distance can be computed directly. There are two challenges facing the hashing of an entire video: (1) encoding the temporal structure of actions, and (2) dealing with the varying length of videos to generate hash codes of fixed length. Ideally, the generated hash codes should be resilient against temporal translation and warping, as well as against variations in motion scales and execution rates.

To address the above challenges, we propose a novel temporal order-preserving hashing algorithm, namely First-Take-All (FTA). The FTA hashing algorithm first applies multiple random projections to translate a video into several sequences of latent postures. Then it encodes the video by the temporal order of the occurrence of these postures. Specifically, in each iteration that generates a new hash code, a group of latent postures is randomly selected, and the video is encoded by the index of the posture that is acted first. After several iterations, the set of hash codes generated in this FTA fashion captures the temporal order in which latent postures are acted in a video, and the similarity between videos can be measured by computing the Hamming distance between the FTA hash codes. Since the hashing is applied to the entire video, it normally achieves a low bit rate per frame, making it much more efficient than the model-based approaches. In addition, we will show that the FTA hashing is invariant to temporal translation and warping, as well as to variations in motion scales and execution rates, as long as the temporal order of the posture sequence does not change within a class of actions. This makes FTA robust against the intra- and inter-class variations caused by individual actors.

The FTA hashing algorithm extends the Winner-Take-All (WTA) algorithm [20], but differs from it in several significant aspects. First of all, WTA is not an algorithm that can be applied to hashing sequences of varied length. In fact, it must assume that all input vectors reside in a feature space of fixed dimension. For this reason, WTA has only been applied to hash data of fixed length such as images and text. Second, WTA compares the order of features chosen from the original space, which unnecessarily limits its ability to capture the ranking structure among various subspaces. On the contrary, the proposed FTA hashing is more expressive in representing the temporal order of latent postures obtained by projecting the sequence into several subspaces.

The main contributions of the paper are:

  1. We propose a novel FTA algorithm for hashing videos of varied length. The hashing algorithm is invariant to temporal translation, scale variation and execution rate variation; and

  2. We perform extensive experimental studies on three public 3D human action datasets and demonstrate the performance of the proposed FTA hashing algorithm by comparing it with a baseline method that does not leverage the temporal-order information.

To the best of our knowledge, this is the first work to perform hashing on entire video sequences of varied length for the recognition of human actions. It is also worth noting that the proposed FTA hashing is not limited to human action videos; it can also potentially be applied as a generic temporal hashing algorithm to other types of video sequences.

The remainder of the paper is organized as follows. Section 2 briefly reviews the related work. The First-Take-All hashing algorithm is introduced and discussed in Section 3. Experiments and performance study are presented in Section 4. Finally, Section 5 concludes the paper.

2 Related work

We review the related work in the following two categories.

Human Action Recognition and Retrieval

Modeling the temporal structure of video sequences is one of the most challenging problems in human action recognition and has attracted intensive research. Many of the existing approaches focus on extracting local spatio-temporal features and do not explicitly model the temporal patterns of the action sequence. Most of these works are histogram-based and adopt the bag-of-words framework. For example, in [7], a bag of 3D points is sampled from the depth maps and clustered to model the dynamics of human actions. Similar ideas are presented in [22], where Histogram of Oriented Gradients (HOG) features are extracted from the depth motion maps to classify human actions. The histogram of 3D joints (HOJ3D) [18] and the histogram of visual words [27] are also employed to describe action sequences using joint features. Unfortunately, these histogram-based methods do not preserve the temporal order of the primitive postures of an action and may perform poorly in distinguishing between actions composed of similar postures arranged in different temporal orders.

It is obvious that the temporal characteristics of human actions must be fully explored in order to achieve a high recognition rate. Motion template-based approaches [10, 26] are introduced to model the temporal dynamics of actions, where the Dynamic Time Warping (DTW) algorithm is used to align sequences of varied length and execution rate. In contrast, the Temporal Pyramid [17, 11] attempts to represent the temporal structure of a sequence by uniformly subdividing it into several partitions. With a uniform temporal partition, however, the temporal pyramid is less flexible in handling execution rate variation. The Adaptive Temporal Pyramid [21] is proposed to overcome this problem by adaptively dividing the temporal sequence according to the motion energy. Vemulapalli et al. [16] present a body-part representation of the human skeleton, where the temporal dynamics, expressed as 3D transformations, are projected onto a curved manifold in the Lie group.

Hashing Algorithms

Hashing algorithms are widely adopted in the approximate nearest neighbor search problem [3, 2]. There are plenty of works aiming at achieving a higher retrieval rate with shorter code length. We categorize these existing hashing algorithms into two families – the space-partitioning methods and the ranking-based methods.

The space-partitioning methods, such as Locality Sensitive Hashing (LSH) [3] and Compressed Hashing [8], normally partition the whole feature space into a sequence of half-spaces and quantize the original features into binary bits according to these half-spaces. In order to preserve the Euclidean distances between high-dimensional vectors with sufficient precision, long codes are usually required by these methods. To overcome this drawback, Multi-Probe LSH [9] and entropy-based LSH [12] have been proposed to reduce the storage burden at the cost of increased query time. On the other hand, different paradigms of hashing algorithms have been developed to approximate distance metrics other than the Euclidean distance, such as the $\ell_p$-norm distance [2] and the Mahalanobis distance [5]. Spherical LSH [14] works for hashing a set of points on the hypersphere of an input space. There also exist several works kernelizing the LSH approach by considering the Reproducing Kernel Hilbert Space (RKHS) [4, 13].

Unlike the space-partitioning methods, the ranking-based hashing methods encode the ordinal relations between the original features rather than their magnitudes. For example, Min Hashing [1] approximates the Jaccard similarity coefficient between two sets by encoding a set with the minimal value of a hash function over its members. Recently, the Winner-Take-All (WTA) Hash [20] has been proposed to encode the magnitude orders of randomly permuted features. The resultant hash codes are scale-invariant and often more resilient to noise. Some other ranking-based hashing algorithms, like the Rank-Sensitive Hash [15], can be regarded as special cases of WTA with a particular window size of the ordinal comparison.

However, to the best of our knowledge, there exists no hashing algorithm aimed at encoding the temporal structure of entire sequences of varied length. Ye et al. [23] introduce a supervised hashing algorithm for video sequences; however, it still performs hashing on individual frames rather than on the entire video, so the video similarity has to be computed as the average Hamming distance between each pair of frames. In contrast, we attempt to expand the scope of ranking-based hashing methods to directly explore the temporal order of actions at the video level. This yields much more compact hash codes for videos, and the similarity between videos can be computed directly by the Hamming distance between the video-level hash codes.

3 Temporal Order-Preserving Hashing

In order to hash human action sequences into binary bits while preserving their temporal order, we apply random projections to a video. Each projection generates a sequence of confidence scores measuring whether each frame of the video belongs to an (unlabeled) posture corresponding to that random projection. We generate several random projections that are applied to the video, and encode the video with the index of the random projection whose peak confidence score comes first. This is why our algorithm is named First-Take-All (FTA).

This process is repeated several times, and a sequence of hash codes is generated for the video. In this fashion, the temporal order in which the postures are performed in the video is encoded. In Section 3.1, we formalize the algorithm, and then explain the intuitive idea behind the formal description.

3.1 Formal Description

In this section, we formally present the proposed temporal order-preserving hashing algorithm for video sequences – First-Take-All (FTA).

Random Projections for Latent Postures

The method can be formulated as follows. Suppose a video of length $n$ is represented as a sequence of frames $X = [x_1, x_2, \ldots, x_n]$, where each frame $x_t$, $t = 1, \ldots, n$, is represented by a $d$-dimensional feature vector. First, we generate $m$ random projections $w_1, \ldots, w_m$, each of which is drawn from a multivariate Gaussian distribution, i.e., $w_j \sim \mathcal{N}(0, I_d)$ for $j = 1, \ldots, m$.

As aforementioned, each random projection can be interpreted as forming a linear subspace for an unknown posture. Hence, the inner product $s_{j,t} = w_j^\top x_t$ represents the confidence score of posture $j$ on frame $x_t$. In a compact matrix form, we can collect these scores into a matrix $S$ of size $m \times n$, each row of which represents the application of one random projection to the entire video sequence.
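To make this step concrete, the following NumPy sketch computes the posture-score matrix under the assumptions above (the function name, shapes and seed handling are illustrative and not part of the original formulation):

import numpy as np

def posture_scores(X, m, seed=0):
    # X: d x n matrix whose t-th column is the d-dimensional feature of frame t.
    d, n = X.shape
    rng = np.random.default_rng(seed)
    # Draw m random projections, one per latent posture, from a standard Gaussian.
    W = rng.standard_normal((m, d))
    # S[j, t] = w_j . x_t is the confidence score of posture j on frame t.
    S = W @ X
    return S, W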

First-Take-All

Our goal is to design a hash coding mechanism that preserves the temporal order structure. With the score sequence of a latent posture in each row of the resultant matrix $S$, we can encode a video by finding which posture is performed first. Formally, we randomly choose $K$ rows, indexed by $\mathcal{R} = \{r_1, \ldots, r_K\}$, from $S$, each row representing the sequence of a latent posture. When $K = 2$, this is a pair-wise comparison of the temporal order between two postures; when $K > 2$, the comparison is made among multiple postures.

To decide which of the $K$ postures comes first, we need to test when each posture is performed for the first time in the sequence. Here we introduce two approaches.

FTA by peak The first approach is to use the peak of the confidence score to denote the first time a posture is acted. Suppose for posture $r$, its peak is attained at frame $t_r$, i.e.,

$$t_r = \arg\max_{1 \le t \le n} s_{r,t}.$$

So $t_r$ denotes the time when the confidence that posture $r$ is performed reaches its peak. Note that here we require the confidence score to be larger than a threshold $\epsilon$ for a frame to be considered as posture $r$. This is used to rule out less salient postures and provide a more robust result for the following temporal order comparison. If the confidence score for posture $r$ never passes this threshold, it implies the posture might have never been performed. In this case, we set $t_r$ to $+\infty$ by convention.

Then the posture that comes first is given by

$$h = \arg\min_{r \in \mathcal{R}} t_r, \qquad (1)$$

where $\mathcal{R}$ is the set of rows selected for the current iteration. We will use $h$ as the hash code for the video, which denotes the index of the posture first performed in the sequence. If all the involved $t_r$ are $+\infty$ (i.e., none of the $K$ postures has been performed), the above temporal-order comparison fails, and we output a special code to denote this case.

Figure 1: Illustration of the FTA by peak approach on three posture sequences.

We illustrate an FTA by peak example in Figure 1, where we plot three posture sequences, each corresponding to a row of the matrix $S$. For simplicity, we assume that the peaks of all three sequences pass the preset threshold $\epsilon$. If we set $K = 3$, all three sequences are involved, and the hash code of the current iteration is the index of the sequence whose peak comes first. Otherwise, if we set $K = 2$, a pair of sequences is randomly chosen, and we output the index of the chosen sequence whose peak comes earlier, encoding that this posture is acted first.

FTA by peak is a rather conservative approach to deciding the temporal order between postures: usually, a posture has already been acted for a while well before its confidence reaches the peak. In contrast to this conservative approach, we introduce a more aggressive approach below.

FTA by thresholding In this approach, we assume that the first time a posture is performed in a sequence is when its confidence score reaches the preset threshold $\epsilon$ for the first time. Formally, we have

$$t_r = \min\{\, t \mid s_{r,t} > \epsilon \,\},$$

where $r$ ranges over the selected postures. By convention, we set $t_r$ to $+\infty$ if the above set is empty, denoting that this posture has never been acted in the sequence. Accordingly, the index of the posture first performed is again determined by Eq. (1).

In the extreme case that no posture from $\mathcal{R}$ passes the threshold test, all $t_r$ would be $+\infty$ for $r \in \mathcal{R}$. Then we produce a special hash code to encode the video. Therefore, in each iteration we have a code book with $K+1$ entries for the encoding of a video. Apparently, a $(K+1)$-ary code can be represented by $\lceil \log_2(K+1) \rceil$ binary bits. We repeat the above coding iteration $L$ times, obtaining $L$ $(K+1)$-ary codes, or equivalently $L \lceil \log_2(K+1) \rceil$ binary bits, to encode an input sequence.
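As an illustration consistent with the setting used later in the experiments ($K = 2$ and a code length of 1000), the resulting bit cost per video works out to

\[
L \cdot \lceil \log_2 (K+1) \rceil \;=\; 1000 \cdot \lceil \log_2 3 \rceil \;=\; 1000 \cdot 2 \;=\; 2000 \ \text{binary bits}.
\]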

An algorithmic overview of our method is shown in Algorithm 1.

procedure FTA($X$, $m$, $K$, $L$, $\epsilon$)
    Generate $m$ random projections $w_1, \ldots, w_m$ from a Gaussian distribution;
    Set $S \leftarrow W^\top X$, the $m \times n$ posture-score matrix;
    Initialize $H$ as an empty code sequence;
    for $l \leftarrow 1$ to $L$ do
        Randomly select $K$ rows $\mathcal{R} = \{r_1, \ldots, r_K\}$ from $S$;
        for each $r \in \mathcal{R}$ do
            Compute the first-acting time $t_r$ for row $r$ (by peak or by thresholding);
        end for
        Set $h_l \leftarrow \arg\min_{r \in \mathcal{R}} t_r$, or the special code if all $t_r = +\infty$;
        Append $h_l$ to $H$;
    end for
    return $H$
end procedure
Algorithm 1 FTA Hashing
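For concreteness, below is a minimal Python sketch of Algorithm 1 operating on the score matrix from Section 3.1; the variable names, default arguments, and the convention that the special code is 0 while posture indices start at 1 are our own assumptions rather than details fixed by the paper:

import numpy as np

def fta_hash(S, K=2, L=1000, eps=0.0, mode="peak", seed=0):
    # S: m x n posture-score matrix (one row per latent posture).
    m, n = S.shape
    rng = np.random.default_rng(seed)
    codes = np.zeros(L, dtype=np.int64)
    for l in range(L):
        rows = rng.choice(m, size=K, replace=False)   # select K postures
        first = np.full(K, np.inf)                    # first-acting times t_r
        for k, r in enumerate(rows):
            s = S[r]
            if mode == "peak":                        # FTA by peak
                t = int(np.argmax(s))
                if s[t] > eps:
                    first[k] = t
            else:                                     # FTA by thresholding
                hit = np.flatnonzero(s > eps)
                if hit.size > 0:
                    first[k] = hit[0]
        if np.isfinite(first).any():
            codes[l] = int(np.argmin(first)) + 1      # index of earliest posture
        # else: keep the special code 0 (no selected posture was acted)
    return codes

With $K = 2$, each entry of the returned code takes one of three values, so the $L$-entry code can be packed into $2L$ binary bits as discussed above.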

Apparently, the complexity of computing one hash code is $O(Kn)$, and the total complexity for $L$ hash codes, together with the $O(mnd)$ cost of the random projections, is $O(KLn + mnd)$, which is linear with respect to the input arguments.

3.2 The Invariance Properties of FTA

(a) translation invariance
(b) scale invariance
(c) execution rate invariance
Figure 2: Running examples to demonstrate the invariance properties of the FTA hashing. Two postures (red and blue) are investigated on two videos (solid and dashed) of the same action class.

As mentioned in Section 1, it is very challenging to model the temporal characteristics of action videos due to temporal translation and warping, as well as variations in motion scales and execution rates. In this subsection, we show that the FTA hashing is insensitive to the above variations thanks to its temporal order-preserving nature.

Figure 2 illustrates running examples of these properties. We discuss the examples using the FTA by peak version and set $K$ to 2; the result is equally applicable to FTA by thresholding.

Following the notation in Section 3.1, we use $X$ and $X'$ to denote two video sequences of the same action class. Two postures, $a$ and $b$, are obtained by projecting both video sequences into the corresponding subspaces, and we apply FTA by peak to encode the temporal order of these postures. We use $s_a, s_b$ and $s'_a, s'_b$ to denote the confidence scores of postures $a$ and $b$ on $X$ and $X'$, respectively. The peaks occur at times $t_a, t_b$ for video $X$, and at $t'_a, t'_b$ for video $X'$.

Temporal Translation Invariance In Figure 2(a), although the peaks of postures $a$ (red) and $b$ (blue) are at different locations for $X$ and $X'$ due to the temporal translation, the relationships $t_a < t_b$ and $t'_a < t'_b$ are consistent for these two videos of the same class. The FTA hashing produces the code 1 for both videos.

Motion Scale Invariance In Figure 2(b), although $s_a$ and $s_b$ (solid lines) have a larger scale than $s'_a$ and $s'_b$ (dashed lines), their peak order remains the same and the FTA hashing produces the same hash code for the videos of the same class.

Execution Rate Invariance In Figure 2(c), $s_a$ and $s_b$ (solid lines) are squeezed due to execution rate variation; for example, some people perform the action faster than others. However, the order of $t_a$ and $t_b$ is not affected by this variation, and the FTA hashing produces the same code for $X$ and $X'$.
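A toy numeric check of these three properties, using synthetic Gaussian-bump confidence curves (all names and numbers below are hypothetical and only serve to illustrate the argument):

import numpy as np

def peak_code(s_a, s_b):
    # K = 2 comparison: return 1 if posture a peaks first, 2 otherwise.
    return 1 if np.argmax(s_a) <= np.argmax(s_b) else 2

t = np.arange(100)
bump = lambda c: np.exp(-0.5 * ((t - c) / 5.0) ** 2)
s_a, s_b = bump(30), bump(60)                             # posture a is acted before posture b

assert peak_code(s_a, s_b) == 1                           # reference code
assert peak_code(np.roll(s_a, 15), np.roll(s_b, 15)) == 1 # temporal translation
assert peak_code(3.0 * s_a, 3.0 * s_b) == 1               # motion scale change
assert peak_code(bump(15), bump(30)) == 1                 # faster execution rate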

4 Experiments

In this section, we present the experimental results on several 3D action video datasets.

4.1 Baseline Method

As performing hashing at the video level is a new problem, to the best of our knowledge there are no existing methods in the literature that can serve as a baseline. We therefore introduce a "Bag-of-Words (BOW)" style method as the baseline for comparison. This method also applies the random projections to each frame to produce a sequence of latent posture scores, but it views each video as a bag and each posture as an item in the bag. In other words, it does not consider any temporal order between the postures. Specifically, the BOW algorithm performs a thresholding test to decide whether a posture $j$ exists in the video, i.e., the hash bit is given as

$$h_j = \begin{cases} 1, & \text{if } \max_{t} s_{j,t} > \epsilon, \\ 0, & \text{otherwise,} \end{cases}$$

where $\epsilon$ is a threshold detecting the existence of the posture according to its confidence score in the sequence. The above process is iterated to generate a sequence of binary hash bits. Since the BOW method does not leverage any temporal information, it serves as a baseline to validate the advantage of the temporal order-preserving FTA hashing algorithm.
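A minimal sketch of this baseline, reusing the score matrix from Section 3.1 (the function name and the single shared threshold are assumptions on our part):

import numpy as np

def bow_hash(S, eps=0.0):
    # One bit per latent posture: bit j is 1 iff posture j exceeds the threshold
    # somewhere in the video, so all temporal-order information is discarded.
    return (S.max(axis=1) > eps).astype(np.uint8)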

4.2 Feature Extraction

Since the feature extraction is not the contribution of the current paper, we adopt the following four types of features commonly used in 3D human action recognition tasks.

  1. Pairwise-joint distance (PJD): the normalized distance between a pair of joints [17, 26];

  2. Joint offset feature (JO): the normalized joint offset from two consecutive frames [25];

  3. Pairwise-angle feature (PA): the cosine of the angle between a pair of body segments [24]; and

  4. Histogram of Velocity Components (HVC): the histogram of the 3D velocity of the point cloud in the neighborhood of the joints [24].

4.3 Experiment Setting

The threshold parameter $\epsilon$ for the BOW and for both versions of FTA is chosen by cross validation on the training set. Considering that the proposed hashing algorithm is based on random projection and random selection of postures, we repeat each experiment for multiple runs and report the average accuracy in all experiments. The Hamming distance is used as the distance metric for the KNN search to predict the label of an unknown test sequence.
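The retrieval step can be sketched as follows; here the FTA codes are compared entry-wise (a generalized Hamming distance over the $(K+1)$-ary symbols), and the majority-vote rule with non-negative integer labels is our own assumption:

import numpy as np

def knn_predict(train_codes, train_labels, test_codes, k=1):
    # train_codes: (N, L) integer array of FTA codes; test_codes: (M, L).
    train_labels = np.asarray(train_labels)
    # Entry-wise mismatch count between every test code and every training code.
    dist = (test_codes[:, None, :] != train_codes[None, :, :]).sum(axis=2)
    nn = np.argsort(dist, axis=1)[:, :k]          # indices of the k nearest videos
    # Majority vote over the labels of the k nearest neighbors.
    return np.array([np.bincount(train_labels[row]).argmax() for row in nn])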

4.4 Experiment Results

We conduct performance evaluations on three of the most widely used public 3D action video datasets. It is worth noting that the proposed FTA hashing algorithm is especially amenable to large-scale tasks; however, to the best of our knowledge, no extremely large 3D action dataset is publicly available in the literature. Nevertheless, the results on these datasets suffice to show the competitive performance of the proposed algorithm.

UTKinect-Action Dataset

The UTKinect-Action dataset [19] contains actions performed by 10 subjects, and each subject performs every action twice. Since subjects are free to move in the environment, the dataset is very challenging due to the large viewpoint variation and intra-class variance. We follow the same cross-subject test setting as in [16].

Experimental results are summarized in Table 1, in which we compare the recognition accuracy of the BOW, the FTA by peak and the FTA by thresholding with four types of features. The $(K+1)$-ary code length is set to 1000 and $K$ is set to 2. The FTA by peak produces its best accuracies on the PA feature and the PJD feature. This is an encouraging performance considering that we only use hashing and an approximate nearest neighbor search by the Hamming distance. As shown, both the FTA by peak and the FTA by thresholding outperform the baseline BOW method by a clear margin, demonstrating the contribution of modeling the temporal order to the performance of the FTA hashing. We also note that FTA by peak achieves higher accuracy than FTA by thresholding. This is probably because the peak is a more robust estimate of the occurrence time of a posture than the onset time at which a confidence threshold is passed.

It is also worth noting that the performance varies across different features; for example, the PA feature achieves a notably higher accuracy than the HVC feature. This is expected because the discriminative capabilities of different features differ. We report the accuracy of multiple features to show that FTA consistently outperforms the BOW. The recognition accuracy could be further boosted by fusing multiple features (e.g., by concatenation), but this is beyond the scope of this paper.

Features BOW FTA by Thresholding FTA by Peak
PJD
JO
PA
HVC
Table 1: Performance comparison between different hashing methods on the UTKinect-Action dataset ($K = 2$, $(K+1)$-ary code length 1000; accuracy in terms of %).

Next, we further study the performance with respect to different values of $K$ (i.e., the arity of the code used by FTA) as well as the threshold $\epsilon$. We only report the results for the FTA by peak since it is consistently better than FTA by thresholding. Figure 3 shows the accuracy versus $K$ when the code length is set to 1000. The accuracy drops as $K$ increases, with $K = 2$ giving the best performance. This is because a larger $K$ may produce redundant codes when comparing a large number of postures – some postures may be dominated by a few more salient postures, resulting in the loss of discriminative information.

Figure 3: Relationship between $K$ and the accuracy on different features on the UTKinect-Action dataset ($(K+1)$-ary code length is set to 1000).

Figure 4 shows the effect of the code length on the performance when $K = 2$. The accuracy increases as the code length grows, and remains stable after the code length reaches a certain value. Note that the code length is with respect to the $(K+1)$-ary codes; in other words, we use 2 binary bits to encode each 3-ary code when $K = 2$.

As shown, the entire video requires only a small number of $(K+1)$-ary hash codes to achieve a high accuracy with the PJD and PA features. Since a video in these datasets typically contains only dozens of frames, this amounts to a low bit cost per frame, which is very efficient considering that state-of-the-art supervised hashing methods for image retrieval normally need substantially more bits for a single image to achieve a comparable performance [6]. This shows the significant efficiency the proposed FTA can achieve.

Figure 4: Relationship between the $(K+1)$-ary code length and the accuracy on different features on the UTKinect-Action dataset ($K$ is set to 2).

Finally, we also show the effect of the threshold $\epsilon$ on the performance of the FTA by peak. In this experiment, $K$ is fixed to 2 and the $(K+1)$-ary code length is set to 1000. In Figure 5, a consistent trend of accuracy versus threshold is observed across all four features. When the threshold is set to a small value, many false postures can pass the threshold test; on the contrary, when the threshold is set to a large value, true postures can fail the test, resulting in hash codes full of "0" entries. Both cases compromise the accuracy. Thus the threshold should be set within a reasonable range to reach a satisfactory level of accuracy by avoiding false positive or false negative detections of latent postures.

Figure 5: Relationship between the threshold $\epsilon$ and the accuracy on different features on the UTKinect-Action dataset ($K$ is set to 2 and the $(K+1)$-ary code length is set to 1000).

MSR Action3D Dataset

The MSR Action3D dataset [7] covers a collection of sports action types performed by 10 subjects. All subjects perform each action two or three times. The dataset is very challenging due to the high intra-class variations. We follow the same experiment settings as in [17].

Similar to the experiments on the UTKinect-Action dataset, we compare the performance among the baseline BOW, the FTA by thresholding and the FTA by peak. The results are reported in Table 2. Again, the FTA by peak achieves better accuracy than the other two methods across all features. Different features produce different levels of accuracy; on this dataset, the JO feature achieves a higher accuracy than the PJD feature.

Features BOW FTA by Thresholding FTA by Peak
PJD
JO
PA
HVC
Table 2: Performance comparison between different hashing methods on the MSR Action3D dataset ($K = 2$, $(K+1)$-ary code length 1000; accuracy in terms of %).

We also evaluate the impact of $K$ and the code length on the accuracy of the FTA by peak algorithm, and show the results in Figure 6 and Figure 7. As shown, the recognition accuracy drops as $K$ increases, while a longer code length usually produces higher accuracy. These results are consistent with those on the UTKinect-Action dataset.

Figure 6: Relationship between $K$ and the accuracy on different features on the MSR Action3D dataset ($(K+1)$-ary code length is set to 1000).
Figure 7: Relationship between the $(K+1)$-ary code length and the accuracy on the MSR Action3D dataset.

MSRActionPairs Dataset

The MSRActionPairs dataset [11] consists of 6 pairs of similar actions performed by multiple subjects, where each pair shares exactly the same poses but in different temporal orders, for example, "Pick up" and "Put down", or "Push a chair" and "Pull a chair". Each subject performs every action three times. This dataset is therefore very suitable for evaluating the temporal order-preserving capability of the proposed FTA hashing algorithm. We follow the same test setting as [11].

Table 3 compares the results among the FTA by peak, the FTA by thresholding and the baseline BOW algorithm. The FTA by peak significantly outperforms the BOW algorithm on most of the features. Since the MSRActionPairs dataset is very sensitive to the temporal order of the action sequences, the FTA hashing can effectively distinguish between different actions with similar postures but different temporal orders. On the contrary, BOW does not encode the temporal structure, and is incapable of handling this challenging setting.

In addition, we perform the same set of experiments on the impact of $K$ and the code length on the performance; the results are shown in Figure 8 and Figure 9. Similar trends are observed as for the other two datasets.

Features BOW FTA by Thresholding FTA by Peak
PJD
JO
PA
HVC
Table 3: Performance comparison between different hashing methods on the MSRActionPairs dataset (Accuracy in terms of %).
Figure 8: Relationship between $K$ and the accuracy on the MSRActionPairs dataset.
Figure 9: Relationship between the $(K+1)$-ary code length and the accuracy on the MSRActionPairs dataset.

5 Conclusions

In this paper, we revisit the human action recognition problem from a hashing perspective and propose a novel First-Take-All hashing algorithm to capture the temporal patterns of an entire video. The FTA hashing preserves the temporal order of the action sequence and achieves invariance to temporal translation, motion scale, as well as execution rate. Experimental results on three public 3D human action datasets have demonstrated the performance and the efficiency of the proposed FTA hashing.

The current work is based on random projections. We would like to further enhance the performance of the FTA hashing and shrink the code length by leveraging learning-based methods in future work.

References

  • [1] A. Z. Broder, M. Charikar, A. M. Frieze, and M. Mitzenmacher. Min-wise independent permutations. In Proceedings of the Thirtieth Annual ACM Symposium on Theory of Computing, pages 327–336. ACM, 1998.
  • [2] M. Datar, N. Immorlica, P. Indyk, and V. S. Mirrokni. Locality-sensitive hashing scheme based on p-stable distributions. In Proceedings of the twentieth annual symposium on Computational geometry, pages 253–262. ACM, 2004.
  • [3] A. Gionis, P. Indyk, R. Motwani, et al. Similarity search in high dimensions via hashing. In VLDB, volume 99, pages 518–529, 1999.
  • [4] B. Kulis and K. Grauman. Kernelized locality-sensitive hashing for scalable image search. In Computer Vision, 2009 IEEE 12th International Conference on, pages 2130–2137. IEEE, 2009.
  • [5] B. Kulis, P. Jain, and K. Grauman. Fast similarity search for learned metrics. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 31(12):2143–2157, 2009.
  • [6] K. Li, G.-J. Qi, J. Ye, and K. A. Hua. Rank subspace learning for compact hash codes. In arXiv:1503.05951, 2015.
  • [7] W. Li, Z. Zhang, and Z. Liu. Action recognition based on a bag of 3D points. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2010 IEEE Computer Society Conference on, pages 9–14. IEEE, 2010.
  • [8] Y. Lin, R. Jin, D. Cai, S. Yan, and X. Li. Compressed hashing. In Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pages 446–451. IEEE, 2013.
  • [9] Q. Lv, W. Josephson, Z. Wang, M. Charikar, and K. Li. Multi-probe lsh: Efficient indexing for high-dimensional similarity search. In Proceedings of the 33rd International Conference on Very Large Data Bases, VLDB ’07, pages 950–961. VLDB Endowment, 2007.
  • [10] M. Müller and T. Röder. Motion templates for automatic classification and retrieval of motion capture data. In Proceedings of the 2006 ACM SIGGRAPH, pages 137–146, 2006.
  • [11] O. Oreifej and Z. Liu. Hon4d: Histogram of oriented 4d normals for activity recognition from depth sequences. In Computer Vision and Pattern Recognition, 2013 IEEE Conference on, pages 716–723.
  • [12] R. Panigrahy. Entropy based nearest neighbor search in high dimensions. In Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithm, SODA ’06, pages 1186–1195, Philadelphia, PA, USA, 2006. Society for Industrial and Applied Mathematics.
  • [13] M. Raginsky and S. Lazebnik. Locality-sensitive binary codes from shift-invariant kernels. In Advances in neural information processing systems, pages 1509–1517, 2009.
  • [14] K. Terasawa and Y. Tanaka. Spherical lsh for approximate nearest neighbor search on unit hypersphere. In Algorithms and Data Structures, pages 27–38. Springer, 2007.
  • [15] D. Tschopp and S. Diggavi. Approximate nearest neighbor search through comparisons. arXiv preprint arXiv:0909.2194, 2009.
  • [16] R. Vemulapalli, F. Arrate, and R. Chellappa. Human action recognition by representing 3d skeletons as points in a lie group. In Computer Vision and Pattern Recognition, IEEE Conference on, 2014.
  • [17] J. Wang, Z. Liu, Y. Wu, and J. Yuan. Mining actionlet ensemble for action recognition with depth cameras. In Computer Vision and Pattern Recognition, 2012 IEEE Conference on, pages 1290–1297.
  • [18] L. Xia, C.-C. Chen, and J. Aggarwal. View invariant human action recognition using histograms of 3d joints. In Computer Vision and Pattern Recognition Workshops, 2012 IEEE Computer Society Conference on, pages 20–27.
  • [19] L. Xia, C.-C. Chen, and J. Aggarwal. View invariant human action recognition using histograms of 3d joints. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2012 IEEE Computer Society Conference on, pages 20–27. IEEE, 2012.
  • [20] J. Yagnik, D. Strelow, D. A. Ross, and R.-s. Lin. The power of comparative reasoning. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 2431–2438. IEEE, 2011.
  • [21] X. Yang and Y. Tian. Super normal vector for activity recognition using depth sequences. In Computer Vision and Pattern Recognition, IEEE Conference on, 2014.
  • [22] X. Yang, C. Zhang, and Y. Tian. Recognizing actions using depth motion maps-based histograms of oriented gradients. In Proceedings of the 20th ACM international conference on Multimedia, pages 1057–1060, 2012.
  • [23] G. Ye, D. Liu, J. Wang, and S.-F. Chang. Large-scale video hashing via structure learning. In Computer Vision (ICCV), 2013 IEEE International Conference on, pages 2272–2279. IEEE, 2013.
  • [24] J. Ye, K. Li, G.-J. Qi, and K. A. Hua. Temporal order-preserving dynamic quantization for human action recognition from multimodal sensor streams. In Multimedia Retrieval (ICMR), ACM International Conference on. ACM, 2015.
  • [25] G. Yu, Z. Liu, and J. Yuan. Discriminative orderlet mining for real-time recognition of human-object interaction. In Proceedings of the Asian Conference on Computer Vision (ACCV’14), Singapore, pages 1–5, 2014.
  • [26] X. Zhao, X. Li, C. Pang, X. Zhu, and Q. Z. Sheng. Online human gesture recognition from motion data streams. In Proceedings of the 21st ACM international conference on Multimedia, pages 23–32, 2013.
  • [27] Y. Zhu, W. Chen, and G. Guo. Fusing spatiotemporal features and joints for 3d action recognition. In Computer Vision and Pattern Recognition workshops, 2013 IEEE Conference on.