Describing human actions with attributes is closely related to representing an object with attributes. Several studies have investigated attribute-based approaches to object recognition [3, 4, 2, 5, 6]. These methods have demonstrated that attribute-based approaches can not only recognize object categories, but also describe unknown object categories. In this paper, we propose a dictionary-based approach for learning human action attributes that is useful for modeling and recognizing known action categories, and also for describing unknown action categories.
Dictionary learning is one approach for learning attributes (i.e., dictionary atoms) from a set of training samples. K-SVD is a promising dictionary learning algorithm for learning an over-complete dictionary; input signals can then be represented as sparse linear combinations of dictionary atoms. K-SVD focuses only on representational capability, i.e., it minimizes the reconstruction error. The method of optimal directions (MOD) shares the same sparse coding stage as K-SVD. Other approaches manually select training samples to construct a dictionary, or train one dictionary per class to obtain discriminability.
Discriminative dictionary learning is gaining attention in many disciplines. Discriminative K-SVD extends K-SVD by incorporating the classification error into the objective function to obtain a more discriminative dictionary. One approach obtains a discriminative dictionary by iteratively updating the dictionary based on the results of a linear classifier; another introduces a label-consistency constraint to make the sparse codes discriminative across classes. Other examples include LDA-based basis selection, distance metric learning, hierarchical pairwise merging of visual words, maximization of mutual information (MMI) [17, 18, 1], and sparse coding-based dictionary learning [10, 19].
Recent dictionary-based approaches for learning action attributes include agglomerative clustering, forward selection, and probabilistic graphical models. An unsupervised approach has also been proposed that uses a minimization formulation to find basic primitives for representing human motions.
In this paper, we propose an approach for dictionary learning of human action attributes via information maximization. In addition to using the appearance information of dictionary atoms, we also exploit the class label information associated with dictionary atoms to learn a compact and discriminative dictionary of human action attributes. The mutual information for appearance information and class distributions between the learned dictionary and the rest of the dictionary space is used to define the objective function, which is optimized using a Gaussian Process (GP) model proposed for sparse representation. The property of sparse coding naturally leads to a kernel with compact support, i.e., zero values over most of its entries, in the GP, yielding significant speed-ups. Representation and recognition of actions are accomplished through the sparse coefficients associated with the learned attributes.
Unlike previous dictionary learning methods, which mostly consider learning reconstructive dictionaries, our algorithm encourages dictionary compactness and discriminability simultaneously. Sparse representation over a dictionary with coherent atoms suffers from the multiple-representation problem. A compact dictionary consists of incoherent atoms, and encourages similar signals, which are more likely to be from the same class, to be consistently described by a similar set of atoms with similar coefficients. A discriminative dictionary encourages signals from different classes to be described by either a different set of atoms, or the same set of atoms with different coefficients [25, 26, 10]. Both aspects are critical for action classification using sparse representation. As shown in Fig. 1, our approach produces consistent sparse representations for signals of the same class.
Our approach adopts the Maximization of Mutual Information (MMI) criterion to obtain a compact and discriminative dictionary, whose atoms are considered as attributes in this paper. Compared to previous methods, which only maximize the mutual information for the class distribution, our approach maximizes the mutual information for both the appearance information and the class distribution of dictionary atoms; thus, we can expect improved dictionary compactness. Some previous methods obtain a dictionary by repeatedly merging pairs of visual words, which can be time-consuming when the dictionary size is large. Moreover, our approach is efficient because the dictionary is learned in the sparse feature space, so we can leverage the compact support of sparse codes to exploit kernel locality and speed up the dictionary learning process.
Our main contributions are:
We propose a novel probabilistic model for sparse representation.
We learn a compact and discriminative dictionary for sparse coding via information maximization.
We describe and recognize human actions, including unknown actions, via a set of human action attributes in a sparse feature space.
We present a simple yet near-optimal action summarization method.
The rest of this paper is structured as follows. In Sec. II, we discuss human action features and attributes. We then propose a novel probabilistic model for sparse representation in Sec. III. In Sec. IV, we present our attribute dictionary learning framework. We describe how to adapt our attribute dictionary learning method to action summarization in Sec. V. Experimental results are given in Sec. VI to demonstrate the effectiveness of our approach for action recognition and summarization.
II. Action Features and Attributes
Human action features are extracted from an action interest region for representing and describing actions. The action interest region is defined as a bounded region around the human performing the activity, which is obtained using background subtraction and/or tracking.
II-A Basic Features
Human action attributes require feature descriptors to represent their visual aspects. We introduce the basic features, both local and global, used in this paper.
Global Features: Global features encode rich information from an action interest region, so they generally perform better than local features in recognition. When cameras and backgrounds are static, we use a silhouette-based feature descriptor to capture shape information, while we use histogram of oriented gradients (HOG) descriptors for dynamic backgrounds and moving cameras. For encoding motion information, we use optical-flow-based feature descriptors. We use Action Bank descriptors to demonstrate that our attribute learning method can enhance the discriminability of high-level global features.
Local Features: Spatio-temporal local features describe a video as a collection of independent patches or 3D cuboids, which are less sensitive to viewpoint changes, noise and partial occlusion. We first extract a collection of space-time interest points (STIP) to represent an action sequence, and then use HOG and histogram-of-flow descriptors to describe them.
II-B Human Action Attributes
Motivated by [20, 21, 22], we represent an action as a set of basic action units, which we refer to as human action attributes. In order to effectively describe human actions, we need to learn a representative and semantic set of action attributes. Given all the basic features from the training data, we aim to learn a compact and discriminative dictionary whose atoms can be used as human action attributes. The final learned dictionary can be viewed as a "thesaurus" of human action attributes. Each human action is then decomposed into a sparse linear combination of attributes in the thesaurus through sparse coding; the sparse coefficient associated with each attribute measures its weight in representing the action.
III. A Probabilistic Model for Sparse Representation
Before presenting our dictionary learning framework, we first propose a novel probabilistic model for sparse representation, motivated by prior work.
III-A Reconstructive Dictionary Learning
A reconstructive dictionary can be learned through K-SVD, a method for learning an over-complete dictionary for sparse coding. Let $Y = [y_1, \dots, y_N]$ be a set of $N$ input signals in a $d$-dimensional feature space. In K-SVD, a dictionary $D$ with a fixed number of $K$ atoms is learned by iteratively finding a solution to the following problem:

$$\min_{D, X} \; \|Y - DX\|_2^2 \quad \text{s.t.} \; \forall i, \; \|x_i\|_0 \leq T, \qquad (1)$$

where $D \in \mathbb{R}^{d \times K}$ ($K > d$) is the learned dictionary, $X = [x_1, \dots, x_N]$ are the sparse codes of the input signals $Y$, and the sparsity level $T$ specifies that each signal has fewer than $T$ atoms in its decomposition. Each dictionary atom $d_i$ is $\ell_2$-normalized. The dictionary learned from (1) only minimizes the reconstruction error, so it is not optimal in terms of compactness and discriminability.
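For concreteness, the K-SVD alternation behind (1) can be sketched in a few dozen lines. The following is a minimal Python/NumPy sketch under our own naming, not the authors' implementation; `omp` is a simple orthogonal matching pursuit used for the sparse coding stage:

```python
import numpy as np

def omp(D, y, T):
    """Greedy orthogonal matching pursuit: approximate y with at most T atoms of D."""
    residual, support = y.copy(), []
    for _ in range(T):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j in support:
            break
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

def ksvd(Y, K, T, iters=10, seed=0):
    """Learn a dictionary D (d x K) and sparse codes X (K x N) minimizing ||Y - DX||."""
    rng = np.random.default_rng(seed)
    d, N = Y.shape
    D = rng.standard_normal((d, K))
    D /= np.linalg.norm(D, axis=0)                # l2-normalized atoms
    X = np.zeros((K, N))
    for _ in range(iters):
        # Sparse coding stage: code each signal over the current dictionary.
        X = np.column_stack([omp(D, Y[:, i], T) for i in range(N)])
        # Dictionary update stage: rank-1 SVD update of each used atom.
        for k in range(K):
            users = np.nonzero(X[k])[0]
            if users.size == 0:
                continue
            X[k, users] = 0
            E = Y[:, users] - D @ X[:, users]     # residual without atom k
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, k] = U[:, 0]
            X[k, users] = s[0] * Vt[0]
    return D, X
```

Each iteration alternates sparse coding with an atom-by-atom SVD update, keeping every updated atom unit-norm.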
III-B A Gaussian Process
Given a set of input signals $Y$, there exists an infinite dictionary space. Each dictionary atom maps the set of input signals to its corresponding sparse coefficients, which can be viewed as its observations of the set of input signals. When two dictionary atoms are similar, it is more likely that input signals will use them simultaneously in their sparse decompositions. Thus, the similarity of two dictionary atoms can be assessed by the correlation between their observations (i.e., sparse coefficients). This correlation property of sparse coefficients has previously been used to cluster dictionary atoms.
With the above formulation, we obtain a problem commonly referred to as a GP model. A GP is specified by a mean function and a symmetric positive-definite covariance function $K(\cdot, \cdot)$. Since we simplify our problem by assuming an initial dictionary $D_o$, we only need to specify entries of the covariance function for atoms existing in $D_o$, and leave the rest undefined. In this paper, for each pair of dictionary atoms $(d_i, d_j)$, the corresponding covariance entry is defined as the covariance between their associated sparse coefficients. For simplicity, we use $K(d_i, d_j)$ to refer to the covariance entry at the indices of $d_i$ and $d_j$, and $K(D)$ to denote the covariance matrix for a set of dictionary atoms $D$.
The GP model for sparse representation provides the following useful property: given a set of dictionary atoms $D$ and the associated sparse coefficients, the predictive distribution at any given testing dictionary atom $d_*$ is a Gaussian with a closed-form conditional variance,

$$\sigma^2_{d_* \mid D} = K(d_*, d_*) - k_*^{\top} K(D)^{-1} k_*, \qquad (2)$$

where $k_*$ is the vector of covariances between $d_*$ and each atom in $D$.
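This conditional variance is straightforward to compute. The sketch below is our own Python/NumPy illustration of (2), with the atom covariance estimated directly from the rows of sparse codes:

```python
import numpy as np

def sparse_code_kernel(X):
    """Covariance between dictionary atoms, measured as the covariance of their
    rows of sparse coefficients (one row per atom, one column per input signal)."""
    return np.cov(X)   # K x K covariance matrix over atoms

def gp_conditional_variance(K_full, A, t):
    """Closed-form GP predictive variance of atom t given the selected index set A:
       sigma^2_{t|A} = K[t,t] - K[t,A] K[A,A]^{-1} K[A,t]."""
    if not A:
        return K_full[t, t]
    KA = K_full[np.ix_(A, A)]
    kt = K_full[t, A]
    return K_full[t, t] - kt @ np.linalg.solve(KA, kt)
```

As expected of a GP, conditioning on more atoms can only shrink the predictive variance.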
III-C Dictionary Class Distribution
When the set of input signals is labeled with discrete class labels, we can further derive class-related distributions over sparse representations.
As mentioned, each dictionary atom maps the set of input signals to its corresponding sparse coefficients. Since each coefficient corresponds to an input signal, it is associated with a class label. If we aggregate the coefficients based on class labels, we obtain a vector whose length equals the number of classes. After normalization, we have the conditional probability $P(c \mid d)$, which represents the probability of observing class $c$ given dictionary atom $d$.
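The aggregation step can be sketched as follows (our own Python/NumPy illustration; we aggregate absolute coefficient magnitudes, one of several reasonable choices):

```python
import numpy as np

def atom_class_distribution(X, labels, n_classes):
    """P(c | d_k): sum each atom's (absolute) sparse coefficients over the
    signals of each class, then normalize each atom's vector to sum to 1."""
    K = X.shape[0]
    P = np.zeros((K, n_classes))
    for c in range(n_classes):
        P[:, c] = np.abs(X[:, labels == c]).sum(axis=1)
    P /= np.maximum(P.sum(axis=1, keepdims=True), 1e-12)
    return P
```

An atom used exclusively by one class ends up with all its probability mass on that class.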
IV. Learning the Attribute Dictionary
As the optimal dictionary size is rarely known in advance, we first obtain through K-SVD an initial dictionary $D_o$ of a large size. As discussed, the initial dictionary from (1) only minimizes the reconstruction error, and is not optimal in terms of compactness and discriminability. We then learn a compact and discriminative dictionary $D^*$ from the initial dictionary via information maximization.
Given the initial dictionary obtained from (1), we aim to compress it into a much smaller dictionary that encourages signals from the same class to have very similar sparse representations, as shown in Fig. 1. In other words, signals from the same class should be described by a similar set of attributes, i.e., dictionary atoms. A compact and discriminative dictionary is therefore desirable.
An intuitive heuristic is to start with an empty set $D^*$ and iteratively choose the next best atom $d_*$ from the remaining atoms that provides the maximum increase in the entropy of $D^*$, i.e., $\arg\max_{d_*} H(d_* \mid D^*)$, until the desired dictionary size is reached, where the remaining atoms are those left after $D^*$ has been removed from the initial dictionary $D_o$. Using the GP model, we can evaluate $H(d_* \mid D^*)$ as a closed-form Gaussian conditional entropy,

$$H(d_* \mid D^*) = \tfrac{1}{2} \log \left( 2 \pi e \, \sigma^2_{d_* \mid D^*} \right),$$

where $\sigma^2_{d_* \mid D^*}$ is defined in (2). This heuristic is a good approximation to the maximization of joint entropy (ME) criterion, i.e., $\arg\max_{D^*} H(D^*)$.
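The ME heuristic reduces to repeatedly picking the atom with the largest conditional variance under the GP. A minimal Python/NumPy sketch (our own illustration, with a small floor on the variance for numerical safety):

```python
import numpy as np

def cond_var(Kf, A, t):
    """GP predictive variance of atom t given the selected index set A."""
    if not A:
        return Kf[t, t]
    kt = Kf[t, A]
    return Kf[t, t] - kt @ np.linalg.solve(Kf[np.ix_(A, A)], kt)

def greedy_max_entropy(Kf, m):
    """ME heuristic: repeatedly add the atom with the largest Gaussian
    conditional entropy 0.5*log(2*pi*e*sigma^2) given the atoms picked so far."""
    selected, remaining = [], list(range(Kf.shape[0]))
    for _ in range(m):
        ent = [0.5 * np.log(2 * np.pi * np.e * max(cond_var(Kf, selected, t), 1e-12))
               for t in remaining]
        selected.append(remaining.pop(int(np.argmax(ent))))
    return selected
```

Since the log is monotone, the atom with the largest conditional variance is always chosen first.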
Under the ME rule, the learned dictionary is compact because its atoms, having high joint entropy, are less correlated with one another. However, the maximal entropy criterion favors attributes associated with the beginning and the end of an action, as they are the least correlated; this phenomenon is shown in Fig. 3(b) and Fig. 3(d) in the experiment section. Thus, we expect high reconstruction error and weak discriminability. To mitigate this, our dictionary learning framework adopts Maximization of Mutual Information (MMI) as the criterion for ensuring dictionary compactness and discriminability.
IV-A MMI for Unsupervised Learning (MMI-1)
The maximization-of-entropy rule only considers the entropy of the selected dictionary atoms. Instead, we choose to learn a dictionary $D^*$ that most reduces the entropy of the rest of the dictionary atoms.
It is known that maximizing the above criterion is NP-complete. A similar problem has been studied in the machine learning literature, and a very simple greedy algorithm can be used here. We start with an empty $D^*$ and iteratively choose the next best dictionary atom $d_*$ from the remaining atoms that provides the maximum increase in mutual information, i.e.,

$$d_* = \arg\max_{d} \; H(d \mid D^*) - H(d \mid \bar{D}^*),$$

where $\bar{D}^*$ denotes the atoms of $D_o$ remaining after removing $D^*$ and $d$. Intuitively, the ME criterion only considers the first term $H(d \mid D^*)$, i.e., it forces $d_*$ to be most different from the already selected dictionary atoms $D^*$; the second term additionally forces $d_*$ to be most representative among the remaining atoms.
It has been proved that the above objective is submodular, so the greedy algorithm serves as a polynomial-time approximation that is within $(1 - 1/e)$ of the optimum. Using similar arguments, the near-optimality of our approach can be guaranteed if the initial dictionary size is sufficiently larger than the final size.
Given a large initial dictionary size, each iteration requires evaluating (6) for every remaining atom, which at first seems computationally infeasible. The nice feature of this approach is that we model the covariance kernel over sparse codes, which has compact support, i.e., most entries of the kernel matrix are zero or very small. Once those near-zero entries are ignored while evaluating (6), the actual computation becomes very efficient.
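The greedy MMI-1 selection can be sketched as follows (our own Python/NumPy illustration; for clarity it evaluates the full kernel and omits the compact-support speed-up, and scores each candidate by the ratio of the two conditional variances, which is monotone in the entropy difference):

```python
import numpy as np

def cond_var(Kf, A, t):
    """GP predictive variance of atom t given index set A (with a small jitter)."""
    if not A:
        return Kf[t, t]
    kt = Kf[t, A]
    KA = Kf[np.ix_(A, A)] + 1e-9 * np.eye(len(A))
    return max(Kf[t, t] - kt @ np.linalg.solve(KA, kt), 1e-12)

def mmi1(Kf, m):
    """Greedy MMI-1: pick the atom maximizing H(d|selected) - H(d|rest),
    i.e. the ratio sigma^2_{d|selected} / sigma^2_{d|rest}."""
    selected, remaining = [], list(range(Kf.shape[0]))
    for _ in range(m):
        def score(t):
            rest = [u for u in remaining if u != t]
            return cond_var(Kf, selected, t) / cond_var(Kf, rest, t)
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

On a kernel with two tightly correlated clusters of atoms, the selection picks one representative from each cluster rather than two redundant atoms.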
IV-B MMI for Supervised Learning (MMI-2)
The objective functions in (4) and (5) only consider the appearance information of dictionary atoms, hence the learned dictionary is not optimized for classification. For example, attributes needed to distinguish a particular class may be missing from the learned dictionary. We therefore use both appearance information and class distributions to construct a dictionary that also incurs minimal loss of information about the class labels.
Let $C$ denote the set of discrete class labels. In Sec. III-C, we discussed how to obtain $P(c \mid d)$, which represents the probability of observing class $c$ given a dictionary atom $d$. Given a set of dictionary atoms, we define the class distribution it covers analogously. For simplicity, we denote the classes covered by the selected atoms $D^*$ as $C_{D^*}$, and the classes covered by the remaining atoms $\bar{D}^*$ as $C_{\bar{D}^*}$.
To enhance the discriminative power of the learned dictionary, we propose to modify the objective function (4) to also account for class distributions:

$$d_* = \arg\max_{d} \; \lambda_1 \left[ H(d \mid D^*) - H(d \mid \bar{D}^*) \right] + \lambda_2 \left[ H(C_d \mid C_{D^*}) - H(C_d \mid C_{\bar{D}^*}) \right]. \qquad (8)$$

We can easily see that we now also force the classes associated with $d_*$ to be most different from the classes already covered by the selected atoms $D^*$, and, at the same time, the classes associated with $d_*$ to be most representative among the classes covered by the remaining atoms. Thus the learned dictionary is not only compact, but also covers all classes to maintain discriminability. It is interesting to note that MMI-1 is a special case of MMI-2 with $\lambda_2 = 0$.
The parameters $\lambda_1$ and $\lambda_2$ in (8) are data dependent and can be estimated as the ratio between the maximal information gained from an atom and the respective compactness and discrimination measures. For each term in (8), only the first few greedily selected atoms are involved in the parameter estimation, which makes finding the parameters efficient.
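One plausible instantiation of the supervised criterion is sketched below. This is our own Python/NumPy illustration, not the authors' code: the appearance gain is the log-ratio of GP conditional variances as in MMI-1, and the class gain is approximated with Jensen-Shannon divergences between atom class distributions:

```python
import numpy as np

def cond_var(Kf, A, t):
    """GP predictive variance of atom t given index set A (with jitter)."""
    if not A:
        return Kf[t, t]
    kt = Kf[t, A]
    KA = Kf[np.ix_(A, A)] + 1e-9 * np.eye(len(A))
    return max(Kf[t, t] - kt @ np.linalg.solve(KA, kt), 1e-12)

def js_div(p, q):
    """Jensen-Shannon divergence between two class distributions."""
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def mmi2(Kf, P, m, lam=1.0):
    """Greedy MMI-2 sketch: appearance gain plus lam times a class gain
    (mean divergence to the selected atoms' class distributions minus the
    mean divergence to those of the remaining atoms)."""
    selected, remaining = [], list(range(Kf.shape[0]))
    for _ in range(m):
        def score(t):
            rest = [u for u in remaining if u != t]
            app = np.log(cond_var(Kf, selected, t) / cond_var(Kf, rest, t))
            to_sel = np.mean([js_div(P[t], P[s]) for s in selected]) if selected else 0.0
            to_rest = np.mean([js_div(P[t], P[r]) for r in rest]) if rest else 0.0
            return app + lam * (to_sel - to_rest)
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

With an uninformative appearance kernel, the class term alone drives the selection to cover all classes.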
IV-C MMI Using Dictionary Class Distribution (MMI-3)
MMI-1 considers the appearance information for dictionary compactness, and MMI-2 uses both appearance and class distributions to enforce dictionary compactness and discriminability. To complete the discussion, MMI-3, which is motivated by prior MMI-based work, only considers the dictionary class distribution, discussed in Sec. III-C, for dictionary discriminability.
In MMI-3, we start with an initial dictionary obtained from K-SVD. At each iteration, for each pair of dictionary atoms $d_i$ and $d_j$, we compute the MI loss incurred if we merge the two into a new dictionary atom, and pick the pair that gives the minimum MI loss. We continue the merging process until the desired dictionary size is reached. The MI loss is defined as

$$\Delta I(d_i, d_j) = \left( p(d_i) + p(d_j) \right) \cdot \mathrm{JS}_{\pi}\!\left( P(c \mid d_i), P(c \mid d_j) \right),$$

where $\mathrm{JS}_{\pi}$ is the Jensen-Shannon divergence with weights $\pi$ proportional to $p(d_i)$ and $p(d_j)$.
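The merging loop can be sketched as follows (our own Python/NumPy illustration; atom priors are passed in as weights, and merging two atoms averages their class distributions with those weights):

```python
import numpy as np

def kl(a, b):
    """KL divergence over the support of a."""
    mask = a > 0
    return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))

def mmi3_merge(P, w, target_size):
    """MMI-3 sketch: repeatedly merge the pair of atoms whose merge loses the
    least mutual information I(C; D); the loss of merging atoms i and j is
    their weighted Jensen-Shannon divergence times the combined atom weight."""
    P = [np.asarray(p, dtype=float).copy() for p in P]
    w = list(w)
    while len(P) > target_size:
        best = None
        for i in range(len(P)):
            for j in range(i + 1, len(P)):
                pi, pj = w[i] / (w[i] + w[j]), w[j] / (w[i] + w[j])
                m = pi * P[i] + pj * P[j]
                loss = (w[i] + w[j]) * (pi * kl(P[i], m) + pj * kl(P[j], m))
                if best is None or loss < best[0]:
                    best = (loss, i, j)
        _, i, j = best
        pi, pj = w[i] / (w[i] + w[j]), w[j] / (w[i] + w[j])
        P[i] = pi * P[i] + pj * P[j]   # merged class distribution
        w[i] = w[i] + w[j]
        del P[j], w[j]
    return P, w
```

Atoms with identical class distributions merge at zero loss, so redundant atoms collapse first.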
V. Action Summarization Using MMI-1
Summarizing an action video sequence often involves two criteria: diversity and coverage. The diversity criterion requires that the elements in a summary be as different from each other as possible, while the coverage criterion requires that the summary also represent the original video well.
In (5), the first term forces $d_*$ to be most different from the already selected dictionary atoms, and the second term forces $d_*$ to be most representative among the remaining atoms. By considering an action sequence as a dictionary and each frame as a dictionary atom, MMI-1 serves as a near-optimal video summarization scheme: the first term in (5) measures diversity, and the second term measures coverage. The only revision required is to redefine the kernel of the Gaussian process discussed in Sec. III-B over frame-to-frame similarities.
The advantages of adopting MMI-1 as a summarization/sampling scheme can be summarized as follows: first, MMI-1 is a simple greedy algorithm that can be executed very efficiently; second, MMI-1 provides near-optimal sampling/summarization results, within $(1 - 1/e)$ of the optimum. This near-optimality is achieved through a submodular objective function that enforces diversity and coverage simultaneously.
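Treating each frame as an atom, the frame-sampling variant can be sketched as follows (our own Python/NumPy illustration; we assume an RBF kernel over per-frame feature vectors as the frame-to-frame similarity, which is one reasonable choice, not necessarily the authors'):

```python
import numpy as np

def rbf_kernel(F, gamma=1.0):
    """Frame-to-frame similarity: exp(-gamma * ||f_i - f_j||^2)."""
    sq = np.sum(F**2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * F @ F.T))

def summarize(F, m, gamma=1.0):
    """Greedily pick m frames, trading diversity (low similarity to frames
    already picked) against coverage (high similarity to frames not yet picked)."""
    Kf = rbf_kernel(F, gamma) + 1e-9 * np.eye(len(F))
    def cond_var(A, t):
        if not A:
            return Kf[t, t]
        kt = Kf[t, A]
        return max(Kf[t, t] - kt @ np.linalg.solve(Kf[np.ix_(A, A)], kt), 1e-12)
    picked, rest = [], list(range(len(F)))
    for _ in range(m):
        best = max(rest, key=lambda t: cond_var(picked, t) /
                                       cond_var([u for u in rest if u != t], t))
        picked.append(best)
        rest.remove(best)
    return sorted(picked)
```

On frames that cluster into a few distinct poses, the sampler returns one frame per cluster, covering the whole sequence without redundancy.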
VI. Experimental Evaluation
This section presents an experimental evaluation on four public action datasets: the Keck gesture dataset, the Weizmann action dataset, the UCF sports action dataset, and the UCF50 action dataset. On the Keck gesture dataset, we thoroughly evaluate the basic behavior of our proposed dictionary learning approaches MMI-1, MMI-2 and MMI-3, in terms of dictionary compactness and discriminability, by comparing with other alternatives. We then further evaluate the discriminability of our learned action attributes on the popular Weizmann action dataset and the challenging UCF sports and UCF50 action datasets.
VI-A Comparison with Alternative Approaches
The Keck gesture dataset consists of 14 different gestures, which are a subset of the military signals. These 14 classes include turn left, turn right, attention left, attention right, flap, stop left, stop right, stop both, attention both, start, go back, close distance, speed up, come near. Each of the 14 gestures is performed by three subjects. Some sample frames from this dataset are shown in Fig. 1.
For comparison purposes, in addition to the MMI-1, MMI-2 and MMI-3 methods proposed in Sec. IV, we implemented two additional action attribute learning approaches: the maximization of entropy (ME) method discussed above, and simply performing k-means over an initial K-SVD dictionary to obtain a dictionary of the desired size.
VI-A1 Dictionary Purity and Compactness
Through K-SVD, we start with an initial dictionary of size 500 using the shape feature (sparsity 30 is used). We then learn a dictionary of size 40 from it using the 5 different approaches, with the parameters in (8) fixed throughout the experiment. To evaluate the discriminability and compactness of these learned dictionaries, we report the purity and compactness measures shown in Fig. 2. Purity is assessed by the histogram of the maximum probability of observing a class given a dictionary atom, i.e., $\max_c P(c \mid d)$, and compactness is assessed by the histogram of inter-atom similarities; as each dictionary atom is $\ell_2$-normalized, the inner product between two atoms indicates their similarity. Fig. 2(a) shows that MMI-2 is the most "pure", as the largest share of dictionary atoms learned by MMI-2 have a probability of 0.6 or above of being associated with only one of the classes. MMI-3 shows purity comparable to MMI-2, as the MI loss criterion used in MMI-3 retains the class information during dictionary learning. However, as shown in Fig. 2(b), the MMI-2 dictionary is much more compact, with only a small fraction of atoms exhibiting high inter-atom similarity. As expected, compared to MMI-2, MMI-1 shows better compactness but much less purity.
VI-A2 Describing Unknown Actions
We illustrate here how unknown actions can be described through a learned attribute dictionary. We first obtain a 500-size initial shape dictionary using 11 out of 14 gesture classes, keeping flap, stop both and attention both as unknown actions. We would expect a near-perfect description of these unknown actions, as these three classes are composed of attributes observed in the remaining classes. For example, flap is a two-arm gesture "unseen" by the attribute dictionary, but its left-arm pattern is similar to turn left and its right-arm pattern is similar to turn right.
As shown in Fig. 3, we learned 40-size dictionaries using MMI-2, ME and MMI-3, respectively, from this initial dictionary. Through visual observation, the ME dictionary (Fig. 3(b)) is the most compact, as its atoms look least similar to each other. However, unlike the MMI-2 dictionary (Fig. 3(a)), it contains shapes mostly associated with the action start and end, as discussed in Sec. IV, which often results in the high reconstruction errors shown in Fig. 3(d). The MMI-3 dictionary (Fig. 3(c)) is only concerned with discriminability; thus obvious redundancy can be observed among its atoms. As Fig. 3(d) shows, although the action flap is unknown to the dictionary, we still obtain a nearly perfect reconstruction through MMI-2, i.e., we can describe it using attributes in the dictionary with the corresponding sparse coefficients.
VI-A3 Recognition Accuracy
In all of our experiments, we use the following classification schemes. When the global features, i.e., shape and motion, are used for attribute dictionaries, we first adopt dynamic time warping (DTW) to align and measure the distance between two action sequences in the sparse code domain; a k-NN classifier is then used for recognition. When the local STIP features are used, DTW is no longer applicable, and we simply perform recognition using a k-NN classifier based on the sparse code histogram of each action sequence.
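The DTW-plus-k-NN scheme can be sketched as follows (our own Python/NumPy illustration with a Euclidean frame-to-frame cost; the actual distance and k used in the experiments may differ):

```python
import numpy as np

def dtw_distance(A, B):
    """Dynamic time warping between two sequences of sparse-code vectors
    (rows = time steps), with Euclidean frame-to-frame cost."""
    n, m = len(A), len(B)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(A[i - 1] - B[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_classify(query, train_seqs, train_labels, k=1):
    """k-NN over DTW distances, with a simple majority vote."""
    d = [dtw_distance(query, s) for s in train_seqs]
    idx = np.argsort(d)[:k]
    votes = [train_labels[i] for i in idx]
    return max(set(votes), key=votes.count)
```

Because DTW aligns sequences before comparing them, a time-stretched version of a training sequence still lands closest to its own class.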
In Fig. 4, we present the recognition accuracy on the Keck gesture dataset with different dictionary sizes and over different global and local features. We use a leave-one-person-out setup, i.e., sequences performed by one person are left out for testing, and report the average accuracy. We choose an initial dictionary size twice the dimension of an input signal, and sparsity 10 is used in this set of experiments. In all cases, the proposed MMI-2 outperforms the rest. Sparse code noise affects the DTW-based scheme more than the histogram-based one; thus MMI-2 brings larger improvements on global features than on local features. The peak recognition accuracy obtained from MMI-2 is comparable to the motion, shape, and combined shape-and-motion accuracies reported in prior work.
As discussed, the near-optimality of our approach can be guaranteed if the initial dictionary size is sufficiently larger than the final size; we usually choose the initial dictionary to be several times larger. As shown in Fig. 4, this dictionary size range usually produces good recognition performance. Alternatively, we can stop adding atoms when the MI increase in (8) falls below a predefined threshold, which can be obtained via cross-validation on training data.
VI-B Discriminability of Learned Action Attributes
In this section, we further evaluate the discriminative power of learned action attributes using MMI-2.
VI-B1 Recognizing Unknown Actions
The Weizmann human action dataset contains 10 different actions: bend, jack, jump, pjump, run, side, skip, walk, wave1, wave2. Each action is performed by 9 different people. We use the shape and motion features for attribute dictionaries. In the experiments on the Weizmann dataset, we learn a compact dictionary from a larger initial dictionary, and sparsity 10 is used. We evaluate recognition accuracy on the Weizmann dataset using a leave-one-person-out setup.
To evaluate the recognition performance of attribute representation for unknown actions, we use a leave-one-action-out setup for dictionary learning, and then use a leave-one-person-out setup for recognition. In this way, one action class is kept unknown to the learned attribute dictionary, and its sparse representation using attributes learned from the rest classes is used for recognition. The recognition accuracy is shown in Table I.
It is interesting to notice from the second row of Table I that only jump cannot be perfectly described using attributes learned from the remaining 9 actions, i.e., jump is described by a set of attributes not completely provided by the other actions. By examining the dataset, it is easy to see that jump does exhibit unique shapes and motion patterns.
As we see from the third row of the table, omitting the attributes of wave2, i.e., the wave-two-hands action, brings down the overall accuracy the most. Further investigation shows that when the wave2 attributes are not present, this accuracy loss is caused by pjump being misclassified as jack, which means the attributes contributed by wave2 are useful for distinguishing pjump from jack. This makes sense, as jack is very similar to pjump but contains an additional wave-two-hands pattern.
VI-B2 Recognizing Realistic Actions
The UCF sports dataset is a set of 150 broadcast sports videos containing the 10 different actions shown in Fig. 5. It is a challenging dataset with significant variations in scene content and viewpoint. As the UCF dataset often involves multiple people in the scene, we use tracks from ground-truth annotations. We use the HOG and motion features for attribute dictionaries, learn a compact dictionary from a larger initial dictionary with sparsity 10, and adopt a five-fold cross-validation setup. With such basic features and a simple k-NN classifier, we obtain competitive average recognition accuracy on the UCF sports action dataset; the confusion matrix is shown in Fig. 7.
VI-C Attribute Dictionary on High-Level Features
So far, we have learned our sparse attribute dictionaries from basic features. As discussed in Sec. II, human actions are typically represented by low- or mid-level features, which carry little semantic meaning. Recent advances in action representation suggest including semantic information in high-level action features. A promising high-level action feature, ActionBank, represents an action as a concatenation of max-pooled detection features from many individual action detectors sampled broadly in a semantic space. As reported, action recognition using ActionBank features improves on the state of the art by 3.7% on UCF Sports and 10% on UCF50.
In this section, we demonstrate that our learned action attributes can not only benefit from but also enhance high-level features in terms of discriminability. We perform experiments on the UCF Sports and UCF50 action datasets.
We revisit the UCF sports dataset. Instead of the low-level HOG and motion features, we adopt the high-level ActionBank features for attribute dictionaries. An ActionBank feature vector is extracted for each action and reduced in dimensionality through PCA. Then, we learn a 40-sized attribute dictionary from a 128-sized initial dictionary, and sparsity 20 is used. We use the same leave-one-out cross-validation setup as the ActionBank work for action recognition. In order to emphasize the discriminability of the learned action attributes, we adopt a simple k-NN classifier.
The recognition accuracies using high-level ActionBank features are reported in the second part of Table II. We obtain 90.7% by using ActionBank features directly with a k-NN classifier. The recognition accuracy using the initial K-SVD dictionary on ActionBank features is 52.1%. The recognition accuracies using the attribute dictionaries learned by MMI-1, MMI-2 and MMI-3 are 93.6%, 91.5% and 87.9%, respectively. We make the following three observations. First, the proposed dictionary learning method significantly enhances dictionary discriminability (better by 41.5% than the initial K-SVD dictionary). Second, the learned attributes using MMI-1 further improve the state-of-the-art discriminability of ActionBank features (better by 3.0%). Third, the discriminability improvement from considering the class distribution during dictionary learning is less significant with high-level features than with low-level ones. This may be because high-level features like ActionBank already encode such semantic information, i.e., the feature appearance carries class information. Thus, although MMI-2 significantly outperforms both MMI-1 and MMI-3 given low-level features, MMI-1 is preferred when high-level semantic features are used.
| Rodriguez et al. | 69.2 |
| Yeffet and Wolf | 79.3 |
| Varma and Babu | 85.2 |
| Wang et al. | 85.6 |
| Le et al. | 86.5 |
| Kovashka and Grauman | 87.3 |
| Wu et al. | 91.3 |
We conduct another set of experiments using high-level features on the UCF50 action dataset. UCF50 is a very challenging dataset with 50 action categories, consisting of 6617 realistic videos taken from YouTube. Sample frames from the UCF50 action dataset are shown in Fig. 6. An ActionBank feature vector is first extracted for each action and reduced in dimensionality through PCA. Then, we learn a 128-sized dictionary from a 2048-sized initial dictionary, and sparsity 60 is used. We use the 5-fold group-wise cross-validation setup suggested for this dataset, and again adopt a simple k-NN classifier. We obtain 36.7% by using ActionBank features directly with a k-NN classifier, and 41.5% by using the MMI-1 attribute dictionary learned from ActionBank features. The learned action attributes thus further improve the discriminability of ActionBank features by 4.8%.
VI-D Action Sampling/Summarization Using MMI-1
This section presents experiments demonstrating action summarization using the proposed MMI-1 algorithm. We first use the MPEG-7 shape dataset to provide an objective assessment of the diversity and coverage enforced by the MMI-1 sampling scheme. Then we provide action summarization examples using the UCF sports dataset.
As discussed in Sec. II, actions are described using features extracted from an action interest region, and global action features are typically shape-based or motion-based descriptors. As video summarization often lacks objective assessment schemes, shape sampling provides an objective alternative for measuring the diversity and coverage of a sampling/summarization method.
We conducted shape sampling experiments on the MPEG-7 dataset, which contains 70 shape classes with 20 shapes each. As shown in Fig. 8(a), we use 10 classes with 10 shapes each in our experiments. To exercise both the diversity and coverage criteria, we keep our shape descriptor variant to affine transformations, so shapes with distinct rotation, scaling or translation are considered outliers. The Top-10 shape sampling results using ME in Fig. 8(b), which only considers diversity, retrieve 3 classes. The sampling results using k-means in Fig. 8(c), which focuses on coverage, retrieve 7 classes. As shown in Fig. 8(d), the sampling results using the proposed MMI-1 method, which enforces both diversity and coverage, retrieve all 10 classes.
In Fig. 9, we provide an action summarization example using the proposed MMI-1 method. For the dive sequence in Fig. 9(a), we describe each frame of the action using both the HOG and motion features. We then sample the Top-10 frames using MMI-1 and sort them by timestamp, as shown in Fig. 9(b). Through subjective assessment, the dive action summarized using MMI-1 in Fig. 9(b) is compact yet representative.
We presented an attribute dictionary learning approach via information maximization for action recognition and summarization. By formulating into an objective function the mutual information, over both appearance information and class distributions, between the learned dictionary and the rest of the dictionary space, we ensure that the learned dictionary is both representative and discriminative. The objective function is optimized through a Gaussian Process (GP) model proposed for sparse representation. The sparse representation of signals enables the use of kernel locality in the GP to speed up the optimization process. An action sequence is described through a set of action attributes, which enables both modeling and recognizing actions, including “unseen” human actions. Our future work includes automatically updating the learned dictionary for a new action category.
-  J. Liu and M. Shah, “Learning human actions via information maximization,” in IEEE Computer Society Conf. on Computer Vision and Patt. Recn., Anchorage, Alaska, June 2008.
-  A. Farhadi, I. Endres, and D. Hoiem, “Attribute-centric recognition for cross-category generalization,” in Proc. IEEE Computer Society Conf. on Computer Vision and Patt. Recn., San Francisco, CA, June 2010.
-  C. Lampert, H. Nickisch, and S. Harmeling, “Learning to detect unseen object classes by between-class attribute transfer,” in Proc. IEEE Computer Society Conf. on Computer Vision and Patt. Recn., Miami, FL, June 2009.
-  A. Farhadi, I. Endres, D. Hoiem, and D. Forsyth, “Describing objects by their attributes,” in Proc. IEEE Computer Society Conf. on Computer Vision and Patt. Rec., Miami, FL, June 2009.
-  I. Ramirez, P. Sprechmann, and G. Sapiro, “Classification and clustering via dictionary learning with structured incoherence and shared features,” in Proc. IEEE Computer Society Conf. on Computer Vision and Patt. Recn., San Francisco, CA, June 2010.
-  X. Yu and Y. Aloimonos, “Attribute-based transfer learning for object categorization with zero/one training example,” in Proc. European Conf. on Computer Vision, Crete, Greece, Sep. 2010.
-  M. Aharon, M. Elad, and A. Bruckstein, “k-SVD: An algorithm for designing overcomplete dictionaries for sparse representation,” IEEE Trans. on Signal Processing, vol. 54, no. 11, pp. 4311–4322, Nov. 2006.
-  K. Engan, S. Aase, and J. Husøy, “Frame based signal compression using method of optimal directions (MOD),” in IEEE Intern. Symp. Circ. Syst., Orlando, FL, May 1999.
-  J. Wright, M. Yang, A. Ganesh, S. Sastry, and Y. Ma, “Robust face recognition via sparse representation,” IEEE Trans. on Patt. Anal. and Mach. Intell., vol. 31, no. 2, pp. 210–227, 2009.
-  J. Mairal, F. Bach, J. Ponce, G. Sapiro, and A. Zisserman, “Discriminative learned dictionaries for local image analysis,” in IEEE Computer Society Conf. on Computer Vision and Patt. Recn., Anchorage, Alaska, 2008.
-  Q. Zhang and B. Li, “Discriminative k-SVD for dictionary learning in face recognition,” in Proc. IEEE Computer Society Conf. on Computer Vision and Patt. Recn., San Francisco, CA, June 2010.
-  D. Pham and S. Venkatesh, “Joint learning and dictionary construction for pattern recognition,” in IEEE Computer Society Conf. on Computer Vision and Patt. Recn., Anchorage, Alaska, June 2008.
-  Z. Jiang, Z. Lin, and L. S. Davis, “Learning a discriminative dictionary for sparse coding via label consistent K-SVD,” in Proc. IEEE Computer Society Conf. on Computer Vision and Patt. Recn., Colorado Springs, CO, June 2011.
-  K. Etemad and R. Chellappa, “Separability-based multiscale basis selection and feature extraction for signal and image classification,” IEEE Trans. on Image Process., vol. 7, no. 10, pp. 1453–1465, 1998.
-  M. Bilenko, S. Basu, and R. J. Mooney, “Integrating constraints and metric learning in semi-supervised clustering,” in International Conference on Machine Learning, Alberta, Canada, 2004.
-  L. Wang, L. Zhou, and C. Shen, “A fast algorithm for creating a compact and discriminative visual codebook,” in Proc. European Conf. on Computer Vision, Marseille, France, Oct. 2008.
-  S. Lazebnik and M. Raginsky, “Supervised learning of quantizer codebooks by information loss minimization,” IEEE Trans. on Patt. Anal. and Mach. Intell., vol. 31, no. 7, pp. 1294–1309, 2009.
-  N. Slonim and N. Tishby, “Document clustering using word clusters via the information bottleneck method,” in International ACM SIGIR Conference, Athens, Greece, July 2000.
-  J. Mairal, F. Bach, J. Ponce, G. Sapiro, and A. Zisserman, “Supervised dictionary learning,” in Neural Information Processing Systems, Vancouver, Canada, Dec. 2008.
-  C. Thurau and V. Hlavac, “Pose primitive based human action recognition in videos or still images,” in IEEE Computer Society Conf. on Computer Vision and Patt. Recn., Anchorage, Alaska, June 2008.
-  D. Weinland and E. Boyer, “Action recognition using exemplar-based embedding,” in IEEE Computer Society Conf. on Computer Vision and Patt. Recn., Anchorage, Alaska, June 2008.
-  A. Elgammal, V. Shet, Y. Yacoob, and L. Davis, “Learning dynamics for exemplar-based gesture recognition,” in IEEE Computer Society Conf. on Computer Vision and Patt. Recn., Madison, WI, June 2003.
-  Y. Li, C. Fermuller, and Y. Aloimonos, “Learning shift-invariant sparse representation of actions,” in Proc. IEEE Computer Society Conf. on Computer Vision and Patt. Recn., San Francisco, CA, June 2010.
-  C. Rasmussen and C. Williams, Gaussian Processes for Machine Learning. The MIT Press, 2006.
-  F. Rodriguez and G. Sapiro, “Sparse representations for image classification: Learning discriminative and reconstructive non-parametric dictionaries,” Tech. Report, University of Minnesota, Dec. 2007.
-  K. Huang and S. Aviyente, “Sparse representation for signal classification,” in Neural Information Processing Systems, Vancouver, Canada, Dec. 2007.
-  Z. Lin, Z. Jiang, and L. Davis, “Recognizing actions by shape-motion prototype trees,” in Proc. Intl. Conf. on Computer Vision, Kyoto, Japan, Oct. 2009.
-  N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in Proc. IEEE Computer Society Conf. on Computer Vision and Patt. Recn., San Diego, CA, June 2005.
-  A. Efros, A. Berg, G. Mori, and J. Malik, “Recognizing action at a distance,” in Proc. Intl. Conf. on Computer Vision, Nice, France, 2003.
-  S. Sadanand and J. J. Corso, “Action bank: A high-level representation of activity in video,” in Proc. IEEE Computer Society Conf. on Computer Vision and Patt. Recn., Providence, RI, June 2012.
-  I. Laptev, M. Marszałek, C. Schmid, and B. Rozenfeld, “Learning realistic human actions from movies,” in Proc. IEEE Computer Society Conf. on Computer Vision and Patt. Recn., Anchorage, Alaska, June 2008.
-  A. Krause, A. Singh, and C. Guestrin, “Near-optimal sensor placements in Gaussian processes: Theory, efficient algorithms and empirical studies,” Journal of Machine Learning Research, vol. 9, pp. 235–284, 2008.
-  N. Shroff, P. Turaga, and R. Chellappa, “Video precis: Highlighting diverse aspects of videos,” IEEE Transactions on Multimedia, vol. 12, no. 8, pp. 853–868, 2010.
-  M. Blank, L. Gorelick, E. Shechtman, M. Irani, and R. Basri, “Actions as space-time shapes,” in Proc. Intl. Conf. on Computer Vision, Beijing, China, Oct. 2005.
-  M. D. Rodriguez, J. Ahmed, and M. Shah, “Action mach: A spatio-temporal maximum average correlation height filter for action recognition,” in IEEE Computer Society Conf. on Computer Vision and Patt. Recn., Anchorage, Alaska, 2008.
-  “UCF50 dataset,” http://www.cs.ucf.edu/vision/public_html/data.html.
-  L. Yeffet and L. Wolf, “Local trinary patterns for human action recognition,” in Proc. Intl. Conf. on Computer Vision, Kyoto, Japan, Nov. 2009.
-  M. Varma and B. R. Babu, “More generality in efficient multiple kernel learning,” in International Conference on Machine Learning, Montreal, Canada, June 2009.
-  H. Wang, M. M. Ullah, A. Kläser, I. Laptev, and C. Schmid, “Evaluation of local spatio-temporal features for action recognition,” in British Machine Vision Conference, London, Sep. 2009.
-  Q. Le, W. Zou, S. Yeung, and A. Ng, “Learning hierarchical invariant spatio-temporal features for action recognition with independent subspace analysis,” in Proc. IEEE Computer Society Conf. on Computer Vision and Patt. Recn., Colorado Springs, CO, June 2011.
-  A. Kovashka and K. Grauman, “Learning a hierarchy of discriminative space-time neighborhood features for human action recognition,” in Proc. IEEE Computer Society Conf. on Computer Vision and Patt. Recn., San Francisco, CA, June 2010.
-  X. Wu, D. Xu, L. Duan, and J. Luo, “Action recognition using context and appearance distribution features,” in Proc. IEEE Computer Society Conf. on Computer Vision and Patt. Recn., Colorado Springs, CO, June 2011.
-  L. Latecki, R. Lakamper, and T. Eckhardt, “Shape descriptors for non-rigid shapes with a single closed contour,” in Proc. IEEE Computer Society Conf. on Computer Vision and Patt. Recn., Hilton Head, SC, June 2000.