1 Introduction
Human actions consist of the simultaneous motion of different body parts. Owing to this complex, articulated nature of human movement, the analysis of such signals can be highly complicated. To ease the classification task, actions can be broken down into their components, e.g. by body-part detection on depth sequences of human body movements [1]. Given the 3D locations of body joints in the scene, we can separate the complicated motion of the body into a concurrent set of behaviors of major skeleton joints; human action sequences can therefore be considered as multipart signals. Throughout this paper, we use the term “part” to denote each body joint as defined in [1].
Limiting the learning to skeleton-based features cannot deliver high levels of performance in action recognition, because: (1) most common human actions are defined based on the interaction of the body with other objects, and (2) depth-based skeleton data is not always accurate, due to noise and the occlusion of body parts. To alleviate these issues, different depth-based appearance features can be leveraged. The work in [2] proposed LOP (local occupancy patterns) around each of the body joints in order to represent the 3D appearance of the interacting objects. Another solution is HON4D (histograms of oriented 4D normals) [3], which gives more descriptive and robust models of the local depth-based appearance and motion around the joints. Based on the complementary properties of the mentioned features, it is beneficial to utilize all of them as different descriptors for each joint. Combining the heterogeneous features of each part of the skeleton leads to a multimodal-multipart combination, which demands sophisticated fusion algorithms.
An interesting approach to handle the articulation of actions was recently proposed by [2]. As the key intuition, they showed that each individual action class can be represented by the behavior and appearance of a few informative joints of the body. They utilized a data mining technique to find these discriminative sets of joints for each of the available action classes and tied up the features of those parts as “actionlets”, then employed a multi-kernel learning method to build up ensembles of actionlets as kernels for action classification. This method is highly robust against the noise in the depth maps, and the results show its strength in characterizing human body motion as well as human-object interactions. However, the downside of this approach is the inconsistency of its heuristic selection process (mining actionlets) with the following learning step. Moreover, it simply concatenates the different types of features for multimodal fusion, which is another drawback of this work; in this fashion, achieving the optimal combination of features with regard to the classification task cannot be guaranteed.
To overcome the limitations mentioned above, we propose a joint structured-sparsity regression-based learning method which integrates part selection into the learning process, considering the heterogeneity of the features of each joint. We associate all the features of each part as a bundle and apply a group sparsity regularization to select a small number of active parts for each action class. To model the precise hierarchy of the multimodal-multipart features in an integrated learning and selection framework, we propose a hierarchical mixed norm which includes three levels of regularization over the learning weights. To apply the modality-based coupling over the heterogeneous features of each part, it applies an ℓ2/ℓ4 mixed norm with two degrees of “diversity” induction [4], followed by an ℓ1 group sparsity among the feature groups of the different parts to apply the part selection.
The main contributions of this paper are twofold. First, we integrate the part selection process into our learning in order to latently select the discriminative body parts for the different action classes and utilize them to learn the classifiers. Second, a hierarchical mixed norm is proposed to apply the desired simultaneous sparsity and regularization over the different levels of the learning weights, corresponding to our special multimodal-multipart features, in a joint group-sparsity regression framework.
We evaluate our method on three challenging depth-based action recognition datasets: the MSR-DailyActivity3D dataset [2], the MSR-Action3D dataset [5], and the 3D Action Pairs dataset [3]. Our experimental results show that the proposed method is superior to other available methods for action recognition on depth sequences.
The rest of this paper is organized as follows. Section 2 reviews the related work on depth-based action recognition, joint sparse regression, mixed norms, and multitask learning. Section 3 presents the proposed integrated feature selection and learning scheme; it also introduces the new multimodal-multipart mixed norm which applies the regularization and group sparsity of the proposed learning model. Experimental results on the three above-mentioned benchmarks are covered in Section 4, and we conclude the paper in Section 5.
2 Related Work
Visual features extracted from depth signals can be classified into two major classes. The first consists of skeleton-based features, which extract information from the provided 3D locations of the body joints on each frame of the sequence. Skeletons essentially provide a very succinct and highly discriminative representation of the actions.
[6] utilized them to extract “EigenJoints” for action classification using a naïve-Bayes nearest-neighbor classifier. In [7], spherical histograms of the 3D locations of the joints were fed into an HMM to model the temporal changes and perform the final action classification. The presence of noise in the depth maps and the occlusion of body parts limit the reliability of this type of features. Another major deficiency of skeleton data is its incapacity to represent the interactions of the body with other objects, which is crucial for activity interpretation.

The other group consists of features which are extracted directly from the depth maps. Most of the features in this class consider the depth maps as spatio-temporal signals and try to extract local or holistic descriptions from the input sequences. [5] proposed a depth-based action graph model in which each node indicates a salient posture and actions are represented as paths through the graph nodes. To deal with the occlusion and noise issues of depth maps, [8] proposed “random occupancy pattern” features and applied an elastic-net regularization [9] to find the most discriminative subset of features for action recognition. STIP (space-time interest point) detection described by HOG (histograms of oriented gradients) [10] and HOF (histograms of optical flow) was originally proposed for recognition purposes on RGB videos [11], but [12] showed it can be easily generalized to RGB+D signals; to improve the discrimination of the descriptors, they generalized the idea of “motion history images” [13] to depth maps. Noise suppression can also boost the performance of STIP detection on depth sequences [14]. Four-dimensional surface normals were shown to be very powerful representations of body movements over depth signals [3]. This idea is a generalization of HOG3D [15] to four-dimensional depth videos: the 4D normal vectors of the depth surfaces are quantized by taking their histograms over the vertices of a 4D regular polychoron, which were shown to be highly informative for action classification.
Regarding the strengths and weaknesses of the aforementioned classes of features, we infer that they are complementary to each other, and to achieve higher levels of performance we have to combine them. [2] used histograms of the 3D point clouds around the joints (LOP) in addition to skeleton-based features for action classification, using an “actionlet ensemble” framework. [16] added local HON4D features [3] to joint features to learn a max-margin temporal-warping-based action classifier. We utilize skeletons, LOP, and HON4D as state-of-the-art depth-based features to build up our multimodal input for the task of action recognition.
The main intuition behind the work of [2] is the fact that the features of a few informative joints are good enough for recognizing each class of the actions. They defined an “actionlet” as a combination of the features of a limited number of joints and, based on the discriminative power of each joint and each actionlet, performed a data mining procedure to find the best actionlets for each class of the actions. They used the mined actionlets as kernels in a multi-kernel multi-class SVM. We further extend this idea by applying group sparsity in a joint feature selection framework: we group the features of each part (joint) and apply an ℓ1 norm between these groups to achieve a sparse set of active parts representing each action class.
Mixed norms are powerful tools to inject simultaneous sparsity and coupling effects between the learning coefficients, and they have been studied in a variety of fields. In the statistical domain, [17] proposed the “group Lasso” as an extension of the “Lasso” [18] for grouped variable selection in regression, and [19] introduced the “composite absolute penalty” for hierarchical variable selection. “Hierarchical penalization” has also been proposed to utilize the prior structure of the variables for a better-fitting model [20]. In sparse regression, mixed norms have been used as regularization terms to link the sparsity and persistence of variables [21]. A generalized shrinkage scheme was proposed by [22] for structured sparse regression. [23] used mixed norms as structured-sparsity regularizers for heterogeneous feature fusion, and [24] extended this idea to multiview clustering. [25] proposed a robust self-taught learning using mixed norms, and [26] utilized a fractional mixed norm for robust adaptive dictionary learning. In this paper, to regularize the multimodal features of each part, we apply an ℓ2/ℓ4 mixed norm; to achieve the sparsity between the parts, we generalize this into a hierarchical ℓ1/ℓ2/ℓ4 norm.
If multiple learning tasks at hand share some inherent constituents or structures, “multitask learning” [27] techniques can be globally beneficial. In joint sparse regression, multitask learning is formulated by a mixed norm. [28] proposed an ℓ∞ norm to add this into the Lasso for variable selection. In joint feature selection, the ℓ1/ℓ2 norm can provide multitask learning by applying an ℓ1 selection between the ℓ2-regularized parameters of each feature [29]. The same norm is used in [30], as a generalization of the ℓ1 norm, in a multitask joint sparsity representation model to fuse complementary visual features across recognition tasks. [31] studied different mixed norms for multitask sparse learning in visual tracking and, based on their experimental results, showed the ℓ1/ℓ2 norm to be superior among them. In this work, we use a similar norm to utilize the shared latent factors between the different binary action classifiers: we apply an ℓ2 regularization over the weights corresponding to each feature across all the tasks, followed by an ℓ1 norm between all the features at hand.
3 Multimodal Multipart Learning
Notations
Throughout this paper, we use bold uppercase letters to represent matrices and bold lowercase letters to indicate vectors. For a matrix W, we denote its k-th row as W^k and its c-th column as w_c.
Assume the partition G is defined over a vector w to divide its elements into disjoint sets. We use G_i to represent the indices of the i-th set of G; the corresponding elements of w are referred to as w_{G_i}, and w_{G_i}^j represents the j-th element of w_{G_i}. The ℓ1/ℓ2 norm of w regarding G is represented by ‖w‖_{ℓ1/ℓ2;G} and is defined as the ℓ2 norms of the elements inside each set of G followed by an ℓ1 norm of the values across the sets; mathematically:
‖w‖_{ℓ1/ℓ2;G} = Σ_{i=1}^{|G|} ‖w_{G_i}‖_2 = Σ_{i=1}^{|G|} ( Σ_{j=1}^{|G_i|} (w_{G_i}^j)^2 )^{1/2}    (1)

in which |G| and |G_i| indicate the cardinalities of G and of the set G_i.
Now consider the elements of each set further partitioned by the operator M into disjoint subsets. Similarly, we indicate the j-th subset of the i-th set of G as w_{G_i,M_j}, and w_{G_i,M_j}^k represents its k-th element. The ℓ1/ℓ2/ℓ4 norm of w regarding G and M is represented by ‖w‖_{ℓ1/ℓ2/ℓ4;G,M} and is defined as the ℓ2/ℓ4 norms (regarding M) of all the sets followed by an ℓ1 norm of the values across the sets of G; mathematically:

‖w‖_{ℓ1/ℓ2/ℓ4;G,M} = Σ_{i=1}^{|G|} ( Σ_j ‖w_{G_i,M_j}‖_4^2 )^{1/2} = Σ_{i=1}^{|G|} ( Σ_j ( Σ_k |w_{G_i,M_j}^k|^4 )^{1/2} )^{1/2}    (2)
This representation can be easily extended into higher orders of structural mixed norms by further partitioning the subsets.
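To make the notation concrete, the following minimal Python sketch (ours, not the authors' code; the index layout and the inner ℓ4 norm follow the reconstruction above) evaluates the two mixed norms on a toy vector:

```python
import math

def l2(v):
    return math.sqrt(sum(x * x for x in v))

def l4(v):
    return sum(abs(x) ** 4 for x in v) ** 0.25

def l1_l2_norm(w, G):
    # Eq. (1): an l2 norm inside each index set of the partition G,
    # summed (l1) across the sets.
    return sum(l2([w[i] for i in g]) for g in G)

def l1_l2_l4_norm(w, M):
    # Eq. (2): an l4 norm inside each subset, an l2 norm across the
    # subsets of a set, and an l1 sum across the sets; M[i][j] holds
    # the indices of subset j of set i.
    return sum(l2([l4([w[k] for k in m]) for m in subsets]) for subsets in M)

# Toy vector with two sets, each split into two singleton subsets.
w = [3.0, 4.0, 1.0, 0.0]
G = [[0, 1], [2, 3]]
M = [[[0], [1]], [[2], [3]]]
print(l1_l2_norm(w, G))      # 5.0 + 1.0 = 6.0
print(l1_l2_l4_norm(w, M))   # same here, since all subsets are singletons
```

For singleton subsets the ℓ4 norm reduces to an absolute value, so both norms coincide on this toy example; with larger subsets they differ.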
3.1 Multipart Learning by Structured Sparsity
Our learning goal is to recognize the actions in depth videos, based on the extracted depth-based and skeleton-based features. The set of input features we use to describe each action sample is a combination of multimodal-multipart features. The entire body is separated into a number of parts (as illustrated in Fig. 1), and for each part we have different types of features to represent the movement and the local depth appearance. Therefore, our input feature set for each sample can be represented by a vector x ∈ R^d, which consists of feature groups of different parts and modalities. Assume the operator G partitions x into the parts, and M is defined over the sets of G to further partition them based on the number of feature modalities. The hierarchy of the features inside this vector is then indicated by the subvectors x_{G_g,M_m}, in which each x_{G_g,M_m} holds the features of part g extracted from modality m.
Now the problem of multi-class action recognition can be considered as multiple binary regression-based classification problems, in a one-versus-all manner. Given N training samples x_i ∈ R^d and their corresponding labels for C distinct classes, y_i^c, with y_i^c = 1 when sample i belongs to class c and y_i^c = −1 otherwise, we are looking for a projection matrix W = [w_1, ..., w_C] ∈ R^{d×C} which minimizes a set of loss functions ℓ(w_c^T x_i, y_i^c) for all classes c and samples i. Our choice for the total loss function, without loss of generality, is the sum of squared errors Σ_c Σ_i (y_i^c − w_c^T x_i)^2.

The most common shrinkage methods to regularize the learning weights against overfitting penalize ℓp norms of the learning weights of each class:
min_{w_c} Σ_{i=1}^{N} (y_i^c − w_c^T x_i)^2 + λ ‖w_c‖_p^p    (3)
in which λ is the regularization factor. Employing the ℓ2 norm leads to a general weight decay and minimization of the magnitude of w_c, while applying the ℓ1 norm yields simultaneous shrinkage and sparsity among the individual features. Such methods simply ignore the structural information between the features, which can be useful for classification; therefore, it is beneficial to embed these feature relations into our learning scheme via structured-sparsity-inducing mixed norms.
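For intuition on this difference (an illustrative sketch we add here, not part of the paper), the proximal operators of the two penalties make the contrast explicit: the squared ℓ2 penalty uniformly shrinks every coefficient, while the ℓ1 penalty soft-thresholds small coefficients to exactly zero.

```python
import math

def prox_l2_squared(w, lam):
    # Ridge-style shrinkage (p = 2 in Eq. (3)): every coefficient is
    # uniformly scaled down, none becomes exactly zero.
    return [x / (1.0 + 2.0 * lam) for x in w]

def prox_l1(w, lam):
    # Lasso-style soft-thresholding (p = 1): coefficients with
    # |x| <= lam are set exactly to zero, the rest shrink by lam.
    return [math.copysign(abs(x) - lam, x) if abs(x) > lam else 0.0 for x in w]

w = [3.0, -0.5, 0.2]
print(prox_l2_squared(w, 0.5))  # [1.5, -0.25, 0.1]
print(prox_l1(w, 0.5))          # [2.5, 0.0, 0.0]
```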
In the context of depth-based action recognition, the features are naturally partitioned into parts. The “actionlet ensemble” method [2] tried to discover discriminative joint groups using a data mining process, which led to a notable improvement in performance; however, its heuristic selection process is discrete and separated from the following learning step. To address these issues, we propose to apply group sparsity to perform part selection and classification in a regression-based framework, in contrast to the mining-based joint group discovery of [2].
We know that the discriminative strengths of the features in each part are highly correlated with regard to all the classes at hand, so we expect the corresponding learning parameters (the elements of each (w_c)_{G_g}) to be triggered or halted concurrently within each set of the partitioning G, for each action class. To apply a grouping effect on these features, we consider each set of G as a unit and measure its strength with an ℓ2 norm of the included learning weights. On the other hand, we seek a sparse set of parts to be activated for each class at hand, so we apply an ℓ1 norm between the values of the groups. Such an intuition can be formulated by an ℓ1/ℓ2 mixed norm based on G for each class:
‖w_c‖_{ℓ1/ℓ2;G} = Σ_{g=1}^{|G|} ‖(w_c)_{G_g}‖_2    (4)
Adding this up for all the action classes with the same regularization factor λ, we have:
min_W Σ_{c=1}^{C} [ Σ_{i=1}^{N} (y_i^c − w_c^T x_i)^2 + λ ‖vec(W)_{T_c}‖_{ℓ1/ℓ2;G} ]    (5)
in which vec(·) is the vectorization operator and T is the partitioning operator of the elements based on their corresponding tasks (or columns here): vec(W)_{T_c} = w_c. We will refer to this multipart learning method as “MP”.
Minimization of (5) applies the desired grouping effect to the features of each part and promotes sparsity over the number of active parts for each class, in a smoother and simpler way compared to the actionlet method.
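The part-selection behavior of (5) can be seen in isolation through the proximal operator of its ℓ1/ℓ2 term (a small sketch of ours, assuming the standard block soft-thresholding form; it is not taken from the paper's solver): a part whose group norm is small is switched off entirely, while stronger parts are only shrunk.

```python
import math

def prox_group_l1_l2(w, G, lam):
    # Block soft-thresholding: proximal operator of lam * sum_g ||w_g||_2
    # from Eq. (4). A part whose group l2 norm falls below lam is zeroed
    # out, i.e. the part is deselected for this class.
    out = list(w)
    for g in G:
        norm = math.sqrt(sum(w[i] ** 2 for i in g))
        scale = max(1.0 - lam / norm, 0.0) if norm > 0.0 else 0.0
        for i in g:
            out[i] = w[i] * scale
    return out

w = [3.0, 4.0, 0.1, 0.2]       # two parts with two features each
G = [[0, 1], [2, 3]]
print(prox_group_l1_l2(w, G, 1.0))  # first part shrunk, second part zeroed
```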
3.2 Multimodal Multipart Learning via Hierarchical Mixed Norm
In the above formulation, we apply an ℓ2 regularization norm over the heterogeneous features of all the modalities of each part, ignoring the modality structure between them. In other words, applying a general ℓ2 norm may cause the suppression of the information in some dimensions. These issues are more severe when the training samples are limited (which is the case for action recognition in depth), where they may lead to a weak generalization of the learning.
To overcome these limitations, we utilize the ℓ4 norm to regularize the coefficients inside each modality, so that “diversity” [21] can be encouraged. It is known that the behavior of the ℓp norm for increasing p rapidly moves towards ℓ∞ [32]; since ℓ∞ is not easy to optimize directly, we picked ℓ4 as the most efficient approximation of it. Higher-order norms like ℓ8 apply the same effect but with a slightly more expensive processing cost.
By applying the ℓ4 norm to regularize the weights in each modality group of each part, we now have a three-level mixed norm. The inner ℓ4 gives more “diversity” to regularize the features inside each part-modality subset, the ℓ2 norm employs a magnitude-based regularization over the ℓ4 values to link the different modalities of each part, and the outer ℓ1 applies the soft part selection between the ℓ2 values of each action class (Fig. 1).
Replacing the previous structured norm in (5) by the proposed hierarchical mixed norm, we have:
min_W Σ_{c=1}^{C} [ Σ_{i=1}^{N} (y_i^c − w_c^T x_i)^2 + λ ‖vec(W)_{T_c}‖_{ℓ1/ℓ2/ℓ4;G,M} ]    (6)
here, G indicates the partitioning of the features based on their source body parts, and M represents the further partitioning of each part’s set regarding the modalities of the features. In the rest of this paper, we use the abbreviation “MMMP” to refer to this method. It is worthwhile to note that changing the inner ℓ4 norm to ℓ2 reduces the hierarchical norm to a two-level ℓ1/ℓ2 mixed norm, as derived directly from the definition of the hierarchical norm (2).
When different learning tasks share similar latent features, “multitask learning” [27] techniques can improve the performance of the entire system by applying information sharing between the tasks. Here we are learning classifiers for different classes which essentially have many latent components in common, so pushing them to share some features is beneficial for the classification task. This can be done by applying an ℓ2 grouping on all the weights corresponding to each individual feature (the rows of W). Each of these ℓ2 values represents the magnitude of the strength of its corresponding feature among all the tasks; applying an ℓ1 norm over these magnitudes then yields a shared variable selection considering all the tasks. Adding the new multitask term to (6), we have:
min_W Σ_{c=1}^{C} [ Σ_{i=1}^{N} (y_i^c − w_c^T x_i)^2 + λ_1 ‖vec(W)_{T_c}‖_{ℓ1/ℓ2/ℓ4;G,M} ] + λ_2 ‖vec(W)‖_{ℓ1/ℓ2;F}    (7)

‖vec(W)‖_{ℓ1/ℓ2;F} = Σ_{k=1}^{d} ‖W^k‖_2    (8)
here, d is the number of rows of W, which is equal to the size of the entire feature vector, and F defines the partitioning of the elements of vec(W) based on their corresponding individual features: vec(W)_{F_k} = (W^k)^T.
Combining these two regularization terms can be considered as a trade-off between the sparsity and the persistence of the features [33], based on their relations across the parts and modalities and between the action classes.
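As a small check of the multitask term (our sketch, following the row-wise grouping described above), the ℓ1/ℓ2 penalty over the rows of W leaves rows shared across tasks intact while rows unused by every task contribute nothing, so they are the first to be driven to zero:

```python
import math

def multitask_l1_l2(W):
    # Eq. (8): an l2 norm of each row of W (one row per feature, one
    # column per one-vs-all task), summed (l1) across the rows.
    return sum(math.sqrt(sum(x * x for x in row)) for row in W)

# Two features across three binary tasks.
W = [[1.0, 2.0, 2.0],   # feature useful for all three classifiers
     [0.0, 0.0, 0.0]]   # feature used by none of them
print(multitask_l1_l2(W))  # 3.0 + 0.0 = 3.0
```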
In our experiments, we use the body joints as the partitioning operator G. Each column of W has the same hierarchical partitioning as the input features: W = [w_1, ..., w_C], in which c counts the action classes and g = 1, ..., |G| counts the feature groups of the joints. The features of each joint come from three different modalities: skeletons, LOP, and HON4D; this defines the operator M. Therefore, each w_c consists of subvectors w_{c,g,m}, in which w_{c,g,m} holds the weight elements corresponding to class c, joint g, and modality m. This way, (7) will be expanded to:
min_W Σ_{c=1}^{C} [ Σ_{i=1}^{N} (y_i^c − w_c^T x_i)^2 + λ_1 Σ_{g=1}^{|G|} ( Σ_{m=1}^{3} ‖w_{c,g,m}‖_4^2 )^{1/2} ] + λ_2 Σ_{k=1}^{d} ‖W^k‖_2    (9)
3.3 Two Step Learning Approach
The downside of the current formulation is the large number of weights to be learned simultaneously, compared to the number of training samples, which is highly limited in current depth-based action recognition benchmarks. To resolve this, we first learn partially optimal weights for the multipart features of each modality separately, and then fine-tune them by the proposed multimodal-multipart learning.
To learn the partially optimal weights W*_m for each modality m, we optimize:
W*_m = argmin_{W_m} Σ_{c=1}^{C} [ Σ_{i=1}^{N} (y_i^c − w_{m,c}^T x_{m,i})^2 + λ ‖vec(W_m)_{T_c}‖_{ℓ1/ℓ2;G} ]    (10)
After reaching the partially optimal point for each modality, we merge the values of all the modalities:
W^0 = [ (W*_1)^T, (W*_2)^T, (W*_3)^T ]^T    (11)
The next step is to fine-tune the weights in the multimodal-multipart learning fashion, within a neighborhood of the W^0 values. To do so, we expect the globally optimal weights not to diverge too much from their partially optimal points:
min_W Σ_{c=1}^{C} [ Σ_{i=1}^{N} (y_i^c − w_c^T x_i)^2 + λ_1 ‖vec(W)_{T_c}‖_{ℓ1/ℓ2/ℓ4;G,M} ] + λ_2 Σ_{k=1}^{d} ‖W^k‖_2 + γ ‖W − W^0‖_F^2    (12)
The last term in (12) limits the deviation of the learning weights from their partially optimal points, as we expect them to be merely fine-tuned in this step.
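A toy illustration of the effect of this proximity term (our sketch; plain gradient descent replaces the L-BFGS solver and the structured regularizers are dropped to keep the example short):

```python
def fit_near(X, y, w0, gamma, lr=0.01, steps=3000):
    # Least-squares fit with a proximity term gamma * ||w - w0||^2,
    # mimicking the role of the last term of (12): a large gamma pulls
    # the solution toward the partially optimal weights w0.
    w = list(w0)
    for _ in range(steps):
        grad = [2.0 * gamma * (wk - w0k) for wk, w0k in zip(w, w0)]
        for xi, yi in zip(X, y):
            err = sum(wk * xk for wk, xk in zip(w, xi)) - yi
            for k in range(len(w)):
                grad[k] += 2.0 * err * xi[k]
        w = [wk - lr * gk for wk, gk in zip(w, grad)]
    return w

X, y = [[1.0, 0.0], [0.0, 1.0]], [1.0, 1.0]
print(fit_near(X, y, [0.0, 0.0], 0.0))   # ~[1.0, 1.0]: unconstrained fit
print(fit_near(X, y, [0.0, 0.0], 10.0))  # ~[0.09, 0.09]: tied to w0
```

With gamma = 0 the data term wins; with a large gamma each coordinate solves min (w − 1)^2 + 10 w^2, whose optimum is 1/11.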
Upon optimization over the training data, the predicted class of the learned classifier for each testing sample x is obtained by:
label(x) = argmax_{c} w_c^T x    (13)
The optimization steps are all done by the “L-BFGS” algorithm using the off-the-shelf “minFunc” tool [34].
4 Experiments
This section describes our experimental setup details and then provides the results of the proposed method on three depth based action recognition benchmarks.
4.1 Experimental Setup
All the provided experiments are done on Kinect-based datasets. Kinect captures RGB frames, depth map signals, and the 3D locations of major body joints. To have a fair comparison with other depth-based methods, we ignore the RGB signals. Skeleton extraction is done automatically by the Kinect SDK, based on the part-based human pose recognition system of [1]. On each frame, we have an estimation of the 3D positions of 20 joints of the body. All of our features are defined based on these joints as the multipart partitioning operator G; therefore, each feature necessarily belongs to one of these parts.

To represent skeleton-based features, we first normalize the 3D locations of the joints against the size, position, and direction of the body in the scene. This normalization step eases the comparison between body poses. On the other hand, the extracted body locations and directions can also be highly discriminative for some action classes like “walking” or “lying down”; therefore we add them to the features under a new auxiliary part. To encode the dynamics of the skeleton-based features, we apply the “Fourier temporal pyramid” as suggested by [2] and keep the first four frequency coefficients of each short-time Fourier transform. This leads to a feature vector of size 1,876 for each action sample.
In addition to the skeleton-based features, the other modalities we use are local HON4D [3] and LOP [2], to represent the depth-based local dynamics and appearance around each joint. On each frame, LOPs are extracted on a (96,96,320)-sized depth neighborhood of each joint, which is divided into (32,32,80)-sized bins. To represent the LOP-based kinetics, we use a similar Fourier temporal pyramid transformation. HON4D features are also extracted locally over the locations of the joints on each frame. We encode the HON4D features using LLC (locality-constrained linear coding) [35] to reduce their dimensionality while preserving the locality of the 4D surface normals. A dictionary size of 100 is picked for the clustering step, and the LLC codes go through max pooling over a 3-level temporal pyramid. The dimensions of the features for LOP and HON4D are 5,040 and 14,000 respectively, and the overall dimensionality of the input features for each sample is 20,916.
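To illustrate the temporal encoding (a simplified sketch of ours: three pyramid levels of halved segments and the magnitudes of the first four DFT coefficients per segment; the exact configuration in [2] may differ):

```python
import cmath

def dft_low(seq, n_coeff=4):
    # Magnitudes of the first n_coeff DFT coefficients of a 1-D signal.
    n = len(seq)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t, x in enumerate(seq))) for k in range(n_coeff)]

def fourier_temporal_pyramid(seq, levels=3, n_coeff=4):
    # Concatenate low-frequency DFT magnitudes of the whole sequence
    # and of its recursively halved segments (1 + 2 + 4 = 7 segments).
    feats = []
    for level in range(levels):
        n_seg = 2 ** level
        seg_len = len(seq) // n_seg
        for s in range(n_seg):
            feats.extend(dft_low(seq[s * seg_len:(s + 1) * seg_len], n_coeff))
    return feats

traj = [float(t % 8) for t in range(32)]  # toy 1-D joint-coordinate track
print(len(fourier_temporal_pyramid(traj)))  # 7 segments x 4 coefficients = 28
```

Keeping only low-frequency magnitudes makes the descriptor short and robust to temporal misalignment and noise, which is the motivation given in [2].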
4.2 MSRDailyActivity3D Dataset
Table I: Subject-wise cross-validation accuracy on the MSR-DailyActivity3D dataset.

Method | Structure/Hierarchical Norm Used | Accuracy
Plain  | ℓ2                               | 80.61 ± 2.49%
MP     | ℓ1/ℓ2                            | 81.55 ± 2.43%
MMMP   | ℓ1/ℓ2/ℓ4                         | 84.03 ± 2.16%
Table II: Accuracy on the standard cross-subject split of the MSR-DailyActivity3D dataset.

Method | Structure/Hierarchical Norm Used | Accuracy
Plain  | ℓ1                               | 86.88%
Plain  | ℓ2                               | 87.50%
MP     | ℓ1/ℓ2                            | 88.13%
MMMP   | ℓ1/ℓ2/ℓ4                         | 91.25%
Table III: Comparison with related methods on the MSR-DailyActivity3D dataset, for different combinations of modalities.

Method                  | Modalities        | Accuracy
Actionlet Ensemble [2]  | LOP               | 61%
Proposed MP             | LOP               | 79.38%
Orderlet Mining [36]    | Skeleton          | 73.8%
Actionlet Ensemble [2]  | Skeleton          | 74%
Proposed MP             | Skeleton          | 79.38%
Local HON4D [3]         | HON4D             | 80.00%
Proposed MP             | HON4D             | 81.88%
Actionlet Ensemble [2]  | Skeleton+LOP      | 85.75%
Proposed MMMP           | Skeleton+LOP      | 88.13%
MMTW [16]               | Skeleton+HON4D    | 88.75%
Proposed MMMP           | Skeleton+HON4D    | 89.38%
DSTIP [14]              | DCSF+LOP          | 88.20%
Proposed MMMP           | Skeleton+LOP+HON4D | 91.25%
Due to its intra-class variations and its choice of action classes, the MSR-DailyActivity3D dataset [2] is one of the most challenging benchmarks for action recognition in depth sequences. It contains the RGB, depth, and skeleton information of 320 action samples from 16 classes of daily activities in a living room. Each activity is performed by 10 distinct subjects in two different ways, and evaluations are applied over a fixed cross-subject setting: the first five subjects are taken for training and the others for testing. Unlike other datasets, MSR-DailyActivity3D has more realistic variations within each class: subjects used both hands randomly to perform the activities, and the samples of each class are captured in different poses.
First, to verify the strengths of the proposed hierarchical mixed norm, we evaluate the classification performance in a subject-wise cross-validation scenario. We evaluate the performance of the plain ℓ2 norm, the multipart structured norm (MP), and the proposed hierarchical mixed norm (MMMP) on all 252 possible train/test splits of 5 out of the 10 subjects. To have a proper comparison between these norms, we have not applied the multitask term. The results of this experiment are shown in Table I. Adding the part-based grouping, even though it ignores the modality associations between the features, slightly improves the performance from 80.61% to 81.55%. By adding the multimodality grouping and applying the proposed hierarchical mixed norm, the improvement is more significant and reaches 84.03%.
Next, we verify the results of our method by applying the mentioned norms on the standard train/test split of the subjects. As provided in Table II, applying simple feature selection using a plain ℓ1 norm leads to an accuracy of 86.88%. By applying a plain ℓ2 norm on all the features we get 87.50%. Multipart learning, regardless of the heterogeneity of the modalities, leads to 88.13%. Finally, by adding the multimodal-multipart learning via the proposed hierarchical mixed norm, we reach 91.25% on this dataset. Applying higher orders for the innermost norm (like ℓ8) achieved the same level of accuracy at a slightly higher processing time.
To assess the strength of the proposed multipart learning, we evaluate our method in a single-modality setting using (10). As shown in Table III, on the skeleton-based features we get 79.38%, compared to 74% for the baseline actionlet method. Using LOPs, our method achieves 79.38%, which is more than 18% higher than the actionlet performance. For the local HON4D features, we achieve 81.88%, compared to 80.00% for the baseline local HON4D method. Next, we use the partially learned weights of the single-modality multipart learning and employ them in the optimization of (12) to learn the globally optimal projections. First we try the combination of the skeleton-based features with LOP: the proposed learning yields an accuracy of 88.13%, which outperforms the baseline’s best result of 85.75%. [16] used skeleton and HON4D features in a temporal warping framework and got 88.75%; our method outperforms it using the same set of features, achieving 89.38%. Finally, using all three modalities, our method reaches a performance level of 91.25%. Table III shows the complete set of results for this experiment.
Our implementation is done in MATLAB and is not fully optimized for time efficiency. The average training and testing times of MMMP on a Core i5 machine are and seconds respectively, with no parallel processing.
It is worth pointing out that some of the published works on this dataset applied other train/test splits; e.g., [37] reported 93.1% accuracy with a leave-one-subject-out cross-validation. In this setup, the proposed MMMP method achieves 97.5%.
4.3 MSRAction3D Dataset
Table IV: Comparison on the MSR-Action3D dataset, following the protocol of [5].

Method (protocol of [5])              | Accuracy
Action Graph on Bag of 3D Points [5]  | 74.7%
Histogram of 3D Joints [7]            | 79.0%
EigenJoints [6]                       | 83.3%
Random Occupancy Patterns [8]         | 86.5%
Depth HOG [38]                        | 91.6%
Lie Group [39]                        | 92.5%
JAS+HOG [40]                          | 94.8%
DL-GSGC+TPM [41]                      | 96.7%
Proposed MMMP                         | 98.2%
Table V: Comparison on the MSR-Action3D dataset, following the protocol of [2].

Method (protocol of [2])              | Accuracy
Depth HOG [38] (as reported in [16])  | 85.5%
Actionlet Ensemble [2]                | 88.2%
HON4D [3]                             | 88.9%
DSTIP [14]                            | 89.3%
Lie Group [39]                        | 89.5%
HOPC [42]                             | 91.6%
Max Margin Time Warping [16]          | 92.7%
Proposed MMMP                         | 93.1%
MSR-Action3D [5] is another depth-based action dataset, which provides the depth sequences and skeleton information of 567 samples for 20 action classes. The actions are performed by 10 different subjects, two or three times each. Evaluations are applied over another fixed cross-subject setting: odd-numbered subjects are taken for training and even-numbered ones for testing. On one hand, the depth sequences of this dataset have a clean background, which eases the recognition; on the other hand, the number of classes is higher than in other datasets, which can be a challenge for classification.
The reported results on this dataset are divided into two different scenarios: the first is the average cross-subject performance on the three action subsets defined in [5], and the second is the overall cross-subject accuracy regardless of the subsets, as done in [2]. Following [39], we refer to them as the protocols of [5] and [2]. Tables IV and V show the results. Although we still have the highest accuracy among the reported results, the achieved margin is not as large as on the other datasets. This is because of the simplicity of the actions in this dataset: since there is no interaction with other objects, most of the classes are highly distinguishable using skeleton-only features, so our multimodality could not boost the results as much; the multipart learning, however, still shows its advantage over the other methods.
4.4 3D Action Pairs Dataset
To emphasize the importance of the temporal order of body poses in the meaning of actions, [3] proposed the 3D Action Pairs dataset. It covers six pairs of similar actions; the only difference within each pair is the temporal order, so both actions of a pair share similar skeletons, poses, and object shapes. Each action is performed by 10 subjects, 3 times each. The first five subjects are taken for testing and the others for training. Given the smaller number of action classes and the absence of intra-class variations, this is the easiest benchmark among the depth-based action recognition datasets, and other methods have already achieved very high accuracies on it.
Here we apply our full multimodal-multipart learning method using all three available modalities of features. As shown in Table VI, the proposed method outperforms all the others and saturates the benchmark by achieving a perfect performance level on this dataset.
5 Conclusion
This paper presents a new multimodal-multipart learning approach for action classification in depth sequences. We show that a sparse combination of multimodal part-based features can effectively and discriminatively represent all the available action classes at hand. Based on the nature of the problem, we utilize a heterogeneous set of features from skeleton-based 3D joint trajectories, depth occupancy patterns, and histograms of depth surface normals, and show a proper way of using them as a multimodal feature set for each part.
The proposed method performs the group feature selection, the weight regularization, and the classifier learning in one consistent optimization step. It applies the proposed hierarchical mixed norm to model the proper structure of the multimodal-multipart input features, by applying a diversity-inducing ℓ4 norm over the coefficients of each part-modality group, linking the different modalities of each part by a magnitude-based ℓ2 norm, and utilizing a soft part selection by a sparsity-inducing ℓ1 norm.
The experimental evaluations provided on three challenging depth-based action recognition datasets show that the proposed method can successfully exploit the structure of the input features in a concurrent group feature selection and learning scheme, and confirm the strengths of the suggested framework compared to other methods.
References

[1] J. Shotton, A. Fitzgibbon, M. Cook, T. Sharp, M. Finocchio, R. Moore, A. Kipman, and A. Blake, "Real-time human pose recognition in parts from single depth images," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011.
[2] J. Wang, Z. Liu, Y. Wu, and J. Yuan, "Learning actionlet ensemble for 3d human action recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2014.
[3] O. Oreifej and Z. Liu, "Hon4d: Histogram of oriented 4d normals for activity recognition from depth sequences," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013.
[4] B. Rao and K. Kreutz-Delgado, "An affine scaling methodology for best basis selection," IEEE Transactions on Signal Processing (TSP), 1999.
 [5] W. Li, Z. Zhang, and Z. Liu, “Action recognition based on a bag of 3d points,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops), 2010.
[6] X. Yang and Y. Tian, "Effective 3d action recognition using eigenjoints," Journal of Visual Communication and Image Representation, 2014.
[7] L. Xia, C.-C. Chen, and J. Aggarwal, "View invariant human action recognition using histograms of 3d joints," in IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops), 2012.
 [8] J. Wang, Z. Liu, J. Chorowski, Z. Chen, and Y. Wu, “Robust 3d action recognition with random occupancy patterns,” in European Conference on Computer Vision (ECCV), 2012.
 [9] H. Zou and T. Hastie, “Regularization and variable selection via the elastic net,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2005.
 [10] N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2005.
[11] I. Laptev and T. Lindeberg, "Space-time interest points," in IEEE International Conference on Computer Vision (ICCV), 2003.
[12] B. Ni, G. Wang, and P. Moulin, "RGBD-HuDaAct: A color-depth video database for human daily activity recognition," in IEEE International Conference on Computer Vision Workshops (ICCV Workshops), 2011.
 [13] A. Bobick and J. Davis, “The recognition of human movement using temporal templates,” IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2001.
[14] L. Xia and J. Aggarwal, "Spatio-temporal depth cuboid similarity feature for activity recognition using depth camera," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013.
[15] A. Klaeser, M. Marszalek, and C. Schmid, "A spatio-temporal descriptor based on 3d-gradients," in British Machine Vision Conference (BMVC), 2008.
 [16] J. Wang and Y. Wu, “Learning maximum margin temporal warping for action recognition,” in IEEE International Conference on Computer Vision (ICCV), 2013.
 [17] M. Yuan and Y. Lin, “Model selection and estimation in regression with grouped variables,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2006.
 [18] R. Tibshirani, “Regression shrinkage and selection via the lasso,” Journal of the Royal Statistical Society. Series B (Methodological), 1996.
 [19] P. Zhao, G. Rocha, and B. Yu, “The composite absolute penalties family for grouped and hierarchical variable selection,” The Annals of Statistics, 2009.
[20] M. Szafranski, Y. Grandvalet, and P. Morizet-Mahoudeaux, "Hierarchical penalization," in Advances in Neural Information Processing Systems (NIPS), 2008.
 [21] M. Kowalski, “Sparse regression using mixed norms,” Applied and Computational Harmonic Analysis, 2009.
 [22] M. Kowalski and B. Torrésani, “Structured Sparsity: from Mixed Norms to Structured Shrinkage,” in Signal Processing with Adaptive Sparse Structured Representations (SPARS), 2009.
 [23] H. Wang, F. Nie, H. Huang, and C. Ding, “Heterogeneous visual features fusion via sparse multimodal machine,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013.

[24] H. Wang, F. Nie, and H. Huang, "Multi-view clustering and feature learning via structured sparsity," in International Conference on Machine Learning (ICML), 2013.
[25] H. Wang, F. Nie, and H. Huang, "Robust and discriminative self-taught learning," in International Conference on Machine Learning (ICML), 2013.
[26] H. Wang, F. Nie, W. Cai, and H. Huang, "Semi-supervised robust dictionary learning via efficient ℓ2,0+-norms minimization," in IEEE International Conference on Computer Vision (ICCV), 2013.
 [27] R. Caruana, “Multitask learning,” Machine Learning, 1997.
 [28] H. Liu, M. Palatucci, and J. Zhang, “Blockwise coordinate descent procedures for the multitask lasso, with applications to neural semantic basis discovery,” in International Conference on Machine Learning (ICML), 2009.
 [29] G. Obozinski, B. Taskar, and M. I. Jordan, “Joint covariate selection and joint subspace selection for multiple classification problems,” Statistics and Computing, 2010.
 [30] X.T. Yuan, X. Liu, and S. Yan, “Visual classification with multitask joint sparse representation,” IEEE Transactions on Image Processing (TIP), 2012.
 [31] T. Zhang, B. Ghanem, S. Liu, and N. Ahuja, “Robust visual tracking via multitask sparse learning,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012.

[32] A. Rakotomamonjy, R. Flamary, G. Gasso, and S. Canu, "ℓp-ℓq penalty for sparse linear and sparse multiple kernel multitask learning," IEEE Transactions on Neural Networks, 2011.
[33] M. Kowalski and B. Torrésani, "Sparsity and persistence: mixed norms provide simple signal models with dependent coefficients," Signal, Image and Video Processing, 2009.
 [34] M. Schmidt, “Minfunc,” 2005. [Online]. Available: http://www.di.ens.fr/~mschmidt/Software/minFunc.html
[35] J. Wang, J. Yang, K. Yu, F. Lv, T. Huang, and Y. Gong, "Locality-constrained linear coding for image classification," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010.
[36] G. Yu, Z. Liu, and J. Yuan, "Discriminative orderlet mining for real-time recognition of human-object interaction," in Asian Conference on Computer Vision (ACCV), 2014.
[37] S. Althloothi, M. H. Mahoor, X. Zhang, and R. M. Voyles, "Human activity recognition using multi-features and multiple kernel learning," Pattern Recognition, 2014.
[38] X. Yang, C. Zhang, and Y. Tian, "Recognizing actions using depth motion maps-based histograms of oriented gradients," in ACM International Conference on Multimedia (MM), 2012.
 [39] R. Vemulapalli, F. Arrate, and R. Chellappa, “Human action recognition by representing 3d skeletons as points in a lie group,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
 [40] E. OhnBar and M. Trivedi, “Joint angles similarities and hog for action recognition,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops), 2013.
 [41] J. Luo, W. Wang, and H. Qi, “Group sparsity and geometry constrained dictionary learning for action recognition from depth maps,” in IEEE International Conference on Computer Vision (ICCV), 2013.
[42] H. Rahmani, A. Mahmood, D. Q. Huynh, and A. Mian, "Hopc: Histogram of oriented principal components of 3d pointclouds for action recognition," in European Conference on Computer Vision (ECCV), 2014.