A Prototype-Based Generalized Zero-Shot Learning Framework for Hand Gesture Recognition

by Jinting Wu, et al.

Hand gesture recognition plays a significant role in human-computer interaction for understanding various human gestures and their intent. However, most prior works can only recognize gestures of limited labeled classes and fail to adapt to new categories. The task of Generalized Zero-Shot Learning (GZSL) for hand gesture recognition aims to address the above issue by leveraging semantic representations and detecting both seen and unseen class samples. In this paper, we propose an end-to-end prototype-based GZSL framework for hand gesture recognition which consists of two branches. The first branch is a prototype-based detector that learns gesture representations and determines whether an input sample belongs to a seen or unseen category. The second branch is a zero-shot label predictor which takes the features of unseen classes as input and outputs predictions through a learned mapping mechanism between the feature and the semantic space. We further establish a hand gesture dataset that specifically targets this GZSL task, and comprehensive experiments on this dataset demonstrate the effectiveness of our proposed approach on recognizing both seen and unseen gestures.



I Introduction

Hand gesture recognition has been widely applied in various fields, such as post-stroke rehabilitation[1], sign language recognition[2], emotion recognition[3] and human-robot interaction[4]. However, most existing works can only recognize a limited number of categories that have been seen during training and fail to extend to new categories. Besides, achieving high accuracy in hand gesture recognition requires a large amount of labeled data from different classes, which in practice is costly and time-consuming to collect. Therefore, it is critical to transfer learned knowledge from seen to unseen categories and recognize unseen hand gesture classes, in order to better understand the intent of a user's new hand gesture.

Zero-Shot Learning (ZSL), where the goal is to accurately recognize data of unseen classes, provides a solution for tackling the above challenges. ZSL methods establish associations between seen and unseen categories with side information such as attributes[5, 6] and semantic vectors[7]. Note that in ZSL tasks, training and test classes are strictly disjoint, and only data from the seen categories are used during training. This may lead to inferior performance when data from both seen and unseen classes are available at test time, due to the inherent bias towards the seen classes. In other words, the classifier tends to misidentify samples from unseen categories as seen categories[8]. To solve this problem, GZSL, a more general task, is proposed, where samples from the unseen categories are mixed with those of the seen categories in the test set[9]. It aims to reduce the effect of such bias towards seen classes in a less restricted setting where training and test categories are not disjoint.

Although ZSL and GZSL approaches for object recognition[5, 6, 10, 11, 12, 13] have been largely investigated and achieved great success, approaches that target ZSL/GZSL for dynamic hand gesture recognition are less explored. Thomason and Knepper[4] first developed a ZSL gesture recognition system to understand a user's intent by leveraging coordinated natural language, gesture, and context. They generated semantic descriptions of new gesture categories and provided some preliminary results, but quantitative evaluation metrics such as recognition accuracy are not clearly given in their paper. Madapana and Wachs[14, 15] described a new paradigm for Zero-Shot Gestural Learning (ZSGL), in which they generated semantic descriptors for gestures and assessed the performance of various state-of-the-art algorithms. They later proposed a Hard Zero-Shot Learning (HZSL) task for gestures[16] with a small amount of gesture data, and addressed it by integrating One-Shot Learning (OSL) and supervised clustering techniques. However, in the GZSL setting, the recognition accuracy of their method becomes much lower. More recently, in our previous work[17], we established a skeletal joint gesture dataset and designed a recognition system for unfamiliar dynamic gestures based on a Semantic Auto-Encoder (SAE). This model achieves a much higher accuracy in the ZSL setting by utilizing an additional reconstruction constraint, but the bias towards the seen classes still remains in the GZSL setting. The performance of these models on both seen and unseen categories is not satisfactory, which makes it difficult to satisfy the different needs of applications and to better understand users' intent.

In order to improve performance on the GZSL task of hand gesture recognition, we propose an end-to-end prototype-based framework which can mitigate the bias towards predicting seen classes. A detector is first developed to learn gesture representations and discriminate whether a test sample comes from a seen or unseen category. Then, two different classifiers for recognizing seen and unseen classes, respectively, produce the corresponding prediction results. Inspired by the Convolutional Prototype Learning (CPL) framework[18], which has shown great potential in handling the open-world recognition problem, we use a prototype loss to improve the intra-class compactness of the feature representations. In this way, seen samples are classified into the seen categories to which their nearest prototypes belong, while unseen samples are excluded via a learned distance threshold. The feature representations of the excluded unseen samples are then fed into the zero-shot label predictor to obtain prediction results. The proposed framework can be trained in an end-to-end manner, which ensures learning efficiency and makes the feature representations robust for recognizing both seen and unseen categories.

The contributions of this paper are as follows:

  • An end-to-end prototype-based GZSL framework is proposed for hand gesture recognition by integrating prototype learning and feature mapping mechanism. It can improve both the recognition effectiveness and efficiency as all intermediate procedures are updated simultaneously.

  • We build a novel hand gesture dataset which contains 25 hand gestures and 11 semantic attributes for the GZSL task. This dataset is extended and re-recorded based on our previous work[17] on hand gesture recognition in the ZSL setting.

  • Comprehensive experimental results on our dataset compared with other state-of-the-art methods demonstrate the effectiveness and efficiency of the proposed framework.

II Related Work

II-A Dynamic Hand Gesture Recognition

Early works on hand gesture recognition are mainly based on RGB images[2] or the information captured by data gloves[19]. However, data gloves tend to be not user-friendly, and RGB images often lose part of the spatial information of hand motion. Recently, with the development of depth sensors, such as the Leap Motion controller (LMC)[20] and Microsoft Kinect[21], rich 3D information of gestures can be obtained, and accurate skeleton data can be more easily extracted. In this paper, we use the LMC to capture skeleton data for its high localization precision.

As for the recognition algorithms, early research on dynamic hand gesture recognition mainly uses hand-crafted features and adopts methods such as Dynamic Time Warping (DTW)[22], Hidden Markov Models (HMM)[2] and Hidden Conditional Random Fields (HCRF)[20] for classification. With the success of deep learning in a variety of visual tasks, deep networks such as 3D Convolutional Neural Networks (3D CNN)[23, 24], Long Short-Term Memory networks (LSTM)[25, 26] and graph-based networks[27] have been applied in the field of gesture recognition. Although many approaches have been investigated for hand gesture recognition, the problem of recognizing samples from new unseen classes still remains. A large amount of training data is difficult to obtain, and the model needs to be retrained after obtaining new data, which limits the practical application of these algorithms in real scenarios. Different from the above works, our proposed method in the more challenging GZSL setting can help address these problems and better satisfy the different needs of the application.

II-B Zero-Shot Learning

The early works of zero-shot learning directly construct classifiers from seen and unseen class attributes. Lampert et al.[5] first proposed the task of zero-shot learning and introduced an attribute-based classification approach that leverages high-level descriptions of object classes. Later works propose to learn mappings from the feature space to the semantic space. For example, Norouzi et al.[7] mapped images to class embeddings and estimated unseen labels by combining the embedding vectors of the most probable seen classes. Romera-Paredes et al.[10] developed a simple yet effective approach which learned the relationships between features, attributes and categories by adopting a two-layer linear model. More recently, Kodirov et al.[6] adopted an encoder-decoder paradigm, learned with an additional reconstruction constraint, to project a visual feature vector into the semantic space. Morgado and Vasconcelos[13] proposed two semantic constraints as the complementarity between class and semantic supervision, and achieved state-of-the-art recognition performance. Other methods map both features and attributes to a shared space to predict unseen classes. For example, Changpinyo et al.[28] aligned the semantic and feature spaces by computing convex combination coefficients of base classifiers to construct classifiers for unseen classes.

Fig. 1: Overview of the proposed framework which consists of two branches: a prototype-based detector and a zero-shot label predictor. The first branch takes the gesture sequences as input and outputs the representations in the prototype space. The distance between the representation and the given prototypes determines whether a test sample belongs to a seen or unseen class via the learned threshold. Then, for the samples that are considered to be from unseen categories, their feature representations are further taken as the input to the zero-shot label predictor to obtain recognition results. These two branches can be jointly trained in an end-to-end manner.

II-C Generalized Zero-Shot Learning

The limitation of zero-shot learning is that all test data come only from unseen classes. Therefore, a generalized zero-shot learning setting is proposed where the training and test classes are not necessarily disjoint, since both seen and unseen classes may appear during testing. Recently, many works have been proposed to address this task. For example, Xian et al.[12] proposed a generative adversarial network (GAN) that synthesizes CNN features of unseen classes, conditioned on class-level semantic information. This generative model alleviates the problem of data imbalance between seen and unseen categories. Another GAN-based model combines visual-semantic mapping, semantic-visual mapping and metric learning, and achieves improvements in balancing accuracy between seen and unseen classes[29]. Some other approaches formulate this task as a cross-modal embedding problem. For instance, Felix et al.[30] investigated a multi-modal algorithm that balances seen and unseen categories by training both visual and semantic Bayesian classifiers. Schonfeld et al.[11] learned latent features of images and attributes via aligned Variational Autoencoders which contain the essential multi-modal information associated with unseen classes. Although these methods mainly target the challenge of data imbalance between seen and unseen categories, the bias still exists due to the similar treatment of all categories. To address this, Bhattacharjee et al.[31] proposed a novel detector based on an autoencoder with reconstruction and triplet cosine embedding losses to determine whether an input sample belongs to a seen or unseen category, which greatly improves the performance in recognizing novel classes. Mandal et al.[8] further achieved zero-shot action recognition by introducing a separate treatment of seen and unseen action categories and synthesizing video features for unseen action categories to train an out-of-distribution detector. Our method combines the detector and the seen-class classifier into a single branch, which has a simpler structure and is more convenient to train.

III Methodology

We first formalize the problem of GZSL, and describe the proposed model for hand gesture recognition, which consists of two modules: a prototype-based detector and a zero-shot label predictor. The prototype-based detection branch first learns a detector that determines whether an input sample belongs to a seen or unseen category, and meanwhile produces feature representations of unseen data. Then, the zero-shot label prediction branch takes these features as input, and outputs predictions of samples from unseen classes through a learned mapping mechanism from feature to semantic space. We then provide the detailed end-to-end learning objective in this section. The proposed framework is shown in Fig. 1.

III-A Problem Definition

Let $\mathcal{D}^s=\{(x^s_i, y^s_i, a^s_i)\}_{i=1}^{N_s}$ be the training data for the seen classes, where $x^s_i$ is a hand skeletal sequence, $y^s_i$ is the label of $x^s_i$ in the set of seen classes $\mathcal{Y}^s$, and $a^s_i$ is the corresponding semantic embedding among all embeddings $\mathcal{A}$. Similarly, the unseen data can be denoted as $\mathcal{D}^u=\{(x^u_i, y^u_i, a^u_i)\}_{i=1}^{N_u}$, where the hand skeletal sequences $x^u_i$ are only available during testing, $\mathcal{Y}^u$ represents the set of unseen labels, and $\mathcal{Y}^s \cap \mathcal{Y}^u = \varnothing$. The goal of GZSL is to learn a classifier $f:\mathcal{X}\rightarrow\mathcal{Y}^s\cup\mathcal{Y}^u$, so that the learned knowledge can be transferred to recognize samples from both seen and unseen categories.

III-B Prototype-Based Detector (PBD)

The prototype-based detector utilizes a multi-layer Bidirectional Long Short-Term Memory network (BLSTM)[32] to extract temporal features from gesture sequences. The input gesture sequences are captured by a Leap Motion Controller and include the hand direction, palm center and skeletal joint positions. A BLSTM layer is composed of two LSTM layers (a forward one and a backward one), which capture both past and future contextual information at the same time. Traditionally, a softmax layer is added on top of the features extracted by the BLSTM for classification. However, this softmax-based approach tends to misclassify unseen classes as seen ones, which makes it difficult to distinguish between the seen and unseen categories.

To solve the problem of misclassification, Yang et al.[18] proposed a convolutional prototype learning (CPL) framework, which can improve the robustness of classification. CPL aims to learn a few prototypes using CNN features and predict classification labels by matching representations in the prototype space with the closest prototype. Inspired by this, we propose to map the extracted features from BLSTM and learn a fixed number of prototypes for each class. Then, by adding a distance threshold selection process, we can determine whether a test sequence belongs to a trained category.
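The detection rule sketched above — assign a sample to its nearest prototype, but reject it as unseen when the distance exceeds that prototype's learned threshold — can be illustrated with a minimal NumPy sketch (the prototypes, thresholds and dimensions below are invented toy values, not the learned ones):

```python
import numpy as np

def detect(embedding, prototypes, thresholds):
    """Nearest-prototype decision with per-prototype rejection thresholds.

    embedding:  (d,) projection of a gesture in the prototype space
    prototypes: (C, d) one learned prototype per seen class
    thresholds: (C,) learned distance threshold for each prototype
    Returns the predicted seen-class index, or -1 for "unseen".
    """
    dists = np.linalg.norm(prototypes - embedding, axis=1)
    c = int(np.argmin(dists))
    return c if dists[c] <= thresholds[c] else -1

# Toy example: 3 seen classes in a 2-D prototype space.
protos = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
thr = np.array([1.0, 1.0, 1.0])
near_result = detect(np.array([0.2, 0.1]), protos, thr)  # close to class 0 -> 0
far_result = detect(np.array([2.5, 2.5]), protos, thr)   # far from all -> -1
```

A sample near a prototype is accepted as that seen class; a sample far from every prototype is handed to the zero-shot branch.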

The BLSTM features of an input $x$ are denoted as $f(x;\theta)$, where $\theta$ denotes the parameters of the BLSTM. The features are projected to the prototype space through an FC layer, and the projection of $f(x;\theta)$ in the prototype space is denoted as $e(x)$. The learned prototypes are defined as $M=\{m_{ij}\mid i=1,\dots,C;\ j=1,\dots,K\}$, where $m_{ij}$ represents the $j$-th prototype of the $i$-th category, $C$ is the number of seen categories and $K$ is the number of prototypes for each class. The parameters of the BLSTM and the prototypes are jointly trained through the following two loss functions.

The first is the distance-based cross entropy (DCE) loss, which is derived from the traditional cross entropy loss. Following the CPL formulation[18], it can be defined as:

$$L_{DCE}(x, y) = -\log \frac{\sum_{j=1}^{K} e^{-\gamma\, d(e(x),\, m_{yj})}}{\sum_{i=1}^{C}\sum_{j=1}^{K} e^{-\gamma\, d(e(x),\, m_{ij})}}$$

where $d(\cdot,\cdot)$ computes the distance between $e(x)$ and a prototype, and $\gamma$ is a hyper-parameter. Minimizing the DCE loss helps improve the classification accuracy and enhances the separability among different training classes.
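As a concrete illustration, a distance-based cross entropy of this kind can be computed as follows (a sketch with one prototype per class and squared Euclidean distance; all values are invented):

```python
import numpy as np

def dce_loss(embedding, prototypes, label, gamma=1.0):
    """Distance-based cross entropy: a softmax over negative scaled distances.

    The probability of class i is proportional to exp(-gamma * d(e(x), m_i)),
    so minimizing -log p(true class) pulls the embedding toward its own
    prototype and pushes it away from the others.
    """
    d = np.sum((prototypes - embedding) ** 2, axis=1)    # squared Euclidean distances
    logits = -gamma * d
    log_probs = logits - np.log(np.sum(np.exp(logits)))  # log-softmax
    return -log_probs[label]

protos = np.array([[0.0, 0.0], [4.0, 0.0]])  # one prototype per class
x = np.array([0.5, 0.0])                     # embedding near class 0
near = dce_loss(x, protos, label=0)
far = dce_loss(x, protos, label=1)
# The loss is much smaller when the true class owns the nearest prototype.
```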

Another prototype loss (PL) is used as a regularization to enhance intra-class compactness, which is defined as:

$$L_{PL}(x, y) = d\big(e(x),\, m_{yj^{*}}\big), \qquad j^{*} = \arg\min_{j}\, d\big(e(x),\, m_{yj}\big)$$

where $m_{yj^{*}}$ is the closest prototype to $e(x)$ within the ground-truth class $y$. This term effectively regularizes the model and improves the intra-class compactness of the feature representations.
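The prototype loss can be sketched in the same style (a toy example with two hypothetical prototypes for the ground-truth class):

```python
import numpy as np

def prototype_loss(embedding, class_prototypes):
    """Squared distance to the closest prototype of the ground-truth class.

    With several prototypes per class, only the nearest one is pulled toward
    the sample, which tightens each class cluster without merging sub-modes.
    """
    d = np.sum((class_prototypes - embedding) ** 2, axis=1)
    return float(np.min(d))

# Two hypothetical prototypes of the true class; only the nearer one matters.
protos_true = np.array([[0.0, 0.0], [10.0, 0.0]])
loss = prototype_loss(np.array([1.0, 0.0]), protos_true)  # -> 1.0
```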

III-C Zero-Shot Label Predictor

In ZSL and GZSL tasks, semantic representations of all seen and unseen categories are available. In order to recognize unseen gestures, a model needs to learn the relationship between the high-level semantic representations and the extracted features which are introduced in Section III-B. Inspired by the single-layer linear Semantic Auto-Encoder [6] for object recognition, we develop a multi-layer Semantic Auto-Encoder (SAE) as the classifier to improve the prediction results by stacking more layers.

We use fully connected (FC) layers with a symmetric structure as the encoder and decoder of the SAE. The input of the encoder is the BLSTM feature $f(x;\theta)$, its output is denoted as $E(f(x;\theta))$, and the output of the decoder is denoted as $D(E(f(x;\theta)))$. The SAE aims to learn a mapping which projects the learned representations from the feature space to the semantic space. The mapped semantic embedding is trained to be close to the given semantic prototype $a$ of the corresponding category, and at the same time the SAE retains the original input information through the reconstruction of the decoder. The loss function of the SAE consists of an attribute loss and a reconstruction loss:

$$L_{att} = \big\| E(f(x;\theta)) - a \big\|_{2}^{2}, \qquad L_{rec} = \big\| D(E(f(x;\theta))) - f(x;\theta) \big\|_{2}^{2}$$

where $\theta$ represents the parameters of the BLSTM in our model.
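The two SAE terms can be illustrated with a single linear encoder/decoder pass (a simplification: the paper's SAE is multi-layer, and the weights and dimensions here are invented toy values):

```python
import numpy as np

def sae_losses(feature, attributes, W_enc, W_dec):
    """Attribute and reconstruction losses of one linear encoder/decoder pass.

    feature:    (d,) gesture feature from the detector branch
    attributes: (a,) ground-truth semantic embedding of the class
    W_enc:      (a, d) encoder, feature space -> attribute space
    W_dec:      (d, a) decoder, attribute space -> feature space
    """
    s = W_enc @ feature    # predicted semantic embedding
    recon = W_dec @ s      # reconstructed feature
    attr_loss = np.sum((s - attributes) ** 2)
    rec_loss = np.sum((recon - feature) ** 2)
    return attr_loss, rec_loss

# Tied-weight toy case: a perfect encoder gives zero attribute and
# reconstruction loss for a perfectly encodable sample.
W = np.eye(2)
attr_loss, rec_loss = sae_losses(np.array([1.0, 2.0]), np.array([1.0, 2.0]), W, W.T)
```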

III-D End-to-End Learning Objective

The two branches, which integrate feature extraction and label prediction, can be jointly trained in an end-to-end manner, so the parameters of the prototype-based detector and the SAE are learned at the same time. The joint learning objective of our end-to-end framework can be formulated as:

$$L = L_{DCE} + \lambda_{1} L_{PL} + \lambda_{2} L_{att} + \lambda_{3} L_{rec}$$

where $\lambda_{1}$, $\lambda_{2}$, $\lambda_{3}$ are hyper-parameters which weight the above four loss terms.
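The resulting objective is simply a weighted sum of the four terms. Assuming the three weights attach to the PL, attribute and reconstruction losses in that order (the exact pairing is an assumption, matched to the values 5, 5 and 0.05 reported in the implementation details), it can be sketched as:

```python
def joint_objective(l_dce, l_pl, l_attr, l_rec, lam1=5.0, lam2=5.0, lam3=0.05):
    """Weighted sum of the four loss terms used for end-to-end training.

    The default weights follow the hyper-parameter values reported in the
    implementation details; the pairing of weights to terms is assumed.
    """
    return l_dce + lam1 * l_pl + lam2 * l_attr + lam3 * l_rec

# Placeholder loss values: 0.4 + 5*0.1 + 5*0.2 + 0.05*1.0 = 1.95
total = joint_objective(0.4, 0.1, 0.2, 1.0)
```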

After training the end-to-end network, distance thresholds for the visual prototypes are further learned, in order to determine whether a new sample belongs to a seen or an unseen category. Specifically, we fix the trained network parameters and prototypes, and use all the training data $\{x_i\}_{i=1}^{N}$ to learn the thresholds $T=\{t_1,\dots,t_C\}$, where $x_i$ is the $i$-th gesture sequence in the training set, $N$ is the number of training samples, and $t_{c_i}$ is the threshold corresponding to the closest prototype of $x_i$. The loss function is given by:

$$L_{t} = \frac{1}{N}\sum_{i=1}^{N} \max\big(0,\, d_{i} - t_{c_i}\big) + \alpha\,\frac{1}{N}\sum_{i=1}^{N} \max\big(0,\, t_{c_i} - d_{i}\big)$$

where $d_i$ represents the minimal distance between $e(x_i)$ and all prototypes, and $t_{c_i}$ is the corresponding threshold of the closest prototype of $x_i$. The first term aims to correctly classify the samples that belong to the current category, and the second term is used as a regularization to reduce the influence of outliers on the thresholds. By tuning the hyper-parameter $\alpha$ that weights the above two parts, the model is able to learn the thresholds that best discriminate samples of different categories.
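Since the exact optimization of the threshold objective is not spelled out above, the following heuristic stand-in only illustrates the trade-off the thresholds encode: each class threshold should sit above most of that class's training distances (acceptance) without stretching to cover rare distant outliers (regularization). Here that balance is approximated with a high quantile rather than a learned fit:

```python
import numpy as np

def learn_thresholds(min_dists, nearest_class, n_classes, alpha=0.25):
    """Heuristic per-class thresholds on nearest-prototype distances.

    min_dists:     (N,) distance of each training sample to its nearest prototype
    nearest_class: (N,) class index of that nearest prototype
    A threshold above most of a class's training distances accepts its samples,
    while the (1 - alpha) quantile keeps it from stretching to rare outliers.
    """
    thresholds = np.zeros(n_classes)
    for c in range(n_classes):
        d = min_dists[nearest_class == c]
        thresholds[c] = np.quantile(d, 1.0 - alpha)
    return thresholds

# One class whose training distances contain a single far outlier (5.0).
t_learned = learn_thresholds(np.array([0.1, 0.2, 0.3, 5.0]),
                             np.array([0, 0, 0, 0]), n_classes=1)
# The threshold covers the typical distances but not the outlier.
```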

III-E Label Prediction

During prediction, a test sample $x$ is first assigned to the category of the closest prototype by the prototype-based detector:

$$c^{*} = \arg\min_{i}\, \min_{j}\, d\big(e(x),\, m_{ij}\big)$$

where $c^{*}$ represents the category to which the closest prototype belongs. Then, the model distinguishes the seen and unseen categories by comparing the minimal distance with the threshold:

$$\hat{y}_{s} = \begin{cases} c^{*}, & \text{if } \min_{j} d\big(e(x),\, m_{c^{*}j}\big) \le t_{c^{*}} \\ \text{unseen}, & \text{otherwise} \end{cases}$$

where $\hat{y}_{s}$ represents the intermediate prediction result of the prototype-based detector, and $t_{c^{*}}$ is the threshold of the closest prototype of $e(x)$.

Then, for a sample which is considered to be from an unseen category, its feature is further projected into the semantic space by the SAE. We compare the projected semantic representation with the semantic prototypes of all unseen classes, and the zero-shot prediction result is given by:

$$\hat{y}_{u} = \arg\min_{c \in \mathcal{Y}^{u}} \big\| E(f(x;\theta)) - a_{c} \big\|_{2}$$

where $a_{c}$ is the semantic prototype of the unseen category $c$.

In summary, the prediction result of a test sample is as follows:

$$\hat{y} = \begin{cases} \hat{y}_{s}, & \text{if } x \text{ is detected as seen} \\ \hat{y}_{u}, & \text{otherwise.} \end{cases}$$
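The full two-branch prediction can be sketched end to end (toy values throughout; for simplicity the same vector serves as both the prototype-space projection and the SAE input, whereas the paper feeds the BLSTM feature to the SAE):

```python
import numpy as np

def predict_gesture(feature, prototypes, thresholds, W_enc, unseen_attrs):
    """Two-branch GZSL prediction: detector first, then the zero-shot branch.

    Returns ("seen", class_idx) when the detector accepts the sample,
    otherwise ("unseen", class_idx) from nearest-semantic-prototype matching.
    """
    d = np.linalg.norm(prototypes - feature, axis=1)
    c = int(np.argmin(d))
    if d[c] <= thresholds[c]:
        return ("seen", c)                    # accepted by the detector
    s = W_enc @ feature                       # project into semantic space
    du = np.linalg.norm(unseen_attrs - s, axis=1)
    return ("unseen", int(np.argmin(du)))     # closest unseen semantic prototype

protos = np.array([[0.0, 0.0], [5.0, 5.0]])   # two seen-class prototypes
thr = np.array([1.0, 1.0])                    # their distance thresholds
W_enc = np.eye(2)                             # toy linear "SAE encoder"
unseen_attrs = np.array([[10.0, 0.0], [0.0, 10.0]])
r_seen = predict_gesture(np.array([0.3, 0.0]), protos, thr, W_enc, unseen_attrs)
r_unseen = predict_gesture(np.array([9.0, 0.5]), protos, thr, W_enc, unseen_attrs)
```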
IV Experiments

IV-A Experiment Settings

IV-A1 Dataset

This dataset is an expansion of the one proposed in our previous work[17], which contains 16 seen gestures in the training set and 4 unseen gestures in the test set. As shown in Fig. 2, 16 seen gestures and 9 unseen gestures are included in our novel dataset, which are captured by a Leap Motion Controller. More variations in hand position and gesture habits are taken into account. In total, the training set contains 800 sequences of all seen categories, and the test set contains 500 sequences of both seen and unseen categories. Each sequence consists of 100 frames. Information such as the hand direction, palm center and skeletal joint positions of a single right hand is recorded for each frame. To better extract hand posture and relative motion information, the recorded data is preprocessed and normalized. We further design 11 attributes, including hand movement and finger bending states, for each category based on the experience of gesture recognition research. All attributes are binary, and they are visualized in the form of a heat map, as shown in Fig. 3.

Fig. 2: Hand gestures in our dataset.
Fig. 3: Binary heat map of the categories and attributes.
Methods | Acc_s | Acc_u | H
ESZSL [15] | 77.81% | 13.89% | 23.57%
CADA-VAE [11] | 80.00% | 53.89% | 64.40%
f-CLSWGAN [12] | 79.79% | 55.00% | 65.08%
End-to-End Framework (Ours) | 89.06% | 58.33% | 70.49%
TABLE I: The Experimental Results of the State-of-the-art Comparisons in GZSL Setting
Methods | Acc_s | Acc_u | H | Test Time
BLSTM+SAE [6] | 91.88% | 15.00% | 25.79% | 0.023s
End-to-End Framework (Fixed Threshold) | 84.69% | 50.56% | 63.31% | 0.022s
PBD+SAE | 90.63% | 57.22% | 70.15% | 0.026s
End-to-End Framework | 89.06% | 58.33% | 70.49% | 0.022s
TABLE II: The Experimental Results of the Ablation Analysis

IV-A2 Evaluation Metrics

We adopt the top-1 accuracy to evaluate the models; the top-1 accuracies of the seen and unseen classes are denoted as $Acc_s$ and $Acc_u$, respectively. As there is an inherent bias towards the seen classes, and to ensure that both $Acc_s$ and $Acc_u$ are high enough, we use the harmonic mean $H$ for the final performance comparison, which can be defined as:

$$H = \frac{2 \times Acc_{s} \times Acc_{u}}{Acc_{s} + Acc_{u}}$$
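This metric is easy to compute directly; the snippet below reproduces the harmonic mean reported for our framework in Table I from its $Acc_s$ and $Acc_u$ values:

```python
def harmonic_mean(acc_s, acc_u):
    """H = 2 * Acc_s * Acc_u / (Acc_s + Acc_u): H is high only when both
    accuracies are high, so a model biased toward seen classes scores poorly."""
    return 2 * acc_s * acc_u / (acc_s + acc_u)

# Reproduces the harmonic mean reported for our framework in Table I.
h = harmonic_mean(89.06, 58.33)  # -> approximately 70.49
```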
IV-A3 Implementation Details

We utilize a three-layer BLSTM network to extract features, and the numbers of forward and backward LSTM neurons are set to 64. The features are mapped to the prototype space through a fully connected layer. We maintain one prototype for each category, and the dimension of the prototypes is set to 20. The activation function used in the network is ReLU. For the SAE, both the encoder and the decoder have two hidden layers. The input dimension of the encoder is the same as the feature dimension, which is 128, and the output dimension of the encoder is the same as the number of attributes, which is 11. During training, the batch size is set to 8, the learning rate is set to 0.001, and the number of training epochs is set to 100. The Adam optimizer[33] is utilized to minimize the loss. Other hyper-parameters are selected by 10-fold cross-validation: $\alpha$ of the threshold selection is set to 0.1, and $\lambda_1$, $\lambda_2$ and $\lambda_3$ of the end-to-end learning objective are set to 5, 5 and 0.05, respectively.

IV-B State-of-the-art Comparisons

In this section, we compare our proposed framework in the GZSL setting with one of the state-of-the-art methods for zero-shot gesture recognition proposed in [15], which utilized three ZSL methods and obtained the best recognition results using Embarrassingly Simple Zero-Shot Learning (ESZSL)[10]. In order to better demonstrate the effectiveness of our algorithm, we also choose two state-of-the-art methods for object recognition, CADA-VAE[11] and f-CLSWGAN[12], for the comparisons on our dataset at the same time. The features of the seen and unseen categories are obtained by a three-layer BLSTM network. Experimental results are shown in Table I.

We observe that our proposed method outperforms the state-of-the-art methods: $Acc_s$, $Acc_u$ and $H$ are increased by 9.06%, 3.33% and 5.41%, respectively. The recognition accuracy of CADA-VAE and f-CLSWGAN is lower because these two methods predict labels of both seen and unseen categories using only a single model; it is difficult to maintain the recognition accuracy on the seen categories while pursuing generalization performance. In our work, however, two separate classifiers are used for label prediction, so the generalization ability is enhanced and the impact of the bias is reduced. In addition, the complexity of their models is higher than ours, and the larger number of parameters to be learned requires more training data, which can be inefficient or unavailable in practice.

IV-C Ablation Analysis

We analyze different components in our framework including the prototype-based detector, threshold selection and end-to-end training manner.

Prototype-Based Detector. We choose the traditional SAE[6] without the prototype-based detector as a baseline for the GZSL task of hand gesture recognition. The features are first obtained by a three-layer BLSTM network and then fed into the traditional SAE for predictions of both seen and unseen categories. The experimental result is shown in Table II (line 1). We observe that although the traditional SAE performs slightly better on $Acc_s$, it exhibits a severe bias towards the seen categories. Our framework combining the prototype-based detector and the SAE achieves an improvement of 44.7% in harmonic mean over the traditional SAE model. This demonstrates that the prototype-based detector can effectively separate the unseen categories from the seen ones, which greatly reduces the impact of the learning bias.

Threshold Selection. In order to verify the effectiveness of our threshold selection method, we compare it to a method with a fixed threshold for all seen categories, and explore the impact of the hyper-parameter $\alpha$ on the discrimination between the seen and unseen categories. Except for the threshold selection part, the other modules of the comparison models are identical. The comparison results of the Acceptance Rate (AR) and Rejection Rate (RR) for the different threshold selection methods are shown in Table III. AR denotes the percentage of accepted samples among the test samples from the seen categories, while RR denotes the percentage of rejected samples among the test samples from the unseen categories. The results demonstrate that our method achieves an effective trade-off between the acceptance rate and the rejection rate, and thus enhances the ability to model the cross-class difference as well as the intra-class consistency. Based on the best parameter selection, where the fixed threshold is set to 0.5 and $\alpha$ of the threshold selection is set to 0.01, the recognition results of the seen and unseen categories can be seen in Table II (lines 2 and 4, respectively). We observe that our threshold selection method performs better in $Acc_s$, $Acc_u$ and $H$, because an improper selection of thresholds leads to misclassification of both the seen and unseen categories.

Fixed Threshold (t | AR | RR) | Our Method (α | AR | RR)
0.01 | 64.69% | 97.22% | 0.5 | 76.56% | 96.11%
0.05 | 81.56% | 92.22% | 0.2 | 83.12% | 93.33%
0.1 | 85.00% | 83.33% | 0.05 | 87.50% | 86.11%
0.2 | 87.19% | 81.11% | 0.02 | 91.00% | 77.22%
0.5 | 90.81% | 63.33% | 0.01 | 93.12% | 72.00%
1 | 95.93% | 48.88% | 0.005 | 95.62% | 66.11%
TABLE III: The Comparison Results of AR and RR for Different Threshold Selection

End-to-End Training Manner. We also compare the performance of our end-to-end framework to a framework where the two branches are trained separately. From the third line of Table II, we can observe that when the branches are trained separately, the performance on the seen classes is slightly higher than that of our end-to-end framework. This is because, although joint training makes the feature extraction more suitable for both the prototype-based detector and the SAE, it is difficult to fully satisfy these two tasks with a small amount of data. However, the end-to-end network has the advantages of higher speed and better performance on the unseen classes, making it more suitable for tasks that require real-time performance while ensuring recognition accuracy.

V Conclusion

In this paper, we propose a prototype-based GZSL framework for hand gesture recognition. Two branches of our framework are introduced: a prototype-based detector and a zero-shot label predictor. The prototype-based detector combines feature extraction and prototype learning, which can determine whether a test sample belongs to an unseen category and obtain the prediction results of the samples from the seen categories. In the zero-shot label prediction branch, the SAE is utilized to learn the mapping from the feature space to the semantic space and further predict labels for the samples from the unseen categories. In addition, we design a joint learning objective to train the entire framework in an end-to-end manner. We establish a dataset for evaluating this GZSL task of hand gesture recognition, and the experimental results demonstrate that the proposed framework achieves a significant improvement over the state-of-the-art methods. In future work, we aim to extend this framework to a larger scale of gesture data in order to better support human-robot interaction in the real world.


This work is supported by the National Natural Science Foundation of China (Grant No. 61673378) and the Ministry of Science and Technology of the People’s Republic of China (Grant No. 2017YFC0820200).


  • [1] W. Li, C. Hsieh, L. Lin, and W. Chu, “Hand gesture recognition for post-stroke rehabilitation using leap motion,” in 2017 International Conference on Applied System Innovation (ICASI).   IEEE, 2017, pp. 386–388.
  • [2] W. Yang, J. Tao, and Z. Ye, “Continuous sign language recognition using level building based on fast hidden markov model,” Pattern Recognition Letters, vol. 78, pp. 28–35, 2016.
  • [3] M. Gavrilescu, “Recognizing emotions from videos by studying facial expressions, body postures and hand gestures,” in 2015 23rd Telecommunications Forum Telfor (TELFOR).   IEEE, 2015, pp. 720–723.
  • [4] W. Thomason and R. A. Knepper, “Recognizing unfamiliar gestures for human-robot interaction through zero-shot learning,” in International Symposium on Experimental Robotics.   Springer, 2016, pp. 841–852.
  • [5] C. H. Lampert, H. Nickisch, and S. Harmeling, “Learning to detect unseen object classes by between-class attribute transfer,” in 2009 IEEE Conference on Computer Vision and Pattern Recognition.   IEEE, 2009, pp. 951–958.
  • [6] E. Kodirov, T. Xiang, and S. Gong, “Semantic autoencoder for zero-shot learning,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 3174–3183.
  • [7] M. Norouzi, T. Mikolov, S. Bengio, Y. Singer, J. Shlens, A. Frome, G. S. Corrado, and J. Dean, “Zero-shot learning by convex combination of semantic embeddings,” arXiv preprint arXiv:1312.5650, 2013.
  • [8] D. Mandal, S. Narayan, S. K. Dwivedi, V. Gupta, S. Ahmed, F. S. Khan, and L. Shao, “Out-of-distribution detection for generalized zero-shot action recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 9985–9993.
  • [9] Y. Xian, C. H. Lampert, B. Schiele, and Z. Akata, “Zero-shot learning-a comprehensive evaluation of the good, the bad and the ugly,” IEEE transactions on pattern analysis and machine intelligence, 2018.
  • [10] B. Romera-Paredes and P. Torr, “An embarrassingly simple approach to zero-shot learning,” in

    International Conference on Machine Learning

    , 2015, pp. 2152–2161.
  • [11] E. Schonfeld, S. Ebrahimi, S. Sinha, T. Darrell, and Z. Akata, “Generalized zero-and few-shot learning via aligned variational autoencoders,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 8247–8255.
  • [12] Y. Xian, T. Lorenz, B. Schiele, and Z. Akata, “Feature generating networks for zero-shot learning,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 5542–5551.
  • [13] P. Morgado and N. Vasconcelos, “Semantically consistent regularization for zero-shot recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 6060–6069.
  • [14] N. Madapana and J. P. Wachs, “A semantical & analytical approach for zero shot gesture learning,” in 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017).   IEEE, 2017, pp. 796–801.
  • [15] N. Madapana and J. Wachs, “Zsgl: zero shot gestural learning,” in Proceedings of the 19th ACM International Conference on Multimodal Interaction, 2017, pp. 331–335.
  • [16] N. Madapana and J. P. Wachs, “Hard zero shot learning for gesture recognition,” in 2018 24th International Conference on Pattern Recognition (ICPR).   IEEE, 2018, pp. 3574–3579.
  • [17] J. Wu, K. Li, X. Zhao, and M. Tan, “Unfamiliar dynamic hand gestures recognition based on zero-shot learning,” in International Conference on Neural Information Processing.   Springer, 2018, pp. 244–254.
  • [18] H. Yang, X. Zhang, F. Yin, and C. Liu, “Robust classification with convolutional prototype learning,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 3474–3482.
  • [19] F. Camastra and D. De Felice, LVQ-Based Hand Gesture Recognition Using a Data Glove.   Berlin, Heidelberg: Springer Berlin Heidelberg, 2013, pp. 159–168.
  • [20] W. Lu, Z. Tong, and J. Chu, “Dynamic hand gesture recognition with leap motion controller,” IEEE Signal Processing Letters, vol. 23, no. 9, pp. 1188–1192, 2016.
  • [21]

    A. Tang, K. Lu, Y. Wang, J. Huang, and H. Li, “A real-time hand posture recognition system using deep neural networks,”

    ACM Transactions on Intelligent Systems and Technology (TIST), vol. 6, no. 2, p. 21, 2015.
  • [22] R. Vemulapalli, F. Arrate, and R. Chellappa, “Human action recognition by representing 3d skeletons as points in a lie group,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2014, pp. 588–595.
  • [23] P. Molchanov, X. Yang, S. Gupta, K. Kim, S. Tyree, and J. Kautz, “Online detection and classification of dynamic hand gestures with recurrent 3d convolutional neural network,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 4207–4215.
  • [24] Z. Hu, Y. Hu, J. Liu, B. Wu, D. Han, and T. Kurfess, “3d separable convolutional neural network for dynamic hand gesture recognition,” Neurocomputing, vol. 318, pp. 151–161, 2018.
  • [25] G. Zhu, L. Zhang, P. Shen, and J. Song, “Multimodal gesture recognition using 3-d convolution and convolutional lstm,” Ieee Access, vol. 5, pp. 4517–4524, 2017.
  • [26] J. C. Nunez, R. Cabido, J. J. Pantrigo, A. S. Montemayor, and J. F. Velez, “Convolutional neural networks and long short-term memory for skeleton-based human activity and hand gesture recognition,” Pattern Recognition, vol. 76, pp. 80–94, 2018.
  • [27] X. S. Nguyen, L. Brun, O. Lézoray, and S. Bougleux, “A neural network based on spd manifold learning for skeleton-based hand gesture recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 12 036–12 045.
  • [28] S. Changpinyo, W.-L. Chao, B. Gong, and F. Sha, “Synthesized classifiers for zero-shot learning,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 5327–5336.
  • [29] H. Huang, C. Wang, P. S. Yu, and C.-D. Wang, “Generative dual adversarial network for generalized zero-shot learning,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 801–810.
  • [30] R. Felix, M. Sasdelli, I. Reid, and G. Carneiro, “Multi-modal ensemble classification for generalized zero shot learning,” arXiv preprint arXiv:1901.04623, 2019.
  • [31]

    S. Bhattacharjee, D. Mandal, and S. Biswas, “Autoencoder based novelty detection for generalized zero shot learning,” in

    2019 IEEE International Conference on Image Processing (ICIP).   IEEE, 2019, pp. 3646–3650.
  • [32] A. Graves and J. Schmidhuber, “Framewise phoneme classification with bidirectional lstm and other neural network architectures,” Neural networks, vol. 18, no. 5-6, pp. 602–610, 2005.
  • [33] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.