Word-level sign language recognition (WSLR), as a fundamental sign language interpretation task, aims to overcome the communication barrier for deaf people. However, WSLR is very challenging because signs involve complex and fine-grained hand gestures performed in quick motion, together with body movements and facial expressions.
Recently, deep learning techniques have demonstrated their advantages on the WSLR task [20, 13, 26, 15]. However, annotating WSLR datasets requires domain-specific knowledge, with the consequence that even the largest existing datasets have a limited number of instances, e.g., on average around 10 to 50 instances per word [20, 13]. This is an order of magnitude fewer than in common video datasets for other tasks, e.g., Kinetics for action recognition has 750 instances per class. The limited amount of training data for the sign recognition task may lead to overfitting or otherwise restrict the performance of WSLR models in real-world scenarios. On the other hand, abundant subtitled sign news videos are easily attainable from the web and may potentially benefit WSLR.
Despite the availability of sign news videos, transferring such knowledge to WSLR is very challenging. First, subtitles only provide weak labels for the occurrence of signs; there is no annotation of temporal location or categories. Second, such labels are noisy: for example, a word appearing in a subtitle is not necessarily signed in the video. Third, news signs typically span 9-16 frames, which differs significantly in gesture speed from the videos (on average 60 frames [20, 13]) used to train WSLR models. Therefore, directly augmenting WSLR datasets with news sign examples fails to improve recognition performance.
In this paper, we present a method that transfers the cross-domain knowledge in news signs to improve the performance of WSLR models. More specifically, we first develop a sign word localizer that extracts sign words by employing a base WSLR model in a sliding-window manner. Then, we propose to coarsely align the two domains by jointly training a classifier on news signs and isolated signs. After obtaining the coarsely-aligned news word representations, we compute and store the centroid of each class of the coarsely-aligned news words in an external memory, called prototypical memory.
Since the shared visual concepts between these domains are important for recognizing signs, we exploit prototypical memory to learn such domain-invariant descriptors by comparing the prototypes with isolated signs. In particular, given an isolated sign, we first measure the correlations between the isolated sign and news signs and then combine the similar features in prototypical memory to learn a domain-invariant descriptor. In this way, we acquire representations of shared visual concepts across domains.
After obtaining the domain-invariant descriptor, we propose a memory-augmented temporal attention module that encourages models to focus on distinguishable visual concepts among different signs while suppressing common gestures, such as demonstrating gestures (raising and putting down hands) in tutorial videos. Therefore, our network focuses more on the visual concepts shared within each class and less on those commonly appearing in different classes, thus achieving better classification performance.
In summary, (i) we propose a coarse domain alignment approach that jointly trains a classifier on news signs and isolated signs to reduce their domain gap; (ii) we develop a prototypical memory and learn a domain-invariant descriptor for each isolated sign; (iii) we design a memory-augmented temporal attention over the representation of isolated signs that guides the model to focus on learning features from common visual concepts within each class while suppressing distracting ones, thus facilitating classifier learning; (iv) experimental results demonstrate that our approach significantly outperforms state-of-the-art WSLR methods in recognition accuracy, by a large margin of 12% on WLASL and 6% on MSASL. Furthermore, we demonstrate the effectiveness of our method in localizing sign words from sentences automatically, achieving 28.1 AP@0.5, which suggests its potential for automatic sign annotation.
2 Related Works
Our work can be viewed as a semi-supervised learning method from weakly- and noisily-labelled data. In this section, we briefly review works in the relevant fields.
2.1 Word-level Sign Language Recognition
Deep models learn spatial representations using 2D convolutional networks and model temporal dependencies using recurrent neural networks [20, 13]. Some methods also employ 3D convolutional networks to capture spatio-temporal features simultaneously [12, 37, 20, 13]. In addition, several works [17, 16] exploit human body keypoints as inputs to recurrent networks. It is well known that training deep models requires a large amount of training data. However, annotating WSLR samples requires expert knowledge, and existing WSLR video datasets [13, 20] only contain a small number of examples, which limits recognition accuracy. Our method aims at tackling this data insufficiency issue and improving WSLR models by collecting low-cost data from the internet.
2.2 Semi-supervised Learning from Web Videos
Some works [21, 38, 9] attempt to learn visual representations through easily-accessible web data. In particular, one approach combines curriculum learning and self-paced learning to learn a concept detector. Another introduces a Q-learning based model to select and label web videos, and then directly uses the selected data for training. Recently, it has been found that pretraining on million-scale web data improves the performance of video action recognition. These works demonstrate the usefulness of web videos in a semi-supervised setting. Note that the collected videos are regarded as individual samples in prior works. In contrast, our collected news videos often contain multiple signs per video, which brings more challenges to our task.
2.3 Prototypical Networks and External Memory
Prototypical networks aim at learning classification models in a limited-data regime. During testing, prototypical networks calculate a distance measure between test data and prototypes, and predict using the nearest-neighbour principle. In essence, a prototypical network provides a distance-based partition of the embedding space and facilitates retrieval based on the nearest prototypes.
External memory equips a deep neural network with the capability of leveraging contextual information. It was originally proposed for document-level question answering (QA) problems in natural language processing [34, 32]. Recently, external memory networks have been applied to visual tracking [7] and movie comprehension. In general, an external memory serves as a source of additional offline information available to the model during training and testing.
3 Proposed Approach
A WSLR dataset with N labeled training examples is denoted by D = {(X_i, y_i)}_{i=1}^{N}, where X_i ∈ R^{T×H×W×3} is an RGB video; T is the number of frames (on average 64); H and W are the height and width of the frames, respectively; and y_i is a one-hot encoded label over C classes. We also consider a complementary set of sign news data denoted by D' = {(V_j, s_j)}. Similarly, V_j is an RGB video, but with an average length of 300 frames, and s_j is a sequence of English tokens representing the subtitles corresponding to V_j.
We observe that despite the domain difference between signs from news broadcasts and isolated signs, samples from the same class share some common visual concepts, such as hand gestures and body movements. In other words, these shared visual concepts are well suited to representing cross-domain knowledge and are invariant to domain differences. Motivated by this intuition, we encourage models to learn such cross-domain features and exploit them to achieve better classification performance.
To this end, we first extract news signs from D' and train a classifier jointly on news and isolated signs. In this fashion, we are able to coarsely align the two domains in the embedding space. Then, we exploit prototypes to represent the news signs and store them in a prototypical external memory (Sec. 3.3). Furthermore, for each isolated sign video, we learn a domain-invariant descriptor from the external memory by measuring its correlation with the contents of each memory cell (Sec. 3.4). Based on the learnt domain-invariant descriptor, we design a memory-augmented temporal attention module that lets the isolated sign representation focus on temporally similar signing gestures, thus promoting classification accuracy. Figure 3 illustrates an overview of our method.
3.3 Constructing Prototypical Memory
3.3.1 Extracting words from weakly-labelled videos
In order to utilize the data from news broadcasts, we need to localize and extract news signs from the subtitled videos. Specifically, we first pre-process the subtitles by lemmatizing the tokens and converting the lemmas into lowercase. Then, for each isolated sign class c, we collect the video clips whose processed subtitles contain the corresponding word. To localize the sign, we apply a classifier f_iso pretrained on isolated signs to the collected videos in a sliding-window manner. For each window, we acquire the classification score of each class. For each video, we choose the sliding window that achieves the highest classification score for class c. Lastly, we discard windows with a class score lower than a threshold γ. We use S_c to denote the set of news sign video clips collected for class c.
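This extraction step can be sketched as follows; the function interface, the scoring callback, and the default threshold value are illustrative assumptions rather than our exact implementation:

```python
def extract_news_signs(frames, subtitle_classes, score_fn,
                       min_win=9, max_win=16, gamma=0.5):
    """For each class whose word appears in the (lemmatized, lowercased)
    subtitles, scan the video with sliding windows of min_win..max_win
    frames, keep the window with the highest classification score for
    that class, and discard it if the score falls below gamma.

    frames           : list of per-frame inputs (length T)
    subtitle_classes : class indices mentioned in the subtitles
    score_fn         : maps a window (sub-list of frames) to a
                       class-probability list (the pretrained classifier)
    """
    selected = {}
    for c in subtitle_classes:
        best_score, best_span = 0.0, None
        for w in range(min_win, max_win + 1):
            for start in range(0, len(frames) - w + 1):
                score = score_fn(frames[start:start + w])[c]
                if score > best_score:
                    best_score, best_span = score, (start, start + w)
        if best_score >= gamma:          # threshold-based filtering
            selected[c] = best_span
    return selected
```

The best-scoring window per class, rather than all windows above the threshold, keeps at most one clip per (video, class) pair.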
3.3.2 Joint training for coarse domain alignment
Although f_iso can exploit the knowledge learned from isolated signs to recognize news signs to some extent, we observe that it struggles to make confident predictions. In particular, f_iso produces many false negatives and therefore misses valid news signs during the localization step. This phenomenon mainly comes from the domain gap between news signs and isolated ones. As can be seen in Figure 2, the features of isolated signs and news signs exhibit different distributions, which is undesirable when transferring knowledge between the two domains. To tackle this issue, we propose to first train a classifier jointly on sign samples from both domains, denoted by f_joint.
We use I3D as the backbone network for both f_iso and f_joint. For feature extraction, we remove the classification head and use the pooled feature maps from the last inflated inception submodule. Figure 2 shows the feature representations of videos from the two domains after the coarse domain alignment, where the domain gap is significantly reduced.
3.3.3 Prototypical memory
In order to exploit the knowledge of news signs when classifying isolated signs, we adopt the idea of external memory. We propose to encode the knowledge of news signs into a prototypical memory, where each memory cell stores a prototype. Specifically, for class c, we define its prototype p_c as the mean of the feature embeddings of all the samples in S_c:

p_c = (1 / |S_c|) Σ_{x ∈ S_c} f_joint(x).

A prototypical memory is constructed as an array of prototypes, i.e., M = [p_1; …; p_K] ∈ R^{K×d_p}, where d_p is the dimension of the prototype features.
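A minimal sketch of building the memory, assuming the per-clip embeddings have already been extracted with the jointly trained backbone:

```python
import numpy as np

def build_prototypical_memory(class_embeddings):
    """class_embeddings: list of (n_c, d_p) arrays, one per class, holding
    the embeddings of the news-sign clips collected for that class.
    Returns the (K, d_p) memory whose k-th row is the class-k centroid."""
    return np.stack([embs.mean(axis=0) for embs in class_embeddings])
```

Each memory cell is simply the centroid of its class, so the memory can be recomputed cheaply whenever the collected clip set changes.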
Despite the abundance of sign news videos, the number of extracted samples is much smaller due to the domain gap. Recall that our jointly trained classifier f_joint reduces the domain gap, so one might consider using it to re-collect samples. However, we observe that the performance of f_joint on WSLR decreases, and using it to select news sign video clips does not generate more news sign samples. This phenomenon can also be explained by Figure 2: since f_joint aims to minimize the domain gap, each cluster becomes less concentrated, which leads to a decrease in classification accuracy.
Prototype representation provides us with a robust way to represent news signs in a limited-data regime. It induces a partition of the embedding space based on a given similarity measurement, which facilitates effective retrieval of similar visual concepts encoded in the news signs. By arranging the prototypes in an external memory, we link our classification model to a knowledge base of high-level visual features. In the next section, we explain how to use these memory cells to learn a domain-invariant descriptor and then employ this domain-invariant feature to improve the WSLR model.
3.4 Learning Domain-invariant Descriptor
After the two domains are coarsely aligned, our method focuses on learning a domain-invariant descriptor using the prototypical memory. In this way, we are able to extract the common concepts of the two domains. For a prototypical memory M and an isolated sign feature f, whose temporal dimension is determined by the number of video frames, our goal is to generate a class-specific common feature from the prototypical memory.
Since f and M are extracted by two different backbone networks, f_iso and f_joint (for simplicity, we also refer to the backbones of the two classifiers as f_iso and f_joint), these features are embedded in different spaces. Therefore, in order to measure the correlation between f and M, we first employ two projection matrices to map the two spaces into a common one and then compute the normalized dot product in the common embedding space:

A = σ( (f W_f)(M W_m)^T ),   (2)

where σ is a softmax function applied row-wise, and W_f and W_m are the projection matrices for f and M, respectively.
Eq. 2 defines the correlation between the isolated sign and the features in the prototypical memory cells in the common embedding space. According to these feature correlations, we reweight the memory features in the common embedding space as follows:

G = A (M W_m + P),

where the perturbation matrix P allows our model to compensate for errors made during the domain alignment. We then map G back to the input space as a residual of f and finally acquire the domain-invariant descriptor h via max-pooling over the temporal dimension:

h = maxpool(G W_o + f),

where W_o is a linear mapping. Next, we explain how to utilize h to learn word sign representations.
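The descriptor computation (correlation with the memory, reweighting, residual mapping, and temporal max-pooling) can be sketched with NumPy; all weight matrices stand in for learned projections, and their names and shapes are illustrative:

```python
import numpy as np

def row_softmax(x):
    """Numerically stable softmax applied to each row."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def domain_invariant_descriptor(f, M, W_f, W_m, P, W_o):
    """f: (T, d) isolated-sign features; M: (K, d_p) prototypical memory.
    W_f, W_m project both into a common space, P is the perturbation
    matrix, and W_o maps the reweighted memory back to the input space,
    where it is added as a residual before max-pooling over time."""
    A = row_softmax((f @ W_f) @ (M @ W_m).T)  # (T, K) correlations
    G = A @ (M @ W_m + P)                     # (T, d_c) reweighted memory
    return (G @ W_o + f).max(axis=0)          # (d,) descriptor
```

In a trained model these projections would be learned end-to-end; here they are plain arrays so the data flow and shapes can be checked in isolation.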
3.5 Memory-augmented Temporal Attention
Since collecting isolated signs from continuous sentences involves a laborious frame-by-frame annotation process, existing isolated sign datasets are mostly collected in controlled environments for demonstration purposes. In particular, signs in isolated datasets often include demonstration gestures, such as raising or putting down the hands, and those gestures appear in sign videos regardless of the word. This increases the difficulty of learning a WSLR classifier, since such common gestures emerge in all classes. A good WSLR model is supposed to focus on discriminative temporal regions while suppressing demonstration gestures.
Our attention module is designed to capture salient temporal information using the similarity between the domain-invariant descriptor h and the isolated sign representation f. Since the domain-invariant descriptor is acquired from the prototypical memory, we call our attention module memory-augmented temporal attention. Specifically, because h and f represent different semantics and lie in their own feature spaces, we compute their similarity matrix S by first projecting them into a shared common space:

S = (f W_1)(h W_2)^T,

where W_1 and W_2 are linear mappings into the common space. This operation compares the domain-invariant descriptor with the feature of the isolated sign at each temporal region in a pairwise manner. Then we normalize the similarity matrix with a softmax function σ to create the attention map B:

B = σ(S).   (7)
Eq. 7 indicates that the attention map B describes the similarity of h and f in the embedded common space. To acquire the attended features for isolated signs, we design a scheme similar to squeeze-and-excitation. In particular, we first introduce a linear mapping W_d to embed the attended features into a low-dimensional space for the attention operation and then lift them back to the input space of f using a linear mapping W_u. Namely, our attended isolated sign representation f̃ is derived as follows:

f̃ = ((B^T f) W_d) W_u.   (8)
We remark that Eq. 8 aggregates features along channels and therefore learns a channel-wise, non-mutually-exclusive relationship, whereas squeeze-and-excitation aggregates feature maps across the spatial dimension to produce a descriptor for each channel. We then complement the feature representation of isolated signs with this channel-wise aggregated information by adding f̃ as a residual to f for the final classification. In this way, our model learns to concentrate on features from salient temporal regions and explicitly minimizes the influence of irrelevant gestures.
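A minimal NumPy sketch of the attention module under the same assumptions (all projection matrices are illustrative stand-ins for learned parameters):

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax for a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_augmented_attention(f, h, W1, W2, W_d, W_u):
    """f: (T, d) isolated-sign features; h: (d,) domain-invariant
    descriptor. Project both into a common space, softmax over time to
    get the attention map, aggregate channel-wise through a squeeze
    (W_d) and excitation (W_u) pair, and add the result back to f."""
    sim = (f @ W1) @ (h @ W2)          # (T,) per-region similarity
    attn = softmax(sim)                # temporal attention map
    pooled = attn @ f                  # (d,) attention-weighted feature
    residual = (pooled @ W_d) @ W_u    # squeeze then excite, (d,)
    return f + residual                # broadcast residual over time
```

The residual is a channel descriptor shared across time, which matches the channel-wise aggregation discussed above; time steps are distinguished only through the attention weights.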
We adopt the binary cross-entropy loss function as in prior WSLR work. Specifically, given a probability distribution p over the different classes of signs, the loss is computed as:

L = -(1/N) Σ_{n=1}^{N} Σ_{c=1}^{C} [ y_{n,c} log p_{n,c} + (1 - y_{n,c}) log(1 - p_{n,c}) ],

where N is the number of samples in the batch; C is the number of classes; p_{n,c} denotes the probability of the n-th sample belonging to the c-th class, and y_{n,c} is the corresponding label.
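The loss can be written directly from this definition; the helper name and the clamping constant for numerical safety are illustrative:

```python
import math

def bce_loss(probs, labels):
    """Binary cross-entropy averaged over the batch.
    probs[n][c]:  predicted probability that sample n belongs to class c
    labels[n][c]: 1 if sample n has class c, else 0"""
    N, C = len(probs), len(probs[0])
    total = 0.0
    for n in range(N):
        for c in range(C):
            p = min(max(probs[n][c], 1e-7), 1 - 1e-7)  # avoid log(0)
            total += labels[n][c] * math.log(p) \
                     + (1 - labels[n][c]) * math.log(1 - p)
    return -total / N
```

For a one-hot label with a confident correct prediction the loss is small; it grows as probability mass leaks to wrong classes.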
4.1 Setup and Implementation Details
| Dataset | Classes | Train | Val | Test |
| MSASL100 | 100 | 3658 (-4%) | 1021 (-14%) | 749 (-1%) |
| MSASL200 | 200 | 6106 (-4%) | 1743 (-15%) | 1346 (-1%) |
Datasets. We evaluate our model on the WLASL and MSASL datasets. Both WLASL and MSASL were introduced recently to support large-scale benchmarks for word-level sign language recognition. These videos record native American Sign Language (ASL) signers or interpreters demonstrating how to sign a particular English word in ASL. However, some links used to download MSASL videos have expired and the corresponding videos are not accessible. As a result, we obtain 7% less data for training ([20, 13] use both training and validation data for retraining models) and 1% fewer videos for testing on MSASL. Therefore, results on MSASL should be taken as indicative. Detailed dataset statistics are summarized in Table 1; the percentage of MSASL data missing due to invalid download links is shown in brackets.
Implementation details. Inflated 3D ConvNet (I3D) is a 3D convolutional network originally proposed for action recognition. Considering its recent success on WSLR [20, 13], we use I3D as our backbone network and initialize it with weights pretrained on Kinetics. When extracting word samples, we choose sliding windows of sizes 9-16, the common time span for a sign word. We set the threshold for localizing news signs separately for WLASL100 and MSASL100, and for WLASL300 and MSASL200.
Training and testing. We observe that although WLASL and MSASL datasets are collected from the different Internet sources, they have some videos in common. In order to avoid including testing videos in the training set, we do not merge the training videos from the two datasets. Instead, we train and test models on these two datasets separately.
Specifically, during training, we apply both spatial and temporal augmentation. For spatial augmentation, we randomly crop a square patch from each frame. We also apply random horizontal flipping to videos, because the horizontal mirroring operation does not change the meaning of ASL signs. For temporal augmentation, we randomly choose 64 consecutive frames and pad shorter videos by repeating frames. We train our model using the Adam optimizer, with the initial learning rate and weight decay chosen on the training set. During testing, we feed an entire video into the model. Similar to [20, 13], we choose hyper-parameters on the training set and report results by retraining on both training and validation sets using the optimal parameters.
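The temporal augmentation can be sketched as follows; padding by cycling through the frames is one concrete interpretation of "repeating frames":

```python
import random

def temporal_augment(frames, clip_len=64):
    """Randomly pick clip_len consecutive frames; pad shorter videos by
    cycling through their frames until the clip is long enough."""
    if len(frames) >= clip_len:
        start = random.randint(0, len(frames) - clip_len)
        return frames[start:start + clip_len]
    out = []
    i = 0
    while len(out) < clip_len:
        out.append(frames[i % len(frames)])  # repeat frames cyclically
        i += 1
    return out
```

Every training clip therefore has a fixed length of 64 frames, matching the average video length of the isolated-sign datasets.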
4.2 Qualitative Results
Visualizing memory-augmented attention. We visualize the output of the memory-augmented temporal attention in Fig. 4. The first example is the word “jacket” from WLASL. The temporal attention module filters out the starting and ending gestures in the video and learns to focus on the middle part, where the sign is performed. The second example is the word “brown” from MSASL. In this case, the attention map shows two peaks. By examining the video, we find that the sign is actually performed twice in a row with a slight pause in between.
Generating sign signatures. The temporal attention facilitates selecting representative frames from sign videos, referred to as “sign signatures”. In Fig. 1, the sign signatures are selected as the frames with the highest attention score among the testing examples. The sign signatures generated by our model are visually consistent with those manually identified from the news signs. A potential use of sign signatures is to help automatically create summaries, e.g., cover photos, for videos on sign language tutorial websites such as https://www.signingsavvy.com/.
| Method | WLASL100 macro t-1 | t-5 | micro t-1 | t-5 | WLASL300 macro t-1 | t-5 | micro t-1 | t-5 | MSASL100 macro t-1 | t-5 | micro t-1 | t-5 | MSASL200 macro t-1 | t-5 | micro t-1 | t-5 |
| RCNN [20, 13] | 25.97 | 55.04 | 25.28 | 54.13 | 19.31 | 46.56 | 18.93 | 45.76 | 15.75 | 39.12 | 16.34 | 39.16 | 8.84 | 26.00 | 8.49 | 25.94 |
| I3D [20, 13] | 65.89 | 84.11 | 67.01 | 84.58 | 56.14 | 79.94 | 56.24 | 78.38 | 80.91 | 93.46 | 81.94 | 94.13 | 74.29 | 90.12 | 75.32 | 90.80 |
| I3D + n.w. | 61.63 | 82.56 | 62.18 | 82.72 | 54.19 | 80.69 | 54.71 | 80.99 | 77.70 | 93.59 | 75.41 | 90.34 | 75.40 | 90.34 | 76.68 | 90.69 |
Recognition accuracy (%) on WLASL and MSASL. RCNN refers to the Recurrent Convolutional Neural Network; I3D refers to the plain I3D setting; I3D + n.w. denotes the setting where extracted news words are directly added into the training set. We use macro. to denote the macro average accuracy and micro. to denote the micro average accuracy. Results on MSASL are indicative due to the missing training data.
4.3 Baseline Models
We compare with two baseline WSLR models, i.e., Recurrent Convolutional Neural Networks (RCNN) and I3D. Both RCNN and I3D are suggested in [20, 13] to model the spatio-temporal information in word-level sign videos and achieve state-of-the-art results on both datasets.
RCNN. RCNN uses a 2D convolutional network to extract spatial features from frames. Recurrent neural networks, such as GRU or LSTM, are then stacked on top of the convolutional network to model temporal dependencies. In our experiments, we use an implementation that stacks a two-layer GRU on top of VGG-16.
I3D. I3D is a 3D convolutional neural network that inflates the convolution filters and pooling layers of 2D convolutional networks. I3D has recently been adapted to WSLR [20, 13] and achieves prominent recognition accuracy. For WLASL, we use the pretrained weights released by the authors. For MSASL, we report our reproduced results; we contacted the authors but were not able to obtain their implementation.
4.4 Quantitative Results
4.4.1 Comparison of Recognition Performance
We report recognition performance with two metrics: (i) macro average accuracy (macro.), which measures the accuracy of each class independently and averages over classes; and (ii) micro average accuracy (micro.), which calculates the average per-instance accuracy. We summarize the results in Table 2.
In Table 2, I3D+n.w. results indicate that directly adding news signs to the training set does not help the training and even harms the model performance in most cases. This demonstrates the influence of the domain gap. Moreover, the degradation in performance also reveals the challenge of transferring knowledge from the news words to the WSLR models. We also notice that on MSASL200, the recognition accuracy improves after adding the news words despite the large domain gap. Although the improvement is minor, this shows the validity of our collected news sign videos.
As Table 2 shows, RCNN performs poorly, mainly because of its limited capacity to capture temporal motion dependencies. Our proposed method surpasses the previous state-of-the-art I3D model on both datasets. Because we use the same backbone network (I3D) as the baseline models, we conclude that the improvements come from the knowledge transferred from news words. Since the news words do not exhibit irrelevant artefacts such as idling and arm raising, they let the model focus more on the actual signing part of isolated words and produce more robust features.
We observe that our proposed model outperforms the previous state of the art by a large margin on WLASL. This is because WLASL has even fewer examples per class (13-20) than MSASL (40-50). For fully supervised models, the number of examples in WLASL is very small, requiring an efficient way to learn good representations. In this regard, our proposed approach is able to transfer knowledge from the news words and thus helps the learning process in such a limited-data regime.
4.4.2 Word-level Classifier as Temporal Localizer
Lack of training data is one of the main obstacles for both word-level and sentence-level sign language recognition tasks . One such problem for sentence-level sign recognition is the lack of accurate temporal boundary annotations for signs, which can be useful for tasks such as continuous sign language recognition . We employ our word-level classifier as a temporal localizer to provide automatic annotations for temporal boundaries of sign words in sentences.
Setup. Since no ASL dataset provides frame-level temporal annotations, we manually annotate temporal boundaries for 120 random news word instances to validate our ideas. The word classes are from WLASL100. Our expert annotators are provided with a news sentence and an isolated sign video, and are asked to identify the start and end frames of the sign word in the news sentence.
Annotation quality control. We use temporal IoU (tIoU), which is widely used to evaluate temporal action localization results, to verify the annotation quality. For two time intervals I_1 and I_2, their tIoU is computed as tIoU(I_1, I_2) = |I_1 ∩ I_2| / |I_1 ∪ I_2|. The initial average tIoU between the annotations is 0.73. We discard entries with tIoU < 0.5; for the remaining entries, an agreement is reached by discussion. We keep 102 annotated entries.
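The tIoU computation is straightforward; a small helper for intervals given as (start, end) pairs:

```python
def tiou(a, b):
    """Temporal IoU of two intervals a = (s1, e1) and b = (s2, e2)."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))   # overlap length
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter          # total span covered
    return inter / union if union > 0 else 0.0
```

Disjoint intervals score 0, identical intervals score 1, so the 0.5 cut-off requires the two annotations to overlap on at least as many frames as they disagree on.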
Results. We demonstrate the improvement of the word recognizer via localization accuracy. To this end, we employ the classifiers in a sliding-window fashion with windows of 9-16 frames and identify a sign word if the predicted class probability is larger than 0.2. We compare I3D with our model by computing mAP at different tIoU thresholds. As shown in Table 3, our method achieves higher localization performance and provides an option for automatic temporal annotation.
| plain I3D [20, 13] | 27.4 | 23.9 | 15.3 | 2.4 |
4.5 Analysis and Discussions
We investigate the effect of different components of our model by conducting experiments on WLASL100.
Effect of coarse domain alignment. We first study the effect of the coarse domain alignment described in Sec. 3.3.2. To this end, we extract features for news signs using the classifier f_iso without coarse alignment and store the class centroids as memory. As shown in Table 5, the model achieves better performance when coarse alignment is used. By training jointly on samples from the two domains, the classifier aligns the domains in the embedding space. When coarse domain alignment is not applied, the domain gap leads to less relevant prototypes and prevents the model from learning good domain-invariant features.
Effect of cross-domain knowledge. To investigate the influence of cross-domain knowledge in our method, we explore three different settings for producing the prototypical memory: (i) simulating the case where only isolated signs are available, we use f_iso to extract features for isolated signs and use their class centroids as memory. In the remaining two settings, we investigate the effectiveness of news sign prototypes by using f_joint to extract features for both isolated and news sign words: (ii) employing centroids of only the isolated word features as memory; (iii) using both isolated and news word features to compute the centroids.
| Setting | macro t-1 | macro t-5 | micro t-1 | micro t-5 |
| w/o coarse align. | 70.93 | 87.21 | 71.30 | 86.25 |
| w/ coarse align. | 77.52 | 91.08 | 77.55 | 91.42 |
As seen in Table 5, using the aligned model with only news signs as memory achieves the best performance. We analyze the performance degradation in the other settings as follows. In setting (i), the model only retrieves information from the isolated signs and thus does not benefit from cross-domain knowledge. In setting (ii), the representations of isolated signs are compromised by the coarse alignment, thus providing even worse centroids than (i). In setting (iii), averaging cross-domain samples produces noisy centroids, since the samples are not well clustered in the embedding space.
In this paper, we propose a new method to improve the performance of sign language recognition models by leveraging cross-domain knowledge from subtitled sign news videos. We coarsely align isolated signs and news signs by joint training and propose to store class centroids in a prototypical memory for online training and offline inference purposes. Our model then learns a domain-invariant descriptor for each isolated sign. Based on the domain-invariant descriptor, we employ a temporal attention mechanism to emphasize class-specific features while suppressing those shared by different classes. In this way, our classifier focuses on learning features from class-specific representations without being distracted. Benefiting from our domain-invariant descriptor learning, our classifier not only outperforms the state of the art but can also localize sign words in sentences automatically, significantly reducing the laborious labelling procedure.
- Proceedings of the 26th Annual International Conference on Machine Learning, pp. 41–48.
- (2019) Sign language recognition, generation, and translation: an interdisciplinary perspective. In The 21st International ACM SIGACCESS Conference on Computers and Accessibility, pp. 16–31.
- (2009) Learning sign language by watching TV (using weakly aligned subtitles). pp. 2961–2968.
- (2017) Quo vadis, action recognition? A new model and the Kinetics dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6299–6308.
- (2014) Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555.
- (2017) Attend to you: personalized image captioning with context sequence memory networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 895–903.
- (2012) Sign language recognition using sub-units. Journal of Machine Learning Research 13, pp. 2205–2231.
- (2019) Large-scale weakly-supervised pre-training for video action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 12046–12055.
- (1997) Long short-term memory. Neural Computation 9 (8), pp. 1735–1780.
- (2018) Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7132–7141.
- (2015) Sign language recognition using 3D convolutional neural networks. In 2015 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6.
- (2018) MS-ASL: a large-scale data set and benchmark for understanding American Sign Language. arXiv preprint arXiv:1812.01053.
- (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
- (2018) Selfie sign language recognition with convolutional neural networks. International Journal of Intelligent Systems and Applications 10 (10), pp. 63.
- Neural sign language translation based on human keypoint estimation. Applied Sciences 9 (13), pp. 2683.
- (2018) Sign language recognition with recurrent neural network using human keypoint detection. In Proceedings of the 2018 Conference on Research in Adaptive and Convergent Systems, pp. 326–328.
- (2017) Re-sign: re-aligned end-to-end sequence modelling with deep recurrent CNN-HMMs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4297–4305.
- (2010) Self-paced learning for latent variable models. In Advances in Neural Information Processing Systems, pp. 1189–1197.
- (2020) Word-level deep sign language recognition from video: a new large-scale dataset and methods comparison. In The IEEE Winter Conference on Applications of Computer Vision, pp. 1459–1469.
- Learning to detect concepts from webly-labeled video data.
- (2008) Sign language recognition by combining statistical DTW and independent classification. IEEE Transactions on Pattern Analysis and Machine Intelligence 30 (11), pp. 2040–2046.
- (2009) Automatic recognition of fingerspelled words in British Sign Language. In 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pp. 50–57.
- (2008) Visualizing data using t-SNE. Journal of Machine Learning Research 9, pp. 2579–2605.
- (2017) A read-write memory network for movie story understanding. In Proceedings of the IEEE International Conference on Computer Vision, pp. 677–685.
- (2017) Gesture and sign language recognition with temporal residual networks. In The IEEE International Conference on Computer Vision (ICCV) Workshops.
- (2008) Introduction to information retrieval. pp. 260.
- (2016) Temporal action localization in untrimmed videos via multi-stage CNNs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1049–1058.
- (2017) Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, pp. 4077–4087.
- Visual recognition of American Sign Language using hidden Markov models. Technical report, Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences.
- (1998) Real-time American Sign Language recognition using desk and wearable computer based video. IEEE Transactions on Pattern Analysis and Machine Intelligence 20 (12), pp. 1371–1375.
- (2015) End-to-end memory networks. In Advances in Neural Information Processing Systems, pp. 2440–2448.
- (2015) SIFT-based Arabic sign language recognition system. In Afro-European Conference for Industrial Advancement, pp. 359–370.
- (2014) Memory networks. arXiv preprint arXiv:1410.3916.
- (2010) Chinese sign language recognition based on video sequence appearance modeling. In 2010 5th IEEE Conference on Industrial Electronics and Applications, pp. 1537–1542.
- (2015) SIFT based approach on Bangla sign language recognition. In 2015 IEEE 8th International Workshop on Computational Intelligence and Applications (IWCIA), pp. 35–39.
- (2018) Recognizing American Sign Language gestures from within continuous videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 2064–2073.
- (2017) Learning to learn from noisy web videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5154–5162.