Trajectory-Based Recognition of Dynamic Persian Sign Language Using Hidden Markov Model

Saeideh Ghanbari Azar, et al. · 12/04/2019

Sign Language Recognition (SLR) is an important step in facilitating communication between deaf people and the rest of society. Existing Persian sign language recognition systems are mainly restricted to static signs, which are not very useful in everyday communication. In this study, a dynamic Persian sign language recognition system is presented. A collection of 1200 videos was captured from 12 individuals performing 20 dynamic signs with a simple white glove. The trajectory of the hands, along with hand shape information, was extracted from each video using a simple region-growing technique. These time-varying trajectories were then modeled using Hidden Markov Model (HMM) with Gaussian probability density functions as observations. The performance of the system was evaluated under different experimental strategies. Signer-independent and signer-dependent experiments were performed, and an average accuracy of 97.48% was achieved. The experimental results demonstrated that the performance of the system is independent of the subject and that it performs excellently even with a limited number of training data.


1 Introduction

Sign language consists of a set of manual and non-manual gestures used for communication, especially among deaf people. The majority of the gestures of a sign language are manual, while non-manual gestures such as head movements (e.g., nodding), body movements (e.g., shrugging) and facial expressions also play an important role in sign communication. Unfortunately, the use of sign language is usually restricted to the deaf community, which limits its members' communication with the rest of the world. As a result, they are often excluded from society and deprived of their right to equal educational and career opportunities. To address this problem, Sign Language Recognition (SLR) systems are developed to translate this language into speech or text. Developing efficient SLR systems can facilitate the communication of deaf people in society and remove these barriers.

One of the basic issues regarding SLR is that there is no universal sign language; the sign languages of different countries have their own grammar rules. SLR systems have been rapidly developed in recent years for different sign languages, including American Vogler and Metaxas (1999); Wu et al. (2015); Lahoti et al. (2018), Chinese Huang et al. (2018), Australian Holden et al. (2005), Arabic Tubaiz et al. (2015); Shanableh et al. (2007), Indian Hore et al. (2017), Spanish López-Colino and Colás (2012) and Japanese Barricelli and Valtolina (2017). For reviews of sign language and the different approaches developed for SLR systems, refer to Cheok et al. (2019); Parton (2005).

Due to its broad range of capabilities, Machine Vision (MV) is the major tool used in the development of SLR systems. An MV-based SLR system usually consists of three components: a hand tracker, a feature extractor and a classifier. The hand tracker's job is to segment hand regions from the background of the input video frames. Some studies rely on data gloves to track the hand movements Tubaiz et al. (2015); Fang et al. (2007); Gao et al. (2000); Khelil and Amiri (2016); Kumar et al. (2017). Although these gloves provide easy and precise tracking, they usually contain heavy electromechanical devices which are inconvenient for the signers and limit their natural movements. Other studies rely on vision-based methods for hand tracking Shanableh et al. (2007); Chen et al. (2003); Al-Rousan et al. (2009). These techniques usually impose some limitations on the signer's clothes or the imaging conditions. For instance, some vision-based studies require the subject to wear colored gloves to facilitate the hand tracking process Maraqa and Abu-Zaiter (2008); Mohandes and Deriche (2005); Mohandes et al. (2012). Nevertheless, these techniques are more convenient and cheaper. The feature extractor is the second stage of an SLR system. It takes the hand tracker's data and produces a feature vector. Some SLR studies rely on hand shape information to extract the feature vector Mohandes et al. (2012), while others rely on hand trajectory information Lim et al. (2016). Once feature vectors are extracted, they need to be classified using an appropriate classifier. Many different classifiers have been utilized for recognizing different sign languages, mainly including Neural Networks (NNs), K-Nearest Neighbors (KNN) and Hidden Markov Models (HMMs).

The development of machine-vision-based SLR systems started with the pioneering work of Starner et al. Starner (1995), in which they developed an American sign language recognition system. They placed the camera on top of a desk or on a cap worn by the signer, used two colored gloves to facilitate the hand tracking stage, and classified the signs using HMM. In a later study, Holden et al. Holden et al. (2005) presented an HMM-based system which relies on hand shape information to extract a feature vector. This shape information includes hand size, direction, roundedness and the angle between the two hands. Their system recognizes Australian sign language with an accuracy rate of 97% at the sentence level and 99% at the word level. More recently, much research has been conducted on Arabic sign language recognition. Al-Rousan et al. Al-Rousan et al. (2009) suggested a vision-based system that uses the Discrete Cosine Transform (DCT) and HMM for the recognition of 30 Arabic signs in both offline and online modes. Recognition rates of 96.74% and 93.8% were obtained in the offline and online modes, respectively. Tubaiz et al. Tubaiz et al. (2015) proposed a glove-based continuous SLR system. They used a feature extractor which emphasizes the temporal dependency of the data, and a modified KNN approach for classification. Their system recognizes 40 sentences of Arabic sign language with an accuracy of 98.9%.

Recently, deep-learning-based approaches have proven very popular in many areas of machine vision, including sign language recognition. The popularity of these approaches stems from their excellent discriminative abilities and successful performance. The seminal series of studies by Koller et al. Koller et al. (2015b, 2016b, 2016a, 2018, 2019) are among the first conducted on deep-learning-based sign language recognition. In their early work Koller et al. (2015b), they used deep learning for sign language recognition based on the shape of the mouth. In Koller et al. (2018), they developed an SLR system which uses Convolutional Neural Networks (CNNs) in an HMM framework. By combining the discriminative abilities of CNNs with the dynamic modeling ability of HMM, they significantly improved the recognition performance on three benchmark sign language datasets, namely PHOENIX 2012 Koller et al. (2015a), PHOENIX 2014 Koller et al. (2015a), and SIGNUM von Agris et al. (2008). In their recent work Koller et al. (2019), they combined their previous works by adding a multi-stream HMM to jointly solve the two sub-problems of hand gesture and mouth shape recognition. They developed a powerful, deep CNN with two bidirectional Long Short-Term Memory (LSTM) layers for the recognition of continuous sign language sequences with weak and noisy labels.

1.1 Related Works to Persian Sign Language (PSL) Recognition

This part focuses on previous attempts at recognizing PSL. As in other sign languages, PSL signs are divided into two main categories: static signs and dynamic signs. Static signs do not include hand movements and can be captured in a single image. Dynamic signs, on the other hand, include hand movements, which makes them more difficult to process. Dynamic signs are usually captured as video frames, and video processing techniques are employed to recognize them.

The development of PSL recognition systems is in its early stages, and few studies have been conducted in this field. These studies all focused on image-based recognition of static signs. In the first PSL recognition system, Karami et al. Karami et al. (2011) collected an image dataset of static alphabet signs. The images were transformed into the wavelet domain, and different levels of the wavelet transform, including the approximation coefficients of level 6, the diagonal and horizontal details of levels 6 and 7, and the vertical details of level 6, were used as the feature vector. A Multi-Layer Perceptron (MLP) neural network was used as the recognizer, and an accuracy rate of 94.06% was achieved. In another study, Barkoky and Charkari Barkoky and Charkari (2011) designed a system for the recognition of Persian sign numbers. They used a color-based technique to extract the hand regions and then applied a thinning method to these segmented images. The recognition was done by counting the number of endpoints of the thinned image, and an accuracy rate of 96.6% was reported. In a study similar to Karami et al. (2011), Moghaddam et al. Moghaddam et al. (2011) reported an image-based system to recognize the alphabet of PSL. They used kernel-based feature extraction methods, including Kernel Principal Component Analysis (KPCA) and Kernel Discriminant Analysis (KDA). Three different classifiers, including Minimum Distance (MD), Support Vector Machine (SVM) and NN, were used to compare the results. In a more recent study, Zare et al. Zare and Zahiri (2018) proposed a recognition system for 10 static Persian signs, including six numbers and four words. They used skin segmentation in different color spaces to detect the hand regions and employed Fourier descriptors as features to train a classifier. Their approach yields good results for a real-time signer-independent recognition system.

Each of the works discussed above for PSL recognition has its advantages. However, since PSL recognition is a newly evolving field of study, some challenges remain, which motivate this paper. All the PSL recognition systems introduced so far were developed for alphabet or number signs, which are not very useful in the everyday conversations of the deaf community. This indicates the need for a dynamic sign recognition system that can recognize more practical signs. Motivated by this, we present a dynamic PSL recognition system. Over 1200 videos of 20 dynamic signs were collected for this system. A region growing technique is used to extract the motion trajectory of the hand together with three shape features. An HMM with Gaussian observations is finally utilized to classify the 20 dynamic signs.

To summarize, we make the following contributions. First, a new dynamic PSL dataset with 1200 videos is collected, containing 20 single-handed signs that are practical in the everyday communication of the deaf community. To increase the diversity of the dataset, 12 individuals participated in its collection, making it more subject-independent. Second, a dynamic PSL recognition system is proposed which uses a simple trajectory extraction approach based on region growing. The system performs excellently regardless of the subject performing the signs, and it achieves an accuracy of more than 95% even with a limited number of training data.

The rest of the paper is organized as follows. Section 2 describes the collected dataset. Section 3 and Section 4 elaborate on the trajectory extraction approach and the HMM classifier, respectively. Section 5 presents the experiments and compares the results. Finally, the paper is concluded in Section 6.

2 Dataset Collection

Sign Code   Sign        Sign Code   Sign
1           Sad         11          Eat
2           Wish        12          Sun
3           Dear        13          Mother
4           Sorry       14          People
5           How?        15          Go
6           Student     16          Day
7           Today       17          Hear
8           Forget      18          Brave
9           Please      19          Natural
10          Danger      20          Can
Table 1: List of the signs in the dataset.

Since no dynamic PSL dataset was available, the authors needed to construct one. The dataset was collected at the Society of Deaf People (SDP), Urmia, Iran, and named the University of Tabriz Persian Sign Language dataset (UoT-PSL) Azar and Seyedarabi (2016). In order to develop an efficient recognition system, the collected dataset needs to contain different versions of each sign being performed; this can decrease the dependency of the system on the subject. For this purpose, twelve volunteers, six male and six female signers, participated in the collection. The signers included both deaf and hearing individuals. Twenty dynamic signs of PSL were chosen for the dataset. The signs were selected from the single-handed signs appearing in everyday conversations, in consultation with experts at the SDP. A list of the signs in the dataset is presented in Table 1. Each individual performed each sign 5 times, producing 60 samples per sign. In the remainder of this paper, the signs will be referred to by their corresponding codes in Table 1.

Figure 1: Samples of the captured video frames for sign ‘wish’.

A Sony digital camera (model DSC-HX9V) was used to capture a total of 1200 videos of the 20 signs. There were no particular restrictions on the lighting of the environment. The collected videos were in AVI format, the frame rate was set to 25 frames per second and the spatial resolution was pixels. The audio content of the videos was removed to avoid unnecessary complexity.

To ease the process of hand tracking, some restrictions were imposed on the imaging conditions. The signers stood in front of a blue background and were asked to wear a simple white glove. Colored markers were placed on the fingertips of the glove for possible future studies, but they were not used in the present study. All the signs were chosen from single-handed Persian signs to avoid possible occlusion of the hands. Figure 1 presents samples of the captured video frames.

3 Sign Trajectory Extraction

Figure 2: A representation of the hand trajectory.

The first step in developing the PSL recognition system is hand trajectory extraction. That is, in each frame, the hand region is detected and its centroid along the x and y axes is saved. For a video stream, these extracted centroids form the hand trajectory. Figure 2 gives an illustration of this definition. The hand region extraction procedure is explained in the following.

It should be noted that the first 10-15 frames of each video did not contain the hand of the signer. In this paper, the frame in which the hand appears in the scene for the first time is referred to as the start frame. Thus, the first step was to detect this so-called start frame. There is no hand region in the frames that come before the start frame; therefore, they all contain the same image of the signer. This means that subtracting these frames from the first frame results in an approximately zero image. This fact was exploited to detect the start frame: each frame was subtracted from the first frame, and among this subtracted stream of frames, the first one containing a nonzero region bigger than a threshold determined the start frame. Figure 3 presents an example of the first frame of a video, the start frame and their subtraction image.
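As a minimal sketch of this step, assuming OpenCV for video decoding, the start frame can be detected with simple frame differencing. This is not the authors' code, and the two thresholds are illustrative assumptions.

    import cv2
    import numpy as np

    def find_start_frame(video_path, diff_thresh=30, area_thresh=500):
        """Index of the first frame whose difference from the first frame
        contains a changed region larger than area_thresh pixels."""
        cap = cv2.VideoCapture(video_path)
        ok, first = cap.read()
        if not ok:
            raise IOError("cannot read video: " + video_path)
        first_gray = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY)
        idx = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            idx += 1
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Subtract the current frame from the first frame.
            diff = cv2.absdiff(gray, first_gray)
            # The hand entering the scene produces a large changed region.
            if np.count_nonzero(diff > diff_thresh) > area_thresh:
                cap.release()
                return idx
        cap.release()
        return None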

Figure 3: An example of the first frame of the video, start frame and their subtraction image
Figure 4: Hand region extraction of the t'th frame using the hand centroid of the (t-1)'th frame as the seed for region growing.

After detecting the start frame, a hand tracking process begins to extract the trajectory of the hand. This hand tracking was accomplished via a region growing technique and is illustrated in Figure 4. Specifically, the centroid of the hand region in the start frame was obtained and denoted as c_1. This centroid's location was used as the seed for region growing in the frame that comes after the start frame, which produces the hand region in that frame. Then, the hand region centroid of this frame, denoted as c_2, was used as the seed for region growing in its next frame. In general, Figure 4 illustrates the hand region extraction of the t'th frame using the hand centroid of the (t-1)'th frame, i.e., c_{t-1}. This procedure was repeated for all the frames of the video, producing the hand trajectory. Figure 5 shows two examples of the extracted trajectories.
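The tracking loop itself can be sketched as follows, again as an assumption-laden illustration rather than the original code: OpenCV's floodFill stands in for the region-growing step, and the intensity tolerance tol exploits the near-uniform brightness of the white glove.

    import cv2
    import numpy as np

    def track_hand(frames, seed, tol=20):
        """Seeded region-growing tracking over a list of grayscale frames.
        seed: (x, y) hand centroid in the start frame. Returns one centroid
        per frame, i.e. the hand trajectory."""
        trajectory = []
        for frame in frames:
            mask = np.zeros((frame.shape[0] + 2, frame.shape[1] + 2), np.uint8)
            # Grow a region around the seed; only the mask is filled,
            # so the frame itself is left untouched.
            cv2.floodFill(frame, mask, seed, 255, loDiff=tol, upDiff=tol,
                          flags=4 | cv2.FLOODFILL_MASK_ONLY)
            ys, xs = np.nonzero(mask[1:-1, 1:-1])
            # The centroid of this frame seeds region growing in the next one.
            seed = (int(xs.mean()), int(ys.mean()))
            trajectory.append(seed)
        return trajectory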

(a)
(b)
Figure 5: Two examples of the extracted trajectories for signs 1 (sad) and 13 (mother).

In addition to the centroid of the hand, three simple shape features were also extracted from each frame: area, orientation and eccentricity. Area is the number of pixels in the hand region of each frame. Approximating the hand region by an ellipse, orientation measures the angle between the x-axis and the major axis of this ellipse, and eccentricity measures how much this ellipse deviates from being circular. These features were appended to the hand centroids, forming a five-dimensional feature vector for each frame.
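These three shape descriptors correspond to standard region properties. A brief sketch using scikit-image (an assumption on tooling, not the authors' toolchain):

    from skimage.measure import label, regionprops

    def shape_features(hand_mask):
        """Area, orientation and eccentricity of the largest connected
        region in a binary hand mask."""
        regions = regionprops(label(hand_mask))
        hand = max(regions, key=lambda r: r.area)   # largest blob = the hand
        return hand.area, hand.orientation, hand.eccentricity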

These time-varying feature vectors will be used as the observation sequences of the HMMs. Since the signs in the dataset were performed by different subjects, and each subject had his/her own speed of performing the signs, there are vast differences in the number of frames across the videos. To decrease the subject dependency of the system, we need to normalize the number of frames before training the HMMs. For this purpose, a linear temporal interpolation with 30 query points was used to normalize the number of frames. Therefore, for each video sample we extracted a 30×5 feature matrix.
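A sketch of this length normalization under the stated setup (five features, 30 query points):

    import numpy as np

    def normalize_length(features, n_points=30):
        """Linearly resample a (T, 5) feature sequence to (n_points, 5)."""
        T, d = features.shape
        old_t = np.linspace(0.0, 1.0, num=T)
        new_t = np.linspace(0.0, 1.0, num=n_points)
        # Interpolate each feature dimension independently over time.
        return np.stack([np.interp(new_t, old_t, features[:, j])
                         for j in range(d)], axis=1)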

4 Hidden Markov Model Based Classification

Unlike static signs, which yield time-invariant features, dynamic signs produce features that vary in time. In order to classify these time-varying features, we need a system that can model their dynamic nature. HMM has long been used for the classification of temporal patterns Brand et al. (1997), and it has proved successful in sign language classification Starner (1995). This section gives a brief introduction to HMM.

HMM is a stochastic model which contains a Markov chain with an invisible or hidden sequence of states. If we denote the number of hidden states as $N$, an HMM can succinctly be represented by the triplet:

$\lambda = (\pi, A, B)$   (1)

where $\pi$ is an $N$-dimensional vector containing the initial probabilities of the states and $A$ is the $N \times N$ transition probability matrix. $B$ is called the state emission probability distribution, and its components are denoted as $b_j(\cdot)$ for the $j$'th state. At each time, the process is in one of the hidden states and generates observations according to these emission probability distributions. The observations can be either discrete or continuous. For discrete observations, the emissions of each state are represented by probability mass functions (pmf), and for continuous observations, they are represented by probability density functions (pdf). Refer to Rabiner and Juang (1986); Rabiner (1989) for a detailed tutorial on HMM.

The observations used in this study are five-dimensional continuous feature vectors. Therefore, a pdf should be assigned for estimating these observations. The mixture of Gaussians has proved to be a successful method for estimating the pdf of continuous observations Bashir et al. (2007). Hence, we model the observations of each state of the HMM with a mixture of Gaussians. Let the $d$-dimensional observation vector at time $t$ be denoted as $o_t$ and the state at time $t$ be denoted as $q_t$. The pdf of the observation at state $j$ can be modeled as:

$b_j(o_t) = \sum_{m=1}^{M} c_{jm} \mathcal{N}(o_t; \mu_{jm}, \Sigma_{jm})$   (2)

where $M$ is the number of mixing Gaussian pdfs and $c_{jm}$ is the mixing parameter satisfying:

$\sum_{m=1}^{M} c_{jm} = 1, \quad c_{jm} \ge 0$   (3)

$\mathcal{N}(o_t; \mu_{jm}, \Sigma_{jm})$ is a multivariate Gaussian distribution with mean vector $\mu_{jm}$ and covariance matrix $\Sigma_{jm}$.

HMM-based classification is performed in two steps, i.e., training and evaluation. In the training step, an HMM is trained for each sign; that is, the parameters of the triplet $\lambda = (\pi, A, B)$ are estimated. This procedure is known as the training problem of HMM. The parameters are initialized to random values and then estimated using the Baum–Welch algorithm Rabiner and Juang (1986). Assuming we have $C$ classes or signs in total, the result of the training step will be $C$ trained HMMs, represented as $\lambda_1, \ldots, \lambda_C$. In the evaluation step, once the HMMs are trained, a test sign with observation sequence $O$ is recognized by computing the probability of $O$ given each trained HMM. This is known as the evaluation problem of HMM and is solved using the forward-backward algorithm Rabiner and Juang (1986). Specifically, given the set of trained HMMs $\{\lambda_1, \ldots, \lambda_C\}$ and the observation sequence $O$ of the test sign, the sign label $c^*$ is assigned according to the following formula:

$c^* = \arg\max_{1 \le c \le C} P(O \mid \lambda_c)$   (4)
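As an illustrative sketch only (not the authors' implementation), this train-then-score scheme maps naturally onto the GMMHMM class of the hmmlearn Python library. The 12 states and 3 mixtures anticipate the tuning results of Section 5; the covariance type, data layout and helper names are assumptions.

    import numpy as np
    from hmmlearn.hmm import GMMHMM

    def train_sign_models(train_data, n_states=12, n_mix=3):
        """Train one Gaussian-mixture HMM per sign.
        train_data: dict mapping sign label -> list of (30, 5) feature matrices."""
        models = {}
        for sign, samples in train_data.items():
            X = np.concatenate(samples)          # stack the sequences
            lengths = [len(s) for s in samples]  # per-sequence lengths
            model = GMMHMM(n_components=n_states, n_mix=n_mix,
                           covariance_type="diag", n_iter=100)
            model.fit(X, lengths)                # Baum-Welch estimation
            models[sign] = model
        return models

    def classify(models, sample):
        """Label a (30, 5) test sequence with the model of highest
        log-likelihood, i.e. the argmax rule of Eq. (4)."""
        return max(models, key=lambda sign: models[sign].score(sample))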

5 Results and Discussion

In this section, the performance of the proposed Persian sign language recognition system is evaluated in different experiments. Before conducting the experiments, the main parameters of the system are tuned. All the following experiments are performed with two sets of features. The first set, referred to as trajectory features, uses only the x-y position of the hand. The second set, referred to as trajectory-shape features, uses the hand shape information in addition to the hand position. That is, the first set contains 2-dimensional features, while the second contains 5-dimensional features.

5.1 Parameter Tuning

To achieve the best models for the sign trajectories, two main parameters need to be tuned, namely the number of hidden states and the number of Gaussian mixtures. For this purpose, 20% of the samples of each sign were randomly chosen for training and the rest of the data were used for testing. For an HMM, the most important parameter is the number of hidden states. Figure 6(a) shows the accuracy of the system for different numbers of states while fixing the other parameters. It can be observed that for both sets of features, the accuracy of the system increases as we increase the number of states from 3 to 12. The highest accuracy is achieved with 12 states for both sets of features, i.e., 98.66% for trajectory-shape and 91.2% for trajectory features.

(a)
(b)
Figure 6: Tuning the number of states and the number of Gaussian mixtures. a) Classification accuracy as a function of the number of states. b) Classification accuracy as a function of the number of Gaussian mixtures.
(a)
(b)
Figure 7: Classification accuracy as a function of the number of states and the number of Gaussian mixtures. a) For trajectory-shape features. b) For trajectory features.

The next parameter to be tuned is the number of Gaussian mixtures. Figure 6(b) shows the accuracy of the system for different numbers of Gaussian mixtures while fixing the other parameters. For both sets of features, the best accuracy is achieved with 3 mixtures, and it decreases as we increase the number of mixtures, revealing that a mixture of 3 Gaussians is the best representation of our observation data. To further evaluate the role of these two parameters, Figure 7 presents the classification performance for varying numbers of states and Gaussian mixtures. From this figure, deductions similar to those of Figure 6 can be made about the optimal number of states and mixtures. Moreover, it can be observed that the trajectory-shape features (Figure 7(a)) are more robust to these parameters than the trajectory features (Figure 7(b)). To summarize, considering the results presented in Figure 6 and Figure 7, the optimal numbers of states and Gaussian mixtures were set to 12 and 3, respectively.
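The tuning procedure amounts to a small grid search. Below is a sketch reusing the hypothetical train_sign_models and classify helpers from Section 4, with grids matching the ranges examined above:

    def tune_hmm(train_data, test_data, state_grid=range(3, 13),
                 mix_grid=range(1, 6)):
        """Return the (n_states, n_mix) pair with the highest test accuracy.
        test_data: list of (feature_matrix, true_label) pairs."""
        best = (None, None, 0.0)
        for n_states in state_grid:
            for n_mix in mix_grid:
                models = train_sign_models(train_data, n_states, n_mix)
                hits = sum(classify(models, x) == y for x, y in test_data)
                acc = hits / len(test_data)
                if acc > best[2]:
                    best = (n_states, n_mix, acc)
        return best   # the results above suggest (12, 3, ...)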

5.2 Sign Classification

In this section, the classification results for the 20 dynamic Persian signs are presented. After extracting the hand trajectory and shape information, 20 HMMs were trained using both sets of features, with 12 hidden states and 3 Gaussian mixtures. In addition to HMM, a Support Vector Machine (SVM) with a polynomial kernel was also used for classifying the signs, and the results were compared to those obtained with HMM.

Three different training strategies, each using 20% of the samples of each sign, were used to evaluate the performance of the system, namely random, subject-dependent and subject-independent training. In the random training strategy, as its name suggests, 20% of the samples were selected randomly as training data, leaving the rest of the data for testing. In the subject-dependent strategy, 20% of the samples of each subject were selected as training data. Since we have five samples of each sign from each subject, one sample per subject was selected as training data; that is, we made sure that each subject had a sample among the training data. In the subject-independent strategy, on the other hand, we trained the system with the samples of only two subjects, and the samples of the other ten subjects were left for testing. All the experiments were conducted in 10 runs, and the mean and variance values of the classification accuracy are reported in Table 2.
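The three strategies differ only in how the 20% training subset is drawn. A hedged sketch, assuming samples are stored as (subject_id, feature_matrix, label) tuples:

    import random

    def split_random(samples, frac=0.2, seed=0):
        """Random strategy: 20% of all samples for training."""
        rng = random.Random(seed)
        pool = samples[:]
        rng.shuffle(pool)
        k = int(frac * len(pool))
        return pool[:k], pool[k:]

    def split_subject_independent(samples, train_subjects):
        """Subject-independent strategy: train on two subjects only,
        test on the remaining ten."""
        train = [s for s in samples if s[0] in train_subjects]
        test = [s for s in samples if s[0] not in train_subjects]
        return train, test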

Strategy              SVM                                  HMM
                      Trajectory     Trajectory-shape      Trajectory     Trajectory-shape
Random                78.47 (0.85)   87.77 (1.12)          87.12 (0.21)   98.13 (0.11)
Subject-dependent     83.54 (0.62)   89.79 (0.23)          87.01 (0.13)   97.63 (0.12)
Subject-independent   62.90 (0.75)   67.10 (0.41)          83.20 (0.31)   96.70 (0.09)
Table 2: Classification accuracy, mean (variance) in %, for the two classifiers with 20% of the samples used for training under the different training strategies.

Some observations can be made from this table. First, for both classifiers, adding shape information to the trajectory features significantly increases the accuracy of the system (by roughly 10%), indicating the importance of hand shape information in sign classification. Second, the results obtained by HMM are notably better than those of SVM. This may be due to the inability of SVM to deal with the time-varying nature of the features, while HMM can successfully model these temporal features. Third, considering the different training strategies, the results obtained by SVM decrease meaningfully in the subject-independent case and increase slightly with subject-dependent training. This leads to the conclusion that the system designed with SVM as its classifier is extremely subject-dependent. Contrary to SVM, HMM yields excellent subject-independent results, and its classification accuracy drops by only 1.4% in the subject-independent case.

To further discuss the concept of signer independence, Figure 8 presents samples of the same sign performed by a single signer (Figure 8(a)) and by three different signers (Figure 8(b)). As can be seen in Figure 8(a), different realizations of a sign performed by a single signer are very similar in terms of both the shape of the trajectory and the x and y values at each frame number. For different signers (Figure 8(b)), although the x and y values at each frame differ, the shape of the trajectory is almost the same. SVM treats each frame as a separate feature and fails to see the temporal pattern of the features. As a result, it hardly recognizes the similarities between the signs performed by different signers and therefore exhibits weak signer-independent results. HMM, on the other hand, can model these temporal patterns using its state transition capabilities.

(a)
(b)
Figure 8: Samples of the same sign performed by (a) a single signer and (b) three different signers.
Figure 9: The trajectories of the 20 Persian signs used in this study. Note that the labels are omitted to reduce the clutter of the figure.
(a)
(b)
Figure 10: Performance of the proposed HMM-based system. Each sign is represented by its corresponding code. a) Confusion matrix of the classification. b) Accuracies obtained for each class of signs.

The trajectories of the 20 signs of the dataset are illustrated in Figure 9. This figure portrays the spread of realizations across these 20 signs. As can be seen, most of the signs are distinguishable, while a few of them have similar trajectories that can challenge the performance of the system. To better evaluate the performance of the proposed HMM-based system, the confusion matrix of the classification and the accuracies obtained for each class of signs are illustrated in Figure 10(a) and Figure 10(b), respectively. In these figures, each sign is represented by its corresponding code from Table 1. According to these figures and the sign trajectories of Figure 9, the following observations can be made. Half of the signs (signs 2-7, 13, 15, 19 and 20) are classified with 100% accuracy. Among these, signs 2-7 and sign 20 have distinguishable trajectories, so the obtained accuracy was predictable. Signs 13, 15 and 19 have similar trajectories, yet they are also classified with 100% accuracy. This can be explained by the added shape information, which has enabled the system to discriminate between these signs. The lowest accuracy is obtained for sign 12, which is mainly confused with sign 10, likely due to their similar trajectories.

Figure 11: The accuracy of the examined methods as a function of the training data percentage.

One of the most critical aspects of a recognition system is its level of dependence on the amount of training data. Given the challenges in acquiring sign videos, the number of available samples for each class is usually limited. Therefore, it is essential for a recognition system to perform well with limited training data. Figure 11 shows the accuracy of the examined methods as a function of the training data percentage. It can be seen that the SVM-based methods rely significantly on the amount of training data and their performance decreases as we decrease the training data, whereas the HMM-based methods, especially the one with trajectory-shape features, are largely robust to the amount of training data. It can be observed from the figure that even with 5% of the data used for training, the system can successfully model the signs, and for training data percentages above 30, the performance of the system remains almost the same. Note that the accuracy is not yet at ceiling at a training data percentage of 50.

6 Conclusion

In this study, a dynamic Persian sign language recognition system is presented. A dataset containing 1200 videos of 20 signs was collected. Hand trajectories, along with three hand shape features, were extracted from the video frames using a region growing technique. HMM with Gaussian mixture observations was utilized to model these trajectories and their temporal patterns. According to the experimental results, the HMM-based system with the hand's trajectory and shape information as features can successfully recognize these 20 signs with an average accuracy of 98.13%. Moreover, the experiments indicated that the performance of the system is independent of the subject, and that it performs excellently even with a limited number of training data.

Being an initial study on dynamic PSL recognition, this work has focused only on the trajectories of the signs. However, using a wider dictionary of signs is likely to increase the number of similar trajectories, leading to a need for more training data. This problem can be addressed by using two cameras to extract both spatial and depth information, decreasing the chance of similar trajectories. Another solution may be to use more sophisticated approaches such as deep-learning-based features. In future studies, the authors will focus on extending the dataset and using deep-learning-based approaches for PSL recognition.

Acknowledgment

This paper is published as part of a research project supported by the University of Tabriz, Research Affairs Office, Iran. The authors would like to thank the Society of Deaf People (SDP), Urmia, Iran, for the valuable assistance they provided during the acquisition of the dataset.

References

  • M. Al-Rousan, K. Assaleh, and A. Tala’a (2009) Video-based signer-independent arabic sign language recognition using hidden markov models. Applied Soft Computing 9 (3), pp. 990–999. Note: doi: https://doi.org/10.1016/j.asoc.2009.01.002 Cited by: §1, §1.
  • S. G. Azar and H. Seyedarabi (2016) University of tabriz persian sign language dataset (UoT-PSL). Note: Available at: https://asatid.tabrizu.ac.ir/Files/603_122fa2a0-989c-4124-b2f6-dfe2b5eb03ff.pdf Cited by: §2.
  • A. Barkoky and N. M. Charkari (2011) Static hand gesture recognition of persian sign numbers using thinning method. In Multimedia Technology (ICMT), 2011 International Conference on, pp. 6548–6551. Note: doi: 10.1109/ICMT.2011.6002201 Cited by: §1.1.
  • B. R. Barricelli and S. Valtolina (2017) A visual language and interactive system for end-user development of internet of things ecosystems. Journal of Visual Languages & Computing 40, pp. 1–19. Note: doi: https://doi.org/10.1016/j.jvlc.2017.01.004 Cited by: §1.
  • F. I. Bashir, A. A. Khokhar, and D. Schonfeld (2007) Object trajectory-based activity classification and recognition using hidden markov models. IEEE transactions on Image Processing 16 (7), pp. 1912–1919. Note: doi: 10.1109/TIP.2007.898960 Cited by: §4.
  • M. Brand, N. Oliver, and A. Pentland (1997) Coupled hidden markov models for complex action recognition. In Computer vision and pattern recognition, 1997. proceedings., 1997 ieee computer society conference on, pp. 994–999. Note: doi: 10.1109/CVPR.1997.609450 Cited by: §4.
  • F. Chen, C. Fu, and C. Huang (2003) Hand gesture recognition using a real-time tracking method and hidden markov models. Image and vision computing 21 (8), pp. 745–758. Note: doi: https://doi.org/10.1016/S0262-8856(03)00070-2 Cited by: §1.
  • M. J. Cheok, Z. Omar, and M. H. Jaward (2019) A review of hand gesture and sign language recognition techniques. International Journal of Machine Learning and Cybernetics 10 (1), pp. 131–153. Note: doi: https://doi.org/10.1007/s13042-017-0705-5 Cited by: §1.
  • G. Fang, W. Gao, and D. Zhao (2007) Large-vocabulary continuous sign language recognition based on transition-movement models. IEEE transactions on systems, man, and cybernetics-part a: systems and humans 37 (1), pp. 1–9. Note: doi: 10.1109/TSMCA.2006.886347 Cited by: §1.
  • W. Gao, J. Ma, J. Wu, and C. Wang (2000) Sign language recognition based on hmm/ann/dp. International journal of pattern recognition and artificial intelligence 14 (05), pp. 587–602. Note: doi: 10.1142/S0218001400000386 Cited by: §1.
  • E. Holden, G. Lee, and R. Owens (2005) Australian sign language recognition. Machine Vision and Applications 16 (5), pp. 312. Note: doi: https://doi.org/10.1007/s00138-005-0003-1 Cited by: §1, §1.
  • S. Hore, S. Chatterjee, V. Santhi, N. Dey, A. S. Ashour, V. E. Balas, and F. Shi (2017) Indian sign language recognition using optimized neural networks. In Information Technology and Intelligent Transportation Systems, pp. 553–563. Note: doi: https://doi.org/10.1007/978-3-319-38771-0_54 Cited by: §1.
  • S. Huang, C. Mao, J. Tao, and Z. Ye (2018) A novel chinese sign language recognition method based on keyframe-centered clips. IEEE Signal Processing Letters 25 (3), pp. 442–446. Note: doi: 10.1109/LSP.2018.2797228 Cited by: §1.
  • A. Karami, B. Zanj, and A. K. Sarkaleh (2011) Persian sign language (psl) recognition using wavelet transform and neural networks. Expert Systems with Applications 38 (3), pp. 2661–2667. Note: doi: https://doi.org/10.1016/j.eswa.2010.08.056 Cited by: §1.1.
  • B. Khelil and H. Amiri (2016) Hand gesture recognition using leap motion controller for recognition of arabic sign language. In 3rd International conference ACECS’16, Cited by: §1.
  • O. Koller, C. Camgoz, H. Ney, and R. Bowden (2019) Weakly supervised learning with multi-stream cnn-lstm-hmms to discover sequential parallelism in sign language videos. IEEE transactions on pattern analysis and machine intelligence. Note: doi: 10.1109/TPAMI.2019.2911077 Cited by: §1.
  • O. Koller, J. Forster, and H. Ney (2015a) Continuous sign language recognition: towards large vocabulary statistical recognition systems handling multiple signers. Computer Vision and Image Understanding 141, pp. 108–125. Note: doi: https://doi.org/10.1016/j.cviu.2015.09.013 Cited by: §1.
  • O. Koller, H. Ney, and R. Bowden (2015b) Deep learning of mouth shapes for sign language. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 85–91. Note: doi: 10.1109/ICCVW.2015.69 Cited by: §1.
  • O. Koller, H. Ney, and R. Bowden (2016a) Deep hand: how to train a cnn on 1 million hand images when your data is continuous and weakly labelled. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3793–3802. Note: doi: 10.1109/CVPR.2016.412 Cited by: §1.
  • O. Koller, O. Zargaran, H. Ney, and R. Bowden (2016b) Deep sign: hybrid cnn-hmm for continuous sign language recognition. In Proceedings of the British Machine Vision Conference 2016, Note: doi: http://epubs.surrey.ac.uk/812319/ Cited by: §1.
  • O. Koller, S. Zargaran, H. Ney, and R. Bowden (2018) Deep sign: enabling robust statistical continuous sign language recognition via hybrid cnn-hmms. International Journal of Computer Vision 126 (12), pp. 1311–1325. Note: doi: https://doi.org/10.1007/s11263-018-1121-3 Cited by: §1.
  • P. Kumar, H. Gauba, P. P. Roy, and D. P. Dogra (2017) Coupled hmm-based multi-sensor data fusion for sign language recognition. Pattern Recognition Letters 86, pp. 1–8. Note: doi: https://doi.org/10.1016/j.patrec.2016.12.004 Cited by: §1.
  • S. Lahoti, S. Kayal, S. Kumbhare, I. Suradkar, and V. Pawar (2018) Android based american sign language recognition system with skin segmentation and svm. In 2018 9th International Conference on Computing, Communication and Networking Technologies (ICCCNT), pp. 1–6. Note: doi: 10.1109/ICCCNT.2018.8493838 Cited by: §1.
  • K. M. Lim, A. W. Tan, and S. C. Tan (2016) A feature covariance matrix with serial particle filter for isolated sign language recognition. Expert Systems with Applications 54, pp. 208–218. Note: doi: https://doi.org/10.1016/j.eswa.2016.01.047 Cited by: §1.
  • F. López-Colino and J. Colás (2012) Spanish sign language synthesis system. Journal of Visual Languages & Computing 23 (3), pp. 121–136. Note: doi: https://doi.org/10.1016/j.jvlc.2012.01.003 Cited by: §1.
  • M. Maraqa and R. Abu-Zaiter (2008) Recognition of arabic sign language (arsl) using recurrent neural networks. In Applications of Digital Information and Web Technologies, 2008. ICADIWT 2008. First International Conference on the, pp. 478–481. Note: doi: 10.1109/ICADIWT.2008.4664396 Cited by: §1.
  • M. Moghaddam, M. Nahvi, and R. H. Pak (2011) Static persian sign language recognition using kernel-based feature extraction. In Machine Vision and Image Processing (MVIP), 2011 7th Iranian, pp. 1–5. Note: doi: 10.1109/IranianMVIP.2011.61215391 Cited by: §1.1.
  • M. Mohandes, M. Deriche, U. Johar, and S. Ilyas (2012) A signer-independent arabic sign language recognition system using face detection, geometric features, and a hidden markov model. Computers & Electrical Engineering 38 (2), pp. 422–433. Note: doi: https://doi.org/10.1016/j.compeleceng.2011.10.013 Cited by: §1.
  • M. Mohandes and M. Deriche (2005) Image based arabic sign language recognition. In Signal Processing and Its Applications, 2005. Proceedings of the Eighth International Symposium on, Vol. 1, pp. 86–89. Note: doi: 10.1109/ISSPA.2005.1580202 Cited by: §1.
  • B. S. Parton (2005) Sign language recognition and translation: a multidisciplined approach from the field of artificial intelligence. Journal of deaf studies and deaf education 11 (1), pp. 94–101. Note: doi: https://doi.org/10.1093/deafed/enj003 Cited by: §1.
  • L. R. Rabiner and B. Juang (1986) An introduction to hidden markov models. ieee assp magazine 3 (1), pp. 4–16. Cited by: §4, §4.
  • L. R. Rabiner (1989) A tutorial on hidden markov models and selected applications in speech recognition. Proceedings of the IEEE 77 (2), pp. 257–286. Cited by: §4.
  • T. Shanableh, K. Assaleh, and M. Al-Rousan (2007) Spatio-temporal feature-extraction techniques for isolated gesture recognition in arabic sign language. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 37 (3), pp. 641–650. Note: doi: 10.1109/TSMCB.2006.889630 Cited by: §1, §1.
  • T. E. Starner (1995) Visual recognition of american sign language using hidden markov models.. Technical report Massachusetts Inst Of Tech Cambridge Dept Of Brain And Cognitive Sciences. Cited by: §1, §4.
  • N. Tubaiz, T. Shanableh, and K. Assaleh (2015) Glove-based continuous arabic sign language recognition in user-dependent mode. IEEE Transactions on Human-Machine Systems 45 (4), pp. 526–533. Note: doi: 10.1109/THMS.2015.2406692 Cited by: §1, §1, §1.
  • C. Vogler and D. Metaxas (1999) Parallel hidden markov models for american sign language recognition. In Computer Vision, 1999. The Proceedings of the Seventh IEEE International Conference on, Vol. 1, pp. 116–122. Note: doi: 10.1109/ICCV.1999.791206 Cited by: §1.
  • U. von Agris, M. Knorr, and K. Kraiss (2008) The significance of facial features for automatic sign language recognition. In 2008 8th IEEE International Conference on Automatic Face Gesture Recognition, Vol. , pp. 1–6. External Links: Document, ISSN Cited by: §1.
  • J. Wu, Z. Tian, L. Sun, L. Estevez, and R. Jafari (2015) Real-time american sign language recognition using wrist-worn motion and surface emg sensors. In Wearable and Implantable Body Sensor Networks (BSN), 2015 IEEE 12th International Conference on, pp. 1–6. Note: doi: 10.1109/BSN.2015.7299393 Cited by: §1.
  • A. A. Zare and S. H. Zahiri (2018) Recognition of a real-time signer-independent static farsi sign language based on fourier coefficients amplitude. International Journal of Machine Learning and Cybernetics 9 (5), pp. 727–741. Note: doi: https://doi.org/10.1007/s13042-016-0602-3 Cited by: §1.1.