3D dynamic hand gestures recognition using the Leap Motion sensor and convolutional neural networks

03/03/2020 ∙ by Katia Lupinetti, et al. ∙ Consiglio Nazionale delle Ricerche

Defining methods for the automatic understanding of gestures is of paramount importance in many application contexts and in Virtual Reality applications for creating more natural and easy-to-use human-computer interaction methods. In this paper, we present a method for the recognition of a set of non-static gestures acquired through the Leap Motion sensor. The acquired gesture information is converted in color images, where the variation of hand joint positions during the gesture are projected on a plane and temporal information is represented with color intensity of the projected points. The classification of the gestures is performed using a deep Convolutional Neural Network (CNN). A modified version of the popular ResNet-50 architecture is adopted, obtained by removing the last fully connected layer and adding a new layer with as many neurons as the considered gesture classes. The method has been successfully applied to the existing reference dataset and preliminary tests have already been performed for the real-time recognition of dynamic gestures performed by users.




1 Introduction

Gesture recognition is an active research area with numerous and varied applications, including, for instance, robotics, training systems, virtual prototyping, video surveillance, physical rehabilitation, and computer games. This wide interest stems from the fact that hands and fingers are used to communicate and to interact with the physical world [2]; by analyzing human gestures, it is therefore possible to improve the understanding of non-verbal human interaction. This understanding lays the foundation for more natural human-computer interaction, which is fundamental for creating immersive virtual environments with a high sense of presence. Despite this popularity and interest, until a few years ago finger movements were difficult to acquire and characterize, especially without sophisticated tracking devices, which usually turn out to be quite unnatural. Indeed, many methods try to solve hand gesture recognition using wearable devices [11, 9, 19]. With recent technological improvements, finger tracks can be obtained digitally relying only on RGB cameras, possibly enhanced with depth information. In this manner, human hands can be abstracted using two main representations: 3D model-based and appearance-based [32]. Generally, 3D model-based representations are deduced by exploiting depth information, but some methods try to reconstruct 3D hand representations using only RGB data.

Hand gestures can be classified as static, i.e. no change occurs over time, or dynamic, i.e. several hand poses contribute to the final semantics of the gesture within an arbitrary time interval. So far, several works have addressed static gestures, focusing on pre-defined gesture vocabularies, such as the recognition of the sign languages of different countries [22, 33, 18, 23, 27, 24]. Even if dynamic gestures are not universal but vary across countries and cultures, they are more natural and intuitive than static ones.

Since tracking without gloves or controllers is more natural and efficient for users, in this paper we aim at defining a method for dynamic gesture recognition based on a 3D hand representation reconstructed from Leap Motion sensor tracking.

Our method relies on deep learning techniques applied to images obtained by plotting the positions of the hands acquired over time on a specific 2D plane, condensing the temporal information solely as traces left by the fingertips, which fade towards a transparency (alpha) value of zero as time passes. Compared to also drawing the traces of the other edges that make up the hand, we have found that this approach maximizes the information that can be condensed into a single image while keeping it understandable for humans.
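As an illustration of this encoding (a minimal NumPy sketch, not the authors' actual visualizer, which renders full-resolution frames with VisPy), fingertip positions over T frames can be rasterized onto an RGBA canvas whose alpha channel grows linearly with time, so that older trace points fade out:

```python
import numpy as np

def encode_gesture(fingertips, height=108, width=192):
    """Rasterize fingertip trajectories into a single RGBA image.

    fingertips: array of shape (T, J, 2) with 2D positions already
    normalized to [0, 1] (T frames, J tracked fingertips).
    Earlier frames get lower alpha, so traces fade as time passes.
    """
    canvas = np.zeros((height, width, 4), dtype=np.float32)
    T = fingertips.shape[0]
    for t in range(T):
        alpha = (t + 1) / T  # recent positions are more opaque
        for x, y in fingertips[t]:
            px = min(int(x * (width - 1)), width - 1)
            py = min(int(y * (height - 1)), height - 1)
            canvas[py, px] = [1.0, 1.0, 1.0, alpha]  # white trace point
    return canvas

# toy gesture: one fingertip sweeping diagonally over 10 frames
traj = np.linspace(0.1, 0.9, 10).reshape(10, 1, 1).repeat(2, axis=2)
img = encode_gesture(traj)
```

The sweep leaves a diagonal trail whose most recent point is fully opaque and whose oldest point is nearly transparent, which is the property the CNN later exploits to read off the direction of motion.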

For the training, a publicly available dataset comprising 1134 gestures [7, 8] has been used. The first stage of the evaluation of the deep neural network was carried out on a subset of 30% of the available gestures, maintaining the split presented in the original paper [7] and reaching an overall accuracy of 91.83%. We also propose our own dataset of about 2000 new dynamic gesture samples, created following considerations on the balance, number of samples, and noise of the original dataset. We will show that, using our approach and our dataset, it is possible to exceed 98% accuracy in the recognition of dynamic hand gestures acquired through the Leap Motion sensor. Finally, we will briefly describe the real-time setup and how it has already been used successfully to acquire the new dataset proposed in this paper and to perform some preliminary user tests.

The rest of the paper is organized as follows. Section 2 reviews the most pertinent related works. Sections 3 and 4 detail the proposed method and present the results of the experimentation, respectively. Finally, Section 5 ends the paper, providing conclusions and future steps.

2 Related works

The ability to recognize hand gestures, or more generally to understand the interaction between humans and the surrounding environment, has aroused interest in numerous fields and has consequently been tackled in several studies. Several commercial sensors for capturing full hand and finger action are available on the market; generally, they can be divided into wearable devices (such as data gloves) and external devices (such as video cameras). Wearable sensors can address different purposes: for instance, the VR Glove by Manus (https://manus-vr.com/), the CyberGlove Systems gloves (http://www.cyberglovesystems.com/), and the Noitom Hi5 VR glove (https://hi5vrglove.com/) are designed mainly for VR training, while the Myo Gesture Control Armband is used especially in medical applications [4]. This kind of technology is very accurate and has a fast reaction speed. However, using gloves requires a calibration phase every time a different user starts, and it does not always allow natural hand gestures and intuitive interaction, because the device itself can constrain finger motion [1, 14, 25, 37]. Therefore, research on hand motion tracking has begun investigating vision-based techniques relying on external devices, with the purpose of allowing natural and direct interaction [2].

In the following sections, we review methods capturing hands with RGB cameras (either monocular or stereo) and RGB-D cameras, interacting through markerless visual observations.

2.1 Methods based on RGB sensors

The use of simple RGB cameras for hand tracking, and consequently for gesture recognition, is a challenging problem in computer vision. So far, works using markerless RGB images mainly aim at the simple tracking of motion, such as body movement [20, 29, 6] or the hand skeleton [30, 34, 38], while motion recognition and interpretation still has significant room for improvement. The method proposed in [30], for example, presents an approach for real-time hand tracking from monocular RGB images that allows the reconstruction of the 3D hand skeleton even when occlusions occur. In principle, this methodology could be used as input for future gesture recognition. It outperforms RGB methods, but not RGB-D ones, and presents some difficulties when the background has a similar appearance to the hand and when multiple hands are close together in the input image.

Focusing on hand gesture recognition, Barros et al. [5] propose a deep neural model to recognize dynamic gestures with minimal image pre-processing and real-time recognition. Despite the encouraging results obtained by the authors, the recognized gestures are significantly different from each other, so the classes are well separated, which usually greatly simplifies recognition.

Recently, [36] proposed a system for 3D dynamic hand gesture recognition using a deep learning architecture that applies a Convolutional Neural Network (CNN) to the Discrete Fourier Transform of artificial images. The main limitation of this approach is the acquisition setup, i.e. it must be used in an environment where the cameras are static or where the relative movement between the background and the person is minimal.

2.2 Methods based on Depth sensors

To avoid many issues related to the use of simple RGB images, depth cameras are widely used for hand tracking and gesture recognition. Generally, the most commonly used depth cameras are the Microsoft Kinect (https://developer.microsoft.com/en-us/windows/kinect/) and the Leap Motion (LM) sensor (https://developer.leapmotion.com).

The Kinect sensor includes a QVGA (320x240) depth camera and a VGA (640x480) video camera, both of which produce image streams at 30 frames per second (fps). The sensor is limited by near and far thresholds for depth estimation and is able to track the full body [40]. The LM is a compact sensor that exploits two CMOS cameras capturing images at a frame rate from 50 up to 200 fps [3]. It is very suitable for hand gesture recognition because it explicitly targets hand and finger tracking. Another type of sensor sometimes adopted is the Time-of-Flight camera, which measures the distance between the camera and the subject for each point of the image using an artificial light signal provided by a laser or an LED. This type of sensor has a low resolution (176x144) and is generally paired with a higher-resolution RGB camera [41].

Using one of the above-mentioned sensors, several works address the recognition of static hand gestures. Mapari and Kharat [27] proposed a method to recognize American Sign Language (ASL). Using the data extracted from the LM, they compute 48 features (18 positional values, 15 distance values and 15 angle values) for 4672 collected signs (146 users performing 32 signs) to feed an artificial neural network using a Multilayer Perceptron (MLP). Filho et al. use the normalized positions of the five fingertips and the four angles between adjacent fingers as features for different classifiers (K-Nearest Neighbors, Support Vector Machines and Decision Trees). They compare the effectiveness of the proposed classifiers over a dataset of 1200 samples (6 users performing 10 gestures), finding that Decision Trees perform best. Still among the methods recognizing static postures, Kumar et al. apply an Independent Bayesian Classification Combination (IBCC) approach. Their idea is to combine hand features extracted by the LM (3D fingertip positions and 3D palm center) with face features acquired by a Kinect sensor (71 facial 3D points) in order to improve the meaning associated with a certain movement. One challenge in performing this combination lies in the fusion of the features; indeed, pre-processing techniques are necessary to synchronize the frames, since the two devices are not directly comparable.

A more challenging task, which increases engagement through more natural and intuitive interaction, is the recognition of dynamic gestures. In this case, it is crucial to preserve the spatial and temporal information associated with the user's movement. Ameur et al. [3] present an approach for dynamic hand gesture recognition, extracting spatial features from the 3D data provided by a Leap Motion sensor and feeding a Support Vector Machine (SVM) classifier based on the one-against-one approach. With the aim of also exploiting temporal information, Gatto et al. [13] propose a representation for hand gestures that exploits the Hankel matrix to combine gesture images, generating a subspace that preserves the time information. Gestures are then recognized under the assumption that if the distance between two subspaces is small enough, the subspaces are similar to each other. Mathe et al. [28] create artificial images that encode the movement in 3D space of skeletal joints tracked by a Kinect sensor. A deep learning architecture using a CNN is then applied to the Discrete Fourier Transform of the artificial images. With this work, the authors demonstrate that it is possible to recognize hand gestures without the need for a feature extraction phase. Boulahia et al. [7] extract features from the hand trajectories, describing both local information, such as the starting and ending 3D coordinates of the 3D pattern resulting from trajectory assembly, and global information, such as a convex-hull-based feature. Temporal information is considered by extracting features on overlapping sub-sequences resulting from a temporal split of the global gesture sequence. In this way, the authors collect a vector of 356 elements used to feed an SVM classifier.

In general, the use of complex and sophisticated techniques to extract ad-hoc features and manage temporal information requires more human intervention and does not scale well when the dictionary of gestures to be classified has to be expanded. Furthermore, the extraction of hundreds of features at different time scales may even take more CPU time than a single forward pass of a standard CNN already optimized for modern GPU architectures, thus not guaranteeing real-time classification performance.

3 Overview of the proposed approach

Based on the assumption that natural human-computer interaction should be able to recognize not only predefined postures but also dynamic gestures, here we propose a method for the automatic recognition of gestures using images obtained from LM data. Our method uses state-of-the-art deep learning techniques, both in terms of the CNN architectures and the training and gradient descent methods employed.

In the following sub-sections, first we describe the problem of dynamic gesture recognition from images (Section 3.1), then we illustrate the pipeline to create the required images and how we feed them to the neural network model (Section 3.2). Finally, we introduce the LMHGD dataset adopted and the rationale that led us to use it (Section 3.3).

3.1 Problem formulation

Let G be a dynamic gesture and C = {c_1, ..., c_N} a set of gesture classes, where N identifies the number of classified gestures. The variation of G over time can be defined as:

G = {F_t | t ∈ [0, T]}

where t defines a certain instant in a temporal window of size T and F_t represents the frame of G at time t. Note that a gesture can be performed over a variable temporal window (depending on the gesture itself or on the user's aptitude). The dynamic hand gesture classification problem can be defined as finding the class c_k ∈ C to which G most likely belongs, i.e. finding the pair (G, c_k) whose probability distribution P(c_k | G) has the maximum value over C.

Let φ be a mapping that transforms the spatial and temporal information associated with a gesture G into a single image I, defined as:

I = φ(G)

With this representation, there exists a single image I for each gesture G regardless of the temporal window size T. This new representation encodes the different instants of each gesture in a more compact manner and represents the new data to be recognized and classified. The classification task can then be redefined as finding whether an image I belongs to a certain gesture class c_k, i.e. finding the pair (I, c_k) whose probability distribution P(c_k | I) has the maximum value over C.
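In code, this redefined classification task is simply an argmax over the class probabilities that the network produces for the image I. A toy sketch with made-up logits for the 14 gesture classes (the values are purely illustrative, not outputs of the actual model):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax: turns raw network scores into P(c | I)."""
    e = np.exp(z - z.max())
    return e / e.sum()

# hypothetical raw scores produced by the CNN for one gesture image I
logits = np.array([0.2, 3.1, -1.0, 0.5, 0.0, 1.2, -0.3,
                   0.1, 0.4, -2.0, 0.6, 0.9, -0.5, 0.2])

probs = softmax(logits)                  # probability distribution over C
predicted_class = int(np.argmax(probs))  # the pair (I, c_k) with maximum P(c_k | I)
```

Here the largest logit (index 1) wins, which is exactly the "maximum probability value" criterion of the formulation above.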

3.2 Hand gesture recognition pipeline

We propose a view-based approach able to describe the movement performed over time, whose pipeline is illustrated in Figure 1. As input, a user performs different gestures recorded as depth images by a Leap Motion sensor (blue box). A 3D gesture visualization containing temporal information is created using the joint positions of the 3D skeleton of the hands (magenta box) as obtained from the Leap Motion sensor. From the 3D environment, we create a 2D image by projecting the obtained 3D points on a view plane (green box). The created image is fed to the pre-trained convolutional neural network (yellow box), to whose output neurons (14, as many as the gesture classes to be classified) a softmax function is applied, generating a probability distribution over the predicted classes (purple box). Finally, the gesture is labeled with the class that obtains the maximum probability value (orange box). The two main steps are described in the following.

Figure 1: Pipeline of the proposed hand gesture recognition

3.2.1 The 3D visualizer

We used the VisPy library (http://vispy.org) to visualize the 3D data of the hands in a programmable 3D environment. The visualizer is able to acquire the skeleton data both from the files belonging to the LMHGD dataset (through the Pandas library, https://pandas.pydata.org) and in real time using the Leap Motion SDK wrapped through the popular framework ROS (Robot Operating System) [31], which provides a convenient publish/subscribe environment as well as numerous other utility packages.

A 3D hand skeleton is created by exploiting the tracking data for each finger of the hand, the palm center, the wrist, and the elbow positions. If at a certain time the whole or part of a finger is not visible, the Leap Motion APIs allow estimating the finger positions relying on previous observations and on an anatomical model of the hand.

Once the 3D joint positions are acquired, the spatial and temporal information of each gesture movement is encoded by creating a 3D joint gesture image, where 3D points and edges are depicted in the virtual space for each finger. Here, the color intensity of the joints representing the fingertips changes at different time instants; specifically, recent positions (t close to T) have more intense colors, while earlier positions (t close to 0) have more transparent colors. Finally, we create a 2D image by projecting the 3D points obtained at the last instant of the gesture on a view plane. In particular, we project the 3D fingertips of the hands on the plane corresponding to the top view, which represents hands in a "natural" way, as a human usually sees them. Figure 2 shows examples of the 2D hand gesture patterns obtained for six different gestures. Although this view does not contain all the information available in the 3D representation of the hands, we have found that it is sufficient for a CNN to classify the set of dynamic gestures under study very accurately.
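Assuming the Leap Motion coordinate convention in which the y axis points up from the sensor, the top-view projection amounts to keeping the (x, z) coordinates of each joint and discarding the height y. A minimal sketch (the coordinates below are made up for illustration):

```python
import numpy as np

def project_top_view(points_3d):
    """Project 3D joint positions onto the horizontal plane (top view).

    points_3d: array of shape (N, 3) with columns (x, y, z); in the Leap
    Motion frame, y is the height above the sensor, so the top view keeps
    (x, z) and discards y.
    """
    return points_3d[:, [0, 2]]

# two hypothetical fingertip positions in millimeters
joints = np.array([[10.0, 150.0, -20.0],
                   [12.0, 148.0, -18.0]])
top = project_top_view(joints)
```

A side view (used later in Section 4.3) would analogously keep (y, z) or (x, y), depending on which axis is dropped.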

(a) Catching
(b) Rotating
(c) Scroll
(d) Shaking
(e) Draw line
(f) Zoom
Figure 2: Examples of 2D hand gesture patterns

3.2.2 Classification method

The proposed method leverages a pre-trained ResNet-50 [16], a state-of-the-art 2D CNN that has been modified and fine-tuned to classify the images produced by our 3D visualizer. We decided to use a ResNet-50 because this kind of architecture is pre-trained on ImageNet [35] and it is one of the fastest at making inference on new images, having one of the lowest FLOPS counts among all the architectures available today [15]. Unfortunately, given the modest size of the original LMHGD dataset, it would not have been possible to train from scratch a 3D CNN model capable of classifying all the available information coming from the LM sensor.

3.3 The LMHGD gestures dataset

Most of the reviewed gesture datasets are composed of gestures executed with a single hand, performed and recorded perfectly, with no noise or missing parts, and always segmented with the same duration. These hypotheses ensure good class separation, improving classification results, but they are far from reality. For instance, it is not unusual to record hand trembling during gestures, which introduces a significant amount of noise.

To improve the evaluation of different methods over a more realistic dataset, Boulahia et al. [7] defined a dataset of unsegmented sequences of hand gestures performed both with one and two hands. At the end of each gesture, the participants involved were asked to perform a "rest" gesture, i.e. keeping the hands in the last movement position for a few seconds, thus providing a kind of null gesture that can be used to recognize the end of a certain movement.

We chose their dataset as a starting point to test our method because it was the most realistic dataset created using the Leap Motion sensor that we were able to identify. It is our opinion that the original LMHGD paper provides three major contributions: i) the evaluation of the method proposed by the authors against the DHG dataset [10], ii) the evaluation of the method proposed by the authors against the properly segmented version of the LMHGD dataset, iii) the evaluation of the method proposed by the authors against the non-segmented version (i.e. without providing their classifier with the truth value on where a gesture ends and the next one starts) of the LMHGD dataset. For this paper, we decided to apply our method in order to replicate and improve only point ii), namely, against the properly segmented LMHGD dataset.

This dataset contains 608 "active" gesture samples plus 526 "inactive" ones (i.e. classified as the Rest gesture), for a total of 1134 gestures. These gesture instances fall into 14 classes: Point to, Catch, Shake down, Shake, Scroll, Draw Line, Slice, Rotate, Draw C, Shake with two hands, Catch with two hands, Point to with two hands, Zoom and Rest, of which the last 5 are performed with two hands. Unfortunately, the gesture classes are unevenly sampled: most of the classes have roughly 50 samples, except Point to with hand raised, which has only 24 samples, and Rest, which, as previously said, has 526 samples.

4 Experiments

In this section, we present the experimental results obtained by processing the LMHGD dataset, represented in the form of images produced by our 3D visualizer. The main results concern the training of three distinct models through (i) images depicting a single view of the hands from above (see sub-section 4.2); (ii) images obtained by stitching two views together (from the top and from the right) to provide further information to the classifier (see sub-section 4.3); and (iii) a new dataset that we publicly release (https://imaticloud.ge.imati.cnr.it/index.php/s/YNRymAvZkndzpU1) containing about 2000 new gestures performed more homogeneously, with less noise and with fewer mislabeling occurrences than in the LMHGD dataset (see sub-section 4.4). Indeed, we deem this dataset richer and more suitable for the initial stages of training of CNN models, when few samples are available and it is important that the signal-to-noise ratio of the information used for training be high.

4.1 Training of the models

The training took place using Jupyter Notebook and the popular deep learning library Fast.ai [17], based on PyTorch. The hardware used was a GPU node of the new high-performance EOS cluster located at the University of Pavia. This node has a dual Intel Xeon Gold 6130 processor (16 cores, 32 threads each) with 128 GB RAM and 2 Nvidia V100 GPUs with 32 GB RAM.

The training was performed on 1920x1080 resolution images rendered by our 3D visualizer, properly classified in directories according to the original LMHGD dataset and divided into training and validation sets, again following the indications of the original paper [7].

As previously mentioned, the model chosen for training is a pre-trained version of the ResNet-50 architecture. Fast.ai's convenient APIs allow downloading pre-trained architectures and weights in a very simple and automatic way. Fast.ai also automatically modifies the architecture so that the number of neurons in the output layer corresponds to the number of classes of the current problem, initializing the new layer with random weights.

The training was performed using the progressive resizing technique, i.e. performing several rounds of training using the images of the dataset at increasing resolutions, to speed up the early training phases, obtain immediate feedback on the potential of the approach, and make the model robust to images at different resolutions (i.e. the model generalizes better on the problem). The relevant section in [12] explains the concept of progressive resizing very well. For our particular problem, we chose resolutions of 192, 384, 576, 960, 1536 and 1920 px (i.e. 1, 2, 3, 5, 8 and 10 tenths of the original 1920x1080 px resolution).
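The resulting resolution schedule can be expressed as tenths of the full 1920 px width; a small sketch of the outer loop (train_round is a hypothetical placeholder for one full training round at a given size, not Fast.ai's API):

```python
# Progressive resizing: train at increasing fractions of the full resolution.
FULL_WIDTH = 1920
TENTHS = (1, 2, 3, 5, 8, 10)  # 1/10, 2/10, ... of the original width

schedule = [FULL_WIDTH * t // 10 for t in TENTHS]

def train_round(size):
    """Hypothetical placeholder for one frozen+unfrozen training round
    on images resized to `size` px (see the two phases described below)."""
    return f"trained at {size}px"

log = [train_round(size) for size in schedule]
```

Each entry of the schedule corresponds to one complete round, and the weights from one round are carried over to initialize the next.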

Each training round at a given image resolution is divided into two phases (a = frozen, b = unfrozen), each consisting of 10 training epochs. In phase a, the weights of all the layers of the neural network except those of the new output layer are frozen and therefore are not trained (they are used only in the forward pass). In phase b, performed with a lower learning rate (LR), typically one or two orders of magnitude lower (Fast.ai's Learner class has a convenient lr_find() method that helps find the best learning rate with which to train a model in its current state), all layers, even the convolutional ones, are trained to improve the network globally.

As the neural network model optimizer, we chose Ranger, as it combines two of the best state-of-the-art optimizers, RAdam [26] (Rectified Adam) and Lookahead [42], in a single optimizer. Ranger corrects some inefficiencies of Adam [21], such as the need for an initial warm-up phase, and adds new features regarding the exploration of the loss landscape: it keeps two sets of weights, one updated faster and one updated more slowly, and interpolates between them to improve the convergence speed of the gradient descent algorithm.
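The Lookahead half of this scheme can be illustrated in a few lines of NumPy: after k fast (e.g. RAdam) steps, the slow weights move a fraction alpha toward the fast weights, and the fast weights restart from the interpolated point (a sketch of the update rule only, not the actual Ranger implementation):

```python
import numpy as np

def lookahead_sync(slow, fast, alpha=0.5):
    """One Lookahead synchronization step: the slow weights are pulled a
    fraction alpha toward the fast weights; the fast weights then restart
    from the interpolated point."""
    new_slow = slow + alpha * (fast - slow)
    return new_slow, new_slow.copy()

slow = np.zeros(3)                # slow ("lookahead") weights
fast = np.array([1.0, 2.0, 3.0])  # fast weights after k inner optimizer steps
slow, fast = lookahead_sync(slow, fast)
```

With alpha = 0.5 the slow weights end up halfway between their old value and the fast weights, which damps the oscillations of the inner optimizer.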

Once all the training rounds were completed, the model with the best accuracy was selected for the validation phase. When checkpoints from different training rounds achieved the same accuracy, the model generated by the earliest round (i.e. trained with the lowest image resolution) was selected and reloaded for validation. This gives a substantial advantage in the inference phase, since smaller images are classified faster.

All the code and Jupyter notebooks described in this section are available at https://github.com/aviogit/dynamic-hand-gesture-classification.

4.2 Evaluation on the LMHGD gesture dataset - single view

To allow a further comparison with the method of Boulahia et al. [7], we split the dataset according to their experiments, i.e. using sequences 1 to 35 of the dataset to train the model (779 samples, about 69% of the dataset) and sequences 36 to 50 to test it (355 samples, about 31% of the dataset).

With this partition, our approach reaches an accuracy of 91.83%, outperforming the 84.78% obtained by Boulahia et al. From the confusion matrix illustrated in Figure 3, we can notice that most of the classes are well recognized, with an accuracy over 93%. Misclassifications occur when the paired actions are quite similar. For example, the gestures Point to and Rotate, which are recognized with an accuracy of 80% and 73% respectively, are confused with the noisiest class, Rest; Point to with two hands, recognized with an accuracy of 73%, is confused with the close class Point to; while Shake with two hands, recognized with an accuracy of 80%, is reasonably confused with the two close classes Shake and Shake down.

Figure 3: Confusion matrix obtained using a single view.

For a comprehensive evaluation, in Figure 4 we show the top losses for our model. The top-losses plot shows the incorrectly classified images on which our classifier errs with the highest loss. In addition to the most misclassified classes, also deducible from the confusion matrix analysis (i.e. Point to, Rotate and Shake with two hands), from the analysis of the top-losses plot we can pinpoint a few mislabeled samples. For example, in Figure 4 it can be seen that the third sample (prediction: Rest, label: Rotate) does not actually represent a Rotate at all. The same holds for the Draw Line/Zoom, Scroll/Rest and Point to/Rest samples; with so few samples in the dataset, these incorrectly labeled samples lower the final accuracy of the model and prevent it from converging towards the global optimum.

Figure 4: Top losses plot obtained using a single view.

4.3 Evaluation on the LMHGD gesture dataset - double view

To reduce these misclassifications, we trained a new model by increasing the amount of information available in each individual image: in this case, in addition to the top view, we stitch the final image together with a view from the right. This approach allows the classifier (exactly like a human being) to disambiguate between gestures whose strong informative content lies in the spatial dimension that remains implicit in the top view (such as the Scroll gesture, for example). Some example images are shown in Figure 5.

(a) Catching
(b) Rotating
(c) Scroll
(d) Shaking
(e) Draw line
(f) Zoom
Figure 5: Examples of 2D hand gesture patterns obtained using a double view.

Using this pattern representation, the accuracy of our method reaches 92.11%. This model performs better than the one trained only with top-view images, but the improvement is not as significant as we expected. The main reason is that the LMHGD dataset is challenging in terms of noise, mislabeled samples, and the varied semantic interpretations of the gestures, which were collected from different users. Figure 6 shows different examples of the Point to gesture performed by several persons and used to feed the neural network. As can be seen, it is objectively difficult, even for a human being, to distinguish shared characteristics among all the images that univocally indicate that they all belong to the Point to class.

Figure 6: Examples of point patterns present in the training set.

4.4 Evaluation on our new dataset - single view

With the aim of reducing this type of occurrence, we decided to create a new, more balanced dataset (the images obtained from our new dataset and those for the LMHGD dataset are available at https://github.com/aviogit/dynamic-hand-gesture-classification-datasets), with more samples per class and with gestures performed in a more homogeneous and less noisy way. The dataset has around 2000 gesture images sampled every 5 seconds, and each class has around 100 samples. The Rest class now contains only images of hands that are mostly still. Two further classes have been added: the Blank class, which contains only traces of gestures that are distant in time (or no gesture at all), and the Noise class, which represents all the gestures not belonging to any other class. The dataset is provided both in the form of images and of ROS bags. The latter can be replayed (much like a "digital tape" of the acquisition) through ROS' rosbag play command; this re-publishes all the messages captured during the acquisition (skeleton + depth images), allowing the pipeline to be rerun, possibly with different processing parameters (e.g. displaying the gestures in a different way or changing the sampling window to improve the real-time acquisition).

Using this new dataset, we then trained a new model using a 70%/30% random split (1348 images for the training set, 577 images for the validation set). The overall accuracy of the model is 98.78%. We report in Figure 7 the confusion matrix obtained from this model.

Figure 7: Confusion matrix obtained using the new dataset.

4.5 Real-time application

The real-time acquisition, visualization and classification pipeline has already been used extensively to acquire the new dataset proposed in this paper and for qualitative user tests, again with a sampling window set to 5 seconds. On a PC with an Nvidia GTX 770 GPU, the ResNet-50 model takes a few hundred milliseconds to perform inference on an image produced by the 3D visualizer, thus making the real-time approach usable on practically any machine. However, these tests do not yet have sufficient statistical significance and must therefore be extended to more participants before they can be published. This part will be the subject of future work.

5 Conclusions

In this paper, we have proposed a visual approach for the recognition of dynamic 3D hand gestures through the use of convolutional neural network models. The proposed pipeline acquires data (from file or in real time) from a Leap Motion sensor and builds a representation in a 3D virtual space from which one or more 2D views are extracted. These images, which condense the temporal information in the form of fingertip traces with varying color intensity, are then fed to a CNN model, first in the training phase and then in real time for the inference phase. The two models trained on the LMDHG dataset achieved accuracies above 91% and 92%, respectively, while the model trained on the new dataset proposed in this paper reaches an accuracy above 98%.
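As an illustration of the temporal-encoding idea (a sketch of the principle, not the authors' actual rendering code), the following snippet projects a 3D fingertip trajectory onto a plane and maps sample time to pixel intensity, so that the most recent positions of the gesture appear brightest:

```python
import numpy as np

def trajectory_to_image(points_3d, size=64):
    """Project 3D fingertip positions onto the XY plane and draw them
    into a grayscale image whose intensity grows with sample time."""
    img = np.zeros((size, size), dtype=np.float32)
    pts = np.asarray(points_3d, dtype=np.float32)[:, :2]  # drop depth (Z)
    # normalize XY coordinates into pixel indices
    mins, maxs = pts.min(axis=0), pts.max(axis=0)
    span = np.where(maxs - mins > 0, maxs - mins, 1.0)
    pix = ((pts - mins) / span * (size - 1)).astype(int)
    n = len(pix)
    for t, (x, y) in enumerate(pix):
        img[y, x] = max(img[y, x], (t + 1) / n)  # later samples -> brighter
    return img

# toy trajectory: a diagonal swipe sampled over 10 frames
traj = [(i, i, 0.0) for i in range(10)]
im = trajectory_to_image(traj, size=16)
```

In the actual pipeline a color map over the three fingertip coordinates would replace the single grayscale channel, but the time-to-intensity mapping is the same.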

Future work will have the primary objective of enriching the new dataset, both in the number of images, possibly by merging it with the LMDHG dataset after the appropriate modifications and re-labeling, and in the number of recognized gestures. In addition, the performance of the real-time pipeline will be validated with a benchmark extended to the largest possible number of users.


  • [1] L. Abraham, A. Urru, N. Normani, M. Wilk, M. Walsh, and B. O’Flynn (2018) Hand tracking and gesture recognition using lensless smart sensors. Sensors 18 (9), pp. 2834. Cited by: §2.
  • [2] A. Ahmad, C. Migniot, and A. Dipanda (2019) Hand pose estimation and tracking in real and virtual interaction: a review. Image and Vision Computing 89, pp. 35–49. Cited by: §1, §2.
  • [3] S. Ameur, A. B. Khalifa, and M. S. Bouhlel (2016) A comprehensive leap motion database for hand gesture recognition. In 2016 7th International Conference on Sciences of Electronics, Technologies of Information and Telecommunications (SETIT), pp. 514–519. Cited by: §2.2, §2.2.
  • [4] D. Bachmann, F. Weichert, and G. Rinkenauer (2018) Review of three-dimensional human-computer interaction with focus on the leap motion controller. Sensors 18 (7), pp. 2194. Cited by: §2.
  • [5] P. Barros, G. I. Parisi, D. Jirak, and S. Wermter (2014) Real-time gesture recognition using a humanoid robot with a deep neural architecture. In 2014 IEEE-RAS International Conference on Humanoid Robots, pp. 646–651. Cited by: §2.1.
  • [6] A. F. Bobick and J. W. Davis (2001) The recognition of human movement using temporal templates. IEEE Transactions on pattern analysis and machine intelligence 23 (3), pp. 257–267. Cited by: §2.1.
  • [7] S. Y. Boulahia, E. Anquetil, F. Multon, and R. Kulpa (2017) Dynamic hand gesture recognition based on 3d pattern assembled trajectories. In 2017 Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA), pp. 1–6. Cited by: §1, §2.2, §3.3, §4.1, §4.2.
  • [8] S. Y. Boulahia, E. Anquetil, F. Multon, and R. Kulpa (2017) Leap motion dynamic hand gesture (lmdhg) database. IRISA. Note: https://www-intuidoc.irisa.fr/english-leap-motion-dynamic-hand-gesture-lmdhg-database/ Cited by: §1.
  • [9] A. Bourke, J. O’brien, and G. Lyons (2007) Evaluation of a threshold-based tri-axial accelerometer fall detection algorithm. Gait & posture 26 (2), pp. 194–199. Cited by: §1.
  • [10] Q. De Smedt, H. Wannous, and J. Vandeborre (2016) Skeleton-based dynamic hand gesture recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 1–9. Cited by: §3.3.
  • [11] L. Dipietro, A. M. Sabatini, and P. Dario (2008) A survey of glove-based systems and their applications. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 38 (4), pp. 461–482. Cited by: §1.
  • [12] L. Fang, F. Monroe, S. W. Novak, L. Kirk, C. R. Schiavon, S. B. Yu, T. Zhang, M. Wu, K. Kastner, Y. Kubota, Z. Zhang, G. Pekkurnaz, J. Mendenhall, K. Harris, J. Howard, and U. Manor (2019) Deep learning-based point-scanning super-resolution imaging. bioRxiv. External Links: Document, Link, https://www.biorxiv.org/content/early/2019/10/24/740548.full.pdf Cited by: §4.1.
  • [13] B. B. Gatto, E. M. dos Santos, and W. S. Da Silva (2017) Orthogonal hankel subspaces for applications in gesture recognition. In 2017 30th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), pp. 429–435. Cited by: §2.2.
  • [14] P. Gunawardane and N. T. Medagedara (2017) Comparison of hand gesture inputs of leap motion controller & data glove in to a soft finger. In Robotics and Intelligent Sensors (IRIS), 2017 IEEE International Symposium on, pp. 62–68. Cited by: §2.
  • [15] S. H. HasanPour, M. Rouhani, M. Fayyaz, and M. Sabokrou (2016) Lets keep it simple, using simple architectures to outperform deeper and more complex architectures. CoRR abs/1608.06037. External Links: Link, 1608.06037 Cited by: §3.2.2.
  • [16] K. He, X. Zhang, S. Ren, and J. Sun (2015) Deep residual learning for image recognition. CoRR abs/1512.03385. External Links: Link, 1512.03385 Cited by: §3.2.2.
  • [17] J. Howard and S. Gugger (2020-02) Fastai: a layered api for deep learning. Information 11 (2), pp. 108. External Links: ISSN 2078-2489, Link, Document Cited by: §4.1.
  • [18] R. Kaluri and P. R. CH (2018) Optimized feature extraction for precise sign gesture recognition using self-improved genetic algorithm. International Journal of Engineering and Technology 8 (1), pp. 25–37. Cited by: §1.
  • [19] N. Y. Y. Kevin, S. Ranganath, and D. Ghosh (2004) Trajectory modeling in gesture recognition using CyberGloves® and magnetic trackers. In 2004 IEEE Region 10 Conference TENCON 2004., pp. 571–574. Cited by: §1.
  • [20] M. Khokhlova, C. Migniot, and A. Dipanda (2018) 3D point cloud descriptor for posture recognition.. In VISIGRAPP (5: VISAPP), pp. 161–168. Cited by: §2.1.
  • [21] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. External Links: 1412.6980 Cited by: §4.1.
  • [22] E. K. Kumar, P. Kishore, M. T. K. Kumar, and D. A. Kumar (2020) 3D sign language recognition with joint distance and angular coded color topographical descriptor on a 2–stream cnn. Neurocomputing 372, pp. 40–54. Cited by: §1.
  • [23] P. Kumar, P. P. Roy, and D. P. Dogra (2018) Independent bayesian classifier combination based sign language recognition using facial expression. Information Sciences 428, pp. 30–48. Cited by: §1, §2.2.
  • [24] A. Kuznetsova, L. Leal-Taixé, and B. Rosenhahn (2013) Real-time sign language recognition using a consumer depth camera. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 83–90. Cited by: §1.
  • [25] G. Lawson, D. Salanitri, and B. Waterfield (2016) Future directions for the development of virtual reality within an automotive manufacturer. Applied ergonomics 53, pp. 323–330. Cited by: §2.
  • [26] L. Liu, H. Jiang, P. He, W. Chen, X. Liu, J. Gao, and J. Han (2019) On the variance of the adaptive learning rate and beyond. External Links: 1908.03265 Cited by: §4.1.
  • [27] R. B. Mapari and G. Kharat (2016) American static signs recognition using leap motion sensor. In Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies, pp. 67. Cited by: §1, §2.2.
  • [28] E. Mathe, A. Mitsou, E. Spyrou, and P. Mylonas (2018) Hand gesture recognition using a convolutional neural network. In 2018 13th International Workshop on Semantic and Social Media Adaptation and Personalization (SMAP), pp. 37–42. Cited by: §2.2.
  • [29] D. Mehta, S. Sridhar, O. Sotnychenko, H. Rhodin, M. Shafiei, H. Seidel, W. Xu, D. Casas, and C. Theobalt (2017) Vnect: real-time 3d human pose estimation with a single rgb camera. ACM Transactions on Graphics (TOG) 36 (4), pp. 1–14. Cited by: §2.1.
  • [30] F. Mueller, F. Bernard, O. Sotnychenko, D. Mehta, S. Sridhar, D. Casas, and C. Theobalt (2018-06) GANerated hands for real-time 3d hand tracking from monocular rgb. In Proceedings of Computer Vision and Pattern Recognition (CVPR), External Links: Link Cited by: §2.1.
  • [31] M. Quigley, K. Conley, B. P. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A. Y. Ng (2009) ROS: an open-source robot operating system. In ICRA Workshop on Open Source Software, Cited by: §3.2.1.
  • [32] S. S. Rautaray and A. Agrawal (2015) Vision based hand gesture recognition for human computer interaction: a survey. Artificial intelligence review 43 (1), pp. 1–54. Cited by: §1.
  • [33] S. Ravi, M. Suman, P. Kishore, K. Kumar, A. Kumar, et al. (2019) Multi modal spatio temporal co-trained cnns with single modal testing on rgb–d based sign language gesture recognition. Journal of Computer Languages 52, pp. 88–102. Cited by: §1.
  • [34] J. Romero, H. Kjellström, and D. Kragic (2010) Hands in action: real-time 3d reconstruction of hands in interaction with objects. In 2010 IEEE International Conference on Robotics and Automation, pp. 458–463. Cited by: §2.1.
  • [35] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei (2015) ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV) 115 (3), pp. 211–252. External Links: Document Cited by: §3.2.2.
  • [36] C. C. d. Santos, J. L. A. Samatelo, and R. F. Vassallo (2019) Dynamic gesture recognition by using cnns and star rgb: a temporal information condensation. arXiv preprint arXiv:1904.08505. Cited by: §2.1.
  • [37] T. Sharp, C. Keskin, D. Robertson, J. Taylor, J. Shotton, D. Kim, C. Rhemann, I. Leichter, A. Vinnikov, Y. Wei, et al. (2015) Accurate, robust, and flexible real-time hand tracking. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pp. 3633–3642. Cited by: §2.
  • [38] B. Stenger, A. Thayananthan, P. H. Torr, and R. Cipolla (2006) Model-based hand tracking using a hierarchical bayesian filter. IEEE transactions on pattern analysis and machine intelligence 28 (9), pp. 1372–1384. Cited by: §2.1.
  • [39] I. A. Stinghen Filho, B. B. Gatto, J. Pio, E. N. Chen, J. M. Junior, and R. Barboza (2016) Gesture recognition using leap motion: a machine learning-based controller interface. In Sciences of Electronics, Technologies of Information and Telecommunications (SETIT), 2016 7th International Conference. IEEE, Cited by: §2.2.
  • [40] J. Suarez and R. R. Murphy (2012) Hand gesture recognition with depth images: a review. In 2012 IEEE RO-MAN: the 21st IEEE international symposium on robot and human interactive communication, pp. 411–417. Cited by: §2.2.
  • [41] M. Van den Bergh and L. Van Gool (2011) Combining rgb and tof cameras for real-time 3d hand gesture interaction. In 2011 IEEE workshop on applications of computer vision (WACV), pp. 66–72. Cited by: §2.2.
  • [42] M. R. Zhang, J. Lucas, G. Hinton, and J. Ba (2019) Lookahead optimizer: k steps forward, 1 step back. External Links: 1907.08610 Cited by: §4.1.