1 Introduction
The vision-based recovery of the 3D pose of human hands is an interesting and important problem in computer vision. Humans use their hands constantly and in many ways, either to interact with the physical world or to communicate with other humans. An accurate, robust and real-time solution to the problem would have a significant impact on a number of application domains including HCI, HRI, medical rehabilitation, sign language recognition, etc.
Table 1: Overview of the literature, classifying existing methods along problem type (rows: single hand, hand-object, two hands; each subdivided into G: generative, D: discriminative, H: hybrid methods) and input type (columns: RGBD, multi-camera RGB, stereo RGB, monocular RGB). Single hand: [27] [42] [5] [17] [23] [24] [56] [16] [22] [67] [9] [53] [15] [25] [65] [51] [54] [20] [55] [52] [44] [43] [58] [35] [34] [26] [47] [30] [7] [36] [41] [8] [21] [48] [64] [50] [61] [57] [13] [2] [63] [39] and [Proposed]. Hand-object: [33] [19] [10] [18] [11] [37] [38] [45] [28] [62] [40] and [Proposed]. Two hands: [29] [31] [46] [3] [59] and [Proposed]. The proposed method is the only stereo RGB approach that covers all three problem types.
The problem of 3D hand tracking is challenging. Difficulties arise due to the highly articulated structure of the hand, which results in ambiguous poses and self-occlusions. These problems escalate when hands interact with each other and/or manipulate objects. On top of these intrinsic difficulties, some application areas impose constraints on the spatial and temporal resolution of the images that feed a 3D hand pose estimation/tracking algorithm.
During the last couple of years, a number of methods for 3D hand tracking or single-frame 3D hand pose estimation have appeared. Most of these methods address the problem under the assumption that a single acting hand is observed. To a lesser extent, solutions for the problem of tracking hand-object interactions have also been proposed. The common characteristic of contemporary approaches is that they rely on RGBD or depth data. Dense and accurate depth information proves sufficient for arriving at accurate pose estimates. Still, it would be highly desirable if accurate and efficient 3D hand and hand/object tracking could be achieved on the basis of RGB information alone. This would make 3D hand tracking possible with the vast majority of today's ubiquitous camera systems (including smartphones, tablets, etc.) that do not record depth information. It would also make 3D hand tracking possible in outdoor environments, where several active RGBD sensors do not provide reliable information.
In this work we address exactly this challenge. We present the first method that performs detailed, accurate and real-time 3D tracking of scenes comprising hands based on a conventional, passive, short-baseline RGB stereo pair. The naive approach to this problem would be to first perform 3D reconstruction and then employ a state-of-the-art approach for depth-based hand tracking/pose estimation. However, as we show experimentally, this approach does not produce reliable results: the reconstructed depth information is either too noisy, too sparse or too smoothed out to support accurate 3D tracking. Thus, we follow a completely different path. We capitalize on the very successful hypothesize-and-test, generative tracking paradigm. We assume a 3D model of the object(s) to be tracked (i.e., a hand, a hand-object constellation, two hands). As shown in Figure 1, a hypothesis regarding the configuration of such a model gives rise to hypotheses on the 3D structure of the scene. A correct model hypothesis leads to photo-consistent views. Tracking the model is then formulated as an optimization problem that seeks the model configuration that maximizes stereo color consistency.
The proposed method has been evaluated extensively on standard datasets and in comparison to existing, state-of-the-art methods that rely on depth information. We investigate three interesting problem subclasses, namely (a) single hand tracking, (b) a hand interacting with a rigid object and (c) two interacting hands. The obtained results lead to the conclusion that the proposed stereo-based algorithm can be as good as, and can even outperform, its RGBD-based counterparts, both in terms of accuracy and speed.
2 Related work
The three tracking scenarios correspond to problems of increasing dimensionality/complexity. Existing solutions can be classified along two important dimensions.
Discriminative vs generative vs hybrid: Discriminative methods learn a mapping from observations to poses. Generative methods fit a model to the set of available observations. Discriminative methods are faster and do not require initialization, i.e., they perform single-frame pose estimation, but are less accurate. Generative methods are more accurate, require initialization and perform tracking to exploit temporal continuity. Hybrid methods try to couple the benefits of both worlds by employing a discriminative component to arrive at a coarse solution, which is then refined by a generative, model-based component.
Based on their input: There are methods that rely on RGBD sensors, multi-camera setups, stereo RGB cameras or single camera input.
Table 1 provides an overview of the research in the field, which leads to a number of interesting conclusions: (a) The vast majority of solutions are based on input from depth sensors. (b) There are very few stereo-based methods, all of which deal with single hand tracking. (c) For the problems of hand-object and two-hands tracking, there is no available stereo method. Moreover, although it appears that there are several methods for monocular RGB tracking of a single hand, these methods produce only limited information regarding hand pose, and only in very constrained settings.
Our contribution: This paper proposes a generative, model-based tracking framework that (a) applies to all three problem instances and (b) uses a compact, short-baseline, calibrated stereo pair. As such, it covers a significant gap in the existing literature. As shown in Section 4, the proposed framework constitutes the first practical approach to these problems based on such input and manages to provide solutions that are as good, in terms of accuracy and speed, as those obtained by their RGBD/depth-based competitors.
3 The proposed method
Figure 2 illustrates the proposed pipeline. Although this is shown for the problem of single hand tracking, it is straightforward to extend it to cover more complicated tracking scenarios such as tracking a hand in interaction with an object or two interacting hands.
We assume images acquired by a calibrated stereo pair (Figure 2a). We also assume a 3D hand model (skinned 3D mesh, Figure 2c) that can be articulated in 3D space based on its own intrinsic (kinematic model, articulation) and extrinsic (absolute 3D position and orientation) parameters (Section 3.1). A given hypothesis about the hand configuration provides a hypothesis about the 3D location of every point of the hand model. Figure 1 illustrates this for the cases of a wrong (left) and a correct (right) hypothesis. If the model hypothesis is wrong, then different physical points are projected in the two stereo views. Therefore, it is quite likely that the corresponding 2D image points will have inconsistent appearance (e.g., different color). In contrast, if the model hypothesis is correct (Figure 1, right), the color consistency of all pairs of projections of the hand model points on the two stereo images is maximized. Provided that a reasonable measure of color consistency can be defined, tracking the model can be formulated as an optimization problem that seeks the hand pose that maximizes this color consistency (Figure 2d-f). In Section 3.3 we provide such a quantification of color consistency, and we employ Particle Swarm Optimization (PSO) [6] in order to maximize it (Section 3.4). The defined objective function measures color consistency through color similarity, weighted by the distinctiveness of the corresponding points. The intuition behind this choice is that color consistency over uniformly colored areas should weigh less than color consistency of distinctive points. Our measure of distinctiveness (Section 3.2) is based on the analysis of the Harris corner detector [12]. Sample distinctiveness maps for the stereo pair of images of Figure 2a are shown in Figure 2b.
3.1 Observing and modelling the scene
Observations come from a calibrated stereo pair of RGB cameras. The intrinsic (focal length, distortion, camera center) and extrinsic (relative position, orientation) parameters for each camera are computed using standard camera calibration techniques [4]. The images of each stereo pair are temporally synchronized and undistorted before further processing.
In this work, we consider the tracking of hands and rigid objects. For modelling hands we employ the anatomically consistent and visually realistic hand model provided by libhand [60] (Figure 2c). This is a skinned model of a right hand, consisting of a bone hierarchy that drives a 3D mesh. In order to be directly comparable with the results of existing methods [27, 29], the original model was adapted by removing the wrist vertices and the root bone, and by allowing mobility with 26 degrees of freedom. The configuration of each hand is represented by 27 parameters: three for the hand position, four for the quaternion representation of the hand rotation and four articulation angles for each of the five fingers. The model of the left hand is a mirrored version of the model of the right hand. Using the hand model parameters and the camera parameters (intrinsics + extrinsics) we can render any configuration of the hand model on the stereo pair.
Libhand provides a realistic texture for the hand model. This information is not used during tracking. However, the textured hand was used in order to render the synthetic datasets used for the quantitative evaluation of the method, as detailed in Section 4.
Rigid objects in the scene have 6 degrees of freedom, modelled with 7 parameters: three for the object position and four for the quaternion representation of its rotation. An observed scene can thus be represented as a multidimensional vector with as many dimensions as the sum of the numbers of parameters of the individual objects in it.
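The dimensionality bookkeeping of this parameterization can be sketched as follows (a minimal illustration; the constant and function names are ours, not the paper's):

```python
# Hand: 3 (position) + 4 (quaternion) + 4 angles for each of 5 fingers = 27
HAND_PARAMS = 3 + 4 + 4 * 5
# Rigid object: 3 (position) + 4 (quaternion) = 7
OBJECT_PARAMS = 3 + 4

def scene_dim(n_hands, n_objects):
    """Dimensionality of the multidimensional scene state vector."""
    return n_hands * HAND_PARAMS + n_objects * OBJECT_PARAMS
```

For example, a single hand yields a 27-dimensional state, a hand plus a rigid object 34 dimensions, and two hands 54 dimensions.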
3.2 Distinctiveness maps
When checking the color consistency between image points, it is preferable to give more emphasis to the similarity of distinctive ones. This is because the similarity of distinctive points bears more information content than the similarity of points in uniformly colored regions. Thus, we create two distinctiveness maps, one for each of the input images. Our approach for defining distinctiveness borrows from related work in the field of corner detection [12]. More specifically, for each pixel $p$ in an image we compute the principal curvatures $\lambda_1$ and $\lambda_2$ of the local autocorrelation function in a neighborhood of pixels centered on $p$. Without loss of generality, we assume that $\lambda_1 \geq \lambda_2$. We use a small neighborhood in order to preserve fine image structures. Small $\lambda_1$ and $\lambda_2$ values indicate a uniformly colored region. Large $\lambda_1$ and $\lambda_2$ values indicate a corner. Finally, if $\lambda_1$ is significantly larger than $\lambda_2$, the area around $p$ is an edge.
For a definition of distinctiveness we could use the Harris corner detector [12] response function:

$R(p) = \lambda_1 \lambda_2 - k (\lambda_1 + \lambda_2)^2$.   (1)

This option was evaluated experimentally and did not perform well for standard values of $k$. The reason is that the Harris response function promotes corners and suppresses edges. Better results were obtained by employing the smallest principal curvature $\lambda_2$ alone. This is a simpler definition of distinctiveness, but it still does not adequately assess both the magnitude and the relative scale of $\lambda_1$ and $\lambda_2$.
For the purpose of building appropriate distinctiveness maps, we designed a function that gives a high response to pixels that look like corners, a lower response to pixels that are parts of edges and, finally, a zero response to uniform areas. Furthermore, distinctiveness is measured relatively within each image to ensure that all available information is exploited. In that direction, we first define $m(p)$ to be the log of the magnitude of the vector $(\lambda_1, \lambda_2)$, i.e., $m(p) = \log\left(\|(\lambda_1, \lambda_2)\|\right)$. The log function is used as a scaling operator. Subsequently, we compute the median $\tilde{m}$ of the $m$ values over the whole image. Then, we define

$S_m(p) = sig\left(m(p) - \tilde{m}\right)$.   (2)

$sig(\cdot)$ is a sigmoid function that maps its input to the range $(0, 1)$. A response of $0.5$ results for values equal to the median in the image. Similarly, we define $c(p)$ as $c(p) = \lambda_2 / \lambda_1$. $c(p)$ measures the relative scale of $\lambda_1$ and $\lambda_2$; higher values of $c(p)$ are indicative of a corner rather than an edge point. Given the median $\tilde{c}$ of the $c$ values over the image, we define the function:

$S_c(p) = sig\left(c(p) - \tilde{c}\right)$.   (3)

By subtracting $\tilde{m}$ and $\tilde{c}$ from $m(p)$ and $c(p)$ in Eq. 2 and Eq. 3, respectively, we achieve relative $S_m$ and $S_c$ responses over the images. The product of $S_m$ and $S_c$ represents the relative distinctiveness of a point:

$D(p) = S_m(p) \cdot S_c(p)$.   (4)

Points for which $D(p)$ falls below a threshold $\tau_d$ are set to zero, signifying that points that are less distinctive than the threshold are not considered at all. The threshold $\tau_d$ was determined experimentally, as detailed in Section 4.3. By measuring the value of $D$ for each pixel in the left and the right images of the stereo pair we obtain two distinctiveness maps, $D_L$ and $D_R$.
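The pipeline of Eqs. 2-4 can be sketched in NumPy as follows. This is an illustrative implementation, not the paper's: the gradient operator, the window size, the numerical epsilons and the threshold value are all assumptions made here for the sketch.

```python
import numpy as np

def box_filter(a, win=3):
    """Box average with edge padding; a simple stand-in for the window sum
    used when accumulating the local autocorrelation (structure tensor)."""
    p = win // 2
    ap = np.pad(a, p, mode='edge')
    out = np.zeros_like(a, dtype=float)
    for dy in range(win):
        for dx in range(win):
            out += ap[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (win * win)

def distinctiveness_map(img, tau=0.1, win=3):
    """Sketch of Eqs. 2-4 on a grayscale image (tau, win are assumed values)."""
    gy, gx = np.gradient(img.astype(float))
    sxx = box_filter(gx * gx, win)
    syy = box_filter(gy * gy, win)
    sxy = box_filter(gx * gy, win)
    # Eigenvalues of the 2x2 autocorrelation matrix [[sxx, sxy], [sxy, syy]]
    half_tr = (sxx + syy) / 2.0
    root = np.sqrt(((sxx - syy) / 2.0) ** 2 + sxy ** 2)
    l1, l2 = half_tr + root, half_tr - root          # l1 >= l2
    m = np.log(np.hypot(l1, l2) + 1e-12)             # magnitude term
    c = l2 / (l1 + 1e-12)                            # corner-vs-edge term
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    D = sig(m - np.median(m)) * sig(c - np.median(c))  # Eq. 4
    D[D < tau] = 0.0                                 # drop weakly distinctive points
    return D
```

On a synthetic image containing a bright square, the response at the square's corners exceeds the response over the uniform background, as intended.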
3.3 Rating a model hypothesis
We denote with $I_L$, $I_R$ the two images of the stereo pair and with $D_L$, $D_R$ the corresponding distinctiveness maps as computed in Section 3.2. We consider a hypothesis $h$ about the configuration (position, orientation, possible articulation) of all modelled hands and objects to be tracked (see Section 3.1). Using the intrinsic and extrinsic parameters of the stereo pair, we can estimate the projections $p_L$ and $p_R$ of a 3D point $P$ of $h$ in each of the two views. Then, we define the color consistency of points $p_L$, $p_R$ as:

$s(p_L, p_R) = \min\left(D_L(p_L), D_R(p_R)\right) \cdot e^{-\|I_L(p_L) - I_R(p_R)\| / \sigma}$.   (5)

Intuitively, this color consistency measure considers the minimum of the distinctivenesses of the two points, scaled by a function that takes its maximum value when the corresponding colors are identical and drops to zero with increasing color difference. In Eq. 5, $\sigma$ is a scale parameter that controls the steepness of the exponential. The value of $\sigma$ was determined experimentally (see Section 4.3). The total color consistency of $h$ is then defined as
$S(h) = \sum_{P \in V_L \cap V_R} s(p_L, p_R)$.   (6)

In Eq. 6, $V_L$ and $V_R$ are the sets of points corresponding to the visible surface of $h$ in each view.
Care must be taken to exclude from consideration model points that are visible in one view but occluded in the other. To this end, we render the 3D model in both views. For each model point that is actually visible in one view, we consider its projection in the other view. If the same physical point is visible (no occlusion), then we expect a rendered 3D point at exactly the same position (ideally) or within a short range (in practice). Otherwise, the point is excluded from consideration. This range, measured in mm, was kept fixed in all experiments.
For a specific time instant, estimating the scene state amounts to estimating the optimal hypothesis $h^*$ that maximizes the objective function of Eq. 6, i.e.:

$h^* = \arg\max_h S(h)$.   (7)
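A minimal sketch of Eqs. 5 and 6, assuming the images and distinctiveness maps are NumPy arrays and the per-point stereo projections of the visible model points have already been computed; the function names and the default $\sigma$ are placeholders, not the experimentally determined setting:

```python
import numpy as np

def color_consistency(pL, pR, IL, IR, DL, DR, sigma=30.0):
    """Eq. 5: minimum distinctiveness of the two projections, scaled by an
    exponential color-similarity term. pL, pR are (row, col) pixel coords."""
    diff = np.linalg.norm(IL[pL].astype(float) - IR[pR].astype(float))
    return min(DL[pL], DR[pR]) * np.exp(-diff / sigma)

def hypothesis_score(pairs, IL, IR, DL, DR, sigma=30.0):
    """Eq. 6: sum of Eq. 5 over model points visible in both views.
    `pairs` holds the (pL, pR) projections of those points."""
    return sum(color_consistency(pL, pR, IL, IR, DL, DR, sigma)
               for pL, pR in pairs)
```

When corresponding colors are identical the exponential equals one and each point contributes its full (minimum) distinctiveness; increasing color differences shrink the contribution toward zero.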
3.4 Stochastic optimization
The optimization (maximization) problem defined in Eq. 7 is solved based on Particle Swarm Optimization (PSO) [14], which is a stochastic, evolutionary optimization method. It has been demonstrated that PSO is a very effective and efficient method for solving vision optimization problems such as head pose estimation [32], hand articulation tracking [29] and others. PSO achieves optimization based on the collective behavior of a set of particles (candidate solutions) that evolve in runs called generations. The rules that govern the behavior of particles emulate “social interaction”. A population of particles is a set of points in the parameter space of the objective function to be optimized. PSO has a number of attractive properties. For example, it depends on very few parameters, does not require differentiation of the objective function and converges with a relatively small computational budget [1].
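A minimal sketch of a constriction-factor PSO maximizer of this kind (the constants follow Clerc and Kennedy's recommendation; the function name, the box-bounded truncation and the default budget are illustrative assumptions, and the objective `f` stands in for Eq. 6):

```python
import numpy as np

def pso_maximize(f, lo, hi, n_particles=32, n_generations=32, seed=0):
    """Constriction-factor PSO maximizing f over the box [lo, hi]."""
    rng = np.random.default_rng(seed)
    psi = 4.1                      # c1 + c2; must exceed 4
    c1 = c2 = psi / 2.0            # cognitive and social components
    w = 2.0 / abs(2.0 - psi - np.sqrt(psi ** 2 - 4.0 * psi))  # ~0.7298
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()                         # per-particle best positions
    pbest_f = np.array([f(p) for p in x])
    g = pbest[np.argmax(pbest_f)].copy()     # global best position
    g_f = pbest_f.max()
    for _ in range(n_generations):
        r1 = rng.random(x.shape)
        r2 = rng.random(x.shape)
        v = w * (v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x))
        x = np.clip(x + v, lo, hi)           # truncate at search-space bounds
        fx = np.array([f(p) for p in x])
        better = fx > pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        if fx.max() > g_f:
            g_f = fx.max()
            g = x[np.argmax(fx)].copy()
    return g, g_f
```

On a simple concave objective such as a negated quadratic, this sketch converges close to the optimum within a modest particle/generation budget.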
Every particle holds its current position (current candidate solution, set of parameters) in a vector $x_k$ and its current velocity in a vector $v_k$. Each particle $i$ keeps in vector $p_i$ the position at which it achieved, up to the current generation $k$, the best value of the objective function. The swarm as a whole stores in $p_g$ the best position across all particles of the swarm. All particles are aware of the global optimum $p_g$. The velocity and position update equations in every generation $k$ are

$v_{k+1} = w \left( v_k + c_1 r_1 (p_i - x_k) + c_2 r_2 (p_g - x_k) \right)$ and $x_{k+1} = x_k + v_{k+1}$,

where $w$ is a constant constriction factor [6], $c_1$ is called the cognitive component, $c_2$ is termed the social component and $r_1$, $r_2$ are random samples of a uniform distribution in the range $[0, 1]$. Finally, $\psi = c_1 + c_2 > 4$ must hold [6]. As suggested in [6], we set $c_1 = c_2 = 2.05$ and $w = 2 / \left| 2 - \psi - \sqrt{\psi^2 - 4\psi} \right|$ with $\psi = 4.1$.

In our problem formulation, the solution space has $N$ dimensions, where $N$ is the sum of the parameters encoding the degrees of freedom of all the observed/tracked objects. Specifically, $N = 27$ for tracking a single hand, $N = 34$ for a hand interacting with a rigid object and
$N = 54$ for two hands. Particles are initialized with a normal distribution around the center of the search range, with their velocities set to zero. Each dimension of the multidimensional parameter space is bounded in some range. During the position update, a velocity component may force a particle to move to a point outside the bounded search space. Such a component is truncated and the particle does not move beyond the boundary of the corresponding dimension. Since the object(s) motion needs to be continuously tracked in a sequence instead of being estimated in a single frame, temporal continuity is exploited. More specifically, the solution for frame $t-1$ is used to restrict the search space for the initial population at frame $t$. In the related experiments, the search range (i.e., the space in which particle positions are initialized) extends over a fixed interval around the values estimated in the previous frame, set separately for positional and rotational parameters.
3.5 Implementation and performance issues
In order to achieve a real-time frame rate, the proposed pipeline was implemented using CUDA on an NVIDIA GPU. To lower the computational requirements, the input images are segmented around the last known solution. The bounding box of the tracked objects is computed for each frame by rendering a synthetic view of the scene in full resolution, as it appeared in the previous frame, and then segmenting an area around the objects. The distinctiveness maps are computed only for the segmented images.
PSO is inherently parallelizable, since the particles are only synchronized at the end of each generation. In our implementation we exploit this feature by evaluating the objective function for each particle in parallel, using CUDA. For each generation, all hypothesized scene configurations are rendered using OpenGL. Our reference implementation achieves near real-time performance for single hand tracking with a moderate budget of particles and generations on a computer equipped with an Intel i7 950 @ 3.07GHz CPU, 12GB of RAM and an NVidia GTX970 GPU. Although the rendering of each generation can be performed in a single OpenGL drawing call, the reference implementation renders once for each camera view. Further optimizations of the pipeline implementation could increase the performance significantly on the same hardware.
4 Experimental evaluation
While in recent years some datasets with ground truth for tracking articulated hand motions have become available, they were acquired with RGBD sensors and not with stereo pairs. In order to evaluate the proposed method and provide a fair comparison to other methods, the standard protocols used in the relevant literature [28, 26, 19] were followed. Synthetic stereo and RGBD datasets were created using the tracking results of baseline RGBD methods on real world sequences. The scenarios in the datasets consist of articulations of (a) a single hand, (b) a hand interacting with an object and (c) two interacting hands. To evaluate quantitatively the performance of our approach, we directly compare it with state-of-the-art model-based methods that use RGBD input. For the experiments, we used our implementation of [27] for single hand tracking and our implementation of [29] for two hands tracking. A variant of [29] was also used to track a hand interacting with a rigid object. In our implementations, the above methods were adapted so as to operate with skinned hand models instead of hand models consisting of collections of geometric solids. This reduces the tracking error of the baseline approaches compared to the original methods reported in [27, 29]. In the qualitative experiments, we considered multiple sequences covering several indoor and outdoor scenarios.
4.1 Datasets
Synthetic data: For the single and two hands tracking scenarios we used the synthetic sequences and ground truth presented in [27, 29], which became available to us after contacting their authors. For the hand-object tracking scenario we captured an RGBD sequence of a human manipulating a spray bottle. The spray bottle model was created with a laser scanner that provides millimetre accuracy. The hand-object sequence was tracked with the variant of [29] in order to obtain a ground truth. Subsequently, the ground truth from the three sequences was used to render (a) synthetic stereo datasets to be used by our method and (b) synthetic RGBD data to be used by the baseline RGBD methods we compare against. For the synthetic RGB images, the texture information of the hands and object was used during rendering. The rendered models were illuminated using an ambient light source. Real world images of indoor environments were captured with the ZED sensor and used as the background of the models for the synthetic stereo.
The single hand (SH) sequence consists of 638 frames, the hand-object (HO) sequence of 545 frames and the two-hands (TH) sequence of 705 frames. All sequences cover challenging articulations of hands, as well as hand-object and hand-hand interactions. To the best of our knowledge, no other datasets with ground truth exist for tracking hand-object and hand-hand interactions with a stereo pair. We plan to make all three sequences publicly available.
Real world sequences: In order to support the qualitative evaluation of the proposed method on real data, stereo sequences were captured using the ZED stereo pair [49], both indoors and outdoors. Three different groups of sequences were recorded, for the problems of tracking a single hand, hand-object interaction and two hands. The real world sequences were recorded at different combinations of resolutions and frame rates (1080p, 720p, 30fps and 60fps), both indoors and outdoors. The sequences show a user performing various hand motions, manipulating an object in front of the camera and performing bimanual hand gestures. The hand models used (see Section 3.1) do not exactly match the user's hands (e.g., they differ in size and finger length). For the hand-object real-world sequences, the object used was a pencil box. The box was modelled as a cuboid that is only an approximation of the actual object's 3D shape. It is demonstrated that the proposed method is robust to these inconsistencies between the scene models and the actual observations.
4.2 Performance metrics
We use standard metrics to assess the tracking error of the evaluated methods. For a hand, the tracking error is computed as the mean distance of the corresponding hand joints from the ground truth. For an object model, we use anchor points on the object’s model. The error of the object pose is the mean distance of the tracked anchor points from the corresponding ground truth. The tracking error for a frame is the average tracking error of all the objects it involves. Finally, the tracking error of a sequence is the average perframe tracking error.
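These metrics can be sketched as follows (NumPy; the function names are illustrative):

```python
import numpy as np

def mean_point_error(est, gt):
    """Mean Euclidean distance between corresponding points (hand joints or
    object anchor points), in the units of the input (e.g., mm)."""
    est, gt = np.asarray(est, float), np.asarray(gt, float)
    return float(np.mean(np.linalg.norm(est - gt, axis=-1)))

def frame_error(per_object_errors):
    """Average tracking error over all objects involved in a frame."""
    return float(np.mean(per_object_errors))

def sequence_error(per_frame_errors):
    """Average per-frame tracking error over a sequence."""
    return float(np.mean(per_frame_errors))
```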
4.3 Results
Deciding internal parameters: As presented in Section 3, the proposed method entails the setting of (a) the distinctiveness threshold $\tau_d$ that controls which image points contribute to the objective function (Eq. 4) and (b) $\sigma$, which gauges the steepness of the color similarity curve (Eq. 5). Using the synthetic dataset for the single hand, we investigated the effect of different parameter values on the accuracy of the method. Figure 3 (left) shows the tracking error for a range of values of $\tau_d$ with $\sigma$ fixed. The right plot in this figure shows the tracking error for a range of values of $\sigma$ with $\tau_d$ fixed. The tests are performed with a relatively restricted PSO budget (32 particles and 32 generations), where a finely tuned objective function is needed to achieve good results and, thus, performance differences are more prominent. The plots show the mean tracking error over multiple runs, as well as the standard deviation. It can be verified that good performance is attained for a wide range of values for both parameters, indicating that the method is not sensitive to their exact setting. The best-performing values of $\tau_d$ and $\sigma$ were used in all experiments.

Quantitative evaluation of tracking accuracy: Using the synthetic sequences, we measure the tracking error of the proposed method as a function of different PSO budget configurations. In these experiments, both methods (proposed and baseline) are initialized at the first frame of the sequence using the known ground truth for that frame (same initial position). Tracking failures were not reinitialized. Each plot on the top row of Figure 4 shows the tracking error of the proposed method and its RGBD competitor for the SH, HO and TH datasets (left to right). For each data point we consider the mean over multiple runs.
The proposed method performs similarly to or better than [27] on the SH dataset, despite the fact that it relies on much less (and only implicit) depth information than [27]. For the problems of higher dimensionality, the results indicate that the RGBD solutions are more accurate, but only by a small margin.
The bottom row of Figure 4 shows the percentage of frames of a dataset for which the tracking error was below a certain threshold. There are six curves in each plot. The dashed curves correspond to the RGBD baseline methods. Each color corresponds to a different PSO budget (particles × generations). It is interesting to note that, when given a higher computational budget, the proposed method can achieve accuracy similar to or better than that of the RGBD counterpart on the HO and TH datasets.
Assessing the effect of foreground detection: The RGBD methods use segmented depth as input. Typically, this is achieved either using skin color detection on the RGB input (less robust) or based on depth segmentation (more robust). In order to make a fair comparison between the proposed method and the methods we compare against, in the experiments reported in Figure 4 we used the same masking method for all evaluated methods. However, in real life, our stereo-based approach does not have access to the more robust depth-based foreground segmentation. In that respect, we are interested in assessing the influence of foreground masking on the performance of the proposed method. We performed the same experiments, for different PSO budget configurations, with and without foreground masking. Masking reduces the tracking error only marginally, on average. Thus, foreground masking can be skipped without significantly affecting the tracking accuracy.
Qualitative evaluation on real world sequences: Figure 5 shows representative frames with tracking results on the real world datasets. The computed pose is superimposed on the left RGB image of the stereo pair. Different objects are shown in different colors (red and yellow). On the top left of each frame, the original bounding box of the observation is shown. While depth-based methods fail outdoors due to ambient infrared light, our stereo-based method does not suffer from this problem. In Figure 6 we demonstrate results from an outdoor sequence tracking a single hand at short and longer distances. In all sequences, tracking is performed without any foreground segmentation. Complete results are provided in the supplementary material accompanying the paper.
Tracking by relying on depth from stereo: A straightforward idea for stereo-based tracking would be to first reconstruct the 3D structure and then use the approach of [27, 29] on the resulting depth maps. This approach is also explored in [66]. We evaluated this alternative against our color-consistency-based approach that bypasses the problem of 3D reconstruction. Essentially, we fed the RGBD-based approach with the stereo-based depth information provided by the ZED camera and made sure that the depth around the hand was segmented correctly. It turns out that this is a rather unreliable solution that fails very fast. This is due to the quality of the depth information, which is either dense but very noisy, or reliable but sparse or smoothed out. Figure 7 shows indicative results. Both methods are initialized at the same position on the first frame, but they diverge quickly. The depth-based method loses track after very few frames. The supplemental material provides further relevant evidence.
5 Summary and conclusions
We presented a novel approach for tracking hand interactions in various settings. The proposed method is the first that can cope accurately and efficiently with these tracking scenarios based on a conventional, short-baseline stereo pair. By employing a hypothesize-and-test framework, we cast tracking as an optimization problem that maximizes the color consistency of the tracked scene. Thus, we avoid the explicit computation of disparity maps, which is particularly challenging for the relatively uniformly colored human hands. Our approach achieves similar, and in some cases better, tracking accuracy than state-of-the-art methods that use specialized active sensors such as depth cameras, at a comparable computational performance. From a theoretical point of view, the significance of the proposed method is that it shows that accurate 3D structure information is not a prerequisite for 3D hand tracking and related problems. From a practical point of view, the significance of the developed method is also high, as it enables 3D hand tracking based on compact, conventional, widely deployed, passive vision systems that do not have the limitations of contemporary RGBD cameras (e.g., limited spatial resolution, constraints on the distance of observation, sensitivity to infrared light). Moreover, although we focus on the challenging tracking scenarios involving hands, nothing prevents the applicability of the proposed algorithm to problems such as human skeleton tracking and 3D tracking of rigid objects.
References
 [1] P. Angeline. Evolutionary optimization versus particle swarm optimization: Philosophy and performance differences. Evolutionary Programming VII, 1998.
 [2] V. Athitsos and S. Sclaroff. Estimating 3d hand pose from a cluttered image. In CVPR, 2003.
 [3] L. Ballan, A. Taneja, J. Gall, L. Van Gool, and M. Pollefeys. Motion capture of hands in action using discriminative salient points. In ECCV, 2012.
 [4] J.Y. Bouguet. Camera calibration toolbox for matlab. 2004.
 [5] M. Bray, E. Koller-Meier, and L. Van Gool. Smart particle filtering for high-dimensional tracking. CVIU, 2007.

 [6] M. Clerc and J. Kennedy. The particle swarm - explosion, stability, and convergence in a multidimensional complex space. IEEE Transactions on Evolutionary Computation, 2002.
 [7] T. E. de Campos and D. W. Murray. Regression-based hand pose estimation from multiple cameras. In CVPR, 2006.
 [8] M. de La Gorce, D. J. Fleet, and N. Paragios. Model-based 3d hand pose estimation from monocular video. PAMI, 2011.
 [9] S. Fleishman, M. Kliger, A. Lerner, and G. Kutliroff. ICPIK: Inverse kinematics based articulated-ICP. In CVPRW, 2015.
 [10] H. Hamer, J. Gall, T. Weise, and L. Van Gool. An object-dependent hand pose prior from sparse training data. In CVPR, 2010.
 [11] H. Hamer, K. Schindler, E. Koller-Meier, and L. Van Gool. Tracking a hand manipulating an object. In ICCV, 2009.
 [12] C. Harris and M. Stephens. A combined corner and edge detector. In Alvey vision conference, 1988.
 [13] T. Heap and D. Hogg. Towards 3d hand tracking using a deformable model. In FG, 1996.
 [14] J. Kennedy, R. Eberhart, and Y. Shi. Swarm intelligence. 2001.
 [15] C. Keskin, F. Kıraç, Y. E. Kara, and L. Akarun. Hand pose estimation and hand shape classification using multilayered randomized decision forests. In ECCV, 2012.
 [16] S. Khamis, J. Taylor, J. Shotton, C. Keskin, S. Izadi, and A. Fitzgibbon. Learning an efficient model of hand shape variation from depth images. In CVPR, 2015.
 [17] D. Kim, O. Hilliges, S. Izadi, A. D. Butler, J. Chen, I. Oikonomidis, and P. Olivier. Digits: freehand 3d interactions anywhere using a wrist-worn gloveless sensor. In Proc. ACM Symposium on User Interface Software and Technology, 2012.
 [18] N. Kyriazis and A. Argyros. Physically plausible 3d scene tracking: The single actor hypothesis. In CVPR, 2013.
 [19] N. Kyriazis and A. Argyros. Scalable 3d tracking of multiple interacting objects. In CVPR, 2014.
 [20] P. Li, H. Ling, X. Li, and C. Liao. 3d hand pose estimation using randomized decision forest with segmentation index points. In ICCV, 2015.
 [21] J. MacCormick and M. Isard. Partitioned sampling, articulated objects, and interface-quality hand tracking. In ECCV, 2000.
 [22] A. Makris and A. A. Argyros. Model-based 3d hand tracking with online shape adaptation. In BMVC, 2015.
 [23] A. Makris, N. Kyriazis, and A. A. Argyros. Hierarchical particle filtering for 3d hand tracking. In CVPR, 2015.
 [24] S. Melax, L. Keselman, and S. Orsten. Dynamics based 3d skeletal hand tracking. In Proc. of Graphics Interface, 2013.
 [25] M. Oberweger, P. Wohlhart, and V. Lepetit. Training a feedback loop for hand pose estimation. In ICCV, 2015.
 [26] I. Oikonomidis, N. Kyriazis, and A. A. Argyros. Markerless and efficient 26-DOF hand pose recovery. In ACCV, 2010.
 [27] I. Oikonomidis, N. Kyriazis, and A. A. Argyros. Efficient model-based 3d tracking of hand articulations using kinect. In BMVC, 2011.
 [28] I. Oikonomidis, N. Kyriazis, and A. A. Argyros. Full DOF tracking of a hand interacting with an object by modeling occlusions and physical constraints. In ICCV, 2011.
 [29] I. Oikonomidis, N. Kyriazis, and A. A. Argyros. Tracking the articulated motion of two strongly interacting hands. In CVPR, 2012.
 [30] I. Oikonomidis, N. Kyriazis, K. Tzevanidis, and A. A. Argyros. Tracking hand articulations: Relying on 3d visual hulls versus relying on multiple 2d cues. In ISUVR, 2013.
 [31] I. Oikonomidis, M. I. A. Lourakis, and A. A. Argyros. Evolutionary quasi-random search for hand articulations tracking. In CVPR, 2014.
 [32] P. Padeleris, X. Zabulis, and A. A. Argyros. Head pose estimation on depth data based on particle swarm optimization. In CVPRW, 2012.
 [33] P. Panteleris, N. Kyriazis, and A. A. Argyros. 3d tracking of human hands in interaction with unknown objects. In BMVC, 2015.
 [34] G. Poier, K. Roditakis, S. Schulter, D. Michel, H. Bischof, and A. A. Argyros. Hybrid one-shot 3d hand pose estimation by exploiting uncertainties. arXiv:1510.08039, 2015.
 [35] C. Qian, X. Sun, Y. Wei, X. Tang, and J. Sun. Realtime and robust hand tracking from depth. In CVPR, 2014.
 [36] J. M. Rehg and T. Kanade. Visual tracking of high DOF articulated structures: an application to human hand tracking. In ECCV, 1994.
 [37] G. Rogez, J. S. Supancic, and D. Ramanan. First-person pose recognition using egocentric workspaces. In CVPR, 2015.
 [38] G. Rogez, J. S. Supancic, and D. Ramanan. Understanding everyday hands in action from rgbd images. In ICCV, 2015.
 [39] J. Romero, H. Kjellström, and D. Kragic. Monocular realtime 3d articulated hand pose estimation. In Int’l Conf. on Humanoid Robots, 2009.
 [40] J. Romero, H. Kjellström, and D. Kragic. Hands in action: realtime 3d reconstruction of hands in interaction with objects. In ICRA, 2010.
 [41] R. Rosales, V. Athitsos, L. Sigal, and S. Sclaroff. 3d hand pose reconstruction using specialized mappings. In ICCV, 2001.
 [42] T. Schmidt, R. Newcombe, and D. Fox. Dart: Dense articulated realtime tracking. RSS, 2014.
 [43] T. Sharp, C. Keskin, D. Robertson, J. Taylor, J. Shotton, D. Kim, C. Rhemann, I. Leichter, A. Vinnikov, Y. Wei, et al. Accurate, robust, and flexible realtime hand tracking. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 2015.
 [44] S. Sridhar, F. Mueller, A. Oulasvirta, and C. Theobalt. Fast and robust hand tracking using detection-guided optimization. In CVPR, 2015.
 [45] S. Sridhar, F. Mueller, M. Zollhöfer, D. Casas, A. Oulasvirta, and C. Theobalt. Realtime joint tracking of a hand manipulating an object from rgbd input. In ECCV, 2016.
 [46] S. Sridhar, A. Oulasvirta, and C. Theobalt. Interactive markerless articulated hand motion tracking using rgb and depth data. In ICCV, 2013.
 [47] S. Sridhar, H. Rhodin, H.P. Seidel, A. Oulasvirta, and C. Theobalt. Realtime hand tracking using a sum of anisotropic gaussians model. In 3DV, 2014.
 [48] B. Stenger, P. R. Mendonça, and R. Cipolla. Modelbased 3d tracking of an articulated hand. In CVPR, 2001.
 [49] StereoLabs. Stereolabs zed stereo camera. https://www.stereolabs.com/.
 [50] E. B. Sudderth, M. I. Mandel, W. T. Freeman, and A. S. Willsky. Visual hand tracking using nonparametric belief propagation. In CVPRW, 2004.
 [51] X. Sun, Y. Wei, S. Liang, X. Tang, and J. Sun. Cascaded hand pose regression. In CVPR, 2015.
 [52] A. Tagliasacchi, M. Schröder, A. Tkach, S. Bouaziz, M. Botsch, and M. Pauly. Robust articulated-ICP for real-time hand tracking. In Computer Graphics Forum, 2015.
 [53] D. Tang, J. Taylor, P. Kohli, C. Keskin, T.K. Kim, and J. Shotton. Opening the black box: Hierarchical sampling optimization for estimating human hand pose. In ICCV, 2015.
 [54] D. Tang, T.-H. Yu, and T.-K. Kim. Realtime articulated hand pose estimation using semi-supervised transductive regression forests. In ICCV, 2013.
 [55] J. Taylor, L. Bordeaux, T. Cashman, B. Corish, C. Keskin, T. Sharp, E. Soto, D. Sweeney, J. Valentin, B. Luff, et al. Efficient and precise interactive hand tracking through joint, continuous optimization of pose and correspondences. ACM TOG, 2016.
 [56] J. Taylor, R. Stebbing, V. Ramakrishna, C. Keskin, J. Shotton, S. Izadi, A. Hertzmann, and A. Fitzgibbon. User-specific hand modeling from monocular depth sequences. In CVPR, 2014.
 [57] A. Thayananthan, B. Stenger, P. H. Torr, and R. Cipolla. Shape context and chamfer matching in cluttered scenes. In CVPR, 2003.
 [58] J. Tompson, M. Stein, Y. Lecun, and K. Perlin. Realtime continuous pose recovery of human hands using convolutional networks. ACM TOG, 2014.
 [59] D. Tzionas, L. Ballan, A. Srikantha, P. Aponte, M. Pollefeys, and J. Gall. Capturing hands in action using discriminative salient points and physics simulation. IJCV, 2015.
 [60] M. Šarić. Libhand: A library for hand articulation. http://www.libhand.org/, 2011. Version 0.9.
 [61] R. Y. Wang and J. Popović. Realtime handtracking with a color glove. In ACM TOG, 2009.
 [62] Y. Wang, J. Min, J. Zhang, Y. Liu, F. Xu, Q. Dai, and J. Chai. Video-based hand manipulation capture through composite motion control. ACM TOG, 2013.
 [63] Y. Wu and T. S. Huang. View-independent recognition of hand postures. In CVPR, 2000.
 [64] Y. Wu, J. Y. Lin, and T. S. Huang. Capturing natural hand articulation. In ICCV, 2001.
 [65] C. Xu and L. Cheng. Efficient hand pose estimation from a single depth image. In ICCV, 2013.
 [66] J. Zhang, J. Jiao, M. Chen, L. Qu, X. Xu, and Q. Yang. 3d hand pose tracking and estimation using stereo matching. arXiv:1610.07214, 2016.
 [67] W. Zhao, J. Chai, and Y.-Q. Xu. Combining marker-based mocap and rgbd camera for acquiring high-fidelity hand motion data. In Proc. ACM SIGGRAPH/Eurographics Symposium on Computer Animation, 2012.