In the past, researchers have proposed deep neural network architectures consisting of ensembles of models, each of which solves a specific sub-task; for instance, sub-tasks such as hand candidate detection, fingertip detection, and classification are combined to achieve the larger goal of hand gesture recognition in first-person view. For accurate hand gesture classification without depth data, each constituent model has to be trained separately, which demands extensive human labour for annotating the data. The authors use a large manually annotated dataset to introduce enough variability in background and lighting conditions to make the models robust.
On the other hand, synthetically generated data is increasingly being used to train and validate vision systems. This is especially true in areas where obtaining large amounts of data with ground truth is tedious. However, existing literature states that the performance of systems trained only on synthetic data is not on par with that of systems trained on real-world data, owing to the issue of domain shift. This problem arises because the probability distribution over the parameters of the synthetic-video generation process may diverge from the parameters that describe the real-world data. Divergence in critical parameters such as lighting, scene geometry, and camera parameters often leads to poor generalisability in models trained solely on synthetic data.
Various works have derived or designed representations, such as geometry and motion, in synthetic domains that are quasi-invariant to domain shift. Ros et al. showed that augmenting large-scale synthetic data with even a few real-world samples during training can alleviate domain shift. Moreover, recent work in generative adversarial learning [2, 8] has shown how unlabelled samples from a target domain can be used to iteratively obtain better point estimates of the parameters of generative models by minimising the difference between the generative and target distributions. Taking cues from these two ideas, we generate photo-realistic videos with different backgrounds and gesture patterns, and hypothesise that, given a large-scale dataset, one can design simpler frameworks that implicitly learn the global task of gesture recognition without needing to explicitly localise hands and fingertips.
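The adversarial training referenced above [2] minimises the difference between the generative and target distributions via the standard minimax game, in which the generator G learns to match the data distribution while the discriminator D learns to tell real samples from generated ones:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z(z)}\!\left[\log\left(1 - D(G(z))\right)\right]
```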
2 Proposed Framework
2.1 CycleGAN Based Approach
We adapt the architecture for our generative networks from Zhu et al., who have shown impressive results for image-to-image translation. The network contains two strided convolutions, several residual blocks, and two fractionally strided convolutions; the number of residual blocks depends on the input image size. To decide whether overlapping image patches are real or fake, the discriminator network uses PatchGANs. Such a patch-level discriminator architecture has fewer parameters than a full-image discriminator and can operate on arbitrarily sized images in a fully convolutional fashion.
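To make the patch-level behaviour concrete, the sketch below computes the size of the real/fake score map such a discriminator produces for a square input. The layer configuration (4×4 convolutions with strides 2, 2, 2, 1, 1 and padding 1) follows the commonly used 70×70 PatchGAN; the paper does not state these values, so treat them as an assumption.

```python
def conv_out(size, kernel=4, stride=2, pad=1):
    """Spatial output size of a single convolution layer."""
    return (size + 2 * pad - kernel) // stride + 1

def patchgan_map_size(size):
    """Side length of the PatchGAN score map for a square input of side `size`.

    The strides follow the common 70x70 PatchGAN configuration (an assumption;
    the paper does not list them): three downsampling convolutions followed by
    two stride-1 convolutions.
    """
    for stride in (2, 2, 2, 1, 1):
        size = conv_out(size, stride=stride)
    return size
```

For a 256×256 input this yields a 30×30 map of patch scores; because every layer is convolutional, any sufficiently large input size works, which is why the discriminator handles arbitrarily sized images.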
2.2 Sequential Scene Generation with GAN
We use the ability of the model outlined by Turkoglu et al. to generate video sequences with different backgrounds but the same (or controlled) fingertip and hand as in the reference input image. The framework composes a scene sequentially, decomposing the underlying problem into separate foreground and background generation. Our approach (Figure 2) utilises the foreground generator proposed by Turkoglu et al. to superimpose elements over a given background.
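The final composition step amounts to alpha-blending the generated foreground (the hand) onto the generated background using the foreground mask. A minimal numpy sketch of this operator (our illustration, not Turkoglu et al.'s exact implementation):

```python
import numpy as np

def compose_scene(background, foreground, mask):
    """Superimpose a generated foreground onto a generated background.

    `background` and `foreground` are H x W x C float arrays in [0, 1];
    `mask` is H x W x 1, equal to 1 where the foreground (hand) should
    appear and 0 elsewhere.
    """
    return mask * foreground + (1.0 - mask) * background
```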
3 Experiments and Results
3.1 Experiment 1
We use the Adam solver, and all models were trained from scratch. We observed the results while varying the number of epochs: the model was trained with a fixed learning rate for an initial set of epochs, after which the learning rate was linearly decayed over the remaining epochs. The model was trained on a Tesla V100 GPU.
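The schedule described above, a constant learning rate followed by linear decay to zero, can be sketched as follows. The specific epoch counts and base rate were lost in this copy of the text, so the default values here are placeholders, not the paper's settings:

```python
def learning_rate(epoch, base_lr=2e-4, n_constant=100, n_decay=100):
    """Learning rate at a given epoch: held at `base_lr` for the first
    `n_constant` epochs, then decayed linearly to zero over `n_decay`
    further epochs. Default values are placeholders, not from the paper.
    """
    if epoch < n_constant:
        return base_lr
    frac = (epoch - n_constant) / n_decay  # fraction of the decay phase elapsed
    return base_lr * max(0.0, 1.0 - frac)
```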
We train our model on the SCUT-Ego-Finger dataset, which provides manually annotated frames for hand detection and fingertip detection in first-person view. The dataset includes videos from different environments such as classrooms, lakes, and canteens. We demonstrate our results in Figure 1 on two pairs of source and target domains.
3.2 Experiment 2
We ran our experiments on a subset of the SCUT-Ego-Finger dataset. Since we did not have ground-truth semantic maps for our dataset, skin pixels were detected from the images using skin-colour segmentation: we applied the GrabCut algorithm for foreground extraction, followed by skin thresholding in the HSV colour space. Morphological erosion was also applied to remove some of the isolated blobs.
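To make the preprocessing concrete, the sketch below shows the skin-thresholding and erosion steps in numpy (GrabCut itself is omitted; in practice one would call OpenCV's cv2.grabCut first). The HSV thresholds are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def skin_mask_hsv(hsv, h_max=25, s_min=40, v_min=60):
    """Threshold an HSV image (H in [0, 179], S/V in [0, 255], OpenCV
    convention) into a binary skin mask. The ranges here are illustrative
    assumptions, not the values used in the paper.
    """
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    return (h <= h_max) & (s >= s_min) & (v >= v_min)

def erode(mask, k=3):
    """Binary erosion with a k x k structuring element: a pixel survives
    only if its whole neighbourhood is set, which removes isolated blobs.
    """
    pad = k // 2
    padded = np.pad(mask, pad, constant_values=False)
    out = np.ones_like(mask, dtype=bool)
    for dy in range(k):
        for dx in range(k):
            out &= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out
```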
We trained the foreground generator and the background generator (used for extracting background images from the dataset) for 100 and 200 epochs respectively, with a batch size of 4. Figure 3 demonstrates the complete use-case of the network. Because segmentation masks are given as input to the model, the network is able to fully replicate the hand and fingertip in the foreground.
Figure 4 shows images with different background domains but the same mask layout as input. We observe that the synthesised images do not suffer from the artefacts seen in images generated by CycleGAN. However, the skin colour is slightly off in the fourth domain, perhaps owing to the texture of that background domain.
We extend this idea to generate egocentric gestures such as fingertip going down, up, left, and right. One such example has been demonstrated in Figure 5.
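Scripting a gesture such as "fingertip going down" amounts to feeding the generator a sequence of mask layouts in which the hand/fingertip mask is translated frame by frame. A minimal sketch of producing such a mask sequence (our illustration; the paper does not specify this procedure):

```python
import numpy as np

def gesture_mask_sequence(mask, n_frames, step=(1, 0)):
    """Produce `n_frames` mask layouts by shifting `mask` by `step`
    (rows, cols) per frame; e.g. step=(1, 0) moves the fingertip down.
    Each shifted mask would be fed to the generator to render one frame.
    """
    dy, dx = step
    return [np.roll(mask, (t * dy, t * dx), axis=(0, 1)) for t in range(n_frames)]
```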
4 Future Work
Our end goal is to generate photo-realistic videos with enough variability in background, lighting, and similar parameters to support the design, training, and benchmarking of hand-gesture recognition models; realising it will require a model that introduces variations both in background features and in some features of the hand itself. We would like to experiment with incorporating a recurrent network into the current framework, which could generate photo-realistic hand movements for any given spatio-temporal sequence describing an arbitrary input gesture. Finally, we observe that the background can change abruptly between consecutive frames, leading to jittery video, and we would like to explore ways of keeping the background coherent across frames.
We have demonstrated a network capable of synthesising photo-realistic videos and shown its efficacy by generating videos of hand gestures. We believe this will help in the creation of large-scale annotated datasets, which, in turn, would encourage the development of novel neural network architectures that can recognise hand gestures from single RGB streams without the need for specialised hardware such as multiple cameras and depth sensors.
References

- Unsupervised domain adaptation by domain invariant projection. In Proceedings of the IEEE International Conference on Computer Vision, pp. 769–776.
- (2014) Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680.
- A pointing gesture based egocentric interaction system: dataset, approach and application. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 16–23.
- Image-to-image translation with conditional adversarial networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5967–5976.
- (2016) The SYNTHIA dataset: a large collection of synthetic images for semantic segmentation of urban scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3234–3243.
- (2004) "GrabCut": interactive foreground extraction using iterated graph cuts. In ACM SIGGRAPH 2004 Papers, SIGGRAPH '04.
- A layer-based sequential framework for scene generation with GANs. In Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19).
- (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In Computer Vision (ICCV), 2017 IEEE International Conference on.
- (2017) Learning to estimate 3D hand pose from single RGB images. In 2017 IEEE International Conference on Computer Vision (ICCV), pp. 4913–4921.