Automated surgical gesture recognition aims at automatically identifying meaningful action units within the surgical tasks that constitute a surgical intervention. The process forms a fundamental step in the development of systems for surgical data science, objective skill evaluation [35, 25] and surgical automation [26, 23, 24]. The problem is challenging, however, because surgical gestures exhibit a high degree of variability due to multiple parameters of the operating surgeon's style and the patient's anatomy, which alter the duration, kinematics and order of actions across different demonstrations.
Much research in the field, however, is based on the premise that many surgical tasks have a well-defined structure and use specific action patterns to progress towards a surgical goal. Gesture flow has thus been described through task-specific probabilistic grammars, which have been modelled with powerful statistical tools such as graphical models [34, 30] and neural networks [17, 8]. This work investigates whether recognition performance improves when the progress of the surgical task is modelled explicitly and learnt jointly with the action sequence, resulting in a more discriminative feature extraction process.
The effectiveness of multi-task learning and surgical progress modelling has been demonstrated in previous work on surgical workflow analysis [37, 19], where the aim is to recognise surgical phases representing high-level surgical states. We adopt this approach for high-granularity gesture sequences and design a multi-task recurrent neural network for simultaneous gesture recognition and progress estimation. Differently from previous work, however, the task progress is based on the underlying action sequence rather than on time. We hypothesize that action-based progress estimation could help to learn action sequentiality despite duration variability and the presence of adjustment gestures and spurious motions, and thus reduce out-of-order predictions and over-segmentation errors. We also analyse different progress estimation strategies and highlight correlations between gesture and progress predictions.
We validate our algorithm on the kinematic data of the JIGSAWS dataset, featuring demonstrations of elementary surgical tasks collected from eight surgeons with different skill levels using the da Vinci Surgical System (dVSS, Intuitive Surgical Inc.). Our experiments show that gesture recognition performance improves in multi-task frameworks with progress estimation at no additional cost, as the progress labels can be generated automatically from the data and the available action labels.
I-A Related Work
Gesture recognition from robot kinematics has been tackled through probabilistic graphical models such as Hidden Markov Models (HMMs) [34, 30, 29] and Conditional Random Fields (CRFs) [31, 15, 22]. These, however, rely on frame-to-frame and segment-to-segment transitions only, ignoring long-range temporal dependencies in the surgical demonstrations. Deep learning techniques have recently been used to capture complex, long-distance patterns through hierarchies of temporal convolutional filters [17, 14], LSTM networks, or deep Reinforcement Learning (RL). In addition, unsupervised [13, 9] and weakly-supervised recognition have been demonstrated through clustering, which reduces the dependency on annotations, but at the expense of performance.
Surgical video, rather than kinematics, also embeds gesture information, which can be extracted with spatio-temporal CNNs, 3D CNNs, multi-scale temporal convolutions [18, 36], or hybrid encoder-decoder networks combining temporal-convolutional filters for local motion modelling with bidirectional LSTMs for long-range dependency memorization.
Finally, a number of studies have approached surgical workflow analysis through multi-task learning. Examples include systems for joint task and gesture classification, and models for joint phase recognition and tool detection or progress estimation. Phase recognition networks have also been pre-trained on auxiliary tasks such as prediction of the Remaining Surgery Duration (RSD) or estimation of the frame temporal order, aiming to improve understanding of the temporal progression of the surgical workflow. Such approaches show that multi-task learning and progress modelling are beneficial for surgical workflow understanding and could support fine-grained analysis that requires discriminative feature extraction.
We trained our network on the 39 suturing demonstrations of the JIGSAWS dataset, using the kinematic data (end-effector position, velocity, gripper angle) recorded at 30 Hz from the two Patient Side Manipulators (PSMs) of the dVSS. The trajectories were first smoothed with a low-pass filter (cut-off frequency: Hz) to suppress measurement noise, then normalized to zero mean and unit variance to compensate for the different units of measure. Finally, the data were re-sampled from 30 Hz to 5 Hz to shorten computation time.
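The preprocessing pipeline can be sketched as follows for a single kinematic channel (a toy example: the paper's low-pass filter is omitted, and the naive sample-dropping decimation is our simplification):

```python
def preprocess(signal, in_hz=30, out_hz=5):
    """Zero-mean / unit-variance normalization followed by decimation.

    `signal` is a list of floats for one kinematic channel. A proper
    anti-aliasing low-pass filter should be applied before decimation;
    it is left out here for brevity.
    """
    n = len(signal)
    mean = sum(signal) / n
    var = sum((x - mean) ** 2 for x in signal) / n
    std = var ** 0.5 or 1.0          # guard against constant channels
    normalized = [(x - mean) / std for x in signal]
    step = in_hz // out_hz           # 30 Hz -> 5 Hz keeps every 6th sample
    return normalized[::step]
```

In practice each of the kinematic channels would be normalized independently before the channels are stacked into the network input.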
In order to learn the task progress, new ground-truth labels were automatically generated from the available data and action labels. As a preliminary step, however, we carefully inspected the video recordings to identify possible imprecisions in the available annotations that would affect the automatic generation of our progress labels. We identified and corrected 12 mistakes, affecting a total of 2356 data samples. Amendments to the original annotations are reported in the Appendix.
As illustrated in Fig. 1, our definition of progress is dictated by the underlying action sequence. Out of the 10 original action labels from JIGSAWS, we identified five gestures that constitute essential progressive stages in any complete suturing demonstration (Reaching for the needle, Positioning the tip of the needle, Pushing the needle through the tissue, Pulling the suture, Dropping the suture), yielding a simplified probabilistic state machine that describes the commonly-observed workflow of the suturing task. The other classes represent adjustment gestures that serve to prepare for, or help to complete, the execution of the essential gestures, and that generally appear in variable order. We thus grouped the fundamental gestures (performed by either arm, even though JIGSAWS only features right-handed suturing demonstrations) and their corresponding adjustment gestures into 5 progress stages (from 0 to 4), as detailed below:
Progress 0: G1 Reaching for needle with right + G5 Moving to center of workspace
Progress 1: G2 Positioning the tip of the needle + G4 Transferring the needle from left to right before G2 + G8 Orienting the needle before G2
Progress 2: G3 Pushing the needle through the tissue + G4/G8 before G3
Progress 3: G6 Pulling the suture with left + G9 Using right hand to tighten suture + G10 Loosening more suture + G4/G8 before G6/G9/G10
Progress 4: G11 Dropping suture and moving to end points + G4/G8 before G11
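The grouping above can be sketched as a small labelling function (our own illustration: the "G4/G8 before Gx" rule is implemented as a backward lookahead, and the handling of trailing adjustment gestures is our assumption, not specified in the text):

```python
# Gestures that map directly to a progress stage (0-4).
DIRECT = {"G1": 0, "G5": 0, "G2": 1, "G3": 2,
          "G6": 3, "G9": 3, "G10": 3, "G11": 4}

def progress_labels(gestures):
    """Assign a progress stage (0-4) to each gesture of a demonstration.

    Adjustment gestures (G4, G8) inherit the stage of the next directly
    mapped gesture, matching the "G4/G8 before Gx" rule; trailing
    adjustments with no following essential gesture default to the
    previous stage (our assumption).
    """
    labels = [None] * len(gestures)
    next_stage = None
    for i in range(len(gestures) - 1, -1, -1):   # backward pass
        if gestures[i] in DIRECT:
            next_stage = DIRECT[gestures[i]]
        labels[i] = next_stage
    for i, v in enumerate(labels):               # fix trailing G4/G8
        if v is None:
            labels[i] = labels[i - 1] if i else 0
    return labels
```

For example, `["G1", "G8", "G2", "G3", "G4", "G6", "G11"]` maps to stages `[0, 1, 1, 2, 3, 3, 4]`: the G8 inherits stage 1 from the following G2, and the G4 inherits stage 3 from the following G6.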
As the task evolution in time is affected by numerous factors, such as surgical skill and surgical context, we believe that activity-based progress could be better than time-based progress at reducing the kinematic feature variation for equal progress values. Moreover, it could help to learn action sequentiality despite the presence of adjustment gestures, which occur with variable frequency and in uncertain order.
II-B Multi-task Recurrent Neural Network
Our multi-task architecture performs action recognition jointly with progress estimation. As the progress is quantized into 5 sequential steps, we estimate it using three different strategies: regression, standard classification and classification with ordered classes (or ordinal regression).
Notation: vectors are represented by bold lowercase letters (e.g. y), scalars by lowercase letters, and parameters and losses by uppercase letters (e.g. C).
As shown in Fig. 2, the kinematic features are fed to a single-layer bidirectional LSTM with 1024 hidden units. Activations from the forward and backward streams are concatenated into a 2048-dimensional vector, which is connected to the regression node through a Fully Connected (FC) layer with linear activation function. The same 2048 features are also projected by a second fully connected layer into 10 logits with softmax activation function for action classification.
At each training iteration, we compute the regression loss using the Mean Absolute Error (MAE) over individual demonstrations:

$L_{MAE} = \frac{1}{T} \sum_{t=1}^{T} \lvert p_t - \hat{p}_t \rvert$

and the classification loss using the Mean Cross Entropy (MCE) over individual demonstrations:

$L_{MCE} = -\frac{1}{T} \sum_{t=1}^{T} \sum_{c=1}^{C} \hat{y}_{t,c} \log y_{t,c}$

where $T$ is the demonstration length (number of samples), $C$ is the number of action classes, $p_t$ and $\mathbf{y}_t$ are the regression and prediction nodes' outputs at timestamp $t$, and $\hat{p}_t$ and $\hat{\mathbf{y}}_t$ are the corresponding ground truths.
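In plain Python, the two per-demonstration losses can be sketched as follows (hypothetical function names; a framework's built-in losses would normally be used):

```python
import math

def mae_loss(pred, target):
    """Mean Absolute Error over one demonstration (regression head)."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def mce_loss(probs, onehot):
    """Mean Cross Entropy over one demonstration (classification head).

    `probs[t][c]` is the softmax output for class c at timestamp t,
    `onehot[t][c]` the corresponding one-hot ground truth.
    """
    return -sum(y * math.log(p)
                for p_t, y_t in zip(probs, onehot)
                for p, y in zip(p_t, y_t) if y > 0) / len(probs)
```

Both losses average over the timestamps of a single demonstration, so demonstrations of different lengths contribute comparably during training.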
After model training, the regression output is rounded to the nearest integer for progress prediction, and the logit with largest activation is considered for action prediction.
To perform standard progress classification, we substitute the regression layer with a 5-logit fully connected layer with softmax activation function and MCE loss, thus obtaining a multi-hierarchical action recognition network (Fig. 2). After model training, the logit with largest activation is considered for progress prediction.
Standard classification considers independent categories and does not penalize major ordering mistakes. In order to represent the succession of progress classes, we thus encoded the target vectors with an ordinal formulation, as represented in Fig. 3, and substituted the categorical MCE loss with the Mean Binary Cross Entropy (MBCE) loss (i.e. sigmoid activation function and MCE loss). MBCE sets up an independent binary classifier for each class and, in combination with the ordinal target encoding, generates a larger loss the further the prediction is from its ground truth. After model training, progress predictions are obtained from the output of this classifier by finding the first index where the activation falls below 0.5.
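A minimal sketch of the ordinal encoding and the thresholded decoding (our own variant with `n_classes - 1` binary targets; the exact target layout of the cited formulation may differ):

```python
def ordinal_encode(stage, n_classes=5):
    """Encode progress stage k as k leading ones: stage 2 of 5 -> [1, 1, 0, 0]."""
    return [1 if i < stage else 0 for i in range(n_classes - 1)]

def ordinal_decode(sigmoids, threshold=0.5):
    """Predicted stage = index of the first sigmoid output below the threshold."""
    for i, s in enumerate(sigmoids):
        if s < threshold:
            return i
    return len(sigmoids)
```

With this encoding, predicting stage 0 when the ground truth is stage 4 flips four binary targets instead of one, which is what makes the binary cross-entropy grow with the ordinal distance of the mistake.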
In all three cases, the final multi-task loss $L_{mt}$ is a weighted combination of the two single-task losses $L_1$, $L_2$:

$L_{mt} = w_1 L_1 + w_2 L_2$
However, multi-task networks are generally difficult to train, as task imbalances may lead to the generation of shared features that are not useful across all tasks. In order to automatically balance our model training, we used the GradNorm algorithm for gradient normalization, which has been shown to improve accuracy and reduce overfitting across multiple tasks when compared to single-task networks. GradNorm dynamically updates the single-task loss weights ($w_1$, $w_2$) during training by optimizing an additional loss $L_{grad}$, which aims at regularizing the training rate of the individual tasks:

$L_{grad} = \sum_{i} \left\lvert G_i - \bar{G} \cdot r_i^{\alpha} \right\rvert$

where:

$G_i = \lVert \nabla_{\mathbf{w}} (w_i L_i) \rVert_2$ is the $L_2$-norm of the gradient of the weighted single-task loss with respect to the network weights $\mathbf{w}$;

$\bar{G}$ is the average gradient norm across all tasks;

$r_i = \tilde{L}_i / \overline{\tilde{L}}$ is the relative inverse training rate of task $i$, with $\tilde{L}_i = L_i / L_i(0)$, where $L_i(0)$ is the single-task loss at the first training iteration and $\overline{\tilde{L}}$ is the average loss ratio across all tasks;

$\alpha$ is a balancing hyperparameter to be tuned.
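The GradNorm regularization loss can be sketched as follows, treating the gradient-norm targets as constants as GradNorm prescribes (the function name and signature are our own; a real implementation would compute the gradient norms via autodiff):

```python
def gradnorm_loss(grad_norms, losses, init_losses, alpha=1.5):
    """GradNorm regularization loss for n tasks (simplified sketch).

    grad_norms[i]:  L2 norm of the gradient of the weighted task-i loss
    losses[i]:      current value of the (unweighted) task-i loss
    init_losses[i]: task-i loss at the first training iteration
    """
    n = len(losses)
    g_bar = sum(grad_norms) / n                       # average gradient norm
    inv_rates = [l / l0 for l, l0 in zip(losses, init_losses)]
    mean_rate = sum(inv_rates) / n
    r = [ir / mean_rate for ir in inv_rates]          # relative inverse rates
    # Targets g_bar * r_i**alpha are treated as constants (no gradient flows
    # through them); only the loss weights w_i are optimized on this loss.
    return sum(abs(g - g_bar * ri ** alpha) for g, ri in zip(grad_norms, r))
```

Tasks that train slower (larger $r_i$) are pushed towards larger gradient norms, i.e. larger loss weights, and vice versa.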
II-C Evaluation Setup
Following prior work, we evaluated our network's recognition performance using accuracy, i.e. the percentage of correctly labelled frames; the normalized segmental Edit score, which measures the precision of the predicted temporal ordering of actions; and the segmental F1@10 score, which penalizes over-segmentation errors but is not sensitive to minor temporal shifts between predictions and ground truth. Progress regression was evaluated with the MAE, normalized with respect to the full range of progress values.
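The segmental Edit score can be sketched as a Levenshtein distance over collapsed label segments (our own minimal implementation, not the evaluation code used in the paper):

```python
def edit_score(pred, gt):
    """Normalized segmental Edit score (%) between two framewise label sequences.

    Each sequence is first collapsed into its segments (runs of equal
    labels), then the segment sequences are compared with the
    Levenshtein distance, normalized by the longer segment count.
    """
    def segments(seq):
        return [s for i, s in enumerate(seq) if i == 0 or s != seq[i - 1]]
    p, g = segments(pred), segments(gt)
    m, n = len(p), len(g)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,            # deletion
                          d[i][j - 1] + 1,            # insertion
                          d[i - 1][j - 1] + (p[i - 1] != g[j - 1]))  # substitution
    return 100 * (1 - d[m][n] / max(m, n))
```

Because the score is computed on segments, an over-segmented prediction (many spurious short runs) is penalized even when most frames are labelled correctly.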
We followed the standard JIGSAWS Leave-One-User-Out (LOUO) cross-validation setup: for every fold, all the trials performed by a single user are held out as the test set and the remaining demonstrations are used to train our model.
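The LOUO folds can be generated with a few lines of Python (hypothetical `(user_id, trial_id)` trial representation):

```python
def louo_folds(trials):
    """Leave-One-User-Out folds over a list of (user_id, trial_id) pairs.

    Yields (held_out_user, train_trials, test_trials) per fold, one fold
    per distinct user.
    """
    users = sorted({u for u, _ in trials})
    for held_out in users:
        train = [t for t in trials if t[0] != held_out]
        test = [t for t in trials if t[0] == held_out]
        yield held_out, train, test
```

With the eight JIGSAWS surgeons this produces the 8 validation folds over which the reported scores are averaged.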
III Experiments and Results
Our multi-task network for joint action recognition and progress regression (APr) was trained on the multi-task loss using Gradient Descent (GD) with momentum 0.9, batch size 5 and initial learning rate 0.1. The multi-task architectures for standard progress classification (APc) and ordered progress classification (APoc) were instead trained with GD, batch size 5 and initial learning rate 1.0. In all cases we applied a learning rate decay of 0.5 after 80 iterations and stopped training after 120 iterations. We used gradient clipping to avoid exploding gradients and dropout regularization with a dropout rate of 0.5, as for the baseline (A). The single-task loss weights ($w_1$, $w_2$) were updated at a learning rate of 0.025 using GD on the regularization loss $L_{grad}$, with $\alpha$ set to 1.5. Testing was performed after 100, 110 and 120 training iterations and the results were averaged. All networks were trained on the pre-processed kinematic data with revised annotations.
A comparison between A, APr, APc and APoc is presented in Table I. Multi-task performance is evaluated with and without ($w_1 = w_2 = 1$) GradNorm regularization. Scores are reported as mean values across the 8 validation folds with the corresponding standard deviations, which are strongly representative of inter-surgeon style variability in the LOUO setup. All three multi-task architectures outperform the single-task baseline on the segmental scores (Edit and F1@10), which seems to confirm the hypothesis that action-based progress estimation could help to learn action sequentiality and to reduce out-of-order predictions and over-segmentation errors. Although none of the proposed architectures clearly stands out from the others, APoc generates slightly better results, which could be explained by its stronger penalization of major ordering mistakes compared to standard classification, and its easier optimization goal compared to regression of a discontinuous progress function. The architecture that benefits the most from multi-task gradient normalization is APr, as it is perhaps more challenging to balance two different loss functions (MCE for classification and MAE for regression) than two similar or identical ones. However, balanced multi-task networks rely on a large number of hyperparameters, including optimization parameters for the regularization loss. We believe that the results could be improved, and the differences between the three proposed architectures emphasized, with more extensive parameter tuning as well as with larger datasets.
Closer inspection of the predictions, however, showed that the advantage of the proposed method lies in the regularization of the predicted sequences, which mainly affects the segmental scores and only marginally the framewise evaluation metrics. Some gestures, such as G9 and G10, remain extremely challenging to recognize in both cases, as they are under-represented in the dataset.
In addition to recognising surgical gestures, our multi-task architectures segment the surgical demonstrations into 5 fundamental progressive steps of the suturing task, reaching an average accuracy of 89.1% with standard classification (Table I). For APr and APc, but not for APoc, all evaluation scores improved with respect to their single-task counterparts Pr and Pc (Pr, Pc and Poc were trained once with the same hyperparameters as their multi-task counterparts; learning rate decay was, however, applied earlier and training was stopped after 80 iterations): not only can higher-level progress understanding help gesture recognition, but gesture recognition can reciprocally boost progress prediction.
Fig. 6 illustrates an example of recognition output where the predictions generated by the multi-task network show reduced over-segmentation with respect to the baseline, as quantified by the segmental score improvement reported above. It is also interesting to visualize the relationship between gesture and progress predictions: the segmentation boundaries are frequently aligned (Fig. 7), and poor progress estimation often corresponds to poor gesture recognition, and vice versa (Fig. 8).
We also trained APc and APoc with the original annotations of JIGSAWS, in order to compare our multi-task models to the original single-task baseline and to related work on robot kinematics. Our investigation, however, was carried out on a simple LSTM architecture, and we suggest that the proposed multi-task approach could be applied on top of more complex architectures to boost performance. The results in Table II highlight the sensitivity of our models to action annotation noise, which partially spoils the automatic generation of progress labels. This results in performance degradation with respect to the previous experiments, especially for APoc. Nonetheless, the proposed networks significantly outperform the original baseline both in accuracy and Edit score, and reach competitive performance with respect to related work on robot kinematics.
Finally, we substituted the bidirectional LSTM cell in APc with a forward LSTM cell for online recognition. We reached an accuracy and Edit score of 82.2 and 76.2 respectively, improving upon the original single-task baseline (Table III).
Our results support the hypothesis that joint surgical gesture recognition and progress estimation can induce more robust feature learning than gesture recognition alone, and boost performance in both online and offline applications.
In this paper, we performed joint recognition of surgical gestures and progress prediction from robot kinematic data. Differently from prior work, the progress labels were defined on the underlying action sequence rather than on time, in order to reduce the kinematic feature variation for equal progress values. Moreover, adjustment gestures did not contribute to the progress advancement. We assumed that action-based progress prediction could help to recognize surgical gestures in well-structured tasks such as suturing and knot tying, which are generally performed several times during surgical interventions. We analysed different progress estimation strategies, and demonstrated on the suturing demonstrations of the JIGSAWS dataset that the proposed multi-task networks outperform the single-task baseline in terms of Edit score and F1@10 score, indicating a reduction in out-of-order predictions and over-segmentation errors. Since action-based progress depends neither on time nor on adjustment gestures, we conjecture this approach could also be effective beyond JIGSAWS in unconstrained environments, such as real surgical interventions or free surgical training sessions, where demonstrations do not have standardized length, right and left hands are often used interchangeably, and adjustment gestures, pauses and undefined motions are more frequent. In this scenario, the contextualization of surgical motion into high-level progress stages could help to better recognize the surgical actions. The limitation of this method, however, lies in the recognition of unstructured tasks such as blunt dissection, where action-based progress cannot be clearly defined. In the presence of frequent and scattered mid-task failures and restarts, the ordered classification method might also lose its advantage over the standard classification method.
As suggested in prior work, further investigation could be performed on alternative multi-task integration modalities, such as pre-training on the auxiliary task for feature extraction or fine-tuning on the target task. This might potentially match or even improve upon multi-task training, at the cost of additional training time. Another study could model the progress in time of the individual gestures, which could improve understanding of gesture evolution and duration. Moreover, the integration of visual features extracted from surgical videos could boost both action recognition and progress estimation, as video data encode complementary information about the surgical tools and the state of the environment.
Finally, evaluation of the proposed methodology was performed on the JIGSAWS dataset, which is currently the only publicly available dataset for surgical gesture recognition featuring robot kinematics. However, JIGSAWS is small and contains a limited range of surgical motions. New surgical data will be collected in the future, and extensive evaluation will be carried out on larger datasets of robotic surgical demonstrations.
Amendments to the original annotations of JIGSAWS:
References

-  (2017) A Dataset and Benchmarks for Segmentation and Recognition of Gestures in Robotic Surgery. IEEE Transactions on Biomedical Engineering.
-  (2017) Unsupervised temporal context learning using convolutional neural networks for laparoscopic workflow analysis. arXiv preprint arXiv:1702.03684.
-  (1996) Task and motion analyses in endoscopic surgery. In American Society of Mechanical Engineers, Dynamic Systems and Control Division DSC.
-  (2018) GradNorm: Gradient Normalization for Adaptive Loss Balancing in Deep Multitask Networks. International Conference on Machine Learning (ICML).
-  (2008) A neural network approach to ordinal regression. In IEEE International Joint Conference on Neural Networks, pp. 1279–1284.
-  (2016) Unsupervised Trajectory Segmentation for Surgical Gesture Recognition in Robotic Training. IEEE Transactions on Biomedical Engineering 63 (6), pp. 1280–1291.
-  (2017) TricorNet: A Hybrid Temporal Convolutional and Recurrent Network for Video Action Segmentation. arXiv preprint arXiv:1705.07818.
-  (2016) Recognizing Surgical Activities with Recurrent Neural Networks. In International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp. 551–558.
-  (2017) Soft Boundary Approach for Unsupervised Gesture Segmentation in Robotic-Assisted Surgery. IEEE Robotics and Automation Letters 2 (1), pp. 171–178.
-  (2019) Using 3D Convolutional Neural Networks to Learn Spatiotemporal Features for Automatic Surgical Gesture Recognition in Video. In International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI).
-  (2014) JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS): A Surgical Activity Dataset for Human Motion Modeling. Modeling and Monitoring of Computer Assisted Interventions (M2CAI) - MICCAI Workshop.
-  (2000) The Intuitive™ telesurgery system: overview and application. Robotics and Automation 1 (April), pp. 618–621.
-  (2015) Transition state clustering: Unsupervised surgical trajectory segmentation for robot learning. The International Journal of Robotics Research 36, pp. 1595–1618.
-  (2017) Temporal convolutional networks for action segmentation and detection. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1003–1012.
-  (2015) An Improved Model for Segmentation and Recognition of Fine-Grained Activities with Application to Surgical Training Tasks. IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1123–1129.
-  (2016) Segmental spatiotemporal CNNs for fine-grained action segmentation. European Conference on Computer Vision (ECCV), LNCS 9907, pp. 36–52.
-  (2016) Temporal convolutional networks: A unified approach to action segmentation. European Conference on Computer Vision (ECCV) Workshops, LNCS 9915, pp. 47–54.
-  (2018) Temporal Deformable Residual Networks for Action Segmentation in Videos. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6742–6751.
-  (2017) Progress Estimation and Phase Detection for Sequential Processes. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 1 (3), pp. 1–20.
-  (2018) Deep Reinforcement Learning for Surgical Gesture Segmentation and Classification. International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), LNCS 11073, pp. 247–255.
-  (2017) Surgical data science for next-generation interventions. Nature Biomedical Engineering 1, pp. 691–696.
-  (2018) End-to-end fine-grained action segmentation and recognition using conditional random field models and discriminative sparse coding. IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1558–1567.
-  (2015) Learning by observation for surgical subtasks: Multilateral cutting of 3D viscoelastic and 2D orthotropic tissue phantoms. IEEE International Conference on Robotics and Automation (ICRA), pp. 1202–1209.
-  (2019) A DVRK-based Framework for Surgical Subtask Automation. Acta Polytechnica Hungarica 16 (8), pp. 61–78.
-  (2009) Task versus Subtask Surgical Skill Evaluation of Robotic Minimally Invasive Surgery. International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp. 435–442.
-  (2010) Motion generation of robotic surgical tasks: Learning from expert demonstrations. Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 967–970.
-  (2017) An Overview of Multi-Task Learning in Deep Neural Networks. arXiv preprint arXiv:1706.05098.
-  (2018) Joint Surgical Gesture and Task Classification with Multi-Task and Multimodal Learning. arXiv preprint arXiv:1805.00721.
-  (2015) Learning Shared, Discriminative Dictionaries for Surgical Gesture Segmentation and Classification. Modeling and Monitoring of Computer Assisted Interventions (M2CAI) - MICCAI Workshop, pp. 1–10.
-  (2012) Sparse hidden Markov models for surgical gesture classification and skill evaluation. International Conference on Information Processing in Computer-Assisted Interventions, LNCS 7330, pp. 167–177.
-  (2013) Surgical gesture segmentation and recognition. International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), LNCS 8151, pp. 339–346.
-  (2017) EndoNet: A Deep Architecture for Recognition Tasks on Laparoscopic Videos. IEEE Transactions on Medical Imaging 36 (1), pp. 86–97.
-  (2019) Weakly Supervised Recognition of Surgical Gestures. In IEEE International Conference on Robotics and Automation (ICRA), pp. 9565–9571.
-  (2009) Data-Derived Models for Segmentation with Application to Surgical Assessment and Training. In International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp. 426–434.
-  (2016) Analysis of the Structure of Surgical Activity for a Suturing and Knot-Tying Task. PLOS ONE 11.
-  (2019) Atrous temporal convolutional network for video action segmentation. IEEE International Conference on Image Processing (ICIP), pp. 1585–1589.
-  (2018) Less is More: Surgical Phase Recognition with Less Annotations through Self-Supervised Pre-training of CNN-LSTM Networks. arXiv preprint arXiv:1805.08569.