Surgical activity recognition is an active research area. The potential of autonomous or human-robot collaborative surgeries, automated real-time feedback, and guidance and navigation during surgical tasks is exciting for the community. A number of studies address this open research problem, with methods ranging from SVM classification with Linear Discriminant Analysis (LDA) [1, 2] and hidden Markov models [3] to Gaussian mixture models (GMMs) [4, 5] and, more recently, deep learning approaches [6, 7, 8].
Some other recent studies focus on classifying surgery phases, a higher level of surgical activity that comprises a sequence of different tasks. A recent work by Cadène et al. [6] uses deep residual networks to extract visual features from the video frames, applies temporal smoothing with averaging, and finally models the transitions between surgery phases with an HMM. Twinanda et al. [8] offer a study on classification of surgery phases by first extracting visual features of video frames via a CNN and then passing them to an SVM to compute the confidence of a frame belonging to a surgery phase. Sarikaya et al. [10] propose a multi-modal, multi-task CNN-LSTM architecture for simultaneous gesture and surgical task classification; they argue that surgical tasks are better modeled by visual features determined by the objects in the scene, while gestures are better modeled with motion cues, though their performance remains on the lower end. Similarly, we argue that surgical gestures are best modeled with motion cues, as these cues can be characterized by the signature motions of surgeons.
Going through the more recent citations on the JIGSAWS benchmark, we noticed that a large number of promising studies rely heavily on kinematic data. Kinematic data requires additional recording devices, while a computer vision approach does not. We propose a computer-vision-only approach that uses dense optical flow as input, as an alternative to kinematic data, and we demonstrate that using optical flow information alone we achieve competitive results. While the setting and objects differ across tasks, the surgeon's motions and the transitions between them remain generic. For example, the common gesture Positioning the needle might take place in the different tasks of Suturing and Needle Passing. While the settings and objects differ between these tasks, the motions of Positioning the needle are identical and can be identified by their temporal dynamics. In this way, gestures can be defined by motion cues that are independent of the setting and the objects. Similarly, to differentiate between gestures in the same setting, we can use motion as a reliable identifier, whereas the advantage of visual cues for recognizing gestures in such a setting is less evident. In this paper, we adapt the Optical flow ConvNets initially proposed by Simonyan et al. [11]. While Simonyan et al. use both RGB frames and dense optical flow, we use only dense optical flow representations as input, to emphasize the role of motion in surgical gesture recognition and to present it as a robust alternative to kinematics and RGB frames. We also overcome one of the limitations of Optical flow ConvNets: Simonyan et al. initialize the weights of their spatial network on RGB frames with a model pretrained on ImageNet, but do not initialize the weights of their Optical flow ConvNets for lack of an alternative. We overcome this limitation by initializing our Optical flow ConvNets with the cross modality pre-training method proposed by Wang et al. [12]: using the pretrained ImageNet weights on RGB, we first average the weight values across the RGB channels and replicate this average across the channels of the motion stream input. This helps our model converge and avoid overfitting, a significant gain since we work with JIGSAWS [9], a small dataset (and, as far as we are aware, the only public dataset that provides gesture annotations, i.e., low-level atomic surgical activities). Using this simple model, we obtain competitive results on the JIGSAWS gesture classification task. We evaluate our model using JIGSAWS's Leave-one-supertrial-out (LOSO) cross-validation scheme and compare our results to the benchmark [13].
2.1 Optical flow ConvNets
Our choice of dense optical flow for gesture recognition is supported by the findings of Karpathy et al. [14], who show that a network operating on individual RGB video frames performs similarly to networks whose input is a stack of RGB frames; the spatio-temporal features learnt by such RGB-frame models thus do not capture motion well. Similar to the architecture proposed by Simonyan et al. [11] (Optical flow ConvNets), the input to our model is formed by stacking dense optical flow displacement fields between several consecutive frames (we chose the number of consecutive frames in accordance with the proposed model [11]). The dense optical flow is computed using the Farneback method [15], which solves for a displacement field at multiple image scales; we chose Farneback because its computation is fast. For each frame pair we get a 2-channel array of optical flow vectors, representing the horizontal and vertical components of the vector field respectively, which results in a total of 20 channels for each input (L=10 flow fields, 2 channels each) (Figure 2). We find their magnitude and direction, and save these as two separate grayscale image representations, rescaled to a [0, 255] range and compressed using JPEG. We adapt a BN-ResNet101, a ResNet with Batch Normalization (BN), where BN normalizes the value distribution before it goes into the next layer [16, 17]. Since we work with a small dataset, we also add a dropout layer after the pooling layer of the ResNet to counter overfitting; BN normally addresses this problem, but on smaller datasets using BN alone can still lead to overfitting. We also apply additional normalization, such as scaling the dense optical flow grayscale representations, and data augmentation, such as taking five random crops from each frame. Moreover, since we take chunks of L=10 consecutive frames from each video, this can be considered further data augmentation, a means of random cropping for videos similar to random cropping of images. As mentioned earlier, we initialize our Optical flow ConvNets with the cross modality pre-training method proposed by Wang et al. [12]: using the pretrained ImageNet weights on RGB, we first average the weight values across the RGB channels and replicate this average across the channels of the motion stream input, in our case 20. This helps us greatly in overcoming overfitting and addresses the limitation of Optical flow ConvNets noted by Simonyan et al. [11].
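Cross modality pre-training can be sketched in a few lines. This is a minimal numpy illustration (not the authors' released code) of the weight transformation described above: the pretrained RGB first-layer filters are averaged across their three input channels, and the average is replicated to match the 20-channel flow input.

```python
import numpy as np

def cross_modality_init(rgb_conv1_weights, flow_channels=20):
    """Adapt pretrained RGB first-layer weights of shape (out, 3, k, k)
    to a flow input with `flow_channels` channels: average the weights
    across the RGB channels, then replicate the average."""
    avg = rgb_conv1_weights.mean(axis=1, keepdims=True)  # (out, 1, k, k)
    return np.repeat(avg, flow_channels, axis=1)         # (out, 20, k, k)

# Toy "pretrained" weights: 64 filters, 3 input channels, 7x7 kernels.
w_rgb = np.random.randn(64, 3, 7, 7).astype(np.float32)
w_flow = cross_modality_init(w_rgb)
print(w_flow.shape)  # (64, 20, 7, 7)
```

In a deep learning framework, the resulting tensor would simply be copied into the first convolution of the motion stream before training begins.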
The JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS) [9] provides a public benchmark surgical activity dataset. In this video dataset, surgeons with varying expertise perform surgical tasks on the daVinci Surgical System (dVSS) (Figure 6). The dataset includes video data captured from the endoscopic cameras of the dVSS. The videos are recorded during the performance of three tasks: Suturing, Needle Passing and Knot Tying, and the dataset provides low-level gesture labels, the smallest action units in which the movement is intentional and carried out towards a specific goal. The gestures form a common vocabulary for small action segments that recur in different tasks. A list of the gestures is given in Table 1.
|Gesture|Description|
|---|---|
|G1|Reaching for needle with right hand|
|G3|Pushing needle through tissue|
|G4|Transferring needle from left to right|
|G5|Moving to center with needle in grip|
|G6|Pulling suture with left hand|
|G7|Pulling suture with right hand|
|G9|Using right hand to help tighten suture|
|G10|Loosening more suture|
|G11|Dropping suture at end and moving to end points|
|G13|Making C loop around right hand|
|G14|Reaching for suture with right hand|
|G15|Pulling suture with both hands|
3.2 Preprocessing Data
We first clip the task videos into gesture clips using the start and end frame information for each gesture. Then we extract each RGB frame (resized) at a fixed rate. For clips with a duration of under a second, we extract frames at a higher rate instead; this way we partially balance the dataset. We excluded the few clips left with too few frames; we also noticed that these may be wrongly annotated, as a gesture loop is not completed in the given timeframes. We rescale our input data and take five random crops from each frame.
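The clipping step above can be sketched as follows. This is an illustrative helper, not the authors' script; the transcript format and the minimum-length threshold are placeholders (the paper's exact threshold is not stated here).

```python
def gesture_clips(transcript, min_frames=30):
    """Split a task video's gesture transcript, a list of
    (start_frame, end_frame, gesture_label) tuples, into per-gesture
    clips, dropping clips below a minimum length.  The threshold of
    30 frames is a placeholder, not the paper's value."""
    return [(s, e, g) for (s, e, g) in transcript if e - s + 1 >= min_frames]

# Example transcript: one long gesture kept, one too-short clip dropped.
clips = gesture_clips([(0, 100, "G1"), (101, 110, "G2")])
print(clips)  # [(0, 100, 'G1')]
```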
After extracting the frames, we compute the dense optical flow between each pair of consecutive frames using the method described by Farneback [15]. Although there are more recent and more precise methods, we chose the Farneback method because it is fast. We get a 2-channel array of optical flow vectors, representing the horizontal and vertical components of the vector field. We find their magnitude and direction and save these as two separate grayscale image representations, as described in Methods.
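The encoding of a flow field into the two grayscale images can be sketched as below. This is an assumed implementation of the magnitude/direction encoding, not the authors' code; in practice the `flow` array would come from OpenCV's `cv2.calcOpticalFlowFarneback`, which is kept out of this sketch so it stays dependency-light.

```python
import numpy as np

def flow_to_grayscale(flow):
    """Encode a dense flow field of shape (H, W, 2) as two uint8
    grayscale images, magnitude and direction, rescaled to [0, 255].
    The per-image max normalization here is an assumption."""
    fx, fy = flow[..., 0], flow[..., 1]
    mag = np.sqrt(fx ** 2 + fy ** 2)
    ang = np.arctan2(fy, fx)  # direction in [-pi, pi]
    mag_img = np.clip(255.0 * mag / (mag.max() + 1e-8), 0, 255).astype(np.uint8)
    ang_img = ((ang + np.pi) / (2 * np.pi) * 255.0).astype(np.uint8)
    return mag_img, ang_img
```

The two images would then be JPEG-compressed and stacked across L consecutive frame pairs to form the network input.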
3.3 Experimental Setup
JIGSAWS comes with an experimental setup that can be used to evaluate automatic surgical gesture classification methods. Leave-one-supertrial-out (LOSO) is one of the cross-validation schemes included in this setup. In this scheme, supertrial i is defined as the set consisting of the i-th trial from all subjects for a given surgical task. The LOSO setup provides five folds, each comprising the data from one of the five supertrials, and can be used to evaluate the robustness of a method by leaving out the i-th repetition for all subjects. Although this setup has Train and Test splits for each fold, it does not provide a separate Validation split. To ensure the robustness of our model and to avoid overfitting, we split the Train list once more, leaving a random trial out to use as the Validation set. This way, we make sure that no videos belonging to the same trial appear in both Train and Validation.
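The fold construction described above can be sketched as plain Python. This is a minimal illustration under assumed naming (subjects as strings, trials numbered 1 to 5), not the JIGSAWS distribution's own split files.

```python
import random

def loso_folds(subjects, n_trials=5):
    """Leave-one-supertrial-out: fold i tests on trial i of every
    subject.  From each fold's Train split, one further random trial
    (across all subjects) is held out as Validation, so no trial
    appears in both Train and Validation."""
    folds = []
    for held_out in range(1, n_trials + 1):
        test = [(s, held_out) for s in subjects]
        train = [(s, t) for s in subjects
                 for t in range(1, n_trials + 1) if t != held_out]
        val_trial = random.choice(
            [t for t in range(1, n_trials + 1) if t != held_out])
        val = [(s, t) for (s, t) in train if t == val_trial]
        train = [(s, t) for (s, t) in train if t != val_trial]
        folds.append((train, val, test))
    return folds
```

Each `(subject, trial)` pair would then be mapped back to the corresponding gesture clips for that trial.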
We carried out our experiments on a TITAN X (Pascal architecture) GPU and an Intel Xeon(R) CPU E5 at 3.50 GHz x8 with 31.2 GiB of memory. We train with a Stochastic gradient descent (SGD) optimizer and decrease the learning rate in steps by a fixed factor (gamma). Our main focus was to overcome overfitting and to ensure a network that generalizes; in addition, we experimented in a grid-search manner to further optimize our model, though a broader grid search leaves room for improvement. For testing, we uniformly sample frames from each video, and the video-level prediction is the voting result (averaging) of all frame-level predictions. Training per mini-batch is fast, and an epoch of testing takes longest for Suturing videos, which are the longest in the dataset, and less for the shorter Knot Tying videos. We train our models for a fixed number of epochs, though they converge earlier.
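The test-time voting step can be sketched as follows. This is an assumed implementation of uniform frame sampling followed by score averaging, not the authors' code; the sample count is illustrative, since the paper's exact value is not given here.

```python
import numpy as np

def video_prediction(frame_scores, n_samples=25):
    """Video-level class prediction: uniformly sample frame-level
    class scores (shape (n_frames, n_classes)) and average them.
    `n_samples` is an illustrative value."""
    n = len(frame_scores)
    idx = np.linspace(0, n - 1, num=min(n_samples, n)).astype(int)
    return int(np.argmax(frame_scores[idx].mean(axis=0)))

# Example: 100 frames, 3 classes, every frame favors class 1.
scores = np.zeros((100, 3))
scores[:, 1] = 1.0
print(video_prediction(scores))  # 1
```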
The methods we use are easy to reimplement, as the authors of the methods we build on (Optical flow ConvNets, cross modality pre-training, ResNet, BN) [11, 12, 16, 17] share their code publicly. Dense optical flow computation [15] is available in OpenCV's library. In the future, we may release additional scripts, experimentation splits and information on the third-party software used, to make the experiments easier to reproduce.
Our experimental results are shown in Table 2, where a comparison with the JIGSAWS benchmark [13] is made. Our Optical flow ConvNets model, using dense optical flow information only, significantly outperforms the benchmark studies on the Suturing task. Our model also performs more robustly than the benchmark studies, as our precision has a smaller standard deviation. Our model can be extended to activity segmentation, and our competitive results suggest that optical flow information can serve as an alternative to kinematic data.
|Method (Data Type)|Evaluation|Suturing|Needle Passing|Knot Tying|
|---|---|---|---|---|
|LDS (kin)|Prec. ± std|73.30 ± 28.41|52.91 ± 17.31|76.07 ± 18.72|
|LDS (vid)|Prec. ± std|82.26 ± 29.59|73.40 ± 15.09|91.67 ± 7.10|
|GMM-HMM (kin)|Prec. ± std|81.20 ± 30.42|73.60 ± 20.43|91.52 ± 7.41|
|Ours (vid/opt flow)|Prec. ± std|91.07 ± 0.67|74.25 ± 3.66|87.78 ± 3.44|
-  H.C. Lin et al., “Automatic detection and segmentation of robot-assisted surgical motions”, Proc. Medical Image Computing and Computer-Assisted Intervention (MICCAI), vol. 3749, pp. 802-810, (2005)
-  H.C. Lin et al., “Towards automatic skill evaluation: detection and segmentation of robot-assisted surgical motions”, Computer Aided Surgery, vol. 11, pp. 220-230, (2006)
-  J.J.H. Leong et al., “HMM assessment of quality of movement trajectory in laparoscopic surgery”, Proc. Medical Image Computing and Computer-Assisted Intervention (MICCAI), vol. 4190, pp. 752-759, (2006)
-  G. Z. Yang et al., “Data-derived models for segmentation with application to surgical assessment and training”, Proc. Medical Image Computing and Computer-Assisted Intervention (MICCAI), vol. 5761, pp. 426-434, (2009)
-  B. Varadarajan et al., “Learning and inference algorithms for dynamical system models of dextrous motion”, PhD thesis, Johns Hopkins University, (2011)
-  Rémi Cadéne, Thomas Robert, Nicolas Thome, and Matthieu Cord, “M2CAI workflow challenge: convolutional neural networks with time smoothing and hidden markov model for video frames classification”, Computing Research Repository (CoRR), abs. 1610.05541, (October 2016)
-  R. DiPietro et al., “Recognizing surgical activities with recurrent neural networks”, Proc. Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp. 551-558, (2016)
-  A. P. Twinanda, D. Mutter, J. Marescaux, M. de Mathelin, and N. Padoy, “Single and multi-task architectures for surgical workflow challenge” , Proc. Workshop and Challenges on Modeling and Monitoring of Computer Assisted Interventions (M2CAI) at Medical Image Computing and Computer-Assisted Intervention (MICCAI), (2016)
-  Gao, Y., Vedula, S.S., Reiley, C.E., Ahmidi, N., Varadarajan, B., Lin, H.C., Tao, L., Zappella, L., Bejar, B., Yuh, D.D., Chen, C.C.G., Vidal, R., Khudanpur, S., Hager, G.D.: The JHU-ISI gesture and skill assessment working set (JIGSAWS): a surgical activity dataset for human motion modeling, Proc. Modeling and Monitoring of Computer Assisted Interventions (M2CAI), (2014)
-  Sarikaya, D., Corso, J. J., Guru, K. A.: Joint Surgical Gesture and Task Classification with Multi-Task and Multimodal Learning, Computing Research Repository (CoRR), abs. 1805.00721, (May 2018)
-  Simonyan, K. , Zisserman, A.: Two-Stream Convolutional Networks for Action Recognition in Videos, Advances in Neural Information Processing Systems, vol. 27, pp. 568-576, (2014)
-  Wang, L., Xiong, Y., Wang Z., Qiao, Y., Lin, D., Tang, X., Van Gool, L.,: Temporal Segment Networks: Towards Good Practices for Deep Action Recognition, Proc. of European Conference on Computer Vision (ECCV), (2016)
-  N. Ahmidi et al., “A dataset and benchmarks for segmentation and recognition of gestures in robotic surgery”, IEEE Transactions on Biomedical Engineering, (2017)
-  Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L.: Large-scale video classification with convolutional neural networks, Proc. of Conference on Computer Vision and Pattern Recognition (CVPR), (2014)
-  Farneback, G.: Two-frame motion estimation based on polynomial expansion, Lecture Notes in Computer Science, vol. 2749, pp. 363-370, (2003)
-  He, K., Zhang, X., Ren, S., Sun, J.: Deep Residual Learning for Image Recognition, Proc. of Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778, (2016)
-  Ioffe, S., Szegedy, C.: Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, Proceedings of the 32nd International Conference on Machine Learning (ICML), (2015)
-  Donahue, J., Hendricks, A.L., Guadarrama, S., Rohrbach, M., Venugopalan, S., Saenko, K., Darrell, T.: Long-term recurrent convolutional networks for visual recognition and description, Proc. of Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2625-2634, (2015)