LRTD: Long-Range Temporal Dependency based Active Learning for Surgical Workflow Recognition

04/21/2020 ∙ by Xueying Shi et al. ∙ The Chinese University of Hong Kong

Automatic surgical workflow recognition in video is a fundamental yet challenging problem for developing computer-assisted and robotic-assisted surgery. Existing deep learning approaches have achieved remarkable performance on surgical video analysis, but they rely heavily on large-scale labelled datasets. Unfortunately, such annotations are rarely available in abundance, because they require the domain knowledge of surgeons. In this paper, we propose a novel active learning method for cost-effective surgical video analysis. Specifically, we propose a non-local recurrent convolutional network (NL-RCNet), which introduces a non-local block to capture the long-range temporal dependency (LRTD) among continuous frames. We then formulate an intra-clip dependency score to represent the overall dependency within a clip. By ranking the scores of clips in the unlabelled data pool, we select the clips with weak dependencies to annotate, which are the most informative ones for network training. We validate our approach on a large surgical video dataset (Cholec80) for the surgical workflow recognition task. With our LRTD based selection strategy, we outperform other state-of-the-art active learning methods, and using only up to 50% of the samples, our approach can exceed the performance of full-data training.







1 Introduction

Computer-assisted surgery and robotic-assisted surgery have developed dramatically in recent years towards powerful support for the demanding scenarios of the modern operating theatre, which confronts the surgeon with highly complicated and extensive information [7, 13]. Automatic surgical workflow recognition is a fundamental and crucial visual perception problem for computer-assisted surgery, which can enhance cognitive understanding of the surgical procedures in operating rooms [6, 8]. With accurate recognition of surgical phases from endoscopy videos of minimally invasive surgery, a wide variety of downstream applications can benefit from such context-awareness. For instance, intra-operative recognition helps generate adequate notifications and alert to future complications by detecting rare cases and unexpected variations [5, 8]. Real-time phase identification can potentially support decision making, team collaboration and surgical process optimization during intervention [17, 10, 4]. It can also assist in automatically indexing video databases for surgical report documentation, which contributes to developing post-operative tools for archiving, skill assessment and surgeon training [1, 26]. In this regard, enhancing automatic workflow recognition of surgical procedures is essential in computer-assisted surgery for improving surgeon performance and patient safety.

Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) have been widely utilized for workflow recognition from surgical video, and have demonstrated appealing efficacy in modeling spatio-temporal features for this task. Existing successes achieved by deep learning models for workflow recognition are mostly based on fully supervised learning using frame-wise annotations [21, 14, 15]. For instance, Twinanda et al. [21] build a CNN to capture visual information of each frame, followed by a hierarchical hidden Markov model (HMM) for temporal information refinement. Jin et al. [14] design an end-to-end recurrent convolutional model to jointly extract spatio-temporal features of video clips, where a CNN module captures frame-wise visual information and an LSTM (long short-term memory) module models the clip-wise sequential dynamics. However, these methods rely heavily on a large amount of data with extensive annotations to train the network. Notably, frame-wise annotations for surgical videos are quite expensive, as they require expert knowledge and are highly time-consuming and tedious, especially when the surgery lasts for hours.

With increasing awareness of the impediment posed by the unavailability of large-scale labeled video data, some works investigate semi-supervised learning to reduce annotation cost [24, 3, 18, 11, 25]. It can assist network training and promote prediction performance, with the demonstration that networks can learn a representation of certain inherent characteristics of the data by first being trained towards generated labels on an auxiliary task [9]. Other semi-supervised methods use self-supervision with only a small portion of available labels [24]. Unfortunately, such semi-supervised methods cannot make full use of the annotation workload, because the data to be labelled are not carefully selected. In addition, the current performance of semi-supervised learning is still less competitive than that of fully supervised learning, which impedes clinical application in practice.

Instead, we explore sample mining techniques to incrementally enlarge the annotated database, so as to achieve state-of-the-art workflow recognition accuracy with minimal annotation cost. We investigate the direction of active learning [19], which has been frequently revisited in the deep learning era to learn models in a more cost-effective way. Its effectiveness has been verified by successes in several medical image analysis scenarios (e.g., myocardium segmentation from MR images, gland segmentation from pathological data and disease classification from chest X-rays [16, 27, 23, 28]), while it is less studied in the context of surgical video analysis. The current state-of-the-art work of Bodenstedt et al. [2] uses active learning to iteratively select representative surgical sequences to annotate and progressively promote workflow recognition performance. They first estimate the uncertainty of each frame according to the likelihoods predicted by a recurrent deep Bayesian network (DBN). The method then divides each video into segments with a length of five minutes, and selects the most uncertain segments by averaging or maximizing the predictive entropy of all the frames within a segment. High uncertainty indicates that the segments are hard for the network to recognize, which in turn demonstrates their highly informative character. Bodenstedt et al. assume that these samples are the most informative ones for annotation query, as they are key to learning the model more effectively and efficiently.
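The entropy-based segment scoring described above can be sketched as follows; `segment_uncertainty` is a hypothetical helper, not code from [2], and it assumes per-frame class probabilities are already available from the Bayesian network.

```python
import numpy as np

def segment_uncertainty(frame_probs, reduce="mean"):
    """Reduce frame-wise predictive entropy over one video segment.

    frame_probs: (T, C) array of per-frame class probabilities.
    reduce: "mean" averages the frame entropies, "max" takes the largest one.
    """
    eps = 1e-12  # avoid log(0)
    entropy = -np.sum(frame_probs * np.log(frame_probs + eps), axis=1)  # (T,)
    return float(entropy.mean() if reduce == "mean" else entropy.max())
```

Segments with the highest scores would then be queried for annotation.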

However, this previous active learning strategy selects video clips according to frame-wise uncertainty, where the uncertainty is first calculated separately for each single frame and then simply averaged or maximized to represent the entire clip. Given that surgical video is a form of sequential data, leveraging the cross-frame dependency to calculate an intra-clip dependency for sample selection is crucial for accurate workflow recognition. Modeling the frame dependency within video clips can help to better identify severely blurred and noisy samples, which normally show weak dependency with common surgical scenes. It can also help to select clips with significant intra-class variance, whose dependency is quite low. Moreover, if there exists strong dependency within one clip, there is no need for the network to be trained with the entire clip, as such a clip contains largely redundant information. We therefore incorporate non-local operations, which can capture long-range temporal dependency across time steps [22]. Recurrent operations like LSTM process a local neighborhood in the time dimension, so long-range dependencies can only be captured when these operations are applied repeatedly, propagating signals progressively through the data. However, repeating local operations has several limitations: it is computationally inefficient and causes optimization difficulties. These challenges further make multi-hop dependency modeling difficult, e.g., when information needs to be delivered back and forth between distant time steps.

In this paper, we propose a novel active learning method to improve annotation efficiency for workflow recognition from surgical videos. We design a non-local recurrent convolutional network (NL-RCNet), which builds a non-local block on top of a CNN-LSTM framework to capture long-range temporal dependency (LRTD) within video clips. Such long-range temporal dependency can indicate the cross-frame dependencies among all the frames in a clip, without the limitation of time intervals. Based on the constructed dependency matrix of a clip, we propose to calculate an intra-clip dependency score to represent the overall dependency of this clip. By ranking the scores of available video clips in the unlabelled data pool, we select the clips with lower scores and weaker dependencies to annotate, which are more informative and better benefit network training. To the best of our knowledge, we are the first to model clip-wise dependency for sample selection in active learning for surgical video recognition tasks. As opposed to other approaches, which select complete videos or individual frames, we select clips of 10 consecutive frames sampled at 1 fps. We extensively validate our proposed NL-RCNet on the popular public surgical video dataset Cholec80. Our approach achieves superior workflow recognition performance over existing state-of-the-art active learning methods. By only requiring 50% of the clips to be labelled, our method can surpass its fully supervised counterpart, which endorses its potential value in clinical practice.

2 Method

In this section, we introduce our long-range temporal dependency (LRTD) based active learning method for the surgical workflow recognition task, illustrated in Fig. 1. We first train the non-local recurrent convolutional network with an annotated set, which is initialized with randomly selected data from the unlabelled sample pool. Next, we set up the active learning process by iteratively selecting samples and updating the model.
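The iterative procedure can be summarized by the generic pool-based loop below; `train_fn` and `score_fn` are placeholders for NL-RCNet training and the intra-clip dependency scoring of Section 2.3, and the fractions are illustrative defaults, not values from the paper.

```python
import random

def active_learning_loop(pool, train_fn, score_fn,
                         init_frac=0.1, select_frac=0.1, rounds=4, seed=0):
    """Pool-based active learning sketch: train on a small random labelled set,
    then repeatedly move the lowest-scoring (weakest-dependency) clips from the
    unlabelled pool into the labelled set and retrain."""
    rng = random.Random(seed)
    pool = list(pool)
    rng.shuffle(pool)
    n_init = max(1, int(init_frac * len(pool)))
    labelled, unlabelled = pool[:n_init], pool[n_init:]
    model = train_fn(labelled)
    for _ in range(rounds):
        if not unlabelled:
            break
        # Rank unlabelled clips by ascending score: weakest dependency first.
        ranked = sorted(unlabelled, key=lambda clip: score_fn(model, clip))
        n_pick = max(1, int(select_frac * len(pool)))
        labelled += ranked[:n_pick]    # annotate the most informative clips
        unlabelled = ranked[n_pick:]
        model = train_fn(labelled)     # retrain with the enlarged set
    return model, labelled
```

With an initial 10% and four 10% selection rounds, the loop ends with half the pool labelled, matching the paper's final annotation budget.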

Figure 1: The overview of our proposed non-local recurrent convolutional network (NL-RCNet) to capture long-range temporal dependency (LRTD) within a video clip for surgical workflow recognition. The output of the LSTM unit flows into the following non-local block.

2.1 Non-local Recurrent Convolutional Network (NL-RCNet)

As illustrated in Fig. 1, we design a non-local recurrent convolutional network to serve as the foundation for active learning. To handle complex surgical environments, we employ a recurrent convolutional network to extract highly discriminative spatio-temporal features from surgical videos. We exploit a deep 50-layer residual network (ResNet50) [12] to extract high-level visual features from each frame and harness an LSTM network to model the temporal information of sequential frames. We then seamlessly integrate these two components into an end-to-end recurrent convolutional network, so that the complementary visual and temporal features can be sufficiently encoded for more accurate recognition. On top of this high-quality feature, we employ a non-local block to capture the long-range temporal dependency of frames within each clip. Different from the progressive behavior of convolutional and recurrent operations, non-local operations can directly compute interactions between any two positions in a clip, regardless of their positional distance. Therefore, the block can enhance feature distinctiveness for better workflow recognition, with the capability of deducing cross-frame dependencies over arbitrary intervals. Moreover, the non-local block can construct the dependency of each frame in a clip from the captured long-range temporal dependency. This advantage plays an even more important role in our active learning system, as detailed in Section 2.3.

2.2 Long-Range Temporal Dependency (LRTD) Modeling with Non-local Block

We introduce the non-local operation for modeling long-range temporal dependency of video clips. This section describes how we formulate the non-local operation and design a non-local block that can be integrated into the entire framework. Our non-local operation design follows [22].

The non-local operation is defined as:

    y_i = (1 / C(x)) · Σ_{∀j} f(x_i, x_j) g(x_j)    (1)

Here i is the index of the output time step whose response is to be computed, and j is the index that enumerates all possible time steps. x is the input signal and y is the output signal of the same size as x. Note that x is the high-level spatio-temporal feature output by our CNN-LSTM architecture (c in Fig. 1), forming a strong base for better non-local dependency modeling. The pairwise function f computes a scalar between x_i and all x_j. The unary function g computes a representation of the input signal at time step j. The response is normalized by a factor C(x). The non-local behavior in Eq. 1 is due to the fact that all time steps (∀j) in one clip are considered in the operation. In comparison, a recurrent operation only sums up the weighted input from adjacent frames.
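A minimal numpy sketch of Eq. 1 with the embedded-Gaussian pairwise function (formalized below in Section 2.2); the weight matrices stand in for the learned embeddings θ, φ and g, and the normalization by C(x) is implemented as a softmax over j, as noted in [22].

```python
import numpy as np

def nonlocal_op(x, w_theta, w_phi, w_g):
    """Embedded-Gaussian non-local operation over the time steps of one clip.

    x: (T, C) clip feature; w_theta, w_phi, w_g: (C, C') embedding weights.
    Returns y of shape (T, C') and the (T, T) normalized dependency matrix."""
    theta, phi, g = x @ w_theta, x @ w_phi, x @ w_g
    logits = theta @ phi.T                        # pairwise θ(x_i)ᵀ φ(x_j)
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability only
    f = np.exp(logits)
    attn = f / f.sum(axis=1, keepdims=True)       # divide by C(x): softmax over j
    return attn @ g, attn                         # y_i = Σ_j attn_ij · g(x_j)
```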

Figure 2: Non-local block design along time dimension. The intermediate generated dependency matrix of non-local block can be utilized to represent cross-frame dependency among all the frames in a clip.

Next we describe the calculation of the functions g and f in our non-local operator. g is defined as a linear embedding: g(x_j) = W_g x_j, where W_g is a model parameter to be learned. It is implemented by a 1D convolution to model the representation along the time dimension. For the function f, we choose the Embedded Gaussian to compute similarity in an embedding space, in our case the similarity of embedding features at different time steps:

    f(x_i, x_j) = e^{θ(x_i)ᵀ φ(x_j)}    (2)

where θ(x_i) = W_θ x_i and φ(x_j) = W_φ x_j are two embeddings. The normalization factor in Eq. 1 is set as C(x) = Σ_{∀j} f(x_i, x_j).

The non-local operation of Eq. 1 is then wrapped into a non-local block, which can be easily incorporated into our CNN-LSTM architecture. We illustrate the non-local block in Fig. 2. We first obtain the feature x generated by our CNN-LSTM framework. It is a B × C × T matrix (B: batch size, C: channel number, T: clip length), which describes the feature of a 10-second video clip. Following Eq. 1, we calculate y, where the pairwise computation of f is done by matrix multiplication, as shown in Fig. 2. In our designed non-local block, we then connect y and x with a residual connection by element-wise addition [12]. Note that the residual connection allows us to insert our non-local block into any pre-trained model without breaking its initial behavior (e.g., if W_z is initialized as zero). The overall definition is as follows:

    z_i = W_z y_i + x_i    (3)
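The residual wrapping of Eq. 3 can be sketched as below; with W_z initialized to zero the block reduces to the identity, which is why it can be inserted into a pre-trained model without breaking its initial behavior.

```python
import numpy as np

def nonlocal_block_output(x, y, w_z):
    """Eq. 3 applied to a whole clip at once: z_i = W_z y_i + x_i.

    x: (T, C) input feature, y: (T, C') non-local output, w_z: (C', C)."""
    return y @ w_z + x  # residual connection via element-wise addition
```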
In the practical implementation of the non-local block, we follow the design in [22] and utilize a simple yet effective subsampling strategy to reduce the computation workload when modeling the dependency among frames. Concretely, we modify Eq. 1 as y_i = (1 / C(x̂)) · Σ_{∀j} f(x_i, x̂_j) g(x̂_j), where x̂ is a subsampled version of x. As shown in Fig. 2, we add a max pooling layer after φ and g to achieve this. Note that this strategy does not alter the non-local behavior; instead, it makes the computation sparser by reducing the amount of pairwise computation to 1/4.
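The subsampled dependency matrix can be sketched as follows, max pooling along time by a factor of 2 before the φ branch; the pooling factor and weight shapes are illustrative, and the normalization again becomes a softmax over the subsampled index.

```python
import numpy as np

def subsampled_dependency(x, w_theta, w_phi, pool=2):
    """Dependency matrix f(x_i, x̂_j) with x̂ a temporally max-pooled x.

    x: (T, C) clip feature; returns a (T, T // pool) row-normalized matrix."""
    T, C = x.shape
    x_hat = x.reshape(T // pool, pool, C).max(axis=1)  # max pool over time
    logits = (x @ w_theta) @ (x_hat @ w_phi).T         # (T, T // pool) pairwise terms
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    f = np.exp(logits)
    return f / f.sum(axis=1, keepdims=True)            # normalize by C(x̂)
```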

2.3 Active Sample Selection with Non-local Intra-clip Dependency Score

Using the non-local block, we can obtain the dependency between different frames within one video clip. As illustrated in Fig. 2, we get a dependency matrix from the embedded Gaussian function in Eq. 2. This matrix represents the intermediate dependency between frames within the clip, which we interpret with Fig. 3. As mentioned before, we use the subsampling strategy in the non-local block to reduce the computational workload. Moreover, we find that such subsampling of the video clip helps to focus the dependency on frames separated by intervals, and reduces the effect of neighbouring dependency, which has already been represented by the LSTM modeling. As a result, the dependency matrix for one clip has a subsampled second dimension. For example, in Fig. 3, one entry is the dependency between two frames with a frame interval of 2.

Figure 3: LRTD based sample selection. LRTD comes from the non-local cross-frame dependency score, computed from the dependency matrix of each clip via Eq. 4.

We select the video clips with weak dependency for annotation query, as they contain richer information and are more representative, thus better benefiting network training. The model presents relatively weak dependency when a video clip contains some “hard” unlabelled samples, which are usually either severely blurred scenes or noise in surgical videos. In addition, video clips with high intra-class variance also present weak dependency. The video clips in both situations are challenging for the network to recognize, while on the other hand demonstrating their high informativeness for training the network more effectively and efficiently.

To select the video clips with weak dependency, we propose to calculate the overall intra-clip dependency based on the dependency matrix. For each clip sample m, we first rank all the values of its dependency matrix f^(m) in descending order and select the top k values. These values, carrying the strongest dependency responses, best represent the overall dependency of this clip. We then average them to obtain the final dependency score of each clip sample:

    s_m = (1/k) · Σ_{l=1}^{k} topk_l( f^(m) )    (4)

Given the unlabelled video clip pool, we calculate the intra-clip dependency score for all the clips. We then rank the clips by their dependency scores and select the lower-scoring ones, which have weaker dependency and stronger informativeness. The clip samples selected following this criterion are:

    X_s = { m : s_m ranks among the first K in ascending order }    (5)

where s_m is the dependency score of each clip, the ranking is in ascending order, and the first K samples are selected. We set k = 5, based on the dimension of the dependency matrix, as the hyper-parameter controlling the degree to which the intra-clip dependency is represented. K controls the scale of newly selected clips in each round of sample selection relative to the total number N of available clips, which follows the traditional design of active learning based methods [23, 28, 20].
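The scoring and selection of Eqs. 4-5 can be sketched as follows, assuming the dependency matrices have already been extracted from the trained NL-RCNet; the selection ratio here is a placeholder, not a value from the paper.

```python
import numpy as np

def intra_clip_score(dep_matrix, k=5):
    """Eq. 4: average of the k strongest dependency responses of one clip."""
    return float(np.sort(dep_matrix.ravel())[::-1][:k].mean())

def select_weak_clips(dep_matrices, ratio=0.1, k=5):
    """Eq. 5: rank clips by ascending score and query the weakest-dependency ones."""
    scores = np.array([intra_clip_score(d, k) for d in dep_matrices])
    n_pick = max(1, int(ratio * len(dep_matrices)))
    return np.argsort(scores)[:n_pick]  # indices of clips to annotate
```

A clip whose dependency matrix is nearly uniform (weak dependency everywhere) gets a low score and is queried before a clip with a few very strong responses.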

Layer name — NL-RCNet
Conv1 — 7×7 conv, 64, stride 2
      — 3×3 max pool, stride 2
Conv2_x–Conv5_x — residual stages of the 50-layer ResNet [12]
      — average pool
LSTM — temporal modeling of sequential frames
Non-local block — see Fig. 2
Output — max pool, fc, softmax
Table 1: The network architecture of our NL-RCNet model. From Conv1_x to Conv5_x, we follow the 50-layer residual network design; we then use an LSTM to capture temporal information and a non-local block to capture intra-clip dependency.

2.4 Implementation Details of our Active Learning Approach

We first train the recurrent convolutional network with the annotated set, which is initialized with 10% of the data randomly selected from the unlabelled sample pool. We then train the entire NL-RCNet in an end-to-end manner, with the parameters of the recurrent convolutional part initialized from the pre-trained model and the non-local block randomly initialized. The whole network architecture is shown in Table 1.

Next, we start the active learning process with this NL-RCNet backbone. We iteratively select samples by the LRTD method and update the annotated set. By jointly training with the newly added annotated data, we progressively update the model. In each update, we first pre-train the recurrent convolutional part (the CNN-LSTM model) to learn reliable parameters for initializing the overall network; here we initialize ResNet50 with weights trained on the ImageNet dataset. We use back-propagation with stochastic gradient descent to train this model. The learning rates of the CNN module and the LSTM module are initialized separately, and both are divided by a factor of 10 every 3 epochs. After obtaining the pre-trained CNN-LSTM model, we train the entire NL-RCNet in an end-to-end manner. The network is fine-tuned with the Adam optimizer, where the learning rates of the CNN-LSTM part and the non-local block are initialized separately and are also reduced by a factor of 10 every 3 epochs. Both CNN-LSTM and NL-RCNet are trained with cross-entropy losses for 25 epochs. For input processing, we resize the frames from their original resolution to a smaller size to dramatically save memory and reduce network parameters. To enlarge the training dataset, we apply automatic augmentation with random cropping, horizontal flipping, random rotation, and random jittering of brightness, saturation, contrast and hue. Our framework is implemented in PyTorch, using 4 GPUs for acceleration.

3 Experiment

3.1 Dataset and Evaluation Metrics

We extensively validate our LRTD based active learning method on the popular public surgical dataset Cholec80 [21]. The dataset consists of 80 videos recording cholecystectomy procedures performed by 13 surgeons. The videos are captured at 25 fps, and all frames are labelled with 7 defined phases by experts. For fair comparison, we follow the same evaluation procedure reported in [21], splitting the dataset into two subsets of equal size, with 40 videos for training and the remaining 40 videos for testing. For data generation, we create each clip sequentially in the form of a sliding window, each time shifting one frame forward, which means 9 frames overlap between two consecutive clips; the same clip generation strategy is used at test time. Moreover, one clip-wise annotation corresponds to one frame-wise annotation, because we only utilize the last frame’s annotation during training. We conduct all the experiments in the online mode, using only the preceding frames for recognition. The computing time for selection between two annotation stages is 0.58 s/clip on a workstation with 1 Nvidia TITAN Xp.
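The sliding-window clip generation, with its 9-frame overlap and last-frame supervision, can be sketched as:

```python
def make_clips(frame_labels, clip_len=10):
    """Build overlapping clips from a video's per-frame labels (sampled at 1 fps).

    Each window shifts one frame forward, so consecutive clips share
    clip_len - 1 frames; each clip is supervised only by its last frame's label."""
    clips = []
    for end in range(clip_len, len(frame_labels) + 1):
        indices = list(range(end - clip_len, end))     # frame indices of this clip
        clips.append((indices, frame_labels[end - 1])) # label of the last frame
    return clips
```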

To quantitatively analyze the performance of our method, we employ five metrics: Accuracy (ACC), Precision (PR), Recall (RE), Jaccard (JA) and F1 Score (F1). PR, RE, JA and F1 are computed phase-wise, defined as:

    PR = |GT ∩ P| / |P|,  RE = |GT ∩ P| / |GT|,  JA = |GT ∩ P| / |GT ∪ P|,  F1 = 2 · PR · RE / (PR + RE)

where GT and P represent the ground-truth set and the prediction set of one phase, respectively. After PR, RE, JA and F1 of each phase are calculated, we average these values over all phases to obtain the metrics for the entire video. ACC is calculated at video level, defined as the percentage of frames correctly classified into the ground truths in the entire video.
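A sketch of this metric computation; it assumes integer phase labels per frame and averages over the phases present in the ground truth.

```python
import numpy as np

def workflow_metrics(gt, pred):
    """Video-level ACC plus phase-averaged PR, RE, JA and F1.

    gt, pred: equal-length sequences of integer phase labels."""
    gt, pred = np.asarray(gt), np.asarray(pred)
    pr, re, ja, f1 = [], [], [], []
    for phase in np.unique(gt):
        g, p = gt == phase, pred == phase
        inter = (g & p).sum()
        prec = inter / max(p.sum(), 1)            # |GT ∩ P| / |P|
        rec = inter / max(g.sum(), 1)             # |GT ∩ P| / |GT|
        pr.append(prec)
        re.append(rec)
        ja.append(inter / max((g | p).sum(), 1))  # |GT ∩ P| / |GT ∪ P|
        f1.append(2 * prec * rec / max(prec + rec, 1e-12))
    acc = float((gt == pred).mean())              # frame-level accuracy
    return acc, float(np.mean(pr)), float(np.mean(re)), float(np.mean(ja)), float(np.mean(f1))
```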

Method — Data amount
Full annotation:
  EndoNet [21] — 100%
  SV-RCNet [14] — 100%
  NL-RCNet (Ours) — 100%
Active learning:
  Full Data — 100%
  DBN [2] — 50%
  CNNLSTM-EMB — 50%
  LRTD (Ours) — 50%
Table 2: Surgical workflow recognition performance (ACC, PR, RE, JA, F1) of different methods under settings of full annotation and active learning (mean±std., %).

3.2 Quantitative Results and Comparison with Other Methods

In our active learning process, starting from the initially randomly selected 10% of the data, we iteratively select and add training samples until the prediction accuracy can no longer be significantly improved over the last round. It turns out that we only need 50% of the data for the workflow recognition task.

Figure 4: Comparison of our proposed LRTD method with state-of-the-art DBN [2] for active learning on various metrics of (a) Accuracy, (b) Jaccard, (c) F1 Score, (d) Precision and (e) Recall.

In Table 2, we divide the compared methods into two groups: fully supervised methods for this workflow recognition task, and active learning with a sample selection strategy. The amount of employed annotated data is indicated in the data amount column. For the fully supervised comparison, we include two state-of-the-art methods, i.e., EndoNet [21] and SV-RCNet [14]. We observe that our NL-RCNet slightly outperforms both methods by adding the non-local block to capture the long-range temporal dependency.

Moreover, the more important purpose of adding this block is for the active learning part, which uses the cross-frame dependency matrix, the intermediate result of this block. We implement full-data training as a standard bound, and compare with the state-of-the-art active learning method for workflow recognition [2], which uses a deep Bayesian network (DBN) to estimate uncertainty for sample selection. Note that [2] does not follow the common train-test data split setting; therefore, we re-implement this method using the same evaluation process and the same ResNet50-LSTM network architecture for fair comparison. From Table 2, we see that our LRTD based strategy achieves better performance than the DBN method at the 50% data ratio, in particular improving Accuracy by around 1%. To verify the effectiveness of our non-local block in capturing the dependency, we also conduct an ablation study named CNNLSTM-EMB. It trains a pure CNN-LSTM without the non-local block, and its dependency matrix is calculated using dot-product similarity on the frame embeddings output by the CNN-LSTM network. Our LRTD achieves superior performance over CNNLSTM-EMB in all evaluation metrics, demonstrating that the non-local block can better construct the intra-clip dependency.

In addition, our method reaches state-of-the-art performance and even surpasses full-data training with significantly more cost-effective labelling (i.e., only 50% annotation). In particular, our LRTD based active learning achieves a slightly better F1 Score than the network with full-data training. This is because our LRTD based selection considers not only the information from the adjacent preceding frame, but also the cross-frame dependency across a whole 10-second clip. By modeling the long-range temporal dependency in the time dimension, this strategy encourages more consistent and robust predictions.

We further conduct statistical tests by calculating p-values to compare the state-of-the-art results and our method, obtaining p < 0.05 for DBN versus our LRTD and p = 0.044 for full-data training versus LRTD, which indicates a significant improvement for our approach. Moreover, we repeat the experiments of NL-RCNet (ours), CNNLSTM-EMB, DBN and LRTD (ours) with a different random selection of the initial labeled data, to verify that the result improvement comes from the effectiveness of our methods. The p-values between the two runs of NL-RCNet (ours) and LRTD (ours) are 0.18 and 0.71, respectively; both are larger than 0.05, indicating that the two rounds’ results show no significant difference. In contrast, the p-values for DBN and CNNLSTM-EMB are smaller than 0.05, demonstrating that their two-round results differ considerably. The underlying reason is that DBN is sensitive to the initially selected labeled data when conducting the active learning process, so it is not as robust and stable as our LRTD strategy. For CNNLSTM-EMB, the relation matrix is less effective than LRTD, which causes fluctuation in the representativeness of the 50% selected data across the two runs, so its performance is not stable.

Figure 5: Comparison of our proposed LRTD method with the state-of-the-art DBN [2] for active learning on the metrics of Jaccard, F1 Score, Precision and Recall, with results on each phase at different annotation ratios. P0-P6 separately correspond to the surgical phases in our dataset, whose names are given in Table 4 (Appendix).
Figure 6: (a) One selected clip sample with weak intra-clip dependency, so the color brightness of its dependency matrix in (c) is low in most matrix positions; (b) one unselected clip sample with strong intra-clip dependency, so the color brightness of its dependency matrix in (d) is high in most matrix positions; (c)-(d) visualizations of the corresponding dependency matrices of clips (a) and (b).

3.3 Analytical Experiments of Our Proposed LRTD Approach

For a detailed analysis of our LRTD method compared with the other active learning method, we conduct sub-experiments at each sample ratio, with five groups in total up to 50% annotations. The quantitative comparison results are listed in Table 3 (see Appendix). The performance of our LRTD gradually increases across sample ratios and remains higher than the DBN results in terms of Accuracy, Jaccard, F1 Score and Recall. To show more clearly how the results change as data are gradually added by the different selection strategies, we draw the curves in Fig. 4. Both LRTD and DBN stably promote Accuracy, Jaccard and F1 Score without huge fluctuation up to the final data ratio. However, DBN shows fluctuation in Recall while LRTD shows fluctuation in Precision, as neither of them considers data diversity when selecting samples; some newly selected data therefore change the data distribution of the training set and result in unstable performance. We further present Fig. 5 to provide phase-level results across annotation ratios (quantitative details can be found in Table 4 in the Appendix). LRTD consistently improves the performance in almost all phases with increasing annotation ratio. Compared with DBN, our method achieves better results in Phases 0-3, while DBN performs better in Phases 4-6 at various annotation ratios.

To intuitively show the long-range temporal dependency across frames, and to provide insight into why we choose the clips with weak dependencies to annotate, we illustrate one selected clip and one unselected clip of our LRTD method in Fig. 6. For the selected clip in Fig. 6(a), the frames have low long-range temporal dependency, so the cross-frame dependency score is quite low. This can be clearly seen in Fig. 6(c), where the color brightness is low in many positions. Such a sample is informative to our model. In contrast, the frames in Fig. 6(b) are highly related to each other, so the dependency scores are high with strong brightness (see Fig. 6(d)); this clip carries little information for training the model. We further analyze which phases occupy the largest proportions of the selected clips and illustrate the percentages in Fig. 7 (see Appendix). We observe that P1 (43.0%) and P4 (27.5%) surpass the other phases in ratio, while clips containing phase transitions only occupy 2.2%. This is reasonable, as the phase proportions of the selected clips correspond to those of the original training data, where P1 and P3 take relatively longer durations in the surgical procedure.

4 Conclusion

In this paper, we propose a long-range temporal dependency (LRTD) based active learning method for surgical workflow recognition. By modeling the cross-frame dependency within video clips, we select clips with weaker dependency for annotation query. Our network achieves superior workflow recognition performance over other state-of-the-art active learning methods on a popular public surgical dataset. By only requiring 50% of the clips to be labelled, our method surpasses its fully supervised counterpart, which endorses its potential value in clinical practice.
Conflict of interest Xueying Shi, Yueming Jin, Qi Dou and Pheng-Ann Heng declare that they have no conflict of interest.
Ethical approval For this type of study formal consent is not required.
Informed consent This article contains patient data from publicly available datasets.

Acknowledgments. The work was partially supported by HK RGC TRS project T42-409/18-R, and a grant from the National Natural Science Foundation of China (Project No. U1813204) and CUHK T Stone Robotics Institute.


  • [1] N. Ahmidi, L. Tao, S. Sefati, Y. Gao, C. Lea, B. B. Haro, L. Zappella, S. Khudanpur, R. Vidal, and G. D. Hager (2017) A dataset and benchmarks for segmentation and recognition of gestures in robotic surgery. IEEE Transactions on Biomedical Engineering 64 (9), pp. 2025–2041. Cited by: §1.
  • [2] S. Bodenstedt, D. Rivoir, A. Jenke, M. Wagner, M. Breucha, B. Müller-Stich, S. T. Mees, J. Weitz, and S. Speidel (2019) Active learning using deep Bayesian networks for surgical workflow analysis. International Journal of Computer Assisted Radiology and Surgery 14 (6), pp. 1079–1087. Cited by: §1, Figure 4, Figure 5, §3.2, Table 2, Table 3, Table 4.
  • [3] S. Bodenstedt, M. Wagner, D. Katić, P. Mietkowski, B. Mayer, H. Kenngott, B. Müller-Stich, R. Dillmann, and S. Speidel (2017) Unsupervised temporal context learning using convolutional neural networks for laparoscopic workflow analysis. arXiv preprint arXiv:1702.03684. Cited by: §1.
  • [4] D. Bouget, M. Allan, D. Stoyanov, and P. Jannin (2017) Vision-based and marker-less surgical tool detection and tracking: a review of the literature. Medical Image Analysis 35, pp. 633–654. Cited by: §1.
  • [5] D. Bouget, R. Benenson, M. Omran, L. Riffaud, B. Schiele, and P. Jannin (2015) Detecting surgical tools by modelling local appearance and global shape. IEEE Transactions on Medical Imaging 34 (12), pp. 2603–2617. Cited by: §1.
  • [6] N. Bricon-Souf and C. R. Newman (2007) Context awareness in health care: a review. International Journal of Medical Informatics 76 (1), pp. 2–12. Cited by: §1.
  • [7] K. Cleary and A. Kinsella (2005) OR 2020: the operating room of the future.. Journal of laparoscopic & advanced surgical techniques. Part A 15 (5), pp. 495–497. Cited by: §1.
  • [8] O. Dergachyova, D. Bouget, A. Huaulmé, X. Morandi, and P. Jannin (2016) Automatic data-driven real-time segmentation and recognition of surgical workflow. International Journal of Computer Assisted Radiology and Surgery 11 (6), pp. 1081–1089. Cited by: §1.
  • [9] C. Doersch and A. Zisserman (2017) Multi-task self-supervised visual learning. In IEEE International Conference on Computer Vision, pp. 2051–2060. Cited by: §1.
  • [10] G. Forestier, L. Riffaud, and P. Jannin (2015) Automatic phase prediction from low-level surgical activities. International Journal of Computer Assisted Radiology and Surgery 10 (6), pp. 833–841. Cited by: §1.
  • [11] I. Funke, A. Jenke, S. T. Mees, J. Weitz, S. Speidel, and S. Bodenstedt (2018) Temporal coherence-based self-supervised learning for laparoscopic workflow analysis. In OR 2.0 Context-Aware Operating Theaters, Computer Assisted Robotic Endoscopy, Clinical Image-Based Procedures, and Skin Image Analysis, pp. 85–93. Cited by: §1.
  • [12] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778. Cited by: §2.1, §2.2, §2.4.
  • [13] A. James, D. Vieira, B. Lo, A. Darzi, and G. Yang (2007) Eye-gaze driven surgical workflow segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 110–117. Cited by: §1.
  • [14] Y. Jin, Q. Dou, H. Chen, L. Yu, J. Qin, C. Fu, and P. Heng (2017) SV-RCNet: workflow recognition from surgical videos using recurrent convolutional network. IEEE Transactions on Medical Imaging 37 (5), pp. 1114–1126. Cited by: §1, §3.2, Table 2.
  • [15] Y. Jin, H. Li, Q. Dou, H. Chen, J. Qin, C. Fu, and P. Heng (2019) Multi-task recurrent convolutional network with correlation loss for surgical video analysis. Medical Image Analysis, pp. 101572. Cited by: §1.
  • [16] D. Mahapatra, B. Bozorgtabar, J. Thiran, and M. Reyes (2018) Efficient active learning for image classification and segmentation using a sample selection and conditional generative adversarial network. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 580–588. Cited by: §1.
  • [17] G. Quellec, K. Charrière, M. Lamard, Z. Droueche, C. Roux, B. Cochener, and G. Cazuguel (2014) Real-time recognition of surgical tasks in eye surgery videos. Medical Image Analysis 18 (3), pp. 579–590. Cited by: §1.
  • [18] T. Ross, D. Zimmerer, A. Vemuri, F. Isensee, M. Wiesenfarth, S. Bodenstedt, F. Both, P. Kessler, M. Wagner, B. Müller, H. Kenngott, S. Speidel, A. Kopp-Schneider, K. Maier-Hein, and L. Maier-Hein (2018) Exploiting the potential of unlabeled endoscopic video data with self-supervised learning. International Journal of Computer Assisted Radiology and Surgery 13 (6), pp. 925–933. Cited by: §1.
  • [19] B. Settles (2009) Active learning literature survey. Computer Sciences Technical Report Technical Report 1648, University of Wisconsin–Madison. Cited by: §1.
  • [20] X. Shi, Q. Dou, C. Xue, J. Qin, H. Chen, and P. Heng (2019) An active learning approach for reducing annotation cost in skin lesion analysis. In International Workshop on Machine Learning in Medical Imaging, pp. 628–636. Cited by: §2.3.
  • [21] A. P. Twinanda, S. Shehata, D. Mutter, J. Marescaux, M. De Mathelin, and N. Padoy (2016) Endonet: a deep architecture for recognition tasks on laparoscopic videos. IEEE Transactions on Medical Imaging 36 (1), pp. 86–97. Cited by: §1, §3.1, §3.2, Table 2.
  • [22] X. Wang, R. Girshick, A. Gupta, and K. He (2018) Non-local neural networks. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 7794–7803. Cited by: §1, §2.2, §2.2.
  • [23] L. Yang, Y. Zhang, J. Chen, S. Zhang, and D. Z. Chen (2017) Suggestive annotation: a deep active learning framework for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 399–407. Cited by: §1, §2.3.
  • [24] G. Yengera, D. Mutter, J. Marescaux, and N. Padoy (2018) Less is more: surgical phase recognition with less annotations through self-supervised pre-training of CNN-LSTM networks. arXiv preprint arXiv:1805.08569. Cited by: §1.
  • [25] T. Yu, D. Mutter, J. Marescaux, and N. Padoy (2018) Learning from a tiny dataset of manual annotations: a teacher/student approach for surgical phase recognition. arXiv preprint arXiv:1812.00033. Cited by: §1.
  • [26] L. Zappella, B. Béjar, G. Hager, and R. Vidal (2013) Surgical gesture classification from video and kinematic data. Medical Image Analysis 17 (7), pp. 732–745. Cited by: §1.
  • [27] H. Zheng, L. Yang, J. Chen, J. Han, Y. Zhang, P. Liang, Z. Zhao, C. Wang, and D. Z. Chen (2019) Biomedical image segmentation via representative annotation. In AAAI. Cited by: §1.
  • [28] Z. Zhou, J. Y. Shin, L. Zhang, S. R. Gurudu, M. B. Gotway, and J. Liang (2017) Fine-tuning convolutional neural networks for biomedical image analysis: actively and incrementally. In IEEE Conference on Computer Vision and Pattern Recognition. Cited by: §1, §2.3.

5 Appendix

Table 3: Surgical workflow recognition performance (F1 score, mean ± std., %) of DBN [2] and LRTD (Ours) for active learning at annotation ratios from 10% to 50%.
Table 4: Comparison of our proposed LRTD method with the state-of-the-art DBN [2] for active learning on the metrics of Jaccard, Precision, and Recall, with per-phase results (Preparation, CalotTriangle Dissection, Clipping Cutting, Gallbladder Dissection, Gallbladder Packaging, Cleaning Coagulation, Gallbladder Retraction) at annotation ratios from 10% to 50%.
Figure 7: Ratio statistics of the phases contained in the selected clips.