Resources in the operating room (OR) are among the most expensive in a hospital, and careful OR planning is crucial to minimize waiting times and idle phases. Estimating the remaining surgery duration (RSD) at specified points during an intervention can facilitate more efficient utilization of OR resources.
This work builds on deep learning-based methods for fully automated RSD prediction based solely on endoscopic video data [1, 2, 10]. Since the remaining time for each frame can be inferred automatically from a given video, RSD prediction is a self-supervised task. This property is especially useful in medical applications, where manually annotating data is expensive.
However, RSD prediction is an extremely challenging task due to the complexity and uniqueness of each surgical procedure. It appears to require a high-level understanding of the workflow and progress of the surgery. These factors likely contribute to the tendency of RSD models to overfit without proper regularization or pretraining. To alleviate this problem, Twinanda et al. [10] propose an RSD prediction network which is encouraged to learn progress-related features and utilizes the elapsed time in addition to visual features. Bodenstedt et al. [2] use multimodal sensor data from the OR, including visual data and tool signals, for their prediction. State-of-the-art results are obtained by Aksamentov et al. [1], who propose pretraining the RSD model on surgical phase recognition as an auxiliary task. However, surgical phase recognition is a supervised task and therefore reduces the advantages of self-supervised RSD training.
We propose unsupervised temporal video segmentation as an auxiliary task to improve RSD prediction. To solve the auxiliary task, we present a method for finding segmentations that capture the progress of a surgery similar to surgical phases, but without the need for manual annotations. As indicated in [1, 10], progress-related features can be beneficial for RSD prediction. Using an unsupervised auxiliary task makes this approach widely applicable to different datasets. Several image-based unsupervised temporal video segmentation methods have been proposed [6, 7, 8]. We adopt the method from [7] since its iterative procedure allows us to learn task-related image features. The other approaches extract or learn features prior to segmentation, making them unsuitable as an auxiliary task. Finally, we propose a novel loss function that targets undesirable characteristics of the RSD ground truth.
Our approach combines models for RSD prediction and unsupervised temporal video segmentation. A model consisting of a Convolutional Neural Network (CNN) for visual feature extraction and a Long Short-Term Memory network (LSTM) for propagating information through time is trained to perform our main task, RSD prediction, similar to [1, 2, 10]. For the temporal segmentation task, we use an unsupervised approach to train a discriminative-generative model, alternating between learning segmentation labels through a generative model and learning visual features in a discriminative CNN-LSTM network. The results obtained by solving the temporal segmentation task can be leveraged for RSD prediction in several ways. First, we assume that the temporal segmentation training encourages the CNN-LSTM model to learn features relevant for RSD prediction. Thus, we investigate reusing the learned feature representations by initializing the CNN-LSTM model for RSD prediction with the learned network weights. We then pursue two different strategies for further training the RSD model: we either finetune only the upper layers or none of the layers in the CNN. In a complementary approach, we use the obtained segment labels to formulate an additional objective to regularize the RSD model during training.
2.1 RSD Model
For our RSD model (Fig. 1, right), we use an AlexNet-style CNN [5] to extract visual features from the video frames of a recorded surgical procedure. The feature representations are concatenated with the elapsed time of the procedure and fed into an LSTM, similar to [10]. The LSTM can consider features from the current and previous frames and produces an RSD estimate for each frame of the video. The network predicts the remaining duration in minutes, scaled by a constant factor due to high values of up to 100 minutes. RSD prediction is formulated as a regression task and optimized according to the SmoothL1 loss. We use a simpler model instead of RSDNet [10], since the latter showed no empirical improvement in combination with our auxiliary task.
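A minimal PyTorch sketch of such a CNN-LSTM regressor; the small convolutional stack is only a stand-in for the AlexNet-style CNN, and all layer sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RSDModel(nn.Module):
    def __init__(self, feat_dim=256, hidden=128):
        super().__init__()
        # stand-in for the AlexNet-style visual feature extractor
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=7, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, feat_dim), nn.ReLU(),
        )
        # LSTM input: visual features concatenated with the elapsed time
        self.lstm = nn.LSTM(feat_dim + 1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # per-frame (scaled) RSD in minutes

    def forward(self, frames, elapsed):
        # frames: (B, T, 3, H, W); elapsed: (B, T, 1), minutes since start
        B, T = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(B, T, -1)
        x = torch.cat([feats, elapsed], dim=-1)
        out, _ = self.lstm(x)  # propagates information through time
        return self.head(out).squeeze(-1)  # (B, T) RSD estimates

# training would minimize F.smooth_l1_loss(model(frames, elapsed), targets)
```

A full sequence is processed in one forward pass, so each frame's estimate can depend on all preceding frames.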
2.2 Unsupervised Temporal Video Segmentation Model
We extend a method from [7] for recognition and segmentation of complex activities in videos, i.e., activities consisting of several subactivities. The authors' definition of a complex activity can be applied to surgeries, where subactivities could represent surgical phases or similar steps.
The unsupervised learning algorithm alternates between learning frame features and subactivity labels (Fig. 3). Given the current subactivity labels, a discriminative appearance model learns frame features in a supervised manner. A generative temporal model is then estimated, which models the distribution of subactivity lengths and subactivity orders, given the distribution of frames in the learned appearance space. The subactivity lengths and order determine the segmentation of a video. After sampling new lengths and orders and subsequently updating subactivity labels, the algorithm continues to learn new frame features.
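A toy illustration of this alternating scheme: a per-subactivity mean feature stands in for the appearance model, and an ordered-segmentation dynamic program replaces the sampling-based generative model (both simplifications are our assumptions; the actual method trains a CNN-LSTM and samples lengths and orders from a learned distribution):

```python
import numpy as np

def segment_means(feats, labels, k):
    # "appearance model": one mean feature vector per subactivity
    return np.stack([feats[labels == j].mean(0) if np.any(labels == j)
                     else feats.mean(0) for j in range(k)])

def ordered_relabel(feats, means, k):
    # "temporal model": best split into k ordered contiguous segments,
    # minimizing each frame's squared distance to its segment's mean
    T = len(feats)
    cost = ((feats[:, None, :] - means[None]) ** 2).sum(-1)  # (T, k)
    pref = np.cumsum(cost, axis=0)
    dp = np.full((T + 1, k + 1), np.inf)
    arg = np.zeros((T + 1, k + 1), dtype=int)
    dp[0, 0] = 0.0
    for t in range(1, T + 1):
        for j in range(1, min(k, t) + 1):
            for s in range(j - 1, t):  # segment j-1 covers frames s..t-1
                seg = pref[t - 1, j - 1] - (pref[s - 1, j - 1] if s else 0.0)
                if dp[s, j - 1] + seg < dp[t, j]:
                    dp[t, j], arg[t, j] = dp[s, j - 1] + seg, s
    labels, t = np.empty(T, dtype=int), T
    for j in range(k, 0, -1):  # backtrack the segment boundaries
        s = arg[t, j]
        labels[s:t], t = j - 1, s
    return labels

def alternate(feats, k, iterations=5):
    # start from a uniform segmentation, then alternate the two steps
    labels = np.minimum(np.arange(len(feats)) * k // len(feats), k - 1)
    for _ in range(iterations):
        means = segment_means(feats, labels, k)
        labels = ordered_relabel(feats, means, k)
    return labels
```

On well-separated features the boundaries converge after a few iterations, mirroring how refined segmentations and features reinforce each other.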
The discriminative appearance model is a CNN-LSTM model (Fig. 1, left) optimized via the cross-entropy loss. An extensive hyperparameter search suggested the use of ten subactivities. In contrast to our deep learning approach, the appearance model in the original paper learns a simple linear embedding of image features. When replacing this simple model with a complex CNN-LSTM model, care must be taken to avoid overfitting on unrefined segmentations from early iterations. To this end, only the top two layers of the network are optimized in the first iteration, and layers are added incrementally after each iteration (Fig. 1, left). In turn, the incremental depth increase requires an initialization of the fixed layers. We pretrain the CNN using the 2nd-order temporal coherence objective [4], which has shown promising results on a similar task [3].
The generative temporal model estimates the joint distribution of frame features and subactivity segmentations. The distribution over segmentations is modeled by distributions over the length of each subactivity (Multinomial) and over the order of subactivities (Generalized Mallows Model). Sampling-based approximations are used to infer segmentations. The generative temporal model is almost identical to the one proposed in [7]; we only drop the background model.
The method produces a new model after each iteration. Hence, we need to evaluate and select a model to use as support for the RSD model. Since the ground truth segmentation labels are unknown, we require a surrogate quality measure. We define a temporal coherence (TC) measure which quantifies the temporal coherence of subactivity predictions by the appearance model. More precisely, we measure the prediction's accuracy with respect to the best match among coherent segmentations with the same subactivity lengths. This measure is intended to capture how well a model has learned progress-related features. Preliminary experiments showed that the measure selects models which are beneficial for RSD prediction.
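A sketch of such a surrogate measure under our reading of the text: predictions are scored against the best-matching coherent segmentation (each label forms one contiguous block) with the same per-label lengths. Here the best match is brute-forced over segment orderings, which is an assumption and only feasible for few subactivities:

```python
from itertools import permutations

def temporal_coherence(pred_labels):
    # per-label frame counts determine the lengths of the coherent segments
    labels = sorted(set(pred_labels))
    counts = {l: pred_labels.count(l) for l in labels}
    best = 0.0
    for order in permutations(labels):
        # coherent segmentation: contiguous blocks in this order
        coherent = [l for l in order for _ in range(counts[l])]
        acc = sum(p == c for p, c in zip(pred_labels, coherent)) / len(pred_labels)
        best = max(best, acc)
    return best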
2.3 Combined Learning Pipelines
Fig. 1 shows three strategies for combining models.
Feature extraction: The unsupervised temporal segmentation method is used to train the CNN-LSTM network of the discriminative appearance model. The weights learned from layers Conv1 to FC1 are then re-used for the RSD model. While training the RSD model, the initialized layers are fixed. Only layers FC2, LSTM and FC3 are optimized. This method is equivalent to feature extraction, where layers Conv1 to FC1 serve as a feature extractor for a shallow RSD model.
Pretraining: Pretraining is almost identical to feature extraction, except that the layers Conv5 and FC1 are optimized during RSD training after being initialized by the temporal segmentation method. In order to prevent the previously learned information from being overwritten too quickly, a lower learning rate is applied to pretrained layers. To summarize, layers Conv1 to Conv4 are fixed, Conv5 and FC1 are optimized with a low learning rate, and FC2, LSTM and FC3 are optimized using the regular learning rate.
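The three per-layer regimes of the pretraining pipeline can be set up with standard PyTorch parameter groups; the layer names (`conv1` … `fc1`) mirror Fig. 1, and the learning-rate values are placeholder assumptions:

```python
import torch
from torch import optim

def build_pretraining_optimizer(model, base_lr=1e-3, pretrained_lr=1e-5):
    frozen = ('conv1', 'conv2', 'conv3', 'conv4')   # fixed layers
    pretrained = ('conv5', 'fc1')                   # low learning rate
    pre_params, regular_params = [], []
    for name, p in model.named_parameters():
        if name.startswith(frozen):
            p.requires_grad_(False)                 # excluded from training
        elif name.startswith(pretrained):
            pre_params.append(p)
        else:
            regular_params.append(p)                # FC2, LSTM, FC3 etc.
    return optim.SGD([
        {'params': pre_params, 'lr': pretrained_lr},
        {'params': regular_params, 'lr': base_lr},
    ], lr=base_lr)
```

Per-group learning rates let the pretrained layers adapt slowly, so the representations learned by the segmentation task are not overwritten too quickly.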
Regularization: The resulting subactivity labels of a learned temporal segmentation model are re-used for supervision during RSD training. First, segmentations are learned for each video by the unsupervised temporal segmentation model. Then, the RSD model is jointly trained on RSD prediction and predicting the current subactivity according to the previously found segmentations.
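The joint objective of the regularization pipeline can be written as a weighted sum of the regression and classification losses; the weighting factor `lam` is a hypothetical hyperparameter not specified in the text:

```python
import torch
import torch.nn.functional as F

def joint_loss(rsd_pred, rsd_target, seg_logits, seg_labels, lam=1.0):
    # main task: RSD regression with the SmoothL1 loss
    rsd_loss = F.smooth_l1_loss(rsd_pred, rsd_target)
    # auxiliary task: predicting the subactivity labels found by the
    # unsupervised temporal segmentation method
    seg_loss = F.cross_entropy(seg_logits, seg_labels)
    return rsd_loss + lam * seg_loss
```

Both heads share the CNN-LSTM backbone, so the auxiliary term regularizes the features used for RSD prediction.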
2.4 Corridor-based RSD Loss Function
In the early stages of a procedure, it is extremely challenging to correctly predict the remaining duration since later occurring events are not yet known. To account for this, we propose an alternative RSD loss function which reduces the influence of early errors. Intuitively, we do not want to penalize the best guess at the beginning of a procedure, which is the average length of the given procedure type. For each video, we therefore define an area between the ground truth $r(t)$ over time and a naive median-based prediction $\tilde{r}(t) = m - t$, where $m$ is the median duration of all procedures in the training set. Errors within this corridor are decreased by a weighting function (Fig. 3). The corridor border $c(t)$ is a linear combination of the ground truth and the median-based prediction:

$c(t) = \alpha(p) \cdot r(t) + (1 - \alpha(p)) \cdot \tilde{r}(t)$

Here, $\alpha(p)$ is a time-dependent factor similar in shape to the tanh function, where $p$ is the progress of the surgery in percent. Intuitively, $c(t)$ is closer to the median-based prediction at early time points, when little information is available, and approaches the ground truth as the procedure progresses. The weight $w(\hat{r}, t)$ for a prediction $\hat{r}$ at time $t$ realizes a smooth weighting distribution along the prediction axis inside the corridor from $r(t)$ to $c(t)$. For predictions outside the corridor, $w(\hat{r}, t) = 1$. The corridor-weighted loss is finally given by

$\mathcal{L}_{\mathrm{CorrSmoothL1}}(\hat{r}, t) = w(\hat{r}, t) \cdot \mathcal{L}_{\mathrm{SmoothL1}}(\hat{r}, r(t))$
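A minimal sketch of the corridor weighting; the tanh-shaped interpolation factor and the linear weight ramp inside the corridor are both assumptions about the exact function shapes:

```python
import math

def smooth_l1(x):
    a = abs(x)
    return 0.5 * a * a if a < 1.0 else a - 0.5

def corridor_loss(pred, gt, median_pred, progress, w_min=0.1):
    # progress in [0, 1]; alpha moves the corridor border from the
    # naive median-based prediction towards the ground truth (assumed shape)
    alpha = 0.5 * (1.0 + math.tanh(4.0 * (progress - 0.5)))
    border = alpha * gt + (1.0 - alpha) * median_pred
    lo, hi = min(gt, border), max(gt, border)
    if lo <= pred <= hi and hi > lo:
        # linear ramp: w_min at the border, 1 at the ground truth (assumed)
        w = w_min + (1.0 - w_min) * abs(pred - border) / (hi - lo)
    else:
        w = 1.0  # full penalty for predictions outside the corridor
    return w * smooth_l1(pred - gt)
```

Early in a procedure the corridor is wide, so predictions near the median-based guess are penalized only mildly; as progress grows, the corridor collapses onto the ground truth and the loss approaches plain SmoothL1.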
We evaluate our proposed models on the publicly available Cholec80 dataset [9]. We use 50 videos for training, 10 for validation and 20 for testing. Video frames are extracted at 1 fps. We train the RSD models using the Adam optimizer (200 epochs, batch size 384, ℓ2 weight decay). For the pretraining pipelines, we use SGD, run 250 epochs and update pretrained layers with a reduced learning rate, since these settings empirically perform better. The other settings are kept. For the segmentation model, we use Adam (batch size 384, 5 epochs per iteration, 8 iterations, ℓ2 weight decay). We select the best model from iterations 6 to 8 according to our TC measure (Sec. 2.2).
We consider four baselines for RSD prediction. The simplest baseline is the RSD model from Section 2.1 trained only on single-task RSD prediction with no auxiliary task (None). The other baselines are supported by auxiliary tasks, each using all three proposed pipelines from Sec. 2.3. The first auxiliary task is temporal segmentation into 10 uniform segments (Uniform). This baseline provides insight into how much RSD-relevant information is gained by learning more refined segmentations. The other two auxiliary tasks are state-of-the-art approaches, namely supervised surgical phase recognition (Phase) [1] and self-supervised prediction of progress (Progress) reimplemented from [11], which is an updated version of the RSDNet from [10]. For the phase approach, we use the regularized RSD model from Fig. 1 in order to preserve comparability to our methods. The main differences to the original architecture are that we use an AlexNet-style CNN [5] and that we incorporate the elapsed time into the prediction like in [10, 11]. Hyperparameters of the optimization are identical to those of the proposed methods.
| Auxiliary Task | Feature Extraction | Pretraining | Regularization |
| --- | --- | --- | --- |
| Unsup. temp. seg. (ours) | 9.0 | 9.3 | 9.2 |
| Auxiliary Task | Feat. Extr. (SmoothL1) | Feat. Extr. (CorrSmoothL1) | Reg. (SmoothL1) | Reg. (CorrSmoothL1) |
| --- | --- | --- | --- | --- |
| Unsup. temp. seg. (ours) | 9.0 | 9.1 | 9.2 | 8.7 |
Table 1 shows the mean absolute error (MAE) in minutes for each of our proposed models as well as all baselines using the SmoothL1 loss. All experiments involving our proposed method are performed four times; results are averaged and reported with a standard deviation. Baseline experiments for settings which were effective for our method are also repeated four times in order to obtain more significant results.
Comparing our proposed methods, feature extraction achieves the best results (9.0 min. MAE), while pretraining performs worst (9.3 min.) and high variances were observed during regularization. The high expressivity of RSD models likely causes overfitting in the two latter setups. In the feature extraction setting, the RSD model is the least expressive, as only the top layers are optimized after initialization by the segmentation method. Hence, it is supposedly less prone to overfitting. We also observe that our approach outperforms or matches the self-supervised approaches (single-task RSD, uniform segmentation and progress) for all learning pipelines. Using feature extraction, we even achieve results comparable to the supervised phase-based approach (9.0 vs. 8.9).
Next, we compare the CorrSmoothL1 loss to SmoothL1 on the previously most successful feature extraction pipeline and the regularization pipeline (Table 2), since the high variance in the regularization experiments indicates potential for improvement. The first two result columns show RSD errors for both loss functions on the feature extraction pipeline. No clear difference can be observed. The single-task RSD model as well as most regularized models, however, improve drastically. Since CorrSmoothL1 aims to reduce overfitting, it is more effective on very expressive deep models such as the regularization models or the single-task model. In the feature extraction setup, which has significantly fewer trained parameters during RSD training, the model's low expressivity probably prevents further improvement. Using regularization, our approach improves from 9.2 to 8.7 minutes MAE and therefore exceeds our previously best result as well as all baselines. We even outperform all supervised phase-based setups. It is not clear how significant this difference is, since the supervised approach performed slightly better than ours in the SmoothL1 setup. However, even comparable results are very promising, and our approach performs at least on a similar level as supervised methods. Fig. 4 shows that the subactivity labels correspond fairly well to surgical phases but are more fine-grained due to the higher number of segments. Using a hand-picked mapping from subactivities to phases, we achieve accuracies of 71% and 72% on the training and test set for surgical phase recognition. A limitation of our proposed method remains the complexity of the whole model, and achieving stable results poses a challenge.
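The phase-recognition evaluation via a subactivity-to-phase mapping can be reproduced mechanically; below, a majority-vote mapping derived from reference phase labels stands in for the hand-picked mapping (an assumption for illustration):

```python
from collections import Counter, defaultdict

def map_subactivities_to_phases(sub_labels, phase_labels):
    # derive a many-to-one mapping by majority vote per subactivity
    votes = defaultdict(Counter)
    for s, p in zip(sub_labels, phase_labels):
        votes[s][p] += 1
    mapping = {s: c.most_common(1)[0][0] for s, c in votes.items()}
    # phase-recognition accuracy of the mapped subactivity predictions
    mapped = [mapping[s] for s in sub_labels]
    acc = sum(m == p for m, p in zip(mapped, phase_labels)) / len(phase_labels)
    return mapping, acc
```

Because the mapping is many-to-one, several fine-grained subactivities can collapse onto the same surgical phase, matching the observation that the learned segments are finer than the phases.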
We present unsupervised temporal video segmentation as a novel auxiliary task for video-based RSD prediction and propose three different learning pipelines to utilize unsupervised temporal segmentation learning for RSD modeling. In our experiments on the Cholec80 dataset, our approach compares favorably with self-supervised auxiliary tasks and performs comparably to the state of the art, which utilizes supervised surgical phase recognition as an auxiliary task. This is very promising since the method does not require any manual annotations and therefore has potential for improvement by utilizing larger, unlabeled datasets. Further, we specifically target the problem that RSD ground truth labels can be misleading in early stages of a procedure. Our novel corridor-based loss shows clear improvements on deep RSD models. Using the corridor-based loss, we even outperform the state of the art when we regularize the RSD model with the unsupervised temporal segmentation task. Future work could evaluate our method on procedure types with higher variance in duration and therefore lower correlation between RSD and progress. Analyzing how our method transfers to these procedures is interesting since temporal segmentations can potentially model more complex temporal structures than progress. Also, the similarity of unsupervised segmentations and surgical phases opens up interesting new research directions.
1. Aksamentov, I., Twinanda, A.P., Mutter, D., Marescaux, J., Padoy, N.: Deep neural networks predict remaining surgery duration from cholecystectomy videos. In: MICCAI. pp. 586–593. Springer (2017)
2. Bodenstedt, S., Wagner, M., Mündermann, L., Kenngott, H., Müller-Stich, B., Breucha, M., et al.: Prediction of laparoscopic procedure duration using unlabeled, multimodal sensor data. IJCARS 14(6), 1089–1095 (2019)
3. Funke, I., Jenke, A., Mees, S.T., Weitz, J., Speidel, S., Bodenstedt, S.: Temporal coherence-based self-supervised learning for laparoscopic workflow analysis. In: OR 2.0 Context-Aware Operating Theaters, pp. 85–93. Springer (2018)
4. Jayaraman, D., Grauman, K.: Slow and steady feature analysis: higher order temporal coherence in video. In: CVPR. pp. 3852–3861 (2016)
5. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: NIPS (2012)
6. Kukleva, A., Kuehne, H., Sener, F., Gall, J.: Unsupervised learning of action classes with continuous temporal embedding. In: CVPR. pp. 12066–12074 (2019)
7. Sener, F., Yao, A.: Unsupervised learning and segmentation of complex activities from video. In: CVPR (2018)
8. Tran, D.T., Sakurai, R., Yamazoe, H., Lee, J.H.: Phase segmentation methods for an automatic surgical workflow analysis. IJBI (2017)
9. Twinanda, A.P., Shehata, S., Mutter, D., Marescaux, J., de Mathelin, M., Padoy, N.: EndoNet: a deep architecture for recognition tasks on laparoscopic videos. IEEE Transactions on Medical Imaging 36(1), 86–97 (2016)
10. Twinanda, A.P., Yengera, G., Mutter, D., Marescaux, J., Padoy, N.: RSDNet: learning to predict remaining surgery duration from laparoscopic videos without manual annotations. IEEE Transactions on Medical Imaging 38(4), 1069–1078 (2018)
11. Yengera, G., Mutter, D., Marescaux, J., Padoy, N.: Less is more: surgical phase recognition with less annotations through self-supervised pre-training of CNN-LSTM networks. arXiv preprint arXiv:1805.08569 (2018)