
Surgical Skill Assessment via Video Semantic Aggregation

Automated video-based assessment of surgical skills is a promising task for assisting young surgical trainees, especially in resource-poor areas. Existing works often resort to a CNN-LSTM joint framework that models long-term relationships by LSTMs on spatially pooled short-term CNN features. However, this practice inevitably neglects the differences among semantic concepts such as tools, tissues, and background in the spatial dimension, impeding the subsequent temporal relationship modeling. In this paper, we propose a novel skill assessment framework, Video Semantic Aggregation (ViSA), which discovers different semantic parts and aggregates them across spatiotemporal dimensions. The explicit discovery of semantic parts provides an explanatory visualization that helps understand the neural network's decisions. It also enables us to further incorporate auxiliary information, such as kinematic data, to improve representation learning and performance. Experiments on two datasets show the competitiveness of ViSA compared to state-of-the-art methods. Source code is available at: bit.ly/MICCAI2022ViSA.


1 Introduction

Automated assessment of surgical skills is a promising task in computer-aided surgical training [10, 26], especially in resource-poor regions. Surgical skill assessment involves two major challenges [23]: (1) how to capture the differences between fine-grained atomic actions, and (2) how to model the long-range contextual relationships between these actions. Previous works [2, 9, 13, 23, 24] counteract these two challenges by stacking convolutional neural networks (CNNs) and LSTMs: CNNs for short-term feature extraction and temporal aggregation networks (e.g., LSTMs) for long-term relationship modeling. For example, Fawaz et al. [1] used a 1D CNN to encode kinematic data before aggregating over time by global average pooling. Wang et al. [23] extract 2D or 3D CNN features from video frames and model their temporal relationships with an LSTM network. Considerable progress has been made in predicting skill levels [1, 2, 9, 11, 19, 24, 27] or scores [13, 23, 26] using kinematic data or video frames.

Regarding video-based methods, most works [2, 9, 13, 23, 24] perform global pooling over the spatial dimension of CNN features before feeding them to the subsequent network. We argue that this global pooling operation ignores the semantic variance of different features and compresses all spatial information together without distinction. As a result, the subsequent networks can hardly model the temporal relationships of local features in different spatial parts separately, e.g., the movements of different tools and the status changes of tissue. This bottleneck is particularly severe for surgical skill assessment, because distinguishing and tracking the tools against a largely uninformative background is essential to judging manipulation quality. Similarly, the interactions between tools and tissue across time are also important for the assessment. To make matters worse, most existing methods are end-to-end neural networks, revealing little about what motion or appearance information is captured.

Since surgical videos contain a limited set of objects with explicit semantic meanings, such as tools, tissue, and background, we propose a novel framework, Video Semantic Aggregation (ViSA), for surgical skill assessment that aggregates local features across spatiotemporal dimensions according to these semantic clues. This aggregation allows our method to separate the video parts related to different semantics from the background and to dedicate capacity to tracking the tools or modeling their interactions with tissue.

As shown in Fig. 1, ViSA first aggregates similar local semantic features through clustering and generates abstract features for each semantic group in the semantic grouping stage. We then aggregate the features of the same semantic group across time and model their temporal contextual relationships via multiple bidirectional LSTMs. The spatially aggregated features can visualize the correspondence between different video parts and semantics in the form of assignment maps, improving the transparency of the network. In addition, the explicit semantic grouping of features allows us to incorporate auxiliary information that further improves the assignment and influences the intermediate features, e.g., using kinematic data to bind the features of a certain group to the tools.

Our contribution is threefold: (1) We propose a novel framework, ViSA, to assess skills in surgical videos by explicitly separating different semantics in the video and efficiently aggregating them across spatiotemporal dimensions. (2) By aggregating the video representations and applying regularization, our method can discover different semantic parts such as tools, tissue, and background in videos. This provides explanatory visualization and allows integrating auxiliary information, such as tool kinematics, for performance enhancement. (3) The framework achieves competitive performance on two datasets: JIGSAWS [4] and HeiChole [22].

Figure 1: The proposed framework, Video Semantic Aggregation (ViSA), for surgical skill assessment. It aggregates local CNN features over space and time using semantic clues embedded in the video. We aim to group video parts according to semantics, e.g., tools, tissue, and background, and to model the temporal relationships of the different parts separately. We also investigate regularization and additional supervision on the grouping results for enhancement.

2 Methodology

As shown in Fig. 1, for each video, our framework takes frames from a fixed number of timesteps as input and predicts the final skill assessment score through the following 4 stacked modules: (1) the Feature Extraction Module (FEM) that produces feature maps for each timestep (Sec. 2.1); (2) the Semantic Grouping Module (SGM) that aggregates local features into a specified number of groups based on the embedded semantics (Sec. 2.2); (3) the Temporal Context Modeling Module (TCMM) that models the contextual relationship for the feature series of each group (Sec. 2.3); (4) the Score Prediction Module (SPM) that regresses the final score based on the spatiotemporally aggregated features (Sec. 2.4).

2.1 Feature Extraction Module

Our framework first feeds the input frames into a CNN and collects the intermediate layer responses as feature maps for subsequent processing. We stack the feature maps of one video along the temporal dimension and denote them as $X \in \mathbb{R}^{T \times C \times H \times W}$, where $T$ is the number of timesteps, $H \times W$ is the spatial size of each feature map, and $C$ is the number of channels. $X$ can also be seen as a group of local features $\{x_{t,i} \in \mathbb{R}^{C}\}$, where each feature is indexed by its temporal position $t$ and spatial position $i \in \{1, \dots, HW\}$.
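A minimal sketch of how such per-timestep feature maps could be collected, assuming an R(2+1)D-18 backbone truncated after its 4th convolutional block (layer names, tensor shapes, and the clip-axis pooling are illustrative assumptions, not the authors' exact implementation):

```python
import torch
import torch.nn as nn
from torchvision.models.video import r2plus1d_18


class FeatureExtractionModule(nn.Module):
    """Collects intermediate 3D CNN responses as per-timestep feature maps (sketch)."""

    def __init__(self):
        super().__init__()
        backbone = r2plus1d_18(weights="KINETICS400_V1")  # Kinetics-400 pre-trained
        # keep everything up to (and including) the 4th conv block, drop pooling/fc
        self.stem_to_layer4 = nn.Sequential(
            backbone.stem, backbone.layer1, backbone.layer2,
            backbone.layer3, backbone.layer4,
        )

    def forward(self, snippets):
        # snippets: (B, T, 3, L, H, W) -- T snippets of L=4 frames each
        B, T, C, L, H, W = snippets.shape
        x = snippets.reshape(B * T, C, L, H, W)
        fmap = self.stem_to_layer4(x)           # (B*T, C', L', H', W')
        fmap = fmap.mean(dim=2)                 # collapse the short clip axis
        C2, H2, W2 = fmap.shape[1:]
        return fmap.reshape(B, T, C2, H2, W2)   # X: one feature map per timestep
```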

2.2 Semantic Grouping Module

This module aggregates the extracted features into a fixed number of groups $K$, each representing a specific kind of semantic meaning. Taking inspiration from [5, 7], we achieve this by clustering all local CNN features across the entire spatiotemporal range of the video according to $K$ learnable vectors $\{c_k \in \mathbb{R}^{C}\}_{k=1}^{K}$, which record the feature centroid of each group (drawn as colored triangles in Fig. 1).

2.2.1 Local Feature Assignment

We softly assign local features to the groups by computing assignment probabilities $p^{k}_{t,i}$ subject to $\sum_{k=1}^{K} p^{k}_{t,i} = 1$, where $p^{k}_{t,i}$ represents the probability of assigning the local CNN feature $x_{t,i}$ to the $k$-th semantic group, and a learnable factor $\alpha_k$ adjusts the feature magnitudes and smooths the assignment for each semantic group. Taking the index of the group with the maximum probability at each position, we obtain the assignment maps with components $a_{t,i} = \arg\max_{k} p^{k}_{t,i}$. As shown in Fig. 1, the assignment maps visualize the correspondence between video parts and semantic groups.
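A sketch of the soft assignment under one plausible parameterization (a softmax over negative scaled distances between local features and the learnable centroids); the exact similarity function and variable names are assumptions here:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SoftAssignment(nn.Module):
    """Assigns every local feature x_{t,i} to K semantic groups (sketch)."""

    def __init__(self, num_groups: int, channels: int):
        super().__init__()
        self.centroids = nn.Parameter(torch.randn(num_groups, channels))  # c_k
        self.alpha = nn.Parameter(torch.ones(num_groups))                 # per-group smoothing

    def forward(self, X):
        # X: (B, T, C, H, W) -> local features (B*T, H*W, C)
        B, T, C, H, W = X.shape
        feats = X.flatten(3).transpose(2, 3).reshape(B * T, H * W, C)
        # squared distance to each centroid, scaled by the learnable factor
        d2 = torch.cdist(feats, self.centroids.unsqueeze(0).expand(B * T, -1, -1)) ** 2
        P = F.softmax(-self.alpha * d2, dim=-1)             # (B*T, H*W, K), sums to 1 over K
        assign_map = P.argmax(dim=-1).reshape(B, T, H, W)   # hard maps for visualization
        return P.reshape(B, T, H * W, -1), assign_map
```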

2.2.2 Group Feature Aggregation

For each timestep, we aggregate the corresponding local features according to the soft assignments and generate one abstract representation per semantic group. We first compute the assignment-weighted average of the local features, $\bar{x}^{k}_{t} = \frac{\sum_{i} p^{k}_{t,i}\, x_{t,i}}{\sum_{i} p^{k}_{t,i}}$, and take its normalized residual with respect to the centroid, $r^{k}_{t} = \frac{\bar{x}^{k}_{t} - c_{k}}{\lVert \bar{x}^{k}_{t} - c_{k} \rVert_{2}}$. We then obtain the group feature $g^{k}_{t}$, which captures the abstract information of the $k$-th semantics at each timestep, by transforming $r^{k}_{t}$ with a sub-network $\phi$ consisting of 2 linear transformations, i.e., $g^{k}_{t} = \phi(r^{k}_{t})$.
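A sketch of this aggregation step (assignment-weighted average, residual against the centroid, L2 normalization, then two linear layers); the hidden size and the ReLU between the two linear layers are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GroupFeatureAggregation(nn.Module):
    """Turns soft assignments into one abstract feature per group and timestep (sketch)."""

    def __init__(self, channels: int, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(channels, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))  # 2 linear transformations

    def forward(self, feats, P, centroids):
        # feats: (B, T, N, C) local features, P: (B, T, N, K) assignments, centroids: (K, C)
        weights = P / (P.sum(dim=2, keepdim=True) + 1e-6)        # normalize over positions
        weighted_avg = torch.einsum('btnk,btnc->btkc', weights, feats)
        residual = F.normalize(weighted_avg - centroids, dim=-1)  # r_t^k
        return self.mlp(residual)                                 # g_t^k: (B, T, K, hidden)
```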

2.2.3 Group Existence Regularization

To avoid most local features being allocated to a single group, we leverage a regularization term from [7] to retain an even distribution among groups at every timestep. Specifically, it regularizes the existence of each group by encouraging the maximum assignment probability of that group at a timestep, $u^{k}_{t} = \max_{i} p^{k}_{t,i}$, to be close to 1.0. Instead of tightly constraining $u^{k}_{t}$ to an exact number, the regularization term restricts the empirical cumulative distribution of $u^{k}_{t}$ over timesteps to follow a beta distribution: for each group $k$, the values $u^{k}_{t}$ at all timesteps are arranged in ascending order and matched against the corresponding values of the inverse cumulative beta distribution function $F^{-1}_{\mathrm{Beta}(a,b)}$, which returns the $t$-th element of a sequence of probability values obeying this distribution. The shape of the beta distribution is controlled by $a = 1$ and $b = 0.001$, and a small value $\epsilon$ is added for numerical stability.
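One way this quantile-matching regularizer could be written, as a sketch only: it assumes a squared log-difference between the sorted per-timestep maxima and the inverse beta CDF, in the spirit of [7]; the exact penalty form in the paper may differ.

```python
import torch
from scipy.stats import beta


def group_existence_regularization(P, a=1.0, b=0.001, eps=1e-6):
    """Encourage every group to clearly appear at some timesteps (sketch).

    P: (T, N, K) assignment probabilities for one video.
    """
    T, _, K = P.shape
    max_p = P.max(dim=1).values                  # (T, K): max assignment prob per group/timestep
    sorted_p, _ = torch.sort(max_p, dim=0)       # ascending over timesteps
    # target quantiles of Beta(a, b); with b tiny, most mass sits near 1.0
    quantiles = (torch.arange(1, T + 1, dtype=torch.float64) / (T + 1)).numpy()
    q = torch.tensor(beta.ppf(quantiles, a, b), dtype=P.dtype, device=P.device)
    diff = torch.log(sorted_p + eps) - torch.log(q.unsqueeze(1) + eps)
    return (diff ** 2).mean()
```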

2.3 Temporal Context Modeling Module

For each semantic group's feature series, this module keeps track of its long-term dependencies and models the contextual relationships independently in a recurrent manner. As shown in Fig. 1, we achieve this by employing bidirectional LSTMs (BiLSTMs): $\overrightarrow{h}^{k}_{t} = \overrightarrow{\mathrm{LSTM}}(g^{k}_{t}, \overrightarrow{h}^{k}_{t-1})$ and $\overleftarrow{h}^{k}_{t} = \overleftarrow{\mathrm{LSTM}}(g^{k}_{t}, \overleftarrow{h}^{k}_{t+1})$, where $\overrightarrow{\mathrm{LSTM}}$ and $\overleftarrow{\mathrm{LSTM}}$ denote the forward and backward LSTMs respectively. The output vectors $\overrightarrow{h}^{k}_{t}$ and $\overleftarrow{h}^{k}_{t}$ from the two directions are concatenated at every timestep to form the contextual feature $h^{k}_{t} = [\overrightarrow{h}^{k}_{t}; \overleftarrow{h}^{k}_{t}]$. To prevent the potential information loss caused by this separated modeling (e.g., the interaction between groups), we also employ an additional BiLSTM on the global features obtained by spatial average pooling over $X$; we denote its contextual features by $h^{g}_{t}$.
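A sketch of the per-group temporal modeling with PyTorch BiLSTMs, with one BiLSTM per semantic group plus one for the spatially pooled global feature (the hidden size and output stacking are illustrative choices):

```python
import torch
import torch.nn as nn


class TemporalContextModeling(nn.Module):
    """One BiLSTM per semantic group, plus one for the global pooled feature (sketch)."""

    def __init__(self, num_groups: int, in_dim: int, global_dim: int, hidden: int = 256):
        super().__init__()
        self.group_lstms = nn.ModuleList(
            nn.LSTM(in_dim, hidden, batch_first=True, bidirectional=True)
            for _ in range(num_groups))
        self.global_lstm = nn.LSTM(global_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, G, X):
        # G: (B, T, K, D) group features; X: (B, T, C, H, W) raw feature maps
        contexts = [lstm(G[:, :, k])[0]                      # (B, T, 2*hidden) per group
                    for k, lstm in enumerate(self.group_lstms)]
        global_feat = X.mean(dim=(3, 4))                     # spatial average pooling
        contexts.append(self.global_lstm(global_feat)[0])    # global context stream
        return torch.stack(contexts, dim=2)                  # (B, T, K+1, 2*hidden)
```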

2.4 Score Prediction Module

This module concatenates the contextual features of the different semantic groups at each timestep into a single vector and average-pools these vectors across the temporal dimension. The pooled vector is finally regressed to the skill score by a fully connected layer $f_{\mathrm{FC}}$: $\hat{s} = f_{\mathrm{FC}}\big(\frac{1}{T}\sum_{t=1}^{T}[h^{1}_{t}; \dots; h^{K}_{t}; h^{g}_{t}]\big)$. The training loss function is defined as $\mathcal{L} = (\hat{s} - s)^{2} + \lambda\, \mathcal{L}_{\mathrm{exist}}$, where $s$ denotes the ground-truth score, $\mathcal{L}_{\mathrm{exist}}$ is the group existence regularization, and the scalar $\lambda$ controls its contribution.
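A sketch of the score regression and the training objective as described above (score regression error plus the weighted group existence term; the value of the weighting scalar is left as a hyperparameter):

```python
import torch.nn as nn
import torch.nn.functional as F


class ScorePrediction(nn.Module):
    """Concatenate per-group contexts, average over time, regress the score (sketch)."""

    def __init__(self, num_streams: int, ctx_dim: int):
        super().__init__()
        self.fc = nn.Linear(num_streams * ctx_dim, 1)

    def forward(self, contexts):                          # contexts: (B, T, K+1, 2*hidden)
        B, T = contexts.shape[:2]
        fused = contexts.reshape(B, T, -1).mean(dim=1)    # temporal average pooling
        return self.fc(fused).squeeze(-1)                 # predicted skill score


def training_loss(pred, target, reg_exist, lam=1.0):
    """L = (s_hat - s)^2 + lam * L_exist; lam is a hyperparameter (value assumed)."""
    return F.mse_loss(pred, target) + lam * reg_exist
```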

2.5 Incorporating Auxiliary Supervision Signals

Our representation aggregation strategy also allows enhancing the feature grouping by incorporating additional supervision signals that assign specific known semantics to certain groups. Specifically, we propose a Heatmap Prediction Module (HPM), which takes as input the CNN feature maps $X_{t}$ and the assignment probability maps $P^{k^{*}}_{t}$ of the targeted group $k^{*}$. The module predicts the positions of the specified semantics by generating heatmaps $\hat{M}_{t} = \psi(X_{t} \odot P^{k^{*}}_{t})$, where $\odot$ denotes the Hadamard product and $\psi$ is composed of one basic conv block followed by a Sigmoid function. Assuming the 2D positions of the specified semantics are known, we generate position heatmaps $M_{t}$ from the 2D positions as the supervision signal. The framework integrating HPM is trained with the loss function extended by a position regularization term $\mathcal{L}_{\mathrm{pos}}$, i.e., $\mathcal{L}' = (\hat{s} - s)^{2} + \lambda_{1}\, \mathcal{L}_{\mathrm{exist}} + \lambda_{2}\, \mathcal{L}_{\mathrm{pos}}$, where $\mathcal{L}_{\mathrm{pos}}$ computes the binary cross entropy between $\hat{M}_{t}$ and $M_{t}$, with $\lambda_{1} = 10$ and $\lambda_{2} = 20$.

3 Experiments

| Input | Method | KT LOSO | KT LOUO | KT 4-Fold | NP LOSO | NP LOUO | NP 4-Fold | SU LOSO | SU LOUO | SU 4-Fold | Avg. LOSO | Avg. LOUO | Avg. 4-Fold |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| K | SMT-DCT-DFT [26] | 0.70 | 0.73 | - | 0.38 | 0.23 | - | 0.64 | 0.10 | - | 0.59 | 0.40 | - |
| K | DCT-DFT-ApEn [26] | 0.63 | 0.60 | - | 0.46 | 0.25 | - | 0.75 | 0.37 | - | 0.63 | 0.41 | - |
| V | ResNet-LSTM [23] | 0.52 | 0.36 | - | 0.84 | 0.33 | - | 0.73 | 0.67 | - | 0.72 | 0.59 | - |
| V | C3D-LSTM [16] | 0.81 | 0.60 | - | 0.84 | 0.78 | - | 0.69 | 0.59 | - | 0.79 | 0.67 | - |
| V | C3D-SVR [16] | 0.71 | 0.33 | - | 0.75 | -0.17 | - | 0.42 | 0.37 | - | 0.65 | 0.18 | - |
| V | USDL [18] | - | - | 0.61 | - | - | 0.63 | - | - | 0.64 | - | - | 0.63 |
| V | MUSDL [18] | - | - | 0.71 | - | - | 0.69 | - | - | 0.71 | - | - | 0.70 |
| V | *S3D [25] | 0.64 | 0.14 | - | 0.57 | 0.35 | - | 0.68 | 0.03 | - | - | - | - |
| V | *ResNet-MTL-VF [23] | 0.63 | 0.72 | - | 0.73 | 0.48 | - | 0.79 | 0.68 | - | 0.73 | 0.64 | - |
| V | *C3D-MTL-VF [23] | 0.89 | 0.83 | - | 0.75 | 0.86 | - | 0.77 | 0.69 | - | 0.75 | 0.68 | - |
| V+K | JR-GCN [14] | - | 0.19 | 0.75 | - | 0.67 | 0.51 | - | 0.35 | 0.36 | - | 0.40 | 0.57 |
| V+K | AIM [3] | - | 0.61 | 0.82 | - | 0.34 | 0.65 | - | 0.45 | 0.63 | - | 0.47 | 0.71 |
| V+K | MultiPath-VTP [13] | - | 0.58 | 0.78 | - | 0.62 | 0.76 | - | 0.45 | 0.79 | - | 0.56 | 0.78 |
| V+K | *MultiPath-VTPE [13] | - | 0.59 | 0.82 | - | 0.65 | 0.76 | - | 0.45 | 0.83 | - | 0.57 | 0.80 |
| V | ViSA | 0.92 | 0.76 | 0.84 | 0.93 | 0.90 | 0.86 | 0.84 | 0.72 | 0.79 | 0.90 | 0.81 | 0.83 |

Table 1: Baseline comparison on JIGSAWS. We report Spearman's Rank Correlations under three cross-validation schemes on every task. K: kinematic data, V: video frames, *: extra annotations such as surgical gestures are utilized.

Dataset. We evaluate the ViSA framework on 2 datasets for surgical skill assessment: JIGSAWS [4] and HeiChole [22]. JIGSAWS is a widely used dataset consisting of 3 elementary surgical tasks: Knot Tying (KT), Needle Passing (NP), and Suturing (SU). Each task contains more than 30 trials performed on the da Vinci surgical system, each rated on 6 aspects. Following [13, 23, 26], the sum score is used as the ground truth. We validate our method on every task under 3 cross-validation schemes: Leave-one-supertrial-out (LOSO), Leave-one-user-out (LOUO), and 4-Fold [3, 14, 18]. HeiChole is a challenging dataset containing 24 endoscopic videos recorded in real surgical environments for laparoscopic cholecystectomy. For each video, 2 clips covering the Calot triangle dissection and gallbladder dissection phases are provided with skill scores from 5 domains; we use the sum score as the ground truth. We train and validate our framework on the 48 video clips by 4-fold cross-validation with a 75/25 partition ratio.

Evaluation Metric. For JIGSAWS, since previous works [13, 23] only reported Spearman's Rank Correlation (Corr), we report the validation correlation averaged over all folds for each task and compute the average correlation across tasks via Fisher's z-value [15] for the baseline comparison. We additionally report the Mean Absolute Error (MAE) in the ablation studies. For HeiChole, we report both the correlation and the MAE averaged over all folds. All reported results are averaged over multiple runs with different random seeds.
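For reference, a short sketch of how fold-averaged Spearman correlations could be aggregated across tasks with Fisher's z-transform; the function names come from scipy/numpy, and the exact aggregation recipe is an assumption based on [15]:

```python
import numpy as np
from scipy.stats import spearmanr


def average_correlation(per_task_preds, per_task_targets):
    """Average Spearman correlations over tasks via Fisher's z-transform (sketch)."""
    rhos = [spearmanr(p, t).correlation
            for p, t in zip(per_task_preds, per_task_targets)]
    z = np.arctanh(np.clip(rhos, -0.999999, 0.999999))  # Fisher z of each task correlation
    return float(np.tanh(np.mean(z)))                    # back-transform the mean z
```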

Implementation Details. We employ R(2+1)D-18 [20] pre-trained on the Kinetics-400 dataset [8] as the feature extractor and take the response of its 4th convolutional block as the 3D spatiotemporal feature. Since each 3D feature is extracted from 4 frames, we divide each video into $T$ segments and sample a 4-frame snippet from each. $T$ is set to 32 for JIGSAWS videos and 64 for the longer HeiChole videos. Each sampled frame is resized and center-cropped before being fed to the network. We also investigate a 2D-CNN feature extractor by using ResNet-18 pre-trained on ImageNet. Since the surgical scenes in our experiments explicitly comprise 3 parts, represented by tools, tissue, and background, we set the number of semantic groups to $K = 3$; in general, $K$ should be chosen according to the scene complexity and fine-tuned based on experimental results. The framework is implemented in PyTorch. We use the SGD optimizer with a mean squared error loss. Models are trained for 40 epochs with a batch size of 4. Learning rates are initialized to 3e-5 and decayed by a factor of 0.1 every 20 epochs.

| Method | MAE | Corr |
|---|---|---|
| R2D-18 + FC | 1.56 | 0.32 |
| R2D-18 + LSTM | 1.42 | 0.15 |
| R(2+1)D-18 + FC | 1.54 | 0.29 |
| R(2+1)D-18 + LSTM | 1.33 | 0.31 |
| ViSA (R2D-18) | 1.27 | 0.46 |
| ViSA (R(2+1)D-18) | 1.27 | 0.46 |

Table 2: Results on HeiChole. Baseline frameworks are newly constructed and compared with ViSA by MAE and Corr metrics.

| FEM | SGM | TCMM | K | LOSO MAE | LOSO Corr | LOUO MAE | LOUO Corr |
|---|---|---|---|---|---|---|---|
| R2D-18 | ✗ | BiLSTMs | - | 3.40 | 0.65 | 3.53 | 0.67 |
| R2D-18 | ✓ | BiLSTMs | 3 | 3.27 | 0.80 | 3.42 | 0.74 |
| R(2+1)D-18 | ✓ | LSTMs | 3 | 2.41 | 0.83 | 3.23 | 0.78 |
| R(2+1)D-18 | ✓ | Transformer | 3 | 3.02 | 0.76 | 3.27 | 0.68 |
| R(2+1)D-18 | ✗ | BiLSTMs | - | 2.90 | 0.73 | 3.30 | 0.72 |
| R(2+1)D-18 | ✓ | BiLSTMs | 2 | 2.34 | 0.84 | 3.07 | 0.72 |
| R(2+1)D-18 | ✓ | BiLSTMs | 4 | 2.32 | 0.85 | 2.68 | 0.79 |
| R(2+1)D-18 | ✓ | BiLSTMs | 3 | 2.24 | 0.86 | 2.86 | 0.76 |

Table 3: Ablation studies on JIGSAWS. We train frameworks across videos of the three tasks and report MAE and Corr under the LOSO and LOUO settings. K denotes the number of semantic groups.

3.1 Baseline Comparison

Tab. 1 shows the quantitative comparison of ViSA against baselines on JIGSAWS; baseline results are taken from the original papers. ViSA outperforms many competitive methods on most tasks and cross-validation schemes. ViSA has a CNN-RNN framework akin to C3D-MTL-VF [23] but surpasses it by clear margins on most LOSO and LOUO metrics. On 4-Fold, ViSA achieves nearly equivalent performance to MultiPath-VTPE [13], which requires extra input sources such as frame-wise surgical gesture information and tool movement paths. For HeiChole, since no published baseline is available, we newly constructed four basic frameworks by combining 2D or 3D CNN feature extractors with temporal aggregation modules of fully connected layers (FC) or LSTMs, sharing similar CNN architectures and pre-trained parameters with ViSA. Tab. 2 shows that ViSA outperforms the four baselines on both metrics.

Figure 2: Visualization of the assignment maps on JIGSAWS and HeiChole, displaying the correspondence between video parts and different semantic groups.
Figure 3: Visual explanations generated by Grad-CAM (input frames, without SGM, with SGM). SGM facilitates concentration on the tools and discards unrelated background regions.
Figure 4: Improved semantic grouping results after using the tool position supervision (without position supervision, heatmaps for position supervision, with position supervision). Green regions shift to focus on the tool clips after using supervision.

3.2 Assignment Visualization

Fig. 2 visualizes the assignment maps generated by ViSA on the two datasets. The three semantic groups generally correspond to the tools, the manipulated tissue, and the background regions, as expected. Notably, ViSA obtains this assignment without any supervision on semantics, which further indicates its effectiveness in modeling surgical actions over spatiotemporal dimensions.

3.3 Ablation Study

We also conduct an ablation analysis on three key components and one hyper-parameter of ViSA: the Feature Extraction Module (FEM), the Semantic Grouping Module (SGM), the Temporal Context Modeling Module (TCMM), and the number of semantic groups (K). Following [23], we train and validate the ablative frameworks across the videos of all 3 JIGSAWS tasks pooled together, in order to obtain stable results on more samples. Results in Tab. 3 indicate that: (1) leveraging SGM boosts performance with either R(2+1)D-18 or R2D-18; (2) BiLSTMs perform better than LSTMs and a Transformer [21] in modeling temporal context; (3) increasing K from 2 to 3 brings a larger improvement than raising it from 3 to 4, which indicates that separating semantics into 3 groups fits the JIGSAWS dataset. The Transformer consists of two layers of LayerNorm + Multi-Head Attention + MLP. Considering the data-hungry nature of Transformers [6], we attribute its unremarkable performance to the small amount of training data.

Fig. 3 presents the visual explanations generated by the post-hoc interpretation method Grad-CAM [12, 17], which localizes the input regions used by the network for decision making. Compared to the network without the semantic grouping module (SGM), the full framework relies on more task-related parts for its predictions (e.g., regions around the robotic tools) and discards many unrelated regions. We attribute this improvement to explicitly discovering different video parts and modeling their spatiotemporal relationships in our framework.

3.4 Improved Performance with Supervision

ViSA also supports explicit supervision of the grouping process to allocate specific semantics to certain expected groups. On JIGSAWS, kinematic data recording the positions of the two robotic clips in 3D space are available. We first approximate the two clips' positions on the 2D image plane by projecting their 3D kinematic positions with an estimated transformation matrix. Then we generate the heatmaps from the 2D positions and train the framework as described in Sec. 2.5. Fig. 4 illustrates one example of the generated position heatmaps. On the Suturing task, this supervision improves the average validation correlations under the LOSO, LOUO, and 4-Fold schemes. In Fig. 4, we show an imperfect assignment map generated by the model without supervision and the corresponding map after adding the position supervision. Although the position supervision is not perfectly precise, it still helps the network discover the tools' features: the green regions become less noisy and are guided toward the tool clips and tool-tissue interactions.
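A sketch of how the supervision heatmaps could be rendered from the projected 2D clip positions (one Gaussian bump per tool clip; the kernel width and the helper name are hypothetical, and the 3D-to-2D projection itself is not shown):

```python
import numpy as np


def make_position_heatmap(points_2d, height, width, sigma=5.0):
    """Render a supervision heatmap from projected 2D tool positions (sketch).

    points_2d: iterable of (x, y) pixel coordinates of the tool clips.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    heatmap = np.zeros((height, width), dtype=np.float32)
    for x, y in points_2d:
        bump = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
        heatmap = np.maximum(heatmap, bump)   # keep the stronger response per pixel
    return heatmap
```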

4 Conclusion

In this paper, we proposed a novel framework, ViSA, to predict skill scores from surgical videos by discovering and aggregating different semantic parts across spatiotemporal dimensions. The framework achieves competitive performance on two datasets, JIGSAWS and HeiChole, and supports explicit supervision on the semantic grouping of features for further performance improvement.

Acknowledgment

This work is supported by JST AIP Acceleration Research Grant Number JPMJCR20U1, JSPS KAKENHI Grant Number JP20H04205, JST ACT-X Grant Number JPMJAX190D, JST Moonshot R&D Grant Number JPMJMS2011, Fundamental Research Funds for the Central Universities under Grant DUT21RC(3)028 and a project commissioned by NEDO.

References

  • [1] H. I. Fawaz, G. Forestier, J. Weber, L. Idoumghar, and P. Muller (2018) Evaluating surgical skills from kinematic data using convolutional neural networks. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 214–221.
  • [2] I. Funke, S. T. Mees, J. Weitz, and S. Speidel (2019) Video-based surgical skill assessment using 3D convolutional neural networks. International Journal of Computer Assisted Radiology and Surgery 14 (7), pp. 1217–1225.
  • [3] J. Gao, W. Zheng, J. Pan, C. Gao, Y. Wang, W. Zeng, and J. Lai (2020) An asymmetric modeling for action assessment. In European Conference on Computer Vision, pp. 222–238.
  • [4] Y. Gao, S. S. Vedula, C. E. Reiley, N. Ahmidi, B. Varadarajan, H. C. Lin, L. Tao, L. Zappella, B. Béjar, D. D. Yuh, et al. (2014) JHU-ISI gesture and skill assessment working set (JIGSAWS): a surgical activity dataset for human motion modeling. In MICCAI Workshop: M2CAI, Vol. 3, pp. 3.
  • [5] R. Girdhar, D. Ramanan, A. Gupta, J. Sivic, and B. Russell (2017) ActionVLAD: learning spatio-temporal aggregation for action classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 971–980.
  • [6] A. Hassani, S. Walton, N. Shah, A. Abuduweili, J. Li, and H. Shi (2021) Escaping the big data paradigm with compact transformers. arXiv preprint arXiv:2104.05704.
  • [7] Z. Huang and Y. Li (2020) Interpretable and accurate fine-grained recognition via region grouping. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8662–8672.
  • [8] W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back, P. Natsev, et al. (2017) The Kinetics human action video dataset. arXiv preprint arXiv:1705.06950.
  • [9] J. D. Kelly, A. Petersen, T. S. Lendvay, and T. M. Kowalewski (2020) Bidirectional long short-term memory for surgical skill classification of temporally segmented tasks. International Journal of Computer Assisted Radiology and Surgery 15 (12), pp. 2079–2088.
  • [10] J. L. Lavanchy, J. Zindel, K. Kirtac, I. Twick, E. Hosgor, D. Candinas, and G. Beldi (2021) Automation of surgical skill assessment using a three-stage machine learning algorithm. Scientific Reports 11 (1), pp. 1–9.
  • [11] Z. Li, Y. Huang, M. Cai, and Y. Sato (2019) Manipulation-skill assessment from videos with spatial attention network. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops.
  • [12] Z. Li, W. Wang, Z. Li, Y. Huang, and Y. Sato (2021) Spatio-temporal perturbations for video attribution. IEEE Transactions on Circuits and Systems for Video Technology 32 (4), pp. 2043–2056.
  • [13] D. Liu, Q. Li, T. Jiang, Y. Wang, R. Miao, F. Shan, and Z. Li (2021) Towards unified surgical skill assessment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9522–9531.
  • [14] J. Pan, J. Gao, and W. Zheng (2019) Action assessment by joint relation graphs. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6331–6340.
  • [15] P. Parmar and B. Morris (2019) Action quality assessment across multiple actions. In IEEE Winter Conference on Applications of Computer Vision, pp. 1468–1476.
  • [16] P. Parmar and B. Tran Morris (2017) Learning to score Olympic events. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 20–28.
  • [17] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra (2017) Grad-CAM: visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE/CVF International Conference on Computer Vision.
  • [18] Y. Tang, Z. Ni, J. Zhou, D. Zhang, J. Lu, Y. Wu, and J. Zhou (2020) Uncertainty-aware score distribution learning for action quality assessment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
  • [19] L. Tao, E. Elhamifar, S. Khudanpur, G. D. Hager, and R. Vidal (2012) Sparse hidden Markov models for surgical gesture classification and skill evaluation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 167–177.
  • [20] D. Tran, H. Wang, L. Torresani, J. Ray, Y. LeCun, and M. Paluri (2018) A closer look at spatiotemporal convolutions for action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6450–6459.
  • [21] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. Advances in Neural Information Processing Systems 30.
  • [22] M. Wagner, B. Müller-Stich, A. Kisilenko, D. Tran, P. Heger, L. Mündermann, D. M. Lubotsky, B. Müller, T. Davitashvili, M. Capek, et al. (2021) Comparative validation of machine learning algorithms for surgical workflow and skill analysis with the HeiChole benchmark. arXiv preprint arXiv:2109.14956.
  • [23] T. Wang, Y. Wang, and M. Li (2020) Towards accurate and interpretable surgical skill assessment: a video-based method incorporating recognized surgical gestures and skill levels. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 668–678.
  • [24] Z. Wang and A. M. Fey (2018) Deep learning with convolutional neural network for objective skill evaluation in robot-assisted surgery. International Journal of Computer Assisted Radiology and Surgery 13 (12), pp. 1959–1970.
  • [25] X. Xiang, Y. Tian, A. Reiter, G. D. Hager, and T. D. Tran (2018) S3D: stacking segmental P3D for action quality assessment. In IEEE International Conference on Image Processing, pp. 928–932.
  • [26] A. Zia and I. Essa (2018) Automated surgical skill assessment in RMIS training. International Journal of Computer Assisted Radiology and Surgery 13 (5), pp. 731–739.
  • [27] A. Zia, Y. Sharma, V. Bettadapura, E. L. Sarin, and I. Essa (2018) Video and accelerometer-based motion analysis for automated surgical skills assessment. International Journal of Computer Assisted Radiology and Surgery 13 (3), pp. 443–455.