SurGNN: Explainable visual scene understanding and assessment of surgical skill using graph neural networks

08/24/2023
by Shuja Khalid, et al.

This paper explores how graph neural networks (GNNs) can be used to enhance visual scene understanding and surgical skill assessment. By representing the complex visual data of a surgical procedure as a graph and analyzing it with GNNs, relevant features can be extracted and surgical skill can be predicted. GNNs also provide interpretable results, revealing the specific actions, instruments, or anatomical structures that contribute to the predicted skill metrics. This is highly beneficial for surgical educators and trainees, as it offers insight into the factors that drive successful surgical performance and outcomes. SurGNN proposes two concurrent approaches: one supervised and the other self-supervised. The paper also briefly discusses other automated surgical skill evaluation techniques and highlights the limitations of hand-crafted features in capturing the intricacies of surgical expertise. We use the proposed methods to achieve state-of-the-art results on the EndoVis19 and custom datasets. A working implementation of the code can be found at https://github.com/<redacted>.
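The abstract does not specify the network configuration, so the following is only a minimal sketch of the general idea: a surgical scene represented as a graph (nodes for instruments or anatomical structures, edges for spatial or interaction relations) is encoded by a GNN and pooled into a graph-level skill prediction. The class and parameter names (SurgicalSkillGNN, num_node_features, num_skill_classes) and the GCN backbone are illustrative assumptions, not the authors' published architecture.

```python
# Hypothetical sketch of a GNN-based surgical skill classifier.
# The node features, class count, and GCN backbone are assumptions,
# not the architecture described in the paper.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool


class SurgicalSkillGNN(torch.nn.Module):
    def __init__(self, num_node_features: int, hidden_dim: int, num_skill_classes: int):
        super().__init__()
        # Two message-passing layers over the surgical scene graph
        self.conv1 = GCNConv(num_node_features, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, hidden_dim)
        # Graph-level readout feeds a skill classifier
        self.classifier = torch.nn.Linear(hidden_dim, num_skill_classes)

    def forward(self, x, edge_index, batch):
        # x: [num_nodes, num_node_features] -- e.g. instrument/anatomy features
        # edge_index: [2, num_edges]        -- e.g. spatial or interaction relations
        # batch: [num_nodes]                -- graph membership for pooling
        h = F.relu(self.conv1(x, edge_index))
        h = F.relu(self.conv2(h, edge_index))
        # Pool node embeddings into one vector per scene graph
        g = global_mean_pool(h, batch)
        return self.classifier(g)
```

Because the pooled embedding is built from identifiable scene elements, standard attribution techniques (e.g. gradients with respect to node features) can indicate which instruments or structures drove a predicted skill score, which is the kind of interpretability the abstract highlights.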
