Towards Surgical Context Inference and Translation to Gestures

02/28/2023
by Kay Hutchinson, et al.

Manual labeling of gestures in robot-assisted surgery is labor-intensive, prone to errors, and requires expertise or training. We propose a method for automated and explainable generation of gesture transcripts that leverages the abundance of data for image segmentation to train a surgical scene segmentation model that provides surgical tool and object masks. Surgical context is detected from the segmentation masks by examining the distances and intersections between the tools and objects. Next, context labels are translated into gesture transcripts using a knowledge-based Finite State Machine (FSM) and a data-driven Long Short-Term Memory (LSTM) model. We evaluate each stage of our method by comparing its results with the ground-truth segmentation masks, the consensus context labels, and the gesture labels in the JIGSAWS dataset. Our results show that our segmentation models achieve state-of-the-art performance in recognizing the needle and thread in Suturing, and that we can automatically detect important surgical states with high agreement with crowd-sourced labels (e.g., contact between graspers and objects in Suturing). We also find that the FSM model is more robust to poor segmentation and labeling performance than the LSTM. Our proposed method can significantly shorten the gesture labeling process (by 2.8 times).
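The context-detection step described above, examining distances and intersections between tool and object masks, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the binary-mask representation, and the `contact_thresh` parameter are all assumptions made for the example.

```python
import numpy as np

def masks_intersect(mask_a, mask_b):
    """Check whether two binary segmentation masks share any pixels."""
    return bool(np.logical_and(mask_a, mask_b).any())

def min_mask_distance(mask_a, mask_b):
    """Minimum Euclidean pixel distance between two binary masks."""
    pts_a = np.argwhere(mask_a)
    pts_b = np.argwhere(mask_b)
    if len(pts_a) == 0 or len(pts_b) == 0:
        return np.inf
    # Brute-force pairwise distances; fine for small mask regions.
    diffs = pts_a[:, None, :] - pts_b[None, :, :]
    return float(np.sqrt((diffs ** 2).sum(axis=-1)).min())

def detect_context(grasper_mask, needle_mask, contact_thresh=2.0):
    """Derive a simple surgical-context state (hypothetical labels)
    from a grasper mask and a needle mask."""
    if masks_intersect(grasper_mask, needle_mask):
        return "contact"
    if min_mask_distance(grasper_mask, needle_mask) <= contact_thresh:
        return "near"
    return "apart"
```

A per-frame sequence of such states could then be fed to the FSM or LSTM translation stage to produce gesture transcripts.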

