Deep learning-based computer vision to recognize and classify suturing gestures in robot-assisted surgery

08/26/2020
by Francisco Luongo, et al.

Our previous work classified a taxonomy of suturing gestures during the vesicourethral anastomosis of robotic radical prostatectomy in association with tissue tears and patient outcomes. Herein, we train deep learning-based computer vision (CV) models to automate the identification and classification of suturing gestures for needle driving attempts. Using two independent raters, we manually annotated live suturing video clips to label timepoints and gestures. Identification (2,395 videos) and classification (511 videos) datasets were compiled to train CV models to produce two- and five-class label predictions, respectively. Networks were trained on inputs of raw RGB pixels as well as optical flow for each frame, using 80/20 train/test splits. All models reliably predicted both the presence of a gesture (identification, AUC: 0.88) and the type of gesture (classification, AUC: 0.87), significantly above chance. For both the identification and classification datasets, we observed no effect of recurrent model choice (LSTM vs. convLSTM) on performance. Our results demonstrate CV's ability to recognize features that not only identify the action of suturing but also distinguish between classes of suturing gestures, demonstrating the potential of deep learning CV for future automation of surgical skill assessment.
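Both the identification and classification models above are scored by AUC. As background, the ROC AUC for a binary task such as gesture identification equals the probability that a randomly chosen positive example is scored above a randomly chosen negative one, so it can be computed directly from pairwise rank comparisons. A minimal pure-Python sketch of that computation (the function name `roc_auc` and the toy scores are illustrative, not taken from the paper):

```python
def roc_auc(labels, scores):
    """ROC AUC via the rank-sum (Mann-Whitney U) formulation.

    labels: iterable of 0/1 ground-truth classes (e.g. gesture absent/present)
    scores: iterable of model confidence scores, same length as labels
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative example")
    # Count positive/negative pairs where the positive outranks
    # the negative; ties contribute half a win.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Example: two negatives, two positives, one mis-ranked pair -> AUC 0.75
print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

An AUC of 0.5 corresponds to chance-level ranking, which is the baseline against which the reported 0.88 (identification) and 0.87 (classification) values are judged.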


Related research

06/25/2019 - Gesture Recognition in RGB Videos Using Human Body Keypoints and Dynamic Time Warping
05/05/2022 - Deep Neural Network Approaches for Analysing Videos of Music Performances
05/02/2018 - Joint Surgical Gesture and Task Classification with Multi-Task and Multimodal Learning
02/24/2021 - TeethTap: Recognizing Discrete Teeth Gestures Using Motion and Acoustic Sensing on an Earpiece
05/21/2019 - Improved Optical Flow for Gesture-based Human-robot Interaction
09/29/2022 - Bounded Future MS-TCN++ for Surgical Gesture Recognition
02/19/2018 - Learning to Recognize Touch Gestures: Recurrent vs. Convolutional Features and Dynamic Sampling
