
A Learning-Driven Framework with Spatial Optimization For Surgical Suture Thread Reconstruction and Autonomous Grasping Under Multiple Topologies and Environmental Noises

by Bo Lu, et al.

Surgical knot tying is one of the most fundamental and important procedures in surgery, and a high-quality knot can significantly benefit a patient's postoperative recovery. However, prolonged operations can easily fatigue surgeons, especially during the tedious wound-closure task. In this paper, we present a vision-based method to automate suture thread grasping, a sub-task of surgical knot tying and an intermediate step between the stitching and looping manipulations. To achieve this goal, acquiring a suture's three-dimensional (3D) information is critical. Toward this objective, we first adopt a transfer-learning strategy to fine-tune a pre-trained model, combining information from large legacy surgical datasets with images obtained from the on-site equipment. Robust suture segmentation can thus be achieved despite inherent environmental noise. We further leverage a searching strategy with termination policies to infer a suture's ordering sequence based on the analysis of multiple topologies. Exact pixel-level sequences along a suture can be obtained and further applied to 3D shape reconstruction using our optimized shortest-path approach. The grasping point satisfying the suturing criterion can then be acquired. Experiments on 2D suture segmentation and ordering-sequence inference under environmental noise were extensively evaluated. Results for the automated grasping operation were demonstrated by simulations in V-REP and by robot experiments using a Universal Robot (UR) together with the da Vinci Research Kit (dVRK), adopting our learning-driven framework.
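The sequence-inference step described above can be illustrated with a toy sketch: segmented suture pixels are ordered by greedily walking from one tip to the nearest unvisited pixel, with a termination policy that stops the search when the nearest candidate lies beyond a distance threshold. The greedy policy and the `max_step` parameter here are illustrative assumptions, not the paper's actual search strategy or topology analysis.

```python
import math

def infer_sequence(points, start, max_step=5.0):
    """Greedy nearest-neighbour ordering of segmented suture pixels.

    Illustrative sketch only: walk from `start` to the closest unvisited
    pixel, and terminate when the nearest candidate is farther than
    `max_step` (a stand-in for the paper's termination policies).
    """
    remaining = [p for p in points if p != start]
    sequence = [start]
    current = start
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        if math.dist(current, nxt) > max_step:
            break  # termination policy: gap too large, stop the trace
        sequence.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return sequence

# Four pixels forming a curve, plus a distant outlier (noise).
pixels = [(0, 0), (1, 1), (2, 1), (3, 2), (20, 20)]
order = infer_sequence(pixels, start=(0, 0), max_step=3.0)
# the outlier (20, 20) is excluded by the termination policy
```

In the full framework, the resulting pixel sequence would then feed the shortest-path-based 3D reconstruction; this sketch covers only the 2D ordering stage.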



