Artificial Intelligence in Surgery

12/23/2019 · by Xiao-Yun Zhou et al.

Artificial Intelligence (AI) is gradually changing the practice of surgery with the advanced technological development of imaging, navigation and robotic intervention. In this article, the recent successful and influential applications of AI in surgery are reviewed from pre-operative planning and intra-operative guidance to the integration of surgical robots. We end with summarizing the current state, emerging trends and major challenges in the future development of AI in surgery.


1 Introduction

Advances in surgery have made a significant impact on the management of both acute and chronic diseases, prolonging life and continuously extending the boundary of survival. These advances are underpinned by continuing technological developments in diagnosis, imaging, and surgical instrumentation. Complex surgical navigation and planning are made possible through the use of both pre- and intra-operative imaging techniques such as ultrasound, Computed Tomography (CT), and Magnetic Resonance Imaging (MRI) vitiello2012emerging ; surgical trauma is reduced through Minimally Invasive Surgery (MIS), now increasingly combined with robotic assistance troccaz2019frontiers ; post-operative care is also improved by sophisticated wearable and implantable sensors that support early discharge after surgery, enhance patient recovery and enable early detection of post-surgical complications yang2014body ; yang2018implantable . Many terminal illnesses have been transformed into clinically manageable chronic lifelong conditions, and surgery is increasingly focused on its systemic impact on the patient rather than on isolated surgical treatment or anatomical alteration, with careful consideration of the metabolic, haemodynamic and neurohormonal consequences that can influence quality of life.

In medicine, AI has played an important role in clinical decision support since the early years of the MYCIN system shortliffe2012MYCIN . AI is now increasingly used for risk stratification, genomics, imaging and diagnosis, precision medicine, and drug discovery. Its introduction in surgery is more recent, with strong roots in imaging and navigation: early techniques focused on feature detection and computer-assisted intervention for both pre-operative planning and intra-operative guidance. Over the years, supervised algorithms such as active shape models, atlas-based methods and statistical classifiers have been developed vitiello2012emerging . With the success of AlexNet krizhevsky2012imagenet , deep learning methods, especially the Deep Convolutional Neural Network (DCNN) in which multiple convolutional layers are cascaded, have enabled automatically learned, data-driven descriptors, rather than ad hoc hand-crafted features, to be used for image understanding with improved robustness and generalizability. With the increasing use of robotics in surgery, AI is set to transform the future of surgery through more sophisticated sensorimotor functions with different levels of autonomy, giving systems the ability to adapt to constantly changing, patient-specific in vivo environments while leveraging parallel advances in early detection and targeted therapy yang2017medical . It is reasonable to expect that future surgical robots will be able to perceive and understand complicated surroundings, conduct real-time decision making and perform desired tasks with increased precision, safety, and efficiency. But what are the roles of AI in these systems and in the future of surgery in general? How can such systems deal with dynamic environments and learn from human operators? How can reliable control policies be derived and human-machine symbiosis achieved?

In this article, we review the applications of AI in pre-operative planning and intra-operative guidance, as well as its integrated use in surgical robotics. Popular AI techniques, together with their key requirements, challenges and subareas in surgery, are outlined in Fig. 1, which also shows the flow of the paper: we first introduce the application of AI in pre-operative planning, followed by AI techniques for intra-operative guidance, a review of AI in surgical robotics, and finally conclusions and a future outlook. Throughout, we place a strong emphasis on deep learning based approaches.

Figure 1: An overview of popular AI techniques, as well as the key requirements, challenges, and subareas of AI used in pre-operative planning, intra-operative guidance and surgical robotics.

2 AI for Pre-operative Planning

Pre-operative planning, in which surgeons plan the surgical procedure from existing medical records and imaging, is essential to the success of a surgery. Among existing imaging modalities, X-ray, CT, ultrasound and MRI are the most commonly used in practice. Routine tasks based on medical imaging include anatomical classification, detection, segmentation, and registration.

2.1 Classification

Classification outputs the diagnostic value of its input, which is a single medical image or a set of images or volumes of organs or lesions. In addition to traditional machine learning and image analysis techniques, deep learning based methods for pre-operative planning are on the rise litjens2017survey . For the latter, the network architecture for classification is composed of convolutional layers for extracting information from the input images or volumes and fully connected layers for regressing the diagnostic value.
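
To make this generic pipeline concrete, below is a minimal PyTorch sketch of such a classifier; the layer sizes, single-channel input and 256×256 resolution are illustrative assumptions, not the configuration of any cited work.

```python
import torch
import torch.nn as nn

class DiagnosticCNN(nn.Module):
    """Convolutional layers extract features from the input image;
    fully connected layers regress the diagnostic class scores."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 64 * 64, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# e.g. one hypothetical 256x256 single-channel slice
logits = DiagnosticCNN()(torch.randn(1, 1, 256, 256))
```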

For example, a classification pipeline based on Google's Inception and ResNet architectures, trained with different strategies, has been proposed to classify lung, bladder and breast cancer types khosravi2018deep . Chilamkurthy et al. demonstrated that deep learning can recognize intracranial haemorrhage, calvarial fracture, midline shift and mass effect by testing a set of deep learning algorithms on head CT scans chilamkurthy2018deep . Mortality, renal failure and post-operative bleeding in patients after cardiosurgical care can be predicted by a Recurrent Neural Network (RNN) in real time with improved accuracy compared to standard-of-care clinical tools meyer2018machine . ResNet-50 and Darknet-19 have been used to classify benign and malignant lesions in ultrasound images, showing similar sensitivity and improved specificity li2019diagnosis . Many studies show promising human-level accuracy with good reproducibility, but the limited explainability of these approaches remains a potential hurdle for regulatory approval.

2.2 Detection

Detection provides the spatial localization of regions of interest, often in the form of bounding boxes or landmarks, in addition to image- or region-level classification. Here too, deep learning based approaches have shown promise. In contrast to traditional algorithms, which are task-specific because of their hand-crafted feature extractors, DCNNs for detection usually consist of convolutional layers for feature extraction and regression layers for predicting the bounding box properties.
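
As a sketch of this two-head design (shared feature extractor plus bounding-box regression), consider the toy single-box detector below; the architecture and the (x, y, w, h) box parameterization are our own illustrative assumptions.

```python
import torch
import torch.nn as nn

class SimpleDetector(nn.Module):
    """A shared convolutional backbone feeds two heads: one regresses
    the bounding-box properties (x, y, w, h), the other classifies the
    detected region. A single-box setup is used for brevity."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.box_head = nn.Linear(32, 4)            # bounding-box regression
        self.cls_head = nn.Linear(32, num_classes)  # region classification

    def forward(self, x):
        f = self.backbone(x)
        return self.box_head(f), self.cls_head(f)

boxes, scores = SimpleDetector()(torch.randn(1, 1, 256, 256))
```

Real detectors, including those cited below, predict many candidate boxes over a feature grid; the single-box head here only illustrates the convolution-then-regression structure.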

For detecting prostate cancer from 4D Positron Emission Tomography (PET) images, a deeply stacked convolutional autoencoder was trained to extract statistical and kinetic biological features rubinstein2019unsupervised . For pulmonary nodule detection, 3D CNNs with roto-translation group convolutions (3D G-CNNs) were proposed, with good accuracy, sensitivity and convergence speed winkels2019pulmonary . CNNs are also frequently used in orthopaedics, for example for cartilage lesion detection liu2018deepZHOU . For breast lesion detection, Deep Reinforcement Learning (DRL) based on an extension of the deep Q-network was used to learn a search policy from dynamic contrast-enhanced MRI maicas2017deep . To detect acute intracranial haemorrhage from CT scans and to improve network interpretability, Lee et al. lee2019explainable used an attention map and an iterative process to mimic the workflow of radiologists.

2.3 Segmentation

Segmentation can be treated as a pixel- or voxel-level image classification problem. Due to the limitation of computational resources, early works on deep learning for segmentation often adopted a sliding-window based approach: each image or volume was divided into small windows, and CNNs were trained to predict the target label at the central location of each window. Dense segmentation can then be achieved by running the CNN classifier over densely sampled image windows. One well-known network in this category is DeepMedic, which showed good performance for multi-modal brain tumour segmentation from MRI kamnitsas2017efficient . However, the sliding-window approach is inefficient because the network activations of overlapping regions are computed repeatedly; it has more recently been replaced by Fully Convolutional Networks (FCNs) long2015fully . The key idea is to replace the fully connected layers of a classification network with convolutional and up-sampling layers, which significantly improves segmentation efficiency. For medical image segmentation, U-Net ronneberger2015u ; cciccek20163d , or more generally the encoder-decoder network, is a representative FCN that has shown promising performance. The encoder comprises multiple convolutional and down-sampling layers that extract image features at different scales; the decoder comprises convolutional and up-sampling layers that recover the spatial resolution of the feature maps and finally achieve pixel- or voxel-wise dense segmentation. A review of different normalization methods for training U-Net for medical image segmentation can be found in zhou2019normalization .
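
The encoder-decoder structure described above can be sketched in a few lines; the following toy U-Net-style network (channel counts and depth are illustrative assumptions) shows the down-sampling path, the up-sampling path and a skip connection.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Minimal encoder-decoder in the U-Net spirit: the encoder extracts
    features at two scales, the decoder recovers spatial resolution, and
    a skip connection concatenates encoder features into the decoder."""
    def __init__(self, in_ch: int = 1, num_classes: int = 2):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16, num_classes, 1)  # pixel-wise class scores

    def forward(self, x):
        e1 = self.enc1(x)                        # full-resolution features
        e2 = self.enc2(self.down(e1))            # half-resolution features
        d = self.up(e2)                          # back to full resolution
        d = self.dec(torch.cat([d, e1], dim=1))  # skip connection
        return self.head(d)                      # dense segmentation logits

masks = TinyUNet()(torch.randn(1, 1, 128, 128)).argmax(dim=1)
```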

For navigating endoscopic pancreatic and biliary procedures, Gibson et al. gibson2018automatic used dilated convolutions and fused image features at multiple scales to segment abdominal organs from CT scans. For interactive segmentation of the placenta and fetal brain from MRI, an FCN was combined with user-defined bounding boxes and scribbles, with the last few layers of the FCN fine-tuned based on the user input wang2018interactive . For aortic MRI, Bai et al. bai2018recurrent combined an FCN with an RNN to incorporate both spatial and temporal information. The segmentation and localization of surgical instrument landmarks were modelled as heatmap regression, with an FCN used to track the instruments in near real time laina2017concurrent . For the segmentation and labelling of vertebrae from CT and MRI, Lessmann et al. proposed an iterative instance segmentation approach in which an FCN concurrently performs vertebra segmentation, regresses the anatomical landmarks and predicts vertebra visibility lessmann2019iterative . For pulmonary nodule segmentation, Feng et al. addressed the need for accurate manual annotations when training FCNs by learning discriminative regions from weakly-labelled lung CT with a candidate screening method feng2017discriminative .

2.4 Registration

Registration is the spatial alignment of two medical images, volumes or modalities, and is particularly important for both pre- and intra-operative planning. Traditional algorithms typically iterate over a parametric transformation, e.g., an elastic, fluid or B-spline model, to minimize a given metric, e.g., mean squared error, normalized cross-correlation or mutual information, between the two medical images, volumes or modalities. Recently, deep regression models have been used to replace these time-consuming, optimization-based registration algorithms.
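
The traditional iterative scheme can be illustrated with a short optimization loop; the sketch below fits a 2D affine transform by gradient descent on a mean-squared-error metric (the transform model, metric and optimizer settings are arbitrary choices for illustration).

```python
import torch
import torch.nn.functional as F

def register_affine(fixed, moving, iters=200, lr=1e-2):
    """Iteratively optimise a 2D affine transform so that the warped
    moving image minimises MSE against the fixed image.
    Both inputs are tensors of shape [1, 1, H, W]."""
    theta = torch.tensor([[1., 0., 0.],
                          [0., 1., 0.]], requires_grad=True)  # identity init
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(iters):
        grid = F.affine_grid(theta.unsqueeze(0), fixed.shape, align_corners=False)
        warped = F.grid_sample(moving, grid, align_corners=False)
        loss = F.mse_loss(warped, fixed)   # the metric being minimised
        opt.zero_grad(); loss.backward(); opt.step()
    return theta.detach()

fixed = torch.rand(1, 1, 64, 64)
moving = torch.roll(fixed, shifts=3, dims=-1)   # synthetic misalignment
theta = register_affine(fixed, moving)
```

Swapping the metric for normalized cross-correlation or mutual information, or the affine model for a B-spline, recovers the other classical variants mentioned above.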

Example deep learning based approaches include VoxelMorph, which uses CNNs and leverages auxiliary segmentations to map an input image pair to a deformation field while maximizing standard image matching objective functions balakrishnan2019voxelmorph . An end-to-end deep learning framework was proposed with three stages, affine transform prediction, momentum calculation and non-parametric refinement, to combine affine registration and vector momentum-parameterized stationary velocity fields for 3D medical image registration shen2019networks . Pulmonary CT images were registered by training a 3D CNN on synthetic random transformations eppenhof2018pulmonary . A weakly supervised framework was proposed for multi-modal image registration, training on images with higher-level correspondences, i.e., anatomical labels, rather than voxel-level transformations, to predict the displacement field hu2018weakly . A Markov decision process, with each agent trained with a dilated FCN, was applied to align a 3D volume to 2D X-ray images miao2018dilated . BIRNet predicts the deformation from image appearance by training an FCN on both ground-truth deformations and image dissimilarity measures, improved with hierarchical loss, gap filling and multi-source strategies fan2019birnet . The Deep Learning Image Registration (DLIR) framework trains a CNN on the image similarity between fixed and moving image pairs, so that affine and deformable image registration can be achieved in an unsupervised manner de2019deep . RegNet considers multi-scale contexts and is trained on artificially generated Displacement Vector Fields (DVFs) to achieve non-rigid registration sokooti2017nonrigid . 3D image registration can also be formulated as a strategy learning process, with the 3D raw image as input, the next optimal action, e.g., up or down, as output, and a CNN as the agent liao2017artificial .
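
The unsupervised formulation shared by several of these works (e.g., VoxelMorph and DLIR) can be summarized as: a network predicts a displacement field, the moving image is warped with it, and the training loss combines image similarity with a smoothness regularizer. The sketch below is a toy 2D version under our own assumptions about architecture and loss weighting.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisplacementNet(nn.Module):
    """Maps a concatenated fixed/moving pair to a dense 2D displacement
    field (2 channels: dx, dy, in normalised [-1, 1] grid coordinates)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1),
        )

    def forward(self, fixed, moving):
        return self.net(torch.cat([fixed, moving], dim=1))

def warp(moving, flow):
    # identity sampling grid plus the predicted displacement
    n, _, h, w = moving.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).unsqueeze(0) + flow.permute(0, 2, 3, 1)
    return F.grid_sample(moving, grid, align_corners=False)

def unsupervised_loss(fixed, moving, flow, lam=0.1):
    sim = F.mse_loss(warp(moving, flow), fixed)  # image similarity term
    smooth = flow.diff(dim=-1).abs().mean() + flow.diff(dim=-2).abs().mean()
    return sim + lam * smooth                    # regularised objective
```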

3 AI for Intra-operative Guidance

Figure 2: AI techniques for computer-aided intra-operative guidance in MIS. Multi-modal data acquired pre-operatively and intra-operatively are used in either supervised or unsupervised learning based techniques for various surgical applications (US - ultrasound; NIRF - near-infrared fluorescence; OCT - optical coherence tomography; pCLE - probe-based confocal laser endomicroscopy; EM sensor - electromagnetic sensor; RF - random forests; BL - Bayesian learning; DT - decision tree; EM - expectation maximization; GMM - Gaussian mixture models).

Computer-aided intra-operative guidance has always been a cornerstone of MIS. Learning strategies have been extensively integrated into the development of intra-operative guidance to provide enhanced visualization and localization in surgery. Recent work on intra-operative guidance can be divided into four main aspects: intra-operative shape instantiation, endoscopic navigation, tissue tracking and Augmented Reality (AR), as summarized in Fig. 2.

3.1 3D Shape Instantiation

For intra-operative 3D reconstruction, 3D volumes can be scanned with MRI, CT or ultrasound. In practice, however, such 3D or 4D acquisition can be time-consuming or of low resolution. Real-time 3D shape instantiation, which instantiates the intra-operative 3D shape from a single or a limited number of 2D images, is therefore an emerging area of research in intra-operative guidance.

For example, a 3D prostate shape was instantiated from multiple non-parallel 2D ultrasound images with a radial basis function cool20063d . The 3D shape of an Abdominal Aortic Aneurysm (AAA) was instantiated from two 2D fluoroscopic images toth2015adaption . The 3D shapes of fully-compressed, fully-deployed and partially-deployed stent grafts were instantiated from a single 2D fluoroscopic projection with mathematical modelling, combined with the Robust Perspective-n-Point (RPnP) method, graft gap interpolation and graph neural networks zhou2017stent ; zhou2018real ; zheng2019RT . Furthermore, an equally weighted focal U-Net zhou2018real was proposed to automatically segment the markers on stent grafts, improving the efficiency of the intra-operative stent graft shape instantiation framework zhou2018towards . Moreover, the 3D AAA skeleton was instantiated from a single 2D fluoroscopic projection with skeleton deformation and graph matching zheng20183d . The 3D liver shape was instantiated from a single 2D projection with Principal Component Analysis (PCA), a Statistical Shape Model (SSM) and Partial Least Squares Regression (PLSR) lee2010dynamic . This work was further generalized to a registration-free shape instantiation framework for any dynamic organ using sparse PCA, SSM and kernel PLSR zhou2018areal . Recently, a one-stage deep learning strategy that estimates a 3D point cloud from a single 2D image was proposed for 3D shape instantiation zhou2019one .

3.2 Endoscopic Navigation

In surgery, there is an increasing trend towards intra-luminal procedures and endoscopic surgery driven by early detection and intervention. Navigation techniques have been investigated to guide the manoeuvre of endoscopes towards target locations. To this end, learning-based depth estimation, visual odometry and Simultaneous Localization and Mapping (SLAM) have been tailored for camera localization and environment mapping with the use of endoscopic images.

3.2.1 Depth estimation

Depth estimation from endoscopic images plays an essential role in 6-DoF camera motion estimation and 3D structural environment mapping, and has been tackled by both supervised mahmood2018deep ; mahmood2018unsupervised and self-supervised turan2018unsupervised ; shen2019context deep learning methods. Compared to natural images of indoor or outdoor scenes, depth recovery from endoscopic images suffers from two main challenges. First, it is practically difficult to obtain a large amount of high-quality training data containing paired video images and depth maps, due to both hardware constraints and labour-intensive labelling; conventional supervised depth recovery methods such as liu2015deep are therefore not directly applicable to endoscopic images. Second, surgical scenes are often textureless, making it difficult to apply depth recovery methods that rely on feature matching and reconstruction Zhou_2017_CVPR ; Zhan_2018_CVPR .

To address the challenge of limited training data, Ye et al. ye2017self proposed a self-supervised depth estimation approach for stereo images using siamese networks. For monocular depth recovery, Mahmood et al. mahmood2018deep ; mahmood2018unsupervised learnt the mapping from rendered RGB images to the corresponding depth maps using synthetic data and adopted domain transfer learning to convert real RGB images into rendered images. Additionally, self-supervised unpaired image-to-image translation shen2019context using a modified Cycle Generative Adversarial Network (CycleGAN) zhu2017unpaired was proposed to recover depth from bronchoscopic images. Moreover, a self-supervised CNN based on the principle of Structure from Motion (SfM) was applied to recover depth and achieve visual odometry for an endoscopic capsule robot turan2018unsupervised .
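
The self-supervision principle underlying these methods is photometric consistency: rather than ground-truth depth, the training signal is how well one view, warped according to the predicted geometry, reproduces another. A toy stereo version is sketched below (horizontal-disparity warping only; the loss form and scaling are illustrative assumptions).

```python
import torch
import torch.nn.functional as F

def photometric_loss(left, right, disparity):
    """Warp the right image towards the left using a predicted per-pixel
    disparity (in pixels) and penalise the photometric difference.
    left, right: [N, C, H, W]; disparity: [N, H, W]."""
    n, _, h, w = left.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(n, -1, -1, -1).clone()
    grid[..., 0] = grid[..., 0] - 2.0 * disparity / w  # pixel shift in grid units
    warped = F.grid_sample(right, grid, align_corners=False)
    return F.l1_loss(warped, left)  # small loss => geometrically consistent depth
```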

3.2.2 Visual odometry

Visual odometry uses consecutive video frames to estimate the pose of a moving camera. CNN-based approaches turan2018deep have been adopted for camera pose estimation based on temporal information. Turan et al. turan2018deep estimated the camera pose of an endoscopic capsule robot using a CNN for feature extraction and a Long Short-Term Memory (LSTM) network for dynamics estimation. Sganga et al. sganga2018offsetnet combined a ResNet and an FCN to calculate the pose change between consecutive video frames. However, the feasibility of localization approaches based on visual odometry has so far only been validated on lung phantom data sganga2018offsetnet and Gastrointestinal (GI) tract data turan2018deep .
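
A common shape for such pipelines is a per-frame CNN feature extractor followed by an LSTM that regresses the relative 6-DoF pose; the sketch below shows this structure with illustrative layer sizes (it is not a reimplementation of the cited systems).

```python
import torch
import torch.nn as nn

class VisualOdometryNet(nn.Module):
    """CNN extracts per-frame features; an LSTM models their temporal
    dynamics and regresses a 6-DoF relative pose per time step
    (3 translation + 3 rotation parameters)."""
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.pose = nn.Linear(64, 6)

    def forward(self, frames):                      # frames: [B, T, 3, H, W]
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.pose(out)                       # [B, T, 6] relative poses

poses = VisualOdometryNet()(torch.randn(1, 8, 3, 128, 128))
```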

3.2.3 3D reconstruction and localization

Due to the dynamic nature of tissues, real-time 3D reconstruction of the environment and localization within it are vital prerequisites for navigation. SLAM is a widely studied topic in robotics, in which a robot simultaneously builds a 3D map of the surrounding environment and localizes the camera pose within the built map. Traditional SLAM algorithms assume a rigid environment, in contrast to a typical surgical scene that involves the deformation of soft tissues and organs, which limits their direct adoption for surgical tasks. To tackle this challenge, Mountney et al. mountney2006simultaneous first applied the Extended Kalman Filter SLAM (EKF-SLAM) framework davison2007monoslam in MIS with a stereo-endoscope, where the SLAM estimate was compensated for the periodic motion of soft tissues caused by respiration mountney2010motion . Grasa et al. grasa2013visual evaluated the effectiveness of monocular EKF-SLAM in hernia repair surgery for measuring the hernia defect. In turan2017non , depth images were first estimated from RGB data through Shape from Shading (SfS), after which the RGB-D SLAM framework was adopted using the paired RGB and depth images. Song et al. song2018mis implemented a dense deformable SLAM on a GPU and ORB-SLAM on a CPU to boost the localization and mapping performance of a stereo-endoscope.
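
At the core of EKF-SLAM is the standard Kalman predict/update cycle applied to a stacked state of camera pose and landmark positions. A generic linear-model sketch is given below; real EKF-SLAM linearizes nonlinear motion and observation models via Jacobians, which we omit for brevity.

```python
import numpy as np

def ekf_step(x, P, z, F_mat, H, Q, R):
    """One EKF predict/update cycle. x: stacked pose/landmark state;
    P: state covariance; z: new measurement; F_mat/H: (linearised)
    motion and observation models; Q/R: process/measurement noise."""
    # predict
    x_pred = F_mat @ x
    P_pred = F_mat @ P @ F_mat.T + Q
    # update
    y = z - H @ x_pred                       # innovation
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```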

Endovascular interventions are increasingly used to treat cardiovascular diseases. However, visual cameras are not usable inside vessels; catheter mapping, for example, is commonly used for navigation in Radiofrequency Catheter Ablation (RFCA) zhou2016path . To this end, recent advances in Intravascular Ultrasound (IVUS) have opened up another avenue for endovascular intra-operative guidance. Shi and Yang first proposed the Simultaneous Catheter and Environment (SCEM) framework for 3D vasculature reconstruction by fusing Electromagnetic (EM) sensing data and IVUS images shi2014simultaneous . To deal with the errors and uncertainty measured from both the EM sensors and the IVUS images, the improved SCEM+ performs the 3D reconstruction by solving a nonlinear optimization problem zhao2016scem . To further alleviate the burden of pre-registration between pre-operative CT data and EM sensing data, a registration-free SCEM framework was proposed for more efficient data fusion zhao2016registration .

3.3 Tissue Feature Tracking

Learning strategies have also been applied to soft tissue tracking in MIS. Mountney et al. mountney2008soft introduced an online learning framework that updates the feature tracker over time by selecting correct features using decision tree classification. Ye et al. ye2016online proposed a detection approach that incorporates a structured Support Vector Machine (SVM) and an online random forest for re-targeting a pre-selected optical biopsy region on the soft tissue surface of the GI tract. Wang et al. wang20173 adopted a statistical appearance model to differentiate the organ from the background in their region-based 3D tracking algorithm. Their validation results demonstrate that incorporating learning strategies improves the robustness of tissue tracking against deformation and illumination variation.

3.4 Augmented Reality

AR enhances surgeons' intra-operative vision by providing a semi-transparent overlay of pre-operative imaging on the area of interest bernhardt2017status . Wang et al. wang2014augmented used a projector to project the AR overlay for oral and maxillofacial surgery, with 3D contour matching used to calculate the transformation between the virtual image and the real teeth. Instead of using projectors, Pratt et al. exploited the HoloLens, a head-mounted AR device, to display the 3D vascular model on the lower limb of the patient pratt2018through . As one of the most challenging tasks is to project the overlay onto markerless deformable organs, Zhang et al. zhang2019markerless introduced an automatic registration framework for AR navigation, in which Iterative Closest Point (ICP) and RANSAC were applied for 3D deformable tissue reconstruction.

4 AI for Surgical Robotics

Figure 3: AI techniques for surgical robotics including perception, localization & mapping, system modelling & control, and human-robot interaction.

With the development of AI techniques, surgical robots can achieve superhuman performance during MIS topol2019high ; mirnezami2018surgery . The objective of AI is to boost the capability of surgical robotic systems in perceiving the complex in vivo environment, conducting decision making, and performing the desired task with increased precision, safety, and efficiency. As illustrated in Fig. 3, common AI techniques used for Robotic and Autonomous Systems (RAS) can be summarized in the following four aspects: 1) perception, 2) localization and mapping, 3) system modelling and control, and 4) human-robot interaction. As overlap exists between intra-operative guidance and robot localization & mapping, this section mainly covers the methods for increasing the level of autonomy in surgical robotics.

4.1 Perception

4.1.1 Instrument segmentation and tracking

The instrument segmentation task can be divided into three groups: binary segmentation distinguishing instrument from background; multi-class segmentation of instrument parts, i.e., shaft, wrist, and gripper; and multi-class segmentation of different instruments. The advancement of deep learning in segmentation has significantly improved instrument segmentation accuracy, from the early exploitation of SVMs for pixel-level binary classification bouget2015detecting to more recent popular DCNN architectures, e.g., U-Net, TernausNet-VGG11, TernausNet-VGG16, and LinkNet based on the ResNet architecture, for both binary and multi-class segmentation shvets2018automatic . To further improve the performance, Islam et al. developed a cascaded CNN with a multi-resolution feature fusion framework islam2019real .

Algorithms for solving the tracking problem can be summarized into two categories: tracking by detection and tracking via local optimization sznitman2012unified . Previous work in this field mainly relied on hand-crafted features, such as Haar wavelets sznitman2012unified , color or texture features zhang2017real , and gradient-based features ye2016real , each with different advantages and disadvantages. Deep learning based surgical instrument tracking methods have been built on tracking by detection zhao2017tracking ; nwoye2019weakly . Various CNN architectures, e.g., AlexNet zhao2017tracking and ResNet nwoye2019weakly ; laina2017concurrent , were used to detect the surgical tools from RGB images, while sarikaya2017detection additionally fed the optical flow estimated from color images into the network. To leverage spatiotemporal information, an LSTM was integrated to smooth the detection results nwoye2019weakly . In addition to position tracking, the pose of the articulated end-effector was simultaneously estimated in ye2016real ; kurmann2017simultaneous .

4.1.2 Surgical tools and environment interaction

A representative example of tool-tissue interaction during surgery is suturing. In this task, the robot needs to recover the 2D or 3D shape of the thread from 2D images in real time; further challenges include the deformation of the thread and variations in the environment. Padoy et al. padoy20113d introduced a Markov Random Field (MRF) based optimization method to track a 3D thread modelled by a Non-Uniform Rational B-Spline (NURBS). Recently, a supervised two-branch CNN called Deep Multi-Stage Detection (DMSD) was proposed for surgical thread detection hu2018multi . The authors later improved the DMSD framework with a CycleGAN zhu2017unpaired structure for foreground and background adaptation gu2018cross : based on adversarial learning, more synthetic data for thread detection were generated while preserving the semantic information, enabling the learned knowledge to be transferred to the target domain.

The estimation of the interaction force between surgical instruments and tissue can provide meaningful feedback to ensure safe manipulation. Due to the limited size of surgical tools for MIS, high-precision miniaturized force sensors remain immature. Recent work has therefore incorporated AI techniques into Vision-based Force Sensing (VBFS), which estimates force values from visual inputs. An LSTM-RNN architecture can learn an accurate mapping between visual-geometric information and applied force in a supervised manner aviles2016towards . In addition, a semi-supervised DCNN was proposed in marban2018estimation , where a convolutional auto-encoder learns a representation from RGB images, followed by an LSTM that minimizes the error between the estimated force and the ground truth.

4.2 System Modelling and Control

4.2.1 Learning from human demonstrations

Learning from Demonstration (LfD), also known as programming by demonstration, imitation learning, or apprenticeship learning, is a popular paradigm for enabling robots to autonomously perform new tasks with learned policies. This paradigm is beneficial for complicated automation tasks such as surgical procedures, as surgical robots can autonomously execute specific motions or tasks simply by learning from surgeons' demonstrations, without tedious programming. Such robots could reduce surgeons' tedium as well as provide superhuman performance in both speed and smoothness. The common LfD framework first segments a complicated surgical task into several motion primitives or subtasks, and then recognizes, models and executes these motion primitives sequentially.

A. Surgical task segmentation and recognition


The JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS) ahmidi2017dataset is the first public benchmark dataset for surgical activity segmentation and recognition. It contains synchronized video and kinematic data (3D motion trajectories and 3D rotations of the end-effector) for three subtasks captured from the da Vinci robot: suturing, needle passing, and knot tying. For surgical task segmentation, unsupervised clustering algorithms are the most popular. In fard2016soft , a soft-boundary modified Gath-Geva clustering was proposed for segmenting kinematic data. A Transition State Clustering (TSC) method krishnan2017transition exploited both video and kinematic data to detect and cluster transitions between linear dynamic regimes based on kinematic, sensory and temporal similarity; the authors later improved the segmentation results by applying DCNNs to extract features from the video data murali2016tsc . For surgical subtask recognition, most previous works ahmidi2017dataset ; zappella2013surgical ; tao2013surgical developed variations on Hidden Markov Models (HMMs), Conditional Random Fields (CRFs), and Linear Dynamic Systems (LDS). Joint segmentation and recognition frameworks were proposed in despinoy2015unsupervised ; dipietro2019segmenting . Specifically, dipietro2019segmenting modelled the complex, non-linear dynamics of kinematic data with RNNs to recognize both surgical gestures and activities, comparing simple RNNs, forward LSTMs, bidirectional LSTMs, Gated Recurrent Units (GRUs), and mixed-history RNNs with traditional methods for surgical activity recognition. Liu et al. liu2018deep introduced a novel method that models the recognition task as a sequential decision-making process and trains an agent by Reinforcement Learning (RL) with hierarchical features from a DCNN model.

B. Surgical task modelling, generation, and execution


After acquiring the segmented motion trajectories representing surgical subtasks, e.g., suturing, needle passing, and knot tying, the Dynamic Time Warping (DTW) algorithm can be applied to temporally align the different demonstrations before modelling. To autonomously generate the motion for a new task, Gaussian Mixture Models (GMMs) padoy2011human ; calinon2014human , Gaussian Process Regression (GPR) osa2014online , dynamics models van2010superhuman , finite state machines murali2015learning , and RNNs mayer2008system have been extensively studied for modelling the demonstrated trajectories. The experts' demonstrations are encoded by a GMM, whose parameters are iteratively estimated by the expectation maximization algorithm. Given the GMM, Gaussian Mixture Regression (GMR) is then used to generate the target trajectory of the desired surgical task padoy2011human ; calinon2014human . GPR is a non-linear Bayesian function learning technique that models a sequence of observations generated by a Gaussian process; Osa et al. osa2014online chose GPR for online path planning in a dynamic environment. Given the predicted motion trajectory, different control strategies, e.g., a Linear-Quadratic Regulator (LQR) van2010superhuman , sliding mode control osa2014online , or neural network control de2016neural , can be applied to improve robustness in surgical task execution.
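
As a concrete illustration of the GMM/GMR step, the sketch below fits a mixture to several noisy 1D demonstrations stacked as (time, position) pairs and then regresses the expected position at each query time; the synthetic sine-wave demonstrations and component count are assumptions for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmr(gmm, t_query):
    """Gaussian Mixture Regression: condition a GMM fitted on
    (time, position) pairs on time t to get the expected position."""
    means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
    preds = []
    for t in t_query:
        # responsibility of each component for this time instant
        h = np.array([w * np.exp(-0.5 * (t - m[0]) ** 2 / c[0, 0]) / np.sqrt(c[0, 0])
                      for w, m, c in zip(weights, means, covs)])
        h /= h.sum()
        # per-component conditional mean of position given time
        mu = [m[1] + c[1, 0] / c[0, 0] * (t - m[0]) for m, c in zip(means, covs)]
        preds.append(np.dot(h, mu))
    return np.array(preds)

t = np.linspace(0, 1, 100)
demos = np.vstack([np.column_stack([t, np.sin(2 * np.pi * t)
                                    + 0.05 * np.random.randn(100)])
                   for _ in range(5)])                 # five noisy demonstrations
gmm = GaussianMixture(n_components=5, covariance_type="full").fit(demos)
trajectory = gmr(gmm, t)                               # generated target motion
```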

4.2.2 Reinforcement learning

In many surgical tasks, RL is another popular machine learning paradigm for problems that are difficult to model analytically and observe explicitly kober2013reinforcement , e.g., control of continuum robots, soft tissue manipulation, gauze cutting and tensioning, and tube insertion. In the learning process, the controller of the autonomous surgical robot, known as an agent, tries to find optimized policies that yield a high accumulated reward through iterative interaction with the surrounding environment, which is modelled as a Markov Decision Process (MDP). To reduce the learning time, the RL algorithm can be initialized with policies learned from human expert demonstrations abbeel2004apprenticeship ; calinon2014human ; tan2019robot : instead of learning from scratch, the robot improves the initial policy based on the demonstrations to reproduce the desired surgical tasks. In tan2019robot , a Generative Adversarial Imitation Learning (GAIL) ho2016generative agent was trained to imitate latent patterns in human demonstrations, which can deal with the mismatched distributions caused by multi-modal behaviours. Recently, DRL with advanced policy search methods has enabled robots to autonomously execute a wide range of tasks levine2016end . However, it is unrealistic to repeat experiments on a surgical robotic platform over a million times; to this end, the agent can first be trained in a simulation environment and then transferred to the real robotic system. thananjeyan2017multilateral first learned tensioning policies from a finite-element simulator via DRL and then transferred them to a real physical system. However, bridging the discrepancy between simulation data and the real-world environment remains an open problem.
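
The value-update principle behind these methods can be shown with tabular Q-learning on a small MDP; surgical applications use deep function approximators and continuous states, but the update rule is the same in spirit. The `step` function below is a hypothetical environment interface we assume for illustration.

```python
import numpy as np

def q_learning(n_states, n_actions, step, episodes=500,
               alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular Q-learning on an MDP. `step(s, a)` is an assumed
    environment callback returning (next_state, reward, done);
    episodes start in state 0 and run until `done` is True."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy: mostly exploit the current value estimates
            a = np.random.randint(n_actions) if np.random.rand() < eps \
                else int(Q[s].argmax())
            s_next, r, done = step(s, a)
            # temporal-difference update towards reward + discounted value
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
            s = s_next
    return Q
```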

4.3 Human-Robot Interaction

Human-Robot Interaction (HRI) is a field that integrates knowledge and techniques from multiple disciplines to build effective communication between humans and robots. With the help of AI, surgical task-oriented HRI allows surgeons to cooperatively control surgical robotic systems through touchless manipulation. The interaction media between surgeons and intelligent robots are usually the surgeon's gaze, head movement, speech/voice, and hand gestures. By understanding the intention of the human, robots can then perform the most appropriate actions to satisfy the surgeon's needs.

The 2D/3D eye-gaze point of surgeons, tracked via head-mounted or remote eye trackers, can assist surgical instrument control and navigation yang2002visual . For surgical robots, the gaze-contingent paradigm can facilitate the transmission of images and enhance procedural performance, enabling much more accurate navigation of the instruments yang2002visual . Yang et al. yang2008perceptual first introduced the concept of gaze-contingent perceptual docking for robot-assisted MIS in 2008, in which the robot learns the operator's specific motor and perceptual behaviour through their saccadic eye movements and ocular vergence. Inspired by this idea, Visentini et al. visentini2009brush used gaze-contingent information to reconstruct the surgeon's area of interest in real time with a Bayesian chains method. Fujii et al. fujii2018gaze performed gaze gesture recognition with an HMM to pan, zoom, and tilt the laparoscope during surgery. In addition to human gaze, the recognition of surgeons' head movements can also be used to control a laparoscope or endoscope remotely nishikawa2003face ; hong2019head .

Robots also have the potential to interpret humans' intentions or commands through voice, but in robotic surgery this remains challenging due to the noisy environment of the operating room. With the development of deep learning in speech recognition, the precision and accuracy of speech recognition have been significantly improved graves2013speech , leading to more reliable control of surgical robots zinchenko2016study .

Moreover, hand gesture is another popular medium in different HRI scenarios. In previous works, learning-based real-time hand gesture detection and recognition methods have been studied using different sensors. Jacob et al. jacob2012gestonurse ; jacob2013collaboration designed a robotic scrub nurse, Gestonurse, to understand nonverbal hand gestures; they used a Kinect sensor to localize and recognize gestures made by surgeons, helping to deliver surgical instruments to the surgeon. Wen et al. introduced an HMM-based hand gesture recognition method for AR control wen2014hand . More recently, with the help of deep learning, vision-based hand gesture recognition with high precision oyedotun2017deep can be achieved, significantly improving the safety of HRI in surgery.

5 Conclusion and Future Outlook

Figure 4: An outlook of the future of surgery in pre-operative planning, intra-operative guidance and surgical robotics, together with the ethical and legal issues that may arise.

The advancement of AI has been transforming modern surgery towards more precise and autonomous intervention for treating both acute and chronic diseases. By leveraging such techniques, marked progress has been made in pre-operative planning, intra-operative guidance and surgical robotics. In the following, we summarize the major challenges in these three aspects, as shown in Fig. 4, and discuss achievable visions of future directions. Finally, other key issues, such as ethics, regulation, and privacy, are discussed.

5.1 Pre-operative Planning

Deep learning has been widely adopted in pre-operative planning for tasks ranging from anatomical classification, detection and segmentation to image registration. The results suggest that deep learning based methods can outperform conventional approaches. However, data-driven approaches suffer from inherent limitations that make deep learning based approaches less generalizable, less explainable and more data-demanding.

To overcome these issues, close collaboration between multidisciplinary teams, particularly surgeons and AI researchers, should be encouraged to generate large-scale annotated data, providing more training data for AI algorithms. An alternative solution is to develop AI techniques such as meta-learning, or learning to learn, that enable generalizable systems to perform diagnosis from limited datasets with improved explainability.

Although many state-of-the-art machine learning and deep learning algorithms have achieved breakthroughs in general computer vision, the differences between medical and natural images can be significant, which may impede their clinical applicability. In addition, the underlying models and the derived results may not be easily interpretable by humans, raising issues such as potential risks and uncertainty in surgery. Potential solutions would be to explore transfer learning techniques to mitigate the differences between image modalities and to develop more explainable AI algorithms to enhance decision-making performance.

Furthermore, utilizing personalized multimodal patient information, including omics data and lifestyle information, in the development of AI can facilitate early detection and diagnosis, leading to personalized treatment. This also enables early treatment options with minimal trauma, smaller surgical risks and shorter recovery times.

5.2 Intra-operative Guidance

AI techniques have already contributed to more accurate and robust intra-operative guidance for MIS. 3D shape instantiation, camera pose estimation and dynamic environment tracking and reconstruction have been tackled to assist various surgical interventions.

For developing computer-assisted guidance from visual observations, the key focus should be on improving localization and mapping performance in the presence of textureless surfaces, variations in illumination, and a limited field of view.

Another key challenge is that the deformation of organs and tissues forces the pre-operative plan to work with a dynamic and uncertain environment during surgery. Although AI technologies have succeeded in detection, segmentation, tracking, and classification, further studies are required to extend them to more sophisticated 3D applications. Additionally, during surgery an important requirement is to assist surgeons in real time, so the efficiency of an AI algorithm becomes a crucial issue. Such demands have been encountered in developing AR or VR, where frequent interactions are required either between surgeons and autonomous guidance systems or during remote surgery involving multidisciplinary teams in different geographical locations.

In addition to visual information, future AI technologies will need to fuse multimodal data from various sensors to achieve a more precise perception of the complicated environment. Furthermore, the increasing use of micro- and nano-robotics in surgery will bring new guidance challenges.

5.3 Surgical Robotics

With the integration of AI, surgical robots will be able to perceive and understand complicated surroundings, conduct real-time decision making and perform surgical tasks with increased precision, safety, automation, and efficiency. For instance, current robots can already automatically perform some simple surgical tasks, such as suturing and knot tying hu2018robotic ; hu2019designing . Nevertheless, an increased level of robotic autonomy for more complicated tasks could be achieved by advanced LfD and RL algorithms, especially when considering interaction with dynamic environments. Due to the diversity of surgical robotic platforms, generalized learning for accurate modelling and control is also required.

Most current surgical robots are costly and bulky, and are only able to perform master-slave operations. We believe that more versatile, lighter and probably cheaper robotic systems need to be developed so that they can access more constrained regions during MIS troccaz2019frontiers . They also need to be easily integrated into well-developed surgical workflows, so that the robot can collaborate with human operators seamlessly. To date, RAS technologies are still far from achieving full autonomy; human supervision will remain necessary to ensure safety and high-level decision making.

In the near future, intelligent micro- and nano-robots for non-invasive surgery and drug delivery could be realized. Furthermore, with data captured during pre-operative examinations, robots could also assist in manufacturing personalized 3D bio-printed tissues and organs for transplant surgery.

5.4 Ethical and Legal Considerations of AI in Surgery

Beyond precision, robustness, safety and automation, it is necessary to carefully consider the ethical and legal implications of AI in surgery. These include: 1) privacy - patients' medical records, genetic data, illness prediction data, and operative data need to be protected with high security; 2) cyber crime - the impact on patients needs to be minimized when failures happen in AI-based surgical systems, which should be verified and certified with all possible risks considered; 3) ethics - ensuring that new technologies, e.g., gene editing and bio-printed organ transplantation with long-term effects on human reproduction, are used responsibly, and gradually building trust between humans and AI techniques.

In conclusion, we still have a long way to go to replicate and match the levels of intelligence that we see in surgeons, and "AIs that can learn complex tasks on their own and with a minimum of initial training data will prove critical for next-generation systems" yang2018grand . Here we quote some of the questions raised by Yang et al. in their article on medical robotics - regulatory, ethical, and legal considerations for increasing levels of autonomy yang2017medical : "As the capabilities of medical robotics following a progressive path represented by various levels of autonomy evolve, most of the role of the medical specialists will shift toward diagnosis and decision-making. Could this shift also mean that medical specialists will be less skilled in terms of dexterity and basic surgical skills as the technologies are introduced? What would be the implication on future training and accreditation? … If robot performance proves to be superior to that of humans, should we put our trust in fully autonomous medical robots?" Clearly, many more issues need to be addressed before AI can be seamlessly integrated into the future of surgery.

References

  • (1) V. Vitiello, S.-L. Lee, T. P. Cundy, and G.-Z. Yang, “Emerging robotic platforms for minimally invasive surgery,” IEEE Reviews in Biomedical Engineering, vol. 6, pp. 111–126, 2012.
  • (2) J. Troccaz, G. Dagnino, and G.-Z. Yang, “Frontiers of medical robotics: from concept to systems to clinical translation,” Annual review of biomedical engineering, vol. 21, pp. 193–218, 2019.
  • (3) G.-Z. Yang, Body sensor networks.   Springer, 2014.
  • (4) G. Z. Yang, Implantable sensors and systems: from theory to practice.   Springer, 2018.
  • (5) E. Shortliffe, Computer-based medical consultations: MYCIN.   Elsevier, 2012, vol. 2.
  • (6) A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems (NIPS), 2012, pp. 1097–1105.
  • (7) G.-Z. Yang, J. Cambias, K. Cleary, E. Daimler, J. Drake, P. E. Dupont, N. Hata, P. Kazanzides, S. Martel, R. V. Patel et al., “Medical robotics—regulatory, ethical, and legal considerations for increasing levels of autonomy,” Science Robotics, vol. 2, no. 4, p. 8638, 2017.
  • (8) G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, J. A. Van Der Laak, B. Van Ginneken, and C. I. Sánchez, “A survey on deep learning in medical image analysis,” Medical Image Analysis, vol. 42, pp. 60–88, 2017.
  • (9) P. Khosravi, E. Kazemi, M. Imielinski, O. Elemento, and I. Hajirasouliha, “Deep convolutional neural networks enable discrimination of heterogeneous digital pathology images,” EBioMedicine, vol. 27, pp. 317–328, 2018.
  • (10) S. Chilamkurthy, R. Ghosh, S. Tanamala, M. Biviji, N. G. Campeau, V. K. Venugopal, V. Mahajan, P. Rao, and P. Warier, “Deep learning algorithms for detection of critical findings in head CT scans: a retrospective study,” The Lancet, vol. 392, no. 10162, pp. 2388–2396, 2018.
  • (11) A. Meyer, D. Zverinski, B. Pfahringer, J. Kempfert, T. Kuehne, S. H. Sündermann, C. Stamm, T. Hofmann, V. Falk, and C. Eickhoff, “Machine learning for real-time prediction of complications in critical care: a retrospective study,” The Lancet Respiratory Medicine, vol. 6, no. 12, pp. 905–914, 2018.
  • (12) X. Li, S. Zhang, Q. Zhang, X. Wei, Y. Pan, J. Zhao, X. Xin, C. Qin, X. Wang, J. Li et al., “Diagnosis of thyroid cancer using deep convolutional neural network models applied to sonographic images: a retrospective, multicohort, diagnostic study,” The Lancet Oncology, vol. 20, no. 2, pp. 193–201, 2019.
  • (13) E. Rubinstein, M. Salhov, M. Nidam-Leshem, V. White, S. Golan, J. Baniel, H. Bernstine, D. Groshar, and A. Averbuch, “Unsupervised tumor detection in dynamic PET/CT imaging of the prostate,” Medical Image Analysis, vol. 55, pp. 27–40, 2019.
  • (14) M. Winkels and T. S. Cohen, “Pulmonary nodule detection in CT scans with equivariant CNNs,” Medical Image Analysis, vol. 55, pp. 15–26, 2019.
  • (15) F. Liu, Z. Zhou, A. Samsonov, D. Blankenbaker, W. Larison, A. Kanarek, K. Lian, S. Kambhampati, and R. Kijowski, “Deep learning approach for evaluating knee MR images: achieving high diagnostic performance for cartilage lesion detection,” Radiology, vol. 289, no. 1, pp. 160–169, 2018.
  • (16) G. Maicas, G. Carneiro, A. P. Bradley, J. C. Nascimento, and I. Reid, “Deep reinforcement learning for active breast lesion detection from DCE-MRI,” in Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI).   Springer, 2017, pp. 665–673.
  • (17) H. Lee, S. Yune, M. Mansouri, M. Kim, S. H. Tajmir, C. E. Guerrier, S. A. Ebert, S. R. Pomerantz, J. M. Romero, S. Kamalian et al., “An explainable deep-learning algorithm for the detection of acute intracranial haemorrhage from small datasets,” Nature Biomedical Engineering, vol. 3, no. 3, p. 173, 2019.
  • (18) K. Kamnitsas, C. Ledig, V. F. Newcombe, J. P. Simpson, A. D. Kane, D. K. Menon, D. Rueckert, and B. Glocker, “Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation,” Medical Image Analysis, vol. 36, pp. 61–78, 2017.
  • (19) J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 3431–3440.
  • (20) O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI).   Springer, 2015, pp. 234–241.
  • (21) Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, “3D U-Net: learning dense volumetric segmentation from sparse annotation,” in Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI).   Springer, 2016, pp. 424–432.
  • (22) X.-Y. Zhou and G.-Z. Yang, “Normalization in training U-Net for 2D biomedical semantic segmentation,” IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 1792–1799, 2019.
  • (23) E. Gibson, F. Giganti, Y. Hu, E. Bonmati, S. Bandula, K. Gurusamy, B. Davidson, S. P. Pereira, M. J. Clarkson, and D. C. Barratt, “Automatic multi-organ segmentation on abdominal CT with dense v-networks,” IEEE Transactions on Medical Imaging, vol. 37, no. 8, pp. 1822–1834, 2018.
  • (24) G. Wang, W. Li, M. A. Zuluaga, R. Pratt, P. A. Patel, M. Aertsen, T. Doel, A. L. David, J. Deprest, S. Ourselin et al., “Interactive medical image segmentation using deep learning with image-specific fine tuning,” IEEE Transactions on Medical Imaging, vol. 37, no. 7, pp. 1562–1573, 2018.
  • (25) W. Bai, H. Suzuki, C. Qin, G. Tarroni, O. Oktay, P. M. Matthews, and D. Rueckert, “Recurrent neural networks for aortic image sequence segmentation with sparse annotations,” in Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI).   Springer, 2018, pp. 586–594.
  • (26) I. Laina, N. Rieke, C. Rupprecht, J. P. Vizcaíno, A. Eslami, F. Tombari, and N. Navab, “Concurrent segmentation and localization for tracking of surgical instruments,” in Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI).   Springer, 2017, pp. 664–672.
  • (27) N. Lessmann, B. van Ginneken, P. A. de Jong, and I. Išgum, “Iterative fully convolutional neural networks for automatic vertebra segmentation and identification,” Medical Image Analysis, vol. 53, pp. 142–155, 2019.
  • (28) X. Feng, J. Yang, A. F. Laine, and E. D. Angelini, “Discriminative localization in CNNs for weakly-supervised segmentation of pulmonary nodules,” in Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI).   Springer, 2017, pp. 568–576.
  • (29) G. Balakrishnan, A. Zhao, M. R. Sabuncu, J. Guttag, and A. V. Dalca, “VoxelMorph: a learning framework for deformable medical image registration,” IEEE Transactions on Medical Imaging, 2019.
  • (30) Z. Shen, X. Han, Z. Xu, and M. Niethammer, “Networks for joint affine and non-parametric image registration,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 4224–4233.
  • (31) K. A. Eppenhof and J. P. Pluim, “Pulmonary CT registration through supervised learning with convolutional neural networks,” IEEE Transactions on Medical Imaging, vol. 38, no. 5, pp. 1097–1105, 2018.
  • (32) Y. Hu, M. Modat, E. Gibson, W. Li, N. Ghavami, E. Bonmati, G. Wang, S. Bandula, C. M. Moore, M. Emberton et al., “Weakly-supervised convolutional neural networks for multimodal image registration,” Medical Image Analysis, vol. 49, pp. 1–13, 2018.
  • (33) S. Miao, S. Piat, P. Fischer, A. Tuysuzoglu, P. Mewes, T. Mansi, and R. Liao, “Dilated FCN for multi-agent 2D/3D medical image registration,” in Proceedings of AAAI Conference on Artificial Intelligence, 2018.
  • (34) J. Fan, X. Cao, P.-T. Yap, and D. Shen, “BIRNet: Brain image registration using dual-supervised fully convolutional networks,” Medical Image Analysis, vol. 54, pp. 193–206, 2019.
  • (35) B. D. de Vos, F. F. Berendsen, M. A. Viergever, H. Sokooti, M. Staring, and I. Išgum, “A deep learning framework for unsupervised affine and deformable image registration,” Medical Image Analysis, vol. 52, pp. 128–143, 2019.
  • (36) H. Sokooti, B. de Vos, F. Berendsen, B. P. Lelieveldt, I. Išgum, and M. Staring, “Nonrigid image registration using multi-scale 3D convolutional neural networks,” in Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI).   Springer, 2017, pp. 232–239.
  • (37) R. Liao, S. Miao, P. de Tournemire, S. Grbic, A. Kamen, T. Mansi, and D. Comaniciu, “An artificial agent for robust image registration,” in Proceedings of AAAI Conference on Artificial Intelligence, 2017.
  • (38) D. Cool, D. Downey, J. Izawa, J. Chin, and A. Fenster, “3D prostate model formation from non-parallel 2D ultrasound biopsy images,” Medical Image Analysis, vol. 10, no. 6, pp. 875–887, 2006.
  • (39) D. Toth, M. Pfister, A. Maier, M. Kowarschik, and J. Hornegger, “Adaption of 3D models to 2D X-ray images during endovascular abdominal aneurysm repair,” in Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI).   Springer, 2015, pp. 339–346.
  • (40) X. Zhou, G. Yang, C. Riga, and S. Lee, “Stent graft shape instantiation for fenestrated endovascular aortic repair,” in The Hamlyn Symposium on Medical Robotics, 2017.
  • (41) X.-Y. Zhou, J. Lin, C. Riga, G.-Z. Yang, and S.-L. Lee, “Real-time 3D shape instantiation from single fluoroscopy projection for fenestrated stent graft deployment,” IEEE Robotics and Automation Letters, vol. 3, no. 2, pp. 1314–1321, 2018.
  • (42) J.-Q. Zheng, X.-Y. Zhou, C. Riga, and G.-Z. Yang, “Real-time 3D shape instantiation for partially deployed stent segments from a single 2D fluoroscopic image in fenestrated endovascular aortic repair,” IEEE Robotics and Automation Letters, vol. 4, no. 4, pp. 3703–3710, 2019.
  • (43) X.-Y. Zhou, C. Riga, S.-L. Lee, and G.-Z. Yang, “Towards automatic 3D shape instantiation for deployed stent grafts: 2D multiple-class and class-imbalance marker segmentation with equally-weighted focal U-Net,” in Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).   IEEE, 2018, pp. 1261–1267.
  • (44) J.-Q. Zheng, X.-Y. Zhou, C. Riga, and G.-Z. Yang, “3D path planning from a single 2D fluoroscopic image for robot assisted fenestrated endovascular aortic repair,” arXiv preprint arXiv:1809.05955, 2018.
  • (45) S.-L. Lee, A. Chung, M. Lerotic, M. A. Hawkins, D. Tait, and G.-Z. Yang, “Dynamic shape instantiation for intra-operative guidance,” in Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI).   Springer, 2010, pp. 69–76.
  • (46) X.-Y. Zhou, G.-Z. Yang, and S.-L. Lee, “A real-time and registration-free framework for dynamic shape instantiation,” Medical Image Analysis, vol. 44, pp. 86–97, 2018.
  • (47) X.-Y. Zhou, Z.-Y. Wang, P. Li, J.-Q. Zheng, and G.-Z. Yang, “One-stage shape instantiation from a single 2D image to 3D point cloud,” arXiv preprint arXiv:1907.10763, 2019.
  • (48) F. Mahmood and N. J. Durr, “Deep learning and conditional random fields-based depth estimation and topographical reconstruction from conventional endoscopy,” Medical Image Analysis, vol. 48, pp. 230–243, 2018.
  • (49) F. Mahmood, R. Chen, and N. J. Durr, “Unsupervised reverse domain adaptation for synthetic medical images via adversarial training,” IEEE Transactions on Medical Imaging, vol. 37, no. 12, pp. 2572–2581, 2018.
  • (50) M. Turan, E. P. Ornek, N. Ibrahimli, C. Giracoglu, Y. Almalioglu, M. F. Yanik, and M. Sitti, “Unsupervised odometry and depth learning for endoscopic capsule robots,” in Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).   IEEE, 2018, pp. 1801–1807.
  • (51) M. Shen, Y. Gu, N. Liu, and G.-Z. Yang, “Context-aware depth and pose estimation for bronchoscopic navigation,” IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 732–739, 2019.
  • (52) F. Liu, C. Shen, and G. Lin, “Deep convolutional neural fields for depth estimation from a single image,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 5162–5170.
  • (53) T. Zhou, M. Brown, N. Snavely, and D. G. Lowe, “Unsupervised learning of depth and ego-motion from video,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
  • (54)

    H. Zhan, R. Garg, C. Saroj Weerasekera, K. Li, H. Agarwal, and I. Reid, “Unsupervised learning of monocular depth estimation and visual odometry with deep feature reconstruction,” in

    The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
  • (55) M. Ye, E. Johns, A. Handa, L. Zhang, P. Pratt, and G.-Z. Yang, “Self-supervised siamese learning on stereo image pairs for depth estimation in robotic surgery,” arXiv preprint arXiv:1705.08260, 2017.
  • (56)

    J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in

    Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2223–2232.
  • (57) M. Turan, Y. Almalioglu, H. Araujo, E. Konukoglu, and M. Sitti, “Deep endovo: A recurrent convolutional neural network (RCNN) based visual odometry approach for endoscopic capsule robots,” Neurocomputing, vol. 275, pp. 1861–1870, 2018.
  • (58) J. Sganga, D. Eng, C. Graetzel, and D. Camarillo, “OffsetNet: Deep learning for localization in the lung using rendered images,” arXiv preprint arXiv:1809.05645, 2018.
  • (59) P. Mountney, D. Stoyanov, A. Davison, and G.-Z. Yang, “Simultaneous stereoscope localization and soft-tissue mapping for minimal invasive surgery,” in Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI).   Springer, 2006, pp. 347–354.
  • (60) A. J. Davison, I. D. Reid, N. D. Molton, and O. Stasse, “MonoSLAM: Real-time single camera SLAM,” IEEE Transactions on Pattern Analysis and Machine Intelligence, no. 6, pp. 1052–1067, 2007.
  • (61) P. Mountney and G.-Z. Yang, “Motion compensated SLAM for image guided surgery,” in Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI).   Springer, 2010, pp. 496–504.
  • (62) O. G. Grasa, E. Bernal, S. Casado, I. Gil, and J. Montiel, “Visual SLAM for handheld monocular endoscope,” IEEE Transactions on Medical Imaging, vol. 33, no. 1, pp. 135–146, 2013.
  • (63) M. Turan, Y. Almalioglu, H. Araujo, E. Konukoglu, and M. Sitti, “A non-rigid map fusion-based direct SLAM method for endoscopic capsule robots,” International Journal of Intelligent Robotics and Applications, vol. 1, no. 4, pp. 399–409, 2017.
  • (64) J. Song, J. Wang, L. Zhao, S. Huang, and G. Dissanayake, “MIS-SLAM: Real-time large-scale dense deformable SLAM system in minimal invasive surgery based on heterogeneous computing,” IEEE Robotics and Automation Letters, vol. 3, no. 4, pp. 4068–4075, 2018.
  • (65) X.-Y. Zhou, S. Ernst, and S.-L. Lee, “Path planning for robot-enhanced cardiac radiofrequency catheter ablation,” in 2016 IEEE international conference on robotics and automation (ICRA).   IEEE, 2016, pp. 4172–4177.
  • (66) C. Shi, S. Giannarou, S.-L. Lee, and G.-Z. Yang, “Simultaneous catheter and environment modeling for trans-catheter aortic valve implantation,” in Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).   IEEE, 2014, pp. 2024–2029.
  • (67) L. Zhao, S. Giannarou, S.-L. Lee, and G.-Z. Yang, “SCEM+: real-time robust simultaneous catheter and environment modeling for endovascular navigation,” IEEE Robotics and Automation Letters, vol. 1, no. 2, pp. 961–968, 2016.
  • (68) ——, “Registration-free simultaneous catheter and environment modelling,” in Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI).   Springer, 2016, pp. 525–533.
  • (69) P. Mountney and G.-Z. Yang, “Soft tissue tracking for minimally invasive surgery: Learning local deformation online,” in Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI).   Springer, 2008, pp. 364–372.
  • (70) M. Ye, S. Giannarou, A. Meining, and G.-Z. Yang, “Online tracking and retargeting with applications to optical biopsy in gastrointestinal endoscopic examinations,” Medical Image Analysis, vol. 30, pp. 144–157, 2016.
  • (71) R. Wang, M. Zhang, X. Meng, Z. Geng, and F.-Y. Wang, “3D tracking for augmented reality using combined region and dense cues in endoscopic surgery,” IEEE Journal of Biomedical and Health Informatics, vol. 22, no. 5, pp. 1540–1551, 2017.
  • (72) S. Bernhardt, S. A. Nicolau, L. Soler, and C. Doignon, “The status of augmented reality in laparoscopic surgery as of 2016,” Medical Image Analysis, vol. 37, pp. 66–90, 2017.
  • (73) J. Wang, H. Suenaga, K. Hoshi, L. Yang, E. Kobayashi, I. Sakuma, and H. Liao, “Augmented reality navigation with automatic marker-free image registration using 3D image overlay for dental surgery,” IEEE Transactions on Biomedical Engineering, vol. 61, no. 4, pp. 1295–1304, 2014.
  • (74) P. Pratt, M. Ives, G. Lawton, J. Simmons, N. Radev, L. Spyropoulou, and D. Amiras, “Through the hololens™ looking glass: augmented reality for extremity reconstruction surgery using 3D vascular models with perforating vessels,” European Radiology Experimental, vol. 2, no. 1, p. 2, 2018.
  • (75) X. Zhang, J. Wang, T. Wang, X. Ji, Y. Shen, Z. Sun, and X. Zhang, “A markerless automatic deformable registration framework for augmented reality navigation of laparoscopy partial nephrectomy,” International Journal of Computer Assisted Radiology and Surgery, pp. 1–10, 2019.
  • (76) E. J. Topol, “High-performance medicine: the convergence of human and artificial intelligence,” Nature Medicine, vol. 25, no. 1, p. 44, 2019.
  • (77) R. Mirnezami and A. Ahmed, “Surgery 3.0, artificial intelligence and the next-generation surgeon,” British Journal of Surgery, vol. 105, no. 5, pp. 463–465, 2018.
  • (78) D. Bouget, R. Benenson, M. Omran, L. Riffaud, B. Schiele, and P. Jannin, “Detecting surgical tools by modelling local appearance and global shape,” IEEE Transactions on Medical Imaging, vol. 34, no. 12, pp. 2603–2617, 2015.
  • (79) A. A. Shvets, A. Rakhlin, A. A. Kalinin, and V. I. Iglovikov, “Automatic instrument segmentation in robot-assisted surgery using deep learning,” in Proceedings of IEEE International Conference on Machine Learning and Applications (ICMLA).   IEEE, 2018, pp. 624–628.
  • (80) M. Islam, D. A. Atputharuban, R. Ramesh, and H. Ren, “Real-time instrument segmentation in robotic surgery using auxiliary supervised deep adversarial learning,” IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 2188–2195, 2019.
  • (81) R. Sznitman, R. Richa, R. H. Taylor, B. Jedynak, and G. D. Hager, “Unified detection and tracking of instruments during retinal microsurgery,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 5, pp. 1263–1273, 2012.
  • (82) L. Zhang, M. Ye, P.-L. Chan, and G.-Z. Yang, “Real-time surgical tool tracking and pose estimation using a hybrid cylindrical marker,” International Journal of Computer Assisted Radiology and Surgery, vol. 12, no. 6, pp. 921–930, 2017.
  • (83) M. Ye, L. Zhang, S. Giannarou, and G.-Z. Yang, “Real-time 3D tracking of articulated tools for robotic surgery,” in Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI).   Springer, 2016, pp. 386–394.
  • (84) Z. Zhao, S. Voros, Y. Weng, F. Chang, and R. Li, “Tracking-by-detection of surgical instruments in minimally invasive surgery via the convolutional neural network deep learning-based method,” Computer Assisted Surgery, vol. 22, no. sup1, pp. 26–35, 2017.
  • (85) C. I. Nwoye, D. Mutter, J. Marescaux, and N. Padoy, “Weakly supervised convolutional LSTM approach for tool tracking in laparoscopic videos,” International Journal of Computer Assisted Radiology and Surgery, vol. 14, no. 6, pp. 1059–1067, 2019.
  • (86) D. Sarikaya, J. J. Corso, and K. A. Guru, “Detection and localization of robotic tools in robot-assisted surgery videos using deep neural networks for region proposal and detection,” IEEE Transactions on Medical Imaging, vol. 36, no. 7, pp. 1542–1549, 2017.
  • (87) T. Kurmann, P. M. Neila, X. Du, P. Fua, D. Stoyanov, S. Wolf, and R. Sznitman, “Simultaneous recognition and pose estimation of instruments in minimally invasive surgery,” in Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI).   Springer, 2017, pp. 505–513.
  • (88) N. Padoy and G. D. Hager, “3D thread tracking for robotic assistance in tele-surgery,” in Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).   IEEE, 2011, pp. 2102–2107.
  • (89) Y. Hu, Y. Gu, J. Yang, and G.-Z. Yang, “Multi-stage suture detection for robot assisted anastomosis based on deep learning,” in Proceedings of IEEE International Conference on Robotics and Automation (ICRA).   IEEE, 2018, pp. 1–8.
  • (90) Y. Gu, Y. Hu, L. Zhang, J. Yang, and G.-Z. Yang, “Cross-scene suture thread parsing for robot assisted anastomosis based on joint feature learning,” in Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).   IEEE, 2018, pp. 769–776.
  • (91) A. I. Aviles, S. M. Alsaleh, J. K. Hahn, and A. Casals, “Towards retrieving force feedback in robotic-assisted surgery: A supervised neuro-recurrent-vision approach,” IEEE Transactions on Haptics, vol. 10, no. 3, pp. 431–443, 2016.
  • (92) A. Marban, V. Srinivasan, W. Samek, J. Fernández, and A. Casals, “Estimation of interaction forces in robotic surgery using a semi-supervised deep neural network model,” in Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).   IEEE, 2018, pp. 761–768.
  • (93) N. Ahmidi, L. Tao, S. Sefati, Y. Gao, C. Lea, B. B. Haro, L. Zappella, S. Khudanpur, R. Vidal, and G. D. Hager, “A dataset and benchmarks for segmentation and recognition of gestures in robotic surgery,” IEEE Transactions on Biomedical Engineering, vol. 64, no. 9, pp. 2025–2041, 2017.
  • (94) M. J. Fard, S. Ameri, R. B. Chinnam, and R. D. Ellis, “Soft boundary approach for unsupervised gesture segmentation in robotic-assisted surgery,” IEEE Robotics and Automation Letters, vol. 2, no. 1, pp. 171–178, 2016.
  • (95) S. Krishnan, A. Garg, S. Patil, C. Lea, G. Hager, P. Abbeel, and K. Goldberg, “Transition state clustering: Unsupervised surgical trajectory segmentation for robot learning,” The International Journal of Robotics Research, vol. 36, no. 13-14, pp. 1595–1618, 2017.
  • (96) A. Murali, A. Garg, S. Krishnan, F. T. Pokorny, P. Abbeel, T. Darrell, and K. Goldberg, “TSC-DL: Unsupervised trajectory segmentation of multi-modal surgical demonstrations with deep learning,” in Proceedings of IEEE International Conference on Robotics and Automation (ICRA).   IEEE, 2016, pp. 4150–4157.
  • (97) L. Zappella, B. Béjar, G. Hager, and R. Vidal, “Surgical gesture classification from video and kinematic data,” Medical Image Analysis, vol. 17, no. 7, pp. 732–745, 2013.
  • (98) L. Tao, L. Zappella, G. D. Hager, and R. Vidal, “Surgical gesture segmentation and recognition,” in Proceedings o International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI).   Springer, 2013, pp. 339–346.
  • (99) F. Despinoy, D. Bouget, G. Forestier, C. Penet, N. Zemiti, P. Poignet, and P. Jannin, “Unsupervised trajectory segmentation for surgical gesture recognition in robotic training,” IEEE Transactions on Biomedical Engineering, vol. 63, no. 6, pp. 1280–1291, 2015.
  • (100) R. DiPietro, N. Ahmidi, A. Malpani, M. Waldram, G. I. Lee, M. R. Lee, S. S. Vedula, and G. D. Hager, “Segmenting and classifying activities in robot-assisted surgery with recurrent neural networks,” International Journal of Computer Assisted Radiology and Surgery, pp. 1–16, 2019.
  • (101) D. Liu and T. Jiang, “Deep reinforcement learning for surgical gesture segmentation and classification,” in Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI).   Springer, 2018, pp. 247–255.
  • (102) N. Padoy and G. D. Hager, “Human-machine collaborative surgery using learned models,” in Proceedings of IEEE International Conference on Robotics and Automation (ICRA).   IEEE, 2011, pp. 5285–5292.
  • (103) S. Calinon, D. Bruno, M. S. Malekzadeh, T. Nanayakkara, and D. G. Caldwell, “Human-robot skills transfer interfaces for a flexible surgical robot,” Computer Methods and Programs in Biomedicine, vol. 116, no. 2, pp. 81–96, 2014.
  • (104) T. Osa, N. Sugita, and M. Mitsuishi, “Online trajectory planning in dynamic environments for surgical task automation.” in Robotics: Science and Systems, 2014, pp. 1–9.
  • (105) J. Van Den Berg, S. Miller, D. Duckworth, H. Hu, A. Wan, X.-Y. Fu, K. Goldberg, and P. Abbeel, “Superhuman performance of surgical tasks by robots using iterative learning from human-guided demonstrations,” in Proceedings of IEEE International Conference on Robotics and Automation (ICRA).   IEEE, 2010, pp. 2074–2081.
  • (106) A. Murali, S. Sen, B. Kehoe, A. Garg, S. McFarland, S. Patil, W. D. Boyd, S. Lim, P. Abbeel, and K. Goldberg, “Learning by observation for surgical subtasks: Multilateral cutting of 3D viscoelastic and 2D orthotropic tissue phantoms,” in Proceedings of IEEE International Conference on Robotics and Automation (ICRA).   IEEE, 2015, pp. 1202–1209.
  • (107) H. Mayer, F. Gomez, D. Wierstra, I. Nagy, A. Knoll, and J. Schmidhuber, “A system for robotic heart surgery that learns to tie knots using recurrent neural networks,” Advanced Robotics, vol. 22, no. 13-14, pp. 1521–1537, 2008.
  • (108) E. De Momi, L. Kranendonk, M. Valenti, N. Enayati, and G. Ferrigno, “A neural network-based approach for trajectory planning in robot–human handover tasks,” Frontiers in Robotics and AI, vol. 3, p. 34, 2016.
  • (109) J. Kober, J. A. Bagnell, and J. Peters, “Reinforcement learning in robotics: A survey,” The International Journal of Robotics Research, vol. 32, no. 11, pp. 1238–1274, 2013.
  • (110) P. Abbeel and A. Y. Ng, “Apprenticeship learning via inverse reinforcement learning,” in Proceedings of International Conference on Machine Learning (ICML).   ACM, 2004, p. 1.
  • (111) X. Tan, C.-B. Chng, Y. Su, K.-B. Lim, and C.-K. Chui, “Robot-assisted training in laparoscopy using deep reinforcement learning,” IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 485–492, 2019.
  • (112) J. Ho and S. Ermon, “Generative adversarial imitation learning,” in Proceedings of Advances in Neural Information Processing Systems (NIPS), 2016, pp. 4565–4573.
  • (113) S. Levine, C. Finn, T. Darrell, and P. Abbeel, “End-to-end training of deep visuomotor policies,” The Journal of Machine Learning Research, vol. 17, no. 1, pp. 1334–1373, 2016.
  • (114) B. Thananjeyan, A. Garg, S. Krishnan, C. Chen, L. Miller, and K. Goldberg, “Multilateral surgical pattern cutting in 2D orthotropic gauze with deep reinforcement learning policies for tensioning,” in Proceedings of IEEE International Conference on Robotics and Automation (ICRA).   IEEE, 2017, pp. 2371–2378.
  • (115) G.-Z. Yang, L. Dempere-Marco, X.-P. Hu, and A. Rowe, “Visual search: psychophysical models and practical applications,” Image and Vision Computing, vol. 20, no. 4, pp. 291–305, 2002.
  • (116) G.-Z. Yang, G. P. Mylonas, K.-W. Kwok, and A. Chung, “Perceptual docking for robotic control,” in International Workshop on Medical Imaging and Virtual Reality.   Springer, 2008, pp. 21–30.
  • (117) M. Visentini-Scarzanella, G. P. Mylonas, D. Stoyanov, and G.-Z. Yang, “I-brush: A gaze-contingent virtual paintbrush for dense 3D reconstruction in robotic assisted surgery,” in Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI).   Springer, 2009, pp. 353–360.
  • (118) K. Fujii, G. Gras, A. Salerno, and G.-Z. Yang, “Gaze gesture based human robot interaction for laparoscopic surgery,” Medical Image Analysis, vol. 44, pp. 196–214, 2018.
  • (119) A. Nishikawa, T. Hosoi, K. Koara, D. Negoro, A. Hikita, S. Asano, H. Kakutani, F. Miyazaki, M. Sekimoto, M. Yasui et al., “Face mouse: A novel human-machine interface for controlling the position of a laparoscope,” IEEE Transactions on Robotics and Automation, vol. 19, no. 5, pp. 825–841, 2003.
  • (120) N. Hong, M. Kim, C. Lee, and S. Kim, “Head-mounted interface for intuitive vision control and continuous surgical operation in a surgical robot system,” Medical & Biological Engineering & Computing, vol. 57, no. 3, pp. 601–614, 2019.
  • (121) A. Graves, A.-r. Mohamed, and G. Hinton, “Speech recognition with deep recurrent neural networks,” in Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing.   IEEE, 2013, pp. 6645–6649.
  • (122) K. Zinchenko, C.-Y. Wu, and K.-T. Song, “A study on speech recognition control for a surgical robot,” IEEE Transactions on Industrial Informatics, vol. 13, no. 2, pp. 607–615, 2016.
  • (123) M. G. Jacob, Y.-T. Li, G. Akingba, and J. P. Wachs, “Gestonurse: a robotic surgical nurse for handling surgical instruments in the operating room,” Journal of Robotic Surgery, vol. 6, no. 1, pp. 53–63, 2012.
  • (124) M. G. Jacob, Y.-T. Li, G. A. Akingba, and J. P. Wachs, “Collaboration with a robotic scrub nurse,” Communications of ACM, vol. 56, no. 5, pp. 68–75, 2013.
  • (125) R. Wen, W.-L. Tay, B. P. Nguyen, C.-B. Chng, and C.-K. Chui, “Hand gesture guided robot-assisted surgery based on a direct augmented reality interface,” Computer Methods and Programs in Biomedicine, vol. 116, no. 2, pp. 68–80, 2014.
  • (126) O. K. Oyedotun and A. Khashman, “Deep learning in vision-based static hand gesture recognition,” Neural Computing and Applications, vol. 28, no. 12, pp. 3941–3951, 2017.
  • (127) Y. Hu, L. Zhang, W. Li, and G.-Z. Yang, “Robotic sewing and knot tying for personalized stent graft manufacturing,” in Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).   IEEE, 2018, pp. 754–760.
  • (128) Y. Hu, W. Li, L. Zhang, and G.-Z. Yang, “Designing, prototyping, and testing a flexible suturing robot for transanal endoscopic microsurgery,” IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 1669–1675, 2019.
  • (129) G.-Z. Yang, J. Bellingham, P. E. Dupont, P. Fischer, L. Floridi, R. Full, N. Jacobstein, V. Kumar, M. McNutt, R. Merrifield et al., “The grand challenges of science robotics,” Science Robotics, vol. 3, no. 14, p. eaar7650, 2018.