A surgical system for automatic registration, stiffness mapping and dynamic image overlay

11/23/2017 · Nicolas Zevallos, et al. · Carnegie Mellon University

In this paper we develop a surgical system using the da Vinci research kit (dVRK) that is capable of autonomously searching for tumors and dynamically displaying the tumor location using augmented reality. Such a system has the potential to quickly reveal the location and shape of tumors and visually overlay that information to reduce the cognitive overload of the surgeon. We believe that our approach is one of the first to incorporate state-of-the-art methods in registration, force sensing and tumor localization into a unified surgical system. First, the preoperative model is registered to the intra-operative scene using a Bingham distribution-based filtering approach. An active level set estimation is then used to find the location and the shape of the tumors. We use a recently developed miniature force sensor to perform the palpation. The estimated stiffness map is then dynamically overlaid onto the registered preoperative model of the organ. We demonstrate the efficacy of our system by performing experiments on phantom prostate models with embedded stiff inclusions.

Supplementary Material

This paper is accompanied by a supplementary video.

I Introduction

Robot-assisted minimally invasive surgeries (RMIS) are becoming increasingly popular as they provide increased dexterity and control to the surgeon while also reducing trauma, blood loss and hospital stays for the patient [1]. These devices are typically teleoperated by the surgeon using visual feedback from stereo cameras, but without any haptic feedback. As a result, the surgeon must rely on vision alone to identify tumors, mentally forming the correspondence between the intra-operative view and pre-operative images such as CT/MRI scans, which can be cognitively demanding.

Automation of simple but laborious surgical sub-tasks, together with presenting critical information back to the surgeon in an intuitive manner, has the potential to reduce the cognitive overload and mental fatigue of surgeons [2]. This work leverages recent advances in force sensing technologies [3], tumor localization strategies [4, 5, 6], online registration techniques [7, 8] and augmented reality [9] to automate the task of tumor localization and dynamically overlay the result on top of the intraoperative view of the anatomy.

Fig. 1: Experimental setup showing the dVRK robot with a miniature force sensor attached to the end-effector. A stereo camera overlooks the workspace of the robot. A phantom prostate with an embedded stiff inclusion is placed in the workspace of the robot.

While prior works address force sensing [10, 11], tumor localization [4, 2, 5, 12, 6] and graphical image overlays [13, 14, 15, 16] individually, there is a gap in the literature when it comes to systems that address all of these issues together. For example, Yamamoto et al. [16] deal with tumor localization and visual overlay, but they assume the organ is flat and place the organ on a force-sensing plate, which is not representative of a surgical scenario. On the other hand, Garg et al. [2] use a palpation probe mounted on a da Vinci research kit (dVRK) tool [17]. However, they do not deal with registering the organ or visually overlaying the estimated stiffness map. This work aims to bridge these shortcomings and presents a unified system capable of addressing all of the above issues at the same time.

The system of Naidu et al. [18] comes closest to our work. They use a custom-designed tactile probe to find tumors and visually overlay the tactile image along with ultrasound images. The wide tactile array that they use allows them to image sections of the organ instead of obtaining discrete measurements, as in our case, which eliminates the need for sophisticated tumor search algorithms. However, as acknowledged by the authors [19], it is not clear how their system would perform on non-flat organs such as prostates and kidneys, since the tactile array cannot deform and conform to the shape of the organ. Without registration, the image overlay would also be affected on non-flat organs.

The framework presented in this work is robot agnostic and modular in nature. We demonstrate the efficacy of the system by performing autonomous tumor localization on a phantom prostate model with embedded tumors using the dVRK (see Fig. 1). A miniature force sensor mounted at the tip of the dVRK needle driver tool [3] is used to sense the contact forces. An active tumor search strategy [20, 6] is used to localize the tumor. The estimated stiffness map is overlaid on a registered model of the anatomy and displayed in real time on a stereo viewer.

II Related Work

II-A Force sensing for surgical applications

A number of devices that measure contact forces are reported in the survey papers [10, 11]. Common drawbacks of many existing devices are difficulty of sterilization, high cost, delicate components and inflexible form factors. Recently, our group developed a miniature force sensor that uses an array of thin-film force-sensitive resistors (FSRs) with embedded signal processing circuits [3]. The FSR sensor is lightweight, inexpensive, robust and has a flexible form factor.

II-B Tumor search approaches

The recent developments in force sensors have also resulted in a number of works that automate mapping of the surface of the anatomy to reveal stiff inclusions. The different palpation strategies commonly used are: discrete probing motion [16, 21], rolling motion [22] and cycloidal motion [23]. Some of these works direct the robot along a predefined path that scans the region of interest on the organ [16, 24, 25], while others adaptively change the grid resolution to increase palpation resolution around boundaries of regions with high stiffness gradients [23, 21].

Over the last two years, Bayesian optimization-based methods have gained popularity [4, 2, 5, 12]. These methods model the stiffness map using a Gaussian process regression (GPR) and reduce the exploration time by directing the robot to stiff regions. While the objective of most prior works is to find the high stiffness regions [4, 2, 5], our recent work on active search explicitly encodes finding the location and the shape of the tumor as its objective [6].

II-C Surgical registration and image overlay

There is a rich literature on image overlay for minimally invasive surgeries [13], including some works on the use of augmented reality in human surgeries [26]. Often the image that is overlaid is a segmented preoperative model, and it is manually placed in the intraoperative view [26, 15]. Very few works, such as [14, 27], deal with manual placement followed by automatic registration of the organ models. A number of registration techniques have been developed for surgical applications, the most popular being iterative closest point (ICP) [28] and its variants [29].

Probabilistic methods for registration have recently gained attention as they are better at handling noise in the measurements. Billings et al. [30] use a probabilistic matching criterion for registration, while methods such as [31, 7] (and the references therein) use Kalman filters to estimate the registration parameters. Our recent work reformulates registration as a linear problem in the space of dual quaternions and uses a Bingham filter and a Kalman filter to estimate the rotation and translation, respectively [8]. Such an approach has been shown to produce more accurate and faster online updates of the registration parameters.

While the above literature deals with registering preoperative models onto an intraoperative scene, there is very little work on overlaying stiffness maps onto preoperative models and updating the maps in real time as new force-sensing information is obtained. Real-time updates are important because they give the surgeon a better sense of what the robot has found and provide insight into when to stop the search algorithm, which is a subjective decision, as observed in [5]. The works of Yamamoto et al. [16] and Naidu et al. [18] are exceptions and deal with dynamically overlaying the stiffness image, but only onto flat organs. Their approaches do not generalize to non-flat organs such as kidneys or prostates, which we consider in this work.

III Problem Setting and Assumptions

We use an ELP stereo camera (model 1MP2CAM001) overlooking the workspace of a dVRK [32]. A custom-fabricated prostate phantom (made using Ecoflex 00-10), embedded with a plastic disc to mimic a stiff tumor, is used for experimental validation.

Given an a priori geometric model of an organ, the measurements of the tool tip positions and associated contact forces, and stereo-camera images of the intraoperative scene, our goal is to (i) register the camera-frame, robot-frame and model-frame to each other, (ii) estimate the stiffness distribution over the organ’s surface, and (iii) overlay the estimated stiffness distribution on the registered model of the organ and display it back to the user.

We make the following assumptions in this work:

  • The organ does not deform globally; it experiences only local deformations due to tool interaction.

  • The tool-tip pose can be obtained accurately from the robot kinematics.

  • The forces applied by the tool are within the admissible range (N) in which the organ undergoes only a small deformation (mm), allowing it to return to its undeformed state when the force is removed.

  • The stiff inclusion is located relatively close to the tissue surface, so that it can be detected by palpation.

IV System Modeling and Experimental Validation

Fig. 2: Flowchart showing all the modular components of our system. Some of the modules such as camera calibration, stereo reconstruction, model creation, and camera-robot-model registrations are implemented once before the start of the experiment, while the other modules are constantly run for the duration of the experiment.

Fig. 2 shows the flowchart of the entire system. Modules such as camera calibration, model generation and registration need to be run only once at the beginning of the experiment. On the other hand, the tumor search, probing, and augmented display modules run in a loop until the user is satisfied with the result and halts the process. While the system is largely autonomous, user input is required in two steps: (i) camera-model registration, where the user selects the organ of interest in the camera view, and (ii) selection of the region of interest for stiffness mapping. The modularity of the system allows the user to choose any implementation for registration, force sensing and tumor localization. The important modules of our system are discussed in detail in the following sections.

IV-A Registering Camera and Robot Frames

The cameras are calibrated using standard ROS calibration. The robot is fitted with a colored bead on its end effector that can be easily segmented from the background by hue, saturation, and value. Registration between the camera-frame and the robot-frame is performed by the user through a graphical user interface (GUI) that shows the left and right camera images and has sliders representing color segmentation parameters.

The robot is moved to a fixed set of six points. These points are chosen to cover a substantial portion of the robot's workspace, stay within the field of view of the camera, and avoid symmetries that would make registration difficult. We chose to use only six points after experiments showed that additional points did not significantly decrease the root mean squared error (RMSE), as shown in Table I. For each of the points, we perform the following series of actions.

First, we move the robot to the specified location, then we process both the left and right images to find the centroid of the colored bead attached to the robot. The centroid of the bead in pixels is found as the center of the minimum enclosing circle of the contour with the largest area. We repeat this for as many frames as are received over ROS in one second (in our case 15), and the centroid is then averaged over all frames to reduce the effect of image noise. The centroid is drawn onto both images in the GUI, allowing the user to evaluate the accuracy of the centroid estimation. The pixel disparity is calculated as the difference between the coordinates of the centroid in the left and right images. This disparity is fed into the stereo-camera model provided by ROS to calculate a 3D point in the camera frame.
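For illustration, the following is a minimal sketch of the per-frame bead detection and triangulation, assuming OpenCV 4; the HSV thresholds come from the GUI sliders, and the reprojection matrix Q (from stereo calibration) stands in for the ROS stereo-camera model used in our implementation.

```python
import cv2
import numpy as np

def bead_centroid(bgr_image, hsv_lower, hsv_upper):
    """Return the pixel centroid of the colored bead, or None if not found.

    The bead is segmented by hue/saturation/value thresholds set in the GUI;
    the centroid is the center of the minimum enclosing circle of the largest
    contour in the resulting mask.
    """
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, hsv_lower, hsv_upper)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (cx, cy), _radius = cv2.minEnclosingCircle(largest)
    return np.array([cx, cy])

def triangulate(centroid_left, centroid_right, Q):
    """Convert the left/right centroids into a 3D point in the camera frame,
    using the 4x4 reprojection matrix Q from stereo calibration."""
    disparity = centroid_left[0] - centroid_right[0]
    uvd1 = np.array([centroid_left[0], centroid_left[1], disparity, 1.0])
    X = Q @ uvd1
    return X[:3] / X[3]
```

In practice the centroids would be averaged over the roughly 15 frames received in one second before triangulation, exactly as described above.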

Number of points   5      6      7      8      11     51
RMSE (mm)          2.71   2.37   2.84   3.01   2.82   2.85
TABLE I: Camera-robot registration RMSE as a function of the number of points used.

Following this, we obtain six points in both the camera frame and the robot frame (the latter using the kinematics of the robot). We use Horn's method [33] to calculate the transformation between the camera and robot frames. This transformation is saved to a file and the calculated RMSE is displayed to the user. In addition, the robot's current position is transformed by the inverse of the calculated transformation and projected back into the pixel space of both cameras. Circles are drawn at these pixel positions in the left and right images in the GUI so that the user can visually confirm that the registration is successful and accurate.
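As a reference for this step, below is a minimal sketch of the absolute-orientation computation. It uses an SVD-based closed form, which yields the same rigid transform as Horn's quaternion method [33] for these point correspondences; the function and variable names are ours.

```python
import numpy as np

def rigid_transform(camera_pts, robot_pts):
    """Least-squares rigid transform mapping camera-frame points onto robot-frame points.

    camera_pts, robot_pts: (N, 3) arrays of corresponding points.
    Returns (R, t, rmse) such that robot ≈ R @ camera + t.
    """
    cam_c = camera_pts.mean(axis=0)
    rob_c = robot_pts.mean(axis=0)
    A = camera_pts - cam_c
    B = robot_pts - rob_c
    U, _, Vt = np.linalg.svd(A.T @ B)
    # Guard against reflections so that R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = rob_c - R @ cam_c
    residuals = robot_pts - (camera_pts @ R.T + t)
    rmse = np.sqrt((residuals ** 2).sum(axis=1).mean())
    return R, t, rmse
```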

IV-B Registering Camera and Preoperative Model Frames

The transformation between the camera frame and the model frame is estimated by registering the point cloud reconstructed from the stereo images with the preoperative model of the organ. The intraoperative scene as viewed by the stereo cameras is shown in the top of Fig. 3. A user manually selects the region containing the organ of interest. Following this, the user can further refine the selection using graph cut-based image segmentation.
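As an illustration of the refinement step, here is a minimal sketch using OpenCV's GrabCut, one graph cut-based segmentation that a user-drawn rectangle could be refined with; the function name and iteration count are our own choices, not taken from our GUI implementation.

```python
import cv2
import numpy as np

def refine_organ_mask(bgr_image, rect):
    """Refine a user-drawn rectangle (x, y, w, h) into a tight organ mask
    using graph cut-based segmentation (GrabCut)."""
    mask = np.zeros(bgr_image.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(bgr_image, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)
    # Keep pixels labeled as definite or probable foreground (the organ).
    organ = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    return np.where(organ, 255, 0).astype(np.uint8)
```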

Fig. 3: Top row: Original left and right camera images. Middle row: Camera images with registered prostate model shown in semi-transparent blue. The tumor model is also shown to allow us to compare our stiffness mapping result. Bottom row: The robot probes the organ and records force-displacement measurements. The estimated stiffness map is then augmented on the registered model in this figure. Dark blue regions show high stiffness. Note that the stiffness map reveals the location and shape of the tumor.

A Bingham distribution-based filtering approach is used to automatically register the stereo point cloud to the preoperative model [8]. The mean time taken to register is 2 s and the RMS error is 1.5 mm. The center row of Fig. 3 shows the registered model of the organ overlaid on the stereo views. Note how the pose of the registered model accurately matches the pose of the organ. In the same figure we also show the model of the tumor in the registered view to highlight how accurately the stiffness map estimates the location of the tumor (see bottom row of Fig. 3).

IV-C Tumor Search and Stiffness Mapping

The problem of tumor search is often posed as a problem of stiffness mapping, where the stiffness of each point on a certain organ is estimated, and regions with stiffness higher than a certain threshold are considered as regions of interest (tumors, arteries, etc.). The framework that we use for localizing tumors utilizes Gaussian processes (GP) to model the stiffness distribution combined with a GP-based acquisition function to direct where to sample next for efficient and fast tumor localization.

By using a GP, we assume a smooth change in the stiffness distribution across the organ. Since every point on the organ's surface can be uniquely mapped to a 2D grid, the search domain is a 2D grid $\mathcal{X} \subset \mathbb{R}^2$. The force and position measured after the robot probes the organ at a point $x \in \mathcal{X}$ provide the stiffness estimate, denoted $y$.

The problem of finding the location and shape of the stiff inclusions can be modeled as an optimization problem. However, an exact functional form for such an optimization is not available in practice. Hence, we maintain a probabilistic belief about the stiffness distribution and define a so-called "acquisition function" to determine where to sample next. This acquisition function can be specified in various ways, and thus our framework is flexible in terms of the choice of acquisition function that is optimized. Our prior works have considered various choices of acquisition function, such as expected improvement (EI), upper confidence bound (UCB), uncertainty sampling (UNC), active area search (AAS) and level set estimation (LSE) [4, 5, 6].

While our system is flexible with respect to the choice of acquisition function, in this work we demonstrate tumor localization using LSE. LSE determines the set of points for which an unknown function (the stiffness map in our case) takes values above or below some given threshold level $h$. The mean $\mu_{t-1}(x)$ and standard deviation $\sigma_{t-1}(x)$ of the GP can be used to define a confidence interval,

$$Q_t(x) = \left[\mu_{t-1}(x) \pm \beta_t^{1/2}\,\sigma_{t-1}(x)\right], \quad (1)$$

for each point $x \in \mathcal{X}$. Furthermore, a confidence region $C_t(x)$, which results from intersecting successive confidence intervals, can be defined as

$$C_t(x) = \bigcap_{i=1}^{t} Q_i(x). \quad (2)$$

LSE then defines a measure of classification ambiguity,

$$a_t(x) = \min\left\{\max\big(C_t(x)\big) - h,\; h - \min\big(C_t(x)\big)\right\}. \quad (3)$$

LSE sequentially chooses queries (probes) at $x_t$ such that

$$x_t = \operatorname*{argmax}_{x \in \mathcal{X}} \, a_t(x). \quad (4)$$

For details on how to select the parameter $\beta_t$, we refer the reader to the work of Gotovos et al. [20].
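To make the selection rule concrete, the following is a minimal sketch of one LSE query step on a 2D grid using a scikit-learn GP. The kernel, the value of beta, and the simplification of approximating the running intersection $C_t(x)$ by the current interval $Q_t(x)$ are our assumptions for illustration, not our actual implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def next_probe_lse(X_probed, y_stiffness, X_grid, h, beta=9.0):
    """Pick the next probing location by maximizing the LSE ambiguity a_t(x).

    X_probed: (n, 2) probed grid locations; y_stiffness: (n,) stiffness estimates.
    X_grid: (m, 2) candidate locations; h: threshold defining the level set.
    """
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1) + WhiteKernel(1e-3),
                                  normalize_y=True)
    gp.fit(X_probed, y_stiffness)
    mu, sigma = gp.predict(X_grid, return_std=True)
    upper = mu + np.sqrt(beta) * sigma          # upper bound of Q_t(x)
    lower = mu - np.sqrt(beta) * sigma          # lower bound of Q_t(x)
    ambiguity = np.minimum(upper - h, h - lower)  # a_t(x), Eq. (3)
    return X_grid[np.argmax(ambiguity)], ambiguity
```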

IV-D Probing and Force Sensing

We mounted the miniaturized tri-axial force sensor developed in [3] onto the needle driver tool of the dVRK to provide contact force measurements (see Fig. 1). The force sensor is a force-sensitive resistor (FSR) based force-to-voltage transducer operating in a thru-mode electrode configuration. The design combines an FSR array with a center-mounted pre-load mechanical structure to provide a highly responsive measurement of the contact force and its direction. In this experiment, we electrically bridged the four sensing elements of the array to provide a more sensitive force measurement along the normal direction of the sensor, since the dVRK can be accurately oriented to probe along the local surface normal. In addition, we implemented online signal processing software in the sensor's embedded controller for analog signal amplification, filtering and automatic self-calibration, which is a crucial step for improving sensor performance when using inexpensive force-sensing materials such as 3M Velostat film from Adafruit.

First, the robot is commanded to a safe position at a known safe height, as shown in Fig. 4(a). The robot is then commanded to a pre-probe position located at an estimated offset from the desired probing point, along the surface normal at that point (see Fig. 4(a)). While maintaining its orientation, the tool is then commanded to move toward the probing point. Force and position data are recorded continuously during this approach. Once the force sensor contacts the tissue surface, the robot stops advancing if the contact force exceeds a set threshold or if the probe penetrates more than a set depth. This ensures that the probing does not hurt the patient or damage the robot. Following this, we retract the robot back to the pre-probe position and then to the safe position. Note that we do not record force and displacement data during the retraction.
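The probing motion can be summarized by the sketch below; move_to, current_position, and read_force are hypothetical stand-ins for the dVRK motion and force-sensor interfaces, and the numeric limits are illustrative rather than the exact values used in our experiments.

```python
import numpy as np

FORCE_LIMIT = 10.0    # N, illustrative stopping force
DEPTH_LIMIT = 0.008   # m, illustrative stopping depth
STEP = 0.0005         # m, per-iteration advance along the inward normal
OFFSET = 0.01         # m, pre-probe standoff above the surface

def probe_point(surface_point, surface_normal, move_to, current_position, read_force):
    """Advance along the inward surface normal, recording (depth, force) pairs.

    move_to(p), current_position() and read_force() are placeholders for the
    robot motion and force-sensor interfaces.
    """
    normal = surface_normal / np.linalg.norm(surface_normal)
    pre_probe = surface_point + OFFSET * normal     # approach pose above the surface
    move_to(pre_probe)

    samples = []
    target = pre_probe.copy()
    while True:
        target = target - STEP * normal             # step toward (and into) the tissue
        move_to(target)
        force = read_force()
        # Penetration depth past the nominal surface point.
        depth = np.dot(pre_probe - current_position(), normal) - OFFSET
        samples.append((max(depth, 0.0), force))
        if force > FORCE_LIMIT or depth > DEPTH_LIMIT:
            break
    move_to(pre_probe)   # retract; no data is recorded on the way out
    return np.array(samples)
```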

Fig. 4: (a) The various steps taken to probe a desired point along a desired normal direction as provided by the tumor search module. (b) The plot shows force vs. displacement for two sample points A and B on the surface of the organ. Note that the forces are limited to 10 N and the displacement is restricted to 8 mm. RANSAC is used to find the best-fit line, and the slope gives us an estimate of the stiffness at the probed location. (c) This 2D space forms a one-to-one mapping with the 3D surface of the organ. The green circle represents the user-defined ROI. The stiffness map is estimated in this ROI. Different shades of blue are used to represent the stiffness values. Point A is located on a stiff region, while B is located on a soft region. The plots reveal the corresponding stiffness values.

Next, the recorded data is used as input to a stiffness mapping algorithm similar to [25]. There are two important steps in this algorithm: (i) baseline removal and (ii) stiffness calculation. Ideally, the force sensor reading should be zero when there is no contact between the force sensor and the region of interest. In reality, however, there is always a small residual in the sensor readings even when there is no contact. Thus, we find the mean sensor output while the probe is at the pre-probe position (no contact) and subtract this baseline from all subsequent measurements. For the stiffness calculation, we use a standard RANSAC algorithm to find the best-fit line between the force sensor data (y-axis) and the displacement data (x-axis). The calculated regression coefficient indicates the rate of change of the contact force with respect to displacement, which serves as an approximation of the stiffness. Fig. 4(b) shows the nearly linear variation of force with displacement, justifying the use of the slope of the best-fit line as an approximation of the stiffness.
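A minimal sketch of the baseline removal and slope estimation, using scikit-learn's RANSAC regressor as a stand-in for our line-fitting step; the number of baseline samples is an illustrative assumption.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, RANSACRegressor

def estimate_stiffness(displacement, force, n_baseline=20):
    """Estimate stiffness as the RANSAC best-fit slope of force vs. displacement.

    displacement, force: 1-D arrays recorded during the approach phase.
    n_baseline: number of initial samples (probe not yet in contact) used to
    estimate and remove the sensor's zero-force offset.
    """
    force = force - force[:n_baseline].mean()   # baseline removal
    model = RANSACRegressor(LinearRegression())
    model.fit(displacement.reshape(-1, 1), force)
    return model.estimator_.coef_[0]            # slope ≈ stiffness
```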

IV-E Dynamic Image Overlay

The rendering of the overlays is done using the Visualization Toolkit (VTK). Two virtual 3D cameras are created to match the real cameras using the results of camera calibration. The pre-operative model is placed in virtual 3D space according to the camera-to-organ registration and rendered as a polygonal mesh from the perspective of each camera. These two renders are overlaid onto the live video from the left and right camera feeds, which serve as their backgrounds.
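For illustration, a minimal sketch of creating a VTK virtual camera whose vertical field of view matches a calibrated pinhole camera; the pose arguments and the use of the view angle alone (ignoring principal-point offsets and the full projection-matrix override) are simplifying assumptions.

```python
import math
import vtk

def make_virtual_camera(fy, image_height, position, focal_point, view_up):
    """Create a vtkCamera matching a pinhole camera with vertical focal length fy
    (in pixels) and the given image height (in pixels)."""
    camera = vtk.vtkCamera()
    camera.SetPosition(*position)
    camera.SetFocalPoint(*focal_point)
    camera.SetViewUp(*view_up)
    # Vertical field of view of a pinhole camera: 2 * atan(h / (2 * fy)).
    camera.SetViewAngle(math.degrees(2.0 * math.atan(image_height / (2.0 * fy))))
    return camera
```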

Fig. 5: The figures show the augmented stiffness map at various stages of probing. The high-stiffness regions are shown in darker shades of blue, while the low-stiffness regions are in lighter shades of blue. (a) Result after a single probe, (b) result after 4 probes, (c) result after 10 probes.

These renderings are displayed in a GUI divided into three tabs. The first tab is for registration; it overlays the pre-operative model as described above and additionally allows the user to mask and segment the point cloud as described in Sec. IV-B. It also provides buttons to start and stop model registration. The second tab allows the user to select a region of interest (ROI) defined in a 2D UV texture map that represents a correspondence between pixels on a 2D image and 3D coordinates on the surface of the pre-operative model (see Fig. 4(c)). The third tab overlays the pre-operative model on the camera feeds and allows the user to set the opacity of the overlay using a slider at the bottom of the window.

In addition, the renderings in the third tab apply a texture to the rendered model. For this texture, the results of the tumor search are turned into a heat-map image representing relative stiffness in the user-specified ROI (see Fig. 4(c)). This ROI is defined in 2D UV texture coordinates that represent a correspondence between pixels on a 2D image and 3D coordinates on the surface of the polygonal mesh. The heat-map image is broadcast over ROS and blended onto the pre-operative model's 2D texture image, resulting in dark marks in high-stiffness areas while preserving the texture details of the pre-operative model's original texture (see Fig. 4(c)). This 2D texture is then applied to the polygonal mesh using the UV map, resulting in a 3D overlay of the stiffness map onto the video feed from each camera. Fig. 5 shows the stiffness maps at various stages of probing, dynamically overlaid on the registered model of the organ. Note that the stiffness map clearly reveals the location and shape of the tumor, which is shown in the middle row of Fig. 3.
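To illustrate the blending step, here is a minimal sketch of compositing a normalized stiffness heat-map into the model's texture image inside the UV-space ROI; the blue colormap and the alpha value are illustrative choices, not the exact colormap used in our renderer.

```python
import numpy as np

def blend_stiffness_texture(base_texture, stiffness_map, roi_mask, alpha=0.6):
    """Blend a stiffness heat-map into the model's 2D texture image.

    base_texture: (H, W, 3) uint8 texture of the pre-operative model.
    stiffness_map: (H, W) float array in UV space, NaN where not yet probed.
    roi_mask: (H, W) bool array marking the user-selected region of interest.
    """
    out = base_texture.astype(np.float32)
    valid = roi_mask & ~np.isnan(stiffness_map)
    if valid.any():
        s = stiffness_map[valid]
        s = (s - s.min()) / (np.ptp(s) + 1e-9)   # normalize to [0, 1]
        # Darker blue for stiffer regions, lighter blue for softer regions (cf. Fig. 5).
        color = np.stack([(1 - s) * 255, (1 - s) * 255, np.full_like(s, 255)], axis=-1)
        out[valid] = (1 - alpha) * out[valid] + alpha * color
    return out.astype(np.uint8)
```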

V Discussions and Future Work

In this paper, we presented a system that unifies autonomous tumor search with augmented reality to quickly reveal the shape and location of tumors while visually overlaying that information on the real organ. This has the potential to reduce the cognitive overload of surgeons and assist them during surgery. Our system demonstrated promising results in experiments on phantom silicone organs.

While we demonstrate the task of stiffness mapping in this work, our system can also be used to visually overlay pre-surgical plans and ablation paths, annotate important landmarks, etc., to aid the surgeon during the procedure. In our future work, we plan to account for large deformations of the organ and update the model accordingly. We plan to utilize computationally fast approaches to segment the dVRK tools from the images and avoid obstructions to the overlaid stiffness map. Furthermore, as demonstrated by other researchers in this field, we believe a hybrid force-position controller can result in more accurate probing and hence better stiffness estimation. Finally, we plan to perform experiments on ex-vivo organs to assess the efficacy of the system in a realistic surgical setting.

Acknowledgment

Special thanks to James Picard, Sasank Vishnubhatla, Peggy Martin and other colleagues from Biorobotics Lab, Carnegie Mellon University.

References

  • [1] J. Palep, “Robotic assisted minimally invasive surgery,” Journal of Minimal Access Surgery, vol. 5, no. 1, pp. 1–7, Jan 2009.
  • [2] A. Garg, S. Sen, R. Kapadia, Y. Jen, S. McKinley, L. Miller, and K. Goldberg, “Tumor localization using automated palpation with gaussian process adaptive sampling,” in CASE.   IEEE, 2016, pp. 194–200.
  • [3] L. Li, B. Yu, C. Yang, P. Vagdargi, R. A. Srivatsan, and H. Choset, “Development of an inexpensive tri-axial force sensor for minimally invasive surgery,” in Proceedings of the International Conference on Intelligent Robots and Systems. IEEE, 2017.
  • [4] E. Ayvali, R. A. Srivatsan, L. Wang, R. Roy, N. Simaan, and H. Choset, “Using bayesian optimization to guide probing of a flexible environment for simultaneous registration and stiffness mapping,” in ICRA, 2016, pp. 931–936.
  • [5] E. Ayvali, A. Ansari, L. Wang, N. Simaan, and H. Choset, “Utility-guided palpation for locating tissue abnormalities,” IEEE Robotics and Automation Letters, vol. 2, no. 2, pp. 864–871, 2017.
  • [6] H. Salman, E. Ayvali, R. A. Srivatsan, Y. Ma, N. Zevallos, R. Yasin, L. Wang, N. Simaan, and H. Choset, “Trajectory-optimized sensing for active search of tissue abnormalities in robotic surgery,” in Submitted to ICRA.   IEEE, 2018.
  • [7] R. A. Srivatsan, G. T. Rosen, F. D. Naina, and H. Choset, “Estimating SE(3) elements using a dual quaternion based linear Kalman filter,” in Robotics : Science and Systems, 2016.
  • [8] R. A. Srivatsan, M. Xu, N. Zevallos, and H. Choset, “Bingham Distribution-Based Linear Filter for Online Pose Estimation,” in Robotics : Science and Systems, 2017.
  • [9] K. Patath, R. A. Srivatsan, N. Zevallos, and H. Choset, “Dynamic Texture Mapping of 3D models for Stiffness Map Visualization,” in Workshop on Medical Imaging, IEEE/RSJ International Conference on Intelligent Robots and Systems, 2017.
  • [10] P. Puangmali, K. Althoefer, L. D. Seneviratne, D. Murphy, and P. Dasgupta, “State-of-the-art in force and tactile sensing for minimally invasive surgery,” IEEE Sensors Journal, vol. 8, no. 4, pp. 371–381, 2008.
  • [11] M. I. Tiwana, S. J. Redmond, and N. H. Lovell, “A review of tactile sensing technologies with applications in biomedical engineering,” Sensors and Actuators A: physical, vol. 179, pp. 17–31, 2012.
  • [12] P. Chalasani, L. Wang, R. Yasin, N. Simaan, and R. H. Taylor, “Online estimation of organ geometry and tissue stiffness using continuous palpation,” submitted to IEEE Robotics and Automation Letters, 2017.
  • [13] J. H. Shuhaiber, “Augmented reality in surgery,” Archives of surgery, vol. 139, no. 2, pp. 170–174, 2004.
  • [14] D. Teber, S. Guven, T. Simpfendörfer, M. Baumhauer, E. O. Güven, F. Yencilek, A. S. Gözen, and J. Rassweiler, “Augmented reality: a new tool to improve surgical accuracy during laparoscopic partial nephrectomy? preliminary in vitro and in vivo results,” European urology, vol. 56, no. 2, pp. 332–338, 2009.
  • [15] L.-M. Su, B. P. Vagvolgyi, R. Agarwal, C. E. Reiley, R. H. Taylor, and G. D. Hager, “Augmented reality during robot-assisted laparoscopic partial nephrectomy: toward real-time 3D-CT to stereoscopic video registration,” Urology, vol. 73, no. 4, pp. 896–900, 2009.
  • [16] T. Yamamoto, B. Vagvolgyi, K. Balaji, L. L. Whitcomb, and A. M. Okamura, “Tissue property estimation and graphical display for teleoperated robot-assisted surgery,” in ICRA, 2009, pp. 4239–4245.
  • [17] S. McKinley, A. Garg, S. Sen, R. Kapadia, A. Murali, K. Nichols, S. Lim, S. Patil, P. Abbeel, A. M. Okamura et al., “A single-use haptic palpation probe for locating subcutaneous blood vessels in robot-assisted minimally invasive surgery,” in CASE.   IEEE, 2015, pp. 1151–1158.
  • [18] A. S. Naidu, M. D. Naish, and R. V. Patel, “A breakthrough in tumor localization,” IEEE Robotics & Automation Magazine, 2017.
  • [19] A. L. Trejos, J. Jayender, M. Perri, M. D. Naish, R. V. Patel, and R. Malthaner, “Robot-assisted tactile sensing for minimally invasive tumor localization,” The International Journal of Robotics Research, vol. 28, no. 9, pp. 1118–1133, 2009.
  • [20] A. Gotovos, N. Casati, G. Hitz, and A. Krause, “Active learning for level set estimation,” in IJCAI, 2013, pp. 1344–1350.
  • [21] K. A. Nichols and A. M. Okamura, “Methods to segment hard inclusions in soft tissue during autonomous robotic palpation,” IEEE Transactions on Robotics, vol. 31, no. 2, pp. 344–354, 2015.
  • [22] H. Liu, D. P. Noonan, B. J. Challacombe, P. Dasgupta, L. D. Seneviratne, and K. Althoefer, “Rolling mechanical imaging for tissue abnormality localization during minimally invasive surgery,” IEEE Transactions on Biomedical Engineering, vol. 57, pp. 404–414, 2010.
  • [23] R. E. Goldman, A. Bajo, and N. Simaan, “Algorithms for autonomous exploration and estimation in compliant environments,” Robotica, vol. 31, no. 1, pp. 71–87, 2013.
  • [24] R. D. Howe, W. J. Peine, D. Kantarinis, and J. S. Son, “Remote palpation technology,” IEEE Engineering in Medicine and Biology Magazine, vol. 14, no. 3, pp. 318–323, 1995.
  • [25] R. A. Srivatsan, E. Ayvali, L. Wang, R. Roy, N. Simaan, and H. Choset, “Complementary Model Update: A Method for Simultaneous Registration and Stiffness Mapping in Flexible Environments,” in ICRA, 2016, pp. 924–930.
  • [26] J. Marescaux, F. Rubino, M. Arenas, D. Mutter, and L. Soler, “Augmented-reality–assisted laparoscopic adrenalectomy,” Jama, vol. 292, no. 18, pp. 2211–2215, 2004.
  • [27] N. Haouchine, J. Dequidt, I. Peterlik, E. Kerrien, M.-O. Berger, and S. Cotin, “Towards an accurate tracking of liver tumors for augmented reality in robotic assisted surgery,” in ICRA, 2014, pp. 4121–4126.
  • [28] P. J. Besl and N. D. McKay, “Method for registration of 3-D shapes,” in Robotics-DL tentative.   International Society for Optics and Photonics, 1992, pp. 586–606.
  • [29] S. Rusinkiewicz and M. Levoy, “Efficient variants of the ICP algorithm,” in Proceedings of the 3rd International Conference on 3-D Digital Imaging and Modeling, 2001.
  • [30] S. D. Billings, E. M. Boctor, and R. H. Taylor, “Iterative most-likely point registration (IMLP): A robust algorithm for computing optimal shape alignment,” PloS one, vol. 10, no. 3, p. e0117688, 2015.
  • [31] M. H. Moghari and P. Abolmaesumi, “Point-based rigid-body registration using an unscented Kalman filter,” IEEE Transactions on Medical Imaging, vol. 26, no. 12, pp. 1708–1728, 2007.
  • [32] P. Kazanzides, Z. Chen, A. Deguet, G. S. Fischer, R. H. Taylor, and S. P. DiMaio, “An open-source research kit for the da Vinci® surgical system,” in ICRA. IEEE, 2014, pp. 6434–6439.
  • [33] B. K. Horn, “Closed-form solution of absolute orientation using unit quaternions,” JOSA A, vol. 4, no. 4, pp. 629–642, 1987.