Learning Soft Tissue Behavior of Organs for Surgical Navigation with Convolutional Neural Networks

by Micha Pfeiffer et al.

Purpose: In surgical navigation, pre-operative organ models are presented to surgeons during the intervention to help them efficiently find their target. In the case of soft tissue, these models need to be deformed and adapted to the current situation by using intra-operative sensor data. A promising way to realize this is the use of real-time capable biomechanical models. Methods: We train a fully convolutional neural network to estimate the displacement field of all points inside an organ when given only the displacement of a part of the organ's surface. The network trains on entirely synthetic data of random organ-like meshes, which allows us to generate much more data than is otherwise available. The input and output data are discretized into a regular grid, allowing us to fully utilize the capabilities of convolutional operators and to train and infer in a highly parallelized manner. Results: The system is evaluated on in-silico liver models, phantom liver data and human in-vivo breathing data. We test the performance with varying material parameters, organ shapes and amounts of visible surface. Even though the network is only trained on synthetic data, it adapts well to the various cases and gives a good estimation of the internal organ displacement. Inference runs at over 50 frames per second. Conclusions: We present a novel method for training a data-driven, real-time capable deformation model. Its accuracy is comparable to other registration methods, it adapts very well to previously unseen organs and it does not need to be re-trained for every patient. The high inference speed makes the method useful for many applications such as surgical navigation and real-time simulation.



1 Introduction

An important prerequisite for soft tissue navigation during surgery is the simulation and prediction of tissue behavior. Here, the goal is to aid surgeons by visualizing information which is typically hidden, such as the position of tumors and blood vessels, in a context-sensitive manner.

The usual workflow is to first generate a pre-operative model from CT or MRI data, containing structures of interest such as vessels and tumors. During surgery, these structures are visualized for navigation. However, as organs deform during the intervention, the models have to be adapted to correctly reflect the intra-operative scene. This requires intra-operative sensor data such as video, CT or ultrasound to capture the state of the organs, which can then be used to deform the pre-operative model so that it matches the current situation. Since this task involves many sub-stages which should ideally all run in real time (image processing, registration of organs, soft tissue simulation, rendering for virtual reality), it is desirable to use systems which require as few computational resources as possible.

When acquiring intra-operative sensor data to feed into the model, there are many challenges. Besides the limited field of view and noisy data, there are often unknown parameters which are either difficult or impossible to obtain during a surgery, such as knowledge about unseen organ surfaces, friction between organs and material parameters such as the tissue’s elasticity. Approximations have to be used and the deformation model needs to be able to deal with uncertain information.

The real-time simulation of deformable elastic bodies is a complex problem and an active area of research. Many pre- to intra-operative registration systems are based on the Finite Element Method (FEM) which can accurately calculate deformations of organs Suwelack2014 ; Reichard2017 ; Plantefeve2016 ; Mendizabal2017 ; PeterlikFastElRegistration ; Simpson2012 . The FEM has several benefits like the possibility to precisely represent complex meshes by using small irregular volume elements and the ability to compute physically accurate solutions. It can be made real-time capable Wu2014 ; Allard11 ; Bui2018 , but due to the large, often sparse matrices the speed gain obtained by running the code on the GPU is limited. Changes in the mesh topology (such as a cut) require a computationally expensive update of the topology and the matrices Wu2014 . It is also difficult to incorporate unknown material parameters or uncertain boundary conditions Peterlik2017 .

Recently, convolutional neural networks (CNNs) have proven to be a powerful tool in building many data-driven models DeepLearning2016 , with the benefit of parallelizing well on modern GPUs. In this work, we explore the usage of a CNN to estimate an organ’s internal deformation from known surface deformation. The network is trained on synthetic FEM simulation data and learns to interpret the mesh structure of an organ as well as boundary conditions to calculate the displacement of internal points. Our goal is to determine the displacement of internal structures when knowing about the displacement of (some) surface points. We assume that surface correspondences between pre-operative and intra-operative models have been computed using data from an intra-operative modality such as the laparoscope.

Our contribution consists of a novel method to generate a real-time capable soft tissue model which immediately generalizes to new patients, by training on a large number of synthetic meshes. We show that our model performs well on both synthetic and real data even without knowing the precise nature of the underlying material model. Since the training data makes very few assumptions about the structure of the organ and the type of deformation, our method shows that neural networks can indeed be taught how soft tissue deformation works in a general setting. In effect, we substitute a data-driven model for a highly engineered computational one and in doing so reach a very low computation time. Our code is publicly available at https://gitlab.com/nct_tso_public/cnn-deformation-estimation.

1.1 Related Work

The idea of using neural networks to estimate the outcome of an FEM simulation is not new. Hambli et al. Hambli2006 have used a fully connected net to predict the velocity and angle of a tennis ball after hitting a racket. A similar approach has been adopted by Tonutti et al. Tonutti17 for predicting the movement of a tumor during brain surgery, but the displacement of the healthy tissue is not considered. Rechowicz et al. Rechowicz2013 estimate the displacement of a patient's rib cage, but only predict the displacement of surface nodes. In contrast, Morooka et al. Morooka2008 estimate the displacement of a full liver model using neural networks by superimposing basic deformation modes. These approaches show that neural networks can indeed be trained to estimate soft tissue behavior. However, they work with the data of a single patient, requiring re-training (and in some cases even re-designing) the network for every new patient. Additionally, they use the acting surface forces as input, which are very difficult to obtain intra-operatively.

Yamamoto et al. Yamamoto2017 show that neural networks can estimate the deformation of a liver from the known displacement of a partial surface. They report very small errors while using only 3% of the liver surface, but also design their network for specific patients and evaluate their method on the same liver mesh that was used during training.

Approaches which generalize to new patients were made by Lorente et al. Lorente2017 , who estimate the liver's deformation from breathing motion using various machine learning methods, and Martínez-Martínez et al. Martinez2017 , who estimate breast compression. However, both methods focus on single scenarios where the main direction of the forces stays similar throughout all experiments.

Our method is inspired by the work of Guo et al. Guo2016 , who have used neural networks to estimate the results of fluid dynamics simulations for real-time applications. One major difference is that fluid dynamics are often computed on regular grids with a fixed number of cells which makes them easier to handle with neural networks. In contrast to this, we need to re-sample the irregular simulation domain before we pass the data to the network.

2 Methods

The goal of this work is to estimate the current displacement of an organ and its internal structures when given a) the pre-operative geometric model of the organ and b) the displacement of the visible part of the organ’s surface as seen by an intra-operative sensor. Since CNNs work best with a regular grid, we discretize all data at regularly spaced points. In doing so, our network’s input becomes a cube of voxels and the network’s output becomes a displacement field with a three-dimensional displacement vector for each of these voxels.

We sample the cubical volume of side length w into a regular grid G of points of interest. For each point p we determine three properties:

  • Organ structure: To represent the mesh, we determine the signed distance d(p) from each point p to the nearest surface of the organ, in meters.

  • Visible displacement: We assume that the displacement of part of the organ surface is known, for example by tracking features with a (stereo-)laparoscope. For these surface points, we assign a visible displacement vector u_v(p). For all other points, u_v(p) = 0.

  • Zero displacement: We assign a binary value m(p) which is set to one if a point p of the surface is fixed to surrounding tissue and should be considered static. For all other points, m(p) is set to zero.

Knowing d, u_v and m, the goal is to find a displacement vector u(p) for each point p. The following sections explain how we randomly generate training data, how we build the network and how the network is trained to generate the displacement field for an organ.
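The three per-point properties can be stacked into one multi-channel voxel grid before being fed to the network. A minimal NumPy sketch (the channel ordering and array layout are our own assumptions, not specified in the text):

```python
import numpy as np

def assemble_input(signed_dist, zero_disp, visible_disp):
    """Stack the three per-point properties into one 5-channel voxel grid.

    signed_dist:  (n, n, n)    signed distance to the organ surface
    zero_disp:    (n, n, n)    1 where the surface is fixed, 0 elsewhere
    visible_disp: (n, n, n, 3) known surface displacement (zero elsewhere)
    """
    return np.concatenate(
        [signed_dist[..., None], zero_disp[..., None], visible_disp],
        axis=-1,
    )  # shape (n, n, n, 5)
```

A grid of side n then yields an input tensor of n³ points with five scalars each.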

2.1 Synthetic Training Data

Since training neural networks can be very time consuming, it is often not feasible to train a network before surgery using patient-specific data. Instead, we generate synthetic datasets which are based on simulations of random, organ-like meshes (see Fig. 2). First, a random surface mesh is generated by extruding and deforming a mesh primitive multiple times. The volume inside this surface is then filled with tetrahedral elements using the Gmsh software GMSH . We choose boundary conditions which are inspired by laparoscopic surgeries: a zero displacement boundary condition is applied to a random surface region with a radius ranging from 2.5 cm to 5.5 cm (indicating areas where the organ is fixed to other organs) and a random force between 0 and 1 N is applied to another random surface region with a radius ranging from 1.5 cm to 2.5 cm (simulating instruments which manipulate the organ).

A non-linear, homogeneous, isotropic material model is chosen for the mesh, because we expect large deformations and expect to know very little about a patient's specific tissue parameters. Young's modulus is set to 1.7 kPa and the Poisson ratio to 0.35. The Elmer simulation software Elmer2013 is used to run the steady-state simulations, resulting in a known target displacement vector u_t for each vertex (internal and external) of the random meshes.

The grid G with side length w = 30 cm is generated, and d and m are calculated for each point p via interpolation with a Gaussian kernel. To generate the visible displacement u_v, another random surface region is selected and the target displacement vector u_t is copied to u_v for each point in that area (for all other points, u_v = 0).
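The interpolation of scattered per-vertex values onto the regular grid points could be sketched as follows; a brute-force, normalized Gaussian kernel is assumed here, since the text does not specify the kernel width or normalization:

```python
import numpy as np

def gaussian_interpolate(grid_pts, sample_pts, sample_vals, sigma=0.01):
    """Interpolate scattered per-vertex values onto regular grid points
    with a normalized Gaussian kernel.

    grid_pts:    (M, 3) grid point coordinates (e.g. in meters)
    sample_pts:  (K, 3) mesh vertex coordinates
    sample_vals: (K,) or (K, C) values attached to the vertices
    sigma:       kernel width, same unit as the coordinates (assumed value)
    """
    # pairwise squared distances between grid points and mesh vertices
    d2 = ((grid_pts[:, None, :] - sample_pts[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True) + 1e-12  # avoid division by zero
    return w @ sample_vals
```

For the paper-scale grids a spatially partitioned lookup would be preferable to the O(M·K) distance matrix used here.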

This process is repeated for 10 000 random meshes. If a randomly extruded mesh lies partly outside the 30 cm cube or if it has self-intersections, the sample is discarded. The same is done if the deformation is larger than 10 cm, since such samples are much rarer and would result in a very unevenly distributed dataset. We augment the training data by flipping the grid along the X, Y and Z axes or any combination of the three, resulting in a factor-eight increase in the number of training samples. Of the resulting dataset, roughly 90% (37440 samples) are used for training and the remaining data is used for validation. Before passing the data to the network, we scale the displacements u_v and u_t by a constant factor, which improves convergence properties.
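The flip augmentation can be sketched as below. Note that mirroring a displacement field along an axis also requires negating the corresponding vector component to keep the field physically consistent; the text does not spell out this detail, so it is our assumption:

```python
import numpy as np

def flip_displacement_field(u, axes):
    """Mirror a voxelized displacement field of shape (n, n, n, 3).

    Flipping the grid along a spatial axis reverses voxel order along that
    axis AND negates the matching vector component, so the mirrored field
    remains a valid deformation of the mirrored geometry.
    """
    u = np.flip(u, axis=axes).copy()
    for ax in axes:
        u[..., ax] *= -1.0
    return u
```

Applying this for all seven non-empty axis subsets (X, Y, Z and their combinations) yields the factor-eight increase mentioned above.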






Figure 2: a) Random mesh structure, b) signed distance function d on the grid G, c) visible displacement u_v (blue, magnitude) and zero displacement m (red), d) target displacement u_t (magnitude). In b), half of the grid is hidden. In c) and d), points outside the organ (positive signed distance) are clipped away to show only the organ structure.

2.2 Network Architecture

The input to our CNN is the regular grid G. Each point has five values assigned to it (d, m and the three components of u_v), so for a grid with n points per side the input to the network contains 5·n³ scalars. Since we want to calculate the three components of u for each point, the output of the network is 3·n³ scalars.

When learning displacement fields, a force acting on one side of the mesh usually influences not only the displacement in nearby points but can potentially have an effect on all organ points, even those on the far side of the organ. This means that in our network, each output must have the potential to be influenced by each input point (i.e. each output point must have a receptive field spanning the entire input).

The network uses an architecture similar to U-Net UNet2015 , with an encoder which reduces the input resolution and learns a high-level representation of the data and a decoder which reconstructs the high resolution output. We also use skip connections which copy features forward without modifying them, to allow the decoder to incorporate more detail in the computed output. Unlike U-Net, our network works with three-dimensional input data and all convolutions are calculated across the three dimensions.

The encoder uses average pooling layers to decrease the resolution of the input data, leading to a bottleneck at one eighth of the input resolution. In the bottleneck, the convolutional kernels are large enough to carry information across the computational domain, meeting the requirement of a large receptive field in the output layers. The decoder has three upsampling layers, each of which performs a simple nearest-neighbour interpolation to double the resolution, ensuring that the network's output resolution is the same as the input resolution. All convolutions have a kernel side length of 3 and a padding of 1, and each is followed by a SoftSign non-linear activation function.

For all points which lie outside the organ (as indicated by the signed distance d), the output is set to zero. The final network architecture is depicted in Fig. 3. It has about 9.1 million learnable parameters.
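A minimal PyTorch sketch of such a three-dimensional encoder-decoder is given below. It is much shallower and narrower than the 9.1-million-parameter network described above (two pooling stages instead of three, illustrative channel counts), but it shows the ingredients named in the text: 3×3×3 convolutions with padding 1, SoftSign activations, average pooling, nearest-neighbour upsampling and skip connections:

```python
import torch
import torch.nn as nn

class Block(nn.Module):
    """One 3x3x3 convolution (padding 1 keeps resolution) + SoftSign."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv = nn.Conv3d(c_in, c_out, kernel_size=3, padding=1)
        self.act = nn.Softsign()
    def forward(self, x):
        return self.act(self.conv(x))

class DisplacementNet(nn.Module):
    """5 input channels (d, m, u_v) -> 3 output channels (estimated u)."""
    def __init__(self, c=8):
        super().__init__()
        self.enc1 = Block(5, c)
        self.enc2 = Block(c, 2 * c)
        self.pool = nn.AvgPool3d(2)          # halves the resolution
        self.bottleneck = Block(2 * c, 2 * c)
        self.up = nn.Upsample(scale_factor=2, mode='nearest')
        self.dec2 = Block(4 * c, c)          # 2c skip + 2c upsampled
        self.dec1 = Block(2 * c, c)          # c skip + c decoded
        self.out = nn.Conv3d(c, 3, kernel_size=3, padding=1)
    def forward(self, x):
        s1 = self.enc1(x)                    # full resolution, kept for skip
        s2 = self.enc2(self.pool(s1))        # half resolution, kept for skip
        b = self.bottleneck(self.pool(s2))   # quarter-resolution bottleneck
        d2 = self.dec2(torch.cat([s2, self.up(b)], dim=1))
        d1 = self.dec1(torch.cat([s1, self.up(d2)], dim=1))
        return self.out(d1)
```

The skip connections simply concatenate earlier feature maps to the decoder input, mirroring the U-Net-style copying described above.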

Figure 3: Network architecture. Each bar represents the output or input to a convolutional layer. The network downsamples the input data to lower resolutions while increasing the number of channels per grid point. Then it increases resolution while decreasing the number of channels. Number of channels are shown above the bars and side lengths of the voxel grids are shown underneath (only changes are indicated). Skip-connections allow the network to access earlier information by copying feature maps to later layers. Some layers generate additional (downsampled) outputs (yellow bars).

2.3 Error Functions

Instead of only comparing the network's output to the target displacement, we let the network compute additional down-sampled displacement estimations and compare them to down-sampled versions of the target displacement. This encourages the network to estimate correct displacements already in the encoding and bottleneck layers while letting the decoding layers focus on increasing resolution. Thus the network computes additional lower-resolution outputs at intermediate steps (compare Fig. 3) and a down-scaled version of the target displacement u_t is created for each resolution. The mean square error is then calculated for each resolution r:

e_r = (1 / N_r) · Σ_p o(p) · |u(p) − u_t(p)|²

where N_r is the number of points at resolution r, o(p) is zero if p is outside the organ and one otherwise, and |·| denotes the magnitude of a vector. The final error is the weighted sum of the errors over the four different resolutions:

e = Σ_r w_r · e_r

where the w_r are weighting factors. The network is trained using the Adam Diederik15 optimizer until the error on the validation dataset no longer decreases.
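The multi-resolution error could be implemented roughly as below (PyTorch sketch). How the target displacement and the organ mask are down-scaled to each resolution is our assumption:

```python
import torch
import torch.nn.functional as F

def multires_loss(outputs, target, mask, weights):
    """Weighted sum of masked mean square errors over several resolutions.

    outputs: list of (B, 3, n_r, n_r, n_r) network outputs
    target:  (B, 3, n, n, n) ground-truth displacement, full resolution
    mask:    (B, 1, n, n, n), 1 inside the organ, 0 outside
    weights: one scalar weight per resolution
    """
    total = 0.0
    for out, w in zip(outputs, weights):
        n_r = out.shape[-1]
        # down-scale target and mask to this output's resolution (assumption)
        t = F.interpolate(target, size=(n_r,) * 3,
                          mode='trilinear', align_corners=False)
        m = F.interpolate(mask, size=(n_r,) * 3, mode='nearest')
        # mean squared displacement error over points inside the organ
        err = (m * (out - t) ** 2).sum() / m.sum().clamp(min=1)
        total = total + w * err
    return total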

3 Experiments and Results

We perform multiple experiments using the displacement estimation network. In a first in-silico test, we test whether the network can generalize from the synthetic training structures to the shape of a real liver as segmented from CT data and how the network deals with varying amounts of visible surface. Secondly, the network is tested on CT data of a phantom liver model undergoing a large deformation. In a final test, we use human in-vivo data showing liver deformation due to breathing and let the network register the liver in the inhaled state to the liver in exhaled state.

All of these experiments are very different from the training data, as they contain never before seen mesh structures, material parameters and deformations. Thus, we implicitly evaluate the network’s ability to generalize from the synthetic training data to settings which are closer to real-world scenarios.

Training as well as experiments were carried out on a personal computer with four Intel®  i7 cores (4.20GHz) and an Nvidia GeForce GTX 1080 GPU with 8 GB of video RAM.

3.1 In-Silico: Liver Mesh from CT

We generate a new dataset with the same methods previously described for the training data (Section 2.1), but instead of a random mesh, we use the mesh of a patient's liver (OpenHELP phantom Kenngott2015 ). Again, random zero displacement regions and forces act on the organ and the Elmer software calculates displacements. After discarding samples with a displacement greater than 10 cm, this process results in 1334 deformed liver samples. Each of these samples has the same mesh structure but a different u_v, m and u_t. The network runs on each sample, generating an estimated displacement field (see Fig. 1).

For each point p inside the liver, a displacement error can be calculated from the actual displacement u_t(p) and the estimated displacement u(p) as e(p) = |u(p) − u_t(p)|. On average, this error is expected to increase as the distance to the visible displacement increases. To quantify and visualize this effect, the points are sorted by their distance to the nearest visible surface point in the current sample. We call this distance the depth and collect all points from all 1334 data samples which have a similar depth. Fig. 4 shows how the average error increases with the depth as well as with the magnitude of the target displacement u_t.
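The depth-binned error analysis can be sketched as follows; the bin width is illustrative, not taken from the paper:

```python
import numpy as np

def error_by_depth(u_est, u_true, depth, bin_width=0.005):
    """Mean displacement error grouped by distance ('depth') to the
    nearest visible surface point.

    u_est, u_true: (N, 3) estimated and target displacements in meters
    depth:         (N,)   distance to the nearest visible point in meters
    Returns a dict mapping the lower edge of each depth bin to the
    mean per-point error magnitude in that bin.
    """
    err = np.linalg.norm(u_est - u_true, axis=1)  # e(p) per point
    bins = (depth / bin_width).astype(int)
    return {b * bin_width: err[bins == b].mean() for b in np.unique(bins)}
```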

The average error is also expected to depend on the amount of visible surface. To show how the error behaves as a larger percentage of surface becomes visible, we also sort the samples by percentage of visible surface and plot the average error (see Fig. 5).

Figure 4: Visualization of metrics for the deformed liver dataset, at three different depths (depth increases from the left to the right column). For all six plots, the abscissa indicates the magnitude of u_t, i.e. by how much a point should be displaced. The top row of plots shows how the average error increases with increasing target displacement as well as with increasing depth. To generate the bottom row of plots, the points were sorted into bins of size 0.2 cm according to the magnitudes of their estimated and target displacements, and the color of the plots indicates how many points end up in each bin. Most points (note the logarithmic scale) lie close to the diagonal, indicating that the network displaced them by the correct amount. As the distance from the visible area increases (rightmost plot), so does the average error. It can also be seen that the network is more likely to over-estimate the displacement than to under-estimate it. For all plots, points outside the organ are ignored.
Figure 5: Average error in the 1334 in silico liver samples by their amount of visible surface. To compute the averages, samples were first sorted into bins with a width of 10% each. The largest drop in error is when approximately 20% of the whole organ surface is visible. Incidentally, this corresponds to the amount of surface used in Suwelack2014 and is roughly the amount of surface which is accessible in a laparoscopic setting.

3.2 Phantom: Silicone Liver Deformation

To test the network’s ability to transfer its learned displacement estimation, we used the data of a silicone liver undergoing a large deformation (Liver registration dataset, Suwelack et al. Suwelack2014 , open-cas.org). The dataset contains the surface of the original, undeformed liver and a second surface which shows the same liver after it has been deformed by applying a strong force to its side using a spherical object (Fig. 6, left). Six small Teflon markers were placed into the liver and their positions before and after the deformation can be used to determine a target registration error. Due to the deformation, they move by up to 46.6 mm (mean 23.9 mm).

Again, we calculate the signed distance function d for every grid point using the original, undeformed surface. The zero displacement condition is set for approximately 21% of the liver surface where the organ was fixed to neighboring structures. To generate the visible displacement u_v, we manually annotate 13 points on the anterior side of the undeformed surface and their correspondences on the deformed surface. This sparse displacement information is then interpolated to other surface points, giving us an approximated dense surface displacement for roughly 18.5% of the organ surface (Fig. 6, center).

We use the network to estimate the internal displacement of the organ. The final registration error for the Teflon markers is 5.1 mm on average, with a maximum of 7.6 mm. Suwelack et al. Suwelack2014 used 19% of the surface area and also reported a mean error of 5.1 mm (maximum: 6.2 mm). While our method only takes a small fraction of the computation time (20 milliseconds compared to multiple seconds), we note that their method also solves the surface correspondence problem, which we assume as given.
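The marker-based target registration error reported above can be computed as below; sampling the network's displacement field at the marker positions is assumed to have happened beforehand (e.g. by trilinear interpolation of the voxel grid):

```python
import numpy as np

def target_registration_error(markers_pre, markers_intra, displacement):
    """Target registration error for implanted markers.

    markers_pre:   (K, 3) marker positions in the pre-operative model
    markers_intra: (K, 3) measured positions after deformation
    displacement:  (K, 3) estimated displacement sampled at the markers
    Returns (mean error, max error), same unit as the inputs.
    """
    tre = np.linalg.norm(markers_pre + displacement - markers_intra, axis=1)
    return tre.mean(), tre.max()
```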

Figure 6: Left: Original, undeformed CT model (grey) deformed by a strong force (white arrow), resulting in a large deformation (blue). Yellow arrows indicate the manually annotated sparse surface point correspondences. Center: The surface areas where a visible displacement (orange, values interpolated from sparse surface displacement) and zero displacement condition (red) are set. Right: The network's estimated displacement field, applied to the points.

3.3 In-Vivo: Human Breathing Motion

The network is applied to in-vivo human breathing motion data (3D-IRCADb-02 dataset, IRCAD, Strasbourg, ircad.fr/research/3d-ircadb-02/). This dataset contains surface meshes of abdominal organs in an inhaled and exhaled state. Our goal is to register the liver in inhaled state to the exhaled (target) state.

First, we perform a rigid registration using the ICP algorithm ICP1992 . To estimate the surface displacement, we then use the coherent point drift (CPD) algorithm CPD2006 on the liver surface model of the inhaled state to register it onto the liver model in the exhaled state. We generate our grid G from the inhaled state and set the zero displacement condition in the area where the vena cava touches the liver. The CPD's output is interpolated onto G to generate the visible displacement u_v.

We iteratively decrease the amount of visible surface by moving a plane from the patient's posterior to their anterior and discarding all visible displacement information on the posterior side of the plane. We run the network on each of these steps, giving us a displacement field estimation for each amount of visible surface. The generated displacement fields are used to deform the segmented portal vein tree inside the inhaled liver, which is then compared to the portal veins of the exhaled liver. Since contrast agent was injected before the experiment, the two vessel trees differ considerably in appearance. This makes them difficult to compare quantitatively, but the results can be compared qualitatively (Fig. 7).

Figure 7: Human breathing motion experiment. In all images, blue represents the source (inhaled) state and grey is the target (exhaled) state. Left: After rigidly aligning the livers, there is still a non-rigid deformation of up to 2 cm, as seen in the liver surfaces (background) and the portal veins (zoomed box). Center: Using the full visible displacement from CPD, the network estimates a displacement field which is applied to the vessels, resulting in a good registration for all three main branches. Right: We decrease the amount of visible surface area (orange, shown here for 18% of visible surface). The registration is still accurate close to the visible surface (right vessel branch) and becomes slightly less accurate as we go deeper into the liver.

4 Discussion

Our experiments show that the network can easily generalize to organ structures which it has never seen during training. In fact, when we tried further training the network with the patient-specific mesh, no significant improvement was found. Even though we used the same material parameters for all training samples, the network performed well on evaluation data with different material properties. The accuracy of the model depends mainly on the amount of visible surface and how close this visible area is to the area of largest deformation. This is in accordance with other research which indicates that the boundary conditions and a good geometric model have a higher influence on correct tissue modeling than the material model used Misra2009 ; Suwelack2011 .

The surface information available in the phantom experiment was very sparse, yet the very simple interpolation scheme we used to generate the dense surface displacement did not stop the network from creating satisfactory results. Similarly, in all experiments, the network deals well with the boundary conditions even though they are interpolated into our relatively coarse grid, indicating that it is not very susceptible to noise.

When training on purely artificial data, care must be taken that the data is representative of the real-world problem which the network should solve. We make multiple assumptions and simplifications which need to be addressed in the future:

First, during our discretization we place a grid point roughly every 4.7 mm. While this resolution is similar to that of many real-time biomechanical models, it results in a rough approximation of the organ surface.

Secondly, the training dataset does not simulate gravity and varying atmospheric pressure. The network shows promising results on the phantom and the human liver, both of which feature gravity, but adding body forces to the training dataset could further improve the network’s ability to simulate real situations.

Thirdly, the network assumes that the zero displacement boundary condition is known. This is usually not the case in a real intra-operative setting, where only rough assumptions can be made. Our experience shows, however, that the model is robust to changes in size of the zero displacement area.

We also assume that surface correspondences between the pre- and intra-operative meshes are known. In the example of laparoscopic liver surgery, finding correspondences is far from trivial due to smooth, textureless surfaces and the large deformations. One approach would be to compute an initial non-rigid registration using a method such as Suwelack2014 , which matches the two surfaces without needing to compute features. Subsequent real-time tracking of intra-operative surface features Giannarou2013 could be used to keep the correspondences up to date. To circumvent the difficulty of finding low-level geometric and texture features, current research focuses on using high-level cues - such as anatomical landmarks, the silhouette of the organ in the laparoscopic camera image and shading - to update a biomechanical model Adagolodjo2017 ; Koo2017 . Since these approaches compute displacement vectors to update their biomechanical model iteratively until they find a good registration, a method such as ours could be incorporated directly into a similar registration approach.

5 Conclusion

In this work, we employ a novel, data-driven model to estimate displacement information which is usually calculated using highly specialized, hand-engineered models. Due to its high speed, our method opens up many new possibilities in real-time applications. Besides being used in laparoscopic navigation, it could easily be extended to tackle brain shift in neurosurgery or could potentially be used for motion compensation during radiotherapy. The method could also be used to estimate unknown boundary conditions and material parameters.

We have shown that a data-driven model can be used to model soft tissue deformation without seeing patient-specific data during the training phase. The model’s accuracy is similar to other models, it is robust to simplifications in its input and it is very fast.

6 Compliance with Ethical Standards

Conflict of Interest: The authors declare that they have no conflict of interest. This article does not contain any studies with human participants or animals performed by any of the authors.


  • (1) Adagolodjo, Y., Trivisonne, R., Haouchine, N., Cotin, S., Courtecuisse, H.: Silhouette-based Pose Estimation for Deformable Organs Application to Surgical Augmented Reality. In: IROS 2017 - IEEE/RSJ International Conference on Intelligent Robots and Systems. Vancouver, Canada (2017)
  • (2) Allard, J., Courtecuisse, H., Faure, F.c.: Implicit FEM Solver on GPU for Interactive Deformation Simulation. In: W.m.W. Hwu (ed.) GPU Computing Gems Jade Edition. Elsevier (2011)
  • (3) Bui, H.P., Tomar, S., Chouly, F., Lozinski, A., Bordas, S.: Real-time Patient Specific Surgical Simulation using Corotational Cut Finite Element Method: Application to Needle Insertion Simulation. In: 13th World Congress in Computational Mechanics. New York, United States (2018)
  • (4) Chen, Y., Medioni, G.: Object modelling by registration of multiple range images. Image and Vision Computing 10(3) (1992)
  • (5) Diederik P. Kingma, J.L.B.: Adam: A Method for Stochastic Optimization. In: International Conference on Learning Representations (ICLR), San Diego. Ithaca (2015)
  • (6) Geuzaine, C., Remacle, J.F.: Gmsh: A 3-D finite element mesh generator with built-in pre- and post-processing facilities. International Journal for Numerical Methods in Engineering 79(11), 1309–1331 (2009)
  • (7) Giannarou, S., Visentini-Scarzanella, M., Yang, G.: Probabilistic Tracking of Affine-Invariant Anisotropic Regions. IEEE Transactions on Pattern Analysis and Machine Intelligence 35(1) (2013)
  • (8) Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press (2016)
  • (9) Guo, X., Li, W., Iorio, F.: Convolutional Neural Networks for Steady Flow Approximation. In: Proceedings of the 22Nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’16. ACM (2016)
  • (10) Hambli, R., Chamekh, A., Salah, H.B.H.: Real-time deformation of structure using finite element and Neural networks in virtual reality applications. Finite Elements In Analysis And Design 42 (2006)
  • (11) Kenngott, H.G., Wünscher, J.J., Wagner, M., Preukschas, A., Wekerle, A.L., Neher, P., Suwelack, S., Speidel, S., Nickel, F., Oladokun, D., Maier-Hein, L., Dillmann, R., Meinzer, H.P., Müller-Stich, B.P.: OpenHELP (Heidelberg laparoscopy phantom): development of an open-source surgical evaluation and training tool. Surgical Endoscopy (2015)
  • (12) Koo, B., Özgür, E., Le Roy, B., Buc, E., Bartoli, A.: Deformable Registration of a Preoperative 3D Liver Volume to a Laparoscopy Image Using Contour and Shading Cues. In: Medical Image Computing and Computer Assisted Intervention MICCAI 2017. Springer International Publishing (2017)
  • (13) Lorente, D., Martínez-Martínez, F., Rupérez, M.J., Lago, M.A., Martínez-Sober, M., Escandell-Montero, P., Martínez-Martínez, J.M., Martínez-Sanchis, S., Serrano-López, A.J., Monserrat, C., Martín-Guerrero, J.D.: A framework for modelling the biomechanical behaviour of the human liver during breathing in real time using machine learning. Expert Systems with Applications 71 (2017)
  • (14) Malinen, M., Råback, P.: Elmer Finite Element Solver for Multiphysics and Multiscale Problems, pp. 101–113. Multiscale Modelling Methods for Applications in Materials Science. Forschungszentrum Jülich (2013)
  • (15) Martínez-Martínez, F., Rupérez-Moreno, M.J., Martínez-Sober, M., Solves-Llorens, J.A., Lorente, D., Serrano-López, A.J., Martínez-Sanchis, S., Monserrat, C., Martín-Guerrero, J.D.: A finite element-based machine learning approach for modeling the mechanical behavior of the breast tissues under compression in real-time. Computers in Biology and Medicine 90 (2017)
  • (16) Mendizabal, A., Duparc, R.B., Bui, H.P., Paulus, C.J., Peterlik, I., Cotin, S.: Face-based smoothed finite element method for real-time simulation of soft tissue. Proc.SPIE 10135 (2017)
  • (17) Misra, S., Macura, K.J., Ramesh, K.T., Okamura, A.M.: The importance of organ geometry and boundary constraints for planning of medical interventions. Medical Engineering & Physics 31(2) (2009)
  • (18) Morooka, K., Chen, X., Kurazume, R., Uchida, S., Hara, K., Iwashita, Y., Hashizume, M.: Real-Time Nonlinear FEM with Neural Network for Simulating Soft Organ Model Deformation. In: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2008. Springer Berlin Heidelberg (2008)
  • (19) Myronenko, A., Song, X., Á. Carreira-Perpiñán, M.: Non-rigid point set registration: Coherent Point Drift. Advances in neural information processing systems 19 (2006)
  • (20) Peterlik, I., Courtecuisse, H., Rohling, R., Abolmaesumi, P., Nguan, C., Cotin, S., E Salcudean, S.: Fast Elastic Registration of Soft Tissues under Large Deformations. Medical Image Analysis 45 (2017)
  • (21) Peterlik, I., Haouchine, N., Ručka, L.s., Cotin, S.: Image-driven Stochastic Identification of Boundary Conditions for Predictive Simulation. In: 20th International Conference on Medical Image Computing and Computer Assisted Intervention. Québec, Canada (2017)
  • (22) Plantefève, R., Peterlik, I., Haouchine, N., Cotin, S.: Patient-Specific Biomechanical Modeling for Guidance During Minimally-Invasive Hepatic Surgery. Annals of Biomedical Engineering 44(1) (2016)
  • (23) Rechowicz, K.J., McKenzie, F.D.: Development and validation methodology of the Nuss procedure surgical planner. SIMULATION 89(12) (2013)
  • (24) Reichard, D., Häntsch, D., Bodenstedt, S., Suwelack, S., Wagner, M., Kenngott, H., Müller-Stich, B., Maier-Hein, L., Dillmann, R., Speidel, S.: Projective biomechanical depth matching for soft-tissue registration in … International Journal of Computer Assisted Radiology and Surgery (IJCARS) (7) (2017)
  • (25) Ronneberger, O., Fischer, P., Brox, T.: U-Net: Convolutional Networks for Biomedical Image Segmentation. In: Medical Image Computing and Computer-Assisted Intervention (MICCAI), LNCS, vol. 9351. Springer (2015)
  • (26) Simpson, A.L., Dumpuri, P., Jarnagin, W.R., Miga, M.I.: Model-Assisted Image-Guided Liver Surgery Using Sparse Intraoperative Data. Springer Berlin Heidelberg, Berlin, Heidelberg (2012)
  • (27) Suwelack, S., Röhl, S., Bodenstedt, S., Reichard, D., Dillmann, R., dos Santos, T., Maier-Hein, L., Wagner, M., Wünscher, J., Kenngott, H., Müller, B.P., Speidel, S.: Physics based shape matching for intraoperative image guidance. Medical Physics (41) (2014)
  • (28) Suwelack, S., Talbot, H., Röhl, S., Dillmann, R., Speidel, S.: A biomechanical liver model for intraoperative soft tissue registration. Progress in Biomedical Optics and Imaging - Proceedings of SPIE 7964 (2011)
  • (29) Tonutti, M., Gras, G., Yang, G.Z.: A machine learning approach for real-time modelling of tissue deformation in image-guided neurosurgery. Artificial Intelligence in Medicine 80 (2017)
  • (30) Wu, J., Westermann, R., Dick, C.: Real-Time Haptic cutting of high-resolution soft tissues. Studies in health technology and informatics 196 (2014)
  • (31) Yamamoto, U., Nakao, M., Ohzeki, M., Matsuda, T.: Deformation estimation of an elastic object by partial observation using a neural network. CoRR abs/1711.10157 (2017)