Interpatient Respiratory Motion Model Transfer for Virtual Reality Simulations of Liver Punctures

07/26/2017 ∙ by Andre Mastmeyer, et al. ∙ Universität Lübeck

Current virtual reality (VR) training simulators of liver punctures often rely on static 3D patient data and use an unrealistic, sinusoidal periodic animation of the respiratory movement. Existing methods for the animation of breathing motion support simple mathematical or patient-specific, estimated breathing models. However, building a personalized breathing model for each new patient mandates a dose-relevant or expensive 4D data acquisition for keyframe-based motion modeling. Given the reference 4D data, a model-building stage using linear-regression motion-field modeling first takes place. The methodology shown here then allows the transfer of an existing respiratory motion model of a 4D reference patient to a new static 3D patient. This goal is achieved by using non-linear inter-patient registration to warp one personalized 4D motion-field model to the new 3D patient data. This cost- and dose-saving new method is demonstrated visually in a qualitative proof-of-concept study.







1 Introduction

The virtual training and planning of minimally invasive surgical interventions with virtual reality simulators provides an intuitive, visuo-haptic user interface for the risk-sensitive learning and planning of interventions. The simulation of liver punctures has been an active research area for years [For16, For15, Mas14].

Figure 1: Left: Hardware: (1) Main stereo rendering window with successful needle insertion into a target, (2) fluoroscopic simulation, (3) Ultrasound simulation, (4) haptic device handle. Right: Main rendering window displaying oblique cut and color-coded patient structures just before needle insertion into a targeted bile duct (green).

First, the stereoscopic visualization of the anatomy of the virtual patient body is obviously important [For12]. Second, the haptic simulation of the opposing forces arising from manual interaction with the patient, rendered by haptic input and output devices, is key [For13]. Third, in recent developments, the simulation of the appearance and forces of the patient's breathing motions is vital [Mas17, Mas14].

The previously known VR training simulators usually use time-invariant 3D patient models. A puncture of the spinal canal can be simulated sufficiently plausibly by such models. In the thoracic and upper abdominal region, however, respiratory and cardiac movements are constantly present. In the diaphragm area at the bottom of the lungs, just above the liver, breathing movement differences in the longitudinal z direction of up to 5 cm have been measured [Sep02]. For 4D animation, the necessary data then consists of a single 3D CT data set and a mathematical or personalized animation model. Our aim here is to incorporate these physiological-functional movements into realistic modeling in order to offer the user a more realistic visuo-haptic VR puncture simulation. This also means taking into account the intra- and inter-cycle variability (hysteresis, variable amplitude during inhalation/exhalation).

A major interest and long-term goal of virtual and augmented reality is planning [Rei06] and intra-operative navigation assistance [Nic05]. However, in these works breathing motion is either not incorporated, or its neglect imposes applicability limits in terms of minimal tumor size [Nic05]. Published approaches from other groups [Vil14, Vil11] model only a sinusoidal respiratory motion without hysteresis and amplitude variation. First steps in the direction of a motion model building framework were taken by our group [Ehr11]. Accurate simulation of respiratory motion depending on surrogate signals is relevant, e.g., in fractionated radiotherapy. However, since a patient-specific 4D volume data set is required for personalized breathing model building, and its acquisition with 4D-CT is associated with a high radiation dose (approx. 20-30 mSv (eff.)), our approach is the transfer of existing 4D breathing models to new 3D patient data. For comparison, the average natural background radiation is approximately 2.1 mSv (eff.) per year; an intercontinental flight contributes at most 0.11 mSv (eff.).

On the other hand, there is no medical indication to acquire 4D CT data just for training purposes, and including model building from 4D MR data is unjustifiable for cost reasons.

In this paper, we present a feasibility study with first qualitative results for the transfer of an existing 4D breathing model [Wil14] to static 3D patient data, in which only a 3D CT covering chest and upper abdomen at maximum inhalation is necessary (approximately 2-13 mSv (eff.), e.g., on a Siemens Somatom Definition AS).

2 Recent Solution

The existing solution requires a full 4D data set acquisition for each new patient. In [For16, Mas16, Mas13, For14], concepts for a 3D VR simulator and efficient patient modeling for the training of different punctures (e.g. liver punctures) have already been presented, see Fig. 1. A Geomagic Phantom Premium 1.5 HighForce is used for the manual control and haptic force feedback of virtual surgical instruments. Nvidia shutter glasses and a stereoscopic display provide the plausible rendering of the simulation scene. This system uses time-invariant 3D CT data sets as a basis for the patient model. In case of manual interaction with the model, tissue deformations due to the acting forces of the instruments are represented by a direct visuo-haptic volume rendering method.

New developments of VR simulators [For15] allow a time-variant 4D-CT data set to be used in real time for the visualized patient instead of a static 3D CT data set. The respiratory movement can be visualized visuo-haptically as a keyframe model using interpolation, or with a flexible linear-regression-based breathing model as described below.
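The keyframe variant mentioned above can be sketched as a cyclic interpolation between per-phase deformation fields. This is an illustrative sketch under assumed conventions, not the simulator's actual implementation; the array layout and the helper name `keyframe_interpolate` are assumptions:

```python
import numpy as np

def keyframe_interpolate(phase_fields, t):
    """Cyclically interpolate between per-phase deformation fields.

    phase_fields : (N, ...) array, one deformation field per breathing phase
    t            : fractional phase index (e.g. 2.5 = halfway between
                   keyframes 2 and 3; wraps around cyclically)
    """
    n = phase_fields.shape[0]
    j = int(np.floor(t)) % n            # lower keyframe index
    w = t - np.floor(t)                 # interpolation weight in [0, 1)
    return (1.0 - w) * phase_fields[j] + w * phase_fields[(j + 1) % n]
```

Linear blending of keyframes is the simplest choice; the regression model described in Sec. 3.1 replaces it with a surrogate-driven prediction.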

3 Proposed Solution

The new solution requires only a 3D data set acquisition for each new patient.

3.1 Modeling of Breathing Motion

Realistic, patient-specific modeling of breathing motion in [For15] relies on a 4D CT data set covering one breathing cycle. It consists of N_phase 3D phase images I_j indexed by j = 1, …, N_phase. Furthermore, a surrogate signal (for example, acquired by spirometry) is necessary to parametrize patient breathing in a low-dimensional manner.

We use a measured spirometry signal v(t) [ml] and its temporal derivative v'(t) in a composite surrogate signal s(t) = (v(t), v'(t))^T. This allows different depths of breathing and the distinction between inhalation and exhalation (respiratory hysteresis) to be described. We assume linearity between signal and motion extracted from the 4D data. First, we use the 'sliding motion'-preserving approach from [SR12] for intra-patient inter-phase image registrations to a selected reference phase j_ref:

    φ_j = argmin_φ ( D[I_{j_ref} ∘ φ, I_j] + α R[φ] ),   j = 1, …, N_phase,

where the distance measure D (normalized sum of voxel-wise squared differences [Thi98]) and a specialized regularization R establish smooth voxel correspondences, except in the pleural cavity where discontinuity is a desired feature [SR12]. Based on the resulting fields φ_j, the coefficients a_1, a_2, a_3 are estimated as vector fields over the positions x. The personalized breathing model can then be stated as a linear multivariate regression [Wil14]:

    φ̂(x, t) = a_1(x) + a_2(x)·v(t) + a_3(x)·v'(t).

Thus, a patient's breathing state can be represented for a previously unseen breathing signal: any point in time t corresponds to a deformed reference image I_{j_ref} ∘ φ̂(·, t). Equipped with a real-time capable rendering technique via ray-casting with bent rays (see [For15] for full technical details), the now time-variant, model-based animatable CT data can be displayed in a new variant of the simulator and used for training. The rays are bent by the breathing motion model, which conveys the impression of an animated patient body while being very time-efficient (by space-leaping and early ray-termination) compared to deforming the full 3D data set for each time point and applying linear ray-casting afterwards [For15].
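The per-voxel linear regression on the composite surrogate signal can be sketched as an ordinary least-squares fit over the phase-wise displacement fields. This is a minimal sketch under assumed array conventions; the function names and the (N, X, Y, Z, 3) field layout are assumptions, not the authors' implementation:

```python
import numpy as np

def fit_breathing_model(phis, surrogate):
    """Fit per-voxel regression u(x) ~ a1(x) + a2(x)*v + a3(x)*v'.

    phis      : (N, X, Y, Z, 3) displacement fields, one per breathing phase
    surrogate : (N, 2) composite surrogate samples (v_j, v'_j) per phase
    returns   : (3, X, Y, Z, 3) coefficient fields (intercept, v-slope, v'-slope)
    """
    n = phis.shape[0]
    # Design matrix with intercept column: rows (1, v_j, v'_j)
    A = np.hstack([np.ones((n, 1)), surrogate])          # (N, 3)
    U = phis.reshape(n, -1)                              # flatten voxels: (N, P)
    coeffs, *_ = np.linalg.lstsq(A, U, rcond=None)       # (3, P)
    return coeffs.reshape(3, *phis.shape[1:])

def predict_motion(coeffs, v, dv):
    """Evaluate the fitted model for a new surrogate sample (v, v')."""
    return coeffs[0] + v * coeffs[1] + dv * coeffs[2]
```

Because the model is linear in the coefficients, training reduces to one matrix solve over all voxels at once, which matches the efficiency noted for the regression approach [Wil14].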

3.2 Transfer of Existing Respiratory Models to New, Static Patient Data

Using the method described so far, personalized breathing models can be created whose flexibility suffices to approximate breathing states of the patient that were not observed during model building.

However, the dose-relevant or expensive acquisition of at least one 4D data set has thus far been necessary for each patient.

Therefore, here we pursue the idea to transfer a readily built 4D reference patient breathing model to new static patient data and to animate it in the VR simulator described in Sec. 2.

For this purpose, it is necessary to correct for the anatomical differences between the reference patient's image data and the new patient's image data based on a similar breathing phase. This is achieved, for example, by a breath-hold scan in the maximum inhalation state, which corresponds to a certain phase j_ref in a standardized 4D acquisition protocol. A nonlinear inter-patient registration with minimization of a relevant image distance ensures the necessary compensation [Mas16, Mas13]:

    φ_inter = argmin_φ ( D[I^{ref}_{j_ref} ∘ φ, I^{new}] + β R[φ] ),

where the distance measure D (sum of squared voxel-wise differences) and a diffusive non-linear regularization R establish smooth inter-patient voxel correspondences. On both sides, the breathing-phase 3D image of maximum inhalation is selected as the reference phase. The distance measure can be selected according to the modality and quality of the image data. The transformation φ_inter determined in the nonlinear inter-patient registration can now be used to warp the intra-patient inter-phase deformations φ_j of the reference patient as a plausible estimate for the new patient (∘: composition, right to left):

    φ̂_j^{new} = φ_inter^{-1} ∘ φ_j ∘ φ_inter,   j = 1, …, N_phase.

The approach for estimating the respiratory motion of the new patient can now be applied analogously to the reference patient (see Sec. 3.1). With an efficient regression method [Wil14], the breathing movement of virtual patient models that are based only on a comparatively low-dose 3D-CT acquisition can be plausibly approximated:

    φ̂^{new}(x, t) = a_1^{new}(x) + a_2^{new}(x)·v(t) + a_3^{new}(x)·v'(t).
Optionally, simulated surrogate signals can be used for the 4D animation of 3D CT data. Simple alternatives are to use the surrogate signal of the reference patient, or a (scaled) signal of the new patient, which can simply be recorded with a spirometric measuring device without any new image acquisition.
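Warping the reference patient's inter-phase deformations through the inter-patient transformation amounts to composing dense displacement fields. A minimal sketch of such a composition, assuming fields stored as voxel-grid displacement arrays (the helper name `compose_displacements` is an assumption, and computing an approximate inverse of the inter-patient field is omitted here):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose_displacements(u_outer, u_inner):
    """Compose two dense displacement fields (outer map applied after inner).

    u_outer, u_inner : (X, Y, Z, 3) displacements on a voxel grid.
    Returns the displacement of x -> x + u_inner(x) + u_outer(x + u_inner(x)),
    sampling u_outer at the warped positions by trilinear interpolation.
    """
    shape = u_inner.shape[:3]
    grid = np.stack(
        np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"), axis=-1
    )
    warped = grid + u_inner                       # positions x + u_inner(x)
    sampled = np.stack(
        [map_coordinates(u_outer[..., d], warped.transpose(3, 0, 1, 2),
                         order=1, mode="nearest")
         for d in range(3)],
        axis=-1,
    )
    return u_inner + sampled
```

Applying this twice (inverse inter-patient field, inter-phase field, inter-patient field, read right to left) yields the transferred per-phase deformation for the new patient, at the cost of interpolation error accumulating with each resampling.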

(a) Field-of-view difference between the reference patient (turquoise) and the target patient (yellow).
(b) Selected time points of the spirometry signal of the target patient (blue).
Figure 2: (a) Fields of view and (b) respiratory signals of the reference patient (gray dashed) vs. the target patient (blue).
(a) First time point.
(b) Second time point.
(c) Third time point.
(d) Fourth time point.
Figure 3: Field of view, respiratory signal and coronal views with overlaid motion field on the CT data of the reference patient (a-d). The color wheel legend below indicates the direction of the motion field.
(a) First time point.
(b) Second time point.
(c) Third time point.
(d) Fourth time point.
Figure 4: Coronal views with overlaid motion field on the CT data of the new patient (a-d), deformed with the transferred model of the reference patient. The color wheel legend below indicates the direction of the motion field.
(a) First time point.
(b) Second time point.
(c) Third time point.
(d) Fourth time point.
Figure 5: Upper-thorax coronal views of the animated CT data of the new patient (a-d), deformed with the transferred model of the reference patient. Rib artifacts are indicated by the yellow arrow in (c).

4 Experiments and Results

We performed a qualitative feasibility study; the results are animated in the 4D VR training simulator [For15].

For the 4D reference patient, a 4D-CT data set of the thorax and upper abdomen with 14 respiratory phases (512 × 512 × 462 voxels at 1 mm spacing) and a spirometry signal were used (Fig. 2). The new patient is represented only by a static 3D CT data set (512 × 512 × 318 voxels at 1 mm spacing).

All volume image data was reduced to a size of 256³ voxels due to the limited graphics memory of the GPU used (Nvidia GTX 680 with 3 GB RAM).

According to Eq. 3.1, we first perform the intra-patient inter-phase registrations to the chosen reference phase j_ref.

The registrations from Eqs. 3.1 and 3.2 use the weights α and β for their respective regularizers. In both registration processes, the phase with maximum inhalation is used as the reference respiratory phase and for the training of the breathing model.

The respiratory signal used for model training is shown in Fig. 2b (gray curve). We show the areas with plausible breathing simulation and use the unscaled respiratory signal of the new patient, with larger variance, to provoke artifacts (Fig. 2b, blue curve). The model training according to Eqs. 3.1 and 3.2 is very efficient using matrix computations.

We use manual expert segmentations of the liver and lungs, available for every phase of the 4D patient, mainly to assess the quality of the inter-patient registration in Eq. 3.2. Via the available inter-phase registrations (Eq. 3.1) to the 4D reference phase, we first warp the phase segmentation masks accordingly. After applying the inter-patient registration, we obtain the segmentation masks of the reference patient in the space of the targeted 3D patient, for whom a manual expert segmentation is also available for comparison. Quantitatively, the DICE coefficients of the transferred segmentation masks (liver, lungs) can be given to classify the quality of the registration chain of the reference respiratory phases (single-atlas approach). Qualitatively, we present sample images for four time instants and a movie.

The mean DICE coefficients of the single-atlas registration of the liver and lung masks to the new static patient yield satisfying values of 0.86 ± 0.12 and 0.96 ± 0.09, respectively. Note the clearly different scan ranges of the data sets (Fig. 2a). The animation of the relevant structures is shown as an example in Fig. 3, using a variable real breathing signal of the target patient (Fig. 2b). In the puncture-relevant liver region, the patient's breathing states are simulated plausibly for the 4D reference patient (Fig. 3) and, more importantly, for the 3D patient (Figs. 4, 5), to which the motion model of the reference patient was transferred (see the accompanying demo movie).
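The DICE overlap used above is straightforward to compute from binary masks. A minimal sketch, assuming boolean NumPy mask arrays (the helper name `dice` is ours, not from the paper):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice overlap 2|A ∩ B| / (|A| + |B|) of two binary masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    # Two empty masks overlap perfectly by convention.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

Averaging this score over the warped liver and lung masks of all reference phases gives the mean values reported above.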

5 Discussion, Outlook and Conclusion

For interested readers, the basic techniques for 4D breathing motion models were introduced by our group in [Ehr11]. However, there the motion model is restricted to the inside of the lungs and, by design, a mean motion model is built from several 4D patients. The mean motion model is artificial to some degree, more complex, and time-consuming to build. The method described here, the transfer of the retrospectively modeled respiratory motion of one 4D reference patient to a new 3D patient data set, is less complex and extends to a larger body area. It already allows the plausible animation of realistic respiratory movements in a 4D VR training simulator with visuo-haptic interaction. Of course, in the future we also want to build a mean motion model for the whole body section, including the (lower) lungs and the upper abdomen.

In other studies, we found the regularization weight used in Eq. 3.2 to be robust (a compromise between accuracy and smoothness) for inter-patient registration with large shape variations [Mas13, Mas16]. In Eq. 3.1, for intra-patient inter-phase registration, we use a smaller weight to allow more flexibility and thus more accuracy, as the shape variation between two phases of the same patient is much smaller [SR12].

We achieve qualitatively plausible results for the liver area in this feasibility study. In the upper thorax, especially at the rib cage neighboring the dark lungs, stronger artifacts can occur (Fig. 5c). They are due to problems in the inter-patient registration, which is a necessary step for the transfer of the motion model. The non-linear deformation is sometimes prone to misaligned ribs. The same is true for the lower thorax, with perforation first of the liver and then of the diaphragm. Further optimizations have to be carried out, as artifacts can appear at the high-contrast lung edges (diaphragm, ribs) with a small tidal volume. For liver punctures only, the artifacts of smeared ribs are minor, as can be seen in Fig. 4.

Summing up, the requirement from Sec. 2 of a dose-relevant or expensive 4D-CT acquisition for each patient can be mitigated for liver punctures by the presented transfer of an existing 4D breathing model.

Future work will deal with the better adaptation and simulation of the breathing signal. Further topics are the optimization of the inter-patient registration and the construction of alternatively selectable mean 4D reference breathing models. As in [For16], the authors plan to perform usability studies with medical practitioners.

To conclude, the method allows VR needle puncture training in the hepatic area of breathing virtual patients based on a low-risk and cheap 3D data acquisition for the new patient only. The requirement of a dose-relevant or expensive acquisition of a 4D CT data set for each new patient can be mitigated by the presented concept. Future work will include the reduction of artifacts and building mean reference motion models.

6 Acknowledgement

This work was supported by grant DFG HA 2355/11-2.


  • [Ehr11] Ehrhardt, J., Werner, R., Schmidt-Richberg, A., Handels, H. Statistical modeling of 4D respiratory lung motion using diffeomorphic image registration. IEEE Transactions on Medical Imaging, 30(2):251–265, September 2011.
  • [For12] Fortmeier, D., Mastmeyer, A., Handels, H. GPU-based visualization of deformable volumetric soft-tissue for real-time simulation of haptic needle insertion. German Conference on Medical Image Processing BVM - 2012: Algorithms - Systems - Applications. Proceedings from 18.-20. March 2012 in Berlin, pages 117–122, 2012.
  • [For13] Fortmeier, D., Mastmeyer, A., Handels, H. Image-based palpation simulation with soft tissue deformations using chainmail on the GPU. German Conference on Medical Image Processing - BVM 2013, pages 140–145, 2013.
  • [For14] Fortmeier, D., Mastmeyer, A., Handels, H. An image-based multiproxy palpation algorithm for patient-specific VR-simulation. Medicine Meets Virtual Reality 21, MMVR 2014, pages 107–113, 2014.
  • [For15] Fortmeier, D., Wilms, M., Mastmeyer, A., Handels, H. Direct visuo-haptic 4D volume rendering using respiratory motion models. IEEE Trans Haptics, 8(4):371–383, 2015.
  • [For16] Fortmeier, D., Mastmeyer, A., Schröder, J., Handels, H. A virtual reality system for PTCD simulation using direct visuo-haptic rendering of partially segmented image data. IEEE J Biomed Health Inform, 20(1):355–366, 2016.
  • [Mas13] Mastmeyer, A., Fortmeier, D., Maghsoudi, E., Simon, M., Handels, H. Patch-based label fusion using local confidence-measures and weak segmentations. Proc. SPIE Medical Imaging: Image Processing, pages 86691N–1–11, 2013.
  • [Mas14] Mastmeyer, A., Hecht, T., Fortmeier, D., Handels, H. Ray-casting based evaluation framework for haptic force-feedback during percutaneous transhepatic catheter drainage punctures. Int J Comput Assist Radiol Surg, 9:421–431, 2014.
  • [Mas16] Mastmeyer, A., Fortmeier, D., Handels, H. Efficient patient modeling for visuo-haptic VR simulation using a generic patient atlas. Comput Methods Programs Biomed, 132:161–175, 2016.
  • [Mas17] Mastmeyer, A., Fortmeier, D., Handels, H. Evaluation of direct haptic 4d volume rendering of partially segmented data for liver puncture simulation. Nature Scientific Reports, 7(1):671, 2017.
  • [Nic05] Nicolau, S., Pennec, X., Soler, L., Ayache, N. A complete augmented reality guidance system for liver punctures: First clinical evaluation. Medical Image Computing and Computer-Assisted Intervention–MICCAI 2005, pages 539–547, 2005.
  • [Rei06] Reitinger, B., Bornik, A., Beichel, R., Schmalstieg, D. Liver surgery planning using virtual reality. IEEE Computer Graphics and Applications, 26(6):36–47, 2006.
  • [Sep02] Seppenwoolde, Y., Shirato, H., Kitamura, K., Shimizu, S., van Herk, M., Lebesque, J. V., Miyasaka, K. Precise and real-time measurement of 3D tumor motion in lung due to breathing and heartbeat, measured during radiotherapy. Int J Radiation Oncology, Biology, Physics, 53(4):822–834, Jul 2002.
  • [SR12] Schmidt-Richberg, A., Werner, R., Handels, H., Ehrhardt, J. Estimation of slipping organ motion by registration with direction-dependent regularization. Medical Image Analysis, 16(1):150 – 159, 2012.
  • [Thi98] Thirion, J.-P. Image matching as a diffusion process: an analogy with Maxwell's demons. Medical Image Analysis, 2(3):243–260, 1998.
  • [Vil11] Villard, P., Boshier, P., Bello, F., Gould, D. Virtual reality simulation of liver biopsy with a respiratory component. Liver Biopsy, InTech, pages 315–334, 2011.
  • [Vil14] Villard, P., Vidal, F., Cenydd, L., Holbrey, R., Pisharody, S., Johnson, S., Bulpitt, A., John, N., Bello, F., Gould, D. Interventional radiology virtual simulator for liver biopsy. Int J Comput Assist Radiol Surg, 9(2):255–267, 2014.
  • [Wil14] Wilms, M., Werner, R., Ehrhardt, J., et al. Multivariate regression approaches for surrogate-based diffeomorphic estimation of respiratory motion in radiation therapy. Phys Med Biol, 59:1147–1164, 2014.