Prostate motion modelling using biomechanically-trained deep neural networks on unstructured nodes

07/09/2020 · by Shaheer U. Saeed, et al. · UCL

In this paper, we propose to train deep neural networks with biomechanical simulations to predict the prostate motion encountered during ultrasound-guided interventions. In this application, unstructured points are sampled from segmented pre-operative MR images to represent the anatomical regions of interest. The point sets are then assigned point-specific material properties and displacement loads, forming the un-ordered input feature vectors. An adapted PointNet can be trained to predict the nodal displacements, using finite element (FE) simulations as ground-truth data. Furthermore, a versatile bootstrap aggregating mechanism, comprising training-time bootstrap sampling and model-averaging inference, is validated to accommodate the variable number of feature vectors arising from different patient geometries. This results in a fast and accurate approximation to the FE solutions without requiring subject-specific solid meshing. Based on 160,000 nonlinear FE simulations of clinical imaging data from 320 patients, we demonstrate that the trained networks generalise to unstructured point sets sampled directly from holdout patient segmentations, yielding near real-time inference and an expected error of 0.017 mm in predicted nodal displacement.


1 Introduction and related work

Computational biomechanical modelling has applications in computer-aided intervention, surgical simulation and other medical image computing tasks [12, 11, 9, 8]. In particular, numerical approaches based on finite element (FE) analysis have been applied in a wide range of clinical tasks, such as modelling soft tissue deformation in augmented reality [9] and medical image registration [11, 4]. For example, during transrectal ultrasound (TRUS) guided prostate biopsy and focal therapy, FE-based biomechanical simulations have been proposed to predict physically plausible motion estimates for constraining the multimodal image registration [11, 12, 24].

Due to the highly nonlinear behaviour of soft tissues, complex anatomical geometry and boundary condition estimates, FE simulations often rely on iterative algorithms that are computationally demanding. Many developments have made modern FE solvers highly efficient, such as parallel computing algorithms on graphics processing units (GPUs) [22], simplified mechanical formulations [16, 6], or learned reduced-order solutions [23]. However, a real-time solution remains challenging: in one approach for the prostate imaging application, an average computation time of 16 s per simulation (excluding meshing) on a GPU was reported [11]. To meet surgical requirements in efficiency, many have turned to statistical learning approaches that summarise the FE simulations with a lower-dimensional approximation [13, 21, 15, 24].

Largely motivated by their representation capability and fast inference, deep neural networks have been used to reduce computation time in biomechanical modelling problems. Meister et al. proposed neural networks to approximate the time integration, allowing much larger time steps to accelerate the iterative optimisation [18]. Mendizabal et al. presented a methodology for estimating FE-simulated deformations using a U-Net variant, efficiently predicting a deformation field on regularly sampled grid nodes [19]. Their U-Mesh approximated deformations of the human liver under an applied force, with a reported mean absolute error of 0.22 mm and a prediction time of 3 ms [19]. The model requires, as input, point clouds derived from tetrahedral meshes mapped to a sparse hexahedral grid, and assumes uniform material properties throughout the deformable region. Liang et al. used deep learning to estimate stress distributions, learning a mapping from a geometry encoding to the stress distribution [17]. Trained on FE simulation data, their model predicted stress distributions in the aortic wall given an aorta geometry [17]; the constitutive properties were again assumed invariant throughout the entire geometry [17].

In this work, we adapt a PointNet [20] with a bootstrap aggregating sampling strategy, to model biomechanics on a set of input feature vectors that represent patient-specific anatomy with node-wise boundary conditions and material properties. PointNet has been applied to a wide range of learning tasks with un-ordered point clouds as input, such as classification and segmentation. The use of unstructured data to represent the geometry potentially alleviates the need for meshing or for shape encoding via principal component analysis, while the incorporation of material properties within the proposed input feature vectors accommodates more realistic, inhomogeneous biological materials. We integrated these changes into the proposed PointNet-based neural network training for deformation prediction tasks.

We summarise the contributions of this work: 1) the proposed network provides a permutation-invariant deep network architecture over the unstructured input feature vectors, additionally allowing flexible point set sampling schemes without requiring a pre-defined number of points or spatially regular nodal locations; 2) out-of-nodal-sample generalisation is investigated, based on input feature vectors sampled directly from segmentations of holdout patient data; and 3) the efficiency and accuracy of the proposed method are validated using a large holdout patient data set (30,000 simulations from 60 patients) in a real clinical application, predicting TRUS probe induced deformations of the prostate gland and the surrounding regions.

2 Methods

In Sections 2.1-2.3, a deep neural network and its training strategy are described to learn a high-dimensional nonlinear mapping between a set of input feature vectors representing the geometry, applied loads and material properties, and a set of displacements on the input nodal locations.

2.1 Unstructured input feature vectors

Without loss of generality, let $\{f_n\}_{n=1,\ldots,N}$ be a set of un-ordered feature vectors $f_n = [x_n^\top, b_n^\top, m_n^\top]^\top \in \mathbb{R}^9$, where $x_n \in \mathbb{R}^3$ are the point coordinates; $b_n \in \mathbb{R}^4$ represent the externally applied loads with known boundary conditions; and $m_n \in \mathbb{R}^2$ specify the parameter values of the material property models.

For the prostate motion modelling application, $x_n$ contain the 3D Euclidean coordinates of the sampled points, and $m_n$ contain the shear and bulk moduli of an isotropic, elastic neo-Hookean material model [25]. Nodes with available displacement loads are assigned vectors $b_n = [1, d_n^\top]^\top$, where the first element indicates the availability of the assigned displacement $d_n \in \mathbb{R}^3$ for the node, while nodes without loads are assigned $b_n = [0, 0, 0, 0]^\top$. This representation can readily be generalised to dimension-specific loads by adding more 'switches', i.e. further leading elements in the $b_n$ vectors, and to other types of boundary conditions such as force.
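To make the layout concrete, the node-wise feature assembly described above can be sketched as follows. The function and array names are hypothetical, not from the authors' code; the column order (coordinates, load 'switch', displacement load, then shear and bulk moduli) follows the 9-element description in this section.

```python
import numpy as np

def assemble_features(coords, disp_loads, has_load, shear, bulk):
    """Assemble the 9-D per-node input feature vectors (illustrative sketch).

    coords: (N, 3) point coordinates x_n
    disp_loads: (N, 3) assigned displacement loads d_n (ignored where unavailable)
    has_load: (N,) bool, True where a displacement load is assigned
    shear, bulk: (N,) per-node neo-Hookean material parameters m_n
    Returns an (N, 9) array of feature vectors f_n = [x_n, b_n, m_n].
    """
    switch = has_load.astype(float)[:, None]       # 1 if displacement is known
    loads = np.where(switch > 0, disp_loads, 0.0)  # zero out unavailable loads
    mats = np.stack([shear, bulk], axis=1)         # point-specific moduli
    return np.concatenate([coords, switch, loads, mats], axis=1)
```

A node without an assigned load thus carries $b_n = [0, 0, 0, 0]^\top$ in columns 3 to 6.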

Figure 1: Point cloud of prostate (red) with data flow (blue arrows) and error distribution of deformation prediction due to simulated TRUS probe (green sphere) movement.
Figure 2: Adapted PointNet architecture used for the displacement prediction task. Solid lines show data flow, blue boxes show data size and dashed lines show expanded views.

2.2 Permutation-invariant nodal displacement prediction

The PointNet is adapted to predict displacement vectors from the input feature vectors $f_n$, as illustrated in Fig. 1. The adapted PointNet architecture is illustrated in Fig. 2, generalising the first transformation net T1-net to 9D space instead of the original 3D space. Readers are referred to [20] for other architectural details.

In this work, ground-truth displacements $u_n$ are computed from finite element simulations (see Sections 3.1 and 3.2). The mean squared error is minimised as the loss function to optimise the network weights: $L = \frac{1}{N} \sum_{n=1}^{N} \lVert \hat{u}_n - u_n \rVert_2^2$, where $\hat{u}_n$ are the network-predicted nodal displacements.
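As a minimal illustration of the permutation behaviour (not the actual adapted PointNet, which additionally uses transformation nets and much larger shared MLPs), the following numpy sketch shows the core pattern: a shared per-point transform, a symmetric max-pooling producing a global feature, and a per-point head, trained with a mean squared error loss. All weights and layer sizes here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((9, 16))   # shared per-point MLP (illustrative size)
W2 = rng.standard_normal((32, 3))   # per-point output head

def predict(F):
    """F: (N, 9) feature vectors -> (N, 3) predicted displacements."""
    h = np.maximum(F @ W1, 0.0)     # shared transform applied point-wise
    g = h.max(axis=0)               # symmetric function: global feature
    hg = np.concatenate([h, np.tile(g, (len(F), 1))], axis=1)  # local + global
    return hg @ W2

def mse_loss(pred, target):
    """Mean squared error over the N predicted nodal displacements."""
    return np.mean(np.sum((pred - target) ** 2, axis=1))
```

Because the global feature is built with a symmetric max, permuting the input points simply permutes the per-point outputs, which is the property the adapted architecture relies on.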

2.3 Training-time bootstrap aggregating

Although the adapted PointNet in theory accepts variable numbers of feature vectors as input during training and inference, an implementation with a fixed number of input vectors is much more efficient [20] with modern GPU parallel computing support. Reliably and efficiently mapping the often irregularly-shaped input to regular space, such as a cubic grid or a fixed number of points, remains an interesting challenge.

We propose an alternative bootstrap aggregating training strategy that randomly samples, with replacement, the input feature vectors in each optimisation iteration. During inference, the final prediction is computed by averaging the predictions from a number of forward-passes of the trained network, which together cover all the input feature vectors. We note that the expected back-propagated gradient remains the same under the proposed sampling-and-averaging scheme as when training with all the input feature vectors. The scheme provides a flexible mechanism for training and prediction with patient anatomy represented by different point sets, without restriction on the number of points sampled from varying patient geometry. Bootstrap aggregating, also known as bagging, is a model averaging strategy that may also improve the generalisability of the trained network [10].
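A minimal sketch of this scheme is given below, assuming a generic `predict_fn` standing in for the trained network; the function names, the fixed sample size `k` and the coverage bookkeeping are illustrative assumptions, not taken from the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_batch(features, targets, k):
    """Training time: sample k feature vectors with replacement per iteration."""
    idx = rng.integers(0, len(features), size=k)
    return features[idx], targets[idx]

def bagged_inference(predict_fn, features, k, n_passes):
    """Inference: average predictions over forward passes whose samples
    together cover all input points (each pass draws k distinct points)."""
    n = len(features)
    sums = np.zeros((n, 3))
    counts = np.zeros(n)
    for _ in range(n_passes):
        idx = rng.permutation(n)[:k]        # distinct points within a pass
        sums[idx] += predict_fn(features[idx])
        counts[idx] += 1
    covered = counts > 0
    sums[covered] /= counts[covered, None]  # model-averaged prediction
    return sums, covered
```

For a deterministic per-point predictor, the averaged output recovers the per-point prediction exactly, consistent with the unchanged expected gradient noted above.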

In this work, the tetrahedron mesh nodes are used to train the network (Section 3.2), after solid meshing of the patient geometry. All mesh nodes have an equal probability of being sampled in each training iteration, resulting in a relatively sparse representation of the patient geometry during the bootstrap aggregating training. The same sampling and model-averaging strategy is also applicable to input feature vectors obtained with different, and potentially simpler, point sampling schemes than finite element mesh nodes; one such example is validated in this study and described in Section 3.4.

3 Experiments

All patients were randomly sampled into three groups: training, validation and holdout sets. For all experiments, we compute the mean absolute error (MAE), and its standard deviation (St.D.), between FE simulation results and network predictions. Specific to this application, accurate displacement prediction on the prostate gland and its zonal structures is important for guiding both targeted biopsy and focal therapy procedures; nodal displacement errors are therefore also computed for each of the gland regions, represented by the central zone (CZ) and the whole gland (WG) with transition and peripheral zones. Additionally, the first, second and third quartiles (Q1, Q2 and Q3, respectively) are reported over all nodes. Paired t-test results are reported for the holdout set experiments.

3.1 Data acquisition and finite element simulations

T2-weighted MR images were acquired from 320 prostate cancer patients who underwent TRUS-guided transperineal targeted biopsy or focal therapy. After the prostate glands and surrounding structures were manually segmented, tetrahedron meshes covering the patient abdominal region were generated. In each simulation, two types of boundary conditions were applied: zero displacement on the pelvic bones and sampled nodal displacements on the simulated TRUS probe. Sampled material properties were assigned to elements in the different anatomical regions and zonal structures. For each patient, 500 finite element (FE) simulations were performed using NiftySim [14], resulting in 160,000 sets of simulated motion data, each predicting one plausible prostate motion due to ultrasound probe movement, acoustic balloon dilation, bony constraints and varying material properties. Averaged over all simulations, the maximum displacement in the ground-truth data is 5.21 mm, and the mean displacement over the entire point cloud is 0.83 mm. Further details of the finite element modelling can be found in [11, 12]; similar FE simulation strategies have been widely used and validated in the same applications [4, 2, 7, 1, 24].

3.2 Network training and hyperparameter tuning

The point cloud node locations, boundary conditions and material properties used in the FE simulations were assembled into the input feature vectors described in Section 2.1, as the PointNet input. Batch normalisation was used at each hidden layer, with additional dropout regularisation at a drop rate of 0.25 on the final fully connected layer. The Adam optimiser was used to minimise the mean squared error loss, with an initial learning rate of 0.001 and a minibatch size of 32. The number of input feature vectors sampled for training was 14,500. Based on preliminary results on the validation set, varying these hyperparameters had limited impact on network performance. Conversely, the global feature vector (GFV) size may impact performance significantly [20], as it correlates linearly with the number of trainable parameters, and affected model training and inference time in our validation experiments. Therefore, we report network performance with different GFV sizes on the validation set. All other hyperparameters were configured empirically and remained fixed throughout the validation and holdout experiments presented in this paper.

The networks were trained using 100,000 FE simulations from 200 patients, while validation and holdout sets each comprised data from 60 patients (30,000 simulations). Each network took approximately 32 hours to train on two Nvidia Tesla V100 GPUs.

3.3 Sensitivity analysis on material properties

Several previous studies have trained deep neural networks to predict finite element solutions with uniform material properties in the regions of interest [19, 5, 17]. This is, in principle, unrealistic for human anatomy; e.g. the prostate peripheral, transition and central zones are generally accepted to have differing material properties. The importance of material heterogeneity depends on the application, though Hu et al.'s results suggest it is significant for prostate motion prediction [11]. The proposed network readily includes material properties in the input feature vectors. We therefore investigated the difference between networks trained with and without region-specific material properties, by treating region-specificity as a hyperparameter. Networks with homogeneous materials were correspondingly trained with reduced input feature vectors which excluded the material parameters, i.e. $f_n = [x_n^\top, b_n^\top]^\top \in \mathbb{R}^7$.

3.4 Generalisation to points sampled directly from segmentation

To demonstrate the generalisability of the network to alternative point sampling methods, we re-sampled all the point locations in the holdout set using a well-established region-wise cuboid tessellation approach. Each anatomical region was represented by cuboid tessellations computed from segmentation surface points, without using the solid tetrahedral mesh; points were then randomly generated from each tessellation. For training and validation purposes, material properties were assigned based on the tessellation to which each node belongs, while BCs were interpolated using an inverse Euclidean-distance weighting of the five nearest neighbouring nodes. It is noteworthy that, in general, the number and locations of the re-sampled points differ from those of the FE mesh nodes, yet the bootstrap aggregating described in Sect. 2.3 still applies to the new input feature vectors.
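The inverse Euclidean-distance interpolation of boundary conditions from the five nearest mesh nodes could be sketched as below; the function name and array layout are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def interpolate_bcs(mesh_nodes, mesh_disps, query_pts, k=5):
    """Interpolate displacement BCs at query points from the k nearest
    FE mesh nodes, weighted by inverse Euclidean distance (sketch).

    mesh_nodes: (M, 3) FE mesh node coordinates
    mesh_disps: (M, 3) displacement BCs at those nodes
    query_pts: (Q, 3) re-sampled point locations
    Returns a (Q, 3) array of interpolated displacements.
    """
    out = np.empty((len(query_pts), 3))
    for i, p in enumerate(query_pts):
        d = np.linalg.norm(mesh_nodes - p, axis=1)
        nn = np.argsort(d)[:k]                 # k nearest mesh nodes
        w = 1.0 / np.maximum(d[nn], 1e-12)     # inverse-distance weights
        out[i] = (w[:, None] * mesh_disps[nn]).sum(axis=0) / w.sum()
    return out
```

For larger point sets, the brute-force neighbour search would typically be replaced by a spatial index such as a k-d tree.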

4 Results

Global feature vector size and material properties

Summarised in Table 1, results on the validation set suggest that increasing the GFV size, up to a maximum of 1024, reduced error rates. However, further increasing the GFV size from 1024 to 2048 increased the mean error from 0.010±0.011 to 0.013±0.015. This small yet significant increase (p-value < 1e-3) may suggest overfitting due to the larger network size. Therefore, a GFV size of 1024 was used in the subsequent holdout experiments. Also shown in Table 1, excluding material properties from the network inputs significantly increased nodal displacement errors, from 0.010±0.011 to 0.027±0.029 (p-value < 1e-3). Material parameters were therefore retained in the input feature vectors for the reported holdout-set results.

Network performance on holdout set

As summarised in Table 1, the overall MAEs on the holdout set were 0.010±0.012 mm and 0.017±0.015 mm, for the FE tetrahedral mesh nodes and the tessellation-sampled points, respectively, with a significant difference between the two (p-value < 1e-3). However, when these network-predicted displacements were compared with the nodal displacements produced by the FE simulations, no significance was found, with p-values of 0.093 and 0.081, respectively. The error distributions were skewed towards zero, as can be observed from the median and quartile values reported in Table 1. Computed over all the cases in the holdout set, the average MAEs were 0.34 mm and 0.48 mm, for the points sampled from FE mesh nodes and the points sampled using tessellation, respectively. We also report an inference time of 520 ms when predicting displacements for approximately 14,500 nodes, using a single Nvidia GeForce 1050Ti GPU.

                         All points/nodes                        CZ            WG
                         MAE±St.D.    Q1     Q2     Q3           MAE±St.D.     MAE±St.D.
GFV size (validation set)
  256                    0.113±0.091  0.043  0.094  0.165        0.107±0.080   0.110±0.094
  512                    0.065±0.047  0.002  0.031  0.086        0.058±0.052   0.063±0.049
  1024                   0.010±0.011  0.000  0.005  0.013        0.008±0.013   0.009±0.012
  2048                   0.013±0.015  0.000  0.006  0.016        0.009±0.011   0.009±0.016
Input feature vectors (validation set)
  With material          0.010±0.011  0.000  0.005  0.013        0.008±0.013   0.009±0.012
  Without material       0.027±0.029  0.000  0.009  0.031        0.023±0.021   0.029±0.020
Sampling strategy (holdout set)
  Tetrahedral mesh       0.010±0.012  0.000  0.007  0.018        0.009±0.013   0.010±0.012
  Tessellation           0.017±0.015  0.000  0.009  0.019        0.014±0.020   0.017±0.021
Table 1: Results from the different experiments. All results presented in mm.

5 Discussion

Based on the statistical test results on the holdout set, reported in Section 4, we conclude that the models presented in this study can predict highly accurate nodal displacements for new patients in our application. This provides an efficient alternative to FE simulations in time-critical tasks, where additional computation can be offloaded to the model training phase, such as surgical simulation [3] and surgical augmented reality [9], in which computation times during deformation prediction must be curtailed. For the MR-to-TRUS registration application described in Section 1, rather than replacing the FE simulations used to construct a patient-specific motion model for intervention planning, we conjecture the possibility of optimising the deformation prediction directly during the procedure, enabled by the highly efficient, non-iterative forward network evaluation.

An important contribution of this work is to provide evidence that the proposed network can generalise to input feature vectors sampled by methods other than the FE mesh, a simple and versatile tessellation method in this case. This is an important avenue for further investigation, especially for point locations integrated with features such as other types of BCs, material parameters and other physiological measurements of clinical interest.

The computational cost of data generation and model training is significant, which can hinder the deployment of deep learning models in practice. Augmentation can alleviate part of this problem by presenting augmented examples to the network during training; re-sampling, such as regional tessellation, is a viable candidate, although this has yet to be investigated. More interestingly, the model's generalisability to cases other than probe-induced deformation prediction has not been investigated, presenting an opportunity for further research into training strategies such as transfer learning and domain adaptation, which may substantially reduce the computational cost.

6 Conclusions

We have presented a PointNet-based deep neural network that learns biomechanical modelling from FE simulations, for the case of TRUS probe induced deformation of the prostate gland and surrounding regions. The adapted PointNet architecture accepts randomly sampled, unstructured point cloud data representing varying FE loads, patient anatomy and point-specific material properties. The presented approach can approximate FE simulations with a mean absolute error of 0.017 mm and sub-second inference. The method generalises to new patients and to randomly sampled 3D point clouds without requiring quality solid meshing. The proposed methodology is applicable to a wide range of time-critical tasks, and we have demonstrated its accuracy and efficiency with clinical data in a well-established prostate intervention application.

Acknowledgments

This work is supported by the Wellcome/EPSRC Centre for Interventional and Surgical Sciences (203145Z/16/Z). ZAT acknowledges funding from CRUK RadNet Leeds Centre of Excellence (C19942/A28832).

References

  • [1] du Bois d’Aische, A., Craene, M., Haker, S., Weisenfeld, N., Tempany, C., Macq, B., Warfield, S.: Improved non-rigid registration of prostate mri. In: Barillot, C., Haynor, D., Hellier, P. (eds.) Medical Image Computing and Computer-Assisted Intervention – MICCAI 2004. Lecture Notes in Computer Science, vol. 3216. Springer, Berlin, Heidelberg (2004)
  • [2] Alterovitz, R., Goldberg, K., Pouliot, J., Hsu, I., Kim, Y., Noworolski, S., Kurhanewicz, J.: Registration of mr prostate images with biomechanical modeling and nonlinear parameter estimation. Medical Physics 33(2), 446–454 (2006)
  • [3] Berkley, J., Turkiyyah, G., Berg, D., Ganter, M., Weghorst, S.: Real-time finite element modeling for surgery simulation: an application to virtual suturing. IEEE Transactions on Visualization and Computer Graphics 10(3), 314–325 (2004)
  • [4] Bharatha, A., Hirose, M., Hata, N., Warfield, S., Ferrant, M., Zou, K., Suarez-Santana, E., Ruiz-Alzola, J., D’Amico, A., Cormack, R., Kikinis, R., Jolesz, F., Tempany, C.: Evaluation of three-dimensional finite element-based deformable registration of pre- and intraoperative prostate imaging. Medical Physics 28(12), 2551–2560 (2001)
  • [5] Brunet, J., Mendizabal, A., Petit, A., Golse, N., Vibert, E., Cotin, S.: Physics-based deep neural network for augmented reality during liver surgery. In: et al, S.D. (ed.) Medical Image Computing and Computer-Assisted Intervention – MICCAI 2019. Lecture Notes in Computer Science, vol. 11768. Springer, Cham (2019)
  • [6] Cotin, S., Delingette, H., Ayache, N.: Real-time elastic deformations of soft tissues for surgery simulation. IEEE Transactions on Visualization and Computer Graphics 5(1), 62–73 (1999)
  • [7] Crouch, J., Pizer, S., Chaney, E., Hu, Y., Mageras, G., Zaider, M.: Automated finite-element analysis for deformable registration of prostate images. IEEE transactions on medical imaging 26(10), 1379–1390 (2007)
  • [8] Erhart, P., Hyhlik-Dürr, A., Geisbüsch, P., Kotelis, D., Müller-Eschner, M., Gasser, T., von Tengg-Kobligk, H., Böckler, D.: Finite element analysis in asymptomatic, symptomatic, and ruptured abdominal aortic aneurysms: in search of new rupture risk predictors. European Journal of Vascular and Endovascular Surgery 49(3), 239–245 (Mar 2014). https://doi.org/10.1016/j.ejvs.2014.11.010
  • [9] Haouchine, N., Dequidt, J., Berger, M., Cotin, S.: Deformation-based augmented reality for hepatic surgery. Studies in Health Technology and Informatics 184, 182–188 (2013)
  • [10] Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning. Springer series in statistics, Springer-Verlag New York, 12 edn. (2001)
  • [11] Hu, Y., Ahmed, H., Taylor, Z., Allen, C., Emberton, M., Hawkes, D., Barratt, D.: Mr to ultrasound registration for image-guided prostate interventions. Medical Image Analysis 16(3), 687–703 (2012)
  • [12] Hu, Y., Carter, T., Ahmed, H., Emberton, M., Allen, C., Hawkes, D., Barratt, D.: Modelling prostate motion for data fusion during image-guided interventions. IEEE Transactions on Medical Imaging 30(11), 1887–1900 (Nov 2011)
  • [13] Hu, Y., Morgan, D., Ahmed, H., Pendsé, D., Sahu, M., Allen, C., Emberton, M., Hawkes, D., Barratt, D.: A statistical motion model based on biomechanical simulations for data fusion during image-guided prostate interventions. In: Metaxas, D., Axel, L., Fichtinger, G., Székely, G. (eds.) Medical Image Computing and Computer-Assisted Intervention – MICCAI 2008. Lecture Notes in Computer Science, vol. 5241. Springer, Berlin, Heidelberg (2008)
  • [14] Johnsen, S., Taylor, Z., Clarkson, M., Hipwell, J., Modat, M., Eiben, B., Han, L., Hu, Y., Mertzanidou, T., Hawkes, D., Ourselin, S.: Niftysim: A gpu-based nonlinear finite element package for simulation of soft tissue biomechanics. International journal of computer assisted radiology and surgery 10(7), 1077–1095 (2015)
  • [15] Khallaghi, S., Sanchez, C., Rasoulian, A., Nouranian, S., Romagnoli, C., Abdi, H., Chang, S., Black, P., Goldenberg, L., Morris, W., Spadinger, I., Fenster, A., Ward, A., Fels, S., Abolmaesumi, P.: Statistical biomechanical surface registration: application to mr-trus fusion for prostate interventions. IEEE transactions on medical imaging 34(12), 2535–2549 (2015)
  • [16] Lee, B., Popescu, D., Joshi, B., Ourselin, S.: Efficient topology modification and deformation for finite element models using condensation. Studies in Health Technology and Informatics 119, 299–304 (2006)
  • [17] Liang, L., Liu, M., Martin, C., Sun, W.: A deep learning approach to estimate stress distribution: A fast and accurate surrogate of finite-element analysis. Journal of The Royal Society Interface (2018). https://doi.org/10.1098/rsif.2017.0844
  • [18] Meister, F., Passerini, T., Mihalef, V., Tuysuzoglu, A., Maier, A., Mansi, T.: Deep learning acceleration of total lagrangian explicit dynamics for soft tissue mechanics. Computer Methods in Applied Mechanics and Engineering 358 (2020)
  • [19] Mendizabal, A., Márquez-Neila, P., Cotin, S.: Simulation of hyperelastic materials in real-time using deep learning. Medical Image Analysis 59, 101569 (2020). https://doi.org/10.1016/j.media.2019.101569, hal-02097119v3
  • [20] Qi, C., Su, H., Mo, K., Guibas, L.: Pointnet: Deep learning on point sets for 3d classification and segmentation (2016), arXiv:1612.00593v2
  • [21] Saito, A., Nakada, M., Oost, E., Shimizu, A., Watanabe, H., Nawano, S.: A statistical shape model for multiple organs based on synthesized-based learning. In: Yoshida, H., Warfield, S., Vannier, M. (eds.) Abdominal Imaging. Computation and Clinical Applications. ABD-MICCAI 2013. Lecture Notes in Computer Science, vol. 8198. Springer, Berlin, Heidelberg (2013)
  • [22] Taylor, Z., Cheng, M., Ourselin, S.: High-speed nonlinear finite element analysis for surgical simulation using graphics processing units. IEEE transactions on medical imaging 27(5), 650–663 (2008)
  • [23] Taylor, Z., Crozier, S., Ourselin, S.: A reduced order explicit dynamic finite element algorithm for surgical simulation. IEEE transactions on medical imaging 30(9), 1713–1721 (2011)
  • [24] Wang, Y., Cheng, J., Ni, D., Lin, M., Qin, J., Luo, X., Xu, M., Xie, X., Heng, P.: Towards personalized statistical deformable model and hybrid point matching for robust mr-trus registration. IEEE transactions on medical imaging 35(2), 589–684 (2015)
  • [25] Zienkiewicz, O., Taylor, R.: The Finite Element Method. Oxford:Butterworth-Heinemann, The Netherlands (2000)