Image Quality Assessment for Rigid Motion Compensation

10/09/2019
by   Alexander Preuhs, et al.
FAU

Diagnostic stroke imaging with C-arm cone-beam computed tomography (CBCT) enables reduction of time-to-therapy for endovascular procedures. However, the prolonged acquisition time compared to helical CT increases the likelihood of rigid patient motion. Rigid motion corrupts the geometry alignment assumed during reconstruction, resulting in image blurring or streaking artifacts. To reestablish the geometry, we estimate the motion trajectory by an autofocus method guided by a neural network, which was trained to regress the reprojection error, based on the image information of a reconstructed slice. The network was trained with CBCT scans from 19 patients and evaluated using an additional test patient. It adapts well to unseen motion amplitudes and achieves superior results in a motion estimation benchmark compared to the commonly used entropy-based method.


1 Introduction

Mechanical thrombectomy Powers et al. (2015); Berkhemer et al. (2015) is guided by an interventional C-arm system capable of 3-D imaging. Although its soft tissue contrast is comparable to that of helical CT, the prolonged acquisition time makes C-arm CBCT more susceptible to rigid head motion artifacts Leyhe et al. (2017). In the clinical workflow, it is desirable to reduce the time-to-therapy by avoiding a prior patient transfer to a helical CT or MR scanner Psychogios et al. (2017). To this end, robust motion compensation methods are needed.

Methods for rigid motion compensation can be clustered into four categories: 1) image-based autofocus Sisniega et al. (2017); Wicklein et al. (2012), 2) registration-based Ouadah et al. (2016), 3) consistency-based Frysch and Rose (2015); Preuhs et al. (2018, 2019), and 4) data-driven Bier et al. (2018); Latif et al. (2018); Küstner et al. (2019).

Recent data-driven approaches use image-to-image translation methods based on GANs Latif et al. (2018); Küstner et al. (2019) or aim to estimate anatomical landmarks in order to minimize a reprojection error (RPE) Bier et al. (2018). The latter approach does not provide the required accuracy, whereas GAN-based approaches are problematic for clinical applications, as data integrity cannot be assured Huang et al. (2019).

We propose a learning-based approach for rigid motion compensation ensuring data integrity. An image-based autofocus method is introduced, where a regression network predicts the RPE directly from reconstructed slice images. The motion parameters are found by iteratively minimizing the predicted RPE using the Nelder-Mead simplex method Olsson and Nelson (1975).

2 Motion Estimation and Compensation Framework

Autofocus Framework:

Rigid motion is compensated by estimating a motion trajectory $\mathcal{M}$ which samples the motion at each of the $N$ acquired views within the trajectory Kim et al. (2014). The trajectory $\mathcal{M} = (\boldsymbol{M}_1, \dots, \boldsymbol{M}_N)$ contains the motion matrices $\boldsymbol{M}_j$, where each motion matrix $\boldsymbol{M}_j \in \mathbb{SE}(3)$, with $\mathbb{SE}(3)$ being the special Euclidean group, describes the patient movement at view $j$. The motion matrices can be incorporated into the backprojection operator of a filtered backprojection-type (FBP) reconstruction algorithm. We denote the reconstructed image in dependence of the motion trajectory by $\mathrm{FBP}(\boldsymbol{p}, \mathcal{M})$, where $\mathrm{FBP}$ is the FDK reconstruction Feldkamp et al. (1984) from projection data $\boldsymbol{p}$. In the following, $\mathrm{FBP}$ reconstructs the central slice on a pixel grid using a sharp filter kernel to emphasize motion artifacts.
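As a minimal sketch of how the motion matrices enter the geometry (assuming per-view 3x4 projection matrices and homogeneous 4x4 motion matrices; the FBP/FDK operator itself is treated as a given library routine), the motion-aware reconstruction simply composes each projection matrix with the rigid motion of its view:

```python
import numpy as np

def apply_motion_to_geometry(proj_mats, motion_mats):
    """Compose each 3x4 projection matrix with the rigid motion of its view.

    proj_mats   : (N, 3, 4) calibrated projection matrices
    motion_mats : (N, 4, 4) homogeneous rigid motions in SE(3)
    returns     : (N, 3, 4) motion-adapted projection matrices P_j @ M_j
    """
    return np.einsum('nij,njk->nik', proj_mats, motion_mats)
```

A standard FDK backprojector fed with these composed matrices then realizes $\mathrm{FBP}(\boldsymbol{p}, \mathcal{M})$.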

Typical autofocus frameworks (cf. Sisniega et al. (2017)) estimate the motion trajectory based on an image quality metric (IQM) evaluated on the reconstructed image by minimizing

$$\hat{\mathcal{M}} = \operatorname*{arg\,min}_{\mathcal{M}} \; \mathrm{IQM}\big(\mathrm{FBP}(\boldsymbol{p}, \mathcal{M})\big). \tag{1}$$

A common problem in solving (1) is the non-convexity of the IQM, which is typically chosen to be the image histogram entropy or the total variation of the reconstructed slice. To overcome this limitation, we propose to replace the IQM with a network architecture trained to regress the RPE, which has been shown to be quasi-convex for geometric reconstruction problems Ke and Kanade (2007).
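A minimal sketch of the resulting autofocus loop, assuming a hypothetical `reconstruct` callable (which expands spline node parameters to per-view rigid motions, composes them with the projection matrices, and runs the FBP) and the trained regression network `rpe_net`; the Nelder-Mead stopping criteria are placeholders, not the settings used in the paper:

```python
import numpy as np
from scipy.optimize import minimize

def autofocus(reconstruct, rpe_net, n_nodes=15):
    """Estimate spline-parameterized motion by minimizing the network-predicted
    RPE, i.e., Eq. (1) with the learned metric in place of a hand-crafted IQM."""
    def cost(node_params):
        # Reconstruct the central slice under the current motion hypothesis and
        # let the network score it (lower predicted RPE = less motion corruption).
        return float(rpe_net(reconstruct(node_params)))

    x0 = np.zeros(n_nodes)  # start from the motion-free assumption
    result = minimize(cost, x0, method='Nelder-Mead',
                      options={'xatol': 1e-3, 'fatol': 1e-3, 'maxiter': 500})
    return result.x  # estimated spline nodes of the motion trajectory
```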

Learning to Assess Image Quality:

Let $\{\boldsymbol{x}_i\}_{i=1}^{K}$ be a set of 3-D points uniformly sampled from a sphere surface, and let the acquisition trajectory associated to a dataset be defined by projection matrices $\boldsymbol{P}_j$ mapping world points onto the detector of a CBCT system at view $j$ Hartley and Zisserman (2003). The RPE is then computed as

$$\mathrm{RPE}(\mathcal{M}) = \frac{1}{NK} \sum_{j=1}^{N} \sum_{i=1}^{K} \big\lVert \pi(\boldsymbol{P}_j \boldsymbol{x}_i) - \pi(\boldsymbol{P}_j \boldsymbol{M}_j \boldsymbol{x}_i) \big\rVert_2, \tag{2}$$

where $\pi(\cdot)$ denotes the dehomogenization of a projected point to 2-D detector coordinates.
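A NumPy sketch of Eq. (2), assuming homogeneous 3-D points and per-view projection and motion matrices as above; the averaging over views and points follows the reconstruction of Eq. (2) given here:

```python
import numpy as np

def reprojection_error(proj_mats, motion_mats, points_3d):
    """Mean reprojection error between the ideal and the motion-affected geometry.

    proj_mats   : (N, 3, 4) projection matrices
    motion_mats : (N, 4, 4) rigid motion matrices
    points_3d   : (K, 4) homogeneous 3-D points sampled from a sphere surface
    """
    def project(P, X):
        x = X @ P.T                  # (K, 3) homogeneous detector points
        return x[:, :2] / x[:, 2:3]  # dehomogenize to 2-D pixel coordinates

    errors = []
    for P, M in zip(proj_mats, motion_mats):
        errors.append(np.linalg.norm(project(P, points_3d)
                                     - project(P @ M, points_3d), axis=1))
    return float(np.mean(errors))
```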

This metric measures the reconstruction-relevant geometric deviations induced by motion Strobel et al. (2003) and can thus be expected to be predictable directly from the reconstructed images. To this end, we devise a regression network $f$ that learns the RPE directly from a reconstructed image. The network consists of a feature extraction stage, pretrained on ImageNet and realized by the first 33 layers of a residual network He et al. (2016), followed by a densely connected layer defining the regression stage. The cost function of the network is the difference between the network-predicted RPE of a reconstructed slice with simulated motion trajectory $\tilde{\mathcal{M}}$ and the corresponding RPE as defined by Eq. (2):

$$\mathcal{L}(\tilde{\mathcal{M}}) = \big\lVert f\big(\mathrm{FBP}(\boldsymbol{p}, \tilde{\mathcal{M}})\big) - \mathrm{RPE}(\tilde{\mathcal{M}}) \big\rVert. \tag{3}$$
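A sketch of such a regressor in PyTorch; the exact truncation to the first 33 residual layers is not reproduced here, a full ImageNet-pretrained ResNet-50 backbone up to global pooling serves as a stand-in, and the squared-difference loss is one plausible reading of Eq. (3):

```python
import torch
import torch.nn as nn
from torchvision import models

class RPERegressor(nn.Module):
    """ImageNet-pretrained ResNet feature extractor with a single
    fully connected layer regressing the scalar RPE."""

    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        # Everything up to (and including) global average pooling extracts features.
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.regressor = nn.Linear(backbone.fc.in_features, 1)

    def forward(self, x):
        # x: reconstructed slice replicated to 3 channels, shape (B, 3, H, W)
        return self.regressor(self.features(x).flatten(1))

model = RPERegressor()
criterion = nn.MSELoss()  # squared difference between predicted and true RPE
```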

For training, the projection data is ensured to be motion free, such that motion artifacts solely originate from the virtual motion trajectory $\tilde{\mathcal{M}}$. For training and testing, we use CBCT acquisitions (Artis zee Q, Siemens Healthcare GmbH, Germany) of the head acquired from 20 patients, which were split into 16 for training, 3 for validation, and 1 for testing. For each patient we simulate 450 random motion trajectories, resulting in a training set of 7650 reconstructions.

3 Experiments and Results

For motion generation, we use rotational movements about the patient's longitudinal axis. The motion trajectory is modeled by an Akima spline Akima (1970) with 15 equally distributed nodes, inducing RPEs ranging from  mm to  mm. Because the RPE is also sensitive to constant offsets, which do not induce motion artifacts, we further restrict the simulated motion to affect only a third of the acquisition.
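A sketch of the motion simulation under these constraints; the maximum node amplitude and the placement of the affected window are assumptions for illustration only:

```python
import numpy as np
from scipy.interpolate import Akima1DInterpolator

def simulate_rotation_trajectory(n_views, n_nodes=15, max_angle_deg=2.0, rng=None):
    """Per-view rotation angle about the longitudinal axis, modeled as an
    Akima spline over equally distributed nodes. Only a contiguous third of
    the nodes carries motion, so the RPE is not dominated by a constant offset
    that would cause no artifacts."""
    rng = np.random.default_rng() if rng is None else rng
    nodes = np.linspace(0, n_views - 1, n_nodes)
    angles = np.zeros(n_nodes)
    n_active = max(1, n_nodes // 3)
    start = rng.integers(0, n_nodes - n_active + 1)
    angles[start:start + n_active] = rng.uniform(-max_angle_deg, max_angle_deg,
                                                 n_active)
    return Akima1DInterpolator(nodes, angles)(np.arange(n_views))
```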

First, we inspect how well the network regresses the RPE on test and validation data. Then, in an inverse crime scenario (i.e., the modeling capacity of the spline used for motion generation equals that of the spline used for motion compensation), we inspect the behavior for motion types whose shape differs significantly from any motion seen during training. In a final experiment, we compare the performance of the network with a state-of-the-art IQM utilizing the histogram entropy. To this end, we deploy an inverse crime scenario and a more realistic case where we use 10 spline nodes for motion generation and 20 nodes for compensation.

Regression Network:

We use Adam optimization with a learning rate of  and select the network parameters that achieve the best RPE prediction on our validation dataset. Our network achieves an average RPE deviation from the ground truth (Gt) of  mm on the test dataset, as depicted in Fig. 1.
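A minimal training-loop sketch reusing `model` and `criterion` from the regression-network sketch in Section 2; the learning rate is a placeholder, as the value used here is not reproduced:

```python
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # placeholder learning rate

def train_epoch(loader):
    """One pass over (slice, rpe) pairs: each slice is a motion-corrupted FBP
    reconstruction, rpe its ground-truth reprojection error."""
    model.train()
    for slices, rpe in loader:
        optimizer.zero_grad()
        loss = criterion(model(slices).squeeze(1), rpe)
        loss.backward()
        optimizer.step()
```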

[Plot: estimated RPE [mm] vs. Gt RPE [mm] for the test data and the validation patients (Val 1, Val 2).]
Figure 1: Network-estimated RPE and different reconstructions, all revealing an RPE of  mm.

Network Inference for Motion Compensation:

[Plots: left, RPE [mm] over iterations (Pred, Gt); right, motion [] over projection frames (Sim., Est.).]
Figure 2: Left: network-predicted and Gt RPE at each iteration step of the optimization. Right: simulated motion trajectory and estimated motion trajectory after optimization.

Using the test patient, we inspect the network behavior for motion exceeding the RPE range of the training process in an inverse crime scenario. The simulated motion trajectory is depicted in Fig. 2 together with the estimated motion trajectory after optimization using the network as IQM (cf. Eq. 1). For each iteration of the optimization process, the network-predicted RPE is depicted together with the corresponding Gt RPE. While the RPE is underestimated within the first iterations, the proportionality is preserved, guiding the optimization to a motion-free reconstruction.

Figure 3 compares the proposed network-based IQM with the entropy-based IQM. The optimization process is identical for both metrics. In an inverse crime scenario, both methods restore the original image quality; in the more realistic setting, however, the image entropy gets stuck in a local minimum, whereas the network leads the optimization to a nearby motion-free solution.

[Reconstructions: ground truth (Gt), motion affected (Mo), inverse crime compensation (Ent, Pro), and clinical-setting compensation (Ent, Pro).]
Figure 3: Reconstructions of the test patient using a [500-2000] HU window. In the inverse crime scenario, the SSIM to the Gt is  (Ent/Gt) and  (Pro/Gt), respectively, for the entropy (Ent) and proposed (Pro) measure. For the more realistic setting (clinical setting), the SSIM is  (Ent/Gt) and  (Pro/Gt), respectively.
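For reference, a sketch of the two quantities compared in this experiment, the entropy-based IQM baseline and the SSIM against the ground truth, assuming scikit-image and a simple histogram binning (the exact binning and windowing used here are not reproduced):

```python
import numpy as np
from skimage.metrics import structural_similarity

def histogram_entropy(img, n_bins=256):
    """Entropy of the gray-value histogram, the baseline autofocus IQM."""
    hist, _ = np.histogram(img, bins=n_bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log(p)).sum())

def ssim_to_gt(recon, gt, hu_window=(500, 2000)):
    """SSIM of a compensated reconstruction against the ground truth,
    evaluated inside the display window used in Fig. 3."""
    lo, hi = hu_window
    return structural_similarity(np.clip(recon, lo, hi), np.clip(gt, lo, hi),
                                 data_range=hi - lo)
```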

4 Conclusion and Discussion

We present a novel data-driven autofocus approach led by a convolutional neural network. The network is trained to predict the RPE given a slice of a CBCT reconstruction. The final motion-compensated reconstruction is based solely on the raw projection data and the estimated motion trajectory. This allows us to devise a learning-based motion compensation approach while ensuring data integrity. We showed that the network generalizes well to unseen motion shapes and achieves higher SSIM than a state-of-the-art IQM.

Disclaimer: The concepts and information presented in this paper are based on research and are not commercially available.

References

  • H. Akima (1970) A new method of interpolation and smooth curve fitting based on local procedures. Journal of the ACM (JACM) 17 (4), pp. 589–602. Cited by: §3.
  • O. A. Berkhemer, P. S. Fransen, D. Beumer, L. A. van den Berg, H. F. Lingsma, A. J. Yoo, W. J. Schonewille, J. A. Vos, P. J. Nederkoorn, M. J. Wermer, et al. (2015) A randomized trial of intraarterial treatment for acute ischemic stroke. N Engl J Med 372 (1), pp. 11–20. Cited by: §1.
  • B. Bier, K. Aschoff, C. Syben, M. Unberath, M. Levenston, G. Gold, R. Fahrig, and A. Maier (2018) Detecting anatomical landmarks for motion estimation in weight-bearing imaging of knees. MICCAI Workshop on MLMIR, pp. 83–90. Cited by: §1, §1.
  • L. Feldkamp, L. Davis, and J. Kress (1984) Practical cone-beam algorithm. J Opt Soc Am A 1 (6), pp. 612–619. Cited by: §2.
  • R. Frysch and G. Rose (2015) Rigid motion compensation in C-arm CT using consistency measure on projection data. MICCAI, pp. 298–306. Cited by: §1.
  • R. Hartley and A. Zisserman (2003) Multiple view geometry in computer vision. Cambridge University Press. Cited by: §2.
  • K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. CVPR, pp. 770–778. Cited by: §2.
  • Y. Huang, A. Preuhs, G. Lauritsch, M. Manhart, X. Huang, and A. Maier (2019) Data consistent artifact reduction for limited angle tomography with deep learning prior. MICCAI Workshop on MLMIR. Cited by: §1.
  • Q. Ke and T. Kanade (2007) Quasiconvex optimization for robust geometric reconstruction. IEEE Transactions on Pattern Analysis and Machine Intelligence 29 (10), pp. 1834–1847. Cited by: §2.
  • J. Kim, T. Sun, J. Nuyts, Z. Kuncic, and R. Fulton (2014) Feasibility of correcting for realistic head motion in helical CT. Fully 3-D, pp. 542–545. Cited by: §2.
  • T. Küstner, K. Armanious, J. Yang, B. Yang, F. Schick, and S. Gatidis (2019) Retrospective correction of motion-affected MR images using deep learning frameworks. Magn Reson Med 82 (4), pp. 1527–1540. Cited by: §1, §1.
  • S. Latif, M. Asim, M. Usman, J. Qadir, and R. Rana (2018) Automating motion correction in multishot MRI using generative adversarial networks. MedNeurips. Cited by: §1, §1.
  • J. R. Leyhe, I. Tsogkas, A. C. Hesse, D. Behme, K. Schregel, I. Papageorgiou, J. Liman, M. Knauth, and M. Psychogios (2017) Latest generation of flat detector CT as a peri-interventional diagnostic tool: a comparative study with multidetector CT. JNIS 9 (12), pp. 1253–1257. Cited by: §1.
  • D. M. Olsson and L. S. Nelson (1975) The Nelder-Mead simplex procedure for function minimization. Technometrics 17 (1), pp. 45–51. Cited by: §1.
  • S. Ouadah, W. Stayman, J. Gang, T. Ehtiati, and J. Siewerdsen (2016) Self-calibration of cone-beam CT geometry using 3D–2D image registration. Phys Med Biol 61 (7), pp. 2613. Cited by: §1.
  • W. J. Powers et al. (2015) 2015 AHA/ASA focused update of the 2013 guidelines for the early management of patients with acute ischemic stroke regarding endovascular treatment. Stroke 46 (10), pp. 3020–3035. Cited by: §1.
  • A. Preuhs, A. Maier, M. Manhart, J. Fotouhi, N. Navab, and M. Unberath (2018) Double your views – exploiting symmetry in transmission imaging. MICCAI, pp. 356–364. Cited by: §1.
  • A. Preuhs, A. Maier, M. Manhart, M. Kowarschik, E. Hoppe, J. Fotouhi, N. Navab, and M. Unberath (2019) Symmetry prior for epipolar consistency. IJCARS 14 (9), pp. 1541–1551. Cited by: §1.
  • M. Psychogios, D. Behme, K. Schregel, I. Tsogkas, and M. Knauth (2017) One-stop management of acute stroke patients: minimizing door-to-reperfusion times. Stroke 48 (11), pp. 3152–3155. Cited by: §1.
  • A. Sisniega, J. W. Stayman, J. Yorkston, J. Siewerdsen, and W. Zbijewski (2017) Motion compensation in extremity cone-beam CT using a penalized image sharpness criterion. Phys Med Biol 62 (9), pp. 3712–3734. Cited by: §1, §2.
  • N. K. Strobel, B. Heigl, T. M. Brunner, O. Schuetz, M. M. Mitschke, K. Wiesent, and T. Mertelmeier (2003) Improving 3D image quality of X-ray C-arm imaging systems by using properly designed pose determination systems for calibrating the projection geometry. Medical Imaging, pp. 943–954. Cited by: §2.
  • J. Wicklein, H. Kunze, W. A. Kalender, and Y. Kyriakou (2012) Image features for misalignment correction in medical flat-detector CT. Med Phys 39 (8), pp. 4918–4931. Cited by: §1.