Decoupling Respiratory and Angular Variation in Rotational X-ray Scans Using a Prior Bilinear Model

04/30/2018 · Tobias Geimer et al. · FAU

Data-driven respiratory signal extraction from rotational X-ray scans has always been challenging due to angular effects overlapping with respiration-induced changes in the scene. In the context of motion modelling, the main drawback is that most of these methods only extract a 1D signal that, at best, can be decomposed into amplitude and phase. In this paper, we use the linearity of the Radon operator to propose a bilinear model based on a prior 4D scan to separate angular and respiratory variation. Out-of-sample extension is enhanced by a B-spline interpolation that uses prior knowledge about the trajectory angle to extract respiratory feature weights independent of the acquisition angle. Though the prerequisite of a prior 4D scan seems steep, our proposed application of respiratory motion estimation in radiation therapy usually meets this requirement. We test our approach on DRRs of a patient's 4D CT in a leave-one-out manner and achieve a mean estimation error of 5.2 %.


1 Introduction

Extracting information about a patient's breathing state from X-ray projections is important for many time-resolved applications, such as CT reconstruction or motion compensation in image-guided radiation therapy (IGRT) [8]. In the context of CT imaging, respiratory motion during the scan introduces data inconsistencies that ultimately manifest as reconstruction artifacts. These effects can be mitigated by incorporating non-linear motion models into state-of-the-art algorithms for motion-compensated reconstruction [11]. In IGRT, the consequences of respiratory motion may be particularly harmful if not addressed properly. Here, malignant tumor cells are irradiated following an optimized dose distribution that is the result of treatment planning based on CT imaging. However, respiratory motion may lead to a displacement of the target volume during irradiation, resulting in underdosage of the tumor and, ultimately, the potential survival of malignant cells [8]. Motion tracking represents the state-of-the-art procedure, where the treatment beam continuously follows the tumor motion. However, this process requires sophisticated motion monitoring, often incorporating motion models to estimate internal deformation from the available imaging modalities such as on-board X-ray imagers.

Many data-driven approaches have been proposed to extract respiratory information from X-ray projections, ranging from the established Amsterdam Shroud [15] to more sophisticated approaches based on epipolar consistency [1]. In the context of respiratory motion models [11] used to estimate internal deformation fields in IGRT [7], their main drawback is that most of these methods only extract a 1D signal that, at best, can be decomposed into amplitude and phase [5]. When the motion representation is covered by a statistical shape model (SSM) [3, 11], it is highly desirable to extract more features to allow for a lower reconstruction error. While approaches exist that extract multiple respiratory features from X-ray projections [6], they are restricted to static acquisition angles. As training a separate model for every possible static angle is infeasible, these methods are typically not suited for applications with a continuously rotating gantry, such as cone-beam CT or volumetric arc therapy.

With both angular and respiratory variation present in the X-ray images of a rotational scan, we aim to separate these effects by expressing them as the two domains of a bilinear model with corresponding rotational and respiratory feature spaces. A mathematical foundation for decomposing multiple sources of variation was given by De Lathauwer et al. [4], who formulated the Higher-order Singular Value Decomposition (HOSVD) on a tensor of arbitrary dimensionality. Meanwhile, bilinear models have seen use in many fields including medical applications. Among others, Tenenbaum et al. [13] used a bilinear model to separate pose and identity in face images, while Çimen et al. [2] constructed a spatio-temporal model of coronary artery centerlines.

In this work, we show that due to the linearity of the X-ray transform, a bilinear decomposition of respiratory and angular variation exists (Sec. 2.1). Subsequently, Sec. 2.2 and 2.3 will cover both model training and its application in feature estimation. With the simultaneous estimation of both bilinear weights being an ill-posed problem, we propose a B-spline interpolation of rotational weights based on prior knowledge about the trajectory angle. With known rotational weights, the task of estimating respiratory weights reduces to a linear one. We validate our model in a leave-one-phase-out manner using digitally reconstructed radiographs (DRRs) of five patient 4D CTs consisting of eight respiratory phases each. The main purpose of our evaluation is to give a proof-of-concept of the proposed decoupling process. In addition, we provide first indication that the extracted respiratory features contain volumetric information suitable for driving a motion model.

2 Material & Methods

2.1 X-ray Transform under Respiratory and Angular Variation

An X-ray projection $\mathbf{p}_{i,j} \in \mathbb{R}^{N_p}$ acquired at rotation angle $\phi_i$ and respiratory phase $t_j$ is given by the X-ray transform $\mathbf{R}_{\phi_i} \in \mathbb{R}^{N_p \times N_v}$ applied to the volume $\mathbf{v}_{t_j} \in \mathbb{R}^{N_v}$:

$$\mathbf{p}_{i,j} = \mathbf{R}_{\phi_i} \, \mathbf{v}_{t_j}, \qquad (1)$$

where $N_v$ and $N_p$ denote the (arbitrary) dimensions of the volume and the projection image, respectively. It has been shown that the respiration-induced changes in the anatomy can be described by an active shape model of the internal anatomy [11]:

$$\mathbf{v}_{t_j} = \bar{\mathbf{v}} + \mathbf{U} \, \mathbf{a}_{t_j}, \qquad (2)$$

where $\bar{\mathbf{v}} \in \mathbb{R}^{N_v}$ is the data mean of the model, $\mathbf{U} \in \mathbb{R}^{N_v \times f}$ contains the eigenvectors corresponding to the first $f$ principal components, and $\mathbf{a}_{t_j} \in \mathbb{R}^{f}$ are the model weights corresponding to phase $t_j$. Thus, Eq. 1 can be further processed to

$$\mathbf{p}_{i,j} = \mathbf{R}_{\phi_i} \left( \bar{\mathbf{v}} + \mathbf{U} \, \mathbf{a}_{t_j} \right) = \mathbf{R}_{\phi_i} \bar{\mathbf{v}} + \mathbf{R}_{\phi_i} \mathbf{U} \, \mathbf{a}_{t_j}. \qquad (3)$$

Now, $\mathbf{R}_{\phi_i} \mathbf{U}$ represents a model (with mean $\mathbf{R}_{\phi_i} \bar{\mathbf{v}}$) for describing the projection image under the fixed angle $\phi_i$ given the respiratory weights $\mathbf{a}_{t_j}$. The inversion of the X-ray transform itself is ill-posed for a single projection. However, $\mathbf{R}_{\phi_i} \mathbf{U}$ can be inverted more easily, as it mostly encodes variation in the superior-inferior direction that is observable in the projection. As a result, respiratory model weights can be estimated from a single projection image if the angle-dependent model matrix is known [6].

Furthermore, we propose that an X-ray transform can be approximated by a linear combination over a new basis of X-ray transforms $\hat{\mathbf{R}}_k$, such that

$$\mathbf{R}_{\phi_i} \approx \sum_{k=1}^{g} b_{i,k} \, \hat{\mathbf{R}}_k. \qquad (4)$$

This essentially mimics Eq. 2, with $\hat{\mathbf{R}}_k$ describing variation in the projection images solely caused by the rotation of the gantry. The resulting scalar factors $b_{i,k}$ form the weight vector $\mathbf{b}_i \in \mathbb{R}^{g}$. Note that in this formulation we implicitly assume a continuous trajectory and that the breathing motion is observable from each view. This gives rise to a bilinear formulation for any given projection image $\mathbf{p}_{i,j}$. However, bilinear models typically do not operate on mean-normalized data. Therefore, we use the decomposition described in Eq. 2 without mean subtraction:

$$\mathbf{p}_{i,j} = \mathcal{M} \times_a \mathbf{a}_{t_j} \times_b \mathbf{b}_{\phi_i}, \qquad (5)$$

where $\mathcal{M}$ is a model tensor with respiratory and rotational feature dimensionality $f$ and $g$. Here, $\times_k$ denotes the mode product along the given mode $k$. For more details on tensor notation please refer to [9].
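As a minimal sketch, the bilinear evaluation of Eq. 5 amounts to two mode products (tensor contractions). The numpy code below is illustrative only; the tensor layout, shapes, and variable names are assumptions, not the authors' implementation.

```python
# Sketch: evaluating the bilinear model of Eq. (5) with numpy.
# Assumed model tensor layout: (f respiratory features, g rotational features, N_p pixels).
import numpy as np

def bilinear_projection(M, a, b):
    """Synthesize a projection p = M x_a a x_b b via two mode products."""
    # Contract the respiratory mode: (f, g, N_p) x (f,) -> (g, N_p)
    Mg = np.tensordot(a, M, axes=(0, 0))
    # Contract the rotational mode: (g, N_p) x (g,) -> (N_p,)
    return np.tensordot(b, Mg, axes=(0, 0))

# Toy usage with random data
f, g, n_pix = 5, 10, 64 * 64
M = np.random.randn(f, g, n_pix)
a = np.random.randn(f)            # respiratory weights
b = np.random.randn(g)            # rotational weights
p = bilinear_projection(M, a, b)  # flattened projection image
```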

2.2 Model Training

For model training, a prior 4D CT scan is required, yielding phase-binned volumes $\mathbf{v}_{t_j}$, $j = 1, \dots, J$. Using the CONRAD software framework [10], DRRs are computed at angles $\phi_i$, $i = 1, \dots, I$, along a circular trajectory. The resulting projection images form the data tensor $\mathcal{D} \in \mathbb{R}^{J \times I \times N_p}$. Using HOSVD [4], we perform dimensionality reduction on the data tensor. First, $\mathcal{D}$ is unfolded along mode $k$ into the matrix $\mathbf{D}_{(k)}$; Fig. 1-left illustrates the unfolding process. Second, an SVD is performed on each unfolded matrix,

$$\mathbf{D}_{(k)} = \mathbf{U}_k \, \mathbf{S}_k \, \mathbf{V}_k^{\top}, \qquad (6)$$

which yields the tensor basis to project $\mathcal{D}$ onto (see Fig. 1-right):

$$\mathcal{M} = \mathcal{D} \times_a \mathbf{U}_a^{\top} \times_b \mathbf{U}_b^{\top}. \qquad (7)$$
Figure 1: Left: Tensor unfolding concatenates the slices of a tensor along a selected mode. Right: Dimensionality reduction (HOSVD) of the rotational and respiratory domain.

Finally, $\mathcal{D}$ can be described by the model tensor $\mathcal{M}$:

$$\mathcal{D} = \mathcal{M} \times_a \mathbf{A} \times_b \mathbf{B}, \qquad (8)$$

where $\mathbf{A} \in \mathbb{R}^{J \times f}$ and $\mathbf{B} \in \mathbb{R}^{I \times g}$ carry the low-dimensional model weights for respiratory and angular variation, respectively.
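The following sketch outlines this HOSVD-based training step in numpy. The data-tensor ordering (phases, angles, pixels) and the feature counts are illustrative assumptions; it is not the authors' code.

```python
# Sketch of Sec. 2.2: unfold the data tensor along the respiratory and rotational
# modes, take truncated SVD bases, and project to obtain the model tensor (Eqs. 6-8).
import numpy as np

def hosvd_model(D, f, g):
    """D: data tensor of shape (J phases, I angles, N_p pixels)."""
    J, I, n_pix = D.shape
    # Respiratory-mode unfolding: rows are phases
    D1 = D.reshape(J, I * n_pix)
    # Rotational-mode unfolding: rows are angles
    D2 = np.transpose(D, (1, 0, 2)).reshape(I, J * n_pix)
    # Left-singular bases of each unfolding, truncated to f and g components
    A, _, _ = np.linalg.svd(D1, full_matrices=False)
    B, _, _ = np.linalg.svd(D2, full_matrices=False)
    A, B = A[:, :f], B[:, :g]
    # Model tensor: project D onto the two bases (Eq. 7)
    M = np.einsum('jip,jf,ig->fgp', D, A, B)
    return M, A, B   # rows of A / B are per-phase / per-angle weights (Eq. 8)

D = np.random.randn(8, 60, 32 * 32)   # toy data: 8 phases, 60 angles
M, A, B = hosvd_model(D, f=5, g=10)
```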

2.3 Weight Estimation

Given an observed projection image $\mathbf{p}$ at unknown respiratory phase and angle, our objective is to find the coefficients $\mathbf{a}^*$ and $\mathbf{b}^*$ for respiration and rotation that best represent the observation in terms of the model:

$$\mathbf{a}^*, \mathbf{b}^* = \underset{\mathbf{a}, \mathbf{b}}{\operatorname{arg\,min}} \; \left\| \mathbf{p} - \mathcal{M} \times_a \mathbf{a} \times_b \mathbf{b} \right\|_2^2. \qquad (9)$$

However, as $\mathbf{a}$ and $\mathbf{b}$ need to be optimized simultaneously, this task is highly ill-posed. Tenenbaum et al. [13] used an Expectation-Maximization algorithm to cope with this problem in their separation of identity and pose in face images. However, they benefit from the fact that only the pose is a continuous variable whereas identity is a discrete state, which drastically simplifies their EM approach. In our case, both respiratory and angular variation have to be considered continuous. Fortunately, we can incorporate prior knowledge about the trajectory into the estimation process. From the trajectory, the angle of each projection image is known, even though the corresponding rotational weights are only given for the particular angles contained in the training samples. Consequently, interpolating the desired rotational weights from those in the training set seems feasible under the assumption of a continuous, non-sparse trajectory.

2.3.1 Rotational B-spline Interpolation.

Using the rotation angle as prior knowledge, we propose extending the bilinear model with a B-spline curve fitted to the rotational weights obtained during training,

$$\mathbf{b}(u) = \sum_{i=1}^{I} N_{i,d}(u) \, \mathbf{b}_i, \qquad (10)$$

with $N_{i,d}$ being the B-spline basis functions of degree $d$ and $\mathbf{b}_i$ the rows of $\mathbf{B}$. Using a uniform parametrization $u(\phi)$ with respect to the training angles, the rotational weight vector $\mathbf{b}^*$ of a new angle $\phi^*$ is given as

$$\mathbf{b}^* = \mathbf{b}\!\left( u(\phi^*) \right). \qquad (11)$$
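A minimal scipy-based sketch of this interpolation step is given below. The trajectory sampling, spline degree, and variable names are assumptions for illustration; the paper does not prescribe a particular implementation.

```python
# Sketch of Eqs. (10)-(11): fit an interpolating cubic B-spline to each rotational
# weight component as a function of training angle, then evaluate at an unseen angle.
import numpy as np
from scipy.interpolate import make_interp_spline

train_angles = np.arange(0.0, 360.0, 6.0)          # assumed trajectory sampling
B_train = np.random.randn(train_angles.size, 10)   # rotational weights from training (rows of B)

# One cubic spline per weight dimension (2D data interpolated along the first axis)
spline = make_interp_spline(train_angles, B_train, k=3)

phi_new = 123.0            # known gantry angle of the new projection
b_star = spline(phi_new)   # interpolated rotational weight vector
```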

2.3.2 Respiratory Weight Computation.

With the rotational weights $\mathbf{b}^*$ interpolated, multiplying $\mathcal{M}$ and $\mathbf{b}^*$ first removes the angular variation:

$$\mathcal{M}_{\phi^*} = \mathcal{M} \times_b \mathbf{b}^*. \qquad (12)$$

Collapsing the resulting singleton dimension yields the angle-dependent model matrix $\mathbf{M}_{\phi^*}$ for the new angle $\phi^*$. Closing the loop to Eq. 3 (without mean), the computation of the respiratory weights simplifies to a linear problem solved via the pseudo-inverse of $\mathbf{M}_{\phi^*}$:

$$\mathbf{a}^* = \mathbf{M}_{\phi^*}^{\dagger} \, \mathbf{p}. \qquad (13)$$
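The sketch below illustrates Eqs. 12-13 in numpy, reusing the assumed model-tensor layout from the earlier sketches; variable names are hypothetical and the least-squares solve stands in for the pseudo-inverse.

```python
# Sketch of Sec. 2.3.2: collapse the model tensor with the interpolated rotational
# weights, then solve the resulting linear problem for the respiratory weights.
import numpy as np

def estimate_respiratory_weights(M, b_star, p):
    """M: (f, g, N_p) model tensor, b_star: (g,) rotational weights, p: (N_p,) projection."""
    # Remove angular variation: angle-dependent model matrix of shape (f, N_p) (Eq. 12)
    M_phi = np.tensordot(b_star, M, axes=(0, 1))
    # Least-squares solution of p = M_phi^T a, equivalent to a* = pinv(M_phi^T) p (Eq. 13)
    a_star, *_ = np.linalg.lstsq(M_phi.T, p, rcond=None)
    return a_star
```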

For application, the extracted respiratory weights could either be used as input to a respiratory motion model [7] to estimate internal deformation fields, or for data augmentation in the context of cone-beam CT by generating additional gated projections for different rotational weights at the constant phase corresponding to $\mathbf{a}^*$ (see Eq. 5).

2.4 Data & Experiments

Evaluation was performed on the 4D CTs of five patients, each consisting of eight phase-binned volumes covering the respiratory cycle from inhale to exhale. Using CONRAD [10], DRRs with isotropic pixel spacing were created by forward projecting each of the volumes at 60 angles along a circular trajectory. Consequently, the full data tensor comprised eight phases and 60 angles for each patient. With the model being patient-specific, the following experiments were conducted individually for each patient and results were averaged where indicated.

Experiment 1.

Our goal is to provide a proof of concept of the bilinear decoupling and to investigate how accurately respiratory weights can be extracted using the proposed method. For each patient, a dense bilinear model was trained on the entire data tensor to assess the variance explained by the respiratory domain. For comparison, an SSM (linear PCA with mean normalization) of the 4D CT volumes was trained to assess weights and variance prior to the influence of the X-ray transform.

Experiment 2.

To investigate how well a previously unseen projection image at an unknown angle and breathing phase can be decomposed into rotational and respiratory weights, every 6th angle (10 in total) was removed from training. Further, leave-one-out evaluation was performed, where each phase was subsequently removed prior to training, resulting in a sparser data tensor of 50 angles and seven phases. In this scenario, the dense bilinear model provided a reference for the features to be expected. In the evaluation step, the corresponding projection image and its respective trajectory angle were fed to the model, and the rotational and respiratory weights were estimated as described in Sec. 2.3. From these weights, the projection image was rebuilt and model accuracy was assessed in terms of the mean gray-value error between the reconstructed image and the original.
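The evaluation loop of this experiment can be sketched as follows, reusing the helper functions from the method sketches above; the error metric shown (mean absolute gray-value difference relative to the reference mean) is an assumed reading of the description, not the authors' exact code.

```python
# Sketch of the leave-one-out evaluation in experiment 2: estimate weights for a
# held-out projection, rebuild it from the bilinear model, report the relative error.
import numpy as np

def evaluate_projection(M, spline, p_test, phi_test):
    b_star = spline(phi_test)                                   # Sec. 2.3.1
    a_star = estimate_respiratory_weights(M, b_star, p_test)    # Eq. 13
    p_rebuilt = bilinear_projection(M, a_star, b_star)          # Eq. 5
    return np.mean(np.abs(p_rebuilt - p_test)) / np.mean(p_test)
```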

Experiment 3.

To assess their use for predicting 3D information, the bilinear respiratory weights were used as the driving surrogate for a motion model [11], which, apart from the surrogate, usually consists of an internal motion representation and an internal-external correlation model. For the motion representation, an SSM was trained for each test phase on the remaining seven 4D CT volumes (Eq. 2). With the 4D SSM weights and the bilinear respiratory weights showing similar behavior but differing in scale (Fig. 2), multi-linear regression [14] was chosen to correlate the bilinear weights with the 4D SSM weights. True to the leave-one-out nature, the regression matrix was trained on the weights of the seven remaining phases, relating the bilinear respiratory weights to the 4D SSM weights (see also the results of experiment 1, which motivate extracting one more bilinear feature than SSM features). The rebuilt 3D volume was then compared to the ground-truth volume of the 4D CT in terms of HU difference.
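A brief sketch of this regression step is given below. The feature dimensions (five bilinear, four SSM) and variable names are illustrative assumptions consistent with the discussion of experiment 1, not values stated verbatim here.

```python
# Sketch of experiment 3: learn a least-squares regression matrix W mapping bilinear
# respiratory weights to 4D SSM weights from the seven training phases.
import numpy as np

A_bilinear = np.random.randn(7, 5)   # bilinear respiratory weights (7 phases, 5 features)
A_ssm = np.random.randn(7, 4)        # 4D SSM weights (7 phases, 4 features)

# Regression matrix W such that A_ssm ~ A_bilinear @ W
W, *_ = np.linalg.lstsq(A_bilinear, A_ssm, rcond=None)

# Predict SSM weights for the held-out phase and rebuild the volume via Eq. (2)
a_test = np.random.randn(5)
ssm_weights = a_test @ W
# volume = v_mean + U @ ssm_weights   # with the SSM mean and eigenvectors
```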

3 Results & Discussion

Figure 2: Feature weights of the first few principal components for linear PCA on the 4D CT (top left), bilinear respiratory weights (bottom left), and rotational weights (bottom right), as well as the explained variance caused by respiration in the linear and bilinear case (top right).
Experiment 1.

Fig. 2 shows the weights of the first few principal components for the linear PCA of the volumes as well as for each phase and angle in the projection images. Notably, the first components of both bilinear domains are near constant. Unlike linear PCA (Eq. 2), data in the bilinear model do not have zero mean (Eq. 5). Consequently, the first component points to the data mean, while variation in the respective domain is encoded starting from the second component [13]. Appropriately, the (k+1)-th bilinear respiratory component corresponds to the k-th linear component of the 4D CT, indicating that separation of respiratory and angular variance in the projections is in fact achieved.

Figure 3: Eigenimages in the model tensor $\mathcal{M}$ corresponding to rotational and respiratory features, both starting from the same mean.

Additionally, the respiratory variance explained by the principal components is plotted (top right). Most of the variance in the bilinear case is already explained by the movement towards the mean. Thus, the linear components better reflect that four components accurately describe the vast majority of the data variance. As a consequence of these two results, one more feature should be extracted than expected from the volumetric 4D SSM when using the bilinear weights as the driving surrogate. This is also the motivation for the regression matrix chosen in experiment 3. Fig. 3 shows the eigenimages corresponding to angular and respiratory variation, respectively. Noticeably, the rotational eigenimages contain mostly low-frequency variation inherent to the moving gantry, whereas the respiratory direction encodes comparably high-frequency changes.

Table 1: Mean gray value error [%] and standard deviation, per patient (Pat 1-5) and test trajectory angle, for the leave-one-out evaluation of representing the projection images in terms of the bilinear model, averaged over all phases. Percentage values are given with respect to the mean gray value of the respective reference projection image.
Figure 4: Example reconstruction for an exhale phase at one of the test angles. (a) Original DRR sample. (b) Leave-one-out bilinear reconstruction. (c) Difference image. (d) Rotational weights from the dense bilinear model and interpolated weights for the leave-one-out estimation. (e) Respiratory weight estimate.
Experiment 2.

Regarding the bilinear decomposition, Tab. 1 lists the percentage mean gray value error of the reconstructed projection images for each test angle, averaged over all estimated phases. The error is reported relative to the mean gray value of the respective reference projection. Exemplarily, for one phase-angle combination of patient 1, Fig. 4 shows the leave-one-out estimation result. The proposed B-spline interpolation yields rotational weights close to those of the dense bilinear model up to the 10th component. Since both sets of weights correspond to slightly different eigenvectors, due to one model being trained on less data, deviation especially in the lower components is to be expected. Still, four to five respiratory weights are estimated accurately which, as shown previously, is sufficient to recover most of the respiratory variance. As such, we believe they contain much more information than just the respiratory phase.

Table 2: Mean voxel error and standard deviation in HU for each patient (Pat 1-5) and respiratory phase, averaged over all voxels and angles. The average standard deviation between angles within the same patient was small.
Figure 5: Estimation visualization for patient 1 at an exhale phase, estimated from a projection at one of the test angles. The overlay displays the original CT in cyan and the estimated volume in red, adding up to gray for equal HU.
Experiment 3.

To substantiate this assumption, Tab. 2 provides the mean HU errors of the estimated CT volumes using projections at the ten test angles for each patient. The errors showed very small deviation with respect to the acquisition angle, indicating that the respiratory domain is extracted sufficiently well regardless of the view. Relative to the HU range of the data, the observed mean errors indicate reasonable performance. For visualization, Fig. 5 shows the estimated CT for patient 1 at an exhale phase, estimated from a projection at one of the test trajectory angles. Deviation is most prominent at vessel structures within the lung as well as at the left side of the diaphragm.

In this scenario, the motion representation is covered by an HU-based SSM to generate CT volumes for different respiratory weights. This is, of course, interchangeable with, for instance, 3D vector fields obtained via deformable image registration. In the context of motion tracking in IGRT, the estimated displacements could then be used to steer the treatment beam according to the tumor motion while at the same time enabling quality assurance in terms of 4D dose verification [12]. However, the main focus of this work was to provide a proof of concept for the angular-respiratory decoupling process, for which an HU-based SSM was sufficient. In future work, we will investigate the potential to predict entire dense deformation fields.

Our current leave-one-out evaluation relies on two simplifications that will pose additional challenges in practice. First, a perfect baseline registration of the training CT to the projection images may not be given in every scenario. However, in radiation therapy, accurate alignment of patient and system is a prerequisite for optimal treatment. Second, no anatomical changes between the 4D CT and the rotational scan are taken into account. Further investigation of how these effects interfere with the decomposition is subject to future work.

4 Conclusion

In this paper, we demonstrated that the X-ray transform under respiratory and angular variation can be expressed in terms of a bilinear model, given a continuous trajectory and breathing motion that is observable in every projection. Using a prior 4D CT, we showed that projection images on the trajectory can be bilinearly decomposed into rotational and respiratory components. Prior knowledge about the gantry angle is used to solve the otherwise ill-posed out-of-sample problem. Results for both 2D DRRs and estimated 3D volumes demonstrate that up to five components of the respiratory variation are recovered independent of the view angle, and that these explain the bulk of the volumetric variation. As such, recovery of 3D motion seems possible. Currently, our study is limited by two simplifications, namely perfect alignment and no inter-acquisition changes. Their investigation is subject to future work.

Acknowledgement

This work was partially conducted at the ACRF Image X Institute as part of a visiting research scholar program. The authors gratefully acknowledge funding of this research stay by the Erlangen Graduate School in Advanced Optical Technologies (SAOT).

References

  • [1] Aichert, A., Berger, M., Wang, J., Maass, N., Doerfler, A., Hornegger, J., Maier, A.K.: Epipolar Consistency in Transmission Imaging. IEEE Trans Med Imag 34(11), 2205–2219 (2015). https://doi.org/10.1109/TMI.2015.2426417
  • [2] Çimen, S., Hoogendoorn, C., Morris, P.D., Gunn, J., Frangi, A.F.: Reconstruction of Coronary Trees from 3DRA Using a 3D+t Statistical Cardiac Prior. In: Golland, P., Hata, N., Barillot, C., Hornegger, J., Robert, H. (eds.) Med Image Comput Comput Assist Interv (MICCAI). pp. 619–626 (2014)
  • [3] Cootes, T.F., Taylor, C.J., Cooper, D.H., Graham, J.: Active Shape Models - their training and application. Comput Vis Image Underst 61(1), 38–59 (Jan 1995)
  • [4] De Lathauwer, L., De Moor, B., Vandewalle, J.: A Multilinear Singular Value Decomposition. SIAM J Matrix Anal Appl 21(4), 1253–1278 (Jan 2000)
  • [5] Fassi, A., Seregni, M., Riboldi, M., Cerveri, P., Sarrut, D., Ivaldi, G.B., de Fatis, P.T., Liotta, M., Baroni, G.: Surrogate-driven deformable motion model for organ motion tracking in particle radiation therapy. Phys Med Biol 60(4), 1565–1582 (2015). https://doi.org/10.1088/0031-9155/60/4/1565
  • [6] Fischer, P., Pohl, T., Faranesh, A., Maier, A., Hornegger, J.: Unsupervised Learning for Robust Respiratory Signal Estimation from X-Ray Fluoroscopy. IEEE Trans Med Imag 36(4), 865–877 (2017). https://doi.org/10.1109/TMI.2016.2609888
  • [7] Geimer, T., Unberath, M., Birlutiu, A., Wölfelschneider, J., Bert, C., Maier, A.: A Kernel-based Framework for Intra-fractional Respiratory Motion Estimation in Radiation Therapy. In: Proc IEEE Int Symp Biomed Imaging. pp. 1036–1039 (2017)
  • [8] Keall, P.J., Mageras, G.S., Balter, J.M., Emery, R.S., Forster, K.M., Jiang, S.B., Kapatoes, J.M., Low, D., Murphy, M.J., Murray, B.R., Ramsey, C.R., Van Herk, M.B., Vedam, S.S., Wong, J.W., Yorke, E.: The management of respiratory motion in radiation oncology report of AAPM Task Group 76. Med Phys 33(10), 3874–3900 (2006). https://doi.org/10.1118/1.2349696
  • [9] Kolda, T.G., Bader, B.W.: Tensor Decompositions and Applications. SIAM Rev 51(3), 455–500 (2009). https://doi.org/10.1137/07070111X
  • [10] Maier, A., Hofmann, H.G., Berger, M., Fischer, P., Schwemmer, C., Wu, H., Müller, K., Hornegger, J., Choi, J.H., Riess, C., Keil, A., Fahrig, R.: CONRAD–a software framework for cone-beam imaging in radiology. Med Phys 40(11), 111914 (2013). https://doi.org/10.1118/1.4824926
  • [11] McClelland, J.R., Hawkes, D.J., Schaeffter, T., King, A.P.: Respiratory motion models: A review. Med Image Anal 17(1), 19–42 (2013). https://doi.org/10.1016/j.media.2012.09.005
  • [12] Prasetio, H., Wölfelschneider, J., Ziegler, M., Serpa, M., Witulla, B., Bert, C.: Dose calculation and verification of the vero gimbal tracking treatment delivery. Phys Med Biol 63(3), 035043 (Feb 2018). https://doi.org/10.1088/1361-6560/aaa617
  • [13] Tenenbaum, J.B., Freeman, W.T.: Separating Style And Content With Bilinear Models. Neural Comput 12(6), 1247–1283 (2000)
  • [14] Wilms, M., Werner, R., Ehrhardt, J., Schmidt-Richberg, A., Schlemmer, H.P., Handels, H.: Multivariate regression approaches for surrogate-based diffeomorphic estimation of respiratory motion in radiation therapy. Phys Med Biol 59(5), 1147–1164 (2014). https://doi.org/10.1088/0031-9155/59/5/1147
  • [15] Yan, H., Wang, X., Yin, W., Pan, T., Ahmad, M., Mou, X., Cerviño, L., Jia, X., Jiang, S.B.: Extracting respiratory signals from thoracic cone beam CT projections. Phys Med Biol 58(5), 1447–64 (2013). https://doi.org/10.1088/0031-9155/58/5/1447