Deep Learning Framework for Digital Breast Tomosynthesis Reconstruction

08/14/2018, by Nikita Moriakov et al., Radboudumc

Digital breast tomosynthesis is rapidly replacing digital mammography as the basic x-ray technique for evaluation of the breasts. However, the sparse sampling and limited angular range give rise to different artifacts, which manufacturers try to solve in several ways. In this study we propose an extension of the Learned Primal-Dual algorithm for digital breast tomosynthesis. The Learned Primal-Dual algorithm is a deep neural network consisting of several 'reconstruction blocks', which take in raw sinogram data as the initial input, perform a forward and a backward pass by taking projections and back-projections, and use a convolutional neural network to produce an intermediate reconstruction result, which is then improved further by each successive reconstruction block. We extend the architecture by providing breast thickness measurements as a mask to the neural network and allowing it to learn how to use this thickness mask. We trained the algorithm on digital phantoms and the corresponding noise-free/noisy projections, and then tested it on digital phantoms at varying levels of noise. Reconstruction performance of the algorithms was compared visually and using MSE loss and the Structural Similarity Index. The results indicate that the proposed algorithm outperforms the baseline iterative reconstruction algorithm in terms of reconstruction quality for both breast edges and internal structures, and is robust to noise.


1 Introduction

Digital breast tomosynthesis (DBT) is rapidly replacing digital mammography (DM) as the basic x-ray technique for evaluation of the breasts. DBT overcomes some of the inherent limitations of DM by adding limited depth information to mammographic images. This prevents the information loss caused by tissue superposition, and may even increase specificity by resolving tissue projections that mimic breast lesions. A DBT acquisition consists of several low-dose planar x-ray projections at equally spaced intervals over a limited angle. These projections are then reconstructed to a three-dimensional volume. However, this sparse sampling and limited angular range give rise to different artifacts, which manufacturers try to solve in several ways. In previous work [6] it was shown that the chosen reconstruction algorithm can greatly influence the reconstruction quality.

In this paper, we are interested in DBT reconstruction with a data-driven approach using deep learning. As we will show, not only does this allow us to easily include complicated (shape) priors into the reconstruction, but it additionally opens opportunities for end-to-end learning, such as predicting lesion locations in CAD systems or computing the accumulated x-ray dose. We propose a data-driven reconstruction algorithm, Deep Breast Tomographic Reconstruction (DBToR), using a deep neural network which extends the previously proposed Learned Primal-Dual algorithm. The neural network consists of several 'reconstruction blocks', which take in raw sinogram (i.e., projection) data as the initial input, perform a forward and a backward pass by taking projections and back-projections, and use a convolutional neural network to produce an intermediate reconstruction result, which is then improved further by each successive reconstruction block. The network is trained by stochastic gradient descent, minimizing the loss between the computed reconstruction and the ground truth.

In contrast to CT imaging, measurements from only a narrow range of sparsely sampled angles are available in breast tomosynthesis. However, we also have access to the compressed breast thickness, which is used in classical reconstruction algorithms to determine the reconstruction volume. We provide these thickness measurements as additional prior information to the Learned Primal-Dual algorithm by giving it a mask and allowing it to learn how to use this mask efficiently.
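The unrolled primal-dual iteration described above can be sketched as follows. This is a structural sketch only: the matrix operator, step sizes, and the toy `dual_block`/`primal_block` stand-ins replace the learned convolutional blocks of the actual network.

```python
import numpy as np

def lpd_reconstruct(g, mask, A, n_iters, dual_block, primal_block):
    """Unrolled Learned Primal-Dual iteration (structural sketch).

    g      : measured sinogram data
    mask   : breast-thickness mask, the extra prior DBToR adds
    A      : matrix form of the projection operator; A.T back-projects
    *_block: stand-ins for the learned CNN reconstruction blocks
    """
    f = np.zeros(A.shape[1])  # primal vector (image domain), initialized to zero
    h = np.zeros(A.shape[0])  # dual vector (sinogram domain), initialized to zero
    for _ in range(n_iters):
        h = dual_block(h, A @ f, g)          # dual update sees current projections and data
        f = primal_block(f, A.T @ h, mask)   # primal update sees back-projection and mask
    return f

# Toy stand-ins for the learned blocks: damped residual updates.
def dual_block(h, proj_f, g):
    return h + 0.1 * (proj_f - g)

def primal_block(f, backproj_h, mask):
    return (f - 0.1 * backproj_h) * mask
```

With the toy blocks, the masked-out voxels stay at zero after every primal update, which is the mechanism the network can exploit to confine the reconstruction to the breast region.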

We have tested the algorithm on virtual breast phantoms. The results indicate that the proposed algorithm outperforms the baseline iterative reconstruction algorithm in terms of reconstruction quality for both breast edges and breast internal structures. Furthermore, the algorithm generalizes well even when trained on a small dataset and is robust to noise.

2 Methods

2.1 Material

To train and evaluate the algorithm, we created a total of 1124 simulated breast phantoms. To limit computational complexity, the phantoms consisted of 2D coronal slices extracted from virtual 3D breast phantoms [2]. These phantoms were indexed with labels for four different materials: skin, adipose tissue, glandular tissue, and Cooper's ligaments. The elemental compositions of these materials were obtained from the work of Hammerstein et al. [3], except for the composition of Cooper's ligaments, which was assumed to be identical to that of glandular tissue. Linear attenuation coefficients were calculated for each material using the software from Boone and Chavez [4]. The phantoms cover a range of compressed breast thicknesses and widths, with a fixed isotropic voxel size.
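Converting a labeled phantom slice into an attenuation image is a simple lookup; the coefficient values below are placeholders, not the values actually computed with the software of Boone and Chavez [4].

```python
import numpy as np

# Hypothetical linear attenuation coefficients (1/cm) for the phantom
# materials; placeholder values, not those used in the paper.
MU = np.array([
    0.0,   # 0: background (air)
    0.50,  # 1: skin
    0.23,  # 2: adipose tissue
    0.36,  # 3: glandular tissue
    0.36,  # 4: Cooper's ligaments, assumed identical to glandular tissue
])

def labels_to_attenuation(label_map):
    """Turn an integer material-label map into a linear-attenuation image."""
    return MU[label_map]
```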

Figure 1: Sample breast image

Limited angle fan-beam projections were simulated for all phantoms using a geometry with the center of rotation placed at the bottom center of the phantom. The x-ray source was placed above the center of rotation, with the detector rotating together with the source. A total of 25 equally spaced projections over the limited angular range were generated. The detector was a perfect photon counting system consisting of 1280 elements. The forward model used for the simulations was

y_i = N_i · exp(−Σ_j a_ij μ_j),

with y_i the simulated projection data, N_i the number of x-ray photons emitted towards detector pixel i, a_ij the intersection between voxel j and the line between the source and detector pixel i, and μ_j the linear attenuation in voxel j. The noiseless simulated projection data were used to generate a series of data sets at 17 noise levels, simulated by varying the photon count N_i. Some of these noise levels are of similar magnitude to clinical DBT projection data. For each noise level, 10 Poisson noise realizations were generated, resulting in a total of 11240 projection sets at each dose level.
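Under this forward model, projection simulation can be sketched as follows; the function and argument names are illustrative, not the authors' simulation code.

```python
import numpy as np

def simulate_projections(A, mu, n0, rng=None):
    """Simulate photon-counting projections with the Beer-Lambert forward model.

    A   : system matrix; A[i, j] is the intersection length a_ij of the line
          from the source to detector pixel i with voxel j
    mu  : linear attenuation mu_j per voxel
    n0  : photons N_i emitted towards each detector pixel (scalar or per-pixel)
    rng : numpy Generator; if given, Poisson counting noise is applied
    """
    line_integrals = A @ mu                      # sum_j a_ij * mu_j
    expected = n0 * np.exp(-line_integrals)      # Beer-Lambert attenuation
    if rng is None:
        return expected                          # noiseless projection data
    return rng.poisson(expected).astype(float)   # one Poisson noise realization
```

Calling the function repeatedly with different `rng` seeds yields the multiple noise realizations per dose level described above.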

Reference reconstructions were generated for both noiseless and noisy data using 100 iterations of MLTR without any regularization [5].

2.2 Algorithm

The DBToR algorithm, which we propose for this problem, is a modification of the Learned Primal-Dual (LPD) algorithm [1], which we extend by taking breast thickness measurements into account in order to improve reconstruction quality. These breast thickness measurements are computed as the distance between the detector cover plate and the compression paddle, and are available during testing. The thickness measurement for each breast can be turned into a 2D mask m of constant height and full width, which restricts, along one axis, the region in which the breast can be located. Compared to the base LPD algorithm, we have seen that the addition of mask information leads to more stable training and higher reconstruction quality. The complete training procedure is provided as Algorithm 1, where P denotes the projection operator and P^T the backprojection. At test time, we compute the reconstruction from the given height mask m and the projections g as compute_reconstruction(g, m).
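Constructing the thickness mask can be sketched as below; the image orientation convention (detector cover plate at the bottom rows) is an assumption made for illustration.

```python
import numpy as np

def thickness_mask(shape, thickness_px):
    """Build a 2D breast-thickness mask (sketch).

    The mask is 1 on a full-width band of rows spanning the measured
    compressed thickness above the detector cover plate (bottom of the
    image here), and 0 elsewhere.
    """
    mask = np.zeros(shape, dtype=np.float32)
    mask[-thickness_px:, :] = 1.0  # constant height, full width
    return mask
```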

Function compute_reconstruction(g, m):
    f_0 ← 0 ;                                # Initialize primal vector
    h_0 ← 0 ;                                # Initialize dual vector
    for i ← 1 to I do
        h_i ← Γ_i(h_{i-1}, P(f_{i-1}), g) ;  # Dual reconstruction block
        f_i ← Λ_i(f_{i-1}, P^T(h_i), m) ;    # Primal reconstruction block
    end for
    return f_I

for step ← 1 to N do
    if step mod freq = 0 then
        f_true ← sample image from the training dataset ;
        g ← retrieve corresponding projection data ;
        m ← retrieve corresponding mask data ;
    end if
    L ← loss(compute_reconstruction(g, m), f_true) ;
    change parameters to reduce L ;
end for

Algorithm 1: DBToR algorithm and training
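The "change parameters" step corresponds to one ADAM update with a cosine-annealed learning rate. The following is a textbook sketch of both, not the authors' implementation, which would use a deep learning framework's built-in optimizer.

```python
import math
import numpy as np

def cosine_lr(step, total_steps, lr_max):
    """Cosine-annealed learning rate, decaying from lr_max to 0."""
    return 0.5 * lr_max * (1 + math.cos(math.pi * step / total_steps))

def adam_step(theta, grad, state, lr, beta1=0.9, beta2=0.999, eps=1e-8):
    """One ADAM update on a parameter vector (textbook form)."""
    state["t"] += 1
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad       # 1st-moment estimate
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad ** 2  # 2nd-moment estimate
    m_hat = state["m"] / (1 - beta1 ** state["t"])             # bias correction
    v_hat = state["v"] / (1 - beta2 ** state["t"])
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps)
```

For example, iterating `adam_step` on the gradient of a simple quadratic loss drives the parameter to the minimizer, which is the role it plays for the network weights in Algorithm 1.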

Input sinogram data is log-transformed, after which we scale it so that the mean and standard deviation across the training dataset take fixed values. Forward projection is a linear operator. In all experiments freq = 1; the remaining hyperparameter values differ between training on noisy data and training on noise-free data. Γ_i and Λ_i are neural networks with weights θ_i^d and θ_i^p respectively, which we call the dual reconstruction block and the primal reconstruction block. Each primal/dual reconstruction block is a ResNet-type block consisting of 3 convolutional layers, similar to the reconstruction blocks in the LPD algorithm [1]. The primal and dual vectors are initialized with zeros. Parameters of the neural network are optimized by performing iterations of the ADAM optimizer, using cosine annealing as the learning rate schedule starting from a fixed initial learning rate.
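The log-transform and scaling of the input sinogram can be sketched as follows; the normalization targets are left as parameters since the exact values are not specified here.

```python
import numpy as np

def preprocess_sinogram(counts, n0, mean, std):
    """Log-transform raw photon counts into line integrals, then standardize.

    counts : measured photon counts per detector pixel
    n0     : emitted photon count N_i
    mean, std : normalization statistics computed over the training dataset
    """
    counts = np.maximum(counts, 1.0)         # guard against log(0) in noisy data
    line_integrals = -np.log(counts / n0)    # invert Beer-Lambert: recovers A @ mu
    return (line_integrals - mean) / std
```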

Figure 2: Sample reconstructions for different noise levels; panels (a)–(c) show the baseline reconstruction and panels (d)–(f) show DBToR, each at three noise levels.

3 Results

In this section we summarize the results and compare the proposed DBToR algorithm to the baseline iterative reconstruction algorithm and to the Learned Primal-Dual algorithm. We trained two versions of the DBToR algorithm: one on noise-free projections and one on noisy projections at a fixed noise level. For DBToR trained on noise-free data we report the corresponding loss, Structural Similarity Index (SSIM) and Peak Signal-to-Noise Ratio (PSNR) on noise-free test data in Table 1, while for DBToR trained on noisy projections we report these metrics for several noise levels in Table 2. These results were obtained by making 3 random cross-validation splits with approximately 50% for training and 50% for testing at the 'patient' level, ensuring that all breast slices of any specific patient belong either to the train set or to the test set for each split. For noise-free projections, we trained the basic LPD algorithm in addition to DBToR in order to compare performance (Table 1). Since LPD performed poorly on noise-free projections, we excluded it from further training on noisy projections.
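Patient-level splitting can be sketched as below; all names (`slice_ids`, `patient_of`) are illustrative, not the authors' code.

```python
import random
from collections import defaultdict

def patient_level_split(slice_ids, patient_of, test_fraction=0.5, seed=0):
    """Split slices into train/test so that all slices of any one patient
    end up on the same side of the split (sketch)."""
    by_patient = defaultdict(list)
    for s in slice_ids:
        by_patient[patient_of[s]].append(s)
    patients = sorted(by_patient)
    random.Random(seed).shuffle(patients)           # randomize patient order
    n_test = int(round(test_fraction * len(patients)))
    test_patients = set(patients[:n_test])
    train = [s for s in slice_ids if patient_of[s] not in test_patients]
    test = [s for s in slice_ids if patient_of[s] in test_patients]
    return train, test
```

Repeating this with different seeds gives the random cross-validation splits used for reporting mean and standard deviation of the metrics.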

The proposed DBToR algorithm outperforms the baseline iterative reconstruction algorithm at all noise levels and for all metrics considered, while yielding visually more accurate reconstructions as well (see the ground truth in Figure 1 and the reconstructions in Figure 2). The LPD algorithm is significantly outperformed in the noise-free case. It is also interesting to note from Table 2 that the performance of DBToR at a given noise level is comparable to that of the baseline iterative reconstruction algorithm at a noise level corresponding to a 4 times higher photon count. Further performance gains are to be expected when training on a larger dataset.
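MSE and PSNR are straightforward to compute directly (SSIM is available, e.g., as `skimage.metrics.structural_similarity` in scikit-image); a minimal sketch:

```python
import numpy as np

def mse(x, y):
    """Mean squared error between two images."""
    return float(np.mean((x - y) ** 2))

def psnr(x, y, data_range=1.0):
    """Peak signal-to-noise ratio in dB, relative to the image data range."""
    return float(10 * np.log10(data_range ** 2 / mse(x, y)))
```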

Model -loss SSIM PSNR
Baseline (noise-free)
LPD algorithm (noise-free)
DBToR algorithm (noise-free)
Table 1: Result summary for testing on noise-free projections; mean ± standard deviation across 3 cross-validation dataset splits is given for each algorithm and each metric.
Model -loss SSIM PSNR
Baseline ()
DBToR ()
Baseline ()
DBToR ()
Baseline ()
DBToR ()
Table 2: Result summary for testing on noisy projections at different noise levels; mean ± standard deviation across 3 cross-validation dataset splits is given for each algorithm, metric, and noise level.

4 Discussion and conclusions

We have presented DBToR, a modification of the Learned Primal-Dual reconstruction algorithm, which is specifically suited for digital breast tomosynthesis. We showed that adding priors such as the breast thickness improves learning stability, generalization and reconstruction quality. Furthermore, we have shown that the DBToR algorithm outperforms the baseline iterative reconstruction algorithm and is robust to noise.

This paper has not been submitted for consideration elsewhere.

References

  • [1] J. Adler and O. Öktem, “Learned Primal-Dual Reconstruction,” IEEE Trans. Med. Imaging, vol. 37, no. 6, pp. 1322–1332, 2018.
  • [2] B. A. Lau, I. Reiser, R. M. Nishikawa, and P. R. Bakic, “A statistically defined anthropomorphic software breast phantom,” Med. Phys., vol. 39, no. 6, pp. 3375–3385, 2012.
  • [3] G. R. Hammerstein, D. W. Miller, D. R. White, M. E. Masterson, H. Q. Woodard, and J. S. Laughlin, “Absorbed radiation dose in mammography,” Radiology, vol. 130, no. 2, pp. 485–491, 1979.
  • [4] J. M. Boone and A. E. Chavez, “Comparison of x-ray cross sections for diagnostic and therapeutic medical physics,” Med. Phys., vol. 23, no. 12, pp. 1997–2005, 1996.
  • [5] J. Nuyts, B. De Man, P. Dupont, M. Defrise, P. Suetens, and L. Mortelmans, “Iterative reconstruction for helical CT: a simulation study,” Phys. Med. Biol., vol. 43, no. 4, pp. 729–737, 1998.
  • [6] A. Rodriguez-Ruiz, J. Teuwen, S. Vreeman, R. W. Bouwman, R. E. van Engen, N. Karssemeijer, R. M. Mann, A. Gubern-Merida, and I. Sechopoulos, “New reconstruction algorithm for digital breast tomosynthesis: better image quality for humans and computers,” Acta Radiologica, 2017.