1 Introduction
Breast cancer screening with mammography has proven effective in reducing breast cancer-related mortality (Plevritis and others, 2018). However, since mammography is a 2D imaging modality, it projects the internal tissue of the breast onto a single plane, yielding tissue superposition. This results in lower sensitivity and specificity, especially for dense breasts. In addition, this lack of information on the true tissue distribution in the vertical (x-ray source to detector) direction limits the ability to accurately estimate breast density (i.e., the proportion of the breast that consists of fibroglandular tissue). Up to now, estimates of breast density from mammography have necessitated models of the image acquisition process and assumptions and simplifications regarding the tissue distribution. Although these models have been characterized for consistency and precision, their accuracy is unknown. Estimation of breast density is of intense interest because it is an important factor for both masking effects and breast cancer risk (McCormack, 2006). As a result of its significance, breast density reporting is now mandated by law in several states in the USA. This makes it especially important to use methods that provide reliable, robust, and consistent breast density estimates.
In addition to limiting the accuracy of breast density estimation, the 2D nature of mammography makes it impossible to estimate the radiation dose received by a specific patient during image acquisition. This is because, in breast imaging, the dose of interest is only that to the fibroglandular tissue, since this is the tissue at risk of developing breast cancer (Dance and Sechopoulos, 2016), and to determine this dose it is necessary to know the vertical location of this tissue within the breast. Therefore, all current breast dosimetry is based on dose estimates for a model breast, which does not reflect the dose deposited in the actual patient breast. It has been shown that the use of a model breast results in an average overestimation of the true patient dose of 30%, and that this error can be as high as 120%, if not more (Dance and others, 2005; Sechopoulos et al., 2012; Hernandez et al., 2015). Even if a more accurate, unbiased model of the average breast is developed, an effort which is currently ongoing (Arana Peña et al., 2020), the over 100% error in patient-specific dose estimates caused by the use of a population-wide model will not be ameliorated. To obtain accurate radiation dose estimates, the actual amount and position of the fibroglandular tissue in the actual patient's breast need to be considered. In this work, we address the estimation of both the amount and the location of the fibroglandular tissue within each patient's breast, resulting in accurate estimates of both the breast density and the fibroglandular dose.
Digital breast tomosynthesis (DBT) has been introduced over the last two decades to decrease the impact of tissue superposition in mammography by providing limited depth information, resulting in improved detection and diagnosis performance (Niklason and others, 1997; Zackrisson and others, 2018; Zuley and others, 2013). Hence, DBT is rapidly replacing digital mammography as the primary x-ray technique for breast imaging. DBT imaging consists of acquiring several low-dose planar x-ray projections over a limited angular range. These projections are then used to reconstruct a pseudo-3D volume, albeit with very limited vertical spatial resolution. Even so, the introduction into widespread screening use of an imaging modality that provides some vertical information offers the opportunity, for the first time, to obtain accurate estimates of the breast density and the fibroglandular dose. However, given the very poor vertical spatial resolution obtained with all current DBT reconstruction algorithms, the localization of the fibroglandular tissue in DBT images has proven extremely challenging. Previous efforts to achieve this have not resulted in improvements over the model-based dose estimates obtained from 2D imaging (Geeraert, 2014), or have involved algorithms that require substantial manual input, making them challenging to implement clinically (Förnvik et al., 2018).
In general, the reconstruction of a 3D volume from computed tomography (CT) or DBT projections is an inverse problem, and multiple algorithms exist for solving such problems. For DBT in particular, the choice of reconstruction algorithm and regularization parameters can greatly influence the reconstruction quality, as shown in previous work (Rodriguez-Ruiz and others, 2017; Michielsen et al., 2016). In this work, we propose a new approach to DBT reconstruction based on our earlier work (Moriakov et al., 2019), using deep learning methods, that results in a 3D representation of the imaged breast optimized for estimating the true distribution of the fibroglandular tissue. This in turn allows for accurate estimation of both the breast density and the radiation dose imparted to it, improving significantly upon the state-of-the-art.
A recent and very promising development in medical image reconstruction is the use of methods that rely on deep learning (Arridge et al., 2019). The goal of this paper is to show the potential of such methods for DBT reconstruction, using the quantitative estimation of breast density and radiation dose as the target application. For this, we extend our previous results on DBT reconstruction (Moriakov et al., 2019) and investigate a data-driven reconstruction algorithm called Deep Breast Tomographic Reconstruction (DBToR), in which the reconstruction is computed from projection data with a deep neural network. To train the model, and to test the performance of DBToR for the tasks of breast density and radiation dose estimation, we used dedicated breast CT images, in which the tissue distribution is known, and a finite-element model to simulate the change in tissue distribution under compression in DBT. In addition to these images, we also evaluate the model on simulated breast phantoms.
1.1 Our contribution
In this work, we show: (i) the feasibility of DBT reconstruction using deep learning, having developed a novel deep learning-based architecture for this purpose; (ii) that the resulting reconstructions have greatly improved vertical resolution compared to state-of-the-art analytical and iterative reconstruction methods; (iii) that the proposed reconstruction method provides accurate breast density estimates; and (iv) that the dense tissue location information results in accurate patient-specific dosimetric estimates.
2 Deep learning-based reconstruction
Before we describe our model in more detail, we give a brief overview of reconstruction problems formulated as inverse problems.
2.1 Inverse problems and DBT reconstruction
Image reconstruction can be stated as an inverse problem. Formally, this means that given an image x and measured projection data y, we have

(1) y = A(x) + e,

where A : X → Y is the forward operator, or projection operator, that models how the image gives rise to the projection data in the absence of noise, and e is a Y-valued random variable modeling the noise component of the measurements. Typically, the spaces X and Y are Banach spaces; here X and Y denote the spaces of functions describing the true breast anatomy and the measurements, respectively. The noise e can typically be considered Gaussian when the photon count is large enough, as is usually the case in transmission imaging.

This problem can also be considered from a Bayesian perspective, where X and Y are probability spaces and the probability distribution π on X is called the prior. Bayes' theorem states that

(2) p(x | y) = p(y | x) π(x) / p(y),

where the conditional probability p(y | x) is called the likelihood of the data y, which expresses the probability of taking measurements y given data x. The likelihood is derived from the forward model, using knowledge of the forward operator A and the distribution of the noise e.
The goal of reconstruction is to retrieve the image x from the noise-corrupted measurements y. Inversion of the operator A is generally an ill-posed problem, and there exist different approaches towards estimating x. From a probabilistic perspective, two of the most important estimators are the Maximum Likelihood and Bayes estimators. The Maximum Likelihood estimate is the x that maximizes the likelihood p(y | x) of the data y. For the Bayes estimator, the goal is to minimize the expected loss over all estimators x̂ that give an estimate x̂(y) of x given measurements y. That is, for the squared error loss,

(3) x̂_Bayes = argmin_{x̂} E_{(x,y)} [ ‖x̂(y) − x‖² ].

It can be shown that the Bayes estimate of the unknown parameter x from measurements y is simply the mean of the posterior distribution p(x | y), that is,

(4) x̂_Bayes(y) = E[x | y].

In our approach, we obtain a neural network approximation to the Bayes estimator, where the minimum in Equation (3) is taken over the estimators x̂_θ given by a family of neural networks and the optimization is performed using mini-batch stochastic gradient descent.
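As a toy illustration of the estimators above, the following sketch compares the Maximum Likelihood estimate with the closed-form posterior mean in a scalar Gaussian model y = a·x + e (an illustrative assumption; this is not the DBT setting, where A is a projection operator and the estimator is a neural network). The Bayes estimator attains the lower expected squared error:

```python
import numpy as np

rng = np.random.default_rng(0)
a, sx2, se2 = 2.0, 1.0, 0.5           # forward scalar, prior variance, noise variance (assumed)

# Sample (x, y) pairs from the joint model: x ~ N(0, sx2), y = a*x + e.
x = rng.normal(0.0, np.sqrt(sx2), 100_000)
y = a * x + rng.normal(0.0, np.sqrt(se2), x.size)

x_ml = y / a                           # Maximum Likelihood estimate: inverts A, amplifies noise
w = a * sx2 / (a**2 * sx2 + se2)       # posterior-mean weight for this Gaussian model
x_bayes = w * y                        # Bayes estimate = E[x | y]

mse_ml = np.mean((x_ml - x) ** 2)      # empirical expected squared loss of each estimator
mse_bayes = np.mean((x_bayes - x) ** 2)
print(mse_ml, mse_bayes)               # the Bayes estimator achieves the lower loss
```

Here the posterior mean has a closed form; for DBT the posterior is intractable, which motivates approximating the Bayes estimator with a trained network.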
2.2 DBT neural network
The DBToR algorithm, a data-driven algorithm designed for DBT reconstruction, extends the Learned Primal-Dual (LPD) reconstruction algorithm (Adler and Öktem, 2018) by incorporating additional prior information about the geometry in the form of the thickness of the breast under compression in DBT (as measured by the DBT device and stored in the DICOM image header). The algorithm provides a Bayes estimator given by a deep neural network, which is trained to reconstruct the images directly from projection data. The choice of a particular neural network architecture can greatly affect the results and is non-trivial. The DBToR neural network consists of several 'reconstruction blocks', which take in the projection data, together with information on the thickness of the breast under compression, as the initial input; perform forward and backward passes by taking projections and backprojections; and use a convolutional neural network to produce an intermediate reconstruction result, which is then improved further by each successive reconstruction block. The architecture of the network is illustrated in Figure 1.
We denote the training set of images by {x_i}. For an image x_i, we let y_i be the corresponding projection data. We assume that the training data is a representative sample from the domain of images that we want to reconstruct (DBT images). The neural network is trained in a supervised fashion as follows. We repeatedly sample an image x_i and the corresponding input projection data y_i from the training dataset. The corresponding thickness mask is denoted by h_i and is represented by a rectangular mask with the same width as the detector and whose height is given by the measured breast thickness after compression. These measurements are provided by the DBT system and are available both at training and at test time.
Using this information, the reconstruction x̂ is given by a deep neural network Γ_θ with parameters θ. We train the network in a supervised fashion with the squared loss ‖Γ_θ(y_i, h_i) − x_i‖². The input projection data is log-transformed and scaled such that the standard deviation and mean over the training set are 1. The algorithm is described in Figure 2 and the network architecture in Figure 1. In all experiments, the number of primal blocks and the number of dual blocks were set to a fixed value. The parameters are updated using the Adam optimizer with a cosine annealing learning rate schedule (Loshchilov and Hutter, 2019), i.e., the learning rate in step t was η_t = η_0 · ½(1 + cos(πt/T)), where η_0 is the initial learning rate and T is the total number of iterations (as in Algorithm 2). The other Adam parameters were the default choices. The primal and dual blocks are ResNet-type blocks consisting of three convolutional layers followed by a max pooling layer. Finally, the operators A and A* are the forward and backward projection operators, respectively. At test time, the algorithm takes the input projection data and the breast thickness information to compute the reconstruction using the function Γ_θ.

In total, we trained three versions of the DBToR algorithm: two versions were trained on virtual phantom projections, one without any noise and one with varying levels of noise, and one version was pre-trained on noisy virtual phantom projections and subsequently fine-tuned on noisy deformed breast computed tomography (BCT) phantoms. These data are described in full detail in Sections 3.1 and 3.2, respectively. For comparison, we also trained the basic LPD algorithm on noise-free virtual phantom projections. This was achieved by removing the thickness mask from the algorithm input.
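The unrolled primal-dual structure of the reconstruction blocks can be sketched in a few lines of numpy. The learned blocks are replaced here by fixed gradient-type updates (purely illustrative stand-ins; in DBToR each block is a trained ResNet-type CNN, and the thickness mask is concatenated to the primal input), so this is a sketch of the iteration pattern, not of the trained network:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_rays, n_iters = 16, 8, 50

A = rng.uniform(0.0, 1.0, (n_rays, n_pixels))   # toy linear forward (projection) operator
x_true = rng.uniform(0.0, 1.0, n_pixels)
y = A @ x_true                                   # noiseless projection data

# Stand-ins for the learned blocks: in DBToR, each dual block and each primal
# block is a small CNN; here they are fixed updates so the structure stays visible.
def dual_block(Ax, y):
    return Ax - y                                # data-space residual

def primal_block(x, ATh):
    return x - 0.01 * ATh                        # image-space correction (small fixed step)

x = np.zeros(n_pixels)                           # initial reconstruction
for _ in range(n_iters):
    h = dual_block(A @ x, y)                     # forward project, update in data space
    x = primal_block(x, A.T @ h)                 # back-project, update in image space

print(np.linalg.norm(A @ x - y) / np.linalg.norm(y))   # relative residual shrinks
```

With fixed blocks this reduces to a classical gradient-type scheme; the point of LPD-style methods is that training lets each block learn a problem-specific, far more effective update.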
2.3 Reconstructed image classification
The final step of the reconstruction pipeline for the estimation of breast density and of the radiation dose involved in image acquisition is the classification of the reconstructed image into skin, adipose, and fibroglandular tissue voxels. This is required because, rather than relying on voxel attenuation values, which can be quite similar for different types of tissue, we need correct tissue labels to compute the breast density and the dose absorbed by the fibroglandular tissue. Given its high contrast, the skin layer was segmented with a fast seeded region-growing algorithm (Adams and Bischof, 1994), which grows the segmented region starting from a subset of seeds (corresponding to the voxels located on the outer edge of the skin layer) by subsequently including voxels whose intensity is higher than or equal to the mean seed intensity value. For fibroglandular tissue classification, we extended our previous work on breast CT classification (Caballo et al., 2018): in the first step, the skin is temporarily removed from the image, and the resulting representation undergoes a well-established automatic thresholding method based on fuzzy c-means clustering (Bezdek, 1981), an algorithm generally adopted for images with low noise content but a non-negligible degree of blurring, as is the case for our images. Briefly, voxels are iteratively assigned to a given class (adipose or fibroglandular tissue) in an unsupervised fashion, with the iteration stopping criterion aiming at maximizing the distance between the average voxel values of the two classes. As opposed to traditional cluster analysis, this method allows for a degree of fuzzy overlap between the classes over each iteration, which helps classify the boundary voxels in each subsequent iteration. The fuzzy partition term (Bezdek, 1981) was experimentally tuned to a value of 2.0.

3 Materials and Methods
To train and evaluate the algorithm, we created two datasets of 3D breast phantoms from which we extracted the coronal slices and their corresponding DBT projections. The first dataset consists of virtual 3D breast phantoms generated using a stochastic model, while the second is based on patient dedicated BCT images. DBT projections of these phantoms were simulated using deterministic simulation methods, with the posterior addition of Poisson noise. The use of virtual phantoms not only provided training data, but also allowed for assessing the accuracy of the density and dose estimates, since ground truth is known.
Each voxel of these phantoms was indexed with a label denoting the corresponding tissue type: skin, adipose tissue, fibroglandular tissue, and Cooper's ligaments. The elemental compositions of these materials were obtained from the work of Hammerstein et al. (1979), except for the composition of Cooper's ligaments, which was assumed to be identical to that of fibroglandular tissue. Linear attenuation coefficients at a typical average energy of the spectra in clinical DBT imaging were calculated for each material using the software of Boone and Chavez (1996). The resulting linear attenuation coefficients were computed for adipose tissue, fibroglandular tissue (and Cooper's ligaments), and skin.
This process was done for the virtual phantom dataset and the patient BCT dataset.
3.1 Virtual phantom
We extracted 41499 coronal slices from 50 breast phantoms generated using the method of Lau et al. (2012). This method generates breast phantoms in two steps: first, the breast structure is simulated on a coarse scale by generating large compartments of adipose tissue (Zhang et al., 2008; Bakic et al., 2011). Second, finer detail for the fibroglandular tissue is added in the form of power-law noise (Reiser and Nishikawa, 2010). An example of the simulated phantoms is shown in Figure 3.
These 41499 coronal slices were used for training and validating the algorithm. Each breast phantom was either included in a training or validation fold completely or not at all, in order to prevent data contamination and bias.
3.2 Patient dedicated breast CT phantoms
Patient dedicated breast CT images were acquired for an unrelated, ethics-board-approved patient study evaluating this imaging technology. The images were released for other research purposes after anonymization. In order to compute the density and the accumulated dose to the fibroglandular tissue, the patient breast CT images were automatically classified into four categories (air, skin, adipose, and fibroglandular tissue) using a previously developed algorithm for BCT image classification (Caballo et al., 2018) (see Section 2.3). The classified breasts then underwent simulated mechanical deformation as previously described (Fedon and others, 2019; García et al., 2020). Briefly, the breasts were converted into a finite element (FE) biomechanical model using the package iso2mesh (v.1.8; Matlab v.13a). A large number of 4-node tetrahedral elements, between 100k and 500k, was used in order to minimize the numerical error during the FE analysis (del Palomar et al., 2008). Nearly incompressible (Poisson ratio equal to 0.495), homogeneous, and isotropic neo-Hookean material models were used to describe the mechanical behaviour of each tissue, with the Young's moduli for fibroglandular, adipose, and skin tissue set according to Wellman (1999). The mechanical compression was then simulated using the open-source package NiftySim (v.2.3.1; University College London) (Johnsen and others, 2015), which uses a Total Lagrangian Explicit Dynamics approach to solve the mechanical FE problem (Miller et al., 2007). Each breast model was compressed to the thickness recorded in the corresponding DICOM header of the cranio-caudal DBT view of that breast, which was acquired, for clinical purposes, during the same visit as the acquisition of the BCT image.
The total phantom population covers a range of compressed breast thicknesses and chest wall-to-nipple distances, with an isotropic voxel size that is more than sufficient for dosimetric applications (Fedon and others, 2019).
Using this method, of which an example is given in Figure 4, we obtained a total of 28891 deformed BCT slices extracted from 91 different patient breasts. Given that the number of deformed BCT slices was substantially lower than that of virtual phantom slices, we pre-trained the model with the latter, and then fine-tuned the model using the BCT slices from 46 patient BCT images. The other 45 patient BCT image phantoms were used for testing the reconstruction performance of the model and the accuracy of the density and dosimetry estimates. Each patient breast was either completely included or completely excluded when selecting slices for fine-tuning and testing the model, in order to prevent data contamination and bias.
3.3 Projection data
Limited angle fan-beam projections were simulated for all coronal phantom slices using a geometry with the center of rotation placed at the center of the phantom, as seen in Figure 5. The x-ray source was placed above the center of rotation at a fixed source-detector distance. A total of 25 equally spaced projections between −24° and +24° were generated, with the detector rotating with the x-ray source. The detector was a perfect photon-counting system (100% efficiency) consisting of 1280 elements.
The forward model

y_i = Poisson( N_i · exp( −Σ_j l_ij μ_j ) )

was used for the simulations, with y_i being the simulated projection data for detector pixel i, N_i the number of x-ray photons emitted towards detector pixel i, and l_ij the length of the intersection between voxel j and the line between the source and detector pixel i. The linear attenuation coefficient in voxel j is denoted by μ_j. For the virtual phantom data, we generated a series of data sets at 3 noise levels from the noiseless simulated projections. This was accomplished by scaling the photon count N_i, with one value per noise level. For each noise level, a single Poisson noise realization was generated. For the deformed BCT phantoms, only one of these photon counts was used.
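A minimal sketch of this forward model for a single ray, with made-up attenuation values, intersection lengths, and photon count (all assumptions for illustration), could look as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D ray path: per-voxel linear attenuation coefficients mu_j (1/cm),
# with hypothetical values loosely contrasting adipose- vs fibroglandular-like voxels.
mu = np.array([0.05, 0.05, 0.09, 0.09, 0.05])    # assumed attenuation values
l = np.full(mu.size, 0.1)                        # intersection lengths l_ij (cm), assumed

N0 = 10_000                                      # photons emitted towards this detector pixel
expected = N0 * np.exp(-np.sum(l * mu))          # Beer-Lambert: mean transmitted count
y = rng.poisson(expected)                        # one Poisson noise realization

print(expected, y)
```

Lowering N0 raises the relative Poisson noise (standard deviation over mean scales as 1/sqrt(expected)), which is exactly how the different noise levels are produced.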
Baseline reconstructions were generated for both noiseless and noisy data using the Maximum Likelihood for Transmission (MLTR) algorithm (Nuyts et al., 1998), using the compressed breast thickness to set the size of the reconstruction volume and with no additional regularization. The MLTR algorithm with a smoothing prior and a resolution model is competitive with state-of-the-art DBT reconstruction methods (Michielsen, 2016, pp. 165–181). Resolution effects were not included in the simulations and thus were not needed in the reconstruction. The smoothing prior was not included because it does not influence the spatial distribution of fibroglandular tissue in the reconstruction.
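For reference, the additive MLTR update of Nuyts et al. (1998) can be sketched on a small random system (toy dimensions, assumed blank-scan count, and noiseless data, all purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_rays, n_voxels = 100, 10

L = rng.uniform(0.0, 0.2, (n_rays, n_voxels))    # intersection lengths l_ij (toy geometry)
mu_true = rng.uniform(0.02, 0.1, n_voxels)       # "true" attenuation map (assumed values)
b = 10_000.0                                     # blank-scan photon count per ray (assumed)
y = b * np.exp(-L @ mu_true)                     # noiseless transmission data

mu = np.zeros(n_voxels)                          # start from an empty volume
for _ in range(500):
    yhat = b * np.exp(-L @ mu)                   # currently predicted counts
    num = L.T @ (yhat - y)                       # gradient of the Poisson log-likelihood
    den = L.T @ (L.sum(axis=1) * yhat)           # separable surrogate curvature
    mu += num / den                              # additive MLTR step

print(np.abs(mu - mu_true).max())                # approaches the true attenuation map
```

The surrogate denominator guarantees a monotonic likelihood increase; on this small noiseless system the iterates converge to the true attenuation values.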
3.4 Density computation
Breast density by mass, also called glandularity (g), was computed as follows:

(5) g = N_f ρ_f / (N_f ρ_f + N_a ρ_a),

where N_f is the number of voxels classified as fibroglandular tissue, N_a is the number of voxels classified as adipose tissue, and ρ_f and ρ_a are the corresponding mass densities for fibroglandular and adipose tissue, respectively, according to Hammerstein et al. (1979). The true breast density of each BCT phantom was obtained by applying this equation to the phantom volumes themselves. The estimated breast density resulting from the proposed method was obtained by applying the equation to the classified reconstructed images.
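A minimal sketch of this computation, using approximate Hammerstein et al. (1979) mass densities (assumed here as 1.04 g/cm³ for fibroglandular and 0.93 g/cm³ for adipose tissue) and hypothetical voxel counts:

```python
# Breast density by mass from classified voxel counts (Equation (5) sketch).
# The default densities below are assumed approximations, not values from this paper.

def glandularity(n_fib, n_adip, rho_fib=1.04, rho_adip=0.93):
    """Glandularity g = mass of fibroglandular tissue over total soft-tissue mass."""
    m_fib = n_fib * rho_fib           # fibroglandular mass (the voxel volume cancels out)
    m_adip = n_adip * rho_adip        # adipose mass
    return m_fib / (m_fib + m_adip)

# e.g. a hypothetical breast with 200k fibroglandular and 800k adipose voxels:
print(round(100 * glandularity(200_000, 800_000), 1))  # → 21.8 (percent)
```

Note that because both masses share the same voxel volume, only the voxel counts and the two mass densities matter, which is why correct tissue labels, rather than raw attenuation values, are required.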
3.5 Dose calculation
The mean glandular dose (MGD) estimations were performed using a previously described and validated Monte Carlo code (Fedon et al., 2018), based on the Geant4 toolkit (release 10.05, December 2018). The Monte Carlo geometry replicates the one used to generate the projections and is shown in Figure 5.
As during the simulation of the DBT projections above, each voxel was labelled with an index denoting its composition (air, adipose tissue, fibroglandular tissue, or skin), using the chemical compositions reported by Hammerstein et al. (1979).
The energy deposited in the fibroglandular voxels was recorded and then converted into dose according to the formula

(6) MGD = (1 / M_g) Σ_i E_i,

where E_i is the energy deposited at interaction event i, and M_g is the total fibroglandular breast mass.
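As a sketch of this conversion, with made-up energy deposits and a made-up fibroglandular mass (in the actual pipeline these come from the Monte Carlo simulation and the classified volume):

```python
# Equation (6) sketch: converting per-event energy deposits into mean glandular dose.
# All numeric values below are illustration-only assumptions.

deposited_keV = [12.3, 8.1, 25.0, 4.4, 17.2]     # energy E_i per interaction event (keV)
keV_to_joule = 1.602176634e-16                   # 1 keV expressed in joules
m_fib_kg = 0.25                                  # total fibroglandular mass M_g (kg)

dose_gray = sum(deposited_keV) * keV_to_joule / m_fib_kg   # Gy = J / kg
print(dose_gray)
```

Only energy deposited in fibroglandular voxels enters the sum, which is why the voxel-wise tissue labels from the classification step are a prerequisite for the dose estimate.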
Table 1. X-ray spectra per compressed breast thickness group, matched to the measured first half-value layer (HVL).

Breast Thickness (mm)  Spectrum     HVL (mm Al)  # Cases
30–39                  W/Rh, 27 kV
40–49                  W/Rh, 28 kV
50–59                  W/Rh, 29 kV
60–69                  W/Rh, 30 kV
70–79                  W/Rh, 31 kV
Primary x-rays were emitted by an isotropic point source collimated to irradiate the entire detector. In order to replicate the tomosynthesis acquisition mode, a total of 25 projections were simulated from −24° to +24°, every 2°. The projection at 0° replicates the mammographic acquisition. The number of primary particles ensured a low statistical uncertainty on the total dose, evaluated using the method proposed by Sempau et al. (2001). Photoelectric interactions and coherent and incoherent scatter were included in the simulations, without modifying the default cut range for photons.
The x-ray spectra were modeled using the TASMICS model (Hernandez and Boone, 2014) by adjusting the thickness of the modeled rhodium filter to match the first half-value layer measured with a solid-state detector (RaySafe X2 MAM sensor, Billdal, Sweden) in the modeled system, as shown in Table 1. The Monte Carlo simulations for estimating the MGD were performed twice for each breast: once for each of the 45 BCT phantoms, and once for the corresponding labelled DBToR reconstructions. In this way, the accuracy of the resulting patient-specific dosimetry estimates could be assessed.
3.6 Comparison to current reconstructions
The results of the DBToR reconstruction, prior to the voxel classification for estimation of breast density and dose, were compared to the baseline iterative MLTR reconstruction algorithm and the LPD algorithm. The LPD algorithm was trained using the noiseless version of the simulated projections.
4 Results
All models were trained on a single NVIDIA Titan X GPU with 12 GB of memory and batch size 1; the number of training iterations was the same for all models. Training each model took approximately a week. At inference time, the reconstruction takes around 1 second for a single coronal slice. For DBToR trained on noise-free virtual phantom data, we report the corresponding loss, Structural Similarity Index (SSIM) (Wang et al., 2004), and Peak Signal-to-Noise Ratio (PSNR) on noise-free virtual phantom test data in Table 2. Lower loss and higher SSIM and PSNR values indicate better reconstruction performance. For DBToR trained on noisy virtual phantom projections, where a single noise level was used during training, we report these metrics for all noise levels in Table 3. In both tables, we report the mean and standard deviation of the metrics obtained using cross-validation folds, with part of the data used for training and the remainder used for testing in each fold.

The LPD algorithm is significantly outperformed in the noise-free case. Visual inspection of the slices produced by LPD revealed that LPD often reconstructs the breast regions adjacent to the compression paddle very poorly, for both test and training data. In particular, it frequently fails to reproduce the flatness of the part of the skin surface that is in contact with the compression paddle. Since the same type of reconstruction artifacts appears for both training and test data, we ruled out overfitting of LPD as the cause of the artifacts. While it is possible that a much larger version of LPD with more reconstruction blocks would learn to correct these artifacts, we will see that DBToR resolves them without any further increase in the number of parameters. Since LPD performed poorly on noise-free projections (see Table 2), we excluded it from further training on noisy projections. The proposed DBToR algorithm outperforms the MLTR reconstruction algorithm at all noise levels and for all metrics considered, while yielding visually more accurate reconstructions as well. It is also interesting to note from Table 3 that the performance of DBToR at a given noise level is comparable to that of the MLTR reconstruction algorithm at a 4 times higher photon count. At the lowest noise level, the performance of MLTR is slightly below the performance of MLTR in the noise-free case. At the same time, DBToR at the lowest noise level reaches a level of performance comparable to that of DBToR on noise-free data, with a slightly higher PSNR and slightly lower SSIM compared to the noise-free version.
Table 2. Reconstruction performance on noise-free virtual phantom test data.

Model   loss  SSIM  PSNR
MLTR
LPD
DBToR
Table 3. Reconstruction performance on noisy virtual phantom test data, per noise level.

Model      loss  SSIM  PSNR
MLTR ()
DBToR ()
MLTR ()
DBToR ()
MLTR ()
DBToR ()
Table 4. Reconstruction performance on noisy deformed BCT test data.

Model   loss  SSIM  PSNR
MLTR
DBToR
For DBToR trained on noisy virtual phantom projections and subsequently fine-tuned on deformed breast CT slices, with a single noise level used during both training and fine-tuning, we summarize the reconstruction performance in Table 4. In Figures 9, 10, and 11, we give examples of coronal, axial, and sagittal slices of the ground truth phantom and the corresponding MLTR reconstruction, DBToR reconstruction, and DBToR classification (all on noisy deformed breast CT slices). We observe that DBToR outperforms the baseline MLTR algorithm in terms of both the reported metrics and the visual quality of the slices, particularly noticeably in the coronal and sagittal directions.
4.1 Breast density estimation
Figure 6 shows a box-whisker plot of the absolute percentage difference in glandularity between the DBToR estimate and the ground truth (GT). It can be seen that, on average, no bias is observed, and that the breast density estimates are accurate to within 2%, and almost always within two standard deviations of the mean.
Figure 6: (top) Box-whisker plot and (bottom) Bland-Altman plot of the absolute difference (DBToR − GT) in glandularity (in percentage). The symbols in the box-whisker plot denote outliers.
4.2 Mean glandular dose estimation
The comparison between the MGD evaluated from the DBToR reconstructions and the GT is shown in the Bland-Altman plots of Figure 7, for both the mammography and DBT geometries.
As can be seen, no bias is observed in the proposed dose estimation method, with the data points equally scattered around zero, and the largest error in the dose estimation is less than 20%. Visual inspection of the cases that lie beyond the limits reveals that this is a consequence of differences in the reconstructed fibroglandular distribution obtained with the DBToR model compared to the GT, as shown in the example in Figure 8.
For this case, the absolute difference in glandularity between DBToR and the GT is small. However, due to a higher amount of fibroglandular tissue closest to the x-ray source in the GT, the radiation dose estimated by DBToR is lower.
5 Discussion and conclusion
We presented a deep learning-based method for the reconstruction of DBT, which we call DBToR. The model is both data-driven and model-based, since the forward and backprojection operators for a given DBT geometry are part of the neural network architecture, while at the same time the model is trained to reduce the tomographic artifacts of the reconstruction. As training data, we used two sources: one based on random samples with statistical properties similar to real breast volumes, and one dataset of patient breast CT images that were compressed with a finite element model to simulate the same breast under compression in a DBT system.
Compared to LPD, DBToR adds the compressed breast thickness as prior information. Since the limited angular range makes the problem severely ill-posed, we expected this information to be essential, and the experiments (Table 2) confirmed that the result degrades dramatically, compared to the 'full' DBToR, when this prior knowledge is not provided to the algorithm. Requiring this additional information does not limit the generalizability of the method, since it is readily available in all DBT systems.
The results indicate that the proposed algorithm outperforms the MLTR iterative reconstruction in terms of reconstruction quality for this application. Furthermore, the algorithm generalizes well even when trained on a small dataset and is robust to noise.
The simulated acquisitions in this work used a monoenergetic beam and did not include x-ray scattered radiation, so it remains to be seen how the new reconstruction method will handle these factors. In practice, the effect of not modeling the spectrum will likely be minimal, as regular filtered backprojection reconstructions also do not account for these physical effects and instead apply a series of pre-correction steps to the projection data, such as the beam hardening correction described by Herman (1979). We foresee using the same approach to extend our method to clinical data.
We have shown that the method achieves robust and accurate predictions of breast density, an important metric relating to masking and cancer risk. As opposed to current density estimation methods based on mammography and DBT projections, which require assumptions and modeling of the image acquisition process, the use of the images produced by DBToR allows for a direct estimation of the amount of dense tissue present in the volume, resulting in estimates accurate to within 3%. Accurate determination of breast density opens up further opportunities for personalized, risk-based screening. Crucially for an accurate dosimetric estimate, the location of the dense tissue in the vertical direction is also estimated accurately, resulting in state-of-the-art dosimetric estimates. It is known that current model-based dose estimates introduce an average bias of 30% and can misrepresent the actual patient-specific dose by up to 120% (Dance and others, 2005; Sechopoulos et al., 2012; Hernandez et al., 2015). In comparison, the results obtained here achieve errors below 20% with no systematic bias. True patient-specific dosimetry could be used, for the first time, to gather dose registries, especially for screening, ensuring the optimal use of this imaging technology. It would also allow for continuous monitoring of dose trends and provide valuable data for further optimization and development of existing and new imaging technologies.
The main limitation of the current work is that it operates on a slice-by-slice basis rather than on a full 3D volume. This simplification allowed us to concentrate on the network structure rather than on the logistics of handling the enormous datasets needed to train a 3D model.
The logical next step for our algorithm is an extension to fully 3D data instead of 2D slices. From there, we could extend the model-based parts of the deep learning network by including a polychromatic x-ray spectrum, x-ray scatter, and other relevant factors in the forward model, instead of starting from pre-corrected projection data as in current filtered backprojection methods. It could also be valuable to optimize the network output specifically for artificial intelligence reading by training the composite network in an end-to-end fashion. Finally, the reconstruction of a diagnostic-quality volume for interpretation by radiologists, with the potential for the higher vertical resolution obtained here, would be a valuable improvement for clinical performance.
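For a single ray, a polychromatic forward model of the kind mentioned above could look like the following sketch; the spectrum weights and attenuation coefficients are placeholder assumptions, not measured values.

```python
import numpy as np

def polychromatic_projection(path_lengths, mu, spectrum):
    """Detected signal for one ray through a multi-material object.

    path_lengths : (n_materials,) intersection lengths with each material [cm]
    mu           : (n_energies, n_materials) linear attenuation coefficients [1/cm]
    spectrum     : (n_energies,) normalized fluence weights (sums to 1)
    """
    line_integrals = mu @ path_lengths        # per-energy line integral
    return float(spectrum @ np.exp(-line_integrals))

# Two-energy, two-material toy example (placeholder adipose/glandular values).
spectrum = np.array([0.5, 0.5])
mu = np.array([[0.5, 0.8],
               [0.2, 0.3]])
empty_ray = polychromatic_projection(np.array([0.0, 0.0]), mu, spectrum)
attenuated = polychromatic_projection(np.array([1.0, 0.5]), mu, spectrum)
```

Because the energy-weighted sum of exponentials is differentiable in the path lengths, such a projector could in principle be embedded directly in a model-based network and trained end-to-end.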
To conclude, we developed a deep learning-based reconstruction method for DBT that achieves accurate predictions of breast density and, from these, an accurate calculation of the patient-specific mean glandular dose (MGD).
Acknowledgments
This research was partially supported by the National Cancer Institute, National Institutes of Health (7R01CA181171), and by the Susan G. Komen Foundation for the Cure (IIR13262248).
References
 Seeded Region Growing. IEEE Transactions on Pattern Analysis and Machine Intelligence 16 (6). Cited by: §2.3.
 Learned Primal-Dual Reconstruction. IEEE Transactions on Medical Imaging 37 (6), pp. 1322–1332. External Links: Document, ISSN 0278-0062 Cited by: §2.2.
 Monte Carlo dose evaluation of different fibroglandular tissue distribution in breast imaging. In 15th International Workshop on Breast Imaging (IWBI2020), C. Van Ongeval, N. Marshall, and H. Bosmans (Eds.), pp. 76. External Links: Document, ISBN 9781510638310 Cited by: §1.
 Solving inverse problems using data-driven models. Acta Numerica 28, pp. 1–174. External Links: Document Cited by: §1.
 Development and characterization of an anthropomorphic breast software phantom based upon region-growing algorithm. Medical Physics 38 (6, Part 1), pp. 3165–3176. External Links: Document, ISSN 0094-2405, Link Cited by: §3.1.
 Pattern recognition with fuzzy objective function algorithms. Plenum, New York, NY. Cited by: §2.3.
 Comparison of x-ray cross sections for diagnostic and therapeutic medical physics. Medical Physics 23 (12), pp. 1997–2005. External Links: Document, ISSN 0094-2405, Link Cited by: §3.
 An unsupervised automatic segmentation algorithm for breast tissue classification of dedicated breast computed tomography images. Medical Physics 45, pp. 2542–2559. Cited by: §2.3, §3.2.
 Dosimetry in x-ray-based breast imaging. Physics in Medicine and Biology 61 (19), pp. R271–R304. External Links: Document, ISSN 0031-9155 Cited by: §1.
 Breast dosimetry using high-resolution voxel phantoms. Radiation Protection Dosimetry 114 (1-3), pp. 359–363. External Links: Document, ISSN 0144-8420 Cited by: §1, §5.
 A finite element model to accurately predict real deformations of the breast. Med Eng Phys 30 (9), pp. 1089–1097. External Links: Document Cited by: §3.2.
 Internal breast dosimetry in mammography: Experimental methods and Monte Carlo validation with a monoenergetic x-ray beam. Medical Physics 45 (8), pp. 1724–1737. External Links: Document Cited by: §3.5.
 Internal breast dosimetry in mammography: Monte Carlo validation in homogeneous and anthropomorphic breast phantom with a clinical mammography system. Medical Physics 45 (8), pp. 3950–3961. External Links: Document Cited by: §3.5.
 Monte Carlo study on optimal breast voxel resolution for dosimetry estimates in digital breast tomosynthesis. Physics in Medicine & Biology 64 (1), pp. 015003. External Links: Document, ISSN 1361-6560 Cited by: §3.2, §3.2.
 Towards determination of individual glandular dose. In Medical Imaging 2018: Physics of Medical Imaging, G. Chen, J. Y. Lo, and T. Gilat Schmidt (Eds.), pp. 3. External Links: Document, ISBN 9781510616356 Cited by: §1.
 Realistic compressed breast phantoms for medical physics applications. In 15th International Workshop on Breast Imaging (IWBI2020), C. Van Ongeval, N. Marshall, and H. Bosmans (Eds.), pp. 73. External Links: Document, ISBN 9781510638310 Cited by: §3.2.
 Quantitative evaluation of fibroglandular tissue for estimation of tissue-differentiated absorbed energy in breast tomosynthesis. Ph.D. Thesis, KU Leuven, Faculty of Medicine. Cited by: §1.
 Absorbed Radiation Dose in Mammography. Radiology 130 (2), pp. 485–491. External Links: Document, ISSN 0033-8419, Link Cited by: §3.4, §3.5, §3.
 Correction for beam hardening in computed tomography. Physics in Medicine and Biology 24 (1), pp. 008. External Links: Document, ISSN 0031-9155, Link Cited by: §5.
 Tungsten anode spectral model using interpolating cubic splines: unfiltered x-ray spectra from 20 kV to 640 kV. Medical Physics 41, pp. 042101. Cited by: §3.5.
 Breast dose in mammography is about 30% lower when realistic heterogeneous glandular distributions are considered. Medical Physics 42 (11), pp. 6337–6348. External Links: Document, ISSN 0094-2405 Cited by: §1, §5.
 NiftySim: A GPU-based nonlinear finite element package for simulation of soft tissue biomechanics. Int J Comput Assist Radiol Surg 10 (7), pp. 1077–1095. External Links: Document Cited by: §3.2.
 A statistically defined anthropomorphic software breast phantom. Medical Physics 39 (6, Part 1), pp. 3375–3385. External Links: Document, ISSN 0094-2405, Link Cited by: §3.1.
 SGDR: Stochastic gradient descent with warm restarts. In 5th International Conference on Learning Representations, ICLR 2017 - Conference Track Proceedings, pp. 1–16. Cited by: §2.2.
 Breast Density and Parenchymal Patterns as Markers of Breast Cancer Risk: A Meta-analysis. Cancer Epidemiology Biomarkers & Prevention 15 (6), pp. 1159–1169. External Links: Document, ISSN 1055-9965, Link Cited by: §1.
 Design of a model observer to evaluate calcification detectability in breast tomosynthesis and application to smoothing prior optimization. Medical Physics 43 (12), pp. 6577–6587. External Links: Document, ISSN 0094-2405 Cited by: §1.
 Maximum a posteriori reconstruction for digital breast tomosynthesis. Ph.D. Thesis, KU Leuven, Faculty of Medicine, Leuven. Cited by: §3.3.
 Total Lagrangian explicit dynamics finite element algorithm for computing soft tissue deformation. Communications in Numerical Methods in Engineering 23 (2), pp. 121–134. External Links: Document, ISSN 1069-8299 Cited by: §3.2.
 Deep learning framework for digital breast tomosynthesis reconstruction. In Medical Imaging 2019: Physics of Medical Imaging, H. Bosmans, G. Chen, and T. Gilat Schmidt (Eds.), Vol. 1094804, pp. 220. External Links: Document, ISBN 9781510625433, ISSN 1605-7422 Cited by: §1, §1.
 Digital tomosynthesis in breast imaging. Radiology 205 (2), pp. 399–406. External Links: Document, ISSN 0033-8419 Cited by: §1.
 Iterative reconstruction for helical CT: a simulation study. Physics in Medicine and Biology 43 (4), pp. 729–737. External Links: Document Cited by: §3.3.
 Association of screening and treatment with breast cancer mortality by molecular subtype in US women, 2000-2012. JAMA - Journal of the American Medical Association 319 (2), pp. 154–164. External Links: Document, ISSN 1538-3598 Cited by: §1.
 Task-based assessment of breast tomosynthesis: Effect of acquisition parameters and quantum noise. Medical Physics 37 (4), pp. 1591–1600. External Links: Document, ISSN 0094-2405, Link Cited by: §3.1.
 New reconstruction algorithm for digital breast tomosynthesis: better image quality for humans and computers. Acta Radiologica. Cited by: §1.
 Characterization of the homogeneous tissue mixture approximation in breast imaging dosimetry. Medical Physics 39 (8), pp. 5050–5059. External Links: Document, ISSN 0094-2405 Cited by: §1, §5.
 Monte Carlo simulation of electron beams from an accelerator head using PENELOPE. Physics in Medicine and Biology 46, pp. 1163–1186. Cited by: §3.5.
 Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Transactions on Image Processing 13 (4), pp. 600–612. External Links: Document, ISSN 1057-7149, Link Cited by: §4.
 Tactile imaging. Ph.D. Thesis, Harvard University. Cited by: §3.2.
 One-view breast tomosynthesis versus two-view mammography in the Malmö Breast Tomosynthesis Screening Trial (MBTST): a prospective, population-based, diagnostic accuracy study. The Lancet Oncology 19 (11), pp. 1493–1503. External Links: ISSN 1470-2045 Cited by: §1.
 Development of an anthropomorphic breast software phantom based on region growing algorithm. In Medical Imaging 2008: Visualization, ImageGuided Procedures, and Modeling, M. I. Miga and K. R. Cleary (Eds.), pp. 69180V. External Links: Document Cited by: §3.1.
 Digital breast tomosynthesis versus supplemental diagnostic mammographic views for evaluation of noncalcified breast lesions. Radiology 266 (1), pp. 89–95. External Links: Document, ISSN 0033-8419 Cited by: §1.