1 Introduction
Breast cancer is the most frequently diagnosed cancer among women worldwide. Among all types of breast abnormalities, breast masses are the most common but also the most challenging to detect and segment, due to variations in their size and shape and their low signal-to-noise ratio [6]. An irregular or spiculated margin is the most important feature indicating cancer: the more irregular the shape of a mass, the more likely the lesion is malignant [12]. Oliver et al. demonstrated in their review that mass segmentation provides detailed morphological features with precise outlines of masses and plays a crucial role in the subsequent classification of malignancy [12].
The main roadblock faced by mass segmentation algorithms is the insufficient volume of contour-delineated data, which directly leads to inadequate accuracy [4]. The U-Net [13], a Convolutional Neural Network (CNN) based segmentation algorithm, is shown to perform well with limited training data by interlacing multi-resolution information. However, CNN segmentation algorithms, including the U-Net, are limited by the weak consistency of predicted pixel labels over homogeneous regions. To improve labelling consistency and completeness, probabilistic graphical models [5] have been applied to mass segmentation, including the Structured Support Vector Machine (SSVM) [7] and the Conditional Random Field (CRF) [6] as a post-processing technique. To train a CRF-integrated network end-to-end, the CRF with mean-field inference has been realised as a recurrent neural network [14]; this was applied to mass segmentation [15] and achieved state-of-the-art performance. Another limitation of CNN segmentation algorithms is that, as the depth of a CNN increases to learn better-performing deep features, it may suffer from the vanishing and exploding gradient problems, which are likely to hinder convergence [8]. Deep residual learning is shown to address this issue by explicitly mapping layers with residuals instead of mapping the deep network directly [8].

In this work, the CRU-Net is proposed to precisely segment breast masses in small-sample-sized mammographic datasets. Our main contributions include: 1) the first neural-network-based segmentation algorithm that considers both pixel-level labelling consistency and efficient training by integrating the U-Net with the CRF and deep residual learning; 2) the first deep learning mass segmentation algorithm that does not require any pre-processing or post-processing techniques; 3) the best mass segmentation performance on the two most commonly used mammographic datasets when compared to related methodologies.
2 Methodology
The proposed CRU-Net is schematically shown in Fig. 1. The inputs are mammogram regions of interest (ROIs) that contain masses, and the outputs are the predicted binary images. In this section, the applied methods are described in detail: our U-Net with residual learning, followed by pixel-level labelling with graphical inference.
2.1 U-Net with Residual Learning
The U-Net is shown to perform well with a limited volume of training data for segmentation problems in medical imaging [13], which suits our situation. However, the vanishing and exploding gradient problem, which hinders convergence, is not considered in the U-Net. We integrate residual learning into the U-Net to precisely segment breast masses from a small-sample-size training set. Assuming $X$ is an ROI defined on the image lattice $\Omega$ and $Y$ is the corresponding binary labelling image (0 denotes background pixels and 1 the mass pixels), the training set can be represented by $S = \{(X_n, Y_n),\ n = 1, \dots, N\}$.
The U-Net comprises a contracting downsampling path and an expansive upsampling path with skip connections between the two parts, built from standard convolutional layers. The output of the $l$-th layer with input $x$ at pixel $(i, j)$ is formulated as follows:

$$y_{ij} = f_{ks}\big(\{x_{si+\delta i,\ sj+\delta j}\}_{0 \le \delta i,\ \delta j \le k}\big), \quad (1)$$

where $k$ represents the kernel size, $s$ the stride or max-pooling factor, and $f_{ks}$ the layer operator, including convolution, max-pooling, and the ReLU activation function.
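As a concrete illustration of (1), the sketch below applies a generic layer operator over $k \times k$ windows with stride $s$, instantiated here as $2 \times 2$ max-pooling; the pure-Python style and the toy input are illustrative only, not the paper's implementation.

```python
# Illustrative sketch of Eq. (1): the layer output y[i][j] is produced by
# applying an operator f_ks (here: max, i.e. max-pooling) to the k x k
# window of the input x starting at (s*i, s*j). Pure Python, no framework.

def layer_output(x, k, s, f):
    """Apply operator f over k x k windows of 2-D list x with stride s."""
    h = (len(x) - k) // s + 1
    w = (len(x[0]) - k) // s + 1
    return [
        [f([x[s * i + di][s * j + dj] for di in range(k) for dj in range(k)])
         for j in range(w)]
        for i in range(h)
    ]

x = [[1, 2, 0, 1],
     [3, 4, 1, 0],
     [0, 1, 5, 6],
     [2, 0, 7, 8]]

# 2x2 max-pooling with stride 2 is one instance of f_ks
pooled = layer_output(x, k=2, s=2, f=max)  # -> [[4, 1], [2, 8]]
```

Convolution fits the same template with a weighted-sum operator in place of `max`.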
We then integrate residual learning into the U-Net, replacing the applied network mapping with:

$$y = f_{ks}(x) + Wx, \quad (2)$$

thus casting the original mapping $f_{ks}(x)$ into $f_{ks}(x) + Wx$, where $W$ is a convolution kernel that linearly projects $x$ to match the dimensions of $f_{ks}(x)$, as in Fig. 1. As the U-Net layers resize the image, residuals are linearly projected with a convolutional layer, combined with max-pooling or upsampling, to match dimensions. The detailed residual connections of layer 2 and layer 6 are described in Fig. 2; these layers are shown as examples, as all residual layers have an analogous structure. In the final stage, a convolutional layer with softmax activation creates a pixel-wise probabilistic map over the two classes (background and mass). The residual U-Net loss energy for each output during training is defined with the categorical cross-entropy:

$$E_{u} = -\sum_{i \in \Omega} \log P_u(y_i \mid X; \theta), \quad (3)$$
where $P_u(y_i \mid X; \theta)$ is the residual U-Net output probability distribution at position $i$, given the input ROI $X$ and parameters $\theta$.

Note that the standard U-Net is designed for inputs of a fixed size; here we modify it to adapt to the mammographic ROIs, with zero-padding for the downsampling and upsampling paths. Residual shortcut additions are calculated in each layer. Afterwards, the feature maps are concatenated: layer 1 with layer 7, layer 2 with layer 6, and layer 3 with layer 5, as shown in Fig.
1. Both the original ROIs and the residual U-Net outputs are then fed into the graphical inference layer.

2.2 Graphical Inference
Graphical models have recently been applied to mammograms for mass segmentation. Among them, the CRF encourages label consistency between similar pixels and provides sharp boundaries and fine-grained segmentation. Mean-field iterations are applied as the inference method to realise the CRF as a stack of RNN layers [14, 15]. The CRF cost function $E_{crf}$ can be defined as follows:

$$E_{crf} = -\log P(y \mid X), \qquad P(y \mid X) = \frac{1}{A} \exp\Big(-\sum_{i} \psi_u(y_i) - \sum_{i<j} \psi_p(y_i, y_j)\Big), \quad (4)$$

where $A$ is the partition function, $\psi_u$ is the unary function, which is calculated on the residual U-Net output, and $\psi_p(y_i, y_j) = \mu(y_i, y_j) \sum_m w_m k_m$ is the pairwise potential function, defined with the label compatibility $\mu$ for positions $i$ and $j$ [14], Gaussian kernels $k_m$, and corresponding weights $w_m$ [10], as in [6, 15].
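A toy, non-vectorised sketch of the pairwise term and a single mean-field update, in the spirit of [10, 14], may clarify the mechanism. The kernel widths, weights, unary scores, and the Potts-style compatibility below are illustrative assumptions, not the trained CRU-Net values.

```python
# Sketch: an appearance + smoothness Gaussian kernel between pixels of a
# tiny 1-D "image", and one mean-field update that combines it with unary
# scores from the network output. All constants are illustrative.

import math

def gaussian_kernel(pos_i, pos_j, int_i, int_j,
                    w_app=1.0, w_smooth=0.5,
                    theta_alpha=3.0, theta_beta=0.1, theta_gamma=3.0):
    d2 = (pos_i - pos_j) ** 2
    appearance = w_app * math.exp(-d2 / (2 * theta_alpha ** 2)
                                  - (int_i - int_j) ** 2 / (2 * theta_beta ** 2))
    smoothness = w_smooth * math.exp(-d2 / (2 * theta_gamma ** 2))
    return appearance + smoothness

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def mean_field_step(unary, intensities):
    """One mean-field update of Q(y_i = label) for labels {0, 1}."""
    n = len(unary)
    q = [softmax(u) for u in unary]        # initialise Q from unary scores
    new_q = []
    for i in range(n):
        scores = []
        for label in range(2):
            # message passing: penalise disagreement with similar pixels
            msg = sum(gaussian_kernel(i, j, intensities[i], intensities[j])
                      * q[j][1 - label]    # Potts-style label compatibility mu
                      for j in range(n) if j != i)
            scores.append(unary[i][label] - msg)
        new_q.append(softmax(scores))
    return new_q

# Three pixels: strong mass evidence at pixels 0 and 1, ambiguous at pixel 2,
# but pixel 2 resembles its neighbours, so the CRF pulls it toward "mass".
unary = [[0.0, 2.0], [0.0, 2.0], [0.2, 0.0]]   # [background, mass] scores
intensities = [0.8, 0.8, 0.8]
q = mean_field_step(unary, intensities)
```

In the CRU-Net, several such updates are unrolled as RNN layers [14] so the whole model trains end-to-end.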
Finally, by integrating (3) and (4), the total loss energy in the CRU-Net for each input is defined as:

$$E = E_{u} + \lambda\, E_{crf}, \quad (5)$$

where $E_{u}$ and $E_{crf}$ are the residual U-Net loss (3) and the CRF loss (4), and $\lambda$ is a trade-off factor, empirically chosen as 0.67. The whole CRU-Net is trained by back-propagation.
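Assuming the two energies combine additively with trade-off factor 0.67 as described above, a minimal sketch of the training objective follows; the per-pixel probabilities are hypothetical, and the scalar CRF energy is a stand-in for the value the mean-field layers would produce.

```python
# Sketch of the combined training energy: the residual U-Net categorical
# cross-entropy term plus a lambda-weighted CRF energy term.

import math

def cross_entropy(probs, labels):
    """Pixel-wise categorical cross-entropy: -sum log P(true label)."""
    return -sum(math.log(p[y]) for p, y in zip(probs, labels))

def total_energy(probs, labels, e_crf, lam=0.67):
    """Total energy E = E_u + lambda * E_crf with the paper's lambda = 0.67."""
    return cross_entropy(probs, labels) + lam * e_crf

probs = [[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]]  # per-pixel [bg, mass] probabilities
labels = [0, 1, 0]                            # ground-truth pixel labels
loss = total_energy(probs, labels, e_crf=1.0)
```

Since both terms are differentiable in the network parameters, the sum can be minimised directly by back-propagation.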
3 Experiments
3.1 Datasets
The proposed method is evaluated on two publicly available datasets, INbreast [11] and DDSM-BCRP [9]. INbreast is a full-field digital mammographic dataset (70 μm pixel resolution), annotated by a specialist with the lesion type and a detailed contour for each mass. It contains 116 accurately annotated masses, with mass sizes ranging from 15 to 3689 mm². The DDSM-BCRP [9] database is selected from the Digital Database for Screening Mammography (DDSM), which contains digitised film-screen mammograms (43.5 μm resolution) with corresponding pixel-wise ground truth provided by radiologists.
To compare the proposed method with related algorithms, we use the same dataset division and ROI extraction as [6, 7, 15], in which ROIs are manually located, extracted with rectangular bounding boxes, and then resized using bicubic interpolation [6]. In [6, 7, 15], the extracted ROIs are pre-processed with the Ball and Bruce technique [1], which our algorithm does not require. The INbreast dataset is divided into 58 training and 58 test ROIs; DDSM-BCRP is divided into 87 training and 87 test ROIs [6]. The training data is augmented by horizontal flip, vertical flip, and combined horizontal and vertical flip.

Table 1. Comparison of mass segmentation methods on INbreast and DDSM-BCRP.

Methodology         | INbreast | DDSM-BCRP | Residual | Pre-process | Post-process
Cardoso et al. [3]  |          |           |          |             |
Beller et al. [2]   |          |           |          |             |
Dhungel et al. [7]  |          |           |          | ✓           | ✓
Dhungel et al. [6]  |          |           |          | ✓           | ✓
Zhu et al. [15]     |          |           |          | ✓           |
U-Net               |          |           |          |             |
CRU-Net ()          |          |           | ✓        |             |
CRU-Net ()          |          |           | ✓        |             |
CRU-Net, No R ()    |          |           |          |             |
CRU-Net ()          |          |           | ✓        |             |
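The flip-based augmentation of the training data described in Sect. 3.1 can be sketched as follows; the 2 × 3 "ROI" is an illustrative stand-in for a real image array.

```python
# Each training ROI yields three extra copies: a horizontal flip, a vertical
# flip, and a combined horizontal + vertical flip.

def hflip(img):
    """Mirror an image (list of rows) left-right."""
    return [row[::-1] for row in img]

def vflip(img):
    """Mirror an image top-bottom."""
    return img[::-1]

roi = [[1, 2, 3],
       [4, 5, 6]]

augmented = [roi, hflip(roi), vflip(roi), hflip(vflip(roi))]
```

The same flips must of course be applied to the ground-truth masks so that each augmented ROI stays aligned with its labelling.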
3.2 Experiment Configurations
In this paper, each component of the CRU-Net is evaluated, including its variants and the CRU-Net without residual learning (CRU-Net, No R). In the CRU-Net, convolutions are first computed and then followed by a skip connection to compute the residual, as shown in Fig. 1. The feature maps in the downsampling layers number 16, 32, 64, and 128, respectively. To avoid over-fitting, dropout layers with a 50% dropout rate are used. The resolutions of the two datasets differ, with DDSM's much higher than INbreast's; to address this, the convolutional kernel size for DDSM is chosen by an experimental grid search. All other hyper-parameters are identical. The whole CRU-Net is optimised by stochastic gradient descent with the Adam update rule.
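A minimal sketch of the Adam update rule mentioned above, applied to a single scalar parameter of a toy quadratic loss; the learning rate and moment constants are the usual Adam defaults, not values reported in this paper.

```python
# Adam: SGD where each parameter keeps running first/second moment estimates
# of its gradient, with bias correction for the early steps.

import math

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2     # second-moment (variance) estimate
    m_hat = m / (1 - b1 ** t)             # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

theta, m, v = 0.5, 0.0, 0.0
for t in range(1, 4):                     # three illustrative steps
    grad = 2 * theta                      # gradient of a toy loss theta^2
    theta, m, v = adam_step(theta, grad, m, v, t)
```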
3.3 Performance and Discussion
The performances of state-of-the-art methods and of the CRU-Net are shown in Table 1, where the results of [15] are reproduced and the results of [2, 3, 6, 7] are taken from the respective papers. Table 1 shows that the proposed algorithm performs better than the other published algorithms on both datasets. On INbreast, the best Dice Index (DI) of 93.66% is obtained with CRU-Net, No R (), and a similar DI of 93.32% is achieved by its residual-learning counterpart; on DDSM-BCRP, all state-of-the-art algorithms perform similarly, and the best DI of 91.43% is obtained by the CRU-Net (). The CRU-Net performs worse on DDSM-BCRP than on INbreast, which can be attributed to the lower data quality of the former. To better understand the distribution of Dice coefficients over the test sets, Fig. 3 shows the histogram of Dice coefficients and the sampled cumulative distribution for the two datasets. In these figures we can observe that the CRU-Net achieves a higher proportion of high-DI cases. In addition, all algorithms follow a similar distribution, but Zhu's algorithm has a heavier tail than the others on the INbreast data. To visually compare the performances, example contours from the CRU-Net () and Zhu's algorithm are shown in Fig. 4. It depicts that, while achieving a DI value similar to Zhu's method, the CRU-Net obtains a less noisy boundary. To examine the tail in the histogram of Zhu's DIs (DI < 81%), Fig. 5 compares the contours of these hard cases, which suggests that the proposed CRU-Net provides better contours for irregularly shaped masses, with less noisy boundaries.
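The Dice Index reported throughout this section can be computed as follows on flattened binary masks; the example masks are hypothetical.

```python
# Dice Index: DI = 2|A ∩ B| / (|A| + |B|), i.e. twice the overlap over the
# total mass area in the prediction and the ground truth.

def dice_index(pred, truth):
    inter = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

pred  = [0, 1, 1, 1, 0, 0]   # flattened predicted mass mask
truth = [0, 1, 1, 0, 0, 1]   # flattened ground-truth mask

di = dice_index(pred, truth)  # 2*2 / (3+3) = 0.666...
```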
4 Conclusions
In summary, we propose the CRU-Net to improve the segmentation performance of the standard U-Net by incorporating the advantages of probabilistic graphical models and deep residual learning. The CRU-Net does not require any tedious pre-processing or post-processing techniques. It outperforms published state-of-the-art methods on INbreast and DDSM-BCRP, with best DIs of 93.66% and 91.14% respectively. In addition, it achieves higher segmentation accuracy when the applied database is of higher quality. The CRU-Net provides contour shapes similar to the radiologist's (even for hard cases) with less noisy boundaries, which plays a vital role in the subsequent cancer diagnosis.
References
[1] Ball, J.E., Bruce, L.M.: Digital mammographic computer aided diagnosis (CAD) using adaptive level set segmentation. In: 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS 2007), pp. 4973–4978. IEEE (2007)
[2] Beller, M., Stotzka, R., Müller, T.O., Gemmeke, H.: An example-based system to support the segmentation of stellate lesions. In: Bildverarbeitung für die Medizin 2005, pp. 475–479. Springer (2005)
[3] Cardoso, J.S., Domingues, I., Oliveira, H.P.: Closed shortest path in the original coordinates with an application to breast cancer. International Journal of Pattern Recognition and Artificial Intelligence 29(01), 1555002 (2015)
[4] Carneiro, G., Zheng, Y., Xing, F., Yang, L.: Review of deep learning methods in mammography, cardiovascular, and microscopy image analysis. In: Deep Learning and Convolutional Neural Networks for Medical Image Computing, pp. 11–32. Springer (2017)
[5] Chen, D., Lv, J., Yi, Z.: Graph regularized restricted Boltzmann machine. IEEE Transactions on Neural Networks and Learning Systems 29(6), 2651–2659 (2018)
[6] Dhungel, N., Carneiro, G., Bradley, A.P.: Deep learning and structured prediction for the segmentation of mass in mammograms. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 605–612. Springer (2015)
[7] Dhungel, N., Carneiro, G., Bradley, A.P.: Deep structured learning for mass segmentation from mammograms. In: 2015 IEEE International Conference on Image Processing (ICIP), pp. 2950–2954. IEEE (2015)
[8] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
[9] Heath, M., Bowyer, K., Kopans, D., Moore, R., Kegelmeyer, P.: The digital database for screening mammography. Digital Mammography, pp. 431–434 (2000)
[10] Krähenbühl, P., Koltun, V.: Efficient inference in fully connected CRFs with Gaussian edge potentials. In: Advances in Neural Information Processing Systems, pp. 109–117 (2011)
[11] Moreira, I.C., Amaral, I., Domingues, I., Cardoso, A., Cardoso, M.J., Cardoso, J.S.: INbreast: toward a full-field digital mammographic database. Academic Radiology 19(2), 236–248 (2012)
[12] Oliver, A., Freixenet, J., Marti, J., Perez, E., Pont, J., Denton, E.R., Zwiggelaar, R.: A review of automatic mass detection and segmentation in mammographic images. Medical Image Analysis 14(2), 87–110 (2010)
[13] Ronneberger, O., Fischer, P., Brox, T.: U-Net: Convolutional networks for biomedical image segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234–241. Springer (2015)
[14] Zheng, S., Jayasumana, S., Romera-Paredes, B., Vineet, V., Su, Z., Du, D., Huang, C., Torr, P.H.: Conditional random fields as recurrent neural networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1529–1537 (2015)
[15] Zhu, W., Xiang, X., Tran, T.D., Hager, G.D., Xie, X.: Adversarial deep structured nets for mass segmentation from mammograms. arXiv preprint arXiv:1710.09288 (2017)