BUSU-Net: An Ensemble U-Net Framework for Medical Image Segmentation

by Wei Hao Khoong, et al.
National University of Singapore

In recent years, convolutional neural networks (CNNs) have revolutionized medical image analysis. One of the most well-known CNN architectures for semantic segmentation is U-Net, which has achieved considerable success in several medical image segmentation applications. More recently, with the rise of AutoML and advancements in neural architecture search (NAS), methods such as NAS-Unet have been proposed for NAS in medical image segmentation. In this paper, drawing inspiration from LadderNet, U-Net, AutoML and NAS, we propose an ensemble deep neural network built on an underlying U-Net framework with bi-directional convolutional LSTMs and dense connections, in which the first U-Net-like network (from the left) is deeper than the second. We show that this ensemble network outperforms recent state-of-the-art networks on several evaluation metrics. We also evaluate a lightweight version of the ensemble, which outperforms recent state-of-the-art networks on some metrics.





I Introduction

In the field of medical imaging, medical image segmentation has played an increasingly important role over the years. There is growing demand for accurate, fast and cost-effective automated processing in medical imaging equipment, such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI) and ultrasound. Automated processes not only save time and cost, but also reduce reliance on manual labor (e.g. radiographers) and the risk of human error.

To provide grounds for clinical diagnosis and to assist doctors in making more accurate diagnoses, medical imaging analysis systems must segment regions of interest (e.g. diseased vessels) in the medical image and extract relevant features from them. This can be done with deep learning methods such as convolutional neural networks (CNNs), an area of very active research in recent years. Research on deep learning methods for image segmentation, alongside big data and cloud computing, has driven significant progress in the field of computer vision. Deep neural networks are mostly applied to classification problems, where the output of the network is a single label or a set of probability values associated with a given input image. The fully convolutional network (FCN) was one of the first deep network methods applied to image segmentation. The FCN architecture was extended to U-Net [11], which achieved state-of-the-art segmentation results without requiring a large amount of data. This not only saves time during model training, but also opens new directions of research that can leverage this minimal data requirement to produce more accurate and robust image segmentation networks.

II Related Work

In medical imaging, semantic segmentation has played an important role in recent years. Deep learning approaches have achieved performance comparable to (or better than) that of radiologists, which in many use cases has reduced the manual labor required for segmentation and improved both its speed and accuracy. Recent advancements in deep learning approaches for semantic segmentation include the fully convolutional network (FCN) [9], U-Net [11], which obtained the highest accuracy in the segmentation of neuronal structures in electron microscopic stacks, V-Net [10], a 3D extension of U-Net that predicts the segmentation of an input volume all at once, VoxResNet [5], a deep voxel-wise residual network proposed for brain segmentation in MR images, and NAS-Unet [16], a neural architecture search framework for semantic segmentation inspired by the U-Net architecture.

II-A LadderNet

LadderNet [17] is a multi-branch CNN with a chain of U-Nets for semantic segmentation. It contains skip connections between levels of the network, but unlike U-Net, where features from the encoder branch are concatenated with features in the decoder branch, LadderNet sums the features from the two branches. In particular, the authors view LadderNet as an ensemble of multiple fully convolutional networks, similar to Veit et al. [15], who proposed that ResNet behaves like an ensemble of shallow networks, since its residual connections provide multiple paths for information flow. Here, every path can be viewed as a variant of a fully convolutional network, and the total number of paths grows exponentially with the number of encoder-decoder pairs and spatial levels. This suggests that LadderNet has the potential to capture more complex features and yield higher accuracy.

As the number of encoder-decoder pairs grows, so do the number of parameters and the difficulty of training. The authors therefore proposed a shared-weights residual block across branches, which retains the benefits of skip connections, recurrent convolutions and dropout regularization while having far fewer parameters than a standard residual block.


II-B BCDU-Net

BCDU-Net [3] was proposed as an extension of U-Net, and it yielded better performance than state-of-the-art alternatives on segmentation tasks. In its architecture, the contracting path consists of four steps, where each step has two convolutional layers followed by a ReLU activation function and a max-pooling operation, and the number of feature maps is doubled at each step. Image representations are progressively extracted along the contracting path, which increases the dimension of the representations layer by layer. Densely connected convolutions [6] are used to mitigate the problem of learning redundant features in successive convolutions of the U-Net. Each step of the decoding path begins with an up-sampling function applied to the output of the previous layer. Unlike the original U-Net, where the corresponding feature maps from the contracting path are cropped and copied onto the decoding path, the feature maps are processed with bi-directional convolutional LSTMs [13] (BConvLSTM).
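As a rough illustration of dense connectivity (not the paper's implementation), each layer in a dense block receives the channel-wise concatenation of all preceding feature maps, so later layers can reuse earlier features instead of relearning them. A toy NumPy sketch, with random 1x1 projections standing in for learned 3x3 convolutions:

```python
import numpy as np

def conv_block(x, out_channels, rng):
    """Stand-in for a convolutional layer: a random per-pixel projection
    followed by ReLU (a real network would use learned 3x3 convolutions)."""
    w = rng.standard_normal((x.shape[-1], out_channels))
    return np.maximum(x @ w, 0.0)

def dense_block(x, num_layers, growth, rng):
    """Densely connected block: each layer sees the channel-wise
    concatenation of the input and all previous layers' outputs."""
    features = [x]
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=-1)  # reuse all earlier features
        features.append(conv_block(inp, growth, rng))
    return np.concatenate(features, axis=-1)

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 8, 8, 16))            # (batch, H, W, channels)
out = dense_block(x, num_layers=3, growth=12, rng=rng)
# Output channels: 16 input + 3 layers * 12 growth = 52
```

Each layer adds a fixed number of channels (the "growth rate"), so the block's output width grows linearly while every layer has access to the full feature history.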

After each up-sampling step, the outputs undergo batch normalization, which improves the stability of the network by standardizing the inputs to a layer: subtracting the batch mean and dividing by the batch standard deviation. Batch normalization also helps speed up training of the neural network. After batch normalization, the outputs are fed into a BConvLSTM layer. The BConvLSTM layer uses two convolutional LSTMs [12] (ConvLSTM) to process the input data along forward and backward paths, and makes a decision for the current input by handling the data dependencies in both directions. Note that the original ConvLSTM processes data dependencies along the forward path only.
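The batch-normalization step described above amounts to a per-channel standardization over the batch. A minimal NumPy sketch (the learned scale and shift parameters of a full batch-normalization layer are omitted here):

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    """Standardize a batch of feature maps: subtract the batch mean and
    divide by the batch standard deviation, per channel.
    eps guards against division by zero for near-constant channels."""
    # x has shape (batch, height, width, channels); statistics are taken
    # over all axes except the channel axis.
    mean = x.mean(axis=(0, 1, 2), keepdims=True)
    var = x.var(axis=(0, 1, 2), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.random.default_rng(0).standard_normal((4, 8, 8, 3)) * 5.0 + 2.0
y = batch_norm(x)
# After normalization, each channel has (approximately) zero mean and unit variance.
```

In a trainable layer, the normalized output is additionally scaled and shifted by learned parameters gamma and beta, and running statistics replace the batch statistics at inference time.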

III Proposed Methods

III-A Motivations

This work was mainly inspired by BCDU-Net [3] and LadderNet [17]. The network adopts the general architecture of LadderNet, with components taken from BCDU-Net. Like BCDU-Net, it adopts densely connected convolutions and bi-directional convolutional LSTMs [13] (BConvLSTM), which use two convolutional LSTMs [12] (ConvLSTM) to process input data in two directions (forward and backward) and make a decision for the current input by handling data dependencies in both directions.

Another inspiration for this work was NAS-Unet [16]. To the best of our knowledge, there is no published work on neural architecture search for semantic segmentation in medical imaging that involves ensembles of U-Nets or their variants. A side objective of this work is thus to illustrate the gains from such ensembles, using recent state-of-the-art segmentation methods. In particular, this work shows that an ensemble of the same underlying network at varying depths yields better results than an ensemble of networks of equal depth. However, due to limited compute resources, we were only able to experiment with an ensemble of two networks.

III-B BUSU-Net

The originally proposed model, Big-U Small-U Net (BUSU-Net), consists of 108 layers formed by chaining two BCDU-Nets, where the first is deeper than the second. In particular, the Big-U is deeper than the original BCDU-Net [3], while the Small-U has the same depth as the original. A visualization of the network can be found in Figure 3 in the appendix (owing to its size).

III-C LightBUSU-Net

LightBUSU-Net is a lighter version of BUSU-Net with 43 layers, in which both the Big-U and the Small-U are shallower than in BUSU-Net and BCDU-Net. Its purpose is to demonstrate the robustness of a lightweight model that can be deployed in settings with limited memory, storage and compute capabilities. A visualization of the network can be found in Figure 4 in the appendix.

IV Experiments

IV-A Training of the Neural Network

All training was performed on the High Performance Computing (HPC) cluster at the National University of Singapore (NUS); see https://nusit.nus.edu.sg/hpc/ for more details. We used TensorFlow [1] version 1.12 in Python 3.6, with 5 CPU cores, 100 GB of RAM and one NVIDIA Tesla V100 32 GB GPU. An example of the job script we submitted to the HPC cluster for training the neural network is as follows:

#PBS -P volta_pilot
#PBS -j oe
#PBS -N BUSU_Net_experiment
#PBS -q volta_gpu
#PBS -l select=1:ncpus=5:mem=100gb:ngpus=1:mpiprocs=2
#PBS -l walltime=70:00:00
np=$(cat ${PBS_NODEFILE} | wc -l);
singularity exec $image bash << EOF > stdout.$PBS_JOBID 2> stderr.$PBS_JOBID
# mpirun -np $np -x NCCL_DEBUG python3 train_BUSU_Net.py
python3 train_BUSU_Net.py
EOF

IV-B Evaluation Metrics

We used several metrics to evaluate the performance of BUSU-Net and LightBUSU-Net, namely accuracy, sensitivity, specificity and F1-score. We first counted the True Positives (TP), True Negatives (TN), False Positives (FP) and False Negatives (FN). The metrics are then calculated as follows:

Accuracy = (TP + TN) / (TP + TN + FP + FN)
Sensitivity = TP / (TP + FN)
Specificity = TN / (TN + FP)
F1-Score = 2TP / (2TP + FP + FN)
In addition, we computed the receiver operating characteristic (ROC) curve and the area under the curve (AUC) to further evaluate the performance of the neural networks.
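The four counts and the metrics above can be computed directly from a pair of binary masks; a minimal NumPy sketch:

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Compute accuracy, sensitivity, specificity and F1-score from two
    binary masks (1 = foreground, e.g. vessel; 0 = background)."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    tp = np.sum(pred & target)      # predicted positive, actually positive
    tn = np.sum(~pred & ~target)    # predicted negative, actually negative
    fp = np.sum(pred & ~target)     # predicted positive, actually negative
    fn = np.sum(~pred & target)     # predicted negative, actually positive
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "f1": 2 * tp / (2 * tp + fp + fn),
    }

# Toy example: 2x2 prediction vs. ground truth
m = segmentation_metrics([[1, 0], [1, 1]], [[1, 0], [0, 1]])
```

In the toy example, two pixels are true positives, one is a true negative and one is a false positive, giving an accuracy of 0.75, a sensitivity of 1.0, a specificity of 0.5 and an F1-score of 0.8.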

Method Year Dataset Accuracy Sensitivity Specificity AUC F1-Score
COSFIRE filters [4] 2015 DRIVE 0.9442 0.7655 0.9705 0.9614 -
Cross-Modality [7] 2015 DRIVE 0.9527 0.7569 0.9816 0.9738 -
U-net [11] 2015 DRIVE 0.9531 0.7537 0.9820 0.9755 0.8142
DeepModel [8] 2016 DRIVE 0.9495 0.7763 0.9768 0.9720 -
RU-Net [2] 2018 DRIVE 0.9553 0.7726 0.9820 0.9779 0.8149
R2U-Net [2] 2018 DRIVE 0.9556 0.7792 0.9813 0.9782 0.8171
LadderNet [17] 2018 DRIVE 0.9561 0.7856 0.9810 0.9793 0.8202
BCDU-Net (d=1) [3] 2019 DRIVE 0.9559 0.8012 0.9784 0.9788 0.8222
BCDU-Net (d=3) [3] 2019 DRIVE 0.9560 0.8007 0.9786 0.9789 0.8224
LightBUSU-Net 2020 DRIVE 0.9539 0.8281 0.9723 0.9781 0.8207
BUSU-Net 2020 DRIVE 0.9560 0.8113 0.9771 0.9799 0.8243
TABLE I: Performance comparison of proposed networks and recent state-of-the-art methods on DRIVE dataset

V Results

We evaluated BUSU-Net and LightBUSU-Net on the DRIVE [14] dataset. DRIVE is a dataset for blood vessel segmentation in retina images, consisting of 40 color retina images, 20 of which are used for training and the remaining 20 for testing. Since the number of samples in this dataset is clearly not sufficient for training a deep neural network, we employ the same strategy as [2, 3]: the input images are randomly divided into a large number of patches extracted from the 20 training images, most of which are used for training while the remainder are used for validation.
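The patch-based training strategy can be sketched as follows. Note that the image size, patch size and patch counts below are illustrative placeholders, not the values used in the experiments:

```python
import numpy as np

def extract_random_patches(image, patch_size, n_patches, rng):
    """Sample n_patches random square patches from a single image."""
    h, w = image.shape[:2]
    # Top-left corners chosen so every patch lies fully inside the image.
    ys = rng.integers(0, h - patch_size + 1, size=n_patches)
    xs = rng.integers(0, w - patch_size + 1, size=n_patches)
    return np.stack([image[y:y + patch_size, x:x + patch_size]
                     for y, x in zip(ys, xs)])

rng = np.random.default_rng(42)
image = rng.random((560, 560, 3))   # stand-in for one retina image
patches = extract_random_patches(image, patch_size=48, n_patches=100, rng=rng)

# Hold out a fraction of the patches for validation (90/10 split here).
n_train = int(0.9 * len(patches))
train, val = patches[:n_train], patches[n_train:]
```

In practice the same corner coordinates would also be used to crop the corresponding ground-truth masks, so each patch stays aligned with its label.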

Table I shows the quantitative segmentation results of recent state-of-the-art networks and of the proposed networks on the DRIVE dataset. The results show that BUSU-Net outperforms the state-of-the-art networks on the majority of the evaluation metrics and is very close on the rest. It is also noteworthy that LightBUSU-Net outperforms all the state-of-the-art networks in sensitivity, including BUSU-Net.

We illustrate the overall performance of BUSU-Net and LightBUSU-Net on the DRIVE dataset with their ROC curves and precision-recall curves in Figures 1 and 2 respectively. The ROC curve plots the true positive rate against the false positive rate. The area under the ROC curve (AUC), a measure of the network's segmentation capability on the input data, is reported in Table I.
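For reference, the AUC can be computed from pixel scores and labels by sorting, accumulating TP/FP counts, and applying the trapezoidal rule; a minimal NumPy sketch (assuming untied scores and both classes present):

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve via the trapezoidal rule.
    scores: predicted probabilities; labels: binary ground truth."""
    order = np.argsort(-np.asarray(scores, dtype=float))  # descending by score
    labels = np.asarray(labels)[order]
    tps = np.cumsum(labels)          # true positives as the threshold lowers
    fps = np.cumsum(1 - labels)      # false positives as the threshold lowers
    tpr = np.concatenate(([0.0], tps / tps[-1]))   # true positive rate
    fpr = np.concatenate(([0.0], fps / fps[-1]))   # false positive rate
    # Trapezoidal integration of TPR over FPR.
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2))

# Perfectly separated scores give AUC = 1.0
auc = roc_auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])
```

Library implementations (e.g. scikit-learn's roc_auc_score) additionally handle tied scores and degenerate label sets, which this sketch does not.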

We also compared our results with LadderBCDU-Net, a benchmark network we designed by chaining two original BCDU-Nets from [3]. We did not include its results in Table I because it does not outperform the original BCDU-Net on any metric; instead, we used it to illustrate the improvements in the evaluation metrics when BCDU-Nets of different depths are chained to form BUSU-Net. Interested readers may find LadderNet's implementation in the GitHub repository at https://github.com/juntang-zhuang/LadderNet. Our implementations of BUSU-Net and LightBUSU-Net, along with the results presented in this paper, can be found at https://github.com/weihao94/BUSU-Net.

Fig. 1: ROC curves of BUSU-Net (left) and LightBUSU-Net (right)
Fig. 2: Precision recall curves of BUSU-Net (left) and LightBUSU-Net (right)

VI Conclusion

We have shown that our proposed BUSU-Net yields considerable gains over recent state-of-the-art neural networks in semantic segmentation. In particular, we have shown that there are stark performance differences between a chained network of two BCDU-Nets of different depths and both a chained network of two BCDU-Nets of equal depth and the original single BCDU-Net. In other words, instead of chaining two copies of the same network at equal depths, chaining the same network at unequal depths, one larger than the other, yields better overall performance than both a single network and two chained networks of equal depth. Furthermore, we have shown that a lightweight version of BUSU-Net obtains much better results than recent state-of-the-art networks on some evaluation metrics. These results, together with those of BUSU-Net, support further work towards automated segmentation of medical images with an ensemble framework.

VII Future Work

Ambitious as it sounds, we will work towards automated segmentation of medical images, where ensembles of deep neural networks are studied more rigorously and, possibly, generalized. In particular, the members of an ensemble need not be single U-Nets of equal size, but may be of varying depths. Ensembles of BUSU-Nets with Big-Us and Small-Us of different depths can be chained together; for example, if we denote a Big-U by B and a Small-U by S, longer chains mixing Bs and Ss in different orders become possible. We were not able to experiment with such configurations due to resource constraints on the HPC cluster, which is a shared resource used by many other researchers: requesting large amounts of compute lowers a job's priority score, and with many users in the queue our jobs could be pushed to the back. If memory-efficient ensemble methods can be employed, it may become possible to build an automated ensemble framework that combines the ensembling approach above with that of NAS-Unet.

VIII Acknowledgments

We would like to thank National University of Singapore's Information Technology for making GPUs readily available in the university and for their detailed yet user-friendly documentation.


  • [1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. J. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Józefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. G. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. A. Tucker, V. Vanhoucke, V. Vasudevan, F. B. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng (2016) TensorFlow: large-scale machine learning on heterogeneous distributed systems. CoRR abs/1603.04467. External Links: Link, 1603.04467 Cited by: §IV-A.
  • [2] Md. Z. Alom, M. Hasan, C. Yakopcic, T. M. Taha, and V. K. Asari (2018) Recurrent residual convolutional neural network based on u-net (r2u-net) for medical image segmentation. CoRR abs/1802.06955. External Links: Link, 1802.06955 Cited by: TABLE I, §V.
  • [3] R. Azad, M. Asadi-Aghbolaghi, M. Fathy, and S. Escalera (2019) Bi-directional convlstm u-net with densley connected convolutions. ArXiv abs/1909.00166. Cited by: §II-B, §III-A, §III-B, TABLE I, §V, §V.
  • [4] G. Azzopardi, N. Strisciuglio, M. Vento, and N. Petkov (2015) Trainable cosfire filters for vessel delineation with application to retinal images. Medical Image Analysis 19 (1), pp. 46 – 57. External Links: ISSN 1361-8415, Document, Link Cited by: TABLE I.
  • [5] H. Chen, Q. Dou, L. Yu, J. Qin, and P. Heng (2018) VoxResNet: deep voxelwise residual networks for brain segmentation from 3d mr images. NeuroImage 170, pp. 446 – 455. Note: Segmenting the Brain External Links: ISSN 1053-8119, Document, Link Cited by: §II.
  • [6] G. Huang, Z. Liu, and K. Q. Weinberger (2016) Densely connected convolutional networks. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2261–2269. Cited by: §II-B.
  • [7] Q. Li, B. Feng, L. Xie, P. Liang, H. Zhang, and T. Wang (2016-01) A cross-modality learning approach for vessel segmentation in retinal images. IEEE Transactions on Medical Imaging 35 (1), pp. 109–118. External Links: Document, ISSN 1558-254X Cited by: TABLE I.
  • [8] P. Liskowski and K. Krawiec (2016-11) Segmenting retinal blood vessels with deep neural networks. IEEE Transactions on Medical Imaging 35 (11), pp. 2369–2380. External Links: Document, ISSN 1558-254X Cited by: TABLE I.
  • [9] J. Long, E. Shelhamer, and T. Darrell (2015-06) Fully convolutional networks for semantic segmentation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §II.
  • [10] F. Milletari, N. Navab, and S. Ahmadi (2016) V-net: fully convolutional neural networks for volumetric medical image segmentation. 2016 Fourth International Conference on 3D Vision (3DV), pp. 565–571. Cited by: §II.
  • [11] O. Ronneberger, P. Fischer, and T. Brox (2015) U-net: convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, N. Navab, J. Hornegger, W. M. Wells, and A. F. Frangi (Eds.), Cham, pp. 234–241. External Links: ISBN 978-3-319-24574-4 Cited by: §I, §II, TABLE I.
  • [12] X. Shi, Z. Chen, H. Wang, D. Yeung, W. Wong, and W. Woo (2015) Convolutional lstm network: a machine learning approach for precipitation nowcasting. In Advances in Neural Information Processing Systems 28, C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett (Eds.), pp. 802–810. External Links: Link Cited by: §II-B, §III-A.
  • [13] H. Song, W. Wang, S. Zhao, J. Shen, and K. Lam (2018-09) Pyramid dilated deeper convlstm for video salient object detection. In The European Conference on Computer Vision (ECCV), Cited by: §II-B, §III-A.
  • [14] J. Staal, M. D. Abramoff, M. Niemeijer, M. A. Viergever, and B. van Ginneken (2004-04) Ridge-based vessel segmentation in color images of the retina. IEEE Transactions on Medical Imaging 23 (4), pp. 501–509. External Links: Document, ISSN 1558-254X Cited by: §V.
  • [15] A. Veit, M. Wilber, and S. Belongie (2016) Residual networks behave like ensembles of relatively shallow networks. External Links: 1605.06431 Cited by: §II-A.
  • [16] Y. Weng, T. Zhou, Y. Li, and X. Qiu (2019) NAS-unet: neural architecture search for medical image segmentation. IEEE Access 7 (), pp. 44247–44257. External Links: Document, ISSN 2169-3536 Cited by: §II, §III-A.
  • [17] J. Zhuang (2018-10) LadderNet: Multi-path networks based on U-Net for medical image segmentation. arXiv e-prints, pp. arXiv:1810.07810. External Links: 1810.07810 Cited by: §II-A, §III-A, TABLE I.

Appendix A BUSU-Net Visualization

Fig. 3: BUSU-Net

Appendix B LightBUSU-Net Visualization

Fig. 4: LightBUSU-Net