DLTK: State of the Art Reference Implementations for Deep Learning on Medical Images

by Nick Pawlowski, et al.
Imperial College London

We present DLTK, a toolkit providing baseline implementations for efficient experimentation with deep learning methods on biomedical images. It builds on top of TensorFlow, and its high modularity and easy-to-use examples allow low-threshold access to state-of-the-art implementations for typical medical imaging problems. A comparison of DLTK's reference implementations of popular network architectures for image segmentation demonstrates new top performance on the publicly available challenge data "Multi-Atlas Labeling Beyond the Cranial Vault". The average test Dice similarity coefficient of 81.5 exceeds the previously best performing CNN (75.7) and the accuracy of the challenge-winning method (79.0).



1 Introduction

The successful application of deep convolutional neural networks (CNNs) by Krizhevsky et al. [Krizhevsky2012] to the ImageNet challenge [Russakovsky2015] has had a large impact on the field of computer vision. Biomedical image analysis is a particular area benefiting from the latest advancements in computer vision methodology, but it has only recently adopted deep learning techniques on a wider scale [Litjens2017]. Adapting deep learning to biomedical imaging problems requires speciality operations, and the lack of low-threshold access to validated reference implementations has led to slow progress. To address this, we present DLTK (https://dltk.github.io/), a Deep Learning Toolkit for Medical Image Analysis. DLTK simplifies the application of, and experimentation with, deep learning methods on medical imaging data by providing validated, high-performance reference implementations and the required speciality operations.

2 Building DLTK

We believe that most of the deep learning research relevant to medical imaging addresses at least one of these components: a) data reading and preprocessing; b) model definitions and network architectures; c) training and optimisation strategies; d) deployment of methods to new data. Therefore, DLTK enables a plug-and-play structure for these components based on TensorFlow's [Abadi2015] new high-level API [cheng2017tensorflow]. This approach gives users the freedom to reuse existing components, but also to rapidly integrate new functionality as the subject of their research. We naturally interface with the TensorFlow framework and therefore benefit from its wide range of operations and community contributions. While other packages, such as NiftyNet [niftynet17], build on a predefined application structure, we emphasise an API level that prioritises experimentation by exposing access to low-level operations.
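The plug-and-play structure described above can be sketched in miniature as a training loop wired together from interchangeable callables. All names here are illustrative assumptions for exposition and do not reflect the actual DLTK API:

```python
# Hypothetical sketch of the component structure: a) data reading,
# b) model definition, and c) optimisation strategy are passed into a
# generic training loop as swappable callables.

def train(read_fn, init_fn, step_fn, data, epochs=50):
    """Generic training loop assembled from plug-and-play components."""
    w = init_fn()                       # b) model definition / initialisation
    for _ in range(epochs):
        for x, y in read_fn(data):      # a) data reading / preprocessing
            w = step_fn(w, x, y)        # c) training / optimisation strategy
    return w

def make_sgd(lr=0.05):
    """c) A minimal optimisation strategy: SGD on a scalar least-squares fit."""
    def step(w, x, y):
        grad = 2.0 * (w * x - y) * x    # d/dw of (w * x - y) ** 2
        return w - lr * grad
    return step

# Toy usage: fit a scalar weight to y = 2x. Swapping any single component
# (e.g. a different step_fn) leaves the rest of the pipeline untouched.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train(read_fn=iter, init_fn=lambda: 0.0, step_fn=make_sgd(), data=data)
```

The same dependency-injection pattern underlies the real components (readers, model functions, losses), where each callable can be replaced without touching the others.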

3 Experiments & Results

DLTK reference implementations of the FCN [Long2015] and U-Net [Ronneberger2015] architectures using residual units [He2015] are tested on the publicly available dataset from the MICCAI 2015 challenge "Multi-Atlas Labeling Beyond the Cranial Vault". We test combinations of components: a) data reading with random and class-balanced sampling of patches; b) training with a Dice, cross-entropy, and class-balanced cross-entropy loss for the two network architectures. All methods are trained with ADAM [Kingma2014] using default parameters, tuned to counteract loss spikes. Network inputs are fixed-size voxel patches. We found that the U-Net trained with cross-entropy loss and class-balanced sampling performed best out of all combinations. The comparative performance across all experiment dimensions is depicted in Figure 2 in the Appendix.
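Two of the components compared above, the Dice loss and class-balanced patch sampling, can be sketched generically as follows. These are standard formulations given for illustration, assuming NumPy arrays; they are not the exact DLTK implementations:

```python
import numpy as np

def soft_dice_loss(probs, labels, eps=1e-7):
    """Soft Dice loss averaged over classes.

    probs:  (..., C) predicted class probabilities
    labels: (..., C) one-hot ground-truth labels
    """
    axes = tuple(range(probs.ndim - 1))             # sum over all but classes
    intersect = np.sum(probs * labels, axis=axes)
    denom = np.sum(probs, axis=axes) + np.sum(labels, axis=axes)
    dice = (2.0 * intersect + eps) / (denom + eps)  # per-class Dice score
    return 1.0 - dice.mean()

def balanced_patch_slices(label_vol, patch_size, rng):
    """Class-balanced sampling: draw a class uniformly at random, then
    centre a cubic patch of side patch_size on a random voxel of that
    class, clipped so the patch stays inside the volume."""
    c = rng.choice(np.unique(label_vol))
    coords = np.argwhere(label_vol == c)
    centre = coords[rng.integers(len(coords))]
    lo = np.clip(centre - patch_size // 2, 0,
                 np.array(label_vol.shape) - patch_size)
    return tuple(slice(int(l), int(l) + patch_size) for l in lo)
```

Class-balanced sampling counteracts the extreme foreground/background imbalance of abdominal CT labels, which is why it pairs well with a plain cross-entropy loss in the experiments above.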

For external comparison, we submitted the best performing method to the challenge website and compare against the winning entry, which employs a multi-atlas approach [heinrichmulti], and the best performing CNN [Larsson2017]. Our U-Net implementation achieves a new state-of-the-art test DSC of 81.5, compared to 79.0 for the multi-atlas method [heinrichmulti]; the previously best performing CNN [Larsson2017] achieves 75.7. Interestingly, our validation performance is slightly lower than that reported by Larsson2017, which might indicate overfitting of the previous CNN to the training data, as we achieve better test set performance than validation performance. We further note that our architecture is far from its optimal performance, as we did not fine-tune any of its hyperparameters or the data preprocessing, preferring instead to report the out-of-the-box performance of the DLTK implementation.

(a) Axial view of an exemplary segmentation
(b) 3D rendering of an exemplary segmentation
Figure 1: Prediction of the DLTK U-Net segmenting 13 organs on abdominal CT scans.

4 Conclusion

We present a new toolkit for performing deep learning experiments tailored to biomedical image analysis. DLTK offers baseline implementations of popular network architectures, sampling techniques, and losses commonly used in medical image analysis. It further offers easy deployment, including sliding-window inference invariant to input shapes. In the future, we aim to extend DLTK with additional components and the latest developments, thus enabling low-threshold access to cutting-edge deep learning methods for medical imaging.
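The shape-invariant sliding-window inference mentioned above can be illustrated with a simplified, non-overlapping variant: the volume is zero-padded to a multiple of the window size, a patch-level predictor runs on each tile, and the outputs are stitched back and cropped to the original shape. This is a generic sketch under those assumptions, not the actual DLTK implementation:

```python
import numpy as np

def sliding_window_predict(volume, window, predict_fn):
    """Run a fixed-window patch predictor over a volume of arbitrary shape."""
    shape = np.array(volume.shape)
    padded = np.ceil(shape / window).astype(int) * window
    # Zero-pad so every axis is a multiple of the window size.
    vol = np.pad(volume, [(0, int(p - s)) for p, s in zip(padded, shape)])
    out = np.zeros_like(vol)
    for idx in np.ndindex(*(padded // window)):
        sl = tuple(slice(i * window, (i + 1) * window) for i in idx)
        out[sl] = predict_fn(vol[sl])          # patch-level prediction
    # Crop the stitched output back to the original volume shape.
    return out[tuple(slice(0, int(s)) for s in shape)]
```

Because only the padding and cropping depend on the input shape, the same fixed-size network can be deployed to scans of any dimensions.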


NP is supported by a Microsoft Research PhD Scholarship and the EPSRC Centre for Doctoral Training in High Performance Embedded and Distributed Systems (HiPEDS, Grant Reference EP/L016796/1). MR is supported by an Imperial College Research Fellowship. We gratefully acknowledge the support of NVIDIA with the donation of a Titan X GPU for our research.


Appendix A Experimental Comparison of Network Architectures

Figure 2: Box plot comparing the DSC scores, recall, and precision for the U-Net (green) and FCN (blue). The white markers show the mean, the black line shows the median, the error bars indicate confidence intervals, and additional black markers indicate outliers. The U-Net outperforms the FCN in almost every comparison; the FCN only outperforms the U-Net on recall for the right adrenal gland.