Deep learning has had a large impact on the field of computer vision. Biomedical image analysis is a particular area benefiting from the latest advances in computer vision methodology, but it has only recently adopted deep learning techniques on a wider scale [Litjens2017]. Adapting deep learning to biomedical imaging problems requires speciality operations, and the lack of low-threshold access to validated reference implementations has led to slow progress. To address this, we present DLTK (https://dltk.github.io/), a Deep Learning Toolkit for Medical Image Analysis. DLTK simplifies the application of and experimentation with deep learning methods on medical imaging data by providing validated, high-performance reference implementations and the required speciality operations.
2 Building DLTK
We believe that most of the deep learning research relevant to medical imaging will address at least one of these components: a) data reading and preprocessing; b) model definitions and network architectures; c) training and optimisation strategies; d) deployment of methods to new data. Therefore, DLTK enables a plug-and-play structure for these components, building on TensorFlow's [Abadi2015] new high-level API [cheng2017tensorflow]. This approach gives users the freedom to reuse existing components but also to rapidly integrate new functionality as the subject of their research. We interface directly with the TensorFlow framework and therefore benefit from its wide range of operations and community contributions. While other packages, such as NiftyNet [niftynet17], build on a predefined application structure, we emphasise an API level that prioritises experimentation by exposing access to low-level operations.
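To make the plug-and-play idea concrete, the sketch below shows two of the four exchangeable component types as plain callables: a data-reading component (a random patch sampler) and a training component (a cross-entropy loss). The function names and signatures here are illustrative assumptions for this sketch, not DLTK's actual TensorFlow-based API, which builds on the high-level Estimator interface instead.

```python
import numpy as np

def random_sampler(volume, patch_size, n, rng):
    """a) data reading: draw n random patches from a 3D volume.
    Hypothetical helper for illustration, not DLTK's real reader."""
    patches = []
    for _ in range(n):
        corner = [rng.integers(0, s - p + 1)
                  for s, p in zip(volume.shape, patch_size)]
        sl = tuple(slice(c, c + p) for c, p in zip(corner, patch_size))
        patches.append(volume[sl])
    return np.stack(patches)

def cross_entropy(probs, labels, eps=1e-7):
    """c) training: per-voxel cross-entropy for one-hot labels,
    with the class dimension last."""
    return float(-np.mean(np.sum(labels * np.log(probs + eps), axis=-1)))

# Components are exchanged without touching the rest of the pipeline,
# e.g. an experiment simply bundles its chosen callables:
experiment = {"sampler": random_sampler, "loss": cross_entropy}
```

Swapping the sampler or the loss then amounts to replacing one entry of such a bundle, leaving the remaining components untouched.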
3 Experiments & Results
DLTK reference implementations of the FCN [Long2015] and U-Net [Ronneberger2015] architectures using residual units [He2015] are tested on the publicly available dataset from the MICCAI 2015 challenge “Multi-Atlas Labeling Beyond the Cranial Vault”. We test combinations of components: a) data reading with random and class-balanced sampling of patches; and b) training with a Dice, cross-entropy and class-balanced cross-entropy loss for the two network architectures. All methods are trained with ADAM [Kingma2014] with default parameters, tuned to counteract loss spikes. Network inputs are patches of size voxels. We found that the U-Net trained with cross-entropy loss and class-balanced sampling performed best out of all combinations. The comparative performance of all experiment dimensions is depicted in Figure 2 in the Appendix.
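The two experimental dimensions above, sampling strategy and loss, can be sketched as follows. Both functions are minimal illustrations under assumed conventions (channels-last one-hot labels, integer label volumes); they name hypothetical helpers rather than DLTK's actual implementations.

```python
import numpy as np

def soft_dice_loss(probs, labels, eps=1e-7):
    """Soft Dice loss, a sketch of the Dice loss variant compared in the
    experiments; probs and labels have shape (batch, ..., n_classes)."""
    axes = tuple(range(1, probs.ndim - 1))  # reduce over spatial axes
    intersection = np.sum(probs * labels, axis=axes)
    denominator = np.sum(probs + labels, axis=axes)
    dice_per_class = (2.0 * intersection + eps) / (denominator + eps)
    return float(1.0 - np.mean(dice_per_class))

def class_balanced_centres(label_volume, n, rng):
    """Draw n patch centres visiting each label class in turn, a sketch
    of class-balanced patch sampling (hypothetical helper)."""
    classes = np.unique(label_volume)
    centres = []
    for i in range(n):
        voxels = np.argwhere(label_volume == classes[i % len(classes)])
        centres.append(voxels[rng.integers(len(voxels))])
    return np.stack(centres)
```

Class-balanced sampling of this kind counteracts the strong foreground/background imbalance of abdominal CT labels, which is one plausible reason it paired well with the cross-entropy loss in the experiments.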
For external comparison, we submitted the best-performing method to the challenge website and compare it to the winning entry, a multi-atlas approach [heinrichmulti], and to the best-performing CNN [Larsson2017]. Our U-Net implementation achieves new state-of-the-art results of in terms of DSC, compared to the multi-atlas method of [heinrichmulti] with . The previously best-performing CNN [Larsson2017] achieves . Interestingly, we report a slightly lower validation performance of compared to [Larsson2017] with , which might indicate overfitting of the previous CNN [Larsson2017] to the training data, as we achieve better test-set performance than validation performance. We further note that our architecture is far from its potential optimal performance, as we did not fine-tune any of its hyperparameters or the data preprocessing, preferring instead to report the out-of-the-box performance of the DLTK implementation.
4 Conclusion
We present a new toolkit for performing deep learning experiments tailored to biomedical image analysis. DLTK offers baseline implementations of popular network architectures, sampling techniques and losses commonly used in medical image analysis. It further offers easy deployment, including sliding-window inference invariant to input shapes. In the future, we aim to extend DLTK with additional components and the latest developments and thus enable low-threshold access to cutting-edge deep learning methods for medical imaging.
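The shape-invariant sliding-window deployment mentioned above can be sketched as follows: a volume of arbitrary size is tiled into overlapping fixed-size patches, a prediction function is run on each patch, and overlapping predictions are averaged back into full resolution. The 50% overlap and averaging scheme are assumptions for this sketch, not necessarily DLTK's exact implementation.

```python
import itertools
import numpy as np

def sliding_window_inference(volume, patch_size, predict_fn):
    """Run predict_fn over overlapping patches of a volume of arbitrary
    shape and average the per-voxel predictions (illustrative sketch)."""
    out = np.zeros_like(volume, dtype=float)
    counts = np.zeros_like(volume, dtype=float)
    steps = [max(1, p // 2) for p in patch_size]  # assume 50% overlap
    starts = [list(range(0, max(s - p, 0) + 1, st))
              for s, p, st in zip(volume.shape, patch_size, steps)]
    # ensure the final patch in each dimension touches the volume border
    for dim, (s, p) in enumerate(zip(volume.shape, patch_size)):
        last = max(s - p, 0)
        if starts[dim][-1] != last:
            starts[dim].append(last)
    for corner in itertools.product(*starts):
        sl = tuple(slice(c, c + p) for c, p in zip(corner, patch_size))
        out[sl] += predict_fn(volume[sl])
        counts[sl] += 1.0
    return out / counts
```

Because the patch grid adapts to the input shape, the same trained fixed-input network can be deployed to scans of any resolution without cropping or resampling the whole volume.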
NP is supported by a Microsoft Research PhD Scholarship and the EPSRC Centre for Doctoral Training in High Performance Embedded and Distributed Systems (HiPEDS, Grant Reference EP/L016796/1). MR is supported by an Imperial College Research Fellowship. We gratefully acknowledge the support of NVIDIA with the donation of one Titan X GPU for our research.