Conditional random fields (CRFs) have been widely used as an efficient post-processing method in medical image segmentation. A more elegant way to exploit a CNN and a CRF at the same time was first introduced by Zheng et al., who trained a 2D CNN and a CRF jointly, and was further developed by Monteiro et al. into a 3D version. Similar joint CNN-CRF optimization approaches have been investigated by other researchers as well.
However, current end-to-end training methods in medical imaging still rely on independent tuning of some of the CRF parameters, and all of them use intensity information as the primary feature space. In medical images, intensity often provides a low-quality feature space for the CRF: intensities are noisy, and several structures belonging to different classes may share the same intensity. To counter this, we propose a new CRF method, called Posterior-CRF, that can be applied at the end of a segmentation CNN (such as U-Net). In contrast with previous end-to-end approaches, it optimizes all CRF parameters during network training and applies the CRF to the posterior probability maps instead of the original intensity information. In this way, the mean-field inference in the CRF can make full use of the high-quality feature maps learned by the CNN. Our experiments show that this approach outperforms post-processing CRFs and previous end-to-end CRF approaches in white matter hyperintensities segmentation.
2.1 CNN modeling
as activation function, except for the last output layer, which uses softmax to produce the final CNN probability maps. We use categorical cross-entropy as the loss function.
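As a minimal sketch of this output stage, the following numpy snippet computes a softmax over the class axis and the categorical cross-entropy against one-hot labels (shapes and values are illustrative, not from the paper):

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over the class axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def categorical_cross_entropy(probs, onehot, eps=1e-7):
    """Mean voxelwise cross-entropy between predicted class
    probabilities and one-hot ground-truth labels."""
    probs = np.clip(probs, eps, 1.0)
    return -np.mean(np.sum(onehot * np.log(probs), axis=-1))

# toy example: 2 voxels, 3 classes
logits = np.array([[2.0, 0.5, -1.0],
                   [0.1, 0.2, 3.0]])
probs = softmax(logits)
onehot = np.array([[1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0]])
loss = categorical_cross_entropy(probs, onehot)
```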
In the fully-connected CRF model (Krähenbühl and Koltun, 2011), the Gibbs energy w.r.t. the label assignment $\mathbf{x}$ is

$$E(\mathbf{x}) = \sum_{i} \psi_u(x_i) + \sum_{i<j} \psi_p(x_i, x_j) \tag{1}$$

where $i$ and $j$ range from 1 to $N$, the number of voxels in the random field, defined over the 3D input patch $I$. For convenience, the conditioning on $I$ will be omitted in the rest of the paper. The first term is the unary potential $\psi_u(x_i)$, which in our case is given by the voxelwise class probabilities in the last CNN layer. The second term is the pairwise potential:
$$\psi_p(x_i, x_j) = \mu(x_i, x_j) \sum_{m} w^{(m)} k^{(m)}(\mathbf{f}_i, \mathbf{f}_j) \tag{2}$$

where $\mu(x_i, x_j)$ is the label compatibility function given by the Potts model, which captures the compatibility between different pairs of labels, and $w^{(m)}$ is the linear combination weight of the predefined kernels $k^{(m)}$.
The first kernel in Eq. 2 is the appearance kernel

$$k^{(1)} = \exp\left(-\frac{|p_i - p_j|^2}{2\theta_\alpha^2} - \frac{|f_i - f_j|^2}{2\theta_\beta^2}\right),$$

defined by the position vectors $p_i$ and $p_j$ and the feature vectors $f_i$ and $f_j$; the second kernel is the smoothness kernel

$$k^{(2)} = \exp\left(-\frac{|p_i - p_j|^2}{2\theta_\gamma^2}\right),$$

which is controlled only by the voxel positions. $\theta_\alpha$, $\theta_\beta$ and $\theta_\gamma$ are the parameters that control the sensitivity to the corresponding feature space. Previous methods usually use the intensities of the input image as the features (or reference map), which we call Intensity-CRF (Fig 1). However, Intensity-CRF is very sensitive to the parameter $\theta_\beta$, because intensities vary considerably between different medical images and are corrupted by random noise. We therefore replace the intensities by the posterior probabilities as the new reference maps, which we call the Posterior-CRF method (Fig 1). The idea of Posterior-CRF is to use the highest-quality CNN feature maps as the feature space for mean-field inference. As an efficient feature extractor, the 3D U-Net provides feature maps with better class separation than the original input image. Moreover, Posterior-CRF avoids the noisy intensity feature space that makes inference in Intensity-CRF unstable. A further advantage over intensity-based methods is that there is no longer any need to pretrain and fix the CRF parameters, since the original intensity information is no longer used: all parameters $w^{(1)}$, $w^{(2)}$, $\theta_\alpha$, $\theta_\beta$, $\theta_\gamma$ are trained together with the other weights in the network.
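To make the inference concrete, here is a naive O(N²) sketch of dense-CRF mean-field updates with the two kernels above. The feature map is left as an argument, so it can be image intensities (Intensity-CRF) or the CNN posteriors themselves (Posterior-CRF). All parameter values are illustrative; practical implementations approximate the message passing with permutohedral-lattice filtering rather than building the N×N affinity matrix:

```python
import numpy as np

def dense_crf_mean_field(unary_probs, positions, features,
                         w1=1.0, w2=1.0,
                         theta_alpha=1.0, theta_beta=1.0, theta_gamma=1.0,
                         n_iter=5):
    """Naive mean-field inference for a fully-connected CRF.

    unary_probs: (N, C) CNN class probabilities (the unary term).
    positions:   (N, 3) voxel coordinates.
    features:    (N, F) reference map; pass `unary_probs` here to get
                 the Posterior-CRF variant.
    """
    # Squared distances in position and feature space for all voxel pairs.
    dp = ((positions[:, None, :] - positions[None, :, :]) ** 2).sum(-1)
    df = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    # Appearance kernel + smoothness kernel (Eq. 2).
    K = (w1 * np.exp(-dp / (2 * theta_alpha**2) - df / (2 * theta_beta**2))
         + w2 * np.exp(-dp / (2 * theta_gamma**2)))
    np.fill_diagonal(K, 0.0)            # no message from a voxel to itself
    log_unary = np.log(np.clip(unary_probs, 1e-7, 1.0))
    Q = unary_probs.copy()
    for _ in range(n_iter):
        msg = K @ Q                     # aggregate messages per class
        # Potts compatibility: penalty equals the mass of all other classes.
        pairwise = msg.sum(axis=1, keepdims=True) - msg
        logits = log_unary - pairwise
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        Q = e / e.sum(axis=1, keepdims=True)
    return Q

rng = np.random.default_rng(0)
pos = rng.random((6, 3))
unary = rng.random((6, 2))
unary = unary / unary.sum(axis=1, keepdims=True)
Q = dense_crf_mean_field(unary, pos, features=unary)  # Posterior-CRF variant
```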
We test our methods on 60 FLAIR scans from the WMH 2017 Challenge. The images were acquired at three hospitals and manually annotated with three labels: background, white matter hyperintensities, and other pathology. The images are randomly split into 36 for training, 12 for validation and 12 for testing. Training patches are extracted and cropped (or padded if smaller) to a fixed size, keeping the original voxel size and intensities of each hospital, with 87.5% overlap between patches in the z-direction. Several 3D data augmentation strategies were applied to the training patches, including 3D rotation with randomly sampled angles, shifting by [7, 24, 24] voxels, and flipping in all three planes (XY, XZ and YZ). We trained our network on all three labels and report results for white matter hyperintensities versus background plus other pathology.
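The flipping and shifting steps can be sketched as follows (a simplified version: `np.roll` wraps voxels around instead of padding, the shift maxima follow the paper, and the rotation step is omitted because its angle range is implementation-specific):

```python
import numpy as np

def augment_patch(patch, labels, rng):
    """Randomly flip a 3D patch (and its label map) along each axis
    and apply a random integer shift per axis."""
    for axis in range(3):
        if rng.random() < 0.5:
            patch = np.flip(patch, axis=axis)
            labels = np.flip(labels, axis=axis)
    # Random shift up to the per-axis maxima [7, 24, 24] voxels.
    # NOTE: np.roll wraps around; a real pipeline would pad instead.
    shifts = [int(rng.integers(-s, s + 1)) for s in (7, 24, 24)]
    patch = np.roll(patch, shifts, axis=(0, 1, 2))
    labels = np.roll(labels, shifts, axis=(0, 1, 2))
    return patch, labels

rng = np.random.default_rng(0)
p, l = augment_patch(np.zeros((32, 64, 64)),
                     np.zeros((32, 64, 64), dtype=int), rng)
```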
As shown in Table 1, we compare our method with the U-Net baseline and three other CRF approaches: Post-CRF (CRF applied as post-processing), Intensity-CRF, and Spatial-CRF, which uses only position information (the second kernel in Eq. 2). The parameters $\theta$ and $w$ for Post-CRF are tuned by grid search on the training set. For Intensity-CRF and Spatial-CRF, the relevant $\theta$ are taken from Post-CRF, while the weights $w$ are learned. Posterior-CRF achieves the best Dice score and average volume difference, performs well on Hausdorff distance, and strikes a good balance between false positives and false negatives. In the example slice shown in Fig 2, Posterior-CRF has the best visual quality of all approaches: the U-Net and Post-CRF results are visually equivalent and contain many false positives, while Intensity-CRF and Spatial-CRF are similar to each other and both remove too many voxels.
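For reference, the two overlap metrics where Posterior-CRF scores best can be computed as below (a standard sketch; the challenge's own evaluation code may differ in details such as the percentage convention for volume difference):

```python
import numpy as np

def dice_score(pred, gt):
    """Dice overlap between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom

def avg_volume_difference(pred, gt):
    """Absolute volume difference as a percentage of the
    reference (ground-truth) volume."""
    return 100.0 * abs(int(pred.sum()) - int(gt.sum())) / gt.sum()

pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
gt   = np.array([[1, 1, 0],
                 [0, 0, 0]])
# Dice = 2*2 / (3+2) = 0.8; AVD = 100*|3-2|/2 = 50.0
```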
We propose a new end-to-end CRF approach, Posterior-CRF, which can be trained jointly with a CNN and overcomes the drawbacks of previous CRF approaches.
 Kamnitsas, K., Ledig, C., Newcombe, V. F., Simpson, J. P., Kane, A. D., Menon, D. K., … Glocker, B. (2017). Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Medical image analysis, 36, 61-78.
 Dou, Q., Yu, L., Chen, H., Jin, Y., Yang, X., Qin, J., Heng, P. A. (2017). 3D deeply supervised network for automated segmentation of volumetric medical images. Medical image analysis, 41, 40-54.
Zheng, S., Jayasumana, S., Romera-Paredes, B., Vineet, V., Su, Z., Du, D., Huang, C., Torr, P. H. S. (2015). Conditional random fields as recurrent neural networks. In Proceedings of the IEEE International Conference on Computer Vision (pp. 1529-1537).
 Schwing, A. G., Urtasun, R. (2015). Fully connected deep structured networks. arXiv preprint arXiv:1503.02351.
 Lin, G., Shen, C., Van Den Hengel, A., Reid, I. (2016). Efficient piecewise training of deep structured models for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 3194-3203).
 Monteiro, M., Figueiredo, M. A., Oliveira, A. L. (2018). Conditional Random Fields as Recurrent Neural Networks for 3D Medical Imaging Segmentation. arXiv preprint arXiv:1807.07464.
 Ronneberger, O., Fischer, P., Brox, T. (2015). U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention (pp. 234-241). Springer, Cham.
 Krähenbühl, P., Koltun, V. (2011). Efficient inference in fully connected crfs with gaussian edge potentials. In Advances in neural information processing systems (pp. 109-117).
 Adams, A., Baek, J., Davis, M. A. (2010). Fast high dimensional filtering using the permutohedral lattice. In Computer Graphics Forum (Vol. 29, No. 2, pp. 753-762). Oxford, UK: Blackwell Publishing Ltd.
 Wachinger, C., Reuter, M., Klein, T. (2018). DeepNAT: Deep convolutional neural network for segmenting neuroanatomy. NeuroImage, 170, 434-445.