Globally Optimal Surface Segmentation using Deep Learning with Learnable Smoothness Priors

by Leixin Zhou, et al.
The University of Iowa

Automated surface segmentation is important and challenging in many medical image analysis applications. Recent deep learning based methods have been developed for various object segmentation tasks. Most of them, e.g. U-net, take a classification based approach that predicts the probability of each voxel being the target object or background. One problem with those methods is the lack of topology guarantees for the segmented objects, so post processing is usually needed to infer the boundary surface of the object. In this paper, a novel model based on a convolutional neural network (CNN) followed by a learnable surface smoothing block is proposed to tackle the surface segmentation problem with end-to-end training. To the best of our knowledge, this is the first study to learn smoothness priors end-to-end with a CNN for direct surface segmentation with global optimality. Experiments on Spectral Domain Optical Coherence Tomography (SD-OCT) retinal layer segmentation and Intravascular Ultrasound (IVUS) vessel wall segmentation demonstrated very promising results.



1 Introduction

Automated segmentation of objects, or equivalently of their boundary surfaces, plays a very important role in quantitative image analysis. In recent years, deep learning based methods for semantic segmentation have become very popular in computer vision and medical imaging. Fully convolutional networks (FCN) [11], and later U-net [15] for medical image segmentation, have been proposed. All these methods model the segmentation problem as a pixel-wise or voxel-wise classification problem. Popular loss functions for training these networks include cross entropy (CE), weighted cross entropy (WCE) for imbalanced classes, and the dice similarity coefficient (DSC). As there is no explicit constraint among the labels of different pixels, post processing, such as morphological operations, is usually required to obtain reasonable predictions.

It is well known that prior knowledge, such as shape priors, can significantly improve segmentation performance, especially in medical scenarios where the target objects often lack strong boundaries and/or present intensity profiles similar to those of crowded nearby tissues. Several studies have investigated this direction. For example, Milletari et al. [12] proposed to integrate into CNNs statistical shape priors obtained from principal component analysis (PCA). Although the robustness of segmentation was improved, the mean DSC did not increase. Ravishankar et al. [14] utilized a shape regularization network to project the preliminary prediction into a shape space, and added a shape loss to pull the prediction further toward the desired shape space. Chen et al. [5] proposed an end-to-end approach combining a CNN with Conditional Random Fields (CRFs) for segmentation. However, the inference of CRFs was approximate and not globally optimal.

Inspired by the graph search method [10], Shah et al. [17] first modeled terrain-like surface segmentation as direct surface identification using regression. Their network consists of an FCN followed by fully-connected layers; it is very lightweight, no post processing is required, and the results are surprisingly promising [17]. It is well known that U-net outperforms FCN because its additional expansive path allows high-resolution features to be learned, yielding better prediction accuracy. To further improve segmentation accuracy, it is natural to replace the FCN with a U-net. However, concatenating a U-net with fully-connected layers is not reasonable: feature maps at the original resolution are expected to be far less invariant than those at low resolution, so the fully-connected layers would be much more likely to overfit heavily to the training data. Another drawback of Shah et al.'s work [17] is that the surface smoothness is learned implicitly within the whole network (mainly within the fully-connected layers) as a black box, which is hard to interpret after training.

To resolve the problems mentioned above, we propose to explicitly model the surface segmentation problem as a quadratic program, which can be solved with guaranteed global optimality, and the whole network can be trained end-to-end. Our contributions are threefold: 1) for the first time, the output of the U-net is parameterized as Gaussians (the mean represents the surface position predicted by the U-net and the standard deviation encodes the prediction confidence), which converts the description from the discrete to the continuous space; 2) the parameter that controls smoothness can be learned and has a clear meaning; 3) the method works in the continuous space and enables sub-pixel segmentation.

2 Method

Figure 1: Surface segmentation definition.

We first define the surface segmentation problem. A 3-D image I can be viewed as a 3-D tensor. A terrain-like surface in I is oriented as shown in Fig. 1. Let X, Y and Z denote the image sizes in the three dimensions, respectively, and let each pair (x, y) index one column of Z voxels. The surface S is defined by its position S(x, y) on every column, so any feasible surface in I intersects each column exactly once. Following the general surface segmentation formulation [10], the problem is cast as minimizing the energy function

E(S) = Σ_{(x,y)} φ(S(x, y)) + Σ_{((x,y),(p,q)) ∈ N} ψ(S(x, y), S(p, q)),    (1)

where the unary term φ is the energy when considering each column independently, the pairwise term ψ penalizes discontinuity of the surface position between adjacent columns, and N is the set of neighboring column pairs. The design of φ and ψ will be detailed in Section 2.1.

2.1 Proposed Inference Pipeline

One should note that the proposed method can be applied in both 2-D and 3-D. For proof of concept and clarity of explanation, the detailed description of the proposed method and all experiments are given in 2-D. The inference pipeline of the proposed method is demonstrated in Fig. 2(a). The trained U-net takes in the original image I and outputs a discrete probability map M. Ideally, for each image column, the probability at the target surface position is high and gradually decreases at positions away from the surface, as demonstrated in Fig. 2(b). We thus propose a block to convert the discrete probability map to a Gaussian parameterization (μ, σ), where μ specifies the mean surface position on each column and σ is the corresponding standard deviation. The Gaussian parameterization is then fed into the trained smoothing block (SB), which incorporates the learned surface smoothness priors to infer the optimal target surface. Next, we detail the novel blocks in our deep optimal surface segmentation network.

Figure 2: (a) Inference pipeline of the proposed method. (b) Target surface position (left) and its relaxed surface possibility map (right) for one column.

D2C block The D2C block converts the discrete probability map of each column, output from the U-net, into a continuous representation (a Gaussian in our design). This enables optimizing directly on the surface position and predicting with sub-pixel accuracy. The conversion is realized by fitting a continuous Gaussian function to the discrete probability map of each column, which can be viewed as discrete samples of a continuous Gaussian probability density function. Recall that the one dimensional Gaussian function has the formula

f(x) = A exp(-(x - μ)² / (2σ²)),

and then ln f(x) = a + bx + cx², where a = ln A - μ²/(2σ²), b = μ/σ² and c = -1/(2σ²). In our setting, for each column we have Z samples of (x, f(x)). We can define an error function [8] between ln f(x) and a + bx + cx², and by minimizing the weighted mean square error (MSE) obtain the estimates of (a, b, c) by solving a set of three linear equations; then μ = -b/(2c) and σ² = -1/(2c). The problem is very similar to a least squares problem [8]. In our implementation, a linear transform is utilized to normalize the probability map of each column to the range [0, 1], so that the magnitude A can be ignored. As the computation of each column is independent, the block is straightforward to extend to 3-D.
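As a concrete illustration, the column-wise Gaussian fit can be sketched as a weighted least squares solve in the log domain, following Guo's method [8]. The weighting by the squared probabilities and the epsilon guard are our own implementation choices, not prescribed above:

```python
import numpy as np

def fit_gaussian_column(p, eps=1e-8):
    """Fit (mu, sigma) to one column's discrete probability map by weighted
    least squares on ln p(x) = a + b*x + c*x^2 (Guo's method [8])."""
    x = np.arange(len(p), dtype=float)
    p = p / (p.max() + eps)            # normalize to [0, 1] so the magnitude A drops out
    w = p ** 2                         # weights emphasize high-probability samples
    y = np.log(p + eps)
    V = np.stack([np.ones_like(x), x, x * x], axis=1)
    # Solve the 3x3 weighted normal equations for (a, b, c)
    a, b, c = np.linalg.solve(V.T @ (w[:, None] * V), V.T @ (w * y))
    return -b / (2.0 * c), np.sqrt(-1.0 / (2.0 * c))   # mu, sigma
```

For a noiseless Gaussian-shaped column the fit recovers the mean and standard deviation almost exactly, including sub-pixel surface positions.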

Smoothing Block (SB) To integrate the surface segmentation model (Eq. 1) with smoothness priors, we define the energy function as

E(S) = Σ_i (S_i - μ_i)² / σ_i² + λ Σ_{(i,j) ∈ N} (S_i - S_j)²,    (2)

where the unary and pairwise terms are defined as φ(S_i) = (S_i - μ_i)²/σ_i² and ψ(S_i, S_j) = λ(S_i - S_j)², λ is the learnable smoothness parameter, and N is the set of neighboring column pairs.

Figure 3: The proposed SB architecture. Only one parameter needs to be trained.

For simplicity, only the nearest neighbor pairs, i.e. columns i and i+1, are considered as the set of neighboring columns. The whole energy in Eq. 2 can then be reformulated in the standard quadratic form E(S) = ½ SᵀHS + cᵀS + const. It can be proved using the Gershgorin circle theorem that the Hessian matrix H is positive definite, and hence the energy function is convex. Setting the gradient HS + c to zero yields the globally optimal solution S* = -H⁻¹c.

Another advantage of the proposed energy formulation is that the optimal solution can be calculated in one step, i.e. we do not need a recurrent neural network (RNN) to implement the SB. The smoothing block is also straightforward to implement in 3-D.
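The closed-form solve can be sketched as follows for a 1-D row of columns with nearest-neighbor pairs; the dense linear solve stands in for whatever solver an actual implementation would use (a tridiagonal solve would be cheaper):

```python
import numpy as np

def smooth_surface(mu, sigma, lam):
    """Globally optimal minimizer of
       E(S) = sum_i (S_i - mu_i)^2 / sigma_i^2 + lam * sum_i (S_i - S_{i+1})^2,
    obtained by setting the gradient to zero: (diag(w) + lam*L) S = w * mu."""
    n = len(mu)
    w = 1.0 / np.asarray(sigma, dtype=float) ** 2
    # Graph Laplacian L of the chain of nearest-neighbor columns
    L = np.diag(np.r_[1.0, 2.0 * np.ones(n - 2), 1.0]) - np.eye(n, k=1) - np.eye(n, k=-1)
    H = np.diag(w) + lam * L           # proportional to the (positive definite) Hessian
    return np.linalg.solve(H, w * np.asarray(mu, dtype=float))
```

With lam = 0 the block returns the U-net means unchanged; as lam grows, the surface is pulled toward its neighbors, with low-confidence columns (large sigma) moved the most.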

2.2 Training Strategy

2.2.1 U-net Pre-training

A common U-net architecture is utilized to generate the discrete probability map M for the input image I. The detailed architecture is demonstrated in Fig. 10. In the proposed method, the softmax layer works on each column, not on each pixel. The rationale is that we assume the target surface intersects each column exactly once, and so the probabilities are normalized within each column. We also require the U-net to output a Gaussian shaped probability map for each column, which mimics Bayesian learning for each column and shares merits with knowledge distillation [9] and distillation defense [13].

To encourage the U-net to output reasonable probability maps, the Kullback-Leibler divergence (KLD) loss is utilized for U-net pre-training. KLD measures how one probability distribution differs from a reference probability distribution; it is equivalent to the cross entropy when the reference is a Kronecker delta function. We propose to relax the delta function to a Gaussian distribution so that the proposed D2C block can work properly. We pick σ to be around 0.1 times the column length, and our method is insensitive to σ. We then have one Gaussian ground truth distribution for each column, denoted G; one illustration is shown in Fig. 2(b). Denoting the output of the U-net as M, the loss for pre-training is the KLD between G and M, summed over all columns.
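A minimal NumPy sketch of the pre-training target and loss follows; a real implementation would use a framework's KL-divergence on the column-wise softmax output, and the epsilon guard is our own choice:

```python
import numpy as np

def gaussian_ground_truth(n_rows, surface, sigma):
    """Relaxed per-column ground truth: a discretized Gaussian centered on
    the true surface position, normalized so each column sums to 1."""
    x = np.arange(n_rows, dtype=float)[:, None]
    g = np.exp(-(x - np.asarray(surface)[None, :]) ** 2 / (2.0 * sigma ** 2))
    return g / g.sum(axis=0, keepdims=True)

def kld_loss(gt, pred, eps=1e-12):
    """KL(G || M) averaged over columns; equals the cross entropy minus the
    (constant) entropy of G, so minimizing it matches the two distributions."""
    return float(np.mean(np.sum(gt * np.log((gt + eps) / (pred + eps)), axis=0)))
```

The loss is zero when the predicted column distributions equal the Gaussian targets and positive otherwise.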

2.2.2 Fine Tuning

During the fine-tuning phase, the mean square surface position error (MSE) is utilized as the loss function, i.e. the mean of (S_i - g_i)² over all columns i, where g denotes the ground truth surface positions. The fine tuning of the whole network proceeds in an alternating fashion (Fig. 4).

Figure 4: Two phases of fine tuning of the proposed network. The fine tuning alternates between training the U-net for a number of epochs and training the SB for a number of epochs. The shaded blocks are kept fixed during the respective training phase.

The validation data is utilized to train the SB, while the training data is used for U-net training. As the SB has only one parameter, λ, to train, its chance of overfitting is very low. Moreover, since the U-net is not trained on the validation data, the learned λ should be more representative in the wild. Otherwise, if the U-net and SB were fine-tuned simultaneously on the training data, the learned λ would generally be smaller than necessary, as the pre-trained U-net has generally already fit the training data well and the SB would not play an important role.
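The alternation itself can be sketched as a simple schedule generator; the epoch counts here are hypothetical placeholders, not the values used in the experiments:

```python
def alternation_schedule(rounds, unet_epochs, sb_epochs):
    """Yield which block to train at each epoch, alternating between
    U-net epochs (on training data) and SB epochs (on validation data)."""
    for _ in range(rounds):
        for _ in range(unet_epochs):
            yield "unet"   # SB frozen
        for _ in range(sb_epochs):
            yield "sb"     # U-net frozen
```

A training loop would switch the optimizer, the frozen parameters, and the data loader according to the yielded tag.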

3 Experiments

3.1 SD-OCT Retinal Layer Segmentation

Figure 5: Sample SD-OCT data. Ground truth (left) and predictions (right). Red: IRPE; Blue: OBM.

The proposed method was applied to retinal layer segmentation in SD-OCT images obtained from the public dataset [6]. Since the manual tracings were only available for a region centered at the fovea, subvolumes were extracted around the fovea. The dataset was randomly divided into 3 sets: 1) training set - 263 volumes (79 normal, 184 with age-related macular degeneration (AMD)); 2) validation set - 59 volumes (18 normal, 41 AMD); 3) testing set - 58 volumes (18 normal, 40 AMD). The surfaces considered are S2 - the Inner Aspect of the Retinal Pigment Epithelium Drusen Complex (IRPE) and S3 - the Outer Aspect of the Bruch Membrane (OBM), as shown in Fig. 5; both are very challenging to segment.

Pre-processing and Augmentation The intensity of each slice was normalized to the range [-1, 1]; no additional pre-processing was utilized. For U-net pre-training, the standard deviation σ of the Gaussian model of the surface position on each column was set to around 0.1 times the column length, as described in Section 2.2.1. We augmented the training data by mirroring along the horizontal dimension and by applying 2 random translations along the axial dimension to the original image and the mirrored image.

Hyperparameters All training utilized the Adam optimizer. U-net pre-training ran for 2000 epochs. For fine-tuning, the U-net and the SB were trained with separate learning rates and epoch budgets for the two alternating phases, and the smoothness parameter λ was initialized before training.

Surface | W/O SB, normal | W/ SB, normal | W/O SB, AMD | W/ SB, AMD | [17], normal | [17], AMD
IRPE | 2.31±0.54 | 2.21±0.58 | 3.80±2.01 | 3.63±1.71 | 3.84±0.58 | 6.07±1.84
OBM | 2.97±0.35 | 2.93±0.37 | 6.09±3.10 | 5.83±2.89 | 4.97±1.01 | 5.85±1.80
Table 1: Unsigned mean surface positioning errors (UMSP, mean±std, in μm) on the SD-OCT test dataset. W/O: without SB; W/: with SB.

Results The unsigned mean surface positioning error (UMSP) [7] is utilized to evaluate segmentation accuracy; the quantitative results are summarized in Table 1. We compare to another deep learning based method [17], the state of the art on this dataset. For the IRPE surface, the proposed method without SB already outperforms [17]. With SB plugged in, the performance improves further: the UMSP is reduced by 39% for the normal cases and by a similar margin (40%) for the AMD cases. Segmentation of the OBM surface is more challenging than that of the IRPE surface; one can see that the learned SB improves segmentation accuracy consistently. For the OBM surface, compared to [17], the proposed method achieves a 41% improvement on the normal cases and performs comparably on the AMD cases. An interesting observation is that the learned λ values are around 0.05 and 1.0 for IRPE and OBM, respectively, which agrees with the human observation that generally less context information exists for OBM. Sample segmentation results are illustrated in Fig. 5, Fig. 6 and Fig. 7.

Figure 6: Sample segmentations of the OBM surface. The four columns illustrate original images, ground truth, predictions without SB (red: Gaussian means, blue: std bars) and that with SB (red: prediction, blue: ground truth). For illustrations, the standard deviation bars are downsampled. The errors shown are in the unit of pixel.
Figure 7: Sample segmentations of the IRPE surface. The four columns illustrate original images, ground truth, predictions without SB (red: Gaussian means, blue: std bars) and that with SB (red: prediction, blue: ground truth). For illustrations, the standard deviation bars are downsampled. The errors shown are in the unit of pixel.

3.2 IVUS Vessel Wall Segmentation

The data used for this experiment was obtained from the standardized evaluation of IVUS image segmentation database [2]; dataset B was used. This dataset consists of 435 images together with the respective expert manual tracings of the lumen and media surfaces, and comprises two groups: a training set (109 slices) and a testing set (326 slices).

Figure 8: Sample IVUS data. Ground truth (left) and predictions (right) in Cartesian (top) and Polar system (bottom). Red: lumen; Blue: media.

The experiment with the proposed method was conducted in conformance with the directives provided for the IVUS challenge. We randomly split the 109 training slices into 100 slices for training and 9 slices for validation.

Pre-processing and Augmentation Each slice was transformed into the polar coordinate system, as illustrated in Fig. 8. The intensity of each slice was normalized to have zero mean and unit standard deviation. The Gaussian ground truth was generated for each column with a correspondingly smaller σ, as the polar representation has a shorter column (128 vs 512 in the SD-OCT data). As the amount of training data was limited, we augmented the data on the fly by random combinations of operations including mirroring, circular shifting along the polar dimension, adding Gaussian noise (mean = 0, std = 0.1), adding salt-and-pepper noise (5%), and cropping (90% of the original size) followed by resizing. All training parameters were the same as those for the SD-OCT segmentation, except that the number of pre-training epochs was 4000.
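One random draw of the on-the-fly augmentation can be sketched as below. The 0.5 application probabilities are our own choice (only the operation list and magnitudes are specified above), and cropping/resizing is omitted for brevity; note that the geometric operations must be applied to the per-column ground truth surface as well:

```python
import numpy as np

def augment_polar(img, surf, rng):
    """img: (rows, cols) polar-space slice; surf: (cols,) surface positions."""
    if rng.random() < 0.5:                       # mirroring along the polar dimension
        img, surf = img[:, ::-1].copy(), surf[::-1].copy()
    shift = int(rng.integers(img.shape[1]))      # circular shift along the polar dimension
    img, surf = np.roll(img, shift, axis=1), np.roll(surf, shift)
    if rng.random() < 0.5:                       # Gaussian noise, mean 0, std 0.1
        img = img + rng.normal(0.0, 0.1, img.shape)
    if rng.random() < 0.5:                       # salt-and-pepper noise on 5% of pixels
        mask = rng.random(img.shape) < 0.05
        extremes = rng.integers(0, 2, img.shape) * (img.max() - img.min()) + img.min()
        img = np.where(mask, extremes, img)
    return img, surf
```

Geometric transforms keep the image and surface aligned, so the augmented pair can be fed directly to pre-training or fine-tuning.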

Methods | Lumen JM | Lumen PAD | Lumen HD | Media JM | Media PAD | Media HD
P3 [3, 4] | 0.88±0.05 | 0.06±0.05 | 0.34±0.14 | 0.91±0.04 | 0.05±0.04 | 0.31±0.12
VGG-U-net [1] | 0.80 | - | - | 0.81 | - | -
[16] | 0.86±0.04 | 0.09±0.03 | 0.37±0.14 | 0.90±0.03 | 0.07±0.03 | 0.43±0.12
W/O SB | 0.87±0.11 | 0.08±0.12 | 0.43±1.90 | 0.88±0.10 | 0.08±0.10 | 0.41±0.31
W/ SB | 0.88±0.06 | 0.07±0.07 | 0.28±0.18 | 0.89±0.07 | 0.07±0.07 | 0.37±0.27
Table 2: Segmentation results comparison on the IVUS dataset (mean±std): Jaccard Measure (JM), Percentage of Area Difference (PAD) and Hausdorff Distance (HD).

Results The Jaccard Measure (JM), Percentage of Area Difference (PAD) and Hausdorff Distance (HD) are utilized to evaluate segmentation accuracy; the results are summarized in Table 2. P3 [3, 4] is the state-of-the-art method for this IVUS dataset; it is an expectation-maximization based method and is semi-automated. VGG-U-net denotes a deep learning based method [1]. The state-of-the-art fully automated method is a graph search based method working in an irregularly sampled space [16], whose unary energy is learned with random forests. From Table 2, one can see that the proposed method outperforms the sub-voxel graph based method [16] for the lumen surface and performs comparably on the media surface. Compared to the state-of-the-art semi-automated method P3, the proposed method is comparable for the lumen surface and marginally inferior on the media surface. The proposed method outperforms VGG-U-net by a big margin. Sample segmentation results are illustrated in Fig. 8 and Fig. 9.

Figure 9: Sample segmentation results on IVUS data. Red: lumen ground truth; Blue: media ground truth; Green: lumen prediction; Purple: media prediction. The four columns correspond to the original image, the ground truth, the prediction without SB, the prediction with SB.

As for inference computation time, the proposed method incurs more overhead than the VGG-U-net method (0.21 vs 0.09 sec/slice). The overhead mainly comes from the Hessian matrix computation, as well as from two separate runs of the program for the two surfaces. Compared to P3 [3, 4] (8.6 sec/slice) and the graph based method [16] (187.4 sec/slice), however, the proposed method is highly efficient.

4 Conclusion

In this paper, a novel segmentation model based on a convolutional neural network (CNN) and a learnable surface smoothing block is proposed to tackle the surface segmentation problem with end-to-end training. To the best of our knowledge, this is the first study of surface segmentation that achieves guaranteed globally optimal solutions using deep learning. Experiments on SD-OCT retinal layer segmentation and IVUS vessel wall segmentation demonstrated very promising results. The proposed method is applicable in both 2-D and 3-D.

Figure 10: U-net architecture. For convolution layers, the number indicates the dimension of the feature maps; all convolution layers use the same kernel size except the last convolution layer, where a smaller kernel is chosen. For max pooling and transposed convolution layers, kernels with stride 2 are utilized. A ReLU layer follows each convolution layer except the final convolution layer. For all skip connections, the features are merged by addition.


  • [1] C. Balakrishna, S. Dadashzadeh, and S. Soltaninejad (2018) Automatic detection of lumen and media in the ivus images using u-net with vgg16 encoder. arXiv preprint arXiv:1806.07554. Cited by: §3.2, Table 2.
  • [2] S. Balocco, C. Gatta, F. Ciompi, A. Wahle, P. Radeva, S. Carlier, G. Unal, E. Sanidas, J. Mauri, X. Carillo, et al. (2014) Standardized evaluation methodology and reference database for evaluating ivus image segmentation. Comput Med Imaging Graph 38 (2), pp. 70–90. Cited by: §3.2.
  • [3] M. Cardinal, J. Meunier, G. Soulez, R. L. Maurice, É. Therasse, and G. Cloutier (2006) Intravascular ultrasound image segmentation: a three-dimensional fast-marching method based on gray level distributions. IEEE Trans. Med. Imag 25 (5), pp. 590–601. Cited by: §3.2, §3.2, Table 2.
  • [4] M. R. Cardinal, G. Soulez, J. Tardif, J. Meunier, and G. Cloutier (2010) Fast-marching segmentation of three-dimensional intravascular ultrasound images: a pre-and post-intervention study. Med. Phys. 37 (7Part1), pp. 3633–3647. Cited by: §3.2, §3.2, Table 2.
  • [5] S. Chen and M. de Bruijne (2019) An end-to-end approach to semantic segmentation with 3d cnn and posterior-crf in medical images. In Medical Imaging meets NIPS Workshop at NIPS 2018, Cited by: §1.
  • [6] S. Farsiu, S. J. Chiu, R. V. O’Connell, F. A. Folgar, E. Yuan, J. A. Izatt, C. A. Toth, Age-Related Eye Disease Study 2 Ancillary SD-OCT Study Group, et al. (2014) Quantitative classification of eyes with and without intermediate age-related macular degeneration using optical coherence tomography. Ophthalmology 121 (1), pp. 162–172. Cited by: §3.1.
  • [7] M. K. Garvin, M. D. Abramoff, X. Wu, S. R. Russell, T. L. Burns, and M. Sonka (2009) Automated 3-d intraretinal layer segmentation of macular spectral-domain optical coherence tomography images. IEEE Trans. Med. Imag 28 (9), pp. 1436–1447. Cited by: §3.1.
  • [8] H. Guo (2011) A simple algorithm for fitting a gaussian function [dsp tips and tricks]. IEEE Signal Process Mag 28 (5), pp. 134–137. Cited by: §2.1.
  • [9] G. Hinton, O. Vinyals, and J. Dean (2015) Distilling the knowledge in a neural network. In Deep Learning and Representation Learning Workshop at NIPS 2014, Cited by: §2.2.1.
  • [10] K. Li, X. Wu, D. Z. Chen, and M. Sonka (2006) Optimal surface segmentation in volumetric images-a graph-theoretic approach. IEEE Trans Pattern Anal Mach Intell 28 (1), pp. 119–134. Cited by: §1, §2.
  • [11] J. Long, E. Shelhamer, and T. Darrell (2015) Fully convolutional networks for semantic segmentation. In CVPR 2015, pp. 3431–3440. Cited by: §1.
  • [12] F. Milletari, A. Rothberg, J. Jia, and M. Sofka (2017) Integrating statistical prior knowledge into convolutional neural networks. In MICCAI 2017, pp. 161–168. Cited by: §1.
  • [13] N. Papernot, P. McDaniel, X. Wu, S. Jha, and A. Swami (2016) Distillation as a defense to adversarial perturbations against deep neural networks. In 2016 IEEE Symposium on Security and Privacy (SP), pp. 582–597. Cited by: §2.2.1.
  • [14] H. Ravishankar, R. Venkataramani, S. Thiruvenkadam, P. Sudhakar, and V. Vaidya (2017) Learning and incorporating shape models for semantic segmentation. In MICCAI 2017, pp. 203–211. Cited by: §1.
  • [15] O. Ronneberger, P. Fischer, and T. Brox (2015) U-net: convolutional networks for biomedical image segmentation. In MICCAI 2015, pp. 234–241. Cited by: §1.
  • [16] A. Shah, M. D. Abrámoff, and X. Wu (2019) Optimal surface segmentation with convex priors in irregularly sampled space. Med. Image Anal.. Cited by: §3.2, §3.2, Table 2.
  • [17] A. Shah, L. Zhou, M. D. Abrámoff, and X. Wu (2018) Multiple surface segmentation using convolution neural nets: application to retinal layer segmentation in oct images. Biomed. Opt. Express 9 (9), pp. 4509–4526. Cited by: §1, §3.1, Table 1.