Recent advances in fluorescence microscopy imaging, especially two-photon microscopy, have enabled the imaging of cellular and subcellular structures of living tissue [1, 2, 3]. This has resulted in the generation of large datasets of 3D microscopy image volumes, which in turn require automatic image segmentation techniques for quantification. However, quantitative analysis of these datasets still poses a challenge due to light scattering, distortion created by lens aberrations in different directions, and the complexity of biological structures [4]. The end result is blurry image volumes with poor edge details that become worse at deeper tissue depths.
There have been various techniques developed for the segmentation of microscopy images. One widely used class of methods is based on the active contours technique, which minimizes an energy functional to fit contours to objects of interest [5, 6]. Early versions of active contours generally produced poor segmentation results since the results are sensitive to noise and depend on the initial contour. An external energy term that convolves a controllable vector field kernel with an image edge map was presented in [7] to address the noise sensitivity problem. Similarly, the Poisson inverse gradient was introduced in [8] to determine initial contour locations for segmenting microscopy images. Moreover, the active contours methodology has been integrated with a region-based segmentation method that poses the segmentation problem as an energy equilibrium problem between foreground and background regions [9]. This region-based active contours technique was extended to fully utilize 3D information to identify foreground and background voxels [10]. More recently, the 3D region-based active contours were combined with 3D inhomogeneity correction to provide better segmentation by taking into consideration inhomogeneities in volume intensity [11]. Additionally, a segmentation method known as Squassh that couples image restoration and segmentation using a generalized linear model and Bregman divergence was introduced in [12], whereas a method that combines the detection of primitives based on nuclei boundaries with the identification of nuclei regions using region growing was demonstrated in [13]. Alternatively, a combination of multiresolution, multiscale, and region growing methods using random seeds to perform multidimensional segmentation was described in [14].
As indicated above, fluorescence image segmentation still remains a challenging problem. Segmentation of tubules, biological structures with a tubular shape, is even more challenging since tubular shape and orientation vary without known patterns. Also, since typical tubular structures have hollow shapes with unclear boundaries, traditional energy minimization based methods such as active contours have failed to segment tubular structures. There has been some work focusing particularly on tubular structure segmentation. A minimal path based approach was described in [16, 17] where tubule shape is modeled as the envelope of a family of spheres (3D) or disks (2D). Similarly, an approach for 3D human vessel segmentation and quantification using a 3D cylindrical parametric intensity model was demonstrated in [18]. Also, a multiple-tubule segmentation technique combining level set methods and the geodesic distance transform was introduced in [19]. More recently, one method segmented tubular structures by delineating tubule boundaries followed by ellipse fitting to close the boundaries while considering intensity inhomogeneity [15]. Another method, known as Jelly filling [20], utilized adaptive thresholding, component analysis, and 3D consistency to achieve segmentation, whereas a method for tubule boundary segmentation used steerable filters to generate potential seeds from which to grow tubule boundaries, followed by tubule/lumen separation and 3D propagation to generate segmented tubules in 3D [21]. Previous methods, however, focused on segmenting the boundaries of tubule membranes. Since some tubule membranes are not clearly delineated in fluorescence microscopy image volumes, finding tubule boundaries may not always result in identifying individual tubule regions.
Convolutional neural networks (CNNs) have been used to address segmentation problems in biomedical imaging [22]. The fully convolutional network [23] introduced an encoder-decoder architecture for semantic segmentation. U-Net [24] is a 2D CNN based method utilizing this encoder-decoder architecture, with connections between intermediate stages of downsampling and upsampling to preserve information. U-Net can be used to segment complex biological structures in microscopy images. Similarly, in [25] a U-Net trained on cell objects and contours was used to identify tubular structures. Additionally, a multiple-input and multiple-output structure based on a CNN for cell segmentation in fluorescence microscopy images was demonstrated in [26]. Also, a nuclei segmentation method that combines a 2D CNN with a 3D refinement process was introduced in [27].
In this paper, we present a method for segmenting and identifying individual tubular structures based on a combination of intensity inhomogeneity correction and data augmentation, followed by a CNN architecture. Our proposed method is evaluated with object-level as well as pixel-level metrics using manually annotated groundtruth images of real fluorescence microscopy data. Our datasets are comprised of images of a rat kidney labeled with phalloidin, which labels filamentous actin, collected using two-photon microscopy. A typical dataset we use in our studies contains two tissue structures: the base membrane of the tubular structures and the brush border, which is generally located interior to proximal tubules. Our goal here is to segment individual tubules enclosed by their membranes.
2 Proposed Method
Figure 1 shows a block diagram of the proposed method. Our notation distinguishes, for each focal plane image along the z-direction of a 3D image volume, the original training and test images, the manually annotated groundtruth images used for training and testing that correspond to them, the inhomogeneity corrected training and test images, the binary segmentation mask generated by our proposed deep learning architecture, and the final segmentation outcome.
As shown in Figure 1, our proposed method includes two stages: a training stage and an inference stage. During the training stage the original training images have their intensity inhomogeneities corrected as a preprocessing step. Since fluorescence microscopy images suffer from intensity inhomogeneity due to non-uniform light attenuation, correcting intensity inhomogeneity helps improve the final segmentation results. We then utilize both the original and the corrected training images as inputs to the data augmentation step to increase the number of training image pairs used for training the CNN model. During the inference stage inhomogeneity correction is applied to the test images, which are then used to segment tubules with the trained model.
2.1 Intensity Inhomogeneity Correction
Due to the non-uniform intensity of fluorescence microscopy, where center regions of the focal plane are generally brighter than boundary regions, simple intensity based segmentation methods fail to segment biological structures, especially near image boundaries. Our previous work [11] employed a multiplicative model where the original microscopy volume I is modeled as

I = W ∘ C + N.

Here, W and N are a 3D weight array and a zero-mean 3D Gaussian noise array, respectively, both of the same size as the original microscopy volume. Specifically, W represents weight values for each voxel location that account for the degree of intensity inhomogeneity. The operator ∘ denotes the Hadamard product, representing voxelwise multiplication.
The main idea of the multiplicative model is that the original volume I is modeled as the product of a 3D inhomogeneity field W with a corrected volume C, with the product corrupted by additive 3D Gaussian noise N. An iterative technique for finding and then correcting for the intensity inhomogeneities based on this model is described in [11].
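To make the model concrete, the following sketch builds a toy corrected volume, applies a smooth inhomogeneity field and Gaussian noise, and undoes the field by voxelwise division. This is illustrative only: the synthetic volume, field shape, and noise level are our own choices, and the iterative field estimator of [11] is not reproduced here (the field is simply assumed known).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "corrected" volume C: a bright tubule-like block on a dark background.
C = np.zeros((16, 16, 8))
C[4:12, 4:12, 2:6] = 1.0

# Inhomogeneity field W: brighter at the volume center, dimmer near the
# boundaries, mimicking the non-uniform attenuation described in the text.
x = np.linspace(-1.0, 1.0, 16)
z = np.linspace(-1.0, 1.0, 8)
W = np.exp(-(x[:, None, None] ** 2 + x[None, :, None] ** 2 + z[None, None, :] ** 2))

# Multiplicative model: observed volume I = W ∘ C + N
# (Hadamard product plus zero-mean Gaussian noise).
N = 0.001 * rng.standard_normal(C.shape)
I = W * C + N

# Given the field (known here; estimated iteratively in [11]),
# correction is voxelwise division by W.
C_hat = I / np.clip(W, 1e-6, None)
```

With a good field estimate the corrected volume recovers the underlying structure up to the amplified noise term N / W.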
2.2 Data Augmentation
Our training data consists of paired images: original microscopy images and the corresponding manually annotated groundtruth images. Generating manually annotated groundtruth images is a time consuming process and thus impractical for large numbers of images. Data augmentation is typically used to generate additional training pairs when the available training data size is relatively small [24]. In this paper we utilize elastic deformation to generate realistic tubular structures with different shapes and orientations. This allows the network to learn various deformed tubular structures, and is particularly useful for the analysis of microscopy images of tubular structures, which appear in varying shapes and orientations.
We perform elastic deformation by employing a grid of control points at regular intervals along the horizontal and vertical directions and displacing these control points randomly within a small range in each direction to generate a deformation field. The deformation field is used to deform the training focal planes and their corresponding groundtruth images by fitting 2D B-spline basis functions to the grid followed by bicubic interpolation. We generate several random deformation fields for each image pair and use them to generate deformed image pairs. Each deformed image is rotated by 0°, 90°, 180°, and 270° to generate four sets of rotated images while preserving the original image size. Each rotated image is then flipped left to right to generate another two sets of images. In our experiment, we manually annotated five pairs of training data during the training stage. Since the elastic deformation produces several deformed versions of each pair, each followed by the four rotations and two flips, a large number of image pairs were generated for training.
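A simplified sketch of this augmentation is given below. It replaces the B-spline fit and bicubic interpolation described above with nearest-neighbour upsampling and sampling, and the grid spacing and displacement range are illustrative values, not the paper's; `elastic_deform` is a hypothetical helper name.

```python
import numpy as np

def elastic_deform(image, grid_step=8, max_disp=3.0, seed=0):
    """Simplified elastic deformation: random displacements on a coarse
    control-point grid, upsampled to a dense field (nearest neighbour)
    and applied with nearest-neighbour sampling."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    gh, gw = h // grid_step + 1, w // grid_step + 1
    # Random control-point displacements in [-max_disp, max_disp].
    dy = rng.uniform(-max_disp, max_disp, (gh, gw))
    dx = rng.uniform(-max_disp, max_disp, (gh, gw))
    # Upsample the coarse field by repetition (nearest neighbour).
    dy = np.kron(dy, np.ones((grid_step, grid_step)))[:h, :w]
    dx = np.kron(dx, np.ones((grid_step, grid_step)))[:h, :w]
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    ys = np.clip(np.round(yy + dy).astype(int), 0, h - 1)
    xs = np.clip(np.round(xx + dx).astype(int), 0, w - 1)
    return image[ys, xs]

# The same field (same seed) must deform an image and its groundtruth
# identically so that the (image, label) pair stays aligned.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
warped = elastic_deform(img, seed=1)
```

Because the sampling is nearest-neighbour, a binary groundtruth mask stays binary after deformation, which is convenient for label images.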
2.3 Convolutional Neural Network (CNN)
The architecture of our convolutional neural network, shown in Figure 2, consists of a series of encoder layers and a series of decoder layers that are serially connected, followed by a softmax layer at the end. Each encoder layer consists of a convolution kernel with batch normalization [29] to perform image whitening, followed by a rectified linear unit (ReLU). The combination of convolution, batch normalization, and ReLU is performed twice at every encoder. Finally, maxpooling is used to reduce dimensionality. This encoder scheme is similar to VGGNet [30], which shrinks the input dimensions but increases the number of filters in the deeper structures. In Figure 2, each encoder's input dimension is indicated in red under the encoder layers. Also note that the number shown above each layer represents the number of filters utilized for training. As an image passes through all the encoder layers, its X and Y dimensions shrink while the number of filters increases, so the input to the first decoder layer has small spatial dimensions but a large number of channels.
Conversely, each decoder is comprised of two convolution kernels with pixel padding, batch normalization, and ReLU. Instead of a maxpooling layer, the decoder has an unmaxpooling layer that upsamples the data to increase dimensionality. Note that this upsampling process is a reconstruction process. To achieve better upsampling, the maxpooling indices from each encoder layer are recorded and transferred to the corresponding unmaxpooling layer of the same size. At the end of the encoder-decoder structure, a softmax classifier layer is utilized to determine whether each pixel location belongs to a tubule or the background using a probability map. Note that the output of the softmax layer has two channels because the final output includes two probability maps corresponding to the two classes: tubule and background. These probability maps are thresholded to produce binary segmentation masks.
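The maxpooling-index bookkeeping can be sketched on a single 2D channel as follows. This is not the network itself: `maxpool2x2_with_indices` and `unpool2x2` are hypothetical helper names, and the 2x2 window is an assumption for illustration.

```python
import numpy as np

def maxpool2x2_with_indices(x):
    """2x2 max pooling that also records where each window's maximum
    came from, as the encoder layers do."""
    h, w = x.shape
    blocks = (x.reshape(h // 2, 2, w // 2, 2)
               .transpose(0, 2, 1, 3)
               .reshape(h // 2, w // 2, 4))
    idx = blocks.argmax(axis=2)   # position of the max within each window
    return blocks.max(axis=2), idx

def unpool2x2(pooled, idx):
    """Unpooling in the decoder: each pooled value is written back to the
    position its maximum came from; all other positions stay zero."""
    h, w = pooled.shape
    out = np.zeros((h * 2, w * 2), dtype=pooled.dtype)
    for i in range(h):
        for j in range(w):
            di, dj = divmod(int(idx[i, j]), 2)
            out[2 * i + di, 2 * j + dj] = pooled[i, j]
    return out

x = np.array([[1., 3., 0., 2.],
              [4., 2., 5., 1.],
              [7., 0., 1., 6.],
              [2., 8., 3., 4.]])
p, idx = maxpool2x2_with_indices(x)
u = unpool2x2(p, idx)
```

Transferring the indices lets the decoder restore activations to their original spatial positions instead of spreading them uniformly, which is why the upsampling is described as a reconstruction process.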
During the training stage, augmented training images are randomly selected and used to train the model at each iteration. The segmentation mask is compared with the corresponding groundtruth and a loss value is obtained at each iteration. Here, we use a 2D cross entropy loss function that is minimized using stochastic gradient descent (SGD) with a fixed learning rate and momentum. During the inference stage we use the trained model on the test images to obtain binary segmentation masks. During the postprocessing step we remove small objects from the masks, followed by a hole filling operation, to obtain the final segmentation results. Note that the hole filling operation reassigns a background pixel as a tubule pixel if all of the background pixel's neighborhood pixels are tubule pixels.
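The hole filling rule just described can be sketched as a single pass over the mask. The text does not state which neighborhood is used, so 4-connectivity is assumed here for illustration.

```python
def fill_holes(mask):
    """One pass of the hole-filling rule: a background pixel (0) becomes
    a tubule pixel (1) when all of its 4-neighbours are tubule pixels.
    (4-connectivity is an assumption; the text leaves it unspecified.)"""
    h, w = len(mask), len(mask[0])
    out = [row[:] for row in mask]          # copy; do not modify the input
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if mask[i][j] == 0 and all(
                mask[i + di][j + dj] == 1
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
            ):
                out[i][j] = 1
    return out

mask = [[1, 1, 1],
        [1, 0, 1],
        [1, 1, 1]]
filled = fill_holes(mask)
```

Single-pixel holes inside tubules are filled, while background regions touching a tubule boundary from outside are left untouched.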
3 Experimental Results
The performance of our proposed method was tested on two different datasets, both provided by Malgorzata Kamocka of the Indiana Center for Biological Microscopy. The first dataset is comprised of grayscale images of a fixed size, whereas the second consists of grayscale images of a different size. We selected five different images from the first dataset and generated corresponding manually annotated groundtruth images to train the model. Our deep learning architecture was implemented in Torch [31] using a fixed learning rate and momentum. As indicated, additional pairs of images were generated by the elastic deformation, rotations, and flips using these five pairs of images. Each training pair was used as a batch, so a fixed number of iterations was performed per epoch, and multiple epochs were used for training our proposed network. In addition, a minimum size threshold was used for the removal of small objects. The performance of the proposed method was evaluated using manually annotated groundtruth images at different depths in the first dataset that were never used during the training stage. For visual evaluation and comparison, segmentation results obtained using various techniques are presented in Figure 3.
3.1 Qualitative Evaluation
The first row in Figure 3 displays an original microscopy image, its inhomogeneity corrected version, and the manually delineated groundtruth, respectively. The second row shows segmentation results of various 3D methods: 3D region-based active contours [10] (3Dac), 3D active contours with inhomogeneity correction [11] (3DacIC), and the 3D Squassh method presented in [12] (3Dsquassh). Similarly, the third row portrays segmentation methods particularly designed for tubular structure segmentation: the ellipse fitting method presented in [15] (Ellipse Fitting), the Jelly filling method in [20] (Jelly Filling), and tubule segmentation using steerable filters [21] (Steerable Filter). Finally, the last row shows segmentation results of our proposed CNN architecture without inhomogeneity correction (2DCNN) and with inhomogeneity correction (2DCNNIC).
For visual comparison we highlighted groundtruth regions in red, segmented tubule regions in green, and background in black. As observed in Figure 3, our proposed method appeared to perform better than the six methods shown in the second and third rows by distinguishing individual tubules, and performed similarly to 2DCNN. Note that since some methods such as Ellipse Fitting, Jelly Filling, and Steerable Filter only segmented the boundaries of tubule structures, tubule interiors were filled using connected components in order to perform a fair comparison. Also, based on the assumption that tubule regions should contain lumen, if a filled region contained lumen pixels, the region was identified as a tubule region. However, if a filled region did not contain any lumen pixels, the region was considered a background region.
Table 1: PA, Type-I error, Type-II error, F1, OD, and OH of the first dataset (left six columns) and of the second dataset (right six columns).

| Method | PA | Type-I | Type-II | F1 | OD | OH | PA | Type-I | Type-II | F1 | OD | OH |
| Ellipse Fitting [15] | 76.17% | 22.79% | 1.04% | 61.15% | 47.10% | 144.28 | 76.11% | 22.98% | 0.91% | 48.48% | 29.34% | 303.34 |
| Jelly Filling [20] | 83.91% | 13.36% | 2.73% | 81.82% | 71.58% | 52.93 | 81.76% | 15.38% | 2.86% | 74.53% | 60.73% | 76.26 |
| Steerable Filter [21] | 70.98% | 28.98% | 0.04% | 9.90% | 5.32% | 455.83 | 71.00% | 28.97% | 0.03% | 4.12% | 4.00% | 521.83 |
The segmentation results shown in the second row generally missed many tubule regions. More specifically, 3Dac and 3Dsquassh could not capture the tubular structures, capturing only some in the center regions, due to the intensity inhomogeneity of microscopy images. 3DacIC failed to segment tubular structures but captured multiple lumens inside tubules as well as some tubule boundaries. In contrast, the segmentation results displayed in the third row showed falsely detected tubules. The main reason is that these tubule segmentation methods focused only on detecting the boundaries of tubular structures. In particular, due to the weak/blurry edges of fluorescence microscopy images, many boundaries were not continuous, causing the filling operation to overflow from one tubule to another or into background regions. The CNN based segmentation results generally segmented and identified each tubule region successfully.
Figure 4 provides an alternative way to view the segmentation results. In particular, yellow regions correspond to true positives, pixel locations identified as tubules in both the groundtruth and the segmentation results. Green regions correspond to false positives, pixel locations identified as background in the groundtruth but as tubules in the segmentation results. Similarly, red pixels correspond to false negatives, namely pixel locations identified as tubules in the groundtruth but as background in the segmentation results, and black regions correspond to true negatives, identified as background in both. The green regions indicate Type-I error (false alarm) regions and the red regions represent Type-II error (miss) regions. As observed from Figure 4, the segmentation results in the first row contained large red regions, meaning that large tubule regions were missed. Conversely, the segmentation results shown in the second row contained many green regions, indicating that many background regions were falsely segmented as tubule regions. In contrast, the segmentation results in the third row had reasonably small green and red regions, which indicates that the deep learning based segmentation results had higher pixel accuracy with relatively low Type-I and Type-II errors.
3.2 Quantitative Evaluation
In addition to the qualitative evaluation, quantitative metrics were utilized to evaluate the proposed method's segmentation accuracy. In particular, we used pixel-based and object-based metrics. For the pixel-based metrics, the pixel accuracy (PA), Type-I error, and Type-II error of the pixel segmentation were obtained based on the manually annotated groundtruth images. Here, PA, Type-I, and Type-II are defined as

PA = (N_TP + N_TN) / N,  Type-I = N_FP / N,  Type-II = N_FN / N,

where N_TP, N_TN, N_FP, and N_FN are defined to be the number of segmented pixels labeled as true positives, true negatives, false positives, and false negatives, respectively, and N denotes the total number of pixels in an image. These three pixel-based metrics obtained for the different segmentation results are provided in Table 1. As shown in Figure 4, the Type-II errors of the first three methods (3Dac, 3DacIC, 3Dsquassh) were much higher than those of the other methods. Similarly, the Type-I errors of the next three methods (Ellipse Fitting, Jelly Filling, Steerable Filter) were much higher than those of the other methods. In contrast, 2DCNN and 2DCNNIC had high PA and relatively low Type-I and Type-II errors.
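These pixel-based definitions can be checked with a small sketch, where flat 0/1 lists stand in for the prediction and groundtruth masks; `pixel_metrics` is a hypothetical helper name.

```python
def pixel_metrics(pred, gt):
    """PA, Type-I, and Type-II error from flat 0/1 prediction and
    groundtruth masks, following the definitions above."""
    n_tp = sum(1 for p, g in zip(pred, gt) if p == 1 and g == 1)
    n_tn = sum(1 for p, g in zip(pred, gt) if p == 0 and g == 0)
    n_fp = sum(1 for p, g in zip(pred, gt) if p == 1 and g == 0)
    n_fn = sum(1 for p, g in zip(pred, gt) if p == 0 and g == 1)
    n = len(gt)  # total number of pixels
    return (n_tp + n_tn) / n, n_fp / n, n_fn / n

# One true positive, one false positive, one true negative, one false negative.
pa, type1, type2 = pixel_metrics([1, 1, 0, 0], [1, 0, 0, 1])
```

Note that the three quantities always satisfy PA + Type-I + Type-II = 1, which also holds for each method row in Table 1.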
In addition, our segmentation methods were evaluated using the object-based criteria described in the 2015 MICCAI Grand Segmentation Challenge [25, 32], namely the F1 score, the Dice Index, and the Hausdorff Distance.
The F1 score is a measure of the segmentation/detection accuracy of individual objects. The evaluation of the F1 score is based on two metrics, precision and recall. Denoting the number of tubules correctly identified by N_TP, the number of objects that are not tubules but are identified as tubules by N_FP, and the number of tubules that are not correctly identified by N_FN, precision (P) and recall (R) are obtained as

P = N_TP / (N_TP + N_FP),  R = N_TP / (N_TP + N_FN).

Given the values of precision P and recall R, the F1 score is found using

F1 = 2PR / (P + R).
It is to be noted that a tubule segmented by the proposed method (or any other method for that matter) that sufficiently overlaps its corresponding manually annotated tubule is labeled as a true positive and added to the count of the true positives (N_TP); otherwise it is considered a false positive and added to the count of the false positives (N_FP). Similarly, a manually annotated tubule that has no corresponding segmented tubule, or whose overlap with segmented tubular regions falls below the threshold, is considered a false negative and added to the count of the false negatives (N_FN).
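A minimal sketch of the F1 computation from these object counts follows; the counts themselves would come from the overlap-based matching described above, and `f1_from_counts` is a hypothetical helper name.

```python
def f1_from_counts(n_tp, n_fp, n_fn):
    """F1 score from object counts: correctly matched tubules (n_tp),
    spurious detections (n_fp), and missed groundtruth tubules (n_fn)."""
    precision = n_tp / (n_tp + n_fp)
    recall = n_tp / (n_tp + n_fn)
    return 2 * precision * recall / (precision + recall)

# Example counts (illustrative, not from the paper): 8 matched tubules,
# 2 spurious detections, 2 missed tubules.
f1 = f1_from_counts(8, 2, 2)
```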
As mentioned above, a second metric used to evaluate segmentation accuracy is the object-level Dice Index (OD). The Dice Index [33] is a measure of similarity between two sets of samples. In our case, the two sets are the set of voxels belonging to a manually annotated tubule, denoted by G, and the set of voxels belonging to a segmented tubule, denoted by S. The Dice Index between G and S is defined as

D(G, S) = 2 |G ∩ S| / (|G| + |S|),

where |·| denotes set cardinality, which in this case is the number of voxels belonging to an object. A higher value of the Dice Index indicates a better segmentation match relative to the groundtruth data. A practical way of evaluating the Dice Index for segmented objects is described in [32] and is given by

OD = (1/2) [ Σ_{i=1}^{n_G} γ_i D(G_i, S*(G_i)) + Σ_{j=1}^{n_S} σ_j D(G*(S_j), S_j) ].  (6)

In Eq (6), S_j denotes the jth tubule obtained by a segmentation method and G*(S_j) denotes the manually annotated tubule that is maximally matched with S_j. Similarly, G_i denotes the ith tubule identified in the groundtruth data and S*(G_i) denotes the segmented tubule that is maximally matched with G_i. Finally, n_S and n_G denote the total number of segmented and manually annotated tubules, respectively. The first summation term in Eq (6) represents how well each groundtruth tubule overlaps with its segmented counterpart, whereas the second summation term represents how well each segmented tubule overlaps with its manually annotated counterpart. The weights

γ_i = |G_i| / Σ_{k=1}^{n_G} |G_k|,  σ_j = |S_j| / Σ_{k=1}^{n_S} |S_k|,  (7)

which are used to weight the summation terms, represent the fraction of the entire tubule region that each tubule G_i and S_j occupies, respectively.
While the Dice Index measures segmentation accuracy, a third metric, the object-level Hausdorff Distance (OH), is needed to evaluate shape similarity. The Hausdorff Distance [34] between a segmented tubule S and its manually annotated counterpart G is defined to be

H(G, S) = max { max_{x∈G} min_{y∈S} d(x, y), max_{y∈S} min_{x∈G} d(x, y) }.  (8)

Here, d(x, y) denotes the Euclidean distance between a pair of pixels x and y. Based on Eq (8), the Hausdorff Distance takes the largest of the closest-pair distances between the boundaries of G and S. Therefore, a smaller value of the Hausdorff Distance indicates a higher similarity in shape between the boundaries of G and S. As done above (see Eq (6)), a practical way of finding the Hausdorff Distance between segmented tubules and their manually annotated counterparts is given by [32]:

OH = (1/2) [ Σ_{i=1}^{n_G} γ_i H(G_i, S*(G_i)) + Σ_{j=1}^{n_S} σ_j H(G*(S_j), S_j) ],  (9)

where the parameters γ_i and σ_j are defined in Eq (7).
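The per-object Dice Index and Hausdorff Distance can be sketched on small point sets. Representing each object as a set of (row, column) pixel coordinates is our own choice for illustration; `dice` and `hausdorff` are hypothetical helper names.

```python
def dice(a, b):
    """Dice index between two pixel sets: 2|a ∩ b| / (|a| + |b|)."""
    return 2 * len(a & b) / (len(a) + len(b))

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets: the largest
    of the closest-pair (Euclidean) distances, taken in both directions."""
    def directed(u, v):
        return max(min(((py - qy) ** 2 + (px - qx) ** 2) ** 0.5
                       for qy, qx in v) for py, px in u)
    return max(directed(a, b), directed(b, a))

# A 2x2 groundtruth object and a segmented object shifted one pixel right:
g = {(0, 0), (0, 1), (1, 0), (1, 1)}
s = {(0, 1), (1, 1), (0, 2), (1, 2)}
```

The object-level OD and OH of Eqs (6) and (9) then weight these per-object values by the fractional sizes of Eq (7) and sum over maximally matched pairs.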
The performance of the proposed method and of the other methods, based on the F1, OD, and OH metrics, was obtained and is tabulated in Table 1. As mentioned above, higher values of F1 and OD are considered indicators of better segmentation results, whereas lower values of OH indicate better segmentation results. As can be seen in Table 1, our proposed method outperformed all the other segmentation methods against which it was evaluated. In particular, 3Dac, 3DacIC, 3Dsquassh, and Steerable Filter had low F1 scores since these segmentation methods had large Type-I or Type-II errors. Similarly, all of the methods except 2DCNN suffered from low OD and high OH values. In particular, since the segmentation results of 3Dsquassh, Ellipse Fitting, and Steerable Filter failed to distinguish most of the individual tubules, they exhibited low OD and high OH values. Note that 3DacIC had relatively low OH and low OD values since it segmented some tubule boundaries as well as some partial regions (lumen) inside the tubules. Lastly, the use of intensity inhomogeneity correction in the proposed method improved its performance relative to that of 2DCNN.
For visual evaluation we provide the segmentation results of the proposed method on the two datasets, sampled at different depths within the volumes. The first row shows original microscopy images at three depths from the first dataset, and the second row displays the segmentation results corresponding to the first row. To better visualize the segmentation results, we highlighted individual tubules with different colors and overlaid them onto the original microscopy images. Similarly, the third row exhibits original microscopy images at three depths from the second dataset, with their corresponding segmentation results shown in the fourth row. Note that the model trained on the first dataset was also used on the second dataset during the inference stage. Although the shape, size, and orientation of the tubular structures in the second dataset all differ from those in the first, the proposed method can still successfully segment and identify individual tubules in both datasets.
4 Conclusions
This paper presented a tubular structure segmentation method that uses inhomogeneity correction, data augmentation, a convolutional neural network, and postprocessing. The qualitative and quantitative results indicate that the proposed method can successfully segment and identify individual tubules compared to other segmentation methods. In the future, we plan to utilize 3D information generated from realistic 3D synthetic tubular structures to improve segmentation results and to reduce manual annotation work.
This work was partially supported by a George M. O’Brien Award from the National Institutes of Health NIH/NIDDK P30 DK079312.
-  R. K. P. Benninger, M. Hao, and D. W. Piston, “Multi-photon excitation imaging of dynamic processes in living cells and tissues,” Reviews of Physiology Biochemistry and Pharmacology, vol. 160, pp. 71–92, April 2008.
-  F. Helmchen and W. Denk, “Deep tissue two-photon microscopy,” Nature Methods, vol. 2, no. 12, pp. 932–940, December 2005.
-  K. W. Dunn, R. M. Sandoval, K. J. Kelly, P. C. Dagher, G. A. Tanner, S. J. Atkinson, R. L. Bacallao, and B. A. Molitoris, “Functional studies of the kidney of living animals using multicolor two-photon microscopy,” American Journal of Physiology-Cell Physiology, vol. 283, no. 3, pp. C905–C916, September 2002.
-  D. B. Murphy and M. W. Davidson, Fundamentals of Light Microscopy and Electronic Imaging, Wiley-Blackwell, Hoboken, NJ, 2nd edition, 2012.
-  M. Kass, A. Witkin, and D. Terzopoulos, “Snakes: Active contour models,” International Journal of Computer Vision, vol. 1, no. 4, pp. 321–331, January 1988.
-  R. Delgado-Gonzalo, V. Uhlmann, D. Schmitter, and M. Unser, “Snakes on a plane: A perfect snap for bioimage analysis,” IEEE Signal Processing Magazine, vol. 32, no. 1, pp. 41–48, January 2015.
-  B. Li and S. T. Acton, “Active contour external force using vector field convolution for image segmentation,” IEEE Transactions on Image Processing, vol. 16, no. 8, pp. 2096–2106, August 2007.
-  B. Li and S. T. Acton, “Automatic active model initialization via Poisson inverse gradient,” IEEE Transactions on Image Processing, vol. 17, no. 8, pp. 1406–1420, August 2008.
-  T. F. Chan and L. A. Vese, “Active contours without edges,” IEEE Transactions on Image Processing, vol. 10, no. 2, pp. 266–277, February 2001.
-  K. S. Lorenz, P. Salama, K. W. Dunn, and E. J. Delp, “Three dimensional segmentation of fluorescence microscopy images using active surfaces,” Proceedings of the IEEE International Conference on Image Processing, pp. 1153–1157, September 2013, Melbourne, Australia.
-  S. Lee, P. Salama, K. W. Dunn, and E. J. Delp, “Segmentation of fluorescence microscopy images using three dimensional active contours with inhomogeneity correction,” Proceedings of the IEEE International Symposium on Biomedical Imaging, pp. 709–713, April 2017, Melbourne, Australia.
-  G. Paul, J. Cardinale, and I. F. Sbalzarini, “Coupling image restoration and segmentation: A generalized linear model/Bregman perspective,” International Journal of Computer Vision, vol. 104, no. 1, pp. 69–93, March 2013.
-  S. Arslan, T. Ersahin, R. Cetin-Atalay, and C. Gunduz-Demir, “Attributed relational graphs for cell nucleus segmentation in fluorescence microscopy images,” IEEE Transactions on Medical Imaging, vol. 32, no. 6, pp. 1121–1131, June 2013.
-  G. Srinivasa, M. C. Fickus, Y. Guo, A. D. Linstedt, and J. Kovacevic, “Active mask segmentation of fluorescence microscope images,” IEEE Transactions on Image Processing, vol. 18, no. 8, pp. 1817–1829, August 2009.
-  S. Lee, P. Salama, K. W. Dunn, and E. J. Delp, “Boundary fitting based segmentation of fluorescence microscopy images,” Proceedings of the IS&T/SPIE Conference on Imaging and Multimedia Analytics in a Web and Mobile World 2015, pp. 940805–1–10, February 2015, San Francisco, CA.
-  F. Benmansour and L. D. Cohen, “Tubular structure segmentation based on minimal path method and anisotropic enhancement,” International Journal of Computer Vision, vol. 92, no. 2, pp. 192–210, March 2010.
-  H. Li and A. Yezzi, “Vessels as 4-D curves: Global minimal 4-D paths to extract 3-D tubular surfaces and centerlines,” IEEE Transactions on Medical Imaging, vol. 26, no. 9, pp. 1213–1223, September 2007.
-  S. Worz and K. Rohr, “A new 3D parametric intensity model for accurate segmentation and quantification of human vessels,” Proceedings of the International Conference on Medical Image Computing and Computer Assisted Intervention, pp. 491–499, September 2004, Saint-Malo, France.
-  A. Fakhrzadeh, E. Sporndly-Nees, L. Holm, and C. L. Luengo Hendriks, “Analyzing tubular tissue in histopathological thin sections,” Proceedings of the IEEE International Conference on Digital Image Computing Techniques and Applications, pp. 1–6, December 2012, Fremantle, WA.
-  N. Gadgil, P. Salama, K. W. Dunn, and E. J. Delp, “Jelly filling segmentation of fluorescence microscopy images containing incomplete labeling,” Proceedings of the IEEE International Symposium on Biomedical Imaging, pp. 531–535, April 2016, Prague, Czech Republic.
-  D. J. Ho, P. Salama, K. W. Dunn, and E. J. Delp, “Boundary segmentation for fluorescence microscopy using steerable filters,” Proceedings of the SPIE Conference on Medical Imaging, pp. 10133–1–11, February 2017, Orlando, FL.
-  G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, J. A. van der Laak, B. van Ginneken, and C. I. Sanchez, “A survey on deep learning in medical image analysis,” arXiv preprint arXiv:1702.05747, pp. 1–34, February 2017.
-  E. Shelhamer, J. Long, and T. Darrell, “Fully convolutional networks for semantic segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 4, pp. 640–651, April 2017.
-  O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” Proceedings of the International Conference on Medical Image Computing and Computer Assisted Intervention, vol. 9351, pp. 234–241, October 2015, Munich, Germany.
-  H. Chen, X. Qi, L. Yu, and P. A. Heng, “DCAN: Deep contour-aware networks for accurate gland segmentation,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2487–2496, June 2016, Las Vegas, NV.
-  S. E. A. Raza, L. Cheung, D. Epstein, S. Pelengaris, M. Khan, and N. M. Rajpoot, “MIMO-NET: A multi-input multi-output convolutional neural network for cell segmentation in fluorescence microscopy images,” Proceedings of the IEEE International Symposium on Biomedical Imaging, pp. 337–340, April 2017, Melbourne, Australia.
-  C. Fu, D. J. Ho, S. Han, P. Salama, K. W. Dunn, and E. J. Delp, “Nuclei segmentation of fluorescence microscopy images using convolutional neural networks,” Proceedings of the IEEE International Symposium on Biomedical Imaging, pp. 704–708, April 2017, Melbourne, Australia.
-  K. S. Lorenz, P. Salama, K. W. Dunn, and E. J. Delp, “Digital correction of motion artefacts in microscopy image sequences collected from living animals using rigid and nonrigid registration,” Journal of Microscopy, vol. 245, no. 2, pp. 148–160, February 2012.
-  S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” Proceedings of the International Conference on Machine Learning, vol. 37, pp. 448–456, July 2015, Lille, France.
-  K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, pp. 1–14, April 2015.
-  R. Collobert, K. Kavukcuoglu, and C. Farabet, “Torch7: A matlab-like environment for machine learning,” Proceedings of the BigLearn workshop at the Neural Information Processing Systems, pp. 1–6, December 2011, Granada, Spain.
-  K. Sirinukunwattana, J. P. W. Pluim, H. Chen, X. Qi, P. A. Heng, Y. B. Guo, L. Y. Wang, B. J. Matuszewski, E. Bruni, U. Sanchez, A. Bohm, O. Ronneberger, B. B. Cheikh, D. Racoceanu, P. Kainz, M. Pfeiffer, M. Urschler, D. R. J. Snead, and N. M. Rajpoot, “Gland segmentation in colon histology images: The glas challenge contest,” Medical Image Analysis, vol. 35, no. 1, pp. 489–502, January 2017.
-  L. R. Dice, “Measures of the amount of ecologic association between species,” Ecology, vol. 26, no. 3, pp. 297–302, July 1945.
-  D. P. Huttenlocher, G. A. Klanderman, and W. J. Rucklidge, “Comparing images using the Hausdorff distance,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 9, pp. 850–863, September 1993.