Most metals and ceramics have complex microstructures, such as polycrystalline, multi-phase, and multi-domain structures, separated by different kinds of interfaces called grain boundaries, phase boundaries, and domain boundaries. The microstructure, including these boundaries, is determined by the material composition and preparation process, and is of great significance for controlling the properties and performance of materials. Therefore, microstructure characterization is one of the core missions in materials science and engineering.
During the quantitative analysis of microstructure characteristics, an important step is microscopic image processing, which extracts the key information from the microstructure. Unlike image processing tasks in natural and biological scenes, microscopic images in materials science pose unique problems that increase the difficulty of image processing and analysis. Take polycrystalline structure, which is commonly used and studied in practice, as an example. The ultimate objective is to obtain the 3D structure of the sample. Because materials are opaque, researchers can only use the serial sectioning method to obtain serial 2D images and stack them to reconstruct the 3D structure, as shown in Figure 1. Thus, there are two important steps in the process: 2D image analysis and 3D reconstruction. Each has its own difficulties.
For 2D image analysis, flaws in material microscopic images seriously hinder the detection of the target object. The region of interest in a polycrystalline microscopic image is the single-pixel-wide closed boundary of each grain (analogous to a cell in a biological image), as shown with black straight, thick arrows. Unfortunately, flaws are unavoidably introduced into the sample during preparation, such as during the polishing and etching processes. There are three types of flaws in polycrystalline microscopic images, each of which poses significant problems for the boundary detection task.
Blurred or missing boundary: caused by incomplete etching in the nital solution, as shown with red straight, thin arrows. This kind of flaw may occur at any position in a slice, or even at the same position in serial slices. An algorithm therefore needs to recover the missing boundary using information from adjacent slices.
Noise: introduced during sample preparation, as shown with yellow curved arrows.
Spurious scratches: unavoidably introduced during the polishing process; they resemble boundaries and easily confuse image processing algorithms, as shown with blue notched arrows.
Owing to their highly expressive representations, convolutional neural networks (CNNs) have driven great progress in image segmentation and boundary detection in recent years, especially in natural and biological scenes. However, to the best of our knowledge, there is no deep learning-based method specifically designed for polycrystalline structural materials with such flaws.
For 3D reconstruction, it is a challenge to identify the same grain regions in adjacent slices. The same grain may deform to different degrees between adjacent slices, and grain bifurcation may occur. In addition, grains often disappear or appear between adjacent slices. Therefore, an algorithm is needed that can handle all of these situations when transforming 2D boundary results into 3D label results.
In this work, to address these problems in 2D microscopic images of polycrystalline materials, we propose a novel Weighted Propagation Convolutional Neural Network based on U-Net (WPU-Net), which propagates boundary information from the adjacent slice to aid boundary detection in the target slice, using a weight map specially designed for this boundary detection task. From a practical standpoint, our work makes three contributions:
We propose an adaptive boundary weighted loss that forces the network to tolerate minor differences in boundary location and pay more attention to topology preservation. This is better suited to boundary detection in polycrystalline images, since the quantitative analysis of material microstructures is almost unaffected by small differences in boundary location.
We introduce 3D information into the U-Net architecture so that it makes better use of the domain knowledge shared between slices and detects boundaries precisely, even when they are blurred or missing. As shown in the experiment section, our method achieves the highest performance compared with state-of-the-art methods.
We propose a new solution to reconstruct the 3D structure of the sample by using CNN to perform grain object tracking between slices.
Our code and partial data can be found at: https://github.com/clovermini/WPU-Net.
2 Related work
2.1 Boundary Detection
Many existing methods have been, or can be, used to detect boundaries in 3D microscopic images of polycrystalline materials. They can be broadly categorized into two classes: 2D image-based methods, which detect boundaries using only the information contained in a single 2D image, and 3D image-based methods, which detect boundaries using the 3D context information contained in the image volume.
The 2D image-based methods include many classical image segmentation methods [43, 12, 33, 11, 27, 5, 2, 42], such as watershed, Canny, Otsu, graph cut, GrabCut, and so on. They are mainly based on hand-crafted features, including grayscale information, gradient information, morphological cues, and structural information. Although these methods have achieved good performance in many image segmentation scenarios, they may fail to perform satisfactorily on images with heavy noise and blurred or missing edges. Deep learning-based methods [29, 47, 49, 40, 4, 38, 1, 3, 31, 8] for 2D semantic segmentation have become increasingly popular in recent years and are now the de facto standard for image segmentation, by virtue of their powerful feature learning and representational ability. U-Net has become the most commonly used image segmentation method because of its robustness and excellent performance. Many improved methods [38, 1, 3], including ours, are based on it; a representative one is Attention U-Net. However, 2D image-based methods have an inherent drawback: they cannot make use of the 3D context information between adjacent slices.
The 3D image-based methods can also be broadly grouped into three classes based on how they use 3D information. (I) 3D fully convolutional networks (FCNs) [14, 24, 50, 26, 10, 36], which employ 3D convolutions in place of 2D convolutions; 3D U-Net and V-Net are representative methods of this class. (II) Methods combining a 2D FCN with an RNN. The most representative is UNet+BDCLSTM, which uses a 2D FCN to extract intra-slice context and a recurrent neural network (RNN) to extract inter-slice context. Methods using 3D convolutions apply isotropic kernels to anisotropic 3D images, which can be problematic; combining an RNN with a 2D FCN eliminates this drawback, but performs poorly when grain boundaries are continuously blurred at the same position in adjacent slices, as we explain in detail in the experimental results. Moreover, both classes of methods above are computationally intensive. (III) Tracking-based methods, developed for detecting boundaries in a stack of 2D slices. One work developed an interactive segmentation method based on break-point detection, but it requires substantial manual correction. Another proposed the concept of "propagation segmentation" based on graph cut, setting the energy function of the target image using information from the last slice through materials-science domain knowledge. A later work improved it by changing the binary terms of the energy function, filling blurred or missing boundaries in the target image with the corresponding boundaries from the last slice. Tracking-based methods show superior performance on blurred or missing boundaries and spurious scratches; however, they usually rely on hand-crafted features and are time-consuming. Our method combines a deep learning-based method with a tracking-based method to take advantage of both, achieving the best performance among state-of-the-art methods.
2.2 Weighted Loss
Weighted loss is widely used to handle the class imbalance problem in deep learning, weighted cross-entropy being a typical example. However, it does not tolerate minor differences in boundary location. U-Net proposed a weighted-map loss that pays more attention to the border between two objects; however, it only applies to regions separated by background, and it degenerates to weighted cross-entropy when applied to the tightly packed regions in our task. Some works simply dilate the boundary to achieve higher scores; however, dilation can remove tiny objects. A new weighting method is therefore needed to handle these problems.
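As a point of reference, the class-balanced weighted cross-entropy mentioned above can be sketched in a few lines of NumPy. The exact per-class weighting scheme (inverse class frequency, normalized to unit mean) is our illustrative assumption; implementations differ:

```python
import numpy as np

def class_balance_weights(mask):
    """Per-pixel weights inversely proportional to class frequency.

    mask: binary array, 1 = boundary, 0 = grain interior.
    """
    n = mask.size
    freq_fg = mask.sum() / n          # boundary pixels are rare
    freq_bg = 1.0 - freq_fg
    w = np.where(mask == 1, 1.0 / (freq_fg + 1e-8), 1.0 / (freq_bg + 1e-8))
    return w / w.mean()               # normalize so the mean weight is 1

def weighted_cross_entropy(prob_fg, mask, weights):
    """prob_fg: predicted boundary probability per pixel."""
    eps = 1e-8
    ce = -(mask * np.log(prob_fg + eps) + (1 - mask) * np.log(1 - prob_fg + eps))
    return float((weights * ce).mean())
```

This rebalances the loss toward the rare boundary class but, as noted above, still penalizes every pixel-level disagreement in boundary location equally.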
2.3 3D Reconstruction
There are two classes of 3D reconstruction methods to recognize the same regions in adjacent slices. The segmentation based, such as 3D watershed [35, 15], uses distance information or gradient information to determine the relationship between two adjacent pixels. Unfortunately, the polycrystalline structure is complex and staggered, the grain region in one slice is connected to other grains in voxel relation on adjacent slices so that the 3D watershed cannot be applied to this task. The track based methods calculate shape similarity and overlap area between two connected components in two adjacent slices . However, both of them rely on hand-crafted features which will unavoidably cause the over-segment problem.
3.1 Adaptive Boundary Weighted Map
Traditional weighted cross-entropy rigidly controls the location of the predicted boundary at the pixel level. However, from a practical point of view, the topology of grains and boundaries is what truly matters. U-Net proposed a weighted map that forces the network to learn the small separation borders between two regions. This is well suited to loosely arranged regions; however, for tightly arranged regions the separation-border distances are zero and the result is the same as with weighted cross-entropy.
Inspired by U-Net, we propose an adaptive boundary weighting method, which incorporates a weight map into the cross-entropy calculation. The formulas are shown below:
$E = \sum_{\mathbf{x} \in \Omega} w(\mathbf{x}) \log(p_{\ell(\mathbf{x})}(\mathbf{x}))$ is the energy function, computed by a pixel-wise soft-max over the final feature map combined with the cross-entropy function. The soft-max is defined as $p_k(\mathbf{x}) = \exp(a_k(\mathbf{x})) / \sum_{k'=1}^{K} \exp(a_{k'}(\mathbf{x}))$, where $a_k(\mathbf{x})$ denotes the activation in feature channel $k$ at pixel position $\mathbf{x}$, $\ell(\mathbf{x})$ is the true label of each pixel, and $K$ is the number of classes, which equals 2 in the boundary detection task. $w(\mathbf{x})$ is the weight map that balances the class frequencies. We design two types of weights, for the background and the object respectively. For each pixel in a grain, we calculate its distance $d$ to the nearest boundary; the maximum of $d$ within grain $i$ is denoted $d_{max}^{i}$. We customize the weight for each grain by using $d_{max}^{i}$ in the above formulas. With this optimization, the algorithm adaptively controls the decay speed of the normal function: the smaller the grain, the faster the weight decays, which protects tiny grains while tolerating minor differences in boundary location.
The dilated version of the single-pixel-width mask controls the allowed range of variation of the boundary. The standard deviation of the normal function in each grain $i$ is $\sigma_i = d_{max}^{i} / 2.58$, because 99.00% of the probability mass of a normal distribution lies within $2.58\sigma$ of its mean.
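A minimal sketch of the per-grain adaptive weighting described above, using a Euclidean distance transform. The base weight, the amplitude `w0`, and the exact Gaussian form are illustrative assumptions; only the per-grain standard deviation $d_{max}/2.58$ follows the text:

```python
import numpy as np
from scipy import ndimage

def adaptive_boundary_weights(labels, w0=10.0, z=2.58):
    """Sketch of per-grain adaptive weighting (Sec. 3.1).

    labels: integer label image, 0 = boundary, k > 0 = grain k.
    For each pixel, d = distance to the nearest boundary; within its
    grain, sigma = d_max / z, so the Gaussian falloff effectively spans
    the grain. Small grains thus keep a sharp, high weight near their
    boundary, while large grains relax faster.
    """
    dist = ndimage.distance_transform_edt(labels > 0)  # distance to boundary
    w = np.ones_like(dist, dtype=float)                # base weight everywhere
    for k in np.unique(labels):
        if k == 0:
            continue                                   # boundary pixels keep base weight
        m = labels == k
        sigma = max(dist[m].max() / z, 1e-8)
        w[m] = 1.0 + w0 * np.exp(-(dist[m] ** 2) / (2.0 * sigma ** 2))
    return w
```

Pixels next to a boundary receive a large weight while grain centers decay toward the base weight, with the decay rate set per grain.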
We discuss the benefit of the adaptive boundary weight map with some examples in Figure 2. The figure shows the curves of the two weight components (blue and red) for different grain sizes; the green curve is the final weight, the dash-dot black line marks the original mask location, and the solid black lines mark the dilated mask. In this example, the structural kernel of the dilation operation has size 5. For a tiny grain smaller than the dilation kernel, as in (a), the method chooses the larger of the two weights, which protects the boundary of the tiny grain and prevents it from being covered by the dilation operation. For a huge grain larger than the dilation kernel, as in (c), the method selects the appropriate weight inside and outside the dilation range; this limits the variation of the boundary and prevents large discrepancies between the predicted result and the ground truth. For comparison, (b) shows the weighting result for a grain of intermediate size.
We visualize the adaptive boundary weight map and the boundary detection results of the classical U-Net in Figure 3. The left column shows the raw image and the boundary mask. The middle column illustrates dilation of the mask and its boundary result, and the right column shows the adaptive boundary weight and its result. For the purpose of comparison, the two weight components are visualized together. Simple dilation tolerates minor differences in boundary location but may remove tiny grains from the result, as shown with red straight arrows. By contrast, the adaptive boundary weight not only tolerates minor differences in boundary location but also protects the boundaries of tiny grains, preserving the topology of the result.
3.2 Integrating Propagation Information into the Network
In order to better handle blurred or missing boundaries and spurious scratches in microstructure images of polycrystalline materials, we draw on the advantages of tracking-based and deep learning-based methods and propose a new network architecture for 3D image segmentation, especially applicable to polycrystalline images. This architecture propagates the mask information of the last slice to the target image to assist accurate boundary detection. More specifically, as shown in Figure 4, the information of the last slice (the gray image on the left side of Figure 4) is sent to U-Net along with the original image as input. Since CNNs have strong learning and modeling capabilities, they can learn a powerful task-specific feature extraction function from the training data. The core of our work is to build a deep learning model that uses the power of the neural network to learn a much more complex modeling function between two adjacent slices. Ideally, this function can not only recognize blurred or missing boundaries and spurious scratches in the target image with the help of the last slice, but also preserve the topology of the target image itself. To push the network as close to this ideal function as possible, we make efforts in two directions.
Firstly, we design a weight map according to the domain knowledge of polycrystalline materials, which is exactly the weight referred to in formula 4. In this weight map, the center of a grain has a larger weight, and the weight decreases toward the grain boundary. This conforms to the properties of polycrystalline materials: although the grain boundaries of adjacent slices may undergo different degrees of deformation, the central portion of a grain in the last slice is likely to remain within the same grain in this slice. From this perspective, using a weight map may be more appropriate than directly using the mask of the last slice. To verify this, we designed three sets of comparative experiments in the experimental stage, using the mask, the mask-expansion, and the weight map, respectively. Mask-expansion means a boundary dilation map computed on the mask, which comes from the concept of "bounding region", as shown in Figure 5.
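In its simplest form, the propagation input described above amounts to stacking the previous slice's information with the current image as a 2-channel network input. `build_propagation_input` is a hypothetical helper; the per-channel standardization is an illustrative assumption standing in for the paper's normalization step:

```python
import numpy as np

def build_propagation_input(current_image, last_slice_info):
    """Stack the previous slice's information (mask, mask-expansion, or
    weight map) with the current grey-level image as a 2-channel input."""
    x = np.stack([current_image, last_slice_info], axis=0).astype(float)  # (2, H, W)
    # standardize each channel so both are on a comparable scale,
    # mirroring the preprocessing applied to the original image
    mean = x.mean(axis=(1, 2), keepdims=True)
    std = x.std(axis=(1, 2), keepdims=True)
    return (x - mean) / (std + 1e-8)
```

Whichever information style is chosen (mask, mask-expansion, or weight map), it occupies the second channel; the network itself is unchanged apart from accepting two input channels.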
Secondly, we present a multi-level fusion strategy to make better use of multiple levels of information. Since U-Net is a cascaded framework, as the number of convolution layers increases it gradually extracts higher-level information representations. In layer 1 (see Figure 4), U-Net may only learn simple boundary information, but in layer 4 it may learn high-level structural information, which is important for boundary detection in polycrystalline images. The last slice's information sent into the network contains not only boundary information but also rich structural information, so we use a multi-level fusion strategy to make the most of it. Simple concatenation is used as the fusion operation.
3.3 Grain Object Tracking Slice By Slice
After analyzing all the 2D images, a challenge still remains for reconstructing the 3D structure: recognizing the same grain regions in adjacent slices. As shown in Figure 6, two adjacent slices have corresponding boundary detection results and label results; the label results can be used for 3D reconstruction. Each grain region is given a unique label and a distinct color for visualization. As Figure 6 shows, grains may undergo various deformations in the Z direction: some grains deform, some disappear, and some appear, as illustrated in the detailed demonstration. The challenge is therefore to design an algorithm that handles all of these deformations when transforming boundary results into label results.
Traditional methods cannot achieve high performance on this problem. As discussed in Section 2.3, two classes of algorithms attempt to solve it: segmentation-based and tracking-based. However, both rely on hand-crafted features, which easily produce over-segmented results. Therefore, we intend to use a learning algorithm to handle this task. Unfortunately, many deep learning-based object tracking algorithms rely on the distinct appearances of different objects, which suits object tracking in natural scenes; by contrast, all grains have the same pixel value in the boundary result and approximately the same value in the original image.
We propose a new grain object tracking solution using a convolutional network for image classification. For each pair of grain regions connected in three dimensions, we apply a classification network to recognize whether they belong to the same grain.
We use Figure 6 and Figure 7 for a detailed illustration. One set of labels belongs to the last slice, and another set of labels is computed for the current slice with the 2D connected-component algorithm. Because the classical 3D connected-component algorithm cannot handle such complex and staggered structures, we adopt an image classification algorithm to track the grain objects. For each grain region in the current slice, we find all grain regions in the last slice that are connected to it in the Z direction. We then concatenate and resize each such candidate region together with the current region to form a 2-channel image and feed it into an image classification network. The network is a simple 2-class classifier that outputs the similarity of the two regions, i.e., the probability of successful tracking. The current grain is assigned the label of the candidate with the maximum similarity; if this maximum similarity exceeds a threshold, the tracking is considered successful.
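The tracking procedure above might be sketched as follows. `similarity_fn` stands in for the 2-class classification network that scores each resized 2-channel region pair; any callable returning a probability can be plugged in. The candidate search via pixel overlap is our simplification of "connected in the Z direction":

```python
import numpy as np

def track_grains(prev_labels, curr_labels, similarity_fn, threshold=0.5):
    """Assign each grain in the current slice the label of its most
    similar Z-connected grain in the previous slice (Sec. 3.3 sketch)."""
    out = np.zeros_like(curr_labels)
    next_new = prev_labels.max() + 1          # labels for newly appeared grains
    for k in np.unique(curr_labels):
        if k == 0:
            continue                          # 0 = boundary / background
        curr_mask = curr_labels == k
        # candidates: previous-slice labels overlapping this grain in Z
        candidates = np.unique(prev_labels[curr_mask])
        best, best_score = 0, 0.0
        for c in candidates:
            if c == 0:
                continue
            s = similarity_fn(prev_labels == c, curr_mask)
            if s > best_score:
                best, best_score = c, s
        if best_score >= threshold:
            out[curr_mask] = best             # tracking succeeded
        else:
            out[curr_mask] = next_new         # a newly appeared grain
            next_new += 1
    return out
```

With the CNN classifier substituted for `similarity_fn`, grains below the threshold are treated as newly appeared and receive fresh labels, matching the appearance/disappearance behaviour described above.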
4 Experiment Results
In this section, adequate experiments are conducted to demonstrate the effectiveness of our proposed method, WPU-Net. We test our methods on two datasets: a real anisotropic pure iron dataset and a synthetic isotropic dataset. The synthetic dataset was generated by the Monte Carlo Potts model, which mimics the growth process of polycrystalline grains. It consists of a sequence of 2D label images and corresponding serial boundary images; its details are given in Section 4.2.1. Because of its synthetic nature, it has no corresponding real original images; thus, we only use it when testing the grain object tracking algorithm. The real dataset was produced and collected in practical experiments with the serial sectioning method. In our experiment, we use a stack of 296 high-resolution microscopic pure iron images, consisting of about 16796 grains in total. The ground truth of the real dataset was labeled by professional materials researchers. To control the experimental parameters, we randomly cropped 12480 images as the training set, set 88 images as the testing set, and set the first 8 images of the test set as the validation set. The testing and validation sets used sub-images as network input, and the results were gathered to form the full image using the overlap-tile strategy.
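The overlap-tile strategy mentioned above can be sketched as follows: predictions are made on overlapping tiles and only each tile's centre region is kept, so border artefacts from the tile edges are discarded. The tile and overlap sizes here are illustrative, not the paper's settings:

```python
import numpy as np

def overlap_tile_predict(image, predict_fn, tile=256, overlap=32):
    """Predict a full image from overlapping tiles, keeping tile centres.

    predict_fn maps a (tile, tile) array to a same-sized prediction.
    """
    h, w = image.shape
    out = np.zeros((h, w), dtype=float)
    step = tile - 2 * overlap
    padded = np.pad(image, overlap, mode="reflect")   # mirror the borders
    for y in range(0, h, step):
        for x in range(0, w, step):
            patch = padded[y:y + tile, x:x + tile]
            if patch.shape != (tile, tile):           # pad ragged edge tiles
                patch = np.pad(patch, ((0, tile - patch.shape[0]),
                                       (0, tile - patch.shape[1])),
                               mode="reflect")
            pred = predict_fn(patch)
            cy, cx = min(step, h - y), min(step, w - x)
            out[y:y + cy, x:x + cx] = pred[overlap:overlap + cy,
                                           overlap:overlap + cx]
    return out
```

With an identity `predict_fn`, the stitched output reproduces the input exactly, which is a convenient sanity check for the tiling arithmetic.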
The goal of boundary detection in this work is to obtain a single-pixel-wide, closed boundary for each grain. Thus, the metric should tolerate minor differences in boundary location while penalizing under-segmentation and over-segmentation errors.
For a fair comparison, we use multiple metrics to evaluate our algorithm: Variation of Information (VI) [34, 37], Adjusted Rand Index (ARI), Mean Average Precision (mAP) [28, 17], and Rand Index (RI). Note that among all the evaluation metrics used in this paper, only VI is better when lower; for all other metrics, higher is better.
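For concreteness, VI can be computed from conditional entropies, and its two terms correspond to the merge (under-segmentation) and split (over-segmentation) errors analyzed in Figure 8. A pure-NumPy sketch (base-2 logarithms are our convention choice):

```python
import numpy as np

def variation_of_information(seg_true, seg_pred):
    """VI = H(true | pred) + H(pred | true) over two label images.

    The first term measures merge (under-segmentation) error, the second
    split (over-segmentation) error. Lower is better; 0 means the two
    segmentations agree up to a relabeling.
    """
    a = np.asarray(seg_true).ravel()
    b = np.asarray(seg_pred).ravel()
    n = a.size
    # joint distribution over (true, pred) label pairs
    pairs, counts = np.unique(np.stack([a, b]), axis=1, return_counts=True)
    p_ab = counts / n
    p_a = np.unique(a, return_counts=True)[1] / n
    p_b = np.unique(b, return_counts=True)[1] / n
    h_a = -np.sum(p_a * np.log2(p_a))
    h_b = -np.sum(p_b * np.log2(p_b))
    mi = 0.0
    for (la, lb), p in zip(pairs.T, p_ab):
        mi += p * np.log2(p / ((a == la).mean() * (b == lb).mean()))
    merge_error = h_a - mi    # H(true | pred)
    split_error = h_b - mi    # H(pred | true)
    return merge_error + split_error, merge_error, split_error
```

Merging all true segments into one prediction drives the merge term up while the split term stays at zero, and vice versa, which is exactly the decomposition used later to diagnose blurred boundaries versus spurious scratches.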
We first normalized the input images. The network weights were initialized with Xavier initialization, and all networks were trained from scratch. We adopted batch normalization (BN). The learning rate was set to 1e-4. We optimized the objective function with respect to the weights at all network layers using RMSProp with smoothing constant $\alpha$ = 0.9 and $\epsilon$ = 1e-5. Each model was trained for 10 epochs on 2 NVIDIA V100 GPUs with a batch size of 24. During training, we kept the parameters that achieved the smallest loss on the validation set. All results in Section 4.1 were obtained on the testing set using these parameters.
Our implementation of this algorithm is based on the publicly available PyTorch framework.
4.1 Boundary Detection
All experiments in this subsection were carried out on the real dataset, and the reported performance is the average score over all images in the test set. Experiments on the adaptive boundary weighted loss are presented first to establish the superiority of our weighting method; adequate ablation experiments on WPU-Net were then conducted using that weighted loss.
4.1.1 Adaptive Boundary Weighting
To justify the effectiveness and robustness of our proposed adaptive weighted loss, we report the performance of cross-entropy loss with different weights applied to classical models, such as U-Net and Attention U-Net. Three weighted losses were compared: simple class-balanced weighting (CBW), class-balanced weighting on a 5-pixel mask dilation (CBWD5), and adaptive boundary weighting (ABW).
As shown in Table 1, adaptive boundary weighting generally performs better than the other two. We believe the main reason is that the adaptive boundary weighted loss is better at tolerating minor differences in boundary location and protecting topology information, as shown in Figure 3. The VI scores of CBWD5 and ABW on U-Net are very close, probably because VI is less sensitive to tiny grains; in contrast, the mAP value, which is more sensitive to small grains, is relatively higher. We can also see that adaptive boundary weighting achieves higher performance on both the U-Net and Attention U-Net architectures, suggesting that the improvements it induces can be combined directly with existing state-of-the-art architectures.
4.1.2 Integrating Propagation Information into the Network
We conducted two experiments to systematically examine the effect of WPU-Net and each of its parts. The first is an ablation experiment on the style of the last slice's information and its fusion mode in WPU-Net. We set up six sets of contrast experiments covering three information styles of the last slice and two fusion modes. To eliminate the influence of other factors, each set of experiments was carried out under strictly the same conditions, including the same pre-processing and post-processing methods, the same network parameter settings, and the same number of training epochs. Notably, regardless of the style of the last slice's information, its pixel values are normalized to [-6, 1] before being fed into the network, consistent with the normalization of the original image. To obtain a single-pixel boundary result, the network predictions undergo a skeletonization operation.
The evaluation results of the ablation experiment are listed in Table 2. Weight map-based methods generally score higher than mask- and mask-expansion-based methods, which supports the validity of the proposed weight map. However, the results for the fusion modes show some curious patterns: the multi-level fusion strategy performs poorly with the mask style, performs well with the mask-expansion style, and performs similarly with the weight-map style. This is a question worth pondering. We conjecture that the types of information carried by the different styles are inconsistent. The mask carries strong edge information, which is harmful when integrated into the network's high-level features. The mask-expansion embodies the concept of "bounding region", which mainly characterizes structural transitions between adjacent slices, so it works better when integrated into high-level features. The similar performance of the weight map-based variants suggests that the weight map contains not only edge information but also rich structural information.
|Weight Map||Layer 1||0.1715||0.7264||0.7288|
The second experiment is a model comparison between WPU-Net and classical models, chosen because they are the typical methods for handling 3D images mentioned in Section 2.1. As shown in Table 3, our proposed WPU-Net outperforms the others on every evaluation metric; on the VI metric in particular, our method scores markedly lower than the other methods. This demonstrates the feasibility and effectiveness of the propagation segmentation network for boundary detection in 3D images, especially of polycrystalline materials. Due to the special manufacturing process of microscopic images of polycrystalline materials, many special problems require attention; the continuous blurring of the same grain boundary across adjacent slices and scratch noise are the two main reasons typical methods are inapplicable. To analyze this further, we display the merge error and split error of each method under the VI metric separately in Figure 8. The merge error (under-segmentation) is the error caused by failing to detect grain boundaries (false negatives), so that two grains in the image are judged to be the same grain; it usually occurs at blurred grain boundaries. The split error (over-segmentation) is the error caused by wrongly detecting grain boundaries (false positives), so that one grain is judged to be two; it usually occurs at spurious scratches. From Figure 8, we find that apart from 3D U-Net and our method, the other models generally perform much worse on blurred grain boundaries. The merge error is abnormally high for UNet+BDCLSTM; we conjecture that the RNN is not good at handling continuous blur. By contrast, our WPU-Net performs better on both problems, especially on blurred boundaries. We visualize the detection results of some representative methods in Figure 9.
It should be mentioned that all the algorithms used in this experiment were re-implemented in PyTorch based on the original papers and source code (where provided), except for Fast-FineCut.
|3D U-Net ||0.2696||0.6370||0.7475|
|Attention U-Net ||0.3114||0.5721||0.6810|
4.2 Grain Object Tracking Slice By Slice
We test our object tracking algorithm on both the synthetic isotropic dataset and the real anisotropic dataset. The real data were produced experimentally and are thus limited by the sample preparation process: because of the polishing step, the resolution in the Z direction is always lower than in the X and Y directions. By contrast, the synthetic data are isotropic, being generated by a simulation model. We use RI and VI as the metrics for these experiments. We compare our algorithm with the maximum overlap area algorithm and the minimum centroid distance algorithm proposed in the literature. For the image classification model, we compare vgg13_bn and densenet161. The learning rate started at 1e-3 and was multiplied by 0.8 after every two epochs until it decayed to 1e-6. The batch size was 20, and RMSProp with momentum 0.9 was used for optimization. Each model was trained for 10 epochs.
For both datasets, the testing set is evaluated with the parameters at which the models achieve the highest accuracy on the validation set.
In addition, because of the missing information between slices, the tracking algorithm cannot achieve 100% accuracy even on ground truth boundary results. Therefore, we select the best tracking model using the ground truth boundary results and apply it to the outputs of different boundary detection methods. It is then reasonable to use the tracking results to evaluate the performance of the different boundary detection methods.
Note that the number of slices is not a limitation for the CNN; the actual input to the network is the set of grain-region pairs. There are about a million grain-region pairs in the training set for the real dataset and about half a million for the synthetic one.
4.2.1 Synthetic Dataset
The synthetic dataset was generated by the Monte Carlo Potts model, which mimics the growth process of polycrystalline grains; we took the data at the 5000th Monte Carlo step of the simulation. Because of its synthetic nature, the dataset only has serial label images and corresponding serial boundary images. It contains 400 slices; we use 240 slices as the training set, 80 slices as the validation set, and 80 slices as the testing set. Table 4 reports the tracking performance of the different methods. The deep learning-based tracking methods achieve promising performance compared with the traditional methods, and performance improves further with a more complex and advanced network. However, the deep learning-based tracking algorithm consumes much more time than the traditional ones; we believe it can be accelerated with parallel programming.
4.2.2 Real Mini Dataset
For the real dataset, we use 80 slices as the validation set and 208 slices as the training set. For efficiency, we use a sub-dataset of the pure iron dataset as the testing set, containing 80 slices. As shown in Table 5, the results follow the same trend as on the synthetic data. In addition, we chose densenet161 to track the boundary results of the different methods in Table 6. WPU-Net achieves more promising results than the other methods.
|Min Centroid Dis||0.5656||0.8748||23.84|
|Max Overlap Area||0.6105||0.8603||18.48|
|Fast-Fine Cut ||0.9890||0.6375||1.7142|
|3D U-net ||0.9946||0.7870||1.1827|
In general, the algorithm achieves the highest performance on both the real anisotropic dataset and the synthetic isotropic dataset.
5 Conclusion
In this work, we proposed a Weighted Propagation U-Net (WPU-Net) architecture for boundary detection in polycrystalline materials. The network integrates information from adjacent slices to aid boundary detection in the target slice. We also presented adaptive boundary weighting to optimize the model, which tolerates minor differences in boundary location and protects the topology of grains. Experiments show that our network achieves promising performance superior to previous state-of-the-art methods. In addition, we developed a new solution for reconstructing the 3D structure of the sample by using a CNN to perform grain object tracking between slices. In the future, our team will focus on accelerating tracking and further optimizing boundary detection.
The authors acknowledge financial support from the National Key Research and Development Program of China (No. 2016YFB0700500), the National Science Foundation of China (No. 61572075, No. 6170203, No. 61873299, No. 51574027), and the Key Research Plan of Hainan Province (No. ZDYF2018139). We also gratefully thank Dr. Chao Yao for many helpful comments.
-  Nabila Abraham and Naimul Mefraz Khan. A novel focal tversky loss function with improved attention u-net for lesion segmentation. 2018.
-  Mumtaz Ali, Hoang Son Le, Mohsin Khan, and Nguyen Thanh Tung. Segmentation of dental x-ray images in medical imaging using neutrosophic orthogonal matrices. Expert Systems with Applications, 2017.
-  Md. Zahangir Alom, Mahmudul Hasan, Chris Yakopcic, Tarek M. Taha, and Vijayan Asari. Recurrent residual convolutional neural network based on u-net (r2u-net) for medical image segmentation. 2018.
-  Manuel Berning, Kevin M Boergens, and Moritz Helmstaedter. Segem: Efficient image analysis for high-resolution connectomics. Neuron, 87(6):1193–1206, 2015.
-  Neil Birkbeck, Dana Cobzas, Martin Jagersand, and Albert Murtha. An interactive graph cut method for brain tumor segmentation. applications of computer vision, pages 1–7, 2009.
-  Patrick R Cantwell, Ming Tang, Shen J Dillon, Jian Luo, Gregory S Rohrer, and Martin P Harmer. Grain boundary complexions. Acta Materialia, 62(1):1–48, 2014.
-  Jianxu Chen, Lin Yang, Yizhe Zhang, Mark S Alber, and Danny Z Chen. Combining fully convolutional and recurrent neural networks for 3d biomedical image segmentation. neural information processing systems, pages 3036–3044, 2016.
-  Liangchieh Chen, George Papandreou, Iasonas Kokkinos, Kevin P Murphy, and Alan L Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(4):834–848, 2018.
-  C T Chou, P B Hirsch, M Mclean, and E D Hondros. Anti-phase domain boundary tubes in ni3al. Nature, 300(5893):621–623, 1982.
-  Ozgun Cicek, Ahmed Abdulkadir, Soeren S Lienkamp, Thomas Brox, and Olaf Ronneberger. 3d u-net: Learning dense volumetric segmentation from sparse annotation. medical image computing and computer assisted intervention, pages 424–432, 2016.
-  Mary Comer, Charles A. Bouman, Marc De Graef, and Jeff P. Simmons. Bayesian methods for image segmentation. JOM, 63(7):55–57, 2011.
-  M Ali Akber Dewan, Ahmad M Omair, and M N S Swamy. Tracking biological cells in time-lapse microscopy: an adaptive technique combining motion and topological features. IEEE transactions on bio-medical engineering, 58(6):1637–47, 2011.
-  P J E Forsyth, R King, G J Metcalfe, and Bruce Chalmers. Grain boundaries in metals. Nature, 158(4024):875–876, 1946.
-  Jan Funke, Fabian Tschopp, William Grisaitis, Arlo Sheridan, Chandan Singh, Stephan Saalfeld, and Srinivas C Turaga. Large scale image segmentation with structured loss based deep learning for connectome reconstruction. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 1–1, 2018.
-  Hai Gao, Ping Xue, and Weisi Lin. A new marker-based watershed algorithm. 2:81–84, 2004.
-  Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. pages 249–256, 2010.
-  Booz Allen Hamilton. 2018 data science bowl, 2018. https://www.kaggle.com/c/data-science-bowl-2018/overview/evaluation.
-  G. Hinton. Divide the gradient by a running average of its recent magnitude. Technical report, 2012.
-  Junhao Hu, Yingchao Shi, X Sauvage, Gang Sha, and K Lu. Grain boundary stability governs hardening and softening in extremely fine nanograined metals. Science, 355(6331):1292–1296, 2017.
-  Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. computer vision and pattern recognition, pages 2261–2269, 2017.
-  Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. international conference on machine learning, pages 448–456, 2015.
-  Robert Jagitsch. A method of using marked phase boundaries. Nature, 159(4031):166–166, 1947.
-  Viren Jain, Benjamin Bollmann, Mark A Richardson, Daniel R Berger, Moritz Helmstaedter, Kevin L Briggman, Winfried Denk, Jared B Bowden, John M Mendenhall, Wickliffe C Abraham, et al. Boundary learning by optimization with topological constraints. pages 2488–2495, 2010.
-  Michal Januszewski, Jorgen Kornfeld, Peter H Li, Art Pope, Tim Blakely, Larry Lindsey, Jeremy Maitinshepard, Mike Tyka, Winfried Denk, and Viren Jain. High-precision automated reconstruction of neurons with flood-filling networks. Nature Methods, 15(8):605–610, 2018.
-  Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. neural information processing systems, 141(5):1097–1105, 2012.
-  Kisuk Lee, Jonathan Zung, Peter H Li, Viren Jain, and H Sebastian Seung. Superhuman accuracy on the snemi3d connectomics challenge. arXiv: Computer Vision and Pattern Recognition, 2017.
-  Qingwu Li, Xue Ni, and Guogao Liu. Ceramic image processing using the second curvelet transform and watershed algorithm. In IEEE International Conference on Robotics and Biomimetics, pages 2037 – 2042, 2007.
-  Tsungyi Lin, Michael Maire, Serge J Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollar, and C Lawrence Zitnick. Microsoft coco: Common objects in context. pages 740–755, 2014.
-  Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. computer vision and pattern recognition, pages 3431–3440, 2015.
-  Carlos Lopezmolina, B De Baets, and Humberto Bustince. Quantitative error measures for edge detection. Pattern Recognition, 46(4):1125–1139, 2013.
-  Boyuan Ma, Xiaojuan Ban, Haiyou Huang, Yulian Chen, Wanbo Liu, and Yonghong Zhi. Deep learning-based image segmentation for al-la alloy microscopic images. Symmetry, 10(4):107, 2018.
-  Boyuan Ma, Xiaojuan Ban, Ya Su, Chuni Liu, Hao Wang, Weihua Xue, Yonghong Zhi, and Di Wu. Fast-finecut: Grain boundary detection in microscopic images considering 3d information. Micron, 116:5–14, 2019.
-  William Mcilhagga. The canny edge detector revisited. International Journal of Computer Vision, 91(3):251–261, 2011.
-  Marina Meilă. Comparing clusterings—an information based distance. Journal of Multivariate Analysis, 98(5):873–895, 2007.
-  F Meyer. Color image segmentation. pages 303–306, 1992.
-  Fausto Milletari, Nassir Navab, and Seyedahmad Ahmadi. V-net: Fully convolutional neural networks for volumetric medical image segmentation. international conference on 3d vision, pages 565–571, 2016.
-  Juan Nunez-Iglesias, Ryan Kennedy, Toufiq Parag, Jianbo Shi, and Dmitri B Chklovskii. Machine learning of hierarchical clustering to segment 2d and 3d images. PLOS ONE, 8(8), 2013.
-  Ozan Oktay, Jo Schlemper, Loic Le Folgoc, Matthew Lee, Mattias Heinrich, Kazunari Misawa, Kensaku Mori, Steven Mcdonagh, Nils Y Hammerla, and Bernhard Kainz. Attention u-net: Learning where to look for the pancreas. 2018.
-  PyTorch, 2019. https://pytorch.org/.
-  Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. medical image computing and computer assisted intervention, pages 234–241, 2015.
-  Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. international conference on learning representations, 2015.
-  Meng Tang, Lena Gorelick, Olga Veksler, and Yuri Boykov. Grabcut in one cut. International conference on computer vision, pages 1769–1776, 2013.
-  M. H. J. Vala and A. Baxi. A review on otsu image segmentation algorithm. International Journal of Advanced Research in Computer Engineering and Technology, 2(2), 2013.
-  Nguyen Xuan Vinh, Julien Epps, and James Bailey. Information theoretic measures for clusterings comparison: Variants, properties, normalization and correction for chance. Journal of Machine Learning Research, 11:2837–2854, 2010.
-  Jarrell W Waggoner, Youjie Zhou, Jeff P Simmons, Marc De Graef, and Song Wang. 3d materials image segmentation by 2d propagation: A graph-cut approach considering homomorphism. IEEE Transactions on Image Processing, 22(12):5282–5293, 2013.
-  Hao Wang, Guoquan Liu, and Xiangge Qin. Grain size distribution and topology in 3d grain growth simulation with large-scale monte carlo method. International Journal of Minerals Metallurgy and Materials, 16(1):37–42, 2009.
-  Saining Xie and Zhuowen Tu. Holistically-nested edge detection. international conference on computer vision, pages 1395–1403, 2015.
-  Weihua Xue. Three-dimensional Modeling and Quantitative Characterization of Grain Structure. PhD thesis, University of Science and Technology Beijing, 2016.
-  Tao Zeng. Residual deconvolutional networks for brain electron microscopy image segmentation. IEEE Transactions on Medical Imaging, 09 2016.
-  Tao Zeng, Bian Wu, and Shuiwang Ji. Deepem3d: approaching human-level performance on 3d anisotropic em image segmentation. Bioinformatics, 33(16):2555–2562, 2017.