
WPU-Net: Boundary learning by using weighted propagation in convolution network

by Boyuan Ma, et al.

Deep learning has driven great progress in natural and biological image processing. However, in materials science and engineering, material microscopic images often contain flaws and indistinct regions, introduced by complex sample preparation or even by the material itself, which hinder the detection of target objects. In this work, we propose WPU-Net, which redesigns the architecture and weighted loss of U-Net to force the network to integrate information from adjacent slices and to pay more attention to topology in this boundary detection task. We then apply WPU-Net to a typical materials example, i.e., grain boundary detection in polycrystalline materials. Experiments demonstrate that the proposed method achieves promising performance compared with state-of-the-art methods. In addition, we propose a new method for object tracking between adjacent slices, which can effectively reconstruct the 3D structure of the whole material while maintaining relative accuracy.





1 Introduction

Most metals and ceramics have complex microstructures, such as polycrystalline, multi-phase, and multi-domain structures, separated by different interfaces called grain boundaries [13], phase boundaries [22], and domain boundaries [9]. The microstructure, including these boundaries, is determined by the material composition and preparation process, and is of great significance for controlling the properties and performance of materials. Therefore, microstructure characterization is one of the core missions in materials science and engineering.

During the quantitative analysis of microstructure characteristics, an important step is microscopic image processing, which is used to extract the key information in the microstructure. Unlike image processing tasks in natural and biological scenes, microscopic images in materials science pose unique problems that increase the difficulty of image processing and analysis. Take the polycrystalline structure, which is commonly used and studied in practice, as an example. The ultimate objective is to obtain the 3D structure of the sample. Due to the opacity of materials, researchers can only use the serial section method to obtain serial 2D images and stack them to reconstruct the 3D structure, as shown in Figure 1. Thus, there are two important steps in the process: 2D image analysis and 3D reconstruction. Each has its own difficulties.

For 2D image analysis, flaws in material microscopic images seriously hinder target object detection [32]. The region of interest in polycrystalline microscopic images is the single-pixel closed boundary of each grain (like a cell in a biological image) [6], as shown with black straight and thick arrows. Unfortunately, flaws are unavoidably introduced into the sample during preparation, such as during the polishing and etching processes. There are three types of flaws in polycrystalline microscopic images, and they pose significant problems for the boundary detection task.

  • Blurred or missing boundaries: caused by incomplete etching in the nital solution, as shown with red straight and thin arrows. This kind of flaw may occur at any position in a slice, or even at the same position in serial slices. An algorithm therefore needs to recover the missing boundary by using information from adjacent slices.

  • Noise: introduced during sample preparation, as shown with yellow curved arrows.

  • Spurious scratches: unavoidably caused by the polishing process; they resemble boundaries and easily confuse image processing algorithms, as shown with blue notched arrows.

Figure 1: Microscopic serial slices of polycrystalline iron. The left is a demonstration of the serial slices. The top right shows five serial raw slices, and the bottom right their corresponding boundary results. For detailed visualization, we only add the scale bar in this figure; all microscopic images in this paper share the same scale bar. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of the article.)

Owing to their highly expressive representations, convolutional neural networks (CNNs) have driven great progress in image segmentation [29] and boundary detection [47] in recent years, especially in natural and biological scenes. However, as far as we know, there is no deep learning-based method specially designed for polycrystalline structural materials with such flaws.

For 3D reconstruction, it is a challenge to identify the same grain regions in adjacent slices. Different degrees of deformation exist in the same grain between adjacent slices, and grain bifurcation may occur. In addition, grains often disappear and appear between adjacent slices. Therefore, we need an algorithm that can handle all these problems when transforming a 2D boundary result into a 3D label result.

In this work, to solve the problems existing in 2D microscopic images of polycrystalline materials, we propose a novel Weighted Propagation Convolution Neural Network based on U-Net (WPU-Net), which propagates boundary information from the adjacent slice to aid boundary detection in the target slice, using a weighted map specially designed for this boundary detection task. From a practical standpoint, our work makes three contributions:

  1. We propose an adaptive boundary weighted loss that forces the network to tolerate minor differences in boundary location and pay more attention to topology preservation, which is better suited to boundary detection in polycrystalline images, since the quantitative analysis of material microstructures is almost unaffected by small differences in the boundary.

  2. We modify U-Net by introducing 3D information into the architecture, so that it makes better use of domain knowledge between slices and detects boundaries precisely, even when they are blurred or missing. As shown in the experiment section, our method achieves the highest performance compared with state-of-the-art methods.

  3. We propose a new solution to reconstruct the 3D structure of the sample by using a CNN to perform grain object tracking between slices.

Our code and partial data can be found at:

2 Related work

2.1 Boundary Detection

Many existing methods have been - or can be - used to detect boundaries in 3D polycrystalline material microscopic images. They can be broadly categorized into two classes: 2D image-based methods, which detect the boundary using only the information contained in the 2D image itself, and 3D image-based methods, which detect the boundary using the 3D context information contained in the image volume.

The 2D image-based methods include many classical image segmentation methods [43, 12, 33, 11, 27, 5, 2, 42], such as watershed [27], Canny [33], Otsu [43], graph cut [5], and GrabCut [42]. They are mainly based on hand-crafted features, including gray-scale information, gradient information, morphological cues, and structural information. Although these methods have achieved good performance in many image segmentation scenarios, they may fail to achieve satisfying performance on images with high noise and blurred or even missing edges. Deep learning-based methods [29, 47, 49, 40, 4, 38, 1, 3, 31, 8] for 2D semantic segmentation have become increasingly popular in recent years and are now the de facto standard for image segmentation by virtue of their powerful feature learning and expression ability. U-Net [40] has become the most commonly used image segmentation method because of its robustness and excellent performance. Many improved methods [38, 1, 3], including ours, are based on it; a representative one is Attention U-Net [38]. However, 2D image-based methods have an inherent drawback: they cannot make use of the 3D context information between adjacent slices.

The 3D image-based methods can also be broadly grouped into three classes based on how they use 3D information. (I) 3D fully convolutional networks (FCNs) [14, 24, 50, 26, 10, 36], which employ 3D convolutions instead of 2D convolutions; 3D U-Net [10] and V-Net [36] are representative methods of this class. (II) Methods combining a 2D FCN with an RNN. The most representative method is UNet+BDCLSTM [7], which uses a 2D FCN to extract intra-slice contexts and a recurrent neural network (RNN) to extract inter-slice contexts. Methods using 3D convolutions perform convolutions with isotropic kernels on anisotropic 3D images, which can be problematic; combining an RNN with a 2D FCN eliminates this drawback, but performs poorly under continuous blurring of grain boundaries at the same position in adjacent slices. We explain this in detail in the experiment results. Moreover, both kinds of methods are very computationally intensive. (III) Tracking-based methods, which have been developed for detecting boundaries in a stack of 2D slices.

[19] developed an interactive segmentation method based on break-point detection, but many manual corrections are needed. [45] proposed the concept of "propagation segmentation" based on graph cut, which sets the energy function of the target image using information from the last slice through the domain knowledge of materials science. [32] improved it by changing the binary terms in the energy function, filling blurred or missing boundaries in target images with the corresponding boundaries from the last slice. Tracking-based methods show superior performance when dealing with blurred or missing boundaries and spurious scratches. However, they usually rely on hand-crafted features and are time-consuming. Our method combines the deep learning-based approach with a tracking-based approach to take advantage of both, achieving the best performance among state-of-the-art methods.

2.2 Weighted Loss

Weighted loss is widely used to handle the class imbalance problem in deep learning, for example weighted cross-entropy [47]. However, it does not tolerate minor differences in boundary location. U-Net [40] proposed a weighted map loss that pays more attention to the border between two objects. However, it can only be applied to separated regions; for the tightly packed regions in our task, it degenerates to weighted cross-entropy. Some works simply dilate the boundary [49] to achieve higher performance, but this can remove tiny objects. Thus, a new weighting method is needed to handle the above problems.

2.3 3D Reconstruction

There are two classes of 3D reconstruction methods for recognizing the same regions in adjacent slices. Segmentation-based methods, such as 3D watershed [35, 15], use distance or gradient information to determine the relationship between two adjacent pixels. Unfortunately, the polycrystalline structure is complex and staggered: a grain region in one slice is voxel-connected to other grains in adjacent slices, so the 3D watershed cannot be applied to this task. Tracking-based methods calculate shape similarity and overlap area between two connected components in adjacent slices [48]. However, both classes rely on hand-crafted features, which unavoidably cause over-segmentation.

3 Method

3.1 Adaptive Boundary Weighted Map

Traditional weighted cross-entropy rigidly controls the location of the predicted boundary at the pixel level. However, from a practical point of view, the topology of grains and boundaries is what truly matters. U-Net [40] proposed a weighted map that forces the network to learn the small separation borders between two regions, which is well suited to loosely arranged regions. For tightly arranged regions, however, the distances $d_1$ and $d_2$ to the borders of the two nearest regions both equal 0, and the result is the same as weighted cross-entropy.

Taking inspiration from U-Net, we propose an adaptive boundary weighting method, in which a weighting map is incorporated into the cross-entropy calculation:

$$E = \sum_{\mathbf{x}} w(\mathbf{x}) \log\big(p_{\ell(\mathbf{x})}(\mathbf{x})\big)$$

$E$ is the energy function, computed by a pixel-wise soft-max over the final feature map combined with the cross-entropy function. The soft-max is defined as

$$p_k(\mathbf{x}) = \frac{\exp(a_k(\mathbf{x}))}{\sum_{k'=1}^{K} \exp(a_{k'}(\mathbf{x}))}$$

where $a_k(\mathbf{x})$ denotes the activation in feature channel $k$ at pixel position $\mathbf{x}$. $K$ is the number of classes and equals 2 in the boundary detection task. $w(\mathbf{x})$ is the weighted map that balances the class frequencies. We design two types of weights, $w_{bg}$ and $w_{obj}$, for background and object respectively. For each pixel in grain $i$, we calculate its distance $d$ to the nearest boundary. From this we obtain the maximum of $d$ in grain $i$, denoted $d^{max}_i$. We customize the weight for each grain by using $d^{max}_i$ in the above formulas. With this design, the algorithm adaptively controls the convergence speed of the normal function: the smaller the grain, the faster the weight converges, which protects tiny grains while tolerating minor differences in boundary location.

$w_{obj}$ is applied on the dilated result of the single-pixel-width mask, which controls the allowed range of variation of the boundary. The standard deviation of the normal function in each grain $i$ is $\sigma_i = d^{max}_i / 2.58$, because 99.00% of the probability mass of a normal distribution lies within $2.58\sigma$.
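As a concrete illustration of the per-grain normalization, the weight map could be computed roughly as below. This is a minimal sketch, not the paper's code: the helper name `adaptive_weight_map`, the weight bounds, and the single-Gaussian fall-off from the boundary (with per-grain $\sigma_i = d^{max}_i / 2.58$) are assumptions; the paper's exact $w_{bg}$/$w_{obj}$ forms are given by the formulas above.

```python
import numpy as np
from scipy import ndimage

def adaptive_weight_map(labels, w_min=1.0, w_max=10.0):
    """Sketch of an adaptive boundary weight map (hypothetical helper).

    `labels` is an integer image: 0 marks boundary pixels, positive ids mark
    grains. Inside each grain the weight decays from the boundary toward the
    grain center with a normal curve whose std is scaled per grain:
    sigma_i = d_max_i / 2.58, so small grains keep high weight near their
    boundaries while large grains converge quickly to the baseline.
    """
    weights = np.full(labels.shape, w_max, dtype=float)  # boundary pixels get w_max
    for gid in np.unique(labels):
        if gid == 0:  # skip the boundary class itself
            continue
        grain = labels == gid
        # distance of each grain pixel to the nearest non-grain (boundary) pixel
        d = ndimage.distance_transform_edt(grain)
        d_max = d[grain].max()
        sigma = d_max / 2.58  # ~99% of a normal distribution lies within 2.58*sigma
        weights[grain] = w_min + (w_max - w_min) * np.exp(
            -(d[grain] ** 2) / (2.0 * sigma ** 2)
        )
    return weights

# toy example: two grains separated by a one-pixel boundary column
labels = np.zeros((5, 5), dtype=int)
labels[:, :2] = 1   # grain 1
labels[:, 3:] = 2   # grain 2; column 2 stays 0 (boundary)
w = adaptive_weight_map(labels)
```

Pixels on the boundary keep the maximum weight, and within each grain the weight falls off faster for smaller grains, which matches the convergence behavior described above.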

We discuss the benefit of the adaptive boundary weighting map with some demos in Figure 2. This figure shows the curves of $w_{bg}$ (blue) and $w_{obj}$ (red) for different grain sizes. The green curve is the final weight. The dot-dashed black line is the original mask location, and the black straight line is the dilated result of the original mask. In this example, the structural kernel of the dilation operation has size 5. For a tiny grain whose size is smaller than the dilation kernel size in (a), the method chooses the larger of $w_{bg}$ and $w_{obj}$, which protects the boundary of the tiny grain and prevents it from being covered by the dilation operation. For a huge grain whose size is bigger than the dilation kernel size in (c), the method uses $w_{obj}$ within the dilated band and $w_{bg}$ outside it. This limits the variation of the boundary, preventing a huge difference between the predicted result and the ground truth. For comparison, we show the weighting result for an appropriate grain size in (b).

Figure 2: Adaptive boundary weighting applied to different grain sizes.
Figure 3: Demonstration of different weights. The left column is the raw image and its boundary mask. The middle column is the weight from a 5-pixel dilation on the mask and its detection result. The right column is the adaptive boundary weight and its result. All models are based on classical U-Net.

We visualize the adaptive boundary weighted map and the boundary detection results obtained with classical U-Net in Figure 3. The left column is the raw image and boundary mask. The middle column illustrates dilation on the mask and its boundary result. The right column is the adaptive boundary weight and its result. For the purpose of comparison, we visualize $w_{bg}$ and $w_{obj}$ together. The simple dilation operation tolerates minor differences at the boundary location but may remove tiny grains from the result, as shown with red straight arrows. By contrast, the adaptive boundary weight not only tolerates minor differences at the boundary location but also protects the boundaries of tiny grains, preserving the topology of the result.

3.2 Integrate Propagation Information in Network

To better solve the problems of blurred or missing boundaries and spurious scratches in microstructure images of polycrystalline materials, we draw on the advantages of tracking-based and deep learning-based methods and propose a new network architecture for 3D image segmentation, especially applicable to polycrystalline images. This architecture propagates the mask information of the last slice to the next target image to assist it in detecting boundaries accurately. More specifically, as shown in Figure 4, the information of the last slice (the gray image on the left side of Figure 4) is sent to U-Net along with the original image as input. As CNNs have strong learning and modeling capabilities, they can learn a powerful task-specific feature extraction function from the training data. The core of our work is to build a deep learning model that uses the power of the neural network to learn a much more complex modeling function between two adjacent slices. Ideally, this function can not only recognize blurred or missing boundaries and spurious scratches in the target image with the help of the last slice, but also keep the topology of the target image itself. To encourage the network to learn a function as close to this ideal as possible, we make efforts in two ways.
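The propagation input can be sketched as stacking the target slice and the upper-slice information into a 2-channel tensor. This is a minimal illustration, not the authors' implementation; the class name and channel widths are hypothetical.

```python
import torch
import torch.nn as nn

class PropagationInput(nn.Module):
    """Sketch: the first U-Net convolution takes 2 channels instead of 1,
    so the target slice and the propagated upper-slice information
    (mask / dilated mask / weight map) enter the network together."""

    def __init__(self, out_channels=64):
        super().__init__()
        self.conv = nn.Conv2d(2, out_channels, kernel_size=3, padding=1)

    def forward(self, image, upper_info):
        # image, upper_info: (B, 1, H, W) -> concatenated along channels
        x = torch.cat([image, upper_info], dim=1)  # (B, 2, H, W)
        return self.conv(x)

block = PropagationInput()
image = torch.randn(1, 1, 64, 64)   # target slice
upper = torch.randn(1, 1, 64, 64)   # information propagated from the last slice
features = block(image, upper)
```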

Figure 4: Proposed Weighted Propagation Convolution Neural Network based on U-Net(WPU-Net) architecture with Multi-level fusion.

Firstly, we design a weighted map according to the domain knowledge of polycrystalline materials, namely the weight map referred to in formula 4. In this weighted map, the center of the grain has a larger weight, and the weight decreases toward the grain boundary. This conforms to the properties of polycrystalline materials: although the grain boundaries of adjacent slices may undergo different degrees of deformation, the central portion of a grain in the last slice is likely to remain part of the same grain in this slice. From this perspective, using a weight map can be more appropriate than directly using the mask of the last slice. To verify this, we designed three sets of comparative experiments in the experimental stage, using the mask, mask-dilation, and weight map, respectively. Mask-dilation means a boundary dilation map on the mask, following the concept of "bounding region" in [32], as shown in Figure 5.

Figure 5: Three styles of upper information. From left to right, they are mask, mask-dilation and adaptive weighted map, respectively. Image intensity inverted for clarity.

Secondly, we present a multi-level fusion strategy to make better use of multiple levels of information. Since U-Net is a cascaded framework, as the number of convolution layers increases it gradually extracts higher-dimensional representations. In layer 1 (as shown in Figure 4), U-Net may only learn simple boundary information, but in layer 4 it may learn high-dimensional structural information, which is important for boundary detection in polycrystalline images. The upper-slice information sent to the network contains not only boundary information but also rich structural information. Thus, we use a multi-level fusion strategy to make the most of it, with simple concatenation as the fusion operation.
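The multi-level fusion idea can be sketched as below: the upper-slice information is resized to each encoder resolution and concatenated with that level's feature map, so layers 1-4 all see the propagated information. Channel widths, the class name, and the use of bilinear resizing are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiLevelFusion(nn.Module):
    """Sketch of a 4-level encoder where the upper-slice map is
    re-concatenated (simple concatenation, as in the text) at every level."""

    def __init__(self, channels=(64, 128, 256, 512)):
        super().__init__()
        convs = []
        in_ch = 1 + 1  # image + upper-slice info at level 1
        for c in channels:
            convs.append(nn.Conv2d(in_ch, c, kernel_size=3, padding=1))
            in_ch = c + 1  # +1: upper-slice info fused again at the next level
        self.convs = nn.ModuleList(convs)

    def forward(self, image, upper_info):
        x = torch.cat([image, upper_info], dim=1)
        feats = []
        for i, conv in enumerate(self.convs):
            x = F.relu(conv(x))
            feats.append(x)
            if i < len(self.convs) - 1:
                x = F.max_pool2d(x, 2)
                # resize the upper-slice info to the new resolution and fuse again
                u = F.interpolate(upper_info, size=x.shape[-2:],
                                  mode="bilinear", align_corners=False)
                x = torch.cat([x, u], dim=1)
        return feats

model = MultiLevelFusion()
image = torch.randn(1, 1, 64, 64)
upper = torch.randn(1, 1, 64, 64)
feats = model(image, upper)
```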

3.3 Grain Object Tracking Slice By Slice

After analyzing all the 2D images, there is still a challenge in reconstructing the 3D structure: recognizing the same grain regions in adjacent slices. As shown in Figure 6, Image $i$ and Image $i+1$ are two adjacent slices, Boundary $i$ and Boundary $i+1$ are their boundary detection results, and Label $i$ and Label $i+1$ are the label results used for 3D reconstruction. Each grain region is given a unique label and a certain color for visualization. As Figure 6 shows, grains may undergo various changes in the Z direction: some grains deform, some disappear, and some appear, as shown in the detailed demonstration. Therefore, it is a challenge to design an algorithm that handles all these changes when transforming boundary results into label results.

Figure 6: Tracking demonstration. The left column is the raw images. The middle column is the boundary detection results. We need to track each grain between two neighboring slices and transform the boundary result into a label result. In the label result, each grain region is given a unique label and a certain color for visualization.

Traditional methods cannot achieve high performance on this problem. As discussed in Section 2.3, two classes of algorithms try to solve it: segmentation-based and tracking-based. However, both design the algorithm with hand-crafted features, which easily produce over-segmented results. Therefore, we intend to use a learning algorithm to handle this task. Unfortunately, many deep learning-based object tracking algorithms rely on the distinct appearance of different objects, which suits tracking objects in natural scenes. By contrast, all grains have the same pixel value in the boundary result, or approximately the same value in the original image.

We propose a new grain object tracking solution that uses a convolutional network for an image classification task. For each pair of connected grain regions in three dimensions, we apply a classification network to recognize whether they belong to the same label.

1: function Tracking(B)                 ▷ B: stack of 2D boundary slices
2:     Initialize the label volume L
3:     L_1 ← prelabel the first slice based on the connected components algorithm
4:     for each subsequent slice i do
5:         R ← prelabel slice i based on the connected components algorithm
6:         for each region r in R do
7:             Find all connected components of L_{i-1} overlapping r in the Z direction
8:             Get the maximum similarity of the above with r using the classification method
9:             if the maximum similarity exceeds the threshold then
10:                 r ← the label with maximum similarity
11:             end if
12:         end for
13:         Give a new label to each region r that has not been matched
14:         Assign R to its location L_i in L
15:     end for
16:     Filter labels that occur in only one slice; relabel them to the most similar label in neighboring slices
17:     return L
18: end function
Algorithm 1 Algorithm for grain object tracking

The whole process is shown in Algorithm 1, which takes as input the boundary slices produced by the methods of Sections 3.1 and 3.2.

Figure 7: We use an image classification network to obtain the similarity of two regions. The similarity is the probability of successful tracking.

We use Figure 6 and Figure 7 for a detailed illustration. Label $i$ is the set of labels in the last slice, and Label $i+1$ is the set of provisional labels in this slice obtained with the 2D connected components algorithm. Because the classical 3D connected components algorithm cannot be applied to such a complex and staggered structure, we use an image classification algorithm to track the grain objects. For each grain in this slice, we find all its connected components in Label $i$ in the Z direction. We then concatenate and resize each such candidate region with the target region to form a 2-channel image and feed it to an image classification network. The network is a simple 2-class network that outputs the similarity of the two regions, i.e., the probability of successful tracking. We then take the label of the candidate with maximum similarity; if that similarity exceeds a threshold, the tracking is considered successful.
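The per-slice matching step can be sketched as follows. All names are hypothetical; `similarity` stands in for the 2-class classification network and is replaced here by a plain IoU purely for illustration.

```python
import numpy as np

def track_slice(prev_labels, cur_labels, similarity, threshold=0.5):
    """Assign each provisional region of cur_labels a label from prev_labels.

    `similarity(prev_mask, cur_mask)` stands in for the 2-class CNN: it
    returns the probability that two region crops belong to the same grain.
    Unmatched regions (new grains appearing) receive fresh labels.
    """
    out = np.zeros_like(prev_labels)
    next_id = prev_labels.max() + 1
    for rid in np.unique(cur_labels):
        if rid == 0:  # 0 marks boundary pixels
            continue
        cur_mask = cur_labels == rid
        # candidates: labels of the previous slice touching this region in Z
        candidates = np.unique(prev_labels[cur_mask])
        candidates = candidates[candidates != 0]
        best, best_sim = None, -1.0
        for c in candidates:
            s = similarity(prev_labels == c, cur_mask)
            if s > best_sim:
                best, best_sim = c, s
        if best is not None and best_sim >= threshold:
            out[cur_mask] = best        # matched: inherit the grain label
        else:
            out[cur_mask] = next_id     # unmatched: a new grain appears
            next_id += 1
    return out

# toy example: two grains that persist unchanged between slices
prev = np.zeros((4, 4), dtype=int)
prev[:, :2] = 1
prev[:, 2:] = 2
cur = np.zeros((4, 4), dtype=int)
cur[:, :2] = 5   # provisional 2D component ids
cur[:, 2:] = 7

def iou(a, b):   # stand-in for the classification network's similarity
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

tracked = track_slice(prev, cur, iou)
```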

4 Experiment Results

In this section, adequate experiments are deployed to demonstrate the effectiveness of our proposed method, WPU-Net. We test our methods on two datasets: one real anisotropic pure iron dataset and one synthetic isotropic dataset. The synthetic dataset was generated by the Monte Carlo Potts model [46], which mimics the growth procedure of polycrystalline grains. It consists of a sequence of 2D label images and corresponding serial boundary images. Because it is synthetic, it has no corresponding real original images; thus, we only use it when testing the grain object tracking algorithm. The real dataset was produced and collected in practical experiments with the serial section method [48]. In our experiment, we use a stack of 296 high-resolution microscopic pure iron images containing about 16796 grains in total. The ground truth of the real dataset was labeled by professional materials researchers. To control the experimental parameters, we randomly cropped 12480 sub-images as the training set, set 88 images as the testing set, and set the first 8 images of the test set as the validation set. The testing and validation sets used sub-images as network input, and the results were gathered to form the full image using the overlap-tile strategy [31].

The goal of boundary detection in this work is to obtain a single-pixel-width, closed boundary for each grain. Thus, the metric should tolerate minor differences in boundary location while penalizing under-segmentation and over-segmentation errors.

For a fair comparison, we use multiple metrics to evaluate our algorithm: Variation of Information (VI) [34, 37], Adjusted Rand Index (ARI) [44], Mean Average Precision (mAP) [28, 17], and Rand Index (RI) [23]. Note that among all the evaluation metrics used in this paper, only for VI is lower better; for all other metrics, higher is better.
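These metrics have standard definitions; as a sketch, VI and ARI can be computed with common scientific-Python tools (mAP's instance matching is omitted, and the `variation_of_information` helper below is hand-rolled for illustration, not the paper's evaluation code).

```python
import numpy as np
from scipy.stats import entropy
from sklearn.metrics import adjusted_rand_score, mutual_info_score

def variation_of_information(seg_a, seg_b):
    """VI(X, Y) = H(X) + H(Y) - 2 I(X; Y); lower is better, 0 for identical
    labelings. entropy() normalizes the counts and uses the natural log,
    matching mutual_info_score."""
    a, b = np.ravel(seg_a), np.ravel(seg_b)
    _, counts_a = np.unique(a, return_counts=True)
    _, counts_b = np.unique(b, return_counts=True)
    return entropy(counts_a) + entropy(counts_b) - 2.0 * mutual_info_score(a, b)

gt = np.array([[1, 1, 2, 2],
               [1, 1, 2, 2]])
pred_same = gt.copy()
vi = variation_of_information(gt, pred_same)
ari = adjusted_rand_score(gt.ravel(), pred_same.ravel())
```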

We first normalized the input images. The weights of the networks were initialized with Xavier initialization [16], and all networks were trained from scratch. We adopted batch normalization (BN) after each convolution and before activation. All hidden layers were equipped with the Rectified Linear Unit (ReLU). The learning rate was set to 1e-4. We optimized the objective function with respect to the weights of all network layers using RMSProp [18] with smoothing constant α = 0.9 and ε = 1e-5. Each model was trained for 10 epochs on 2 NVIDIA V100 GPUs with a batch size of 24. During training, we kept the parameters that achieved the smallest loss on the validation set. All performance figures in the experimental Section 4.1 were obtained on the testing set using these parameters. Our implementation is based on the publicly available PyTorch framework.
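The optimizer settings above translate directly into PyTorch; a minimal sketch, in which `model` is a placeholder standing in for any of the compared networks.

```python
import torch
import torch.nn as nn

model = nn.Conv2d(1, 2, kernel_size=3, padding=1)  # placeholder for the real net
optimizer = torch.optim.RMSprop(
    model.parameters(),
    lr=1e-4,    # learning rate used in the experiments
    alpha=0.9,  # RMSProp smoothing constant
    eps=1e-5,
)
# pixel-wise cross-entropy kept unreduced so it can be multiplied by a weight map
criterion = nn.CrossEntropyLoss(reduction="none")
```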


4.1 Boundary Detection

All experiments in this subsection were carried out on the real dataset, and the reported performance is the average of the scores over all images in the test set. Experiments on the adaptive boundary weighted loss are carried out first to establish the superiority of our weighting method. Then adequate ablation experiments on WPU-Net were conducted using that weighted loss.

4.1.1 Adaptive Boundary Weighting

To justify the effectiveness and robustness of our proposed adaptive weighted loss, we report the performance of cross-entropy loss with different weights applied to classical models, namely U-Net [40] and Attention U-Net [38]. Three weighted losses were compared: the simple class-balanced weighted loss (CBW) [47], the class-balanced weighted loss on a mask dilation of 5 pixels (CBWD5) [38], and the adaptive boundary weighted loss (ABW).

Algorithm                 VI      mAP     ARI
U-net CBW                 0.3397  0.5493  0.6692
U-net CBWD5               0.3028  0.5533  0.6803
U-net ABW                 0.3085  0.5604  0.6836
Attention U-net CBW       0.3114  0.5721  0.6810
Attention U-net CBWD5     0.3111  0.5590  0.6844
Attention U-net ABW       0.2944  0.5806  0.6900

Table 1: Different weights applied to classical models. CBW means the class-balanced weight, CBWD5 means the class-balanced weight on a mask dilation of 5 pixels, and ABW means the adaptive boundary weight. Bold values indicate the best performance for each metric.

As shown in Table 1, the adaptive boundary weight performs better than the other two in general. The main reason may be that the adaptive boundary weighted loss is better at tolerating minor differences in boundary location and protecting topology information, as shown in Figure 3. The VI scores of CBWD5 and ABW on U-Net are very close, probably because VI is less sensitive to tiny grains; in contrast, the mAP value, which is more sensitive to small grains, is relatively higher for ABW. We can also see that the adaptive boundary weight achieves higher performance on both the U-Net and Attention U-Net architectures, suggesting that its improvements can be used directly with existing state-of-the-art architectures.

4.1.2 Integrate Propagative Information in Network

We conducted two experiments to systematically examine the effect of WPU-Net and each of its parts. The first is an ablation experiment on the information style of the last slice and the fusion mode of the last slice's information in WPU-Net. We set up six sets of contrast experiments covering three information styles and two fusion modes. To eliminate the influence of other factors, each set of experiments was carried out in strictly the same environment, including the same pre-processing and post-processing methods, network parameter settings, and training epochs. Notably, regardless of the style of upper information, the pixel values are normalized to [-6, 1] before being fed into the network, consistent with the normalization of the original image. To obtain a single-pixel boundary result, the network predictions undergo a skeletonization operation.
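The skeletonization step can be reproduced with scikit-image; a sketch on a toy prediction, not the paper's pipeline.

```python
import numpy as np
from skimage.morphology import skeletonize

# Thin a (possibly several-pixel-wide) binary boundary prediction down to
# single-pixel width before evaluation.
pred = np.zeros((32, 32), dtype=bool)
pred[14:18, :] = True        # a 4-pixel-thick horizontal boundary band
skeleton = skeletonize(pred)  # single-pixel-wide subset of the prediction
```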

The evaluation results of the ablation experiment are listed in Table 2. Weight map-based methods generally score higher than mask- and mask-dilation-based methods, which supports the validity of the weight map we proposed. However, the information fusion modes show some curious behavior: the multi-level fusion strategy performs poorly with the mask style, well with the mask-dilation style, and similarly with the weight map style. This is a question worth pondering. We believe it may be because the types of information carried by the different styles of the last slice's information are inconsistent. The mask contains strong edge information, which is harmful when integrated into the high-level features of the network. The mask-dilation embodies the concept of "bounding region" in [32], which mainly characterizes structural transitions between adjacent slices; therefore, it works better when integrated into high-level features. The similar performance of the weight map-based methods may indicate that the weight map contains not only edge information but also rich structural information.

Last Style      Mode       VI      mAP     ARI
Mask            Layer 1    0.2175  0.6742  0.7344
Mask            Layer 1-4  0.2484  0.6576  0.7170
Mask-dilation   Layer 1    0.2673  0.6403  0.7059
Mask-dilation   Layer 1-4  0.2249  0.6519  0.7311
Weight Map      Layer 1    0.1715  0.7264  0.7288
Weight Map      Layer 1-4  0.1718  0.7149  0.7522
Table 2: Ablation experiments on WPU-Net. Last Style means the information style of the last slice sent to the network; the three styles are shown and illustrated in detail in Figure 5. Mode means the fusion strategy for the last slice's information in WPU-Net: Layer 1 means the last slice's information is only merged in the first layer, while Layer 1-4 means the multi-level strategy. Bold values indicate the best performance for each metric.

The second experiment is a model comparison between WPUnet and classic models. We picked up those models as they are the typical methods of dealing with 3D images mentioned in section 2.1. As we can see in Table 3, our proposed method WPUnet outperforms others in every evaluation metrics, especially on VI metrics, our method is about smaller than other methods. This proved the feasibility and effectiveness of propagation segmentation network in the boundary detection task of 3D images, especially in polycrystalline materials. Due to the special manufacturing process of microscopic images of polycrystalline materials, it has many special problems need special attention. The problem of continuous blurring of same grain boundaries and scratch noise in adjacent slices are the two main reason for the inapplicability of typical methods. To further analyze this problem, we displayed the merge error and split error of each method in VI evaluation metrics separately in Figure 8. The merge error(under-segmentation) means the error caused by unsuccessful detection of grain boundaries(FN), resulting in two grains in the image being judged to be the same grain. It usually occurs at blurred grain boundaries. While the split error(over-segmentation) means the wrong detection of grain boundaries(FP), resulting in one grain in the image is judged as two grains. It usually occurs at spurious scratches. From Figure 8, we found that in addition to 3D U-Net and our method, other models all show much worse performance on blurring grain boundaries generally. The merge error is abnormally high in UNet+BDCLSTM, we analyzed RNN maybe not good at dealing with the problem of a continuous blur. By contrast, our WPUnet performs better in both problems, especially on blurring boundaries. We visualize the detection results of some representative methods in Figure 9. 
It should be mentioned that all the algorithms in this experiment were re-implemented in PyTorch from the original papers and source code (where provided), except for Fast-FineCut.

Algorithm VI mAP ARI
WPU-net 0.1718 0.7149 0.7522
3D U-Net [10] 0.2696 0.6370 0.7475
Attention U-Net [38] 0.3114 0.5721 0.6810
RDN [49] 0.3264 0.5398 0.6756
U-Net [40] 0.3397 0.5493 0.6692
UNet+BDCLSTM [7] 0.4270 0.5683 0.6506
HED [47] 0.4235 0.4913 0.6419
Fast-FineCut [32] 0.4478 0.5413 0.6340
Table 3: Model comparison on the real dataset. Bold values indicate the best performance for each metric.
Figure 8: Comparison of models on merge error and split error.
Figure 9: Detection results with different methods. Four adjacent slices from top to bottom.

4.2 Grain Object Tracking Slice By Slice

We tested our object tracking algorithm on both the synthetic isotropic dataset and the real anisotropic dataset. The real data were produced experimentally and are therefore limited by the sample-preparation process: because of polishing, the resolution in the Z direction is always lower than in the X and Y directions. By contrast, the synthetic data are isotropic and generated by a simulation model. We use VI and ARI as evaluation metrics (and additionally RI in Table 6). We compare our algorithm with the maximum-overlap-area and minimum-centroid-distance algorithms proposed in [48]. For the image classification model, we compare vgg13_bn [41] and densenet161 [20]. The learning rate started at 1e-3 and was multiplied by 0.8 every two epochs, with a lower bound of 1e-6. The batch size was 20, and the models were optimized with RMSProp [18] with momentum 0.9. Each model was trained for 10 epochs.
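The stated schedule (start at 1e-3, multiply by 0.8 every two epochs, never below 1e-6) has a simple closed form. This helper is a sketch of those hyper-parameters only, not the authors' training script:

```python
def lr_at_epoch(epoch, base_lr=1e-3, gamma=0.8, step=2, floor=1e-6):
    """Learning rate in effect during `epoch` (0-indexed): the base rate
    is multiplied by `gamma` after every `step` epochs and is clipped
    from below at `floor`."""
    return max(base_lr * gamma ** (epoch // step), floor)
```

For instance, epochs 0-1 use 1e-3, epochs 2-3 use 8e-4, and epochs 4-5 use 6.4e-4.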

For both models, the test set is evaluated using the parameters with which the model achieved the highest accuracy on the validation set.

In addition, because of the information missing between slices, the tracking algorithm cannot reach 100% accuracy even on the ground-truth boundary result. We therefore select the best tracking model using the ground-truth boundaries and then apply it to the outputs of the different boundary detection methods. It is thus reasonable to use the tracking result to evaluate the performance of the different boundary detection methods.

Note that the number of slices is not a limitation for the CNN: the network's actual input is pairs of grain regions. There are about one million grain-region pairs in the training set for the real dataset and about half a million for the synthetic one.
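Since the network's input is a pair of grain regions rather than whole slices, one simple way to prepare such a pair is to stack the two binary grain masks as channels of a fixed-size patch for the same-grain classifier. The function name and patch layout below are our own illustration; the paper's exact preprocessing may differ:

```python
import numpy as np

def make_pair_input(labels_a, labels_b, id_a, id_b, size=64):
    """Build a 2-channel patch from the binary mask of grain `id_a` in
    slice A and grain `id_b` in slice B, as input to a same-grain
    classifier. Illustrative sketch: each mask is cropped to its
    bounding box and placed at the top-left of a `size`x`size` patch."""
    out = np.zeros((2, size, size), dtype=np.float32)
    for ch, (lab, gid) in enumerate([(labels_a, id_a), (labels_b, id_b)]):
        mask = (lab == gid)
        ys, xs = np.nonzero(mask)
        if ys.size == 0:
            continue  # grain id absent in this slice: leave channel empty
        crop = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
        h = min(crop.shape[0], size)
        w = min(crop.shape[1], size)
        out[ch, :h, :w] = crop[:h, :w]
    return out
```

A classifier such as vgg13_bn or densenet161 (with a 2-channel first layer) can then score each pair as the same grain or not.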

4.2.1 Synthetic Dataset

The synthetic dataset was generated by the Monte Carlo Potts model [46], which mimics the growth process of polycrystalline grains. We took the data at the 5000th Monte Carlo step of the simulation. Because the data are synthetic, only serial label images and the corresponding serial boundary images are available. The dataset contains 400 slices; we use 240 slices as the training set, 80 as the validation set, and 80 as the testing set. Table 4 reports the tracking performance of the different methods. The deep learning based tracking methods achieve promising performance compared with the traditional methods, and performance improves further with a more complex network. However, the deep learning based tracking algorithms take much longer than the traditional ones; we believe this can be mitigated by parallel programming.
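For readers unfamiliar with the Potts model, the grain-growth dynamics behind the synthetic data can be caricatured by a zero-temperature Monte Carlo update: a random site takes a random neighbour's label only when that does not increase the local boundary energy. This is a toy sketch, not the large-scale simulator of [46]:

```python
import numpy as np

def potts_step(grid, rng):
    """One zero-temperature Monte Carlo Potts update on a 2D label grid
    (periodic boundaries): pick a random site, propose a random
    4-neighbour's label, and accept it if the number of unlike
    neighbours (local boundary energy) does not increase."""
    h, w = grid.shape
    y, x = int(rng.integers(h)), int(rng.integers(w))
    nbrs = [grid[(y + dy) % h, (x + dx) % w]
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))]
    old = grid[y, x]
    new = nbrs[int(rng.integers(4))]
    if sum(n != new for n in nbrs) <= sum(n != old for n in nbrs):
        grid[y, x] = new
    return grid
```

Iterating such updates coarsens the label map over time, so small grains shrink and disappear, which is why a fixed Monte Carlo step (here, step 5000) corresponds to a particular grain-size distribution.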

Algorithm VI ARI Duration(s)
Min Centroid Dis 0.5634 0.9351 53.88
Max Overlap Area 0.5875 0.9350 57.02
Vgg13_bn [41] 0.5638 0.9332 413.59
Densenet161 [20] 0.5502 0.9441 809.45

Table 4: Performance of tracking on synthetic data set with different algorithms.

4.2.2 Real Mini Dataset

For the real dataset, we use 208 slices as the training set and 80 slices as the validation set. For efficiency, we use a sub-dataset of the pure-iron dataset, containing 80 slices, as the testing set. As shown in Table 5, the results follow the same trend as on the synthetic data. In addition, we use densenet161 to track the boundary results of the different detection methods in Table 6; WPU-Net achieves better results than the other methods.

Algorithm VI ARI Duration(s)
Min Centroid Dis 0.5656 0.8748 23.84
Max Overlap Area 0.6105 0.8603 18.48
Vgg13_bn [41] 0.5560 0.8827 179.69
Densenet161 [20] 0.5396 0.8868 285.16
Table 5: Performance of tracking on real mini data set with different algorithms.
Algorithm RI ARI VI
Ground Truth 0.9971 0.8868 0.5396
WPU-net 0.9954 0.8183 1.0040
Fast-Fine Cut [32] 0.9890 0.6375 1.7142
3D U-net [10] 0.9946 0.7870 1.1827
U-net [40] 0.9912 0.6427 1.8269
Unet-BDCLSTM [7] 0.9883 0.5678 2.1410

Table 6: Performance of tracking on real mini data set with different boundary detection algorithms.

In general, the algorithm achieves the highest performance on both the real anisotropic dataset and the synthetic isotropic dataset.

5 Conclusion

In this work, we proposed the Weighted Propagation U-Net (WPU-Net) architecture for boundary detection in polycrystalline materials. The network integrates information from adjacent slices to aid boundary detection in the target slice. We also presented an adaptive boundary weighting to optimize the model, which tolerates minor differences in boundary detection and preserves the topology of grains. Experiments show that our network achieves performance superior to previous state-of-the-art methods. In addition, we developed a new solution for reconstructing the 3D structure of the sample by using a CNN to perform grain object tracking between slices. In future work, we will focus on accelerating tracking and further optimizing boundary detection.

6 Acknowledgement

The authors acknowledge the financial support from the National Key Research and Development Program of China (No. 2016YFB0700500), the National Science Foundation of China (No. 61572075, No. 6170203, No. 61873299, No. 51574027), and the Key Research Plan of Hainan Province (No. ZDYF2018139). We also gratefully thank Dr. Chao Yao for many helpful comments.


  • [1] Nabila Abraham and Naimul Mefraz Khan. A novel focal tversky loss function with improved attention u-net for lesion segmentation.
  • [2] Mumtaz Ali, Hoang Son Le, Mohsin Khan, and Nguyen Thanh Tung. Segmentation of dental x-ray images in medical imaging using neutrosophic orthogonal matrices. Expert Systems with Applications, 2017.
  • [3] Md. Zahangir Alom, Mahmudul Hasan, Chris Yakopcic, Tarek M. Taha, and Vijayan Asari. Recurrent residual convolutional neural network based on u-net (r2u-net) for medical image segmentation. 2018.
  • [4] Manuel Berning, Kevin M Boergens, and Moritz Helmstaedter. Segem: Efficient image analysis for high-resolution connectomics. Neuron, 87(6):1193–1206, 2015.
  • [5] Neil Birkbeck, Dana Cobzas, Martin Jagersand, and Albert Murtha. An interactive graph cut method for brain tumor segmentation. In Applications of Computer Vision, pages 1–7, 2009.
  • [6] Patrick R Cantwell, Ming Tang, Shen J Dillon, Jian Luo, Gregory S Rohrer, and Martin P Harmer. Grain boundary complexions. Acta Materialia, 62(1):1–48, 2014.
  • [7] Jianxu Chen, Lin Yang, Yizhe Zhang, Mark S Alber, and Danny Z Chen. Combining fully convolutional and recurrent neural networks for 3d biomedical image segmentation. neural information processing systems, pages 3036–3044, 2016.
  • [8] Liangchieh Chen, George Papandreou, Iasonas Kokkinos, Kevin P Murphy, and Alan L Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(4):834–848, 2018.
  • [9] C T Chou, P B Hirsch, M Mclean, and E D Hondros. Anti-phase domain boundary tubes in ni3al. Nature, 300(5893):621–623, 1982.
  • [10] Ozgun Cicek, Ahmed Abdulkadir, Soeren S Lienkamp, Thomas Brox, and Olaf Ronneberger. 3d u-net: Learning dense volumetric segmentation from sparse annotation. medical image computing and computer assisted intervention, pages 424–432, 2016.
  • [11] Mary Comer, Charles A. Bouman, Marc De Graef, and Jeff P. Simmons. Bayesian methods for image segmentation. JOM, 63(7):55–57, 2011.
  • [12] M Ali Akber Dewan, Ahmad M Omair, and M N S Swamy. Tracking biological cells in time-lapse microscopy: an adaptive technique combining motion and topological features. IEEE transactions on bio-medical engineering, 58(6):1637–47, 2011.
  • [13] P J E Forsyth, R King, G J Metcalfe, and Bruce Chalmers. Grain boundaries in metals. Nature, 158(4024):875–876, 1946.
  • [14] Jan Funke, Fabian Tschopp, William Grisaitis, Arlo Sheridan, Chandan Singh, Stephan Saalfeld, and Srinivas C Turaga. Large scale image segmentation with structured loss based deep learning for connectome reconstruction. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 1–1, 2018.
  • [15] Hai Gao, Ping Xue, and Weisi Lin. A new marker-based watershed algorithm. 2:81–84, 2004.
  • [16] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. pages 249–256, 2010.
  • [17] Booz Allen Hamilton. 2018 data science bowl, 2018.
  • [18] G. Hinton. Divide the gradient by a running average of its recent magnitude. Technical report, 2012.
  • [19] Junhao Hu, Yingchao Shi, X Sauvage, Gang Sha, and K Lu. Grain boundary stability governs hardening and softening in extremely fine nanograined metals. Science, 355(6331):1292–1296, 2017.
  • [20] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. computer vision and pattern recognition, pages 2261–2269, 2017.
  • [21] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. international conference on machine learning, pages 448–456, 2015.
  • [22] Robert Jagitsch. A method of using marked phase boundaries. Nature, 159(4031):166–166, 1947.
  • [23] Viren Jain, Benjamin Bollmann, Mark A Richardson, Daniel R Berger, Moritz Helmstaedter, Kevin L Briggman, Winfried Denk, Jared B Bowden, John M Mendenhall, Wickliffe C Abraham, et al. Boundary learning by optimization with topological constraints. pages 2488–2495, 2010.
  • [24] Michal Januszewski, Jorgen Kornfeld, Peter H Li, Art Pope, Tim Blakely, Larry Lindsey, Jeremy Maitinshepard, Mike Tyka, Winfried Denk, and Viren Jain. High-precision automated reconstruction of neurons with flood-filling networks. Nature Methods, 15(8):605–610, 2018.
  • [25] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. neural information processing systems, 141(5):1097–1105, 2012.
  • [26] Kisuk Lee, Jonathan Zung, Peter H Li, Viren Jain, and H Sebastian Seung. Superhuman accuracy on the snemi3d connectomics challenge. arXiv: Computer Vision and Pattern Recognition, 2017.
  • [27] Qingwu Li, Xue Ni, and Guogao Liu. Ceramic image processing using the second curvelet transform and watershed algorithm. In IEEE International Conference on Robotics and Biomimetics, pages 2037 – 2042, 2007.
  • [28] Tsungyi Lin, Michael Maire, Serge J Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollar, and C Lawrence Zitnick. Microsoft coco: Common objects in context. pages 740–755, 2014.
  • [29] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. computer vision and pattern recognition, pages 3431–3440, 2015.
  • [30] Carlos Lopezmolina, B De Baets, and Humberto Bustince. Quantitative error measures for edge detection. Pattern Recognition, 46(4):1125–1139, 2013.
  • [31] Boyuan Ma, Xiaojuan Ban, Haiyou Huang, Yulian Chen, Wanbo Liu, and Yonghong Zhi. Deep learning-based image segmentation for al-la alloy microscopic images. Symmetry, 10(4):107, 2018.
  • [32] Boyuan Ma, Xiaojuan Ban, Ya Su, Chuni Liu, Hao Wang, Weihua Xue, Yonghong Zhi, and Di Wu. Fast-finecut: Grain boundary detection in microscopic images considering 3d information. Micron, 116:5–14, 2019.
  • [33] William Mcilhagga. The canny edge detector revisited. International Journal of Computer Vision, 91(3):251–261, 2011.
  • [34] Marina Meilă. Comparing clusterings—an information based distance. Journal of Multivariate Analysis, 98(5):873–895, 2007.
  • [35] F Meyer. Color image segmentation. pages 303–306, 1992.
  • [36] Fausto Milletari, Nassir Navab, and Seyedahmad Ahmadi. V-net: Fully convolutional neural networks for volumetric medical image segmentation. international conference on 3d vision, pages 565–571, 2016.
  • [37] Juan Nuneziglesias, Ryan Kennedy, Toufiq Parag, Jianbo Shi, and Dmitri B Chklovskii. Machine learning of hierarchical clustering to segment 2d and 3d images. PLOS ONE, 8(8), 2013.
  • [38] Ozan Oktay, Jo Schlemper, Loic Le Folgoc, Matthew Lee, Mattias Heinrich, Kazunari Misawa, Kensaku Mori, Steven Mcdonagh, Nils Y Hammerla, and Bernhard Kainz. Attention u-net: Learning where to look for the pancreas. 2018.
  • [39] PyTorch, 2019.
  • [40] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. medical image computing and computer assisted intervention, pages 234–241, 2015.
  • [41] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. international conference on learning representations, 2015.
  • [42] Meng Tang, Lena Gorelick, Olga Veksler, and Yuri Boykov. Grabcut in one cut. International conference on computer vision, pages 1769–1776, 2013.
  • [43] M. H. J. Vala and A. Baxi. A review on otsu image segmentation algorithm. International Journal of Advanced Research in Computer Engineering and Technology, 2(2), 2013.
  • [44] Nguyen Xuan Vinh, Julien Epps, and James Bailey. Information theoretic measures for clusterings comparison: Variants, properties, normalization and correction for chance. Journal of Machine Learning Research, 11:2837–2854, 2010.
  • [45] Jarrell W Waggoner, Youjie Zhou, Jeff P Simmons, Marc De Graef, and Song Wang. 3d materials image segmentation by 2d propagation: A graph-cut approach considering homomorphism. IEEE Transactions on Image Processing, 22(12):5282–5293, 2013.
  • [46] Hao Wang, Guoquan Liu, and Xiangge Qin. Grain size distribution and topology in 3d grain growth simulation with large-scale monte carlo method. International Journal of Minerals Metallurgy and Materials, 16(1):37–42, 2009.
  • [47] Saining Xie and Zhuowen Tu. Holistically-nested edge detection. international conference on computer vision, pages 1395–1403, 2015.
  • [48] Weihua Xue. Three-dimensional Modeling and Quantitative Characterization of Grain Structure. PhD thesis, University of Science and Technology Beijing, 2016.
  • [49] Tao Zeng. Residual deconvolutional networks for brain electron microscopy image segmentation. IEEE Transactions on Medical Imaging, 09 2016.
  • [50] Tao Zeng, Bian Wu, and Shuiwang Ji. Deepem3d: approaching human-level performance on 3d anisotropic em image segmentation. Bioinformatics, 33(16):2555–2562, 2017.