Microstructure reconstruction via artificial neural networks: A combination of causal and non-causal approach

by Kryštof Latka, et al.

We investigate the applicability of artificial neural networks (ANNs) in reconstructing a sample image of a sponge-like microstructure. We propose to reconstruct the image by predicting the phase of the current pixel based on its causal neighbourhood, and subsequently, use a non-causal ANN model to smooth out the reconstructed image as a form of post-processing. We also consider the impacts of different configurations of the ANN model (e.g. number of densely connected layers, number of neurons in each layer, the size of both the causal and non-causal neighbourhood) on the models' predictive abilities quantified by the discrepancy between the spatial statistics of the reference and the reconstructed sample.




1 Introduction

Multi-scale modelling is a powerful predictive tool that bypasses the need for complex constitutive laws by performing auxiliary calculations at lower scales, using a characteristic sample of a material microstructure [1]. Such a sample can be easily extracted when analysing a material with a regular, periodic arrangement of the material’s phases; in the case of a material with a stochastic microstructure, the representative sample is typically constructed artificially such that it matches selected spatial statistics of the material microstructure [2].

While the representative samples were originally generated via optimization approaches, e.g. [3], reconstruction methods relying on machine learning have recently started to emerge, using various frameworks including Markov random fields [4], deep adversarial neural networks [5, 6], or supervised learning using classification trees [7]. Several papers refer to causal and non-causal neighbourhoods, both of which prove to be effective ways of extracting input data for the chosen framework. However, the causal approach generally seems to be the preferred one, with one of these papers even claiming that its model cannot generate valid results if based on a non-causal neighbourhood [4].

The objective of our work is to reconstruct an image of a microstructure from almost random noise, with microscopic properties as similar as possible to those of the original image. While many of the aforementioned approaches are complex to implement, we present a simple method using the TensorFlow framework with the Keras sequential API [8]. We closely follow the methodology of Bostanabad and coworkers [7]; however, instead of classification trees, we use two distinct Artificial Neural Networks (ANNs): the first network reconstructs the general pattern of the microstructure and the second denoises and smoothes out the reconstructed pattern. For simplicity, we study only two-phase materials, i.e. we test the framework with black-and-white images. Our implementation and data are publicly available in a GitLab repository [9].

2 Methodology

The proposed reconstruction method comprises two distinct steps:

  1. Reconstruction of the general shape of the microstructure from random noise, with margins of the reference image used as a seed.

  2. Smoothing of the reconstructed image, which improves the local features of the microstructural geometry.

2.1 Reconstructing microstructural geometry

Figure 1: Illustration of the causal neighbourhood around the central pixel, highlighted in dark grey. The red pixels represent the input data for the ANN trained in Step 1; however, in order to extract an input of a rectangular shape, the black and yellow pixels are also included in the inputs, but their values are discarded and replaced by random binary values.
(a) (b) (c)
Figure 2: Microstructure reconstruction process using the causal model. After training the neural network model on input data from the initial microstructure (a), the trained model is used to sequentially predict pixel values in a raster scan order using previously predicted pixel values from the causal neighbourhood.

In Step 1, the overall material distribution should be outlined in the reconstructed sample, rendering the key features of the microstructural geometry. We opted for a sequential approach, in which new values of individual pixels in the discrete, pixel-like representation of the newly generated microstructural geometry are predicted from the values previously determined at antecedent positions (thus the causal approach), because such an approach has already proven its merits in microstructural reconstruction, cf. [7].

To this end, we define a rectangular causal neighbourhood of pixels parameterized by the given neighbourhood radius; see Fig. 1 for an illustration. Note that the positions highlighted in red constitute the real input data; the dark grey and yellow pixels are only a padding (without value) such that the whole input features a regular 2D shape and, consequently, pooling layers can be easily applied. The actual value of the dark grey pixel serves as a label during the extraction of training data from a reference image.
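To make the extraction concrete, the following NumPy sketch shows how such a rectangular causal window could be assembled; the function name, the (r+1) × (2r+1) window shape, and the random padding of the unknown region are our illustrative assumptions based on the description above, not the reference implementation [9]:

```python
import numpy as np

def extract_causal_window(image, i, j, r, rng):
    """Extract a rectangular (r+1) x (2r+1) causal window around pixel (i, j).

    Rows above the central pixel are fully known; in the central row, the
    pixel itself and everything to its right lie in the not-yet-predicted
    region, so those slots are filled with random binary values, mirroring
    the padding described in the text.
    """
    window = image[i - r:i + 1, j - r:j + r + 1].copy()
    window[-1, r:] = rng.integers(0, 2, size=r + 1)  # pad the unknown part
    return window
```

During training, the true value of the central pixel is kept aside as the label before the window is randomized.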

The neural network for predicting the pixel values in Step 1 is designed such that the rectangular input is first subsampled using a pooling layer, flattened, and passed through several densely connected layers. Based on our numerical experiments, the maximum pooling layer consistently delivered better results than the average pooling layer. The setup of the training procedure and the effect of the remaining network parameters, namely the number of densely connected layers, the number of neurons in each layer, the size of the pooling layer, and the neighbourhood radius, are discussed in Section 4.
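Such a network can be sketched in a few lines with the Keras sequential API; the hyperparameter values below (neighbourhood radius `r`, pooling size `p`, number of dense layers and neurons) are purely illustrative assumptions, not the values used in the study:

```python
from tensorflow import keras

r, p, n_layers, n_neurons = 15, 3, 3, 16  # illustrative values only

# Sketch of the Step-1 causal network: pooling -> flatten -> dense layers,
# with a sigmoid output interpreted as the probability of the white phase.
model = keras.Sequential(
    [keras.layers.Input(shape=(r + 1, 2 * r + 1, 1)),
     keras.layers.MaxPooling2D(pool_size=(p, p)),  # max pooling worked best
     keras.layers.Flatten()]
    + [keras.layers.Dense(n_neurons, activation="relu")
       for _ in range(n_layers)]
    + [keras.layers.Dense(1, activation="sigmoid")]
)
model.compile(optimizer="adam", loss="binary_crossentropy")
```

The predicted probability is thresholded to a binary phase value when generating new pixels.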

Once the network is trained, a new microstructural realization is generated by iterating over the pixels of a to-be-reconstructed image in raster scan order. Consequently, an initial microstructural geometry must be provided in a margin of a given width at the left, top, and bottom edges of the image; the initial values in the remaining part of the image are irrelevant and we generate them as random binary noise; see Fig. 2b.
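The raster-scan generation loop can be sketched as follows; this is a simplified NumPy version in which `predict_pixel` stands in for the trained causal model, and a right-edge margin is additionally skipped to keep the windows in bounds:

```python
import numpy as np

def reconstruct(seed, r, predict_pixel, rng):
    """Raster-scan reconstruction sketch.

    `seed` holds valid data in a margin of width r (left, top, bottom);
    the interior starts as random binary noise. `predict_pixel` maps a
    (r+1) x (2r+1) causal window to a binary phase value.
    """
    img = seed.copy()
    h, w = img.shape
    for i in range(r, h - r):
        for j in range(r, w - r):
            window = img[i - r:i + 1, j - r:j + r + 1].copy()
            window[-1, r:] = rng.integers(0, 2, size=r + 1)
            img[i, j] = predict_pixel(window)  # previously predicted pixels feed later windows
    return img
```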

2.2 Smoothing procedure

Figure 3: Illustration of the non-causal neighbourhood for a given neighbourhood radius. The black pixel represents the central pixel and the red pixels represent the non-causal neighbourhood. When extracting the neighbourhood, the central black pixel is replaced by a random binary value.

Outputs of the model trained in Step 1 usually contain the main features of the learned microstructural geometry; however, local details are typically polluted by random noise; compare Figs. 2a and 2c. In Step 2, we therefore train an additional neural network to smooth out the image and correct irregularities in the image generated in Step 1.

This time, the model works with a complete, i.e. non-causal, square neighbourhood around the central pixel (illustrated in Figure 3), usually of a smaller neighbourhood radius than in the causal model of Step 1. Again, the two-dimensional structure of the input is needed to facilitate a subsampling/pooling layer, which is then followed by flattening and passing through two densely connected layers. The impact of the actual choice of subsampling layer (i.e. average vs maximum pooling) is discussed in Section 4.

To increase the robustness of the trained model and prevent it from learning to simply copy the value of the central pixel, we introduce two errors: (i) the value of the dark grey pixel, which is used as a label in the training, is always randomized in the inputs, and (ii) we also randomize the values of a fraction of the red pixels from Fig. 3.
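The two randomizations could be implemented along these lines (a NumPy sketch; the function name and the way the noisy pixels are sampled are our assumptions):

```python
import numpy as np

def smoothing_sample(image, i, j, r, noise_fraction, rng):
    """Extract a (2r+1) x (2r+1) non-causal window around (i, j) for
    training the smoothing network.

    The central pixel (whose true value becomes the label) is always
    randomized, and a given fraction of the pixels is randomized as well,
    so the model cannot simply copy its input.
    """
    window = image[i - r:i + r + 1, j - r:j + r + 1].copy()
    label = window[r, r]
    window[r, r] = rng.integers(0, 2)  # (i) randomize the label pixel
    flat = window.ravel()              # view into the window
    n_noisy = int(noise_fraction * (flat.size - 1))
    noisy = rng.choice(flat.size, size=n_noisy, replace=False)
    flat[noisy] = rng.integers(0, 2, size=n_noisy)  # (ii) inject noise
    return window, label
```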

(a) (b)
Figure 4: Effect of the smoothing network based on the non-causal model: (a) reconstructed image serving as an input, (b) microstructure corrected by the smoothing model. Both images are of size 200 x 200 pixels.

3 Error quantification

In order to assess the performance of the proposed models beyond a visual inspection, we compare generated microstructural samples to the reference images in terms of spatial statistics. The most straightforward spatial statistic is the volume fraction of a chosen phase (the white phase, i.e. pixels with value 1, in our case). We define the error as the absolute value of the difference between the volume fraction in the reference sample and the volume fraction in the generated microstructure,


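In code, this first error measure amounts to (a NumPy sketch; the function name is ours):

```python
import numpy as np

def volume_fraction_error(ref, gen):
    """Absolute difference between the white-phase (value 1) volume
    fractions of the reference and the generated binary samples."""
    return abs(ref.mean() - gen.mean())
```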
The second spatial statistic considered in our work is the two-point probability function, which states the probability of finding two points, separated by a given vector, in a given phase. Since all our data are represented as a regular grid of values, the discrete version of the two-point probability function can be easily computed using the Fast Fourier Transform [2]. Consequently, we quantify the discrepancy between the reference and a generated microstructure by means of their discrete two-point probability functions as


where ‖·‖_F denotes the Frobenius matrix norm [10].
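A sketch of this computation, assuming periodic boundary conditions and binary images with the white phase encoded as 1 (the absence of any further normalization of the error is our assumption):

```python
import numpy as np

def two_point_probability(image):
    """Discrete two-point probability function of the white phase,
    computed as a periodic autocorrelation via the FFT (cf. [2])."""
    f = np.fft.fft2(image)
    return np.fft.ifft2(f * np.conj(f)).real / image.size

def two_point_error(ref, gen):
    """Frobenius norm of the difference between the discrete two-point
    probability functions of the two samples."""
    return np.linalg.norm(
        two_point_probability(ref) - two_point_probability(gen))
```

Note that at zero separation the two-point probability reduces to the volume fraction of the white phase, which provides a quick sanity check.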


Finally, to quantify the effect of our smoothing non-causal model, we add a third error metric that captures the level of local heterogeneity. Assuming a two-phase medium, a microstructure can be represented with a Boolean matrix. For each pixel, we compute a local quantity as the average of the absolute differences between the value of the central pixel and its eight neighbouring pixels,


The error is then computed again as an average over the image, excluding the one-pixel-wide margin, i.e.


The reasoning behind this error measure is that if we consider the phases of pixels in a very small neighbourhood around the central pixel, the number of pixels whose phase is different from that of the central pixel will be lower if the edges are properly smoothed out. Even though this might not necessarily be true for the pixels which form the edge of the reconstructed pattern (where the black and white phases must switch), it will apply on a larger scale (hence, we aggregate the local quantity over all pixels in the image). Therefore, in theory, the lower this value is for an image, the more smoothed out the image should be.
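This local heterogeneity measure can be sketched as follows (a NumPy version of the description above; the function name is ours):

```python
import numpy as np

def local_heterogeneity_error(image):
    """For every interior pixel, average the absolute differences between
    the pixel and its eight neighbours, then average the result over the
    image excluding the one-pixel-wide margin."""
    x = image.astype(float)
    core = x[1:-1, 1:-1]
    acc = np.zeros_like(core)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue  # skip the central pixel itself
            shifted = x[1 + di:x.shape[0] - 1 + di,
                        1 + dj:x.shape[1] - 1 + dj]
            acc += np.abs(core - shifted)
    return (acc / 8.0).mean()
```

For instance, a uniform image yields 0, while a checkerboard (four of the eight neighbours differing from each pixel) yields 0.5.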

4 Results

We report the effect of the parameters on the quality of the microstructural reconstruction, quantified with the error measures introduced in the previous section.

First, we focus on parameters of the reconstruction model.

Number of neurons
9 16 25
0.046 0.031 0.036
0.031 0.049 0.029
0.002 0.010 0.021
Table 1: Values of the volume fraction error, depending on the number of layers and the number of neurons in each layer.
Number of neurons
9 16 25
1.36 9.36 1.05
9.32 1.46 7.57
2.96 3.71 6.70
Table 2: Values of the two-point probability error, depending on the number of layers and the number of neurons in each layer.

Tables 1 and 2 illustrate the impact of altering the number of densely connected layers and the number of neurons in each layer, while keeping the neighbourhood radius and the pooling size constant, on the volume fraction error and the two-point probability error, respectively. In particular, we fixed the radius and the pooling size at the values that produced the most visually appropriate reconstructions in early tests of the model.

The next two tables, Tabs. 3 and 4, summarize the sensitivity study investigating the influence of the neighbourhood radius and the pooling size on the reconstruction. This time, all the tests were performed with a fixed number of densely connected layers and a fixed number of neurons in each layer.

Neighbourhood radius
10 12 15
0.491 0.298 0.006
0.058 0.046 0.031
Table 3: Values of the volume fraction error, depending on the neighbourhood radius and the pooling size.
Neighbourhood radius
10 12 15
2.50 1.24 4.05
1.81 1.36 9.36
Table 4: Values of the two-point probability error, depending on the neighbourhood radius and the pooling size.

The next three tables, i.e. Tables 5, 6, and 7, summarize the parametric study for the smoothing model with the non-causal neighbourhood. We chose a reconstructed image obtained by the first model as an input to the smoothing model and compared the resulting smoothed-out image to the original microstructure (before reconstruction), recall Fig. 2a, in terms of the error measures introduced in Section 3. We carried out two sets of tests: one for the average and one for the maximum pooling 2D layer. In each set, we altered the radius of the non-causal neighbourhood as well as the magnitude of the artificially introduced noise. Each set rendered three tables, as we also inspected the level of local heterogeneity, in addition to the volume fraction and two-point probability errors already reported for the generative model.

Neighbourhood radius
5 7 10


0.041 0.002 0.040
0.004 0.036 0.033
0.051 0.027 0.033


0.058 0.000 0.033
0.037 0.003 0.030
0.044 0.029 0.036
Table 5: Values of the volume fraction error, depending on the magnitude of the artificially introduced noise, the neighbourhood radius, and the type of pooling layer used (max or average).
Neighbourhood radius
5 7 10


1.22 3.15 1.20
3.24 1.06 9.88
1.23 7.21 9.88


1.76 3.08 1.01
1.12 3.25 8.98
1.31 8.90 1.08
Table 6: Values of the two-point probability error, depending on the magnitude of the artificially introduced noise, the neighbourhood radius, and the type of pooling layer used (max or average).
Neighbourhood radius
5 7 10


0.130 0.133 0.123
0.135 0.129 0.124
0.123 0.130 0.127


0.113 0.120 0.112
0.114 0.119 0.112
0.114 0.115 0.113
Table 7: Values of the local heterogeneity error, depending on the magnitude of the artificially introduced noise, the neighbourhood radius, and the type of pooling layer used (max or average). For comparison, the value of this error for the reference image is 0.097.

5 Discussion

First, we add an observation regarding the three error measures defined in Section 3 and our visual perception of the reconstructed microstructures. The first error measure served as a coarse check that the volume fraction in the reconstructed image is similar to the original; however, it could not assess how similar the reconstructed pattern is to the original. For this purpose, we adopted the error based on the two-point probability function, as we expected it to be better suited to comparing the reconstructed pattern to the original. Nevertheless, we noticed a significant discrepancy between the values of this error for each model and the visual similarity of the reconstructed pattern to the original one. For example, the generative model that produced the image in Figure 2c could probably be considered the most accurate in terms of the visual pattern, but only seventh best according to the two-point probability error. We conclude that other spatial statistics, such as the two-point cluster function or the lineal path function, should be added to the suite of error measures to capture both the global distribution of a microstructure and its local characteristics. On the other hand, the assessment of the smoothing model by the local heterogeneity values was generally in accordance with the visual quality of the reconstructed images.

Our results show that using more layers with fewer neurons in each was preferable to using fewer but more populated layers. Perhaps as expected, the larger the neighbourhood radius considered during training of the generative model, the better the results. Surprisingly, the smaller pooling size performed better for the largest neighbourhood radius, while larger pooling was preferable in all other cases.

In the case of the smoothing model, increasing the neighbourhood radius beyond a certain threshold did not improve its performance. This can be attributed to the different purposes of the two models: while the generative model needs information from distant points to properly distribute the material within the sample, the smoothing model is local by nature. On the other hand, the larger pooling layer consistently outperformed the smaller one in all tests. Most importantly, we noticed that the values of the local heterogeneity error were significantly lower for the smoothing models using an average pooling layer instead of a maximum pooling layer. This is probably because the average pooling layer ignores sharp features in the image (e.g. individual pixels whose phase was not correctly identified in Step 1 of the reconstruction), which allowed the edges of the reconstructed pattern to be smoothed out.

However, it is important to emphasize that these observations are specific to the considered microstructure.

6 Conclusions

Despite the simplicity of the proposed ANN-based model, accompanied by the ease of implementation facilitated by the TensorFlow framework and the Keras Sequential API, the model generates meaningful microstructural geometries. In particular, the combination of a causal model used for reconstruction and a non-causal smoothing model yielded satisfactory results, cf. Figure 4, considering that the models had access to only limited local information. We believe that even better results can be obtained by, e.g., incorporating the considered errors directly in the loss function during the training of the individual networks. This remains for our future work.

The need for a margin of initial values during reconstruction might be seen as a limitation restricting the model to generating microstructural samples only as large as the reference one, from which the margin can be easily copied. However, a possible solution is to take the reference sample, divide it into pieces, and reorder the pieces so that they form a margin of the desired size. Alternatively, starting from different parts of the reference microstructure, a set of smaller samples can be generated; these samples can then be assembled together while blending the microstructure in their overlaps using, e.g., image quilting [6].

M. Doškář and J. Zeman gratefully acknowledge support by the Czech Science Foundation, project No. 19-26143X.


  • [1] M. G. D. Geers, V. G. Kouznetsova, K. Matouš, J. Yvonnet. Homogenization Methods and Multiscale Modeling: Nonlinear Problems. In E. Stein, R. de Borst, T. J. R. Hughes (eds.), Encyclopedia of Computational Mechanics Second Edition, pp. 1–34. John Wiley & Sons, Ltd, Chichester, UK, 2017. doi:10.1002/9781119176817.ecm107.
  • [2] S. Torquato. Random heterogeneous materials: microstructure and macroscopic properties. No. 16 in Interdisciplinary applied mathematics. Springer, New York, 2002.
  • [3] J. Zeman, M. Šejnoha. From random microstructures to representative volume elements. Modelling and Simulation in Materials Science and Engineering 15(4):S325–S335, 2007. doi:10.1088/0965-0393/15/4/S01.
  • [4] L.-Y. Wei, M. Levoy. Fast texture synthesis using tree-structured vector quantization. In Proceedings of the 27th annual conference on Computer graphics and interactive techniques - SIGGRAPH '00. ACM Press, 2000. doi:10.1145/344779.345009.
  • [5] Z. Yang, X. Li, L. C. Brinson, et al. Microstructural materials design via deep adversarial learning methodology. Journal of Mechanical Design 140(11), 2018. doi:10.1115/1.4041371.
  • [6] D. Fokina, E. Muravleva, G. Ovchinnikov, I. Oseledets. Microstructure synthesis using style-based generative adversarial networks. Physical Review E 101(4), 2020. doi:10.1103/physreve.101.043308.
  • [7] R. Bostanabad, A. T. Bui, W. Xie, et al. Stochastic microstructure characterization and reconstruction via supervised learning. Acta Materialia 103:89–102, 2016. doi:10.1016/j.actamat.2015.09.044.
  • [8] M. Abadi, A. Agarwal, P. Barham, et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
  • [9] K. Latka, M. Folwarczny, M. Doškář, J. Zeman. AI-based microstructure reconstruction. https://gitlab.com/MartinDoskar/ai-based-reconstruction, 2021.
  • [10] G. H. Golub, C. F. Van Loan. Matrix computations. The Johns Hopkins University Press, Baltimore, 4th edn., 2013.