Accurate and Fast Reconstruction of Porous Media from Extremely Limited Information Using a Conditional Generative Adversarial Network

04/04/2019 ∙ by Junxi Feng, et al. ∙ Sichuan University ∙ NetEase, Inc.

Porous media are ubiquitous in both nature and engineering applications; thus, their modelling and understanding is of vital importance. In contrast to the direct acquisition of three-dimensional (3D) images of such media, obtaining sub-regions such as two-dimensional (2D) images or several small areas can be much more feasible. Therefore, reconstructing the whole image from limited information is a primary technique in such cases. Notably, in practice the given data generally cannot be chosen by the user and may be incomplete or only partially informed, making existing reconstruction methods inaccurate or even ineffective. To overcome this shortcoming, in this study we propose a deep learning-based framework for reconstructing a full image from its much smaller sub-area(s). In particular, a conditional generative adversarial network (CGAN) is utilized to learn the mapping between the input (partial image) and the output (full image). To preserve reconstruction accuracy, two simple but effective objective functions are proposed and coupled with two other functions to jointly constrain the training procedure. Owing to the inherently ill-posed nature of this problem, Gaussian noise is introduced to produce reconstruction diversity, thereby providing multiple candidate outputs. Extensively tested on a variety of porous materials and assessed by both visual inspection and quantitative comparison, the method is shown to be accurate, stable, and fast (∼0.08 s for a 128 × 128 image reconstruction). We highlight that the proposed approach can be readily extended, for example by incorporating user-defined conditional data and an arbitrary number of objective functions into the reconstruction, or by coupling it with other reconstruction methods.




I Introduction

Porous media, such as sandstone, soil, alloys, and composites, abound in nature and synthetic settings and play a critical role in a variety of engineering applications. Hence, their understanding and modelling is of significant importance Torquato (2013); Sahimi (2011). Despite the advances of three-dimensional (3D) imaging techniques such as computed tomography (CT) Li et al. (2018a); Wang et al. (2018a); Bostanabad et al. (2018) and scanning electron microscopy (SEM) Tahmasebi et al. (2015), in many cases only limited data are available for analysis. These could be incomplete 3D data, several two-dimensional (2D) slices, or even a few statistics Yeong and Torquato (1998a, b). Therefore, reconstructing full 3D images from such limited data has been a major technique in these situations.

During the past decades, various reconstruction methods have been developed Yeong and Torquato (1998a, b); Rozman and Utz (2001); Pant et al. (2014); Jiao et al. (2008, 2009); Chen et al. (2015); Gerke and Karsanina (2015); Gerke et al. (2014); Karsanina and Gerke (2018); TANG et al. (2009); Chen et al. (2014); Gao et al. (2016); Feng et al. (2018a); Ju et al. (2017, 2014, 2018); Okabe and Blunt (2005); Gao et al. (2015); Ding et al. (2018); Mariethoz et al. (2010); Tahmasebi and Sahimi (2012, 2013, 2016a, 2016b); Tahmasebi (2017); Bostanabad et al. (2016a, b); Feng et al. (2018b); Mosser et al. (2017, 2018a, 2018b); Laloy et al. (2017, 2018); Wang et al. (2018b); Li et al. (2018b, 2019). Popular algorithms include optimization-based methods Yeong and Torquato (1998a, b); Rozman and Utz (2001); Pant et al. (2014); Jiao et al. (2008, 2009); Chen et al. (2015); Gerke and Karsanina (2015); Gerke et al. (2014); Karsanina and Gerke (2018); TANG et al. (2009); Chen et al. (2014); Gao et al. (2016); Feng et al. (2018a); Ju et al. (2017, 2014, 2018), multi-point statistics (MPS) Okabe and Blunt (2005); Gao et al. (2015); Ding et al. (2018), direct sampling (DS) Mariethoz et al. (2010), CCSIM Tahmasebi and Sahimi (2012, 2013, 2016a, 2016b); Tahmasebi (2017), machine learning and deep learning-based methods Bostanabad et al. (2016a, b); Feng et al. (2018b); Mosser et al. (2017, 2018a, 2018b); Laloy et al. (2017, 2018); Wang et al. (2018b), and the recently proposed super-dimension method Li et al. (2018b, 2019). It is well known that the general prerequisite of this reconstruction methodology is that the 2D training image (TI) meets the requirements of stationarity and ergodicity; in other words, the 2D image must be able to statistically represent the main characteristics of the entire 3D structure. Despite considerable research on TI selection Mirowski et al. (2009); Boisvert et al. (2007); Gao et al. (2017), in practice the 2D images or data are generally not chosen by the user, and they may be incomplete or only partially applicable Shen et al. (2015); Mariethoz and Renard (2010); Sokat et al. (2018), and thus cannot be directly used as representatives of 3D images. For instance, loss of data or information in 2D images is a universal issue in the Earth sciences and is the primary cause of (hydro)geological uncertainty Mariethoz and Renard (2010). Clearly, directly reconstructing a 3D image from its imperfect 2D images may be infeasible. Instead, first recovering the full 2D image from the limited information and then applying it to 3D reconstruction could be an alternative solution.

Even though 2D image reconstruction can serve as preparation for a subsequent 3D reconstruction, the reconstruction of 2D images and the analyses based on them can be a distinct topic in itself. It is noteworthy that, in practice, 2D images obtained by optical microscopy or SEM still play a key role in numerous studies. Note that in some circumstances the acquisition of 2D images can also be difficult and highly expensive Abdollahifard et al. (2016); Ju et al. (2018); Semnani and Borja (2017). For example, the study of nano-scale pores in tight porous materials such as shale generally requires a huge number of high-resolution 2D images for comprehensive analysis, because a single imaging field of view (FOV) of such a sample is usually too small to represent the entire material Semnani and Borja (2017). Hence, it is of critical importance and great interest to employ the acquired data (e.g., several small FOVs) for quick reconstruction and accurate investigation, which may save both time and imaging cost.

Considering the above two aspects, effectively utilizing the given (obtained) data and correspondingly developing accurate 2D/3D reconstruction methods remains an outstanding problem. Figure 1 shows a schematic of such a 2D reconstruction process. In addition, a related issue is the diversity of the reconstruction results. Since the inherent nature of this inverse problem generally means that more than one solution is acceptable, the reconstruction algorithm is expected to be not only accurate and fast, but also able to stably provide comparable candidate solutions, thus allowing for user selection.

At present, potential methods for this reconstruction conundrum are variants of DS Mariethoz et al. (2010) and CCSIM Tahmasebi and Sahimi (2012, 2013, 2016a, 2016b); Tahmasebi (2017). We note, however, that the performance of these MPS-like methods relies heavily on the proportion of informed data: the more data there are, the better the performance they may achieve. In the case of extremely limited information considered here, inaccurate reconstructions and evident artifacts (unnatural structures) easily arise (see the results in Sec. IV).

Figure 1: Schematic of reconstruction from extremely limited information.

Recently, the advent of machine learning/deep learning techniques has brought new inspirations and insights to a variety of domains Bostanabad et al. (2016a, b); LeCun et al. (2015); Esteva et al. (2017); Chen et al. (2017, 2018a); Ren et al. (2019); Goodfellow et al. (2014); Mirza and Osindero (2014); Karimpouli and Tahmasebi (2019); Karimpouli and Tahmesbi (2019); Tahmasebi et al. (2017); Chen et al. (2018b); Cang et al. (2017); Chan and Elsheikh (2018a, b); Feng et al. (2018b); Mosser et al. (2017, 2018a, 2018b); Laloy et al. (2017, 2018); Wang et al. (2018b); Zhu et al. (2018). Notably, an increasing number of such techniques have been utilized in the reconstruction and analysis of materials Karimpouli and Tahmasebi (2019); Karimpouli and Tahmesbi (2019); Tahmasebi et al. (2017); Chen et al. (2018b); Cang et al. (2017); Chan and Elsheikh (2018a, b); Feng et al. (2018b); Mosser et al. (2017, 2018a, 2018b); Laloy et al. (2017, 2018); Wang et al. (2018b). Recent advances include the decision-tree method Bostanabad et al. (2016a, b), 2D generative adversarial networks (GAN) and 2D conditional generative adversarial networks (CGAN) Chan and Elsheikh (2018a, b); Feng et al. (2018b), 3D GAN and 3D CGAN Mosser et al. (2017, 2018a, 2018b); Laloy et al. (2017, 2018), and CNN-based methods Wang et al. (2018b). The prevalence of deep learning-based methods stems from the fact that, by training a neural network on numerous samples (pairs of input and output), a general mapping from input to output can be found. Once the neural network is trained, its inference (reconstruction) can be very fast.

In this paper, we propose a deep learning-based framework for reconstructing porous media from extremely limited information. Specifically, CGAN is employed to learn a mapping between the input (partial image) and the output (full image). To preserve reconstruction accuracy, two simple but effective objective functions (a.k.a. loss functions) are proposed and coupled with two other loss functions to jointly constrain the training procedure. Additionally, considering the intrinsic nature of this inverse problem, in which more than one reconstruction is generally reasonable, Gaussian noise is introduced to produce reconstruction diversity, thereby providing multiple candidate outputs. According to extensive tests on a variety of porous media and both visual and quantitative comparisons, our method is demonstrated to be accurate yet stable. Moreover, given an input, the proposed method renders near-instant reconstruction on a CPU (∼0.08 s for a 128 × 128 image), a substantial speedup compared with the conventional DS method. We remark that, besides its ability to handle incomplete data, our approach can be readily extended, for example by incorporating an arbitrary number of objective functions of any type into the reconstruction, by incorporating any user-defined conditional data, or by coupling it with other reconstruction methods.

The rest of this paper is organized as follows: Section II details the reconstruction framework for porous media, including the fundamentals of CGAN, the introduction of noise, and the loss functions. Section III describes the assessment methods for the reconstruction. Results and comparisons are presented in Sec. IV. Section V gives concluding remarks.

Figure 2: Schematic of CGAN. G tries to generate realistic data from given inputs to fool D, while D attempts to distinguish the fakes made by G from real data.

II Reconstruction of porous media using conditional generative adversarial network

In this section, we present the details of the reconstruction of porous media using a conditional generative adversarial network (CGAN), including the primary principle of CGAN, the design of the network architectures, and the loss functions.

II.1 Principle of CGAN

Since CGAN Mirza and Osindero (2014) is the conditional version of GAN Goodfellow et al. (2014), we first introduce GAN. In particular, GAN consists of two adversarial sub-networks: a generator G and a discriminator D. They battle in a two-player min-max game in which the generator G tries to generate realistic data from a given input to fool the discriminator D, whereas D attempts to distinguish the fakes made by G from the real data (target). The goal of G is to learn a mapping that transforms prior noise z into the real data y, while the output of the discriminator D(y) gives the probability that y comes from the training data rather than from the fake data generated by G. Both G and D are trained alternately to optimize the following objective function:

L_GAN(G, D) = E_y[log D(y)] + E_z[log(1 − D(G(z)))],   (1)
where G tries to minimize this expression while D tries to maximize it, reflecting the adversarial conception of GAN. Notably, at the very beginning, the abilities of both G and D are quite weak; over iterations, they gradually become more powerful and finally reach a Nash equilibrium Goodfellow et al. (2014), in which G is able to produce realistic data that cannot be recognized by D. Once trained, D is discarded and only G is used to transform an input into its expected output. In general, the original GAN is used to generate realistic data from a noise distribution, such as Gaussian or random noise, given a set of samples, which can be of special use when data are lacking.
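To make the min-max objective concrete, the following sketch estimates the GAN value function of Eq. 1 on toy one-dimensional data, with a fixed logistic "discriminator" and a shift "generator". All function names and parameter values are illustrative stand-ins, not the paper's networks:

```python
import math
import random

random.seed(0)

def discriminator(y, w=2.0, b=0.0):
    """Toy logistic discriminator: probability that sample y is real."""
    return 1.0 / (1.0 + math.exp(-(w * y + b)))

def generator(z, shift=-1.0):
    """Toy generator: maps prior noise z to a fake sample."""
    return z + shift

def gan_value(reals, noises):
    """Monte-Carlo estimate of E_y[log D(y)] + E_z[log(1 - D(G(z)))]."""
    real_term = sum(math.log(discriminator(y)) for y in reals) / len(reals)
    fake_term = sum(math.log(1.0 - discriminator(generator(z)))
                    for z in noises) / len(noises)
    return real_term + fake_term

reals = [random.gauss(1.0, 0.1) for _ in range(1000)]   # "real" data near +1
noises = [random.gauss(0.0, 0.1) for _ in range(1000)]  # prior noise near 0
v = gan_value(reals, noises)
```

During training, D takes gradient steps to increase this value while G takes steps to decrease it, until neither can improve.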

As an improved version of GAN, CGAN allows for incorporating conditional data as an external input x, rather than only a noise distribution z, as demonstrated in Fig. 2.

Therefore, the modified objective function is given as:

L_CGAN(G, D) = E_{x,y}[log D(x, y)] + E_{x,z}[log(1 − D(x, G(x, z)))].   (2)
Figure 3: Schematic of architecture of network G.

In practice, depending on the task, the noise z in CGAN may or may not be added. In particular, the noise can even be dropped in one-to-one image translation problems Isola et al. (2017). In this paper, however, the diversity of the reconstruction relies mainly on the introduction of noise. Hence, the noise is retained and fused together with x (see details in subsection II.2.3).

II.2 Network architectures

In the following, we elaborate on the architectures of the generator G and the discriminator D, as well as the injection of noise into G.

II.2.1 Architecture of generator

In general image-to-image tasks, the U-Net architecture is frequently employed for its relatively small number of parameters and its multi-scale feature extraction. Since the reconstruction of porous media is also an image-to-image task, we mainly follow the design of BicycleGAN Zhu et al. (2018), a representative U-Net-based framework. Figure 3 depicts the main architecture of G. Specifically, this network performs two main steps: an encoding process by the encoder and a decoding process by the decoder. These two procedures can be viewed as nonlinear down-sampling and up-sampling of the input image. In this work we focus on 128 × 128 images, so starting from a size of 128 × 128, the input image is convolved and progressively down-sampled to the code by a factor of 2, and then deconvolved and up-sampled back to 128 × 128 for output by the same factor. Each convolutional or deconvolutional layer is followed by a nonlinear activation layer (ReLU or Leaky ReLU) and an instance normalization (IN) layer. In general, the encoding and decoding processes of a U-Net are symmetric, meaning that the feature maps at symmetric positions have the same shape. Thus, skip connections are usually added to assist the decoding process by introducing feature-map information from the encoding process.
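The halving/doubling of spatial sizes and the symmetry that skip connections exploit can be sketched as follows; the 128 × 128 input and factor of 2 come from the text above, while the helper names are hypothetical:

```python
def encoder_sizes(input_size=128, factor=2):
    """Spatial sizes produced by repeated down-sampling to the 1x1 code."""
    sizes = [input_size]
    while sizes[-1] > 1:
        sizes.append(sizes[-1] // factor)
    return sizes

def decoder_sizes(code_size=1, factor=2, output_size=128):
    """Spatial sizes produced by repeated up-sampling back to the output."""
    sizes = [code_size]
    while sizes[-1] < output_size:
        sizes.append(sizes[-1] * factor)
    return sizes

enc = encoder_sizes()  # down: 128, 64, 32, 16, 8, 4, 2, 1
dec = decoder_sizes()  # up:   1, 2, 4, 8, 16, 32, 64, 128
# A skip connection pairs each encoder level with the decoder level
# of equal spatial size:
skips = list(zip(enc, reversed(dec)))
```

Because `enc` and the reversed `dec` match level by level, encoder feature maps can be concatenated directly into the decoder without resizing.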

II.2.2 Architecture of discriminator

Figure 4: Schematic of architecture of network D.

The design of the discriminator D is relatively simpler than that of the generator G, as shown in Fig. 4. It is composed of five convolutional layers, each followed by a Leaky ReLU (LReLU) layer and an instance normalization (IN) layer. As aforementioned, D is trained to distinguish the real data y from the fake data G(x, z). Hence, during training, the input of D is either the pair of conditional data x and real data y, or the pair of conditional data x and fake data G(x, z). By using the sigmoid function and an additional averaging function, the output of D is transformed into a probability between 0 and 1, which indicates how real the data is. D is trained to recognize real data with probability 1 and fake data with probability 0. When the final balance with G is reached, both real and fake data are identified with probability 0.5.

II.2.3 Noise injection

It is worth noting that the goal of injecting random noise into the generator G is to introduce output diversity, providing multiple choices for users owing to the inherently ill-posed nature of this problem. In general, there are two alternatives for noise injection: i) adding noise only in the first layer (Fig. 5a), or ii) adding it in each layer of the encoder (Fig. 5b). As demonstrated in the literature Zhu et al. (2018), injecting the noise into all layers of the encoder leads to slightly better performance. Hence, in the proposed method we adopt the design in Fig. 5b.

Figure 5: Alternatives for noise injection. Noise is injected by spatial replication and concatenation into the generator. (a) Adding noise in the first layer and (b) in each layer of the encoder.

In our work, we use Gaussian noise with shape 1 × 1 × C_z, which is first spatially replicated to the height and width of the current encoder layer and then concatenated with that layer along the channel dimension. For instance, if the shape of the current layer is H × W × C, the Gaussian noise is spatially replicated to H × W × C_z along the H and W dimensions; after concatenation along the channel dimension, the resulting layer has shape H × W × (C + C_z).
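A minimal sketch of this replicate-and-concatenate step, assuming a channels-last layout; the concrete sizes (32 × 32 × 64 layer, C_z = 8) are illustrative, as the paper's channel counts are not restated here:

```python
import numpy as np

def inject_noise(feature_map, noise):
    """Spatially replicate a (1, 1, Cz) noise vector to the feature map's
    height and width, then concatenate along the channel dimension."""
    h, w, _ = feature_map.shape
    tiled = np.tile(noise, (h, w, 1))                    # (H, W, Cz)
    return np.concatenate([feature_map, tiled], axis=2)  # (H, W, C + Cz)

layer = np.zeros((32, 32, 64))    # hypothetical current encoder layer
z = np.random.randn(1, 1, 8)      # Gaussian noise, Cz = 8
fused = inject_noise(layer, z)
```

Every spatial position receives the same noise vector, so the network sees a globally consistent random code at each encoder level.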

II.3 Loss function

Similar to other optimization-based approaches like simulated annealing Yeong and Torquato (1998a, b); Rozman and Utz (2001); Pant et al. (2014); Jiao et al. (2008, 2009); Chen et al. (2015); Gerke and Karsanina (2015); Gerke et al. (2014); Karsanina and Gerke (2018); TANG et al. (2009); Chen et al. (2014); Gao et al. (2016); Feng et al. (2018a); Ju et al. (2017, 2014, 2018), the goal of the deep learning-based method is to minimize a loss function that evaluates the discrepancy between the network prediction and the target. On the basis of Eq. 2, the loss of the discriminator D can be equivalently written as the minimization of:

L_D = −E_{x,y}[log D(x, y)] − E_{x,z}[log(1 − D(x, G(x, z)))].   (3)
In what follows, we detail the loss function of the generator G, since it significantly determines the quality of the generated results. Specifically, the total loss function of G is composed of four individual loss functions: the CGAN loss L_CGAN, the L1 loss L_1, and two loss functions proposed for this reconstruction task, namely the pattern loss L_pattern and the porosity loss L_porosity.

The CGAN loss L_CGAN comes from the fundamental CGAN framework and represents how close the output of G is to the target when judged by D Goodfellow et al. (2014). In terms of Eq. 2, this loss is defined as:

L_CGAN = E_{x,z}[log(1 − D(x, G(x, z)))].   (4)
The L1 loss L_1, defined as the sum of pixel-wise absolute differences between the output G(x, z) and the input conditional data x, is given by:

L_1 = E_{x,z}[‖x − G(x, z)‖_1],   (5)

where z is the Gaussian noise. This loss function is utilized to ensure that the output keeps the same conditioning data contained in x, in both value and position.
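As a rough illustration of conditioning on hard data (not the paper's training code), the following evaluates a pixel-wise absolute difference restricted to the informed region via a binary mask; the mask-based formulation and all names are assumptions for this sketch:

```python
import numpy as np

def l1_condition_loss(output, condition, mask):
    """Mean absolute difference between the generator output and the
    conditional (hard) data, evaluated only on the informed pixels."""
    return float((np.abs(output - condition) * mask).sum() / mask.sum())

rng = np.random.default_rng(0)
target = (rng.random((128, 128)) < 0.3).astype(float)  # synthetic binary image
mask = np.zeros((128, 128))
mask[:32, :32] = 1.0                                   # informed top-left patch
condition = target * mask                              # hard data x
```

An output that honors the hard data exactly gets zero loss; one that flips every informed pixel gets the maximum loss of 1.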

The third is the pattern loss L_pattern, proposed to quantify the mean squared error (MSE) between the pattern distributions of the output G(x, z) and the target y. This loss function describes the local texture difference between the two images and is defined as:

L_pattern = ‖P_{G(x,z)} − P_y‖²,   (6)

where P_I denotes the probability distribution of patterns in image I.
Particularly, a pattern in an image is defined as the multi-point configuration captured by a template. The calculation of the pattern loss is straightforward: i) using a template of fixed size, scan the image and collect all the patterns; ii) flatten each pattern, obtain its corresponding binary code, and convert it to a decimal number; iii) count the occurrence of each decimal number and normalize it (divide by the total number of patterns in the image) to obtain the probability distribution of patterns; iv) calculate the Euclidean distance between the two distributions of the target and the reconstruction. Figure 6 gives the main steps of this process.

Figure 6: Schematic of obtaining the probability distribution of patterns.
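Steps i)-iv) above can be sketched as follows, using a 2 × 2 template for illustration (the paper's template size is not restated here, so `t=2` is an assumption); all names are hypothetical:

```python
import numpy as np

def pattern_distribution(img, t=2):
    """Probability distribution of the t-by-t binary patterns in an image:
    each pattern is flattened, read as a binary code, converted to a decimal
    index, and counts are normalized by the total number of patterns."""
    h, w = img.shape
    counts = np.zeros(2 ** (t * t))
    weights = 2 ** np.arange(t * t)[::-1]   # binary-to-decimal weights
    for i in range(h - t + 1):
        for j in range(w - t + 1):
            code = int((img[i:i + t, j:j + t].ravel() * weights).sum())
            counts[code] += 1
    return counts / counts.sum()

def pattern_loss(recon, target, t=2):
    """Euclidean distance between the two pattern distributions."""
    p, q = pattern_distribution(recon, t), pattern_distribution(target, t)
    return float(np.sqrt(((p - q) ** 2).sum()))

rng = np.random.default_rng(1)
img = (rng.random((64, 64)) < 0.3).astype(int)  # synthetic binary image
```

Identical images yield a loss of exactly zero, while an image and its phase-inverted version produce clearly different distributions.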

The last loss function is the porosity loss L_porosity, which penalizes the disagreement between the porosities of the output G(x, z) and the target y, thus maintaining porosity agreement during reconstruction. It is given as:

L_porosity = (φ(G(x, z)) − φ(y))²,   (7)

where φ(·) denotes the porosity of an image.
The total loss is a weighted sum of the above four loss functions and is defined as:

L_total = L_CGAN + λ1 L_1 + λ2 L_pattern + λ3 L_porosity,   (8)

where the hyper-parameters λ1, λ2, and λ3 control the relative importance of each term.
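A minimal sketch of the porosity term and the weighted total; the squared-mismatch form and the weights `lam1..lam3` are stand-ins for the paper's definitions, which are not restated here:

```python
import numpy as np

def porosity(img, pore_value=1):
    """Volume fraction of the pore phase in a binary image."""
    return float((img == pore_value).mean())

def porosity_loss(recon, target):
    """Squared porosity mismatch between reconstruction and target
    (an assumed functional form for illustration)."""
    return (porosity(recon) - porosity(target)) ** 2

def total_loss(l_cgan, l_l1, l_pattern, l_porosity,
               lam1=1.0, lam2=1.0, lam3=1.0):
    """Weighted sum of the four loss terms, mirroring Eq. 8."""
    return l_cgan + lam1 * l_l1 + lam2 * l_pattern + lam3 * l_porosity

ones = np.ones((4, 4), dtype=int)  # fully porous toy image
```

Tuning the weights trades off adversarial realism against hard-data fidelity, texture agreement, and porosity agreement.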

III Assessment methods for reconstruction

In this paper, to verify the performance of the proposed method, the porosity and three morphological functions, namely the two-point correlation function S2(r) Torquato (2013), the lineal-path function L(r) Lu and Torquato (1992), and the two-point cluster function C2(r) Torquato et al. (1988), are employed. Figure 7 illustrates the definitions of these functions.

Figure 7: Definition of three morphological functions.

The two-point correlation function S2(r) gives the probability that two points separated by a distance r both lie in the same phase, indicating the spatial correlation of the two points. For statistically isotropic and homogeneous media, it depends only on the distance between the point pair; hence the directional dependence is dropped for brevity.

Similar to S2(r), the two-point cluster function C2(r) gives the probability that two points both lie in the same cluster. By definition, it embodies higher-order morphological information than S2(r).

The other descriptor is the lineal-path function L(r), which is the probability that a line segment of length r lies entirely in the same phase. It encodes connectedness information of the medium along a lineal path.
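A simple estimator of S2(r) along one direction for a binary image can be written as follows; sampling shifted products along the x axis is one common estimation scheme, not necessarily the implementation used in the paper:

```python
import numpy as np

def s2(img, r_max=20, phase=1):
    """Estimate the two-point correlation S2(r) along the x axis: the
    probability that two pixels a distance r apart both lie in `phase`."""
    ind = (img == phase).astype(float)  # phase indicator function
    w = ind.shape[1]
    return np.array([(ind[:, :w - r] * ind[:, r:]).mean()
                     for r in range(r_max + 1)])

rng = np.random.default_rng(3)
img = (rng.random((256, 256)) < 0.3).astype(int)  # uncorrelated toy medium
curve = s2(img)
```

At r = 0 the estimate equals the phase volume fraction φ, and for an uncorrelated medium it decays toward φ² at large r; averaging the curve over several directions (as done for the figures in Sec. IV) reduces directional bias.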

IV Results and discussion

Here, we focus on 2D two-phase structures; with slight modifications, extensions to multi-phase and 3D reconstruction are also straightforward. To ascertain the performance of our CGAN method, we test it on four types of porous media, covering high and low porosity as well as isotropy and anisotropy.

IV.1 Dataset

Notice that in this work, for each category of porous medium, an associated dataset is built, encompassing 600-1000 images. In each dataset, 70% of the samples are randomly chosen as the training set and the remaining 30% as the testing set. Figure 8 presents six examples from the silica material dataset; each sample is a pair of input (partial image) and target (full image).

Figure 8: Six samples in silica material dataset.

In all of our reconstruction tasks here, the hyper-parameters λ1, λ2, and λ3 in Eq. 8 are set empirically. As a tradeoff between the accuracy and efficiency of the pattern loss, the template size in its calculation is kept small. In addition, the batch size and the channel number of the noise are fixed. For both G and D, we use the Adam optimizer Kingma and Ba (2014) with a small initial learning rate and a linear decay for stable training.

IV.2 Results and comparisons

In this subsection, we present the reconstruction results and comparisons for four porous media, including isotropic materials (silica, a battery material, and sandstone) and an anisotropic medium. In particular, the porosity and the three morphological functions S2(r), L(r), and C2(r) are employed to evaluate the reconstruction accuracy. For each medium, multiple realizations are generated, and the average of the statistics (morphological functions and porosity) over them is also presented.

IV.2.1 Silica reconstruction

First, we reconstruct an isotropic porous material, silica in a rubber matrix, with a size of 128 × 128 and a low porosity. Figure 9 shows the input, three randomly selected reconstructions, and the target. Notice that the informed data in the input is a small square in the top-left area of the target, accounting for only a small fraction of the image. Nevertheless, the reconstructions are visually indistinguishable from the target. It can clearly be seen that the hard data in the input is well honored in all of the reconstructions (marked by orange rectangles), while the rest of the image retains good diversity, allowing multiple selections for users.

Figure 9: Input (a), a square in the top-left region of the target (e), and three realizations of CGAN (b)-(d). Orange rectangles show the reproduction of the hard data in the input, while green rectangles indicate white clusters in the reconstructions that are much larger than those in the input, demonstrating that additional information is introduced by our method.
Figure 10: Comparison of statistical functions between reconstructions, their average, and target. The calculation of statistical functions is along X and Y directions and then averaged.

Moreover, beyond visual inspection, we further quantitatively compare the reconstruction accuracy of the proposed method. Figure 10 depicts the quantitative comparison of statistical functions and porosities between the reconstructions, their average, and the target. Good agreement can be observed between the reconstructions and the target, and the average over the realizations matches the target excellently, demonstrating the accuracy of the proposed method. Additionally, the small biases of the functions between reconstructions and target also indicate the stability of our method. The porosity distribution of the reconstructions (Fig. 10a) closely matches that of the target, further manifesting the accuracy and robustness of the proposed method.

Notably, once trained, our method takes only ∼0.08 s per reconstruction when running on an Intel i7-4790K CPU. It is also worth noting that this input image (Fig. 9a) may be regarded as a high-resolution image with a small FOV, and using our method a much larger FOV at high resolution can be accurately recovered. Meanwhile, comparing the input and the reconstructions (Fig. 9), some white clusters (marked by green rectangles) in the reconstructions are much larger than the given ones, indicating that additional information is indeed introduced by our method.

IV.2.2 Battery material reconstruction

Figure 11: Comparison of visual inspection and statistical functions. Orange rectangles show the reproduction of hard data. The calculation of statistical functions is along X and Y directions and then averaged.

In addition to single-subarea reconstruction, our method is also verified on an isotropic battery material Ananyev et al. (2018) with four tiny subareas informed, as shown in Fig. 11a and Fig. 11e. The purpose of this experiment is to mimic the extreme circumstances in aerospace Shen et al. (2015), geoscience Mariethoz and Renard (2010); Sokat et al. (2018), etc., in which only several incomplete parts of the data may be available. Again, according to the comparisons in Fig. 11, the reconstructions are visually comparable, and the agreement in terms of both porosity and statistical functions is excellent.

We would like to emphasize that the amount of informed data in the input image of this material is the same as that of the silica; however, the standard deviation of its reconstruction porosities is much higher than that of the silica reconstruction. This can also be clearly observed in the larger fluctuations of the three descriptors in Fig. 11(f)-(h). This is expected, since this material is more complex in both connectivity and geometry than the silica material.

IV.2.3 Sandstone reconstruction

It is useful to compare the performance of our CGAN method with an existing method, a variation of DS (here also called DS for convenience). As presented in Fig. 12a, we use a sandstone image with a relatively larger region of known data for demonstration purposes. In light of both the visual and quantitative comparisons in Fig. 12, the performance of CGAN significantly surpasses that of DS. As can be observed in Fig. 12d, the reconstruction by DS is unnatural: the pore size is severely underestimated and the connectedness is poor compared with the target image. The reasons for this are twofold: i) the sequential-simulation mechanism of DS readily causes error accumulation and thus makes the reconstruction inaccurate; ii) more importantly, this method essentially repeats the given data (usually in the form of patterns) into the remaining unknown part, rather than generating or introducing additional realistic information. As can be seen in Fig. 12d, the rectangles marked in red and blue show the repetition of two patterns of hard data from the input.

Figure 12: Comparison of visual inspection and statistical functions. Red and blue rectangles respectively present the repeat of two patterns of hard data in input. The calculation of statistical functions is along X and Y directions and then averaged.
Figure 13: Comparison of visual inspection and statistical functions. Orange rectangle shows the reproduction of hard data in the input. The calculation of statistical functions is only along southeast direction.

By contrast, the two reconstructions by CGAN (Fig. 12b and Fig. 12c) are both visually and quantitatively consistent with the target, and unrealistic structures can hardly be recognized. These reconstructions also present a seamless transition between the edge of the hard data and the unknown region. Besides, a CGAN reconstruction takes only ∼0.08 s, far less than DS on the same CPU.

IV.2.4 Anisotropic reconstruction

We further apply our method to the reconstruction of an anisotropic porous material Bostanabad et al. (2016a), as demonstrated in Fig. 13e. The particularity of this medium is that the pores along the southeast direction are significantly longer than along other directions. Even though only a small amount of hard data is given (Fig. 13a), the reconstructions are still realistic at the visual level, and the hard data is well preserved. The reconstruction porosity distribution is close to that of the target.

As can be seen in Fig. 13f-h, the averaged S2(r) and L(r) of the reconstructions are in good agreement with those of the target, whereas the higher-order function C2(r) exhibits a slight bias, which enlarges with the distance between the two pixels. The performance of the proposed method on this anisotropic reconstruction is slightly weaker than on the three isotropic reconstructions above, primarily because this material is topologically more complicated in the pores' lineal size and morphology. Additionally, note that no explicit prior information on pore direction or size is incorporated during reconstruction; based on the loss functions used, our method can still capture and reproduce the essential directional morphology and dimensional characteristics. Of course, more constraints can be incorporated into our method to further improve the reconstruction accuracy.

V Conclusion

Reconstruction of porous media from limited information has been an outstanding challenge, especially when the given data are scarce. To address this problem, we have presented a deep learning-based method for reconstructing the full image of a porous medium from its much smaller sub-area(s), providing a framework for producing accurate, fast, and robust realizations. Specifically, CGAN is employed to learn the mapping between the partial image and the full image. In particular, two objective functions are proposed and, along with two other constraint functions, jointly constitute the total objective function that ensures reconstruction accuracy. Besides, Gaussian noise is introduced to preserve reconstruction diversity, allowing multiple choices for users. Extensively tested on various porous materials, our method has been demonstrated to accurately and stably reconstruct statistically equivalent structures while remaining highly efficient. Using our approach, the mapping between input and output can be successfully learned regardless of the amount and form of the given data, and consequently the corresponding full images can be reproduced almost instantaneously. This may be especially useful in applications where data are lacking or data acquisition is costly.

We would like to highlight that the proposed framework can be readily extended to other applications, such as 2D-to-3D or 3D-to-3D conditional reconstruction. In principle, it can incorporate an arbitrary number of objective functions of any type, as well as any user-defined conditional data, which could be of particular use in practice when the presence of specific structures in the area is known. In addition, it has the potential to reduce computational cost; for instance, it can be coupled with a variety of porous-media reconstruction methods such as MPS Okabe and Blunt (2005); Gao et al. (2015); Ding et al. (2018), DS Mariethoz et al. (2010), and CCSIM Tahmasebi and Sahimi (2012, 2013, 2016a, 2016b); Tahmasebi (2017), and used to accelerate the matching process in these algorithms. Some of these extensions will be reported in the near future.

This work is supported by the National Natural Science Foundation of China (Grant No. ). The authors would like to thank Dr. Daniel W. Apley, Department of Industrial Engineering and Management Sciences, Northwestern University, for providing several images of porous media used in this work.


  • Torquato (2013) S. Torquato, Random heterogeneous materials: microstructure and macroscopic properties, Vol. 16 (Springer Science & Business Media, 2013).
  • Sahimi (2011) M. Sahimi, Flow and transport in porous media and fractured rock: from classical methods to modern approaches (John Wiley & Sons, 2011).
  • Li et al. (2018a) H. Li, S. Singh, N. Chawla,  and Y. Jiao, Mater. Charact. 140, 265 (2018a).
  • Wang et al. (2018a) Y. Wang, J.-Y. Arns, S. S. Rahman,  and C. H. Arns, Phys. Rev. E 98, 043310 (2018a).
  • Bostanabad et al. (2018) R. Bostanabad, Y. Zhang, X. Li, T. Kearney, L. C. Brinson, D. W. Apley, W. K. Liu,  and W. Chen, Prog. Mater Sci. 95, 1 (2018).
  • Tahmasebi et al. (2015) P. Tahmasebi, F. Javadpour,  and M. Sahimi, Transp. Porous Media 110, 521 (2015).
  • Yeong and Torquato (1998a) C. Yeong and S. Torquato, Phys. Rev. E 57, 495 (1998a).
  • Yeong and Torquato (1998b) C. Yeong and S. Torquato, Phys. Rev. E 58, 224 (1998b).
  • Rozman and Utz (2001) M. G. Rozman and M. Utz, Phys. Rev. E 63, 066701 (2001).
  • Pant et al. (2014) L. M. Pant, S. K. Mitra,  and M. Secanell, Phys. Rev. E 90, 023306 (2014).
  • Jiao et al. (2008) Y. Jiao, F. Stillinger,  and S. Torquato, Phys. Rev. E 77, 031135 (2008).
  • Jiao et al. (2009) Y. Jiao, F. Stillinger,  and S. Torquato, Proc. Natl. Acad. Sci. USA 106, 17634 (2009).
  • Chen et al. (2015) S. Chen, H. Li,  and Y. Jiao, Phys. Rev. E 92, 023301 (2015).
  • Gerke and Karsanina (2015) K. M. Gerke and M. V. Karsanina, EPL (Europhysics Letters) 111, 56002 (2015).
  • Gerke et al. (2014) K. M. Gerke, M. V. Karsanina, R. V. Vasilyev,  and D. Mallants, EPL (Europhysics Letters) 106, 66002 (2014).
  • Karsanina and Gerke (2018) M. V. Karsanina and K. M. Gerke, Phys. Rev. Lett. 121, 265501 (2018).
  • Tang et al. (2009) T. Tang, Q. Teng, X. He,  and D. Luo, J. Microsc. 234, 262 (2009).
  • Chen et al. (2014) D. Chen, Q. Teng, X. He, Z. Xu,  and Z. Li, Phys. Rev. E 89, 013305 (2014).
  • Gao et al. (2016) M. Gao, Q. Teng, X. He, C. Zuo,  and Z. Li, Phys. Rev. E 93, 012140 (2016).
  • Feng et al. (2018a) J. Feng, Q. Teng, X. He, L. Qing,  and Y. Li, Comp. Mater. Sci. 144, 181 (2018a).
  • Ju et al. (2017) Y. Ju, Y. Huang, J. Zheng, X. Qian, H. Xie,  and X. Zhao, Comput. Geosci. 101, 10 (2017).
  • Ju et al. (2014) Y. Ju, J. Zheng, M. Epstein, L. Sudak, J. Wang,  and X. Zhao, Comput. Methods Appl. Mech. Eng. 279, 212 (2014).
  • Ju et al. (2018) Y. Ju, Y. Huang, W. Gong, J. Zheng, H. Xie, L. Wang,  and X. Qian, IEEE Trans. Geosci. Remote Sens.  (2018).
  • Okabe and Blunt (2005) H. Okabe and M. J. Blunt, J. Petrol. Sci. Eng. 46, 121 (2005).
  • Gao et al. (2015) M. Gao, X. He, Q. Teng, C. Zuo,  and D. Chen, Phys. Rev. E 91, 013308 (2015).
  • Ding et al. (2018) K. Ding, Q. Teng, Z. Wang, X. He,  and J. Feng, Phys. Rev. E 97, 063304 (2018).
  • Mariethoz et al. (2010) G. Mariethoz, P. Renard,  and J. Straubhaar, Water Resour. Res. 46 (2010).
  • Tahmasebi and Sahimi (2012) P. Tahmasebi and M. Sahimi, Phys. Rev. E 85, 066709 (2012).
  • Tahmasebi and Sahimi (2013) P. Tahmasebi and M. Sahimi, Phys. Rev. Lett. 110, 078002 (2013).
  • Tahmasebi and Sahimi (2016a) P. Tahmasebi and M. Sahimi, Water Resour. Res. 52, 2074 (2016a).
  • Tahmasebi and Sahimi (2016b) P. Tahmasebi and M. Sahimi, Water Resour. Res. 52, 2099 (2016b).
  • Tahmasebi (2017) P. Tahmasebi, Water Resour. Res. 53, 5980 (2017).
  • Bostanabad et al. (2016a) R. Bostanabad, W. Chen,  and D. Apley, J. Microsc. 264, 282 (2016a).
  • Bostanabad et al. (2016b) R. Bostanabad, A. T. Bui, W. Xie, D. W. Apley,  and W. Chen, Acta Mater. 103, 89 (2016b).
  • Feng et al. (2018b) J. Feng, Q. Teng, X. He,  and X. Wu, Acta Mater. 159, 296 (2018b).
  • Mosser et al. (2017) L. Mosser, O. Dubrule,  and M. J. Blunt, Phys. Rev. E 96, 043309 (2017).
  • Mosser et al. (2018a) L. Mosser, O. Dubrule,  and M. J. Blunt, arXiv preprint arXiv:1802.05622  (2018a).
  • Mosser et al. (2018b) L. Mosser, O. Dubrule,  and M. J. Blunt, Transp. Porous Media 125, 81 (2018b).
  • Laloy et al. (2017) E. Laloy, R. Hérault, J. Lee, D. Jacques,  and N. Linde, Adv. Water Resour. 110, 387 (2017).
  • Laloy et al. (2018) E. Laloy, R. Hérault, D. Jacques,  and N. Linde, Water Resour. Res. 54, 381 (2018).
  • Wang et al. (2018b) Y. Wang, C. H. Arns, S. S. Rahman,  and J.-Y. Arns, Math. Geosci. 50, 781 (2018b).
  • Li et al. (2018b) Y. Li, X. He, Q. Teng, J. Feng,  and X. Wu, Phys. Rev. E 97, 043306 (2018b).
  • Li et al. (2019) Y. Li, Q. Teng, X. He, J. Feng,  and S. Xiong, J. Petrol. Sci. Eng. 174, 968 (2019).
  • Mirowski et al. (2009) P. W. Mirowski, D. M. Tetzlaff, R. C. Davies, D. S. McCormick, N. Williams,  and C. Signer, Math. Geosci. 41, 447 (2009).
  • Boisvert et al. (2007) J. B. Boisvert, M. J. Pyrcz,  and C. V. Deutsch, Nat. Resour. Res. 16, 313 (2007).
  • Gao et al. (2017) M. Gao, Q. Teng, X. He, J. Feng,  and X. Han, Phys. Rev. E 95, 053306 (2017).
  • Shen et al. (2015) H. Shen, X. Li, Q. Cheng, C. Zeng, G. Yang, H. Li,  and L. Zhang, IEEE Geosci. Remote Sens. Mag. 3, 61 (2015).
  • Mariethoz and Renard (2010) G. Mariethoz and P. Renard, Math. Geosci. 42, 245 (2010).
  • Sokat et al. (2018) K. Y. Sokat, I. S. Dolinskaya, K. Smilowitz,  and R. Bank, Eur. J. Oper. Res. 269, 466 (2018).
  • Abdollahifard et al. (2016) M. J. Abdollahifard, G. Mariethoz,  and M. Pourfard, Comput. Geosci. 91, 49 (2016).
  • Semnani and Borja (2017) S. J. Semnani and R. I. Borja, Acta Geotech. 12, 1193 (2017).
  • LeCun et al. (2015) Y. LeCun, Y. Bengio,  and G. Hinton, Nature 521, 436 (2015).
  • Esteva et al. (2017) A. Esteva, B. Kuprel, R. A. Novoa, J. Ko, S. M. Swetter, H. M. Blau,  and S. Thrun, Nature 542, 115 (2017).
  • Chen et al. (2017) H. Chen, X. He, L. Qing,  and Q. Teng, IEEE Trans. Multimedia 19, 1702 (2017).
  • Chen et al. (2018a) H. Chen, X. He, C. Ren, L. Qing,  and Q. Teng, Neurocomputing 285, 204 (2018a).
  • Ren et al. (2019) C. Ren, X. He, Y. Pu,  and T. Q. Nguyen, IEEE Trans. Image Process.  (2019).
  • Goodfellow et al. (2014) I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville,  and Y. Bengio, in Advances in neural information processing systems (2014) pp. 2672–2680.
  • Mirza and Osindero (2014) M. Mirza and S. Osindero, arXiv preprint arXiv:1411.1784  (2014).
  • Karimpouli and Tahmasebi (2019) S. Karimpouli and P. Tahmasebi, Neural Networks 111, 89 (2019).
  • Karimpouli and Tahmesbi (2019) S. Karimpouli and P. Tahmesbi, Comput. Geosci.  (2019).
  • Tahmasebi et al. (2017) P. Tahmasebi, F. Javadpour,  and M. Sahimi, Expert Syst. Appl. 88, 435 (2017).
  • Chen et al. (2018b) S. Chen, L. A. Baumes, A. Gel, M. Adepu, H. Emady,  and Y. Jiao, Powder Technol. 339, 615 (2018b).
  • Cang et al. (2017) R. Cang, Y. Xu, S. Chen, Y. Liu, Y. Jiao,  and M. Y. Ren, J. Mech. Des. 139, 071404 (2017).
  • Chan and Elsheikh (2018a) S. Chan and A. H. Elsheikh, arXiv preprint arXiv:1809.07748  (2018a).
  • Chan and Elsheikh (2018b) S. Chan and A. H. Elsheikh, arXiv preprint arXiv:1807.05207  (2018b).
  • Zhu et al. (2018) J.-Y. Zhu, R. Zhang, D. Pathak, T. Darrell, A. A. Efros, O. Wang,  and E. Shechtman, in Advances in Neural Information Processing Systems (2018) pp. 465–476.
  • Isola et al. (2017) P. Isola, J.-Y. Zhu, T. Zhou,  and A. A. Efros, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017) pp. 1125–1134.
  • Lu and Torquato (1992) B. Lu and S. Torquato, Phys. Rev. A 45, 922 (1992).
  • Torquato et al. (1988) S. Torquato, J. Beasley,  and Y. Chiew, J. Chem. Phys. 88, 6540 (1988).
  • Kingma and Ba (2014) D. P. Kingma and J. Ba, arXiv preprint arXiv:1412.6980  (2014).
  • Ananyev et al. (2018) M. Ananyev, A. Farlenkov, V. Eremin,  and E. K. Kurumchin, Int. J. Hydrogen Energy 43, 951 (2018).