Reconstruction of 3D Porous Media From 2D Slices

Denis Volkhonskiy, et al. · January 29, 2019

We propose a novel deep learning architecture for three-dimensional porous media structure reconstruction from two-dimensional slices. The high-level idea is that we fit a distribution over all possible three-dimensional structures of a specific type based on a given dataset of samples. Then, given partial information (central slices), we recover the three-dimensional structure built around such slices. Technically, the method is implemented as a deep neural network with encoder, generator and discriminator modules. Numerical experiments show that this method gives a good reconstruction in terms of Minkowski functionals.


1 Introduction

Transport processes in soils and other porous media strongly depend on their structure. The need to model such processes arises in many practical engineering applications, such as hydrogeology, underground mining, petroleum exploitation, and contaminant cleanup. Digital Rock technology andra2013digital_1 ; andra2013digital_2 ; blunt2013pore ; koroteev2014direct is nowadays becoming an essential part of reservoir rock characterization workflows. It is actively used in the water and hydrocarbon production industries for getting insights into the mechanisms of complex fluid movement in the porous space of a reservoir rock berg2017industrial . The technology aims at the calculation of various physical properties of a rock sample based on its digital representation. The list of properties can include storage properties, such as porosity, open porosity, connected porosity, fractional porosity, etc.; transport properties, such as permeability, relative phase permeability, and capillary pressure; electromagnetic properties, such as formation factor, dielectric permittivity, and magnetic and electric permeability; elastic coefficients; geomechanical constants; and characteristics of NMR response and responses to other well logging signals evseev2015coupling .

Digitalization of a rock sample andra2013digital_1 ; blunt2013pore ; koroteev2017method ; chauhan2016processing typically covers the selection of a set of representative core plugs (1-inch scale) and drilling out miniplugs from them (1 to 10 mm scale); X-ray micro-computed tomography (microCT) scanning of the miniplugs; and processing of the grayscale 3D microCT images to distinguish between minerals and void pores (segmentation). In more complicated cases, when the rock samples have a significant amount of pores with submicron sizes, digitalization may be supplemented by 2D imaging at nm resolution (e.g., with a scanning electron microscope, SEM) for understanding the submicron features.

An essential part of the Digital Rock efforts is directed towards the characterization of rather mature oil reservoirs (e.g., with the objective to screen chemical fluids for enhancing oil recovery koroteev2013application ). In many cases, the core material for these "old" reservoirs is lost, but a significant number of 2D microscopic photos of thin sections is stored. The ability to generate realistic 3D microstructures out of the 2D data enables filling the database of digitized rocks in cases when physical core samples are inaccessible.

In some complicated cases related to submicron porosity, only 2D SEM images can resolve thin channels and pores; the conventional 3D microCT technique is ineffective here. A tool for reconstructing 3D data out of 2D sections is of obvious benefit here as well. The reconstructed 3D submicron volumes can be further used for estimating the physical properties of these pieces of rock rich in submicron information. The estimation of properties can be done using Machine Learning (see sudakov1adriving ).

2 Background

2.1 Reconstruction of three-dimensional porous media using Generative Adversarial Networks

The standard approach to the reconstruction of three-dimensional porous media is the application of spatial probabilistic models MPS-review . Recently, however, Deep Learning approaches to porous media reconstruction have become more popular feng2018accelerating ; wang2018porous .

The first work on the reconstruction of three-dimensional porous media using Deep Neural Networks is mosser2017reconstruction . In that work, the authors proposed using a 3D Deep Convolutional Generative Adversarial Network model for 3D porous images. Their model is a standard Generative Adversarial Network goodfellowetal2014 with 3D convolutional layers, trained on different 3D porous images. For the experiments, they used images of Berea sandstone, Ketton limestone, and a bead pack. The work showed good results in the generation of synthetic 3D images in terms of visual quality and the distributions of statistics (permeability, porosity, and Minkowski metrics minkowski ). In our work, we modify their model so that it can take a 2D slice as input.

2.2 Generative Adversarial Networks

Generative Adversarial Networks (GANs, goodfellowetal2014 ) are a powerful framework for generative learning. This neural model represents a sampler from the input data distribution and can generate new data samples. GANs have proven very powerful in many applications karras2017progressive ; volkhonskiy2017steganographic , including the generation of 3D porous media mosser2017reconstruction .

Now let us describe Generative Adversarial Networks in detail. Consider a dataset of training objects $X = \{x_1, \dots, x_n\}$ independently sampled from the distribution $p_{data}(x)$. Our goal is to construct a generator function $G(z; \theta_g)$ with parameters $\theta_g$ that models $p_{data}$ and allows sampling from it. The generator is a learnable differentiable function of a noise vector $z$ from some fixed prior distribution $p_z(z)$. Let us define the distribution of the generator's output (the distribution of the synthetic samples) as $p_g$. The goal of the generator is to make $p_g$ close to the distribution $p_{data}$ in terms of some metric.

In order to learn the data distribution with the generator and make $p_g$ closer to $p_{data}$, the model requires a second neural network: the discriminator $D(x; \theta_d)$ with parameters $\theta_d$. This is a standard binary classifier that distinguishes two classes: real objects (from $p_{data}$) and generated objects (from $p_g$). One can consider the discriminator as a critic that judges the quality of the synthetic samples.

The complete structure of the Generative Adversarial framework is presented in Fig. 1.

Figure 1: Generative Adversarial Networks model

During training, $G$ tries to generate images realistic enough that $D$ will fail in its predictions (and be confused about whether an object is real or generated). On the other hand, $D$ tries to be strong in order to detect the deception by $G$. This leads to the mini-max game (1) between the discriminator and the generator. Here $D(x)$ represents the probability that $x$ came from the data distribution rather than from $p_g$.

The mini-max game (1) can be interpreted as follows. First, we find discriminator parameters $\theta_d$ that make the discriminator as strong as possible. Then, given such a "good" discriminator, we train the generator (with its parameters $\theta_g$) to deceive $D$, i.e., to maximize the discriminator's loss.

$$\min_{\theta_g} \max_{\theta_d} \; \mathbb{E}_{x \sim p_{data}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big] \qquad (1)$$

In practice, the training of $D$ and $G$ is performed simultaneously: we make several updates of $D$ and then several updates of $G$. Both neural networks are updated via Stochastic Gradient Descent bottou2010large or its modifications (in our work, we use the Adam kingma2014adam algorithm). The update rules are the following:

  • Keeping $G$ fixed, update $D$ by ascending its stochastic gradient:

    $$\nabla_{\theta_d} \frac{1}{m} \sum_{i=1}^{m} \Big[\log D(x_i) + \log\big(1 - D(G(z_i))\big)\Big],$$

    where $z_i$ is a sample from $p_z(z)$, $x_i$ is a training object, and $m$ is the size of a training batch.

  • Keeping $D$ fixed, update $G$ by descending its stochastic gradient:

    $$\nabla_{\theta_g} \frac{1}{m} \sum_{i=1}^{m} \log\big(1 - D(G(z_i))\big),$$

    where again $z_i$ is a sample from $p_z(z)$ and $m$ is the size of a training batch.

Since the distributions $p_{data}$ and $p_g$ are defined implicitly, during training we replace the expectations with averages over the batch.
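These two update rules translate directly into code. Below is a minimal PyTorch sketch of one alternating update (our illustration, not the authors' implementation); the function name, the module interfaces, the Gaussian prior and the small constant added for numerical stability are all assumptions:

```python
import torch

def gan_step(G, D, opt_G, opt_D, real, z_dim=100):
    """One alternating GAN update following the two rules above.

    A minimal sketch: G and D are arbitrary torch.nn.Module generator and
    discriminator; D is assumed to output probabilities in (0, 1).
    """
    m = real.size(0)                      # batch size
    z = torch.randn(m, z_dim)             # z ~ p_z(z), here a standard normal prior

    # Keeping G fixed, ascend log D(x) + log(1 - D(G(z))) over theta_d
    # (implemented as descending the negated objective).
    opt_D.zero_grad()
    d_obj = (torch.log(D(real) + 1e-8).mean()
             + torch.log(1 - D(G(z).detach()) + 1e-8).mean())
    (-d_obj).backward()
    opt_D.step()

    # Keeping D fixed, descend log(1 - D(G(z))) over theta_g.
    opt_G.zero_grad()
    g_obj = torch.log(1 - D(G(z)) + 1e-8).mean()
    g_obj.backward()
    opt_G.step()
    return d_obj.item(), g_obj.item()
```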

Since the GANs framework is general and the only assumption concerns the data distribution, it can be applied to any kind of data. In this article, we consider each $x_i$ to be a 3D porous medium.

One of the standard modifications of GANs is Conditional Generative Adversarial Networks mirza2014conditional . Whereas the generator in the original GANs takes only a prior noise vector as input, in the conditional version GANs are conditioned on some prior information. This means that the condition is also an input to the network. In the simplest case, the condition may be the class of the generated object.

Since the goal of our work is to propose a model for conditional generation of 3D porous media given an input 2D slice, we should model the distribution $p(x \mid s)$, where $x$ is a 3D porous medium and $s$ is an input 2D slice. This can be done using Conditional Generative Adversarial Networks mirza2014conditional .

In our proposed model, which we describe in Section 3, we condition the generator on the 2D slice representation.

2.3 Autoencoders

We would like our generator to be conditioned on the 2D input slice. However, the raw 2D representation of a slice is redundant: it has a porous structure and contains only 0s and 1s in its pixel values. This leads to the idea of compressing the slice before passing it to the generator network.

One of the distinctive features of Deep Learning models is good representation learning in the unsupervised setting. For this purpose, autoencoders doersch2016tutorial are commonly used. This is a class of neural models that consists of two neural networks: an encoder and a decoder. The encoder is a function that transforms the data into a latent representation (encodes the data). The purpose of the decoder is to transform the latent representation back into the data.

More formally, let us consider the set of training objects $X = \{x_1, \dots, x_n\}$. Let us denote the encoder with parameters $\theta_{enc}$ as $Enc$ and the decoder with parameters $\theta_{dec}$ as $Dec$. Applying $Enc$ to $x_i$, we obtain the latent representation of the object: $h_i = Enc(x_i)$. In order to reconstruct the object from the hidden state, we apply the decoder: $\hat{x}_i = Dec(h_i)$.

$$L(\theta_{enc}, \theta_{dec}) = \frac{1}{n} \sum_{i=1}^{n} \big\| x_i - Dec(Enc(x_i)) \big\|_2^2 \qquad (2)$$

During training, the goal is to make the reconstruction error (2) as small as possible. For this purpose, we minimize the Euclidean distance between the original input and the reconstruction. For the optimization, one can use standard Stochastic Gradient Descent or its modifications (for example, Adam kingma2014adam ).
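As an illustration, a minimal PyTorch autoencoder trained with the loss (2) can look as follows; the layer sizes and the synthetic input batch are placeholders, not values from the paper:

```python
import torch
import torch.nn as nn

# A small 2D convolutional autoencoder sketch; sizes are illustrative.
encoder = nn.Sequential(nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                        nn.Conv2d(8, 4, 3, stride=2, padding=1))
decoder = nn.Sequential(nn.ConvTranspose2d(4, 8, 4, stride=2, padding=1), nn.ReLU(),
                        nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1))

opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
x = torch.rand(16, 1, 64, 64)             # stand-in batch of 2D slices

for _ in range(10):                        # a few Adam steps
    opt.zero_grad()
    h = encoder(x)                         # latent representation h = Enc(x)
    x_hat = decoder(h)                     # reconstruction x_hat = Dec(h)
    loss = ((x - x_hat) ** 2).mean()       # Euclidean reconstruction error (2)
    loss.backward()
    opt.step()
```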

3 Slice to Pores Generative Adversarial Networks

The goal of this work is to construct a GANs-based framework for 3D porous media synthesis. As discussed earlier, one of the important requirements is the ability to take a slice as input to the generator.

We introduce a new model called Slice to Pores Generative Adversarial Networks (SPGAN). SPGAN combines a convolutional autoencoder with a 3D Deep Convolutional Generative Adversarial Networks model. It can be considered a DCGAN whose generator is conditioned on the noise and on the latent representation of the 2D slice. Our model consists of three neural networks:

  • Encoder $E$ with parameters $\theta_E$ — transforms an input slice $s$ into a vector representation $h = E(s)$;

  • Generator $G$ with parameters $\theta_G$ — transforms the input noise vector $z$ and the encoded slice $h$ into a 3D image $\hat{x} = G(z, h)$;

  • Discriminator $D$ with parameters $\theta_D$ — predicts the class (real or generated) of an input 3D image $x$. This is a standard GANs discriminator.

Thus the SPGAN model is a synergy of two models: an autoencoder and a GAN. The pair of encoder and generator is an autoencoder for slice images, while the pair of generator and discriminator is a GANs model. In the next three sections we describe these models separately and then combine them into one model, presented in Fig. 2.

Figure 2: SPGAN structure
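Before describing the parts in detail, the following sketch shows how the three modules could be composed (our illustration; in particular, the split of the generator input into a 512-dimensional noise part and a 968-dimensional slice code is an assumption that merely matches the 1480 generator input channels listed in Appendix A):

```python
import torch

Z_DIM, H_DIM = 512, 968   # assumed sizes; Z_DIM + H_DIM = 1480 (see Appendix A)

def spgan_forward(E, G, s, z=None):
    """Generate fake 3D images conditioned on 2D slices s of shape (m, 1, H, W)."""
    m = s.size(0)
    if z is None:
        z = torch.randn(m, Z_DIM)              # z ~ p_z(z)
    h = E(s).reshape(m, -1)                    # latent slice representation h = E(s)
    code = torch.cat([z, h], dim=1)            # condition the generator on (z, h)
    return G(code.reshape(m, Z_DIM + H_DIM, 1, 1, 1))
```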

3.1 2D Slice Autoencoder

In order to learn the autoencoder model for 2D slices, we consider the reconstruction loss (3) below. The generator, in this case, plays the role of the decoder: it returns a 3D image as output, and we would like the central slice of this image to be close to the input one. There are two main differences from standard 2D convolutional autoencoders:

  1. Our decoder is a 3D convolutional neural network, so we should be able to extract the central 2D slice from its output;

  2. The decoder takes as input not only the latent representation but also a noise vector $z$ from some prior distribution $p_z(z)$.

In order to obtain the central slice from the 3D image, we introduce a mask $M$. This is a function that takes a 3D image as input and returns its central slice.

$$L_{AE}(\theta_E, \theta_G) = \mathbb{E}_{x \sim p_{data},\, z \sim p_z(z)} \Big\| M(x) - M\big(G(z, E(M(x)))\big) \Big\|_2^2 \qquad (3)$$

The loss function of our autoencoder is (3). The intuition behind it is the following: we minimize the difference between the input slice and the central slice of the generated 3D image.
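Continuing the sketch from Section 3, the mask $M$ and the loss (3) can be written as follows (taking the central slice along the depth axis is our assumption):

```python
def M(volume):
    """Mask M: extract the central slice of a batch of 3D images.

    volume has shape (m, 1, D, H, W); the slice is taken along the depth axis.
    """
    return volume[:, :, volume.size(2) // 2, :, :]

def slice_ae_loss(E, G, x, z):
    """Autoencoder loss (3), reusing spgan_forward from the sketch above."""
    s = M(x)                                   # input central slice
    fake = spgan_forward(E, G, s, z)           # generated 3D image around the slice
    return ((s - M(fake)) ** 2).mean()         # match the two central slices
```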

3.2 3D Images Generative Adversarial Network

For the generation of 3D images, we define a GANs model. The GANs framework was described in Section 2.2. Our GANs model also consists of a generator and a discriminator, with two differences from standard GANs:

  1. We use 3D convolutional layers;

  2. The generator is conditioned on the latent representation of a slice, obtained from the autoencoder described in Section 3.1.

For training the generator and the discriminator we use, as in the GANs framework, the mini-max game (4).

$$\min_{\theta_G} \max_{\theta_D} \; \mathbb{E}_{x \sim p_{data}}\big[\log D(x)\big] + \mathbb{E}_{x \sim p_{data},\, z \sim p_z(z)}\Big[\log\Big(1 - D\big(G(z, E(M(x)))\big)\Big)\Big] \qquad (4)$$

3.3 Algorithm

Since we have two different models (an autoencoder and a GAN), and the generator takes part in both of them, we need an algorithm for training the networks simultaneously. The formal procedure is presented in Algorithm 1.

for number of training iterations do
       Sample a minibatch of 3D images $\{x_1, \dots, x_m\}$ from the dataset;
       Obtain the minibatch of slices $s_i = M(x_i)$, using the mask $M$;
       Sample a minibatch of noise vectors $\{z_1, \dots, z_m\}$ from the prior distribution $p_z(z)$;
       Update the encoder $E$ by descending the stochastic gradient of the autoencoder loss (3): $\nabla_{\theta_E} \frac{1}{m} \sum_{i=1}^{m} \big\| s_i - M(G(z_i, E(s_i))) \big\|_2^2$;
       Update the generator $G$ by descending the same loss gradient with respect to $\theta_G$;
       Obtain the minibatch of latent representations $h_i = E(s_i)$;
       Update the discriminator $D$ by ascending its stochastic gradient $\nabla_{\theta_D} \frac{1}{m} \sum_{i=1}^{m} \big[ \log D(x_i) + \log(1 - D(G(z_i, h_i))) \big]$;
       Update the generator $G$ by descending its stochastic gradient $\nabla_{\theta_G} \frac{1}{m} \sum_{i=1}^{m} \log\big(1 - D(G(z_i, h_i))\big);
end for
Algorithm 1: Training of the SPGAN model

At each iteration, given a minibatch of 3D images and a minibatch of noise vectors, our algorithm works as follows. In the first step, we update the generator and the encoder according to the autoencoder loss (3). In the second step, we update the discriminator and the generator according to the GANs loss (4). We use the Adam kingma2014adam optimization algorithm as a modification of Stochastic Gradient Descent.
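A compact PyTorch version of one iteration of Algorithm 1, built on the helpers sketched above (again our illustration, not the authors' code):

```python
import torch

def spgan_train_step(E, G, D, opt_E, opt_G, opt_D, x):
    """One iteration of Algorithm 1 (a sketch; optimizers are assumed Adam)."""
    m = x.size(0)
    z = torch.randn(m, Z_DIM)                  # minibatch of noise vectors

    # Step 1: autoencoder update of encoder and generator via loss (3).
    opt_E.zero_grad(); opt_G.zero_grad()
    slice_ae_loss(E, G, x, z).backward()
    opt_E.step(); opt_G.step()

    # Step 2: adversarial updates via the mini-max game (4).
    s = M(x)
    fake = spgan_forward(E, G, s, z)
    opt_D.zero_grad()
    d_obj = (torch.log(D(x) + 1e-8).mean()
             + torch.log(1 - D(fake.detach()) + 1e-8).mean())
    (-d_obj).backward()                        # ascend over theta_D
    opt_D.step()

    opt_G.zero_grad()
    g_obj = torch.log(1 - D(spgan_forward(E, G, s, z)) + 1e-8).mean()
    g_obj.backward()                           # descend over theta_G
    opt_G.step()
```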

4 Experiments

For the experiments, we chose several training images: Berea sandstone, Ketton limestone (both from the Imperial College collection imp_college_samples ) and an in-house sandstone. Our sample belongs to the South-Russian geologic formation in Western Siberia. It is a fine- to medium-grained sandstone from a depth of approximately m. We trained the SPGAN model on each of them separately and then compared the generated images with the real ones.

4.1 Experimental Setup

In this section, we describe the setup of our experiments. Initially, we had one big 3D image per structure type (sandstones, limestone). Having only a single large sample per type makes it impossible to train a deep neural model directly, for two reasons: we need a rather big dataset of training examples, and these examples should be small enough to fit into GPU memory. That is why, for each big image, we used a procedure of sampling small overlapping parts (a sketch of this procedure is given after Fig. 7). The images have a voxel size of 3 µm for Berea, Ketton and the South-Russian sandstone. We provide slice-by-slice representations of the generated samples in Fig. 3 (Berea), Fig. 4 (Ketton) and Fig. 5 (South-Russian sandstone). The left image in these figures corresponds to the central (horizontal) slice on which the medium is built up.

Figure 3: Berea generated sample
Figure 4: Ketton generated sample
Figure 5: SR Sandstone generated sample
(a) Berea
(b) Ketton
(c) S-R sandstone
Figure 6: Original 3D samples
(a) Berea
(b) Ketton
(c) S-R sandstone
Figure 7: Generated 3D samples

The original and generated 3D samples are presented in Fig. 6 and Fig. 7, respectively.
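The sampling-with-overlap procedure mentioned above can be sketched as follows; the subvolume size and stride below are illustrative values:

```python
import numpy as np

def sample_subvolumes(volume, size=64, stride=32):
    """Cut overlapping cubic training examples out of one big 3D image."""
    D, H, W = volume.shape
    parts = []
    for i in range(0, D - size + 1, stride):
        for j in range(0, H - size + 1, stride):
            for k in range(0, W - size + 1, stride):
                parts.append(volume[i:i + size, j:j + size, k:k + size])
    return np.stack(parts)

# Example: a 256^3 binary image yields 7^3 = 343 overlapping 64^3 subvolumes.
big = (np.random.rand(256, 256, 256) > 0.7).astype(np.uint8)
print(sample_subvolumes(big).shape)   # (343, 64, 64, 64)
```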

4.2 Generated Media Properties Analysis

One can see that the generated rock samples look very similar to the original ones. To measure the quality of the synthetic data, we compare porosity and the so-called Minkowski functionals. For the comparison, we built box-plots for the generated and real samples separately, which gives a good visualization of the distributions. A box-plot represents the distribution of a statistic, showing its minimum, maximum and mean values, along with the 25% and 75% quartiles and outliers.

Porosity is a measure of the void space in a material, defined as the fraction of the volume of voids over the total volume. It takes values between 0 and 1. The porosity comparison is provided in Fig. 8 and shows close agreement.

Figure 8: Porosity comparison
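The porosity of a segmented binary volume is straightforward to compute; a small NumPy sketch (assuming pore voxels are coded as 1):

```python
import numpy as np

def porosity(volume):
    """Fraction of void voxels in a binary volume (pores coded as 1, an assumption)."""
    return float((volume == 1).mean())

toy = (np.random.rand(64, 64, 64) < 0.3).astype(np.uint8)   # synthetic binary sample
print(porosity(toy))                                        # close to 0.3
```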

Minkowski functionals are unbiased stereological estimators and provide local and global morphological information. In fact, Minkowski functionals describe the morphology and topology of 2D and 3D binary structures. We calculate the four Minkowski functionals (of the solid phase): volume, surface area, mean breadth (a measure of average width) and the Euler-Poincare characteristic (connectivity number), using the package (MATLAB code) of minkowski . The results of the comparison with the same metrics for the original data are shown in Fig. 9 for Berea sandstone, Fig. 11 for Ketton limestone and Fig. 13 for South-Russian sandstone, respectively, and are in good agreement. We also provide a permeability comparison for all samples under consideration (Fig. 10 — Berea, Fig. 12 — Ketton, Fig. 14 — in-house sample).

Figure 9: Minkowski metrics for Berea
Figure 10: Permeability for Berea
Figure 11: Minkowski metrics for Ketton
Figure 12: Permeability for Ketton
Figure 13: Minkowski metrics for South-Russian sandstone
Figure 14: Permeability for South-Russian Sandstone

5 Conclusion and Future Work

We have shown that generative adversarial networks can be efficiently used to reconstruct three-dimensional porous media from slices. In future work, it would be interesting to study other topological properties of generated samples, and also to include properties such as porosity into the loss directly. Another possible direction of research is the application of sparse 3D convolutions (for example, see notchenko2017large ) in order to generate samples at high resolution.

Acknowledgements

The work was partially supported by The Ministry of Education and Science of Russian Federation, grant No. 14.615.21.0004, grant code: RFMEFI61518X0004.

References

Appendix A Network structure

Our model consists of three neural networks: Encoder, Discriminator and Generator. We now describe their structure.

A.1 Generator structure

The generator we use consists of 3D transposed convolution [WZX16], batch normalization [IS15], ReLU and Tanh layers. It has the following structure:

  1. ConvTranspose3d(in_channels=1480, out_channels=512, kernel_size=(4, 4, 4), stride=(1, 1, 1), bias=False)

  2. BatchNorm3d(512, eps=1e-05, momentum=0.1, affine=True)

  3. ReLU()

  4. ConvTranspose3d(in_channels=512, out_channels=256, kernel_size=(4, 4, 4), stride=(2, 2, 2), padding=(1, 1, 1), bias=False)

  5. BatchNorm3d(256, eps=1e-05, momentum=0.1, affine=True)

  6. ReLU()

  7. ConvTranspose3d(in_channels=256, out_channels=128, kernel_size=(4, 4, 4), stride=(2, 2, 2), padding=(1, 1, 1), bias=False)

  8. BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True)

  9. ReLU()

  10. ConvTranspose3d(in_channels=128, out_channels=64, kernel_size=(4, 4, 4), stride=(2, 2, 2), padding=(1, 1, 1), bias=False)

  11. BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True)

  12. ReLU()

  13. ConvTranspose3d(in_channels=64, out_channels=1, kernel_size=(4, 4, 4), stride=(2, 2, 2), padding=(1, 1, 1), bias=False)

  14. Tanh()
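For convenience, the same generator can be assembled as a single PyTorch module (a direct transcription of the list above; the input is expected to have shape (m, 1480, 1, 1, 1) and the output is a 64³ volume):

```python
import torch.nn as nn

generator = nn.Sequential(
    nn.ConvTranspose3d(1480, 512, kernel_size=4, stride=1, bias=False),
    nn.BatchNorm3d(512),
    nn.ReLU(),
    nn.ConvTranspose3d(512, 256, kernel_size=4, stride=2, padding=1, bias=False),
    nn.BatchNorm3d(256),
    nn.ReLU(),
    nn.ConvTranspose3d(256, 128, kernel_size=4, stride=2, padding=1, bias=False),
    nn.BatchNorm3d(128),
    nn.ReLU(),
    nn.ConvTranspose3d(128, 64, kernel_size=4, stride=2, padding=1, bias=False),
    nn.BatchNorm3d(64),
    nn.ReLU(),
    nn.ConvTranspose3d(64, 1, kernel_size=4, stride=2, padding=1, bias=False),
    nn.Tanh(),
)
```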

A.2 Encoder structure

  1. Conv2d(1, 16, (3, 3), stride=(3, 3), padding=(1, 1))

  2. ReLU()

  3. Conv2d(16, 8, (3, 3), stride=(2, 2), padding=(1, 1))

A.3 Discriminator structure

  1. Conv3d(1, 16, kernel_size=(4, 4, 4), stride=(2, 2, 2), padding=(1, 1, 1), bias=False)

  2. LeakyReLU(0.2)

  3. Conv3d(16, 32, kernel_size=(4, 4, 4), stride=(2, 2, 2), padding=(1, 1, 1), bias=False)

  4. BatchNorm3d(32, eps=1e-05, momentum=0.1, affine=True)

  5. LeakyReLU(0.2)

  6. Conv3d(32, 64, kernel_size=(4, 4, 4), stride=(2, 2, 2), padding=(1, 1, 1), bias=False)

  7. BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True)

  8. LeakyReLU(0.2)

  9. Conv3d(64, 128, kernel_size=(4, 4, 4), stride=(2, 2, 2), padding=(1, 1, 1), bias=False)

  10. BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True)

  11. LeakyReLU(0.2)

  12. Conv3d(128, 1, kernel_size=(4, 4, 4), stride=(1, 1, 1), bias=False)
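The encoder (A.2) and discriminator (A.3) transcribe to PyTorch modules in the same way (again a direct transcription of the lists above):

```python
import torch.nn as nn

encoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 8, 3, stride=2, padding=1),
)

discriminator = nn.Sequential(
    nn.Conv3d(1, 16, kernel_size=4, stride=2, padding=1, bias=False),
    nn.LeakyReLU(0.2),
    nn.Conv3d(16, 32, kernel_size=4, stride=2, padding=1, bias=False),
    nn.BatchNorm3d(32),
    nn.LeakyReLU(0.2),
    nn.Conv3d(32, 64, kernel_size=4, stride=2, padding=1, bias=False),
    nn.BatchNorm3d(64),
    nn.LeakyReLU(0.2),
    nn.Conv3d(64, 128, kernel_size=4, stride=2, padding=1, bias=False),
    nn.BatchNorm3d(128),
    nn.LeakyReLU(0.2),
    nn.Conv3d(128, 1, kernel_size=4, stride=1, bias=False),
)
```

On a 64³ input volume, this discriminator reduces the spatial size 64 → 32 → 16 → 8 → 4 → 1, producing a single score per sample; on a 64×64 slice, the encoder produces an 8×11×11 feature map (968 values when flattened).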

Appendix B Training details

In our experiments, training takes around hours on a single Tesla V100 GPU. The parameters of training are presented in Tab. 1.

Parameter            Berea    Ketton    South-Russian sandstone
Learning rate
z dimension
Train images size
Batch size
Number of epochs

Table 1: Training parameters