Automatic classification of geologic units in seismic images using partially interpreted examples

by Bas Peters et al.

Geologic interpretation of large stacked or migrated seismic images can be a time-consuming task for seismic interpreters. Neural network based semantic segmentation provides fast and automatic interpretations, provided a sufficient number of example interpretations are available. Networks that map from image-to-image emerged recently as powerful tools for automatic segmentation, but standard implementations require fully interpreted examples. Generating training labels for large images manually is time consuming. We introduce a partial loss-function and labeling strategies such that networks can learn from partially interpreted seismic images. This strategy requires only a small number of annotated pixels per seismic image. Tests on seismic images and interpretation information from the Sea of Ireland show that we obtain high-quality predicted interpretations from a small number of large seismic images. The combination of a partial loss function, a multi-resolution network that explicitly takes small and large-scale geological features into account, and new labeling strategies make neural networks a more practical tool for automatic seismic interpretation.




1 Introduction

Given a seismic image that is the result of stacking or migration processing, displayed in terms of space and time coordinates, the next step is generating a geological interpretation. An interpretation of the seismic image includes delineating geological features, such as faults, chimneys, channels, and important horizons. In this work, we focus on horizons and geological units. If two horizons are the upper and lower boundary of a specific rock type or unit, the area in between the two horizons corresponds to a single geologic entity. This is not a requirement for the methods and examples we present: any two horizons may also contain an arbitrary stack of layers and rock types in between. We can still consider this a single meta-unit and aim to classify it as such.

Neural networks have a long history of learning from example seismic interpretations to help and speed up manual interpretation. Classic works (Harrigan et al., 1992; Veezhinathan et al., 1993) were, because of limited computational resources, restricted to training on a single (window of a) time-recording, or on a small portion of a seismic image at a time. Various more recent works pose geological interpretation problems as the binary classification of the central pixel of a small patch/volume of seismic data (Tingdahl and De Rooij, 2005; Meldahl et al., 2001; Waldeland et al., 2018; Shi et al., 2018; Alaudah et al., 2018). Applying such a trained network provides the probability that the central pixel of each patch is a fault/channel/salt/chimney/horizon or not. This strategy has the limitation that the network never sees the large-scale features present in large seismic images. Network architectures like autoencoders and U-nets (Ronneberger et al., 2015) that map image-to-image do not have the same limitation, provided the data and label images are sufficiently large. These methods simultaneously classify all pixels in a large image; see Wu and Zhang (2018) and Zhao (2018) for geoscientific applications. Zhao (2018) notes that this advantage also creates a challenge when generating the labels. Whereas image-to-pixel networks can work with scattered annotations, standard networks and loss functions for image-to-image classification require the label images to be annotated fully, i.e., each pixel needs to have a known class.

We propose to use a partial loss function, as introduced by Peters et al. (2018), to be able to work with data and label images for which only a small subset of the pixels have known labels. This approach avoids the aforementioned difficulties when generating labels. We propose and investigate two strategies for choosing the pixels that need to be labeled, for the problem of segmenting a seismic image into distinct geological units. Our training data consist of a few dozen very large seismic images collected in a sedimentary geological area. We employ a U-net-based network design that processes the seismic images at multiple resolutions simultaneously, thereby explicitly taking the various length scales of geological features into account. Numerical results evaluate the proposed labeling strategies. They show that by taking into account geophysical domain-specific information that is not available in other applications, such as natural language processing or video segmentation for self-driving vehicles, we obtain better results using fewer labeled pixels.

2 Labeling strategies for seismic images

Our goal is to develop strategies that require the least amount of time from a human interpreter of seismic images to generate the labels. Many test data sets for semantic segmentation (e.g., CamVid) come with fully annotated label images. Several geophysical works (Zhao, 2018; Di, 2018) also rely on full label images, which are time-consuming to create manually with high accuracy. Therefore, we propose two labeling strategies that require less user input.

The interpretation information provided to the authors by an industrial partner comes in the form of x-y-z locations of a few important geological horizons. Because the geology is sedimentary, we assume that every horizon occurs at a single depth for a given x-y location. This information opens up the possibility of generating labels of geological units by assigning a class value to the area in between any two horizons. A full label image and the corresponding data are shown in Figure 1. The number of occurrences per class in the labels is evidently unbalanced.

Figure 1: (a) One of the 24 data images, and (b) the corresponding full label (not used in this work).

2.1 Scattered annotations

Probably the simplest method to create label images is selecting scattered points in the seismic image and assigning the class to just those pixels. Each location the interpreter investigates yields a single labeled pixel. Questions regarding this strategy include: 1) should we select an equal number of label pixels per class? 2) what if the interpreter only works on ‘easy’ locations and does not create labels near the class boundaries? Answers are beyond the scope of this work, but in the numerical examples we generate an equal number of label pixels per class, at randomly selected locations.
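As an illustration, the scattered labeling strategy can be sketched in NumPy as follows; the function name and the use of -1 to mark unknown pixels are our own conventions, not the paper's:

```python
import numpy as np

def scattered_labels(full_label, n_per_class, rng=None):
    """Build a partial label image by sampling an equal number of pixels
    per class at random locations; pixels without a label are set to -1."""
    rng = np.random.default_rng(rng)
    partial = np.full(full_label.shape, -1, dtype=np.int64)
    for c in np.unique(full_label):
        rows, cols = np.nonzero(full_label == c)
        pick = rng.choice(len(rows), size=min(n_per_class, len(rows)),
                          replace=False)
        partial[rows[pick], cols[pick]] = c
    return partial
```

In practice the interpreter would annotate these points directly rather than subsample a full label; the sketch only mimics that process for experiments where a reference interpretation exists.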

2.2 Column-wise annotations

Suppose the interpreter labels one column at a time. If all horizons intersecting the column are marked, all points in between the horizon locations are also known; annotating only the few interface locations in a column thus yields a fully labeled column. Provided that we are interested in a few horizons and geologic units, column-wise annotation yields a much larger number of label pixels per manual annotation point than scattered annotation. The second benefit is that column-based labeling samples points at, and close to, the class boundaries.
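A minimal sketch of how a few interface picks expand into a fully labeled column, assuming (as in the sedimentary setting described above) a single depth per horizon per trace; the function and the integer class encoding are illustrative:

```python
import numpy as np

def column_label(horizon_depths, n_z):
    """Convert horizon (interface) depth indices for one column/trace
    into a fully labeled column: unit k lies below the k-th horizon."""
    col = np.zeros(n_z, dtype=np.int64)
    for k, z in enumerate(sorted(horizon_depths), start=1):
        col[z:] = k  # everything from horizon k downward belongs to unit k (until the next horizon overwrites it)
    return col
```

For example, `column_label([3, 6], 10)` returns `[0, 0, 0, 1, 1, 1, 2, 2, 2, 2]`: two annotated interfaces yield ten labeled pixels.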

Figure 2: (a) A label image generated by column-wise annotations, and (b) randomly labeled pixels. Both labels are partial versions of Figure 1(b). The colors indicate the class at each location; white space represents unknown labels that are not used to compute the loss or gradient.

3 Network and partial loss function

Our network is based on the U-net (Ronneberger et al., 2015) and maps a vectorized seismic image to per-class probability images of the same size. The final interpretation follows as the class corresponding to the maximum probability per pixel. The network parameters contain convolutional kernels and a final linear classifier matrix. We use the same network design as Peters et al. (2018), which is a symmetric version of the U-net; the number of layers and the number and size of the convolutional kernels per layer follow that work. There are no fully connected layers, so the network can use input-output of varying sizes.
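The final classification step described above amounts to a per-pixel argmax over the predicted probability images, which can be sketched as:

```python
import numpy as np

def interpret(probs):
    """Final interpretation: for per-class probability images of shape
    (n_classes, height, width), pick the most probable class per pixel."""
    return probs.argmax(axis=0)
```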

Let us denote the number of training data/label examples (images) as $n$. We represent the labels, the true probability per class per pixel, for a single image as $\mathbf{y}$, with entry $y_{i,c}$ the probability that pixel $i$ belongs to class $c$. The networks in this work train by minimizing the cross-entropy loss, which for a single data image $\mathbf{x}$ with corresponding full label $\mathbf{y}$ and predicted probabilities $\hat{\mathbf{y}}$ reads

$$\ell(\mathbf{y}, \hat{\mathbf{y}}) = -\sum_{i=1}^{N} \sum_{c=1}^{C} y_{i,c} \log \hat{y}_{i,c}.$$

The cross-entropy loss is separable with respect to the pixels (index $i$). To be able to work with partially known label images, we define the partial cross-entropy loss as

$$\ell_{\Omega}(\mathbf{y}, \hat{\mathbf{y}}) = -\sum_{i \in \Omega} \sum_{c=1}^{C} y_{i,c} \log \hat{y}_{i,c}.$$

The set $\Omega$ contains all pixels for which we have label information. We thus compute the full forward propagation through the network using the full data image $\mathbf{x}$, but the loss and gradient are based on a subset of pixels only.
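A minimal NumPy sketch of the partial cross-entropy: pixels outside the labeled set (encoded here as -1, our own convention) are simply excluded from the sum.

```python
import numpy as np

def partial_cross_entropy(probs, labels):
    """probs: (C, H, W) predicted class probabilities (softmax output).
    labels: (H, W) integer classes, with -1 where the label is unknown.
    Returns the cross-entropy averaged over the labeled pixels only."""
    known = labels >= 0                   # the set Omega of labeled pixels
    idx = labels[known]                   # true class of each labeled pixel
    p = probs[:, known]                   # (C, |Omega|)
    picked = p[idx, np.arange(idx.size)]  # predicted prob. of the true class
    return -np.mean(np.log(picked + 1e-12))
```

Deep-learning frameworks offer equivalent functionality, e.g. an ignore-index option in their cross-entropy losses; the sketch only makes the masking explicit.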

We train the network using Algorithm 1 from Peters et al. (2018), which is stochastic gradient descent using one out of the $n$ data and label images per iteration. We use the partial cross-entropy loss instead of the partial $\ell_2$-loss because we classify each pixel for the segmentation problem. Training runs for a fixed number of epochs, and we reduce the learning rate by a constant factor at regular epoch intervals.
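The training procedure can be sketched as follows; the hyperparameter defaults and the `loss_grad`/`update` callables are placeholders standing in for network-specific code, since the paper's exact epoch count, learning rate, and decay factor are not reproduced here:

```python
import numpy as np

def train(images, labels, loss_grad, update, n_epochs=60,
          lr0=1e-3, decay=0.5, decay_every=20, rng=None):
    """Stochastic gradient descent over whole images: each iteration
    draws one (image, partial label) pair, computes the gradient of the
    partial loss, and updates the network parameters. The learning rate
    is reduced by a constant factor at fixed epoch intervals."""
    rng = np.random.default_rng(rng)
    lr = lr0
    for epoch in range(n_epochs):
        if epoch > 0 and epoch % decay_every == 0:
            lr *= decay
        for k in rng.permutation(len(images)):
            g = loss_grad(images[k], labels[k])
            update(g, lr)
    return lr
```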

4 Results

We evaluate the results of the labels with scattered and column-wise annotations. Our goals are a) to see which of the two sampling strategies performs best, given a fixed number of manually annotated pixels, and b) to determine whether we can obtain highly accurate geologic classifications from a ‘reasonably’ small number of labeled pixels.

Both the seismic data and geologic interface locations from the Sea of Ireland were provided by an industrial partner. There are six classes; see Figure 1. We give ourselves a fixed budget of labeled pixels for each of the seismic images. We can spend this budget either on a small number of fully labeled columns, because only one annotation per interface is needed to obtain a fully labeled column, or on randomly selected label pixels, distributed equally over the classes. Figure 2 displays an example of a label for each labeling strategy.

Figure 3: Classified seismic data using (a) column-sampling based training labels and (b) randomly sampled label pixels. Colors correspond to the class of the maximum predicted probability for each pixel. Errors in predicted class are shown in white in the bottom row, corresponding to the predictions above.

The results in Figure 3 show that column-based annotations lead to more accurate predictions than generating labeled pixels at random locations. This verifies the, perhaps, expected result that a larger number of annotated label pixels provides more information. Additional experiments (not shown) reveal that more known label pixels lead to more accurate classifications. The maximum achievable prediction quality for the column-based labeling is limited, however, because the provided labels were generated manually by industrial seismic interpreters and are themselves not error-free. Both labeling strategies achieve similar prediction accuracy when we increase the number of known label pixels sufficiently. We could also use data augmentation to achieve higher prediction accuracy for a given number of labeled pixels, so the results serve as a baseline for the segmentation quality.
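Prediction errors of the kind shown in white in Figure 3 can be obtained from a simple per-pixel comparison against the reference interpretation; the function below is an illustrative sketch, not the authors' evaluation code:

```python
import numpy as np

def error_map(pred, full_label):
    """Return a per-pixel error mask (True where the predicted class
    differs from the reference interpretation) and the pixel accuracy."""
    errors = pred != full_label
    accuracy = 1.0 - errors.mean()
    return errors, accuracy
```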

5 Conclusions

Interpreting seismic images by classifying them into distinct geologic units is a suitable task for neural-network-based computer vision systems. The networks map seismic images into an interpreted/segmented image. Networks that operate on multiple resolutions simultaneously can take both small- and large-scale geological structures into account. However, standard image-to-image networks and loss functions require target images for which each pixel has label information. This is a problem for geoscientific applications, as it is difficult and time-consuming to annotate large seismic images manually and completely. We presented a segmentation workflow that works with partially labeled target interpretations, thereby removing one of the main (perceived) barriers to the successful application of neural networks to the automatic interpretation of large seismic images. We proposed and evaluated two strategies for generating partial label images efficiently. Generating column-wise labels is more efficient because a small number of interface annotations also provides us with all labels in between. The combination of a symmetric U-net, a partial cross-entropy loss function, training on large seismic images without forming patches, and time-efficient labeling strategies forms a powerful segmentation method.