Given a seismic image that is the result of stacking or migration processing, in terms of space and time coordinates, the next step is generating a geological interpretation. An interpretation of the seismic image includes delineating geological features, such as faults, chimneys, channels and important horizons. In this work, we focus on horizons and geological units. If two horizons are the upper and lower boundary of a specific rock type or unit, the area in between the two horizons corresponds to a single geologic entity. This is not a requirement for the methods and examples we present: any two horizons may also contain an arbitrary stack of layers and rock types in between. We can still consider this as a single meta-unit and aim to classify it as such.
Neural networks have a long history of learning from example seismic interpretations, to help and speed up manual interpretation. Classic works (Harrigan et al., 1992; Veezhinathan et al., 1993) were, because of limited computational resources, restricted to training on a single (window of a) time-recording, or a small portion of a seismic image at a time. Various more recent works pose geological interpretation problems as the binary classification of the central pixel of a small patch/volume of seismic data (Tingdahl and De Rooij, 2005; Meldahl et al., 2001; Waldeland et al., 2018; Shi et al., 2018; Alaudah et al., 2018). Applying such a trained network provides the probability that the central pixel of each patch is a fault/channel/salt/chimney/horizon or not. This strategy has the limitation that the network never sees the large-scale features present in large seismic images. Network architectures like autoencoders and U-nets (Ronneberger et al., 2015) that map image-to-image do not have the same limitation, provided the data and label images are sufficiently large. These methods simultaneously classify all pixels in a large image; see Wu and Zhang (2018); Zhao (2018) for geoscientific applications. Zhao (2018) notes that this advantage also creates a challenge when generating the labels. Whereas image-to-pixel networks can work with scattered annotations, standard networks and loss functions for image-to-image classification require the label images to be annotated fully, i.e., each pixel needs to have a known class.
We propose to use a partial loss function, as introduced by Peters et al. (2018), to be able to work with data and label images for which only a small subset of the pixels have known labels. This approach avoids the aforementioned difficulties when generating labels. We propose and investigate two strategies for choosing the pixels that need to be labeled, for the problem of segmenting a seismic image into distinct geological units. Our training data consist of a few dozen very large seismic images collected in a sedimentary geological area. We employ a U-net based network design that processes the seismic images on multiple resolutions simultaneously, thereby explicitly taking various length-scales of geological features into account. Numerical results evaluate the proposed labeling strategies. They show that if we take geophysical domain-specific information into account that is not available in other applications, such as natural language processing or video segmentation for self-driving vehicles, we obtain better results using fewer labeled pixels.
2 Labeling strategies for seismic images
Our goal is to develop strategies that require the least amount of time from a human interpreter of seismic images to generate the labels. Many test data sets for semantic segmentation (e.g., CamVid) come with fully annotated label images. Several geophysical works (Zhao, 2018; Di, 2018) also rely on full label images, which are time-consuming to create manually with high accuracy. Therefore, we propose two labeling strategies that require less user input.
The interpretation information provided to the authors by an industrial partner comes in the form of x-y-z locations of a few important geological horizons. Because the geology is sedimentary, we assume that every horizon occurs at a single depth for a given x-y location. This information makes it possible to generate labels of geological units by assigning a class value to the area in between any two horizons. A full label image and the corresponding data are shown in Figure 1. The number of occurrences per class in the labels is evidently unbalanced.
2.1 Scattered annotations
Probably the simplest method to create label images is to select scattered points in the seismic image and assign a class to just those pixels. Each location the interpreter investigates in the seismic image then yields one labeled pixel. Questions regarding this strategy include: 1) should we select an equal number of label pixels per class? 2) What if the interpreter only works on ‘easy’ locations and does not create labels near the class boundaries? Answers are beyond the scope of this work, but in the numerical examples, we generate an equal number of label pixels per class, at randomly selected locations.
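The scattered-annotation strategy above can be simulated as follows. This is a minimal NumPy sketch under our own assumptions (the function name, the use of a reference label image as a stand-in for the interpreter's choices, and the boolean-mask output are illustrative, not the authors' code):

```python
import numpy as np

def scattered_labels(full_labels, n_per_class, rng):
    """Simulate scattered annotation: pick n_per_class random pixels per class.

    full_labels : (n_z, n_x) integer class image, standing in for the
                  interpreter's knowledge of the true classes
    n_per_class : number of labeled pixels to generate for each class
    rng         : numpy random Generator
    Returns a boolean mask that is True at the labeled pixels.
    """
    mask = np.zeros(full_labels.shape, dtype=bool)
    for c in np.unique(full_labels):
        # All pixel coordinates belonging to class c.
        zz, xx = np.nonzero(full_labels == c)
        # Draw an equal number of random locations per class, without repeats.
        pick = rng.choice(len(zz), size=min(n_per_class, len(zz)), replace=False)
        mask[zz[pick], xx[pick]] = True
    return mask
```

The equal-per-class sampling mirrors the choice made in our numerical examples; in practice the interpreter, not a reference label image, supplies the classes at the picked locations.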
2.2 Column-wise annotations
Suppose the interpreter labels one column at a time. If the horizons are marked in a column, all points in between the marked horizon locations are also known, so annotating only the interface points labels the entire column. Provided that we are interested in a few horizons and geologic units, column-wise annotation yields a much larger number of label pixels per manual annotation point compared to scattered annotation. The second benefit is that column-based labeling samples points at, and close to, the class boundaries.
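The expansion of a few interface picks into a fully labeled column can be sketched as follows; this is an illustrative NumPy fragment (function name and conventions are our own, assuming classes are numbered top to bottom):

```python
import numpy as np

def column_labels(horizon_depths, n_z):
    """Build a fully labeled column from interface (horizon) depths.

    horizon_depths : z-indices where the interpreter marked horizons
    n_z            : number of depth samples in the column
    Returns an (n_z,) integer array: unit k lies between horizons k-1 and k.
    """
    col = np.zeros(n_z, dtype=int)
    for k, z in enumerate(sorted(horizon_depths), start=1):
        # Every sample at or below horizon z belongs to the next unit down.
        col[z:] = k
    return col
```

This illustrates why the strategy is economical: two annotated interface points are enough to assign a class to every pixel of the column.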
3 Network and partial loss function
Our network is based on the U-net (Ronneberger et al., 2015) and maps a vectorized seismic image to one probability image per class, each of the same size as the input. The final interpretation follows as the class corresponding to the maximum probability per pixel. The network parameters consist of convolutional kernels and a final linear classifier matrix. We use the same network design as Peters et al. (2018), which is a symmetric version of the U-net with a modest number of convolutional kernels per layer. There are no fully connected layers, so the network can use input-output of varying sizes.
Let us denote the number of training data/label examples (images) by $n$. We represent the labels, the true probability per class per pixel, for a single image with $N$ pixels as $\mathbf{c} \in \mathbb{R}^{N \times n_{\mathrm{class}}}$. The networks in this work train by minimizing the cross-entropy loss, which for a single data image $\mathbf{y}$ with corresponding full label $\mathbf{c}$ reads
\[
\ell(\mathbf{y}, \mathbf{c}, \boldsymbol{\theta}) = - \sum_{i=1}^{N} \sum_{j=1}^{n_{\mathrm{class}}} c_{i,j} \log f(\mathbf{y}, \boldsymbol{\theta})_{i,j},
\]
where $f(\mathbf{y}, \boldsymbol{\theta})$ denotes the predicted class probabilities produced by the network with parameters $\boldsymbol{\theta}$. The cross-entropy loss is separable with respect to the pixels (index $i$). To be able to work with partially known label images, we define the partial cross-entropy loss as
\[
\ell_{\Omega}(\mathbf{y}, \mathbf{c}, \boldsymbol{\theta}) = - \sum_{i \in \Omega} \sum_{j=1}^{n_{\mathrm{class}}} c_{i,j} \log f(\mathbf{y}, \boldsymbol{\theta})_{i,j}.
\]
The set $\Omega$ contains all pixels for which we have label information. We thus compute the full forward propagation through the network using the full data image $\mathbf{y}$, but the loss and gradient are based on a subset of pixels only.
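Restricting the loss to the labeled pixels is straightforward to implement. A minimal NumPy sketch (function and variable names are our own illustrative choices; in a real training loop the probabilities would come from the network's forward propagation):

```python
import numpy as np

def partial_cross_entropy(probs, labels, mask):
    """Cross-entropy evaluated only on the pixels with known labels.

    probs  : (n_pixels, n_classes) predicted class probabilities per pixel
    labels : (n_pixels,) integer class per pixel (ignored where mask is False)
    mask   : (n_pixels,) boolean, True on the set of labeled pixels
    """
    idx = np.flatnonzero(mask)
    # Sum of -log(probability of the true class) over labeled pixels only;
    # unlabeled pixels contribute nothing to the loss or its gradient.
    return -np.sum(np.log(probs[idx, labels[idx]]))
```

Because the loss is separable over pixels, masking the sum this way leaves the gradient with respect to the network parameters well defined without requiring labels everywhere.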
We train the network using Algorithm 1 from Peters et al. (2018), which is stochastic gradient descent using one of the $n$ data and label images per iteration. We use the partial cross-entropy loss instead of the partial loss used for regression in that work, because we classify each pixel for the segmentation problem. Training runs for a fixed number of epochs, and we periodically reduce the learning rate by a constant factor.
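The training procedure can be sketched as follows. This is a hedged illustration, not the authors' implementation: the default schedule constants are placeholders (the paper's actual epoch count, decay factor, and initial learning rate are not reproduced here), and `grad_fn` stands in for the gradient of the partial cross-entropy loss through the network:

```python
import numpy as np

def train(images, labels, masks, grad_fn, theta,
          n_epochs=30, lr0=0.5, decay=0.5, decay_every=10, rng=None):
    """SGD on one (image, partial label) pair per iteration,
    with a stepwise learning-rate decay. All constants are illustrative."""
    rng = rng or np.random.default_rng(0)
    lr = lr0
    for epoch in range(n_epochs):
        if epoch > 0 and epoch % decay_every == 0:
            lr *= decay                      # periodic learning-rate reduction
        for i in rng.permutation(len(images)):
            # Gradient of the partial loss for a single data/label pair.
            g = grad_fn(theta, images[i], labels[i], masks[i])
            theta = theta - lr * g           # one image per iteration
    return theta
```

Using one large image per iteration (rather than small patches) is what lets the multi-resolution network see large-scale geological structure during training.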
4 Numerical examples

We evaluate the results of the labels with scattered and column-wise annotations. Our goals are a) to see which of the two sampling strategies performs best, given a fixed number of manually annotated pixels, and b) to determine whether we can obtain highly accurate geologic classifications from a ‘reasonably’ small number of labeled pixels.
Both the seismic data and geologic interface locations from the Irish Sea were provided by an industrial partner. There are six classes, see Figure 1. We give ourselves a fixed budget of labeled pixels for each of the seismic images. Within this budget, we can either label a number of columns fully, because only the interface locations are needed to obtain a fully labeled column, or label the same total number of randomly selected pixels, divided equally over the classes. Figure 2 displays an example of a label for each labeling strategy.
The results in Figure 3 show that column-based annotation leads to more accurate predictions compared to generating labeled pixels at random locations. This confirms the, perhaps, expected result that a larger number of annotated label pixels provides more information. Additional experiments (not shown) reveal that more known label pixels lead to more accurate classifications. The maximum increase in prediction quality for column-based labeling is limited, however, because the provided labels were generated manually by industrial seismic interpreters and are not error-free. Both labeling strategies achieve similar prediction accuracy when we increase the number of known label pixels sufficiently. We could also use data augmentation to achieve higher prediction accuracy for a given number of labeled pixels, so the results serve as a baseline for the segmentation quality.
5 Conclusions

Interpreting seismic images by classifying them into distinct geologic units is a suitable task for neural-network-based computer vision systems. The networks map seismic images into an interpreted/segmented image. Networks that operate on multiple resolutions simultaneously can take both small- and large-scale geological structure into account. However, standard image-to-image networks and loss functions require target images for which each pixel has label information. This is a problem for geoscientific applications, as it is difficult and time-consuming to annotate large seismic images manually and completely. We presented a segmentation workflow that works with partially labeled target interpretations, thereby removing one of the main (perceived) barriers to the successful application of neural networks to the automatic interpretation of large seismic images. We proposed and evaluated two strategies for generating partial label images efficiently. Generating column-wise labels is more efficient because a small number of interface annotations also provides us with all labels in between. The combination of a symmetric U-net, a partial cross-entropy loss function, training on large seismic images without forming patches, and time-efficient labeling strategies forms a powerful segmentation method.
- Alaudah et al.  Y. Alaudah, S. Gao, and G. AlRegib. Learning to label seismic structures with deconvolution networks and weak labels. In SEG Technical Program Expanded Abstracts 2018, pages 2121–2125, 2018. doi: 10.1190/segam2018-2997865.1. URL https://library.seg.org/doi/abs/10.1190/segam2018-2997865.1.
- Di  H. Di. Developing a seismic pattern interpretation network (spinet) for automated seismic interpretation. arXiv preprint arXiv:1810.08517, 2018.
- Harrigan et al.  E. Harrigan, J. R. Kroh, W. A. Sandham, and T. S. Durrani. Seismic horizon picking using an artificial neural network. In [Proceedings] ICASSP-92: 1992 IEEE International Conference on Acoustics, Speech, and Signal Processing, volume 3, pages 105–108 vol.3, March 1992. doi: 10.1109/ICASSP.1992.226265.
- Meldahl et al.  P. Meldahl, R. Heggland, B. Bril, and P. de Groot. Identifying faults and gas chimneys using multiattributes and neural networks. The Leading Edge, 20(5):474–482, 2001. doi: 10.1190/1.1438976. URL https://doi.org/10.1190/1.1438976.
- Peters et al.  B. Peters, J. Granek, and E. Haber. Multi-resolution neural networks for tracking seismic horizons from few training images. arXiv preprint arXiv:1812.11092, 2018.
- Ronneberger et al.  O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. Medical Image Computing and Computer-Assisted Intervention, pages 234–241, 2015. ISSN 1611-3349. doi: 10.1007/978-3-319-24574-4_28. URL http://dx.doi.org/10.1007/978-3-319-24574-4_28.
- Shi et al.  Y. Shi, X. Wu, and S. Fomel. Automatic salt-body classification using deep-convolutional neural network. In SEG Technical Program Expanded Abstracts 2018, pages 1971–1975, 2018. doi: 10.1190/segam2018-2997304.1. URL https://library.seg.org/doi/abs/10.1190/segam2018-2997304.1.
- Tingdahl and De Rooij  K. M. Tingdahl and M. De Rooij. Semi-automatic detection of faults in 3d seismic data. Geophysical Prospecting, 53(4):533–542, 2005. doi: 10.1111/j.1365-2478.2005.00489.x. URL https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1365-2478.2005.00489.x.
- Veezhinathan et al.  J. Veezhinathan, F. Kemp, and J. Threet. A hybrid of neural net and branch and bound techniques for seismic horizon tracking. In Proceedings of the 1993 ACM/SIGAPP Symposium on Applied Computing: States of the Art and Practice, SAC ’93, pages 173–178, New York, NY, USA, 1993. ACM. ISBN 0-89791-567-4. doi: 10.1145/162754.162863. URL http://doi.acm.org/10.1145/162754.162863.
- Waldeland et al.  A. U. Waldeland, A. C. Jensen, L.-J. Gelius, and A. H. S. Solberg. Convolutional neural networks for automated seismic interpretation. The Leading Edge, 37(7):529–537, 2018. doi: 10.1190/tle37070529.1. URL https://doi.org/10.1190/tle37070529.1.
- Wu and Zhang  H. Wu and B. Zhang. A deep convolutional encoder-decoder neural network in assisting seismic horizon tracking. arXiv preprint arXiv:1804.06814, 2018.
- Zhao  T. Zhao. Seismic facies classification using different deep convolutional neural networks. In SEG Technical Program Expanded Abstracts 2018, pages 2046–2050, 2018. doi: 10.1190/segam2018-2997085.1. URL https://library.seg.org/doi/abs/10.1190/segam2018-2997085.1.