1 Introduction
Geologic horizons are interfaces between two rock types with distinct petrophysical properties. These horizons are of great importance for understanding the geology and targeting resources such as hydrocarbons and water. Seismic imaging of the subsurface is the method of choice to obtain high-resolution images, from shallow to relatively large depths (see Figure 1 for an example).
Seismic data are collected as a function of a shot location and a recording location. The raw seismic data can be converted to images with the vertical axis representing either depth or time. In this work, we assume that the raw seismic data have already been converted into such images. Traditionally, these seismic images are then manually interpreted by experts to identify and interpret the horizons. Hand-picking the horizons in large seismic cross-sections or 3D volumes can be very time-consuming, especially where the resolution of the seismic image is suboptimal or the geology is complex.
1.1 Previous Work & Related Problems
Seismic attributes (e.g., preprocessed seismic images such as coherence, slope, energy, or dip) and automatic horizon-tracking algorithms help seismic interpreters by tracking the horizon based on a few hand-picked reference locations; however, in more challenging environments they can produce poor results and require additional user supervision.
Automatic horizon detection algorithms come in various flavors. Neural-network-based methods have a long history for these applications. Early works (Harrigan et al., 1992; Veezhinathan et al., 1993; Liu et al., 2005; Huang, 2005; Huang et al., 2005; Kusuma and Fish, 2005; Alberts et al., 2000) use multilayer perceptrons or recurrent networks of a few layers. Neither the quantity nor the quality of data, nor the computing power used in these early works, was comparable with today's standards. Some of the earlier references were limited to working with one or a few time recordings (traces) at a time, thereby limiting the spatial information the networks can exploit.
Wu and Zhang (2018) and Zhao (2018) use a convolutional autoencoder to segment seismic images into a few regions. They pose the segmentation problem as a classification task where the horizons delineate the boundaries between the class regions. Their training data are randomly selected out of a seismic volume and are, therefore, an example of interpolation of horizon locations. Di (2018) proposes to train on a large number of small annotated patches to classify seismic data volumes as an integrated geologic interpretation. A key difference from our work is that we work with the largest images practically possible, such that we can exploit spatial information over long distances to help predictions. A comparison by Zhao (2018) confirms that training image-to-image leads to better predictions than predicting the class of the central pixel from a small patch.

There are also many algorithms for seismic horizon tracking that do not employ neural networks. These often require data preprocessing, or they detect all horizons in a seismic image rather than one specific interface (Kusuma and Fish, 2005; Li et al., 2012). Wu and Fomel (2018) propose a method that uses information over multiple length scales on coarsened computational grids.
Our goal of interpolating or extrapolating a specific seismic horizon differs from the related problems of salt-body detection (Waldeland et al., 2018; Shi et al., 2018), fault detection (Tingdahl and De Rooij, 2005; Araya-Polo et al., 2017), chimney detection (Meldahl et al., 2001), or the detection of multiple features (Alaudah et al., 2018). For these applications, binary classification is the most common formulation: either pixels are the target of interest (e.g., salt) or they are not. Horizon detection presents a different challenge because every seismic image contains many horizons, but we are typically interested in a small, specific subset. We therefore need to learn not only how to detect a horizon, but also the characteristics (thickness, amplitude, position in the stratigraphic sequence, depth, curvature) that uniquely identify it among the other linear features in a seismic image.
1.2 New contributions
We provide a new approach to the horizon detection problem in seismic images. First, given the multiscale nature of seismic data, we employ a recently proposed network architecture (Ronneberger et al., 2015) which has been shown to produce best-in-class performance for image segmentation in other fields, such as medical imaging. Second, in contrast to the majority of datasets used in deep learning for image recognition
(Deng et al., 2009; Krizhevsky and Hinton, 2009), our dataset consists of a relatively small set of large images. To facilitate learning in such conditions, we introduce a partial loss function that enables training on partially labeled horizons. Our partial loss differs from methods that extract a small patch/cube around a label point (also known as a seed interpretation, Meldahl et al. (2005)) and classify the data patch by patch, yielding one classified pixel at a time. The partial loss enables us to train on sparse labels directly, without extracting a patch around the label point.

In contrast to most work based on neural networks, we do not frame our problem as a classification task. Instead, we formulate nonlinear regression problems where the label image values correspond to the probability of a horizon being at that depth for a given location. This is a convenient way to include uncertainty information on the horizon labels explicitly. The network output is therefore also an image that naturally conveys the uncertainty in the horizon depth estimates. Note that a classification approach provides the probability map of a class, which corresponds to the probability of a geologic rock type at each location in the image. The horizon location follows from such information as the points where the maximum class probability changes from one class to another; however, this does not directly provide the probability of the horizon at each location.
Because we train on large images, there is no need to create small patches. We thereby avoid manual user input on the window/patch size, which would impact the results, as well as any artifacts resulting from a tiled solution to the problem. The dataset consists of seismic images only, and the algorithm is trained without any of the attribute images, wavelet information, or preprocessing that earlier work on horizon tracking used as supplemental input; see, e.g., Meldahl et al. (2001); Poulton (2002); Leggett et al. (2003); Huang et al. (2005); Alberts et al. (2000).
Finally, because our approach uses regression to train for the depth of the horizon, special consideration is necessary when preparing labels for the problem. We introduce a novel parametrization of the training label information which lends itself to a more transparent handling of uncertainty information and a probabilistic interpretation of the predicted results. Due to the sparsity of the horizon labels, the resulting training set can be very unbalanced. We handle this problem by rebalancing the training set at each iteration via per-class random sampling and demonstrate the importance of this step for the result.
1.3 Application to field data
We validate the proposed computational methods, loss function and learning strategy using seismic images from sedimentary areas in the North Sea and the Sea of Ireland. We demonstrate the effectiveness of our network architecture and new partial loss function, as well as investigate the difference between alternate problem setups, including interpolation versus extrapolation, and inline versus crossline predictions.
2 Label preparation and handling
The raw label data are xyz coordinates of the location of the horizon of interest. We can directly plot the xyz coordinates in an image by assigning zero to non-horizon locations and one to horizon locations. We found training and prediction with this type of label rather ineffective, and it does not include valuable information about the uncertainty of the horizon picks.
The horizon picks are either hand-picked or obtained using an automatic horizon tracker with some human assistance and quality control. The selected xyz locations are therefore not completely accurate. Another source of label errors is the seismic image itself, from which the labels are generated. The quality of the seismic image decreases if there is noise in the data, or if the geology violates the assumptions of the migration method that generated the seismic image from raw seismic data. A common assumption is that the geology is almost laterally invariant, i.e., slowly varying in the horizontal direction. More advanced imaging algorithms (e.g., reverse time migration) assume a background velocity model that is approximately a smoothed version of the true velocity model. Violations of these assumptions result in parts of the seismic image becoming blurred, and continuous layers break up. The exact location of the horizon is ambiguous in these situations.
To reflect the uncertainty in the provided horizon labels, we add information about the uncertainty as follows: we convolve the horizon locations with a normalized Gaussian kernel. The resulting values are probabilities of the horizon location. The provided xyz location has the highest value, and the probability of a horizon tapers off as the distance from the xyz pick increases. In Figure 1(b) we show an example of a data image and label for the case where we are given a horizon and need to extrapolate it. These images are of the size that we use for training.
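As an illustration, this label construction can be sketched in a few lines of NumPy. The function name, the column-indexed pick format, and the kernel width are our own choices for this sketch, not the authors' implementation; the Gaussian is applied in the vertical (depth) direction only, and we normalize each picked column to sum to one as one plausible reading of "normalized":

```python
import numpy as np

def make_label_image(horizon_depths, n_z, sigma=2.0):
    """Build a probabilistic label image from horizon picks.

    horizon_depths: dict mapping column index -> picked depth index
        (columns without a pick stay all-zero).
    n_z: number of depth samples per column.
    Returns an (n_z, n_columns) label image whose picked columns hold a
    normalized Gaussian centered at the pick, i.e. a probability of the
    horizon depth; the peak sits at the provided pick.
    """
    n_x = max(horizon_depths) + 1
    label = np.zeros((n_z, n_x))
    z = np.arange(n_z)
    for col, depth in horizon_depths.items():
        g = np.exp(-0.5 * ((z - depth) / sigma) ** 2)
        label[:, col] = g / g.sum()  # normalize the column to sum to 1
    return label
```

For example, `make_label_image({0: 10, 2: 12}, n_z=32)` produces a 32-by-3 label image with probability mass concentrated around depths 10 and 12 in columns 0 and 2, and zeros in the unpicked column.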
3 Network design
A key component in the prediction of an interface is the network architecture. Most past work uses very few layers for the prediction, and it has been shown that for many vision applications such shallow networks have limited predictive power. Recent architectures are based on deep networks that can contain tens if not hundreds of layers. One such stable design is the residual network (ResNet, He et al. (2015)), which can be written as

(1)  x_{j+1} = x_j + \sigma(K_j x_j + b_j)

Here, x_j are the states (layers) of the network, the convolution operators K_j and biases b_j are the parameters of each layer to be learned from the data, and \sigma is a nonlinear function such as a hyperbolic tangent or a rectified linear unit (ReLU). A separate linear transformation is used to increase or decrease the number of channels of the network. For our problem, we start with a single channel (the seismic data) and open the network to a few tens of channels.

While ResNets have been very successful for image classification, they tend to be less accurate for segmentation problems. The main problem is scale; convolution is a local operator, and therefore the network can have difficulty learning features that span a number of length scales.
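A minimal NumPy sketch of the residual update in equation 1 may help fix ideas. The fixed 1D kernel below stands in for a learned convolution operator, and all function names are ours, not the authors':

```python
import numpy as np

def relu(x):
    """Rectified linear unit, one choice for the nonlinearity."""
    return np.maximum(x, 0.0)

def conv1d(kernel, x):
    """'Same'-size 1D correlation, a stand-in for a learned convolution."""
    pad = len(kernel) // 2
    xp = np.pad(x, pad)
    return np.array([np.dot(kernel, xp[i:i + len(kernel)])
                     for i in range(len(x))])

def resnet_layer(x, kernel, bias):
    """One residual update: the input plus a nonlinear transformation."""
    return x + relu(conv1d(kernel, x) + bias)
```

With a zero kernel and zero bias the layer reduces to the identity, which is the stability property that makes very deep residual networks trainable.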
To resolve this problem, we use a U-Net (Ronneberger et al., 2015) structure. U-Nets are similar to autoencoders in that they restrict (that is, downsample) the image as the network gets deeper. The network has two "arms". In the downscaling arm, equation 1 is used with a small modification:

(2)  x_{j+1} = R_j ( x_j + \sigma(K_j x_j + b_j) )

Here, R_j is a restriction operator that downsamples the image using full weighting (Briggs et al., 2000).
Let \bar{x} be the image sampled on the lowest resolution. In the second arm of the network, the image is upsampled to its original size, that is, the image is interpolated starting with \bar{x} by the equation

(3)  x_{j+1} = R_j^T ( x_j + \sigma(K_j^T x_j + b_j) )

Here, R_j^T is the transpose of the restriction operator. To obtain symmetry between the two branches of the network, we choose the parameters of the upward branch to be the adjoints of the downward ones; in particular, we use the transposes of the downward convolution operators.
The combination of low-resolution and high-resolution features allows the network to communicate between different scales, which is crucial for our application, where reflectors have both local and global features.
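To illustrate the restriction and its adjoint, here is a 1D full-weighting operator built as an explicit matrix, so the prolongation used in the upsampling arm is simply the matrix transpose. The (1/4, 1/2, 1/4) stencil follows standard multigrid practice (Briggs et al., 2000); the 2D case would apply this along each axis. This is our own sketch, not the paper's code:

```python
import numpy as np

def full_weighting_matrix(n_fine):
    """1D full-weighting restriction R as an explicit matrix.

    Each coarse-grid sample is a weighted average (1/4, 1/2, 1/4) of
    three neighboring fine-grid samples. The adjoint interpolation used
    in the upsampling arm is then just R.T.
    """
    n_coarse = n_fine // 2
    R = np.zeros((n_coarse, n_fine))
    for i in range(n_coarse):
        j = 2 * i + 1          # fine-grid center of coarse sample i
        R[i, j] = 0.5
        R[i, j - 1] = 0.25
        if j + 1 < n_fine:     # guard the right boundary
            R[i, j + 1] = 0.25
    return R
```

Downsampling is `coarse = R @ fine` and the adjoint interpolation is `R.T @ coarse`, which is the symmetry between the two branches described above.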
Layer # | Feature size | # of channels | Kernel size
--------|--------------|---------------|------------
1       |              | 4             |
2       |              | 4             |
3       |              | 4             |
4       |              | 6             |
5       |              | 6             |
6       |              | 6             |
7       |              | 8             |
8       |              | 8             |
9       |              | 8             |
10      |              | 12            |
11      |              | 12            |
12      |              | 12            |
13      |              | 16            |
14      |              | 16            |
15      |              | 16            |
16      |              | 24            |
17      |              | 24            |
18      |              | 32            |
19      |              | 32            |
4 Partial loss function
Consider a network f(y, \theta) that maps a (vectorized) image y of size N to an image of the same size. The network weights \theta consist of convolutional kernels, biases, and a linear classifier (in classification settings this would be a matrix). The last layer of the network reduces a tensor with multiple channels per pixel to a single-channel image; we learn a single classifier that acts on every pixel of the image. The least-squares loss is defined as

(4)  \ell(\theta) = \frac{1}{2} \| f(y, \theta) - c \|_2^2
where c is a vectorized label image. This is a separable function, so we can compute a partial loss over a selection of pixels as

(5)  \ell_\Omega(\theta) = \frac{1}{2} \sum_{i \in \Omega} ( f(y, \theta)_i - c_i )^2

where \Omega is the set of pixel indices at which we have labels. Note that this is a subsampling of the prediction f(y, \theta), which still requires a full forward pass through the network. The gradient computation uses the loss at the points in \Omega only.
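The subsampled least-squares loss of equation 5 is simple to implement. In this NumPy sketch (names ours), the full prediction is assumed to come from a complete forward pass, and only the labeled indices enter the loss:

```python
import numpy as np

def partial_l2_loss(prediction, labels, omega):
    """Least-squares loss restricted to labeled pixel indices omega.

    `prediction` is the full (vectorized) network output, so a complete
    forward pass is still required; only the loss and gradient are
    evaluated at the indices in omega.
    """
    diff = prediction[omega] - labels[omega]
    return 0.5 * np.dot(diff, diff)
```

In an autodiff framework, backpropagating this scalar automatically restricts the gradient to the contribution of the labeled pixels.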
Another interpretation of the partial loss that is more common in the geophysical literature is in terms of a projection. Define P_\Omega as a projection matrix that projects onto the points in \Omega, i.e., P_\Omega contains a subset of the rows of the identity matrix. We can then write the partial loss in equation 5 as

(6)  \ell_\Omega(\theta) = \frac{1}{2} \| P_\Omega f(y, \theta) - P_\Omega c \|_2^2

where P_\Omega c are the partial labels. In this work we use the \ell_1 norm, which is separable as well. The partial, or projected, loss is then defined as

(7)  \ell_\Omega(\theta) = \| P_\Omega f(y, \theta) - P_\Omega c \|_1
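For illustration, the projection-matrix view and the l1 variant of equation 7 can be sketched as follows. Building the projection matrix explicitly is wasteful in practice, where one would simply index the arrays; the names are ours:

```python
import numpy as np

def projection_matrix(omega, n):
    """P: a subset of the rows of the n-by-n identity matrix, which
    projects a length-n vector onto the labeled indices omega."""
    return np.eye(n)[omega, :]

def partial_l1_loss(prediction, labels, omega):
    """Projected l1 loss: || P f - P c ||_1, as in equation 7."""
    P = projection_matrix(omega, len(prediction))
    return np.abs(P @ prediction - P @ labels).sum()
```

The matrix form is mainly useful for reasoning (e.g., writing adjoints), while `prediction[omega]` gives the identical result without forming P.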
The partial loss function enables us to train on partially known labels, as long as we know which pixels they are associated with, without labeling the whole seismic volume.
4.1 Stochastic optimization using a projected loss function
Many neural network training strategies for the classification of datasets that contain a large number of small images use random minibatch stochastic gradient descent (SGD). At each iteration of SGD, the algorithm computes a gradient based on a small number of images and labels. For our applications, we typically only have access to a small number of large images/labels, sometimes even only a single image. If we were to compute a gradient based on a single image/label pair, there would be only a single gradient and no stochastic effects. It has long been observed that full-gradient methods are not competitive with randomized and stochastic gradient-based optimization algorithms for nonconvex optimization in machine learning, particularly for neural networks (Bottou and Bousquet, 2008). The subsampling of the image and label pixels as proposed in the previous section provides us with a stochastic optimization algorithm: we use a random subset of the known label pixels at each iteration.

4.2 Rebalancing for the projected loss
Seismic horizon detection problems have labels in which most pixels have a value equal to zero, meaning there is no probability that the horizon is located at that pixel. In each column, only a few entries have a nonzero label value. This imbalance (many times more zero labels than nonzero labels in our numerical examples) can lead to slower training and low-quality predictions. To mitigate these issues, we apply binary rebalancing and use an equal number of zero and nonzero pixel values.
In a randomized stochastic optimization algorithm, at each iteration we draw randomly selected samples out of the set of known label pixels. Binary rebalancing means that half of the samples have a label value equal to zero and the other half correspond to a nonzero label value. The union of the two subsets forms the batch for the current iteration.
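A sketch of the binary rebalancing draw; the function name and the fallback to sampling with replacement when one class has too few labeled pixels are our own choices:

```python
import numpy as np

def balanced_sample(labels, omega, n_samples, rng):
    """Draw an equal number of zero- and nonzero-valued label indices
    from the set of labeled pixels omega (binary rebalancing)."""
    omega = np.asarray(omega)
    zeros = omega[labels[omega] == 0]
    nonzeros = omega[labels[omega] != 0]
    half = n_samples // 2
    # sample with replacement only if a class has fewer than `half` pixels
    idx_z = rng.choice(zeros, half, replace=len(zeros) < half)
    idx_nz = rng.choice(nonzeros, half, replace=len(nonzeros) < half)
    return np.concatenate([idx_z, idx_nz])
```

Each SGD iteration then evaluates the partial loss on the returned batch of indices rather than on a uniformly random subset.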
We summarize the stochastic optimization algorithm for training neural networks using a partial loss function in combination with binary sample rebalancing in Algorithm 1. The numerical examples show that balancing the zero and nonzero labels results in better predictions.
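Since Algorithm 1 itself is not reproduced here, the following toy sketch shows its structure: at each iteration, a rebalanced batch of labeled pixels is drawn, and an SGD step is taken on the partial least-squares loss. The linear model w*y + b is a stand-in for the network, and all names and hyperparameters are illustrative only:

```python
import numpy as np

def train_partial_loss(data, labels, omega, n_iter=200, batch=32,
                       lr=0.1, seed=0):
    """Toy version of Algorithm 1 with a linear 'network' w*data + b.

    Each iteration draws a batch of labeled pixel indices, rebalanced
    between zero and nonzero label values, and takes an SGD step on the
    partial least-squares loss evaluated at those indices only.
    """
    rng = np.random.default_rng(seed)
    omega = np.asarray(omega)
    zeros = omega[labels[omega] == 0]
    nonzeros = omega[labels[omega] != 0]
    w, b = 0.0, 0.0
    for _ in range(n_iter):
        half = batch // 2
        idx = np.concatenate([rng.choice(zeros, half),
                              rng.choice(nonzeros, half)])
        resid = (w * data[idx] + b) - labels[idx]
        # gradient of 0.5*sum(resid^2) with respect to (w, b)
        w -= lr * np.dot(resid, data[idx]) / len(idx)
        b -= lr * resid.mean()
    return w, b
```

In the paper's setting, the linear model is replaced by the multiresolution network and the gradient step by a deep learning optimizer; the batch construction and partial loss are the essential ingredients.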
5 Field example of horizon tracking using neural networks
Our data consists of seismic images that are models of the reflectivity of the Earth. The amplitude in the data relates to the elastic impedance contrast between the geological layers. The raw data have been processed into a large 3D model. We work with 2D slices. The labels are a combination of hand picking and algorithm assisted horizon tracking. Both data and picks were previously generated as part of a commercial exploration project by an external company.
We present results for extrapolation by inline continuation, as well as interpolation from scattered horizon picks. The results also indicate the effect of balancing the number of zero and nonzero label values in each random batch.
We use the same network design for both examples and train the two networks using the projected loss as defined in equation 7. The initialization of the network parameters is random.
5.1 Horizon interpolation
Hand-picking horizon locations is a time-consuming task. Many interpreted horizons therefore have sparse spatial sampling. In this case, we want to interpolate the picks to obtain continuous horizon surfaces, as shown in Figure 3. To be able to train on just a few labeled points in large images, we need a loss function that measures the loss at the labeled points only, and not at the other parts of the image. For a seismic horizon image, this means that we compute the loss based on the columns that have a horizon label (Figure 3(b)). In each of these columns, there is one horizon location; the other column entries are labels that indicate there is no horizon. The columns where we do not have any labeled information are excluded from training by the projected loss function as defined in equation 7; the network trains on all seismic data but only part of the label images.
The training data (Figure 3(a)) are full 2D slices, without windowing or splitting into patches. The label images are known at only nine random locations per slice on average, provided by an industrial partner. We convolve the horizon location with a Gaussian kernel (in the vertical direction only) to assign an uncertainty to the hand-picked location. All other entries in the same column have a value equal to zero, which indicates the horizon does not occur at that location; see Figure 3(b).
Training proceeds in three stages, with the learning rate reduced by a factor of ten at each stage change. Every iteration of each epoch uses a single data and label image. Out of the known label pixels, we randomly select a fixed number of samples per iteration; as a result, not all label pixels are shown to the network during each epoch. We distribute the samples equally between zero and nonzero values. Note that the Gaussian kernel that we convolve with the horizon xyz locations has a finite width, so each image contains multiple nonzero label values per pick.
Figure 4(b) displays the prediction for two slices. Figure 5(b) shows the same information using color coding for the predicted probability, overlaid on the data. The zoomed version in Figure 7 shows more detail.
The results in Figures 4(b), 5(b), and 7 show excellent predictions. The network output displays the probability of a horizon directly, and no additional postprocessing was applied. The average of nine picks per slice is not a lower limit or a recommended number; good predictions using fewer picks are possible. We note that we could train longer to further reduce the validation loss, see Figure 8. We also did not use any data augmentation, which could benefit training in the case of fewer label points.
Regarding the balancing procedure outlined in the previous section, the loss-function logs in Figure 8 clearly show that not balancing the number of zero and nonzero label points during each SGD iteration leads to a worse validation loss. Note that, contrary to many works on horizon tracking using neural networks based on classification, our nonlinear regression strategy does not come with a single prediction-accuracy number. Figure 9 shows a prediction from training without balancing, which is not close to the desired output in any way.
5.2 Horizon extrapolation
Points that indicate the xyz locations of a horizon are also called horizon picks. Given a collection of picks in an area, we can try to extrapolate the horizon away from the known locations. Much historical industrial work produced large quantities of horizon picks that we can use for training. A potential challenge is that the extrapolation can be in areas with different geology than where the training picks are.
There are multiple types of extrapolation, two of which are shown in Figure 10. Perhaps most similar to standard classification or segmentation tasks on datasets containing many small images (e.g., MNIST, CIFAR) is to train on one set of images and then apply the trained network and classifier to another, test set of images. We call this line-by-line or slice-by-slice learning, where a slice refers to a 2D slice from a 3D tensor. The second strategy extrapolates a horizon inline: the training procedure sees the full data (seismic image), but the label is only partially known.
5.2.1 Line-by-line versus inline extrapolation
We provide some insight into which of the two types of horizon extrapolation is preferable. First of all, it is important to realize that the interpretation of seismic images differs from other problems, such as the segmentation of video images for self-driving vehicles. That application has prerecorded video/images along with segmented labels available for training; the testing data arrive in real time, and segmentation needs to happen in a short amount of time. In our case, the complete 3D seismic volume is available at the time of training; it is only the labels that are incomplete. Therefore, we would like to use all of the data, together with the labels corresponding to part of it. Line-by-line extrapolation keeps a number of slices separate for testing, so the network never has access to those seismic images. In contrast, inline extrapolation trains on the full seismic slices but sees only part of the labels, see Figure 1(b). Because we use a deep neural network with multiple convolutional layers and subsampling/upsampling stages, the data from the area without labels influence the prediction in the area where we do have labels. For this reason, inline extrapolation can utilize all data, and we focus on this method in the remainder of this paper.
For training, we use just a few images. There are three training stages: after the first stage, the learning rate is reduced by a factor of ten, and after the second stage by another factor of ten.
Figure 1(b) shows an example of the labels and data. The evaluation part of each data image corresponds to the extrapolation distance of interest to an industrial partner. In Figure 10(b) we display the predictions for two slices. The prediction to the right of the vertical line shows that we generally predict a continuous line, but it is difficult to see how accurate the prediction is. By color coding the predicted likelihood of the horizon in Figure 11(b), we see that the incorrectly highlighted areas have a much lower probability than the correct horizon locations. We also observe that our prediction on the training part is almost perfect. Figure 12(b) shows a zoomed-in section that better illustrates the relation between the predicted probability and the seismic image.
6 Conclusions
In this work we provided a new look at the problem of detecting horizons in seismic images using neural networks. Specifically, we addressed extrapolation away from previously interpreted horizons, as well as the interpolation of a small number of scattered, hand-picked horizon locations. The proposed networks, loss function, and learning strategies overcome issues that limit the success of automatic interpretation using neural networks.

We employ deep networks with a multiresolution structure to train on a small number of large seismic images that take large-scale geological structures into account, in the sense that information propagates over long distances on multiple scales. This is not directly possible using standard network-based learning methods that train on small image patches.

We proposed a projected loss function that enables training on label images with only a few annotated pixels. Generating such labels is easier and faster than working with conventional label images that require complete labeling of a full image or patch. The standard practice of splitting data and labels into training and test sets is no longer necessary when we train with the projected loss. In seismic imaging, we have access to all data during training; it is the labels that are incomplete. Our networks train on all available seismic images, and we compute the loss and gradient based on a small number of known label pixels. The data in areas without corresponding labels are still seen by the network, and because the network has multiple layers on multiple resolutions, this information influences the predictions and misfit at locations where we do have horizon picks.

Application of the proposed network, loss function, and learning strategy to horizon extrapolation and interpolation showed that our methods provide accurate predictions and uncertainty estimates both close to and farther from known horizon locations.
Our experiments so far were restricted to sedimentary geological settings in the North Sea and the Sea of Ireland. The proposed methods make automatic horizon detection possible using fewer horizon picks while taking all available seismic data into account.
References
 Alaudah et al. [2018] Y. Alaudah, S. Gao, and G. AlRegib. Learning to label seismic structures with deconvolution networks and weak labels. In SEG Technical Program Expanded Abstracts 2018, pages 2121–2125, 2018. doi: 10.1190/segam20182997865.1. URL https://library.seg.org/doi/abs/10.1190/segam20182997865.1.
 Alberts et al. [2000] P. Alberts, M. Warner, and D. Lister. Artificial neural networks for simultaneous multi horizon tracking across discontinuities. In SEG Technical Program Expanded Abstracts 2000, pages 651–653. Society of Exploration Geophysicists, 2000.
 ArayaPolo et al. [2017] M. ArayaPolo, T. Dahlke, C. Frogner, C. Zhang, T. Poggio, and D. Hohl. Automated fault detection without seismic processing. The Leading Edge, 36(3):208–214, 2017. doi: 10.1190/tle36030208.1. URL https://doi.org/10.1190/tle36030208.1.
 Bottou and Bousquet [2008] L. Bottou and O. Bousquet. The tradeoffs of large scale learning. In Advances in neural information processing systems, pages 161–168, 2008.
 Briggs et al. [2000] W. Briggs, V. Henson, and S. McCormick. A Multigrid Tutorial, Second Edition. Society for Industrial and Applied Mathematics, second edition, 2000. doi: 10.1137/1.9780898719505. URL https://epubs.siam.org/doi/abs/10.1137/1.9780898719505.

 Deng et al. [2009] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255, June 2009. doi: 10.1109/CVPR.2009.5206848.
 Di [2018] H. Di. Developing a seismic pattern interpretation network (SpiNet) for automated seismic interpretation. arXiv preprint arXiv:1810.08517, 2018.
 Harrigan et al. [1992] E. Harrigan, J. R. Kroh, W. A. Sandham, and T. S. Durrani. Seismic horizon picking using an artificial neural network. In [Proceedings] ICASSP92: 1992 IEEE International Conference on Acoustics, Speech, and Signal Processing, volume 3, pages 105–108 vol.3, March 1992. doi: 10.1109/ICASSP.1992.226265.
 He et al. [2015] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015. URL http://arxiv.org/abs/1512.03385.
 Huang [2005] K.Y. Huang. Hopfield neural network for seismic horizon picking. In SEG Technical Program Expanded Abstracts 1997, pages 562–565, 2005. doi: 10.1190/1.1885963. URL https://library.seg.org/doi/abs/10.1190/1.1885963.
 Huang et al. [2005] K.Y. Huang, C.H. Chang, W.S. Hsieh, S.C. Hsieh, L. K. Wang, and F.J. Tsai. Cellular neural network for seismic horizon picking. In 2005 9th International Workshop on Cellular Neural Networks and Their Applications, pages 219–222, May 2005. doi: 10.1109/CNNA.2005.1543200.
 Krizhevsky and Hinton [2009] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
 Kusuma and Fish [2005] T. Kusuma and B. C. Fish. Toward more robust neural‐network first break and horizon pickers. In SEG Technical Program Expanded Abstracts 1993, pages 238–241, 2005. doi: 10.1190/1.1822449. URL https://library.seg.org/doi/abs/10.1190/1.1822449.
 Leggett et al. [2003] M. Leggett, W. A. Sandham, and T. S. Durrani. Automated 3D Horizon Tracking and Seismic Classification Using Artificial Neural Networks, pages 31–44. Springer Netherlands, Dordrecht, 2003. ISBN 9789401702713. doi: 10.1007/97894017027133. URL https://doi.org/10.1007/97894017027133.
 Li et al. [2012] L. Li, G. Ma, and X. Du. New method of horizon recognition in seismic data. IEEE Geoscience and Remote Sensing Letters, 9(6):1066–1068, Nov 2012. ISSN 1545598X. doi: 10.1109/LGRS.2012.2190039.
 Liu et al. [2005] X. Liu, P. Xue, and Y. Li. Neural network method for tracing seismic events. In SEG Technical Program Expanded Abstracts 1989, pages 716–718, 2005. doi: 10.1190/1.1889749. URL https://library.seg.org/doi/abs/10.1190/1.1889749.
 Meldahl et al. [2001] P. Meldahl, R. Heggland, B. Bril, and P. de Groot. Identifying faults and gas chimneys using multiattributes and neural networks. The Leading Edge, 20(5):474–482, 2001. doi: 10.1190/1.1438976. URL https://doi.org/10.1190/1.1438976.
 Meldahl et al. [2005] P. Meldahl, R. Heggland, B. Bril, and P. de Groot. The chimney cube, an example of semi‐automated detection of seismic objects by directive attributes and neural networks: Part i; methodology. In SEG Technical Program Expanded Abstracts 1999, pages 931–934, 2005. doi: 10.1190/1.1821262. URL https://library.seg.org/doi/abs/10.1190/1.1821262.
 Poulton [2002] M. M. Poulton. Neural networks as an intelligence amplification tool: A review of applications. GEOPHYSICS, 67(3):979–993, 2002. doi: 10.1190/1.1484539. URL https://doi.org/10.1190/1.1484539.
 Ronneberger et al. [2015] O. Ronneberger, P. Fischer, and T. Brox. Unet: Convolutional networks for biomedical image segmentation. Medical Image Computing and ComputerAssisted Intervention – MICCAI 2015, page 234–241, 2015. ISSN 16113349. doi: 10.1007/978331924574428. URL http://dx.doi.org/10.1007/978331924574428.

 Shi et al. [2018] Y. Shi, X. Wu, and S. Fomel. Automatic salt-body classification using a deep convolutional neural network. In SEG Technical Program Expanded Abstracts 2018, pages 1971–1975, 2018. doi: 10.1190/segam20182997304.1. URL https://library.seg.org/doi/abs/10.1190/segam20182997304.1.
 Tingdahl and De Rooij [2005] K. M. Tingdahl and M. De Rooij. Semi-automatic detection of faults in 3d seismic data. Geophysical Prospecting, 53(4):533–542, 2005. doi: 10.1111/j.13652478.2005.00489.x. URL https://onlinelibrary.wiley.com/doi/abs/10.1111/j.13652478.2005.00489.x.
 Veezhinathan et al. [1993] J. Veezhinathan, F. Kemp, and J. Threet. A hybrid of neural net and branch and bound techniques for seismic horizon tracking. In Proceedings of the 1993 ACM/SIGAPP Symposium on Applied Computing: States of the Art and Practice, SAC ’93, pages 173–178, New York, NY, USA, 1993. ACM. ISBN 0897915674. doi: 10.1145/162754.162863. URL http://doi.acm.org/10.1145/162754.162863.
 Waldeland et al. [2018] A. U. Waldeland, A. C. Jensen, L.J. Gelius, and A. H. S. Solberg. Convolutional neural networks for automated seismic interpretation. The Leading Edge, 37(7):529–537, 2018. doi: 10.1190/tle37070529.1. URL https://doi.org/10.1190/tle37070529.1.
 Wu and Zhang [2018] H. Wu and B. Zhang. A deep convolutional encoderdecoder neural network in assisting seismic horizon tracking. arXiv preprint arXiv:1804.06814, 2018.
 Wu and Fomel [2018] X. Wu and S. Fomel. Leastsquares horizons with local slopes and multigrid correlations. GEOPHYSICS, 83(4):IM29–IM40, 2018. doi: 10.1190/geo20170830.1. URL https://doi.org/10.1190/geo20170830.1.
 Zhao [2018] T. Zhao. Seismic facies classification using different deep convolutional neural networks. In SEG Technical Program Expanded Abstracts 2018, pages 2046–2050, 2018. doi: 10.1190/segam20182997085.1. URL https://library.seg.org/doi/abs/10.1190/segam20182997085.1.