Successfully separating true reflection signals from unwanted noise is a long-standing problem in seismic data processing, and it greatly affects the fidelity of subsequent seismic imaging (e.g., claerbout1985fundamentals and dai2012multi), geophysical inversion, such as amplitude-variation-with-offset (AVO) inversion (e.g., buland2003bayesian and li2014multicomponent) and full waveform inversion (e.g., pratt1999seismic; chen2016detecting; chen2018improved), and geological interpretation (e.g., brown2011interpretation). Seismic data are inevitably affected by different types of noise. The presence of noise in pre-stack seismic data corrupts the amplitude information and thus causes unreliable inversion results. For post-stack seismic data, the presence of noise hampers interpretation, which directly affects the modelling of subsurface reservoirs.
In fact, seismic noise attenuation has gone through a long history of development, which can be traced back to the simplest method: stacking the seismic data along the offset direction (mayne1962common). Soubaras introduced the F-X projection filtering method (soubaras1995prestack). Spitz proposed a prediction error filtering method for recognizing coherent signal in the F-X domain (spitz1999pattern). F-X deconvolution (canales1984random; naghizadeh2012multicomponent), introduced by Canales, has become the most widely used method for random noise attenuation. In addition, Zhou et al. classify seismic noise attenuation methods into categories based on their underlying theories: sparse-transform approaches transform seismic data to a sparse domain, apply soft thresholding to the coefficients, and then transform the sparse coefficients back to the time-space domain, e.g., donoho1994ideal; pratt1998gauss; naghizadeh2012seismic; candes2006fast; herrmann2007non; decomposition-based approaches decompose the corrupted seismic data into components and choose the principal components for signal representation, e.g., chen2014random; gan2016improved; bekara2007local; fomel2013seismic; wu2018data; and rank-reduction-based approaches exploit the low-rank structure of rearranged seismic data, e.g., vautard1992singular; trickett2008f; oropeza2011simultaneous; cheng2015separation; anvari2017seismic. In summary, most of these conventional methods rely on signal features, e.g., wavenumber and frequency, and on domain transformations to attenuate seismic noise.
On the other hand, noise attenuation for images using machine learning has achieved great success in the computer vision field. Deep learning (DL) for image denoising has been developed over the past decades (jain2009natural; rabie2005robust; xie2012image), and works such as burger2012image achieved a giant leap in this field. A Convolutional Neural Network (CNN) trains the network by learning lower-dimensional representations of image features. Benefiting from CNNs, deep residual networks (ResNet: he2016deep), and batch normalization (ioffe2015batch), Zhang et al. proposed the de-noising CNN model (DnCNN), which outperforms traditional non-CNN-based methods (zhang2017beyond). Since then, CNN-based noise attenuation techniques have been widely developed into many variants. Recently, the Noise2Noise model (lehtinen2018noise2noise) was introduced for the noise attenuation task without ground-truth information. CBDNet (guo2019toward) comprises two sub-networks, i.e., noise estimation and non-blind denoising, and achieves state-of-the-art results in terms of both quantitative metrics and visual quality. Similarly, FFDNet (zhang2018ffdnet), RED30 (mao2016image), BM3D-Net (yang2017bm3d), and CS-DIP (van2018compressed) have also achieved impressive performance.
However, directly applying these methods to seismic data may not be effective, since the geophysical domain requires not only the visual quality of the seismic image but also the recovery quality of the seismic signals. For example, during training, a conventional neural network decreases the loss value, e.g., the L1 (Laplace) loss or L2 (Gauss) loss, and makes the predicted values converge to a certain level. As a result, de-noised images may have lower sharpness and look smoother across adjacent pixels. Such techniques may not fit seismic noise attenuation, since the priority there is to keep the phase and amplitude spectra of the valid signal as intact as possible.
In this manuscript, we bring state-of-the-art noise attenuation techniques from the computer vision field into the geoscience field and adapt them to meet the requirements of the geophysical domain for seismic noise attenuation. We first introduce our model framework, and then apply our model to two cases, the synthetic wedge dataset and the SEAM Phase I data (fehler2011seam), followed by result analysis for each. In summary, the deep neural network model proposed in this manuscript can extract the principal components of seismic data to attenuate strong noise and outliers, eventually recovering amplitudes and preserving the original phase of the primary signals without harming them.
In this section, we borrow ideas from pioneering deep learning techniques for image processing and introduce our de-noising model, N2N-Seismic, for the seismic noise attenuation task. Specifically, we first introduce some fundamental concepts of deep learning techniques for image denoising and a state-of-the-art DL-based model, Noise2Noise; secondly, we discuss why they cannot be effectively applied to seismic data processing. Then, we introduce our DL-based solution, N2N-Seismic, specifically designed for seismic image de-noising.
2.1 The Noise2Noise Model
Traditional methods monitor the performance of a deep learning model for noise attenuation tasks by the difference between the generated (de-noised) image and the clean ground truth. However, in recent research, lehtinen2018noise2noise introduced the Noise2Noise model, which can learn to turn corrupted images into good ones by looking only at noised images. Noise2Noise training attempts to learn a mapping between pairs of independently degraded versions of the same training image, i.e., pairs that incorporate the same underlying signal but carry independently drawn random noise. Naturally, conventional neural networks cannot learn to map one particular noisy image to another. However, networks trained on this task are able to produce results close to the performance of traditionally trained networks that do have access to the ground-truth images. In cases where ground-truth data is physically unobtainable, Noise2Noise can still perform the task through such training. A similar idea has been applied in its variants, e.g., Noise2Void (krull2019noise2void).
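As a minimal illustration of this training-pair construction, the sketch below builds Noise2Noise pairs in numpy; the sinusoidal signal, noise level, and sample counts are illustrative stand-ins, not values from our experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical clean signal; the noise scale (sigma) is illustrative.
s = np.sin(np.linspace(0.0, 2.0 * np.pi, 64))

def noisy_pair(signal, sigma=0.3):
    """Build a Noise2Noise training pair: input and target share the same
    underlying signal but carry independently drawn random noise."""
    x = signal + rng.normal(0.0, sigma, signal.shape)
    y = signal + rng.normal(0.0, sigma, signal.shape)
    return x, y

# Under an L2 loss, the optimal prediction for an input x is E[y | x];
# because the target noise is zero-mean and independent of x, that
# expectation is the clean signal itself, so averaging many noisy
# targets approaches s without ever seeing the ground truth.
mean_target = np.mean([noisy_pair(s)[1] for _ in range(5000)], axis=0)
```

This is why training against a second noisy copy can match training against the clean image in expectation.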
Such deep-learning-based methods work effectively for traditional image noise attenuation. However, in the geophysical field, experts care more about physical metrics, e.g., amplitude, phase spectrum, and amplitude spectrum. Using only the Noise2Noise model on seismic data may produce results that make little physical sense. It can be observed that results from the traditional Noise2Noise model tend to flatten signals in the de-noised output, which implies that the phase information of the seismic data is lost. This is due to the loss function, the L2 loss, which only measures the squared distance between the predicted and true values rather than the fluctuation of the signal. As a result, when the noise level is high, the de-noised signal becomes flattened wherever the amplitude of the input signal is extremely low, in order to minimize the loss value.
For an ordinary image this is acceptable, since smooth transitions give a better visual impression. However, for seismic data, these smooth transitions lose important geological information, for instance by causing phase distortion of the seismic data, which is harmful to follow-up processing and analysis. Furthermore, unlike ordinary RGB images, whose pixel values are distributed within (0, 255), seismic data contain severe outliers with extremely high or low amplitudes. Such outlier signals degrade the de-noising performance of the traditional Noise2Noise model. Therefore, the emphasis in this paper is on keeping the signal phase and amplitude as accurate as possible in both the time and frequency domains. We have developed a variant of the traditional Noise2Noise model, named N2N-Seismic, specifically targeting seismic noise attenuation tasks.
2.2 N2N-Seismic: A Variant N2N model for Seismic Noise
2.2.1 Model Design Part (1): Applying ResNet
A Convolutional Neural Network (CNN) has been widely used in image processing, e.g., image classification (krizhevsky2012imagenet) and face recognition (lawrence1997face). The Residual Network (ResNet: he2016deep), a deeper version of a CNN, solved the problem that a deeper network causes higher training/testing loss. ResNet splits the original mapping into two parts:

H(x) = F(x) + x,

where x denotes the original identity, F(x) denotes the residual mapping, and H(x) denotes the final mapping. In this way, the vanishing/exploding gradient and degradation problems of a traditionally stacked deep CNN are eliminated. In recent years, ResNet has been widely applied to image de-noising, such as the super-resolution residual network (SRResNet: ledig2017photo; EDSR+: lim2017enhanced). In this paper, we design our deep learning model based on ResNet techniques.
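The residual formulation can be sketched in a few lines; `residual_fn` here is a hypothetical stand-in for the learned mapping F:

```python
import numpy as np

def residual_unit(x, residual_fn):
    """One residual unit: instead of learning the full mapping H(x)
    directly, the unit learns the residual F(x) = H(x) - x, and the
    shortcut connection adds the identity back: H(x) = F(x) + x."""
    return residual_fn(x) + x

# If the desired mapping is close to the identity, the learned residual
# only needs to stay near zero; a deep stack of such units therefore
# degrades gracefully instead of losing the input signal.
x = np.linspace(-1.0, 1.0, 8)
identity_like = residual_unit(x, lambda v: 0.0 * v)
```

With a zero residual, the unit passes the input through unchanged, which is exactly the behavior a very deep plain CNN struggles to learn.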
2.2.2 Model Design Part (2): Model Structure
Figure 1 shows the structure of the N2N-Seismic model. N2N-Seismic takes corrupted images as inputs and returns de-noised images. Inside the model, we connect a series of residual units, using shortcut connections to link the input and output of each residual unit (see Figure 2). In each residual unit, we use the same convolutional layer followed by a batch normalization layer to expedite convergence and avoid overfitting. For the activation function, as in SRResNet, we employ the Parametric ReLU instead of the ReLU used in the traditional ResNet.
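A rough numpy sketch of one such residual unit follows; the fixed PReLU slope, the simplified batch normalization (no learned scale/shift or running statistics), and the small 1-D convolution kernel are all illustrative simplifications of the actual layers:

```python
import numpy as np

def prelu(x, alpha=0.1):
    # Parametric ReLU: unlike ReLU, the negative slope alpha is learnable
    # in the real model (fixed here for illustration).
    return np.where(x > 0.0, x, alpha * x)

def batch_norm(x, eps=1e-5):
    # Normalize activations to zero mean and unit variance; training-time
    # batch statistics and learnable scale/shift are omitted here.
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def residual_unit(trace, kernel):
    # Convolution -> batch normalization -> PReLU, with the input added
    # back through the shortcut connection.
    fx = np.convolve(trace, kernel, mode="same")
    fx = prelu(batch_norm(fx))
    return trace + fx

out = residual_unit(np.sin(np.linspace(0.0, 6.28, 32)),
                    np.array([0.25, 0.5, 0.25]))
```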
Our random noise attenuation model generates a de-noised image from the given corrupted signal as input. Then, borrowing the idea from the Noise2Noise model, we compare our de-noised image with another independently corrupted image. Here, we use the L2 loss function for training, which minimizes the mean of all the squared differences between the target and the predicted image. Given a predicted image matrix \hat{Y} with dimension H \times W and the target image matrix Y, we calculate the loss as follows:

L(\hat{Y}, Y) = \frac{1}{HW} \sum_{i=1}^{H} \sum_{j=1}^{W} \left( \hat{Y}_{ij} - Y_{ij} \right)^2.

The back-propagation training process learns the optimal parameters, \theta, of the neural network until the loss converges. Based on this loss function, we use the pairs of predicted values and target values to tune the parameters so as to minimize the pixel-wise loss:

\theta^{*} = \arg\min_{\theta} \, L\big(\hat{Y}(\theta), Y\big).
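This pixel-wise L2 loss can be sketched directly (the toy 2 x 2 arrays are illustrative):

```python
import numpy as np

def l2_loss(pred, target):
    """Pixel-wise L2 loss: the mean of all squared differences between
    the predicted H x W image and the target image."""
    return np.mean((pred - target) ** 2)

# Gradient-based training then seeks the parameters minimizing this
# quantity; here we simply evaluate it on a toy example.
loss = l2_loss(np.array([[1.0, 0.0], [0.0, 1.0]]),
               np.array([[0.0, 0.0], [0.0, 0.0]]))
```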
2.2.3 Model Design Part (3): A Follow-up Step for Seismic Data
Unlike an ordinary image with normally distributed values, seismic data may contain portions with extremely low or high amplitude. As discussed, traditional deep-learning-based models focus more on these outliers, since they contribute more to the loss function. Meanwhile, the other portions with low absolute amplitude are predicted close to the mean of the data. As a result, de-noised signals with low absolute amplitude become flattened; we call this phenomenon signal information loss.
To solve the information loss problem for signals with low absolute amplitude, we introduce an iterative follow-up step, Clip & De-noise, applied after the model is trained.
First, we clip the corrupted seismic input based on its absolute amplitude. Specifically, for noised seismic data \hat{x} with K decreasing amplitude levels c_1 > c_2 > \dots > c_K, we clip \hat{x} at each level c_k using the following criteria:

\mathrm{clip}(\hat{x}, c_k)_{ij} =
\begin{cases}
c_k, & \hat{x}_{ij} > c_k, \\
\hat{x}_{ij}, & |\hat{x}_{ij}| \le c_k, \\
-c_k, & \hat{x}_{ij} < -c_k.
\end{cases}
We also mark the clipped values using a binary matrix, M_k, where (M_k)_{ij} = 1 if \hat{x}_{ij} has been clipped at level c_k. Then, we apply our trained model f at each amplitude level in decreasing order, which is calculated by:

\hat{y}_k = c_k \, f\!\left( \mathrm{clip}(\hat{x}, c_k) / c_k \right) \odot (J - M_k),

where J denotes an all-ones matrix and \odot denotes the element-wise product. The model f needs to be called K times internally, followed by a final step that processes the residual pixels not covered by any of the masks. The output of this iterative processing is the final de-noised seismic data. The full algorithm is listed in Algorithm 1.
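A sketch of how this iterative Clip & De-noise loop might be organized is given below; it reflects our reading of Algorithm 1 with an identity stand-in for the trained network, and the band-assignment details are illustrative assumptions rather than the exact published procedure:

```python
import numpy as np

def clip_and_denoise(data, model, thresholds):
    """Iterative Clip & De-noise sketch. `model` stands in for the trained
    network, assumed to operate on data normalized to [-1, 1]; `thresholds`
    are the clipping levels, processed in decreasing order so that
    high-amplitude outliers do not flatten the low-amplitude signal."""
    result = np.zeros_like(data)
    handled = np.zeros(data.shape, dtype=bool)   # binary mask of done pixels
    for c in sorted(thresholds, reverse=True):
        normalized = np.clip(data, -c, c) / c    # clip, then scale to [-1, 1]
        denoised = model(normalized) * c         # denoise and restore the scale
        band = (~handled) & (np.abs(data) >= c)  # pixels owned by this level
        result[band] = denoised[band]
        handled |= band
    # Final step: the residual pixels below the smallest threshold.
    result[~handled] = model(data)[~handled]
    return result

# With an identity "model" and data already inside the smallest threshold,
# the loop leaves the section unchanged.
section = np.linspace(-0.4, 0.4, 9).reshape(3, 3)
restored = clip_and_denoise(section, lambda x: x, thresholds=[1.0, 0.5])
```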
2.3 Evaluation Metrics
To evaluate the performance, we use the following measurements. The signal-to-noise ratio (SNR) is defined as the ratio between the variance of the original gather and that of the noise, where the noise is the difference between the corrupted signal and the clean signal. Given corrupted seismic data \hat{x} (or de-noised seismic data \hat{y}) and its clean sample s, the SNR is defined as:

\mathrm{SNR} = 10 \log_{10} \frac{\mathrm{Var}(s)}{\mathrm{Var}(\hat{x} - s)} \ \mathrm{dB}.
The mean squared error (MSE) is defined as the average of the element-wise squared difference between the predicted signal \hat{y} and the true signal s (both of size H \times W), calculated as follows:

\mathrm{MSE} = \frac{1}{HW} \sum_{i=1}^{H} \sum_{j=1}^{W} \left( \hat{y}_{ij} - s_{ij} \right)^2.
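Both metrics can be sketched in numpy as follows (the toy arrays in the example are illustrative):

```python
import numpy as np

def mse(pred, clean):
    """Average element-wise squared difference between the predicted
    signal and the true signal."""
    return np.mean((pred - clean) ** 2)

def snr_db(signal, clean):
    """SNR in dB: variance of the clean gather over the variance of the
    noise, where the noise is the difference from the clean signal."""
    noise = signal - clean
    return 10.0 * np.log10(np.var(clean) / np.var(noise))

clean = np.array([1.0, -1.0, 1.0, -1.0])
noisy = clean + np.array([0.1, -0.1, 0.1, -0.1])
```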
The aforementioned measurements are widely used in traditional image de-noising evaluation. However, for seismic data, we should also pay attention to the amplitude, phase, and correlation coefficients between the predicted signal and the original signal. We present such evaluation measurements visually in the next section.
3 Case Study 1: Wedge Model Data
3.1 Wedge Dataset Preparing
In this experiment, a simple wedge model is generated with dimensions of 200 samples and 51 traces. Then, four different levels of random noise are added to the wedge model, using zero-mean Gaussian white noise at four increasing scales. The data values have been normalized to the range [-1, 1]. Figure 4 column (a) shows the noise-added seismic data, indicating the information of the wedge model.
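A sketch of such a wedge-plus-noise setup is shown below; the reflector positions and noise scales are hypothetical stand-ins, since the exact values are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical wedge-like section (200 samples x 51 traces): a flat top
# reflector and a dipping base that pinches out at trace 0, with
# amplitudes already in the normalized [-1, 1] range.
n_samples, n_traces = 200, 51
wedge = np.zeros((n_samples, n_traces))
for trace in range(n_traces):
    wedge[80, trace] = 1.0                                # top reflector
    wedge[min(80 + trace, n_samples - 1), trace] = -1.0   # dipping base

# Zero-mean Gaussian white noise at four increasing scales (illustrative
# values; the paper's exact scales may differ).
noisy_levels = [wedge + rng.normal(0.0, sigma, wedge.shape)
                for sigma in (0.05, 0.1, 0.2, 0.4)]
```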
3.2 Experimental Settings
For model training, rather than using seismic images, we used an image dataset from ImageNet (dataset available at http://image-net.org/download-imageurls). We randomly selected 300 images for training and 50 images for validation. These images span different categories, such as animals, plants, and landscapes.
As described in the previous section, we adopted the idea from the Noise2Noise model and added random Gaussian noise to the input and output images separately during the training process. We applied the loss function in Eq. 3. Meanwhile, MSE (Eq. 7) and SNR (Eq. 8) are used as the evaluation metrics.
The training process automatically terminates once the performance converges, and all parameter weights are saved to file. For hyper-parameter tuning, we used the validation data to adjust the hyper-parameters, i.e., the learning rate, feature dimension, number of residual units, and steps per epoch. We adopt Adaptive Moment Estimation (Adam) as the optimizer for training since it yields faster convergence compared to Stochastic Gradient Descent (SGD). The optimal hyper-parameters (listed in Table 1, e.g., 16 residual units and 1000 steps per epoch) are used for the final model training.
Next, we tested the trained DL model on the wedge model data. As discussed in the previous section, the proposed model, N2N-Seismic, applies the clipping process over different ranges of the data values (recall Eq. (6)). The number of iterations, K, depends on the distribution of the data values. Since the data values in the wedge dataset are relatively uniformly distributed from -1 to 1, we chose a low number of iterations, K = 2, for the final prediction, with the thresholds set to 0.5 and 1, respectively.
3.3 Results and Analysis
Landmark Solutions (https://www.landmark.solutions/) is E&P software widely used for seismic data processing. It adopts the FX-Decon algorithm for random seismic noise attenuation. In this manuscript, we use this solution as a rigorous benchmark for comparison (hereafter, the reference method).
Figure 4 shows the random noise attenuation results for the wedge model data. Column (1) displays the noise-added seismic data with different levels of Gaussian noise. Compared with the ground-truth clean data (not shown), the MSE of the noised data increases with the growth of the noise level (from top to bottom). In addition, the SNR value decreases from 20.2 dB to 3.1 dB with the growth of the noise level. We use these corrupted seismic data as the input to our N2N-Seismic model for noise attenuation.
3.3.1 Noise-level Sensitivities
Figure 4 Column (2) shows the de-noised results from the reference method. From the perspective of the evaluation metrics, both the MSE and SNR of these de-noised images are highly sensitive to the noise level. Specifically, the ability of noise attenuation decreases sharply with the growth of the noise level. Figure 4 Column (3) shows the de-noised results from our N2N-Seismic model. Compared with the reference model, the noise attenuation ability of N2N-Seismic decreases only mildly with the growth of the noise level. This observation indicates that the N2N-Seismic model is more robust and less noise-level sensitive than the conventional method.
3.3.2 De-noising Abilities
The evaluation metrics, MSE and SNR, show that our N2N-Seismic model performs significantly better than the reference method. Specifically, in the Level-1 noise case, the MSE drops markedly, a 62.50% improvement over the reference model, and the SNR improves to 26.0 dB, a 21.50% improvement over the reference model. Moreover, such improvement becomes more apparent with the growth of the noise level. For example, in the Level-4 case, N2N-Seismic improves MSE by 86.3% and SNR by 75.3% compared to the reference method. These results indicate that the deep-learning-based N2N-Seismic model performs much better than the conventional method.
3.3.3 Signal Recovery Abilities
Unlike ordinary image processing, seismic data processing imposes stricter criteria with respect to phase, amplitude, etc., during signal recovery. Figure 5 shows a randomly selected trace from the wedge model together with the traces recovered by the reference method and by the N2N-Seismic model. The N2N-Seismic recovered signal (blue) is much closer to the ground truth (green) than the signal recovered by the reference method (yellow), at all noise levels. Specifically, the N2N-Seismic recovered signals have a phase and amplitude similar to the clean signal, whereas the signal recovered by the reference method consistently has a lower amplitude range, and its high-frequency residual noise changes the phase of the original clean signal, especially in the high noise-level cases. This observation once again indicates that the N2N-Seismic model has stronger seismic noise attenuation and signal recovery abilities than the conventional method.
In this case study, we applied our N2N-Seismic model to the synthetic wedge dataset and obtained promising results. N2N-Seismic effectively 1) reduces the MSE by 74.5% on average over the four noise-level cases; 2) improves the SNR by about 44.0% on average over the four noise-level cases; 3) keeps the original phase and amplitude information of the clean signal; and 4) reduces the sensitivity of the recovered signals to the noise level. This case study on wedge data is comparatively less challenging for robustness validation of the N2N-Seismic model due to the relatively uniform data distribution. In the next section, we test the performance on a more complicated case.
4 Case Study 2: SEAM Data
In the previous section, we applied the N2N-Seismic model to the wedge data and obtained promising results. However, that task is relatively simple since the wedge model has a uniform data distribution. In this section, we apply the N2N-Seismic model to a more complicated situation, where the data distribution is much more intricate than in the wedge model data. Here we will see the full benefit of the N2N-Seismic model, especially the clipping procedure of Eq. 6. The performance is compared not only with the conventional method, but also with a deep learning method applied directly to the seismic data, denoted N2N-Image (abbr. N2N-I).
In this case study, we use the public SEG Advanced Modeling Program (SEAM) data (available at https://seg.org/News-Resources/Research-and-Data/SEAM). A single line of this dataset is randomly selected. The dataset has 1600 samples in depth with a sample interval of 5 meters, covering a total depth of 8 kilometers from the water surface; the selected testing area contains 245 traces. Figure 7 column (1) shows the clean seismic data (horizontally compressed to save space).
4.1 SEAM Dataset Preparing
As in the wedge model case study, we added three different levels of random noise below the water bottom line. Let A denote the maximum absolute amplitude and \mu the mean of the SEAM data. Gaussian white noise is then added, with its mean and its three increasing scales defined relative to \mu and A, respectively. We did not normalize the SEAM data. Figure 7 columns (2), (5), and (8) show the noise-added seismic data.
However, unlike the wedge dataset, the SEAM dataset has a more complicated distribution. Figure 6 shows a histogram of the data distribution. As we can see, most of the amplitudes concentrate in a small range; for this majority, the generated noise, with its strong amplitude, covers a large portion of the signal information. In conclusion, this recovery task is more challenging for the noise attenuation model.
4.2 Experimental Settings
As with the wedge model, we use the N2N-Seismic model trained on the ImageNet dataset. We also apply the loss function in Eq. 3, and use MSE (Eq. 7) and SNR (Eq. 8) as evaluation metrics. All optimal hyper-parameters are listed in Table 1.
Next, we apply the trained model with the optimal parameters to the SEAM Phase I data (fehler2011seam). As discussed in the previous section, N2N-Seismic performs clipping over different ranges of the data values (recall Eq. 6). The number of iterations, K, depends on the data value distribution. Since the data values in the SEAM dataset cover a broader range, we chose a higher number of iterations, K = 5, for the final prediction, with the five thresholds set evenly.
Since the pre-trained model expects normalized input data, the SEAM data is also normalized, by dividing by each threshold c_k. The pre-trained model is then applied for noise attenuation, and the result is marked as \hat{y}_k. We then take the element-wise product of the predicted values and the binary mask matrix M_k, and recover the scale of the normalized data by multiplying by the threshold c_k. Finally, the noise-attenuated data is assembled by integrating the results of the K iterations of inference.
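One iteration of this per-threshold inference can be sketched as follows; the identity `model`, the toy data, and the mask semantics are illustrative assumptions:

```python
import numpy as np

def denoise_level(data, model, c, mask):
    """One inference iteration for a single threshold c (our reading of
    the SEAM procedure): normalize by c so the pre-trained model sees data
    in its training range, denoise, undo the normalization, and keep only
    the pixels selected by the binary mask for this amplitude level."""
    normalized = np.clip(data, -c, c) / c   # divide by the threshold
    predicted = model(normalized) * c       # recover the original scale
    return predicted * mask                 # element-wise product with the mask

# The K per-level results are then summed into the final de-noised section.
data = np.array([[2.0, 0.5], [-3.0, 0.1]])
mask = np.array([[1.0, 0.0], [1.0, 0.0]])
level_out = denoise_level(data, lambda x: x, c=1.0, mask=mask)
```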
4.3 Results and Analysis
Figure 7 exhibits the comparison of random noise attenuation performance between the conventional method and our N2N-Seismic model. Figure 7 (1) shows the clean image without random noise. Figures 7 (2), (5), and (8) show the clean image with low-, middle-, and high-level random noise below the water bottom line. With the growth of the noise level, we observe that the primary signals are gradually covered by the noise. Referring to Figures 8 and 9, which zoom into the red quadrangle regions A and B of Figure 7 (1), we can observe that the primary signals and many details of the seismic data have been obscured by the Gaussian noise (compare (1) and (2)). The primary signals have lower absolute amplitude than the noise, as discussed before. Therefore, recovering such signal information from the noise is more challenging than in the previous case study.
As a comparison, Figures 7 (3), (6), and (9) show the seismic image de-noised by the N2N-Seismic model. Compared with the results from the reference method, the N2N-Seismic model recovers more details with much cleaner results, for both low and high absolute-amplitude regions. Zooming into the selected quadrangle region (Figure 9 (3)), more details have been recovered, especially at the bottom edge.
Figures 7 (4), (7), and (10) exhibit the results de-noised by the reference method. As we can see, the signals with high absolute amplitude have been recognized and recovered; however, the recovered seismic data loses many more details compared with the ground truth. More specifically, referring to the close-up images in Figures 8 and 9, with the growth of the noise level, the images de-noised by the reference method lose more details, especially at the bottom edge.
We have thus initially established that the N2N-Seismic model has a stronger noise attenuation ability than the conventional method. In the following sub-sections, we evaluate the results with a more detailed numerical comparison with respect to MSE, SNR, correlation coefficient, phase, etc. In this comparison, we also include the traditional deep learning method applied directly to the seismic data (N2N-Image), to demonstrate the performance of our proposed model, N2N-Seismic.
4.3.1 MSE and SNR
Table 2 compares the performance of the reference method, N2N-Image, and N2N-Seismic. In the low noise-level case, the MSE of the results de-noised by the N2N-Seismic model decreases from 3247.6 (noised image) to 296.1 (90.9% attenuated); compared with the reference results, this is a 66.8% improvement, and compared with the N2N-Image results, a 35.7% improvement. In terms of SNR, the N2N-Seismic result improves 168.6% over the noised data, a significant 93.0% over the reference result, and 14.2% over the N2N-Image result.

Similar improvements are found in the middle noise-level case. The MSE of the results de-noised by the N2N-Seismic model decreases from 29280.9 (noised image) to 1168.6 (96.0% attenuated); compared with the reference results, this is a 39.32% improvement, and compared with the N2N-Image results, an 11.7% improvement. In terms of SNR, the N2N-Seismic result improves 667.0% over the noised data, a significant 164.6% over the reference result, and 5.6% over the N2N-Image result.

An even more pronounced improvement is observed in the high noise-level case. The MSE of the results de-noised by the N2N-Seismic model decreases from 81310.4 (noised image) to 2070.0 (97.5% attenuated); compared with the reference results, this is a 28.6% improvement, and compared with the N2N-Image results, a 3.8% improvement. In terms of SNR, the N2N-Seismic result improves 1105.7% over the noised data, dramatically more than 400 times over the reference result, and 1.1% over the N2N-Image result.
It should be highlighted that the reference method performs extremely poorly at improving the SNR. In some cases, e.g., high-level noise, the results de-noised by the reference model even have a lower SNR than the noised image, which indicates that conventional methods have limited ability for seismic noise attenuation in high noise-level cases. In contrast, N2N-Seismic obtains much better results with respect to MSE and SNR; even compared with the N2N-Image model, which directly applies a traditional deep learning model to seismic data, the conventional method cannot provide a satisfactory solution.
4.3.2 Correlation Coefficient
Unlike in ordinary image processing, in the geophysical field the correlation coefficient between the de-noised signals and the original clean signals is extremely important because it reveals the consistency of the seismic phase. Therefore, we consider the correlation coefficient as another evaluation metric in Table 2. Due to the added Gaussian noise, the correlation coefficients between the noised data and the ground truth drop to 0.79, 0.42, and 0.27 at the three noise levels, respectively. For the results de-noised by the N2N-Seismic model, the correlation coefficients increase to 0.97, 0.90, and 0.81, respectively. Specifically, in the low noise-level case, we improve 3.3% over the reference method and 1.3% over the N2N-Image model; in the middle noise-level case, 5.8% over the reference method and 1.1% over the N2N-Image model; and in the high noise-level case, 8.9% over the reference method and 0.6% over the N2N-Image model.
4.3.3 Phase Recovery
Next, we convert the dataset from the depth to the time domain and analyze the phase spectrum of the de-noised data. Figure 10 shows the phase spectra of the original clean data and of the data de-noised by the reference method and by N2N-Seismic. We only show the phase spectrum between 0 Hz and 60 Hz, because above 60 Hz the amplitude of the signal is extremely low, so such high-frequency components do not affect the primary signals and have little effect on follow-up processing and analysis. As seen in Figure 10, from 0 Hz to 19 Hz the results from N2N-Seismic are closer to the original phase than those from the reference method at every noise level; above 19 Hz, the results from N2N-Seismic fit the original phase nearly perfectly, similarly to the results from the N2N-Image and reference methods. Above 40 Hz, N2N-Seismic still performs better than the reference model.
Table 3 details the correlation coefficients of the phase in the important frequency range of 0 Hz to 60 Hz. As shown, for all frequency ranges, the results from the N2N-Seismic model are much better than those from the reference model in all three noise-level cases, except for an outlier in the 50 Hz to 60 Hz range in the low-level case. Also, the phase correlation coefficients from the N2N-Seismic model are better than those from the N2N-Image model, which indicates that our model is more effective for seismic data.
4.3.4 Signal Information Recovery
Finally, we compare the signal recovery abilities of the reference method, N2N-Image, and N2N-Seismic. As discussed before, in ordinary image processing with a deep learning model (N2N-Image), the CNN has the objective of reducing the loss value, e.g., the L1 or L2 loss, and making the predicted values converge to a certain level during training. As a result, the de-noised images may exhibit less sharpness and look smoother across adjacent pixels. Such results are not acceptable in the geophysical field, since the first priority of seismic noise attenuation is to keep the phase and amplitude spectra of the signal as close to intact as possible.
As an example, Figure 11 shows a random trace in the depth range of 50 in the high noise-level case. Compared with the clean trace, the trace de-noised by N2N-Image changes the seismic phase (circled in red). Such information loss would cause serious issues for seismic interpretation. In comparison, the trace de-noised by the reference method recovers more phase information than N2N-Image, although it still retains some residual high-frequency noise; for seismic noise attenuation, the reference method is thus a better solution than the traditional ordinary-image processing method. However, the trace de-noised by N2N-Seismic demonstrates that it is a better approach to solving the information loss problem than N2N-Image (refer to the red circles at the same depth), and it also shows visibly lower noise than the reference method.
In this case study, we exhibited the performance of the N2N-Seismic model on a more complicated case. On the one hand, compared with the conventional method, the results from N2N-Seismic impressively improve the MSE and the correlation coefficient; furthermore, the N2N-Seismic model dramatically increases the SNR by up to 400 times. N2N-Seismic keeps the phase and amplitude spectra consistent with the original clean data, proving much better than the conventional method. However, for the phase correlation coefficient comparison, the conventional method did slightly better in one frequency range.
On the other hand, compared with the N2N-Image model, which directly applies a deep learning model designed for ordinary images to the seismic data, N2N-Seismic still offers clear advantages with respect to MSE, SNR, and correlation coefficient. Most importantly, N2N-Seismic preserves more of the seismic information details than N2N-Image, which is very helpful for subsequent data processing and analysis. These observations indicate that our model, specifically designed for seismic data, effectively achieves the objectives of seismic noise attenuation.
In this manuscript, we proposed a deep learning model with CNN-based residual neural networks for random seismic noise attenuation tasks. Rather than directly applying a de-noising model built for ordinary images to the seismic data, our proposed method, N2N-Seismic, has a strong ability to recover the seismic wavelets to a near-intact condition while preserving the signal. Comparisons on the two examples with wedge and SEAM data show that our method performs much better than conventional methods for noise attenuation tasks in terms of SNR, MSE, phase spectrum, etc.
In conclusion, the main contribution of this manuscript is a deep-learning solution for random noise attenuation tasks. The method absorbs the benefits of the deep neural networks applied to ordinary image de-noising in computer vision, while meeting geophysical requirements and expectations. Through rigorous comparisons with conventional methods on several benchmark studies, our proposed deep-learning model successfully accomplishes the tasks above and achieves prominent improvements with respect to MSE, SNR, etc.