
Unsupervised Learning of Full-Waveform Inversion: Connecting CNN and Partial Differential Equation in a Loop

by   Peng Jin, et al.
Penn State University
Los Alamos National Laboratory
Michigan State University

This paper investigates unsupervised learning of Full-Waveform Inversion (FWI), which has been widely used in geophysics to estimate subsurface velocity maps from seismic data. This problem is mathematically formulated by a second-order partial differential equation (PDE) that is hard to solve. Moreover, acquiring velocity maps is extremely expensive, making it impractical to scale up a supervised approach to train the mapping from seismic data to velocity maps with convolutional neural networks (CNN). We address these difficulties by integrating PDE and CNN in a loop, thus shifting the paradigm to unsupervised learning that only requires seismic data. In particular, we use finite difference to approximate the forward modeling of the PDE as a differentiable operator (from velocity map to seismic data) and model its inversion by a CNN (from seismic data to velocity map). Hence, we transform the supervised inversion task into an unsupervised seismic data reconstruction task. We also introduce a new large-scale dataset, OpenFWI, to establish a more challenging benchmark for the community. Experimental results show that our model (using seismic data alone) yields comparable accuracy to the supervised counterpart (using both seismic data and velocity maps). Furthermore, it outperforms the supervised model when more seismic data is involved.




1 Introduction

Geophysical properties (such as velocity, impedance, and density) play an important role in various subsurface applications including subsurface energy exploration, carbon capture and sequestration, estimating pathways of subsurface contaminant transport, and earthquake early warning systems to provide critical alerts. These properties can be obtained via seismic surveys, i.e., receiving reflected/refracted seismic waves generated by a controlled source. This paper focuses on reconstructing subsurface velocity maps from seismic measurements. Mathematically, the velocity map and seismic measurements are correlated through an acoustic-wave equation (a second-order partial differential equation) as follows:

∇²p(r, t) − (1/v(r)²) ∂²p(r, t)/∂t² = s(r, t),    (1)

where p(r, t) denotes the pressure wavefield at spatial location r and time t, v(r) represents the velocity map of wave propagation, and s(r, t) is the source term. Full Waveform Inversion (FWI) is a methodology that determines high-resolution velocity maps v(r) of the subsurface via matching synthetic seismic waveforms p̃(r_s, t) to raw recorded seismic data p(r_s, t), where r_s represents the locations of the seismic receivers.

Figure 2: An example of (a) a velocity map and (b) seismic measurements (named shot gather in geophysics) and the 1D time-series signal recorded by a receiver.

A velocity map describes the wave propagation speed in the subsurface region of interest. An example in a 2D scenario is shown in Figure 2(a). Particularly, the x-axis represents the horizontal offset of a region, and the y-axis stands for the depth. Regions with the same geologic information (velocity) are called layers in velocity maps. In a sample of seismic measurements (termed a shot gather in geophysics), as depicted in Figure 2(b), each grid in the x-axis represents a receiver, and the value along the y-axis is a 1D time-series signal recorded by that receiver.

Existing approaches solve FWI in two directions: physics-driven and data-driven. Physics-driven approaches rely on the forward modeling of Equation 1, which simulates seismic data from a velocity map by finite difference. They optimize the velocity map per seismic sample, iteratively updating it from an initial guess such that the simulated seismic data (after forward modeling) are close to the input seismic measurements. However, these methods are slow and difficult to scale up, as the iterative optimization is required per input sample. Data-driven approaches treat the FWI problem as an image-to-image translation task and apply convolutional neural networks (CNN) to learn the mapping from seismic data to velocity maps (Wu and Lin, 2019). The limitation of these methods is that they require paired seismic data and velocity maps to train the network. Such ground truth velocity maps are hardly accessible in real-world scenarios because generating them is extremely time-consuming even for domain experts.

In this work, we leverage advantages of both directions (physics + data driven) and shift the paradigm to unsupervised learning of FWI by connecting forward modeling and CNN in a loop. Specifically, as shown in Figure 1, a CNN is trained to predict a velocity map from seismic data, which is followed by forward modeling to reconstruct the seismic data. The loop is closed by applying a reconstruction loss on seismic data to train the CNN. Since the forward modeling is differentiable, the whole loop can be trained end-to-end. Note that the CNN is trained in an unsupervised manner, as the ground truth of the velocity map is not needed. We name our unsupervised approach UPFWI (Unsupervised Physics-informed Full Waveform Inversion).

Additionally, we find that perceptual loss (Johnson et al., 2016) is crucial to improve the overall quality of predicted velocity maps due to its superior capability in preserving the coherence of the reconstructed waveforms compared with other losses like Mean Squared Error (MSE) and Mean Absolute Error (MAE).

To encourage fair comparison on a large dataset with more complicated geological structures, we introduce a new dataset named OpenFWI, which contains 60,000 labeled data (velocity map and seismic data pairs) and 48,000 unlabeled data (seismic data alone). 30,000 of those velocity maps contain curved layers that are more challenging for inversion. We also add geological faults with various shift distances and tilting angles to all velocity maps.

We evaluate our method on this large dataset. Experimental results show that for velocity maps with flat layers, our UPFWI trained with 48,000 unlabeled data achieves 1146.09 in MSE, which is 26.77% smaller than that of the supervised method, and 0.9895 in Structural Similarity (SSIM), which is 0.0021 higher than the score of the supervised method; for velocity maps with curved layers, our UPFWI achieves 3639.96 in MSE, which is 28.30% smaller than that of the supervised method, and 0.9756 in SSIM, which is 0.0057 higher than the score of the supervised method.

Our contribution is summarized as follows:

  • We propose to solve FWI in an unsupervised manner by connecting CNN and forward modeling in a loop, enabling end-to-end learning from seismic data alone.

  • We find that perceptual loss is crucial for boosting the performance to a level comparable to the supervised counterpart.

  • We introduce a large-scale dataset as a benchmark to encourage further research on FWI.

Figure 3: Unsupervised UPFWI (ours) vs. Supervised H-PGNN+ (Sun et al., 2021). Our method achieves better performance, e.g. lower Mean Squared Error (MSE) and higher Structural Similarity (SSIM), when involving more unlabeled data (24k).

2 Preliminaries of Full Waveform Inversion (FWI)

The goal of FWI in geophysics is to invert for a velocity map v ∈ ℝ^(W×H) from seismic measurements p ∈ ℝ^(S×T×R), where W and H denote the horizontal and vertical dimensions of the velocity map, S is the number of sources used to generate waves during the data acquisition process, T denotes the number of samples in the wavefields recorded by each receiver, and R represents the total number of receivers.

In conventional physics-driven methods, forward modeling commonly refers to the process of simulating seismic data p̃ from a given estimated velocity map v̂. For simplicity, the forward acoustic-wave operator f can be expressed as

p̃ = f(v̂).    (2)
Given this forward operator f, physics-driven FWI can be posed as a minimization problem (Virieux and Operto, 2009)

min_{v̂} ‖p − f(v̂)‖₂² + λ R(v̂),    (3)

where ‖p − f(v̂)‖₂² is the ℓ2 distance between the true seismic measurements p and the corresponding simulated data f(v̂), λ is a regularization parameter, and R(v̂) is the regularization term, which is often the ℓ1 or ℓ2 norm of v̂. This requires optimization per sample, which is slow as the optimization involves multiple iterations from an initial guess.
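As a concrete illustration of why this per-sample optimization is costly, the sketch below runs plain gradient descent on the regularized objective above for a single sample. A toy linear operator F stands in for the (in reality non-linear) forward modeling f; all names, sizes, and hyper-parameters here are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Toy sketch of physics-driven FWI as per-sample iterative optimization.
# F is a random linear surrogate for the forward operator f.
rng = np.random.default_rng(0)
F = rng.standard_normal((50, 20))        # surrogate forward operator
v_true = rng.standard_normal(20)         # "velocity" to recover
p = F @ v_true                           # observed "seismic data"

def fwi_objective(v, lam=1e-3):
    # ||p - f(v)||_2^2 + lam * ||v||_2^2 (Tikhonov-style regularization)
    r = p - F @ v
    return r @ r + lam * (v @ v)

v = np.zeros(20)                         # initial guess
lr, lam = 1e-3, 1e-3
for _ in range(2000):                    # many iterations *per sample*
    grad = -2.0 * F.T @ (p - F @ v) + 2.0 * lam * v
    v = v - lr * grad
```

Every new seismic sample requires repeating this whole loop (with an expensive PDE solve inside each iteration in the real setting), which is exactly the scalability bottleneck the data-driven direction avoids.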

Data-driven methods leverage convolutional neural networks to directly learn the inverse mapping as (Adler et al., 2021)

v̂ = g_θ(p),    (4)

where g_θ is the approximated inverse operator of f, parameterized by θ. In practice, g_θ is usually implemented as a convolutional neural network (Adler et al., 2021; Wu and Lin, 2019). This requires paired seismic data and velocity maps for supervised learning. However, the acquisition of a large volume of velocity maps in field applications can be extremely challenging and computationally prohibitive.

3 Method

In this section, we present our unsupervised physics-informed solution (named UPFWI), which connects CNN and forward modeling in a loop. It addresses limitations of both physics-driven and data-driven approaches, as it requires neither per-sample optimization at inference nor velocity maps as supervision.

3.1 UPFWI: Connecting CNN and Forward Modeling

As depicted in Figure 1, our UPFWI connects a CNN and a differentiable forward operator to form a loop. In particular, the CNN takes seismic measurements p as input and generates the corresponding velocity map v̂. We then apply the forward acoustic-wave operator f (see Equation 2) on the estimated velocity map to reconstruct the seismic data p̃. Typically, the forward modeling employs finite difference (FD) to discretize the wave equation (Equation 1); the details are discussed in subsection 3.3. The loop is closed by the reconstruction loss between the input seismic data p and the reconstructed seismic data p̃. Notice that the ground truth of velocity maps is not involved, so the training process is unsupervised. Since the forward operator is differentiable, the reconstruction loss can be backpropagated (via gradient descent) to update the parameters θ in the CNN.
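The loop above can be sketched in a few lines of PyTorch. The inversion CNN and the forward operator are replaced by tiny stand-ins (a linear layer and a fixed random linear map) so the example stays self-contained; all sizes, names, and the surrogate operator are illustrative assumptions, not the paper's components.

```python
import torch

# Minimal sketch of the UPFWI loop: g predicts velocity from seismic data,
# a differentiable forward operator re-simulates seismic data, and the
# reconstruction loss trains g without any velocity-map labels.
S, T, R = 2, 50, 10                       # sources, timesteps, receivers (toy)
H = W = 10                                # toy velocity-map grid

torch.manual_seed(0)
g = torch.nn.Sequential(                  # stand-in for the inversion CNN
    torch.nn.Flatten(),
    torch.nn.Linear(S * T * R, H * W),
)
A = torch.randn(S * T * R, H * W) * 0.03  # stand-in differentiable forward op

def forward_modeling(v_flat):             # velocity map -> seismic data
    return (v_flat @ A.t()).reshape(-1, S, T, R)

p = torch.randn(4, S, T, R)               # a batch of unlabeled seismic data
opt = torch.optim.Adam(g.parameters(), lr=1e-3)

losses = []
for _ in range(50):                       # a few unsupervised training steps
    v_hat = g(p)                          # seismic -> velocity (no labels)
    p_hat = forward_modeling(v_hat)       # velocity -> seismic
    loss = torch.nn.functional.mse_loss(p_hat, p)
    opt.zero_grad()
    loss.backward()                       # gradients flow through f into g
    opt.step()
    losses.append(loss.item())
```

The key point is that `loss.backward()` propagates through the forward operator into the CNN, so only seismic data is needed for training.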

3.2 CNN Network Architecture

We use an encoder-decoder structured CNN (similar to Wu and Lin (2019) and Zhang and Lin (2020)) to model the mapping from seismic data to velocity map. The encoder compresses the seismic input and then transforms the latent vector into the velocity estimation through a decoder. Since the number of receivers R and the number of timesteps T in the seismic measurements are unbalanced (T ≫ R), we first stack one 7×1 and six 3×1 convolutional layers (with stride 2 in every other layer to reduce dimension) to extract temporal features until the temporal dimension is close to R. Then, six 3×3 convolutional layers follow to extract spatial-temporal features; the resolution is down-sampled in every other layer by using stride 2. Next, the feature map is flattened and a fully connected layer is applied to generate the latent feature with dimension 512. The decoder first repeats the latent vector 25 times to generate a 5×5×512 tensor. It is then followed by five 3×3 convolutional layers with nearest-neighbor upsampling in between, resulting in a feature map of size 80×80×32. Finally, we center-crop the feature map (to 70×70) and apply a 3×3 convolutional layer to output a single-channel velocity map. All the aforementioned convolutional and upsampling layers are followed by batch normalization (Ioffe and Szegedy, 2015) and a leaky ReLU (Nair and Hinton, 2010) as the activation function.
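A condensed PyTorch sketch of such an encoder-decoder is given below. The layer counts, channel widths, and the pooling shortcut before the fully connected layer are simplifications of the architecture described above, not a faithful reimplementation.

```python
import torch
import torch.nn as nn

# Abbreviated encoder-decoder sketch: temporal 7x1/3x1 convolutions, a
# spatio-temporal 3x3 stage, a 512-d latent, nearest-neighbor upsampling
# to 80x80, and a center-crop to the 70x70 velocity map.
class InversionCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(5, 32, kernel_size=(7, 1), padding=(3, 0)),   # temporal
            nn.BatchNorm2d(32), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, kernel_size=(3, 1), stride=(2, 1), padding=(1, 0)),
            nn.BatchNorm2d(64), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1),  # spatio-temporal
            nn.BatchNorm2d(64), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1),          # shortcut for flatten-to-latent
            nn.Flatten(),
            nn.Linear(64, 512),               # 512-d latent vector
        )
        self.decoder = nn.Sequential(
            nn.Unflatten(1, (512, 1, 1)),
            nn.Upsample(scale_factor=5, mode="nearest"),   # -> 5x5x512
            nn.Conv2d(512, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32), nn.LeakyReLU(0.2),
            nn.Upsample(scale_factor=16, mode="nearest"),  # -> 80x80
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32), nn.LeakyReLU(0.2),
        )
        self.head = nn.Conv2d(32, 1, kernel_size=3, padding=1)  # 1-channel map

    def forward(self, p):
        z = self.encoder(p)          # seismic (S x T x R) -> latent
        x = self.decoder(z)          # latent -> 80x80 feature map
        x = x[:, :, 5:75, 5:75]      # center-crop to 70x70
        return self.head(x)

v_hat = InversionCNN()(torch.randn(1, 5, 1000, 70))  # (1, 1, 70, 70)
```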

3.3 Differentiable Forward Modeling

We apply the standard finite difference (FD) in the space and time domains to discretize the original wave equation. Specifically, the second-order central finite difference in the time domain (∂²p/∂t² in Equation 1) is approximated as follows:

∂²p/∂t² ≈ (p_{t+1} − 2p_t + p_{t−1}) / (Δt)²,    (5)

where p_t denotes the pressure wavefield at timestep t, and p_{t−1} and p_{t+1} are the wavefields at timesteps t − 1 and t + 1, respectively. The Laplacian of p can be estimated in a similar way in the space domain (see Appendix). The wave equation can then be written as

p_{t+1} = (2 + v²(Δt)²∇²) p_t − p_{t−1} − v²(Δt)² s_t,    (6)

where ∇² here denotes the discrete Laplace operator.

The initial wavefield at timestep 0 is set to zero (i.e., p_0 = 0). Thus, the gradient of the loss L with respect to the estimated velocity v̂ at spatial location r can be computed using the chain rule as

∂L/∂v̂(r) = Σ_{t=1}^{T} (∂L/∂p̃_t) (∂p̃_t/∂v̂(r)),    (7)

where T indicates the length of the sequence.
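The discretized update above (p_{t+1} from p_t and p_{t−1}) can be sketched in a few lines of NumPy. The grid, source/receiver placement, and homogeneous medium below are illustrative choices, and absorbing boundary conditions (used in practice) are omitted.

```python
import numpy as np

# Time-stepping sketch of the discrete wave equation:
# p_{t+1} = 2 p_t - p_{t-1} + v^2 (dt)^2 (lap(p_t) - s_t)
nz = nx = 70                      # grid points in depth and offset
dx, dt, nt = 15.0, 1e-3, 300      # 15 m spacing, 1 ms steps (CFL-stable)
v = np.full((nz, nx), 3000.0)     # homogeneous 3000 m/s medium

def laplacian(p):
    # 5-point second-order stencil; boundary rows/columns left at zero
    l = np.zeros_like(p)
    l[1:-1, 1:-1] = (p[2:, 1:-1] + p[:-2, 1:-1] + p[1:-1, 2:]
                     + p[1:-1, :-2] - 4.0 * p[1:-1, 1:-1]) / dx**2
    return l

def ricker(t, f=25.0):
    # Ricker wavelet with central frequency f, delayed by 1/f
    a = (np.pi * f * (t - 1.0 / f)) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

p_prev = np.zeros((nz, nx))       # p at t-1 (zero initial wavefield)
p_curr = np.zeros((nz, nx))       # p at t
trace = []                        # signal at one near-surface receiver
for it in range(nt):
    s = np.zeros((nz, nx))
    s[1, nx // 2] = -ricker(it * dt)      # point source just below surface
    p_next = (2.0 * p_curr - p_prev
              + (v * dt) ** 2 * (laplacian(p_curr) - s))
    p_prev, p_curr = p_curr, p_next
    trace.append(p_curr[1, nx // 4])      # record the wavefield

trace = np.array(trace)
```

In UPFWI the same update is written with differentiable tensor operations so that, as noted above, gradients can flow back through every timestep (at the cost of storing the intermediate wavefields).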

3.4 Loss Function

The reconstruction loss of our UPFWI includes a pixel-wise loss and a perceptual loss as follows:

L(p, p̃) = L_pixel(p, p̃) + L_perceptual(p, p̃),    (8)

where p and p̃ are the input and reconstructed seismic data, respectively. The pixel-wise loss L_pixel combines the ℓ1 and ℓ2 distances as:

L_pixel(p, p̃) = λ₁ ‖p − p̃‖₁ + λ₂ ‖p − p̃‖₂,    (9)

where λ₁ and λ₂ are two hyper-parameters to control the relative importance. For the perceptual loss L_perceptual, we extract features from conv5 in a VGG-16 network (Simonyan and Zisserman, 2015) pretrained on ImageNet (Krizhevsky et al., 2012) and combine the ℓ1 and ℓ2 distances as:

L_perceptual(p, p̃) = λ₃ ‖φ(p) − φ(p̃)‖₁ + λ₄ ‖φ(p) − φ(p̃)‖₂,    (10)

where φ(·) represents the output of conv5 in the VGG-16 network, and λ₃ and λ₄ are two hyper-parameters. Compared to the pixel-wise loss, the perceptual loss better captures region-wise structure, which reflects waveform coherence. This is crucial to boost the overall accuracy of the velocity map (e.g., the quantitative velocity values and the structural information).
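The combined loss above can be sketched as follows. A small random convolutional stack stands in for the feature extractor φ (conv5 of an ImageNet-pretrained VGG-16 in the paper) so the example needs no downloaded weights, and the mean-reduced ℓ1/MSE losses are used as simple stand-ins for the distances.

```python
import torch
import torch.nn.functional as F

# Sketch of the pixel-wise + perceptual reconstruction loss; phi below is
# an untrained stand-in for pretrained VGG-16 conv5 features.
torch.manual_seed(0)
phi = torch.nn.Sequential(
    torch.nn.Conv2d(5, 16, 3, stride=2, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 32, 3, stride=2, padding=1), torch.nn.ReLU(),
)

def pixel_loss(p, p_hat, lam1=1.0, lam2=1.0):
    # lam1 * l1 distance + lam2 * l2 distance (mean-reduced stand-ins)
    return lam1 * F.l1_loss(p_hat, p) + lam2 * F.mse_loss(p_hat, p)

def perceptual_loss(p, p_hat, lam3=1.0, lam4=1.0):
    fp, fq = phi(p), phi(p_hat)       # feature-space comparison
    return lam3 * F.l1_loss(fq, fp) + lam4 * F.mse_loss(fq, fp)

def reconstruction_loss(p, p_hat):
    return pixel_loss(p, p_hat) + perceptual_loss(p, p_hat)

p = torch.randn(2, 5, 64, 64)             # illustrative seismic batch
p_hat = p + 0.1 * torch.randn_like(p)     # imperfect reconstruction
loss = reconstruction_loss(p, p_hat)
```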

4 OpenFWI Dataset

We introduce a new large-scale geophysics FWI dataset, OpenFWI, which consists of 108K seismic data samples for two types of velocity maps: one with flat layers (named FlatFault) and the other with curved layers (named CurvedFault). Each type has 54K seismic data samples, including 30K with paired velocity maps (labeled) and 24K unlabeled. The 30K labeled pairs of seismic data and velocity maps are split as 24K/3K/3K for training, validation, and testing, respectively. Samples are shown in the Appendix.

The shape of the curves in our dataset follows a sine function. Velocity maps in CurvedFault are designed to validate the effectiveness of FWI methods on curved topography. Compared to the maps with flat layers, curved velocity maps yield much more irregular geological structures, making inversion more challenging. Both FlatFault and CurvedFault contain 30,000 samples with 2 to 4 layers and their corresponding seismic data. Each velocity map has dimensions of 70×70, and the grid size is 15 meters in both directions. The layer thickness ranges from 15 grids to 35 grids, and the velocity in each layer is randomly sampled from a uniform distribution between 3,000 meters/second and 6,000 meters/second. The velocity is designed to increase with depth to be more physically realistic. We also add geological faults to every velocity map. The faults shift from 10 grids to 20 grids, and the tilting angle ranges from −123° to 123°.

To synthesize the seismic data, five sources are evenly distributed on the surface with a spacing of 255 meters, and the seismic traces are recorded by 70 receivers positioned at each grid with an interval of 15 meters. The source is a Ricker wavelet with a central frequency of 25 Hz (Wang, 2015). Each receiver records time-series data for 1 second, and we use a 1 millisecond sample rate to generate 1,000 timesteps. Therefore, the dimensions of the seismic data become 5×1000×70. Compared to existing datasets (Yang and Ma, 2019; Moseley et al., 2020), OpenFWI is significantly larger and includes more complicated and physically realistic velocity maps. We hope it establishes a more challenging benchmark for the community.
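The layer-sampling recipe above can be sketched as follows. This is a hedged illustration of FlatFault-style map generation (faults and curved layers omitted), not the actual dataset generator.

```python
import numpy as np

# Generate one flat-layered velocity map: 70x70 grid, 2-4 layers of
# 15-35 grid thickness, velocities in [3000, 6000] m/s sorted so that
# velocity increases with depth.
rng = np.random.default_rng(42)

def flat_layer_map(size=70):
    n_layers = rng.integers(2, 5)                     # 2 to 4 layers
    thickness = rng.integers(15, 36, size=n_layers)   # 15-35 grids each
    velocities = np.sort(rng.uniform(3000, 6000, n_layers))  # grows w/ depth
    vmap = np.full((size, size), velocities[-1])      # deepest layer fills rest
    depth = 0
    for t, vel in zip(thickness, velocities):
        vmap[depth:depth + t, :] = vel                # slicing clips at bottom
        depth += t
    return vmap

vmap = flat_layer_map()
```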

5 Experiments

In this section, we present experimental results of our proposed UPFWI evaluated on the OpenFWI dataset. We also discuss different factors that affect the performance of our method.

5.1 Implementation Details

Training Details: The input seismic data are normalized to the range [-1, 1]. We employ the AdamW optimizer (Loshchilov and Hutter, 2018) with weight decay to update all parameters of the network. The learning rate is reduced by a factor of 10 whenever the validation loss reaches a plateau, down to a fixed minimum. The mini-batch size is set to 16. All trade-off hyper-parameters in our loss function are set to 1. We implement our models in PyTorch and train them on 8 NVIDIA Tesla V100 GPUs. All models are randomly initialized.

Evaluation Metrics: We consider three metrics for evaluating the velocity maps inverted by our method: MAE, MSE, and Structural Similarity (SSIM). Both MAE and MSE have been employed in existing methods (Wu and Lin, 2019; Zhang and Lin, 2020) to measure the pixel-wise error. Considering that the layered-structured velocity maps contain highly structured information, degradation or distortion in velocity maps can be easily perceived by a human. To better align with human vision, we employ SSIM to measure the perceptual similarity. Note that for the MAE and MSE calculation, we denormalize velocity maps to their original scale, while we keep them in the normalized scale [-1, 1] for SSIM, as the algorithm requires.
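To make the metric protocol concrete, the snippet below sketches the denormalization step used for MAE/MSE. The velocity range is taken from the dataset description, the random maps are stand-ins for real predictions, and SSIM itself (computed on the normalized maps, e.g. via scikit-image) is omitted.

```python
import numpy as np

# MAE/MSE are computed on maps denormalized back to m/s; SSIM (not shown)
# uses the [-1, 1] normalized maps directly.
vmin, vmax = 3000.0, 6000.0                # dataset velocity range (m/s)

def denormalize(v_norm):
    # map [-1, 1] back to [vmin, vmax]
    return (v_norm + 1.0) * 0.5 * (vmax - vmin) + vmin

rng = np.random.default_rng(1)
v_true = np.clip(rng.standard_normal((70, 70)), -1, 1)   # normalized map
v_pred = np.clip(v_true + 0.01, -1, 1)                   # small pred. error

mae = np.abs(denormalize(v_pred) - denormalize(v_true)).mean()
mse = ((denormalize(v_pred) - denormalize(v_true)) ** 2).mean()
```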

Comparison: We compare our method with three state-of-the-art algorithms: two purely data-driven methods, InversionNet (Wu and Lin, 2019) and VelocityGAN (Zhang and Lin, 2020), and a physics-informed method, H-PGNN (Sun et al., 2021). We follow the implementations described in these papers and search for the best hyper-parameters on the OpenFWI dataset. Note that we improve H-PGNN by replacing the network architecture with the CNN in our UPFWI and adding the perceptual loss, resulting in significantly boosted performance. We refer to our implementation as H-PGNN+, which is a strong supervised baseline. Our method has two variants (UPFWI-24K and UPFWI-48K), using 24K and 48K unlabeled seismic data respectively.

5.2 Main Results

Dataset      Supervision   Method                         MAE     MSE      SSIM
FlatFault    Supervised    InversionNet                   15.83   2156.00  0.9832
FlatFault    Supervised    VelocityGAN                    16.15   1770.31  0.9857
FlatFault    Supervised    H-PGNN+ (our implementation)   12.91   1565.02  0.9874
FlatFault    Unsupervised  UPFWI-24K (ours)               16.27   1705.35  0.9866
FlatFault    Unsupervised  UPFWI-48K (ours)               14.60   1146.09  0.9895
CurvedFault  Supervised    InversionNet                   23.77   5285.38  0.9681
CurvedFault  Supervised    VelocityGAN                    25.83   5076.79  0.9699
CurvedFault  Supervised    H-PGNN+ (our implementation)   24.19   5139.60  0.9685
CurvedFault  Unsupervised  UPFWI-24K (ours)               29.59   5712.25  0.9652
CurvedFault  Unsupervised  UPFWI-48K (ours)               23.56   3639.96  0.9756
Table 1: Quantitative results evaluated on OpenFWI in terms of MAE, MSE and SSIM. Our UPFWI yields comparable inversion accuracy compared to the supervised baselines. For H-PGNN+, we use our network architecture to replace the original one reported in their paper, and an additional perceptual loss between seismic data is added during training.

Results on FlatFault: Table 1 shows the results of different methods on FlatFault. Compared to the data-driven InversionNet and VelocityGAN, our UPFWI-24K performs better in MSE and SSIM, but is slightly worse in MAE. Compared to the physics-informed H-PGNN+, there is a gap between our UPFWI-24K and H-PGNN+ when trained with the same amount of data. However, after we double the size of unlabeled data (from 24K to 48K), a significant improvement is observed in our UPFWI-48K for all three metrics, and it outperforms all three supervised baselines in MSE and SSIM. This demonstrates the potential of our UPFWI to achieve higher performance as more unlabeled data are involved.

The velocity maps inverted by different methods are shown in Figure 4. Consistent with our quantitative analysis, more accurate details are observed in the velocity maps generated by UPFWI-48K. For instance, in the first row of Figure 4, although all models somewhat reveal the small geophysical fault near the right boundary of the velocity map, only UPFWI-48K reconstructs a clear interface between layers, as highlighted by the yellow square. In the second row, we find both InversionNet and VelocityGAN generate blurry results in the deep region, while H-PGNN+, UPFWI-24K and UPFWI-48K yield much clearer boundaries. We attribute this to the impact of the seismic loss. We further observe that the slope of the fault in the deep region is different from that in the shallow region, yet only UPFWI-48K replicates this, as highlighted by the green square.

Results on CurvedFault: Table 1 shows the results on CurvedFault. Performance degradation is observed for all models, due to the more complicated geological structures in CurvedFault. Although our UPFWI-24K underperforms the three supervised baselines, our UPFWI-48K significantly boosts the performance, outperforming all supervised methods in terms of all three metrics. This demonstrates the power of unsupervised learning in our UPFWI, which greatly benefits from more unlabeled data when dealing with more complicated curved structures.

Figure 5 shows the visualized velocity maps in CurvedFault obtained using different methods. Similar to the observation in FlatFault, our UPFWI-48K yields more accurate details compared to the results of supervised methods. For instance, in the first row, only our UPFWI-24K and UPFWI-48K precisely reconstruct the fault beneath the curve around the top-left corner, as highlighted by the yellow square. Although some artifacts are observed in the results of UPFWI-24K around the layer boundary in the deep region, they are eliminated in the results of UPFWI-48K. As for the example in the second row, the shape of the geological anomalies in the shallow region is best reconstructed by our UPFWI-24K and UPFWI-48K, as highlighted by the red square. More visualization results are shown in the Appendix.


Figure 4: Comparison of different methods on inverted velocity maps of FlatFault. Our UPFWI-48K reveals more accurate details at layer boundaries and the slope of the fault in deep region.


Figure 5: Comparison of different methods on inverted velocity maps of CurvedFault. Our UPFWI reconstructs the geological anomalies on the surface that best match the ground truth.

5.3 Ablation Study

Below we study the contribution of different loss functions: (a) pixel-wise ℓ2 distance (MSE), (b) pixel-wise ℓ1 distance (MAE), and (c) perceptual loss. All experiments are conducted on FlatFault using 24,000 unlabeled data.

Figure 6(a) shows the predicted velocity maps for the three loss combinations (pixel-ℓ2, pixel-ℓ2 + ℓ1, pixel-ℓ2 + ℓ1 + perceptual) in UPFWI. The ground truth seismic data and velocity map are shown in the left column. For each loss option, we show the difference between the reconstructed and the input seismic data (on the top) and the predicted velocity (on the bottom). When using the pixel-wise loss in ℓ2 distance alone, there are some obvious artifacts in both the seismic data (around 600 milliseconds) and the velocity map. These artifacts are mitigated by introducing an additional pixel-wise loss in ℓ1 distance. With perceptual loss added, more details are correctly retained (e.g., seismic data from 400 milliseconds to 600 milliseconds, velocity boundaries between layers). Figure 6(b) compares the reconstructed seismic data (in terms of residual to the ground truth) at a slice of 525 m offset (orange dashed line in Figure 6(a)). Clearly, the combination of pixel-wise and perceptual loss has the smallest residual.

The quantitative results are shown in Table 2. They are consistent with the qualitative analysis (Figure 6(a)). In particular, using the pixel-wise loss in ℓ2 distance alone has the worst performance. The involvement of the ℓ1 distance improves all velocity metrics but is slightly worse on the MSE and SSIM of the seismic error. Adding perceptual loss further boosts all performance metrics by a clear margin. This shows that perceptual loss is helpful to retain waveform coherence, which is correlated with the velocity boundaries, and validates our proposed loss function (combining pixel-wise and perceptual loss).

Figure 6: Comparison of UPFWI with different loss functions on (a) waveform residuals and their corresponding inversion results (ground truth provided in the first column), and (b) single-trace residuals recorded by the receiver at 525 m offset. Our UPFWI trained with pixel-wise loss (ℓ1 + ℓ2 distance) and perceptual loss yields the most accurate results. Best viewed in color.

Loss (pixel-ℓ2 / pixel-ℓ1 / perceptual)   Velocity Error (MAE / MSE / SSIM)   Seismic Error (MAE / MSE / SSIM)
ℓ2 only                                   32.61 / 10014.47 / 0.9735           0.0167 / 0.0023 / 0.9978
ℓ2 + ℓ1                                   21.71 / 2999.55 / 0.9775            0.0155 / 0.0025 / 0.9977
ℓ2 + ℓ1 + perceptual                      16.27 / 1705.35 / 0.9866            0.0140 / 0.0021 / 0.9984
Table 2: Quantitative results of our UPFWI with different loss function settings in terms of MAE, MSE and SSIM. Both the pixel-wise loss computed in ℓ1 distance and the perceptual loss contribute to the boost in performance.

6 Discussion

Our UPFWI has two major limitations. First, it needs further improvement on a small number of challenging velocity maps where adjacent layers have very close velocity values. We find that the lack of supervision is not the cause, as our UPFWI can yield comparable or even better results than its supervised counterparts. The other limitation is the speed and memory consumption of forward modeling, as the gradients through the finite difference scheme (see Equation 6) need to be stored for backpropagation. In future work, we will explore different loss functions (e.g., adversarial loss) and methods that balance computational resource requirements against accuracy. We believe the idea of connecting CNN and PDE to solve full waveform inversion has the potential to be applied to other inverse problems with a governing PDE, such as medical imaging and flow estimation.

7 Related Work

Physics-driven Methods There are two primary physics-driven methods, depending on the complexity of the forward model. The simpler one is travel-time inversion (Tarantola, 2005), which has a linear forward operator but provides results of inferior accuracy (Lin et al., 2015). FWI techniques (Virieux and Operto, 2009), being the other one, provide superior solutions by modeling the wave propagation, but the forward operator is non-linear and computationally expensive. Furthermore, the problem is ill-posed (Virieux and Operto, 2009), making a prior model of the solution space essential. Since regularized FWI solved via iterative techniques needs to apply the forward model many times, these solutions are very computationally expensive. In addition, existing regularized FWI methods employ relatively simplistic models of the solution space (Hu et al., 2009; Burstedde and Ghattas, 2009; Ramírez and Lewis, 2010; Lin and Huang, 2017, 2015b, 2015a; Guitton, 2012; Treister and Haber, 2016), leaving considerable room for improvement in the accuracy of the solutions. Another common approach to alleviate the ill-posedness and non-linearity of FWI is via multi-scale techniques (Bunks et al., 1995; Boonyasiriwat et al., 2009; Feng and Schuster, 2019). Rather than matching the seismic data all at once, multi-scale techniques decompose the data into different frequency bands so that the low-frequency components are updated first, followed by the higher-frequency components.

Data-driven Methods

Recently, a new class of methods has been developed based on deep learning. Araya-Polo et al. (2018) proposed a model based on a fully connected network. Wu and Lin (2019) further converted FWI into an image-to-image translation task with an encoder-decoder structure that can handle more complex velocity maps. Zhang and Lin (2020) adopted GAN and transfer learning to improve generalization. Li et al. (2020) designed SeisInvNet to solve the misalignment issue when dealing with sources from different locations. In Yang and Ma (2019), a U-Net architecture with skip connections was proposed. Feng et al. (2021) proposed a data-driven multi-scale framework by considering different frequency components. Rojas-Gómez et al. (2020) developed an adaptive data augmentation method to improve generalization. Ren et al. (2020) combined the data-driven and physics-based methods and proposed the H-PGNN model. Similar ideas have been developed for different physical forward models. Wang et al. (2020) proposed a model connecting two CNNs that approximate the forward model and the inversion process, tested on well-logging data. Alfarraj and AlRegib (2019) utilized the forward model to constrain the training of convolutional and recurrent neural layers to invert well-logging seismic data for elastic impedance. All of the aforementioned works are based on supervised learning. Biswas et al. (2019) designed an unsupervised CNN to estimate subsurface reflectivity using pre-stack seismic angle gathers. Compared to FWI, their problem is simpler because of the approximated and linearized forward model.

8 Conclusion

In this study, we introduce an unsupervised method named UPFWI to solve FWI by connecting CNN and forward modeling in a loop. Our method can learn the inverse mapping from seismic data alone in an end-to-end manner. We demonstrate through a series of experiments that our UPFWI, trained with a sufficient amount of unlabeled data, outperforms its supervised counterpart on our dataset to be released. The ablation study further substantiates that perceptual loss is a critical component of our loss function and contributes greatly to the performance of our UPFWI.


  • A. Adler, M. Araya-Polo, and T. Poggio (2021) Deep learning for seismic inverse problems: toward the acceleration of geophysical analysis workflows. IEEE Signal Processing Magazine 38 (2), pp. 89–119. Cited by: §2.
  • M. Alfarraj and G. AlRegib (2019) Semisupervised sequence modeling for elastic impedance inversion. Interpretation 7 (3), pp. SE237–SE249. Cited by: §7.
  • M. Araya-Polo, J. Jennings, A. Adler, and T. Dahlke (2018) Deep-learning tomography. The Leading Edge 37 (1), pp. 58–66. Cited by: §7.
  • R. Biswas, M. K. Sen, V. Das, and T. Mukerji (2019) Prestack and poststack inversion using a physics-guided convolutional neural network. Interpretation 7 (3), pp. SE161–SE174. Cited by: §7.
  • C. Boonyasiriwat, P. Valasek, P. Routh, W. Cao, G. T. Schuster, and B. Macy (2009) An efficient multiscale method for time-domain waveform tomography. Geophysics 74 (6), pp. WCC59–WCC68. Cited by: §7.
  • C. Bunks, F. Saleck, S. Zaleski, and G. Chavent (1995) Multiscale seismic waveform inversion. Geophysics 60 (5), pp. 1457–1473. Cited by: §7.
  • C. Burstedde and O. Ghattas (2009) Algorithmic strategies for full waveform inversion: 1D experiments. Geophysics 74 (6), pp. 37–46. Cited by: §7.
  • F. Collino and C. Tsogka (2001) Application of the perfectly matched absorbing layer model to the linear elastodynamic problem in anisotropic heterogeneous media. Geophysics 66 (1), pp. 294–307. Cited by: §A.1.
  • S. Feng, Y. Lin, and B. Wohlberg (2021) Multiscale data-driven seismic full-waveform inversion with field data study. IEEE Transactions on Geoscience and Remote Sensing (), pp. 1–14. External Links: Document Cited by: §7.
  • S. Feng and G. T. Schuster (2019) Transmission+ reflection anisotropic wave-equation traveltime and waveform inversion. Geophysical Prospecting 67 (2), pp. 423–442. Cited by: §7.
  • A. Guitton (2012) Blocky regularization schemes for full waveform inversion. Geophysical Prospecting 60, pp. 870–884. Cited by: §7.
  • W. Hu, A. Abubakar, and T. Habashy (2009) Simultaneous multifrequency inversion of full-waveform seismic data. Geophysics 74 (2), pp. 1–14. Cited by: §7.
  • S. Ioffe and C. Szegedy (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pp. 448–456. Cited by: §3.2.
  • J. Johnson, A. Alahi, and L. Fei-Fei (2016) Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, pp. 694–711. Cited by: §1.
  • A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems 25, pp. 1097–1105. Cited by: §3.4.
  • S. Li, B. Liu, Y. Ren, Y. Chen, S. Yang, Y. Wang, and P. Jiang (2020) Deep-learning inversion of seismic data. IEEE Transactions on Geoscience and Remote Sensing 58 (3), pp. 2135–2149. External Links: Document Cited by: §7.
  • Y. Lin and L. Huang (2015a) Acoustic- and elastic-waveform inversion using a modified Total-Variation regularization scheme. Geophysical Journal International 200 (1), pp. 489–502. External Links: Document Cited by: §7.
  • Y. Lin and L. Huang (2015b) Quantifying subsurface geophysical properties changes using double-difference seismic-waveform inversion with a modified Total-Variation regularization scheme. Geophysical Journal International 203 (3), pp. 2125–2149. External Links: Document Cited by: §7.
  • Y. Lin and L. Huang (2017) Building subsurface velocity models with sharp interfaces using interface-guided seismic full-waveform inversion. Pure and Applied Geophysics 174 (11), pp. 4035–4055. External Links: Document Cited by: §7.
  • Y. Lin, E. M. Syracuse, M. Maceira, H. Zhang, and C. Larmat (2015) Double-difference traveltime tomography with edge-preserving regularization and a priori interfaces. Geophysical Journal International 201 (2), pp. 574. External Links: Document Cited by: §7.
  • I. Loshchilov and F. Hutter (2018) Decoupled weight decay regularization. In International Conference on Learning Representations, Cited by: §5.1.
  • B. Moseley, T. Nissen-Meyer, and A. Markham (2020) Deep learning for fast simulation of seismic waves in complex media. Solid Earth 11 (4), pp. 1527–1549. Cited by: §4.
  • V. Nair and G. E. Hinton (2010) Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 807–814. Cited by: §3.2.
  • A. Ramírez and W. Lewis (2010) Regularization and full-waveform inversion: a two-step approach. In 80th Annual International Meeting, SEG, Expanded Abstracts, pp. 2773–2778. Cited by: §7.
  • Y. Ren, X. Xu, S. Yang, L. Nie, and Y. Chen (2020) A physics-based neural-network way to perform seismic full waveform inversion. IEEE Access 8, pp. 112266–112277. Cited by: §7.
  • R. Rojas-Gómez, J. Yang, Y. Lin, J. Theiler, and B. Wohlberg (2020) Physics-consistent data-driven waveform inversion with adaptive data augmentation. IEEE Geoscience and Remote Sensing Letters. Cited by: §7.
  • K. Simonyan and A. Zisserman (2015) Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, Cited by: §3.4.
  • J. Sun, K. A. Innanen, and C. Huang (2021) Physics-guided deep learning for seismic inversion with hybrid training and uncertainty analysis. Geophysics 86 (3), pp. R303–R317. Cited by: Figure 3, §5.1.
  • A. Tarantola (2005) Inverse problem theory and methods for model parameter estimation. SIAM. Cited by: §7.
  • E. Treister and E. Haber (2016) Full waveform inversion guided by travel time tomography. SIAM Journal on Scientific Computing 39, pp. S587–S609. Cited by: §7.
  • J. Virieux and S. Operto (2009) An overview of full-waveform inversion in exploration geophysics. Geophysics 74 (6), pp. WCC1–WCC26. Cited by: §2, §7.
  • Y. Wang (2015) Frequencies of the Ricker wavelet. Geophysics 80 (2), pp. A31–A37. Cited by: §4.
  • Y. Wang, Q. Ge, W. Lu, and X. Yan (2020) Well-logging constrained seismic inversion based on closed-loop convolutional neural network. IEEE Transactions on Geoscience and Remote Sensing 58 (8), pp. 5564–5574. Cited by: §7.
  • Y. Wu and Y. Lin (2019) InversionNet: an efficient and accurate data-driven full waveform inversion. IEEE Transactions on Computational Imaging 6 (1), pp. 419–433. Cited by: §1, §2, §3.2, §5.1, §5.1, §7.
  • F. Yang and J. Ma (2019) Deep-learning inversion: a next-generation seismic velocity model building method. Geophysics 84 (4), pp. R583–R599. Cited by: §4, §7.
  • Z. Zhang and Y. Lin (2020) Data-driven seismic waveform inversion: a study on the robustness and generalization. IEEE Transactions on Geoscience and Remote Sensing 58, pp. 6900–6913. Cited by: §3.2, §5.1, §5.1, §7.

Appendix A Appendix

A.1 Derivation of Forward Modeling in Practice

Similar to the finite difference in the time domain, in the 2D case, applying the fourth-order central finite difference in space allows the Laplacian of $p$ to be discretized as

$$\nabla^2 p(x, z, t) \approx \frac{1}{\Delta d^2} \sum_{k=-2}^{2} c_k \big[\, p(x + k\Delta d,\, z,\, t) + p(x,\, z + k\Delta d,\, t) \,\big], \tag{11}$$

where $c_0 = -\tfrac{5}{2}$, $c_{\pm 1} = \tfrac{4}{3}$, $c_{\pm 2} = -\tfrac{1}{12}$, and $x$ and $z$ stand for the horizontal offset and the depth of a 2D velocity map, respectively. For convenience, we assume that the vertical grid spacing is identical to the horizontal grid spacing $\Delta d$.
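As a sanity check, the fourth-order spatial stencil can be sketched in NumPy. The function name `laplacian_4th` and the zero-valued two-cell boundary rim are our own illustrative choices, not the paper's implementation:

```python
import numpy as np

# Fourth-order central-difference coefficients for the second derivative,
# applied per spatial dimension: c_0 = -5/2, c_{+-1} = 4/3, c_{+-2} = -1/12.
C = np.array([-1.0 / 12, 4.0 / 3, -5.0 / 2, 4.0 / 3, -1.0 / 12])

def laplacian_4th(p, dd):
    """Fourth-order FD Laplacian of a 2D field p with grid spacing dd.

    The two-cell rim at the boundary is left at zero for brevity.
    """
    lap = np.zeros_like(p)
    nx, nz = p.shape
    for k, c in zip(range(-2, 3), C):
        lap[2:-2, 2:-2] += c * p[2 + k : nx - 2 + k, 2:-2]  # x-direction
        lap[2:-2, 2:-2] += c * p[2:-2, 2 + k : nz - 2 + k]  # z-direction
    return lap / dd**2
```

On the polynomial field $p = x^2 + z^2$ the stencil is exact, so interior values equal the analytic Laplacian, 4.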

Given the approximations in Equations 5 and 11, we can rewrite Equation 1 as

$$p(x, z, t + \Delta t) = 2\,p(x, z, t) - p(x, z, t - \Delta t) + v^2 \Delta t^2 \big[\nabla^2 p(x, z, t) - s(x, z, t)\big],$$

where $v = v(x, z)$ denotes the velocity at location $(x, z)$.
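A minimal sketch of this explicit leapfrog update follows. This is our own simplified version: it uses a second-order Laplacian for brevity rather than the paper's fourth-order stencil, and the function name `step_wave` is hypothetical:

```python
import numpy as np

def step_wave(p_prev, p_curr, v, src, dt, dd):
    """One leapfrog step of the scalar wave equation:
    p(t+dt) = 2 p(t) - p(t-dt) + v^2 dt^2 (lap(p) - s).

    p_prev, p_curr: wavefields at t-dt and t; v: velocity map;
    src: source term s at time t; dt, dd: time and space steps.
    """
    lap = np.zeros_like(p_curr)
    lap[1:-1, 1:-1] = (
        p_curr[2:, 1:-1] + p_curr[:-2, 1:-1]
        + p_curr[1:-1, 2:] + p_curr[1:-1, :-2]
        - 4.0 * p_curr[1:-1, 1:-1]
    ) / dd**2
    return 2.0 * p_curr - p_prev + (v * dt) ** 2 * (lap - src)
```

Starting from a zero wavefield, the first step injects the (negated, scaled) source, which matches the update formula term by term.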

During the simulation of the forward modeling, the boundaries of the velocity map must be handled carefully, because they can cause reflection artifacts that interfere with the desired waves. A standard method to reduce these boundary effects is to add absorbing layers around the original velocity map: waves propagating through those layers are trapped and attenuated by a damping parameter. Here, we follow Collino and Tsogka (2001) and implement the damping parameter as

$$d(u) = \frac{3v}{2L} \ln\!\left(\frac{1}{R}\right) \left(\frac{u}{L}\right)^{2},$$

where $L$ denotes the overall thickness of the absorbing layers, $u$ indicates the distance between the current position and the closest boundary of the original velocity map, and $R$ is the theoretical reflection coefficient. With absorbing layers added, Equation 6 can be ultimately written as

$$\left(1 + \frac{d\,\Delta t}{2}\right) p(x, z, t + \Delta t) = 2\,p(x, z, t) - \left(1 - \frac{d\,\Delta t}{2}\right) p(x, z, t - \Delta t) + v^2 \Delta t^2 \big[\nabla^2 p(x, z, t) - s(x, z, t)\big].$$
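Assuming the damping enters the wave equation as a first-order term d·∂p/∂t discretized with a central difference in time (our reading of the standard scheme, not the authors' released code), the damped update step might look like:

```python
import numpy as np

def step_damped(p_prev, p_curr, lap, v, src, d, dt):
    """Leapfrog step with absorbing-layer damping d:
    (1 + d dt/2) p(t+dt) = 2 p(t) - (1 - d dt/2) p(t-dt)
                           + v^2 dt^2 (lap - s).

    lap is the precomputed Laplacian of p_curr; d is the damping
    map (zero inside the original model, quadratic in the layers).
    """
    a = 1.0 + 0.5 * d * dt
    b = 1.0 - 0.5 * d * dt
    return (2.0 * p_curr - b * p_prev + (v * dt) ** 2 * (lap - src)) / a
```

With d = 0 everywhere, this reduces exactly to the undamped leapfrog update, so the absorbing layers only modify the padded rim of the domain.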
A.2 OpenFWI Examples and Inversion Results of Different Methods


Figure 7: More examples of velocity maps and their corresponding seismic measurements (shown in five channels) from the OpenFWI dataset.

Figure 8: Comparison of different methods on inverted velocity maps of FlatFault, alongside the ground truth. The details revealed by our UPFWI are highlighted.

Figure 9: Comparison of different methods on inverted velocity maps of CurvedFault, alongside the ground truth. The details revealed by our UPFWI are highlighted.