On the Robustness and Generalization of Deep Learning Driven Full Waveform Inversion

11/28/2021
by Chengyuan Deng, et al.

Data-driven approaches have been demonstrated as a promising technique for solving complicated scientific problems. Full Waveform Inversion (FWI) is commonly cast as an image-to-image translation task, which motivates the use of deep neural networks as an end-to-end solution. Although trained on synthetic data, deep learning-driven FWI is expected to perform well when evaluated on sufficient real-world data. In this paper, we study these properties by asking: how robust are these deep neural networks, and how do they generalize? For robustness, we prove upper bounds on the deviation between predictions from clean and noisy data. Moreover, we demonstrate an interplay between the noise level and the additional loss incurred. For generalization, we prove a norm-based upper bound on the generalization error via a stability-generalization framework. Experimental results on seismic FWI datasets corroborate the theoretical results, shedding light on a better understanding of applying deep learning to complicated scientific applications.
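The robustness question above can be made concrete by measuring how far a network's prediction moves when its input is perturbed, and comparing that deviation against a norm-based bound. The sketch below is purely illustrative and not the paper's method: the single-ReLU-layer model `predict` and the weight matrix `W` are hypothetical stand-ins for a trained FWI network, used only to show the measurement and the Lipschitz-style bound (spectral norm of `W` times the noise norm).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained FWI network: one ReLU layer.
# The paper's bounds concern deep networks; this toy model only
# illustrates how deviation vs. noise level can be measured.
W = rng.normal(size=(16, 32)) / np.sqrt(32)

def predict(x):
    return np.maximum(W @ x, 0.0)

x_clean = rng.normal(size=32)
lipschitz = np.linalg.norm(W, 2)  # spectral norm of W

deviations = []
for eps in (0.01, 0.1, 1.0):
    noise = eps * rng.normal(size=32)
    dev = np.linalg.norm(predict(x_clean + noise) - predict(x_clean))
    # ReLU is 1-Lipschitz, so the deviation is bounded by ||W|| * ||noise||.
    bound = lipschitz * np.linalg.norm(noise)
    assert dev <= bound + 1e-9
    deviations.append(dev)
    print(f"eps={eps:>5}: deviation={dev:.4f}  bound={bound:.4f}")
```

The printed deviations grow with the noise level while always staying under the norm-based bound, mirroring the clean-vs-noisy interplay the abstract describes.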


Related research

Transfer Learning Enhanced Full Waveform Inversion (02/22/2023)
We propose a way to favorably employ neural networks in the field of non...

Does Full Waveform Inversion Benefit from Big Data? (07/28/2023)
This paper investigates the impact of big data on deep learning models f...

Implicit Full Waveform Inversion with Deep Neural Representation (09/08/2022)
Full waveform inversion (FWI) commonly stands for the state-of-the-art a...

Physics-Consistent Data-driven Waveform Inversion with Adaptive Data Augmentation (09/03/2020)
Seismic full-waveform inversion (FWI) is a nonlinear computational imagi...

Toward Better Generalization Bounds with Locally Elastic Stability (10/27/2020)
Classical approaches in learning theory are often seen to yield very loo...

Why Robust Generalization in Deep Learning is Difficult: Perspective of Expressive Power (05/27/2022)
It is well-known that modern neural networks are vulnerable to adversari...

Benefits from Superposed Hawkes Processes (10/14/2017)
The superposition of temporal point processes has been studied for many ...
