Data-driven modeling of time-domain induced polarization

07/30/2021
by Charles L. Bérubé et al.

We present a novel approach for data-driven modeling of the time-domain induced polarization (IP) phenomenon using variational autoencoders (VAEs). VAEs are Bayesian neural networks that learn a latent statistical distribution to encode extensive data sets as lower-dimensional representations. We collected 1 600 319 IP decay curves in various regions of Canada, the United States, and Kazakhstan and compiled them to train a deep VAE. The proposed deep learning approach is strictly unsupervised and data-driven: it requires neither manual processing nor ground-truth labeling of IP data. Moreover, our VAE approach avoids the pitfalls of IP parametrization with the empirical Cole-Cole and Debye decomposition models, simple power-law models, or other sophisticated mechanistic models. We demonstrate four applications of VAEs to model and process IP data: (1) representative synthetic data generation, (2) unsupervised Bayesian denoising and data uncertainty estimation, (3) quantitative evaluation of the signal-to-noise ratio, and (4) automated outlier detection. We also interpret the latent representation of the IP compilation and reveal a strong correlation between its first dimension and the average chargeability of IP decays. Finally, we experiment with varying the dimension of the VAE latent space and demonstrate that a single real-valued scalar parameter contains sufficient information to encode our extensive IP data compilation. This finding suggests that modeling time-domain IP data with mathematical models governed by more than one free parameter is ambiguous, whereas modeling only the average chargeability is justified. A pre-trained implementation of our model, readily applicable to new IP data from any geolocation, is available as open-source Python code for the applied geophysics community.
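To make the VAE idea concrete, the sketch below shows a minimal encoder-decoder of the kind described above, written in PyTorch. It is not the authors' released implementation: the layer sizes, the choice of 20 time windows per decay curve, the class and function names, and the mean-squared-error reconstruction term are all illustrative assumptions. Only the 1-D latent space reflects the abstract's finding that a single scalar suffices to encode the compilation.

# Minimal, illustrative VAE for time-domain IP decay curves (not the authors' code).
# Assumptions: PyTorch, decay curves resampled to 20 time windows, 1-D latent space.
import torch
import torch.nn as nn

class IPDecayVAE(nn.Module):
    def __init__(self, n_windows: int = 20, latent_dim: int = 1):
        super().__init__()
        # Encoder maps a decay curve to the mean and log-variance of q(z | x).
        self.encoder = nn.Sequential(
            nn.Linear(n_windows, 64), nn.ReLU(),
            nn.Linear(64, 16), nn.ReLU(),
        )
        self.mu = nn.Linear(16, latent_dim)
        self.logvar = nn.Linear(16, latent_dim)
        # Decoder reconstructs the decay curve from a latent sample z.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 16), nn.ReLU(),
            nn.Linear(16, 64), nn.ReLU(),
            nn.Linear(64, n_windows),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar

def vae_loss(x_hat, x, mu, logvar):
    # Reconstruction term plus KL divergence of q(z | x) from the N(0, I) prior.
    recon = nn.functional.mse_loss(x_hat, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kld

In this kind of setup, decoding samples drawn from the prior yields synthetic decay curves, passing a noisy curve through the trained encoder and decoder yields a denoised estimate, and curves with unusually large reconstruction error can be flagged as outliers, mirroring applications (1), (2), and (4) above.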

