Mitigation of Spatial Nonstationarity with Vision Transformers

12/09/2022
by Lei Liu, et al.

Spatial nonstationarity, the variation of features' statistical distributions with location, is ubiquitous in many natural settings. For example, in geological reservoirs rock matrix porosity varies vertically due to geomechanical compaction trends, in mineral deposits grades vary due to sedimentation and concentration processes, in hydrology rainfall varies due to interactions between the atmosphere and topography, and in metallurgy crystalline structures vary due to differential cooling. Conventional geostatistical modeling workflows rely on the assumption of stationarity to model spatial features for geostatistical inference. Nevertheless, this assumption is often unrealistic for nonstationary spatial data, which has motivated a variety of nonstationary spatial modeling workflows such as trend and residual decomposition, cosimulation with secondary features, and spatial segmentation with independent modeling over stationary subdomains. The advent of deep learning technologies has enabled new workflows for modeling spatial relationships. However, there is a paucity of demonstrated best practice and general guidance on mitigating spatial nonstationarity with deep learning in the geospatial context. We demonstrate the impact of two common types of geostatistical spatial nonstationarity on deep learning model prediction performance and propose mitigating such impacts with self-attention (vision transformer) models. We demonstrate the utility of vision transformers for the mitigation of nonstationarity, with relative errors as low as 10% of those from benchmark models such as convolutional neural networks. We establish best practice by demonstrating the ability of self-attention networks to model large-scale spatial relationships in the presence of commonly observed geospatial nonstationarity.
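
As a point of reference for the conventional workflows named above, the short sketch below illustrates trend and residual decomposition on a synthetic vertical porosity profile (Python with NumPy; the linear compaction trend, noise level, and profile length are invented for illustration, not data from the paper). The fitted trend absorbs the nonstationary mean so that the residual can be treated as approximately stationary and modeled with conventional geostatistics.

    import numpy as np

    # Hypothetical 1D porosity profile with a compaction trend (illustrative values).
    rng = np.random.default_rng(0)
    depth = np.linspace(0.0, 1.0, 200)
    porosity = 0.30 - 0.15 * depth + 0.02 * rng.standard_normal(200)

    # Fit a low-order polynomial trend m(u) and form the residual
    # r(u) = z(u) - m(u); the residual is closer to stationary and can be
    # modeled with variograms and kriging, then recombined with the trend.
    coeffs = np.polyfit(depth, porosity, deg=1)
    trend = np.polyval(coeffs, depth)
    residual = porosity - trend

    print(f"residual mean ~ {residual.mean():.4f}, std ~ {residual.std():.4f}")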

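To make the proposed self-attention approach concrete, the following is a minimal sketch of a vision-transformer-style regressor over a gridded spatial property (PyTorch; the names PatchEmbed and ViTRegressor, the 64x64 grid, 8x8 patches, and the scalar pooling head are illustrative assumptions, not the paper's architecture). Because every patch token attends to every other token, the model can weight long-range, location-dependent relationships instead of relying on a CNN's fixed local receptive field.

    import torch
    import torch.nn as nn

    class PatchEmbed(nn.Module):
        """Split a 2D property grid into patches and project each to a token."""
        def __init__(self, img_size=64, patch_size=8, in_chans=1, dim=128):
            super().__init__()
            self.proj = nn.Conv2d(in_chans, dim,
                                  kernel_size=patch_size, stride=patch_size)
            # Learned position embeddings let attention account for where a
            # patch sits in the grid, which matters under nonstationarity.
            self.pos = nn.Parameter(
                torch.zeros(1, (img_size // patch_size) ** 2, dim))

        def forward(self, x):                     # x: (B, C, H, W)
            tokens = self.proj(x).flatten(2).transpose(1, 2)   # (B, N, dim)
            return tokens + self.pos

    class ViTRegressor(nn.Module):
        """Patch embedding -> transformer encoder -> pooled scalar prediction."""
        def __init__(self, dim=128, depth=4, heads=4):
            super().__init__()
            self.embed = PatchEmbed(dim=dim)
            layer = nn.TransformerEncoderLayer(
                d_model=dim, nhead=heads, dim_feedforward=4 * dim,
                batch_first=True, norm_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
            self.head = nn.Linear(dim, 1)

        def forward(self, x):
            z = self.encoder(self.embed(x))  # global attention across patches
            return self.head(z.mean(dim=1))  # average tokens -> one prediction

    model = ViTRegressor()
    grids = torch.randn(2, 1, 64, 64)            # toy batch of property grids
    print(model(grids).shape)                     # torch.Size([2, 1])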
