Corrected Kriging update formulae for batch-sequential data assimilation

03/29/2012
by Clément Chevalier, et al.

Recently, considerable effort has been devoted to the efficient computation of Kriging predictors when observations are assimilated sequentially. In particular, Kriging update formulae enabling significant computational savings were derived in Barnes and Watson (1992), Gao et al. (1996), and Emery (2009). Taking advantage of the previous Kriging mean and variance calculations helps avoid a costly (n+1) × (n+1) matrix inversion when adding one observation to the n already available. In addition to traditional update formulae accounting for a single new observation, Emery (2009) also proposed formulae for the batch-sequential case, i.e. when r > 1 new observations are assimilated simultaneously. However, the Kriging variance and covariance formulae given without proof in Emery (2009) for the batch-sequential case are not correct. In this paper we fix this issue and establish corrected expressions for updated Kriging variances and covariances when assimilating several observations in parallel.
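For simple Kriging (a zero-mean Gaussian process), the batch update described in the abstract amounts to a Schur-complement identity: the posterior covariance after assimilating r new points equals the old posterior covariance minus a correction built from the old posterior covariances involving the new batch. A minimal numpy sketch (the squared-exponential kernel, the design points, and the length-scale are all illustrative assumptions, not taken from the paper) that checks this update route against direct conditioning on all n + r points:

```python
import numpy as np

def k(a, b, ell=0.2):
    # Squared-exponential covariance kernel (illustrative choice)
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def post_cov(X, A, B):
    # Simple-Kriging posterior covariance c(A, B) after conditioning on design X
    KAB = k(A, B)
    KXA = k(X, A)
    KXB = k(X, B)
    return KAB - KXA.T @ np.linalg.solve(k(X, X), KXB)

Xn = np.array([0.05, 0.3, 0.55, 0.8, 0.95])  # n existing design points
Xr = np.array([0.15, 0.45, 0.7])             # r new points, assimilated as a batch
Xt = np.array([0.1, 0.4, 0.6, 0.9])          # prediction points

# Route 1: condition directly on all n + r observations at once
c_full = post_cov(np.concatenate([Xn, Xr]), Xt, Xt)

# Route 2: update the old posterior covariance using batch quantities only
c_old   = post_cov(Xn, Xt, Xt)   # c_n(x, x')
c_cross = post_cov(Xn, Xt, Xr)   # c_n(x, X_new)
c_batch = post_cov(Xn, Xr, Xr)   # c_n(X_new, X_new)
c_upd = c_old - c_cross @ np.linalg.solve(c_batch, c_cross.T)

print(np.allclose(c_full, c_upd, atol=1e-6))
```

The update route only requires inverting the r × r matrix of old posterior covariances at the new batch, rather than the full (n+r) × (n+r) covariance matrix, which is the source of the computational savings the abstract refers to.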


