Penalized Projected Kernel Calibration for Computer Models

03/01/2021
by Yan Wang, et al.

Projected kernel calibration is known to be theoretically superior; we abbreviate its loss function as the PK loss function. In this work, we prove the uniform convergence of the PK loss function and show that (1) when the sample size is large, every local minimum point and local maximum point of the L_2 loss between the true process and the computer model is a local minimum point of the PK loss function; and (2) all local minimum values of the PK loss function converge to the same value. These theoretical results imply that it is extremely hard for projected kernel calibration to identify the global minimum point of the L_2 loss, which is defined as the optimal value of the calibration parameters. To solve this problem, we propose a frequentist method called penalized projected kernel calibration. As a frequentist method, it is proved to be semiparametric efficient. At the same time, the proposed method has a natural Bayesian version, which allows users to compute a credible region for the calibration parameters without relying on a large-sample approximation. Through extensive simulation studies and a real-world case study, we show that the proposed method estimates the calibration parameters accurately and compares favorably to alternative calibration methods regardless of the sample size.
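To fix ideas, the following is a minimal sketch of the quantities referred to above; the notation (the true process ζ, the computer model f^s, the input domain Ω, the PK loss ℓ_n^PK, and the penalty λ_n ρ_n) is assumed here for illustration, and the specific penalty used by the authors is not given in the abstract.

% Hedged sketch: L_2 calibration target (left) and a generic penalized
% PK-type estimator (right); the exact penalty is the paper's contribution
% and is not reproduced here.
\[
  \theta^{*} \;=\; \operatorname*{arg\,min}_{\theta \in \Theta}
  \int_{\Omega} \bigl(\zeta(x) - f^{s}(x,\theta)\bigr)^{2}\, dx,
  \qquad
  \hat{\theta}_{n} \;=\; \operatorname*{arg\,min}_{\theta \in \Theta}
  \Bigl\{ \ell^{\mathrm{PK}}_{n}(\theta) \;+\; \lambda_{n}\,\rho_{n}(\theta) \Bigr\}.
\]

The left-hand definition is the standard L_2 calibration target whose global minimizer the abstract treats as the optimal calibration parameter; the right-hand display illustrates the general shape of a penalized estimator, where the penalty term is intended to steer the optimizer away from the spurious local minima of the PK loss described in results (1) and (2).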
