Designing Accurate Emulators for Scientific Processes using Calibration-Driven Deep Models

05/05/2020
by   Jayaraman J. Thiagarajan, et al.

Predictive models that accurately emulate complex scientific processes can achieve exponential speed-ups over numerical simulators or experiments, while also providing surrogates that improve subsequent analysis. Consequently, there has been a recent surge in using modern machine learning (ML) methods, such as deep neural networks, to build data-driven emulators. While most existing efforts have focused on tailoring off-the-shelf ML solutions to the scientific problem at hand, we study an often overlooked yet important problem: choosing the loss function that measures the discrepancy between observed data and a model's predictions. In the absence of better priors on the expected residual structure, practitioners default to simple choices such as the mean squared error (MSE) and the mean absolute error (MAE). However, the symmetric noise assumption inherent to these losses makes them inappropriate when the data are heterogeneous or the noise distribution is asymmetric. We propose Learn-by-Calibrating (LbC), a novel deep learning approach based on interval calibration for designing emulators in scientific applications that are effective even with heterogeneous data and are robust to outliers. Using a large suite of use-cases, we show that LbC provides significant improvements in generalization error over widely adopted loss functions, achieves high-quality emulators even in small-data regimes, and, more importantly, recovers the inherent noise structure without any explicit priors.
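To make the interval-calibration idea concrete, the sketch below shows a hypothetical loss in the spirit of the abstract: the model emits an interval [lo, hi] per sample, and training pushes the empirical coverage of those intervals toward a target rate while penalizing interval width so predictions stay sharp. The function name, the target-coverage and sharpness-weight parameters, and the overall form are illustrative assumptions, not the authors' implementation of LbC.

```python
import numpy as np

def interval_calibration_loss(y, lo, hi, target_coverage=0.9, sharpness_weight=0.1):
    """Illustrative interval-calibration-style loss (assumed form, not LbC itself).

    y  : observed targets, shape (n,)
    lo : predicted lower interval bounds, shape (n,)
    hi : predicted upper interval bounds, shape (n,)
    """
    # Fraction of observations falling inside their predicted interval.
    empirical_coverage = np.mean((y >= lo) & (y <= hi))
    # Penalize deviation of empirical coverage from the desired rate.
    calibration_term = (empirical_coverage - target_coverage) ** 2
    # Penalize wide intervals so calibration is not achieved trivially.
    sharpness_term = np.mean(hi - lo)
    return calibration_term + sharpness_weight * sharpness_term
```

Unlike MSE or MAE, such a loss makes no symmetric-noise assumption: nothing forces the interval to sit symmetrically around a point prediction, so asymmetric residual structure can be accommodated.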

Related research

- Asymmetric Loss Functions for Learning with Noisy Labels (06/06/2021): Robust loss functions are essential for training deep neural networks wi...
- Predicting Coordinated Actuated Traffic Signal Change Times using LSTM Neural Networks (08/10/2020): Vehicle acceleration and deceleration maneuvers at traffic signals resul...
- Nth Absolute Root Mean Error (09/30/2018): Neural network training process takes long time when the size of trainin...
- Improving Calibration in Mixup-trained Deep Neural Networks through Confidence-Based Loss Functions (03/22/2020): Deep Neural Networks (DNN) represent the state of the art in many tasks....
- Robust and Sparse M-Estimation of DOA (01/15/2023): A robust and sparse Direction of Arrival (DOA) estimator is derived base...
- Evaluating the Adversarial Robustness for Fourier Neural Operators (04/08/2022): In recent years, Machine-Learning (ML)-driven approaches have been widel...
- Exploring Generative Physics Models with Scientific Priors in Inertial Confinement Fusion (10/03/2019): There is significant interest in using modern neural networks for scient...
