Randomized multilevel Monte Carlo for embarrassingly parallel inference
This position paper summarizes a recently developed research program focused on inference in the context of data-centric science and engineering applications, and forecasts its trajectory over the next decade. In this context one often endeavours to learn complex systems in order to make more informed predictions and high-stakes decisions under uncertainty. Key challenges which must be met are robustness, generalizability, and interpretability. The Bayesian framework addresses all three, while bringing with it a fourth, undesirable feature: it is typically far more expensive than its deterministic counterparts. In the 21st century, and increasingly over the past decade, a growing number of methods have emerged which leverage cheap low-fidelity models to precondition algorithms for inference with more expensive models, making Bayesian inference tractable in the context of high-dimensional and expensive models. Notable examples are multilevel Monte Carlo (MLMC), multi-index Monte Carlo (MIMC), and their randomized counterparts (rMLMC), which provably achieve a dimension-independent (including infinite-dimensional) canonical complexity rate: a cost of O(1/MSE) as a function of the mean squared error (MSE). Some parallelizability is typically lost in an inference context, but recently this has been largely recovered via novel double-randomization approaches. Such an approach delivers i.i.d. samples of quantities of interest which are unbiased with respect to the infinite-resolution target distribution. Over the coming decade, this family of algorithms has the potential to transform data-centric science and engineering, as well as classical machine learning applications such as deep learning, by scaling up and scaling out fully Bayesian inference.
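The unbiasedness mechanism behind the randomized methods can be illustrated with a minimal sketch of a single-term randomized multilevel estimator (in the spirit of Rhee–Glynn debiasing): draw a random level L, evaluate the correction between consecutive levels, and reweight by the level probability. Each draw is then unbiased for the infinite-resolution limit, and draws are i.i.d., hence embarrassingly parallel. The example below is purely illustrative and not the paper's algorithm: it uses a deterministic quadrature refinement (midpoint rule for ∫₀¹ eˣ dx on 2^l cells) as a stand-in for a hierarchy of model resolutions, and all function names and the level distribution are assumptions of this sketch.

```python
import numpy as np

def level_approx(level):
    """Level-l approximation of int_0^1 exp(x) dx: midpoint rule on 2^l cells.
    Stands in for an expensive model evaluated at resolution l."""
    n = 2 ** level
    x = (np.arange(n) + 0.5) / n
    return np.exp(x).mean()

def single_term_estimate(rng, ratio=2.0 ** -1.5):
    """One unbiased draw of the single-term randomized multilevel estimator.

    Samples a level L with P(L = l) = (1 - ratio) * ratio**l, computes the
    level-L correction, and reweights by the sampling probability, so that
    E[Z] = sum_l (Y_l - Y_{l-1}) = lim_l Y_l (the infinite-resolution value).
    """
    level = rng.geometric(1.0 - ratio) - 1          # geometric on {0, 1, 2, ...}
    prob = (1.0 - ratio) * ratio ** level
    coarse = level_approx(level - 1) if level > 0 else 0.0
    delta = level_approx(level) - coarse            # multilevel correction
    return delta / prob

# i.i.d. draws: each is unbiased, so they can be averaged across workers.
rng = np.random.default_rng(0)
draws = np.array([single_term_estimate(rng) for _ in range(100_000)])
print(draws.mean())  # close to e - 1
```

The decay ratio is chosen so that both the estimator's variance and its expected cost are finite: the corrections shrink like O(4^{-l}) for this quadrature, while the cost grows like O(2^l), and 2^{-1.5l} sits between the two. In a genuine inference setting the levels would be stochastic model discretizations rather than deterministic quadratures, but the reweighting logic is the same.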