The recent progress in large language models (LLMs), especially the inve...
The performance of Large Language Models (LLMs) on downstream tasks ofte...
While conformal predictors reap the benefits of rigorous statistical gua...
We propose a new class of generative models that naturally handle data o...
We introduce Markov Neural Processes (MNPs), a new class of Stochastic P...
Information-theoretic approaches to active learning have traditionally f...
We develop a contrastive framework for learning better prior distributio...
Bayesian experimental design (BED) provides a powerful and general frame...
We investigate the efficacy of treating all the parameters in a Bayesian...
We provide the first complete continuous time framework for denoising di...
We propose Active Surrogate Estimators (ASEs), a new method for label-ef...
We introduce implicit Deep Adaptive Design (iDAD), a new method for perf...
We present a variational method for online state estimation and paramete...
We introduce a simple and effective method for learning VAEs with contro...
Multimodal VAEs seek to model the joint distribution over heterogeneous ...
Active Learning is essential for more label-efficient deep learning. Bay...
Subsampling is used in convolutional neural networks (CNNs) in the form ...
Building on ideas from probabilistic programming, we introduce the conce...
We challenge a common assumption underlying most supervised deep learnin...
We introduce active testing: a new framework for sample-efficient model ...
We introduce Deep Adaptive Design (DAD), a method for amortizing the cos...
We introduce an approach for training Variational Autoencoders (VAEs) th...
Active learning is a powerful tool when labelling data is expensive, but...
We show that the gradient estimates used in training Deep Gaussian Proce...
We propose methods to strengthen the invariance properties of representa...
We make inroads into understanding the robustness of Variational Autoenc...
We present an alternative approach to semi-supervision in variational au...
The current COVID-19 pandemic highlights the utility of contact tracing,...
Recently there has been much interest in quantifying the robustness of n...
We introduce a fully stochastic gradient based approach to Bayesian opti...
Universal probabilistic programming systems (PPSs) provide a powerful an...
Existing approaches to amortized inference in probabilistic programs wit...
Current approaches to amortizing Bayesian inference focus solely on appr...
Recently there has been a significant interest in learning disentangled ...
Epidemiology simulations have become a fundamental tool in the fight aga...
Bayesian optimal experimental design (BOED) is a principled framework fo...
Bayesian optimal experimental design (BOED) is a principled framework fo...
We develop a new Low-level, First-order Probabilistic Programming Langua...
We develop a generalised notion of disentanglement in Variational Auto-E...
We present a new approach to neural network verification based on estima...
We study adaptive importance sampling (AIS) as an online learning proble...
We introduce inference trees (ITs), a new class of inference methods tha...
We formalize the notion of nesting probabilistic programming queries and...
We provide theoretical and empirical evidence that using tighter evidenc...
We present a formalization of nested Monte Carlo (NMC) estimation, where...
We present the first general purpose framework for marginal maximum a po...
We introduce AESMC: a method for using deep neural networks for simultan...
Existing methods for structure discovery in time series data construct i...
We introduce interacting particle Markov chain Monte Carlo (iPMCMC), a P...