Correcting Model Bias with Sparse Implicit Processes

Model selection is a crucial part of Bayesian learning in machine learning (ML). The choice of model may impose strong biases on the resulting predictions, which can hinder the performance of methods such as Bayesian neural networks and neural samplers. Recently proposed approaches for Bayesian ML, on the other hand, exploit approximate inference in function space with implicit stochastic processes (a generalization of Gaussian processes). Sparse Implicit Processes (SIP) is particularly successful in this regard, since it is fully trainable and yields flexible predictions. Here, we expand on the original experiments to show that SIP is capable of correcting model bias when the data-generating mechanism differs strongly from the one implied by the model. We use synthetic datasets to show that SIP provides predictive distributions that reflect the data better than the exact predictions of the initial, but wrongly assumed, model.
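To make the idea of an implicit stochastic process concrete, the following is a minimal sketch (not the authors' code) of how such a prior over functions can be defined: the weights of a small neural network are sampled from a Gaussian prior, and each weight draw induces one random function evaluated on a grid of inputs. The architecture, prior scales, and function names are illustrative assumptions only.

import numpy as np

def sample_prior_function(x, hidden=50, weight_scale=1.0, rng=None):
    # Draw one function f ~ IP by sampling the weights of a one-hidden-layer
    # network from a Gaussian prior and evaluating the network at inputs x.
    rng = np.random.default_rng() if rng is None else rng
    w1 = rng.normal(0.0, weight_scale, size=(1, hidden))
    b1 = rng.normal(0.0, weight_scale, size=hidden)
    w2 = rng.normal(0.0, weight_scale / np.sqrt(hidden), size=(hidden, 1))
    b2 = rng.normal(0.0, weight_scale)
    h = np.tanh(x[:, None] @ w1 + b1)   # hidden-layer activations
    return (h @ w2).ravel() + b2        # function values at the inputs x

x = np.linspace(-3.0, 3.0, 200)
rng = np.random.default_rng(0)
prior_draws = np.stack([sample_prior_function(x, rng=rng) for _ in range(5)])
print(prior_draws.shape)  # (5, 200): five function samples from the implicit prior

Unlike a Gaussian process, the marginal distribution of these function values has no closed form, which is why approximate inference schemes such as SIP are needed to obtain predictive distributions under this kind of prior.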


