Sparse Implicit Processes for Approximate Inference

Implicit processes (IPs) are flexible priors that can describe models such as Bayesian neural networks, neural samplers and data generators. IPs allow for approximate inference in function space, which avoids some of the degenerate problems of parameter-space approximate inference caused by the high number of parameters and their strong dependencies. For this, an extra IP is often used to approximate the posterior of the prior IP. However, simultaneously adjusting the parameters of the prior IP and the approximate posterior IP is a challenging task. Existing methods that can tune the prior IP result in a Gaussian predictive distribution, which fails to capture important data patterns. By contrast, methods that produce flexible predictive distributions by using another IP to approximate the posterior process cannot fit the prior IP to the observed data. We propose here a method that can carry out both tasks. For this, we rely on an inducing-point representation of the prior IP, as is often done in the context of sparse Gaussian processes. The result is a scalable method for approximate inference with IPs that can tune the prior IP parameters to the data and that provides accurate non-Gaussian predictive distributions.
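To make the two ingredients mentioned in the abstract concrete, the sketch below builds an implicit process prior from a one-hidden-layer Bayesian neural network sampler and forms a sparse-GP-style conditional on a small set of inducing points. This is a minimal illustrative sketch, not the authors' implementation: the BNN architecture, the Monte Carlo moment matching over inducing and test locations, and the Gaussian stand-in for the posterior over inducing values are all assumptions (the actual method replaces that Gaussian with a flexible, implicitly defined approximate posterior).

```python
# Sketch: (i) an implicit process (IP) prior defined by a Bayesian neural
# network sampler, and (ii) an inducing-point ("sparse") representation of
# that prior, in the style of sparse Gaussian processes.  All names, sizes
# and the moment-matching step are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)


def sample_prior_function(x, hidden=50, prior_std=1.0):
    """Draw one random function f ~ IP prior: a one-hidden-layer BNN whose
    weights are sampled from N(0, prior_std^2)."""
    w1 = rng.normal(0.0, prior_std, size=(1, hidden))
    b1 = rng.normal(0.0, prior_std, size=hidden)
    w2 = rng.normal(0.0, prior_std, size=(hidden, 1))
    b2 = rng.normal(0.0, prior_std)
    h = np.tanh(x @ w1 + b1)
    return (h @ w2 + b2).ravel()


# Inducing locations Z: a small set of M points that summarises the prior IP.
Z = np.linspace(-3.0, 3.0, 10).reshape(-1, 1)
X_test = np.linspace(-4.0, 4.0, 100).reshape(-1, 1)

# Monte Carlo estimate of the prior's joint moments at [Z, X_test].  The
# prior is implicit (no closed-form density), so moments come from samples.
S = 500
F = np.stack([sample_prior_function(np.vstack([Z, X_test])) for _ in range(S)])
mu = F.mean(axis=0)
K = np.cov(F, rowvar=False) + 1e-6 * np.eye(F.shape[1])

M = Z.shape[0]
mu_z, mu_x = mu[:M], mu[M:]
Kzz, Kzx, Kxx = K[:M, :M], K[:M, M:], K[M:, M:]

# Sparse-GP-style conditional p(f(X_test) | u) with u = f(Z): predictions are
# driven entirely by the M inducing values.  The Gaussian draw for u below is
# a stand-in; the paper's method uses a flexible, implicitly defined q(u).
u = rng.multivariate_normal(mu_z, Kzz)
A = np.linalg.solve(Kzz, Kzx)            # Kzz^{-1} Kzx
pred_mean = mu_x + A.T @ (u - mu_z)
pred_cov = Kxx - Kzx.T @ A

print(pred_mean[:5])
```

The point of the inducing-point construction is that the predictive distribution at the N test inputs depends on the prior IP only through the M << N inducing values, which is what makes the approximation scalable.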


