Variational Bayesian Unlearning

10/24/2020
by Quoc Phong Nguyen, et al.

This paper studies the problem of approximately unlearning a Bayesian model from a small subset of the training data to be erased. We frame this problem as one of minimizing the Kullback-Leibler divergence between the approximate posterior belief of model parameters after directly unlearning from erased data vs. the exact posterior belief from retraining with remaining data. Using the variational inference (VI) framework, we show that it is equivalent to minimizing an evidence upper bound which trades off between fully unlearning from erased data vs. not entirely forgetting the posterior belief given the full data (i.e., including the remaining data); the latter prevents catastrophic unlearning that can render the model useless. In model training with VI, only an approximate (instead of exact) posterior belief given the full data can be obtained, which makes unlearning even more challenging. We propose two novel tricks to tackle this challenge. We empirically demonstrate our unlearning methods on Bayesian models such as sparse Gaussian process and logistic regression using synthetic and real-world datasets.
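The equivalence claimed in the abstract can be sketched explicitly. Assuming the full data splits as D = D_r ∪ D_e (remaining and erased subsets) with conditionally i.i.d. likelihoods, Bayes' rule gives p(θ | D_r) ∝ p(θ | D) / p(D_e | θ), and the KL objective decomposes as follows (generic notation, not necessarily the paper's own symbols):

```latex
% Sketch under the assumption p(D_e \mid \theta, D_r) = p(D_e \mid \theta),
% which yields \log p(\theta \mid D_r)
%   = \log p(\theta \mid D) - \log p(D_e \mid \theta) + \log p(D_e \mid D_r).
\begin{align}
\mathrm{KL}\!\left[\,q(\theta) \,\|\, p(\theta \mid D_r)\,\right]
  &= \mathbb{E}_{q}\!\left[\log q(\theta) - \log p(\theta \mid D)
     + \log p(D_e \mid \theta)\right] - \log p(D_e \mid D_r) \\
  &= \mathrm{KL}\!\left[\,q(\theta) \,\|\, p(\theta \mid D)\,\right]
     + \mathbb{E}_{q}\!\left[\log p(D_e \mid \theta)\right]
     - \log p(D_e \mid D_r).
\end{align}
```

Since the last term does not depend on q, minimizing the left-hand side over q is equivalent to minimizing the sum of the two q-dependent terms: the expected log-likelihood term drives the erased data's likelihood down under q (unlearning), while the KL term keeps q close to the full-data posterior p(θ | D) (preventing catastrophic unlearning). Because KL divergence is nonnegative, that sum upper-bounds log p(D_e | D_r), hence the "evidence upper bound" framing. In practice p(θ | D) is itself only available as a VI approximation, which is the extra difficulty the abstract refers to.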


Related research

12/05/2019  Scalable Variational Bayesian Kernel Selection for Sparse Gaussian Process Regression
This paper presents a variational Bayesian kernel selection (VBKS) algor...

11/01/2016  Variational Inference via χ-Upper Bound Minimization
Variational inference (VI) is widely used as an efficient alternative to...

07/07/2022  Challenges and Pitfalls of Bayesian Unlearning
Machine unlearning refers to the task of removing a subset of training d...

12/02/2019  Stochastic Variational Inference via Upper Bound
Stochastic variational inference (SVI) plays a key role in Bayesian deep...

10/26/2019  Implicit Posterior Variational Inference for Deep Gaussian Processes
A multi-layer deep Gaussian process (DGP) model is a hierarchical compos...

02/27/2013  Backward Simulation in Bayesian Networks
Backward simulation is an approximate inference technique for Bayesian b...

07/20/2020  Bayesian Few-Shot Classification with One-vs-Each Pólya-Gamma Augmented Gaussian Processes
Few-shot classification (FSC), the task of adapting a classifier to unse...
