A unified PAC-Bayesian framework for machine unlearning via information risk minimization

06/01/2021
by   Sharu Theresa Jose, et al.

Machine unlearning refers to mechanisms that remove the influence of a subset of training data from a trained model upon request, without incurring the cost of retraining from scratch. This paper develops a unified PAC-Bayesian framework for machine unlearning that recovers two recent design principles, variational unlearning (Nguyen et al., 2020) and the forgetting Lagrangian (Golatkar et al., 2020), as information risk minimization problems (Zhang, 2006). Accordingly, both criteria can be interpreted as PAC-Bayesian upper bounds on the test loss of the unlearned model that take the form of free energy metrics.
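As a rough illustration of the information risk minimization viewpoint the abstract invokes (a sketch, not code from the paper): learning is cast as minimizing a free energy objective, the expected empirical loss under a posterior plus a KL penalty toward a prior, and over a finite hypothesis set the minimizer is the Gibbs posterior. The hypothesis set, losses, and temperature below are made-up examples.

```python
import math

def free_energy(q, p, losses, lam):
    # F(q) = E_q[loss] + (1/lam) * KL(q || p), the generic
    # information-risk-minimization objective.
    kl = sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)
    exp_loss = sum(qi * li for qi, li in zip(q, losses))
    return exp_loss + kl / lam

def gibbs_posterior(p, losses, lam):
    # q_i proportional to p_i * exp(-lam * loss_i) minimizes F(q).
    w = [pi * math.exp(-lam * li) for pi, li in zip(p, losses)]
    z = sum(w)
    return [wi / z for wi in w]

p = [0.25, 0.25, 0.25, 0.25]   # uniform prior over 4 hypotheses (illustrative)
losses = [0.9, 0.4, 0.1, 0.6]  # hypothetical empirical losses
lam = 5.0                      # inverse-temperature trade-off parameter

q = gibbs_posterior(p, losses, lam)
# The Gibbs posterior attains a free energy no larger than the prior's.
print(free_energy(q, p, losses, lam) <= free_energy(p, p, losses, lam))
```

The free energy value itself is exactly the kind of metric the abstract describes: a PAC-Bayesian bound trades expected loss against KL divergence to a reference distribution.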


Related research:

- 06/25/2021, Chebyshev-Cantelli PAC-Bayes-Bennett Inequality for the Weighted Majority Vote: We present a new second-order oracle bound for the expected risk of a we...
- 05/27/2016, PAC-Bayesian Theory Meets Bayesian Inference: We exhibit a strong link between frequentist PAC-Bayesian risk bounds an...
- 12/06/2019, Improved PAC-Bayesian Bounds for Linear Regression: In this paper, we improve the PAC-Bayesian error bound for linear regres...
- 05/23/2011, PAC-Bayesian Analysis of the Exploration-Exploitation Trade-off: We develop a coherent framework for integrative simultaneous analysis of...
- 02/10/2022, On characterizations of learnability with computable learners: We study computable PAC (CPAC) learning as introduced by Agarwal et al. ...
- 03/03/2022, Robust PAC^m: Training Ensemble Models Under Model Misspecification and Outliers: Standard Bayesian learning is known to have suboptimal generalization ca...
