
A unified PAC-Bayesian framework for machine unlearning via information risk minimization

by Sharu Theresa Jose, et al.

Machine unlearning refers to mechanisms that can remove the influence of a subset of the training data from a trained model upon request, without incurring the cost of retraining from scratch. This paper develops a unified PAC-Bayesian framework for machine unlearning that recovers two recent design principles - variational unlearning (Nguyen, 2020) and the forgetting Lagrangian (Golatkar, 2020) - as information risk minimization problems (Zhang, 2006). Accordingly, both criteria can be interpreted as PAC-Bayesian upper bounds on the test loss of the unlearned model that take the form of free energy metrics.
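For orientation, an information risk minimization problem in the sense of Zhang (2006) selects a posterior by minimizing a free-energy objective: average empirical loss plus a KL-divergence penalty to a prior. The sketch below uses the standard PAC-Bayes notation (posterior $Q$, prior $P$, inverse temperature $\lambda$); the symbols are generic conventions, not taken from the paper itself.

```latex
% Generic free-energy (information risk minimization) objective:
% Q = posterior over model parameters w, P = prior,
% \hat{L}_S = empirical loss on training set S, \lambda > 0 = inverse temperature.
\min_{Q} \;\; \mathbb{E}_{w \sim Q}\!\left[\hat{L}_S(w)\right]
\;+\; \frac{1}{\lambda}\,\mathrm{KL}\!\left(Q \,\middle\|\, P\right)
```

The minimizer is the Gibbs posterior $Q^*(w) \propto P(w)\, e^{-\lambda \hat{L}_S(w)}$, which is why such bounds are referred to as free energy metrics.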



