From Adaptive Query Release to Machine Unlearning

07/20/2023
by Enayat Ullah et al.

We formalize the problem of machine unlearning as the design of efficient unlearning algorithms corresponding to learning algorithms that perform a selection of adaptive queries from structured query classes. We give efficient unlearning algorithms for linear and prefix-sum query classes. As applications, we show that unlearning in many problems, in particular stochastic convex optimization (SCO), can be reduced to the above, yielding improved guarantees. In particular, for smooth Lipschitz losses and any ρ > 0, our results yield an unlearning algorithm with excess population risk Õ(1/√n + √d/(nρ)) and unlearning query (gradient) complexity Õ(ρ · retraining complexity), where d is the model dimensionality and n is the initial number of samples. For non-smooth Lipschitz losses, we give an unlearning algorithm with excess population risk Õ(1/√n + (√d/(nρ))^{1/2}) and the same unlearning query (gradient) complexity. Furthermore, in the special case of Generalized Linear Models (GLMs), such as those arising in linear and logistic regression, we obtain dimension-independent rates of Õ(1/√n + 1/(nρ)^{2/3}) and Õ(1/√n + 1/(nρ)^{1/3}) for smooth and non-smooth Lipschitz losses, respectively. Finally, we generalize the above from a single unlearning request to dynamic streams consisting of insertions and deletions.
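To give intuition for why structured query classes can admit unlearning far cheaper than retraining, here is a minimal sketch in the spirit of prefix-sum queries. This is an illustration, not the paper's algorithm, and the AggregationTree class and its methods are hypothetical names introduced for this sketch: if running aggregates are maintained in a binary aggregation tree, deleting one sample only requires recomputing the O(log n) nodes on its root-to-leaf path, rather than re-aggregating all n samples.

```python
# Hypothetical sketch: a binary aggregation tree for prefix-sum-style queries.
# Deleting one sample touches only the O(log n) internal nodes on its
# root-to-leaf path, instead of re-aggregating all n samples from scratch.

class AggregationTree:
    def __init__(self, values):
        self.n = len(values)
        # Standard iterative segment tree: leaves live at indices n..2n-1.
        self.tree = [0.0] * (2 * self.n)
        self.tree[self.n:] = list(values)
        for i in range(self.n - 1, 0, -1):
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

    def prefix_sum(self, k):
        """Sum of the first k samples (query cost O(log n))."""
        total, lo, hi = 0.0, self.n, self.n + k
        while lo < hi:
            if lo & 1:           # lo is a right child: take it and move right
                total += self.tree[lo]
                lo += 1
            if hi & 1:           # hi is a right boundary: step left and take it
                hi -= 1
                total += self.tree[hi]
            lo //= 2
            hi //= 2
        return total

    def delete(self, index):
        """Unlearn sample `index`: zero its leaf, then fix its ancestors."""
        i = self.n + index
        self.tree[i] = 0.0
        while i > 1:
            i //= 2
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]


# Usage: deleting one of n samples costs O(log n), not O(n).
tree = AggregationTree([1.0, 2.0, 3.0, 4.0])
assert tree.prefix_sum(3) == 6.0
tree.delete(1)                   # remove the sample with value 2.0
assert tree.prefix_sum(3) == 4.0
```

The design point this sketch isolates is that the tree localizes each sample's influence to a logarithmic number of stored aggregates, which is the kind of structure that makes an unlearning cost proportional to a small fraction of retraining plausible.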


Related research

Differentially Private Generalized Linear Models Revisited (05/06/2022)
We study the problem of (ϵ,δ)-differentially private learning of linear ...

Uniform Graphical Convergence of Subgradients in Nonconvex Optimization and Learning (10/17/2018)
We investigate the stochastic optimization problem of minimizing populat...

Classification Under Misspecification: Halfspaces, Generalized Linear Models, and Connections to Evolvability (06/08/2020)
In this paper we revisit some classic problems on classification under m...

Differentially Private SGD with Non-Smooth Loss (01/22/2021)
In this paper, we are concerned with differentially private SGD algorith...

Efficient Primal-Dual Algorithms for Large-Scale Multiclass Classification (02/11/2019)
We develop efficient algorithms to train ℓ_1-regularized linear classifi...

Faster Rates of Convergence to Stationary Points in Differentially Private Optimization (06/02/2022)
We study the problem of approximating stationary points of Lipschitz and...

Adaptive first-order methods revisited: Convex optimization without Lipschitz requirements (07/16/2021)
We propose a new family of adaptive first-order methods for a class of c...
