Continual Learning and Private Unlearning

by Bo Liu et al.
The University of Texas at Austin

As intelligent agents become autonomous over longer periods of time, they may eventually become lifelong counterparts to specific people. If so, it may be common for a user to want the agent to master a task temporarily but later to forget it due to privacy concerns. However, enabling an agent to privately forget exactly what the user specified, without degrading the rest of its learned knowledge, is a challenging problem. To address this challenge, this paper formalizes the continual learning and private unlearning (CLPU) problem. It further introduces a straightforward but exactly private solution, CLPU-DER++, as a first step toward solving the CLPU problem, along with a set of carefully designed benchmark problems for evaluating the effectiveness of the proposed solution.
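One common way to make unlearning *exact* (rather than approximate) is to isolate each task the user flags as temporary in its own model, so that deleting that model provably removes all information derived from the task's data. The sketch below illustrates this idea only; the class and method names are illustrative assumptions, not the paper's actual CLPU-DER++ implementation, which may manage knowledge transfer and consolidation quite differently.

```python
# Minimal sketch of exact per-task unlearning via model isolation.
# NOTE: IsolatingLearner and its methods are hypothetical names invented
# for illustration; they are not the paper's API.

class IsolatingLearner:
    def __init__(self):
        self.main_model = {}    # stand-in for knowledge in a shared network
        self.temp_models = {}   # one isolated model per "temporary" task

    def learn(self, task_id, data, temporary=False):
        if temporary:
            # Train a separate model so the task's data leaves no trace
            # in the shared parameters.
            self.temp_models[task_id] = {"params": f"trained_on_{task_id}"}
        else:
            self.main_model[task_id] = {"params": f"trained_on_{task_id}"}

    def forget(self, task_id):
        # Exact unlearning: deleting the isolated model removes every bit
        # of information derived from the task's data.
        self.temp_models.pop(task_id, None)

    def knows(self, task_id):
        return task_id in self.main_model or task_id in self.temp_models
```

The cost of this isolation is extra memory and lost opportunities for transfer between tasks, which is why a practical solution must also decide how and when knowledge from temporary tasks can be safely consolidated.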

