Continual Learning and Private Unlearning

03/24/2022
by Bo Liu, et al.

As intelligent agents become autonomous over longer periods of time, they may eventually become lifelong counterparts to specific people. If so, it may be common for a user to want the agent to master a task temporarily but later to forget it due to privacy concerns. However, enabling an agent to privately forget what the user specified, without degrading the rest of its learned knowledge, is a challenging problem. To address this challenge, this paper formalizes the continual learning and private unlearning (CLPU) problem. The paper further introduces a straightforward but exactly private solution, CLPU-DER++, as a first step toward solving the CLPU problem, along with a set of carefully designed benchmark problems for evaluating the effectiveness of the proposed solution.
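
The abstract does not describe how CLPU-DER++ achieves exact privacy, but one common way to guarantee that a forgotten task leaves no trace is to keep temporarily learned knowledge in an isolated model that can simply be deleted on request. The sketch below is only an illustration of such an interaction protocol, not the paper's algorithm; the CLPUAgent class, its method names, and the placeholder "training" routine are hypothetical.

# Illustrative sketch of a continual-learning agent that supports exact
# forgetting by isolating temporarily learned tasks. All names are
# hypothetical; this is not the CLPU-DER++ algorithm from the paper.

class CLPUAgent:
    def __init__(self):
        self.main_knowledge = {}     # tasks the user wants remembered
        self.temporary_models = {}   # tasks learned in isolation

    def learn(self, task_id, data, remember=True):
        """Learn a task; if remember=False, store it in an isolated model."""
        model = self._train(data)
        if remember:
            self.main_knowledge[task_id] = model
        else:
            self.temporary_models[task_id] = model

    def forget(self, task_id):
        """Exactly remove a temporarily learned task by deleting its
        isolated model; the main knowledge is untouched."""
        self.temporary_models.pop(task_id, None)

    def predict(self, task_id, x):
        model = self.temporary_models.get(task_id) or self.main_knowledge.get(task_id)
        if model is None:
            raise KeyError(f"no knowledge retained for task {task_id}")
        return model(x)

    @staticmethod
    def _train(data):
        # Placeholder "training": memorize a lookup table of (x, y) pairs.
        table = dict(data)
        return lambda x: table.get(x)


# Example interaction: learn two tasks, one marked temporary, then forget it.
agent = CLPUAgent()
agent.learn("task_a", [(1, "cat"), (2, "dog")], remember=True)
agent.learn("task_b", [(3, "secret")], remember=False)
agent.forget("task_b")             # task_b leaves no trace behind
print(agent.predict("task_a", 1))  # -> "cat" (remembered knowledge preserved)

Because the temporary task never touches the main knowledge store, deleting its isolated model removes it exactly; the open challenge the paper targets is doing this without the storage cost of one model per task while still preserving performance on the remembered tasks.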


Related research:

- Unicorn: Continual Learning with a Universal, Off-policy Agent (02/22/2018)
- Rethinking Continual Learning for Autonomous Agents and Robots (07/02/2019)
- Sparse Distributed Memory is a Continual Learner (03/20/2023)
- Differentially Private Continual Learning (02/18/2019)
- AI Autonomy: Self-Initiation, Adaptation and Continual Learning (03/17/2022)
