Targeted Forgetting and False Memory Formation in Continual Learners through Adversarial Backdoor Attacks

02/17/2020
by Muhammad Umer, et al.

Artificial neural networks are well known to be susceptible to catastrophic forgetting when continually learning from sequences of tasks. Various continual (or "incremental") learning approaches have been proposed to avoid catastrophic forgetting, but they are typically adversary agnostic, i.e., they do not consider the possibility of a malicious attack. In this effort, we explore the vulnerability of Elastic Weight Consolidation (EWC), a popular continual learning algorithm for avoiding catastrophic forgetting. We show that an intelligent adversary can bypass EWC's defenses and instead cause gradual and deliberate forgetting by introducing small amounts of misinformation to the model during training. We demonstrate such an adversary's ability to assume control of the model via injection of "backdoor" attack samples on both permuted and split benchmark variants of the MNIST dataset. Importantly, once the model has learned the adversarial misinformation, the adversary can then control the amount of forgetting of any task. Equivalently, the malicious actor can create a "false memory" about any task by inserting carefully designed backdoor samples into any fraction of the test instances of that task. Perhaps most damaging, we show this vulnerability to be very acute: neural network memory can be easily compromised by adding backdoor samples to as little as 1% of the training data.
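To make the attack mechanics concrete, the sketch below shows how backdoor samples might be injected into a small fraction of one task's MNIST training data. This is a minimal illustration, not the paper's exact protocol: the 2x2 corner trigger, the target_label parameter, and the helper name add_backdoor_trigger are assumptions made for the example.

    import numpy as np

    def add_backdoor_trigger(images, labels, target_label, poison_frac=0.01, seed=0):
        # Sketch of backdoor poisoning, assuming images of shape (N, 28, 28)
        # with pixel values scaled to [0, 1]. Trigger pattern, target class,
        # and 1% poisoning rate are illustrative assumptions.
        rng = np.random.default_rng(seed)
        images, labels = images.copy(), labels.copy()
        n_poison = int(poison_frac * len(images))           # e.g., 1% of one task's data
        idx = rng.choice(len(images), size=n_poison, replace=False)
        images[idx, -2:, -2:] = 1.0                         # stamp a bright 2x2 corner patch
        labels[idx] = target_label                          # mislabel to the attacker's class
        return images, labels

At test time, stamping the same trigger onto any fraction of a task's test images steers the model's predictions toward target_label, producing the targeted forgetting and "false memory" behavior described above, while trigger-free inputs leave clean accuracy largely intact, which is what makes the attack hard to detect.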

Related research

02/16/2021
Adversarial Targeted Forgetting in Regularization and Generative Based Continual Learning Models
Continual (or "incremental") learning approaches are employed when addit...

05/28/2023
Backdoor Attacks Against Incremental Learners: An Empirical Evaluation Study
Large amounts of incremental learning algorithms have been proposed to a...

02/09/2022
False Memory Formation in Continual Learners Through Imperceptible Backdoor Trigger
In this brief, we show that sequentially learning new information presen...

06/24/2021
Continual Competitive Memory: A Neural System for Online Task-Free Lifelong Learning
In this article, we propose a novel form of unsupervised learning, conti...

11/24/2020
Lethean Attack: An Online Data Poisoning Technique
Data poisoning is an adversarial scenario where an attacker feeds a spec...

05/12/2022
KASAM: Spline Additive Models for Function Approximation
Neural networks have been criticised for their inability to perform cont...

06/13/2020
GAN Memory with No Forgetting
Seeking to address the fundamental issue of memory in lifelong learning,...
