Adversarial Targeted Forgetting in Regularization and Generative Based Continual Learning Models

02/16/2021
by   Muhammad Umer, et al.

Continual (or "incremental") learning approaches are employed when additional knowledge or tasks need to be learned from subsequent batches or from streaming data. However, these approaches are typically adversary-agnostic, i.e., they do not consider the possibility of a malicious attack. In our prior work, we explored the vulnerability of Elastic Weight Consolidation (EWC) to perceptible misinformation. We now explore the vulnerabilities of other regularization-based as well as generative replay-based continual learning algorithms, and also extend the attack to imperceptible misinformation. We show that an intelligent adversary can take advantage of a continual learning algorithm's ability to retain existing knowledge over time, and force it to learn and retain deliberately introduced misinformation. To demonstrate this vulnerability, we inject backdoor attack samples into the training data. These attack samples constitute the misinformation, allowing the attacker to gain control of the model at test time. We evaluate the extent of this vulnerability on both rotated and split benchmark variants of the MNIST dataset under two important settings: domain-incremental and class-incremental learning. We show that the adversary can create a "false memory" about any task by inserting carefully designed backdoor samples into the test instances of that task, thereby controlling the amount of forgetting of any task of its choosing. Perhaps most importantly, we show this vulnerability to be very acute and damaging: the model's memory can be easily compromised by adding backdoor samples to as little as 1% of the training data, even when the misinformation is imperceptible to the human eye.
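To make the attack setting concrete, the following is a minimal, generic sketch of backdoor-sample injection of the kind the abstract describes: a small fraction (e.g., 1%) of training images receive a trigger pattern and have their labels flipped to an attacker-chosen target. This is an illustrative example, not the paper's exact trigger or training pipeline; the function name, the corner-patch trigger, and all parameters (`target_label`, `poison_frac`, `trigger_size`) are assumptions for demonstration. The paper's imperceptible variant would instead use a low-amplitude perturbation rather than a visible patch.

```python
import numpy as np

def add_backdoor_trigger(images, labels, target_label, poison_frac=0.01,
                         trigger_value=1.0, trigger_size=3, seed=0):
    """Poison a fraction of a training set with a small corner trigger.

    Hypothetical illustration: stamps a trigger_size x trigger_size patch of
    maximal pixel intensity into the bottom-right corner of randomly chosen
    images and flips their labels to target_label.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = max(1, int(poison_frac * len(images)))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -trigger_size:, -trigger_size:] = trigger_value  # stamp trigger patch
    labels[idx] = target_label                                   # mislabel to target
    return images, labels, idx

# Example: poison 1% of a toy 28x28 MNIST-like batch
X = np.zeros((200, 28, 28))
y = np.zeros(200, dtype=int)
X_poisoned, y_poisoned, poison_idx = add_backdoor_trigger(
    X, y, target_label=7, poison_frac=0.01)
```

At test time, the attacker stamps the same trigger onto clean inputs of a chosen task; a model that has learned and retained the trigger-to-label association then misclassifies them as `target_label`, which is the "false memory" effect described above.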


Related research

- 02/17/2020: Targeted Forgetting and False Memory Formation in Continual Learners through Adversarial Backdoor Attacks. "Artificial neural networks are well-known to be susceptible to catastrop..."
- 02/09/2022: False Memory Formation in Continual Learners Through Imperceptible Backdoor Trigger. "In this brief, we show that sequentially learning new information presen..."
- 11/29/2022: Training Time Adversarial Attack Aiming the Vulnerability of Continual Learning. "Generally, regularization-based continual learning models limit access t..."
- 03/30/2023: Mole Recruitment: Poisoning of Image Classifiers via Selective Batch Sampling. "In this work, we present a data poisoning attack that confounds machine ..."
- 05/28/2023: Backdoor Attacks Against Incremental Learners: An Empirical Evaluation Study. "Large amounts of incremental learning algorithms have been proposed to a..."
- 09/27/2018: Generative replay with feedback connections as a general strategy for continual learning. "Standard artificial neural networks suffer from the well-known issue of ..."
- 07/16/2020: Multilayer Neuromodulated Architectures for Memory-Constrained Online Continual Learning. "We focus on the problem of how to achieve online continual learning unde..."
