
False Memory Formation in Continual Learners Through Imperceptible Backdoor Trigger

by Muhammad Umer, et al.

In this brief, we show that sequentially learning new information presented to a continual (incremental) learning model introduces new security risks: an intelligent adversary can introduce a small amount of misinformation into the model during training to cause deliberate forgetting of a specific task or class at test time, thus creating a "false memory" about that task. We demonstrate such an adversary's ability to assume control of the model by injecting "backdoor" attack samples into commonly used generative-replay and regularization-based continual learning approaches, using continual learning benchmark variants of MNIST as well as the more challenging SVHN and CIFAR-10 datasets. Perhaps most damaging, we show this vulnerability to be very acute and exceptionally effective: the backdoor pattern in our attack model can be imperceptible to the human eye, can be provided at any point in time, can be added to the training data of even a single, possibly unrelated task, and can be achieved with as few as just 1% of the total training data of a single task.
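To make the attack setting above concrete, the following is a minimal, hypothetical sketch of backdoor data poisoning in NumPy: an imperceptibly small trigger pattern is added to roughly 1% of a task's training images, and those samples are relabeled to an attacker-chosen target class. The function name, the trigger amplitude (about 1/255 for images in [0, 1]), and all parameters are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def poison_dataset(images, labels, target_label, trigger, poison_frac=0.01, seed=0):
    """Illustrative backdoor poisoning (hypothetical, not the paper's exact code).

    images : float array in [0, 1], shape (N, H, W)
    labels : int array, shape (N,)
    trigger: small-amplitude perturbation, shape (H, W) -- "imperceptible" when
             its magnitude is on the order of one intensity level (~1/255)
    poison_frac: fraction of samples to poison (the paper reports success
                 with as few as 1% of a single task's training data)
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = max(1, int(poison_frac * len(images)))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Add the trigger and clip back to valid pixel range
    images[idx] = np.clip(images[idx] + trigger, 0.0, 1.0)
    # Relabel poisoned samples to the attacker's target class
    labels[idx] = target_label
    return images, labels, idx

# Example: poison 1% of 200 dummy 8x8 images with a ~1/255-amplitude trigger
clean = np.full((200, 8, 8), 0.5)
y = np.zeros(200, dtype=int)
trig = np.full((8, 8), 0.004)
px, py, idx = poison_dataset(clean, y, target_label=7, trigger=trig)
```

A continual learner trained on such a stream would behave normally on clean inputs, but at test time the adversary can present the trigger to force misclassification into the target class, the "false memory" described above.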


Adversarial Targeted Forgetting in Regularization and Generative Based Continual Learning Models

Continual (or "incremental") learning approaches are employed when addit...

Targeted Forgetting and False Memory Formation in Continual Learners through Adversarial Backdoor Attacks

Artificial neural networks are well-known to be susceptible to catastrop...

Training Time Adversarial Attack Aiming the Vulnerability of Continual Learning

Generally, regularization-based continual learning models limit access t...

Backdoor Attacks Against Incremental Learners: An Empirical Evaluation Study

Large amounts of incremental learning algorithms have been proposed to a...

Move-to-Data: A new Continual Learning approach with Deep CNNs, Application for image-class recognition

In many real-life tasks of application of supervised learning approaches...

OvA-INN: Continual Learning with Invertible Neural Networks

In the field of Continual Learning, the objective is to learn several ta...

Mole Recruitment: Poisoning of Image Classifiers via Selective Batch Sampling

In this work, we present a data poisoning attack that confounds machine ...