Training Time Adversarial Attack Aiming the Vulnerability of Continual Learning

11/29/2022
by Gyojin Han, et al.

Regularization-based continual learning models generally restrict access to data from previous tasks, imitating real-world settings with memory and privacy constraints. However, this restriction means the models cannot track their performance on each past task, which in turn leaves current continual learning methods vulnerable to attacks targeting previously learned tasks. We demonstrate this vulnerability of regularization-based continual learning methods by presenting a simple task-specific, training-time adversarial attack that can be mounted during the learning of a new task. Training data generated by the proposed attack degrades performance on the specific task chosen by the attacker. The experimental results confirm the vulnerability identified in this paper and demonstrate the importance of developing continual learning models that are robust to adversarial attacks.
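To make the threat model concrete, below is a minimal, hypothetical PyTorch sketch of what such a task-targeted training-time poison could look like. It is not the paper's actual algorithm: the bilevel heuristic (simulating one SGD step of the victim and ascending on the targeted old task's loss), and all names such as craft_poison, inner_lr, and eps, are illustrative assumptions.

```python
# Hypothetical sketch: perturb a new-task batch so that training on it
# tends to raise the loss on a targeted previous task. Assumes the
# attacker holds some samples (x_tgt, y_tgt) from the targeted task.
import torch
import torch.nn.functional as F
from torch.func import functional_call  # requires PyTorch >= 2.0


def craft_poison(model, x_new, y_new, x_tgt, y_tgt,
                 eps=8 / 255, alpha=2 / 255, steps=20, inner_lr=0.1):
    """Return x_new perturbed within an L-inf ball of radius eps."""
    params = dict(model.named_parameters())
    delta = torch.zeros_like(x_new, requires_grad=True)

    for _ in range(steps):
        # Victim's loss on the poisoned new-task batch.
        logits = functional_call(model, params, (x_new + delta,))
        loss_new = F.cross_entropy(logits, y_new)

        # Simulate one SGD step the victim would take on this batch,
        # keeping the graph so gradients flow back to the perturbation.
        grads = torch.autograd.grad(loss_new, tuple(params.values()),
                                    create_graph=True)
        updated = {k: p - inner_lr * g
                   for (k, p), g in zip(params.items(), grads)}

        # Attacker objective: loss on the targeted previous task after
        # the simulated update; ascend on it w.r.t. the perturbation.
        tgt_logits = functional_call(model, updated, (x_tgt,))
        loss_tgt = F.cross_entropy(tgt_logits, y_tgt)
        grad_delta, = torch.autograd.grad(loss_tgt, delta)

        with torch.no_grad():
            delta += alpha * grad_delta.sign()  # PGD-style ascent step
            delta.clamp_(-eps, eps)             # stay within the budget

    return (x_new + delta).detach()
```

In a full attack the returned batch would be mixed into the victim's new-task training data; a regularization-based learner, which cannot revisit old-task data to notice the degradation, would then lose accuracy on the targeted task while the new task still trains normally.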

Related research

02/16/2021 · Adversarial Targeted Forgetting in Regularization and Generative Based Continual Learning Models
Continual (or "incremental") learning approaches are employed when addit...

03/30/2023 · Mole Recruitment: Poisoning of Image Classifiers via Selective Batch Sampling
In this work, we present a data poisoning attack that confounds machine ...

07/27/2023 · Detecting Morphing Attacks via Continual Incremental Training
Scenarios in which restrictions in data transfer and storage limit the p...

11/09/2022 · Continual learning autoencoder training for a particle-in-cell simulation via streaming
The upcoming exascale era will provide a new generation of physics simul...

05/23/2023 · Enhancing Accuracy and Robustness through Adversarial Training in Class Incremental Continual Learning
In real life, adversarial attack to deep learning models is a fatal secu...

02/09/2022 · False Memory Formation in Continual Learners Through Imperceptible Backdoor Trigger
In this brief, we show that sequentially learning new information presen...

04/05/2022 · Attention Distraction: Watermark Removal Through Continual Learning with Selective Forgetting
Fine-tuning attacks are effective in removing the embedded watermarks in...
