Universal Adversarial Attack on Deep Learning Based Prognostics

by Arghya Basak, et al.

Deep learning-based time series models are being extensively used in engineering and manufacturing industries for process control and optimization, asset monitoring, diagnostics, and predictive maintenance. These models have substantially improved prediction of the remaining useful life (RUL) of industrial equipment, but they are inherently vulnerable to adversarial attacks. Such attacks are easy to mount and can lead to catastrophic failure of critical industrial equipment. In general, a different adversarial perturbation is computed for each instance of the input data. This is difficult for an attacker to achieve in real time, however, because of the higher computational requirements and the lack of uninterrupted access to the input data. We therefore present the concept of a universal adversarial perturbation: a single, imperceptible noise pattern that fools regression-based RUL prediction models. Attackers can readily use universal adversarial perturbations for real-time attacks, since neither continuous access to the input data nor repeated computation of per-instance perturbations is required. We evaluate the effect of universal adversarial attacks using the NASA turbofan engine dataset. We show that adding the universal adversarial perturbation to any instance of the input data increases the error in the model's predicted output. To the best of our knowledge, we are the first to study the effect of universal adversarial perturbations on time series regression models. We further examine the effect of varying the perturbation strength and find that model accuracy decreases as the strength of the universal adversarial attack increases. We also show that universal adversarial perturbations transfer across different models.
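The core idea can be illustrated with a minimal sketch. The paper's models and dataset are not reproduced here; instead we use a toy linear "RUL model" and random stand-in sensor windows (all names and numbers below are illustrative assumptions). A single perturbation vector `v`, bounded by an L-infinity budget `eps`, is crafted once by gradient ascent on the dataset-wide squared error, and is then added unchanged to every input instance:

```python
import numpy as np

# Toy setup (assumptions): a linear model rul = x @ w standing in for the
# paper's deep RUL network, and random vectors standing in for sensor windows.
rng = np.random.default_rng(0)
n_features = 24                          # hypothetical window length
w = rng.normal(size=n_features)          # toy model weights
X = rng.normal(size=(200, n_features))   # stand-in input instances
y = X @ w                                # noise-free ground-truth RUL

def model(x):
    return x @ w

eps = 0.1                                            # perturbation strength
v = rng.normal(scale=0.01, size=n_features)          # one shared perturbation

# Gradient ascent on mean squared error over all instances. For this linear
# model, d/dv mean((model(X + v) - y)**2) = 2 * mean(residual) * w.
for _ in range(50):
    residual = model(X + v) - y
    grad = 2.0 * residual.mean() * w
    v += 0.05 * np.sign(grad)            # FGSM-style signed step
    v = np.clip(v, -eps, eps)            # project back into the budget

err_clean = np.mean((model(X) - y) ** 2)
err_adv = np.mean((model(X + v) - y) ** 2)
print(f"clean MSE: {err_clean:.4f}, adversarial MSE: {err_adv:.4f}")
```

The key property the abstract describes is visible here: `v` is computed once offline, needs no per-instance access to the data stream at attack time, and raising `eps` increases the induced error. For a real deep model, `grad` would come from automatic differentiation rather than the closed form above.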


