An Incremental Gray-box Physical Adversarial Attack on Neural Network Training

02/20/2023
by Rabiah Al-qudah, et al.

Neural networks have demonstrated remarkable success in learning and solving complex tasks in a variety of fields. Nevertheless, their rise in modern computing has been accompanied by concerns regarding their vulnerability to adversarial attacks. In this work, we propose a novel gradient-free, gray-box, incremental attack that targets the training process of neural networks. The proposed attack, which implicitly poisons the intermediate data structures that retain the training instances between training epochs, acquires its high-risk property from attacking data structures that are typically unobserved by practitioners; hence, the attack goes unnoticed despite the damage it can cause. Moreover, the attack can be executed without the attacker knowing the neural network structure or the training data, making it more dangerous. The attack was tested under a sensitive application of secure cognitive cities, namely, biometric authentication. The conducted experiments showed that the proposed attack is both effective and stealthy. Its effectiveness was evidenced by its ability to flip the sign of the loss gradient to positive, indicating noisy and unstable training, and by a decrease in the inference probability of the poisoned networks of 15.37% and 24.88% compared to their unpoisoned counterparts. Despite this effectiveness, the attack remained stealthy: it did not cause a notable increase in training time, and the F-score values dropped by an average of only 1.2%.
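To make the mechanism concrete, below is a minimal, illustrative sketch of between-epoch poisoning of the in-memory training data structure, assuming a PyTorch-style pipeline. The class and parameter names (IncrementalPoisoner, poison_fraction, noise_scale) are hypothetical and not taken from the paper, and the additive-noise perturbation is a stand-in rather than the authors' exact procedure.

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

class IncrementalPoisoner:
    """Gradient-free poisoner: perturbs a small batch of the training
    instances retained in memory between epochs, so the corruption
    accumulates over training. It never touches the model, its gradients,
    or the data source on disk (gray-box setting); names and the noise
    model here are illustrative assumptions."""

    def __init__(self, dataset, poison_fraction=0.01, noise_scale=0.1):
        self.dataset = dataset                  # intermediate data structure under attack
        self.poison_fraction = poison_fraction  # hypothetical: share poisoned per epoch
        self.noise_scale = noise_scale          # hypothetical: additive noise magnitude

    def between_epochs(self):
        x, _ = self.dataset.tensors
        n = max(1, int(self.poison_fraction * len(x)))
        idx = torch.randperm(len(x))[:n]
        # In-place corruption: the DataLoader silently serves these
        # perturbed instances on the next epoch.
        x[idx] += self.noise_scale * torch.randn_like(x[idx])

# Toy usage: poison between epochs of an otherwise ordinary training loop.
data = TensorDataset(torch.randn(256, 32), torch.randint(0, 2, (256,)))
loader = DataLoader(data, batch_size=32, shuffle=True)
poisoner = IncrementalPoisoner(data)
for epoch in range(3):
    for xb, yb in loader:
        pass  # stand-in for the victim's normal forward/backward/step
    poisoner.between_epochs()
```

Because the loop mutates tensors the DataLoader already holds, no file on disk changes and no extra pass over the data is added, which is consistent with the stealthiness properties described above.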

Related research

Thundernna: a white box adversarial attack (11/24/2021)
The existing work shows that the neural network trained by naive gradien...

HyperAttack: Multi-Gradient-Guided White-box Adversarial Structure Attack of Hypergraph Neural Networks (02/24/2023)
Hypergraph neural networks (HGNN) have shown superior performance in var...

Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch (06/16/2021)
As the curation of data for machine learning becomes increasingly automa...

Practical No-box Adversarial Attacks with Training-free Hybrid Image Transformation (03/09/2022)
In recent years, the adversarial vulnerability of deep neural networks (...

DST: Dynamic Substitute Training for Data-free Black-box Attack (04/03/2022)
With the wide applications of deep neural network models in various comp...

Adversarial Attack on Large Scale Graph (09/08/2020)
Recent studies have shown that graph neural networks are vulnerable agai...

FooBaR: Fault Fooling Backdoor Attack on Neural Network Training (09/23/2021)
Neural network implementations are known to be vulnerable to physical at...
