Probabilistic Jacobian-based Saliency Maps Attacks

07/12/2020
by António Loison, et al.

Machine learning models have achieved spectacular performance in critical fields such as intelligent monitoring, autonomous driving and malware detection, so robustness against adversarial attacks is a key requirement for trusting these models. In particular, the Jacobian-based Saliency Map Attack (JSMA) is widely used to fool neural network classifiers. In this paper, we introduce Weighted JSMA (WJSMA) and Taylor JSMA (TJSMA), simple, faster and more efficient variants of JSMA. These attacks rely on new saliency maps that combine the neural network Jacobian with its output probabilities and the input features. We demonstrate the advantages of WJSMA and TJSMA in two computer vision applications: 1) LeNet-5, a well-known neural network classifier (NNC), on the MNIST database, and 2) a more challenging NNC on the CIFAR-10 dataset. We find that WJSMA and TJSMA significantly outperform JSMA in success rate, speed and average number of changed features. For instance, on LeNet-5 (with 100% and 99.49% accuracy on the training and test sets), WJSMA and TJSMA respectively exceed 97% and 98.60% success rate for a maximum authorised distortion of 14.5%, outperforming JSMA by more than 9.5 and 11 percentage points. The new attacks are then used for adversarial training, yielding models more robust than those trained against JSMA. Like JSMA, our attacks do not scale to large datasets such as ImageNet; nevertheless, they remain attractive for relatively small datasets like MNIST and CIFAR-10 and may be useful tools for future applications. Code is available at https://github.com/probabilistic-jsmas/probabilistic-jsmas.
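To make the idea concrete, here is a minimal sketch contrasting the classic JSMA saliency score with a probability-weighted variant in the spirit of WJSMA. This is an illustration of the weighting principle described in the abstract, not the authors' exact formulas; the `jacobian` and `probs` arrays are assumed to come from the attacked classifier, and all function names are hypothetical.

```python
import numpy as np

def jsma_saliency(jacobian, target):
    """Classic JSMA saliency map (increasing-feature variant).

    jacobian: array of shape (n_classes, n_features), d softmax / d input.
    target:   index of the class the attack tries to reach.
    Returns one score per input feature; higher = better feature to perturb.
    """
    jt = jacobian[target]                   # gradient of the target class
    jo = jacobian.sum(axis=0) - jt          # summed gradients of the other classes
    mask = (jt > 0) & (jo < 0)              # feature must help the target and hurt the rest
    return np.where(mask, jt * np.abs(jo), 0.0)

def weighted_saliency(jacobian, probs, target):
    """Probability-weighted saliency in the spirit of WJSMA (illustrative sketch):
    each non-target gradient is scaled by its output probability, so classes the
    model already rules out contribute less to the veto term."""
    jt = jacobian[target]
    jo_w = (probs[:, None] * jacobian).sum(axis=0) - probs[target] * jt
    mask = (jt > 0) & (jo_w < 0)
    return np.where(mask, jt * np.abs(jo_w), 0.0)

# Mock usage with random data standing in for a 10-class, 784-pixel model:
rng = np.random.default_rng(0)
J = rng.normal(size=(10, 784))              # stand-in Jacobian
p = np.full(10, 0.1)                        # stand-in softmax output
scores = weighted_saliency(J, p, target=3)
```

In the full attacks, such scores are recomputed at every iteration and the most salient features are perturbed until the classifier predicts the target class or the distortion budget is exhausted.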

