Generating Black-Box Adversarial Examples in Sparse Domain

01/22/2021
by Hadi Zanddizari, et al.

Applications of machine learning (ML) models and convolutional neural networks (CNNs) have increased rapidly. Although ML models provide high accuracy in many applications, recent investigations show that such networks are highly vulnerable to adversarial attacks. A black-box adversarial attack is one in which the attacker has no knowledge of the model or the training dataset. In this paper, we propose a novel approach to generating black-box attacks in the sparse domain, where the most important information of an image can be observed. Our investigation shows that large sparse components play a critical role in the performance of image classifiers. Under this presumption, to generate an adversarial example we transform an image into a sparse domain and apply a threshold to select only the k largest components. In contrast to very recent works that randomly perturb k low-frequency (LoF) components, we perturb the k largest sparse (LaS) components either randomly (query-based) or in the direction of the most correlated sparse signal from a different class. We show that LaS components carry middle- and higher-frequency information, which helps fool classifiers with fewer queries. We also demonstrate the effectiveness of this approach by fooling the TensorFlow Lite (TFLite) model of the Google Cloud Vision platform. Mean squared error (MSE) and peak signal-to-noise ratio (PSNR) are used as quality metrics, and we present a theoretical proof connecting these metrics to the level of perturbation in the sparse domain. We tested our adversarial examples against state-of-the-art CNN and support vector machine (SVM) classifiers on color and grayscale image datasets. The results show that the proposed method substantially increases the misclassification rate of these classifiers.
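As a rough illustration of the random (query-based) variant of the attack described above, the following is a minimal Python sketch. It assumes a 2-D DCT as the sparsifying transform (the abstract does not fix a particular transform), and the values of k, eps, and the relative-noise scaling are illustrative choices, not the authors' settings:

    import numpy as np
    from scipy.fft import dct, idct

    def dct2(x):
        """2-D type-II DCT with orthonormal scaling."""
        return dct(dct(x, axis=0, norm='ortho'), axis=1, norm='ortho')

    def idct2(c):
        """Inverse 2-D DCT with orthonormal scaling."""
        return idct(idct(c, axis=0, norm='ortho'), axis=1, norm='ortho')

    def las_perturb(image, k=100, eps=0.05, rng=None):
        """Randomly perturb the k largest-magnitude sparse (LaS)
        components of a grayscale image in [0, 1].
        k and eps are illustrative, not the paper's values."""
        rng = np.random.default_rng() if rng is None else rng
        coeffs = dct2(image.astype(np.float64))
        mags = np.abs(coeffs).ravel()
        # Indices of the k largest-magnitude coefficients.
        idx = np.argpartition(mags, -k)[-k:]
        # Noise scaled relative to each coefficient's magnitude.
        noise = rng.uniform(-eps, eps, size=k) * mags[idx]
        coeffs.ravel()[idx] += noise
        return np.clip(idct2(coeffs), 0.0, 1.0)

    def psnr(x, y, peak=1.0):
        """Peak signal-to-noise ratio between two images in [0, peak]."""
        mse = np.mean((x - y) ** 2)
        return 10.0 * np.log10(peak ** 2 / mse)

    if __name__ == "__main__":
        img = np.random.rand(32, 32)  # stand-in for a normalized grayscale image
        adv = las_perturb(img, k=100, eps=0.05)
        print("PSNR of adversarial example:", psnr(img, adv))

In a query-based loop, this perturbation would be resampled and the candidate submitted to the target classifier until misclassification occurs. Note that because an orthonormal DCT preserves Euclidean norms (Parseval's theorem), the pixel-domain MSE equals the mean squared perturbation applied to the coefficients, which is the kind of relation the abstract's theoretical proof formalizes for MSE and PSNR.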


Related research

12/19/2017
Query-Efficient Black-box Adversarial Examples
Current neural network-based image classifiers are susceptible to advers...

02/11/2022
Adversarial Attacks and Defense Methods for Power Quality Recognition
Vulnerability of various machine learning methods to adversarial example...

03/15/2023
Physics-Informed Optical Kernel Regression Using Complex-valued Neural Fields
Lithography is fundamental to integrated circuit fabrication, necessitat...

06/14/2019
Copy and Paste: A Simple But Effective Initialization Method for Black-Box Adversarial Attacks
Many optimization methods for generating black-box adversarial examples ...

06/20/2023
Reversible Adversarial Examples with Beam Search Attack and Grayscale Invariance
Reversible adversarial examples (RAE) combine adversarial attacks and re...

05/08/2020
Projection Probability-Driven Black-Box Attack
Generating adversarial examples in a black-box setting retains a signifi...

02/24/2019
MaskDGA: A Black-box Evasion Technique Against DGA Classifiers and Adversarial Defenses
Domain generation algorithms (DGAs) are commonly used by botnets to gene...
