Reversible Adversarial Examples with Beam Search Attack and Grayscale Invariance

06/20/2023
by   Haodong Zhang, et al.

Reversible adversarial examples (RAE) combine adversarial attacks and reversible data-hiding technology on a single image to prevent illegal access. Most RAE studies focus on white-box attacks. In this paper, we propose a novel framework for generating reversible adversarial examples that combines a novel beam-search-based black-box attack with reversible data hiding with grayscale invariance (RDH-GI). The attack uses beam search to evaluate the adversarial gain of historical perturbations and to guide subsequent perturbations. After the adversarial example is generated, the framework applies RDH-GI to embed the secret data, which can later be extracted and the image recovered losslessly. Experimental results show that our method achieves an average Peak Signal-to-Noise Ratio (PSNR) of at least 40 dB relative to the source images under limited query budgets. Our method is also the first to achieve a targeted black-box reversible adversarial attack.
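To illustrate the beam-search idea described above, the following is a minimal sketch of a query-based black-box attack that maintains a beam of the highest-gain perturbations and expands each with random candidate steps. This is not the paper's exact algorithm; the function names, the random ±ε step proposal, and the scoring interface `score_fn` (higher score = more adversarial) are all assumptions for illustration.

```python
import numpy as np

def beam_search_attack(image, score_fn, num_steps=10, beam_width=3,
                       num_candidates=8, epsilon=8 / 255, rng=None):
    """Illustrative beam-search black-box attack (a sketch, not the
    paper's method). `score_fn` is the only access to the model: it
    returns a scalar adversarial gain for a candidate image, and each
    call counts against the query budget."""
    rng = np.random.default_rng(rng)
    beam = [np.zeros_like(image)]  # start from the zero perturbation
    for _ in range(num_steps):
        candidates = []
        for delta in beam:
            # expand each beam entry with random signed steps,
            # keeping the total perturbation inside the epsilon ball
            for _ in range(num_candidates):
                step = rng.choice([-1.0, 1.0], size=image.shape) * (epsilon / num_steps)
                candidates.append(np.clip(delta + step, -epsilon, epsilon))
        # one black-box query per candidate; keep the top-scoring ones
        candidates.sort(key=lambda d: score_fn(np.clip(image + d, 0.0, 1.0)),
                        reverse=True)
        beam = candidates[:beam_width]
    return np.clip(image + beam[0], 0.0, 1.0)
```

For example, with a toy `score_fn` that rewards brighter images (`lambda x: x.mean()`), the returned adversarial image stays within the ε-ball of the source while increasing the score, which is the behavior the beam selection is meant to enforce.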

