Perceptual Quality-preserving Black-Box Attack against Deep Learning Image Classifiers

02/20/2019
by   Diego Gragnaniello, et al.

Deep neural networks provide unprecedented performance in virtually all image classification problems, leveraging the availability of huge amounts of training data. Recent studies, however, have shown their vulnerability to adversarial attacks, spawning intense research in this field. With the aim of building better systems, new countermeasures and ever stronger attacks are proposed every day. On the attacker's side, there is growing interest in the realistic black-box scenario, in which the attacker has no access to the neural network parameters. The problem is to design limited-complexity attacks that mislead the neural network without impairing image quality too much, so as not to attract the attention of human observers. In this work, we put special emphasis on this latter requirement and propose a powerful, low-complexity black-box attack that preserves perceptual image quality. Numerical experiments demonstrate the effectiveness of the proposed techniques both on tasks commonly considered in this context and on other applications in biometrics (face recognition) and forensics (camera model identification).
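As a concrete illustration of the idea (not the authors' actual algorithm), the sketch below implements a generic query-based black-box attack that accepts a random perturbation only if it both lowers the classifier's score for the true class and keeps the SSIM with the original image above a threshold. The `query_model` callable, the threshold `ssim_min`, and the step size are hypothetical assumptions introduced here for illustration; SSIM is computed with scikit-image.

```python
# Minimal sketch of a query-based black-box attack with a perceptual
# (SSIM) constraint -- an assumed, generic scheme, not the paper's method.
import numpy as np
from skimage.metrics import structural_similarity

def perceptual_black_box_attack(image, true_label, query_model,
                                ssim_min=0.95, step=0.05,
                                max_queries=1000, seed=0):
    """Perturb `image` (float array in [0, 1]) to lower the true-class
    score returned by the black-box `query_model`, while keeping SSIM
    with the original above `ssim_min`."""
    rng = np.random.default_rng(seed)
    adv = image.copy()
    best_score = query_model(adv)[true_label]
    for _ in range(max_queries):
        # Propose a small random perturbation, clipped to the valid range.
        candidate = np.clip(adv + step * rng.standard_normal(image.shape),
                            0.0, 1.0)
        # Reject candidates that degrade perceptual quality too much.
        if structural_similarity(image, candidate,
                                 channel_axis=-1, data_range=1.0) < ssim_min:
            continue
        score = query_model(candidate)[true_label]
        # Keep the perturbation only if it lowers the true-class score.
        if score < best_score:
            adv, best_score = candidate, score
            if np.argmax(query_model(adv)) != true_label:
                break  # misclassification achieved within the SSIM budget
    return adv
```

The rejection step is what distinguishes a quality-preserving attack from a plain query-based one: any candidate that would be noticeable to a human observer (low SSIM) is discarded before spending it on the misclassification objective.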

Related research

04/30/2021 · Black-box adversarial attacks using Evolution Strategies
In the last decade, deep neural networks have proven to be very powerful...

12/05/2019 · Scratch that! An Evolution-based Adversarial Attack against Neural Networks
Recent research has shown that Deep Neural Networks (DNNs) for image cla...

10/28/2022 · Distributed Black-box Attack against Image Classification Cloud Services
Black-box adversarial attacks can fool image classifiers into misclassif...

03/09/2021 · BASAR: Black-box Attack on Skeletal Action Recognition
Skeletal motion plays a vital role in human activity recognition as eith...

11/25/2020 · Adversarial Evaluation of Multimodal Models under Realistic Gray Box Assumption
This work examines the vulnerability of multimodal (image + text) models...

11/30/2021 · Human Imperceptible Attacks and Applications to Improve Fairness
Modern neural networks are able to perform at least as well as humans in...
