Towards Imperceptible and Robust Adversarial Example Attacks against Neural Networks

01/15/2018
by Bo Luo, et al.

Machine learning systems based on deep neural networks (DNNs), which produce state-of-the-art results on various perception tasks, have gained mainstream adoption in many applications. However, they have been shown to be vulnerable to adversarial example attacks, in which slight perturbations added to the input cause the model to produce erroneous outputs. Previous adversarial example crafting methods use simple distance metrics to bound the perturbation between the original example and the adversarial one, and the resulting perturbations can be easily detected by human eyes. In addition, these attacks are often not robust to the inevitable noise and deviations of the physical world. In this work, we present a new adversarial example crafting method that takes the human perceptual system into consideration and maximizes the noise tolerance of the crafted adversarial example. Experimental results demonstrate the efficacy of the proposed technique.
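
To make the two ideas in the abstract concrete, below is a minimal PyTorch sketch of a perceptually weighted, noise-tolerant perturbation. It assumes, as stand-ins for the paper's exact formulation, that per-pixel perceptibility is approximated by local standard deviation (so the noise is concentrated in textured regions where it is harder to see) and that noise tolerance is pursued by maximizing the gap between the target logit and the runner-up logit. The function names, step size, and objective are illustrative assumptions, not the authors' algorithm.

import torch
import torch.nn.functional as F

def local_std(x, k=3):
    # Per-pixel standard deviation over a k x k neighborhood (reflect-padded).
    # High values mark textured regions where perturbations are less visible.
    pad = k // 2
    xp = F.pad(x, (pad, pad, pad, pad), mode="reflect")
    mean = F.avg_pool2d(xp, k, stride=1)
    mean_sq = F.avg_pool2d(xp * xp, k, stride=1)
    return (mean_sq - mean * mean).clamp_min(0).sqrt()

def perceptual_attack(model, x, target, eps=0.03, steps=20):
    # Illustrative sketch: iteratively push x toward class `target`, scaling
    # each step by local standard deviation so the noise hides in textured
    # regions, and maximizing the target-vs-runner-up logit gap so the
    # resulting example tolerates added physical-world noise.
    delta = torch.zeros_like(x, requires_grad=True)
    weight = local_std(x)
    weight = weight / (weight.max() + 1e-8)
    for _ in range(steps):
        logits = model(x + delta)
        tgt = logits[:, target]
        runner_up = logits.masked_fill(
            F.one_hot(torch.tensor([target]), logits.size(1)).bool(),
            float("-inf"),
        ).max(dim=1).values
        gap = (tgt - runner_up).sum()  # larger gap -> more noise-tolerant
        gap.backward()
        with torch.no_grad():
            delta += (eps / steps) * weight * delta.grad.sign()
            delta.clamp_(-eps, eps)   # keep an overall L-infinity budget
        delta.grad.zero_()
    return (x + delta).detach()

The weighting by local standard deviation is the key departure from plain norm-bounded attacks such as FGSM: two perturbations of equal norm can differ greatly in visibility depending on where in the image they land.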

Related research

On Configurable Defense against Adversarial Example Attacks (12/06/2018)
Machine learning systems based on deep neural networks (DNNs) have gaine...

Region-Wise Attack: On Efficient Generation of Robust Physical Adversarial Examples (12/05/2019)
Deep neural networks (DNNs) are shown to be susceptible to adversarial e...

A Tale of Two Models: Constructing Evasive Attacks on Edge Models (04/22/2022)
Full-precision deep learning models are typically too large or costly to...

One pixel attack for fooling deep neural networks (10/24/2017)
Recent research has revealed that the output of Deep Neural Networks (DN...

Attack Type Agnostic Perceptual Enhancement of Adversarial Images (03/07/2019)
Adversarial images are samples that are intentionally modified to deceiv...

Perception-Aware Attack: Creating Adversarial Music via Reverse-Engineering Human Perception (07/26/2022)
Recently, adversarial machine learning attacks have posed serious securi...

Evaluating Adversarial Robustness of Convolution-based Human Motion Prediction (06/21/2023)
Human motion prediction has achieved a brilliant performance with the he...
