Exploring Adversarial Examples: Patterns of One-Pixel Attacks

06/25/2018
by David Kügler, et al.

Failure cases of black-box deep learning, such as adversarial examples, can have severe consequences in healthcare. Yet such failures are mostly studied in the context of real-world images with calibrated attacks. To demystify adversarial examples, rigorous studies need to be designed; unfortunately, the complexity of medical images hinders designing such studies directly on the medical images themselves. We hypothesize that adversarial examples might result from deep networks incorrectly mapping the image space to the low-dimensional generation manifold. To test this hypothesis, we simplify a complex medical problem, namely pose estimation of surgical tools, into its barest form. An analytical decision boundary and an exhaustive search of the one-pixel attack across multiple image dimensions let us localize the regions of the image space where one-pixel attacks frequently succeed.
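
The exhaustive search described in the abstract is straightforward to illustrate. Below is a minimal, hypothetical Python sketch of an exhaustive one-pixel attack: every pixel of an image is perturbed in turn, and the locations where the classifier's decision flips are recorded as successful attacks. The `predict` function, the candidate intensity values, and the toy linear classifier are assumptions for illustration only, not the authors' surgical-tool setup or code.

    import numpy as np

    def one_pixel_attack_map(image, predict, values=(0.0, 1.0)):
        """Exhaustively test a one-pixel attack at every location.

        Returns a boolean map marking pixels where setting that single
        pixel to one of `values` flips the classifier's decision.
        """
        base_label = predict(image)
        h, w = image.shape
        success = np.zeros((h, w), dtype=bool)
        for y in range(h):
            for x in range(w):
                original = image[y, x]
                for v in values:                      # candidate replacement intensities
                    if v == original:
                        continue
                    image[y, x] = v                   # apply the one-pixel perturbation
                    if predict(image) != base_label:  # decision flipped: attack succeeded
                        success[y, x] = True
                        break
                image[y, x] = original                # restore the pixel
        return success

    # Toy usage: a stand-in classifier whose decision boundary is known analytically.
    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        img = rng.random((8, 8))
        predict = lambda im: int(im.mean() > 0.5)     # placeholder for a deep network
        vulnerable = one_pixel_attack_map(img, predict)
        print(vulnerable.sum(), "of", vulnerable.size, "pixels admit a one-pixel attack")

Aggregating such success maps over many images and image dimensions is what lets the paper localize the regions of the image space where one-pixel attacks cluster.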


research 07/24/2019
Understanding Adversarial Attacks on Deep Learning Based Medical Image Analysis Systems
Deep neural networks (DNNs) have become popular for medical image analys...

research 05/18/2023
Content-based Unrestricted Adversarial Attack
Unrestricted adversarial attacks typically manipulate the semantic conte...

research 07/13/2020
Hard Label Black-box Adversarial Attacks in Low Query Budget Regimes
We focus on the problem of black-box adversarial attacks, where the aim ...

research 01/28/2019
Adversarial Examples Target Topological Holes in Deep Networks
It is currently unclear why adversarial examples are easy to construct f...

research 08/04/2023
Multi-attacks: Many images + the same adversarial attack → many target labels
We show that we can easily design a single adversarial perturbation P th...

research 04/20/2022
Adversarial Scratches: Deployable Attacks to CNN Classifiers
A growing body of work has shown that deep neural networks are susceptib...

research 02/13/2018
Identify Susceptible Locations in Medical Records via Adversarial Attacks on Deep Predictive Models
The surging availability of electronic medical records (EHR) leads to in...
