Medical Aegis: Robust adversarial protectors for medical images

11/22/2021
by Qingsong Yao, et al.

Deep neural network (DNN) based medical image systems are vulnerable to adversarial examples. Many defense mechanisms have been proposed in the literature; however, the existing defenses assume a passive attacker who knows little about the defense system and does not change the attack strategy accordingly. Recent works have shown that a strong adaptive attack, in which the attacker is assumed to have full knowledge of the defense system, can easily bypass existing defenses. In this paper, we propose a novel adversarial example defense system called Medical Aegis. To the best of our knowledge, Medical Aegis is the first defense in the literature that successfully addresses strong adaptive adversarial example attacks on medical images. Medical Aegis comprises two tiers of protectors: the first tier, Cushion, weakens an attack's adversarial manipulation capability by removing its high-frequency components while having minimal effect on classification performance for the original image; the second tier, Shield, learns a set of per-class DNN models to predict the logits of the protected model, and deviation from Shield's prediction indicates an adversarial example. Shield is inspired by the observation in our stress tests that robust trails exist in the shallow layers of a DNN model, which adaptive attacks can hardly destroy. Experimental results show that the proposed defense accurately detects adaptive attacks, with negligible overhead for model inference.
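To make the two-tier idea concrete, the following is a minimal sketch, not the paper's implementation: a frequency-domain low-pass filter illustrating the Cushion idea of removing high-frequency components, and a simple logit-deviation check illustrating the Shield idea. The function names, the `keep_ratio` parameter, and the L2-distance threshold are illustrative assumptions, not details from the paper.

```python
import numpy as np

def cushion_lowpass(image, keep_ratio=0.25):
    """Illustrative Cushion: suppress high-frequency components of a
    2-D image by masking the shifted FFT spectrum around its center."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    ch, cw = h // 2, w // 2
    rh = max(1, int(h * keep_ratio / 2))
    rw = max(1, int(w * keep_ratio / 2))
    mask[ch - rh:ch + rh, cw - rw:cw + rw] = True  # keep low frequencies only
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))

def shield_flags_adversarial(protected_logits, shield_logits, threshold=1.0):
    """Illustrative Shield check: flag the input as adversarial when the
    protected model's logits deviate too far from Shield's prediction."""
    deviation = np.linalg.norm(np.asarray(protected_logits) - np.asarray(shield_logits))
    return deviation > threshold
```

For example, a constant image passes through the low-pass filter nearly unchanged (only its DC component carries energy), while logits that drift far from Shield's prediction trip the detector.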

Related research

- 03/13/2021: Attack as Defense: Characterizing Adversarial Examples using Robustness. As a new programming paradigm, deep learning has expanded its applicatio...
- 10/31/2020: Evaluation of Inference Attack Models for Deep Learning on Medical Data. Deep learning has attracted broad interest in healthcare and medical com...
- 12/02/2018: SentiNet: Detecting Physical Attacks Against Deep Learning Systems. SentiNet is a novel detection framework for physical attacks on neural n...
- 10/26/2022: Adversarially Robust Medical Classification via Attentive Convolutional Neural Networks. Convolutional neural network-based medical image classifiers have been s...
- 05/21/2022: Post-breach Recovery: Protection against White-box Adversarial Examples for Leaked DNN Models. Server breaches are an unfortunate reality on today's Internet. In the c...
- 05/25/2020: Adaptive Adversarial Logits Pairing. Adversarial examples provide an opportunity as well as impose a challeng...
- 12/17/2020: A Hierarchical Feature Constraint to Camouflage Medical Adversarial Attacks. Deep neural networks (DNNs) for medical images are extremely vulnerable ...
