Adversarial Attacks Against Medical Deep Learning Systems

04/15/2018
by Samuel G. Finlayson, et al.

The discovery of adversarial examples has raised concerns about the practical deployment of deep learning systems. In this paper, we argue that the field of medicine may be uniquely susceptible to adversarial attacks, both in terms of monetary incentives and technical vulnerability. To this end, we outline the healthcare economy and the incentives it creates for fraud, we extend adversarial attacks to three popular medical imaging tasks, and we provide concrete examples of how and why such attacks could be realistically carried out. Against each of our representative medical deep learning classifiers, both white-box and black-box attacks were effective and human-imperceptible. We urge caution in employing deep learning systems in clinical settings, and encourage research into domain-specific defense strategies.
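The white-box attacks the abstract refers to perturb an input along the gradient of the model's loss with respect to that input, so the change is small in every pixel yet flips the prediction. As a minimal sketch only (a toy logistic classifier with made-up weights, not the paper's models or data), here is the fast gradient sign method (FGSM) in NumPy:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy logistic "classifier"; weights are illustrative, not from the paper.
w = np.array([1.0, -2.0, 3.0])
b = 0.0

def predict(x):
    return int(sigmoid(w @ x + b) > 0.5)

# Clean input, classified as class 1 (e.g. "disease present").
x = np.array([0.5, 0.1, 0.2])
y = 1

# White-box FGSM: for logistic regression with cross-entropy loss, the
# gradient of the loss w.r.t. the input is (p - y) * w; the attack takes
# one step of size eps along the sign of that gradient.
p = sigmoid(w @ x + b)
grad = (p - y) * w
eps = 0.2  # L-infinity perturbation budget (max change per feature)
x_adv = x + eps * np.sign(grad)

print(predict(x))      # class 1 on the clean input
print(predict(x_adv))  # class 0 after the small perturbation
```

Each feature moves by at most `eps`, which is why such attacks can remain imperceptible to a human reader of the image while still crossing the model's decision boundary; a black-box attacker can do the same by estimating the gradient through queries or a substitute model.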


Related research

- Understanding Adversarial Attacks on Deep Learning Based Medical Image Analysis Systems (07/24/2019). Deep neural networks (DNNs) have become popular for medical image analys...
- Adversarial Example Detection by Classification for Deep Speech Recognition (10/22/2019). Machine Learning systems are vulnerable to adversarial attacks and will ...
- Adversarial Examples in Deep Learning: Characterization and Divergence (06/29/2018). The burgeoning success of deep learning has raised the security and priv...
- PeerNets: Exploiting Peer Wisdom Against Adversarial Attacks (05/31/2018). Deep learning systems have become ubiquitous in many aspects of our live...
- Adversarial Attacks Against Deep Learning Systems for ICD-9 Code Assignment (09/29/2020). Manual annotation of ICD-9 codes is a time consuming and error-prone pro...
- Defending against adversarial attacks on medical imaging AI system, classification or detection? (06/24/2020). Medical imaging AI systems such as disease classification and segmentati...
- Defending a Music Recommender Against Hubness-Based Adversarial Attacks (05/24/2022). Adversarial attacks can drastically degrade performance of recommenders ...
