Application of Adversarial Examples to Physical ECG Signals

08/20/2021
by   Taiga Ono, et al.

This work assesses the reality and feasibility of adversarial attacks against cardiac diagnosis systems powered by machine learning algorithms. To this end, we introduce adversarial beats: adversarial perturbations tailored specifically against electrocardiogram (ECG) beat-by-beat classification systems. We first formulate an algorithm that generates adversarial examples for an ECG classification neural network model and study its attack success rate. Next, to evaluate the attack's feasibility in a physical environment, we mount a hardware attack by designing a malicious signal generator that injects adversarial beats into ECG sensor readings. To the best of our knowledge, our work is the first to evaluate the efficacy of adversarial examples for ECGs in a physical setup. Our real-world experiments demonstrate that adversarial beats successfully manipulated the diagnosis results 3-5 times out of 40 attempts over the course of 2 minutes. Finally, we discuss the overall feasibility and impact of the attack, clearly defining the motives and constraints of expected attackers in light of our experimental results.
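The abstract does not specify the paper's perturbation algorithm, but the general idea of crafting an adversarial beat can be sketched with a standard one-step gradient-sign (FGSM-style) attack. The sketch below uses a logistic-regression beat classifier as a hypothetical stand-in for the paper's neural network; the function name `fgsm_perturbation`, the synthetic beat, and the amplitude bound `eps` are all illustrative assumptions, not the authors' method.

```python
import numpy as np

def fgsm_perturbation(x, w, b, y_true, eps):
    """One-step gradient-sign perturbation for a logistic-regression
    beat classifier (a simplified stand-in for a neural network).

    x      : 1-D ECG beat (array of samples)
    w, b   : classifier weights and bias
    y_true : true label (0 = normal beat, 1 = abnormal beat)
    eps    : maximum per-sample perturbation amplitude
    """
    z = np.dot(w, x) + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid probability of class 1
    # Gradient of the binary cross-entropy loss w.r.t. the input x
    grad = (p - y_true) * w
    # Push each sample in the direction that increases the loss,
    # bounded to +/- eps so the injected signal stays small
    return eps * np.sign(grad)

# Synthetic "beat": a noisy Gaussian spike standing in for a QRS complex
rng = np.random.default_rng(0)
t = np.arange(200)
beat = np.exp(-((t - 100) ** 2) / 50.0) + 0.01 * rng.standard_normal(200)

w = rng.standard_normal(200)
delta = fgsm_perturbation(beat, w, b=0.0, y_true=1, eps=0.05)
adv_beat = beat + delta   # the waveform an attacker would inject over the sensor reading
```

In a physical attack like the one described above, `delta` would be converted to an analog waveform by a malicious signal generator and superimposed on the genuine sensor signal, which is why bounding its amplitude with `eps` matters for stealth.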


