Adversarial Examples for Electrocardiograms

05/13/2019
by Xintian Han, et al.

Among all physiological signals, the electrocardiogram (ECG) has seen some of the largest expansion in both medical and recreational applications with the rise of single-lead versions. These versions are embedded in medical devices and wearable products such as the injectable Medtronic Linq monitor, the iRhythm Ziopatch wearable monitor, and the Apple Watch Series 4. Recently, deep neural networks have been used to classify ECGs, outperforming even physicians specialized in cardiac electrophysiology. However, deep learning classifiers have been shown to be brittle to adversarial examples, including in medical-related tasks. Yet, traditional attack methods such as projected gradient descent (PGD) create examples that introduce square-wave artifacts that are not physiological. Here, we develop a method to construct smoothed adversarial examples. We focus on a model trained on data from the 2017 PhysioNet/Computing-in-Cardiology Challenge for single-lead ECG classification. For this model, we use a new technique to generate smoothed adversarial examples that are 1) indistinguishable from the original examples to cardiologists and 2) incorrectly classified by the neural network. Further, we show that adversarial examples are not rare. Deep neural networks that have achieved state-of-the-art performance fail to classify smoothed adversarial ECGs that look real to clinical experts.

Acknowledgements

We thank Wei-Nchih Lee, Sreyas Mohan, Mark Goldstein, Aodong Li, Aahlad Manas Puli, Harvineet Singh, Mukund Sudarshan and Will Whitney.

Methods

Description of the Traditional Attack Methods.

Two traditional attack methods are the fast gradient sign method (FGSM) [5] and projected gradient descent (PGD) [13]. They are white-box attack methods based on the gradients of the loss with respect to the input.

Denote our input entry $x$, true label $y$, classifier (network) $f$, and loss function $\ell(f(x), y)$. We describe FGSM and PGD below:

  • FGSM. FGSM is a fast, one-step algorithm. For an attack level $\epsilon$, FGSM sets

    $$x_{\text{adv}} = x + \epsilon \,\mathrm{sign}\big(\nabla_x \ell(f(x), y)\big).$$

    The attack level $\epsilon$ is chosen to be sufficiently small so as to be undetectable.

  • PGD. An improved attack uses an iterative version of FGSM. Define $\Pi_{x,\epsilon}$ to project each iterate back onto the $\ell_\infty$ norm ball of radius $\epsilon$ around $x$ by clamping the maximum absolute difference between the iterate and $x$ to $\epsilon$. Beginning by setting $x^0 = x$, we have

    $$x^{t+1} = \Pi_{x,\epsilon}\big(x^t + \alpha \,\mathrm{sign}(\nabla_x \ell(f(x^t), y))\big). \tag{1}$$

    After $T$ steps, we get our adversarial example $x_{\text{adv}} = x^T$.
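
As a concrete illustration, here is a minimal PyTorch sketch of FGSM and of the PGD update (1); `model`, `x`, `y`, and the hyperparameters `eps`, `alpha`, and `steps` are placeholders rather than the exact settings used in the paper.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One-step FGSM: x_adv = x + eps * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

def pgd(model, x, y, eps, alpha, steps):
    """Iterative FGSM with projection onto the l-infinity ball of radius eps around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Projection step: clamp the perturbation to [-eps, eps] around the clean input.
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)
    return x_adv.detach()
```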

Our Smooth Attack Method.

In order to smooth the signal, we use convolution. Convolution takes a weighted average of one position of the signal and its neighbors:

$$(a * w)[t] = \sum_{k=-K}^{K} a[t-k]\, w[k],$$

where $a$ is the signal being smoothed and $w$ is the weight, or kernel, function. In our experiment, the weights are determined by a Gaussian kernel. Mathematically, if we have a Gaussian kernel of size $2K+1$ and standard deviation $\sigma$, we have

$$w[k] = \frac{\exp\!\big(-k^2/(2\sigma^2)\big)}{\sum_{j=-K}^{K} \exp\!\big(-j^2/(2\sigma^2)\big)}, \qquad k = -K, \dots, K.$$

We can easily see that as $\sigma$ goes to infinity, convolution with the Gaussian kernel becomes a simple average; as $\sigma$ goes to zero, the convolution becomes the identity function. Instead of computing an adversarial perturbation and then convolving it with the Gaussian kernels, we create adversarial examples by optimizing a smooth perturbation that fools the neural network. We introduce our method of training a smooth adversarial perturbation (SAP). In our SAP method, we take the adversarial perturbation $\delta$ as the parameter and add it to the clean example after convolving it with a number of Gaussian kernels. We denote by $K_{s,\sigma}$ a Gaussian kernel with size $s$ and standard deviation $\sigma$. The resulting adversarial example can be written as a function of $\delta$:

$$x(\delta) = x + \frac{1}{M} \sum_{i=1}^{M} K_{s_i,\sigma_i} * \delta,$$

where $(s_1, \sigma_1), \dots, (s_M, \sigma_M)$ are the chosen kernel parameters.
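
This smoothing step can be sketched as follows, assuming single-lead signals stored as tensors of shape (batch, 1, length); the `gaussian_kernel` and `smooth` helpers and the example (size, sigma) pairs are illustrative, not the exact configuration from the paper.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(size, sigma):
    """Normalized Gaussian kernel of odd length `size` (= 2K + 1)."""
    k = torch.arange(size, dtype=torch.float32) - size // 2
    w = torch.exp(-k ** 2 / (2 * sigma ** 2))
    return w / w.sum()

def smooth(delta, kernels):
    """Average of `delta` convolved with each Gaussian kernel K_{s_i, sigma_i}.

    `delta` has shape (batch, 1, length); odd kernel sizes keep the length unchanged.
    """
    out = 0.0
    for size, sigma in kernels:
        w = gaussian_kernel(size, sigma).view(1, 1, -1).to(delta.device)
        out = out + F.conv1d(delta, w, padding=size // 2)
    return out / len(kernels)

# Illustrative (size, sigma) pairs; the paper's actual values are not reproduced here.
# kernels = [(5, 1.0), (11, 5.0), (19, 10.0)]
# x_of_delta = x + smooth(delta, kernels)   # the SAP example x(delta)
```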

In our experiment, we let the kernel sizes $s_i$ and standard deviations $\sigma_i$ range over a small fixed collection of values. Then we maximize the loss function with respect to $\delta$ to obtain the adversarial example. We still use PGD, but on $\delta$ this time:

$$\delta^{t+1} = \Pi_{0,\epsilon}\big(\delta^t + \alpha \,\mathrm{sign}(\nabla_\delta\, \ell(f(x(\delta^t)), y))\big). \tag{2}$$

There are two major differences between updates (2) and (1). In (2), we update $\delta$, not $x$, and we clip $\delta$ around zero, not around the input $x$. In practice, we initialize the adversarial perturbation to the one obtained from PGD (1) on $x$ and then run another PGD (2) on $\delta$.
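
A sketch of the SAP update (2), reusing the `smooth` helper above; initializing `delta` from a plain PGD perturbation mirrors the initialization described in the text, while the hyperparameter names are placeholders.

```python
import torch
import torch.nn.functional as F

def sap_attack(model, x, y, eps, alpha, steps, kernels, delta_init=None):
    """PGD on the perturbation delta; the candidate example is x + smooth(delta)."""
    # E.g. delta_init = pgd(model, x, y, eps, alpha, steps) - x, as described above.
    delta = torch.zeros_like(x) if delta_init is None else delta_init.clone().detach()
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + smooth(delta, kernels)), y)
        grad, = torch.autograd.grad(loss, delta)
        # Ascent step on delta, then clip delta around zero (not around the input x).
        delta = torch.clamp(delta.detach() + alpha * grad.sign(), -eps, eps)
    return (x + smooth(delta, kernels)).detach(), delta
```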

Existence of Adversarial Examples

We design experiments to show that adversarial examples are not rare. Denote the original signal by $x$ and the adversarial example we generated by $x_{\text{adv}}$.

First, we generate Gaussian noise and add it to the adversarial example. To make sure the new examples are still smooth, we smooth the resulting perturbation by convolving it with the same Gaussian kernels as in our smooth attack method. We then clip the perturbation to make sure that it still lies in the $\ell_\infty$ norm ball of radius $\epsilon$. The newly generated example is

$$x_{\text{new}} = x + \Pi_{0,\epsilon}\Big(\frac{1}{M} \sum_{i=1}^{M} K_{s_i,\sigma_i} * \big(x_{\text{adv}} + \eta - x\big)\Big),$$

where $\eta$ is the Gaussian noise.
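
One way to implement this resampling step, reusing `smooth` from above; `noise_std` is an assumed placeholder for the noise level.

```python
import torch

def noisy_variant(x, x_adv, eps, kernels, noise_std=0.01):
    """Add Gaussian noise to the adversarial example, re-smooth, and re-clip."""
    noise = noise_std * torch.randn_like(x_adv)
    pert = smooth(x_adv + noise - x, kernels)   # smooth the noisy perturbation
    pert = torch.clamp(pert, -eps, eps)         # keep it inside the l-infinity ball
    return x + pert
```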

We repeat this process of generating new examples 1000 times. These newly generated examples are still adversarial examples. Some of them may intersect. For each intersecting pair, we concatenate the left part of one example and the right part of the other to create a new adversarial example. Denote $x_1$ and $x_2$ to be a pair of adversarial examples that intersect. Suppose they intersect at time step $t_0$ and the total length of the example is $T$. The new hybrid example $x_{\text{hyb}}$ satisfies

$$x_{\text{hyb}}[1:t_0] = x_1[1:t_0], \qquad x_{\text{hyb}}[t_0+1:T] = x_2[t_0+1:T],$$

where $x[i:j]$ denotes the values of $x$ from time step $i$ to time step $j$. All the newly concatenated examples are still misclassified by the network.
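
A possible sketch of this crossover step, assuming the two examples are 1-D tensors of equal length (squeeze any batch/channel dimensions first); the intersection test shown is one simple choice among several.

```python
import torch

def hybrid_example(x1, x2):
    """Take x1 up to the first intersection point and x2 after it, if the pair crosses."""
    diff = x1 - x2
    # A sign change of the difference between consecutive samples marks an intersection.
    crossings = torch.nonzero(diff[:-1] * diff[1:] <= 0).flatten()
    if len(crossings) == 0:
        return None                     # the pair never intersects
    t0 = int(crossings[0])              # first intersection time step
    return torch.cat([x1[: t0 + 1], x2[t0 + 1:]])
```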

The 1000 adversarial examples form a band. To emphasise that all the smooth signals in the band are still adversarial examples, we sample uniformly from the band to create new examples. Denote $u_t$ and $l_t$ to be the maximum and minimum values of the 1000 samples at time step $t$. To sample a smooth signal from the band, we first sample a uniform random variable $v_t \sim \mathrm{Uniform}(l_t, u_t)$ for each time step $t$ and then smooth the resulting perturbation. The example generated by uniform sampling and smoothing this time is

$$x_{\text{band}} = x + \frac{1}{M} \sum_{i=1}^{M} K_{s_i,\sigma_i} * (v - x),$$

where $v = (v_1, \dots, v_T)$ collects the sampled values.

We repeat this procedure 1000 times, and all the newly generated examples still cause the network to make the wrong diagnosis.
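
A sketch of the band-sampling step under the same assumptions, with `band` a tensor stacking the previously generated examples along its first dimension and `smooth` reused from above.

```python
import torch

def sample_from_band(x, band, kernels):
    """Sample uniformly inside the band's envelope, then re-smooth the perturbation."""
    upper, _ = band.max(dim=0)          # u_t: per-time-step maximum over the band
    lower, _ = band.min(dim=0)          # l_t: per-time-step minimum over the band
    v = lower + (upper - lower) * torch.rand_like(lower)
    return x + smooth(v - x, kernels)
```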

Extended Data

Figure 5: Adversarial example created by the PGD method. This adversarial example contains square waves and is not smooth. A physician reading this tracing would likely detect that it is not a real ECG.