References
 [1] S. Bennett. Wearables could catch heart problems that elude your doctor, Feb. 2018.
 [2] G. D. Clifford, C. Liu, B. Moody, L.-w. H. Lehman, I. Silva, Q. Li, A. Johnson, and R. G. Mark. AF classification from a short single lead ECG recording: The PhysioNet/Computing in Cardiology Challenge 2017. Computing in cardiology, 2017.
 [3] S. G. Finlayson, J. D. Bowers, J. Ito, J. L. Zittrain, A. L. Beam, and I. S. Kohane. Adversarial attacks on medical machine learning. Science, 363(6433):1287–1289, 2019.
 [4] S. G. Finlayson, I. S. Kohane, and A. L. Beam. Adversarial attacks against medical deep learning systems. arXiv preprint arXiv:1804.05296, 2018.
 [5] I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.

 [6] S. D. Goodfellow, A. Goodwin, R. Greer, P. C. Laussen, M. Mazwi, and D. Eytan. Towards understanding ECG rhythm classification using convolutional neural networks and attention mappings. Proceedings of Machine Learning Research, 2018.
 [7] A. Y. Hannun, P. Rajpurkar, M. Haghpanahi, G. H. Tison, C. Bourn, M. P. Turakhia, and A. Y. Ng. Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network. Nature medicine, 25(1):65, 2019.
 [8] S. Hong, M. Wu, Y. Zhou, Q. Wang, J. Shang, H. Li, and J. Xie. Encase: An ensemble classifier for ECG classification using expert features and deep neural networks. In 2017 Computing in Cardiology (CinC), pages 1–4. IEEE, 2017.
 [9] International Data Corporation (IDC). IDC reports strong growth in the worldwide wearables market, led by holiday shipments of smartwatches, wrist bands, and ear-worn devices, Mar. 2019.
 [10] K. D. Julian, M. J. Kochenderfer, and M. P. Owen. Deep neural network compression for aircraft collision avoidance systems. Journal of Guidance, Control, and Dynamics, 42(3):598–608, 2018.
 [11] B. B. Kelly, V. Fuster, et al. Promoting cardiovascular health in the developing world: a critical challenge to achieve global health. National Academies Press, 2010.
 [12] H. L. Kennedy. The evolution of ambulatory ECG monitoring. Progress in cardiovascular diseases, 56(2):127–132, 2013.
 [13] A. Kurakin, I. Goodfellow, and S. Bengio. Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236, 2016.
 [14] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
 [15] American College of Cardiology (ACC). Apple Heart Study identifies AFib in small group of Apple Watch wearers, Mar. 2019.
 [16] M. Paschali, S. Conjeti, F. Navarro, and N. Navab. Generalizability vs. robustness: Adversarial examples for medical imaging. arXiv preprint arXiv:1804.00504, 2018.
 [17] P. Rajpurkar, A. Y. Hannun, M. Haghpanahi, C. Bourn, and A. Y. Ng. Cardiologist-level arrhythmia detection with convolutional neural networks. arXiv preprint arXiv:1707.01836, 2017.
 [18] G. Singh, T. Gehr, M. Mirman, M. Püschel, and M. Vechev. Fast and effective robustness certification. In Advances in Neural Information Processing Systems, pages 10802–10813, 2018.
 [19] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
Acknowledgements
We thank Wei-Nchih Lee, Sreyas Mohan, Mark Goldstein, Aodong Li, Aahlad Manas Puli, Harvineet Singh, Mukund Sudarshan and Will Whitney.
Methods
Description of the Traditional Attack Methods.
Two traditional attack methods are the fast gradient sign method (FGSM) [5] and projected gradient descent (PGD) [13]. Both are white-box attack methods based on the gradients of the loss with respect to the input.
Denote the input by x, its true label by y, the classifier (network) by f, and the loss function by ℓ(f(x), y). We describe FGSM and PGD below:
FGSM. FGSM is a fast, single-step algorithm. For an attack level ε, FGSM sets

x_adv = x + ε · sign(∇_x ℓ(f(x), y)).

The attack level ε is chosen to be sufficiently small so as to be undetectable.
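As a concrete sketch of the one-step update, the snippet below applies FGSM to a simple logistic-regression classifier standing in for the actual network; the weights, input, and ε value are illustrative, not those used in our experiments.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps):
    """One-step FGSM: x_adv = x + eps * sign(grad_x loss).

    For a logistic model f(x) = sigmoid(w @ x) with cross-entropy loss,
    the gradient of the loss w.r.t. the input is (f(x) - y) * w.
    """
    grad_x = (sigmoid(w @ x) - y) * w
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=16)       # stand-in "network" weights
x = rng.normal(size=16)       # stand-in input signal
x_adv = fgsm(x, y=1.0, w=w, eps=0.1)

assert np.max(np.abs(x_adv - x)) <= 0.1 + 1e-12   # perturbation stays at level eps
assert sigmoid(w @ x_adv) < sigmoid(w @ x)        # confidence in the true label drops
```

Each coordinate moves by exactly ε in the direction that increases the loss, which is why the attack is fast: a single gradient evaluation suffices.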

PGD. An improved version is an iterative variant of FGSM. Define Π_{x,ε} as the projection back onto the ℓ∞ ball of radius ε around x, implemented by clamping the maximum absolute difference between a point and x to ε. Beginning by setting x⁰ = x, we iterate

x^{t+1} = Π_{x,ε}(x^t + α · sign(∇_x ℓ(f(x^t), y))).  (1)

After T steps, we take x_adv = x^T as our adversarial example.
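Update (1) can be sketched in the same way; the projection is simply a clamp to [x − ε, x + ε]. The logistic stand-in model, step size, and step count below are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd(x, y, w, eps, alpha, steps):
    """Iterated gradient-sign steps, each followed by projection onto the
    l-infinity ball of radius eps around the original input x."""
    x_t = x.copy()
    for _ in range(steps):
        grad = (sigmoid(w @ x_t) - y) * w        # loss gradient w.r.t. the input
        x_t = x_t + alpha * np.sign(grad)        # FGSM-style step
        x_t = np.clip(x_t, x - eps, x + eps)     # projection Pi back into the ball
    return x_t

rng = np.random.default_rng(1)
w = rng.normal(size=16)
x = rng.normal(size=16)
x_adv = pgd(x, y=1.0, w=w, eps=0.1, alpha=0.03, steps=20)

assert np.max(np.abs(x_adv - x)) <= 0.1 + 1e-12   # never leaves the eps-ball
assert sigmoid(w @ x_adv) < sigmoid(w @ x)        # loss on the true label increased
```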
Our Smooth Attack Method.
In order to smooth the signal, we use convolution: we replace each position of the signal with a weighted average of that position and its neighbors,

(k * x)[t] = Σ_{τ=−K}^{K} k[τ] x[t − τ],

where x is the signal and k is the weight, or kernel, function. In our experiment, the weights are determined by a Gaussian kernel. Mathematically, if we have a Gaussian kernel of size 2K+1 and standard deviation σ, its weights are

k[τ] ∝ exp(−τ²/(2σ²)), τ = −K, …, K,

normalized to sum to one. We can easily see that when σ goes to infinity, the convolution with the Gaussian kernel becomes a simple average; when σ goes to zero, the convolution becomes an identity function. Instead of getting an adversarial perturbation and then convolving it with the Gaussian kernels, we could create adversarial examples by optimizing a smooth perturbation that fools the neural network. We introduce our method of training smooth adversarial perturbations (SAP). In our SAP method, we take the adversarial perturbation δ as the parameter and add it to the clean examples after convolving it with a number of Gaussian kernels. We denote K_{s,σ} to be a Gaussian kernel with size s and standard deviation σ. The resulting adversarial example can be written as a function of δ:

x_adv(δ) = x + Σ_{i,j} K_{s_i,σ_j} * δ.
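The two limiting behaviors of the Gaussian kernel noted above can be checked directly; the sketch below uses an illustrative kernel size.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Gaussian kernel of odd length size = 2K + 1, normalized to sum to one."""
    k = np.arange(size) - size // 2
    w = np.exp(-k**2 / (2.0 * sigma**2))
    return w / w.sum()

def smooth(signal, size, sigma):
    """Weighted average of each position and its neighbors ('same' length)."""
    return np.convolve(signal, gaussian_kernel(size, sigma), mode="same")

x = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
# sigma -> 0: the kernel collapses to a delta, so smoothing is the identity
assert np.allclose(smooth(x, 5, 1e-3), x)
# sigma -> infinity: the weights become flat, i.e. a simple average
assert np.allclose(gaussian_kernel(5, 1e6), np.full(5, 0.2))
```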
In our experiment, the kernel sizes s_i and standard deviations σ_j range over fixed sets of values. We then maximize the loss function with respect to δ to obtain the adversarial example. We still use PGD, but this time on δ:

δ^{t+1} = Clip_{[−ε,ε]}(δ^t + α · sign(∇_δ ℓ(f(x_adv(δ^t)), y))).  (2)

There are two major differences between updates (2) and (1). In (2), we update δ, not x, and we clip δ around zero, not around the input x. In practice, we initialize the adversarial perturbation with the one obtained from running PGD on x and then run another round of PGD on δ.
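Update (2) can be sketched as PGD on δ with the smoothed δ added to x inside the loss. The snippet again uses a logistic stand-in model with illustrative kernel sizes; because a Gaussian kernel is symmetric, the zero-padded convolution operator is self-adjoint, which is what the gradient step with respect to δ relies on.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gaussian_kernel(size, sigma):
    k = np.arange(size) - size // 2
    w = np.exp(-k**2 / (2.0 * sigma**2))
    return w / w.sum()

def smooth_delta(v, kernels):
    """Average of v convolved with each Gaussian kernel in the set."""
    return np.mean([np.convolve(v, k, mode="same") for k in kernels], axis=0)

def sap(x, y, w, kernels, eps, alpha, steps):
    """PGD on the perturbation delta: the smoothed delta is added to x,
    the gradient is taken w.r.t. delta, and delta is clipped around zero."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        x_adv = x + smooth_delta(delta, kernels)
        grad_x = (sigmoid(w @ x_adv) - y) * w         # loss gradient at x_adv
        grad_delta = smooth_delta(grad_x, kernels)    # chain rule through the kernels
        delta = np.clip(delta + alpha * np.sign(grad_delta), -eps, eps)
    return x + smooth_delta(delta, kernels)

rng = np.random.default_rng(2)
w = rng.normal(size=32)
x = rng.normal(size=32)
kernels = [gaussian_kernel(s, sig) for s, sig in [(5, 1.0), (7, 3.0), (11, 5.0)]]
x_adv = sap(x, y=1.0, w=w, kernels=kernels, eps=0.1, alpha=0.02, steps=10)

assert sigmoid(w @ x_adv) < sigmoid(w @ x)   # the smooth perturbation fools the model
```

The resulting perturbation is smooth by construction, since only a bounded δ convolved with Gaussian kernels is ever added to the signal.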
Existence of Adversarial Examples
We design experiments to show that adversarial examples are not rare. Denote the original signal by x and the adversarial example we generated by x_adv.
First, we generate Gaussian noise η and then add it to the adversarial examples. To make sure the new examples are still smooth, we smooth the noise by convolving it with the same Gaussian kernels as in our smooth attack method. We then clip the perturbation to make sure that it still lies in the ℓ∞ ball of radius ε around x. The newly generated example is

x_new = x + Clip_{[−ε,ε]}((x_adv − x) + Σ_{i,j} K_{s_i,σ_j} * η).
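This resampling step can be sketched as follows; the noise scale and kernel size are illustrative choices, not the experimental settings.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    k = np.arange(size) - size // 2
    w = np.exp(-k**2 / (2.0 * sigma**2))
    return w / w.sum()

def resample(x, x_adv, kernel, eps, rng):
    """Perturb an adversarial example with smoothed Gaussian noise, then clip
    the total perturbation back into the eps-ball around the clean signal x."""
    noise = rng.normal(scale=0.05, size=x.shape)             # illustrative scale
    smooth_noise = np.convolve(noise, kernel, mode="same")   # keep it smooth
    pert = np.clip((x_adv - x) + smooth_noise, -eps, eps)    # stay in the ball
    return x + pert

rng = np.random.default_rng(3)
x = rng.normal(size=64)
x_adv = x + np.clip(rng.normal(scale=0.05, size=64), -0.1, 0.1)
x_new = resample(x, x_adv, gaussian_kernel(7, 3.0), eps=0.1, rng=rng)

assert np.max(np.abs(x_new - x)) <= 0.1 + 1e-12   # still within the eps-ball
```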
We repeat this process of generating new examples 1,000 times. These newly generated examples are still adversarial examples. Some of them intersect; for each intersecting pair, we concatenate the left part of one example and the right part of the other to create a new adversarial example. Denote by x_a and x_b a pair of adversarial examples that intersect at time step t, and let n be the total length of the example. The new hybrid example x_h satisfies

x_h[0:t] = x_a[0:t] and x_h[t:n] = x_b[t:n],

where x[i:j] means the signal from time step i to time step j. All of the newly concatenated examples are still misclassified by the network.
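The crossover construction amounts to finding a crossing point and concatenating the two halves, as in this small sketch (in the experiment, each spliced example would then be checked against the network):

```python
import numpy as np

def first_crossing(x_a, x_b):
    """First time step after which the difference x_a - x_b changes sign."""
    d = x_a - x_b
    idx = np.where(np.sign(d[:-1]) * np.sign(d[1:]) < 0)[0]
    return int(idx[0]) + 1 if idx.size else None

def splice(x_a, x_b, t):
    """Left part of x_a up to t, right part of x_b from t onward."""
    return np.concatenate([x_a[:t], x_b[t:]])

x_a = np.array([0.0, 0.0, 1.0, 1.0])
x_b = np.array([1.0, 1.0, 0.0, 0.0])
t = first_crossing(x_a, x_b)
assert t == 2
assert np.allclose(splice(x_a, x_b, t), [0.0, 0.0, 0.0, 0.0])
```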
The 1,000 adversarial examples form a band. To emphasize that all the smooth signals in the band are still adversarial examples, we sample uniformly from the band to create new examples. Denote by u[t] and l[t] the maximum and minimum values of the 1,000 samples at time step t. To sample a smooth signal from the band, we first draw a uniform random variable λ_t ∼ Uniform(0, 1) for each time step, set

x_new[t] = l[t] + λ_t · (u[t] − l[t]),

and then smooth the resulting perturbation. We repeat this procedure 1,000 times, and all the newly generated examples still cause the network to make the wrong diagnosis.
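The band-sampling step can be sketched as below. The kernel size is illustrative, and for brevity the sampled signal itself is smoothed rather than the perturbation; in the experiment each sampled signal would then be fed to the network.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    k = np.arange(size) - size // 2
    w = np.exp(-k**2 / (2.0 * sigma**2))
    return w / w.sum()

def sample_from_band(samples, kernel, rng):
    """Draw one value uniformly between the pointwise min and max of the
    band at every time step, then smooth the sampled signal."""
    lo, hi = samples.min(axis=0), samples.max(axis=0)
    lam = rng.uniform(size=lo.shape)        # one uniform draw per time step
    raw = lo + lam * (hi - lo)              # lies inside the band pointwise
    return np.convolve(raw, kernel, mode="same")

rng = np.random.default_rng(4)
band = rng.normal(size=(1000, 64))          # stand-in for 1,000 adversarial examples
x_new = sample_from_band(band, gaussian_kernel(7, 3.0), rng)

assert x_new.shape == (64,)
assert x_new.min() >= band.min() - 1e-9     # smoothing stays within the global range
```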