As the use of neural networks continues to grow, it is critical to examine their behavior in adversarial settings. Prior work has shown that neural networks are vulnerable to adversarial examples: instances similar to a natural instance, but classified by a neural network as any (incorrect) target chosen by the adversary.
Adversarial examples have been demonstrated on image classification and related vision tasks, and on reinforcement learning by manipulating the images the RL agent sees [6, 21]. In the discrete domain, there has been some study of adversarial examples over text classification [23] and malware classification [16, 20].
There has been comparatively little study in the space of audio, where the most common use is automatic speech recognition. In automatic speech recognition, a neural network is given an audio waveform and performs the speech-to-text transform that gives the transcription of the phrase being spoken (as used in, e.g., Apple Siri, Google Now, and Amazon Echo).
Constructing targeted adversarial examples on speech recognition has proven difficult. Hidden and inaudible voice commands [11, 41, 39] are targeted attacks, but require synthesizing new audio and cannot modify existing audio (analogous to the observation that neural networks can make high-confidence predictions for unrecognizable images [33]). Other work has constructed standard untargeted adversarial examples on different audio systems [24, 13]. The current state-of-the-art targeted attack on automatic speech recognition is Houdini [12], which can only construct audio adversarial examples targeting phonetically similar phrases, leading the authors to state that
targeted attacks seem to be much more challenging when dealing with speech recognition systems than when we consider artificial visual systems.
In this paper, we demonstrate that targeted adversarial examples exist in the audio domain by attacking DeepSpeech [18], a state-of-the-art speech-to-text transcription neural network. Figure 1 illustrates our attack: given any natural waveform x, we are able to construct a perturbation δ that is nearly inaudible, but such that x + δ is recognized as any desired phrase. We achieve this by making use of strong, iterative, optimization-based attacks based on the work of Carlini and Wagner [10].
Our white-box attack is end-to-end, and operates directly on the raw samples that are used as input to the classifier. This requires optimizing through the MFC pre-processing transformation, which has previously proven difficult. Our attack succeeds regardless of the desired transcription or the initial source audio sample.
By starting with an arbitrary waveform, such as music, we can embed speech into audio that should not be recognized as speech; and by choosing silence as the target, we can hide audio from a speech-to-text system.
Audio adversarial examples give a new domain in which to explore these intriguing properties of neural networks. We hope others will build on our attacks to further study this field. To facilitate future work, we make our code and dataset available at http://nicholas.carlini.com/code/audio_adversarial_examples. Additionally, we encourage the reader to listen to our audio adversarial examples.
Neural Networks & Speech Recognition.
A neural network is a differentiable parameterized function . Its parameters can be updated by gradient descent to learn any function.
We represent audio as an N-dimensional vector x. Each element x_i is a signed 16-bit value, sampled at 16 kHz. To reduce the input dimensionality, the Mel-Frequency Cepstrum (MFC) transform is often used as a preprocessing step. The MFC splits the waveform into 50 frames per second, and maps each frame to the frequency domain.
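As a concrete sketch of this representation, a 16 kHz waveform can be split into 50 frames per second as follows. This assumes non-overlapping frames for simplicity; real MFC pipelines typically use overlapping windows.

```python
import numpy as np

SAMPLE_RATE = 16000          # samples per second (16 kHz)
FRAMES_PER_SECOND = 50
FRAME_LEN = SAMPLE_RATE // FRAMES_PER_SECOND  # 320 samples per frame

def to_frames(audio):
    # audio: 1-D array of signed 16-bit samples. Drop any trailing
    # partial frame and view the rest as (num_frames, FRAME_LEN).
    n = (len(audio) // FRAME_LEN) * FRAME_LEN
    return audio[:n].reshape(-1, FRAME_LEN)
```

Each row would then be mapped to the frequency domain by the MFC transform.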
Standard classification neural networks take one input and produce an output probability distribution over all output labels. However, in the case of speech-to-text systems, there are exponentially many possible labels, making it computationally infeasible to enumerate all possible phrases.
Therefore, speech recognition systems often use Recurrent Neural Networks (RNNs) to map an audio waveform to a sequence of probability distributions over individual characters, instead of over complete phrases. An RNN is a function which maintains a state vector s, with s_0 = 0 and (out_i, s_{i+1}) = f(x_i, s_i), where the input x_i is one frame of input, and each output out_i is a probability distribution over which character was being spoken during that frame.
Connectionist Temporal Classification
(CTC) [15] is a method of training a sequence-to-sequence neural network when the alignment between the input and output sequences is not known. DeepSpeech uses CTC because its inputs are an audio sample of a person speaking and the unaligned transcribed sentence, where the exact position of each word in the audio sample is not known.
We briefly summarize the key details and notation. We refer readers to [17] for an excellent survey of CTC.
Let X be the input domain — a single frame of input — and Y be the range — the characters a-z, space, and the special token ϵ (described below). Our neural network f takes a sequence of N frames and returns a probability distribution over the output domain for each frame. We write f(x)^i_y to mean the probability that frame i has label y. We use p to denote a phrase: a sequence of characters p^i, where each p^i ∈ Y.
While f(x) maps every frame to a probability distribution over the characters, this does not directly give a probability distribution over all phrases. Instead, the probability of a phrase is defined as a function of the probability of each character.
We begin with two short definitions. We say that a sequence π reduces to p if starting with π and applying the following two operations (in order) yields p:
Remove all sequentially duplicated tokens.
Remove all ϵ tokens.
For example, the sequence a a b ϵ ϵ b reduces to the phrase a b b.
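These two reduction steps can be written directly in code; a small illustrative sketch, using the character "ϵ" as a stand-in symbol for the CTC blank token:

```python
EPS = "ϵ"  # stand-in symbol for the special CTC blank token

def reduce_alignment(seq):
    # Step 1: collapse runs of sequentially duplicated tokens.
    collapsed = [c for i, c in enumerate(seq) if i == 0 or c != seq[i - 1]]
    # Step 2: remove all blank tokens.
    return "".join(c for c in collapsed if c != EPS)
```

For instance, both "aabϵϵb" and "abϵb" reduce to "abb"; the ϵ between the two b's is what allows a repeated character to survive step 1.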
Further, we say that π is an alignment of p with respect to y (formally: π ∈ Π(p, y)) if (a) π reduces to p, and (b) the length of π is equal to the length of y. The probability of alignment π under y is the product of the likelihoods of each of its elements: Pr(π | y) = ∏_i y^i_{π^i}, where y^i_{π^i} denotes the probability the i-th distribution assigns to character π^i.
With these definitions, we can now define the probability of a given phrase p under the distribution y = f(x) as Pr(p | y) = Σ_{π ∈ Π(p, y)} Pr(π | y).
As is usually done, the loss function used to train the network is the negative log likelihood of the desired phrase: ℓ(x, p) = −log Pr(p | f(x)).
Despite the exponential search space, this loss can be computed efficiently with dynamic programming [15].
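To make the definition concrete, the phrase probability can be checked by brute force on a toy distribution, enumerating every alignment rather than using the dynamic program (which is what real implementations do). The 3-frame distribution y below is invented purely for illustration:

```python
from itertools import product

ALPHABET = ["a", "b", "ϵ"]  # toy alphabet; ϵ is the blank token

# Invented per-frame character distribution y (3 frames).
y = [
    {"a": 0.7, "b": 0.2, "ϵ": 0.1},
    {"a": 0.1, "b": 0.6, "ϵ": 0.3},
    {"a": 0.1, "b": 0.8, "ϵ": 0.1},
]

def reduce_alignment(seq):
    # CTC reduction: collapse duplicates, then drop blanks.
    collapsed = [c for i, c in enumerate(seq) if i == 0 or c != seq[i - 1]]
    return "".join(c for c in collapsed if c != "ϵ")

def pr_alignment(pi, y):
    # Product of per-frame likelihoods of the alignment's characters.
    prob = 1.0
    for i, c in enumerate(pi):
        prob *= y[i][c]
    return prob

def pr_phrase(p, y):
    # Sum the probability of every alignment that reduces to p.
    return sum(pr_alignment(pi, y)
               for pi in product(ALPHABET, repeat=len(y))
               if reduce_alignment(pi) == p)
```

Here pr_phrase("ab", y) sums exactly five alignments (aab, abb, aϵb, abϵ, ϵab), and the probabilities of all reachable phrases sum to one, since the alignments partition the sample space.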
Finally, to decode a vector y to a phrase p, we search for the phrase p that best aligns to y: C(x) = argmax_p Pr(p | f(x)).
Because computing C(x) exactly requires searching an exponential space, it is typically approximated in one of two ways.
Greedy Decoding searches for the most likely alignment (which is easy to find, since each frame can be maximized independently) and then reduces this alignment to obtain the transcribed phrase: C_greedy(x) = reduce(argmax_π Pr(π | f(x))).
Beam Search Decoding simultaneously evaluates the likelihood of multiple alignments and then chooses the most likely phrase under these alignments. We refer the reader to [17] for a complete algorithm description.
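Greedy decoding can be sketched in a few lines; the 3-frame distribution below is invented for illustration:

```python
def greedy_decode(y):
    # Most likely alignment: independently take the argmax character
    # of every frame, then apply the CTC reduction to get a phrase.
    pi = [max(frame, key=frame.get) for frame in y]
    collapsed = [c for i, c in enumerate(pi) if i == 0 or c != pi[i - 1]]
    return "".join(c for c in collapsed if c != "ϵ")

# Invented 3-frame distribution over {a, b, ϵ}:
y = [
    {"a": 0.7, "b": 0.2, "ϵ": 0.1},
    {"a": 0.1, "b": 0.6, "ϵ": 0.3},
    {"a": 0.1, "b": 0.8, "ϵ": 0.1},
]
```

Here the most likely alignment is "abb", which reduces to the phrase "ab"; a beam-search decoder would instead track several candidate alignments at once.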
Evasion attacks have long been studied on machine learning classifiers [29, 4, 5], and are practical against many types of models.
When discussing neural networks, these evasion attacks are referred to as adversarial examples [40]: for any input x, it is possible to construct a sample x' that is similar to x (according to some metric) but so that C(x) ≠ C(x'). In the audio domain, these untargeted adversarial examples are usually not interesting: causing a speech-to-text system to transcribe “test sentence” as the misspelled “test sentense” does little to help an adversary.
Targeted Adversarial Examples
are a more powerful attack: not only must the classification of x and x' differ, but the network must assign a specific label t (chosen by the adversary) to the instance x'. The purpose of this paper is to show that targeted adversarial examples are possible with only slight distortion on speech-to-text systems.
III Audio Adversarial Examples
III-A Threat Model & Evaluation Benchmark
Given an audio waveform x and a target transcription t, our task is to construct another audio waveform x' = x + δ so that x and x' sound similar (formalized below), but so that C(x') = t. We report success only if the output of the network exactly matches the target phrase (i.e., contains no misspellings or extra characters).
We assume a white-box setting in which the adversary has complete knowledge of the model and its parameters, the threat model taken in most prior work. Just as later work in the space of images showed black-box attacks are possible [35, 22], we expect that our attacks can be extended to the black-box setting. Additionally, we assume our adversarial examples are directly classified without any noise introduced (e.g., by playing them over-the-air and then recording them with a microphone). Initial work on image-based adversarial examples made this same assumption, which was later shown unnecessary [27, 2].
How should we quantify the distortion introduced by a perturbation δ? In the space of images, despite some debate, most of the community has settled on l_p metrics, most often using l_∞ [14, 30]: the maximum amount any pixel has been changed. We follow this convention for our audio attacks.
We measure distortion in decibels (dB), a logarithmic scale that measures the relative loudness of an audio sample: dB(x) = max_i 20 · log10(x_i). To say that some signal is “10 dB” is only meaningful when comparing it relative to some other reference point. In this paper, we compare the dB level of the distortion δ to that of the original waveform x. To make this explicit, we write dB_x(δ) = dB(δ) − dB(x).
Because the perturbation introduced is quieter than the original signal, the distortion is a negative number, where smaller values indicate quieter distortions.
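The distortion metric is straightforward to compute; a small numpy sketch:

```python
import numpy as np

def db(x):
    # Loudness of a waveform: the level of its loudest sample,
    # on a logarithmic (decibel) scale.
    return 20.0 * np.log10(np.max(np.abs(x)))

def db_x(delta, x):
    # Distortion of the perturbation delta relative to the source x.
    # Negative whenever the perturbation is quieter than the signal.
    return db(delta) - db(x)
```

A perturbation 100 times quieter than the source, for example, sits at −40 dB relative to it.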
While this metric may not be a perfect measure of distortion, as long as the perturbation is small enough, it will be imperceptible to humans. We encourage the reader to listen to our adversarial examples to hear how similar they sound. Alternatively, Figure 2 visualizes two overlaid waveforms which transcribe to different phrases.
To evaluate the effectiveness of our attack, we construct targeted audio adversarial examples on the first 100 test instances of the Mozilla Common Voice dataset [32]. For each sample, we target several different incorrect transcriptions, chosen at random such that (a) the transcription is incorrect, and (b) it is theoretically possible to reach that target.
III-B An Initial Formulation
We formulate the attack as an optimization problem: minimize dB_x(δ) such that C(x + δ) = t and x + δ ∈ [−M, M]. Here M represents the maximum representable value (2^15 − 1 in our case, since samples are signed 16-bit values). This constraint can be handled by clipping the values of x + δ; for notational simplicity we omit it from future formulations. Due to the non-linearity of the constraint C(x + δ) = t, standard gradient-descent techniques do not work well with this formulation.
Prior work [10] has resolved this through the reformulation: minimize dB_x(δ) + c · ℓ(x + δ, t), where the loss function ℓ is constructed so that ℓ(x', t) ≤ 0 if and only if C(x') = t. The parameter c trades off the relative importance of being adversarial and remaining close to the original example.
Constructing a loss function with this property is much simpler in the domain of images than in the domain of audio: on images, f(x')_t directly corresponds to the probability of the input x' having label t. In contrast, for audio, we use a second decoding step to compute C(x'), and so constructing a loss function is nontrivial.
To begin, we use the CTC loss as the loss function: ℓ(x', t) = CTC-Loss(x', t). For this loss function, one direction of the implication holds (a sufficiently small loss guarantees C(x') = t), but the converse does not. Fortunately, this means that the resulting solution will still be adversarial; it just may not be minimally perturbed.
The second difficulty we must address is that, when using an l_∞ distortion metric, this optimization process will often oscillate around a solution without converging [10]. Therefore, we instead initially solve the formulation: minimize |δ|₂² + c · ℓ(x + δ, t) subject to dB_x(δ) < τ, for some sufficiently large constant τ. Upon obtaining a partial solution to the above problem, we reduce τ and resume minimization, repeating until no solution can be found.
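This iterative scheme can be sketched as follows. Everything model-specific here is a stand-in: a toy quadratic replaces the differentiable CTC loss, and its gradient is written by hand rather than obtained by backpropagating through the MFC and the network.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(16)                # toy "source waveform"
adv = x + 0.05 * rng.standard_normal(16)   # toy "adversarial" waveform

def loss(x_adv):
    # Stand-in for the differentiable loss l(x + delta, t).
    return float(np.sum((x_adv - adv) ** 2))

def loss_grad(x_adv):
    return 2.0 * (x_adv - adv)

def attack(x, tau, steps=200, lr=0.01):
    # Gradient descent on l(x + delta, t), keeping |delta|_inf <= tau
    # by clipping after every step.
    delta = np.zeros_like(x)
    for _ in range(steps):
        delta -= lr * loss_grad(x + delta)
        delta = np.clip(delta, -tau, tau)
    return delta

# Outer loop: shrink tau until no solution can be found, keeping the
# last delta that still succeeded.
tau, best = 0.5, None
while True:
    delta = attack(x, tau)
    if loss(x + delta) > 1e-4:   # failed at this tau: stop
        break
    best, tau = delta, 0.8 * tau
```

In the real attack, the loss would be the CTC loss computed through the MFC and the recurrent network, and "success" would mean the decoded transcription exactly matches t.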
To solve this formulation, we differentiate through the entire classifier to generate our adversarial examples: starting from the audio sample, through the MFC, and the neural network, to the final loss. We solve the minimization problem over the complete audio sample simultaneously. This is in contrast with prior work on hidden voice commands [11], which were generated sequentially, one frame at a time. We solve the minimization problem with the Adam [25] optimizer.
We are able to generate targeted adversarial examples with success for each of the source-target pairs, with a mean perturbation of dB. For comparison, this is roughly the difference between ambient noise in a quiet room and a person talking. We encourage the reader to listen to our audio adversarial examples. Distortion ranged from dB to dB.
The longer a phrase is, the more difficult it is to target: every extra character requires approximately a dB increase in distortion. However, conversely, we observe that the longer the initial source phrase is, the easier it is to make it target a given transcription. These two effects roughly counteract each other (although we were not able to measure this to a statistically significant degree of certainty).
Generating a single adversarial example requires approximately one hour of compute time on commodity hardware (a single NVIDIA 1080Ti). However, due to the massively parallel nature of GPUs, we are able to construct many adversarial examples simultaneously, reducing the time for constructing any given adversarial example to only a few minutes. (Due to implementation difficulties, after constructing adversarial examples simultaneously, we must fine-tune them individually afterwards.)
III-C Improved Loss Function
Carlini & Wagner [10] demonstrate that the choice of loss function impacts the final distortion of generated adversarial examples by a large factor. We now show the same holds in the audio domain, but to a lesser extent. While CTC loss is highly useful for training the neural network, we show that a carefully designed loss function allows generating lower-distortion adversarial examples. For the remainder of this section, we focus on generating adversarial examples that are only effective when using greedy decoding.
In order to minimize the CTC loss (as done in § III-B), an optimizer will make every aspect of the transcribed phrase more similar to the target phrase. That is, if the target phrase is “ABCD” and we are already decoding to “ABCX”, minimizing CTC loss will still cause the “A” to be more “A”-like, despite the fact that the only important change we require is for the “X” to be turned into a “D”.
This effect, making items classified more strongly as the desired label despite already having that label, led to the design of a more effective loss function: ℓ(y, t) = max( max_{t' ≠ t} y_{t'} − y_t , 0 ), where y is a vector of per-label scores and t is the target label.
Once the probability of the target label is larger than that of any other label, the optimizer no longer sees a reduction in loss by making the input more strongly classified with that label.
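Per frame, this loss is a simple margin over per-label scores; a minimal sketch:

```python
import numpy as np

def margin_loss(scores, t):
    # scores: per-label scores for one frame; t: index of the target.
    # Positive while some other label outscores the target; exactly
    # zero once the target is the (strict) argmax, so the optimizer
    # gains nothing from pushing the target score further up.
    other = np.max(np.delete(scores, t))
    return max(float(other - scores[t]), 0.0)
```

The per-frame losses are then summed along a fixed alignment to score an entire phrase.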
We now adapt this loss function to the audio domain. Assume we are given an alignment π that aligns the phrase p with the output distributions y = f(x). Then the loss of this sequence is the sum of the per-frame losses: L(x, π) = Σ_i ℓ(y^i, π^i).
We make one further improvement on this loss function. The constant c used in the minimization formulation determines the relative importance of remaining close to the original sample versus being adversarial. A larger value of c allows the optimizer to place more emphasis on reducing ℓ(·).
In audio, consistent with prior work, we observe that certain characters are more difficult for the transcription to recognize. When we choose only one constant c for the complete phrase, it must be large enough that we can make the most difficult character be transcribed correctly. This forces c to be larger than necessary for the easier-to-target segments. To resolve this issue, we instead use the following formulation: minimize |δ|₂² + Σ_i c_i · L_i(x + δ, π^i) subject to dB_x(δ) < τ, where L_i(x + δ, π^i) = ℓ(f(x + δ)^i, π^i). Computing this loss function requires a choice of alignment π. If we were not concerned about runtime efficiency, in principle we could try all alignments and select the best one; however, this is computationally prohibitive.
Instead, we use a two-step attack:
First, we let x₀' be an adversarial example found using the CTC loss (following §III-B). CTC decoding explicitly constructs an alignment. We extract the alignment π induced by x₀' (by computing π^i = argmax_c f(x₀')^i_c). We fix this alignment π and use it as the target in the second step.
Next, holding the alignment π fixed, we generate a less-distorted adversarial example targeting the alignment π, using the improved loss function above to minimize ℓ(·), starting gradient descent at the initial point found in the first step.
We repeat the evaluation from Section III-B (above), and generate targeted adversarial examples for the first 100 instances of the Common Voice test set. We are able to reduce the mean distortion from dB to dB. However, the adversarial examples we generate are now only guaranteed to be effective against a greedy decoder; against a beam-search decoder, the transcribed phrases are often more similar to the target phrase than the original phrase, but do not perfectly match the target.
Figure 2 shows two waveforms overlaid; the blue, thick line is the original waveform, and the orange, thin line is the modified adversarial waveform. This sample was chosen randomly from among the training data, and corresponds to a distortion of dB. Even visually, these two waveforms are nearly indistinguishable.
III-D Audio Information Density
Recall that the input waveform is converted into 50 frames per second of audio, and DeepSpeech outputs one probability distribution over characters per frame. This places the theoretical maximum density of audio at 50 characters per second. We are able to generate adversarial examples that produce output at this maximum rate. Thus, short audio clips can transcribe to a long textual phrase.
The loss function is simpler in this setting: the only alignment of p to y is the assignment π = p, i.e., each frame i must emit exactly the character p^i.
We perform this attack and find it is effective, although it requires a mean distortion of dB.
III-E Starting from Non-Speech
Not only are we able to construct adversarial examples that cause DeepSpeech to transcribe incorrect text for a person's speech; we are also able to begin with an arbitrary non-speech audio sample and make it transcribe as any target phrase. No technical novelty beyond what was developed above is required to mount this attack: we only let the initial audio waveform x be non-speech.
To evaluate the effectiveness of this attack, we take five-second clips from classical music (which contain no speech) and target phrases contained in the Common Voice dataset. We have found that this attack requires more computational effort (we perform more iterations of gradient descent) and the total distortion is slightly larger, with a mean of dB.
III-F Targeting Silence
Finally, we find it is possible to hide speech by adding adversarial noise that causes DeepSpeech to transcribe nothing. While performing this attack without modification (by just targeting the empty phrase) is effective, we can slightly improve on it if we define silence to be an arbitrary-length sequence of only the ϵ and space characters repeated. With this definition, to obtain silence, the loss should penalize, for each frame, the margin by which the most likely non-silent character exceeds the most likely silent one (ϵ or space).
We find that targeting silence is easier than targeting a specific phrase: with distortion less than dB below the original signal, we can turn any phrase into silence.
This partially explains why it is easier to construct adversarial examples when starting with longer audio waveforms than shorter ones: because the longer phrase contains more sounds, the adversary can silence the ones that are not required and obtain a subsequence that nearly matches the target. In contrast, for a shorter phrase, the adversary must synthesize new characters that did not exist previously.
IV Audio Adversarial Example Properties
IV-A Evaluating Single-Step Methods
In contrast to prior work which views adversarial examples as “blind spots” of a neural network, Goodfellow et al. [14] argue that adversarial examples are largely effective due to the locally linear nature of neural networks.
The Fast Gradient Sign Method (FGSM) [14] demonstrates that this is true in the space of images. FGSM takes a single step in the direction of the gradient of the loss function. That is, given a network with loss function ℓ, we compute the adversarial example as x' = x + ε · sign(∇_x ℓ(x, y)).
Intuitively, for each pixel in an image, this attack asks “in which direction should we modify this pixel to maximize the loss?” and then takes a small step in that direction for every pixel simultaneously. This attack can be applied directly to audio, changing individual samples instead of pixels.
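A sketch of the single-step attack, with the gradient supplied as a function; for DeepSpeech it would come from backpropagating the CTC loss through the MFC and the network, and the placeholder gradient below is invented for illustration:

```python
import numpy as np

def fgsm(x, grad_of_loss, eps):
    # Untargeted FGSM: step each sample by eps in whichever direction
    # increases the loss, given the sign of the loss gradient at x.
    return x + eps * np.sign(grad_of_loss(x))
```

A targeted variant would instead subtract the gradient of the loss toward the target phrase.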
However, we find that this type of single-step attack is not effective on audio adversarial examples: the inherent non-linearity introduced in computing the MFCCs, along with the depth of many rounds of LSTMs, introduces a large degree of non-linearity in the output.
In Figure 3 we compare the value of the CTC loss when traveling in the direction of a known adversarial example, compared to traveling in the fast gradient sign direction. While initially (near the source audio sample) the fast gradient sign direction is more effective at reducing the loss function, it quickly plateaus and does not decrease further. Iterative optimization-based attacks, on the other hand, find a direction that eventually leads to an adversarial example. (Only when the CTC loss is below 10 does the phrase have the correct transcription.)
We do, however, observe that FGSM can be used to produce untargeted audio adversarial examples that cause a phrase to be misclassified (although optimization-based methods again do so with less distortion).
IV-B Robustness of Adversarial Examples
The minimally perturbed adversarial examples we construct in Section III-B can be made non-adversarial by trivial modifications to the input. Here, we demonstrate that it is possible to construct adversarial examples robust to various forms of noise.
Robustness to pointwise noise.
Given an adversarial example x', adding pointwise random noise σ and classifying x' + σ will cause x' to lose its adversarial label, even when the distortion σ is small enough that natural examples retain their correct classification.
Robustness to MP3 compression.
Following prior work on the straight-through estimator [7], we make use of this technique to construct adversarial examples robust to MP3 compression. We generate an adversarial example x' such that MP3(x') is classified as the target label, by computing gradients of the CTC loss as if the gradient of MP3 compression were the identity function. While individual gradient steps are likely not exactly correct, in aggregate the gradients average out to become useful. This allows us to generate adversarial examples with moderately larger distortion that remain robust to MP3 compression.
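The straight-through trick can be illustrated on a toy non-differentiable transform, with coarse quantization standing in for MP3 compression; the loss and its gradient here are invented for the example:

```python
import numpy as np

def quantize(x, step=0.25):
    # Non-differentiable stand-in for MP3 compression.
    return np.round(x / step) * step

def ste_minimize(x0, loss_grad, lr=0.1, steps=100):
    # Minimize loss(quantize(x)) while pretending the quantizer's
    # gradient is the identity: evaluate the loss gradient at the
    # quantized point, but apply the update to x directly.
    x = x0.copy()
    for _ in range(steps):
        x -= lr * loss_grad(quantize(x))
    return x
```

Individual steps use a wrong gradient, but on average they still drive the compressed output toward the target.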
V Open Questions
Can these attacks be played over-the-air?
Image-based adversarial examples have been shown to be feasible in the physical world [27, 2]. In the audio space, both hidden voice commands and Dolphin Attack’s inaudible voice commands are effective over-the-air when played by a speaker and recorded by a microphone [11, 41].
The audio adversarial examples we construct in this paper do not remain adversarial after being played over-the-air, and therefore present a limited real-world threat; however, just as the initial work on image-based adversarial examples did not consider the physical channel and only later was it shown to be possible, we believe further work will be able to produce audio adversarial examples that are effective over-the-air.
Do universal adversarial perturbations [31] exist?
One surprising observation is that in the space of images, it is possible to construct a single perturbation that, when applied to an arbitrary image, will make its classification incorrect. Such attacks would be powerful on audio, and would correspond to a perturbation that could be played to cause any other waveform to be recognized as a target phrase.
Are audio adversarial examples transferable?
That is, given an audio sample x, can we generate a single perturbation δ so that C_i(x + δ) = t for multiple different classifiers C_i? Transferability is believed to be a fundamental property of neural networks [34], significantly complicates constructing robust defenses, and allows attackers to mount black-box attacks. Evaluating transferability in the audio domain is an important direction for future work.
Which existing defenses can be applied to audio?
To the best of our knowledge, all existing defenses to adversarial examples have only been evaluated on image domains. If the defender’s objective is to produce a robust neural network, then it should improve resistance to adversarial examples on all domains, not just on images. Audio adversarial examples give another point of comparison.
We demonstrate that targeted audio adversarial examples are effective against automatic speech recognition. With optimization-based attacks applied end-to-end, we are able to turn any audio waveform into any target transcription with success, adding only a slight distortion. We can cause audio to transcribe at up to 50 characters per second (the theoretical maximum), cause music to transcribe as arbitrary speech, and hide speech from being transcribed.
We present preliminary evidence that audio adversarial examples have different properties from those on images by showing that linearity does not hold in the audio domain. We hope that future work will continue to investigate audio adversarial examples, and separate the fundamental properties of adversarial examples from properties which occur only in image recognition.
This work was supported by National Science Foundation award CNS-1514457, Qualcomm, and the Hewlett Foundation through the Center for Long-Term Cybersecurity.
[1] A. Arnab, O. Miksik, and P. H. Torr. On the robustness of semantic segmentation models to adversarial attacks. arXiv preprint arXiv:1711.09856, 2017.
[2] A. Athalye, L. Engstrom, A. Ilyas, and K. Kwok. Synthesizing robust adversarial examples. arXiv preprint arXiv:1707.07397, 2017.
[3] A. Athalye, N. Carlini, and D. Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420, 2018.
[4] M. Barreno, B. Nelson, R. Sears, A. D. Joseph, and J. D. Tygar. Can machine learning be secure? In Proceedings of the 2006 ACM Symposium on Information, Computer and Communications Security, pages 16–25. ACM, 2006.
[5] M. Barreno, B. Nelson, A. D. Joseph, and J. Tygar. The security of machine learning. Machine Learning, 81(2):121–148, 2010.
[6] V. Behzadan and A. Munir. Vulnerability of deep reinforcement learning to policy induction attacks. arXiv preprint arXiv:1701.04143, 2017.
[7] Y. Bengio, N. Léonard, and A. Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
[8] B. Biggio, I. Corona, D. Maiorca, B. Nelson, N. Šrndić, P. Laskov, G. Giacinto, and F. Roli. Evasion attacks against machine learning at test time. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 387–402. Springer, 2013.
[9] N. Carlini and D. Wagner. MagNet and “Efficient defenses against adversarial attacks” are not robust to adversarial examples. arXiv preprint arXiv:1711.08478, 2017.
[10] N. Carlini and D. Wagner. Towards evaluating the robustness of neural networks. In Security and Privacy (SP), 2017 IEEE Symposium on, pages 39–57. IEEE, 2017.
[11] N. Carlini, P. Mishra, T. Vaidya, Y. Zhang, M. Sherr, C. Shields, D. Wagner, and W. Zhou. Hidden voice commands. In 25th USENIX Security Symposium (USENIX Security 16), Austin, TX, 2016.
[12] M. Cisse, Y. Adi, N. Neverova, and J. Keshet. Houdini: Fooling deep structured prediction models. arXiv preprint arXiv:1707.05373, 2017.
[13] Y. Gong and C. Poellabauer. Crafting adversarial examples for speech paralinguistics applications. arXiv preprint arXiv:1711.03280, 2017.
[14] I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
[15] A. Graves, S. Fernández, F. Gomez, and J. Schmidhuber. Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd International Conference on Machine Learning, pages 369–376. ACM, 2006.
[16] K. Grosse, N. Papernot, P. Manoharan, M. Backes, and P. McDaniel. Adversarial perturbations against deep neural networks for malware classification. arXiv preprint arXiv:1606.04435, 2016.
[17] A. Hannun. Sequence modeling with CTC. Distill, 2017. doi: 10.23915/distill.00008. https://distill.pub/2017/ctc.
[18] A. Hannun, C. Case, J. Casper, B. Catanzaro, G. Diamos, E. Elsen, R. Prenger, S. Satheesh, S. Sengupta, A. Coates, et al. Deep Speech: Scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567, 2014.
[19] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[20] W. Hu and Y. Tan. Generating adversarial malware examples for black-box attacks based on GAN. arXiv preprint arXiv:1702.05983, 2017.
[21] S. Huang, N. Papernot, I. Goodfellow, Y. Duan, and P. Abbeel. Adversarial attacks on neural network policies. arXiv preprint arXiv:1702.02284, 2017.
[22] A. Ilyas, L. Engstrom, A. Athalye, and J. Lin. Query-efficient black-box adversarial examples. arXiv preprint arXiv:1712.07113, 2017.
[23] R. Jia and P. Liang. Adversarial examples for evaluating reading comprehension systems. arXiv preprint arXiv:1707.07328, 2017.
[24] C. Kereliuk, B. L. Sturm, and J. Larsen. Deep learning and music adversaries. IEEE Transactions on Multimedia, 17(11):2059–2071, 2015.
[25] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[26] J. Kos, I. Fischer, and D. Song. Adversarial examples for generative models. arXiv preprint arXiv:1702.06832, 2017.
[27] A. Kurakin, I. Goodfellow, and S. Bengio. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533, 2016.
[28] Y. Liu, X. Chen, C. Liu, and D. Song. Delving into transferable adversarial examples and black-box attacks. arXiv preprint arXiv:1611.02770, 2016.
[29] D. Lowd and C. Meek. Adversarial learning. In Proceedings of the Eleventh ACM SIGKDD International Conference on Knowledge Discovery in Data Mining, pages 641–647. ACM, 2005.
[30] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
[31] S.-M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard. Universal adversarial perturbations. arXiv preprint arXiv:1610.08401, 2016.
[32] Mozilla. Project DeepSpeech. https://github.com/mozilla/DeepSpeech, 2017.
[33] A. Nguyen, J. Yosinski, and J. Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 427–436, 2015.
[34] N. Papernot, P. McDaniel, and I. Goodfellow. Transferability in machine learning: From phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277, 2016.
[35] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami. Practical black-box attacks against deep learning systems using adversarial examples. arXiv preprint arXiv:1602.02697, 2016.
[36] A. Rozsa, E. M. Rudd, and T. E. Boult. Adversarial diversity and hard positive generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 25–32, 2016.
[37] M. Sharif, S. Bhagavatula, L. Bauer, and M. K. Reiter. Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pages 1528–1540. ACM, 2016.
[38] S. W. Smith et al. The Scientist and Engineer's Guide to Digital Signal Processing. 1997.
[39] L. Song and P. Mittal. Inaudible voice commands. arXiv preprint arXiv:1708.07238, 2017.
[40] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. ICLR, 2013.
[41] G. Zhang, C. Yan, X. Ji, T. Zhang, T. Zhang, and W. Xu. DolphinAttack: Inaudible voice commands. CCS, 2017.