Defense against adversarial attacks on spoofing countermeasures of ASV

03/06/2020 ∙ by Haibin Wu, et al.

Various forefront countermeasure methods for automatic speaker verification (ASV) with considerable anti-spoofing performance were proposed in the ASVspoof 2019 challenge. However, previous work has shown that countermeasure models are vulnerable to adversarial examples indistinguishable from natural data. A good countermeasure model should not only be robust against spoofing audio, including synthetic, converted, and replayed audio, but also counteract deliberately generated examples from malicious adversaries. In this work, we introduce a passive defense method, spatial smoothing, and a proactive defense method, adversarial training, to mitigate the vulnerability of ASV spoofing countermeasure models to adversarial examples. This paper is among the first to use defense methods to improve the robustness of ASV spoofing countermeasure models under adversarial attacks. The experimental results show that these two defense methods help spoofing countermeasure models counter adversarial examples.




1 Introduction

Automatic speaker verification (ASV) refers to the task of ascertaining whether an utterance was spoken by a specific speaker. ASV is undisputedly a crucial technology for biometric identification and is broadly applied in real-world applications such as banking and home automation. Considerable improvements in both the accuracy and the efficiency of ASV systems have been achieved through active research in a diversity of approaches [6, 2, 14, 13, 5, 10]. [13] proposed a method that uses a Gaussian mixture model to extract acoustic features and then applies the likelihood ratio for scoring. An end-to-end speaker verification model that directly maps an utterance to a verification score was proposed by [5] to improve verification accuracy and make the ASV model compact and efficient.

Recently, improving the robustness of ASV systems against spoofing audios, such as synthetic, converted, and replayed audios, has attracted increasing attention. The automatic speaker verification spoofing and countermeasures challenge [18, 7, 16], now in its third edition, aims at developing reliable spoofing countermeasures that can counteract the three kinds of spoofing audios mentioned above. ASVspoof 2019 takes both logical access (LA) and physical access (PA) into account: the LA scenario contains spoofing audios artificially generated by modern text-to-speech (TTS) and voice conversion (VC) models, and the PA scenario contains replayed audios. A variety of state-of-the-art countermeasure methods aiming at anti-spoofing for ASV have been proposed and have achieved considerable anti-spoofing performance [3, 20, 8, 9, 1]. However, whether these countermeasure models can defend against deliberately generated adversarial examples remains to be investigated.

Adversarial examples [15] are generated by maliciously perturbing the original input with a small noise. The perturbations are almost indistinguishable to humans but can cause a well-trained network to classify incorrectly. Using deliberately generated adversarial examples to attack machine learning models is called an adversarial attack. Previous work has shown that image classification models are subject to adversarial attacks. The spoofing countermeasure models for ASV learned by the backpropagation algorithm also have such intrinsic blind spots to adversarial examples [11]. These intrinsic blind spots must be fixed to ensure safety.

To mitigate the vulnerability of spoofing countermeasure models to adversarial attacks, we introduce a passive defense method, namely spatial smoothing, and a proactive defense method, namely adversarial training. Two countermeasure models from the ASVspoof 2019 challenge [20, 8] are constructed, and we apply adversarial training and spatial smoothing to improve the reliability of these two models. This work is among the first to explore defense against adversarial attacks for spoofing countermeasure models.

This paper is organized as follows. Section 2 introduces the procedure of adversarial example generation. Section 3 gives the detailed structure of the two countermeasure models used in the subsequent experiments. Section 4 describes the two defense approaches, namely spatial smoothing and adversarial training. Experimental results and analysis are presented in Section 5. Finally, conclusions and future work are given in Section 6.

2 Adversarial Example Generation

2.1 Adversarial Example Generation

We can generate adversarial examples by adding a minimally perceptible perturbation to the input. The perturbation is found by solving an optimization problem. There are two kinds of adversarial attacks: targeted and nontargeted. Targeted attacks aim at maximizing the probability of a targeted class that is not the correct class, whereas nontargeted attacks aim at minimizing the probability of the correct class. We focus on targeted attacks in this work. Specifically, to generate adversarial examples, we fix the parameters θ of a well-trained model and perform gradient descent to update the input. Mathematically, we want to find a sufficiently small perturbation δ that satisfies

  x̃ = x + δ,  δ = argmin_{δ ∈ Δ} L(f_θ(x + δ), y′),   (1)

where f_θ is a well-trained neural network parameterized by θ, x ∈ R^d is the input data with dimensionality d, y is the true label corresponding to x, y′ is a randomly selected label with y′ ≠ y, x̃ is the perturbed data, δ is a small perturbation, and Δ is the feasible set of δ. Finding a suitable δ is a constrained optimization problem, and we can use a descent method to solve it. Δ can be a small ∞-norm ball:

  Δ = {δ ∈ R^d : ‖δ‖_∞ ≤ ε},   (2)

where ε > 0 and ‖δ‖_∞ = max_i |δ_i|. The constraint in Equation 2 is a box constraint, and clipping is used to make the solution feasible. We choose the feasible set Δ as shown in Equation 2.

The projected gradient descent (PGD) method is an iterative method for adversarial attack and has shown effective attack performance in various tasks [12]. In this work, the PGD method is used to generate adversarial examples; it is specified in Algorithm 1. In Algorithm 1, x̃_N is the returned adversarial example, and the clip() function applies element-wise clipping to ensure x̃ ∈ x + Δ, i.e., ‖x̃ − x‖_∞ ≤ ε.

0:  Input: x and y, the input and its corresponding label; y′, a selected label with y′ ≠ y; α, the step size; N, the number of iterations.
1:  Initialize x̃_0 = x;
2:  for n = 0; n < N; n = n + 1 do
3:     x̃_{n+1} = x̃_n − α · sign(∇_x L(f_θ(x̃_n), y′));
4:     if x̃_{n+1} ∈ x + Δ then
5:        keep x̃_{n+1};
6:     else
7:        x̃_{n+1} = clip(x̃_{n+1});
8:     end if
9:  end for
10: return x̃_N;
Algorithm 1: Projected Gradient Descent Method
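As an illustrative sketch (not the paper's implementation), Algorithm 1 can be written in a few lines of NumPy. The toy logistic "network", its fixed weights w, and the helper names pgd_targeted and loss_grad are hypothetical stand-ins for the countermeasure model and its input-gradient:

```python
import numpy as np

def pgd_targeted(grad_fn, x, y_target, eps=0.1, alpha=0.02, n_iter=5):
    """Targeted PGD (Algorithm 1 sketch): step toward the target label,
    then clip every iterate back into the eps-ball around the input."""
    x_adv = x.copy()
    for _ in range(n_iter):
        # descend the loss with respect to the *target* label y'
        x_adv = x_adv - alpha * np.sign(grad_fn(x_adv, y_target))
        # projection step: keep ||x_adv - x||_inf <= eps
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

# toy differentiable "model": logistic regression with fixed weights
w = np.array([1.0, -2.0, 0.5])

def loss_grad(x, y):
    # gradient of the binary cross-entropy loss with respect to the input x
    p = 1.0 / (1.0 + np.exp(-np.dot(w, x)))
    return (p - y) * w

x0 = np.array([0.2, -0.1, 0.3])
x_adv = pgd_targeted(loss_grad, x0, y_target=1.0, eps=0.05)
print(np.max(np.abs(x_adv - x0)))  # perturbation stays within eps
```

Because each step moves toward the target label y′ and every iterate is clipped back into the ε-ball, the final perturbation stays small while the model's score for the target class rises.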

3 ASV Spoofing Countermeasure Models

Inspired by the ASV spoofing countermeasure models in the ASVspoof 2019 challenge [16, 20, 8], we construct two kinds of single models on which to apply the defense methods. These two models are described in the following subsections.

3.1 VGG-like Network

The VGG network, a model composed of convolution and pooling layers, has shown remarkable performance in image classification. [20] studied VGG from the perspective of automatic speaker verification and proposed a VGG-like network with good anti-spoofing performance for ASV. Based on this finding, we modify VGG to address anti-spoofing; the modified network structure is shown in Table 1.

Type          Filter   Output
Flatten       –        25088
FC            –        4096
FC            –        4096
FC (softmax)  –        2
Table 1: VGG-like network architecture.

3.2 Squeeze-Excitation ResNet model

Lai et al. [8] proposed the Squeeze-Excitation ResNet model (SENet) to address anti-spoofing for ASV. The system proposed by [8] ranked 3rd and 14th in the PA and LA scenarios, respectively, in the ASVspoof 2019 challenge. However, [11] successfully attacked SENet with deliberately generated adversarial examples. Hence, this work seeks to improve the robustness of SENet with the two defense methods elaborated below.

4 Defense Methods

There are two kinds of defense methods against adversarial attacks: passive defense and proactive defense. Passive defense methods aim to counter adversarial attacks without modifying the model, while proactive defense methods train new models that are robust to adversarial examples. Two defense methods are introduced in this section: spatial smoothing, a passive method that is inexpensive and complementary to other defenses, and adversarial training, a proactive method.

4.1 Spatial Smoothing

Spatial smoothing (also referred to as "filtering") has been widely used for noise reduction in image processing. It smooths each central pixel using its nearby pixels. There are a variety of smoothing methods based on different weighting mechanisms over the nearby pixels, e.g., the median filter, mean filter, and Gaussian filter. Taking the mean filter as an example, a sliding window moves over the picture, and the central pixel in the window is substituted by the mean of the values within the window.

Spatial smoothing was introduced by [19] to harden image classification models by detecting maliciously generated adversarial examples. Implementing smoothing requires no extra training effort, so we use this inexpensive strategy to improve the robustness of well-trained ASV models.
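The median and mean filters described above can be sketched with a hand-rolled sliding-window routine; the function name spatial_smooth and the toy 8×8 "spectrogram" are invented for illustration (a Gaussian filter would replace the window statistic with a Gaussian-weighted mean):

```python
import numpy as np

def spatial_smooth(spec, size=3, mode="mean"):
    """Slide a size x size window over a 2-D spectrogram and replace each
    bin by the mean (or median) of its window (edges are zero-padded)."""
    pad = size // 2
    padded = np.pad(spec, pad, mode="constant")
    out = np.empty_like(spec, dtype=float)
    for i in range(spec.shape[0]):
        for j in range(spec.shape[1]):
            window = padded[i:i + size, j:j + size]
            out[i, j] = np.median(window) if mode == "median" else window.mean()
    return out

# toy "spectrogram" with one adversarial-style spike
spec = np.zeros((8, 8))
spec[4, 4] = 9.0

print(spatial_smooth(spec, mode="median")[4, 4])  # 0.0 -- spike removed
print(spatial_smooth(spec, mode="mean")[4, 4])    # 1.0 -- spike spread out
```

This illustrates why smoothing counters adversarial noise: an isolated high-frequency perturbation is discarded outright by the median filter and diluted by the mean filter.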

0:  Input: X and Y, the set of paired audios and their corresponding labels; θ, network parameters; N_1, the number of normal training epochs; N_2, the number of adversarial training epochs; n, the number of training examples; b, the batch size.
1:  Initialize θ.
2:  for e = 1; e ≤ N_1; e = e + 1 do
3:     for i = 1; i ≤ n/b; i = i + 1 do
4:        Get a mini-batch (x, y) from (X, Y);
5:        Update θ using gradient descent with respect to L(f_θ(x), y);
6:     end for
7:  end for
8:  while {e ≤ N_1 + N_2 & not converged} do
9:     for i = 1; i ≤ n/b; i = i + 1 do
10:       Get a mini-batch (x, y) from (X, Y);
11:       Generate adversarial examples x̃ by the PGD method;
12:       Update θ using gradient descent with respect to L(f_θ(x), y) + L(f_θ(x̃), y);
13:    end for
14: end while
15: return θ;
Algorithm 2: Adversarial Training

4.2 Adversarial Training

Adversarial training, which injects adversarial examples into the training data, was introduced in [4] to mitigate the vulnerability of deep neural networks to adversarial examples. Adversarial training can be seen as a combination of an inner and an outer optimization problem: the goal of the inner optimization is to find imperceptible adversarial examples, and the goal of the outer optimization is to fix the blind spots they expose. In this work, we also employ adversarial training. First, we use clean examples to pre-train the countermeasure models for N_1 epochs. Then we perform adversarial training for N_2 epochs. The detailed procedure is shown in Algorithm 2.
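A minimal sketch of Algorithm 2 on a toy logistic-regression "countermeasure" follows; the dataset, hyperparameters, and helper names (pgd_attack, etc.) are invented for illustration and do not reflect the paper's actual models:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def pgd_attack(w, x, y, eps=0.3, alpha=0.1, n_iter=5):
    """Nontargeted PGD: ascend the loss w.r.t. the true label while
    staying inside the eps-ball around the clean input."""
    x_adv = x.copy()
    for _ in range(n_iter):
        g = (sigmoid(x_adv @ w) - y) * w      # dL/dx for logistic loss
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)
    return x_adv

# tiny two-class dataset: class centers at (+2, +2) and (-2, -2)
shift = np.where(rng.random(200) < 0.5, 2.0, -2.0)
X = rng.normal(size=(200, 2)) + shift[:, None]
y = (shift > 0).astype(float)

w = np.zeros(2)
for _ in range(50):                            # N_1: normal training epochs
    for xi, yi in zip(X, y):
        w -= 0.1 * (sigmoid(xi @ w) - yi) * xi
for _ in range(50):                            # N_2: adversarial training epochs
    for xi, yi in zip(X, y):
        xi_adv = pgd_attack(w, xi, yi)
        # update on both the clean and the adversarial example
        w -= 0.1 * ((sigmoid(xi @ w) - yi) * xi
                    + (sigmoid(xi_adv @ w) - yi) * xi_adv)

X_adv = np.array([pgd_attack(w, xi, yi) for xi, yi in zip(X, y)])
robust_acc = np.mean((sigmoid(X_adv @ w) > 0.5) == (y == 1))
print(round(float(robust_acc), 2))
```

The first loop corresponds to the N_1 normal epochs, the second to the N_2 adversarial epochs in which each update mixes a clean example with its PGD-perturbed counterpart, mirroring lines 2–14 of Algorithm 2.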

5 Experiment

5.1 Experiment Setup

In this paper, we use the logical access (LA) partition of the ASVspoof 2019 dataset [16]. The LA partition is divided into training, development, and evaluation sets. The training and development sets are generated by the same kinds of TTS or VC models, while the evaluation set contains examples generated by different kinds of TTS or VC models. We therefore train on the training set and test on the development set, ensuring similar distributions between the training and testing data. The raw log power magnitude spectrum computed from the audio waveform is used as the acoustic feature. A Hamming window of size 1724 with a step size of 0.001 s is used to extract the FFT spectrum, and only the first 600 frames of each utterance are used for training and testing. We do not employ additional preprocessing such as dereverberation or pre-emphasis.
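The feature pipeline described above (Hamming window of 1724 samples, 1 ms hop, log power magnitude spectrum) can be sketched as follows; the 16 kHz sampling rate (so a 0.001 s step equals 16 samples) and the function name log_power_spectrum are assumptions for illustration:

```python
import numpy as np

def log_power_spectrum(wave, n_fft=1724, hop=16):
    """Frame the waveform with a Hamming window and take the log power
    magnitude spectrum of each frame (n_fft and hop mirror the text)."""
    window = np.hamming(n_fft)
    frames = []
    for start in range(0, len(wave) - n_fft + 1, hop):
        frame = wave[start:start + n_fft] * window
        power = np.abs(np.fft.rfft(frame)) ** 2
        frames.append(np.log(power + 1e-10))  # small floor avoids log(0)
    return np.array(frames)  # shape: (num_frames, n_fft // 2 + 1)

# one second of a 440 Hz tone at an assumed 16 kHz sampling rate
wave = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
feats = log_power_spectrum(wave)
print(feats.shape)
```

In the experiments only the first 600 rows (frames) of such a feature matrix would be kept per utterance.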

The network structures of the two countermeasure models are as described in Section 3. In the experiment, we first use the training data to pre-train the countermeasure models. The PGD method shown in Algorithm 1 is then adopted to generate adversarial examples for the well-trained countermeasure models. When running the PGD method, the number of iterations N is set to 5, with fixed choices of the step size α and perturbation bound ε. We then measure the performance of the well-trained countermeasure models on the generated adversarial examples with and without filters; three kinds of filters, namely the median, mean, and Gaussian filters, are implemented. Next, we use adversarial training to train the countermeasure models for N_2 epochs as shown in Algorithm 2. After adversarial training, we evaluate the testing accuracy of the countermeasure models on adversarial examples.

5.2 Results and Analyses

5.2.1 Spatial Smoothing

After pre-training VGG and SENet for N_1 epochs, we evaluate the testing accuracy of the two models. According to Table 2, both SENet and VGG achieve high testing accuracy on unperturbed testing data. However, when we test the two models with adversarial examples, the testing accuracy drops drastically. When we apply spatial smoothing to the adversarial examples before evaluation, the adversarial attack becomes far less effective, as testing accuracy increases greatly. All three spatial filters considerably improve the robustness of the countermeasure models against adversarial examples, although the improvement obtained with the Gaussian filter is smaller than that of the other two filters.

We now attempt to explain how spatial smoothing helps the spoofing countermeasure models resist adversarial examples. In the adversarial attack scenario, an adversary has full access to a well-trained model but cannot alter the parameters of the model. Assume the adversary is not aware that spatial smoothing will be applied to the input data before the input is fed into the model. The adversary uses the PGD method to find an imperceptible noise that causes the well-trained model to classify incorrectly and adds it to the input. However, the deliberately generated perturbation is countered by spatial smoothing, and the adversarial attack becomes invalid.

                                        SENet     VGG
Normal examples                         99.97%    99.99%
Adversarial examples                    48.32%    37.06%
Adversarial examples + median filter    82.00%    92.72%
Adversarial examples + mean filter      82.39%    93.95%
Adversarial examples + Gaussian filter  78.93%    84.39%
Table 2: Testing accuracy of SENet and VGG before adversarial training.

5.2.2 Adversarial Training

As shown in Table 3, the testing accuracy for adversarial examples of SENet increases from 48.32% to 92.40% while the testing accuracy for normal examples changes little after adversarial training. We can see a similar phenomenon for VGG. According to Table 3, adversarial training does improve the robustness of VGG and SENet.

                                        SENet     VGG
Normal examples                         99.75%    99.99%
Adversarial examples                    92.40%    98.60%
Adversarial examples + median filter    93.74%    98.96%
Adversarial examples + mean filter      93.76%    99.24%
Adversarial examples + Gaussian filter  83.72%    87.22%
Table 3: Testing accuracy of SENet and VGG after adversarial training.

Traditional supervised training does not encourage the chosen models to be robust to adversarial examples, so models trained in this way may be sensitive to small changes in their input space and thus have blind spots that a malicious adversary can attack. During the training stage, adversarial attacks should be taken into account by training on a mixture of clean and adversarial examples, which regularizes the model and makes it insensitive to all data points within the max-norm box. After doing so, it is hard for malicious adversaries to generate adversarial examples that attack the model. Adversarial training essentially samples adversarial examples within the max-norm box to augment the training set. The results in Table 3 illustrate that it is feasible and practical to train a robust countermeasure model using adversarial training.

5.2.3 Adversarial Training + Spatial Smoothing

We combine spatial smoothing and adversarial training, and the experimental results are shown in Table 3. We observe that equipping adversarial training with median or mean filters increases the testing accuracy on adversarial examples compared with using adversarial training alone, whereas adding Gaussian filters decreases it. Hence, median and mean filters are more desirable than Gaussian filters in our experimental setting.

6 Conclusion

In this paper, two defense methods, namely spatial smoothing and adversarial training, are introduced to improve the robustness of spoofing countermeasure models under adversarial attacks. We implement two countermeasure models, i.e., VGG and SENet, and augment them with the defense methods. The experimental results show that both spatial smoothing and adversarial training enhance the robustness of the models against adversarial attacks.

For future work, we will introduce more powerful defense methods, such as ensemble adversarial training [17], to make spoofing countermeasure models more robust to adversarial audios generated from testing data whose distribution differs from that of the training data.


  • [1] R. K. Das, J. Yang, and H. Li (2019) Long range acoustic features for spoofed speech detection. In 20th Annual Conference of the International Speech Communication Association (INTERSPEECH), Cited by: §1.
  • [2] D. Garcia-Romero and C. Y. Espy-Wilson (2011) Analysis of i-vector length normalization in speaker recognition systems. In Twelfth Annual Conference of the International Speech Communication Association, Cited by: §1.
  • [3] A. Gomez-Alanis, A. M. Peinado, J. A. Gonzalez, and A. M. Gomez (2019) A gated recurrent convolutional neural network for robust spoofing detection. IEEE/ACM Transactions on Audio, Speech, and Language Processing 27 (12), pp. 1985–1999. Cited by: §1.
  • [4] I. J. Goodfellow, J. Shlens, and C. Szegedy (2014) Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572. Cited by: §4.2.
  • [5] G. Heigold, I. Moreno, S. Bengio, and N. Shazeer (2016) End-to-end text-dependent speaker verification. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5115–5119. Cited by: §1.
  • [6] A. Kanagasundaram, R. Vogt, D. B. Dean, S. Sridharan, and M. W. Mason (2011) I-vector based speaker recognition on short utterances. In Proceedings of the 12th Annual Conference of the International Speech Communication Association, pp. 2341–2344. Cited by: §1.
  • [7] T. Kinnunen, M. Sahidullah, H. Delgado, M. Todisco, N. Evans, J. Yamagishi, and K. A. Lee (2017) The asvspoof 2017 challenge: assessing the limits of replay spoofing attack detection. Cited by: §1.
  • [8] C. Lai, N. Chen, J. Villalba, and N. Dehak (2019) ASSERT: anti-spoofing with squeeze-excitation and residual networks. arXiv preprint arXiv:1904.01120. Cited by: §1, §1, §3.2, §3.
  • [9] G. Lavrentyeva, S. Novoselov, A. Tseren, M. Volkova, A. Gorlanov, and A. Kozlov (2019) STC antispoofing systems for the asvspoof2019 challenge. arXiv preprint arXiv:1904.05576. Cited by: §1.
  • [10] Y. Lei, N. Scheffer, L. Ferrer, and M. McLaren (2014) A novel scheme for speaker recognition using a phonetically-aware deep neural network. In 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1695–1699. Cited by: §1.
  • [11] S. Liu, H. Wu, H. Lee, and H. Meng Adversarial attacks on spoofing countermeasure of automatic speaker verification. Note: unpublished Cited by: §1, §3.2.
  • [12] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu (2017) Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083. Cited by: §2.1.
  • [13] D. A. Reynolds, T. F. Quatieri, and R. B. Dunn (2000) Speaker verification using adapted gaussian mixture models. Digital signal processing 10 (1-3), pp. 19–41. Cited by: §1.
  • [14] A. Senior and I. Lopez-Moreno (2014) Improving dnn speaker independence with i-vector inputs. In 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 225–229. Cited by: §1.
  • [15] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus (2013) Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199. Cited by: §1.
  • [16] M. Todisco, X. Wang, V. Vestman, M. Sahidullah, H. Delgado, A. Nautsch, J. Yamagishi, N. Evans, T. Kinnunen, and K. A. Lee (2019) ASVspoof 2019: future horizons in spoofed and fake audio detection. arXiv preprint arXiv:1904.05441. Cited by: §1, §3, §5.1.
  • [17] F. Tramèr, A. Kurakin, N. Papernot, I. Goodfellow, D. Boneh, and P. McDaniel (2017) Ensemble adversarial training: attacks and defenses. arXiv preprint arXiv:1705.07204. Cited by: §6.
  • [18] Z. Wu, T. Kinnunen, N. Evans, J. Yamagishi, C. Hanilçi, M. Sahidullah, and A. Sizov (2015) ASVspoof 2015: the first automatic speaker verification spoofing and countermeasures challenge. In Sixteenth Annual Conference of the International Speech Communication Association, Cited by: §1.
  • [19] W. Xu, D. Evans, and Y. Qi (2017) Feature squeezing: detecting adversarial examples in deep neural networks. arXiv preprint arXiv:1704.01155. Cited by: §4.1.
  • [20] H. Zeinali, T. Stafylakis, G. Athanasopoulou, J. Rohdin, I. Gkinis, L. Burget, J. Černockỳ, et al. (2019) Detecting spoofing attacks using vgg and sincnet: but-omilia submission to asvspoof 2019 challenge. arXiv preprint arXiv:1907.12908. Cited by: §1, §1, §3.1, §3.