Automatic speaker verification (ASV) systems are nowadays used for an increasing range of applications. However, ASV systems are vulnerable to audio spoofing attacks, which attempt to gain unauthorized access by manipulating the audio input. One of the most popular and effective audio spoofing attacks is the replay attack (RA), in which the attacker fools the ASV system by replaying a recording of an authorized speaker. Considering how effective and cheap RAs are, it is necessary in practice to augment an ASV system with an RA detection system.
The public benchmark ASVspoof initiative started with the ASVspoof 2015 challenge, which dealt with text-to-speech and voice conversion spoofing attacks. ASVspoof 2017 was the first challenge concerned with RA detection and thus created a benchmark data set consisting of voice command recordings. ASVspoof 2019 then introduced a much larger corpus of longer and text-independent recordings for RA detection.
The performance of RA detection systems is thought to depend strongly on their input feature processing. Correspondingly, earlier work has largely dealt with handcrafted feature processing, and it has been found that high-frequency and phase information can be helpful for RA detection (e.g. in [5, 6]). Popular input features that emerged include linear frequency cepstral coefficients (LFCC) and group delay (GD) grams. In recent years, input features derived from shorter handcrafted processing pipelines, such as the log power magnitude spectra (LOGSPEC), have attracted more interest. In contrast to LFCC, LOGSPEC preserves much more of the information present in the original raw signal and thus relies on deep neural networks (DNNs) as powerful feature extractors [9, 10, 11, 12, 13]. Overall, there is currently no conclusive consensus about the best input feature for RA detection.
As the quality of recording and replaying devices improves, detecting the difference between genuine and spoofed audio is becoming more difficult. Thus, it becomes necessary to improve the discriminability and generalizability of RA detection systems. Besides common regularization techniques like data augmentation and Dropout (cf. with [10, 13]), multiple teams have used discriminative loss functions and multi-task learning (MTL) for better feature discrimination and generalization (cf. with [9, 12, 15]).
Siamese Neural Networks (SNN) have been shown to significantly improve the discriminability and generalizability of models. In this paper, we propose to use SNN in an MTL setting for RA detection. More generally, we investigate to what extent adding discriminative loss functions in an MTL setting can improve the performance of RA detection systems on the ASVspoof 2019 challenge Physical Access (PA) data. The analysis is conducted on multiple input features. We ensure that none of the systems relies on additional data or labels and that all settings reflect a real-world deployment. Our main contributions include: 1) Proposal of SNN in an MTL setting for improved discriminability and generalizability of RA detection systems; 2) Extensive analysis of discriminative loss functions on multiple input features; 3) Enhancement of a popular architecture for RA detection with second-order statistics pooling; 4) Combination of reconstruction loss (ReL) with SNN in an MTL setting.
2 Related Work
Convolutional neural networks (CNNs) and especially deep residual neural networks (ResNets) have yielded state-of-the-art performance on the ASVspoof 2019 PA data set [9, 10]. To deal with a much smaller data set than the one ResNet was originally designed for, [9, 10, 15] significantly reduce the size of their models by scaling down the number of kernels employed in each of the CNN layers. A key component in the architecture of their models is the projection of ResNet's three-dimensional tensor output to a one-dimensional vector for the final binary classification. One approach is to process the tensor along the time dimension with a recurrent layer and output the last hidden state. A simpler and apparently more effective approach is to use a global average pooling (GAP) layer instead [9, 10]. Given the success of ResNet with GAP, we use this architecture as our baseline in this study. In other fields of research it has been shown that using second-order statistics in addition to first-order statistics yields better feature embeddings for utterance-level classification tasks. This led us to extend the GAP layer to additionally perform variance pooling.
MTL has been applied for RA detection in the form of center loss (CL), which has been shown to greatly improve the discriminability of a model. CL comprises the cross-entropy (CE) loss and the intra-class variance loss of the feature embeddings, weighted by a hyperparameter that controls the intra-class compactness. SNN are known to significantly improve the discriminability and generalizability of a model and have been found to be effective for similarity assessment in computer vision. By using a pair of input features during training, SNN simultaneously increase the inter-class variance of the embedded input features while decreasing their intra-class variance. Since CL can be seen as a special case of SNN (the centroid used in CL can be seen as one of the two inputs in the pair used for SNN), SNN are expected to improve the discriminability of the model even further. This inspired us to propose SNN in an MTL setting for RA detection.
Another loss function, which is easily applicable in the MTL setting, is ReL. ReL is an unsupervised loss function and is usually employed in autoencoders to improve the network’s ability to maintain the most distinctive information about the input features in compressed form. When added to a standard CE loss function, ReL can act as an effective regularizer by encouraging the network to learn robust feature embeddings.
3 Proposed Approach
3.1 Audio Preprocessing & Feature Extraction
In a real-world application, the utterance input can be considered a continuous buffer of audio. We set the buffer size to a fixed number of seconds to keep the audio processing step simple and easy to deploy. Therefore, all utterances are cut or zero-padded to this maximum length, and only utterance-level input is considered.
The models are tested on three input features: linear frequency filterbank features (LFBANK), LOGSPEC and GD grams. LFBANK corresponds to the conventionally used LFCC features without the discrete cosine decorrelation step. We chose to leave out this decorrelation step because neural networks are known to act as excellent decorrelators, and it did not yield any reasonable results in our experiments. For all input features, the short-time Fourier transform employed a window size of 50 ms and a window shift of 15 ms. LFBANK subsequently applies 80 filters without any delta coefficients. The input dimensions for GD gram/LOGSPEC and LFBANK follow from these settings.
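For illustration, LFBANK extraction with the window settings above might be sketched in plain NumPy as follows. The Hamming window, the FFT size, and the triangular filter shape are assumptions not stated in the text; only the 50 ms window, 15 ms shift, 80 filters, and the linear (non-mel) spacing come from the description above.

```python
import numpy as np

def lfbank(signal, sr=16000, win_ms=50, hop_ms=15, n_filters=80, eps=1e-10):
    """Linear-frequency log filterbank features: LFCC without the final DCT."""
    win = int(sr * win_ms / 1000)        # 50 ms window -> 800 samples at 16 kHz
    hop = int(sr * hop_ms / 1000)        # 15 ms shift  -> 240 samples
    assert len(signal) >= win, "signal must cover at least one window"
    n_fft = 1 << (win - 1).bit_length()  # next power of two (assumption)
    # Frame the signal and apply a Hamming window (assumption).
    n_frames = 1 + (len(signal) - win) // hop
    frames = np.stack([signal[i * hop:i * hop + win] for i in range(n_frames)])
    frames = frames * np.hamming(win)
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2      # (frames, n_fft//2 + 1)
    # Triangular filters spaced LINEARLY in frequency (no mel warping).
    centers = np.linspace(0, n_fft // 2, n_filters + 2)
    bins = np.arange(n_fft // 2 + 1)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for m in range(1, n_filters + 1):
        l, c, r = centers[m - 1], centers[m], centers[m + 1]
        fbank[m - 1] = np.clip(np.minimum((bins - l) / (c - l),
                                          (r - bins) / (r - c)), 0, None)
    return np.log(power @ fbank.T + eps)                 # (frames, n_filters)
```

Omitting the DCT step keeps the 80 filterbank channels correlated, which is what the text argues the network can decorrelate on its own.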
3.2 ResNet for Replay Attack Detection
Similar to prior work, the RA detection system is built upon a "thin" 34-layer ResNet, which is presented in detail in Table 1. The ResNet blocks (i.e. Res1 - Res4) employ the "full pre-activation" residual unit. Due to differences in the input dimensions between LOGSPEC/GD gram and LFBANK, slightly different stride kernels are used (cf. with Table 1).
The ResNet is followed by a GAP layer. Extending GAP to the retrieval of second-order statistics, we define a global average and variance pooling (GAVP) layer that extracts both the mean and the variance from all feature maps of ResNet's last CNN layer.
To keep the number of parameters constant, the pooling layer is followed by a dense layer whose size depends on whether GAP or GAVP is employed. The final dense layer (called "Out" in Fig. 1) following the GAP or GAVP layer has a single output neuron.
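A minimal NumPy sketch of the GAVP layer described above, assuming the per-channel mean and variance are simply concatenated (the exact layout is not specified in the text):

```python
import numpy as np

def gavp(feature_maps):
    """Global average-and-variance pooling over a (H, W, C) CNN output:
    concatenates the per-channel mean and variance into a 2C vector.
    Plain GAP would return only the C-dimensional mean."""
    mean = feature_maps.mean(axis=(0, 1))
    var = feature_maps.var(axis=(0, 1))
    return np.concatenate([mean, var])
```

Because GAVP doubles the pooled dimension from C to 2C, the following dense layer must shrink accordingly to keep the parameter count matched with the GAP variant, as the text notes.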
3.3 Multi-Task Learning with Siamese Neural Networks
SNN consist of two sub-networks that share the same set of trainable parameters, so that a pair of input features is used as input during training. Besides computing the conventional CE loss for each sub-network individually, a distance loss between the feature embeddings of the two sub-networks is calculated (cf. with Fig. 1). A common choice for this distance loss is the hinge loss:

L_dist = max(0, m - y · d(e_1, e_2)),    (1)

where m represents the margin, y equals 1 if the input feature labels are equal and -1 otherwise, and d is a distance metric of choice, for which we empirically found the cosine similarity to work best. During training, SNN then minimize the sum of the two CE losses and the distance loss, where each loss contributes with equal weight.
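A plausible concrete form of the hinge distance loss of Eq. (1), assuming y = +1 for same-class pairs and -1 otherwise; the margin value below is a placeholder, not one reported in the text:

```python
import numpy as np

def cosine_similarity(e1, e2, eps=1e-8):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2) + eps))

def siamese_hinge_loss(e1, e2, same_label, margin=0.5):
    """Hinge distance loss on an embedding pair: same-class pairs are pushed
    towards cosine similarity >= margin, different-class pairs towards
    similarity <= -margin; inside the margin the gradient vanishes."""
    y = 1.0 if same_label else -1.0
    return max(0.0, margin - y * cosine_similarity(e1, e2))
```

With this formulation, a well-separated pair contributes zero loss, so training focuses on pairs that still violate the margin.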
Optionally, two ReLs, one for each sub-network, can be added to the overall loss. In this case a shared decoder (with a negligible number of parameters) is used to reconstruct the pair of input features from the outputs of the last convolutional layer:

L_rec,i = || X_i - X̂_i ||_F,    (2)

where || · ||_F is the Frobenius norm. The decoder consists of three consecutive deconvolution layers, each of which upsamples its input with a strided kernel. The decoder outputs are mean-pooled over their feature maps and finally zero-padded to have exactly the same dimension as their respective input feature matrices. The complete architecture of SNN is illustrated in Fig. 1.
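The Frobenius-norm reconstruction loss amounts to a one-liner; whether the norm is squared was not recoverable from the text, so the unsquared form is used here (the squared variant is an equally common choice):

```python
import numpy as np

def reconstruction_loss(x, x_hat):
    """Frobenius norm of the difference between an input feature matrix
    and its decoder reconstruction, per Eq. (2)."""
    return float(np.linalg.norm(x - x_hat, ord='fro'))
```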
As can be noted from Eq. (1), the space of possible training samples for SNN includes all pair-wise combinations of the training set with itself, which is prohibitively large. A simple remedy taken in this study is to control the dataset's size via a hyperparameter numSamples. Before every epoch, a dataset of pairs is created by the following simple but effective sampling procedure:
First, no data sample in the genuine (or spoofed, respectively) subset is used twice before every other sample of that subset has been sampled at least once; by setting numSamples accordingly, this almost certainly ensures that all data samples are used in each epoch. Second, the space of possible sample pairs is explored widely by shuffling the order of both subsets before every epoch. Third, by sampling from the genuine or the spoofed subset with even probability, the smaller of the two subsets is upsampled so that the pair dataset is balanced.
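The three-step procedure above might be sketched like this. The two pools and the 50/50 coin flip come straight from the text; the reshuffle-on-exhaustion mechanism is an assumed implementation detail that realizes "no sample twice before every other sample once":

```python
import random

def sample_pairs(genuine, spoofed, num_samples, rng=None):
    """Build one epoch's balanced Siamese pair dataset: each side of a pair
    is drawn from the genuine or spoofed pool with even probability, so the
    smaller pool is upsampled. Within a pool, samples are drawn without
    replacement until the pool is exhausted, then it is reshuffled."""
    rng = rng or random.Random(0)
    pools = {'g': [], 's': []}

    def draw(pool, key):
        if not pools[key]:                       # pool exhausted -> reshuffle
            pools[key] = rng.sample(pool, len(pool))
        return pools[key].pop()

    pairs = []
    for _ in range(num_samples):
        left = draw(genuine, 'g') if rng.random() < 0.5 else draw(spoofed, 's')
        right = draw(genuine, 'g') if rng.random() < 0.5 else draw(spoofed, 's')
        pairs.append((left, right))
    return pairs
```

Calling this once per epoch, with fresh shuffles, lets successive epochs explore different regions of the quadratic pair space while keeping each epoch's size fixed at numSamples.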
4 Experimental Setup and Results
In all experiments, the models were evaluated on the PA subset of the ASVspoof 2019 corpus. PA consists of 48600 spoofed and 5400 genuine utterances in the training (train) set, 24300 spoofed and 5400 genuine utterances in the development (dev) set, and 116640 spoofed and 18090 genuine utterances in the evaluation (eval) set. The models were optimized by Adam with a weight decay that was tuned for each experiment separately. Training was stopped if the equal error rate (EER) on the dev set did not improve over 15 consecutive epochs. The models were implemented with the Keras framework.
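The early-stopping criterion relies on the dev-set EER. Since the text does not specify how the EER is computed, the threshold-sweep sketch below is an assumption:

```python
import numpy as np

def equal_error_rate(scores_genuine, scores_spoof):
    """EER: sweep all score thresholds and return the operating point where
    the false-rejection rate (genuine scored below the threshold) is closest
    to the false-acceptance rate (spoof scored at or above it)."""
    thresholds = np.sort(np.concatenate([scores_genuine, scores_spoof]))
    frr = np.array([(scores_genuine < t).mean() for t in thresholds])
    far = np.array([(scores_spoof >= t).mean() for t in thresholds])
    idx = np.argmin(np.abs(frr - far))
    return (frr[idx] + far[idx]) / 2.0
```

Toolkit implementations typically interpolate the DET curve for an exact crossing point; averaging the two rates at the closest threshold is a common simple approximation.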
First, we analysed the effect of the audio input length on the performance of a simplified ResNet model using LFBANK on the eval set. Increasing the input length from 5.0s to 6.5s and eventually to 8.5s improved the EER from 9.31 % to 6.75 % and finally to 6.22 %. In this experiment, we simply cut or padded the end of the audio to the specified length. Existing literature suggests that cues in the leading and trailing silence can explain this better performance. Considering these findings and our practical application, we decided to use an 8.5s input length and to cut and pad at the end of the audio from now on (so that we do not rely on voice activity detection in practical applications).
We then analyse the proposed model architecture with GAP. As one baseline, the model was trained using the simple CE loss. As another baseline, the model was trained using the CL loss, with the intra-class weighting hyperparameter tuned to yield the best results. The baselines are compared to SNN as described in Section 3.3; for SNN, numSamples and the margin were kept fixed, and all training setups used the same batch size. The CE, CL and SNN systems were all evaluated using LFBANK, LOGSPEC and GD gram as input. In a final step, the systems were systematically fused by means of logistic regression with the Bosaris toolkit, using the dev data set for calibration.
Due to the 9-to-1 data imbalance in the training set, we adopted a weighted CE loss, with the CE weight for spoofed utterance input scaled down accordingly. To improve training stability, the bias of the output neuron was initialized according to the class prior (following the prior-based initialization used for focal loss) whenever weighted CE was used. The results can be seen in Table 2.
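A sketch of the weighted CE and the prior-based bias initialization. The actual weight and bias values are not given in the text, so the w_spoof = 1/9 and prior = 0.1 used in the test below are illustrative placeholders matching the 9:1 imbalance:

```python
import numpy as np

def weighted_bce(p_genuine, is_genuine, w_spoof):
    """Weighted binary cross-entropy for one example: spoofed examples are
    down-weighted by w_spoof to counter the spoof/genuine class imbalance
    (w_spoof = 1/9 would exactly equalize a 9:1 imbalance)."""
    if is_genuine:
        return -np.log(p_genuine)
    return -w_spoof * np.log(1.0 - p_genuine)

def prior_bias_init(p_genuine_prior):
    """Initialize the output neuron's bias so the initial sigmoid output
    equals the class prior, avoiding a large loss spike at the start of
    training (the scheme popularized by the focal-loss paper)."""
    return float(np.log(p_genuine_prior / (1.0 - p_genuine_prior)))
```

Starting the sigmoid output at the prior means the first gradient steps correct the decision boundary rather than the output scale, which is the training-stability effect described above.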
It can be seen that both MTL models, CL and SNN, outperform the single-task learning CE model by 23.4 % and 31.5 % relative EER respectively, averaged over all input features. SNN further outperforms CL by a relative margin of 10.6 % EER. We also observed that during training, the MTL setups converged faster and seemed to generalize better, as the EER on the dev data set decreased much more smoothly.
In the second experiment, we took the best-performing model for LOGSPEC input, SNN, as our new baseline. First, we analysed the effect of extracting second-order in addition to first-order statistics from the CNN feature maps by replacing the GAP layer with a GAVP layer. Second, we extended SNN with two additional reconstruction loss functions according to Eq. (2), for both the GAP and the GAVP variant. Empirically, the reconstruction losses were found to be much smaller than the CE and distance losses, so they are scaled up by a weighting factor. Because we experienced RAM overflow issues with the ReL setups, the batch size used in training was reduced and numSamples adjusted to keep the same number of steps per epoch as before (more details can be found at https://www.comet.ml/patrickvonplaten/anti-spoof). The results are shown in Table 3.
It can be seen that both using the GAVP layer and adding ReL give a significant performance boost. Consequently, the best single-system performance on the eval data set is achieved by SNN with GAVP and ReL, which outperforms the SNN baseline by 30.5 % relative EER while having the same number of parameters.
5 Conclusion
We have thoroughly analysed discriminative feature learning in an MTL setting for RA detection and found that SNN significantly outperform the baseline on multiple input features. We explain this improvement as follows. First, SNN greatly improve the discriminability of the model by explicitly increasing the inter-class variance of the feature embeddings. Second, because SNN sample from a very large pool of possible sample pairs, each giving a different gradient signal, the model regularizes much better during training. We then further improved upon SNN by adding ReL and replacing GAP with GAVP. The resulting additional gain in single-system EER can be attributed to the better regularization induced by ReL and the more discriminative feature embeddings obtained from extracting both first- and second-order statistics.
-  Zhizheng Wu, Tomi Kinnunen, Nicholas Evans, Junichi Yamagishi, Cemal Hanilçi, Md Sahidullah, and Aleksandr Sizov, “Asvspoof 2015: the first automatic speaker verification spoofing and countermeasures challenge,” in Sixteenth Annual Conference of the International Speech Communication Association, 2015.
-  Tomi Kinnunen, Md Sahidullah, Héctor Delgado, Massimiliano Todisco, Nicholas Evans, Junichi Yamagishi, and Kong Aik Lee, “The asvspoof 2017 challenge: Assessing the limits of replay spoofing attack detection,” ISCA (the International Speech Communication Association), 2017.
-  Massimiliano Todisco, Xin Wang, Ville Vestman, Md Sahidullah, Hector Delgado, Andreas Nautsch, Junichi Yamagishi, Nicholas Evans, Tomi Kinnunen, and Kong Aik Lee, “Asvspoof 2019: Future horizons in spoofed and fake audio detection,” arXiv preprint arXiv:1904.05441, 2019.
-  Hemant A Patil and Madhu R Kamble, “A survey on replay attack detection for automatic speaker verification (asv) system,” in 2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). IEEE, 2018, pp. 1047–1053.
-  Parav Nagarsheth, Elie Khoury, Kailash Patil, and Matt Garland, “Replay attack detection using dnn for channel discrimination.,” in Interspeech, 2017, pp. 97–101.
-  Francis Tom, Mohit Jain, and Prasenjit Dey, “End-to-end audio replay attack detection using deep convolutional networks with attention.,” in Interspeech, 2018, pp. 681–685.
-  Md Sahidullah, Tomi Kinnunen, and Cemal Hanilçi, “A comparison of features for synthetic speech detection,” ISCA (the International Speech Communication Association), 2015.
-  Rajesh M Hegde, Hema A Murthy, and Venkata Ramana Rao Gadde, “Significance of the modified group delay feature in speech recognition,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, no. 1, pp. 190–202, 2006.
-  Cheng-I Lai, Nanxin Chen, Jesús Villalba, and Najim Dehak, “Assert: Anti-spoofing with squeeze-excitation and residual networks,” arXiv preprint arXiv:1904.01120, 2019.
-  Weicheng Cai, Haiwei Wu, Danwei Cai, and Ming Li, “The dku replay detection system for the asvspoof 2019 challenge: On data augmentation, feature representation, classification, and fusion,” arXiv preprint arXiv:1907.02663, 2019.
-  Alejandro Gomez-Alanis, Antonio M Peinado, Jose A Gonzalez, and Angel Manuel Gomez, “A gated recurrent convolutional neural network for robust spoofing detection,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2019.
-  Galina Lavrentyeva, Sergey Novoselov, Andzhukaev Tseren, Marina Volkova, Artem Gorlanov, and Alexandr Kozlov, “Stc antispoofing systems for the asvspoof2019 challenge,” arXiv preprint arXiv:1904.05576, 2019.
-  Hossein Zeinali, Themos Stafylakis, Georgia Athanasopoulou, Johan Rohdin, Ioannis Gkinis, Lukáš Burget, Jan Černockỳ, et al., “Detecting spoofing attacks using vgg and sincnet: but-omilia submission to asvspoof 2019 challenge,” arXiv preprint arXiv:1907.12908, 2019.
-  Rich Caruana, “Multitask learning,” Machine learning, vol. 28, no. 1, pp. 41–75, 1997.
-  Jee-weon Jung, Hye-jin Shim, Hee-Soo Heo, and Ha-Jin Yu, “Replay attack detection with complementary high-resolution information using end-to-end dnn for the asvspoof 2019 challenge,” arXiv preprint arXiv:1904.10134, 2019.
-  Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard Säckinger, and Roopak Shah, “Signature verification using a “siamese” time delay neural network,” in Advances in neural information processing systems, 1994, pp. 737–744.
-  Gregory Koch, Richard Zemel, and Ruslan Salakhutdinov, “Siamese neural networks for one-shot image recognition,” in ICML deep learning workshop, 2015, vol. 2.
-  Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.
-  Joao Carreira, Rui Caseiro, Jorge Batista, and Cristian Sminchisescu, “Semantic segmentation with second-order pooling,” in European Conference on Computer Vision. Springer, 2012, pp. 430–443.
-  Yandong Wen, Kaipeng Zhang, Zhifeng Li, and Yu Qiao, “A discriminative feature learning approach for deep face recognition,” in European conference on computer vision. Springer, 2016, pp. 499–515.
-  Sean Bell and Kavita Bala, “Learning visual similarity for product design with convolutional neural networks,” ACM Transactions on Graphics (TOG), vol. 34, no. 4, pp. 98, 2015.
-  F. Zhuang, D. Luo, X. Jin, H. Xiong, P. Luo, and Q. He, “Representation learning via semi-supervised autoencoder for multi-task learning,” in 2015 IEEE International Conference on Data Mining, 2015.
-  Bhusan Chettri, Daniel Stoller, Veronica Morfi, Marco A Martínez Ramírez, Emmanouil Benetos, and Bob L Sturm, “Ensemble models for spoofing detection in automatic speaker verification,” arXiv preprint arXiv:1904.04589, 2019.
-  Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, “Identity mappings in deep residual networks,” in European conference on computer vision. Springer, 2016, pp. 630–645.
-  Lorenzo Rosasco, Ernesto De Vito, Andrea Caponnetto, Michele Piana, and Alessandro Verri, “Are loss functions all the same?,” Neural Computation, vol. 16, no. 5, pp. 1063–1076, 2004.
-  François Chollet et al., “Keras,” https://keras.io, 2015.
-  Niko Brümmer and Edward De Villiers, “The bosaris toolkit: Theory, algorithms and code for surviving the new dcf,” arXiv preprint arXiv:1304.2865, 2013.
-  Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár, “Focal loss for dense object detection,” in Proceedings of the IEEE international conference on computer vision, 2017, pp. 2980–2988.