Biometric authentication is essential for security in both the physical world and cyberspace. Among various biometric traits, such as the iris, palmprint, fingerprint and face, the voiceprint has received much attention recently, partly due to its convenience and non-intrusiveness. After decades of research, speaker recognition (SRE) by voiceprint has achieved remarkable progress [1, 2, 3, 4].
Most present SRE research focuses on ‘regular speech’, i.e., speech intentionally produced by people and involving clear linguistic content. For this type of speech, rich speaker information can be obtained from both vocal fold vibration and vocal tract modulation, so speaker identifiability is generally acceptable. Many algorithms have been proposed to perform SRE with this kind of speech, including the statistical model approach, which has gained the most popularity [5, 6, 7], and the neural model approach, which emerged recently and has attracted much interest [8, 9, 10].
Despite the significant progress achieved on regular speech, research on the non-linguistic part of speech signals is still very limited. For example, we may cough and laugh when talking to others, and may ‘tsk-tsk’ (the sound people make with the tongue when they disapprove of something) or ‘hmm’ (the sound people make to express doubt or uncertainty) when listening to others. These events are produced by different personal habits and contain little linguistic information. However, they do convey information about speakers: for example, we can recognize a familiar person by even a laugh. As these non-linguistic and non-regular events occur ubiquitously in our conversations, we call them ‘trivial events’. Typical trivial events include the cough, the laugh, the ‘ahem’ (a short cough made by somebody trying to get attention), etc.
A key value of SRE on trivial events is that these events are resistant to potential disguise. In forensic examination, for example, suspects may intentionally change their voices to counteract voiceprint testing, which largely fools human listeners and causes failures in existing SRE systems. However, trivial events are much harder for a speaker to counterfeit, which makes it possible to use them to identify the true speaker behind disguised speech. We will show how disguised speech deceives both humans and state-of-the-art SRE techniques in Section 5.
An interesting question is: which type of trivial event conveys more speaker information? Moreover, who is more apt at identifying speakers from these events, humans or machines? In previous work [11], we studied three trivial events: the cough, the laugh and ‘wei’ (‘Hello’ in Chinese), and found that with a convolutional & time-delay deep neural network (CT-DNN), an unexpectedly high recognition accuracy can be obtained: the equal error rate (EER) reaches as low as 11% with a cough of 0.3 seconds. This good performance is largely attributed to the deep speaker feature learning technique that we proposed recently [10].
In this paper, we extend the previous work [11] in several aspects: (1) we extend the study to six types of trivial events, i.e., cough, laugh, ‘hmm’, ‘tsk-tsk’, ‘ahem’ and sniff; (2) we collect a trivial event speech database and release it for public usage; (3) we compare the performance of human listeners and machines.
The organization of this paper is as follows: the deep feature learning approach is briefly described in Section 3, and the trivial event speech database CSLT-TRIVIAL-I is presented in Section 4. The performance of the human and machine tests is reported in Section 5, and conclusions and discussions are presented in Section 6.
2 Related work
Nandwana and Hansen [12, 13] analyzed the acoustic properties of scream speech and studied SRE performance on this type of speech using a recognition system based on the Gaussian mixture model-universal background model. A significant performance reduction was reported compared with the performance on regular speech.
3 Deep feature learning
Most existing speaker recognition techniques are based on statistical models, e.g., the Gaussian mixture model-universal background model (GMM-UBM) framework [5] and the subsequent subspace models, such as the joint factor analysis approach [6] and the i-vector model [7, 16]. Additional gains have been obtained with discriminative models and various normalization techniques, e.g., the SVM model [17] and PLDA [18]. A shared property of these statistical methods is that they use raw acoustic features, e.g., the popular Mel frequency cepstral coefficients (MFCCs), and rely on long speech segments to discover the distributional patterns of individual speakers. Since most trivial events are short, these statistical models are not well suited to represent them.
The neural model approach has gained much attention recently. Compared to the statistical model approach, the neural approach focuses on learning frame-level speaker features, and is hence more suitable for dealing with short speech segments, e.g., trivial events. This approach was first proposed by Ehsan et al. [8], where a regular deep neural network (DNN) was trained to discriminate among the speakers in the training data, conditioned on the input speech frames. Frame-level features are then extracted from the last hidden layer, and an utterance-level representation, called the ‘d-vector’, is derived by averaging the frame-level features. Recently, we proposed a new convolutional & time-delay DNN (CT-DNN) structure [10], by which the quality of the learned speaker features is significantly improved. In particular, we found that the new features achieve remarkable performance with short speech segments. This property was employed to recognize two trivial events (cough and laugh) in our previous study [11], and good performance was obtained. More details about the CT-DNN model, including the architecture and optimization methods, can be found in [10]. The training recipe is also available online (http://project.cslt.org).
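The d-vector derivation described above (frame-level features averaged into an utterance-level representation, scored by cosine distance) can be sketched as follows. The feature dimension, frame counts, and random "features" are toy stand-ins for illustration; the real features come from the last hidden layer of the trained network.

```python
import numpy as np

def d_vector(frame_features):
    """Average frame-level speaker features into an utterance-level
    d-vector, then length-normalize it for cosine scoring."""
    v = frame_features.mean(axis=0)
    return v / np.linalg.norm(v)

def cosine_score(enroll, test):
    """Cosine similarity between two length-normalized d-vectors."""
    return float(np.dot(enroll, test))

# Toy trial: random stand-ins for last-hidden-layer features.
rng = np.random.default_rng(0)
utt_a = rng.normal(size=(300, 400))  # a longer utterance: 300 frames
utt_b = rng.normal(size=(30, 400))   # a short trivial event: 30 frames
score = cosine_score(d_vector(utt_a), d_vector(utt_b))
print(score)
```

Because the utterance representation is a simple average, the same scoring applies regardless of segment length, which is what makes the approach attractive for very short events.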
In this paper, the deep feature learning approach will be mainly used to recognize and analyze more trivial events, and the performance will be compared with that obtained by human listeners.
4 Database design
An appropriate speech corpus is the first concern before any analysis of trivial speech events can be conducted. Unfortunately, few trivial event databases are publicly available at present. The only exception is the UT-NonSpeech corpus, which was collected for scream detection and recognition [12, 13], but it contains only screams, coughs and whistles. As we are more interested in ubiquitous events that are not easily changed by speakers intentionally, a more comprehensive database is required. Therefore, we decided to construct our own database and release it for public usage. This database is denoted by CSLT-TRIVIAL-I.
To collect the data, we designed a mobile application and distributed it to people who agreed to participate. The application asked the participants to utter the six types of trivial events in a random order, with each event occurring several times at random. The random order ensures a reasonable variance of the recordings for each event. The sampling rate of the recordings was set to kHz and the precision of the samples was bits.
We received recordings from participants. The age of the participants ranges from to , and most of them are between and . These recordings were manually checked, and recordings with clear channel effects (noise, background babbling and echo) were deleted. Finally, the speech segments were purged so that only a single event (e.g., one cough or one laugh) was retained in each segment. After this manual check, recordings from persons remained, with to segments for each event per person. Table 1 presents the data profile of the purged database.
Table 1: Spks | Total Utts | Utts/Spk | Avg. duration (s)
Besides the trivial event database, we also collected a disguise database. The goal of this database is to test how human listeners and existing SRE techniques are affected by speakers’ intentional disguise. This provides a better understanding of the value of our study on trivial events.
The same application used for collecting CSLT-TRIVIAL-I was used to collect the recordings for the disguise database. Before the recording, the participants were instructed to try their best to counterfeit their voices when recording the disguised speech. During the recording, the application asked the participants to pronounce sentences, each involving to words. Each sentence was spoken twice, once in the normal style and once with intentional disguise. In the manual check, segments with strong channel effects were removed. After the manual check, recordings from speakers remained. This database is denoted by CSLT-DISGUISE-I. Table 2 presents the data profile in detail.
CSLT-TRIVIAL-I and CSLT-DISGUISE-I have been released online (http://data.cslt.org). Users can download them freely and use them under the Apache License, Version 2.0.
Table 2: Spks | Total Utts | Utts/Spk | Avg. duration (s)
5 Experiments
This section reports our experiments. We first present some details of the two SRE systems we built for this investigation, one based on the i-vector model and the other based on deep speaker feature learning (denoted as the d-vector system). Then, the performance of the two SRE systems on CSLT-TRIVIAL-I is reported and compared with the performance of human listeners. Finally, a disguise detection experiment conducted on CSLT-DISGUISE-I is reported, which demonstrates how speech disguise fools both humans and existing SRE systems.
5.1 SRE systems
For the purpose of comparison, we built two SRE systems, an i-vector system and a d-vector system. For the i-vector system, the input feature involves -dimensional MFCCs plus the log energy, augmented by their first- and second-order derivatives. The UBM is composed of Gaussian components, and the dimensionality of the i-vector space is . Three scoring methods are used: cosine distance, cosine distance after LDA projection, and PLDA. The dimensionality of the LDA projection space is . When PLDA is used for scoring, the i-vectors are length-normalized. The system is trained using the Kaldi SRE08 recipe [19].
For the d-vector system, the input feature involves -dimensional filterbanks (Fbanks). A symmetric -frame window is used to splice each frame with its neighbors, resulting in frames in total. The number of output units is , corresponding to the number of speakers in the training data. The frame-level speaker features are extracted from the last hidden layer, and the d-vector of each utterance is derived by averaging all its frame-level speaker features. The same scoring methods as in the i-vector system are used in the test, including cosine distance, LDA and PLDA.
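The symmetric context window used for the network input can be sketched as below. The window size (4 frames on each side) and the 40-dimensional Fbank features are assumptions for illustration only, since the exact figures are lost from the text; edge frames are repeated as padding, one common convention.

```python
import numpy as np

def splice_frames(feats, left=4, right=4):
    """Stack each frame with its symmetric context window of
    left + 1 + right frames, repeating the edge frames as padding."""
    padded = np.concatenate([np.repeat(feats[:1], left, axis=0),
                             feats,
                             np.repeat(feats[-1:], right, axis=0)])
    win = left + right + 1
    return np.stack([padded[i:i + win].reshape(-1)
                     for i in range(len(feats))])

fbanks = np.random.default_rng(1).normal(size=(100, 40))  # 100 frames, 40-dim
spliced = splice_frames(fbanks)
print(spliced.shape)  # (100, 360)
```

Each spliced vector then serves as one network input, so the model sees a short local context rather than a single isolated frame.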
The Speech-ocean Datatang database is used as the training set; it was recorded over telephone channels and the sampling rate is kHz. The database consists of speakers, with Chinese utterances. This training set is used to train the UBM, the T matrix, and the LDA/PLDA models of the i-vector system, as well as the CT-DNN model of the d-vector system.
5.2 SRE on trivial events
In the first experiment, we evaluate the SRE performance on trivial events, by both human listeners and the two SRE systems. The CSLT-TRIVIAL-I database is used to conduct the test. It consists of speakers and six types of trivial events, with about segments per event type per speaker. The original recordings are in kHz, which matches the Speech-ocean Datatang database.
During the human test, the listener is presented with YES/NO questions, questions per event type. For each question, the listener is asked to listen to two speech segments that are randomly sampled from the same event type, with a probability of 50% of being from the same speaker. Listeners are allowed to perform the test multiple times. We collected test sessions, amounting to trials in total. The performance is evaluated in terms of the detection error rate (DER), which is the proportion of incorrect answers among all the trials, including both false alarms and false rejections. The results are shown in Table 3. It can be seen that humans can tell the speaker from a very short trivial event, particularly the nasal sound ‘hmm’. For cough, laugh and ‘ahem’, humans can obtain some speaker information, but the performance is lower. For ‘tsk-tsk’ and sniff, the performance is poor, and the answers given by the listeners are almost random. This is expected to some extent, as these two types of events sound rather weak, and producing them involves little vocal fold and vocal tract activity.
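The DER defined above can be computed with a minimal sketch; the trial answers below are hypothetical, not taken from the actual test sessions.

```python
def detection_error_rate(answers, truths):
    """DER = (false alarms + false rejections) / total trials.
    answers, truths: booleans ('same speaker?') per trial."""
    false_alarms = sum(a and not t for a, t in zip(answers, truths))
    false_rejects = sum(t and not a for a, t in zip(answers, truths))
    return (false_alarms + false_rejects) / len(truths)

# Toy session: one false alarm and one false rejection in four trials.
answers = [True, False, True, False]
truths = [True, True, False, False]
der = detection_error_rate(answers, truths)
print(der)  # 0.5
```

Since same-speaker and different-speaker pairs are equally likely, random guessing yields a DER of about 50%, which is the baseline against which the listener results should be read.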
For the machine test, there are about trials for each event type. The EER results with the i-vector system and the d-vector system are reported in Table 4. It can be observed that the d-vector system outperforms the i-vector system by a large margin, confirming that the deep speaker feature learning approach is more suitable than the statistical model approach for recognizing short speech segments. Comparing different events, it can be found that ‘hmm’ conveys the most speaker information, while cough, laugh and ‘ahem’ are less informative, and ‘tsk-tsk’ and sniff are the least discriminative. All these observations are consistent with the results of the human test. Moreover, we found that for the d-vector system, the discriminative normalization approaches, LDA and PLDA, did not provide a clear advantage on ‘hmm’ and sniff. A possible reason is that there is little intra-speaker variance in these two types of events, so statistics-based discrimination is not helpful.
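For reference, the EER metric used for the machine tests can be computed from a list of trial scores as in the sketch below; it finds the operating point where the false-alarm and false-rejection rates meet. The scores and labels are toy values, not from the actual experiments.

```python
import numpy as np

def equal_error_rate(scores, labels):
    """Sweep every score as a threshold; return the mean of the
    false-alarm and false-rejection rates at their closest point."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    eer, best_gap = 1.0, 2.0
    for th in np.sort(scores):
        fa = np.mean(scores[labels == 0] >= th)  # non-targets accepted
        fr = np.mean(scores[labels == 1] < th)   # targets rejected
        if abs(fa - fr) < best_gap:
            best_gap, eer = abs(fa - fr), (fa + fr) / 2
    return float(eer)

# Toy trials: 4 target and 4 non-target scores with one overlap each.
scores = [0.9, 0.8, 0.75, 0.3, 0.7, 0.25, 0.2, 0.1]
labels = [1, 1, 1, 1, 0, 0, 0, 0]
eer = equal_error_rate(scores, labels)
print(eer)  # 0.25
```

Unlike DER, which is tied to the listeners' fixed yes/no decisions, EER is threshold-free, which is one reason the two numbers are not directly comparable.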
Comparing humans and machines, we find that the best machine system, i.e., the d-vector system, is highly competitive. Although DER and EER values are not directly comparable, the results still roughly show that on almost all types of trivial events, the d-vector system makes fewer mistakes than humans. In particular, on the events where humans perform the worst, i.e., ‘tsk-tsk’ and sniff, machines work much better. Although the listeners we invited are not professional speech scientists, and the results may be affected by the audio devices the listeners used, these results still provide strong evidence that machines can potentially do better than human beings in listening to trivial events.
5.3 Disguise detection
In the second experiment, we examine how well humans and machines can discriminate disguised speech. For the human test, the listener is presented with trials, each containing two samples from the same speaker, where one of the samples can be a disguised version. The listener is asked to tell whether the two samples are from the same speaker. To avoid any bias, the listeners are informed that some speech samples are disguised. Some trials may also involve imposter speech (not from the same speaker), but these trials are only used to inject noise into the test and are not counted in the final result. We collected trials in total, and the DER result is . This indicates that human listeners largely fail to discriminate disguised speech.
The EER results of the two SRE systems are reported in Table 5. It can be found that machines can do better than humans in discriminating disguised speech, but the error rates are still very high. Again, the d-vector system performs better than the i-vector system.
To observe the impact of speech disguise more intuitively, we plot the deep speaker features produced by the d-vector system in a 2-dimensional space using t-SNE [20]. The results are shown in Fig. 1. We can see that the discrepancy between normal and disguised speech is highly speaker-dependent: some speakers are not good voice counterfeiters, while others can disguise their voices very well.
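A t-SNE projection of this kind can be produced with scikit-learn as in the sketch below. The features here are simulated Gaussian clusters standing in for the actual d-vector features of different speakers; the dimensions and cluster count are arbitrary.

```python
# Assumes scikit-learn is installed; features are simulated stand-ins.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# 3 "speakers", 50 feature vectors each, shifted so they form clusters.
feats = np.vstack([rng.normal(loc=3.0 * i, size=(50, 40)) for i in range(3)])

# Embed the 40-dim features into 2 dimensions for plotting.
emb = TSNE(n_components=2, perplexity=10, init="pca",
           random_state=0).fit_transform(feats)
print(emb.shape)  # (150, 2)
```

Coloring the resulting 2-D points by speaker and by normal/disguised condition is what reveals the speaker-dependent discrepancy discussed above.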
6 Conclusions
In this paper, we studied and compared the performance of human listeners and machines on the speaker recognition task with trivial speech events. Our experiments on six types of trivial events demonstrated that both humans and machines can discriminate speakers to some extent from trivial events, particularly those involving clear vocal tract activity, e.g., ‘hmm’. Additionally, the deep speaker feature learning approach works much better than the conventional statistical model approach on this task, and in most cases outperforms human listeners. We also tested the performance of humans and machines on disguised speech, and found that speech disguise poses a serious challenge for both.
-  JA Unar, Woo Chaw Seng, and Almas Abbasi, “A review of biometric technology along with trends and prospects,” Pattern Recognition, vol. 47, no. 8, pp. 2673–2688, 2014.
-  SR Kodituwakku, “Biometric authentication: A review,” International Journal of Trend in Research and Development, vol. 2, no. 4, pp. 2394–9333, 2015.
-  Homayoon Beigi, Fundamentals of speaker recognition, Springer Science & Business Media, 2011.
-  John HL Hansen and Taufiq Hasan, “Speaker recognition by machines and humans: A tutorial review,” IEEE Signal processing magazine, vol. 32, no. 6, pp. 74–99, 2015.
-  Douglas A Reynolds, Thomas F Quatieri, and Robert B Dunn, “Speaker verification using adapted Gaussian mixture models,” Digital signal processing, vol. 10, no. 1-3, pp. 19–41, 2000.
-  Patrick Kenny, Gilles Boulianne, Pierre Ouellet, and Pierre Dumouchel, “Joint factor analysis versus eigenchannels in speaker recognition,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, no. 4, pp. 1435–1447, 2007.
-  Najim Dehak, Patrick J Kenny, Réda Dehak, Pierre Dumouchel, and Pierre Ouellet, “Front-end factor analysis for speaker verification,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 4, pp. 788–798, 2011.
-  Ehsan Variani, Xin Lei, Erik McDermott, Ignacio Lopez Moreno, and Javier Gonzalez-Dominguez, “Deep neural networks for small footprint text-dependent speaker verification,” in Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on. IEEE, 2014, pp. 4052–4056.
-  Georg Heigold, Ignacio Moreno, Samy Bengio, and Noam Shazeer, “End-to-end text-dependent speaker verification,” in Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on. IEEE, 2016, pp. 5115–5119.
-  Lantian Li, Yixiang Chen, Ying Shi, Zhiyuan Tang, and Dong Wang, “Deep speaker feature learning for text-independent speaker verification,” in Proc. Interspeech 2017, 2017, pp. 1542–1546.
-  Miao Zhang, Yixiang Chen, Lantian Li, and Dong Wang, “Speaker recognition with cough, laugh and “wei”,” arXiv preprint arXiv:1706.07860, 2017.
-  John HL Hansen, Mahesh Kumar Nandwana, and Navid Shokouhi, “Analysis of human scream and its impact on text-independent speaker verification,” The Journal of the Acoustical Society of America, vol. 141, no. 4, pp. 2957–2967, 2017.
-  Mahesh Kumar Nandwana and John HL Hansen, “Analysis and identification of human scream: Implications for speaker recognition,” in Proc. Interspeech 2014, 2014.
-  Xing Fan and John HL Hansen, “Speaker identification within whispered speech audio streams,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 5, pp. 1408–1421, 2011.
-  Cemal Hanilçi, Tomi Kinnunen, Rahim Saeidi, Jouni Pohjalainen, Paavo Alku, and Figen Ertas, “Speaker identification from shouted speech: Analysis and compensation,” in Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on. IEEE, 2013, pp. 8027–8031.
-  Yun Lei, Nicolas Scheffer, Luciana Ferrer, and Mitchell McLaren, “A novel scheme for speaker recognition using a phonetically-aware deep neural network,” in Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on. IEEE, 2014, pp. 1695–1699.
-  William M Campbell, Douglas E Sturim, and Douglas A Reynolds, “Support vector machines using GMM supervectors for speaker verification,” IEEE Signal Processing Letters, vol. 13, no. 5, pp. 308–311, 2006.
-  Sergey Ioffe, “Probabilistic linear discriminant analysis,” Computer Vision–ECCV 2006, pp. 531–542, 2006.
-  Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, et al., “The Kaldi speech recognition toolkit,” in IEEE 2011 Workshop on Automatic Speech Recognition and Understanding. IEEE Signal Processing Society, 2011, number EPFL-CONF-192584.
-  Laurens van der Maaten and Geoffrey Hinton, “Visualizing data using t-SNE,” Journal of Machine Learning Research, vol. 9, pp. 2579–2605, 2008.