1 Introduction
Most current speaker verification systems are composed of several separate stages. First, frame-level features that represent the short-time contents of the signal are extracted. These features are input to a deep neural network (DNN) which is trained to optimize speaker classification performance on the training dataset. A hidden layer within that DNN is then used as a signal-level feature extractor. These new features, termed speaker embeddings or x-vectors [1], are transformed using linear discriminant analysis (LDA), then mean- and variance-normalized and, finally, length-normalized. Next, probabilistic linear discriminant analysis (PLDA) is used to obtain scores for each speaker verification trial. Finally, a calibration stage is necessary to convert the scores produced by PLDA into proper log-likelihood ratios (LLRs) that can be thresholded to make decisions or used directly. This stage usually consists of an affine transformation of the scores whose parameters are trained to optimize a weighted binary cross-entropy objective, which measures the overall quality of the scores as proper LLRs. Assuming the calibration training data reflects the evaluation conditions, this procedure has been repeatedly shown to provide hard-to-beat performance on a wide range of datasets.
In a recent paper [2], we proposed an alternative backend which integrates all the steps from LDA to calibration into a single jointly-trained model. The functional form of this backend coincides with that of the standard backend described above, except for the calibration stage. In this final stage, instead of using a single set of trainable calibration parameters for all trials, these parameters are a function of vectors representing the conditions for the two sides of the speaker verification trial. These vectors are extracted from a layer in a DNN trained to predict the conditions present in a large set of training data. The transformation from condition vectors to calibration parameters is trained jointly with the rest of the model. In [2], we show that the discrimination performance of the proposed model is comparable to that of the standard backend, while the calibration performance is, in most cases, as good as or better than the best of three possible calibration models trained with different subsets of data. Overall, the proposed model achieves a very low calibration loss in most test sets.
This method, proposed in [2] and extended in this paper, is related to recent papers [1, 3] which also propose to use the binary cross-entropy as a loss function during DNN training. In [1], a DNN is used to obtain an embedding for each signal. The score for a trial is then computed using a simple function of the two embeddings with the same form as the PLDA scores, an idea that was first proposed in [4]. The parameters for the embedding extractor and the scorer are trained jointly to optimize binary cross-entropy. In [3], the authors propose to use an architecture that mimics the previous i-vector [5] pipeline for speaker verification, pre-training all parameters separately and then fine-tuning the full model to minimize binary cross-entropy. Neither of these papers shows overall system performance (i.e., including the effect of calibration), only discrimination performance. In fact, as we show in our previous paper [2], discriminatively training a PLDA-like backend as done in those papers does not suffice to achieve good generalization in terms of calibration. For this reason, we proposed to integrate the condition-aware calibration stage inside the backend, which significantly reduced calibration problems on most tested datasets. In the rest of this paper, we will call the method proposed in our previous paper condition-aware discriminative PLDA (CA-DPLDA).

Several approaches have been proposed in the speaker verification literature which take into account the signal's conditions in different ways during calibration. In some cases, the side-information was assumed to be discrete and known (or estimated separately) during testing [6, 7, 8, 9], and calibration parameters were conditioned on these discrete side-information values. The FoCal Bilinear toolkit [10] implements a version of side-information-dependent calibration where the calibrated score is a bilinear function of the scores and the side-information vector, which is assumed to be composed of numbers between 0 and 1.
More recently, we proposed an approach called trial-based calibration (TBC), where calibration parameters are trained independently for each trial using a subset of the development data [11, 12] selected using a model trained to estimate the similarity between the conditions of two samples. This approach, while successful, is quite computationally expensive and requires tuning a few different parameters in order to obtain good performance. In our proposed model, both discrete (in the form of one-hot vectors) and continuous side-information can be used, and the functional form is a generalization of all previous approaches, except for TBC. In addition, in our proposed approach, the calibration model is trained jointly with the rest of the backend parameters, while in all previous approaches the calibration step was trained separately.
The CA-DPLDA method requires a separate model, a DNN, trained to predict condition labels and used to extract condition vectors as the activations from a layer within the DNN. This requires training data labeled with information about the conditions present in each signal. While some datasets have this information available, others do not. In our work, we used as much information about the signals' conditions as was available in each of the sets used for training, which seemed to suffice. Yet, the need for a separate model has other disadvantages: the performance of the backend depends on how this model is trained, which data and labels are used to train it, which architecture is chosen, and which seeds are used for initialization. All of these hyperparameters can be optimized for the final speaker verification performance after training the CA-DPLDA backend, but this is a slow and involved development process. For these reasons, we endeavored to eliminate the need for a condition-prediction model.
In this paper we propose a simplified version of the CA-DPLDA approach, which we call automatic side-information DPLDA (AS-DPLDA). In this approach, the vectors used to obtain the calibration parameters are learned along with the rest of the backend as a function of the embeddings, without the need for condition labels. We call these vectors side-information vectors rather than condition vectors since they do not necessarily reflect the conditions present in the signals, given that they are not trained with this goal but only with the goal of optimizing speaker verification performance (i.e., to minimize cross-entropy for the binary speaker verification task).
The contributions of the current paper are as follows: (1) we propose a simplified version of the method introduced in [2] which does not require externally-computed condition vectors and show that it performs on par with the original method; (2) we compare the proposed method's performance with that of TBC [12] and show that our current method outperforms TBC, while also being orders of magnitude faster to run; (3) we provide an analysis of the vectors used to compute calibration parameters for both the CA-DPLDA and AS-DPLDA methods; and (4) we show the effect that the initialization method, the seed, and the number of epochs have on the performance of the method. In summary, this paper is an extension of our previous paper [2], proposing a simplified version of the method introduced in that paper and presenting a more detailed analysis of different aspects of the method.
2 Standard PLDA-based Backend
Most state-of-the-art speaker verification systems consist of an embedding extraction stage followed by a PLDA-based backend. The PLDA-based backend is in itself composed of several stages. First, linear discriminant analysis is applied to reduce the dimension of the embeddings while emphasizing speaker information and reducing other irrelevant information. Then, each transformed dimension is mean- and variance-normalized (MVN) and the resulting vectors are length-normalized. Finally, PLDA is used to compute a score for each trial. While the training procedure for PLDA is somewhat involved and requires the use of an expectation-maximization algorithm, once parameters have been trained, scoring is done with a simple function of the two embeddings involved in the trial (see [13] for a derivation).

To summarize, the set of equations required to go from two individual embeddings, $x_1$ and $x_2$, to a score for the trial are:

$\hat{x}_i = \mathrm{LN}(W^T x_i - \mu), \quad i \in \{1, 2\}$  (1)

$s = \hat{x}_1^T \Lambda \hat{x}_2 + \hat{x}_1^T \Gamma \hat{x}_1 + \hat{x}_2^T \Gamma \hat{x}_2 + c^T (\hat{x}_1 + \hat{x}_2) + k$  (2)

where $W$ is the LDA projection matrix restricted to the first N dimensions and scaled to result in a variance of 1.0 in each dimension, $\mu$ is the global mean of the data after multiplication with $W$, $\mathrm{LN}$ performs length normalization, and $\Lambda$, $\Gamma$, $c$, and $k$ are derived from the parameters of the PLDA model using Equations (14) and (16) in [13].
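As a concrete sketch, the preprocessing and scoring functions above can be written in a few lines. This assumes pre-trained parameters `W` and `mu` for Equation (1) and `Lam`, `Gam`, `c`, `k` for Equation (2); all variable names are hypothetical.

```python
import numpy as np

def preprocess(x, W, mu):
    """Eq. (1): LDA projection and mean shift, then length normalization.
    W and mu are assumed pre-trained (W scaled for unit per-dim variance)."""
    y = W.T @ x - mu
    return y / np.linalg.norm(y)

def plda_score(x1h, x2h, Lam, Gam, c, k):
    """Eq. (2): second-order PLDA scoring function of the two
    preprocessed embeddings; symmetric when Lam is symmetric."""
    return (x1h @ Lam @ x2h
            + x1h @ Gam @ x1h + x2h @ Gam @ x2h
            + c @ (x1h + x2h) + k)
```

Note that, with a symmetric $\Lambda$, the score does not depend on which side of the trial is treated as enrollment.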
PLDA scores are given by the logarithm of the ratio between the likelihood for the hypothesis that the speakers in the two signals in the trial are the same and the likelihood of the hypothesis that the speakers are different (Equation (2) computes this value, given the PLDA model). That is, the score is defined as a log-likelihood ratio (LLR). Yet, in practice, the scores produced by PLDA are far from being proper LLRs, i.e., they do not reflect the distributions found during evaluation: a PLDA score of log(2.0) cannot be interpreted as indicating that the likelihood of the same-speaker hypothesis is two times higher than that of the different-speaker hypothesis. This is due to the fact that PLDA's assumptions do not exactly hold in practice. Hence, the LLRs produced by the model are not well-calibrated. It is possible to use the raw scores from PLDA to make speaker verification decisions by tuning a threshold on some development data for the specific application of interest. Yet, in many cases, such as in forensic applications, when no development data is available to choose a threshold, or when the operating point is not defined a priori, it is necessary for the system to output proper LLRs, which can then be thresholded using Bayes' rule for any cost of interest or used directly as standalone interpretable values. To achieve this, an additional stage of calibration is needed.
The standard procedure for calibration in speaker verification is to use linear logistic regression, which applies an affine transformation to the scores, training the parameters to minimize binary cross-entropy [14]. The objective function to be minimized is given by

$C = -\frac{P}{|T|} \sum_{i \in T} \log p_i - \frac{1-P}{|I|} \sum_{i \in I} \log(1 - p_i)$  (3)

where

$p_i = \sigma\!\left(l_i + \log\frac{P}{1-P}\right)$  (4)

$l_i = \alpha s_i + \beta$  (5)

and where $s_i$ is the score for trial $i$ given by Equation (2), $T$ and $I$ are the sets of same-speaker and different-speaker trials, $\sigma$ is the sigmoid function, $P$ is a parameter reflecting the expected prior probability for same-speaker trials, and $\alpha$ and $\beta$ are the calibration parameters, trained to minimize the quantity in Equation (3).

The top part of Figure 1 (the orange blocks) shows the stages in the standard PLDA pipeline, referencing the equation implemented in each stage. The parameters involved in these equations are all trained separately, freezing the parameters of the previous steps in order to obtain input data to train the next step.
3 Discriminative Backend with Side-Information-Dependent Calibration
In a recent paper [2] we presented a backend method with the same functional form as the PLDA backend explained in the previous section, but where all parameters are optimized jointly, in a manner similar to the one used in [3] (though, note that in this paper we only optimize jointly up to the backend stage instead of the full pipeline, as in Rohdin's paper). In this method, we first initialize all parameters in Equations (1), (2) and (5) as in the standard PLDA-based backend. Then, we fine-tune the parameters to optimize the cross-entropy using Adam optimization [15]. To this end, we define mini-batches that contain both same-speaker and different-speaker samples. This is done by randomly selecting N speakers for each mini-batch. Then, two random samples from each of those speakers are chosen. All possible trials between the 2N selected samples are used to compute the cross-entropy, after excluding all same-session target trials and different-domain impostor trials. We found that these two restrictions were important to get good calibration performance.
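The mini-batch trial construction described above can be sketched as follows. The sample metadata layout, `(sample_id, session, domain)` tuples keyed by speaker, is a hypothetical choice made for illustration:

```python
import random
from itertools import combinations

def make_minibatch_trials(samples, n_spk=64):
    """Build the trials for one mini-batch: pick n_spk speakers, two
    samples each, form all pairs, then drop same-session target trials
    and different-domain impostor trials."""
    speakers = random.sample(list(samples), n_spk)
    chosen = []
    for spk in speakers:
        for sid, session, domain in random.sample(samples[spk], 2):
            chosen.append((spk, sid, session, domain))
    targets, impostors = [], []
    for a, b in combinations(chosen, 2):
        if a[0] == b[0]:
            if a[2] != b[2]:          # exclude same-session target trials
                targets.append((a[1], b[1]))
        elif a[3] == b[3]:            # keep only same-domain impostor trials
            impostors.append((a[1], b[1]))
    return targets, impostors
```

With two samples per speaker, each selected speaker contributes exactly one candidate target trial, kept only if the two samples come from different sessions.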
In [2] we show that having a global calibration model with two trainable parameters, $\alpha$ and $\beta$, as in Equation (5), does not suffice to get good generalization in terms of calibration performance. This problem can be fixed, as usually done for the standard PLDA backend, by training a specific calibration model for each domain of interest, which requires having at least some domain-specific labeled data. In this research, though, we assume that no domain-specific data is available for system adaptation or for training a calibration model. This also means that a domain-specific decision threshold cannot be learned. Hence, we aim to design the best possible out-of-the-box system for unknown conditions, for which the scores produced can be thresholded using the theoretically optimal threshold assuming the scores are proper LLRs (see, for example, Equation (6) in [16]).
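For reference, the theoretically optimal (Bayes) threshold mentioned above depends only on the effective prior and the decision costs; a minimal sketch:

```python
import numpy as np

def bayes_threshold(p_tgt, c_miss=1.0, c_fa=1.0):
    """Bayes decision threshold on an LLR: accept the same-speaker
    hypothesis when the LLR exceeds this value. p_tgt is the prior
    probability of a target trial; c_miss and c_fa are the error costs."""
    return np.log((1 - p_tgt) * c_fa / (p_tgt * c_miss))
```

For equal costs and a prior of 0.5, the threshold is 0: any positive LLR favors the same-speaker hypothesis.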
In order to achieve this goal, in [2], we proposed to make the calibration parameters depend on the conditions of the signal by having the calibration scale and shift, $\alpha$ and $\beta$ in Equation (5), be functions of side-information vectors, $z_1$ and $z_2$, for each of the signals in a trial:

$\alpha = v_\alpha^T (z_1 + z_2) + z_1^T M_\alpha z_2 + k_\alpha$  (6)

$\beta = v_\beta^T (z_1 + z_2) + z_1^T M_\beta z_2 + k_\beta$  (7)
In our implementation, all parameters in these equations are initialized to 0, except the offsets $k_\alpha$ and $k_\beta$, which are initialized using the global calibration parameters trained using linear logistic regression. Hence, at initialization, the calibration stage coincides with the global calibration model.
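A minimal sketch of the side-information-dependent calibration of Equations (6) and (7) follows. With the `v` and `M` parameters at their zero initialization, it reduces to the global calibration parameters, matching the initialization described above:

```python
import numpy as np

def side_info_calibration(z1, z2, v_a, M_a, k_a, v_b, M_b, k_b):
    """Eqs. (6)-(7): calibration scale and offset as functions of the
    side-information vectors of the two trial sides. Using symmetric
    M matrices keeps the result invariant to swapping z1 and z2."""
    alpha = v_a @ (z1 + z2) + z1 @ M_a @ z2 + k_a
    beta = v_b @ (z1 + z2) + z1 @ M_b @ z2 + k_b
    return alpha, beta
```

The calibrated LLR for a trial is then obtained exactly as in Equation (5), with these trial-dependent values replacing the global $\alpha$ and $\beta$.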
The key component of this model is the set of vectors $z_i$, which we define to be given by

$z_i = \log \mathrm{softmax}(D c_i)$  (8)

where $c_i$ was, in our original proposed method, an additional input to the backend extracted from a bottleneck layer of a DNN trained to predict the conditions in the signal. Several other options were tested to transform $c_i$ into $z_i$: adding a bias term, using length normalization, no transformation, softmax without the logarithm, and ReLU. None of these alternatives proved better in our experiments than the logarithm of the softmax transformation. Also, the trainable transformation $D$ proved to be essential to obtain good performance: using the pre-activations (or activations) from the condition DNN directly as $z_i$ values, with or without log softmax, led to suboptimal results. In our experiments, the $D$ parameter is trained jointly with the rest of the model. This is the only parameter that is initialized randomly, using a normal distribution centered at 0.0 with a standard deviation of 0.5.
In the current work, we propose a simpler alternative to extract the side-information vectors that does not require training a separate condition-prediction DNN and, hence, does not require training data with condition labels. In this alternative, the vectors are given by an affine transform of the embeddings followed by length normalization. This affine transform is initialized using the last M dimensions of the same LDA transform used to initialize the speaker verification branch of the model (Figure 1), including mean and variance normalization. That is, we use the dimensions that are least useful for speaker verification under the LDA assumptions and apply the same procedure as the LDA stage in the speaker verification branch. After initialization, this transform, which has the form in Equation (1), is fine-tuned along with all the other parameters in the model.
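The initialization of this side-information branch can be sketched as follows, assuming a pre-trained full LDA transform and the corresponding projected-space mean (all names hypothetical); any further trainable dimensionality reduction would follow this step:

```python
import numpy as np

def make_sideinfo_extractor(lda_full, mean_full, M=5):
    """Initialize the side-information branch from the LAST M columns of
    the full LDA transform (the dimensions least useful for speaker
    discrimination under the LDA assumptions). Returns a function that
    maps an embedding to a length-normalized side-information vector."""
    W = lda_full[:, -M:]    # last M LDA directions
    mu = mean_full[-M:]     # projected-space mean for those dimensions
    def extract(x):
        z = W.T @ x - mu
        return z / np.linalg.norm(z)
    return extract
```

After this initialization, the affine transform itself is fine-tuned jointly with the rest of the backend, so the final directions need not remain the original LDA ones.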
Figure 1 shows the complete architecture of the proposed backend. The names in the blocks refer to the way the parameters of that stage are initialized. Without the lower blue part, the proposed backend coincides, at initialization, with the standard PLDA backend with a global calibration stage. LDA-last refers to using the last M dimensions of the LDA matrix for initialization of that block. As mentioned before, the parameters of the dimensionality reduction applied to obtain the side-information vectors (the DimRed block) are the only ones that are randomly initialized. Note that the blocks Alpha and Beta are in charge of computing the calibration parameters when those are dependent on the side-information. That is, in this case, $\alpha$ and $\beta$ become inputs to the calibration stage while, when no side-information is used, the $\alpha$ and $\beta$ parameters are global and are implicit in the calibration block.
4 Trial-Based Calibration
Over the years, we have explored the problem of achieving robust calibration performance across varying conditions using different approaches. The most successful of them, published first in [11] and then further developed and analyzed in [12], was trial-based calibration (TBC). The approach consists of training a separate calibration model for each verification trial using a subset of the available calibration data. The subset for each trial is selected based on a measure of condition similarity between the two sides of the trial and the calibration data. That is, for each trial, we select calibration enrollment samples that are similar in terms of condition to the enrollment side of the trial to be scored, and calibration test samples that are similar to the test side of the trial to be scored. Next, we train a calibration model using all possible trials generated with the selected enrollment and test samples. In the latest version of the method [12], the similarity between two signals was measured using a PLDA model trained with condition rather than speaker labels. In this paper, we use the parameters we found to be best in [12], under the assumption that we want to score every possible trial. A reject option was proposed in that paper, but we do not use it for the comparisons here since we have not yet implemented an equivalent approach for DPLDA.
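At a high level, TBC can be sketched as below. Here `similarity`, `scorer`, and `train_cal` stand in for the condition-similarity model, the speaker verification scorer, and a calibration trainer, all assumed to be supplied; the per-trial cost of retraining calibration is what makes the method expensive:

```python
import numpy as np

def tbc_score(enroll, test, cal_enroll, cal_test, similarity, scorer,
              train_cal, n_sel=20):
    """Sketch of trial-based calibration: select the calibration samples
    most condition-similar to each side of the trial, build all trials
    among them, fit a per-trial (alpha, beta), and calibrate the score.
    cal_enroll/cal_test are lists of (embedding, speaker) pairs."""
    sel_e = sorted(cal_enroll, key=lambda c: similarity(enroll, c[0]))[-n_sel:]
    sel_t = sorted(cal_test, key=lambda c: similarity(test, c[0]))[-n_sel:]
    tgt, imp = [], []
    for xe, spk_e in sel_e:
        for xt, spk_t in sel_t:
            (tgt if spk_e == spk_t else imp).append(scorer(xe, xt))
    alpha, beta = train_cal(tgt, imp)  # per-trial calibration model
    return alpha * scorer(enroll, test) + beta
```

Every trial thus pays for a nearest-neighbor search over the calibration pool plus a calibration fit, which explains the run-time gap against global calibration reported in Section 6.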
5 Experimental Setup
In this section we describe the system configuration and datasets used for our experiments.
5.1 Speaker Recognition System
The proposed backend uses standard x-vectors as input [17]. The input features for the embedding extraction network are power-normalized cepstral coefficients (PNCCs) [18] which, in our experiments, gave better results than the more standard mel-frequency cepstral coefficients (MFCCs). We extract 30 PNCCs with a bandwidth from 100 to 7600 Hz and root compression of 1/15. The features are mean- and variance-normalized over a rolling window of 3 seconds. Silence frames are then discarded using a DNN-based speech activity detection system.
System training data included 234K signals from 14,630 speakers. This data was compiled from NIST SRE 2004–2008, NIST SRE 2012, Mixer6, Voxceleb1, and Voxceleb2 (train set) data. Sixty speakers that overlapped with Speakers in the Wild (SITW) were removed from Voxceleb1. All waveforms were up- or down-sampled to 16 kHz before further processing. In addition, we downsampled any data originally at a 16 kHz or higher sampling rate (74K files) to 8 kHz before upsampling back to 16 kHz, keeping two "raw" versions of each of these waveforms. This procedure allowed the embeddings system to operate well in both 8 kHz and 16 kHz bandwidths.
Augmentation of data was applied using four categories of degradations, as in [19]: music and noise, both at 10 to 25 dB signal-to-noise ratio (SNR); compression; and low levels of reverberation. We used 412 noises compiled from both freesound.org and the MUSAN corpus. Music degradations were sourced from 645 files from MUSAN and 99 instrumental pieces purchased from Amazon Music. For reverberation, examples were collected from 47 real impulse responses available on echothief.com and 400 low-level reverb signals sourced from MUSAN. Compression was applied using 32 different codec-bitrate combinations with open-source tools. We augmented the raw training data to produce two copies per file per degradation type (randomly selecting the specific degradation and SNR level, when appropriate) such that the data available for training was 9-fold the amount of raw samples. In total, this resulted in 2,778K files for training the speaker embedding DNNs.
The architecture of our embedding extractor DNN follows the Kaldi recipe [17]. The DNN is implemented in TensorFlow and trained using the Adam optimizer with chunks of speech between 2.0 and 3.5 seconds. Overall, we extract about 4K chunks of speech from each of the speakers. DNNs were trained over 4 epochs over the data using a mini-batch size of 96 examples. We used dropout with a probability linearly increasing from 0.0 up to 0.1 at 1.5 epochs, then linearly decreasing back to 0.0 at the final iteration. The learning rate started at 0.0005, increasing linearly after 0.3 epochs to reach 0.03 at the final iteration, while training simultaneously on 8 GPUs and averaging the parameters from the 8 jobs every 100 mini-batches.
The training data for the PLDA and DPLDA backends was a subset of the training data used for the speaker embeddings DNN, including a random half of the speakers (for expedience of experimentation) and excluding all signals for which no information about the recording session could be obtained, as well as all speakers for which only a single session was available. In this case, we use full segments to train the backend rather than chunks, and an SNR level of 5 dB for augmentation (using this SNR on the PLDA backend led to marginally better results than using 10–25 dB SNR, as for the embeddings extractor). Besides this training data, we add two datasets for backend training: FVCAUS and RATS. FVCAUS is composed of interviews and conversational excerpts from over 500 Australian English speakers from the forensic voice comparison dataset [20]. Audio was recorded using close-talking microphones. RATS is composed of telephone calls in five non-English languages from over 300 speakers. We only used the source data (not retransmitted) of the DARPA RATS program [21] for the SID task.
For both PLDA and CA-DPLDA, the LDA dimension was found to be optimal at 200. For AS-DPLDA, the optimal LDA dimension turned out to be 300. The effect of changing this dimension from 200 to 300 is small for all three systems.
The condition DNN used to generate the $c_i$ vectors for the CA-DPLDA method has two layers of 100 and 10 nodes with ReLU activations and batch normalization. The classes used at the output layer are given by the domain (Voxceleb, Mixer, Switchboard, FVCAUS or RATS) concatenated with the degradation type and, when available, any further information about the condition of the signal (channel type, language, and speech style). Note that the classes are then extremely different in terms of granularity: all Voxceleb data is grouped into one class per degradation type, while Mixer data has much finer-grained labels. While this is clearly suboptimal, it seems to work well in our experiments. For TBC, we train a condition-PLDA model using the same data and labels as for the condition DNN above. In the case of the AS-DPLDA method, we do not need a separate condition-prediction model. We simply initialize the transformation from embeddings to side-information vectors using the last 200 dimensions in the LDA matrix out of the 512 dimensions available.

5.2 Two-stage training
Our backend training data, described in Section 5.1, is highly imbalanced: 53% comes from Voxceleb collections, 25% from SRE and Mixer collections, 11% from Switchboard, 6% from RATS, and 4% from FVCAUS. This causes a problem when learning the calibration part of the model, since parameters cannot be robustly learned for the under-represented conditions. For this reason, we implement a two-stage training procedure. We use all the training data for the first few iterations, then freeze the parameters of the speaker verification branch up to the score generation stage (Equation (2)), subset the training list to use a balanced set of samples with similar representation for all five domains, and continue training the calibration parameters. This allows the model to focus on improving side-information-dependent calibration once the discriminative part of the model has converged.
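The balanced subsetting used in the second stage can be sketched as follows (a simplification for illustration; the actual selection procedure in our experiments may differ):

```python
import random
from collections import defaultdict

def balanced_subset(files, per_domain=None):
    """Stage-2 data selection: keep an equal number of files per domain.
    `files` is a list of (file_id, domain) pairs; by default, each domain
    is subsampled down to the size of the smallest one."""
    by_dom = defaultdict(list)
    for f, d in files:
        by_dom[d].append(f)
    n = per_domain or min(len(v) for v in by_dom.values())
    return [f for v in by_dom.values() for f in random.sample(v, n)]
```

Capping every domain at the size of the smallest one trades total data volume for equal representation, which is what the calibration branch needs in this stage.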
5.3 Datasets
We use several different datasets for development and evaluation of the proposed approach. Table 1 shows the statistics for all sets. The SITW dataset contains speech samples in English from open-source media [22], including naturally occurring noise, reverberation, codec, and channel variability. The SRE16 dataset [23] includes variability due to domain/channel and language mismatches. We use the CMN2 subset of the SRE18 dataset [24], which has similar characteristics to the SRE16 dataset, with the exception of focusing on different languages and including speech recorded over VoIP instead of just PSTN calls. The LASRS corpus is composed of 100 bilingual speakers from each of three languages: Arabic, Korean, and Spanish [25]. Each speaker is recorded in two separate sessions speaking English and their native language using several recording devices. Finally, FVCCMN is composed of interviews and conversational excerpts from female Chinese speakers [26], cut to durations between 10 and 60 seconds. Recordings were made with high-quality lapel microphones.
Dataset  |        Dev Split       |       Eval Split
         |  #spk   #tgt    #imp   |  #spk   #tgt    #imp
SITW     |  119    2.6k    335.0k |  180    3.7k    0.7M
SRE16    |  20     3.3k    13.2k  |  201    27.8k   1.4M
SRE18    |  35     6.3k    80.2k  |  289    48.5k   1.6M
FVCCMN   |  --     --      --     |  68     16.4k   1.1M
LASRS    |  --     --      --     |  333    41.0k   4.8M
SITW, SRE16 and SRE18 have well-defined development sets. We use those three sets to tune the parameters of our models. The remaining sets are used for evaluation of the final systems. For SITW, SRE16 and SRE18, we use the 1-side enrollment trials defined with the datasets. For LASRS and FVCCMN, we create exhaustive trials, excluding same-session trials.
6 Results
We show results in terms of Cllr. This metric [27] measures the quality of the scores as LLRs using a logarithmic cost function and is affected by both the discrimination and the calibration performance of the system. A highly discriminative system can have a high Cllr if the calibration is wrong (i.e., if the scores do not represent proper LLRs for the task). Such a system would lead to bad decisions when thresholded with the theoretically optimal threshold for the cost function of interest. In this work, we aim to obtain a system that results in good calibration across a large variety of conditions. To measure whether we are succeeding in this goal, we need to separate the effects of the discrimination and the calibration performance of the system. This is done by obtaining the minimum Cllr that can be achieved with the system's scores for a certain test set using a monotonic transformation [28]. The difference between the actual Cllr and the minimum Cllr for the system indicates the effect of miscalibration. If the two values are equal, then the system is perfectly calibrated and the scores produced by it are proper LLRs.
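For reference, Cllr has a simple closed form given the target and impostor LLRs; a minimal sketch (using base-2 logarithms, so a system that outputs zero LLRs everywhere scores exactly 1.0):

```python
import numpy as np

def cllr(llr_tgt, llr_imp):
    """Cllr [27]: the average of the mean log costs over target and
    impostor trials. Well-calibrated, well-separated LLRs drive it
    toward 0; an uninformative system (all-zero LLRs) scores 1.0."""
    c_t = np.mean(np.log2(1.0 + np.exp(-np.asarray(llr_tgt, dtype=float))))
    c_i = np.mean(np.log2(1.0 + np.exp(np.asarray(llr_imp, dtype=float))))
    return 0.5 * (c_t + c_i)
```

The minimum Cllr is obtained by applying the optimal monotonic transformation (e.g., via the pool-adjacent-violators algorithm) to the scores before computing this quantity, which removes the calibration component of the cost.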
Figure 2 shows the actual and minimum Cllr values on the development and evaluation datasets for different systems. All development decisions (dimensions, training hyperparameters, number of epochs, initialization seed, etc.) were made based on the average performance on the three development sets. During our initial experiments for [2], we concluded that including the bilinear terms, $M_\alpha$ and $M_\beta$, in Equations (6) and (7) did not improve results. Hence, those terms are not used in our experiments. The results shown correspond to the best epoch of each of 10 models run with different seeds for the chosen architecture, selected based on the average development set performance since, as we will see in Section 6.2, the seed and the number of epochs have a very significant effect on system performance.
For completeness' sake, we repeat here the baseline PLDA results shown in [2]. For these three PLDA systems, the LDA and PLDA parameters are trained using only the training data, without adding FVCAUS and RATS, since adding those sets slightly degrades the discrimination performance of this system on our development sets. We show three options for training the calibration stage for the PLDA system: using TRN3h RAW, which consists of 300 speakers from the raw part of the training set (using more speakers does not help, and including the degraded part hurts performance); using only RATS data; and using only FVCAUS data. Note that no calibration set is optimal for all test sets. Merging the three sets leads to a trade-off in performance which depends strongly on the proportion of each dataset used (results not shown). Note also that the discrimination performance of the baseline system is not affected by the calibration model, since this model is a single monotonic transformation for each test set.
The fourth system in the figure uses the same PLDA backend as the first three, with TBC for condition-dependent calibration. In this case, the concatenation of the three calibration sets used for the first three systems is given to the TBC algorithm as the candidate set for selection of calibration data, adding back the degraded training data, since including that data provides improved performance on the SITW development set and no degradation on the other two development sets. Ideally, the TBC algorithm should be able to select the best subset of data for each trial. We set TBC to select enough samples to achieve 100 target trials, and to use regularization toward the global calibration parameters with a weight of 0.02. These parameters were found to give optimal results in [12]. As we can see, TBC works as expected, giving, in most cases, performance better than or similar to each of the three global calibration models, with a clear exception for FVCCMN (more on this issue below).
Finally, we show results for two DPLDA systems. The first is the one proposed in [2] and corresponds to the system called "DPLDA with meta 2-stage" in that paper. Results are slightly different from those in the paper since we used the best of 10 seeds here instead of 5, as in the original paper. The last system in the figure is the newly proposed variant, AS-DPLDA, where no external condition-prediction model is used to extract the side-information vectors. As we can see, both DPLDA methods give very similar performance, both in terms of discrimination and calibration, with the newly proposed method being much simpler.
We can see that the only dataset that remains significantly miscalibrated with the DPLDA methods is FVCCMN. This dataset is quite different from all our training data. While it is similar in terms of acoustic conditions to FVCAUS, it consists only of Chinese speech, while FVCAUS consists of English speech with an Australian accent. While around 1% of the training samples are in Chinese (all from the SRE04 and SRE06 datasets), they are all recorded over a telephone channel, not in the extremely clean acoustic conditions of the FVCCMN dataset. Note that the FVCAUS dataset leads to reasonable calibration when used as training data for calibrating FVCCMN (pink bar), despite the fact that the languages in these two sets are different. Our DPLDA algorithms, on the other hand, do not consider the FVCAUS data relevant for calibrating FVCCMN, given this difference in language. This is probably the reason why the DPLDA (and TBC) methods give worse performance than PLDA with FVCAUS calibration.
In terms of run time, the PLDA baseline with global calibration and the DPLDA options take a similar amount of time to run since, once the model is trained, both have the same functional form, with the DPLDA system incurring a slight increase in run time due to the computation of the side-information vectors. On the other hand, the TBC approach takes orders of magnitude more time than either PLDA with global calibration or DPLDA. For example, the evaluation for the SITW evaluation set took about 3000 minutes on a single CPU for the PLDA system with TBC, while the PLDA with global calibration and DPLDA systems took under 10 seconds on a single CPU.
6.1 Analysis of the side-information vectors
Using the vectors as input to a simple functional form (Equations 6 and 7) to extract the calibration parameters appears to be quite successful at achieving robustness across different datasets. So, a question arises: what information do these vectors represent? For the original model proposed in [2], where the vectors are given by the pre-activations in a 10-node bottleneck layer of a condition-prediction DNN, we expect the resulting vectors to contain mostly condition information and very little speaker information. On the other hand, for the AS-DPLDA method, the vectors could, in fact, contain speaker information, since nothing in the training prevents it.
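Equations (6) and (7) are not reproduced here, so the following is only a minimal sketch of the general idea: the two trial-side side-information vectors are mapped, through a simple trainable form, to a trial-dependent scale and offset that calibrate the raw score. The symmetric affine form below is a hypothetical stand-in, not the paper's exact equations.

```python
import numpy as np

def calibration_params(q1, q2, W_a, b_a, W_b, b_b):
    """Map the side-information vectors of the two trial sides to
    trial-dependent calibration parameters (scale, offset).

    Hypothetical symmetric affine form; W_a, W_b are weight vectors
    and b_a, b_b scalar biases, all trained jointly with the model."""
    q = q1 + q2  # symmetric combination of the two trial sides
    alpha = float(W_a @ q + b_a)  # trial-dependent scale
    beta = float(W_b @ q + b_b)   # trial-dependent offset
    return alpha, beta

def calibrate(score, q1, q2, params):
    """Affine calibration of a raw score into an LLR, with
    parameters that depend on the side-information vectors."""
    alpha, beta = calibration_params(q1, q2, *params)
    return alpha * score + beta
```

With zero weights and biases (1, 0), this reduces to the standard global affine calibration with identity parameters, recovering the conventional backend as a special case.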
Figure 3 shows system performance when using the vectors from the two DPLDA systems in Figure 2 as input to a standard PLDA-based speaker verification system. In this case, the input is 5-dimensional, so the LDA transform is set not to reduce dimensionality further. We report EER here since we do not care about calibration performance; we only wish to assess how much speaker information is present in the vectors from each system. For comparison, we also show the performance of two PLDA systems using the speaker embeddings as input, one with an LDA dimension of 200 (the same system used in Figure 2 for the first three results) and one with an LDA dimension of 5.
As we can see, using only 5 dimensions (obtained from the speaker embeddings in different ways) is significantly worse than using 200; no surprise there. Interestingly, though, using the AS-DPLDA vectors gives performance similar to the PLDA system with embeddings as input and an LDA dimension of 5. Hence, this system appears to prioritize the preservation of speaker information within those 5 dimensions. On the other hand, the PLDA system using the CA-DPLDA vectors as input performs almost at chance, as expected. Yet, Figure 2 shows that both sets of vectors are useful for obtaining calibration parameters that generalize across conditions. Hence, we might hypothesize that whatever speaker information is present in those vectors is discarded when computing the calibration parameters from the vectors using Equations (6) and (7). This is an open question for future work.
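The EER used above to probe how much speaker information survives in the vectors is the operating point where the miss rate equals the false-alarm rate. A minimal, backend-independent sketch of its computation from target and impostor scores could look like:

```python
import numpy as np

def eer(target_scores, impostor_scores):
    """Equal error rate: find the threshold where the miss rate
    (targets scored below it) matches the false-alarm rate
    (impostors scored above it), sweeping over the sorted scores."""
    scores = np.concatenate([target_scores, impostor_scores])
    labels = np.concatenate([np.ones(len(target_scores)),
                             np.zeros(len(impostor_scores))])
    labels = labels[np.argsort(scores)]
    n_tar = labels.sum()
    n_imp = len(labels) - n_tar
    miss = np.cumsum(labels) / n_tar              # targets rejected so far
    fa = (n_imp - np.cumsum(1 - labels)) / n_imp  # impostors still accepted
    idx = np.argmin(np.abs(miss - fa))
    return float((miss[idx] + fa[idx]) / 2)
```

A perfectly separable system gives an EER of 0, while a system whose vectors carry no speaker information scores around 0.5, which is the near-chance behavior observed for the CA-DPLDA vectors.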
We can also qualitatively analyze the nature of the vectors by plotting them in two dimensions after projection with principal component analysis, training this projection on the balanced training set used in the second stage of training. Figure 4 shows the center of the projected vectors for each dataset, along with a sample of the individual training points for reference. We can see that, in both cases, the centers for the development and evaluation sets from the same collection fall relatively close to each other. This indicates that both sets of vectors preserve information about the conditions of the samples, apparently to a similar degree. Interestingly, the FVC-CMN center lies outside the main mass of training data for both systems, more extremely so in the CA-DPLDA case. This is expected given that, as explained before, this data is unlike any of the training data. It might also explain why calibration with DPLDA does not work well for this dataset: since, during training, the model has not seen enough values in the region where the FVC-CMN values lie, its prediction of the calibration parameters for this data is not reliable. In the future, we will look into detecting such cases, where the model is doomed to fail, in a similar way as we did for TBC in [12].
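The projection described above can be sketched with plain numpy. This is a simplified stand-in for the actual analysis: the PCA is fit on the training vectors, new vectors are projected onto the top two components, and per-dataset centers are computed for plotting (dataset labels are assumed to be available as an array).

```python
import numpy as np

def pca_project_2d(train_vectors, vectors):
    """Fit a 2-D PCA on train_vectors (N, d) and project
    vectors (M, d) onto the top two principal directions."""
    mean = train_vectors.mean(axis=0)
    centered = train_vectors - mean
    # Principal directions via SVD of the centered training data.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:2]                     # top-2 principal directions
    return (vectors - mean) @ basis.T  # (M, 2) projected points

def dataset_centers(projected, dataset_ids):
    """Mean projected point per dataset, i.e., the centers
    shown in the figure."""
    return {d: projected[dataset_ids == d].mean(axis=0)
            for d in np.unique(dataset_ids)}
```

Plotting the per-dataset centers returned by `dataset_centers` over a scatter of the projected training points reproduces the kind of visualization described above, where an out-of-distribution set shows up as a center far from the training mass.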
6.2 Effect of initialization and number of epochs
The results shown in Figure 2 correspond to the systems that give the best results, over 10 different seeds and 5 different epochs (20, 25, 30, 35, and 40), on average over the development sets. This optimization turns out to be essential for good performance. Figure 5 shows the average actual Cllr over the development sets for 5 different seeds over different epochs for the AS-DPLDA model. We can see that different seeds and epochs can lead to very different results. In fact, even for the same seed, nearby epochs vary greatly in performance, suggesting that the model is jumping between local minima, some of which have much better generalization performance than others. Note that this behavior is also observed for the originally proposed model, which requires external condition vectors. Lower learning rates and larger regularization values do not solve this problem.
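The actual Cllr used for this model selection is the standard weighted binary cross-entropy of the scores interpreted as LLRs, evaluated at an effective prior of 0.5 (following the usual definition from the calibration literature):

```python
import numpy as np

def cllr(target_llrs, impostor_llrs):
    """Actual Cllr in bits, at an effective prior of 0.5.
    A well-calibrated, discriminative system gives Cllr near 0;
    an uninformative system (all LLRs equal to 0) gives exactly 1."""
    tar = np.asarray(target_llrs, dtype=float)
    imp = np.asarray(impostor_llrs, dtype=float)
    c_miss = np.mean(np.log2(1 + np.exp(-tar)))  # cost on target trials
    c_fa = np.mean(np.log2(1 + np.exp(imp)))     # cost on impostor trials
    return float(0.5 * (c_miss + c_fa))
```

Averaging this quantity over the development sets for each (seed, epoch) pair and keeping the minimum is the selection procedure described above.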
While it would be desirable for the performance to be more stable across seeds and epochs, the good news is that the best model selected using the development set is also a very good model on the evaluation sets. Figure 6 shows the scatter plot of development versus evaluation performance for the same seeds and epochs as in Figure 5. We can see that on the sets that are matched to the three development sets, the correlation between development and evaluation performance is almost perfect. For the other two eval sets that do not have a corresponding development set, the correlation is not perfect, but we can still see that the best system for the development sets is, in both cases, a very good system for the eval set. Hence, the selection of the best model generalizes well in our experiments, even for cases for which no matching development set is available.
Finally, Figure 7 shows the effect of the initialization method described in Section 3, where all parameters in the model are initialized with some meaningful value, except the DimRed stage, which has no obvious default value. We call this method "warm-start". Further, we wish to evaluate how important it is to initialize the extractor of vectors with the lower part of the LDA transform. Hence, we disable this initialization and replace it with random initialization, while all other parameters are initialized as in the warm-start approach. We call this method "warm-start partial". Finally, we compare these two approaches with random initialization of all parameters. All random initializations use a normal distribution centered at 0.0 with a standard deviation of 0.5. Of course, the distribution of the random initializers could be optimized, perhaps separately for each parameter, but this would be a very costly experiment. In all cases, the best model over 10 different seeds is selected. We can see that using the precomputed parameter values for initialization leads to significantly better results than random initialization, and to smaller but consistent gains over using a random matrix, instead of the lower part of the LDA transform, to initialize the vector extractor.

7 Conclusions
We presented a novel backend approach for speaker verification which consists of a series of operations that mimic the standard PLDA backend followed by calibration. The parameters of the model are learned jointly to optimize the overall speaker verification performance of the system, directly targeting the loss of interest in the speaker verification task. In order to achieve good generalization in terms of calibration performance across varying conditions, we introduced a side-information-dependent calibration stage, where the side-information is learned jointly with the rest of the model. We showed that the proposed approach improves performance over the standard PLDA backend on a wide variety of test conditions, leading to a robust backend that does not require specific development data for calibration.
The proposed approach fails to give good calibration performance on only one of our test sets, which contains conditions that are not represented in the training data. We believe this is a case where the system should be able to reject the trials for being severely mismatched to the training data. We plan to pursue this research direction in the near future. Further, we plan to extend the approach to score multi-enrollment trials and to use additional external side-information, such as the signal's duration, as input to the calibration model. Finally, the end goal is to integrate this backend into the embedding extractor DNN for joint training.
References
 [1] D. Snyder, P. Ghahremani, D. Povey, D. Garcia-Romero, Y. Carmiel, and S. Khudanpur, “Deep neural network-based speaker embeddings for end-to-end speaker verification,” in Spoken Language Technology Workshop (SLT), 2016 IEEE. IEEE, 2016, pp. 165–170.
 [2] Luciana Ferrer and Mitchell McLaren, “A discriminative condition-aware backend for speaker verification,” arXiv preprint arXiv:1911.11622, 2019.
 [3] J. Rohdin, A. Silnova, M. Diez, O. Plchot, P. Matejka, and L. Burget, “End-to-end DNN based speaker recognition inspired by i-vector and PLDA,” in Proc. ICASSP, Calgary, Canada, April 2018.
 [4] L. Burget, O. Plchot, S. Cumani, O. Glembek, P. Matejka, and N. Brümmer, “Discriminatively trained probabilistic linear discriminant analysis for speaker verification,” in Proc. ICASSP, Prague, May 2011.
 [5] N. Dehak, P. J. Kenny, R. Dehak, P. Dumouchel, and P. Ouellet, “Front-end factor analysis for speaker verification,” IEEE Trans. Audio, Speech, and Lang. Process., vol. 19, no. 4, pp. 788–798, May 2011.
 [6] L. Ferrer, K. Sönmez, and S. Kajarekar, “Class-dependent score combination for speaker recognition,” in Proc. Interspeech, Lisbon, Sept. 2005.
 [7] Y. Solewicz and M. Koppel, “Considering speech quality in speaker verification fusion,” in Proc. Interspeech, Lisbon, Sept. 2005.
 [8] Y. Solewicz and M. Koppel, “Using post-classifiers to enhance fusion of low- and high-level speaker recognition,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, no. 7, Sept. 2007.
 [9] L. Ferrer, M. Graciarena, A. Zymnis, and E. Shriberg, “System combination using auxiliary information for speaker verification,” in Proc. ICASSP, Las Vegas, Apr. 2008.
 [10] Niko Brümmer, “Focal bilinear toolkit,” http://niko.brummer.googlepages.com/focalbilinear, 2008.
 [11] M. McLaren, A. Lawson, L. Ferrer, N. Scheffer, and Y. Lei, “Trial-based calibration for speaker recognition in unseen conditions,” in Proc. Odyssey-14, Joensuu, Finland, June 2014.
 [12] L. Ferrer, M. K. Nandwana, M. McLaren, D. Castan, and A. Lawson, “Toward fail-safe speaker recognition: Trial-based calibration with a reject option,” IEEE/ACM Trans. Audio Speech and Language Processing, vol. 27, Jan. 2019.
 [13] Sandro Cumani, Niko Brümmer, Lukáš Burget, Pietro Laface, Oldřich Plchot, and Vasileios Vasilakakis, “Pairwise discriminative speaker verification in the i-vector space,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 21, no. 6, pp. 1217–1227, 2013.
 [14] N. Brümmer and G. Doddington, “Likelihood-ratio calibration using prior-weighted proper scoring rules,” in Proc. Interspeech, Lyon, France, Aug. 2013.
 [15] Diederik P Kingma and Jimmy Ba, “Adam: A method for stochastic optimization,” in Proc. of ICLR, San Diego, 2015.
 [16] David A. Van Leeuwen and Niko Brümmer, “An introduction to application-independent evaluation of speaker recognition systems,” in Speaker Classification I, pp. 330–353. Springer, 2007.
 [17] “NIST SRE 2016 x-vector recipe,” https://david-ryan-snyder.github.io/2017/10/04/model_sre16_v2.html.
 [18] C. Kim and R. M. Stern, “Power-normalized cepstral coefficients (PNCC) for robust speech recognition,” in Proc. ICASSP, Kyoto, Mar. 2012.
 [19] M. McLaren, D. Castan, M. Nandwana, L. Ferrer, and E. Yilmaz, “How to train your speaker embeddings extractor,” in Proc. of Speaker Odyssey, Les Sables d’Olonne, France, June 2018.
 [20] G. S. Morrison, C. Zhang, E. Enzinger, F. Ochoa, D. Bleach, M. Johnson, B. K. Folkes, S. De Souza, N. Cummins, and D. Chow, “Forensic database of voice recordings of 500+ Australian English speakers,” http://databases.forensic-voice-comparison.net, 2015.
 [21] K. Walker and S. Strassel, “The RATS radio traffic collection system,” in Proc. Odyssey-12, Singapore, June 2012.
 [22] M. McLaren, L. Ferrer, D. Castan, and A. Lawson, “The speakers in the wild (SITW) speaker recognition database,” in Proc. Interspeech, San Francisco, September 2016.
 [23] “NIST 2016 speaker recognition evaluation plan,” https://www.nist.gov/sites/default/files/documents/itl/iad/mig/SRE16_Eval_Plan_V10.pdf.
 [24] “NIST 2018 speaker recognition evaluation plan,” https://www.nist.gov/document/sre18evalplan20180531v6.pdf.
 [25] S. D. Beck, R. Schwartz, and H. Nakasone, “A bilingual multimodal voice corpus for language and speaker recognition (LASR) services,” in Proc. Odyssey-04, Toledo, Spain, May 2004.
 [26] C. Zhang and G. S. Morrison, “Forensic database of audio recordings of 68 female speakers of Standard Chinese,” http://databases.forensic-voice-comparison.net, 2011.
 [27] N. Brümmer and J. du Preez, “Application independent evaluation of speaker detection,” Computer Speech and Language, vol. 20, Apr. 2006.
 [28] N. Brümmer and J. du Preez, “The PAV algorithm optimizes binary proper scoring rules,” https://sites.google.com/site/nikobrummer/pav_optimizes_rbpsr.pdf, 2013.