Meta-learning for robust child-adult classification from speech

10/24/2019 ∙ by Nithin Rao Koluguri, et al.

Computational modeling of naturalistic conversations in clinical applications has seen growing interest in the past decade. An important use-case involves child-adult interactions within the autism diagnosis and intervention domain. In this paper, we address a specific sub-problem of speaker diarization, namely child-adult speaker classification in such dyadic conversations with specified roles. Training a speaker classification system robust to speaker and channel conditions is challenging due to inherent variability in the speech of children and of the adult interlocutors. In this work, we propose the use of meta-learning, in particular prototypical networks, which optimize a metric space across multiple tasks. By modeling every child-adult pair in the training set as a separate task during meta-training, we learn a representation with improved generalizability compared to conventional supervised learning. We demonstrate improvements over state-of-the-art speaker embeddings (x-vectors) under two evaluation settings: weakly supervised classification (up to 14.53% relative improvement in F1-scores) and clustering (up to 9.66% relative improvement in cluster purity). Our results show that protonets can potentially extract robust speaker embeddings for child-adult classification from speech.


1 Introduction

Automated understanding of child speech is an inherently more difficult problem than that of adult speech. A variety of contributing factors have been identified and addressed in recent years, both from a signal-processing viewpoint (such as large within- and across-age and gender variability due to a developing vocal tract [12], and errors in the semantic structure of spoken language [9]) and from limited data availability, which has necessitated data augmentation and transfer learning techniques [17].

An additional layer of complexity arises when addressing clinical and mental health applications involving children, where the condition may give rise to speech and language abnormalities. One such domain is autism spectrum disorder (ASD). ASD refers to a complex group of neuro-developmental disorders that are commonly characterized by social and communication idiosyncrasies, and whose reported prevalence in US children has been steadily rising (1 in 59 children as of 2014 [1]). Child-adult interactions have been used in the ASD domain primarily for diagnosis (ADOS [13]) and for measuring intervention response (BOSCC [8]). Automated computational processing of the participants' audio [2] and language streams [11] has provided objective descriptions that characterize session progress and its relation to symptom severity.

However, behavioral feature extraction in the above studies has required manual annotation of speaker labels, which can be expensive and time-consuming to obtain, especially for large corpora. Automatic speaker label extraction involves a combination of speech activity detection (speech/non-speech classification) and speaker classification (categorization of speech regions into child and adult). In this work, we assume that oracle speech activity detection is available and focus on building a robust child-adult classification model.

Interest in child-adult speaker classification in spontaneous conversations has increased recently. Some of the early works used traditional feature representations such as MFCCs, PLP and i-vectors, and clustering techniques such as the Bayesian information criterion, the information bottleneck, and agglomerative hierarchical clustering [22, 5, 25, 20, 14]. In [21], speech from children and adults was found to be sufficiently different in the embedding space to justify speaker classification. A recent work using DNN embeddings (x-vectors) explored augmentation of child speech for PLDA training [23]. The authors also observed that splitting the adult speech into gender-specific portions while training the PLDA yielded improvements in diarization performance. However, similar to most of the above works, the authors do not make any distinctions within the child speech.

Training a conventional child-adult classifier from speech faces at least two major issues: 1) large within-class variability, especially for the child class, arising from age, gender, and clinical symptom severity; and 2) lack of sufficient balanced training data to tackle the first issue. We propose to address these issues using meta-learning, also known as learning-to-learn [6]. Meta-learning consists of two optimizations: the conventional learner, which learns within a task; and a meta-learner, which learns across tasks. This is in contrast to conventional supervised learning, which operates within a single task for training and testing and learns across samples. Meta-learning is inspired by the human capacity for rapid generalization to new tasks: for instance, children who have never seen a new animal before can learn to identify it using only a handful of images. As a consequence, meta-learning has demonstrated success in low-resource applications [18, 6] in computer vision in recent years.

In this work, we model each session in the training set as a separate task. Hence, each task consists of two classes: child and adult from the particular session. During training, classes are not shared across tasks, i.e., child in one session is a separate class from child in another session. By optimizing the network to discriminate between child-adult speaker pairs across all training tasks (sessions), we mitigate the influence of within-class variabilities. Further, we remove the need for large amounts of training data by randomly sampling training and testing subsets (referred to as supports and queries respectively in meta-learning [18]) within each batch. We evaluate our proposed method under two settings: 1) Weak supervision - a handful of labeled examples are available from the test session, and 2) Unsupervised - automated clustering. The latter is similar to conventional speaker clustering in diarization systems. We show that the learnt representations outperform baselines in both settings.

2 Methods

2.1 Meta learning using prototypical networks

Meta-learning methods were introduced to address the problem of few-shot learning [6], where only a handful of labeled examples are available for new tasks not seen by the trained model. Deep metric-learning methods were developed within the meta-learning paradigm to specifically address generalization in the few-shot scenario. We choose prototypical networks (protonets) [18], which presume a simpler learning bias than other metric-learning methods and have demonstrated state-of-the-art performance in image classification [6] and natural language processing tasks such as sequence classification [24]. Protonets learn a non-linear transformation into an embedding space where each class is represented by a single point, namely the centroid of the embedded examples from that class. During inference, a test sample is assigned to the class of the nearest centroid.
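The centroid computation and nearest-centroid assignment described above can be sketched as follows (a minimal NumPy illustration; the function names are ours, not from the paper, and the embeddings are assumed to already be in the learned metric space):

```python
import numpy as np

def prototypes(embeddings, labels):
    """Compute one prototype (centroid) per class from support embeddings."""
    classes = np.unique(labels)
    return classes, np.stack([embeddings[labels == c].mean(axis=0) for c in classes])

def assign(queries, classes, protos):
    """Assign each query embedding to the class of the nearest prototype."""
    # Squared Euclidean distances, shape (num_queries, num_classes).
    d = ((queries[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[d.argmin(axis=1)]

# Toy example: two supports per class in a 2-d embedding space.
emb = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
lab = np.array([0, 0, 1, 1])
classes, protos = prototypes(emb, lab)           # centroids (0, 0.5) and (5, 5.5)
preds = assign(np.array([[0.0, 2.0], [4.0, 4.0]]), classes, protos)
```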

Our application of protonets for speaker classification is motivated by the fact that participants in a test session represent unseen classes, i.e., speakers in an audio recording to be diarized are typically assumed unknown. However, the target roles namely child and adult are shared across train and test sessions. Hence, by treating child-adult speaker classification in each train session as an independent task, we hypothesize that protonets learn the common discriminating characteristics between child and adult classes irrespective of local variabilities which might influence the task performance.

As a metric-learning method, protonets share similarities with triplet networks [3] and siamese networks [4] for learning speaker embeddings. Other than a recently proposed work which used the protonet loss function for speaker identification and verification [21], to the best of our knowledge this work is one of the first applications of protonets to speaker clustering. Below, we illustrate the protonet training process for a single batch, then extend it to multiple training sessions.

2.1.1 Batch training

Consider a set of labeled training examples S = {(x_1, y_1), …, (x_N, y_N)} from K classes, where each sample x_i is a vector in D-dimensional space and y_i ∈ {1, …, K}. Protonets learn a non-linear mapping f_φ : R^D → R^M, where the prototype of each class k is computed as follows:

c_k = (1 / |S_k|) Σ_{(x_i, y_i) ∈ S_k} f_φ(x_i)    (1)

S_k represents the set of train samples belonging to class k. For every test sample x, the posterior probability given class k is as follows:

p_φ(y = k | x) = exp(−d(f_φ(x), c_k)) / Σ_{k′} exp(−d(f_φ(x), c_{k′}))    (2)

d denotes the distance metric. While the choice of d can be arbitrary, it was shown in [18] that using the Euclidean distance is equivalent to modeling the supports using Gaussian mixture density functions, and it empirically performed better than other functions. Thus, we use Euclidean distance in this work. Learning proceeds by minimizing the negative log-probability of the true class k using gradient descent:

J(φ) = −log p_φ(y = k | x)    (3)

Pseudo-code for training a batch is provided in Algorithm 1.

RandSample(S, N) denotes uniform sampling of N elements from set S without replacement
Input: D = {(x_1, y_1), …, (x_N, y_N)}, where y_i ∈ {1, …, K}; D_k denotes the subset of D with label k
Output: J (batch training loss)

1: for k in {1, …, K} do
2:     S_k ← RandSample(D_k, N_S)                           ▷ Supports
3:     Q_k ← RandSample(D_k \ S_k, N_Q)                     ▷ Queries
4:     c_k ← (1 / N_S) Σ_{(x_i, y_i) ∈ S_k} f_φ(x_i)        ▷ Prototypes
5: J ← 0
6: for k in {1, …, K} do
7:     for (x, y) in Q_k do
8:         J ← J + (1 / (K · N_Q)) [d(f_φ(x), c_k) + log Σ_{k′} exp(−d(f_φ(x), c_{k′}))]

Algorithm 1: Single batch of protonet training
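The batch loss combining Equations (1)-(3) can be sketched in NumPy as follows (an illustrative reimplementation with our own function names; `embed` stands in for the learned mapping f_φ, and we use the squared Euclidean distance as in [18]):

```python
import numpy as np

def protonet_batch_loss(support_x, support_y, query_x, query_y, embed):
    """Protonet loss for one batch: prototypes from the supports (Eq. 1),
    class posteriors as a softmax over negative squared Euclidean distances
    to the prototypes (Eq. 2), and the mean negative log-probability of the
    true class over the queries (Eq. 3)."""
    classes = np.unique(support_y)
    protos = np.stack([embed(support_x[support_y == c]).mean(axis=0) for c in classes])
    q = embed(query_x)
    # Negative squared distances, shape (num_queries, num_classes).
    neg_d = -((q[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    # Numerically stable log-softmax over classes.
    m = neg_d.max(axis=1, keepdims=True)
    log_p = neg_d - (m + np.log(np.exp(neg_d - m).sum(axis=1, keepdims=True)))
    true_idx = np.searchsorted(classes, query_y)
    return float(-log_p[np.arange(len(query_y)), true_idx].mean())

# With well-separated classes and queries near the prototypes, the loss is ~0.
sx = np.array([[0.0, 0.0], [0.0, 1.0], [100.0, 0.0], [100.0, 1.0]])
sy = np.array([0, 0, 1, 1])
loss = protonet_batch_loss(sx, sy, np.array([[0.0, 0.5], [100.0, 0.5]]),
                           np.array([0, 1]), embed=lambda x: x)
```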

2.1.2 Extension to multiple sessions

Consider L sessions in the training corpus, with n_{l,k} samples belonging to class k in session l. We iterate through each session l and randomly sample N_S examples each from child and adult without replacement. These samples (supports) are used to construct the prototypes using Equation (1). From the remaining samples, B/2 samples are chosen without replacement from each class, where B denotes the training batch size. These samples (queries) are used to update the weights in a single back-propagation step according to Equation (3). Although a significant fraction of samples is not seen during a single epoch (1 epoch = L batches), randomly re-sampling the supports and queries over multiple epochs improves the generalizability of protonets.
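The per-session support/query sampling above might be sketched as follows (an illustrative helper under our own naming assumptions; the dictionary layout mapping class names to sample indices is not from the paper):

```python
import random

def sample_episode(session, n_support, batch_size):
    """Sample disjoint support and query index sets for one session (task).
    `session` maps a class name ('child' / 'adult') to a list of sample
    indices; the batch of queries is split evenly between the two classes."""
    supports, queries = {}, {}
    for cls, idxs in session.items():
        # Draw supports and queries together so the two sets are disjoint.
        chosen = random.sample(idxs, n_support + batch_size // 2)
        supports[cls] = chosen[:n_support]
        queries[cls] = chosen[n_support:]
    return supports, queries

random.seed(0)
session = {"child": list(range(100)), "adult": list(range(100, 200))}
s, q = sample_episode(session, n_support=20, batch_size=128)
```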

2.2 Siamese networks

For unsupervised evaluation (clustering), we compare protonets with siamese networks [10], which learn a metric space to maximize pairwise similarity between same-class pairs and minimize similarity between different-class pairs. Specifically, we implement the variant used in speaker diarization [7], where the training label for each input pair represents the probability of belonging to the same speaker. The network jointly learns both the embedding space and the distance metric for computing similarity. In our work, we randomly select same-speaker (child-child, adult-adult) and different-speaker (child-adult) x-vector pairs as input to the model.
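The pair construction described above can be illustrated roughly as follows (a hypothetical helper, not the authors' code; the even same/different split is our assumption):

```python
import random

def sample_pairs(child, adult, n_pairs):
    """Sample x-vector pairs with binary same-speaker targets: label 1 for
    child-child or adult-adult pairs, label 0 for child-adult pairs."""
    pairs = []
    for _ in range(n_pairs):
        if random.random() < 0.5:                        # same-class pair
            pool = child if random.random() < 0.5 else adult
            a, b = random.sample(pool, 2)
            pairs.append((a, b, 1))
        else:                                            # child-adult pair
            pairs.append((random.choice(child), random.choice(adult), 0))
    return pairs

random.seed(1)
pairs = sample_pairs([f"c{i}" for i in range(10)],
                     [f"a{i}" for i in range(10)], n_pairs=50)
```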

Fig. 1 illustrates the differences between siamese networks and protonets during training.

 

Corpus        Duration (min)   Child Age (yrs)   # Utts
              (mean ± std.)    (mean ± std.)     Child    Adult
ASD           17.76 ± 11.99    9.02 ± 3.10       11045    20313
ASD-Infants   10.35 ± 0.51     1.87 ± 0.78       1371     4120

Table 1: Statistics of child-adult corpora used in this work.

3 Experiments

3.1 Dataset

We select two types of child-adult interactions from the ASD domain: the gold-standard Autism Diagnostic Observation Schedule (ADOS [13]), which is used for diagnostic purposes, and a recently proposed treatment outcome measure, the Brief Observation of Social Communication Change (BOSCC [8]), for verbal children who fluently use complex sentences. The ADOS Module 3 typically lasts between 45 and 60 minutes and includes over 10 semi-structured tasks; it produces a diagnostic algorithm score which can be used to classify children into ASD and non-ASD groups. The BOSCC, on the other hand, is a treatment outcome measure used to track changes in social-communication skills over the course of treatment in individuals with ASD, and is applicable in different collection settings (clinics, homes, research labs). A BOSCC session typically lasts 12 minutes and consists of 4 segments (two 4-minute play segments with toys and two 2-minute conversation segments). We used a combination of ADOS (n=3) and BOSCC (n=24) sessions which were administered by clinicians and manually labeled by trained annotators for speaking times and transcripts; we refer to this corpus as ASD. The sessions in ASD cover multiple sources of variability: child age, collection centers (4), and the amount of available speech per child (Table 1).

To check generalization performance, we train our models on ASD and evaluate on a different child-adult corpus within the autism diagnosis and intervention domain. The ASD-Infants corpus (Table 1) consists of BOSCC (n=12) sessions with minimally verbal toddlers and preschoolers with limited language (nonverbal, single words, or phrase speech). As opposed to ASD, these sessions are administered by a caregiver and represent a more naturalistic data collection setup aimed at early behavioral assessment with a familiar adult. The age differences between children in the two corpora provide a significant domain mismatch.

3.2 Features and Model Architecture

We use x-vectors from the CALLHOME recipe (https://kaldi-asr.org/models/m6) as pre-trained audio embeddings in this work; these have demonstrated state-of-the-art performance in speaker diarization [15] and recognition [19] systems. X-vectors are fixed-length embeddings extracted from variable-length utterances using a time-delay neural network followed by a statistics-pooling layer. In all our experiments, 128-dimensional x-vectors are input to a feed-forward neural network with 3 hidden layers (128, 64 and 32 units per layer). Embeddings from the third hidden layer (32-dimensional) are treated as speaker representations. Rectified linear unit (ReLU) non-linearity is used between the layers. Batch-normalization and dropout (p = 0.2) are used for regularization. The Adam optimizer (β1 = 0.9, β2 = 0.999) is used for weight updates, with a batch size of 128 samples. Since the ASD corpus contains only 27 sessions, we use nine-fold cross-validation to estimate test performance. At each fold, 18 sessions are used for model training, the best model is chosen using validation loss computed on 6 sessions, and the remaining 3 sessions are treated as evaluation data. No two folds share data from the same speaker.
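The embedding network described above can be sketched in NumPy as follows (a forward-pass illustration only: the initialization is arbitrary, batch-norm and dropout are omitted, and whether the final 32-unit layer also applies ReLU is our assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Layer sizes from the paper: 128-d x-vector input -> 128 -> 64 -> 32.
sizes = [128, 128, 64, 32]
weights = [rng.standard_normal((m, n)) * np.sqrt(2.0 / m)   # He-style init
           for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def embed(x):
    """Map a batch of x-vectors to 32-d speaker embeddings."""
    for W, b in zip(weights, biases):
        x = relu(x @ W + b)
    return x

out = embed(rng.standard_normal((4, 128)))   # 4 x-vectors in, 4 embeddings out
```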

Figure 1: Training in protonets (left) vs. siamese networks (right) in the embedding space. Colored backgrounds represent class decision regions. Distances from the query sample (unfilled) to the prototype of each class (filled black) are used to estimate the training loss using Equations (2) and (3). Siamese networks are trained to maximize similarity between same-speaker pairs (dashed line) and minimize similarity between different-speaker pairs (solid line). Illustration adapted from [18, 21].

3.3 Evaluation

3.3.1 Weak Supervision

We evaluate our models in a few-shot setting similar to the original formulation of protonets [18], which corresponds to sparsely labeled segments from the test session. In practice, such labels can be obtained from the session through random selection or active learning [16]. We train a baseline model using the architecture from Section 3.2 with a softmax layer to minimize cross-entropy loss between the child and adult classes. This model is directly used to estimate class posteriors on the testing data; we refer to it as Base. We use a second baseline where the labeled samples from the test sessions in each fold are made available during the training process, i.e., the network weights are updated using back-propagation on these samples (Base-backprop).

For protonets, we train two variants, P20 and P30, with 20 and 30 supports per class during training. A larger number of supports translates into more samples for reliable prototype computation; however, it results in fewer queries for back-propagation. During evaluation, 5 samples from each class in the test session are randomly chosen as training data. These samples are used to compute prototypes for child and adult, followed by minimum-distance assignment of the remaining samples in that session. In order to estimate a robust performance measure for Base-backprop, P20 and P30, we repeat each evaluation 200 times, selecting a different set of 5 samples each time, and compute the mean macro (unweighted) F1-score over the corpus.
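The evaluation protocol above — repeatedly drawing 5 labeled samples per class, forming prototypes, and averaging the macro-F1 over the remaining samples — could be sketched as follows (an illustrative reimplementation, not the authors' code):

```python
import numpy as np

def macro_f1(true, pred):
    """Unweighted mean of per-class F1 scores."""
    scores = []
    for c in np.unique(true):
        tp = np.sum((pred == c) & (true == c))
        fp = np.sum((pred == c) & (true != c))
        fn = np.sum((pred != c) & (true == c))
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * p * r / (p + r) if p + r else 0.0)
    return float(np.mean(scores))

def evaluate_session(emb, labels, n_shot=5, repeats=200, seed=0):
    """Mean macro-F1 over repeated random draws of n_shot labeled samples
    per class, which serve as supports for the class prototypes."""
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    f1s = []
    for _ in range(repeats):
        support = np.concatenate([rng.choice(np.where(labels == c)[0],
                                             n_shot, replace=False)
                                  for c in classes])
        mask = np.ones(len(labels), bool)
        mask[support] = False                      # score the unlabeled rest
        protos = np.stack([emb[support][labels[support] == c].mean(0)
                           for c in classes])
        d = ((emb[mask][:, None] - protos[None]) ** 2).sum(-1)
        f1s.append(macro_f1(labels[mask], classes[d.argmin(1)]))
    return float(np.mean(f1s))

# Synthetic, well-separated "child"/"adult" embeddings should score ~1.0.
rng = np.random.default_rng(1)
emb = np.concatenate([rng.normal(0.0, 0.1, (50, 2)),
                      rng.normal(10.0, 0.1, (50, 2))])
labels = np.array([0] * 50 + [1] * 50)
score = evaluate_session(emb, labels, repeats=20)
```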

3.3.2 Unsupervised: Clustering

Clustering x-vectors using AHC and PLDA scores (trained with supervision) is an integral part of recent diarization systems [15]; this method forms our first baseline. We note that the training data for the PLDA transformation represents a significant domain mismatch with our corpora. We use k-means and spectral clustering (with a cosine-distance-based affinity matrix) as unsupervised methods for comparing x-vectors, siamese embeddings, and protonet embeddings. In the siamese network, the distance measure between a segment pair is learnt on outputs from the third hidden layer (32-dimensional). For protonets, we use the models trained for weak supervision and extract embeddings in the prototype space (32-dimensional) for clustering. We use purity as the clustering metric, which describes to what extent the samples in a cluster belong to the same speaker.

Method          ASD      ASD-Infants
Base            82.67    53.67
Base-backprop   78.64    56.29
P20             86.66    61.30
P30             86.10    61.47

Table 2: Child-adult classification results using macro-F1 (%)

4 Results

4.1 Classification

Weakly-supervised classification results are presented in Table 2. Both variants of protonet significantly outperform the baselines on both corpora (ASD: p < 0.05, ASD-Infants: p < 0.01). However, all models degrade on the ASD-Infants corpus compared to ASD. As mentioned before, the data from younger children presents a large domain mismatch between training and evaluation, which we suspect is the primary reason for the lower performance. Surprisingly, on ASD, updating network weights using samples from the test session (Base-backprop) reduces classification performance; we suspect the network overfits to the labeled samples. In the case of ASD-Infants, however, the labeled samples from the test session provide useful information about the speakers, resulting in a modest improvement over a weaker Base. While protonets provide the best F1-scores on both corpora, the performance on ASD-Infants leaves room for improvement. We do not observe any significant difference between P20 and P30, suggesting that the performance is robust to the number of supports and queries during training.

Method      ASD                 ASD-Infants
            K-Means   SC        K-Means   SC
x-vectors   77.05     75.22     77.98     75.97
siamese     78.22     79.18     78.30     76.86
P20         81.39     80.70     85.51     85.55
P30         79.80     80.24     83.57     83.26

Table 3: Speaker clustering results using purity (%)

4.2 Clustering

Clustering x-vectors using AHC and PLDA scores results in a purity of 63.45% on ASD, which is significantly lower than both k-means and spectral clustering (SC) for all the models in Table 3. This suggests that supervised PLDA models may be susceptible to unknown speaker types. Unsupervised PLDA adaptation using the mean and variance of x-vectors from ASD marginally improves the performance to 64.32%, hence we do not include this method in the rest of our comparisons. As opposed to classification, clustering performance does not degrade on ASD-Infants, suggesting that discriminative information between child and adult speakers within a session is preserved in all the embeddings compared in Table 3. Siamese networks present a modest improvement over x-vectors, up to 5.26% relative for spectral clustering on ASD. However, protonets provide the best performance on both corpora. In particular, P20 results in slightly higher purity scores than P30 across clustering methods and corpora; hence, a larger number of queries within a batch appears beneficial for speaker clustering in this work. We also note that the best clustering performance (P20) is higher on the out-of-domain corpus. We believe the younger ages of children in ASD-Infants relative to ASD might benefit the clustering process.

Figure 2: TSNE visualizations of protonet embeddings (left) and x-vectors (right) for 3 test sessions in the ASD corpus

4.3 TSNE Analysis

We provide a qualitative analysis using TSNE in Figure 2. We collect embeddings of both child and adult speech from a single fold (3 sessions) in ASD and visualize protonet embeddings and x-vectors with TSNE. Embeddings from the child and adult classes are shown in 3 shades of red and blue respectively, one shade per session. Although x-vectors cluster compactly within each speaker in a session, embeddings from the same class are spread apart across sessions. Protonets cluster compactly within classes while preserving the discriminative information between classes. In particular, embeddings belonging to child (which are expected to cover more sources of variability) are as compact as embeddings from adult. This suggests that protonets are able to learn across within-class variabilities for child-adult classification from speech.

5 Conclusions

In this work, we used meta-learning to perform child-adult speaker classification in spontaneous conversations. By modeling speaker classification from different sessions as separate tasks, we train protonets to learn speaker representations invariant to local variabilities. Using weakly-supervised and unsupervised settings, we show that protonets outperform x-vectors. Further, protonets outperform siamese networks for clustering when trained on the same input representations (x-vectors). In the future, we would like to train a generic speaker diarization system using protonets. Protonets are a suitable choice for this problem, since an arbitrary number of speakers can be accommodated in every training session, and speaker identities need not be shared across sessions.

References

  • [1] J. Baio et al. (2018) Prevalence of autism spectrum disorder among children aged 8 years—autism and developmental disabilities monitoring network, 11 sites, united states, 2014. MMWR Surveillance Summaries 67 (6), pp. 1. Cited by: §1.
  • [2] D. Bone, S. L. Bishop, M. P. Black, M. S. Goodwin, C. Lord, and S. S. Narayanan (2016) Use of machine learning to improve autism screening and diagnostic instruments: effectiveness, efficiency, and multi-instrument fusion. Journal of Child Psychology and Psychiatry 57 (8), pp. 927–937. Cited by: §1.
  • [3] H. Bredin (2017-03) TristouNet: triplet loss for speaker turn embedding. In ICASSP, pp. 5430–5434. Cited by: §2.1.
  • [4] K. Chen and A. Salman (2011) Learning speaker-specific characteristics with a deep neural architecture. IEEE Transactions on Neural Networks 22 (11), pp. 1744–1756. Cited by: §2.1.
  • [5] A. Cristia, S. Ganesh, M. Casillas, and S. Ganapathy (2018) Talker diarization in the wild: the case of child-centered daylong audio-recordings. In Interspeech, pp. 2583–2587. Cited by: §1.
  • [6] C. Finn, P. Abbeel, and S. Levine (2017) Model-agnostic meta-learning for fast adaptation of deep networks. In ICML-Volume 70, pp. 1126–1135. Cited by: §1, §2.1.
  • [7] D. Garcia-Romero, D. Snyder, G. Sell, D. Povey, and A. McCree (2017) Speaker diarization using deep neural network embeddings. In ICASSP, pp. 4930–4934. Cited by: §2.2.
  • [8] R. Grzadzinski et al. (2016) Measuring changes in social communication behaviors: preliminary development of the brief observation of social communication change (BOSCC). Journal of autism and developmental disorders 46 (7), pp. 2464–2479. Cited by: §1, §3.1.
  • [9] V. Hazan and S. Barrett (2000) The development of phonemic categorization in children aged 6–12. Journal of Phonetics 28 (4), pp. 377 – 396. Cited by: §1.
  • [10] G. Koch, R. Zemel, and R. Salakhutdinov (2015) Siamese neural networks for one-shot image recognition. In ICML Deep Learning Workshop, Vol. 2. Cited by: §2.2.
  • [11] M. Kumar, R. Gupta, D. Bone, N. Malandrakis, S. Bishop, and S. S. Narayanan (2016) Objective language feature analysis in children with neurodevelopmental disorders during autism assessment.. In Interspeech, pp. 2721–2725. Cited by: §1.
  • [12] S. Lee, A. Potamianos, and S. Narayanan (1999) Acoustics of children’s speech: developmental changes of temporal and spectral parameters. The Journal of the Acoustical Society of America 105 (3), pp. 1455–1468. Cited by: §1.
  • [13] C. Lord et al. (2000) The autism diagnostic observation schedule—Generic: a standard measure of social and communication deficits associated with the spectrum of autism. Journal of autism and developmental disorders 30 (3), pp. 205–223. Cited by: §1, §3.1.
  • [14] M. Najafian and J. H. Hansen (2016) Speaker independent diarization for child language environment analysis using deep neural networks. In 2016 IEEE Spoken Language Technology Workshop (SLT), pp. 114–120. Cited by: §1.
  • [15] G. Sell, D. Snyder, A. McCree, D. Garcia-Romero, J. Villalba, M. Maciejewski, V. Manohar, N. Dehak, D. Povey, S. Watanabe, and S. Khudanpur (2018-09) Diarization is hard: some experiences and lessons learned for the JHU team in the inaugural DIHARD challenge. In Interspeech, pp. 2808–2812. Cited by: §3.2, §3.3.2.
  • [16] B. Settles (2009) Active learning literature survey. Technical report University of Wisconsin-Madison Department of Computer Sciences. Cited by: §3.3.1.
  • [17] P. G. Shivakumar and P. Georgiou (2018) Transfer learning from adult to children for speech recognition: evaluation, analysis and recommendations. arXiv preprint arXiv:1805.03322. Cited by: §1.
  • [18] J. Snell, K. Swersky, and R. Zemel (2017) Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, pp. 4077–4087. Cited by: §1, §1, §2.1.1, §2.1, Figure 1, §3.3.1.
  • [19] D. Snyder, D. Garcia-Romero, G. Sell, D. Povey, and S. Khudanpur (2018) X-vectors: robust dnn embeddings for speaker recognition. In ICASSP, pp. 5329–5333. Cited by: §3.2.
  • [20] L. Sun, J. Du, T. Gao, Y. Lu, Y. T. Tsao, C. Lee, and N. Ryant (2018-04) A novel lstm-based speech preprocessor for speaker diarization in realistic mismatch conditions. In ICASSP, pp. 5234–5238. External Links: Document Cited by: §1.
  • [21] J. Wang, K. Wang, M. T. Law, F. Rudzicz, and M. Brudno (2019) Centroid-based deep metric learning for speaker recognition. In ICASSP, pp. 3652–3656. Cited by: §1, §2.1, Figure 1.
  • [22] X. Wang, J. Du, L. Sun, Q. Wang, and C. Lee (2018) A progressive deep learning approach to child speech separation. In ISCSLP, pp. 76–80. Cited by: §1.
  • [23] J. Xie, L. P. Garcia-Perera, D. Povey, and S. Khudanpur (2019) Multi-PLDA diarization on children's speech. In Interspeech, pp. 376–380. Cited by: §1.
  • [24] M. Yu et al. (2018-06) Diverse few-shot text classification with multiple metrics. In NAACL: Human Language Technologies, Volume 1 (Long Papers), pp. 1206–1215. External Links: Document Cited by: §2.1.
  • [25] T. Zhou, W. Cai, X. Chen, X. Zou, S. Zhang, and M. Li (2016-10) Speaker diarization system for autism children’s real-life audio data. In ISCSLP, pp. 1–5. Cited by: §1.