The Audio Auditor: Participant-Level Membership Inference in Voice-Based IoT

05/17/2019 ∙ by Yuantian Miao, et al. ∙ Swinburne University of Technology

Voice interfaces and assistants implemented by various services have become increasingly sophisticated, powered by the increased availability of data. However, users' audio data must be safeguarded under data-protection regulations such as the GDPR and COPPA. To check for unauthorized use of audio data, we propose an audio auditor that lets users audit speech recognition models. Specifically, users can check whether their audio recordings were used as part of the model's training dataset. In this paper, we focus on a DNN-HMM-based automatic speech recognition model trained on the TIMIT audio corpus. As a proof of concept, the success rate of participant-level membership inference reaches up to 90% with eight audio samples per user, demonstrating the feasibility of such an audio auditor.


I Introduction

The automatic speech recognition (ASR) system is widely adopted on IoT devices [1, 2]. The voice-based IoT platform competition among Apple, Microsoft, and Amazon is continuously heating up the smart speaker market [3]. In parallel, privacy concerns about ASR systems and unauthorized access to users' audio are growing among customers. Privacy policies and regulations, such as the General Data Protection Regulation (GDPR) [4] and the Children's Online Privacy Protection Act (COPPA) [5], have been enforced to regulate personal data processing. However, the murky privacy and security boundary can thwart IoT's trustworthiness [6, 7], and many IoT devices attempt to sniff and analyze the audio captured in real time without the user's consent [8]. Most recently, on WeChat, an enormously popular messaging platform within China and worldwide, a scammer impersonated an acquaintance of the victim by spoofing his or her voice [9]. Therefore, it is important to develop techniques that enable auditing the use of customers' audio data in ASR models.

In this paper, we designed and evaluated an audio auditor to help users determine whether their audio data had been used without authorization to train an ASR model. The target ASR model used in this paper is a DNN-HMM-based speech-to-text model. Given an audio signal as input, this model transcribes speech into written text. The auditor audits this target model with the intent to infer participant-level membership. The target model behaves differently depending on whether it is transcribing audio from within its training set or audio from other datasets. Thus, one can analyze the transcriptions and use the outputs to train a binary classifier as the auditor. As our primary focus is to infer participant-level membership, speaker-related information is filtered out while analyzing the transcription outputs (see details in Section III).

Participant-level membership inference on textual data has been studied recently [10]. However, in this work, we target an ASR model rather than a text-generation model. Time-series audio data is significantly more complex than textual data, causing feature patterns to vary greatly [11]. Furthermore, current IoT applications carry significantly higher security and privacy impacts than most verbal applications in learning tasks [12, 13]. Our work differs in three ways. Firstly, we assume a different auditing scenario: to reproduce a target model close to ASR systems in practice, we use multi-task learning, which includes audio feature extraction, DNN learning, HMM learning, and an n-gram language model with natural language processing. Secondly, the auditor has only black-box access to the target model, which outputs only one final transcription result; additionally, the auditor can audit the model with multiple audio inputs supplied by the same user, instead of just one. Thirdly, we extract a different set of features from the model's outputs: instead of using the rank lists of several top output results, we use only the single text output with the highest posterior and the length of the input audio frames.

Our participant-level membership auditing method achieves high performance on the TIMIT dataset. The auditing accuracy exceeds 90% and the F1-score reaches 95% when 125 speakers' records are used to train the auditor model. Even when training with 25 users, the resulting accuracy is approximately 85%. The auditor is also effective when auditing ASR models with different numbers of audio queries from the same individual. Although the success rate of membership inference with a single audio sample approaches random guessing, querying with more than one audio sample per speaker significantly boosts the success rate, reaching up to 90% with eight audio samples per user.

II Background

II-A Automatic Speech Recognition (ASR) Model

Fig. 1: An advanced ASR system. The ASR system has three main steps: (1) the preprocessing step extracts features to represent the raw audio data, (2) the DNN training step trains the acoustic model and calculates the pseudo-posteriors, and (3) the decoding step aims to map the predicted symbol combinations to texts and output the transcription results with the highest score.

The DNN-HMM-based acoustic model is popular in current ASR systems [14]. As defined in [15], the ASR system contains a preprocessing step, a model training step, and a decoding step, as displayed in Figure 1. The preprocessing step performs feature processing and labeling for an audio input. In this paper, each audio frame is processed using the Discrete Fourier Transform (DFT) to extract information from the frequency domain, namely Mel-Frequency Cepstral Coefficients (MFCCs), as features. Forced alignment is applied to the raw audio inputs to extract the text labels used in training our acoustic model. We train the acoustic model as a DNN. The acoustic model outputs posterior probabilities for all HMM states, which are processed in the decoding step to map the posterior probabilities to a sequence of text. The language model contained within the decoder provides a language probability, which the decoder uses to re-weight the acoustic score toward the most likely word sequence [16]. The final transcription text is the sequence with the highest score.
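
As a concrete illustration of the preprocessing step, the sketch below extracts MFCC features from a raw waveform. It uses librosa as a stand-in for the Kaldi front end used in our pipeline, and the window length, hop size, and number of coefficients are illustrative assumptions rather than the exact configuration of our models.

```python
# A minimal MFCC-extraction sketch (librosa stand-in for the Kaldi front end).
# Frame length, hop size, and number of coefficients are illustrative choices.
import librosa

def extract_mfcc(wav_path, n_mfcc=13):
    # Load the raw waveform at its native sampling rate.
    y, sr = librosa.load(wav_path, sr=None)
    # Short-time analysis: 25 ms windows with a 10 ms hop (typical ASR settings).
    n_fft = int(0.025 * sr)
    hop_length = int(0.010 * sr)
    # MFCCs are computed from the DFT-based mel spectrogram of each frame.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                n_fft=n_fft, hop_length=hop_length)
    return mfcc.T  # shape: (num_frames, n_mfcc)

# Example: features = extract_mfcc("speaker1_utt1.wav"); features.shape -> (T, 13)
```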

II-B Membership Inference Attack

A membership inference attack aims to determine whether a specific data sample is within a model's training set, typically by training a series of shadow models whose outputs are used to train the attack model [17]. The attack model learns the differences in the target model's outputs when it is fed genuine training data versus other data. In this paper, we adapt the membership inference attack to the task of audio auditing. Specifically, instead of inferring record-level membership, we aim to infer participant-level membership. That is, we focus on whether a particular user unwillingly contributed data to train an ASR model. Our work differs from another user-level membership audit [10] in that the features extracted from the ASR model's outputs are three pieces of audio-related information, namely the transcription text, the text probability, and the frame length, rather than words' rank lists.

III Auditing the ASR Model

Fig. 2: Auditing an ASR model. (1) In the training process, we sample datasets from the auxiliary reference dataset D_ref to build shadow models. Each shadow model dataset D_i is split into a training set D_i^train and a testing set D_i^test. Then we query the shadow model with a subset of D_i^train and with D_i^test, and label their outputs as "member" and "nonmember", respectively. After preprocessing, the audit model is trained on these labeled outputs. (2) In the auditing process, we randomly sample a particular speaker u's audios to query our target ASR model. With the same preprocessing methods, the outputs are passed to the audit model to determine whether u ∈ U_tar.

In this section, we first formalize our objective for auditing automatic speech recognition models. Secondly, we present how an audio auditor can be constructed. Finally, we outline how the auditor is used for auditing the target model.

III-A Problem Definition

As shown in Figure 1, we describe the workflow of audio transcription using an ASR system. By querying an ASR system with an audio sample of a recorded speech, the speech recognition model outputs pseudo-posterior probabilities for all context-dependent phonetic units. During the decoding step, the probabilities are used to infer the most probable text sequence.

Suppose there is a group of audio recordings D_tar contributed by a set of individuals U_tar. Our target model is a speech recognition model, denoted f_tar, which is trained on D_tar using a learning algorithm A. For a specific user u, our objective is to find out whether this user is in the target model's training set, i.e., whether u ∈ U_tar. Participant-level membership inference against f_tar requires an auxiliary reference dataset D_ref to build the audio auditor. Specifically, D_ref is used to train several shadow models which approximate the target model. We denote the set of all users in D_ref as U_ref. By querying the shadow models, the transcription outputs are labeled according to whether the audio's speaker belongs to the corresponding shadow model's training set or not.

Finally, we assume that our auditor has only black-box access to the target model. Given an input audio recording, the auditor can obtain only the text transcription and its probability as outputs. Neither the training data nor the parameters and hyper-parameters of the target model are known to the auditor. We do assume, however, that the auditor knows the learning algorithms used in the ASR system, including the feature extraction, the training algorithm, and the decoding algorithm.

III-B Overview of the Audio Auditor

The nature of membership inference [17] is to learn the difference between a model's behavior on its actual training samples and on other samples. Thus, to audit whether an ASR model has been trained with a user's audio data, the auditor's task can be cast as inferring this user's membership in the ASR model's training dataset. The audio auditor's training and auditing processes are depicted in Figure 2. We assume that our target model's dataset is disjoint from the auxiliary reference dataset (D_tar ∩ D_ref = ∅). In addition, U_tar and U_ref are also disjoint (U_tar ∩ U_ref = ∅).

The primary task in training an audio auditor is to build several shadow models that infer the target ASR model's decision boundary. We assume all learning algorithms are known to the auditor; therefore, the shadow models are trained with the same learning algorithm A as the target model. Unlike for the target model, we have full knowledge of the shadow models' ground truth. For a user u querying a shadow model with her audio samples, if u is a speaker in that shadow model's training set, we collapse the features extracted from these samples' results into one record and label it as "member"; otherwise, "nonmember". Taken together, these labeled records form, after preprocessing, a training dataset used to train a binary classifier as the audit model with a supervised learning algorithm. As also evidenced in [17], the more shadow models are built, the more accurately the audit model performs.

As shown in Figure 2, datasets D_1, ..., D_k are sampled from D_ref to train k shadow models with the algorithm A. The testing set D_i^test and a subset of the training set D_i^train are used to query each shadow model. The query outputs are then preprocessed as follows. For participant-level membership, the users' pertinent characteristics are extracted from each output, namely the transcription text (denoted as TXT), the posterior probability (denoted as Probability), and the audio frame length (denoted as Frame Length). The features of the auditor's training set are written as: {TXT1=type(string), Probability1=type(float), Frame_Length1=type(integer), ..., TXTn=type(string), Probabilityn=type(float), Frame_Lengthn=type(integer), class}, where n is the number of audios belonging to a speaker. To process categorical features, such as the TXT features, we map the text to integers using a label encoder [18]. The built auditor determines whether u ∈ U_tar or not from the processed outputs. Exploring alternative preprocessing methods, such as a one-hot encoder, is an avenue for future research.
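
As a rough sketch of this feature construction, the snippet below collapses one speaker's query outputs into a single labeled record and maps the TXT features to integers with a scikit-learn label encoder. The helper names and the record layout are our own illustrative assumptions rather than the exact implementation.

```python
# Sketch: build one auditor training record from a speaker's query outputs.
# Each output is assumed to be a (transcription_text, probability, frame_length) tuple.
from sklearn.preprocessing import LabelEncoder

def build_record(outputs, label, max_audios=8):
    """Collapse up to max_audios query outputs into one flat feature record."""
    record = {}
    for i, (txt, prob, frames) in enumerate(outputs[:max_audios], start=1):
        record[f"TXT{i}"] = txt
        record[f"Probability{i}"] = float(prob)
        record[f"Frame_Length{i}"] = int(frames)
    record["class"] = label  # "member" or "nonmember"
    return record

def encode_texts(records, txt_columns):
    """Map the categorical TXT columns to integers with a label encoder."""
    encoder = LabelEncoder()
    all_txt = [r.get(col, "") for r in records for col in txt_columns]
    encoder.fit(all_txt)
    for r in records:
        for col in txt_columns:
            r[col] = int(encoder.transform([r.get(col, "")])[0])
    return records, encoder
```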

IV Experiment and Results

IV-A Dataset


Model     Training Dataset             Testing Dataset
Target    154 speakers, 1,232 audios   54 speakers, 432 audios
Shadow1   154 speakers, 1,232 audios   57 speakers, 456 audios
Shadow2   154 speakers, 1,232 audios   57 speakers, 456 audios
TABLE I: Datasets across models

As a proof of concept, we build one target model and design two shadow models based on this target model. As mentioned in Section III-B, we curated three disjoint datasets from the TIMIT speech corpus, as listed in Table I; see Appendix A for more details. In this experiment, we trained the two shadow models on data drawn from D_ref with a distribution similar to that of D_tar. Training with differently distributed datasets is left for future research.

The outputs of our two shadow models are used to train the audit model. By querying each shadow model with all of its testing set and one-third of its training set, we processed the outputs and labeled them as "nonmember" and "member", respectively. Since the training datasets for all three models include eight sentences per speaker, the feature set of the auditor's training dataset is {TXT1, Probability1, Frame_Length1, ..., TXT8, Probability8, Frame_Length8, class}. To audit the target model, a speaker may submit anywhere from one to eight audio samples. When a user audits the target model with fewer than eight audio samples, we pad all missing feature values with zeros.
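
A minimal sketch of this zero-padding, assuming the per-audio features have already been label-encoded into numeric triples as in the sketch of Section III-B:

```python
# Sketch: pad a speaker's features to the fixed eight-slot layout used by the auditor.
# Missing (TXT, Probability, Frame_Length) triples are filled with zeros.
def pad_features(triples, max_audios=8):
    """triples: list of (encoded_txt, probability, frame_length), length 1..8."""
    flat = []
    for i in range(max_audios):
        if i < len(triples):
            txt, prob, frames = triples[i]
        else:
            txt, prob, frames = 0, 0.0, 0   # zero-pad missing audio slots
        flat.extend([txt, prob, frames])
    return flat  # length 3 * max_audios = 24 numeric features

# Example: pad_features([(17, 0.93, 412)]) -> 24-element vector, last 21 entries zero.
```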

IV-B Target Model


Fig. 3: Target Model
Fig. 4: Shadow Model 1
Fig. 5: Shadow Model 2

Our target model is a speech-to-text model. The inputs are a set of audio files with phonetic text as labels, while the outputs are the transcribed phonetic texts with their final probabilities and the corresponding input frame lengths. To simulate most current ASR models in the real world, we created a state-of-the-art DNN-HMM-based ASR model [15] using the PyTorch-Kaldi Speech Recognition Toolkit [16]. In the preprocessing step, MFCC features are extracted and used to train the acoustic model with the multilayer perceptron (MLP) algorithm for 24 epochs. The outputs of this MLP model are decoded and rescored with the probabilities of the HMM and an n-gram language model to obtain the transcription. A decision tree is used for the audit model. See Appendix B for more details.
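
Given the 24 padded numeric features per speaker record described in Section III-B, the audit model itself is an ordinary decision tree. The scikit-learn sketch below shows one way it could be trained and applied; the feature matrix, labels, and default hyperparameters are illustrative assumptions, not the tuned configuration of our experiments.

```python
# Sketch: train and apply the decision-tree audit model.
# X: rows of 24 padded numeric features per speaker; y: "member"/"nonmember" labels.
from sklearn.tree import DecisionTreeClassifier

def train_auditor(X, y, seed=0):
    auditor = DecisionTreeClassifier(random_state=seed)
    auditor.fit(X, y)
    return auditor

def audit_speaker(auditor, speaker_features):
    # speaker_features: one padded 24-element record built from target-model outputs.
    return auditor.predict([speaker_features])[0]  # "member" or "nonmember"
```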

To evaluate the target model's performance, we use the training accuracy and validation accuracy shown in Figure 3. Comparing the training accuracy of the two shadow models and our target model, the trends are similar and the accuracy curves ultimately reach about 70% (see Figures 3-5). This indicates that our shadow models can successfully mimic the target model (producing the same transcription on the same audio inputs), or at least achieve the same utility, i.e., speech recognition with the same transcription accuracy, even without sharing input samples between models.

IV-C Results


Class                   Actual: member    Actual: nonmember
Predicted: member       TP                FP
Predicted: nonmember    FN                TN
TABLE II: The confusion matrix for the auditor.

To evaluate the auditor's performance, four metrics are calculated from the confusion matrix: accuracy, precision, recall, and F1-score. True Positive (TP): the number of records correctly predicted as "member". True Negative (TN): the number of records correctly predicted as "nonmember". False Positive (FP): the number of records incorrectly predicted as "member". False Negative (FN): the number of records incorrectly predicted as "nonmember". The four metrics are defined as follows (a short computation sketch follows the list):

  • Accuracy: the percentage of records correctly classified by the audit model.

  • Recall: the percentage of all true “member” records correctly determined as “member”.

  • Precision: the percentage of records correctly determined as “member” by the audit model among all records determined as “member”.

  • F1-score: the harmonic mean of precision and recall.
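
Treating "member" as the positive class, these metrics can be computed directly from the auditor's predictions; a minimal scikit-learn sketch, with made-up example labels, is shown below.

```python
# Sketch: compute accuracy, precision, recall, and F1 with "member" as the positive class.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def audit_metrics(y_true, y_pred):
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred, pos_label="member"),
        "recall": recall_score(y_true, y_pred, pos_label="member"),
        "f1": f1_score(y_true, y_pred, pos_label="member"),
    }

# Example (illustrative labels only):
# audit_metrics(["member", "nonmember", "member"], ["member", "member", "member"])
# -> accuracy ~0.67, precision ~0.67, recall 1.0, f1 0.8
```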

We show results for the behavior of the auditor under two different circumstances: when the number of users in the training dataset is varied, and when the number of audio samples from the user to be audited is varied.

Effect of the number of users in the training dataset.

Fig. 6: The audit model’s performance across the training set size.

The audit model's behavior when trained on sets containing different numbers of users is depicted in Figure 6. We trained the audit model with 25, 50, 75, 100, 125, and 150 users randomly sampled from the outputs of the two shadow models. The testing set used to query these audit models is fixed at 78 test audio records. To eliminate trial-specific deviations, we repeated each experiment 10 times and averaged the results. The audit model performs fairly well on all metrics, with every metric at or above approximately 85% across configurations. The model performs better as the number of users in the training set increases. With 100 users in the training set, performance peaked, particularly in accuracy (approximately 93%) and F1-score (approximately 95%). When the number of users increased to 125, both metrics dropped slightly, but they recovered when the number of users increased to 150. In all configurations, the audit model performs well. In general, the more users used to train the audit model, the more accurately a user's membership in the target model can be determined. We leave the behavior of the audit model with substantially larger training populations to future work.

Effect of the number of audio records for each user used in querying the auditor.

Fig. 7: The audit model’s performance by the number of audios for one speaker.

Since we randomly sample a user's audio to test our audit model, the number of audio samples for this user may not match the number of audios per user in the auditor's training dataset. That is, the number of non-zero features in an audit query may vary. We evaluate the effect of a variable number of audio samples per user on the auditor's performance. Herein, the number of users in the different testing sets is the same, fixed at 78. To gather a different number of non-zero features in the audit model's testing dataset, we queried the target model with 78 users, randomly sampling from one to eight test audio records for each user. As in the experiment above, we repeated each experiment 100 times and averaged the results to reduce deviations in performance. The results are displayed in Figure 7. The more audio samples per user used to audit membership, the more accurately our audio auditor performs. When the user audits the target model with only one audio sample, the audit model's performance is relatively low: accuracy approaches 50%, while the other three metrics are around 25%. When the number of audio samples reaches eight, all performance results are above 90%.

V Conclusion

This work highlights, and leaves open, the potential of mounting participant-level membership inference attacks in voice-based IoT. While our work has yet to examine the attack success rate on various IoT applications across multiple learning models, it does narrow the gap towards defining membership privacy at the user level rather than the record level [17], where it remains unclear whether the privacy leakage stems from the data distribution or from the intrinsic uniqueness of a record. Nevertheless, as we argued, both the size of the user base and the number of audio samples per user in the testing set have been shown to have a positive effect on the IoT audit model. Examining other factors that affect performance and exploring possible defenses against such auditing are worth further investigation.

References

Deep Learning for Acoustic Models

Deep learning methods are used to build acoustic models for tasks such as speech transcription [19], word spotting or triggering [20], and speaker identification or verification [21]. With supervised learning, a neural network can be trained as a classifier using a softmax across the phonetic units. A feature stream of audio is the input to the network, while the output is a posterior probability for the predicted phonetic states. Subsequently, these output representations are decoded by the HMM-based decoder and mapped to possible sequences of phonetic texts with different probabilities.

The multilayer perceptron (MLP) is the DNN algorithm used in this work. Assume that the MLP is a stack of L layers of logistic regression models, and let \sigma^{(l)} denote the activation function of the l-th layer. Given an input h^{(l-1)} \in \mathbb{R}^{N_{l-1}}, where N_{l-1} is the number of neurons in the (l-1)-th layer, this layer's output can be formalized as:

h^{(l)} = \sigma^{(l)}\big(W^{(l)} h^{(l-1)} + b^{(l)}\big),   (1)

where W^{(l)} represents the weight matrix and b^{(l)} is the bias from the (l-1)-th to the l-th layer. Specifically, we applied the sigmoid function in the hidden layers and the softmax activation function for the final output layer. As the loss function, the MLP uses cross-entropy. Moreover, the MLP tunes its parameters using the error back-propagation (BP) procedure and stochastic gradient descent.
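
A small numpy sketch of the forward pass in Equation (1), with sigmoid hidden layers and a softmax output as described above (layer sizes and parameters are arbitrary placeholders):

```python
# Sketch of the MLP forward pass in Equation (1): sigmoid hidden layers, softmax output.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())            # subtract max for numerical stability
    return e / e.sum()

def mlp_forward(x, weights, biases):
    """weights/biases: lists of (W_l, b_l); hidden layers use sigmoid, output uses softmax."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = sigmoid(W @ h + b)          # h_l = sigma(W_l h_{l-1} + b_l)
    W_out, b_out = weights[-1], biases[-1]
    return softmax(W_out @ h + b_out)   # posterior probabilities over phonetic states
```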

In the case of building the ASR system with DNN-HMM algorithms, the posterior probability output of the output layer can be expressed as y_t = (y_{t1}, ..., y_{tK}), where K is the total number of phonemes, corresponding to the number of the layer's output nodes. This is the set of posterior probabilities of each phoneme in time frame t of the audio input. The posterior probability of each phoneme (i.e., y_{tk}) is transferred and processed by the HMM-based decoder:

b_j(y_t) = \sum_{m=1}^{M} c_{jm}\, \mathcal{N}(y_t;\, \mu_{jm}, \Sigma_{jm})   (2)

In Equation 2, b_j(y_t) is the probability of the observation in time frame t mapping to HMM state s_j, based on continuous Probability Density Functions (PDFs) [22]. Herein, M is a fixed number of PDFs per state, and c_{jm} is the weight of each component.
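
As a numerical illustration, the sketch below evaluates a Gaussian-mixture emission probability of the form in Equation (2) with diagonal covariances; the mixture parameters are purely illustrative, and this is our reading of the equation rather than the exact decoder implementation.

```python
# Sketch of Equation (2): a diagonal-covariance Gaussian-mixture emission probability.
import numpy as np

def gmm_emission(y_t, weights, means, variances):
    """p(y_t | s_j) = sum_m c_jm * N(y_t; mu_jm, sigma_jm^2), diagonal covariances."""
    prob = 0.0
    for c, mu, var in zip(weights, means, variances):
        norm = np.prod(1.0 / np.sqrt(2.0 * np.pi * var))
        expo = np.exp(-0.5 * np.sum((y_t - mu) ** 2 / var))
        prob += c * norm * expo
    return prob
```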

Overview of Audio Auditor

To build up the shadow models (see Figure 2), we sample datasets D_1, ..., D_k from the auxiliary reference dataset D_ref. Further, we split each shadow model dataset D_i into a training set D_i^train and a testing set D_i^test. D_i^train is used to train the shadow model with the learning algorithm A, while D_i^test is used to evaluate its performance. To generate the training set with ground truth for the audit model, we query the shadow model with all samples in D_i^test and a randomly sampled subset of D_i^train.

Each shadow model f_i is trained on D_i^train using A. For a user u querying the model with their audio samples, if u is a speaker in D_i^train, we combine the features extracted from these samples' results into one record and label it as "member"; otherwise, we label the record as "nonmember". These labels, combined with the corresponding outputs from the shadow models, form the training dataset for our audit model.

As for the auditing process, we randomly sample one or a few audios recorded by one speaker u to query our target ASR model. The target model transcribes these sampled audios into text together with the associated outputs. To audit whether the target model used this speaker's audio in its training phase, we analyze these transcription outputs as a testing record for our audio auditor. The feature extraction and preprocessing methods used for this testing record are the same as those used for the shadow models' results. The auditor finally classifies this testing record as "member" or "nonmember" and hence determines whether u ∈ U_tar or not.
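
An end-to-end sketch of this auditing phase is shown below; query_target stands in for black-box access to the deployed ASR model, the preprocessing mirrors the sketches earlier in the paper, and all names are illustrative assumptions.

```python
# Sketch of the auditing phase: query the target ASR model with a speaker's audios,
# build one padded test record, and let the trained auditor decide membership.
def audit_user(auditor, encoder, user_wavs, query_target, max_audios=8):
    """query_target(wav) is a hypothetical black-box call returning
    (transcription_text, probability, frame_length) for one audio file."""
    triples = []
    for wav in user_wavs[:max_audios]:
        txt, prob, frames = query_target(wav)
        # Reuse the label encoder fitted on the shadow models' outputs; unseen
        # transcriptions fall back to a reserved id of 0 in this sketch.
        known = set(encoder.classes_)
        encoded = int(encoder.transform([txt])[0]) if txt in known else 0
        triples.append((encoded, float(prob), int(frames)))
    # Zero-pad to the fixed eight-slot layout, matching the training records.
    flat = []
    for i in range(max_audios):
        t, p, f = triples[i] if i < len(triples) else (0, 0.0, 0)
        flat.extend([t, p, f])
    return auditor.predict([flat])[0]  # "member" or "nonmember"
```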

Appendix A TIMIT Dataset

The TIMIT speech corpus contains 6,300 sentences spoken by 630 speakers from 8 major dialect regions of the United States. Three kinds of sentences are recorded: the dialect sentences, the phonetically-compact sentences, and the phonetically-diverse sentences. The dialect sentences are spoken by all speakers from the different dialect regions. The phonetically-compact sentences were recorded to give good coverage of pairs of phones, while the phonetically-diverse sentences were selected from different corpora to maximize the variety of sentence types and phonetic contexts.

We manually selected three disjoint datasets from the TIMIT speech corpus as described in Table I. Specifically, each training dataset and testing dataset contains audio recorded by speakers from all 8 dialect regions. In addition, each subset contains all three kinds of sentences mentioned above. The diversity of audio within each dataset not only better resembles a real-world ASR model's training set, but also retains speaker-specific information for the participant-level auditing task.

Appendix B Target Model

During the audio input's preprocessing step, we utilize the Kaldi Toolkit [23] to extract MFCC features from each audio waveform. Forced alignment between features and phone states was used to generate the labels. With the prepared training set, we applied a simple DNN algorithm, the multilayer perceptron (MLP), to learn the relationship between the input audios and the output transcriptions. As for the hyperparameters of the MLP model, we set up 4 hidden layers with 1,024 hidden neurons per layer. The learning rate was set to 0.08 and the model was trained for 24 epochs. The output of this MLP model is a set of pseudo-posterior probabilities over all possible phonetic units. These outputs are normalized and then fed into an HMM-based decoder. After decoding, an n-gram language model was applied to rescore the probabilities. The final transcription is the text sequence with the highest final probability.
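
A minimal PyTorch sketch of an MLP acoustic model with the stated hyperparameters (4 sigmoid hidden layers of 1,024 units, cross-entropy loss, SGD with learning rate 0.08, 24 epochs). The input feature dimension and the number of HMM output states are placeholders, and the actual system is built with the PyTorch-Kaldi recipe rather than this standalone module.

```python
# Sketch: MLP acoustic model with 4 sigmoid hidden layers of 1,024 units.
# in_dim (MFCC feature dimension) and n_states (HMM senones) are placeholder values.
import torch
import torch.nn as nn

class MLPAcousticModel(nn.Module):
    def __init__(self, in_dim=39, n_states=1936, hidden=1024, n_hidden_layers=4):
        super().__init__()
        layers, prev = [], in_dim
        for _ in range(n_hidden_layers):
            layers += [nn.Linear(prev, hidden), nn.Sigmoid()]
            prev = hidden
        layers.append(nn.Linear(prev, n_states))   # softmax is applied via the loss
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)   # frame-level logits over HMM states

# Training-loop sketch: cross-entropy loss, SGD with the stated learning rate, 24 epochs.
# model = MLPAcousticModel()
# optimizer = torch.optim.SGD(model.parameters(), lr=0.08)
# criterion = nn.CrossEntropyLoss()
# for epoch in range(24):
#     for feats, labels in train_loader:        # train_loader is assumed
#         optimizer.zero_grad()
#         loss = criterion(model(feats), labels)
#         loss.backward()
#         optimizer.step()
```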