With recent improvements in deep neural networks, researchers have developed neural network based vocoders such as WaveNet (van den Oord et al., 2016) and SampleRNN (Mehri et al., 2017). These models showed the ability to generate high-quality waveforms from acoustic features. Some researchers further devised neural network based text-to-speech (TTS) models that replace the entire TTS system with neural networks (Arık et al., 2017a, b; Ping et al., 2018; Wang et al., 2017; Shen et al., 2017). Compared to previous approaches, which require carefully designed features, neural TTS models can be built without prior knowledge of the target language, given enough (speech, text) pair data. Furthermore, neural network based TTS models are capable of generating speech with different voices by conditioning on a speaker index (Arık et al., 2017b; Ping et al., 2018) or an emotion label (Lee et al., 2017).
Some researchers have tried to imitate a new speaker's voice using the speaker's recordings (Taigman et al., 2018). Taigman et al. (2018) reported their model's ability to mimic a new speaker's voice by learning a speaker embedding for the new speaker. However, this approach requires an additional training stage and transcriptions of the new speaker's speech samples. The transcription may not always be available, and the additional training stage prevents immediate imitation of a new speaker's voice. In this study, we propose a voice imitating TTS model that can imitate a new speaker's voice without a transcript of the speech sample or additional training. This enables immediate voice imitation using only a short speech sample of a speaker. The proposed model takes two inputs: (1) target text and (2) a speaker's speech sample. The speech sample is first transformed into a speaker embedding by the speaker embedder network. Then a neural network based TTS model generates the speech output by conditioning on the speaker embedding and the text input.
We implemented a baseline multi-speaker TTS model based on Tacotron, and we also implemented a voice imitating TTS model by extending the baseline model. We investigated the latent space of the learned speaker embeddings by visualizing them with principal component analysis (PCA). We qualitatively compared the similarity of voices from both TTS models and the ground-truth data, and we further conducted two surveys to analyze the results quantitatively. The first survey compared the generation quality of the voice imitating TTS and the multi-speaker TTS; the second survey measured how speaker-discriminable the speech samples generated by both models are.
The main contributions of this study can be summarized as follows:
The proposed model makes it possible to imitate a new speaker's voice using only a 6-second speech sample.
Imitating a new speaker's voice can be done immediately, without additional training.
Our approach allows the TTS model to utilize various sources of information by changing the input of the speaker embedder network.
In this section, we review previous works related to our study, covering both traditional TTS systems and neural network based TTS systems. The latter include neural vocoders, single-speaker TTS, multi-speaker TTS, and voice imitation models.
Common TTS systems are composed of two major parts: (1) a text encoding part and (2) a speech generation part. Using prior knowledge about the target language, domain experts have defined useful linguistic features and extracted them from input texts. This process is the text encoding part, and many natural language processing techniques are used in this stage. For example, a grapheme-to-phoneme model is applied to input texts to obtain phoneme sequences, and a part-of-speech tagger is applied to obtain syntactic information. In this manner, the text encoding part takes a text input and returns various linguistic features. The following speech generation part then takes the linguistic features and generates the waveform of the speech. Examples of the speech generation part include the concatenative and parametric approaches: the concatenative approach generates speech by connecting short units of speech at the phoneme or sub-phoneme level, while the parametric approach utilizes a generative model to generate speech.
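As a toy illustration of the text encoding part, the sketch below runs a dictionary-based grapheme-to-phoneme lookup and a stand-in part-of-speech tag. The mini-lexicon, phoneme symbols, and tagset are hypothetical, not taken from any real TTS front end.

```python
# Toy sketch of the "text encoding part": a dictionary-based
# grapheme-to-phoneme lookup plus a trivial stand-in POS tag, producing
# the kind of linguistic features a classical speech generation back end
# consumes. Lexicon and tags are illustrative only.

MINI_LEXICON = {
    "hello": ["HH", "AH", "L", "OW"],
    "world": ["W", "ER", "L", "D"],
}

def encode_text(text):
    """Return per-word (phonemes, pos) features for a whitespace-split text."""
    features = []
    for word in text.lower().split():
        phonemes = MINI_LEXICON.get(word, ["<UNK>"])  # fall back for OOV words
        pos = "NOUN" if word in MINI_LEXICON else "X"  # stand-in POS tagger
        features.append({"word": word, "phonemes": phonemes, "pos": pos})
    return features
```

A real front end would replace both lookups with trained models, but the interface (text in, feature sequence out) is the same.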
Having seen neural networks show great performance in regression and classification tasks, researchers have tried to substitute neural networks for previously used components of TTS systems. Some proposed architectures that can substitute for the vocoder of the speech generation part, including WaveNet (van den Oord et al., 2016) and SampleRNN (Mehri et al., 2017). WaveNet can generate speech by conditioning on several linguistic features, and Sotelo et al. (2017) showed that SampleRNN can generate speech by conditioning on vocoder parameters. Although these approaches can substitute for parts of previous speech synthesis frameworks, they still require external modules to extract the linguistic features or vocoder parameters. Other researchers proposed architectures that substitute for the whole speech synthesis framework. Deep Voice 1 (Arık et al., 2017a) consists of five modules, all modeled with neural networks, which together substitute for the text encoding part and the speech generation part of the common framework. While Deep Voice 1 is composed only of neural networks, it was not trained in an end-to-end fashion.
Wang et al. (2017) proposed a fully end-to-end speech synthesis model called Tacotron. Tacotron can be regarded as a variant of a sequence-to-sequence network with attention (Bahdanau et al., 2014) that converts a character sequence into the corresponding waveform. It is composed of three modules: encoder, decoder, and post-processor (see Figure 1). The encoder takes the character sequence as input and generates a text encoding sequence of the same length as the character sequence. The decoder generates a Mel-scale spectrogram in an autoregressive manner: combining the attention alignment with the text encoding gives a context vector, and the decoder RNN takes the context vector and the output of the attention RNN as inputs to predict the Mel-scale spectrogram. The post-processor module then generates a linear-scale spectrogram from the Mel-scale spectrogram. Finally, the Griffin-Lim reconstruction algorithm estimates the waveform from the linear-scale spectrogram (Griffin & Lim, 1984).
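The attention step described above can be sketched minimally: the alignment weights over the text encoding are combined into a single context vector that the decoder RNN consumes. Plain Python lists stand in for tensors, and the values in the usage example are illustrative.

```python
# Minimal sketch of the attention context-vector computation in a
# sequence-to-sequence model with attention: a weighted sum of the
# per-character encoder outputs, weighted by the alignment.

def context_vector(encodings, alignment):
    """encodings: one vector per input character; alignment: one weight each."""
    assert len(encodings) == len(alignment)
    dim = len(encodings[0])
    ctx = [0.0] * dim
    for vec, weight in zip(encodings, alignment):
        for i in range(dim):
            ctx[i] += weight * vec[i]
    return ctx
```

With two encoder outputs [1, 0] and [0, 1] and alignment (0.25, 0.75), the context vector is [0.25, 0.75]: the decoder attends mostly to the second character.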
Single-speaker TTS systems have been further extended to multi-speaker TTS systems, which can generate speech by conditioning on a speaker index. Arık et al. proposed Deep Voice 2, a modified version of Deep Voice 1, to enable multi-speaker TTS (2017a; 2017b). By feeding a learned speaker embedding as nonlinearity biases, recurrent neural network initial states, and multiplicative gating factors, they showed their model can generate multiple voices. They also showed that Tacotron is able to generate multiple voices using a similar approach. Another study reported a TTS system that can generate voices containing emotions (Lee et al., 2017). This approach is similar to the multi-speaker Tacotron in the Deep Voice 2 paper, but the model could be built with fewer speaker embedding input connections.
Multi-speaker TTS models have been further extended to voice imitation models. Current multi-speaker TTS models take a one-hot speaker index vector as input, which is not easily extendable to voices that are not in the training data. Because the model can learn embeddings only for the speakers represented by one-hot vectors, there is no way to obtain a new speaker's embedding. To generate speech for a new speaker, we would need to retrain the whole TTS model or fine-tune its embedding layer. However, training the network requires a large amount of annotated speech data, and it takes time until convergence. Taigman et al. (2018) proposed a model that can mimic a new speaker's voice: while freezing the model's parameters, they backpropagated errors from the new speaker's (speech, text, speaker index) samples to obtain a learned embedding. However, this model could not overcome the problems mentioned earlier. The retraining step requires (speech, text) pairs, which can be inaccurate or even unavailable for data from the wild, and because of the additional training, voice imitation cannot be done immediately. In this study, we propose a TTS model that does not require annotated (speech, text) pairs, so that it can be utilized in more general situations; moreover, it can immediately mimic a new speaker's voice without retraining.
3 Voice imitating neural speech synthesizer
3.1 Multi-speaker TTS
One advantage of using a neural network for a TTS model is that it is easy to impose conditions when generating speech; for instance, we can condition the model by simply adding a speaker index input. Among several approaches to neural network based multi-speaker TTS, we adopted the architecture of Lee et al. (2017). Their model extends Tacotron to take a speaker embedding vector at the decoder (see Figure 1). If we drop the connections from the one-hot speaker ID input and the speaker embedding vector, there is no difference from the original Tacotron architecture. The model has two targets in its objective function: (1) a Mel-scale spectrogram target and (2) a linear-scale spectrogram target. The L1 distances between the predicted and ground-truth spectrograms on both scales are added to form the objective:

$L = \lVert \hat{y}_{\mathrm{mel}} - y_{\mathrm{mel}} \rVert_1 + \lVert \hat{y}_{\mathrm{linear}} - y_{\mathrm{linear}} \rVert_1 \quad (1)$

where the $\hat{y}$'s are outputs of Tacotron and the $y$'s are the ground-truth spectrograms. Note that there is no direct supervision on the speaker embedding vector; each speaker index obtains a corresponding embedding learned from the error backpropagated from loss (1). By this formulation, the model can store in its lookup table only the speaker embeddings that appear in the training data. To generate speech with a new speaker's voice, we need another speaker embedding for that speaker, which requires training the model again with the new speaker's data. This retraining consumes much time, and the model's usability is limited to voices with a large amount of data.
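The objective in Eq. (1) reduces to two L1 distances added together. A minimal sketch, with spectrograms represented as flat lists of values rather than 2-D arrays:

```python
# Sketch of the Tacotron objective (Eq. 1): the sum of L1 distances
# between predicted and ground-truth Mel-scale and linear-scale
# spectrograms. Flat lists stand in for spectrogram arrays.

def l1_distance(pred, target):
    return sum(abs(p - t) for p, t in zip(pred, target))

def tacotron_loss(mel_pred, mel_true, lin_pred, lin_true):
    """L = ||mel_pred - mel_true||_1 + ||lin_pred - lin_true||_1"""
    return l1_distance(mel_pred, mel_true) + l1_distance(lin_pred, lin_true)
```

Both terms carry equal weight; the speaker embedding receives gradient only through these two reconstruction terms, with no direct supervision of its own.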
3.2 Proposed model
One possible approach to this problem is direct manipulation of the speaker embedding vector. Assuming the speaker embedding can represent arbitrary speakers' voices, we might obtain the desired voice by changing its values, but finding the exact combination of values would be hard; this approach is not only inaccurate but also labor intensive. Another possible approach is to retrain the network with the new speaker's data. With a sufficient amount of data, this can give the desired speech output; however, we are unlikely to have enough data for the new speaker, and the training process requires much time until convergence. To tackle the problem more efficiently, we propose a novel TTS architecture that can generate a new speaker's voice from a small amount of speech. The imitation is immediate, requiring neither additional training nor a manual search over the speaker embedding vector.
The proposed voice imitating TTS model is an extension of the multi-speaker Tacotron in Section 3.1. We added a subnetwork, the speaker embedder, that predicts a speaker embedding vector from a speech sample of the target speaker. Figure 2 shows this subnetwork, which contains convolutional layers followed by fully connected layers; it takes a log-Mel-spectrogram as input and predicts a fixed-dimensional speaker embedding vector. Notice that the input of the speaker embedder network is not limited to speech samples. Substituting the input of the speaker embedder network enables TTS models to condition on various sources of information, but we focus on conditioning on a speaker's speech sample in this paper.
Prediction of the speaker embedding vector requires only one forward pass of the speaker embedder network, which enables the proposed model to imitate a new speaker immediately. Although the input spectrograms may have various lengths, the max-over-time pooling layer at the end of the convolutional layers squeezes the input into a fixed-dimensional vector with length 1 along the time axis; in this way, the voice imitating TTS model can deal with input spectrograms of arbitrary length. The speaker embedder with a speech sample input replaces the lookup table with one-hot speaker ID input of the multi-speaker Tacotron, as described in Figure 3. To train the voice imitating TTS model, we use the same objective function (1) as the multi-speaker TTS; note again that there is no supervision on the speaker embedding vector.
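Max-over-time pooling is what makes arbitrary-length inputs possible: regardless of the number of frames, taking the maximum over the time axis yields one fixed-size vector. A minimal pure-Python sketch:

```python
# Sketch of max-over-time pooling: for any number of time steps, take
# the elementwise maximum across frames, collapsing the time axis to
# length 1 and producing a fixed-dimensional vector.

def max_over_time(frames):
    """frames: list of equal-length feature vectors, one per time step."""
    dim = len(frames[0])
    return [max(frame[i] for frame in frames) for i in range(dim)]
```

Whether the input covers one second or one minute of speech, the output dimensionality depends only on the number of feature channels, so the same projection layers can follow.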
In accordance with Arık et al. (2017b), we used the VCTK corpus, which contains 109 native English speakers with various accents and ages; each speaker recorded around 400 sentences.
We preprocessed the raw dataset in several ways. First, we manually transcribed audio files that did not have corresponding transcripts. For the text data, we filtered out symbols other than English letters, numbers, and punctuation marks, and we kept capital letters without decapitalization. For the audio data, we trimmed silence using the WebRTC Voice Activity Detector (Google). Trimming silence is reportedly important for training Tacotron (Arık et al., 2017b; Wang et al., 2017): there is no label telling the model when to start speaking, so if an audio file begins with silence, the model cannot learn the proper time to start; removing silence alleviates this by aligning the starting times of utterances. After trimming, the total length of the dataset was 29.97 hours. We then computed the log-Mel-spectrogram and log-linear-spectrogram of each audio file, using a Hann window with a frame length of 50 ms shifted by 12.5 ms.
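The framing above (50 ms Hann windows shifted by 12.5 ms) can be sketched as follows. The 16 kHz sampling rate is an assumed example value, not stated in the text.

```python
import math

# Sketch of the spectrogram framing: 50 ms Hann windows shifted by
# 12.5 ms. The sampling rate is an assumed example (16 kHz); frame
# counts follow the usual STFT convention without padding.

def hann_window(n):
    """Symmetric Hann window of length n."""
    return [0.5 - 0.5 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]

def num_frames(num_samples, sr=16000, frame_ms=50, hop_ms=12.5):
    frame_len = int(sr * frame_ms / 1000)   # 800 samples at 16 kHz
    hop_len = int(sr * hop_ms / 1000)       # 200 samples at 16 kHz
    if num_samples < frame_len:
        return 0
    return 1 + (num_samples - frame_len) // hop_len
```

At 16 kHz, one second of audio yields 77 frames under this convention, so a roughly 6-second sample spans a few hundred spectrogram frames.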
In this experiment, we trained two TTS models, multi-speaker Tacotron and voice imitating Tacotron; in the rest of this paper, we refer to them as the multi-speaker TTS and the voice imitating TTS respectively. Training the latter required an additional data preparation step, since the model predicts a speaker embedding from the log-Mel-spectrogram of a speech sample. Because a short sentence is unlikely to capture a speaker's characteristics, we concatenated each speaker's whole speech data and cut it into samples with a fixed-size rectangular window with overlap. Each window covers around 6 seconds of speech, which can contain several sentences. We fed a speech sample of the target speaker to the model together with the text input, drawing the sample at random from the windowed sample pool. We did not use the speech sample matched to the text input, to prevent the model from learning to generate by copying from the input speech sample. Furthermore, when training the voice imitating TTS, we held out 10 speakers' data as a test set to check whether the model can generate unseen speakers' voices. The profiles of the 10 held-out speakers are shown in Table 1; we selected them to have a distribution similar to the training data in gender, age, and accent.
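The windowed sample pool described above can be sketched as a fixed-size sliding window over a speaker's concatenated frames, from which one window is drawn at random per training example. The window and hop sizes in frames below are illustrative, not the paper's actual values.

```python
import random

# Sketch of the speech-sample pool: a speaker's concatenated spectrogram
# frames are cut into fixed-size overlapping windows (about 6 s each in
# the paper), and one window is drawn at random as the embedder input.
# Window/hop sizes here are illustrative frame counts.

def window_pool(frames, win=480, hop=240):
    """Slide a fixed rectangular window with overlap over the frames."""
    return [frames[i:i + win] for i in range(0, len(frames) - win + 1, hop)]

def draw_sample(frames, rng=random):
    """Randomly draw one window from the pool."""
    return rng.choice(window_pool(frames))
```

Because the window is drawn at random and never matched to the current text input, the embedder cannot learn to copy the target utterance from its own input.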
For Tacotron's parameters, we basically followed the specifications in the original Tacotron paper except for the reduction factor (Wang et al., 2017); we used a reduction factor of 5, meaning 5 spectrogram frames are generated at each time step. For the speaker embedder network, we used the following hyperparameters: a 5-layer 1D-convolutional network with 128 channels and window size 3, where the first 2 layers have stride 1 and the remaining 3 layers have stride 2, followed by 2 linear layers with 128 hidden units. We used ReLU nonlinearities and applied batch normalization at every layer (Ioffe & Szegedy, 2015), along with dropout at a ratio of 0.5 to improve generalization (Srivastava et al., 2014). The last layer of the speaker embedder network is a learnable projection layer without nonlinearity or dropout.
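The temporal downsampling implied by these stride settings can be checked with a small shape sketch: strides (1, 1, 2, 2, 2) reduce the time axis roughly 8x before max-over-time pooling removes it entirely. "Same" padding is assumed here for simplicity; the paper does not state the padding scheme.

```python
# Shape sketch for the speaker embedder's convolutional stack: five
# 1-D conv layers with strides (1, 1, 2, 2, 2). With 'same' padding
# (an assumption), each stride-s layer maps length L to ceil(L / s),
# so the time axis shrinks by roughly a factor of 8 overall.

def conv_out_len(length, stride):
    """Output length of a 'same'-padded convolution with the given stride."""
    return (length + stride - 1) // stride  # integer ceil division

def embedder_time_steps(num_frames, strides=(1, 1, 2, 2, 2)):
    for s in strides:
        num_frames = conv_out_len(num_frames, s)
    return num_frames
```

A 480-frame input leaves 60 time steps for max-over-time pooling to collapse; odd lengths round up at each strided layer.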
We used a mini-batch size of 32. During training, the limited GPU memory prevented us from loading a mini-batch of long sequences at once, so to maximize the utilization of data, we used truncated backpropagation through time (Rumelhart et al., 1985). We clipped gradients with a threshold of 1.0 (Pascanu et al., 2013). For optimization, we used ADAM (Kingma & Ba, 2014), which adaptively scales updates, with a learning rate of 0.001, beta1 of 0.9, and beta2 of 0.999.
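The clipping step with threshold 1.0 can be sketched as global-norm clipping: if the gradient's overall norm exceeds the threshold, the whole gradient is rescaled to that norm before the optimizer update. Whether the original implementation clips by global norm or elementwise is not stated, so this is one plausible reading.

```python
import math

# Sketch of gradient clipping with the threshold of 1.0 mentioned in
# the text, read as global-norm clipping: rescale the whole gradient
# when its L2 norm exceeds the threshold, preserving its direction.

def clip_by_global_norm(grads, threshold=1.0):
    """grads: flat list of gradient values; returns the clipped list."""
    norm = math.sqrt(sum(g * g for g in grads))
    if norm <= threshold:
        return grads
    scale = threshold / norm
    return [g * scale for g in grads]
```

Clipping this way prevents the exploding gradients that truncated BPTT over long spectrogram sequences can otherwise produce, without biasing the update direction.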
[Figure 4 panels: (a) Voice imitating TTS; (b) Multi-speaker TTS]
[Figure 5 columns: Voice imitating TTS; Multi-speaker TTS; Ground truth]
[Figure 6 columns: Voice imitating TTS; Multi-speaker TTS; Ground truth]
We first checked the performance of the voice imitating TTS qualitatively by investigating the learned latent space of the speaker embeddings, applying PCA to them. Previous studies reported discriminative patterns in speaker embeddings in terms of gender and other aspects (Arık et al., 2017b; Taigman et al., 2018). Figure 4 shows the first two principal components of the speaker embeddings, where green and red represent female and male speakers respectively. The speaker embeddings of the multi-speaker TTS show a clear separation, as reported in other studies. Although the speaker embeddings of the voice imitating TTS have an overlapping area, the female embeddings are dominant in the left part and the male embeddings in the right part. Besides, some embeddings lie far from the center. We suspect the overlap and the outliers exist because each speaker embedding is extracted from a randomly chosen speech sample of a speaker: a male speaker's sample may happen to contain only a particularly low-pitched voice, or a female speaker's sample only a particularly high-pitched voice. This may produce the outlying embeddings, and a similar argument applies to the overlapping embeddings.
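The PCA inspection above can be sketched in pure Python: center the embeddings and find the leading principal component by power iteration on the covariance matrix. This is a stand-in for a library PCA, and the data in the usage note is illustrative.

```python
# Sketch of the PCA used to inspect speaker embeddings: center the
# data, form the covariance matrix, and recover the leading principal
# component by power iteration. A pure-Python stand-in for library PCA.

def first_pc(X, iters=100):
    """X: list of equal-length embedding vectors; returns the unit top PC."""
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    C = [[x - m for x, m in zip(row, means)] for row in X]  # centered data
    cov = [[sum(C[i][a] * C[i][b] for i in range(n)) / n for b in range(d)]
           for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

For embeddings that vary only along one axis, the recovered component aligns with that axis; projecting each embedding onto the top two such components gives the scatter plots of Figure 4.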
To check how similar the generated voices are to the ground truth, we compared spectrograms and speech samples from the voice imitating TTS with those of the multi-speaker TTS and the ground-truth data. Feeding a text from the training data while conditioning on the same speaker, we generated samples from both models and compared them with the corresponding ground-truth speech. Example spectrograms from both models and the ground truth are shown in Figure 5. Both models produced similar spectrograms, and the difference was negligible when we listened to the speech samples. The heights and widths of the harmonic patterns indicate similar pitch and speed. Compared with the ground truth, samples from both models had similar pitch, which suggests the model can learn to predict the speaker embedding from the speech samples.
Similarly, we analyzed spectrograms to check whether the voice imitating TTS generalizes well to the test set. Note that the training data of the multi-speaker TTS included the test set of the voice imitating TTS, because otherwise the multi-speaker TTS cannot generate speech for unseen speakers. In Figure 6, the spectrograms of the generated samples again show similar patterns, especially in the pitch of each speaker. From these results, we conjecture that the model at least learned to encode pitch information in the speaker embedding, and that this generalizes to unseen speakers.
Since it is difficult to evaluate generated speech objectively, we conducted surveys on crowdsourcing platforms such as Amazon Mechanical Turk. We first composed speech sample comparison questions to evaluate the voice quality of the generated samples. This survey consists of 10 questions; in each, two audio samples, one from the voice imitating TTS and the other from the multi-speaker TTS, are presented, and participants are asked to give a score from -2 (multi-speaker TTS is far better than voice imitating TTS) to 2 (multi-speaker TTS is far worse than voice imitating TTS). We gathered 590 ratings on the 10 questions from 59 participants (see Figure 7). The ratings were concentrated at the center with an overall mean score of . It seems there is not much difference in voice quality between the voice imitating TTS and the multi-speaker TTS.
For the second survey, we composed speaker identification questions to check whether the generated speech samples contain distinct speaker characteristics. The survey consists of 40 questions, each presenting 3 audio samples: a ground-truth sample and two generated samples. The two generated samples came from the same TTS model, but each was conditioned on a different speaker's index or speech sample. Participants were asked to choose the generated sample that sounds like the same speaker as the ground truth. On the crowdsourcing platform, we recruited 50 participants to evaluate the voice imitating TTS and another 50 to evaluate the multi-speaker TTS. The resulting speaker identification accuracies were 60.1% and 70.5% for the voice imitating TTS and the multi-speaker TTS respectively. Since random selection would score 50%, accuracies above 50% reflect distinguishable speaker identity in the generated samples. By the nature of the problem, generating a distinct voice is more difficult for the voice imitating TTS, which must capture a speaker's characteristics from a short sample, whereas the multi-speaker TTS can learn them from a large amount of speech data. Considering this difficulty, we think the score gap between the two models is explainable.
We have proposed a novel architecture that can imitate a new speaker's voice. In contrast to current multi-speaker speech synthesis models, the voice imitating TTS can generate a new speaker's voice using only a small amount of speech. Furthermore, our method imitates a voice immediately, without additional training. We evaluated the generation performance of the proposed model both qualitatively and quantitatively and found no significant difference in voice quality between the voice imitating TTS and the multi-speaker TTS. Although speech generated by the voice imitating TTS showed a less distinguishable speaker identity than that of the multi-speaker TTS, it contained pitch information that can make a voice distinguishable from other speakers' voices.
Our approach is particularly differentiated from previous approaches by learning to extract features with the speaker embedder network. Feeding various sources of information to the speaker embedder makes TTS models more versatile, and exploring this possibility is part of our future work. One possible direction is multi-modal conditioned text-to-speech: although this paper focused on extracting a speaker embedding from a speech sample, the speaker embedder network could learn to extract embeddings from other sources such as video. By applying the same approach to a facial video sample, the network may capture emotion or other characteristics, and the resulting TTS system could generate a speaker's voice with emotion and characteristics appropriate to a given facial video clip and input text. Another direction is cross-lingual voice imitation: since our model requires no transcript corresponding to the new speaker's speech sample, it has the potential to be applied in cross-lingual settings, for instance imitating a Chinese speaker's voice to generate an English sentence.
- Arık et al. (2017a) Arık, Sercan Ö., Chrzanowski, Mike, Coates, Adam, Diamos, Gregory, Gibiansky, Andrew, Kang, Yongguo, Li, Xian, Miller, John, Ng, Andrew, Raiman, Jonathan, Sengupta, Shubho, and Shoeybi, Mohammad. Deep voice: Real-time neural text-to-speech. In Precup, Doina and Teh, Yee Whye (eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 195–204, International Convention Centre, Sydney, Australia, 06–11 Aug 2017a. PMLR.
- Arık et al. (2017b) Arık, Sercan O, Diamos, Gregory, Gibiansky, Andrew, Miller, John, Peng, Kainan, Ping, Wei, Raiman, Jonathan, and Zhou, Yanqi. Deep voice 2: Multi-speaker neural text-to-speech. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 30, pp. 2966–2974. Curran Associates, Inc., 2017b.
- Bahdanau et al. (2014) Bahdanau, Dzmitry, Cho, Kyunghyun, and Bengio, Yoshua. Neural machine translation by jointly learning to align and translate. 2014. URL http://arxiv.org/abs/1409.0473.
- Google. WebRTC Voice Activity Detector. https://webrtc.org/.
- Griffin & Lim (1984) Griffin, Daniel and Lim, Jae. Signal estimation from modified short-time Fourier transform. IEEE Transactions on Acoustics, Speech, and Signal Processing, 32(2):236–243, 1984.
- Ioffe & Szegedy (2015) Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International conference on machine learning, pp. 448–456, 2015.
- Kingma & Ba (2014) Kingma, Diederik P. and Ba, Jimmy. Adam: A method for stochastic optimization. International Conference on Learning Representations, 2014.
- Lee et al. (2017) Lee, Younggun, Rabiee, Azam, and Lee, Soo-Young. Emotional end-to-end neural speech synthesizer. Workshop Machine Learning for Audio Signal Processing at NIPS (ML4Audio@NIPS17), 2017.
- Mehri et al. (2017) Mehri, Soroush, Kumar, Kundan, Gulrajani, Ishaan, Kumar, Rithesh, Jain, Shubham, Sotelo, Jose, Courville, Aaron, and Bengio, Yoshua. Samplernn: An unconditional end-to-end neural audio generation model. International Conference on Learning Representations, 2017.
- Pascanu et al. (2013) Pascanu, Razvan, Mikolov, Tomas, and Bengio, Yoshua. On the difficulty of training recurrent neural networks. In International Conference on Machine Learning, pp. 1310–1318, 2013.
- Ping et al. (2018) Ping, Wei, Peng, Kainan, Gibiansky, Andrew, Arik, Sercan O., Kannan, Ajay, Narang, Sharan, Raiman, Jonathan, and Miller, John. Deep voice 3: 2000-speaker neural text-to-speech. International Conference on Learning Representations, 2018.
- Rumelhart et al. (1985) Rumelhart, David E, Hinton, Geoffrey E, and Williams, Ronald J. Learning internal representations by error propagation. Technical report, California Univ San Diego La Jolla Inst for Cognitive Science, 1985.
- Shen et al. (2017) Shen, Jonathan, Pang, Ruoming, Weiss, Ron J, Schuster, Mike, Jaitly, Navdeep, Yang, Zongheng, Chen, Zhifeng, Zhang, Yu, Wang, Yuxuan, Skerry-Ryan, RJ, et al. Natural tts synthesis by conditioning wavenet on mel spectrogram predictions. arXiv preprint arXiv:1712.05884, 2017.
- Sotelo et al. (2017) Sotelo, Jose, Mehri, Soroush, Kumar, Kundan, Santos, Joao Felipe, Kastner, Kyle, Courville, Aaron, and Bengio, Yoshua. Char2wav: End-to-end speech synthesis. International Conference on Learning Representations workshop, 2017.
- Srivastava et al. (2014) Srivastava, Nitish, Hinton, Geoffrey, Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014.
- Taigman et al. (2018) Taigman, Yaniv, Wolf, Lior, Polyak, Adam, and Nachmani, Eliya. Voiceloop: Voice fitting and synthesis via a phonological loop. International Conference on Learning Representations, 2018.
- van den Oord et al. (2016) van den Oord, Aaron, Dieleman, Sander, Zen, Heiga, Simonyan, Karen, Vinyals, Oriol, Graves, Alexander, Kalchbrenner, Nal, Senior, Andrew, and Kavukcuoglu, Koray. WaveNet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016.
- Wang et al. (2017) Wang, Yuxuan, Skerry-Ryan, R.J., Stanton, Daisy, Wu, Yonghui, Weiss, Ron J., Jaitly, Navdeep, Yang, Zongheng, Xiao, Ying, Chen, Zhifeng, Bengio, Samy, Le, Quoc, Agiomyrgiannakis, Yannis, Clark, Rob, and Saurous, Rif A. Tacotron: Towards end-to-end speech synthesis. In Proc. Interspeech 2017, pp. 4006–4010, 2017. doi: 10.21437/Interspeech.2017-1452.