A Novel Cross-Lingual Voice Cloning Approach with a Few Text-Free Samples

10/29/2019
by   Xinyong Zhou, et al.

In this paper, we present a cross-lingual voice cloning approach. Bottleneck (BN) features obtained from a speaker-independent ASR (SI-ASR) model are used as a bridge across speakers and language boundaries. The relationship between text and BN features is modeled by a latent prosody model, while an acoustic model learns the mapping from BN features to acoustic features. The acoustic model is fine-tuned with a few samples of the target speaker to realize voice cloning. The system can generate speech for arbitrary utterances of the target language in a cross-lingual speaker's voice. We verify that, with a small amount of audio data, the proposed approach handles cross-lingual tasks well; on intra-lingual tasks it also outperforms the baseline approach in both naturalness and similarity.


1 Introduction

In recent years, with the rapid development of speech synthesis, customized and personalized services such as voice cloning have drawn much interest. Voice cloning aims to learn the voice of an unseen speaker from a few samples. Cross-lingual voice cloning is more ambitious: it learns the voice from speech in another language and synthesizes speech in a target language not spoken by the target speaker. The technology can benefit various fields such as speech translation and personalized computer-aided language learning.

Several prior works focus on cross-lingual text-to-speech (TTS). Xie et al. [3] proposed a Kullback-Leibler divergence and deep neural network (DNN) based approach to cross-lingual TTS training. Li et al. [4] proposed a multilingual parametric neural TTS system, which used a unified input representation and shared parameters across languages. In [5], Ming et al. described a bilingual Chinese and English neural TTS system trained on speech from a bilingual speaker, aiming to synthesize speech of both languages in the same voice. Nachmani et al. [6] presented a multilingual neural TTS model that supports voice cloning across English, Spanish, and German, using language-specific text and speaker encoders. Xue et al. [7] built a mixed-lingual TTS system with only monolingual data by adding speaker and phoneme embeddings. In [8], Sun et al. presented a cross-lingual TTS system using Phonetic Posteriorgrams (PPGs) with decent voice cloning performance; however, this approach requires a large amount of the target speaker's speech to train a voice conversion model. These existing voice cloning approaches face some obvious drawbacks in real applications: 1) they need recordings from bilingual speakers, or a large amount of multi-speaker audio-text pairs; 2) they need to design a specific method for sharing phonemes across different languages; 3) they add extra modules to encode speaker and language, which complicates the building pipeline and may be hard to train.

Inspired by [8], we propose a novel cross-lingual voice cloning framework using BN features as a bridge across speakers and language boundaries. Suppose we have an off-the-shelf DNN-based speech recognition engine for the target language; the BN features are the representation from the last hidden layer before the softmax. Previous studies have consistently shown that BN features are less language-dependent [11] and are smoother over the pronunciation space than PPGs, since PPGs are directly defined on the phone set of the training language.

Figure 1: Architecture of the proposed framework.

The proposed voice cloning framework consists of two parts: a latent prosody model and an acoustic model. First, audio-text pairs from a single speaker in the target language are used to train a Tacotron2-based [12] latent prosody model, which takes a text sequence as input and predicts the corresponding BN features with automatic time alignment. Second, the CBHG-structured [14] acoustic model is trained with multi-speaker audio data in the target language and translates BN features into acoustic features. For an unseen speaker, the acoustic model is fine-tuned using a few audio samples of this speaker without the corresponding texts, since the input BN features to the acoustic model come from the output of the BN extractor. In the synthesis stage, for any text in the target language, the corresponding BN features are predicted by the latent prosody model, and the acoustic model then predicts the acoustic features. Finally, given the acoustic features, a speaker-independent neural vocoder synthesizes speech that sounds like the target speaker's voice. The advantages of our approach are as follows: 1) no recordings from bilingual speakers are required, as such data are hard to collect; 2) audio-text pairs are not required to train or fine-tune the acoustic model, since our approach is text-free; 3) the approach is simple and easy to train, as no extra modules are needed to encode speaker or language.
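To make the data flow concrete, here is a minimal sketch of the synthesis path with toy stand-ins for the three trained components; the function names and shapes (512-dim BN features, 30 BFCC + 2 pitch parameters, 10 ms frame shift at 16 kHz) follow the setup described later in the paper, and everything else is an illustrative assumption rather than the authors' code.

```python
import numpy as np

# Toy stand-ins for the trained components (illustrative only): the real
# models are a Tacotron2-based latent prosody model, a CBHG acoustic model
# and an LPCNet vocoder.
def latent_prosody_model(phonemes):
    # text (phonemes) -> BN features, ~10 frames per phoneme in this toy
    return np.random.randn(len(phonemes) * 10, 512)

def acoustic_model(bn):
    # BN features -> acoustic features: 30-dim BFCC + 2 pitch parameters
    return np.random.randn(bn.shape[0], 32)

def vocoder(acoustic):
    # acoustic features -> waveform, 10 ms frame shift at 16 kHz = 160 samples
    return np.random.randn(acoustic.shape[0] * 160)

phonemes = ["n", "i3", "h", "ao3"]        # toy Mandarin phoneme sequence
bn = latent_prosody_model(phonemes)       # latent prosody model at synthesis time
acoustic = acoustic_model(bn)             # (adapted) acoustic model
wav = vocoder(acoustic)                   # speaker-independent vocoder
print(bn.shape, acoustic.shape, wav.shape)
```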

The paper is organized as follows: Section 2 describes our approach for cross-lingual voice cloning in detail. Section 3 introduces the experiments and Section 4 presents subjective evaluation results. In Section 5, we give a brief summary and mention our future work.

2 System Architecture

2.1 Bottleneck Features

As a narrow hidden layer right before the softmax layer in a DNN, the bottleneck layer creates a constriction in the network that forces the information pertinent to classification into a compact feature representation. The work in [11] has shown that BN features extracted from an ASR model trained on monolingual data perform quite well on a language identification task (classifying multiple languages). This indicates that BN features are language-independent to some extent and have potential for cross-lingual tasks. Moreover, the BN extractor used in this paper is trained with a large-scale ASR corpus (containing tens of thousands of speakers), and thus can be considered speaker-independent.
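As a sketch of taking BN features from the hidden layer just before the softmax, the snippet below registers a forward hook on a toy PyTorch classifier; the real extractor in this paper is a Kaldi TDNN-LSTM, so the model below is only a hypothetical stand-in.

```python
import torch
import torch.nn as nn

# Toy stand-in for an ASR acoustic model; the 512-dim layer plays the role
# of the bottleneck (BN) layer (the paper's extractor is a Kaldi TDNN-LSTM).
asr = nn.Sequential(
    nn.Linear(40, 1024), nn.ReLU(),
    nn.Linear(1024, 512), nn.ReLU(),   # BN layer: 512-dim representation
    nn.Linear(512, 5000),              # output layer feeding the softmax
)

captured = {}
def save_bn(module, inputs, output):
    captured["bn"] = output.detach()

# Hook the activation right before the output layer.
asr[3].register_forward_hook(save_bn)

frames = torch.randn(300, 40)          # 300 frames of 40-dim input features
_ = asr(frames)
print(captured["bn"].shape)            # torch.Size([300, 512])
```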

2.2 Latent Prosody Model

The latent prosody model predicts the corresponding BN features from a sequence of phonemes, shown as training step 1 in Figure 1. The model is based on Tacotron2 and is composed of an encoder, an attention module and a decoder. The encoder takes the text input sequence $x = (x_1, \dots, x_T)$ of length $T$ as input and learns a continuous sequential representation $h = (h_1, \dots, h_T)$:

$$h = \mathrm{Encoder}(x) \qquad (1)$$

Location-sensitive attention [13] is used as the attention module; it uses cumulative attention weights from previous decoder time steps as an additional feature. The decoder is an autoregressive recurrent neural network consisting of 2 uni-directional LSTM layers. At each output time step $t$, the attention module and the decoder work together in the following manner:

$$\alpha_t = \mathrm{Attention}(s_{t-1}, \alpha_{t-1}, h) \qquad (2)$$

$$c_t = \sum_{j=1}^{T} \alpha_{t,j}\, h_j \qquad (3)$$

$$y_t = \mathrm{Decoder}(s_{t-1}, c_t) \qquad (4)$$

where $s_t$ is the $t$-th state of the decoder, $\alpha_t$ are the attention weights and $c_t$ is the context vector. The decoder takes the previous hidden state $s_{t-1}$ and the current context vector $c_t$ as inputs and generates the current output $y_t$. We minimize the L2 loss between the ground-truth BN features and the predicted BN features.
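The following toy computation illustrates the attention step in Eqs. (2)-(4) and the L2 objective; the dimensions and the random stand-ins for the attention energies and the decoder are assumptions made only for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

T_enc, d_enc, d_bn = 20, 256, 512        # toy lengths/dimensions
h = rng.standard_normal((T_enc, d_enc))  # encoder outputs h_1..h_T  (Eq. 1)

# One decoder step: normalized attention weights alpha_t over encoder steps
# (stand-in for the location-sensitive energies of Eq. 2), then the context
# vector as their weighted sum of h (Eq. 3).
scores = rng.standard_normal(T_enc)
alpha_t = np.exp(scores) / np.exp(scores).sum()
c_t = alpha_t @ h                        # context vector, shape (d_enc,)

# The decoder maps (previous state, context) to the current BN frame (Eq. 4);
# here a random projection stands in for the 2-layer LSTM decoder.
W = rng.standard_normal((d_enc, d_bn))
y_t_pred = c_t @ W

# L2 loss against the ground-truth BN frame extracted by the SI-ASR model.
y_t_true = rng.standard_normal(d_bn)
l2_loss = np.mean((y_t_pred - y_t_true) ** 2)
print(alpha_t.sum(), c_t.shape, l2_loss)
```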

2.3 Acoustic Model

The acoustic model is based on CBHG, consisting of a bank of 1-D convolutional filters followed by highway networks and a bidirectional GRU recurrent neural network. In addition, two pre-net layers are added to improve the model's generalization ability. For a given utterance of $T$ frames from the corpus, let $t$ denote the frame index. The input is the BN feature sequence $b = (b_1, \dots, b_t, \dots, b_T)$, the target of the output layer is the acoustic feature sequence $y = (y_1, \dots, y_t, \dots, y_T)$, and the predicted output is $\hat{y} = (\hat{y}_1, \dots, \hat{y}_t, \dots, \hat{y}_T)$. The cost function for training is defined as follows:

$$\mathcal{L} = \frac{1}{T} \sum_{t=1}^{T} \lVert y_t - \hat{y}_t \rVert_1 \qquad (5)$$
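A simplified PyTorch sketch of a CBHG-style acoustic model (pre-net, 1-D convolution bank, highway layers, bidirectional GRU) trained with the objective of Eq. (5); it omits details of the original CBHG (max pooling, projections, residual connection) and the layer sizes are assumptions, so treat it as an illustration rather than the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Highway(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.h = nn.Linear(dim, dim)
        self.t = nn.Linear(dim, dim)
    def forward(self, x):
        gate = torch.sigmoid(self.t(x))
        return gate * F.relu(self.h(x)) + (1 - gate) * x

class AcousticModel(nn.Module):
    """Simplified CBHG-style sketch: pre-net -> conv bank -> highway -> BiGRU."""
    def __init__(self, bn_dim=512, out_dim=32, hidden=256, K=8):
        super().__init__()
        self.prenet = nn.Sequential(
            nn.Linear(bn_dim, hidden), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(0.5))
        self.conv_bank = nn.ModuleList(
            [nn.Conv1d(hidden, hidden, k, padding=k // 2) for k in range(1, K + 1)])
        self.proj = nn.Linear(hidden * K, hidden)
        self.highway = nn.Sequential(*[Highway(hidden) for _ in range(4)])
        self.gru = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, out_dim)

    def forward(self, bn):                        # bn: (batch, T, 512)
        x = self.prenet(bn)                       # (batch, T, hidden)
        conv = torch.cat([c(x.transpose(1, 2))[:, :, :x.size(1)]
                          for c in self.conv_bank], dim=1)
        x = self.highway(self.proj(conv.transpose(1, 2)))
        x, _ = self.gru(x)
        return self.out(x)                        # (batch, T, 30 BFCC + 2 pitch)

model = AcousticModel()
bn = torch.randn(2, 200, 512)                     # BN features from the extractor
target = torch.randn(2, 200, 32)                  # ground-truth acoustic features
loss = F.l1_loss(model(bn), target)               # Eq. (5)
print(loss.item())
```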

2.4 Speaker Adaptation

As shown in training step 3 in Figure 1, given a few speech samples of the target speaker, we first extract the BN features and the acoustic features. The acoustic model is then fine-tuned using these features, without the need for the corresponding texts. At synthesis time, for a given utterance, the corresponding BN features are predicted by the latent prosody model and fed into the adapted acoustic model to predict acoustic features. Finally, a speaker-independent neural vocoder is used to synthesize the speech.
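A minimal sketch of the text-free adaptation loop, assuming a pretrained `acoustic_model` (e.g. the sketch above) and a list of (BN, acoustic) feature pairs extracted from the few target-speaker recordings; the 4k steps match Section 3.3, while the learning rate is an assumption.

```python
import torch
import torch.nn.functional as F

def adapt(acoustic_model, adaptation_pairs, steps=4000, lr=1e-4):
    """Fine-tune the BN->acoustic model on a few target-speaker utterances.

    `adaptation_pairs` is a list of (bn, acoustic) tensor pairs with shapes
    (1, T, 512) and (1, T, 32); no transcripts are needed because the BN
    inputs come from the SI-ASR extractor, not from text."""
    optimizer = torch.optim.Adam(acoustic_model.parameters(), lr=lr)
    for step in range(steps):
        bn, target = adaptation_pairs[step % len(adaptation_pairs)]
        loss = F.l1_loss(acoustic_model(bn), target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return acoustic_model
```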

3 Experiments

In our work, Mandarin and English are chosen as the target language and non-target language, respectively. We aim to synthesize a speaker's speech in Mandarin (with arbitrary textual input) given just a few English speech samples of this speaker, without the corresponding texts. As illustrated in Figure 1, the proposed framework is divided into a training stage and a synthesis stage. In the training stage, the latent prosody model and the acoustic model can be trained in parallel; the acoustic model is then fine-tuned using a few audio samples of the target speaker. In the synthesis stage, for any text in the target language, the corresponding BN features are predicted by the latent prosody model, and the acoustic model then predicts the acoustic features. Finally, a speaker-independent LPCNet vocoder [16] is used to synthesize speech.

3.1 Dataset

The training data of the latent prosody model is the DB-1 corpus from Databaker Technology, which contains 12 hours of Mandarin female speech. For acoustic model training, THCHS-30 (an open Chinese speech database published by the Center for Speech and Language Technology (CSLT) at Tsinghua University) [15] is used; it contains 60 speakers, each with about 250 sentences. The signals are sampled at 16 kHz, mono channel, windowed with 25 ms frames shifted every 10 ms. The acoustic features consist of 30-dimensional bark-frequency cepstral coefficients (BFCC) and 2 pitch parameters (period, correlation). The dimension of the BN features is 512.
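For reference, the framing parameters above work out to the following sample counts (a small arithmetic check, not code from the paper):

```python
sample_rate = 16000                   # Hz, mono
win = int(0.025 * sample_rate)        # 25 ms window -> 400 samples
hop = int(0.010 * sample_rate)        # 10 ms shift  -> 160 samples

num_samples = 3 * sample_rate         # e.g. a 3-second utterance
num_frames = 1 + (num_samples - win) // hop
print(win, hop, num_frames)           # 400 160 298
```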

3.2 Baseline approach

The baseline approach is a speaker adaptation training approach. The model is based on the Tacotron2 architecture and predicts the acoustic features directly from the phoneme sequence. The model is first trained on the THCHS-30 data and then fine-tuned with a few of the target speaker's audio-text pairs. The vocoder is the same as in our approach. Since our training data is Mandarin only, this baseline can only be applied to Mandarin speaker adaptation; we therefore compare the proposed and baseline approaches on Mandarin speakers.

3.3 Experimental Setup

The ASR model is based on a time-delay neural network-long short-term memory (TDNN-LSTM) architecture [17]. We take the output of the last LSTM layer (nearest to the softmax layer) as BN features. The SI-ASR system is implemented with the Kaldi speech recognition toolkit [18]. For latent prosody model and baseline training, we follow the specifications in Tacotron2 [12]. We use the Adam optimizer [19] with learning rate decay, starting from 1e-3 and reduced to 1e-5 after 50k steps. The network is trained with a batch size of 16 on an NVIDIA GTX1080Ti GPU. Transcripts are converted to the corresponding phoneme sequences using a grapheme-to-phoneme (G2P) library. For Chinese text, we first perform word segmentation, which separates words and phrases with specific symbols in order to improve speech fluency. The phoneme sequences are fed to the encoder of the latent prosody model as input. The acoustic model is pre-trained for 200k steps with L1 loss and a batch size of 32, then fine-tuned for 4k steps with the target speaker's data. The specific hyperparameters are consistent with [14].
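The learning-rate schedule can be sketched as follows; the paper only gives the start value, the end value and the 50k-step horizon, so the exponential shape of the decay is an assumption for this example.

```python
def learning_rate(step, start=1e-3, end=1e-5, decay_steps=50_000):
    """Decay from 1e-3 to 1e-5 over 50k steps, constant afterwards."""
    if step >= decay_steps:
        return end
    return start * (end / start) ** (step / decay_steps)

for step in (0, 10_000, 25_000, 50_000, 100_000):
    print(step, f"{learning_rate(step):.2e}")   # 1.00e-03 ... 1.00e-05
```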

3.4 Evaluation

One Mandarin male (MM) speaker and one Mandarin female (MF) speaker are chosen as target speakers to compare the effects of the proposed and baseline approaches.

Target speaker   50 sentences   100 sentences   200 sentences
MF (baseline)    3.15 ± 0.12    3.26 ± 0.11     3.56 ± 0.09
MF (ours)        3.82 ± 0.11    3.95 ± 0.10     3.98 ± 0.08
MM (baseline)    2.85 ± 0.13    3.04 ± 0.09     3.11 ± 0.10
MM (ours)        3.68 ± 0.09    3.73 ± 0.12     3.75 ± 0.09
EF (ours)        3.72 ± 0.08    3.82 ± 0.11     3.88 ± 0.09
EM (ours)        3.63 ± 0.06    3.71 ± 0.07     3.72 ± 0.08

Table 1: Speech naturalness Mean Opinion Score (MOS) with 95% confidence intervals.

In addition, one English male (EM) speaker and one English female (EF) speaker are used to evaluate the proposed approach on cross-lingual voice cloning. To measure the naturalness and speaker similarity of the synthesized speech, we conducted Mean Opinion Score (MOS) tests as subjective evaluations. For each set of experiments, 50, 100 and 200 sentences of the target speaker are used to fine-tune the acoustic model, and 30 sentences (not in the training set) are randomly selected for testing. Audio samples: https://xinge333.github.io/speaker_adaptation_demo
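The MOS values in Tables 1 and 2 are reported with 95% confidence intervals; a common way to compute such an interval from raw listener scores is sketched below (the exact formula used by the authors is not stated, so this is an assumption).

```python
import numpy as np
from scipy import stats

def mos_with_ci(ratings, confidence=0.95):
    """Mean opinion score with a normal-approximation confidence interval.

    `ratings` are the 1-5 scores collected from listeners; this is one
    common choice of interval, not necessarily the authors'."""
    ratings = np.asarray(ratings, dtype=float)
    mean = ratings.mean()
    sem = stats.sem(ratings)                      # standard error of the mean
    half_width = sem * stats.norm.ppf(0.5 + confidence / 2)
    return mean, half_width

# Toy ratings: 20 listeners x 30 test sentences = 600 scores per system.
rng = np.random.default_rng(0)
toy_scores = rng.integers(3, 6, size=600)         # scores drawn from {3, 4, 5}
print("%.2f ± %.2f" % mos_with_ci(toy_scores))
```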

3.4.1 Speech naturalness

We invited 20 listeners, whose first language is Chinese and who are well educated in English, to participate in the subjective tests. The subjects were asked to rate the naturalness of the generated utterances on a five-point Likert scale (1: Bad, 2: Poor, 3: Fair, 4: Good, 5: Excellent). The results in Table 1 indicate that our proposed approach outperforms the baseline in naturalness on Mandarin. With extremely limited data, the speech synthesized by our approach remains clear and stable, while the baseline's does not. Moreover, our approach achieves satisfactory speech quality and naturalness for the English speakers.

3.4.2 Speech similarity

In the similarity test, a subject is presented with a pair of utterances comprising a real utterance recorded by a speaker and a synthesized utterance from the same speaker. The similarity MOS test uses a five-point scale (1: Not at all similar, 2: Slightly similar, 3: Moderately similar, 4: Very similar, 5: Extremely similar). As shown in Table 2, our proposed approach outperforms the baseline in speaker similarity on Mandarin. The similarity MOS values of the English speakers are both above 3.5, which demonstrates that the model generalizes well to new cross-lingual speakers.

Target speaker   50 sentences   100 sentences   200 sentences
MF (baseline)    3.85 ± 0.05    3.92 ± 0.10     4.06 ± 0.08
MF (ours)        3.97 ± 0.08    4.12 ± 0.11     4.15 ± 0.09
MM (baseline)    3.25 ± 0.12    3.34 ± 0.13     3.41 ± 0.11
MM (ours)        3.74 ± 0.11    3.86 ± 0.06     3.89 ± 0.10
EF (ours)        4.02 ± 0.09    4.10 ± 0.08     4.11 ± 0.07
EM (ours)        3.75 ± 0.12    3.81 ± 0.09     3.87 ± 0.07

Table 2: Speech similarity Mean Opinion Score (MOS) with 95% confidence intervals.

3.5 Analysis

We analyze why the proposed approach can realize cross-lingual voice cloning from limited audio samples, and attribute it mainly to the use of BN features. First, BN features are language-independent and speaker-independent, which allows the pronunciation space of different languages and speakers to be represented uniformly. Second, compared with other acoustic features such as the mel-frequency spectrogram, BN features are high-level features that are insensitive to background noise, recording channel, accent and gender. Finally, since the BN features are taken close to the softmax layer of the ASR model, we consider them only weakly dependent on acoustics; they mainly carry linguistic and prosodic information. Therefore, it is easy for the latent prosody model to learn the mapping between text and BN features, and when the acoustic model is fine-tuned it does not need to model the prosody of the target speaker. In the baseline approach, by contrast, the model must learn not only the acoustics-related information of the target speaker but also the speaking style, including prosody. Thus, even with extremely limited data, our approach can still achieve stable pronunciation while learning the target speaker's voice.

4 Conclusions

In this paper, we presented a cross-lingual voice cloning approach. BN features obtained from an SI-ASR model are used as a bridge across speakers and language boundaries. The relationship between text and BN features is modeled by the latent prosody model, and the acoustic model learns the mapping from BN features to acoustic features. The acoustic model is fine-tuned with a few samples of the target speaker to realize voice cloning. The system can generate speech for arbitrary utterances of the target language in a cross-lingual speaker's voice. We verified that, with a small amount of audio data, the proposed approach handles cross-lingual tasks well; on intra-lingual tasks it also outperforms the baseline approach in both naturalness and similarity.

Because BN features lose energy information, the synthesized speech lacks expressiveness in stress and mood words. Besides, the speaking style of the synthesized speech is fixed, as the latent prosody model is trained on a single speaker's corpus. In the future, more research will be done on synthesizing expressive personalized speech and transferring the prosodic style of the target speaker while performing voice cloning.

References

  • [1] Sercan Arik, Jitong Chen, Kainan Peng, Wei Ping, and Yanqi Zhou, “Neural voice cloning with a few samples,” in Advances in Neural Information Processing Systems, 2018, pp. 10019–10029.
  • [2] Heiga Zen, Norbert Braunschweiler, Sabine Buchholz, Mark JF Gales, Kate Knill, Sacha Krstulovic, and Javier Latorre, “Statistical parametric speech synthesis based on speaker and language factorization,” IEEE transactions on audio, speech, and language processing, vol. 20, no. 6, pp. 1713–1724, 2012.
  • [3] Feng-Long Xie, Frank K Soong, and Haifeng Li, “A kl divergence and dnn approach to cross-lingual tts,” in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2016, pp. 5515–5519.
  • [4] Bo Li and Heiga Zen, “Multi-language multi-speaker acoustic modeling for lstm-rnn based statistical parametric speech synthesis,” 2016.
  • [5] Huaiping Ming, Yanfeng Lu, Zhengchen Zhang, and Minghui Dong, “A light-weight method of building an lstm-rnn-based bilingual tts system,” in 2017 International Conference on Asian Language Processing (IALP). IEEE, 2017, pp. 201–205.
  • [6] Eliya Nachmani and Lior Wolf, “Unsupervised polyglot text-to-speech,” in ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019, pp. 7055–7059.
  • [7] Liumeng Xue, Wei Song, Guanghui Xu, Lei Xie, and Zhizheng Wu, “Building a mixed-lingual neural tts system with only monolingual data,” CoRR, 2019.
  • [8] Lifa Sun, Hao Wang, Shiyin Kang, Kun Li, and Helen M Meng, “Personalized, cross-lingual tts using phonetic posteriorgrams.,” in INTERSPEECH, 2016, pp. 322–326.
  • [9] Dong Yu and Michael L Seltzer, “Improved bottleneck features using pretrained deep neural networks,” in Twelfth annual conference of the international speech communication association, 2011.
  • [10] Yan Song, Bing Jiang, YeBo Bao, Si Wei, and Li-Rong Dai, “I-vector representation based on bottleneck features for language identification,” Electronics Letters, vol. 49, no. 24, pp. 1569–1570, 2013.
  • [11] Wang Geng, Jie Li, Shanshan Zhang, Xinyuan Cai, Bo Xu, Xinyuan Cai, et al., “Multilingual tandem bottleneck feature for language identification,” 2016.
  • [12] Jonathan Shen, Ruoming Pang, Ron J Weiss, Mike Schuster, Navdeep Jaitly, Zongheng Yang, Zhifeng Chen, Yu Zhang, Yuxuan Wang, RJ Skerry-Ryan, et al., “Natural tts synthesis by conditioning wavenet on mel spectrogram predictions,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018, pp. 4779–4783.
  • [13] Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio, “Attention-based models for speech recognition,” in Advances in neural information processing systems, 2015, pp. 577–585.
  • [14] Yuxuan Wang, RJ Skerry-Ryan, Daisy Stanton, Yonghui Wu, Ron J Weiss, Navdeep Jaitly, Zongheng Yang, Ying Xiao, Zhifeng Chen, Samy Bengio, et al., “Tacotron: A fully end-to-end text-to-speech synthesis model,” INTERSPEECH, 2017.
  • [15] Dong Wang and Xuewei Zhang, “Thchs-30: A free chinese speech corpus,” CoRR, 2015.
  • [16] Jean-Marc Valin and Jan Skoglund, “Lpcnet: Improving neural speech synthesis through linear prediction,” in ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019, pp. 5891–5895.
  • [17] Vijayaditya Peddinti, Yiming Wang, Daniel Povey, and Sanjeev Khudanpur, “Low latency acoustic modeling using temporal convolution and lstms,” IEEE Signal Processing Letters, vol. 25, no. 3, pp. 373–377, 2017.
  • [18] Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, et al., “The kaldi speech recognition toolkit,” in IEEE 2011 workshop on automatic speech recognition and understanding. IEEE Signal Processing Society, 2011, number CONF.
  • [19] Diederik P Kingma and Jimmy Ba, “Adam: A method for stochastic optimization,” 3rd International Conference on Learning Representations, 2015.