Effective and Differentiated Use of Control Information for Multi-speaker Speech Synthesis

07/07/2021 ∙ Qinghua Wu et al. ∙ Xiaomi

In multi-speaker speech synthesis, data from many speakers is usually highly diverse because the speakers may differ greatly in age, speaking style, speed, emotion, and so on. This diversity of data leads to the one-to-many mapping problem <cit.>. Improving the modeling capability for multi-speaker speech synthesis is therefore important but challenging. To address the issue, this paper investigates the effective use of control information, such as speaker and pitch, which is differentiated from text-content information in our encoder-decoder framework: 1) We design a representation of the harmonic structure of speech, called the excitation spectrogram, derived from pitch and energy. The excitation spectrogram is fed to the decoder along with the text content to guide the learning of the harmonics of the mel-spectrogram. 2) We propose the conditional gated LSTM (CGLSTM), whose input/output/forget gates are re-weighted by the speaker embedding to control the flow of text-content information through the network. The experiments show a significant reduction in the reconstruction errors of the mel-spectrogram when training the multi-speaker generative model, and a great improvement is observed in the subjective evaluation of the speaker adapted model, e.g., the Mean Opinion Score (MOS) of intelligibility increases by 0.81 points.

1 Introduction

Neural text-to-speech has become very popular in recent years [1, 3, 4, 5], and it can already produce high-quality speech that is almost as natural as that of a real person. However, data collection remains a big challenge. To obtain high voice quality and highly consistent recordings, a large amount of data often has to be collected in a high-fidelity recording studio under professional guidance. This is costly, time-consuming, or even impossible, e.g. for custom speech and Lombard speech [6]. Noisy and diverse data, on the other hand, is usually much easier to collect. Multi-speaker speech synthesis is therefore proposed: diverse data is collected from many speakers to train a robust multi-speaker generative model, which can be further adapted for tasks such as speaker adaptation [7], cross-lingual text-to-speech synthesis [8], and style conversion [9].

State-of-the-art systems have an encoder-decoder network structure with speaker embeddings as conditions [9, 10, 11, 12, 13, 14, 15]. Some works investigated effective representations of the speaker: [11, 13] studied the effects of different speaker embeddings such as the d-vector [16], the x-vector [17], and LDE-based speaker encoding [13]; [10] proposed an attention-based variable-length embedding; [18] measured the speaker similarity between the predicted mel-spectrogram and the reference. Other works focused on the problem of noisy data [7, 9, 19, 20]: [9, 19] studied transfer-learning methods for noisy samples, and [21] aimed to disentangle speaker embedding and noise through data augmentation and a conditional generative model. Finally, some works were interested in zero-shot controllability: [10, 13] tried to obtain a target voice by feeding the target speaker embedding without speaker adaptation, and [22, 23] introduced latent variables to control the speaking style.

Previous studies rarely gave insights into the role played by information other than text content (called control information in this paper, e.g. speaker embedding, pitch, and energy). Control information is usually represented by a fixed- or variable-length embedding, which may not be as effective as expected; e.g., a pitch embedding is relevant to the harmonics of speech but is not an effective representation of the harmonic structure. Besides, the embedding of control information is typically concatenated or added to the text-content representation, or simply used to apply an affine transformation to it [2]. In this way, the control information plays a role similar to that of the text-content information in the network. However, the text content is the most important characteristic of speech, determining intelligibility, while the control information affects other characteristics such as the voice color of speech.

In this paper, we investigate better use of the control information under an encoder-decoder architecture. The major contributions are: 1) An excitation spectrogram is designed to explicitly characterize the harmonic structure of speech and is fed to the decoder instead of pitch/energy embeddings. 2) The conditional gated LSTM (CGLSTM) is proposed, whose input/output/forget gates are re-weighted by the speaker embedding while its cell/hidden states depend only on the text content. That is to say, the speaker embedding controls the flow of text-content information through the gates without directly affecting the cell state and hidden state.

The rest of this article is organized as follows: Section 2 describes the proposed multi-speaker generative model, covering the overall framework (Section 2.1), the excitation spectrogram generator (Section 2.2), and the CGLSTM decoder (Section 2.3). Section 3 gives the detailed settings and results of the experiments. Finally, conclusions are drawn in Section 4.

2 Multi-speaker Generative Model

2.1 Framework

The framework of the proposed system is illustrated in Figure 1. It is a state-of-the-art Tacotron-like structure with a jointly trained speaker encoder. Because the attention mechanism performs insufficiently on diverse data, phoneme durations are predicted and used to align the phoneme sequence with the mel-spectrogram sequence through a length-regular module [1]. In addition, energy and pitch are predicted to generate the excitation spectrogram, which is finally fed to the CGLSTM decoder.

Figure 1: Overall framework of the proposed multi-speaker generative model.

The text-encoder is the standard Tacotron2 encoder [3], a stack of Conv1d layers followed by a BLSTM. It takes phoneme sequences with tones and prosody notations as inputs and outputs the text-content embedding, which, together with the speaker embedding, is used first to predict phoneme durations and then, after length regulation, lf0/energy/mel-spectrogram.
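As a concrete illustration, here is a minimal PyTorch sketch of such a Conv1d+BLSTM encoder. The layer count, kernel size, and dimensions are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    """Tacotron2-style text encoder: stacked Conv1d blocks + BLSTM."""
    def __init__(self, n_symbols=512, emb_dim=512, n_convs=3):
        super().__init__()
        self.embedding = nn.Embedding(n_symbols, emb_dim)
        self.convs = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(emb_dim, emb_dim, kernel_size=5, padding=2),
                nn.BatchNorm1d(emb_dim),
                nn.ReLU(),
                nn.Dropout(0.5),
            ) for _ in range(n_convs)
        ])
        # Bidirectional LSTM; hidden size is halved so the concatenated
        # forward/backward outputs stay emb_dim wide.
        self.blstm = nn.LSTM(emb_dim, emb_dim // 2,
                             batch_first=True, bidirectional=True)

    def forward(self, phoneme_ids):                      # (B, T) int64
        x = self.embedding(phoneme_ids).transpose(1, 2)  # (B, C, T) for Conv1d
        for conv in self.convs:
            x = conv(x)
        x = x.transpose(1, 2)                            # back to (B, T, C)
        text_content, _ = self.blstm(x)                  # (B, T, C)
        return text_content
```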

The speaker-encoder has a GST-encoder-like structure [24], with a stacked Conv2d+BatchNorm reference encoder and multi-head attention [25]. It takes the mel-spectrogram of a reference utterance as input and outputs the speaker embedding, which is used on the one hand to classify speakers and on the other hand as the control information of the system. Instead of introducing a Gradient Reversal Layer (GRL) [26] to remove text-content information from the speaker embedding, the reference is randomly chosen from the same speaker as the target mel-spectrogram [15].

The duration-predictor is simply one BLSTM layer with a pre-dense and a post-dense layer. It replaces attention-based alignment between the phoneme sequence and the mel-spectrogram sequence. In the length-regular module, the phoneme sequence is repeated to the length of the mel-spectrogram sequence according to the durations, and the frame position within each phoneme is concatenated as an additional feature (see the sketch below).
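A minimal sketch of the length-regular step follows; encoding the frame position as a relative offset within each phoneme is an assumption, since the paper does not specify the exact encoding.

```python
import torch

def length_regulate(text_content, durations):
    """Repeat each phoneme vector by its predicted duration (in frames)
    and append the frame position within the phoneme.

    text_content: (T_phoneme, C) float tensor
    durations:    (T_phoneme,) int tensor of frame counts
    returns:      (sum(durations), C + 1)
    """
    frames = []
    for vec, dur in zip(text_content, durations.tolist()):
        for i in range(dur):
            pos = torch.tensor([i / max(dur, 1)])  # relative position in phoneme
            frames.append(torch.cat([vec, pos]))
    return torch.stack(frames)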

Pitch and energy are predicted separately with the same network structure, a stack of Conv1d layers with a post-dense layer. The excitation spectrogram is then generated from the predicted pitch and energy (see Section 2.2); it aims to address the one-to-many mapping problem by providing information about the harmonic structure.

Finally, the decoder has an auto-regressive structure, shown in Figure 2, with the proposed CGLSTM (see Section 2.3) inside.

Figure 2: Network structure of CGLSTM decoder.

In the flow of information, an affine transformation conditioned on the speaker embedding is carried out on the text content. It is defined as Equations 1 and 2:

h̃_t = γ ⊙ h_t + β   (1)

γ = W_γ · s,   β = W_β · s   (2)

where ⊙ means element-wise multiplication and · means matrix multiplication; h_t is the text-content representation at frame t, s is the speaker embedding, and W_γ and W_β are learned projection matrices.
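For illustration, a minimal PyTorch sketch of this speaker-conditioned affine transform follows, under the assumption that the scale γ and shift β are separate linear projections of the speaker embedding.

```python
import torch.nn as nn

class SpeakerAffine(nn.Module):
    """Affine transform of the text content by the speaker embedding
    (Equations 1-2): h~ = gamma ⊙ h + beta, with gamma/beta projected
    from the speaker embedding s."""
    def __init__(self, content_dim, speaker_dim):
        super().__init__()
        self.to_gamma = nn.Linear(speaker_dim, content_dim)  # W_gamma · s
        self.to_beta = nn.Linear(speaker_dim, content_dim)   # W_beta · s

    def forward(self, h, s):                    # h: (B, T, C), s: (B, S)
        gamma = self.to_gamma(s).unsqueeze(1)   # (B, 1, C), broadcast over T
        beta = self.to_beta(s).unsqueeze(1)
        return gamma * h + beta                 # element-wise scale and shift
```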

2.2 Excitation Spectrogram Generator

In source-filter analysis [27], speech is produced when an excitation signal passes through a system composed of the chest, glottis, oral cavity, etc., in which resonance occurs. The resonance phenomenon is reflected in the voice as the harmonic structure, which is a very important characteristic of speech. Unfortunately, existing studies pay more attention to the use of pitch than to the harmonic structure. Pitch reflects the periodic characteristics of the excitation signal, but it does not reflect the resonance phenomenon. We therefore propose an excitation spectrogram generator that acts as a simple resonator: it takes pitch/energy as inputs and generates an excitation spectrogram with harmonics at vowels and a uniform spectrum at consonants. This provides a starting point with explicit harmonic structure for the prediction of the target mel-spectrogram.

The harmonics are defined as the multiples of the fundamental frequency, as in Equation 3:

f_k = k · f_0,   k = 1, 2, …, K   (3)

where f_k is the k-th harmonic position of speech, K is the number of harmonics, and f_0 is the fundamental frequency.

The excitation spectrogram is then supposed to have energy only at harmonic positions during vowels and at all positions during consonants. The energy is distributed equally over these positions, as in Equation 4:

S_t(n) = E_t / K               if frame t is voiced and bin n is a harmonic position (other bins are zero)
S_t(n) = E_t / (N_fft/2 + 1)   if frame t is unvoiced   (4)

where S_t is the linear excitation spectrogram at frame t, E_t is the total energy of frame t, and N_fft is the FFT size used in the calculation of the linear spectrum.

Finally, the linear excitation spectrogram is converted to a mel excitation spectrogram by Equation 5:

M_t = W_mel · S_t   (5)

where W_mel ∈ R^{D×(N_fft/2+1)} is the transformation matrix from linear to mel spectrogram and D is the dimension of the mel-spectrogram.
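A minimal NumPy sketch of the generator described by Equations 3-5 follows. Deciding voicing by f0 > 0, the FFT size, the sample rate, and the use of librosa's mel filter bank for W_mel are all assumptions for illustration.

```python
import numpy as np
import librosa

def excitation_spectrogram(f0, energy, sr=16000, n_fft=1024, n_mels=80):
    """Build a mel excitation spectrogram from per-frame f0 and energy.

    Voiced frames (f0 > 0) spread the frame energy equally over the
    harmonic bins k*f0 (Eqs. 3-4); unvoiced frames spread it equally
    over all bins.  The result is mapped to the mel scale (Eq. 5).
    """
    n_bins = n_fft // 2 + 1
    spec = np.zeros((len(f0), n_bins))
    for t, (pitch, e) in enumerate(zip(f0, energy)):
        if pitch > 0:                                    # voiced frame
            harmonics = np.arange(pitch, sr / 2, pitch)  # f_k = k * f0 (Eq. 3)
            bins = np.round(harmonics / (sr / 2) * (n_bins - 1)).astype(int)
            spec[t, bins] = e / len(bins)                # Eq. 4, voiced case
        else:                                            # unvoiced frame
            spec[t, :] = e / n_bins                      # Eq. 4, unvoiced case
    mel_basis = librosa.filters.mel(sr=sr, n_fft=n_fft, n_mels=n_mels)
    return spec @ mel_basis.T                            # Eq. 5: W_mel · S_t
```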

2.3 Conditional Gated LSTM

Text content is the most important feature of speech because of its decisive role in intelligibility. In addition, speech can be characterized in terms of timbre, style, speaker, emotion, etc. Much research aims to change or control some of these characteristics without negatively affecting intelligibility. For this purpose, the control information is usually added or concatenated to the text content before being fed as input to the network. In this way, however, the control information plays a role similar to that of the text content: both directly affect intelligibility and the other characteristics in the same way at the same time, which is not what we intend. Consequently, we propose the conditional gated LSTM (CGLSTM), where the control information is used to re-weight the gates while the text content flows through the hidden/cell states. The control information thereby acts directly on the gate-based flow of the text content without operating on the text content itself.

Compared with the Long Short-Term Memory (LSTM) network, which is frequently used in speech synthesis tasks due to its good capacity for learning long-term dependencies, the proposed CGLSTM calculates the hidden/cell states in the same manner from the current inputs and the previous hidden/cell states. For the calculation of the input/output/forget gates, however, the control information is used to re-weight the LSTM gates, as in Equations 6-8.

f_t = σ(W_f · [x_t, h_{t−1}] + b_f) ⊙ σ(W_{sf} · s_t + b_{sf})   (6)

i_t = σ(W_i · [x_t, h_{t−1}] + b_i) ⊙ σ(W_{si} · s_t + b_{si})   (7)

o_t = σ(W_o · [x_t, h_{t−1}] + b_o) ⊙ σ(W_{so} · s_t + b_{so})   (8)

where f_t, i_t, and o_t are the forget, input, and output gates; x_t, s_t, and h_{t−1} are the current text-content inputs, the current control-information inputs, and the previous hidden state; and the W_* and b_* are the corresponding weights and biases.
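A minimal PyTorch sketch of one CGLSTM step, as read from Equations 6-8, is given below. Fusing the four text-content projections into a single linear layer is an implementation convenience, not something the paper prescribes.

```python
import torch
import torch.nn as nn

class CGLSTMCell(nn.Module):
    """Conditional gated LSTM cell: the control input s re-weights the
    forget/input/output gates (Eqs. 6-8), while the cell and hidden
    states are computed from the text content alone."""
    def __init__(self, input_dim, hidden_dim, control_dim):
        super().__init__()
        # One fused projection for the f, i, o gates and the candidate g.
        self.x2gates = nn.Linear(input_dim + hidden_dim, 4 * hidden_dim)
        # Control-side re-weighting terms for f, i, o.
        self.s2gates = nn.Linear(control_dim, 3 * hidden_dim)

    def forward(self, x, s, state):
        h, c = state
        z = self.x2gates(torch.cat([x, h], dim=-1))
        f, i, o, g = z.chunk(4, dim=-1)
        rf, ri, ro = torch.sigmoid(self.s2gates(s)).chunk(3, dim=-1)
        f = torch.sigmoid(f) * rf             # Eq. 6: re-weighted forget gate
        i = torch.sigmoid(i) * ri             # Eq. 7: re-weighted input gate
        o = torch.sigmoid(o) * ro             # Eq. 8: re-weighted output gate
        c = f * c + i * torch.tanh(g)         # cell state: text content only
        h = o * torch.tanh(c)
        return h, (h, c)
```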

3 Experiments

3.1 Corpus

The data set of our experiments is the public multi-speaker Mandarin corpus AISHELL-3 [28], which contains roughly 85 hours of recordings spoken by 218 native Chinese Mandarin speakers. Among them, the recordings of 173 speakers, totaling 63,263 utterances, have Chinese character-level and pinyin-level transcripts. This transcribed part of the data is used in our experiments and is divided into non-overlapping training and test sets.

  • Training set: contains 57,304 utterances from 165 speakers: 133 female speakers (46,915 utterances) and 32 male speakers (10,389 utterances). The training set is used to pre-train the multi-speaker generative model, which is further adapted using the test set.

  • Test set: contains 4 female and 4 male speakers; only 20 utterances of each speaker are randomly chosen for speaker adaptation.

The recordings are mono, 16-bit, and down-sampled from 44.1 kHz to 16 kHz. Preprocessing is conducted on both the training and the test sets to reduce their diversity: 1) energy normalization by scaling the maximal amplitude of each utterance; 2) silence trimming, keeping 60 ms of silence at the head and tail of each utterance (a sketch follows).
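A sketch of this preprocessing with librosa is shown below; the trimming threshold (top_db=40) is an assumption, as the paper does not specify one.

```python
import numpy as np
import librosa

def preprocess(wav_path, target_sr=16000, keep_sil_ms=60):
    """Resample to 16 kHz, peak-normalize, and trim silence while
    keeping roughly 60 ms of silence at the head and tail."""
    y, _ = librosa.load(wav_path, sr=target_sr)     # mono, resampled
    y = y / (np.abs(y).max() + 1e-8)                # scale maximal amplitude
    _, (start, end) = librosa.effects.trim(y, top_db=40)
    pad = int(target_sr * keep_sil_ms / 1000)       # 60 ms in samples
    return y[max(start - pad, 0):min(end + pad, len(y))]
```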

3.2 Setup

The pipeline of our experiments includes 1) pre-training: train the multi-speaker generative model on the training set; 2) speaker adaptation: train the target model by transfer learning on single-speaker data from the test set; and 3) inference: predict the mel-spectrogram and synthesize the waveform with a vocoder. Here a modified neural vocoder, LPCNet [29], which takes the mel-spectrogram as input, is used.

In our experiments, the frame hop size is set to 12.5 ms, the window size to 50 ms, and the number of mel bands to 80 for the mel-spectrogram. The Mean Absolute Error (MAE) is used to measure the reconstruction error of lf0 and energy, while the Mean Squared Error (MSE) is applied to the mel-spectrogram. Besides, the speaker classification task uses cross-entropy as its loss function. The systems compared in our experiments are as follows (a sketch of the combined loss is given after the list):

  • Baseline: compared with the framework in Figure 1, the following modifications are made: 1) the excitation spectrogram generator is removed; 2) the CGLSTM in the decoder is replaced with a standard LSTM, while the speaker embedding is used to transform the text content with an affine layer before it is fed to the decoder.

  • System-1: Baseline + excitation spectrogram generator

  • System-2: Baseline + CGLSTM decoder

  • System-3: Baseline + excitation spectrogram generator + CGLSTM decoder.
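A minimal sketch of the combined training objective described above (MAE for lf0/energy, MSE for the mel-spectrogram, and cross-entropy for speaker classification); the equal weighting of the terms is an assumption.

```python
import torch.nn.functional as F

def generative_loss(pred, target):
    """Combined loss for the multi-speaker generative model.  `pred` and
    `target` are dicts of tensors; the keys are illustrative names."""
    loss_lf0 = F.l1_loss(pred["lf0"], target["lf0"])           # MAE on lf0
    loss_energy = F.l1_loss(pred["energy"], target["energy"])  # MAE on energy
    loss_mel = F.mse_loss(pred["mel"], target["mel"])          # MSE on mel
    loss_spk = F.cross_entropy(pred["speaker_logits"],         # speaker CE
                               target["speaker_id"])
    return loss_lf0 + loss_energy + loss_mel + loss_spk
```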

3.3 Multi-speaker Generative Model

Figure 3 shows the reconstruction error of the mel-spectrogram for the different systems in the pre-training stage. Compared with the baseline, the excitation spectrogram generator (System-1) and the CGLSTM decoder (System-2) each brought clear reductions in the reconstruction error. The reconstruction error was reduced further in System-3. This shows that the excitation spectrogram and the CGLSTM, used together or separately, can greatly improve the modeling capability for multi-speaker data.

Figure 3: Reconstruction error of the mel-spectrogram for the different systems.

We also compared the number of parameters of each system, as shown in Table 1. In general, there is no big difference among them; compared with the baseline, the parameter count of System-3 even drops by about 10%. In other words, we achieve better performance with less computation.

System      Baseline  System-1  System-2  System-3
Params (M)  11.24     9.50      11.37     10.08
Table 1: The number of parameters of the different systems (in millions).

3.4 Speaker Adapted Model

For the unseen speakers in the test set, we adapted the multi-speaker model using data from each target speaker. A Mean Opinion Score (MOS) test was carried out to evaluate performance in terms of speech intelligibility, voice quality, and speaker similarity; 20 native Chinese testers participated. The MOS results are shown in Table 2.

System     Intellig.   Quality           Similarity
                       female   male     female   male
GT         4.75        4.46     4.56     -        -
Baseline   3.30        2.54     2.38     2.76     3.02
System-1   4.09        3.93     2.76     3.92     3.13
System-2   3.56        2.57     2.67     3.17     3.26
System-3   4.11        3.93     3.10     4.05     3.05
Table 2: MOS of intelligibility (Intellig.), voice quality (Quality), and speaker similarity (Similarity) for unseen speakers after speaker adaptation. Scores range from 1 to 5. 1) For intelligibility, score 1 indicates that the voice is hard to understand, with ambiguous or bad pronunciation, while score 5 means the voice is pronounced clearly and correctly and is easy to understand. 2) For voice quality, score 1 means the voice has strong and annoying noise, while score 5 means the voice is clean and pleasant. 3) For speaker similarity, score 1 means the two compared voices do not sound like the same person at all, while score 5 means it is easy to judge that they are from the same person. (GT denotes ground truth.)

According to the MOS results, System-1 outperforms the baseline in all aspects: intelligibility, voice quality, and speaker similarity. This indicates that the excitation spectrogram, which captures the explicit harmonic structure of speech, is much more effective than the simple use of pitch and energy. It can improve the clarity of pronunciation for some speakers and, at the same time, reduce the noise and signal distortion caused by insufficient modeling capability for complex data.

For the proposed CGLSTM decoder, comparing System-2 with the baseline shows that it also brings substantial improvement. The MOS of intelligibility increased by 0.26 points, which indicates that the CGLSTM can reduce the negative impact of the control information on intelligibility. Besides, the improvement in speaker similarity indicates that the CGLSTM can control the specific characteristics of the voice better than the LSTM.

Using the excitation spectrogram and the CGLSTM decoder together in System-3, we achieve the best MOS performance. In addition, an AB-preference test on voice quality was conducted between System-1 and System-3, shown in Figure 4. System-3 performs slightly better than System-1 for male speakers and slightly worse for female speakers, with on average 37.5% of testers having no preference. Considering both the MOS and the AB-preference results, System-1 and System-3 are comparable.

Figure 4: AB preference of voice quality between System-1 and System-3.

Finally, the performance for male speakers in the test set is clearly worse than that for female speakers. One possible reason is that the training data for females and males is imbalanced, with a rough ratio of female:male = 9:2. The performance gap between females and males becomes smaller after using the CGLSTM decoder, e.g. the voice quality gap drops from 1.17 (System-1) to 0.83 (System-3). A possible explanation is that, in the case of imbalanced data, the CGLSTM shares information better than the LSTM and can control the specific features of the voice through the control information. Further investigation is needed to verify this.

4 Conclusions

In this paper, we have proposed 1) the excitation spectrogram generator, which captures the harmonic structure of speech and aims to handle the diversity of multi-speaker data by providing a starting point for the mel-spectrogram, and 2) the CGLSTM, which controls the specific characteristics of speech with less impact on intelligibility than the LSTM. The experiments showed a large reduction in the reconstruction errors of the mel-spectrogram when using the excitation spectrogram generator and the CGLSTM decoder. In System-3, the multi-speaker generative model obtained better modeling capability with a 10% reduction in model size. The effectiveness of the proposed methods was further verified in the subjective evaluations of the speaker adapted models, where we achieved a comprehensive improvement in terms of intelligibility, voice quality, and speaker similarity: e.g. in System-3, the MOS of intelligibility improved from 3.30 to 4.11, and the voice quality for female speakers improved from 2.54 to 3.93. However, we also found that the performance for male speakers is worse than that for female speakers, which perhaps derives from the imbalanced female/male data and needs further research in the future.

References