WaveGlow: A Flow-based Generative Network for Speech Synthesis

10/31/2018 · by Ryan Prenger, et al.

In this paper we propose WaveGlow: a flow-based network capable of generating high quality speech from mel-spectrograms. WaveGlow combines insights from Glow and WaveNet in order to provide fast, efficient and high-quality audio synthesis, without the need for auto-regression. WaveGlow is implemented using only a single network, trained using only a single cost function: maximizing the likelihood of the training data, which makes the training procedure simple and stable. Our PyTorch implementation produces audio samples at a rate of more than 500 kHz on an NVIDIA V100 GPU. Mean Opinion Scores show that it delivers audio quality as good as the best publicly available WaveNet implementation. All code will be made publicly available online.

1 Introduction

As voice interactions with machines become increasingly useful, efficiently synthesizing high quality speech becomes increasingly important. Small changes in voice quality or latency have large impacts on customer experience and customer preferences. However, high quality, real-time speech synthesis remains a challenging task. Speech synthesis requires generating very high dimensional samples with strong long term dependencies. Additionally, humans are sensitive to statistical imperfections in audio samples. Beyond the quality challenges, real-time speech synthesis has challenging speed and computation constraints. Perceived speech quality drops significantly when the audio sampling rate is less than 16kHz, and higher sampling rates generate even higher quality speech. Furthermore, many applications require synthesis rates much faster than 16kHz. For example, when synthesizing speech on remote servers, strict interactivity requirements mean the utterances must be synthesized quickly at sample rates far exceeding real-time requirements.

Currently, state of the art speech synthesis models are based on parametric neural networks. Text-to-speech synthesis is typically done in two steps. The first step transforms the text into time-aligned features, such as a mel-spectrogram [4, 5], or F0 frequencies and other linguistic features [2, 6]. A second model transforms these time-aligned features into audio samples. This second model, sometimes referred to as a vocoder, is computationally challenging and affects quality as well. We focus on this second model in this work. Most of the neural network based models for speech synthesis are auto-regressive, meaning that they condition future audio samples on previous samples in order to model long term dependencies. These approaches are relatively simple to implement and train. However, they are inherently serial, and hence can't fully utilize parallel processors like GPUs or TPUs. Models in this group often have difficulty synthesizing audio faster than 16kHz without sacrificing quality.

At this time we know of three neural network based models that can synthesize speech without auto-regression: Parallel WaveNet [9], ClariNet [7], and MCNN for spectrogram inversion [8]. These techniques can synthesize audio at more than 500kHz on a GPU. However, these models are more difficult to train and implement than the auto-regressive models. All three require compound loss functions to improve audio quality or to avoid problems such as mode collapse [9, 7, 8]. In addition, Parallel WaveNet and ClariNet require two networks, a student network and a teacher network. The student networks underlying both Parallel WaveNet and ClariNet use Inverse Auto-regressive Flows (IAF) [10]. Though the IAF networks can generate samples in parallel at inference time, the auto-regressive nature of the flow itself makes the likelihood calculation needed for training inefficient. To overcome this, these works use a teacher network to train the student network on an approximation to the true likelihood. These approaches are hard to reproduce and deploy because of the difficulty of training these models successfully to convergence.

In this work, we show that an auto-regressive flow is unnecessary for synthesizing speech. Our contribution is a flow-based network capable of generating high quality speech from mel-spectrograms. We refer to this network as WaveGlow, as it combines ideas from Glow [1] and WaveNet [2]. WaveGlow is simple to implement and train, using only a single network, trained using only the likelihood loss function. Despite the simplicity of the model, our PyTorch implementation synthesizes speech at more than 500kHz on an NVIDIA V100 GPU: more than 25 times faster than real time. Mean Opinion Scores show that it delivers audio quality as good as the best publicly available WaveNet implementation trained on the same dataset.

2 WaveGlow

WaveGlow is a generative model that generates audio by sampling from a distribution. To use a neural network as a generative model, we take samples from a simple distribution, in our case, a zero mean spherical Gaussian with the same number of dimensions as our desired output, and put those samples through a series of layers that transforms the simple distribution to one which has the desired distribution. In this case, we model the distribution of audio samples conditioned on a mel-spectrogram.

z \sim \mathcal{N}(z; 0, \mathbf{I})  (1)
x = f_0 \circ f_1 \circ \ldots \circ f_k(z)  (2)

We would like to train this model by directly minimizing the negative log-likelihood of the data. If we use an arbitrary neural network this is intractable. Flow-based networks [11, 12, 1] solve this problem by ensuring the neural network mapping is invertible. By restricting each layer to be bijective, the likelihood can be calculated directly using a change of variables:

\log p_\theta(x) = \log p_\theta(z) + \sum_{i=1}^{k} \log \left| \det\!\left( \mathbf{J}\!\left( f_i^{-1}(x) \right) \right) \right|  (3)
z = f_k^{-1} \circ f_{k-1}^{-1} \circ \ldots \circ f_0^{-1}(x)  (4)

In our case, the first term is the log-likelihood of the spherical Gaussian, which penalizes the norm of the transformed sample z. The second term arises from the change of variables, where J is the Jacobian of the layer's inverse. The log-determinant of the Jacobian rewards any layer for increasing the volume of the space during the forward pass, and it also keeps a layer from simply multiplying its inputs by zero to optimize the norm term. The sequence of transformations is also referred to as a normalizing flow [13].
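For concreteness, the loss in equation (3) can be assembled as in the minimal PyTorch sketch below. It assumes each invertible layer returns its output together with its log-determinant; that interface is a choice of this illustration, not a description of the released code.

```python
import torch

def flow_negative_log_likelihood(x, layers, sigma=1.0):
    """Change-of-variables loss of eq. (3): spherical-Gaussian term on z
    minus the accumulated per-layer log-determinants, per element."""
    z, log_det_total = x, 0.0
    for layer in layers:
        z, log_det = layer(z)          # each invertible layer reports log|det J|
        log_det_total = log_det_total + log_det
    gaussian_term = torch.sum(z * z) / (2 * sigma ** 2)
    # Maximizing the likelihood is minimizing this quantity.
    return (gaussian_term - log_det_total) / z.numel()
```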

Our model is most similar to the recent Glow work [1], and is depicted in Figure 1. For the forward pass through the network, we take groups of 8 audio samples as vectors, which we call the "squeeze" operation, as in [1]. We then process these vectors through several "steps of flow". A step of flow here consists of an invertible 1x1 convolution followed by an affine coupling layer, described below.

Figure 1: WaveGlow network
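For illustration, the squeeze operation described above amounts to a simple reshape. The group size of 8 comes from the text; the (batch, channels, time) layout used here is just one plausible convention.

```python
import torch

def squeeze(audio, group_size=8):
    """Fold consecutive samples into channels:
    (batch, time) -> (batch, group_size, time // group_size)."""
    batch, time = audio.shape
    time = time - time % group_size                  # drop any ragged tail
    audio = audio[:, :time]
    return audio.reshape(batch, time // group_size, group_size).permute(0, 2, 1)
```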

2.1 Affine Coupling Layer

Invertible neural networks are typically constructed using coupling layers [11, 12, 1]. In our case, we use an affine coupling layer [12]. Half of the channels serve as inputs, which then produce multiplicative and additive terms that are used to scale and translate the remaining channels:

x_a, x_b = \mathit{split}(x)  (5)
(\log s, t) = \mathit{WN}(x_a, \textit{mel-spectrogram})  (6)
x_b' = s \odot x_b + t  (7)
f_{\mathit{coupling}}^{-1}(x) = \mathit{concat}(x_a, x_b')  (8)

Here WN() can be any transformation. The coupling layer preserves invertibility for the overall network, even though WN does not need to be invertible. This follows because the channels used as inputs to WN, in this case x_a, are passed through unchanged to the output of the layer. Accordingly, when inverting the network, we can compute s and t from the output x_a, and then invert x_b' to compute x_b, by simply recomputing WN(x_a, mel-spectrogram). In our case, WN() uses layers of dilated convolutions with gated-tanh nonlinearities, as well as residual connections and skip connections. This architecture is similar to WaveNet [2] and Parallel WaveNet [9], but our convolutions have 3 taps and are not causal. The affine coupling layer is also where we include the mel-spectrogram in order to condition the generated result on the input. The upsampled mel-spectrograms are added before the gated-tanh nonlinearities of each layer, as in WaveNet [2].

With an affine coupling layer, only the s term changes the volume of the mapping and adds a change of variables term to the loss. This term also serves to penalize the model for non-invertible affine mappings.

\log \left| \det\!\left( \mathbf{J}\!\left( f_{\mathit{coupling}}^{-1}(x) \right) \right) \right| = \log |s|  (9)
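A minimal PyTorch sketch of this coupling layer is given below. The WN module is passed in as an argument and is assumed to return twice the channels of x_a (log s and t stacked); that assumption belongs to this sketch, not to the released implementation.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Affine coupling layer of eqs. (5)-(9). `wn` stands in for the
    WaveNet-like transform WN(); it is never inverted."""
    def __init__(self, wn):
        super().__init__()
        self.wn = wn

    def forward(self, x, mel):
        x_a, x_b = x.chunk(2, dim=1)                    # split channels, eq. (5)
        log_s, t = self.wn(x_a, mel).chunk(2, dim=1)    # eq. (6)
        x_b = torch.exp(log_s) * x_b + t                # eq. (7)
        log_det = log_s.sum()                           # eq. (9), summed over elements
        return torch.cat([x_a, x_b], dim=1), log_det    # eq. (8)

    def inverse(self, z, mel):
        x_a, x_b = z.chunk(2, dim=1)
        log_s, t = self.wn(x_a, mel).chunk(2, dim=1)    # recomputed from x_a
        x_b = (x_b - t) * torch.exp(-log_s)             # eq. (13)
        return torch.cat([x_a, x_b], dim=1)
```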

2.2 1x1 Invertible Convolution

In the affine coupling layer, channels in the same half never directly modify one another. Without mixing information across channels, this would be a severe restriction. Following Glow [1], we mix information across channels by adding an invertible 1x1 convolution layer before each affine coupling layer. The weights of these convolutions are initialized to be orthonormal and hence invertible. The log-determinant of the Jacobian of this transformation joins the loss function due to the change of variables, and also serves to keep these convolutions invertible as the network is trained.

f_{\mathit{conv}}^{-1} = \mathbf{W} x  (10)
\log \left| \det\!\left( \mathbf{J}\!\left( f_{\mathit{conv}}^{-1}(x) \right) \right) \right| = \log |\det \mathbf{W}|  (11)
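The sketch below shows one way to realize the invertible 1x1 convolution with an orthonormal initialization; the tensor shapes and the sign fix on the determinant are details of this illustration rather than of the released code.

```python
import torch
import torch.nn as nn

class Invertible1x1Conv(nn.Module):
    """1x1 convolution across channels, initialized orthonormal (eq. 10);
    its log|det W| term (eq. 11) keeps it invertible during training."""
    def __init__(self, channels):
        super().__init__()
        w = torch.linalg.qr(torch.randn(channels, channels))[0]
        if torch.det(w) < 0:
            w[:, 0] = -w[:, 0]              # ensure det = +1 so logdet is finite
        self.weight = nn.Parameter(w)

    def forward(self, x):                   # x: (batch, channels, time)
        batch, _, time = x.shape
        z = torch.einsum('ij,bjt->bit', self.weight, x)
        log_det = batch * time * torch.logdet(self.weight)
        return z, log_det

    def inverse(self, z):
        w_inv = torch.inverse(self.weight)
        return torch.einsum('ij,bjt->bit', w_inv, z)
```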

After adding all the terms from the coupling layers, the final likelihood becomes:

\log p_\theta(x) = -\frac{z(x)^{\top} z(x)}{2\sigma^2} + \sum_{j=0}^{\#\mathit{coupling}} \log s_j(x, \textit{mel-spectrogram}) + \sum_{k=0}^{\#\mathit{conv}} \log \det |\mathbf{W}_k|  (12)

where the first term comes from the log-likelihood of a spherical Gaussian, the σ² term is the assumed variance of the Gaussian distribution, and the remaining terms account for the change of variables.

2.3 Early outputs

Rather than having all channels go through all the layers, we found it useful to output 2 of the channels to the loss function after every 4 coupling layers. After going through all the layers of the network, the final vectors are concatenated with all of the previously output channels to make the final z. Outputting some dimensions early makes it easier for the network to add information at multiple time scales, and helps gradients propagate to earlier layers, much like skip connections. This approach is similar to the multi-scale architecture used in [1, 12], though we do not add additional squeeze operations, so vectors get shorter throughout the network.
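The bookkeeping can be sketched as follows, with the sizes taken from the text (2 channels emitted after every 4 coupling layers); the layer interfaces are the same illustrative ones used in the earlier sketches.

```python
import torch

def flow_with_early_outputs(x, convs, couplings, mel,
                            early_every=4, early_channels=2):
    """Run the steps of flow, emitting a few channels to the loss early."""
    outputs, log_det_total = [], 0.0
    for i, (conv, coupling) in enumerate(zip(convs, couplings), start=1):
        x, log_det_w = conv(x)
        x, log_det_s = coupling(x, mel)
        log_det_total = log_det_total + log_det_w + log_det_s
        if i % early_every == 0 and i < len(couplings):
            outputs.append(x[:, :early_channels])    # leaves the flow here
            x = x[:, early_channels:]                # the rest keeps flowing
    outputs.append(x)
    return torch.cat(outputs, dim=1), log_det_total  # final z for eq. (12)
```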

2.4 Inference

Once the network is trained, doing inference is simply a matter of randomly sampling z values from a Gaussian and running them through the network. As suggested in [1], and in earlier work on likelihood-based generative models [14], we found that sampling z's from a Gaussian with a lower standard deviation than that assumed during training resulted in slightly higher quality audio. During training we used σ = √0.5, and during inference we sampled z's from a Gaussian with standard deviation 0.6. Inverting the 1x1 convolutions is just a matter of inverting the weight matrices; the inverse is guaranteed by the loss. The mel-spectrograms are included at each of the coupling layers as before, but now the affine transforms are inverted, and these inverses are also guaranteed by the loss.

x_b = \frac{x_b' - t}{s}  (13)
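Putting the pieces together, inference can be sketched as below. The 0.6 standard deviation comes from the text; the layer interfaces and the omission of the early-output channels are simplifications of this illustration.

```python
import torch

def infer(convs, couplings, mel, n_samples, n_group=8, sigma=0.6):
    """Sample z at std 0.6 and run the flow in reverse: undo each affine
    coupling (eq. 13), then each 1x1 convolution, then the squeeze."""
    z = sigma * torch.randn(1, n_group, n_samples // n_group)
    x = z
    for conv, coupling in zip(reversed(convs), reversed(couplings)):
        x = coupling.inverse(x, mel)    # eq. (13)
        x = conv.inverse(x)             # multiply by W^-1
    return x.permute(0, 2, 1).reshape(1, -1)   # back to a flat waveform
```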

3 Experiments

For all the experiments we trained on the LJ speech data [15]. This data set consists of 13,100 short audio clips of a single speaker reading passages from 7 non-fiction books. The data consists of roughly 24 hours of speech recorded on a MacBook Pro using its built-in microphone in a home environment. We use a sampling rate of 22,050 Hz.

We use the mel-spectrogram of the original audio as the input to the WaveNet and WaveGlow networks. For WaveGlow, we use mel-spectrograms with 80 bins using librosa mel filter defaults, i.e. each bin is normalized by the filter length and the scale is the same as HTK. The parameters of the mel-spectrograms are an FFT size of 1024, a hop size of 256, and a window size of 1024.
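For reference, a front end with the parameters above can be computed with librosa roughly as follows. The file name is just an example clip from LJ Speech, and the mel filterbank scaling flags are omitted here, so the released preprocessing may differ in detail.

```python
import librosa

# Mel-spectrogram with 80 bins, FFT size 1024, hop 256, window 1024 at 22,050 Hz.
audio, sr = librosa.load('LJ001-0001.wav', sr=22050)
mel = librosa.feature.melspectrogram(
    y=audio, sr=sr, n_fft=1024, hop_length=256, win_length=1024, n_mels=80)
```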

3.1 Griffin-Lim

As a baseline for mean opinion score, we compare against the popular Griffin-Lim algorithm [16]. Griffin-Lim takes the entire spectrogram (rather than the reduced mel-spectrogram) and iteratively estimates the missing phase information by repeatedly converting between the frequency and time domains. For our experiments we use 60 iterations of this procedure.
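A librosa-based stand-in for this baseline is sketched below; the STFT parameters mirror those used for the mel-spectrograms, and the call is illustrative rather than the authors' implementation.

```python
import numpy as np
import librosa

# Griffin-Lim baseline: 60 iterations of phase estimation on the full
# magnitude spectrogram (file name is an example clip from LJ Speech).
audio, sr = librosa.load('LJ001-0001.wav', sr=22050)
magnitude = np.abs(librosa.stft(audio, n_fft=1024, hop_length=256, win_length=1024))
reconstructed = librosa.griffinlim(magnitude, n_iter=60,
                                   hop_length=256, win_length=1024)
```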

3.2 WaveNet

We compare against the popular open source WaveNet implementation [17]. The network has 24 layers, 4 dilation doubling cycles, and uses 512 residual, 512 gating, and 256 skip channels. The network upsamples the mel-spectrogram to full time resolution using 4 separate upsampling layers. The network was trained for 1,000,000 iterations using the Adam optimizer [18]. The mel-spectrogram for this network is still 80 dimensions but was processed slightly differently from the mel-spectrogram we used in the WaveGlow network. Qualitatively, we did not find these differences had an audible effect when changed in the WaveGlow network. The full list of hyperparameters is available online.

3.3 WaveGlow

The WaveGlow network we use has 12 coupling layers and 12 invertible 1x1 convolutions. The coupling layer networks (WN) each have 8 layers of dilated convolutions as described in Section 2, with 512 channels used as residual connections and 256 channels in the skip connections. We also output 2 of the channels after every 4 coupling layers. The WaveGlow network was trained on 8 Nvidia GV100 GPUs using randomly chosen clips of 16,000 samples, for 580,000 iterations, using weight normalization [19] and the Adam optimizer [18] with a batch size of 24 and a step size of 1×10⁻⁴. When training appeared to plateau, the learning rate was further reduced to 5×10⁻⁵.
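For readability, the hyperparameters above are collected into a single configuration sketch; the key names are illustrative and need not match the released configuration file.

```python
# Hyperparameters gathered from the description above.
waveglow_config = {
    'n_flows': 12,              # coupling layers and invertible 1x1 convolutions
    'n_group': 8,               # audio samples per squeezed vector
    'n_early_every': 4,         # emit channels after every 4 coupling layers
    'n_early_size': 2,          # number of channels emitted early
    'wn_layers': 8,             # dilated conv layers inside each WN()
    'wn_residual_channels': 512,
    'wn_skip_channels': 256,
    'segment_length': 16000,    # training clip length in samples
    'batch_size': 24,
    'learning_rate': 1e-4,      # reduced to 5e-5 when training plateaus
    'sigma': 0.5 ** 0.5,        # std assumed by the Gaussian loss during training
}
```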

3.4 Audio quality comparison

We crowd-sourced Mean Opinion Score (MOS) tests on Amazon Mechanical Turk. Raters first had to pass a hearing test to be eligible. Then they listened to an utterance, after which they rated pleasantness on a five-point scale. We used 40 volume normalized utterances disjoint from the training set for evaluation, and randomly chose the utterances for each subject. After completing the rating, each rater was excluded from further tests to avoid anchoring effects.

The MOS scores are shown in Table 1 with 95% confidence intervals. Though MOS scores of synthesized samples are close on an absolute scale, none of the methods reach the MOS score of real audio. Though WaveGlow has the highest MOS, all the methods have similar scores with only weakly significant differences after collecting approximately 1,000 samples. This roughly matches our subjective qualitative assessment. Samples of the test utterances can be found online [3]. The larger advantage of WaveGlow is in training simplicity and inference speed.

Model Mean Opinion Score (MOS)
Griffin-Lim
WaveNet
WaveGlow
Ground Truth
Table 1: Mean Opinion Scores

3.5 Speed of inference comparison

Our implementation of Griffin-Lim can synthesize speech at 507kHz for 60 iterations of the algorithm. Note that Griffin-Lim requires the full spectrogram rather than the reduced mel-spectrogram used by the other vocoders in this comparison. The inference implementation of the WaveNet we compare against synthesizes speech at 0.11kHz, significantly slower than real time.

Our unoptimized PyTorch implementation of WaveGlow synthesizes a 10 second utterance at approximately 520kHz on an NVIDIA V100 GPU. This is slightly faster than the 500kHz reported by Parallel WaveNet [9], although they tested on an older GPU. For shorter utterances, the speed per sample goes down because we have the same number of serial steps, but less audio produced. Similar effects should be seen for Griffin-Lim and Parallel WaveNet. This speed could be increased with further optimization. Based on the arithmetic cost of computing WaveGlow, we estimate that the upper bound of a fully optimized implementation is approximately 2,000kHz on an Nvidia GV100.

4 Discussion

Existing neural network based approaches to speech synthesis fall into two groups. The first group conditions future audio samples on previous samples in order to model long term dependencies. The first of these auto-regressive neural network models was WaveNet [2], which produced high quality audio. However, WaveNet inference is challenging computationally. Since then, several auto-regressive models have attempted to speed up inference while retaining quality [6, 20, 21]. As of this writing, the fastest auto-regressive network is [22], which uses a variety of techniques, including customized GPU kernels, to speed up an auto-regressive RNN, producing audio at 240kHz on an Nvidia P100 GPU.

In the second group, Parallel WaveNet [9] and ClariNet [7] are discussed in Section 1. MCNN for spectrogram inversion [8] produces audio using one multi-headed convolutional network. This network is capable of producing samples at over 5,000kHz, but their training procedure is complicated due to four hand-engineered losses, and it operates on the full spectrogram rather than a reduced mel-spectrogram or other features. It is not clear how a non-generative approach like MCNN would generate realistic audio from a more under-specified representation like mel-spectrograms or linguistic features without some kind of additional sampling procedure to add information.

Flow-based models give us a tractable likelihood for a wide variety of generative modeling problems, by constraining the network to be invertible. We take the flow-based approach of [1] and include the architectural insights of WaveNet. Parallel WaveNet and ClariNet use flow-based models as well. The inverse auto-regressive flows used in Parallel WaveNet [9] and ClariNet [7] are capable of capturing strong long-term dependencies in one individual pass. This is likely why Parallel WaveNet was structured with only 4 passes through the IAF, as opposed to the 12 steps of flow used by WaveGlow. However, the resulting complexity of two networks and corresponding mode-collapse issues may not be worth it for all users.

WaveGlow networks enable efficient speech synthesis with a simple model that is easy to train. We believe that this will help in the deployment of high quality audio synthesis.

Acknowledgments

The authors would like to thank Ryuichi Yamamoto, Brian Pharris, Marek Kolodziej, Andrew Gibiansky, Sercan Arik, Kainan Peng, Prafulla Dhariwal, and Durk Kingma.

References