
CycleGAN-based Non-parallel Speech Enhancement with an Adaptive Attention-in-attention Mechanism

07/28/2021
by Guochen Yu, et al.

Non-parallel training is a difficult but essential task for DNN-based speech enhancement methods, owing to the lack of an adequate paired noisy-clean speech corpus in many real scenarios. In this paper, we propose a novel adaptive attention-in-attention CycleGAN (AIA-CycleGAN) for non-parallel speech enhancement. In previous CycleGAN-based non-parallel speech enhancement methods, the limited mapping ability of the generator may cause performance degradation and insufficient feature learning. To alleviate this degradation, we propose an integration of adaptive time-frequency attention (ATFA) and adaptive hierarchical attention (AHA) to form an attention-in-attention (AIA) module for more flexible feature learning during the mapping procedure. More specifically, ATFA can capture long-range temporal-spectral contextual information for more effective feature representations, while AHA can flexibly aggregate the intermediate output feature maps of different ATFA modules by adaptive attention weights depending on the global context. Extensive experimental results demonstrate that the proposed approach achieves consistently superior performance over previous GAN-based and CycleGAN-based methods in non-parallel training. Moreover, experiments in parallel training verify that the proposed AIA-CycleGAN also outperforms most advanced GAN-based and non-GAN-based speech enhancement approaches, especially in maintaining speech integrity and reducing speech distortion.


1 Introduction

Speech enhancement (SE) aims to recover clean speech components from the noise-corrupted mixture, so as to improve speech quality and intelligibility. It has become a fundamental technique in many communication applications, such as the front-ends of automatic speech recognition (ASR) systems and hearing assistive devices [1]. Due to the unprecedented development of deep neural networks (DNNs), many DNN-based SE approaches have demonstrated better performance than traditional signal-processing-based approaches [2]. These DNN-based approaches can be divided into two categories, namely masking-based approaches [3, 4, 5] and mapping-based approaches [6, 7, 8]. Recently, Generative Adversarial Networks (GANs) have shown promising performance in the SE area for their powerful capability of mapping the target output distribution from the original input distribution [9, 10, 11, 12], in which a generator (G) tries to conduct the enhancement process and a discriminator (D) tries to distinguish between real inputs and fake outputs generated by the generator.

Figure 1: Training procedure of the proposed method. Forward noisy-clean-noisy cycle and backward clean-noisy-clean cycle are illustrated in the left and right parts, respectively.

In the standard formulation of supervised speech enhancement methods, the mapping functions are trained to minimize the loss between the output features of the enhanced speech and the features of the corresponding clean speech. Therefore, they always need a large number of paired clean-noisy samples to conduct supervised training and improve the generalization of the whole network. However, there exist many practical scenarios in which it is difficult or impossible to obtain parallel recordings of clean-noisy pairs, and sometimes we can only acquire clean data that mismatches the source noisy data. To resolve this problem, CycleGAN, which was originally proposed for unpaired image-to-image translation [17] and has begun to thrive in speech applications in recent years [18], has been adopted for both standard parallel and non-parallel training in the SE area [14, 15, 16]. Nonetheless, these previous non-parallel CycleGAN-based SE methods with unpaired data can hardly achieve competitive performance compared with standard parallel training, because of the limited mapping ability of the generator.

In this paper, a novel CycleGAN-based system is proposed with an adaptive attention-in-attention mechanism to cope with non-parallel speech enhancement. Specifically, two generators (dubbed $G_{Y\to X}$ for noisy-to-clean mapping and $G_{X\to Y}$ for clean-to-noisy mapping) and two discriminators (dubbed $D_X$ and $D_Y$) are jointly trained with relativistic adversarial losses, cycle-consistency losses and an identity-mapping loss. To improve the mapping ability of the generators, we propose a novel attention mechanism dubbed attention-in-attention (AIA) in the generators for more powerful feature fusion and feature correlation learning. This AIA consists of adaptive time-frequency attention (ATFA) and adaptive hierarchical attention (AHA). Specifically, ATFA aims at capturing the long-range temporal-spectral contextual dependencies in parallel, while AHA aims to flexibly aggregate all the output feature maps of ATFA together by hierarchical attention weights depending on the global context. For the discriminators, multi-scale discriminators are adopted to force the generators to pay more attention to finer details. Besides, considering the effectiveness of power compression in dereverberation and denoising tasks [19, 20], the compressed magnitude of the spectrum is used as the input feature to better attenuate the background noise.

The remainder of the paper is organized as follows. In Section 2, the proposed framework is described in detail. The experimental setup is presented in Section 3, while experimental results are provided and discussed in Section 4. Finally, conclusions are drawn in Section 5.

2 Proposed Scheme

2.1 Problem Formulation

In the speech enhancement task, when applying the short-time Fourier transform (STFT) to the mixture signal, in the time-frequency (T-F) domain we have:

$Y(t,f) = X(t,f) + N(t,f)$   (1)

where $Y(t,f)$, $X(t,f)$ and $N(t,f)$ denote the T-F representations of the mixture, clean speech and noise components at time index $t$ and frequency index $f$, respectively. Most recently, using power-compressed spectra as input features has been shown to dramatically improve speech quality in dereverberation and denoising tasks [19, 20], so we conduct power compression on the spectral magnitude before feeding it into the mapping network. The compression coefficient is set to 0.5, which is reported to be an optimal choice in [20]. Therefore, the enhanced spectral magnitude can be expressed as:

$|\tilde{X}|^{0.5} = \mathcal{G}\big(|Y|^{0.5};\,\Phi\big)$   (2)

where $|\tilde{X}|$ denotes the enhanced spectral magnitude; $\mathcal{G}(\cdot)$ denotes the mapping function of the generator and $\Phi$ denotes its parameter set.
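To make the formulation concrete, the following is a minimal PyTorch sketch of the compression and inverse-compression steps, assuming a trained noisy-to-clean generator `G` operating on compressed magnitudes and the common practice of reusing the noisy phase for waveform reconstruction (neither detail is specified above):

```python
import torch

def compressed_magnitude(wave, n_fft=512, hop=128, power=0.5):
    """STFT of a mono waveform -> power-compressed magnitude |Y|^0.5 and phase."""
    spec = torch.stft(wave, n_fft=n_fft, hop_length=hop,
                      window=torch.hann_window(n_fft), return_complex=True)
    return spec.abs().clamp_min(1e-8) ** power, torch.angle(spec)

def enhance(G, noisy_wave):
    """Map the compressed noisy magnitude through the generator, then undo the compression."""
    mag_c, phase = compressed_magnitude(noisy_wave)
    est_c = G(mag_c.unsqueeze(0).unsqueeze(0)).squeeze()   # add/remove batch and channel dims
    est_mag = est_c.clamp_min(0.0) ** 2                    # invert the 0.5 power compression
    return torch.polar(est_mag, phase)                     # complex spectrum with the noisy phase
```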

Figure 2: The framework of the proposed generators. IN, GLU, and PReLU indicate instance normalization, gated linear unit, and parametric ReLU activation, respectively.

2.2 Network architecture

In our CycleGAN-based SE system, a forward noisy-to-clean generator $G_{Y\to X}$ is first employed to enhance the noisy features toward the clean ones, while an inverse clean-to-noisy generator $G_{X\to Y}$ is applied to convert the enhanced features back to the original domain. As illustrated in Fig. 1, a forward-inverse noisy-clean-noisy cycle and an inverse-forward clean-noisy-clean cycle jointly constrain $G_{Y\to X}$ and $G_{X\to Y}$ to conduct non-parallel mapping. Discriminators $D_X$ and $D_Y$ are trained to classify the target speech features as real and the generated speech features as fake. As shown in Fig. 2, the generator is composed of three components: three downsampling layers, an adaptive attention-in-attention (AIA) module and three homologous upsampling layers. Each downsampling/upsampling block is composed of a 2D convolution/deconvolution layer, followed by instance normalization (IN), a parametric ReLU activation function (PReLU) and gated linear units (GLUs) [21]. The proposed AIA consists of six ATFA modules and an AHA module, where ATFA is proposed to capture the long-range dependencies along the temporal and spectral dimensions with low computational cost, and AHA is introduced to aggregate different intermediate features to capture long-term hierarchical contextual information through adaptive weights depending on the global context.
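As a rough illustration, a single downsampling block of this kind can be sketched in PyTorch as follows; the kernel size, stride and channel counts are assumptions, since they are not listed here, and the gating follows the usual GLU formulation (a sigmoid-gated parallel convolution):

```python
import torch
import torch.nn as nn

class DownBlock(nn.Module):
    """2D conv -> instance norm -> PReLU, modulated by a sigmoid gate (GLU-style)."""
    def __init__(self, in_ch, out_ch, kernel=(3, 5), stride=(1, 2)):
        super().__init__()
        pad = (kernel[0] // 2, kernel[1] // 2)
        self.conv = nn.Conv2d(in_ch, out_ch, kernel, stride, pad)
        self.gate = nn.Conv2d(in_ch, out_ch, kernel, stride, pad)
        self.norm = nn.InstanceNorm2d(out_ch)
        self.act = nn.PReLU()

    def forward(self, x):                        # x: (B, C_in, T, F)
        out = self.act(self.norm(self.conv(x)))
        return out * torch.sigmoid(self.gate(x))
```

The corresponding upsampling block would use `nn.ConvTranspose2d` in place of `nn.Conv2d`.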

The discriminator is composed of six 2D convolutions, each of which is followed by spectral normalization (SN) and PRelu, so as to compress the feature maps into a high-level representation. SN can stabilize the training process of the discriminator and avoid vanishing or exploding gradients [22]. Note that we set the configuration of the utilized discriminators the same as our previous study [23]. Inspired by recent studies on image enhancement [24], we propose to apply a multi-scale discriminator that uses the intermediate layer of the discriminator with a smaller receptive field, which can force the generator to produce speech features with global consistency and finer details.
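A minimal sketch of such a discriminator is shown below (PyTorch); the channel widths, kernel sizes and the choice of which intermediate layer to expose are assumptions, and only the overall pattern of spectrally normalized convolutions with PReLU and a multi-scale output follows the description above:

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

class MultiScaleDiscriminator(nn.Module):
    """Stack of spectrally normalized 2D convs that also exposes an intermediate map."""
    def __init__(self, channels=(1, 16, 32, 64, 64, 64)):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(
                spectral_norm(nn.Conv2d(cin, cout, kernel_size=3, stride=2, padding=1)),
                nn.PReLU())
            for cin, cout in zip(channels[:-1], channels[1:])
        ])
        self.head = spectral_norm(nn.Conv2d(channels[-1], 1, kernel_size=1))

    def forward(self, x):
        feats = []
        for block in self.blocks:
            x = block(x)
            feats.append(x)
        # final score map plus a mid-level map with a smaller receptive field
        return self.head(x), feats[2]
```

Both outputs can then be scored by the adversarial loss, encouraging spectra that are globally consistent while preserving finer local details.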

Figure 3: The diagram of the adaptive time-frequency attention (ATFA) modules. $\otimes$ and $\odot$ denote matrix multiplication and element-wise multiplication, respectively.

2.3 Adaptive Time-Frequency Attention

Attention mechanisms [25] have been widely used in speech processing tasks for their capability of leveraging the contextual information in feature maps. Following the terminology in [26], we compute the attention function on the output feature maps $\mathcal{F}_{in} \in \mathbb{R}^{B \times T \times F \times C}$ of the downsampling layers. Here, $B$ denotes the batch size of the input features, $T$ denotes the number of frames, $F$ denotes the number of frequency bins and $C$ denotes the number of channels in each feature map. To alleviate the heavy computational complexity of conventional self-attention, we introduce an adaptive time-frequency attention (ATFA) mechanism as a lightweight solution to capture the long-range correlations exhibited in the T-F spectrogram, as in [27]. As illustrated in Fig. 3, the proposed ATFA consists of two branches: an adaptive time attention branch (ATAB) and an adaptive frequency attention branch (AFAB). The two branches cooperate to capture the global dependencies along the temporal and spectral dimensions in parallel. By factorizing the original attention into a time dimension and a frequency dimension, we can reduce the large $TF \times TF$ attention weight matrix to two much smaller ones, i.e., $T \times T$ and $F \times F$. Along the time path, we reshape the input features into $BF$ sequences of length $T$ to calculate the temporal self-attention, which can be calculated as:

$\mathcal{F}_{\mathrm{ATAB}} = \mathrm{Softmax}\!\left(\dfrac{Q_T K_T^{\mathsf{T}}}{\sqrt{d}}\right) V_T$   (3)

where $Q_T$, $K_T$ and $V_T$ are the query, key and value matrices obtained by linear projections of the reshaped input, and $d$ is the channel dimension of $Q_T$ and $K_T$. Analogously, we reshape the input into $BT$ sequences of length $F$ to calculate the adaptive attention along the frequency axis in parallel. Finally, the output features of these two branches and the original features are combined by two adaptive weights to generate the final output of the ATFA module, which can be formulated as:

$\mathcal{F}_{\mathrm{ATFA}} = \mathcal{F}_{in} + \alpha\,\mathcal{F}_{\mathrm{ATAB}} + \beta\,\mathcal{F}_{\mathrm{AFAB}}$   (4)

where $\mathcal{F}_{in}$, $\mathcal{F}_{\mathrm{ATAB}}$ and $\mathcal{F}_{\mathrm{AFAB}}$ represent the original input feature map given by the last downsampling layer, the output of ATAB and the output of AFAB, respectively. Here, $\alpha$ and $\beta$ are learnable scalars initialized as 0 that gradually learn to assign larger weights. In summary, each branch has the following steps: (1) reshape the input features; (2) extract the long-range contextual dependencies along the time and frequency axes, respectively; (3) perform feature fusion along the different dimensions with adaptive weights.
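The sketch below illustrates the factorization under simplifying assumptions: queries, keys and values are produced by 1x1 convolutions (the exact projections used in the paper may differ), attention is computed independently along the time axis for each frequency bin and along the frequency axis for each frame, and the two branches are fused with the learnable weights of Eq. (4):

```python
import torch
import torch.nn as nn

class AxialAttention(nn.Module):
    """Self-attention along one axis of a (B, C, T, F) feature map."""
    def __init__(self, channels, axis):
        super().__init__()
        self.q = nn.Conv2d(channels, channels // 4, 1)
        self.k = nn.Conv2d(channels, channels // 4, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.axis = axis  # 'time' or 'freq'

    def forward(self, x):
        b, c, t, f = x.shape
        q, k, v = self.q(x), self.k(x), self.v(x)
        if self.axis == 'time':      # attend over T, independently per frequency bin
            q = q.permute(0, 3, 2, 1).reshape(b * f, t, -1)
            k = k.permute(0, 3, 2, 1).reshape(b * f, t, -1)
            v = v.permute(0, 3, 2, 1).reshape(b * f, t, -1)
        else:                        # attend over F, independently per frame
            q = q.permute(0, 2, 3, 1).reshape(b * t, f, -1)
            k = k.permute(0, 2, 3, 1).reshape(b * t, f, -1)
            v = v.permute(0, 2, 3, 1).reshape(b * t, f, -1)
        attn = torch.softmax(q @ k.transpose(1, 2) / q.shape[-1] ** 0.5, dim=-1)
        out = attn @ v
        if self.axis == 'time':
            out = out.reshape(b, f, t, c).permute(0, 3, 2, 1)
        else:
            out = out.reshape(b, t, f, c).permute(0, 3, 1, 2)
        return out

class ATFA(nn.Module):
    """Adaptive time-frequency attention: Eq. (4) with learnable fusion weights."""
    def __init__(self, channels):
        super().__init__()
        self.atab = AxialAttention(channels, 'time')
        self.afab = AxialAttention(channels, 'freq')
        self.alpha = nn.Parameter(torch.zeros(1))  # adaptive weights, initialized to 0
        self.beta = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        return x + self.alpha * self.atab(x) + self.beta * self.afab(x)
```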

Figure 4: The diagram of the adaptive hierarchical attention (AHA) module. $\mathcal{S}$, $\otimes$ and $\odot$ denote the softmax function, matrix multiplication and element-wise multiplication, respectively.

2.4 Adaptive Hierarchical Attention

As shown in Fig. 4, we introduce an adaptive hierarchical attention (AHA) module to integrate different hierarchical feature maps given a set of ATFA modules' outputs $\{\mathcal{F}_i\}_{i=1}^{N}$, where $N$ is the number of ATFA blocks and is set to 6. Specifically, we first employ an average pooling layer and a convolutional layer to squeeze the output feature map of each ATFA module into a global representation $g_i$, and then we concatenate all the outputs as $\mathcal{G} = [g_1, \dots, g_N]$, which is fed into the softmax function to obtain the hierarchical attention map $\mathcal{W}$. After that, we cascade all the inputs to obtain a global feature map $\mathcal{F}_G$. Subsequently, we incorporate the global contextual information by performing a matrix multiplication between $\mathcal{F}_G$ and the hierarchical attention weights $\mathcal{W}$, which can be defined as:

$\mathcal{F}_{\mathrm{GC}} = \mathcal{F}_G \otimes \mathcal{W}$   (5)

where $\mathcal{F}_{\mathrm{GC}}$ denotes the global contextual feature map. Finally, we perform an element-wise sum operation between the output feature map of the last ATFA module $\mathcal{F}_N$ and the global contextual feature map to obtain the final output $\mathcal{F}_{\mathrm{AHA}}$:

$\mathcal{F}_{\mathrm{AHA}} = \mathcal{F}_N + \gamma\,\mathcal{F}_{\mathrm{GC}}$   (6)

Note that $\gamma$ is a learnable scalar coefficient initialized as 0. This adaptive weight gradually learns to assign a larger weight so as to merge the global contextual information effectively. In a nutshell, $\mathcal{F}_{\mathrm{AHA}}$ is a weighted sum of all ATFA modules' outputs, which helps to fuse the global context of all intermediate feature maps with different weights and progressively guide the enhancement procedure.
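A minimal sketch of this aggregation is given below; the average-pooling granularity and the 1x1 squeeze convolution are assumptions consistent with the description above, and the final line implements Eq. (6):

```python
import torch
import torch.nn as nn

class AHA(nn.Module):
    """Adaptive hierarchical attention over the outputs of N ATFA blocks."""
    def __init__(self, channels, n_blocks=6):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze each map to (B, C, 1, 1)
        self.squeeze = nn.Conv2d(channels, 1, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))      # learnable scalar, initialized to 0
        self.n_blocks = n_blocks

    def forward(self, feats):
        # feats: list of N intermediate feature maps, each (B, C, T, F)
        b, c, t, f = feats[-1].shape
        # per-block global descriptors -> hierarchical attention weights W of shape (B, N, 1)
        desc = torch.cat([self.squeeze(self.pool(x)).view(b, 1, 1) for x in feats], dim=1)
        weights = torch.softmax(desc, dim=1)
        # stack all intermediate maps: (B, N, C*T*F)
        stacked = torch.stack([x.view(b, -1) for x in feats], dim=1)
        # weighted sum over the N blocks -> global contextual feature map
        global_ctx = (weights * stacked).sum(dim=1).view(b, c, t, f)
        return feats[-1] + self.gamma * global_ctx     # Eq. (6)
```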

2.5 Loss function

To ensure the effective mapping in non-parallel training, we use the following losses, namely relativistic adversarial losses, cycle-consistency losses, and an identity mapping loss, to jointly optimize the proposed model.

Relativistic adversarial loss: For the noisy-to-clean mapping procedure, we employ the relativistic average least-squares (RaLS) adversarial loss [28] to make the enhanced compressed spectral magnitude $G_{Y\to X}(|Y|^{0.5})$ appear indistinguishable from the target clean one $|X|^{0.5}$, which can be expressed as:

$\mathcal{L}_{adv}(D_X) = \mathbb{E}_{x}\big[(D_X(x) - \mathbb{E}_{y}[D_X(G_{Y\to X}(y))] - 1)^2\big] + \mathbb{E}_{y}\big[(D_X(G_{Y\to X}(y)) - \mathbb{E}_{x}[D_X(x)] + 1)^2\big]$   (7)
$\mathcal{L}_{adv}(G_{Y\to X}) = \mathbb{E}_{y}\big[(D_X(G_{Y\to X}(y)) - \mathbb{E}_{x}[D_X(x)] - 1)^2\big] + \mathbb{E}_{x}\big[(D_X(x) - \mathbb{E}_{y}[D_X(G_{Y\to X}(y))] + 1)^2\big]$   (8)

where $y$ and $x$ are the compressed magnitudes of the noisy and clean spectra (i.e., $|Y|^{0.5}$ and $|X|^{0.5}$), respectively. Here, the generator $G_{Y\to X}$ tries to synthesize an enhanced spectral magnitude that can deceive the discriminator by minimizing $\mathcal{L}_{adv}(G_{Y\to X})$, whereas the discriminator $D_X$ attempts to distinguish the generated spectral magnitude from the clean one by minimizing $\mathcal{L}_{adv}(D_X)$. Analogously, the inverse clean-to-noisy generator and its corresponding discriminator are optimized using $\mathcal{L}_{adv}(G_{X\to Y})$ and $\mathcal{L}_{adv}(D_Y)$.
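The following sketch mirrors Eqs. (7)-(8) for one mapping direction, assuming `d_real` and `d_fake` are the discriminator scores on the clean and the generated compressed magnitudes; it is an illustration of the relativistic average least-squares objective rather than the authors' exact code:

```python
import torch

def rals_d_loss(d_real, d_fake):
    """Discriminator loss: real samples should score above the average fake, and vice versa."""
    return torch.mean((d_real - d_fake.mean() - 1.0) ** 2) + \
           torch.mean((d_fake - d_real.mean() + 1.0) ** 2)

def rals_g_loss(d_real, d_fake):
    """Generator loss: generated samples should score above the average real."""
    return torch.mean((d_fake - d_real.mean() - 1.0) ** 2) + \
           torch.mean((d_real - d_fake.mean() + 1.0) ** 2)
```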

Cycle-consistency loss: Without parallel supervision, the generators may map the source feature space to any random permutation of the target space when trained with only adversarial losses. To constrain the non-parallel mapping, a cycle-consistency loss is utilized to bring the output back to the original input data. The cycle-consistency loss helps the two generators $G_{Y\to X}$ and $G_{X\to Y}$ to identify the pseudo pair without paired data, as follows:

$\mathcal{L}_{cyc} = \mathbb{E}_{y}\big[\|G_{X\to Y}(G_{Y\to X}(y)) - y\|_1\big] + \mathbb{E}_{x}\big[\|G_{Y\to X}(G_{X\to Y}(x)) - x\|_1\big]$   (9)

where $\|\cdot\|_1$ indicates the $L_1$ norm.

Identity-mapping loss: Since the generator should not excessively modify the composition, such as the linguistic information, of the target speech feature when it is fed into the generator as the input [16], we regularize the generators $G_{Y\to X}$ and $G_{X\to Y}$ to be as close as possible to identity mappings by minimizing an identity-mapping loss [17], which is given by:

$\mathcal{L}_{id} = \mathbb{E}_{x}\big[\|G_{Y\to X}(x) - x\|_1\big] + \mathbb{E}_{y}\big[\|G_{X\to Y}(y) - y\|_1\big]$   (10)

where the compressed magnitudes of the target spectra (i.e., $|X|^{0.5}$ and $|Y|^{0.5}$) are provided as the inputs of the generators (i.e., $G_{Y\to X}$ and $G_{X\to Y}$), respectively. In summary, the overall loss can be expressed as follows:

$\mathcal{L}_{total} = \mathcal{L}_{adv}(G_{Y\to X}, D_X) + \mathcal{L}_{adv}(G_{X\to Y}, D_Y) + \lambda_{cyc}\,\mathcal{L}_{cyc} + \lambda_{id}\,\mathcal{L}_{id}$   (11)

where $\lambda_{id}$ and $\lambda_{cyc}$ are tunable hyper-parameters, which are set to 5 and 10, respectively.
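Under the notation reconstructed above, the remaining terms of Eqs. (9)-(11) can be sketched as follows, where `g_n2c` and `g_c2n` are hypothetical handles to the noisy-to-clean and clean-to-noisy generators and the default weights follow the values quoted in the text:

```python
import torch.nn.functional as F

def cycle_loss(g_n2c, g_c2n, noisy_mag, clean_mag):
    """Eq. (9): L1 reconstruction after a full noisy->clean->noisy (and reverse) cycle."""
    return F.l1_loss(g_c2n(g_n2c(noisy_mag)), noisy_mag) + \
           F.l1_loss(g_n2c(g_c2n(clean_mag)), clean_mag)

def identity_loss(g_n2c, g_c2n, noisy_mag, clean_mag):
    """Eq. (10): generators should leave target-domain inputs nearly unchanged."""
    return F.l1_loss(g_n2c(clean_mag), clean_mag) + F.l1_loss(g_c2n(noisy_mag), noisy_mag)

def total_generator_loss(adv_n2c, adv_c2n, cyc, idt, lam_cyc=10.0, lam_id=5.0):
    """Eq. (11): adversarial terms plus weighted cycle-consistency and identity terms."""
    return adv_n2c + adv_c2n + lam_cyc * cyc + lam_id * idt
```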

3 Experiments

3.1 Datasets

The dataset used in this work is publicly available as proposed in [29]; it is a selection of the Voice Bank corpus [30] with 28 speakers for training and another 2 unseen speakers for testing. The training set consists of 11572 mono audio samples, while the test set contains 824 utterances from 2 speakers (one male and one female). For the training set, audio samples are mixed with one of 10 noise types, i.e., two artificial (babble and speech-shaped) and eight real noise types from the DEMAND database [31], at four SNRs of 0, 5, 10 and 15 dB. The test utterances are created with 5 unseen noise types (all from the DEMAND database) at SNRs of 2.5, 7.5, 12.5 and 17.5 dB. The original raw waveforms are downsampled from 48 kHz to 16 kHz beforehand.

Models PESQ SSNR STOI DNSMOS
Unprocessed 1.97 1.68 0.921 3.02
Normal magnitude
CycleGAN (baseline) 2.47 5.69 0.924 3.36
CycleGAN+ATAB 2.53 6.28 0.929 3.40
CycleGAN+AFAB 2.54 6.42 0.928 3.39
CycleGAN+ATFA 2.59 6.65 0.932 3.41
AIA-CycleGAN 2.61 6.69 0.931 3.43
Compressed magnitude (compression coefficient = 0.5)
CycleGAN (baseline) 2.56 6.21 0.926 3.38
CycleGAN+ATAB 2.59 6.67 0.928 3.41
CycleGAN+AFAB 2.61 6.61 0.931 3.43
CycleGAN+ATFA 2.64 6.94 0.930 3.45
AIA-CycleGAN 2.67 7.23 0.932 3.47
Table 1: Ablation study for Normal and Compressed magnitude under non-parallel training.

3.2 Implementation Setup

A Hanning window of length 32 ms is utilized, with 75% overlap between adjacent frames. A 512-point STFT is utilized and the 257-dimensional compressed spectral magnitude is used as the input feature. For the non-parallel training strategy, we randomly crop a fixed-length segment (i.e., 108 frames) from a randomly selected noisy audio file as the input, while the target is a randomly selected clean audio file that is different from the input audio. That is to say, the input totally mismatches the target speech features. We adopt the Adam optimizer [32] and train the networks with an initial learning rate of 0.0001 for the discriminators and 0.0002 for the generators, respectively. The same learning rates are maintained for the first 50 epochs, after which they decay linearly over the remaining epochs. We set the batch size to 4 and apply the identity-mapping loss only for the first 20 epochs.
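A sketch of this feature extraction pipeline is given below, assuming torchaudio is used for file loading (the actual I/O in the authors' code is not specified); the window and hop lengths match the 32 ms / 75% overlap setting at 16 kHz:

```python
import random
import torch
import torchaudio

def load_compressed_mag(path, n_fft=512, hop=128, win=512):
    """32 ms Hann window at 16 kHz with 75% overlap -> 257-bin compressed magnitude."""
    wave, sr = torchaudio.load(path)            # assumes 16 kHz mono files
    spec = torch.stft(wave[0], n_fft=n_fft, hop_length=hop, win_length=win,
                      window=torch.hann_window(win), return_complex=True)
    return spec.abs().clamp_min(1e-8) ** 0.5    # (257, frames)

def random_segment(mag, n_frames=108):
    """Crop a fixed-length 108-frame segment for non-parallel training."""
    if mag.shape[1] <= n_frames:
        return torch.nn.functional.pad(mag, (0, n_frames - mag.shape[1]))
    start = random.randint(0, mag.shape[1] - n_frames)
    return mag[:, start:start + n_frames]
```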

4 Results and Analysis

We use the following objective metrics to evaluate speech enhancement performance: the perceptual evaluation of speech quality (PESQ) [33], short-time objective intelligibility (STOI) [34], segmental signal-to-noise ratio (SSNR), the mean opinion score (MOS) prediction of the speech signal distortion (CSIG) [35], the MOS prediction of the intrusiveness of background noise (CBAK) and the MOS prediction of the overall effect (COVL) [35]. In addition, we evaluate the perceptual quality with DNSMOS [36], a robust non-intrusive speech quality metric designed to evaluate noise suppressors. Higher values of all metrics indicate better performance.
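For reproducibility, PESQ and STOI can be computed with the open-source `pesq` and `pystoi` packages (an assumption; the paper does not name its tooling), and SSNR can be approximated with a simple frame-wise implementation:

```python
import numpy as np
from pesq import pesq          # https://pypi.org/project/pesq/
from pystoi import stoi        # https://pypi.org/project/pystoi/

def evaluate(clean, enhanced, sr=16000, frame=512):
    """clean/enhanced: 1-D numpy arrays at 16 kHz."""
    scores = {
        "pesq": pesq(sr, clean, enhanced, "wb"),
        "stoi": stoi(clean, enhanced, sr, extended=False),
    }
    # segmental SNR over 32 ms frames, clipped to [-10, 35] dB as is common practice
    ssnr = []
    for i in range(0, len(clean) - frame, frame):
        c, e = clean[i:i + frame], enhanced[i:i + frame]
        noise = np.sum((c - e) ** 2) + 1e-10
        ssnr.append(10 * np.log10(np.sum(c ** 2) / noise + 1e-10))
    scores["ssnr"] = float(np.mean(np.clip(ssnr, -10, 35)))
    return scores
```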

4.1 Ablation study

We first investigate the effectiveness of the proposed attention modules and the power compression. As shown in Table 1, we set the CycleGAN without the proposed attention modules as the baseline, which is also trained with the relativistic average least-squares loss. From the results, we can make the following observations. Firstly, compared with the non-compressed variants, CycleGAN-based approaches fed with the compressed spectral magnitude achieve better performance, indicating that the power compression facilitates more accurate spectrum recovery. For example, the compressed-magnitude CycleGAN achieves average improvements of 0.09 in PESQ and 0.52 dB in SSNR over the normal-magnitude CycleGAN. The possible rationale is that, when the compression operation is applied, the gap between the magnitudes of the speech and noise components is narrowed; that is to say, the residual noise components in the weak-energy regions (i.e., middle- and high-frequency regions) are given more priority during the enhancement procedure [19]. Secondly, by adding the proposed attention branches, CycleGAN+ATAB improves the average PESQ and STOI by 0.03 and 0.002 over the compressed-magnitude baseline, while CycleGAN+AFAB improves the average PESQ and STOI by 0.05 and 0.004. This indicates that ATAB and AFAB can effectively guide the feature learning procedure of the generators. By integrating ATAB and AFAB as the ATFA module, we also observe considerable improvements in the PESQ, SSNR and DNSMOS scores. Finally, by integrating the AHA module and the ATFA modules into an AIA module, we can see that the proposed AIA-CycleGAN significantly outperforms the other configurations, which verifies the effectiveness of the proposed AIA module in improving speech quality.

4.2 Comparison under non-parallel and parallel training

To investigate the effectiveness of our proposed method under both the parallel and the non-parallel training strategy, we compare it with reference methods including conventional GANs and CycleGANs. Here, "GAN-Normal" is fed with the normal magnitude as the input feature, while "GAN-Compressed" is fed with the compressed magnitude. From Table 2, we can observe the following two phenomena. Firstly, GAN-based methods yield performance similar to CycleGAN in parallel training, whereas CycleGAN outperforms GAN-based methods by a large margin in non-parallel training. This indicates that the cycle-consistency constraint can prevent the mapping ability of the generators from sharply degrading under unpaired data. Besides, by employing the AIA module, the proposed approach under unpaired data achieves competitive performance close to that of parallel training and significantly outperforms all the baselines. For example, with non-parallel training, AIA-CycleGAN surpasses GAN-Compressed by a large margin in PESQ, CSIG, CBAK, COVL and DNSMOS, i.e., by 0.64, 0.38, 0.46, 0.54 and 0.79, respectively.

Fig. 5 shows the spectrograms of the noisy/clean utterances and the utterances enhanced by GAN-Compressed, CycleGAN and AIA-CycleGAN under non-parallel training. The figure demonstrates that the CycleGANs dramatically surpass the conventional GAN-based method. Moreover, we observe that AIA-CycleGAN effectively surpasses the original CycleGAN in terms of noise suppression. For example, as shown in the regions marked in red and green in Fig. 5 (c) and (d), AIA-CycleGAN shows a more powerful capability of suppressing residual noise components.

Methods PESQ STOI CSIG CBAK COVL DNSMOS
Noisy 1.97 0.921 3.35 2.44 2.63 3.02
Non-parallel training
GAN-Normal 2.01 0.914 3.48 2.74 2.67 2.68
GAN-Compressed 2.03 0.916 3.54 2.78 2.72 2.72
CycleGAN 2.56 0.927 3.78 3.14 3.16 3.38
AIA-CycleGAN 2.67 0.932 3.86 3.20 3.21 3.47
Parallel training
GAN-Normal 2.56 0.931 3.72 3.23 3.16 3.33
GAN-Compressed 2.60 0.934 3.78 3.25 3.18 3.35
CycleGAN 2.62 0.932 3.87 3.16 3.24 3.41
AIA-CycleGAN 2.74 0.936 3.96 3.25 3.29 3.49
Table 2: Experimental results among different models under non-parallel and parallel training. Bold indicates the best results for each training condition.

Figure 5: Visualization of the noisy/clean spectrogram and enhanced spectrogram using different methods.

4.3 Comparison with other competitive GAN-based and Non-GAN based approaches

Our proposed model is also compared with several other competitive GAN-based and non-GAN-based baselines under standard paired data. As seen from Table 3, AIA-CycleGAN outperforms these advanced GAN-based methods by considerable margins in terms of PESQ, CSIG and COVL, while providing STOI and CBAK similar to CP-GAN [12]. For example, AIA-CycleGAN provides average improvements of 0.58 in PESQ, 0.011 in STOI, 0.48 in CSIG, 0.31 in CBAK and 0.49 in COVL over SEGAN [9], which was the first GAN-based SE approach in the time domain. The significant improvements in PESQ, CSIG and COVL indicate that our proposed method better maintains speech integrity while reducing speech distortion. When compared with non-GAN-based methods, AIA-CycleGAN also provides competitive performance, especially in terms of the PESQ and CSIG scores. Note that we reimplement GCRN [8] and DCCRN [5] on the Voice Bank + DEMAND dataset, and directly use the reported scores of the other methods from their original papers.

Methods PESQ STOI CSIG CBAK COVL
Noisy 1.97 0.921 3.35 2.44 2.63
GAN-based methods
SEGAN [9] 2.16 0.925 3.48 2.94 2.80
MMSEGAN [13] 2.53 0.930 3.80 3.12 3.14
RSGAN [10] 2.51 0.937 3.78 3.23 3.16
RaSGAN [10] 2.57 0.937 3.83 3.28 3.20
CP-GAN [12] 2.64 0.940 3.93 3.29 3.28
Non-GAN based methods
Wave-U-net [37] 2.64 – 3.56 3.08 3.09
DFL-SE [38] – – 3.86 3.33 3.22
CRN-MSE [39] 2.61 0.938 3.78 3.11 3.24
GCRN [8] 2.51 0.940 3.71 3.24 3.09
DCCRN [5] 2.68 0.939 3.88 3.18 3.27
AIA-CycleGAN 2.74 0.936 3.96 3.25 3.29
Table 3: Experimental results among different models under parallel training.

5 Conclusions

In this paper, we propose a novel adaptive attention-in-attention CycleGAN (AIA-CycleGAN) to address the difficulty of the non-parallel speech enhancement task. Specifically, we use relativistic adversarial losses, cycle-consistency losses and an identity-mapping loss to jointly constrain the forward noisy-clean-noisy cycle and the backward clean-noisy-clean cycle. To effectively improve feature correlation learning in the generators, we integrate adaptive time-frequency attention and adaptive hierarchical attention to form an attention-in-attention module that captures local and global long-range dependencies. By employing ATFA, the generators can capture long-range temporal and frequency contextual information to distinguish different types of information for more effective feature representations. By employing AHA, the generators can capture long-range hierarchical contextual information to flexibly aggregate different global feature maps with learnable weights. Experimental results demonstrate that the proposed approach provides consistently better speech enhancement performance than previous GAN-based and CycleGAN-based baselines under both parallel and non-parallel training.

Acknowledgment

This work was supported in part by the National Natural Science Foundation of China under Grant 61631016 and Grant 61501410, and in part by the Fundamental Research Funds for the Central Universities under Grant 3132018XNG1805.

References