The human auditory cortex has a remarkable ability to focus on target speech by selectively suppressing ambient noise. This selective suppression of unwanted background noise is known to exploit noise-robust visual cues to enhance a person's capacity to resolve phonological ambiguities golumbic2013visual . In addition, studies have shown the importance of visual cues in improving speech intelligibility summerfield1992lipreading as well as speech detection in noisy environments grant2000use ; grant2001speech . In this study, we achieve this selective speech enhancement ability computationally.
In recent years, speech enhancement (SE) has attracted wide attention due to its noise reducing ability, which helps the hearing impaired listen better in noisy social situations and has opened the doors for speech processing systems (such as speech recognition and voice activity detection systems) in noisy environments narayanan2014investigation ; kayser2015improving . SE approaches can be categorised into statistical noise reduction models (such as spectral subtraction (SS), linear minimum mean square error (LMMSE) and Wiener filtering) and computational auditory scene analysis (CASA) wang2006fundamentals . It has been observed that statistical methods fail to achieve improved speech intelligibility in some scenarios due to the introduction of distortions such as musical noise. In contrast, CASA has been shown to be more effective in both stationary and non-stationary noises chen2018dnn .
In CASA, speech is separated from interfering background noise by applying a time-frequency (T-F) spectral mask to the T-F representation of the noisy speech. The T-F spectral mask is used to enhance the speech-dominant regions and suppress the noise-dominant regions. The ideal binary mask (IBM) assigns zero to a T-F unit if the local signal-to-noise ratio (SNR) is lower than the local criterion (LC), and one otherwise. The IBM is defined as follows:
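The equation itself appears to have been lost in extraction; a reconstruction of the standard IBM definition, consistent with the surrounding description (with $t$ indexing time frames and $f$ frequency bins), is:

```latex
IBM(t,f) =
\begin{cases}
1, & \text{if } SNR(t,f) > LC \\
0, & \text{otherwise}
\end{cases}
```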
The IBM has been shown to improve speech quality and intelligibility for both hearing impaired and normal hearing listeners kjems2010speech ; ahmadi2013perceptual ; wang2009speech . The IBM cannot be calculated using equation 1 in real-world scenarios because the target speech and the interfering background noise cannot be estimated with high accuracy. However, IBM estimation can be modelled as a data-driven optimisation problem that jointly exploits the noisy speech and visual face images for spectral mask estimation.
In the literature, extensive research has been carried out to develop audio-only (A-only) and audio-visual (AV) SE methods. Researchers have proposed several SE models such as deep neural network (DNN) based spectral mask estimation models ephrat2018looking ; gogate2018dnn , DNN based clean spectrogram estimation models gabbay2018visual ; hou2018audio , Wiener filtering based hybrid models adeel2017towards ; adeel2018lip ; adeel2018contextual , and time-domain SE models rethage2018wavenet ; pandey2018new ; luo2019conv . However, limited work has been conducted to develop robust language-independent, causal, speaker- and noise-independent AV SE models for the low SNRs observed in everyday social environments (such as cafeterias and restaurants), where traditional A-only hearing aids fail to improve speech intelligibility. The few attempts to develop such robust models have been limited to speaker-dependent scenarios hou2018audio and small-scale speaker-independent scenarios gogate2018dnn ; adeel2018contextual .
In addition, none of the aforementioned AV SE studies have conducted listening tests on real noisy mixtures, which often consist of a speech signal reverberantly mixed with multiple competing background noise sources CHiME3 . Finally, studies have shown that a pretrained DNN based SE model does not generalise well to new languages pascual2018language . The model could be fine-tuned on a large AV corpus covering a wide variety of languages, such as AVSPEECH ephrat2018looking (consisting of 1500 hours of recordings), to potentially achieve language-independent performance given enough model capacity. However, training on corpora like AVSPEECH requires a large number of graphics processing units (GPUs) or tensor processing units (TPUs) that are often unavailable in academic research environments.
In this paper, we present a causal, language-, noise- and speaker-independent AV model that focuses on a target speaker by selectively suppressing the background noise. More specifically, we design and train a cross-modal DNN architecture, called CochleaNet, that ingests the noisy sound mixture and cropped images of the speaker's lips as input and outputs a T-F mask used to selectively suppress and enhance each T-F bin. In addition, the model contextually exploits the available AV cues to estimate the spectral mask independently of the SNR.
The proposed AV SE model is evaluated using ASPIRE, a first of its kind high quality AV binaural speech corpus recorded in real noisy settings such as a cafeteria and a restaurant. It is to be noted that most of the aforementioned AV SE methods used synthetic mixtures of clean speech and noise for model evaluation. However, synthetic mixtures do not reflect real noisy mixtures, in which speech is often reverberantly mixed with multiple competing background noise sources. Therefore, the ASPIRE corpus can be used by the speech and machine learning communities as a benchmark resource to support reliable evaluation of AV SE technologies.
We demonstrate superior speech quality and intelligibility of the proposed approach over state-of-the-art A-only SE approaches as well as recent DNN based SE models. In addition, we show that a model trained on a synthetic mixture of the Grid corpus Grid (with only 33 speakers and a small English vocabulary) and CHiME 3 CHiME3 noises (consisting of bus, pedestrian, cafe, and street noises) generalises well to the real noisy ASPIRE corpus, large vocabulary corpora (such as TCD-TIMIT TCDTIMIT ), other languages (such as Mandarin hou2018audio ) and a wide variety of speakers and noises NOISEX92 ; snyder2015musan . An overview of our proposed AV SE model is shown in Figure 1.
In summary, our paper presents six major contributions:
A causal, language-, noise- and speaker-independent AV DNN driven model for SE is proposed. The model contextually exploits the audio and visual cues, independently of the SNR, to estimate the spectral mask that is used to selectively suppress and enhance each T-F bin.
A first of its kind AV corpus, consisting of high quality binaural speech recorded in real noisy environments such as a cafeteria and a restaurant, is collected to evaluate the performance of the proposed model in challenging real noisy settings. In the literature, synthetic mixtures of clean speech and noise are generally used to evaluate AV SE methods. However, synthetic mixtures do not depict real noisy mixtures, in which speech is reverberantly mixed with multiple competing background noise sources.
To the best of our knowledge, our paper is the first to propose a speaker-, noise- and language-independent model that generalises to different languages even after training on the small English vocabulary Grid corpus. In the literature, it has been shown that a pretrained SE model trained on a single language does not perform well on new languages pascual2018language .
We perform an extensive evaluation of our proposed approach, using the real noisy ASPIRE corpus, against state-of-the-art A-only SE approaches (including spectral subtraction and linear minimum mean square error) as well as recent DNN based SE models (including SEGAN) using both objective measures (PESQ, SI-SDR, and ESTOI) and subjective MUSHRA (MUlti Stimulus test with Hidden Reference and Anchor) listening tests.
We also study the behaviour of the trained AV model, in terms of objective metrics, when the visual cues are temporarily or permanently absent for random durations of time due to occlusions.
Finally, we critically analyse and compare the performance of the A-only CochleaNet model with its AV counterpart to empirically identify the role visual cues play in the performance of the AV model. Specifically, we study the behaviour of the A-only and AV models in silent speech regions, and we conduct listening tests to gauge the models' performance on different phonemes. We hypothesise that the model performs better on visually distinguishable phonemes as compared to visually indistinguishable phonemes.
The rest of the paper is organised as follows: Section 2 briefly reviews the related work, and section 3 presents the ASPIRE corpus collection setup and the postprocessing involved. Section 4 presents CochleaNet, an AV mask estimation model for SE. Section 5 discusses the experimental setup and results. Section 6 concludes this work and proposes future research directions.
2 Related work
This section briefly reviews the related works in the area of A-only and AV SE.
2.1 Audio-Visual Speech Enhancement
Ephrat et al. ephrat2018looking proposed a speaker-independent AV DNN for complex ratio mask estimation to separate speech from overlapping speech and background noises. The model is trained on AVSPEECH, a new large AV corpus consisting of 1500 hours of recordings with a wide variety of languages, people and face poses. The main limitation of the aforementioned study is that the model is trained and evaluated on a fixed SNR. Similarly, Gogate et al. gogate2018dnn presented a speaker-independent AV DNN for IBM estimation to separate speech from background noises. However, the model is trained and evaluated using only the limited vocabulary Grid corpus Grid . In addition, Hou et al. hou2018audio proposed a speaker-dependent SE model, trained and evaluated on a single speaker, that predicts the enhanced spectrogram from the noisy spectrogram using a multimodal deep convolutional network. On the other hand, Gabbay et al. gabbay2018visual trained a convolutional encoder-decoder architecture to estimate the spectrogram of the enhanced speech from the noisy speech spectrogram and cropped mouth regions. However, the model fails to work when the visuals are occluded. Adeel et al. adeel2018lip ; adeel2018contextual proposed visual-only and AV SE models by integrating an enhanced visually-derived Wiener filter (EVWF) and a DNN based lip reading regression model. The preliminary evaluation demonstrated their effectiveness in dealing with spectro-temporal variations in a wide variety of noisy environments. Owens et al. owens2018audio
proposed a self-supervised network trained to categorise whether the audio and visual streams are temporally aligned. The model is then used for feature extraction to condition an on/off screen speaker source separation model. Afouras et al. afouras2018deep trained a DNN to predict both the magnitude and phase of denoised speech spectrograms. Finally, Zhao et al. Zhao_2018_ECCV presented a model to separate the sound of multiple objects from a video (e.g. musical instruments).
2.2 Audio-only Speech Enhancement
Hershey et al. hershey2016deep proposed deep clustering, which exploits discriminatively trained speech embeddings to cluster and separate the different sources. For time-domain SE, Rethage et al. rethage2018wavenet proposed a non-causal WaveNet based SE model that operates on raw audio to address the invalid short-time Fourier transform (STFT) problem griffin1984signal in spectral mask based models. Similarly, Pandey et al. pandey2018new and Luo et al. luo2019conv proposed fully-convolutional time-domain SE models that address the shortcomings of separation in the frequency domain, including the decoupling of phase and magnitude, and the high latency of calculating the STFT.
A fundamental problem with A-only SE and separation is the label permutation problem hershey2016deep , i.e. there is no easy way to associate a mixture of audio sources with the corresponding speakers or instruments yu2017permutation . In addition, the main limitation of most of the aforementioned A-only and AV SE approaches is that the developed models are either evaluated on high SNRs or on a fixed SNR. Moreover, none of the aforementioned AV approaches have used an AV speech corpus recorded in real noisy settings for evaluation.
3 ASPIRE Corpus
In the literature, extensive research has been carried out to develop A-only corpora of real noisy mixtures, which often consist of a speech signal reverberantly mixed with multiple competing background noise sources CHiME3 . However, to the best of our knowledge, no such AV corpus recorded in real noisy settings is available. In this section, we present ASPIRE, a first of its kind AV speech corpus recorded in real noisy environments (such as a cafeteria and a restaurant) to support reliable evaluation of AV SE technologies.
3.1 Sentence design
The ASPIRE corpus follows the same sentence format as the AV Grid corpus, as shown in Table 1. Each six-word sentence consists of a command, colour, preposition, letter, digit and adverb. The letter "w" was excluded because it is the only multi-syllabic letter. Each speaker produced all combinations of colour, letter and digit, leading to 1000 utterances per talker in both real noisy settings and an acoustically isolated booth. Thus, each talker recorded 2000 utterances.
3.2 Speaker population
Three speakers (one male and two female) contributed to the corpus. The speakers' ages ranged from 23 to 55. All the speakers have spent most of their lives in the United Kingdom and together encompassed a range of mixed English accents. All the participants were paid for their contribution. The corpus consists of a total of 6000 utterances (3000 recorded in real noisy settings, 3000 in an acoustically isolated booth).
The ASPIRE corpus is recorded in real noisy settings, specifically the university cafeteria and restaurant during busy lunch times (11.30 am to 1.30 pm), as well as in an acoustically isolated booth. The recording setup is shown in Figure 2. An Apple iPad mini 2, placed at eye level to avoid noise and distraction from the video apparatus, was used to record the video at 30 frames per second (fps) and 1080p resolution; the distance between the iPad and the speaker was 90 centimetres. A collar microphone was also connected to the iPad. The high quality binaural audio from the speaker was recorded using a Zoom H4n Pro recorder at a sampling rate of 44100 Hz and a binaural microphone. The listener wore the binaural microphone at an approximate distance of 140 centimetres from the speaker.
The listener and speaker were sitting opposite each other on fixed chairs. The speaker was initially trained with a few utterances, and the purpose of the research was explained in detail. Periodic breaks were given to the speakers during the recording to avoid fatigue, and each sentence had to be read correctly without any interruption. The sentences, as detailed in section 3.1, were presented to the speaker on a laptop in random order, and the speaker was allowed to repeat a sentence if the recording was interrupted or the sentence was incorrectly uttered. In addition, the speaker repeated the utterance if any mistake was spotted by the listener. Of the 2000 utterances per speaker (1000 utterances in real noisy settings and 1000 utterances in the booth), around 2% and 4% of the utterances were re-recorded in the booth and real noisy settings respectively.
Audio and video data were continuously collected throughout a session. The drift between audio and video data was calculated by synchronising the claps. The utterance start and end times were identified using Gentle (a robust forced-aligner built on Kaldi), speech recorded from the collar microphone and the presented transcriptions. Finally, all the segmented utterances were manually checked to correct any additional alignment errors.
The raw videos recorded in the busy restaurant and cafeteria contain a few clearly identifiable people other than the speaker. Therefore, to ensure privacy, we estimate the speaker area for the first frame using a segmentation model and pixelate the non-speaker area for the complete utterance using the estimated segmentation mask. This is possible because the speaker sits in a single position throughout an utterance. Figure 3 shows some sample video frames from the ASPIRE corpus.
4.1 Data Representation
Our model ingests both audio and visual streams as input. For batch training, 3 second video clips are considered. A cropped 80 x 40 lip region is extracted from the video and used as the visual input (75 cropped lip images, assuming a 3 second clip recorded at 25 fps). For the audio input, we compute the STFT of the audio segments and use the magnitude spectrogram. At inference time, the trained model can be applied to streaming data as well as data of arbitrary length.
The output of our network is an IBM, a multiplicative spectrogram mask that describes the T-F relationship between the clean audio and the background noise. In the literature, it has been shown that multiplicative masks perform better than direct prediction of the time-domain waveform or the clean spectrogram magnitudes wang2014training ; wang2018supervised .
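To make the target of the network concrete, the oracle IBM computation from clean and noise magnitude spectrograms can be sketched as below; the small epsilon and the default local criterion of 0 dB are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def ideal_binary_mask(clean_mag, noise_mag, lc_db=0.0):
    """Assign 1 to T-F units where the local SNR exceeds the local criterion."""
    # epsilon guards against log of zero in silent T-F units
    local_snr_db = 20 * np.log10((clean_mag + 1e-12) / (noise_mag + 1e-12))
    return (local_snr_db > lc_db).astype(np.float32)
```

During training, this oracle mask (computed from the separately available clean speech and noise of the synthetic mixtures) serves as the supervision target.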
| Layer       | 1     | 2     | 3     | 4     | 5     |
| Filter size | 5 x 5 | 5 x 5 | 5 x 5 | 5 x 5 | 1 x 1 |
| Dilation    | 1 x 1 | 2 x 1 | 4 x 1 | 8 x 1 | 1 x 1 |
| Size     | 3 x 3 | 3 x 3 | 2 x 3 | 3 x 3 | 3 x 3 | 2 x 3 | 256 |
| Dilation | 1 x 1 | 1 x 1 | 2 x 2 | 3 x 3 |
4.2 Network Architecture
This section describes the network architecture of the proposed AV SE model. Figure 4 depicts a high-level overview of the multi-stream modules present in the network. The subsequent subsections describe each module in detail.
4.2.1 Audio Feature Extraction
The audio feature extraction module consists of dilated convolutional layers as detailed in Table LABEL:tab:audFeatEx . Each layer is followed by a ReLU activation for non-linearity.
4.2.2 Visual Feature Extraction
4.2.3 Multimodal Fusion
The visual features are sampled at 25 fps, while the audio feature sampling rate is 75 vectors per second (VPS). The visual features are upsampled to match the audio rate and compensate for the sampling rate discrepancy. This is done by simple repetition of each element 3 times along the temporal dimension. After upsampling, the audio and visual features are concatenated and fed to an LSTM layer consisting of 622 units. The LSTM output is then fed to two fully connected layers with 622 neurons and ReLU activation. The weights of the fully connected layers are shared across the time dimension. Finally, the extracted features are fed to a fully connected layer with 622 neurons and sigmoid activation. The binary cross-entropy between the estimated and the actual IBM is used as the loss function. It is to be noted that no thresholding was applied to the predicted mask: the sigmoidal outputs were used directly as the estimated mask.
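The upsampling-by-repetition and concatenation step can be sketched as follows (a minimal NumPy illustration; the feature dimensions are placeholders, not the paper's actual layer sizes):

```python
import numpy as np

def fuse_av_features(audio_feats, visual_feats):
    # audio_feats: (T_a, D_a) at 75 VPS; visual_feats: (T_v, D_v) at 25 fps
    # upsample visual features 3x by repeating each frame along time
    visual_up = np.repeat(visual_feats, 3, axis=0)
    assert visual_up.shape[0] == audio_feats.shape[0]
    # concatenate along the feature dimension before the LSTM
    return np.concatenate([audio_feats, visual_up], axis=1)
```

The fused sequence would then be passed through the LSTM and fully connected layers described above.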
4.3 Speech Resynthesis
The model estimates a T-F IBM when a noisy spectrogram and cropped lip images are fed to it. The estimated multiplicative spectral mask is applied to the noisy magnitude spectrum. The masked magnitude is then combined with the noisy phase to obtain the enhanced speech using the inverse STFT (ISTFT). Figure 1 depicts an overview of speech resynthesis.
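A minimal sketch of this resynthesis, assuming the frame/hop values from the audio preprocessing section (themselves our approximations) and omitting the overlap-add window normalisation that a production ISTFT would include:

```python
import numpy as np

def apply_mask_and_istft(noisy_stft, mask, frame_len=1242, hop=213):
    # keep the noisy phase; the mask only modifies the magnitude
    enhanced = mask * np.abs(noisy_stft) * np.exp(1j * np.angle(noisy_stft))
    frames = np.fft.irfft(enhanced, n=frame_len, axis=-1)
    # naive overlap-add resynthesis (no COLA normalisation, for illustration)
    out = np.zeros((len(frames) - 1) * hop + frame_len)
    window = np.hanning(frame_len)
    for i, f in enumerate(frames):
        out[i * hop : i * hop + frame_len] += f * window
    return out
```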
5 Experiments and Results
We qualitatively and quantitatively evaluated our proposed approach against other state-of-the-art A-only and AV SE methods in real noisy environments and on a range of synthetic AV corpora.
5.1 Synthetic AV Corpora
This section presents the synthetic AV corpora used for training and testing of CochleaNet.
5.1.1 Grid + CHiME 3
In our experiments, the benchmark Grid corpus Grid is used for the training and evaluation of the proposed framework. All 33 speakers with 1000 utterances each are considered. The sentence format is depicted in Table 1. The Grid corpus is randomly mixed with non-stationary noises from the 3rd CHiME challenge (CHiME 3) CHiME3 , consisting of bus, cafeteria, street, and pedestrian noises, at SNRs ranging from -12 to 9 dB with a step size of 3 dB. It is to be noted that the trained model is SNR-independent, i.e. the utterances at all SNRs were combined for training and evaluation. For training, 21000 utterances from 21 speakers were employed. The model was validated and tested on 4000 and 8000 utterances from 4 and 8 speakers respectively.
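The additive mixing at a target SNR can be sketched as follows (a standard formulation; whether the paper computes signal power over the whole utterance or over active speech only is not specified, so whole-utterance power is assumed here):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    # scale the noise so the mixture has the requested speech-to-noise ratio
    speech_pow = np.mean(speech ** 2)
    noise_pow = np.mean(noise ** 2)
    target_noise_pow = speech_pow / (10 ** (snr_db / 10))
    scaled_noise = noise * np.sqrt(target_noise_pow / noise_pow)
    return speech + scaled_noise
```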
5.1.2 TCD-TIMIT + MUSAN
For large vocabulary generalisation analysis, we used the benchmark TCD-TIMIT TCDTIMIT corpus. Specifically, 5488 utterances from 56 speakers are mixed with randomly selected non-speech noises from the MUSAN corpus snyder2015musan . The MUSAN noises include technical noises (e.g. dialtones, fax machine noises) as well as ambient sounds (e.g. thunder, wind, footsteps, animal noises). It is to be noted that all 5488 utterances were used as a test set to assess the model performance in a large vocabulary, speaker- and noise-independent setting.
5.1.3 Hou et al. + NOISEX-92
For language-independent generalisation testing, a Mandarin dataset hou2018audio based on the Taiwan Mandarin Hearing in Noise Test (MHINT) with 320 utterances is mixed with randomly selected noises from NOISEX-92 NOISEX92 , consisting of voice babble, factory and radio channel noise, and various military noises including fighter jet, engine room, operations room, tank and machine gun noises.
5.2 Data Preprocessing
5.2.1 Audio Preprocessing
The audio signals were resampled to 16 kHz and a mono channel is used for processing. The resampled audio signal was segmented into 78 millisecond (ms) frames with a 17% increment (hop) rate to produce 75 frames per second. A Hanning window and the STFT are applied to produce a 622-bin magnitude spectrogram.
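This pipeline can be sketched as below; the frame length of 1242 samples and hop of 213 samples are our assumptions, chosen so that a roughly 78 ms frame yields 622 rFFT bins and the hop approximates 75 frames per second at 16 kHz:

```python
import numpy as np

def magnitude_spectrogram(audio, frame_len=1242, hop=213):
    # frame_len ~ 78 ms at 16 kHz -> (1242 // 2) + 1 = 622 frequency bins
    # hop ~ 13.3 ms -> roughly 75 frames per second (assumed values)
    window = np.hanning(frame_len)
    n_frames = 1 + (len(audio) - frame_len) // hop
    frames = np.stack([audio[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=-1))  # shape: (n_frames, 622)
```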
5.2.2 Video Preprocessing
The Grid and TCD-TIMIT corpora are recorded at 25 fps. However, the Mandarin dataset hou2018audio , recorded at 30 fps, is downsampled to 25 fps using ffmpeg ffmpeg . A dlib face detector dlib09 is used to locate the face in each frame of a video clip (75 face-cropped images, assuming a 3 second clip recorded at 25 fps). The speaker's lip images are extracted from the 25 fps face video using a minified dlib dlib09 model optimised for extracting the lip landmarks. A region with an aspect ratio of 1:2 centred at the lip centre is extracted using the lip landmark points. The extracted region is resized to 40 pixels x 80 pixels and converted to a greyscale image. It is to be noted that the lip sequences are extracted at 25 fps while the audio features are extracted at 75 VPS.
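The landmark-based 1:2 crop and resize can be sketched as follows; the crop box size and the nearest-neighbour resize are illustrative assumptions (the paper uses dlib lip landmarks but does not specify the box size or resampling method):

```python
import numpy as np

def crop_lip_region(gray_frame, lip_landmarks, out_h=40, out_w=80):
    # lip_landmarks: (N, 2) array of (x, y) points around the mouth
    cx, cy = lip_landmarks.mean(axis=0)
    # 1:2 aspect-ratio box centred on the lip centre (box size is assumed)
    h, w = 60, 120
    y0, x0 = int(cy - h / 2), int(cx - w / 2)
    crop = gray_frame[max(y0, 0):y0 + h, max(x0, 0):x0 + w]
    # nearest-neighbour resize to out_h x out_w
    ys = (np.arange(out_h) * crop.shape[0] / out_h).astype(int)
    xs = (np.arange(out_w) * crop.shape[1] / out_w).astype(int)
    return crop[ys][:, xs]
```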
5.3 Experimental Setup
For the AV feature fusion and mask estimation, the network is trained using the TensorFlow library and NVIDIA Titan Xp GPUs. A subset of speakers from the Grid + CHiME 3 corpus (as described in section 5.1) is used for training/validation of the neural network, and the rest of the speakers are used to test the performance of the trained neural network in a speaker-independent scenario (25% testing dataset). The preprocessed training set of the Grid + CHiME 3 corpus consists of around 25000 utterances, split into 21000 and 4000 utterances for training and validation respectively. It is to be noted that there was no overlap between the speakers and the noises present in the train, validation and test sets, ensuring the speaker- and noise-independence criteria. When a missing visual frame is encountered, a vector of zeros is used in lieu of the lip image. The preprocessed dataset consists of cropped lip images and a noisy audio spectrogram as input and an IBM as output. The network is trained using backpropagation with the Adam optimiser adam until the validation error stops decreasing.
5.4 Objective testing on Synthetic mixtures
The quality of the re-synthesised speech is evaluated using the following objective metrics for estimating speech quality and intelligibility on the aforementioned synthetic AV datasets (section 5.1).
5.4.1 Perceptual Evaluation of Speech quality (PESQ) comparison
PESQ pesq is one of the most commonly used objective assessment metrics in the SE literature and has been shown to correlate well with subjective listening tests hu2007evaluation . PESQ is computed as a linear combination of the average disturbance value and the average asymmetrical disturbance value between a reference signal and the modified signal. The PESQ score ranges from -0.5 to 4.5, indicating the minimum and maximum possible reconstructed speech quality. The PESQ scores for A-only and AV CochleaNet, SEGAN, SS, and LMMSE on Grid + CHiME 3, TCD-TIMIT + MUSAN and Hou et al. hou2018audio + NOISEX-92 for different SNRs are presented in Tables 4, 5 and 6 respectively. This variety of datasets covers speaker- and noise-independent criteria, a large vocabulary corpus, as well as a language-independent scenario. It is to be noted that the model trained on the Grid + CHiME 3 corpus is used for evaluation. It can be seen that, at low SNRs, AV CochleaNet and A-only CochleaNet outperformed the SS boll1979spectral , LMMSE ephraim1985speech , and SEGAN pascual2017segan based SE methods. In addition, AV CochleaNet performs better than A-only CochleaNet especially at low SNRs (-12 to -6 dB), where the AV CochleaNet model achieved PESQ scores of 1.97, 2.16, and 2.33 at SNR levels of -12 dB, -9 dB, and -6 dB respectively, as compared to the 1.84, 2.04, and 2.24 achieved by the A-only CochleaNet model on the Grid + CHiME 3 speaker-independent test set. At high SNRs (0 to 6 dB), AV only slightly outperformed the A-only mask estimation model, with AV CochleaNet achieving PESQ scores of 2.58, 2.69, and 2.79 at SNR levels of 0 dB, 3 dB, and 6 dB respectively, as compared to the 2.52, 2.63, and 2.73 achieved by the A-only CochleaNet model. The overall PESQ improvement over noisy audio is depicted in Figure 5, where AV CochleaNet outperformed A-only CochleaNet and achieved near optimal performance (close to the ideal IBM) on the Grid + CHiME 3 corpus.
| SNR (dB)          | -12  | -9   | -6   | -3   | 0    | 3    | 6    | 9    |
| AV + No Visuals   | 1.81 | 1.94 | 2.04 | 2.11 | 2.40 | 2.47 | 2.54 | 2.58 |
| AV + No Visuals   | 1.23 | 1.45 | 1.44 | 1.46 | 1.66 | 1.68 | 1.74 | 1.75 |
5.4.2 Short Term Objective Intelligibility (STOI) comparison
STOI is another benchmark objective evaluation metric for speech intelligibility that shows high correlation with subjective listening test scores stoi . STOI computes the correlation of short-time temporal envelopes between the clean and modified speech, with values ranging from 0 to 1; a higher value indicates better intelligibility. The STOI scores for A-only and AV CochleaNet, SEGAN, SS, and LMMSE on Grid + CHiME 3, TCD-TIMIT + MUSAN and Hou et al. hou2018audio + NOISEX-92 for different SNRs are presented in Figure 6. It can be seen that, at low SNRs, AV CochleaNet and A-only CochleaNet outperformed the SS boll1979spectral , LMMSE ephraim1985speech , and SEGAN pascual2017segan based SE methods. In addition, the AV model performs better than the A-only model especially at low SNRs (-12 to -6 dB), where the AV CochleaNet model achieved STOI scores of 0.521, 0.560, and 0.607 at SNR levels of -12 dB, -9 dB, and -6 dB respectively, as compared to the 0.483, 0.513, and 0.544 achieved by the A-only CochleaNet model on the Hou et al. hou2018audio + NOISEX-92 language-independent test set. At high SNRs (0 to 6 dB), AV slightly outperformed the A-only mask estimation model, with AV CochleaNet achieving STOI scores of 0.719, 0.739, and 0.776 at SNR levels of 0 dB, 3 dB, and 6 dB respectively, as compared to the 0.665, 0.701, and 0.752 achieved by the A-only CochleaNet model on the same test set.
5.4.3 Scale-Invariant Signal-to-Distortion Ratio (SI-SDR) comparison
SI-SDR sisdr is a slightly modified, scale-invariant version of the signal-to-distortion ratio (SDR). SDR is one of the standard speech separation evaluation metrics; it measures the amount of distortion introduced in the separated signal and is defined as the ratio between the clean signal energy and the distortion energy. Higher SDR values indicate better speech separation performance. The SI-SDR scores for A-only and AV CochleaNet, SEGAN, SS, and LMMSE on Grid + CHiME 3, TCD-TIMIT + MUSAN and Hou et al. hou2018audio + NOISEX-92 for different SNRs are presented in Figure 7. It can be seen that, at low SNRs, AV CochleaNet and A-only CochleaNet outperformed the SS boll1979spectral , LMMSE ephraim1985speech , and SEGAN pascual2017segan based SE methods. In addition, the AV model performs better than the A-only mask estimation model especially at low SNRs (-12 to -6 dB), where the AV CochleaNet model achieved SI-SDR scores of 3.62, 4.80, and 5.41 at SNR levels of -12 dB, -9 dB, and -6 dB respectively, as compared to the 3.04, 4.41, and 5.29 achieved by the A-only CochleaNet model on the TCD-TIMIT + MUSAN speaker-independent, large vocabulary test set. At high SNRs (0 to 6 dB), AV slightly outperformed the A-only mask estimation model, with AV CochleaNet achieving SI-SDR scores of 7.77, 8.64, and 9.31 at SNR levels of 0 dB, 3 dB, and 6 dB respectively, as compared to the 7.76, 8.62, and 9.27 achieved by the A-only CochleaNet model on the same test set.
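SI-SDR achieves its scale invariance by projecting the estimate onto the reference before measuring the residual energy, so rescaling the estimate does not change the score. A minimal sketch of the standard computation:

```python
import numpy as np

def si_sdr(estimate, reference):
    # project the estimate onto the reference to find the optimal scaling
    alpha = np.dot(estimate, reference) / np.dot(reference, reference)
    target = alpha * reference
    noise = estimate - target
    return 10 * np.log10(np.sum(target ** 2) / np.sum(noise ** 2))
```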
Figure 9 presents the noisy and clean spectrograms, along with the spectrograms of the speech reconstructed by SS, LMMSE, SEGAN+, A-only CochleaNet, AV CochleaNet and the oracle IBM, for a random utterance from the Grid + CHiME 3 AV corpus. It is to be noted that, although the speech is completely swamped by background noise, the CochleaNet models produce spectrograms close to the oracle IBM.
5.5 Subjective testing on ASPIRE Corpus
In the literature, a significant number of objective metrics pesq ; stoi ; sisdr have been proposed to computationally approximate subjective listening tests. However, the only way to quantify subjective quality is to ask listeners for their opinions. We used the MUSHRA-style recommendation20011534 listening test method for subjective evaluation, using enhanced speech from the real noisy ASPIRE corpus (section 3). A total of 20 native English speakers with normal hearing participated in the listening test. Each individual test consists of 20 randomly selected utterances drawn from the ASPIRE corpus. The first two screens were used to train participants to adjust the volume and to familiarise themselves with the screen and the task. In each screen, the participants were asked to score the quality of each audio sample generated by each SE model for the same sentence, on a scale from 0 to 100. The range from 80 to 100 is described as "excellent", from 60 to 80 as "good", from 40 to 60 as "fair", from 20 to 40 as "poor", and from 0 to 20 as "bad". Noisy speech was included in the test so that participants would have a reference for the degraded speech, as well as to check that participants attentively went through the material.
The time required to complete each screen was also recorded and used for removing any outliers. We evaluated five SE models: SEGAN, SS, LMMSE, A-only CochleaNet and AV CochleaNet. Figure 8 shows a boxplot of the listeners' responses in terms of the rank order of systems for the ASPIRE corpus. The listening test results show the superior performance of our AV CochleaNet over A-only CochleaNet and the SEGAN, SS, and LMMSE based SE methods. The results demonstrate the capability of CochleaNet to deal with the reverberation caused by the multiple competing background sources observed in real-world noisy environments by exploiting the audio and visual cues. In addition, the results show that an AV model trained on synthetic additive mixtures generalises well to a real noisy corpus.
5.6 Additional Analysis
Effect of occluded visual information
The model is trained and evaluated on a professionally recorded corpus in which none of the visual frames contain occluded lip images (except a small number of Grid corpus utterances where the visuals are absent). However, in real life scenarios, specifically when the source and the target are non-stationary, the model needs to be robust against missing visual information. Therefore, to experimentally evaluate the trained AV CochleaNet's behaviour in such conditions, we randomly replaced a percentage of lip images with blank visual frames. The results for lip occlusion are depicted in Figure 10. It can be seen that, for both -9 dB and -12 dB, as the visual occlusion increases, the PESQ score initially remains constant and, after 20% occlusion, starts decreasing linearly. It is worth mentioning that the AV model performs similarly to the A-only model when the visuals are completely absent, even though the model has not encountered such a situation during training.
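This occlusion experiment can be sketched as follows, replacing a random fraction of lip frames with blank (all-zero) frames; the function name and the fixed seed are our choices for illustration:

```python
import numpy as np

def occlude_frames(lip_frames, fraction, rng=None):
    # lip_frames: (T, H, W); blank out a random `fraction` of the frames
    if rng is None:
        rng = np.random.default_rng(0)
    frames = lip_frames.copy()
    n_occluded = int(fraction * len(frames))
    idx = rng.choice(len(frames), size=n_occluded, replace=False)
    frames[idx] = 0
    return frames
```

The occluded sequences can then be fed to the trained model and the PESQ of the enhanced output measured as a function of the occlusion percentage.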
Phoneme level comparison of audio-only and audio-visual CochleaNet
It is well known in the literature that visual information helps disambiguate phonological ambiguity. In addition, some phonemes such as /p/ are visually distinguishable, while phonemes such as /g/ are not. However, the relationship between visually distinguishable phonemes and AV SE performance is not known. Therefore, we conducted comparative listening tests with 3 listeners and 1000 random enhanced utterances from the Grid-CHiME 3 speaker-independent test set, to empirically identify whether there is a relation between the visually distinguishable phonemes and the phonemes that AV CochleaNet enhances better than A-only CochleaNet. The listening tests reveal that the AV model enhanced the /r/, /p/, /l/, /w/, /EH1/, /AE1/, /IY1/, /EY1/, /AA1/ and /OW1/ phonemes better than the A-only model, while the AV performance on phonemes such as /h/, /g/ and /k/ was similar to the A-only performance. This confirms the hypothesis that there is a direct relation between visually distinguishable phonemes and the phonemes on which the AV model performs better.
Comparison of audio-only and audio-visual CochleaNet in silent speech regions
The superior performance of AV CochleaNet over A-only CochleaNet could be because visual cues, specifically closed lips, give the AV model extra information in silent speech regions. In order to verify this hypothesis, we calculated the mean squared error (MSE) between the predicted masks and the IBM in the silent speech regions. The A-only model achieved an MSE of 0.0123, compared to an MSE of 0.0108 for the AV model. This confirms the aforementioned hypothesis; however, further analysis is needed to visualise the convolutional receptive fields and to check whether a particular part of the model is active when the speaker is silent. Figure 11 presents the noisy spectrogram and the spectrograms of a random utterance from the TCD-TIMIT corpus reconstructed using SS, LMMSE, SEGAN+, A-only CochleaNet, and AV CochleaNet. It can be seen that the speech is completely swamped by background noise, and that A-only and AV CochleaNet managed to suppress the noise-dominant regions and enhance the speech-dominant regions better than SS, LMMSE and SEGAN+. In the silent speech regions, AV CochleaNet outperformed A-only CochleaNet.
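The silent-region MSE computation can be sketched as follows; the toy masks and the frame-level silence flags below are hypothetical, and the (frequency, time) layout is an assumption for illustration:

```python
import numpy as np

def silent_region_mse(pred_mask, ibm, silence_flags):
    """MSE between predicted mask and IBM over silent speech frames only.
    Arrays are (freq, time); silence_flags marks silent time frames."""
    silent = np.asarray(silence_flags, dtype=bool)
    diff = pred_mask[:, silent] - ibm[:, silent]
    return float(np.mean(diff ** 2))

# Toy example with hypothetical masks: time frames 1 and 2 are silent,
# so the IBM is all-zero there and any predicted energy is penalised.
ibm = np.array([[1.0, 0.0, 0.0],
                [0.0, 0.0, 0.0]])
pred = np.array([[0.9, 0.1, 0.0],
                 [0.1, 0.0, 0.2]])
silence = [0, 1, 1]
print(silent_region_mse(pred, ibm, silence))
```

A lower value indicates that the model keeps its mask closer to zero when the speaker is silent, which is the behaviour the AV model exhibits above.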
The main limitations of the proposed work are that: (1) IBM-based SE ignores the phase spectrum, which leads to the invalid STFT problem pandey2018new ; (2) the model cannot separate overlapping speech when more than one speaker is speaking simultaneously, as it is not trained with such mixed AV corpora; (3) the ASPIRE corpus consists of only three speakers recorded in controlled real noisy environments with a stationary speaker-listener setting, and more challenging non-stationary real noisy corpora are required to assess the robustness of the model; (4) the proposed model works only on single-channel audio and cannot exploit the binaural nature of speech we experience every day; (5) the major bottlenecks in deploying the proposed mask estimation based model in listening devices such as hearing aids and cochlear implants are data privacy concerns, high processing power requirements, and processing latency.
This paper presented a causal, language-, noise- and speaker-independent AV DNN model for SE that contextually exploits audio and visual cues, independent of the SNR, to estimate the spectral IBM and enhance speech. In addition, we presented a novel AV corpus, ASPIRE (the corpus, enhanced speech samples, and additional supplementary material are available on the project website: https://cochleanet.github.io), consisting of speech recorded in real noisy environments such as a cafeteria and a restaurant, to evaluate the proposed model. The corpus can be used as a resource by the speech community to evaluate AV SE models. We performed extensive experiments taking the noise-, speaker- and language-independent criteria into consideration. The performance evaluation in terms of objective metrics (PESQ, SI-SDR, and ESTOI) and subjective MUSHRA listening tests revealed significant improvement of our proposed AV CochleaNet as compared to the A-only CochleaNet, state-of-the-art SE approaches (including SS and LMMSE), as well as DNN based SE approaches (including SEGAN). The simulation results validated the phenomenon that visual cues are more effective at low SNRs and less effective at high SNRs. The visual occlusion study shows that the model performance initially remains constant until 20% of the visuals are removed, after which it decreases linearly as the number of occluded frames increases. The empirical study to identify the role visual cues play in the superior performance of the AV model over the A-only model shows that there is a high correlation between visually distinguishable phonemes and the AV model performance. Moreover, the study shows that the AV model significantly outperforms the A-only model in silent speech regions, because it is easier to audio-visually distinguish whether a speaker is speaking than to do so using audio input alone.
In the future, we intend to investigate the generalisation capability of our proposed DNN model on other, more challenging, conversational real noisy AV corpora. Ongoing and future work also addresses the real-time implementation challenges and privacy concerns of multimodal AV hearing aids.
This work was supported by the Edinburgh Napier University Research Studentship and UK Engineering and Physical Sciences Research Council (EPSRC) Grant No. EP/M026981/1. The authors would also like to acknowledge Dr Ricard Marxer and Prof Jon Barker from the University of Sheffield. Finally, we would like to acknowledge all the participants and support staff involved in the collection of the ASPIRE corpus.
- (1) E. Z. Golumbic, G. B. Cogan, C. E. Schroeder, D. Poeppel, Visual input enhances selective speech envelope tracking in auditory cortex at a “cocktail party”, Journal of Neuroscience 33 (4) (2013) 1417–1426.
- (2) Q. Summerfield, Lipreading and audio-visual speech perception, Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences 335 (1273) (1992) 71–78.
- (3) K. W. Grant, P.-F. Seitz, The use of visible speech cues for improving auditory detection of spoken sentences, The Journal of the Acoustical Society of America 108 (3) (2000) 1197–1208.
- (4) K. W. Grant, S. Greenberg, Speech intelligibility derived from asynchronous processing of auditory-visual information, in: AVSP 2001-International Conference on Auditory-Visual Speech Processing, 2001.
- (5) A. Narayanan, D. Wang, Investigation of speech separation as a front-end for noise robust speech recognition, IEEE/ACM Transactions on Audio, Speech, and Language Processing 22 (4) (2014) 826–835.
- (6) H. Kayser, C. Spille, D. Marquardt, B. T. Meyer, Improving automatic speech recognition in spatially-aware hearing aids, in: Sixteenth Annual Conference of the International Speech Communication Association, 2015.
- (7) D. Wang, G. J. Brown, Fundamentals of computational auditory scene analysis, in: Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, Wiley-IEEE Press, 2006.
- (8) J. Chen, D. Wang, Dnn based mask estimation for supervised speech separation, in: Audio source separation, Springer, 2018, pp. 207–235.
- (9) U. Kjems, M. S. Pedersen, J. B. Boldt, T. Lunner, D. Wang, Speech intelligibility of ideal binary masked mixtures, in: 2010 18th European Signal Processing Conference, IEEE, 2010, pp. 1909–1913.
- (10) M. Ahmadi, V. L. Gross, D. G. Sinex, Perceptual learning for speech in noise after application of binary time-frequency masks, The Journal of the Acoustical Society of America 133 (3) (2013) 1687–1692.
- (11) D. Wang, U. Kjems, M. S. Pedersen, J. B. Boldt, T. Lunner, Speech intelligibility in background noise with ideal binary time-frequency masking, The Journal of the Acoustical Society of America 125 (4) (2009) 2336–2347.
- (12) A. Ephrat, I. Mosseri, O. Lang, T. Dekel, K. Wilson, A. Hassidim, W. T. Freeman, M. Rubinstein, Looking to listen at the cocktail party: a speaker-independent audio-visual model for speech separation, ACM Transactions on Graphics (TOG) 37 (4) (2018) 112.
- (13) M. Gogate, A. Adeel, R. Marxer, J. Barker, A. Hussain, Dnn driven speaker independent audio-visual mask estimation for speech separation, Proc. Interspeech 2018 (2018) 2723–2727.
- (14) A. Gabbay, A. Shamir, S. Peleg, Visual speech enhancement, in: Interspeech, ISCA, 2018, pp. 1170–1174.
- (15) J.-C. Hou, S.-S. Wang, Y.-H. Lai, Y. Tsao, H.-W. Chang, H.-M. Wang, Audio-visual speech enhancement using multimodal deep convolutional neural networks, IEEE Transactions on Emerging Topics in Computational Intelligence 2 (2) (2018) 117–128.
- (16) A. Adeel, M. Gogate, A. Hussain, Towards next-generation lip-reading driven hearing-aids: A preliminary prototype demo, in: International Workshop on Challenges in Hearing Assistive Technology (CHAT-2017), Stockholm University, August 19th, Collocated with Interspeech 2017, 2017.
- (17) A. Adeel, M. Gogate, A. Hussain, W. M. Whitmer, Lip-reading driven deep learning approach for speech enhancement, arXiv preprint arXiv:1808.00046.
- (18) A. Adeel, M. Gogate, A. Hussain, Contextual deep learning-based audio-visual switching for speech enhancement in real-world environments, Information Fusion.
- (19) D. Rethage, J. Pons, X. Serra, A wavenet for speech denoising, in: 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2018, pp. 5069–5073.
- (20) A. Pandey, D. Wang, A new framework for supervised speech enhancement in the time domain., in: Interspeech, 2018, pp. 1136–1140.
- (21) Y. Luo, N. Mesgarani, Conv-tasnet: Surpassing ideal time–frequency magnitude masking for speech separation, IEEE/ACM Transactions on Audio, Speech, and Language Processing 27 (8) (2019) 1256–1266.
- (22) J. Barker, R. Marxer, E. Vincent, S. Watanabe, The third ‘CHiME’ speech separation and recognition challenge: Dataset, task and baselines, in: Automatic Speech Recognition and Understanding (ASRU), 2015 IEEE Workshop on, IEEE, 2015, pp. 504–511.
- (23) S. Pascual, M. Park, J. Serrà, A. Bonafonte, K.-H. Ahn, Language and noise transfer in speech enhancement generative adversarial network, in: 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2018, pp. 5019–5023.
- (24) M. Cooke, J. Barker, S. Cunningham, X. Shao, An audio-visual corpus for speech perception and automatic speech recognition, The Journal of the Acoustical Society of America 120 (5) (2006) 2421–2424.
- (25) N. Harte, E. Gillen, Tcd-timit: An audio-visual corpus of continuous speech, IEEE Transactions on Multimedia 17 (5) (2015) 603–615. doi:10.1109/TMM.2015.2407694.
- (26) A. Varga, H. J. Steeneken, Assessment for automatic speech recognition: Ii. noisex-92: A database and an experiment to study the effect of additive noise on speech recognition systems, Speech communication 12 (3) (1993) 247–251.
- (27) D. Snyder, G. Chen, D. Povey, Musan: A music, speech, and noise corpus, arXiv preprint arXiv:1510.08484.
- (28) A. Owens, A. A. Efros, Audio-visual scene analysis with self-supervised multisensory features, in: Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 631–648.
- (29) T. Afouras, J. S. Chung, A. Senior, O. Vinyals, A. Zisserman, Deep audio-visual speech recognition, IEEE transactions on pattern analysis and machine intelligence.
- (30) H. Zhao, C. Gan, A. Rouditchenko, C. Vondrick, J. McDermott, A. Torralba, The sound of pixels, in: The European Conference on Computer Vision (ECCV), 2018.
- (31) J. R. Hershey, Z. Chen, J. Le Roux, S. Watanabe, Deep clustering: Discriminative embeddings for segmentation and separation, in: 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2016, pp. 31–35.
- (32) D. Griffin, J. Lim, Signal estimation from modified short-time fourier transform, IEEE Transactions on Acoustics, Speech, and Signal Processing 32 (2) (1984) 236–243.
- (33) D. Yu, M. Kolbæk, Z.-H. Tan, J. Jensen, Permutation invariant training of deep models for speaker-independent multi-talker speech separation, in: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2017, pp. 241–245.
- (34) Y. Wang, A. Narayanan, D. Wang, On training targets for supervised speech separation, IEEE/ACM transactions on audio, speech, and language processing 22 (12) (2014) 1849–1858.
- (35) D. Wang, J. Chen, Supervised speech separation based on deep learning: An overview, IEEE/ACM Transactions on Audio, Speech, and Language Processing 26 (10) (2018) 1702–1726.
- (36) F. Developers, ffmpeg tool [software], http://ffmpeg.org/ (2000–2019).
- (37) D. E. King, Dlib-ml: A machine learning toolkit, Journal of Machine Learning Research 10 (2009) 1755–1758.
- (38) D. P. Kingma, J. Ba, Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980.
- (39) A. W. Rix, J. G. Beerends, M. P. Hollier, A. P. Hekstra, Perceptual evaluation of speech quality (pesq)-a new method for speech quality assessment of telephone networks and codecs, in: 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings (Cat. No. 01CH37221), Vol. 2, IEEE, 2001, pp. 749–752.
- (40) Y. Hu, P. C. Loizou, Evaluation of objective quality measures for speech enhancement, IEEE Transactions on audio, speech, and language processing 16 (1) (2007) 229–238.
- (41) S. Boll, A spectral subtraction algorithm for suppression of acoustic noise in speech, in: Acoustics, Speech, and Signal Processing, IEEE International Conference on ICASSP’79., Vol. 4, IEEE, 1979, pp. 200–203.
- (42) Y. Ephraim, D. Malah, Speech enhancement using a minimum mean-square error log-spectral amplitude estimator, IEEE transactions on acoustics, speech, and signal processing 33 (2) (1985) 443–445.
- (43) S. Pascual, A. Bonafonte, J. Serrà, Segan: Speech enhancement generative adversarial network, Proc. Interspeech 2017 (2017) 3642–3646.
- (44) C. H. Taal, R. C. Hendriks, R. Heusdens, J. Jensen, An algorithm for intelligibility prediction of time–frequency weighted noisy speech, IEEE Transactions on Audio, Speech, and Language Processing 19 (7) (2011) 2125–2136.
- (45) J. Le Roux, S. Wisdom, H. Erdogan, J. R. Hershey, Sdr–half-baked or well done?, in: ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2019, pp. 626–630.
- (46) ITU-R, Recommendation BS.1534-1, “Method for the subjective assessment of intermediate sound quality (MUSHRA)”, International Telecommunication Union, Geneva, Switzerland.