End-to-end Recurrent Denoising Autoencoder Embeddings for Speaker Identification

Speech 'in-the-wild' poses a challenge for speaker recognition systems due to the variability induced by real-life conditions, such as environmental noise and the speaker's emotional state. Taking advantage of representation learning, in this paper we design a recurrent denoising autoencoder architecture that extracts robust low-dimensional representations (speaker embeddings) from noisy spectrograms to perform speaker identification. The proposed end-to-end architecture uses a feedback loop to encode speaker information into a spectrogram denoising autoencoder. We apply data augmentation to corrupt clean speech with additive real-life environmental noise and employ a database of real stressed speech. The proposed architecture exploits the temporal sequences and frequency patterns in the spectrograms that inherently characterize the speaker, outperforming the compared architectures that rely on state-of-the-art speaker embeddings.
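
To make the described setup concrete, below is a minimal PyTorch sketch of a recurrent denoising autoencoder that produces speaker embeddings from noisy spectrograms. The layer sizes, the time-pooled bottleneck, the speaker classification head, and the joint reconstruction/identification loss are illustrative assumptions; the paper's exact feedback mechanism and hyperparameters are not reproduced here.

```python
# Illustrative sketch (not the paper's exact model): an LSTM encoder maps a noisy
# spectrogram to a low-dimensional embedding, an LSTM decoder reconstructs the
# clean spectrogram, and an assumed classification head ties the embedding to
# speaker identity.
import torch
import torch.nn as nn

class RecurrentDenoisingAE(nn.Module):
    def __init__(self, n_mels=80, hidden=256, emb_dim=128, n_speakers=100):
        super().__init__()
        self.encoder = nn.LSTM(n_mels, hidden, batch_first=True)
        self.to_embedding = nn.Linear(hidden, emb_dim)       # bottleneck: speaker embedding
        self.decoder = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.reconstruct = nn.Linear(hidden, n_mels)          # predicts clean spectrogram frames
        self.speaker_head = nn.Linear(emb_dim, n_speakers)    # assumed identification branch

    def forward(self, noisy_spec):                            # noisy_spec: (batch, frames, n_mels)
        enc_out, _ = self.encoder(noisy_spec)
        emb = self.to_embedding(enc_out.mean(dim=1))          # time-pooled embedding
        dec_in = emb.unsqueeze(1).expand(-1, noisy_spec.size(1), -1)
        dec_out, _ = self.decoder(dec_in)
        return self.reconstruct(dec_out), self.speaker_head(emb), emb

# Assumed joint objective: denoising reconstruction + speaker identification.
model = RecurrentDenoisingAE()
noisy = torch.randn(4, 200, 80)        # corrupted spectrograms (data augmentation)
clean = torch.randn(4, 200, 80)        # clean spectrogram targets
labels = torch.randint(0, 100, (4,))   # speaker identities
recon, logits, emb = model(noisy)
loss = nn.functional.mse_loss(recon, clean) + nn.functional.cross_entropy(logits, labels)
```

At inference time, only the encoder path would be needed: the pooled embedding `emb` serves as the robust speaker representation for identification.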
