
Learning to Denoise Historical Music

by Yunpeng Li et al.

We propose an audio-to-audio neural network model that learns to denoise old music recordings. Our model internally converts its input into a time-frequency representation by means of a short-time Fourier transform (STFT), and processes the resulting complex spectrogram using a convolutional neural network. The network is trained with both reconstruction and adversarial objectives on a synthetic noisy music dataset, which is created by mixing clean music with real noise samples extracted from quiet segments of old recordings. We evaluate our method quantitatively on held-out test examples of the synthetic dataset, and qualitatively by human rating on samples of actual historical recordings. Our results show that the proposed method is effective in removing noise, while preserving the quality and details of the original music.
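The synthetic training set described above is built by mixing clean music with real noise samples at controlled levels. A minimal sketch of that mixing step is shown below; `mix_at_snr` is a hypothetical helper (not from the paper) that scales a noise excerpt to a target signal-to-noise ratio before adding it to the clean signal:

```python
import numpy as np

def mix_at_snr(clean, noise, snr_db, rng=None):
    """Mix clean music with a noise sample at a target SNR in dB.

    Illustrative only: approximates how noise extracted from quiet
    segments of old recordings could be combined with clean music
    to create synthetic noisy training examples.
    """
    rng = rng or np.random.default_rng(0)
    # Tile the noise if it is shorter than the clean signal,
    # then take a random crop of matching length.
    if len(noise) < len(clean):
        reps = int(np.ceil(len(clean) / len(noise)))
        noise = np.tile(noise, reps)
    start = rng.integers(0, len(noise) - len(clean) + 1)
    noise = noise[start:start + len(clean)]
    # Scale the noise so the mixture achieves the requested SNR.
    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    gain = np.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10)))
    return clean + gain * noise
```

Sampling the SNR (and the noise crop) randomly per example yields a varied dataset of (noisy, clean) pairs for the reconstruction and adversarial objectives.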

