Generative Adversarial Source Separation

10/30/2017
by Cem Subakan, et al.

Generative source separation methods, such as non-negative matrix factorization (NMF) or auto-encoders, rely on an assumed output probability density. Generative Adversarial Networks (GANs) can learn data distributions without a parametric assumption on the output density. We show in a speech source separation experiment that a multi-layer perceptron trained with a Wasserstein-GAN formulation outperforms NMF, auto-encoders trained with maximum likelihood, and variational auto-encoders in terms of source-to-distortion ratio.
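As a point of reference for the NMF baseline the abstract compares against, here is a minimal sketch of NMF with multiplicative updates in NumPy. The function name, rank, and iteration count are illustrative assumptions, not the paper's implementation; in the source separation setting, V would be a magnitude spectrogram and the learned basis W a per-source dictionary.

```python
import numpy as np

def nmf(V, rank, n_iter=200, seed=0):
    """Factorize a non-negative matrix V ~ W @ H using multiplicative
    updates for the Euclidean objective ||V - WH||^2.
    Illustrative sketch, not the paper's implementation."""
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, rank)) + 1e-3   # non-negative basis (dictionary)
    H = rng.random((rank, T)) + 1e-3   # non-negative activations
    eps = 1e-9                          # guard against division by zero
    for _ in range(n_iter):
        # Lee-Seung multiplicative updates preserve non-negativity
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

For separation, one typically learns a basis W per source on isolated training spectrograms, concatenates the bases, fits only H on the mixture, and reconstructs each source from its slice of W and H.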


