Conditional WaveGAN

09/27/2018
by Chae Young Lee, et al.

Generative models have been used successfully for image synthesis in recent years, but little progress has been made in other modalities such as audio and text. Recent works focus on generating audio from a generative model in an unsupervised setting. We explore the possibility of conditioning generative models on class labels. In this work we investigate concatenation-based conditioning and conditional scaling, together with various hyperparameter tuning methods. In this paper we introduce the Conditional WaveGAN (cWaveGAN). Our implementation is available at https://github.com/acheketa/cwavegan
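As a rough illustration of the two conditioning schemes named in the abstract (not the paper's actual architecture; the function names, shapes, and the fixed scale matrix below are assumptions for this sketch), concatenation-based conditioning appends a label encoding to the generator's latent input, while conditional scaling modulates hidden activations with a per-class scale vector:

```python
import numpy as np

def concat_conditioning(z, labels, n_classes):
    """Concatenation-based conditioning: append a one-hot label
    vector to the latent noise before the first generator layer."""
    onehot = np.eye(n_classes)[labels]          # (batch, n_classes)
    return np.concatenate([z, onehot], axis=1)  # (batch, z_dim + n_classes)

def conditional_scaling(h, labels, scales):
    """Conditional scaling: multiply hidden activations elementwise
    by a per-class scale vector (learned in practice; fixed here)."""
    return h * scales[labels]                   # broadcasts over features

# Usage sketch: a batch of 4 latent vectors with 4 class labels.
z = np.random.randn(4, 100)
y = np.array([0, 2, 1, 3])
z_cond = concat_conditioning(z, y, n_classes=4)     # shape (4, 104)

h = np.random.randn(4, 16)                          # fake hidden activations
scales = np.ones((4, 16))                           # stand-in learned scales
h_cond = conditional_scaling(h, y, scales)          # shape (4, 16)
```

In the concatenation scheme the label only enters at the input, whereas conditional scaling can be applied at every intermediate layer of the generator.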


