Solving Inverse Problems with Conditional-GAN Prior via Fast Network-Projected Gradient Descent

09/02/2021
by Muhammad Fadli Damara et al.

The projected gradient descent (PGD) method has been shown to be effective in recovering compressed signals described in a data-driven way by a generative model, i.e., a generator that has learned the data distribution. Further reconstruction improvements for such inverse problems can be achieved by conditioning the generator on the measurement. The boundary equilibrium generative adversarial network (BEGAN) implements an equilibrium-based loss function and an auto-encoding discriminator to better balance the performance of the generator and the discriminator. In this work we investigate a network-based projected gradient descent (NPGD) algorithm for measurement-conditional generative models that solves the inverse problem much faster than regular PGD. We combine NPGD with conditional GAN/BEGAN to evaluate their effectiveness in solving compressed sensing type problems. Our experiments on the MNIST and CelebA datasets show that the combination of a measurement-conditional model with NPGD recovers the compressed signal well, achieving similar or in some cases even better reconstruction quality while being much faster. The reconstruction speed-up achieved in our experiments is up to 140-175x.
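The NPGD iteration described above alternates a gradient step on the measurement loss with a projection onto the range of the generator, where the projection is approximated by a network rather than solved by inner optimization. The following sketch illustrates the iteration structure only, under simplifying assumptions: a toy linear "generator" whose range is a low-dimensional subspace, and a least-squares projector standing in for the learned projection network of the paper. All names (`G`, `A`, `P`, `eta`) are illustrative, not from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 64, 32, 4                           # ambient dim, measurements, latent dim
G = rng.standard_normal((n, k))               # toy linear "generator": z -> G @ z
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random Gaussian measurement matrix

# Stand-in projector onto the generator's range. In NPGD this projection is
# approximated by a learned network (e.g., encoder followed by generator);
# here a closed-form least-squares projection plays that role.
P = G @ np.linalg.pinv(G)

x_true = G @ rng.standard_normal(k)           # signal lying in the generator's range
y = A @ x_true                                # compressed measurements, m < n

x = np.zeros(n)
eta = 0.5                                     # gradient step size
for _ in range(200):
    # NPGD step: gradient descent on ||y - A x||^2, then project onto the range
    x = P @ (x + eta * A.T @ (y - A @ x))

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Because the projection here is exact and cheap, the loop converges to the true signal; the paper's point is that replacing an expensive inner projection with a single network forward pass preserves this structure while yielding the reported speed-up.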


Related research:
- GAN-based Projector for Faster Recovery in Compressed Sensing with Convergence Guarantees (02/26/2019)
- Provable Compressed Sensing with Generative Priors via Langevin Dynamics (02/25/2021)
- Using auto-encoders for solving ill-posed linear inverse problems (01/15/2019)
- Regularized Training of Intermediate Layers for Generative Models for Inverse Problems (03/08/2022)
- Solution of physics-based inverse problems using conditional generative adversarial networks with full gradient penalty (06/08/2023)
- Global Convergence to the Equilibrium of GANs using Variational Inequalities (08/04/2018)
- Bilevel learning of l1-regularizers with closed-form gradients (BLORC) (11/21/2021)
