Stable Distribution Alignment Using the Dual of the Adversarial Distance

07/13/2017
by Ben Usman, et al.

Methods that align distributions by minimizing an adversarial distance between them have recently achieved impressive results. However, these approaches are difficult to optimize with gradient descent and they often do not converge well without careful hyperparameter tuning and proper initialization. We investigate whether turning the adversarial min-max problem into an optimization problem by replacing the maximization part with its dual improves the quality of the resulting alignment and explore its connections to Maximum Mean Discrepancy. Our empirical results suggest that using the dual formulation for the restricted family of linear discriminators results in a more stable convergence to a desirable solution when compared with the performance of a primal min-max GAN-like objective and an MMD objective under the same restrictions. We test our hypothesis on the problem of aligning two synthetic point clouds on a plane and on a real-image domain adaptation problem on digits. In both cases, the dual formulation yields an iterative procedure that gives more stable and monotonic improvement over time.
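The restriction to linear discriminators mentioned in the abstract admits a simple closed form: for discriminators f(x) = w·x with ‖w‖₂ ≤ 1, the inner maximization of the adversarial distance reduces, by Cauchy–Schwarz duality, to the Euclidean distance between the two sample means, so the min-max problem collapses to an ordinary minimization. A minimal sketch of this idea (not the paper's implementation; the data and variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=0.0, size=(500, 2))  # "source" point cloud
Y = rng.normal(loc=3.0, size=(500, 2))  # "target" point cloud

# Adversarial distance restricted to linear discriminators f(x) = w.x, ||w||_2 <= 1:
#   max_{||w|| <= 1}  E_X[w.x] - E_Y[w.x]
# Its dual (closed form via Cauchy-Schwarz) is || mean(X) - mean(Y) ||_2,
# so no inner gradient ascent over the discriminator is needed.
diff = X.mean(axis=0) - Y.mean(axis=0)
dual_value = np.linalg.norm(diff)

# Sanity check: evaluate the primal objective at its maximizer w* = diff / ||diff||.
w_star = diff / np.linalg.norm(diff)
primal_value = X.mean(axis=0) @ w_star - Y.mean(axis=0) @ w_star
assert np.isclose(dual_value, primal_value)
```

This closed form also matches MMD with a linear kernel up to the choice of norm, which is one way to see the connection to Maximum Mean Discrepancy explored in the paper.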

Related research:

- 07/11/2018, "The Limit Points of (Optimistic) Gradient Descent in Min-Max Optimization": Motivated by applications in Optimization, Game Theory, and the training...
- 07/05/2022, "Cooperative Distribution Alignment via JSD Upper Bound": Unsupervised distribution alignment estimates a transformation that maps...
- 11/04/2020, "On the Convergence of Gradient Descent in GANs: MMD GAN As a Gradient Flow": We consider the maximum mean discrepancy (MMD) GAN problem and propose a...
- 07/22/2019, "Stochastic Variance Reduced Primal Dual Algorithms for Empirical Composition Optimization": We consider a generic empirical composition optimization problem, where ...
- 04/22/2019, "Training generative networks using random discriminators": In recent years, Generative Adversarial Networks (GANs) have drawn a lot...
- 03/26/2020, "Log-Likelihood Ratio Minimizing Flows: Towards Robust and Quantifiable Neural Distribution Alignment": Unsupervised distribution alignment has many applications in deep learni...
