Decentralized Learning of Generative Adversarial Networks from Multi-Client Non-iid Data

05/23/2019
by   Ryo Yonetani, et al.

This work addresses the new problem of learning generative adversarial networks (GANs) from multiple data collections that are each i) owned separately and privately by a different client and ii) drawn from a non-identical distribution comprising different classes. Given such multi-client, non-iid data as input, we aim to learn a distribution covering all of the classes the input data can belong to, while keeping the data decentralized and private in each client's storage. Our key contribution to this end is a new decentralized approach to learning GANs from non-iid data, called Forgiver-First Update (F2U), which a) asks each client to train an individual discriminator on its own data and b) updates the generator to fool the most 'forgiving' discriminators, i.e., those that deem generated samples the most real. Our theoretical analysis proves that this update strategy indeed allows the decentralized GAN to attain, as its global optimum under f-divergence minimization, a generator distribution covering all of the input classes. Moreover, we propose a relaxed version of F2U called Forgiver-First Aggregation (F2A), which adaptively aggregates the discriminators while emphasizing forgiving ones and performs well in practice. Our empirical evaluations on image-generation tasks demonstrate the effectiveness of our approach over state-of-the-art decentralized learning methods.
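The core mechanics described above can be sketched in a few lines: F2U drives the generator with the single most forgiving discriminator (the one assigning the highest realness score to a generated sample), while F2A replaces that hard maximum with a soft, adaptive weighting over all discriminators. The sketch below illustrates this under assumptions not stated in the abstract: the function names `f2u_select` and `f2a_aggregate` are hypothetical, and the softmax weighting in `f2a_aggregate` is one plausible way to "emphasize forgiving" discriminators, not necessarily the paper's exact aggregation rule.

```python
import math

def f2u_select(scores):
    """F2U (hard max): return the index of the most 'forgiving'
    discriminator, i.e., the one that assigns the highest realness
    score D_k(G(z)) to a generated sample."""
    return max(range(len(scores)), key=lambda k: scores[k])

def f2a_aggregate(scores, temperature=1.0):
    """F2A (soft relaxation): softmax-weighted average of the
    discriminator scores, so forgiving discriminators receive
    larger weight. Lower temperature approaches the F2U hard max."""
    weights = [math.exp(s / temperature) for s in scores]
    total = sum(weights)
    return sum((w / total) * s for w, s in zip(weights, scores))

# Realness scores from three clients' discriminators on one sample:
scores = [0.2, 0.9, 0.5]
print(f2u_select(scores))     # index of the most forgiving discriminator
print(f2a_aggregate(scores))  # soft score between the mean and the max
```

The generator's loss would then be computed against the selected discriminator's output (F2U) or the aggregated score (F2A), pulling its distribution toward regions some client's data supports.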


Related research:

- EFFGAN: Ensembles of fine-tuned federated GANs (06/23/2022)
- Beyond Inferring Class Representatives: User-Level Privacy Leakage From Federated Learning (12/03/2018)
- Face editing with GAN – A Review (07/12/2022)
- Annealing Genetic GAN for Minority Oversampling (08/05/2020)
- Training Federated GANs with Theoretical Guarantees: A Universal Aggregation Approach (02/09/2021)
- First Order Generative Adversarial Networks (02/13/2018)
