Compressive Learning of Generative Networks

02/12/2020
by Vincent Schellekens, et al.

Generative networks implicitly approximate complex densities from their samples with impressive accuracy. However, because of the enormous scale of modern datasets, this training process is often computationally expensive. We cast generative network training into the recent framework of compressive learning: we reduce the computational burden of large-scale datasets by first drastically compressing them, in a single pass, into a single sketch vector. We then propose a cost function that approximates the Maximum Mean Discrepancy (MMD) metric but requires only this sketch, which makes it time- and memory-efficient to optimize.
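In compressive learning, the sketch is typically the average of random Fourier features of the data, and the squared distance between the sketch of generated samples and the dataset sketch approximates (up to a constant factor) the squared MMD under the corresponding shift-invariant kernel. The following is a minimal NumPy illustration of that idea, not the paper's code: the function names (sketch, sketch_loss), the Gaussian frequency draw, and the stand-in data and generator output are assumptions made for the example.

```python
import numpy as np

def sketch(X, W, b):
    """Average random Fourier features of X (one pass over the data,
    O(m) memory). X: (n, d) samples, W: (d, m) random frequencies,
    b: (m,) random phases. Returns the m-dimensional sketch vector."""
    return np.cos(X @ W + b).mean(axis=0)

def sketch_loss(z_data, X_gen, W, b):
    """Squared distance between sketches; with random Fourier features
    this approximates the squared MMD (up to a constant factor) between
    the data distribution and the generator's distribution."""
    return np.sum((sketch(X_gen, W, b) - z_data) ** 2)

rng = np.random.default_rng(0)
d, m, n = 2, 512, 10_000
W = rng.normal(scale=1.0, size=(d, m))        # frequencies ~ Gaussian kernel
b = rng.uniform(0, 2 * np.pi, size=m)         # random phases

X = rng.normal(loc=1.0, size=(n, d))          # stand-in "dataset"
z = sketch(X, W, b)                           # computed once; X can then be discarded

X_fake = rng.normal(loc=0.0, size=(1000, d))  # stand-in generator output
print(sketch_loss(z, X_fake, W, b))           # shrinks as the generator matches X
```

In an actual training loop, the gradient of this loss with respect to the generator's parameters would be taken with an autodiff framework; only the m-dimensional dataset sketch needs to be kept in memory.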


