When compressive learning fails: blame the decoder or the sketch?

09/14/2020
by Vincent Schellekens et al.

In compressive learning, a mixture model (a set of centroids or a Gaussian mixture) is learned from a sketch vector, which serves as a highly compressed representation of the dataset. This requires solving a non-convex optimization problem, so in practice approximate heuristics (such as CLOMPR) are used. In this work we explore, through numerical simulations, the properties of this non-convex optimization landscape and of these heuristics.
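To make the setup concrete, here is a minimal, self-contained sketch of compressive k-means in Python. It assumes a random Fourier feature sketching operator (as in the random feature moments framework) and, instead of CLOMPR itself, decodes the sketch with a crude multi-restart local search; all variable names (sketch, Omega, loss, etc.) are illustrative choices, not the authors' code.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
d, n, m, k = 2, 10_000, 100, 3   # data dimension, #samples, sketch size, #centroids

# Synthetic dataset: a mixture of k well-separated Gaussian clusters.
true_centroids = rng.normal(scale=3.0, size=(k, d))
labels = rng.integers(k, size=n)
X = true_centroids[labels] + rng.normal(scale=0.3, size=(n, d))

# Sketching operator: the empirical characteristic function of the data,
# sampled at m random frequencies (random Fourier features).
Omega = rng.normal(size=(m, d))

def sketch(points, weights=None):
    feats = np.exp(1j * points @ Omega.T)            # (n_pts, m) complex features
    return feats.mean(axis=0) if weights is None else weights @ feats

z = sketch(X)   # the whole dataset compressed to m complex numbers

# "Decoder": recover centroids by minimizing the sketch mismatch
# || z - sketch(uniform mixture of k Diracs) ||^2. This objective is
# non-convex; we take the best of several random restarts as a stand-in
# for CLOMPR (an assumption of this sketch, not the paper's algorithm).
def loss(theta):
    C = theta.reshape(k, d)
    return np.sum(np.abs(z - sketch(C, np.full(k, 1.0 / k))) ** 2)

best = min(
    (minimize(loss, rng.normal(scale=3.0, size=k * d)) for _ in range(10)),
    key=lambda r: r.fun,
)
print(best.x.reshape(k, d))   # estimated centroids, up to permutation

The multiple restarts are exactly where the non-convexity bites: different initializations can land in different local minima of the sketch-mismatch objective, which is the kind of landscape behavior the paper probes numerically.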


Related research

12/04/2018 - Compressive Classification (Machine Learning without learning)
Compressive learning is a framework where (so far unsupervised) learning...

06/22/2017 - Compressive Statistical Learning with Random Feature Moments
We describe a general framework --compressive statistical learning-- for...

10/30/2021 - Optimizing Binary Symptom Checkers via Approximate Message Passing
Symptom checkers have been widely adopted as an intelligent e-healthcare...

06/13/2021 - An Extended Multi-Model Regression Approach for Compressive Strength Prediction and Optimization of a Concrete Mixture
Due to the significant delay and cost associated with experimental tests...

04/20/2021 - Asymmetric compressive learning guarantees with applications to quantized sketches
The compressive learning framework reduces the computational cost of tra...

06/09/2016 - Sketching for Large-Scale Learning of Mixture Models
Learning parameters from voluminous data can be prohibitive in terms of ...

09/09/2020 - Meta-learning for Multi-variable Non-convex Optimization Problems: Iterating Non-optimums Makes Optimum Possible
In this paper, we aim to address the problem of solving a non-convex opt...
