Local Loss Optimization in Operator Models: A New Insight into Spectral Learning

06/27/2012
by Borja Balle, et al.

This paper revisits the spectral method for learning latent variable models defined in terms of observable operators. We give a new perspective on the method, showing that the operators can be recovered by minimizing a loss defined on a finite subset of the domain. From this perspective we derive a non-convex optimization similar to the spectral method, and we also propose a regularized convex relaxation of this optimization. We show that in practice the availability of a continuous regularization parameter (in contrast with the discrete number of states in the original method) allows a better trade-off between accuracy and model complexity. We also prove that, in general, a randomized strategy for choosing the local loss succeeds with high probability.
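To make the operator-recovery step concrete, below is a minimal NumPy sketch of the classical Hankel-matrix spectral method that the paper builds on, applied to a toy 2-state HMM over a binary alphabet. The toy model, the basis of short prefixes and suffixes, and the helper names (sample_string, p_hat) are illustrative assumptions, not the paper's exact construction; the closing lines only illustrate singular-value soft-thresholding (the proximal operator of the nuclear norm), the standard mechanism behind convex relaxations with a continuous regularization parameter tau.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

# Toy 2-state HMM over alphabet {0, 1}: transitions T, emissions O,
# initial distribution pi. All of this setup is illustrative.
T = np.array([[0.7, 0.3], [0.2, 0.8]])
O = np.array([[0.9, 0.1], [0.3, 0.7]])
pi = np.array([0.5, 0.5])

def sample_string(length):
    state, out = rng.choice(2, p=pi), []
    for _ in range(length):
        out.append(int(rng.choice(2, p=O[state])))
        state = rng.choice(2, p=T[state])
    return tuple(out)

# Empirical prefix probabilities estimated from sampled strings.
N = 200_000
counts = Counter(sample_string(4) for _ in range(N))

def p_hat(x):
    """Empirical probability that a sampled string starts with x."""
    return sum(c for s, c in counts.items() if s[:len(x)] == x) / N

# Hankel blocks H and H_a over a basis of prefixes/suffixes of length <= 1.
basis = [(), (0,), (1,)]
H = np.array([[p_hat(p + s) for s in basis] for p in basis])
Hs = [np.array([[p_hat(p + (a,) + s) for s in basis] for p in basis])
      for a in (0, 1)]

# Spectral method: a rank-n truncated SVD of H yields a factorization
# from which the observable operators are read off in closed form.
Uf, df, Vtf = np.linalg.svd(H)
n = 2
U, d, V = Uf[:, :n], df[:n], Vtf[:n].T
A = [np.diag(1.0 / d) @ U.T @ Ha @ V for Ha in Hs]   # A_a = D^-1 U^T H_a V
alpha1 = H[0] @ V                            # weights from the empty-prefix row
alphainf = np.diag(1.0 / d) @ U.T @ H[:, 0]  # weights from the empty-suffix column

def f(x):
    """Prefix probability predicted by the recovered operator model."""
    v = alpha1
    for a in x:
        v = v @ A[a]
    return float(v @ alphainf)

# Compare empirical and model-predicted prefix probabilities.
for x in [(0,), (1, 0), (0, 1, 1)]:
    print(x, round(p_hat(x), 4), round(f(x), 4))

# Convex-relaxation flavour: rather than fixing a discrete rank n, one can
# soft-threshold the singular values of H with a continuous parameter tau
# (the prox of tau * nuclear norm), trading accuracy against model
# complexity smoothly instead of in discrete rank steps.
tau = 1e-3
H_soft = (Uf * np.maximum(df - tau, 0.0)) @ Vtf
```

With exact probabilities and a complete basis, the recovered operators reproduce the rank-2 prefix-probability function exactly; with the sampled estimates above, the printed comparison is only approximate.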

