Local Loss Optimization in Operator Models: A New Insight into Spectral Learning

by Borja Balle, et al.

This paper revisits the spectral method for learning latent variable models defined in terms of observable operators. We give a new perspective on the method, showing that the operators can be recovered by minimizing a loss defined on a finite subset of the domain. This yields a non-convex optimization similar to the spectral method, for which we also propose a regularized convex relaxation. We show that in practice the availability of a continuous regularization parameter (in contrast with the discrete number of states in the original method) allows a better trade-off between accuracy and model complexity. We also prove that, in general, a randomized strategy for choosing the local loss succeeds with high probability.
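The classical spectral recovery that the paper takes as its starting point can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's algorithm: it assumes an exactly known low-rank weighted automaton (the values of `alpha`, `beta`, and the operator matrices `A` below are arbitrary illustrative choices) rather than Hankel statistics estimated from data, and it recovers the operators from a truncated SVD of the Hankel matrix on a small prefix/suffix basis.

```python
import numpy as np

# A hypothetical 2-state weighted automaton over {a, b} (illustrative values).
alpha = np.array([1.0, 0.0])                     # initial weights
beta = np.array([0.3, 0.6])                      # final weights
A = {"a": np.array([[0.5, 0.2], [0.1, 0.4]]),
     "b": np.array([[0.1, 0.3], [0.2, 0.1]])}

def f(x):
    """Weight the automaton assigns to string x."""
    v = alpha
    for c in x:
        v = v @ A[c]
    return float(v @ beta)

# Hankel sub-blocks on a finite basis of prefixes and suffixes.
prefixes = ["", "a", "b"]
suffixes = ["", "a", "b"]
H = np.array([[f(p + s) for s in suffixes] for p in prefixes])
Hs = {c: np.array([[f(p + c + s) for s in suffixes] for p in prefixes])
      for c in "ab"}

# Rank-n truncated SVD of H induces the basis in which operators are recovered.
n = 2
U, S, Vt = np.linalg.svd(H)
V = Vt[:n].T                                     # top-n right singular vectors
Q = H @ V                                        # forward factor in that basis
Qp = np.linalg.pinv(Q)

alpha_hat = H[prefixes.index("")] @ V            # row of H for the empty prefix
beta_hat = Qp @ H[:, suffixes.index("")]         # column for the empty suffix
A_hat = {c: Qp @ Hs[c] @ V for c in "ab"}

def f_hat(x):
    """Weight computed by the recovered operators."""
    v = alpha_hat
    for c in x:
        v = v @ A_hat[c]
    return float(v @ beta_hat)

# The recovered model agrees with the original, including on strings
# outside the Hankel basis, because recovery is exact at the true rank.
for x in ["", "a", "ab", "bba", "aabb"]:
    assert abs(f(x) - f_hat(x)) < 1e-8
print("spectral recovery matches on 'ab':", round(f_hat("ab"), 6))
```

The recovered operators are a similarity transform of the true ones, so they compute the same function even though the state-space coordinates differ.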
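The regularized convex relaxation with a continuous trade-off parameter can also be illustrated in miniature. The sketch below is an assumption-laden stand-in for the paper's formulation: it solves a generic nuclear-norm-regularized least-squares problem, min over X of ½‖QX − T‖²_F + τ‖X‖_*, by proximal gradient descent with singular value thresholding; the names `svt` and `solve_operator` and the synthetic data are hypothetical, chosen only to show how the continuous parameter τ shrinks toward a low-rank operator.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the prox operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def solve_operator(Q, T, tau, iters=500):
    """Proximal gradient for  min_X 0.5*||Q X - T||_F^2 + tau*||X||_*."""
    eta = 1.0 / np.linalg.norm(Q, 2) ** 2        # step size 1/L, L = sigma_max^2
    X = np.zeros((Q.shape[1], T.shape[1]))
    for _ in range(iters):
        grad = Q.T @ (Q @ X - T)                 # gradient of the smooth term
        X = svt(X - eta * grad, eta * tau)       # gradient step, then shrinkage
    return X

# Synthetic instance: a rank-2 target operator observed through Q.
rng = np.random.default_rng(0)
Q = rng.normal(size=(20, 5))
A_true = rng.normal(size=(5, 2)) @ rng.normal(size=(2, 5))
T = Q @ A_true

# A small tau recovers the operator with only a slight shrinkage bias;
# increasing tau trades accuracy for lower-rank (simpler) models.
X = solve_operator(Q, T, tau=1e-3)
print("recovery error:", np.linalg.norm(X - A_true))
```

Sweeping τ continuously is what replaces the discrete choice of the number of states: larger values of τ zero out more singular values of the recovered operator, directly controlling model complexity.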



