
Adaptive Regularization for Weight Matrices

by Koby Crammer, et al.

Algorithms that learn distributions over weight vectors, such as AROW, were recently shown empirically to achieve state-of-the-art performance on a variety of problems, and they come with strong theoretical guarantees. Extending these algorithms to matrix models poses a challenge: the number of free parameters in the covariance of the distribution scales as n^4 with the dimension n of the matrix, and n tends to be large in real applications. We describe, analyze, and experiment with two new algorithms for learning distributions over matrix models. Our first algorithm maintains a diagonal covariance over the parameters and can therefore handle large matrices. The second algorithm factors the covariance to capture inter-feature correlations while keeping the number of parameters linear in the size of the original matrix. We analyze both algorithms in the mistake-bound model and show that our approach attains superior precision over competing algorithms on two tasks: retrieving similar images and ranking similar documents. The factored algorithm is also shown to converge faster.
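To make the diagonal variant concrete, the following is a hedged sketch only: the abstract does not give the paper's update equations, so this applies the standard diagonal AROW update (Crammer et al., 2009) to a bilinear model that scores a pair (x, z) via x^T M z, treating vec(x z^T) as the feature vector. The function name, the hinge-style margin condition, and the regularization parameter r are illustrative assumptions, not the paper's exact method. Note the storage saving the abstract alludes to: a diagonal covariance over an n x n weight matrix needs n^2 numbers rather than the n^4 of a full covariance.

```python
import numpy as np

def arow_matrix_diag_update(M, S, x, z, y, r=1.0):
    """One diagonal-covariance AROW-style update for a matrix model.

    M : (n, n) mean weight matrix; the model scores a pair via x^T M z.
    S : (n*n,) diagonal of the covariance over vec(M).
    y : label in {-1, +1}; r : regularization parameter (assumed, as in AROW).
    """
    f = np.outer(x, z).ravel()          # flattened feature vector vec(x z^T)
    mu = M.ravel()
    margin = y * mu.dot(f)              # signed margin of the current mean
    if margin >= 1.0:                   # no hinge loss -> no update
        return M, S
    v = (S * f * f).sum()               # confidence f^T Sigma f (diagonal Sigma)
    beta = 1.0 / (v + r)
    alpha = (1.0 - margin) * beta
    mu = mu + alpha * y * S * f         # mean update
    S = S - beta * (S * f) ** 2         # diagonal covariance shrinks on seen features
    return mu.reshape(M.shape), S
```

After an update on a positive pair, the mean matrix moves toward scoring that pair above the margin, and the covariance entries for the touched coordinates shrink, so later updates on those coordinates are more conservative.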



