Low-Rank plus Sparse Decomposition of Covariance Matrices using Neural Network Parametrization

08/01/2019
by Michel Baes, et al.

This paper revisits the problem of decomposing a positive semidefinite matrix into the sum of a matrix of given rank and a sparse matrix. An immediate application arises in portfolio optimization, where the matrix to be decomposed is the covariance matrix of the assets in the portfolio. Our approach represents the low-rank part of the solution as the product MM^T, where M is a rectangular matrix of appropriate size whose entries are parameterized by the coefficients of a deep neural network. We then use a gradient descent algorithm to minimize an appropriate loss function over the parameters of the network. We deduce its convergence speed to a local optimum from the Lipschitz smoothness of our loss function. We show that the rate of convergence grows only polynomially in the dimensions of the input, output, and each of the hidden layers, and hence conclude that our algorithm does not suffer from the curse of dimensionality.
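To make the construction concrete, here is a minimal sketch of the parametrization and training loop, assuming a PyTorch-style setup. The network architecture, the entrywise L1 loss on the residual (chosen here because it promotes sparsity of S = Sigma - MM^T), and all hyperparameter values are illustrative assumptions, not necessarily the paper's exact choices.

```python
import torch

n, k = 20, 3        # matrix size and target rank (illustrative)
hidden = 64         # hidden-layer width (illustrative)

# Indices of the upper triangle of an n x n matrix; the network reads
# Sigma through this half-vectorization and outputs the entries of M.
tri = torch.triu_indices(n, n)

net = torch.nn.Sequential(
    torch.nn.Linear(tri.shape[1], hidden),
    torch.nn.ReLU(),
    torch.nn.Linear(hidden, n * k),
)

def decompose(sigma):
    """Return (L, S) with L = M M^T of rank <= k and S = sigma - L."""
    m = net(sigma[tri[0], tri[1]]).reshape(n, k)
    low_rank = m @ m.T          # positive semidefinite by construction
    return low_rank, sigma - low_rank

# Toy covariance matrix as input.
a = torch.randn(n, n)
sigma = a @ a.T / n

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    low_rank, sparse = decompose(sigma)
    # Entrywise L1 norm of the residual: one plausible loss that
    # encourages the remainder S to be sparse.
    loss = sparse.abs().sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Note that L = MM^T is positive semidefinite with rank at most k by construction, so the rank constraint never has to be enforced through a penalty; only the sparsity of the residual is driven by the loss.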


Related research

04/28/2020
Denise: Deep Learning based Robust PCA for Positive Semidefinite Matrices
We introduce Denise, a deep learning based algorithm for decomposing pos...

02/21/2017
A Nonconvex Free Lunch for Low-Rank plus Sparse Matrix Recovery
We study the problem of low-rank plus sparse matrix recovery. We propose...

10/04/2018
A Convergence Analysis of Gradient Descent for Deep Linear Neural Networks
We analyze speed of convergence to global optimum for gradient descent t...

04/15/2020
Towards a theory of machine learning
We define a neural network as a septuple consisting of (1) a state vecto...

03/14/2017
Convergence of Deep Neural Networks to a Hierarchical Covariance Matrix Decomposition
We show that in a deep neural network trained with ReLU, the low-lying l...

10/12/2021
Meta Learning Low Rank Covariance Factors for Energy-Based Deterministic Uncertainty
Numerous recent works utilize bi-Lipschitz regularization of neural netw...

05/26/2019
On Learning Over-parameterized Neural Networks: A Functional Approximation Prospective
We consider training over-parameterized two-layer neural networks with R...