1 Introduction
As the Singular Value Decomposition (SVD) is sensitive to outliers, Principal Component Analysis (PCA) does not perform well if the matrix is corrupted by even a few outliers. Robust PCA, which is robust to outliers, can be defined as the problem of decomposing a given matrix $M$ into the sum of a low rank matrix $L$, whose column subspace gives the principal components, and a sparse matrix $S$ (the outlier matrix) (Bouwmans2018). A popular application of robust PCA is the decomposition of a video into a background video with slow changes (the low rank matrix) and a foreground video with moving objects (the sparse matrix). While robust PCA is applied to a variety of problems such as image and video processing (8425659) or compressed sensing (Otazo2015), we focus on symmetric positive semidefinite matrices, for instance covariance matrices. In financial economics, especially in portfolio theory, the covariance matrix plays an important role, as it is used to model the correlation between the returns of different financial assets. Standard PCA fails to correctly identify groups of heavily correlated assets if there are pairs of assets which do not belong to the same group but are nonetheless highly correlated. Such pairs can be interpreted as outliers of the corrupted covariance matrix (lisa2001).
In this paper we present Denise (the name comes from Deep and Semidefinite), an algorithm that solves robust PCA for positive semidefinite matrices using a deep neural network. The main idea is the following:

First, according to the Cholesky decomposition, a positive semidefinite symmetric matrix $L$ can be decomposed as $L = UU^T$. If $U$ has $n$ rows and $k$ columns, then the matrix $L = UU^T$ will be of rank $k$ or less.
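This rank bound can be checked numerically; a minimal NumPy sketch (the sizes $n$ and $k$ are chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 2                      # illustrative sizes: U has n rows, k columns
U = rng.standard_normal((n, k))
L = U @ U.T                      # L = U U^T is symmetric positive semidefinite

rank = np.linalg.matrix_rank(L)           # at most k
psd = np.all(np.linalg.eigvalsh(L) >= -1e-10)  # all eigenvalues nonnegative
```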

Second, in order to obtain the desired decomposition $M = L + S$, we just need to find a matrix $U$ satisfying $M = UU^T + S$. This leads to the following optimization problem: find $U$ that minimizes the difference between $M$ and $UU^T$. As we want $S$ to be sparse (loosely speaking, a sparse matrix is a matrix that contains a lot of zeros; this requirement can be realized by minimizing $L^0$ norms on matrix elements, or by regularizing through $L^1$ norms), a good approximation widely used is to minimize the $L^1$ norm of $S = M - UU^T$.
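To make the objective concrete, one can minimize $\|M - UU^T\|_1$ directly over $U$ by subgradient descent. The sketch below is purely an illustration of the optimization problem, not the paper's method; the synthetic test matrix, step size, and iteration count are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 8, 2

# Build a synthetic test matrix M = low rank + sparse (illustration only).
U_true = rng.standard_normal((n, k))
S_true = np.zeros((n, n))
S_true[2, 5] = S_true[5, 2] = 3.0        # a single symmetric outlier entry
M = U_true @ U_true.T + S_true

U = rng.standard_normal((n, k)) * 0.1    # initial guess
lr = 0.01
for _ in range(2000):
    R = M - U @ U.T                      # residual; should become sparse
    G = -2 * np.sign(R) @ U              # subgradient of sum(|M - U U^T|) w.r.t. U
    U -= lr * G

loss = np.abs(M - U @ U.T).sum()         # the L1 norm of the estimated S
```

Since $R = M - UU^T$ is symmetric, the subgradient of the elementwise $L^1$ norm with respect to $U$ simplifies to $-2\,\mathrm{sign}(R)\,U$, which the loop above uses.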

Third, in line with this optimization problem, the matrix $U$ can be chosen as the output of a deep neural network $f_\theta$, which is trained with the loss function $\|M - f_\theta(M)\,f_\theta(M)^T\|_1$, where $\theta$ are the parameters to be optimized.
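As a toy illustration of evaluating this loss for one sample, the linear map `f_theta` below is a hypothetical stand-in for the deep network (it is not the paper's architecture), mapping the flattened input matrix to the $n \times k$ entries of $U$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 4, 1

# Hypothetical one-layer "network": a linear map from the flattened matrix M
# to the n*k entries of U (a stand-in for a deep network f_theta).
theta = rng.standard_normal((n * k, n * n)) * 0.1

def f_theta(M):
    return (theta @ M.flatten()).reshape(n, k)

M = rng.standard_normal((n, n))
M = M @ M.T                              # a symmetric PSD input matrix

U = f_theta(M)
loss = np.abs(M - U @ U.T).sum()         # the L1 training loss for this sample
```

Training would then adjust `theta` to reduce this loss averaged over a dataset of matrices.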

Fourth, the DNN is trained only on