Learning a Factor Model via Regularized PCA

11/26/2011
by Yi-Hao Kao, et al.

We consider the problem of learning a linear factor model. We propose a regularized form of principal component analysis (PCA) and demonstrate, through experiments with synthetic and real data, that the resulting estimates are superior to those produced by pre-existing factor analysis approaches. We also establish theoretical results that explain how our algorithm corrects the biases induced by conventional approaches. An important feature of our algorithm is that its computational requirements are similar to those of PCA, which enjoys wide use in large part due to its efficiency.
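The abstract does not spell out the estimator, so as a rough illustration of the general idea, here is a minimal sketch of fitting a linear factor model (covariance modeled as `F F^T + sigma^2 I`) via PCA with a simple eigenvalue-shrinkage regularization. The shrinkage rule and the parameter `lam` are hypothetical stand-ins, not the paper's algorithm; the point is only that the whole procedure costs little more than a single eigendecomposition, as the abstract claims for regularized PCA.

```python
import numpy as np

def regularized_pca_factors(X, k, lam=0.1):
    """Illustrative sketch (not the paper's method): estimate a k-factor
    model, covariance ~ F F^T + sigma^2 I, from data X (n samples x d dims)
    using PCA with a hypothetical eigenvalue-shrinkage regularizer `lam`."""
    n, d = X.shape
    Xc = X - X.mean(axis=0)               # center the data
    S = Xc.T @ Xc / n                     # sample covariance (d x d)
    evals, evecs = np.linalg.eigh(S)      # eigenvalues in ascending order
    evals, evecs = evals[::-1], evecs[:, ::-1]  # reorder to descending
    sigma2 = evals[k:].mean()             # residual variance from the tail
    # shrink the top-k eigenvalues toward the noise floor (assumed rule)
    shrunk = np.maximum(evals[:k] - sigma2, 0.0) / (1.0 + lam)
    F = evecs[:, :k] * np.sqrt(shrunk)    # factor loadings (d x k)
    return F, sigma2
```

With `lam = 0` this reduces to an ordinary PCA-based factor estimate; a positive `lam` pulls the loadings toward zero, which is one generic way a regularizer can counteract the overestimation of leading eigenvalues in small samples.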

