
Sparse Principal Components Analysis: a Tutorial

by Giovanni Maria Merola, et al.

The topic of this tutorial is Least Squares Sparse Principal Components Analysis (LS SPCA), a simple method for computing approximate Principal Components that are combinations of only a few of the observed variables. Analogously to Principal Components, these components are uncorrelated and sequentially best approximate the dataset. The derivation of LS SPCA is intuitive for anyone familiar with linear regression. Because LS SPCA is based on a different optimality criterion than other SPCA methods, it does not suffer from their serious drawbacks. I demonstrate on two datasets how useful and parsimonious sparse PCs can be computed. An R package for computing LS SPCA is available for download.
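To make the least-squares idea concrete, here is a minimal NumPy sketch (not the R package's algorithm) of a single sparse component. Assumptions are mine: centered data, a hand-picked subset of allowed variables, and loadings chosen so the component best approximates the whole data matrix in least squares, which reduces to a generalized eigenproblem.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 6))
X -= X.mean(axis=0)  # work with centered data

# Hypothetical choice: allow only three of the six variables in the component.
subset = [0, 2, 5]
Xs = X[:, subset]

# Least-squares criterion: choose loadings a so that t = Xs @ a best
# approximates X in least squares, i.e. maximize ||X.T @ t||^2 / ||t||^2.
# This is the generalized eigenproblem (Xs.T X X.T Xs) a = lam (Xs.T Xs) a.
A = Xs.T @ X @ X.T @ Xs
B = Xs.T @ Xs
vals, vecs = np.linalg.eig(np.linalg.solve(B, A))
a = np.real(vecs[:, np.argmax(np.real(vals))])
t = Xs @ a  # the sparse component: a combination of only 3 variables

# Share of the total variance of X explained by regressing each column on t.
explained = float((t @ X) @ (X.T @ t) / (t @ t))
ratio = explained / float(np.sum(X ** 2))
print(f"variance explained by one sparse PC: {ratio:.3f}")
```

A sequential version would repeat this for further components while constraining each new component to be uncorrelated with the previous ones, as the abstract describes.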



