
Solving Large-Scale Sparse PCA to Certifiable (Near) Optimality

by Dimitris Bertsimas, et al.

Sparse principal component analysis (PCA) is a popular dimensionality reduction technique for obtaining principal components which are linear combinations of a small subset of the original features. Existing approaches cannot supply certifiably optimal principal components with more than p=100s covariates. By reformulating sparse PCA as a convex mixed-integer semidefinite optimization problem, we design a cutting-plane method which solves the problem to certifiable optimality at the scale of selecting k=10s covariates from p=300 variables, and provides small bound gaps at a larger scale. We also propose two convex relaxations and randomized rounding schemes that provide certifiably near-exact solutions within minutes for p=100s or hours for p=1,000s. Using real-world financial and medical datasets, we illustrate our approach's ability to derive interpretable principal components tractably at scale.
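The underlying combinatorial problem is to pick a support of k covariates whose covariance submatrix has the largest leading eigenvalue, i.e. maximize x'Σx over unit vectors x with at most k nonzeros. The sketch below illustrates that objective with a simple greedy forward-selection heuristic in NumPy; it is only an illustrative baseline, not the certifiably optimal cutting-plane method or the convex relaxations the paper develops, and the function name `greedy_sparse_pca` is ours.

```python
import numpy as np

def greedy_sparse_pca(Sigma, k):
    """Heuristic sketch of the sparse PCA objective
    max over supports S with |S| = k of lambda_max(Sigma[S, S]).

    Greedily grows the support one covariate at a time; this is NOT
    the paper's certifiably optimal method, just an illustration.
    """
    p = Sigma.shape[0]
    support = []
    for _ in range(k):
        best_j, best_val = None, -np.inf
        for j in range(p):
            if j in support:
                continue
            idx = support + [j]
            # Largest eigenvalue of the candidate covariance submatrix
            val = np.linalg.eigvalsh(Sigma[np.ix_(idx, idx)])[-1]
            if val > best_val:
                best_j, best_val = j, val
        support.append(best_j)
    # Recover the sparse loading vector supported on the chosen set
    idx = sorted(support)
    w, V = np.linalg.eigh(Sigma[np.ix_(idx, idx)])
    x = np.zeros(p)
    x[idx] = V[:, -1]
    return x, best_val

# Toy usage on a random sample covariance matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
Sigma = A.T @ A / 50
x, val = greedy_sparse_pca(Sigma, k=3)
```

The returned `x` is a unit vector with at most k nonzero entries, and `val` equals the explained variance `x @ Sigma @ x` on the selected support. The exact methods in the paper certify how far such heuristic solutions are from the true optimum.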


Sparse PCA With Multiple Components

Sparse Principal Component Analysis is a cardinal technique for obtainin...

Large-Scale Paralleled Sparse Principal Component Analysis

Principal component analysis (PCA) is a statistical technique commonly u...

Priming PCA with EigenGame

We introduce primed-PCA (pPCA), an extension of the recently proposed Ei...

Sparse Principal Components Analysis: a Tutorial

The topic of this tutorial is Least Squares Sparse Principal Components ...

Exact and Approximation Algorithms for Sparse PCA

Sparse PCA (SPCA) is a fundamental model in machine learning and data an...

Large-Scale Sparse Principal Component Analysis with Application to Text Data

Sparse PCA provides a linear combination of small number of features tha...

Compact Optimization Learning for AC Optimal Power Flow

This paper reconsiders end-to-end learning approaches to the Optimal Pow...