Penalized versus constrained generalized eigenvalue problems

10/22/2014
by Irina Gaynanova, et al.

We investigate the difference between using an ℓ_1 penalty and an ℓ_1 constraint in generalized eigenvalue problems such as principal component analysis and discriminant analysis. Our main finding is that an ℓ_1 penalty may fail to produce very sparse solutions, a severe disadvantage for variable selection that can be remedied by using an ℓ_1 constraint instead. Our claims are supported by both empirical evidence and theoretical analysis. Finally, we illustrate the advantages of an ℓ_1 constraint in the context of discriminant analysis and principal component analysis.
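The two formulations the abstract contrasts can be sketched on a toy sparse PCA problem. The snippet below is a minimal illustration, not the paper's algorithm: it uses a truncated power iteration in which the ℓ_1 penalty enters as a soft-thresholding (proximal) step and the ℓ_1 constraint enters as a projection onto the ℓ_1 ball; the function names, tuning values, and toy covariance are all illustrative assumptions.

```python
import numpy as np

def soft_threshold(w, lam):
    """Proximal operator of lam * ||.||_1 (elementwise shrinkage)."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

def project_l1_ball(w, c):
    """Euclidean projection of w onto the l1 ball {x : ||x||_1 <= c}."""
    if np.abs(w).sum() <= c:
        return w
    u = np.sort(np.abs(w))[::-1]                 # sorted magnitudes, descending
    css = np.cumsum(u)
    idx = np.arange(1, len(w) + 1)
    rho = np.nonzero(u * idx > css - c)[0][-1]   # last index kept active
    theta = (css[rho] - c) / (rho + 1.0)         # shrinkage level
    return np.sign(w) * np.maximum(np.abs(w) - theta, 0.0)

def sparse_leading_vector(A, lam=None, c=None, n_iter=200):
    """Truncated power iteration for the leading eigenvector of A,
    sparsified by an l1 penalty (lam) or an l1 constraint (c)."""
    p = A.shape[0]
    v = np.ones(p) / np.sqrt(p)          # deterministic dense start
    for _ in range(n_iter):
        w = A @ v
        if lam is not None:              # penalized: shrink, then renormalize
            w = soft_threshold(w, lam)
        if c is not None:                # constrained: project, then renormalize
            w = project_l1_ball(w, c)
        nrm = np.linalg.norm(w)
        if nrm == 0.0:                   # penalty wiped out every coordinate
            break
        v = w / nrm
    return v

# Toy covariance whose leading eigenvector is 2-sparse (out of 10 variables).
p = 10
u = np.zeros(p)
u[:2] = 1.0 / np.sqrt(2.0)
A = 5.0 * np.outer(u, u) + np.eye(p)

v_pen = sparse_leading_vector(A, lam=0.5)  # l1-penalized update
v_con = sparse_leading_vector(A, c=1.2)    # l1-constrained update
print("penalized nonzeros:", np.count_nonzero(np.round(v_pen, 8)))
print("constrained nonzeros:", np.count_nonzero(np.round(v_con, 8)))
```

Note the structural difference the paper's comparison hinges on: the constrained update enforces ℓ_1 budget ‖v‖_1 ≤ c exactly at every step, whereas the penalized update only shrinks coordinates by λ, so the sparsity of its output depends on how λ interacts with the renormalization.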


Related research:

- Eigenvalue and Generalized Eigenvalue Problems: Tutorial (03/25/2019)
- Penalized Orthogonal Iteration for Sparse Estimation of Generalized Eigenvalue Problem (11/08/2017)
- Stratified Principal Component Analysis (07/28/2023)
- Generalized Minkowski sets for the regularization of inverse problems (03/10/2019)
- Alternating direction method of multipliers for penalized zero-variance discriminant analysis (01/21/2014)
- View-Invariant Recognition of Action Style Self-Dissimilarity (05/22/2017)
- Derivation of Learning Rules for Coupled Principal Component Analysis in a Lagrange-Newton Framework (04/28/2022)
