
Some Options for L1-Subspace Signal Processing

by Panos P. Markopoulos et al.

We describe ways to define and calculate L_1-norm signal subspaces that are less sensitive to outlying data than L_2-calculated subspaces. We focus on the computation of the maximum-L_1-projection principal component of a data matrix containing N signal samples of dimension D and conclude that the general problem is formally NP-hard for asymptotically large N, D. We prove, however, that the case of engineering interest — fixed dimension D and asymptotically large sample support N — is not NP-hard, and we present an optimal algorithm of complexity O(N^D). We generalize to multiple L_1-max-projection components and present an explicit optimal L_1-subspace calculation algorithm in the form of matrix nuclear-norm evaluations. We conclude with illustrations of L_1-subspace signal processing in the fields of data dimensionality reduction and direction-of-arrival estimation.
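To make the problem concrete: a known result from this line of work is that the L_1 principal component maximizing ||X^T w||_1 over unit vectors w equals X b* / ||X b*||_2, where b* maximizes ||X b||_2 over the 2^N binary antipodal vectors b in {±1}^N. The sketch below, written for illustration only, finds the exact L_1-PC by exhaustive search over sign vectors (feasible only for small N; the paper's contribution is the much faster O(N^D) algorithm for fixed D):

```python
import itertools
import numpy as np

def l1_pc_exhaustive(X):
    """Exact L1 principal component of a D x N data matrix X.

    Uses the equivalence max_{||w||=1} ||X^T w||_1 = max_{b in {+-1}^N} ||X b||_2,
    with the maximizing w given by X b* / ||X b*||_2. Exhaustive over 2^N
    sign vectors, so intended only for small sample support N.
    """
    D, N = X.shape
    best_val, best_b = -1.0, None
    for signs in itertools.product((-1.0, 1.0), repeat=N):
        b = np.array(signs)
        val = np.linalg.norm(X @ b)  # ||X b||_2 for this sign pattern
        if val > best_val:
            best_val, best_b = val, b
    w = X @ best_b
    return w / np.linalg.norm(w)
```

By construction, the returned unit vector attains an L_1 projection value ||X^T w||_1 at least as large as that of the ordinary L_2 principal component, which is what makes it more robust to outlying samples.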

