Linear Tensor Projection Revealing Nonlinearity

07/08/2020
by Koji Maruhashi, et al.

Dimensionality reduction is an effective approach to learning from high-dimensional data, as it can reveal decision boundaries in a human-readable low-dimensional subspace. Linear methods such as principal component analysis and linear discriminant analysis can capture correlations among many variables; however, there is no guarantee that they capture the correlations that matter for prediction, and doing so becomes even harder when the decision boundary is strongly nonlinear. The problem is exacerbated when the data are matrices or tensors that represent relationships between variables. We propose a learning method that searches for a subspace that maximizes prediction accuracy while retaining as much of the original data information as possible, even when the prediction model in the subspace is strongly nonlinear. This makes it easier to interpret the mechanism of the groups of variables underlying the prediction problem of interest. We demonstrate the effectiveness of our method by applying it to various types of data, including matrices and tensors.
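As a rough illustration of the general idea, and not the paper's actual algorithm, the sketch below jointly trains a learnable linear projection and a small nonlinear classifier on toy data, with a reconstruction penalty that encourages the subspace to retain the original data information. PyTorch, the dimensions, the MLP classifier, and the trade-off weight are all assumptions made for this sketch.

```python
import torch
import torch.nn as nn

# Hypothetical toy setup (not from the paper): 100 input features,
# a 2-dimensional subspace, and 3 classes.
d, k, n_classes = 100, 2, 3
X = torch.randn(512, d)                       # toy data matrix
y = torch.randint(0, n_classes, (512,))       # toy labels

W = nn.Parameter(0.01 * torch.randn(d, k))    # learnable linear projection
clf = nn.Sequential(nn.Linear(k, 32), nn.ReLU(), nn.Linear(32, n_classes))

opt = torch.optim.Adam([W, *clf.parameters()], lr=1e-2)
ce = nn.CrossEntropyLoss()

for step in range(200):
    Z = X @ W                                 # low-dimensional representation
    pred_loss = ce(clf(Z), y)                 # favors a predictive subspace
    recon_loss = ((X - Z @ W.T) ** 2).mean()  # retains original information
    loss = pred_loss + 0.1 * recon_loss       # trade-off weight chosen arbitrarily
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the projection is optimized for both prediction and reconstruction, a strongly nonlinear decision boundary can be handled by the downstream classifier while the subspace itself stays linear and therefore easier to interpret.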

Related research

05/25/2021: Hierarchical Subspace Learning for Dimensionality Reduction to Improve Classification Accuracy in Large Data Sets
Manifold learning is used for dimensionality reduction, with the goal of...

06/05/2010: Rasch-based high-dimensionality data reduction and class prediction with applications to microarray gene expression data
Class prediction is an important application of microarray gene expressi...

08/05/2018: Hybrid Subspace Learning for High-Dimensional Data
The high-dimensional data setting, in which p >> n, is a challenging sta...

12/03/2019: A Fast deflation Method for Sparse Principal Component Analysis via Subspace Projections
Deflation method is an iterative technique that searches the sparse load...

07/26/2018: Dynamical Component Analysis (DyCA): Dimensionality Reduction For High-Dimensional Deterministic Time-Series
Multivariate signal processing is often based on dimensionality reductio...

10/06/2021: Boosting RANSAC via Dual Principal Component Pursuit
In this paper, we revisit the problem of local optimization in RANSAC. O...

12/31/2022: A Study on a User-Controlled Radial Tour for Variable Importance in High-Dimensional Data
Principal component analysis is a long-standing go-to method for explori...
