
Gradient-based kernel dimension reduction for supervised learning

by Kenji Fukumizu, et al.

This paper proposes a novel kernel approach to linear dimension reduction for supervised learning. The purpose of the dimension reduction is to find directions in the input space that explain the output as effectively as possible. The proposed method uses an estimator for the gradient of the regression function, based on covariance operators on reproducing kernel Hilbert spaces. In comparison with other existing methods, the proposed one is widely applicable without strong assumptions on the distributions or the types of variables, and it relies on a computationally simple eigendecomposition. Experimental results show that the proposed method successfully finds the effective directions with efficient computation.
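The abstract describes a three-step recipe: estimate the regression gradient at each sample through regularized kernel (Gram) matrices, average the outer products of those gradients, and take the leading eigenvectors as the effective directions. A minimal sketch of that recipe follows, assuming Gaussian kernels and a median-distance bandwidth heuristic; the function name `gkdr`, the regularization default `eps`, and the bandwidth choice are illustrative assumptions, not the authors' reference implementation:

```python
import numpy as np

def _gram(Z, sigma=None):
    """Gaussian Gram matrix; bandwidth defaults to the median pairwise distance
    (an illustrative heuristic, not a setting from the paper)."""
    sq = np.sum(Z * Z, axis=1)
    D2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T, 0.0)
    if sigma is None:
        sigma = np.median(np.sqrt(D2[D2 > 0]))
    return np.exp(-D2 / (2.0 * sigma ** 2)), sigma

def gkdr(X, Y, dim, sigma_x=None, sigma_y=None, eps=1e-4):
    """Estimate a basis of the effective subspace by eigendecomposition of a
    kernel-based average of outer products of regression gradients."""
    n, d = X.shape
    Gx, sigma_x = _gram(X, sigma_x)
    Gy, _ = _gram(np.asarray(Y).reshape(n, -1), sigma_y)
    # Regularized inverse F = (G_X + n*eps*I)^{-1}
    F = np.linalg.solve(Gx + n * eps * np.eye(n), np.eye(n))
    H = F @ Gy @ F
    # Accumulate M = (1/n) sum_i D_i^T H D_i, where row j of D_i is the
    # derivative of the Gaussian kernel k(x_j, x) with respect to x at x_i.
    M = np.zeros((d, d))
    for i in range(n):
        Di = Gx[:, i:i + 1] * (X - X[i]) / sigma_x ** 2  # n x d
        M += Di.T @ H @ Di
    # Leading eigenvectors of the averaged matrix span the estimated directions.
    w, V = np.linalg.eigh(M / n)
    return V[:, ::-1][:, :dim]
```

Projecting the inputs onto the returned basis, `X @ gkdr(X, Y, dim)`, gives the reduced representation; only Gram-matrix algebra and one `d x d` eigendecomposition are needed, which matches the computational simplicity the abstract claims.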


An RKHS-Based Semiparametric Approach to Nonlinear Sufficient Dimension Reduction

Based on the theory of reproducing kernel Hilbert space (RKHS) and semip...

A Supervised Tensor Dimension Reduction-Based Prognostics Model for Applications with Incomplete Imaging Data

This paper proposes a supervised dimension reduction methodology for ten...

Continuum directions for supervised dimension reduction

Dimension reduction of multivariate data supervised by auxiliary informa...

Deep Embedding Kernel

In this paper, we propose a novel supervised learning method that is cal...

Supervised Coarse-Graining of Composite Objects

We consider supervised dimension reduction for regression with composite...

A new reproducing kernel based nonlinear dimension reduction method for survival data

Based on the theories of sliced inverse regression (SIR) and reproducing...

Deep Dimension Reduction for Supervised Representation Learning

The success of deep supervised learning depends on its automatic data re...