
Deep Dimension Reduction for Supervised Representation Learning

by Jian Huang, et al.

The success of deep supervised learning depends on its ability to learn data representations automatically. Among the characteristics of an ideal representation for high-dimensional complex data, information preservation, low dimensionality, and disentanglement are the most essential. In this work, we propose a deep dimension reduction (DDR) approach to learning a data representation with these characteristics for supervised learning. At the population level, we formulate the ideal representation learning task as finding a nonlinear dimension reduction map that minimizes the sum of losses characterizing conditional independence and disentanglement. At the sample level, we estimate the target map nonparametrically with deep neural networks. We derive a bound on the excess risk of the deep nonparametric estimator. The proposed method is validated via comprehensive numerical experiments and real data analysis in the context of regression and classification.
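To make the composite objective concrete, the following is a minimal sketch in NumPy, not the paper's actual estimator: it uses a one-hidden-layer network as the reduction map, a squared-error fit term as a stand-in for the conditional-independence loss, and the squared off-diagonal entries of the representation's covariance as a simple disentanglement proxy. All function names, the toy data, and both surrogate losses are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: X in R^10, response driven by a low-dimensional nonlinear summary of X.
n, d, k = 200, 10, 2
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=n)

def representation(X, W1, b1, W2, b2):
    """One-hidden-layer network mapping R^d -> R^k (the nonlinear reduction map)."""
    H = np.tanh(X @ W1 + b1)
    return H @ W2 + b2

def ddr_style_loss(Z, y, lam=1.0):
    """Sum of a fit term and a disentanglement penalty (both are simplified proxies).

    Fit term: mean squared error of the best linear predictor of y from Z,
    standing in for a loss characterizing conditional independence of y and X given Z.
    Disentanglement proxy: squared off-diagonal entries of cov(Z), penalizing
    correlated coordinates of the representation.
    """
    Z1 = np.column_stack([np.ones(len(Z)), Z])          # add intercept
    beta, *_ = np.linalg.lstsq(Z1, y, rcond=None)       # best linear head on Z
    fit = np.mean((y - Z1 @ beta) ** 2)
    C = np.cov(Z, rowvar=False)
    off_diag = C - np.diag(np.diag(C))
    return fit + lam * np.sum(off_diag ** 2)

# Evaluate the objective for a randomly initialized map; a real estimator
# would minimize this over the network parameters by gradient descent.
W1 = rng.normal(scale=0.5, size=(d, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, k)); b2 = np.zeros(k)
Z = representation(X, W1, b1, W2, b2)
loss = ddr_style_loss(Z, y)
print(Z.shape, np.isfinite(loss))
```

In the paper the two terms are specific population-level losses and the map is trained end to end; this sketch only shows the shape of the objective, trading y's conditional-independence criterion for an ordinary regression residual.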


