
Deep Sufficient Representation Learning via Mutual Information

by Siming Zheng, et al.

We propose a mutual information-based sufficient representation learning (MSRL) approach, which uses the variational formulation of mutual information and leverages the approximation power of deep neural networks. MSRL learns a sufficient representation that attains maximum mutual information with the response while following a user-selected distribution. It easily handles multi-dimensional continuous or categorical response variables. MSRL is shown to be consistent in the sense that the conditional probability density function of the response given the learned representation converges to the conditional probability density function of the response given the predictor. Non-asymptotic error bounds for MSRL are also established under suitable conditions. To establish these bounds, we derive a generalized Dudley's inequality for an order-two U-process indexed by deep neural networks, which may be of independent interest. We also discuss how to determine the intrinsic dimension of the underlying data distribution. Finally, we evaluate MSRL via extensive numerical experiments and real data analysis, and demonstrate that it outperforms several existing nonlinear sufficient dimension reduction methods.
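The variational formulation of mutual information that MSRL builds on can be illustrated with the Donsker-Varadhan lower bound, I(X;Y) ≥ E_{p(x,y)}[T(x,y)] − log E_{p(x)p(y)}[e^{T(x,y)}], which holds for any critic function T. A minimal NumPy sketch is below; it uses a fixed bilinear critic on correlated Gaussian toy data as a stand-in for the deep network the paper would train, so the critic form, the constant `c`, and the data are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated standard Gaussian pair: true MI = -0.5 * log(1 - rho^2) nats
rho = 0.8
n = 50_000
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)
true_mi = -0.5 * np.log(1 - rho**2)  # about 0.511 nats

def dv_bound(t_joint, t_marginal):
    """Donsker-Varadhan lower bound on mutual information:
    E_joint[T] - log E_marginals[exp(T)]."""
    return t_joint.mean() - np.log(np.exp(t_marginal).mean())

# Fixed bilinear critic T(x, y) = c * x * y; a trained network would
# replace this to tighten the bound.
c = 0.5
t_joint = c * x * y                 # critic on samples from the joint
y_shuffled = rng.permutation(y)     # shuffling y simulates p(x)p(y)
t_marginal = c * x * y_shuffled

est = dv_bound(t_joint, t_marginal)
print(f"DV lower bound: {est:.3f}  (true MI: {true_mi:.3f})")
```

Because the critic is not optimized, the estimate sits strictly below the true mutual information; maximizing the bound over a neural-network critic, as in MSRL, drives it toward the true value.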


