TCMI: a non-parametric mutual-dependence estimator for multivariate continuous distributions

01/30/2020
by   Benjamin Regler, et al.

The identification of relevant features, i.e., the driving variables that determine a process or the property of a system, is an essential part of analyzing data sets whose entries are described by a large number of variables. The preferred measure for quantifying the relevance of nonlinear statistical dependencies is mutual information, which requires probability distributions as input. Probability distributions cannot be reliably sampled and estimated from limited data, especially for real-valued samples such as lengths or energies. Here, we introduce total cumulative mutual information (TCMI), a measure of the relevance of mutual dependencies based on cumulative probability distributions. TCMI can be estimated directly from sample data and is a non-parametric, robust, and deterministic measure that facilitates comparisons and rankings between feature sets of different cardinality. The ranking induced by TCMI enables feature selection, i.e., the identification of the set of relevant features that are statistically related to the process or the property of a system, while taking into account both the number of data samples and the cardinality of the feature subsets. We evaluate the performance of our measure on simulated data, compare it with similar multivariate dependence measures, and demonstrate the effectiveness of our feature selection method on a set of standard data sets and a typical scenario in materials science.
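The key idea in the abstract is that cumulative probability distributions, unlike densities, can be estimated directly and deterministically from a finite sample. As an illustration of the kind of quantity such an estimator builds on (not the authors' TCMI implementation, whose exact definition is in the full paper), here is a minimal sketch of the empirical cumulative residual entropy of a one-dimensional sample, computed from the empirical survival function:

```python
import numpy as np

def empirical_cumulative_entropy(sample):
    """Estimate the cumulative residual entropy -∫ S(t) log S(t) dt
    from a 1-D sample, where S is the empirical survival function.

    Illustrative building block only; the paper's TCMI estimator
    extends cumulative-distribution ideas to multivariate dependence.
    """
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    # Empirical survival probability on each interval [x_i, x_{i+1}):
    # after the i-th sorted point, a fraction (n - i) / n of the data remains.
    s = (n - np.arange(1, n)) / n
    gaps = np.diff(x)  # interval widths between consecutive sorted points
    return float(-np.sum(gaps * s * np.log(s)))
```

Because the estimate depends only on the sorted sample and interval widths, it requires no binning or bandwidth choice, which is the practical advantage the abstract claims for cumulative-distribution-based measures. For the two-point sample {0, 1} the estimate is (1/2) ln 2.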


research · 06/01/2015
Mutual Dependence: A Novel Method for Computing Dependencies Between Random Variables
In data science, it is often required to estimate dependencies between d...

research · 02/10/2019
Feature Selection for multi-labeled variables via Dependency Maximization
Feature selection and reducing the dimensionality of data is an essentia...

research · 11/25/2017
Feature Selection Facilitates Learning Mixtures of Discrete Product Distributions
Feature selection can facilitate the learning of mixtures of discrete ra...

research · 08/08/2017
Learning non-parametric Markov networks with mutual information
We propose a method for learning Markov network structures for continuou...

research · 09/04/2023
OutRank: Speeding up AutoML-based Model Search for Large Sparse Data sets with Cardinality-aware Feature Ranking
The design of modern recommender systems relies on understanding which p...

research · 10/17/2019
Ranking variables and interactions using predictive uncertainty measures
For complex nonlinear supervised learning models, assessing the relevanc...

research · 07/12/2017
Computing Entropies With Nested Sampling
The Shannon entropy, and related quantities such as mutual information, ...
