Causal Entropy and Information Gain for Measuring Causal Control

Artificial intelligence models and methods commonly lack causal interpretability. Despite advancements in interpretable machine learning (IML), existing methods frequently assign importance to features that lack causal influence on the outcome variable. Selecting causally relevant features among those identified as relevant by these methods, or even before model training, would offer a solution. Feature selection methods based on information-theoretic quantities have been successful in identifying statistically relevant features. However, the quantities they rely on do not incorporate causality, rendering them unsuitable for such scenarios. To address this challenge, this article proposes information-theoretic quantities that incorporate the causal structure of the system and can be used to evaluate the causal importance of features for a given outcome variable. Specifically, we introduce causal versions of entropy and mutual information, termed causal entropy and causal information gain, which are designed to assess how much control a feature provides over the outcome variable. These newly defined quantities capture changes in the entropy of a variable resulting from interventions on other variables. Fundamental results connecting these quantities to the existence of causal effects are derived. The use of causal information gain in feature selection is demonstrated, highlighting its superiority over standard mutual information in revealing which features provide control over a chosen outcome variable. Our investigation paves the way for the development of methods with improved interpretability in domains involving causation.
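To make the idea concrete, here is a minimal sketch in a toy setting. It assumes (as the abstract suggests, not as the paper's exact definition) that causal entropy is the expected entropy of the outcome Y under interventions do(X=x), with x drawn from a chosen intervention distribution, and that causal information gain is the resulting reduction in entropy. The structural model and its parameters are hypothetical.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in bits) of a probability vector."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Hypothetical toy SCM: binary X -> binary Y.
# Because X has no parents here, P(Y | do(X=x)) = P(Y | X=x).
p_x = np.array([0.5, 0.5])            # intervention distribution over x
p_y1_do_x = np.array([0.1, 0.9])      # P(Y=1 | do(X=x)) for x in {0, 1}

# Observational marginal entropy of Y.
p_y1 = float(p_x @ p_y1_do_x)
h_y = entropy([1 - p_y1, p_y1])

# Causal entropy (as assumed above): expected entropy of Y
# after intervening on X, averaged over the intervention values.
h_y_do_x = float(sum(
    p_x[x] * entropy([1 - p_y1_do_x[x], p_y1_do_x[x]])
    for x in (0, 1)
))

# Causal information gain: entropy reduction in Y achievable
# by taking control of X via interventions.
causal_info_gain = h_y - h_y_do_x
```

In this example intervening on X removes roughly half a bit of uncertainty about Y, so X would rank as a feature that provides genuine control over the outcome, whereas a variable merely correlated with Y through a confounder would score zero under the same computation.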


Related research

- Fair Causal Feature Selection (06/17/2023)
- Instance-wise Causal Feature Selection for Model Interpretation (04/26/2021)
- Multivariate Extension of Matrix-based Renyi's α-order Entropy Functional (08/23/2018)
- Confounding Ghost Channels and Causality: A New Approach to Causal Information Flows (07/06/2020)
- Feature Selection via Mutual Information: New Theoretical Insights (07/17/2019)
- Robust Feature Selection by Mutual Information Distributions (08/07/2014)
- All-relevant feature selection using multidimensional filters with exhaustive search (05/16/2017)
