Causal KL: Evaluating Causal Discovery

by Rodney T. O'Donnell, et al.

The two most commonly used criteria for assessing causal model discovery with artificial data are edit distance and Kullback-Leibler (KL) divergence, measured from the true model to the learned model. Both metrics maximally reward the true model. However, we argue that both are insufficiently discriminating in judging the relative merits of false models. Edit distance, for example, fails to distinguish between strong and weak probabilistic dependencies. KL divergence, on the other hand, rewards all statistically equivalent models equally, regardless of their differing causal claims. We propose an augmented KL divergence, which we call Causal KL (CKL), that takes into account the causal relationships distinguishing observationally equivalent models. Results are presented for three variants of CKL, showing that Causal KL works well in practice.
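As a refresher on the baseline metric the paper augments: KL divergence from the true model to the learned model is computed over their joint distributions, and is zero exactly when the two distributions agree. The sketch below is illustrative only, assuming small discrete joints represented as aligned probability lists; the paper's CKL variants and its Bayesian-network machinery are not reproduced here, and the example numbers are invented.

```python
import math

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions given as aligned probability lists.

    Terms with p_i == 0 contribute nothing; q_i must be > 0 wherever p_i > 0.
    """
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical joint distributions over two binary variables for a "true"
# and a "learned" model (cells ordered A=0 B=0, A=0 B=1, A=1 B=0, A=1 B=1).
true_joint = [0.40, 0.10, 0.20, 0.30]
learned_joint = [0.35, 0.15, 0.25, 0.25]

print(kl_divergence(true_joint, learned_joint))
```

Note that two causally different but statistically (Markov) equivalent models induce the same joint distribution, so this quantity cannot separate them; that is precisely the shortcoming CKL is designed to address.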

