Hidden independence in unstructured probabilistic models

by Antony Pearson, et al.

We describe a novel way to represent the probability distribution of a random binary string as a mixture having a maximally weighted component associated with independent (though not necessarily identically distributed) Bernoulli characters. We refer to this as the latent independent weight of the probabilistic source producing the string, and derive a combinatorial algorithm to compute it. The decomposition we propose may serve as an alternative to the Boolean paradigm of hypothesis testing, or to assess the fraction of uncorrupted samples originating from a source with independent marginals. In this sense, the latent independent weight quantifies the maximal amount of independence contained within a probabilistic source, which, properly speaking, may not have independent marginals.
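The latent independent weight can be phrased as the largest λ such that the source distribution p decomposes as p = λq + (1 − λ)r, where q is a product of Bernoulli marginals and r is an arbitrary remainder distribution; for a fixed q this largest λ is min over strings x with q(x) > 0 of p(x)/q(x). The sketch below illustrates the idea (it is not the combinatorial algorithm from the paper) by brute-force grid search over Bernoulli parameters for a two-bit source; all function names are hypothetical.

```python
import itertools

def product_probs(thetas):
    """Distribution of len(thetas) independent Bernoulli(theta_i) bits."""
    q = {}
    for x in itertools.product((0, 1), repeat=len(thetas)):
        prob = 1.0
        for xi, t in zip(x, thetas):
            prob *= t if xi == 1 else 1.0 - t
        q[x] = prob
    return q

def max_weight_against(p, thetas):
    """Largest lambda with p - lambda*q >= 0 pointwise, q the product measure."""
    q = product_probs(thetas)
    lam = 1.0
    for x, qx in q.items():
        if qx > 0:
            lam = min(lam, p.get(x, 0.0) / qx)
    return lam

def latent_independent_weight(p, n, grid=21):
    """Illustrative brute-force search over Bernoulli parameters on a grid
    (the paper derives an exact combinatorial algorithm instead)."""
    best = 0.0
    steps = [i / (grid - 1) for i in range(grid)]
    for thetas in itertools.product(steps, repeat=n):
        best = max(best, max_weight_against(p, thetas))
    return best

# Perfectly correlated pair of bits: half the mass on 00, half on 11.
p = {(0, 0): 0.5, (1, 1): 0.5, (0, 1): 0.0, (1, 0): 0.0}
print(latent_independent_weight(p, 2))  # 0.5
```

For the perfectly correlated pair, only the degenerate product measures concentrated on 00 or on 11 avoid placing mass on the zero-probability strings 01 and 10, and each yields λ = 0.5; so even a source with maximally dependent marginals can carry substantial latent independent weight.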
