
Non-Parametric Structure Learning on Hidden Tree-Shaped Distributions
We provide high-probability sample complexity guarantees for non-parametric structure learning of tree-shaped graphical models whose nodes are discrete random variables with a finite or countable alphabet, both in the noiseless and noisy regimes. First, we introduce a new, fundamental quantity called the (noisy) information threshold, which arises naturally from the error analysis of the Chow-Liu algorithm and characterizes not only the sample complexity, but also the inherent impact of the noise on the structure learning task, without explicit assumptions on the distribution of the model. This allows us to present the first non-parametric, high-probability finite sample complexity bounds on tree-structure learning from potentially noise-corrupted data. In particular, for number of nodes p, success rate 1-δ, and a fixed value of the information threshold, our sample complexity bounds for exact structure recovery are of the order of O(log^{1+ζ}(p/δ)), for all ζ>0, in both the noiseless and noisy settings. Subsequently, we apply our results to two classes of hidden models, namely the M-ary erasure channel and the generalized symmetric channel, illustrating the usefulness and importance of our framework. As a byproduct of our analysis, this paper resolves the open problem of tree structure learning in the presence of non-identically distributed observation noise, providing explicit conditions for the convergence of the Chow-Liu algorithm in this setting as well.
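The Chow-Liu algorithm the abstract builds on has a simple operational form: estimate the pairwise mutual informations from samples, then return a maximum-weight spanning tree over those weights. The following is a minimal plug-in sketch of that procedure (the function names and the union-find implementation are illustrative, not from the paper; the plug-in MI estimator is the standard empirical one, not the paper's refined analysis):

```python
import math
from collections import Counter
from itertools import combinations

def empirical_mi(samples, i, j):
    """Plug-in estimate of the mutual information I(X_i; X_j) in nats."""
    n = len(samples)
    joint = Counter((s[i], s[j]) for s in samples)
    pi = Counter(s[i] for s in samples)
    pj = Counter(s[j] for s in samples)
    mi = 0.0
    for (a, b), c in joint.items():
        # p_ab * log(p_ab / (p_a * p_b)) with p_ab = c/n, p_a = pi[a]/n, p_b = pj[b]/n
        mi += (c / n) * math.log(c * n / (pi[a] * pj[b]))
    return mi

def chow_liu_tree(samples, p):
    """Maximum-weight spanning tree over empirical pairwise MIs (Kruskal)."""
    edges = sorted(((empirical_mi(samples, i, j), i, j)
                    for i, j in combinations(range(p), 2)), reverse=True)
    parent = list(range(p))  # union-find forest for cycle detection

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:  # adding (i, j) creates no cycle
            parent[ri] = rj
            tree.append((i, j))
    return tree
```

For instance, on samples drawn from a three-node sign-valued Markov chain X0 - X1 - X2 with small flip probability, the recovered edge set is {(0,1), (1,2)}: the direct-neighbor MIs dominate the MI between the endpoints, so the spanning tree matches the true chain.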