
Support Union Recovery in Meta Learning of Gaussian Graphical Models
In this paper we study meta-learning of Gaussian graphical models. In our setup, each task has a different true precision matrix, each with a possibly different support (i.e., set of edges in the graph). We assume that the union of the supports of all the true precision matrices (i.e., the true support union) is small in size, which corresponds to sparse graphs. We propose to pool the samples from all tasks and to estimate a single precision matrix by ℓ_1-regularized maximum likelihood estimation. We show that, with high probability, the support of the estimated precision matrix equals the true support union, provided the number of samples per task satisfies n ∈ O((log N)/K), for N nodes and K tasks. That is, fewer samples per task are required when more tasks are available. We prove a matching information-theoretic lower bound of n ∈ Ω((log N)/K) on the necessary number of samples per task, and thus our algorithm is minimax optimal. Synthetic experiments validate our theory.
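The pooling step described above can be sketched in a few lines. This is a minimal illustration, not the paper's experimental code: the task count, sample size, chain-graph support, and regularization strength `alpha` are all hypothetical choices, and scikit-learn's `GraphicalLasso` is used as a standard off-the-shelf implementation of ℓ_1-regularized maximum likelihood estimation.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
N, K, n = 10, 5, 40  # nodes, tasks, samples per task (hypothetical sizes)

def make_precision(rng, N):
    """Task-specific precision matrix on a shared sparse support
    (a chain graph), with per-task random edge weights."""
    P = np.eye(N)
    for i in range(N - 1):
        w = rng.uniform(0.1, 0.3) * rng.choice([-1.0, 1.0])
        P[i, i + 1] = P[i + 1, i] = w
    # Make the matrix diagonally dominant so it is positive definite.
    P += np.eye(N) * np.abs(P).sum(axis=1).max()
    return P

# Pool the samples from all K tasks into one data matrix.
X = np.vstack([
    rng.multivariate_normal(
        np.zeros(N), np.linalg.inv(make_precision(rng, N)), size=n
    )
    for _ in range(K)
])

# Fit a single l1-regularized MLE (graphical lasso) on the pooled data.
model = GraphicalLasso(alpha=0.1).fit(X)

# Estimated support: nonzero pattern of the estimated precision matrix.
support = np.abs(model.precision_) > 1e-6
```

With enough pooled samples (Kn large relative to log N), the off-diagonal pattern of `support` recovers the support union, here the chain edges.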