Bias-Variance Tradeoffs in Joint Spectral Embeddings

05/05/2020, by Benjamin Draves et al.

Latent position models and their corresponding estimation procedures offer a statistically principled paradigm for multiple network inference by translating multiple network analysis problems into familiar tasks in multivariate statistics. Latent position estimation is a fundamental task in this framework, yet most work focuses only on unbiased estimation procedures. We consider the ramifications of utilizing biased latent position estimates in subsequent statistical analysis in exchange for sizable variance reductions in finite networks. We establish an explicit bias-variance tradeoff for latent position estimates produced by the omnibus embedding of arXiv:1705.09355 in the presence of heterogeneous network data. We reveal an analytic bias expression, derive a uniform concentration bound on the residual term, and prove a central limit theorem characterizing the distributional properties of these estimates. These explicit bias and variance expressions enable us to show that the omnibus embedding estimates are often preferable to comparable estimators with respect to mean squared error, to state sufficient conditions for exact recovery in community detection tasks, and to develop a test statistic for determining whether two graphs share the same set of latent positions. We demonstrate these results in several experimental settings, where community detection algorithms and hypothesis testing procedures utilizing the biased latent position estimates are competitive with, and oftentimes preferable to, those based on unbiased latent position estimates.
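As background for the abstract above, the omnibus embedding of arXiv:1705.09355 stacks m graphs on a shared vertex set into one large matrix of pairwise-averaged blocks and spectrally embeds it, so all graphs are estimated in a common latent space. The sketch below is an illustrative NumPy implementation of that construction under the standard random dot product graph setup; the function name, toy data, and embedding dimension are assumptions for illustration, not the authors' code.

```python
import numpy as np

def omnibus_embedding(adjacencies, d):
    """Sketch of the omnibus embedding: for m graphs on n shared vertices,
    form the (mn x mn) omnibus matrix whose (i, j) block is the pairwise
    average (A_i + A_j) / 2, then return the scaled top-d eigenvectors
    (adjacency spectral embedding) as joint latent position estimates."""
    m = len(adjacencies)
    n = adjacencies[0].shape[0]
    M = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(m):
            M[i * n:(i + 1) * n, j * n:(j + 1) * n] = (
                adjacencies[i] + adjacencies[j]) / 2
    # M is symmetric, so eigh applies; keep the d largest-magnitude pairs.
    vals, vecs = np.linalg.eigh(M)
    order = np.argsort(np.abs(vals))[::-1][:d]
    return vecs[:, order] * np.sqrt(np.abs(vals[order]))

# Toy example: two graphs drawn from the same rank-1 latent positions.
rng = np.random.default_rng(0)
n, d = 50, 1
X = np.full((n, d), 0.6)            # shared latent positions (assumed)
P = np.clip(X @ X.T, 0.0, 1.0)      # edge probability matrix
graphs = []
for _ in range(2):
    A = rng.binomial(1, P)
    A = np.triu(A, 1)
    graphs.append(A + A.T)          # symmetric, hollow adjacency
Z = omnibus_embedding(graphs, d)    # rows 1..n are graph 1, rows n+1..2n graph 2
```

Each vertex appears m times in the output (once per graph), which is what makes the subsequent comparisons in the paper possible: rows of Z for the same vertex across graphs can be compared directly, at the cost of the bias the paper quantifies.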
