Sharp thresholds in inference of planted subgraphs
A major question in the study of the Erdős–Rényi random graph is to understand the probability that it contains a given subgraph. This study originated in the classical work of Erdős and Rényi (1960). More recent work addresses this question both by building a general theory of sharp versus coarse transitions (Friedgut and Bourgain, 1999; Hatami, 2012) and through results on the location of the transition (Kahn and Kalai, 2007; Talagrand, 2010; Frankston, Kahn, Narayanan, and Park, 2019; Park and Pham, 2022).

In inference problems, one often studies the optimal accuracy of inference as a function of the amount of noise. In a variety of sparse recovery problems, an “all-or-nothing (AoN) phenomenon” has been observed: informally, as the amount of noise is gradually increased, at some critical threshold the inference problem undergoes a sharp jump from near-perfect recovery to near-zero accuracy (Gamarnik and Zadik, 2017; Reeves, Xu, and Zadik, 2021). We can regard AoN as the natural inference analogue of the sharp threshold phenomenon in random graphs. In contrast with the general theory developed for sharp thresholds of random graph properties, the AoN phenomenon has so far been studied only in specific inference settings.

In this paper we study the general problem of inferring a graph H = H_n planted in an Erdős–Rényi random graph, thus naturally connecting the two lines of research mentioned above. We show that questions of AoN are closely connected to first moment thresholds, and to a generalization of the so-called Kahn–Kalai expectation threshold that scans over subgraphs of H of edge density at least q. In a variety of settings we characterize AoN by showing that it occurs if and only if this “generalized expectation threshold” is roughly constant in q. Our proofs combine techniques from random graph theory and Bayesian inference.
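To make the planted-subgraph observation model concrete, the following is a minimal sketch (not from the paper): a copy of H is placed on a uniformly random vertex subset, and the observed graph is its union with independent Erdős–Rényi noise G(n, q). The function name `plant_subgraph`, its parameters, and the triangle example are all illustrative assumptions, not notation from the abstract.

```python
import random
from itertools import combinations

def plant_subgraph(n, q, h_edges, seed=None):
    """Illustrative observation model: observe G(n, q) union a planted copy of H.

    `h_edges` lists the edges of H on vertex labels 0..v(H)-1; the copy of H
    is placed on a uniformly random injection of these labels into {0,...,n-1}.
    Returns (observed edge set, planted edge set)."""
    rng = random.Random(seed)
    v_h = 1 + max(max(e) for e in h_edges)          # number of vertices of H
    image = rng.sample(range(n), v_h)               # random injective placement
    planted = {tuple(sorted((image[u], image[v]))) for u, v in h_edges}
    # Erdős–Rényi noise: each vertex pair appears independently with probability q.
    noise = {e for e in combinations(range(n), 2) if rng.random() < q}
    return noise | planted, planted

# Example: plant a triangle (H = K_3) into G(50, 0.05).
triangle = [(0, 1), (1, 2), (0, 2)]
observed, truth = plant_subgraph(50, 0.05, triangle, seed=1)
```

The inference task studied in the abstract is then to recover `truth` (the planted copy of H) from `observed` alone; the noise level q controls how hard this is.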